Bhikangaon is a town and a nagar panchayat in the West Nimar district of the Indian state of Madhya Pradesh.
Geography
Bhikangaon is located at . It has an average elevation of 278 metres (912 feet).
Demographics
According to the 2001 census, Bhikangaon had a population of 14,297. Males constitute 52% of the population and females 48%. Bhikangaon has a literacy rate of 69%, higher than the national average of 59.5%; male literacy is 75% and female literacy is 62%. 15% of the population is under 6 years of age.
Localities of Madhya Pradesh
2 villages in Poland:
Ciechomin – a village in Lublin Voivodeship, Łuków County, Gmina Wola Mysłowska
Ciechomin – a village in Łódź Voivodeship, Piotrków County, Gmina Aleksandrów
Q: How to focus the input inside a div from which a class has been removed, using jQuery The code given below is a jQuery function that builds and shows a Bootstrap modal on an onclick event. The modal has an "Add another category" option which, when clicked, reveals an input field that was previously hidden. The problem is that I want to focus the input that appears after the click.
// builds and shows the modal on click
window.getModalForm_3=function(context){
$('#ajaxModal').remove();
var defaults = {
title: 'Edit',
action: '',
helpText: '',
icon: '',
placeholder: '',
id: '',
name: '', name2: '', name3: '', name4: '', name5: '', name6: '', name7: '',
value: '', value2: '', value3: '', value4: '', value5: '', value6: '', value7: ''
};
context = $.extend({}, defaults, context); // merge into a copy so defaults stay clean; don't redeclare the parameter
var modal = '<div class="modal fade" id="ajaxModal" tabindex="-1" role="dialog" aria-labelledby="ajaxModalLabel">';
modal += '<div class="modal-dialog" role="document">';
modal += '<div class="modal-content">';
modal += '<div class="modal-header">';
modal += '<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>';
modal += '<h4 class="modal-title color-dark">'+ context.title +'</h4>';
modal += '</div>';
modal += '<form class="form-horizontal clearfix" action="'+ context.action +'" method="post">';
modal += '<input type="hidden" name="_method" value="PUT">';
modal += '<div class="modal-body">';
modal += '<div class="col-full p-l-20 p-t-5 p-r-20 p-b-20">';
modal += '<div class="col-full p-b-10">'+ context.helpText +'</div>';
modal += '<div class="p-t-20 strong">Primary category</div>';
modal += '<div class="col-full">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-0 m-b-20 box-60-plus pull-left">';
modal += '<input id="category_id" name="category_id" value="'+ context.id +'" type="hidden"/>';
modal += '<input id="category" type="text" class="form-control" name="category" value="'+ context.value +'" required autocomplete="off" spellcheck="false" maxlength="255">';
modal += '<label for="category" class="control-label"><i class="fa fa-'+ context.icon +' m-r-5"></i>'+ context.placeholder +'</label><i class="bar"></i>';
modal += '<div id="error_category" class=""></div>';
modal += '</div>';
modal += '<div id="categoryList"></div>';
modal += '</div>';
modal += '<div class="p-t-20 strong">Additional categories</div>';
modal += '<div class="col-full '+ (context.name2 == "" ? "hidden" : "") +'">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-0 m-b-20 box-60-plus pull-left">';
modal += '<input id="'+ context.name2 +'" type="text" class="form-control" name="'+ context.name2 +'" value="'+ context.value2 +'" required autocomplete="off" spellcheck="false" maxlength="255">';
modal += '<label for="'+ context.name2 +'" class="control-label"><i class="fa fa-'+ context.icon +' m-r-5"></i>'+ context.placeholder +'</label><i class="bar"></i>';
modal += '<div id="error_'+ context.name2 +'" class=""></div>';
modal += '</div>';
modal += '<div class="pull-left m-t-5 m-l-15 l-h-1 hideCategoryInputInModal" style="font-size: 30px;"><a href="#" style="text-decoration: none;">×</a></div>';
modal += '</div>';
modal += '<div class="col-full '+ (context.name3 == "" ? "hidden" : "") +'">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-0 m-b-20 box-60-plus pull-left">';
modal += '<input id="'+ context.name3 +'" type="text" class="form-control" name="'+ context.name3 +'" value="'+ context.value3 +'" required autocomplete="off" spellcheck="false" maxlength="255">';
modal += '<label for="'+ context.name3 +'" class="control-label"><i class="fa fa-'+ context.icon +' m-r-5"></i>'+ context.placeholder +'</label><i class="bar"></i>';
modal += '<div id="error_'+ context.name3 +'" class=""></div>';
modal += '</div>';
modal += '<div class="pull-left m-t-5 m-l-15 l-h-1 hideCategoryInputInModal" style="font-size: 30px;"><a href="#" style="text-decoration: none;">×</a></div>';
modal += '</div>';
modal += '<div class="col-full '+ (context.name4 == "" ? "hidden" : "") +'">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-0 m-b-20 box-60-plus pull-left">';
modal += '<input id="'+ context.name4 +'" type="text" class="form-control" name="'+ context.name4 +'" value="'+ context.value4 +'" required autocomplete="off" spellcheck="false" maxlength="255">';
modal += '<label for="'+ context.name4 +'" class="control-label"><i class="fa fa-'+ context.icon +' m-r-5"></i>'+ context.placeholder +'</label><i class="bar"></i>';
modal += '<div id="error_'+ context.name4 +'" class=""></div>';
modal += '</div>';
modal += '<div class="pull-left m-t-5 m-l-15 l-h-1 hideCategoryInputInModal" style="font-size: 30px;"><a href="#" style="text-decoration: none;">×</a></div>';
modal += '</div>';
modal += '<div class="col-full '+ (context.name5 == "" ? "hidden" : "") +'">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-0 m-b-20 box-60-plus pull-left">';
modal += '<input id="'+ context.name5 +'" type="text" class="form-control" name="'+ context.name5 +'" value="'+ context.value5 +'" required autocomplete="off" spellcheck="false" maxlength="255">';
modal += '<label for="'+ context.name5 +'" class="control-label"><i class="fa fa-'+ context.icon +' m-r-5"></i>'+ context.placeholder +'</label><i class="bar"></i>';
modal += '<div id="error_'+ context.name5 +'" class=""></div>';
modal += '</div>';
modal += '<div class="pull-left m-t-5 m-l-15 l-h-1 hideCategoryInputInModal" style="font-size: 30px;"><a href="#" style="text-decoration: none;">×</a></div>';
modal += '</div>';
modal += '<div id="showNextCategoryInModal" class="col-full '+ (context.name5 != "" ? "hidden" : "") +'">';
modal += '<div class="form-group form-group-mat m-l-0 m-r-0 m-t-20 m-b-10 strong"><a href="#" style="text-decoration: none;">Add another category</a></div>';
modal += '</div>';
modal += '</div>';
modal += '</div>';
modal += '<div class="modal-footer">';
modal += '<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>';
modal += '<button type="button" class="btn btn-primary">Save changes</button>';
modal += '</div>';
modal += '</form>';
modal += '</div>';
modal += '</div>';
modal += '</div>';
$("body").append(modal);
$('#ajaxModal')
.on("shown.bs.modal", function() { $(this).find(".form-control:first").focusCursorAtEnd(); })
.modal({ backdrop: 'static', keyboard: false });
$("#showNextCategoryInModal").on('click', function(e) {
e.preventDefault();
($( ".col-full.hidden" ).first()).find("input.form-control").focus(); // This is the line where I am facing problem
$( ".col-full.hidden" ).first().removeClass("hidden");
if (!($(".col-full.hidden")[0])) {
$("#showNextCategoryInModal").addClass( "hidden" );
}
});
$(".hideCategoryInputInModal").on("click", function (e) {
e.preventDefault();
$(this).closest('div.col-full').addClass( "hidden" );
$(this).closest('div.col-full').find("input").attr("value", "");
if (($(".col-full.hidden")[0])) {
$("#showNextCategoryInModal").removeClass( "hidden" );
}
});
};
I have tried find(), next(), prev(), and much more but failed to achieve the desired result.
Image of the modal:
A: You have multiple inputs that match the selector input.form-control; you should find a unique one and focus it. If we assume that you want to show the latest one:
($( ".col-full:not(.hidden)" ).first()).find("input.form-control").last().focus();
A: It would help to see the rendered HTML (view the page source in developer tools).
Try to focus the input after it is shown.
$("#showNextCategoryInModal").on('click', function(e) {
e.preventDefault();
$( ".col-full.hidden" ).first().removeClass("hidden");
if (!($(".col-full.hidden")[0])) {
$("#showNextCategoryInModal").addClass( "hidden" );
}
$( ".col-full" ).last().find("input.form-control").focus();
});
A: I think you should first remove the hidden class
$( ".col-full.hidden" ).first().removeClass("hidden");
and then set focus by accessing the last element, which will be the one that has just appeared
$( ".col-full" ).last().find("input.form-control").focus()
A: After many efforts, this code worked for me.
Changed this
($( ".col-full.hidden" ).first()).find("input.form-control").focus();
to this
$( ".col-full.hidden > div > input" ).first().addClass("focused");
$( ".col-full.hidden" ).first().removeClass("hidden");
$("input.form-control.focused").last().focus();
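The root cause behind all of these answers is the same: an element hidden with `display: none` (which is what Bootstrap's `.hidden` class applies) cannot receive focus, so the class must be removed before `.focus()` is called. A DOM-free sketch of why the order matters, using plain objects as stand-ins for the category rows (the names are illustrative, not from the original page):

```javascript
// Each "row" stands in for a .col-full block; hidden mirrors the CSS class.
function makeRows() {
  return [
    { id: 'name2', hidden: true, focused: false },
    { id: 'name3', hidden: true, focused: false },
  ];
}

// Simulated focus: like the browser, it is a no-op on a hidden element.
function focus(row) {
  if (!row.hidden) row.focused = true;
}

// Buggy order from the question: focus first, then un-hide.
function showNextBuggy(rows) {
  const row = rows.find(r => r.hidden);
  focus(row);          // no-op: the row is still hidden
  row.hidden = false;
}

// Fixed order from the answers: un-hide first, then focus.
function showNextFixed(rows) {
  const row = rows.find(r => r.hidden);
  row.hidden = false;
  focus(row);          // works: the row is visible now
}

const a = makeRows(); showNextBuggy(a);
const b = makeRows(); showNextFixed(b);
console.log(a[0].focused, b[0].focused); // false true
```

The same reasoning explains why the accepted workaround succeeds: it marks the target while it is still hidden, removes the class, and only then focuses it.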
## Harmonic Sum Approximation

Source: http://mathschallenge.net/full/harmonic_sum_approximation

#### Problem

Hn = 1 + 1/2 + 1/3 + ... + 1/n is defined as the nth Harmonic number.

1. Prove that Hn ≈ ln(n) + 1/(2n) + k, where 0.5 ≤ k ≤ 1.
2. By using k ≈ 0.5772, estimate the sum of the first one hundred Harmonic numbers.

#### Solution

Consider the graph y = 1/x for 0 < x ≤ 5.

The exact area below the curve from 1 to n is given by

∫ from 1 to n of (1/x) dx = ln(n)

By using unit-width rectangles below the curve, it can be seen that the approximate area is given by 1/2 + 1/3 + ... + 1/n = Hn − 1. And as this under-estimates the area:

Hn − 1 ≤ ln(n)

Hn ≤ ln(n) + 1

By using the Trapezium rule:

ln(n) ≈ (1/2)[1 + 2(1/2 + 1/3 + ... + 1/(n−1)) + 1/n] = 1 + 1/2 + 1/3 + ... + 1/n − 1/(2n) − 1/2 = Hn − 1/(2n) − 1/2

But as the Trapezium rule is over-estimating the area in this case:

Hn − 1/(2n) − 1/2 ≥ ln(n)

Hn ≥ ln(n) + 1/(2n) + 1/2

Hence we establish upper and lower limits:

ln(n) + 1/(2n) + 1/2 ≤ Hn ≤ ln(n) + 1

Hn ≈ ln(n) + 1/(2n) + k, where 1/2 ≤ k ≤ 1 − 1/(2n)

Although this by no means proves that k ≈ 0.5772, using a spreadsheet, and working to 10 d.p., let us consider the error between Hn and the approximation ln(n) + 1/(2n):

n | Hn | ln(n) + 1/(2n) | Error
1 | 1 | 0.5 | 0.5
2 | 1.5 | 0.943147181 | 0.556852819
3 | 1.833333333 | 1.265278955 | 0.568054378
4 | 2.083333333 | 1.511294361 | 0.572038972
5 | 2.283333333 | 1.709437912 | 0.573895421
6 | 2.45 | 1.875092803 | 0.574907197
7 | 2.592857143 | 2.01733872 | 0.575518422
8 | 2.717857143 | 2.141941542 | 0.575915601
9 | 2.828968254 | 2.252780133 | 0.576188121
10 | 2.928968254 | 2.352585093 | 0.576383161
20 | 3.597739657 | 3.020732274 | 0.577007384
50 | 4.499205338 | 3.922023005 | 0.577182333
100 | 5.187377518 | 4.610170186 | 0.577207332
1000 | 7.485470861 | 6.908255279 | 0.577215582

In fact, k = 0.5772156649 (10 d.p.) is called the Euler–Mascheroni constant and appears in many other contexts; for example, the average number of divisors of all the numbers from 1 to n is approximately ln(n) + 2k − 1. Although k is suspected to be transcendental, no one so far has even established if it is irrational. Care to prove it?

At this stage it may be tempting to use this approximation to sum the first one hundred Harmonic numbers:

H = H1 + H2 + ... + H100
≈ [ln(1) + 1/2 + 0.5772] + [ln(2) + 1/4 + 0.5772] + ... + [ln(100) + 1/200 + 0.5772]
= ln(1) + ln(2) + ... + ln(100) + (1/2)(1 + 1/2 + ... + 1/100) + 100 × 0.5772
= ln(100!) + (1/2)H100 + 57.72
≈ ln(100!) + (1/2)(ln(100) + 1/200 + 0.5772) + 57.72
= ln(100!) + ln(10) + 58.0411

However, not many calculators can evaluate 100!, and we would expect the error accumulated in one hundred and one approximations to be significant. It turns out that this compound approximation gives an answer of 424.083 (3 d.p.), which, as we shall see, is only correct to the nearest whole number.

Instead we shall consider the general sum of the first n Harmonic numbers:

H = H1 + H2 + ... + Hn
= 1 + (1 + 1/2) + (1 + 1/2 + 1/3) + ... + (1 + 1/2 + ... + 1/n)
= n + (n−1)/2 + (n−2)/3 + ... + 2/(n−1) + 1/n

Writing n = 1 + 2/2 + 3/3 + ... + (n−1)/(n−1) + n/n,

H + n = (n+1) + (n+1)/2 + (n+1)/3 + ... + (n+1)/n = (n+1)(1 + 1/2 + 1/3 + ... + 1/n) = (n+1)Hn

Therefore H = (n+1)Hn − n.

This is a much better approach as we need only make one approximation to find the sum: 101(ln(100) + 1/200 + 0.5772) − 100 ≈ 423.924; incredibly the actual sum, without using the approximation, is 423.925 (3 d.p.).

Problem ID: 209 (17 Feb 2005)    Difficulty: 4 Star
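The approximation and the closed form for the sum are easy to check numerically; a small sketch in plain JavaScript (not part of the original page):

```javascript
// Harmonic numbers: H(n) = 1 + 1/2 + ... + 1/n.
function harmonic(n) {
  let h = 0;
  for (let i = 1; i <= n; i++) h += 1 / i;
  return h;
}

const GAMMA = 0.5772156649; // Euler–Mascheroni constant, 10 d.p.

// The approximation H(n) ≈ ln(n) + 1/(2n) + k with k ≈ GAMMA.
function approxH(n) {
  return Math.log(n) + 1 / (2 * n) + GAMMA;
}

// Sum of the first n Harmonic numbers via the identity H1 + ... + Hn = (n+1)H(n) − n.
function sumOfHarmonics(n) {
  return (n + 1) * harmonic(n) - n;
}

console.log(harmonic(100).toFixed(9));       // 5.187377518, matching the table
console.log(sumOfHarmonics(100).toFixed(3)); // 423.925, the actual 3 d.p. sum
```

Comparing `approxH(100)` with `harmonic(100)` shows the approximation is already within about 10⁻⁵ at n = 100, consistent with the error column of the table.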
Thomas Obermeier (born 2 November 1964 in Kipfenberg) is a Bavarian politician (CSU) and a former member of the Bavarian Landtag.
Education and career
Thomas Obermeier attended the Volksschule in Kipfenberg from 1971 to 1976 and the Willibald-Gymnasium in Eichstätt from 1976 to 1985. From 1986 to 1994 he studied law in Regensburg, and since 1995 he has worked as a lawyer in Eichstätt. He is Roman Catholic and has four children.
Politics
Thomas Obermeier joined the CSU in 1997. He sat on the district council of Eichstätt until 2013.
From 28 September 1998 to 19 October 2008 he was a member of the Landtag. There he served on the Committee for Constitutional, Legal and Parliamentary Affairs and on the Committee for Municipal Affairs and Internal Security. He was also a member of the Bavarian Landtag's study commission (Enquete-Kommission) "Jung sein in Bayern". After the Bavarian state election of September 2008 and the CSU's heavy loss of votes in Bavaria, including in the district of Eichstätt, Obermeier remained chairman of the CSU district association of Eichstätt, but he failed to win re-election to the Landtag. In February 2009 he resigned as CSU district chairman after Dr. Reinhard Brandl had been chosen by the board as the candidate for the Bundestag in constituency 217.
Other offices
Obermeier was chairman of the Musikschule Eichstätt e.V. He also served as presiding judge of the disciplinary court of the foundation Katholische Universität Eichstätt – Ingolstadt and was a member of the arbitration court of the DJK diocesan association of Eichstätt.
External links
Personal homepage (retrieved 7 October 2008)
Biography and entry on the website of the Bavarian Landtag (retrieved 7 October 2008)
Member of the Landtag (Bavaria)
CSU member
Politician (21st century)
German
Born 1964
Man
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-starter-stream-processor-tcp-client</artifactId>
<packaging>jar</packaging>
<name>spring-cloud-starter-stream-processor-tcp-client</name>
<description>Spring Cloud Stream tcp client processor</description>
<parent>
<groupId>org.springframework.cloud.stream.app</groupId>
<artifactId>spring-cloud-stream-tcp-parent</artifactId>
<version>1.1.0.BUILD-SNAPSHOT</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.cloud.stream.app</groupId>
<artifactId>app-starters-tcp-common</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud.stream.app</groupId>
<artifactId>app-starters-test-support</artifactId>
</dependency>
</dependencies>
</project>
What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.
James Hays, Alexei A. Efros (2007). Scene Completion Using Millions of Photographs. ACM Transactions on Graphics (SIGGRAPH), 26(3).
Unforgiven (Russian: «Непрощённый») is a Russian drama film directed by Sarik Andreasyan, based on the real events of the mid-air collision over Lake Constance. The lead role is played by Dmitry Nagiev.
The film premiered in Russia on 27 September 2018. Its television premiere took place on 19 February 2021 on the NTV channel.
Plot
In the prologue, footage of the wrecked aircraft is shown. The defendant is then shown in the courtroom, where he says in his statement that he died on the day of the crash and that there is no greater punishment for a man than the loss of his family.
The film opens in the summer of 2001, when construction engineer Vitaly Kaloyev refuses to sign off on an unfinished building, demonstrating that it was built in violation of safety rules and declaring that it is better to pay a fine for missing the deadline than to risk people's lives.
Vitaly returns from work to his house, where his wife Svetlana and their two children are waiting: 4-year-old Diana and 10-year-old Konstantin. Diana is playing with the cat, while Konstantin is analysing a chess game between Garry Kasparov and Anatoly Karpov. After putting the children to bed, Vitaly tells his wife that he has been offered a one-year contract in Barcelona. Svetlana advises him to accept. Leaving for the airport, Vitaly gives his son a new chess set and his daughter a pearl necklace.
A year later, having completed the contract, Vitaly invites his family to Barcelona so that they can all return home together a few days later. On 1 July 2002 Svetlana and the children arrive in Moscow, but there are no tickets at the Domodedovo airport ticket office, and they are offered a charter flight. Thus they end up on board a Tu-154M flying as flight BTC 2937 Moscow–Barcelona, together with 50 other children.
Meanwhile, at the Skyguide control centre in Zurich, air traffic controller Peter Nielsen asks a colleague to bring him a coffee. The colleague agrees, although under the rules a single controller may not man two workstations. Left alone, Peter Nielsen fails to react in time when two aircraft end up at the same flight level: the Tu-154M of Bashkirian Airlines (flight BTC 2937 Moscow–Barcelona) and the Boeing 757-200PF cargo aircraft of DHL (flight DHX 611 Muharraq–Bergamo–Brussels). The TCAS systems of both aircraft warn the pilots of a collision risk, but Peter Nielsen gives the pilots instructions that contradict the TCAS commands. The pilots of both aircraft follow the controller's instructions, and the planes collide.
Vitaly, waiting for his family at Barcelona airport, sees that the information about their flight has disappeared from the arrivals board. At the same time, television reports speak of a strange explosion over Lake Constance. After a while Vitaly, like many others waiting at the airport, learns of the tragedy. He faints, comes to in the medical room, goes straight back to the airport and flies to Überlingen. The firefighters at first refuse to let Vitaly onto the crash site, but on learning that his family was on that flight, they give him a uniform and marker flags. Vitaly first finds beads from the pearl necklace, then the body of his daughter, and breaks down in grief over her.
The film then covers 2002 and 2003. Vitaly Kaloyev visits his family's grave every day. Having lost interest in life and having almost stopped speaking, he listens only to news reports about the crash and collects newspaper clippings. One day, hearing the name of the air traffic controller Peter Nielsen, Vitaly writes it on his hand. The only thing he occasionally asks his brother and sister is where the cat is, but they do not answer.
Vitaly travels to Bashkortostan for the unveiling of a memorial. There he meets Vladimir Savchuk, who also lost all his relatives in the crash. A little later they meet again at the crash site near Überlingen, where on the first anniversary of the disaster a memorial in the form of a broken pearl necklace has been unveiled. Vitaly hopes to speak with representatives of the air navigation company Skyguide. He manages to meet its president, Allan Rosier, who offers Vitaly monetary compensation, but Vitaly refuses the money and demands simply to be asked for forgiveness.
At one of its meetings, also attended by Peter Nielsen, the management of Skyguide decides that the question of compensation for the victims' relatives must be addressed, but that no apology can be made before the official investigation into the causes of the crash is complete, since that would amount to an admission of guilt. Vitaly tries several more times to reach Allan Rosier, but Rosier makes clear that he is only prepared to discuss the size of the compensation.
Vitaly gets from his brother the telephone number of a private detective who once found his stolen car. The detective agrees to find Peter Nielsen's address and does so three months later. Vitaly visits his family's grave once more and instructs the cemetery worker to keep fresh flowers there, without saying when he will return.
On 24 February 2004 Vitaly flies to Switzerland and buys a souvenir knife at his hotel. After one more failed attempt to reach Allan Rosier by telephone, he goes to the address the detective gave him. The door is opened by Peter Nielsen, with his child. Sending the child inside, Nielsen asks Vitaly why he has come. Vitaly shows a photograph of his wife and children and demands that Peter Nielsen ask for forgiveness; instead, the controller drives Vitaly away and knocks the photograph out of his hand. Vitaly stabs Peter Nielsen twelve times, killing him on the spot.
The police arrest Vitaly in his hotel room. At the trial Vitaly delivers the speech with which the film opens. He asks the court how anyone else would have acted in his place, but neither demands nor receives an answer. Vitaly is sentenced to 8 years in prison, although the prosecutor had asked for 12. In the end Vitaly spends slightly more than three years in solitary confinement; he is released early after Skyguide and the controller Peter Nielsen are officially found responsible for the crash. Allan Rosier publicly admits guilt, in Russian, and offers an official apology.
At the airport Vitaly is met by journalists. He answers their questions briefly: he had no premonitions either before or, still less, after the tragedy. He also says that "I have quarrelled with God" and that he is now going to his family's grave. Visiting the grave for the first time after his imprisonment, Vitaly begs his wife and children for forgiveness and weeps.
In the epilogue Vitaly finds a kitten on the doorstep of his house and takes it in. A closing note states that several Skyguide managers were sentenced to fines and suspended prison terms, while Vitaly Kaloyev worked in construction all his life and recently retired.
During the closing credits, fragments of video recordings of the aftermath of the Lake Constance collision of 1 July 2002 and an interview with the real Vitaly Kaloyev are shown.
Cast
Dmitry Nagiev — Vitaly Kaloyev
Marjan Avetisyan — Svetlana Kaloyeva, Vitaly's wife
Roza Khairullina — Zoya, Vitaly's sister
Samvel Muzhikyan — Yuri, Vitaly's brother
Mikhail Gorevoy — Vladimir Savchuk
Andrius Paulavičius — Peter Nielsen
Irina Bezrukova — Domodedovo airport employee
Karina Kagramanyan — Diana Kaloyeva, Vitaly's daughter
Artyom Shklyaev — Konstantin Kaloyev, Vitaly's son
Vadim Tsallati — friend of the Kaloyev family
Sebastian Sisak — Stanislas, the investigator
Mikael Dzhanibekyan — private detective
Lisaveta Sakhnova — journalist
Alexander Lyrchikov — Kaloyev's neighbour on the plane
Pavel Savinov — Spaniard at Barcelona airport
Olivier Siou — Allan Rosier
Crew
Screenwriters — Alexey Gravitsky, Sergey Volkov, Matthew Jacobs
Director — Sarik Andreasyan
Director of photography — Morad Abdel Fattah
Composer — Mark Dorbsky
Reception
The film received mixed, mostly negative or middling reviews from critics. It was panned by, among others, Rossiyskaya Gazeta, 25th Frame, Time Out and Meduza (Anton Dolin). Neutral reviews were published by Afisha, ivi (Sergei Kudryavtsev), Film.ru and Intermedia. Among the few outlets whose reviewers liked Unforgiven were Kinoafisha, Vokrug TV and Moskovsky Komsomolets.
Awards
2018 — First open festival of popular film genres "Khrustalny Istochnik" ("Crystal Spring", Yessentuki):
Prize for Best Actor (Dmitry Nagiev)
Press prize
Audience award
See also
Aftermath — an American film about the same events (the lead role, whose prototype was Kaloyev, was played by Arnold Schwarzenegger).
References
Comments
Russian films of 2018
Russian drama films
Russian-language films
Films based on real events
Films about vigilantes
Films about engineers
Portugal is experiencing a demographic crisis, softened only by immigrants
Osbert Jordan December 19, 2022 5 min read
Portugal has lost 196,000 people in the past ten years and is moving rapidly towards a serious demographic crisis, with projections pointing to between 7 and 8 million Portuguese within 30 years. A survey by the Francisco Manuel dos Santos Foundation (FFMS) shows that it was immigrants who mitigated the decline, with more arrivals and more children. The good news is that emigration has slowed and some emigrants are returning.
This is the picture of migration in Portugal drawn by Pordata, the foundation's database, to mark World Migrants Day on Sunday.
The natural balance of the Portuguese population – the difference between births and deaths – has been negative since 2009 and reached an all-time low in 2021 (see table). The migration balance was also negative between 2011 and 2016, after which entries once again exceeded exits. But only in 2019 and 2020 was it able to offset the negative natural balance.
Last year the population balance was negative again, despite the arrival of 51,000 immigrants, twice the number of emigrants: 124,800 people died, 45,200 more than were born.
The arrival of foreign residents has also helped keep the birth rate from falling, since they reach the country at childbearing age. In 2021, 70% were between the ages of 20 and 59, and 25% were in their twenties. Of the 79,582 children born that year, 13.6% had a foreign mother, even though these communities account for only 7% of the resident population.
The Foreigners and Borders Service (SEF) registers 698,997 foreigners in the country in 2021.
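The arithmetic behind these balances is simple: the population balance is the natural balance (births minus deaths) plus the migratory balance (entries minus exits). A small sketch using the article's 2021 figures — the helper names are mine, and the emigrant figure is an assumption derived from the article's "twice the number" remark:

```javascript
// Population balance = natural balance (births − deaths)
//                    + migratory balance (immigrants − emigrants).
function populationBalance({ births, deaths, immigrants, emigrants }) {
  const natural = births - deaths;
  const migratory = immigrants - emigrants;
  return { natural, migratory, total: natural + migratory };
}

const pt2021 = populationBalance({
  births: 79_582,      // live births in 2021, per the article
  deaths: 124_800,     // approximate deaths in 2021
  immigrants: 51_000,  // permanent entries
  emigrants: 25_500,   // assumed: roughly half the entries
});
console.log(pt2021); // natural ≈ −45,218; total still negative
```

This makes concrete why a positive migratory balance of about +25,000 could not offset a natural deficit of about −45,000 in 2021.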
Among emigrants the concentration of working-age people is even more pronounced, and it cuts the other way: of the Portuguese who left last year, 93% were between the ages of 20 and 59, and 42% were between 20 and 29. In other words, potential parents are also the ones leaving the country.
Citizenship applications doubled
Another consequence of emigration is the loss of qualified people, who are more highly educated than the general population (see table). In 2021, 20% of the resident population aged 15 and over had a higher education, a figure that rises to 34% among those who emigrated.
The good news is that there are returns. Most entries (52%) were people born in Portugal. These citizens appear under the designation "immigrants", a concept the National Institute of Statistics (INE) uses for movements into the country regardless of nationality. To these must be added the 25% of entries who are foreign-born Portuguese nationals returning to the country.
347,000 foreigners have become Portuguese in the last ten years, a number that doubled between 2011 (25,016) and 2021 (54,537). These are cases in which the citizen has no direct tie of origin to Portugal, a nationality known as "derived". The largest share goes to foreigners who have held a residence permit in the country for at least 5 years (a threshold lowered in 2018), which covers those who married a Portuguese citizen, children of immigrants, and so on, the Ministry of Justice told DN.
Another way to obtain a Portuguese identity card is by "attribution", known as "original nationality". It covers, for example, children of Portuguese citizens born abroad. According to the Ministry of Justice, 138,874 citizenship applications were granted last year.
In 2020, Portugal ranked fourth among the 27 EU countries in granting citizenship, at almost twice the EU-27 average (163 grants per 100,000 inhabitants). At the top of the ranking are Sweden, Luxembourg and the Netherlands. For the first time in 10 years, more people living abroad were granted Portuguese citizenship than people living in Portugal: from 2,000 (2011) to 30,000 (2021).
Pordata's snapshot highlights: "Naturalisation is the main route to citizenship: for those who live in Portugal, the main ground is having lived there for at least five years (61%); for those who live abroad, it is descent from Sephardic Jews (77%)."
Brazilian (32%) and Cape Verdean (12%) are the most common nationalities among residents who acquired Portuguese citizenship in 2021. Among non-residents, Israeli citizens (65%) stand out.
A million foreigners
According to INE, last year 542,165 citizens of foreign nationality lived in Portugal. It represents an increase of 148 thousand compared to 2011, which practically corresponds to the population of Lisbon. But this number doubles if we consider foreign residents who have obtained Portuguese citizenship. They are more than a million, 10% of the country's population.
The foreign population residing in Portugal mainly comes from Brazil (37%), followed by Angola (6%), Cape Verde (5%), the United Kingdom (5%), and Ukraine (4%). The communities that have increased their relative weight are the Brazilian (9 percentage points more), the Nepalese, Indian and Italian (2 percentage points more), and the Bengali (1 percentage point more). Moving in the opposite direction are the Cape Verdeans, Ukrainians, Romanians, Moldovans and Guineans.
The 55,833 temporary protection permits granted by SEF in 2022 to refugees from Ukraine as a result of the war are not included in these figures.
What the FFMS survey also tells us is that the foreign population is more concentrated on the country's coast (92%), in the Lisbon metropolitan area (47%) and in the Algarve (13%) than Portuguese nationals are. In the Algarve region, their relative weight is three times that of natives (4.1%).
In terms of share of the local population by municipality (see table), the top of the list includes Odemira, Vila do Bispo, Aljezur, Lagos, and Albufeira. In these municipalities, one in five residents is a foreigner.
Pordata concludes from an analysis of the income and risk of poverty or social exclusion of the foreign population in Portugal that "it is clear that these vary by nationality".
In Portugal, the income of the Portuguese is higher than that of foreigners from non-EU countries, but lower when compared to foreign nationals from the 27 EU countries.
The risk of poverty or social exclusion among the adult population in Portugal is lower for Portuguese nationals (22%) than for foreigners (35%), though with differences: it is higher among nationals of countries outside the EU-27 (37%) than among citizens of the 27 countries that make up the European Union (27%).
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,527 |
1 Corinthians 2:1-2 1 And I, brethren, when I came to you, did not come with excellence of speech or of wisdom declaring to you the testimony of God. 2 For I determined not to know anything among you except Jesus Christ and Him crucified.
As the years of God's ministry through me have flown by, I have had the temptation time and again to be consumed with the need for "excellence of speech" and "wisdom". The drive to be the "go to guy" can consume a minister faster than a blaze in a dry cedar forest.
Paul reminded the Corinthian church that his ministry to them was not "excellence of speech" or of "wisdom" but rather his only goal was to know and live one thing: Jesus Christ and Him crucified.
That is enough to know. We can spend 50 years in ministry and never fully comprehend the depth of God's love expressed on the cross. We can earn multiple degrees from every known university and still not even begin to fathom the mind and heart of a God who allowed His only Son to die on a cruel cross. Here is the truth: when we KNOW Him in His completed fullness, speaking about Him will be the most natural outflow of our life.
Can you just imagine the power in the lives of God's people if we would desire to ONLY know Jesus and Him crucified, risen and ascended into the heavens? Gossip and slander would stop falling from our lips. Hidden habits and sins would die before the gaze of a loving Savior. The witness and testimony of the church of the Living God would be powerful and strong.
Jesus, may my ambition for every day of my life be to know you more and more and more and more…In Jesus Name…Amen!!! | {
"redpajama_set_name": "RedPajamaC4"
} | 1,402 |
Anna Fiodorovna Volkova (Russian: Анна Фёдоровна Волкова; 1800–1876) was a Russian chemist and the first woman member of the Russian Chemical Society.
Some synthetic chemical compounds that she produced were presented by Russia at the 1876 London International Industrial Exhibition.
The Venusian crater Volkova was named in her honor.
References
Born in 1800
Women scientists of the 19th century
Died in 1876
Chemists of the Russian Empire in the 19th century
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,122 |
{"url":"https:\/\/homework.cpm.org\/category\/CCI_CT\/textbook\/calc\/chapter\/3\/lesson\/3.3.1\/problem\/3-89","text":"### Home > CALC > Chapter 3 > Lesson 3.3.1 > Problem3-89\n\n3-89.\n\nName all point(s) on the graph below which meet the given criteria: .\n\n1. Slope of the tangent is most positive.\n\n2. Slope of the tangent is negative.\n\n3. Slope of the tangent is the most negative.","date":"2022-12-04 14:12:48","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8024485111236572, \"perplexity\": 2207.9387414150747}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710974.36\/warc\/CC-MAIN-20221204140455-20221204170455-00489.warc.gz\"}"} | null | null |
\section{Introduction}\label{intr-1}
The problem of the existence of nonstochastic objects was discussed in the
seventies at Kolmogorov's seminar at Moscow State University (see also
Cover et al.~\cite{CGG89}). Following Levin~\cite{Lev84}, Levin and V'yugin~\cite{LeV77},
and V'yugin~\cite{Vyu76, Vyu82, Vyu2012},
the most suitable objects for such a study are infinite binary sequences, and
the problem should be considered in its information aspect; that is, the problem is
whether we can generate ``nonstochastic information'' using a probabilistic
Turing machine and what types of nonstochastic information can be generated
using these probabilistic machines.
We will study properties (collections, or Borel sets) of infinite binary sequences,
as carriers of certain information; such properties should
be invariant with respect to the encoding methods. We will consider
encoding methods of the most general type -- algorithmic operators.
The algebra of collections (of infinite sequences) that are closed under Turing equivalence
was introduced by Levin and V'yugin~\cite{LeV77} and studied by V'yugin~\cite{Vyu82}.
Roughly speaking, given two such collections $A$ and $B$, $A\preceq B$ in this ordering
if $A\setminus B$ is negligible, i.e., Levin's a priori (semi)measure of this set
is equal to zero. This is equivalent to the fact that no probabilistic Turing machine
can produce a sequence from this set with positive probability. On the
a priori semimeasure see Zvonkin and Levin~\cite{ZvL70}, Levin and V'yugin~\cite{LeV77},
and Solomonoff~\cite{Sol64, Sol64a}.
We call two invariant collections $A$ and $B$
equivalent, $A\sim B$, if $A\preceq B$ and $B\preceq A$, i.e., these sets differ
by a negligible set. We consider the factor algebra with respect to this equivalence;
its elements are the classes $[A]=\{B:B\sim A\}$.
The degree structure associated with this ordering is a Boolean algebra which
V'yugin~\cite{Vyu82} called the algebra of invariant properties.
This algebra was recently called the algebra of Levin--V'yugin degrees (LV-degrees)
by Bienvenu and Patey~\cite{BiP2017} and by Holzl and Porter~\cite{HoP2021}.
The idea of studying this algebra was put forward by Leonid Levin.
In this paper we study structural properties of LV-degrees of collections
of sequences that are non-negligible in the sense that they can be generated by
a probabilistic algorithm with positive probability. The equivalence class
of all negligible collections defines the zero element, while
the collection of all infinite sequences defines the maximal (unit) element of this algebra.
Two natural elements of this algebra can be distinguished: the element generated
by non-computable Martin-L\"of random sequences and the element generated by computable
sequences.
It was proved by V'yugin~\cite{Vyu76, Vyu82} that the complement
of the union of these two elements is non-zero. Using probabilistic Turing machines, we can
generate infinite sequences which are non-random with respect to any computable measure
and, moreover, cannot be Turing equivalent to random sequences. We say that
such sequences carry non-random information.
In this work, we study the properties of the LV-algebra. Levin and V'yugin~\cite{LeV77}
pointed out that the non-computable Martin-L\"of random sequences define
an atom of the LV-algebra, in the sense that this element cannot be represented as a union
of two incomparable non-zero elements. The computable sequences form an atom trivially.
In this paper we construct infinitely many other atoms defined by collections of non-random
sequences. We also show that the complement of all atoms of the LV-algebra is
nontrivial; the complement of all atoms is an infinitely divisible element.
Thus, we obtain a representation of the unit element of the algebra of LV-degrees as
a union of an infinite sequence of atoms, two of which have a natural interpretation,
and an infinitely divisible element. The constructions are based on corresponding
templates; in particular, we present a template for defining atoms of the algebra
of LV-degrees.
Also, we correct and improve the construction of atoms from V'yugin~\cite{Vyu82},
which was insufficient to achieve the desired result. The author is grateful
to Rupert Holzl and Christopher Porter, who noticed this insufficiency.
An excellent analysis of the relationship between LV-degrees and Turing degrees is
given in the review by Holzl and Porter~\cite{HoP2021}, where the idea of the
construction from V'yugin~\cite{Vyu82} is also explained.
We will point out connections between the properties of the LV-algebra
and classical properties from computability theory. In particular,
we apply results on the interactions between notions of randomness and Turing
reducibility to establish new facts about specific LV-degrees, such as
the LV-degrees of collections of hyperimmune sequences, i.e.,
characteristic sequences of hyperimmune sets.
We construct atoms and an infinitely divisible element defined by collections
of hyperimmune sequences. Thus, a representation
of the LV-degree of the collection of all hyperimmune sequences is obtained as
a union of an infinite sequence of atoms and an infinitely divisible element.
\section{Preliminaries}\label{prelimin-1}
Let $\Xi$ be the set of all finite binary sequences,
$\Omega$ be the set of all infinite binary sequences, $\lambda$ be the empty sequence.
In what follows, by a sequence (finite or infinite) we mean a binary sequence, i.e.,
a sequence $\omega_1\omega_2\dots$, where $\omega_i\in\{0,1\}$ for $i=1,2,\dots$.
For any finite or infinite $\omega=\omega_1\dots\omega_n\dots$, we
denote its prefix (initial fragment) of length $n$ as $\omega^n=\omega_1\dots\omega_n$.
We write $x\subseteq y$ if a sequence $y$ is an extension
of a sequence $x$, $l(x)$ is the length of $x$.
Let $\cal R$ be the set of all real numbers extended by adding
the infinities $-\infty$ and $+\infty$; let $\cal N$ and $\cal Q$ be the sets
of all positive integers and of all rational numbers, respectively. Let $[r]$ denote
the integer part of a real number $r$.
We assume that the reader is familiar with the basics of computability
and algorithmic randomness theory
(for instance, the material covered in Rogers~\cite{Rog67}, Soare~\cite{Soa2016},
Nies~\cite{Nie2009}, Downey and Hirschfeldt~\cite{DH2010}, G{\'a}cs~\cite{Gac12},
Vereshchagin et al.~\cite{She2007}, and Li and Vitanyi~\cite{LiV97}).
We need a one-to-one enumeration of all ordered pairs of positive integers.
We fix some form of this enumeration. We use the natural correspondence between
finite binary sequences and nonnegative integers:
$\lambda$--$0$, $0$--$1$, $1$--$2$, $00$--$3$,
$01$--$4$, $10$--$5$, $11$--$6$, $000$--$7,\dots$.
We identify any ordered pair of positive integers $\langle i,j\rangle$
with its ordinal number.
A one-to-one enumeration of all ordered triples $\langle i,j,k\rangle$ can be defined
in a similar way: $\langle i,j,k\rangle=\langle \langle i,j\rangle,k\rangle$.
We denote the inverse functions $[\langle i,j\rangle]_1=i$ and $[\langle i,j\rangle]_2=j$.
Also, $[\langle i,j,k\rangle]_1=i$, $[\langle i,j,k\rangle]_2=j$, and
$[\langle i,j,k\rangle]_3=k$.
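The string-to-ordinal correspondence above can be sketched directly in code; the pairing function shown is the Cantor pairing, one concrete illustrative choice, not necessarily the enumeration fixed in the text.

```python
def string_to_index(x: str) -> int:
    """Map a binary string to its ordinal: '' -> 0, '0' -> 1, '1' -> 2,
    '00' -> 3, '01' -> 4, '10' -> 5, '11' -> 6, '000' -> 7, ...
    Trick: prepend '1', read as binary, subtract one."""
    return int('1' + x, 2) - 1

def index_to_string(n: int) -> str:
    """Inverse of string_to_index: drop the leading '1' of bin(n + 1)."""
    return bin(n + 1)[3:]

def pair(i: int, j: int) -> int:
    """One concrete one-to-one pairing of nonnegative integers
    (Cantor's), standing in for the unspecified enumeration in the text."""
    return (i + j) * (i + j + 1) // 2 + j
```

The round trip `index_to_string(string_to_index(x)) == x` holds for every binary string `x`, which is exactly the one-to-one property the text needs.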
We fix a model of computation. Algorithms may be regarded as
Turing machines, so the notions of a program and of computation time
are well-defined. Our considerations are invariant under polynomial
computation time, so the results are machine-independent.
An algorithm transforms finite objects into finite objects.
Integer and rational numbers (but no reals) are examples of finite objects.
Finite sequences of finite objects are again finite objects.
The main property of finite objects that we use is that
they can be enumerated with positive integers, and therefore they can be
arguments of computable (partial recursive) functions and algorithms.
A function $f$ is called partial recursive if there is an algorithm
(Turing machine) computing the values of $f$. For any input $x$,
the corresponding Turing machine, when fed with $x$, stops after finitely many steps
and outputs the result $f(x)$ if $f(x)$ is defined, and never stops otherwise.
We call a function $f$ total if $f(x)$ is defined for every $x$.
Define
\[
f^n(x)=
\left\{
\begin{array}{l}
f(x) \mbox{ if } f(x)\mbox{ is computed in }n\mbox{ steps}
\\
\infty\mbox{, otherwise},
\end{array}
\right.
\]
where $f$ is a partial recursive function and $n$ is a positive integer number.
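The step-bounded evaluation $f^n$ can be modelled in code. In this minimal sketch a computation is a Python generator that yields once per elementary step; the step-counting convention and the toy function `slow_double` are illustrative assumptions, not part of the text's definition.

```python
INF = float('inf')

def bounded_eval(gen_fn, x, n):
    """f^n(x): run the computation for at most n steps; return its value
    if it halts within the budget, and infinity otherwise."""
    gen = gen_fn(x)
    for _ in range(n):
        try:
            next(gen)                     # one elementary step
        except StopIteration as halt:
            return halt.value             # computation finished early
    return INF                            # budget exhausted

def slow_double(x):
    """Toy partial recursive function: takes x steps, then outputs 2*x."""
    for _ in range(x):
        yield
    return 2 * x
```

With too small a budget the sketch reports $\infty$, exactly as in the definition of $f^n$.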
We will use a recursive sequence $\phi_i$ of all partial recursive functions.
This means that there is a partial recursive universal
function $U(i,x)=\phi_i(x)$ such that for any partial recursive function $f$ there exists
an $i$ such that $\phi_i(x)=U(i,x)=f(x)$ for each $x$, where both sides of this
equality are defined or undefined simultaneously.
A set of finite objects is called recursively enumerable if it is the domain
of some computable function. A nonempty set $A$ is recursively enumerable
if and only if it is the range of some total recursive
(computable) function. A set $A$ of finite objects is
called (algorithmically) decidable (recursive) if both $A$ and the complement of $A$ are
recursively enumerable.
Let $A$ be a set of all finite objects of a certain type.
A function $f\colon A\rightarrow\cal R$ is called lower semicomputable if
$\{(r,x): x\in A,\ r\in{\cal Q},\ r<f(x)\}$ is a recursively
enumerable set. This means that there is an algorithm which, when fed with
a rational number $r$ and a finite object $x$, eventually stops if
$r<f(x)$ and never stops otherwise. In other words, the semicomputability
of $f$ means that if $f(x)>r$ this fact will sooner or later be learned,
whereas if $f(x)\leq r$ we may remain forever uncertain.
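This semi-decision behaviour admits a minimal sketch, assuming $f$ is presented by a computable nondecreasing sequence of rational lower bounds. The helper `approx` and the `budget` cutoff are artifacts of the sketch: a true semi-decision procedure searches without a budget and simply never halts when $f(x)\le r$.

```python
from fractions import Fraction

def semi_decide(approx, x, r, budget):
    """Semi-decision sketch for 'r < f(x)', where f is lower semicomputable
    via approx(x, k): rational lower bounds, nondecreasing in k, converging
    to f(x).  Returns True as soon as some bound witnesses r < f(x);
    returns False only because the sketch caps the search at `budget`."""
    for k in range(budget):
        if approx(x, k) > r:
            return True
    return False  # still uncertain -- the real procedure would keep searching

def approx(x, k):
    """Toy target: f(x) = 1 - 2**(-x), approximated from below."""
    return Fraction(1) - Fraction(1, 2 ** min(x, k))
```

For $f(3)=7/8$ the fact $1/2<f(3)$ is witnessed at $k=2$, while $7/8<f(3)$ is false and is never confirmed.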
We also use the concept of a computable operator (operation) on $\Xi\bigcup\Omega$
(Zvonkin and Levin~\cite{ZvL70}, Uspenskyi et al.~\cite{USS90}).
Let $\hat F$ be a recursively enumerable set of
ordered pairs of finite sequences (the graph of a computable operator) satisfying
the following properties:
\begin{itemize}
\item{}
$(x,\lambda)\in\hat F$ for any $x$, where $\lambda$ is the empty sequence;
\item{}
if $(x,y)\in\hat F$, $x\subseteq x'$ and $y'\subseteq y$ then $(x',y')\in\hat F$;
\item{}
if $(x,y)\in\hat F$ and $(x,y')\in\hat F$ then $y\subseteq y'$ or $y'\subseteq y$.
\end{itemize}
A computable operator $F$ is defined using the graph $\hat F$ as follows:
$$
F(\omega)=\sup\{y: \exists x(x\subseteq\omega\,\&\,(x,y)\in\hat F)\},
$$
where $\omega\in\Omega\bigcup\Xi$ and $\sup$ is taken under the partial ordering
$\subseteq$ on $\Xi$.
Informally, the computable operator $F$ is defined by some algorithm
which when fed with an infinite or a finite sequence $\omega$ takes it
sequentially bit by bit, processes it and produces an output sequence
also sequentially bit by bit.
In what follows we will use a modified version of the computable operator, where
\begin{eqnarray}
\tilde F(x)=\sup\{y: l(y)\le l(x)\,\&\,\exists x'(x'\subseteq x\,\&\,(x',y)\in\hat F^{l(x)})\}
\label{modify-operator-1}
\end{eqnarray}
for any $x\in\Xi$, where $\sup$ is taken under the partial ordering
$x\subseteq y$ and $\hat F^{l(x)}$ is the finite subset of elements of $\hat F$
enumerated in $l(x)$ steps. Let $(x,\lambda)\in\hat F^0$ for every $x$.
By definition of the modified operator, $l(\tilde F(x))\le l(x)$ for each finite
sequence $x$ and $\tilde F(\omega)=F(\omega)$ for each infinite $\omega$,
where $\tilde F(\omega)=\sup_n \tilde F(\omega^n)$.
For any finite sequence $x$ the value of $\tilde F(x)$ is defined in $l(x)$ steps
of computation.
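Definition (\ref{modify-operator-1}) can be illustrated by a direct sketch, representing the graph $\hat F$ as a list of pairs in enumeration order. The toy graph in the test is hypothetical, and the sketch assumes (rather than checks) the three closure properties of a valid graph.

```python
def tilde_F(hat_F, x):
    """Modified operator on a finite string x: among the pairs (x', y)
    enumerated within the first l(x) steps (modelled here as the first
    l(x) entries of the list hat_F), take the longest y with
    l(y) <= l(x) and x' a prefix of x."""
    best = ''
    for xp, y in hat_F[:len(x)]:          # pairs enumerated in l(x) steps
        if x.startswith(xp) and len(y) <= len(x) and y.startswith(best):
            best = max(best, y, key=len)  # sup under the prefix ordering
    return best
```

Note that the length bound $l(\tilde F(x))\le l(x)$ holds by construction, and on longer prefixes of the same input the output only extends, as the definition requires.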
We will use a uniformly computable sequence of all computable operators
$\{F_i\}$ such that, given $i$ and $\omega$, some algorithm computes the
value $F_i(\omega)$, and for any computable operator $F$ there exists an $i$ such that
$F(\omega)=F_i(\omega)$ for each $\omega$. This sequence is defined by a recursively
enumerable set $\cal F$ of triples $(i,x,y)$ such that for any $i$ the set
$\hat F=\{(x,y):(i,x,y)\in {\cal F}\}$ defines the computable operator $F_i$,
and for each computable operator $F$ an $i$ exists such that $F=F_i$.
We transform this sequence into a sequence $\{\tilde F_i\}$
as was done in (\ref{modify-operator-1}).
A real-valued non-negative function $P:\Xi\to{\cal R}$ is called a semimeasure if
\begin{eqnarray}
P(\lambda)\le 1,
\nonumber
\\
P(x0)+P(x1)\le P(x)
\label{semi-1}
\end{eqnarray}
for all $x\in\Xi$. We will consider lower semicomputable
semimeasures $P$; this means that the set
$\{(r,x):r\in {\cal Q},\ r<P(x)\}$ is recursively enumerable.
Solomonoff~\cite{Sol64, Sol64a} proposed ideas for defining an a priori probability
distribution on the basis of the general theory of algorithms.
Levin in~\cite{ZvL70} and~\cite{LeV77} gave a precise
form to Solomonoff's ideas in the concept of a maximal lower semicomputable semimeasure
(see also Li and Vitanyi~\cite{LiV97}, Section 4.5).
Levin proved that there exists a lower semicomputable semimeasure $M$ that is maximal
to within a multiplicative positive constant factor, i.e.,
for every lower semicomputable semimeasure $P$ a positive constant $c$
exists such that the inequality
\begin{equation}\label{M-ineq}
cM(x)\ge P(x)
\end{equation}
holds for all $x$. The semimeasure $M$ is called the a priori or universal semimeasure.
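One standard construction of such an $M$ (a sketch following Zvonkin and Levin~\cite{ZvL70}; the weights $2^{-i}$ are one convenient choice) mixes a computable enumeration $\{P_i\}$ of all lower semicomputable semimeasures:

```latex
\[
M(x)=\sum_{i\ge 1}2^{-i}P_i(x).
\]
```

Then $M(\lambda)\le\sum_{i\ge 1}2^{-i}=1$ and $M(x0)+M(x1)\le M(x)$ hold term by term, so $M$ is again a lower semicomputable semimeasure, and $2^{i}M(x)\ge P_i(x)$ yields (\ref{M-ineq}) with $c=2^{i}$.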
A function $P$ is a measure if (\ref{semi-1}) holds, where
both inequality signs $\le$ are replaced with the equality $=$. Any function $P$ satisfying
(\ref{semi-1}) (with equalities) can be extended on all Borel subsets
of $\Omega$. Consider intervals $\Gamma_x=\{\omega\in\Omega:x\subseteq\omega\}$
in $\Omega$ for all $x\in\Xi$. The measure of any such interval can be defined as
$P(\Gamma_x)=P(x)$ and can be extended to all Borel subsets of $\Omega$.
A measure $P$ is computable if there is an algorithm which, for any $x\in\Xi$ and any
given degree of precision, outputs a rational approximation of the real number $P(x)$.
A typical example of a computable measure on $\Omega$
is the uniform measure $L$, where $L(\Gamma_x)=2^{-l(x)}$ for every $x\in\Xi$.
For technical reasons, for any semimeasure $P$, we consider the maximal measure
$\bar P$ such that $\bar P\le P$. This measure can be defined as
\begin{eqnarray*}
\bar P(x)=\lim\limits_{n\to\infty}\sum\limits_{l(y)=n,x\subseteq y}P(y)
\end{eqnarray*}
(see Levin and V'yugin~\cite{LeV77}).
In general, the measure $\bar P$ is non-computable (and it is not a probability measure,
since $\bar P(\Omega)<1$ is possible) even when $P$ is a lower semicomputable semimeasure.
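The limit defining $\bar P$ exists because, by (\ref{semi-1}), the partial sums are nonincreasing in $n$ for every $n\ge l(x)$:

```latex
\[
\sum\limits_{l(y)=n+1,\,x\subseteq y}P(y)
  =\sum\limits_{l(z)=n,\,x\subseteq z}\bigl(P(z0)+P(z1)\bigr)
  \le\sum\limits_{l(z)=n,\,x\subseteq z}P(z).
\]
```

Passing to the limit turns the inequality of (\ref{semi-1}) into the equality $\bar P(x0)+\bar P(x1)=\bar P(x)$, which is why $\bar P$ is a measure.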
From (\ref{M-ineq}) the inequality $c\bar M(A)\ge\bar P(A)$ follows for each lower
semicomputable semimeasure $P$ and for every Borel set $A$, where $c$ is a positive
constant (the same as in (\ref{M-ineq})). In particular, the measure $\bar P$ is
absolutely continuous with respect to the measure $\bar M$.
Following Levin~\cite{ZvL70, LeV77, Lev84} (see also V'yugin~\cite{Vyu82, Vyu2012}),
combinations of probabilistic and deterministic processes are considered as
the most general class of processes for generating data. Any probabilistic process
is defined by some computable probability distribution.
Any deterministic process is realized by means of an algorithm.
Algorithmic processes transform sequences generated by
probabilistic processes into new sequences. More precisely, a probabilistic
computer is a pair $(P,F)$, where $P$ is a computable probability distribution
(for example, $P=L$) and $F$ is a Turing machine supplied with an additional input tape.
In the process of computation this machine reads on this tape
a sequence $\omega$ distributed according to $P$ and produces a sequence
$\omega'=F(\omega)$ (for a precise definition see~\cite{ZvL70, USS90, Vyu82, LiV97}).
So, we can consider the probability
$$
Q(x)=P\{\omega: x\subseteq F(\omega)\}
$$
that the result $F(\omega)$ of the computation begins with a
finite sequence $x$. It is easy to see that $Q(x)$ is a lower semicomputable
semimeasure.
In general, the semimeasure $Q$ need not be a probability distribution on $\Omega$,
since $F(\omega)$ may be finite for some infinite $\omega$.
The converse result was proved in Zvonkin and Levin~\cite{ZvL70}: for every
lower semicomputable semimeasure $Q(x)$ there exists a probabilistic algorithm $(L,F)$
such that
$$
Q(x)=L\{\omega: x\subseteq F(\omega)\}
$$
for all $x$, where $L(x)=2^{-l(x)}$ is the uniform probability
distribution on the set of all binary sequences.
Therefore, by (\ref{M-ineq}), $M(x)$ gives an asymptotically universal upper bound on
the probability of generating $x$ by probabilistic algorithms.
A set of infinite sequences $U\subseteq\Omega$ is called open if it can be
represented as a union of a sequence of intervals: $U=\cup_i\Gamma_{x_i}$, where
$x_i\in\Xi$ for $i=1,2,\dots$. An open set $U$ is effectively open if the function
$f(i)=x_i$ is total computable.
Let $P$ be a computable measure on $\Omega$.
By a Martin-L\"of~\cite{Mar66} test of randomness with respect to the measure $P$
we mean a uniformly recursively enumerable
sequence $\{U_i\}$ of effectively open sets (i.e., $U_i=\cup_j\Gamma_{x_{i,j}}$
for each $i$ and the function $f(i,j)=x_{i,j}$ is computable) such that
$P(U_i)\le 2^{-i}$ for all $i$. The null set of the test is $\cap_i U_i$.
By definition, $P(\cap_i U_i)=0$. An infinite sequence $\omega\in\Omega$ is
called Martin-L\"of random with respect to a computable measure $P$ if
$\omega\not\in\cap_i U_i$ for each test $\{U_i\}$ of randomness
with respect to $P$.
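For intuition, the measure bound $P(U_i)\le 2^{-i}$ is easy to check mechanically for the uniform measure $L$ when a test component is given by a finite list of interval prefixes. A minimal sketch (the prefix lists in the test are toy data):

```python
def uniform_measure(prefixes):
    """Uniform measure L of the open set U given as the union of the
    intervals Gamma_x for x in `prefixes`.  First discard any string
    that extends another one in the list, so the remaining intervals
    are pairwise disjoint; then their measures simply add up."""
    minimal = {x for x in prefixes
               if not any(y != x and x.startswith(y) for y in prefixes)}
    return sum(2.0 ** -len(x) for x in minimal)
```

A finite stage $U_i$ of a test would then be accepted only if `uniform_measure(prefixes) <= 2**-i`.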
An equivalent definition of Martin-L\"of randomness can be given in terms
of the a priori semimeasure:
an infinite sequence $\omega$ is Martin-L\"of random with respect to a computable
measure $P$ if and only if there exists a positive constant $c$ such that
\begin{equation}
\frac{P(\omega^n)}{M(\omega^n)}\ge c>0
\label{qrit-random-1}
\end{equation}
for every $n$ (see Zvonkin and Levin~\cite{ZvL70},~\cite{LiV97}).
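Criterion (\ref{qrit-random-1}) is often restated via the randomness deficiency; a sketch of this standard reformulation:

```latex
\[
d_P(\omega)=\sup_n\bigl(\log_2 M(\omega^n)-\log_2 P(\omega^n)\bigr),
\]
```

so that $\omega$ is Martin-L\"of random with respect to $P$ if and only if $d_P(\omega)<\infty$; indeed, (\ref{qrit-random-1}) holds with $c=2^{-d_P(\omega)}$.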
\section{Algebra of LV-degrees}\label{algebra-1}
An infinite sequence $\alpha\in\Omega$ is Turing (or algorithmically) reducible to
an infinite sequence $\beta\in\Omega$ if $\alpha=F(\beta)$ for some
algorithmic operator $F$. Denote this as $\alpha\le_T\beta$.
Two infinite sequences $\alpha$ and $\beta$
are Turing equivalent, $\alpha\equiv_T\beta$, if each of them is
reducible to the other. The classes of equivalent sequences
form Turing degrees.
A Borel set $A\subseteq\Omega$ is called
algorithmically (or Turing) invariant if, together with each
sequence, it also contains all algorithmically equivalent
sequences. In other words, the set $A$ can be represented as a union
of Turing degrees. For any set $A\subseteq\Omega$, let
$\bar A=\{\omega:\exists\alpha(\alpha\in A\,\&\,\alpha\equiv_T\omega)\}$
be the algorithmic closure of the set $A$.
Martin-L\"of random sequences should serve as mathematical analogs of sequences
which can be obtained in stochastic processes.
On the other hand, some infinite Martin-L\"of random sequences can be defined
using exact mathematical constructions, which contradicts our intuition.
For example, the binary representation of the Chaitin number $\sum_n 2^{-\KP(n)}$,
where $\KP(n)$ is the prefix Kolmogorov complexity of a positive integer number $n$,
is Martin-L\"of random with respect to the uniform measure on $\Omega$ (see~\cite{USS90});
another example of an exact mathematical construction of a Martin-L\"of random sequence
can be found in Zvonkin and Levin~\cite{ZvL70}.
These examples show that some correction is needed in the interpretation of
the concept of a random sequence.
By the de Leeuw--Moore--Shannon--Shapiro theorem, $\bar M(\{\alpha\}) = 0$
for any non-computable sequence $\alpha$ (de Leeuw et al.~\cite{LMSS56};
see also Sacks~\cite{Sac63}).
In particular, the Chaitin number, or rather its binary representation
$\alpha$, cannot be the output (with positive probability) of any probabilistic algorithm.
Similarly, any individual random sequence $\alpha$ defined by a mathematical construction
cannot be obtained as output in any combination of random and algorithmic processes.
Suppose a property ${\cal A}$ defines a Borel set $A=\{\omega\in\Omega:{\cal A}(\omega)\}$
such that $\bar M(A)=0$. Then for any probabilistic machine $(L,F)$,
the probability $P(A)=L\{\omega: F(\omega)\in A\}$ of generating a sequence
from $A$ is equal to 0. We call such sets {\it negligible}. A set $A$ is negligible
if and only if $L(F^{-1}(A))=0$ for each computable operator $F$, where
$F^{-1}(A)=\{\omega\in\Omega:F(\omega)\in A\}$ (see~\cite{LeV77, Vyu82}).
We consider algorithmic transformations of infinite
sequences which can be carried out using probabilistic algorithms.
By definition, no infinite sequence from a negligible set can be obtained
(with positive probability) in any combination of stochastic and algorithmic
processes.
For example, for any non-computable infinite sequence $\alpha$ the set
$$
\{\omega\in\Omega:\exists F(F(\omega)=\alpha)\}
$$
is negligible.
Let ${\cal B}$ be the Boolean algebra of all algorithmically invariant
Borel subsets (collections of infinite binary sequences) of $\Omega$.
We identify any two sets from ${\cal B}$
which differ by a negligible set. More precisely,
let us consider the equivalence relation on ${\cal B}$:
$$
A\sim B \Longleftrightarrow \bar M((A\setminus B)\cup (B\setminus A))=0.
$$
Let $\Upsilon$ be the factor algebra of ${\cal B}$ by the equivalence relation
$\sim$. Denote the equivalence class of any set $A$ by ${\bf a}=[A]$.
The elements of $\Upsilon$ will be called degrees of randomized computability
or LV-degrees.\footnote{V'yugin~\cite{Vyu82} called $\Upsilon$ the algebra
of invariant properties; this algebra was recently called the algebra of
Levin--V'yugin degrees (LV-degrees) by Bienvenu and Patey~\cite{BiP2017} and by
Holzl and Porter~\cite{HoP2021}.}
Define, for any lower semicomputable semimeasure $P$,
$\bar P({\bf a})=\bar P(A)$. Define the Boolean operations on $\Upsilon$:
${\bf a}\cup {\bf b}=[A\cup B]$ and ${\bf a}\cap {\bf b}=[A\cap B]$,
where ${\bf a}=[A]$ and ${\bf b}=[B]$. The partial ordering on $\Upsilon$
is defined by
$$
{\bf a}\preceq {\bf b}\Longleftrightarrow \bar M(A\setminus B)=0.
$$
In what follows, we call {\it standard} any sequence which is algorithmically
equivalent to a sequence that is Martin-L\"of random with respect to
some computable measure. By definition, the set of all standard sequences has
measure 1 with respect to any computable measure.
Zvonkin and Levin~\cite{ZvL70} (Theorem~3.1) proved that any sequence $\omega$
Martin-L\"of random with respect to a computable measure is computable or
algorithmically equivalent to a sequence which is Martin-L\"of random with
respect to the uniform measure.
Therefore, the elements ${\bf r}=[\bar R]$ and ${\bf c}=[C]$ arise naturally,
where $R$ is the set of all sequences Martin-L\"of random with respect
to the uniform measure, $\bar R$ is its algorithmic closure, and $C$ is the set
of all computable sequences. In particular, the set $\bar R$
contains all non-computable sequences random with respect to computable measures
(and all sequences Turing equivalent to such sequences).
Evidently, $\bar M({\bf r})>0$ and $\bar M({\bf c})>0$.
The zero element {\bf 0} of the algebra $\Upsilon$ is the equivalence class
of the empty set. It consists of all algorithmically invariant negligible
Borel subsets of $\Omega$, $\bar M({\bf 0})=0$.
The maximal element of $\Upsilon$ is ${\bf 1}=[\Omega]$.
By definition ${\bf d}$ is an atom of $\Upsilon$ if
$\bf d\not =\bf 0$ and it cannot be represented as
${\bf d}={\bf a}\cup{\bf b}$, where ${\bf a}\cap{\bf b}={\bf 0}$,
${\bf a}\not ={\bf 0}$ and ${\bf b}\not ={\bf 0}$.
It was first pointed out in Levin and V'yugin~\cite{LeV77} that ${\bf r}$ and $\bf c$
are atoms of $\Upsilon$. The proof of this result (which is attributed to Levin)
was first given in V'yugin~\cite{Vyu82}. Holzl and Porter~\cite{HoP2021}
also presented a careful proof of this result.
We present a short proof for completeness of presentation.
\begin{theorem} \label{atom-1}
The element ${\bf r}$ is an atom of $\Upsilon$.
\end{theorem}
{\it Proof.} Assume that ${\bf r}={\bf a}\cup{\bf b}$,
where ${\bf a}\cap{\bf b}={\bf 0}$, ${\bf a}\not ={\bf 0}$ and
${\bf b}\not ={\bf 0}$. Then $\bar R=A\cup B$, where ${\bf a}=[A]$, ${\bf b}=[B]$,
and $A$ and $B$ are algorithmically invariant sets
of infinite sequences. We can assume without loss of generality that $A\cap B=\emptyset$.
Recall that $R$ is the set of all Martin-L\"of random sequences with respect to
the uniform measure. Let $A'=A\cap R$ and $B'=B\cap R$. Since any sequence $\alpha\in A$
is algorithmically equivalent to some sequence from $A'$ and $\bar M(A)>0$,
it follows that $\bar M(A')>0$. Analogously, $\bar M(B')>0$.
Let $P$ be a probability measure on $\Omega$ absolutely continuous with respect
to $\bar M$, i.e., $\bar M(X)=0$ implies $P(X)=0$ for each Borel set $X$,
and let $\frac{dP}{d\bar M}(\omega)$ be the Radon--Nikodym derivative of $P$
with respect to the measure $\bar M$. By definition,
\begin{equation}
P(X)=\int\limits_{X}\frac{dP}{d\bar M}(\omega)d\bar M
\label{radon-defin-1}
\end{equation}
for each Borel set $X$. In particular, (\ref{radon-defin-1}) holds true for
each computable measure $P$ and for the measure $\bar P$, where $P$ is
a lower semicomputable semimeasure.
\begin{lemma}\label{Rad-Nic}
Let a measure $P$ be absolutely continuous with respect to the measure $\bar M$,
let $A\subseteq\Omega$, and let $\frac{dP}{d\bar M}(\omega)>0$ for each $\omega\in A$.
Then $P(A)=0$ implies $\bar M(A)=0$.
\end{lemma}
{\it Proof.} By (\ref{radon-defin-1}),
$P(A)=\int\limits_{A}\frac{dP}{d\bar M}(\omega)d\bar M$.
Since the integrand is positive on $A$, the equality $P(A)=0$ is possible only
if $\bar M(A)=0$.
$\Box$
\begin{corollary} \label{rad-2}
Let $P$ be a computable measure and $A$ consists of $P$-random sequences.
Then $P(A)=0$ implies $\bar M(A)=0$.
\end{corollary}
{\it Proof.} For any $P$-random sequence $\omega$,
$$
P(\omega^n)/\bar M(\omega^n)\ge P(\omega^n)/M(\omega^n)\ge c>0
$$
holds for every $n$, where $c$ is a constant depending on $\omega$. Then
$\frac{dP}{d\bar M}(\omega)\not =0$ for each $\omega\in A$. By Lemma~\ref{Rad-Nic},
$\bar M(A)=0$. $\Box$
Let us finish the proof of the theorem. If an infinite sequence
$\omega$ is random with respect to the uniform measure then any sequence $\omega'$,
which differs from it in a finite number of bits, is also random.
Then $\omega,\omega'\in R$. Besides, $\omega\equiv_T\omega'$.
We can choose algorithmically invariant Borel sets $A$ and $B$ such that
any two sequences $\alpha\in A$ and $\beta\in B$ are not algorithmically
equivalent. Let $A'=A\cap R$ and $B'=B\cap R$.
Then $\omega,\omega'\in A'$ or $\omega,\omega'\in B'$.
By Corollary~\ref{rad-2}, $\bar M(A')>0$ implies $L(A')>0$. Analogously
$\bar M(B')>0$ implies $L(B')>0$.
We apply the Kolmogorov 0-1 law to the sequence $f_1, f_2,\dots$
of random variables, where $f_i(\omega)=\omega_i$ are random variables
defined on the probability space $(\Omega,L)$.
It follows from the invariance of the sets $A'$ and $B'$
that for each $n$ they belong to the $\sigma$-algebra generated by the random
variables $f_n,f_{n+1},\dots$, and hence
they belong to the residual (tail) $\sigma$-algebra of $f_1, f_2,\dots$.
By the Kolmogorov 0-1 law, $L(A')=0$ or $L(A')=1$, and the same holds for $B'$.
This is a contradiction, since $A'\cap B'=\emptyset$ and $L(A')>0$,
$L(B')>0$. Therefore, $\bf r$ is an atom of $\Upsilon$.
$\Box$
Evidently, $\bf c$ is also an atom of $\Upsilon$.
It is easy to see that $\bf r$ is the single atom of uniform measure 1.
The atoms $\bf c$ and $\bf r$ are generated by the standard sequences. A question
arises: does ${\bf 1}={\bf c}\cup {\bf r}$?\footnote{This would mean that all
sequences (information)
that can be generated using probabilistic algorithms are stochastic or computable.}
We prove in Section~\ref{app-1}
that ${\bf 1}\setminus({\bf c}\cup {\bf r})\not={\bf 0}$ and, moreover,
we prove in Section~\ref{atoms-count-1} that there exists an infinite sequence
of other atoms.
It is easy to show that the set of all atoms of the algebra $\Upsilon$ is at most
countable. To do this, choose for each atom ${\bf a}=[A]$ a union $D_a$ of a finite number
of intervals such that
$$
\bar M((A\setminus D_a)\cup(D_a\setminus A))<(1/4)\bar M({\bf a}).
$$
If $\bf a\not =\bf b$ then $D_a\not = D_b$. The set of all $D_a$ is at most
countable.
We will prove in Section~\ref{atoms-count-1} that the set
of all atoms is countable. Let ${\bf a}_1,{\bf a}_2,{\bf a}_3,\dots$ be all atoms of
$\Upsilon$, where ${\bf a}_1={\bf c}$ and ${\bf a}_2={\bf r}$ and the atoms
$\bf c$ and $\bf r$ are defined by standard sequences. We will also
prove that the algebra $\Upsilon$ is not limited to atoms only:
${\bf 1}\setminus \bigcup_{i=1}^{\infty} {\bf a}_i\not = {\bf 0}$.
By definition the element
${\bf e}={\bf 1}\setminus\bigcup_{i=1}^{\infty} {\bf a}_i$
is infinitely divisible, i.e. for any non-zero element
${\bf x}\subseteq {\bf e}$ a decomposition
${\bf x}={\bf x}_1\cup {\bf x}_2$ exists, where ${\bf x}_1\cap{\bf x}_2={\bf 0}$,
${\bf x}_1\not ={\bf 0}$ and ${\bf x}_2\not ={\bf 0}$.
Theorems~\ref{infini-div-1} and~\ref{nucl-ato-1} given below in Sections~\ref{nucl-2-pr-1}
and~\ref{atoms-count-1} will imply the main result
of this paper on decomposition of the maximal element of LV-algebra:
{\it The following decomposition of the maximal element of $\Upsilon$ takes place:
\begin{equation}
{\bf 1}=\cup_{i = 1}^{\infty}{\bf a}_i\cup{\bf e},
\label{main-decomposition-1}
\end{equation}
where $ {\bf a}_1, {\bf a}_2, \dots$ are all atoms of $\Upsilon$ and
${\bf e}$ is a non-zero infinitely divisible element.}
The decomposition (\ref{main-decomposition-1}) shows that any non-zero
LV-degree can be represented as a union of some atoms or it is an infinitely
divisible element, or it is a union of some atoms and non-zero infinitely
divisible element.
We show in Theorem~\ref{nucl-4} given below in Section~\ref{hyperimmune-2}
that a nontrivial decomposition similar to (\ref{main-decomposition-1})
takes place for the LV-degree of all hyperimmune sequences:\footnote{i.e., for
indicator sequences of the hyperimmune sets of integer numbers.}
\begin{equation}
{\bf h}=\cup_{i = 1}^{\infty}{\bf h}_i\cup{\bf e},
\label{main-decomposition-2}
\end{equation}
where $ {\bf h}_1, {\bf h}_2, \dots$ are atoms and
${\bf e}$ is a non-zero infinitely divisible element generated by hyperimmune degrees.
\section{Network flows}\label{sec-net-1}
To construct the elements of $\Upsilon$ generated by non-standard sequences,
we have to construct lower semicomputable semimeasures $P$ such that
$\bar P(\Omega\setminus (\bar R\cup C))>0$.
We will construct such a semimeasure $P$, represented as a flow
over a certain network.
We will consider the set $\Xi$ of all finite binary sequences as a graph (tree)
whose vertices are sequences $x\in\Xi$ connected by edges of unit length
$(x,x0)$, $(x,x1)$. During the construction, we will add extra edges
$(x,y)$, where $x,y\in\Xi$, $x\subset y$, of length $l(y)-l(x)>1$.
For any edge $\sigma=(x,y)$ denote by $\sigma_1=x$ its starting vertex,
and by $\sigma_2=y$ its final vertex. A function $q(\sigma)$, which is defined
on all edges of unit length as well as on all extra edges, is called a network if
\begin{equation} \label{net-1}
\sum\limits_{\sigma:\sigma_1=x} q(\sigma)\le 1
\end{equation}
for each $x\in\Xi$.
By a $q$-flow we mean the minimal semimeasure $P$ such that
$P\ge R$, where the function $R$ (the frame of the network flow) is defined as follows:
\begin{eqnarray}
R(\lambda)=1,
\label{net-base-1}
\\
R(y)=\sum\limits_{\sigma:\sigma_2=y}q(\sigma)R(\sigma_1)
\label{net-base-2}
\end{eqnarray}
for $y\not =\lambda$, where $\lambda$ is the empty sequence.
It is easy to verify that the semimeasure $P$ can be defined as
\begin{eqnarray}
P(\lambda)=1,
\label{net-base-1a}
\\
P(y)=\sum\limits_{\sigma:\sigma_1\subset y\subseteq\sigma_2}q(\sigma)R(\sigma_1)
\label{net-base-2a}
\end{eqnarray}
for each $y$. The value $q(\sigma)$ can be interpreted as a portion of the
flow that goes from $x=\sigma_1$ to the vertex $y=\sigma_2$ along the edge $\sigma$.
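As a numeric illustration of the frame (\ref{net-base-1})--(\ref{net-base-2}) and
the $q$-flow (\ref{net-base-1a})--(\ref{net-base-2a}), the following Python sketch
computes $R$ and $P$ on a toy network; the particular network (a single extra edge
$(\lambda,11)$ carrying half of the root flow) is an illustrative assumption, not
part of the construction.

```python
# Vertices are binary strings; q maps an edge (x, y) to its weight q(x, y).
# Unit edges get q = (1 - s(x))/2 with s(empty) = 1/2 and s = 0 elsewhere,
# plus one extra edge ('', '11') carrying the delayed half of the root flow,
# so the outgoing weights at every vertex sum to at most 1 (condition (net-1)).
q = {}
for x in ['', '0', '1', '00', '01', '10', '11']:
    for b in '01':
        q[(x, x + b)] = 0.25 if x == '' else 0.5
q[('', '11')] = 0.5

def R(y):
    """Frame: R(empty) = 1; R(y) = sum over edges ending at y of q * R(start)."""
    if y == '':
        return 1.0
    return sum(w * R(s) for (s, t), w in q.items() if t == y)

def P(y):
    """q-flow: sum over edges (s, t) with s a proper prefix of y and y a prefix of t."""
    if y == '':
        return 1.0
    return sum(w * R(s) for (s, t), w in q.items()
               if len(s) < len(y) and y.startswith(s) and t.startswith(y))

for x in ['', '0', '1']:
    assert P(x + '0') + P(x + '1') <= P(x) + 1e-12  # P is a semimeasure
    assert P(x) >= R(x) - 1e-12                     # P >= R
print(P('1'), R('11'))  # 0.75 0.625
```

Note that $P(1)=3/4>R(1)=1/4$: the extra edge contributes its weight to every
vertex it passes over, which is exactly the difference between
(\ref{net-base-2}) and (\ref{net-base-2a}).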
We associate with any network $q$ the flow-delay function
$$
s(x)=1-q(x,x0)-q(x,x1).
$$
A network $q$ is called elementary if there exists an $n$ such that
$s(x)$ is defined for all $x$ with $l(x)\le n$, the set $G^n$ of all extra edges is
finite, and $l(\sigma_2)\le n$ for each $\sigma\in G^n$.
We assume that $q(x,x0)=q(x,x1)=\frac{1}{2}(1-s(x))$ for both edges of unit length
outgoing from $x$.
Any elementary network is a constructive object.
We will define a sequence of elementary networks gradually increasing $n$.
\subsection{Template~1}\label{nucl-2-pr}
We present the construction of a network $q$ depending on a recursive
predicate $B(i,\sigma)$, where $i$ is a positive integer number (task number),
$\sigma$ is an extra edge.
Let $p:{\cal N}\to{\cal N}$ be a total computable function such that for any
$i$, $p(n)=i$ for infinitely many $n$.\footnote{This function can be defined as follows.
Let $\langle i,j\rangle$ denote
the order number of the pair $(i,j)$ of positive integer numbers under some
one-to-one correspondence between all positive integer numbers and all such pairs.
Define $p(\langle i,j\rangle)=i$ for all $(i,j)$.
}
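The footnoted function $p$ can be sketched as follows; the Cantor pairing is one
concrete choice of the one-to-one correspondence (an assumption of this sketch,
since any computable bijection would do).

```python
import math

def p(n):
    """p(<i,j>) = i, where <i,j> enumerates pairs of positive integers
    via the Cantor pairing (one concrete choice of the correspondence)."""
    z = n - 1                                 # shift to 0-based pair codes
    w = (math.isqrt(8 * z + 1) - 1) // 2      # index of the diagonal
    j0 = z - w * (w + 1) // 2                 # position on the diagonal
    i0 = w - j0
    return i0 + 1

# Every task number i occurs infinitely often; check the first few values.
hits = {}
for n in range(1, 1000):
    hits.setdefault(p(n), []).append(n)
assert all(len(hits[i]) >= 10 for i in (1, 2, 3))
```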
Any extra edge $\sigma$ will refer to some task $i$ so that
$p(l(\sigma_1))=p(l(\sigma_2))=i$. We say that the edge $\sigma$ is of $i$th type.
The goal of the task $i$ will be to
draw extra edges $\sigma$ such that $B(i,\sigma)$ is satisfied and
that each infinite sequence $\omega$ passes
through one of these edges or the delay function would be equal to 1 on some
initial fragment of $\omega$.\footnote{An infinite sequence $\omega$ passes through
the edge $\sigma$ if $\sigma_2\subset\omega$.}
We associate with the predicate $B$ the function of setting an extra edge
\begin{equation}
\beta(x,n)=\min\{y:l(y)=n, p(l(y))=p(l(x)), B(p(l(x)),(x,y))\}.
\end{equation}
Here $\min$ is taken with respect to the natural linear orderings of all
finite binary sequences.
We suppose that $\min\emptyset$ is undefined. The pair $(x,\beta(x,n))$
will be drawn in $G$ as an extra edge.
Define a sequence of elementary networks by the mathematical induction on $n$.
Define $s(\lambda)=0$ and $G^0=\emptyset$.
Let $n\ge 1$ and $G^{n-1}$ and $q(\sigma)$ be defined for every
$\sigma\in G^{n-1}$, $s(x)$ be defined for every $x$ such that $l(x)\le n-1$,
and $q(\sigma)=\frac{1}{2}(1-s(\sigma_1))$ for each $\sigma$ of unit length such that
$l(\sigma_2)=n$.
Let $G^{n-1}(i)$ be the set of all extra edges drawn by the task $i$
at steps $<n$. It should be $p(l(\sigma_1))=p(l(\sigma_2))=i$ for each
$\sigma\in G^{n-1}(i)$.
We first introduce an auxiliary function $w(i,n)$.
The value of $w(i,n)$ is equal to the smallest $m$ such that $m\le n$, $p(m)=i$ and
$m>l(\sigma_2)$ for each extra edge $\sigma\in G^{n-1}(j)$ where $j<i$, i.e.,
for any extra edge $\sigma$ drawn during the processing of any task $j<i$.
Let us give the exact definition:
\begin{eqnarray}
w(i,n)=\min\{m: m\le n\,\&\,p(m)=i\,\&
\nonumber
\\
\forall j\,\forall\sigma\,
((j<i\,\&\,\sigma\in G^{n-1}(j))\rightarrow m>l(\sigma_2))\}.
\label{win-1}
\end{eqnarray}
We refer to $w(i,n)$ as to the initial step of a session for executing the task $i$.
The change of the value, $w(i,n)\not = w(i,n-1)$,
may occur because at step $n$ some task $j<i$ draws its extra edge above
the level $w(i,n-1)$, and thus violates the condition
in the definition of $w(i,n-1)$.\footnote{By construction, if at a step $n$
of the induction some task draws a new extra edge $\sigma$ then $l(\sigma_2)=n$.
}
Lemma~\ref{gen-tech-1} will show that
this violation will occur at no more than a finite number of construction steps.
We will use a function $\rho(n)$ as a parameter of the construction;
put $\rho(n)=(n+3)^2$.
The construction of step $n$ splits into three cases. Let $i=p(n)$.
{\it Case 1}. $w(p(n),n)=n$ (starting a new session for executing the task $i=p(n)$:
installing or reinstalling the task $i$).
In this case define $s(y)=1/\rho(n)$ for every $y$ such that $l(y)=n$ and
set $G^n=G^{n-1}$.
{\it Case 2.} $w(i,n)<n$ and $C_n(i)\not=\emptyset$, where
$C_n(i)$ is the set of all sequences $x$ that require processing, i.e., such that
$p(l(x))=i$, $w(i,n)\le l(x)<n$, $s(x)>0$,
$\beta(x,n)$ is defined and no extra edge in $G^{n-1}$ outgoes from $x$
(processing step of the task $i$).
In this case define
$G^n=G^{n-1}\cup\{(x,\beta(x,n)):x\in C_n(i)\}$
and $q((x,\beta(x,n)))=s(x)$ for each $x\in C_n(i)$.
If $s(x)<1$ then define $s(\beta(x,n))=0$ and
$
s(y)=s(x)/(1-s(x))
$
for all other $y$ of length $n$ such that
$x\subset y$ and $y\not=\beta(x,n)$.
If $s(x)=1$ then define $s(y)=0$ for every $y$ such that
$x\subset y$ and $l(y)=n$.
Define $s(y)=0$ for all other $y$ of length $n$.
{\it Case 3.} Cases 1 and 2 do not occur.
In this case define $s(x)=0$ for all $x$ of length $n$ and $G^n=G^{n-1}$.
After all, define $q(\sigma)=\frac{1}{2}(1-s(\sigma_1))$ for each $\sigma$ of
unit length such that $l(\sigma_1)=n$.
This concludes the description of the induction step.
Define $G=\cup_n G^n$ and $G(i)=\cup_n G^n(i)$ for any $i$. By the construction
$s(x)$ is defined for each $x$ and $0\le s(x)\le 1$,
$q(\sigma)$ is defined for each $\sigma\in G$
and $q(\sigma)=\frac{1}{2}(1-s(\sigma_1))$ for each $\sigma$ of unit length.
Lemmas~\ref{flow-delay-values-1}--\ref{two-edges-1} below present the simplest
properties of the construction.
\begin{lemma}\label{flow-delay-values-1}
The values of the flow delay function $s$ are $0$ or rational
numbers of type $\frac{1}{M}$, where $M$ is a positive integer number.
\end{lemma}
{\it Proof.}
By Case 1 at step $n$ we define $s(x)=\frac{1}{\rho(n)}$ for each $x$ such that
$l(x)=n$. By induction on $n$, in Case 2 if $s(x)=\frac{1}{M}$
for some $M>1$ then $s(y)=\frac{1}{M-1}$ for each $y$ such that $x\subset y$
and $y\not=\beta(x,n)$, also, $s(\beta(x,n))=0$. If $s(x)=1$ then $s(y)=0$
for each $y$ such that $x\subset y$ and $l(y)=n$.
$\Box$
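The evolution of the delay values in this proof can be checked directly; a minimal
sketch, where the starting value $1/25=1/\rho(2)$ is an arbitrary illustrative choice.

```python
from fractions import Fraction

def next_delay(s):
    """Case 2 update for vertices y with x ⊂ y, y != beta(x,n): s(y) = s(x)/(1-s(x))."""
    return s / (1 - s)

# A session starts with s = 1/rho(n); here rho(2) = (2+3)^2 = 25.
s = Fraction(1, 25)
values = [s]
while s < 1:
    s = next_delay(s)
    values.append(s)

# The values march through 1/25, 1/24, ..., 1/2, 1, as the lemma states.
assert values == [Fraction(1, m) for m in range(25, 0, -1)]
```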
The $q$-flow $P$ is lower semicomputable semimeasure by definition
(\ref{net-base-1a})--(\ref{net-base-2a}).
\begin{lemma}\label{two-edges-1}
There cannot be two overlapping extra edges $(x,y), (x',y')\in G$ such that
$x'\subset x\subset y'$ and $l(y')<l(y)$.
\end{lemma}
{\it Proof.} Assume that such a pair of overlapping extra edges exists.
Let $i=p(l(x))$ and $i'=p(l(x'))$. Evidently $i\not =i'$.
By the construction the extra edge $(x',y')$ was drawn at step $n'=l(y')$
and the extra edge $(x,y)$ was drawn at the later step $n=l(y)$, where
$n>n'$ and $i<i'$.
There are two mutually exclusive cases. Suppose first that $n''=l(x)=w(i,n'')$, i.e.,
the task $i$ was installed (or reinstalled) at the step $n''$. Then the pair $(x',y')$
cannot be an extra edge, since it would have been added to $G$ at the later step $n'>n''$
by the task $i'>i$, which leads to a contradiction.
Assume now that $n''=l(x)>w(i,n'')$, i.e., the task $i$ is processed at step $n''$.
Then Case 2 holds at step $n$ and $s(x)>0$. In this case, some extra edge
$\sigma$ such that $l(\sigma_2)=n''$ has to be drawn by the task $i$ at the step $n''$.
Then a contradiction is obtained, since the extra edge $(x',y')$ should be added to $G$
at the later step $n'>n''$ by the task $i'>i$.
The resulting contradiction proves the lemma. $\Box$
The next lemma shows that each task leads to the installation of new extra
edges only at a finite number of steps.
By construction $w(i,n+1)\ge w(i,n)$ for every $n$.
Let $w(i)=\lim_{n\to\infty} w(i,n)$.
\begin{lemma}\label{gen-tech-1}
$G(i)$ is finite and $w(i)<\infty$ for each $i$.
\end{lemma}
{\it Proof.} Note that if $G(j)$ is finite for every $j<i$,
then $w(i)<\infty$. Therefore, it suffices to prove that $G(i)$ is finite for
each $i$. Assume the opposite. Let $i$ be the minimal for which
$G(i)$ is infinite. Since $G(j)$ is finite for each $j<i$, $w(i)<\infty$.
For any $x$ such that $l(x)\ge w(i)$, let $k$ be the maximal such that
$\sigma_1=x^k$ and $l(\sigma_2)\le l(x)$ for some edge $\sigma\in G(i)$.
This extra edge can be drawn by Case 2, where $s(x^k)>0$. By Lemma~\ref{flow-delay-values-1}
$s(x^k)=1/M$, where $M$ is an integer number such that
$M\ge 1$. If no such edge exists then set $k=w(i)$. Define
\[
K(x)=
\left\{
\begin{array}{l}
\rho(w(i)) \mbox{ if } l(x)\le w(i) \mbox{ or } k=w(i),\\
M-1 \mbox{ if } l(x)>w(i)\mbox{ and } k>w(i), \mbox{ where }s(x^k)=1/M.\\
\end{array}
\right.
\]
Since $K(x)\ge K(y)$ for every $x$ and $y$ such that $x\subset y$, and,
moreover, if $K(x)>K(y)$ then $K(x)>K(z)$ for each
$z$ such that $x\subset z$ and $l(z)=l(y)$, the function
$$
\hat K(\omega)=\min\{n:\forall k\ge n (K(\omega^k)=K(\omega^n))\}
$$
is defined for each infinite $\omega\in\Omega$ and it is continuous.
Since $\Omega$ is compact, it is upper bounded by some number $m$. Then
$K(x)=K(x^m)$ for every $x$ such that $l(x)\ge m$.
If at some step $n\ge m$ an extra edge $\sigma$ is drawn by the task $i$, where
$l(\sigma_1)>m$, then by Case 2, $K(y)<K(\sigma_1)$ for every $y$ of length $n$
such that $\sigma_1\subset y$.
Therefore, the existence of such $m$ contradicts the assumption that
$G(i)$ is infinite. The lemma is proved. $\Box$
The support set of a semimeasure $P$ is defined as
$$
E_P=\{\omega\in\Omega:\forall n(P(\omega^n)>0)\}.
$$
It is easy to see that $\bar P(E_P)=\bar P(\Omega)$.
A sequence $\alpha\in\Omega$ is called $i$-extension of a finite sequence $x$
if $x\subset\alpha$ and $B(i,(x,\alpha^n))$ is satisfied for almost all $n$.
Note that if $\sigma\in G(i)$ is an extra
edge of the $i$th type then $B(i,\sigma)$ is satisfied.
\begin{lemma}\label{exten-1}
Let $\omega\in E_P$ and for any initial fragment $\omega^n$ of the sequence
$\omega$ there is an $i$-extension. Then $\omega$ passes through an extra edge of
the $i$th type (i.e., $\sigma_2\subset\omega$ for some $\sigma\in G(i)$).
\end{lemma}
{\it Proof.} By definition $P(\omega^n)\not =0$ for all $n$.
By Lemma~\ref{gen-tech-1}, there is a maximal $m$ such that $p(m)=i$ and $s(\omega^m)>0$.
Since $\omega^m$ has an $i$-extension and $s(\omega^m)>0$,
by Case 2 of the construction, an extra edge $(\omega^m,y)$ will be drawn on some
step $n$, where $l(y)=n$. Assume that $y\not\subset\omega$. If $s(\omega^m)<1$
then $s(\omega^n)>0$, which contradicts the choice of $m$.
Let $s(\omega^m)=1$. Since $m\ge w(i,n)$ and by Lemma~\ref{two-edges-1},
no extra edge $\sigma$ exists such that $\sigma_1\subset\omega^m\subset\sigma_2$.
Then $P(\omega^{m+1})=0$, which contradicts the
assumption of the lemma. Hence, $y\subset\omega$.
$\Box$
A semimeasure $P$ is continuous if $\lim\limits_{n\to\infty}P(\omega^n)=0$ for each
infinite sequence $\omega$. We give some sufficient condition for the continuity
of the $q$-network flow $P$.
A number $n$ separates the set $D$ of edges if $l(\sigma_1)\ge n$
or $l(\sigma_2)<n$ for each edge $\sigma\in D$.
\begin{lemma} \label{contin-1}
Let $q$ be a network. The $q$-flow is continuous if the set of extra edges
is separated by an infinite set of numbers, and $q(x,x0)=q(x,x1)$
for each $x\in\Xi$.
\end{lemma}
{\it Proof.} Let $P$ be the $q$-flow and a number $n$ separates the set
of extra edges. Then
$$
P(x)=R(x)=q(x^{n-1},x)R(x^{n-1})\le q(x^{n-1},x)P(x^{n-1})
$$
for each $x$ of length $n$. By~(\ref{net-1}) and by the assumption of the lemma
$q(x^{n-1},x)\le 1/2$ for all $x$ and $n$. Then
$P(\omega^n)\le(1/2)P(\omega^{n-1})$ for each $n$ separating the set of
extra edges. Since there are infinitely many of such $n$, we have
$\lim\limits_{n\to\infty} P(\omega^n)=0$, i.e., the semimeasure $P$
is continuous. $\Box$
The following corollary of Lemma~\ref{contin-1} holds.
\begin{corollary}
Let $P$ be the flow through the network $q$ defined by Template 1.
Then the semimeasure $P$ is continuous.
\end{corollary}
{\it Proof.}
To apply Lemma~\ref{contin-1} to the semimeasure $P$, it suffices to note
that the number $w(i)$ separates $G$ for each $i$.
$\Box$
\begin{lemma}\label{nontriv-1a}
$\bar P(\Omega)>0$.
\end{lemma}
{\it Proof.} Let us estimate $\bar P(\Omega)$ from below. Let $i=p(n)$. Define
$$
S_n=\sum\limits_{u:l(u)=n}R(u)-
\sum\limits_{\sigma:\sigma\in G,l(\sigma_2)=n}q(\sigma)R(\sigma_1).
$$
From the definition of the frame,
\begin{eqnarray}
\sum\limits_{u:l(u)=n+1}R(u)=\sum\limits_{u:l(u)=n}(1-s(u))R(u)+
\sum\limits_{\sigma:\sigma\in G,l(\sigma_2)=n+1}q(\sigma)R(\sigma_1).
\label{RR-2a}
\end{eqnarray}
Consider the case where $w(p(n),n)<n$.
If there is no edge $\sigma\in G$ such that $\sigma_1\in C_n(i)$ and $l(\sigma_2)=n$,
then $S_{n+1}\ge S_n$. Now, let $C_n(i)\not=\emptyset$. Define
$$
\Phi(\sigma,u)\Longleftrightarrow\sigma_1\in C_n(i)\&l(\sigma_2)=l(u)\&
\sigma_1\subseteq u\&u\not =\sigma_2.
$$
If $s(\sigma_1)=1$ for $\sigma_1\in C_n(i)$ then
$\sum\limits_{u:l(u)=n,\sigma_1\subseteq u}s(u)R(u)=0$.
By the construction, the value $s(x)$ defines a portion of the delayed flow in
the vertex $x$.
The rest portion $1-s(x)$ of the flow goes equally to the vertices $x0$ and $x1$.
The portion $s(x)$ of the delayed flow can later be directed to some $y$ with $l(y)=n$
and $x\subset y$ only by Case 2, along an extra edge $(x,y)$, where $y=\beta(x,n)$,
outgoing from $x$ and drawn at the step $n$. We will show that at step $n$
the portion of the newly delayed flow in all $u$ of length $n$ such that
$x\subseteq u$ and $u\not=y$ does not exceed the portion of the previously delayed
flow at the vertex $x$ directed along the edge $(x,y)$; this part of the flow
is no longer delayed by the task $i$ in the current session.
The following chain of equalities and inequalities holds:
\begin{eqnarray}
\sum\limits_{u:l(u)=n}s(u)R(u)=
\nonumber
\\
\sum\limits_{\sigma:\sigma_1\in C_n(i), s(\sigma_1)<1}
\sum\limits_{u:l(u)=n,\sigma_1\subseteq u}s(u)R(u)=
\nonumber
\\
\sum\limits_{\sigma:\sigma_1\in C_n(i),s(\sigma_1)<1}
\frac{s(\sigma_1)}{1-s(\sigma_1)}
\sum\limits_{u:\Phi(\sigma,u)}R(u)\le
\nonumber
\\
\sum\limits_{\sigma:\sigma_1\in C_n(i),s(\sigma_1)<1}s(\sigma_1)R(\sigma_1)=
\nonumber
\\
\sum\limits_{\sigma:\sigma_1\in C_n(i),s(\sigma_1)<1}q(\sigma)R(\sigma_1)=
\nonumber
\\
\sum\limits_{\sigma:\sigma\in G,l(\sigma_2)=n}q(\sigma)R(\sigma_1).
\label{chain-1}
\end{eqnarray}
The second equality uses $s(\beta(\sigma_1,n))=0$ and
$s(u)=s(\sigma_1)/(1-s(\sigma_1))$ for all other $u$ of length $n$ with
$\sigma_1\subset u$.
Here we have used the inequality
\begin{equation}\label{cont-flow-1}
\sum\limits_{u:\Phi(\sigma,u)}R(u)\le (1-s(\sigma_1))R(\sigma_1)
\end{equation}
for each $\sigma\in G^n$ such that $\sigma_1\in C_n(i)$ and $s(\sigma_1)<1$.
Inequality (\ref{cont-flow-1}) holds, since the sum on the left is equal
to the flow through the set of vertices $\{u:\Phi(\sigma, u)\}$, and the value
on the right-hand side is equal
to the value of the flow outgoing from the vertex $\sigma_1$,
except for its part passing through the extra
edge $\sigma$. By Lemma~\ref{two-edges-1} there cannot be an edge $\sigma'\in G$
overlapping with $\sigma$, i.e.,
such that $\sigma'_1\subset\sigma_1\subset\sigma'_2$ and $l(\sigma'_2)<l(\sigma_2)$.
Therefore, no extra portion of the flow from a vertex $\sigma'_1\subset\sigma_1$
can go through $\sigma_1$ and thus increase the flow into
$\{u:\Phi(\sigma, u)\}$.
Combining the resulting estimate with (\ref{RR-2a}),
we get $S_{n+1}\ge S_n$.
Consider now the case where $w(p(n),n)=n$. Then
$$
\sum\limits_{u:l(u)=n}s(u)R(u)\le 1/\rho(n)=1/(n+3)^2.
$$
Combining this inequality with
(\ref{RR-2a}),
we get $S_{n+1}\ge S_n-1/(n + 3)^2 $. From here and from $S_0=1$ we get
$$
S_n\ge 1-\sum\limits_{i=1}^{\infty}1/(i+3)^2\ge\frac{1}{2}
$$
for all $n$. Since $P\ge R$,
$$
\bar P(\Omega)=\inf\limits_{n}\sum\limits_{l(u)=n} P(u)\ge
\inf\limits_n S_n\ge\frac{1}{2}.
$$
The lemma is proved. $\Box$
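The final estimate rests on $\sum_{i\ge 1}1/(i+3)^2\le 1/2$; a quick numeric sanity
check of this bound (an illustration, not part of the proof):

```python
import math

# Exact value of the tail: sum_{i>=1} 1/(i+3)^2 = pi^2/6 - 1 - 1/4 - 1/9.
tail = math.pi ** 2 / 6 - 1 - 1 / 4 - 1 / 9
partial = sum(1 / (i + 3) ** 2 for i in range(1, 100000))

assert abs(tail - partial) < 1e-4   # partial sums converge to the closed form
assert 1 - tail >= 0.5              # hence S_n >= 1/2 for all n
```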
\section{Applications of Template 1}\label{appl-1}
In this section we present two applications of Template 1.
\subsection{Nonstochastic Turing degrees}\label{app-1}
We will prove that ${\bf 1}\setminus ({\bf r}\cup{\bf c})\not = {\bf 0}$.
Let $\{F_i\}$ be the uniformly computable sequence of all computable operators.
We will assume that this sequence is modified by (\ref{modify-operator-1}) of
Section~\ref{prelimin-1} such that some output
$\tilde F_i(x)\subseteq F_i(x)$ is obtained in $l(x)$ steps of computation,
the length of this output does not exceed the length $l(x)$ of the input $x$, and
$\tilde F_i(\omega)=F_i(\omega)$ for each infinite $\omega$.
We also assume that for any computable operator $F$ there are infinitely many $i$
such that $F_i=F$.\footnote{To define such a sequence, redefine a sequence
of all computable operators $F_i$ as follows.
For any $i$, define $F'_{\langle i,j\rangle}=F_i$ for all $j$.
As before, the new sequence of operators will be denoted by $F_i$. Thus, for
any number $i$ of a computable operator $F_i$, one can enumerate an infinite
sequence of its other numbers.}
Define
\begin{eqnarray*}
B(i,\sigma)\Longleftrightarrow l(\tilde F_i(\sigma_2))>\sigma_1+i,
\end{eqnarray*}
where $\tilde F_i$ is the modified by (\ref{modify-operator-1}) computable
operator and the finite sequence $\sigma_1$ (the starting point of the edge $\sigma$)
is identified with its number in the natural numbering of the set $\Xi$.
\begin{theorem} \label{nontriv-1b}
For any infinite sequence $\omega$ from the support set of the semimeasure $P$ and
for any computable operator $F$, if $F(\omega)$ is infinite then the sequence
$F(\omega)$ is not Martin-L\"of random with respect to the uniform measure.
\end{theorem}
{\it Proof.} Note that if $F(\omega)$ is infinite and $F_i=F$, then each
initial fragment of the sequence $\omega$ has an $i$-extension.
By Lemma~\ref{exten-1}, for each such $i$ there is an edge $\sigma\in G(i)$
through which $\omega$ passes. For any $i$ define an open set
$$
U_i=\cup_{\sigma\in G(i)}\Gamma_{\tilde F_i(\sigma_2)}.
$$
Since $l(\tilde F_i(\sigma_2))>\sigma_1+i$ for $\sigma\in G(i)$,
$$
L(U_i)\le\sum\limits_{\sigma\in G(i)}2^{-\sigma_1-i}\le 2^{-i},
$$
where $L$ is the uniform measure. Define $U'_i=\cup_{j>i}U_j$; then $L(U'_i)\le 2^{-i}$.
We have proved that $\{U'_i\}$ is a Martin-L\"of test of randomness with respect
to the uniform measure. Since for any infinite $\omega$, $F(\omega)=\tilde F_i(\omega)$
for infinitely many $i$, $F(\omega)\in\cap_i U'_i$.
Thus, the sequence $F(\omega)$ is not Martin-L\"of random with respect to
the uniform measure $L$.
$\Box$
\begin{corollary}\label{nontriv-1b-cor-1}
$\bar P$-almost every infinite sequence $\omega$ cannot be Turing equivalent to
a sequence which is Martin-L\"of random with respect to some computable measure.
The a priori measure of all such sequences is positive.
\end{corollary}
{\it Proof.} The set of all computable sequences is countable. The continuity of
the semimeasure $P$ implies that $\bar P$-almost every sequence from its support set
is non-computable.
By~\cite{ZvL70} (Theorem~3.1), each non-computable
sequence Martin-L\"of random with respect to some computable measure is
algorithmically equivalent to a sequence which is Martin-L\"of random with respect
to the uniform measure. Therefore, $\bar P$-almost every sequence $\omega$ from
the support set of the semimeasure $P$ cannot be Turing equivalent to a sequence
Martin-L\"of random with respect to some computable measure.
Since the semimeasure $P$ is lower semicomputable, $cM\ge P$ for some constant $c$. Then
$\bar M(E_P)\ge\bar P(E_P)>0$.
$\Box$
\subsection{Infinitely divisible element}\label{nucl-2-pr-1}
We will construct a non-zero infinitely divisible element ${\bf e}\in\Upsilon$
which does not contain any atom. In order to do this, we apply Template 1
with specific recursive predicate $B(i,\sigma)$.
We will use the numbering of all pairs $\langle i,x\rangle$, where $i$ is a number and
$x$ is a finite sequence.\footnote{Recall that we identify finite sequences
with positive integer numbers.} The inverse functions also exist:
$[\langle i,x\rangle]_1=i$ is a task number, and the sequence
$[\langle i,x\rangle]_2=x$ is a candidate for processing.
Let $p(n)$ be a computable function such that for any pair $\langle i,x\rangle$,
$p(n)=\langle i,x\rangle$ for infinitely many $n$.
We say that a sequence $z$ of length $n$ is $i$-discarded by an edge $\sigma\in G(i)$
at step $n$, where $i=[p(n)]_1$, if
$l(z)=l(\sigma_2)=n$ and $\tilde F_i(\sigma_2)\subseteq z$. Let $D_n(\sigma)$ be the set
of sequences of length $n$ which are $i$-discarded by the extra edge $\sigma$.
We slightly modify Case 2 of Template 1 to avoid collision between new extra edges
drawn at any step $n$ and the sequences discarded at step $n$. Now, at any step $n$,
at most one extra edge $\sigma$ will be drawn in $G$, where $\sigma_1\in C_n(i)$.
Other elements of this set will be processed on later steps one by one. This edge $\sigma$
defines the set $D_n(\sigma)$ of $i$-discarded sequences of length $n$ such that
$D_n(\sigma)\cap\{z:l(z)=n, \sigma_1\subseteq z\}=\emptyset$.
We use the same sequence of all uniformly computable operators $\{\tilde F_i\}$
as in Section~\ref{app-1}.
Define the recursive predicate
\begin{eqnarray}
B(i,\sigma)\Longleftrightarrow \tilde F_i(\sigma_2)\not\subseteq\sigma_2\&
\sum\limits_{z:z\in D_n(\sigma)}R(z)\le 2^{-(\sigma_1+3)},
\label{rel-2}
\end{eqnarray}
where $R$ denotes the frame of the $q$-flow constructed in $n-1$ steps
and $\tilde F_i$ is modified by (\ref{modify-operator-1}).
{\it Modification of Case 2.}
{\it Case 2.} $w(i,n)<n$ and $C_n(i)\not=\emptyset$, where $i=[p(n)]_1$,
$C_n(i)$ is the set of all $x$ that require processing, i.e., such that
$p(l(x))=i$, $w(i,n)\le l(x)<n$, $s(x)>0$, $\beta(x,n)$ is defined
and there is no extra edge in $G^{n-1}$ outgoing from $x$
(processing step of the task $i$).
If $x=[p(n)]_2\in C_n(i)$ then define
\begin{eqnarray*}
G^n=G^{n-1}\cup\{(x,\beta(x,n))\}
\end{eqnarray*}
and $q((x,\beta(x,n)))=s(x)$. If $s(x)<1$ then define $s(\beta(x,n))=0$ and
$s(y)=s(x)/(1-s(x))$ for all other $y$ of length $n$ such that
$x\subset y$. If $s(x)=1$ then define $s(y)=0$ for these $y$.
For any sequence $z$ of length $n$, which is $i$-discarded by the extra edge
$\sigma=(x,\beta(x,n))$, define $s(z)=1$.
Define $s(x)=0$ for all other $x$ of length $n$ and define
$q(\sigma)=\frac{1}{2}(1-s(\sigma_1))$
for all edges $\sigma$ of unit length such that $l(\sigma_1)=n$.
This modification does not change the basic properties of Template 1.
Let $s$ be the flow delay function for the network $q$ and $P$
denotes the $q$-flow. The frame $R$ is defined using equalities
(\ref{net-base-1})--(\ref{net-base-2}).
Similarly to how it was done in Section~\ref{nucl-2-pr},
one can prove that $P$ is a continuous lower semicomputable semimeasure.
We modify the proof of Lemma~\ref{nontriv-1a} for the case
where some sequences are discarded.
\begin{lemma} \label{nontriv-1b-2}
$\bar P(\Omega)>0$.
\end{lemma}
{\it Proof.} Let us estimate from below the value of $\bar P(\Omega)$.
Consider
$$
S_n=\sum\limits_{u:l(u)=n}R(u)-
\sum\limits_{\sigma:\sigma\in G,l(\sigma_2)=n}q(\sigma)R(\sigma_1).
$$
From the definition of the frame, we have
\begin{eqnarray}
\sum\limits_{u:l(u)=n+1}R(u)=\sum\limits_{u:l(u)=n}(1-s(u))R(u)+
\label{RR-1b}
\\
\sum\limits_{\sigma:\sigma\in G,l(\sigma_2)=n+1}q(\sigma)R(\sigma_1).
\label{RR-2b}
\end{eqnarray}
In case $w(p(n),n)=n$
$$
\sum\limits_{u:l(u)=n}s(u)R(u)\le 1/\rho(n)=1/(n+3)^2.
$$
Combining this inequality with (\ref{RR-1b})--(\ref{RR-2b}), where
the sum (\ref{RR-2b}) is equal to 0, we obtain
$S_{n+1}\ge S_n-1/(n+3)^2$.
Let $w(p(n),n)<n$ and $\sigma\in G$ such that $\sigma_1=[p(n)]_2$. Then
\begin{eqnarray}
\sum\limits_{u:l(u)=n}s(u)R(u)=
\nonumber
\\
\sum\limits_{u:l(u)=n,u\not\in D_n(\sigma)}s(u)R(u)+
\sum\limits_{u:l(u)=n,u\in D_n(\sigma)}R(u).
\label{represent-1}
\end{eqnarray}
The first sum in (\ref{represent-1}) is bounded similarly to (\ref{chain-1})
in the proof of Lemma~\ref{nontriv-1a}:
\begin{eqnarray}
\sum\limits_{u:l(u)=n,u\not\in D_n(\sigma)}s(u)R(u)\le
q(\sigma)R(\sigma_1).
\label{chain-1a}
\end{eqnarray}
The second sum is bounded as
\begin{eqnarray}
\sum\limits_{u:l(u)=n,u\in D_n(\sigma)}R(u)\le 2^{-(\sigma_1+3)}.
\label{rel-2a}
\end{eqnarray}
The bounds (\ref{chain-1a}) and (\ref{rel-2a})
imply the lower bound
$$
S_n\ge 1-\sum\limits_{i=1}^{\infty}1/(i+3)^2-
\sum\limits_{\sigma}2^{-(\sigma_1+3)}\ge\frac{1}{2}
$$
for all $n$. Since $P\ge R$, we have
$$
\bar P(\Omega)=\inf\limits_{n}\sum\limits_{l(u)=n} P(u)\ge
\inf\limits_n S_n\ge\frac{1}{2}.
$$
The lemma is proved. $\Box$
\begin{lemma}\label{not-rduc-1}
Any two different infinite sequences $\omega$ and $\alpha$ from
the set $E_P$ are not Turing reducible to each other.
\end{lemma}
{\it Proof.} Assume that $\alpha=F_i(\omega)$ for some $i$.
Since $\omega\not=\alpha$, $\tilde F_i(\omega^n)\not\subseteq\omega^n$ for all sufficiently
large $n$. Then by Lemma~\ref{exten-1}, for all sufficiently large $n>w(i)$
such that $[p(n)]_1=i$ an edge $\sigma$ exists such that $l(\sigma_2)=n$,
$\sigma_2\subset\omega$ and $B(i,\sigma)$ is satisfied, in particular,
$\sigma_1\in C_n(i)$. By construction $\sigma_1=[p(n)]_2$ for some of these $n$.
Then the sequence $\alpha^n$ will be $i$-discarded for some $n>w(i)$ and $s(\alpha^n)=1$
will be defined. No extra edge $\sigma'$ such that
$\sigma'_1\subset\alpha^n\subset\sigma'_2$ can be drawn at any step
$n'=l(\sigma'_2)>n$, and therefore, no extra portion of the flow
can go through the vertex $\alpha^n$. Indeed, any task $j<i$ cannot draw
extra edges on steps $n'>n$ since $w(i,n)=w(i)$. At the step $n$ the sessions of all tasks
$j>i$ are terminated, and on the later steps $n'>n$ the tasks $j>i$ can draw only extra
edges $\sigma'$ such that $l(\sigma'_1)>n$.
Hence, $q((\alpha^n,\alpha^{n+1}))=0$, and so, $\alpha\not\in E_P$.
This contradiction proves that $\alpha$ is not Turing reducible to $\omega$.
$\Box$
\begin{theorem}\label{infini-div-1}
There exists a non-zero infinitely divisible element ${\bf e}$ such that
$$
{\bf e}\cap({\bf r}\cup{\bf c})={\bf 0}.
$$
\end{theorem}
{\it Proof.}
Define ${\bf f}=[\bar E_P]$. Let ${\bf e}=i_P({\bf f})=[E]$.
Assume that ${\bf x}\subseteq{\bf e}$ and ${\bf x}\not={\bf 0}$. Take an
$X\in{\bf x}$ and put $X'=X\cap E$. Clearly, $\bar P(X')>0$.
Since $X'\subseteq E$, by Lemma~\ref{Rad-Nic} $\bar M(X')>0$.
Since $X'\subseteq E_P$, by Lemma~\ref{not-rduc-1}
no two sequences from $X'$ are Turing reducible to each other.
Let us represent $X'=X_1\cup X_2$, where $X_1\cap X_2=\emptyset$, $\bar P(X_1)>0$,
and $\bar P(X_2)>0$. Let ${\bf x}_1=[\bar X_1]$ and ${\bf x}_2=[\bar X_2]$.
Then ${\bf x}={\bf x}_1\cup {\bf x}_2$, ${\bf x}_1\not={\bf 0}$,
${\bf x}_2\not={\bf 0}$ and ${\bf x}_1\cap {\bf x}_2={\bf 0}$.
Hence, $\bf x$ cannot be an atom. Theorem is proved.
$\Box$
\subsection{Template~2}
In this section we present Template~2 which is a modification of Template~1
and which will be used to construct atoms of the algebra $\Upsilon$.
The modification given below does not affect the above properties of Template~1.
Let $p(n)$ and $\tilde p(n)$ be computable functions such that
for each pair of positive integer numbers $(i,k)$,
$p(n)=i$ and $\tilde p(n)=k$ for infinitely many $n$.
Any extra edge $\sigma$ corresponds to a task $i$, where
$p(l(\sigma_1))=p(l(\sigma_2))=i$. It also corresponds to some
subtask $(i,k)$, where $\tilde p(l(\sigma_1))=\tilde p(l(\sigma_2))=k$.
By induction on $n$, define a sequence of elementary networks
and the sets of extra edges $G^n$.
Define $s(\lambda)=0$ and $G^0=\emptyset$. The induction hypothesis
is the same as for step $n$ of Template 1.
Consider an auxiliary function $w(i,n)$. The value of $w(i,n)$ is equal to
the least $m$ such that $m\le n$, $p(m)=i$ and $m>l(\sigma_2)$ for each extra edge
$\sigma$, which was drawn by a task $j<i$. In particular, for each $n'$
such that $w(i,n)\le n'\le n$ no task $j<i$ was processed. The formal
definition was given by (\ref{win-1}).
We refer to $w(i,n)$ as to the initial step of a session to process the task $i$.
The equality $w(i,n)=w(i,n-1)$ can be violated (i.e., $w(i,n)\not =w(i,n-1)$)
only if some task $j<i$ has drawn an extra edge above the
level $w(i,n-1)$, thus violating the condition (\ref{win-1})
in the definition of $w(i,n)$. Lemma~\ref{gen-tech-1} states that this can
only happen at a finite number of construction steps.
Define a family of equivalence relations between finite sequences depending
on the parameter $w$:
$$
x\sim_w y\Longleftrightarrow l(x)=l(y)\&\forall i\,(w\le i\le l(x)\Longrightarrow x_i=y_i).
$$
Note that if $x\sim_w y$ then $x\sim_{w'} y$ for each $w'\ge w$.
For any edges $\sigma$ and $\sigma'$, define $\sigma\sim_w\sigma'$
if and only if $\sigma_1\sim_w\sigma'_1$ and $\sigma_2\sim_w\sigma'_2$.
Sometimes, we write $x\sim z$ instead of $x\sim_w z$, where $w=w(p(l(x)),l(x))$.
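The relation $\sim_w$ is straightforward to implement. A minimal sketch (an illustration; 1-based positions and all names are ours):

```python
def sim_w(x: str, y: str, w: int) -> bool:
    """x ~_w y: equal lengths and equal bits at every position i with w <= i <= l(x).
    Positions are 1-based, matching the paper's indexing."""
    return len(x) == len(y) and all(x[i - 1] == y[i - 1] for i in range(w, len(x) + 1))

# The two strings differ only in position 1, so they are ~_2-equivalent but not
# ~_1-equivalent; monotonicity x ~_w y => x ~_{w'} y for w' >= w is also visible.
assert sim_w("0110", "1110", 2) and not sim_w("0110", "1110", 1)
assert all(sim_w("0110", "1110", w2) for w2 in range(2, 5))
```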
During the construction process, we will execute the task $i$ by executing
the subtasks $(i,k)$ in the order of their priority. If an edge is drawn
by the subtask $(i,k)$, then we say that it is also drawn by the task $i$.
At any step $n$, let $z_{i,1,n},\dots, z_{i,2^{w(i,n)},n}$ be all finite binary sequences
$z$ of length $w(i,n)$ written out in the lexicographic order. We refer
to the sets $T_{z_{i,t,n}}=\{y:z_{i,t,n}\subseteq y\}$ as to subtrees of the tree $\Xi$
with the roots $z_{i,1,n},\dots, z_{i,2^{w(i,n)},n}$.
Given a set of extra edges $G$ let
$$
G(i)=\{\sigma:\sigma\in G\&p(l(\sigma_1))=p(l(\sigma_2))=i\}
$$
be the set of all extra edges drawn by the task $i$ and
$
G(i,k)=\{\sigma\in G(i):\tilde p(l(\sigma_1))=\tilde p(l(\sigma_2))=k\}
$
be the subset of all extra edges drawn by the subtask $(i,k)$.
By definition, $w(i,k,n)$ is equal to the smallest $m$
such that $m\le n$ and the following conditions are satisfied. First, $p(m)=i$ and
$m>l(\sigma_2)$ for each extra edge $\sigma$ which was drawn by
the task $j<i$. Second, $m>l(\sigma_2)$ for each extra edge $\sigma$ that was
drawn by some subtask $(i,t)$, where $t<k$.
This means that at no step $n'$ with $w(i,k,n)\le n'\le n$
did a subtask $(j,t)$, where $j<i$, or $j=i$ and $t<k$, draw a new extra edge.
Let us give the exact definition. For $k\le 2^{w(i,n)}$ define
\begin{eqnarray}
w(i,k,n)=\min\{m: m\le n\&p(m)=i\&\tilde p(m)=k\&
\nonumber
\\
\forall j\forall\sigma (j<i\&\sigma\in G^{n-1}(j)\rightarrow m>l(\sigma_2))\&
\\
\nonumber
\forall t\forall\sigma (1\le t<k\&\sigma\in G^{n-1}(i,t)\rightarrow m>l(\sigma_2))\}.
\label{wikn-2}
\end{eqnarray}
Assume that $\min\emptyset=\infty$.
We say that $w(i,k,n)$ is the initial step of the sub-session for executing
the subtask $(i,k)$. By definition, $w(i,k,n)\ge w(i,n)$ for each $i$ and $k$.
Any session for the execution of any task $i$ consists of sub-sessions $(i,k)$,
which are executed in order of their priority.
The equality $w(i,k,n)=w(i,k,n-1)$ can be violated either because
$w(i,n)\not =w(i,n-1)$, or, when $w(i,n)=w(i,n-1)$, because some subtask
$(i,t)$ with $t<k$ draws an extra edge above the level
$w(i,k,n-1)$ and thus violates the condition in the definition of
$w(i,k,n-1)$. It will be shown below that such a violation occurs at
only finitely many steps $n$ such that $w(i,n)=w(i,n-1)$.
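The definition (\ref{wikn-2}) can be mirrored directly in code. In the sketch below (an illustration; all names are ours) the set $G^{n-1}$ is modeled as a list of triples $(j,t,l(\sigma_2))$, and $\min\emptyset$ is represented by infinity:

```python
def w_ikn(i, k, n, p, p_tilde, edges):
    """w(i,k,n): the least m <= n with p(m)=i and p~(m)=k lying above every edge
    drawn by a task j < i or by a subtask (i,t) with t < k.
    `edges` models G^{n-1} as triples (task, subtask, l(sigma_2))."""
    for m in range(1, n + 1):
        if p(m) == i and p_tilde(m) == k and all(
                m > lvl for (j, t, lvl) in edges if j < i or (j == i and t < k)):
            return m
    return float("inf")  # min of the empty set

# Toy run: task 1 drew an edge ending at level 5, task 2 occupies even steps,
# so the sub-session (2, 1) can start only at the first even step above 5.
p = lambda m: 2 if m % 2 == 0 else 1
assert w_ikn(2, 1, 8, p, lambda m: 1, [(1, 1, 5)]) == 6
```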
We define a network, depending on a recursive predicate
$B(i,\sigma)$, where $i$ is a positive integer number (task number),
$\sigma$ is an extra edge.
The goal of the task $i$ is the same as for Template 1 -- to draw extra edges
$\sigma$ such that each infinite sequence $\omega$ from the support set
of the corresponding network flow passes through one of these edges.
Each such edge $\sigma$ should satisfy the predicate $B(i,\sigma)$.
There is an additional requirement. In order for the corresponding flow to define
an atom of the algebra $\Upsilon $, in the modified construction the value of
the flow through any two edges $\sigma$ and $\sigma '$ such that $\sigma'\sim\sigma$
should be the same. Therefore, when the edge $\sigma$ is drawn by the task $i$,
all the edges $\sigma'\sim\sigma$ and $\sigma'\not=\sigma$ become dependent on it.
All assignments on these edges should mimic the assignments on $\sigma$.
In this case, a collision may occur if we try to simultaneously make assignments
of the task $i$ by Case 2 for another edge, which is located in a different subtree.
In order to avoid a collision when setting the edges of the task $i$, at step $n$
we split the process of executing the task $i$ into subtasks $(i,k)$, where
$k=1,\dots,2^{w(i,n)}$.
At any step $n$ of the construction we execute the subtask $(i,k)$ in the subtree
$T_{z_{i,k,n}}$ (we call it the leading subtree), where $k=\tilde p(n)$,
and duplicate all actions in all other subtrees $T_{z_{i,t,n}}$ for $t\not=k$
(we call them dependent subtrees). We will perform the task $i$ for the other
subtrees in subsequent steps in order of their priority, still repeating all
the assignments in the remaining subtrees.
By the construction below, if the subtask $(i,k)$ is executed at step $n$,
then each extra edge $\sigma$ drawn by any subtask $(i,t)$ with $t<k$
satisfies $l(\sigma_2)<w(i,k,n)$ and, therefore, the equality
$w(i,t,n)=w(i,t,n-1)$ is not violated for $t<k$.
If a new edge is drawn in this subtree, then
all subtasks $(i,t)$ corresponding to subtrees with lower priority $t>k$ will
be terminated and the equalities $w(i,t,n)=w(i,t,n-1)$ will be violated.
These subtasks should be reinstalled at later steps.
Each extra edge $\sigma$ will refer to a task $i$ such that
$p(l(\sigma_1))=p(l(\sigma_2))=i$ and to the subtask $(i,k)$,
where $\tilde p(l(\sigma_1))=\tilde p(l(\sigma_2))=k$.
The predicate $B$ defines the function of setting an extra edge
$$
\beta(x,n)=\min\{y:l(y)=n, p(l(y))=p(l(x)),B(p(l(x)),(x,y))\}.
$$
We assume that $\min\emptyset$ is undefined.
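Over the binary alphabet, $\beta$ is computable by brute force over strings of length $n$ in lexicographic order. A minimal sketch (an illustration; the predicate $B$ is passed in as a parameter, and all names are ours):

```python
from itertools import product

def beta(x, n, p, B):
    """beta(x, n) = min{ y : l(y)=n, p(l(y))=p(l(x)), B(p(l(x)), (x, y)) },
    the minimum taken in lexicographic order; None plays the role of 'undefined'."""
    if p(n) != p(len(x)):       # the condition p(l(y)) = p(l(x)) does not depend on y
        return None
    i = p(len(x))
    for bits in product("01", repeat=n):
        y = "".join(bits)
        if B(i, (x, y)):
            return y
    return None                 # min of the empty set is undefined

# Toy predicate: B holds when y extends x; the least such y pads x with zeros.
assert beta("01", 4, lambda m: 1, lambda i, s: s[1].startswith(s[0])) == "0100"
```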
Define $s(\lambda)=0$ and $G^0=\emptyset$.
At any step $n$, let $i=p(n)$ and $k=\tilde p(n)$.
Let $n\ge 1$ and $G^{n-1}$ and $q(\sigma)$ be defined for every
$\sigma\in G^{n-1}$, $s(x)$ be defined for every $x$ such that $l(x)\le n-1$,
and $s(x)=s(x')$ for every $x$ and $x'$ of length $<n$ such that $x\sim_{w(i,n)}x'$.
Let also $q(\sigma)=\frac{1}{2}(1-s(\sigma_1))$ for each $\sigma$ of unit length such that
$l(\sigma_2)=n$.
If $k>2^{w(i,n)}$ then define $s(x)=0$ for each $x$ of length $n$ and $G^n=G^{n-1}$
and go to the next step. If $k\le 2^{w(i,n)}$ then go to the definitions below.
The construction of any step $n$ splits into three cases:
{\it Case 1}. $w(i,k,n)=n$ (starting a new sub-session for executing the subtask
$(i,k)$, where $i=p(n)$ and $k=\tilde p(n)$: first installing or reinstalling
of the subtask $(i,k)$).
Define $s(y)=1/\rho(n)$ for all $y$ such that $l(y)=n$ and set $G^n=G^{n-1}$,
where $\rho(n)=(n+3)^2$.
{\it Case 2.} $w(i,k,n)<n$ and $C_n(i,k)\not=\emptyset$, where
$C_n(i,k)$ denotes the set of all sequences $x$
which should be processed by subtask $(i,k)$, i.e.,
such that $z_{i,k,n}\subseteq x$, $p(l(x))=i$ and $\tilde p(l(x))=k$,
$w(i,k,n)\le l(x)<n$, $s(x)>0$,
$\beta(x,n)$ is defined, and there is no extra edge from $G^{n-1}$ outgoing from $x$
(the subtask $(i,k)$ processing step).
In this case, we make identical assignments in all subtrees $T_{z_{i,t,n}}$,
$1\le t\le 2^{w(i,n)}$, which repeat the assignments in the leading subtree
$T_{z_{i,k,n}}$ of the subtask $(i,k)$:
1) For any $x\in C_n(i,k)$ define $s(z)=0$ for each $z$ such that
$z\sim_{w(i,n)}\beta(x,n)$ and $q(\sigma)=s(x)$ for every $\sigma$ such that
$\sigma\sim_{w(i,n)}(x,\beta(x,n))$.
We add all these edges $\sigma$ to $G^{n-1}$ and get $G^n$.
2) If $s(x)<1$ then for any $y$ such that $x\subset y$, $l(y)=n$ and
$y\not=\beta(x,n)$ define $s(z)=s(x)/(1-s(x))$ for each $z$ such that
$z\sim_{w(i,n)} y$.
3) If $s(x)=1$ then define $s(z)=0$ for all these $z$.
4) Define $s(z)=0$ for all other $z$ of length $n$.
{\it Case 3.} Cases 1 and 2 do not occur. In this case define $s(x)=0$ for each
$x$ of length $n$, and define $G^n=G^{n-1}$.
After all, define
$
q(\sigma)=\frac{1}{2}(1-s(\sigma_1))
$
for each $\sigma$ of unit length such that $l(\sigma_1)=n$.
This concludes the description of the induction step.
Define $G=\cup_n G^n$ and $G(i)=\cup_n G^n(i)$, $G(i,k)=\cup_n G^n(i,k)$
for any $i$ and $k$.
Let $w(i)=\lim_{n\to\infty}w(i,n)$ and $w(i,k)=\lim_{n\to\infty}w(i,k,n)$.
The analogs of the Lemmas~\ref{flow-delay-values-1}--\ref{gen-tech-1}
also take place for the modified construction.
In particular, any task $i$ is processed only at a finite number
of steps and the set $G(i)$ is finite and $w(i)<\infty$ for all $i$.
The next lemma states that any subtask $(i,k)$ is processed only at
a finite number of steps.
\begin{lemma} \label{gen-tech-1a}
The set $G(i,k)$ is finite and $w(i,k)<\infty$ for all $i$ and $k\le 2^{w(i)}$.
\end{lemma}
{\it Proof.} By Lemma~\ref{gen-tech-1} $w(i)<\infty$. Then
$w(i,n)=w(i)$ for all $n\ge n'$ for some $n'$.
Further, following the proof of Lemma~\ref{gen-tech-1}, where the function $w(i,n)$
is replaced with $w(i,k,n)$, we show that
$w(i,k,n)\not =w(i,k,n-1)$ only for a finite number of different $n\ge n'$.
$\Box$
The following duplication property takes place.
\begin{lemma}\label{dup-1}
For any $i$, $q(\sigma)=q(\delta)$ for each $\sigma,\delta\in G$
such that $\sigma\sim_{w(i)}\delta$.
\end{lemma}
{\it Proof.} Since $w(i)=\lim_{n\to\infty}w(i,n)$,
only tasks $j\ge i$ can draw the extra edges $\sigma\in G$ on steps $n\ge w(i)$,
where $l(\sigma_1)\ge w(i)$.
Assume that the extra edges $\sigma,\delta$ are drawn by some subtask $(j,k)$,
where $j\ge i$.
By definition a single extra edge $\sigma'\in G$ exists in the leading subtree
such that $z_{j,k,n}\subset\sigma'$, $\sigma\sim_{w(j,n)}\sigma'$
and $q(\sigma)=q(\sigma')$, where $n=l(\sigma_2)$, $j=p(n)$ and $k=\tilde p(n)$.
Similarly, a single extra edge $\delta'\in G$ exists such that
$\delta\sim_{w(j,n)}\delta'$ and $q(\delta)=q(\delta')$.
Since $\sigma\sim_{w(i)}\delta$ and $w(j,n)\ge w(i)$, we have
$\sigma\sim_{w(j,n)}\delta$. Then $\sigma'=\delta'$ and
by the construction $q(\sigma)=q(\delta)$.
$\Box$
Let $R$ be the frame of the $q$-flow $P$. Clearly, the semimeasure $P$
is lower semicomputable.
By Lemma~\ref{contin-1} the semimeasure $P$ is continuous, since
for any $i$ the number $w(i)$ separates $G$.
The support set of a semimeasure $P$ is equal to
$$
E_{P}=\{\omega\in\Omega:\forall n(P(\omega^n)\not=0)\}.
$$
The following lemma will enable us to apply Kolmogorov 0 or 1 law to the measure
$\bar P$, where $P$ is the $q$-flow. Although this measure is not normalized,
this does not lead to a loss
of generality, since the subsequent statements do not depend on the multiplicative factor.
For any $i$, let $f_i(\omega)=\omega_i$ be a random variable in the probability space
$(\Omega,\bar P)$.
\begin{lemma} \label{zero-one-law}
For any $n>w(i)$, the random variable $f_n$ is independent of the random
variables $\{f_j:j\le w(i)\}$.
\end{lemma}
{\it Proof.} Since $w(i)=\lim_{n\to\infty}w(i,n)$,
only tasks $i'\ge i$ can draw the extra edges $\sigma\in G$ on steps $n\ge w(i)$,
where $l(\sigma_1)\ge w(i)$. From this it follows that
for $l(v)>w(i)$, the formula (\ref{net-base-2}) can be rewritten as
\begin{eqnarray}
R(v)=\sum\limits_{l(\sigma_1)\ge w(i),\sigma_2=v}q(\sigma)R(\sigma_1).
\label{short-1}
\end{eqnarray}
Assume that $v\sim_{w(i)} v'$ and $l(v)>w(i)$. These sequences belong to
some subtrees with roots of length $w(i)$.
By Lemma~\ref{dup-1} for any $\sigma\in G$ such that
$l(\sigma_1)\ge w(i)$ and $\sigma_2=v$ there exists $\sigma'\in G$
which belongs to the same subtree as $v'$ and
such that $\sigma'\sim_{w(i)}\sigma$ and $q(\sigma')=q(\sigma)$.
Clearly, $\sigma'_2=v'$. Using induction on recursive definition (\ref{short-1}),
we obtain
\begin{equation}\label{rela-1}
R(v)/R(v^{w(i)})=R(v')/R(v'^{w(i)}).
\end{equation}
Since there are no extra edges $\sigma$ such that $\sigma_1\subset z^{w(i)}\subset\sigma_2$,
the equality $P(z^{w(i)})=R(z^{w(i)})$ takes place for any $z$.
Now, using the representation (\ref{net-base-1a})--(\ref{net-base-2a}), we will
prove the similar equality for $P$.
Let $l(y)>w(i)$ and $y\sim_{w(i)} z$. Then $y\sim_n z$ for each $n\ge w(i)$.
By Lemma~\ref{dup-1} for each extra edge $\sigma\in G$ such that
$\sigma_1\subset\sigma_2\subseteq y$,
an extra edge $\sigma'\sim_{w(i)}\sigma$ exists such that
$\sigma'_1\subset\sigma'_2\subseteq z$ and $q(\sigma)=q(\sigma')$.
From these and by (\ref{rela-1}), we obtain
\begin{equation}\label{relation-2}
P(y)/P(y^{w(i)})=P(z)/P(z^{w(i)})
\end{equation}
for all $y$ and $z$ such that $y\sim_{w(i)}z$. From this we obtain
$$
\bar P(y)/\bar P(y^{w(i)})=\bar P(z)/\bar P(z^{w(i)})
$$
for all $y$ and $z$ such that $y\sim_{w(i)}z$.
Therefore, the conditional probability
$$
\bar P(y|x)=\frac{\bar P(xy)}{\bar P(x)}
$$
does not depend on the choice of the initial fragment $x$ of the sequence $y$
for $l(x)=w(i)$ and $l(y)>w(i)$.
In particular, the random variables $f_j(\omega)=\omega_j$ are independent of
the random variables $f_s(\omega)=\omega_s$ for $s\le w(i)<j$.
$\Box$
We define an atom consisting of nonstochastic Turing degrees.
Let $q$ be the network defined using the Template 2,
$G$ be the set of all extra edges, and $s$ be the corresponding flow delay function.
The following lemma is a corollary of Lemma~\ref{zero-one-law}.
\begin{lemma} \label{atom-el-1}
For any $A\subseteq\Omega$ containing, together with
each sequence, all sequences differing from it by a finite number of bits,
$\bar P(A)=0$ or $\bar P(A)=\bar P(\Omega)$.
\end{lemma}
{\it Proof.}\footnote{For a proof of this lemma specific to the Cantor space $(\Omega,L)$,
see Downey and Hirschfeldt~\cite{DH2010}, Theorem 1.2.4. See also
Holzl and Porter~\cite{HoP2021}, Theorem 4.9.}
To apply the Kolmogorov 0 or 1 law, consider the independent random
variables $\tilde f_1,\tilde f_2,\dots$, where
$$
\tilde f_i(\omega)=f_{w(i)+1}(\omega)\dots f_{w(i+1)}(\omega)=
\omega_{w(i)+1}\dots\omega_{w(i+1)}
$$
and $f_i(\omega)=\omega_i$. Clearly, the random variables
$\tilde f_1(\omega),\tilde f_2(\omega),\dots$ generate the same $\sigma$-algebra
as the random variables $f_1(\omega),f_2(\omega),\dots$.
The set $A$ satisfying the condition of the lemma lies in the
$\sigma$-algebra generated by the set of independent random variables
$\tilde f_{k},\tilde f_{k+1},\dots$ for each $k$,
and therefore, it lies in the residual $\sigma$-algebra of the entire sequence
$\tilde f_1,\tilde f_2,\dots$. By Kolmogorov 0 or 1 law
$\bar P(A)=0$ or $\bar P(A)=\bar P(\Omega)$.
$\Box$
\begin{corollary}\label{atom-el-2}
There exists an atom of $\Upsilon$.
\end{corollary}
{\it Proof.}
Let ${\bf p}=[\bar E_P]$. By Lemma~\ref{nontriv-1a} $P({\bf p})>0$,
then ${\bf p}\not ={\bf 0}$. Define
${\bf d}=i_P({\bf p})$. By definition $\bar P(i_P({\bf p}))=\bar P({\bf d})$.
Then ${\bf d}\not ={\bf 0}$.
Assume that ${\bf d}={\bf a}\cup{\bf b}$, where ${\bf a}\not ={\bf 0}$,
${\bf b}\not ={\bf 0}$ and ${\bf a}\cap\bf b={\bf 0}$.
By Corollary~\ref{Rad-Nic} $\bar P({\bf a})>0$ and $\bar P({\bf b})>0$,
which contradicts Lemma~\ref{atom-el-1}.
This contradiction proves that ${\bf d}$ is an atom.
$\Box$
\subsection{Atom of nonstochastic Turing degrees}\label{nucl-1-pr}
Corollary~\ref{atom-el-2} shows that the network flow defined by Template 2
generates an atom regardless of which predicate $B(i,\sigma)$ is used.
Specifying this predicate, we obtain the following theorem.
\begin{theorem}\label{single-atom-1}
There exists an atom ${\bf d}$ such that ${\bf d}\cap({\bf c}\cup{\bf r})={\bf 0}$.
\end{theorem}
{\it Proof.}
Let us specify the predicate:
\begin{eqnarray*}
B(i,\sigma)\Longleftrightarrow l(\tilde F_i(\sigma_2))>\sigma_1+i,
\end{eqnarray*}
where the finite sequence $\sigma_1$ (the starting point of the edge $\sigma$)
is identified with its order number in the natural numbering of the set $\Xi$.
The following statements are similar to Theorem~\ref{nontriv-1b}
and Corollary~\ref{nontriv-1b-cor-1} and their proofs are the same:
1) For any infinite sequence $\omega$ from the support set of the semimeasure $P$ and
for any computable operator $F$, if $F(\omega)$ is infinite, then the sequence
$F(\omega)$ is not Martin-L\"of random with respect to the uniform measure.
2) $\bar P$-almost every infinite sequence $\omega$ is not Martin-L\"of random
with respect to any computable measure.
From these statements ${\bf d}\cap({\bf c}\cup{\bf r})={\bf 0}$ follows.
$\Box$
\subsection{Decomposition into countable sequence of atoms}\label{atoms-count-1}
We will construct an infinite sequence of lower semicomputable semimeasures
$P_1, P_2,\dots$ which will define a sequence of pairwise different atoms
$\bf d_1, \bf d_2,\dots$.
We use the same sequence of all computable operators $\{F_i\}$ and their
modified versions $\{\tilde F_i\}$ as in Section~\ref{app-1}.
Let $\langle x_1,x_2,x_3\rangle$ be the number of a triple of natural numbers,
for some fixed computable one-to-one correspondence between all triples
$\langle x_1,x_2,x_3\rangle$ such that $x_1\not=x_2$, and all positive integer numbers.
The inverse functions $[\langle x_1,x_2,x_3\rangle]_t=x_t$, $t=1,2,3$, are also given.
The order number $i=\langle x_1,x_2,x_3\rangle$ of each such triple will be the code
of the task $i$, where $x_1$ is called the task base, $x_2$ is called the task target,
and $x_3$ is the number of a computable operator.
By the main property of the numbering of triples ($x_1\not=x_2$), no number $m$
can be both the base and the target of the same task.
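One concrete way to realize such a numbering is to enumerate the triples by increasing sum of components. The sketch below (an illustration only; the paper fixes the correspondence just up to computability, and the ordering here is our choice) shows that the numbering is one-to-one and respects the constraint $x_1\not=x_2$:

```python
from itertools import count, islice

def triples():
    """Enumerate all triples (x1, x2, x3) of positive integers with x1 != x2,
    ordered by x1 + x2 + x3; the position in this enumeration is the code."""
    for s in count(3):
        for x1 in range(1, s - 1):
            for x2 in range(1, s - x1):
                x3 = s - x1 - x2
                if x3 >= 1 and x1 != x2:
                    yield (x1, x2, x3)

first = list(islice(triples(), 1000))
code = {t: i + 1 for i, t in enumerate(first)}    # <x1,x2,x3> -> order number
assert len(code) == 1000                          # the numbering is one-to-one
assert all(x1 != x2 for (x1, x2, x3) in first)    # base and target never coincide
```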
Let us define the networks $q_m$ for $m=1,2,\dots$.
We will execute tasks that are common to all networks $q_m$.
At any step $n$ the task $i=p(n)$ will execute Template~2 for the network $q_{[i]_1}$,
which is the base of the task $i$ and discard some vertices
of the network $q_{[i]_2}$, which is the target of the task $i$.
All other networks remain unchanged at step $n$.
One and the same network can serve as a base for some task at some steps and
as a target at other steps, but not at the same time.
For any $m$, let $G^n_m(i)$ be the set of all extra edges drawn by a task $i$
for a network $q_m$ at steps $\le n$, $G^n_m=\cup_i G^n_m(i)$.
Since at any step of the construction only a finite number of extra edges
can be drawn, for any $n$, $G^n_m=\emptyset$ for almost all $m$.
Definition (\ref{win-1}) of the function $w(i,n)$ is changed to
\begin{eqnarray*}
w(i,n)=\min\{n': n'\le n\&p(n')=i\&
\nonumber
\\
\forall m\forall j\forall\sigma
(j<i\&\sigma\in G_m^{n-1}(j)\rightarrow n'>l(\sigma_2))\}.
\end{eqnarray*}
and the definition (\ref{wikn-2}) of the function $w(i,k,n)$ is changed to
\begin{eqnarray*}
w(i,k,n)=\min\{n': n'\le n\&p(n')=i\&\tilde p(n')=k\&
\nonumber
\\
\forall m\forall j\forall\sigma (j<i\&\sigma\in G_m^{n-1}(j)
\rightarrow n'>l(\sigma_2))\&
\\
\nonumber
\forall m\forall t\forall\sigma (1\le t<k\&\sigma\in G_m^{n-1}(i,t)
\rightarrow n'>l(\sigma_2))\}.
\end{eqnarray*}
We add a new element to the construction of Template~2.
We say that a finite sequence $x$ of length $n$ is $i$-discarded by a sequence
$y$ (or by an edge $\sigma\in G^n_{[i]_1}$ such that $\sigma_2=y$),
if $l(y)=l(x)$ and a finite sequence $u$ exists such that
$x\sim_{w(i,n)} u$ and $\tilde F_{[i]_3}(y)\subseteq u$.
We can now define the recursive predicate which is needed to specify Template~2.
\begin{equation}\label{main-rel-1}
B(i,\sigma)\Longleftrightarrow
\sum\limits_{z}R_{[i]_2}(z)\le 2^{-(\sigma_1+3)},
\end{equation}
where $R_{[i]_2}$ denotes the frame of the flow through the elementary
network $q_{[i]_2}$, and the sum is taken
over all $z$ of length $l(\sigma_2)$ which are $i$-discarded by the sequence $\sigma_2$.
Here, in the exponent, we identify the finite sequence $\sigma_1$ with its
order number.
Any task $i$ relates to the two networks: to its base $q_{[i]_1}$ and to its
target $q_{[i]_2}$.
The goal of the task $i=\langle x_1,x_2,x_3\rangle$ is to provide conditions under
which the operator $F_{x_3}$ could not transform any infinite sequence
from the support set of the semimeasure $P_{x_1}$ to a sequence from
the support set of the semimeasure $P_{x_2}$.
A competing requirement is that these semimeasures should be nontrivial, i.e.,
$\bar P_{x_1}(\Omega)>0$ and $\bar P_{x_2}(\Omega)>0$ should hold.
{\it Construction of the networks $q_m$.}
We define the networks $q_m$ for $m=1,2,\dots$ using mathematical induction on
steps $n=1,2,\dots$.
Define $s_m(\lambda)=0$ and $G_m^0=\emptyset$ for all $m$.
Let $n\ge 1$ and for every $m$, the sets $G_m^{n-1}$ and
the values $q_m(\sigma)$ be defined for all $\sigma\in G^{n-1}_m$,
$s_m(x)$ be defined for all $x$ such that $l(x)\le n-1$ and
$q_m(\sigma)=\frac{1}{2}(1-s_m(\sigma_1))$ for each $\sigma$ of unit length such that
$l(\sigma_2)=n$.
At any step $n$ of the induction, execute the task $i=p(n)$:
1) denote $m=[i]_1$ and execute step $n$ of Template 2 with the predicate
(\ref{main-rel-1}), where $s$, $q$ and $G^n$ are replaced with $s_m$, $q_m$ and $G^n_m$;
2) denote $m'=[i]_2$ and define $G^n_{m'}=G^{n-1}_{m'}$ and $s_{m'}(x)=1$ for each
$x$ of length $n$, which is $i$-discarded at step $n$ by some edge
$\sigma\in G^n_m(i)$ such that $l(\sigma_2)=n$
and define $s_{m'}(x)=0$ for all other $x$, $l(x)=n$;
3) for any $m\not=[i]_1$ and $m\not=[i]_2$ define $G^n_m=G^{n-1}_m$
and $s_m(x)=0$ for each $x$ such that $l(x)=n$;
4) After all, define
$
q_m(\sigma)=\frac{1}{2}(1-s_m(\sigma_1))
$
for each $m$ and for each $\sigma$ of unit length such that $l(\sigma_1)=n$.
This concludes the description of the induction step.
By Lemma~\ref{dup-1} $q_m(\sigma)=q_m(\sigma')$ for any
$\sigma,\sigma'\in G_m$ such that $\sigma\sim\sigma'$.
Let $P_m$ be the $q_m$-flow.
By Lemma~\ref{contin-1} the semimeasure $P_m$ is continuous for each $m$, since
the number $w(i)$ separates $G_m$ for each $i$.
The support set of any semimeasure $P_m$ is equal to
$$
E_{P_m}=\{\omega\in\Omega:\forall n(P_m(\omega^n)\not=0)\}.
$$
The following lemma is similar to Lemma~\ref{nontriv-1a} but its proof
has some new details.
\begin{lemma} \label{nontriv-1}
$\bar P_m({\bf 1})>0$ for each $m$.
\end{lemma}
{\it Proof.} Let us estimate from below the value of $\bar P_m(\Omega)$. Let
$R_m$ be the frame of the network $q_m$. Define
$$
S_{m,n}=\sum\limits_{u:l(u)=n}R_m(u)-
\sum\limits_{\sigma:\sigma\in G_m,l(\sigma_2)=n}q_m(\sigma)R_m(\sigma_1).
$$
By definition of the flow delay function,
\begin{eqnarray}
\sum\limits_{u:l(u)=n+1}R_m(u)=\sum\limits_{u:l(u)=n}(1-s_m(u))R_m(u)+
\label{RR-1a-1}
\\
\sum\limits_{\sigma:\sigma\in G_m,l(\sigma_2)=n+1}q_m(\sigma)R_m(\sigma_1).
\label{RR-2a-1}
\end{eqnarray}
Let $m$ be a base of the task $p(n)$ at step $n$, i.e., $m=[p(n)]_1$.
Let us first consider the case $w(p(n),n)<n$.
In this case the proof of this lemma coincides with the corresponding
part of the proof of Lemma~\ref{nontriv-1a},
where the delay function $s$ is replaced with the delay function $s_m$.
We omit this part of the proof and obtain $S_{m,n+1}\ge S_{m,n}$.
Consider the case $w(p(n),n)=n$. As in the proof of Lemma~\ref{nontriv-1a}
$\sum\limits_{u:l(u)=n}R_m(u)\le 1$ and
$$
\sum\limits_{u:l(u)=n}s_m(u)R_m(u)\le 1/\rho(n)=1/(n+3)^2.
$$
Combining this inequality with (\ref{RR-1a-1})--(\ref{RR-2a-1}), we obtain
$S_{m,n+1}\ge S_{m,n}-1/(n+3)^2$.
Let $m$ be a target of the task $p(n)$ on step $n$, i.e., $m=[p(n)]_2$. Then
\begin{eqnarray}
\sum\limits_{l(u)=n}s_m(u)R_m(u)=\sum\limits_{u\in D} R_m(u)\le
\sum\limits_{\sigma\in G_{[p(n)]_1},l(\sigma_2)=n}2^{-(\sigma_1+3)},
\end{eqnarray}
where $D$ is the set of all $u$ of length $n$, which are
$p(n)$-discarded by sequences $\sigma_2$, where $\sigma\in G_{[p(n)]_1}$
and $l(\sigma_2)=n$.
Recall that in the exponent, we identify the finite sequence $\sigma_1$ and its
order number. Therefore,
$$
S_{m,n+1}\ge S_{m,n}-\sum\limits_{\sigma\in G_{[p(n)]_1},l(\sigma_2)=n}2^{-(\sigma_1+3)}.
$$
If $m$ is neither the base nor the target of the task $p(n)$ at step $n$
(i.e. $[p(n)]_1\not=m$ and $[p(n)]_2\not=m$), then $s_m(u)=0$
for each $u$ of length $n$ and there is no edge $\sigma\in G_m$,
such that $l(\sigma_2)=n$. Hence, $S_{m,n+1}=S_{m,n}$.
Using these bounds for $S_{m,n}$ and $S_{m,0}=1$, we obtain
$$
S_{m,n}\ge 1-\sum\limits_{i=1}^{\infty} (i+3)^{-2}-
\sum\limits_{x\in\Xi}2^{-(x+3)}\ge\frac{1}{2}
$$
for all $n$. Since $P_m\ge R_m$, we have
$$
\bar P_m(\Omega)=\inf\limits_{n}\sum\limits_{l(u)=n} P_m(u)\ge
\inf\limits_n S_{m,n}\ge\frac{1}{2}.
$$
Lemma is proved. $\Box$
\begin{lemma}\label{non-equv-1}
If $k\not=m$ then $F_t(E_{P_k})\cap E_{P_m}=\emptyset$ for all $t$.
\end{lemma}
{\it Proof.} Assume that an $\omega\in E_{P_k}$ exists such that
$F_t(\omega)\in E_{P_m}$ for some $t$. Consider the task $i=\langle k,m,t\rangle$.
For any $n$, let $D_n$ be the set of all $z$ of length $n$ that are
$i$-discarded by the finite sequence $\omega^n$. It follows from
continuity of $P_m$ that
$$
\lim\limits_{n\to\infty}\sum\limits_{z\in D_n}P_m(z)\le
\lim\limits_{n\to\infty}2^{w(i,n)}P_m(\tilde F_t(\omega^n))=0.
$$
Besides, $P_k(\omega^n)\not=0$ for all $n$. From here it is easy to see
that for each $n$ the sequence $\omega^n$ has an $i$-extension
($\omega$ itself is suitable as such an extension).
By Lemma~\ref{exten-1} an edge $\sigma\in G_k(i)$ will be drawn on some
step $n>w(i)$ such that $l(\sigma_2)=n$,
$\sigma_2\subset\omega$ and the sequence $(\tilde F_t(\omega))^n$ is $i$-discarded
by the sequence $\sigma_2$. Since $w(i,n)=w(i)$, no extra edge
$\sigma'$ such that $\sigma'_1\subset (\tilde F_t(\omega))^n\subset\sigma'_2$ can be
drawn at a step $n'=l(\sigma'_2)>n$, and therefore, no extra portion of the flow
can go through the vertex $(\tilde F_t(\omega))^n$.
Then $q_m\big((\tilde F_t(\omega))^n,(\tilde F_t(\omega))^{n+1}\big)=0$,
which implies $F_t(\omega)\not\in E_{P_m}$. The resulting contradiction proves the lemma.
$\Box$
\begin{theorem}\label{nucl-atom-1}
The set of all atoms of $\Upsilon$ is countable.
\end{theorem}
{\it Proof.}
Let ${\bf p}_m=[\bar E_{P_m}]$. By Lemma~\ref{nontriv-1} $P_m({\bf p}_m)>0$,
then ${\bf p}_m\not ={\bf 0}$. By Lemma~\ref{non-equv-1} for $k\not = m$,
any $\alpha\in E_{P_k}$ and $\beta\in E_{P_m}$ do not reduce to each other. Therefore,
${\bf p}_k\cap{\bf p}_m={\bf 0}$. Define ${\bf d}_m=i_{P_m}({\bf p}_m)$.
Since $\bar P_m(i_{P_m}({\bf p_m}))=\bar P_m({\bf p_m})$, we have
${\bf d}_m\not ={\bf 0}$.
From ${\bf d}_m\subseteq{\bf p}_m$ the equality
${\bf d}_k\cap{\bf d}_m={\bf 0}$ follows for $k\not = m$.
Assume that ${\bf d}_m={\bf a}\cup{\bf b}$, where ${\bf a}\not ={\bf 0}$,
${\bf b}\not ={\bf 0}$ and ${\bf a}\cap\bf b={\bf 0}$.
Then by Corollary~\ref{Rad-Nic}
$\bar P_m({\bf a})>0$ and $\bar P_m({\bf b})>0$, which contradicts
Lemma~\ref{atom-el-1}.
This contradiction proves that ${\bf d}_m$ is an atom for each $m$.
Theorem is proved. $\Box$
Theorems~\ref{infini-div-1} and~\ref{nucl-atom-1} imply the main result
of this paper on decomposition of the maximal element of LV-algebra.
\begin{theorem} \label{nucl-ato-1}
It holds that ${\bf 1}=\cup_{i = 1}^{\infty}{\bf a}_i\cup{\bf d}$,
where ${\bf a}_1, {\bf a}_2, \dots$ is the infinite sequence of all atoms and
${\bf d}$ is the non-zero infinitely divisible element.
\end{theorem}
\subsection{Decomposition of the hyperimmune LV-degree into atoms}\label{hyperimmune-2}
Let $\bf h$ be the element of $\Upsilon$ defined by the collection of
all hyperimmune sequences. We call this element the hyperimmune LV-degree.
Rumyantsev and Shen~\cite{RuS2014} proved that
hyperimmune sequences can be generated by some probabilistic machine
with positive probability. From this ${\bf h}\not={\bf 0}$ follows.
In this section we present a decomposition of $\bf h$ into a union of
the infinite sequence of atoms and of the infinitely divisible element.
An infinite subset $A\subseteq\cal N$ is called hyperimmune if there is no computable
function $f$ such that $f(i)\ge z_i$ for every $i$, where $z_1<z_2<\dots$ are all elements
of the set $A$ arranged in ascending order.
Let $a=a_1a_2\dots$ be the characteristic (binary) sequence of the set $A$, i.e.,
$a_i=1$ if and only if $i\in A$ for every $i$. We call $a$ hyperimmune sequence.
We will study LV-degrees generated by Turing degrees of hyperimmune sequences.
An infinite binary sequence $\alpha$ is called sparse if it contains infinitely
many ones and there is no computable
total function $f$ such that for each $k$ the prefix of $\alpha$ of length $f(k)$
contains at least $k$ ones.
\begin{proposition}\label{sparse-hyper-1}
A set $A$ is hyperimmune if and only if its characteristic sequence is sparse.
\end{proposition}
{\it Proof.} Assume that a set $A$ is not hyperimmune. Then there is a computable
function $f$ such that $f(i)\ge z_i$ for every $i$, where $z_1<z_2<\dots$ are all
elements of the set $A$ arranged in ascending order.
It holds $a_{z_i}=1$ for all $i$ and $a_j=0$ for each $j\not\in A$. Since
$a_{z_i}=1$ for each $i$, the prefix of $a$ of length $f(i)$ contains at least
$i$ ones for each $i$, i.e., the sequence $a$ is not sparse.
Conversely, assume that the characteristic sequence $a$ of the set $A$ is not
sparse. Then there is a computable function $f$ such that for each $k$ the prefix
of $a$ of length $f(k)$ contains at least $k$ ones. Since the $k$-th one of $a$
occupies the position $z_k$, we get $f(k)\ge z_k$, i.e., the set $A$ is not hyperimmune.
$\Box$
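The first direction of the proof can be illustrated on a concrete non-hyperimmune set; in the sketch below $A$ is the set of squares, dominated by the computable $f(i)=i^2$ (the choice of $A$ and $f$ is ours, for illustration only):

```python
# A = {1, 4, 9, ...} is not hyperimmune: f(i) = i**2 satisfies f(i) >= z_i.
A = {i * i for i in range(1, 40)}
f = lambda k: k * k

def ones_in_prefix(n):
    """Number of ones among the first n bits of the characteristic sequence of A."""
    return sum(1 for i in range(1, n + 1) if i in A)

# The prefix of length f(k) contains at least k ones, so the characteristic
# sequence is not sparse -- exactly as in the first part of the proof.
for k in range(1, 30):
    assert ones_in_prefix(f(k)) >= k
```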
More information about the hyperimmune LV-degrees can be found in
Holzl and Porter~\cite{HoP2021}, Proposition~4.15.
The following theorems~\ref{nucl-4-1},~\ref{nucl-4-2}, and~\ref{nucl-4}
present a decomposition of the hyperimmune degree into the union of a countable
sequence of atoms and a non-zero infinitely divisible element.
\begin{theorem}\label{nucl-4-1}
There exists an infinite sequence $ {\bf h}_1, {\bf h}_2, \dots$
of atoms defined by collections of hyperimmune sequences.
\end{theorem}
{\it Proof.}
We modify Template 2 for the case of two recursive predicates
$B_1(j,\sigma)$ and $B_2(j,\sigma)$.
We call the task $i$ the atom-difference task if $i$ is even, $i=2j$,
and the sparsity task if $i$ is odd, $i=2j+1$.
We say that a finite sequence $x$ of length $n$ is $j$-discarded by a sequence
$y$ (or by an edge $\sigma\in G^n_{[j]_1}$ such that $\sigma_2=y$),
if $l(y)=l(x)=n$ and a finite sequence $u$ exists such that
$x\sim_{w(i,n)} u$ and $\tilde F_{[j]_3}(y)\subseteq u$.
Let us define the first predicate, which has to ensure the difference
between atoms:
\begin{equation}\label{rel-1-1}
B_1(j,\sigma)\Longleftrightarrow
\sum\limits_{z}R_{[j]_2}(z)\le 2^{-(\sigma_1+3)},
\end{equation}
where by $R_{[j]_2}$ we denote the frame of the flow through
the elementary network $q_{[j]_2}$ defined on steps $<n$,
and the sum is taken over all $z$ of length $l(\sigma_2)$, which are $j$-discarded
by the sequence $\sigma_2$. Here, in the exponent, we identify the sequence
$\sigma_1$ and its number.
Let $(\phi_j)$ be a computable sequence of all partial recursive functions
such that for any partial recursive function $f$ there exist infinitely many $j$
with $\phi_j=f$; by $\phi^n_i(x)$ we denote the result of the computation of
$\phi_i(x)$ in $n$ steps (see Section~\ref{prelimin-1}).
Define the second predicate, which has to ensure the sparsity of sequences from
the support set of the $q_{[j]_1}$-flow:
\begin{eqnarray*}
B_2(j,\sigma)\Longleftrightarrow
\sigma_2=\sigma_110^{l(\sigma_2)-l(\sigma_1)-1}\&
l(\sigma_2)\ge\phi^{l(\sigma_2)}_{[j]_1}(l(\sigma_1)+2).
\end{eqnarray*}
{\it Construction of the network $q_m$.}
The induction hypothesis is the same as for step $n$ of Template 2.
At each step $n$ of the induction we execute the task $i=p(n)$. This means the following:
1) Let $i=2j$. In this case do the following:
1.1) denote $m=[j]_1$ and execute step $n$ of Template~2 with the
predicate $B_1(j,\sigma)$ to define the set $G^n_m$, the values $q_m(\sigma)$
for $\sigma\in G^n_m$ such that $l(\sigma_2)=n$, and the values of the
flow delay function $s_m(x)$ for all $x$ of length $n$;
1.2) denote $m'=[j]_2$ and define $G^n_{m'}=G^{n-1}_{m'}$,
$s_{m'}(x)=1$ for each $x$, which is $j$-discarded on step $n$
by some edge $\sigma\in G^n_{[j]_1}(j)$; define $s_{m'}(x)=0$ for all other
$x$ such that $l(x)=n$;
1.3) for each $m$ such that $m\not=[j]_1$ and $m\not=[j]_2$ define
$G^n_m=G^{n-1}_m$ and $s_m(x)=0$ for every $x$ of length $n$.
2) Let $i=2j+1$. In this case do the following:
2.1) denote $m=[j]_1$ and execute step $n$ of Template 2 with the
predicate $B_2(j,\sigma)$ to define the set $G^n_m$, the values $q_m(\sigma)$
for $\sigma\in G^n_m$ such that $l(\sigma_2)=n$, and the values of the
flow delay function $s_m(x)$ for all $x$ of length $n$;
2.2) for each $m\not=[j]_1$ define $G^n_m=G^{n-1}_m$ and
$s_m(x)=0$ for every $x$ of length $n$;
3) finally, for every $m$ define
$q_m(\sigma)=\frac{1}{2}(1-s_m(\sigma_1))$
for each $\sigma$ of unit length such that $l(\sigma_1)=n$.
This concludes the description of the induction step.
By Lemma~\ref{dup-1} $q_m(\sigma)=q_m(\sigma')$ for any
$\sigma,\sigma'\in G_m$ such that $\sigma\sim_{w(i,n)}\sigma'$.
Let $P_m$ be the $q_m$-flow. Define ${\bf p}_m=[\bar E_{P_m}]$.
By Lemma~\ref{nontriv-1} $P_m({\bf p}_m)>0$, then ${\bf p}_m\not ={\bf 0}$.
Define ${\bf h}_m=i_{P_m}({\bf p}_m)$ for each $m$.
${\bf h}_m\not ={\bf 0}$, since $\bar P(i_P({\bf p_m}))=\bar P({\bf p_m})$.
The LV-degree ${\bf h}_m$ is an atom of $\Upsilon$ for each $m$,
since we use Template 2 for its definition.
By Lemma~\ref{non-equv-1}, for $k\not=m$,
any $\alpha\in E_{P_k}$ and $\beta\in E_{P_m}$ are not Turing reducible to
each other. Therefore, ${\bf p}_k\cap{\bf p}_m={\bf 0}$.
Since ${\bf h}_m\subseteq{\bf p}_m$, we obtain ${\bf h}_k\cap{\bf h}_m={\bf 0}$
for $k\not=m$.
$\Box$
The rest of the proof of Theorem~\ref{nucl-4-1} is presented in the following lemma.
\begin{lemma} \label{nontriv-1b-3}
Any infinite sequence $\omega$ from the support set of the semimeasure $P_m$ is sparse.
\end{lemma}
{\it Proof.} Let $m$ be given.
We should prove that for any infinite sequence $\omega$ from the support set of
the semimeasure $P_m$ and for any total computable function $f$, there are infinitely
many $k$ such that the prefix of $\omega$ of length $f(k)$ contains less than $k$ ones.
For any computable function $f$ there are infinitely many odd $i=2j+1$
such that $f=\phi_j$. Since $f$ is total, each prefix of any $\omega\in E_{P_m}$ has
a $j$-extension. By Lemma~\ref{exten-1} $\sigma_1\subset\sigma_2\subset\omega$
for infinitely many extra edges $\sigma$ such that $f(l(\sigma_1)+2)\le l(\sigma_2)$.
Since the number of ones in $\sigma_2=\sigma_110^{l(\sigma_2)-l(\sigma_1)-1}$ is
at most $l(\sigma_1)+1$ and $f(l(\sigma_1)+2)\le l(\sigma_2)$, the prefix
of $\omega$ of length $f(l(\sigma_1)+2)$ contains less than $l(\sigma_1)+2$ ones.
Since at least one 1 is added to $\sigma_1$ at infinitely many steps,
the sequence $\omega$ contains infinitely many 1s. Hence, each $\omega$
from the support set of semimeasure $P_m$ is sparse.
$\Box$
\begin{theorem}\label{nucl-4-2}
There exists an infinitely divisible element defined by a collection of
the hyperimmune sequences.
\end{theorem}
The proof is similar to the proof of Theorem~\ref{nucl-4-1}, where
the recursive predicate $B_1$ is replaced with (\ref{rel-2})
and Template 1 is used.
Theorems~\ref{nucl-4-1} and~\ref{nucl-4-2} imply the main result of
Section~\ref{hyperimmune-2}.
\begin{theorem} \label{nucl-4}
The hyperimmune LV-degree admits the decomposition
${\bf h}=\cup_{i=1}^{\infty}{\bf h}_i\cup{\bf e}$,
where ${\bf h}_1, {\bf h}_2,\dots$ is an infinite sequence of atoms and
${\bf e}$ is an infinitely divisible element, all
defined by collections of hyperimmune sequences.
\end{theorem}
It would be interesting to extend the result of Theorem~\ref{nucl-4}
to other specific LV-degrees.
A careful analysis of the relationship between LV-degrees and Turing degrees is
given in the review by Holzl and Porter~\cite{HoP2021}. We have proved that
some of LV-degrees can be generated using Template 2 and, so,
the decomposition of type (\ref{main-decomposition-2}) takes place for these LV-degrees.
For example, this is the case for the hyperimmune degree. The result of Holzl
and Porter~\cite{HoP2021} on the DNC (diagonally non-computable)
degree can be extended to obtain a decomposition like (\ref{main-decomposition-2})
for this degree.\footnote{An infinite binary sequence $\omega$ has DNC degree
if and only if there is some function $f$ such that $f\le_T\omega$
and $f(i)\not=\phi_i(i)$ for all $i$.}
An open problem arises: can we obtain decompositions of type (\ref{main-decomposition-2})
for the LV-degrees considered in~\cite{HoP2021}, among which there are degrees defined
by the collection of 1-generic sequences, degrees defined by the collection of
generalized low sequences, and those collections corresponding to various notions
of effective randomness? The author does not know whether it is possible to apply
the techniques of Templates 1 and 2 to the construction of LV-degrees of
1-generic sequences.
\section{Introduction}
Liquid democracy \citep{blum2016liquid} is an influential proposal in recent debates on democratic reforms in both Europe and the US. Several grassroots campaigns, as well as local parties, experimented with this novel type of decision making procedure. Examples include the German Piratenpartei\footnote{\url{https://www.piratenpartei.de/}} and the EU Horizon 2020 project WeGovNow~\citep{boella2018wegovnow}, which have incorporated the LiquidFeedback\footnote{\url{https://liquidfeedback.org/}} platform in their decision making, as well as grass-roots organizations such as the Democracy Earth Foundation\footnote{\url{https://www.democracy.earth/}}. Liquid democracy is a form of proxy voting \citep{miller1969program,Tullock_1992,Alger_2006,green2015direct,cohensius2017proxy} where, in contrast to classical proxy voting, proxies are delegable (or transitive, or transferable). Suppose we are voting on a binary issue, then each voter can either cast her vote directly, or she can delegate her vote to a proxy, who can again either vote directly or, in turn, delegate to yet another proxy, and so forth. Ultimately, the voters that decided not to delegate cast their ballots, which now carry the weight given by the number of voters who entrusted them as proxy, directly or indirectly.
\paragraph{Contribution}
The starting point of our analysis is an often cited feature of liquid democracy:
transitive delegations reduce the level of duplicated effort required by direct voting, by freeing voters from the need to invest effort in order to vote accurately. The focus of the paper is the decision-making problem that voters, who are interested in casting an accurate vote, face between voting directly, and thereby incurring a cost in terms of effort invested to learn about the issue at hand, or delegating to another voter in their network, thereby avoiding costs.
We define a game-theoretic model, called {\em delegation game}, to represent this type of interaction. We establish pure strategy Nash equilibrium existence results for classes of delegation games, and study the quality of equilibria in terms of the average accuracy they enable for the population of voters, both analytically and through simulations. Proofs of the two main results (Theorems \ref{thm:NE-deterministic} and \ref{thm:delegation-games-NE}) are presented in full, while we provide proofs of the simpler secondary results as an Appendix only.
By means of simulations we also study the effects of different network structures on delegation games in terms of: performance against direct voting, average accuracy and the probability of a correct majority vote, the number and quality of voters acting as ultimate proxies (so-called gurus) and, finally, the presence of delegation cycles.
To the best of our knowledge, this is the first paper providing a comprehensive study of liquid democracy from a game-theoretic angle.
\paragraph{Related Work}
Although the idea of delegable proxy was already sketched by \citet{dodgson84principles}, only a few very recent papers have studied aspects of liquid democracy in the (computational) social choice theory \citep{brandt16handbook} literature. \citet{kling2015voting} provide an analysis of election data from the main platform implementing a liquid democracy voting system (Liquid Feedback) for the German Piratenpartei. They focus on network theoretic properties emerging from the structure of delegations---with particular attention to the number of highly influential gurus or `super-voters'. Inspired by their experimental analysis, \citet{golz2018fluid} propose and analyze a variant of the liquid democracy scheme able to restrict reliance on super-voters. \citet{skowron17proportional} study an aspect of the Liquid Feedback platform concerning the order in which proposals are ranked and by which they are brought to the attention of the community. \citet{boldi2011viscous} investigate applications of variants of the liquid democracy voting method (called viscous democracy) to recommender systems.
\citet{brill2018interactive} presents some research directions in the context of liquid democracy.
A general, more philosophical discussion of liquid democracy is provided by \citet{blum2016liquid}.
More directly related to our investigations is the work by \citet{christoff17binary} and, especially, by \citet{kahng18liquid}.
The first paper studies liquid democracy as an aggregator---a function mapping profiles of binary opinions to a collective opinion---in the judgment aggregation and binary voting tradition \citep{Grossi_2014,endriss16judgment}.
The focus of that paper are the unintended effects that transferable proxies may have
due to delegation cycles, and due to the failure of rationality constraints normally satisfied by direct voting.
The second paper addresses one of the most cited selling arguments for liquid democracy: delegable proxies guarantee that better informed agents can exercise more weight on group decisions, thereby increasing their quality. Specifically, \citet{kahng18liquid} study the level of accuracy that can be guaranteed by liquid democracy (based on vote delegation with weighted majority) vs. direct voting by majority. Their key result consists in showing that no `local' procedure to select proxies can guarantee that liquid democracy is, at the same time, never less accurate (in large enough graphs)
and sometimes strictly
more accurate
than direct voting.
In contrast to their work,
we assume that agents incur costs (effort) when voting directly, and on that basis we develop a game-theoretic model. Also, we assume agents aim at tracking their own type rather than an external ground truth, although we do assume such a restriction in our simulations to better highlight how the two models are related and to obtain insights applicable to both.
\section{Preliminaries}
\subsection{Types, Individual Accuracy and Proximity}
We are concerned with a finite set of agents (or voters, or players) $N = \set{1, \ldots, n}$ having to decide whether $x = 1$ or $x = 0$. For each agent one of these two outcomes is better, but the agent is not necessarily aware which one. We refer to this hidden optimal outcome as the {\em type of agent $i$} and denote it by $\tau_i \in \set{0,1}$. Agents want to communicate their type truthfully to the voting mechanism, but they know it only imperfectly. This is captured by the \emph{accuracy} $q_i$ of an agent $i$:
$q_i$ determines the likelihood that, if $i$ votes directly, she votes according to her type $\tau_i$.
We assume that an agent's accuracy is always $\geq 0.5$, i.e., at least as good as a coin toss.
We distinguish two settings depending on whether the agents' types are deterministic or probabilistic.
A {\em deterministic type profile} $T=\tuple{\tau_1, \ldots, \tau_n}$ simply collects each agent's type.
In {\em probabilistic type profiles} types are independent random variables drawn according to a distribution ${\mathbb P}$. Given a probabilistic type profile, the likelihood that any two agents $i,j\in N$ are of the same type is called the {\em proximity} $p_{i,j}$ where
$p_{i,j}={\mathbb P}(\tau(i)=\tau(j)) = {\mathbb P}(\tau(i) = 1)\cdot {\mathbb P}(\tau(j) = 1) + (1- {\mathbb P}(\tau(i) = 1)) \cdot (1- {\mathbb P}(\tau(j) = 1))$. In the probabilistic setting we assume agents know such value although, importantly, they do not know ${\mathbb P}$.
In a deterministic type profile, we have $p_{i,j}= 1$ if $\tau_i=\tau_j$ and $p_{i,j}= 0$ otherwise. Following standard equilibrium theory, our theoretical results assume agents act as if they have access to the accuracy of each agent. More realistically, in our simulations we assume agents have access to such information only with respect to neighbors on an underlying interaction structure.
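The proximity formula can be worked through numerically. The sketch below (our own notation, with illustrative probabilities) computes $p_{i,j}$ from the two agents' marginal probabilities of being of type $1$, and confirms that deterministic types are the degenerate case.

```python
# Proximity of two agents with independent types, where
# a = P(tau_i = 1) and b = P(tau_j = 1):
# p_ij = P(both type 1) + P(both type 0).
def proximity(a, b):
    return a * b + (1 - a) * (1 - b)

# deterministic types are the degenerate case a, b in {0, 1}
assert proximity(1, 1) == 1 and proximity(1, 0) == 0

# an unbiased agent agrees with anyone with probability 0.5
assert abs(proximity(0.5, 0.9) - 0.5) < 1e-12
```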
\subsection{Interaction Structure and Delegations}
Agents are nodes in a network (directed graph) represented by a relation $R \subseteq N^2$. For $i \in N$ the neighborhood of $i$ in $\langle N, R \rangle$ is denoted $R(i)$, i.e., the agents that are directly connected to $i$.
Agents have the choice of either voting themselves, thereby relying solely on their own accuracy, or delegating to an agent in their neighborhood. A {\em delegation profile} is a vector ${\bf d} = \tuple{d_1, \ldots, d_n} \in N^n$. Given a delegation profile ${\bf d}$ we denote by $d_i$ the \emph{proxy} selected by $i$ in ${\bf d}$. Clearly a delegation profile can be viewed as a functional graph on $N$ or, equivalently, as a map in ${\bf d}: N \to N$ where ${\bf d}(i) = d_i$. When the iterated application of ${\bf d}$ from $i$ reaches a fixed point we denote such fixed point as $d^*_i$ and call it $i$'s {\em guru} (in ${\bf d}$).
In the following, we write $N^*$ to denote the set of voters whose delegation does not lie on a path ending in a cycle, i.e., the set of voters $i$ for which $d^*_i$ exists.
We write ${\bf d}' = ({\bf d}_{-i},j)$ as a short form for ${\bf d}' = \tuple{d_1, \ldots, d_{i-1}, j, d_{i+1},\dots, d_n} $.
As agents may only be able to observe and interact with their direct network neighbors, structural properties of the interaction network may play a role in the model dynamics. In our simulations we will focus on undirected graphs (that is, $R$ will be assumed to be symmetric, as social ties are normally mutual) consisting of one single connected component (that is, $N^2$ is included in the reflexive transitive closure of $R$).
Under these assumptions, we consider four typical network structures that are well represented in the literature on social network analysis (cf. \citealt{jackson08social}): 1) the {\em random} network, in which each pair of nodes has a given probability of being connected \citep{erdos1959random}; 2) the {\em regular} network, in which all nodes have the same degree; 3) the {\em small world} network, which features a small average path length and high clustering \citep{watts1998collective}; and 4) the {\em scale free} network, which exhibits a power law degree distribution \citep{barabasi1999emergence}.\footnote{Although random and regular graphs are not generally applicable to real-world settings, they serve as a useful baseline to illustrate the effects of network structure on delegations.
}
\section{A Model of Rational Delegations}
\subsection{Individual Accuracy under Delegable Proxy}
Each agent $i$ has to choose between two options: either to vote herself with accuracy $q_i$ or to delegate, thereby inheriting the accuracy of another voter (unless $i$ is involved in a delegation cycle).
These choices are recorded in the delegation profile ${\bf d}$ and can be used to compute the individual accuracy for each agent $i\in N^*$ as follows:
\begin{align} \label{eq:accuracy}
\small
\hspace{-0.3cm} q^*_i({\bf d}) =
\begin{cases}
q_{d^*_i}\cdot p_{i,d^*_i}
+ (1-q_{d^*_i}) \cdot (1-p_{i,d^*_i}) & \text{if }i\in N^* \\
0.5 &\text{if }i\notin N^*
\end{cases}
\end{align}
In \eqref{eq:accuracy}
$i$'s accuracy equals the likelihood that $i$'s guru has the same type and votes accurately plus the likelihood that $i$'s guru has the opposite type and fails to vote accurately. Note that if $i$ votes directly, i.e., $d_i=i$, then $q^*_i({\bf d}) = q_i$. Observe that if $i$'s delegation leads to a cycle ($i\notin N^*$), $i$'s accuracy is set to $0.5$. The rationale for this assumption is the following. If an agent delegates into a cycle, even though she knows her own accuracy and she actively engages with the voting mechanism by expressing a delegation, she fails to pass information about her type to the mechanism. No information is therefore available to decide about her type.
It may be worth observing that, by \eqref{eq:accuracy}, in a deterministic type profile we have that $p_{i,j}\in\{0,1\}$ and therefore $i$'s accuracy reduces to: $q_{d^*_i}$ if $i\in N^*$ and $\tau(i)=\tau(d^*_i)$; $1-q_{d^*(i)}$ if $i\in N^*$ and $\tau(i)\neq\tau(d^*_i)$; and $0.5$ if $i\notin N^*$.
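A minimal sketch of the accuracy computation in \eqref{eq:accuracy} follows. It resolves each agent's guru by iterating the delegation map, assigns accuracy $0.5$ on delegation cycles, and otherwise applies the proximity-weighted formula; the numeric values are assumptions for illustration.

```python
# Sketch of eq. (1): accuracies q, proximities p, delegation map d.
def guru(d, i):
    """Return the fixed point d*_i of the delegation map, or None on a cycle."""
    seen = set()
    while d[i] != i:
        if i in seen:
            return None          # delegation path ends in a cycle
        seen.add(i)
        i = d[i]
    return i

def accuracy(d, q, p, i):
    g = guru(d, i)
    if g is None:
        return 0.5               # no information about i's type is relayed
    return q[g] * p[i][g] + (1 - q[g]) * (1 - p[i][g])

# example: 0 -> 1 -> 2 with 2 voting; homogeneous types give p = 1 everywhere
q = {0: 0.7, 1: 0.6, 2: 0.9}
p = {i: {j: 1.0 for j in q} for i in q}
d = {0: 1, 1: 2, 2: 2}
assert guru(d, 0) == 2 and accuracy(d, q, p, 0) == 0.9
assert accuracy({0: 1, 1: 0, 2: 2}, q, p, 0) == 0.5   # delegation cycle
```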
Before introducing our game theoretic analysis, we make the following observation. Agents have at their disposal an intuitive strategy to improve their accuracy: simply delegate to a more accurate neighbor. We say that a delegation profile ${\bf d}$ is \emph{positive} if for all $j\in N$ either $d_j=j$ or $q^*_j({\bf d})> q_j$.
Furthermore, we say that a delegation from $i$ to a neighbor $j$ is
\emph{locally positive} if $q_j\cdot p_{i,j} + (1-q_j)\cdot (1-p_{i,j})>q_i$.
\begin{proposition}
Let ${\bf d}$ be a positive delegation profile.
Further, let $s,t\in N$, $d_s=s$, and ${\bf d}'=({\bf d}_{-s},t)$, i.e., agent $s$ votes directly in ${\bf d}$ and delegates to $t$ in ${\bf d}'$.
If the delegation from $s$ to $t$ is locally positive, then
${\bf d}'$ is positive (proof in Appendix~\ref{appendix:proof}).
\label{prop:positive}
\end{proposition}
However, locally positive delegations do not necessarily correspond to optimal delegations.
This can be easily seen
in an example where agent $i$ is not a neighbor of a very competent agent $j$, but would have to delegate via an intermediate agent $k$ (who delegates to $j$). If this intermediate agent $k$ has a lower accuracy than $i$, then the delegation from $i$ to $k$ would not be locally positive, even though it is an optimal choice. So utility-maximization may require information which is inherently non-local (accuracy of `far' agents).
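The chain example just described can be made concrete. In the following sketch (illustrative accuracies, homogeneous types so that all proximities are $1$ and inherited accuracy is just the guru's accuracy), the delegation from $i$ to $k$ fails the locally positive test yet yields a strictly higher accuracy than voting directly.

```python
# i -- k -- j chain: i can only reach j through the less accurate k.
q = {"i": 0.7, "k": 0.6, "j": 0.9}
d = {"i": "k", "k": "j", "j": "j"}   # i -> k -> j, j votes directly

locally_positive = q["k"] > q["i"]   # 0.6 > 0.7: False
inherited = q[d[d["i"]]]             # i's guru is j, accuracy 0.9

# not locally positive, yet strictly better than i's own accuracy
assert not locally_positive and inherited > q["i"]
```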
\subsection{Delegation Games}
We assume that each agent $i$ has to invest an effort $e_i$ to manifest her accuracy $q_i$.
If she delegates, she does not have to spend effort.
Agents aim therefore at maximizing the trade-off between the accuracy they can achieve (either by voting directly or through proxy) and the effort they spend. Under this assumption, the binary decision set-up with delegable proxy we outlined above can be used to define a natural game---called {\em delegation game}---$G = \tuple{N, {\mathbb P}, R, \Sigma_i, u_i}$, with $i \in N$, where $N$ is the set of agents, ${\mathbb P}$ is the (possibly degenerate) distribution from which the types of the agents in $N$ are drawn, $R$ the underlying network as defined above, $\Sigma_i\in N$ is the set of strategies of agent $i$ (voting, or choosing a specific proxy), and
\begin{align}
u_i({\bf d}) =
\begin{cases} q^*_i({\bf d}) &\text{if } d_i\neq i\\
q_i - e_i &\text{if } d_i= i\\
\end{cases} \label{eq:utility}
\end{align}
is agent $i$'s utility function. The utility $i$ extracts from a delegation profile equals the accuracy she inherits through proxy or, if she votes, her accuracy minus the effort spent.\footnote{No utility is accrued for gaining voting power in our model.} In delegation games we assume that $q_i-e_i\geq 0.5$ for all $i\in N$. This is
because if $q_i-e_i < 0.5$, then $i$ would prefer a random effortless choice over taking a decision with effort.
A few comments about the setup of \eqref{eq:utility} are in order. First of all, as stated earlier, we assume agents to be truthful. They do not aim at maximizing the chance their type wins the vote, but rather to relay their type to the mechanism as accurately as possible.\footnote{Notice however that our modeling of agents' utility remains applicable in this form even if agents are not truthful but the underlying voting rule makes truthfulness a dominant strategy---such as majority in the binary voting setting used here.}
Secondly, notice that the utility an agent extracts from a delegation profile may equal the accuracy of a random coin toss when the agent's delegation ends up into a delegation cycle (cf. \eqref{eq:accuracy}). If this happens the agent fails to relay information about her type, even though she acted in order to do so. This justifies the fact that $0.5$ is also the lowest payoff attainable in a delegation game.
The following classes of delegation games will be used in the paper:
games with {\em deterministic profiles}, i.e., where ${\mathbb P}$ is degenerate and all players are assigned a crisp type from $\set{0,1}$; {\em homogeneous} games, where all players have the same (deterministic) type;\footnote{This is the type of interaction studied, albeit not game-theoretically, by \citet{kahng18liquid} and normally assumed by jury theorems \citep{grofman83thirteen}.} and {\em effortless voting} games, where for each $i \in N$ we have $e_i = 0$.
As an example, a homogeneous game in matrix form is given in Table~\ref{table:dgame}, where $N = \set{1,2}$, $R = N^2$ and the distribution yields the deterministic type profile $T = \tuple{1,1}$. Interestingly, if we assume that $q_i - e_i > 0.5$ with $i \in \set{1,2}$, and that\footnote{We use here the usual notation $-i$ to denote
$i$'s opponent.} $q_{-i} > q_i - e_i$ (i.e., the opponent's accuracy is higher than the player's individual accuracy minus her effort), then the game shares the ordinal preference structure of the class of anti-coordination games: players need to avoid coordination on the same strategy (one should vote and the other delegate), with two coordination outcomes (both players voting or both delegating) of which the second (the delegation cycle) is worst for both players. Notice that, were the underlying network not complete (i.e., $R \subset N^2$), the matrix would be shrunk by removing the rows and columns corresponding to the delegation options no longer available.
\bgroup
\def1.1{1.1}
\begin{table}[tb]
\centering
\begin{tabular}{rP{2.2cm}|P{2.2cm}|}
& \multicolumn{1}{c}{vote} & \multicolumn{1}{c}{delegate (to $1$)} \\ \cline{2-3}
\multicolumn{1}{r|}{vote} & $q_1-e_1, q_2-e_2$ & $q_1-e_1,q_1 $ \\ \cline{2-3}
\multicolumn{1}{r|}{delegate (to $2$)} & $q_2,q_2-e_2 $ & $0.5,0.5$ \\ \cline{2-3}
\end{tabular}
\caption{A two-player delegation game. The row player is agent $1$ and the column player is agent $2$.}
\label{table:dgame}
\end{table}
\egroup
The introduction of effort has significant consequences on the delegation behavior of voters, and we will study it in depth in the coming sections. It is worth noting immediately that the assumptions of Proposition~\ref{prop:positive} no longer apply, since agents may prefer to make delegations that are not locally positive due to the decreased utility of voting directly.
\subsection{Existence of Equilibria in Delegation Games}
In this section we study the existence of pure strategy Nash Equilibria (NE) in two classes of delegation games. NE describe how ideally rational voters would resolve the effort/accuracy trade-off. Of course, such voters have common knowledge of the delegation game structure---including, therefore, common knowledge of the accuracies of `distant' agents in the underlying network. Our simulations will later lift some of such epistemic assumptions built into NE.
\paragraph{Deterministic Types}
In the following we provide a NE existence result for games with deterministic type profiles.
\begin{theorem}
Delegation games with deterministic type profiles always have a (pure strategy) NE.\label{thm:NE-deterministic}
\end{theorem}
\begin{proof}
First of all, observe that since the profile is deterministic, for each pair of agents $i$ and $j$, $p_{i,j} \in \set{0,1}$.
The proof is by construction.
First, we partition the set of agents $N$ into $N_1 = \set{i \in N \mid \tau(i) = 1}$ and $N_0 = \set{i \in N \mid \tau(i) = 0}$.
We consider these two sets separately; without loss of generality let us consider $N_1$.
Further we consider the network $R_1=\{(i,j)\in N_1\times N_1: (i,j)\in R\}$.
Since $(N_1,R_1)$ can be seen as a directed graph, we can partition it into Strongly Connected Components (SCCs).
If we shrink each SCC into a single vertex, we obtain the condensation of this graph; note that such a graph is a directed acyclic graph (DAG).
We construct a delegation profile ${\bf d}$ by traversing this DAG bottom up, i.e., starting with leaf SCCs.
Let $S\subseteq N_1$ be a set of agents corresponding to a leaf SCC in the condensation DAG.
We choose an agent $i$ in $S$ that has maximum $q_i-e_i$.
Everyone in $S$ (including $i$) delegates to $i$.
Now let $S\subseteq N_1$ be a set of agents corresponding to an inner node SCC in the condensation DAG and assume that we have already defined the delegation for all SCCs that can be reached from $S$.
As before, we identify an agent $i\in S$ with maximum $q_i-e_i$.
Further, let $T\subseteq N_1\setminus S$ be the set of all voters $j$ that can be reached from $S$ in $(N_1,R_1)$, and for which $q^*_j>q_i-e_i$.
We distinguish two cases.
(i) If $T\neq \emptyset$, then we choose an agent $k\in T$ with $q^*_k=\max_{j\in T} q^*_j$ and all agents in $S$ directly or indirectly delegate to $k$.
(ii) If $T=\emptyset$, all agents in $S$ delegate to $i$.
This concludes our construction (as for $N_0$ the analogous construction applies); let ${\bf d}$ be the corresponding delegation profile.
It remains to verify that this is indeed a NE:
Let $i$ be some agent in an SCC $S$, and, without loss of generality, let $i\in N_1$.
Observe that since we have a deterministic profile, if agent $i$ changes her delegation to $j$, then $i$'s utility changes to $q_j^*({\bf d})$ if $i\in N_1$ and $1-q_j^*({\bf d})$ if $i\in N_0$.
First, note that for all agents $k\in N$, $q^*_k({\bf d})\geq q_k-e_k \geq 0.5$.
Hence, we can immediately exclude that for agent $i$ delegating to an agent in $j\in N_0$ is (strictly) beneficial, as it would yield an accuracy of at most $1-q^*_j\leq 0.5$.
Towards a contradiction assume there is a beneficial deviation to an agent $j\in N_1$, i.e., there is an agent $j\in R(i)\cap N_1$ with $q^*_j({\bf d}) > q_i^*({\bf d})$.
Let us now consider the three cases: (1) $d_i=i$, (2) $d^*_i\in S$ but $d_i\neq i$, and (3) $d^*_i\notin S$.
In case (1), everyone in $S$ delegates to $i$. Hence, if $j\in S$, a cycle would occur yielding a utility of $0.5$, which is not sufficient for a beneficial deviation.
If a delegation to $j\notin S$ is possible but was not chosen, then by construction $q^*_j \leq q_i-e_i$ and hence this deviation is not beneficial. We conclude that in case (1) such an agent $j$ cannot exist.
In case (2), everyone in $S$ delegates to $d^*_i$. Hence, if $j\in S$, then $d^*_j=d^*_i$, a contradiction. If $j\notin S$, the same reasoning as before applies and hence also here we obtain a contradiction.
In case (3), by construction, $d^*_i\notin S$ had been chosen to maximise accuracy, hence $j\in S$.
Since for all $k\in S$, $d_k^*=d_i^*$, only a deviation to $i$ itself can be beneficial, i.e., $j=i$. However, since $d^*_i$ was chosen because $q^*_{d^*(i)} > q_i-e_i$, no beneficial deviation is possible.
\end{proof}
\noindent
It follows that homogeneous games also always have a NE.
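Theorem~\ref{thm:NE-deterministic} can be checked by brute force on small instances. The sketch below (illustrative types, accuracies, efforts and network; not the constructive procedure of the proof) enumerates all delegation profiles over each agent's allowed proxies and confirms that at least one pure Nash equilibrium exists.

```python
from itertools import product

# Small deterministic delegation game: types tau, accuracies q, efforts e;
# R[i] lists i's admissible strategies (herself plus her neighbors).
tau = [1, 1, 0, 1]
q   = [0.8, 0.9, 0.7, 0.6]
e   = [0.2, 0.1, 0.1, 0.05]
R   = {0: [0, 1, 2], 1: [0, 1, 3], 2: [0, 2], 3: [1, 3]}

def guru(d, i):
    seen = set()
    while d[i] != i:
        if i in seen:
            return None                      # delegation cycle
        seen.add(i)
        i = d[i]
    return i

def utility(d, i):
    if d[i] == i:
        return q[i] - e[i]                   # direct vote costs effort
    g = guru(d, i)
    if g is None:
        return 0.5
    # deterministic proximity: 1 if same type, 0 otherwise
    return q[g] if tau[g] == tau[i] else 1 - q[g]

def is_NE(d):
    return all(utility(d, i) >=
               max(utility(d[:i] + (j,) + d[i+1:], i) for j in R[i])
               for i in range(len(d)))

equilibria = [d for d in product(*(R[i] for i in range(4))) if is_NE(d)]
assert equilibria   # a pure NE exists, as Theorem 1 guarantees
```

In this instance the profile in which agents $0$ and $3$ delegate to agent $1$ (who votes) and agent $2$ votes directly is among the equilibria found.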
\paragraph{Effortless Voting}
Effortless voting ($e_i=0$ for all $i\in N$) is applicable whenever effort is spent in advance of the decision and further accuracy improvements are not possible.
\begin{theorem}
Delegation games with effortless voting always have a (pure strategy) NE.\label{thm:delegation-games-NE}
\end{theorem}
\begin{proof}
We prove this statement by showing that the following procedure obtains a NE:
We start with a strategy profile in which all players vote directly, i.e., player $i$'s strategy is $i$.
Then, we iteratively allow players to choose their individual best response strategy to the current strategy profile.
Players act sequentially in arbitrary order.
If there are no more players that can improve their utility by changing their strategy, we have found a NE.
We prove convergence of this procedure by showing that a best response that increases the player's utility never decreases the utility of other players.
We proceed by induction.
Assume that all previous best responses have not reduced any players' utility (IH).
Assume player $i$ now chooses a best response that increases her utility.
Let ${\bf d}$ be the delegation profile; further, let $d^*_i=s$.
By assumption, $i$'s utility started with $q_i-e_i=q_i$ and has not decreased since, i.e., $u_i({\bf d})\geq q_i$.
Since $i$'s best response strictly increases $i$'s utility, it cannot be a delegation to herself.
So let a delegation to $j \neq i$ be $i$'s best response and consider profile ${\bf d}'=({\bf d}_{-i},j)$.
Further, let $d_j^*=t$; thus $i$ now delegates to $j$ and, by transitivity, to $t$, i.e., $d_i'^*=d_j'^*=t$.
Let $k$ be some player other than $i$.
We define the delegation path of $k$ as the sequence $({\bf d}(k), {\bf d}({\bf d}(k)), {\bf d}({\bf d}({\bf d}(k))),\dots)$.
If $k$'s delegation path does not contain $i$, then $k$'s utility remains unchanged, i.e., $u_k({\bf d}')\geq u_k({\bf d})$.
If $k$'s delegation path contains $i$, then $k$ now delegates by transitivity to $t$, i.e., we have $d_k^*=s$ and $d_k'^*=t$.
By \eqref{eq:utility}, we have
\begin{align}
&u_k({\bf d}) = q_s\cdot p_{k,s} + (1-q_{s})\cdot (1-p_{k,s}) \quad\text{ and}\label{eq:uked}\\
&u_k({\bf d}') = q_t\cdot p_{k,t} + (1-q_{t})\cdot (1-p_{k,t}).\label{eq:uke'd'}
\end{align}
We have to show that $k$'s utility does not decrease, i.e., $u_k({\bf d}') \geq u_k({\bf d})$, under the assumption that $i$ chooses a best response, i.e., $u_i({\bf d}') > u_i({\bf d})$,
with:
\begin{align}
&u_i({\bf d}) = q_s\cdot p_{i,s} + (1-q_{s})\cdot (1-p_{i,s}) \quad\text{ and}\label{eq:uied}\\
&u_i({\bf d}') = q_t\cdot p_{i,t} + (1-q_{t})\cdot (1-p_{i,t}).\label{eq:uie'd'}
\end{align}
In the following we will often use the fact that, for $a,b\in[0,1]$, if $ab+(1-a)(1-b)\geq 0.5$, then either $a,b\in[0,0.5]$ or $a,b\in[0.5,1]$.
By IH, since accuracies are always at least 0.5, it holds that $u_i({\bf d})\geq q_i\geq 0.5$ and by \eqref{eq:uied} we have $q_s \cdot p_{i,s} + (1-q_s)\cdot (1-p_{i,s}) \geq 0.5 $ and hence $p_{i,s}\geq 0.5$.
Analogously, \eqref{eq:uked} implies that $p_{k,s}\geq 0.5$.
Furthermore, we use the fact that
\begin{equation}
p_{k,i} = p_{k,s}p_{s,i}+(1-p_{k,s})(1-p_{s,i}) + (- 2 (2x_k-1) \cdot (2 x_i - 1) \cdot (x_s - 1) x_s)
\label{eq:fact1}
\end{equation}
where $x_j = {\mathbb P}(\tau(j) = 1)$ for $j \in \set{k,i,s}$. Observe that, by the definition of utility in \eqref{eq:utility}, by the assumptions made on ${\bf d}$ and ${\bf d}'$, and by the above fact (for $a,b\in[0,1]$, if $ab+(1-a)(1-b)\geq 0.5$, then either $a,b\in[0,0.5]$ or $a,b\in[0.5,1]$), we have that either $x_j \geq 0.5$ for all $j \in \set{k, i, s}$, or $x_j \leq 0.5$ for all $j \in \set{k, i, s}$. We work on the first case; the other case is symmetric. Let also $\gamma_{k,s,i} = - 2 (2x_k-1) \cdot (2 x_i - 1) \cdot (x_s - 1) x_s$, and define $\gamma$ analogously for other index triples. From the above it follows that $0 \leq \gamma_{k,s,i} \leq 0.5$, and likewise for the analogous terms. Furthermore, given that $p_{i,s} = p_{s,i} \geq 0.5$, we can also conclude that $p_{k,i}\geq 0.5$.
Now by substituting
\begin{equation*}
p_{k,s} = p_{k,i}p_{i,s}+(1-p_{k,i})(1-p_{i,s}) + (\underbrace{- 2 (2x_k -1) \cdot (2 x_s - 1) \cdot (x_i - 1) x_i}_{\gamma_{k,i,s}})
\end{equation*}
in \eqref{eq:uked}, we obtain
\begin{equation}
u_k({\bf d}) = (2 p_{k,i}-1) (\overbrace{2q_sp_{i,s}-q_s-p_{i,s}+1}^{u_i({\bf d})}) + 1 - p_{k,i} + \gamma_{k,i,s}(2q_s - 1).
\label{eq:uked=uied}
\end{equation}
Similarly, using the appropriate instantiation of \eqref{eq:fact1} for $x_j$ with $j \in \set{k,i,t}$, by substituting
$
p_{k,i}\cdot p_{i,t}+(1-p_{k,i})(1-p_{i,t}) + \gamma_{k,i,t}
$
for $p_{k,t}$ in \eqref{eq:uke'd'} we obtain
\begin{equation}
u_k({\bf d}') = (2 p_{k,i}-1)\cdot (\overbrace{2q_tp_{i,t}-q_t-p_{i,t}+1}^{u_i({\bf d}')}) + 1 - p_{k,i} + \gamma_{k,i,t}(2q_t - 1).
\label{eq:uke'd'=uie'd'}
\end{equation}
Now observe that, since $p_{k,i}\geq 0.5$ we have that $(2 p_{k,i}-1)\geq 0$. It remains to compare $\gamma_{k,i,s}(2q_s - 1)$ with $\gamma_{k,i,t}(2q_t - 1)$, showing the latter is greater than the former. Observe that both expressions have a positive sign. We use the fact that $ab+(1-a)(1-b) < cd+(1-c)(1-d)$ implies $cd > ab$ under the assumption that $a,b,c,d\in[0.5,1]$. On the basis of this, and given that $u_i({\bf d}')> u_i({\bf d})$, we obtain that $q_s\cdot p_{i,s} < q_t\cdot p_{i,t}$ and therefore that $q_s\cdot x_s < q_t\cdot x_t$, from which we can conclude that
\begin{equation*}
\begin{split}
&\big(\overbrace{- 2 (2x_k -1) \cdot (2 x_s - 1) \cdot (x_i - 1) x_i}^{\gamma_{k,i,s}}\big)\cdot(2q_s - 1) \\
&< \big(\underbrace{- 2 (2x_k -1) \cdot (2 x_t - 1) \cdot (x_i - 1) x_i}_{\gamma_{k,i,t}}\big) \cdot(2q_t - 1).
\end{split}
\label{eq:extra}
\end{equation*}
It follows that the assumption $u_i({\bf d}')> u_i({\bf d})$ (player $i$ chose a best response that increased her utility) together with Equations~(\ref{eq:uked=uied}) and~(\ref{eq:uke'd'=uie'd'}) implies that $u_k({\bf d}')> u_k({\bf d})$ (and {\em a fortiori} that $u_k({\bf d}') \geq u_k({\bf d})$). We have therefore shown that if some player chooses a best response, the utility of other players does not decrease. This completes the proof.
\end{proof}
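The iterative procedure used in the proof can be sketched in code. The following is a minimal sketch for homogeneous games (all proximities equal to $1$, so an agent's utility is simply her guru's accuracy, minus her own effort if she votes directly); function and variable names are illustrative:

```python
def guru(d, i):
    """Follow the delegation chain of agent i in profile d;
    return her guru, or None if the chain runs into a cycle."""
    seen = set()
    while d[i] != i:
        if i in seen:
            return None
        seen.add(i)
        i = d[i]
    return i

def utility(d, i, q, e):
    """Utility in a homogeneous game: the guru's accuracy, minus own
    effort when voting directly; cycles yield accuracy 0.5."""
    g = guru(d, i)
    if g is None:
        return 0.5
    return q[g] - (e[i] if g == i else 0.0)

def best_response_dynamics(q, e, R):
    """Iterate individual best responses (players in arbitrary fixed
    order) starting from direct voting; R[i] lists i's admissible
    proxies, including i herself."""
    n = len(q)
    d = list(range(n))          # everyone votes directly
    changed = True
    while changed:
        changed = False
        for i in range(n):
            def u(j):
                trial = d.copy()
                trial[i] = j
                return utility(trial, i, q, e)
            best = max(R[i], key=u)
            if u(best) > u(d[i]) + 1e-12:
                d[i], changed = best, True
    return d
```

With effortless voting ($e_i = 0$ for all $i$) the dynamics converges and every agent ends up delegating, directly or transitively, to an agent of maximal accuracy, mirroring the argument of the proof.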
\iffalse
\begin{proposition}
There exist delegation games without (pure strategy) NE.
\end{proposition}
\begin{proof}
Embedding a $3$-player matching penny (for $N \geq 3$) showing that best-response dynamics leads to cycles from every possible profile.
\note[Davide]{the construction is possible in cases in which proximity is not symmetric, possibly a case we want to rule out.}
\note[Martin]{Is a proof like this still possible in our new setting?}
\end{proof}
\fi
\paragraph{Discussion}
The existence of NE in general delegation games remains an interesting open problem. It should be noted that the proof strategies of both Theorems~\ref{thm:NE-deterministic} and~\ref{thm:delegation-games-NE} do not work in the general case. Without a clear dichotomy of type it is not possible to assign delegations for all agents in an SCC (as we do in the proof of Theorem~\ref{thm:NE-deterministic}). And the key property upon which the proof of Theorem \ref{thm:delegation-games-NE} hinges (that a best response of an agent does not decrease the utility of other agents) fails in the general case due to the presence of non-zero effort.
Finally, it should also be observed that Theorem \ref{thm:delegation-games-NE} (as well as Proposition~\ref{prop:positive})
essentially depend on the assumption that types are \emph{independent} random variables. If this is not the case (e.g., because voters' preferences are correlated), delegation chains can become undesirable.
\begin{example}
Consider the following example with agents $1$, $2$ and $3$. The probability distribution ${\mathbb P}$ is defined as
${\mathbb P}(\tau(1)=1\wedge \tau(2)=1\wedge \tau(3)=0)=0.45$,
${\mathbb P}(\tau(1)=0\wedge \tau(2)=1\wedge \tau(3)=1)=0.45$, and
${\mathbb P}(\tau(1)=1\wedge \tau(2)=1\wedge \tau(3)=1)=0.1$.
Consequently, $p_{1,2}=0.55$, $p_{2,3}=0.55$, and $p_{1,3}=0.1$.
Let us assume that the agents' accuracies are $q_1=0.5001$, $q_2=0.51$, and $q_3=0.61$.
A delegation from agent 1 to 2 is locally positive as $q_2\cdot p_{1,2}+(1-q_2)\cdot (1-p_{1,2}) = 0.501 > q_1$.
Furthermore, a delegation from 2 to 3 is locally positive as $q_3\cdot p_{2,3}+(1-q_3)\cdot (1-p_{2,3}) = 0.511 > q_2$.
However, the resulting delegation from $1$ to $3$ is not positive since $q_3\cdot p_{1,3}+(1-q_3)\cdot (1-p_{1,3}) = 0.412$.
\end{example}
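The numbers in the example can be checked mechanically; the following sketch recomputes the proximities and the local accuracy gains from the joint type distribution (agents indexed from $0$):

```python
# Joint distribution over the type profile (tau_1, tau_2, tau_3)
P = {(1, 1, 0): 0.45, (0, 1, 1): 0.45, (1, 1, 1): 0.10}

def proximity(a, b):
    """p_{a,b}: probability that agents a and b have the same type."""
    return sum(pr for tau, pr in P.items() if tau[a] == tau[b])

def delegated_accuracy(q_to, p):
    """Accuracy obtained by delegating to an agent of accuracy q_to
    at proximity p (cf. the utility definition, with zero effort)."""
    return q_to * p + (1 - q_to) * (1 - p)
```

Running it confirms $p_{1,2}=p_{2,3}=0.55$, $p_{1,3}=0.1$, and the three accuracy values $0.501$, $0.511$ and $0.412$ reported above.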
\subsection{Quality of Equilibria in Delegation Games}
In delegation games players are motivated to maximize the tradeoff between the accuracy they acquire and the effort they spend for it. A natural measure for the quality of a delegation profile is, therefore, how accurate or informed a random voter becomes as a result of the delegations in the profile, that is, the average accuracy (i.e., $\bar{q}^*({\bf d}) = \frac{1}{n}\sum_{i\in N} q^*_i({\bf d})$) players enjoy in that profile. One can also consider the utilitarian social welfare $\mathsf{SW}({\bf d}) = \sum_{i \in N} u_i({\bf d})$ of a delegation profile ${\bf d}$. This relates to average accuracy as follows:
\[
\bar{q}^*({\bf d}) = \frac{\mathsf{SW}({\bf d}) + \sum_{i \mid d(i) = i} e_i}{n}.
\]
It can immediately be noticed that equilibria do not necessarily maximize
average accuracy.
On the contrary, in the following example NE yields an average accuracy
of close to $0.5$, whereas an average accuracy
of almost $1$ is achievable.
\begin{example} \label{ex:quality}
Consider an $n$-player delegation game where all players have the same type and
$(i,j) \in R$ for all $j$ and all $i>1$, i.e., player 1 is a sink in $R$ and cannot delegate to anyone, but all others can delegate to everyone.
Further, we have $e_1=0$ and $e_i=0.5-\epsilon$ for $i\geq 2$. The respective accuracies are $q_1=0.5+2\epsilon$ and $q_i=1$.
If player $i\geq 2$ does not delegate, her utility is $0.5+\epsilon$. Hence, it is always more desirable to delegate to player $1$ (which yields a utility of $0.5+2\epsilon$ for $i$). Consider now the profiles in which all players delegate to player $1$ (either directly or transitively). Player $1$ can only vote directly (with utility $0.5+2\epsilon$). All such profiles are NE with average accuracy $0.5+2\epsilon$.
If, however, some player $j\geq 2$ chose to vote herself, all players (except $1$) would delegate to $j$ thereby obtaining an average accuracy of $1-\frac{0.5-2\epsilon}{n}$, which converges to $1$ for $n\to\infty$. This is not a NE, as $j$ could increase her utility by delegating to $1$.
\end{example}
The findings of the example can be made more explicit by considering a variant of the price of anarchy for delegation games, based on the above notion of average accuracy. That is, for a given delegation game $G$, the price of anarchy ($\textit{PoA}$) of $G$ is given by
\[
\textit{PoA}(G) = \frac{\max_{{\bf d} \in N^n} \bar{q}^*({\bf d})}{\min_{{\bf d} \in \textit{NE}(G)} \bar{q}^*({\bf d})},
\]
where $\textit{NE}(G)$ denotes the set of pure-strategy NE of $G$.
\begin{fact}
PoA is bounded below by $1$ and above by $2$ (see Appendix~\ref{appendix:proof}).
\end{fact}
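For small games the PoA can be computed exactly by enumerating all profiles. The following sketch (homogeneous games; agents caught in a delegation cycle get accuracy $0.5$) recovers the situation of Example~\ref{ex:quality} for $n=3$ and $\epsilon=0.01$:

```python
from itertools import product

def guru(d, i):
    """Final delegate of agent i in profile d, or None on a cycle."""
    seen = set()
    while d[i] != i:
        if i in seen:
            return None
        seen.add(i)
        i = d[i]
    return i

def accuracy(d, i, q):
    g = guru(d, i)
    return 0.5 if g is None else q[g]

def utility(d, i, q, e):
    g = guru(d, i)
    return 0.5 if g is None else q[g] - (e[i] if g == i else 0.0)

def price_of_anarchy(q, e, R):
    n = len(q)
    profiles = [list(d) for d in product(*R)]   # R[i]: proxies of i
    def avg_acc(d):
        return sum(accuracy(d, i, q) for i in range(n)) / n
    def is_ne(d):
        return all(utility(d, i, q, e) + 1e-12 >=
                   utility(d[:i] + [j] + d[i + 1:], i, q, e)
                   for i in range(n) for j in R[i])
    return (max(avg_acc(d) for d in profiles) /
            min(avg_acc(d) for d in profiles if is_ne(d)))

# Example: player 0 is a sink in R; all others may delegate freely
eps = 0.01
q = [0.5 + 2 * eps, 1.0, 1.0]
e = [0.0, 0.5 - eps, 0.5 - eps]
R = [[0], [0, 1, 2], [0, 1, 2]]
```

Here the worst NE has average accuracy $q_1 = 0.52$, while the optimum is $(q_1 + 2)/3 = 0.84$, giving a PoA of about $1.62$, within the stated bounds.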
An informative performance metric for liquid democracy is the difference between the group accuracy after delegations and the group accuracy achieved by direct voting. This measure, called {\em gain}, was introduced and studied by \citet{kahng18liquid}. Here we adapt it to our setting as follows:
$
\mathsf{G}(G) = \left(\min_{{\bf d} \in \textit{NE}(G)} \bar{q}^*({\bf d})\right) - \bar{q}
$
where $\bar{q} = \bar{q}^*(\tuple{1, \ldots, n})$.
That is, the gain in the delegation game $G$ is the average accuracy of the worst NE minus the average accuracy of
the profile in which no voter delegates. It turns out that the full performance range is possible:
\begin{fact}
$\mathsf{G}$ is bounded below by $-0.5$ and above by $0.5$ (see Appendix~\ref{appendix:proof}).
\end{fact}
The above bounds for PoA and gain provide only a very partial picture of the performance of liquid democracy when modeled as a delegation game. The next section provides a more fine-grained perspective on the effects of delegations.
\section{Simulations}
We simulate the delegation game described above in a variety of settings. We restrict ourselves to homogeneous games. This allows us to relate our results to those of \citet{kahng18liquid}. Our experiments serve to both verify and extend the theoretical results of the previous section. In particular we simulate the best response dynamics employed in the proof of Theorem \ref{thm:delegation-games-NE} and show that these dynamics converge even in the setting with effort, which we could not establish analytically.
In addition, we investigate the dynamics of a one-shot game scenario, in which all agents need to select their proxy simultaneously.
\paragraph{Setup}
We generate graphs of size $N=250$ of each of the four topologies \textit{random}, \textit{regular}, \textit{small world}, and \textit{scale free}, for different average degrees, while ensuring that the graph is connected.
Agents' individual accuracy and effort are initialized randomly with $q_i \in [0.5,1]$ and $q_i - e_i \geq 0.5$.
We average results over 2500 simulations for each setting (25 randomly generated graphs $\times$ 100 initializations). Agents correctly observe their own accuracy and effort, and the accuracy of their neighbors. The game is homogeneous, so proximities are $1$.
Each agent $i$ selects from her neighborhood $R(i)$ (which includes $i$ herself) the agent $j$ that maximizes her expected utility following \eqref{eq:utility}. We compare two scenarios. The {\em iterated best response} scenario follows the procedure of the proof of Theorem~\ref{thm:delegation-games-NE}, in which agents sequentially update their proxy to best-respond to the current profile ${\bf d}$ using knowledge of their neighbors' accuracy $q_i^*({\bf d})$. In the {\em one-shot game} scenario all agents choose their proxy only once, do so simultaneously, and based only on their neighbors' accuracy. The latter setup captures more closely the epistemic limitations that agents face in liquid democracy.
\subsection{Iterated Best Response Dynamics}
These experiments complement our existence theorems. They offer insights into the effects of delegations on average voter's accuracy in equilibrium, and on the effects of different network structures on how such equilibria are achieved.
We initialize $q_i \sim \mathcal{N}(0.75,0.05)$ and first investigate the case in which $e_i = 0$ for all $i$ (effortless voting). Across all combinations of network types and average degrees ranging from 4 to 24, we find that the best response dynamics converges, as predicted by Theorem~\ref{thm:delegation-games-NE}, and does so optimally with $d_i^* = \argmax_{j} q_j$ for all $i$. We observe minimal differences between network types, but see a clear inverse relation between average degree and the
number of iterations required to converge
(Table~\ref{tab:convergence_rate}, top). Intuitively, more densely connected networks facilitate agents in identifying their optimal proxies (further details are provided in Appendix~\ref{appendix:simulation}).
We accumulate results across all network types and compare the effortless setting to the case in which effort is taken into account. When we include effort $e_i \sim \mathcal{N}(0.025,0.01)$, we still observe convergence in all cases and, interestingly, the number of iterations required does not change significantly (Table~\ref{tab:convergence_rate}, bottom). Although the process no longer results in an optimal equilibrium, each case still yields a single guru $j$ with $q_j \approx \max_{k \in N} q_k$ (less than $1\%$ error). In this scenario, the inclusion of effort means that a best response update of agent $i$ no longer guarantees non-decreasing accuracy and utility for all other agents, which was a key property in the proof of Theorem \ref{thm:delegation-games-NE}. This effect becomes stronger as the average network degree increases, and as a result higher degree networks allow a greater discrepancy between the maximal average accuracy achievable and the average accuracy obtained at stabilization (Table~\ref{tab:br_accuracy}).
\begin{table}[tb]
\caption{Total number of best response updates by individual agents and corresponding full passes over the network required for convergence. Reported are the mean (std.dev.) over all network types. \textit{Note}: not all agents update their delegation at each full pass, but any single update requires an additional pass to check whether the best response still holds.}
\label{tab:convergence_rate}
\begin{widetable}{\columnwidth}{l|cccccc}
Degree & 4 & 8 & 12 & 16 & 20 & 24 \\
\hline
BR updates & 298.1 & 261.7 & 254.2 & 251.6 & 250.6 & 250.0 \\
(effortless) & (18.2) & (11.1) & (6.9) & (4.5) & (3.3) & (2.6) \\[0.2em]
Full passes & 3.6 & 3.0 & 2.9 & 2.8 & 2.7 & 2.5 \\
(effortless) & (0.5) & (0.1) & (0.2) & (0.4) & (0.5) & (0.5) \\[0.2em]
\hline
BR updates & 294.7 & 259.4 & 252.9 & 250.9 & 250.2 & 249.9 \\
(with effort) & (18.4) & (10.6) & (6.6) & (4.8) & (4.9) & (4.3) \\[0.2em]
Full passes & 3.6 & 3.0 & 2.8 & 2.6 & 2.4 & 2.4 \\
(with effort) & (0.5) & (0.3) & (0.6) & (0.8) & (0.9) & (1.0)
\end{widetable}
\end{table}
\begin{table}[tb]
\caption{Comparing the maximum accuracy across all agents and the mean accuracy under delegation ${\bf d}$ for different network degrees, averaged across network types. The differences are statistically significant (paired t-test, $p=0.05$).}
\label{tab:br_accuracy}
\begin{widetable}{\columnwidth}{l|cccccc}
Degree & 4 & 8 & 12 & 16 & 20 & 24 \\
\hline
$\max_j q_j$ & 0.8908 & 0.8908 & 0.8904 & 0.8909 & 0.8904 & 0.8910 \\
$\bar{q}^*({\bf d})$ & 0.8906 & 0.8903 & 0.8897 & 0.8899 & 0.8890 & 0.8893
\end{widetable}
\end{table}
In lower degree graphs (e.g. degree 4) we further observe differences in convergence speed between the four different network types which coincide with differences between the average path lengths in those graphs: a shorter average distance between nodes yields a lower number of best response updates.
This is intuitive, as larger distances between nodes mean longer delegation chains, but we have not yet conducted statistical tests to verify this hypothesis.
\subsection{One-Shot Delegation Games}
Here we study one-shot interactions in a delegation game:
all agents select their proxy (possibly themselves) simultaneously among their neighbors; no further response is possible. This contrasts with the previous scenario, in which agents could iteratively improve their choice based on the choices of others.
While \citet{kahng18liquid} study a probabilistic model, we instead assume that agents deterministically select as proxy the agent $j \in R(i)$ that maximizes their utility, as above. We compare $\bar{q}$ and $\bar{q}^*$ (the average network accuracy without and with delegation, respectively), as well as the probability of a correct majority vote under both direct democracy $P_D$ and liquid democracy $P_L$ where gurus carry as weight the number of agents for whom they act as proxy. The difference $P_L - P_D$ is again similar to the notion of {\em gain} \citep{kahng18liquid}. In line with Condorcet's jury theorem (see for instance \citealt{grofman83thirteen}) $P_D \rightarrow 1$ as $N \rightarrow \infty$, and indeed for $N=250$ we obtain $P_D \approx 1$.
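The deterministic one-shot choice can be sketched as follows; the sketch also illustrates how simultaneous choices based only on neighbours' initial accuracies can create delegation cycles (the numbers are hypothetical):

```python
def one_shot_profile(q, e, R):
    """Every agent simultaneously picks the proxy in her neighbourhood
    R[i] (which contains i) maximising her expected utility, knowing
    only the initial accuracies q."""
    def value(i, j):
        # voting directly costs effort; delegating yields j's accuracy
        return q[i] - e[i] if j == i else q[j]
    return [max(R[i], key=lambda j, i=i: value(i, j))
            for i in range(len(q))]

# Two equally accurate neighbours: each prefers the other over paying
# her own effort, so both delegate and a cycle forms.
q = [0.7, 0.7]
e = [0.1, 0.1]
R = [[0, 1], [0, 1]]
```

In this two-agent instance the resulting profile is $(2,1)$ (in the paper's 1-based notation): both agents delegate to each other and end up in a cycle, the phenomenon whose frequency we measure below.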
First we again look at the effortless setting. Figure~\ref{fig:acc_maj_n250} (top) shows both metrics for the four different network types and for different average degrees.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{acc_maj_full.pdf}
\caption{{\bf Top:} Without effort. {\bf Bottom:} With effort $e_i \sim \mathcal{N}(0.025, 0.01)$. {\bf Left:} mean accuracy under liquid democracy, $\bar{q}^*({\bf d})$. The solid (dashed) line shows the mean (std. dev.) of the initial accuracy $q$; the dotted line shows $\max_i q_i$. {\bf Right:} probability of a correct majority vote under ${\bf d}$.}
\label{fig:acc_maj_n250}
\end{figure}
We observe that while $\bar{q}^*({\bf d})$ increases as the network degree increases (and in fact is always higher than $\bar{q}$ without delegation), the probability of a correct majority outcome, $P_L$, simultaneously decreases. This confirms the analysis of \citet{kahng18liquid}. We also observe that the number of gurus decreases exponentially as the degree increases (Figure~\ref{fig:perc_dist_n250_q0.75_e0}, left). Simply put, giving all voting weight to a small group of gurus increases the chance of an incorrect majority vote, assuming that gurus have a less than perfect accuracy.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\linewidth]{{perc_dist_n250_q0.75_e0}.pdf}
\caption{Percentage of guru nodes under ${\bf d}$ (left) and mean distance between (non-guru) nodes and their gurus (right).}
\label{fig:perc_dist_n250_q0.75_e0}
\end{figure}
When we include effort (Figure~\ref{fig:acc_maj_n250}, bottom), thereby moving away from the model of \citet{kahng18liquid}, we observe a drastic decrease in average network accuracy combined with a lower probability of a correct majority outcome under liquid democracy, with both decreasing as network degree increases. The main reason is the existence of delegation cycles in this case. This contrasts with the best response setting above, where agents could iteratively reconsider their choice of proxy and thus avoid cycles. Now, even with relatively low effort (mean 0.025), up to {\em half} of all agents are stuck in a cycle (and thereby fail to pass information about their type) when the degree increases.
This confirms results on the probability of delegation cycles from \citet{christoff17binary} and stresses the importance of cycle resolution in concrete implementations of liquid democracy.
Figure~\ref{fig:acc_maj_n250} further highlights differences between the four network types. Scale free networks yield a lower probability of a correct majority outcome across all degrees, as well as a larger number of gurus with a lower average accuracy and longer delegation chains (Figure~\ref{fig:perc_dist_n250_q0.75_e0}, right). Intuitively, this indicates that one-shot interactions in scale free networks are more likely to end up in a local optimum. In contrast, small world networks have short average distances
and thus agents are more likely to be close to their optimal guru.
Finally, our experiments highlight a key feature of liquid democracy: the trade-off between a reduction in (total) effort against a loss in average voting accuracy.
\section{Conclusions and Future Work}
The paper introduced delegation games as a first game-theoretic model of liquid democracy. Both our theoretical and experimental results showed that voting effort is a key ingredient for understanding how delegations form and what their effects are.
Our empirical findings provided further insights
into the influence of interaction networks on the quality of collective decisions in liquid democracy.
The paper opens up several directions of research.
A general NE existence theorem is the main open question. Our model can then be generalized in many directions, e.g.: by making agents' utility dependent on voting outcomes; by dropping the independence assumption on agents' types; or by assuming the voting mechanism has better than $0.5$ accuracy in identifying the types of agents involved in cycles.
\iffalse
\section{Acknowledgements}
We are indebted to the anonymous reviewers of IJCAI/ECAI'18 and AAAI'19 for many helpful comments on earlier versions of this paper.
We are also grateful to the participants of the LAMSADE seminar at Paris Dauphine University, and the THEMA seminar at University Cergy-Pontoise where this work was presented, for many helpful comments and suggestions.
Daan Bloembergen has received funding in the framework of the joint programming initiative ERA-Net Smart Energy Systems' focus initiative Smart Grids Plus, with support from the European Union's Horizon 2020 research and innovation programme under grant agreement No 646039.
Davide Grossi was partially supported by EPSRC under grant EP/M015815/1.
Martin Lackner was supported by the European Research Council (ERC) under grant number 639945 (ACCORD) and by the Austrian Science Foundation FWF, grant P25518 and Y698.
\fi
\section{Keywords:} FSPM, ontology, virtual plants, modeling, eco-physiology}
\end{abstract}
\section{Introduction}
Functional Structural Plant Modeling (FSPM) is a discipline studying how single plants, as well as vegetal tissues, populations and communities, can be simulated together with their spatial representation (structure) and their full interaction with the environment.
The objective is to model quantitatively how exogenous and endogenous factors affect the local environment (e.g. temperature, shading) and regulate and activate processes relevant to plant growth and to productive aspects (such as yield and quality of produce). Such a detailed representation of plants and processes can be fruitfully used for teaching botany and growing techniques, as well as in Precision Agriculture.
To simulate plant growth, the most popular approach is based on the L-system technique \citep{Prusinkiewicz-1990}, a geometry-oriented, rule-based recursion algorithm.
An alternative is the topological approach \citep{Honda-1971}, based on dedicated data structures storing the plant state in its modules (metamers). It can be considered the basis of the Object-Oriented (OO) approach, which has proved to support the coding of real plants \citep{Vitali-2012}.
Both approaches are suggested by an apparently simple plant architecture, defined on the basis of naked-eye observation and of a formalism derived from agronomic application contexts \citep{Godin-1998}.
In fact, both approaches suffer from strong limitations. L-systems hardly allow the encoding of existing plants so that they can be regrown by simulation.
On the other hand, the class hierarchy of the OO technique hardly allows modelling the plasticity typical of living tissues (specialisation) or the appearance of new organs (e.g. adventitious buds; see e.g. \citealt{Gomez-1995}).
Moreover, though both have been used to represent plant development processes at different scales (cell, tissue, organ), the way these processes are connected to one another can hardly be embedded in a unique computational scheme: there is still the need for a model that incorporates processes operating at different scales.
When developing a new model of a system to be used for simulations, system engineers start from conceptual modelling, characterised by three main tasks: (1) a mapping to the 'original' system, expressed through a (e.g. graphical) modelling language; (2) a reduction aimed at identifying a subset of the original system; and (3) the pragmatics aimed at describing its purpose \citep{Verdonck-2015}.
When conceptual models are developed in a cross-disciplinary context (e.g. engineering and botany), a lack of common agreement on terminology often occurs.
This disagreement can concern both the graphic symbolism used in conceptual modelling and the terms used to describe the system itself.
Regarding the first point, a standard has been in use for several years, the Unified Modeling Language \citep{UML-2017}, supporting several typologies of graphs to describe the structural and dynamical aspects of a system.
However, UML does not give the modeller rules about the elements to be drawn in a graph, as it lacks semantics. That is the reason why the role of ontologies in conceptual modelling has grown considerably in recent years. In particular, Ontology-Driven Conceptual Modelling (ODCM) has proved to be a robust and quick model development process \citep{Katasonov-2012}, also useful for novice modellers \citep{Verdonck-2019}.
Reasoning over ontologies may help in discovering inferences, locating concepts in a class hierarchy, testing the consistency of the conceptual model, and inspecting the conceptual knowledge embedded in the model \citep{Zedlitz-2014}.
Two types of ontologies should be distinguished, though.
The first typology is that of Foundational Ontologies, which help in classifying objects to support semantics in the modelling language \citep{Fonseca-2019}.
The second typology is that of Domain Ontologies, developed to support knowledge in a number of areas, particularly in descriptive sciences, including biology. Their terminology, however, may be so different as to make them hard to merge, and the development of a controlled vocabulary specific to plant modelling seems preferable \citep{SaintCast-2020}.
The aim of this work is to analyse the possibility of integrating available Domain Ontologies with the constraints of a Foundational Ontology for the conceptual design of functional and structural plant modelling.
\section{Ontologies}
An ontology is a shared conceptualisation of a domain of interest \citep{Gruber-1993} given by terms/symbols linked by relationships with a semantic meaning.
Ontologies sink their roots in different disciplines, including:
\begin{itemize}
\item language theory - deriving from logic and related to language computability, i.e., the possibility of proving the truth of a statement. The ontological approach was born to be used in semantic analysis (e.g. in text mining, \citealt{Sanchez-2004}), and is currently used for bibliographic research (Web Ontologies).
\item expert systems - ontologies can be considered descendants of Expert Systems (ES), developed in the 1970s and popular in the 1990s, mainly used for diagnosis purposes and to develop decision support systems. ES host knowledge in terms of axioms, premises or facts, and inference rules, to be used to obtain conclusions (with a certainty level). Ontologies generally host a general data model, made of general types of things sharing certain properties, but they may also include information about specific individuals. The first component can be considered the skeleton of a knowledge graph, helping to validate and build new knowledge \citep{Hedden-2019, Schrader-2020}.
\end{itemize}
\subsection{Foundational Ontologies}
Despite a number of suggestions produced to drive Conceptual Modelling (e.g. Occam's razor), the related graphs are easily redundant and poorly consistent, and the need for a semantic representation of UML entities has already been evidenced (e.g., \citealt{Cerans-2013}).
A way to reduce the fuzziness around entities has been proposed by Foundational Ontologies - here we will consider the most popular, the Unified Foundational Ontology (UFO; \citealt{Guizzardi-2015}), an approach that led to an extension of UML, OntoUML, introducing constraints on entities \citep{OntoUML-2022}.
The first point in UFO is to define stereotypes, that is, an object categorisation. A first distinction is made between \textit{sortal} and \textit{non-sortal} types (classes of objects), the former being those endowed with some \textit{identity} (e.g. \textit{bud}), contrasting with those that are not (e.g. \textit{tissue}).
UFO also considers the concept of \textit{rigidity}, characterising those classes that derive from one class only (e.g. \textit{vegetative-bud} $\rightarrow$ \textit{bud}); entities in such a rigid hierarchy are called \textit{kind} (top of the hierarchy) and \textit{subkind}. \textit{Anti-rigid} stereotypes are \textit{roles} and \textit{phases} (see Table \ref{tab_UFO}).
\begin{table}[!htb]
\centering
\begin{tabular}{| c | c c |}
\hline
& Rigid & Anti-rigid \\
[0.5ex]
\hline
Mixin & & Phase Mixin \\
& Category & Role Mixin \\
\hline
Sortal & Kind & Phase \\
& SubKind & Role \\
& Relator & \\
& Collective & \\
\hline
Nonsortal & Quantity & \\
\hline
Aspects & Mode & \\
& Quality & \\
\hline
\end{tabular}
\caption{ UFO stereotypes}
\label{tab_UFO}
\end{table}
Stereotypes such as \textit{phase} and \textit{role} (referring to subjects that can be sorted) or \textit{PhaseMixin} and \textit{RoleMixin} (if they cannot be sorted) may in fact be used to define entities that can derive from different classes: let us think of the 'enlighted tissue' and the 'shaded tissue', given by tissues with different 'roles' in photosynthesis during daytime, while having the same role at nighttime. In this example the \textit{Mixin} suffix refers to the fact that the organs (sortal) or tissues (non-sortal) can derive from different classes: a branch or a fruit in the sortal case, fruit skin or branch skin in the non-sortal case.
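These distinctions can be mirrored in OO code. A minimal sketch (hypothetical class names): the rigid \textit{kind}/\textit{subkind} hierarchy maps onto class inheritance, while the anti-rigid phase is modelled as mutable state rather than as a subclass:

```python
class Organ:
    """<<kind>>: rigid - an instance never ceases to be an Organ."""
    pass

class Bud(Organ):
    """<<subkind>>: rigid specialisation of Organ."""
    pass

class Tissue:
    """Non-sortal bearer whose photosynthetic role is anti-rigid:
    it changes during the day without changing the object's class."""
    def __init__(self, host):
        self.host = host   # may be a branch, a fruit, ... (Mixin-like)
        self.lit = False   # <<phaseMixin>>: toggled by the environment

    def role(self):
        return "enlighted tissue" if self.lit else "shaded tissue"
```

An object thus moves between the 'enlighted' and 'shaded' phases by a simple state change, which a rigid class hierarchy alone could not express.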
Another important class of stereotypes refers to relations, among which it is possible to identify some base stereotypes that interpret the aspects featuring a given type:
\begin{itemize}
\item characterisation: a relation between a bearer entity and some of its features
\item structuration: the base constructor of any class, collecting and ascribing those features (of different typologies) characterising each class
\end{itemize}
A set of relations such as
\textit{Part-whole}, \textit{Component\_Of}, \textit{Containment},
\textit{Member\_Of}, \textit{SubCollection\_Of}, \textit{SubQuantity\_Of}, identifies arrows commonly used in UML to connect entities.
UFO also includes relations with a logical/computational equivalent, such as
\begin{itemize}
\item Formal: e.g. a vegetative bud 'is lighter than' a flower bud
\item Material: e.g. 'flow of sap' from organ A to organ B
\end{itemize}
As OntoUML hardly supports dynamical features, other diagrams are better suited to the purpose. For Discrete Event Systems in particular, BPMN \citep{Rosing-2015} is quite popular, and it also has an ontological extension, Onto-PMN \citep{Guizzardi-2013}.
In Onto-PMN, entities \textit{participate} in \textit{events} (as previously in \textit{roles}).
Events, identified as \textit{Atomic} and \textit{Complex} (compound), are ascribed to objects (with a given role).
Most of the semantic content lies in the concept of \textit{situation} (state of affairs), which changes after the occurrence of an \textit{event}, and in that of an entity's \textit{disposition} (meant as power, ability, capacity, etc.), which determines a \textbf{causal explainability} of the occurrence of a given event. From this viewpoint an \textit{event} determines a \textit{triggering} of a \textbf{transition}, driving a \textbf{transition rule} amenable to a probabilistic approach (causal law). A distinction follows between 'triggering event' and 'resulting event'.
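A minimal Python sketch of such a probabilistic transition rule (all names - \texttt{fire}, \texttt{bud\_break}, etc. - are ours, purely illustrative of the triggering/disposition mechanism, not part of Onto-PMN):

```python
import random

def fire(event, situation, dispositions, rules, rng=random.random):
    """An event *triggers* a transition; the entity's disposition
    (power/ability/capacity) gives the probability of the causal law."""
    p = dispositions.get(event, 0.0)
    if rng() < p:
        return rules[event](situation)   # resulting situation
    return situation                     # nothing happens

# toy transition: a dormant bud breaks into a shoot
rules = {"bud_break": lambda s: {**s, "stage": "shoot"}}
before = {"stage": "dormant_bud"}
after = fire("bud_break", before, {"bud_break": 1.0}, rules)
```

With disposition 1.0 the transition is deterministic; intermediate values encode the probabilistic causal law mentioned above.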
\textbf{UFO vs OO} - UFO puts in evidence structures embedded in OO (UML), trying to manage them explicitly, such as abstract classes, which cannot be instantiated, or methods/functions that must be written in a subclass.
Some \textit{sortal} stereotypes used to represent \textit{aspects}, such as \textit{quantity} and \textit{amount}, reflect classes that can be found in OO languages, namely Set and Magnitude (the mother class of every numerical and measurable quantity), both characterised by a large number of specialised subclasses.
A subclass like \textbf{ordered-set} can be used to represent two UFO stereotypes as \textbf{aspects} including \textbf{mode} and \textbf{quality}.
Relations (in UML identified by arrows of several types) can also be ascribed to class ownership, often of a magnitude-like or set-like type: they are often associated with a multiplicity.
Following \cite{Tomaiouolo-2005}, a major problem in the OO representation of ontologies is that in the latter instances are not required to belong to some class (e.g. "the 'color white' characterizes more flowers than leaves").
The authors also point out other main differences, though proving that coding an ontology within an OO scheme may be complex, but also feasible.
The issue of roles has already been solved by \cite{Fowler-1997}, based on the definition of a specific class hierarchy for roles: \texttt{StringCollection > roles > bud-role}, and on subsequently subclassing every object requiring roles in a separate class branch: \texttt{Object > ObjectWithRoles}, including attributes to describe the possible roles and the currently acting one.
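A minimal Python sketch of this role pattern (class and role names are ours, for illustration only):

```python
class ObjectWithRoles:
    """Fowler-style role holder: a set of possible roles plus the acting one."""
    def __init__(self, possible_roles):
        self.possible_roles = set(possible_roles)
        self.active_role = None

    def take_role(self, role):
        if role not in self.possible_roles:
            raise ValueError(f"unknown role: {role!r}")
        self.active_role = role

class Bud(ObjectWithRoles):
    """A bud may act either as a vegetative or as a reproductive organ."""
    def __init__(self):
        super().__init__({"vegetative", "reproductive"})

bud = Bud()
bud.take_role("vegetative")
```

The attribute pair (\texttt{possible\_roles}, \texttt{active\_role}) is exactly the device described above: the object keeps track of which roles it can play and which one it is presently playing.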
The complexity of the translation also depends on the language the conceptual model has to be coded in. Smalltalk (supported by platforms such as Squeak, Pharo, etc.), as a pure OO language, is better suited to support those paradigms than other OO-style languages (e.g. Python, Java), which are rooted in imperative languages (such as C) requiring preliminary type declaration and memory allocation.
\textbf{Example 1} - In figure \ref{fig_1} an (onto-extended) Class Diagram is used to draw an OO toy model of a plant. An \textit{internode} is here an object that can bear a number of \textit{leaves} and \textit{buds}, the latter being specialised into \textit{vegetative-bud} and \textit{reproductive-bud}. In a dynamical simulation framework the latter develop, generating shoots and flowers respectively, entities inheriting features from the class \textit{VegetalOrgan}, while the \textit{leaves} and \textit{shoots} borne by the \textit{internode} continue growing as any other \textit{VegetalPart}.
The diagram follows the suggestions of UFO, labeling entities with $\langle\langle kind \rangle\rangle$ and $\langle\langle subkind \rangle\rangle$, while arrows represent specialisation/generalisation (is a) and composition (1..N). Two processes ($\langle\langle event\rangle\rangle$) are also used to transform objects (buds into flower and shoot), which, in a simulation perspective, could mean destroying the first object and using its parameters to create the new one (morphing).
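A minimal Python sketch of this morphing mechanism (all class names are illustrative, echoing figure \ref{fig_1} rather than reproducing it):

```python
class VegetalOrgan:
    """Common ancestor: shared parameters survive the metamorphosis."""
    def __init__(self, mass, position):
        self.mass = mass
        self.position = position

class ReproductiveBud(VegetalOrgan):
    pass

class Flower(VegetalOrgan):
    pass

def morph(organ, target_cls):
    """<<event>>: drop the source object and create the target one,
    reusing the source parameters (morphing)."""
    return target_cls(organ.mass, organ.position)

flower = morph(ReproductiveBud(0.05, (1.0, 2.0)), Flower)
```

Since both classes descend from \texttt{VegetalOrgan}, mass and location carry over naturally when the bud is destroyed and the flower created.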
\subsection{Domain Ontologies}
While a formal language (such as UML) consists of a reduced number of symbols (e.g. boxes and arrows connecting them), the languages characterising knowledge domains are used to generate large annotated dictionaries of terms and relations. Both terms and relations may be collected in a multi-hierarchical framework; the standard is represented by OWL (Ontology Web Language) and RDF (Resource Description Framework), based on XML (Extensible Markup Language).
A growing number of tools, both on-line and desktop-based, allow access to OWL: for browsing, \textbf{Ontobee} (operating on 251 ontologies); for queries, \textbf{Ontofox} (\url{https://ontofox.hegroup.org/}); for merging, \textbf{robot} \citep{Jackson-2019}; while for inspecting/editing, \textbf{Protégé} (\url{https://protege.stanford.edu}) is a widely used platform.
Ontologies can also be translated into a Graph Data Base, as in \textbf{Neo4j} (\url{https://neo4j.com/}) with the plugin \textbf{neosemantics} (\url{https://neo4j.com/labs/neosemantics/}).
The majority of ontologies can be found (and fully retrieved) at: \url{http://purl.obolibrary.org/obo/}NAME\url{.owl}.
\textbf{Ontologies for Plants} - A number of ontologies are already available and many are under development - the largest ones are in the domains of medicine and biology. From some queries in Ontobee (\url{https://www.ontobee.org}) some of the largest ontologies dedicated to plants emerge; they are reported in table \ref{tab_1} together with some size indicators.
\begin{table}[!htb]
\centering
\begin{tabular}{|c| c c c c|}
\hline
NAME & content & classes & object & annotation \\
& & & properties & properties \\
[0.5ex]
\hline\hline
BTO & BRENDA Tissue Ontology & 6569 & 10 & 27 \\
FLOPO & Flora Phenotype Ontology & 35424 & 111 & 62\\
PO & Plant Ontology & 2018 & 300 & 190 \\
PPO & Plant Phenology Ontology & 443 & 333 & 64 \\
TO & Plant Trait Ontology & 5216 & 159 & 76 \\
[1ex]
\hline
\end{tabular}
\caption{List of the largest ontologies characterising domains of research on plants}
\label{tab_1}
\end{table}
They cover a wide-range of aspects (e.g.development stages, \citealt{Pujar-2006}), but also host annotations related to species \citep{Jaiswal-2005} useful to extend the ontology to cover particular aspects \citep{Akbar-2021}, while queries can be used to develop species-specific ontologies \citep{Walls-2019}.
\textbf{Ontology structure} - Browsing Ontobee for the term \textit{bud} we get, together with its definition in PO (\textit{an undeveloped shoot system}) and its tag (\textit{PO:0000055}), other information including:
\begin{itemize}
\item hierarchy: \textit{Thing > continuant > independent continuant > material entity > plant anatomical entity > plant structure > collective plant structure > collective plant organ structure > shoot system}.
In the hierarchy the root entity \textit{thing} is followed by a chain of entity types reflecting the standard adopted by the ontology editing board. \textit{Material entity} is the starting point of many entities of major interest.
\item subclasses: \textit{vegetative bud, axillary bud, terminal bud, reproductive bud}, all representing specialisations of the parent entity (\textit{bud})
\item siblings: \textit{root-borne shoot system, shoot-borne shoot system, primary shoot system, reproductive shoot system, inflorescence branch crown, corm, bulb, vegetative shoot system, gametophore}, represent children of the same parent (\textit{shoot system - a collective plant organ structure that produces shoot-borne portions of meristem tissue and the plant structures that arise from them} - see figure \ref{fig_2}).
\item relations: properties tagged from a specific ontology (Relation Ontology - RO). Almost all such relations are represented by directed edges on a graph, and have an inverse relation - e.g. the relation \textit{develops\_from} (RO:0002202) is the inverse of \textit{develops\_into} (RO:0002203). Relations have a \textit{domain}, represented by the entity types they can be applied to.
\end{itemize}
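Such parent/child information lends itself to simple programmatic queries; here is a toy Python sketch over a hand-copied fragment of the hierarchy above (term names only, no OWL machinery):

```python
# child -> parent, hand-copied toy fragment of the PO hierarchy quoted above
PARENT = {
    "bud": "shoot system",
    "vegetative shoot system": "shoot system",
    "reproductive shoot system": "shoot system",
    "shoot system": "collective plant organ structure",
}

def siblings(term):
    """Children of the same parent, excluding the term itself."""
    parent = PARENT.get(term)
    return {t for t, p in PARENT.items() if p == parent and t != term}
```

The siblings listed in the Ontobee entry are recovered exactly as the other children of the term's parent.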
\textbf{Some relevant terms} - In BTO, \textit{\textbf{WholePlant}} (BTO:0001461 - \textit{the main part of a plant}) has a main child \textit{shoot} (BTO:0001243 - \textit{a sending out of new growth or the growth sent out: as a stem or branch with its leaves and appendages especially when not yet mature}), which has the following parts:
\begin{itemize}
\item \textit{stem - BTO:0001300 - The main trunk of a plant; specifically: a primary plant axis that develops buds and shoots instead of roots} (with 8 subclasses);
\item \textit{internode - BTO:0000636 - Region on a stem between nodes} (no descendants);
\item \textit{leaf - BTO:0000713 - A lateral outgrowth from a plant stem that is typically a flattened expanded variably shaped greenish organ, constitutes a unit of the foliage, and functions primarily in food manufacture by photosynthesis}, with subclasses: \textit{bract, leaflet, final leaf, true leaf, etc};
\item \textit{bud - BTO:000158 - A small lateral or terminal protuberance on the stem of a plant that may develop into a flower, leaf, or shoot}, including in its subclasses: \textit{apical bud, dormant eye, axillary bud, leaf bud, flower bud (further specialised)}.
\end{itemize}
Such entities are all children of \textit{thing}.
A more detailed plant description can be found in PO, where \textit{whole plant - PO:000000} is a child of \textit{\textbf{plant structure} - PO:0009011 - A plant anatomical entity that is, or was, part of a plant, or was derived from a part of a plant}, where we may recognise two important children:
\begin{itemize}
\item \textit{collective plant structure - PO:0025007 - a plant structure that is a proper part of a whole plant and includes two or more adjacent plant organs or adjacent cardinal organ parts, along with any associated portions of plant tissue}. Its main descendant is \textit{\textbf{shoot system} - PO:0009006 - A collective plant organ structure (PO:0025007) that produces shoot-borne portions of meristem tissue (PO:0009013) and the plant structures (PO:0009011) that arise from them}, whose children include:
\begin{itemize}
\item \textit{Bud - PO:0000055 - An undeveloped shoot system};
\end{itemize}
\item \textit{multi-tissue plant structure - PO:0025496 - a plant structure that has as parts two or more portions of plant tissue of at least two different types and which through specific morphogenetic processes forms a single structural unit demarcated by primarily bona-fide boundaries from other structural units of different types}. Children include \textit{\textbf{plant organ} - PO:0009008 - A multi-tissue plant structure that is a functional unit, is a proper part of a whole plant, and includes portions of plant tissue of at least two different types that derive from a common developmental path}, which in turn includes:
\begin{itemize}
\item \textit{shoot axis - PO:0025029 - a plant axis that is part of a shoot system}, having as children \textit{stem} (PO:0009047 - \textit{a shoot axis that is the primary axis of a plant}) and \textit{branch} (PO:0025073 - \textit{a shoot axis that develops from an axillary bud meristem or from equal divisions of a meristematic apical cell});
\item \textit{phyllome - PO:0006001 - a lateral plant organ produced by a shoot apical meristem}, having as a child \textit{leaf} (PO:0025034 - \textit{a phyllome that is not associated with a reproductive structure}).
\end{itemize}
\end{itemize}
Every \textit{plant organ} is part of a \textit{shoot system}.
Most of these entities are used in other ontologies, where they are enriched with relations and annotations. They are also indirectly referred to in TO (Trait Ontology), e.g. \textit{branch angle} (TO:100000009 - \textit{a branch morphology trait which is the angle of a branch}), where entities support almost every phenotypical observation.
\textbf{Dynamic aspects} - In the Plant Phenology Ontology (PPO) it is possible to find entities describing dynamical aspects. There is an ancestor named \textit{occurrent} having among its children \textit{process}, defined in the Basic Formal Ontology (BFO:0000015) as \textit{an occurrent that has temporal proper parts and for some time t, process s-depends\_on some material entity at t}.
\textit{Process} has a single child (in PO) \textit{\textbf{plant structure development stage} - PO:0009012 - a stage in the life of a plant structure during which the plant structure undergoes developmental processes}, with children:
\begin{itemize}
\item \textit{whole plant development stage}
\item \textit{collective plant organ structure development stage} (including bud development stages);
\item \textit{plant tissue development stage}, including development stage of vascular tissues (xylem,phloem);
\item \textit{multi-tissue plant structure development stage}, including development stages for fruit, seed and other plant organs.
\end{itemize}
Such a hierarchical representation is different from that found in PPO, where development stages descend directly from \textit{occurrent}, while \textit{process} includes:
\begin{itemize}
\item \textit{biological process}
\item \textit{collective plant organ structure development stage} (including development processes);
\item \textit{multicellular organismal process};
\item \textit{reproductive process};
\item \textit{response to stimulus};
\end{itemize}
The PO and PPO interpretation of \textit{process} is also rather different from that given in the NCI thesaurus - \textit{NCIT:C17828 - An activity occurring within an organism, between organisms or among organisms and the mechanisms underlying such events} (the NCIT \textit{process} and \textit{biological process} entries are not used in PO, PPO or other plant-related ontologies).
\textbf{Analysing OWL}
Looking at the PO and PPO OWL files, it is possible to see the practical use of the OWL language.
\textit{Class} is the main entry, used to describe and annotate terms.
\textit{AnnotationProperty} is the local dictionary of descriptors, e.g. \textit{definition}, \textit{synonym}, etc.
\textit{Axioms} are assertions about a property relating a source to a target. They are used to enrich the set of annotations, and may also include supplementary definitions and synonyms (exact or narrow) using the tags defined in \textit{AnnotationProperty}.
\textit{ObjectProperty} entries report the relations between entities, the most relevant being:
\begin{itemize}
\item
\textit{part\_of}
\& \textit{has\_part}
\item
\textit{preceded\_by}
\& \textit{precedes}
\item \textit{participates\_in}
\& \textit{has\_participant}
\item \textit{located\_in}
\& \textit{adjacent\_to}
\item \textit{develops\_from}
\& \textit{develops\_to}
\item \textit{starts}
\& \textit{starts\_with}
\item \textit{ends\_after}
\& \textit{bearer\_of}
\item \textit{generated\_from}
\& \textit{depends\_on}
\item \textit{developmentally\_preceded\_by}
\& \textit{developmentally\_precedes}
\end{itemize}
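Most of these properties come in inverse pairs, which can be kept consistent mechanically; a small Python sketch (our own toy helper, not an OWL API), using pairs from the list above:

```python
# inverse pairs taken from the list above (develops_from/develops_to per RO)
INVERSES = {
    "part_of": "has_part",
    "preceded_by": "precedes",
    "participates_in": "has_participant",
    "develops_from": "develops_to",
}
INVERSES.update({v: k for k, v in list(INVERSES.items())})

def assert_edge(graph, source, relation, target):
    """Add a directed edge and, automatically, its inverse."""
    graph.setdefault((source, relation), set()).add(target)
    inverse = INVERSES.get(relation)
    if inverse:
        graph.setdefault((target, inverse), set()).add(source)

g = {}
assert_edge(g, "vegetative bud", "develops_from", "bud")
```

Asserting one direction of the relation is enough: the inverse edge is derived, which is exactly how reasoners exploit \texttt{owl:inverseOf} declarations.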
These properties are used (in classes) in a 'subclass-restriction', 'equivalentClass-restriction', or 'disjointWith-restriction' context \citep{OWL-2004}.
The 'restriction' is used to define the class (as subclass, same class, or outside the reference class hierarchy) on the basis of a property, which may depend on a parameter value or on a number of allowed items ('cardinality'); here is a fragment of the ontology entry for \textit{PO\_0001094: plant embryo coleoptilar stage}:
\begin{verbatim}
<rdfs:subClassOf>
  <owl:Restriction>
    <owl:onProperty rdf:resource="http://purl.obolibrary.org/obo/BFO_0000063"/>
    <owl:someValuesFrom rdf:resource="http://purl.obolibrary.org/obo/PO_0001081"/>
  </owl:Restriction>
</rdfs:subClassOf>
\end{verbatim}
telling that the \textit{plant embryo coleoptilar stage} \textit{precedes} (BFO\_0000063) the \textit{mature plant embryo stage} (PO\_0001081).
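This fragment can be read mechanically with any XML library; below is a minimal, self-contained Python sketch using the standard \texttt{xml.etree} module (the namespace declarations are added by us, since the fragment alone does not carry them):

```python
import xml.etree.ElementTree as ET

OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

snippet = f"""<rdfs:subClassOf xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
      xmlns:owl="{OWL}" xmlns:rdf="{RDF}">
  <owl:Restriction>
    <owl:onProperty rdf:resource="http://purl.obolibrary.org/obo/BFO_0000063"/>
    <owl:someValuesFrom rdf:resource="http://purl.obolibrary.org/obo/PO_0001081"/>
  </owl:Restriction>
</rdfs:subClassOf>"""

root = ET.fromstring(snippet)
restriction = root.find(f"{{{OWL}}}Restriction")
# extract the property (BFO_0000063, 'precedes') and its target (PO_0001081)
prop = restriction.find(f"{{{OWL}}}onProperty").attrib[f"{{{RDF}}}resource"]
target = restriction.find(f"{{{OWL}}}someValuesFrom").attrib[f"{{{RDF}}}resource"]
```

Dedicated RDF libraries (e.g. rdflib) or the Neo4j import mentioned below do this systematically over the whole OWL file.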
The following figures have been produced by importing the OWL into a GDB with Neo4j-semantics. Nodes are shown as circles with the property \textit{label} as caption.
Figure \ref{fig_3} reports the set of \textit{Class} nodes centered on \textit{shoot system} from PO.
\textbf{UFO interpretation} - From a UFO viewpoint, it can be observed that the entities appearing in figure \ref{fig_3} belong to the \textit{Kind} and \textit{Subkind} stereotypes.
In those ontologies, together with organs, entities such as \textit{root initial cell} and \textit{vascular system} also appear, evidencing \textit{Sortal} entities at different scales, together with \textit{Non-sortal} entities such as \textit{portion of 'some tissue'}, easily related to stereotypes such as \textit{Quantity}.
However, OWL does not seem to offer any support for distinguishing \textit{Sortal} from \textit{Non-sortal} entities, nor any clues about class stereotyping.
Object Properties contain features referring to the relative allocation of entities in space and time, easily recognised as UFO relations.
While spatial features could be enriched for detail from other ontologies (as Trait Ontology), dynamical aspects can be detailed by ontologies as PPO.
In Onto-PMN, events may include creators/destroyers of entities - in PPO the family of processes includes most of the GO types descending from \textit{process} (including organ development, reproduction, dormancy, aging, response to stimulus, abscission, germination), covering both unicellular and multicellular organisms.
Such processes may easily include any metamorphosis process, as well as the development of new organs (latent bud).
Nonetheless, processes, as much as events, are (non-material) entities in which material entities participate - via the ObjectProperty \textit{participates\_in} listed above, e.g.:
\textit{dormant leaf bud - participates\_in (some) - bud dormancy process}.
Figure \ref{fig_4} reports the construction of a Conceptual Model for the dynamics of \textit{Bud}, in relation to the organ bearing it (\textit{Plant Shoot}) and to the tissue it is made of, collecting the classes from PO (on the left). \textit{Bud} dynamics are represented by stand-alone entities - in particular \textit{Bud\_Swell\_Stage} inherits features from \textit{System\_Devel\_Stage} and has any vegetal part as a participant. This is the technique that allows, in the OO paradigm, to have 'messages' as stand-alone functions.
Dynamical aspects may be related to some change of \textit{phase}, but they hardly cover metamorphoses of entities - an entity may disappear (or die) and be substituted by an evolved self; as both are descendants of a common class (e.g. plant part), the new entity inherits some properties, such as mass and location.
Morphing, together with \textit{roles}, though implicitly present in some complementary ontologies, could require more specialised add-ons to include concepts already faced in popular branches of plant \& crop modeling (e.g. sink/source modeling), which may also involve technical terms: tree growers refer to vegetal parts with names not in use among botanists, such as "brindle", "sucker", "dart", "spur", etc.
\section{Conclusion}
The development of Domain Ontologies is becoming a common practice in the scientific community; they are growing richer and richer in descriptions of living beings' anatomy and behavior, while using terms shared by a large community.
Ontology-based Conceptual Modeling may profit from such a large amount of coded description, together with the constraints defined by Foundational Ontologies, to suggest a new approach to plant structural and functional modeling.
The analysis puts in evidence that ontologies offer the possibility to include in models a large amount of details and processes occurring at several scales, and eventually to integrate them.
Domain Ontologies, however, may need a 'bridge ontology' to be merged together and enriched with the semantic descriptors suggested by Foundational Ontologies.
Because of the size and complexity of ontologies, the next step should consist in the development of a tool helping to identify missing definitions and relations, useful for improving current ontologies and, at the same time, for the robust design of Conceptual Models.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Funding}
Not applicable.
\section*{Acknowledgments}
Not applicable.
\section*{Supplemental Data}
Not applicable.
\section*{Data Availability Statement}
The study is based on publicly available ontologies reported in the manuscript. Any detail on methodology is available directly from author.
\bibliographystyle{Frontiers-Harvard}
\section{Introduction}
The state space of a charged particle moving in a homogeneous magnetic field in a plane orthogonal to the field decomposes into Landau levels, differing in energy by integral multiples of the magnetic field strength. When the position coordinates are expressed as complex numbers in the symmetric gauge the states in the lowest Landau level form a Bargmann space of holomorphic functions while wave functions in higher Landau levels involve also powers
of the complex conjugate position variables in the standard representation.
As has long been noted by many authors, and emphasized in particular in \cite{Haldane-13,Haldane-18,CheBis-18}, a holomorphic representation of states is not limited to the lowest Landau level, where it has proved to be important for deriving some basic properties, e.g. \cite{LieRouYng-16,LieRouYng-17,OlgRou-19,Rougerie-xedp19,RouYng-17}. In fact, there is a natural unitary correspondence between states in different Landau levels, in particular between higher levels and the lowest one.
In this expository paper we discuss several ways to arrive at the holomorphic representations and derive explicit formulas for particle densities and effective Hamiltonians in higher Landau levels, expressed in terms of corresponding quantities in the lowest Landau level. The methods have appeared in various disguises in the literature before but our aim is to present them in a coherent fashion that, we hope, will be found useful for students and researchers in quantum Hall physics.
A physically appealing starting point is the decomposition of the position variables into guiding center variables and variables associated with the cyclotron motion of the particle around the guiding centers. While the components of the position operator commute, the other two sets of variables are non-commutative and satisfy canonical commutation relations. They can be represented in terms of creation and annihilation operators for two distinct and mutually commuting harmonic oscillators. One way of arriving at a holomorphic representation of states is an expansion in terms of coherent states for the harmonic oscillator of the guiding center variables~\cite{Haldane-18,CheBis-18}. (These are the same as the \lq\lq vortex eigenstates\rq\rq in \cite{ChaFlo-07,ChaFloCan-08,ChaFlo-09}).
The transformation between position coordinates and the coherent state variables can also be expressed in terms of an integral operator with a kernel that is a modification of the reproducing kernel of a Bargmann space~\cite{ChaFlo-07}.
A formally simpler and more direct approach is to use the creation and annihilation operators of the cyclotron oscillator to define unitary mappings between different Landau levels\footnote{This approach appears already in~\cite{MacDonald-84} where it is attributed to Laughlin.}.
This gives explicit formulas for particle densities of many-body states in one Landau level in terms of polynomials in the Laplacian applied to corresponding densities in the lowest Landau level. The same formulas can alternatively be obtained by a Fourier transformation, exploiting the factorization of the exponential factor in the guiding center and cyclotron variables respectively.
The main application of these considerations is in quantum Hall physics~\cite{Goerbig-09,Jain-07,Laughlin-99,StoTsuGos-99,Tong-16}. In this context, an electron gas is confined to two spatial dimensions and subjected to a magnetic field large enough to set the main energy scale. The quantization of the kinetic energy levels then becomes the salient feature. In the full plane, each level is infinitely degenerate, but for a finite area the degeneracy is proportional to the area times the field strength. For extremely large values of the latter, the lowest Landau level is degenerate enough to accommodate all electrons without violating the Pauli principle.
For smaller values of the field several Landau levels can be completely filled with electrons and become inert in first approximation. The physics then boils down to the motion of the electrons in the last, partially filled, Landau level.
In both cases only one Landau level has to be taken into account, and an effective model of widespread use in the literature is given in terms of a Hamiltonian acting on holomorphic functions. We review this first, before describing in more details the unitary mappings between Landau levels. The remarkable fact is that the dependence of the effective Hamiltonian on the Landau level it corresponds to is quite simple and transparent. An intuitive explanation (albeit not the most direct one from a computational point of view) is that the good variables to use are not the position variables but rather those of the guiding centers. The Landau level index, which fixes the energy of the cyclotron motion, is encoded in a form factor in Fourier space that modifies external and interaction potentials via a differential operator.
In particular, the unitary mappings between Landau levels map multiplication by potentials to operators of the same kind.
One salient feature of the effective operators acting on holomorphic functions is that they naturally suggest variational ans\"atze for their ground states, which become exact for certain truncated models. The Laughlin state~\cite{Laughlin-83,Laughlin-87} is the most emblematic of those, and much of our understanding of the fractional quantum Hall effect rests on its remarkable properties. In Sec.\ VI we apply our formulas to Laughlin states in an arbitrary Landau level, computing their density profiles and extending rigidity results from~\cite{LieRouYng-16,LieRouYng-17,RouYng-17,OlgRou-19}.
\bigskip
\section{Projected Hamiltonians and densities in quantum Hall physics}
Let us start from the many-body Hamiltonian (in symmetric gauge) for interacting 2D electrons in a constant perpendicular external magnetic field $B$ and a one-body potential~$V$
\begin{equation}\label{eq:full hamil}
H = \sum_{j=1} ^N \left[ \left( -\mathrm{i} \nabla_{\mathbf{r}_j} + \frac{B}{2} \mathbf{r}_j ^{\perp} \right) ^2 + V (\mathbf{r}_j) \right] - NB + \sum_{i<j} w (\mathbf{r}_i-\mathbf{r}_j).
\end{equation}
Here $w$ is the radial repulsive pair interaction potential, modeling 3D Coulomb interactions\footnote{Although electrons are confined to a 2D interface, they retain their interactions via the 3D Coulomb kernel.} in quantum Hall (QH) physics, but more general choices are also of interest. The one-body potential $V$ incorporates trapping in a finite size sample, plus the electrostatic potential generated by impurities. Mathematical conditions on the potentials $V$ and $w$ will be stated below. For convenience we have subtracted $NB$ from the kinetic part of the energy so that its lowest value is 0.
In the sequel vectors $\mathbf{r} = (x,y) \in \mathbb{R}^2$ will very often be identified with complex numbers $z=x+{\mathrm i} y\in \mathbb{C}$.
As appropriate for electrons we consider the action of $H$ on the fermionic antisymmetric space
\begin{equation}\label{eq:full space}
L^2_{\mathrm{asym}} (\mathbb{R}^{2N}) = \bigotimes_{\mathrm{asym}} ^N L^2 (\mathbb{R}^2).
\end{equation}
For bosons one considers the symmetric tensor product instead; this is relevant for rotating cold atomic gases, where the rotation frequency takes over the role of the magnetic field.
In fractional quantum Hall (FQH) physics, the energy scales are set, by order of importance: first by the magnetic field, second by the repulsive interactions, third by the one-body potential. Our discussion in the sequel will reflect this.
\subsection{Landau levels}
For large $B$ it is relevant to restrict particles to live in an eigenspace of $\left( -\mathrm{i} \nabla_{\mathbf{r}} + \frac{B}{2} \mathbf{r} ^{\perp} \right) ^2.$ Denote by
\begin{equation}\label{eq:nLL}
n\mathrm{LL} := \left\{ \psi \in L^2 (\mathbb{R}^2), \quad \left( -\mathrm{i} \nabla_{\mathbf{r}} +\hbox{ $\frac{B}{2}$} \mathbf{r} ^{\perp} \right) ^2 \psi = 2 B \left(n + \hbox{$\frac{1}{2}$} \right) \psi \right\}
\end{equation}
the $n$-th Landau level. The lowest level ($n=0$) will be denoted by $\mathrm{LLL}$; it is made of analytic $\times$ gaussian functions:
\begin{equation}\label{eq:LLL}
\mathrm{LLL} = \left\{ \psi (\mathbf{r}) = f (x + \mathrm{i} y) e^{-\frac{B}{4} |\mathbf{r}| ^2} \in L^2, \quad f \mbox{ analytic } \right\}.
\end{equation}
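The structure of~\eqref{eq:nLL} and~\eqref{eq:LLL} follows from a standard ladder operator computation, which we sketch here with the sign convention $\mathbf{r}^\perp = (y,-x)$ (the one making LLL functions holomorphic rather than anti-holomorphic). Writing $\partial_z = \frac{1}{2}\left(\partial_x - \mathrm{i}\, \partial_y\right)$, $\partial_{\bar z} = \frac{1}{2}\left(\partial_x + \mathrm{i}\, \partial_y\right)$ and setting
\begin{equation*}
a = \sqrt{\frac{2}{B}} \left( \partial_{\bar z} + \frac{B}{4}\, z \right), \qquad a^\dagger = \sqrt{\frac{2}{B}} \left( - \partial_z + \frac{B}{4}\, \bar z \right), \qquad [a, a^\dagger] = 1,
\end{equation*}
one checks that
\begin{equation*}
\left( -\mathrm{i} \nabla_{\mathbf{r}} + \frac{B}{2} \mathbf{r}^\perp \right)^2 = 2B \left( a^\dagger a + \frac{1}{2} \right), \qquad a \left( f(z)\, e^{-\frac{B}{4}|z|^2} \right) = \sqrt{\frac{2}{B}}\, (\partial_{\bar z} f)\, e^{-\frac{B}{4}|z|^2},
\end{equation*}
so that the kernel of $a$ is precisely~\eqref{eq:LLL}, while $(a^\dagger)^n / \sqrt{n!}$ maps it unitarily onto the $n$-th level~\eqref{eq:nLL} with energy $2B\left(n+\frac{1}{2}\right)$.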
The corresponding fermionic spaces for $N$ particles will be denoted by $n\mathrm{LL}_N$ and $\mathrm{LLL}_N$:
\begin{equation}\label{eq:LLN}
\mathrm{LLL}_N = \bigotimes_\mathrm{asym} ^N \mathrm{LLL}, \quad n\mathrm{LL}_N = \bigotimes_\mathrm{asym} ^N n\mathrm{LL}.
\end{equation}
\subsection{Hamiltonians in the LLL}
Consider projecting~\eqref{eq:full hamil} to the LLL. The first term is just a constant, the others can be expressed using the canonical basis
\begin{equation} \varphi_m (z) = (\pi m!)^{-1/2} z^m e^{-\frac{B}{4} |z| ^2}.\end{equation}
Projecting~\eqref{eq:full hamil} to the LLL leads formally to
\begin{equation}\label{eq:LLL hamil}
H_{w,V} ^{\mathrm{LLL}} = \sum_{j=1}^N \sum_{m,\ell \geq 0} \left\langle \varphi_m | V | \varphi_\ell \right\rangle \left| \varphi_m \right \rangle \left\langle \varphi_\ell \right |_j + \sum_{i<j} \sum_{m\geq 0} \left\langle \varphi_m | w | \varphi_m \right\rangle (|\varphi_m \rangle \langle \varphi_m |)_{ij}
\end{equation}
where $(|\varphi_m \rangle \langle \varphi_m |)_{ij}$ projects\footnote{Note that fermionic wave-functions do not see the even $m$ terms of~\eqref{eq:LLL hamil}.} the relative coordinate $\mathbf{r}_i - \mathbf{r}_j$ on the state $\varphi_m$. Similarly $\left| \varphi_m \right\rangle \left\langle \varphi_\ell \right |_j$ is the operator mapping $\varphi_\ell$ to $\varphi_m$, acting on the $j$ variable only.
We assume that the potentials are measurable functions and that the \lq\lq moments\rq\rq
\begin{equation} \langle \varphi_m|\, |V|\, |\varphi_m\rangle = \frac{1}{\pi m!}\int_{\mathbb R^2} |V(\mathbf r)|r^{2m} e^{-\frac{B}{2}r^2} d^2\mathbf r, \quad \langle \varphi_m|\, |w|\, |\varphi_m\rangle = \frac{1}{\pi m!} \int_{\mathbb R^2} |w(\mathbf r)| r^{2m}e^{-\frac{B}{2}r^2}d^2\mathbf r\end{equation}
are finite for all $m$. Then \eqref{eq:LLL hamil} is well defined as a quadratic form on a dense subspace of $\mathrm{LLL}_N$. Finiteness for $m=0$ means in particular that the potentials are in $L^1_{\rm loc}(\mathbb R^2)$, so derivatives of the potentials are well defined in the sense of distributions. If the moments are uniformly bounded in $m$ and the potentials rotationally symmetric (which implies the absence of terms $m\neq\ell$ in~\eqref{eq:LLL hamil}), then the corresponding operators are bounded and defined on the whole space.
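For illustration, take the pure Coulomb kernel $w(r) = 1/r$ and set $B=2$ (the convention for which the $\varphi_m$ above are exactly $L^2$-normalized); a direct computation then gives the interaction moment in closed form, $\Gamma(m+1/2)/m!$, which the following rough Python sketch (midpoint Riemann sum, illustration only, not production code) reproduces numerically:

```python
import math

def coulomb_moment(m, dr=1e-4, rmax=12.0):
    """Moment <phi_m| 1/r |phi_m> for w(r)=1/r and B=2, i.e.
    (1/(pi*m!)) * Int (1/r) r^{2m} e^{-r^2} d^2r, by a midpoint sum."""
    total, r = 0.0, dr / 2.0
    while r < rmax:
        total += r ** (2 * m) * math.exp(-r * r) * dr  # (1/r)*r^{2m}*e^{-r^2}*r dr
        r += dr
    return 2.0 * total / math.factorial(m)             # 2*pi/(pi*m!) = 2/m!

# closed form: Gamma(m + 1/2)/m!  -- decays with m, which underlies the
# truncation argument of the next paragraph
```

The decay of these coefficients with $m$ is what makes the truncation leading to the Laughlin state a sensible approximation.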
Usually in FQH physics one focuses attention on the interaction term in \eqref{eq:LLL hamil} (i.e., one sets $V\equiv 0$). There are no off-diagonal terms in it because $w$ is assumed to be radially symmetric. The coefficients $\left\langle \varphi_m | w | \varphi_m \right\rangle$ are often called ``Haldane pseudo-potentials'', cf. \cite{Haldane-1983}. If $w$ decreases rapidly at infinity then they also decrease rapidly with increasing $m$ and a basic observation in the theory of the fractional quantum Hall effect (FQHE) is that, if one truncates the sum~\eqref{eq:LLL hamil} at $m = \ell-1$, then the Laughlin state
\begin{equation}\label{eq:Laughlin}
\Psi_{\rm Lau} ^{(\ell)} (z_1,\ldots,z_N) = c _{\rm Lau} ^{(\ell)} \prod_{i<j} (z_i-z_j) ^{\ell} e^{-\frac{B}{4} \sum_{j=1} ^N |z_j| ^2}
\end{equation}
is an exact ground state ($L^2$-normalized by the constant in front). One can then argue, and prove to some extent~\cite{LieRouYng-16,LieRouYng-17,RouYng-17,OlgRou-19}, that such functions and natural variants are extremely robust, in particular to the addition of the external potential $V$.
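For orientation, here is a numerical sketch (ours, not from the text) of the rapid decay of the Haldane pseudo-potentials: for a hypothetical Gaussian interaction $w(r)=e^{-r^2}$ the moment formula above evaluates in closed form to $(B+1)^{-(m+1)}$, i.e.\ the pseudo-potentials decay geometrically in $m$.

```python
import math
from scipy.integrate import quad

def pseudopotential(m, B=2.0):
    """Haldane pseudo-potential <phi_m| w |phi_m> for the sample interaction
    w(r) = exp(-r^2), from the moment formula (1/(pi m!)) int w r^{2m} e^{-B r^2} d^2r."""
    integrand = lambda r: math.exp(-r**2) * r**(2 * m + 1) * math.exp(-B * r**2)
    val, _ = quad(integrand, 0.0, math.inf)          # radial integral, measure r dr
    return 2.0 * math.pi * val / (math.pi * math.factorial(m))

B = 2.0
pp = [pseudopotential(m, B) for m in range(6)]
# closed form for this particular w: (B+1)^{-(m+1)}, a geometric decay in m
for m, v in enumerate(pp):
    assert abs(v - (B + 1.0) ** (-(m + 1))) < 1e-8
```

The geometric ratio $1/(B+1)$ is specific to the Gaussian choice of $w$; rapidly decreasing pseudo-potentials are, however, generic for short-range interactions.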
\begin{rem} For very strong interaction potentials of range much smaller than the magnetic length $\sim B^{-1/2}$, in particular if there is a hard core, an expansion in terms of moments as in \eqref{eq:LLL hamil} is not adequate. This situation is analysed in \cite{SY-20} which generalizes the paper \cite{LS-09}. It is shown that in an appropriate scaling limit the pseudo-potential operators $|\varphi_m\rangle\langle \varphi_m|$ also emerge, but with renormalized pre-factors involving the scattering lengths of the interaction potentials in the different angular momentum channels, rather than expectation values as in \eqref{eq:LLL hamil}.\end{rem}
\subsection{Hamiltonians in higher Landau levels}
Consider now a situation where $n-1$ Landau levels are filled, so that additional electrons must sit in the higher ones, because of the Pauli principle. It is a common procedure in the FQH physics community~\cite{Jain-07,GoeLed-06,Tong-16} to model this situation using lowest Landau level (LLL) functions again. The basis for this reduction is the following statement, contained in one form or another in a number of sources, in particular~\cite{Haldane-18,CheBis-18,Tong-16,Jain-07, MacDonaldGirvin-86, CifQui-10}.
\begin{theorem}[\textbf{Effective Hamiltonian in the $n$-th Landau level}]\label{thm:main}\mbox{}\\
Let $H$ be given by~\eqref{eq:full hamil} and define
\begin{equation} H^{n\mathrm{LL}} = P^{n\mathrm{LL}} H P^{n\mathrm{LL}} \end{equation}
where $P^{n\mathrm{LL}}$ orthogonally projects all particles into the $n\mathrm{LL}$, i.e. it is the orthogonal projector from $L^2_{\mathrm{asym}}(\mathbb{R}^{2N})$ to $n\mathrm{LL}_N$.
Then, for any $n$ there exists an effective external potential $V_n$ and an effective (radial) interaction potential $w_n$, depending only on $V,w$ and $n$ such that
\begin{equation} H^{n\mathrm{LL}}- n\cdot 2BN \end{equation}
is unitarily equivalent to the LLL Hamiltonian $H_{V_n,w_n} ^{\mathrm{LLL}}$, defined as in~\eqref{eq:LLL hamil}, and acting on $\mathrm{LLL}_N$.
The effective $n$-th level potentials are as follows:
\begin{align} V_n (\mathbf{r}) &= L_n \left(-\mbox{$\frac 14$}{\Delta} \right) V (\mathbf{r}) \label{eq:eff pot}
\\
w_n (\mathbf{r}) &= L_n \left(-\mbox{$\frac 14$} {\Delta}\right) ^2 w (\mathbf{r})\label{eq:eff int}
\end{align}
where $\Delta$ is the Laplacian and $L_n$ the Laguerre polynomial
\begin{equation}\label{eq:Laguerre pre}
L_n (u) = \sum_{l=0} ^n {n\choose l} \frac{(-u)^l}{l!}.
\end{equation}
\end{theorem}
\begin{rem} Since we have not assumed any regularity of $V$ and $w$ except being measurable functions with finite moments the differentiations in \eqref{eq:eff pot} and \eqref{eq:eff int} have in general to be understood in the sense of distributions. This poses no problems, however, because the potentials are integrated against densities of wave functions in $\mathrm{LLL}_N$, which are smooth functions. Moreover, the densities have the form of polynomials times a gaussian so the finiteness of the moments for all $m$ guarantees that the integrals are well defined.
In Sec.\ V B it will be convenient to assume that the potentials have integrable Fourier transforms, but this is not really an extra restriction because the general case follows by a density argument.
\end{rem}
\medskip
We shall give two proofs of the Theorem in Sec.~\ref{sec:proof thm}. Note that the constant we subtract from $H^{n\mathrm{LL}}$ is the magnetic kinetic energy of $N$ particles in the $n\mathrm{LL}$, measured relative to the lowest Landau level.
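As a concrete sanity check of \eqref{eq:eff pot} (ours, in the $B=2$ units introduced below, for a hypothetical Gaussian potential $V(r)=e^{-r^2}$): since $L_1(u)=1-u$, the effective potential is $V_1=(1+\frac14\Delta)V=r^2e^{-r^2}$, and the expectation values of $V$ in the states $\varphi_{1,m}$ should coincide with those of $V_1$ in $\varphi_{0,m}$.

```python
import math
from scipy.integrate import quad

def V(r):   # sample external potential (illustration only)
    return math.exp(-r**2)

def V1(r):  # L_1(-Delta/4) V = (1 + Delta/4) V ; for this V it equals r^2 e^{-r^2}
    return r**2 * math.exp(-r**2)

def radial_integral(f):
    # integral over R^2 of a radial function: 2*pi * int_0^inf f(r) r dr
    val, _ = quad(lambda r: f(r) * r, 0.0, math.inf)
    return 2.0 * math.pi * val

def dens_1m(r, m):
    # |phi_{1,m}(r)|^2 in B = 2 units: r^{2(m-1)} (r^2 - m)^2 e^{-r^2} / (pi m!)
    if m == 0:
        return r**2 * math.exp(-r**2) / math.pi
    return r**(2 * (m - 1)) * (r**2 - m)**2 * math.exp(-r**2) / (math.pi * math.factorial(m))

def dens_0m(r, m):
    # |phi_{0,m}(r)|^2: r^{2m} e^{-r^2} / (pi m!)
    return r**(2 * m) * math.exp(-r**2) / (math.pi * math.factorial(m))

for m in range(5):
    lhs = radial_integral(lambda r: V(r) * dens_1m(r, m))    # <phi_{1,m}| V  |phi_{1,m}>
    rhs = radial_integral(lambda r: V1(r) * dens_0m(r, m))   # <phi_{0,m}| V_1 |phi_{0,m}>
    assert abs(lhs - rhs) < 1e-8
```

The same comparison for the interaction term would use the square of the Laguerre operator, cf.~\eqref{eq:eff int}.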
\medskip
What the Theorem says is that one can profit from the nice properties of the LLL to study phenomena in other Landau levels. This is particularly relevant because the main features are supposed not to depend very much on the potentials $V_n,w_n$ entering~\eqref{eq:LLL hamil}. In particular the Laughlin states
have equivalents in any Landau level (cf Sec.~\ref{sec:Laughlin}).
\bigskip
Since potential energies are integrals of potentials against particle densities, Theorem~\ref{thm:main} can be seen as a corollary of a general result about particle densities of many-body states in different Landau levels. We recall that the $k$-particle density of an $N$-particle state with wave function $\Psi(\mathbf r_1,\dots, \mathbf r_N)$
is by definition
\begin{equation}
\rho^{(k)}_\Psi(\mathbf r_1,\dots, \mathbf r_k)= {N \choose k} \int_{\mathbb R^{2(N-k)}} |\Psi(\mathbf r_1,\dots,\mathbf r_k;\mathbf r'_{k+1},\dots, \mathbf r'_N)|^2\,\mathrm d\mathbf r'_{k+1}\cdots \mathrm d\mathbf r'_N.
\end{equation}
If $\Psi\in n\mathrm{LL}_N$ for some $n$, then $\rho^{(k)}_\Psi$ is a $C^\infty$ function and decreases rapidly at infinity. This is discussed in Sec. V.
\begin{theorem}[\textbf{Particle densities in the $n$-th Landau level}]\label{thm:main2}\mbox{}\\
There is a unitary mapping $\mathcal U_{N,n}: n\mathrm{LL}_N\to \mathrm{LLL}_N$ such that if $\Psi_0=\mathcal U_{N,n}\Psi_n\in\mathrm{LLL}_N$ with $\Psi_n\in n\mathrm{LL}_N$, then for all $k$
\begin{equation}\label{eq:nLLdens}\rho^{(k)}_{\Psi_n}(\mathbf r_1,\dots, \mathbf r_k)
= \prod_{i=1}^k L_n \left(-\mbox{$\frac 14$}\Delta_{\mathbf r_i}\right)\rho^{(k)}_{\Psi_0}(\mathbf r_1,\dots, \mathbf r_k)
\end{equation}
\end{theorem}
Theorem~\ref{thm:main} follows as a corollary if one integrates $V(\mathbf r)$ against the right hand side of \eqref{eq:nLLdens} with $k=1$, respectively $w(\mathbf r_1-\mathbf r_2)$ with $k=2$, and shifts the differentiations to the potentials by partial integration.
Conversely, Theorem \ref{thm:main2} (for $k=1,2$) follows from Theorem \ref{thm:main} if one regards the potentials as trial functions for the densities.
\medskip
In the following we shall define (in several related but distinct ways) the unitary mappings between Landau levels (see~\eqref{eq:unitary N}), and discuss the proofs of Theorems~\ref{thm:main} and ~\ref{thm:main2}. The physically most appealing way to interpret these unitaries is to see them as replacing the physical coordinates of electrons by the coordinates of the guiding centers of their cyclotron orbits, mathematically implemented through the use of coherent states. Indeed, in the LLL the position coordinates and the guiding center coordinates are really two different names for the same thing as will be evident in Sec.~\ref{sec:kernel}. Moreover, the quantum mechanical spread of both coordinates is of the order of the magnetic length $\sim B^{-1/2}$. The cyclotron radius in Landau level $n$ has an extra factor $\sqrt {n+1}$. Thus it is plausible that for large $B$ and small $n$ the difference between position and guiding center coordinates, and the non-commutativity of the latter, is not of much significance in thermodynamically large systems, i.e., for large $N$, provided the magnetic length stays much smaller than the interparticle distance.
Although the coherent state approach offers a satisfactory physical picture it is not always the most convenient one from a computational point of view. This motivates our review of alternate routes to the mappings between levels.
We also take the example of Laughlin states to explain how to deduce properties of the actual wave-functions in $n\mathrm{LL}_N$ minimizing effective energies from their representation in $\mathrm{LLL}_N$ via the above unitary map. This amounts to saying that the density in guiding center coordinates can to a large extent indeed be identified with the true, physical, density in electron coordinates. We believe this is crucial for understanding the efficiency of the correspondence between Landau levels in FQH physics.
\section{The Landau Hamiltonian and the two oscillators}
\subsection{The cyclotron oscillator}
The magnetic Hamiltonian of a particle of charge $q$ and effective mass $m^*$, moving in a plane with position variables ${\mathbf r}=(x,y)$, is
\begin{equation} H=\frac 1{2m^*}(\pi_x^2+\pi_y^2)\end{equation}
where
\begin{equation} \mbox{\boldmath$\pi$}=(\pi_x,\pi_y)=\mathbf p-q\mathbf A\end{equation}
is the gauge invariant kinetic momentum with $\mathbf A$ the magnetic vector potential and
\begin{equation}\mathbf p=-\mathrm{i} \hbar (\partial_x,\partial_y)\end{equation}
the canonical momentum. We assume a homogeneous magnetic field of strength $B$ perpendicular to the plane and choose the
symmetric gauge
\begin{equation}\mathbf A=\frac B2(-y,x).\end{equation}
Moreover, we choose units and signs so that $|q|=1$, $qB\equiv B>0$, $\hbar=1$ and $m^*=\mbox{$\frac{1}{2}$}$, so that $H=\pi_x^2+\pi_y^2$. Then
\begin{equation} \pi_x=-\mathrm{i} \partial_x+\mbox{$\frac{1}{2}$} B y,\quad \pi_y=-\mathrm{i} \partial_y-\mbox{$\frac{1}{2}$} B x\end{equation}
and the kinetic momentum components satisfy the canonical commutation relations (CCR)
\begin{equation} [\pi_x,\pi_y]=\mathrm{i} \ell_B^{-2}\label{CCRpi}\end{equation}
with
\begin{equation}\ell_B=B^{-1/2}\end{equation}
the magnetic length.
In terms of the creation and annihilation operators
\begin{equation} \quad a^\dagger=\frac {\ell_B}{\sqrt 2}(-\pi_y-\mathrm{i} \pi_x), \quad a=\frac{ \ell_B} {\sqrt 2 }(-\pi_y+\mathrm{i} \pi_x)\label{a}\end{equation}
with
$[a,a^\dagger]=1$
the Hamiltonian is
\begin{equation} H=2B(a^\dagger a+\mbox{$\frac{1}{2}$}). \end{equation}
Powers of $a^\dagger$ generate normalized eigenstates
\begin{equation}\varphi_n=(n!)^{-1/2}(a^\dagger)^n\varphi_0\end{equation}
with $a\varphi_0=0$ and the energy eigenvalues
\begin{equation} E_n=(n+\mbox{$\frac{1}{2}$})2B,\quad n=0,1,2,\dots\label{Landauspec}\end{equation}
In position variables the corresponding wave functions are
\begin{equation} \varphi_0({\bf r})=\frac 1{\sqrt {\pi}}\,e^{-(x^2+y^2)/4\ell_B^2}, \quad \hbox{and}\quad \varphi_n({\bf r})=\frac 1{\sqrt {\pi n!}}\,
(x-\mathrm{i} y)^ne^{-(x^2+y^2)/4\ell_B^2}.\end{equation}
\subsection{Complex notation}\label{sec:complex}
With
\begin{equation}
z =x+iy, \quad \bar z=x-\mathrm{i} y, \quad \partial_z=\mbox{$\frac{1}{2}$}(\partial_x-\mathrm{i} \partial_y), \quad \partial_{\bar z}=\mbox{$\frac{1}{2}$}(\partial_x+\mathrm{i} \partial_y)
\end{equation}
we can write
\begin{equation} a^\dagger=\frac 1{\sqrt 2 \ell_B} (\mbox{$\frac{1}{2}$} \bar z-2\ell_B^2 \partial_z),\quad a=\frac 1{\sqrt 2 \ell_B} (\mbox{$\frac{1}{2}$} z+2\ell_B^2 \partial_{\bar z}).\label{aellb}\end{equation}
Choosing units so that $B=2$, or equivalently, defining $z=\frac 1{\sqrt 2 {\ell_B}}(x+\mathrm{i} y)$, this becomes
\begin{equation} a^\dagger=\mbox{$\frac{1}{2}$} \bar z- \partial_z, \quad a= \mbox{$\frac{1}{2}$} z+ \partial_{\bar z}.\end{equation}
Also, the gaussian factor $e^{-(x^2+y^2)/4\ell_B^2}$ becomes $e^{-|z|^2/2}$.
For computations it is often convenient to use the corresponding operators $\hat a^\dagger$, $\hat a$, acting on the pre-factors to the gaussian and defined by
\begin{equation} a^{\#} \left[f(z,\bar z)e^{-|z|^2/2}\right]=\left[\hat a^{\#} f (z,\bar z)\right]e^{-|z|^2/2}.\end{equation}
These are
\begin{equation} \hat a^\dagger=\bar z-\partial_z, \quad \hat a=\partial_{\bar z}.\label{hata}\end{equation}
In the sequel we shall generally use the hat $\hat{\phantom a}$ on operators and functions to indicate that the gaussian normalization factors are excluded.
Besides the standard definition $z=x+\mathrm{i} y$, other complexifications of $\mathbb R^2$ are possible and can be useful, as stressed in~\cite{Haldane-18}.
\subsection{The guiding center oscillator}\label{sec:guiding}
The classical 2D motion of a charged particle in a homogeneous magnetic field consists of a cyclotron rotation around \lq\lq guiding centers". The quantization of the cyclotron motion is the physical basis for the energy spectrum \eqref{Landauspec}, and the creation operators $a^\dagger$ generate the corresponding harmonic oscillator eigenstates. Every energy eigenvalue is infinitely degenerate, due to the different possible positions of the guiding centers.
Quantum mechanically the dynamics of the guiding centers is described by another harmonic oscillator commuting with the first one. One arrives at this picture by splitting the (gauge invariant) position operator $\bf r$ into the guiding center part $\bf R$ and the cyclotron part
\begin{equation} \widetilde{\mathbf R}=\ell_B^2\mathbf n\times \mbox{\boldmath$\pi$},\end{equation}
with $\mathbf n$ the unit normal vector to the plane. Both $\bf R$ and $\widetilde{\bf R}$ are gauge invariant and they commute with each other. On the other hand, the two components $R_x, R_y$ of $\bf R$ do not commute, and likewise for the components of $\widetilde{\bf R}$. More precisely, we have
\begin{equation} \mathbf r=\mathbf R+\widetilde{\mathbf R}\label{splitting}\end{equation}
with
\begin{equation} R_x=x+\ell_B^2 \pi_y=\mbox{$\frac{1}{2}$} x-\mathrm{i} \ell_B^2\partial_y, \quad R_y=y-\ell_B^2\pi_x=\mbox{$\frac{1}{2}$} y+\mathrm{i} \ell_B^2\partial_x,\end{equation}
\begin{equation} \widetilde{R}_x=-\ell_B^2 \pi_y=\mbox{$\frac{1}{2}$} x+\mathrm{i} \ell_B^2\partial_y, \quad \widetilde{R}_y=\ell_B^2\pi_x=\mbox{$\frac{1}{2}$} y-\mathrm{i} \ell_B^2\partial_x\end{equation}
and the commutation relations
\begin{equation}[\mathbf R,\widetilde {\mathbf R}]=\mathbf 0,\quad
[R_x,R_y]=-\mathrm{i} \ell_B^2,\quad [\widetilde{R}_x,\widetilde{R}_y]= \mathrm{i} \ell_B^2.\label{CCR}
\end{equation}
The creation and annihilation operators for $\widetilde{\mathbf R}$ are the same as \eqref{a},
\begin{equation} a^\dagger=\frac1{\sqrt 2 \ell_B}(\widetilde{R}_x-\mathrm{i} \widetilde{R}_y),\quad a=\frac1{\sqrt 2 \ell_B}(\widetilde{R}_x+ \mathrm{i} \widetilde{R}_y).\label{aa}\end{equation}
Those for the guiding center, on the other hand, are
\begin{equation} b^\dagger=\frac 1{\sqrt 2 \ell_B}(R_x+\mathrm{i} R_y),\quad b=\frac 1{\sqrt 2 \ell_B}(R_x-\mathrm{i} R_y).\label{b}\end{equation}
Note the different signs compared to \eqref{aa} due to the different signs in \eqref{CCR}. We have
$[b,b^\dagger]=1$
and in complex notation
\begin{equation} b^\dagger=\frac 1{\sqrt 2 \ell_B} (\mbox{$\frac{1}{2}$} z-2\ell_B^2 \partial_{\bar z}),\quad b=\frac 1{\sqrt 2 \ell_B} (\mbox{$\frac{1}{2}$} \bar z+2\ell_B^2 \partial_z).\label{bellb}\end{equation}
For $B=2$
\begin{equation} b^\dagger=\mbox{$\frac{1}{2}$} z- \partial_{\bar z}, \quad b= \mbox{$\frac{1}{2}$} \bar z+ \partial_{z}\end{equation}
and
\begin{equation} \hat b^\dagger=z-\partial_{\bar z},\quad \hat b=\partial_{z}.\label{hatb}\end{equation}
The splitting \eqref{splitting} corresponds to
\begin{equation}
\left({\begin{array}{c} z\\ \bar z\\ \end{array}}\right)=\left({\begin{array}{c} b^\dagger\\ b\\ \end{array}}\right)+\left({\begin{array}{c} a\\ a^\dagger \\ \end{array}}\right)\label{splitting2}.
\end{equation}
While the operators $a^\dagger, a$ increase or decrease the Landau level index, the operators $b^\dagger, b$ leave each Landau level invariant. Pictorially speaking, we can say that operators associated with the cyclotron oscillator move states \lq\lq vertically\rq\rq, i.e. act as ladder operators, while those associated with the guiding center oscillator move them \lq\lq horizontally\rq\rq.
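The algebra of the two oscillators is easily confirmed by computer algebra. A small sketch (ours, for $B=2$): the hatted operators \eqref{hata} and \eqref{hatb}, acting on polynomial pre-factors with $z,\bar z$ treated as independent variables, satisfy $[\hat a,\hat a^\dagger]=[\hat b,\hat b^\dagger]=1$, the two oscillators commute, and $\hat b^\dagger+\hat a$ acts as multiplication by $z$, in accordance with \eqref{splitting2}.

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

# hatted operators of Eqs. (hata), (hatb), acting on pre-factors (B = 2 units)
a_  = lambda f: sp.expand(sp.diff(f, zb))             # hat a        = d/dzbar
ad_ = lambda f: sp.expand(zb * f - sp.diff(f, z))     # hat a-dagger = zbar - d/dz
b_  = lambda f: sp.expand(sp.diff(f, z))              # hat b        = d/dz
bd_ = lambda f: sp.expand(z * f - sp.diff(f, zb))     # hat b-dagger = z - d/dzbar

f = sp.expand((1 + z + zb)**3)   # arbitrary polynomial test pre-factor

assert sp.expand(a_(ad_(f)) - ad_(a_(f)) - f) == 0    # [a, a^dagger] = 1
assert sp.expand(b_(bd_(f)) - bd_(b_(f)) - f) == 0    # [b, b^dagger] = 1
assert sp.expand(a_(bd_(f)) - bd_(a_(f))) == 0        # [a, b^dagger] = 0
assert sp.expand(b_(ad_(f)) - ad_(b_(f))) == 0        # [b, a^dagger] = 0
assert sp.expand(bd_(f) + a_(f) - z * f) == 0         # z = b^dagger + a on pre-factors
```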
With $\varphi_{0,0}=\varphi_0$ the common, normalized ground state of both oscillators, \begin{equation} a\varphi_{0,0}=b\varphi_{0,0}=0, \end{equation}
the states
\begin{equation}\varphi_{n,m}= \frac1{\sqrt{n!m!}}(a^\dagger)^n(b^\dagger)^m\varphi_{0,0}=\frac1{\sqrt{n!m!}}(b^\dagger)^m(a^\dagger)^n\varphi_{0,0},\quad n,m=0,1,\dots\end{equation}
form a basis of common eigenstates of the oscillators, with $\varphi_{n,0}$ being the previously defined $\varphi_n$. For fixed $n$ the states $\varphi_{n,m}$, $m=0,1,\dots$ generate the Hilbert space of the $n$-th Landau level, which we shall denote by $n$LL. The lowest Landau level will be denoted LLL.
Using complex coordinates the wave functions with $n=0$ respectively $m=0$ are
\begin{equation} \varphi_{0,m} (z,\bar z)=\frac 1{\sqrt{\pi m!}}\, z^m e^{-|z|^2/2},\quad \varphi_{n,0}(z,\bar z)=\frac 1{\sqrt {\pi n!}}\, \bar z^ne^{-|z|^2/2}.\end{equation}
More generally, the wave functions
\begin{equation} \varphi_{n,m}(z,\bar z)=\frac 1{\sqrt {\pi n!\,m!}} [(z-\partial_{\bar z})^m \bar z^n ] e^{-|z|^2/2}=\frac 1{\sqrt {\pi n!\,m!}}[(\bar z-\partial_{z})^n z^m ] e^{-|z|^2/2}\label{nLLbasis}\end{equation}
can be written in terms of associated Laguerre polynomials. They are eigenfunctions of the angular momentum operator in the symmetric gauge (acting on the pre-factor to the gaussian)
\begin{equation} \hat L=z\partial_z-\bar z\partial_{\bar z}\end{equation}
with eigenvalues $M=-n+m=-n,-n+1,\dots$ in the $n$LL. The operators $b^\dagger, b$ shift the angular momentum within each Landau level.
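As a symbolic sketch (ours), one can generate the pre-factors of \eqref{nLLbasis} with a computer algebra system and confirm their orthonormality; the only analytic input is the monomial rule $\frac1\pi\int z^a\bar z^b\,e^{-|z|^2}\,\mathrm d^2z=a!\,\delta_{ab}$ in the $B=2$ units.

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

def prefactor(n, m):
    # un-normalized pre-factor of phi_{n,m}: (z - d/dzbar)^m zbar^n, cf. (nLLbasis)
    f = zb**n
    for _ in range(m):
        f = sp.expand(z * f - sp.diff(f, zb))
    return f

def conj(p):
    # conjugate of a real-coefficient polynomial pre-factor: swap z and zbar
    t = sp.Dummy('t')
    return p.subs(z, t).subs(zb, z).subs(t, zb)

def gauss_integral(expr):
    # int expr(z, zbar) e^{-|z|^2} d^2z, via int z^a zbar^b e^{-|z|^2} d^2z = pi a! delta_ab
    poly = sp.Poly(sp.expand(expr), z, zb)
    return sum(c * sp.pi * sp.factorial(a) for (a, b), c in poly.terms() if a == b)

for n in range(3):
    for m in range(3):
        for n2 in range(3):
            for m2 in range(3):
                ov = gauss_integral(conj(prefactor(n, m)) * prefactor(n2, m2))
                # normalization of (nLLbasis): squared norm is pi * n! * m!
                want = sp.pi * sp.factorial(n) * sp.factorial(m) if (n, m) == (n2, m2) else 0
                assert sp.simplify(ov - want) == 0
```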
\section{Expressions of the inter-level unitary maps}
\subsection{With coherent states}
A coherent state associated with the guiding center oscillator in the $n$LL with parameter $Z\in\mathbb C$ is defined in a standard way~\cite{ComRob-12,KlaSka-85} as
\begin{equation} |Z,n\rangle=e^{(Zb^\dagger -\bar Zb)}\varphi_{n,0}=\sum_{m=0}^\infty \frac {Z^m}{\sqrt {m!}}\, \varphi_{n,m} e^{-|Z|^2/2}\label{cohstate}.\end{equation}
The overlap of two coherent states is
\begin{equation}
\langle Z,n|Z',n'\rangle=\delta_{n,n'} e^{(2\bar Z Z'-|Z|^2-|Z'|^2)/2}=\delta_{n,n'} e^{-|Z-Z'|^2/2}\,e^{{\mathrm i }\,\text{Im}\,(\bar Z Z')}.\end{equation}
Moreover,
\begin{equation}
\int |Z,n\rangle\langle Z,n|\,\frac{{\mathrm d}^2Z}\pi =\Pi_n\label{nproj}
\end{equation}
is the projector on $n$LL, where ${\mathrm d}^2Z:=\hbox{$\frac{ \mathrm i }{2}$} {\mathrm d}Z\wedge {\mathrm d}\bar Z$ is the Lebesgue measure on the plane. Indeed,
\begin{equation} \frac {1}{\sqrt { m!\, m'!}}
\int \bar Z^m Z^{m'} e^{-|Z|^2} \frac{{\mathrm d}^2Z}\pi=\langle \varphi_{n,m}|\varphi_{n,m'}\rangle=\delta_{m,m'},\end{equation}
and
\begin{equation}
\Pi_n=\sum_{m=0}^\infty |\varphi_{n,m}\rangle\langle \varphi_{n,m}|.
\end{equation}
The coherent states allow an interpretation of $n$LL as a Bargmann space of analytic functions of the coherent state variable $Z$: If $\psi\in$\,$n$LL then
\begin{equation} \widehat{\Psi}(Z):= \langle \bar Z,n|\psi\rangle e^{|Z|^2/2}=\sum_{m=0}^\infty\langle \varphi_{n,m}|\psi\rangle\frac {Z^m}{\sqrt {m!}}\, \label{analyt}\end{equation}
is analytic in $Z$ and
\begin{equation} \Psi(Z,\bar Z)=\widehat{\Psi}(Z)e^{-|Z|^2/2}\label{analyt2}\end{equation}
has the same $L^2$ norm as $\psi$ because of \eqref{nproj}. Thus the map
\begin{equation} U_n:\psi\mapsto \Psi\end{equation}
is isometric from the $n$LL to the LLL. From the definition it is clear that
\begin{equation} U_n\varphi_{n,m}=\varphi_{0,m}\label{Un1}\end{equation}
and
\begin{equation} U_n|Z,n\rangle=|Z,0\rangle,\label{Un2}\end{equation}
so $U_n$ is in fact a unitary with
\begin{equation} U_n^{-1}\varphi_{0,m}=\varphi_{n,m}\quad \hbox{and}\quad
U_n^{-1}|Z,0\rangle=|Z,n\rangle.\label{Un-1}\end{equation}
Either \eqref{Un1} or \eqref{Un2} can be taken as the definition of $U_n$. The unitary map
\begin{align}\label{eq:unitary N}
\mathcal{U}_{N,n}: n\mathrm{LL}_N &\rightarrow \mathrm{LLL}_N \nonumber\\
\Psi_N &\mapsto \left(\bigotimes_\mathrm{asym} ^N U_n \right) \Psi_N
\end{align}
is that used in Theorem~\ref{thm:main}.
The function $\Psi(Z,\bar Z)$ coincides with the LLL wave function of $U_n\psi$ if $Z$ is identified with the complex position variable $z=x+\mathrm{i} y$. Note, however,
that $Z$ is associated with the (non-commutative) components of the guiding center operator $\mathbf R$ rather than the (commutative) position operator $\mathbf r$. By the definition \eqref{analyt}, $\Psi$ depends linearly on $\psi$; the alternative definition $\Psi=\langle\psi|Z,n\rangle$, which is sometimes used, leads to an anti-unitary correspondence.
\subsection{With integral kernels}\label{sec:kernel}
Consider the coherent state \eqref{cohstate} without the gaussian normalization factors as a function of $Z; z,\bar z$:
\begin{equation} \widehat{F}(Z; z,\bar z)=\sum_{m=0}^\infty \frac {Z^m}{\sqrt {m!}}\, \widehat{\varphi}_{n,m}(z,\bar z).\end{equation}
The coherent state is an eigenstate of the annihilation operator $\hat b=\partial_z$ with eigenvalue $Z$, so
\begin{equation} \widehat{F}(Z; z,\bar z)=f(\bar z)\, e^{Zz}\end{equation}
for some function $f$ of $\bar z$ alone.
Furthermore, $\widehat F$ is an eigenstate of $\hat a^\dagger\hat a=(\bar z-\partial_z)\partial_{\bar z}$ with eigenvalue $n$, which leads to
\begin{equation} \widehat{F}(Z; z,\bar z)=c_n(\bar z-Z)^n e^{Zz}\end{equation}
with a normalization constant $c_n=1/\sqrt{\pi n!}$. The full coherent state \eqref{cohstate} as a function of $Z$, $z$ and $\bar z$, including normalization factors, is thus given by
\begin{equation} c_n(\bar z-Z)^n e^{-(|Z|^2+|z|^2-2Zz)/2}.\end{equation}
Inserting this into \eqref{analyt} gives
\begin{equation} \Psi(Z,\bar Z)=\int G(Z,\bar Z;z,\bar z)\psi(z,\bar z)\,{\mathrm d}^2z \label{psiPsi}\end{equation}
with
\begin{equation} G(Z,\bar Z;z,\bar z)= \frac 1{ \sqrt {\pi n!}}\,(z-Z)^n e^{-(|Z|^2+|z|^2-2Z\bar z)/2}= \frac 1{ \sqrt {\pi n!}}\,(z-Z)^n e^{-|z-Z|^2/2}
e^{-{\mathrm i }\,\text{Im}\, (z\bar Z)}.\label{G}
\end{equation}
This formula was derived in a different way in \cite{ChaFlo-07} and appears there (in slightly different notation) as Equation~(34). The inverse map is given by
\begin{equation} \psi(z,\bar z)=\int \bar G(z,\bar z; Z,\bar Z)\Psi(Z,\bar Z)\,{\mathrm d}^2Z\label{Psipsi}
\end{equation}
with
\begin{equation}
\bar G(z,\bar z; Z,\bar Z)=\frac 1{ \sqrt {\pi n!}}\,(\bar z-\bar Z)^n e^{-(|Z|^2+|z|^2-2\bar Z z)/2}= \frac 1{ \sqrt {\pi n!}}\,(\bar z-\bar Z)^n e^{-|z-Z|^2/2}
e^{{\mathrm i }\,\text{Im}\, (z\bar Z)}.\label{Gbar}
Note that $G$ can be written as
\begin{equation} g(z-Z)\,e^{-{\mathrm i }\,\text{Im}\, (z\bar Z)}\end{equation}
where $g$ is essentially concentrated in a disc of radius $\sim \sqrt{ n+1}$ and the second factor is a pure phase. Recall also that the length unit is $\sqrt 2\ell_B\sim B^{-1/2}$.
A further remark is that for $n=0$ $G$ is the reproducing kernel in Bargmann space, confirming again that in the LLL $\Psi$ and $\psi$ are the same function on $\mathbb C$ just with different names for the variables. The phase factor in $G$ is essential for this to hold.
\subsection{With ladder operators}
A direct approach to the correspondence $n$LL $\leftrightarrow$ LLL, by-passing the coherent states, starts from \eqref{Un1}, noting that
\begin{equation} U_n= (n!)^{-1/2} a^n\quad\hbox{restricted to $n$LL}\end{equation}
and hence
\begin{equation} U_n^{-1}= (n!)^{-1/2} (a^\dagger)^n\quad\hbox{restricted to LLL}.\end{equation}
Using the representations \eqref{hata} for the creation and annihilation operators we conclude that the following holds:
\begin{proposition}[\textbf{Unitary maps with ladder operators}]\label{lem:simple}\mbox{}\\
Let $\psi_n\in n\mathrm{LL}$ have wave function
\begin{equation}\label{eq:nLLfunc}
\psi_n(z,\bar z)=\sum_{\nu=0}^n \bar z^\nu f_\nu(z)e^{-|z|^2/2},\end{equation}
$f_\nu$ analytic for $\nu=0,\dots, n$. Then $\Psi_0 = U_n \psi_n\in\mathrm{LLL}$ has wave-function
\begin{equation}
\Psi_0(z,\bar z)= \sqrt{n!}f_n(z)e^{-|z|^2/2}.
\end{equation}
Conversely, the wave function of $\psi_n = U_n^{-1}\Psi_0$ is
\begin{align}\label{mainformula}
\psi_n(z,\bar z) &= [(\bar z-\partial_z)^nf_n(z)]e^{-|z|^2/2}\nonumber\\
&= \left[\bar z^n f_n(z)+\sum_{k=1}^n(-1)^k {n\choose k} \,\bar z^{n-k}f_n^{(k)}(z)\right]e^{-|z|^2/2}.
\end{align}
\end{proposition}
Note that Equation~\eqref{mainformula} implies in particular that the factor $f_n(z)$ to the highest power $n$ of $\bar z$ determines uniquely the factors to the lower powers $\bar z^\nu$:
\begin{equation} f_\nu(z)= (-1)^{n-\nu}\left({\begin{array}{c} n\\ \nu\\ \end{array}}\right) f_n^{(n-\nu)}(z).\label{134}\end{equation}
The state is thus completely fixed by the holomorphic function $f_n$ and the Landau index $n$.
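A quick symbolic check of \eqref{mainformula} and \eqref{134} (ours; the sample function $f_n(z)=e^{2z}$ is an arbitrary analytic choice):

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

def lift(f, n):
    # apply (zbar - d/dz)^n to an analytic pre-factor, cf. (mainformula)
    for _ in range(n):
        f = sp.expand(zb * f - sp.diff(f, z))
    return f

n = 3
fn = sp.exp(2 * z)       # sample analytic f_n (illustration only)
psi = lift(fn, n)        # pre-factor of psi_n = U_n^{-1} Psi_0

for nu in range(n + 1):
    f_nu = psi.coeff(zb, nu)   # coefficient of zbar^nu in the expansion
    # Eq. (134): f_nu = (-1)^{n-nu} binom(n, nu) f_n^{(n-nu)}
    want = (-1)**(n - nu) * sp.binomial(n, nu) * sp.diff(fn, z, n - nu)
    assert sp.simplify(f_nu - want) == 0
```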
\bigskip
Incidentally, these considerations also lead to a method for projecting functions to the lowest Landau level:
\begin{proposition}[\textbf{LLL projection}]\mbox{}\\
Let
\begin{equation} \phi(z,\bar z)=\sum_{\nu=0}^n \bar z^\nu g_\nu(z)e^{-|z|^2/2}\label{135}\end{equation}
with arbitrary analytic functions $g_\nu$. Its orthogonal projection onto $\mathrm{LLL}$ is
\begin{equation}\label{eq:LLL projection}
\mathcal P_{\rm LLL}\phi(z)=\sum_{\nu=0}^n g^{(\nu)}_\nu(z)\, e^{-|z|^2/2},
\end{equation}
where $g^{(\nu)}_\nu = \partial_z ^{\nu} g_\nu$.
\end{proposition}
This is well-known as the recipe ``move all $\bar{z}$ factors to the left and replace them by derivatives in $z$'', see e.g.~\cite{Jain-07}. For completeness we give the simple proof:
\begin{proof}
The previous considerations lead to a method for splitting a state
\begin{equation}\phi\in\bigoplus_{k=0}^n \: k\mathrm{LL}\end{equation}
into its components in the different LL: Start with a wave function as in~\eqref{135}. Its component $\psi_n$ in the $n$LL is then given by \eqref{mainformula} with $f_n:=g_n$ and $f_\nu$ for $0\leq\nu\leq n-1$ defined by~\eqref{134}. The difference
$\tilde \phi=\phi-\psi_n$ is now in $\bigoplus_{k=0}^{n-1} k\mathrm{LL}$ and we can repeat the procedure with $n$ replaced by $n-1$, $\phi$ by $\tilde\phi$, etc., until we obtain the splitting
$\phi=\sum_{k=0}^n \psi_{k}$
with $\psi_{k}\in k\mathrm{LL}$.
By induction over $n$, using that
\begin{equation}
\sum_{\nu=0}^n {n\choose \nu} (-1)^\nu=(1-1)^n=0\end{equation}
this procedure implies~\eqref{eq:LLL projection}.
\end{proof}
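The recipe lends itself to a compact symbolic implementation. A sketch (ours): project a sample pre-factor via \lq\lq replace $\bar z^\nu g_\nu(z)$ by $g_\nu^{(\nu)}(z)$\rq\rq\ and verify that the remainder is orthogonal to every LLL basis element, using the monomial rule $\frac1\pi\int z^a\bar z^b e^{-|z|^2}\mathrm d^2z=a!\,\delta_{ab}$.

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

def project_LLL(pref):
    # LLL projection recipe: zbar^nu g_nu(z) -> d^nu/dz^nu g_nu(z), cf. (eq:LLL projection)
    out = 0
    for nu in range(sp.degree(pref, zb) + 1):
        g_nu = pref.coeff(zb, nu)
        out += sp.diff(g_nu, z, nu)
    return sp.expand(out)

def gauss_integral(expr):
    # (1/pi) int expr e^{-|z|^2} d^2z via (1/pi) int z^a zbar^b e^{-|z|^2} = a! delta_ab
    poly = sp.Poly(sp.expand(expr), z, zb)
    return sum(c * sp.factorial(a) for (a, b), c in poly.terms() if a == b)

pref = 3 * z**2 + zb * z**3 + 2 * zb**2 * z**4    # sample pre-factor (illustration only)
proj = project_LLL(pref)
rem = sp.expand(pref - proj)

# the remainder is orthogonal to all LLL elements z^k e^{-|z|^2/2}
for k in range(8):
    assert gauss_integral(zb**k * rem) == 0
```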
\subsection{Recap of the different expressions for the unitary maps}
Summarizing the contents of this section, we have displayed three equivalent ways to represent a state $\Psi\in n$LL by analytic functions in Bargmann space:
\begin{itemize}
\item[{\bf 1.}] Take the scalar product
$\langle \bar Z,n|\Psi\rangle $
with a coherent state, cf. \eqref{analyt}.
\item[{\bf 2.}] Use Equation \eqref{psiPsi} with the integral kernel \eqref{G}.
\item[{\bf 3.}] Apply the differential operator $\partial_{\bar z}$ $n$-times to the pre-factor of the Gaussian. Equivalently: Expand the pre-factor in powers of $\bar z$ and keep only the highest power. The inverse mapping, LLL $\to$ $n$LL, is achieved by applying the differential operator $(\bar z-\partial_z)^n$ to the analytic function representing the state in the LLL.
\end{itemize}
The last method is formally the simplest, and in Sec.~V we shall use it to discuss particle densities in higher Landau levels in terms of their counterparts in the lowest Landau level.
\section{Particle densities and the $n\mathrm{LL}$ Hamiltonian, proofs of the Theorems}\label{sec:proof thm}
We now have all the necessary ingredients to prove Theorems~\ref{thm:main} and~\ref{thm:main2}. We provide two slightly different approaches.
\subsection{Many body states and particle densities}\label{densities}
All considerations in Secs. III and IV carry over straightforwardly to many-body states in symmetric or anti-symmetric tensor powers $n\mathrm{LL}_N\equiv
n$LL$^{\otimes_{{\rm s,a}}N}$ of single-particle states, by applying the single-particle formulas to each tensor factor.
Let $\Psi_n$ be a state in $n\mathrm{LL}_N$ with wave function
\begin{equation}
\psi_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)=\widehat{\psi}_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)e^{-(|z_1|^2+\cdots +|z_N|^2)/2}.
\end{equation}
Expanding in powers of $\bar z_i$ we can write
\begin{equation}
\widehat{\psi}_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)=\prod_{i=1}^N \bar z_i^n f_n(z_1,\dots, z_N)+\sum\prod_{i=1}^N\bar z_i^{\nu_i} f_{\nu_1,\dots, \nu_N} (z_1,\dots, z_N).
\end{equation}
The sum here is over $N$-tuples $(\nu_1,\dots,\nu_N)$ such that $\nu_k<n$ for at least one $k$. The functions $f_n$ and $f_{\nu_1,\dots, \nu_N}$ are holomorphic and the latter are, in fact, derivatives of $f_n$, cf. \eqref{134}.
The state $\Psi_0=U_n\Psi_n$ in $\mathrm{LLL}_N$ has the wave function
\begin{equation} \psi_0(z_1,\bar z_1; \dots ;z_N,\bar z_N)=\widehat \psi_0(z_1,\dots, z_N)e^{-(|z_1|^2+\cdots +|z_N|^2)/2}\end{equation}
with
\begin{equation} \widehat \psi_0(z_1,\dots, z_N)=(n!)^{-N/2}\prod_{i=1}^N \partial_{\bar z_i}^n \, \widehat \psi_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)=
(n!)^{N/2}f_n(z_1,\dots, z_N).
\end{equation}
The wave function $\widehat\psi_n$ can now be written
\begin{equation} \widehat\psi_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)=(n!)^{-N/2}\prod_{i=1}^N (\bar z_i-\partial_{z_i})^n\widehat \psi_0(z_1,\dots, z_N)=
\prod_{i=1}^N (\bar z_i-\partial_{z_i})^nf_n(z_1,\dots, z_N).\end{equation}
Next we consider the $k$-particle density of $\Psi_n$, defined by
\begin{multline} \rho^{(k)}_n(z_1,{\bar z}_1, \dots, z_k,\bar z_k)= {N \choose k} \int |\psi_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)|^2 {\mathrm d}^2z_{k+1}\cdots {\mathrm d}^2z_N\\={N \choose k} \int \left|\widehat \psi_n(z_1,\bar z_1; \dots ;z_N,\bar z_N)\right|^2 e^{-(|z_1|^2+\cdots +|z_N|^2)} {\mathrm d}^2z_{k+1}\cdots {\mathrm d}^2z_N.\label{densmatr}\end{multline}
The density $\rho^{(k)}_0$ of $\Psi_0=U_n\Psi_n$ is given by the same formula with $n=0$.
Functions in LLL are holomorphic and decrease at $\infty$ as $e^{-|z|^2/2}$; the latter follows from the fact that the Bargmann kernel \eqref{G} with $n=0$, which has this decrease, is a reproducing kernel for the Hilbert space LLL. Equation~\eqref{G} (equivalently, Equation \eqref{eq:nLLfunc}) also implies that functions in $n\mathrm{LL}$ are $C^\infty$ in the real position variables and decrease in the same way. This clearly carries over to wave functions in $n\mathrm{LL}_N$ and corresponding densities.
To prove Theorem~\ref{thm:main2} (which then implies Theorem~\ref{thm:main}) we have to compare $\rho^{(k)}_n$ and $\rho^{(k)}_0$. It is, in fact, sufficient to consider the problem for a single variable, i.e., to prove the following Lemma:
\begin{lemma}[\textbf{Reshuffling differentiations}]\label{lem:proj dens}\mbox{}\\
Let $\psi(z,\bar z)=f(z)e^{-|z|^2/2}$ with holomorphic $f$.
Then
\begin{equation}
(n!)^{-1}\overline {\left[(\bar z-\partial_{z})^n f(z)\right]}\left[(\bar z-\partial_{z})^n f(z)\right]e^{-z \bar z}=
L_n\left(- \partial_{\bar z}\partial_z\right)\left[\overline{f(z)}f(z)e^{-z \bar z}\right]\label{nlift0}
\end{equation}
with $L_n$ the Laguerre polynomial~\eqref{eq:Laguerre pre}. Recall that $\partial_{\bar z} \partial_z=\frac 14 \Delta$.
\end{lemma}
\begin{proof} This is a straightforward computation by induction over $n$, using the recursion relation for the Laguerre polynomials,
\begin{equation} (n+1)L_{n+1}(u)=(2n+1)L_n(u)-nL_{n-1}(u)-uL_n(u).\end{equation}
To compute \begin{equation}\partial_{\bar z}\partial_z\left[ \overline {\left[(\bar z-\partial_{z})^n f(z)\right]}\left[(\bar z-\partial_{z})^n f(z)\right]e^{-z \bar z}\right],\end{equation}
starting with $n=0$ and $L_0=1$, one uses the commutation relations
\begin{equation}
\partial_{\bar z} (\bar z-\partial_z)^n=(\bar z-\partial_z)^n\partial_{\bar z}+n(\bar z-\partial_z)^{n-1},\quad \partial_{z} (\bar z-\partial_z)^n=(\bar z-\partial_z)^n\partial_{z},
\end{equation}
and the fact that $\partial_{\bar z}f(z)=\partial_{z}\overline{f(z)}=0$ for holomorphic $f$.
\end{proof}
Applying the Lemma to each variable in a many-body wave function leads directly to \eqref{eq:nLLdens} and hence to Eqs.~\eqref{eq:eff pot} and \eqref{eq:eff int}.
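The Lemma itself is easily confirmed with computer algebra. A sketch (ours), treating $z,\bar z$ as independent variables and stripping the common gaussian factor $e^{-z\bar z}$ from both sides of \eqref{nlift0}, for a sample $f(z)=z^2+3z$:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
f  = z**2 + 3 * z      # sample analytic f (illustration only)
fb = zb**2 + 3 * zb    # its complex conjugate

def lift(g, var, dvar, n):
    # apply (var - d/d dvar)^n
    for _ in range(n):
        g = sp.expand(var * g - sp.diff(g, dvar))
    return g

def Dz(p):   # d/dz acting through the stripped factor e^{-z zbar}
    return sp.expand(sp.diff(p, z) - zb * p)

def Dzb(p):  # d/dzbar acting through e^{-z zbar}
    return sp.expand(sp.diff(p, zb) - z * p)

for n in range(4):
    # LHS of (nlift0): (n!)^{-1} conj[(zbar - d_z)^n f] (zbar - d_z)^n f
    lhs = sp.expand(lift(fb, z, zb, n) * lift(f, zb, z, n)) / sp.factorial(n)
    # RHS: L_n(-d_zbar d_z) applied to fb * f * e^{-z zbar}, gaussian stripped
    rhs = 0
    for l in range(n + 1):
        term = fb * f
        for _ in range(l):
            term = Dzb(Dz(term))
        rhs += sp.binomial(n, l) * term / sp.factorial(l)
    assert sp.expand(lhs - rhs) == 0
```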
\subsection{Projected Hamiltonian and guiding center coordinates}
We now discuss an alternative road to~\eqref{eq:eff pot} and~\eqref{eq:eff int}, providing additional insights. The starting point is the splitting \eqref{splitting} of the position variables in guiding centers and cyclotron motion, and the ensuing factorization of matrix elements of $\exp({\mathrm i}\mathbf q\cdot \mathbf r) $ which enter the Fourier transformed version of \eqref{nlift0}.
\begin{lemma}[\textbf{Plane waves projected in Landau levels}]\label{lem:plane}\mbox{}\\
For any $\mathbf{q}\in \mathbb{R} ^2$, identify $e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}}$ with the corresponding multiplication operator on $L^2 (\mathbb{R}^2)$, where $\mathbf{r}$ is the spatial variable. Let $\mathbf{R}$ be the guiding center operator defined in Sec.~\ref{sec:guiding}, $\Pi_n$ the orthogonal projector on $n\mathrm{LL}$ and $U_n:n\mathrm{LL} \rightarrow \mathrm{LLL}$ the inter-LL unitary map.
We have that
\begin{equation}\label{eq:proj wave}
U_n \Pi_n e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}} \Pi_n U_n ^* = L_n \left(\frac{|\mathbf{q}|^2}{4}\right) e^{-\frac{|\mathbf{q}|^2}{8}} \Pi_0 e^{\mathrm{i} \mathbf{q}\cdot \mathbf{R}} \Pi_0 = L_n \left(\frac{|\mathbf{q}|^2}{4}\right) \Pi_0 e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}} \Pi_0
\end{equation}
with the Laguerre polynomial
\begin{equation}\label{eq:Laguerre}
L_n (u) = \sum_{l=0} ^n {n\choose l} \frac{(-u)^l}{l!}.
\end{equation}
\end{lemma}
The equality of the left-hand and the right-hand sides of~\eqref{eq:proj wave} can be seen as a Fourier transformed version of~\eqref{eq:nLLdens} (with $k=1$). The identity \eqref{eq:proj wave} implies that the norm of $\Pi_0 e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}} \Pi_0$ decays faster than any polynomial in $|q|$. Indeed, on the left hand side we have a product of unitaries and projections whose norm is bounded by one. Also, when $q\neq 0$ and $n$ grows, the norm of $\Pi_n e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}} \Pi_n$ decays like $n^{-1/4}$.
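For instance, taking $n=0$ in the middle expression of~\eqref{eq:proj wave} (where $L_0=1$) gives $\Vert \Pi_0 e^{\mathrm{i} \mathbf{q}\cdot \mathbf{r}} \Pi_0 \Vert = e^{-|\mathbf{q}|^2/8}\, \Vert \Pi_0 e^{\mathrm{i} \mathbf{q}\cdot \mathbf{R}} \Pi_0 \Vert \leq e^{-|\mathbf{q}|^2/8}$, which makes the super-polynomial decay explicit.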
We now provide another proof, using guiding center coordinates rather than ladder operators. This also connects with the middle expression in~\eqref{eq:proj wave}.
\begin{proof}
Some of the following computations can be found in a variety of sources, e.g.~\cite[Proof of Theorem~3.2]{Jain-07} or~\cite{GoeLed-06}.
First note that if $A$ is a function of $a^\dagger, a$ and $B$ of $b^\dagger, b$, then
\begin{equation}
\langle \varphi_{n',m'}|AB|\varphi_{n,m}\rangle=\langle \varphi_{n',0}|A|\varphi_{n,0}\rangle\, \langle \varphi_{0,m'}|B|\varphi_{0,m}\rangle.\label{factorization}
\end{equation}
This is a consequence of the fact that the two commuting harmonic oscillators \eqref{a} and \eqref{b} can be represented, in a unitarily equivalent way, in the tensor product of two spaces with basis vectors $\varphi_{n,0}$ and $\varphi_{0,m}$ respectively. In this representation $\varphi_{n,m}=\varphi_{n,0}\otimes \varphi_{0,m}$ and the operators $A$ and $B$ act independently on each of the tensor factors. One can also pick directly $A,B$ to be polynomials in creation and annihilation operators and use the CCR to prove the claim.
Note, however, that in the representation \eqref{nLLbasis} the functions $\varphi_{n,m}(z,\bar z)$ are not simply products of the functions $\varphi_{n,0}$ and $\varphi_{0,m}$. Indeed, the variables $z$ and $\bar z$, regarded as position operators, do not act independently in the tensor factors, cf. \eqref{splitting2}.
We apply \eqref{factorization} to compute the matrix elements of $\exp({\mathrm i}\mathbf q\cdot \mathbf r) $, $\mathbf q\in\mathbb R^2$, in the $n\mathrm{LL}$. With
\begin{equation}\mathbf q=(q_x,q_y), \quad q=q_x+\mathrm{i} q_y, \quad \mathbf r=(x,y), \quad z=x+\mathrm{i} y\end{equation}
and further employing
\begin{equation} z=a+b^\dagger, \qquad \bar z=a^\dagger+b\end{equation}
we have
\begin{equation}
\mathbf q\cdot \mathbf r=q_x x+q_y y=\mbox{$\frac{1}{2}$}(\bar q z+q\bar z)=\mbox{$\frac{1}{2}$} (q a^\dagger+\bar q a)+\mbox{$\frac{1}{2}$}(\bar q b^\dagger+q b). \label{bfr}
\end{equation}
Since the $a^\#$'s and the $b^\#$'s commute it follows that
\begin{equation}
\exp\left({\mathrm i}\mathbf q\cdot \mathbf r\right)=\exp \left(\frac {\mathrm i} 2 (q a^\dagger+\bar q a) \right)
\exp \left(\frac {\mathrm i} 2(\bar q b^\dagger+q b)\right)
\end{equation}
and thus by \eqref{factorization}
\begin{equation}
\langle \varphi_{n,m'}|\exp({\mathrm i}\mathbf q\cdot \mathbf r)|\varphi_{n,m}\rangle=
\langle \varphi_{n,0}|\exp \left(\frac {\mathrm i} 2 (q a^\dagger+\bar q a) \right)
|\varphi_{n,0}\rangle\langle \varphi_{0,m'}| \exp \left(\frac {\mathrm i} 2(\bar q b^\dagger+q b)\right)|\varphi_{0,m}\rangle.
\end{equation}
By the Baker-Campbell-Hausdorff formula
\begin{equation} e^{X+Y}=e^X\,e^Y\,e^{-\mbox{$\frac{1}{2}$}[X,Y]}\end{equation}
for two operators commuting with their commutator (here $X=\frac{\mathrm{i}}{2} q a^\dagger$, $Y=\frac{\mathrm{i}}{2} \bar q a$, so that $[X,Y]=-\frac{1}{4}|q|^2\,[a^\dagger,a]=\frac{1}{4}|q|^2$ since $[a, a^\dagger]=1$) we can write
\begin{equation}
\exp \left(\frac {\mathrm i} 2 (q a^\dagger+\bar q a) \right) = \exp \left( \frac {\mathrm i} 2 (q a^\dagger) \right) \exp\left( \frac {\mathrm i} 2 (\bar q a)\right) \exp\left(-\frac 1 8|q|^2\right)
\end{equation}
and thus
\begin{equation}
\langle \varphi_{n,m'}|\exp\left({\mathrm i}\mathbf q\cdot \mathbf r\right)|\varphi_{n,m}\rangle= \tilde h_n (q)
\langle \varphi_{0,m'}| \exp \left(\frac {\mathrm i} 2(\bar q b^\dagger+q b)\right)|\varphi_{0,m}\rangle,\label{expvalues}
\end{equation}
with
\begin{equation}
\tilde h_n(q)=\langle \exp\left(\frac {-\mathrm i} 2 (\bar q a) \right)\varphi_{n,0}| \exp \left(\frac {\mathrm i} 2 (\bar q a)\right)\varphi_{n,0}\rangle \exp\left(-\frac 1 8 |q|^2\right).
\end{equation}
Expanding the exponential and using $a^k\,\varphi_{n,0}=\sqrt {n(n-1)\cdots(n-k+1)}\, \varphi_{n-k,0}$ we obtain
\begin{equation}
\tilde h_n(q)=\sum_{k=0}^n \frac {(-1)^k}{4^{k}}{n\choose k}\frac {1}{k!} |q|^{2k} \exp\left(- \frac 1 8 |q|^2\right)=L_n(\mbox{$\frac 14$}|q|^2) \exp\left(- \frac 1 8 |q|^2\right).\label{8.9}
\end{equation}
Thus, recalling the definition of the guiding center coordinate $\mathbf{R}$ in Sec.~\ref{sec:guiding},~\eqref{expvalues} implies the first equality in~\eqref{eq:proj wave}.
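As an aside, the identification in~\eqref{8.9} of the coefficient sum with a Laguerre polynomial is easy to check numerically against the three-term recursion for the Laguerre polynomials. The following short Python sketch (purely illustrative, not part of the argument) compares the two routes for the first few $n$:

```python
import math

def laguerre_rec(n, u):
    # L_n(u) via the recursion (n+1) L_{n+1}(u) = (2n+1-u) L_n(u) - n L_{n-1}(u)
    if n == 0:
        return 1.0
    L_prev, L = 1.0, 1.0 - u  # L_0 and L_1
    for k in range(1, n):
        L_prev, L = L, ((2 * k + 1 - u) * L - k * L_prev) / (k + 1)
    return L

def h_sum(n, q_abs):
    # the coefficient sum of the matrix-element computation,
    # without the Gaussian factor exp(-|q|^2/8)
    return sum((-1) ** k / 4 ** k * math.comb(n, k) * q_abs ** (2 * k) / math.factorial(k)
               for k in range(n + 1))

for n in range(8):
    for q_abs in (0.0, 0.5, 1.3, 2.7):
        assert abs(h_sum(n, q_abs) - laguerre_rec(n, q_abs ** 2 / 4)) < 1e-9
print("ok")
```

Both routes agree to floating-point precision, confirming that $\tilde h_n(q)\, e^{\frac 18 |q|^2}=L_n(\mbox{$\frac 14$}|q|^2)$.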
To obtain the second equality we subtract
\begin{equation}\frac {\mathrm i} 2 (q a^\dagger+\bar q a)\end{equation}
from \eqref{bfr} to get, employing the Campbell-Hausdorff formula again,
\begin{multline}
\exp\left({\mathrm i}\mathbf q\cdot \mathbf R\right)
=\exp\left(-\frac {\mathrm i} 2 q a^\dagger+ \left(
\frac {\mathrm i} 2 (q a^\dagger+\bar q a)
+\frac {\mathrm i} 2 (\bar q b^\dagger+q b)\right)- \frac {\mathrm i} 2 \bar q a\right)\\
=\exp\left(-\frac {\mathrm i} 2 q a^\dagger\right)\exp\left(\mathrm i\mathbf q\cdot\mathbf r\right)\exp\left(-\frac {\mathrm i} 2 \bar q a\right)\exp\left(\frac 1 8|q|^2\right).
\end{multline}
On the LLL $\exp\left(-\frac {\mathrm i} 2 \bar q a\right)$ is the identity, so the second equality in~\eqref{eq:proj wave} follows.
\end{proof}
To deduce Equations~\eqref{eq:eff int}--\eqref{eq:eff pot} and hence Theorem~\ref{thm:main} it only remains to write the Fourier decompositions
\begin{equation} V (\mathbf{r}) = \int_{\mathbb{R}^2} \widehat{V} (\mathbf{q}) e^{\mathrm{i} \mathbf{q} \cdot \mathbf{r}} d\mathbf{q}\end{equation}
and
\begin{equation} w(\mathbf{r}_1 - \mathbf{r}_2) = \int_{\mathbb{R}^2} \widehat{w} (\mathbf{q}) e^{\mathrm{i} \mathbf{q} \cdot \mathbf{r}_1} e^{-\mathrm{i} \mathbf{q} \cdot \mathbf{r}_2} d\mathbf{q} \end{equation}
and use Lemma~\ref{lem:plane}. The expressions involving Laplacians in Theorem~\ref{thm:main} follow from the Fourier representation $-\Delta = |\mathbf{q}|^2.$ This argument demands absolute integrability of the Fourier transforms, but the general case follows by a density argument.
\section{Laughlin states in higher Landau levels}\label{sec:Laughlin}
As already mentioned, a crucial approximation in FQH physics is to truncate the Haldane
pseudo-potential series in the LLL Hamiltonian~\eqref{eq:LLL hamil} to obtain the Laughlin state~\eqref{eq:Laughlin} as an exact ground state of the translation invariant problem $V\equiv 0$.
In view of Theorem~\ref{thm:main}, it is desirable to do the same in a higher Landau level, at the level of the effective Hamiltonian~$H_{0,w_n} ^{\mathrm{LLL}}$; the Laughlin state~\eqref{eq:Laughlin} then becomes an exact ground state after the unitary mapping to the LLL. In this section, we explain how the previous considerations allow us to study the properties of the corresponding physical wave-function (that is, as expressed in the position coordinates, rather than in the guiding center coordinates).
\subsection{Density estimates on mesoscopic scales}
Consider a Laughlin state in the $\mathrm{LLL}_N$
\begin{equation}
\Psi_{0,N}^{\rm Lau}=c_N\prod_{i<j}(b^\dagger_i-b^\dagger_j)^\ell \varphi_{0,0}^{\otimes N}\label{Laugh01}
\end{equation}
with wave function
\begin{equation}
\Psi_{0,N}^{\rm Lau}(z_1,\dots,z_N)=c_N\prod_{i<j}(z_i-z_j)^\ell e^{-(|z_1|^2+\cdots+|z_N|^2)/2}.\label{Laugh02}
\end{equation}
Here $\ell=1, 3,\dots$ for fermions and $\ell=2,4,\dots$ for bosons. We denote by
\begin{equation}\label{eq:Laugh dens}
\varrho^{\rm Lau}_{0,N}(\mathbf{r}):= N \int_{\mathbb{R} ^{2(N-1)}} |\Psi_{0,N}^{\rm Lau} (\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N)|^2 d\mathbf{r}_2 \ldots d\mathbf{r}_N
\end{equation}
the corresponding $1$-particle density. According to Laughlin's plasma analogy~\cite{Laughlin-83} the density profile is for large $N$ well approximated by a droplet of radius $(\ell N)^{1/2}$ and fixed density $(\pi\ell)^{-1}$,
\begin{equation}
\varrho^{\rm flat}_N(\mathbf{r}) := \begin{cases}\frac{1}{\pi \ell} \mbox{ if } |\mathbf{r}|\leq \sqrt {\ell N}\\
0 \mbox{ otherwise}.
\end{cases}
\end{equation}
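Note that $\varrho^{\rm flat}_N$ carries the correct total mass: $\int_{\mathbb{R}^2}\varrho^{\rm flat}_N=\pi \ell N \cdot (\pi\ell)^{-1}=N$, consistent with the normalization of~\eqref{eq:Laugh dens}.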
Indeed, by a rigorous mean-field analysis it was proved in \cite{RouSerYng-13b} that this approximation holds in the sense of averages over discs of radius $N^\alpha$ with $1/2>\alpha>1/4$. More generally, the $k$-particle densities are well approximated in this sense by the $k$-fold tensor power of the flat density as $N\to\infty$.
The more refined analysis of classical Coulomb systems in \cite{LebSer-16,BauBouNikYau-15, Ser-20} leads to an extension of this result down to mesoscopic scales $N^\alpha$ for all $\alpha>0$. We shall now use results from \cite{LebSer-16} to estimate the density of Laughlin states in higher Landau levels.
The Laughlin state corresponding to~\eqref{Laugh01} in the $n$-th Landau level $n\mathrm{LL}_N$ is
\begin{equation}
\Psi_{n,N}^{\rm Lau}=c_N\prod_{i<j}(b^\dagger_i-b^\dagger_j)^\ell \varphi_{n,0}^{\otimes N}=
c_N\prod_{i<j}(b^\dagger_i-b^\dagger_j)^\ell \prod_{i=1}^N\left[ (a^\dagger_i)^n\,\varphi_{0,0}\right]
\label{Laughn1}
\end{equation}
with wave function (cf Lemma~\ref{lem:simple})
\begin{equation}
\Psi_{n,N}^{\rm Lau}(\mathbf{r}_1,\dots,\mathbf{r}_N) = c_N\left[ \frac 1{(n!)^{N/2}}\prod_{i=1}^N\left(\bar z_i-\partial_{z_i}\right)^n\prod_{i<j}(z_i-z_j)^\ell\right] e^{-(|z_1|^2+\cdots+|z_N|^2)/2}.\label{Laughn2}
\end{equation}
This is, in \emph{electronic position variables}, the exact ground state of a Hamiltonian obtained by
\begin{itemize}
\item Projecting the physical starting point~\eqref{eq:full hamil} in the $n\mathrm{LL}_N$.
\item Unitarily mapping the result down to an effective Hamiltonian on $\mathrm{LLL}_N$ using Theorem~\ref{thm:main}.
\item Neglecting the one-body potential $V_n$ and truncating the Haldane pseudo-potential series of the interaction potential $w_n$.
\end{itemize}
The Hamiltonian obtained this way acts on $\mathrm{LLL}_N$, and its exact ground state is a LLL Laughlin state in \emph{guiding center variables}. Lifting it back up to the $n\mathrm{LL}_N$ results in~\eqref{Laughn2}:
\begin{equation}\Psi_{n,N}^{\rm Lau} = \left( U_n^* \right) ^{\otimes N} \Psi_{0,N}^{\rm Lau}.\end{equation}
We now vindicate a natural expectation: the density of $\Psi_{n,N} ^{\rm Lau}$ is very close, for large $N$, to that of $\Psi_{0,N}^{\rm Lau}$ on length scales much larger than the magnetic length ($1$ in our units). This is because electron coordinates and guiding center coordinates differ only on the scale of a cyclotron orbit, which is much smaller than the thermodynamically large extent of the states themselves.
We shall test the densities with regularized characteristic functions of discs. Let $\chi_1$ be the characteristic function of the unit disc around the origin and for $\varepsilon>0$ let $\eta_\varepsilon$ be a function with support in the annulus $1\leq |\mathbf r|\leq 1+\varepsilon$ such that
$\chi_{1,\varepsilon}:=\chi_{1}+\eta_\varepsilon$ is $C^\infty$.
For $R>0$ and $\mathbf r_0\in \mathbb R^2$ define
\begin{equation}\chi_{R,{\mathbf r}_0,\varepsilon}(\mathbf r)=\chi_{1,\varepsilon}(R^{-1}(\mathbf r-\mathbf r_0))\label{chiR}.\end{equation}
The analysis in \cite{RouSerYng-13b,LebSer-16} is carried out using scaled variables,
\begin{equation}\mathbf r'=N^{-1/2}\mathbf r.\end{equation}
In these variables the extent of the Laughlin state is $O(1)$ and mesoscopic scales are $O(N^{-\gamma})$ with $0<\gamma<1/2$. The scaled densities are ($\varrho_{n,N}^{\rm Lau}$ is defined in analogy with~\eqref{eq:Laugh dens})
\begin{equation}
\widetilde{\varrho}_{n,N}^{\rm Lau}(\mathbf r') = \varrho_{n,N}^{\rm Lau}(N^{1/2}\mathbf r') \mbox{ and } \widetilde{\varrho}^{\rm flat}_N(\mathbf r')=\varrho^{\rm flat}_N(N^{1/2}\mathbf r').
\end{equation}
We scale the test functions accordingly and consider $\chi_{r,\varepsilon,\mathbf r'_0}(\mathbf r')=\chi_{1,\varepsilon}(r^{-1}(\mathbf r'-\mathbf r'_0))$. The result on the density and its fluctuations whose proof we want to sketch is as follows:
\begin{theorem}[\textbf{Density of Laughlin states on mesoscopic scales}]\label{thm:density}\mbox{}\\
\emph{\textbf{(i)}} For every Landau index $n$, every fixed $\varepsilon>0$ and all mesoscopic scales $r\sim N^{-\gamma}$ with $0<\gamma<\mbox{$\frac{1}{2}$}$
\begin{equation}
\int \widetilde{\varrho}^{\rm Lau}_{N,n}(\mathbf r') \chi_{r,\varepsilon,\mathbf r'_0}(\mathbf r')\,{\mathrm d}^2\mathbf r'=\int\widetilde{\varrho}^{\rm flat}_N(\mathbf r') \chi_{r,\varepsilon,\mathbf r'_0}(\mathbf r')\, {\mathrm d}^2\mathbf r' \, (1+O(N^{-1+2\gamma }))\label{128}
\end{equation}
\noindent\emph{\textbf{(ii)}} If $r\sim N^{-\gamma}$, the fluctuation of the linear statistics associated to $\chi_{r,\varepsilon,\mathbf r'_0}$ in the $n$-th Landau level is
\begin{equation} \sim \varepsilon^{-1/2}(1+\varepsilon^{-2n} O(N^{-n(1-2\gamma)})).\label{129}\end{equation}
\end{theorem}
\begin{proof}
The considerations of Sec.~\ref{densities} imply that for every test function $\chi$
\begin{equation} \int_{\mathbb{R}^2} \widetilde{\varrho}^{\rm Lau}_{N,n}(\mathbf r') \chi (\mathbf{r}')\,{\mathrm d}^2\mathbf r' = \int_{\mathbb{R}^2} \widetilde{\varrho}^{\rm Lau}_{N,0}(\mathbf r') L_n \left(-\mbox{$\frac 14$} N^{-1} \Delta\right) \chi (\mathbf{r}')\,{\mathrm d}^2\mathbf r'\end{equation}
with $L_n$ the Laguerre polynomial. The point is that for large $N$, only the lowest order term in the above polynomial will contribute:
\begin{equation} L_n \left(-\mbox{$\frac 14$} N^{-1} \Delta\right) \approx 1.\end{equation}
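More precisely, $L_n(u)=1-nu+O(u^2)$ for small $u$, so the correction to the leading term is of order $nN^{-1}\Delta\chi$; for a test function varying on scale $N^{-\gamma}$ in the scaled variables (with the smoothing parameter $\varepsilon$ kept fixed) one has $\Delta\chi=O(N^{2\gamma})$, which accounts for the relative error $O(N^{-1+2\gamma})$ in~\eqref{128}.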
We use Theorem 1 and Remark 1.2 in \cite{LebSer-16}, see also Theorem 1 in \cite{Ser-20}. The function denoted there by $\xi$ is in the present case
\begin{equation}
L_n \left(-\mbox{$\frac 14$} N^{-1} \Delta\right) \chi_{r,\varepsilon,\mathbf r_0'}.
\end{equation}
The potential in Equation (1.14) of~\cite{LebSer-16} is here $|z|^2$ so Mean$(\xi)=0$. Equation (1.17) in \cite{LebSer-16} and
\begin{equation}
\int\widetilde\varrho^{\rm flat}_N \chi_{r,\varepsilon,\mathbf r'_0}\sim r^2
\end{equation}
now lead directly to \eqref{128} above. The dependence of the error term on $n$ and $\varepsilon$ cannot be deduced from Equation (1.17) in Remark 1.2 alone, however.
For the fluctuations we need $\Vert \nabla \xi\Vert_2$, according to the ``Mesoscopic case'' of Theorem~1 in \cite{LebSer-16}. Since
\begin{equation}
\xi= L_n \left(-\mbox{$\frac 14$} N^{-1} \Delta\right) \chi_{r,\varepsilon,\mathbf r_0'},
\end{equation}
Equation~\eqref{129} is a consequence of this theorem and of a simple $L^2$-estimate of the gradient of the test function $\chi_{r,\varepsilon,\mathbf r_0'}$.
\end{proof}
\subsection{Rigidity estimates}
In~\cite{RouYng-14,RouYng-15,LieRouYng-16,LieRouYng-17,RouYng-17,OlgRou-19} we have investigated rigidity/stability properties of the LLL Laughlin state. The question is now the response of the Laughlin function to a slight relaxation of the assumptions made in its derivation, namely that one could in first approximation neglect the one-body potential and truncate the Haldane pseudo-potential series to a finite order. If one assumes the validity of a certain ``spectral gap conjecture'' (see~\cite[Appendix]{Rougerie-xedp19} and references therein), investigating this question basically means minimizing the one-body energy and the residual part of the interaction \emph{within the full ground eigenspace} of the truncated interaction energy (cf degenerate perturbation theory). Our main conclusion was that this problem could be solved to leading order in the large $N$ limit by generating quasi-holes on top of Laughlin's wave function. We now want to quickly explain how this can be generalized to Laughlin states in higher levels. We discuss only the adaptation of~\cite{LieRouYng-17,RouYng-17} for the response to one-body potential. One could consider as well the response to smooth long-range weak interactions as in~\cite{OlgRou-19}, but for brevity we do not write this explicitly.
We take $v:\mathbb{R}^2 \rightarrow \mathbb{R}^+$ to be a smooth one-body potential, growing polynomially at infinity. We scale it so that it lives on the scale of the Laughlin wave-function:
\begin{equation} V_N (\mathbf{r}) = v (N^{-1/2} \mathbf{r}).\end{equation}
As discussed in the aforementioned references these assumptions can be relaxed to some extent. The main observation is that after the reduction of the $n\mathrm{LL}$ interacting Hamiltonian discussed in Subsection VI A, any multiplication of the $\mathrm{LLL}$ Laughlin state by a symmetric analytic function $F$ still yields an exact zero-energy eigenstate in guiding center variables. It is thus relevant to consider the action of the one-body potential $V_N$ on the ground-state space of the truncated interaction Hamiltonian. In electron variables the latter is
\begin{equation}\label{eq:GS space}
\mathcal{L}_{N,n} ^\ell := \left\{ \Psi_{N,n} \in n\mathrm{LL}_N, U_n ^{\otimes N} \Psi_{N,n} = F(z_1,\ldots,z_N) \Psi_{\rm Lau} ^{(\ell)} \mbox{ with } F \mbox{ analytic and symmetric }\right\}
\end{equation}
where the $\mathrm{LLL}$ Laughlin state is as in~\eqref{eq:Laughlin}. For any many-body wave-function $\Psi_{N,n}\in \mathcal{L}_{N,n}^{\ell}$ we define its one-particle density as
\begin{equation} \varrho_{\Psi_{N,n}} (\mathbf{r}) := N \int_{\mathbb{R}^{2(N-1)}} |\Psi_{N,n} (\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N)|^2 d\mathbf{r}_2\ldots d\mathbf{r}_N.\end{equation}
The variational problem for the response of the Laughlin state to an external potential, within the class~\eqref{eq:GS space} is now
\begin{equation}\label{eq:full var prob}
E (N,n,\ell) := \inf \left\{ \int_{\mathbb{R}^2} V \varrho_{\Psi_{N,n}}, \, \Psi_{N,n} \in \mathcal{L}_{N,n} ^{\ell}, \int_{\mathbb{R}^{2N}} |\Psi_{N,n}|^2 = 1 \right\}.
\end{equation}
It is of importance in Laughlin's theory of the FQHE that one need only consider so-called quasi-hole states to solve the above approximately. If one makes this approximation, the minimum energy becomes
\begin{equation}\label{eq:red var prob}
e (N,n,\ell) := \inf \left\{ \int_{\mathbb{R}^2} V \varrho_{\Psi_{N,n}},\, U_n ^{\otimes N} \Psi_{N,n} = f^{\otimes N} \Psi_{\rm Lau} ^{(\ell)} \mbox{ with } f \mbox{ analytic }, \int_{\mathbb{R}^{2N}} |\Psi_{N,n}|^2 = 1 \right\}.
\end{equation}
The latter energy is obtained by reducing the variational set, so, obviously
\begin{equation} E(N,n,\ell) \leq e (N,n,\ell).\end{equation}
What is much less obvious is that this upper bound is optimal in the large $N$ limit:
\begin{theorem}[\textbf{Response of higher LL Laughlin states to external potentials}]\label{thm:rigidity}\mbox{}\\
With the previous notation we have, for any fixed $n,\ell \in \mathbb{N}$,
\begin{equation}\label{eq:rigidity}
\frac{E(N,n,\ell)}{e (N,n,\ell)} \underset{N \to \infty}{\rightarrow} 1.
\end{equation}
\end{theorem}
The $n=0$ version of the above was proved in~\cite{LieRouYng-17,RouYng-17}. The adaptation to higher $n$ follows from the tools therein, together with the representation of $U_n V_n U_n^* $ discussed at length in Sec.~\ref{sec:proof thm}. We do not give details for brevity. We however point out that consequences for minimizing densities also follow, so that the density of a (quasi)-minimizer for~\eqref{eq:full var prob} is approximately flat with value $(\pi \ell) ^{-1}$ on an open set to be optimized over, and quickly drops to $0$ outside. This is in accordance with the physical picture of the system responding to external potentials by generating quasi-holes to accommodate their crests. Indeed, the interpretation of the states in~\eqref{eq:red var prob} is that the zeroes of the analytic function $f$ correspond to the location of quasi-holes in guiding center coordinates.
\bigskip
\noindent\textbf{Acknowledgements.} We had helpful conversations regarding the material of this paper with Thierry Champel, S\o{}ren Fournais and Alessandro Olgiati. We received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant agreement CORFRONMAT No 758620).
\bibliographystyle{siam}
Q: Detect when user closes floating toolbar frame Is it possible to capture the event when the user tries to close a floating toolbar window in Swing?
Thanks in advance.
A: There's probably some really awesomely simple solution, but why would you use that?
The best I could come up with (without extending out my own tool bar) was to add an AncestorListener to the toolbar and monitor its events.
The problem I have with this approach, though, is that you need to know the main frame you were originally attached to, which may not be convenient.
import java.awt.BorderLayout;
import java.awt.EventQueue;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JToolBar;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;
import javax.swing.UnsupportedLookAndFeelException;
import javax.swing.event.AncestorEvent;
import javax.swing.event.AncestorListener;
public class TestFloatingToolBar {
public static void main(String[] args) {
new TestFloatingToolBar();
}
public TestFloatingToolBar() {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException | UnsupportedLookAndFeelException ex) {
}
final JFrame frame = new JFrame("Test");
final JToolBar tb = new JToolBar();
tb.add(new JButton("Pop"));
tb.setFloatable(true);
tb.addAncestorListener(new AncestorListener() {
@Override
public void ancestorAdded(AncestorEvent event) {
tell();
// frame.equals(...) is null-safe: getWindowAncestor can return null mid-transition
if (frame.equals(SwingUtilities.getWindowAncestor(tb))) {
System.out.println("...In Main Frame");
} else {
System.out.println("...Maybe floating");
}
}
@Override
public void ancestorRemoved(AncestorEvent event) {
tell();
// frame.equals(...) is null-safe: getWindowAncestor can return null mid-transition
if (frame.equals(SwingUtilities.getWindowAncestor(tb))) {
System.out.println("...In Main Frame");
} else {
System.out.println("...Maybe floating");
}
}
@Override
public void ancestorMoved(AncestorEvent event) {
}
});
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(new BorderLayout());
frame.add(tb, BorderLayout.NORTH);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
public void tell() {
Exception exp = new Exception();
StackTraceElement[] stackTrace = exp.getStackTrace();
System.out.println(stackTrace[1].getMethodName());
}
}
The Lichtstern House is a historic house at 105 S. Deere Park Drive in Highland Park, Illinois. The house was built in 1919 for a businessman named E. Lichtstern. Arthur Heun, a Chicago architect known for designing homes for the upper class, designed the house. Heun's design primarily used Italian Villa architecture, which was inspired by Lichtstern's travels to Italy, but also includes some Prairie School elements. Its overall form, use of segmental arches, and balconies are typical of the Italian Villa style, but its leaded glass windows and overhanging eaves are Prairie School features.
The house was added to the National Register of Historic Places on September 29, 1982.
National Register of Historic Places in Lake County, Illinois
Houses on the National Register of Historic Places in Illinois
Italianate architecture in Illinois
Houses completed in 1919
Highland Park, Illinois
Researchers Create a Model that Forecasts which Structures will Endure a Wildfire
Researchers have developed a model that uses machine learning to predict which buildings are most likely to survive a wildfire. The model takes into account factors such as the distance of a building to the wildfire, the type of construction materials used, and the slope of the land.
Wildfires may seem unpredictable, leaving random ruin in their wake. A model created by CSU engineers can forecast how a wildfire will affect a city down to the specific buildings that will burn. They assert that planning fire mitigation techniques and steps for recovery depends on anticipating harm to the built environment.
For years, Hussam Mahmoud, a Civil and Environmental Engineering professor, and postdoctoral fellow Akshat Chulahwat have been working on a model to measure the vulnerability of communities to wildfire.
Mahmoud and Chulahwat's model was the first to forecast how a fire will spread through a town; most previous wildfire studies have focused on predicting fire behavior in the wildland.
"We're able to predict the most probable path the fire will take and how vulnerable each home is relative to the neighboring homes," Mahmoud said. "We put a spin on the original model that allows us now to determine the level of damage in each building, whether the building will burn or survive."
Using data from Technosylva, a wildfire science and technology company, Mahmoud and Chulahwat tested their model on the 2018 Camp Fire and 2020 Glass Fire in California. With 58–64% accuracy, the model foresaw which buildings would burn and which would survive.
By modifying the model's weighting of several damage-causing characteristics, they have been able to accurately forecast which buildings burnt in the Camp Fire with 86% accuracy after releasing their findings in Scientific Reports.
Mahmoud says a holistic approach is needed to understand wildfire behavior and bolster resilience. Decision-makers will have the knowledge they need to mitigate vulnerable areas with the help of models that incorporate the wildland and built environment elements of a community.
Wildfire is like a disease
Mahmoud and Chulahwat used graph theory, a method for studying networks, to create their model. These methods also are used to study how diseases spread.
"Wildfire propagation in communities is similar to disease transmission in a social network," Mahmoud said. Fire spreads from object to object in the same way contagions pass from one person to another.
According to him, wildfire mitigation techniques are similar to the procedures employed to stop the spread of COVID-19. By mapping a structure's surroundings (contact tracing), removing defensible space around it (social distancing), strengthening it to be more fire resistant (immunization), and establishing a buffer zone at the wildland-urban interface (closing borders), a community's immune system can be strengthened.
Some houses are like super-spreaders; they have a higher fire danger and a higher likelihood of spreading fire to other houses. By targeting certain homes or areas for reinforcement, policymakers could maximize a community's mitigation efforts, Mahmoud said.
The researchers are hopeful that their model may aid in protecting communities from the severe losses brought on by wildfires as wildfire risk is increased by more people relocating to wildland-adjacent areas and climate change drying out the terrain in desert places.
CSU at COP27
Mahmoud will present this work in November at the United Nations Framework Convention on Climate Change during a side event co-hosted by CSU and the France Global Hub of Future Earth. The event, New Approaches to Wildfire: Managing Climate Risks in Urban, Suburban, and Wilderness Areas, also will feature CSU collaborators Peter Backlund, associate director of the School of Global Environmental Sustainability, and Courtney Schultz, associate professor of Forest and Rangeland Stewardship and director of the Public Lands Policy Group.
"Presenting the work at a global conference like this, where there's going to be people from all over the world, is exciting because they'll see how we might be able to help them," Mahmoud said.
The 27th session of the Conference of Parties of the UNFCCC, or COP27, will be held Nov. 6-18 in Sharm El Sheikh, Egypt.
Q: BaseAdapter extending questions I need to add a checkbox to this ListView; I have the row.xml set up with a TextView/Chronometer/CheckBox. Do I have to show the checkbox in my BaseAdapter subclass? Also, why does that getSystemService() call error out on me?
package com.walkarchdev.tasktracker;
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.BaseAdapter;
import android.widget.Chronometer;
import android.widget.TextView;
public class TTAdapterView extends BaseAdapter {
public View v;
public TTAdapterView(Context context){
super();
}
@Override
public View getView(int position, View convertView, ViewGroup parent){
this.v = convertView;
if(v==null){
LayoutInflater vi = (LayoutInflater)getSystemService(Context.LAYOUT_INFLATER_SERVICE);
v = vi.inflate(R.layout.row, null);
}
TextView task = (TextView)v.findViewById(R.id.textView1);
Chronometer time = (Chronometer)v.findViewById(R.id.chronometer1);
//Checkbox complete = (Checkbox)v.findViewById(R.id.checkBox1);
return v;
}
@Override
public int getCount() {
// TODO Auto-generated method stub
return 0;
}
@Override
public Object getItem(int arg0) {
// TODO Auto-generated method stub
return null;
}
@Override
public long getItemId(int arg0) {
// TODO Auto-generated method stub
return 0;
}
}
A:
why does that getSystemService()
error out on me?
getSystemService() is a method on Context. Save the context passed as an argument in the constructor and use it to call the getSystemService() method
Do I have to show the checkbox in my baseadapter extending?
If you intend to use the checkbox states then you must store them in a variable (e.g., a boolean array in the adapter) in order to track the checked state of each row.
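A minimal sketch of how the adapter could look with both fixes applied — the constructor saves the Context, and a boolean array tracks the checkbox states. This is Android framework code, so it only runs inside an app; the tasks list parameter and the listener wiring are illustrative assumptions, not from the question:

```java
import java.util.List;

import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.CheckBox;
import android.widget.Chronometer;
import android.widget.CompoundButton;
import android.widget.TextView;

public class TTAdapterView extends BaseAdapter {

    private final Context context;    // saved here so getView() can build an inflater
    private final List<String> tasks; // backing data, one entry per row (assumed)
    private final boolean[] checked;  // per-row checkbox state, survives view recycling

    public TTAdapterView(Context context, List<String> tasks) {
        super();
        this.context = context;
        this.tasks = tasks;
        this.checked = new boolean[tasks.size()];
    }

    @Override
    public View getView(final int position, View convertView, ViewGroup parent) {
        View v = convertView;
        if (v == null) {
            // LayoutInflater.from(context) wraps context.getSystemService(...);
            // calling getSystemService() directly fails because the adapter
            // is not a Context
            v = LayoutInflater.from(context).inflate(R.layout.row, parent, false);
        }
        TextView task = (TextView) v.findViewById(R.id.textView1);
        Chronometer time = (Chronometer) v.findViewById(R.id.chronometer1);
        CheckBox complete = (CheckBox) v.findViewById(R.id.checkBox1); // class is CheckBox, not Checkbox

        task.setText(tasks.get(position));

        // clear any old listener before restoring state, so recycled rows
        // do not fire stale callbacks
        complete.setOnCheckedChangeListener(null);
        complete.setChecked(checked[position]);
        complete.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
            @Override
            public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
                checked[position] = isChecked;
            }
        });
        return v;
    }

    @Override
    public int getCount() { return tasks.size(); }

    @Override
    public Object getItem(int position) { return tasks.get(position); }

    @Override
    public long getItemId(int position) { return position; }
}
```

In the activity you would then create it as new TTAdapterView(this, taskList).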
A: Without manually tracking the checkbox state in your adapter, the checkbox states in your ListView will get mixed up when you scroll it down, because row views are recycled.
What kind of error do you get when calling getSystemService()?
'Social Network' writer Aaron Sorkin talks Steve Jobs biopic
Daniel Martin Nov 23, 2011 5:05 pm GMT
Credit: PA
Sorkin is indeed in the frame to write script about Apple founder's life
Aaron Sorkin has confirmed that he is in talks to write the proposed biopic of the late Apple boss Steve Jobs.
Talk of a film about Jobs' life has been rife since he died last month from pancreatic cancer, and even at this early stage, the Social Network screenwriter has been linked to penning the screenplay.
Sorkin has now confirmed to E! Online that discussions are indeed taking place. He said: "Sony has asked me to write the movie and it's something I'm strongly considering. Right now I'm just in the thinking-about-it stages. It's a really big movie and it's going to be a great movie no matter who writes it."
Sorkin added that he is currently reading Walter Isaacson's biography of Jobs, on which the proposed film is to be based. Sony acquired the rights in a seven-figure deal recently.
He continued: "[Jobs] was a great entrepreneur, he was a great artist, a great thinker. He's probably inspired my [11-year-old daughter] Roxy more than he's inspired me. She plays with all his toys."
It was claimed this week that George Clooney is in the frame to play Jobs, but faces competition from his former ER co-star Noah Wyle, who has portrayed him before, in 1999 TV movie Pirates Of Silicon Valley.
\section{Introduction}
Spin-polarized alkali vapors prepared by optical pumping have been
used for 50 years in fundamental studies in atomic physics and
applications thereof \cite{Budker:2002:RNM}.
The achieved sensitivities depend mainly on the (transverse)
lifetime $T_{2}$ of the spin coherence in the vapor, and, to a
lesser extent, on the (longitudinal) lifetime $T_{1}$ of the spin
polarization.
Those lifetimes are related to corresponding relaxation rates by
$\sGamma{i}=T_{i}^{-1}$.
To assure a long-lived spin polarization, the vapor cells are either
filled with a buffer gas mixture or are left evacuated while applying
an anti-relaxation coating on the walls.
In the first case, the buffer gas in the cell confines the atoms to a
diffusion-limited volume and thus reduces the rate of depolarizing
wall collisions.
In the second case, a thin film of paraffin or similar substance
applied to the cell wall reduces the collisional sticking time with
the wall and thereby the dephasing interactions with magnetic
impurities embedded in the walls.
Alkali vapors in paraffin-coated cells were introduced in 1958
\cite{Robinson:1958:PSS} and have since been widely applied in atomic
physics spanning applications from magnetometers
\cite{DiDomenico:2007:SDR,Weis:2005:LBP,Budker:2007:OM},
over slow light studies
\cite{Klein:2006:SLP}, to spin-squeezing
\cite{Fernholz:2008:SSA}, and light-induced atomic desorption (LIAD)
\cite{Alexandrov:2002:LID,Gozzini:2008:LIS} studies.
Our group develops atomic magnetometers for the accurate measurement
of small changes in already weak fields (typically 10\% of the earth's
field \cite{Bison:2005:OPO}), a technique that we currently apply to
the measurement of the faint magnetic fields produced by the beating
human heart \cite{Bison:2003:LPM,Weis:2006:TDR,Hofer:2008:HSO} and
for magnetic field measurement and control in the search for a neutron
electric dipole moment \cite{Groeger:2006:HSL,Ban:2006:TNM}.
Both experiments call for a large number (50 to 100) of individual
sensors to be operated simultaneously.
Although buffer gas cells were used in our initial work
\cite{Bison:2003:LPM}, we currently focus on paraffin-coated cells
that have a reduced sensitivity to magnetic field gradients because of
motional narrowing and to temperature effects compared to buffer gas
cells \cite{Andalkar:2002:HRM,Vanier:1989:QPA}.
In order to fulfill the requirements of the mentioned experiments we
have initiated a large scale production of cells that has yielded over
250 cells in the past year. We have developed an automatic cell
characterization facility for determining the quality and
reproducibility of the cell coatings. In this work we describe this
characterization facility in detail and report results (intrinsic
relaxation times, intrinsic magnetometric sensitivity) based on
significant cell statistics. A comparative study of a small sample of
paraffin-coated cells produced over four decades was reported in
\cite{Budker:2005:MTN}. To our knowledge our present study involves
the largest sample of coated cells ever compared.
\section{Cell production}
\label{sec:cells}
The paraffin-coated glass cells are manufactured in our institute.
Pyrex is formed into a spherical bulb (inner diameter of $\approx
28~\ensuremath{\mathrm{mm}}$, wall thickness of 1~\ensuremath{\mathrm{mm}}) that is connected to a sidearm
consisting of a Pyrex tube with 4~\ensuremath{\mathrm{mm}}{} inner (7~\ensuremath{\mathrm{mm}}{} outer)
diameter, which acts as a reservoir to hold the droplet of solid
cesium after coating, filling, and sealing the cell
(Fig.~\ref{fig:cell}). The metallic Cs is the source for the
saturated Cs vapor filling the cell. Near the cell proper, the
sidearm is constricted into a capillary with a design diameter of
0.75(25)~\ensuremath{\mathrm{mm}}{} that reduces spin depolarizing collisions with the bulk
Cs in the sidearm.
\begin{figure}[t]
\centerline{\includegraphics*[width=\linewidth]{Castagna_fig1}}
\caption{(Color online) Paraffin-coated Cs vapor cell. The small
amount of the solid alkali metal is well visible in the sidearm.
The arrow points to the capillary which reduces depolarizing
collisions of vapor atoms from the cell with the solid Cs.}
\label{fig:cell}
\end{figure}
A typical coating and filling process takes about one week. Ten cells
are mounted on a glass structure together with a paraffin containing
reservoir and a Cs metal containing ampule, both isolated from the
vacuum system by break-seals.
The system is connected to a turbomolecular pump stand via a liquid
nitrogen cold trap and all coating and filling steps are performed in
a vacuum below $10^{-7}$~mbar.
Prior to coating, the whole structure is baked for 5~hours at
$370{}\,^\circ$C.
The coating process is similar to the one reported in
\cite{Alexandrov:2002:LID,Gozzini:2008:LIS}.
Our current choice of coating material is a commercial paraffin,
Paraflint $H_{1}$, from Sasol Wax American Inc.
After baking the system, the break-seal of the paraffin reservoir
is broken by a piece of iron sealed in a glass bead (``hammer'')
manipulated from the outside by a permanent magnet.
The wax is deposited onto the cell walls by heating the paraffin
reservoir.
During the coating procedure the pressure rises to $9\times
10^{-7}$~mbar, and the cell is kept isolated from the cesium
containing ampule.
Once the cell is coated, the same hammer is used to break the seal of
the Cs ampule and a thin film of metallic Cs is distilled into the
cell's sidearm by heating the Cs ampule, after which the end of the
sidearm is sealed off.
During Cs distillation the pressure rises to $3\times10^{-7}$~mbar,
and at the end of filling the cells are pumped down to a pressure
below $10^{-7}$ mbar before being sealed.
The filled cells are activated by heating them in an oven at
$80{}\,^\circ$C for 10~hours, while ensuring that the sidearm is kept
at a slightly lower temperature.
In this way we produce 10 coated cells in one week.
\section{The cell characterization setup}
\label{sec:experiment}
\begin{figure}[b]
\centerline{\includegraphics*[angle=0,width=\linewidth]{Castagna_fig2}}
\caption{The cell testing apparatus. Frequency stabilized laser light
is carried by a multimode fiber into a threefold magnetic shield (L:
lenses). Circular polarization is created by a polarizer (P) and a
quarter-wave plate ($\lambda/4$). The transmitted power is recorded
by a photodiode (PD) and the modulated light power components are
extracted by a lock-in amplifier. A personal computer controls the
light power, performs scans of the frequency $\omega$ via a
programmable frequency synthesizer (PFS, Stanford Research model
SR345), and records the lock-in signals.}
\label{fig:expsetup}
\end{figure}
Following manufacture, each cell undergoes a characterization
procedure in a dedicated experimental apparatus for determining the
relevant parameters that indicate its magnetometric properties.
Our current magnetometers use the technique of optically detected
magnetic resonance in the Double Resonance Orientation
Magnetometer or DROM configuration (notation introduced in
\cite{Weis:2006:TDR}), also called M$_x$-configuration
\cite{Aleksandrov:1995:LPS,Groeger:2006:HSL}. The underlying
theory will be addressed below.
It was thus a natural choice to use the same technique for the
dedicated cell testing facility.
\subsection{Experimental setup and signal recording}
The experimental apparatus is shown in Fig.~\ref{fig:expsetup}.
The laser source is a DFB laser ($\lambda=894$~nm) whose frequency is
actively stabilized to the $4 \Rightarrow 3$ hyperfine component of
the Cs \ensuremath{\mathrm{D}_{1}}{} transition using the dichroic atomic vapor laser lock (DAVLL)
technique \cite{Corwin:1998:FSD}.
The light is carried by a 400~\ensuremath{\mu\mathrm{m}}{} diameter multimode fiber into a
three-layer mu-metal magnetic shield that contains the actual double
resonance setup.
Prior to entering the fiber, the laser power, $P_L$, is
computer-controlled via a stepper-motor driving a half-wave plate
located before a linear polarizer.
The light leaving the fiber is collimated and passes a linear
polarizer followed by a quarter-wave plate to create circular
polarization before entering the Cs cell.
The fiber is wound into several loops so that the exiting light is
completely depolarized, thus avoiding vibration related polarization
fluctuations that translate into power fluctuations after the
polarizer.
The paraffin-coated Cs cell to be characterized is placed in the
center of the magnetic shields where three pairs of Helmholtz coils
and three pairs of anti-Helmholtz coils compensate residual stray
magnetic fields and gradients, respectively.
A static magnetic field $B_0$ with an amplitude of a few \ensuremath{\mu\mathrm{T}}{} is
applied in the $yz$-plane at $45^{\circ}$ with respect to the laser
beam direction, $\hat{k}=\hat{z}$.
The transmitted light power is recorded by a nonmagnetic photodiode
and then amplified.
Absorbed laser light pumps the Cs atoms into the nonabsorbing (dark)
$\ket{F{=}4, M_F{=}3,4}$ magnetic sublevels, thereby creating a
vector spin polarization (orientation) $P_z\propto \braket{F_z}$.
A small \ensuremath{\mathrm{rf}}{} magnetic field $B_{1}(t)$ of a few~\ensuremath{\mathrm{nT}}, constant
in amplitude but rotating at frequency $\omega$, is applied in
the plane perpendicular to $B_0$.
The choice of a rotating, rather than a linearly polarized,
oscillating field is used to suppress magnetic resonance transitions
in the $F{=}3$ state \cite{DiDomenico:2006:ESL}.
$B_{1}(t)$ drives magnetic resonance transitions between adjacent
sublevels in the $F{=}4$ hyperfine state, whose Zeeman degeneracy is
lifted by the static magnetic field $B_0$.
For a properly oriented magnetic field $\vec{B}_0$ the transmitted
light power will be modulated at the rotation frequency $\omega$.
When $\omega$ is close to the Larmor frequency $\omega_L = \gamma_F
B_0$, where $\gamma_F \simeq 2\pi \cdot 3.5~\ensuremath{\mathrm{Hz}}/\ensuremath{\mathrm{nT}}$ is the Cs ground
state gyromagnetic factor, a resonance occurs in the absorption
process, manifesting itself in both the amplitude and phase of the
light power modulation.
The corresponding in-phase, quadrature, and phase signals are
extracted by means of a lock-in amplifier (LIA, Stanford Instruments,
model SR830) whose output signals are read by a personal computer.
The rotating field frequency is generated by a computer controlled
programmable synthesizer.
The computer varies this frequency, $\omega$, by a linear ramp in the
range of $\pm~2\pi\cdot 100~\ensuremath{\mathrm{Hz}}$ around the Larmor frequency during a
scan time of 40~s.
A~dedicated electronics box generates from this AC voltage two
$90^\circ$ dephased AC currents that drive two perpendicular coil
pairs (not shown in Fig.~\ref{fig:expsetup}) producing the rotating
field $B_1(t)$. \\
\indent The characterization of each individual cell consists in the recording
of resonance spectra for a set of 12 selected (and computer
controlled) values of the laser power $P_L$ in the range of 1~to
$12~\mu$W.
It is difficult to determine the absolute laser intensity for a given
laser power $P_L$, because of the (asymmetric) transverse beam
profiles and their modification by the cell's spherical shape.
We therefore quantify the light intensity in terms of the laser power
$P_L$, to which it is proportional.
Note that $P_L$ used below refers to the power measured after the cell with
the laser frequency resonant with the $4{\rightarrow}3$ Cs~\ensuremath{\mathrm{D}_{1}}{}
transition and the \ensuremath{\mathrm{rf}}{} power off.
A typical automated characterization run, including insertion of the
cell into the apparatus, takes 10 minutes.
Data analysis is performed by a dedicated semi-automatic
Mathematica~\cite{Mathematica52} code, which takes another 5~minutes.
In a regular working day it is thus possible to characterize 30 to 40
cells.
\subsection{DROM theory}
\label{sec:theory}
A modulation of the transmitted power only occurs when the static
magnetic field $B_0$ is neither parallel nor perpendicular to the
direction of light propagation.
In that case the transmitted light power has components that oscillate
in phase, $D_{\omega}$, and in quadrature, $A_{\omega}$, with respect
to the rotating field
\begin{equation}
B_1(t)= \frac{\Omega_{\ensuremath{\mathrm{rf}}}}{\gamma_F}\, e^{i\,\omega t}\,.
\label{eq:in-phase}
\end{equation}
The in-phase and the quadrature components depend on the detuning,
$\delta=\omega-\omega_0$, between the driving, $\omega$, and the
Larmor, $\omega_0$, frequencies.
The dependence of $D_{\omega}$ and $A_{\omega}$ on $\delta$ are
dispersive and absorptive Lorentzians given by \cite{Bison:2005:OPO}
\begin{eqnarray}
D_\omega(\delta) = -\eta \ensuremath{\braket{F_z}} \sin{(2\theta)}
\frac {\Omega_{\ensuremath{\mathrm{rf}}} \delta}
{ \delta^2+ \sGamma{2}^{2}+\frac{\sGamma{2}}{\sGamma{1}} \Omega^2_{\ensuremath{\mathrm{rf}}} }
\nonumber \\
A_\omega(\delta) = -\eta \ensuremath{\braket{F_z}} \sin{(2\theta)}
\frac {\Omega_{\ensuremath{\mathrm{rf}}} \sGamma{2}}
{ \delta^2+ \sGamma{2}^{2}+\frac{\sGamma{2}}{\sGamma{1}} \Omega^2_{\ensuremath{\mathrm{rf}}} }
\label{eq:signals}
\end{eqnarray}
where $A_0=\eta \ensuremath{\braket{F_z}} \sin{(2\theta)}$ is a common signal amplitude
that depends --- via the spin polarization \ensuremath{\braket{F_z}}{} created by optical
pumping and the detection of the polarization's precession via light
absorption --- on the laser power $P_L$.
The calibration constant $\eta$ includes all of the apparatus
constants such that $D_\omega(\delta)$ and $A_\omega(\delta)$ are
measured in Volts.
With respect to Fig.~\ref{fig:expsetup}, $U(t) = D_\omega(\delta)
\cos{\omega t} + A_\omega(\delta) \sin{\omega t}$.
The phase $\phi_{\omega} (\delta)$ between the drive and the power
modulation
\begin{equation}
\phi_{\omega} (\delta) = +\arctan{\left(\frac{\sGamma{2}}{\delta}\right)}\,,
\label{eq:phase}
\end{equation}
depends also on the detuning $\delta$.
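Note that on resonance ($\delta{=}0$) the phase equals $90^{\circ}$
independently of $\sGamma{2}$ and $\Omega_{\ensuremath{\mathrm{rf}}}$, with a slope
\[
\left.\frac{d\phi_{\omega}}{d\delta}\right|_{\delta=0} = -\frac{1}{\sGamma{2}}\,,
\]
which follows directly from (\ref{eq:phase}) (on the branch continuous
through resonance) and makes the phase a convenient amplitude-independent
discriminator for locking $\omega$ to $\omega_0$.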
The expressions (\ref{eq:signals}) and (\ref{eq:phase}) are valid
for atomic media with an arbitrary ground state angular momentum,
as may be shown easily by a theoretical treatment analogous to the
discussion of the signals in the DRAM (double resonance alignment
magnetometer) geometry presented in \cite{Weis:2006:TDR}.
In the above expressions, $\theta$ is the angle between the
applied magnetic field $B_0$ and the laser beam propagation
direction $\vec{k}$.
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{Castagna_fig3_top}}
\centerline{\includegraphics[width=\columnwidth]{Castagna_fig3_bot}}
\caption{(color online) Lock-in demodulated magnetic resonance
signals. Top: The dispersive signal (blue) represents the in-phase
component $D(\omega)$ and the absorptive signal (red) the quadrature
component $A(\omega)$. Bottom: Phase signal $\phi(\omega)$.
Experimental points are shown together with lines fitted according
to (\protect\ref{eq:signals})--(\protect\ref{eq:phase}). All signals
were recorded at $B_0 \simeq 4~\ensuremath{\mu\mathrm{T}}$ ($\omega_0 \simeq 2\pi \cdot
11640~\ensuremath{\mathrm{Hz}}$), with $P_L=6~\mu$W, and $B_{1}=1.3~\ensuremath{\mathrm{nT}}$.}
\label{fig:signals}
\end{figure}
\subsection{Signal analysis}
\label{sec:recording}
Since the resonance signals are extracted by a lock-in amplifier, and
since it is experimentally difficult to precisely determine the phase
of the rotating field (and hence the phase difference between that
field and the modulation of the photocurrent), the signals produced by
the lock-in amplifier are superpositions of the absorptive and
dispersive lineshapes $A_{\omega}(\delta)$ and $D_{\omega}(\delta)$.
Using the fitting procedure described in detail in
\cite{Bison:2005:OPO} it is possible to extract the pure absorptive
and dispersive components.
For fitting the theoretical lineshapes the combined apparatus
constant $A_0\equiv\eta \ensuremath{\braket{F_z}} \sin{(2\theta)}$ is taken as one fitting
parameter, with $A_0$ measured in Volts.
Other parameters are the relaxation rates $\sGamma{1}$ and
$\sGamma{2}$, the resonance frequency $\omega_0$, an unknown overall
phase, as well as weighting factors of the absorptive and dispersive
components.
The Rabi frequency $\Omega_{\ensuremath{\mathrm{rf}}}$ can be easily calibrated as
described in \cite{DiDomenico:2007:SDR} and a fixed numerical value is
used when fitting (\ref{eq:signals}) and (\ref{eq:phase}).
Typical resonance lineshapes of the in-phase, quadrature, and phase
signals are shown in Fig.~\ref{fig:signals}, together with the fitted
theoretical shapes (\ref{eq:signals}) and (\ref{eq:phase}).
Fitting the absorptive and dispersive spectra by (\ref{eq:signals})
with the relaxation rates $\sGamma{1}$ and $\sGamma{2}$ as free
parameters yields a strong correlation between the two rates in the
$\chi^2$-minimizing algorithm, with corresponding large uncertainties
in the numerical values.
We have therefore opted for the following fitting procedure.
In a first step, we use the fact that the phase does not depend on
$\sGamma{1}$ and fit the dependence $\phi({\omega})$ given by
(\ref{eq:phase}) to the data.
The resulting $\sGamma{2}$ value is then used as a fixed parameter in
the subsequent simultaneous fit of the absorptive and dispersive
lineshapes to infer $\sGamma{1}$.
In this way we obtain $(\sGamma{1}, \sGamma{2})$-pairs for each value
of the laser power $P_L$.
In addition, the fits yield the overall signal amplitude $A_0$.
\begin{figure}[t]
\centerline{\includegraphics[angle=0,width=0.9\columnwidth]{Castagna_fig4}}
\caption{Laser power dependence of the relaxation rates $\sGamma{1}$
(boxes) and $\sGamma{2}$ (diamonds). The experimental points are
fitted with (\protect\ref{eq:gammas}). The (statistical) error bars
on the individual data points are smaller than the symbol size.}
\label{fig:relaxation}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{Relaxation rates}
Figure~\ref{fig:relaxation} shows the dependence of the longitudinal
and transverse relaxation rates on the laser power $P_L$.
There is, to our knowledge, no theoretical algebraic expression
describing that dependence for ground states of arbitrary angular
momentum $F$.
We therefore fit, as in \cite{DiDomenico:2007:SDR}, the dependence by
a quadratic polynomial
\begin{equation}
\sGamma{i}(P_L)= \sGamma{0i}+\alpha_{i}\,P_L+\beta_{i}\,P_L^2\,,
\label{eq:gammas}
\end{equation}
which allows us to infer the intrinsic relaxation rates, $\sGamma{01}$
and $\sGamma{02}$, i.e., the relaxation rates extrapolated to zero
light power.
\begin{figure}[b]
\centerline{\includegraphics[angle=0,width=0.9\columnwidth]{Castagna_fig5}}
\caption{Magnetic resonance amplitude versus laser power. The
experimental points are fitted with the polynomial expression from
(\protect\ref{eq:amlitude}), which yields, for this specific cell,
the saturation parameters $P_{S1}=634$~nW and $P_{S2}=16.3~\mu$W.
The error bars are smaller than the plotting symbol size.}
\label{fig:amplitude}
\end{figure}
\subsection{Signal amplitudes}
\begin{figure}[t!]
\centerline{\includegraphics[angle=0,width=0.85\columnwidth]{Castagna_fig6a}}
\centerline{\includegraphics[angle=0,width=0.85\columnwidth]{Castagna_fig6b}}
\centerline{\includegraphics[angle=0,width=0.85\columnwidth]{Castagna_fig6c}}
\caption{Histogram of intrinsic longitudinal (top) and transverse
(middle) relaxation times of 241 coated cells. The upper axis in
the top graph gives the radius of the effective depolarization spot
that models reservoir relaxation (see text). The lower graph
shows the correlation between the relaxation rates, together with a
fit of the form $\sGamma{02} = s \sGamma{01} + a$.}
\label{fig:gamma12Histo}
\end{figure}
Figure~\ref{fig:amplitude} shows the dependence of the signal
amplitude $A_0$ on the laser power $P_L$.
Here again, we have no theoretically derived algebraic expression
describing that dependence for transitions between states with
arbitrary angular momenta.
We therefore, as in \cite{DiDomenico:2007:SDR}, fit the experimental
dependence by the empirical saturation formula
\begin{equation}
  A_0(P_L)= C\,\frac{P_L^2}{(P_L+P_{S1})(P_L+P_{S2})}
\label{eq:amlitude}
\end{equation}
which accounts for an amplitude growing as $P_L^2$ at low powers, and
where $P_{S1}$ and $P_{S2}$ are saturation powers.
Figures~\ref{fig:relaxation} and \ref{fig:amplitude} show typical
dependences of $\sGamma{1}$, $\sGamma{2}$, and $A_0$ on $P_L$ for a
given cell, together with the fits (solid lines) by
(\ref{eq:gammas}) and (\ref{eq:amlitude}), respectively.
We have characterized 253 paraffin-coated cells of equal diameter
using the method described above.
The histograms in Fig.~\ref{fig:gamma12Histo} (top, middle) show the
distributions of the intrinsic longitudinal and transverse relaxation
rates of the 241 best cells.
The scatter plot in the lower graph of Fig.~\ref{fig:gamma12Histo}
shows that the two rates are strongly correlated.
The fitted line represents a linear relation of the form $\sGamma{02}=
s\,\sGamma{01} + a$ with $s=1.00(1)$ and $a=1.35(3)~\ensuremath{\mathrm{Hz}}{}$.
The longitudinal and transverse relaxation rates are thus equal, up to
a constant offset that affects the $\sGamma{02}$ values only.
For an isotropic relaxation process, in which all Zeeman sublevels
relax at the same rate, one would expect $\sGamma{01}=\sGamma{02}$.
In section~\ref{sec:discussion} below we will come back to a
quantitative discussion of those contributions.
\subsection{Magnetometric sensitivity}
\label{subsec:NEM}
The intrinsic relaxation rates are well suited to characterize each
individual cell.
In particular, the transverse rate $\sGamma{02}$, which determines the
intrinsic width of the signals $A_{\omega}$ and $D_{\omega}$, is
relevant for magnetometric applications.
However, the intrinsic rates are, by definition, rates for vanishing
laser and \ensuremath{\mathrm{rf}}{} powers.
Therefore, the magnetometric sensitivity of a given cell cannot be
inferred directly from the intrinsic rates, since magnetometers have
to be operated at finite laser and \ensuremath{\mathrm{rf}}{} power levels.
The linear zero crossing of the dispersive signal $D_{\omega}$ near
resonance is convenient for magnetometric applications since any
magnetic field change $\delta B$ yields a signal change
\begin{equation}
\delta
D_{\omega}=\left|\frac{dD_{\omega}}{dB}|_{\omega{=}\omega_L}\right|\,
\delta B
\end{equation}
that is proportional to $\delta B$.
The lowest magnetic field change $\delta B$ that can be detected
depends on the shot noise of the DC photocurrent $I_{L}\propto P_L$.
A feedback resistor, $R_F$, in the transimpedance amplifier, marked
$I/U$ in Fig.~\ref{fig:expsetup}, transforms that photocurrent into a
photovoltage $U_L$, whose shot noise (in a bandwidth of 1~\ensuremath{\mathrm{Hz}}) is
given by
\begin{equation}
\delta U_{L} = R_F\,\delta I_{L}
= R_F\,\sqrt{2 e I_{L}}
= R_F\,\sqrt{\frac{2 \ensuremath{{Q}_{\kern-0.2em E}} P_L e^{2}}{h \nu}}\,,
\label{eq:Ushot}
\end{equation}
where $\ensuremath{{Q}_{\kern-0.2em E}}=70\%$ is the quantum efficiency of the
photodiode, and $\nu$ the laser frequency.
The experimentally measured signal noise lies $\approx 20\%$ above the
shot noise level, due to laser power fluctuations and amplifier noise.
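As a numerical illustration (our estimate from (\ref{eq:Ushot}), not a
measured value): for $P_L=6~\mu$W of resonant light at
$\lambda=894$~nm the photocurrent is
$I_L = \ensuremath{{Q}_{\kern-0.2em E}}\, e\, P_L/h\nu \approx 3~\mu$A, so that
$\delta I_L = \sqrt{2 e I_L} \approx 1.0~\mathrm{pA}/\sqrt{\mathrm{Hz}}$,
corresponding to a relative noise of
$\delta I_L/I_L \approx 3\times 10^{-7}/\sqrt{\mathrm{Hz}}$.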
With the calibration constant $\eta$ in (\ref{eq:signals}), $\delta
D_{\omega}$ is expressed in Volts, i.e., in the same units as $\delta
U_{L}$.
For each set of the experimental parameters $P_L$ and $\Omega_{\ensuremath{\mathrm{rf}}}$
one can thus define the magnetometric sensitivity as the field
fluctuation $\delta B_{\ensuremath{\mathrm{NEM}}}$ that induces a signal change $\delta
D_{\omega}$ equal in magnitude to $\delta U_{L}$.
This noise equivalent magnetic field fluctuation (\ensuremath{\mathrm{NEM}}) is thus given
by
\begin{eqnarray}
\delta B_{\ensuremath{\mathrm{NEM}}}
&=&\frac{\delta U_{L}} {\left|\frac{dD_{\omega}}{dB}|_{\omega{=}\omega_L}\right|}\\
&=&\frac{1}{\gamma_F}
\frac{\sGamma{2}^2+\Omega_{\ensuremath{\mathrm{rf}}}^2\sGamma{2}/\sGamma{1}}
{A_0 \Omega_{rf}} \,\delta U_{L}\,.
\label{eq:NEM}
\end{eqnarray}
$A_0$, $\sGamma{1}$, and $\sGamma{2}$ are ($P_L$ dependent) parameters
obtained from the fits of the experimental $D_\omega$ spectra. $\delta
U_L$ is assumed to be the $P_L$ dependent shot noise value
(\ref{eq:Ushot}).
We recall that $\Omega_{\ensuremath{\mathrm{rf}}}$ is not a fit parameter, and that
calibrated numerical values of $\Omega_{\ensuremath{\mathrm{rf}}}$ are inserted in
(\ref{eq:NEM}) when evaluating $\delta B_{\ensuremath{\mathrm{NEM}}}$.
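The location of the optimum along the $\Omega_{\ensuremath{\mathrm{rf}}}$ axis can be read
off (\ref{eq:NEM}) analytically: since $A_0$ and $\delta U_{L}$ depend
on $P_L$ only, minimizing
$(\sGamma{2}^2+\Omega_{\ensuremath{\mathrm{rf}}}^2\sGamma{2}/\sGamma{1})/\Omega_{\ensuremath{\mathrm{rf}}}$
with respect to $\Omega_{\ensuremath{\mathrm{rf}}}$ at fixed laser power yields
\[
\Omega_{\ensuremath{\mathrm{rf}}}^{\mathrm{opt}}=\sqrt{\sGamma{1}\sGamma{2}}\,,
\]
i.e., the \ensuremath{\mathrm{rf}}-broadened contribution to the linewidth then equals the
intrinsic one, $\Omega_{\ensuremath{\mathrm{rf}}}^2\sGamma{2}/\sGamma{1} = \sGamma{2}^2$.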
\begin{figure}[t]
\centerline{\includegraphics[angle=0,width=0.9\linewidth]{Castagna_fig7}}
\caption{Plot of $\delta B_{\ensuremath{\mathrm{NEM}}}$ as a function of the amplitude
$\Omega_{\ensuremath{\mathrm{rf}}}$ of the rotating field and of the laser power $P_L$.
The contours represent the lines of constant \ensuremath{\mathrm{NEM}}, spaced by
1~\ensuremath{\mathrm{\fT}/\sqrt{\Hz}}, with selected numerical values indicated. The cross refers
to the minimal \ensuremath{\mathrm{NEM}}{} value, which, for the cell represented here
has a value of 10.5~\ensuremath{\mathrm{\fT}/\sqrt{\Hz}}.}
\label{fig:nem}
\end{figure}
For each cell we have evaluated $\delta B_{\ensuremath{\mathrm{NEM}}}$ for a range of
parameters $P_L$ and $\Omega_{\ensuremath{\mathrm{rf}}}$.
Figure~\ref{fig:nem} shows a typical result in terms of a contour plot
of $\delta B_{\ensuremath{\mathrm{NEM}}}$.
For each cell we determine the optimal \ensuremath{\mathrm{NEM}}{} value, $\delta
B_{\ensuremath{\mathrm{NEM}}}^\mathrm{min}$, by a numerical minimization procedure.
The minimum for the cell shown in Fig.~\ref{fig:nem} is indicated by a
cross.
\begin{figure}[t]
\centerline{\includegraphics[angle=0,width=0.95\linewidth]{Castagna_fig8}}
\caption{Histogram of the minimal \ensuremath{\mathrm{NEM}}{} values, $\delta
B_{\ensuremath{\mathrm{NEM}}}^\mathrm{min}$, of 241 cells, which represent 94\% of
the cells produced to date.}
\label{fig:histogram}
\end{figure}
The distribution of minimal \ensuremath{\mathrm{NEM}}{} values, $\delta
B_{\ensuremath{\mathrm{NEM}}}^\mathrm{min}$, thus obtained is represented in the form of a
histogram in Fig.~\ref{fig:histogram}.
Only cells with $\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{min}<40 \ensuremath{\mathrm{\fT}/\sqrt{\Hz}}$ are
shown.
This set represents 94\% of all cells we have produced to date.
\section{Discussion}
\label{sec:discussion}
The distribution of linewidths shown in Fig.~\ref{fig:gamma12Histo}
reveals a dependence of the form $\sGamma{02} = \sGamma{01} + \Delta
\sGamma{\mathrm{relax}}$, with a constant offset relaxation rate $\Delta
\sGamma{\mathrm{relax}}$, whose numerical value (fit parameter $a$ in
Fig.~\ref{fig:gamma12Histo}) is $\Delta\sGamma{\mathrm{relax}} / 2\pi =
1.35~\ensuremath{\mathrm{Hz}}$.
Here we show that $\sGamma{01}$ is ultimately limited by atoms
escaping to the sidearm, and that $\Delta\sGamma{\mathrm{relax}}$ is
mainly determined by spin exchange collisions
($\Delta\sGamma{\mathrm{ex}}$) with a minor contribution from magnetic
field inhomogeneities ($\Delta\sGamma{\Delta B}$).
\subsection{Longitudinal relaxation}
The intrinsic longitudinal relaxation rate $\sGamma{01}$ is limited by
processes which thermalize the magnetic sublevel populations, such as
atoms escaping through the capillary to the sidearm where they
eventually collide with the solid Cs droplet, atoms hitting an
imperfectly coated surface spot of the spherical bulb, or atoms being
absorbed by the coating \cite{Alexandrov:2002:LID,Gozzini:2008:LIS}.
All of those processes can be parametrized in terms of an effective
depolarizing surface area $\sigma_\mathrm{dep} \equiv \pi
r_\mathrm{dep}^2$.
We will refer to such processes in general as ``reservoir losses''.
The distribution of $\sGamma{01}$ values in the top graph of
Fig.~\ref{fig:gamma12Histo} represents the statistical distribution of
such imperfections, due to uncontrolled parameters in the cell
production process.
In a spherical cell of radius $R$ the rate of wall collisions is
$\gamma_\mathrm{wall}=3\overline{v}/4R$, where $\overline{v}$ is the
average thermal velocity.
The intrinsic longitudinal relaxation rate can thus be expressed in
terms of the effective depolarizing spot radius, $r_\mathrm{dep}$, via
\begin{equation}
\sGamma{01} = \gamma_\mathrm{wall} \frac{\pi r_\mathrm{dep}^2}
{4\pi R^2}
= \frac{3\,\overline{v}\,r_\mathrm{dep}^2}
{16\,R^3}\,.
\end{equation}
The upper axis in the top graph of Fig.~\ref{fig:gamma12Histo} shows
the radius $r_\mathrm{dep}$ corresponding to the $\sGamma{01}$ value
on the lower axis.
The best cell produced so far has a longitudinal relaxation rate
$\sGamma{01}/ 2\pi\approx 0.50(5)~\ensuremath{\mathrm{Hz}}$, which corresponds to
$d_\mathrm{dep}=2r_\mathrm{dep}=1~\ensuremath{\mathrm{mm}}$.
This value is compatible with the design diameter, $d_\mathrm{cap} =
0.75(25)~\ensuremath{\mathrm{mm}}$, of the capillary, which shows that $\sGamma{01}$ is
ultimately limited by atoms escaping into the sidearm.
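Explicitly, with $\overline{v}\approx 220~\mathrm{m/s}$ for Cs at room
temperature and $R=14~\ensuremath{\mathrm{mm}}$, a rate $\sGamma{01}/2\pi = 0.50~\ensuremath{\mathrm{Hz}}$
corresponds to
\[
r_\mathrm{dep}=\sqrt{\frac{16\,R^3\,\sGamma{01}}{3\,\overline{v}}}
\approx 0.46~\ensuremath{\mathrm{mm}}\,,
\]
i.e., $d_\mathrm{dep}\approx 0.9~\ensuremath{\mathrm{mm}}$, consistent with the quoted
value.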
\subsection{Transverse relaxation: field inhomogeneities}
If the offset magnetic field $B_0$ varies over the cell volume it
produces a distribution of resonance frequencies $\omega_L$, and hence
a broadening of the magnetic resonance lines given by
(\ref{eq:signals}) and (\ref{eq:phase}).
The fitting analysis interprets this broadening as an increase of the
transverse linewidth $\sGamma{02}$ by an amount $\Delta\Gamma_{\Delta B}$.
A main advantage of coated cells over buffer gas filled cells is that,
because of multiple wall collisions, the atoms explore a large
fraction of the cell volume during the spin coherence time, which
effectively averages out field gradients.
Standard line narrowing theory \cite{Watanabe:1977:MLN} predicts that
an inhomogeneous magnetic field gives a lowest order contribution
\begin{equation}
\sGamma{\Delta B} = (\gamma_F \Delta B_\mathrm{rms})^2 \tau_c
\label{eq:gamma2inhom}
\end{equation}
to the transverse relaxation rate, where $\Delta B_\mathrm{rms}$ is
the rms value of the magnetic field averaged over the cell volume, and
$\tau_c$ the correlation time of the field fluctuations seen by the
cell, which can be approximated by the mean time between wall
collisions.
This expression is valid in the so-called good averaging regime
\cite{Watanabe:1977:MLN}, i.e., for $\gamma_F \Delta B_\mathrm{rms}
\tau_c \ll 1$.
From the geometry of the used coils we estimate $\Delta
B_\mathrm{rms}$ to be on the order of 2~\ensuremath{\mathrm{nT}}, which yields
$\Delta\nu_{\Delta B}=\Gamma_{\Delta B}/2\pi= 30$~mHz.
Even when allowing for a 5~times larger inhomogeneity (i.e., $\Delta
B=10~\ensuremath{\mathrm{nT}}$) from uncompensated residual fields --- recall that we
actively compensate linear field gradients --- one still has
$\Delta\nu_{\Delta B}< 0.1~\ensuremath{\mathrm{Hz}}$. We can thus ascertain that the
contribution from field inhomogeneities to $\sGamma{02}$ is
negligible.
We note that the good averaging conditions for $\Delta B=2~\ensuremath{\mathrm{nT}}$ and
10~\ensuremath{\mathrm{nT}}{} read $\gamma_F\Delta B_\mathrm{rms}\tau_c=0.004$ and $0.02$,
respectively.
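The numbers in this subsection follow directly from Eq.~(\ref{eq:gamma2inhom}) with $\tau_c$ taken as the mean time between wall collisions, $\tau_c = 1/\gamma_\mathrm{wall} = 4R/(3\overline{v})$. The sketch below (not part of the paper) reproduces them; the Cs gyromagnetic ratio $\gamma_F/2\pi \approx 3.5$~Hz/nT, $R=15$~mm, and $\overline{v}\approx 216$~m/s at 20\,$^\circ$C are assumed values.

```python
import math

# Broadening Gamma_dB = (gamma_F * dB_rms)^2 * tau_c, Eq. (gamma2inhom),
# with tau_c = 4R/(3*vbar) the mean time between wall collisions.
gamma_F = 2 * math.pi * 3.5   # rad/s per nT (Cs ground state, assumed)
vbar = 216.0                  # m/s, Cs mean thermal speed at 293 K
R = 15e-3                     # m, cell radius
tau_c = 4 * R / (3 * vbar)    # s

def linewidth_hz(dB_rms_nT):
    """Broadening Gamma_dB / 2pi in Hz for an rms inhomogeneity given in nT."""
    return (gamma_F * dB_rms_nT) ** 2 * tau_c / (2 * math.pi)

print(round(1e3 * linewidth_hz(2.0), 1))  # mHz; compare with the quoted ~30 mHz
print(round(gamma_F * 2.0 * tau_c, 3))    # good-averaging parameter, ~0.004
```

The same function evaluated at 10~nT stays below 0.1~Hz, consistent with the conclusion that field inhomogeneities contribute negligibly.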
\subsection{Transverse relaxation: spin exchange}
As derived by Ressler~\mbox{et~al.}~\cite{Ressler:1969:MSE}, the
contribution from spin exchange collisions to the transverse
relaxation rate is given by
\begin{equation}
\Delta\sGamma{\mathrm{ex}}=\alpha\frac{2I}{2I+1}\,
n_\mathrm{Cs}\sigma_\mathrm{ex}v_r\,,
\label{eq:sex}
\end{equation}
where $I$ is the nuclear spin, $n_\mathrm{Cs}$ the Cs number density,
$v_r$ the relative velocity of colliding atoms, and
$\sigma_\mathrm{ex} = 2.06\times 10 ^{-14}~\mathrm{cm}^2$
\cite{Ressler:1969:MSE} the spin exchange cross section for Cs--Cs
collisions.
The parameter $\alpha$ describes the slowing down of the spin
relaxation due to the hyperfine interaction.
In small magnetic fields $\alpha\approx 0.63$ for the
$M{=}-4\rightarrow M{=}-3$ transition (Fig.~3 of
\cite{Ressler:1969:MSE}).
At $T=20(1){}\,^\circ$C the contribution of spin exchange collisions
to $\sGamma{02}$ evaluates to
\begin{equation}
\frac{\Delta\sGamma{\mathrm{ex}}}{2\pi}= 1.6(2)~\ensuremath{\mathrm{Hz}} \,,
\label{eq:sexval}
\end{equation}
where the error reflects the uncertainty in the number density.
This value is compatible with the experimental value
$\sGamma{02}-\sGamma{01} = (2\pi) 1.35(3) ~\ensuremath{\mathrm{Hz}}$.
We are therefore confident that dephasing spin exchange collisions
give the main contribution to the transverse relaxation rate,
notwithstanding a certain scatter of the spin exchange cross sections
in the literature.
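For orientation, Eq.~(\ref{eq:sex}) can be evaluated numerically (hypothetical script, not part of the paper). The Cs number density $n_\mathrm{Cs}\approx 3\times10^{10}~\mathrm{cm}^{-3}$ is an assumed value for $\sim$20\,$^\circ$C — and the dominant uncertainty — and the relative velocity is taken as $v_r=\sqrt{2}\,\overline{v}$.

```python
import math

# Evaluate Delta Gamma_ex = alpha * (2I/(2I+1)) * n_Cs * sigma_ex * v_r, Eq. (sex).
I = 7 / 2              # Cs nuclear spin
alpha = 0.63           # slowing-down factor for the M=-4 -> M=-3 transition
sigma_ex = 2.06e-14    # cm^2, Cs-Cs spin exchange cross section
n_Cs = 3.0e10          # cm^-3, ASSUMED density at ~20 C
vbar = 216.0e2         # cm/s, mean thermal speed at 293 K
v_r = math.sqrt(2) * vbar  # cm/s, mean relative velocity of colliding atoms

dGamma_ex = alpha * (2 * I / (2 * I + 1)) * n_Cs * sigma_ex * v_r  # rad/s
print(round(dGamma_ex / (2 * math.pi), 2))  # Hz; compare with the quoted 1.6(2) Hz
```

The result falls within the quoted $1.6(2)$~Hz band, consistent with spin exchange dominating $\sGamma{02}$.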
\subsection{Fundamental limits of magnetometric sensitivity}
The ultimate sensitivity of the type of magnetometers described here
is limited by two fundamental processes, viz., photon shot noise
and spin projection noise.
One can show (Appendix~\ref{sec:photoncounting}) that the minimal
\ensuremath{\mathrm{NEM}}{} imposed by the shot noise of the detected photons is given by
\begin{equation}
\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{PSN}
=
\frac{2\sqrt{2}\sGamma{2}}{\gamma_F}
\sqrt{\frac{\sGamma{2}}{\sGamma{1}}}
\frac{1}{\kappa_0 L }
\frac{1}{\ensuremath{A_{FF'}}\ensuremath{\braket{F_z}}}
\sqrt{\frac{h \nu}{\ensuremath{{Q}_{\kern-0.2em E}} \ensuremath{P_{L}} t}}
\,,
\label{eq:PSN}
\end{equation}
where \ensuremath{P_{L}}{} is the power detected after the cell, \ensuremath{{Q}_{\kern-0.2em E}}{} the quantum
efficiency of the photodiode for photons of energy $h\nu$, and
$\kappa_0$ the resonant absorption coefficient of the driven hyperfine
component for unpolarized atoms.
For a time interval of $t=0.5$~s, the result corresponds to a
measurement bandwidth of 1~\ensuremath{\mathrm{Hz}}.
In (\ref{eq:PSN}) \ensuremath{\braket{F_z}}{} is the spin polarization
\begin{equation}
\ensuremath{\braket{F_z}} = \sum_{M=-4}^{4} p_{4,M}^{\phantom{+}} M\,,
\label{eq:Fz}
\end{equation}
in the $F{=}4$ state, where the $p_{4,M}$ are the populations of the
magnetic sublevels $\ket{F{=}4,M}$.
The analyzing power for the transition $F{\rightarrow}F'$, \ensuremath{A_{FF'}}{},
depends in general on the applied laser power and accounts for
population effects such as hyperfine pumping. Its value has been
determined by a numerical model based on rate equations
\cite{pumping_inprep}. It is a slowly varying function in the domain
of laser powers considered here, with value $\ensuremath{A_{43}} = 1.15(5)$.
For our apparatus, $\kappa_0 L\approx 0.7$, $\ensuremath{{Q}_{\kern-0.2em E}}=0.7$, so the above
can be rewritten as
\begin{equation}
\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{PSN}(\ensuremath{\mathrm{fT}}) = \frac{0.146}{\ensuremath{A_{43}}\ensuremath{\braket{F_z}}\sqrt{\ensuremath{P_{L}}(\mu\mbox{W})}}
\sGamma{2}\sqrt{\frac{\sGamma{2}}{\sGamma{1}}} \,.
\label{eq:PSNnumerical}
\end{equation}
For our best cell, $\ensuremath{A_{43}}\ensuremath{\braket{F_z}} = 0.39(4)$, $\sGamma{1}/2\pi =
3.40~\ensuremath{\mathrm{Hz}}$, and $\sGamma{2}/2\pi = 4.75~\ensuremath{\mathrm{Hz}}$ at the optimum laser power
of $3.6~\mu$W, which yields an expected sensitivity of $\delta
B_{\ensuremath{\mathrm{NEM}}}^\mathrm{PSN}=7.0(7)~\ensuremath{\mathrm{fT}}$, to be compared with the measured
minimal \ensuremath{\mathrm{NEM}}{} of the cell of 9(1)~\ensuremath{\mathrm{fT}}. For a more typical cell with
$\sGamma{2}/2\pi = 10~\ensuremath{\mathrm{Hz}}$, $\sGamma{1}/2\pi = 8.65~\ensuremath{\mathrm{Hz}}$, and
$\ensuremath{A_{43}}\ensuremath{\braket{F_z}} = 0.46(5)$ at the optimal power of $5~\mu$W, the
expected minimal \ensuremath{\mathrm{NEM}}{} is $\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{PSN} =
9.6(1.0)~\ensuremath{\mathrm{fT}}$, indicating that the shot noise limited \ensuremath{\mathrm{NEM}}{} grows
less than linearly in $\sGamma{2}$.
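The quoted values for both cells can be reproduced directly from Eq.~(\ref{eq:PSNnumerical}); the sketch below (not part of the paper) does so, with $\sGamma{2}$ entered as an angular rate (rad/s) and the ratio $\sGamma{2}/\sGamma{1}$ taken as dimensionless, which reproduces the quoted numbers.

```python
import math

def psn_fT(AFz, P_uW, g1_hz, g2_hz):
    """Photon-shot-noise limited NEM in fT from Eq. (PSNnumerical).

    AFz   : analyzing power times spin polarization, A_43 * <F_z>
    P_uW  : detected laser power in microwatts
    g1_hz : Gamma_1 / 2pi in Hz
    g2_hz : Gamma_2 / 2pi in Hz
    """
    g2 = 2 * math.pi * g2_hz          # Gamma_2 as an angular rate
    return 0.146 / (AFz * math.sqrt(P_uW)) * g2 * math.sqrt(g2_hz / g1_hz)

print(round(psn_fT(0.39, 3.6, 3.40, 4.75), 1))  # best cell, ~7.0 fT
print(round(psn_fT(0.46, 5.0, 8.65, 10.0), 1))  # typical cell, ~9.6 fT
```

Doubling $\sGamma{2}$ while $\sGamma{2}/\sGamma{1}$ stays comparable roughly doubles the first factor but only mildly changes the square root, which is why the shot-noise limited \ensuremath{\mathrm{NEM}}{} grows less than linearly here.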
Spin projection noise limits the magnetometric sensitivity to
\begin{equation}
\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{SPN}=
\frac{1}{\gamma_F} \sqrt{\frac{\sGamma{2}}
{N_\mathrm{at} t_\mathrm{meas}}}\,,
\label{eq:SPN}
\end{equation}
where $N_\mathrm{at}=\frac{9}{16}\rho_\mathrm{at} V_\mathrm{cell}$ is
the number of atoms in the $F{=}4$ state that contribute to the signal,
with $\rho_\mathrm{at}$ being the total Cs number density, and
$V_\mathrm{cell}$ the cell volume.
For a measurement time $t_\mathrm{meas}$ of 0.5~s, one finds at
$T=20(1){}\,^\circ$C, $\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{SPN}=0.74(2)~\ensuremath{\mathrm{fT}}$ for
$\sGamma{2}/2\pi= 4.75~\ensuremath{\mathrm{Hz}}$. In our magnetometers spin projection noise
thus has a negligible contribution.
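A direct evaluation of Eq.~(\ref{eq:SPN}) confirms this (hypothetical script, not part of the paper). Assumed inputs: $\rho_\mathrm{at}\approx 3\times10^{10}~\mathrm{cm}^{-3}$ at 20\,$^\circ$C, the 15~mm cell radius, and $\gamma_F/2\pi = 3.5$~Hz/nT.

```python
import math

# Spin projection noise: dB = (1/gamma_F) * sqrt(Gamma_2 / (N_at * t_meas)).
gamma_F = 2 * math.pi * 3.5e9           # rad/(s*T), assumed Cs value
rho = 3.0e10 * 1e6                      # m^-3, ASSUMED total Cs density at 20 C
V_cell = 4 / 3 * math.pi * (15e-3)**3   # m^3, spherical cell of 15 mm radius
N_at = 9 / 16 * rho * V_cell            # atoms in F=4 contributing to the signal
g2 = 2 * math.pi * 4.75                 # rad/s, Gamma_2 for the best cell
t_meas = 0.5                            # s

dB_spn = math.sqrt(g2 / (N_at * t_meas)) / gamma_F  # Tesla
print(round(dB_spn * 1e15, 2))  # fT; compare with the quoted 0.74(2) fT
```

The sub-femtotesla result is an order of magnitude below the photon-shot-noise limit, confirming that spin projection noise is negligible here.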
\section{Summary and conclusion}
\label{sec:conclusion}
We have manufactured and characterized a set of 253 paraffin-coated Cs
vapor cells of identical geometry (15~mm radius spheres), 90\% of
which have an intrinsic transverse relaxation rate in the range of
2~to 6~\ensuremath{\mathrm{Hz}}.
Under optimized conditions of laser and \ensuremath{\mathrm{rf}}{} power those cells have
intrinsic magnetometric sensitivities, $\delta B_{\ensuremath{\mathrm{NEM}}}^\mathrm{min}$,
in the range of 9 to 30~\ensuremath{\mathrm{\fT}/\sqrt{\Hz}}{} under the assumption of (light)
shot-noise limited operation in a DROM-type magnetometer.
The magnetometric sensitivity is determined by the intrinsic
transverse relaxation rate, which, for the best cell of our batch has
a value of $2\pi\cdot2~\ensuremath{\mathrm{Hz}}$, of which $\approx~0.5~\ensuremath{\mathrm{Hz}}$ are due to
reservoir ($T_1$) relaxation, and $\approx~1.5~\ensuremath{\mathrm{Hz}}$ are due to spin
exchange relaxation.
Improving the relaxation properties by reducing reservoir relaxation
is technologically demanding, and would only marginally improve the
overall sensitivity.
Spin exchange relaxation, on the other hand, cannot be suppressed in
coated cells, although it was shown that spin exchange relaxation can
be suppressed in high pressure buffer gas cells, yielding sub-\ensuremath{\mathrm{fT}}{}
magnetometric sensitivity \cite{Kominis:2003:SMA}.
We thus conclude that our cells are as good as coated cells of that
diameter can be, disregarding a possible 25\% reduction of
$\sGamma{02}/2\pi$ by a suppression of reservoir losses.
The expected photon shot noise limited \ensuremath{\mathrm{NEM}}{} of our cells is very
close to the measured \ensuremath{\mathrm{NEM}}{}. The most promising improvement in
sensitivity is expected to come from maximizing \ensuremath{\braket{F_z}}{} via hyperfine
repumping, which could win, at most, a factor of 2--3.
It is well known that in the spin exchange limited regime an increase
of the atomic density by heating the cell does not increase the
magnetometric sensitivity, since both $\sGamma{2}$ and $\kappa_0$ in
(\ref{eq:PSN}) grow proportionally to the density.
The same holds for $\sGamma{2}$ and $N_\mathrm{at}$ in
(\ref{eq:SPN}).
However, when operating the magnetometer in a regime where spin
exchange is not the limiting factor, one expects an improvement of the
sensitivity by increasing the atomic number density.
We will use the cells in multi-sensor applications in fundamental and
applied fields of research.
Since an optimal magnetometric sensitivity is reached with a typical
light power of approximately $5~\mu$W, a single diode laser can drive
hundreds of individual sensors \cite{Hofer:2008:HSO}.
This scalability, together with the very good reproducibility of the coated
cell quality reported here, will allow us to realize in the near future
a three-dimensional array of 25 individual sensors for imaging the
magnetic field of the beating human heart, a signal with a peak
amplitude of 100~\ensuremath{\mathrm{pT}}{} \cite{Hofer:2008:HSO}.
With a reliable and inexpensive multichannel heart measurement system,
magnetocardiograms can be recorded in a few minutes, acquisition times
that are of real interest for clinical applications.
The village of Pemmering is a southwestern part of the market town of Isen in the Upper Bavarian district of Erding.
History
The village belonged to the Freising lordship of Burgrain. After its dissolution in the course of secularization, the municipality of Mittbach (including Pemmering) was formed in 1808 from the southwestern quarter of the former territory, and in 1818 it became part of the Wasserburg district office. Since the district reform of 1 July 1972 the village has belonged to the district of Erding. On 1 May 1978, in the course of the municipal reform, Mittbach was assigned to the market town of Isen.
Parish history
In 1116 Bishop Egilbert of Freising confirmed to Dean Baptist of the collegiate foundation of Isen the "ius praecurtandi" (prior right) of the parish of Permaningen (Pemmering). According to these confirmed parish rights, Pemmering already existed before 1116. Today's Catholic parish church of St. Margareth is an elongated hall building with a slightly recessed polygonal choir, an attached sacristy, and a west tower with a spindle-shaped spire. The choir is late Gothic; the nave dates from the 17th/18th century and was rebuilt after a fire in 1776. The parish of Mittbach was incorporated into the parish of Pemmering in 1828. Today the parish is part of the Isen parish association.
Literature
Landkreis Erding – Land und Leute (1985)
Georg Brenninger: Die Kirchen im Pfarrverband Isen. Katholische Kirchenverwaltung Isen (Hrsg.), Isen 1997, S. 22–23.
External links
References
Geography (Isen)
Place in the district of Erding
First mentioned in 1116
Church village (settlement type)
Basho may refer to:
Hon-Basho, a sumo tournament
Bashō (Mercury crater), a crater on the planet Mercury
Basho (philosophy), a concept in the philosophy of Kitaro Nishida (1870–1945)
Basho is the name of the following people:
Matsuo Bashō (1644–1694), Japanese poet
Robbie Basho (1940–1986), American guitarist
Steffen Basho-Junghans (1953–2022), German guitarist
Q: view swapping techniques I want to hear developers opinions on the best way to swap views on the iphone.
For example, I have a tab bar and one of its tabs defaults to a login view. When The user logs in the view changes to a logged in view.
I was going to just use one view controller and have all the content in one xib, hiding and showing content as needed, but this seems in no way elegant.
Secondly, I was considering having one view controller and simply swapping the xib. I'm a little reluctant to try this as I've read in an article or two that it can lead to memory leaks.
Finally, I was considering using two view controllers with two separate xibs. My gut tells me this would probably be the "proper" solution but I so far have failed to hunt down any sample code on the correct way to do it.
Can you offer advice on the best way to solve this problem?
Is there a technique that I have not listed?
Thanks.
A: I would keep the logic for which view to show in the view controller. The XIB is the view itself, and should have no objects in it that are transient or not always visible for that particular view.
Your second approach (of swapping the views) seems to be the right approach to me, and is always something I, personally, do in these situations. I am not aware of any memory issues if you do it right (remove from superview, followed by loading the new view as a subview of the controller's view). You could perform any custom initialization once the new XIB has been loaded and before showing it to the user.
Multiple view controllers just seems superfluous as then you would ideally require another top level controller to manage the two view controllers.
## Section 2.4 Extending the Minimal Example

We will not keep reproducing the entire example but instead suggest a series of modifications. After each edit, process the file again to make sure your syntax is correct (before you get too far along) and to see the changes.

The generic name of the resulting files is pretty bland, and it would be nice to have a title for our article. We will add an attribute to the <article> tag, specifically @xml:id, which is a very important part of PreTeXt and used frequently. For now, it will be used to generate the names of the output files. (The "at" symbol is a way of reminding you that it is an attribute; it is not part of what you author.)

So make the following modifications:

<article xml:id="quick">
<title>My First Small Example</title>

<p>This is a short sentence.</p>

Your outputs should now have a title, and more importantly, the filenames will be quick.html and quick.tex. Of course, you might like your outputs to have similar names to your input, but you see that this is not necessary.

Let us give our article a bit of structure. We will have an introduction and two sections with their own titles. So replace the one-sentence paragraph by the following, all following the article title and contained within the <article> tags. Remember, do not include any newlines (carriage returns, line feeds, hard wrap) in the longer lines. (We have to format things differently so you can see exactly what is happening.)

<introduction>
<p>Let's get started.</p>
</introduction>

<section xml:id="section-short">
<title>Beginnings</title>

<p>This is a short sentence.</p>
</section>

<section xml:id="section-multiple-paragraph">
<title>Endings</title>

<p>This is a longer sentence that is followed by another sentence.
Two sentences, and a second paragraph to follow.</p>

<p>One more paragraph.</p>
</section>

The LaTeX/PDF output will be a bit odd looking since every paragraph is so short, but all the content should be there. Notice that the HTML output now has a table of contents to the left and active navigation buttons. Also the two sections are in their own files and the URLs have been constructed from the supplied values of the @xml:id attribute.

One last experiment: let us add some mathematics. We use XML tags, <m> for "inline" mathematics and <me> for a "math equation" which will be rendered with a bit of vertical separation and centered from left to right. We use LaTeX syntax for mathematics, which has been the standard for working mathematicians for decades. For electronic presentation, we rely on the excellent MathJax project which basically supports all the syntax of the amsmath package from the American Mathematical Society. Add the following sentence to any of the paragraphs of your article.

If the two sides of a right triangle have lengths <m>a</m>
and <m>b</m> and the hypotenuse has length <m>c</m>,
then the equation <me>a^2 + b^2 = c^2</me> will always hold.

So your final source file might look like the following. You now have many of the basic skills you would need to write an entire research article in mathematics, and should be in a position to learn the remainder easily and quickly.

<mathbook>
<article xml:id="quick">
<title>My First Small Example</title>

<introduction>
<p>Let's get started.</p>
</introduction>

<section xml:id="section-short">
<title>Beginnings</title>

<p>This is a short sentence.</p>
</section>

<section xml:id="section-multiple-paragraph">
<title>Endings</title>

<p>This is a longer sentence that is followed by another sentence.
Two sentences, and a second paragraph to follow.</p>

<p>One more paragraph. If the two sides of a right triangle have
lengths <m>a</m> and <m>b</m> and the hypotenuse has length <m>c</m>,
then the equation <me>a^2 + b^2 = c^2</me> will always hold.</p>
</section>
</article>
</mathbook>
Q: How can I retry HTTP client requests within a HttpClientFilter using Micronaut? I've implemented a micronaut HttpClientFilter to add a cached bearer token for all requests to a 3rd party service, however this token expires fairly regularly. I would like to reauthenticate with the downstream API and retry the request when this happens, but there doesn't seem to be an obvious way to do this. So far I'm simply calling proceed a second time, however this causes an Index out of bounds error to be thrown (I think that this is the exception that is supposed to be thrown, but there appears to be a bug in Micronaut here). Minimal reduction of what I've got is below:
import io.micronaut.http.HttpResponse;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.MutableHttpRequest;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.client.exceptions.HttpClientResponseException;
import io.micronaut.http.filter.ClientFilterChain;
import io.micronaut.http.filter.HttpClientFilter;
import io.reactivex.Flowable;
import javax.inject.Inject;
interface AuthTokenProvider {
// cached
Flowable<String> fetchToken();
void invalidateToken();
}
@Filter(serviceId = "third-party-api")
public class AuthTokenFilter implements HttpClientFilter {
@Inject AuthTokenProvider tokenProvider;
private Flowable<HttpResponse<?>> buildRequestWithToken(MutableHttpRequest<?> request, ClientFilterChain chain) {
return tokenProvider.fetchToken()
.map(token -> request.bearerAuth(token))
.flatMap(chain::proceed);
}
@Override
public Flowable<HttpResponse<?>> doFilter(MutableHttpRequest<?> request, ClientFilterChain chain) {
// First attempt with the (possibly cached) token; if the request fails,
// invalidate the cached token and retry once with a freshly fetched one.
return buildRequestWithToken(request, chain)
.onErrorResumeNext(err -> {
System.out.println("API request failed, invalidating token and retrying");
tokenProvider.invalidateToken();
return buildRequestWithToken(request, chain);
});
}
}
Can anyone suggest the correct way to do this?
A: Two options:
1. You can use the advanced-expiration property in application.yml. If you set it to 200 seconds, then Micronaut considers the token expired 200 seconds before the actual expiry. https://micronaut-projects.github.io/micronaut-security/latest/guide/#io.micronaut.security.oauth2.configuration.OauthClientConfigurationProperties$ClientCredentialsConfigurationProperties
2. You can use the retry annotation provided by Micronaut (see page 18 of this deck): https://objectcomputing.com/files/6315/7297/3014/Slide_Deck_Micronaut_Declarative_HTTP_Client_Webinar.pdf
Shelby Hearon (January 18, 1931 - December 10, 2016) was an American novelist and short story writer.
Early life
Hearon was born in 1931 in Marion, Kentucky. She attended the University of Texas at Austin, graduating with a Bachelor of Arts in 1953.
Career
Armadillo in the Grass, her first novel, was begun in 1962 and accepted for publication by Knopf in 1967. Hearon had a teaching career at several colleges, and served on the Texas Commission on the Arts and the New York State Council on the Arts.
Awards and recognition
Hearon has been awarded fiction fellowships from the Ingram Merrill Foundation, the Guggenheim Foundation and the National Endowment for the Arts. She has received the Texas Institute of Letters award twice, and a lifetime achievement award from the Texas Book Festival. Five of her short stories were awarded NEA/PEN syndication Short Story Prizes and she received a NEA Creative Writing Fellowship. She has also received a New York Women in Communications Award.
Her novel Owning Jolene won an American Academy of Arts and Letters Literature Award.
Bibliography
Armadillo in the Grass (1968)
The Second Dune (1973)
Hannah's House (1975)
Now and Another Time (1976)
A Prince of a Fellow (1978)
Barbara Jordan, a self portrait (1979)
Painted Dresses (1981)
Afternoon of a Faun (1983)
Group Therapy (1984)
A Small Town (1985)
500 Scorpions (1986)
Owning Jolene (1989)
Hug Dancing (1991)
Life Estates (1994)
Footprints (1996)
Ella in Bloom (2001)
Year of the Dog (2007)
References
External links
Shelby Hearon 1931–2016
A Conversation with...Shelby Hearon from The Borzoi Reader
"Laying It All Out: Shelby Hearon Makes an Art of the Little White Lie" in the Austin Chronicle
Hearon's Papers at the Texas State University Library - Southwestern Writers Collection
Inventory of Hearon's papers at Harry Ransom Center at The University of Texas at Austin
Addition to inventory of Hearon's papers at Harry Ransom Center
1931 births
2016 deaths
20th-century American novelists
21st-century American novelists
American women novelists
American women short story writers
University of Texas at Austin alumni
Novelists from Kentucky
People from Marion, Kentucky
20th-century American women writers
21st-century American women writers
20th-century American short story writers
21st-century American short story writers
Kentucky women writers
Christmas presents from the Commission
Written By: Mike Smith - Date published: 1:39 pm, December 16th, 2011 - 9 comments
Categories: uncategorized - Tags:
In 2008 on 16 December, the previous Electoral Commission released its non-decision on the address I used on Chris Knox's CD "it's a better way with Labour". They were embarrassed and buried it in the pre-Christmas dump. This December, I'm waiting for a much more important decision from the current Commission on the RadioLive "Prime Minister's Hour." There's a lot at stake.
Our elections have always kept money out of the broadcast media. In contrast to the billions spent in the US and the millions in Australia, parties here can only spend the Commission's allocation, and only then in the period between writ day and election day. Candidates can buy time on radio and TV but have to keep spending within their overall limits.
The significance of the RadioLive offer to the Prime Minister of a free hour on radio is twofold. First it opens the floodgates for brand advertising, and secondly it allows media organisations to determine who gains access.
The issue of political influence in the broadcast media is a live one in the UK, the US, and Australia, what with phone hacking, Fox News and the Murdoch empire. We should not think it does not go on here. In today's Herald, John Drinnan relates how Key's electorate manager Stephen McElrea is involved in New Zealand On Air's efforts to influence the media:
New Zealand On Air was running scared after bloggers criticised a TV3 child poverty documentary it funded, which screened just days before the election. The politically appointed funding agency is concerned the TV3 scheduling three days out from the election had called its neutrality into question.
The alternative view is that the board, which includes John Key's electorate chairman Stephen McElrea, got the political jitters. When NZ On Air saw the TV3 promos last month it was aghast.
NZ On Air said that around that time there was criticism in blogs and questions over the funding body's neutrality. Chief executive Jane Wrightson sent TV3 bosses "a stern and strict letter" before the doco screened complaining about the documentary highlighting an election issue so close to polling day.
"We jealously guard our political neutrality and we were distressed it was put under threat by the scheduling," she said yesterday. "The whole board is concerned about this and so am I. "We had no trouble with the documentary itself which Bryan Bruce describes as a social issues documentary. "It attracted considerable comment from people such as bloggers."
Comment from people such as bloggers? Did the Standard cause such a reaction? Hold on, it must be the other lot.
Drinnan goes on to raise real questions, about New Zealand on Air's independence.
The timing of the child poverty doco raised eyebrows. TV3 has maintained solid, serious documentaries while the genre has been abandoned at TVNZ. Admittedly, you might expect such an important topic would run a few weeks out rather than just before polling day. And you could argue that the topic – about the distribution of income and its effect on the weak – would have a left-ish perspective. Bruce did not seem to be heavy-handed with any party political bias.
New Zealand On Air's "stern and strict reaction" was more disturbing than the scheduling decision. It evokes some of the wider issues about bravery and independence at a time when media are under pressure and turning to the state for help, and particularly for New Zealand On Air being caught up in the Wellington milieu. NZ On Air can't be taking an editorial line, but it should serve the public and not hide from controversy.
The New Zealand On Air board is made up of political appointees and could look closer to home when it concerns perceptions about its neutrality. It included Labour-friendly people and friends of Helen Clark. Under National the most recent appointee is Stephen McElrea, who has long experience in the broadcasting world, and who is also northern region deputy chairman of the National Party and was chairman of the electorate committee for Key's Helensville electorate. He would have played an important role in National's campaign.
The Prime Minister took an active role in his appointment to New Zealand On Air – opening up to scrutiny the funding body's neutrality. Despite his party political background, McElrea was appointed to a special committee to select a series of three social issues documentaries to screen on TV3 next year.
One will be about education and charter schools, a notion that was unheard of before the election, and most certainly could have done with some exposure in the run-up to the election like child poverty did. The Government and NZ On Air needs to gets its own ship in order before it starts firing off "stern and strict" letters to organisations who rely on taxpayer funding for their survival.
Given its advice to the broadcaster before the RadioLive programme was put to air, the Electoral Commission's decision on Prime Minister's hour will be an important test of its independence when it appears. It will certainly attract considerable comment on this blog.
9 comments on "Christmas presents from the Commission "
Craig Glen Eden 1
So that report will be out on Christmas day or New years day then?
lprent 1.1
No No….. People might take an interest then. Not everyone gets wasted on New Year's Eve, and on Xmas day people without little kids are usually quite alert in the morning and, if they're like me, inclined to find a safe (from work) quiet position so they don't get in the way of the happiness of the xmas day bustlers. Like wherever the newspapers, TV, and computers are.
Boxing day. Now there is a day that almost everyone is distracted by one thing or another. Hangovers, screaming kids in sugar withdrawal, boxing day sales for the terminally addicted, an insistence that everyone gets off their newly formed lard arses and do something…….
newsense 2
don't forget the thing about taking 70,000 off the Nation!
"The Government and NZ On Air needs to gets its own ship in order"
It is by being filled with nat stooges to ensure the 'message' as required by the govt is passed off as if it's a factually based balanced piece of doco making.
The demise of the charter and issues like Heartland going behind sky's paywall says all you need to know about what the nats' attitude to any quality public-funded TV is, the same as their attitude toward due process and democracy.
They don't give a toss as long as it can be bent to their own ends.
Inventory2 4
I don't think that Carmel Sepuloni will like HER Christmas present from the Electoral Commission; a nine-vote defeat, and a job search to start on Monday.
Spratwax 5
How can we be sure of the neutrality of the Electoral Commission? Can we count on it, Carmel?
I'd have to say that it's very disconcerting that these counts can change like this. Impartiality is not the question, systems and security are.
randal 6
go round the back of the building and look for a big large empty brown paper bag.
Ianupnorth 7
I was going to post this observation in Open Mike, just think it is timely that Mr. Smile and Wave keeps appearing at lots of popular charity type events – Special kids Christmas Party, ZM radio's break and enter Christmas, etc…
Adam Noah Levine, an American singer, songwriter, and musician, was born on March 18, 1979. He plays rhythm guitar and sings lead for the pop rock band Maroon 5.
With the band, Levine has ruled the Billboard charts for the past 15 years and had a tonne of radio play all around the world. While serving as a coach on NBC's reality talent competition "The Voice," Levine won three times with members of his team.
Early Life of Adam Levine
On March 18, 1979, Adam Noah Levine was born in Los Angeles, California. When Adam was seven years old, his parents, Fredric and Patsy Levine, divorced. He has said that his family is quite musical and that his mother's favourite bands, Simon & Garfunkel, Fleetwood Mac, and the Beatles, influenced him musically.
Levine attended Brentwood School, where he met Jesse Carmichael and Mickey Madden, who would later form the band Maroon 5.
He continued to be interested in music in high school, when he claims he was "a bit disobedient; I didn't want to follow the instructions they were giving me, and music dominated all of my thoughts."
In February 1994, Levine, Carmichael, Madden, and Ryan Dusick, a friend of the group, founded Kara's Flowers. Levine played the guitar and sang for them during their debut performance at Whisky a Go Go in West Hollywood in 1995.
Producer Tommy Allen stumbled onto the band while attending a Malibu beach party where they were playing. Through producer Rob Cavallo, Allen and his partner had Kara's Flowers create an 11-track album before signing them to Reprise Records.
The Fourth World, the band's debut album, was released in August 1997. The band additionally had a guest appearance on Beverly Hills, 90210 in the same year.
The lads went on tour in support of their album after finishing high school, but they had little success. After the record sold barely 5,000 copies, Reprise chose to drop the band, and Kara's Flowers quickly disbanded.
Adam Levine's Net Worth
Adam Levine's net worth is estimated at $160 million. He primarily makes money through music. He is best known as the lead singer of the band Maroon 5 and as a coach on NBC's TV competition The Voice.
His compensation for a season of The Voice was $8 million. Blake Shelton, Usher, Shakira, and Christina Aguilera have all served alongside Adam as coaches.
Profession: Professional Singer
Date of Birth: March 18, 1979 (age 43)
Height: 5 ft 11 in (1.82 m)
Since 2011, Levine has served as a judge and a coach on the reality television programme The Voice. Levine's fame and reach have increased dramatically since he joined the programme. He's been dubbed the series' breakout star by many. In May 2019, after 16 seasons and 8 years, he left The Voice.
On The Voice, Adam's first season, he made $6 million. He made $10 million in 2015. He made $12 million in 2016. He earned about $35 million in total between 2015 and 2016. Since 2017, Adam's pay on The Voice has been $13 million.
Will Having a Third Child Affect His Net Worth?
The family of Adam Levine and Behati Prinsloo is expanding! According to PEOPLE, the Maroon 5 frontman, 43, and Victoria's Secret model, who wed in 2014, are expecting their third child together.
The new baby will join the couple's existing children, Gio Grace, 4, and Dusty Rose, 5, and perhaps the newborn will have an impact on Adam Levine's wealth.
Levine and Prinsloo went out to eat on Monday in Santa Barbara, where the expectant mother showed off her baby bump in a silk floral outfit.
Prinsloo discussed the idea of growing their current family of four when speaking with Entertainment Tonight in November.
In 2012, Levine began dating Namibian Victoria's Secret model Behati Prinsloo. They married two years later, in 2014. The couple have two daughters, Dusty Rose (born in 2016) and Gio Grace (born in 2018).
According to People, Levine and Prinsloo were reported on September 6, 2022 to be expecting their third child together.
Levine has admitted to using hallucinogens as a teenager. He has claimed that following his first encounter with Ambien, which rendered him asleep for an hour, he completely ceased using prescription medications.
How Much Are Adam Levine's Cars Worth?
Adam Levine has several high-end vehicles since he is wealthy. Let's look at his collection of automobiles.
1. BMW 3-Series Convertible ($59,800)
The 3 Series features the newest technology from the company and more opulent luxury in a new, upgraded design.
It has a new plug-in hybrid variant with rear-wheel or all-wheel drive and an eight-speed automatic transmission, as well as a turbocharged four-cylinder engine with 255 horsepower and a turbocharged six-cylinder engine with 385 horsepower.
2. Audi A7 ($88,900)
The Audi A7 sits between the Audi A8, which is the most opulent Audi you can get, and the Audi A6, which is a wonderful luxury sedan with a dynamic feel.
The hatchback form of the Audi A7 makes it considerably more practical, and it looks and drives beautifully.
3. Range Rover ($135,670)
This SVA is the most costly Range Rover ever made and is a full-fledged Range Rover. Many well-known individuals, corporate leaders, and other wealthy people drive one.
The Autobiography trim is all about wealth and luxurious living. A 4999 cc supercharged eight-cylinder engine with 557 HP powers the SVA.
Additionally, the Range Rover has a large wheelbase that increases passenger comfort and legroom in the back seats.
Adam Noah Levine is an American singer, songwriter, and musician whose net worth is $160 million. He was born in Los Angeles, California.
He is one of the most sought-after TV personalities and the author of some of the best pop records of the 2000s and 2010s. Adam married supermodel Behati Prinsloo in 2014 and they have two children together, Dusty Rose and Gio Grace.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 683 |
Do you own a heart punch? This time of year most people might be thinking about how to use it for Valentines Day projects. But if you're like me, you also like to get as much use out of your tools as possible. So, here are 5 non-valentine ways to use your heart punch.
The current Stampin'UP! catalog has a variety of heart punches to choose from. If you're looking to get a basic heart punch my recommendation would be the Sweetheart Punch. Side to side it's 1 3/4" across. I'd also recommend the Itty Bitty Accents Punch Pack. The 3 small punches in that pack (heart, star and flower) can be used on a wide variety of projects, and can really dress up a design. The Sweetheart Punch and Itty Bitty Accents Punch Pack would be excellent basic supplies in every crafter's tool box. In fact that little flower punch in that pack is one of my favorites.
"redpajama_set_name": "RedPajamaC4"
} | 7,610 |
Q: Adding \ in sprintf for C I am trying to add the \ in C with:
sprintf("trying to add \ in this print"); I have tried to look this up and I cannot seem to get it to work.
A: Have you tried the double backslash: sprintf("trying to add \\ in this print");?
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,332 |
The SA-302 is a regionally owned road of the Junta de Castilla y León (Spain) that runs through the province of Salamanca between the towns of Ledesma and Almendra.
It belongs to the Local Complementary Road Network of the Junta de Castilla y León.
It passes through the Salamancan towns of Ledesma, Villaseco de los Reyes, Monleras and Almendra.
History
This road was formerly divided into two sections with the following designations:
which corresponds to the section from Ledesma to Monleras.
which corresponds to the section from Monleras to Almendra () *
* This is the first of the two sections that made up the former SA-303. The other section is currently designated as .
Route
See also
Road network of Salamanca
References
Autonomous roads of the local complementary network of Castilla y León
302
Transport in Castilla y León
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,750 |
Q: Prove that $a^n \gt \frac{(n \ln(a))^m}{m!}$ without using taylor series. Prove that $a^n = e^{n \ln(a)} \gt \dfrac{(n \ln(a))^m}{m!}$ for all $m\in \mathbb{N}$ and $a>1$ without using taylor series/calculus.
I am trying methods that use the binomial theorem but I can't get far. I tried induction on $m$. For $m=1$, we have $e^{n\ln(a)}>n\ln(a)$, which is true since $e^x\geq 1+x>x.$ Suppose the proposition is true for $m=t;$ then we have $e^{n\ln(a)}>\dfrac{(n\ln(a))^t}{t!}.$ Then in order to prove the case $m=t+1$ it would suffice to show that $e^{n\ln(a)}>\dfrac{e^{n\ln(a)}n\ln(a)}{t+1}.$ But after this step I am stuck. Any hints/ideas will be much appreciated.
A: Let us prove by induction that
$$e^x >\frac{x^m}{m!}\,,\quad \forall\ m,\ x>0$$
*
*for $m=0$, we have $e^x > 1 =\frac{x^0}{0!}$
*Assume that $e^x >\frac{x^m}{m!}$ for $x>0$
consider $$(0,\infty)\ni x\mapsto f(x) = e^x -\frac{x^{m+1}}{(m+1)!}$$
Then by the assumption we have $$ f'(x) = e^x -\frac{x^{m}}{m!}>0$$
that is, $f$ is strictly increasing on $(0,\infty)$; therefore,
$$ 0=f(0)<f(x) =e^x -\frac{x^{m+1}}{(m+1)!}$$
hence for all $m$
we have $$e^x -\frac{x^{m}}{m!}>0$$
In particular, since $a>1$, taking $x= n\ln a>0$ one gets
$$a^n = e^{n \ln(a)} \gt \dfrac{(n \ln(a))^m}{m!}$$
A: Prove by induction. Let $P(n): e^x>\dfrac{x^n}{n!}$; then $P(0): e^x>1$ is trivial. Assuming $P(m): e^x>\dfrac{x^m}{m!}$ is true, then
$$\int_0^x e^t\,dt>\int_0^x \dfrac{t^m}{m!}\,dt=\dfrac{x^{m+1}}{(m+1)!}$$
shows $e^x > e^x - 1 > \dfrac{x^{m+1}}{(m+1)!}$, so $P(m+1)$ is true.
A: We assume two results:
*
*$e^x>1+x$ for $x>0$.
*$(e^y)^n= e^{yn}$ for any real $y$ and positive integer $n$.
and prove:
Theorem: If $x>0$ and $m$ is a non-negative integer, then $e^x>\frac{x^m}{m!}.$
Assumption (1) implies this result for $m=0,1.$
Given any $x>0$, and any integer $k>m$, (1) means $e^{x/k}> 1+\frac{x}{k}$ and thus, by $(2)$:
$$e^x = (e^{x/k})^k>\left(1+\frac{x}{k}\right)^k\geq 1+\binom{k}{m}\frac{x^m}{k^m}$$
By the binomial theorem.
Now $$\frac{1}{k^m}\binom{k}{m}=\frac{(1-1/k)(1-2/k)\cdots(1-(m-1)/k)}{m!}>\frac{(1-m/k)^m}{m!}$$
Now you need to show that if you pick $\epsilon>0$ you can find $k$ so that:
$$(1-m/k)^m>1-\epsilon$$
(We will assume $\epsilon<1$.)
Then we want $1-\frac{m}{k}>(1-\epsilon)^{1/m}$ or
$$k>m\left(1-(1-\epsilon)^{1/m}\right)^{-1}$$
So this means:
$$e^x>1+\frac{1-\epsilon}{m!}x^m=\frac{x^m}{m!}+\left(1-\frac{x^m}{m!}\epsilon\right)$$ for any $\epsilon>0$.
But then pick $\epsilon <\min\left(1,\frac{m!}{x^m}\right)$ so we get:
$$e^x>\frac{x^m}{m!}$$
None of these steps used calculus, but (1) and (2) depend on our definition of $e$ (or of $e^x$).
If you define $e^x=\lim_{n\to\infty} \left(1+\frac xn\right)^n$ then you get (1) and (2) pretty easily, but you'd have to prove that this limit exists.
If you define $e^x$ in terms of the power series expansion, then you get the theorem much more directly.
Other ways to define $e^x$ or $e$ are with integrals (definition of natural logarithm) or derivatives, which are calculus. I guess you could "hide" the calculus by defining natural log as a limit of Riemann sums, and brute force from there.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,846 |
Gerardus Willem "Wim" Anderiesen Jr. (2 September 1931 – 27 January 2017) was a Dutch footballer. He played as a defender at club level from the early 1950s to the mid-1960s. He played for Ajax from 1951 to 1961, making 177 appearances for the side. He later played for Holland Sport.
Club career
Born in Amsterdam, he was the son and namesake of Wim Anderiesen (1903–1944).
Anderiesen played 10 years for Ajax, playing 177 official matches in their senior team (ranking 81st in the club's Club van 100) and winning two league titles with them. He later played a few seasons for SHS and Holland Sport.
Personal life
In May 1945, Anderiesen was shot in the back during the Dam Square shooting in Amsterdam, when German soldiers fired at a crowd who were celebrating the German capitulation after World War II.
Anderiesen was one of the founding members of the Dutch players' union.
His father Wim Anderiesen also played for Ajax and earned 46 caps for the Netherlands national football team.
Wim Anderiesen Jr. died on 27 January 2017 in Heerhugowaard at the age of 85.
References
1931 births
2017 deaths
AFC Ajax players
Eredivisie players
Association football defenders
Footballers from Amsterdam
Dutch footballers | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,379 |
export default from './src/pureRender'
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,465 |
A brief on the death penalty during the first half of 2020
22 July 2020
Egyptian Front monitored the death penalty in Egypt during the first half of 2020. It appears that the Egyptian authorities insist on using the death penalty for criminal offenses and are widely using it in political cases. This is in addition to violations committed against defendants in these cases during their trials, infringing their right to a fair trial.
During the first half of 2020, Civil and Military Criminal courts issued 171 death sentences, most of which are criminal cases. Also, the Cassation Court upheld sentences on 10 defendants in 3 cases, most of which are political. Meanwhile, Egyptian authorities executed 34 persons in 11 cases, 3 of which include events of political violence.
Egyptian courts continue to widely use the death penalty in several crimes. A number of 105 crimes are punishable by the death penalty as stipulated by Penal Code no. 58 for the year 1937 and its amendments, Military Code of Justice no. 25 for the year 1966, Firearms and ammunition law no. 394 for the year 1954, and Narcotic Drugs Control law no. 182 for the year 1960.
The death penalty in Egypt is problematic as it is widely used for various crimes, not only the most serious ones. Also, defendants are often deprived of their right to a fair trial due to grave violations of fair trial standards.
Within the framework of its monthly monitoring of the death penalty, the Egyptian Front has communicated with lawyers and monitored the media for documentation purposes. The Egyptian Front has documented the execution of 34 defendants, 10 of whom were part of 3 cases of terrorism and political violence known in the media as "Churches bombing", "Al-Wahat case", and "Al-Farafra Checkpoint". It also monitored the Court of Cassation's confirmation of death sentences for 10 defendants, 7 of whom were part of the political violence case known as "Helwan police station". This is in addition to death sentences handed down to 171 defendants, 40 of whom were part of two political violence cases known in the media as "Ansar Beit al-Maqdis" (37 sentenced to death) and the "Assassination attempt of Alexandria's Security Chief" (3 sentenced to death). Meanwhile, the documents of 158 defendants in 59 cases, 40 of whom are part of political cases, were referred to the Grand Mufti for his advisory opinion regarding their death sentences.
The Death penalty situation during the first half of 2020
First: Execution of the death penalty.
Egyptian Front monitored the execution of at least 34 defendants in 11 cases. Eight of them were convicted in case no. 165 for the year 2017, Military, known in the media as "bombing churches", and executed on February 25, 2020. Hisham Ashmawy was executed on March 4 against the backdrop of case no. 1 for the year 2014, Military, known in the media as "al-Farafra Checkpoint". Abdel Rahim al-Mesmar was also executed against the backdrop of case no. 160 for the year 2018, West Cairo Military, known in the media as "Al-Wahat Case".
While Egyptian Front assures it stands against terrorism and does not by any means claim the innocence of any of the aforementioned defendants, it however emphasizes the importance of maintaining fair trial standards for everyone without discrimination. It has appeared that a number of defendants were subjected to enforced disappearances, physical and mental coercion, and the investigation sources were kept unknown. This reveals clear violations to fair trial standards in cases of capital punishment.
An infographic showing the executions during the first half of 2020
Second: Confirming death sentences
The Court of Cassation upheld the death sentence of one person in a criminal case in January 2020. In March, it upheld death sentences on two defendants in another criminal case. It also confirmed the death sentences of 7 political defendants in case no. 8280/2014 Helwan felonies, known in the media as "Helwan Police Station". Therefore, the total number of death sentences upheld by the Court of Cassation in the first half of 2020 is 10, in 3 cases.
An infographic showing the status of death sentences upheld during the first half of 2020
Third: Death sentences
Egyptian Front monitored death sentences handed down to more than 171 defendants in 73 cases, 40 of whom are part of two political cases, Ansar Beit al-Maqdis and the Attempted Assassination of Alexandria's Security Chief. It is worth noting that the latter is an emergency state security case: once a verdict is issued by an Emergency State Security Court it cannot be appealed. Also, 131 defendants were sentenced to death in 71 criminal cases.
An Infographic showing death sentence status during the first half of 2020
Fourth: Referrals to the Grand Mufti
Egyptian courts referred more than 158 defendants in 59 cases to the Grand Mufti to get his advisory religious opinion on their death sentences. 40 of these defendants are part of political cases known as Ansar Beit al Maqdis and Attempted assassination of Alexandria's security chief.
An Infographic showing the status of Al Mufti referrals during the first half of 2020 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,691 |
Q: google map marker not showing why? I'm following the Google Maps API and added the following code, but the marker is not showing:
function initialize() {
var myLatlng = new google.maps.LatLng(27.7000, 85.3333);
var map_canvas = document.getElementById('map_canvas');
var map_options = {
center: new google.maps.LatLng(27.7000, 85.3333),
zoom: 12,
mapTypeId: google.maps.MapTypeId.ROADMAP
}
// To add the marker to the map, use the 'map' property
var marker = new google.maps.Marker({
position: myLatlng,
map: map,
title:"Hello World!"
});
// To add the marker to the map, call setMap();
marker.setMap(map);
var map = new google.maps.Map(map_canvas, map_options)
}
google.maps.event.addDomListener(window, 'load', initialize);
A: You're creating your map after adding the marker to it, so it is null at that point
var map = new google.maps.Map(map_canvas, map_options);
marker.setMap(map);
you might have some javascript errors so look at the console log
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,423 |
Q: How do I know when to use the perfect infinitive and simple infinitive? Most of the time they have the same meaning as in:
It was stupid of me to say anything on Twitter
It was stupid of me to have said anything on Twitter
Luis deserved to earn that promotion
Luis deserved to have earned that promotion
My question is: why is there a perfect infinitive, if it can easily be replaced by a simple infinitive?
A: Like the past perfect, this form is used to convey looking back from a particular time to an earlier time. Like the past perfect, it is usually optional if the temporal relationships are clear anyway.
There are cases where it is clearly significant eg
I don't want to tidy the garage, but I want to have tidied the garage.
In your examples, using it says that the speaker is choosing to locate themselves temporally at a later time (possibly the present) and ''look back'' at the event they are describing. The forms without have do not make that choice, and don't imply any particular viewpoint.
Idiomatically, I find the past infinitive reasonably likely in the first case - it suggests that the speaker is putting the whole thing behind them, and they have learnt better since. But the simpler form is fine.
In the second case, I can't think of a context in which the past infinitive is likely; but there may be such a context.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,209 |
_Written in loving memory of my mother and grandparents, and for my two sons, Marcus and Leon_
## Contents
Foreword by Sir Chris Bonington
Prologue
**Phase One: Following Footsteps**
ONE: | The Hills Are Calling
_Meall a' Bhuachaille – The Shepherd's Hill, April 2008_
TWO: | Coincidence or Fate
_Back to Bhuachaille, May 2008_ / _Bynack More – The Big Cap, May 2008_
THREE: | Doomed Champagne and Mountain Magic
_Beinn Eighe – The File Hill, June 2008_
FOUR: | Cheating Myself
_Ben Wyvis – The Hill of Terror, July 2008_
FIVE: | Becoming a Woman with a Plan
_Beinn Alligin – The Jewelled Hill, August 2008_
SIX: | Divergent Paths
_Meall Fuar-mhonaidh – The Cold Rounded Hill, February 2010_
**Phase Two: Troubled Tracks**
SEVEN: | Keep Them Close
_Nakara, Tanzania, June 2010_
EIGHT: | Where the Wind Blows
_Naro Moru Gate to Simba Camp, 2,650 metre/Simba Camp to Kikelewa, 3,678 metres, June 2010_
NINE: | Where There's a Will There's a Way
_Kikelewa to Mawenzi Tarn, 4,295 metres/Mawenzi to Kibo, 4,700 metres, June 2010_
TEN: | Hell on Earth
_Kibo to Uhuru Peak, 5,895 metres, June 2010_
ELEVEN: | Dead Loss
_Horombo Huts, 3,720 metres, June 2010_
TWELVE: | Peaks and Troughs
_The North Glen Shiel Ridge, June 2011_
THIRTEEN: | A Hatch and Despatch
_Bidein a' Choire Sheasgaich and Lurg Mhor, October 2011_
FOURTEEN: | Slippery Slopes
_Near Fersit, January 2012_
**Phase Three: Steps in the Sunshine**
FIFTEEN: | One Thing Leads to Something Else
SIXTEEN: | Protecting Next of Kin
SEVENTEEN: | History Repeats Itself
_Fisherfield, July 2013_
EIGHTEEN: | Dark Horse
_The Inaccessible Pinnacle, September 2013_
NINETEEN: | Walking on Air
_Aonach Mor and Aonach Beag – the Big Ridge and the Little Ridge, April 2014_
TWENTY: | Early Illness
_Kathmandu, Lukla and on to Monjo, 2–4 May 2014_
TWENTY-ONE: | Onwards and Upwards
_To Namche, Thyangboche, Dingboche, 5–7 May 2014_
TWENTY-TWO: | Deliverance
_Chukhung, 9–10 May 2014_
EPILOGUE: | Homeward Bound
_Ben Nevis – The Venomous Mountain, July 2015_
Postscript
Acknowledgements
Index
## Foreword
My own love of mountains started in the summer of 1951, at the age of sixteen, when I climbed a hill in Blackrock, a suburb of Dublin. It was certainly no mountain but it sparked a passion that led me to devote my life to climbing all over the world. I have faced the most forbidding mountains on earth and have always relished the challenge, even climbing the Old Man of Hoy, the tallest sea stack in the British Isles, to mark my eightieth birthday.
It is an experience that remains exhilarating no matter where we are in the world, or what stage of life we are in: the physical challenges of endurance; the thrill of the risks taken; the elation of reaching the summit; the joy of immersion in the rugged scenery, all of your senses in tune with the landscape you're walking through.
The mountains are also a place to seek solace. There have been many times in my life when I have found peace in such solitary and unforgiving surrounds. In times of trouble and grief, walking has seen me through.
It is for these reasons that I have so enjoyed reading of Sarah Jane Douglas's experiences, which have inspired her to find strength in the face of life's challenges. There is something universal at the heart of this book – something we can all understand, not just those of us who have grown to love the mountains. That immersing ourselves in wild landscapes can heal, motivate and inspire us is something that is beyond doubt.
Sarah's story shows that this is open to everyone; anyone can decide to go out and just start walking. To pit oneself against a summit – even a small one, in a suburb of Dublin – can be the beginning of a lifetime of adventure and discovery. I hope that this book will inspire others to do the same.
_Sir Chris Bonington_
2019
## Prologue
Loads of people get horrible diagnoses all the time, so really it isn't anything special or extraordinary that I found myself with membership to the cancer club. To be honest I'd been expecting it, but the news still came as a swift kick to the balls. The hardest thing to get my head around was the fact that twenty years earlier I'd held my own mum's hand when breast cancer stole her life from mine. It had taken me most of my adulthood to recover from her loss.
I was twenty-four when my mum died, and it felt far too young. I wasn't ready for it – in my mind I was still a child, her child, and I needed her. But she was gone for ever. Lost without her, I spent years lurching from one distraction to the next: drinking too much, dabbling with drugs, loveless sex with too many men, motherhood. I got into trouble with the police. I wound up in a volatile marriage. Without her support, and with the subsequent deaths of my grandparents, it seemed there was no one who cared. I had my two sons, but sometimes it felt like a struggle just to keep breathing: I was at odds with the world and everything in it.
But I'd made a promise to Mum that I wouldn't give up, and the hope within, which at times seemed to have died, somehow kept flickering.
I remembered – and turned to – a world I'd once loved, a world right on my doorstep: mountains.
I'd grown up in the Scottish Highlands, so mountains had always been a big part of my life. Mum and I would often walk together, and many of my favourite memories of her are from those times. After her death, I continued to go on my own for long walks on the beach and along the river – it helped me to feel closer to her. But it was when my life started to spiral out of control that I really started to discover a passion for the outdoors. At first I started setting out for places wilder and further afield, but I soon realised I needed more of an outlet, time to escape, and eventually I sought out high tops. Proximity to nature was soothing; I felt at peace and perfectly secure in the rugged environment. The more I ventured out, the more I wanted to do and the higher I wanted to go.
I didn't know it at first, but hillwalking would be the key to turning things around. As soon as I find myself on top of a mountain I am filled with the joy of life, even more so if the summit has been hard won through tricky terrain or challenging weather. Climbing all of Scotland's highest peaks, pitting myself against nature, forced me to face up to my troubles. It reconnected me to my mum and, in getting to the marrow of my experiences, it helped me move past grief. And eventually it would help me deal with cancer. Faced with my diagnosis, there was only one thing I could do, the thing I'd come to rely on so much these last few years. I had to put one foot in front of the other and just keep walking.
## PHASE ONE
## FOLLOWING FOOTSTEPS
'We are no other than a moving row
Of Magic Shadow-shapes that come and go
Round with the Sun-illumined Lantern held
In Midnight by the Master of the Show'
_The_ Rubaiyat _of Omar Khayyam, LXVIII_
### CHAPTER ONE
### The Hills Are Calling
_Meall a' Bhuachaille — The Shepherd's Hill, April 2008_
Snowflakes floated down from a heavy alabaster sky; beyond their dot-to-dot spaces Scots pines blurred in my vision. _Delicate frozen patterns_. I tried to catch flakes on my tongue, each unique in design and without permanency. _A temporary structure, like us humans_ , I thought.
I'd only been walking for ten minutes but my cheeks already felt flushed in the cold air. 'I can't believe there's so much snow!' I said out loud, looking down at my feet. They felt warm inside the brown Brashers: they had been Mum's boots but they belonged to me now. Mum had always said it wasn't good to wear other people's shoes, something about feet moulding into the insoles and the leather. If they'd been anybody else's I wouldn't be wearing them, but they were hers. I guess I wanted to be close to her in whatever way I could.
Mum had felt that walking was like a cure for duress. 'Let's get out and clear our heads,' she'd say. But equally, when life was good it had given her pleasure. Over the years we shared many walks, along cliffs and coasts, far from anyone else, but she had also liked to go out alone.
In her youth, during the 1960s, she had loved to walk barefoot whenever possible, often strolling the long, curving arm of the shingle beach near our home. Mum had been quite the hippy then, with long, dark hair that was always in a centre parting, and wearing the minimum of make-up. She made her own clothes: floaty ankle-length skirts, dresses with flower prints and blue-denim bell-bottoms teamed with vest tops. She was super-cool.
Mum hitchhiked a lot in those days too – she once went all the way from the Highlands to Roslin, near Edinburgh, 276 kilometres away. Turning up at a friend's house, she was promptly given a pair of socks and shoes. I wished I could have known her then.
Out front of my childhood home, just over the road and beyond the football pitch, was a gorse-covered raised beach – a small but steep hill known as Cromal Mount. It was Mum who first led me to its top. Each time we scaled 'Crumble Hill', as I called it, was an adventure; I would dart off ahead to find a suitable hiding place to jump out on Mum, brushing up through prickly gorse on threads of sandy trail, and both of us would arrive at the top panting from the effort. Just me and her and the world at our feet; I loved that feeling of being separated from everything below. We were up high and free like the birds in the sky.
In later years, when we'd moved to nearby Nairn, Mum and I would share walks along the town's sandy white beaches, or pass by farmers' freshly cut fields next to the river path. We would be as startled by the heron as it was by us; we'd disturbed its search for rodents and off it would fly. More often we saw the large wader standing in its one-legged pose on a rock in the river, its grey, black and white feathered body half hidden by the overhanging branches and leaves of trees. The heron was always on its own, never in a pair. And the moment it saw us – even if we were some distance away – with one beat of its slender, long wings it would take flight, so gracefully. It seemed a reclusive creature, wanting to be left alone; there had been times when Mum was like that. Out along the riverbank, grey wagtails and dippers would fly between rocks poking out of the water, and chaffinches fluttered from tree to tree. My mum loved to identify what birds she could as we went, but mostly I enjoyed our walks because they lent us quiet uninterrupted time together. I was always looking to find a way in, to be closer to her. I often wished she could talk to me in the same way I could talk to her. Sometimes it felt that while I told her everything, she was holding back. But in spite of that, to me she was more than just a mum. She was my very best friend, the person I knew I could always turn to.
Now, just as she had done, I would go out walking whatever my mood – and those moods were sinking ever lower. While I was still grieving for my mother, my marriage of two years had already started to run into difficulties, causing all the other smaller problems in life to take on horrendous proportions. As my troubles piled up, I found myself being pulled in the direction of the outdoors more and more. But it no longer seemed enough – I needed something more challenging than the low-level walking I'd been used to. I needed to take off to the mountains.
I'd always thought of hillwalking and climbing as predominantly male activities. It seemed to me that men tended to resort to strenuous physical exercise like this to work out their problems, while it was seen as more natural for women to turn to a close relative or friend for support. But I had neither. Instead, I found myself yearning to be out in the elements and wilderness. Taking refuge in solitude. Gaining perspective. So here I was in April 2008, on a steep, snowy but accessible mountain near Aviemore in the Cairngorms – running to the hills, and away from the troubles of my life.
As I contoured underneath Meall a' Bhuachaille and carried on along the valley over snow-covered boardwalks, I stopped to admire Lochan Uaine. The surface water of the small lake had frozen into iridescent, concentric rings ranging from white to deep hues of pewter where the ice was thinnest – it was like a scene from a wintry fairy tale. Thin shoots growing out thickly from the severed trunk of a tree made dark criss-cross outlines, as though the tree was finding its way back to the spring of its life.
I was an accident, born in 1972 in the Highland capital, Inverness. For the first ten years of my life home was with my mum and grandparents in the small former fishing village of Ardersier on the Moray Firth, eleven miles east of the city. We lived in a grand Edwardian red-sandstone house called Inchrye which was surrounded by mountains, and it was almost like living in a bowl: all around us we had panoramic hill views, with the great bulk of Ben Wyvis rising up above coastal cliffs on the Black Isle, the low-lying Clava Hills obscuring the great Cairngorm mountain range behind, the Great Glen headed by Meall Fuar-mhonaidh, a prominent dome-shaped hill and – the one I was most intrigued by – the wintry views across the water to the cluster of shapely Strathfarrar peaks in Ross-shire.
Grandad had been furious with Mum for falling pregnant, and if Gran had got her way I wouldn't have been born at all. But my mum kept her baby, and when I arrived into the world, kicking and screaming, my grandparents couldn't have been more supportive or loving towards her, and me. My natural father was already married to someone else, and so my mother brought me up with the help of her parents. We all lived in the family home along with my mother's teenage brother and sister, David and Penny. As both my grandad and my mum went out to work, my early memories are of Gran looking after me. She was warm and caring, and naturally I developed a strong attachment to her.
It was in 1974 that the lure of adventure stirred an inherent inquisitive impulse in me at the tender age of two. Inchrye was split over three levels, and when Gran was tied up with household chores I loved snooping around its many rooms, hiding among the fur coats in her wardrobe, sneaking up on my young aunty and getting up to unintentional mischief. But, left largely to my own devices, I would often play in the attic, where metal cases and trunks, covered in layers of dust, piqued my curiosity. Inside were all manner of curios: musty-smelling old clothes, random jigsaw pieces, Dinky cars and strange, rubbery chess moulds. One day, sixteen-year-old Uncle David didn't know I was there, but I heard him clattering up the spiral staircase. I spied on him as he rearranged some furniture and clambered over it. I heard the window open and watched as his body, legs and feet disappeared. I was fascinated. It wasn't long before he then reappeared through the small opening, closed the window by its long metal latch, jumped down over the furniture and clattered back down the stairs. Uncle David had been out on the roof. I didn't know what it had all been about, but he had inspired me to give it a go myself a few days later.
Poor Gran. She was outside, hanging washing on the line, when the coal-man drew up in his lorry and parked in the lane alongside the tall, pebbled garden wall. As he stood on the back of his lorry, almost ready to heave the sack of coal onto his back, he spotted me.
'There's a bairn on the roof!'
'What's that you're saying?'
'There's a bairn on the roof!' repeated the coal-man, pointing.
'Yes, it's a lovely day.'
' _No! There's a bairn on the roof!_ ' said the coal-man, stabbing his finger skywards in my direction.
My gran, born in Liverpool, hadn't understood either the Scottish slang or the broad accent, but she soon caught his drift when she followed his pointed digit. Dropping everything, she rushed in to get my grandad. He grabbed extending ladders from the shed, propped them against the wall of the kitchen extension, climbed onto its roof and then onto the pitched roof of the building. Sitting on warm slates in the winter sunshine I was quite content, captivated by the distant snow-topped mountains beyond the waters of the inner Moray Firth. Quiet as a mouse and unperturbed – but definitely stuck – I waited to be carried down, pinned against my grandad's chest by the firm clutch of his strong arm. And all I could think was how much I wanted to go to those snowy mountains.
I was certainly finding out all about snowy mountains now. Blizzards and whiteout conditions engulfed me as I shouldered my way towards the top of Meall a' Bhuachaille. All the footprints I'd been following were quickly obliterated by fresh snow, but I wasn't concerned; there was only one way to go and that was up. _Holy fuck it's wild, but it's great!_ I thought, as I grinned and gave out a howl – there wasn't a soul in sight, and even if there had been it was impossible to be heard over the bellowing wind.
As I reached the cairn on the 810m peak I yelled out again as I spun about in celebration: my first hillwalk, all on my own.
I took off my rucksack to get a drink but the zip had frozen and my efforts to yank it open were in vain. Now I was also feeling cold. Snow whirled in frenzy around me. I looked at my surroundings, hastily trying to gather my bearings while realising that it had only taken that brief moment of spinning to become disorientated. I was at the top, so I simply had to go down, but which way?
I turned my back to the buffeting wind. There was nothing to see. I was enveloped by an impenetrable wall of white. I cursed myself for not having a map or compass, but then what good would either be when I didn't really know how to use them? I cursed myself again. Fanny.
I had climbed Meall a' Bhuachaille once before; I'd done it the previous summer with my stepdad Frank and was sure I remembered the route – however, my present situation seemed anything but simple. As my mind reached into the past, desperately trying to clutch at any tiny fragment of conversation we might have had about the way off this mountain, I thought of Frank with his new family: he would be with them, safe, warm and tucked up indoors, but by fuck I wished he was here instead.
_I think I have to go straight over this summit, down the other side, and just make sure I keep left_. I vaguely remembered wide, open moorland that lay to the right of the hill and didn't want to end up wandering around lost down there either. Turning back round, I went to pick up my bag, which I'd left by the cairn, and promptly sank deep into the snow. My foot had stepped into nothing, and for a split-second I thought I was falling through a crevasse and was going to die. Waist deep, I hit solid ground. I freed myself by sprawling across the snow, rolling, sliding and then scrambling, laughing with embarrassment at my own stupidity. Somewhere in the clouds a helicopter flew close by. 'Maybe it's for me,' I joked. 'Right, come on, blockhead. Let's get below the treeline and you'll be fine then.'
My favoured method of descent was by sliding on my backside, and I was glad to have decent waterproofs. But overall, I was not best prepared; my inexperience of mountains and lack of any sense of fear were childlike. The real risks of winter walking hadn't entered into my head.
When there were no clear spaces to slide, I walked. The aromatic scent of pines filled my lungs and memory on that crisp afternoon, and aside from being stabbed by twigs and needles from low branches as I brushed a way downwards, I felt true contentment: I didn't have to think about trying to feel happy, I just was. Busy with the task at hand, there was no room in my mind to dwell on the sorrows or anxieties that otherwise overshadowed my life. I was free, living in the moment. Maybe that had been the lure of long walks for my mum.
From a high stance on the steep, wooded hillside I saw what I thought was a track a couple of hundred metres below – there was a bright-yellow shape on it that looked human. I felt like a lost adventurer who had just discovered a way out of the most hazardous and wild environment known to man, and I wanted to catch up with that person – or people.
It took longer than anticipated, but after a last hurdle – squelching through some sticky black bog – I reached the path. The bright yellow was a jacket worn by one of a trio, all of whom were studiously looking at what I thought was a map, but when I was almost upon them I realised it was a GPS. I didn't know anything much about GPS units then, but it didn't matter. All I cared about was finding out where I was – and the thought of getting back to my car was hugely appealing. My feet were ready to be released from Mum's Brashers, my toes felt fiery and desperate to be relieved from their boot-prison and rubbed.
I moved forward to say hello to the small group. With brief greetings over, it transpired that my saviours were as happy to have met me because they were hoping that _I_ could help _them_. They didn't have a clue how to work the GPS. So we all walked back and forth and around and around together. I was beginning to wonder if we'd ever find our way back when, for the second time, I heard the unmistakable sound of air slapping off rotor blades. A yellow Sea King hovered overhead, preparing to land in a nearby field. I dashed up a small, steep bank to see it and was surprised by the unexpected sight of the sprawling outdoor-activity centre, Glenmore Lodge. I squealed with delight; my car was parked just beyond the building.
Boots and waterproofs off, I was on the road – wanting and not wanting to go home. The day hadn't gone entirely to plan; weather conditions in combination with my lack of experience and equipment could have spelt disaster. I had, however, enjoyed myself to the full, battling the elements but 'living to tell the tale' of my first autonomous venture into the hills. It had been invigorating, and I had felt alive in a way that I had not done for so many years.
Eleven years actually, ever since Mum died.
Back at home, I thought of her as I placed her boots by the hearth to dry. The brown Brashers had looked after me and kept my feet warm at the start of the day, but now they felt heavy and smelt of leathery dampness. I would definitely need my own pair at some point. Still, those sodden boots had connected me to her out on the hillside, had helped me feel closer to her. How I wished right then that I could tell her about the day I'd spent. She'd have shaken her head disapprovingly but would have said she was glad I'd had a good time and that I'd probably learnt a valuable lesson. When she said these sorts of things I usually hadn't learnt much at all and would go on to make similar mistakes but in a different way. Mum had always erred on the side of caution, preparing well for most things, preferring to reduce the element of risk. We both had to learn that sometimes life has other plans.
### CHAPTER TWO
### Coincidence or Fate
_Back to Bhuachaille, May 2008/
Bynack More — The Big Cap, May 2008_
Driving back from Meall a' Bhuachaille that day, as the miles between me and home decreased the familiar feeling of dread crept in. On the hill it had been just me pitted against nature and the rugged terrain. But now I had to return to my real life: back to supply teaching, while waiting to begin my probation year; back to horrible neighbours and a leaking roof; back to the troubles of my marriage. I was already yearning to feel that sense of escape again, and so not long after my snowy adventure I returned to Meall a' Bhuachaille with my husband, Sam. I hoped he would feel a connection to nature, as I had, and share that sense of life's worries being put firmly into perspective. I hoped it would become something we could do together, a new common ground, reaching for boots instead of a bottle.
I'd met Sam in 2004. It was seven years after Mum's death and now, with two small boys at my heels, I was working weekend shifts at a pub, and I hadn't been there long before the pub's leasehold was taken over by Sam. At forty-five he was older than me by thirteen years, but he didn't look it. I liked to watch him as he stood at the jukebox. He was tall, had a strong jawline, jet-black hair and cool sideburns. Always smartly dressed in suits and with a broad Glaswegian accent, he had the look and sound of a gangster; he had edge – I liked that. And he was what my grandad would have called a man's man, old school when it came to paying for dinner or walking on the outside of the pavement. At first he made me feel like I was the centre of his world.
We became more deeply involved when his marriage failed, and when my ninety-one-year-old grandfather's health was declining quickly. Neither of us was in a good place emotionally, but, as fools do, and ignoring the sage advice my grandad had offered back in my student days, I rushed headlong into the new romance.
Sam had a huge capacity for kindness and was great fun, but when he was fuelled by alcohol he was wild – we both were. I was fascinated by his darker side. I wanted to unearth its roots and understand him . . . or maybe I thought he was just like me, lost and lonely . . . maybe I thought we could save each other.
Drink was always involved when we first started seeing each other, and I was too busy being caught up in the excitement of our flirtations to think too much about how wrecked either of us would get. At the end of our evenings, when I would leave Sam and go home, he came across all hurt and dejected – as if he couldn't bear us to be parted. Nobody had ever looked at me in such a way before.
After I gradually introduced him to my sons, Marcus and Leon, Sam moved in with me. For a while I felt he gave me stability. He treated my erratic moods with kindness and a hug, and took care of me when I was low. And he was understanding when dementia tightened its grip on my grandfather, and I wanted to spend more time with him and give some extra help.
During his army career my grandfather had been well respected, with a reputation of being firm but fair; and it's true to say this was the case at home too. He'd once been a good footballer and won loads of medals; he was an even better tennis player, but more surprisingly he was an excellent Highland dancer. He had never been one to boast and he'd never been one to take his work home with him. But now, both in body and mind, dementia had reduced him to a shadow of his former self.
I remember first noticing the signs when I saw him greeted warmly by an old colleague who seemed to know him very well, and who spoke with him for several minutes.
'Who was that?' I asked, after the man walked away.
'I haven't a bloody clue, dear,' came the answer.
Sam never knew the man my grandad had once been, but I was glad they had met. Sam even asked him for my hand in marriage. Poor Grandad, he was confused. He mistakenly thought that Sam was an officer and asked what regiment he was attached to before shaking his hand warmly and saying, 'I'd be honoured to have you marry my daughter.' I doubt my grandad would have been so approving if he'd known the troubles and arguments that went on between us, even then.
When we drank, Sam was always first to tip over the edge. If I had been my younger self I would have carried on drinking too, descending with him into a well of depression and self-pity. I understood it was the drink talking when Sam's insecurities roared to the surface, but my best efforts to placate and reassure him were wasted.
As time went on, I kept making excuses to myself and often recited internal lists as to why we should stay together. At the top was always the hope that he would change. Things hadn't worked out with either of my children's fathers and I didn't want to fail at another relationship. My kids liked him and Sam was fond of them too. I was sure that having a constant father figure in their lives could only be a positive thing for them, and their happiness was paramount. In short, I convinced myself that I needed him. And, of course, anything was better than being alone, so I clung on. Despite the upsets and fights, I'd attempted so many times to reconcile the feelings I had about my marriage: and now the hills were a last resort for its rescue.
To avoid repeating my previous walk exactly, my plan was to do the Meall a' Bhuachaille circuit in reverse, hiking over the smaller tops Craigowrie and Creagan Gorm first. It would make for a longer and more satisfying day out, and there would be less chance of meeting anybody else on this route. It would be just us and the elements, exactly how I liked it. But this time I made sure I checked the weather forecast – and that I had a map and compass, even though I wasn't a hundred per cent sure I knew what I was doing with either. Of course there was also the not so small matter of childcare to sort out. Finding someone to look after the boys wasn't always the easiest thing to arrange. Without having my own mum to turn to, I asked my children's paternal grandparents for help. Though they had contact with their grandchildren there was no real relationship between me and them, so I'd felt awkward for asking, as though I was taking advantage when I knew I'd be gone for the best part of a whole day. What sweet relief it was when they'd said yes to my request.
Although the hills we were going to do weren't especially high by comparison with nearby Cairngorm, they could still hold on to a good covering of snow even well into May. And since I'd only been here a week earlier, I fully expected to find the peaks still under their wintry blanket. I suggested repeatedly that Sam change his clothing, but he insisted jeans would be fine.
'Sam,' I said, 'I don't know much about hillwalking, but I do know that jeans are a terrible choice in wet weather.'
'How's that then?' he asked.
'They hold water.'
'Yeah, but snow's different, it's drier than rain.'
We hadn't even reached the top of the first hill before Sam's clothes were saturated. Not only had he become sodden by trudging through almost knee-deep crunchy snow, but he also fell into a muddy and watery hole. At work two nights earlier he had accidentally chopped off the end of his index finger in the safe door when it had swung shut. Preserving the tip of his finger in ice, he took it to hospital to have it stitched back on. And so the vision of his white, massively bandaged finger pointing skywards in _Saturday Night Fever_ fashion was hilarious as he struggled to free himself. Weak with laughter, I helped extract him from the muddy breach but, man, was he annoyed.
Leaving Sam and his bad mood trailing behind, I arrived at the first peak on the ridge and perched myself on a rock. Surprised to discover that Sam and I weren't the only ones who had thought to come this way, I introduced myself to two men who were sitting against the summit cairn.
'I was here last week,' I said. 'I climbed up Meall a' Bhuachaille in a whiteout and got totally disorientated. I'm glad the weather's better today!'
'You wouldn't happen to know Dawn Main by any chance?' asked the older of the two men.
'Yes! I would!' I said. 'I know her from school. How do you know her?'
'She's a colleague of ours. So you'll be the mad pal she was telling us about who got lost in the snow,' teased the other guy, Ollie, whose baggy trousers and black jacket disguised a tall, heavy-set frame.
We were laughing and marvelling at the coincidence when Sam reached us; I could tell by his face that all he wanted to do was turn round and go back to the car, so I quickly introduced him to the two men and explained the connection. Sam appeared more at ease on the hill with the distraction of male company and so the four of us ended up carrying on along the ridge together, but before a final climb up to the top of Meall a' Bhuachaille Sam called it a day.
'Sarah! If you need a hill buddy, get Dawn to let me know. I'd be happy to join you,' Ollie called back. I smiled and waved as we travelled in our different directions. Neither of us could know it then but, after that chance encounter, Ollie's part in my hillwalking career would turn out to be significant.
Sam and I navigated our way back down, along tracks and fire breaks through Queen's Forest to the road and then finally back to our parked car, much to Sam's relief.
'Did you enjoy our day?' I asked him, as we drank a post-hill dram at home.
'Hmm. I couldn't be arsed doing that every weekend. Too boring just walking up a hill and then back down again,' he answered.
Ten minutes later he was flaked out on the sofa. I looked at his peaceful face and was disappointed that he hadn't felt the same sense of wonderment and freedom that I felt on the mountains. But my plan had partly worked, because at least he wasn't getting drunk – there would be no shit tonight.
To get ideas for walks I bought a popular book on Scotland's highest mountains. Inside its covers were route descriptions, grid references for start points and estimated times for how long each walk should take. A world of possibilities for new adventures – for escaping my troubles – literally opened up for me with the turn of each page. There were coloured drawings showing relief, rivers, summits, and a broken red line with little arrowheads even indicated direction of travel. Perfect for a birdbrain like me. As I flipped through the book I came across what I decided would be the next walk, Bynack More in the Cairngorm range. The route looked easy enough, and its start point at Glenmore Lodge and the first section up this mountain were now familiar to me.
It was the week after our first outing together and a scorcher of a day. The sky was a brilliant blue with just a few scattered white clouds. The journey across Dava Moor towards the Cairngorms was quiet; Sam had taken some convincing to come out again so soon.
He was silent as we set out walking, no doubt brooding about having to spend a whole day in the outdoors, but I still hoped there was a chance he'd fall in love with the mountains. 'Come and see Lochan Uaine,' I enthused. 'It's like an amazing little secret tucked away from the masses, so perfectly unspoilt.' Eventually he agreed, but he seemed to be regretting it already.
We took a brief detour from our path to Ryvoan bothy, at the head of the pass. Once an important drovers' road, it linked other tracks stretching off to the north, or heading east to the coast. Its interior was dingy, with only a dull light cast through the small, cobwebbed window, but its sturdy stone walls conveyed a sense of history. I imagined times when shepherds would have taken grateful shelter here. However, in the mild weather it smelt like wet dog.
Sam was unimpressed. I was going to have my work cut out trying to entertain him. I wished I hadn't been so insistent that he join me. I felt resentful that the effort involved in trying to please him was going to spoil my own sense of enjoyment. This wasn't how it was meant to be.
Returning to a junction in the track, we branched off in an easterly direction. Before crossing a wooden bridge over the River Nethy we came to a long section of trail comprising loose stones, varying in size and stability. Sam complained that the rough terrain was causing pain to flare up in his knee, but he soldiered on.
It seemed a long walk in the heat as we followed the footpath south-east over the lower shoulder of Bynack More. I decided to distract Sam by walking ahead with my butt exposed. It had the desired effect. He roared with laughter at the sight of my wobbling white posterior which, with a rapid yank of my trousers, was quickly concealed just before the first people we encountered saw more than they had bargained for. The old boy looked like an ageing Crocodile Dundee in his Australian cork hat, beating the trail ahead of his wife.
'You're the first people we've seen in five days,' she said. 'We've been wild-camping. We've come from Loch Avon and I'm so looking forward to reaching Glenmore Lodge and civilisation!'
'I hope I'm doing stuff like that when I'm their age,' I said to Sam, once they were gone and out of earshot.
'Rock on, Granny,' he laughed, and I did too. But deep down inside, while I did have hopes that I would be adventuring into my twilight years, I didn't think I'd be doing so with Sam.
As we continued to ascend Bynack More the winds picked up, blowing across austere and barren windswept plains of muted browns and purples. Taking shelter, we stopped to have lunch behind a huge hunk of granite, where a friendly dog appeared and tried to join us for a piece of quiche. ' _Fuck off!_ ' I said through my teeth while directing my smile towards its approaching owners.
Sam petted the animal. 'Aww. She's only cranky 'cause she's hungry. She disnae want your big doggy slavers dripping ower her lunch when you were probably licking your baws ten minutes ago.' He could be affectionate and funny, and I wished that was how things were between us all the time. Mum had said you only got to really know a person by living with them for a couple of years. After our four years together, I could now say I really knew Sam.
When we reached the 1,090m summit of Bynack More the wind had strengthened; snot now streamed from my nostrils like horizontally blown, translucent windsocks. I loved the ferocity and felt an affiliation with the wildness: it was like a physical manifestation of what raged through my being.
Scrambling up over the higgledy-piggledy maze of giant granite tors, I soaked in my surroundings. From here, the north-eastern fringe of the Cairngorms, there was nothing to interrupt the views over Moray. Meall a' Bhuachaille and its pine-covered lower slopes, which teemed with all kinds of insect and bird life, seemed left far behind. It was a different world up here of rough weathered bedrock: the height and exposure gave it an Arctic climate. It was truly wild and empty, and I could imagine how inhospitable a place winter would make it. As I turned round, the mountain seemed much smaller, dominated suddenly by the massive shoulders of Cairn Gorm, its magnificent corries and big, rounded plateaux beyond. The wind caused a milky haze to lightly veil the views, and Sam created his own fog by puffing on a cigarette as he lay slumped against one of the tors.
Neither marred my enjoyment. But I didn't experience that same sense of freedom I'd felt walking on my own or with my mother: I hadn't been able to fully lose myself in the day because I'd been preoccupied with how to keep Sam happy – as I so often was, I realised. I had married him because I was looking for the security I no longer had from having my own family. But being out on the mountains with him, unable to share the exhilaration and joy of the experience, was helping me gain some clarity on my relationship. My attempt at finding some common ground for us to connect, to save our marriage, was failing. I was going to have to accept that my new-found passion wasn't ever going to appeal to him. For me our walk on Bynack More marked both the beginning of an enduring love for the mountains and a long and drawn-out end to my marriage to Sam.
We returned to Glenmore by retracing our steps – painful ones for Sam when his footwear came apart. At first the sole of his boot flapped open, just at the toes, but it wasn't long before the whole lot gave up and gave way.
'That's the last hill am ever climbin', Sayree. Never again.'
In truth, our relationship had been destined for failure from the beginning.
With hindsight, I knew the warning signs had been there all along. But at the time I didn't recognise them and so, a year and a half after my grandfather died, in 2006, Sam and I married. I'd wanted our wedding to take place in the chapel at Fort George where my grandfather, a major in the Queen's Own Cameron Highlanders, had been quartermaster during the 1960s.
The Fort was at the furthest point north-west of the village, within sight of my childhood home. It had been built in the eighteenth century, after the Jacobite rising of 1745, on a level spit of land jutting out into the Moray Firth, guarding it at its narrowest point. It is an impressive place. The ramparts, more than a kilometre in length, enclose an area the size of five football pitches. Pristine green grassy slopes lead up to projecting bastions and redoubts. Every wall is covered with guns and big black cannons remain fixed in place, pointing out across the water. The barracks are still used by the military, but much of the rest of the site is run by Historic Scotland and is open to the public.
I loved the Fort. It was steeped in family history. In the museum my grandad's medals and regimental claymore are part of the exhibit. I'd been christened there; my name is framed in the Cradle Roll on the chapel wall. Mum had once worked behind its great sandstone walls too, at first in the dental clinic and then as verger of the church (not a virgin of the church as I'd mistakenly told people). It was where Mum had been meant to get married to her first love – and was where she did eventually marry Frank.
I had warm memories of trips to the Fort as well. When I was small Mum would load me onto the back of her bike and off we'd go, her legs doing all the pedalling while I enjoyed the wind in my hair, my little fingers sticky with a melting mini-milk lolly. There were other days when, unnoticed by Gran, I'd steal out of the back garden and range along the squidgy, waterlogged sands of the beach with the intention of visiting Mum at her work. She and I had walked this way a million times when the tide was out, around the base of the Fort's sloping perimeter walls, me searching for jellyfish and squashing flat the wiggly worm casts while she sang a song to herself.
It had been an important place for me and for Mum, and I had my heart set on getting married there. Because of my grandfather's old role as quartermaster, special permission was granted and my wedding ceremony went ahead. We had our wedding photographs taken on the battlements, where as a child I'd run around wild, loose and carefree. But with each picture Sam was becoming more impatient. He wanted to get to the party.
The reception was held at the Gun Lodge Hotel, back in the village. Also built in the eighteenth century, it had been home and stables for the high-ranking officers at the Fort. It became a hospital for a short while, but after falling into disrepair during the 1960s it was bought up and reopened as a hotel. Mum had worked there too, behind the bar; she'd fetch me home salt and vinegar crisps and a lemonade, and tell me snippets of silly things that some of the customers said or did. Sometimes men who had a fancy for her would give her presents to give to me; once some guy bought a whole tray of plastic gold and silver rings with coloured gems, setting my little-girl eyeballs spiralling in their sockets. But, most of all, I loved hearing Mum's story about Georgina the ghost who haunted the pub, and I would make her repeat that same tale over and over again. I remembered all these things as I cut the wedding cake with Sam. We danced, we sang, and we drank till it was time to leave.
Inchrye, our old family home, was only a stone's throw away from the pub. The house had changed hands several times since my grandad had sold it back in 1984, and now it was being run as a B&B. This was where Sam and I would spend our first night as a married couple. 'You're staying at Inchrye? That's a bit macabre, isn't it?' Mum's older brother Jimmy said as we tripped out into the chilly car park. His words wounded me. These places – the Fort, the Gun Lodge, the house – connected me to my mother and grandparents. Why was it wrong to want to keep those fond memories alive?
The owners of Inchrye, learning that I had grown up there, gave us a tour of the house the following day. I disguised my broken heart through smiles and nods of approval at the structural changes. Every room had been altered; the old bedroom I'd shared with Mum, where she'd play her Leonard Cohen and Donovan records, was unrecognisable. It wasn't the home I remembered. My hand glided down the banister and memories filled my head as I walked down the last flight of stairs: I could see my grandad, there at the foot of the staircase, and smell the paraffin from the heater he'd just lit to take the chill off the cold midwinter air. And then a memory of Gran, smiling at me as she opened the door of the dining room to show me the incredible feast of jellies, cakes and sandwiches she had prepared for my birthday tea, then saying, 'Well, there it is. You can like it or lump it.' I smelt cigarette smoke intermingled with the aroma of whisky, and heard the rapid chattering and laughter of my grandparents as they entertained guests in 'the posh front room', my mother asking what I was doing up so late but letting me sit up and join in the fun for a while.
I'd spent countless happy hours playing in the garden on the swing my grandad built, and roaming around after him as he pushed the lawnmower and hoed the soil. I'd pester him for fruit off the trees and he'd always choose me the juiciest plum. But when Sam and I were led outside I found it hard not to cry. The beautiful borders, host to the glorious colours of summer flowers and plants, the lawns, greenhouse, fruit and vegetable patches – everything was gone, replaced by an acre of chipped stones, two wooden sheds, a caravan and an old red telephone box. Every fibre of my body groaned. I wanted everything to go back to how it used to be and, as I ached for my family, I thought my uncle had been right in a way: it had been a mistake to stay at the old house. Deep down, even then, I think I knew it had also been a mistake to get married.
### CHAPTER THREE
### Doomed Champagne and Mountain Magic
_Beinn Eighe — The File Hill, June 2008_
'How do you know where you're going when you're in a car?' asked Frank.
'I read road signs,' I replied.
'Yes, but when you get in your car, how do you know how to get there?'
'Because I never go anywhere that I don't know how to get to . . . Ohhhh, do you mean have I a road map?'
'Yes.'
'Nah, I don't have a road map.'
Frank drove for a further ten minutes. 'Well,' he said, breaking the silence, 'here's the thing now . . . we've taken the wrong road.'
I groaned inwardly. An extra 35 miles had been added to the journey to Torridon and the mystery mountain where Frank was taking us – me, his new wife Irina and her eleven-year-old son Roman. After that last disastrous walk with Sam I had wanted to get out again but didn't have anyone else I could think of to ask. Then I'd remembered Frank. It had been quite some time since we'd last had contact, but when I asked him if he would come hillwalking he was enthusiastic and said he had a place in mind where we could go. I was happy to leave the details to him – checking the weather, the route, what time we should set off – but I hadn't thought that he'd bring his new family along too; I was a bit disappointed. Not because I disliked them, but because conversation was virtually impossible. Irina's English was only marginally better than my Russian. Besides, I had been hoping it would just be Frank and me because I'd wanted to talk about, and remember, Mum.
Frank was the living connection to my mum. After she had died, we'd clung to each other for a while, bound together because we both loved her. Although in some ways he still felt like an anchor to my past, eventually, like two separate threads coiled around the same reel, we had started to unravel from one another and the distance had seemed to grow between us with each passing year.
Five months earlier, at the start of January, Frank had remarried. He'd had other love interests since Mum had died, but his getting hitched made everything different. It wasn't as though I had expected or wanted him to remain faithful to my mother's memory, but the child in me resented him for being happy and for moving on.
It was almost midday by the time we arrived at our destination. 'One of us needs to drive the car about another mile further on, to where we'll finish the walk,' Frank said, looking in my direction.
'I suppose that'll be me then,' I replied with resignation, but secretly glad that I wouldn't have to try to communicate with Irina and Roman.
'Good girl,' he laughed, then added, 'there should be a small parking area on your right-hand side, you won't miss it. We'll wait here.'
It took me half an hour to ditch the car and make it back to them, and then finally we were on our way. But within twenty minutes of walking we were already stopping for our first rest. Irina was tired, and I was thinking, _You've got to be kidding_ – we had just passed a copse of trees and were still on the flat!
The trail we were on was easy to follow as it wound gently upwards, steepening as it climbed towards a grassy-floored corrie, a glacially scooped-out hollow in the side of the mountain, and since the route was obvious and the weather dry and fine, I carried on ahead of the others even though I didn't have a map – if I was unsure of where I should be going I knew I could wait for them. Frank was in charge, which made me feel safe, and so, like the giddiness you feel after guzzling a glass of fizz, I was able to freely enjoy the exhilaration of simply being on the mountain. Although I would have preferred his company on this occasion, I enjoyed the solitude. I could hear only the panting of my own breath and each step of my boots as they connected to the ground. The sun was shining and there were no elements to battle. I was able to lose myself in thought instead of concerning myself with – as Frank called them – 'route logistics'.
Pausing briefly, I looked behind: my three walking companions were dots almost consumed by the vastness of wild country, and they didn't seem to be making much progress. I carried on. Ascending the steep corrie headwall was a formidable task – I felt like a spider, using both my feet and hands to grip and clutch at grassy tufts till I made it to the top, where I was confronted by a staggering and intimidating view. Catching my breath, I stood transfixed. My eyes met with the scariest-looking mountain I'd ever seen – a vision of breathtaking beauty, horror and impossible enormity all rolled into one. It was Liathach: a towering fortress with terraced cliffs of sandstone topped with quartzite, otherworldly in appearance and so utterly extraordinary I couldn't quite take in what I was looking at. It was both real and not real. _People have climbed that?_ Of course I knew people had, I just couldn't comprehend how when it looked so completely impregnable (yet I too would eventually scale its heights). I stared at that mountain opposite for twenty minutes before Frank appeared over the crest of the headwall.
'At last!' I exclaimed. He just laughed.
Mum and Frank had often gone off on short hillwalks or coastal routes, and on occasion I'd go with them. We got used to walking together efficiently; I hadn't expected to have to wait around for the others to catch up today.
'It was never like this when we were walking with Mum; at least we all kept the same pace,' I said. 'Remember that walk we did at Kilmuir on the Black Isle?'
'Oh yes,' he said with delight. 'Right . . . don't panic, but run! Those cows were seriously chasing us through that field. Didn't we have to jump over a fence or something?' Frank said, now laughing hard. The three of us had enjoyed some lovely walks there; goats roamed the coastal cliffs, gulps of cormorants clustered on volcanic rock and huge birds of prey hovered above.
'Wasn't that the same day you stuck a dead bird on a stick and chased Mum and me?'
'Mmmmm, baby,' Frank said in one of his funny voices. 'Mum was cross about that. She liked her birds. She was very upset when we saw that gull trapped on a rock, surrounded by the sea. Didn't it have a broken wing or something?'
'That's right, I'd forgotten about that. She was upset because she couldn't do anything to help it.'
'Yes, Mum was a very special lady,' he said with affection.
Irina and Roman continued to make their way up, but now I didn't mind the extra wait. I was enjoying listening to Frank, who had entered full-blown storytelling mode.
'My best walk with Mum was one we did at Torr Achilty, it was a _verrry_ hot day,' he said, rolling his 'r's, 'and we drove about half a mile to a quiet area on the River Conon. The river was well hidden from sight by lots of foliage. Mum and I went for a naked swim.' After pausing he added, 'It was quite brave of her.' Although I didn't really want to know about Mum stripping off, I did like Frank sharing that story. There was so much I didn't get to know about her.
Keen to get moving again, I asked, 'Which way from here?'
Frank pointed me towards a scree slope on the right. 'I've got a surprise for you, but I'll tell you at the end of phase three of the walk,' he said. There was no point quizzing him about the surprise – he loved his little games – so I playfully rolled my eyes and shook my head. On I went alone, wondering what on earth the surprise was, while he dutifully waited for Irina and Roman.
We got on now, but I hadn't always liked Frank. I'd behaved awkwardly when, in my early teens, he'd got together with Mum; actually, I was the original demon-spawned child from hell. Back then I'd seen him as a threat and would constantly try to find faults in him. What an asshole I'd been, and Frank had taken it.
I think I'd struggled to accept him for the same reason I'd struggled to accept any of my mum's boyfriends. I couldn't bear thinking of them as a potential father because none of them came close to being the man who should have filled that role. A man who had shaped my mum's life, and mine, and whose presence I felt throughout my childhood and beyond. Poor Frank had had some big shoes to fill.
During the spring of 1973 Mum had got the job as verger in the chapel at the Fort. At this time, among other army units, the Fort contained the Joint Services Mountain Training Centre, to which arrived a young new Chief Instructor, Major Gerry Owens. A bachelor and keen sportsman in his early thirties, he already had a formidable reputation as a Himalayan mountaineer. When he and my mum met, the pair fell deeply in love.
Without reservation Gerry was well liked by all the family. Gran thought he was dashing; his handsome, angular face was framed by sandy curls, his athletic build was clad in stylish garb, and he was charming. Grandad approved of him too.
Those who knew him best would say that Gerry was a good companion with a huge capacity for fun, often making digs but always in friendship. On the mountains his determination and resolve were unstoppable. On one occasion he was on an army mountaineering expedition in northern India, climbing up a vertical rock face, when the loose snow all the way up the ridge gave way. Luckily, he and his climbing partner were firmly attached to the rocks as the avalanche thundered down around them. According to his partner, Gerry was silent and calm during the entire ordeal, and when it was over simply finished the route without a word in response to his partner's exclamations of relief. He was as hard as nails and gave himself – and others – no quarter. But with my mother and me he was tender and caring. Because of his love of being out in the wild, amid nature, Gerry encouraged my mum to join him on days out.
Walking hadn't really been Mum's thing back in those days. Her older brother, my uncle Jimmy, had told me he'd been surprised when she got together with Gerry as she'd never struck him as an outdoors sort of person. During the summer of 1973, when he had arrived home on a rare visit from university, he spotted two familiar figures making their way to the house. He remembered being amused at the spectacle of his sister, looking distinctly unimpressed, with her dark hair stuck to the sides of her rain-soaked face and clothes sagging under the weight of the water. Gerry strode up the rose-bordered path with a huge grin on his face and called out to my mother, lagging a good fifty yards behind, 'Hurry up! Come and greet your brother!' Gerry enthusiastically described what a great time they had had. Uncle Jimmy said Mum looked as though she had never spent a worse day in her life, but gave a bright smile and agreed that it had been 'great'. My uncle thought the relationship wouldn't last, but he was proved wrong as my mum developed an enduring love of the outdoors life, and for Gerry.
Gerry was away a lot for his job with the army, but he would always send my mum a postcard or a tourist guidebook with a scribbled message to let her know he was thinking about her. Two years after they met, he was to be posted to Norway. He asked Mum to go with him; she accepted. But when she announced the plans to Gran and Grandad they put their foot down. I had been born out of wedlock, and despite their approval of Gerry they would not allow Mum to live with him outside of marriage. This caused a big problem between the pair, and for a few weeks they were apart; but finally, on Christmas Day 1974, Gerry took the plunge and asked Mum to be his wife. Everybody was delighted. Predictably unconventional, Mum said she didn't want an engagement ring, so instead Gerry presented her with a hinged gold bangle engraved with a fine leaf pattern: she loved it and it rarely left her wrist.
With his posting to Norway coming up at the end of summer, Mum and Gerry planned a short engagement. Their vows were to be taken in the chapel at the Fort in June, a few weeks after he was due to return from a Himalayan expedition to Nuptse with the Army Mountaineering Association. It was a training exercise for the army's plans to summit Everest the following year and, as one of their strongest climbers, Gerry was a forerunner for the team. He left in mid February and Mum wrote to him at least once a week while they were apart.
And then, six weeks before the ceremony, it all came to a catastrophic end.
Though I was only a little child of about two and a half years old, the scene and exchanges are clear in my mind: some things just stick. I was sensitive to my mother's moods and I had noticed a change. Despite the sunshine of late spring an oppressive darkness made its presence felt, and the silence that had fallen on Inchrye was disquieting. Something was very wrong, I sensed it.
The morning was bright. The sun reflected off the two-tone small pink squares on the lino of our bedroom floor, the hearth lay empty and the radio sat in obedient silence above it on the mantelpiece. My mum and I stood between the bed and the chest of drawers near the door. I was small and she was so tall.
'When's Gerry coming back?' I asked, looking up to her face. She turned away from me.
'He's not.' Her tone was cold and unemotional. She walked away, leaving me in the room on my own.
In those early days we shared a bed. I'd wake up in the night and rest my little leg over the top of hers, but now she would push it off and turn her back to me. I suppose this was really the first time I felt a sense of being shut out. It wasn't that she didn't love me. She did. But what I didn't yet know was that she had become consumed with grief: Gerry was dead.
I had known nothing about my biological father when I was growing up, so later, when anybody asked about my dad, I told them he was dead. The person I was referring to was Gerry, even though I hadn't really known anything much about him either. But those people who had asked the question would offer expressions of condolence followed invariably by the question: 'What happened?'
'He was a mountaineer and he died in a fall,' I would say – that much was true. That was what I had told my teacher when both Mum and I started at the same school, her as a teacher and me as a pupil, and I had cringed afterwards thinking, _Ah no, what if Mrs Miller questions Mum in the staffroom?_ I hadn't wanted to cause upset. But if it ever was mentioned, Mum never let on.
Gerry's early death had robbed my mother of a husband and me of a father. Maybe it wouldn't have worked out for them, and he might have turned out not to be a brilliant stepdad, but it was something neither of us had the chance to find out. I'd been told by both my aunt and my grandparents that after Gerry learnt of my roof escapade he had bounced me up and down on his knees and said, 'I'm going to love and look after this little girl as though she is my very own.' It was a scene I so often pictured, me with the father I should have had. Instead we had been left with a memory of a much better time when all had seemed 'perfect' and everyone happy, an image that perhaps we both idealised.
At the top of the quartzite scree trail I reached a trig point, a distinctive concrete pillar once used by surveyors, now often invaluable to walkers finding their way. When I looked back, Frank was no longer in view. I could see that a sharp, rocky ridge rose higher on my right, so I realised that the summit was further still. The others were going to take ages, so I thought I'd attempt the file-edged route to the mountain's top. My stomach fluttered as an unstable rock shifted when I stood on it. Maybe it had been the earlier sight of Liathach that had unnerved me, or maybe it was my lack of experience – I'd never been so high on any mountain, I was alone, it all seemed so precarious, and I suddenly felt vulnerable. Dropping onto my hands and knees, I crawled along until I decided to give up and return to the trig. There was still no sign of Frank, Irina or Roman, but, glad to be back on more stable ground, I bounced off over some rocks and ran along the next bit of ridge, where, finding a spot out of the wind, I sat and waited for them to catch up. Sitting there all alone I felt my insignificance, my own mortality next to the great ages of the rocks around me.
Frank came into sight. I waved and called out to him, pointing to the path that dropped down. 'Is this the way?'
He confirmed that it was so I carried on ahead. I made my way down to the col, the lowest point between the ridges. There was a lot of sandstone and quartzite, indented with curious circular designs, like the mark a pastry-cutter leaves in rolled dough. I was certain I was looking at the fossil of some ancient creature, that this rock held great secrets, and in that moment I felt a definite spiritual presence in nature. Fascinated, I picked up a small piece to carry home. As I held the rock I was aware of that same sense of history that I'd experienced inside the walls of Ryvoan bothy, except the stone I now had in my hand whisked my imagination away on a much grander timescale: a connection to a past from which we all come – and maybe, in a more basic sense, it reminded me that we all come from, and will return to, dust. I felt comforted by that thought and was so absorbed in the moment that I didn't notice that the others had caught up and now it was I who had been left behind.
The only way to the floor of the next corrie was to descend an incredibly steep and narrow scree gully. Keeping to the right, using the rock wall as a support, I made my way down the stony staircase while Frank, Irina and Roman were slip-sliding their way ahead. The sky above the col was a magnificent gentian blue, which deepened into darker hues as it edged towards outer space; what a great sense of peace I felt deep within right then.
Below the broken cliffs and beyond the scree gully were irregular-shaped boulders varying in size – picking a way over them felt interminable as I dropped down into the natural amphitheatre; it was truly impressive. Under the surveillance of the almost vertical triple buttresses on my left, their lower halves sandstone and upper sections shining quartzite, it almost felt as if I was trespassing. Then, unexpectedly revealing itself at the foot of the towering rocks was the most beautiful loch I'd ever seen; it was so close. Captivated by its shimmering, iridescent blue waters, I was transfixed.
While I was lost in thought I found myself squatting at the lochan's edge, enjoying the sensation of the icy water as it ran through my fingers and swallowed up my hot, sticky hands. That's when I noticed that the wind had dropped. Stillness: not a sound except for the gentle movement of the loch water as the exit stream found its way over rocks and plunged down the mountainside. I almost didn't dare breathe, as if doing so would shatter the silence commanded by the council of walls around me. I felt spooked and yet thoroughly enchanted in this peaceful place: these mountains were magical. It had definitely been a struggle to get here, but clearly it was worth it.
'_Things in general don't come easy, and achieving anything worthwhile takes hard work and time_,' I said softly to myself, thinking not just of the trek up here but also how hard I was still finding it to live without my mum. I wondered if that was why she had carried on walking after Gerry had died, if it had been her way of coming to terms with her own grief.
After Gerry's death there were reminders of him and what my mother had lost everywhere. Redundant wedding invitations lay stacked in a neat pile on the kitchen table, never to be posted. The service at the Garrison chapel and reception at Cameron Barracks had to be cancelled. The dining room was full of cases of champagne, and the wedding dress that Mum had been in the process of making was spread over the table, along with an embroidery she was sewing to give to Gerry on their wedding day, its words now taking on new meaning: _Today is the First Day of the Rest of Your Life_. And then, the most heartbreaking reminder of all – a letter arrived from him, written shortly before he died.
_Camp I – 17,000 feet, 24th April_
_My Dear Jenny,_
_Wonderful to hear from you again – I received your two letters of the 10th and 16th together, to-day. Particularly for the 16th that's fast going to this remote camp on the glacier. I was interested to hear that the material for your wedding dress has come through – you say it's lovely but tell me what colour it is and whether it is plain or patterned? The weather with you sounds rather unpleasant – I heard through a fellow member that London also experienced snowfalls over Easter. I'm sure with the clothing you wear to work that you are far worse off than me. We have just experienced another fairly heavy snowfall but the worst aspect of the climate are the very chill winds. To-day the upper air-stream is moving along at 100mph and it's blowing from Siberia and Tibet._
_You wouldn't recognise me at present – shaggy, unkempt hair, straggly beard and moustache, peeling nose and split lips – still love me!? Oh, and my teeth haven't been cleaned for a month: my tent companion tells me I look better than I did! I'm going to have to watch him. Seriously I'm looking anxiously to the day when I can once more set my eyes on you – it doesn't seem so far away now._
_Our return trip to the UK looks a little unsettled but I'm still fairly confident that I should get back by the end of the first week of June._
_Tell your mum, that instead of going on the Aberdeen trip, to join me here – the altitude and exercise reduces you to a scarecrow in no time despite eating twice the amount you normally do at home!_
_Tomorrow I'm off to Camp II at 19,000 feet. It's a delightful campsite, set on a ridge. The early morning sun soon reaches it and the views are fantastic. However, the day's work will consist of humping 40lb loads up to 20,000 feet. Needless to say, my love, I miss you very much._
_All my love as ever, Gerry xxx_
On the day my mother should have married Gerry my gran invited a small gathering of close friends for a lavish lunch. The idea was to cheer her daughter up and take her mind off the day. Steaming food was brought to the dining table and laid out so that everyone could serve themselves, while Uncle Jimmy was asked to open a bottle of the champagne that had been destined for the wedding. As he pushed, the cork gave way under pressure of the bubbles inside and unexpectedly flew out from the bottle's neck and smashed into the chandelier. Droplets of glass shattered into tiny sparkling shards, falling like a shower of diamonds all over the food. Amid gasps, a heavy chair scraped across the wooden floor as Mum withdrew from the table and fled from the room in tears.
The gloomy atmosphere prevailed within the walls of Inchrye for a long time. Unspeakable grief consumed my mother. She barely uttered a single word for over six months. She wrote poems for no one to read and would disappear on long, solitary walks for hours on end. Nobody could get through to her, not even me. Grandad would go to work with a heavy heart as he worried over his daughter, while Gran attended to mundane rituals of housekeeping and looking after everyone. Like the grown-ups, I got on with each day in my own way, roaming around the house or playing in the garden; but underneath the surface was a different matter. I was insecure, my mum was distant and I was too young to understand why.
Mum eventually forced herself to get on with her life without Gerry and completed a four-year teacher-training course. There were a few boyfriends over the years and I resented them all – they just weren't Gerry.
Then Mum met Frank. He and my uncle David had served in the same regiment since the late 1970s and the pair had frequently come to Inchrye when they were both on leave. Mum's new career took the two of us to Nairn, and since my grandparents were getting older, and Inchrye was far too large for just the two of them, they sold up and also moved to the town. But instead of going to their bungalow, Frank would come to visit Mum and me in our tiny home.
In the beginning I didn't mind – he was just my uncle's fun friend who always brought me sweets and junk; we used to play cards and he'd let me win his money, I loved hearing his rude jokes and his crazy, infectious laugh. But then I began to notice he was spending more and more time with us and it didn't take too much longer before I worked out that there was something going on between my mum and him.
It was 1984 and I was about to enter those hideously awkward years: the teens. The timing was bad, although to be fair he stood little chance of winning me round anyway. There hadn't been anyone on the scene for a few years, not since Mum's college days, and I'd been used to having her to myself. I didn't want this guy muscling in, stealing her affection from me. Up till now I'd accepted that she was busy with school work and I had been content to play quietly with my dolls while she sat doing her planning and marking. I accepted the fact that she had to work and wasn't bothered that she didn't give me her full attention, but now here he was, and she was giving him her time. I felt jealous of him and unimportant to her, and I behaved horrifically, making their every attempt at a private life impossible. Neither of them deserved the teenage tantrums, but no matter how vile I was, Frank showed great understanding. He was the one who came into the bedroom I shared with Mum to sit with me when I was sleep-talking – well, he said it was more like sleep-yelling. He could be good-natured, kind and caring. But, being too immature, I took no notice of these qualities.
I continued to resent his relationship with my mother. In the flat, when Frank and Mum weren't paying attention, I'd hide his cigarettes one by one: subtle and highly irritating. And when he returned to duty in Northern Ireland I would wiggle the connection from the phone just enough so that it looked like it was still plugged in. Sometimes Mum didn't notice for days. After a year the strain of a long-distance relationship, coupled with my adolescent outbursts, became too much. I found a letter Frank had sent to Mum, in which he'd written, 'can't you see the beauty of her love for you?' and added that he was prepared to 'weather the storm'. But Mum called the relationship off.
My mum never once blamed me, not to my face anyway. And though at the time I was glad Frank was gone and that I had Mum back to myself, I did feel guilty and responsible for their split. It wasn't good to feel that I had caused Mum such deep misery that she had sacrificed her own happiness for mine.
After ten years they eventually got back together again. They got married, too, but they were only man and wife for three months before cancer took her life. Their marriage ceremony in the chapel at Fort George was both joyful and sad. At the reception afterwards we toasted what was meant to be one of the happiest days of their lives when Grandad presented Mum with a bottle of champagne – one he had held on to for a long time: it was the very last of those bought to celebrate Mum's wedding to Gerry twenty-two years earlier, and now, at last, it would be opened. Mum passed me a champagne flute. I raised it to my lips but the lump in my throat made it hard to swallow the fizzy liquid. How could I pretend to be happy when I knew the reason she'd got married was because time was running out? In 1975 it had been her tears falling into the glass, in 1997 the tears were mine.
The wind suddenly kicked up and ripped through the stillness like the invasion of a school playground at the sound of the three o'clock bell, blowing the water back up the waterfall. Frank, Irina and Roman were still labouring over the boulders, so to kill some time I walked out on the stalkers' path, pausing to watch three deer, one posed majestically as if for a photograph. When they finally trotted off, I retraced my steps back up to the loch. There they were. I was starting to feel a bit irate at all the delays, but then Frank revealed the surprise.
'Well,' he said, 'that's the end of phase three. Don't you know where you are?'
I had a think but shook my head.
'The last time we tried to come here you were four months pregnant. The time before that was with Mum.'
It all came flooding back. This was the loch at Coire Mhic Fhearchair, a place Mum had wanted to visit ever since Frank had shown her photos of one of his trips here. As she became sicker it seemed increasingly important to her that she do it, and so the three of us attempted it just a couple of months before her death. She had convinced herself, and us, that she was fit enough for the walk, but when we were a little over halfway to the loch she'd had to call it a day. Mum really wasn't well and, although disappointed she hadn't made it to the loch he especially wanted to share with her, Frank knew we absolutely had to turn back and get Mum home – and he was with her every step of the way.
We had tried to return, Frank and I, on Valentine's Day 1998, six months after Mum died. But I was carrying an already large pregnant bump and it had been my turn to struggle. The winding trail felt interminable as I trailed far behind Frank, and as the wind battered around the side of Sail Mhor it took my breath away. It was madness to try to carry on – and I too had had to give up. I yelled, hoping Frank would hear my voice over the wind. He did. I felt bad for him as we walked back. I understood that he'd wanted to come here to remember Mum. To relive the walk we'd done just months earlier with her at our side; we both needed to feel that connection to her. Twice we'd been on Beinn Eighe, and twice we'd turned back before reaching the loch. Now we'd finally made it, and it felt like Mum was here with me.
Inhaling deeply, I stood still for a few moments absorbing views of the wild, ragged landscape. Standing in that space of irresolution I felt entirely at the mercy of the mountain: and there was something strangely reassuring about that.
### CHAPTER FOUR
### Cheating Myself
_Ben Wyvis — The Hill of Terror, July 2008_
On Ben Wyvis the cloud can settle on the top and clear up again many times in one day. I knew that well from all the years I had spent staring out of our second-floor kitchen window. With the naked eye, on a clear day I could see over the outer Moray Firth, all the way along the coastline from nearby Black Isle and Cromarty to as far up as Caithness – probably about forty miles away as the crow flies. It was a far less hilly view than the one I'd enjoyed from the old family home in Ardersier, but while Ben Wyvis was the only proper mountain I had a glimpse of from here, I loved being able to see for miles around me.
So I knew that what happened at the top on any given day was always hit or miss, but as I looked over blue-slate rooftops, the church spire and the water, the weather seemed fair and the mountain cloud-free. I decided it was a fine day to go up the Ben. And I'd been itching to take my boys up their first Munro, as any mountain in Scotland over 914 metres is known.
Marcus and Leon were like chalk and cheese, both in looks and personality; aged nine, Marcus was dark-haired with hazel eyes, while seven-year-old Leon was my fair-haired, blue-eyed boy. Where Marcus would be content to play quietly, Leon was a rascal. They even expressed affection differently, especially when they were tots: where Marcus would offer a gentle cuddle Leon would come running at me, deliver a swift blow to whichever of my body parts he came into contact with first, then growl, 'I LOVE YOU MUMMY!'
I'd already taken them out with me on many long walks and I'd recently started to introduce them to the hills as well, tackling Meall a' Bhuachaille with them. Now I felt they were ready for their first big climb, hoping that the mountains would bring us closer together, just as walking had with me and my mother. But it turned out that none of us was quite prepared for what was waiting for us on their first Munro.
Ben Wyvis was about an hour's drive from home, and the circuitous route I'd picked from my book had suggested the walk would only take around six to eight hours. I was feeling confident and well prepared when we set out walking along the well-constructed path, glad to be with the two little humans who meant most in the world to me. As we walked up through trees and enjoyed views of the river racing and frothing over rocks on our right, the sun's warmth was delicious as it beat down. The straps of my overstuffed daysack rubbed against the exposed flesh of my shoulders, but I felt too content to care as I enjoyed watching my boys happily exploring, Marcus staying close by while Leon made straight for a small burn that dissected the path to poke at some water spiders. But as we continued into more open land, the weather changed all too quickly.
Patches of cloud blotted out the sun, and the wind suddenly blew harder, making it feel considerably colder. I was glad I'd had the foresight to pack extra clothing for us all, which we threw on, halfway up, by the shelter of a camper-van-sized rock. Our cheerful mood soon evaporated, replaced by a steady stream of moans and groans, mostly falling from Leon's lips. At the height of his disenchantment he declared crossly this was 'a wasted day' and 'a meaningless walk to the top of a lump of earth', which despite everything made me laugh, and he soon saw the funny side too.
It really was a hard slog for untrained calf muscles, though, as we continued upwards. The path edged the mountainside and it was a long drop down. In a reversal of roles, Marcus appeared to be in his element, much more confident and adventurous, whereas Leon seemed to have lost his usual fearless approach to life and was demonstratively ill at ease when he saw his brother investigating a large, flat slab of rock, like a shelf, jutting out over the valley below.
'Tell him to come back, Mummy! He'll get blown off the mountain!' Leon wailed.
I reassured my younger son as he clutched my hand tightly, though in truth I'd been watching his brother with nervous pride myself. I called Marcus to rejoin us, and he immediately scrabbled back before forging off ahead once more.
'Auch, man, you have got to be kidding me!' he called back.
Laughing, I realised he must have reached a false summit. After battling up heathery slopes we were fully exposed as we topped out onto a dirtied-amber flat-summit skyline, and the wind buffeted around us mercilessly. We gazed down upon views of the Cromarty and Moray Firths, steely grey waters spilling from their triangular inlets into the expanse of the North Sea, which filled the horizon. It was tremendous to be so high up. To see so far out into watery nothingness in one direction, and in the other to behold an inanimate world of seemingly endless peaks fading into the distance. I hoped the boys shared my sense of awe.
The going was now easier as we tramped across woolly hair moss over the broad ridge, but we were still just over a kilometre and a half away from the summit. We watched as cloud drifted onto, over the top of, and away again from Ben Wyvis's highest point. The boys were delighted by large pockets of snow still clinging to the deep, craggy corrie on our right, but I began to worry about the deteriorating weather closing in around us. It was one thing to watch weather fold and unfold from my kitchen window, but to be right here experiencing it in the fabric of the landscape, on the peak of this big, rolling, fuck-off mountain, was something completely different.
At the shelter cairn we huddled together, seeking extra protection from the wind behind the concrete trig point. Dense mists surrounded us and, as my shivering boys ate their sandwiches, a sense of irresponsibility rolled about my stomach in sickening waves.
I stared at the drawing of the route I'd printed from the Munro book, realising, with growing fright, how redundant it was without a proper OS map. It was wildly less than adequate for the zero-visibility conditions we were now in. We were only at the halfway point. I'd thought we'd be much further along by now and would easily beat any change in the weather. Of course, I'd anticipated it would take longer with two small kids in tow than if I'd been on my own, but now I realised I'd vastly underestimated just how much longer. Now I was up here in bad weather with my children, frightened and out of my depth. But panicking in front of the boys was not an option. I just needed to think.
I figured there were three choices: stay put and hope the weather cleared up again before we all froze to death; try to continue following our planned route despite no longer having a clear idea of where that was; or return the way we'd come, hoping we'd be able to retrace our steps and keep to the path. It wasn't a happy situation – two options carried a high risk of getting lost, and in all three there was a chance we could end up with hypothermia. As I sat shivering next to my boys, my terrified mind began to play out the tragic scene of our deaths, and then of my own death in which my children were left without their mother. I didn't want them to have to grow up without me: it had been bad enough losing my own mum at twenty-four; they were far too young to go through that sort of pain and suffering. Who would look after them if I was gone? Those thoughts made me pull myself together with a firm resolve. They were not going to die today, and neither was I.
I was twenty-one years old when I'd arrived back in the country on 18 December 1993 after spending two weeks in Thessaloniki, Greece. It was quite late, after I'd caught various buses from Aberdeen airport, when I opened the door to my wee student bedsit on Hardgate. Despite the hour, I phoned home to wish my grandad a happy eightieth birthday.
'I think you'd better give your mum a call, Sarah, pet,' he said.
My mum had been diagnosed with breast cancer.
'I've had lumps before, but they were just cysts. I went into the doctor's once with one and came out with five,' she said.
It felt like I was falling in spirals from a great height as my stomach dropped and churned. She was only forty. A stabbing pain clawed across my chest.
'I'm gonna get the first bus home,' I told her.
Mum's next appointment was on the Monday and I went with her to the hospital.
'And you've had this lump for about a year?' the doctor enquired.
'Yes . . . I didn't get checked for so long because the lump is the antithesis of what my local GP advised me to look out for,' she answered defensively. 'It's wiggly and mobile. I assumed it was another cyst.'
Mum was admitted for a lumpectomy and discharged from hospital on Christmas Eve.
'I feel exhausted,' she said, and then she cried.
Days passed in a numb blur until the college holidays were coming to a close.
'My radiotherapy sessions start on 7 January. Least I'll get a whole term off school,' Mum told me.
'Do you want me to stay home? I can help you,' I offered. I didn't want to leave her to deal with this on her own, I wanted to be there for her – and I wanted her to want me to be there for her.
'Thank you, Sarah, but no. There's nothing you can do. The treatment only lasts six weeks and then that's it. I just want everything to carry on as normal, so you get back to art school,' she answered affectionately.
She still looked and sounded like my mum, yet cancer, this hateful disease, seemed to have lodged itself between us. Now she felt more distant than ever. She was probably trying to protect me by sending me away, but at the time I only saw it as rejection. I couldn't help my mum. I was utterly powerless, still cartwheeling from the sky. It felt like it didn't matter what I did; the sick in my guts and fog in my head were there to stay.
Maybe it was that feeling of rejection, combined with the fear that I might lose her, that caused me to suddenly rebel. In the following weeks and months my life slid out of control quite quickly. Shopping sprees became opportunities to thieve out of devilment rather than need, and I was drinking more than was good for me. I loved painting in my studio, sometimes staying late till only the cleaners were around, but when my day at college was through I found myself getting either stoned, pissed or both. Weekends extended, beginning on a Thursday and ending on a Monday, and if Kate, my only friend at college, was preoccupied with her boyfriend then I'd go out alone, latching on to anybody in the various bars and clubs I frequented.
Bad dreams disturbed my sleep, ones in which I had only days to live because it was me who had cancer, not Mum. And then there were night terrors – the paradox of knowing I was asleep, but believing I was awake. In these episodes I could see my soul extricating itself, taking flight from my lifeless corpse. In others the Devil himself was coming for me. He'd be outside my window ready to get in and I, by some supernatural force, found myself crawling, as though through treacle, on hands and knees across the ceiling, desperately making a bid for freedom. There was no respite from Mum's cancer. No escape.
I was drinking so much that blackouts were becoming a common occurrence, but I started passing out too. I was swinging on a chair at the Students' Union one minute and the next everything went black. It wasn't until the very end of the night that the people I was with got me to my feet and helped me to leave. If I was still conscious at closing time I'd try to find a party, just to keep pouring down the alcohol. Sometimes I'd make it back to my bedsit and not remember how, but more often home was a distance too far so I'd crash out wherever I ended up.
The boys next door weren't exactly a stabilising influence. They'd already shown me how to identify which mushrooms were the magic ones, but tripping out, I discovered, was definitely not for me.
'Why don't you just stick to blow?' my neighbour's muffled voice said before he inhaled deeply, while his flatmate, having drawn on a loaded joint, blew its smoke up through the black pipe attached to the mask on my friend's face. Pale features, framed by his ginger locks and black rubber, disappeared in a fog behind the clear plastic visor and I laughed when he reemerged looking an even more ghostly white, his eyes glazed, totally stoned. It was my turn to pull on the gas mask and take a blow back through the concertinaed hose. But it wasn't enough for me; I was going out to the pub in case I missed out – on what, I don't know – and so I left them with their mask and blow in their squalid rented room. I passed out that night too. I didn't register what was at the root of my behaviour so I did nothing to break the pattern, not even when the events of a night out turned a shade darker.
I'd switched on the TV, browsing through channels to figure out if it was afternoon or early morning. It wasn't great that I'd lost a day at college, but on the bright side I didn't have to wait long to go to the pubs. I thought alcohol made me happy and gave me confidence. But there I was, alone again, in a booth at the Students' Union, all maudlin and introspective. I phoned my neighbour to tell him I felt suicidal. He had come more than once to get me, but on this occasion it was a mutual friend who turned up – someone I fancied – and took me home. One minute we were kissing on the sofa, the next he'd pushed me to the floor and was forcing himself on me. It was over within minutes. He readjusted himself and left. I sat staring at the blurry vision of my jeans around my knees. I remained there for a moment in a state of total confusion before cleaning myself up and crawling into bed. What I should have done was call the police, but I didn't – because I'd invited him in and had enjoyed him pressing his lips to mine, and because my efforts to fend him off when he pushed me down consisted of two alcohol-weakened hands pressed flat against his chest until I gave up and let him get on with it, and because I was so drunk. He'd physically hurt me, but I blamed myself entirely. I felt shame, too much to be able to tell anyone. And I felt worse than ever before.
While I was busy screwing myself up, Mum's treatment had been hard going for her.
'How are you feeling?' I asked when I called home.
'Glad the radiotherapy is over. I felt so sick and lethargic all the time. I've had enough of hospital and doctor appointments. I just want to get on with things now.'
'So the cancer is all gone?'
'Yes. The doctor said I'll have to go for check-ups, but after five years I should get the all-clear.'
It was a relief to hear her airy optimism, and so I buried my fears.
Summer term had come round and I threw myself into the routine of art school. I was relieved that the doctors had dug out and zapped Mum's cancer, but I remained unsettled. I began a secret affair with one of the visiting tutors. He was married. It was clandestine, and I enjoyed it. I grew fond of him because he seemed to genuinely care, and he was gentle with me. But the affair was almost as though I was substituting one bad habit for another. Like alcohol and shoplifting, he was the initial high of those first few drinks, and the buzz I felt at getting away with thieving. My satiation was only ever temporary. It was like I was addicted to danger and unpredictability. Eventually our involvement, just like my worst hangovers, made me feel regret and shame. He wasn't mine to have and so we came to an end. As usual, I promised myself that I'd change, that I'd be better.
With the degree show only weeks away, I needed to make new plans and decided the best thing would be to get as far away as possible from Aberdeen – even from Scotland. I applied for a postgraduate place at a quaint and ramshackle art college in a small village called Lemba, several miles from Paphos on the Greek side of Cyprus. I wanted Mum to think that I was taking responsibility for my future; I didn't want her to worry about me.
Six months later I was living in Cyprus, I had a boyfriend and was feeling good. I thought I had things under control. But I was kidding myself. A couple of months later my boyfriend dumped me. What he had first been attracted to in my crazy, drunken behaviour had become a worry and burden.
'You can't just enjoy one or two drinks, you always have to get fucking trashed,' he said. 'I'm fed up of all your moaning and feeling sorry for yourself. I've had enough.'
Feeling lonely and discarded, I went even wilder than I had been in Aberdeen. Craving attention, I went on benders and would sing and dance on tables. I skinny-dipped in the sea under a full moon. I got so drunk I thought I was lost in a jungle, when really I was on the terrace outside my room. Twice in one night I was found asleep and snoring in the middle of the road behind our accommodation block, and I'd wake up in other people's rooms, wondering how I'd got there. In the dead of night I'd ridden off on my moped, blazing drunk and high on grass, determined to find someone to party with. And in the morning, still under the influence, I rode off to work without realising I was missing the left lens of my sunglasses. I was a danger, and a mindless idiot. Christ, I even called the Mayor of Nicosia the 'c' bomb before I did a bunk with one of the diplomats: it was meant to be a laugh.
The confidence and impulsiveness that drink gave me to go up and talk to strangers made me come across as annoying, and often my ambition to seek comfort ended up as a fleshy encounter. Hungover, I'd hide in my studio space by day, my stomach seizing when people came to tell me what I'd done or whom I'd pissed off.
I made plenty of excuses to myself for my loose and chaotic behaviour. But every situation I found myself in was of my own making, and I alone was responsible for the fact I was becoming less likeable with each passing day. I'd stretched the limits of the relationship I'd had – and common decency – like a rubber band because I'd lost myself in drink. I didn't want to be that person. I needed to go back home, promising myself once more I would be better.
When I returned home from Cyprus, hoping to make another fresh start, it was Mum who helped put me back on track. She let me use one of the smaller rooms in our flat as a studio, and supported me financially until I found work, at first in a bar and then running a local art gallery. I was even getting on well with Frank, who by this time was back on the scene. A relatively harmonious and productive year soon passed. Life had settled.
It was mid August 1996 and I'd just completed a series of sea and landscape paintings for exhibition at a small London gallery. The next morning, I heard Mum talking on the phone to Gran.
'I've got an appointment with the doctor at two o'clock,' she said.
Our year of peace had been the calm before the storm.
After Mum's appointment with the doctor she was admitted to hospital. At ten-thirty the next morning she called.
'The CT scan revealed a tumour on my brain. It's small, easily accessible and they're going to be able to remove it.' Her words were a dizzying drug flooding my body from head to toe, but I understood what they meant. She had secondary cancer: ultimately, a death sentence.
'I'm coming in with Gran and Grandad, we'll be there soon. Have you told Frank?' I asked. After he and my uncle David came out of the army they'd gone into business together. He was away working on a removal job in London and wasn't due home for two more days.
'No. There's no point. There isn't anything he can do, it can wait till he's back,' she answered. My grandparents and I arrived at the hospital to see Mum, and each of us kept our feelings in check. 'I've just had a chest X-ray and tomorrow I'm getting a scan on my liver and bones to make sure that no spores have manifested there,' she said.
The situation was surreal, like the bad dreams I used to have. Conversation was stilted small talk that none of us had any real interest in, but better that than loaded silence. We all wanted out of there; we all wanted to wake up from the nightmare.
Back at my grandparents' house I took the dog's lead and stuffed biscuits into my jacket pocket. 'C'mon, Poppy. Walkies,' I called. Ears pricking, she got up from her basket and crossed the kitchen, her wagging tail and a lick of my hand indicating she was ready to go. She was a faithful and gentle old girl, a cross between a Labrador and a lurcher. She had been part of our family since she was a pup, rescued from the needle two nights before Remembrance Sunday by my grandad. A myriad of thoughts thumped in my head. Both Poppy and I wanted the freedom of the wide open space offered by the beach: she so that she could chase gulls and I hoping the sea breeze would blow all the upset and pain away. I turned at the kitchen door before I left.
'Do you believe in God, Grandad?' I asked.
'If only I knew the answer, Sarah pet. It's something I've often asked myself, but I do think all this can't be here just by chance.'
My grandad was the smartest man I knew; he could speak a number of languages fluently, and I was always impressed that he could do the _Telegraph_ cryptic crossword puzzle. I thought he had the answers to everything, but he couldn't tell me this. I decided that if God was there, then I was going to talk to Him more because my mum was not allowed to die.
The following day the four of us once again sat in the hospital's day room. 'The bone and liver scan were clear,' Mum announced. The news was a small lift. Things seemed even more hopeful when a member of nursing staff popped in to say the chest scan had been clear too – so it came as a shock to discover later that day that the nurse had made a terrible mistake. Three tumours were found on both lungs. 'Life will never be the same again,' Mum said quietly.
Within two weeks of the discovery of Mum's brain tumour the neurosurgeon had carried out a craniotomy at Aberdeen Royal Infirmary. When Mum came round from the anaesthetic she saw me standing at her side, and, breaking into sobs, raised a hand to her eyes. I leant forward and gently kissed her cheek. 'I love you too,' she whispered. Though she'd written those words in letters and cards, I couldn't remember the last time she'd come straight out and said them to me. My heart ached. She had once said that I shouldn't need to be told I was loved constantly – I should just know. I did know, of course I did, but sometimes I just needed the reassurance of hearing it. I suppose she did not always understand me and neither did I always understand her; it was only when I became a mother myself that I came to realise this is probably often the way with parents and their children.
Three days later Mum was discharged; however, she still had to face two weeks of radiotherapy. Sickness and lethargy that she knew was caused by the treatment returned. Mostly she just wanted to be by herself.
I took Poppy for long walks around the river and along the beach, my mind trying to process the inevitable truth that Mum was going to die. Everything seemed back to front: my grieving had already begun yet she was still alive, and any time we had left was destroyed by knowing that she would soon leave me for ever. Standing on a bridge, I felt tears roll off my cheeks. I cried in bed at night too, and voices inside my head would contradict each other. _I'm allowed to cry! But you shouldn't. You should be happy your Mum is still here. You should be enjoying the time you have left. Save your tears till after she's gone. I don't want her to go. That's why I'm crying._
Dreams were full of images in which my mum was suffering, shrieking in pain, and always I looked on helplessly. Mum's cancer and its consequences remained dark, amorphous beasts, lingering and unshifting in my mind.
I'd spent many afternoons painting. It was normally an absorbing activity, but now I couldn't concentrate. Abandoning my task, I lay on top of my bed crying silently, tears gathering in tickly pools inside my ears. As if she had somehow sensed my distress, Mum came to my room and sat on the edge of the bed.
'It looks like you're suffering more than I am,' she said as she stroked my hair.
'I love you. I don't want you to die,' I blurted selfishly, crying even harder. Her voice soothed me as she told me to let it all out.
'Everyone has to die; it's the one and only guarantee in life. And when I'm gone of course you will cry. Give yourself a couple of weeks, but then I want you to get on with your life; just keep busy,' she said.
Mum always found the right words to say; maybe that's why she had always seemed old to me. But at forty-three she was only a young woman herself, and I think maybe now she needed me as much as I did her.
Like most parents, all I wanted was to provide better everything for my children, but I constantly felt a sense of inadequacy. And I wondered what chance they had of growing up to be happy and secure when their mother was neither of these things. Now on the summit of Ben Wyvis, as the self-flagellation continued, I worried alone, my mind replaying fragments of an old conversation:
_'What will I do when you're not here, Mum, and I have an important decision to make and need your advice?'_
_'You'll just have to try and imagine what I would say.'_
I was contemplating the best course of action when out from the thick furls of white mists a lone walker appeared. He was a young lad in his early twenties and without much mountain experience himself; he didn't have a map or compass, but he did have a GPS. Between us we managed to navigate the rest of the circuitous route together and at quarter past seven in the evening, after nine hours of walking, my boys and I were reunited with the car. The boys greeted our black carriage home with an abundance of kisses to its windows. They were shattered but had been completely oblivious to the dangers we had faced, while I was now flooded with relief at the happy outcome of what had been a fairly scary day: the hill of terror, so aptly named, had taught me my toughest hillwalking lessons to date. Even in summer months the weather on the Scottish hills can be less than hospitable; printouts from my book did not in any acceptable way constitute a map; and, most importantly, I never ever wanted to put my boys in harm's way again because of my ignorance.
Yet there was still so much more for me to learn.
### CHAPTER FIVE
### Becoming a Woman with a Plan
_Beinn Alligin — The Jewelled Hill, August 2008_
In the couple of months since I had been on Beinn Eighe in Wester Ross with Frank, things at home with Sam had been going increasingly downhill. Drink was the wedge that continued to drive us apart bit by bit. I didn't look forward to weekends in particular. We argued constantly. I tried various tactics to avoid confrontation, from sneaking off to bed and pretending I was asleep to buying a karaoke machine, but still it felt as though there was no escape from it. Sometimes it went on all night, until it was light outside, and I was so sleep deprived I no longer felt able to function as a normal human being, or as a mother. Thankfully the boys seemed to be unaware of how badly our relationship had deteriorated. But still we remained together. I didn't want to fail. I didn't want to lose my husband.
Adding to life's daily stresses – my deteriorating marriage, balancing work and children, arranging repairs caused by a leaking roof – nine people now lodged in the flat opposite, and the building's plumbing system couldn't cope. Pipes would block and stink us out and our water supply was drawn away by a pump they'd had installed, leaving us with not so much as a droplet in our taps – ironic when they then flooded us.
I needed to get out on the hills again.
I decided to return to Wester Ross, this time to climb Beinn Alligin, another one of the Torridonian giants. These mountains form a dramatic landscape, their peaks rising up sheer from the sea. In comparison with its neighbours, Beinn Eighe and the mighty Liathach, Beinn Alligin is the easiest of the Torridon ridge traverses, but it includes a series of three towering rock pinnacles known as the Horns, and covering all of them would involve some exhilarating and airy scrambling.
Recent weather had been untypical of our rainy Scottish summers and an area of high pressure brought settled weather over the country. After the recent experience on Ben Wyvis with my children, I felt much happier to be venturing out under clearer skies, and I felt more confident having already done this walk a couple of weeks earlier.
The memory of our long day on Ben Wyvis was too raw for Leon, and though I tried to sell the walk as being 'only half the distance' and 'not as high', he emphatically declared he was not coming. But Marcus, enthused by the description I'd given of scrambling over the Horns, keenly agreed to accompany me.
We arrived at the small car park off the road to Diabaig at the back of Torridon House around ten o'clock the next morning. As we tramped across moorland it was much drier underfoot than the squelchy conditions I'd encountered on the walk previously; it had been a clammy day then, with no wind or rain, and views had been obscured by low cloud before the corrie headwall had even been reached.
What a contrast now as Marcus and I sweated under the heat of the sun, up the steepening path that zigzagged into the corrie. I puffed and panted as I toiled uphill, not talking because I was breathing so hard, but Marcus was a mountain goat, only stopping in his tracks to look at frogs or dragonflies and to stare out a grasshopper with its giant, red eyes. Marcus had his father's looks, but his inherent gentle nature and placid disposition reminded me so much of my mum. He was turning out to be the best of both his dad and me.
I had never wanted either of my boys to grow up without a dad, like I had. When I'd got involved with each of their fathers, it was with the expectation that we'd stay together, but neither relationship had worked out. I was grateful that Marcus had a positive rapport with his father. I just wished Leon got to see more of his dad, but at least there was contact from time to time. My natural father had abandoned me completely from the start.
I did eventually meet him. I don't really know what I had been expecting, but still the encounter was a let-down.
It was a Sunday morning in 1989. I was seventeen and still dozing in bed when Mum came into my room.
'Sarah, your father's here,' she said. The news came as a complete surprise. Only a week or so earlier I'd asked Mum about my real dad, and she'd asked if I wanted to meet him. I had said I did, but I'd never imagined our first encounter could come about so quickly.
My emotions were all mixed up. He'd been living nearby all this time, and my mum had clearly had no difficulty in getting hold of him. _If it was that easy to have got in touch why is it only now that he is here? Why has he never wanted to see me before now?_
I was angry that he'd abandoned us – run a mile from his responsibilities as soon as he'd got Mum pregnant, back to the wife he'd previously denied existed. But I was also curious about this man. And nervous to finally meet him. I was a little afraid my mum would feel upset that I had brought him back into her life; I didn't want her to think I was being disloyal – she was the most important person to me, I hoped she knew that.
With nervous apprehension I got up, pulled on some clothes and went to the living room. There he was, dressed in a grey suit that had a sheen to it, sitting on our sofa; his hair was thick and his beard well groomed and the colour of autumn leaves. His eyes were big and round with long lashes – like mine. The atmosphere was awkward, but he tried to make an effort.
A few days later he took me to Dingwall, a small town fifteen minutes north of Inverness, where he was opening a new nightclub. We went in his flashy white BMW and had a wander around the club. On its wall was a badly painted mural of singing legends.
He was all chuffed and asked, 'What do you think?' and 'Do you like the dance floor?'
'I do,' I said. It was made up of squares that flashed different colours in sync with the music. I liked his car too. But I didn't want to like him.
On the third and final time we met he took my mum and me out for dinner and gave me _The Lost Boys_ video. He said he'd like to get a phone installed at our home, so that he could get in touch more easily and get to know me better, but Mum later told me it was so he could arrange to see her. What a prick. I felt hurt and unimportant and more confused than ever. And so I sent him a vicious letter, writing that I hoped he'd be repaid with the same humiliation he'd put my mother through when he'd denied both of us all those years ago. I was so angry, I cut him dead. I never heard from him again.
I was twenty-three when a call came through to the art college's office in Cyprus, where I was living at the time. Mum's voice was clear and unbroken at the end of the line.
'Mike's dead.'
'Who's Mike?'
'Your dad.'
'Oh . . . Right . . . What happened?' I asked, as a cocktail of horror, guilt and a tinge of remorse flooded through me like a numbing wave of morphine without the high.
'He died choking on his own vomit. He'd been at a party.' There was silence down the wires; I had no idea how to respond. Luckily my mother understood. 'You were never given the chance to develop any feelings of affection for your father,' she said. 'It's only natural you wouldn't feel the grief.' She was right: the man had only been my father in the most basic, biological sense – I couldn't cry or grieve for someone I never really knew. But the news still made me feel strange.
It was twilight as I made the solitary journey down the single-tracked military road, back to the whitewashed accommodation block and the sanctuary of my single room. Frogs in the valley were in full chorus. Soft pinks, orange and blue hues delicately wrapped themselves across the horizon joining sky and sea, and the remaining warmth of the day seemed to embrace me. As I walked I thought about what Mum had said and conversed silently with myself, batting statements and opinions back and forth like a game of ping-pong.
_It's true. Mike has been absent my entire life – apart from those encounters when I was seventeen. Yes, and even then that contact had been arranged at your request_. _I had just wanted to see what a man who could so readily shirk his responsibilities looked like. You already knew the kind of man he was. Actions speak louder than words! But part of me is him and now he is gone. I'll never have another chance to get to know him. Was that letter I wrote too harsh? Should I not have broken off contact then?_
My thoughts were suddenly interrupted by a dreadful sound as old Sol, who lived in an adobe shack behind the block I stayed in, hawked up phlegm. He'd been blinded from years of drinking absinthe, nobody saw much of him, just a shuffling shape in the shadows of his overgrown garden. I closed the door to my room on the world outside. _If he had really wanted to know you, no matter how awful the things were that you had written in that letter, he would have kept on trying to win you over_. I struggled to shift the guilt I felt. Yet there was nothing to be done about either those words in my letter or the undeniable fact that he was now as dead as a doornail. I would never get to know who that missing part of me was.
In the same way I had once used the bottle and blow to bury all the hurt, I now found salvation in the freedom of wide, open spaces. Among mountains I enjoyed a natural high. And I wondered, momentarily, if it was a similar pain that my father had tried to blot out when he chose to drink himself to death. He chose drink and drugs. I had chosen to live, to find a more positive outlet for my turmoil, and to be there for my sons.
As Marcus and I climbed out from the dark confines of the corrie on Beinn Alligin and topped out onto a fairly flat plateau, we were rewarded with sudden and extensive views over sparkling waters to Skye, Harris and the low-lying profile of Lewis. We were standing high on the north-western edge of Scotland, with nothing between us and the islands of Skye and the Outer Hebrides, dark, angular outlines across the Minch. Behind and now way below us, Loch Torridon glinted in still, blue perfection. Rising steeply above its southern shore stood Beinn Damh, smaller than its neighbours but stark and prominent with endless peaks sweeping gracefully away behind it. We couldn't help but keep stopping to admire the grandeur – while also enjoying some respite from the hot work. With a film of sweat across our foreheads we climbed higher still to reach the first summit, where we had a well-earned rest. We weren't even bothered by the flies that buzzed around us as we ate some lunch. Liathach dominated our view to the east, that intimidating yet fascinating terraced sandstone monster. And behind us, almost five kilometres in length, was the rest of our ridgewalk. After our break, still feeling the heat, I whipped off my sweat-soaked vest before moving on.
We could admire our surroundings properly now, like a work of art, the sandstone ridge gently curving in a serpentine line all the way towards the second summit, Sgurr Mor, and beyond it the Horns. We carefully descended the steep, narrow ridge and the rock felt warm against the palms of our hands as we lowered ourselves over awkward drops that presented a stretch too long for our legs. Being with Marcus, and given the combination of good weather, incredible scenery and the challenge of the terrain, I felt consumed by an enormous sense of joy, and it seemed no time before we'd reached the col between the two Munro tops – only to have to begin another steep climb upwards. After ascending a smaller top, to our right a fantastically dramatic gash – the Eag Dubh gully – split the ridge. We paused momentarily to peer down and marvel at all the fallen rocks and rubble that had been weathered away and were now lying strewn on the corrie floor. I turned my face from the dark, shadowy confines of the gully and continued the trail, so brightly illumined in sunshine, to the height of Sgurr Mor. It felt good to be up here, to be part of nature's glorious mountain canvas. We'd done it together, me and my boy, and we couldn't stop grinning foolishly at each other, flushed with pleasure at our achievement.
From our second summit Liathach appeared even more imposing. My eyes remained transfixed on this isolated bastion with its precipitous walls. I knew one day I'd have to climb it, to satiate my curiosity. Beyond it were even more jagged tops. Land dressed in purple and deep-blue hues swept away into the distance to merge with the heated haze of the day and vastness of the sky, and I surrendered myself to the magic of the silence and beauty. Turning through 180 degrees, I gazed upon the Dundonnell and Fisherfield Hills, yawning off to the north. We sat quietly together, Marcus and I, tired but satisfied by the physical challenge.
We would have been content to stay there on the mountain's peak, but we still had to tackle the three pinnacles, so off we sauntered towards the Horns, with Beinn Dearg and Beinn Eighe as their backdrop. Scrambling over the airy sandstone towers held an attraction of its own. It was basically easy rock climbing and added an element of real fun to the day. Finding foot and hand holds with natural ease, Marcus scrambled up and down the rocky architecture of the Horns, loving every second of it. The warm wind blew more gustily, but, unfazed, he continued his route-finding with the utmost confidence, and his beaming smile as we arrived on the final pillar said more than the spoken word – almost.
'Can I call Dad?' he asked.
'Yeah, course you can,' I nodded, handing him my mobile.
'Dad was up here a few weeks ago, but he told me he didn't manage the Horns because his legs were too tired for it. I can't wait to tell him I've done them,' he said gleefully.
'Dad! Guess where I am?' sang Marcus. 'I'm on the last Horn on Beinn Alligin.'
'Well done, Son,' I heard his father say, 'I'm going to have to attempt it again then.'
I'd met Marcus's dad in late autumn 1996, a couple of months after Mum's diagnosis. My moods had been low and so, in need of distraction, I had taken on extra evening shifts at the pub – and going by my track record it was far better that I was working and not drinking. It was there that I met the man who would father my first child. He looked like a young Charlie Sheen: his short, dark hair spiked at the front, almond-shaped green eyes and a perfectly proportioned, straight nose. He was interesting and as we chatted more I discovered he lived down the lane opposite Mum's flat, in a house tucked in behind high walls and trees next to the river. A whirlwind romance between us started out well and developed into a Christmas proposal of marriage at the summit of a local hill. But as months passed and my mum became sicker – more quickly than any of us thought possible – that whirlwind relationship creaked under the strain.
'Why is it that people always let you down and relationships end up pear-shaped? How am I going to live without you? It's always you I come to for advice. What is the point in anything!' I blurted as I approached my startled mother, who had been reading a book quietly by the fire.
'Oh Sarah, you must promise me that you won't give up,' Mum said as she stood up and held me to her.
'You're the only person I know I can trust,' I whined, as she stroked my hair and kept me in her arms. Just then my boyfriend walked into the room.
'You look after this girl,' Mum said to him. 'She's been starved of affection all her life.' We gently unlocked our embrace. My boyfriend looked suitably embarrassed, but I wasn't worried about him. All I could think about was how terrified I was to lose my mother from my life.
Coming off the final ridge felt rough on my knees, but I watched with pride as Marcus skipped and bounced his way downwards. Stopping in his flight, he turned to look at me, his face all tanned. 'Mum, I really enjoyed myself today,' he said. I beamed back, and with that he bounded off again. Marcus and I had always been close, and it was wonderful to be able to share these experiences with him.
Left alone with my thoughts for a moment, my mind wandered back to a conversation I'd had with a random stranger I'd walked this same bit of trail with weeks earlier.
'I want to climb Kilimanjaro. It's the highest free-standing mountain in Africa. It borders Tanzania and Kenya . . .' When he had said that I'd immediately thought of Mum. She had lived in Kenya as a kid. An idea started to take form.
As Marcus and I neared the bridge and the path that would return us to the car, I was convincing myself more and more that it should be me making the trip to Africa. If I felt released from the burden of my grief on the heathers and hills at home, maybe Kilimanjaro would expunge it for ever. I could climb that mountain as a personal tribute to Mum – and raise some money for charity too. It made sense.
'Hey, Mum,' Marcus called, breaking my train of thought. 'I found a stone and it's got a smiling face!' He pressed it into my hand; sure enough, iron oxide within the rock had created the illusion of two eyes and a smile. Perceiving it as a good omen, I took it home as a reminder of our day. I was now a woman with a plan.
### CHAPTER SIX
### Divergent Paths
_Meall Fuar-mhonaidh — The Cold Rounded Hill, February 2010_
My guidelines for life had always been the memories of conversations I'd had with Mum. ' _Just keep busy!_ ' she had advised, and for the most part I had made sure I was occupied enough, always setting myself new goals. In 2008 I'd translated thought into action and signed up to join the Marie Curie Fundraising Team on their June 2010 Kilimanjaro trek. That gave me two years, the first to complete teaching probation and the second to fundraise.
Asking people for sponsorship was something I'd never been comfortable with, so I figured the best way to get people to part with money would be to run raffles at two dances I'd planned. I'd put all my energy into the project: sweet-talking local businesses for prizes in exchange for publicity on fliers, tickets and any newspaper articles that were published. People I didn't even know were kind and wanted to help. I'd been sponsored for all my trek clothing by an outdoors shop, all the tickets were printed by a proper printing press, gratis, and the upmarket hotel that gave me use of their dance hall also offered a weekend break as a raffle prize. And there were all sorts of other fabulous donations too: a helicopter flight, a large cash prize, various vouchers, a spa weekend – the list went on, and the generosity amazed me.
All that organisation had been difficult, but the real hard yards were selling the dance and raffle tickets – literally – as I knocked on almost every door in town, sometimes taking my children when I had nobody to look after them. But it was worthwhile because the townsfolk were big-hearted and we raised thousands of pounds for the charity.
With my hectic fundraising efforts over, I could turn my attention back to the main trouble in my life: my turbulent marriage. At the start of 2010, I finally made the decision it was over.
The final straw had been when Sam and I had taken the boys to Bulgaria for a skiing holiday over New Year. I'd wanted something we could all do together as a family, and I had hoped that skiing every day and sharing the same room with the boys would be just what we needed. It wasn't to be. Sam came up with a range of reasons as to why he couldn't ski: he'd left his jacket at home, his knees were hurting, he had a bad headache. Then came New Year. We enjoyed dinner and entertainment at our hotel then walked down to the main square in Bansko for midnight, where we set off sky lanterns. After a firework display we returned to the hotel. I took the boys to our mezzanine-style room, but Sam decided the night wasn't over and went back out. From experience, I knew this wasn't going to end well.
At seven-twenty on New Year's morning the sound of metal skittering off the edge of the hotel door lock alerted me to Sam's arrival. It took him ages to get the key into its hole and then several attempts to turn it the right way to open the door. In any other circumstances it would have been funny. He staggered in. Finally I heard him slump down onto one of the beds and the clickety-clack as the plastic ends of his laces bounced off the leather kilt shoes. Untying them was a battle beyond him, and he cursed incomprehensibly in his Glaswegian accent before giving up and crashing out fully clothed. Snores to shake a continent were my cue to get up and leave. I woke the boys, and we dressed quickly and quietly before creeping silently out of the room.
Walking along the empty streets of Bansko on that brisk and cold first day of 2010 was when I finally accepted that I was going to have to be brave and change my situation. This time, instead of making excuses to myself as to why we ought to stay together, I silently ran through the reasons to justify why we should part. _Grandad would have been disappointed that I was unable to stick it out 'for better or for worse', but I wasn't born in his generation. And anyway, he isn't here any more; and, no, it isn't my ideal to be a single parent, but better that than stuck in a troubled marriage._ Still I felt a crushing sense of guilt and failure, but up ahead, fixed firmly in sight, my two boys bobbed along the dusty Bansko pavements. They didn't know what went on between Sam and me, but they were growing up fast and it wouldn't be long before they saw and heard more than they should. My kids deserved better, all of us did. And so I resolved that the marriage was over.
Ending a relationship is never easy, even when you know it's the right thing to do. But when the emotional strain got too much, I found time to retreat to the hills. I started to realise, though, that on occasion my lack of experience bordered on the reckless. There was nothing else for it: if I was going to continue to go out on my own, I needed to take the basic course in navigation skills I'd been meaning to do for so long.
The class I attended was held once a week for six weeks at Drumnadrochit, a short drive from Inverness along the A82 towards Fort William, where a bunch of us stumbled around a field with black binbags over our heads – we must've looked crazy. We had to take a bearing on a given landmark, set our compass, then walk on the bearing to reach it. I learnt to work out how many paces I take to cover 100 metres, which for me, by the way, is 66 on average ground, meaning I can travel a distance of about five kilometres an hour. But, more importantly, I learnt enough to be able to go out hillwalking on my own without having to wait for brilliant blue-sky days or to rely on others to do the navigation – because, truthfully, up until now I'd been mostly winging it.
Drumnadrochit also happened to be an access point for Meall Fuar-mhonaidh, the hill that had been a part of the familiar landscape of my childhood. I'd been able to see it from the beach at the end of our garden and from the top of Cromal Mount out front of our house. Of all the mountains in the surrounding landscape it was this one that was most distinctive, shaped like a great big Christmas pudding, missing only a sprig of holly on its top. It was a place where Mum's fiancé Gerry had liked to take her on days out – not up the hill itself, but along Loch Ness – and the area had remained a favourite. Though I was only little, I remember Gerry would borrow my grandad's orange car to drive Mum and me there, and it was one of these days – just as we motored through Inverness, passing the big whisky distillery on the left with its blackened stone walls and, on the right, huge dirty gas tanks beyond the railway tracks – that Gerry gave me a present. My mum passed it back to me: _A Treasury of Nursery Rhymes_ , illustrated by Hilda Boswell. I loved it – it remained one of my most treasured childhood possessions – and I loved him.
So although I knew Meall Fuar-mhonaidh well, and it meant a great deal to me, I'd never actually walked up it. It was a Monday morning of the last week in February 2010 when I finally went to the top. That day, when I looked out of our window, the sky was such a brilliant blue it was as though it had somehow detached itself from above Cyprus or some other Mediterranean land. I checked the local forecast: it was going to remain a settled day in the north-east.
I'd had a thoroughly hideous weekend off the back of an equally rotten week and getting out onto the hillside sounded like exactly what I needed.
'Who'd like to go up the Christmas pudding hill?' I called, rousing the boys from their sleep.
'MEEE!' they cried in unison from behind closed doors, and I laughed.
'Come on then. Let's get ready and we'll go.'
I was glad they wanted to come because it would give us a chance to talk through everything that had been happening. I had to make sure they were okay about Sam not living with us any more.
In their cosy salopettes the boys struck out with me along the footpath next to a small stream. Silver birches overcrowded each bank, their dark branches interlocking above the gurgling flow, and fallen limbs forming natural bridges. The air was crisp and clear, with just a hint of the scent of leaf mould where its russet patches remained exposed next to the water. Soft snow creaked underfoot, and moisture that had dripped from grasses overhanging the stream had frozen into icy chandeliers. Leon was a bright-red streak against the white, as he made straight for the bank to kick snow into the icy river, while Marcus, a mirror of the blue sky above, ambled on ahead.
On our right lay unbroken wide, open space under a thick white blanket, spreading towards the lower shoulder of Meall Fuar-mhonaidh. At closer quarters the hill had lost its prominent dome shape; instead the changed perspective gave it the appearance of a whale surfacing for air. And what I had assumed might be a reasonably steep climb all the way to the top was actually going to be a fairly gradual ascent. Perhaps I needed to try to look at life from a different angle too. Yes, I was upset that I'd finally had to call it a day with Sam, and the break-up had somehow renewed my grief for my mother – she was the one I wanted to be able to turn to through all of this – but my health was good, my children fantastic and we had a roof over our heads. And I _was_ grateful for these things, sometimes I just felt too overwhelmed by everything to remember all that we had going for us.
The stream's chanting grew softer as the path curved away and crossed a track through two gates. We walked through woodland of birch and hazel, feeling through the soles of our boots the network of roots bulging like varicose veins under old, papery skin. As we climbed more steeply the trees began to thin out, until all at once we were completely surrounded by wintry moorland. We came upon a stile over deer fencing so big that it was like a climbing frame, and the boys took great pleasure in scaling its grand height to stand on the wooden platform like kings of the castle. They threw snowballs, and one just missed me; on hitting the ground it fractured into tiny pieces that scattered along the ice-encrusted surface.
There wasn't a whisper of wind when I crouched down and picked up a handful of icy crystals. Pouring them slowly through my fingers, I watched and listened as their glittery brilliance chinked and tinkled back onto the sugary terrain. It was mesmerising. On the mountains my senses were honed; light made everything appear brighter and sounds much richer. My mother would have loved this, I knew. I wished we'd come up here together, revisiting one of the places we both associated so much with Gerry. As usual, thinking of her brought an ache to my heart. I pulled my mobile phone from my pocket and wrote her a text – it didn't matter that she could never receive it. I pretended that she would.
When it came to the opposite sex, it seemed I was genetically hardwired for disastrous affairs of the heart – Mum's record had been no better. But of her failed romantic entanglements there was one she had told me to try to learn from.
I was in my teens and I'd just returned home from an adventure holiday in the South of France. I hadn't phoned ahead, as I'd wanted my arrival to be a surprise. Mum wasn't at the flat so I set off on foot to my grandparents' house, anticipating the welcome and the quiz on how I'd got on. Instead I was greeted with the image of my mother lying curled up and shaking on their sofa.
'What's wrong?' I questioned Gran when Mum didn't even look at me.
'She's been like that all weekend,' Gran said. My grandad took me through to the kitchen.
'Your mum was beaten up on Friday.'
'What! Why?'
'All I can tell you is that she arrived here at the back door, her face so covered in blood I didn't recognise her at first. The doctor came out at two in the morning. Stones ingrained into the palms of her hands had to be scrubbed out and then she was taken to Raigmore for X-rays. Her head had been knocked so hard off a wall she was temporarily deafened in one ear. If two men hadn't come out of the pub and stopped it God knows what would have happened.'
The attacker had been Mum's boyfriend, but she didn't press charges. Instead, two months later she succumbed to his apologies and flowers and announced they were seeing each other again. 'It wasn't him, it was the drink,' she said. 'It won't happen again. He's said he'll change.'
Of course, he did not. And he went on to cheat. Mum told me she'd seen them together, walking by the riverside hand in hand. 'Hi, Jen,' he'd said, all casual, like she was nothing special to him. At home, in a fit of rage, she shredded everything he'd ever given her. She took the tattered, broken items and scattered them over his garden. It was the one and only time I'd ever known her to act so wildly. When he tried to win her round again, this time, for her, it was over.
We all have our own threshold of how far we are prepared to go before the game is finally up. People don't change unless they really want to, and, like my mum, I'd had to learn this lesson the hard way too. I'd kept trying with Sam, because he'd kept telling me he loved me, and I wanted to believe that. I found myself stuck in the same sort of relationship as my mother, repeating her mistakes. When it came down to the choice of me or a bottle, I'd never stood much chance, I was almost used to that. But when I discovered a mass of online messages to a string of other women, I should have realised I was never going to be enough.
As I climbed over the stile I lingered briefly to catch a glimpse of the loch, most of it hidden by the intervening slopes. Once we gained the ridge we stayed on its left, the way clearly marked as we followed footprints trodden deep into the snow. Leon wanted to make his own tracks but quickly gave up. 'It's too hard work,' he said as he flumped down, picked up some snow and ate it.
After we passed a knobble of rock there was a slight dip before we began a more direct ascent of the hill, and the last 100 metres were steeper still, but posed no difficulty as old, sunken footprints carved out a snowy staircase. Marcus sprang ahead. I was lost in admiration when, somewhere from behind, a familiar, high-pitched screech like a gull interrupted my thoughts.
' _Muum!_ ' Leon squawked, 'my legs are going to fall off. I have to stop.'
'We've really not got far to go, we're almost at the top. Come on, you've done brilliantly well. Keep going!' I said, remembering that I'd moaned over less on walks with my own mum. Tugging at her trouser leg I'd repeatedly asked, 'Are we nearly there yet?' or 'How far now?' or 'When will we be home?' and she'd had to answer with small untruths like 'Not far now,' or 'Just around the next corner' or 'It won't be long.' By my standards, Leon was doing great. Mum would have been proud of him.
I found myself wishing, not for the first time, that my children had had the chance to know her. That she'd had the chance to know them. I'd always talked to them about her. And when they were small they'd at least known the sound of her voice because I'd played them bedtime stories she'd recorded on tapes. That had meant so much to me. But these much-loved treasures were lost to us for ever, all chewed up by the cassette recorder. If she had been here she would have read to them all the time. She would have played games, sung to them while playing her guitar, taken them for walks or to the park – and I would never have had to worry about arranging childcare, she'd have been stealing them off me! She would have looked after them the same way my grandparents had done for her with me. Remembering how much I had benefited from that close and important relationship I'd had with my own grandparents made me regret even more how my boys were missing out.
Reaching a cairn on what appeared to be the summit, we noticed that the true top lay further west. 'I'm staying here. Collect me on the way back,' stated Leon. Marcus and I exchanged a look and agreed to carry on without him, knowing full well he'd soon follow. We dropped down a short distance before we were traipsing up once again to reach a flat plateau. It didn't take much more than five to ten minutes to arrive at the real peak, and as we neared the piled-up stones a fast crunching over snow behind us announced the imminent arrival of the one and only Leon. Spinning round, we cheered, and a gigantic smile spread across his face as I welcomed him into my arms.
It had been warm work reaching the top, though now we'd stopped we cooled rapidly, so we stayed at the summit only long enough to eat rolls, slug freezing water and take in the panoramic scene in silence. In every direction we were surrounded by a glut of gleaming white peaks, row after row rippling off into the distance, pristine, angled outlines against a deep-blue sky. The clean, dry air brought down from Arctic regions made views down the Great Glen to Ben Nevis appear stunningly clear. Snow had transformed the landscape, hiding heathers and grasses, and it was so cold there was not a flying insect to be seen; and yet I felt flooded by the presence of life. We were removed from the chaos and noise that carried on way down beneath us – people buzzing past in cars to-ing and fro-ing from appointments, folk bustling around shops, and workers in offices or on building sites – all oblivious to us up here by ourselves.
'Look,' I said to the boys, pointing, 'you can see Fort George from here.' We'd been to the Fort, and the village where I'd grown up, on our bikes many times. And as much as I had done with my mum, they too loved clambering to the top of Cromal Mount, where we'd look across the smudge of Inverness to the inverted bowl we now stood upon.
As we retraced our steps across the plateau, the race was on between the boys. 'Last one to that lump stinks of tuna!' Marcus yelled. I watched contentedly as they ran. I'd tentatively broached the subject of Sam earlier in the day and they had said they didn't mind that he didn't live with us any more. Listening to their laughter and shrieks escaping into the still air, I was reassured they were indeed all right. And so was I.
The end of my relationship with Sam had marked the beginning of a new and, I hoped, better phase for me. With every mountain I was climbing, I was becoming fitter physically and growing stronger mentally. I'd come to realise since separating that the main reason I'd married Sam was because I'd been afraid to be alone, but it hadn't solved anything. Now I finally understood that being alone could be the more attractive option. My marriage hadn't failed; it had been a mistake from the start. And I hadn't let my children down; they were doing just fine.
And now I was on course for a new, exciting challenge: Kilimanjaro. I felt more resolute than ever that the effort of the journey to its top would help me to put all of my troubles firmly behind me – after all, the name of the mountain's peak itself, Uhuru, meant freedom.
## PHASE TWO
## TROUBLED TRACKS
But helpless Pieces of the Game He plays
Upon this Chequer-board of Nights and Days;
Hither and thither moves, and checks, and slays,
And one by one back in the Closet lays.
_The_ Rubaiyat _of Omar Khayyam, LXIX_
### CHAPTER SEVEN
### Keep Them Close
_Nakara, Tanzania, June 2010_
It was the eve of my flight to Tanzania. Four days earlier I'd been on Braeriach with my children, but now Marcus had been packed off on a school trip to an outdoor-activity centre for a week, and Sam had agreeably upheld his offer to look after Leon. I was on my own at last, sitting on a bed in a hotel room near Edinburgh airport. It was almost liberating in itself to have no responsibilities for the next ten days. After all the changes and upheavals that had been going on, and all the preparation involved for this trip, I finally had the chance to put my foot on the brake for a while.
Clutched in the palm of one hand was a tiny pot containing some of Mum's ashes. I knew she would have loved to return to East Africa, so a piece of her was coming with me. In my other hand was a photograph of the two of us, taken at the top of Cromal Mount, the Fort in the background. As I looked at the image of our smiling faces, feeling the familiar sadness swelling, I thought about the imminent ascent of Kilimanjaro: I knew it was going to be tough, but I had to succeed. It was as though the two things were tied – the grief that I hadn't been able to shake had driven me to take on this challenge, and if anything was going to get me to the top it would be holding tightly to thoughts of Mum, a determination to keep going in her memory. A part of me was also excited to think that I'd be, in a way, following in Gerry's footprints, becoming a big mountain climber.
Lying in the hotel bed too excited to sleep, I fantasised that I'd made it to the mountain's peak and was overlooking Kenyan plains, towards the places of my mother's childhood. In 1960 my grandad had been posted to a place called Lanet, six miles from the town of Nakuru, about 100 miles from Nairobi. The family lived on a military base, and home was a large, red-roofed bungalow with a huge garden. Mum was about eight years old then. I imagined her as a little girl, all prim and proper in her purple and white gingham dress, ready for the journey to school in Nakuru. There was no bus for the dozen or so kids from the army base who attended Lugard, so instead they were transported to school on the back of a one-ton army truck. Five minutes before it was due to leave, a hand-cranked air-raid siren would sound and Mum, along with all the other kids, would belt down the road.
I'd heard the stories so many times when I was younger and had always found them fascinating. It sounded so exciting – so different from my own childhood experiences. Still, growing up in a military background, Mum had had a fairly rigid upbringing, I knew. I wondered if that's what had made her so wild and free-spirited during her teenage years. Wide awake in the darkening airport hotel room, I thought back to my own childhood and the experiences that had played their part in defining the person I became.
I was five, living with my grandparents, and had just started school. My mother was in Aberdeen and had begun a second year of teacher training. I would only see her on holidays, and her commitment to the four-year course was already taking its toll on my long-term, and already fragile, security. I was quite reserved. Going to school was the first time I'd mixed with local children, and this only made me more timid – I may as well have walked into that playground with a big placard emblazoned with the words 'easy target'.
Set back from the road and surrounded by farmland, the old stone school building was situated two miles outside the village, and it was there that a girl, a couple of years my senior, made it her business to terrorise me. Bullying didn't happen every day, but its impact over the course of two years stayed with me. Playtime and home time had always been the worst. Deliberately seeking me out, the robust, red-cheeked girl would order me to the end of the playing field, where she'd humiliate me by pulling up my skirt, teasing that she was going to pull down my pants and tickle me or 'get me' after school; but what hurt a million times more than her threats was when she would tell me that my mother didn't love or care about me. On and on she would taunt.
My grandad and her mother would take turns to do the school run home. I dreaded the ride home with her – sitting next to me on the back seat, she would deliberately squash me against the door, pinch my legs hard and sneakily tug my hair. And then one day I was invited to her home. I'd thought she wanted to be my friend now. We played in some woods out of sight from her house, where I picked up a colourful plastic ball. 'That had poison on it. You'll be dead by tomorrow,' she jeered. She made me follow her to a wooden fence where we sat on top of its stile.
'Kiss me,' she ordered.
'Where?' I asked, feeling hugely afraid of her answer.
'On my mouth,' she said, with unflinching coldness. I obeyed.
After that the prospect of having to go to school made me feel sick. I'd complain to my grandparents that I didn't want to go, that I didn't like it, but I was too frightened to reveal the truth.
When my mum came home on holiday I was her shadow. She'd take me on walks along the beach where broom crackled in the summer heat and sweet coconut-scented gorse grew so wildly, its smell mingling with the salty seaside air. She taught me the difference between these similar species, pointed out dog rose and other wild flowers and plants, or birds like the yellowhammers that flew between the thickets and seabirds like the oystercatchers and curlews. We'd have picnics on white sands round the back of the Fort and play at running away from the waves. We'd search for razor clams and perfectly intact whelks, and stamp on the seaweed to hear it pop. And at home she'd play songs to me on her guitar. All of these things were reassurance that the bully was wrong and that I was loved, but time always passed too quickly and Mum would leave once more for Aberdeen.
Separation was upsetting; I didn't want her to go because her departure also spelt back to school and into the clutches of my tormentor. Eventually I was moved to the village school, but the damage was already done and I was shy and awkward around the other kids for the rest of my school years, distrusting people's motives while also being convinced I was unlikeable. I spent a fairly solitary and lonely childhood, deriving happiness from the company of the adults at home and by losing myself in drawing and painting – mainly on the walls, which did not go down a storm with Gran.
Later, during my teen years, my grandad would sometimes ask why I was either up or down and never a happy medium. I used to tell him it was because of my inherent artistic temperament, and while I decided there probably was some truth in that, I also believed I'd been conditioned by those early experiences: swinging between the extremes of feeling great joy when Mum was home from college, to deep term-time doom. Where my mother had spent her teens rebelling against her parents in pursuit of independence, I had wanted to stay as close to her as possible.
It was an early start the next day, but it felt full of promise. And after an eight-hour flight the aircraft began its descent to Dar es Salaam, Tanzania. As we reached a height of 5,000 metres, I realised that in a few days my feet – like those belonging to the other 35,000 people who also come to climb this mountain each year – would be trekking higher than this. It seemed both fantastic and implausible.
After immigration and baggage reclaim I exited the airport and found my way to other waiting members of the Marie Curie fundraising team. I'd been to Africa before, visiting Malawi and Egypt, but the wall of heat and the monstrously proportioned bugs still came as a surprise. The driver launched our rucksacks and holdalls onto the roof of our transport, a minibus that looked like the Mystery Machine from _Scooby-Doo_ , and funnily enough our short and stocky, bespectacled trek doctor, Emma, with her auburn fringed bob, bore more than a passing resemblance to Velma. There weren't enough seats in the bus for everyone, so I travelled to the hotel by Jeep with Marty – who would also be my roomie during the trip. Marty and I came from the same town and we knew each other a little, so it was nice to have a familiar face to share a hotel room and tent with.
We set off and almost immediately petrol fumes and the stench of rotting animal flesh rushed in through the open windows, filling my nostrils and making me feel very ill. But as the countryside opened up, these odours were replaced by the smells of burning bush from local farms. For two hours the bumpy ride continued until finally we arrived at Nakara Hotel, which, at an elevation of around 1,500 metres, was higher than Ben Nevis. The queasiness passed and the intoxicating and heady scent of jasmine permeated the night air. Above us a sumptuous, deep-velvet sky was punctuated by a scattering of stars. I already felt better.
Last down for breakfast, I was dismayed to find that everything was pretty much gone, so I had a juice then took a short walk before our briefing. I'd read up on high-altitude trekking and was aware of the risks involved; these were made more stark by our medic, Emma. 'The higher you go the more important it is to drink plenty of fluids. Dehydration is dangerous on its own, but it can also mask or worsen altitude sickness. If your pee is dark, you aren't drinking enough. Your headache could lead to cerebral oedema and coughs may be the precursor to pulmonary oedema. Both are life-threatening and in either event we need to get you down off the mountain quickly.'
Later, all twenty-five of us set out for an acclimatisation walk around the local area along the soft, reddish-brown soil road which, because of the rain, seemed to have turned into a kind of wet clay. Streets were simply openings in thick plantations of maize or banana trees. Winding paths the width of a foot supported the occasional kiosk, whose goods were random and sparse – three of this, four of that. Soaps and washing powder sold alongside half a dozen tomatoes and an avocado, all ensconced behind a wire mesh to stop thieves. I bought a piece of sugar cane from a young boy for a dollar. Its stick-like shape and fibrous texture reminded me of the rhubarb Grandad once grew in his garden, but my taste buds felt robbed because it wasn't the sugary treat I'd expected. We walked on through luscious green foliage above a rock-strewn river, and beyond its bank was a field being worked by women, breaking the soil with antiquated scythes. It looked an arduous task under the blaze of the afternoon sun, but on seeing us they raised their hands to wave and flashed a smile.
Musky wood and a fresh, earthy scent from overnight rain filled the air as we passed by ramshackle homes, tucked in behind the trees, almost completely hidden from sight. At the Marangu Gate, where our trek would end, Humphrey, our local guide, pointed out the Chyulu Hills to the north and Lake Chala to the south-east, the Kenyan border tucked in behind.
As our trekking group walked along winding paths I saw what I assumed was a shallow grave. The low bed of rocks was fringed with grasses that had since dried out, some plants grew in between and at its head was a colourful wreath and a crude wooden cross. When I asked, Humphrey confirmed my thoughts. The family had buried their loved one close by. I could understand why they would want to do that. I held tightly on to the tiny pot of Mum's ashes in my pocket.
### CHAPTER EIGHT
### Where the Wind Blows
_Naro Moru Gate to Simba Camp, 2,650 metres_ /
_Simba Camp to Kikelewa, 3,678 metres, June 2010_
Our group's Land Cruisers left the hotel early and headed in a northerly direction to bring us around Mount Kilimanjaro to Rongai and the Naro Moru Gate at an elevation of 1,950 metres. It was a bone-shaking ride, but the scenery provided distraction. A car passed, toiling on the slight incline, and my eyes almost popped out of their sockets as I took in the vision of this clapped-out hunk of rust bursting at its seams. It was jammed full of bodies, five people crammed into the front and seven or eight in the rear. I felt claustrophobic just looking at them.
Women wearing vibrant fabrics and carrying impossibly huge cargoes of bananas on their heads were in colourful contrast to the dry and dusty landscape, and a man in a checked shirt and dirty, rag-hemmed trousers laboured uphill on his bike loaded with blue-plastic gallon containers full of water. There were market stalls at the roadside of each small township we passed through, where men sat under shade on rickety wooden stools chatting over a drink, and women stood at the road with baskets of beef tomatoes and sacks of maize to sell.
On arrival at the Gate we were given cucumber sandwiches and tea. Fellow trekkers got to know each other better as they chatted inside the simple concrete gazebo, but I shied away. Fuelled by my fear of rejection, a wave of regret for having come as part of a large group flowed through me; feeling uncomfortable, I wandered off and watched with interest as a straggly queue of around forty porters waited to have loads weighed. Seeing Humphrey, I asked how the expedition was organised. 'Well, the team has six guides, and they carry their own kit, nothing else. The tent boys carry the tents, set them up and clean them out each day. The three cooks have to carry only the eggs, but once the eggs run out they have to help carry other stuff. And the porters have to carry the clients' kit, their own and whatever else is required,' he said. It didn't seem possible that these short, wiry men would carry so much.
Before we began the trek proper I thought I'd better go for a posh pee (by this I mean enjoyment of a pee in the comfort of a facility with a flushing loo, a basin to wash in, warm water and maybe a paper towel to dry my hands), but what I was directed to was something far removed from my expectations: as surprising as the porters' loads was my first encounter with the long-drop. Several metres from the gazebo was a shaky wooden structure; inside the cubicle was a rectangular opening cut into its slatted floor from which a powerful stench was emanating from the pit underneath. I gagged but managed to urinate, and I reckon it was at this point that my bowels beat an immediate retreat into hibernation. I wondered if I would survive for ten whole days without doing a poo and shuddered as I imagined the headaches and malaise I'd suffer as a consequence of my body absorbing its own toxins. There was no two shades of shit about it, the long-drops were a stinky, messy and off-putting affair, but better them than nothing at all.
Gulping at the fresh air I practically fell out of the wooden cubicle, straightened myself up and rejoined the group, which was getting organised to begin the trek. At the entrance to the Kilimanjaro National Park was a warning sign that quickly replaced any toileting fears I'd had. A series of blackened planks nailed across two wooden posts and engraved with instructions alerted trekkers on Points to Remember (nine in all) when climbing to high altitude; they all seemed to indicate that what we were about to do was pretty risky. But the unease those rules stirred in me also melted away as we journeyed up through lush tropical rainforest and fields of maize to the chorus of cicadas.
Always preferring my own company when walking, I trailed behind the group till they were out of sight and their chatter was replaced by the sound of my feet scuffing the ground; only Humphrey walked behind me. We'd already passed slender firs with pale-green needles, a delicate relative of the more coarse Scots pines that were so familiar to me. But it was amid the olive, juniper and tall, twisting cedar trees that the rustling of leaves alerted us to a small troop of piebald colobus monkeys who peered down from branches in curiosity.
Somewhere I could hear the soft sound of grasses being trampled and a tapping noise. It grew louder and, turning to locate its source, I spotted a little boy. He appeared through tall green shoots of maize chasing after a large, plastic blue lid that he thwacked merrily with his stick. This happy little lad, who had emerged from the nearby collection of pitiful-looking huts with his makeshift toy, rushed on without giving me a second glance. I thought of my own children and wished they could be here with me. I wanted them to see how much pleasure could be derived from such simple things as a lid and a stick, but mostly I wanted them here because I realised how much I missed them.
At our first camp, clouds drifted apart fleetingly to give teasing views of one of Kilimanjaro's three main volcanic cones. Mawenzi's dark mass was impressive, its brittle, jagged ridge like a thorny crown. The moment of awe was disrupted by loud banter, the other members of the group chatting excitedly about their adventure. I felt apart from their noise and jovialities; I was here to resolve my inner turmoil. But while I was craving some solitude, I also knew that deep down I really did also want to join in with them. At once I felt both resentful and envious of these people with their carefree, light-hearted ways. Feeling my frustration and anger arising within, I wanted to scream at the world from the top of my lungs.
'Hot water?' a voice called from outside the tent the next morning.
'Yeah, _asante_ ,' I mumbled.
' _Hakuna matata_.'
Six o'clock awakenings and that minimal exchange of conversation would become the norm for the duration of our time on the mountain. My roomie Marty and I stirred ourselves to wakefulness and cleaned up using the small Tupperware bowls of water we'd each been provided with. It wasn't a big deal to either of us that we didn't know each other that well. We accepted the situation for what it was, which was a good job, because turning our backs on each other to wash then dress was all the privacy we were going to get. After packing our gear we headed to the canteen tent, where breakfast consisted of a hard-boiled egg, whose yolk appeared more white than yellow, and a radioactive bright-pink sausage; we looked dubiously at our plates.
It was the second day of the trek and we were walking to camp two, Kikelewa – a significant height gain of 1,028 metres. On the way, while others conversed about farts, their lack of bowel movements, and the celebrities who had climbed Kilimanjaro for the Comic Relief charity months earlier, I fretted about altitude sickness.
' _Pole, pole,_ ' Humphrey said.
'What's that mean?' I asked.
'Slowly, slowly. You'll acclimatise better.'
So I stayed at the back of our group and maintained a deliberately slow pace, thinking Humphrey's advice was an iron-clad guarantee against mountain sickness; and he kept me company.
'What's that plant? . . . What do you call this? . . . What do you call that?' On and on I went.
'The buds of this plant are special; this variety is collected and infused in water then used to swill around the mouth to ease dental pain. It has antiseptic properties,' he said. 'There are many plants and trees that have medicinal value. This one helps stomach upsets, we use the aerial root of the tree and boil it, then when it cools it is drunk,' Humphrey said, touching the bark of a fig tree. Flat clumps of everlastings spread in abundance; their white petals with yellow centres reminded me of daisies.
'What are these?' I asked, pointing at seven pretty red flowers supported on a wiry stem, their heads closed because of heavy moisture in the air.
'They are gladiolus, named after one of the first Westerners to climb this mountain. From now on plants will become less,' Humphrey answered.
'What's the name of that bird with the pale streaks on its face?'
'It's a streaky seedeater. Its home is on the moorland.'
Asking about the birds and plants was just one of my ways to remember Mum out loud, and with great patience Humphrey did his best to answer the barrage of questions. I was feeling good; both my body and my mind were invigorated by the exercise and stimulated by the unfamiliar.
Trees and larger plants began to diminish as we plodded higher. We'd been on the move for about four hours through damp mists with no views, but by midday the full force of the sun blazed overhead. I stopped for a drink and, on turning round to take in the scenery, I was bewitched by the most incredible sight. Tropical clouds were a thick and fluffy sea of white colliding silently into the side of the mountain, the sky above a perfect deep blue. I felt as if I were in a secret land in the sky and had to point out the phenomenon to Marty.
'Oh aye, cloud inversion. You sometimes see that on the mountains at home. Pretty cool, isn't it?' he said, and carried on walking. But his matter-of-factness didn't trash my sense of awe; it _was_ really cool, and I tried to walk backwards for a bit to enjoy it for longer.
When we stopped to break, after stashing my daysack in the canteen tent's shade, I found a spot where I could sit and admire the ocean of white. Nearby, a white-naped raven strutted about on top of Third Cave – a large lava tube – no doubt waiting expectantly for the leftovers from our lunch.
After eating we rested another half-hour, giving the porters time to dismantle the tent, wash up pots and dishes and filter water from the river to refill our bottles. It made me feel guilty that they were doing all the work while all we had to do was eat the food we were given and walk. I took a short stretch to Third Cave and sat away from everyone until it was time to resume the trek. I watched the raven, still on its own, still keenly waiting for us to move on so that it could scavenge for scraps. It made me think about the elusive heron that Mum and I had liked to catch a glimpse of on the river at home.
Back on the trail I walked alone until an older fellow trekker, Robby, joined me. I hadn't been desirous of company but he was Liverpudlian, and that reminded me of Gran. I asked if he was familiar with Bootle – the town where Gran had grown up, had met my grandad, and where Mum had been born. He was. And, having found an admittedly tenuous link to my family's history, I allowed myself to relax with him a little. It was as we were walking and playing our made-up game, 'Name That Hum', that we began to notice the effects of altitude, and the game didn't last long because trying to think up trivia was too taxing.
It seemed the rest of the group was experiencing it too; we soon caught up with three of the younger women from our group, none of them feeling well. Two of them had suffered terrible diarrhoea during the night and were now feeling nauseous and had headaches. It surprised me that people were becoming ill so soon into the trek. Robby and I walked on in a serious silence until, unexpectedly, the most giant fart erupted from his ass, but when I then let one rip too we both fell about the mountainside laughing.
After stopping for a short break in the afternoon, Marty joined Robby and me to walk at the rear.
'What _are_ you doing?' I asked Marty, as I turned round to see him standing stock-still and pushing at his belly.
'I'm trying to make all my farts come out at once . . .' he strained. 'That's why I came to the back,' he gasped, his face reddening with the exertion. All three of us started howling with laughter. The wind, jokes and giggles, almost to the point of hysteria, all continued until we reached our evening campsite; we blamed it on the altitude.
After an eight-hour trek, the ritual unpacking and dinner, the day ended with a hot chocolate under the night sky. To the north the sky was comparatively empty, but southwards a plethora of stars were splattered on a black-canvas sky like a Jackson Pollock painting, stunning, bright and clear. As I gazed up at those other worlds I reflected on the day: it had been better, thanks to Robby and Marty. I'd take each day as I found it. _Yeah, go with the wind!_ the voice in my head giggled. _Enjoy the hill. 'Cause if your mum was here she'd be loving all this!_
### CHAPTER NINE
### Where There's a Will There's a Way
_Kikelewa to Mawenzi Tarn, 4,295 metres_ /
_Mawenzi to Kibo, 4,700 metres, June 2010_
Marty woke me around three in the morning when he got up for a pee. A pounding headache that had settled in earlier now burned behind my right eye. I'd also managed to get a lump of beef stuck between my teeth at dinner which I hadn't been able to dislodge when I'd brushed, and now the gum around my tooth throbbed away uncomfortably. Toothache was no fun under normal circumstances, let alone up a remote mountain miles from help. But I'd have been more worried about it if I didn't already know that dental issues at altitude were not unusual – even fillings can shrink. And, anyway, it was far from my only concern. The number of people with mild symptoms of acute mountain sickness (AMS) had risen from three to five in one day, and I was afraid that I might become the sixth. During the remaining hours of the night, sleep was fractured as exhaustion battled against my aches and paranoia. Those hours had been rough, but by first light my tooth had settled and, after giving myself a thorough wash, I put on some clean clothes and felt fresher. Even breakfast was enjoyable, and after some quick stretching exercises our group worked its way up into the Highland Desert Zone.
Everlastings carpeted the ground between clumps of stubby yellow grasses in shades of patchy greens and browns like dried tobacco. The winding trail ascended gradually until even the green began to disappear and lava tubes, like giant wormholes, became the focus of interest. Behind lay the ocean of cloud, but up ahead the dark outline of Mawenzi's broken ridge punctured the skyline, like an African version of Skye's rocky Cuillin range. Up ahead was the site of our next camp.
It had taken four hours, but we arrived over a knoll and dropped down into a hollow bowl during early afternoon. Mawenzi Tarn lay still, its shrunken egg shape framed by a halo of iridescent green algae. It looked toxic, but I was told it was going to be the source of our water supply. Nice. As we neared the tents we were welcomed with an amazing reception from our porters, a merry ragbag band dressed in bright pinks and blues, one trouser leg up, one trouser leg down. Arms raised and swaying like a gospel choir, they danced and sang their Kilimanjaro song in joyous harmony. Singing to us, singing to the mountain – hoping it would bless us and keep us all safe from harm and the dangers of high altitude.
It was after lunch, when we were briefed on the next morning's itinerary, that I felt a rush of anticipation. Summit night, the climax of our trip, was a day closer than I'd realised. It was time to exorcise my demons. This was what I'd come for . . .
An afternoon's acclimatisation walk on to Mawenzi took us up another couple of hundred metres onto its east ridge over scree and dark rock. Akin to Cuillin gabbro, it felt like Velcro underfoot and was great to scramble over. Sitting on the ridge under the sun's glare I kept to myself; a dull ache at the back of my head was making me feel more unsociable than usual. As I looked down into the bowl, beneath sweeping volcanic slopes, our tents appeared like bright flecks of orange punctuation on a dark and grainy page. We were higher than anyone in the world below could see, even higher than the clouds. Absorbed in the rugged beauty, it was a while before I noticed that my headache had begun to shift, but when I did my mood instantly buoyed and remained that way when we got back down to camp.
As the sun lowered on the horizon the temperature dropped sharply, so I layered up and, with half an hour to spare before dinner, I took myself off over the rocky outcrop behind our tent. Finding a flattish ledge on a finger between two gullies, I sat down to soak in the view. Gentle hues in pastel shades of pink merged softly with yellows and tangerine. Higher above, cornflower blue deepened into indigo and I marvelled at all the wonder of this colour, concealed by cauliflower clouds from life beneath. A waxing moon hung low in the sky, and under my watchful stare it sank lower still. In the peaceful twilight, I was alone with my memories.
Though Mum's brain tumour had been removed, the disease had rampaged through her and its symptoms presented themselves too quickly. It was August 1997, seven weeks before she died. We sat with a brandy each and we talked. I'd had the idea of asking her to tape some stories and nursery rhymes – the tapes that ended up meaning so much to both me and my sons – but I hadn't found the right time to bring it up. Her persistent cough was getting worse, so I knew I had to ask soon, but it was only after a little Dutch courage that I was able to broach it.
'What do you want me to do that for?'
'So that if I ever have a child I can play the tape to them, so that they'll know who their grandma is.' I hadn't wanted to upset her, but she smiled back at me.
'I think that's a lovely idea!'
'Awesome! We can get tapes on the weekend,' I said, as my mother suddenly made a sharp intake of breath and her face creased. 'What's wrong? Where do you hurt?'
'It's my back this time,' she groaned. Touching her gently, I willed her not to hurt – and the discomfort eased.
'Imagine if I really did have healing hands?' I said. 'I just want to fix you.'
'You have in a way,' she replied. I still don't know what she meant by that, but, the pain having passed, at the time it didn't feel right to press her.
Early the following morning Mum's coughing woke me. I walked into the living room to check that she was all right. She was sitting on the sofa, wrapped in her purple dressing gown, staring fixedly at the television.
'It makes you want to cry,' she said, not shifting her gaze.
I looked at the flickering images being broadcast. News was being reported live that Princess Diana had died at four in the morning following a traffic accident in Paris.
'Life is a lottery,' Mum added sadly.
I turned my attention back to her as she wiped the end of her nose with a tissue, and I wondered if she was also thinking of herself.
I'd been worried about Marty during the night: his breathing was terrible, he'd been coughing and was restless. I hadn't thought it was the best idea to smoke a big reefer every night before crashing out (scoring some grass from a local had been first on his to-do list on arrival at our hotel), and it sounded as though I was right. Complaining about a headache, he had gone to find the medic, and I fell back into broken sleep till he returned.
'What did Emma say?' I asked.
'She gave me Diamox.' His perfunctory response made me realise just how much he was suffering – ordinarily Marty could talk the hind legs off not just one but a whole herd of donkeys.
'Hope it works for you,' I said in sympathy. But, unless his taking it had the placebo effect, I had my doubts. While the drug is an aid to acclimatisation, there's no guarantee that it will either prevent or relieve symptoms of AMS. My own doctor had told me it wasn't necessary to take it, but that if I did want to use it, it would be best to begin taking the pills a few days before I started climbing, so the drug was already at work in the system. I chose not to take it.
'It's fucking freezing, I'm putting more layers on.' Now I was wearing two pairs of thick socks, a T-shirt, fleece, windstopper and both my down jackets.
Marty coughed. 'I feel like shit,' he said. And with that we settled down as best we could. Both of us woke often. As I tossed and turned like a giant blue caterpillar on my sleeping mat I was conscious of my breathing and the need to inhale deeply.
In the morning it was Marty's coughing that stirred me to full wakefulness. As we ventured out only the porters were about, beginning the daily water-boiling ritual and preparing the canteen for breakfast. I returned to my slab on the finger of rock to watch the sunrise; it was as evocative as the previous evening's sunset. Orange beams of warm light stretched forward in a burst of rays and heat to greet the new day, dispelling the night's discomfort. And I basked in its glow.
A few weeks after Mum had made the tapes of nursery rhymes and stories her health deteriorated and I had a bad feeling about what was coming. It was afternoon and she was resting in my bed. I wanted to lie next to her and hold her, but dared not. The last time I had brushed my awkwardness aside to embrace her gently I had hurt her; cancer now made every bit of her ache. So I sat on the carpet with my back against the wall and hugged my knees, drawing them into my chest. The sun's rays filtered through the flimsy yellow gingham curtains, its golden light in harmony with the walls.
'I love this room. I like how the sun makes it feel warm,' she said, as she turned her face towards mine. 'Do you know what I regret? I regret that I'll miss what happens to everyone. And I had looked forward to being a granny.' Once more her words made my heart ache and quite suddenly I felt terrified that time would make me forget how she sounded, looked and smelt.
'I regret I can't give you half my years,' I choked, as anxiety, that now familiar stabbing pain, seared across my chest. I felt hopeless and guilty – guilty for living, and for grieving even though she was still with me. I felt robbed of our future, of laughing, moaning and arguing with her – and that child's voice inside whined. _It's not fair!_ The child was screaming and stamping its feet with all its fury and might.
'The only thing I'm worried about is that I'm not going to be around to keep an eye on your spending; it's a bad habit of yours when you're upset.' She paused. 'I wish I could have left you everything, Sarah, but I've written a will.' And then followed another short silence before Mum continued, 'You're the best thing I have ever done; you are my daughter and you're carrying my genes. You have been reliable and true. And you've been here for me in every practical way.' I wished I could carry on doing everything for her for longer, for ever.
'What will I do with you . . . when you are gone . . . what shall I do with your ashes?'
'I won't care, I'll be dead. You do what you think is right. It might take you a while, but you'll get there in the end . . . you always do,' then, not for the first time, she added, 'Just promise me you won't give up.'
'I promise,' I said, trying to sound believable as I swallowed my sadness in the happy glow of the room.
As I left the slab of rock and walked back to camp, the glaciers on Kibo, Kilimanjaro's highest peak, glittered and sparkled as they reflected the light.
At breakfast it was clear that the majority of our group was experiencing the effects of altitude. And on everyone's mind was the prospect of not being allowed to reach the summit, but subdued chatter of hopes and fears closed when we were briefed by our guide, Don.
'Today we are trekking over the Saddle to camp four at Kibo. We'll eat lunch, rest, eat dinner at about five, then I want you all to get some rest or sleep if you can. At eleven o'clock tonight everyone needs to be up and ready to make our summit bid. This is the toughest part of the trek, guys. Dig deep.'
I had been anticipating that promise of hardship and struggle – and the five-hour trek across the alpine desert didn't disappoint. Although cool winds blustered across the barren landscape, the sun's fierce rays burned down, and there was no shelter. The sweltering heat and thirst occupied my thoughts till I saw something curious gleaming in the near distance. I asked our local guide Peter what it was.
'Two years ago a small airplane carrying Italians crashed into Mawenzi. The debris went everywhere,' he said. 'Everything from the wreckage, apart from the four dead bodies, was left on the mountain.' As I approached the fuselage of the plane I saw that it was its white carcass reflecting in the sun that had dazzled my eyes. Cutesy paw prints painted on its empty shell only exaggerated the sense of eeriness.
A long time on the Saddle had passed and I desperately needed to pee. This necessary and basic human function dominated my thoughts as I scoured for some place suitable to drop my pants: a lava bomb, situated some distance off the trail, was going to have to do. By the time I'd sorted myself out, Peter was quite far ahead. As I made tracks towards the guide a perfectly round puff of cloud was tumbling like a massive white beach ball across the high, sandy plain. It was a brilliant photo opportunity, but Peter had my camera so I ran to catch up with him. This was a big mistake. Huge. Almost immediately my lungs felt compressed and tight, as though they were being squeezed. I felt like I was suffocating as I struggled to breathe. Every intake of air was piercing, and by the time I caught up with Peter I genuinely thought I was going to die.
' _Haraka haraka haina baraka!_ ' Peter said, shaking his head.
'What's . . . that . . . mean?' I said between gulps.
'It means, hurry hurry has no blessings. Up here you have to take it slowly, _pole pole_.'
I'd never experienced pain like it and continued the journey with my head bowed. It occurred to me that I might have destroyed my chance of going to the summit. I had to keep walking and willed myself to take one slow step after the next. Internally I revelled in my rage as the cumulative effects of sun, altitude, hunger, lack of sleep and excruciating chest pain made the going truly hard.
At 4,700 metres Kibo was at an elevation equivalent to Mont Blanc, and I had never been so pleased to reach a camp. In the relative privacy of our tent I was glad for some respite and, once I'd recovered sufficiently, I sorted out kit for the summit attempt. At lunch, word had got round that I'd been unwell and a couple of the trekkers were kind. One lad gave me the last of his concentrated blackcurrant juice, another lent me four AA batteries for my camera and our medic pushed two paracetamol and two Diamox on me. I was glad for the juice and batteries, but there seemed no point in taking the pills now, since I'd almost recovered from my high-altitude sprint. And in any case, I kind of wanted – needed – to feel suffering, as if doing so brought me closer to Mum.
Wednesday, 15 October 1997, was the third-last day of my mother's life. She was in a bad way. Thrush had developed in her throat, and the front of her head was hurting, but she hadn't complained. Her skeletal frame, like that of a woman twice her forty-four years, stumbled into the living room, and her shrunken body was consumed by the brown armchair she sat down into.
'Do you want me to call the doctor?' I asked tentatively.
'Yes,' she said.
Never before had I felt the enormous weight and power of that single word; it terrified me, and in my guts I knew she was near the end. When he came to the flat the doctor wanted to give her morphine for the short journey to hospital, but Mum said no.
'I want to know exactly what's going on. I want to be fully aware,' she answered firmly.
From the front door of the flat my eyes locked onto the two ambulance staff as they lifted my mother, in a chair, down the stairs. One of them grumbled loudly, struggling with the awkwardness of the task. I kept my mouth shut, but inside I was screaming expletives. After calling my grandparents and Frank, I made my own way to the hospital, but on arrival I was sent away almost immediately by my mum.
'Can you go home and fetch me the bangle that Gerry gave me. Don't tell Frank though.'
'Yeah, course I will,' I said unquestioningly. She obviously didn't want to hurt Frank's feelings, and neither did I. When I returned she put it on. I had no doubt she loved Frank, but he was going to carry on living and make a new life for himself eventually. Maybe she knew herself the end was near and perhaps having the bangle gave her some kind of superstitious comfort.
Everyone at camp had taken Don's advice and rested up until we were called for the evening meal, by which time I had recovered from my lung-crushing episode. After eating, a final brief was given.
'Use bottles for your water instead of the CamelBak hydration packs as the tubes are likely to freeze, and I don't want anyone listening to music as we summit. I need your full attention. If another member of the group gets into difficulty you need to be alert and ready to help. I won't lie, this last push is going to hurt. It should take us six hours to get to Gilman's Point and from there it'll take another hour and a half to get around the crater's rim to Uhuru Peak. Some of you won't make it. Altitude sickness can come on rapidly and if you are suffering you must let us know. It will only worsen the higher you go and you will put yourself at serious risk if you continue. There is no point giving everything to get to the summit to realise you can't make it down. Trust me, if you are suffering the only thing you will be thinking is _Get me off this fucking mountain._ '
'I will be joining you all on the ascent until the first person gets sick, and it's not a case of if, but when, because somebody will,' Emma added.
I sat on my camp chair in our canvas canteen and glanced around the faces listening intently, trying to hazard a guess at who might succumb first, hoping upon hope that it wouldn't be me. It was just another lottery.
### CHAPTER TEN
### Hell on Earth
_Kibo to Uhuru Peak, 5,895 metres, June 2010_
It was midnight. With no moon I was enfolded by pitch-blackness as I scrambled from the tent. Assembling under the stars, our group came together, headtorches flickering into life. Galvanised into action by Daniel, our summit guide, we set off, slowly and steadily. Marty and I followed in single file directly behind him. My reasons for being up front were purely selfish; we'd been told that if we didn't reach Gilman's Point by 8 a.m. we would not be allowed to continue on to Uhuru Peak – the mountain's true summit. If I was further behind there would be someone slow holding things up. Adrenalin pumped as I felt consumed by a sense of purpose.
The black night was bitterly cold as we trod in small steps on and up. Repetitive monotony of pace, coupled with the intense light powering from my headtorch, made me feel dizzy, but the voice inside my head chattered away again, and before I knew it a whole hour had slipped by.
We continued to snake our way up the scree slope in silence, an occasional mad cry from the guides keeping us amused. We'd been on the move for three hours, climbing 500 vertical metres; and now, as high as Everest Base Camp, we stopped to rest at the cave marking the halfway point to Gilman's. Kitchen porters greeted us with a mug of soup to keep up our fluid intake – and warm us up; I was in awe of their organisation and also very grateful. Murmuring voices clustered together in the darkness, growing in volume as more of our group arrived; their talk, centred on the ascent, flew around in an atmosphere of nervous anticipation while I stood near Daniel wanting to get going. My head throbbed.
As we pressed on into the night the tediousness of the increasingly steep trail felt interminable. I felt light-headed, but the light beaming from my headtorch was also making me feel nauseous, so to distract myself from these unpleasant sensations I began to count from one to a hundred over and over, and over again. Absorbed by this monotonous pastime, I was surprised to suddenly notice we'd reached Gilman's Point. From my high stance, as I looked down the mountainside, the split between groups was more evident: clusters of white light shining from headtorches were separated by some large and some small spans of darkness. Just then Daniel's radio crackled into life.
'I need help with a casualty, and I'm not sure if I'm on the right path back to Kibo. Can you send a guide down?'
'Yeah, okay, Emma,' I heard Don reply over the airwaves. 'Two trekkers were turned round at the Cave. Peter's with them. Stay where you are and he'll look for you as he makes his way down.' I wondered momentarily who was sick, glad it wasn't me.
Trekkers hugged and congratulated each other for making it up to the 5,681m signpost, but we still had 200 vertical metres to gain before we'd reach the true summit. Just before we set off for Uhuru Peak, Marty and I fell out.
'My CamelBak pipe is frozen,' he said.
'I don't mind sharing my water, but I've only got a litre left so we need to take little sips,' I said to him.
'Don't fuckin' bother then,' he answered crankily.
'Marty! We've got at least two hours' walking to get to the top, and then we're gonna have to get all the way back down to camp. All I'm saying is we've got to be sparing.' But Marty stropped off.
It almost felt like being granted the gift of sight when dawn broke and revealed the mountain's features once more. We had transcended those dark hours and trekked into a brand-new colourful day. Optimistically, it felt symbolic. There wasn't so much as a whisper of wind. Cloud, like a herd of white horses galloping noiselessly into the mountain, was again cutting us off from the world below. Ahead, the route along the rim of the crater followed over loose and rocky terrain, dropping down over 100 metres. At the bottom of the steep scree slope on my right was a new view – the dormant volcano's resurgent dome. Straining my eyes, I could just make out a meandering, broken thread of ant-sized people crossing the vast, dusty-brown plain below; in my confused state, their presence was surprising. It was worth a photo, but Don, who seemed to have crept up behind, urged me to hurry on. 'You need to put the camera away. There's no time for hanging about,' he muttered bad-temperedly.
Our original group of twenty-five had become twelve as we walked in a straggling single line, each of us quiet. By now my head was incredibly sore: my eyeballs seemed to be vibrating in their sockets and my face felt hot. My balance was all to fuck, and as I came down off a rock it felt like I'd stepped out on a dead leg, my knees buckling. I knew I was feeling the effects of the altitude but I was determined to keep on going. I'd come so far, giving up was not an option.
Concentrating on breathing, I staggered forward. Movement felt like trying to push through some kind of invisible force field. I felt diabolical . . . hammered and hungover all at once. My hand scraped off a tower of brittle wall on my left. Beyond the Reusch Crater on my right was the ash pit; my vision made the scenes blur and the ant people disappear amid the dust. I panicked: I was here to make sense of myself and my grief, yet with my eyes refusing to work, I couldn't even make sense of my surroundings. My mind was in chaos.
It was evening when I went back to see Mum at the hospital. Frank was already there. Curtains were drawn and the only light in the small room came from a bedside lamp, but I could see that Mum's stare was glazed.
'The doctor injected her with a massive dose of morphine,' Frank said, 'and she's been hallucinating.' The way he spoke made it all sound like some sort of game.
'She didn't want drugs, why did they give her morphine?'
'I don't know, she must have needed it,' Frank replied.
'Are those warts all over your hands?' Mum asked me.
'No!' I replied, initially amused at her deadly serious tone.
'There are bugs on the bed . . . and why is Frank pissing in the corner?' My mum swore! Apart from one time when she'd dropped the vacuum cleaner on her toes she'd never sworn in my presence. 'Look! My skin's turning red!'
'Mum, it's okay, you're not going red. It's just the morphine they've given you that's making you think these things.' But she didn't register what I was saying. I felt horrified, angry and upset, and went to find the doctor.
'Why did you give her morphine? She said she didn't want it and now she's having awful hallucinations.'
'We gave her the drug because she was in great pain,' the doctor replied. I returned to the room feeling very afraid.
'What are you looking at?' I asked, as her gaze followed something.
'Balls dropping from the ceiling,' she answered.
'I'm going home,' Frank said. 'Do you want a lift?'
'No. I'll stay a while longer, but thanks anyway,' I replied. And then it was just us two. Repeatedly raising her arm she grasped at nothing. 'What are you doing?' I asked. I knew I wasn't going to hear anything rational. I just wanted to hear her voice.
'I'm picking loose hairs.' But then, in a moment of lucidity, my mum was back and, looking straight at me, she said, 'I'm scared I'm not going to get out of here.'
I was too. I knew she was going to die. I wondered if she knew it too? Even if she did, I would not confirm both our fears; I revolted against it. My head pounded, my right eye felt like it was going to burst open and tsunami-sized waves of sick dread churned in my guts.
Pounding pain in my head and eyes and a heavy fatigue induced sickness in my guts as I progressed towards the summit of Kilimanjaro. Pushing on and emerging from behind the last of the rocky stacks, I saw a glacier close up for the first time – a sight so remarkable and unexpected that it served as a total distraction from my aching body: stepped blocks of ice whose cliffs, with dirty, geometric upward streaks, creaked and groaned as if in conversation; beyond its icy reaches was the dark silhouette of Mawenzi, a cone of castellated spires. And further away still, where the sky met the horizon, the curvature of the earth was plain to see. Those spellbinding views stopped me in my tracks. When I got going again it hurt to push on to the elusive top, but eventually the rocky path gave way to a trail of footprints over ice-encrusted snow and I knew that I must surely be nearing Uhuru Peak.
Men and women I didn't recognise passed me by, and confusion returned once again to replace the sense of wonderment and peace I'd felt moments before. _They aren't in our group._ 'You're nearly there!' and 'Congratulations!' The American accents twanged, accompanied by a pat on my shoulder.
_Who are these people and where have they come from?_ I asked myself, as I reciprocated their words of encouragement with more a quizzical smile than one of friendly acknowledgement. And as they went on their way I couldn't help but notice they all seemed to have the same sickly yellow complexions. _What's wrong with them?_
We'd walked almost eight hours through the night, but after ten more minutes of determined footsteps, at seven-twenty, I finally made it to the summit. As well as the low hum of noise on approaching Uhuru Peak, the sight of a disorderly queue of human traffic evidenced how busy a mountain Kilimanjaro was. People from a range of nationalities crowded around, waiting for their turn to be photographed next to the sign that proved they'd made it to the roof of Africa. It was surreal. And while strangers hugged and shook hands, all I wanted was for them to disappear. My nerves frayed and finally I lost my temper as Marty started complaining about the water situation again. 'Why don't you take all my food too?' I yelled, and threw a chocolate bar at him before walking away in fury.
The physical effort, the pain, the relief of having made it – everything suddenly threatened to overwhelm me. Desperate to get away from the bizarrely large summit horde, I managed to find a quiet spot where I collapsed to the ground and, with my legs dangling over the rocky shelf, the floodgates opened.
I saw Frank on my way to the hospital that Thursday morning. He stopped his car, rolled down the window and said, 'You know this is it, don't you?'
'No. What do you mean?' I asked.
'The doctor told me your mum is going to slip into a coma.'
Not waiting to hear another word, I raced to the hospital, tears, rage and adrenalin at work. As I reached out to open the door to her room, a warm hand gripped my forearm and I looked up to see who dared stop me. I had been oblivious to the staff in the corridor.
'She won't recognise you,' the doctor warned. 'You must be prepared. She hasn't spoken since last night, and your mum didn't know either me or Frank.' Tears ran down my cold cheeks.
' _Let me go!_ ' I cried, wriggling free from her grasp and pushing the door open. Mum looked at me with alarm as I almost fell into her room.
'Why are you crying?' she asked, all concerned. Her question caught me off-guard and I experienced a surge of joy amid the agony: I hadn't expected to hear her voice. _They'd said she wouldn't recognise me!_
'I'm not,' I lied as I walked to the window, my back turned so I could quickly wipe tears away.
Composure regained, I sat next to her, telling myself that everything was really all right. Her lips were open, but she didn't speak. I wiped her mouth and I held her. Her eyes stared vacantly. I told her I loved her so many times that if she had been able to register what I was saying I'd have driven her mad. I was quiet when my grandad arrived, and not much later we left – I left her on her own.
'Hi,' she said quietly, when I came back. And then that really was it. She never spoke again.
Grandad brought Gran to the hospital. When he pushed her into Mum's room in a wheelchair the nurse had a hold of Mum under the armpits and was lifting her onto a potty. Mum's nightdress rose, revealing her dark pubis, and though I felt embarrassed at the invasion of her privacy she did not react. I looked at her dumbly. As a child I had sought comfort from that body, whose arms had folded around me and whose hands had clasped mine. As a teenager I'd been grossed out when I'd seen it nude as I'd caught her dressing for work. Her nakedness did not worry me now; it consumed me with sorrow. Hers was a cancer-ravaged frame under a suspended sentence. When Gran saw her daughter, she was overcome; her chest heaved up and down uncontrollably as a tide of emotion burst forth. Rushing from the room to the nearest toilet cubicle, I collapsed onto the floor in my own tearful outpouring. Gran's pain was more than I could bear. I could only imagine how she must have felt to see her child dying. It wasn't right; it wasn't the natural order of things.
Later that evening my grandad returned, red-faced, with several beers inside him. Mum lay catatonic in the hospital bed, and as he gazed at her face and held her hand he softly said, 'My Jeffa', calling her by her pet name. Tears filled my eyes and a lump rose in my throat. And I wondered how many times, and in how many ways, a person's heart could break.
At six o'clock, back at the flat, ringing shredded the dark silence of morning. Its shrill alarm set my heart pounding. I hadn't needed to answer the phone or hear the words: I knew exactly who was calling and why. I went to Frank's room. 'Hurry! We've got to get to the hospital before it's too late.'
I threw on clothes I'd worn the day before and was ready to go, but Frank seemed to take for ever to get ready, and then an infuriating length of time to drive to the hospital. ' _Fuck_ the speed limit. My mum is dying!' I wailed as the car trundled along at 30mph. It was tempting to open the door, bail out and run as fast as my legs could go: I was desperate to be with her. Frank said nothing.
The overhead lamp was on, its light shone through the tiny frame of glass in the door, and as we entered the room a sweet, sickly scent filled my nostrils. She was lying on her side and her breathing rattled and bubbled, laboured and slowed. I touched her shoulder; her nightshirt was sweat-soaked. I put a disc of classical music that she liked into the CD player, and its vibrations joined the room.
'I wonder where Grandad is?' I said to Frank.
'I told the nurse not to phone him,' he answered.
I felt that my grandfather should have the choice to be there or not, so, going against Frank's decision, I left the room, found a nurse and asked her to make the call. My motives weren't entirely altruistic – _I_ needed him there.
Another seeming eternity lapsed before my grandfather arrived. Like Frank he had made himself presentable: my beloved old soldier had washed, shaved, suited up and polished his shoes, and suddenly I felt ashamed to be there in my scabby green mod jacket, jeans and top. Now I detected the faint odour of cigarette smoke from my clothes and hair; my mouth tasted of stale alcohol and I cursed myself for having gone to the pub when I'd left the hospital the night before.
Grandad and Frank each took a chair on either side of my mum's bed while I sat on it and held her hand. We were all quiet, watching, waiting. Barber's _Adagio for Strings_ intensified as it crescendoed, and the violins shrieked and screamed as if they knew that the moment had come. Drowning in a cacophony of noise that echoed and thumped in my ears, I wanted it all to be over. I looked at my mum. Weakly, she squeezed my hand.
'It'll be okay. Don't worry. Everything will be all right,' I whispered. _Liar, liar, you big fat fucking liar._ Her eyes stared into the corner of the ceiling, then slowly her head turned. Her gaze shifted to the opposite corner of the room, where it remained fixed. _What's happening? What's she doing? What can she see?_ My thumb brushed the bangle on her wrist. _Is it Gerry? Has he come for her?_ Her breathing slowed. In . . . out . . . in . . . and that was it. _She isn't taking another breath; where did it go? How can she have just gone?_ I could not understand. I searched my grandad's face looking for an answer, but all I saw was sorrow.
Grandad went to find the doctor. 'I've never watched anyone die before,' Frank said flatly as I pressed my cheek against hers: she felt warm. I stroked my hand through the damp curls of her grey hair.
The doctor came in and I had to leave.
I sat on the summit of Kilimanjaro, feeling calmer for a moment. I took out the photograph of me and Mum, and the small pot of her ashes I'd brought. On an impulse, I released them here, so close to her childhood home. Watching her ashes disperse into the crisp mountain air, my memories of her roared with a burning intensity and I felt overcome once more, crying as though she had left me right then.
'I'm sorry,' Marty said, as he came up behind me and put his arms around my shoulders. 'We've been here half an hour. Come on, we need to get down.'
'Okay.' I smiled back. I didn't want to leave, but he was right, our job was only half done. I found a nook in some stones and buried the picture of Mum and me, which I'd popped inside the pot, then stumbled away from Uhuru Peak.
Descent was torturous. And, like the Americans had been, now we were yellow-faced. My mind felt wrecked and all concept of time was lost on me, but I guessed it had to be about ten o'clock when we found ourselves back at Gilman's Point. Kibo's hut and the collection of tents looked tantalisingly close yet were so far away, and now, in the broad light of day, it was startling to see the steepness and distance we had covered during the night. In agonising and fitful bursts of movement I only managed to progress several metres at a time, down loose scree slopes, before having to crouch against or on rocks to catch my breath and let the pain ease.
In the hospital my mother's body lay inert and lifeless, but as I made my way out on the long road towards my grandparents' house it felt as though her spirit had wrapped me up from the inside out like a protective cloak – as though her soul had jumped into my body – and at that moment, in the strangest of ways, I felt comforted. Instead of walking along the pavement I crossed the road, gravitating towards the tall stone boundary wall of the Farmer's Showfield; in my mind I saw it helping to prop me up, preventing me from falling helplessly to my knees. The October wind sent a few rusty-coloured leaves scuttling across my path. The sun pursued its course behind the clouds and the cold, steely grey of day pressed down from all sides, swallowing me whole. Suddenly I understood the meaning of being carefree because I was not, and in crushing defeat I felt as though I never would be again. Each of my footsteps fell automatically, moving my body forward. My body existed but it was as though my spirit, the I that makes me _me_ , had transferred to another world where it fluttered unresting and lost. Only my feet progressed towards their destination. At the house I went straight to Gran's room. She was in bed lying up against her pillows. 'Is that it then?' she asked, and Uncle David, seated in the chair next to her, searched my face. Unable to say the words, I walked over, sat across my uncle's knees, put my arms around his neck and cried. I did not see my mother again.
Kibo was a place of safety, and when I finally crashed back into camp I promptly burst into floods of tears again, but this time they were sobs of relief. It didn't matter if I got sick now; I'd achieved the summit. As I pulled myself together, conversation with a tone of concern alerted me to the arrival of a member of our trekking group. His face looked like he lived on a planet with a gravitational pull six times stronger than Earth's; his eyes, and the skin underneath, drooped and sagged, as did his cheeks and the flesh of his jowls. Though supported by a porter, he walked without co-ordination, and his speech was slurred. He'd collapsed on his way back to Gilman's Point and had been out cold. He was an intelligent man (and someone who had taken Diamox from the start) but, driven by summit fever, he had lost all faculty for reason – so much so that he'd put his very life on the line – and it was only my shock at seeing him in that condition that made me appreciate the real danger of climbing at altitude. My thoughts turned to my children. I wanted so much to be with them.
### CHAPTER ELEVEN
### Dead Loss
_Horombo Huts, 3,720 metres, June 2010_
I returned to my tent, where I was looking forward to collapsing in a heap, but as I grabbed at the flap to scramble in I came face to face with Marty, who was clearly unwell. He was coughing harshly and his eyes were bloodshot and bulging like a bullfrog's, and with the next bout of coughing he spat blood.
'How long have you been like this?' I asked.
'I was feeling really shit when we were at the summit, but after I left you at Gilman's, that's when the coughing got worse.'
Without hesitation I went to find Emma, who accompanied me back to the tent where she examined my friend. Marty was diagnosed with the onset of pulmonary oedema, and so while he and the other casualty were raced to lower altitude on one-wheeled contraptions – a metal stretcher, like a mobile see-saw, with handles at each end – the rest of us made the four-hour haul on foot.
Trailing down over the vast, dusty expanse, with nothing to see but volcanic boulders and senecio trees, was total and utter hell on earth. I was physically and emotionally spent, had no appetite and was running on empty, but everyone was suffering. All any of us wanted was to reach Horombo camp, and it was a case of every man for himself as people desperately fought to get there. Unsteady on their feet, people fell about; far-spread individuals and pairs staggered along, suddenly dropping to the ground to rest. It was like watching an apocalyptic scene of a film with a cast of zombies, and I was one of them.
Horombo huts eventually came into sight and so did a very chipper Marty; seeing his happy face cheered me up.
'What a speedy recovery! You don't look like Gollum from _Lord of the Rings_ any more,' I said.
'Cheeky bitch!' he laughed. 'It's good to see you!'
We chatted for a bit and unpacked the stuff we needed for a last night of camping. Then, after a meagre meal of rice accompanied by carrots and peas (the last of the food supplies), nobody needed persuasion to have an early night – we'd not slept in over thirty-six hours, and any shut-eye prior to that had been in short supply. Bodily weary, I managed only one solitary thought before my eyes closed. _What an epic and twisted forty-eight hours._
On the first night of life without Mum I barely slept at all. And as I lay in my bed I thought about how I hadn't wanted either the day to finish or time to move forward; and how I hadn't wanted to return to the flat – where she wouldn't be.
I recalled leaving my grandparents' house late and going to visit Mum's friend and teaching colleague, staying with her till midnight – till going home could no longer be avoided. A snippet of Mum's friend's conversation turned over in my mind. 'I remember when she first got breast cancer and your mum said to me, "They found it in my lymph nodes, so that's it, I'm a goner."'
Mum hadn't told me or my grandparents; she had known all along that the cancer would return and that death would come to her sooner rather than later, but she had protected me and all of her family from the truth. A stream of tears trickled from my eyes. I could see my whole life stretching endlessly ahead of me: time was too large and I was swallowed up in it; I didn't want it to go on without her.
It must have been throwing-out time from the pubs because there was sudden noise out in the street. People passing on the pavements below sang, shouted and laughed. I felt resentful and totally disconnected from that world that carried gaily on outside my window. Nobody out there knew how I felt. Nobody out there cared. Why would they? This was my life, my sorrow, and it seemed that nothing would make me feel better ever again.
In the end I gave up on sleep. I reached out for a pile of papers by my bedside on which my mother had handwritten several verses from the _Rubaiyat_. Grandad had given her this book of poetry many years before, and a few weeks earlier she had selected her favourite philosophical quatrains to copy out. She had wanted Grandad and me to find comfort in their words after she was gone. I had coloured Celtic patterns around them, so they had become something we had created together. I lay there looking at them by torchlight, reading those lines over and over again.
Heat from the sun beating through the thin fabric of the tent woke me. My whole head throbbed, my face was puffy, and yesterday's tingling lips had erupted into the mother of all cold sores overnight. But, ropey as I felt, I was determined to make the most of every last minute on Kilimanjaro, a final day of mourning on the mountain, if you like.
Once the breakfast things had been cleared away and before we made tracks down the mountain, the porters and guides put on a final performance for us; they danced and sang their hearts out, cranking up the volume as they belted out the medley of songs that had welcomed us into camp every night. It was a fitting farewell. And afterwards, as we trekked back down beneath the cloud line, I remembered what I could of my mother's last goodbye.
It was hard to recall much about the funeral; the day passed in sketchy scenes. When the hearse drew up outside my grandparents' home my legs virtually gave way beneath me and the world blurred. I don't remember the drive to Inverness, but when we got there I was aware that the crematorium was full to capacity of people who had come to pay their respects, and that was 'nice'. And I vaguely remembered some former pupils of my mother who came to offer up those ritual expressions of consolation. My grandparents, uncles, aunt and Frank maintained composure throughout the service – I don't know how they did it. I wanted to throw myself on top of the box that was taking my mother away. I don't remember the hymns. But I remember the curtains being drawn, then the coffin – with my mum – was gone.
At the wake I drank brandy, because it was what she would have had. I walked – actually, it felt more like I floated – around the tables and in between people. I thought I'd spend the rest of my life feeling permanently numb, so I was pleased to feel a fleeting surge of irritation towards a friend of my grandad's who was filling his face with the sandwiches like he'd come along for the free feed. I wanted to grab his fork and stab his eyeballs, but instead I smiled. I sought out my grandad. Although I felt stone-cold sober, it seemed the alcohol had rubbed out enough brain cells for me to fuck up my attempt at quoting a verse from the _Rubaiyat_ , but my grandfather raised his glass with me all the same.
When we arrived back at Nakara Hotel I headed straight for the bar and ordered a brandy, making a toast to my mother and the mountain. As I sipped my drink, though, I felt strangely numb, not at all as I'd expected to feel having conquered the mountain. I had come to Kilimanjaro in hope that the journey to and from its summit would provide the time I needed to make sense of my grief – that by performing my own private memorial, sorrow would vanish as with the wave of a wand, and that by reliving painful memories my inner demons would be cast out. The boundaries of my own endurance had been pushed almost to the limit, so why did I now feel so underwhelmed?
Like everyone else, I continued drinking after dinner, but unlike the others I drank not in celebration of summiting, but to blot out the unbearable fact that my anguish remained. Marty and I got blitzed – I even shared some of his 'bad-boy' grass, giggling uncontrollably for an hour before passing out on my bed.
The undercurrent of disappointment remained as I left Africa behind. As our flight took us further away from Kilimanjaro I tried to work out why the challenge hadn't provided the closure I was after. I'd certainly revelled in the physical and mental punishment the mountain had thrown at me, but once that euphoria had worn off I'd found no lasting change. What now? I remembered my mother's words: _Just keep busy_.
I needed a new focus, a new challenge.
I decided right then that I'd climb all of Scotland's highest mountains, the Munros. I started to feel excited as soon as I'd hit upon this resolve and a sense of renewed hope rose within me.
### CHAPTER TWELVE
### Peaks and Troughs
_The North Glen Shiel Ridge, June 2011_
True to my obsessive nature, on my return from Africa I was keen to start ticking off the list of Munros, fixating on reaching the summits of all 284 mountains as quickly as possible. Having taken up a teaching post at a small school near Inverness, I went hillwalking whenever I could find the time.
But despite my frequent excursions, by summer term the following year I felt the familiar tug of anxiety dragging me down again. Though my job was part-time, my evenings were taken up with marking or preparing activities. It seemed I was always rushing to get dinner and bedtimes over with so that I could get on with school stuff. I realised I'd fallen into the very same routine Mum had, and I resented being trapped in a cycle I'd sworn to avoid. Pressures increased when the roof started to leak again, and the uncooperative neighbours at our flat in Nairn continued to cause problems.
In the last months of Mum's life, while she'd still had the energy, she had done up the flat with a view to putting it on the market. We had lived in two homes since moving to the town, the first a tiny one-bedroomed property in the Fishertown area and then a larger flat above the busy High Street. It had more rooms than we really needed, all bright and spacious, but very cold in winter. And because it was an old building it had many problems other than just the freezer-like temperatures.
'I think we should get rid of this place,' she'd said. 'It would be better for you not to have the worry of the leaking roof.'
'No! I don't want you to do that! This is home! It's where we have lived together!' I'd cried.
At that time the idea of selling was inconceivable: my mum was going to die and I didn't want the flat, with our shared memories, taken away from me too. We'd whiled away hours there playing Scrabble or cards. Sitting by the fire reading, or watching TV. Drinking brandy and playing the guitar. We'd fought and made up. She'd helped me with homework to get me through exams, and typed up the dissertation I'd written in my final year at art school. She'd given disastrous cooking lessons in the kitchen: I'd let pans of water burn dry, been heavy-handed with chillies in a curry that blew everyone's heads off, and misunderstanding her instructions I'd drained all the juicy stock from a pan of mince down the plughole.
Mum had been patient and forgiving of me as I struggled to grow up, even when I'd accidentally spilt the bright-red contents of my Indian takeaway all over her new living-room carpet (well, okay, that incurred a three-month silence – but I was easy to ignore since I had scurried off back to art school in Aberdeen feeling very bad and very guilty). I had rarely listened, but Mum had only ever done her best to guide and advise me, and help me become independent. We had been through a lot together in those four walls.
But the flat had been a problem ever since. Recently a large family of Mancunians, minus a wife but plus one dog, had been installed. They were an odd lot and all heavy smokers. Fumes from their cigarettes migrated through the walls, permeating the air, making bedding and clothing in my sons' room smell bad.
Hearing the man's dry cough as he laboured up the communal stairwell one day, I opened my door. He was tall and broad, dressed in baggy jeans and an oversized woolly jumper. I said a breezy, 'Hello.' His unshaven face smiled back and he politely returned the greeting. Maybe I'd stared a little too long, because he then put down his bags and went into a long explanation of why his teeth resembled a broken and rickety picket fence.
'Me teef are needin' fixed. Me wife an' I were rowin' an' she pushed me – I fell into the side of the baffroom door an' all me teef got smashed up,' he said. I lamented their loss and asked what happened with his wife.
'Did you divorce?' I asked.
'No. She's dead. I killed her, but it were an accident. She fell down the stairs.'
'Ooooh!' I said. He began to tell me all the problems he was having with his twenty-year-old daughter and asked if I'd like to come in for a cup of tea sometime. 'Sure,' I answered, then made my excuses and retreated to the flat, deciding I'd complain about his smoke fumes another day.
A few weeks later that strange family did a moonlight flit. Their replacements were just as unpleasant. The regular screaming matches, accompanied by objects being hurled against walls and smashed off floors, were frightening for my younger son to hear, while my older boy was receiving a different kind of education.
'Mum! I can hear them having sex!' Marcus exclaimed as he burst, wide-eyed, through our living-room door.
'Are you sure?'
'Yes, because the lady's saying "Eff me harder!"'
The police were becoming frequent visitors to the residents: they came when there was a fight between the father and son, they came when the man beat his woman and they came to arrest the son, a thief and drug dealer.
Now, despairing of our situation and racked with anxiety, I sat at the kitchen window crying. All my problems were beginning to overwhelm me: the continuous stream of awful neighbours; a personality clash at work; readjusting to life as a single parent. I missed my family and the support they had provided. I felt like such a failure at everything as I sat there punishing myself. I was a shitty, introspective, self-absorbed fuck-up. I hated myself and wanted out of my own skin. I growled with self-loathing till my throat burned. Consumed by loneliness and the burden of my responsibilities, I moved closer to the window and opened it. Leaning forward, I peered down into the darkness, gripped suddenly by a terrible urge to leap. _Be spontaneous! Don't think about it, just do it. It'll all be over quickly!_ It was a fleeting thought. I knew I could never do it, never deliberately choose to leave my little boys. I would put on a brave face for them and carry on going.
I needed the breathing space that I only found on the hills to help me cope, but in my agitated state I began to hit them harder, becoming increasingly careless.
At seven-thirty in the morning the alarm on my mobile went off. I'd arranged to meet Ollie in Inverness; we were heading north-west to do four Munros on the north Glen Shiel Ridge. After our first meeting on Meall a' Bhuachaille in 2008, Ollie and I had been out walking together a number of times and were now pretty comfortable in each other's company. Our association was uncomplicated; we met for hillwalks and kept conversation lighthearted. Ollie saw me as a joker, but humour, for a long time, had been a front. Underneath it, a sense of angry frustration had taken over. Like a midnight tide that washes in, depression seeped its inky darkness through me as I struggled on.
Now, as Ollie drove us to our next challenge, I lowered my seat and closed my eyes, listening to the rain as it pattered off the windows.
It had been a rainy day, four weeks after Mum's funeral, when I was manhandled back into Woolworths by security. I was led up some stairs, their tan colouring worn down to a smooth grey in the centre of each step. What a truly unimportant thing for my stumbling mind to have focused on. A grubby white door was pushed open and I was taken into a small office. Dressed in a navy-blue pullover and skirt, the manageress, a dumpy woman with a terrible mousey-blonde perm and huge framed glasses, sat behind a desk that was covered in piles of paper in messy stacks. There were files and boxes everywhere, more like how I imagined the office of a journalist or private detective might be – except here there were random toys and Christmas decorations lying about. She'd phoned the police and officers were on their way. My belly churned, my head felt light and my brain raced at a rate of knots as the consequences of my actions finally sunk in. _What in the fuck were you thinking? You have money. You could have paid. You even realised the security guy was watching you. Why didn't you just dump the stuff? You're such an idiot._ I then recoiled as horror filled me. What were my grandparents going to think?
The door opened and in walked a policeman. I looked up at his face, which was partially concealed under the shiny peak of his black cap, and cringed. I couldn't believe it. I knew him. My embarrassment doubled as my mind flickered back to the night I'd shagged him down the putting green, long ago. I kept quiet but I knew he recognised me too. He opened up the large, white-plastic carrier bag that I'd stashed the stolen goods in and pulled the bizarre variety of items out one by one. He looked at me, and then at the manageress.
'I think', he said, 'that in this case it would probably be better if you don't press charges. I know Sarah's circumstances and, if she is agreeable, I would instead refer her to the community psychiatric team for counselling.'
There was a pause before the manageress nodded.
'I don't want to see you in my store again,' she warned sternly.
'Okay,' I answered, and then asked if my name would appear in the paper.
'No. We can keep you out of that,' the policeman answered.
'Thank you,' I said, grateful that my grandparents would be spared the knowledge and shame of what I'd done. And with that I was free to leave.
Yet I was not free.
I hadn't been able to talk to my grandparents about how I really felt inside. How could I? I knew that their hearts were as broken as mine. I'd lost my mum, but they'd buried their child. I could only imagine their suffering. And yet they sheltered me from their own pain with a united show of composure. I witnessed only one outpouring of grief, from my grandfather. It was my fault, I'd said something, I can't even remember what, that finally broke through his stoic self-control. He jumped up from his chair in the living room, banged his fists on the wall and cried, 'I can't seem to say anything right!' His outburst was so unexpected, so out of character, I was completely taken aback. Guilt pinned me to my seat on the sofa, hating myself for being the cause of this dreadful moment, for hurting him.
Other than that brief eruption, though, my grandparents 'soldiered on'. Their strength and depth of character was humbling, so I hid my shoplifting shame and pretended that I was strong too – and, besides, it would only have caused them concern and even more upset if they'd known. So we three carried on with our usual routines, each suffering alone in silence. My brush with the law had given me a fright, but it didn't change the blackness inside that I believed was going to drive me insane with grief, turn me into an alcoholic or, more likely, both. It was not a good place to be. And as the rain came down, I thought maybe it wasn't a bad thing that I was going to have to see a counsellor.
It was a two-hour journey before Ollie and I, having driven through Glen Shiel, arrived at Morvich, a tiny settlement near the southern end of Loch Duich. The rain had stopped, though the sky remained heavy as we turned off the road and parked beyond a large metal gate. Feeling tired and sluggish, I put on my gaiters and waterproof jacket and set out on the six-kilometre route to Glen Licht House. Silver puddles filled the potholes on the stony dirt track that ran alongside the River Croe, its peat-coloured waters cutting a meandering path through the bottom of this wide, treeless valley. The lower slopes of the great mountain ridges that hemmed us in on both sides were cloaked in a deep and luscious green, fresh from the summer rains. Higher up, rocks glistened. Shreds of light found a way through rolling folds of gunmetal cloud and fell like ripples on the mountainside. The atmospheric trickery made it impossible to tell exactly how steep parts of these mountains were, but I had no doubt we were in for quite the slog upwards. It was only when we reached the isolated building at the end of the track that I realised I'd neglected to factor in the distance and time it had taken to walk those extra kilometres – adding another two and a half hours to what was already going to be a lengthy day. I felt cross with myself and worried that it would be near ten o'clock that night before I got home to the kids. Their grandparents hadn't been able to help on this occasion, and so the boys were being looked after by Sam. Although our marriage had long since been over we had remained in touch and he sometimes helped me out with them – the boys were familiar with him, and Sam was still fond of them too.
We followed a good path, in a southerly direction, alongside a tributary stream that we needed to cross. Heavy rains had made the river lively, and a suitable place to cross difficult to find. But it was after I had slipped off some rocks and got my feet wet that my temper escalated beyond the combined heights of the mountains we were scaling. Ollie, for reasons only he knew, decided to do a gigantic zigzag up unforgivingly steep and craggy terrain in the opposite direction to the line I'd mapped out. And I followed him. A southerly wind, now blowing fiercely, brought low cloud and light rain, making it feel all the more like hard work to get anywhere.
It now seemed I couldn't even get to the top of a hill without messing up. Depression was not meant to follow me on the mountains. I yelled my frustration and rage into the wind till my throat hurt and tears burned. Ollie was too far ahead to have heard me wailing like a lunatic, and in any case the strong winds blowing against us carried the din away in the opposite direction.
A day after the incident at Woolworths I took Poppy for a walk. I was desperate to reach the white sands of Nairn's east beach, knowing that at this time of the year it was unlikely there would be anyone about. I'd chosen the place deliberately, so that I could scream out all the anger and misery and pain from the top of my lungs.
A man appeared from between the sand dunes, looking distinctly alarmed when he heard my cry.
'Are you all right?' the poor unsuspecting soul asked.
_No, not really. My mother is dead, I got done for shoplifting the other day and I want to fucking top myself._
'Yes,' I stammered, 'I'm fine.'
The man walked on uncertainly, and I felt idiotic. Nevertheless, as soon as I was alone on the wide, open space of the beach I started talking to myself again, pretending Mum was walking with me. ' _I dreamt about you last night. You told me that there is a heaven and everyone gets to go there. You said you'd done more there in two days than you could do in a lifetime on Earth . . . I wish I always got to see you in my dreams . . . I miss you so badly Mum_.'
I walked for empty miles along the sand till the chill December wind made my fingers, toes and lips as numb as I felt on the inside. And I remembered the promise I'd made not to give up. Sea air blustered across cold, grey waters carrying a gull's cry. Looking to the sky, I said to Mum, ' _I'll speak to the counsellor. I'll do teacher training. I'll go see your friend and ask if she'll let me have work experience in her class. I won't steal, I won't drink and I won't let you down again, Mum. I'll be better._'
After the shoplifting incident I hadn't said anything to my grandparents about having to see a counsellor, but there were some things that could not be left unsaid. I hadn't been feeling right since my walk on the beach. There was something more than grief going on inside me. With all the upset I hadn't even noticed I'd missed a period, but at a doctor's appointment it was confirmed I was pregnant. Dr Collier had been our family's doctor for years. It was she who had been on duty at the hospital when Mum was dying; her warm hand that had held me back, and her voice that warned me that Mum might not recognise me. She had always been so kind, I trusted her and could tell her things I couldn't speak to my grandparents about. When, years later, she told me she was retiring, I was devastated. 'Who am I going to talk to now?' was all I managed to say.
Now, as she delivered the news of my impending motherhood, a range of emotions pulled me in every direction. What was I going to do? After losing my mum there was no way I wasn't keeping my baby. But I was scared to tell my grandparents. Of the two it was Gran I chose to tell first. My boyfriend and I had already parted company with no chance of reconciliation, so I knew my grandfather was going to be even more disapproving, but to have Gran onside would cushion the anticipated blow of his reaction. Behind Grandad's slim facade was a man to be reckoned with. Not that he ever ranted or raved – it was the disappointment visibly etched on his face combined with a quiet but disparaging remark, and an exit from your presence, that left you wanting to crawl away, tail between legs, feeling abysmal for having let him down. I was dreading it. 'So . . . will you tell Grandad for me?' I'd asked.
'She's pregnant,' Gran said when Grandad entered the bedroom with her dinner; subtlety had never been her strong point. He thrust the tray of food onto her lap and left without casting so much as a glance in my direction. It had been expected, but it still hurt. Sitting on the dusky-pink tweed-covered armchair next to her bed, I looked at Gran.
'I suppose he thinks like mother like daughter,' I said, my voice cracking.
'Don't worry. He'll come round. He did with your mum and he will with you.' Gran's friendship was invaluable to me. She talked about my grandad and the fun they used to have. A flicker of youth shone in her old eyes as she'd told me about an officers' party, and how the men had to put their wives over their shoulders and race to the end of the hall. 'But your grandad was so skinny that I picked him up and carried him instead,' she said with a little chuckle. 'I think that was the night he pushed me home in a shopping trolley.' Gran was a marvel. She must still have been feeling grief-stricken too, but told me her stories – her way of letting me know I was loved by them both. That night I prayed to the God that my grandad had faith in, the God I wanted to believe in.
'Help us through these times. Look after my gran and my grandad. Thank you for giving me my baby.' As I lay in bed, in the darkness of my room, I knew that the new life growing inside would save me from myself.
It took Ollie and me a further three hours to reach the first summit. With every step, I'd finally felt my black mood slowly slipping away, the soothing familiarity of the mountain scenery and the effort of the physical exertion gradually softening the turmoil of my raw emotions until all traces of my usual pain and anguish had been erased altogether.
'You know why I love mountains, Ollie? 'Cause they make me feel great!' I announced cheerfully when I caught up with him.
The wind continued to blow hard, but we snatched glimpses of our surroundings from clearing views. It was a magnificent, time- and weather-worn landscape I saw as I looked south-west across to the unmistakable long, straight ridge of Sgurr na Sgine, presenting visibly beneath the cloud line. In thick clag we made our way along the ridge and congratulated each other when we arrived at the top of the second Munro. I shrieked our silly summit catchphrase, 'Nippy tunnel!', as we balled our fists, bumped knuckles with each other, beat our chests and punched the sky. It was only after performing another nippy tunnel, and when we came upon an impressively stacked third cairn, that we realised we were at the actual, genuine, bona-fide second summit. We laughed and blamed our mistakes on the weather.
Thick, white mists enfolded us as we wandered away from the peak on a path that led out along an ever-narrowing rocky spine. It was a good forty minutes later and after losing some height that I began to notice the landscape did not look as it should. I showed Ollie the map and he agreed, 'We're on the wrong bloody ridge!' Even though I knew what to do with a map and compass, in poor conditions I realised how easy it was to go wrong. Small mistakes could end in big disaster; the mountains were no place for complacency.
'I'm really sorry,' I said, feeling bad again, but by half past five I was dancing on the third summit, overcome with happiness once more. The cloud had lifted and I was confident there would be no more mistakes. And even another sweat-inducing, calf-killing haul up over a couple of false tops to reach the day's fourth and final Munro only served to enliven my spirit. Ollie was lagging behind, but with the incredible views of the ridge we'd just walked and the spectacular peaks of the Glen Affric mountains to the north, I was in my element.
'I won't lie, Sarah,' Ollie said as he made it to the top, 'I'm knackered!'
'At least it's all downhill from here,' I grinned.
A path from the summit petered out to give way to nature's own carpet of grasses, moss and flowering mountain plants, butterwort and tiny purple orchids. In the distance, the bright-red roof of a small building down in the valley was where we set our sights; it didn't seem that far. Well, we walked and we walked, but we didn't seem to be drawing any nearer to it. The ground was becoming soggier. Shooting pain surging through my right knee made the descent feel unending, and it was always a downer to know that I was returning home to the same old worries.
Life had been a journey of peaks and troughs, but that first year without Mum was hard. Frank and I wanted to keep her close so the urn containing her ashes remained in the flat. Two months after Mum died, on Christmas Eve, I'd spent the day with my grandparents and it was teatime before I went home. I wanted to talk to my mum but her ashes, which we'd put on the mantelpiece in the living room, weren't there. Without having said a word or left a note, Frank had taken her away. My heart raced as I assumed the worst. We hadn't been getting along and I thought that he had gone to scatter her ashes without me. Through choked tears I repeatedly tried to get hold of him by phone, but got no answer. I called Uncle David.
'Have you seen Frank?'
'No,' he replied.
I called Frank's number again, and this time he picked up.
'Where have you taken my mum?' I asked, trying to suppress the anger and grief that raged inside.
'I just took her for a walk. That's all,' he answered.
It was years later when he finally told me that he'd taken the urn with her ashes to Beinn Eighe, that beautiful high loch in Wester Ross. He had meant no harm, but that Christmas Eve I hated him for making me panic like that. Too wrapped up in my own grief, I didn't stop to consider his.
After that, I saw the counsellor twice a month for six months and at first I tried to articulate my feelings, but it didn't seem to be helping – I never really felt I could connect with the woman. So I told her the things she wanted to hear and took my own measures in the battle against the blackness. I made endless lists and set goals. But all the while I swung like a pendulum: from aching heights of gratitude and hope when I heard my unborn child's heartbeat or felt his kick, to the miserable depths of tears and headaches; of brushing my face with a lock of Mum's hair; or holding her nightshirt, the one she had worn during those last hours in hospital, to breathe in the faint smell of her life. All her other clothes had been boxed up and given to charity; I had never before been a worshipper of things, but the presence of her life was solidified in those strands of her hair and in her nightshirt.
Cries for my mum during labour pains in the hospital were heard only by the four walls, but when my son was born and I cradled him for the first time, I was possessed by overwhelming love.
The immediate responsibility I felt for my son helped me understand better how my grandparents had managed their grief when they'd lost their daughter. They weren't superhuman: they'd put on a brave face and were strong because they'd had to be for the rest of their family, for me. Through my tears I whispered into my baby's ear, 'Your grandma would love you!'
### CHAPTER THIRTEEN
### A Hatch and Despatch
_Bidein a' Choire Sheasgaich and Lurg Mhor, October 2011_
I'd had a bad bout of flu, but despite my illness, and the constant downpours, winds and the cold, nothing had stopped me from knocking out twelve Munros over the course of an extended weekend. Incapable of slowing down and indifferent to the weather, I kept heading out on long mountain days, even though hours of daylight were shortening. And after another big outing, on the Mamores, I made plans for the following day to tackle two of the most isolated Munros in the country.
After catching five hours' sleep I was up and getting organised when there was a soft rap on the door. It was Marty. We had kept in contact after Kilimanjaro and he had offered to join me on my walk today.
'Morning! Your bike's in the jeep. I even pumped your tyres up; how were you gonna ride it wi' flats, ya tube? You ready?' he said.
'Yup,' I grinned.
'How far is the walk anyway?' he asked as we drove north.
'Twenty-nine kilometres. I reckon it'll take us ten hours tops,' I answered confidently. 'If we can set off at half-eight we should be down before it's too dark.'
'Well, sunset's at quarter to six. You got your headtorch?' he asked.
'Yeah, course I do,' I replied.
'Good. 'Cause I forgot mine,' he said. I shook my head and laughed.
At quarter past eight we arrived at Craig, four kilometres east of Achnashellach in the north-west of Scotland. Rain fell steadily. Crossing the railway line, we continued by bike on a wide, stony track bordered by trees, but I soon dismounted and pushed while Marty bust his competitive guts, cycling the steep uphill, to the deer fence.
Forty-five minutes later we had dumped our wheels and were following a trail towards a wooden rope bridge that I'd used once before to cross the river. To our dismay we found the rope lying in a curled mass by the piers, but luckily the water wasn't in spate, and protruding boulders provided a way over. Tired from the previous days of walking, my legs were like lead as we hiked up the stalkers' path, but we chattered non-stop, making the time it took to reach the col pass relatively quickly. Marty was being all profound, telling me how he thought he'd finally met the right girl.
'In the past, when I've been in a relationship, I've always been looking for something better. Sometimes you have to stand back and take notice of what's staring you in the face,' he said. 'You not met anyone yet?'
'Na. There's no way I'm getting involved with anyone anytime soon,' I answered cheerfully. But inside I felt rueful. I'd never chosen well, always in a rush to find love and ending up in the wrong relationships, then staying involved for too long out of a misguided determination that I could make it work. After so many failures, I found it hard to believe I could ever really have a successful relationship.
Bells had rung in a new year and a new millennium, and life somehow carried on without Mum. Although the undercurrent of sadness remained, Marcus, now a toddler, brought me lots of happiness, and a return to bar work also helped me to smile and joke again. And then I met Charlie. I'd gone down to the fruit and veg shop, and, spotting me struggling as carrier bags swung from each handlebar, Charlie offered to hold my bike so that Marcus could stay in the child seat while I got my goods and paid at the counter. I was immediately taken with him as I overheard him chatting away to my son, and I was flattered by the attention he paid me too. From there, things moved quickly and, desperate for love and companionship, I soon fell for him.
One evening, as we were making love in the dark, he let out a yelp and gripped my arms. 'Sarah! Stop moving!' His distress alarmed me, and I wondered what the hell was wrong as I hovered above him. 'Your earring! It's caught up my nose!' he cried. The silver hoop had unclasped, catching his nostril like a fish on a hook. Releasing him from its sharp grip, I erupted into uncontrollable peals of laughter. It was the first real belly laugh I'd had since my mum died, something I had never thought could happen. When I finally wore myself out, a wave of disgust washed over me as I registered that my laughter had briefly obliterated my grief.
Charlie and I started to see more of each other, but I kept it quiet from my grandparents at first. He was seven years younger than me, which I knew they wouldn't approve of, and in any case I wasn't sure if the relationship would amount to anything, so there didn't seem any point in upsetting them for no reason. Besides, my grandfather's mood had been low, and a combination of various ailments gave poor old Gran more than a few good reasons for her increasing despondency. I didn't think it would be helpful to add to their stress by introducing a boyfriend I knew they would consider unsuitable.
In the two years since Mum had died I'd seen my grandparents most days. I'd sit with my grandad in the kitchen then pop into Gran's room. We'd chat or just watch one of her programmes on TV, but there had been an evident shift in her mood. She had, completely out of character, stopped eating and would only sip at drinks. She'd gone silent too, and I had the horrible feeling she was dying. Panic set in. Sitting on the chair next to her bed, desperate for any conversation, I showed Gran the funny picture on a birthday card I'd bought. 'That's a very painful operation, I've seen it on the telly,' she said.
'She was meant to have laughed,' I said to my grandfather when I saw him later. He shared my concern.
'She asked me how much the dog cost and when I asked her what she meant she replied, "Well, I thought you were taking the dog to the vets because she smells?"' Then he added, 'I think she's going a bit ga-ga. And the multiplicity of pills pushed into her by the doctors does little to help. I'm afraid there isn't anything much we can do for her, she doesn't want any of her usual goodies, not even a small slice of my lemon cheesecake,' he said.
I grabbed Poppy's lead and took her out. 'If I'm ever on a life-support machine,' Gran had once said, ' _don't_ switch me off!' Mum, Gran and I had all laughed together at her statement, but we knew she meant it. She was a woman who clung ferociously to life, which made it all the harder to bear witness to her decline. When I returned with the dog I was greeted at the door by the doctor, who was just leaving.
'An ambulance is on its way,' Grandad said. 'Your gran is going to the local hospital.' I knew exactly what that meant. Her body was giving up and she was too.
Later that evening I called in to see her. Two-year-old Marcus and Poppy were with me.
'Can you stay in the corridor please while we clean up your gran and give her an injection? It won't take long,' a nurse requested.
Waiting at the door, I stared at the room opposite. _Last time I was here I was in that room. Mum died in there_. _Poor Gran, she'll remember that too; why'd they have to put her here?_ I thought sombrely. A smiling nurse left the door ajar, indicating that I could go in. And so there I stood at the end of her bed.
'Has your grandad left?' she said.
'Yes,' I answered. _He must have been here earlier_.
Pouring some Coke into a glass, I gave it to her to sip. Her fight was gone. She wasn't interested in the words coming from my mouth. She just stared as though lost in her own deep thought.
'I'll be off now. Grandad will be in later, so I'll see you tomorrow,' I said as cheerfully as possible, but feeling great anguish.
She looked straight at me then, with softness in her eyes as she opened them right up, like she was taking in one last good look at me. A momentary flicker of a smile was directed at Marcus, who sat placidly in his backpack. I wanted to say, 'I love you', but to do that would have felt too much like confirmation that I might never see her again.
At half past ten the next morning the phone rang. It was my aunty. 'Mum died in the early hours.'
'No!' I cried, 'I've got to go to her!'
Leaving Marcus with my grandad, I went to the hospital. I made my way along the corridor to Gran's room, where she lay as though sleeping. _Why didn't anyone tell me? I'd never have let you leave on your own._ My heart shattered again, and my entire being trembled. Sitting in the chair by her bed, I took her hand in mine and stroked it. Her pale, paper-thin skin felt silky, but ice-cold – not warm like my mum's had been. I reached my arm across her body and rested my head on her chest. Unexpectedly, Gran burped. Jumping out of my skin, I withdrew from the embrace. I looked at her . . . _no_ . . . _she is definitely still dead_ . . . and then I started laughing through my tears.
For the second time I found myself in a black car being driven to the crematorium, as 'Marble Halls', the only tune I knew Gran liked, played over and over inside my head. In two years I had lost the two most significant women in my life. In a trance-like state I stared from the car's window onto a grey and silent world. Black-headed gulls with motionless wings were being carried on the wind. I envied those gulls right then.
It was only one month after Gran died when I found out I was expecting my second child.
Concluding that there would never be a 'right time' to tell my grandad, I'd decided to bite the bullet and announce my pregnancy. In light of losing Gran I was sure the news couldn't make him feel too much worse, but all the same I wasn't expecting a pat on the back and a 'Well done, dear.'
Steeling myself, I entered Grandad's kitchen.
'What?' he spat, in disbelief. 'Tell me, are there going to be five different fathers waiting at the park for their children in another five years' time?' His words cut me down, and this time there was no Gran to hide behind. But I refused to skulk off. _He'll come round_.
I visited my grandfather most days, helping out around the house, making dinner and going on the supermarket run. And of an evening I'd return to take the dog out. Often I'd walk into the living room to find him sitting in dim light with a whisky by his side, gazing up at a black and white photograph of his four young children. 'I like to look at my little family, sitting there together on the sofa, all smiles,' he'd say. I sensed his sorrow, and knew he missed both Mum and Gran.
At the end of the millennium year Leon was born by caesarean section. At the same time, the prediction my grandfather had made – that the relationship with the baby's father was destined for failure – was about to come true. On New Year's Eve, when I should have been celebrating the latest addition to my little family, I discovered that Charlie was seeing someone else.
My grandad, who had come to visit, found me bent double and weeping at the top of the communal stairwell.
'What on earth is the matter, Sarah?' he asked.
'I've hurt myself,' I answered. That was partly true. I'd been outside to fetch some coal, and the exertion of carrying its weight so soon after my operation had been too much. But we both knew that wasn't why I was crying.
'You shouldn't be lifting heavy things. Let me,' he said, taking the buckets of coal, 'and then you can tell me what's really wrong.' In a reversal of roles, my octogenarian grandfather ended up looking after me.
My grandfather had been my last source of support when I was feeling vulnerable. Now, with all of them gone, I often felt as if I had no one to turn to. At least on the mountains I could count on my walking companions. Out with Marty, I felt much more confident than I would have done on my own in this wilderness. I knew that any problems the mountain threw at us we would solve together.
Taking a bearing at the col, Marty and I tackled the Corbett, which stood between us and the first Munro. At its summit we followed the twisting, bumpy ridge and as we reached its end I spied a massive drop to the next narrow col between the Corbett and Munro, and rising behind that was a forbidding and seemingly sheer wall of rock.
Mists engulfed us and it was difficult to see much other than the immediate vertical crags. We picked up a path, only for it to lose itself in the wall. We wasted a lot of time trying to find a way up the broken cliffs, but contouring the mountain westwards we found a mossy breach in the hill's defences and began to pull ourselves up. Great clumps of loose earth came away in our hands and underneath our feet the terrain slipped away, forcing us to move quickly up crags and gullies. Wind and rain were making the going more challenging, but Marty's banter kept me distracted till we finally topped out. It had taken us five and a half hours – too long. But, after walking around the west side of a high lochan and up a neatly tapering ridge, we were at the summit.
'Man, cheesecake mountain was hard earned,' I panted.
'Cheesecake?' Marty said, his chin retracting into his neck and his nose wrinkling.
'Yeah. Cheesecake. I can never pronounce this mountain's Gaelic name, cheesecake is close enough!'
'Ha, yeah. Cheesecake. It was a beast,' Marty agreed. 'Listen; if we don't hit the second Munro within the next forty-five minutes I think we should turn round. Time's not on our side,' he said. I reluctantly agreed. Keeping on the move, we stepped up the pace.
The route to Lurg Mhor was easier, and, covering ground on a good path, we made its top in the timescale we'd agreed. Marty stuffed his face while I managed only one bite of my roll. My appetite had deserted me, but I didn't think twice about it. It was only as we were retracing our steps and I began to feel sick that I realised I hadn't eaten much all day – Marty had arrived early that morning, and in the sudden rush to get out I had completely forgotten to grab anything for breakfast. I'd already started our day feeling physically drained and by the time we were back at the first summit I was consciously battling against waves of nausea brought on by fatigue. I said nothing. The lochan we'd passed earlier was now sunlit, and to the west were dazzling views over other sparkling high lochs. Down-climbing the rocky wall came easily as we picked up a path east of the lochan, and as Marty whistled I felt a second wind and started to feel brighter.
At quarter past five we were back at the summit of the Corbett. We came down at first on a path but then over rough ground to what we assumed was the col. Neither of us checked the map. We dropped down some more. Dark cloud thickened overhead. And at half past five we realised something was wrong. We'd come off the Corbett too soon. Checking the map, we agreed to keep contouring around and down, but discovered we were walking towards broken cliffs whose drops were big enough to cause serious injury if we were to go over the edge. Light was disappearing fast and the terrain underfoot was difficult: slippery rocks were separated by squelchy bog and heathery mounds as we detoured to avoid the dangers of the crags. It began to rain steadily again and I knew then that we were in for a long night. We slid and stumbled our way across the rough ground, and Marty took a couple of hard falls onto rock, but we finally made the bealach (the pass between the hills) and in pitch-blackness walked out on the stalkers' path. Small but noisy rivers now gushed across the path everywhere and suddenly I felt the bulk of those surrounding black mountainous bodies closing in, squashing me. My head went light and I fell to the ground.
'Holy shit. You okay?' Marty's voice echoed over me.
'Don't feel so good,' I whispered, sticking my head between my knees.
'Here take this, it'll help,' he said, pulling a bottle from his bag and then me to my feet. There was no choice but to keep walking. 'Back in my navy days,' Marty said, pausing dramatically, 'we danced the sailors' hornpipe.'
He blethered a whole heap of crap to take my mind off what we were doing, and as he talked the rain stopped. The clouds broke up to reveal the Milky Way stretching across the heavens, and more and more points of light appeared above our heads. A beautiful end to our fourteen-hour day.
Shortly before midnight, after an hour's drive to the nearest fast-food stop, Marty quizzed me as he stuffed fries into his mouth.
'How you feeling now?'
'In pain!' I answered truthfully, 'but it's my own fault.' Every bit of my mouth was agony as I mashed a small bite of burger between my molars – the hangover from my illness on the mountain. 'I should have forced some food down earlier, I didn't realise I was running on empty,' I groaned. 'I won't make that mistake again.'
I'd become used to learning from my blunders out on the mountains. There had been an element of uncalculated fright in most of my recent outings. But I'd found overcoming each stressful situation emboldening and addictive. I enjoyed the challenges and being pushed out of my comfort zone. I still hadn't reached the limits and that was exciting, because it meant there was more to come. There was no feeling like it: to experience my own vulnerability and master it. But despite my determination not to make any more mistakes, the mountains were about to teach me the most important lessons of my life, and I was heading for a fall.
### CHAPTER FOURTEEN
### Slippery Slopes
_Near Fersit, January 2012_
It was the last day of January 2012 when Ollie and I arrived at Fersit, a remote hamlet between the Cairngorm National Park and Fort William. Clouds were gathering ominously. Bitter cold hit my face as I got out of the car and something nagged inside as I looked in the direction of Stob Coire Sgriodain and Chno Dearg, the mountains we were going to climb.
Ascending a gentle gradient across moorland of heathers and grasses, I stopped momentarily to take a photograph of a small stream whose tiny waterfall was rendered motionless by winter's spell. Continuing for more than an hour over rough ground, we reached the foot of a steep, craggy nose. We decided to climb up one of its snow-filled gullies. Off I went up the glistening-white chute, my crampons biting into the icy terrain with a reassuring crunch. Twenty feet ahead I was feeling smug as I stopped and looked back at my flailing companion.
'You really ought to invest in crampons, Ollie,' I shouted as I took his photograph, while thinking up some smart-ass comments to tag onto the pictures.
'I'm going to aim for the rockier bit; I'll see you up there,' he called.
Traversing across the steep slope, I realised not all points on my crampons were making contact with the snow; I was like a car taking a corner on two wheels and my weight wasn't being distributed effectively. The nagging feeling returned. With my next step the crampon on my left foot bit in, but without enough purchase, and as I raised my right foot, _whoosh!_ I was on my backside and sliding fast. I couldn't self-arrest because my ice axe was lying in the boot of the car.
' _OLLIE, OLLIE!_ ' I yelled.
And then I did what I knew I shouldn't do. I used my foot as a brake. The metal spikes bit in and stuck fast, but momentum and gravity continued to propel my body forwards. My body flipped over and, with a mouthful of freezing snow, I eventually came to a grinding halt. My ankle felt useless, hot and traumatised.
'Have you got your mobile? You need to call for help,' I said as Ollie approached.
'Do you want to see if we can get you to flatter ground?'
'There's no way I can move. I think it's broken . . . I'll try.' But the foot did nothing at my command. 'You definitely need to make the call,' I groaned.
Putting on my down jacket, I also wrapped myself as best I could in my orange bothy bag and lay on my tilted frozen bed, propped up by my elbows for support. Scared I would start sliding again, I concentrated my efforts on not losing grip.
'If I'd kept going and hit my head I could be dead,' I said, as I looked at the rocks below.
'Aye, how many lives is that you've got left now?'
'Do you know what I'm most mad about . . . no more hillwalking for me for at least six months. I'm such an idiot.'
There was nothing I could do but wait for the helicopter and try to find strength in the face of a difficult situation – as I had done before.
Gran was gone. With his old routine abandoned, time was my grandad's own. He'd spent his entire life looking after his family, so I said nothing when I saw that he had started on the whisky a little earlier in the day. If anything was giving me cause for concern it was his frailty.
He had always had a slight frame. Poking fun, we'd call him a skinny freak. But during the course of his life my grandfather had suffered a great deal of physical hardship. I remember sitting at the dinner table as a kid when he announced, and not without surprise himself, that the ulcers he had been going back and forth to hospital about for the past ten years had actually been stomach cancer. He said that when the doctors gave him the all-clear that day they told him they hadn't expected him to survive six months and called him a miracle.
After my mother died Grandad told me a story that gave me a rare glimpse into his past. 'I was on campaign in Tobruk, in North Africa, during the Second World War when we were captured and transported on an awful ship from Egypt to Italy. On the first day we were given one square biscuit each and a little water, the second day half a biscuit and the third day even less. All of us were cramped in the bottom of the ship. And when we arrived at the prisoner-of-war camp in Italy we were starved for months on end,' he said. 'That's why it annoys me when people say they are starving. Most of them don't have a clue what that is.' He referred to those times as 'the sad days'.
Skin and bone he was, but mentally my grandfather had always been strong and sharp as a tack – I reckoned it was all down to his army training and the daily ritual of his crossword. Friends referred to him as 'Gentleman Jim', and those who had received his epistles often said his command of the English language was almost Churchillian. He was a man who chose his words wisely. And in times of crisis those words were heartfelt and always in the right tone. But now, with each passing week and month he was losing a bit more of his mental faculty. He didn't fuss, and candidly blamed old age for absent-mindedness. But it was the transition from small, innocuous things, like not being able to finish the crossword to being found wandering around near the river when he had forgotten his way home that raised serious misgivings. The doctor diagnosed dementia.
Between us, my aunt and I did our best to care for my grandad, and I continued to walk the dog. An increasingly unenthusiastic Poppy would greet me with a half-hearted wag of her tail when I called to take her out, preferring, more often than not, to lie prostrate at the threshold of Grandad's room, her large, sandy bulk impossible to budge. Like Gran, she too went off her food and, an old girl herself, it wasn't long before she had to be put to sleep.
Seasons changed. Winter approached. A few nasty slips on frosty pavements resulted in my grandad being carted off to hospital in an ambulance, blood gushing from his head or his hands. But the colder weather brought other health troubles. One afternoon when I called to the house, Grandad was asleep in bed. Wine gums he'd been sucking had dribbled from his mouth and now stuck like medals to the sleeve and shoulder of his shirt. His lungs were rattling and he sounded in a bad way, so I called the doctor out; he suspected pneumonia. An ambulance came once again. At hospital Grandad hallucinated a lot; he thought he could see soldiers marching over a hill and a little boy crying, but when he tried to comfort the child the boy slipped further away. I wondered if he was the little boy. Pulling through the worst of the illness, he looked at me as he lay half propped up in the hospital bed.
'Why didn't you just let me go?' he said. Tears pricked my eyes.
'Because I love you so much,' my voice cracked. I felt guilty and selfish, and I hurt enormously.
My grandad now needed round-the-clock care and so arrangements were made for him to move into a nearby nursing home – the last place he'd wanted to end up. I felt I'd let him down: by making him live, and then because of his incarceration. I went to see him almost every day and would read to him from the _Rubaiyat_. Even if he didn't take it in, I found solace in its verses – just as my mother had intended.
Another cold January passed and Aunty Penny and I continued to pay regular visits to the nursing home, but Grandad hadn't been doing so well. One day a member of staff called to ask me to come over. 'I'm sorry, but he won't pull through this time,' the doctor said.
His words had the same effect as if he'd taken his fist and punched it straight through my guts. I arranged for the boys to stay with their paternal grandparents, packed a toothbrush and some clothes, and later that evening I returned to the nursing home. The boys and I had the rest of our lives together. Right now my place was with my grandad. He was my last connection to my past, to my mother. I loved him so much I couldn't bear to leave his side, and so I slept on the floor by his bed for four nights – and I slept better in the home than I had at the flat. I didn't need to worry about the phone ringing with bad news; and I was protected from the wild running of my imagination, because I was there. During the day, carers popped in to change Grandad's clothes and he would silently and obediently comply when asked to turn on his side, sit up or raise his arms. _He can still follow orders_ , I thought, smiling wryly.
On the fourth evening there was a change about the air in his room. I was apprehensive. Light shone from the overhead lamp as the room grew dismal. The gloom felt funereal. Anxiety tugged inside my chest. _What was that familiar odour?_ It took a while to place, but then it came to me. _Mum smelt of it and now it's here too._ It was the sickly, sweet smell of death. Kneeling by his bedside, I held his old hand as his chest rattled and his lungs laboured for air, growing weaker, giving in.
Daylight stretched its fingers through a gap in the curtains, having long since chased night's darkness away. 'Not long now,' I whispered. Exhausted and emotional, I found myself suddenly startled by silence. I held my breath, straining to hear anything. Raising my eyes, I surveyed my grandad's face. He had gone. Whimpering and tearful, I cradled his hand to my face. For minutes all was quiet until, without warning, came a horrifying gasp for air. My grandad's neck and head reached back into the pillow and his mouth opened. My heart beat out of its chest. I felt confusion and panic. _Is he alive?_ I felt embarrassed that I'd been crying. _What if he knew you thought he was dead?_ Holding my breath again I waited for him to exhale . . . it didn't come.
Only moments later, the clatter of the door handle pushing open resounded in my ears and two members of staff bustled in.
'Is it okay to get your grandad's sheets changed now?'
'No!' came my crumbling reply, 'it's not okay. It's too late.'
Early February found me standing at my grandfather's graveside, chilly and dejected as his coffin was lowered into the ground. All the people who had played their part in putting me on the planet were gone. I felt lost in the world; fearful and undone, entrenched in insecurity, totally and utterly alone. I wondered what the point of life was when it seemed so loaded with misery.
Words my grandad had once written to me in a letter, when I'd left home for the first time, echoed in my head, _We must all learn the hard lesson that the clock cannot be turned back. Once a path has been chosen, we've got to follow it, even although the going is rough at times. But I know you are tough enough and brave enough to do just that._
I felt neither tough nor brave. I was angry and embittered. The past could not be altered. I would never again see either my grandparents or my mother, and I could not think of them in the past because the love I felt for them was with me in the present. I would not detach myself from the memories of how they cared for me as their granddaughter and daughter. It was impossible to reconcile life with death! Grief was so hideously, fucking lonely.
The only thing that kept me going was love for my sons. And that gave me the strength to limp on, albeit emotionally battered and bruised, into the future.
I searched the wintry skies for signs of the rescue helicopter. My ears strained against the occasional gusts of wind for the sound of its engine and blades. We'd been waiting an hour and twenty minutes, but it felt an eternity. And then I spotted a small, dark dot in my line of vision, but, with growing dismay, I watched as it circled distantly a few times then left. Paranoia set in. My children would finish their day at school soon and I wouldn't be home in time for them. I needed to get hold of the school and Leon's gran.
'I just spoke to the police again. They've confirmed Mountain Rescue is on the way. Ten more minutes, Sarah. Hang on in there,' Ollie comforted. I was cold in my awkward position, but heartened by the prospect of imminent rescue.
'If this had happened somewhere that doesn't have Mountain Rescue I'd be screwed. I'd have to try and drag myself back to the car on my elbows,' I said.
'I'd leave you. I wouldn't be able to stand listening to you saying how raging you are at yourself!' Ollie joked.
Suddenly the helicopter appeared over the mountain, its powerful blades whirring. It flew overhead, its pilot assessing the geography, while two figures at its open door, dressed in khaki clothing, gesticulated that the helicopter was going to fly around. Watching as it disappeared behind the curve of the mountain, I now felt a sense of urgency; I needed out of this remote place. Once more the helicopter rose into view then positioned its bulky metallic body directly above, rupturing the airflow and causing spindrift to thrust icy particles into our faces. I felt like I was going to slide again and tensed every muscle until my rescuer was winched down. Unclipping himself, he asked my name, age and what I'd hurt before strapping up my useless appendage and tying my good leg onto the splint too. 'We're ready to take you up. Keep your arms down and bend your knees like you're in a sitting position.' I closed my eyes as I was pulled high above the ground, but opened them when the downdraft from the helicopter's blades caused me to spin on the cable. I was frightened my splinted leg was going to thwack off the opening into the cabin.
Once I'd been safely hauled into the far corner, I thought about Ollie. I felt bad that we'd driven all this way and didn't even summit the mountains, that he'd had to hang around in the cold waiting for the rescue team and that now he had to do that long walk back to the car with my backpack as well as his own. In a way I hoped my ankle was broken so that my guilt was justified on account of both Ollie and the Mountain Rescue service – who had scrambled not only the aircrew, but also a team of sixteen who had been making their way to me by road. I'd put a lot of people out. An ambulance was waiting to transfer me to a hospital in Fort William, and as soon as I was there I asked to use a phone. My mind at rest that the boys would be fine, I was able to let the staff get on with their job. Medics crowded around.
'I'm going to need to take your boot off,' a nurse said apologetically.
'Please give me drugs before you take it off . . . _please!_ ' I begged her, with an eyeball-twisting wince.
'I'm sorry,' she said, shaking her head.
Turning my head, I closed my eyes and gripped the doctor's arm and bedrail as the boot was pulled. An X-ray confirmed what had been obvious: my ankle was broken. Back at the cubicle the nurse stripped me of my damp clothing.
'I didn't expect that!' she said, laughing.
'Is that how you hillwalkers dress these days?' asked the surprised doctor as he checked me out in my sparkly silver sequinned mini-dress.
'My Munro tally would have been 150 today. This was my outfit for the summit photo,' I explained.
'I'll take your photo for you now if you like?' said the smiling nurse.
'Can you wait till Ollie gets here with my afro wig? It's in my backpack,' I answered as a junior doctor fidgeted with a cannula, trying to insert it into my frozen, contracted vein. The staff were laughing. But I was now groaning as the anti-sickness fluid went in, ripping into my veins like a cat's claw.
'Don't worry,' the first doctor said, 'you're about to get the good stuff.'
' _Wooow!_ ' I purred in my silver dress as the morphine took effect, but despite the drug the doctor's several attempts at manipulating my bones back into place were still painful. I was at the hospital for over two hours before Ollie arrived, and four hours before they discharged me, telling me that I'd need to go home and attend hospital in Inverness the following day. My ankle was going to need surgery.
Ollie drove me back to Nairn.
'How are you going to get up four flights of stairs?' he asked. 'Do you want me to support you so you can hop?'
'I think it'd be less traumatic for my ankle if I shuffle up the steps on my butt, but thanks for the offer. And you don't need to hang about waiting for me to get to the top, I'll be fine,' I assured him: I'd wasted enough of his time.
'Okay. Well, if you're sure? I'll get off. Take care,' he said. And with that he left.
I hadn't shed a single tear all day – I'd sworn a lot but hadn't cried, until now. The exertion of getting up each dusty, dirty step of the communal stairwell was too much and I sobbed bitterly. A combination of the drugs and pain, frustration at how unnecessary the accident was and the question of how I was going to manage was overwhelming.
But although it all seemed so bleak, the accident was like the squeeze of a gun's trigger, and I was about to embark on a journey out of the blackness.
## PHASE THREE
## STEPS IN THE SUNSHINE
'Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter — and the Bird is on the Wing.'
_The_ Rubaiyat _of Omar Khayyam, VII_
### CHAPTER FIFTEEN
### One Thing Leads to Something Else
The day after my accident, I asked my friend Paul to take me to hospital. I'd met him several years earlier when I was still with Sam. I'd needed to arrange some cosmetic repairs to the flat's interior, damaged by the leaking roof, and Sam had noticed him carrying out some work on the property next door. Paul agreed to come to the flat early evening and work till nine o'clock most nights, weekends too. He put on new skirting boards and facings, and I would sand, stain and oil them. He replastered and taped walls and ceilings for me to paint. He worked hard and I liked to watch. I liked how he wasn't full of typical male bravado; I liked the look on his face as he scribbled measurements and worked out sizes. He was competent, hardworking and honest, but best of all he spoke kindly and was nice to me. The worse things had become between Sam and me, the more I looked forward to times when I'd see Paul.
We'd stayed in touch over the years – whenever I needed a hand, whether with a bit of carpentry or tinkering with my car, it was him I turned to. And he always made himself available to me. So when I called him after my accident, he said he was happy to help.
For the first couple of weeks after surgery I was doped up on industrial-strength painkillers, and by mid February I felt toxic. All the while Paul had popped in and out to make sure I had what I needed, and that the boys were okay. One of my neighbours, who'd heard about my accident, also visited the flat, bringing us home-cooked meals; she even insisted on washing dishes and tidying up the kitchen more than once. I was bowled over by the kindness and generosity of this woman, who had been a total stranger before this. I found her motherly presence comforting, and her help invaluable; I was so grateful for her visits.
Fed up with taking pills, I reduced my dose. Continuous fatigue eased off and I soon realised that the good thing about being laid up meant there was plenty of time for reflection.
I took stock of my life, of the things that were making me unhappy, and the things I had the power to change, and the most obvious one was my career. After years of trying, I had to admit that teaching just wasn't for me. So I was going to forget it and return to my painting. I felt guilty for abandoning the path Mum would have picked for me; in some ways, even though she had been gone so long, I still sought her approval, but my time on the mountains had given me more confidence in myself and I was ready to cast aside idealised memories and choose a way that was going to make me, and therefore my children, happier.
Paul made me a huge easel and I was fired with enthusiasm. He'd also made other contraptions for keeping my leg raised at a comfortable angle on the sofa and the bed. I was coming to rely on him more and more, not just because he was helpful, but because he was patient and kind. It had only been a couple of years since I'd separated from Sam, but I began to muse what it might be like to be Paul's girlfriend.
When the boys were at school I continued my reading. I re-read my art school 'bible', E. H. Gombrich's _The Story of Art_. I read the biography of the Mexican artist Frida Kahlo, and then I flicked through _501 Great Artists_ and was drawn to the image _Baby Giant_. The artist, Leonora Carrington, based her artworks on Aztec myth and mysticism. Before my accident I'd done a painting influenced by my own travels through Peru. These women inspired me to learn more about Latin American mythology. The reading I'd done supported my gut instinct. I was going back to my art, a world where I knew I belonged.
At last I was beginning to accept myself for who I was and not who I thought I should be.
Pondering what I might like to paint, I came across _The Origin of the Milky Way_ by the Venetian artist Tintoretto and decided to make a colourful reproduction, to scale. The painting reminded me of my interest in Greek mythology, the universe, the stars from which we all come and nature itself. I looked out a varied selection of reading material to digest during long hours of incapacitation on the sofa – from _Granite and Grit_ by R. Turnbull and _The Greek Myths_ by Robert Graves to the _Children's Encyclopaedia of World Religion_. I find religion fascinating. Within its different branches there is such a rich diversity of practices and rites, yet commonalities also exist between different world religions and ancient myths. I felt excited by the possibility of working on a body of paintings, drawing from the theme of religious creation stories.
It was while I was carrying out more in-depth research on Hinduism that I had my next brainwave. _I should go to the Himalayas!_ Excitement about an overseas trip grew; it would be ideal, combining a study trip with the enjoyment of mountains! And while it was not my initial reason for going, it did not escape my memory that this was where Gerry had died. I could visit his final resting place, and something about that felt like a fitting tribute to the man who had had such a profound impact on my life.
By early March I was able to get out on crutches. It was great to be more mobile again; I could sit properly and walk about the flat and I got to work on my new paintings. I even managed a couple of nights out when Paul drove me to and from Inverness to attend evening lectures by the climbers Stephen Venables and Simon Yates. Captivated by the former's description of his Himalayan expedition, I bought his book _A Slender Thread_ , which I positively devoured and which revived my lust for a return to the hills.
One afternoon, as I rested on the sofa, I gazed at a painting Gerry had sent to my mum from Kathmandu. I'd never been fond of it, but I'd kept it as it had been special to her and that made it special to me. I looked at the multi-peaked, icy mountain against a lime-green wash and powder-blue sky. I could appreciate the artist's faithful rendition of the soaring bird of prey, and I pondered the painting's two signatures, R. L. Fleming and H. Poudyal. Fleming was leader of the 1975 expedition and I was sure I remembered my mum telling me that it was a friend of Gerry's who had painted the picture.
On top of the wardrobe in my bedroom was a small, sturdy black case that had belonged to my mum. Retrieving the dusty case from its time-honoured post, I unclicked its silver clasps. Immediately the nostalgic and familiar scent of old paper wafted up. Once upon a time all Mum's secret stuff had been kept in here – private letters and photos. Before she died she had thrown out most of its contents but a few items survived: airmail letters from Gerry, a slightly out-of-focus Polaroid (the only picture of the pair together), photographs of him on expedition and, the item I was after, the bottle-green notebook detailing his Himalayan climbing exploits.
I read about climbs Gerry made to the Atlas Mountains, Morocco, to Tirich Mir in the Hindu Kush range of northwest Pakistan and to Indrasan in the Himachal Pradesh. But it was the 1970 Annapurna expedition in Nepal that made me feel most proud of his achievements. Annapurna is the tenth-highest mountain in the world; Gerry's ascent of it was a first for a British expedition team, and only the second in the mountain's history. (The first ascent of its north face – and in fact the first ascent of a mountain higher than 8,000 metres – had been made by the French mountaineers Maurice Herzog and Louis Lachenal twenty years earlier, in 1950.) But the success of Gerry's climb to the 8,090m peak was overshadowed by the first ascent of the mountain's steeper and more dangerous south face, just one week later, by two members of a team led by Chris Bonington.
Digging around inside the black case, I found the letters Gerry had sent to my mum, and also a Worcestershire and Sherwood Foresters Regimental Journal. Flicking through it, I stopped at a page that showed Gerry's face smiling back at me in black and white. Sadness that I'd been robbed of him as a father seeped back in.
The article reported on the accident in the Nepalese Himalayas that killed him; about how it was thought that Gerry and his climbing partner were hit and swept off Nuptse's south face by a heavy rockfall and that it was too dangerous for their bodies to be recovered by other expedition members. I learnt that he was the most distinguished mountaineer ever to have served in his regiment; the Colonel-in-Chief, Princess Anne, had conveyed sympathies to my mother, but it was a quoted tribute by General Sir Gerald Lathbury that I read through twice: 'We first met in Gibraltar, where I was Governor, and he was serving with his regiment. I heard that among his many activities he was interested in birds, so I roped him in to help in the observations I was carrying out . . .'
A penny was dropping. My brain cells were sparking like the friction between flint and stone. Excitedly I returned to Gerry's journal and scanned through the first and then second entry. His knowledge and understanding of birds – the feathered variety, as he had joked in one of his letters to Mum – was impressive, but more importantly his words, fluttering through the grey matter of my mind, had metamorphosed into understanding. It was as if I was stepping out into the glaring-bright light of day after spending almost fifteen years locked in darkness. I'd wanted to read about his mountain expeditions because I was interested in pursuing my own adventure, but his stories were the key to my liberation.
This was my eureka moment.
I'd often wondered where my mother's interest in ornithology had come from. Now I knew! Gerry had died, but by keeping his letters, photographs and journal, by walking and by surrounding herself with all things birds – from the painting, cushion covers, to the tapestries she stitched, and even school projects – she was preserving his memory. And, of course, there was the bangle.
_When Gerry died, Mum shut out the world; she talked to no one. Just like I did when she died. She pursued a career in teaching to keep her mind busy and block out the misery of loss – probably the reason why she'd advised me to choose a similar career. Gerry's interest in ornithology became hers. And her interest in nature continues through me_ . . . it all began to make sense.
The people we love are the blueprint for our lives. At long last it felt that not only was I beginning to know who my mother was, but who I was too.
I dug out old calendars from a drawer in the bureau. Mum had often written down notes or thoughts on them and I wondered if her words would take on new meaning, but as I flicked through each I noticed something else. Every 9 May – the date Gerry had died – was circled in pen. She had never spoken about missing him, at least not to me. And though she'd enjoyed a few years of happiness with Frank, I was moved by Mum's enduring love for Gerry: I realised how little I really knew about this man who, despite his absence, had continued to have such an impact on our lives. I wanted to find out more about him. And suddenly I had an idea, a way of gaining closure and finally laying my mother's memory to rest. After years of holding on to her ashes, uncertain what to do with them, I knew what I needed to do.
In life Mum had been denied a future with Gerry, but in death I could reunite them. I decided to take her remaining ashes to Nepal and the Himalayas.
### CHAPTER SIXTEEN
### Protecting Next of Kin
By the autumn of 2012, and my ankle having survived its test run, I'd got back on track working my way through the Munros. In the past two years I had shared long mountain days with my friend Mel. We were first introduced to each other when I'd come home from Cyprus, but although we'd been in and out of each other's lives since then we hadn't been particularly close. One day, however, we bumped into each other on the street and while we were chatting the subject of hills came up. Realising we both enjoyed walking, we made a plan to go together, and from that moment a more affectionate and meaningful bond grew between us. And eventually we discovered we had a lot more in common than just a mutual passion for hillwalking. At about the age I'd been when I lost my mum, she had become estranged from her parents, so in a way she understood how it felt to be separated from family.
The more walks we did, the deeper we delved in conversation, till it came to the point where I found myself trusting her and felt I could honestly open up to her. This had been a gigantic step, as I'd felt anxious that she might distance herself from my company. But when I told her of those bad times, when I had been so low I'd felt like ending it all, and of how pointless life seemed, she surprised me by confessing that she too had felt these same things. I remembered something Mum once said about having at least one good friend, and though it had taken many years, I felt with a growing certainty that Mel, at last, was that person.
I was the most content I had been in over a decade. My work was fulfilling, I had my next big project to plan, my children seemed to be growing up happily. And then there was Paul.
Paul had started to accompany me onto the hills, too. The more time we spent together, the closer we got. Slowly but surely I was falling in love with him. I knew he'd harboured feelings for me for some time too, but I didn't want to tell him how I felt until I was absolutely certain. The relationship I'd had with Sam had been a disaster, I'd dived right in. I didn't want a repeat of that, so I needed to take my time and be sure I wasn't gravitating towards Paul's love just because it was there. We'd known each other well for several years and I didn't want to throw this valuable friendship away. And, importantly, I needed to know if my children would approve.
'You like Paul, don't you?' I'd asked Leon one day.
'Yes. I like him as your friend. But I don't want him to be your boyfriend.'
Surprised by his perspicacity, I asked, 'What makes you think I want him for a boyfriend?'
'I've seen the way he looks at you.'
'Why wouldn't you want him to be my boyfriend?'
'I like things the way they are. It would be different.' The shine of my happiness dulled.
'You don't have to worry,' I said quickly. 'We're only friends.'
'Yes, but one thing leads to another,' my precocious but intuitive younger son commented.
I'd already spent weeks agonising over whether or not I could trust my feelings and not screw things up, and now Leon's opinion gave me further pause. But Paul made me feel happier than I had in years and so, finally throwing caution to the wind, I gave in to my feelings. On New Year's Eve Paul and I took the plunge and became a couple. I decided to see how things went between us before telling my boys, though; while Marcus would be accepting, I already knew how Leon would react, so I expected a challenge ahead. But, remembering my own experience of how I'd first reacted to Frank, and how much I had regretted that later, I had to trust that Leon would eventually come around to the idea of Paul and me as a couple.
At the start of 2013 I set about finding out what I could about Gerry and his final climb. I returned to the little black case. A photograph of the memorial cairn, with the towering south face of Nuptse behind, and an airmail letter from Jon Fleming, the expedition's leader, were all that I had to go on. But at the head of the letter Jon had written that he was then a member of the Parachute Regiment; with this snippet of information I sent an email to the Ministry of Defence asking for their help to locate Jon and anyone else who knew Gerry.
At this time I also searched online for treks through the Khumbu Himal and found the 'Three Peaks, Three Passes' organised by Jagged Globe. The acclimatisation programme they were offering was ideal and, importantly, the main trail to Everest Base Camp would pass close to the valley that would lead me to Nuptse. I contacted Uncle Jimmy to tell him my plan.
'I'm going to take Mum's ashes to Nepal, to scatter her with Gerry; what do you think?'
'I couldn't have thought of anything better, Sarah.' My uncle's words were endorsement enough.
Without further hesitation, I contacted Jagged Globe and booked my place on their 2014 trek. The ball was rolling. Two weeks and thirty-three emails later, I had established contact with Jon Fleming and Henry Day – Gerry's climbing partner on Annapurna. Everything was coming together.
Henry Day and I stayed in regular contact, and I was soon invited to meet him and his wife Sara. During May I spent a weekend at their Cambridge home, where I was welcomed warmly. While I was there, friends of Henry and Sara – John Peacock and his wife Sheila – arrived late on Saturday afternoon. I liked them immediately. They were both quite short in stature. John's white hair was smartly coiffed, his crinkly blue eyes shone with generosity, and his kindly manner reminded me so much of my grandad. There was a motherly presence about Sheila that I felt instantly, and I almost wanted to hug both of them there and then. John had been with Gerry on Nuptse. He knew of my visit and had brought slides to show me. The four made me feel as if I was family, like I'd come home for the weekend and that I belonged.
I had brought the green notebook, some letters, a few photos and the only surviving wedding invitation intimating the details of Mum and Gerry's marriage.
'I hadn't realised that the relationship between your mum and Gerry had got so far; in fact I had no idea that they were to be married at all, let alone so soon after Nuptse,' said Henry.
'Did you know about my mum's relationship with Gerry?' I asked John.
'It was only at Gerry's memorial service that I was first introduced to her,' he answered.
'I remember meeting your mum,' Sheila said. 'John and I weren't even married at the time so I felt quite honoured that I was asked to go to Gerry's memorial. Your mum attended the service with Major Ian Leigh, and afterwards we all stayed overnight at John's sister's house. I told your mum that I'd met Gerry just the once, the night before he and John left for the Nuptse expedition. I'd been planning a cosy night for just myself and John at the opera to see _Cavalleria rusticana_ , but John had asked if Gerry could go along too. It was the first opera Gerry had ever been to. Your mum told me that he had written to her, telling her all about it and that it had made a big impression on him. I was so happy to have given him that experience.'
I liked to hear Mum being talked about in conversation, but it struck me as more than a little odd that neither John nor Henry was privy to my mum and Gerry's wedding plans. The three men were good friends, so what was it with the secrecy? A host of nasties started whirling around my head as I puzzled over why Gerry had been a dark horse. I wanted to know why he'd kept the news of his marriage quiet – I was suddenly paranoid it had been because of me, because he was ashamed of Mum having an illegitimate child – but neither John nor Henry could tell me.
After the delicious dinner that Sara had cooked for us, Henry, Sheila and I settled down to listen to John recount with slides the expedition's journey to Nepal, the days preceding the tragedy on Nuptse, and the terrible unfolding of the accident that led to Gerry's death. I was sitting next to Sheila, and she asked me:
'Why now, after all these years?'
Her directness caught me off guard, but I tried to explain.
'Because I have been so utterly miserable since Mum died. I don't really feel I've ever got over it, in a way. Then I discovered Gerry's journal. I read it and suddenly realised how little I really knew about him, about them. He was a great love of my mum's life. Now I need to know more about the kind of man he was and the relationship they had.'
Sheila nodded.
Whisky's nostalgic aroma scented the room as Sara brought in a dram for Henry and John. I let its warm memories of Christmases past fill my lungs as John began his story. He spoke and we listened without interruption.
'Directed right around the airfield at Heathrow, five of us arrived at a discreet little lounge, out of sight of the public view, and waited for the Comet to appear. We were the advance party on our way to Nepal and scheduled to fly, hitch-hiking by courtesy of the RAF, as supernumeraries with HRH Prince Charles, on his way to represent Her Majesty, his mother, at the Coronation of King Birendra in Kathmandu. HRH was our Expedition Patron and seemed more than happy to help. The few days we spent in Kathmandu were fascinating, as was the coronation procession itself. But we left all that behind and suddenly found ourselves camped above a native village, not a light in sight, in stark contrast with the city still celebrating miles behind us.
'Ahead lay nearly two weeks of steady tramping, across the grain of the country, over and down ridge after ridge. Crops in the valleys, and rhododendrons scarlet on the hillsides, formed an ever-changing backdrop to the human activities. Gerry was in seventh heaven as he could really indulge his interest in birds. He would set off every morning a good two hours ahead of the rest of us so he had time to study them. Long before we reached Namche Bazaar, the main Sherpa centre, he had spotted and identified more than 300 different indigenous species. Even so, it was not until we had established our base camp, close to the Nuptse Glacier, that he finally caught sight of the huge Himalayan vulture that so fascinated him, the lammergeier. It has an enormous two-and-a-half-metre wingspan; it dwarfs all other species.
'What happened next remains a mystery to us all. Two days earlier we'd last seen Gerry and Richard, his climbing partner, around mid-morning, climbing steadily towards the top of the couloir, the steep ravine, and the start of the relatively short rocky ridge leading to the summit. So we watched and waited, hoping against hope they were still climbing but somehow hidden from our view. But they seemed to have simply disappeared.
'The following morning, with no radio call from the summit or elsewhere, we were forced to accept the worst. We sent a signal via the Embassy in Kathmandu to the Royal Nepalese Army, and they immediately offered to send a helicopter to help search the following day. The Alouette helicopter whisked two of us up the glacier and into a huge amphitheatre of rock walls, to land on the ice just a few hundred yards below the crevasse between the glacier and the rock. Above reared a huge, shallow rock face, almost vertical in the lower hundred feet before leaning back at a slightly shallower angle towards the similarly steep mixed slopes above: an unremitting and more or less continuous line leading eventually to the final couloir and the route to the summit. Anything falling from the couloir would probably continue down this chute. Finally we discovered Richard's boot on the glacier surface some distance short of the bergschrund, confirming that we had come to the right place. A few more yards enabled us to peer over the edge to confirm our worst fears.
'Nigel and I found them, Gerry and Richard, lying quite close to each other on an icy shelf some 40 feet down. Below the shelf the crevasse continued into the depths. Both bodies were cocooned in coils of rope, suggesting that, whatever had caused their fall, they must have rolled for some distance down the steep slope of the snow-filled ravine.
'Some yards to the left there appeared to be a relatively straightforward means of climbing into the crevasse, on a level and connecting with the shelf on which Gerry and Richard lay. While one of us organised a belay, securing a rope to an anchor to enable us to get back up and out of the crevasse, the other set off to climb down and along but had barely started before Pierre, our French pilot, 'buzzed' us, the prearranged signal that he was running short of fuel. Scrambling quickly out onto the glacier surface, we made our way across to the aircraft and climbed aboard. Wasting no time, Pierre took off back to Base Camp, asking no questions: our nods and the looks on our faces were doubtless enough.
'With most of the expedition engaged well up the route, there were only a few people left at Base Camp. In any case, the trauma of our recent discovery perhaps obliterated all but the most significant details, including Pierre's gentle courtesy as he left us alone immediately after we landed, and again that same consideration as we made our farewells and very grateful thanks before he left to fly back to Kathmandu. For a little while I busied myself with routine things, trying the while to come to terms with events, before realising that Nigel was no longer in the camp.
'Even at 17,000 feet it was a very hot morning, but I found Nigel just a few hundred yards along the gently sloping valley leading to Pokalde. He was sitting on a boulder, his chin in his hands and obviously distressed. Neither of us spoke; there was no need. How do you reconcile such a perfect day in such magnificent surroundings with what we had both witnessed just a few hours earlier? Then a lammergeier appeared, almost it seemed from nowhere, to fly majestically past us on the same level and barely twenty yards away, its huge wings stretched out to bear it effortlessly past until, gently turning, it came back to us, a little higher now, only to continue its turn, complete the circle and commence another, higher still. We gazed, fascinated by this magnificent bird, as slowly it spiralled up and up and still up for minutes on end. Losing track of time, neither of us could follow it further as it grew smaller and ever smaller. One moment it was still there, the next it had simply disappeared into the heavens.
'We sat there, speechless. Both of us knew that Gerry had been more than enthusiastic about birds of all kinds; even more significant was the fact that he had said only a week earlier that the bird he most admired was the great bearded vulture, the lammergeier. He had added that, if there was such a thing as reincarnation, he would like to come back as, yes, a lammergeier.'
John retold the tale with composure and clarity, but now his voice wavered, overcome with emotion as he remembered the sadness – we all felt it. A heavy silence bore down on our small group as we sat around the open fire. Seeing slides of the journey that the expedition had taken gave me a good idea of what to expect on my trek the following year, but I couldn't shake off the concoction of distress, horror and anger as I stared at the final slide of Nuptse's forbidding and hostile south face. After Gerry and his climbing partner died, a second summit bid had been made. Two more men perished, this time lower down the mountain. It was only then that the pursuit of victory over Nuptse was abandoned.
'I have a painting at home that was done by the expedition's leader, Jon Fleming. It's of a bird of prey flying in front of a mountain,' I said, breaking the stony silence.
'How interesting, it would be fascinating to see it,' said Henry. When I showed him a photograph of the painting he exclaimed, 'That's Annapurna!'
'And the bird is undoubtedly a lammergeier,' John added. 'I remember Gerry talking about having met a father-and-son team in Kathmandu before the expedition set off for Nuptse. Yes, it's coming back to me now.' The surname Fleming was mere coincidence and the painting was not done, as I had previously assumed, by the expedition's leader. 'Gerry and I had supper with two delightful Americans – natural historians who were producing a definitive bird book on Nepal. Gerry had first met them in 1970 and had corresponded with them since; they had struck up a bond of friendship cemented by a mutual enthusiasm for ornithology. Gerry introduced the Flemings to me in February 1975. R. L. Fleming and his son of the same name lived and worked together in Nepal; their base was in Kathmandu.'
After a little further research on my part I learnt that Robert Fleming had been studying birds for twenty-five years and, at the time of his meeting Gerry, he was in the process of having a publication entitled _Birds of Nepal_ put into print. He himself was also the subject of a book, _The Fabulous Flemings of Kathmandu_ , which told the story of how he founded the first modern hospital in Nepal in 1956. Hem Poudyal, the second signature on the painting, was the artist; he had devoted three years to Robert Fleming's project, persevering with the meticulous depiction of approximately 800 species of birds. Gerry had commissioned the painting, now in my possession, and had instructed that on completion it be sent to my mother.
Discovering its story gave me a new sense of appreciation for the painting. It was no longer some kind of bird against a gaudy-coloured background; it was the magnificent lammergeier flying triumphantly before Gerry's prized and conquered Annapurna.
Everyone went to bed. I sent a text to Paul, sharing my day's news before switching off my phone. I wished I could switch off the thoughts in my head too, but as I lay there I couldn't seem to stop my mind from returning to the question of why Gerry had kept quiet about marrying Mum.
After breakfast Sara and Henry showed Sheila around the garden, which gave John and me the opportunity to talk some more about Gerry.
'How do you think they came to fall?' I asked.
'That will always remain a mystery. We can never know, but Gerry died doing something he loved.'
'He loved my mum too though.' I told John about my mum dying and about how she had asked for the bangle. Tears flowed from my eyes as I opened up. 'I'm sorry for crying.'
'Crying is nothing to be sorry for. It means you have been loved and you love,' John said gently. Pulling myself together, we went outside to take a photograph of us all before John and Sheila left for home.
Sara and Henry went to church, leaving me to root through a couple of boxes of papers and photographs that Henry had dug out for me – I wasn't looking for anything in particular, but perhaps there might be something that would tell me more about Gerry. It felt a nice kind of weird being left in their house on my own; a privilege to be trusted because, up till now, I had been a perfect stranger.
There was such a lot to sift through, many duplicates and endless lists of equipment. Scanning over a couple of pages from the expedition newsletter, I saw it contained information that could be useful to photocopy in preparation for my own trek the following year – stuff like the types of flora and fauna that are found locally, species of birds and names of settlements.
An airmail letter from Jon Fleming to Henry caught my eye. It had been sent from Nuptse Base Camp and simply correlated with everything else I'd heard and read, while also expressing condolences and a request not to intimate its contents to the press – 'to protect the feelings of next of kins'. I continued to flick through papers such as a copy of the Memorial Service Distribution List, which named all the relatives and military personnel who attended it at Worcester Cathedral. There was a postcard of Mount Ama Dablam and, turning it over, I instantly recognised Gerry's handwriting.
_Camp II – 18,000 feet 22 April_
_Dear Sara and Henry,_
_Many thanks for your letter – most welcome. First and foremost, delighted to hear of Katherine's arrival; my congratulations to you both – great news! Yes, despite too many godchildren already, I am most happy to accept but June may be a v difficult month for me. Apparently we have now reached the most difficult part of the climb. Expedition is far too large, as I mentioned to you before I left UK. I hope your Everest Log Plan reduces members!_
_As ever, Gerry_
Weird! Of course June would be a bit tricky for him – he was getting married to my mother! Confusion bubbled again. Why was their marriage such a big secret? With not much time left available before my late-afternoon flight I had to be swift with my photocopying. I'd tidied up by the time Sara and Henry returned and was making some last copies in Henry's office when he came in and rummaged around in a cupboard. 'Here, I'll loan you my Royal Geographic Map of the Everest Region. It should come in handy, you'll be able to gauge the route your trek will take in more detail – always better to know where you are going even if you don't know what you will encounter on the way.' I thanked him.
I had gone to Cambridge hoping to learn more about the man my mum loved, the man who would have brought me up and, according to my Aunt Penny, had vowed to look after me and love me like I was his very own. But while I took away some answers, I had also found more questions.
### CHAPTER SEVENTEEN
### History Repeats Itself
_Fisherfield, July 2013_
July 2013 was hot. The group of Fisherfield Munros, located in Wester Ross, were fairly remote, but I'd been keen to go camping there for some time. Heaving on 40lb rucksacks, Paul and I set off along Loch Maree. It was afternoon; the dry heat was already intense and as wearing as the clegs – pellet-shaped flying insects whose vicious bites kept drawing blood from the bare skin of my arms and legs. They were driving me nuts. We beat our way through high sections of tall bracken on the narrow trail, trying not to trip over protruding rocks and tree roots. It took us an hour before we reached a bridge that signalled the long haul up Gleann Bianasdail. A series of stunning waterfalls had carved the rock into square, flat platforms, and water cascaded over the edges like the veils of a thousand brides.
Deep into the glen we were hemmed in by the secluded valley walls, and, struggling with the weight of our heavy packs, our march had slowed to a plod as we baked in the heat. Loch Fada finally came into view, but any hopes that we were nearly there were seriously misjudged. Still, it was only three and a half hours after leaving the car at Incheril that I finally plonked my backpack down on the shingle beach at the loch's head. It didn't rest there long. There was barely enough time to appreciate the beauty before it was spoilt by the entire Fisherfield contingent of clegs and midges on their search-and-destroy mission; the place was alive with them, and I danced and swatted in an attempt to fend them off. Throwing the tent up at breakneck speed, we flung our packs, and ourselves, into the insect-free zone.
At six o'clock we left the sanctuary of the tent for our evening hike up to the summits of A' Mhaighdean and Ruadh Stac Mor, hoping that our winged foes were in abeyance. We trudged along over uneven, rough and boggy terrain, the heathers scratching around our ankles. While it was still light I took bearings from landmarks we picked out – the nipple on top of a hillock, the right-hand side of two rounded lumps on higher ground, and so on. I was pretty good at this map-reading stuff now. Slioch was reflected in the loch, and as we climbed higher more Torridonian giants soared into view. Day was fading. Quietness instilled a perfect peace and I paused to watch two young deer silhouetted on the ridgeline. Absorbed by the task of climbing uphill over the rough heathers, I got quite a fright when the head of a stranger, popping up over the top of a tent, suddenly came into view. I was disappointed that we didn't have the peak to ourselves so Paul and I pressed on towards the second Munro.
As we walked along the ridge the sun's rays washed the surrounding land and mountains in glorious colour. Dubh Loch and Fionn Loch glittered like swathes of silver ribbon far below. Ahead the red-sandstone cliffs of Ruadh Stac Mor were set aglow in a salmon pink. In the near distance An Teallach resembled a fortress, with beetling crags and a ragged ridge that punctured the skyline like scores of broken glass bottles. Atmospheric conditions created blocks of colour that made the complex architecture of the mountain scenery look almost two-dimensional. The night was intoxicating, and I was definitely in love.
Though we had been discreet, Paul and I couldn't hide our relationship from my children for ever. As expected, my youngest son hadn't taken the news so well.
'You said you wouldn't have him for a boyfriend. You lied to me!' he cried. His distress struck a chord. I'd put off telling my boys about the relationship because I'd wanted them to get used to having Paul around, to get to know and like him more and to protect their feelings – probably the same reasons why my mother had kept quiet about Frank; she had wanted me to accept him in my own time.
'Leon, I'm sorry,' I'd said gently, 'things changed.'
'Why couldn't you just stay friends?'
'I suppose because we spent more time together. I trust him. He has shown us all kindness and has given all of us help whenever we've needed it – how many times has he come to catch spiders?' I asked. But my attempt to raise a smile was met with a scowl. I realised that whether I'd been honest about Paul or not, the outcome remained the same. Just as I had vied for my mother's attention, my son sought mine. It seemed I couldn't stop history repeating itself.
The trail began to thread its way up through cliffs, steeply in places, over loose sandstone screes. I enjoyed the easy scrambling to reach the second summit. I pulled on my down jacket, yanked on some leggings and took the squashed roll and can of Jack Daniels from my backpack. As I sat on a summit rock, warm air caressed my face and dusky pinks and mellowing violets coloured the sky, casting warmth onto the mountains. I found it hard to believe that these towering bastions, with their shattered spires and jaggy ridges like filed teeth, could be such hostile environments and the takers of life. Soaking in the beauty, I thought about Gerry, Mum, my grandparents and my children. And I thought about the future. Paul and I talked.
'I know things are a bit tricky with Leon, but if we love each other enough we can ride the storm, can't we?' I said.
'Sarah, I love you and I always will. I waited so long for you, I'm not going to stop now.' Paul's answer was everything I needed to know.
'So, what do you think about me going to Nepal? Will you miss me?'
'Yeah, course I'll miss you. But it's a big deal for you and it's what you need to do, isn't it? Anyway, you'll only be away for a month.'
'Well. I was wondering if maybe you would want to come with me. What do you think?'
'Do you want me to?' he asked with surprise.
I could tell he was touched that I'd asked. That was what I also loved about him; he was so unassuming. We toasted our celebration with a tinny clunk of our Jack Daniels and watched the sun as it set. For the first time in my life I hadn't rushed in, and, in spite of all my imperfections, I knew I was truly loved.
At ten-thirty it was time to descend. Returning in the dark was in some ways easier than walking during daylight. We had to rely on the bearing I took and walk on it faithfully, whereas in daylight, although I use my map and compass, I'm also observing the lie of the land, trying to pick out an easier line. Of course night walking was not without its pitfalls. All was going well until I lost my left and then my right foot into ankle-deep bog, but silvery moonlight reflecting in a small lochan nearby lifted my spirits and that, combined with our earlier conversation, made the night feel enchanting as we squelched blindly on. I tried to avoid further soakings as blisters gnawed, my feet rubbing in wet socks against my boots. But no amount of discomfort could spoil my contented mood. At two in the morning we were back in the tent eating the last of our pasta, and more than ready for sleep.
Paul and I woke up gulping for air. Though it was only seven in the morning the sun's heat was already burning through the thin nylon tent fabric. It was suffocating, but I didn't dare unzip the flap when I saw the tens of thousands of black, pinhead-sized bodies splattered against the green outer shell. Desperate for air, I opened the inner flap, squashing my mouth and nose against the midge net. A few gentle wafts of air off the loch gave momentary relief, but the stifling heat became torturous. Packing up swiftly, we braced ourselves for the apocalyptic attack as we emerged from the tent . . . and right on cue the dancing and swatting routine began. A calm scene by comparison, Slioch was mirrored in minute detail in Loch Fada as waters lapped idly against the shingle with neither a whisper of wind nor a cloud in the sky.
We left the tent and made our way across boggy ground and scratchy shrubs. Jumping over squelchy, dark-brown hag onto dried yellow sprigs of grasses, we made a direct line to the south ridge of our first Munro. Once we gained its lower section we had to pick a way across an expanse of glacially exposed flat rock. Too much sun, tiredness and not having eaten was making me feel sick and, remembering my previous experience, I had to stop and force some food down. After ten minutes we pressed on slowly, conserving energy – trying not to perspire too much. My feet, in their wet accommodation, were already in pain.
A footpath took us the rest of the way to the summit and we ate an early lunch in what little shade there was. My bread was difficult to swallow. It felt dry and rough as it passed slowly down my gullet. It was only eleven and the worst of the heat was to come, but superb views to An Teallach and the full Fisherfield horseshoe compensated for my minor physical difficulties. And because John Peacock had told me that these mountains were a favourite haunt of Gerry's, they held more meaning for me.
As Paul and I walked down the wide, grassy ridge we saw folk camped out on the col below. More people were making their way up lower slopes from Loch Fada. The hills were busy.
'John Peacock told me that back in the 1950s his friend Mike O'Hara was the first man to have completed the three peaks, Ben Nevis, Scafell Pike and Snowdon, in under twenty-four hours. And apparently Mike loved climbing here too. There were no paths or Munro-baggers back then,' I continued. 'It must have been exciting to explore these hills; that's real adventure, isn't it?'
'Yeah, it's too easy today. There's all those online sites with their free route descriptions and maps to download,' Paul said.
'I wish I'd been an explorer.'
'Being with you _is_ a daily adventure. I doubt I could take much more excitement,' Paul joked, and I gave him a shove, pretending to be put out. I was so happy. Respite from the sun's infernal blaze as we walked in the shadow of Meall Garbh increased my pleasure, and as luck would have it a nice, fat cloud lazily pulled across the sky, blocking those violent rays for most of the battle up the ridge to our second summit of the day.
I felt like the little mermaid. My fiery feet were in tatters as I relieved them of their Gore-Tex prisons to totter across the summit stones. My vest was soaked, so I peeled it off too and sat in my bra and skirt at the cairn. I eyed up Sgurr Ban. Our third peak lay just over one kilometre away.
'We're running low on everything, Sarah. There's hardly any water left and this heat's a killer. I know you won't want to, but I think we should abandon it,' Paul said.
It was hard to turn my back on that last summit.
'The hill isn't going anywhere. We can come back and do it another day. We've still gotta get down and dismantle the tent, and that's before the long walk out too. The boys have school tomorrow and you don't want to be too late to pick them up from their grandparents,' he added. Paul was right. I was all gung-ho and that's how accidents happen, and he was the voice of reason, a calming influence: he was the yin to my yang.
### CHAPTER EIGHTEEN
### Dark Horse
_The Inaccessible Pinnacle, September 2013_
Crepuscular light filtered through the sky at about quarter past seven as we sped along the road towards Skye. The Glen Shiel Mountains made a good barrier against the rising sun and created a stunning silhouette, but we were off to climb the hardest mountain summit to reach in the British Isles, the Inaccessible Pinnacle, or In Pinn. To tackle the In Pinn we had needed to learn some basic rock climbing, so we'd taken a weekend course near Betws-y-Coed. The same skills could have been taught closer to home, but because I'd discovered, through correspondence with another contact of Gerry's, that he had enjoyed climbing in Snowdonia I'd decided it was there that Paul, Marcus, Leon and I would take our instruction. We were kitted out with equipment – harnesses, climbing shoes and the like – then it was off to Plas y Brenin and the Pinnacles to be shown the basics of rock climbing. After a morning of instruction, learning how to tie on, make figure-of-eight knots, use safety anchors and abseil, we tackled a couple of climbs on a large slab of rock, whose cracks ran like tramlines in criss-cross patterns, at Little Tryfan. The day had been absorbing, but it was about to get more interesting. We were off to Llangollen, to meet Gerry's cousins, Rod and Jill.
Through initial contact with the Ministry of Defence my email address had finally trickled down the line to Rod Owens. I was ecstatic when he got in touch. Members of my family hadn't been able to shed light on why Gerry might not have broadcast his wedding plans, and it was still bothering me. Uncle David's impression at the time was that Gerry's parents had been unhappy about him marrying my mother because she had a child, and that they threatened to cut him off if he went ahead with the wedding. Aunt Penny, though, had told me that she didn't think Gerry's parents were alive. She thought that he had been brought up either by an aunt or by a foster mother. Uncle Jimmy knew no more than Uncle David. It was a muddle of information that made Gerry and his life an enigma. But now I was in a position to ask about family, and I had to hope that Rod would provide the answers I was looking for.
Arriving at the bistro in Llangollen ahead of Rod and Jill, we had time to order a drink. I felt nervous. There was no mistaking Rod when he walked in. His handsome, angular features resembled my own memories and the photos of Gerry. His eyes and smile were similar too. Both he and Jill were pleasant and easy company, but it transpired that Rod knew very little about Gerry himself, and Jill had never even met him!
'I'm afraid I know nothing of the relationship between your mother and my cousin,' he said.
_Not even his family knew he was getting married!_ My heart sank to the pit of my stomach.
'What about his parents? Do you know anything about them?' I asked.
'His father was killed in action in North Africa when he was thirty-six. His grave is out there,' said Rod, sifting through photographs to show me a picture of the headstone. 'Gerry's mother, Edith, didn't cope with his death. She had a breakdown and was, unfortunately, institutionalised. Gerry, his two brothers and two sisters were split up and sent off to different people to be looked after, but the brothers then attended Duke of York's Military School, before Gerry then went on to Sandhurst. He kept contact with his two sisters, Jean and Bernice, possibly more than he did with the brothers.'
'Is anyone in touch with the brothers and sisters?'
'The older brother is dead, and the other brother lost touch with the family of his own volition. It would be impossible to track him down. I really don't know if anyone's in touch with the sisters, but I'll do my best to find out,' Rod said.
Three hours had been a long time for Marcus and Leon to sit quietly, but our evening had finally come to an end. Apart from one swift kick Leon had given his brother under the table that had gone unnoticed by all except me, the boys had been on their best behaviour and I was proud of them.
'Your boys are so well behaved, they're a credit to you,' Jill said, smiling at them.
Agreeing to send on copies of pictures, we said our goodbyes and promised to keep in touch. I felt we would, just as Gerry's old climbing partner Henry Day and I had maintained contact – we'd corresponded regularly since our first meeting and had arranged to see each other again, on the In Pinn on Skye.
On the second day of our weekend course in Wales we'd gone to the Moelwyns, sixteen miles from Betws-y-Coed. Slate mines lined each side of the road: blue slate was quarried on the right, purple on the left. We pulled in at an already busy car park and walked up a trail to cliffs where a couple of ropes were at work. A cold wind gusted. Our guide, Dave, led Leon and me on a route called 'Slick', while Marcus and Paul had started a few minutes ahead of us with their instructor. Leon climbed second and I followed behind. He made me feel warm-hearted as he called out words of encouragement. Sharing our struggles and achievements today was helping to build an even closer bond with him, bridging the gap that had opened up when I'd started seeing Paul. Reaching Dave, who had us secured by a sling around a tree, I saw Marcus ahead. His body was wedged, feet against one wall, back against the other, as he wriggled his way up a narrow chimney – a fissure in the rock.
'That looks tricky!' I exclaimed.
'You missed seeing Paul get up. It was a performance of brute force rather than technique!' said Dave, then, turning to Leon, asked, 'Shall we abseil back down from here? The chimney might be too difficult. What do you think?'
'I want to try it!' Leon exclaimed, not to be outdone by his older brother, as he scrabbled and struggled up the chimney. Seeing him succeed filled me with pride – and also gave me a nudge of confidence that I could do it too. After one final pitch – as the section of climb between two fixed points is known – we were at the top of the 100-metre crag.
Leon watched with Paul and Marcus as Dave and I climbed one more route. When the rope stopped sliding through my hands I knew that Dave was either putting in an anchor to make himself safe or there was a trickier section of rock to climb. I was tied onto Dave, but he had disappeared from view and the rope had almost paid out. It felt strange standing alone on the narrow, rocky shelf so high above ground. As my hands gripped the wall I wondered if this route might have been one that Gerry had done and I tried to imagine how it would have felt to be climbing with him.
'That's me!' Dave called, interrupting my thoughts. Up I went. 'That was fast!' he exclaimed, as my head appeared over the rock. At the top of the crag Dave showed me how to coil ropes correctly to carry them. 'You should be quite pleased with yourselves. Those climbs we did are classed as a Very Difficult and a Severe 4a*.' I didn't really know what that meant, but if he was pleased then I was delighted.
With our rock-climbing weekend over, we had one more visit left to make before leaving Wales.
When I'd returned from the weekend in Cambridge with Henry and Sara I'd had mixed emotions but wasted no time in trying to find more contacts from Gerry's past. Determined that there was someone somewhere who knew of his relationship with my mother, I needed reassurance. If I was going to take her ashes all the way to Nepal then I had to be 100 per cent sure that Gerry had truly loved her too – there could be no doubt. I looked out letters from the little black case written to Mum by my grandfather, searching for any kind of clue at all. Though it felt like clutching at straws, it was, all the same, a comfort to read his words. Hearing Grandad's voice in my head as I read the letter, I did find something worth going on:
_Mum told me you had got the photographs of Gerry, so Ian Leigh must have dropped them in. He got them from a Dr Jones in Canada, who was on the Annapurna expedition with Gerry. He had promised to send them to Ian a long time ago and when he finally did the other day he said he would send duplicates to you if Ian thought you might like them. Ian accepted on your behalf and in the meantime gave you his copies._
I got in touch with Henry, hopeful that he might have contact details for the men mentioned in the letter. Ian Leigh had died some years ago, but Henry passed on an email address for Dr Jones. I'd assumed that the doctor lived in Canada so was surprised, after sharing my rock-climbing plans with him, when he invited me, Paul and the boys to come for dinner – he and his wife lived forty minutes away from Betws-y-Coed. As Dr Jones said, it was serendipity indeed.
Finding Dolfriog Lodge proved to be a task as tricky as our earlier climbs. We finally found the lodge, out in the sticks along more of the overgrown, single-track, twisting roads, hidden amid trees and luscious green foliage, built high on rock with a fast-flowing river below. The noise of the car in this remote place alerted Dr Jones and his wife to our arrival and they came out to greet us at the gate. Glenys was a character larger than life, and she bestowed a massive hug and kiss on each of us as her husband, David, after a warm handshake, ushered us into the house. While he tended to the roasting meat, Glenys regaled us with a little local history. The slow-cooked lamb in mint with new potatoes, carrots and broccoli was delicious. After dinner we at last got down to the business of Gerry.
'We first became acquainted on the Annapurna expedition, you see. We just clicked,' said David. 'Gerry and I had been close. I considered us to be good friends. We remained in regular contact after Annapurna . . .'
Glenys interjected, 'I was very fond of Gerry. He and all the Annapurna climbers came to visit us for a reunion. I remember him saying, "I don't know why I'm in the army, I'm a pacifist."' With that she erupted into laughter.
'So, did you know about my mum and Gerry?' I asked David.
'I'm afraid I had no knowledge of their engagement. I didn't even know there was a girl on the scene. But, you see, he was a very private person, a dark horse even among close friends. It was hard to know what went on in his mind.'
By now I was getting used to these kinds of comments. Digging out an old address book, he scribbled down details of two contacts.
'The address is for Cattie. She's the wife of Andy Anderson; sadly he died several years ago, but he was a great friend of Gerry's. I think Cattie will be the one most able to help you.'
'Gimme another jelly sweet,' I said. 'Your driving's making me feel sick.'
'Sorry, but you told Henry we'd meet him at the memorial hut at eight. We're running late,' Paul answered.
As the car took us over the brow of a hill and further into Glen Brittle, the Skye Cuillin ranged into view; on its tobacco-coloured cushion of empty moorland the ridge was a spiky crown in shadowy shades of blue and purple. Early-morning cloud wrapped around the ancient volcano's base like a fine silk scarf. Henry greeted us at the memorial hut and then, laden with backpacks and ropes, we set off on the trail. I'd only been here once before, but everything was as I recalled it: the 25-metre-high Eas Mor waterfall tumbling into the tree-filled gorge, the stony terrain and the sense of high adventure.
Our route began on an obvious man-made track before it became rockier. There were plenty of cairns, which took a devious line beneath the crags of Sgurr Dearg. We could have followed these up into a gully, which would have put us on the ridge, but we went left onto screes that led to a gap in the line north of Sgurr Dearg. Paul topped out first.
'Well, there it is,' he said. His tone and the look on his face did not impart a sense of joy.
I arrived at his side. 'Ahhh,' I said, in serious contemplation.
'Mmm . . . okay,' said Henry, as he appeared behind us.
We dropped our kit and sat for a few minutes to enjoy views of the ridge extending east and west, and to Rum, one of the small isles of the Inner Hebrides, before Henry handed us a harness each. We descended slabby rock to the base of the In Pinn. We spotted someone on nearby Sgurr Mhic Choinnich, and closer still a woman in shorts and a vest, blonde hair scraped back in a ponytail; she was alone and moving fast. As I waited to climb I looked up at the blue sky. A raven, dark as night, with wings at full span circled up from behind the rock. Then another raven appeared, and another. I was watching their theatrical performance with fascination when into view came a whopping wingspan. Henry reckoned it was an immature golden eagle to whom the ravens were giving chase.
Henry started up the east ridge of the Pinn, a moderate rock climb but incredibly exposed: a foot wide with 'an overhanging and infinite drop on one side, and steeper and further on the other', as one early mountaineer had described it. The rock climbing in Wales stood us in good stead for this.
'Safe,' Henry called, as he made the first of two pitches. I took him off belay and he pulled in the rope.
'That's me,' cried Paul.
'Climb!' Henry shouted down to me.
Off I went. Busy concentrating on finding foot- and handholds, I didn't even think to look down. I was fine, though taking my time, when the lone, blonde-haired woman scrambled quickly up past me, unroped! I admired, and slightly envied, her bold and confident attitude. Parts of the rock felt smooth under my hands and fingers, no doubt worn down over the passage of time by countless climbers and Munroists. Making sure I had three points of contact, I pulled myself up, momentarily imagining what it might be like to come off and how unpleasant it would be to pendulum out and smash into solid rock, but I shook the thought from my mind. Before long I reached Henry and was soon perched behind him, safely clipped into the sling.
The second pitch was initially steep but easy, and the abseil off was fun. Gathering in the rope, we returned to our earlier perch on high rock to eat lunch and watched as two guys began what we had just completed.
I wished that my confidence in Gerry was as solid as the rock we'd been climbing over. The mystery surrounding his relationship with Mum remained unresolved, but having met his friends, in particular the cousins, made him feel less of a ghost, and there were new leads.
I wasn't about to give up on anything.
### CHAPTER NINETEEN
### Walking on Air
_Aonach Mor and Aonach Beag – the Big Ridge and the Little Ridge, April 2014_
Strong winds and blizzard conditions across the Scottish Highlands put paid to regular hillwalking over the winter months. By January I missed being on the mountains and was beginning to fret that there were only a few months left to get into good condition for the high-altitude trek in Nepal, but I built up my stamina by weight-training and running ten kilometres every day, and Paul just carried on working. He was fit anyway, going up and down ladders, digging like a slave and lifting heavy blocks. What bothered me more than our physical health was that I still hadn't found any of Gerry's friends who knew he had planned to be married. If I could hear what I wanted from just one person I'd feel vindicated in taking Mum's ashes to him.
I'd written to Cattie Anderson but, having heard nothing back, I'd given up on that lead. But then one day, in March, a reply arrived in the post. My heart sank at first, scanning the first three paragraphs, which seemed to suggest I'd drawn another blank, but I soon discovered how wrong I was as I read on:
_Andy left the Army in 1973 or 74 – he was scared they were going to send him to Ireland – but he continued to work as a civilian instructor and was expecting to be on the team for Everest in 1976. In 1975 he was appointed as an instructor at Glenmore Lodge, and I think that this is why he didn't go to Nuptse. As it happened, he broke his leg quite seriously in a skiing accident and spent some time in Raigmore hospital. It was while he was there that he heard of Gerry's death. Sadly, a few days after that he received a letter from Gerry asking him to be best man at his wedding – that really cut him up._
The words in Cattie's letter were exactly what I had been looking for. Clearly, Andy and Gerry had been close and, whatever secrecy Gerry had applied elsewhere to his plans, he had made his intentions crystal clear to Andy. Elated with the news, I cast aside doubt. I felt I was walking on air.
I'd also met up again with John and Sheila, as John had brought the Nuptse expedition map to show me. We compared his original route to the one I would be making with Jagged Globe.
'Can you show me where to find the memorial cairn?' I asked.
John drew a small cross in black ink on a 5,000-metre contour above a small settlement called Bibre. 'I'm not entirely sure, but you see this wide space between the contours . . . showing where the land flattens out . . . this is most likely where we built it. It was nearly forty years ago, but I think this is right.'
I was so excited to think that I would be following in Gerry's footsteps.
By late April the weather had improved, and we were ready to head for the mountains again. Paul drove us to Glen Nevis and parked at Polldubh, the end of the road. A confidence-boosting signpost warning 'Danger of Death' indicated the start of the rugged, but popular, path for tourists. It was a shady walk under deciduous woodland, and a busy river rushed through the glen, carving rock into waterfalls and pools. Sunshine warmed our skin as soon as the trail opened out onto the green valley floor and its light glinted and danced as it caught spray from Steall Falls, its water cascading 120 metres down broken cliffs like the swishing tail of a white horse. I could imagine some kind of period drama being filmed here. It felt romantic and old-worldly as we made our way up the broad expanse of the U-shaped valley backed by the Mamores. Crossing a bridge, we investigated some ruins, a sad reminder of when the upper part of the glen was once inhabited; it was another of those places steeped in history and I imagined what life must have been like for the people who lived here. After checking the map we followed a faint but then clear path that ran along the left side of a tributary stream.
'Yeah, if we stick to the river it'll take us right up to the col between Carn Mor Dearg and Aonach Mor,' Paul said.
'Cool, let's go,' I answered, pleased that my boyfriend was enjoying leading for once.
It was warm work climbing higher, but we enjoyed a little respite when we arrived into the glacially scooped-out corrie bowl. Mountain peaks were holding on to their snows, and meltwaters were making the ground underfoot wet and sloppy. Repeated footfall had teased the suggestion of a wandering line through the grasses, but traces of previous human passage soon disappeared completely. We followed the meandering river and traipsed over straw-coloured tufts and heathers on intervening slopes towards views of the broken terraced cliffs of Carn Mor Dearg. I came across a lost black cap.
'Hey, Paul, look, someone has trodden the exact same way as us!' I exclaimed, as I waggled it about. 'Life is a bit like route-finding on a mountain, isn't it? All of us just trying to beat the line of least resistance,' I said. 'When Mel and I were walking yesterday we met a guy who described these mountains as "dull" and "a slog" and "not the most interesting of the four thousanders". I couldn't disagree more. I know the terrain isn't so challenging, but the views make up for that tenfold, don't you think? They make me feel so ALIVE!'
'Maybe the guy experienced these hills on a cloudy day,' Paul suggested, as we trudged on up through a patch of crispy snow to the col between Carn Mor Dearg and the great wall formed by the western slopes of Aonach Mor. Steep, grassy crags confronted us and we stood in silent contemplation.
'It didn't look that bad from down there,' I said.
'No,' Paul agreed, 'and I'm not seeing a path of any sorts. I think we're just going to have to go straight up.' We scrambled up the vertiginous wall, clutching at dry, crackly mosses and grasses. Paul was enjoying himself.
But as we were gaily tramping across the snow-covered col towards the second Munro, Aonach Beag, at 1,234 metres, Paul suddenly said in a very girly octave:
'Eh, Sarah . . . we're walkin' on air.'
'What?'
'There's a golf-ball sized hole here that I can't see the bottom of. Just rocks all the way down at the base of the mountain. I think we need to move over very carefully, we're out on the edge!'
The ground beneath my feet felt solid to me, but I immediately turned and walked through ninety degrees. Once on safer ground, we stopped on an ice-clad outcrop to look back down over the col.
'You can see the sink line, look!' Paul said.
Sure enough, we had been walking on a cornice – literally just an extended lip of snow hanging over the edge of the mountain. Not really what you want to be doing unless dicing with certain death is your thing. If conditions had been milder and the snow just a bit more rotten, our combined weight could have broken through that snowy shelf and we'd have fallen to our doom. It was quite a thought.
'That'll be another of my nine lives blowing five sheets to the wind!' I said.
'I reckon the snow we were walking on was a bit deeper than the length of my body,' Paul mused. I shuddered at the thought.
A low hum of chopper blades broke our silence. The search-and-rescue helicopter was hovering around the vicinity of Ben Nevis, one more reminder of how dangerous the Scottish mountains can be.
We marched back out along the path, which was now busy with tourists of all nationalities: there were girls sweating off layers of make-up and wearing pumps on their feet; a young buck in flip-flops; and a man carrying a new baby in a papoose. I thought about the slippery rocks and water slide they'd had to cross and was glad none of them had come a cropper. 'Maybe they didn't notice the warning sign,' said Paul, as though reading my mind. And then, as we passed back under the deciduous trees, we were stopped by Mountain Rescue. An elderly Indian lady had slipped and knocked herself unconscious against the rocks; her worried and tearful husband sat on the damp ground cradling her head in his lap. It was a disturbing scene after the day's events.
My mind turned to Nepal. Only yesterday, 18 April, news had been broadcast of tragedy in the Himalayas. I had listened in horror to the story as it revealed the highest death toll on one day in the history of mountaineering had taken place. A block of ice – reportedly equalling the weight of 657 buses – broke away from a hanging glacier on Everest's west wall, causing an avalanche to barrel down across the full width of Khumbu Icefall. Sixteen Sherpa died, one of them belonging to the Jagged Globe team. I could hardly believe it. In addition to my tailored plan to locate Nuptse Base Camp, a trek to Everest Base Camp was part of the company's planned itinerary, and I wondered whether we would still go there – or in fact if the trek would go ahead at all. But, after a call from Jagged Globe, I learnt that the trek I had booked was going ahead as planned.
A month would be the longest I'd ever been away from my children and, though they would be well looked after, it didn't stop me feeling guilty about leaving them.
As we said our farewells, Leon put his arms around me.
'Mummy. I'm going to miss you. I'm frightened you won't come back.'
'Aww, what makes you think that, you silly billy?' I said, hugging him tight.
'You told me Gerry died on the mountain and that's where you're going to take Granny's ashes and I don't want you to die too,' he spilt out in one breath, tears rolling down perfectly smooth cheeks.
'Listen. You don't need to worry about me. I'm not going _up_ the mountain, just to the bottom of it. I'll be quite safe, I promise. How about I give you a copy of the trek itinerary? That way you will know where I'll be each day.'
Nodding his head, he seemed reassured.
I had felt like a child waiting for Christmas, but finally Paul and I were at the start of our journey to Nepal. We'd left Leon with his grandparents, but Marcus would be staying with one of my neighbours – the same kindly woman who had helped me so much when I'd broken my ankle – and they were both standing on the shingle beach near the airport. As I peered through the aircraft's window I could just make Marcus out, a tiny red dot against the grey, pebbly shoreline. He'd told me he was going to wave till the aeroplane disappeared into the cloud.
_See you soon, my lovely boys._
### CHAPTER TWENTY
### Early Illness
_Kathmandu, Lukla and on to Monjo, 2—4 May 2014_
I had been especially concerned about carrying the urn with my mother's ashes into Nepal, unsure of whether it was legal or not, and worried my head with ideas that the ashes might be confiscated. So it was with enormous relief and great elation that we made it to Kathmandu via Delhi with all our kit, including the urn, without a hitch. Stepping out into the intense humidity, confusion and noise of a hundred voices competing for taxi fares, Paul spotted a Nepali waving a Jagged Globe sign. He beckoned us over to his waiting taxi. To my amusement, a local asked if he could take a photo of my holdall.
'Yeah, sure!' I said, smiling.
'Thanks, lady.'
'No problem, but then you give me one dollar!' I joked.
The journey between the airport and our hotel was quite exciting. It was a case of every man for himself, and I learnt that road markings really didn't matter. 'They're just guidelines,' our driver said breezily, with a wave of his hand and a toothy grin. People and cows strolled across roads among the moving vehicular mayhem, competing with rows of meandering mopeds and rickshaws in a calamitous cacophony of peeps, toots and honks. The sun's rays filtered through filaments of dust and dirt that hung on thick, warm air, and the overpowering smell of car fumes filled my nostrils. The streets were a colourful, seething mass of humanity. High-rise buildings sandwiched row after row of long narrow corridors in a seemingly endless maze. Black power cables hanging between poles, some slung as low as a skipping rope, were all wrapped in a chaotic mess like giant liquorice wheels on the top of their dodgy wooden supports.
On arrival at the Summit Hotel we were greeted by Mara. 'Our extreme tour guide Barbie,' whispered Paul. I stifled a giggle as she shook my hand and pointed us towards a straw-roofed gazebo outside the reception, where the rest of the trekking team members sat waiting.
'We'll do a little ice-breaker,' she said in a lilting American accent, 'where we'll take it in turns to introduce ourselves and you can share with the rest of the group what your motivations are for coming to Nepal . . .'
'Why's she makin' us tell our reasons for coming out here? She knows perfectly well why each of us has come – it's on our forms. I'm not telling a bunch of strangers about my mum!' I whispered to Paul.
'I'll go first. So my name is Mara Larson, I'm originally from Oregon. I used to work for NASA studying the effects of altitude on climbers, but now I work for Jagged Globe and am based in Chamonix in France. I've come out here to lead our group on the Three Peaks, Three Passes trek. There is a lot of information I need to share with the group about our trek which I'll pass on as we go, but one important rule we will be following is: walk high, sleep low. On the trail when we arrive at our stops for the night we'll take a short trip to a higher elevation before returning back down to our camp. This is going to help with the acclimatisation process, but I'll tell you more about all that later.' Mara was in her early thirties, and petite. She was as intelligent as she was attractive and seemed lively and interesting. And so the monologues began . . .
When it came to my turn, red-faced and feeling horribly awkward, I garbled my spiel, looking to Paul's face for reassurance. I felt clumsy and embarrassed at my efforts to speak to the group of strangers. 'Your turn, Paul,' I said with haste.
'Hi, I'm Paul and I'm here 'cause of Sarah.' That was it! Short and bloody sweet! He really didn't care what anybody else thought about him. Whispering into my ear, he said, 'Can't be arsed wi' all this Americanised ice-breaker pish.' His was my brand of humour, and being with him gave me extra confidence. I was glad he had come with me.
At dinner we met three guys from Jagged Globe's Everest expedition who had just returned from Base Camp. We sat next to the expedition cook, an Australian. 'The camp has been all but evacuated. There are just porters there now, ferrying equipment and gear down,' he said. 'It was hell being there when the avalanche hit. Everyone was shell-shocked when the number of corpses on the mountain kept rising, and even now, back in civilisation, it's still difficult to come to terms with.'
I understood his bewilderment and grief. After all, I was here to visit the scene of a tragedy that had unfolded almost forty years earlier, whose repercussions had reverberated throughout my mother's life and mine.
The hour was getting late, and since we had a five o'clock start Paul and I bade goodnight to our fellow trekkers. 'Make sure you sit on the left side of the plane when you fly out; you'll get a great view of Everest,' the Australian said.
At the airport the next morning, Mara prepared us for a delay due to poor weather, but after only a short wait we got lucky with a weather window. In a rush of frenzied activity we were herded out onto the tarmac towards a tiny Tara airplane. There were only single seats down either side of the aisle, but I got myself a perch on the left-hand side up front, directly behind the pilot. Any misgivings I'd had because of what I'd heard about the flight were quickly dispelled. It wasn't at all scary. Paul pointed out the notice fixed to the entrance of the cockpit which warned, 'Pilots beware, the clouds have rocks in them.' We grinned at each other.
The aircraft took us over the foothills, passing through and in between the cultivated high valley walls. Tiny dwellings far below were merely suggestions as a heat haze softened edges and paled the landscape into a palette of pastel shades; higher up, the clouds drifted apart to reveal fleeting glimpses of Everest. I felt a flurry of anticipation. The flight had been thrilling, but the landing was startling as the small plane hit the runway. The landing strip was only 460 metres long – basically, just long enough to apply the brakes, with no room for pilot error. My eyes were like saucers, my knuckles white and my heart was in my mouth as we approached the brick wall at the end of the short runway at a speed that was faster than felt safe, but the pilot turned the plane assuredly and neatly into the holding area where there was room for just four of these small aircraft.
Disembarking, I was instantly struck by the pure, uncontaminated air here at 2,860 metres. Apart from the aircraft there were no motorised vehicles of any kind; there weren't even any bicycles. A dirt track of random rubble, cobbled together, supported only passengers of the two- or four-legged variety. Modest dwellings, with colourful washing spread out to dry on walls, lay cradled in the arms of this high valley. After a short tea break – while our gear was being divvied up between the porters – we set out on our trek in a northerly direction from Lukla. As we passed down a rugged track through a rocky gorge made damp and dark by overhanging branches, the boulder-strewn Dudh Kosi river churned white and milky. It was enchanting.
Oblivious to our presence, young local girls dressed in Western clothes were practising a traditional dance routine while boys on a lower terrace played a variation of cricket. I wondered how many balls they must have lost down the side of the valley to another that lay thousands of feet below. Wonderful, musky scents filled my lungs and a porter passed me by in flip-flops with a staggering load tethered to his back; it included a large box of Nestlé Everyday Milk Powder, a large sack of Crown Rice, four boxes of cans of San Miguel and two larger boxes of bottles of San Miguel. I made a pact with myself that I would not dare to complain about the weight in my backpack. We passed some Buddhist prayer wheels. 'Give them three full spins in the direction of travel to bring good fortune and health,' said Dawa, one of our Sherpa guides. I later wished I'd pushed those wheels with more conviction.
The green landscape was lush and teeming with life. A cacophony of bird calls filled the air; one sounded like a whistle blowing evenly three times, while another shrill call came like a warning, ' _Take care!_ '
Phakding was at a lower elevation than Lukla and would be our first overnight stop on the trek. We'd only been on the go for a couple of hours and neither Paul nor I could believe our day's walking was over so soon, but we realised the slow pace and early stops were part of the acclimatisation process – annoying but necessary. As we entered the small village it was still raining hard. Two tents stood erected on a terrace below the teahouse in which we now took shelter.
'Fuck,' said Paul dismally, as we stared out from the window down to the orange nylon shelters, 'I don't have to go and put up our tent in that, do I?'
I laughed at him. 'No, ya tool, the porters put the tents up for us!'
With a look of relief, Paul returned his gaze to the rainy scene. 'No wonder they abandoned putting up the rest of the tents.'
It truly was a torrential downpour, but not an early arrival of the monsoons, we hoped. In the end we spent the night in the teahouse. Our accommodation was spartan, but at least we were inside and dry. After we had consumed a thin tomato and garlic soup, _dal bhat_ (a traditional Nepalese dish of rice and lentils) and some pineapple, Mara asked us to exchange our high and low points of the day. Then we were introduced to our supporting team: kitchen porters, cook, Sherpa guide and Sirdar, the Sherpa in command of all the staff.
At nine o'clock Paul and I turned in, but I woke in the small hours feeling not quite right. Hoping the sensation of nausea would disappear, I turned over in my sleeping bag and tried to concentrate my worries on any fleas or lice that might be trying to crawl into the bag with me – a nugget of paranoia stemming from a delightfully vivid paragraph in Jon Krakauer's _Into Thin Air_. But itchy bedtime companions were to be the least of my concerns for now.
I woke feeling like death. Creeping from our room as quickly and silently as possible, I made my way down the creaky corridor and locked myself in the toilet, the first non-Westernised loo of the trip so far. I stared down at the dark, rectangular hole in the floor and swiftly planked my feet either side of the pee-stained wood. 'Hello long-drop, old friend,' I muttered as the whole world exploded out of my ass.
Repacking my belongings in preparation for the day's trek to Namche Bazaar was a gargantuan and laboured task as I attempted to keep feelings of nausea under control. As ready as I could be, I took my holdall and backpack downstairs into the dingy light of the dining area. The air was stifling and loaded with smells of cooked breakfast mingling with juniper from an early-morning _puja_ , a prayer ritual. I had to get out of there. Sitting on the whitewashed ledge under the window, I held my head in my hands.
'Is everything okay?' Mara asked, as she wandered out from the teahouse.
I had to be honest. 'I'm feeling pretty sick and I've had diarrhoea.'
'Try getting some porridge down, and let's reduce the contents of your daypack so you're carrying less weight,' Mara said. I agreed to her conditions, glad that she wasn't banning me from continuing the journey altogether.
Last to leave the teahouse, I made my way over a short suspension bridge that was gaily adorned with colourful prayer flags, and _khada_ – white silk scarves that symbolise the pure heart of the giver and are presented to bring good luck and fortune. The chain bridge bounced and swayed with every step, my stomach churning and my head becoming dizzy. I made it across, but succumbed to the sensation of sickness. Staggering to the edge of the trail, I noticed the steep-sided drop. There was nothing much to grasp on to as I leant forward. My middle and index fingers pressed into the bark of a small, thin tree growing out from the verge as volumes of warm fluid surged up and forced their way out of my mouth like a violent demon being exorcised from my body. Dawa Sherpa made a call and soon Chote, our Sirdar, and Paul appeared. They walked the trail with me. Dawa and Chote seemed concerned, but Paul took my photograph. 'I've never seen anyone actually look like death warmed up,' he said, but I wasn't interested in his attempt to make me smile. Everyone else was waiting higher up. When we rejoined the group they all asked how I was, but I felt so ill I couldn't muster the energy to give even a monosyllabic answer.
Hampered as I was by waves of sickness, my progress up into the valley was slow. I made it to the next small village but, without warning, threw up all over somebody's wall.
'Where's a toilet?' I groaned. Dawa pointed me in the direction of the wall owner's long-drop. 'Paul, come hold the door,' I whimpered desperately as I made my mad dash, 'I won't get it shut in time.'
As I raced towards the wooden structure I caught a glimpse of the Nepalese national bird, the Himalayan monal; it was as fast at disappearing into the bush as the shit was at spraying out my ass. It was a mortifying situation, not least because I'd puked over some poor sod's property, and Paul was now getting to know me really well, subjected as he was to my stinking, noisy bodily expulsions. But at this point I was far too sick to register or care about any embarrassment.
Managing another four miles of trail over 327 metres of ascent, I arrived at Monjo, where Mara took me aside.
'Sarah, the rest of the group have gone on to Namche Bazaar, but you are going no further today.'
'Okay,' I said, too ill to feel sorry for myself.
'Paul said he's staying with you, and I'm leaving Jangbu Sherpa here. He'll keep in radio contact with me. I want you to start taking antibiotics and Diamox. Try and get some crackers down. If you're feeling better in the morning we'll see you at Namche, but you must try to get some food inside for energy. It's a demanding and relentless hike uphill.' It was all very well her saying that, but anything that passed my lips immediately exited from either one end or the other. Things weren't looking good.
Paul was worried about me. He paid for us to stay in a luxury room at the Monjo teahouse. It had a shower – no running hot water – but a shower nonetheless, and a Western toilet. I wondered if it was the toilet that gave it its luxury status or the shower. Either way it didn't matter; I had little time to muse over the triviality since I spent the entire afternoon shitting and puking simultaneously between toilet and sink. _Maybe it's the mirror above the sink that makes it a luxury room._ I lifted my head after the last lot of retching had finished.
'Oh my God! I'm _green_!' I squeaked, as I caught sight of my face for the first time that day.
'I know,' said Paul as he held my hair back with one hand and scooped the plughole clear of my sick with the other.
'I'll clean that!' I said, feeling horrified for the millionth time.
'Don't be daft,' he said gently. 'Go lie down.'
Just then there was a knock at the door and I opened it to see Neil, the youngest member of our group, standing there.
'Paul, look! Neil's green too!' I said without so much as a hello and sounding more cheerful than sympathetic.
'What happened?' Paul asked him.
'I'd continued along the trail with the others but Mara turned me round after I threw up. I'm feeling pretty ropey. I just wanted to let you guys know I'm here. I'm in the room next door. Hopefully it's just a twenty-four-hour thing. Maybe see you later,' Neil replied.
I crossed the room to my bed, lay down inside my sleeping bag and closed my eyes. Though it was a relief to know I was not alone in illness, fear plagued me. If I wasn't fit enough by morning the trek would be over before it had even begun. I couldn't let that happen. I would not let my mum down.
### CHAPTER TWENTY-ONE
### Onwards and Upwards
_To Namche, Thyangboche, Dingboche, 5—7 May 2014_
I woke to darkness. Looking across the room I could see that Paul wasn't in his bed. I lay for a while wondering what time it could be. I'd no idea how long I'd been asleep. Crackers sat in their torn packet on the small cabinet next to the bed. Leaning up on my elbow, I reached for them and ate one, then two, a third and then a fourth – this was progress. _Do I feel better? I think I might do!_
Hankering after a Coke, craving the thought of its sugar, I sat on the edge of my bed, slid my feet into their boots without bothering to tie the laces and, rising oh so delicately, made my way to the teahouse dining room. A chill made me shudder as I walked along the narrow wooden corridor. On my right, single-glazed windows offered shadowy views of tall, leafy vegetation on the valley's hillside, which dropped away into complete blackness, and a draught sneaked in through ill-fitting frames. Loud chatter, laughter and gaiety filled the brightly lit dining room on my left. _Where had all those people come from? And where's Paul?_
When I opened the door I was almost knocked straight back out into the night by the overpowering aroma of garlic and spices. _Keep the sickness down. Find Paul. Get Coke. Leave._ Pulling my buff up over my mouth and nose to block out smells, I squeezed my way past rows of occupied seats, my eyes searching faces until I spotted him. I yanked the buff down just long enough to give him a faint smile and ask if he'd get me a Coke. 'I gotta get outta here . . . sorry,' I trailed off. My exit was swift, but I'd clocked Neil sitting next to Paul, baseball cap pulled down hard on his head, properly scoffing his food down. Returning to the room, I felt more than a little envious of his evident recovery and wondered why it was that I was still suffering.
Paul brought me the bottle of Coke. I wanted it there and then, but gave it a good shake before releasing its cap slowly and repeating the process.
'What are you doing?' Paul asked.
'When I went trekking in Peru, I'd been drinking mostly coca tea and water, but on the last day of my travels, at a place near Lake Titicaca, I bought a bottle of Inca Cola. It was neon-yellow and tasted amazing, and I didn't think twice about finishing the bottle quite quickly. It was only when I boarded my flight from Juliaca airport that I began to suffer murderous pains in my stomach – pains so bad that, as I rested my forehead against the seat in front, I thought I might actually pass out. The plane took off and my stomach became harder and more bloated, the pain increased – and then the farting started, non-stop, lengthy, windy eruptions all the way to Lima,' I said, rolling my eyeballs in my head as I recalled the acute embarrassment I'd felt at the time. 'I was so gassy I could have fuelled the plane all by myself. It was a horrible experience, not least for surrounding passengers, and one that I never wish to experience ever, _ever_ again.' Paul was laughing, took the bottle from my hand and shook it vigorously, still laughing.
After a full night's sleep the nausea seemed to have passed. I managed a small bowl of porridge for breakfast, then Paul, Neil, Jangbu and I set off for Namche Bazaar – Jangbu carrying my daypack because I was too weak and pathetic. We walked slowly along: up, down, up and down again, on the stony and dung-littered trail that followed the Dudh Kosi river. I needed to dive behind rocks twice on our way. An Australian girl had already gone up a trench in the hillside. I planned to try to wait till she vacated, but succumbed once more to the gastric urge.
'I'm not looking. Not looking. Just passing. Sorry! I just gotta go!' I called as I rushed past the squatting lass.
'No worries!' she yelled sympathetically.
The higher of the two long steel suspension bridges swayed as we crossed it and I paused briefly to look down at the dizzying drop; the river's roar, deadened by distance, was now a mere purr. It was beautiful. And I felt like utter shit. But I knew that if I could get myself to Namche I would be able to push on with the rest of the trek. I had to!
After two and a half hours we arrived at Namche Bazaar, the Sherpa capital, at a lofty 3,440 metres. On first impressions, as I toiled up stone steps avoiding herds of laden donkeys, the town looked far more built up than it had been when the 1975 Nuptse expedition had passed through. White adobe buildings with pink, green and blue rooftops were crammed together; built on stacked terraces, they were linked by winding steps and narrow, cobbled paving. It was a natural amphitheatre – like a Nepalese version of the view from inside the Colosseum. Yak-dung smoke filled the air. I felt queasy but kept quiet. We passed chattering locals standing in shop doorways. Somewhere nearby a school bell rang and we heard joyous shouts and cries of children as they rushed out to play. The campsite we were using was on a high terrace on the back wall of this scooped-out bowl. We dumped our backpacks into tents that were already set up. Clouds swirled and funnelled upwards from the valley far below as I sat outside enjoying the afternoon warmth. It was good to rest.
Paul had disappeared but came back minutes later with a tube of Pringles. 'Thought you could try these. You need to get something inside,' he said as he handed them over. I took them from him with thanks, not because I wanted to eat them, but because his thoughtfulness made me feel cared for.
Neil, Paul and I had been sitting for a while before the others appeared. 'You guys! You've made it! You have no idea how happy I am to see you all. I really missed you!' said Matt, a film producer from London who had signed up for the trek on a last-minute whim. We'd hit it off from the start.
'What do you reckon, do you think his tolerance of his tent mate may already be wearing thin?' Paul commented. We laughed.
After I managed to eat just a little of my lunch, Paul, Neil and I were taken on a short twenty-minute acclimatisation walk with Jangbu Sherpa above Namche. This was when the high-altitude flatus expulsion reared up. I hung back, letting Neil and Jangbu walk ahead, my eyeballs practically popping their sockets as I felt gas rapidly increase in my stomach. 'I shouldn't have had the Coke,' I groaned to Paul as wind exploded out of my butt. Neil and Jangbu turned round, disbelief on their faces, while mine was full of apology.
Neil and Jangbu bailed, leaving Paul and me alone. There then followed a constant stream of wind as we made our way all around the narrow streets that wound around Namche. I was helpless. It was the horrendous flight from Juliaca to Lima all over again, but my audible suffering did at least cheer some people up as I was given the thumbs up from one American woman passing by.
The clattering of hooves alerted us to the presence of donkeys, and interesting trekkers of all nationalities, in their puffy down jackets and colourful woollen hats, chattered gaily in small groups. It was fascinating to meander along the narrow, cobbled paths. Tables set up either side of darkened shop doorways all sold similar wares: Tibetan prayer wheels, Buddha statuettes, prayer flags, masks, knives, knitted headbands, pretty bead friendship bracelets, rings and necklaces.
Mostly concerned with replenishing my already dwindling supply of baby wipes, we found a small pharmacy, and then stocked up on powdered juice and plain Pringles. It didn't matter that prices in Namche were three times higher than anywhere else in Nepal. After witnessing the enormous loads local porters had to bear to bring the goods there in the first place I wouldn't have grudged having to pay five times the amount. Suddenly, warning cramps creased across my lower abdomen.
'I gotta get to a toilet.'
'Let's go in there then.'
Paul pointed to a café and we trotted up the stairs. We stayed there for a while. Two documentaries were being screened. The first was about the real heroes of Everest, the Sherpa. The second film, incredibly, was about the avalanche that had killed the sixteen climbing Sherpa during April. I was surprised it was being aired just weeks after the tragedy. The Discovery Channel's crew were at Base Camp to film the American Joby Ogwyn, whose plan it was to jump from Everest's summit in a wing suit and land at Base Camp. Filming had started two weeks before the avalanche and had documented the _puja_ ceremonies and the camaraderie between Sherpa and Westerner, then footage of the disaster was caught on film.
Helicopters flew in to what they call the football pitch – a flat area of snow the size of a putting green – and were operating at their very limits for the rescue operation. Although they can fly at greater altitudes, they are unable to hover because the air is too thin. Corpses were dug out and airlifted off the mountain, but three still remained undiscovered. The last body to be recovered by rescuers was recorded on film. I couldn't believe what I was seeing. The lifeless leg and foot of Dorje Sherpa stuck up out of the snow and the team dug out his body. It was harrowing, but my eyes stayed fixed on the screen. When it finished Paul and I sat in a stunned silence for a moment. 'I wonder how the families of the dead Sherpa will survive?' I said, trying not to cry.
I was ill, and now my spirits were even lower. I felt weighed down by the sadness of everything. I thought about the avalanche victims and their families, I thought about all the people who must have died and been left on the mountains, I thought about Gerry, and I thought about my mum.
The sickness had stopped, but none of my dinner was digested; instead it passed straight through me. After sharing high and low points of the day with the rest of the group, Paul and I went off to bed. I read some of _Greenvoe_ by George Mackay Brown, the only book I had brought with me on the trek. My eyes didn't stay open for long, but sleep didn't last. Like gumballs being released from a vending machine, one fart was followed immediately by the rattle and rumble of the next; it was relentless. After an already upsetting day I started to feel even more down and pondered whether there might be such a thing as death by farts. I looked across at Paul; he hadn't stirred. With only my thoughts and the long-drop toilet for company it was a long and dreadful night.
Kongde's crisp, snowy outline against the blue sky was dazzling as I unzipped the tent. I admired its vast scale, but it didn't take long before clouds started to conceal the view, yet again making the mountain seem theatrical and full of visual trickery.
At breakfast Mara gave us a briefing. 'I've received an updated report. The weather conditions aren't conducive to our original planned itinerary, so I've decided to reverse our route. Instead of trekking to Gokyo we will head for Chukhung. If anyone else suffers early illness there are more opportunities for recovery with an option to regroup further along on the trail.'
That seemed a logical plan, but I also felt a private sense of elation because going to Chukhung first meant that I might be able to deliver my mum to Gerry on the exact anniversary of his death. I'd chosen to do the trek in May because it was the same time Gerry had been here and I'd understood our route would take us past the Nuptse valley, where his body still lay entombed in its icy grave, a couple of weeks after 9 May – the day he had died. But now the change of itinerary meant that I might reach the valley on the anniversary itself, which would make the scattering of Mum's ashes even more meaningful and perfect.
En route to Deboche at 3,820 metres, we travelled in a northeasterly direction, a gently ascending traverse. Sitting at the side of the trail on a white plastic chair was a Tibetan lama, a Buddhist monk. An open box sat on top of a table into which a 'non-obligatory' obligatory donation was made by each of us – for this, we received his holiness's blessing for our onward journey. I willingly paid my dues, figuring I needed all the help I could get.
A short distance further along the dusty trail, and fenced off, was a large, white Buddhist shrine, known as a chorten monument, crowned with a golden top and shaped like a giant Tibetan meditation bell. It was built for all Sherpa and in honour of the Nepali mountaineer Tenzing Norgay. A Himalayan griffon soared on the thermals in the sky above.
Birdsong filled the air as we descended through birch draped in old man's beard. Rhododendron forests were in bloom with delicate pink and white flowers.
I waggled my camera at a local lad, who let me take his photograph.
'I can't wait to show this picture to Marcus and Leon,' I said to Paul. 'How old are you?' I asked the boy.
'Fourteen,' he replied, before strapping three pieces of plywood onto his back that were more than one and a half times his height, and the width of a large doorway. The boy set off, always remaining ahead of us along the trail. It was impressive but also humbling to witness.
As we passed through more rhododendron forest I soon saw that these load-bearing skills were developed from a young age. A sturdy-legged eight-year-old in a worn black down jacket torn at the armpit, muddied cotton trousers and a pair of rubber open-toed sandals scuttled past. He was bent almost double under the weight of a large sack, which was attached to his back by a cloth sling fixed around his forehead. Dawa told me the boy was preparing for a future as a porter, with aspirations to become a Sherpa and one day a Sirdar. _Meanwhile a few thousand miles away Marcus and Leon are probably bending their forefingers and thumbs out of shape on their Xbox consoles._ These Nepalese youngsters were living a hard but honest life and I couldn't help but feel inspired, even though their futures probably held the dangers of work at high altitude.
We stopped for lunch. It was noodle soup and beans. 'If you eat the beans, stay away from me,' Paul said, with a look of such consternation it made me laugh. He needn't have worried. I couldn't eat anything other than the Pringles.
From our lunch spot it was an hour-and-three-quarter slow-paced walk, under the cover of low cloud, up to Thyangboche at 3,860 metres. We crossed another high and lengthy suspension bridge, which sagged in the middle as it supported the weight of a yak caravan carrying expedition holdalls, barrels and other gear. A sliver of silver far below, the Dudh Kosi frothed, its rapid flow hushed by steep valley walls supporting stands of aromatic pines. As we continued high on the trail in single file Paul, who had been walking in front of me, suddenly turned round and virtually winded me with a punch in the stomach.
'Check that out!' he said excitedly, 'it's like a giant barn owl. You see it? On the rock ahead. Look!'
'Wow! What is that?' Closer inspection revealed it was only the kitchen porter's load resting against a rock. _Muppet._
As the trail climbed high and dropped low, many chortens dotted each side of the trail and a never-ending stream of laden yak made their journey alongside the river. Thyangboche, one of the most famous monasteries in all Nepal, was shrouded in cloud when we arrived. It felt mystical. Digging out the diary from its place in my rucksack, I looked at a photo of this spot from 1975. Before I'd left Scotland, John Peacock had emailed photographs of the expedition's journey to me and I'd printed them out. My plan was to take photographs from the same places so that on return I could show John how things had changed – or perhaps how they had stayed the same. In comparison with the photograph, the monastery was unrecognisable. I had to double-check we were actually at the same place, so I asked Dawa. 'Yes, this is Thyangboche,' he answered.
Prayer wheels were built onto the top of a wall flanking each side of an ornate gateway. Beyond the courtyard, at the top of several narrowing flights of steps, the temple sat squarely. Heavy brown curtains, hung down over the pink brick walls, parted to reveal the way inside. We took off our boots and entered the garishly coloured place of worship with its many carvings and massive central statue of Buddha. We watched as a lone Tibetan monk in burgundy robes finished chanting, rose from his cross-legged position and left: I felt like I'd intruded. I was fascinated by how dramatically different the monastery was and wanted to find out about the changes. Stepping back outside, I felt the cold. Paul and I went to the nearby bakery, where a warm drink didn't help to heat us up much, but our Sirdar, Chote, was sitting at one of the tables. I showed him my photographs.
'What happened to the monastery?' I asked.
'When was this taken?' Chote queried, studying the images.
'1975.'
'Ah, well. The monastery was reduced to rubble during an earthquake in 1933 and was rebuilt. Then in 1989 it was destroyed again by fire. All historical and spiritual treasures were lost, but the Sherpa people came together, like they did before, and gave their labour and craftsmanship to rebuild it,' said Chote. I was yet again impressed by the people, their faith and resilience. In the face of loss and adversity they did not give up. Their determination was something I felt I understood.
As we left Thyangboche the clouds broke up long enough to afford a fleeting view of Lhotse but, like unwrapped gifts, Nuptse and Everest remained hidden from sight for now. Our lower campsite for the night was at Deboche, half an hour's walk from the monastery. Feeling super-tired after dinner, Paul and I called it a night. Although it was only half past seven it was already dark as we snuggled into our sleeping bags.
'Hey, Paul,' I whispered.
'Hey, Sarah,' he whispered back.
'Night night, sleep tight, mind the bed bugs don't bite!'
Talk about famous last words. I woke the next morning with rapidly swelling and ragingly itchy welts that were spreading over my body right before my eyes. Unbelievable! Every scrap of food that passed my lips was still exiting faster than a FedEx delivery; I had suffered smelly and incessant attacks of knicker-staining flatulence; and now it seemed I was being ravaged by fleas. After I'd already gone to the effort of packing up my sleeping bag and mat, both now had to be unrolled, beaten and aired. My infested clothing was bagged up and, using bowls of water, I had to wash my entire self from head to toe – it did feel bloody brilliant to have clean hair and put on fresh clothes though. _I should have spun those prayer wheels harder._
The altitude began to have its effects. Paul had woken through the night feeling as if the back of his head had been whacked by a cricket bat. My headaches came and went, and we both felt mildly sick and floaty. The trail was not technically difficult as we followed its age-old route, but the thinning air slowed our group and silenced all. In single file we made tracks above the Imja Khola river, where a large metal bridge lay collapsed and broken. Paul disappeared behind it.
'Paul, what are you doing?' Mara asked.
'Errrr . . . I really gotta go,' Paul tried to explain.
'Can you try to hold on? This place isn't safe, there's objective danger of rockfall here. It's best to keep moving.'
Paul obediently complied, but as we continued on the steep, narrow trail contouring high on the valley hillside, and because he was ahead of me, I noticed he was walking funny.
'Are you okay?' I asked.
'Nope. I think I've shat myself and I can't check 'cause there's no fucking rocks or bushes to go behind.' I felt so sorry for him – but also slightly relieved that at least I didn't have to feel so embarrassed about my own display of bodily functions.
There was not much in the way of distraction from our symptoms; billowing white clouds obscured any morning views of big mountains, and valley hillsides began to shed their colour as we climbed higher up out of the treeline into a monochromatic landscape of greys and browns. The Imja Khola zigzagged along the V-shaped valley floor and, like the paths of lava flow from a volcanic eruption, massive rockfalls scarred the opposite hillside – a lasting memory of previous monsoons. A man was negotiating his way along the lower scree slopes on the opposite hillside, while another drove a herd of yak across a higher path. Watching them travel temporarily took my mind off how grotty I was beginning to feel.
After passing yet another chorten and admiring a long line of tilted _mani_ prayer stones, I thought it would be a good time to talk to our trek leader.
'Hey, Mara, I know you'll have been told that I have asked for the trek to be tailored for me and I was just wondering how that's going to work.'
'Yeah, I read in your application that you wanted to go find Nuptse Base Camp. Was it your father that was a climber?'
I explained my relation to Gerry, about his summit bid and how he was meant to marry my mum but the accident had claimed his life six weeks before their wedding day.
'Don't worry, I'll speak with Chote Sirdar and we'll get your trek arranged. You know, my boyfriend died on Annapurna's south face. I can totally relate to your sense of needing to go to the place where Gerry's body lies.'
She talked at length, and I was touched that she had told me about her boyfriend so I decided to tell her my whole story: that I had brought my mother's ashes with me.
'That's amazing. You should write a book!' she said.
'Maybe I will,' I laughed.
Dingboche village was still a further two hours away. The thinning air was cold, dry and causing shortening of breath. The going was hard. We all wore neckerchiefs over our noses and mouths to protect our skin from the ultraviolet rays, and also to help prevent the dusty, dry air from giving us the renowned Khumbu cough. It was a relief finally to see a modest collection of teahouses come into view; we would have an extra day and night's acclimatisation here – and I for one was grateful for the rest. That evening dinner was pizza, chips and spaghetti with a tomato sauce followed by delicious apple, and for the first time in ages I was actually able to enjoy what I had been given. The antibiotics at last seemed to be working.
As I lay in my sleeping bag I said a silent and wholly selfish prayer: _Dear God, Please let me get a great sleep. I've been running on empty and haven't had a non-trekking day since I've been ill. I really need to be fit and well so I can take my mum to Gerry. Amen_.
God must have been listening. I woke up the following morning feeling awesome. Even the skies were a cheerful blue. I took photos of the mountains; like wintry queens, they were resplendent in their glittering white cloaks. Icy fluting thrust upwards like giant folds while plumes of spindrift from the highest peaks blew from summit crests like delicate trains. Clouds were being born: the sun their sire, and the mountains their dame!
At breakfast I ate all my porridge and had a pancake with lots of apple jam then chilled out on a chair in the dusty courtyard to take in the view. Straight ahead, soaring to 6,812 metres, was Ama Dablam – possibly the most perfect mountain I'd ever seen. Its lower reaches were blocked from sight by the green, corrugated long-drop toilet that was built on top of eight stone steps, and in front of it, piled up high, was a giant, dome-shaped mound of yak dung drying in the sun.
'It's an incredible mountain, isn't it?' said Paul as he came and sat on the chair next to me.
'Yeah,' I replied, nodding towards the toilet and dung with a smile, 'But there's always gotta be some kinda shit to spoil things . . . yak shit . . . get it?'
'You're obviously on the mend,' he said with a broad grin.
My hand in his, we took a leisurely walk up the main street of Dingboche, a precarious, pot-holed thread of dirt. A narrow channel of none-too-clean-looking water ran its length. It was piped up from the river below, and locals were using a bucket to scoop out water to use for cooking, washing and drinking. Busy at work behind a low dry-stone dyke wall, a large woman in a long pink skirt, apron, blue top and headscarf was flinging yak muck across her land, and as we ventured further more mountains revealed themselves. Island Peak with its brown triangular face looked dwarfed by its neighbour, Lhotse.
'Gerry climbed Island Peak in one day. Maybe we could come back here to climb it too?'
'Anything's possible,' said Paul.
I felt excited. I wanted to go everywhere Gerry had been, to see the landscape as he might have seen it. As I had felt in the Scottish hills, and on the rock in North Wales, it somehow made me feel closer to him and therefore my mum.
Returning through the village, we found an open kiosk. It was the largest store there and only the size of a two-door wardrobe, but among its variety of useful items, and with Mum and Gerry in mind, I purchased a set of prayer flags.
At eleven o'clock Paul, Harry, Dawa and I hiked over the hill to Pheriche while the rest of the group went higher up into the hillside to aid acclimatisation. We were planning to attend the Himalayan Rescue Association (HRA) talk on mountain sickness and how to recognise signs of pulmonary and cerebral oedema. A chorten-splattered hilltop separated the two settlements of Dingboche and Pheriche, and as we approached Pheriche, Dawa spotted a lammergeier, its wide wings stroking the air in an effortless glide. Feeling emotional at my first sighting of Gerry's favourite bird, I took it as a good sign. Having threaded our way down steep scree to the wide valley floor, we continued along winding dirt paths to the Himalayan Lodge, where we had lunch. Soon after eating Paul was out of sorts.
'That went straight through me. My guts are rank,' he groaned.
'Start on the antibiotics. Here, take mine just now,' I told him firmly.
To kill some time before the talk we wandered around the settlement.
'Have you noticed a pattern with the clouds, Paul?' I asked, as they rose up from the lower valleys, bringing with them strong winds. 'Mornings are often quite clear, but by early afternoon they gather and stampede upwards enveloping everything with their mists.'
He mumbled agreement, but I could tell that he was too distracted by his stomach. And I now became preoccupied with what the clouds were doing. If I stood any chance of finding the memorial cairn my good-weather window would only last till early afternoon. I worried away to myself.
Early evening, back in the teahouse at Dingboche, the porters, Sherpa and Sirdar crowded around a table with John Peacock's map opened out. They pointed at the inked-in cross – where John thought we would locate the memorial cairn. Although I had no comprehension of Nepalese it was clear that Chote was explaining about Gerry and the accident on Nuptse. Many pairs of sympathetic brown eyes looked up at me, but there was something else that I recognised in their gaze, which I tried to ignore – I could tell they didn't think that we were going to find what I was looking for.
### CHAPTER TWENTY-TWO
### Deliverance
_Chukhung, 9–10 May 2014_
'Today is the day he died you know,' I said to Paul.
'I know,' he said affectionately, rubbing his thumb against the back of my hand.
We carried on in an easterly direction to Chukhung, at 4,730 metres. Low-growing shrubs were ever sparser and the trail became rockier. And, just when I thought I'd seen the most spectacular perspective of Ama Dablam, I was treated to yet another as we contoured its opposite hillside. Immense hanging glaciers and tiered fluting, luxurious folds and furls, coated the mountain like thick Christmas-cake icing. Ahead were Island Peak, Lhotse and Nuptse, its deadly snow couloir coming increasingly into focus. A continuous series of ridges punctured the skyline like ferociously sharp angular fangs, while luminous peaks stood out against the piercing blue. Studying the mountain faces through binoculars, I marvelled at the vivid creations in ice, sculpted by the elements. It was like looking through a kaleidoscope: a vision of dark, fractured geometrical lines, wedges and coils. Vapour lifted off into the air, twisting and furling like flames, changing the mood and appearance of the landscape. The scene was mesmerising; its wildness resonated within me. _Did Gerry feel this same wonderment? Do you feel it, Mum? I think that you do. And tomorrow, Gerry, we're coming to find you. We are on our way!_
We arrived at Chukhung in time for lunch. Porters had already set up tents in the earthen makeshift camping ground. Our team grouped inside a spectacularly filthy wooden shack whose tables and surfaces were covered in thick layers of dust and cobwebs. It was like a scene in an abandoned railroad house from an old western movie. Soup was served. Rushing our food, Paul and I escaped the dirt and the drone of bluebottles as they bounced off the windowpanes. Out in the fresh air, we descended the stone steps and sat on the dusty wall to wait for the others.
Following the rule of walk high, sleep low, for the first time our whole group left together for a short but steep acclimatisation hike towards the mountain Chukhung Ri. It felt warm in the sunshine when we eventually stopped, and Paul and I sat on a rock as if it were a seat with the gods, taking in the views of our grandiose surroundings. In front of us a castellated horseshoe of serrated peaks like the teeth of a saw glistened brilliantly white against a perfectly blue sky. The grey, blistered tongue of the Ama Dablam Glacier stretched its way to vanishing point down the Chukhung valley. The smell of heathers reminded me of home. Birds sang in the stillness, and butterflies danced frivolously.
'How are you feeling?' asked Paul. 'I'm a bit breathless.'
'I'm okay. I did have a small nagging headache but it seems to be going.'
We didn't speak much. There was no need. We watched as everyone trekked back down the hillside. There was no hurry to leave so we stayed on our rock together, in contemplation of the day, before we too returned to camp. Inside our tent we addressed the ritual unpacking of our holdalls, pumped up our mats and unrolled sleeping bags. I combed through my long, knotted hair while Paul, leaning up on his elbow, watched the battle with a smile. We remained in silence.
Fishing in my bag, I pulled out the set of prayer flags that I'd bought at Dingboche and linked them up, wishing that I was trekking my way to Gerry with them right then. I tried to console myself, as well as prepare for disappointment. _What's one more day in the scheme of things? Even if we don't locate the cairn tomorrow, it doesn't matter. What's important is that I can make my own cairn, drape my flags, say words, and then I can send Mum to find him._ I looked forward to delivering my eulogy – not least because fragments of it were on a perpetual loop in my mind, driving me to distraction.
Chote walked over to wish me well, but his tidings came with a warning. 'I have passed by this place many times and have never seen what you are looking for.'
My heart sank to my stomach. As Chote left I turned to Paul and rested my head against his shoulder.
'Do you think we'll find the cairn?'
'We'll try,' Paul said. I didn't want to think about the possibility of failure. I had to believe that we would find it. Shuffling closer to him in my sleeping bag, I closed my eyes, but my unfulfilled business called out to me like a siren as I drifted towards sleep.
'_Wahsheeng waterrrrr!_' yelled the kitchen porter outside our tent as 10 May dawned. His voice was alarming, like a giant gong resonating. It was amazing how we made the small basin of tepid water meet our needs. '_Teea!_' Dawa called five minutes later, thrusting a hand through the nylon flap and holding out a cup for Paul. 'No tea, _didi_?' he asked – every day, always addressing me using the same term of endearment, meaning 'sister'.
'No, thanks,' I'd reply – every day. Breakfast was at six-thirty, and Chote had decided that Dawa, who spoke better English than Jangbu, was the best candidate to take Paul and me on our mission.
Paul, Dawa and I left, kicking up orange dust with each step. I watched Paul's head bobbing up and down ahead of me. I was glad he was here with me. Our pace was reasonably quick as we ascended the sandy trail, and clouds began to disperse as the day heated up. Contouring the hillside, we followed Dawa mostly in silence; it was taxing enough to draw breath.
'Is Bibre below?' I panted.
'No, it's over the next ridge.'
Although I thought I'd studied the lie of the land well as we'd passed by the day before, I recognised very little: lumps, bumps and scars on the grey landscape – they all looked the same and I realised how reliant we were on Dawa to guide us in the right direction.
'Do you know where we are going?' I asked.
'I've never been up this valley before,' he said, shaking his head. I literally couldn't fucking believe it. When we stopped for a drink, I gave him my map as well as John Peacock's small colour photographs.
'Let's climb up to the top of that ridge; maybe it'll give us a better viewpoint,' I suggested.
We carried on along a faint path which then descended to a dried riverbed of black stones. Waves of optimism came and went. I recalled meeting John the previous summer, when he had spread his old map of the Khumbu Himal across the table in the supermarket café. It had all looked so straightforward then. But here I was, not a large fingertip on a thin contour line, but a small person enclosed in an enormous, ribbed and alien landscape. The only reference point was the mountain itself, Nuptse, towering ahead on our right. Dawa still had the map. My only clue as to where we were, roughly, was when Paul raised his hand, palm out-turned to invite mine.
'High five!' he said.
'Do you know where we are?' I asked excitedly.
'No.'
'Why'd you high-five me then?'
'Because we've reached the 5,000-metre mark,' he answered, tapping the altimeter on his watch. 'That's where the cross was drawn onto the map, isn't it?' We checked the map against the landscape, and sure enough the geographical features looked about right, but the photographs didn't marry up. We discussed where we thought we might be. 'I think we should aim for the "V" shape on the horizon line at the head of the valley,' said Paul. I felt agitated.
'But according to John's cross on the map the cairn should be somewhere around here. I want to go up higher, onto that ridge on the right; maybe we'll see more?' We were penned in on our left by the jagged black rock of Dingogma; its dark, brittle shards, arranged in stacks and chimneys, were a stark contrast to the white of Nuptse, which soared skywards on our right. And the return view of Ama Dablam was imposing and authoritative. I felt tiny in this gigantic arena, and Chote's words haunted me. ' _I have passed by this place many times and have never seen what you are looking for._ '
Fitter and better adapted to higher altitude, Dawa raced ahead. Paul and I followed behind across the stony, egg-box terrain. But when I next looked up, Paul was pulling further away from me and Dawa had completely disappeared. Despite my best effort to get his attention Paul was too far ahead, and my voice too small to be heard. Irrationally, I felt vulnerable. Try as I might, I physically couldn't get myself up the damn moraine any faster. Each intake of breath seemed piercing in the rarefied air, my lungs felt like they were going to collapse as they laboured. 'Fucking-fuckity-fuck,' I muttered. 'This is so not cool. Not cool, at all.' I felt panic that I'd be left behind, but had to stop to let my heart regain rhythm so that I could try to get control over my breath in order to yell out. Paul stopped and looked around. _Thank God._
'Where's Dawa, did you see where he went?' I called. But Paul shook his head. I shouted. 'DAWA!' No answer. I knew he wouldn't have abandoned us and we weren't really lost, but we were in vast surroundings – and I had an important job to do. Minutes seemed like hours.
'He's here!' Paul called. He had topped out and spied Dawa, sitting on a rock poring over the map. Relieved, I puffed up onto the ridge and was suddenly struck by the sight of Nuptse's south face, now in full view. For a moment my unquiet mind became still.
'I think this is the area of the Base Camp,' Dawa said, bringing me back to the present.
I felt elated. 'Will we find the memorial?'
'No,' he said, his soft, brown eyes looking directly into mine, 'I don't think so.' The waves of optimism and deflation tormented me. Turning my head to hide my disappointment, I fixed my gaze along the ridge of lateral moraine. We'd been walking for hours. I gulped at the cold air. Early-afternoon clouds now sat across the base of Ama Dablam; it wouldn't be long before they would conceal our views completely. Time was running out, but I wasn't prepared to give up.
Taking John's photographs, Paul wandered off and so did Dawa. And I just stood and stared at the mountain that changed the course of my mother's, and my, entire life. As I studied the route to its peak I now saw how it was possible that Gerry had fallen all 1,800 metres from its top to the bottom. I glanced around. Paul and Dawa were far enough away. ' _Please_ , Gerry! Show us the way!' I implored. 'I've brought my mum to you! She's here now.' I knew my pleas could not be answered, but it didn't stop my own superstitious hope. And if his spirit was awakened, I hoped that it leapt with joy.
As I followed Paul, I focused on Nuptse. I could see the bergschrund at its base and wondered if Gerry's body might still be there, perfectly entombed in ice. I wanted to go and find him, but that wish was futile too. The sweeping hands of my watch ticked on, and I resigned myself to the fact that we were not going to locate the memorial cairn. I decided then that where I stood would be as good a place as any to build my own shrine.
I was emotional. How could something so breathtakingly beautiful have been the cause of such untold misery?
'Come here a minute,' Paul called. I turned in his direction. He beckoned me to him. His simple gesture caused another tidal wave of optimism. I tried to suppress the sensation, afraid of disappointment, as I made my way over. Resting his hand upon my shoulder, he pointed down towards an amorphous lump. 'Look down there, can you see it?' I strained my neck forward and screwed up my eyes to scan the scene. Others would scarcely have noticed the collection of rocks piled together against a larger boulder; anyone could be forgiven for missing it, camouflaged as it was. Tears filled my eyes.
'What a wonderful miracle!' I said, the words choking out of my mouth. Against the odds, and down to Paul's perseverance in trying to match up the old photos against the landscape, he had found the memorial cairn.
Already halfway down, Dawa called, 'Be careful, _didi_. It's very steep and loose rocks.'
Swallowing back tears and composing myself, I picked a way down the scree from our high point. I staggered over to a large rock and sat down in a joyous daze. Paul handed over the tiny, crumpled pictures as he joined me: the memorial cairn with Nuptse towering behind, Base Camp with Pokalde in the distance, the Alouette rescue helicopter near Base Camp with Ama Dablam in sharp focus to the south – they all matched up. Base Camp's position was perfectly sensible: right next to a glacial lake, lying at a low point between the big mountains, providing good shelter and situated well enough for the team to view progress on Nuptse through binoculars. Checking the map, I realised how earlier confusion over the memorial cairn's location had arisen – we were just over 3 kilometres north-northeast, and 188 metres higher in elevation, than the little black cross marked on the map. The cairn had stood almost unchanged for thirty-nine years; all that was missing was the aluminium cross.
Following tradition, I placed a rock on top of the cairn, as did Paul. Then, extricating the prayer flags from my bag, I attached one end securely to the base of the cairn while Dawa tied the other to a large boulder at a higher point. Paul and Dawa said nothing at all, respectfully stepped back and went down to the lake. I took the urn from my backpack and carried it to the cairn, Nuptse behind it like a towering tombstone.
Carefully, thoughtfully, I scattered Mum's ashes. I paused briefly, aware and upset that these were the fragmented remains of my mother. 'It's been a long time coming, but I did get there in the end, like you said I would. I can let you go now, and we can both finally be at peace.'
When the cross had been removed, rocks had been displaced and a recess in the centre of the cairn left gaping open – almost as though it was known that I was coming. I placed the urn inside then walked to Paul. He had a photo that I wanted to put into it along with a simple message. Standing back, I could never have imagined a spot in which such perfection, beauty and peace existed. It was an incredible final resting place and I felt confident that it would remain this way, undisturbed for at least another thirty-nine years and more. No words passed our lips. I was touched by the feeling expressed in Dawa's eyes, which he lowered as he walked purposefully by. He set about rebuilding fallen rocks from the cairn with great fervour, blocking in the urn, keeping its secrets preserved for ever. He wasn't Dawa Sherpa any more; he was Dawa my friend, and I was glad I hadn't done all this on my own.
Absorbing the setting, we three sat on a rock and ate some jelly babies, lost in our own thoughts. After all my troubles and failed relationships, feeling as if I'd been wandering aimlessly through life, I was finally confident I was on the right track. I now understood that, like being on a mountain at high altitude, life was a test of endurance. You just had to be patient, and know that no matter how tough things may seem there is always a way through the difficulties. You had to learn to accept the things you couldn't control but be brave enough to change the things you could. My mum's death had hit me hard; I had felt so adrift and alone. But while I would never stop missing her, I could find a way to live with my loss. Giving myself physical challenges had started me on the path to healing. Understanding her life and accepting her death had helped me to understand myself better. And, above all, I came to realise the importance of having people in my life to support me. Now I have a partner who loves me unconditionally, and I have a great friend in Mel. And, of course, I have my children. As I sat there on the rock I wished that I could see them and hug them tightly.
With the weather beginning to close in we had to leave. I thought about John, and how pleased he would be to know that we'd made it. I took a photograph of the empty expedition Base Camp and lingered, wanting to stay, but having to go. Dawa, Paul and I collected some rocks, small, token reminders of this special place. As we travelled back over the hillside above Chukhung a massive bird of prey glided down the valley. Its colouring and long, wedge-shaped tail were unmistakable. It was the lammergeier, the very bird Gerry had said he would want to be reincarnated as, and I wondered. I thought about the embroidery Mum had stitched for Gerry, her wedding gift, in shiny threads of pink and blue: _Today is the First Day of the Rest of Your Life_ – and so it was.
### EPILOGUE
### Homeward Bound
_Ben Nevis – The Venomous Mountain, July 2015_
Early June 2014 was warm. Paul and I were back from our trek in Nepal and I was keen to spend as much time as possible with my sons after so long away from them. Hauling out our bikes, we took advantage of the weather and set off to Ardersier.
My legs were on autopilot as we pedalled along. I thought about how I had long wanted to move back to the village where I grew up, to the place where my family had been together, alive and well, when everything had been all right. I thought about how I had talked myself out of the notion, telling myself that going back wouldn't bring them back, and it wouldn't be the same; reminding myself that the clock cannot be turned back. As the wheels on my bike turned I supposed it was enough to at least be within cycling distance of my old home.
Ditching our bikes, the three of us raced up the sandy path, brushing through spiky gorse, to arrive at the viewpoint on top of Cromal Mount. A gentle breeze ruffled through my hair as I stood with hands on hips catching my breath. I swept my gaze across the familiar landscape until my eyes rested their sights upon Inchrye. This time there was none of the sorrow or self-pity that I used to feel when I took in the view of the old family home. I no longer felt embittered by those events of my life that had cast their dark shadows in me. The trapped, lost and lonely girl was gone; I now knew myself. I was my own woman.
Just ten yards further along the road from my childhood home lay a row of seven south-facing terraced houses, their gable ends towards the sea, and in one of the gardens I saw a prominent 'For Sale' sign. When we climbed back down to the bottom of the hill my belly fluttered with a flurry of excitement and I went to take a closer look.
Within a few days I'd made an appointment and returned to view the house. It wasn't big enough for me and two boys. But the owner told me the property next door was twice its size, the owner had recently died, and the daughter was selling.
'Was it Mrs Johnstone who lived there?' I'd asked.
'Yes, that's right. Did you know her?'
'Yeah. I used to live at Inchrye. Her husband Scotty and my grandad were great friends.'
Whether or not the stars were in alignment, the timing was never more right. The final piece of my jigsaw was about to slot into place.
Mrs Johnstone's daughter remembered my family and we talked about the old days as she showed me around the house. 'It's not on the market yet. We're still trying to sort everything out,' she said.
The place was in desperate need of renovation and modernisation, but that didn't matter – it would be a real blank canvas. I could do it up in my own time; there was even room for an art studio where I'd be able to work on the paintings I had started after my accident. The good feeling I had about the house was inspiring, and something told me it was going to be mine.
Within two weeks of putting my own property up for sale a young couple came to view it and soon put in an offer. I was ecstatic, and, with my mortgage secured, the wheels were set in motion.
The last night in our flat was not without its usual disruption. Loud banging and shouts woke us at three in the morning. Looking out into the hallway, I saw and heard nothing. I went to the kitchen and opened the window. There was nobody out on the street far below. As I pulled my head back in I looked up and saw, to my horror, a pair of feet dangling above the window. Voices shouted again.
'C'mon! Don't be daft,' said one.
'You're gonna kill yourself, come back up! We'll help ya,' said another.
'The police'll be here any minute! Get back in, ya fuckin' idiot!' urged another faceless voice.
Leaks, slugs and bad neighbours: for so long it had felt as though we were being driven from the home where I'd wanted to belong with my memories of Mum. But as I closed the door on the flat for the last time I felt no regret. I was so much stronger now: the mountains had made me that way. By this point I'd climbed most of the Munros, just two more summits remained, which I would tackle together, and I'd saved the biggest mountain of them all for last.
Togged out in kilts, ten of us met at Torlundy near Fort William and set out along the track signposted 'North Face'. Our route swung us left and right and on and up before we reached a stile where great views of verdant green landscape with long grasses, heathers and pines opened up. Ben Nevis's rocky north face glinted like steel in the sun, and white cauliflower clouds hatched and flourished amid the darkness of the crags, billowing upwards into a cerulean sky. Because the traverse of the Carn Mor Dearg Arete was too dangerous for my sons in winter conditions I had put off doing my final Munro outing till now, a year after our return from Nepal. I really wanted both my boys with me, but at the last minute Leon had not felt up to coming. I missed his company, but smiled as I watched Paul and Marcus ahead of me, their dark-green kilts swinging freely with each step and my friend Mel forging the way ahead of everyone else.
On the red summit rocks of Carn Mor Dearg the atmospheric conditions were dramatic. Light and dark shades were cast over the gullies, cliffs and buttresses of the Ben, and the arm of the arête, extending in a graceful, rusty curve, appeared razor sharp. The traverse was utterly involving. Rocks were slippery when it began to rain, and we became human aeroplanes, arms outstretched, until fear of falling to our doom finally forced us down onto all fours. A poor soul had died on a particularly narrow section of angled rock only two weeks earlier. After a short scramble a wide shoulder was reached before a bouldery ascent put us onto the flat summit plateau of Ben Nevis. We had ascended the mountain without meeting anyone, but here we were greeted by the sight of many people: Arabs, French, Indians and a group of Muslims sitting cross-legged in a circle, their voices praising Allah in song. It was a harmonious scene. Enveloped in white mists, our group clambered to the top of the summit cairn. Going higher still, standing on its trig point, I waved our saltire with pride, but the achievement I felt wasn't just because I'd completed the full round of Munros.
The journeys I had made by climbing all of the Munros, the trips that forced me to push the boundaries of endurance on Africa's highest mountain and on the peaks and passes of the Himalayas, had all helped me to reconcile an inner journey. Cradled in the arms of valley walls, scrambling on crags and topping out onto ridges gave me the breathing space I needed to gain perspective; in those places I was freed of troubles from the past and worries for the future. Mountains taught me to live in the here and now. They showed me that although life is uncertain, it is also full of possibility. They had enabled me to cope with my problems; but there are other ways for other people.
Nowadays I always check the weather forecast. I make sure I have my map and compass at the ready. I pack my rucksack with emergencies in mind. For as long as I am fit and able I will always return to my mountains.
## Postscript
On a Sunday morning near the end of January 2017 I silenced the offending sound of my alarm and rolled over, lowering my legs over the side of the bed. I momentarily held the weight of my head in my hands; it was five-thirty in the morning. I was feeling drugged with sleep and an uncertain nagging that things were not quite right quietly tugged away inside.
Mel and I hadn't seen each other for a proper catch-up in ages, but the two-and-a-half-hour car journey to Bridge of Orchy in the west Highlands was quiet. I'd never felt such chronic motion sickness. If only my car hadn't been out of action I'd have driven and spared us both from my whimpers and moans as we curved the seemingly interminable bends on the Fort William road.
As soon as the fresh mountain air filled my lungs I felt better. And as we got moving on the long, stony track to Beinn Mhanach the heavy tiredness I'd been feeling wore off too.
We ambled up the mountainside over clumps of waterlogged grasses, and for the majority of the day we were surrounded by thick mist. The weather didn't matter. Our day wasn't about the views; it was about exercising the body and mind, and having a good old chinwag.
'There's something I'm worried about,' I told Mel. 'I found a lump in my breast.'
She looked at me. 'When did you find it? Have you been to the doctor? You've had lumps before so this one will probably be okay too,' she said. I sensed the concern through her feigned confidence.
'Yeah, I once went to the doctor with one lump and came out with several more,' I said with a laugh. But inside I felt sick. Sick because our exchange exactly echoed the very conversation my mother had had with me so many years ago.
'But it does feel different this time. It's totally solid,' I said.
Mel and I tramped on up to the exposed top. The wind baling over the summit was biting cold and, at once, the immediacy of several basic needs (warmth, food and a pee) became foremost in my mind. The mention of the lump in my breast was not spoken about again for the rest of the day.
Three weeks later I was walking with Paul, back towards his parked car, my hand in his. Dead air hung between us as we drove away from the hospital. My head was swimming. The consultant confirmed I had cancer, and it was aggressive – just like Mum's was. Sharp pain in my right breast caused me to wince. A doctor had inserted a marker – a coiled wire – into the tissue behind my nipple, so that if the chemotherapy they were going to give me shrank the tumour to an undetectably small size they would still know where to cut during surgery. Only after the operation, which I was told wouldn't be until July, would they then be able to tell if the cancer had spread to my lymph nodes. And after surgery would be four weeks of radiotherapy . . . in other words, it was going to be a tough year.
Even now every little ache or cough sends my mind into a frenzy and I think, has it gone to my bones . . . is it in my lungs . . . am I riddled like Mum was? It's hard not to feel afraid when terrible memories of what happened to her are still so vivid. She was young, fit and healthy, just like me. Her lump was small and was operated on straight away, but it had still spread to her lymph nodes. My lump was colossal in size, so did that mean the odds were that rogue cells had already split off and circulated their malicious disease elsewhere?
_Am I a goner?_
_Is this the beginning of my end?_
In the first week of 'knowing', my mind took me on a journey to hell and back several times a day. It was exhausting, and waiting to start treatment a strain. I'd had all kinds of fears about chemotherapy, but didn't give in to these – I've witnessed enough in my life to know that when the mind gives up the body soon follows. Determination to survive kicked in.
During the chemo, even when I was most unwell, I forced myself outside every day to feel the sea air on my cheeks, grateful to live right next to the beach in my beloved Ardersier. I walked. At first my body behaved like my dodgy car: motoring along at speed when suddenly, without warning, power was lost, as if I was driving with the brakes on. There was nothing I could do except adjust to the new rhythm and keep going.
But it was going to high places that I knew would restore me best, so on my good days that's what I did: I hillwalked. During the eighteen weeks of chemotherapy my feet carried me back over Meall a Bhuachaille, where I'd had that first snowy, solo walk; to Meall Fuar-mhonaidh, my Christmas pudding hill. I tackled Munros: Bynack More and Ben Lomond. I trod new ground over a few Corbetts in Sutherland and kissed off the whole stinking chemo thing by walking the 72-mile length of the Great Glen Way, tent and pack on my back.
At first, the timing of my cancer had seemed unfair, and the irony of it was priceless – discovering the lump at age forty-four years and eighty-one days old, just one day older than Mum was when she died. But climbing made me strong. And having cancer has made me appreciate every single day of my life even more. I don't care if it's raining or windy or if the sky is clouded over. I put one foot in front of the other and off I go.
It's just another mountain.
## Acknowledgements
You wouldn't be holding this book in your hands if it wasn't for the following people. First off I need to thank my mate Mel, not only for her friendship, but for having a celebration birthday dinner for her fortieth. If it hadn't been for that I'd never have met her colleague and pal Fiona MacBain. Fiona, at the time, was writing her first novel, _Daughter, Disappeared_. We hit it off and agreed to help each other by editing our respective stories. Fiona, in 2015, then pointed me in the direction of Pete Urpeth at Xpo North, who in turn put me in touch with the renowned literary agent Jenny Brown. Jenny read my story and said she wanted to help. I felt blown away that someone with such credibility believed in me, and cared. I don't have a degree in English or a background in writing in any professional way – in fact at school I scraped through my Higher with a 'C' pass. So, although exciting, it was a daunting prospect to face the challenge of reworking my entire manuscript to bring it up to submission standard; however, throughout this process Jenny was a source of great encouragement. My editor Simon Spanton was only a WhatsApp away. Pippa Crane and my talented publisher Jennie Condell at Elliott & Thompson both put an astonishing amount of thought and effort into all the important final decisions about how to make my story the best it could be. Thanks also to copy-editor Linden Lawson. So to all of you who have been so instrumental in getting my story out there, massive thanks.
Thank you to Hazel Macpherson, my mum's oldest friend. And to Tony Kayley for reminding me about dodgy eggs and pink radioactive sausages.
I am indebted to those of you who helped me get to the root of so many questions and for your kindnesses. Many thanks to Henry and Sara Day, and to the wonderful John and Sheila Peacock for taking me under their wing; to Laurence Smith, Jon Fleming, Neil Winship, Nigel Gifford, Crispin Agnew, Brummie Stokes, Bronco Lane, Dougie Keelan, Dr David Jones, Cattie Anderson, Sue O'Hara, Tom Lynch, John Muston and John and Durga Patchett, who all knew Gerry as a friend or colleague and were able to share their stories with me. And to Jill and Rod Owens, Gerry's cousins, thank you so much for filling in the family history.
Sir Chris Bonington's generous Foreword is acknowledged with particular gratitude.
Lastly, and most importantly, thank you to my long-suffering boyfriend Paul. And to my gorgeous Marcus and Leon, who are my whole entire world and without whom I am nothing. Thank you for a love which is reliable and true, and the best love of all.
## Index
#### A
A' Mhaighdean (Munro)
Ama Dablam Glacier
Ama Dablam, Himalayas
An Teallach, Dundonnell
Anderson, Andy
Anderson, Cattie
Annapurna, Nepal
Anne, Princess
Aonach Beag (Munro)
Aonach Mor (Munro)
Ardersier, Moray Firth
#### B
Bansko, Bulgaria
Beinn Alligin, Torridon
Beinn Damh, Torridon
Beinn Eighe, Torridon
Beinn Mhanach (Munro)
Ben Nevis (Munro)
Ben Wyvis (Munro)
Betws-y-Coed
Black Isle, Inverness
Blackrock, Dublin
Bonington, Sir Chris
Bynack More (Munro)
#### C
Cairngorms
Caithness
Carn Mor Dearg Arete (Munro)
Carn Mor Dearg (Munro)
Carrington, Leonora
Charles, Prince
Charlie (boyfriend)
Chno Dearg (Munro)
Chote (Sirdar Sherpa)
Chukhung, Himalayas
Chukhung Ri, Himalayas
Clava Hills, Inverness
Coire Mhic Fhearchair, Torridon
Collier, Dr
Corbetts, the
Craig, Scotland
Croe, River
Cromal Mount
Cromarty Firth
Cuillin, Skye
Cyprus
#### D
Daniel (summit guide)
Dava Moor
Dave (guide)
David, Uncle
Dawa (Sherpa)
Dawn (school friend)
Day, Henry
Day, Sara
Deboche, Himalayas
Dingboche, Nepal
Dingwall
Don (guide)
Dorje (Sherpa)
Drumnadrochit, Highlands
Dubh Loch
Dudh Kosi river
Duich, Loch
Dundonnell Hill
#### E
Eag Dubh (gully)
Eas Mor (waterfall)
Emma (trek doctor)
Everest
Army expedition
avalanches
Base Camp
#### F
Fada, Loch
Fersit, Lochaber
Fionn Loch
Fisherfield Hill
Fisherfield Munros
Fleming, Jon
Fleming, R. L.
Fort George, Ardersier
Frank (stepdad)
climbing with
and Mum
Mum's illness
Mum's ashes
#### G
Gifford, Nigel
Gilman's Point, Kilimanjaro
Gleann Bianasdail, Highland
Glen Brittle, Skye
Glen Licht House
Glen Nevis, Lochaber
Glen Shiel Mountains
Glen Shiel Ridge
Gran
Inchrye
and Mum
Mum's illness
Mum's death
news of pregnancy
death
Grandad
army career
and Gerry
Inchrye
and Mum
news of pregnancy
and religion
Mum's illness
Mum's death
Gran's death
illness and death
Great Glen
Gun Lodge Hotel, Ardersier
#### H
Harris, Isle of
Herzog, Maurice
Highland Desert Zone, Kilimanjaro
Himalayan Rescue Association (HRA)
Himalayas
Horns, Torridon
Horombo camp
Humphrey (local guide)
#### I
Imja Khola river
Inaccessible Pinnacle (In Pinn), Skye
Inchrye (family home)
Inner Hebrides
Inverness
Irina (Frank's wife)
Island Peak, Nepal
#### J
Jagged Globe
Jangbu (Sherpa)
Jimmy, Uncle
Johnstone, Mrs
Jones, Dr David
Jones, Glenys
#### K
Kahlo, Frida
Kate (college friend)
Kathmandu, Nepal
Kibo (volcanic cone), Kilimanjaro
Kikelewa, Kilimanjaro
Kilimanjaro
plans for
climbing
final day
volcanic cones
Kilimanjaro National Park
Kilmuir, Black Isle
Kongde, Himalayas
Krakauer, Jon
#### L
Lachenal, Louis
Lathbury, General Sir Gerald
Leigh, Major Ian
Leon (son)
birth
and neighbours
and Nepal
and Paul
rock climbing
and Sam
and walking
walking with
Lewis, Isle of
Lhotse, Himalayas
Liathach, Torridon
Lochain Uaine, Cairngorms
Lukla, Nepal
Lurg Mhor (Munro)
#### M
Mara (Larson, tour guide)
Marcus (son)
birth
toddler
and his Dad
and neighbours
and Nepal
and Paul
rock climbing
and Sam
school trip
walking with
Maree, Loch
Marie Curie Fundraising Team
Marty (trek roommate)
Matt (climber)
Mawenzi Tarn
Mawenzi (volcanic cone), Kilimanjaro
Meall a' Bhuachaille
Meall Fuar-mhonaidh
Meall Garbh (Munro)
Mel (friend)
Mike (father)
Minch (strait)
Moelwyns, Wales
Monjo, Nepal
Moray Firth
Morvich, Highland
Mountain Rescue
Mum
childhood
and bangle
cancer diagnosis
cancer treatment
flat
and Frank
and Gerry
illness
relationships
tape recordings
work
death
funeral
ashes
Munros, the
#### N
Nairn
Nakara Hotel, Kilimanjaro
Namche Bazaar, Himalayas
Naro Moru Gate, Kilimanjaro
Neil (climber)
Nepal
Nethy, River
North Sea
Nuptse, Himalayas
1975 expedition
base
Base Camp
couloir
Gerry and
south face
#### O
Ogwyn, Joby
O'Hara, Mike
Old Man of Hoy, Orkney
Ollie (walker)
_Origin of the Milky Way, The_ (Tintoretto)
Outer Hebrides
Owens, Maj Gerry
family background
and Mum
Scottish climbs
death
memorial cairn
Owens, Rod and Jill (Gerry's cousins)
#### P
Paul (boyfriend)
becomes boyfriend
fitness
handyman
in Nepal
rock climbing
support
walking with
Peacock, John
Peacock, Sheila
Penny, Aunty
Peru
Peter (local guide)
Phakding, Nepal
Pheriche, Nepal
Pierre (French pilot)
Poppy (family dog)
Poudyal, H.
#### R
Reusch Crater, Kilimanjaro
Richard (Gerry's climbing partner)
Robby (walker)
Roman (Irina's daughter)
Royal Nepalese Army
Ruadh Stac Mor (Munro)
_Rubaiyat of Omar Khayyam_ (poetry)
Rum, Isle of
Ryvoan bothy, Cairngorms
#### S
Saddle, Kilimanjaro
Sail Mhor
Sam (husband)
and boys
holidays with
marriage to
walking with
Scafell Pike, England
Sgurr Ban (Munro)
Sgurr Dearg, Skye
Sgurr Mhic Choinnich (Munro)
Sgurr Mor (Munro)
Sgurr na Sgine (Munro)
Sherpa, the
Skye, Isle of
Slioch (Munro)
Snowdon, Wales
Snowdonia
Steall Falls, Lochaber
Stob Coire Sgriodain (Munro)
Strathfarrar
Summit Hotel, Nepal
#### T
Tenzing Norgay
Three Peaks Challenge
Thyangboche Monastery, Nepal
Torlundy, Fort William
Torr Achilty
Torridon
Torridon, Loch
#### U
Uhuru Peak, Kilimanjaro
#### V
Venables, Stephen
#### Y
Yates, Simon
First published 2019 by
Elliott and Thompson Limited
27 John Street, London WC1N 2BX
www.eandtbooks.com
epub: 978-1-78396-420-8
MOBI: 978-1-78396-421-5
Copyright © Sarah Jane Douglas 2019
The Author has asserted her right under the Copyright, Designs and Patents Act, 1988, to be identified as Author of this Work.
All rights reserved. No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior written permission of the publisher. Any person who does any unauthorised act in relation to this publication may be liable to criminal prosecution and civil claims for damages.
A catalogue record for this book is available from the British Library.
Typesetting by Marie Doherty
Cover design by kid-ethic.com
Illustration contains elements from Shutterstock
# Contents
1. Title
2. Contents
3. Foreword
4. Prologue
5. Phase One: Following Footsteps
1. One: The Hills Are Calling
2. Two: Coincidence or Fate
3. Three: Doomed Champagne and Mountain Magic
4. Four: Cheating Myself
5. Five: Becoming a Woman with a Plan
6. Six: Divergent Paths
6. Phase Two: Troubled Tracks
1. Seven: Keep Them Close
2. Eight: Where the Wind Blows
3. Nine: Where There's a Will There's a Way
4. Ten: Hell on Earth
5. Eleven: Dead Loss
6. Twelve: Peaks and Troughs
7. Thirteen: A Hatch and Despatch
8. Fourteen: Slippery Slopes
7. Phase Three: Steps in the Sunshine
1. Fifteen: One Thing Leads to Something Else
2. Sixteen: Protecting Next of Kin
3. Seventeen: History Repeats Itself
4. Eighteen: Dark Horse
5. Nineteen: Walking on Air
6. Twenty: Early Illness
7. Twenty-One: Onwards and Upwards
8. Twenty-Two: Deliverance
8. Epilogue: Homeward Bound
9. Postscript
10. Acknowledgements
11. Index
12. Copyright
MICROSOFT DROPS OUT OF MSNBC.COM
July 16, 2012
Hit the delete key on the "MS" in MSNBC.com. Comcast, which owns NBC Universal, has acquired from Microsoft the 50 percent of the news website that it didn't already own and has renamed it NBCNews.com. The entertainment conglomerate agreed to pay $300 million to gain full ownership of the site, the New York Times reported on Sunday, citing unnamed sources. Users of MSNBC.com are now automatically being redirected to NBCNews.com where they are greeted by a notice assuring them that "while you'll notice some changes to our logos and navigation," nothing else is changing. MSNBC.com will return as an altogether separate entity next year, the notice continued, "creating in-depth content and a community for the passionate audiences of MSNBC programs." With 55.7 million monthly visitors, the current website ranks fourth among all general news sites behind Yahoo/ABC News with 89.1 million; AOL/Huffington Post with 59.4 million; and CNN with 57.4 million, according to comScore. Meanwhile, the Associated Press is reporting that Microsoft intends to launch its own news website in the fall.
The inner body: emotions and passions
Letters and diaries from the early modern era often refer to emotions, which were the topic of published work as well. In his enormous consideration of illness in the human body, The Anatomy of Melancholy (1621), for example, Robert Burton viewed emotions and mental states as linked to imbalances in the four bodily humors: too much blood made one bold, courageous, and sanguine (from the Latin word for blood, sanguis); too much phlegm made one sluggish, apathetic and phlegmatic; too much yellow bile (choler) made one angry, irritated, and choleric; too much black bile made one sad, depressed, and melancholy. Melancholy was the most worrisome of these states. A certain amount of melancholy could be a source of genius, inspiring music and poetry, but too much could lead to madness and both physical and mental illness. Physicians prescribed physical and spiritual treatments for their melancholic patients: a change in diet or sleeping patterns, vomits, bleeding, travel to a different climate, sex, music, astrology, wearing amulets, magic, prayer. In the early seventeenth century – the height

The regular delivery of mail is such a normal part of daily life today that we notice only when it is interrupted, but it was a major innovation in early modern Europe. People who could write did not have to rely on private contacts or the whims of travelers to correspond directly with one another, and began to use the post for regular communication. Though most letters from any era have long since disintegrated, so that it is difficult to arrive at exact figures, the volume of written personal communications increased significantly with the regular postal service. Paper provided letter-writers as well as printers with a cheap surface, and writing letters became a large part of many people's daily activities.
We know that people spent time each day writing letters not only from the letters themselves, but also from journals and diaries that describe this, along with their other activities. Personal journals have survived in numbers that steadily increase throughout the early modern period, mostly from people at the upper end of the social scale, but quite a few from middle-class individuals, such as the German merchant Mattheus Miller, and a few from the laboring classes, such as the English lace-maker's apprentice Mary Hurll. Explorers such as Columbus, Vespucci, and Pigafetta wrote open letters and kept journals describing their voyages. Men, and a few women, in various occupations kept daily records of their professional activities; the English scientist Robert Boyle (1627–91), for example, kept a diary of his experiments and observations, while the Dutch midwife Catharina van Schrader (1656–1746) kept notebooks of every one of the more than 3,000 births she attended over her long career. Such journals vary from terse and businesslike to rambling and thoughtful. Protestants, especially Calvinists and Quakers, were encouraged to engage in spiritual self-reflection on a regular basis, and in England and other places where literacy rates were relatively high, many people kept spiritual journals. Catholics were more likely to discuss spiritual matters orally with their priest, but in certain cases they, too, were encouraged to write them down. The confessors of several Spanish holy women (termed beatas) ordered them to dictate or write about their devotional practices and mystical visions. The most famous of these, Teresa of Avila, edited and refined her work over many years, turning it into a full spiritual autobiography.
Glickl bas Judah Leib, traditionally known as "Glückel of Hameln" (1646?–1724), was a Jewish woman born in Hamburg who assisted her husband in his growing trade in gold, pearls, jewels, and money. When she was in her early forties, her husband died accidentally, and she continued his business, traveling widely. To help her get over her sorrow, she also began to write her memoirs, which contain much about her family and business life, but also stories drawn from history and tradition through which she sought to understand and explain the events of her life. "In my great grief and for my heart's ease," she wrote, "I begin this book … upon the death of your good father, in the hope of distracting my soul from the burdens laid upon it." Her book would be a long endeavor written over many years, eventually describing the death of her second husband as well as her first. The text survived in two family copies to the nineteenth century, when it was published, first in the Yiddish in which it was written, and then in translation. Glickl's text provides a detailed look at the economic and social life of central European Jews as a group, as well as information about how one seventeenth-century woman responded to a son and a second husband who disappointed her and to a God who sometimes seemed distant; in recent translations, her memoirs have served as a source of spiritual inspiration as well as historical information. Samuel Pepys (1633–1703) was an English civil servant who worked in several branches of government. He eventually became the top administrator of the navy, a member of the House of Commons, and president of the Royal Society. He kept an extensive diary covering the years 1660 to 1669, including a discussion of the dramatic political events of those years and of the many theatrical and musical performances he attended.
He also recorded in great detail his rather fumbling sexual encounters with a number of women – one of which his wife walked in on – coding these in French, Italian, or Spanish words so that they were even more secret than the shorthand he used for the rest of the diary. This shorthand made transcription difficult, and the diary was not published until the nineteenth century, when a bowdlerized version omitting anything even vaguely sexual appeared; the full diary was not published until the 1970s, though it is now available in several versions on the web. The diary provides historians of music and drama with information about actors and audiences, and social historians with information about aspects of daily life, such as lice in wigs or excrement piling up in streets and cellars. Pepys also turned his talent for close observation inward, recording his strengths and weaknesses, thoughts and emotions, in what Claire Tomalin, a recent biographer, has termed his contemplation of the "unequalled self." Both Glickl's and Pepys's works are unusual in their personal insights, but they are also unusual in that they seem to have been written only for private or family reading. Today we draw a fairly sharp line between a private diary or letter and a published book of memoirs, but in the early modern period this line was not as clear. Members of the nobility and the educated elite sent letters to friends or colleagues knowing (and indeed, often hoping) that these would be copied, circulated in manuscript, and eventually published. The French noblewoman Marie de Rabutin-Chantal, marquise de Sévigné (1626–96), for example, wrote regularly to her friends and relatives, providing court news, Parisian gossip, and witty commentary; over 1,100 letters have survived. She quickly learned that her letters were being copied and read widely, and so crafted them with this in mind, though she still included her personal feelings.

SOURCE 21 Pepys's diary

In recording his daily activities, Pepys blends comments about family life, routine government operations, major political events, and goings-on in his neighborhood and around London. Here are two diary entries for February 1660, when Parliament was debating restoring Charles II to the throne after his father had been deposed and executed in the English Civil War. At this point Pepys was a clerk in the Exchequer, or treasury department.

February 16
In the morning at my lute. Then came Shaw [Pepys's colleague at the Exchequer] and Hawly [another work colleague, who was also Pepys's neighbor], and I gave them their morning draft [of ale, a common morning food] at this time at my house. So to my office, where I wrote by the carrier to my Lord [the earl of Sandwich, a distant cousin] and sealed my letter at Will's [a tavern] and gave it old East [probably a servant] to carry it to the carrier's, and to take up a box of china oranges and two little barrels of scallops at my house, which Captain Cuttance sent to me for my Lord. Here I met with Osborne and with Shaw and Spicer [two colleagues], and we went to the Sun Tavern in expectation of a dinner, where we had sent us only two trenchers [platters]-full of meat, at which we were very merry, while in came Mr. Wade and his friend Capt. Moyse (who told us of his hopes to get an estate merely for his name's sake), and here we staid till seven at night, I winning a quart of sack of Shaw that one trencherfull that was sent us was all lamb and he that it was veal. [In other words, they made a bet about what type of meat was on the platter.] I by having but 3 d. in my pocket made shift to spend no more, whereas if I had had more I had spent more as the rest did, so that I see it is an advantage to a man to carry little in his pocket. Home, and after supper, and a little at my flute, I went to bed.

February 17
In the morning Tom that was my Lord's footboy came to see me and had 10 s. [shillings] of me of the money which I have to keep of his. So that now I have but 35 s. more of his. Then came Mr. Hills the instrument maker, and I consulted with him about the altering my lute and my viall [violin]. After that I went into my study and did up my accounts, and found that I am about 40 l. [pounds] beforehand in the world, and that is all. So to my office and from thence brought Mr. Hawly home with me to dinner, and after dinner wrote a letter to Mr. Downing [Pepys's supervisor at the Exchequer] about his business and gave it Hawly, and so went to Mr. Gunning's [a prominent clergyman] to his weekly fast, and after the sermon … we went and walked in the park till it was dark. I played on my recorder at the Echo, and then drank a cup of ale at Jacob's. So to Westminster Hall, where I heard that some of the members of the House were gone to meet [about the restoration of King Charles] … Hence we went to White Hall, thinking to hear more news, where I met with Mr. Hunt [a neighbor], who told me … that some of the members of the House had this day laid in firing into their lodgings at White Hall for a good while, so that we are at a great stand to think what will become of things … Hence … to Harper's, and there drank a cup or two to the King, and to his fair sister Frances' good health, of whom we had much discourse of her not being much the worse for the small pox, which she had this last summer. So home and to bed. This day we are invited to my uncle Fenner's wedding feast, but went not, this being the 27th year [i.e. his 27th wedding anniversary].

(From The Diary of Samuel Pepys, ed. Henry B. Wheatley [London: G. Bell and Sons, 1924], vol. I, pp. 55–7.)

Daily journals, especially those of well-connected people, were often written in the same way, with an eye to their eventual publication. We have already traced the impact of the journals of Columbus, Vespucci, and Pigafetta, but even the journals of less adventurous sorts proved interesting for people to read.
The English clergyman John Beadle's The Journal or Diary of a Thankful Christian (1656), for example, was a best-seller and a model for others. This semi-public nature of many personal documents means that we cannot use them as direct windows into people's inner thoughts and emotions, for their writers often framed their journal with a wider audience in mind, and were careful to present a persona that would enhance their reputation or at least be acceptable. This is actually true of all personal documents. Letters and diaries, even those that the writer expects will remain private, are written within a specific cultural background in which certain emotions, ideals, and fantasies are regarded as appropriate for people of a specific age, gender, and social class. In the early modern period, for example, anger was generally seen as more appropriate for men and thus masculinized a woman who became extremely angry, whereas intense heterosexual passion was seen as feminizing a man. This did not mean that women never became angry and men never felt passion, but such expectations may have affected how men and women described their feelings, even to themselves. Thus we may think of personal documents as strictly descriptive sources, depictions of reality, when they are actually to some degree also prescriptive sources, reports of what their writers wished were true.
Hell's Highway: Chronicle of the 101st Airborne Division in the Holland Campaign, September-November, 1944
General Ike: A Personal Reminiscence
Winston's War: Churchill, 1940-1945
The Politics of War - Australia at War, 1939-45 - From Churchill to Macarthur
The Second World War: Asia and the Pacific
Thunder Gods: The Kamikaze Pilots Tell Their Story
Tank Busters - The History of the 607th Tank Destroyer Battalion in Combat on the Western Front
Source: https://crypto.stackexchange.com/questions/51146/rubiks-cube-as-encryption/51147

# Rubik's Cube as Encryption

Consider this scenario:

Alice gets a Rubik's Cube and peels off the colors from each piece. She then writes a small message on one of the faces of the cube and fills the remaining pieces with random letters. Then, she scrambles the pieces in a way that was pre-determined between Alice and Bob. And finally, she ships the cube to Bob.

Can this be considered as encryption, and, if so, how secure can this encryption scheme be?

Comments:

- How will Bob know the correct orientation of the cube so he knows how to hold it when he starts unscrambling? – Barmar Aug 28 '17 at 10:24
- @Barmar Centers cannot move. Pick any center as the 'root' to determine facing. – Weckar E. Aug 28 '17 at 11:50
- @WeckarE. But if the colors are removed, how do you know which face is which when decoding? – Barmar Aug 28 '17 at 11:53
- @Barmar They can also pre-determine that. For example, Bob knows to start solving the puzzle with the center piece that contains the letter "A" facing him, and the center piece that contains the letter "Y" on the right side of the "A" center piece. Or they can decide on something smarter that I haven't thought of yet. – yasar Aug 28 '17 at 16:51
- For reference, Google CTF in 2017 included one challenge using Stickel's Key Exchange on a Rubik's Cube. Note that the Rubik's cube group is non-abelian, so it would fall into the non-commutative crypto category. There are a few crypto papers about Rubik's cube too. – Lery Aug 29 '17 at 12:47

## Answer (deviantfan)

> Can this be considered as encryption?

If the sequence of necessary moves is treated as the key, yes.

> How secure can this encryption scheme be?

First, some details about the cube:

- 6 faces, with 9 pieces visible on each. Because the faces share some pieces, and the immovable cube center is not visible, there are only 26 pieces in total: 6 centers (of faces), 8 corners (each with 3 colored sides), and 12 edges (each with 2 colored sides).
- The center piece of each face is, like the cube center itself, not movable. If it is "moved", in reality everything else moves.
- The 8 corner pieces are always corner pieces, independent of any moves. The same goes for the 12 edge pieces.
- There are 8! possible position combinations of the 8 corner pieces (naturally). In their positions, 7 of the 8 can have 3 possible "rotations"; the last one depends on the others. With this, there are $8! \cdot 3^7$ possible corner positions.
- Similarly, the 12! combinations of edge pieces are restricted to $\frac{12!}{2}$ by the corner pieces (for details on everything, see Wikipedia).

Now, we have 9 pieces that contain "good" data: 1 face center, 4 edges (each of which has two more sides with nonsense data), and 4 corners (each with 1 more side of nonsense data). The other 17 pieces contain only nonsense data.

If an attacker wants to (brute-force-)find the center piece with the good data on it, there are 6 possibilities (6 face centers; just turn the whole cube around to find the right one).

Then there are 4 corner pieces where position and orientation matter, and 4 others that don't matter for finding the one good-data face. Meaning, $\frac{8!}{4!} \cdot 3^4$ possibilities to try here.

Finally, there are 4 edge pieces where position and orientation matter, and 8 others that don't. Meaning, $\frac{12!}{2} \div \frac{8!}{2} \cdot 2^4$ possibilities.

Multiplying:

$6 \cdot \frac{8!}{4!} \cdot 3^4 \cdot \frac{12!}{2} \div \frac{8!}{2} \cdot 2^4 = 155196518400$, or about $2^{37}$.

Your key has 37 bits. With today's computers, that's nothing => completely insecure.

Aside from that...

- A "padding" of 45 bytes for 9 bytes of payload is impractical
- A cube that contains the same symbol multiple times is less secure
- The scheme isn't protected against things like known-plaintext attacks, etc.
- Properties like the avalanche effect are completely missing
- Depending on the choice of padding data, just collecting statistics on which symbols exist might be enough to figure out the plaintext
- ... and many more

Comments:

- @SqueamishOssifrage I think if the OP meant a non-standard cube, it should have been mentioned in the question... – deviantfan Aug 28 '17 at 3:32
- Key length is only one factor, and even if there are $2^{128}$ keys, it still means nothing about security. The substitution cipher has a very large key space and it's completely and trivially insecure. – Yehuda Lindell Aug 28 '17 at 6:29
- Depending on whether all letters of the original message are oriented equally, your computations regarding the security are way too optimistic. If they should be equal, then most of the time the letters will be wrongly oriented, and you can use this to your advantage. For instance, this is why it is possible to solve the Sudoku Cube (en.wikipedia.org/wiki/Sudoku_Cube). – Jakube Aug 28 '17 at 10:04
- @YehudaLindell A large key does not guarantee security. But a small one does guarantee insecurity. Thus deviantfan only needs to prove an upper bound for the key (an optimistic one) which is small enough to guarantee a lack of security. As he has done. – Jose Antonio Reinstate Monica Aug 28 '17 at 11:49
- @JoseAntonioDuraOlmos Indeed, you are right. The problem is that many people will then say stupidities like "so it's OK with a larger cube"... That's just what I was trying to prevent. – Yehuda Lindell Aug 28 '17 at 11:53

## Answer

I will defer to deviantfan's judgement on whether this constitutes encryption, but I see no reason to counter his argument. But as to security, though...

Brute force is not necessary at all. There's classic permutation, but no substitution is involved. So it's just 3D Scrabble, and looks like this with small letters [image omitted] (I didn't spend a great deal of time formatting it, but you get the gist), or like this with large letters [image omitted].

The former is fairly trivial, as you can see whole words and multiple words. Compared to random letters, some common sense reveals the secret message.

The latter is slightly more difficult, as the letters would be permuted individually. The presence or absence of spaces is not really relevant to this answer's premise. Frequency analysis will make short work of decryption. If you look at the details of monogram, bigram and trigram letter frequencies, you'll see that most random combinations are not possible in a language (even if it's Klingon). There are even statistics for whole words. Below is an extract for monograms [table omitted].

Clearly, cubes with a "Q" on them are improbable in constituting a word, but even if they did, you know that the next letter is certainly a "U". Et cetera. The statistical calculations are a little beyond me, but you will easily infer that the message can be extracted much, much quicker than brute-forcing it. Without knowing the exact term for this level of encryption, I would use Scrabble Junior level.

As a sidebar, one of the most difficult aspects of this encryption might be how to actually convey the permutation sequence / key.

Comments:

- To convey the permutation sequence / key, all Alice needs to do is scramble two cubes the same exact way, and then give Bob one and keep one for herself. She then specifies a key letter that is not symmetrical, e.g. V. She puts the V in the green center square, with the character facing up towards the yellow center and down towards the white center. If you correctly orient the V and then solve the second cube, while copying the movements onto the code cube, it'll also solve the code cube. – Hans Z Aug 29 '17 at 20:27
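The key-space arithmetic in the accepted answer is easy to sanity-check. A minimal Python sketch that just re-does the counting above (nothing is assumed beyond the formula in the answer):

```python
import math

# 4 relevant corners: ordered choice of 4 positions among 8, times 3 orientations each.
corners = math.factorial(8) // math.factorial(4) * 3 ** 4
# 4 relevant edges: 12!/2 total arrangements divided by 8!/2 irrelevant ones,
# times 2 orientations for each of the 4 relevant edges.
edges = math.factorial(12) // 2 // (math.factorial(8) // 2) * 2 ** 4
# 6 possible face centers for the "good" face.
keyspace = 6 * corners * edges

print(keyspace)                 # 155196518400
print(math.log2(keyspace))      # ~37.2 bits
```

This reproduces the answer's figure of roughly $2^{37}$ keys.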
require 'google/protobuf'

Google::Protobuf::DescriptorPool.generated_pool.build do
  add_file("google/ads/googleads/v10/enums/goal_config_level.proto", :syntax => :proto3) do
    add_message "google.ads.googleads.v10.enums.GoalConfigLevelEnum" do
    end
    add_enum "google.ads.googleads.v10.enums.GoalConfigLevelEnum.GoalConfigLevel" do
      value :UNSPECIFIED, 0
      value :UNKNOWN, 1
      value :CUSTOMER, 2
      value :CAMPAIGN, 3
    end
  end
end

module Google
  module Ads
    module GoogleAds
      module V10
        module Enums
          GoalConfigLevelEnum = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.ads.googleads.v10.enums.GoalConfigLevelEnum").msgclass
          GoalConfigLevelEnum::GoalConfigLevel = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("google.ads.googleads.v10.enums.GoalConfigLevelEnum.GoalConfigLevel").enummodule
        end
      end
    end
  end
end
#pragma once

#include <atomic>
#include <cstddef>
#include <cstdint>
#include <memory>

// Note: this header also depends on the library's own Allocator, Pool, and
// ProcessReadWriteLock declarations; the corresponding project #includes are
// assumed here.

namespace sharedstructures {

class SimpleAllocator : public Allocator {
public:
  SimpleAllocator() = delete;
  SimpleAllocator(const SimpleAllocator&) = delete;
  SimpleAllocator(SimpleAllocator&&) = delete;
  explicit SimpleAllocator(std::shared_ptr<Pool> pool);
  ~SimpleAllocator() = default;

  // Allocator functions.
  // There are three sets of these.
  // - allocate/free behave like malloc/free but deal with raw offsets instead
  //   of pointers.
  // - allocate_object/free_object behave like the new/delete operators (they
  //   call object constructors/destructors) but also deal with offsets instead
  //   of pointers.
  // - allocate_object_ptr and free_object_ptr deal with PoolPointer instances,
  //   but otherwise behave like allocate_object/free_object.
  virtual uint64_t allocate(size_t size);
  virtual void free(uint64_t x);

  virtual size_t block_size(uint64_t offset) const;

  virtual void set_base_object_offset(uint64_t offset);
  virtual uint64_t base_object_offset() const;

  virtual size_t bytes_allocated() const;
  virtual size_t bytes_free() const;

  // Locks the entire pool.
  virtual ProcessReadWriteLockGuard lock(bool writing) const;
  virtual bool is_locked(bool writing) const;

  virtual void verify() const;

private:
  struct Data {
    std::atomic<uint64_t> size; // This is part of the Pool structure
    std::atomic<uint8_t> initialized;
    ProcessReadWriteLock data_lock;
    std::atomic<uint64_t> base_object_offset;
    std::atomic<uint64_t> bytes_allocated; // Sum of allocated block sizes
    std::atomic<uint64_t> bytes_committed; // Same as above, + the block structs
    std::atomic<uint64_t> head;
    std::atomic<uint64_t> tail;
    uint8_t arena[0];
  };

  Data* data();
  const Data* data() const;

  // Struct that describes an allocated block. Inside the pool, these form a
  // doubly-linked list with variable-size elements.
  struct AllocatedBlock {
    uint64_t prev;
    uint64_t next;
    uint64_t size;

    uint64_t effective_size();
  };

  virtual void repair();
};

} // namespace sharedstructures
Conrad Nicholson Hilton (December 25, 1887, San Antonio, New Mexico – January 3, 1979, Santa Monica, California) was an American hotelier and the founder of the Hilton hotel empire.

Life

He was born as one of eight children to August Halvorson Hilton (1854–1919), an immigrant from Norway, and Mary Genevieve Laufersweiler (1861–1947), an American of German descent.

Related articles

Hilton Hotels
from django.test import TestCase

from comp.models import Comp


class CompTestCase(TestCase):
    def setUp(self):
        for i in range(100):
            Comp(name='comp%s' % i, alias='c%s' % i, revision='A').save()

    def test_comps_created(self):
        # Sanity check that setUp populated the table with all 100 rows.
        self.assertEqual(Comp.objects.count(), 100)
What do you know about the religion of the Eastern Slavs? What was it called?
1. What do you know about the religion of the Eastern Slavs? What was it called? What Slavic gods can you name? 2. When did the congress of princes take place in Lubech? What decisions were made at this congress? 3. Why did the Mongol – Tatars manage to quickly conquer the lands of Rus? Which principality became the center of the unification of the Russian lands? What are the reasons for the rise of this principality? 4. What kind of battle is called the "Mamayev Massacre"? What is the meaning of this battle? 5. Who were the following historical figures: Kiy, Rurik, Olga, Vladimir Krasno Solnyshko, Vladimir Monomakh, A. Bogolyubsky, S. Radonezhsky, Batu, Ivan Kalita, Ivan III. Name one of the events associated with each of these names.
1. Until the end of the 10th century, the religion of the Eastern Slavs was pagan, although in the 11th century pagans still lived on the periphery (the Vyatichi, for example). The main gods were Khors, Yarilo, Dazhdbog, Svarog, Perun, and Veles. Stribog, for example, was the lord of the wind.
2. The congress was held in 1097, the decision was made as follows: "Everyone owns his own fiefdom."
3. The principality of Moscow became the center of the unification; the process began around 1300. The Mongol-Tatars quickly conquered the Russian lands because of the fragmentation of those lands and because no single prince had a strong and numerous army. The reasons for the rise of Moscow: its geographical location and the policy of its princes, especially Ivan Kalita.
4. The Battle of Kulikovo, and the significance is that it was the first major military success of the Russian principalities against the Tatars.
5. Kiy through Vladimir Monomakh were princes of Kiev, from the 5th century (Kiy) to the 12th century (Vladimir Monomakh). Andrei Bogolyubsky ravaged Kiev and moved the capital of his principality to Vladimir. Sergius of Radonezh founded the Trinity-Sergius Monastery. Batu was a Tatar khan who, in 1236-1242, ravaged the Russian principalities, Volga Bulgaria and some European countries (southern Poland, eastern Bohemia, Hungary). Ivan I and Ivan III were Moscow princes: the first made the principality the center of the unification of Russian lands (the metropolitan moved to Moscow), and the second got rid of the Horde dependence and annexed several principalities and lands (Novgorod, Vyatka, Yaroslavl, Rostov, Vyazma).
\section{Proof of Theorem 2}
\label{appendix:theorem2}
\setcounter{equation}{7}
\begin{lemma}
\label{le:property}
For any ${\bm{H}},{\bm{B}}\in\mathbb{R}^{N\times C}$ and $\alpha_1, \alpha_2\geq0$, we have:
\begin{eqnarray}
\label{eq:A}
d_{{\mathcal{M}}}(\hat{{\bm{A}}}{\bm{H}})&\leq& \lambda d_{{\mathcal{M}}}({\bm{H}}),\\
\label{eq:W}
d_{{\mathcal{M}}}({\bm{H}}{\bm{W}})&\leq& sd_{{\mathcal{M}}}({\bm{H}}),\\
\label{eq:relu}
d_{{\mathcal{M}}}(\sigma({\bm{H}}))&\leq& d_{{\mathcal{M}}}({\bm{H}}),\\
\label{eq:bias}
d_{{\mathcal{M}}}(\alpha_1{\bm{H}}+\alpha_2{\bm{B}})&\leq& \alpha_1 d_{{\mathcal{M}}}({\bm{H}})+\alpha_2 d_{{\mathcal{M}}}({\bm{B}}),
\end{eqnarray}
where $\sigma$ is ReLU function, and the denotations of $\lambda, s, {\mathcal{M}}$ follow Theorem 2.
\end{lemma}
\begin{proof}
Oono \& Suzuki~\cite{oono2019asymptotic} have proved the first three inequalities. Their proof is based on an eigen-decomposition with the Kronecker product, which is somewhat tedious. Here, we additionally discuss Ineq.~\ref{eq:bias}, and prove all four inequalities in a new and concise way.
Our proof is mainly based on the notion of projection~\cite{horn2012matrix} that returns the projected vector/matrix onto a subspace from any given vector/matrix. In terms of the subspace ${\mathcal{M}}$, the projection matrix is given by $\hat{{\bm{E}}}\hat{{\bm{E}}}^{\mathrm{T}}$, where $\hat{{\bm{E}}}$ is the normalized bases of the subspace ${\mathcal{M}}$ defined in Definition 1. We also define the orthogonal complement of $\hat{{\bm{E}}}$ as $\hat{{\bm{F}}}$. Then, the distance $d_{{\mathcal{M}}}({\bm{H}})$ of arbitrary ${\bm{H}}$ is derived as
\begin{eqnarray}
\label{eq:dM-1}
d_{{\mathcal{M}}}({\bm{H}}) &=& \|({\bm{I}}-\hat{{\bm{E}}}\hat{{\bm{E}}}^{\mathrm{T}}){\bm{H}}\|_F\\
\nonumber
&=& \|\hat{{\bm{F}}}\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}\|_F\\
\nonumber
&=& \sqrt{\text{tr}\left((\hat{{\bm{F}}}\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}})^{\mathrm{T}}(\hat{{\bm{F}}}\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}})\right)}\\
\nonumber
&=& \sqrt{\text{tr}\left((\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}})^{\mathrm{T}}(\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}})\right)} \\
\label{eq:dM-F}
&=& \|\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}\|_F.
\end{eqnarray}
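As a concrete illustration (a special case chosen only for intuition, not part of the original proof): for a single connected regular component, the normalized basis vector reduces to the uniform vector ${\bm{1}}/\sqrt{N}$, and
\begin{eqnarray}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}) &=& \|({\bm{I}}-\tfrac{1}{N}{\bm{1}}{\bm{1}}^{\mathrm{T}}){\bm{H}}\|_F = \|{\bm{H}}-{\bm{1}}\bar{{\bm{h}}}\|_F, \quad \bar{{\bm{h}}}=\tfrac{1}{N}{\bm{1}}^{\mathrm{T}}{\bm{H}},
\end{eqnarray}
i.e., $d_{{\mathcal{M}}}$ measures how far the rows of ${\bm{H}}$ deviate from their mean, and it vanishes exactly when all node features coincide (complete over-smoothing).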
With Eq.~\ref{eq:dM-F} at hand, we justify Ineq.~\ref{eq:A},~\ref{eq:W}, and~\ref{eq:bias} by
\begin{eqnarray}
\nonumber
d_{{\mathcal{M}}}(\hat{{\bm{A}}}{\bm{H}}) &=& \|\hat{{\bm{F}}}^{\mathrm{T}}(\hat{{\bm{A}}}{\bm{H}})\|_F \\
\nonumber
&=& \|\hat{{\bm{F}}}^{\mathrm{T}}(\hat{{\bm{E}}}\hat{{\bm{E}}}^{\mathrm{T}}+\hat{{\bm{F}}}\Lambda\hat{{\bm{F}}}^{\mathrm{T}}){\bm{H}}\|_F \\
\nonumber
&=& \|\Lambda\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}\|_F \\
\nonumber
&\leq& \sigma_{\text{max}}(\Lambda)\|\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}\|_F \\
&\leq& \lambda d_{{\mathcal{M}}}({\bm{H}}),
\end{eqnarray}
where we have applied the fact $\hat{{\bm{A}}}=\hat{{\bm{E}}}\hat{{\bm{E}}}^{\mathrm{T}}+\hat{{\bm{F}}}\Lambda\hat{{\bm{F}}}^{\mathrm{T}}$, and $\sigma_{\text{max}}(\cdot)$ returns the maximal sigular value of the input matrix.
\begin{eqnarray}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}{\bm{W}}) &=& \|\hat{{\bm{F}}}^{\mathrm{T}}({\bm{H}}{\bm{W}})\|_F \\
\nonumber
&=& \|(\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}){\bm{W}}\|_F \\
\nonumber
&\leq& \sigma_{\text{max}}({\bm{W}}) d_{{\mathcal{M}}}({\bm{H}}) \\
&=& s d_{{\mathcal{M}}}({\bm{H}}).
\end{eqnarray}
\begin{eqnarray}
\nonumber
d_{{\mathcal{M}}}(\alpha_1{\bm{H}}+\alpha_2{\bm{B}}) &=& \|\hat{{\bm{F}}}^{\mathrm{T}}(\alpha_1{\bm{H}}+\alpha_2{\bm{B}})\|_F \\
\nonumber
&\leq& \|\alpha_1\hat{{\bm{F}}}^{\mathrm{T}}{\bm{H}}\|_F +\|\alpha_2\hat{{\bm{F}}}^{\mathrm{T}}{\bm{B}}\|_F\\
&=& \alpha_1 d_{{\mathcal{M}}}({\bm{H}})+\alpha_2 d_{{\mathcal{M}}}({\bm{B}}).
\end{eqnarray}
Notice that the above inequation can be extended for the vector ${\bm{b}}\in\mathbb{R}^{1\times C}$ (such as the bias in GCN-b), and we define $d_{{\mathcal{M}}}({\bm{b}})=d_{{\mathcal{M}}}({\bm{B}})$ where ${\bm{B}}\in\mathbb{R}^{N\times C}$ is broadcasted from ${\bm{b}}$ in the first dimension.
We now prove Ineq.~\ref{eq:relu}. As $\hat{{\bm{E}}}$ is defined by the node indicators of connected components in Theorem~1, all elements in $\hat{{\bm{E}}}$ are non-negative. Moreover, since each node can only belong to one connected component, the non-zero entries of different columns ${\bm{e}}_i$ of $\hat{{\bm{E}}}$ are located in non-overlapping rows. This means that Eq.~\ref{eq:dM-1} can be further decomposed as
\begin{eqnarray}
\label{eq:dM-2}
d^2_{{\mathcal{M}}}({\bm{H}}) &=& \sum_{i=1}^M \|({\bm{I}}-{\bm{e}}_i{\bm{e}}_i^{\mathrm{T}}){\bm{H}}_i\|_F^2,
\end{eqnarray}
where the $j$-th row of ${\bm{H}}_i\in\mathbb{R}^{N\times C}$ is copied from ${\bm{H}}$ if $j$ belongs to component $i$ and is zero otherwise. Then,
\begin{eqnarray}
\label{eq:decompose}
\nonumber
d^2_{{\mathcal{M}}}({\bm{H}}) &=& \sum_{i=1}^M \|({\bm{I}}-{\bm{e}}_i{\bm{e}}_i^{\mathrm{T}}){\bm{H}}_i\|_F^2 \\
\nonumber
&=& \sum_{i=1}^M \text{tr}\left({\bm{H}}_i^{\mathrm{T}}({\bm{I}}-{\bm{e}}_i{\bm{e}}_i^{\mathrm{T}})^{2}{\bm{H}}_i \right)\\
\nonumber
&=& \sum_{i=1}^M \text{tr}\left({\bm{H}}_i^{\mathrm{T}}({\bm{I}}-{\bm{e}}_i{\bm{e}}_i^{\mathrm{T}}){\bm{H}}_i \right)\\
&=& \sum_{i=1}^M \sum_{c=1}^C {\bm{h}}_{ic}^{\mathrm{T}}{\bm{h}}_{ic}-({\bm{h}}_{ic}^{\mathrm{T}}{\bm{e}}_i)^2,
\end{eqnarray}
where ${\bm{h}}_{ic}\in\mathbb{R}^{N}$ denotes the $c$-th column of ${\bm{H}}_i$. We further denote the non-negative and negative elements of ${\bm{h}}_{ic}$ as ${\bm{h}}_{ic}^{+}$ and ${\bm{h}}_{ic}^{-}$. Similar to Eq.~\ref{eq:decompose}, we have
\begin{eqnarray}
\label{eq:decompose-s}
d_{{\mathcal{M}}}^2(\sigma({\bm{H}})) &=& \sum_{i=1}^M \sum_{c=1}^C ({\bm{h}}_{ic}^{+})^{\mathrm{T}}{\bm{h}}_{ic}^{+}-(({\bm{h}}_{ic}^{+})^{\mathrm{T}}{\bm{e}}_i)^2.
\end{eqnarray}
Then, we minus Eq.~\ref{eq:decompose-s} with Eq.~\ref{eq:decompose},
\begin{eqnarray}
\label{eq:relu-final}
\nonumber
&& d^2_{{\mathcal{M}}}({\bm{H}})-d^2_{{\mathcal{M}}}(\sigma({\bm{H}})) \\
\nonumber
&=& \sum_{i=1}^M \sum_{c=1}^C ({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{h}}_{ic}^{-}-(({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{e}}_i)^2\\
\nonumber
&& -2({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{e}}_i({\bm{h}}_{ic}^{+})^{\mathrm{T}}{\bm{e}}_i \quad ({\bm{h}}_{ic}^{-}<0, {\bm{h}}_{ic}^{+}\geq0, {\bm{e}}_i\geq 0) \\
\nonumber
&\geq& \sum_{i=1}^M \sum_{c=1}^C ({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{h}}_{ic}^{-}-(({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{e}}_i)^2 \\
\nonumber
&\geq& \sum_{i=1}^M \sum_{c=1}^C ({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{h}}_{ic}^{-} - (({\bm{h}}_{ic}^{-})^{\mathrm{T}}{\bm{h}}_{ic}^{-})({\bm{e}}_i^{\mathrm{T}}{\bm{e}}_i),\\
&=& 0.
\end{eqnarray}
where the last inequation employs the Cauchy–Schwarz inequality. Hence, we have proved Ineq.~\ref{eq:relu}.
\end{proof}
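Ineq.~\ref{eq:relu} can also be spot-checked numerically. The following Python sketch (our own illustration; it assumes a single component whose non-negative unit basis vector is uniform) verifies that applying the ReLU never increases the projection residual:

```python
import math
import random

random.seed(0)
n = 5
e = [1 / math.sqrt(n)] * n  # non-negative unit basis vector of one component


def residual(h):
    # || (I - e e^T) h || = || h - (e . h) e ||
    c = sum(ei * hi for ei, hi in zip(e, h))
    return math.sqrt(sum((hi - c * ei) ** 2 for ei, hi in zip(e, h)))


for _ in range(1000):
    h = [random.gauss(0, 1) for _ in range(n)]
    relu_h = [max(x, 0.0) for x in h]
    assert residual(relu_h) <= residual(h) + 1e-9
```

No assertion fires, matching the inequality $d_{{\mathcal{M}}}(\sigma({\bm{H}}))\leq d_{{\mathcal{M}}}({\bm{H}})$ proved above.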
Based on Lemma~\ref{le:property}, we can immediately justify Theorem~2 as follows.
For GCN in Eq.~1, we apply Ineq.~\ref{eq:A}, \ref{eq:W} and \ref{eq:relu},
\begin{eqnarray}
\label{eq:dm-gcn}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}_{l+1}) &\leq& d_{{\mathcal{M}}}(\hat{{\bm{A}}}{\bm{H}}_l{\bm{W}}_l) \\
\nonumber
&\leq& \lambda d_{{\mathcal{M}}}({\bm{H}}_l{\bm{W}}_l) \\
&\leq& s\lambda d_{{\mathcal{M}}}({\bm{H}}_l).
\end{eqnarray}
For GCN-b in Eq.~2, we apply Ineq.~\ref{eq:A}-~\ref{eq:bias},
\begin{eqnarray}
\label{eq:dm-gcn-b}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}_{l+1}) &\leq& s\lambda d_{{\mathcal{M}}}({\bm{H}}_l)+d_{{\mathcal{M}}}({\bm{b}}_l)
\end{eqnarray}
\begin{eqnarray}
\nonumber
\Rightarrow & & d_{{\mathcal{M}}}({\bm{H}}_{l+1})-\frac{d_{{\mathcal{M}}}({\bm{b}}_l)}{1-s\lambda} \\
& \leq & s\lambda \left(d_{{\mathcal{M}}}({\bm{H}}_l)-\frac{d_{{\mathcal{M}}}({\bm{b}}_l)}{1-s\lambda}\right).
\end{eqnarray}
For ResGCN in Eq.~3, we apply Ineq.~\ref{eq:A}-~\ref{eq:bias},
\begin{eqnarray}
\label{eq:dm-resgcn}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}_{l+1}) &\leq& s\lambda d_{{\mathcal{M}}}({\bm{H}}_l)+\alpha d_{{\mathcal{M}}}({\bm{H}}_l) \\
&=& (s\lambda+\alpha) d_{{\mathcal{M}}}({\bm{H}}_l).
\end{eqnarray}
For APPNP in Eq.~4, we apply Ineq.~\ref{eq:A} and \ref{eq:bias},
\begin{eqnarray}
\label{eq:dm-appnp}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}_{l+1}) &\leq& (1-\beta)\lambda d_{{\mathcal{M}}}({\bm{H}}_l)+\beta d_{{\mathcal{M}}}({\bm{H}}_0)
\end{eqnarray}
\begin{eqnarray}
\nonumber
\Rightarrow && d_{{\mathcal{M}}}({\bm{H}}_{l+1})-\frac{\beta d_{{\mathcal{M}}}({\bm{H}}_0)}{1-(1-\beta)\lambda} \\
& \leq & (1-\beta)\lambda \left(d_{{\mathcal{M}}}({\bm{H}}_l)-\frac{\beta d_{{\mathcal{M}}}({\bm{H}}_0)}{1-(1-\beta)\lambda}\right).
\end{eqnarray}
Clearly, Ineq.~\ref{eq:dm-gcn}-\ref{eq:dm-appnp} imply the general form in Theorem~2.
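For intuition (with hypothetical values, not taken from any experiment): if $s=0.9$ and $\lambda=0.95$, iterating the GCN case of Ineq.~\ref{eq:dm-gcn} gives
\begin{eqnarray}
\nonumber
d_{{\mathcal{M}}}({\bm{H}}_l) &\leq& (s\lambda)^l d_{{\mathcal{M}}}({\bm{H}}_0) = 0.855^l\, d_{{\mathcal{M}}}({\bm{H}}_0),
\end{eqnarray}
so the distance to ${\mathcal{M}}$ shrinks by a factor of roughly $10^{-7}$ after $l=100$ layers, which is the exponential collapse onto ${\mathcal{M}}$ (over-smoothing) that Theorem~2 formalizes whenever $s\lambda<1$.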
\section{Proof of Theorem~3}
\label{appendix:theorem3}
Our proof basically explores the relationship between the Laplacian matrices of the original version and the one after DropEdge. We first provide the related notations for better readability.
\textbf{Notations.}
We reuse the aforementioned definitions of the adjacency, the Laplacian, and the degree matrix as ${\bm{A}}$, ${\bm{L}}$, and ${\bm{D}}$, respectively, and define these terms after DropEdge as ${\bm{A}}_{\text{drop}}$, ${\bm{L}}_{\text{drop}}$, and ${\bm{D}}_{\text{drop}}$. We denote the re-normalized (adding self-loops) of each above symbol in a hatted form, such as $\hat{{\bm{A}}}$ denoting the normalized augmented adjacency.
The expected adjacency matrix ${\bm{A}}_{\text{drop}}$ by DropEdge (Eq.~6) is given by ${\bm{A}}_{\text{drop}}=(1-p){\bm{A}}$. Then after re-normalization,
\begin{eqnarray}
\label{eq:re-norm-drop}
\nonumber
\hat{{\bm{A}}}_{\text{drop}}&=&({\bm{D}}_{\text{drop}}+{\bm{I}})^{-\frac{1}{2}}({\bm{A}}_{\text{drop}}+{\bm{I}})({\bm{D}}_{\text{drop}}+{\bm{I}})^{-\frac{1}{2}} \\
\nonumber
&=& ({\bm{D}}+\frac{{\bm{I}}}{(1-p)})^{-\frac{1}{2}}({\bm{A}}+\frac{{\bm{I}}}{(1-p)})({\bm{D}}+\frac{{\bm{I}}}{(1-p)})^{-\frac{1}{2}} \\
&:=& {\bm{D}}_p^{-\frac{1}{2}}{\bm{A}}_p{\bm{D}}_p^{-\frac{1}{2}},
\end{eqnarray}
where we define ${\bm{A}}_p={\bm{A}}+\frac{{\bm{I}}}{(1-p)}$ and its degree matrix ${\bm{D}}_p={\bm{D}}+\frac{{\bm{I}}}{(1-p)}$. Eq.~\ref{eq:re-norm-drop} is indeed a general form of the re-normalization trick proposed by~\cite{Kipf2017}, where we obtain $\hat{{\bm{A}}}_{\text{drop}}=\hat{{\bm{A}}}$ when $p=0$ and assign more weights to the self-loops when $p>0$. For consistent denotation in Eq.~\ref{eq:re-norm-drop}, we re-specify $\hat{{\bm{A}}}_{\text{drop}}$ as $\hat{{\bm{A}}}_p$ below.
We can easily check the correlation between $\hat{{\bm{L}}}_p$ and $\hat{{\bm{L}}}$ by:
\begin{eqnarray}
\label{eq:laplacian-drop}
\nonumber
\hat{{\bm{L}}}_p&:=&{\bm{I}}-\hat{{\bm{A}}}_p \\
\nonumber
&=& {\bm{D}}_p^{-\frac{1}{2}}({\bm{D}}_p-{\bm{A}}_p){\bm{D}}_p^{-\frac{1}{2}} \\
\nonumber
&=& {\bm{D}}_p^{-\frac{1}{2}}({\bm{D}}-{\bm{A}}){\bm{D}}_p^{-\frac{1}{2}} \\
\nonumber
&=& {\bm{D}}_p^{-\frac{1}{2}}\hat{{\bm{D}}}^{\frac{1}{2}}({\bm{I}}-\hat{{\bm{A}}})\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}} \\
&=& {\bm{D}}_p^{-\frac{1}{2}}\hat{{\bm{D}}}^{\frac{1}{2}}\hat{{\bm{L}}}\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}.
\end{eqnarray}
We now denote the eigenvalue of $\hat{{\bm{A}}}_p$ as $\lambda(p)$, and its associated eigenvector as ${\bm{x}}_p$.
We immediately have
\begin{eqnarray}
\label{eq:lambda-bound}
\nonumber
\lambda(p) &=& |\frac{{\bm{x}}_p^{\mathrm{T}}\hat{{\bm{A}}}_p{\bm{x}}_p}{\|{\bm{x}}_p\|^2}| \\
\nonumber
&=& |1 - \frac{{\bm{x}}_p^{\mathrm{T}}\hat{{\bm{L}}}_p{\bm{x}}_p}{\|{\bm{x}}_p\|^2}| \\
\nonumber
&=& |1 - \frac{{\bm{x}}_p^{\mathrm{T}}{\bm{D}}_p^{-\frac{1}{2}}\hat{{\bm{D}}}^{\frac{1}{2}}\hat{{\bm{L}}}\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}{\bm{x}}_p}{\|{\bm{x}}_p\|^2}| \quad(\text{via Eq.~\ref{eq:laplacian-drop}})\\
&=& |1- \frac{{\bm{y}}_p^{\mathrm{T}}\hat{{\bm{L}}}{\bm{y}}_p}{\|{\bm{y}}_p\|^2}\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}|,
\end{eqnarray}
where we set ${\bm{y}}_p=\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}{\bm{x}}_p$. Let $a=\max_{{\bm{y}}_p=\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}{\bm{x}}_p}|1-\frac{{\bm{y}}_p^{\mathrm{T}}\hat{{\bm{L}}}{\bm{y}}_p}{\|{\bm{y}}_p\|^2}|=\max_{{\bm{y}}_p=\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}{\bm{x}}_p}|\frac{{\bm{y}}_p^{\mathrm{T}}\hat{{\bm{A}}}{\bm{y}}_p}{\|{\bm{y}}_p\|^2}|$. Clearly, $0\leq a\leq 1$, $0\leq \frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}=\frac{\|\hat{{\bm{D}}}^{\frac{1}{2}}{\bm{D}}_p^{-\frac{1}{2}}{\bm{x}}_p\|^2}{\|{\bm{x}}_p\|^2} \leq 1$.
Hence, according to Eq.~\ref{eq:lambda-bound}, we arrive at
\begin{eqnarray}
\label{eq:final-bound}
\nonumber
\lambda(p) &=& |1-\frac{{\bm{y}}^{\mathrm{T}}\hat{{\bm{L}}}{\bm{y}}}{\|{\bm{y}}\|^2}\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}| \\
\nonumber
&=& |1-\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}+(1-\frac{{\bm{y}}^{\mathrm{T}}\hat{{\bm{L}}}{\bm{y}}}{\|{\bm{y}}\|^2})\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}| \\
\nonumber
&\leq& 1-\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}+a\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2} \quad( |x+y|\leq|x|+|y|) \\
\nonumber
&=& 1- (1-a)\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2} \\
\nonumber
&\leq& 1- (1-a)\min_{d_i}\frac{d_i+1}{d_i+1/(1-p)} \\
&:=& \gamma(p).
\end{eqnarray}
On the contrary,
\begin{eqnarray}
\nonumber
\lambda(p) &=& |1-\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}+(1-\frac{{\bm{y}}^{\mathrm{T}}\hat{{\bm{L}}}{\bm{y}}}{\|{\bm{y}}\|^2})\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}| \\
\nonumber
&\geq& 1-\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2}-a\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2} \quad( |x+y|\geq|x|-|y|) \\
\nonumber
&=& 1-(1+a)\frac{\|{\bm{y}}_p\|^2}{\|{\bm{x}}_p\|^2} \\
\nonumber
&\geq& 1- (1+a)\max_{d_i}\frac{d_i+1}{d_i+1/(1-p)} \\
&:=& \mu(p).
\end{eqnarray}
Therefore, we have
\begin{eqnarray}
\label{eq:bound}
\mu(p)\leq\lambda\leq\gamma(p),
\end{eqnarray}
where both $\mu(p)$ and $\gamma(p)$ monotonically increase in terms of $p$; the gap $\gamma(p)-\mu(p)=(1+a)\max_{d_i}\frac{d_i+1}{d_i+1/(1-p)}-(1-a)\min_{d_i}\frac{d_i+1}{d_i+1/(1-p)}$ shrinks to zero as $p\rightarrow 1$: when $p=1$ (dropping all edges), $\mu(1)=\lambda(1)=\gamma(1)=1$.
Eq.~\ref{eq:bound} indicates that dropping edges probably increases the value of $\lambda(p)$ if $p$ is approaching 1. Actually, when $p$ is small, we have $a=\lim_{p\rightarrow 0}\max_{{\bm{y}}_p}|\frac{{\bm{y}}_p^{\mathrm{T}}\hat{{\bm{A}}}{\bm{y}}_p}{\|{\bm{y}}_p\|^2}|=\lambda(0)$. Hence for small $\lambda$ (and nearly uniform node degrees, so that the max and min ratios above are close), the gap between $\gamma(p)$ and $\mu(p)$ is also small, and the monotonically-increasing property of $\lambda(p)$ w.r.t. $p$ holds more plausibly.
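The monotonicity of the two bounds can likewise be spot-checked. A small Python sketch (the value of $a$ and the degree list are arbitrary illustrative choices, not from any experiment):

```python
a = 0.6                  # constant in [0, 1] from the derivation (illustrative value)
degrees = [1, 2, 5, 10]  # arbitrary node degrees


def gamma(p):
    # gamma(p) = 1 - (1 - a) * min_i (d_i + 1) / (d_i + 1 / (1 - p))
    return 1 - (1 - a) * min((d + 1) / (d + 1 / (1 - p)) for d in degrees)


def mu(p):
    # mu(p) = 1 - (1 + a) * max_i (d_i + 1) / (d_i + 1 / (1 - p))
    return 1 - (1 + a) * max((d + 1) / (d + 1 / (1 - p)) for d in degrees)


ps = [i / 100 for i in range(100)]
gs = [gamma(p) for p in ps]
ms = [mu(p) for p in ps]
assert all(x <= y + 1e-12 for x, y in zip(gs, gs[1:]))  # gamma increases with p
assert all(x <= y + 1e-12 for x, y in zip(ms, ms[1:]))  # mu increases with p
assert all(m <= g for m, g in zip(ms, gs))              # mu(p) <= gamma(p)
```

Both bounds indeed increase with the drop rate $p$, consistent with the discussion above.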
\section{Proof of Theorem~4}
The proof is straightforward: removing an edge never merges two connected components, and it splits a component into two whenever the removed edge is a bridge. Hence the number of connected components can only increase (or stay the same) under DropEdge.
\section{Models and Backbones}
\label{appendix:models}
\textbf{Backbones}
We employ one input GCL and one output GCL in ResGCN, APPNP, and JKNet. Therefore, ResGCN, APPNP, and JKNet have at least 3 layers.
All backbones are implemented in PyTorch~\cite{paszke2017automatic}.
\textbf{Self Feature Modeling}
We also implement a variant of graph convolution layer with self feature modeling \cite{fout2017protein}:
\begin{align}
\mathbf{H}_{l+1} = \sigma\left(\hat{\mathbf{A}}\mathbf{H}_{l}\mathbf{W}_{l} + \mathbf{H}_{l}\mathbf{W}_{{\text{self}}_{l}}\right),
\end{align}
where $\mathbf{W}_{{\text{self}}_{l}}\in \mathbb{R}^{C_l\times C_{l+1}}$ so that both terms share the same output dimension.
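A minimal numpy sketch of this layer (hypothetical; the actual backbones are implemented as PyTorch modules):

```python
import numpy as np

def gcl_self_feature(A_hat, H, W, W_self):
    """One graph convolution layer with self feature modeling:
    H_{l+1} = ReLU(A_hat @ H @ W + H @ W_self)."""
    return np.maximum(A_hat @ H @ W + H @ W_self, 0.0)
```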
\begin{table}[h!]
\centering
\caption{Hyper-parameter Description}
\label{tab:hyper-desc}
\small
\begin{tabular}{l|l}
\hline
Hyper-parameter & Description \\
\hline
lr & learning rate \\
weight-decay & L2 regulation weight \\
sampling-percent & edge preserving percent ($1-p$) \\
dropout & dropout rate \\
normalization & the propagation models \cite{Kipf2017} \\
withloop & using self feature modeling \\
withbn & using batch normalization \\
\hline
\end{tabular}%
\end{table}%
\begin{table*}[h!]
\centering
\caption{The normalization / propagation models}
\vspace{-2ex}
\scriptsize
\begin{tabular}{l|l|l}
\hline
Description & Notation & ${\bm{A}}_{\text{drop}}$ \\
\hline
First-order GCN & FirstOrderGCN & ${\bm{I}} + {\bm{D}}^{-1/2}{\bm{A}}{\bm{D}}^{-1/2}$ \\
Augmented Normalized Adjacency & AugNormAdj & $({\bm{D}} + {\bm{I}})^{-1/2} ( {\bm{A}} + {\bm{I}} ) ({\bm{D}} + {\bm{I}})^{-1/2}$ \\
Augmented Normalized Adjacency with Self-loop & BingGeNormAdj & ${\bm{I}} + ({\bm{D}} + {\bm{I}})^{-1/2} ({\bm{A}} + {\bm{I}}) ({\bm{D}} + {\bm{I}})^{-1/2}$ \\
Augmented Random Walk & AugRWalk & $({\bm{D}} + {\bm{I}})^{-1}({\bm{A}} + {\bm{I}})$\\
\hline
\end{tabular}%
\label{tab:normalization}%
\end{table*}%
\begin{table*}[h!]
\centering
\scriptsize
\caption{The hyper-parameters of best accuracy (\%) for each backbone on all datasets.}
\small
\begin{tabular}{cl|r|r|p{0.6\textwidth}}
\hline
\multicolumn{1}{l}{Dataset} & Backbone & \multicolumn{1}{l|}{nlayers} & \multicolumn{1}{l|}{Acc.} & Hyper-parameters \\
\hline
\multirow{6}[10]{*}{Cora} & GCN & 4 & 86.60 & lr:0.0005, weight-decay:1e-5, sampling-percent:0.7, dropout:0.7, normalization:FirstOrderGCN \\
\cline{2-5} & GCN-b & 4 & 87.60 & lr:0.010, weight-decay:5e-3, sampling-percent:0.7, dropout:0.8, normalization:FirstOrderGCN \\
\cline{2-5} & ResGCN & 4 & 87.00 & lr:0.001, weight-decay:1e-5, sampling-percent:0.1, dropout:0.5, normalization:FirstOrderGCN \\
\cline{2-5} & JKNet & 16 & 88.00 & lr:0.008, weight-decay:5e-4, sampling-percent:0.2, dropout:0.8, normalization:AugNormAdj \\
\cline{2-5} & APPNP & 64 & 89.10 & lr:0.006, weight-decay:5e-5, sampling-percent:0.4, dropout:0.1, normalization:AugRWalk, alpha:0.2\\
\hline
\multirow{6}[10]{*}{Citeseer} & GCN & 4 & 79.00 & lr:0.01, weight-decay:5e-4,sampling-percent:0.1, dropout:0.8, normalization:AugRWalk, withloop, withbn \\
\cline{2-5} & GCN-b & 4 & 79.20 & lr:0.009, weight-decay:1e-3, sampling-percent:0.05, dropout:0.8,
normalization:BingGeNormAdj, withloop, withbn \\
\cline{2-5} & ResGCN & 16 & 79.40 & lr:0.001, weight-decay:5e-3, sampling-percent:0.5, dropout:0.3, normalization:BingGeNormAdj, withloop \\
\cline{2-5} & JKNet & 8 & 80.20 & lr:0.004, weight-decay:5e-5, sampling-percent:0.6, dropout:0.3, normalization:AugNormAdj, withloop \\
\cline{2-5} & APPNP & 64 & 81.30 & lr:0.010, weight-decay:1e-5, sampling-percent:0.8, dropout:0.8, normalization:AugNormAdj, alpha:0.4 \\
\hline
\multirow{5}[10]{*}{Pubmed} & GCN & 8 & 91.00 & lr:0.006, weight-decay:5e-4,sampling-percent:0.3, dropout:0.8, normalization: AugRWalk, withloop, withbn \\
\cline{2-5} & GCN-b & 4 & 91.30 & lr:0.010, weight-decay:1e-3, sampling-percent:0.3, dropout:0.5, normalization:BingGeNormAdj, withloop, withbn \\
\cline{2-5} & ResGCN & 32 & 91.10 & lr:0.003, weight-decay:5e-5, sampling-percent:0.7, dropout:0.8, normalization:AugNormAdj, withloop, withbn \\
\cline{2-5} & JKNet & 64 & 91.60 & lr:0.005, weight-decay:1e-4, sampling-percent:0.5, dropout:0.8, normalization:AugNormAdj, withloop,withbn \\
\cline{2-5} & APPNP & 4 & 90.70 & lr:0.008, weight-decay:1e-4,sampling-percent:0.8, dropout:0.1, normalization:FirstOrderGCN, alpha:0.4 \\
\hline
\multirow{5}[10]{*}{Reddit} & GCN & 8 & 96.57 & lr:0.005, weight-decay:1e-5, sampling-percent:0.7, dropout:0.2, normalization:FirstOrderGCN, withloop, withbn \\
\cline{2-5} & GCN-b & 4 & 96.71 & lr:0.005, weight-decay:1e-4, sampling-percent:0.6, dropout:0.5, normalization:AugRWalk, withloop \\
\cline{2-5} & ResGCN & 16 & 96.48 & lr:0.009, weight-decay:1e-5, sampling-percent:0.2, dropout:0.5, normalization:BingGeNormAdj, withbn \\
\cline{2-5} & JKNet & 8 & 97.02 & lr:0.010, weight-decay:5e-5, sampling-percent:0.6, dropout:0.5, normalization:BingGeNormAdj, withloop,withbn \\
\cline{2-5} & APPNP & 8 & 95.85 & lr:0.004, weight-decay:1e-5, sampling-percent:0.5, dropout:0.1, normalization:AugRWalk, alpha:0.1 \\
\hline
\end{tabular}%
\label{tab:hyperparameterdetails}%
\end{table*}%
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{P}lenty of data are in the form of graph structures, where a certain number of nodes are irregularly related via edges. Examples include social networks~\cite{Kipf2017}, knowledge bases~\cite{ren2019query2box}, molecules~\cite{duvenaud2015convolutional}, scene graphs~\cite{xu2017scene}, etc.
Learning on graphs is crucial, not only for the analysis of the graph data themselves, but also for general data forms as graphs deliver strong inductive biases to enable relational reasoning and combinatorial generalization~\cite{battaglia2018relational}.
Recently, Graph Neural Network (GNN)~\cite{Wu2019} has become the most desired tool for the purpose of graph learning. The initial motivation of inventing GNNs is to generalize the success of Neural Networks (NNs) from tabular/grid data to the graph domain.
The key spirit in GNN is that it exploits recursive neighborhood aggregation function to combine the feature vector from a node as well as its neighborhoods until a fixed number of iterations $d$ (\emph{a.k.a.} network depth). Given an appropriately defined aggregation function, such message passing is proved to capture the structure around each node within its $d$-hop neighborhoods, as powerful as the Weisfeiler-Lehman (WL) graph isomorphism test~\cite{weisfeiler1968wltest} that is known to distinguish a broad class of graphs~\cite{xu2018powerful}. In this paper, we are mainly concerned with Graph Convolutional Networks (GCNs)~\cite{bruna2013spectral,Kipf2017,chen2018fastgcn,hamilton2017inductive,Klicpera2019,Huang2018,xu2018representation}, a central family of GNN that extends the convolution operation from images to graphs. GCNs have been employed successfully for the task of node classification which is the main focus of this paper.
As is already well-known in vision, the depth of Convolutional Neural Network (CNN) plays a crucial role in performance.
Inspired by the success of CNNs, one might expect to endow GCNs with more expressivity to characterize richer neighborhood topology by stacking more layers.
Another reason of developing deep GCN stems from that characterizing graph topology requires sufficiently deep architectures. The works by~\cite{dehmamy2019understanding} and~\cite{loukas2019graph} have shown that GCNs are unable to learn a graph moment or estimate certain graph properties if the depth is restricted.
However, the expectation of formulating deep and expressive GCNs is \textbf{not} easy to meet. This is because deep GCNs actually suffer from a loss of expressive power mainly caused by \emph{over-smoothing}~\cite{Li2018}.
An intuitive notion of over-smoothing is that the mixture of neighborhood features by graph convolution drives the output of an infinitely-deep GCN towards a space that contains little information to distinguish between nodes. From the perspective of training, over-smoothing erases important discriminative information from the input, leading to poor trainability. We have conducted an example experiment in Figure~\ref{fig.compare}, in which the training of a deep GCN is observed to converge poorly.
Several attempts have been made to explore how to build deep GCNs~\cite{Kipf2017,Xu2018,Klicpera2019,li2019can}. Nevertheless, none of them delivers a sufficiently expressive architecture, and whether or not these architectures are theoretically guaranteed to prevent (or at least relieve) over-smoothing remains unclear. Li \emph{et al.}~\cite{Li2018} initially linearized GCN as Laplacian smoothing and found that the features of vertices within each connected component of the graph converge to the same values.
Putting a step forward from~\cite{Li2018}, Oono \& Suzuki~\cite{oono2019asymptotic} took both the non-linearity (ReLU function) and convolution filters into account, and proved that GCN converges to a subspace formulated with the bases of node degrees; however, this result is limited to generic GCN~\cite{Kipf2017} without discussion of other architectures.
Hence, it remains open to answer, \emph{why and when, in theory, does over-smoothing happen for a general family of GCNs?} and \emph{can we, to what degree, derive a general mechanism to address over-smoothing and recover the expressive capability of deep GCNs?}
\begin{figure}[t!]
\centering
\includegraphics [width=0.24\textwidth]{deepcompare_deep_muti_train.pdf}
\includegraphics [width=0.24\textwidth]{deepcompare_deep_muti_val.pdf}
\vskip -0.1in
\caption{Performance of GCNs on Cora.
We implement 4-layer and 8-layer GCNs with and without DropEdge. GCN-4 gets stuck in the over-fitting issue, attaining low training error but high validation error; the training of GCN-8 fails to converge satisfactorily due to over-smoothing. By applying DropEdge, both GCN-4 and GCN-8 work well for both training and validation. Note that GCNs here have no bias.}
\label{fig.compare}
\end{figure}
To this end, we first revisit the concept of over-smoothing in a general way. Besides generic GCN~\cite{Kipf2017}, we explore GCN with bias~\cite{dehmamy2019understanding} that is usually implemented in practice, as well as ResGCN~\cite{Kipf2017} and APPNP~\cite{Klicpera2019} that refine GCN by involving skip connections. We mathematically prove that, if we go with an infinite number of layers, all these models will converge to a \emph{cuboid} that expands the subspace proposed by~\cite{oono2019asymptotic} up to a certain radius $r$. This theoretical finding is interesting and refreshes the current results by~\cite{Li2018, oono2019asymptotic} in several aspects. First, converging to the subspace implies converging to the cuboid, but not vice versa. Second, unlike existing methods~\cite{Li2018, oono2019asymptotic} that focus on GCN without bias, our conclusion shows that adding the bias leads to a non-zero radius, which, interestingly, will somehow impede over-smoothing. Finally, our theorem suggests that ResGCN slows down over-smoothing and APPNP always maintains certain input information, both of which are consistent with our instinctive understandings, yet not rigorously explored before.
Over-smoothing towards a cuboid rather than a subspace, albeit not as bad, still restricts the expressive power and needs to be alleviated.
In doing so, we propose DropEdge. The term ``DropEdge'' refers to randomly dropping a certain rate of edges of the input graph at each training iteration. In its particular form, each edge is independently dropped with a fixed probability $p$, with $p$ being a hyper-parameter determined by validation. There are several benefits in applying DropEdge to the GCN training (see the experimental improvements by DropEdge in Fig.~\ref{fig.compare}). First, DropEdge can be treated as a message passing reducer. In GCNs, the message passing between adjacent nodes is conducted along edge paths. Removing certain edges makes node connections sparser, hence avoiding over-smoothing to some extent when GCN goes very deep. Indeed, as we will show theoretically in this paper, DropEdge either slows down the degeneration speed of over-smoothing or reduces information loss.
Another merit of DropEdge is that it can also be considered as a data augmentation technique. By DropEdge, we are actually generating different random deformed copies of the original graph; as such, we augment the randomness and the diversity of the input data, and are thus better able to prevent over-fitting. It is analogous to performing random rotation, cropping, or flipping for robust CNN training in the context of images.
Note that DropEdge is related to random graph generation methods (such as the Erd\H{o}s--R\'enyi (ER) model~\cite{erdos1960evolution}) and sparse graph learning approaches (such as GLASSO~\cite{friedman2008sparse}) in terms of edge modification. Nevertheless, DropEdge removes a different subset of edges at each training iteration according to the uniform distribution, while the ER model or GLASSO employs the same graph for all training iterations once the random/sparser graph is created.
This is why DropEdge is able to alleviate over-smoothing at each training iteration while, over the whole training phase, still preserving the full information of the underlying graph kernel in a probabilistic sense.
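As a concrete illustration, one per-iteration sample of DropEdge on a dense adjacency matrix can be sketched as follows (a simplified numpy version; in practice the sampling operates on sparse edge lists):

```python
import numpy as np

def drop_edge(A, p, rng):
    """Return the perturbed adjacency for one training iteration:
    each undirected edge is independently dropped with probability p."""
    keep = np.triu(rng.random(A.shape) >= p, k=1)  # sample each upper-triangle edge once
    keep = keep | keep.T                           # mirror the mask to preserve symmetry
    return A * keep
```

A fresh call with a fresh random state at every iteration yields a different deformed copy of the graph, while in expectation each edge is retained with weight $1-p$.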
We provide a complete set of experiments to verify our conclusions related to our rethinking on over-smoothing and the efficacy of DropEdge on four benchmarks of node classification. In particular, our DropEdge---as a flexible and general technique---is able to enhance the performance of various popular backbone networks, including GCN~\cite{Kipf2017}, ResGCN~\cite{li2019can}, JKNet~\cite{Xu2018}, and APPNP~\cite{Klicpera2019}. It demonstrates that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs.
Complete details are provided in \textsection~\ref{sec:exps}.
To sum up, our contributions are as follows.
\begin{itemize}
\item We study the asymptotic behavior for the output of general deep GCNs (generic GCN, GCN with bias, ResGCN, and APPNP) {with the involvement of non-linearity}. We theoretically show that these GCNs will converge to a cuboid with infinite layers stacked.
\item We propose DropEdge, a novel technique that uniformly drops a certain number of edges during each training iteration, which is proved to relieve the over-smoothing issue of general deep GCNs in terms of slowing down the convergence speed or decreasing the information loss.
\item Experiments on four node classification benchmarks support the rationality of our proposed theorems and indicate that DropEdge is able to enhance a variety of GCNs in both shallow and deep variants.
\end{itemize}
\section{Related Work}
\textbf{GCNs.}
The first prominent research on GCNs is presented in \cite{bruna2013spectral}, which develops graph convolution based on both the spectral and spatial views. Later, \cite{Kipf2017,defferrard2016convolutional,henaff2015deep,Li2018a,Levie2017} apply improvements, extensions, and approximations to spectral-based GCNs. To address the scalability issue of spectral-based GCNs on large graphs, spatial-based GCNs have been rapidly developed~\cite{hamilton2017inductive,Monti2017,niepert2016learning,Gao2018}. Recently, several sampling-based methods have been proposed for fast graph representation learning, including the node-wise sampling methods~\cite{hamilton2017inductive}, the layer-wise approaches~\cite{chen2018fastgcn,Huang2018}, and the graph-wise methods~\cite{chiang2019clustergcn,zeng2020graphsaint}. Specifically, GAT~\cite{DBLP:journals/corr/abs-1710-10903} has discussed applying dropout on edge attentions. While this actually amounts to conducting DropEdge before the attention computation, its relation to over-smoothing is never explored in~\cite{DBLP:journals/corr/abs-1710-10903}. In our paper, by contrast, we formally present the formulation of DropEdge and provide rigorous theoretical justification of its benefit in alleviating over-smoothing.
\textbf{Deep GCNs.}
Despite the fruitful progress, most previous works only focus on shallow GCNs, while the deeper extension is seldom discussed. The attempt to build deep GCNs dates back to the GCN paper~\cite{Kipf2017}, where the residual mechanism is applied; unexpectedly, as shown in their experiments, residual GCNs still perform worse when the depth is 3 and beyond.
The authors in~\cite{Li2018} first point out that the main difficulty in constructing deep networks lies in over-smoothing, but unfortunately, they do not propose any method to address it. The follow-up study~\cite{Klicpera2019} relieves over-smoothing by using personalized PageRank that additionally involves the rooted node in the message passing loop.
JKNet~\cite{Xu2018} employs dense connections for multi-hop message passing which is compatible with DropEdge for formulating deep GCNs. Recently, DAGNN~\cite{liu2020towards} refines the architecture of GCN by first decoupling the representation transformation from propagation and then utilizing an adaptive adjustment mechanism to balance the information from local and global neighborhoods for each node.
The authors in~\cite{oono2019asymptotic} theoretically prove that the node features of deep GCNs will converge to a subspace and incur information loss. It generalizes the conclusion in~\cite{Li2018} by considering the ReLU function and convolution filters. In this paper, we investigate the over-smoothing behaviors of a broader class of GCNs, and show that general GCNs will converge to a cuboid rather than a subspace. Chen et al.~\cite{chen2020measuring} develop a measurement of over-smoothing based on the conclusion of~\cite{Li2018} and propose to relieve over-smoothing by a supervised optimization-based method, while our DropEdge is proved to alleviate over-smoothing for general GCNs by mere random edge sampling, which is simple yet effective. Other recent studies to prevent over-smoothing resort to activation normalization~\cite{zhao2019pairnorm} and doubly residual connections~\cite{chensimple}, which are complementary to our DropEdge.
A recent method~\cite{li2019can} has incorporated residual layers, dense connections, and dilated convolutions into GCNs to facilitate the development of deep architectures, but over-smoothing is not discussed there.
{
\textbf{Other related works.}
In DropEdge, the idea of removing edges from the input graph is similar to but distinct from the sparse graph learning methods~\cite{friedman2008sparse,egilmez2016graph}. By DropEdge, we are NOT implying that the edges of the underlying graph kernel are uninformative; instead, DropEdge still preserves this kind of information, as it acts \textbf{in a random yet unbiased way}. We will provide more explanations in \textsection~\ref{sec:methodology} and evaluations in \textsection~\ref{sec:glasso}. The well-known Perron-Frobenius Theorem (PFT)~\cite{pillai2005perron} and spectral graph theory~\cite{chung1997spectral} have characterized the convergence behavior of random walks on graphs. However, these results are not directly applicable to our case, as the adjacency here is augmented with self-loops, and the model is more complicated than a random walk, \emph{e.g.} with the involvement of non-linearity.
}
\section{Preliminaries}
\subsection{Graph denotations and the spectral analysis.}
{
Let ${\mathcal{G}}=({\mathbb{V}}, \mathcal{E})$ represent the input graph of size $N$ with nodes $v_i\in{\mathbb{V}}$ and edges $(v_i, v_j)\in\mathcal{E}$. We denote by ${\bm{X}}=\{{\bm{x}}_1,\cdots,{\bm{x}}_N\}\in\mathbb{R}^{N\times C}$ the node features, and by ${\bm{A}}\in\mathbb{R}^{N\times N}$ the adjacency matrix where the element ${\bm{A}}(i,j)$ returns the weight of each edge $(v_i, v_j)$. The node degrees are given by ${\bm{d}}=\{d_1,\cdots,d_N\}$ where $d_i$ computes the sum of edge weights connected to node $i$. We define ${\bm{D}}$ as the degree matrix whose diagonal elements are obtained from ${\bm{d}}$. Following the previous researches~\cite{Kipf2017,Xu2018,Klicpera2019}, the edge weights are supposed to be non-negative and only capture the similarity between nodes instead of their negative correlations.
}
As we will introduce later, GCN~\cite{Kipf2017} applies the normalized augmented adjacency obtained by adding self-loops followed by augmented degree normalization, which results in $\hat{{\bm{A}}}=\hat{{\bm{D}}}^{-1/2}({\bm{A}}+{\bm{I}})\hat{{\bm{D}}}^{-1/2}$, where $\hat{{\bm{D}}}={\bm{D}}+{\bm{I}}$. We define the augmented normalized Laplacian~\cite{oono2019asymptotic} as $\hat{{\bm{L}}}={\bm{I}}-\hat{{\bm{A}}}$. By relating to the spectral theory of the generic Laplacian~\cite{chung1997spectral}, Oono \& Suzuki~\cite{oono2019asymptotic} derive the spectrum of the augmented normalized Laplacian and thereby of its adjacency. We summarize the result as follows.
\begin{theorem}[Augmented Spectral Property~\cite{oono2019asymptotic}]
\label{th:spectral}
Since $\hat{{\bm{A}}}$ is symmetric, let $\lambda_1\le\cdots\le\lambda_N$ be the real eigenvalues of $\hat{{\bm{A}}}$, sorted in an ascending order. Suppose the multiplicity of the largest eigenvalue $\lambda_N$ is $M$, \emph{i.e.}, $\lambda_{N-M}<\lambda_{N-M+1}=\cdots=\lambda_N$. Then we have:
\begin{itemize}
\item $-1<\lambda_1,\lambda_{N-M}<1$;
\item $\lambda_{N-M+1}=\cdots=\lambda_N=1$;
\item $M$ is given by the number of connected components in ${\mathcal{G}}$, and $\hat{{\bm{e}}}_m\coloneqq\hat{{\bm{D}}}^{1/2}{\bm{u}}_m$ is the eigenvector associated with eigenvalue $\lambda_{N-M+m}$ where ${\bm{u}}_m\in\mathbb{R}^{N}$ is the indicator of the $m$-th connected component, \emph{i.e.}, ${\bm{u}}_m(i)=1$ if node $i$ belongs to the $m$-th component and ${\bm{u}}_m(i)=0$ otherwise.
\end{itemize}
\end{theorem}
{
Theorem 1 focuses on the eigenvalues of the normalized augmented adjacency $\hat{{\bm{A}}}$. The well-known Perron-Frobenius Theorem (PFT)~\cite{pillai2005perron} states that for a row-stochastic, irreducible probability matrix ${\bm{P}}$, the spectral radius is exactly 1, and all other eigenvalues have magnitude strictly smaller than 1. Yet, this result cannot be applied directly since $\hat{{\bm{A}}}$ is not an irreducible stochastic matrix. The conclusion by~\cite{chung1997spectral} is applicable for the spectral analysis of the adjacency matrix ${\bm{A}}$, but it does not take the augmented self-loops (\emph{i.e.} ${\bm{A}}+{\bm{I}}$) into account. Theorem 1 rigorously generalizes the conclusion by~\cite{chung1997spectral} from ${\bm{A}}$ to its augmented version $\hat{{\bm{A}}}$.
}
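Theorem 1 can also be verified numerically on a toy graph (an illustrative numpy sketch; the graph is an arbitrary choice):

```python
import numpy as np

# Toy graph with two connected components: a single edge {0,1} and a triangle {2,3,4}.
A = np.zeros((5, 5))
for i, j in [(0, 1), (2, 3), (3, 4), (2, 4)]:
    A[i, j] = A[j, i] = 1.0

d_hat = A.sum(1) + 1.0                                   # augmented degrees D + I
A_hat = (A + np.eye(5)) / np.sqrt(np.outer(d_hat, d_hat))  # D_hat^{-1/2}(A+I)D_hat^{-1/2}
lam = np.sort(np.linalg.eigvalsh(A_hat))
# The largest eigenvalue 1 appears with multiplicity M = 2 (one per component),
# and every remaining eigenvalue lies strictly inside (-1, 1).
```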
\subsection{Variants of GCN}
\label{sec:gcns}
Here, we introduce several typical variants of GCN.
\textbf{Generic GCN.}
As originally developed by~\cite{Kipf2017}, the feed forward propagation in GCN is recursively conducted as
\begin{eqnarray}
\label{Eq:gcn}
{\bm{H}}_{l+1} &=& \sigma\left(\hat{{\bm{A}}}{\bm{H}}_{l}{\bm{W}}_{l}\right),
\end{eqnarray}
where ${\bm{H}}_{l}=\{{\bm{h}}_{1,l},\cdots,{\bm{h}}_{N,l}\}$ are the hidden vectors of the $l$-th layer with ${\bm{h}}_{i,l}$ being the hidden feature for node $i$; $\sigma(\cdot)$ is a nonlinear function (it is implemented as ReLU throughout this paper); and ${\bm{W}}_{l}\in\mathbb{R}^{C_l \times C_{l+1}}$ is the filter matrix in the $l$-th layer. For the analyses in \textsection~\ref{sec:our-methods}, we set the dimensions of all layers to be the same $C_l=C$ for simplicity. We henceforth call generic GCN as GCN for short unless otherwise specified.
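For reference, the recursion in Eq.~\ref{Eq:gcn} amounts to the following forward pass (a hypothetical numpy sketch with random weights standing in for trained ones):

```python
import numpy as np

def gcn_forward(A_hat, X, Ws):
    """Generic GCN: H_0 = X, H_{l+1} = ReLU(A_hat @ H_l @ W_l)."""
    H = X
    for W in Ws:
        H = np.maximum(A_hat @ H @ W, 0.0)  # one graph convolution layer
    return H
```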
\textbf{GCN with bias (GCN-b).}
In most literature, GCN is introduced in the form of Eq.~\ref{Eq:gcn} without the explicit involvement of the bias term that, however, is necessary in practical implementation. If adding the bias, Eq.~\ref{Eq:gcn} is renewed as
\begin{eqnarray}
\label{Eq:gcn-b}
{\bm{H}}_{l+1} &=& \sigma\left(\hat{{\bm{A}}}{\bm{H}}_{l}{\bm{W}}_{l}+{\bm{b}}_{l}\right),
\end{eqnarray}
where the bias is defined by ${\bm{b}}_l\in\mathbb{R}^{1\times C}$.
\textbf{ResGCN.}
By borrowing the concept from ResNet~\cite{he2016deep}, Kipf \& Welling~\cite{Kipf2017} utilize residual connections between hidden layers to facilitate the training of deeper models by carrying over information from the previous layer's input:
\begin{eqnarray}
\label{Eq:resgcn}
{\bm{H}}_{l+1} &=& \sigma\left(\hat{{\bm{A}}}{\bm{H}}_{l}{\bm{W}}_{l}\right)+\alpha{\bm{H}}_l,
\end{eqnarray}
where we have further added the weight $0\leq\alpha\leq1$ for more flexibility to balance between the GCN propagation and residual information.
\textbf{APPNP.}
Since deep GCNs will isolate the output from the input due to over-smoothing, Klicpera et al.~\cite{Klicpera2019} suggest to explicitly conduct skip connections from the input layer to each hidden layer to preserve input information:
\begin{eqnarray}
\label{Eq:appnp}
{\bm{H}}_{l+1} &=& (1-\beta)\hat{{\bm{A}}}{\bm{H}}_{l}+\beta{\bm{H}}_0,
\end{eqnarray}
where $0\leq\beta\leq1$ is the trade-off weight.
Note that the original version by~\cite{Klicpera2019} does not involve the non-linearity and weight matrix in each hidden layer. The work GCNII~\cite{chensimple} seeks more capacity by adding the ReLU function and trainable weights to the propagation. Here we adopt the original version and find that it works well.
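The propagation in Eq.~\ref{Eq:appnp} can be sketched as follows (a hypothetical numpy version), making explicit that every layer re-injects a $\beta$-fraction of the input features:

```python
import numpy as np

def appnp_propagate(A_hat, H0, beta, n_layers):
    """APPNP: H_{l+1} = (1 - beta) * A_hat @ H_l + beta * H_0."""
    H = H0
    for _ in range(n_layers):
        H = (1.0 - beta) * A_hat @ H + beta * H0  # skip connection from the input layer
    return H
```

In the degenerate case $\beta=1$ the propagation is the identity, while $\beta=0$ recovers plain repeated smoothing by $\hat{{\bm{A}}}$.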
\begin{figure}[t!]
\centering
\includegraphics [width=.45\textwidth]{illustrations.pdf}
\caption{(a) Over-smoothing of generic GCN~\cite{oono2019asymptotic}: converging to a multi-dimensional subspace ${\mathcal{M}}$; (b) Over-smoothing of general models: converging to a cuboid $O({\mathcal{M}},r)$. }
\label{fig.illustration}
\end{figure}
\section{Analyses and Methodologies}
\label{sec:our-methods}
In this section, we first derive the universal theorem (\autoref{th:universal}) to explain why and when over-smoothing happens for all the four models introduced in \textsection~\ref{sec:gcns}. We then introduce DropEdge, which is proved to relieve over-smoothing for all these models. We also contend that DropEdge is able to prevent over-fitting, and discuss its connections and extensions with respect to other related notions.
\subsection{Over-smoothing of General Models}
\label{sec:analysis}
The notion of ``over-smoothing'' was originally introduced by~\cite{Li2018}, and later explained by~\cite{oono2019asymptotic} and many other recent works~\cite{rong2019dropedge,chen2020measuring,zhao2019pairnorm,chensimple}.
{
In general, the over-smoothing phenomenon implies that the node representations become mixed and indistinguishable from each other after many layers of message passing. It hence weakens the trainability and expressivity of deep GCNs. Notice that over-smoothing in our context is related to ``frequency smoothing'' in classical signal processing, if frequency is understood as the eigenvalue of the adjacency/Laplacian matrix, following the definition by~\cite{ortega2018graph}. Over-smoothing can be explained as graph frequency filtering of small eigenvalues. However, compared to the conventional studies in frequency smoothing~\cite{ortega2018graph}, the analysis of over-smoothing in GCN models is more challenging given the complicated architecture and the involvement of non-linearity, which motivates the theoretical studies in this paper. }
In our following analyses, we exploit the conclusion by~\cite{oono2019asymptotic} for its generality of taking both the non-linearity (\emph{i.e.} the ReLU function) and the convolution filters into account. The authors in~\cite{oono2019asymptotic} explain over-smoothing of deep GCN as convergence to a multi-dimensional subspace. Falling into this subspace incurs information loss: the nodes within the same connected component are distinguishable only by their degrees. In other words, if two nodes share the same degree in the same component, their representations will be the same after infinite-layer propagation, even if their initial features and local topology are clearly different. Such information loss becomes more serious if the number of connected components, \emph{i.e.} the dimensionality of the subspace (defined below), is small. We can hence leverage the distance between each GCN layer and the subspace to measure how serious the over-smoothing is.
The definition of the subspace is given below.
\begin{definition}[Subspace]
\label{de:subspace}
We define ${\mathcal{M}}\coloneqq\{{\bm{H}}\in\mathbb{R}^{N\times C}|{\bm{H}}=\hat{{\bm{E}}}{\bm{C}}, {\bm{C}}\in\mathbb{R}^{M\times C}\}$ as an $M$-dimensional ($M\le N$) subspace in $\mathbb{R}^{N\times C}$, where $\hat{{\bm{E}}}=\{\hat{{\bm{e}}}_1,\cdots,\hat{{\bm{e}}}_M\}\in\mathbb{R}^{N\times M}$ collects the bases of the largest eigenvalue of $\hat{{\bm{A}}}$ in \autoref{th:spectral}, namely, $\hat{{\bm{e}}}_m=\hat{{\bm{D}}}^{1/2}{\bm{u}}_m$.
\end{definition}
{
The subspace ${\mathcal{M}}$ is expanded by the columns of $\hat{{\bm{E}}}$. Definition~\ref{de:subspace} follows the conventional definition of a subspace in~\cite{vetterli2014foundations}, namely, ${\mathcal{M}}$ is closed under addition and scalar multiplication. To be specific, if ${\bm{H}}_1,{\bm{H}}_2\in{\mathcal{M}}$, for any $a, b\in\mathbb{R}$, $a{\bm{H}}_1 + b{\bm{H}}_2 = a\hat{{\bm{E}}}{\bm{C}}_1 + b\hat{{\bm{E}}}{\bm{C}}_2 = \hat{{\bm{E}}}(a{\bm{C}}_1+b{\bm{C}}_2)=\hat{{\bm{E}}}{\bm{C}}\in{\mathcal{M}}$, since ${\bm{C}}\coloneqq a{\bm{C}}_1+b{\bm{C}}_2\in\mathbb{R}^{M\times C}$.
} We define the distance between matrix ${\bm{H}}\in\mathbb{R}^{N\times M}$ and ${\mathcal{M}}$ as $d_\mathcal{M}({\bm{H}})\coloneqq\inf_{{\bm{Y}}\in \mathcal{M}} ||{\bm{H}}-{\bm{Y}}||_\mathrm{F}$, where $\|\cdot\|_F$ computes the Frobenius norm.
However, the conclusion by~\cite{oono2019asymptotic} is only applicable to generic GCN. In this section, we derive a universal theorem to characterize the behavior of general GCNs, showing that they will converge to a cuboid rather than a subspace as the depth increases. We first define the cuboid below.
\begin{definition}[Cuboid]
\label{de:cuboid}
We define ${\mathcal{O}}({\mathcal{M}},r)$ as the cuboid that expands ${\mathcal{M}}$ up to a radius $r\geq 0$, namely, ${\mathcal{O}}({\mathcal{M}},r)\coloneqq\{{\bm{H}}\in\mathbb{R}^{N\times C}|d_{{\mathcal{M}}}({\bm{H}})\leq r\}$.
\end{definition}
We now devise the general theorem on over-smoothing.
\begin{theorem}[General Over-Smoothing Theorem]
\label{th:universal}
For the GCN models defined in Eq.~\ref{Eq:gcn} to Eq.~\ref{Eq:appnp}, we universally have
\begin{equation}
d_\mathcal{M}({\bm{H}}_{l+1})-r \le v\left(d_\mathcal{M}({\bm{H}}_{l})-r\right), \label{equ:distance-b}
\end{equation}
where $v\geq0$ and $r$ describe the convergence factor and radius, respectively, depending on what the specific model is. In particular,
\begin{itemize}
\item For generic GCN (Eq.~\ref{Eq:gcn}), $v=s\lambda$, $r=0$;
\item For GCN-b (Eq.~\ref{Eq:gcn-b})\footnote{We assume the distance $d_\mathcal{M}({\bm{b}}_l)$ keeps the same for all layers for simplicity; otherwise, we can define it as the supremum.}, $v=s\lambda$, $r=\frac{d_\mathcal{M}({\bm{b}}_l)}{1-v}$;
\item For ResGCN (Eq.~\ref{Eq:resgcn}), $v=s\lambda+\alpha$, $r=0$;
\item For APPNP (Eq.~\ref{Eq:appnp}), $v=(1-\beta)\lambda$, $r=\frac{\beta d_\mathcal{M}({\bm{H}}_0)}{1-v}$,
\end{itemize}
where, $s>0$ is the supremum of all singular values of all ${\bm{W}}_l$, and $\lambda\coloneqq \max_{n=1}^{N-M}|\lambda_n|<1$ is
the second largest eigenvalue of $\hat{{\bm{A}}}$. The equality in Eq.~\ref{equ:distance-b} is achievable under certain specification.
\end{theorem}
{
The main characteristic of Theorem 2, in contrast to the power method~\cite{van1996matrix} and random-walk-based methods~\cite{Li2018}, is that it takes the non-linear ReLU function $\sigma$ into account. With the involvement of the non-linearity, it is indeed challenging to analyze the convergence behavior, which resorts to certain tricky transformations and inequalities, as demonstrated by Eq.~17-20 in Appendix. In addition, Theorem 2 is applicable to general models, making it more powerful than the method in~\cite{oono2019asymptotic} that is proposed for one specific case.}
The proof is provided in Appendix A. By Eq.~\ref{equ:distance-b}, we recursively derive $d_{{\mathcal{M}}}({\bm{H}}_{l})-r\leq v (d_{{\mathcal{M}}}({\bm{H}}_{l-1})-r)\leq\cdots\leq v^l(d_{{\mathcal{M}}}({\bm{H}}_{0})-r)$.
We assume $v<1$ for any $v\in\{s\lambda, s\lambda+\alpha, (1-\beta)\lambda\}$ in \autoref{th:universal} by observing that $\lambda<1$, $s\leq1$ (which is usually the case due to the $\ell_2$ penalty during training) and $\alpha$ can be set small enough\footnote{Otherwise, if $v\geq1$, it will potentially cause gradient explosion and unstable training for deep GCNs, which is not the focus of this paper.}. Under this assumption, \autoref{th:universal} states that the general GCN models actually converge towards the cuboid $O({\mathcal{M}},r)$, as depicted in Fig.~\ref{fig.illustration}.
We further analyze the convergence behavior for each particular model with the following remarks.
\begin{remark}
\label{re:gcn}
For generic GCN without bias, the radius becomes $r=0$, and we have $\lim_{l\rightarrow\infty}d_{{\mathcal{M}}}({\bm{H}}_{l+1})\leq\lim_{l\rightarrow\infty}v^l d_{{\mathcal{M}}}({\bm{H}}_{0})=0$, indicating that ${\bm{H}}_{l+1}$ exponentially converges to ${\mathcal{M}}$ and thus results in over-smoothing, as already studied by~\cite{oono2019asymptotic}.
\end{remark}
\begin{remark}
\label{re:gcn-b}
For GCN-b, the radius is not zero: $r>0$, and we have
$\lim_{l\rightarrow\infty}d_{{\mathcal{M}}}({\bm{H}}_{l+1})\leq \lim_{l\rightarrow\infty}r+v^l( d_{{\mathcal{M}}}({\bm{H}}_{0})-r)=r$, \emph{i.e.}, ${\bm{H}}_{l+1}$ exponentially converges to the cuboid $O({\mathcal{M}},r)$. Unlike ${\mathcal{M}}$, $O({\mathcal{M}},r)$ shares the same dimensionality with $\mathbb{R}^{N\times C}$ and probably contains useful information (other than node degree) for node representation.
\end{remark}
\begin{remark}
\label{re:resgcn}
For ResGCN, it finally converges to ${\mathcal{M}}$, similar to generic GCN. Yet, as $v=s\lambda+\alpha \geq s\lambda$, it exhibits a slower convergence speed to ${\mathcal{M}}$ compared to generic GCN, which is consistent with our intuitive understanding of the benefit of adding residual connections.
\end{remark}
\begin{remark}
\label{re:appnp}
For APPNP, it converges to $O({\mathcal{M}},r)$ rather than ${\mathcal{M}}$, with $r>0$. This explains why adding the input layer to each hidden layer in APPNP helps impede over-smoothing. Notice that increasing $\beta$ enlarges $r$ but decreases $v$ at the same time, thus leading to faster convergence to a larger cuboid.
\end{remark}
The discussions above in Remarks~\ref{re:gcn}-\ref{re:appnp} show that the value of $\lambda$ plays an important role in influencing over-smoothing for different models, with larger $\lambda$ implying less over-smoothing. In the next subsection, we will show that our proposed method DropEdge is capable of increasing $\lambda$ and thereby preventing over-smoothing.
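To make the role of $\lambda$ concrete, the following sketch (a minimal NumPy illustration, not the paper's code) builds the re-normalized adjacency $\hat{{\bm{A}}}$ of a toy connected graph, reads off $\lambda$, and checks the exponential bound of Remark~\ref{re:gcn} for a linear GCN with ${\bm{W}}={\bm{I}}$ and no ReLU; we assume, as in the standard spectral analysis, that the subspace basis of a single component is $e\propto\hat{{\bm{D}}}^{1/2}\mathbf{1}$, the eigenvector of $\hat{{\bm{A}}}$ with eigenvalue 1:

```python
import numpy as np

# Toy undirected graph: a 5-node path (single connected component).
N = 5
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1

# Re-normalization trick: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_tilde = A + np.eye(N)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# lambda: second-largest absolute eigenvalue (the largest is exactly 1).
eigvals = np.sort(np.abs(np.linalg.eigvalsh(A_hat)))[::-1]
lam = eigvals[1]
assert abs(eigvals[0] - 1.0) < 1e-8 and lam < 1

# Basis of the subspace M for one component: e = D^{1/2} 1, normalized.
e = np.sqrt(d)
e /= np.linalg.norm(e)

def dist_to_M(H):
    # Frobenius distance to span(e): ||H - e e^T H||_F
    return np.linalg.norm(H - np.outer(e, e) @ H)

# Linear GCN with W = I and no ReLU: H_{l+1} = A_hat H_l.
rng = np.random.default_rng(0)
H = rng.standard_normal((N, 3))
d0 = dist_to_M(H)
for _ in range(50):
    H = A_hat @ H
# Theorem bound with s = 1, r = 0: d_M(H_50) <= lambda^50 * d_M(H_0).
assert dist_to_M(H) <= lam ** 50 * d0 + 1e-10
```

The final assertion mirrors the recursion $d_{{\mathcal{M}}}({\bm{H}}_{l})\leq v^l d_{{\mathcal{M}}}({\bm{H}}_{0})$ with $v=\lambda$.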
\subsection{DropEdge to Alleviate Over-Smoothing}
\label{sec:methodology}
At each training epoch, the DropEdge technique randomly drops a certain rate of edges of the input graph. Formally, it randomly sets $Vp$ non-zero elements of the adjacency matrix ${\bm{A}}$ to zeros, where $V$ is the total number of edges and $p$ is the dropping rate. If we denote the resulting adjacency matrix as ${\bm{A}}_{\text{drop}}$, then its relation with ${\bm{A}}$ becomes
\begin{eqnarray}
\label{Eq:DropEdge}
{\bm{A}}_{\text{drop}} &=& \text{Unif}({\bm{A}}, 1-p),
\end{eqnarray}
where $\text{Unif}({\bm{A}}, 1-p)$ uniformly samples each edge in ${\bm{A}}$ with probability $1-p$, namely, ${\bm{A}}_{\text{drop}}(i,j)={\bm{A}}(i,j)*\text{Bernoulli}(1-p)$. In our implementation, to avoid sampling redundant edges, we create ${\bm{A}}_{\text{drop}}$ by drawing a subset of edges of size $V(1-p)$ from ${\bm{A}}$ without replacement. Following the idea of~\cite{Kipf2017}, we also perform the re-normalization trick on ${\bm{A}}_{\text{drop}}$ to attain $\hat{{\bm{A}}}_{\text{drop}}$. We replace $\hat{{\bm{A}}}$ with $\hat{{\bm{A}}}_{\text{drop}}$ in Eq.~\ref{Eq:gcn} for propagation and training. During validation and testing, DropEdge is not applied.
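A minimal sketch of one DropEdge draw (dense NumPy for clarity; the function name and the dense representation are illustrative choices, not the released implementation): sample $V(1-p)$ undirected edges without replacement, rebuild the adjacency, then apply the re-normalization trick:

```python
import numpy as np

def drop_edge(A, p, rng):
    """One DropEdge draw on a dense symmetric {0,1} adjacency matrix A:
    keep a uniform subset of V(1-p) undirected edges (sampled without
    replacement), then apply the re-normalization trick."""
    iu, ju = np.triu_indices_from(A, k=1)
    mask = A[iu, ju] > 0
    edges = np.stack([iu[mask], ju[mask]], axis=1)   # the V undirected edges
    V = len(edges)
    keep = rng.choice(V, size=int(round(V * (1 - p))), replace=False)

    A_drop = np.zeros_like(A)
    i, j = edges[keep].T
    A_drop[i, j] = A_drop[j, i] = 1

    # Re-normalization trick of Kipf & Welling: D^{-1/2} (A_drop + I) D^{-1/2}
    A_tilde = A_drop + np.eye(len(A))
    d_inv_sqrt = A_tilde.sum(1) ** -0.5
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

At validation and test time the full $\hat{{\bm{A}}}$ would be used instead of a sampled draw.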
\autoref{th:universal} tells us that the degenerated expressivity of deep GCNs is closely related to $v$ and thereby $\lambda$---the absolute second-largest eigenvalue of $\hat{{\bm{A}}}$. Here, we will demonstrate that adopting DropEdge increases $\lambda$ and thereby alleviates over-smoothing. In our previous conference version~\cite{rong2019dropedge}, we only discussed how DropEdge influences the spectrum of ${\bm{A}}_{\text{drop}}$ without considering the re-normalization trick. In this paper, we will draw the conclusion directly for the normalized augmented adjacency matrix $\hat{{\bm{A}}}_{\text{drop}}$ below.
\begin{theorem}
\label{The:smoothing}
Let $\lambda(p)$ denote the absolute value of any eigenvalue of the expected ${\bm{A}}_{\text{drop}}$ under dropping rate $p$ after re-normalization. We can bound the value of $\lambda(p)$ by
\begin{eqnarray}
\label{eq:lambda_drop}
\mu(p)\leq\lambda(p)\leq\gamma(p),
\end{eqnarray}
where both $\mu(p)$ and $\gamma(p)$ are monotonically increasing functions with regard to $p$. Besides, the gap $\gamma(p)-\mu(p)$ monotonically decreases with respect to $p$, and when $p=1$, the gap reduces to zero, leading to $\mu(1)=\lambda(1)=\gamma(1)=1$.
\end{theorem}
We provide the full details in Appendix B.
Theorem~\ref{The:smoothing} shows that
performing DropEdge increases both the upper and lower bounds of $\lambda$ (while decreasing their gap), which pushes $\lambda$ towards a larger value, particularly when
$p$ is close to 1 (with sufficient edges dropped).
This, to a certain degree, can slow down the over-smoothing speed in Eq.~\ref{equ:distance-b}. Fig.~\ref{fig.dropedge-os} illustrates the relation between $\lambda$ and $p$: $\lambda$ may fluctuate initially, but its value eventually increases as $p$ is enlarged.
DropEdge is also able to increase the dimensionality of the subspace, thus alleviating information loss, as already proven in our conference paper~\cite{rong2019dropedge}. We summarize this property in the following theorem.
\begin{theorem}
\label{The:smoothing-2}
Regarding the GCN models in Eq.~\ref{Eq:gcn} to Eq.~\ref{Eq:appnp}, we denote by ${\mathcal{M}}$ the convergence subspace defined in Definition~\ref{de:subspace} on the original graph, and by ${\mathcal{M}}'$ the one on the graph after DropEdge. Then, after certain edges are removed, the information loss can only decrease: $N-\text{dim}({\mathcal{M}}) \geq N-\text{dim}({\mathcal{M}}')$\footnote{In a general sense, the dimensionality of a data space does not necessarily reflect the amount of information, but in this paper converging to a subspace of smaller dimension does indicate more serious information loss, considering the particular structure of the subspace given by Definition~\ref{de:subspace}.}.
\end{theorem}
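The intuition behind Theorem~\ref{The:smoothing-2} is that deleting edges can only split connected components, never merge them, and the subspace gains one basis direction per component. A small helper (illustrative, not from the paper) makes this checkable on toy graphs:

```python
import numpy as np

def n_components(A):
    """Number of connected components of a symmetric {0,1} adjacency matrix,
    found via a simple depth-first traversal."""
    N = len(A)
    seen, comps = set(), 0
    for s in range(N):
        if s in seen:
            continue
        comps += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(np.nonzero(A[u])[0].tolist())
    return comps
```

Since removing any subset of edges can only increase `n_components`, the subspace dimension (one basis vector per component and channel) can only grow, so $N-\text{dim}({\mathcal{M}}')\leq N-\text{dim}({\mathcal{M}})$.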
\begin{figure}[t!]
\centering
\includegraphics [width=.35\textwidth]{Dropedge-os.png}
\caption{Illustration of Eq.~\ref{eq:lambda_drop} where $\lambda$ is bounded by $\mu(p)$ and $\gamma(p)$. The derivations of $\mu(p)$ and $\gamma(p)$ are given in Appendix~B.}
\label{fig.dropedge-os}
\end{figure}
\autoref{The:smoothing}-\ref{The:smoothing-2} do suggest that DropEdge is able to alleviate over-smoothing, but they do \textbf{not} mean that preventing over-smoothing by DropEdge will always deliver enhanced classification performance. For example, dropping all edges will address over-smoothing completely, yet it will also weaken the model's expressive power, since the GCN model then degenerates to an MLP without any topology modeling. In general, how to balance between preventing over-smoothing and expressing graph topology is critical, and one should take care to choose an appropriate edge dropping rate $p$ accordingly. In our experiments, we select the value of $p$ using validation data, and find that this works well in general.
\begin{table*}[t!]
\centering
\caption{Dataset Statistics}
\small
\begin{tabular}{lrrrrcl}
\hline
Datasets & \multicolumn{1}{l}{Nodes} & \multicolumn{1}{l}{Edges} & \multicolumn{1}{l}{Classes} & \multicolumn{1}{l}{Features} & \multicolumn{1}{l}{Training/Validation/Testing} & Type \\
\hline
Cora & 2,708 & 5,429 & 7 & 1,433 & 1,208/500/1,000 & Transductive \\
Citeseer & 3,327 & 4,732 & 6 & 3,703 & 1,812/500/1,000 & Transductive \\
Pubmed & 19,717 & 44,338 & 3 & 500 & 18,217/500/1,000 & Transductive \\
Reddit & 232,965 & 11,606,919 & 41 & 602 & 152,410/23,699/55,334 & Inductive \\
\hline
\end{tabular}%
\label{table:data}%
\end{table*}%
{
We would like to highlight that DropEdge acts \textbf{in a random yet unbiased way}. It does drop a certain number of edges at each training iteration to relieve over-smoothing, but different edges are removed at different iterations according to a uniform probability. In expectation, the information of the graph edges (and the underlying kernel) is still retained over the whole training process. Briefly, the hidden feature of node $i$ in the $(l+1)$-th layer is aggregated from all its neighborhoods in the $l$-th layer, \emph{i.e.}, the full aggregation is defined as ${\bm{h}}_{i, l+1}=\sum_{j\in{\mathcal{N}}(i)}{\bm{A}}(i,j){\bm{h}}_{j,l}$. In DropEdge, each edge is drawn from a Bernoulli distribution $\text{Bernoulli}(1-p)$, where $p$ is the dropping rate. Then, the expectation of the aggregation over all edges is given by $\mathbb{E}[{\bm{h}}_{i, l+1}|{\bm{H}}_l]=\mathbb{E}[\sum_{j\in{\mathcal{N}}(i)}{\bm{A}}(i,j){\bm{h}}_{j,l}|{\bm{H}}_l]=\sum_{j\in{\mathcal{N}}(i)}\mathbb{E}[{\bm{A}}(i,j)]{\bm{h}}_{j,l}=(1-p)\sum_{j\in{\mathcal{N}}(i)}{\bm{A}}(i,j){\bm{h}}_{j,l}$, which equals the original full aggregation up to the multiplier $1-p$; this multiplier is erased if sum-to-1 normalization is conducted on the adjacency weights ${\bm{A}}$. Such unbiased sampling makes DropEdge distinct from random graph generation methods~\cite{erdos1960evolution} or sparse graph learning approaches~\cite{friedman2008sparse}, where the graph, once generated or modified, remains fixed for all training iterations, leading to a permanent loss of the removed edge connections. Moreover, dropping edges randomly creates different random deformations of the input graph; in this way, DropEdge also helps prevent over-fitting, similar to typical image augmentation techniques (\emph{e.g.} rotation, cropping and flipping) used to hinder over-fitting in training CNNs. 
We will provide experimental validations in \textsection~\ref{sec:glasso}.
}
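The unbiasedness argument above can be verified numerically. The sketch below (an illustrative Monte Carlo check; per-edge Bernoulli sampling is used since it shares the same expectation as the without-replacement scheme) averages many DropEdge draws and confirms the mean approaches $(1-p){\bm{A}}$:

```python
import numpy as np

# Monte Carlo check of unbiasedness: over many DropEdge draws, the average
# sampled adjacency approaches (1 - p) * A.
rng = np.random.default_rng(0)
N, p, trials = 8, 0.5, 20000
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # random symmetric graph

acc = np.zeros_like(A)
iu = np.triu_indices(N, 1)
for _ in range(trials):
    keep = (rng.random(len(iu[0])) < 1 - p).astype(float)
    A_drop = np.zeros_like(A)
    A_drop[iu] = A[iu] * keep                      # Bernoulli(1 - p) per edge
    acc += A_drop + A_drop.T

assert np.allclose(acc / trials, (1 - p) * A, atol=0.02)
```

The residual $1-p$ multiplier disappears once the adjacency weights are normalized to sum to 1, matching the argument in the text.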
\textbf{Layer-Wise DropEdge.}
The above formulation of DropEdge is one-shot, with all layers sharing the same perturbed adjacency matrix. Indeed, we can also perform DropEdge for each individual layer. Specifically, we obtain $\hat{{\bm{A}}}_{\text{drop}}^{(l)}$ by independently computing Eq.~\ref{Eq:DropEdge} for each $l$-th layer; different layers can thus have different adjacency matrices $\hat{{\bm{A}}}_{\text{drop}}^{(l)}$. Such a layer-wise version brings in more randomness and deformation of the original data, and we will experimentally compare its performance with the original DropEdge in \textsection~\ref{sec:exp-dropedge}.
\subsection{Discussions}\label{sec.discussions}
This section contrasts DropEdge with other related concepts, including Dropout, DropNode, and Graph Sparsification. We also discuss the difference of over-smoothing between node classification and graph classification.
\textbf{DropEdge vs. Dropout.}
The Dropout trick~\cite{Hinton2012} perturbs the feature matrix by randomly setting feature dimensions to zeros, which may reduce the effect of over-fitting but is of no help in preventing over-smoothing, since it does not make any change to the adjacency matrix. As a reference, DropEdge can be regarded as a generalization of Dropout from dropping feature dimensions to dropping edges, which mitigates both over-fitting and over-smoothing. In fact, the impacts of Dropout and DropEdge are complementary to each other, and their compatibility will be shown in the experiments.
\textbf{DropEdge vs. DropNode.}
Another related vein is the family of node-sampling-based methods, including GraphSAGE~\cite{hamilton2017inductive}, FastGCN~\cite{chen2018fastgcn}, and AS-GCN~\cite{Huang2018}. We name this category of approaches DropNode. In its original motivation, DropNode samples sub-graphs for mini-batch training, and it can also be treated as a specific form of dropping edges, since the edges connected to the dropped nodes are removed as well. However, the effect of DropNode on dropping edges is node-oriented and indirect. By contrast, DropEdge is edge-oriented, and it can preserve all node features for training (if they fit into memory at once), exhibiting more flexibility. Further, to maintain the desired performance, the sampling strategies in current DropNode methods are usually inefficient; for example, GraphSAGE suffers from an exponentially-growing layer size (the number of sampled nodes), and AS-GCN requires the sampling to be conducted recursively layer by layer. Our DropEdge, however, neither increases the layer size as the depth grows nor demands a recursive process, because the sampling of all edges is parallel.
\textbf{DropEdge vs. Graph-Sparsification.}
Graph-Sparsification~\cite{eppstein1997sparsification} is an old research topic in the graph domain. Its goal is to remove unnecessary edges for graph compression while keeping almost all information of the input graph. This is clearly distinct from the purpose of DropEdge, where no optimization objective is needed. Specifically, DropEdge removes edges of the input graph at random at each training iteration, whereas Graph-Sparsification resorts to a tedious optimization method to determine which edges to delete; once those edges are discarded, the output graph remains unchanged.
\textbf{Node Classification vs. Graph Classification.}
The main focus of our paper is on node classification, where all nodes are in an identical graph. In graph classification, the nodes are distributed across different graphs; in this scenario, \autoref{th:universal} is still applicable per graph, and the node activations of an infinitely-deep GCN in the same graph instance are only distinguishable by node degrees. Yet, this is not true for those nodes in different graphs, since they will converge to different positions in ${\mathcal{M}}$ (\emph{i.e.} ${\bm{C}}$ in Definition~\ref{de:subspace}). To illustrate this, we suppose all graph instances are fully connected graphs and share the same form of $\hat{{\bm{A}}}=\{\frac{1}{N}\}_{N\times N}$, the node features ${\bm{X}}_i$ ($\geq0$) within graph $i$ are the same but different from those in different graphs, and the weight matrix is fixed as ${\bm{W}}={\bm{I}}$ in Eq.~\ref{Eq:gcn}. Then, any layer of generic GCN keeps outputting ${\bm{X}}_i$ for graph $i$, which indicates no information confusion happens across different graphs.
Note that for graph classification, over-smoothing within each graph still hinders the expressive capability of GCN, as it causes dimension collapse of the input data.
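The fully-connected example above can be checked in a few lines (a toy NumPy sketch; the two constant feature vectors are arbitrary choices): with $\hat{{\bm{A}}}=\{\frac{1}{N}\}_{N\times N}$, identical non-negative rows per graph, and ${\bm{W}}={\bm{I}}$, one GCN layer returns the input unchanged, so different graphs never mix:

```python
import numpy as np

# Fully-connected graph: A_hat = (1/N) * ones. With identical non-negative
# rows X per graph and W = I, sigma(A_hat X W) = X for every layer.
N = 4
A_hat = np.full((N, N), 1.0 / N)
for const in ([1.0, 2.0], [0.5, 3.0]):      # two different "graphs"
    X = np.tile(const, (N, 1))              # identical rows, X >= 0
    H = np.maximum(A_hat @ X, 0)            # one GCN layer with ReLU
    assert np.allclose(H, X)                # output equals the input
```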
\section{Experiments}
\label{sec:exps}
Our experimental evaluations are conducted with the goal to answer the following questions:
\begin{itemize}
\item Is our proposed universal over-smoothing theorem in line with the experimental observation?
\item How does our DropEdge help in relieving over-smoothing of different GCNs?
\end{itemize}
To address the first question, we display on a simulated dataset how the node activations will behave when the depth grows. We also calculate the distance between the node activations and the subspace to show the convergence dynamics. As for the second question, we contrast the classification performance of varying models of different depths with and without DropEdge on several real node classification benchmarks. The comparisons with state-of-the-art methods are involved as well.
\begin{figure*}
\centering
\subfloat[Initial Visualization]{
\includegraphics[width=0.24\textwidth]{figures/osm/iter0mutigcnFalse0.pdf}}
\subfloat[GCN-b]{
\includegraphics[width=0.24\textwidth]{figures/osm/mutigcnTrue0.pdf}}
\subfloat[ResGCN]{
\includegraphics[width=0.24\textwidth]{figures/osm/resgcnFalse0.pdf}}
\subfloat[APPNP]{
\includegraphics[width=0.24\textwidth]{figures/osm/appnpFalse0.pdf}}\\
\subfloat[GCN]{
\includegraphics[width=0.24\textwidth]{figures/osm/iter400mutigcnFalse0.pdf}}
\subfloat[GCN-b]{
\includegraphics[width=0.24\textwidth]{figures/osm/iter400mutigcnTrue0.pdf}}
\subfloat[ResGCN]{
\includegraphics[width=0.24\textwidth]{figures/osm/iter400resgcnFalse0.pdf}}
\subfloat[APPNP]{
\includegraphics[width=0.24\textwidth]{figures/osm/iter400appnpFalse0.pdf}}
\caption{Output dynamics of GCN. (a) The initial visualization of the node distribution on Small-Cora, where the displayed size of each node reflects its degree. (b-d) The comparisons of the distance to the subspace $d_{{\mathcal{M}}}$ between GCN and GCN-b, ResGCN and APPNP, respectively, when the depth $d$ ranges from 0 to 400; (e-h) The visualization of the output for GCN, GCN-b, ResGCN and APPNP, when $d=400$.}
\label{fig.osm-gcn}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.24\textwidth]{figures/osm/mutigcnFalse7.pdf}
\includegraphics[width=0.24\textwidth]{figures/osm/iter10mutigcnFalse5.pdf}
\includegraphics[width=0.24\textwidth]{figures/osm/iter100mutigcnFalse5.pdf}
\includegraphics[width=0.24\textwidth]{figures/osm/iter400mutigcnFalse5.pdf}
\caption{Output dynamics of GCN with DropEdge. The left sub-figure plots $d_{{\mathcal{M}}}$ for GCN and GCN with DropEdge (dropping rate $p=0.5, 0.7$) under varying depth. Other sub-figures depict the output when the depth is 10, 100, and 400 ($p=0.5$).}
\label{fig.osm-dropedge}
\end{figure*}
\textbf{Node classification datasets.}
Following the practice of previous works, we focus on four benchmark datasets varying in graph size and feature type: (1) classifying the research topic of papers in three citation datasets: Cora, Citeseer and Pubmed~\cite{sen2008collective}; (2) predicting which community different posts belong to in the Reddit social network~\cite{hamilton2017inductive}. Note that the tasks in Cora, Citeseer and Pubmed are transductive, meaning all node features are accessible during training, while the task in Reddit is inductive, meaning the testing nodes are unseen during training. We apply the full-supervised training fashion used in~\cite{Huang2018} and \cite{chen2018fastgcn} on all datasets in our experiments. The statistics of all datasets are listed in Tab.~\ref{table:data}.
\textbf{Small-Cora.}
We construct a small dataset from Cora. In detail, we sample two connected components from the training graph of Cora, with 654 and 26 nodes, respectively. The original feature dimension of nodes is 1433, which is not suitable for visualization on a 2-dimensional plane; hence, we apply truncated SVD for dimensionality reduction with an output dimension of 2. Fig.~\ref{fig.osm-gcn} (a) displays the distribution of the node features. We call this simulated dataset Small-Cora.
\subsection{Visualizing over-smoothing on Small-Cora}
\label{sec:small-cora}
Theorem~\ref{th:universal} has derived the universality of over-smoothing for the four models: GCN, GCN-b, ResGCN, and APPNP. Here, to check whether it is consistent with empirical observations, we visualize the dynamics of the node activations on Small-Cora.
\textbf{Implementations.}
To better focus on how the structures of different GCNs influence over-smoothing, the experiments in this section fix the hidden dimension of all GCNs to 2, randomly initialize an orthogonal weight matrix ${\bm{W}}$ for each layer and keep it untrained, which leads to $s=1$ in Eq.~\ref{equ:distance-b}. We also remove the ReLU function, since we find that, with ReLU, the node activations degenerate to zeros as the layer number grows, which hinders the visualization. Regarding GCN-b, the bias of each layer is set to 0.05. We fix $\alpha=0.2$ and $\beta=0.5$ for ResGCN and APPNP, respectively. Since the total number of nodes is small (\emph{i.e.} 680), we are able to exactly derive the bases of the subspace, \emph{i.e.} ${\bm{E}}$ according to Theorem~\ref{th:spectral}, and compute the distance $d_{{\mathcal{M}}}$ between the node activations ${\bm{H}}$ and the subspace ${\mathcal{M}}$ using Eq. 11 in Appendix A.
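For reference, this kind of distance computation can be sketched as follows (illustrative helpers, not the paper's code; we assume, consistent with the standard spectral analysis, that ${\bm{E}}$ stacks one normalized $\hat{{\bm{D}}}^{1/2}$-weighted indicator vector per connected component, and that $d_{{\mathcal{M}}}({\bm{H}})=\|{\bm{H}}-{\bm{E}}{\bm{E}}^\top{\bm{H}}\|_F$, the Frobenius projection distance):

```python
import numpy as np

def subspace_basis(A, components):
    """Orthonormal basis E of the subspace M: one sqrt-degree indicator
    column per connected component, after the re-normalization trick.
    `components` is a list of node-index arrays (one per component)."""
    d = (A + np.eye(len(A))).sum(1)         # augmented degrees
    E = np.zeros((len(A), len(components)))
    for k, nodes in enumerate(components):
        E[nodes, k] = np.sqrt(d[nodes])
        E[:, k] /= np.linalg.norm(E[:, k])
    return E

def dist_to_subspace(H, E):
    # d_M(H) = ||H - E E^T H||_F, the distance to span of E's columns.
    return np.linalg.norm(H - E @ (E.T @ H))
```

Because the components' supports are disjoint, the columns of `E` are orthonormal and the projection formula applies directly.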
Fig.~\ref{fig.osm-gcn} demonstrates the output dynamics of all models. We have the following findings.
\textbf{For GCN}.
The nodes are generally distinguishable when $d=0$ (see (a)). As $d$ increases, the distance $d_{{\mathcal{M}}}$ decreases dramatically, finally reaching a very small value when $d=400$ (see (b) for example). It is clearly shown that, when $d=400$, the nodes within different components become collinear along different lines, and the bigger the node (\emph{i.e.} the larger its degree), the farther it is from the origin (see (e)). This observation is consistent with Remark~\ref{re:gcn}, as different lines indeed represent different bases of the subspace.
\textbf{For GCN-b}.
The output dynamics of GCN-b are distinct from those of GCN. The nodes within the same component remain non-collinear when $d=400$, as shown in (f). In (b), in contrast to GCN, the curve of GCN-b fluctuates within a certain bounded area. This result coincides with Remark~\ref{re:gcn-b} and supports that adding the bias term enables the node activations to converge to a cuboid surrounding the subspace within a certain radius.
\textbf{For ResGCN}.
Akin to GCN, the output of ResGCN approaches the subspace in the end, but its convergence dynamics, as shown in (c), are a bit different. The curve shakes up and down for several rounds before it eventually degenerates. This could be because the skip connection in ResGCN helps prevent over-smoothing or even reverses the convergence direction during the early period. When $d=400$ (in (g)),
each node exponentially falls into the subspace at the rate of $\lambda+\alpha$, as proven in Remark~\ref{re:resgcn}. Note that the average convergence speed of ResGCN is slower than that of GCN (recalling $\lambda+\alpha>\lambda$).
\textbf{For APPNP}.
The behavior of APPNP is completely different from GCN in (d). It quickly becomes stationary, and this stationary point is beyond the subspace up to a fixed distance $r>0$, which confirms the conclusion of Remark~\ref{re:appnp}. In APPNP, as the rate $v=(1-\beta)\lambda$ is smaller than that of GCN, its convergence speed is faster.
In addition, Fig.~\ref{fig.osm-dropedge} demonstrates how DropEdge changes the dynamics of GCN.
Clearly, the results verify Theorems~\ref{The:smoothing}-\ref{The:smoothing-2}: the convergence to the subspace becomes slower and the number of connected components becomes larger when we perform DropEdge on GCN with dropping rate $p=0.5$. If we further increase $p$ to 0.7, the convergence speed is decreased further.
\begin{table*}[htbp]
\centering
\renewcommand\arraystretch{1.1}
\caption{Testing accuracy (\%) on different backbones. }
\setlength{\tabcolsep}{9pt}
\begin{tabular}{|c|c|l|rrrrrr|r|}
\toprule
\multicolumn{3}{|c|}{Layer} & 2 & 4 & 8 & 16 & 32 & 64 & \multicolumn{1}{l|}{Average Improvement} \\
\midrule
\multirow{10}[20]{*}{Cora} & \multirow{2}[4]{*}{GCN} & Original & 85.8 & 85.6 & 83.2 & 81.2 & 69.8 & 42.1 & - \\
\cline{3-3} & & DropEdge & \textbf{86.4} & \textbf{86.6} & \textbf{85.5} & \textbf{84.2} & \textbf{71.3} & \textbf{50.6} & \textbf{+2.8} \\
\cline{2-10} & \multirow{2}[4]{*}{GCN-b} & Original & 86.1 & 85.5 & 78.7 & 82.1 & 71.6 & 52.0 & - \\
\cline{3-3} & & DropEdge & \textbf{86.5} & \textbf{87.6} & \textbf{85.8} & \textbf{84.3} & \textbf{74.6} & \textbf{53.2} & \textbf{+2.7} \\
\cline{2-10} & \multirow{2}[4]{*}{ResGCN} & Original & - & 86.0 & 85.4 & 85.3 & 85.1 & 79.8 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{87.0} & \textbf{86.9} & \textbf{86.9} & \textbf{86.8} & \textbf{84.8} & \textbf{+2.2} \\
\cline{2-10} & \multirow{2}[4]{*}{JKNet} & Original & - & 86.9 & 86.7 & 86.2 & 87.1 & 86.3 & \textbf{-} \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{87.7} & \textbf{87.8} & \textbf{88.0} & \textbf{87.6} & \textbf{87.9} & \textbf{+1.2} \\
\cline{2-10} & \multirow{2}[4]{*}{APPNP} & Original & - & 87.9 & 87.7 & 87.5 & 87.8 & 87.5 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{88.6} & \textbf{89.0} & \textbf{88.8} & \textbf{88.9} & \textbf{89.1} & \textbf{+1.2} \\
\hline
\multirow{10}[20]{*}{Citeseer} & \multirow{2}[4]{*}{GCN} & Original & 76.8 & 72.7 & 72.6 & 72.2 & 56.5 & 43.8 & - \\
\cline{3-3} & & DropEdge & \textbf{78.1} & \textbf{79.0} & \textbf{75.4} & \textbf{67.5} & \textbf{60.5} & \textbf{46.4} & \textbf{+2.1} \\
\cline{2-10} & \multirow{2}[4]{*}{GCN-b} & Original & 75.9 & 76.7 & 74.6 & 65.2 & 59.2 & 44.6 & - \\
\cline{3-3} & & DropEdge & \textbf{78.7} & \textbf{79.2} & \textbf{77.2} & \textbf{76.8} & \textbf{61.4} & \textbf{45.6} & \textbf{+3.8} \\
\cline{2-10} & \multirow{2}[4]{*}{ResGCN} & Original & - & 78.9 & 77.8 & 78.2 & 74.4 & 21.2 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{78.8} & \textbf{78.8} & \textbf{79.4} & \textbf{77.9} & \textbf{75.3} & \textbf{+11.9} \\
\cline{2-10} & \multirow{2}[4]{*}{JKNet} & Original & - & 79.1 & 79.2 & 78.8 & 71.7 & 76.7 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{80.2} & \textbf{80.2} & \textbf{80.1} & \textbf{80.0} & \textbf{80.0} & \textbf{+3.0} \\
\cline{2-10} & \multirow{2}[4]{*}{APPNP} & Original & - & 80.3 & 80.5 & 80.2 & 79.9 & 80.4 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{80.8} & \textbf{80.9} & \textbf{81.1} & \textbf{81.2} & \textbf{81.3} & \textbf{+0.8} \\
\hline
\multirow{10}[20]{*}{Pubmed} & \multirow{2}[4]{*}{GCN} & Original & 89.5 & 89.2 & 88.3 & 87.7 & 78.6 & 72.7 & - \\
\cline{3-3} & & DropEdge & \textbf{89.7} & \textbf{90.9} & \textbf{91.0} & \textbf{90.5} & \textbf{80.1} & \textbf{77.5} & \textbf{+2.3} \\
\cline{2-10} & \multirow{2}[4]{*}{GCN-b} & Original & 90.2 & 88.7 & 90.1 & 88.1 & 84.6 & \textbf{79.7} & - \\
\cline{3-3} & & DropEdge & \textbf{91.2} & \textbf{91.3} & \textbf{90.9} & \textbf{90.3} & \textbf{86.2} & 79.0 & \textbf{+1.2} \\
\cline{2-10} & \multirow{2}[4]{*}{ResGCN} & Original & - & 90.7 & 89.6 & 89.6 & 90.2 & 87.9 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{90.7} & \textbf{90.5} & \textbf{91.0} & \textbf{91.1} & \textbf{90.2} & \textbf{+1.1} \\
\cline{2-10} & \multirow{2}[4]{*}{JKNet} & Original & - & 90.5 & 90.6 & 89.9 & 89.2 & 90.6 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{91.3} & \textbf{91.2} & \textbf{91.5} & \textbf{91.3} & \textbf{91.6} & \textbf{+1.2} \\
\cline{2-10} & \multirow{2}[4]{*}{APPNP} & Original & - & 90.4 & 90.3 & 89.8 & 90.0 & 90.3 & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{90.7} & \textbf{90.4} & \textbf{90.3} & \textbf{90.5} & \textbf{90.5} & \textbf{+0.3} \\
\hline
\multirow{10}[20]{*}{Reddit} & \multirow{2}[4]{*}{GCN} & Original & 95.75 & 96.08 & 96.43 & 79.87 & 44.36 & - & - \\
\cline{3-3} & & DropEdge & \textbf{95.93} & \textbf{96.23} & \textbf{96.57} & \textbf{89.02} & \textbf{66.18} & - & \textbf{+6.3} \\
\cline{2-10} & \multirow{2}[4]{*}{GCN-b} & Original & 96.11 & 96.62 & 96.17 & 67.11 & 45.55 & - & - \\
\cline{3-3} & & DropEdge & \textbf{96.13} & \textbf{96.71} & \textbf{96.48} & \textbf{90.54} & \textbf{50.51} & - & \textbf{+5.8} \\
\cline{2-10} & \multirow{2}[4]{*}{ResGCN} & Original & - & 96.13 & 96.37 & 96.34 & 93.93 & - & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{96.33} & \textbf{96.46} & \textbf{96.48} & \textbf{94.27} & \textbf{-} & \textbf{+0.2} \\
\cline{2-10} & \multirow{2}[4]{*}{JKNet} & Original & - & 96.54 & 96.82 & 95.91 & 95.42 & - & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{96.75} & \textbf{97.02} & \textbf{96.39} & \textbf{95.68} & \textbf{-} & \textbf{+0.3} \\
\cline{2-10} & \multirow{2}[4]{*}{APPNP} & Original & - & 95.84 & 95.77 & 95.64 & 95.59 & - & - \\
\cline{3-3} & & DropEdge & \textbf{-} & \textbf{95.89} & \textbf{95.91} & \textbf{95.76} & \textbf{95.73} & - & \textbf{+0.1} \\
\bottomrule
\end{tabular}%
\label{tab:overall_res}%
\end{table*}
\subsection{Evaluating DropEdge on different GCNs}\label{sec.cmpdropedge}
In this section, we are interested in whether applying DropEdge can promote the performance of the aforementioned four GCN models on real node classification benchmarks: Cora, Citeseer, Pubmed, and Reddit. We further implement JKNet~\cite{xu2018representation} and carry out DropEdge on it.
We denote each model X of depth $d$ as X-$d$ for short in what follows, \emph{e.g.} GCN-4 denotes the 4-layer GCN.
\textbf{Implementations.}
Different from~\textsection~\ref{sec:small-cora}, the parameters of all models are trainable and initialized with the method proposed by~\cite{Kipf2017}, and the ReLU function is added. We implement all models on all datasets with the depth $d\in\{2,4,8,16,32,64\}$ and the hidden dimension $128$. For Reddit, the maximum depth is 32 considering the memory bottleneck. Since different structures exhibit different training dynamics on different datasets, to enable more robust comparisons, we perform random hyper-parameter search for each model, and report the case giving the best accuracy on the validation set of each benchmark. The search space of hyper-parameters and more details are provided in Tab.~7 in Appendix~D. Tab.~8 depicts different types of normalization of the adjacency matrix, and the selection of normalization is treated as a hyper-parameter. For the same architecture with or without DropEdge, we apply the same set of hyper-parameters except the dropping rate $p$ for fair evaluation. We adopt the Adam optimizer for model training. To ensure the reproducibility of the results, the random seeds of all experiments are set to the same value. We fix the number of training epochs to $400$ for all datasets. All experiments are conducted on an NVIDIA Tesla P40 GPU with 24GB memory. Tab.~9 in Appendix summarizes the hyper-parameters of each backbone with the best accuracy on different datasets.
\textbf{Overall Results.}
Tab.~\ref{tab:overall_res} summarizes the results on all datasets. We have the following observations:
\begin{itemize}
\item DropEdge consistently improves the testing accuracy over the model without DropEdge in all cases. On Citeseer, for example, ResGCN-64 fails to produce meaningful classification performance, while ResGCN-64 with DropEdge still delivers promising results.
\item For deep models, GCN-b, ResGCN, and APPNP generally outperform generic GCN with or without DropEdge, which is consistent with our earlier theoretical analyses. In particular, APPNP+DropEdge performs best on Cora and Citeseer, while JKNet+DropEdge yields the best results on Pubmed and Reddit, showing the compatibility of DropEdge with modern architectures.
\item After using DropEdge, all models normally achieve their highest accuracy at a larger depth. For instance, both GCN and GCN-b perform worse as $d$ increases, but with DropEdge they both reach their peak at $d=4$, probably thanks to the alleviation of over-smoothing by DropEdge.
\end{itemize}
In addition, the validation losses of all 4-layer and 6-layer models on Cora and Citeseer are shown in Figure~\ref{fig.dropvallosscmpaddtional}. The curves of both training and validation are dramatically pulled down after applying DropEdge, which also explains the benefit of DropEdge.
\begin{table*}[t!]
\centering
\caption{Test Accuracy (\%) comparison with SOTAs. The number in parentheses denotes the network
depth.}
\vskip -0.1 in
\renewcommand\arraystretch{0.9}
\setlength{\tabcolsep}{21pt}
\begin{tabular}{c|c|c|c|c}
\toprule
& Cora & Citeseer & Pubmed & Reddit \\
\midrule
KLED~\cite{Fouss06anexperimental} & $82.3$ & - & $82.3$ & - \\
DCNN~\cite{atwood2016diffusion} & $86.8$ & - & $89.8$ & - \\
GAT~\cite{DBLP:journals/corr/abs-1710-10903} & $87.4$ & $78.6$ & $89.7$ &- \\
FastGCN~\cite{chen2018fastgcn} & $85.0$ & $77.6$ & $88.0$ & $93.7$ \\
AS-GCN~\cite{Huang2018} & $87.4$ & $79.7$ & $90.6$ & $96.3$ \\
GraphSAGE~\cite{hamilton2017inductive} & $82.2$ & $71.4$ & $87.1$ & $94.3$ \\
DAGNN~\cite{liu2020towards} & $88.4$ & $78.6$ & $86.4$ & OOM \\
\hline
\hline
GCN+DropEdge & $86.6(4)$ & $79.0(4)$ & $91.0(8)$ & $96.57(8)$ \\
GCN-b+DropEdge & $87.6(4)$ & $79.2(4)$ & $91.3(4)$ & $96.71(4)$ \\
ResGCN+DropEdge & $87.0(4)$ & $79.4(16)$ & $91.1(32)$ & $96.48(16)$\\
APPNP+DropEdge & $\textbf{89.1(64)}$ & $\textbf{81.3(64)}$ & $90.7(4)$ & $95.91(8)$\\
JKNet+DropEdge & $88.0(16)$ & $80.2(8)$ & $\textbf{91.6(64)}$ & $\textbf{97.02(8)}$\\
DAGNN+DropEdge &$88.7(8)$ &$79.5(8)$ & $87.1(16)$ & OOM \\
\bottomrule
\end{tabular}%
\label{tab:full_sota}%
\end{table*}
\textbf{Comparison with SOTAs.}
We select the best performance for each backbone with DropEdge and contrast it with existing state-of-the-art (SOTA) methods, including KLED~\cite{Fouss06anexperimental}, DCNN~\cite{atwood2016diffusion}, FastGCN~\cite{chen2018fastgcn}, AS-GCN~\cite{Huang2018}, GraphSAGE~\cite{Hamilton2017} and DAGNN~\cite{liu2020towards} in Tab.~\ref{tab:full_sota}; for the SOTA methods, we reuse the results reported in~\cite{Huang2018}.
We have these findings in Tab.~\ref{tab:full_sota}:
\begin{itemize}
\item Clearly, our DropEdge obtains significant improvements over the SOTAs; in particular, on Cora and Citeseer, the best accuracies by APPNP+DropEdge are 89.10\% and 81.30\%, clearly better than the previous best results (87.44\% and 79.7\%), and around 1\% higher than those of the no-drop APPNP. Such an improvement is remarkable given how challenging these benchmarks are.
\item For most models with DropEdge, the best accuracy is obtained at depths beyond 4, which again verifies the role of DropEdge in enabling deep networks.
\item As mentioned in \textsection~\ref{sec.discussions}, FastGCN, AS-GCN and GraphSAGE are considered as the DropNode extensions of GCNs. The DropEdge based approaches outperform the DropNode based variants as shown in Tab.~\ref{tab:full_sota}, which confirms the effectiveness of DropEdge.
\item DAGNN is a recently proposed model that is able to alleviate over-smoothing. Table~\ref{tab:full_sota} also reports the performance of DAGNN, where we have conducted DAGNN with varying depth and collect the best case on each dataset. It is observed that adding DropEdge on DAGNN can further boost the performance, implying the generality of our proposed method.
\end{itemize}
\begin{figure*}
\centering
\includegraphics [width=0.24\textwidth]{figures/dropedgevsnodropedge/edgedropcompare_cora_4.pdf}
\includegraphics [width=0.24\textwidth]{figures/dropedgevsnodropedge/edgedropcompare_cora_6.pdf}
\includegraphics [width=0.24\textwidth]{figures/dropedgevsnodropedge/edgedropcompare_citeseer_4.pdf}
\includegraphics [width=0.24\textwidth]{figures/dropedgevsnodropedge/edgedropcompare_citeseer_6.pdf}
\caption{The validation loss on different backbones with and without DropEdge. GCN-$n$ denotes a GCN of depth $n$; similar notation is used for the other backbones.}
\label{fig.dropvallosscmpaddtional}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfloat[]{
\includegraphics[width=0.24\textwidth]{dropvsdrop_cora_4.pdf}}
\subfloat[]{
\includegraphics [width=0.24\textwidth]{dropvsdrop_citeseer_4.pdf}}
\subfloat[]{
\includegraphics[width=0.24\textwidth]{figures/eachlayersampling/each_layer_sampling_compare_Cora_4_0_2.pdf}}
\subfloat[]{
\includegraphics [width=0.24\textwidth]{figures/eachlayersampling/each_layer_sampling_compare_Citeseer_4_0_8.pdf}}
\caption{(a-b) The compatibility of DropEdge with Dropout; (c-d) The performance of Layer-wise DropEdge.}
\label{fig:more-ablation}
\end{figure*}
\subsection{Ablation Studies}
\label{sec:exp-dropedge}
This section presents several ablation studies to evaluate the importance of each component of DropEdge. We employ GCN as the backbone. The hidden dimension, learning rate, and weight decay are fixed to 256, 0.005, and 0.0005, respectively. The random seed is fixed.
Unless otherwise mentioned, we do not utilize the ``withloop'' and ``withbn'' operation (see their definitions in Tab.~7 in Appendix~D).
\subsubsection{On Compatibility with Dropout}
\textsection~\ref{sec.discussions} has discussed the difference between DropEdge and Dropout. Here we conduct an ablation study on GCN-4, with the validation losses shown in Figure~\ref{fig:more-ablation} (a-b). While both Dropout and DropEdge are able to facilitate the training of GCN, the improvement by DropEdge is more significant, and adopting them concurrently decreases the loss further, indicating the compatibility of DropEdge with Dropout.
\subsubsection{On Layer-wise DropEdge}
\textsection~\ref{sec:methodology} has described the Layer-Wise (LW) extension of DropEdge. Here, we provide an experimental evaluation of its effect. As observed from Figure~\ref{fig:more-ablation} (c-d), LW DropEdge achieves a lower training loss than the original version, whereas the validation losses of the two models are comparable. This implies that LW DropEdge can facilitate training further than the original DropEdge. Nevertheless, we prefer the original DropEdge to the LW variant, both to avoid the risk of over-fitting and to reduce computational complexity, since LW DropEdge requires sampling at each layer and thus takes more time.
\begin{table}[htbp]
\centering
\caption{The performance comparison of ER-Graph, GLASSO (GGL) and DropEdge.}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{cccccc}
\hline
\multicolumn{1}{l}{Dataset} & Backbone & \multicolumn{1}{l}{Original} & \multicolumn{1}{l}{ER-Graph} & \multicolumn{1}{l}{GLASSO} & \multicolumn{1}{l}{DropEdge} \bigstrut\\
\hline
{Cora} & GCN-b & 0.831 & 0.319 & 0.432 & 0.849 \bigstrut\\
\hline
{Citeseer} & GCN-b & 0.715 & 0.233 & 0.220 & 0.763 \bigstrut\\
\hline
{Pubmed} & GCN-b & 0.850 & 0.407 & -- & 0.861 \bigstrut\\
\hline
\end{tabular}%
\label{tab:ergraph}%
\end{table}%
\begin{table}[htbp]
\centering
\caption{The running time of GLASSO (GGL) and DropEdge.}
\begin{tabular}{ccrrr}
\hline
\multicolumn{1}{l}{Dataset} & Backbone & \multicolumn{1}{l}{Original} & \multicolumn{1}{l}{GLASSO} & \multicolumn{1}{l}{DropEdge} \bigstrut\\
\hline
{Cora} & GCN-b & 7.77s & 95.27s & 8.73s \bigstrut\\
\hline
{Citeseer} & GCN-b & 13.49s & 328.63s & 15.71s \bigstrut\\
\hline
{Pubmed} & GCN-b & 33.26s & >40h & 35.98s \bigstrut\\
\hline
\end{tabular}%
\label{tab:glassotime}%
\end{table}%
{
\subsubsection{On Comparisons with ER-Graph and GLASSO}
\label{sec:glasso}
We have conducted an ablation study using random graphs created by the Erdos-Renyi model~\cite{erdos1960evolution}, with the number of edges equal to that of the graph after DropEdge. We train the 6-layer GCN-b model on these random graphs following the same setting as DropEdge for fair comparison, and summarize the averaged results over 20 runs in Table~\ref{tab:ergraph}. All results are much worse than DropEdge, probably because the graphs generated by the ER model are inconsistent with the edge statistics of the original graph. On the contrary, the performance of DropEdge is always promising.
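The ER baseline described here can be built by drawing the same number of node pairs uniformly at random, which matches the edge count of the DropEdge-sampled graph while discarding the original connectivity. A minimal sketch (names are illustrative; the actual experiments of course use the full citation graphs):

```python
import itertools
import random

def er_graph_matching_edges(num_nodes, num_edges, seed=0):
    """Erdos-Renyi G(n, m): sample `num_edges` distinct undirected node
    pairs uniformly at random, ignoring the original adjacency."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(num_nodes), 2))
    return rng.sample(all_pairs, num_edges)

er = er_graph_matching_edges(num_nodes=5, num_edges=3)
```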
In addition, we provide experimental comparisons between DropEdge and GLASSO~\cite{friedman2008sparse} in Table~\ref{tab:ergraph}. For GLASSO, we compute the empirical covariance matrix based on node feature vectors, and leverage the source code\footnote{\url{https://github.com/STAC-USC/Graph_Learning}} built by [2] to perform constrained GLASSO. In particular, we select the GGL setting (Problem 1 in [2]) to enforce the off-diagonal elements of the learned graph Laplacian to be non-positive, so as to output a graph with non-negative edge weights, as expected. For fair comparison, we choose appropriate hyper-parameters (my\_eps\_outer = 1.00E-05; alpha=4.00E-05 on Cora, alpha=2.00E-06 on Citeseer) to ensure that the number of edges produced by GGL equals that yielded by DropEdge. We then run GCN on the learned sparse graphs. From Table~\ref{tab:ergraph}, GGL performs much worse than both DropEdge and the full-graph version on the Cora and Citeseer datasets. GGL is unable to maintain the full information of the edge connections, since it lacks the information in the original adjacency matrix, while DropEdge alleviates over-smoothing by edge sampling and still preserves the full information during the entire training phase. Another drawback of GLASSO is that its sparsification optimization is time-consuming. Table~\ref{tab:glassotime} displays the training time over 400 epochs. GLASSO incurs a large amount of extra running time, making it impractical for large-scale datasets; we cannot obtain a reasonable result by running GGL on the Pubmed dataset ($\sim$20,000 nodes) even after 40 hours of computation.
}
\begin{table}[htbp]
\centering
\caption{ The performance comparison of uniform dropping and feature weighted dropping.}
\begin{tabular}{cccc}
\hline
Dataset & Backbone & \multicolumn{1}{l}{Feature Weighted} & \multicolumn{1}{l}{Uniform} \bigstrut\\
\hline
{Cora} & GCN-b & 0.856 & 0.876 \bigstrut\\
\hline
{Citeseer} & GCN-b & 0.797 & 0.792 \bigstrut\\
\hline
{Pubmed} & GCN-b & 0.888 & 0.913 \bigstrut\\
\hline
\end{tabular}%
\label{tab:feaweighted}%
\end{table}%
{
\subsubsection{On Comparisons with the Feature-based Sampling}
It is possible to take pairwise feature correlations/similarities into account in DropEdge. However, this would make DropEdge biased, focusing more on node feature correlations and less on the authentic edge connections provided in the adjacency matrix ${\bm{A}}$. As explained before, we design the sampling in DropEdge to be unbiased so as to keep the original distribution of edge connections. A biased version would pollute the information in ${\bm{A}}$ and could harm performance. To show this, we implemented an experiment on the 4-layer GCN-b, first computing the similarity score between node features via the RBF kernel and then dropping edges with a probability negatively related to the similarity score. All other settings are the same as in our previous experiments in Section 5. As observed from Table~\ref{tab:feaweighted}, the biased version performs slightly better than our unbiased method only on Citeseer, and is worse in all other cases. This observation supports the superiority of unbiased sampling in general.
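The biased variant can be sketched as follows; the text states only that the drop probability is negatively related to the RBF similarity, so the particular mapping $p_{\mathrm{drop}} = p\,(1-s)$, the bandwidth, and all names below are our assumptions:

```python
import math
import random

def biased_drop(edges, feats, p, sigma=1.0, seed=0):
    """Drop edge (u, v) with probability p * (1 - rbf(u, v)), so edges
    between similar nodes survive more often.  The mapping from the RBF
    score to the drop probability is an assumption, not from the paper."""
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        d2 = sum((a - b) ** 2 for a, b in zip(feats[u], feats[v]))
        s = math.exp(-d2 / (2.0 * sigma ** 2))   # RBF similarity in (0, 1]
        if rng.random() >= p * (1.0 - s):        # similar pairs are kept
            kept.append((u, v))
    return kept
```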
}
\section{Conclusion}
{
We have analyzed the universal process of over-smoothing for four popular GCN models: generic GCN, GCN with bias, ResGCN, and APPNP. Building on these analyses, we propose DropEdge, a novel and efficient technique to facilitate the development of deep GCNs. Extensive experiments on Cora, Citeseer, Pubmed and Reddit verify that DropEdge generally and consistently improves the performance of current popular GCNs.
Note that the experimental improvement from shallow 2-layer models to deep models (with DropEdge) is not large, but it is still meaningful. For example, on Pubmed, the 8-layer GCN with DropEdge (91.0\%) outperforms the 2-layer GCN without DropEdge (89.5\%) by 1.5\%, which is remarkable for this benchmark. More importantly, beyond the experimental improvement, we have theoretically explained why deep GCNs fail and how over-smoothing arises for general GCN models.
Even though we have not yet achieved benefits from deep GCNs comparable to those in other domains, such as CNNs on images, both the theoretical analyses and the experimental evaluations in this paper are valuable for facilitating a broader class of future work in graph learning.
}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
// Copyright (c) 2012 Ecma International. All rights reserved.
// This code is governed by the BSD license found in the LICENSE file.
/*---
es5id: 15.2.3.6-4-550
description: >
ES5 Attributes - property ([[Get]] is a Function, [[Set]] is a
Function, [[Enumerable]] is false, [[Configurable]] is true) is
non-enumerable
---*/
var obj = {};
var getFunc = function () {
return 1001;
};
var verifySetFunc = "data";
var setFunc = function (value) {
verifySetFunc = value;
};
Object.defineProperty(obj, "prop", {
get: getFunc,
set: setFunc,
enumerable: false,
configurable: true
});
var propertyDefineCorrect = obj.hasOwnProperty("prop");
var desc = Object.getOwnPropertyDescriptor(obj, "prop");
for (var p in obj) {
assert.notSameValue(p, "prop", 'p');
}
assert(propertyDefineCorrect, 'propertyDefineCorrect !== true');
assert.sameValue(desc.enumerable, false, 'desc.enumerable');
Hurriyat leaders felicitate govt, people on Pakistan Day
By Country News Last updated Mar 23, 2020
Srinagar, March 23 (KMS): In occupied Kashmir, Hurriyat leaders and organizations have felicitated people and the Government of Pakistan on the Pakistan Day, today, and have prayed for a strong, prosperous, stable and safe Pakistan.
The leaders in their statements and messages termed Pakistan as centre of hope for the oppressed Kashmiri people as well as Muslim Ummah. They lauded the unparalleled sacrifices rendered by the Muslims for the creation of Pakistan and described the country as Almighty Allah's blessing for the Muslims.
Hurriyat leaders Bilal Ahmed Sidiqui, Shabbir Ahmad Dar, Umar Adil Dar, Farooq Ahmad Tawheedi, Khwaja Firdous and women leaders Zamruda Habib and Shameem Shawl in their statements said that Pakistan as an ideological state had always supported not only the just cause of the people of Jammu and Kashmir but all other oppressed and suppressed people all over the world. They said Indian authorities in Kashmir have crossed all limits and people and leadership are caged, hence it is the duty of the people of Pakistan to represent our aspirations at all global forums.
Zamruda Habib and Shamim Shawl said Kashmiris have seen the dead bodies of their young children. "We have experienced shortages of food, hunger, and hospitals without medical facilities and without doctors, but it was all a man-made situation. Now we are seeing people in the wider world in a grave situation." They added that the people of occupied Kashmir had been facing a lockdown since August 5, 2019, and had learnt to live without food and medicines. "Now the world is facing a pandemic and we, the people of Jammu and Kashmir, pray for their safety."
The patron of Tehreek-e-Wahdat-e-Islami, Syed Hussain, said 23rd March holds great significance for the Muslims of the Subcontinent as it was on this day in 1940 when Quaid-e-Azam Mohammad Ali Jinnah and his colleagues firmly set on course to create an independent state for the Muslims of the region. He said that it was the time when such a goal was deemed impossible as Hindutva forces were up in arms against this very idea but the determination of the then leadership and unparalleled sacrifices offered by the people of that time made it possible.
Hurriyat AJK leaders Muhammad Farooq Rehmani, Abdul Majeed Mir, Abdul Majeed Malik, Aijaz Rehmani, Imtiaz Wani, Khawaja Naeem-ul-Hassan and Jammu and Kashmir Islami Tanzeem-e-Azadi in their separate statements also greeted the people and the Government of Pakistan on the day. They thanked Pakistan for highlighting the Kashmir dispute and exposing Indian atrocities in occupied Jammu and Kashmir at all international forums.
Executive Member of Organisation of Kashmir Coalition Professor Nazir Ahmed Shawl and Barrister Abdul Majeed Tramboo in their messages said, the historic resolution that was adopted on this day in Lahore became a reality and a new geography was carved out under the dynamic leadership of Quaid-e-Azam Mohammed Ali Jinnah. They said a stable and strong Pakistan can become a determining factor in global politics. They deplored that the people of Jammu and Kashmir continue to remain deprived of their right of self determination.
\section{Introduction}
For many decades, wave scattering in random media has been one of the most important problems of wave theory.
From the viewpoint of practical applications, it is usually thought of as an adverse process that degrades the signal-to-noise ratio.
In the context of long-range sound propagation in the ocean, volume scattering by random sound-speed inhomogeneities
severely limits the capabilities of hydroacoustical tomography \cite{TT96}.
Such inhomogeneities are commonly caused by oceanic internal waves. Internal-wave-induced sound-speed variations are
usually small, implying only forward scattering,
but their accumulated long-range effect can be very substantial, as confirmed by experiments \cite{SLICE89,AET,AST}.
The phenomenon of ray chaos is mathematically equivalent to dynamical chaos in classical physics.
Following this analogy, wavefield manifestations of ray chaos, commonly referred to as wave chaos, can be considered from the viewpont
of a more general paradigm of quantum chaos \cite{Stockman}.
This circumstance enables usage of well-developed methods of quantum chaos to the problem of long-range sound propagation.
In particular, we can mention phase space analysis using the Wigner function or its smoothed versions \cite{Sundaram_Zas,Okomelkova,Viro00,Viro05,Ac17},
Lyapunov analysis \cite{WT01}, entropy calculation \cite{MorCol}, periodic orbit theory \cite{Viro-scar,PRE76,SFU10}, theory of nonlinear resonance \cite{Viro01,Chaos,BV04},
and action-angle formalism \cite{Udo08}, to name a few.
One of the most novel approaches is based on the unitary propagator governing wave evolution
within the narrow-angle approximation \cite{UFN,Levelspacing,Tomc11,PRE87,Hege13}.
In particular, Hegewisch and Tomsovic have shown that such a propagator can be constructed
using random matrix theory (RMT), avoiding direct solution of the parabolic wave
equation \cite{Tomc11,Hege13}.
Random matrices are utilized to describe mode coupling induced by scattering on the random inhomogeneity.
Hereafter we refer to this method as the Hegewisch-Tomsovic method.
Validity of the Hegewisch-Tomsovic approach was examined in \cite{JCA,UZMU-RMT}. It was shown that the random matrix modelling ensures
sufficient accuracy for signal frequencies of 50-100 Hz that are relevant for long-range propagation.
In the Hegewisch-Tomsovic method, solution of a wave equation is replaced
by multiplication of matrices. The matrix size is determined by the number of propagating modes; therefore,
this method is extremely fast at low frequencies, provided the background sound-speed profile doesn't depend on range.
However, the latter condition is basically not satisfied in realistic oceanic environments.
The ocean almost always has large-scale horizontal inhomogeneity due to temperature and bathymetric variations, the presence of eddies and currents, Rossby waves, etc.
The corresponding variations of the sound-speed profile are commonly very significant and cannot be treated as a small perturbation.
Thus, applicability of the Hegewisch-Tomsovic method to natural experiments requires its generalization
to waveguides with strong but adiabatic longitudinal variability.
Unfortunately, an attempt to incorporate large-scale inhomogeneity directly into the original scheme of the method results in substantial growth
of auxiliary computations.
In this way, the Hegewisch-Tomsovic method loses its important advantage, namely its speed.
Therefore one needs an optimized version of the method incorporating the effect of large-scale longitudinal variability.
The present work offers a simple and robust way to resolve this problem.
The paper is organized as follows. The next section contains a brief description of the Hegewisch-Tomsovic method in the absence of adiabatic inhomogeneities.
Section \ref{Model} is devoted to the acoustic model used for numerical simulation.
The modification of the Hegewisch-Tomsovic method for waveguides with adiabatic inhomogeneity is presented in Section \ref{Adiabatic}.
In Section \ref{Numer}, we examine the validity of the modified Hegewisch-Tomsovic method by means of numerical simulation.
The Discussion section outlines some prospects for future research in this field.
The Conclusions present an account of the main results.
\section{Hegewisch-Tomsovic method in the absence of large-scale sound-speed inhomogeneity}
\label{Absence}
Long-range wave propagation can be fairly modeled by means of the standard parabolic equation that takes into account only forward propagation.
Assuming cylindrical symmetry and neglecting azimuthal coupling, we can reduce the original three-dimensional problem to the two-dimensional one.
Then the parabolic equation can be written in the following way:
\begin{equation}
\frac{i}{k_0}\frac{\partial\Psi}{\partial r}=
-\frac{1}{2k_0^2}\frac{\partial^2\Psi}{\partial
z^2}+\frac{n^2-1}{2}\Psi,
\label{parabolic}
\end{equation}
where
\begin{equation}
k_0=\frac{2\pi f}{c_0},
\label{k0}
\end{equation}
$z$ is ocean depth, $r$ is range,
$f$ is signal frequency, $c_0$ is a reference sound speed, and $n = n(r,z) = c_0/c(r,z)$ is refractive index.
In the small-angle approximation we have
\begin{equation}
\frac{n^2(r,z)-1}{2} \simeq U(z) + V_{\text{lsc}}(r,z) + V_{\text{iw}}(r,z),
\end{equation}
where
\begin{equation}
U(z)=\frac{\Delta c(z)}{c_0},\quad V_{\text{lsc}}(r,z) = \frac{\delta c_{\text{lsc}}(r,z)}{c_0},\quad
V_{\text{iw}}(r,\,z)=\frac{\delta c_{\text{iw}}(r,\,z)}{c_0}.
\label{pot}
\end{equation}
Here $\Delta c(z)$ is linked to the range-independent unperturbed sound-speed profile as $\Delta c(z) = c_{\text{unpert}}(z)-c_0$,
$\delta c_{\text{lsc}}(r,z)$ describes large-scale sound-speed inhomogeneity,
and $\delta c_{\text{iw}}(r,z)$ is a random sound-speed perturbation caused by internal waves.
Acoustic wavefield can be represented as sum over normal modes of the unperturbed waveguide
\begin{equation}
\Psi(r,z) = \sum\limits_{m} a_m(r)\psi_m(z).
\end{equation}
The normal modes and the corresponding eigenvalues satisfy the Sturm-Liouville problem
\begin{equation}
-\frac{1}{2k_0^2}\frac{\partial^2\psi_m(z)}{\partial
z^2}+U(z)\psi_m(z)=E_m\psi_m(z).
\label{StL}
\end{equation}
Solution of the parabolic equation (\ref{parabolic}) at the range $r=r_{\mathrm{f}}$
can be formally written in terms of an unitary propagator $\hat G$ acting as
\begin{equation}
\Psi(r_{\mathrm{f}},z) = \hat G(r_0,r_{\mathrm{f}})\Psi(r_0,z).
\label{evolution}
\end{equation}
Using the basis of normal modes, we can express the propagator $\hat G$ as a matrix $\mathbf{G}$ with elements
\begin{equation}
G_{mn}(0,r_{\mathrm{f}})=\int \psi_m^*\hat G(0,r_{\mathrm{f}})\psi_n\,dz,
\label{Gmn}
\end{equation}
where $\hat G(0,r_{\mathrm{f}})\psi_n$ is a solution of the parabolic equation at the range $r=r_{\mathrm{f}}$
for the initial condition $\Psi(r=0)=\psi_n$.
As long as the parabolic equation involves a random perturbation $V_{\text{iw}}(r,z)$,
the propagator matrix $\mathbf{G}$ is random as well.
For the sake of simplicity, we use idealized perfectly reflecting boundary conditions of the form
\begin{equation}
\left.\Psi\right\vert_{z=0} = 0,\quad
\left.\frac{d\Psi}{dz}\right\vert_{z=h}=0,
\label{BCs}
\end{equation}
where $h$ is depth of the ocean bottom. It is assumed that $h$ doesn't change with range, i.~e. the bottom is flat.
Using (\ref{BCs}), we disregard bottom attenuation explicitly.
However, sound absorption in the bottom is implicitly taken into account by means of a proper truncation of the modal spectrum.
Specifically, we drop all the modes which don't satisfy the condition
\begin{equation}
E_m \le U(z=h),
\end{equation}
i.~e. only modes propagating without contact with the bottom are taken into account.
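For illustration, the Sturm-Liouville problem (\ref{StL}) with the boundary conditions (\ref{BCs}) and the modal truncation above can be solved by a standard finite-difference discretization. A minimal sketch (the grid size and the one-sided Neumann closure at $z=h$ are our choices, not prescribed by the text):

```python
import numpy as np

def normal_modes(U, h, k0, n_z=2000):
    """Solve -(1/(2 k0^2)) psi'' + U(z) psi = E psi with psi(0) = 0 and
    psi'(h) = 0 on a uniform grid, keeping only the trapped modes with
    E_m <= U(h) (no contact with the bottom)."""
    dz = h / n_z
    z = np.arange(1, n_z + 1) * dz               # z = 0 row removed (Dirichlet)
    main = 1.0 / (k0 * dz) ** 2 + U(z)
    off = -0.5 / (k0 * dz) ** 2 * np.ones(n_z - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    H[-1, -1] = 0.5 / (k0 * dz) ** 2 + U(z[-1])  # psi'(h) = 0 closure
    E, psi = np.linalg.eigh(H)                   # ascending eigenvalues
    keep = E <= U(h)
    return z, E[keep], psi[:, keep] / np.sqrt(dz)   # dz * sum psi^2 = 1
```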
In the Hegewisch-Tomsovic method \cite{Tomc11,Hege13}, the propagator $\mathbf{G}(0,r_{\mathrm{f}})$ is expressed
as a product of propagators for intermediate segments of a waveguide:
\begin{equation}
\mathbf{G}(0,r_{\mathrm{f}}=Kr_{\text{b}})=\prod\limits_{k=0}^{K-1}\mathbf{G}_{K-k}\left((K-k-1)r_{\text{b}},(K-k)r_{\text{b}}\right).
\label{BB}
\end{equation}
If the step $r_{\text{b}}$ is sufficiently large, segment propagators $\mathbf{G}_k$ with different $k$ are statistically independent of each other.
Furthermore, as the background sound-speed profile doesn't depend on range, one can assume that statistical properties of $\mathbf{G}$
are stationary along the waveguide. It yields $\mathbf{G}((k-1)r_{\text{b}},kr_{\text{b}}) = \mathbf{G}(r_{\text{b}})$.
A propagator for
each individual segment can be calculated within the first-order perturbation theory, with the Cayley transform imposed to ensure unitarity.
The resulting formula is
\begin{equation}
\mathbf{G}(r_{\text{b}}) = \mathbf{\Lambda}[\mathbf{I} + i\mathbf{A}(r_{\text{b}})/2]^{-1}[\mathbf{I} - i\mathbf{A}(r_{\text{b}})/2].
\label{Cayley}
\end{equation}
Here $\mathbf{I}$ is the identity matrix, and $\mathbf{\Lambda}$ is a diagonal matrix with elements
\begin{equation}
\Lambda_{mn} = \delta_{mn}e^{-ik_0E_mr_{\text{b}}},
\end{equation}
%
where $\delta_{mn}$ is the Kronecker symbol.
$\mathbf{A}$ is an inhomogeneity-induced perturbation matrix whose elements are calculated as
\begin{equation}
A_{mn}=k_0 \int\limits_{r'=0}^{r_{\text{b}}} e^{ik_0(E_m-E_n)r'} V_{mn}(r')\,dr',
\label{pert}
\end{equation}
\begin{equation}
V_{mn}(r) = \int \psi_m^*(z)V(r,z)\psi_n(z)\,dz.
\label{Vmn}
\end{equation}
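Note that the Cayley transform (\ref{Cayley}) yields an exactly unitary propagator whenever $\mathbf{A}$ is Hermitian, which holds for the matrix elements (\ref{pert}) with a real symmetric perturbation $V_{mn}$. A quick numerical check of this property (illustrative sketch):

```python
import numpy as np

def cayley_propagator(E, A, k0, rb):
    """Segment propagator G = Lambda (I + iA/2)^{-1} (I - iA/2).
    Lambda is a diagonal phase; for Hermitian A the Cayley factor is
    unitary, so G is unitary to machine precision."""
    Lam = np.diag(np.exp(-1j * k0 * np.asarray(E) * rb))
    I = np.eye(len(E))
    return Lam @ np.linalg.solve(I + 0.5j * A, I - 0.5j * A)

rng = np.random.default_rng(0)
M = 6
X = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
A = (X + X.conj().T) / 2                 # Hermitian perturbation matrix
G = cayley_propagator(rng.normal(size=M), A, k0=1.0, rb=1.0)
```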
The key idea of the random matrix approach is to treat matrix elements of the perturbation $\mathbf{A}$ as
random quantities
\begin{equation}
A_{mn}(r_{\text{b}},k_0) = \sigma_{mn}(r_{\text{b}},k_0)z_{mn}(k_0),
\label{Amn}
\end{equation}
where $\sigma_{mn}$ is calculated from spectral properties of the random inhomogeneity,
and $z_{mn}$ is a complex-valued Gaussian random variable with the unit variance.
It is important to note that variances $\sigma_{mn}$ can be found analytically (the corresponding formula is given in \cite{Hege13}).
The propagator step $r_{\text{b}}$ should be large enough to ensure statistical independence of propagators for neighboring segments.
The upper bound for $r_{\text{b}}$ is determined by the condition $|A_{mn}|\ll 1$, otherwise the first-order perturbation theory doesn't apply.
Mode amplitudes of a wavefield can be combined into the vector $\vec{a}$, $\vec{a} \equiv (a_1, a_2, \text{...}, a_M)^T$.
In accordance with (\ref{evolution}), range evolution of this vector is governed by the equation
\begin{equation}
\vec{a}(r) = \mathbf{G}(r)\vec{a}(0).
\label{amod}
\end{equation}
It means that the wavefield can be calculated by means of sequential multiplication of the vector of mode amplitudes by propagator matrices.
This algorithm is extremely fast if the number of propagating modes is not too large.
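Putting (\ref{Cayley}), (\ref{Amn}) and (\ref{amod}) together, one realization of the field is obtained by drawing an independent Hermitian perturbation matrix for each segment and multiplying the mode-amplitude vector by the segment propagators. A schematic implementation (the constant matrix of variances is a toy stand-in for the analytical $\sigma_{mn}$):

```python
import numpy as np

def sample_perturbation(sigma, rng):
    """A_mn = sigma_mn z_mn with z_mn complex Gaussian of unit variance,
    Hermitized afterwards so that the Cayley transform stays unitary."""
    M = sigma.shape[0]
    z = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
    A = sigma * z
    return (A + A.conj().T) / 2

def march(a0, E, sigma, k0, rb, n_segments, rng):
    """a(K rb) = G_K ... G_1 a(0) with independent segment propagators."""
    I = np.eye(len(E))
    Lam = np.diag(np.exp(-1j * k0 * np.asarray(E) * rb))
    a = np.asarray(a0, dtype=complex)
    for _ in range(n_segments):
        A = sample_perturbation(sigma, rng)
        a = Lam @ np.linalg.solve(I + 0.5j * A, I - 0.5j * A) @ a
    return a
```

Unitarity of every factor guarantees that the acoustic energy $\sum_m|a_m|^2$ is conserved along the march.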
\section{Model of a waveguide}
\label{Model}
In the present work we consider an acoustic waveguide in the deep ocean,
with an unperturbed sound-speed profile described by the biexponential model \cite{Chaos}
\begin{equation}
c_{\text{unpert}}(z)=c_0\left[1+\frac{b^2}{2}\left(
e^{-az}-\eta
\right)^2\right],
\label{BEP-prof}
\end{equation}
where $c_0=1490$~m/s,
$\eta =0.6065$, $a=0.5$~km$^{-1}$, $b=0.557$.
The biexponential profile closely resembles the celebrated canonical Munk model.
\begin{figure}[!ht]
\centerline{
\includegraphics[width=.73\textwidth]{fig1.eps}}
\caption{Biexponential sound-speed profile.
}
\label{Fig-BEP}
\end{figure}
We consider a large-scale inhomogeneity induced by a cold synoptic eddy.
The corresponding sound-speed perturbation is taken in the form \cite{Viro-Wamot,Viro-Akj08,Viro-Akj10,Radiophys}
\begin{equation}
\delta c_\text{lsc}=c_\mathrm{e}\exp\left(
-\frac{(r-r_{\mathrm{e}})^2}{2\Delta r^2}-\frac{(z-z_{\mathrm{e}})^2}{2\Delta z(r)^2}
\right),
\label{eddy}
\end{equation}
where
\begin{equation}
\Delta z(r)=\Delta z_{\text{c}}-\Delta z_{\upsilon}
\exp\left(-\frac{(r-r_{\upsilon})^2}{2\Delta r_{\upsilon}^2}\right).
\label{deltaz}
\end{equation}
The following parameter values are taken:
$r_\mathrm{e}=250$~km, $z_{\mathrm{e}}=1$~km,
$\Delta r=120$~km, $\Delta z_{\mathrm{c}}=0.8$~km, $\Delta z_{\upsilon}=0.4$~km,
$r_{\upsilon}=270$~km, $\Delta r_{\upsilon}=50$~km.
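For orientation, the background profile (\ref{BEP-prof}) and the eddy perturbation (\ref{eddy})-(\ref{deltaz}) are straightforward to evaluate. In the sketch below the eddy amplitude $c_{\mathrm{e}}$ is not specified in the text, so the (negative, cold-eddy) value used here is only a placeholder:

```python
import numpy as np

def c_unperturbed(z, c0=1490.0, eta=0.6065, a=0.5, b=0.557):
    """Biexponential profile (z in km, speed in m/s); the sound-channel
    axis sits where exp(-a z) = eta, i.e. near z = 1 km."""
    return c0 * (1.0 + 0.5 * b ** 2 * (np.exp(-a * z) - eta) ** 2)

def dc_eddy(r, z, ce=-10.0):
    """Cold-eddy sound-speed perturbation (m/s); all parameters except
    the placeholder amplitude `ce` follow the values quoted in the text."""
    re, ze, dr = 250.0, 1.0, 120.0
    dzc, dzv, rv, drv = 0.8, 0.4, 270.0, 50.0
    dz = dzc - dzv * np.exp(-((r - rv) ** 2) / (2 * drv ** 2))
    return ce * np.exp(-((r - re) ** 2) / (2 * dr ** 2)
                       - ((z - ze) ** 2) / (2 * dz ** 2))
```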
Sound-speed perturbation caused by internal waves is expressed as
\begin{equation}
\delta c_{\text{iw}}(r,z) = c_0V_0\sum\limits_{j=1}^{j_{\max}} F_j(z) Y_{j}(r).
\label{Vzr0}
\end{equation}
where $j_{\max}=50$,
\begin{equation}
F_j(z) = \sqrt{\frac{1}{j^2+j_*^2}} e^{-3z/2B}\sin(j\pi\xi(z)),
\label{Fj}
\end{equation}
$\xi(z) = e^{-z/B}-e^{-h/B}$, $B$ is the thermocline depth,
\begin{equation}
Y_j(r) = \sum_l\sqrt{I_{j,l}}\cos(k_lr + \phi_{jl}),
\label{Yi}
\end{equation}
$\phi_{jl}$ are random phases,
\begin{displaymath}
V_0 = \frac{24.5}{g}\frac{2B}{\pi}N_0^2\sqrt{\frac{E\Delta k_l}{M}},
\end{displaymath}
$\Delta k_l$ is spacing between neighboring values of $k_l$.
This model was originally developed in \cite{ColBr98}.
Spectral weights $I_{j,l}$ are given by the formula
\begin{equation}
I(j,k_l) = \frac{k_j}{k_l^2+k_j^2} + \frac{1}{2}\frac{k_l^2}{(k_l^2+k_j^2)^{3/2}}
\text{ln}\frac{\sqrt{k_l^2+k_j^2}+k_j}{\sqrt{k_l^2+k_j^2}-k_j},
\label{GM}
\end{equation}
where vertical wavenumbers are determined as
\begin{equation}
k_j = \frac{\pi jf_{\text{i}}}{N_0B}.
\end{equation}
Formula (\ref{GM}) corresponds to the Garrett-Munk spectrum.
The following values of parameters are taken:
$N_0=2\pi/10$ min, $f_{\text{i}}=1$ cycle per day, the Garrett-Munk energy $E=6.3\times 10^{-5}$, the mode scaling number $M=(\pi j_*-1)/2j_*^2$,
and the principal mode number $j_*=3$.
We take 1000 values of
horizontal internal wave number $k_l$, which are equally spaced within the interval from $2\pi/100$ to $2\pi$ radians per km.
Generally, vertical modes of internal waves depend on the horizontal wavenumber. In this case, the ansatz (\ref{Vzr0}) can be obtained by expanding
a random field $\delta c_{\text{iw}}$ over empirical orthogonal functions \cite{Radiophys}.
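One realization of the internal-wave perturbation (\ref{Vzr0})-(\ref{GM}) can be synthesized directly. The sketch below omits the overall factor $c_0V_0$, and the unit conventions (km for depth and range, hours for frequencies) are our own:

```python
import numpy as np

def gm_weights(kl, kj):
    """Garrett-Munk spectral weight I(j, k_l) of Eq. (GM)."""
    s = np.sqrt(kl ** 2 + kj ** 2)
    return kj / s ** 2 + 0.5 * kl ** 2 / s ** 3 * np.log((s + kj) / (s - kj))

def synthesize_dc_iw(r, z, B=1.0, h=5.0, j_max=50, j_star=3,
                     n_k=1000, seed=0):
    """Random internal-wave field on a (depth, range) grid, up to c0*V0."""
    rng = np.random.default_rng(seed)
    N0 = 12.0 * np.pi                  # rad/hour (10-min buoyancy period)
    fi = np.pi / 12.0                  # rad/hour (1 cycle per day)
    kl = np.linspace(2 * np.pi / 100, 2 * np.pi, n_k)   # rad/km
    xi = np.exp(-z / B) - np.exp(-h / B)
    field = np.zeros((z.size, r.size))
    for j in range(1, j_max + 1):
        kj = np.pi * j * fi / (N0 * B)
        amp = np.sqrt(gm_weights(kl, kj))
        phi = rng.uniform(0.0, 2 * np.pi, n_k)           # random phases
        Yj = (amp[:, None] * np.cos(kl[:, None] * r + phi[:, None])).sum(0)
        Fj = (np.sqrt(1.0 / (j ** 2 + j_star ** 2))
              * np.exp(-1.5 * z / B) * np.sin(j * np.pi * xi))
        field += Fj[:, None] * Yj
    return field
```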
\section{The Hegewisch-Tomsovic method with adiabatic inhomogeneity imposed}
\label{Adiabatic}
Adiabatic variations of a waveguide can be taken into account in (\ref{Cayley}) by incorporating the range dependence of normal modes and their eigenvalues.
Under some assumptions this range dependence can be evaluated using perturbation theory \cite{LL3}, i.~e. without solving the Sturm-Liouville problem
too frequently.
However, even in this way, statistics of the integrals (\ref{pert}) can be found only numerically, using Monte-Carlo sampling.
This considerably increases the computational time needed to estimate the variances $\sigma_{mn}^2$.
The situation becomes even worse if one uses the Hegewisch-Tomsovic method for modelling of acoustic pulses,
when the variances $\sigma_{mn}^2$ have to be computed for every frequency component.
The problem can be partially resolved by optimizing the calculation of the perturbation $V_{\text{iw}}$.
According to (\ref{Vzr0}), the function $V_{\text{iw}}$ is composed of many vertical modes, and the amplitude of each vertical mode, $Y_j$,
is commonly modelled as a superposition of several hundred range harmonics. Calculation of $V_{\text{iw}}$ can be accelerated by representing
$Y_j$ as a Fourier series
\begin{equation}
Y_j(r) = \sum_{n=-N}^N y_n^j e^{in\omega_{\text{b}} r}, \quad
\omega_{\text{b}} = \frac{2\pi}{r_{\text{b}}}.
\end{equation}
with random amplitudes $y_n^j$. The variance of $y_n^j$ can be estimated analytically:
\begin{equation}
\sigma_y^2(j,n) = \frac{1}{4}\sum_l I_{j,l}\left[
\text{sinc}^2\left(\frac{k_l-n\omega_b}{2}r_b\right) + \text{sinc}^2\left(\frac{k_l+n\omega_b}{2}r_b\right)
\right].
\label{sgmy}
\end{equation}
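As an illustration, the variance formula (\ref{sgmy}) is straightforward to evaluate numerically. The following sketch (the spectral intensities $I_{j,l}$ and all parameter values are placeholders, not taken from the paper) checks it at a resonant harmonic $k_l = n\omega_b$:

```python
import numpy as np

def sinc(x):
    """Unnormalized sinc, sin(x)/x with sinc(0) = 1."""
    return np.sinc(np.asarray(x) / np.pi)

def sigma_y2(I_jl, k_l, n, omega_b, r_b):
    """Variance of the n-th Fourier amplitude y_n^j of the j-th internal-wave
    mode, following Eq. (\\ref{sgmy}); I_jl holds the spectral intensities
    I_{j,l} on the grid of horizontal wavenumbers k_l."""
    k_l = np.asarray(k_l, dtype=float)
    return 0.25 * np.sum(I_jl * (sinc(0.5 * (k_l - n * omega_b) * r_b) ** 2
                                 + sinc(0.5 * (k_l + n * omega_b) * r_b) ** 2))

# resonant harmonic k_l = n * omega_b with omega_b = 2*pi/r_b:
# the first sinc equals 1 and the second vanishes, so the variance is I/4
var = sigma_y2(np.array([1.0]), np.array([2 * np.pi / 5.0]), 1,
               2 * np.pi / 5.0, 5.0)   # -> 0.25
```

At exact resonance the first sinc equals one while the second vanishes (its argument is a multiple of $2\pi$), so the variance reduces to $I_{j,l}/4$.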
It turns out that the number of Fourier harmonics needed for a faithful representation of the amplitudes $Y_j(r)$
is about ten times smaller than the number of harmonics in the expansion (\ref{Yi}).
A much more substantial reduction of the computational cost is achieved by partitioning
the waveguide into short segments,
so that range variations of the sound speed
due to the adiabatic term $V_{\text{lsc}}$ are negligible within each individual segment.
We can eliminate them by averaging:
\begin{equation}
\bar U_k(z) = U(z) + \frac{1}{r_b}\int\limits_{(k-1)r_b}^{kr_b} V_{\text{lsc}}(r,z)\,dr.
\end{equation}
Then we can calculate the local modes $\psi^{(k)}_m$ and eigenvalues $E^{(k)}_m$ by solving the Sturm-Liouville problem (\ref{StL})
with the averaged sound-speed profile $\bar U_k(z)$.
The variances $(\sigma_{mn}^{(k)})^2$ corresponding to the $k$-th segment can now be evaluated analytically:
\begin{equation}
(\sigma_{mn}^{(k)})^2 = k_0^2r_b^2V_0^2\sum_j |F_{mn}^{jk}|^2\sum_{l=-L}^L\sigma_{y}^2(j,l)\text{sinc}^2\chi_{lmn}^{(k)},
\label{varmn}
\end{equation}
where
\begin{equation}
F_{mn}^{jk} = \int \psi_m^{(k)*}(z)F_j(z)\psi_n^{(k)}(z)\,dz,
\end{equation}
\begin{displaymath}
\chi_{lmn}^{(k)}\equiv\frac{(\omega_{mn}^{(k)}+l\omega_b)r_b}{2},\quad
\omega_{mn}^{(k)}\equiv k_0(E_m^{(k)} - E_n^{(k)}).
\end{displaymath}
Now we have to properly rewrite the formulae for the propagator construction from the preceding section:
\begin{equation}
A_{mn}^{(k)}(r_{\text{b}}) = \sigma_{mn}^{(k)}(r_{\text{b}},k_0)z_{mn}^{(k)},
\label{Amn2}
\end{equation}
\begin{equation}
\mathbf{G_k}(r_{\text{b}}) = \mathbf{\Lambda_k}[\mathbf{I} + i\mathbf{A_k}(r_{\text{b}})/2]^{-1}[\mathbf{I} - i\mathbf{A_k}(r_{\text{b}})/2],
\label{Cayley2}
\end{equation}
where $\mathbf{A_k}$ is a random matrix consisting of the elements $A_{mn}^{(k)}$, and $\mathbf{\Lambda_k}$ is a diagonal matrix with elements
\begin{equation}
\Lambda_{mn}^{(k)} = \delta_{mn}e^{-ik_0E_m^{(k)}r_{\text{b}}}.
\end{equation}
%
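As a minimal numerical sketch (the Hermitian Gaussian statistics assumed for $\mathbf{A_k}$ below are an illustrative simplification of (\ref{Amn2}), and all parameter values are arbitrary), one can verify that the Cayley construction (\ref{Cayley2}) yields an exactly unitary single-segment propagator:

```python
import numpy as np

rng = np.random.default_rng(0)

def cayley_propagator(E, sigma, k0=1.0, r_b=1.0):
    """Single-segment propagator G = Lam (I + iA/2)^{-1} (I - iA/2).

    A is drawn as a Hermitian Gaussian random matrix (an illustrative
    simplification), which guarantees that the Cayley transform is unitary."""
    M = len(E)
    Z = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    A = sigma * (Z + Z.conj().T) / 2.0                  # Hermitian random matrix
    Lam = np.diag(np.exp(-1j * k0 * np.asarray(E) * r_b))
    I = np.eye(M)
    return Lam @ np.linalg.solve(I + 0.5j * A, I - 0.5j * A)

G = cayley_propagator(np.linspace(1.0, 2.0, 8), sigma=0.3)
unitarity_defect = np.linalg.norm(G @ G.conj().T - np.eye(8))   # ~ machine precision
```

Unitarity is preserved for any Hermitian $\mathbf{A}$, which is precisely why the Cayley form is preferable to a truncated exponential.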
Propagators $\mathbf{G_k}$ with different $k$ correspond to different basis sets of normal modes.
Since the multiplication of two neighboring propagators requires them to be in the same basis,
the formula for the resulting propagator has to include a unitary matrix $\mathbf{S_k}$ for the basis transformation.
The elements of the transformation matrix are given by
\begin{equation}
S_{mn}^{(k)} = \int \psi_m^{(k-1)}\psi_n^{(k)*}\,dz.
\end{equation}
Here it is assumed that the initial condition is taken as a superposition of modes of the unperturbed waveguide,
and the matrix $\mathbf{S_1}$ describes the basis transformation from unperturbed modes to modes of the first segment.
The resulting propagator reads
\begin{equation}
\mathbf{G}(Kr_b)=(\mathbf{G_K}\mathbf{S_K^{-1}})(\mathbf{G_{K-1}}\mathbf{S_{K-1}^{-1}})\cdots(\mathbf{G_2}\mathbf{S_{2}^{-1}})(\mathbf{G_1}\mathbf{S_1})
\prod\limits_{k=1}^{K}\mathbf{S_{k}}.
\label{prop}
\end{equation}
This equation corresponds to stepwise transformation of basis sets with increasing $k$.
\section{Numerical simulation}
\label{Numer}
\subsection{Intensity profile of a wavefield}
\label{Intensity}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.73\textwidth]{fig2a.eps}\\
\includegraphics[width=.73\textwidth]{fig2b.eps}
\end{center}
\caption{Mean intensity of an acoustic wavefield
as a function of depth. (a) $r = 200$~km, (b) $r = 500$~km.
The curves obtained via the modified Hegewisch-Tomsovic method are denoted by ``RMT''; the curves denoted
``CN'' correspond to the direct solution of the standard parabolic equation using the Crank-Nicholson scheme. The sound frequency is 75~Hz.
}
\label{Fig2}
\end{figure}
Formula (\ref{prop}) was checked by means of numerical simulation with the waveguide model described in Section \ref{Model}.
The sound frequency was taken to be 75~Hz.
Computations were conducted for a point sound source located at the channel axis, $z=1$~km.
Figure \ref{Fig2} demonstrates the depth dependence of intensity, $J = |\Psi|^2$, averaged over 1000 realizations of an internal-wave field.
Direct solutions of the parabolic equation were obtained using the Crank-Nicholson scheme.
It turns out that the agreement between the modified Hegewisch-Tomsovic method and the Crank-Nicholson solutions improves
with increasing range. The larger discrepancies at short ranges are probably related to the presence of long-lasting horizontal correlations
that are ignored in the random matrix modelling.
Indeed, random matrix modelling implies that the propagators for neighboring segments are statistically independent.
For $r=200$~km, the intensity profile corresponding to the direct solution is significantly smoother than the predictions
of the random matrix theory.
In the case of $r=500$~km the difference is not so apparent, but one should notice that the modified
Hegewisch-Tomsovic method overestimates the localization of the wavefield near the channel axis. Away from the channel axis, the intensity profiles
almost coincide. Notably, the curve corresponding to random matrix modelling with $r_{\text{b}}=5$~km is smoother and closer to the direct
solution than the curve corresponding to $r_{\text{b}}=10$~km.
Since reducing $r_{\text{b}}$ makes the stepwise approximation of the propagator more accurate,
one may conclude that the intensity oscillations imposed on the smooth profile are associated with
errors of the stepwise approximation.
In general, we see that the modified Hegewisch-Tomsovic method provides satisfactory agreement with the direct solutions.
\subsection{Spectral statistics test}
\label{Spectral}
When we utilize any approximation, it is very important to ensure that it does not alter the underlying physics.
Information about the physics of scattering is stored in the spectrum of the wavefield propagator.
This becomes evident if one invokes the analogy with quantum mechanics, where spectral properties play a key role in dynamics.
We can check whether the modified Hegewisch-Tomsovic propagator (\ref{prop}) is able to reproduce
the spectral statistics of the ``actual'' propagator obtained via the Crank-Nicholson scheme.
The analysis of \cite{JCA} shows that spectral correspondence should be considered a very stringent test, allowing one
to uncover hidden discrepancies.
Eigenvalues and eigenfunctions of the propagator obey the equation
\begin{equation}
\hat G(r_0,r_F)\Phi_n(z)=g_n(r_0,r_F)\Phi_n(z).
\label{eigen}
\end{equation}
Owing to the unitarity of the propagator, eigenvalues can be recast as
\begin{equation}
g_n=e^{-i\varphi_n}, \quad
\varphi_n\in\mathbb{R}.
\label{fn}
\end{equation}
This property means that the propagator matrix belongs to the so-called circular ensemble of random matrices \cite{Stockman}.
Scattering on random inhomogeneity reveals itself in statistics of level spacings \cite{UFN,PRE87}
\begin{equation}
\begin{gathered}
s=\frac{k_0M(\varphi_{m+1}-\varphi_m)}{2\pi},\quad
m = 1,2,\dots,M, \\
\varphi_{M+1} = \varphi_1 + \frac{2\pi}{k_0},
\end{gathered}
\label{spacing}
\end{equation}
where the sequence of eigenphases $\varphi_m$ is rearranged in ascending order,
and $M$ is the total number of eigenvalues for a single realization of the propagator, equal to the number
of propagating modes.
The statistical distribution of level spacings is connected to all $m$-order correlation functions of the eigenvalues \cite{Stockman}.
Hence level spacing statistics serves as a good indicator of differences between the spectrum of the propagator constructed via random matrices
and that of the actual propagator obtained via the Crank-Nicholson scheme.
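The spacing statistics can be probed numerically. In the sketch below a Haar-random CUE matrix stands in for the propagator, and the overall $k_0$ scale in (\ref{spacing}) is dropped, so that the mean spacing is unity by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def cue_matrix(M):
    """Draw a Haar-random unitary (CUE) via QR of a complex Ginibre matrix."""
    Z = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))          # fix column phases for the Haar measure

def level_spacings(U):
    """Normalized spacings of the eigenphases of U (k0 scale dropped);
    the mean spacing is exactly 1 by construction."""
    phi = np.sort(np.angle(np.linalg.eigvals(U)))
    M = len(phi)
    gaps = np.diff(np.concatenate([phi, [phi[0] + 2 * np.pi]]))
    return M * gaps / (2 * np.pi)

s = level_spacings(cue_matrix(64))
```

For large $M$ the histogram of $s$ approaches the Wigner surmise (\ref{surmise}) with the CUE constants $\alpha=2$, $C=4/\pi$ quoted below.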
If scattering on the inhomogeneity is weak, then the corresponding eigenphases of the propagator are statistically
independent of each other, and the level spacing distribution obeys the Poisson law
\begin{equation}
\rho(s)\sim\exp(-s).
\label{Poisson}
\end{equation}
In the opposite case of strong scattering and global inter-mode coupling, the neighboring eigenphases ``repel'' each other \cite{Stockman}. This leads
to level spacing statistics described by the Wigner surmise
\begin{equation}
\rho(s)\sim s^{\alpha}\exp\left(-Cs^2\right),
\label{surmise}
\end{equation}
where the constants $\alpha$ and $C$ depend on the symmetries of the propagator.
As unitarity is the only constraint on the propagator,
it corresponds to the circular unitary ensemble (CUE). In this case we have
$\alpha=2$ and $C = 4/\pi$ \cite{Kol97}.
In the intermediate regime of moderate scattering one can use the Berry-Robnik distribution \cite{BR}
\begin{equation}
\rho(s)= \left[
v_{\mathrm{r}}^2\operatorname{erfc}\left(\frac{\sqrt{\pi}}{2}v_{\mathrm{c}}s\right)+\left(2v_{\mathrm{r}}v_{\mathrm{c}}+\frac{\pi}{2}v_{\mathrm{c}}^3s\right)
\exp\left(-\frac{\pi}{4}v_{\mathrm{c}}^2s^2\right)
\vphantom{\frac{\sqrt{\pi}}{2}}\right]\exp(-v_{\mathrm{r}}s),
\label{berrob}
\end{equation}
where $v_{\mathrm{r}}+v_{\mathrm{c}}=1$.
Generally speaking, the Berry-Robnik formula (\ref{berrob}) is obtained under the assumption that the matrix $\mathbf{G}$
consists of two uncoupled blocks.
The first block is near-diagonal; it corresponds to weak scattering and regularly propagating modes.
The second block is a widely banded matrix, corresponding to strong scattering and ``chaotic'' modes.
Let us denote the number of rows (or columns) in the first block by $M_{\text{r}}$. Then the parameters
$v_{\mathrm{r}}$ and $v_{\mathrm{c}}$ are determined as
\begin{equation}
v_{\mathrm{r}} = \frac{M_{\text{r}}}{M},\quad v_{\mathrm{c}}=\frac{M - M_{\text{r}}}{M} = \frac{M_{\text{c}}}{M}.
\end{equation}
Hence they can be thought of as the fractions of weakly and strongly scattered modes, respectively.
The Berry-Robnik distribution undergoes a smooth transition
from the Poisson to the Wigner law as $v_{\mathrm{r}}$ decreases from 1 to 0.
Thus, by fitting the level spacing distribution with formula (\ref{berrob}) and finding the value of $v_{\mathrm{r}}$ (or $v_{\mathrm{c}}$) corresponding
to the best fit, we can track the process of mode decoherence
due to scattering on random inhomogeneity.
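As a sketch, the Berry-Robnik density (\ref{berrob}) can be implemented directly. The checks below confirm that it integrates to unity (crude trapezoidal rule with an arbitrary $v_{\mathrm{r}}=0.4$) and that the pure-regular limit $v_{\mathrm{r}}=1$ recovers the Poisson law:

```python
import math

def berry_robnik(s, v_r):
    """Berry-Robnik level-spacing density, Eq. (berrob), with v_c = 1 - v_r."""
    v_c = 1.0 - v_r
    core = (v_r ** 2 * math.erfc(0.5 * math.sqrt(math.pi) * v_c * s)
            + (2.0 * v_r * v_c + 0.5 * math.pi * v_c ** 3 * s)
            * math.exp(-0.25 * math.pi * v_c ** 2 * s ** 2))
    return core * math.exp(-v_r * s)

# crude normalization check on [0, 20] by the trapezoidal rule
ds = 1e-3
vals = [berry_robnik(i * ds, 0.4) for i in range(20001)]
total = ds * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # close to 1
```

At $v_{\mathrm{r}}=0$ the density reduces to the Wigner form $(\pi s/2)\exp(-\pi s^2/4)$, consistent with the transition described above.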
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.73\textwidth]{fig3.eps}
\end{center}
\caption{Fraction of weakly scattered modes estimated using the Berry-Robnik formula (\ref{berrob}) vs range.
}
\label{Fig3}
\end{figure}
Figure \ref{Fig3} shows the range dependence of the parameter $v_{\mathrm{r}}$. Apparently, the curves obtained via the modified Hegewisch-Tomsovic method
lie close to the curve obtained via the Crank-Nicholson scheme.
However, we can see that the curve of the actual propagator corresponds to smaller values of $v_{\mathrm{r}}$ than the predictions of
the random matrix theory, which means that the latter slightly underestimates scattering.
The most intriguing feature of the curves presented in Fig.~\ref{Fig3} is the increase of $v_{\mathrm{r}}$ after crossing the synoptic eddy ($r\simeq 250$~km).
As $v_{\mathrm{r}}$ can be regarded as the fraction of weakly scattered modes, it turns out that the eddy suppresses sound scattering.
Notably, this effect is well reproduced by the random matrix modelling.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.73\textwidth]{fig4.eps}
\end{center}
\caption{Range dependence of mean participation ratio per eigenfunction of the propagator.
}
\label{Fig4}
\end{figure}
Strictly speaking, level spacing statistics cannot be considered an absolutely reliable method of estimating scattering.
As was shown in \cite{UFN}, the transformation of level spacing statistics back to the Poissonian form
may be caused by scattering on fine-scale structures and does not unambiguously indicate suppression of scattering.
Therefore, identification of
the mechanism responsible for such a transformation
requires one to accompany the eigenvalue analysis with an analysis of the propagator eigenfunctions.
Each eigenfunction can be expressed as superposition
of normal modes,
\begin{equation}
\Phi_m(z) = \sum\limits_n b_{mn}\phi_n(z),
\label{eigf}
\end{equation}
where $b_{mn}$ is the $m$-th element of the $n$-th eigenvector
of the matrix $\mathbf{G}$.
Scattering on random inhomogeneity leads to intense mode coupling.
Consequently, a propagator eigenfunction corresponding to strong scattering should be composed of many normal modes.
Thus, we can estimate the impact of scattering by exploring the statistics of the participation ratio values in the expansions (\ref{eigf}).
The participation ratio of the $n$-th eigenfunction is calculated as
\begin{equation}
\nu(n) = \left(
\sum\limits_{m=1}^M\lvert b_{mn}\rvert^4
\right)^{-1}.
\label{npc}
\end{equation}
According to this definition,
$\nu$ is equal to 1 in a range-independent waveguide, and it increases
as scattering intensifies.
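The definition (\ref{npc}) is easy to check on two limiting cases (the vectors below are purely illustrative): a single excited mode gives $\nu=1$, while equal weight over $M$ modes gives $\nu=M$:

```python
import numpy as np

def participation_ratio(b):
    """Participation ratio nu = 1 / sum_m |b_m|^4 of a normalized eigenvector."""
    b = np.asarray(b, dtype=complex)
    b = b / np.linalg.norm(b)            # enforce unit norm
    return 1.0 / np.sum(np.abs(b) ** 4)

one_mode = participation_ratio([1, 0, 0, 0])   # -> 1.0
uniform = participation_ratio(np.ones(8))      # -> 8.0
```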
Figure \ref{Fig4} demonstrates the range dependence of the participation ratio averaged over
all eigenfunctions and realizations of the random inhomogeneity.
After rapid growth for $r< 300$~km, the mean participation ratio suddenly starts to decrease.
Hence the eigenfunction statistics
confirms that the growth of $v_{\mathrm{r}}$ is associated with suppression of scattering.
These results suggest a kind of anti-diffusive behavior, in which some limited group of modes becomes more favorable for the concentration of acoustic energy.
\section{Discussion}
The generalization of the Hegewisch-Tomsovic method to waveguides involving large-scale inhomogeneity drastically extends the range of its applications.
Indeed, real-world underwater acoustic waveguides are often subject to longitudinal variations which can be treated as adiabatic.
It should be noted that the efficiency of the method can be enhanced by using a non-uniform partition of the waveguide.
By adjusting the propagator step to the rate of mesoscale variability, we can reduce the inaccuracy of the stepwise approximation.
Furthermore, the modified Hegewisch-Tomsovic method looks like a promising tool for modelling in the presence of uncertainty in hydrological characteristics.
Nevertheless, the Hegewisch-Tomsovic method still has some limitations to its applicability. Firstly, the method is based on perturbation theory and
can fail when the latter does not apply; this is the case, for example, at relatively high frequencies.
Secondly, the method relies on the narrow-angle approximation and therefore cannot correctly incorporate wide-angle effects.
Hence it is reasonable to develop a version of the Hegewisch-Tomsovic method for the wide-angle parabolic equation, or for the Helmholtz equation.
In the latter case the formalism of S-matrices should be invoked \cite{Stockman}.
The correct calculation of the matrix element variances is one of the main technical problems arising in random matrix modelling.
Alternatively, these variances can be evaluated by solving the master equation for modal amplitudes \cite{DozierI,Creamer,ColosiMorozov,Colosi_Duda_Morozov}.
This is especially interesting in the context of the ``anti-diffusive'' behavior observed in this paper:
can the master equation reproduce this effect?
It should be mentioned that
a somewhat similar behavior occurs in quantum systems,
when the so-called ``dark'' states accumulate population.
As was shown in \cite{EPJB14,QE}, the quantum master equation, being mathematically equivalent to the acoustical master equation,
readily reproduces this effect.
Therefore, it is reasonable to expect that the acoustical master equation can be a reliable instrument for modelling
using random matrices, and that these two approaches can be efficiently combined.
\section{Conclusions}
\label{Concl}
The present paper is devoted to random matrix modelling of sound propagation in the ocean. It is shown that the approach of K.~Hegewisch and S.~Tomsovic
can be efficiently generalized to waveguides with adiabatic inhomogeneity, even if the magnitude of the inhomogeneity is relatively large.
The generalization is obtained by means of a stepwise approximation of the wavefield propagator, leading to formula (\ref{prop}).
The efficiency of the modified Hegewisch-Tomsovic method is confirmed by numerical simulation for a model of an underwater sound channel
with a cold synoptic eddy imposed. Spectral analysis of the propagator has shown that the eddy leads to suppression of scattering on internal waves.
\section*{Acknowledgments}
This work was supported by the Russian Foundation for Basic Research within projects 16-35-60040 and 16-05-01074, and by
the POI FEB RAS Program
``Mathematical simulation and analysis of dynamical processes in the ocean''
(No.~117030110034-7).
The author is grateful to Steven Tomsovic for stimulating and fruitful discussions.
Astrakhan State Technical University (ASTU) is a technical university located in Astrakhan, Russia.
History
The Federal State Educational Institution of Higher Professional Education "Astrakhan State Technical University" is the legal successor of the Astrakhan Technical Institute of Fishing Industry and Economy, established in accordance with Order No. 695 of the People's Commissariat of Foreign and Domestic Trade of the USSR of May 9, 1930, "On fish schools, colleges, rybfaks and courses".
By Order No. 547 of the Russian Federation State Committee for Higher Education of June 3, 1994, "On the renaming of the Astrakhan Technical Institute of Fishing Industry and Economy", and Order No. 107 of the Russian Federation State Committee for Fisheries of July 8, 1994, on renaming the Astrakhan and Kaliningrad technical institutes of the fishing industry and economy into state technical universities, the Astrakhan Technical Institute of Fishing Industry and Economy was renamed Astrakhan State Technical University.
Faculties
Institute of Economics
Institute of Information Technology and Communications
Mechanical Engineering Institute
Institute for Fisheries, Biology and Nature
Faculty of Law
Faculty of Civil Engineering
Faculty of vocational education
Branch Dmitrovsky
Institute of Distance Education
Institute for Advanced Professional Education
Institute for the Humanities
Institute of Oil and Gas
Institute of Marine Technology, Energy and Transport
Preparatory Faculty for Foreign Citizens
Volga-Caspian Sea Fishing College
Yeisk Sea Fishing College
Fishing Dmitrovsky College
Universities in Volga Region
Technical universities and colleges in Russia
Q: Docker compose v3: The difference between volume type mount and bind I am using docker-compose syntax version 3 and want to use some volumes. The documentation on the long syntax for volumes states the following:
type: the mount type volume or bind
but never fully explains the difference. What is it?
A: bind is the simpler one to understand. It takes a host path, say /data, and mounts it inside your container, say /opt/app/data. /data can be anything: it may be mounted on NFS, or it may be a local host path.
docker run -v /data:/opt/app/data -d nginx
volume mount is where you can use a named volume.
You would normally use a volume driver for this, but you can get a host mounted path using the default local volume driver something like the below:
docker volume create data
docker run -d -v data:/opt/app/data nginx
The named volume can also be anonymous if you run just this:
docker run -d -v /opt/app/data nginx
If you run docker volume ls, Docker would have created an autogenerated long name for the anonymous volume.
In docker-compose, you would just use it as below:
web:
  image: nginx:latest
  volumes:
    - /data:/opt/app/data
    - data:/opt/app/data1

volumes:
  data:
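For reference, the long syntax mentioned in the question — available since compose file format 3.2 — expresses the same two mounts explicitly, with the mount type spelled out per entry (service and volume names here just mirror the example above):

```yaml
version: "3.2"
services:
  web:
    image: nginx:latest
    volumes:
      - type: bind          # host path mounted into the container
        source: /data
        target: /opt/app/data
      - type: volume        # named volume managed by Docker
        source: data
        target: /opt/app/data1
volumes:
  data:
```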
Q: How to prove that one can choose two positive integers $a_m, a_k$ such that $\frac{a_m+a_k}{3a_p}\notin \mathbb{N}^+$

If $a_1, a_2, \dots, a_n$ ($a_i \neq a_j$ for $i \neq j$), $n \ge 3$, are positive integers, show that we can always choose two of them, $a_m, a_k$ with $m, k \in \{1, 2, \dots, n\}$, such that
$$\frac{a_m+a_k}{3a_p}\notin \mathbb{N}^+ \quad \text{for all } p \in \{1, 2, \dots, n\}.$$
(Here $a_m$ and $a_k$ must be distinct, while $a_p$ may coincide with $a_m$ or $a_k$; without this convention the claim fails, e.g. for the list $1, 3, 6, 9$ with $a_p = 1$.)

This problem is from the Jiangxi province Mathematical Contest, 2014.

My idea: if $n=3$, assume $a_1=1, a_2=2, a_3=3$. For $a_p=1$ we can choose $a_m=2, a_k=3$, and clearly
$$\frac{a_m+a_k}{3a_p}=\frac{5}{3}\notin \mathbb{N}^+.$$
For $a_p=2$ we can choose $a_m=1, a_k=3$, and for $a_p=3$ we can choose $a_m=1, a_k=2$. But I cannot prove the statement for general $n$.

A: We can assume $a_1 < a_2 < \dots < a_n$ without loss of generality. Since $n \ge 3$, we have $a_n \ge 3$.
Let us argue by induction on $m = a_n$. When $m = 3$, we have $n = 3$ and $(a_1, a_2, a_3) = (1, 2, 3)$, a case already treated above.
Now suppose that $m > 3$ and that the result holds for all smaller values. If, for some $i \neq j$, the sum $a_i + a_j$ is not divisible by $3$, then we may take $m = i$, $k = j$ and we are done. So we can assume that all the sums $a_i + a_j$ ($i \neq j$) are divisible by $3$. Then for any distinct $i, j, k$ the difference $a_j - a_i = (a_j + a_k) - (a_i + a_k)$ is divisible by $3$, so all the $a_i$ are congruent to a constant $a \in \{0, 1, 2\}$ modulo $3$. Then $2a$ is divisible by $3$, so $a = 0$. Hence all the $a_i$ are divisible by $3$; write $a_i = 3b_i$, where $b_i$ is a positive integer. By the induction hypothesis, there are indices $m \neq k$ such that $\frac{b_m+b_k}{3b_p} \notin \mathbb{N}$ for every $p$. Since $\frac{a_m+a_k}{3a_p}=\frac{b_m+b_k}{3b_p}$, we are done.
Arman's RPGs
Deadlands: Constitution Day
The train limped into town - Dodge City, Kansas. While it is in for repairs, the posse split up.
Wilhelm headed to the Catholic church, and got a little information, and a free meal. Dr. Abigail Winston headed to the drugstore, where she picked out a few ingredients. She met a strange man wearing armor outside. Mr. Brag headed to the saloon, as did Hector.
It wasn't long before Hector met Ida Mae Hobart, a staunch teetotaler and leader of the Ladies Temperance Movement, who lectured him on the horrors of the devil's drink. Once she left (in a huff), Hector tried to stir up a bit of trouble - bringing some of the bar rabble over to harass Ida. They (mostly) declined, saying that the sheriff wouldn't be too pleased about people making trouble. Hector and his big mouth blurted out, "Well, why don't we just kill 'im, like we killed the last one?"
The room fell silent. Finally, one of the men patted Hector on the shoulder. "Y'might not want to go around sayin' that, son. I recon ol' Wyatt Earp wouldn't take kindly to that."
Hector quickly realized that maybe this wasn't the town to shoot up.
The "paladin", the strange man in armor, found a bill advertising a job - temporary deputy during Constitution Day celebrations. Abigail, visiting McCarty's City Drugstore, read the same poster; and Wilhelm came across the information through the priest he spoke to. Hector, meanwhile, decided to stay away. By the next day, each (except Hector) had each trooped through Earp's office, and were officially deputized. And a good thing, too...
After stopping a fight between Ralphie Simpkins, a local hothead, and Walter Jackson, a young black ex-Union soldier, the posse stopped over at Dog Eye's, the local saloon. Earlier, Hector met a tattered-looking fellow named Clayton Mansfield; well-spoken, if somewhat drunk.
Not long after, Hector caught Dog Eye's saloon gal, Suzy Winger, talking with a man she called Paul in the alley; Paul hurried off, and Suzy escorted Hector back to the saloon.
Abigail, meanwhile, had begun to pay off the local kids as unofficial spies; handing out dollars like candy, she quickly formed a speedy spy ring of eager, if somewhat grubby, children. Which paid off - one of those children called her to the scene of a crime! She and the paladin looked over the body - a rather grisly scene! The man had been neatly decapitated; his arms, too, were severed at the shoulders. Abigail quickly checked him over, and could tell that whoever had killed him had done it very quickly, and must have had great strength. Rather than a large chopping blade, the wounds were incurred with a smallish blade...
A further search turned up a letter, sewed into his clothes - the man, Paul Goodwin, was a Union spy! Abigail kept that information to herself. The saloon's barman and owner, Dog Eye himself, took Suzy to the local doctor.
Not long afterward, Ralphie got into another fight with Walter; this time, he broke Walter's arm! Nobody spoke up to rescue Walter - too afraid of Ralphie and his brother, the leaders of the Wilderness Riders - but nonetheless, Wilhelm and Hector hauled the both of them to jail. Though not until after Wilhelm healed Walter's arm. Clayton, the drunk Hector had met earlier staggered over, peered at the man's arm, and muttered, "Huh. Looked like a good clean break..." before belching in Wilhelm's face.
Ida Mae arrived, and contradicted Ralphie's story of Walter starting trouble; Jake sneered, but before tension could raise any higher, Wyatt Earp arrived and dispersed the crowd. He was none too pleased with the posse, though, allowing themselves to get drawn into a fight with the Wilderness Riders...
The next day, Abigail went to Wyatt with the news that the murder victim, Paul, had been a Union spy; as it turned out, Wyatt knew it all along. As long as Paul hadn't made trouble, Wyatt wasn't about to intervene.
Abigail and the paladin went to visit Suzy, but she was still hysterical. However, the doctor motioned them over and verified Abigail's initial hunch - the killer was strong, and knew his way around the human body.
After following some clues - including finding a scrap of cloth at the crime scene, and verifying that it was silk - the posse was called to yet another grisly murder. This time, though, Abigail and the paladin had figured out who the likely suspect was - none other than Clayton, the drunk! They searched his room, and found a small case - empty - and a collection of trunks and carpet bags - filled with heads, arms, and legs, sewn together into horrible monsters! They didn't waste a moment, and gathered the body parts up and locked them into a cell, leaving a message for Earp as to what they likely were.
At the scene of the murder, they found Ralphie - or at least, what was left of him. Hector emptied his stomach, as did some of the other townsfolk. A search of the area turned up nothing; the posse was left to sleep - if they could. Wyatt spoke to them in private, and advised them to find the killer before he struck again...
The next morning, the posse was roused by a call from the town doctor; Suzy was feeling well enough to speak, finally. She told them what happened - the killer shoved open the door, knocking her back against the desk. When she regained her senses, she saw the killer stuffing Paul's head into a carpet bag. He leaped out of the window, and vanished.
The doctor had his own story to tell - ten years ago, at the Battle of Gettysburg, he had been working with the rebels, tending the injured, when he heard gunshots. He ran to the hospital tent and saw a man severing arms, legs, anything he could. He shouted and ran, and guards fired, but the bullets didn't seem to harm him; his last victim, a man named Ketchum, was the one who shot at him, even as the Butcher was removing Ketchum's eye. Ketchum survived, but so did the Butcher...
Abigail realized that this Butcher was likely the same creature who had been following her, and indeed the same creature who was terrorizing the town! Abigail quickly rounded up her informants, and sent them all to the drugstore, with instructions to the owner to keep the kids safe.
The posse kept an eye out for any trouble. Finally, it struck - the Butcher jumped down from the building he was standing on, and pulled a scalpel from his arm! Abigail let loose with a blast, but the bullets didn't seem to do much harm. The paladin charged in, knocking the creature aside; while he was going toe-to-toe with the thing, he wasn't making much headway. He managed to nearly sever the creature's hand with his sword, and the thing howled; Hector and Wilhelm moved into rifle range, and with a barrage of gunfire, finally severed the creature's hand. Immediately, the form reverted to that of Clayton, the drunk; Hector, taking no chances, fired twice: once at Clayton, and once at the scalpel. Clayton died immediately; the scalpel, however, seemed untouched, but then a slow hairline fracture darted across the blade, and suddenly it shattered into dust. Finally, the town was safe.
The next morning, Earp shook their hands; the body parts in the cell, as it turns out, came alive when the Butcher was out and about. He had personally burned every last one of them. He handed out their pay, collected their badges, and thanked them for their service - they likely saved numerous lives, and with Ralphie out of the way, the Wilderness Gang would likely be less of a problem.
Meanwhile, it was finally time to leave. A message arrived from the train station - their train would depart in an hour.
Posted by Andrew Metzger at 10:30 PM
Labels: Deadlands
\section{Introduction}
It is common knowledge in mathematical analysis that a function $f:I\subseteq\mathbb{R}\to \mathbb{R}$ is said to be convex on an interval $I\ne\emptyset$ if
\begin{equation}\label{convex-dfn-ineq}
f(\lambda x+(1-\lambda) y)\le \lambda f(x)+(1-\lambda )f(y)
\end{equation}
for all $x,y\in I$ and $\lambda \in [0,1]$; If the inequality~\eqref{convex-dfn-ineq} reverses, then $f$ is said to be concave on $I$.
\par
Let $f: I\subseteq\mathbb{R} \to \mathbb{R}$ be a convex function on an interval $I$ and $a, b\in I$ with $a<b$. Then
\begin{equation}\label{HH-ineq-eq}
f\biggl(\frac{a+b}2\biggl)\le\frac{1}{b-a}\int_a^bf(x)\td x\le \frac{f(a)+f(b)}2.
\end{equation}
This inequality is well known in the literature as the Hermite-Hadamard integral inequality for convex functions. See~\cite{Dragomir-selected-Topic, Niculescu-Persson-Monograph-2004} and closely related references therein.
\par
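As a quick numerical illustration of~\eqref{HH-ineq-eq}, the three quantities can be compared for a sample convex function, here $f(x)=e^x$ on $[0,1]$, where they equal $e^{1/2}$, $e-1$, and $(1+e)/2$ respectively:

```python
import math

def integral_mean(f, a, b, n=10_000):
    # composite midpoint rule for (1/(b-a)) * integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) / n

f, a, b = math.exp, 0.0, 1.0
lo = f((a + b) / 2)            # f((a+b)/2)    = e^{1/2} ≈ 1.6487
mid = integral_mean(f, a, b)   # (1/(b-a))∫f   = e - 1   ≈ 1.7183
hi = (f(a) + f(b)) / 2         # (f(a)+f(b))/2 = (1+e)/2 ≈ 1.8591
assert lo <= mid <= hi         # Hermite-Hadamard for this instance
```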
The concept of classical convexity has been generalized by a number of mathematicians. Some of these generalizations can be restated as follows.
\begin{definition}[\cite{Toader-1985-329}]
Let $f:[0,b]\to\mathbb{R}$ be a function and $m\in[0,1]$. If
\begin{equation}
f(\lambda x +m(1-\lambda)y)\le\lambda f(x)+m(1-\lambda )f(y)
\end{equation}
holds for all $x,y\in[0,b]$ and $\lambda\in[0,1]$, then we say that $f(x)$ is $m$-convex on $[0,b]$.
\end{definition}
\begin{definition}[\cite{Mihesan-1993-Romania}]
Let $f:[0,b]\to\mathbb{R}$ be a function and $(\alpha,m)\in[0,1]\times[0,1]$. If
\begin{equation}
f(\lambda x+m(1-\lambda)y)\le \lambda^\alpha f(x)+m(1-\lambda^\alpha)f(y)
\end{equation}
is valid for all $x,y\in[0,b]$ and $\lambda\in(0,1]$, then we say that $f(x)$ is $(\alpha, m)$-convex on $[0,b]$.
\end{definition}
It is not difficult to see that when $(\alpha,m)\in\{(\alpha,0),(1,0),(1,m),(1,1),(\alpha,1)\}$ the $(\alpha, m)$-convex function becomes the $\alpha$-star-shaped, star-shaped, $m$-convex, convex, and $\alpha$-convex functions respectively.
\par
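To see that these parameters genuinely change the function class, a brute-force grid check can be run on the defining inequality; for instance, $f(x)=x^2$ satisfies the $(\alpha,m)=(1,1)$ condition (ordinary convexity) but violates the $(\frac12,1)$ condition near $x=0$, $y=1$, small $\lambda$:

```python
import itertools

def max_violation(f, alpha, m, grid):
    """Largest amount by which f(lam*x + m*(1-lam)*y) exceeds
    lam**alpha * f(x) + m*(1 - lam**alpha) * f(y) on a finite grid;
    a value <= 0 means no violation was found."""
    worst = float("-inf")
    for x, y, lam in itertools.product(grid, grid, grid):
        lhs = f(lam * x + m * (1 - lam) * y)
        rhs = lam**alpha * f(x) + m * (1 - lam**alpha) * f(y)
        worst = max(worst, lhs - rhs)
    return worst

f = lambda x: x * x
grid = [i / 20 for i in range(21)]                 # 21 points in [0, 1]

assert max_violation(f, 1.0, 1.0, grid) <= 1e-12   # x^2 is (1,1)-convex
assert max_violation(f, 0.5, 1.0, grid) > 0        # but not (1/2,1)-convex
```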
The famous Hermite-Hadamard inequality~\eqref{HH-ineq-eq} has been refined or generalized by many mathematicians. Some of them can be reformulated as follows.
\begin{theorem}[{\cite[Theorem~3]{M.E.-2010-1065}}]
Let $f:I^\circ\subset[0,\infty)\to\mathbb{R}$ be a twice differentiable function such that $f''\in L([a,b])$ for $a,b\in I$ with $a<b$. If $|f''(x)|^{q}$ is $m$-convex on $[a,b]$ for some fixed $q>1$ and $m\in[0,1]$, then
\begin{equation}
\biggl|f\biggl(\frac{a+b}{2}\biggr)-\frac{1}{b-a} \int_{a}^{b}f(x)\td x\biggr|
\le \frac{(b-a)^{2}}{8}\biggl[\frac{\Gamma(1+p)}{\Gamma(3/2+p)}\biggr]^{1/p} \biggl[\frac{|f''(a)|^{q}+m|f''(b/m)|^{q}}{2}\biggr]^{1/q},
\end{equation}
where $\frac1p+\frac1q=1$ and $\Gamma$ is the classical Euler gamma function which may be defined for $\Re(z)>0$ by
\begin{equation}
\Gamma(z)=\int_0^{\infty}t^{z-1}e^{-t}\td t.
\end{equation}
\end{theorem}
\begin{theorem}[{\cite[Theorem~4]{Sarikaya-Aktan-1005.2897}}]
Let $I\subseteq\mathbb{R}$ be an open interval and $a,b\in I$ with $a<b$, and let $f:I\to\mathbb{R}$ be a twice differentiable mapping such that $f''(x)$ is integrable. If $0\le\lambda\le1$ and $|f''(x)|$ is convex on $[a,b]$, then
\begin{multline}
\biggl|(\lambda-1)f\biggl(\frac{a+b}2\biggr)-\lambda\frac{f(a)+f(b)}2+\int_a^bf(x)\td x\biggr|\\
\le
\begin{cases}
\begin{aligned}
\frac{(b-a)^2}{24}\biggl\{\biggl[\lambda^4&+(1+\lambda)(1-\lambda)^3+\frac{5\lambda-3}4\biggr]|f''(a)|\\ &+\biggl[\lambda^4+(2-\lambda)\lambda^3+\frac{1-3\lambda}4\biggr]|f''(b)|\biggr\}, \quad 0\le\lambda\le\dfrac12;
\end{aligned}\\
\dfrac{(b-a)^2}{48}(3\lambda-1)\bigl(|f''(a)|+|f''(b)|\bigr),\quad \dfrac12\le\lambda\le1.
\end{cases}
\end{multline}
\end{theorem}
\begin{theorem}[{\cite[Theorem~3]{M.E.-2011-2614}}]\label{thm1.3}
Let $b^*>0$ and $f:[0,b^*]\to\mathbb{R}$ be a twice differentiable function such that $f''\in L([a,b])$ for $a,b\in[0,b^*]$ with $a<b$. If $|f''(x)|^{q}$ is $(\alpha,m)$-convex on $[a,b]$ for $(\alpha,m)\in[0,1]\times[0,1]$ and $q\ge1$, then
\begin{multline}\label{thm1.3-n=2}
\biggl|\frac{f(a)+f(mb)}2-\frac{1}{mb-a} \int_{a}^{mb}f(x)\td x\biggr|\\
\le \frac{(mb-a)^2}2\biggl(\frac1{6}\biggr)^{1-1/q}
\biggl\{\frac{|f''(a)|^{q}}{(\alpha+2)(\alpha+3)}+m|f''(b)|^{q} \biggl[\frac16-\frac1{(\alpha+2)(\alpha+3)}\biggr]\biggr\}^{1/q}.
\end{multline}
\end{theorem}
In recent years, some other kinds of Hermite-Hadamard type inequalities were generated in~\cite{Hadramard-Convex-Xi-Filomat.tex, H-H-Bai-Wang-Qi-2012.tex, chun-ling-Hermite.tex, difference-hermite-hadamard.tex, GMJ-2013-062.tex, 130-2014-Shuang-Wang-Qi-JOCAAA-2-10-2014.tex, Xi-Bai-Qi-Hadamard-2011-AEQMath.tex, H-H-h-C-Xi-Qi-AIA.tex, Hadramard-Convex-Xi-September-2011.tex}, for example. For more systematic information, please refer to monographs~\cite{Dragomir-selected-Topic, Niculescu-Persson-Monograph-2004} and related references therein.
\par
In this paper, we will establish some new inequalities of Hermite-Hadamard type for functions whose derivatives of $n$-th order are $(\alpha,m)$-convex and deduce some known results in the form of corollaries.
\section{A lemma}
For establishing new integral inequalities of Hermite-Hadamard type for functions whose derivatives of $n$-th order are $(\alpha,m)$-convex, we need the following lemma.
\begin{lemma}\label{lem2.1}
Let $0<m\le1$ and $b>a>0$ satisfying $a<mb$.
If $f^{(n)}(x)$ for $n\in\{0\}\cup\mathbb{N}$ exists and is integrable on the closed interval $[0,b]$, then
\begin{multline}\label{eq2.1}
\frac{f(a)+f(mb)}{2} -\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a) \\*
=\frac12\frac{(mb-a)^n}{n!}\int_{0}^1t^{n-1}(n-2t)f^{(n)}(ta+m(1-t)b)\td t,
\end{multline}
where the sum above is understood to be $0$ when $n=1$ or $n=2$.
\end{lemma}
\begin{proof}
When $n=1$, it is easy to deduce the identity~\eqref{eq2.1} by performing an integration by parts in the integral on the right-hand side and changing the variable.
\par
When $n=2$, we have
\begin{equation}\label{eq2.1-n=2}
\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x
=\frac{(mb-a)^2}2\int_{0}^1t(1-t)f''(ta+m(1-t)b)\td t.
\end{equation}
This result is the same as~\cite[Lemma~2]{M.E.-2011-2614}.
\par
When $n=3$, the identity~\eqref{eq2.1} is equivalent to
\begin{multline}\label{eq2.1-n=3}
\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x-\frac{(mb-a)^2}{12}f''(a)\\
=\frac{(mb-a)^3}{12}\int_{0}^1t^2(3-2t)f^{(3)}(ta+m(1-t)b)\td t,
\end{multline}
which may be derived by integrating by parts the integral on the right-hand side of~\eqref{eq2.1-n=3} and utilizing the identity~\eqref{eq2.1-n=2}.
\par
When $n\ge4$, computing the second line in~\eqref{eq2.1} by integration by parts yields
\begin{multline*}
\frac{(mb-a)^n}{n!}\int_{0}^1 t^{n-1}(n-2t)f^{(n)}(ta+m(1-t)b)\td t\\
=-\frac{(n-2)(mb-a)^{n-1}}{n!}f^{(n-1)}(a)
+\frac{(mb-a)^{n-1}}{(n-1)!} \int_{0}^1t^{n-2}(n-1-2t)f^{(n-1)}(ta+m(1-t)b)\td t,
\end{multline*}
which is a recurrent formula
\begin{equation*}
S_{a,mb}(n)=-T_{a,mb}(n-1)+S_{a,mb}(n-1)
\end{equation*}
on $n$, where
\begin{equation*}
S_{a,mb}(n)=\frac12\frac{(mb-a)^n}{n!}\int_{0}^1 t^{n-1}(n-2t)f^{(n)}(ta+m(1-t)b)\td t
\end{equation*}
and
\begin{equation*}
T_{a,mb}(n-1)=\frac12\frac{(n-2)(mb-a)^{n-1}}{n!}f^{(n-1)}(a)
\end{equation*}
for $n\ge4$. By mathematical induction, the proof of Lemma~\ref{lem2.1} is complete.
\end{proof}
\begin{remark}
Similar integral identities to~\eqref{eq2.1}, produced by replacing $f^{(k)}(a)$ in~\eqref{eq2.1} by $f^{(k)}(b)$ or by $f^{(k)}\bigl(\frac{a+b}2\bigr)$, and corresponding integral inequalities of Hermite-Hadamard type have been established in~\cite{H-H-(a-m)-convex-Filomat.tex, Wang-Qi-MIA3459-MINFAA2012.tex, Wang-Ineq-H-H-type-Analysis.tex}.
\end{remark}
\begin{remark}
When $m=1$, our Lemma~\ref{lem2.1} becomes~\cite[Lemma~2.1]{Hwang-Kyugpook-03}.
\end{remark}
\section{Inequalities of Hermite-Hadamard type}
Now we are in a position to establish some integral inequalities of Hermite-Hadamard type for functions whose derivatives of $n$-th order are $(\alpha,m)$-convex.
\begin{theorem}\label{th3.1}
Let $(\alpha,m)\in[0,1]\times(0,1]$ and $b>a>0$ with $a<mb$.
If $f(x)$ is $n$-times differentiable on $[0,b]$ such that $\bigl|f^{(n)}(x)\bigr|\in L([0,mb])$ and $\bigl|f^{(n)}(x)\bigr|^p$ is $(\alpha,m)$-convex on $[0,mb]$ for $n\ge 2$ and $p\ge 1$, then
\begin{multline}\label{eq3.1.1}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\*
\le\frac12\frac{(mb-a)^n}{n!}\biggl(\frac{n-1}{n+1}\biggr)^{1-1/p} \biggl\{\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\bigl|f^{(n)}(a)\bigr|^p\\ +m\biggl[\frac{n-1}{n+1}-\frac{n(n-1)+\alpha(n-2)} {(n+\alpha)(n+\alpha+1)}\biggr]\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p},
\end{multline}
where the sum above is understood to be $0$ when $n=2$.
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{lem2.1} that
\begin{multline}\label{eq3.1.2}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
\le \frac12\frac{(mb-a)^n}{n!}\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t.
\end{multline}
\par
When $p=1$, since $\bigl|f^{(n)}(x)\bigr|$ is $(\alpha,m)$-convex, we have
\begin{equation*}
\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\le t^\alpha\bigl|f^{(n)}(a)\bigr|+m(1-t^\alpha)\bigl|f^{(n)}(b)\bigr|.
\end{equation*}
Multiplying by the factor $t^{n-1}(n-2t)$ on both sides of the above inequality and integrating with respect to $t\in[0,1]$ lead to
\begin{align*}
&\quad\int_{0}^1 t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t\\
&\le\int_{0}^1t^{n-1}(n-2t)\bigl[t^{\alpha}\bigl|f^{(n)}(a)\bigr| +m(1-t^{\alpha})\bigl|f^{(n)}(b)\bigr|\bigr]\td t\\
&=\bigl|f^{(n)}(a)\bigr|\int_{0}^1 t^{n+\alpha-1}(n-2t)\td t +m\bigl|f^{(n)}(b)\bigr|\int_{0}^1t^{n-1}(n-2t)(1-t^{\alpha})\td t\\
&=\biggl(\frac{n}{n+\alpha}-\frac2{n+\alpha+1}\biggr)\bigl|f^{(n)}(a)\bigr|
+m\bigl|f^{(n)}(b)\bigr|\biggl(\frac{n-1}{n+1}-\frac{n}{n+\alpha}+\frac2{n+\alpha+1}\biggr)\\
&=\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\bigl|f^{(n)}(a)\bigr| +m\biggl[\frac{n-1}{n+1}-\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\biggr]\bigl|f^{(n)}(b)\bigr|.
\end{align*}
The proof for the case $p=1$ is complete.
\par
When $p>1$, by the well-known H\"older integral inequality, we obtain
\begin{multline}\label{eq3.1.3}
\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t\\*
\le\biggl[\int_{0}^1t^{n-1}(n-2t)\td t\biggr]^{1-1/p}
\biggl[\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|^p\td t\biggr]^{1/p}.
\end{multline}
Using the $(\alpha,m)$-convexity of $\bigl|f^{(n)}(x)\bigr|^p$ produces
\begin{multline}\label{eq3.1.4}
\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|^p \td t\\
\le\int_{0}^1t^{n-1}(n-2t)\bigl[t^{\alpha}\bigl|f^{(n)}(a)\bigr|^p
+m(1-t^{\alpha})\bigl|f^{(n)}(b)\bigr|^p\bigr]\td t\\
=\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\bigl|f^{(n)}(a)\bigr|^p
+m\biggl[\frac{n-1}{n+1}-\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\biggr]\bigl|f^{(n)}(b)\bigr|^p.
\end{multline}
Substituting~\eqref{eq3.1.3} and~\eqref{eq3.1.4} into~\eqref{eq3.1.2} yields the inequality~\eqref{eq3.1.1}.
This completes the proof of Theorem~\ref{th3.1}.
\end{proof}
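As a worked numerical instance of~\eqref{eq3.1.1}, take the classical case $\alpha=m=1$, $n=2$, $p=2$ with $f(x)=x^4$ on $[1,2]$; then $\bigl|f''(x)\bigr|^2=144x^4$ is convex, the $k$-sum is empty, and both sides can be computed directly:

```python
import math

a, b = 1.0, 2.0
n, p, alpha = 2, 2.0, 1.0        # classical case alpha = m = 1
f = lambda x: x**4
fpp = lambda x: 12.0 * x**2      # f''(x)

# left-hand side; the integral of x^4 over [1, 2] is (b^5 - a^5)/5 exactly
lhs = abs((f(a) + f(b)) / 2 - (b**5 - a**5) / (5 * (b - a)))   # = 2.3

c1 = (n * (n - 1) + alpha * (n - 2)) / ((n + alpha) * (n + alpha + 1))
c2 = (n - 1) / (n + 1) - c1      # with m = 1 the m-factor is trivial
rhs = (0.5 * (b - a)**n / math.factorial(n)
       * ((n - 1) / (n + 1))**(1 - 1/p)
       * (c1 * fpp(a)**p + c2 * fpp(b)**p)**(1/p))             # ≈ 2.9155

assert lhs <= rhs
```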
\begin{corollary}\label{cor3.1}
Under the conditions of Theorem~\ref{th3.1},
\begin{enumerate}
\item
when $m=1$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(b)}{2}-\frac{1}{b-a}\int_{a}^{b}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(b-a)^k}{(k+1)!}f^{(k)}(a)\biggr|
\le\frac12\frac{(b-a)^n}{n!}\biggl(\frac{n-1}{n+1}\biggr)^{1-1/p}\\
\times\biggl\{\frac{n(n-1)+\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\bigl|f^{(n)}(a)\bigr|^p
+\biggl[\frac{n-1}{n+1}-\frac{n(n-1) +\alpha(n-2)}{(n+\alpha)(n+\alpha+1)}\biggr]\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p};
\end{multline*}
\item
when $n=2$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x \biggr|\\
\le\frac{(mb-a)^2}4\biggl(\frac13\biggr)^{1-1/p}
\biggl\{\frac2{(\alpha+2)(\alpha+3)}\bigl|f''(a)\bigr|^p +m\biggl[\frac13-\frac2{(\alpha+2)(\alpha+3)}
\biggr]\bigl|f''(b)\bigr|^p\biggr\}^{1/p};
\end{multline*}
\item
when $m=\alpha=p=1$ and $n=2$, we have
\begin{equation*}
\biggl|\frac{f(a)+f(b)}{2}-\frac{1}{b-a}\int_{a}^{b}f(x)\td x \biggr|
\le\frac{(b-a)^2}{24}\bigl[\bigl|f''(a)\bigr|+\bigl|f''(b)\bigr|\bigr];
\end{equation*}
\item
when $m=\alpha=1$ and $p=n=2$, we have
\begin{equation*}
\biggl|\frac{f(a)+f(b)}{2}-\frac{1}{b-a}\int_{a}^{b}f(x)\td x \biggr|
\le\frac{(b-a)^2}{12}\biggl[\frac{|f''(a)|^2+|f''(b)|^2}2\biggr]^{1/2}.
\end{equation*}
\end{enumerate}
\end{corollary}
\begin{remark}
Under the conditions of Theorem~\ref{th3.1},
\begin{enumerate}
\item
when $n=2$, the inequality~\eqref{eq3.1.1} becomes the one~\eqref{thm1.3-n=2} in~\cite[Theorem~3]{M.E.-2011-2614};
\item
when $\alpha=m=1$, Theorem~\ref{th3.1} becomes~\cite[Theorem~3.1]{Hwang-Kyugpook-03}.
\end{enumerate}
\end{remark}
\begin{theorem}\label{th3.2}
Let $(\alpha,m)\in[0,1]\times(0,1]$ and $b>a>0$ with $a<mb$.
If $f(x)$ is $n$-times differentiable on $[0,b]$ such that $\bigl|f^{(n)}(x)\bigr|\in L([0,mb])$ and $\bigl|f^{(n)}(x)\bigr|^p$ is $(\alpha,m)$-convex on $[0,mb]$ for $n\ge 2$ and $p>1$, then
\begin{multline}\label{th3.2-ineq}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
\le\frac12\frac{(mb-a)^n}{n!}\biggl[\frac{n^{q+1}-(n-2)^{q+1}}{2(q+1)}\biggr]^{1/q} \biggl\{\frac1{p(n-1)+\alpha+1}\bigl|f^{(n)}(a)\bigr|^p\\*
+\frac{m\alpha}{[p(n-1)+1][p(n-1)+\alpha+1]}\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p},
\end{multline}
where the sum above is understood to be $0$ when $n=2$ and where $\frac1p+\frac1q=1$.
\end{theorem}
\begin{proof}
It follows from Lemma~\ref{lem2.1} that
\begin{multline}\label{eq3.2.2}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
\le \frac12\frac{(mb-a)^n}{n!}\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t.
\end{multline}
By the well-known H\"older integral inequality, we obtain
\begin{multline}\label{eq3.2.3}
\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t\\
\le\biggl[\int_{0}^1(n-2t)^q\td t\biggr]^{1/q} \biggl[\int_{0}^1t^{p(n-1)}\bigl|f^{(n)}(ta+m(1-t)b)\bigr|^p\td t\biggr]^{1/p}\\
=\biggl[\frac{n^{q+1}-(n-2)^{q+1}}{2(q+1)}\biggr]^{1/q} \biggl[\int_{0}^1t^{p(n-1)}\bigl|f^{(n)}(ta+m(1-t)b)\bigr|^p\td t\biggr]^{1/p}.
\end{multline}
Making use of the $(\alpha,m)$-convexity of $\bigl|f^{(n)}(x)\bigr|^p$ reveals
\begin{multline}\label{eq3.2.4}
\int_{0}^1t^{p(n-1)}\bigl|f^{(n)}(ta+m(1-t)b)\bigr|^p \td t \\*
\begin{aligned}
&\le\int_{0}^1t^{p(n-1)}\bigl[t^{\alpha}\bigl|f^{(n)}(a)\bigr|^p +m(1-t^{\alpha})\bigl|f^{(n)}(b)\bigr|^p\bigr]\td t\\
&=\bigl|f^{(n)}(a)\bigr|^p\int_{0}^1t^{p(n-1)+\alpha}\td t +m\bigl|f^{(n)}(b)\bigr|^p\int_{0}^1t^{p(n-1)}(1-t^{\alpha})\td t
\end{aligned}\\
=\frac{\bigl|f^{(n)}(a)\bigr|^p}{p(n-1)+\alpha+1} +\frac{m\alpha}{[p(n-1)+1][p(n-1)+\alpha+1]}\bigl|f^{(n)}(b)\bigr|^p.
\end{multline}
Combining~\eqref{eq3.2.3} and~\eqref{eq3.2.4} with~\eqref{eq3.2.2} results in the inequality~\eqref{th3.2-ineq}.
This completes the proof of Theorem~\ref{th3.2}.
\end{proof}
\begin{corollary}
Under conditions of Theorem~\ref{th3.2},
\begin{enumerate}
\item
when $m=1$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(b)}2-\frac{1}{b-a}\int_{a}^{b}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(b-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\*
\le\frac12\frac{(b-a)^n}{n!}\biggl[\frac{n^{q+1}-(n-2)^{q+1}}{2(q+1)}\biggr]^{1/q} \biggl\{\frac1{p(n-1)+\alpha+1}\bigl|f^{(n)}(a)\bigr|^p\\*
+\frac{\alpha}{[p(n-1)+1][p(n-1)+\alpha+1]}\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p};
\end{multline*}
\item
when $n=2$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x \biggr|\\
\le\frac{(mb-a)^2}2\biggl(\frac1{q+1}\biggr)^{1/q}
\biggl[\frac1{p+\alpha+1}\bigl|f''(a)\bigr|^p+\frac{m\alpha}
{(p+1)(p+\alpha+1)}\bigl|f''(b)\bigr|^p\biggr]^{1/p};
\end{multline*}
\item
when $m=\alpha=1$ and $n=2$, we have
\begin{equation} \label{e}
\biggl| \frac{f(a) +f(b) }{2}-\frac{1}{b-a}\int_{a}^{b}f(x) \td x\biggr|
\le\frac{(b-a)^{2}}{2(p+1)^{1/p}(q+2)^{1/q}} \biggl[\frac{(q+1)|f''(a)|^{q}+|f''(b)|^{q}}{q+1}\biggr]^{1/q},
\end{equation}
where $\frac{1}{p}+\frac{1}{q}=1$.
\end{enumerate}
\end{corollary}
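A quick numerical check of the special case~\eqref{e}: take $f=\exp$ on $[0,1]$, so $|f''(x)|^q=e^{qx}$ is convex for every $q$ and the corollary applies; the left-hand side equals $(3-e)/2$, and the bound holds for several conjugate pairs:

```python
import math

a, b = 0.0, 1.0
# |(f(a)+f(b))/2 - (1/(b-a)) * integral| for f = exp equals (3 - e)/2
lhs = abs((math.exp(a) + math.exp(b)) / 2
          - (math.exp(b) - math.exp(a)) / (b - a))

for p in (1.5, 2.0, 3.0, 5.0):
    q = p / (p - 1)                    # conjugate exponent, 1/p + 1/q = 1
    rhs = ((b - a)**2 / (2 * (p + 1)**(1/p) * (q + 2)**(1/q))
           * (((q + 1) * math.exp(a)**q + math.exp(b)**q) / (q + 1))**(1/q))
    assert lhs <= rhs                  # lhs = (3 - e)/2 ≈ 0.1409
```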
\begin{theorem}\label{th3.3}
Let $(\alpha,m)\in[0,1]\times(0,1]$ and $b>a>0$ with $a<mb$. If $f(x)$ is $n$-times differentiable on $[0,b]$ such that $\bigl|f^{(n)}(x)\bigr|\in L([0,mb])$ and $\bigl|f^{(n)}(x)\bigr|^p$ is $(\alpha,m)$-convex on $[0,mb]$ for $n\ge 2$ and $p\ge 1$, then
\begin{multline}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
\begin{aligned}
&\le\frac{(n-1)^{1-1/p}}2\frac{(mb-a)^n}{n!} \biggl\{\frac{(n-2)(pn-p+\alpha)+2(n-1)} {(pn-p+\alpha+1)(pn-p+\alpha+2)}\bigl|f^{(n)}(a)\bigr|^p\\
&\quad+m\bigg[\frac{(n-1)(pn-2p+2)}{(pn-p+1)(pn-p+2)}-
\frac{(n-2)(pn-p+\alpha)+2(n-1)} {(pn-p+\alpha+1)(pn-p+\alpha+2)}\biggr]\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p},
\end{aligned}
\end{multline}
where the sum above is understood to be $0$ when $n=2$.
\end{theorem}
\begin{proof}
Utilizing Lemma~\ref{lem2.1}, the H\"older integral inequality, and the $(\alpha,m)$-convexity of $\bigl|f^{(n)}(x)\bigr|^p$ yields
\begin{align*}
&\quad\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(mb-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
&\le\frac12\frac{(mb-a)^n}{n!}\int_{0}^1t^{n-1}(n-2t)\bigl|f^{(n)}(ta+m(1-t)b)\bigr|\td t\\
&\le\frac12\frac{(mb-a)^n}{n!}\biggl[\int_{0}^1(n-2t)\td t\biggr]^{1-1/p}\\
&\quad\times\biggl\{\int_{0}^1t^{p(n-1)}(n-2t)\bigl[t^{\alpha}|f^{(n)}(a)|^p +m(1-t^{\alpha})|f^{(n)}(b)|^p\bigr]\td t\biggr\}^{1/p}\\
&=\frac{(n-1)^{1-1/p}}2\frac{(mb-a)^n}{n!} \biggl\{\frac{(n-2)(pn-p+\alpha)+2(n-1)}{(pn-p+\alpha+1)(pn-p+\alpha+2)}\bigl|f^{(n)}(a)\bigr|^p+m\\
&\times\bigg[\frac{(n-1)(pn-2p+2)}{(pn-p+1)(pn-p+2)}-\frac{(n-2)(pn-p+\alpha)+2(n-1)}{(pn-p+\alpha+1)
(pn-p+\alpha+2)}\biggr]\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p}.
\end{align*}
This completes the proof of Theorem~\ref{th3.3}.
\end{proof}
\begin{corollary}
Under the conditions of Theorem~\ref{th3.3},
\begin{enumerate}
\item
when $m=1$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(b)}{2}-\frac{1}{b-a}\int_{a}^{b}f(x)\td x -\frac12\sum_{k=2}^{n-1}\frac{(k-1)(b-a)^k}{(k+1)!}f^{(k)}(a)\biggr|\\
\le\frac{(n-1)^{1-1/p}}2\frac{(b-a)^n}{n!} \biggl\{\frac{(n-2)(pn-p+\alpha)+2(n-1)}{(pn-p+\alpha+1)(pn-p+\alpha+2)}\bigl|f^{(n)}(a)\bigr|^p\\
+\biggl[\frac{(n-1)(pn-2p+2)}{(pn-p+1)(pn-p+2)}
-\frac{(n-2)(pn-p+\alpha)+2(n-1)}{(pn-p+\alpha+1)(pn-p+\alpha+2)}\biggr]\bigl|f^{(n)}(b)\bigr|^p\biggr\}^{1/p};
\end{multline*}
\item
when $n=2$, we have
\begin{multline*}
\biggl|\frac{f(a)+f(mb)}{2}-\frac{1}{mb-a}\int_{a}^{mb}f(x)\td x\biggr|
\le\frac{(mb-a)^2}4\biggl\{\frac2{(p+\alpha+1)(p+\alpha+2)}\bigl|f''(a)\bigr|^p\\
+m\biggl[\frac2{(p+1)(p+2)}-\frac2{(p+\alpha+1)(p+\alpha+2)}\biggr]\bigl|f''(b)\bigr|^p\biggr\}^{1/p};
\end{multline*}
\item
when $m=\alpha=1$ and $n=2$, we have
\begin{equation} \label{k}
\biggl|\frac{f(a) +f(b) }{2}-\frac{1}{b-a}\int_{a}^{b}f(x) \td x\biggr|\le\frac{(b-a) ^{2}}{2^{2-1/p}}\biggl[ \frac{(p+1) |f''(a)|^{p}+2|f''(b)|^{p}}{(p+1)(p+2)(p+3)}\biggr]^{1/p}.
\end{equation}
\end{enumerate}
\end{corollary}
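A numerical instance of~\eqref{k}: with $f(x)=x^4$ on $[1,2]$, where $|f''(x)|^p=(12x^2)^p$ is convex, the left-hand side equals $23/10$ exactly and the bound holds for several values of $p$:

```python
f = lambda x: x**4
fpp = lambda x: 12.0 * x**2                 # f''(x)
a, b = 1.0, 2.0

# exact left-hand side: (f(a)+f(b))/2 - (b^5 - a^5)/(5 (b-a)) = 2.3
lhs = abs((f(a) + f(b)) / 2 - (b**5 - a**5) / (5 * (b - a)))

for p in (1.0, 2.0, 3.0):
    rhs = ((b - a)**2 / 2**(2 - 1/p)
           * (((p + 1) * fpp(a)**p + 2 * fpp(b)**p)
              / ((p + 1) * (p + 2) * (p + 3)))**(1/p))
    assert lhs <= rhs
```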
\section{Applications to special means}
It is well known that, for positive real numbers $\alpha$ and $\beta$ with $\alpha\ne\beta$, the quantities
\begin{gather*}
A(\alpha,\beta)=\frac{\alpha+\beta}{2},\quad G(\alpha,\beta)=\sqrt{\alpha\beta}\,, \quad H( \alpha,\beta) =\frac{2}{1/\alpha+1/\beta},\\
I(\alpha,\beta)=\frac{1}{e}\biggl(\frac{\beta^{\beta}}{\alpha^{\alpha}}\biggr)^{1/(\beta-\alpha)}, \quad
L(\alpha,\beta)=\frac{\alpha-\beta}{\ln\alpha-\ln\beta},\quad
L_{r}(\alpha,\beta)=\biggl[ \frac{\beta^{r+1}-\alpha^{r+1}}{(r+1)(\beta-\alpha)}\biggr]^{1/r}
\end{gather*}
for $r\ne0,-1$ are respectively called the arithmetic, geometric, harmonic, exponential, logarithmic, and generalized logarithmic means.
\par
Based on the inequalities of Hermite-Hadamard type established in the above section, we now derive some inequalities for the means defined above.
\begin{theorem}\label{Prop1}
Let $r\in(-\infty,0)\cup[1,\infty)\setminus\{-1\}$ and $b>a>0$. Then, for $p,q>1$,
\begin{equation}\label{m}
|A(a^{r},b^{r})-[L_{r}(a,b)]^{r}|\le\frac{(b-a)^{2}r(r-1)}{2(p+1)^{1/p}(q+2)^{1/q}} \biggl[a^{(r-2)q}+\frac{b^{(r-2)q}}{q+1}\biggr]^{1/q},
\end{equation}
where $\frac1p+\frac1q=1$.
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{e} to the function $f(x)=x^{r}$.
\end{proof}
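For illustration, the means $A$ and $L_r$ can be implemented directly and the inequality~\eqref{m} checked numerically for $r=3$ on $[1,2]$, where $\bigl|f''(x)\bigr|^{q}=6^{q}x^{q}$ is convex for every $q>1$:

```python
def A(x, y):
    """Arithmetic mean."""
    return (x + y) / 2

def L_r(x, y, r):
    """Generalized logarithmic mean (r != 0, -1)."""
    return ((y**(r + 1) - x**(r + 1)) / ((r + 1) * (y - x)))**(1 / r)

a, b, r = 1.0, 2.0, 3.0
lhs = abs(A(a**r, b**r) - L_r(a, b, r)**r)     # |9/2 - 15/4| = 3/4

for p in (1.5, 2.0, 4.0):
    q = p / (p - 1)                            # conjugate exponent
    rhs = ((b - a)**2 * r * (r - 1)
           / (2 * (p + 1)**(1/p) * (q + 2)**(1/q))
           * (a**((r - 2) * q) + b**((r - 2) * q) / (q + 1))**(1/q))
    assert lhs <= rhs
```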
\begin{theorem}\label{Prop2}
Let $r\in(-\infty,0)\cup[1,\infty)\setminus\{-1\}$ and $b>a>0$. Then, for $p\ge1$,
\begin{equation}\label{l}
| A(a^{r},b^{r})-[L_{r}(a,b)]^{r}| \le\frac{(b-a)^{2}r(r-1)}{2^{2-1/p}} \biggl[\frac{(p+1)a^{(r-2)p} +2b^{(r-2)p}}{(p+1)(p+2)(p+3)}\biggr]^{1/p}.
\end{equation}
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{k} to the function $f(x)=x^{r}$.
\end{proof}
\begin{theorem}\label{Prop3}
Let $r\in(-\infty,0)\cup[1,\infty)\setminus\{-1\}$ and $b>a>0$. Then
\begin{equation}\label{n}
| A(a^{r},b^{r})-[L_{r}(a,b)]^{r}| \le\frac{(b-a)^{2}r(r-1)}{12}A\bigl(a^{r-2},b^{r-2}\bigr).
\end{equation}
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{k} for $p=1$ to the function $f(x)=x^{r}$.
\end{proof}
\begin{theorem}\label{Prop4}
Let $b>a>0$. Then for $p,q>1$ we have
\begin{equation}\label{o}
\biggl| \frac1{H(a,b)}-\frac1{L(a,b)}\biggr|
\le\frac{(b-a)^{2}}{(p+1)^{1/p}(q+2)^{1/q}}\biggl[\frac1{a^{3q}}+\frac1{(q+1)b^{3q}}\biggr]^{1/q},
\end{equation}
where $\frac1p+\frac1q=1$.
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{e} to the function $f(x)=\frac{1}{x}$.
\end{proof}
\begin{theorem}\label{Prop5}
Let $b>a>0$. Then for $p\ge1$ we have
\begin{equation}
\biggl| \frac1{H(a,b)}-\frac1{L(a,b)}\biggr|
\le\frac{(b-a)^{2}}{2^{1-1/p}[(p+2)(p+3)]^{1/p}} \biggl[\frac1{a^{3p}}+\frac2{(p+1)b^{3p}}\biggr]^{1/p}.
\end{equation}
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{k} to the function $f(x)=\frac{1}{x}$.
\end{proof}
\begin{theorem}\label{Prop6}
Let $b>a>0$. Then we have
\begin{equation}\label{r}
\ln \frac{I(a,b)}{G(a,b)}
\le\frac{(b-a)^{2}}{12}A\biggl(\frac1{a^{2}},\frac1{b^{2}}\biggr).
\end{equation}
\end{theorem}
\begin{proof}
This follows from applying the inequality~\eqref{k} for $p=1$ to the function $f(x)=-\ln x$.
\end{proof}
\begin{remark}
This paper is a combined version of the preprints~\cite{n-times-diferentiable-functions_m-convex.tex, try-too-to-arxiv.tex}.
\end{remark}
\subsection*{Acknowledgements}
The authors would like to thank Professors Bo-Yan Xi and Shu-Hong Wang at Inner Mongolia University for Nationalities in China for their helpful corrections to and valuable comments on the original version of this paper.
William here is actually the latest of the fan-made characters to be introduced to the cast as a result of the Peter & Company Animated Pilot Kickstarter, way back in 2012. The highest reward tier for the campaign allowed each contributor to create a character to become a permanent member of the cast.
William's creator is one of the most dedicated readers of the comic I've met so far, like a walking encyclopedia of P&C knowledge. It was a blast working with him to come up with William's final design and place within the comic.
The future of grandchildren of course! We just make ones that WANT to visit us.
And here starts a wonderful ship featuring an old man and his new found machine friend! I know for a fact that these two will hit it off so well with their combined antics! And I just enjoyed all of Proto's reactions!
Panel 5: I believe this is the beginning of a beautiful friendship.
A mage, Proto, not a wizard.
Though in all honesty, I just call both of them magic-users.
I have noticed that the words and pictures come out blurry when reading this comic and others on mobile rather than on a computer or tablet. Just wanted to point that out for the comics on Katbox. Some pages are blurred and others are not. It's a recent problem I've noticed, and I waited to see if it affected other comics. It did.
I like the new banner.
Chocolate: Is It a "Health Food"?
Chocolate is being touted as a health food now. Nutritionist Nancy Clark digs down into this to describe the good, bad, and ugly on chocolate as a healthy part of an athlete's diet. Is chocolate good for you? Better yet, what types of chocolate are the best?
"Chocolate! I try to stay away from it!!!" commented my client, a runner who described herself as having a rampant sweet tooth. For her, chocolate fits into the categories of junk food, guilty pleasure and ruiner of good intentions to lose weight. Yet, she also recognized there is potentially a happier side of the story. Ads for (dark) chocolate suggest chocolate is good for us. Chocolate comes from plants and contains the same health-protective compounds that are found in fruits and vegetables.
So what is the whole story on chocolate? Is it little more than an alluring form of refined sugar, saturated fat and empty calories? Or does chocolate (in moderation, of course) have positive qualities that might be beneficial for athletes?
Here are some nuggets of information about chocolate. I'll let you decide whether or not the health benefits of eating chocolate are greater than the health costs—and if you personally want to define chocolate as a "health food" within the context of your own sports diet.
People tend to eat chocolate in bursts—a lot in a day, such as on holidays or pre-menstrually—or none. The question arises: Would enjoying some chocolate every day help reduce an athlete's urge to binge-eat the whole bag of, let's say, M&Ms in a moment of weakness? That's a good question and one that needs to be researched. We do know that deprivation and denial of food contributes to overeating. You know the syndrome: "I'm starting my diet Monday morning, so Sunday is my last chance to eat chocolate…" and there goes the whole bag of M&Ms!
I invite my clients to try taking the "power" away from chocolate by enjoying a little bit every day, such as for dessert after lunch. Ideally, daily chocolate could reduce it to being simply a commonplace plant food, just like bran cereal, an apple or carrot sticks. Give it a try?
Some athletes claim they are "addicted" to chocolate. Perhaps "chocolate addicts" grew up in a household where the parents banned chocolate? Now, as grown-ups, maybe they rebel by eating Reese's Pieces by the bagful? Or are they "super tasters"—and the flavor of chocolate is just irresistible? Perhaps they have a genetic difference that makes chocolate highly attractive? Some day, genetic testing may help us find the answer to that question.
Chocolate is made from cocoa. Cocoa comes from a plant. It is a rich source of health-protective phytochemicals, just like you'd get from fruits, vegetables and whole grains. Two tablespoons of natural cocoa powder (the kind used in baking) offer the same antioxidant power as 3/4 cup blueberries or 1.5 glasses red wine.
Of all the types of chocolate, dark chocolate is the richest source of phytonutrients. Unfortunately, dark chocolate has a slightly bitter taste and most people prefer the sweeter milk chocolate. Maybe we should raise today's children on dark chocolate, so they will learn to prefer it…?
Cocoa increases blood flow to the brain. If this means you can process information better and faster—like calculate your split times or help your kids with their math homework—wouldn't that be a great excuse to enjoy chocolate?!
Chocolate is yummy! Most athletes love chocolate. Chocolate lovers don't want sugar-free or fat-free chocolate. They want the 100% real stuff! That's because consumers buy benefits, not products. Being yummy is a huge benefit!
During the recession in 2009, sales of Hershey's chocolates increased. Is that because worried people bought a moment of yummy, cheer-me-up chocolate? Or, did they simply settle for a bag of less expensive Hershey's Kisses instead of a box of pricey Godiva Chocolates? Regardless, chocolate seems to fit every mood, be it happy, sad, tired or celebratory.
Although the chocolate used in flavoring milk lacks the health-protectors found in dark chocolate, the yummy flavor makes chocolate milk a popular recovery drink. The sweetened chocolate offers carbs to refuel muscles; the milk offers protein to build and repair muscle. Plus, milk boosts intake of calcium and vitamin D, needed for strong bones.
Despite all this good news about chocolate, it is still just a candy and not a life-sustaining food. Yet, it does provide pleasure—and pleasure is certainly part of a health and wellness program, right?
The trick is to enjoy dark chocolate as part of the 100 to 150 "discretionary" sugar calories that can be part of your daily sports diet. As for me, I'll enjoy my dark chocolate during a long hike or bike ride. Tastes better than most engineered sports foods and nicely fuels both my body and my mind!
Nancy Clark, MS, RD, CSSD (Board Certified Specialist in Sports Dietetics) counsels both casual and competitive athletes in her practice at Healthworks, the premier fitness center in Chestnut Hill MA (617-383-6100). For fueling help, read her bestselling Sports Nutrition Guidebook and food guides for new runners, marathoners or soccer players. See www.nancyclarkrd.com and sportsnutritionworkshop.com.
Yamaha Superbike: No 50 for Laverty
Ron Lieback
For nine years, Eugene Laverty contested with the number 50 across many motorcycle racing disciplines. But during his debut year on the Yamaha Factory team for 2011 World Superbike, the Irishman has been forced to give up his number, and instead use 58.
Laverty, who took over the empty Superbike seat left by Cal Crutchlow, had to give up his number because Suzuki rider Sylvain Guintoli competes with 50 in the World Superbike series.
Eugene Laverty (Yamaha Sterilgarda Superbike) says: "I don't have an emotional attachment to it, but I do feel it has been very significant in my career so far, and for that reason I am a bit sad to be parting with it," he admits. "I don't have a choice, but I did have a choice in choosing my new number, 58."
"I like it because it's probably the nearest you can get to 50. I'm sure I'll go back to it when I get the opportunity. This year Sylvain has it but I'd definitely like to get it back as soon as I can."
The 24-year old first fielded the number 50 in 2002 during his debut in the British 125cc Championship, and has used it ever since.
Laverty competed in World Supersport on the Parkalgar Honda CBR600 in 2009 and 2010; those were his best seasons with the number, as he notched up 12 wins and twice finished championship runner-up.
While competing in the 250cc MotoGP Championship in 2008, Laverty rode as a wildcard in World Supersport, substituting for the injured Fabien Foret; he took second and third in those races.
One of the few moto journalists based on the East Coast, Ron Lieback joined the motorcycle industry as a freelancer in 2007 and is currently Editor at Large at Ultimate Motorcycling. He is also the author of 365 to Vision: Modern Writer's Guide (How to Produce More Quality Writing in Less Time).
\section{Introduction}
Inflationary cosmological models~\cite{Guth,Linde,Steinhardt,Sato1,Sato2,Kazanas}, which assume exponential expansion of the Universe in its early stage, have become very important in modern cosmology (see also \cite{Starobinsky}). The spacetime in the expansion stage of these models is approximately de~Sitter spacetime, which is the unique maximally symmetric solution to the Einstein equations with a positive cosmological constant (see, e.g., \cite{HawkingEllis}). Moreover, the Universe at present may approximately be de~Sitter spacetime since it appears to be undergoing accelerated expansion~\cite{Riess,Perlmutter}. (See \cite{Turner} for a recent review.) For these reasons quantum field theory in de~Sitter spacetime is attracting much attention recently. The cosmological constant problem~\cite{Weinberg}, the fact that the cosmological constant is much smaller than naturally expected from the Standard Model of particle physics, is another reason for studying quantum field theory in de~Sitter spacetime.
Some authors have suggested that the vacuum states in quantum field theories, including quantum gravity, may be unstable in de~Sitter spacetime (see, e.g., \cite{Yokoyama,Polyakov}). In particular, Polyakov~\cite{Polyakov} has pointed out that in this spacetime the free-theory vacuum state is generally unstable against spontaneous emission of Fock-space particles at tree level in the interaction picture in any interacting field theory such as general relativity. (We use the phrase ``Fock-space particles" to mean the quanta created by creation operators in a Fock space. They should not be confused with particles detected, e.g.~by an Unruh-DeWitt detector~\cite{Unruh,DeWitt}.) In Minkowski spacetime such processes are forbidden due to energy conservation because the vacuum state has the lowest energy. However, it is clear that the conservation laws in de~Sitter spacetime do not prevent such processes from occurring. (This conclusion should be true in any spacetime without a global timelike Killing vector.) Several authors have pointed out that interacting low-mass scalar fields have infrared-divergent $n$-point functions~\cite{Yokoyama,Polyakov,Sasaki}.
The relation between these infrared divergences and the spontaneous emission discussed here is not very clear since the latter occurs for interacting fields of any mass and spin.
In this paper we first calculate the rate per unit volume of spontaneous emission of four Fock-space particles in the interaction picture in the theory in which the conformally-coupled massless scalar field interacts through a $\varphi^4$ term. We also present the expression for the emission rate for a scalar field of arbitrary mass to emphasize that this process is not entirely an infrared effect. Then we discuss possible significance of apparent spontaneous emission of this kind in general. We use the metric signature $+---$ and natural units $\hbar = c = 1$ throughout this paper unless otherwise stated.
\section{Calculation of the spontaneous emission rate}
One can cover half of de~Sitter spacetime by the coordinates $(u,\mathbf{x})$ with the conformally-flat metric of the form
\begin{equation}
ds^2 = (Hu)^{-2}(du^2 - d\mathbf{x}\cdot d\mathbf{x})\,. \label{metric}
\end{equation}
Here, $\mathbf{x}$ is a three-dimensional vector, and the conformal time $u$ decreases from $\infty$ to $0$ towards the future. The constant $H$ is the Hubble constant, which gives the rate of expansion of the space.
The Lagrangian density of the conformally-coupled massless scalar field $\varphi(u,\mathbf{x})$ interacting through the $\varphi^4$ term is
\begin{equation}
{\cal L} = \sqrt{-g}\left[\frac{1}{2}(\nabla_\mu \varphi)(\nabla^\mu \varphi) - \frac{1}{12}R\varphi^2 - \frac{\lambda}{4!}\varphi^4\right]\,,
\end{equation}
where $g= -(Hu)^{-8}$ is the determinant of the metric tensor $g_{\mu\nu}$ and where the scalar curvature is given by $R=12H^2$. Treating the $\varphi^4$ term as an interaction term, we find that the free field (or the field in the interaction picture), $\varphi_I$, satisfies
\begin{equation}
\left(\Box + 2H^2\right)\varphi_I = 0\,.
\end{equation}
As is well known, this equation can be written as the massless scalar field equation for
$(Hu)^{-1}\varphi_I(u,\mathbf{x})$ in (part of) Minkowski spacetime with the metric $-du^2+d\mathbf{x}\cdot d\mathbf{x}$. Hence, the field $\varphi_I(u,\mathbf{x})$ can be expanded as
\begin{equation}
\varphi_I(u,\mathbf{x}) =
\int \frac{d^3\mathbf{k}}{(2\pi)^3}\left[ a(\mathbf{k})\frac{Hu}{\sqrt{2k}}e^{iku+i\mathbf{k}\cdot\mathbf{x}}
+ a^\dagger(\mathbf{k})\frac{Hu}{\sqrt{2k}}e^{-iku-i\mathbf{k}\cdot\mathbf{x}}\right]
\end{equation}
with $k\equiv \|\mathbf{k}\|$,
where the annihilation and creation operators satisfy the usual commutation relations,
\begin{equation}
\left[a(\mathbf{k}_1),a^\dagger(\mathbf{k}_2)\right] = (2\pi)^3\delta^3(\mathbf{k}_1-\mathbf{k}_2)\,,
\end{equation}
with all other commutators vanishing. The vacuum state $|0\rangle$ is defined by requiring that $a(\mathbf{k})|0\rangle = 0$ for all $\mathbf{k}$. This state is the standard vacuum state called the Euclidean (or Bunch-Davies)
vacuum~\cite{GibbonsHawking,BunchDavies}. (This choice of vacuum is implicit in an earlier work of Tagirov~\cite{Tagirov}.)
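As a quick numerical consistency check (ours, not part of the paper), one can verify that the mode function $\chi(u)\propto u\,e^{iku}$ appearing in the expansion above indeed satisfies $(\Box + 2H^2)\varphi_I = 0$ in the conformal coordinates $(u,\mathbf{x})$; a finite-difference sketch:

```python
import cmath

# A sketch (ours, not from the paper): check numerically that the mode
# function chi(u) = u exp(i k u), the u-dependent part of the expansion
# above, satisfies (Box + 2 H^2) phi = 0.  For phi = chi(u) exp(i k.x)
# in the conformal coordinates,
#   Box phi = (H u)^4 [ d/du( (H u)^{-2} chi'(u) ) + (H u)^{-2} k^2 chi(u) ].
H, k = 1.3, 2.0          # arbitrary positive test values (assumptions)

def chi(u):              # mode function with normalization constants dropped
    return u * cmath.exp(1j * k * u)

def chi_prime(u):        # exact derivative of chi
    return cmath.exp(1j * k * u) * (1 + 1j * k * u)

def g(u):                # (H u)^{-2} chi'(u)
    return chi_prime(u) / (H * u) ** 2

def residual(u, h=1e-5): # (Box + 2 H^2) acting on chi, spatial phase stripped
    g_prime = (g(u + h) - g(u - h)) / (2 * h)   # central difference
    box = (H * u) ** 4 * (g_prime + k ** 2 * chi(u) / (H * u) ** 2)
    return box + 2 * H ** 2 * chi(u)

for u in (0.5, 1.0, 2.0):
    assert abs(residual(u)) < 1e-4   # vanishes up to finite-difference error
```

The residual vanishes up to discretization error at every sampled $u$, confirming that these are solutions of the free field equation.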
We define the transition amplitude $\mathcal{A}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4)$ from the free-theory vacuum state $|0\rangle$ to a 4-Fock-space-particle state,
$$
|\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4\rangle = a^\dagger(\mathbf{k}_1)a^\dagger(\mathbf{k}_2)a^\dagger(\mathbf{k}_3)a^\dagger(\mathbf{k}_4)|0\rangle\,,
$$
to lowest order in $\lambda$ as
\begin{equation}
\mathcal{A}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4)
\equiv \int dud^3\mathbf{x}\,\sqrt{-g}\,
\langle \mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4|\mathcal{H}_I(u,\mathbf{x})|0\rangle\,,
\end{equation}
where
\begin{equation}
\mathcal{H}_I(u,\mathbf{x}) = \frac{\lambda}{4!}\left[\varphi_I(u,\mathbf{x})\right]^4\,.
\end{equation}
We readily obtain
\begin{equation}
\mathcal{A}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4)
= \lambda \int_0^\infty du\, \frac{e^{-i(k_1+k_2+k_3+k_4)u}}{4\sqrt{k_1k_2k_3k_4}}(2\pi)^3\delta^3(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3+\mathbf{k}_4)\,.
\label{amp}
\end{equation}
It is possible to integrate over $u$ after introducing a cutoff by letting $k_1+\cdots+k_4 \to k_1+\cdots+k_4-i\epsilon$, where $\epsilon$ is an arbitrarily small positive number. Nevertheless, we leave this integral as it is until we square the amplitude in order to factor out the infinite spacetime volume from the transition probability to find the transition rate per unit volume.
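For reference, this intermediate step (ours, not spelled out in the text) gives
\[
\int_0^\infty du\, e^{-i(K-i\epsilon)u} = \frac{1}{i(K-i\epsilon)} \to -\frac{i}{K}
\quad (\epsilon\to 0^+)\,, \qquad K \equiv k_1+k_2+k_3+k_4\,.
\]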
First we make the change of variable $u = H^{-1}e^{-Ht}$, where $t$ is the proper time for a geodesic observer with $\mathbf{x}$ constant. Then the metric (\ref{metric}) takes the form
\begin{equation}
ds^2 = dt^2 - e^{2Ht}d\mathbf{x}\cdot d\mathbf{x}\,. \label{metric2}
\end{equation}
This form of the metric shows clearly that the space expands with Hubble constant $H$.
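Explicitly (our intermediate step): since $du = -e^{-Ht}\,dt$ and $(Hu)^{-2} = e^{2Ht}$, the line element (\ref{metric}) becomes
\[
ds^2 = e^{2Ht}\left(e^{-2Ht}\,dt^2 - d\mathbf{x}\cdot d\mathbf{x}\right)
= dt^2 - e^{2Ht}\,d\mathbf{x}\cdot d\mathbf{x}
\]
in the $+---$ signature adopted here.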
The transition probability is
\begin{eqnarray}
P & = & \frac{1}{4!}\int \frac{d^3\mathbf{k}_1}{(2\pi)^3}\frac{d^3\mathbf{k}_2}{(2\pi)^3}
\frac{d^3\mathbf{k}_3}{(2\pi)^3}\frac{d^3\mathbf{k}_4}{(2\pi)^3}
|\mathcal{A}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3,\mathbf{k}_4)|^2\nonumber \\
& = &
\frac{\lambda^2}{4!}\int \frac{d^3\mathbf{k}_1d^3\mathbf{k}_2 d^3\mathbf{k}_3d^3\mathbf{k}_4}{(2\pi)^{12}}
\int_{-\infty}^\infty dt_1 \int_{-\infty}^\infty dt_2 \frac{e^{-H(t_1+t_2)}}{16k_1k_2k_3k_4}\nonumber \\
&& \times\exp\left[\frac{i(k_1+k_2+k_3+k_4)}{H}\left(e^{-Ht_1} - e^{-Ht_2}\right)\right]
(2\pi)^3 \delta^3(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3+\mathbf{k}_4) V_c\,,
\end{eqnarray}
where we have interpreted $(2\pi)^3\delta^3(\mathbf{0})$ in the momentum space as the infinite {\em coordinate} volume
$V_c=\int d^3\mathbf{x}$~\cite{Bjorken}. Changing the variables again as $T=(t_1+t_2)/2$ and $\tau = t_2-t_1$, we find
\begin{eqnarray}
P & = & \frac{\lambda^2}{4!}\int \frac{d^3\mathbf{k}_1d^3\mathbf{k}_2d^3\mathbf{k}_3d^3\mathbf{k}_4}{16(2\pi)^9 k_1k_2k_3k_4}
\delta^3(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3+\mathbf{k}_4)\nonumber \\
&& \times \int_{-\infty}^\infty dT \int_{-\infty}^\infty d\tau
\,e^{-2HT}\exp\left[\frac{2i(k_1+k_2+k_3+k_4)}{H}e^{-HT}\sinh \frac{H\tau}{2}\right]V_c\,.
\end{eqnarray}
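The exponent here follows from the elementary identity (with $t_1 = T - \tau/2$, $t_2 = T + \tau/2$, and unit Jacobian)
\[
e^{-Ht_1} - e^{-Ht_2} = e^{-HT}\left(e^{H\tau/2} - e^{-H\tau/2}\right)
= 2\,e^{-HT}\sinh\frac{H\tau}{2}\,.
\]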
It is useful to multiply this expression by $\delta(k_1+k_2+k_3+k_4 - K)$ and integrate over $K$ from $0$ to $\infty$ and then change the integration variables from $\mathbf{k}_i$ to $\mathbf{y}_i = K^{-1}\mathbf{k}_i$, $i=1,2,3,4$. Then, using the result
\begin{equation}
\int\frac{d^3\mathbf{y}_1d^3\mathbf{y}_2d^3\mathbf{y}_3d^3\mathbf{y}_4}{y_1y_2y_3y_4}\delta(y_1+y_2+y_3+y_4-1)\delta^3(\mathbf{y}_1+\mathbf{y}_2+\mathbf{y}_3+\mathbf{y}_4) = \frac{\pi^3}{4}\,,
\end{equation}
where $y_i \equiv \|\mathbf{y}_i\|$, $i=1,2,3,4$, we find
\begin{equation}
P = \frac{\lambda^2}{3(8\pi)^6}\int_{-\infty}^\infty dT \int_0^\infty dK\,K^4\int_{-\infty}^\infty d\tau\,
e^{-2HT}\exp\left(\frac{2iK}{H}e^{-HT}\sinh \frac{H\tau}{2}\right)V_c\,.
\end{equation}
Now, the metric given by (\ref{metric2}) shows that at time $T$ the proper distance $\ell_p$ between two points $\mathbf{x}_1$ and $\mathbf{x}_2$ is related to the coordinate distance $\ell_c=\|\mathbf{x}_1-\mathbf{x}_2\|$
by $\ell_p = e^{HT}\ell_c$. Thus, the physical wave number vector of the mode with label $\mathbf{k}$ at time $T$ is $e^{-HT}\mathbf{k}$. This fact motivates the change of variable from $K =k_1+k_2+k_3+k_4$ to
$\kappa = (2K/H)e^{-HT}$, which is roughly the typical wave number of the emitted Fock-space particles in units of the Hubble constant $H$. (A similar change of variables is used in the standard calculation for the response rate of a uniformly accelerated detector in Minkowski spacetime~\cite{BirrellDavies}.) Then the probability $P$ can be written as
\begin{equation}
P = \int_{-\infty}^\infty dT\,\Gamma\,V_c e^{3HT}\,,
\end{equation}
where $V_c e^{3HT}$ can be interpreted as the total {\em proper} volume of the Universe at time $T$, and where $\Gamma$ is interpreted as the emission rate per unit volume and given by
\begin{equation}
\Gamma =\frac{\lambda^2H^4}{48(8\pi)^6}\int_0^\infty d\kappa\,\kappa^4
\int_{-\infty}^\infty d\eta\,
\exp\left(i\kappa \sinh \eta\right)\,.
\end{equation}
Here we have made a further change of variable $H\tau/2 = \eta$.
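The prefactor can be checked directly (our bookkeeping): with $K = \tfrac{1}{2}He^{HT}\kappa$ and $\tau = 2\eta/H$ one has $dK\,K^4 = (H/2)^5 e^{5HT}\,d\kappa\,\kappa^4$ and $d\tau = (2/H)\,d\eta$, so that
\[
\frac{\lambda^2}{3(8\pi)^6}\,e^{-2HT}\,dK\,K^4\,d\tau
= e^{3HT}\,\frac{\lambda^2 H^4}{48(8\pi)^6}\,d\kappa\,\kappa^4\,d\eta\,,
\]
reproducing the rate per unit proper volume quoted above.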
This integral can be evaluated using standard integrals~\cite{GR} as
\begin{eqnarray}
\Gamma & = & \frac{\lambda^2H^4}{24(8\pi)^6c^3}\int_0^\infty d\kappa\,\kappa^4\,K_0(\kappa) \\
& = & \frac{3\lambda^2H^4}{4(16\pi)^5c^3}\,,
\end{eqnarray}
where we have restored the speed of light $c$ by dimensional analysis.
(The rate $\Gamma$ is independent of $\hbar$.) Since
$K_0(\kappa) \approx \sqrt{\pi}(2\kappa)^{-1/2}e^{-\kappa}$ for $\kappa \gg 1$,
we find that emission of modes with wavelengths much shorter than the horizon scale, $c/H$, is suppressed and that the emission is dominated by modes with wavelengths comparable to $c/H$.
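As a numerical cross-check (ours, not part of the original calculation), the integral $\int_0^\infty d\kappa\,\kappa^4 K_0(\kappa) = 9\pi/2$ can be verified with the representation $K_0(\kappa)=\int_0^\infty e^{-\kappa\cosh t}\,dt$, which reduces it to $24\int_0^\infty \operatorname{sech}^5 t\,dt$:

```python
import math

# A numerical cross-check (ours): using K_0(x) = int_0^inf exp(-x cosh t) dt
# and int_0^inf x^4 exp(-a x) dx = 24 / a^5, the kappa integral reduces to
#   int_0^inf kappa^4 K_0(kappa) d kappa = 24 int_0^inf sech^5 t dt = 9 pi / 2.

def sech5(t):
    return 1.0 / math.cosh(t) ** 5

def simpson(f, a, b, n=200_000):           # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

integral = 24 * simpson(sech5, 0.0, 20.0)  # tail beyond t = 20 is negligible
assert abs(integral - 9 * math.pi / 2) < 1e-8

# prefactor bookkeeping:
#   lambda^2 H^4 / (24 (8 pi)^6) * 9 pi / 2 = 3 lambda^2 H^4 / (4 (16 pi)^5)
lhs = (9 * math.pi / 2) / (24 * (8 * math.pi) ** 6)
rhs = 3 / (4 * (16 * math.pi) ** 5)
assert abs(lhs - rhs) < 1e-12 * rhs
```

Both assertions pass, confirming the value $9\pi/2$ and the coefficient in the closed-form expression for $\Gamma$.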
As we stated before, the rate is nonzero for a scalar field of arbitrary mass. Let the field be conformally coupled and have mass $m$. Define $\nu \equiv \sqrt{1/4 - (mc^2/\hbar H)^2}$. Then the emission rate per unit volume will be
\begin{eqnarray}
\Gamma_m & = & \frac{\lambda^2 H^4e^{-4{\rm Im}\,\nu}}{3(16\pi)^5c^3}
\int d^3\mathbf{y}_1d^3\mathbf{y}_2d^3\mathbf{y}_3 d^3\mathbf{y}_4\delta^3(\mathbf{y}_1
+ \mathbf{y}_2+\mathbf{y}_3+\mathbf{y}_4)\nonumber \\
&& \times \delta(y_1+y_2+y_3+y_4-1)
\lim_{\epsilon\to 0+}\left|\int_0^\infty dx\,x^{7/2}
e^{-\epsilon x}\prod_{i=1}^4H_\nu^{(1)}(y_ix)\right|^2\,,
\end{eqnarray}
where ${\rm Im}\,\nu = \sqrt{(mc^2/\hbar H)^2-1/4}$ if $mc^2/\hbar H > 1/2$.
\section{Summary and Discussions}
One might be tempted to conclude from the calculation in the previous section that the vacuum state would be unstable in the $\varphi^4$ theory in de Sitter spacetime. However, since the states are not evolved by the true Hamiltonian in the interaction picture used in our calculation, it is not clear whether or not the apparent spontaneous emission process in this picture provides a good description of a physical process. Nevertheless, our calculation clearly demonstrates that the in-vacuum state, i.e. the no-Fock-space-particle state in the infinite past, evolves to a state with infinitely many Fock-space particles relative to the out-vacuum state in the infinite future. Thus, the in- and out-vacuum states are orthogonal to each other as emphasized by Polyakov~\cite{Polyakov}. This means that the in-out perturbation theory is inadequate for the $\varphi^4$ theory and other interacting field theories in de Sitter spacetime since it presupposes some overlap between these vacuum states~\cite{Peskin}. This point can be illustrated more clearly by a free scalar field theory with a small mass term treated as a perturbation~\cite{unpublished}.
Assuming that the spontaneous emission process in the interaction picture studied in this paper gives a good description of a true physical process, let us consider how it would appear to an inertial observer. Such an observer would describe her quantum field theory using the symmetry generated by the timelike Killing vector inside her cosmological horizon as the time translation symmetry. In this description of the field theory the energy can be defined as the conserved quantity corresponding to this Killing vector, and the vacuum state, distinct from the Euclidean vacuum, can be defined as the lowest energy eigenstate. Therefore there cannot be spontaneous emission in this description.
Now, the Euclidean vacuum is seen as a thermal bath of Gibbons-Hawking temperature $H/2\pi$ in this description of the field theory inside the cosmological horizon~\cite{GibbonsHawking}, and the Fock-space particles in this thermal bath interact with one another. For example, there are scattering processes. The natural conclusion, therefore, is that spontaneous emission of Fock-space particles in the global description of the field theory is seen by an inertial observer as a process involving some initial Fock-space particles, e.g.~scattering of two Fock-space particles in the thermal bath. This conclusion is analogous to the well-known fact that when an accelerated detector in Minkowski spacetime absorbs a quantum, it {\em emits} a usual Minkowski particle~\cite{UnruhWald}. (See, e.g., \cite{CHM} for some more examples illustrating how the inertial and accelerated observers describe the same phenomenon differently in Minkowski spacetime.) Given that an inertial observer does not see any spontaneous emission in de~Sitter spacetime, it will be interesting to determine whether or not she sees anything unusual in the Gibbons-Hawking thermal bath.
It will also be interesting to determine the state to which the free-theory vacuum evolves in these processes in the interaction picture. Since the Euclidean vacuum state is de~Sitter invariant, it cannot make a transition to a de~Sitter non-invariant state in perturbation theory. Since there are no de~Sitter invariant states other than the vacuum state in the Fock space of the free theory, it might appear that there would be a contradiction. However, it is known that one can construct de~Sitter invariant states with infinite norm which nevertheless form a well-defined Hilbert space~\cite{Higuchi1}. These states were constructed in connection with ``quantum linearization instabilities", which lead to the requirement that all physical states be de~Sitter invariant~\cite{Moncrief1,Moncrief2} (see also \cite{Higuchi2}). (For recent work on quantum linearization instabilities of de~Sitter spacetime see \cite{Losic,Marolfetal,Marolfetal2}.)
It is natural to speculate that the free-theory vacuum state makes a transition to one of these de~Sitter invariant states in an interacting field theory.
\acknowledgments
We thank Don Marolf for helpful comments on an earlier version of this paper, Yen Cheong Lee for useful discussions and Professor Starobinsky for useful correspondence.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,485 |
Poire à la beaujolaise (pear in Beaujolais), or poire au vin (pear in wine), is a traditional dessert of the Beaujolais wine region (Burgundian cuisine, Lyonnaise cuisine).
Characteristics
This dessert makes use of fruit that does not ripen satisfactorily on the tree, whose flesh becomes grainy. The pear must therefore be picked when barely ripe, to allow its starch to turn into sugar.
History
The dish has changed considerably over time. At the beginning of the century there was a recipe for "compote de poires à la bonne femme". The pears were cooked in a pan with red wine, sugar, a piece of cinnamon and cloves. Because they wrinkled as they cooked, they were given the name "poires à la bonne femme".
But cooking did not always give this compote the desired red colour. A little prepared cochineal was therefore added, and the compote was preserved by placing a pewter spoon in the pot. Gaston Bachelard explained this in Le Matérialisme rationnel:
Varieties
The varieties most used in the past were the Messire Jean and bon-chrétien pears, white and red. Nowadays they are Passe-Crassane and Conference.
Ingredients
Pears in red wine are prepared with a fruity wine (in this case a Beaujolais), sugar or honey, cloves, peppercorns, a piece of cinnamon stick, a small vanilla pod and orange zest. Ginger and cardamom can also be used, along with a little blackcurrant or raspberry liqueur to reinforce the red colour.
Preparation
The pears are poached in a wine-rich syrup with the spices and zest. The whole is brought to the boil, and the pears should remain slightly firm. They are then drained, which allows the wine to be reduced further by boiling it down into a slightly thicker sauce. The dessert is served chilled or warm.
Notes and references
See also
Bibliography
Related articles
Beaujolais, Beaujolais wine region
Burgundian cuisine
Lyonnaise cuisine
List of French regional pastry and dessert specialities
Wine and cuisine
Dessert
Wine and cuisine
Burgundian cuisine
Gastronomy in the Rhône
Pear-based specialities
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,872 |
What is the Monaco Blockchain Conference?
The Monaco Blockchain Conference is a coming together of some of the leading investors in the technology sector from around the world. This event is not trying to sell 10,000 tickets; it is very much about the quality of guests. It is a chance for Funds, Family Offices and High Net Worth investors to learn at a much deeper level about the impact Blockchain is already having, learn how to spot a strong project, and of course be pitched by some of the leading Blockchain projects out there right now.
What made you decide to put the event on?
With literally thousands of Blockchain events taking place around the world every year, there was certainly no shortage of options for people to go to. I felt that the biggest issue was companies paying pretty high costs to conferences with no real or clear path to getting in front of investors, which, let's face it, is the main aim of any Blockchain project until it is fully funded. So we decided that it was time to bring down the numbers on the day and make sure projects were able to relax, discuss, pitch and network in an investor-rich audience.
Monaco is not somewhere you would instantly think of when you say 'Blockchain', but for us the reasons are pretty simple. Blockchain as a technology and industry is already running deep within some of the largest companies in the world, from American Express and JP Morgan Chase to Lamborghini and many more. When you look at Monaco, it controls approx. $400 Billion of investments each year from a country that is only 2 square kilometers in size. That is not something that could or should be ignored. There is most definitely a hunger for knowledge and learning in the Principality, to gain a deeper understanding of Blockchain not just as a technology but as an investment opportunity.
How will the conference be structured?
As much as I am sure everyone enjoys sitting in the same seat for 7 hours straight this is not how we are doing it. We want to create a very relaxed environment where people can listen to some of the worlds leading experts on market making, tokenomics or how to get from Seed to Series A. We will have a beautiful lunch out on the sun terrace for people to talk and connect. We will have a pitching competition from some really exciting Blockchain companies. Followed by a champagne networking event on the roof of the Fairmont at the stunning Nikki Beach. For those wanting to continue discussions into the evening we will also be holding a 500 person cocktail party at Twiga in Monte Carlo.
Who will be speaking at the conference?
We are very lucky to have some true pioneers and leading figures joining us. The conference will be opened by Giuseppe Ambrosio – President of the Monaco Single and Multi Family Office Association, Oliver Harris – Head of Crypto Assets Strategy at JP Morgan Chase, and Shawn Broderick – General Partner at SOSV. We have the author of the fascinating book Tokenomics: The Crypto Shift of Blockchains, ICOs and Tokens – Thomas Power. We have Anne Ravanona, CEO & Founder of Global Invest Her, Athanasios Ladopoulos, Chief Investment Officer for LAPO AG, and Simon Cocking, Chief Editor of the Irish Tech News and a top-five global advisor on Blockchain, plus many more. We have investment funds from throughout Europe joining us, Family Offices and individual investors, as well as the leaders of some of the top Blockchain projects, of course.
Who should be attending?
There are two main types of people who should not miss this event. The first is those who represent or personally make investments. Blockchain may still be in its infancy, but we all know knowledge is power, so if you are reading this and you think Blockchain is just Bitcoin then I would strongly encourage you to come and join us. You will learn about the technology, and hear from investors on their own market entry points and what they look for in a project. On the other side of the coin (no pun intended), this is the perfect event for Blockchain projects looking to raise capital. They will not attend an event in 2019 with a better ratio of investors to projects, and the chance to pitch to an audience in this setting is a really great opportunity. Along with our Media Partners such as CoinTelegraph and Monaco Life, the exposure they will enjoy by attending and pitching will be out of this world.
When and where is the event being held?
The event is being held at the Fairmont Hotel in Monte Carlo, Monaco, on Friday May 31st 2019.
How can people get a ticket?
My advice would be to not delay in getting your ticket. We want the event to be private and exclusive on the day, so we will not increase the number of tickets available. Finally, we look forward to warmly welcoming you in May.
"redpajama_set_name": "RedPajamaC4"
} | 1,107 |
{"url":"http:\/\/codeforces.com\/problemset\/problem\/955\/F","text":"F. Heaps\ntime limit per test\n2.5 seconds\nmemory limit per test\n512 megabytes\ninput\nstandard input\noutput\nstandard output\n\nYou're given a tree with n vertices rooted at 1.\n\nWe say that there's a k-ary heap of depth m located at u if the following holds:\n\n\u2022 For m\u2009=\u20091 u itself is a k-ary heap of depth 1.\n\u2022 For m\u2009>\u20091 vertex u is a k-ary heap of depth m if at least k of its children are k-ary heaps of depth at least m\u2009-\u20091.\n\nDenote dpk(u) as maximum depth of k-ary heap in the subtree of u (including u). Your goal is to compute .\n\nInput\n\nThe first line contains an integer n denoting the size of the tree (2\u2009\u2264\u2009n\u2009\u2264\u20093\u00b7105).\n\nThe next n\u2009-\u20091 lines contain two integers u, v each, describing vertices connected by i-th edge.\n\nIt's guaranteed that the given configuration forms a tree.\n\nOutput\n\nExamples\nInput\n41 32 34 3\nOutput\n21\nInput\n41 22 33 4\nOutput\n22\nNote\n\nConsider sample case one.\n\nFor k\u2009\u2265\u20093 all dpk will be equal to 1.\n\nFor k\u2009=\u20092 dpk is 2 if and 1 otherwise.\n\nFor k\u2009=\u20091 dpk values are (3,\u20091,\u20092,\u20091) respectively.\n\nTo sum up, 4\u00b71\u2009+\u20094\u00b71\u2009+\u20092\u00b72\u2009+\u20092\u00b71\u2009+\u20093\u2009+\u20091\u2009+\u20092\u2009+\u20091\u2009=\u200921.","date":"2018-06-19 10:47:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 
0.28577032685279846, \"perplexity\": 1294.4966573046654}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-26\/segments\/1529267862248.4\/warc\/CC-MAIN-20180619095641-20180619115641-00170.warc.gz\"}"} | null | null |
'use strict';
var React = require('react');
var ReactDOM = require('react-dom');
var dis = require('matrix-react-sdk/lib/dispatcher');
module.exports = React.createClass({
displayName: 'RoomTooltip',
componentDidMount: function() {
var tooltip = ReactDOM.findDOMNode(this);
if (!this.props.bottom) {
// tell the roomlist about us so it can position us
dis.dispatch({
action: 'view_tooltip',
tooltip: tooltip,
});
}
else {
tooltip.style.top = (70 + tooltip.parentElement.getBoundingClientRect().top) + "px";
tooltip.style.display = "block";
}
},
componentWillUnmount: function() {
if (!this.props.bottom) {
dis.dispatch({
action: 'view_tooltip',
tooltip: null,
});
}
},
render: function() {
var label = this.props.room ? this.props.room.name : this.props.label;
return (
<div className="mx_RoomTooltip">
<img className="mx_RoomTooltip_chevron" src="img/chevron-left.png" width="9" height="16"/>
{ label }
</div>
);
}
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 92 |
{"url":"https:\/\/electronics.stackexchange.com\/questions\/18676\/how-does-a-ram-work-with-a-cpu","text":"# How does a RAM work with a CPU?\n\nI'm reading an assembly language book. The problem with this book is that it tries to explain how a ram works with a CPU, but it doesn't explain in depth.\n\nI would like to know how a memory cell, address line, and data pin work when storing or retrieving data.\n\nAll memory cells have their level, either 0 or 1. The CPU tells the memory device which cells it needs the binary values of, and supplies this address to the memory device. Inside the memory device the address is decoded in a row and column address, and the cell at that position in the matrix is allowed to gets its data to the databus, i.e. the data pin.\n\nLet's say we have an 8-bit address 01100101. This will be split up in a row address 0110 (the high order nibble) and a column address 0101 (the low order nibble). The row address selects row #06, so all cells connected to this row will have their data ready. The column address selects the cell at column #05 of this row, so that finally only one single cell is allowed to place its data to the output pin.\n\nStoring data follows the same pattern: only one row is selected, and the cell at the given column in that row will get the data present on the pin stored.\n\nThis is for 1 bit. The operation occurs for the full data word width simultaneously, so if you have a byte-wide memory, 8 bits are retrieved and their value placed on 8 databus pins.\n\nedit\n\nThis is a representation of a DRAM array, where data is stored in the charge of the capacitors, each capacitor is one bit. The row part of the address (here A1..A0) selects a row, which means they activate all FETs on that row, so that the levels of the capacitors for that row become available on their corresponding column. 
Then the column address selection block, controlled by the other part of the address, A3..A2, selects the one bit which we want the data from.\n\nDRAM is easy to build, but has a nasty disadvantage: reading the data discharges the capacitor, so the data is lost. To counter this DRAM has sense amplifiers, which detect the current memory cell status and refresh it when read. In addition this refresh has to be done periodically because the capacitors' charge will leak away even when the memory isn't read. The need for refresh circuitry is easily compensated for thanks to the DRAM's cells' compactness.\n\nSRAM uses a couple of transistors to store the data, and it isn't volatile in the way DRAM is (though the data is still gone when you switch the power off). With EEPROM and Flash the data is stored in the (insulated) floating gate of a FET, and therefore it won't lose its data when power is switched off.","date":"2019-08-20 03:38:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.23823218047618866, \"perplexity\": 1120.863991733115}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027315222.14\/warc\/CC-MAIN-20190820024110-20190820050110-00122.warc.gz\"}"} | null | null |
{"url":"http:\/\/gardensofinfinity.com\/first\/first.html","text":"# First\n\nClaim: digital and countable infinity are the same size; or, $$|\\mathscr{D}| = |\\mathbb{N}|$$.\n\nCountable infinity is the size of the set $$\\mathbb{N} = \\{0, 1, 2, 3, \u2026\\}$$.\n\nRemember that by digital infinity we mean the collection of all digital information ever. All the books, films, digitized music. All the content on the internet, and on every hard drive. Imagine our civilization living forever, digitizing all of our art and knowledge, our communications, all our documentations of life, generation after generation, with finer and finer detail. Call this set $$\\mathscr{D}$$.\n\nThe claim is that digital and countable infinity are the same size, in other words that $$|\\mathscr{D}| = |\\mathbb{N}|$$. Before we can explain why this is true, we need to make digital infinity more mathematically precise.\n\nFirst, consider how much any one person contributes to digital infinity. Digital implies discrete. We take all our art and knowledge, and turn it into ones and zeros. Each human's digital life is a mass of discrete bits of information. These discrete bits accumulate just as the discrete natural numbers $$\\{0, 1, 2, 3, \u2026\\}$$ accumulate.\n\nOver time, as digitization technology develops, we gather finer and finer detail as well -- our files grow from megabytes, to gigabytes, to terabytes. We envision, generation after generation, an average person\u2019s digital footprint growing, and, far enough into the future, accumulating beyond any finite discrete limit. In this sense, we imagine every individual contributing nothing more, and perhaps nothing less, than $$\\mathbb{N}$$ -- the discrete infinite -- to the larger set $$\\mathscr{D}$$.\n\nAnd now we imagine an ever-growing population doing the same. The population grows and grows, surpassing every finite discrete number, up towards $$\\mathbb{N}$$. 
For our infinite set $$\\mathscr{D}$$, we imagine an infinite population; a population of $$\\mathbb{N}$$. And each individual, in this limit, may contribute as much as $$\\mathbb{N}$$ to $$\\mathscr{D}$$.\n\n\"We envision, generation after generation, an average person\u2019s digital footprint growing, and, far enough into the future, accumulating beyond any finite discrete limit.\"\n\nDigital infinity, then, is at most an infinite population, with each individual contributing an infinite digital accumulation. But, as we\u2019ve just argued, these infinities are countable infinities, like $$\\mathbb{N}$$. Thus digital infinity, $$\\mathscr{D}$$, is no larger than $$\\mathbb{N}$$ copies of $$\\mathbb{N}$$ -- one copy for each individual. And, assuming an infinite population, it is at least as large as $$\\mathbb{N}$$. Mathematically, we can state these last two sentences as the following: $$|\\mathscr{D}| \u2264 |\\mathbb{N}\\times\\mathbb{N}|$$ and $$|\\mathscr{D}| \u2265 |\\mathbb{N}|$$.\n\n(In Basics of Set Theory, we explain how the set $$\\mathbb{N}\\times\\mathbb{N}$$ is like $$\\mathbb{N}$$ copies of $$\\mathbb{N}$$.) All that is left is to show that $$|\\mathbb{N}\\times\\mathbb{N}| = |\\mathbb{N}|$$.\n\nFor, once we have shown that $$|\\mathbb{N}\\times\\mathbb{N}| = |\\mathbb{N}|$$, we will have $$|\\mathscr{D}| \u2264 |\\mathbb{N}|$$ and $$|\\mathscr{D}| \u2265 |\\mathbb{N}|$$, which forces $$|\\mathscr{D}| = |\\mathbb{N}|$$. 
In other words, we will have shown that digital infinity is the same size as countable infinity.\n\nWe have reduced the claim that digital and countable infinity are the same size, to the claim that $$|\\mathbb{N}\\times\\mathbb{N}| = |\\mathbb{N}|$$.\n\nTop","date":"2018-12-10 11:33:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7413056492805481, \"perplexity\": 929.7659530872603}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-51\/segments\/1544376823322.49\/warc\/CC-MAIN-20181210101954-20181210123454-00088.warc.gz\"}"} | null | null |
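The step the page defers, showing that |N×N| = |N|, is usually witnessed by the Cantor pairing function, a bijection between N×N and N. A minimal sketch (the choice of pairing function is an illustration, not taken from the page):

```python
# Cantor pairing: a bijection N x N -> N, witnessing |N x N| = |N|.
import math

def pair(i: int, j: int) -> int:
    # Walk the grid along anti-diagonals: (0,0), (1,0), (0,1), (2,0), ...
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n: int) -> tuple[int, int]:
    # Invert: recover the anti-diagonal index w, then the offset j along it.
    w = (math.isqrt(8 * n + 1) - 1) // 2
    j = n - w * (w + 1) // 2
    return w - j, j

# Every pair maps to a distinct natural number, and back again.
assert all(unpair(pair(i, j)) == (i, j) for i in range(50) for j in range(50))
# The first ten anti-diagonals fill 0..54 with no gaps or repeats.
assert sorted(pair(i, j) for i in range(10) for j in range(10) if i + j < 10) == list(range(55))
```

Since `pair` is injective and hits every natural number, the two inequalities in the text collapse to the claimed equality.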
package com.yfiton.api.parameter.converters;
/**
 * Identity {@link Converter} that returns the supplied parameter value unchanged.
 *
 * @author lpellegr
 */
public final class StringConverter implements Converter<String> {
@Override
public String convert(String parameterName, String parameterValue) {
return parameterValue;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 33 |
The 1950 FIFA World Cup (, ) was the 4th FIFA World Cup. It was held from 24 June to 16 July 1950 on the grounds of Brazil.
It was the first World Cup since 1938 (the tournaments planned for 1942 and 1946 did not take place because of the Second World War).
It was won by Uruguay, who thereby repeated their result from the 1930 World Cup by beating the hosts, Brazil, 2:1 in the deciding match of a final group of four teams. It was the only tournament in which the champion and medalists were determined by a round-robin system. It was also the first tournament at which the trophy was named the Jules Rimet Trophy, to mark Rimet's 25 years as president of FIFA.
Host selection
The previous World Cup had taken place in 1938 in France. The planned 1942 and 1946 World Cups were subsequently cancelled because of the Second World War.
After the war, FIFA sought to revive the competition as soon as possible and began making plans for the next world championship tournament. Much of the world lay in ruins after the war. As a result, FIFA had a number of difficulties finding a country interested in hosting the tournament, since many governments considered it a priority to direct scarce funds to more urgent tasks (rebuilding the economy and the previous way of life) rather than to organizing sporting events.
Beyond these issues, the World Cup might not have taken place at all owing to an evident lack of interest from the international community, until, at the 25th FIFA Congress in Luxembourg in 1946, Brazil put forward a bid to host the competition on the condition that the tournament be held in 1949 (it was later moved to 1950). Moreover, before the outbreak of the Second World War, Brazil and Germany had been considered the main contenders to host the cancelled 1942 World Cup, since the 1934 and 1938 world championships had both been held in Europe. Football historians generally agreed that the 1942 World Cup would most likely have been held in one of the untroubled South American countries, which had also expressed interest in the matter. Brazil's new bid was very similar to its 1942 bid and was accepted unanimously by the Congress.
Participants
The right to take part in the World Cup went to 16 teams:
Europe:
 — world champions, qualified automatically for the final tournament
South America:
 — hosts of the championship, qualified automatically for the final tournament
North America:
Asia:
Turkey and Scotland withdrew before the draw. FIFA offered the vacated places to Portugal and France; Portugal declined, while France accepted the invitation. After the draw, France withdrew from the tournament, as did India. In the end, as in 1930, only 13 teams took part in the championship, the smallest number in history. The composition of the groups was not changed.
England took part in a World Cup for the first time.
Qualification
Automatic qualifiers
Besides the hosts, Italy, winners of the 1934 and 1938 championships, qualified automatically for the World Cup. A year before the tournament, tragedy struck Italy: the aircraft carrying the players of the Torino club, who formed the core of the national team, crashed. Despite this, Italy did not withdraw from the championship, though the players travelled to Brazil by ship.
The remaining 14 places were originally allocated as follows: seven teams from Europe, six from North and South America, and one from Asia.
Excluded from qualification
The national teams of Germany and Japan were barred from qualification as punishment for those countries' roles in starting the Second World War. Italy and Austria escaped these sanctions: the Italians, as world champions, were already in the tournament (see above), and Austria was admitted to qualification. Also absent from the qualifying tournament were Saarland (admitted to FIFA two weeks before the start of the World Cup) and East Germany (there was as yet no football association in East Germany).
British teams
The teams from the British Isles accepted an invitation to the World Cup for the first time in many years. To select the participating teams, the British Home Championship was played on a group basis, with the top two teams advancing to the finals. In the end, England earned a place as winners and Scotland as runners-up.
Withdrawals before qualification
Most teams declined the invitation to take part in the tournament because of financial problems. Among those that refused were China, the USSR, Bulgaria, Hungary, Greece and Czechoslovakia. The list of refusals also included Iceland, Poland, Romania (the latter two had played at the 1938 World Cup) and Albania. The need to rebuild each country's war-ravaged economy was cited as the reason. Thus, of all the countries of the socialist bloc, only Yugoslavia played in the qualifying tournament, and it subsequently earned a place at the World Cup.
Withdrawals during qualification
In South America, Argentina, Ecuador and Peru withdrew from the competition, with Argentina also managing to stir up a row with Brazil. The remaining teams, Chile, Bolivia, Paraguay and Uruguay, thus advanced to the finals automatically. In Asia, the Philippines, Indonesia and Burma refused to play, which allowed India to reach the finals without playing a match. In Europe, Austria and Belgium refused to play, fearing a rout in the qualifying tournament, and as a result Switzerland and Turkey advanced to the finals without playing. Finland, despite having fought on the side of Nazi Germany from 1941 to 1944, was admitted to qualification, but withdrew before it was completed, and FIFA declared the results of its matches friendlies.
Withdrawals after qualification
Yet the withdrawals did not stop there. In Scotland, the leadership of the football association threatened not to send the team to the finals unless it won the Home Championship, even though the regulations allowed qualification from second place as well. This outraged the captains of the Scottish and English teams, George Young and Billy Wright, who demanded that the leadership of the Scottish Football Association reverse the decision. The officials, however, were adamant, and Scotland withdrew from the tournament.
Turkey, having reached the finals, declined to take part for financial reasons (including the cost of travelling to South America). FIFA offered the places of Scotland and Turkey to Portugal and France, but only the French accepted the invitation.
Withdrawals after the draw
The draw for the tournament took place on 22 May 1950 in Rio de Janeiro, distributing the teams into four groups:
Group 1: Brazil, Mexico, Switzerland, Yugoslavia
Group 2: England, Chile, Spain, USA
Group 3: Italy, India, Paraguay, Sweden
Group 4: Uruguay, Bolivia, France
After the draw, India withdrew from the tournament, officially citing the high cost of travel (even though FIFA covered most of the expenses), lack of experience and an underestimation of the tournament (at the time the Olympic Games carried higher status). The real reason, however, was that in 1948, at the London Olympics, FIFA had approved a rule prohibiting playing barefoot, and the Indian team had played that way at the Olympics. The captain of the Indian team, Sailen Manna, himself rejected such accounts and stated that the team really could not play at the World Cup because of financial problems. France followed India's example, reducing the list of participants to 13 teams.
Participation statistics
England was the only debutant at the tournament. Yugoslavia played at a World Cup for the first time since 1930, and teams from Central and North America likewise appeared for the first time since 1930. For the USA and Bolivia, however, this tournament was the last before a long hiatus: the USA returned to the World Cup only in 1990, and Bolivia not until 1994.
Cities and stadiums
Squads
First round
Group 1
Group 2
Group 3
 was also drawn into Group 3, but withdrew from the tournament.
Group 4
 was also drawn into Group 4, but withdrew from the tournament.
Final round
Legacy
The deciding match between Brazil and Uruguay was not, formally speaking, a final, but by long-standing tradition it is nonetheless listed statistically alongside the other World Cup final matches.
In Uruguay, 16 July (the day of the decisive victory over Brazil) was declared a national holiday, which has been celebrated annually ever since.
After the defeat to the Uruguayans, the Brazilian national team permanently abandoned its then-traditional colours (white shirts with a blue collar, white shorts and socks) and has since played in the colours of the national flag (yellow shirts with a green collar, blue shorts, white socks).
In 2005, the feature film "The Game of Their Lives" was released, about the performance of the United States national team at the 1950 World Cup, in particular its legendary 1:0 victory over England, and about the lives of that team's players and coaches. The film is based on the book of the same name by Geoffrey Douglas.
The film "Pelé: Birth of a Legend".
Top scorers
Notes
Literature
External links
1950 FIFA World Cup on the official FIFA website
1950 FIFA World Cup at RSSSF
1950 FIFA World Cup
1950
International football competitions held in Brazil
1950 in football | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,481 |
{"url":"http:\/\/www.ipam.ucla.edu\/abstract\/?tid=13662&pcode=TDM2017","text":"## Real-World Turbulence as Dissipative Euler Solutions: A Physics Perspective\n\n#### Gregory EyinkJohns Hopkins University\n\nSince Onsager\u2019s proposal in 1949 that turbulent velocity fields at high Reynolds number\nmay be considered as dissipative weak solutions of the Euler equations, there has been extensive\nwork in the mathematics community, but almost no exploitation of the theory by physicists. This seems\nto be due to the fact that its physical meaning remains obscure to most fluids scientists. However, as\nwe show here, Onsager\u2019s theory can be understood most intuitively as an application of the concept\nof \u201crenormalization-group invariance\u201d to the problem of explaining the available experimental data on\nanomalous energy dissipation. Such anomalies imply diverging velocity gradients in the inviscid limit,\nor a violet catastrophe\u2019\u2019 in Onsager\u2019s own words. Regularizing this ultraviolet divergence, e.g. by a\nspatial coarse-graining at length-scale ??, leads to a description of turbulent velocities as \u201ccoarse-grained\nEuler solutions\u201d at scales ?? in the inertial range. As we show, this notion of \u201ccoarse-grained solution\u2019'\nis equivalent to the mathematical concept of \u201cweak solution\u201d when the length-scale ?? can be taken\narbitrarily small; it also underlies the practical turbulence modeling method of Large-Eddy Simulation.\nSince spatial coarse-graining is a purely passive operation (\u201cremoving one\u2019s spectacles\u201d), no physics\ncan depend upon the arbitrary scale ??. A consequence of this arbitrariness is Onsager\u2019s experimentally\nconfirmed prediction of (near) singularities with H\u00f6lder exponent h<1\/3. The fecundity of Onager\u2019s theory\nis then demonstrated by extending it to special-relativistic turbulence described by dissipative fluid models,\nsuch as the Israel-Stewart class. 
It is shown that a dissipative anomaly can appear in the internal energy\nbalance, which is due both to forward energy cascade and a new compressive anomaly called \u201cpressure-work\ndefect\u201d. Furthermore, there is a forward cascade of negentropy which is fed by the pressure-work defect,\nand which leads to an even more fundamental anomaly in the entropy balance. A renormalization-group\ninvariance argument like Onsager\u2019s establishes the fluid singularities required to sustain such cascades.\nAn interesting feature of the special-relativistic case is that there is no possible space-time coarse-graining\nwhich preserves Lorentz symmetry, but, similar to lattice quantum field-theory, the symmetry is restored in\nthe limit ???0. Thus cascade rates can be to some extent frame-dependent at finite Reynolds numbers.\n\nThis talk is based on joint work with Theodore D. Drivas.\n\nBack to Turbulent Dissipation, Mixing and Predictability","date":"2021-10-23 11:19:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7799535393714905, \"perplexity\": 3428.399071347734}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585671.36\/warc\/CC-MAIN-20211023095849-20211023125849-00154.warc.gz\"}"} | null | null |
using JetBrains.Annotations;
using Microsoft.EntityFrameworkCore;
using MultiDbContextExample.Models;
namespace MultiDbContextExample.Data;
[UsedImplicitly(ImplicitUseTargetFlags.Members)]
public sealed class DbContextB : DbContext
{
public DbSet<ResourceB> ResourceBs => Set<ResourceB>();
public DbContextB(DbContextOptions<DbContextB> options)
: base(options)
{
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,790 |
Q: Insert values into a table with a foreign key using MySQL I want to insert values into a table containing foreign keys, but when I add the foreign keys manually (for example, adding the id of the foreign key by writing its number), it doesn't work and gives me a syntax error.
#1064 - You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'FOREIGN KEY,
id_livreur int FOREIGN KEY,
id_plat int FOREIGN KEY,
id_dessert ' at line 4
Here is my table that contains foreign keys :
CREATE TABLE Commande
(
id_commande int NOT NULL AUTO_INCREMENT PRIMARY KEY,
id_client int FOREIGN KEY,
id_livreur int FOREIGN KEY,
id_plat int FOREIGN KEY,
id_dessert int FOREIGN KEY ,
prix_total int,
heure_estime time,
FOREIGN KEY (id_client) REFERENCES Client(id_client),
FOREIGN KEY (id_livreur) REFERENCES Livreur(id_livreur),
FOREIGN KEY (id_plat) REFERENCES Plat(id_plat),
FOREIGN KEY (id_dessert) REFERENCES Dessert(id_dessert)
) ENGINE=InnoDB;
And here are my inserts :
INSERT INTO Commande VALUES (1,1,1,1,1,45,now() + INTERVAL 20 minute);
INSERT INTO Commande VALUES (2,2,2,2,2,55,now() + INTERVAL 20 minute);
INSERT INTO Commande VALUES (3,3,3,3,3,75,now() + INTERVAL 20 minute);
INSERT INTO Commande VALUES (4,4,4,4,4,45,now() + INTERVAL 20 minute);
A: Remove "FOREIGN KEY" after each column definition; in MySQL, foreign keys are declared only as table-level constraints:
CREATE TABLE Commande
(
id_commande int NOT NULL AUTO_INCREMENT PRIMARY KEY,
id_client int,
id_livreur int,
id_plat int ,
id_dessert int,
prix_total int,
heure_estime time,
FOREIGN KEY (id_client) REFERENCES Client(id_client),
FOREIGN KEY (id_livreur) REFERENCES Livreur(id_livreur),
FOREIGN KEY (id_plat) REFERENCES Plat(id_plat),
FOREIGN KEY (id_dessert) REFERENCES Dessert(id_dessert)
) ENGINE=InnoDB;
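With the table defined this way, the original INSERTs succeed as long as the referenced parent rows already exist. A minimal sketch of that ordering rule, using Python's built-in sqlite3 in place of MySQL (the schema is trimmed to one parent table for brevity):

```python
# Foreign keys are table-level constraints, and parent rows must exist
# before child rows that reference them are inserted.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

con.execute("CREATE TABLE Client (id_client INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE Commande (
        id_commande INTEGER PRIMARY KEY,
        id_client   INTEGER,
        FOREIGN KEY (id_client) REFERENCES Client(id_client)
    )
""")

con.execute("INSERT INTO Client VALUES (1)")       # parent row first
con.execute("INSERT INTO Commande VALUES (1, 1)")  # ok: Client 1 exists

try:
    con.execute("INSERT INTO Commande VALUES (2, 99)")  # no Client 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```

The same principle applies to the four INSERTs in the question: rows with ids 1 through 4 must already exist in Client, Livreur, Plat and Dessert.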
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,522 |
{"url":"https:\/\/oeis.org\/wiki\/CiteP","text":"This site is supported by donations to The OEIS Foundation.\n\n# CiteP\n\n\"Steven R. Finch's incredible labor of love, an encyclopedia of mathematical constants, begins with such basics, then moves on to more elaborate topics. ... It appears astonishing to me that a single individual went through all these topics. His achievement can only be compared to the On-Line Encyclopedia of Integer Sequences.\" [Osmo Pekonen, 2019]\n\n\"As a researcher in combinatorics, one of my favorite tools is the On-Line Encyclopedia of Integer Sequences, or OEIS. This database was started by the mathematician Neil Sloane, who first started keeping an index of popular sequences of integers that came up in his work. At the time, Sloane was a graduate student at Cornell University. A photo of the first page of Sloane\u2019s notebook is shown in Figure 7.1. Recognize any of these sequences?\" [T. Kyle Petersen, 2019]\n\n\"... we'd like to thank to OEIS editors Michel Marcus, Peter Luschny, Jon E. Schoen\ufb01eld and others for their patient, faithful volunteer work and for useful comments and suggestions during the editing of sequences, concerned with this manuscript.\" [Kolosov Petro. 2019]\n\n\"On p. 18, notes that the OEIS was used to find relevant literature.\" [Robert A. Proctor and Matthew J. Willis, 2017]\n\n\"All three authors would like to acknowledge the On-Line Encyclopedia of Integer Sequences [Slo14], without which this project would have been very difficult.\" [Nicholas Proudfoot, Max Wakefield, and Ben Young, 2015]\n\n\u2022 This is part of the series of OEIS Wiki pages that list works citing the OEIS.\n\u2022 Additions to these pages are welcomed.\n\u2022 But if you add anything to these pages, please be very careful \u2014 remember that this is a scientific database. 
Spell authors' names, titles of papers, journal names, volume and page numbers, etc., carefully, and preserve the alphabetical ordering.\n\u2022 If you are unclear about what to do, contact one of the Editors-in-Chief before proceeding.\n\u2022 Works are arranged in alphabetical order by author's last name.\n\u2022 Works with the same set of authors are arranged by date, starting with the oldest.\n\u2022 This section lists works in which the first author's name begins with P.\n\u2022 The full list of sections is: .\n\u2022 For further information, see the main page for Works Citing OEIS.\n\n## References\n\n1. Arun Padakandla, P. R. Kumar, Wojciech Szpankowski, On the Discrete Geometry of Differential Privacy via Ehrhart Theory, (2017). PDF (A103881)\n2. Arun Padakandla, P.R. Kumar, Wojciech Szpankowski, Preserving Privacy and Fidelity via Ehrhart Theory, July 2017.\n3. R. Padmanabhan, Alok Shukla, Orchards in elliptic curves over finite fields, arXiv:2003.07172 [math.NT], 2020. (A003035)\n4. Martin Paech, Numerische und algebraisch-graphentheoretische Algorithmen f\u00fcr korrelierte Quantensysteme, Dissertation, Hannover: Fakult\u00e4t f\u00fcr Mathematik und Physik der Leibniz Universit\u00e4t, 2015.\n5. M. Paech, W. Apel, E. Kalinowski and E. Jeckelmann, Comparison of computer-algebra strong-coupling perturbation theory and dynamical mean-field theory for the Mott-Hubbard insulator in high dimensions, Phys. Rev. B 90 (24), 245147 (2014), 10 pages, doi:10.1103\/PhysRevB.90.245147. Also arXiv:1410.6630, 2014.\n6. M. Paech, E. Kalinowski, W. Apel, G. Gruber, R. Loogen, and E. Jeckelmann, Ground-state energy and beyond: High-accuracy results for the Hubbard model on the Bethe lattice in the strong-coupling limit, DPG Spring Meeting, Berlin, TT 45.91 (2012).\n7. M. Paech, E. Kalinowski, W. Apel, and E. Jeckelmann, Strong-coupling expansion in the Hubbard model by a diagrammatic-combinatorial computer algorithm, DPG Spring Meeting, Dresden, TT 11.14 (2011).\n8. P. 
Pagacz, M. Wojtylak, On the spectral properties of a class of H-selfadjoint random matrices and the underlying combinatorics, arXiv preprint arXiv:1310.2122, 2013.\n9. Iv\u00e1n E. Paganini, Ruslan L. Davidchack, Brian B. Laird, and Ignacio Urrutia, Properties of the hard-sphere fluid at a planar wall using virial series and molecular-dynamics simulation, The Journal of Chemical Physics 149 (2018), 014704. doi:10.1063\/1.5025332\n10. Don N. Page, Religious and Scientific Faith in Simplicity (2008); arXiv:0811.0630\n11. David Pagni, Building buildings with triangular numbers, AMATYC Review (vol. 27 no. 2 spring 2006, pp. 56-65).\n12. C. B. Pah and M. Saburov, Single Polygon Counting on Cayley Tree of Order 4: Generalized Catalan Numbers, Middle-East Journal of Scientific Research 13 (Mathematical Applications in Engineering): 01-05, 2013, ISSN 1990-9233; doi:10.5829\/idosi.mejsr.2013.13.mae.9991.\n13. C. H. Pah, doi:10.1007\/s10955-010-9989-5, Single polygon counting on Cayley Tree of order 3, J. Stat. Phys. 140 (2010) 198-207\n14. C. H. Pah, M. R. Wahiddin, Combinatorial Interpretation of Raney Numbers and Tree Enumerations, Open Journal of Discrete Mathematics, 2015, 5, 1-9; doi:10.4236\/ojdm.2015.51001\n15. Kung-Jui Pai, Jou-Ming Chang, Ro-Yu Wu, A Constant Amortized Time Algorithm for Generating Left-Child Sequences in Lexicographic Order, International Workshop on Frontiers in Algorithmics, In: Xiao M., Rosamond F. (eds) Frontiers in Algorithmics, FAW 2017, Lecture Notes in Computer Science, vol 10336. doi:10.1007\/978-3-319-59605-1_20, also in Discrete Applied Mathematics (2018). doi:10.1016\/j.dam.2018.09.035\n16. I. Pak, Partition Identities and Geometric Bijections, Proc. Amer. Math. Soc. 132 (2004), 3457-3462.\n17. Igor Pak, History of Catalan numbers, arXiv:1408.5711, 2014.\n18. Igor Pak, Complexity problems in enumerative combinatorics, arXiv:1803.06636 [math.CO], 2018. 
(A000055, A000081, A000110, A000123, A000607, A000929, A001156, A002829, A003107, A005130, A006318, A007279, A007294, A033552, A064986, A076478, A082640, A151353, A250102)\n19. I. Pak and G. Panova, Unimodality via Kronecker products, arXiv preprint arXiv:1304.5044, 2013.\n20. Igor Pak, Greta Panova, Bounds on Kronecker coefficients via contingency tables, Linear Algebra and its Applications (2020), Vol. 602, 157-178. doi:10.1016\/j.laa.2020.05.005, also PDF (A000700, A059867)\n21. I. Pak, G. Panova, E. Vallejo, Kronecker products, characters, partitions, and the tensor square conjectures, arXiv:1304.0738, 2013\n22. Igor Pak, Greta Panova, Damir Yeliussizov, On the largest Kronecker and Littlewood-Richardson coefficients, arXiv:1804.04693 [math.CO], 2018. (A003040, A070933, A110143)\n23. I. Pak, A. Soffer, On Higman's k(U_n(F_q)) conjecture, arXiv preprint arXiv:1507.00411, 2015\n24. Apisit Pakapongpun, Thomas Ward, Functorial orbit counting (2009) arXiv:0901.2646 and JIS 12 (2009) 09.2.4\n25. F. Pakovich, A. K. Zvonkin, Minimum degree of the difference of two polynomials over Q, and weighted plane trees, Selecta Mathematica, New Ser. 2014; doi:10.1007\/s00029-014-0151-0\n26. M. B. Paksoy, Derived Ramanujan Primes: R'_{N}, arXiv preprint arXiv:1210.6991, 2012\n27. S. Palasek, Non-Cooperativity in Bayesian Social Learning, arXiv preprint arXiv:1407.0519, 2014\n28. Sushma Palimar and B. R. Shankar, Mersenne Primes in Real Quadratic Fields, Journal of Integer Sequences, Vol. 15 (2012), #12.5.6.\n29. J. M. Pallo, On the listing and random generation of hybrid binary trees, International Journal of Computer Mathematics, 50, 1994, 135-145.\n30. Jean Pallo, doi:10.1016\/S0020-0190(00)00008-9 An efficient upper bound of the rotation distance of binary trees, Information Processing Letters, Volume 73, Issues 3-4, 29 February 2000, Pages 87-92.\n31. 
Jean Marcel Pallo, Weak associativity and restricted rotation, Information Processing Letters, Volume 109, Issue 10, 30 April 2009, Pages 514-517.\n32. M. Palmacci, Escher's Problem and Numerical Sequences, (2006)\n33. M. G. Palomo, Latin polytopes, arXiv preprint arXiv:1402.0772, 2014\n34. Hao Pan, Z.-W. Sun, Consecutive primes and Legendre symbols, arXiv preprint arXiv:1406.5951, 2014\n35. J. Pan, Multiple Binomial Transforms and Families of Integer Sequences, J. Int. Seq. 13 (2010), 10.4.2.\n36. Pan, Jiaqiang Some properties of the multiple binomial transform and the Hankel transform of shifted sequences. J. Integer Seq. 14 (2011), no. 3, Article 11.3.4, 8 pp.\n37. J. Pan, Matrix Decomposition of the Unified Generalized Stirling Numbers and Inversion of the Generalized Factorial Matrices, Journal of Integer Sequences, 15 (2012) #12.6.6.\n38. J. Pan, Convolution Properties of the Generalized Stirling Numbers and the Jacobi-Stirling Numbers of the First Kind, Journal of Integer Sequences, 16 (2013), #13.9.2.\n39. Qiong Qiong Pan, Jiang Zeng, The \u03b3-coefficients of Branden's (p,q)-Eulerian polynomials and Andr\u00e9 permutations, arXiv:1910.01747 [math.CO], 2019. (A000111, A122852)\n40. Ran Pan, Block patterns in permutations and words and generalized clusters, Ph. D. Dissertation, Univ. Calif. San Diego, 2016.\n41. Ran Pan, Dun Qiu, Jeffrey Remmel, Counting Consecutive Pattern Matches in S_n(132) and S_n(123), arXiv:1809.01384 [math.CO], 2018. Also in Advances in Applied Mathematics (2019) Vol. 105, 130-167. doi:10.1016\/j.aam.2019.01.005 (A001006)\n42. R. Pan, J. B. Remmel, Paired patterns in lattice paths, arXiv:1601.07988 (2016)\n43. Pan, Shu-Wen. & Pan, JQ., Direct solutions of linear non-homogeneous difference equations, Adv Differ Equ (2016) 2016: 108. doi:10.1186\/s13662-016-0839-x\n44. D. Panario, M. Sahin, Q. Wang, A family of Fibonacci-like conditional sequences, INTEGERS, Vol. 13, 2013, #A78.\n45. D. Panario, M. Sahin, Q. Wang, W. 
Webb, General conditional recurrences, Applied Mathematics and Computation, Volume 243, 15 September 2014, Pages 220-231.\n46. A Panayotopoulos, On Meandric Colliers, Mathematics in Computer Science, (2018). https:\/\/doi.org\/10.1007\/s11786-018-0389-6.\n47. A. Panayotopoulos and P. Tsikouras, \"Meanders and Motzkin Words\", J. Integer Sequences, Volume 7, 2004, Article 04.1.2.\n48. A. Panayotopoulos and P. Vlamos, Cutting Degree of Meanders, Artificial Intelligence Applications and Innovations, IFIP Advances in Information and Communication Technology, Volume 382, 2012, pp 480-489;doi:10.1007\/978-3-642-33412-2_49.\n49. A. Panayotopoulos, P. Vlamos, Partitioning the Meandering Curves, Mathematics in Computer Science (2015) p 1-10, doi:10.1007\/s11786-015-0234-0. (A000136, A005316, A000682, A000560, A217310, A217318, A227167)\n50. Kanupriya Pande, Jeffrey J. Donatelli, et al., Free-electron laser data for multiple-particle fluctuation scattering analysis, Scientific Data volume 5, Article number: 180201 (2018). doi:10.1038\/sdata.2018.201\n51. Ram Krishna Pandey, \"On Some Magnified Fibonacci Numbers Modulo a Lucas Number\" , Journal of Integer Sequences, Vol. 16 (2013), #13.1.7.\n52. V. Pandichelvi, P. Sivakamasundari, M. A. Gopalan, On the Negative Pell Equation y^2 = 54 x^2 - 5, International Journal of Mathematics Trends and Technology- Volume 21 Number 1, pages 16-20.\n53. Sabrina X. M. Pang and Lun Lv, A Refinement of Leaves on Noncrossing Trees, Graphs and Combinatorics, 2011, doi:10.1007\/s00373-011-1097-z.\n54. Alois Panholzer, Parking function varieties for combinatorial tree models, arXiv:2007.14676 [math.CO], 2020. (A000139, A006318, A010050, A214377, A294084)\n55. Alexei Pantchichkine, Constructions of p-adic L-functions and admissible measures for Hermitian modular forms, Number Theory [math.NT], 2018. Abstract (A047817)\n56. Jay Pantone, The Enumeration of Permutations Avoiding 3124 and 4312, arXiv preprint arXiv:1309.0832, 2013\n57. D. 
Panyushev, On the orbits of a Borel subgroup in abelian ideals, arXiv preprint arXiv:1407.6857, 2014\n58. G. Paoletti, Deterministic Abelian Sandpile Models and Patterns, Ph. D. Thesis, Universita di Pisa, Scuola di dottorato Scienze di base \"Galileo Galilei\", Pisa, 2013; PDF.\n59. L. A. Papakonstantinidis, The win-win-win papakonstantinidis model: The limit of the Sensitization Process (2019). doi:10.13140\/RG.2.2.16575.15529 (A002162, A002391, A002392, A007524, A016627, A016628, A016629, A016630, A016631, A016632, A020862)\n60. Dimitris Papamichail, Angela Huang, Edward Kennedy, Jan-Lucas Ott, Andrew Miller, Georgios Papamichail, Live phylogeny with polytomies: Finding the most compact parsimonious trees, Computational Biology and Chemistry, Volume 69, August 2017, p. 171-177. doi:10.1016\/j.compbiolchem.2017.03.013, arXiv preprint arXiv:1603.03315 [cs.DS], 2016.\n61. G. Paquin, D\u00e9nombrement de multigraphes enrichis, M\u00e9moire, Math. Dept., Univ. Qu\u00e9bec \u00e0 Montr\u00e9al, 2004.\n62. Daniel Pareja, Prime Number Races, PDF\n63. D. Parisse, The Tower of Hanoi and the Stern-Brocot Array, Thesis, Ludwig-Maximilians-Universitaet Munich, August 1997.\n64. Boram Park and Seonjeong Park, Shellable posets arising from even subgraphs of a graph, arXiv:1705.06423 [math.CO], 2017.\n65. Donghwi Park, Space-state complexity of Korean chess and Chinese chess, arXiv preprint arXiv:1507.06401, 2015.\n66. Gunwoong Park, High-Dimensional Poisson DAG Model Learning Using l_1-Regularized Regression, arXiv:1810.02501 [stat.ML], 2018.\n67. Ki-Hyeon Park and Hong-Yeop Song, Some Properties of Binary Matrices and Quasi-Orthogonal Signals Based on Hadamard Equivalence, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences Vol.E95-A No.11 pp.1862-1872, 2012.\n68. Poo-Sung Park, Multiplicative functions with f(p + qn0) = f(p) + f(q) \u2212 f(n0), arXiv:2002.09908 [math.NT], 2020. (A057778, A126717)\n69. 
Sang-Hoon Park, Abdelmejid Bayad, Daeyeoul Kim, Divisor functions and Polygon Shape Numbers, Draft of Proceedings Book of the 2nd Mediterranean International Conference of Pure & Applied Mathematics and Related Areas (MICOPAM 2019), 1-3. PDF
70. Seonjeong Park, Real toric manifolds and shellable posets arising from graphs, 2018. PDF (A071721)
71. So Ryoung Park, Jinsoo Bae, Hyun Gu Kang, Iickho Song, doi:10.1090/S0025-5718-07-02082-0 On the polynomial representation for the number of partitions with fixed lengths, Math. Comp. vol. 77, no. 262 (2008), 1135-1151.
72. Y. Park, S. Park, Avoiding permutations and the Narayana numbers, J. Korean Math. Soc. 50 (2013), No. 3, pp. 529-541, doi:10.4134/JKMS.2013.50.3.529
73. Park, Youngja, and SeungKyung Park. "Enumeration of generalized lattice paths by string types, peaks, and ascents." Discrete Mathematics 339.11 (2016): 2652-2659.
74. Daniel E. Parker, Romain Vasseur, Joel E. Moore, Entanglement Entropy in Excited States of the Quantum Lifshitz Model, arXiv:1702.07433 [cond-mat.stat-mech], 2017.
75. Douglas Stott Parker and Prasad Ram, The construction of Huffman codes is a submodular ("convex") optimization problem over a lattice of binary trees. SIAM J. Comput. 28 (1999), no. 5, 1875-1905 (electronic).
76. M. G. Parker, Conjectures on the Size of Constellations Constructed from Direct Sums of PSK Kernels, LNCS 1719, Presented in part at 13th International Symposium, AAECC-13, Honolulu, Hawaii, pp. 420-429, 14-19 Nov 1999. (PostScript, PDF)
77. M. G. Parker, Spectrally Bounded Sequences, Codes and States: Graph Constructions and Entanglement, Invited Talk at Eighth IMA International Conference on Cryptography and Coding, Cirencester, UK, 17-19 December 2001, LNCS 2260, pp. 339ff. (2001). (PostScript, PDF)
78. M. G. Parker and C.
Tellambura, A Construction for Binary Sequence Sets with Low Peak-to-Average Power Ratio, Reports in Informatics, University of Bergen, Report No 242, ISSN 0333-3590, February 2003. (PostScript, PDF)
79. J. Parkkonen, F. Paulin, doi:10.2140/gt.2010.14.277, Prescribing the behaviour of geodesics in negative curvature, Geom. Topol. 14 (2010) 277-392.
80. Bernhard R. Parodi, A generalized Fibonacci spiral, arXiv:2004.08902 [math.HO], 2020.
81. A. Parreau, M. Rigo, E. Rowland, E. Vandomme, A new approach to the 2-regularity of the l-abelian complexity of 2-automatic sequences, arXiv preprint arXiv:1405.3532, 2014.
82. Robert Parviainen, "Lattice Path Enumeration of Permutations with k Occurrences of the Pattern 2-13", J. Integer Sequences, Volume 9, 2006, Article 06.3.2.
83. Robert Parviainen, "Parametric Production Matrices and Weighted Succession Rules: a Dyck Path Example", J. Integer Sequences, Volume 10, 2007, Article 07.6.1.
84. Robert Parviainen, Some bijections on set partitions (2007), arXiv:0710.1132.
85. J. Pasukonis, S. Ramgoolam, From counting to construction for BPS states in N=4 SYM, J. High En. Phys. 2011 (2) (2011) #078, arXiv:1010.1683, doi:10.1007/JHEP02(2011)078
86. Ludovic Patey, Ramsey-like theorems and moduli of computation, arXiv:1901.04388 [math.LO], 2019. (A000108)
87. A. Pathak, Non-Hermitian quantum gates are more common than Hermitian quantum gates, arXiv preprint arXiv:1309.4037, 2013.
88. Anuj Pathania, Scalable Task Schedulers for Many-Core Architectures, Ph.D. Thesis, Karlsruher Institut für Technologie (Germany, 2018). doi:10.5445/IR/1000082991 (A000105)
89. Anuj Pathania, Vanchinathan Venkataramani, Muhammad Shafique, Tulika Mitra, Jörg Henkel, Defragmentation of Tasks in Many-Core Architecture, ACM Transactions on Architecture and Code Optimization (TACO), Volume 14, Issue 1, April 2017, Article No. 2. doi:10.1145/3050437
90. G. K.
Patil, Ramanujan's Life And His Contributions In The Field Of Mathematics, International Journal of Scientific Research and Engineering Studies (IJSRES), Volume 1, Issue 6, December 2014, ISSN: 2349-8862; http://www.ijsres.com/2014/vol-1_issue-6/paper_8.pdf
91. N. Patson, Pisano period and permutations of n X n matrices, Australian Math. Soc. Gazette, 2007.
92. Elena Patyukova, Taylor Rottreau, Robert Evans, Paul D. Topham, Martin J. Greenall, Hydrogen bonding in acrylamide and its role in the scattering behavior of acrylamide-based block copolymers. arXiv:1805.09878 [cond-mat.soft], 2018. Also in Macromolecules (2018) Vol. 51, No. 18, 7032-7043. doi:10.1021/acs.macromol.8b01118 (A000108, A000309)
93. Elena Patyukova, Taylor Rottreau, Robert Evans, Paul D. Topham, Martin J. Greenall, Supporting information: Hydrogen bonding aggregation in acrylamide: theory and experiment, Aston University (Birmingham, England 2019), S12. PDF (A000309)
94. Paukner, M., Pepin, L., Riehl, M., and Wieser, J., Pattern Avoidance in Task-Precedence Posets, arXiv:1511.00080
95. Neeraj Kumar Paul and Helen K. Saikia, A generalization of Fibonacci sequence, Proyecciones (2020) Vol. 39, 1393-1405. doi:10.22199/issn.0717-6279-2020-06-0085 (A000045)
96. Shubhankar Paul, Ten Problems of Number Theory, International Journal of Engineering and Technical Research (IJETR), ISSN: 2321-0869, Volume-1, Issue-9, November 2013.
97. Shubhankar Paul, Legendre, Grimm, Balanced Prime, Prime triplet, Polignac's conjecture, a problem and 17 tips with proof to solve problems on number theory, International Journal of Engineering and Technical Research (IJETR), ISSN: 2321-0869, Volume-1, Issue-10, December 2013; http://erpublication.org/admin/vol_issue1/upload%20Image/IJETR012013.pdf
98. G. Paun and A. Salomaa, Self-reading sequences. Amer. Math. Monthly 103 (1996), no. 2, 166-168.
99.
Chrystalla Pavlou, Edith Elkind, Manipulating citation indices in a social context, in: Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), J. Thangarajah, K. Tuyls, C. Jonker, S. Marsella (eds.), May 9-13, 2016, Singapore; http://trust.sce.ntu.edu.sg/aamas16/pdfs/p32.pdf
100. Pavlyukh, Y.; Hübner, W. Analytic solution of Hedin's equations in zero dimensions. J. Math. Phys. 48 (2007), no. 5, 9 pp.
101. J. Pawlewicz, Counting Square-Free Numbers, arXiv preprint arXiv:1107.4890, 2011.
102. J. L. Pe, Ana's Golden Fractal.
103. J. L. Pe, On a Generalization of Perfect Numbers, To appear in The Journal of Recreational Mathematics 31(3).
104. J. L. Pe, Fractal dimension, primes and the persistence of memory, Adv. Compl. Systems 6(2) (2003) 241-249.
105. Joseph L. Pe, The 3x+1 fractal, Computers & Graphics, Volume 28, Issue 3, June 2004, Pages 431-435.
106. Paul A. Pearce and Alessandra Vittorini-Orgeas, Yang-Baxter Solution of Dimers as a Free-Fermion Six-Vertex Model, arXiv:1612.09477, 2017.
107. Antony Pearson, On Hidden Structures in Contaminated Symbolic Data, Ph.D. thesis, University of Colorado at Boulder (2020) 27835343. PDF (A088389)
108. Paul Peart and Wen-Jin Woan, "Generating Functions via Hankel and Stieltjes Matrices", J. Integer Sequences, Volume 3, 2000, Article 00.2.1.
109. Paul Peart and Wen-Jin Woan, "Dyck Paths With No Peaks At Height k", J. Integer Sequences, Volume 4, 2001, Article 01.1.3.
110. Satya R. T. Peddada, Daniel R. Herber, Herschel C. Pangborn, Andrew G. Alleyne, James T. Allison, Optimal Flow Control and Single Split Architecture Exploration for Fluid-Based Thermal Management, J. Mech. Des. (2019) 141(8), Paper No. MD-18-1843, 083401.
doi:10.1115/1.4043203 Also in Proceedings of the ASME 2018 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC 2018), Quebec City, Quebec, Canada. PDF (A000041, A000262)
111. A. Peder, M. Tombak, Finding the description of structure by counting method: a case study, SOFSEM 2011, LNCS 6543 (2011) 455-466. doi:10.1007/978-3-642-18381-2_38
112. J. Peebles, Cap Set Bounds and Matrix Multiplication, Senior Thesis, Harvey Mudd College, 2013; http://www.math.hmc.edu/seniorthesis/archives/2013/jpeebles/jpeebles-2013-thesis-poster.pdf
113. E. T. Pegg Jr., A Complete List of Fair Dice, Master's Thesis, University of Colorado at Colorado Springs, 1997.
114. Ed Pegg, Jr., Polyform puzzles, in Tribute to a Mathemagician, Peters, 2005, pp. 119-125.
115. Jun Pei, Li Guo, Averaging algebras, Schröder numbers, rooted trees and operads, Journal of Algebraic Combinatorics, Volume 42, Issue 1, August 2015, pp. 73-109; arXiv:1401.7386.
116. R. E. Peile, H. Taylor, Sets of points with pairwise distinct slopes, Computers & Mathematics with Applications, Volume 39, Issue 11, June 2000, Pages 109-115.
117. Tiago P. Peixoto, Bayesian stochastic blockmodeling, arXiv:1705.10225 [stat.ML], 2017.
118. A. Pekec, Meaningful and Meaningless Solutions for Cooperative N-person Games, European Journal of Operational Research, Volume 133, Issue 3, 16 September 2001, Pages 608-623.
119. Osmo Pekonen, Mathematical Constants, Mathematical Constants II, The Mathematical Intelligencer (2019), 1-2. doi:10.1007/s00283-019-09929-0 Steven R. Finch's incredible labor of love, an encyclopedia of mathematical constants, begins with such basics, then moves on to more elaborate topics. ... It appears astonishing to me that a single individual went through all these topics. His achievement can only be compared to the On-Line Encyclopedia of Integer Sequences.
120. John A.
Pelesko, "Generalizing the Conway-Hofstadter $10,000 Sequence", J. Integer Sequences, Volume 7, 2004, Article 04.3.5.
121. Jarkko Peltomäki, Privileged Words and Sturmian Words, Turku Centre for Computer Science, TUCS Dissertations No 214, August 2016; http://www.doria.fi/bitstream/handle/10024/124473/TUCSDissertationD214Peltomaki.pdf?sequence=2
122. Jarkko Peltomäki and Aleksi Saarela, Standard words and solutions of the word equation X_1^2 ... X_n^2 = (X_1 ... X_n)^2, Journal of Combinatorial Theory, Series A (2021) Vol. 178, 105340. doi:10.1016/j.jcta.2020.105340 See also arXiv:2004.14657 [cs.FL], 2020. (A000374, A002326, A037226, A330878)
123. Jarkko Peltomäki, Markus A. Whiteland, Avoiding abelian powers cyclically, arXiv:2006.06307 [cs.FL], 2020. (A334831)
124. J.-G. Penaud and O. Roques, Génération efficace d'un langage de Fibonacci, Colloque LaCIM 2000.
125. J. Peng, Y. Zhang, Heron triangles with figurate number sides, Acta Mathematica Hungarica (2019) 1-11. doi:10.1007/s10474-018-00907-0 (A232461)
126. Y. Peng and K. P. S. Bhaskara Rao, On Zumkeller numbers, Journal of Number Theory, Volume 133, Issue 4, April 2013, Pages 1135-1155.
127. Yisu Peng, Y. Jiang, P. Radivojac, Enumerating consistent subgraphs of directed acyclic graphs: an insight into biomedical ontologies, arXiv preprint arXiv:1712.09679, 2017.
128. K. A. Penson, arXiv:quant-ph/0111151 Coherent States from Combinatorial Sequences, Conference 'Quantum Theory and Symmetries 2', Krakow, Poland, July 2001.
129. K. A. Penson, P. Blasiak, G. Dattoli et al., Monomiality principle, Sheffer-type polynomials and the normal ordering problem (2005), arXiv:quant-ph/0510079.
130. K. A. Penson, P. Blasiak, G. Duchamp, A. Horzela and A. I. Solomon, arXiv:quant-ph/0312202 Hierarchical Dobinski-type relations via substitution and the moment problem, [J. Phys. A 37 (2004), 3475-3487]
131. K. A. Penson, P. Blasiak, A. Horzela, G. H. E. Duchamp and A. I.
Solomon, arXiv:0904.0369 Laguerre-type derivatives: Dobinski relations and combinatorial identities, J. Math. Phys. vol. 50, 083512 (2009).
132. K. A. Penson and J.-M. Sixdeniers, "Integral Representations of Catalan and Related Numbers", J. Integer Sequences, Volume 4, 2001, Article 01.2.5.
133. Karol A. Penson, Allan I. Solomon, Coherent States from Combinatorial Sequences (2001), arXiv:quant-ph/0111151.
134. Karol A. Penson and Karol Zyczkowski, Product of Ginibre matrices: Fuss-Catalan and Raney distribution, Phys. Rev. E 83, 061118 (2011) (9 pages). doi:10.1103/PhysRevE.83.061118 arXiv:1103.3453 (arXiv version)
135. Ian Percival, Franco Vivaldi, Critical dynamics and trees, Physica D: Nonlinear Phenomena, Volume 33, Issues 1-3, October-November 1988, Pages 304-313.
136. Carlos Castro Perelman, Generalized Fibonacci Numbers and 4k+1-fold symmetric Quasicrystals, Clark Atlanta University (2019). doi:10.13140/RG.2.2.13892.48001
137. R. A. Perez, A brief but historic article of Siegel, Notices AMS, 58 (2011), 558-566.
138. S. Perez, R. Styer, Persistence: A Digit Problem, 2013. PDF
139. Carlos I. Pérez-Sánchez, The full Ward-Takahashi Identity for colored tensor models, arXiv preprint arXiv:1608.08134, 2016.
140. E. Pergola, Two bijections for the area of Dyck paths, Discrete Math., 241 (2001), 435-447.
141. E. Pergola, G. Labelle, P. Leroux and R. Pinzani, Bell permutations and Stirling numbers interpolation, Proceedings FPSAC'99, Barcelona, 450-461.
142. E. Pergola and R. Pinzani, A Combinatorial Interpretation of the Area of Schröder Paths, Electronic Journal of Combinatorics, Volume 6(1), 1999, article #R40.
143. Elisa Pergola and Robert A. Sulanke, Schröder Triangles, Paths and Parallelogram Polyominoes, J. Integer Sequences, Volume 1, 1998, Article 98.1.7.
144. Quinn Perian, Bella Xu, Alexander Lu Zhang, Counting the Nontrivial Equivalence Classes of S_n under {1234,3412}-Pattern-Replacement, arXiv:2008.02380 [math.CO], 2020.
(A330395)
145. Serge Perrine, About the diophantine equation z^2 = 32y^2 - 16, SCIREA Journal of Mathematics (2019) Vol. 4, Issue 5, 126-139. PDF (A000129, A001333, A002203, A046176, A082405)
146. Jon Perry, Calculating the Smarandache numbers, in Smarandache Notions Journal (2004), Vol. 14.1, pp. 124-127. PDF (A002034)
147. Peters, J.; Buehlmann, P. Identifiability of Gaussian structural equation models with equal error variances. Biometrika 101 (2014), no. 1, 219-228.
148. J. Peters, J. Mooij, D. Janzing, B. Schölkopf, Causal Discovery with Continuous Additive Noise Models, arXiv preprint arXiv:1309.6779, 2013.
149. Petersen, Karl, An adic dynamical system related to the Delannoy numbers. Ergodic Theory Dynam. Systems 32 (2012), no. 2, 809-823.
150. Karl Petersen, Ibrahim Salama, Tree shift complexity, arXiv:1712.02251 [math.DS], 2017. (A076725, A144229)
151. Karl Petersen, Ibrahim Salama, Tree shift topological entropy, Theoretical Computer Science (2018) Vol. 743, 64-71. doi:10.1016/j.tcs.2018.05.034
152. Petersen, K.; Varchenko, A. Path count asymptotics and Stirling numbers. Proc. Amer. Math. Soc. 140 (2012), no. 6, 1909-1919.
153. Petersen, T. Kyle, The sorting index. Adv. in Appl. Math. 47 (2011), no. 3, 615-630.
154. Petersen, T. Kyle, On the shard intersection order of a Coxeter group. SIAM J. Discrete Math. 27 (2013), no. 4, 1880-1912.
155. T. Kyle Petersen, Exponential generating functions and Bell numbers, Inquiry-Based Enumerative Combinatorics (2019) Chapter 7, Undergraduate Texts in Mathematics, Springer, Cham, 98-99. doi:10.1007/978-3-030-18308-0_7 (A000435) As a researcher in combinatorics, one of my favorite tools is the On-Line Encyclopedia of Integer Sequences, or OEIS. This database was started by the mathematician Neil Sloane, who first started keeping an index of popular sequences of integers that came up in his work. At the time, Sloane was a graduate student at Cornell University.
A photo of the first page of Sloane's notebook is shown in Figure 7.1. Recognize any of these sequences?
156. T. K. Petersen, M. Guay-Paquet, The generating function for total displacement, arXiv preprint arXiv:1404.4674, 2014.
157. T. Kyle Petersen, Bridget Eileen Tenner, The depth of a permutation, arXiv preprint arXiv:1202.4765, 2012.
158. T. Kyle Petersen and Bridget Eileen Tenner, How to write a permutation as a product of involutions (and why you might care), arXiv preprint arXiv:1202.5319, 2012, and Integers 13 (2013) A63. HTML
159. Ivars Peterson, Fibonacci's Missing Flowers.
160. M. D. Petkovic, doi:10.1080/10652469.2010.497998 The Hankel transform of generalized central trinomial coefficients and related sequences, Intr. Trans. Spec. Func. 22 (1) (2011) 29-44.
161. Petkovic, Marko D.; Barry, Paul; Rajkovic, Predrag, Closed-form expression for Hankel determinants of the Narayana polynomials. Czechoslovak Math. J. 62(137) (2012), no. 1, 39-57.
162. M. Petkovsek and T. Pisanski, Counting Trees, 1994. (PostScript, dvi version)
163. M. Petkovsek and T. Pisanski, Counting disconnected structures: chemical trees, fullerenes, I-graphs and others, Croatica Chem. Acta, 78 (2005), 563-567.
164. Petkovic, Marko D.; Rajkovic, Predrag M.; Barry, Paul, The Hankel transform of generalized central trinomial coefficients and related sequences. Integral Transforms Spec. Funct. 22 (2011), no. 1, 29-44.
165. Marko Petkovsek and Helena Zakrajsek, Enumeration of I-graphs: Burnside does it again, Ars Mathematica Contemporanea, 2 (2009) 241-262. PDF
166. Aleksandar Petojevic, "The Function _vM_m(s;a;z) and Some Well-Known Sequences", J. Integer Sequences, Volume 5, 2002, Article 02.1.7.
167. Aleksandar Petojevic, On the vMm(s*,a,z) function, Novi Sad J. Math. 34 (1) (2004) 99-106.
168. Petojevic, Aleksandar, The {K_i(z)}^∞_{i=1} functions. Rocky Mountain J. Math. 36 (2006), no. 5, 1637-1650.
169. A. Petojevic and N.
Dapic, The vAm(a,b,c;z) function, Preprint 2013, PDF
170. Aleksandar Petojevic, H. M. Srivastava, Computation of Euler type sums of the products of Bernoulli numbers, Applied Mathematics Letters, Volume 22, Issue 5, May 2009, Pages 796-801.
171. A. Petras, L. Ling, S. J. Ruuth, An RBF-FD closest point method for solving PDEs on surfaces, 2018. PDF
172. M. Pétréolle, Characterization of Cyclically Fully Commutative Elements in Finite and Affine Coxeter Groups, arXiv preprint arXiv:1403.1130, 2014.
173. Mathias Pétréolle, Alan D. Sokal, Lattice paths and branched continued fractions. II. Multivariate Lah polynomials and Lah symmetric functions, arXiv:1907.02645 [math.CO], 2019. (A000369, A001497, A004747, A008277, A008297, A035342, A035469, A048993, A049029, A066667, A105278)
174. George Petrides and Johannes Mykkeltveit, On the Classification of Periodic Binary Sequences into Nonlinear Complexity Classes, in Sequences and Their Applications - SETA 2006, Lecture Notes in Computer Science, Volume 4086/2006, Springer-Verlag.
175. Constantin M. Petridi, The Sums of the k-powers of the Euler set and their connection with Artin's conjecture for primitive roots, arXiv preprint arXiv:1612.07632, 2016.
176. Kolosov Petro, Relation between Pascal's triangle and hypercubes, 2018. PDF (A007318)
177. Kolosov Petro, An Odd-Power Identity Involving Discrete Convolution, Preprints (2019) 2019040126. doi:10.20944/preprints201904.0126.v1 ... we'd like to thank to OEIS editors Michel Marcus, Peter Luschny, Jon E. Schoenfield and others for their patient, faithful volunteer work and for useful comments and suggestions during the editing of sequences, concerned with this manuscript.
178. Ian Petrow, M. P. Young, A generalized cubic moment and the Petersson formula for newforms, arXiv preprint arXiv:1608.06854, 2016.
179. V. H. Pettersson, Enumerating Hamiltonian Cycles, The Electronic Journal of Combinatorics, 21 (4) 2014, #P4.7.
180.
Ville Pettersson, Graph Algorithms for Constructing and Enumerating Cycles and Related Structures, Preprint 2015, https://aaltodoc.aalto.fi/bitstream/handle/123456789/17688/isbn9789526063652.pdf?sequence=1
181. Robertas Petuchovas, Asymptotic analysis of the cyclic structure of permutations, arXiv preprint arXiv:1611.02934, 2016.
182. A. A. Petukhov, Mixed optimization combinatorial method for constructing covering arrays, Programming and Computer Software, 2014, Vol. 40, No. 1, pp. 10-20. Pleiades Publishing, Ltd., 2014. Original Russian text published in Programmirovanie, 2014, Vol. 40, No. 1.
183. J. L. Pfaltz, Partitions of 2^n, Congressus Numerantium 109:3-12, 1995.
184. J. L. Pfaltz, Partition Coefficients of Acyclic Graphs, 21st International Workshop on Graph Theoretic Concepts in Computer Science, Aachen, June 1995 (Springer-Verlag, LNCS #1017), 313-332.
185. C. Pfeifer, Probability Distribution of the Median Taken on Partial Sums of a Simple Random Walk, Stochastic Analysis and Applications, Volume 31, Issue 1, 2013, pp. 31-46; doi:10.1080/07362994.2013.741359.
186. Götz Pfeiffer, "Counting Transitive Relations", J. Integer Sequences, Volume 7, 2004, Article 04.3.2.
187. Götz Pfeiffer, A Quiver Presentation for Solomon's Descent Algebra (2007), arXiv:0709.3914; Advances in Mathematics, Volume 220, Issue 5, 20 March 2009, Pages 1428-1465.
188. Hugo Pfoertner, Uniform Illumination of a Sphere.
189. Hai Pham-Van, Linh Tran-Phan-Thuy, Cuong Tran-Manh, Bich Do-Danh, Hoang Luc-Huy, Two-Dimensional Clusters of Colloidal Particles Induced by Emulsion Droplet Evaporation, Nanomaterials (2020) Vol. 10, 156. doi:10.3390/nano10010156 (A070765)
190. C. Phatak, R. Pokharel, M. Beleggia and M. De Graef, On the magnetostatics of chains of magnetic nanoparticles, Journal of Magnetism and Magnetic Materials, Volume 323, Issue 22, November 2011, Pages 2912-2922; doi:10.1016/j.jmmm.2011.06.058
191.
Leroux Philippe, An equivalence of categories motivated by weighted directed graphs (2007), arXiv:0709.3453.
192. Andreas N. Philippou, Spiros D. Dafnis, A simple proof of an identity generalizing Fibonacci-Lucas identities, Fibonacci Quarterly (2018) Vol. 56, No. 4, 334-336. Abstract (A001644)
193. Mitch Phillipson, Manda Riehl and Tristan Williams, Enumeration of Wilf classes in S_n ≀ C_r for two patterns of length 3, PU. M. A. Vol. 21 (2010), No. 2, pp. 321-338; http://www.mat.unisi.it/newsito/puma/public_html/21_2/11_Phillipson_Riehl_Williams.pdf
194. D. Phulara and L. W. Shapiro, Descendants in ordered trees with a marked vertex, Congressus Numerantium, 205 (2011), 121-128.
195. Phakhinkon Phunphayap, Prapanpong Pongsriiam, Reciprocal sum of palindromes, arXiv:1803.00161 [math.CA], 2018. (A002113, A002385, A002779)
196. Phakhinkon Phunphayap, Prapanpong Pongsriiam, Explicit Formulas for the p-adic Valuations of Fibonomial Coefficients, Journal of Integer Sequences, Vol. 21 (2018), Article 18.3.1. HTML (A000045, A003267, A010048, A055870)
197. Phakhinkon Phunphayap, Prapanpong Pongsriiam, Estimates for the Reciprocal Sum of b-adic Palindromes (2019). doi:10.13140/RG.2.2.23202.79047 (A244162)
198. Steven T. Piantadosi, Problems in the philosophy of mathematics: A view from cognitive science, preprint.
199. Steven T. Piantadosi, Mathematics, Substance and Surmise: Views on the Meaning and Ontology of Mathematics, University of Rochester (2019). HTML In 2009, Nathaniel Johnston found that 11630 was the smallest uninteresting number, with interestingness determined by membership in the Online Encyclopedia of Integer Sequences (OEIS).
200. Picantin, Matthieu, Finite transducers for divisibility monoids. Theoret. Comput. Sci. 362 (2006), no. 1-3, 207-221.
201.
K. Piejko, Extremal trees with respect to number of (A, B, 2C)-edge colourings, Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2015, Article ID 463650, 5 pages. doi:10.1155/2015/463650
202. Karin Pielage, Proactive lateral transshipments and stock allocation via transient behavior of loss systems, Master Thesis (2018), Eindhoven University of Technology (Netherlands). PDF
203. V. U. Pierce, Continuum limits of Toda lattices for map enumeration, in Algebraic and Geometric Aspects of Integrable Systems and Random Matrices, edited by Anton Dzhamay, Ken'ichi Maruno, Virgil U. Pierce; Contemporary Mathematics, Vol. 593, 2013.
204. Titus Piezas III, On Ramanujan's Other Pi Formulas, http://www.oocities.org/titus_piezas/Pi_formulas2.pdf
205. Titus Piezas III, Pi Formulas, Ramanujan, and the Baby Monster Group, http://www.oocities.org/titus_piezas/Pi_formulas1.pdf
206. Titus Piezas III, Ramanujan's Constant exp(Pi sqrt(163)) And Its Cousins, http://www.oocities.org/titus_piezas/Ramanujan_a.pdf
207. Titus Piezas III, "The 163 dimensions"
208. A. Piggott, The Symmetries of McCullough-Miller Space, 2011; http://www.facstaff.bucknell.edu/ap030/researchfiles/TheSymmetriesOfMMSpace.pdf
209. Giovanni Pighizzini, Limited Automata: Properties, Complexity and Variants, International Conference on Descriptional Complexity of Formal Systems (DCFS 2019), Descriptional Complexity of Formal Systems, Lecture Notes in Computer Science (LNCS, Vol. 11612), Springer, Cham, 57-73. doi:10.1007/978-3-030-23247-4_4 (A007814)
210. Giovanni Pighizzini and Luca Prigioniero, Limited Automata and Unary Languages, in: Charlier É., Leroy J., Rigo M. (eds), Developments in Language Theory, DLT 2017, Lecture Notes in Computer Science, vol 10396. doi:10.1007/978-3-319-62809-7_23
211. Vincent Pilaud, Brick polytopes, lattice quotients, and Hopf algebras, preprint arXiv:1505.07665 (A000108, A078920, A003319, A001181, A033282)
212.
Vincent Pilaud, V. Pons, Permutrees, arXiv preprint arXiv:1606.09643, 2016.
213. V. Pilaud, J. Rué, Analytic combinatorics of chord and hyperchord diagrams with k crossings, arXiv preprint arXiv:1307.6440, 2013.
214. Khodabakhsh Hessami Pilehrood and Tatiana Hessami Pilehrood, Vacca-type series for values of the generalized-Euler-constant function and its derivative (2008); arXiv:0808.0410
215. K. H. Pilehrood, T. H. Pilehrood, Jacobi Polynomials and Congruences Involving Some Higher-Order Catalan Numbers and Binomial Coefficients, J. Int. Seq. 18 (2015) 15.11.7
216. Deanna Pineau, Math-Aware Search Engines: Physics Applications and Overview, arXiv preprint arXiv:1609.03457, 2016.
217. Sandra Pinelas, Paolo Emilio Ricci, On Sheffer polynomial families, 4open (2019), Vol. 2, No. 4, 1-9. doi:10.1051/fopen/2019004
218. S. Pion, De la géométrie algorithmique au calcul géométrique, Ph.D. thesis, Université de Nice Sophia-Antipolis, 1999. (PostScript, PDF)
219. T. Pisanski, D. Schattschneider and B. Servatius, Applying Burnside's Lemma to a One-Dimensional Escher Problem, Mathematics Magazine, vol. 79, #3, 167-180.
220. C. de Jesus Pita Ruiz Velasco, Convolution and Sulanke Numbers, JIS 13 (2010) 10.1.8
221. C. Pita, On s-Fibonomials, J. Int. Seq. 14 (2011) #11.3.7
222. C. J. Pita Ruiz Velasco, Sums of Products of s-Fibonacci Polynomial Sequences, J. Int. Seq. 14 (2011) #11.7.6
223. Claudio de Jesus Pita Ruiz Velasco, A Note on Fibonacci & Lucas and Bernoulli & Euler Polynomials, Journal of Integer Sequences, Vol. 15 (2012), Article #12.2.7
224. Pita Ruiz V., Claudio de J., Some number arrays related to Pascal and Lucas triangles. J. Integer Seq. 16 (2013), no. 5, Article 13.5.7, 34 pp.
225. Pita Ruiz V., Claudio de J., Some weighted sums of powers of Fibonacci polynomials. Integers 13 (2013), Paper No. A60, 19 pp.
226. Pita Ruiz V., Claudio de J., Generalized Stirling numbers and hyper-sums of powers of binomial coefficients.
Electron. J. Combin. 21 (2014), no. 1, Paper 1.10, 39 pp.
227. Jim Pitman and Wenpin Tang, Regenerative random permutations of integers, arXiv:1704.01166 [math.PR], 2017.
228. Irene Pivotto, Gordon Royle, Highly-connected planar cubic graphs with few or many Hamilton cycles, arXiv:1901.10683 [math.CO], 2019. (A006791, A007021)
229. Robert Piziak, Remarks on some papers of Flachsmeyer and Katrnoška, Baylor University (2019). doi:10.13140/RG.2.2.11485.05606
230. M. Planat, Twelve-dimensional Pauli group contextuality with eleven rays, arXiv preprint arXiv:1201.5455, 2012.
231. M. Planat, Twelve-dimensional Pauli group contextuality, 2012, PDF
232. M. Planat, A. Giorgetti, F. Holweck, M. Saniga, Quantum contextual finite geometries from dessins d'enfants, arXiv:1310.4267 (2013-2015).
233. Michel Planat, Patrick Solé, arXiv:1109.6489 Efficient prime counting and the Chebyshev primes. Also Hindawi Publishing Corporation, Journal of Discrete Mathematics, Volume 2013, Article ID 491627, 11 pages, doi:10.1155/2013/491627.
234. Michel Planat, Patrick Solé, Improving Riemann prime counting, arXiv preprint arXiv:1410.1083, 2014.
235. David J. Platt, T. Trudgian, On the sum of two squares and at most two powers of 2, arXiv preprint arXiv:1610.01672, 2016.
236. V. Pletser, Conjecture on the value of Pi(10^26), the number of primes less than 10^26, arXiv preprint arXiv:1307.4444, 2013.
237. V. Pletser, Congruence conditions on the number of terms in sums of consecutive squared integers equal to squared integers, arXiv preprint arXiv:1409.7969, 2014.
238. V. Pletser, Finding all squared integers expressible as the sum of consecutive squared integers using generalized Pell equation solutions with Chebyshev polynomials, arXiv preprint arXiv:1409.7972, 2014.
239. V. Pletser, General solutions of sums of consecutive cubed integers equal to squared integers, arXiv preprint arXiv:1501.06098, 2015.
240.
Vladimir Pletser, Number of terms, first term and square root of sums of consecutive cubed integers equal to integer squares, Research Gate, 2015; PDF
241. V. Pletser, Fundamental solutions of the Pell equation X^2 - (sigma^4 - delta^4) Y^2 = delta^4 for the first 45 solutions of the sums of consecutive cubed integers equalling integer squares, 2015, PDF
242. S. Plouffe, Approximations de séries génératrices et quelques conjectures, Master's Thesis, Univ. du Québec à Montréal, August 1992. There is a separate page for the associated formulae.
243. S. Plouffe, Une méthode pour obtenir la fonction génératrice algébrique d'une série, FPSAC, Florence, June 1993.
244. Simon Plouffe, "The Computation of Certain Numbers Using a Ruler and Compass", J. Integer Sequences, Volume 1, 1998, Article 98.1.3.
245. Simon Plouffe, Identities and approximations inspired from Ramanujan notebooks, III, 2009.
246. Simon Plouffe, A search for a mathematical expression for mass ratios using a large database, http://www.plouffe.fr/simon/Search.htm, also https://archive.is/srZc#selection-57.0-69.35; 2012.
247. S. Plouffe, On the values of the functions ... [zeta and Gamma] ..., arXiv preprint arXiv:1310.7195, 2013.
248. S. Plouffe, The many faces of the polygamma function, PDF, 2016.
249. Simon Plouffe, Primes as sums of irrational numbers, Preprint, 2016.
250. Simon Plouffe, A set of formulas for primes, arXiv:1901.01849 [math.NT], 2019. (A000058)
251. Simon Plouffe, π, the primes and the Lambert W-function (2), Bulletin of the Derive User Group (2019), Newsletter #116, 5-16. PDF
252. F. Pluvinage, Developing problem solving experiences in practical action projects, The Mathematics Enthusiast, ISSN 1551-3440, Vol. 10, nos. 1&2, pp. 219-244; PDF
253. Ricardo A. Podestá, New identities for binary Krawtchouk polynomials, binomial coefficients and Catalan numbers, arXiv preprint arXiv:1603.09156, 2016.
254. K.
Podnieks, Digits of pi: limits to the seeming randomness, arXiv preprint arXiv:1411.3911, 2014.
255. Md Masbaul Alam Polash, M. A. Hakim Newton, Abdul Sattar, Constraint-directed search for all-interval series, Constraints, July 2017, Volume 22, Issue 3, pp. 403-431; doi:10.1007/s10601-016-9261-y
256. Nikolay L. Poliakov, Note on level r consensus, arXiv:1606.04816 [q-fin.EC].
257. Alberto Policriti and Alexandru I. Tomescu, Counting extensional acyclic digraphs, Information Processing Letters, Volume 111, Issue 16, August 2011, pp. 305-315.
258. P. Pollack, C. Pomerance, Some problems of Erdos on the sum-of-divisors function, For Richard Guy on his 99th birthday: May his sequence be unbounded, Trans. Am. Math. Soc. B 3 (2016) 1-26. doi:10.1090/btran/10
259. Paul Pollack and Vladimir Shevelev, On perfect and near-perfect numbers, J. Number Theory 132 (2012), 3037-3046; http://www.math.uga.edu/~pollack/pnp-4.pdf
260. Pollack, Paul; Treviño, Enrique. The Primes that Euclid Forgot. Amer. Math. Monthly 121 (2014), no. 5, 433-437. MR3193727.
261. M. Pollanen, Uniform equipartition test bounds for multiply sequences, International Journal of Pure and Applied Mathematics, Volume 72, No. 4, 2011, 515-526; http://ijpam.eu/contents/2011-72-4/7/
262. Burkard Polster and Marty Ross, Marching in squares, arXiv preprint arXiv:1503.04658, 2015.
263. D. H. J. Polymath, arXiv:1002.0374 Density Hales-Jewett and Moser numbers.
264. PolyMath REU Convex Geometries Collaboration: Kira Adaricheva, Madina Bolat, Gent Gjonbalaj, Brandon Amerine, J. Alexandria Behne, Evan Daisy, Alexander Frederiksen, Ayush Garg, Zachary King, Grace Ma, Michelle Olson, Rohit Pai, Junewoo Park, Cat Raanes, Sean Riedel, Joseph Rogge, Raviv Sarch, James Thompson, Fernanda Yepez-Lopez, Stephanie Zhou, Convex geometries representable by at most 5 circles on the plane, arXiv:2008.13077 [math.CO], 2020.
An OEIS (On-line Encyclopedia of Integer Sequences) submission on the number of non-isomorphic antimatroids by Przemys\u0142aw Uzna\u0144ski [10] served as the starting point for our project. The algorithm was developed around 2013, in the framework of enumerating anti-matroids in the On-line Encyclopedia of Integer Sequences (OEIS). The existing code was enhanced for computing the implicational basis of each geometry and its convex dimension.\n265. Maxim V. Polyakov, Kirill M. Semenov-Tian-Shansky, Alexander O. Smirnov, Alexey A. Vladimirov, Quasi-Renormalizable Quantum Field Theories, arXiv:1811.08449 [hep-th], 2018. (A000108, A098777)\n266. C. Pomerance, Divisors of the middle binomial coefficient, Amer. Math. Monthly, 112 (2015), 636-644.\n267. Carl Pomerance, Simon Rubinstein-Salzedo, Cyclotomic Coincidences, arXiv:1903.01962 [math.NT], 2019. (A206225)\n268. C. Pomerance and H.-S. Yang, On untouchable numbers and related problems, http:\/\/www.math.dartmouth.edu\/~carlp\/uupaper3.pdf, 2012.\n269. C. Pomerance and H.-S. Yang, Variant of a theorem of Erdos on the sum-of-proper-divisors function, https:\/\/www.math.dartmouth.edu\/~carlp\/uupaper6.pdf, 2012.\n270. Klaus Pommerening, The Indecomposable Solutions of Linear Congruences, arXiv:1703.03708, 2017.\n271. M. Poneti, V. Vajnovszki, doi:10.1016\/j.ejc.2009.03.028, Generating restricted classes of involutions, Bell and Stirling permutations, Eur. J. Combinat. 31 (2) (2010) 553-564\n272. P. Pongsriiam, Relatively Prime Sets, Divisor Sums, and Partial Sums, arXiv preprint arXiv:1306.4891, 2013 and J. Int. Seq. 16 (2013) #13.9.1\n273. Prapanpong Pongsriiam, Kittipong Subwattanachai, Exact Formulas for the Number of Palindromes up to a Given Positive Integer, Intl. J. of Math. Comp. Sci. (2019) 14:1, 27-46. PDF (A002113)\n274. A. P\u00f6nitz, \u00dcber die Methode zur Konstruktion von Algorithmen f\u00fcr die Berechnung von Invarianten in endlichen ungerichteten Hypergraphen, Ph.D Thesis (2004)\n275. 
Sankar Ponnapalli, V. A., and P. V. Y. Jayasree Pappu. \"An investigation of fractal antenna arrays for side lobe reduction with a fractal distribution of current.\" Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on. IEEE, 2015.\n276. V. A. Sankar Ponnapalli and V. Y. Jayasree Pappu, Design of Octagonal Fractal Array Antenna for Side Lobe Reduction with Morse-Thue Fractal Density Tapering Technique, Preprint, 2016.\n277. V A Sankar Ponnapalli, P V Y Jayasree, A Three Valued Morse-Thue Fractal Tapering for Thinning of Fractal Array Antennas, Journal of Science and Technology: Issue on Information and Communications Technology, Vol. 2, No. 1, August 2016.\n278. Luca Terzio Pontiggia, Computational methods in string and field theory, doctoral dissertation, Univ. of the Witwatersrand, Johannesburg, 2018. PDF (A090045)\n279. B. Poonen and M. Rubinstein, Number of Intersection Points Made by the Diagonals of a Regular Polygon, SIAM J. Discrete Mathematics, Vol. 11, pp. 135-156 (PostScript, PDF). (Although the Encyclopedia is not mentioned in the final version, this paper was born when I wrote the beginning of sequence A007678 on the blackboard in the Commons Room at AT&T Bell Labs and appealed to people to extend it.)\n280. J. Pope, D. Sonnier, A linear solution to the n-Queens problem using vector spaces, Journal of Computing Sciences in Colleges, Volume 29 Issue 5, May 2014 Pages 77-83.\n281. Oleg Posnansky, Ruiwang Huang, N. Jon Shah, The truncated Levy-flight process: Application to the random spin phase change in non-linear magnetic fields, Physica A: Statistical Mechanics and its Applications, Volume 370, Issue 2, 15 October 2006, Pages 553-564.\n282. A. Postnikov and R. P. Stanley, Deformations of Coxeter hyperplane arrangements, Journal of Combinatorial Theory, Series A 91 (2000), no. 1-2, 544-597. (Special issue dedicated to G.-C. Rota.)\n283. 
William Poundstone, Are You Smart Enough to Work at Google?: Trick Questions, Zen-like Riddles, Insanely Difficult Puzzles, and Other Devious Interviewing Techniques You Need to Know to Get a Job Anywhere in the New Economy, Little, Brown and Company, 2012.\n284. Maurice Pouzet, The Profile of relations (2007), arXiv:math\/0703211.\n285. Pouzet, Maurice; Thi\u00e9ry, Nicolas M. Some relational structures with polynomial growth and their associated algebras I: Quasi-polynomiality of the profile. Electron. J. Combin. 20 (2013), no. 2, Paper 1, 35 pp.\n286. Geoffrey Powell, Symmetric powers, indecomposables and representation stability, arXiv:1809.08781 [math.AT], 2018. (A045412, A100661)\n287. S. C. Power, arXiv:math.OA\/0005110 Approximately finitely acting operator algebras, J. Funct. Anal. 189 (2002), no. 2, 409-468.\n288. A. Prasad, Equivalence classes of nodes in trees and rational generating functions, arXiv preprint arXiv:1407.5284, 2014\n289. A. Prasad, Representation Theory, A Combinatorial Viewpoint, Cambridge Studies in Advanced Mathematics, 2014.\n290. J. S. Pratt, Universality in the entanglement structure of ferromagnets (2004), arXiv:quant-ph\/0411125.\n291. V. R. Pratt, Chu Spaces: Complementarity and Uncertainty in Rational Mechanics, Course notes, TEMPUS summer school, Budapest, July 1994.\n292. Emmanuel Preissmann, \"A Self-Indexed Sequence\", J. Integer Sequences, Volume 8, 2005, Article 05.3.5.\n293. Louis-Fran\u00e7ois Pr\u00e9ville-Ratelle and Xavier Viennot, An extension of Tamari lattices, Discrete Mathematics & Theoretical Computer Science, FPSAC 2013 Paris, France, DMTCS Proc. AS, 2013 [Mentions A000139. A version on the arXiv, arXiv:1406:3787, 2014, with the same title, does not mention the OEIS.]\n294. Louis-Fran\u00e7ois Pr\u00e9ville-Ratelle and Xavier Viennot, The enumeration of generalized Tamari intervals, Trans. Amer. Math. Soc. 369 (2017), 5219-5239. doi:10.1090\/tran\/7004\n295. Thomas B. Preu\u03b2er and Matthias R. 
Engelhardt, Putting Queens in Carry Chains, No27, Journal of Signal Processing Systems, Volume 88, Issue 2, August 2017, p. 185-201. doi:10.1007\/s11265-016-1176-8\n296. Andrew Elvey Price, Wenjie Fang, Michael Wallner, Asymptotics of Minimal Deterministic Finite Automata Recognizing a Finite Binary Language, 31st International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2020) Leibniz International Proceedings in Informatics (LIPIcs) Vol. 159, 11:1-11:13. doi:10.4230\/LIPIcs.AofA.2020.11 (A331120)\n297. Andrew Elvey Price, Alan D. Sokal, Phylogenetic trees, augmented perfect matchings, and a Thron-type continued fraction (T-fraction) for the Ward polynomials, arXiv:2001.01468 [math.CO], 2020. (A000311, A001498, A008299, A008517, A112007, A112493, A134685, A134991, A137375, A181996, A201637, A269939, A288874, A298673) This work has benefited greatly from the existence of the On-Line Encyclopedia of Integer Sequences. We warmly thank Neil Sloane for founding this indispensable resource, and the hundreds of volunteers for helping to maintain and expand it.\n298. R. Pries and C. Weir, The Ekedahl-Oort type of Jacobians of Hermitian curves, arXiv preprint arXiv:1302.6261, 2013\n299. J.-B. Priez, A lattice of combinatorial Hopf algebras, Application to binary trees with multiplicities, arXiv preprint arXiv:1303.5538, 2013. Published in FPSAC 2013 Paris, France DMTCS Proc. AS, 2013, 1167-1179; PDF\n300. J.-B. Priez, A. Virmaux, Non-commutative Frobenius characteristic of generalized parking functions: Application to enumeration, arXiv preprint arXiv:1411.4161, 2014\n301. Raul Prisacariu, Generating k-Pell Infinite Series Using Whittaker's Formula. PDF (A000129)\n302. U. Priss, Lattice-based Information Retrieval, Knowledge Organization, Vol. 27, 3, 2000, p. 132-142.\n303. Robert A. Proctor, Let's Expand Rota's Twelvefold Way For Counting Partitions! (2006), arXiv:math.CO\/0606404.\n304. Robert A. 
Proctor and Matthew J. Willis, Convexity of tableau sets for type A Demazure characters (key polynomials), parabolic Catalan numbers, arXiv:1706.03094 [math.CO], 2017. [On p. 18, notes that the OEIS was used to find relevant literature.]\n305. Robert A. Proctor, MJ Willis, Parabolic Catalan numbers count flagged Schur functions; Convexity of tableau sets for Demazure characters, arXiv preprint arXiv:1612.06323, 2016\n306. H. Prodinger, COMBINATORICS - PAST AND PRESENT, MAY 2006; PDF.\n307. H. Prodinger, Generating functions related to partition formulae for Fibonacci Numbers, JIS 11 (2008) 08.1.8.\n308. Helmut Prodinger, Generating functions for a lattice path model introduced by Deutsch, arXiv:2004.04215 [math.CO], 2020. (A001764)\n309. Helmut Prodinger, Retakh's Motzkin paths and some combinatorial comments, ECA 1:1 (2021) Article S2R4.\n310. Helmut Prodinger, Counting ternary trees according to the number of middle edges and factorizing into (3\/2)-ary trees, arXiv:2009.06793 [math.CO], 2020. (A120986)\n311. Helmut Prodinger, Summing a family of generalized Pell numbers, arXiv:2010.14321 [math.NT], 2020.\n312. Helmut Prodinger, Sarah J. Selkirk, Sums of squares of Tetranacci numbers: A generating function approach, arXiv:1906.08336 [math.NT], 2019. (A000078)\n313. H. Prodinger and T. A. Tshifhumulo, On q-Olivier functions, Annals of Combinatorics, 6 (2002), no. 2, 181-194.\n314. J. Propp, Integrability, Exact Solvability and Algebraic Combinatorics: A Three-Way Bridge?, presented in the Workshop on Combinatorics and Integrable Models, Canberra, 2002.\n315. J. Propp, Tilings, Chapter 9 in Miklos Bona, editor, Handbook of Enumerative Combinatorics, CRC Press, 2015, pages 541-588.\n316. J. Propp, Lessons I learned from Richard Stanley, arXiv preprint arXiv:1501.00719, 2015\n317. J. Propp and D. Ullman, On the Cookie Game, International Journal of Game Theory, volume 20 (1992), pages 313-324.\n318. 
Nicholas Proudfoot, Max Wakefield, and Ben Young, Intersection Cohomology of the Symmetric Reciprocal Plane, Preprint, http:\/\/pages.uoregon.edu\/njp\/ukl-m1.pdf, 2015. [\"All three authors would like to acknowledge the On-Line Encyclopedia of Integer Sequences [Slo14], without which this project would have been very difficult.\"]\n319. Nicholas Proudfoot and Ben Young, Configuration spaces, FSop-modules, and Kazhdan-Lusztig polynomials of braid matroids, arXiv:1704.04510 [math.RT], 2017.\n320. Nicholas Proudfoot, Ben Young, Yuan Xu, The Z-polynomial of a matroid, arXiv:1706.05575 [math.CO], 2017.\n321. Mihai Prunescu, Self-similar carpets over finite fields (2007), arXiv:0708.0899.\n322. Jozef H. Przytycki, History of Knot Theory (2007), arXiv:math\/0703096.\n323. W. Pu, J. Choi, E. Amir, Lifted Inference On Transitive Relations, in Workshops at the Twenty-Seventh AAAI Conference on Statistical Relational Artificial Intelligence, 2013.\n324. Lara Pudwell, Digit Reversal Without Apology (2005), arXiv:math\/0511366.\n325. Lara Pudwell, Enumeration schemes for permutations avoiding barred patterns, El. J. Combin. 17 (1) (2010) R29.\n326. L. Pudwell, Pattern avoidance in trees (slides from a talk, mentions many sequences), http:\/\/faculty.valpo.edu\/lpudwell\/slides\/notredame.pdf, 2012.\n327. L. Pudwell, Avoiding an Ordered Partition of Length 3, 2013; http:\/\/faculty.valpo.edu\/lpudwell\/slides\/pp2013pudwell.pdf.\n328. L. K. Pudwell, Ascent sequences and the binomial convolution of Catalan numbers, arXiv preprint arXiv:1408.6823, 2014\n329. L. Pudwell, Pattern-avoiding ascent sequences, Slides from a talk, 2015; http:\/\/faculty.valpo.edu\/lpudwell\/slides\/ascseq.pdf.\n330. Lara Pudwell, On the distribution of peaks (and other statistics), 16th International Conference on Permutation Patterns, Dartmouth College, 2018. PDF (A001263, A091156, A091894, A092107, A236406)\n331. 
Lara Pudwell, From permutation patterns to the periodic table, Valparaiso University (2019). PDF Also Notices Amer. Math. Soc., 67:7 (2020), 994-1001. (A168380)\n332. L. Pudwell, A. Baxter, Ascent sequences avoiding pairs of patterns, 2014.\n333. Lara Pudwell, Nathan Chenette, Manda Riehl, Statistics on Hypercube Orientations, AMS Special Session on Experimental and Computer Assisted Mathematics, Joint Mathematics Meetings (Denver 2020). PDF (A001787, A001788, A061301)\n334. Pudwell, Lara; Scholten, Connor; Schrock, Tyler; Serrato, Alexa doi:10.1155\/2014\/316535 Noncontiguous pattern containment in binary trees. ISRN Comb. 2014, Article ID 316535, 8 p. (2014).\n335. Lara Pudwell, Rebecca Smith, Two-stack-sorting with pop stacks, arXiv:1801.05005 [math.CO], 2018. (A224232)\n336. S\u00edlvia Casacuberta Puig, On the divisibility of binomial coefficients, 2018. PDF (A290290, A290203)\n337. Yash Puri and Thomas Ward, \"Arithmetic and Growth of Periodic Orbits\", J. Integer Sequences, Volume 4, 2001, Article 01.2.1.\n338. Robert James Purser, Mobius Net Cubed-Sphere Gnomonic Grids, U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Weather Service, National Centers for Environmental Protection, 2018. doi:10.25923\/d9rn-fd18 (A008958)\n339. Pushkarev, I. A.; Byzov, V. A. Donaghey's transformation: an elementary approach. (Russian) Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 411 (2013), Teoriya Predstavlenii, Dinamicheskie Sistemy, Kombinatornye Metody. XXII, 148--177, 243; translation in J. Math. Sci. (N. Y.) 196 (2014), no. 2, 199-215\n340. B. Putievskiy, Transformations [Of] Integer Sequences And Pairing Functions, arXiv preprint arXiv:1212.2732, 2012\n341. The Dutch magazine Pythagoras (http:\/\/www.pyth.eu\/) currently (in 2015) has a series of articles about number sequences, many of which mention the OEIS. Four parts have appeared so far: Een Lexicon vol Getallen [\"A Dictionary of Numbers\"] (Sept. 
2015), Getallenplantjes [\"Number plants\" (?)] (Oct. 2015), Driehoeksgetallen [\"Triangular numbers\"] (Nov. 2015), Een bizarre rij [A bizarre sequence] (Dec. 2015),\n342. Pythagoras, Een Eigen Rij - Uitslag Prijsvraag, Nummer 6, Juni 2016, pp. 20-21.\n343. Pythagoras in 2016 had a series of illustrations on the back covers showing sequences from the OEIS. For an example see the January 2016 issue, http:\/\/www.pyth.eu\/jaargangen\/Pyth55-3.pdf","date":"2021-03-06 15:27:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6547909379005432, \"perplexity\": 10443.508737599044}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178375096.65\/warc\/CC-MAIN-20210306131539-20210306161539-00131.warc.gz\"}"} | null | null |
<?php
declare(strict_types=1);
namespace Baethon\Phln;
const partition = 'Baethon\\Phln\\partition';
/**
* @param array<mixed> $collection
* @param callable(mixed):bool $predicate
*
* @return array{array<mixed>, array<mixed>}
*/
function partition(array $collection, callable $predicate): array
{
return pipe_first($collection, [
_(reduce, function ($carry, $item) use ($predicate) {
$key = $predicate($item)
? 'left'
: 'right';
return array_merge($carry, [
$key => append($carry[$key], $item),
]);
}, ['left' => [], 'right' => []]),
values,
]);
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,237 |
var map, map2, featureList, gpsActive, activeRecord;
/* URL parameters */
var urlParams = {};
if (location.search) {
var parts = location.search.substring(1).split("&");
for (var i = 0; i < parts.length; i++) {
var nv = parts[i].split("=");
if (!nv[0]) continue;
urlParams[nv[0]] = nv[1] || true;
}
}
/* Basemap Layers */
var mapboxOSM = L.tileLayer("http://{s}.tiles.mapbox.com/v3/spatialnetworks.map-6l9yntw9/{z}/{x}/{y}.jpg70", {
maxZoom: 19,
subdomains: ["a", "b", "c", "d"],
attribution: 'Basemap <a href="https://www.mapbox.com/about/maps/" target="_blank">© Mapbox © OpenStreetMap</a>'
});
var mapboxSat = L.tileLayer("http://{s}.tiles.mapbox.com/v3/spatialnetworks.map-xkumo5oi/{z}/{x}/{y}.jpg70", {
maxZoom: 19,
subdomains: ["a", "b", "c", "d"],
attribution: 'Basemap <a href="https://www.mapbox.com/about/maps/" target="_blank">© Mapbox © OpenStreetMap</a>'
});
/* Overlay Layers */
var highlight = L.geoJson(null);
var markerClusters = new L.MarkerClusterGroup({
spiderfyOnMaxZoom: true,
showCoverageOnHover: false,
zoomToBoundsOnClick: true,
disableClusteringAtZoom: 17
});
var observations = L.geoJson(null, {
pointToLayer: function (feature, latlng) {
return L.marker(latlng, {
title: feature.properties.Title,
riseOnHover: true
});
},
onEachFeature: function (feature, layer) {
function formatPhotos(value) {
if (value) {
return "<a href='#' onclick='photoGallery(\"" + value + "\"); return false;'>View Photo</a>";
} else {
return "<i>No photo available</i>";
}
}
if (feature.properties) {
var content = "<table class='table table-striped table-bordered table-condensed'>";
$.each(feature.properties, function(index, value) {
if (index === "Photo") {
value = formatPhotos(value);
}
if (index === "Timestamp") {
value = new Date(value).toLocaleString();
}
if (value === null) {
value = "";
}
if (index !== "id") {
content += "<tr><th>" + index + "</th><td>" + value + "</td></tr>";
}
});
content += "</table>";
layer.on({
click: function (e) {
$("#feature-title").html(feature.properties.Title);
$("#info-tab").html(content);
$("#feature-tabs a:first").tab("show");
$("#featureModal").modal("show");
activeRecord = feature.properties.id;
fetchComments();
highlight.clearLayers().addLayer(L.circleMarker([feature.geometry.coordinates[1], feature.geometry.coordinates[0]], {
stroke: false,
fillColor: "#00FFFF",
fillOpacity: 0.7,
radius: 10
}));
}
});
$("#feature-list tbody").append('<tr class="feature-row" id="'+L.stamp(layer)+'"><td class="feature-name">'+layer.feature.properties.Title+'</td><td style="vertical-align: middle;"><i class="fa fa-chevron-right pull-right"></i></td></tr>');
}
}
});
/* Fetch data to populate form dropdowns */
$(document).ready(function() {
fetchLookups();
loadObservations();
});
/* Dynamic sidebar click listener */
$(document).on("click", ".feature-row", function(e) {
sidebarClick(parseInt($(this).attr('id')));
});
/* Button listeners */
$("#about-btn").click(function() {
$("#aboutModal").modal("show");
return false;
});
$("#full-extent-btn").click(function() {
map.fitBounds(observations.getBounds());
return false;
});
$("#list-btn").click(function() {
$('#sidebar').toggle();
map.invalidateSize();
return false;
});
$("#nav-btn").click(function() {
$(".navbar-collapse").collapse("toggle");
return false;
});
$("#sidebar-toggle-btn").click(function() {
$("#sidebar").toggle();
map.invalidateSize();
return false;
});
$("#sidebar-hide-btn").click(function() {
$("#sidebar").hide();
map.invalidateSize();
});
$(".add-feature-btn").click(function() {
$("#formModal").modal("show");
return false;
});
$("#share-btn").click(function() {
var link = location.protocol + '//' + location.host + location.pathname + "?id=" + activeRecord;
$("#share-hyperlink").attr("href", link);
$("#share-twitter").attr("href", "https://twitter.com/intent/tweet?url=" + link + "&text=Another FOSS4G Observation!&via=brymcbride");
$("#share-facebook").attr("href", "https://facebook.com/sharer.php?u=" + link);
});
$("#contact-form").submit(function() {
$("<div class='modal-backdrop fade in'></div>").appendTo(document.body);
$("#loading").show();
$.ajax({
url: "https://script.google.com/macros/s/AKfycbyVzUCsCI4FegBJ5TYbBZhS-w2zJYZ_0H5Nro2SRvEdBMy4-H4/exec",
type: "GET",
dataType: "json",
data: {
first_name: $("#contact-first").val(),
last_name: $("#contact-last").val(),
email: $("#contact-email").val(),
message: $("#contact-message").val(),
table: "email"
},
success: function(data) {
if (data.status === "success") {
$("#aboutModal").modal("hide");
alert("Your message has been sent, thanks for your interest!");
} else {
alert("There was an error sending your message. Please try again.");
}
$("#contact-form")[0].reset();
$("#loading").hide();
$(".modal-backdrop").remove();
}
});
console.log("submitted!");
return false;
});
/* Hackish form submit to handle modal hiding issues */
$("#data-form").submit(function(e) {
$("<div class='modal-backdrop fade in'></div>").appendTo(document.body);
$("#loading").show();
var photo_url = "";
var photo_hash = "";
/* Upload photo anonymously to imgur */
function postToImgur() {
var formData = new FormData();
formData.append("image", $("[name='uploads[]']")[0].files[0]);
$.ajax({
url: "https://api.imgur.com/3/image",
type: "POST",
        dataType: "json",
data: formData,
cache: false,
contentType: false,
processData: false,
headers: {
"Authorization": "Client-ID bbec7a6bb07979a"
},
success: function(response) {
photo_url = response.data.link;
photo_hash = response.data.deletehash;
postToCartoDB();
}
});
}
/* Post data to CartoDB via Google Apps Script proxy (to protect API key) */
function postToCartoDB() {
$.ajax({
url: "https://script.google.com/macros/s/AKfycbyVzUCsCI4FegBJ5TYbBZhS-w2zJYZ_0H5Nro2SRvEdBMy4-H4/exec",
type: "GET",
dataType: "json",
data: {
longitude: parseFloat($("#longitude").val()),
latitude: parseFloat($("#latitude").val()),
date: $("#date").val(),
observer: $("#observer").val(),
category: $("#category").val(),
venue: $("#venue").val(),
title: $("#title").val(),
observations: $("#observations").val(),
photo_url: photo_url,
photo_hash: photo_hash,
table: "foss4g_observations"
},
success: function(data) {
addObservation(data.rows[0].cartodb_id);
$("#data-form")[0].reset();
}
});
}
/* Hide form */
$("#formModal").modal("hide");
/* Post data after form modal is closed */
$("#formModal").one("hidden.bs.modal", function (e) {
/* Upload photos first so we have the URL to insert into the DB */
if ($("[name='uploads[]']")[0].files[0]) {
postToImgur();
} else {
postToCartoDB();
}
//$("#formModal").off();
});
return false;
});
$("#comment-form").submit(function() {
$("<div class='modal-backdrop fade in'></div>").appendTo(document.body);
$("#loading").show();
$.ajax({
url: "https://script.google.com/macros/s/AKfycbyVzUCsCI4FegBJ5TYbBZhS-w2zJYZ_0H5Nro2SRvEdBMy4-H4/exec",
type: "GET",
dataType: "json",
data: {
observation_id: activeRecord,
name: $("#comment-name").val(),
email: $("#comment-email").val(),
comment: $("#comment").val(),
table: "foss4g_comments"
},
success: function(data) {
fetchComments();
$("#comment-form")[0].reset();
$("#loading").hide();
$(".modal-backdrop").remove();
}
});
return false;
});
function fetchComments() {
$.ajax({
cache: false,
url: "https://fulcrum.cartodb.com/api/v2/sql?format=json&q=SELECT updated_at, name, email, comment FROM foss4g_comments WHERE observation_id = '" + activeRecord + "' ORDER BY updated_at DESC",
dataType: "json",
success: function (response) {
var content = "";
if (response.rows && response.rows.length > 0) {
$.each(response.rows, function(index, comment) {
content += "<div class='panel panel-default'>" +
"<div class='panel-heading'>" +
"<h3 class='panel-title'>" + comment.name + "<span class='text-muted pull-right'><em>" + new Date(comment.updated_at).toLocaleString() + "</em></span></h3>" +
"</div>" +
"<div class='panel-body'>" +
comment.comment +
"</div>" +
"</div>";
});
$("#comment-panes").html(content);
} else {
$("#comment-panes").html("<p class='text-muted'><em>No comments</em></p>");
}
}
});
}
function zoomToFeature(id) {
observations.eachLayer(function (layer) {
if (layer.feature.properties.id == id) {
map.setView([layer.getLatLng().lat, layer.getLatLng().lng], 18);
layer.fire("click");
}
});
}
function sidebarClick(id) {
if (!map.hasLayer(markerClusters)) {
map.addLayer(markerClusters);
}
var layer = observations.getLayer(id);
map.setView([layer.getLatLng().lat, layer.getLatLng().lng], 17);
layer.fire("click");
if (document.body.clientWidth <= 767) {
$("#sidebar").hide();
map.invalidateSize();
}
}
function fetchLookups() {
/* Load category options from CSV file */
Papa.parse("lookups/categories.csv", {
download: true,
header: true,
complete: function(results) {
$.each(results.data, function(index, category) {
$("#category").append("<option value='"+category.value+"'>"+category.label+"</option>");
});
}
});
/* Load venue options from CSV file */
Papa.parse("lookups/venues.csv", {
download: true,
header: true,
complete: function(results) {
$.each(results.data, function(index, venue) {
$("#venue").append("<option value='"+venue.value+"'>"+venue.label+"</option>");
});
}
});
}
function addObservation(id) {
$.ajax({
url: 'https://fulcrum.cartodb.com/api/v2/sql?format=geojson&q=SELECT the_geom, cartodb_id AS "id", updated_at AS "Timestamp", observer AS "Observer", category AS "Category", venue AS "Venue", title AS "Title", observations AS "Observations", photo_url AS "Photo" FROM foss4g_observations WHERE cartodb_id = '+id,
dataType: "json",
success: function (data) {
observations.addData(data);
markerClusters.clearLayers();
markerClusters.addLayer(observations);
}
}).done(function() {
featureList = new List("features", {valueNames: ["feature-name"]});
featureList.sort("feature-name", {order:"asc"});
$("#loading").hide();
$(".modal-backdrop").remove();
});
}
function photoGallery(photos) {
var photoArray = [];
$.each(photos.split(","), function(index, photo) {
photoArray.push({href: photo});
});
$.fancybox(photoArray, {
"type": "image",
"showNavArrows": true,
"padding": 0,
"scrolling": "no",
beforeShow: function () {
this.title = "Photo " + (this.index + 1) + " of " + this.group.length + (this.title ? " - " + this.title : "");
}
});
return false;
}
function loadObservations() {
$.ajax({
cache: false,
url: 'https://fulcrum.cartodb.com/api/v2/sql?format=geojson&q=SELECT the_geom, cartodb_id AS "id", updated_at AS "Timestamp", observer AS "Observer", category AS "Category", venue AS "Venue", title AS "Title", observations AS "Observations", photo_url AS "Photo" FROM foss4g_observations',
dataType: "json",
success: function (data) {
observations.addData(data);
markerClusters.addLayer(observations);
}
}).done(function() {
featureList = new List("features", {valueNames: ["feature-name"]});
featureList.sort("feature-name", {order:"asc"});
$("#loading").hide();
/* If id param passed in URL, zoom to feature, else fit to cluster bounds */
if (urlParams.id && urlParams.id.length > 0) {
zoomToFeature(urlParams.id);
} else {
map.fitBounds(markerClusters.getBounds());
}
});
}
map = L.map("map", {
layers: [mapboxOSM, markerClusters, highlight],
zoomControl: false
}).fitWorld();
/* Bare bones attribution */
map.attributionControl.setPrefix("");
/* Clear feature highlight when map is clicked */
map.on("click", function(e) {
highlight.clearLayers();
});
/* Change form map when main map is changed */
map.on("baselayerchange", function(e) {
if (e.name === "Aerial Imagery") {
map2.removeLayer(mapboxOSM2).addLayer(mapboxSat2);
} else if (e.name === "Street Map") {
map2.removeLayer(mapboxSat2).addLayer(mapboxOSM2);
}
});
/* Larger screens get expanded layer control */
if (document.body.clientWidth <= 767) {
var isCollapsed = true;
} else {
var isCollapsed = false;
}
var baseLayers = {
"Street Map": mapboxOSM,
"Aerial Imagery": mapboxSat
};
var overlayLayers = {
"Observations": markerClusters
};
var layerControl = L.control.layers(baseLayers, overlayLayers, {
collapsed: isCollapsed
}).addTo(map);
var zoomControl = L.control.zoom({
position: "bottomright"
}).addTo(map);
var locateControl = L.control.locate({
position: "bottomright",
drawCircle: true,
follow: true,
setView: true,
keepCurrentZoomLevel: true,
markerStyle: {
weight: 1,
opacity: 0.8,
fillOpacity: 0.8
},
circleStyle: {
weight: 1,
clickable: false
},
icon: "icon-direction",
metric: false,
strings: {
title: "My location",
popup: "You are within {distance} {unit} from this point",
outsideMapBoundsMsg: "You seem located outside the boundaries of the map"
},
locateOptions: {
maxZoom: 18,
watch: true,
enableHighAccuracy: true,
maximumAge: 10000,
timeout: 10000
}
}).addTo(map);
var mapboxOSM2 = L.tileLayer("http://{s}.tiles.mapbox.com/v3/spatialnetworks.map-6l9yntw9/{z}/{x}/{y}.jpg70", {
maxZoom: 19,
subdomains: ["a", "b", "c", "d"],
attribution: 'Basemap <a href="https://www.mapbox.com/about/maps/" target="_blank">© Mapbox © OpenStreetMap</a>'
});
var mapboxSat2 = L.tileLayer("http://{s}.tiles.mapbox.com/v3/spatialnetworks.map-xkumo5oi/{z}/{x}/{y}.jpg70", {
maxZoom: 19,
subdomains: ["a", "b", "c", "d"],
attribution: 'Basemap <a href="https://www.mapbox.com/about/maps/" target="_blank">© Mapbox © OpenStreetMap</a>'
});
map2 = L.map("map2", {
layers: [mapboxOSM2],
attributionControl: false,
zoomControl: false
}).fitWorld();
var baseLayers2 = {
"Street Map": mapboxOSM2,
"Aerial Imagery": mapboxSat2
};
var layerControl2 = L.control.layers(baseLayers2, null, {
collapsed: true
}).addTo(map2);
var locateControl2 = L.control.locate({
position: "topleft",
drawCircle: true,
follow: true,
setView: true,
keepCurrentZoomLevel: false,
markerStyle: {
weight: 1,
opacity: 0.8,
fillOpacity: 0.8
},
circleStyle: {
weight: 1,
clickable: false
},
icon: "icon-direction",
metric: false,
strings: {
title: "My location",
popup: "You are within {distance} {unit} from this point",
outsideMapBoundsMsg: "You seem located outside the boundaries of the map"
},
locateOptions: {
maxZoom: 18,
watch: true,
enableHighAccuracy: true,
maximumAge: 10000,
timeout: 10000
}
}).addTo(map2);
map2.on("startfollowing", function() {
map2.on("dragstart", locateControl2.stopFollowing);
}).on("stopfollowing", function() {
map2.off("dragstart", locateControl2.stopFollowing);
});
map2.on("moveend", function(e) {
$("#latitude").val(map2.getCenter().lat.toFixed(6));
$("#longitude").val(map2.getCenter().lng.toFixed(6));
});
$("#formModal").on("shown.bs.modal", function (e) {
if (locateControl._active) {
gpsActive = true;
locateControl.stopLocate();
} else {
gpsActive = false;
}
map2.invalidateSize();
map2.fitBounds(map.getBounds());
locateControl2.locate();
}).on("hidden.bs.modal", function (e) {
locateControl2.stopLocate();
if (gpsActive) {
locateControl.locate();
}
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,198 |
Q: Accessing service path parameters from CXF Message I am writing an interceptor to validate the requests that I see. However, I am unable to determine a way to access the path parameters (or any parameter, for that matter). When I expand the message object in the Eclipse debug console I can see what I want, but because it is an element in an internal array, I'm not sure which interface to use to access it.
What I see in eclipse (debugger):
My Interceptor :
public class ValidationInterceptor extends AbstractPhaseInterceptor<Message>
{
public ValidationInterceptor()
{
super(Phase.PRE_INVOKE);
}
public void init() {}
public void handleMessage(Message message) throws Fault
{
OperationResourceInfo a = message.getExchange().get(OperationResourceInfo.class);
Method methodMetaData = a.getMethodToInvoke();
List<Parameter> metadataParameters = a.getParameters();
// ? How to access the parameters of the actual method invoked
...
}
}
I would appreciate any help/pointers. Thank you.
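A note on one commonly suggested direction (not from the original post): in CXF's JAX-RS runtime the message content by the invoke phase is a `MessageContentsList` holding the actual argument values, so `MessageContentsList.getContentsList(message)` should yield the invoked values, which can then be paired positionally with the `Parameter` metadata from `OperationResourceInfo`. The runnable sketch below stubs the names and values (no CXF classes) and only demonstrates that pairing step; the CXF retrieval itself appears in a comment and should be verified against your CXF version.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParamPairing {

    // In an actual interceptor the values would come from CXF, e.g.
    //   List<Object> values = MessageContentsList.getContentsList(message);
    // while the names come from OperationResourceInfo's Parameter list.
    // Both are stubbed here so the pairing logic itself is runnable.
    static Map<String, Object> pairByPosition(List<String> names, List<Object> values) {
        Map<String, Object> byName = new LinkedHashMap<>();
        for (int i = 0; i < Math.min(names.size(), values.size()); i++) {
            byName.put(names.get(i), values.get(i));
        }
        return byName;
    }

    public static void main(String[] args) {
        List<String> names = List.of("userId", "format");
        List<Object> values = new ArrayList<>();
        values.add(42);
        values.add("json");
        System.out.println(pairByPosition(names, values));
    }
}
```

Once paired, each parameter value can be validated against the metadata's declared type before the service method runs.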
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,081 |
Reporting entities must be registered with FINTRAC.
The registration process involves an application to FINTRAC. Registration is valid for a two-year period and must be renewed before expiry. There are no fees associated with the registration or the renewal of registration.
FINTRAC may deny or revoke registration if it determines that a reporting entity is not eligible for registration.
Have carried out, attempted to carry out, participated in or facilitated a terrorist activity.
Be controlled directly or indirectly by, be acting on behalf of, at the direction of, or in association with any individual or entity conducting any of the above activities.
Have knowingly carried out, attempted to carry out, participated in or facilitated a terrorist activity.
Have knowingly acted on behalf, at the direction of, or in association with such an entity.
A money laundering or a terrorist activity financing offence.
An offence, prosecuted on indictment, under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act.
An offence under sections 83.18 to 83.231 of the Criminal Code of Canada related to the participation in, or the facilitation of, activities of a terrorist group, the commission of an offence for such a group, the instructing, directing or facilitating of that offence.
An offence under sections 354 or 467.11 to 467.13 of the Criminal Code of Canada related to the possession of property obtained from crime, the participation in activities of a criminal organization, or the commission of, or instruction to commit, an offence for such an organization.
A conspiracy, an attempt to commit, being an accessory after the fact in relation to, or any counselling in relation to any of the offences above.
An individual or entity convicted of an offence on indictment or convicted more than once for fraudulent transactions relating to contracts and trade under the Criminal Code of Canada or any offences under the Controlled Drugs and Substances Act (other than for possession of a substance).
Taking that perfect sunset panorama of a pristine white sand beach after a day of exploring your latest vacation destination just got a little easier. Samsung's new rear quad-camera Galaxy A9 features a 120° Ultra Wide lens, which captures the whole scene with way less panning.
Whether you're ready for your close-up or looking at the bigger picture, the Galaxy A9 comes equipped with camera options to help you capture the moment the way you want. While the 120° Ultra Wide lens lets you capture everything your eyes see, the telephoto lens provides 2x Optical Zoom, so you get the perfect composition even when you are far away from your subject.
The phone's 24MP main lens stays sharp no matter what your lighting conditions. Using a pixel re-mosaic algorithm, it is able to maintain peak performance in low light, producing clear images with less noise. The Depth Camera works with the main camera to give you the ability to manually adjust the depth of field of your images, so you can add beautiful bokeh to accentuate your photos.
Check out the infographic below to learn more about the Galaxy A9's quad camera system and find out which lens best suits your needs.
function onStepIn(cid, item, position, fromPosition)
    local player = Player(cid)
    if not player then
        return true
    end

    -- NOTE: setStorageValue normally takes (key, value); the original
    -- snippet omitted the value, so 1 is assumed here as a flag.
    if not isPlayerInArea(Position(33235, 31801, 12), Position(33299, 31867, 12)) then
        Game.setStorageValue(10004, 1)
    end

    player:teleportTo(Position(33265, 31838, 10))
    player:getPosition():sendMagicEffect(CONST_ME_TELEPORT)

    for i = 10005, 10008 do
        player:setStorageValue(i, 1)
    end
    return true
end
Q: How do I make whatever I write in HTML always appear under the HTML above it, using CSS? When I add something new to my HTML it always appears above the previous elements. I want whatever comes next to appear under the "today's matches" section, or whatever element is above it.
\section{Introduction}
The history of cosmological N-body simulations (in an admittedly broad sense) dates back to 1941 when \citet{Holmberg1941} conducted light bulb experiments in order to investigate the movement of galaxies resulting from gravitational interactions. Since the dawn of N-body simulations by means of computers in the early 1960's \citep{vonHoerner1960, Aarseth1963} where the particle number was limited to $N \leq 100$, technical innovation has led the increase of computational power to keep up with Moore's law and presently, simulations with many billions of particles shed light onto the non-linear growth of structure \red{(see e.g. \citealt{Springel2005, Springel2008, Springel2017, Boylan-Kolchin2009, Stadel2009, Klypin2011, Vogelsberger2014, Schaye2015}} for high-resolution simulations performed in the last two decades).
\par While cosmological simulations arguably constitute a major pillar in the establishment of $\Lambda$ cold dark matter ($\Lambda \text{CDM}$) as the concordance model of cosmology and confirm its predictions on a large-scale, tensions exist on smaller scales. Examples are the missing satellite problem that describes the overabundance of satellite galaxies predicted by $\Lambda \text{CDM}$ simulations as compared to observations of the Milky Way \citep{Klypin1999, Moore1999} and the Local Group \citep{Zavala2009} and the core-cusp problem \citep{Moore1994, DeBlok2001} that addresses the contrast between cuspy halo density profiles predicted by $\Lambda \text{CDM}$ and flattened DM cores as observed in low surface brightness galaxies, amongst other discrepancies. The entirety of differences between $\Lambda \text{CDM}$ and observations is sometimes called the ``small-scale crisis'' (see \citealt{Weinberg2015} for a review).
\par Incorporating baryonic physics into cosmological simulations may alleviate the small-scale discrepancies, e.g. by suppressing galaxy formation due to photoionization \citep{Bullock2000, Benson2002, Somerville2002}, supernova feedback \citep{Governato2012, Pontzen2012}, or tidal stripping \citep{Zolotov2012, Brooks2013}. Another avenue of investigation is to question the cold and collisionless nature of DM: hypothesising ``warm'' DM with a non-negligible velocity in the early Universe instead of the CDM paradigm leads to a sharp cut-off in the high-frequent modes of the power spectrum and hence to the suppression of small-scale structure below the free-streaming scale \citep{Lovell2012}. DM particles that strongly interact with each other, known as self-interacting DM, may also reconcile predictions from simulations with observations \red{\citep{Vogelsberger2012, Zavala2013, Rocha2013, Peter2013, Elbert2015, Tulin2018}}, although this is disputed \citep{KuzioDeNaray2010, Schneider2014}. A further possible explanation for the erasure of small-scale structure is late kinetic decoupling of a cold DM species that remains in thermal equilibrium with a relativistic species until the Universe has cooled down to sub-$\text{keV}$ temperatures \citep{Boehm2014, Bringmann2016}.
\par In the course of the last decades, the tools for cosmological simulations have developed from purely gravitational codes to complex programs which model gas hydrodynamics and additional baryonic physics such as stellar feedback, cosmic rays, supernovae, feedback from active galactic nuclei, and various gas cooling and heating mechanisms. In contrast, a versatile toolbox for the investigation of the most abundant matter component, namely DM, is often lacking. In this work, we present an extension of the simulation code \textsc{Gizmo} \citep{GIZMO} to a generic class of DM particles that annihilate into standard model (SM) particles and deposit a fraction of the released energy into the intergalactic medium.
This includes in particular weakly interacting massive particles (WIMPs) which are (still, despite non-detection hitherto) one of the most promising DM candidates. WIMPs were in thermal equilibrium with the plasma in the early Universe, their abundance maintained by annihilation into and production from SM particles. As the Universe gradually expanded, the reactions of the WIMPs became too rare to sustain the equilibrium abundance, a moment called the ``freeze-out''. The velocity cross-section needed to explain the observed DM density today is $\langle \sigma v \rangle \sim 3 \times 10^{-26} \ \text{cm}^3 \ \text{s}^{-1}$ \citep{Steigman2012}, which falls into the weak scale where many well-motivated particles in extensions of the SM reside, such as the lightest supersymmetric particle.
\par The DM annihilation alters the thermal history of the Universe, leaving its imprint on the cosmic microwave background (CMB) anisotropies (e.g. \citealt{Slatyer2009}), for which reason CMB data from the Planck satellite provides tight constraints on dark matter candidates \citep[see Figure 42]{PlanckCollaboration2018} with masses of $\mathcal{O}(\text{few GeV})$, assuming an s-wave annihilation channel, a thermal relic cross-section, and $2 \to 2$ annihilation.
\par Since the annihilation of DM particles at a mass scale of $\gtrsim \text{GeV}$ is expected to produce $\gamma$-rays which could be detected, further constraints come from indirect dark matter searches such as Fermi-LAT, HESS, HAWC, Chandra, and AMS-02, amongst others (e.g. \citealt{Hoof2018} for a recent review of DM signals from Fermi-LAT). In the years to come, the parameter space for WIMPs will be further narrowed down aiming to cover the entire mass range from $\sim 100 \ \text{keV}$ (below, WIMPs no longer behave like CDM) to the unitarity bound of $\sim 100 \ \text{TeV}$ \citep{Smirnov2019} using direct detection at colliders such as the LHC \citep{Abdallah2015}, underground detectors (see \citealt{Schumann2019}), and indirect searches; in particular, measurements from the LSST will enable precise probes for theoretical DM models \citep{LSSTDarkMatterGroup2019}.
\par Whereas the DM mass is highly constrained for specific annihilation channels, the strongest model-independent lower bound for s-wave $2 \to 2$ annihilation is only $\gtrsim 20 \ \text{GeV}$ \citep{Leane2018}, and for more complicated annihilation channels, such as annihilation into dark-sector products, the DM mass is even less constrained.
\par Recently, the debate on the source of the Galactic Centre Excess (GCE), an excess in GeV $\gamma$-ray emission that could possibly be explained by faint millisecond pulsars \citep{Bartels2016, Lee2016, Macias2018}, has flared up again as \citet{Leane2019} showed that a DM annihilation signal could be mistakenly attributed to point sources due to mismodelling and thus remains a viable explanation.
The same DM candidate that might perhaps cause the GCE could also accommodate observations by the AMS-02 collaboration of an excess of $\sim 10-20 \ \text{GeV}$ cosmic-ray antiprotons \citep{Cholis2019, Cuoco2019}. \red{A $\gamma$-ray signal from the atypical globular cluster Omega Centauri, which is conjectured to be the remnant core of a stripped dwarf galaxy (e.g. \citealt{Ibata2019}), might also originate from annihilating DM \citep{Brown2019, Reynoso-Cordova2019}.}
\par For the distinction between astrophysical sources and the effects of DM annihilation, numerical simulations are a powerful tool that allows to discern the different physics involved by simply switching on and off the respective heating mechanisms. Besides, cosmological simulations are predestined for analysing the imprint of DM annihilation on structure formation,
such as the delayed formation of galaxies \citep{Schon2015, Schon2017}.
\par The results of N-body simulations have been used to estimate annihilation fluxes \citep{Stoehr2003} and to investigate their experimental detectability \citep{Pieri2011}. \citet{Natarajan2008} coupled a galaxy model to the ``Millennium simulation'' \citep{Springel2005} to explore the energy production of DMAF during galaxy formation, and \citet{Smith2012} modelled DMAF during the collapse of primordial minihalos using an analytical DM density profile in an N-body simulation. A model for DM decay into dark radiation is considered in \citet{Dakin2019}.
\par This work builds on \citet{Iwanus2017}, where a method for DMAF into SM particles was presented that self-consistently incorporates the DMAF power generated by the DM particles in the course of the cosmological simulation instead of resorting to an analytic model. Whereas the authors of that work propose to calculate the DMAF power at each gas N-body particle, we present a new method herein in which the DMAF power is determined at each DM N-body particle and then injected into the surrounding gas. This permits the choice of various injection mechanisms that take into account the mean free path of a specific annihilation channel.
\par The paper is structured as follows: in Section \ref{sec:DMAF}, we give a brief overview of DM annihilation and the resulting generation of energy. Then, we introduce our numerical method in Section \ref{sec:method}, focusing on different injection schemes and on the individual time step scheme. Section \ref{sec:comparison} is dedicated to a juxtaposition between our method and the receiver-based method proposed in \citet{Iwanus2017}. Numerical results are presented in Section \ref{sec:results}, and we conclude the paper in Section \ref{sec:conclusions}.
\section{Dark matter annihilation}
\label{sec:DMAF}
We briefly summarise the fundamental equations for the pair annihilation of DM which describe the mass loss and the energy deposition into the surrounding gas.
For the sake of simplicity, we present the formulae for Majorana particles here; the only difference in case of Dirac particles is an additional correction factor of $1/2$ \citep{Abdallah2015}.
\par For DM pair annihilation, the decrease in number density is given by
\begin{equation}
\frac{dn_\chi}{dt}=-\langle\sigma v\rangle n_{\chi}^2,
\end{equation}
where $n_\chi$ is the number density of DM with respect to a comoving volume and $\langle \sigma v \rangle$ is the velocity-averaged annihilation cross-section.
Note that the annihilation rate is $\frac{1}{2} \langle\sigma v\rangle n_{\chi}^2$, with each annihilation removing two DM particles.
\par Consequently, the mass loss due to DM annihilation within a fixed volume is
\begin{equation}
\frac{dM_\chi}{dt} = -\frac{\langle\sigma v\rangle}{m_\chi} \rho_\chi M_\chi,
\end{equation}
where $m_\chi$ denotes the DM particle mass, $\rho_\chi = m_\chi n_\chi$ is the DM mass density, and $M_\chi$ stands for the total DM mass enclosed in the selected volume.
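As a rough, order-of-magnitude sanity check (not part of the paper), the e-folding timescale implied by this equation, $\tau = m_\chi / (\langle\sigma v\rangle \rho_\chi)$, can be evaluated for fiducial values. The numbers below — a $100\ \text{GeV}$ WIMP, the thermal relic cross-section quoted in the introduction, and a density of $1\ \text{GeV}\,\text{cm}^{-3}$ (roughly the local DM density) — are illustrative assumptions:

```python
# Illustrative e-folding timescale tau = m_chi / (<sigma v> * rho_chi)
# for DM pair annihilation; all input values are fiducial assumptions.
sigma_v = 3e-26          # cm^3 s^-1, thermal relic cross-section
m_chi   = 100.0          # GeV, assumed WIMP mass
rho_chi = 1.0            # GeV cm^-3, roughly the local DM density

tau_s  = m_chi / (sigma_v * rho_chi)   # seconds
tau_yr = tau_s / 3.156e7               # years
print(f"tau = {tau_yr:.2e} yr")        # ~1e20 yr, vastly exceeding the Hubble time
```

The enormous timescale is why annihilation negligibly depletes the DM mass over a simulation, while the released energy can still matter for the gas.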
\subsection{DMAF energy rate}
By energy conservation, each pair annihilation releases an energy of $2 m_\chi c^2$. Depending on the mean free path of the annihilation products, this energy is injected directly into the adjacent gas (typically within a few kpc for an electron-positron decay channel due to inverse Compton scattering, \citealt{Delahaye2010}), gradually released over a larger mean free path in case of photon production, or it escapes the local vicinity as, e.g., in the case of a neutrino decay channel.
\par Thus, the energy rate absorbed by the surrounding gas can be written as
\begin{equation}
\frac{dE}{dt} = B f \frac{\langle\sigma v\rangle}{m_\chi} \rho_\chi M_\chi c^2,
\label{eq:energy_rate}
\end{equation}
where $B$ is a boost factor that accounts for unresolved clumps of DM (see \citealt{Bergstrom1999}), which recent studies find to be as high as $\mathcal{O}(10)$ \citep{Stref2017, Hiroshima2018} in certain situations such as for large clusters at low redshift. Additionally, $f$ is the fraction of energy that is injected into the surrounding gas, which is generally in the range of 0.01 -- 1 (see \citealt{Slatyer2009, Galli2013, Madhavacheril2014} for detailed calculations with dependence on redshift and annihilation channel).
Henceforth, we will assume $B = f = 1$ for simplicity, but in principle, arbitrary boost factors $B$ and absorption fractions $f$ can be considered with our method, such as e.g. a boost factor that depends on the local DM density \citep{Ascasibar2007}.
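To give a feeling for the magnitudes involved, equation \eqref{eq:energy_rate} can be evaluated for toy halo parameters (with $B = f = 1$); all numbers below are illustrative assumptions, not values from the paper's simulations:

```python
# Toy evaluation of the DMAF energy rate dE/dt = <sigma v>/m_chi * rho_chi * M_chi * c^2
# in cgs units, with B = f = 1. All inputs are fiducial assumptions.
sigma_v = 3e-26                # cm^3 s^-1, thermal relic cross-section
m_chi   = 100.0 * 1.783e-24    # g  (100 GeV)
rho_chi = 1.0 * 1.783e-24      # g cm^-3 (1 GeV cm^-3, uniform-density toy value)
M_chi   = 1e12 * 1.989e33      # g  (10^12 M_sun of enclosed DM)
c       = 2.998e10             # cm s^-1

dEdt = sigma_v / m_chi * rho_chi * M_chi * c**2   # erg s^-1, roughly ~1e5 L_sun
```

Even this crude estimate shows the injected power is small compared to the halo binding energy per dynamical time, consistent with DMAF being a perturbative heating term for weak-scale WIMPs.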
\section{Numerical method}
\label{sec:method}
In this section, we will present our DMAF implementation into the cosmological simulation code \textsc{Gizmo} \citep{GIZMO}, which is an offspring of the \textsc{Gadget} series of simulation codes \citep{Gadget1, Gadget2}. \textsc{Gizmo} is a parallel multi-physics code that comes with a broad range of physics modules for e.g. cooling, supernovae, cosmic rays, black hole physics, etc. \citep{FIRE2}, which can be individually enabled or disabled as one requires. In this spirit, we implemented DMAF in a modular way allowing for the simple combination with other physics modules. In contrast to older cosmological simulation codes, \textsc{Gizmo} features not only traditional smoothed-particle hydrodynamics (SPH), but also more advanced methods such as pressure SPH, meshless finite mass method, and meshless finite volume method.\footnote{For the meshless methods, the spatial discretisation uses cells rather than SPH ``particles'', but we will continue using the notion of particles for intuition. Henceforth, a ``particle'' will refer to an N-body particle and \emph{not} to a physical particle candidate if not stated otherwise.} The underlying equation for the fluid dynamics is the Euler equation, although extensions such as the Navier--Stokes equation can be enabled. Self-gravity is treated with a hybrid TreePM scheme, and the time stepping allows for distinct, adaptive time steps for individual particles.
\par In order to implement equation \eqref{eq:energy_rate} into \textsc{Gizmo}, we calculate the DM density at each DM particle. The velocity cross-section is taken to be the standard thermal relic cross-section herein, but the extension to velocity-dependent cross-sections modelling p-wave annihilation does not pose a difficulty since the velocity of each DM particle is computed anyway. For incorporating Sommerfeld enhancement \citep{Sommerfeld1931}, relative velocities between DM particles would need to be calculated in addition. Furthermore, simulating decay of DM in lieu of annihilation can easily be accomplished by modifying the implementation of equation \eqref{eq:energy_i_j}.
\par A neighbour search at each DM particle is required to find the gas neighbours that will absorb the generated energy. As suggested in \citet{HopkinsSupernovae} in the context of supernova feedback, we carry out a bidirectional search in order to include the gas particles within the search radius of a DM particle (which is itself determined by imposing a fixed number of desired gas neighbours), as well as gas particles containing the injecting DM particle within \emph{their} kernel. This prevents us from neglecting gas particles in less dense regions and thus from introducing an unnatural bias towards high-density regions.
\par The energy generated by a DM particle $i$ of mass $M_i$ that is released into a specific gas particle $j$ found in the neighbour search is
\begin{equation}
\frac{dE_{i \to j}}{dt} = \frac{\langle\sigma v\rangle}{m_\chi} \rho_{\chi, i} M_i c^2 \frac{w_j}{\sum_{k \in \mathcal{N}_\text{gas}(i)} w_k},
\label{eq:energy_i_j}
\end{equation}
where $w_k$ are weights that specify the fraction of the entire energy produced by DM particle $i$ injected into gas particle $k$, $\mathcal{N}_\text{gas}(i)$ is the set of gas neighbours of particle $i$, and $\rho_{\chi, i}$ is the DM density evaluated at particle $i$. By normalising with the sum of all weights, the total energy received by the gas particles equals the energy produced at the DM particle.
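The normalised split of equation \eqref{eq:energy_i_j} can be sketched as follows; the function below is a minimal illustration (the prefactor bundles $\langle\sigma v\rangle/m_\chi \, \rho_{\chi,i} M_i c^2$, and the weight values are hypothetical):

```python
def dmaf_energy_rates(prefactor, weights):
    """Split the total DMAF power of one DM particle among its gas
    neighbours in proportion to their weights (normalisation as in
    the equation above).

    prefactor : total power <sigma v>/m_chi * rho_chi_i * M_i * c^2
    weights   : dict {gas_id: w_k} from the bidirectional neighbour search
    """
    w_sum = sum(weights.values())
    if w_sum == 0.0:
        return {k: 0.0 for k in weights}   # no absorbing neighbours
    return {k: prefactor * w / w_sum for k, w in weights.items()}

# Hypothetical weights: neighbour "a" subtends twice the weight of "b" or "c".
rates = dmaf_energy_rates(1.0, {"a": 2.0, "b": 1.0, "c": 1.0})
```

By construction the rates sum to the total power generated at the DM particle, so energy is conserved regardless of the weighting scheme.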
\subsection{Choosing the weights}
\label{subsec:weights}
\begin{figure}
\centering
\noindent
\resizebox{\columnwidth}{!}{
\includegraphics{Figures/Injection_schemes.png}
}
\caption{Scheme of the energy injection from one DM particle $i$ into the surrounding gas for the choice of weights a) (\emph{left}) and b) (\emph{right}). For case b), the orange area depicts the convex hull around particle $i$. While the energy is distributed anisotropically for case a) dependent on the distribution of the surrounding gas, the energy injection is (approximately) statistically isotropic for case b) (see \citealt{HopkinsSupernovae} for a numerical investigation of this property).}
\label{fig:Injection_schemes}
\end{figure}
Equation \eqref{eq:energy_i_j} provides the flexibility to customise the energy injection to model a wide range of annihilation mechanisms and decay channels by defining the weights $w_k$ appropriately. In this work, we consider two possible choices:
\begin{enumerate}[a), align=left]
\item mass weighted injection:
\begin{equation}
w_k = M_k W(r_{k i}, h_i),
\end{equation}
\item solid angle weighted injection:
\begin{equation}
w_k = \frac{1}{2}\left(1-\left(1+\left(\mathbf{A}_{k i} \cdot \hat{\mathbf{r}}_{k i}\right) /\left(\pi\left|\mathbf{r}_{k i}\right|^{2}\right)\right)^{-\nicefrac{1}{2}}\right).
\end{equation}
\end{enumerate}
\par For the mass weighted injection in case a), $W = W(r_{k i}, h_i)$ is the kernel function which depends on the distance $r_{k i} = |\mathbf{r}_{k i}|$ between particles $i$ and $k$, and on the search radius $h_i$ of particle $i$. For typical choices of kernel functions, $W$ takes its maximum at $r_{k i} = 0$ and decreases monotonically as $r_{k i}$ increases. For case b), $\mathbf{A}_{k i}$ is the vector-oriented area of a face constructed between particles $i$ and $k$, in such a way that the entirety of these faces around particle $i$ forms a convex hull. The vector $\hat{\mathbf{r}}_{k i} = \mathbf{r}_{k i} / r_{k i}$ is the unit vector pointing from particle $k$ to particle $i$.
\par We first consider case a): for this simple choice, the bidirectional search boils down to a standard search within radius $h_i$ since the compact support of kernel function $W$ is contained within a radius of $h_i$.
The sum in the denominator on the right-hand side of equation \eqref{eq:energy_i_j} can be interpreted as the gas density evaluated at DM particle $i$, and the energy assigned to each gas particle is proportional to its contribution to it. Therefore, the larger the mass of a gas particle and the closer it is to the injecting DM particle, the higher the amount of energy it receives. A variant would be to drop the explicit dependence on the gas particle masses and to set the weights to $w_k = W(r_{k i}, h_i)$.
However, as most simulation codes use similar (or even identical) gas particle masses, there is typically little change.
\par Whereas in case a), the direction of energy injection around a DM particle depends on the mass distribution of neighbouring gas particles (see Figure \ref{fig:Injection_schemes}), it is often desirable to inject energy in a uniform way without giving preference to any specific direction -- a property dubbed ``statistical isotropy''. For this purpose, we follow the idea in \citet{HopkinsSupernovae} to use solid angle based weights in case b), which are chosen such that the energy assigned to each particle is proportional to the solid angle that it subtends on the sky as viewed from particle $i$. For more details, in particular on the construction of the convex hull, and an illustrative sketch, we refer the reader to \citet{HopkinsSupernovae}. In contrast to the reference, it is not necessary to define vector weights in our case because the transferred quantity (energy) is a scalar, although this method naturally lends itself to momentum injections as well. In Section \ref{sec:results}, we will investigate how the choice of the weights affects the simulation results.
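The two weight choices can be sketched directly from their defining formulae. In the snippet below, the cubic-spline kernel is one common SPH choice (not necessarily the kernel used in the paper's runs), and the face area vector $\mathbf{A}_{ki}$ is taken as given — its construction from the convex hull is not reproduced here:

```python
import math

def kernel_cubic(r, h):
    """Standard 3D cubic-spline kernel with compact support r < h
    (one common choice; an illustrative assumption here)."""
    q = r / h
    sigma = 8.0 / (math.pi * h**3)
    if q < 0.5:
        return sigma * (1.0 - 6.0 * q**2 + 6.0 * q**3)
    elif q < 1.0:
        return sigma * 2.0 * (1.0 - q)**3
    return 0.0

def weight_mass(M_k, r_ki, h_i):
    """Case a): mass-weighted kernel weight w_k = M_k W(r_ki, h_i)."""
    return M_k * kernel_cubic(r_ki, h_i)

def weight_solid_angle(A_ki, r_ki_vec):
    """Case b): weight proportional to the solid angle subtended by the
    face A_ki as seen from the injecting DM particle."""
    r2 = sum(c * c for c in r_ki_vec)
    r = math.sqrt(r2)
    # (A_ki . r_hat) / (pi |r|^2), with r_hat = r_ki_vec / |r|
    x = sum(a * c for a, c in zip(A_ki, r_ki_vec)) / (r * math.pi * r2)
    return 0.5 * (1.0 - 1.0 / math.sqrt(1.0 + x))
```

A face oriented perpendicular to the line of sight (so that $\mathbf{A}_{ki}\cdot\hat{\mathbf{r}}_{ki}=0$) receives zero weight, as expected from the solid-angle interpretation.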
\par Additionally, more complex weights could be chosen for modelling the gradual deposition of energy along each line of sight. To this end, the sky as seen from a DM particle could be subdivided into multiple viewing cones along each of which a part of the energy would be distributed by setting the weights for particles within the viewing cone. More elaborate energy injection mechanisms will be addressed in future work.
\subsection{Individual time stepping}
\begin{figure}
\centering
\noindent
\resizebox{\columnwidth}{!}{
\includegraphics{Figures/Time_stepping_with_array.png}
}%
\caption[]{%
Time line of a DM and a neighbouring gas particle: \\ %
I: DM assigns a DMAF energy rate to gas according to equation \protect{\eqref{eq:energy_i_j}}. \\ %
II: End of gas time step, gas particle adds the energy set at I. \\ %
III: End of gas and DM time step, gas adds again the energy set at I. DMAF energy rate is updated and assigned to gas particle. DM reduces its time step, which is now the same as gas time step. \\ %
IV: End of gas and DM time step, gas adds the energy set at III. DM wants to change to a smaller time step than gas. DM informs gas, gas decreases its time step as well. DMAF energy rate is updated and assigned to gas particle. \\ %
V: End of gas and DM time step, gas adds the energy set at IV. DMAF energy rate is updated and assigned to gas particle. \\%
}
\label{fig:Time_stepping}
\end{figure}
Modern cosmological simulation codes such as \textsc{Gizmo} feature an adaptive, individual time stepping scheme which leads to an appreciable computational speed-up as compared to fixed time step schemes, while resolving active regions at high accuracy in time. In \textsc{Gizmo}, the time domain is subdivided in a division of powers of two and after each time step, every particle determines its subsequent time step depending on various time step criteria. These include a criterion for gravitational acceleration proposed by \citet{Power2003} and a Courant--Friedrichs--Lewy (CFL) criterion \citep{CFL1928} for hydrodynamics, together with the improvements in \citet{Saitoh2009, Durier2012, Hopkins2013}. We refer the reader to \citet{Springel2010, GIZMO} for a detailed description of the time stepping scheme in the \textsc{Gadget} codes and in \textsc{Gizmo}.
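The powers-of-two subdivision of the time domain can be sketched as a binning of each particle's desired time step onto the hierarchy $\Delta t = \Delta t_\text{max}/2^b$; the helper below is schematic (the actual integer bookkeeping in \textsc{Gizmo} differs in detail):

```python
import math

def assign_timebin(dt_desired, dt_max, n_bins=29):
    """Round a desired timestep DOWN onto the power-of-two hierarchy
    dt = dt_max / 2**bin used by Gadget/Gizmo-style integrators
    (schematic sketch; bin limits and bookkeeping are assumptions)."""
    if dt_desired >= dt_max:
        return 0
    return min(n_bins, math.ceil(math.log2(dt_max / dt_desired)))

dt_max = 1.0
b = assign_timebin(0.3, dt_max)   # 0.25 <= 0.3 < 0.5, so bin 2
dt = dt_max / 2**b                # the step actually taken
```

Rounding down guarantees the step taken never exceeds the step each criterion permits, and the discrete hierarchy makes synchronisation points between bins trivial to identify.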
\par For the implementation of DMAF, one must hence pay attention to the cases where a receiving gas particle has a smaller or larger time step than the injecting DM particle, i.e. $\ensuremath{\Delta t_\text{gas}} \neq \ensuremath{\Delta t_\text{DM}}$. The case $\ensuremath{\Delta t_\text{gas}} > \ensuremath{\Delta t_\text{DM}}$ is rather unlikely since the gas particles are subject to additional hydrodynamic time step restrictions, whereas DM is typically subject only to the gravitational time step limiter. For this reason, and since the DMAF energy absorbed by a gas particle may significantly change its dynamics, we opt for the following strategy: if a DM particle is assigned a smaller time step than a receiving gas particle, the gas particle reduces its time step such that $\ensuremath{\Delta t_\text{gas}} \leq \ensuremath{\Delta t_\text{DM}}$. However, it can occur that a DM particle encounters a receiving gas particle with a larger time step that is currently inactive; this case is treated in Appendix \ref{sec:inactive_gas}.
\par We devised a first-order in time scheme for the DMAF energy injection that is able to deal with differing time steps for injecting DM and receiving gas particles. An illustration of the scheme is presented in Figure \ref{fig:Time_stepping}. At the beginning of each time step, just after the time steps for the coming step have been found, each active DM particle computes the local DM density, finds the receiving gas particles, and evaluates equation \eqref{eq:energy_i_j} (for the first time at time I in Figure \ref{fig:Time_stepping}). In case of a gas particle with $\ensuremath{\Delta t_\text{gas}} > \ensuremath{\Delta t_\text{DM}}$, the respective gas particle will update its new time step to equal $\ensuremath{\Delta t_\text{DM}}$. In the regular case where $\ensuremath{\Delta t_\text{gas}} \leq \ensuremath{\Delta t_\text{DM}}$, the receiving gas particle stores the energy rate in a bin corresponding to the time step of the injecting DM particle, illustrated by the black filling of the respective array element.
\par The motivation for the time-binned energy storage is as follows: if a gas particle had a single variable to store the DMAF energy it receives, each DM particle would need to store a list of receiving gas particles in order to let them know when it gets active again. In the example in Figure \ref{fig:Time_stepping}, suppose that the gas particle receives energy from \emph{another} DM particle at time I, which has the same time step as the gas particle. Then, this other DM particle updates its energy injection into the gas at time II (which effectively means that the old energy rate generated by this DM particle should be deleted and be replaced by a new energy rate), whereas the DM particle depicted in the figure is not active at time II and the energy injection from this particle calculated at time I should be sustained. This requires that the other DM particle inform the gas particle about the new energy rate (which might be zero if the gas particle is no longer a neighbour of the DM particle). Since the gas particle has moved during the time step and may now reside within the computational domain of another process, it would be computationally expensive and intricate for the DM particles to keep track of their receivers.
\par For this reason, we store the DMAF energy rates for each gas particle in an array where each element corresponds to energy from a certain DM time step\footnote{Although codes such as \textsc{Gizmo} and \textsc{Gadget} internally work in terms of specific energy, it is important to store the energy rate $dE/dt$ since the particle mass may change during the time step, e.g. by other feedback processes, or intrinsically due to the hydrodynamic method in case of the meshless finite volume method.}. Thanks to the elegant time discretisation in powers of two, it is then easy to zero out the energy rate generated by particles that will become active.
\par This is schematically shown in Figure \ref{fig:Time_stepping}: at the end of the first time step of the gas particle, it adds the energy $\frac{dE_{i \to j}}{dt} \ensuremath{\Delta t_\text{gas}}$. Since the energy rate from the DM particle has been stored in a bin corresponding to the DM particle's larger time step, it is not zeroed out yet. At time III, the gas particle adds again the energy $\frac{dE_{i \to j}}{dt} \ensuremath{\Delta t_\text{gas}}$ set at time I as the DM particle did not update its injection rate at time II. Now, the DM particle is active again and, in this example, reduces its time step. The gas particle zeroes out the energy rate from the DM particle (note that this is without the DM needing to inform the gas particle, but rather because the gas particle ``knows'' that the DM particle is active now since the energy rate is stored in the respective bin). The gas particle is still a neighbour of the DM particle and stores the energy rate from the DM particle in the bin corresponding to the new DM time step $\ensuremath{\Delta t_\text{DM}}$. Note that if the gas particle were not a neighbour of the DM particle anymore, it would simply delete the energy rate from the DM particle set at time I and not set a new contribution to the energy rate from this DM particle. At time IV, the gas particle adds the energy $\frac{dE_{i \to j}}{dt} \ensuremath{\Delta t_\text{gas}}$ and zeroes out the energy rate from the DM particle. Now, the DM particle further reduces its time step which might now be smaller than the designated time step of the gas particle. As the DM particle sets the new energy rate for the gas particle, it informs the gas particle that it needs to decrease its coming time step such that it equals the DM time step. At time V, the gas particle adds the energy, zeroes out the energy rate, the DM particle sets the new energy rate, and so on.
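The per-particle bookkeeping just described can be condensed into a few lines; the class below is an illustrative sketch (field names and the fixed bin count are assumptions, not \textsc{Gizmo}'s actual data structures):

```python
N_BINS = 8  # illustrative number of timebins

class GasParticle:
    """Schematic DMAF bookkeeping for one gas particle: energy rates
    are stored per donor timebin so that rates from inactive DM
    particles persist while rates from active donors are refreshed."""

    def __init__(self):
        self.dmaf_rate = [0.0] * N_BINS   # dE/dt received, indexed by donor bin
        self.energy = 0.0

    def clear_active_bins(self, lowest_active_bin):
        # All DM particles with timebin >= lowest_active_bin (i.e. with
        # smaller timesteps) are active this step and will re-deposit
        # their rates, so their old contributions are zeroed out.
        for b in range(lowest_active_bin, N_BINS):
            self.dmaf_rate[b] = 0.0

    def add_dmaf(self, dt_gas):
        # End of the gas step: first-order integration of all stored rates.
        self.energy += sum(self.dmaf_rate) * dt_gas

g = GasParticle()
g.dmaf_rate[2] = 1.0      # a donor on timebin 2 deposited this rate
g.add_dmaf(0.5)           # gas integrates the rate over its own step
g.clear_active_bins(2)    # bin-2 donors are active again; rates refreshed
```

Note that no donor ever has to track its receivers: a gas particle "knows" which rates to discard purely from the bin structure, which is what makes the scheme cheap in a distributed-memory code.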
\par In case of $\Delta t_\text{DM} \gg \Delta t_\text{gas}$, gas particles may continue receiving energy from a DM particle although the gas particles have already moved a large distance and are no longer in the vicinity of the injecting DM particle. In order to prevent this, we implemented an option to limit the DM time step to $\Delta t_\text{DM} \leq c_\Delta \Delta t_\text{gas}$ for all receiving gas particles, in the spirit of \citet{Saitoh2009} who recommend $c_\Delta = 4$ for neighbouring gas particles. This enforces that DM particles update their gas neighbours more frequently, leading to a more localised energy injection.
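This limiter amounts to a one-line clamp; the sketch below uses $c_\Delta = 4$ as suggested in the text (following \citealt{Saitoh2009}):

```python
def limit_dm_timestep(dt_dm, gas_neighbour_dts, c_delta=4):
    """Limit a DM particle's timestep to at most c_delta times the
    smallest timestep among its receiving gas neighbours, so donors
    refresh their injection before receivers drift too far
    (Saitoh & Makino 2009-style limiter; c_delta = 4 per the text)."""
    if not gas_neighbour_dts:
        return dt_dm          # no receivers: no constraint
    return min(dt_dm, c_delta * min(gas_neighbour_dts))
```

In a powers-of-two scheme the clamped value would subsequently be rounded down onto the timebin hierarchy, so the effective ratio never exceeds $c_\Delta$.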
\par In cosmological simulations, proper treatment of the cosmological expansion is required, which we discuss in Appendix \ref{sec:cosmological_expansion}.
\subsection{Sedov--Taylor time step limiter}
\label{subsec:Sedov_Taylor}
Assume a gas particle moves from a low-density DM region to a high-density region. Since the DMAF power in the low-density region was small, the gas particle might still have a large time step, at the end of which it adds a large amount of energy from DMAF. Since the energy is directly injected as internal energy, the time step criteria that depend on dynamic quantities such as the particle acceleration remain unaffected and the gas particle will carry out another large time step, during which parts of the energy are converted into kinetic energy. This means that two (possibly) large time steps may pass from the moment the gas saves the new DMAF energy rate until it reduces its time step.
\par In order to react faster to the increase in DMAF energy, we harness the fact that the total DMAF energy rate for each gas particle is already known at the beginning of the time step. Define the set of donor particles as $\hat{\mathcal{N}}_\text{DM}(j) = \{i \in \text{DM}: j \in \mathcal{N}_\text{gas}(i) \}$. Let $\mathcal{P}_j = \sum_{i \in \hat{\mathcal{N}}_\text{DM}(j)} \frac{dE_{i \to j}}{dt}$ be the total DMAF power that particle $j$ receives. If $\mathcal{P}_j$ is sufficient to form a strong shock, the shock propagation will only depend on $\mathcal{P}_j$ and the surrounding gas density $\rho_g$ as the surrounding gas pressure is negligible. Using dimensional analysis, the shock radius $R_{s,j}$ for this generalised Sedov--Taylor self-similar shock can be found to propagate as
\begin{equation}
R_{s,j} = \beta \left(\frac{\mathcal{P}_j}{\rho_{g,j}}\right)^{\nicefrac{1}{5}} t^{\nicefrac{3}{5}},
\end{equation}
approximating the surrounding gas density in the vicinity of particle $j$ by the constant value $\rho_{g,j}$, and assuming constant DMAF power during the time step. The dimensionless coefficient $\beta$, which depends on the polytropic index of the gas $\gamma$, can be determined numerically after a lengthy calculation to be of magnitude $\sim 1$ (see \citealt{Dokuchaev2003} for exact values), for which reason we neglect it in what follows.
\par In the spirit of a CFL condition, we restrict the time step to be small enough such that a strong shock in a (locally) sufficiently homogeneous environment is confined to the smoothing length $h_j$ of particle $j$ until the end of the time step, that is
\begin{equation}
R_{s,j} \leq c_\text{CFL} h_j,
\end{equation}
where $c_\text{CFL}$ is the CFL number. This yields an additional time step limiter of
\begin{equation}
\Delta t_j \leq \left(c_\text{CFL} h_j\right)^{\nicefrac{5}{3}} \left(\frac{\rho_{g,j}}{\mathcal{P}_j}\right)^{\nicefrac{1}{3}},
\end{equation}
which we impose at the beginning of each gas time step just after the DMAF power has been calculated which might lead to an updated smaller time step.
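A direct transcription of this limiter (the function name and the default CFL number are our own choices):

```python
def sedov_taylor_dt_limit(h, rho_gas, power, c_cfl=0.3):
    """Sedov-Taylor time step limiter: the largest dt such that the
    self-similar shock radius R_s = (P/rho)^(1/5) t^(3/5) stays within
    c_cfl * h, i.e. dt <= (c_cfl*h)^(5/3) * (rho_gas/P)^(1/3)."""
    if power <= 0.0:
        return float("inf")   # no DMAF power -> no restriction
    return (c_cfl * h) ** (5.0 / 3.0) * (rho_gas / power) ** (1.0 / 3.0)
```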
\par In none of our numerical tests for a homogeneous, isotropic universe and for a halo with a gaseous fraction was this Sedov--Taylor time step limiter the dominant time step criterion; for our academic example with extremely high annihilation rates, however, it turned out to be necessary in order to prevent large energy errors, as we discuss in Subsection \ref{subsec:step_example}.
\subsection{Function flow}
Figure \ref{fig:Flowchart} provides a schematic overview of the DMAF method. Steps unrelated to the DMAF are sketched very roughly only to provide an overview. Note in particular the logic for determining the gas time step: first, $\Delta t$ is set for \emph{all} particles, irrespective of DMAF. Then, the energy rates at the DM particles are computed and energy receivers are determined. If a DM particle has a smaller time step than a receiving gas particle, the latter reduces its time step. The energy rates are stored for each gas particle. Afterwards, gas particles check if the Sedov--Taylor time step condition is satisfied; otherwise, they reduce their time step. Then, the system evolves to the end of the time step by applying two kick half steps and a drift; between the kick half steps, gravitational forces and hydrodynamic quantities are updated. At the end of the time step, gas particles add the energy from DMAF and zero out the energy rates from DM particles that will subsequently update their energy rates.
\begin{figure}
\centering
\noindent
\resizebox{\columnwidth}{!}{
\includegraphics{Figures/Flowchart_linear.png}
}%
\caption{Flowchart for one time step of the DMAF implementation. Rectangles tinted in light pink stand for steps that are required due to the DMAF, whereas light yellow rectangles indicate steps that are always required. Recall that the kick evolves the system in momentum space while the drift operation updates the positions.}%
\label{fig:Flowchart}
\end{figure}
\section{Donor-based vs. receiver-based approach}
\label{sec:comparison}
Whereas in our implementation, the energy deposition is expressed by equation \eqref{eq:energy_i_j}, another way of incorporating DMAF energy in cosmological simulations is proposed in \citet{Iwanus2017}. In that work, the energy generated by DMAF is determined at each gas particle, for which reason we will refer to this method as the receiver-based approach in what follows. This section is dedicated to a brief comparison between the receiver-based approach and the donor-based approach proposed herein, highlighting the strengths and weaknesses of each.
\par In the receiver-based approach, equation \eqref{eq:energy_rate} is reformulated as
\begin{equation}
\frac{1}{M_g} \frac{dE}{dt} = \frac{\langle\sigma v\rangle}{m_\chi} \frac{\rho_\chi^2}{\rho_g} c^2,
\end{equation}
where $M_g$ denotes the gas mass contained in the fixed volume $V$ and $\rho_g$ denotes the gas density. From this, the energy rate at a gas particle $j$ produced by DMAF can be computed as
\begin{equation}
\frac{1}{M_j} \frac{dE_{j\leftarrow \chi}}{dt} = \frac{\langle\sigma v\rangle}{m_\chi} \frac{\rho_{\chi,j}^2}{\rho_{g, j}} c^2.
\label{eq:energy_i_j_receiver_based}
\end{equation}
In equation \eqref{eq:energy_i_j_receiver_based}, $E_{j\leftarrow \chi}$ is the total energy received by particle $j$, $\rho_{\chi, j}$ and $\rho_{g, j}$ are the DM density and gas density evaluated at particle $j$, respectively, and $M_j$ is the mass of particle $j$.
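For illustration, equation \eqref{eq:energy_i_j_receiver_based} can be evaluated directly once the SPH estimates $\rho_{\chi,j}$ and $\rho_{g,j}$ are available; the function below is a sketch in cgs units (unit choices and names are ours, only the formula is from the text):

```python
C_LIGHT = 2.99792458e10   # speed of light in cm/s

def receiver_based_rate(sigma_v, m_chi, rho_chi, rho_gas, m_gas):
    """Receiver-based DMAF energy rate dE/dt at one gas particle:
    dE/dt = M_j * (<sigma v> / m_chi) * rho_chi^2 / rho_gas * c^2.
    sigma_v in cm^3/s, masses in g, densities in g/cm^3 -> erg/s."""
    return m_gas * (sigma_v / m_chi) * rho_chi**2 / rho_gas * C_LIGHT**2
```

Note the quadratic dependence on $\rho_{\chi,j}$, which will matter for the comparison with the donor-based method below.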
\par The receiver-based approach rests on the assumptions that the DM particles can be used as tracers to reconstruct a well-defined DM density field and that the energy injection is highly localised.
\subsection{Energy injection}
First, let us compare the total energy rates created by DMAF in a constant volume for both methods. For the ease of notation, we assume that the kernel function $W = W(r, h)$ is independent of the particle type under consideration. A short calculation gives
\begin{equation}
\label{eq:energy_rate_ratio}
\frac{\left(dE/dt\right)_\text{donor}}{\left(dE/dt\right)_\text{receiver}} = \frac{\sum_{i \in \text{DM}} M_i \left(\sum_{k \in \mathcal{N}_\text{DM}(i)} M_k W(r_{ki}, h_i) \right)}{\sum_{j \in \text{gas}} \left( M_j \frac{\left(\sum_{k \in \mathcal{N}_\text{DM}(j)} M_k W(r_{kj}, h_j)\right)^2}{\sum_{l \in \mathcal{N}_\text{gas}(j)} M_l W(r_{lj}, h_j)}\right)},
\end{equation}
where $\mathcal{N}_T(i)$ denotes the set of neighbours of type $T \in \{\text{gas}, \text{DM}\}$ of particle $i$ and where we used the SPH density estimate $\rho_i = \sum_{k \in \mathcal{N}(i)} M_k W(r_{ki}, h_i)$. The total energy rate in the donor-based scheme thus depends neither on gas properties nor on the choice of weights in equation \eqref{eq:energy_i_j}, whereas the evaluation of DMAF at the gas particles in the receiver-based case implies that changes in the gas distribution yield a change in the total DMAF energy rate. The numerator, originating from the donor-based method, only depends on distances between DM particles in the evaluation of the local DM density, while the denominator shows that for the receiver-based method, distances between gas and DM particles as well as between gas particles determine the energy rate.
\par The independence of the total DMAF energy rate from the gas properties is arguably an advantage of the donor-based approach. Since for cold WIMP-like particles, the impact of the energy generated by DM annihilation outweighs the effects of the DM mass loss itself by far \citep{Iwanus2017}, it is a reasonable approximation to keep the DM particle masses constant throughout the simulation. However, if the mass loss due to annihilation is to be taken into account explicitly, e.g. for light dark matter candidates, the donor-based approach allows for reducing the DM mass with $E = M c^2$ being satisfied to machine accuracy by simply subtracting the DM mass corresponding to the DMAF energy at the end of the time step. In contrast, the DM mass loss in the receiver-based implementation is decoupled from the energy absorption calculated at the gas particles, which introduces a small mass error.
\par Additionally, if a more detailed prescription of the energy deposition is required where one wants to account for a certain mean free path of the annihilation products, the donor-based approach provides the flexibility to adjust the weights to mimic the desired physics while the receiver-based approach has no free parameters. This is crucial since the particular annihilation channel may significantly impact the total energy absorbed by the surrounding gas (see e.g. \citealt{Schon2017}).
\par On the other hand, the receiver-based approach intrinsically localises the energy deposition at the gas: if a DM particle resides in a region without gas particles, no energy from this DM particle will be absorbed since no receivers are nearby. In particular, energy production from DMAF in extremely gas-poor haloes is artificially suppressed with the receiver-based method, while the donor-based method sustains the energy injection into gas particles even after the DMAF has driven them out of their host halo if not enough gas particles are left within the halo that could absorb the energy. This means that with the donor-based method, theoretically, distant gas particles might instantaneously receive large amounts of energy because the energy created at the DM particles needs to go \emph{somewhere}. However, in cosmological simulations where $N_\text{gas} \approx N_\text{DM}$, the distance between each DM particle and its nearest gas neighbours is typically small and unphysical energy transport is unlikely to occur. Additionally, relativistic annihilation products travel distances of $\sim 100 \ \text{kpc}$ within a typical time step of a few hundred thousand years, for which reason the physical energy transport horizon is well above the spacing of gas particles in dense regions and the assumption of instantaneous energy injection into the nearest gas neighbours seems justified.
\par In both methods, incorporating arbitrary absorption fractions $f$ and boost factors $B$ can be done without any difficulty.
\subsection{Computational cost}
\begin{table}
\begin{center}
\begin{tabular}{@{}llcccc@{}}
\toprule
& Neighbour search & \multicolumn{1}{l}{donor} & \multicolumn{1}{l}{receiver} & search at & search for \\ \midrule
1. & Calculate $\rho_g$ & X & X & gas & gas \\
2. & Calculate $\rho_{\chi, i}$ & X & & DM & DM \\
3. & Calculate $\rho_{\chi, j}$ & & X & gas & DM \\
4. & Find energy receivers & X & & DM & gas \\ \bottomrule
\end{tabular}
\caption{Different neighbour searches are required for the donor-based and for the receiver-based approaches, respectively. The gas density is always calculated irrespective of DMAF. The two methods differ in the location where the DM density is evaluated. Finding energy receiving gas neighbours is only necessary for the donor-based approach.}
\label{table:loops}
\end{center}
\end{table}
The implementation of DMAF necessitates additional neighbour searches for different particle types, which are listed in Table \ref{table:loops} for the donor-based and receiver-based approach. The first neighbour search for $\rho_g$ is always needed regardless of the DM annihilation. Thanks to the similarity of the neighbour searches, large parts of the code can be recycled from the gas density calculation. The extra neighbour searches can be expected to constitute the major part of additional cost for the DMAF implementation. Note, however, that as reported in \citet{GIZMO}, the number of iterations needed until the search radius has converged is very small for the gas density, and we observe the same behaviour for the other loops as well. This is due to the efficient iteration scheme by \citet{SpringelHernquist2002} that has been further improved in \textsc{Gizmo} accounting for findings in \citet{Cullen2010, Hopkins2013, FIRE1}.
\par The donor-based approach as given by equation \eqref{eq:energy_i_j} is based upon the evaluation of $\rho_\chi$ at each DM particle, which requires a search of the nearest DM neighbours. Moreover, a neighbour search must be conducted for each DM particle in order to find the receiving gas particles. Both neighbour searches are conducted in analogy to the calculation of the gas density at each gas particle: since the desired number of neighbours is fixed (and not the search radius), a variable search radius $h_i$ is assigned to each DM particle $i$ which is adjusted until the effective neighbour number, calculated via $N_{\text{ngb, eff}} = (4\pi/3) \, h_i^3 \sum_k W(r_{k i}, h_i)$, has reached the desired value up to a certain tolerance. Separate search radii $h_{i, g}$ and $h_{i, \chi}$ for gas neighbours and DM neighbours, respectively, are needed, and two neighbour searches must be carried out. In total, there are thus three neighbour search loops. The loop over the energy receiving gas particles is in fact called twice: in the first call, the weight for each gas particle is calculated, then, the normalisation factor in the denominator of equation \eqref{eq:energy_i_j} is computed, and in the second call, the normalised DMAF energy rates are stored. However, note that gas particles often possess a smaller time step than the DM particles since the time step for gas particles is subject to a range of limiters depending on the baryonic physics (see \citealt{GIZMO}), for which reason the additional loops may be evaluated less frequently than the gas density loop in many situations.
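The adjustment of the search radius can be sketched as below. For clarity we use plain bisection rather than the faster Newton-type iteration of \citet{SpringelHernquist2002}; the kernel is the standard three-dimensional cubic spline:

```python
import math

def cubic_spline_W(r, h):
    """Standard 3-D cubic spline kernel with compact support r <= h."""
    q = r / h
    sigma = 8.0 / (math.pi * h**3)
    if q <= 0.5:
        return sigma * (1.0 - 6.0 * q**2 + 6.0 * q**3)
    if q <= 1.0:
        return sigma * 2.0 * (1.0 - q) ** 3
    return 0.0

def effective_neighbours(h, distances):
    """N_ngb,eff = (4*pi/3) * h^3 * sum_k W(r_k, h), as in the text."""
    return (4.0 * math.pi / 3.0) * h**3 * sum(cubic_spline_W(r, h) for r in distances)

def solve_smoothing_length(distances, n_desired, tol=1e-3):
    """Bisect on the search radius h until the effective neighbour
    number matches n_desired (bisection used here only for clarity)."""
    lo, hi = min(distances), 10.0 * max(distances)
    h = hi
    for _ in range(200):
        h = 0.5 * (lo + hi)
        n = effective_neighbours(h, distances)
        if abs(n - n_desired) < tol:
            return h
        if n < n_desired:
            lo = h
        else:
            hi = h
    return h
```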
\par The overhead from updating the gas time step such that $\Delta t_\text{gas} \leq \Delta t_\text{DM}$ is satisfied for neighbouring particles and from applying the Sedov--Taylor time step limiter is small.
\par For the receiver-based approach, in contrast, all properties in equation \eqref{eq:energy_i_j_receiver_based} are being determined by default, except for the DM density at gas particles. Hence, one additional neighbour search is required. In the case of a small ratio between gas time steps and DM time steps, however, the less frequent evaluation of the two extra loops may allow the donor-based approach to outperform the receiver-based one in terms of computational speed.
\par In the first example in Section \ref{sec:results}, the donor-based method with choice of weights a) was the fastest, followed by the receiver-based method and the donor-based method with choice of weights b). For the realistic examples, the receiver-based method outperformed the donor-based method in terms of computational speed. For the cosmological simulation for example, run on 256 CPUs, the wall time was $7965$, $13547$, $16700$, $6387 \ \text{s}$ for $\Lambda \text{CDM}$, donor-based a), donor-based b), and receiver-based -- thus, the receiver-based method ran even faster than the fiducial simulation without DMAF.
\subsection{Time stepping}
\par In the receiver-based approach, each gas particle updates the DMAF energy rates together with the hydrodynamic quantities between the two kick half steps (see Figure \ref{fig:Flowchart}) and adds the DMAF contribution to the total energy rate. Thus, the DMAF energy evolves with the second-order in time leapfrog method as a component of the total energy budget.
\par For the donor-based method, we use a simple first-order accurate time stepping scheme. The motivation for this is the following: since the DMAF energy rates are computed at the DM particles, the update of the DMAF energy rate needs to happen at a logical moment during each DM time step. One could update the DMAF energy rate between the two half kicks of each DM particle; however, the length of the DM half kicks generally differs from the one of the receiving gas particles due to different time steps. Therefore, this would lead to an update of the DMAF energy rate at an unnatural moment \emph{for the gas} (e.g. in Figure \ref{fig:Time_stepping}, before the gas particle performs its second half kick from the midpoint between II and III to III, the DM particle, which is about to do its second half kick from II to III, would update the DMAF energy rate, which would only affect one out of four half kicks of the gas between times I and III).
\par For this reason, we opt for the aforementioned method consisting of updating the DMAF energy rates at the beginning of each DM time step (in case the DM particle is active) and adding the energy at the end of each gas time step. In view of the large uncertainties in the DM annihilation mechanism, employing a simple first-order in time scheme for the DMAF energy seems justifiable.
\section{Results}
\label{sec:results}
In this section, we present test cases for our numerical method. For all tests, we use the meshless finite mass method (MFM), which is the default method in \textsc{Gizmo}. We take a standard cubic spline kernel. In our tests, we neglect modelling the small DM mass loss due to annihilation.
\par The first example deals with the injection of DMAF energy into a contact discontinuity. Then, we turn towards the more realistic case of an isolated halo. Finally, we consider the effects of DMAF in a cosmological simulation of non-linear structure formation.
\subsection{Injection into gas with a density jump}
\label{subsec:step_example}
Our first numerical example examines the differences arising between the donor-based approach and the receiver-based approach. Additionally, the two different choices of weights considered herein are compared. The scenario for the simulation is the following: a few massive DM particles are located adjacent to a discontinuous transition between a high-density gas region on the right and a low-density gas region on the left (density ratio $10^4 \colon 1$). The geometry is two-dimensional and the box is large enough that the shock ensuing from the DMAF energy deposition into the gas remains within the simulation domain until the end time of the simulation $T = 978.5 \ \text{Myr}$.
\par Initially, the gas is distributed over a regular grid consisting of $256 \times 256$ points. All gas particles located at $x < 0$ possess a mass of $M = 1.5 \times 10^7 \ \text{M}_\odot$, whereas all gas particles in the right hand side of the domain ($x > 0$) have a mass of $M = 1.5 \times 10^{11} \ \text{M}_\odot$. This results in a density jump of four orders of magnitude across $x = 0$ since the particles are equally spaced over the simulation domain ($\sim 10^6 \ \text{M}_\odot \ \text{kpc}^{-3}$ and $\sim 10^{10} \ \text{M}_\odot \ \text{kpc}^{-3}$ on the left hand side and right hand side, respectively).
\par We populate the simulation domain with $513 \times 513$ DM particles whose masses follow the probability density function (PDF) of a two-dimensional normal distribution with mean $(0, 0) \ \text{kpc}$ and standard deviation $3 \ \text{kpc}$, scaled such that the total DM mass in the system equals $10^{10} \ \text{M}_\odot$ (see Figure \ref{fig:Step_example_ICs}). The number of DM particles is chosen such that the particle with maximal mass resides in the origin of the domain, exactly in the middle between the nearest gas particles in the left and right domain half. More DM particles than gas particles are used in order to have sufficiently many particles in the small region where the high DM density leads to a substantial DMAF energy generation. A minimum particle mass of $10^{-6} \ \text{M}_\odot$ is enforced to ensure numerical stability. Since the DM mass distribution declines rapidly as the distance to the jump increases, the DMAF energy deposition will be dominated by a small number of DM particles around the origin.
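The DM mass assignment can be sketched as follows. The grid spacing requires a domain size, which the text does not quote; the half-width of $25 \ \text{kpc}$ below is therefore an assumed value for illustration only:

```python
import math

def dm_particle_masses(n_side=513, box_half=25.0, sigma=3.0,
                       m_total=1e10, m_floor=1e-6):
    """Sketch of the DM mass assignment in the step test: particles on
    an n_side x n_side grid carry masses following a 2-D Gaussian PDF
    centred on the origin, rescaled to the total mass m_total (Msun),
    with a minimum-mass floor. box_half (kpc) is an assumed value."""
    xs = [-box_half + 2.0 * box_half * i / (n_side - 1) for i in range(n_side)]
    # unnormalised Gaussian weights on the grid
    w = [[math.exp(-(x * x + y * y) / (2.0 * sigma**2)) for x in xs] for y in xs]
    total_w = sum(sum(row) for row in w)
    # rescale to the total mass and apply the mass floor
    return [[max(m_total * wij / total_w, m_floor) for wij in row] for row in w]
```

With an odd `n_side`, the most massive particle sits exactly at the origin, as required by the setup.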
\begin{figure}
\centering
\noindent
\resizebox{\columnwidth}{!}{
\includegraphics{Figures/Step_ICs_DM.png}
}%
\caption{Static mass distribution of the DM particles, following the PDF of a normal distribution with mean $(0, 0) \ \text{kpc}$ and standard deviation $3 \ \text{kpc}$. Shown are particles with mass larger than $5 \times 10^{-6} \ \text{M}_\odot$.}
\label{fig:Step_example_ICs}
\end{figure}
\par Depending on the injection mechanism, we expect differing energy fractions and propagation speeds of the emanating shock wave in each domain half. For the receiver-based method, which assumes a well-defined DM density field, this problem is numerically challenging in view of the steep DM density gradients and the crude approximation of the underlying smooth DM density field by the DM particles. Self-gravity is switched off for this test case; moreover, the gas is initially cold such that the system starts in an equilibrium state. We take the gas to be polytropic with $\gamma = 5/3$. For the DM candidate, we choose a thermal relic cross-section of $\langle \sigma v \rangle = 3 \times 10^{-26} \ \text{cm}^3 \ \text{s}^{-1}$ and a particle mass of $m_\chi = 100 \ \text{keV} \ c^{-2}$. The number of desired neighbours is $N_\text{ngb} = 16$ for all neighbour searches.
\begin{figure*}
\centering
\noindent
\resizebox{\textwidth}{!}{
\includegraphics{Figures/Step_example_result.png}
}%
\caption{Internal energy of the gas for the donor-based method with choice of weights a) (\emph{left}), b) (\emph{centre}), and receiver-based method (\emph{right}), at the end time of the simulation. Each filled circle stands for one particle and the size and colour represent the internal energy. The spherical black area encompasses the $3\sigma$-region of the DM density field for which the DM particles act as a tracer. The blue line marks the density jump. The kinetic and internal energies in the left and right half of the domain are depicted in the inset bar plots. In the solid angle weighted variant b), a bigger fraction of the energy in the left low-density subdomain has been converted to kinetic energy, and the shock wave has propagated further than in the mass weighted variant a). The total energy injected by the receiver-based approach is much smaller.}
\label{fig:Step_example_result}
\end{figure*}
\par Figure \ref{fig:Step_example_result} shows the internal energy of the gas at final time. Each dot represents a gas particle, where particle size and colour scale with the internal energy of the particle. The black spherical region around the origin marks the $3\sigma$-region of the PDF which the DM particles discretise. Because of the lower density in the left subdomain, the shock speed is faster than in the right subdomain, which can be seen for all methods. In the case of mass weighted injection (case a)), the shock front has propagated the least into the left, low-density subdomain. Since the gas density in the right subdomain is much higher than the one in the left subdomain, the DM particles preferentially deposit DMAF energy into the right subdomain. Tracing back the movement of the high-energy particles in the left subdomain, one finds that they originate from the right domain and migrate to the left half after having absorbed a large amount of energy.
\par In the solid angle weighted case b), the shock front in the left subdomain has advanced much further. Due to the statistical isotropy of the energy injection, the energy injection is not biased towards the right subdomain.
\par For the receiver-based approach, the shock front in the left subdomain lies between cases a) and b). It is striking that the shock wave in the right subdomain is much less pronounced than for the donor-based approach; the shock front has travelled a shorter distance and the shock carries a much lower energy. Also, the total number of high-energy particles is much smaller.
\par The inset plots show the energy in the left and right half of the domain, broken down into internal energy and kinetic energy. For all methods considered, the energy in the left subdomain at final time is greater than the one in the right one due to convective energy transport. For the two flavours of the donor-based method, the total energy in the system should be identical since the methods only differ in \emph{how} the energy is deposited into the gas, not in the calculation of the DMAF energy rate. Indeed, the difference in total energy after $978.5 \ \text{Myr}$ between case a) and b) amounts to $< 10^{-4}$ per cent. In contrast, the total energy for the receiver-based method is only $17.3$ per cent of the one for the donor-based method. This is caused by the reconstruction of the DM density at the gas particles, which leads to an underestimation of the DMAF energy rate: while equation \eqref{eq:energy_i_j} for the donor-based method depends linearly on $\rho_\chi$, equation \eqref{eq:energy_i_j_receiver_based} has a quadratic dependence on $\rho_\chi$ and therefore on an average over several DM particles, which suppresses sharp mass peaks of individual DM particles and hence lowers the generated DMAF energy.
\par Interestingly, the shock wave has propagated further than for the donor-based method in case a), despite the much lower total energy. This is due to the fact that in the receiver-based method, gas particles in the left subdomain absorb DMAF energy throughout the simulation, whereas for the mass weighted injection, the largest part of the energy in the left domain half comes from particles that have propagated from the right domain half to the left, and the fraction of directly absorbed energy in the left domain half is small. During the first $196 \ \text{Myr}$, the energy in the left domain half is larger with the receiver-based method than with the mass weighted donor-based method.
\begin{table}
\begin{center}
\emph{With} time step limiter: \\
\begin{tabular}{@{}llll@{}}
\toprule
& a) & b) & Receiver-based \\ \midrule
$E_\text{tot}(t_1)$ & 257840 & 257650 & 173910 \\
1. -- 3. $\Delta t_\text{min}$ & 0.11936 & 0.014920 & 0.014920 \\ \bottomrule
\end{tabular}
\vspace{0.5cm}
\\
\emph{Without} time step limiter: \\
\begin{tabular}{@{}llll@{}}
\toprule
& a) & b) & Receiver-based \\ \midrule
$E_\text{tot}(t_1)$ & 260700 & 309830 & 233010 \\
1. $\Delta t_\text{min}$ & 15.278 & 15.278 & 15.278 \\
2. $\Delta t_\text{min}$ & 15.278 & 15.278 & 0.029840 \\
3. $\Delta t_\text{min}$ & 0.11936 & 0.029840 & 0.029840 \\ \bottomrule
\end{tabular}
\caption{Total energy in the system at $t_1 = 98 \ \text{Myr}$ and the first three system time steps. Energies are given in $10^{10} \ \text{M}_\odot \ \text{km}^2 \ \text{s}^{-2}$ and times in $\text{Myr}$. With the time step limiter, the total energies for variants a) and b) agree well with each other, while there is a large energy error for variant b) and for the receiver-based method without the time step limiter. The time step imposed by the additional limiter \emph{before} the energy injection is identical to the one set by the CFL condition \emph{after} the energy injection for variant a), and half of it for variant b) and for the receiver-based method.}
\label{table:dt_limiter}
\end{center}
\end{table}
\par In order to verify the statistical isotropy in case b), we ran the same simulation with the Euler equation deactivated, i.e. with injection of DMAF energy into static particles. The energy difference between the left and right subdomain at final time then amounts to roughly $0.01$ per cent. Also for the receiver-based approach, the energy is deposited isotropically in this case and the energy difference at final time between the left and right subdomain is less than $0.1$ per cent if the particles are kept fixed. This is because the gas particles are distributed symmetrically around the DM particles and the energy rate in equation \eqref{eq:energy_i_j_receiver_based} is invariant under a scaling of the gas particle mass $M_j$ while keeping the volume occupied by the particle constant. However, this will in general not be the case when the gas is heterogeneously distributed around the DM particles since each gas particle evaluates the DM density at its own location. Finally, for the mass weighted variant of the donor-based method, the energy ratio between the two domain halves at final time is $E_{\text{right}} / E_{\text{left}} = 520$, heavily biased towards the right, high-density domain half.\footnote{We checked that in the case of a single massive DM particle of mass $10^{10} \ \text{M}_\odot$ located at $(0, 0) \ \text{kpc}$ and for static gas particles, the energy ratio for the mass weighted variant differs from the density ratio only by $|E_{\text{left}} / E_{\text{right}} - 10^{-4}| < 10^{-11}$, as expected.}
\par Finally, we discuss the necessity of the Sedov--Taylor time step limiter (see Subsection \ref{subsec:Sedov_Taylor}) to prevent errors at the beginning of the simulation before the deposited internal energy has been converted to kinetic energy. Table \ref{table:dt_limiter} lists the total energy at time $t_1 = 98 \ \text{Myr}$ and the first three system time steps (i.e. smallest time step of any particle), with and without the additional time step limiter.
\par While the energies for the two choices of weights agree well with each other when the time step limiter is activated, the energy is massively overestimated for solid-angle based weights and for the receiver-based method without the time step limiter. This stems from the fact that these two mechanisms inject large amounts of energy into the left, low-density subdomain, where the energy induces high accelerations which make small time steps necessary. Without the time step limiter, the system performs two huge time steps for the donor-based method (one for the receiver-based method), only limited by the globally set maximum step size, before drastically reducing the time steps. The time step assigned to the particles by the Sedov--Taylor limiter equals or is half the time step enforced by the CFL condition after the energy has affected the dynamics (recall that the time domain is subdivided into powers of two in \textsc{Gizmo}; hence, the minimum time step imposed by all applicable limiters is rounded down to the closest power of two times a base interval). Thus, our additional time step limiter may preclude large energy errors while not being overly restrictive.
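The rounding to the power-of-two step hierarchy works as in the sketch below (the function name and the positivity guard are ours):

```python
import math

def round_to_power_of_two_step(dt, dt_base):
    """Round dt down to the largest dt_base * 2^n (integer n, possibly
    negative) not exceeding dt, as in hierarchical block time stepping."""
    if dt <= 0.0:
        raise ValueError("time step must be positive")
    n = math.floor(math.log2(dt / dt_base))
    return dt_base * 2.0 ** n
```

This rounding is why the step set by the Sedov--Taylor limiter lands either on the same power-of-two level as the one subsequently enforced by the CFL condition or exactly one level below it.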
\subsection{Isolated galaxy}
\label{subsec:halo_example}
\begin{figure*}
\centering
\noindent
\resizebox{\textwidth}{!}{
\includegraphics{Figures/Halo_result_snap_2_T.png}
}
\caption{Gas density $\rho_g$ (\emph{top}) and \red{temperature $T_g$} (\emph{bottom}) of the galaxy after $97.8 \ \text{Myr}$, averaged over the $z$-coordinate. The central gas density is depleted due to the DMAF as compared to the fiducial $\Lambda\text{CDM}$ simulation without DMAF. The results for the different methods closely resemble each other.}
\label{fig:Halo_result}
\end{figure*}
\begin{figure*}
\centering
\noindent
\resizebox{\textwidth}{!}{
\includegraphics{Figures/Halo_radial_plot_with_delta_T.png}
}
\caption{\red{\emph{Isolated halo}: r}adial plot of the DM density $\rho_\chi$ (\emph{left}), gas density $\rho_g$ (\emph{centre}), and \red{gas temperature $T_g$} (\emph{right}) in a logarithmic scale. DMAF reduces the central density of the gas and the DM while increasing the \red{temperature}. The lower panels show the relative difference towards the mean of a), b), and the receiver-based method.}
\label{fig:Halo_radial_plot}
\end{figure*}
Next, we assess our DMAF method in the context of an isolated spherical galaxy, consisting of a DM halo and gas distributed following a Navarro--Frenk--White (NFW) profile \citep{Navarro1997}, with a system mass of $M_{200} = 1.13 \times 10^{12} \ \text{M}_\odot$, concentration parameter $c = 13$, and scale length $r_\text{scale} = 21.7 \ \text{kpc}$. The initial conditions were created using \textsc{Dice} \citep{DICE} and are made up of 80000 DM particles (total mass: $6.04 \times 10^{11} \ \text{M}_\odot$) and 20000 gas particles (total mass: $5.24 \times 10^{11} \ \text{M}_\odot$) that follow the same distribution as the DM and are in thermal equilibrium. We let the initial conditions re-virialise without DMAF for $489 \ \text{Myr}$ to eliminate transients.
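As a cross-check of the quoted parameters, the NFW profile can be written down explicitly; the sketch below (units of $\text{M}_\odot$ and kpc assumed) fixes the characteristic density by requiring the mass enclosed within $r_{200} = c \, r_\text{scale}$ to equal $M_{200}$:

```python
import math

M200 = 1.13e12   # Msun, system mass (from the text)
c = 13.0         # concentration parameter
r_s = 21.7       # kpc, scale length

# NFW mass function evaluated at x = c fixes the normalisation.
mu = math.log(1.0 + c) - c / (1.0 + c)
rho_s = M200 / (4.0 * math.pi * r_s ** 3 * mu)   # characteristic density, Msun/kpc^3

def nfw_density(r):
    """NFW density rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r):
    """Mass enclosed within radius r for the NFW profile."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))
```

By construction, `nfw_enclosed_mass(c * r_s)` recovers $M_{200}$, which makes the normalisation easy to verify.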
\par We simulate the evolution of the galaxy undergoing DM annihilation from a light DM candidate of mass $m_\chi = 100 \ \text{keV} \ c^{-2}$ with thermal relic cross-section of $\langle \sigma v \rangle = 3 \times 10^{-26} \ \text{cm}^3 \ \text{s}^{-1}$ for a simulation time of $T = 97.8 \ \text{Myr}$. The background cosmology is static and thus non-expanding. We take $N_{\text{ngb}} = 40$, and the gravitational softening length is $r_\text{soft} = 4 \ \text{kpc}$. The gravitational softening length plays an important role since it is related to the relative error in the DMAF energy rate due to an unresolved NFW cusp, as derived in \citet{Iwanus2017}.
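For orientation, a commonly used volumetric injection rate in DMAF studies is $\dot{\varepsilon} = \rho_\chi^2 \langle \sigma v \rangle c^2 / m_\chi$, valid for self-conjugate DM with full local energy absorption; these assumptions belong to this sketch and need not match the exact source term of the method presented here:

```python
C_LIGHT = 2.998e10             # speed of light, cm/s
SIGMA_V = 3.0e-26              # <sigma v>, cm^3/s (thermal relic value)
M_CHI = 1.0e5 * 1.783e-33      # 100 keV/c^2 expressed in grams

def dmaf_power_density(rho_chi):
    """Energy injected per unit volume and time (erg cm^-3 s^-1)
    for self-conjugate DM: rho_chi^2 <sigma v> c^2 / m_chi."""
    return rho_chi ** 2 * SIGMA_V * C_LIGHT ** 2 / M_CHI
```

The quadratic dependence on $\rho_\chi$ is what concentrates the feedback in halo centres, where the DM density peaks.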
\par Figure \ref{fig:Halo_result} shows the gas density $\rho_g$ and the \red{temperature $T_g$} of the gas within a radius of $50 \ \text{kpc}$ around the halo centre, averaged over the $z$-coordinate (created with the package \textsc{Pynbody} \citep{pynbody}). In the central region of the halo where the DM density is the highest, the \red{temperature} has risen due to the DMAF energy injection and the gas density has decreased since the heated gas has partly left the halo centre. The depletion of gas in the halo centre also entails a reduction in DM density, as the radial plot in Figure \ref{fig:Halo_radial_plot} shows, albeit not as pronounced as for the gas. The receiver-based and the donor-based method for both choices of weights give very similar results for this example. The relative difference between each of the three simulations with DMAF and their mean amounts to less than $2$ per cent for the DM density, $20$ per cent for the gas density within a radius of $10 \ \text{kpc}$ (less than $5$ per cent for $r > 10 \ \text{kpc}$), and less than $13$ per cent for the \red{temperature}.
\par This example demonstrates that DMAF can considerably alter the structure of galaxies: after less than $100 \ \text{Myr}$, the radial gas density distribution has flattened so much that it is not monotonic anymore but reaches a maximal gas density at a distance of $\sim 10 \ \text{kpc}$ from the galactic centre. For heavier DM candidates, smaller annihilation velocity cross-sections, or lower absorption rates, the imprint of DMAF on the density profiles is smaller but may none the less be detectable, for which reason numerical simulations are a valuable means in investigating the nature of DM.
\subsection{Cosmological simulation}
In this example, we test our method in the context of a cosmological simulation. The background expansion of the universe requires the conversion of the quantities in equation \eqref{eq:energy_i_j} from comoving to physical coordinates. In order to get a more accurate estimation of the DMAF energy rates, we account for the Hubble flow of inactive DM particles by rescaling the energy rates appropriately. This is discussed in Appendix \ref{sec:cosmological_expansion}.
\par We simulate a cubic box with side length $50 \ \text{Mpc} \ h^{-1}$ starting from redshift $z = 100$ to $z = 0$. For DM and gas, we take $256^3$ particles each.
The boundary conditions are periodic. We select a very light DM candidate here in order to highlight the impact of DMAF. To be specific, we choose $m_\chi = 1 \ \text{MeV} \ c^{-2}$ and a thermal relic cross-section of $\langle \sigma v \rangle = 3 \times 10^{-26} \ \text{cm}^3 \ \text{s}^{-1}$. The background cosmology is taken from \citet{Ade2016} and the parameters are summarised in Table \ref{table:planck2015}.
\begin{table}
\begin{center}
\begin{tabular}{@{}ll@{}}
\toprule
Parameter & Value \\ \midrule
$\Omega_m$ & 0.3089 \\
$\Omega_b$ & 0.0486 \\
$\Omega_\Lambda$ & 0.6911 \\
$H_0 \ [\text{km} \ \text{s}^{-1} \ \text{Mpc}^{-1}]$ & 67.74 \\ \bottomrule
\end{tabular}
\caption{Cosmological parameters for the cosmological simulation, taken from \citet{Ade2016}.}
\label{table:planck2015}
\end{center}
\end{table}
For the reconstruction of all hydrodynamic quantities as well as for the number of energy receivers, we choose a desired neighbour number of $N_{\text{ngb}} = 40$. We set the gravitational softening length to $\sim9.77 \ \text{kpc} \ h^{-1}$, which corresponds to 5 per cent of the average inter-particle spacing. The initial conditions at $z = 100$ are generated using the tool \textsc{N-GenIC} \citep{NGenic}, which is based on the Zel'dovich approximation (\citealt{zel1970gravitational}; see e.g. \citealt{White2014} for a comprehensive review). During the phase of linear evolution until $z \sim 100$, we do not take into account the effects of DMAF, which would require modifying the initial conditions generator. The impact on our simulations is, however, expected to be small, since e.g. \citet{Bertschinger2006} derives that early-time DMAF can enhance the abundance of small haloes with masses ranging from less than an Earth mass to a few solar masses, many orders of magnitude below the scales we can resolve in our simulations (each DM particle comprises a mass of $\sim7.95 \times 10^8 \ \text{M}_\odot$ in this simulation).
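In one dimension, the Zel'dovich map underlying such initial-conditions generators can be sketched as follows; the single-mode displacement amplitude and growth factor are arbitrary illustrative values, not tied to \textsc{N-GenIC}:

```python
import math

N = 256                 # particles on a 1D periodic grid
L = 1.0                 # box size (arbitrary units)
k = 2.0 * math.pi / L   # single-mode perturbation
A = 0.01                # displacement amplitude (illustrative)
D = 5.0                 # linear growth factor (illustrative)

q = [(i + 0.5) * L / N for i in range(N)]           # Lagrangian positions
psi = [A * math.sin(k * qi) for qi in q]            # displacement field
x = [(qi + D * pi) % L for qi, pi in zip(q, psi)]   # Zel'dovich map: x = q + D psi

# Density contrast before shell crossing: 1 + delta = 1 / (1 + D dpsi/dq).
delta = [1.0 / (1.0 + D * A * k * math.cos(k * qi)) - 1.0 for qi in q]
```

Here $D A k \approx 0.31 < 1$, so the mapping is single-valued (no shell crossing) and the density stays positive everywhere.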
\begin{figure*}
\centering
\noindent
\resizebox{0.85\textwidth}{!}{
\includegraphics[scale=1]{Figures/Planck2015_result_rendered_variant_white_serif.png}
}%
\caption{Results of the cosmological simulation with a light DM candidate of mass $m_\chi = 1 \ \text{MeV} \ c^{-2}$: DM density $\rho_\chi$ (\emph{left}), gas density $\rho_g$ (\emph{centre}), and specific internal energy of the gas $u_g$ (\emph{right}) at $z = 0$. The first row shows the fiducial $\Lambda\text{CDM}$ simulation without DMAF, the second and third row correspond to the donor-based method with choice of weights a) and b), respectively, and the last row shows the results of the receiver-based approach. DMAF evidently suppresses the formation of substructure -- most noticeable in the gas density plots.}%
\label{fig:Planck2015_result}
\end{figure*}
\par We compare the results for the choice of weights a) and b)
with the receiver-based method; moreover, we run a fiducial $\Lambda \text{CDM}$ simulation \red{without DMAF}. Figure \ref{fig:Planck2015_result} shows a volume rendered representation (see \citealt{Garate2017}) of the DM density $\rho_{\chi}$, gas density $\rho_g$, and specific internal energy of the gas $u_g$. It is evident that the strong DMAF in this simulation suppresses the formation of substructure, and the gas density distribution is much more washed out than in the fiducial $\Lambda \text{CDM}$ simulation, as reported in \citet{Iwanus2017, Iwanus2019}.
\par The weights $w_k$ only have a minor effect on the large-scale structure in this simulation; the density and energy distributions for case a) and b) strongly resemble each other. In contrast, the difference between the donor-based method and the receiver-based method manifests itself in the specific internal energy: for the receiver-based method, DMAF deposits less energy into the gas than for the donor-based method. This behaviour is in line with our findings in Subsection \ref{subsec:step_example}, where we showed that the receiver-based method may underestimate the DM density in case of a steep density gradient. As gas particles heat up and move away from the donating DM particles, this effect is likely to be magnified since the approximation for the DM density at the receiving gas particles deteriorates as the separation between donors and receivers increases.
\begin{figure*}
\centering
\noindent
\resizebox{\textwidth}{!}{
\includegraphics{Figures/Planck2015_HMF_HVF.png}
}
\caption{Halo mass function (HMF) (\emph{red tones, upper-right corner}) and halo velocity function (HVF) (\emph{blue tones, lower-left corner}) for the cosmological simulation at $z = 0$. Inset plots show a zoom with 30-fold magnification. The DMAF quenches the formation of haloes of all sizes; additionally, it decreases the maximum halo velocities. For the choices of weights considered in this work, the impact of the method on the haloes for which sufficient statistics are available ($N \gtrsim 30$) is small.}
\label{fig:Planck2015_HMF_HVF}
\end{figure*}
\par In order to compare the impact of the DMAF methods on the formation of structure, we calculate the halo mass function (HMF) and the halo maximum velocity function (HVF) for the different methods. The theoretical foundation for modelling the HMF was laid by \citet{press1974formation}, and it has since evolved into a standard tool in the analysis of cosmological N-body simulations. Recent numerical studies investigating HMFs include \citet{Despali2016, Comparat2017, McClintock2018}.
\par The HVF is another important statistic that measures the gravitational potential within a halo \citep{Ascasibar2008} and is closely related to the baryonic content of the halo \citep{Comparat2017} in view of the well-established baryonic Tully--Fisher relation \citep{tully1977new, mcgaugh2000baryonic}. The authors of \citet{Ascasibar2008, Knebe2011, Onions2012} advocate the usage of the HVF as a metric for comparing (sub-)haloes because it is independent of the particular cut-off radius used in the halo definition: the maximum halo velocity is typically reached within 20 per cent of the virial radius for moderately sized haloes described by NFW profiles \citep{Muldrew2011}.
The maximum halo velocity is calculated as
\begin{equation}
v_\text{max} = \max_{r} \left\{\left(\frac{G M(r)}{r}\right)^{\nicefrac{1}{2}}\right\},
\end{equation}
where $M(r)$ is the mass enclosed within a radius of $r$.
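Given the radii and masses of a halo's member particles, this definition translates directly into code; the value of $G$ used below is the commonly quoted $4.301 \times 10^{-6}$ in units of kpc km$^2$ s$^{-2}$ M$_\odot^{-1}$:

```python
def max_circular_velocity(radii, masses, G=4.301e-6):
    """v_max = max_r sqrt(G M(<r) / r) over the particle radii.
    With G = 4.301e-6, v is in km/s for masses in Msun and radii in kpc."""
    pairs = sorted(zip(radii, masses))   # sort particles by radius
    v2_max, m_enc = 0.0, 0.0
    for r, m in pairs:
        m_enc += m                       # cumulative enclosed mass M(<r)
        if r > 0.0:
            v2_max = max(v2_max, G * m_enc / r)
    return v2_max ** 0.5
```

In practice the maximum is taken over the discrete particle radii, which suffices once a halo contains more than a few dozen particles.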
\par Finding DM haloes is a highly non-trivial task and, while different popular halo finders agree well on the presence and location of haloes \citep{Knebe2013a, Onions2012}, each halo finder leaves its imprint on the halo properties, which can lead to differences of around 20 per cent \citep{Knebe2013, Onions2012} and affect the halo merger tree \citep{Avila2014a}. For a detailed overview of the zoo of halo finders, we refer to the references in this paragraph.
\par We opt for \textsc{VELOCIraptor} \citep{Elahi2019}, which is based on successive application of a friends-of-friends (FOF) algorithm in physical space and phase space. For the treatment of baryons, we select the ``DM+Baryons mode'' (see the reference for more details).
\par Figure \ref{fig:Planck2015_HMF_HVF} shows the HMF and the HVF for the donor-based method with both choices for the weights, the receiver-based method, and the $\Lambda\text{CDM}$ simulation without DMAF. The mass $M_\text{200}$ denotes the virial mass for an overdensity parameter $\Delta_c = 200$ as usual. In this medium-resolution simulation, we are able to identify haloes of mass $\gtrsim 10^{10} \ \text{M}_\odot$. In total, we find 23446, 19361, 19401, and 18904 haloes for the $\Lambda\text{CDM}$ simulation, donor-based method a), donor-based method b), and the receiver-based method, respectively. Thus, DMAF curbs the formation of haloes at a level of $\sim 20$ per cent for this light DM candidate. This is because the DMAF energy heats up the gas, which results in faster moving gas particles and inhibits the accretion of gas onto DM haloes. \red{The receiver-based approach produces slightly} fewer haloes than the donor-based approach. However, for haloes in the mass range $10^{10} - 10^{13} \ \text{M}_\odot$, where a statistically relevant number of haloes is available, the HMF and HVF for all methods agree very well with one another. In particular, the two choices of weights considered herein lead to similar results, as already suggested by Figure \ref{fig:Planck2015_result}. From the HVF, we infer that the DMAF energy inhibits the formation of haloes with deep gravitational wells and therefore with high maximum velocities.
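The HMF itself is simply a logarithmically binned halo count per unit volume; a minimal sketch (the mass range and binning below are illustrative, not those of the figure):

```python
import math

def halo_mass_function(masses, volume, m_lo=1e10, m_hi=1e14, nbins=8):
    """Return bin centres and dn/dlog10(M): halo counts per dex per unit volume."""
    dlog = (math.log10(m_hi) - math.log10(m_lo)) / nbins
    counts = [0] * nbins
    for m in masses:
        b = int((math.log10(m) - math.log10(m_lo)) / dlog)
        if 0 <= b < nbins:
            counts[b] += 1
    centres = [10 ** (math.log10(m_lo) + (b + 0.5) * dlog) for b in range(nbins)]
    hmf = [cnt / (dlog * volume) for cnt in counts]
    return centres, hmf
```

Summing `hmf` times the bin width and volume recovers the total halo count, which is a convenient sanity check when comparing catalogues.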
\begin{figure*}
\centering
\noindent
\resizebox{\textwidth}{!}{
\includegraphics{Figures/Halo_radial_plot_cosmo.png}
}
\caption{\red{\emph{Largest galaxy cluster in the cosmological simulation}: radial plot of the DM density $\rho_\chi$ (\emph{left}), gas density $\rho_g$ (\emph{centre}), and gas temperature $T_g$ (\emph{right}) in a logarithmic scale. The lower panels show the relative difference towards the mean of a), b), and the receiver-based method. The different methods give a similar temperature profile; however, the gas density in the halo centre is more severely depleted with the donor-based method.}}
\label{fig:Halo_cosmo_plot}
\end{figure*}
\par \red{In Figure \ref{fig:Halo_cosmo_plot}, we have a closer look at the largest galaxy cluster in the cosmological simulation and plot the radial profiles of DM density $\rho_\chi$, gas density $\rho_g$, and temperature $T_g$. The haloes are matched across the different simulations by maximising $N_{a \cap b}^2 / (N_a N_b)$, where $N_{a \cap b}$ is the number of common DM particles in haloes $a$ and $b$, and $N_a$ and $N_b$ are the total numbers of particles in each halo. The galaxy cluster in the reference simulation without DMAF has a total mass of $1.4 \times 10^{14} \ \text{M}_\odot$. As expected, the DMAF has heated the cluster, in particular the inner region, and the different methods agree well with each other in terms of temperature. Interestingly, the gas density in the inner halo region is roughly $50$ per cent higher than average with the receiver-based method: as the heated gas particles move away from the halo centre and disperse to regions with lower DM density, the energy absorption is quickly reduced. In contrast, with the donor-based method, the DM particles in the halo centre continue depositing a large amount of energy into the gas particles even when they are receding from the centre, driving them further out.
However, this difference between the methods is moderate compared with the order of magnitude difference between the DMAF simulations and the reference simulation without DMAF. As for the two different weights, the gas density in the halo centre has decreased slightly more with the solid angle weighted injection.
\par The fact that the difference in gas density between the two methods is not observed for the isolated halo (see Subsection \ref{subsec:halo_example}) suggests that the effects of the specific DMAF implementation become more relevant on larger time scales -- the galaxy cluster considered here is subject to DMAF for several Gyr, compared with stronger DMAF for less than 100 Myr in the case of the isolated halo.}
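The matching criterion $N_{a \cap b}^2 / (N_a N_b)$ used above can be sketched as follows, with each halo catalogue represented as a mapping from halo id to the set of its DM particle ids:

```python
def match_haloes(cat_a, cat_b):
    """Match each halo in cat_a to the halo in cat_b maximising
    N_ab^2 / (N_a * N_b), where N_ab is the shared-particle count.
    Catalogues are dicts: halo id -> set of DM particle ids."""
    matches = {}
    for ha, parts_a in cat_a.items():
        best, best_score = None, 0.0
        for hb, parts_b in cat_b.items():
            n_ab = len(parts_a & parts_b)
            if n_ab == 0:
                continue
            score = n_ab ** 2 / (len(parts_a) * len(parts_b))
            if score > best_score:
                best, best_score = hb, score
        matches[ha] = best
    return matches
```

The symmetric numerator penalises candidates that merely contain the target halo inside a much larger object, so the best match balances completeness against purity.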
\par In this simulation, a very low mass has been taken for the DM candidate to showcase the impact of high DMAF energy rates on the large-scale structure of the Universe, although a WIMP particle this light has already been ruled out by various studies (e.g. \citealt{Leane2018}) assuming $2 \to 2$ s-wave annihilation. In a follow-up paper, more realistic scenarios will be considered; in addition, effects that the simulation herein lacks, such as gas cooling through bremsstrahlung, inverse Compton scattering, recombination, and reionization, will be taken into account.
\section{Conclusions}
\label{sec:conclusions}
We have developed a novel numerical method for including DMAF in cosmological simulations. Our method can serve as a starting point for more elaborate dark sector models, e.g. DM annihilation into more general SM / dark sector products such as a dark radiation component. A varying degree of locality in the resulting deposition of the annihilation power can be modelled by weights accounting for heat generation through e.g. inverse Compton scattering, ionization, or excitation, and an extension of the weights presented herein will be addressed in future work. A careful treatment is required for synchronising two interacting species with different, individual time steps, namely injecting DM and receiving gas particles.
\par Our numerical results show good agreement with the receiver-based method presented in \citet{Iwanus2017} for simulations of an isolated halo and for a cosmological simulation; however, we present a toy example that showcases the conceptual differences between the two methods. \red{Over long periods of time, the donor-based method tends to reduce the gas density in halo centres somewhat more than the receiver-based method}. It is reassuring that for realistic test cases, numerical codes seem to be fairly robust with respect to the particular implementation of DMAF.
\par Almost a century has passed since the first postulation of DM in the Universe and still, little is known about its nature. Joint efforts of experimentalists and theorists will be necessary for unravelling this mystery within the coming decades, and probing dark sector models by means of cosmological simulations can play a crucial role in this quest.
\section*{Acknowledgements}
The authors acknowledge the National Computational Infrastructure (NCI), which is supported by the Australian Government, and the University of Sydney HPC for providing services and computational resources on the supercomputers Raijin and Artemis, respectively, that have contributed to the research results reported within this paper. Thanks also go to Phil Hopkins and Volker Springel for making the codes \textsc{Gizmo} and \textsc{Gadget}-2 publicly available. F. L. is supported by the University of Sydney International Scholarship (USydIS).
## Quasistatic viscoplasticity without safe-load conditions. (English) Zbl 1477.35261

Summary: In the paper we present the existence theory for gradient-type quasistatic models of viscoplasticity. Our goal is to show the reader a new point of view on the existence of solutions to such models, in which no safe-load conditions are needed. In the classical approach, authors use this kind of indirect assumption on the data in order to obtain proper energy estimates; in our approach we propose how to omit such safe-load conditions by means of quite delicate estimates and by considering a proper approximation of the initial problem.

### MSC:

- 35Q74 PDEs in connection with mechanics of deformable solids
- 74C05 Small-strain, rate-independent theories of plasticity (including rigid-plastic and elasto-plastic materials)
- 74H20 Existence of solutions of dynamical problems in solid mechanics
- 35A01 Existence problems for PDEs: global existence, local existence, non-existence
Also by Antonio Tabucchi
_Indian Nocturne_
_It's Getting Later All the Time_
_Letter from Casablanca_
_Little Misunderstandings of No Importance_
_The Missing Head of Damasceno Monteiro_
_Pereira Declares_
_Requiem_
"Having been" belongs in some way to
a "third kind," radically heterogeneous to both
being and non-being.—VLADIMIR JANKELEVITCH
# Contents
Front Cover
Title Page
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
Chapter 13
Chapter 14
Chapter 15
Chapter 16
Chapter 17
Chapter 18
Chapter 19
Chapter 20
Author's Note
Copyright Page
# 1
To open the drawers you have to turn the handle and press down. This disconnects the spring, the mechanism is set off with a slight metallic click, and the ball bearings automatically begin to slide. The drawers are stacked at a slight angle and run out of their own accord on small rails. First you see the feet, then the stomach, then the chest, then the head of the corpse. Sometimes, when an autopsy hasn't been performed, you have to help the mechanism by pulling the drawer with your hands, since some of the corpses will have bloated stomachs which press against the drawer above and so get stuck. The corpses which have been autopsied on the other hand are dry, as though drained, with a sort of zip fastener along their stomachs and their innards stuffed with sawdust. They make you think of big dolls, oversized puppets from a show whose run has ended, tossed away in a store for old bric-a-brac. And in a way this is life's storehouse. Before their final disappearance, the discarded products of the scene find a last home here while waiting for suitable classification, since the causes of their deaths cannot be left in doubt. That's why they are lying here, and he looks after them and watches over them. He manages the anteroom that leads to the definitive disappearance of their visible image; he records their entry and their departure; he classifies them; he numbers them; sometimes he photographs them; he fills in the file-card that will allow them to vanish from the world of the senses; he hands out their final ticket. He is their last companion, and something more, like a posthumous guardian, impassive and objective.
Then is the distance that separates the living from the dead, he sometimes wonders, really so great? He's unable to answer his own question. In any event cohabitation, if we can call it that, helps to reduce that distance. The corpses have to have a little card attached to their big toes with a registration number, but he's sure that, in the remote way in which they are present, they detest being classified with a number as if they were objects. Because of this, when he thinks about them himself he gives them jokey nicknames, some entirely random, others suggested by a vague likeness to, or circumstance in common with, some character in an old film: Mae West, Professor Unrat, Marcelino Pan y Vino. Pablito Calvo, for example, is the exact double of Marcelino: round face, knobbly knees, a short shiny black fringe. Thirteen years old, Pablito was working illegally when he fell off some scaffolding. The father can't be found, the mother lives in Sardinia and can't come. They'll be sending him back to her tomorrow.
Of the original hospital, only the temporary reception ward and the morgue are still here in this old part of town, otherwise referred to as the historic center. For a long time now this area has been considered a site for study and restoration. But the years go by, local governments alternate, vested interests change and the part to be restored grows more and more decrepit. And then the city encroaches menacingly from other areas, drawing the attention of the experts elsewhere, to suburbs where the "productive" population is ever more densely settled, where huge dormitories have been built. It's the buildings in these areas that demand the time of the municipal engineers. Sometimes the hillside will slip, as if it wanted to shrug off those ugly encrustations—and urgent measures are introduced, special funds made available. Then there are roads to be built, sewers and gas pipes to be linked up, schools, nurseries, clinics. Here in the center, on the other hand, the agony is diffuse, a slow leprosy that has invaded walls and houses whose decay is stealthy and irreversible, like a pending death sentence. Here live pensioners, prostitutes, street vendors, fishmongers, unemployed young layabouts, grocers with ancient, damp, dark shops that smell of spices and dried cod, above whose doors one can barely make out faded signs announcing: "Wines—Colonial Products—Tobaccos." The garbage men rarely come by; even they disdain the leavings of this second-class humanity. In the evening syringes glitter in the narrow streets, there are plastic bags, and sometimes the shapeless mass of a rat dead in a corner where a phosphorescent poster put up by the Pest Control Department warns you not to touch the verdigris-colored bait scattered on the ground.
Sara has frequently said she'd like to come and pick him up on those evenings his shift finishes at ten, but he has always forbidden her. Not so much for fear of the people; in the evening the narrow street is home for three quiet prostitutes who have watchful pimps at first-floor windows. No, what worries him most are the bands of rats that roam around aggressively in the evening. Sara has no idea how big they are; he's sure she would be terrified; she can't imagine what they're like. True, the city abounds in rats, but this area has its own special breed. Spino has a theory, but he's never told anyone, least of all Sara. He thinks it's the morgue that attracts them.
# 2
Saturday evenings they usually go to the Magic Lantern. It's a film club at the top of Vico dei Carbonari in a small courtyard that looks like some corner of a country village and reminds you of farmhouses, patches of countryside, times past. From up here you can see the harbor, the open sea, the tangle of tiny streets in the old Jewish ghetto, the pinkish bell tower of a church hemmed in between walls and houses, invisible from other parts of the city, unsuspected. You have to climb a brick stairway worn by long use, a long shiny iron bar serving as a handrail, twisting along a pitted wall invaded by tufts of caper plants obscuring faded graffiti. You can still read: "Long live Coppi" and "The exploiters' law shall not pass." Things from years gone by. On summer nights, after the film, they wind up their evening in a small café at the end of the narrow street where two blocks of granite with a chain between them mark off a little terrace complete with pergola and surrounded by a shaky wall. There are four small tables with green iron legs and marble tops where the circles of wine and coffee the stone has absorbed and made its own trace out hieroglyphics, little patterns to interpret, the archaeology of a recent past of other customers, other evenings, drinking bouts perhaps, late nights with card games and singing.
Beneath them the untidy geometry of the city falls sheer away together with the lights of villages along the bay, the world. Sara has a mint granita that they still make here using a primitive little gadget. With a grater fitted inside a small aluminum box, it scrapes the fragments of ice together compact and soft as snow. The proprietor is a fat man with bags under his eyes and a lazy walk. He wears a white apron that emphasizes his paunch, he smiles, he pronounces his always miserable weather predictions: "Tomorrow it'll get colder, the wind is from the east" or "This haze'll bring rain." He prides himself on knowing the winds and weather; he was a seaman when he was younger; he worked on a steamship on the Americas route.
Even when it's hot Sara draws in her legs and covers her shoulders with a shawl, since the night air gives her pains in her joints. She looks towards the sea, a brooding mass that might be the night itself were it not for the stationary lights of the ships waiting to come into harbor. "How nice it would be to get away," she says, "wouldn't it?" Sara has been saying how nice it would be to get away for ten years now, and he answers her that one day maybe, sooner or later, they ought to do it. By tacit agreement their exchanges on this subject have never gone beyond these two ritual phrases: yet all the same he knows that Sara dreams of their impossible departure. He knows because it isn't difficult for him to get close to her dreams. There's an ocean liner in her fantasies, with a deckchair under cover and a plaid blanket to protect her from the sea breeze, and some men in white trousers at the end of the deck are playing a game the English play. It takes twenty days to get to South America, but to which city isn't specified: Mar del Plata, Montevideo, Salvador de Bahia, it doesn't matter: South America is small in the space of a dream. It's a film with Myrna Loy that Sara liked a lot: the evenings are stylish, there's dancing on board, the deck is lit up by garlands of lights and the band plays "What a Night, What a Moon, What a Girl" or some tango from the thirties, like _"Por una cabeza."_ She's wearing an evening dress with a white scarf, she lets the dashing captain flirt with her and waits for her partner to leave the infirmary and come and dance with her. Because, of course, as well as being her partner, Spino is also the ship's doctor.
If Sara's dream is not exactly that, then it's certainly something very like it. The evening they saw _Southern Waters_ she looked so wistful; she hugged his arm tight, and while she was eating her granita went back to the old chestnut of his unfinished degree. These days even the line that he is too old doesn't deter her. Won't she accept, he says, once and for all, that at his age you don't feel like going back to school any more? And then the exam registration booklet, the bureaucracy, his old college friends who would be his examiners now. It would be intolerable. But it's no good, she doesn't give up: life is long, she says, longer maybe than one expects, and you don't have the right to throw it away. At which he prefers to look off into the distance, doesn't answer, falls silent to let the matter drop and to avoid it leading to another argument that's connected to his not getting his degree. It's a subject that distresses him: he understands well enough how she feels about it, but what can he do? Of course at their age this life as secret lovers is a somewhat inconvenient eccentricity, but it's so difficult to break with old habits, to pass suddenly into married life. And then, the idea of becoming the father of that evasive eighteen-year-old with his absurd way of speaking and indolent, slovenly manner terrifies him. Sometimes he sees the boy walk by on his way back from school and thinks: I would be your father, your substitute father.
No, this is definitely not something he wants to talk about. But Sara doesn't want to talk about it either; she wants him to want to. So like him she doesn't mention it; instead she talks about films. The Magic Lantern has been holding two retrospectives dedicated to Myrna Loy and Humphrey Bogart; they even showed _Strictly Confidential:_ there's more than enough for them to chew over here. Did he notice the scarves Myrna Loy was wearing? Of course he did, for heaven's sake, they're so flashy; but Bogart's foulards as well, always so fluffy and with those polka dots, truly unbearable . . . sometimes it seems like wafts of cologne and Brylcreem are coming off the screen. Sara laughs quietly, with that delicate way of catching her breath she has. But why don't they have a retrospective for Virginia Mayo, too? That Bogart treated her like a dog, the bastard. She has a special soft spot for Virginia Mayo, who died in a motel room, destroyed by alcohol, because he'd dropped her. But, _by the way_ , that ship in the harbor, doesn't it look like a liner? It has too many lights, she thinks, to be a cargo ship. He isn't sure, hmm, no, he wouldn't know. Though perhaps, no, they don't have ocean liners anymore these days, they're all in the breakers' yards, just a few left for cruises. People travel by plane these days, who would cross the Atlantic in a liner? She says: "Right, you're right," but he senses from her tone that she doesn't agree, is merely resigned. Meanwhile the proprietor of the café moves around with a cloth in his hand, wiping the empty tables. It's a silent message: if they would be so kind as to call it a day he could close down and get off to bed, he's been on his feet since eight this morning and the years weigh heavier than his paunch. Then the breeze has got a bit cool; the night is oppressively silent and humid; you can feel a film of brine on the arms of the chairs; perhaps they really had better go. Sara agrees it would be better. 
Her eyes are bright, he never knows whether this is emotion or merely tiredness. "I'd like you to sleep with me tonight," she tells him. Spino says he'd like to as well. But tomorrow is his day off, she'll come to his place in the morning and they'll be together until evening. He'll prepare a quick snack to eat in the kitchen and they can spend the whole afternoon in bed. She whispers what a shame it is they met so late in life, when everything was already settled; she's sure she would have been happy with him. Perhaps he's thinking the same thing, but to cheer her up he tells her no, it's one thing being lovers and quite another being married, the daily routine is love's worst enemy, it grinds it down.
The proprietor of the café is already lowering his shutters and mumbles goodnight under his breath.
# 3
They brought him in in the middle of the night. The ambulance arrived quietly, its headlights dimmed, and Spino immediately thought: something horrific has happened. He had the impression he'd been asleep and yet he picked up the sound of the ambulance's motor perfectly clearly, heard it turn into the narrow street too calmly, as if there were nothing more that could be done, and he sensed how death arrives slowly, how that is death's real pace, unhurried and inexorable.
At this time of night the city is asleep, this city which never rests during the day. The noise of the traffic dies down, just every now and then the lonely roar of a truck from along the coast road. Through the empty expanses of night-time silence comes the hum of the steelworks that stands guard over the town to the west, like some ghostly sentinel with lunar lighting. The doors of the ambulance echoed wearily in the courtyard, then he heard the sliding door open and felt he was picking up that smell the night's chill leaves in people's clothes, not unlike the sour, slightly unpleasant smell some rooms have when they've been slept in. There were four policemen, their faces ashen, four boys with dark hair and the movements of sleepwalkers. They said nothing. A fifth had stayed outside and stammered something in the dark that Spino couldn't catch. At which the four went out, moving as though they didn't really know what they were doing. He had the impression of witnessing a graceful, funereal ballet whose choreography he couldn't understand.
Then they came in again with a corpse on a stretcher. Everything was done in silence. They shifted the corpse from the stretcher and Spino laid it out on the stainless-steel slab. He opened the stiffened hands, tied the jaws tight with a bandage. He didn't ask anything, because everything was only too clear, and what did the mere mechanics of the facts matter? He recorded the time of arrival in the register and pushed the bell that rang on the first floor to get the doctor on duty to come and certify death. The four boys sat down on the enameled bench and smoked. They seemed shipwrecked. Then the doctor came down and started to talk and write. He looked at the fifth boy, who was wounded and was moaning softly. Spino telephoned the New Hospital and told them to prepare the operating theater for an urgent case, then immediately arranged for the boy to be sent there. "We haven't even got any instruments here," he said. "We're just a morgue now."
The doctor went out by the back stairs and someone, one of the boys, sobbed and murmured: "Mother," pushing his hands into his eyes, as if to erase a scene that had been etched there. At which Spino felt an oppressive tiredness, as though the tiredness of everything around him were bearing down on his shoulders. He went outside and sensed that even the courtyard was tired, and the walls of this old hospital were tired, the windows too, and the city, and everything. He looked up and had the impression that even the stars were tired, and he wished there were some escape from this universal tiredness, some kind of postponement or forgetting.
# 4
He walked all morning by the harbor. He got as far as the Customs and the cargo docks. There was an ugly ship with "Liberia" written on the poop, unloading bags and boxes. A black man leaning against the guard-rail watching the unloading procedure waved to him and he waved back. Then a thick bank of low cloud rose from the sea and only moments later had reached the shore, wrapping itself round the lighthouse and the derricks, which dissolved in fog. The harbor grew dark and the iron structures shiny. He crossed the Piazza delle Vettovaglie and went to the elevator cars that go up to the hills beyond the bastion of apartment blocks framing the city. There was no one on the cars now, they fill up in the late afternoon when people come home from work. The operator is a little old man with a smoke-dark suit and a wooden hand. On his lapel he wears a disabled veteran's ribbon. He's extremely efficient at using his one good hand to operate the levers and that strange iron ring that looks like the controls of a tram. Alongside the windows of the cabin, which in this first stretch of the journey runs on rails like a funicular, blank walls of houses march by, interrupted by small dark openings inhabited by cats, gates leading through to courtyards where you can glimpse a little washbowl, a rusty bicycle, geraniums and basil planted in tuna cans. Then all at once the walls open up: it's as if the car had burst through the roofs and was headed straight for the sky. For a moment you feel you're hanging in the void, the traction cables slide silently; the harbor and the buildings fall away rapidly below; you almost have the impression that the lifting movement will never stop; the law of gravity seems an absurdity and the town a toy it's a relief to be leaving behind you.
You stop at the edge of a meager garden with a shelter. It's like a railway station in the mountains, there's even a wooden seat cut from a tree trunk. If you didn't turn to look at the sea you could think you were in Switzerland or on the hills above some German lake. From here there's a path that leads away to a Hungarian trattoria. That's its name, "Hungary," and inside there's a handsome old woman and her irritable husband. The customers speak a hesitant Italian and argue amongst themselves in Hungarian. Heaven knows why they insist on keeping this poor shack open. Every time Spino goes the place is deserted; the old woman is solicitous and calls him Captain: it's ridiculous, she has always called him Captain.
He sat down at a table near the window; it's incredible how at this height the sound of the ships' sirens is clearer than down below. He ordered lunch and then a coffee that the woman always prepares Turkish style, serving it up in huge blue porcelain cups that belong perhaps to her Hungarian youth.
After the meal he rested a while, his eyes open, head on his hands, but noticing nothing, exactly as if he were sleeping. He sat there listening to time slipping slowly by; the cuckoo in the clock over the kitchen door popped out and cuckooed five times. The old woman arrived and brought him a teapot wrapped in a felt cloth. He sipped tea for a long time. The old man was playing solitaire at the next table and every now and then looked up at him, screwing his eyes into a smile as he indicated the cards that wouldn't come out. He invited Spino to join him and they played a game of _briscola_ , both concentrating on the cards as if they were the most important thing in the world, as if upon them depended the outcome of some event which remained obscure, but which they both sensed was superior to the reality of their own presence here. Dusk fell pale blue and the old woman turned on the lights behind the counter, their two parchment lampshades spotted with fly droppings and supported by two stuffed squirrels, somewhat absurd in a trattoria that looks out over a seaport.
So then he telephoned Corrado, but he wasn't in the editorial office. They managed to track him down in Typesetting. He seemed rather excited. "But where have you been?" he shouted, to make himself heard over the noise of the machines, "I've been trying to get you all day." Spino told him he was in the Hungary; if Corrado wanted to come and meet him there he'd be happy to see him. He was on his own. Corrado told him he couldn't, and his tone seemed brusque, perhaps annoyed. He explained that they were about to start printing the paper and the crime page read like a boring official communiqué, the nasty story the whole city would be reading about tomorrow. He'd been trying to reconstruct what had happened all day without managing to put together a decent article. The reporter he'd sent out to the scene had come back with a garbled version. Nobody knew anything and asking at the police station was worse than trying to see in the dark. If only he'd been able to find Spino a bit earlier he could have asked him for a couple of details. He'd heard he'd been on duty. "They didn't even want to tell me his name," he finished, huffily. "All I know is that he had false papers."
Spino said nothing and Corrado calmed down. From the receiver Spino heard the noise of the machines working rhythmically with a liquid sound, like waves. "You come here," Corrado began again, suddenly disarming, "Please," and Spino seemed to see the childish expression Corrado's face has when he's upset.
"I can't," he said. "I'm sorry, Corrado, but this evening I really can't. I'll call you back tomorrow maybe, or the day after."
"Okay," Corrado said, "I wouldn't have time to change the piece now anyway. All I need is his name. You didn't hear anything, last night? Do you remember if someone mentioned a name?"
Spino looked out the window. Night had fallen and a waterfall of lights was spilling down the hillside, cars driving into town. He thought for a moment about the previous night, remembering nothing. Odd, the only image that came to mind was a stagecoach in an old film; it shot out from the right-hand side of the screen, growing enormous as it came into close-up, as if heading straight for him, where as a child he sat watching its approach in the front row of the Aurora cinema. There was a masked rider galloping after it. Then the guard tucked his rifle into his shoulder and the screen exploded with a crashing shot as Spino covered his eyes.
"Call him The Kid," he said.
# 5
The article in the _Gazzetta del Mare_ was unsigned, a brief note on the front page leading the reader to the Local News section, where the story took up two columns: a modest space on an inside page. To compensate, there was a photograph of the dead man. It's the photo the police took, Corrado managed to get them to give it to him, and anyhow, if they want to find out who the man is, it suits the police to have it published. Under the photo they've put the caption: "Gunman Without a Name."
He opened the paper on the table, pushing aside the breakfast things, while Sara began to tidy up the other rooms. "See?" she shouted from the kitchen, "Seems nobody knows him. But the article can't be by Corrado, it isn't even signed."
Spino knows it's not by Corrado. The facts were dug up by a young and very enterprising reporter who a few months ago caused pandemonium when he wrote about corruption on the docks. Spino sticks to the main story, skipping the opening paragraph about the fight against crime, full of clichés.
"A tragic gun battle took place last night in the working-class Arsenale district in an apartment on the top floor of an old block in Via Casedipinte. Acting on a tip-off from a source which police are keeping strictly secret, five men of the Police Special Corps raided the apartment shortly after midnight. At the warning, 'Open up! Police!' an unspecified number of persons in the apartment fired repeatedly through the door, seriously wounding one policeman, Antonino Di Nola, 26, who has been stationed in our city for only two months. Di Nola later underwent what was described as delicate surgery. After the shooting, the gunmen barricaded themselves in a small room leading off from the entrance hall before escaping from a window across the rooftops. But before fleeing (and this perhaps is the most obscure part of the whole incident) they shot one of their own gang. The man was raced to the Old Hospital but was dead on arrival. His identity is unknown. It appears he was carrying false documents. Between twenty and twenty-five years old, brown beard, blue eyes, slim, average height, to all intents and purposes the dead man was a stranger to local inhabitants, despite having lived in the area for about a year. He went under the name of Carlo Noboldi and claimed to be a student, although inquiries made at university offices have revealed that he was not enrolled. Shopkeepers in the area say he was courteous and polite and always paid his bills on time. The apartment, which has two rooms and a loft, belongs to a religious order which took Noboldi in last year when he claimed he had just returned from abroad and was out of money. The Prior of the Order, to which Noboldi was paying a nominal rent, declined to make any statement to journalists. This new murder, which once again sees our city as the stage for violent crime, will intensify the fears of a population already deeply disturbed by recent events."
Sara has now come up behind him and, leaning over his shoulder, starts to read the paper, her head beside his. She passes a hand through his hair, a gesture of understanding and tenderness. For a moment, engrossed, they stare at the photograph of the unidentified man. Then she lets slip a remark that leaves him shaken: "Grow a beard and lose twenty years and it could be you."
He doesn't reply, as if this observation were of no importance.
# 6
On the sliding door Pasquale had left a note: "Back Soon." Pasquale always goes and has his morning coffee around eleven. Instead of waiting in the courtyard, Spino decided to go and join him; after all he knew where to find him. The sun was bright, the streets were pleasant. He went out of the hospital and down a dark side street that led into a small square where there was a café with a terrace and tables set out. Pasquale was sitting at a table reading the paper. Spino must have frightened him, because when he came up from behind and spoke to him, Pasquale started slightly. With a look of resignation he folded his paper and left some money on the table. They walked calmly, as if out for a stroll. Then Pasquale said it was a sad story, to which Spino replied, "Right," and Pasquale said: "I want to be buried in my own village. That's where I want them to put me, beneath the mountains."
A bus went by and the noise drowned out their last words. They crossed a patch of garden where people had worn a footpath between flowerbeds defended by "Keep Off" signs. Spino said he wasn't going to the morgue, he just wanted to know if anybody had shown up, a relative, someone who knew the man. Pasquale shook his head with an expression of disgust and said: "What a world." Spino asked him not to leave the morgue if he could possibly avoid it, and Pasquale replied that if the relatives did come forward, the first place they'd go would be to the police, they certainly wouldn't come to the hospital. They parted at the crossroads where the path through the gardens plunges between the houses of the old city center, and Spino set off to catch the number 37.
Corrado wasn't in the office, as Spino had feared. He had guessed his friend would want to go in person to try and find out more. Obviously the facts his reporter had picked up hadn't satisfied him. He hung around in the editorial office for a while, saying hello to people he knew, but no one paid much attention to him. There was an atmosphere of impatience and nervous tension, and Spino imagined that this death with its burden of tragedy was weighing down on the room, making the men feel feverish and vulnerable. Then somebody came through a door waving a piece of paper and shouting that the tanks had crossed the frontiers, and he named a city in Asia, some improbable place. And shortly afterwards another journalist working at a teleprinter went over to a colleague and told him that the agreements had been signed, and he mentioned another distant foreign city, something plausible perhaps out there in Africa, but as unlikely-sounding here as the first. And Spino realized that the dead man he was thinking of meant nothing to anybody; it was one small death in the huge belly of the world, an insignificant corpse with no name and no history, a waste fragment of the architecture of things, a scrap-end. And while he was taking this in, the noise in that modern room full of machines suddenly stopped, as if his understanding had turned a switch reducing voices and gestures to silence. And in this silence he had the sensation of moving like a fish caught in a net; his body made a sudden involuntary jerk and his hand knocked an empty coffee cup off a table. The sound of the cup breaking on the floor started up the noise in the room again. He apologized to the owner of the cup, who smiled as if to say it didn't matter, and Spino left.
# 7
"Still No Name for the Victim of Via Casedipinte." It's the headline of an article by Corrado. His initials are at the bottom. It's a resigned, tired piece, full of clichés: the police search, all leads meticulously followed up, the inquiry at a dead end.
Spino noticed the involuntary irony: a dead end. He reflects that one person is definitely dead and no one knows who he is, so much so that they can't even legally declare him dead. There's just the corpse of a young man with a thick beard and a sharp nose. Spino starts to use his imagination. He was dead on arrival at the hospital, but perhaps in the ambulance he mumbled something: cursed, begged, mentioned a name. Perhaps he called for his mother, as is only natural, or for a wife, or child. He could have children. He is married. There's a ring on his finger, given, of course, that it is his ring. But of course it's his. No one wears somebody else's ring.
But no, says Corrado in his article. He didn't say anything while he was being driven to the hospital, he was in a coma, to all intents and purposes already dead. The policemen involved in the shoot-out said so.
Spino found a pen and underlined the parts he thought most interesting.
"His photograph has been sent to every police station in Italy, but there appears to be no trace of him in police files.... It is believed that if he had been a member of an underground organization, his comrades would have made some kind of announcement by now.... As things stand at the moment the police cannot be sure that the young man was a terrorist.... What's more, according to informed sources, the tip-off given to the police could be part of an underworld or perhaps mafia vendetta.... The identity-card found on the murdered man belongs to Mr. I. F. of Turin, who lost it two years ago and reported the loss in the usual fashion.... And lastly there is the curious detail of the name on the door. Written on a plastic strip, the kind of thing anyone can print out themselves with a Dymo machine, it says: Carlo Nobodi (not 'Noboldi' as we mistakenly reported yesterday). The name is obviously false, perhaps a significant adaptation of the English word 'nobody'."
Suddenly he thought of the ring. He telephoned the morgue and Pasquale's voice answered.
"Has he still got his ring on?"
"Who is it? Can I help you?"
"It's Spino. I want to know if he's still got his ring on."
"What ring? What are you talking about?"
"Doesn't matter," said Spino. "I'll be right over."
"Nobody shown up?" Spino asks him.
Pasquale shakes his head and lifts his eyes to the ceiling with a resigned expression, as if to say that the corpse will have to stay where it is. The clothes are in the locker, the forensic people have left them there because they didn't consider them important. They didn't even bother to search through them carefully, otherwise they'd have found a photograph in his breast pocket. Pasquale points to it, he's put it under the glass top on the desk. It's a snapshot from a contact sheet, about as big as a postage stamp. It must be an old photo, in any event he ought to hand it over to the policeman on duty, it's compulsory. But the policeman's not there at the moment. He was there half the morning and then they called him out for something urgent. He's a young guy who does patrol work as well.
Spino had expected to have trouble with the ring, but, as it happens, it slips off easily. The hands aren't swollen and then the ring seems too big for the finger. On the inside, as he was hoping, there's a name and a date: "Pietro, 12.4.1939." Pasquale is surprised out of his sleepiness and comes over to take a look. Chewing a toffee, he mutters something incomprehensible. Spino shows him the ring and he looks at his friend inquisitively.
"But what are you after?" Pasquale says in a whisper. "Why are you so bothered about finding out who he is?"
# 8
They got on the bus in Piazza del Parlasolo, under the bell tower. The clock said eight o'clock, and, it being Sunday, the square was quiet, deserted almost, the three buses lined up in a row, their engines ticking over, each with a card on its windshield announcing a destination. The clock struck eight and the driver promptly folded up his paper, pressed a button to close the automatic doors and slipped into gear. They went to sit up front, on the driver's side, Sara by the window. On the seat at the back was a group of Boy Scouts, halfway down the aisle an elderly couple in Sunday best, then themselves.
Sara had brought sandwiches and on her knees held a guidebook to Romanesque churches in the area. The book was in color and its cover featured a stone ceiling rose. The bus drove along the almost deserted sea front. The traffic lights hadn't been switched on yet and the driver slowed down at every intersection. After the flower market they took a wide road that climbed rapidly in long curves. In just a few minutes they were halfway up the hillside, already out of town, running along beside an old ruined aqueduct. Another moment and it was open country with thickets of trees and vegetable gardens planted on terraces; olive, acacia and mimosa trees seemed on the point of flowering despite the season. Below, they looked down to the sea and the coast, both pale blue and veiled in a light mist which didn't penetrate the city itself.
Sara closed her eyes and perhaps slept a little. Spino's eyes were also half-closed as he let himself be lulled by the motion of the bus. The Boy Scouts got off a stop before the village by a roadside Madonna. Then the bus crossed the village and turned round in the square, stopping in a yellow rectangle painted on the flagstones. Before starting their climb they had coffee in a café on the square. The little woman behind the counter watched them with a curiosity they satisfied by asking for directions to the sanctuary. She spoke in a harsh, rather primitive dialect, showing bad teeth. They gathered she was suggesting they eat in a trattoria that belonged to her daughter where the cooking was good and the prices reasonable.
They decided instead to climb up the path marked in their guidebook. The book promised a steep but picturesque walk with dramatic views across the bay and the countryside inland. All of a sudden the bell tower rose pink and white amongst the holm oaks. Sara took Spino by the hand, pulling him along, like two children coming out of school.
The churchyard is paved with stone flags, grass growing in the cracks between, while a low brick wall runs along the edge of a sheer drop to the other side. From up here the horizon stretches away from one bay to the next and the sea breeze blows in with a sharp tang. On the facade, near the door, an inscription explains how in the year of grace MCCCXXV the Madonna now in the sanctuary was carried in procession down to the sea, where she vanquished the terrible plague then afflicting the valley, after which the people chose the Madonna as patron saint of the bay. The first stone of the convent annex was laid on June 12th MCCCXXV and the inscription preserves the memory of that day. Sara read aloud from her guidebook, insisting that Spino pay attention.
The sun was hot. To eat their sandwiches they stretched out on a patch of grass at the end of the churchyard where an iron cross on a stone pedestal commemorates a solemn visit paid by the Bishop in 1918, in gratitude, it says, for the end of the war, and for Victory. They ate slowly and calmly, enjoying the pleasure of being there, and when the sun began to slip behind the promontory, leaving a hazy light along the coast, they went into the church by a side door near the apse where a fresco shows a knight on a white horse crossing a landscape dominated by a naïve allegorical representation with a background of spring celebrations and festivals to the left and fires and hangings to the right. Then they went along the aisles, looking at the votive paintings hung on the walls. Most of them are seascapes: shipwrecks, miraculous visions saving mariners from storms, windjammers with their rigging devastated by lightning finding the right route thanks to the intercession of the Madonna. The Holy Mother is always shown between flashing clouds, her head covered by an azure veil as in popular iconography, her right hand reaching through the sky to make a gesture of protection toward the wave-tossed boats. Rough handwriting has traced out phrases of devotion across the paint.
Then the bell rang out and the Prior came in from the vestry to celebrate afternoon Mass. They sat to one side, near the confession-box, reading the inscriptions on the stone slabs on the walls. Afterwards, they found the Prior in the vestry as he was taking off his vestments and he led them through to his study next to the now empty cells of the convent beyond the refectory. Perhaps he mistook them for a mature married couple wanting advice, who knows, or for two inquisitive tourists. He invited them to sit on a small couch in a bare room: there was a dark table, a small organ, a bookcase with glass doors. On the table, with a chestnut leaf to mark his place, was a book about destiny and the tarot. Then Spino said he had come about a man who had died, and the priest immediately understood and asked if they were relatives or friends of the man. Neither, he said. The first time he'd seen him he was already dead, and now he was being kept in a refrigerator, like a fish, but he ought to be given a proper burial. The priest nodded in agreement, since from his point of view he imagined he was hearing and perhaps warming to a version of his own compassion as a man of faith in the words of another. But what could he say? Yes, he had known the boy, but not in the sense of knowing his name, place of birth and so on. He had always believed he was named Carlo and perhaps he really was. All he could say about him was that he was a nice boy, he loved his studies, he had said he was poor, and the Order had helped him. He didn't know for certain if he was really born in Argentina, that was what he had said, and the Prior had never doubted it, and why should he have? In the two months he had stayed in the monastery he had read a great deal and they had talked a great deal. Then he had moved to town so as to be able to study and the Order had continued to help him by offering the modest charity of a low rent. He was sorry he had gone, he was a boy with a sharp, clear mind.
He looked them in the eyes, insistently, as priests will sometimes. "Why do you want to know about him?" he asked.
"Because he is dead and I'm alive," Spino said.
He wasn't sure why he'd answered like that. He felt it was the only plausible answer, since, truth to tell, there was no other reason. So then the priest clasped his hands together on the table, and, stretching out his arms, his white cassock slipped back to show his wrists, which were also white, and his fingers fidgeted a little with each other.
"He wrote to me," said the priest. "I'll show you the letter." He opened a drawer and took out a blue envelope with a postcard inside showing a view of the city that Spino sees every day. The priest handed it to him and he read the few lines written there in a large, rather childish hand. Then Spino asked if anyone else had seen it, and the priest shook his head smiling, as if to say that no one had bothered to come and talk to him. "I couldn't be of much use to the police," he said, "and then it's too much of an effort to climb up here."
They exchanged a few casual remarks about the beauty of the place and the history of the church. Sara embarked on a pleasant conversation with the priest about the frescoes, Spino restricting himself to listening to their authoritative remarks as they spoke easily about the Knight, the Angel, Death, and the Hanged Man, until he remarked that it was odd but they sounded like tarot figures, and he pointed to the book on the table. "I wouldn't have thought you'd like it, Father," he added, "it being about life's strange coincidences."
The priest smiled and looked at him indulgently. "God alone knows all the coincidences of this existence, but we alone must choose our own set of coincidences from all those possible," he said, "we alone." And so saying he pushed the book towards Spino.
So then, for fun, Spino took the book and opened it at random without looking. He said: "Page forty-six," and with a solemn voice, as if pretending to be a fortune teller, read aloud the first paragraph. They laughed out of politeness, as one does after an amusing remark, and it was clear that this laughter also marked the end of their conversation. So they said goodbye and the priest showed them out. The sky was growing dark and they hurried down the path, having heard the horn of the bus in the village square announcing its imminent departure.
Sara flopped on her seat with a sigh of satisfaction and tidied her hair slyly. "We should go on vacation," she said. "We need a vacation." He nodded without saying anything and leaned his head back on the headrest. The driver turned off the interior lights and the bus sped out of the village and along the hillside. Spino closed his eyes and thought of destiny, of the sentence he had read from that book, of life's infinite coincidences. And when he opened them again the bus was already driving through the pitch dark and Sara had gone to sleep with her head on his shoulder.
# 9
Seeing him holed away behind his desk with that childish frown he sometimes wore when he had too much to do, Spino thought how Corrado always loved to play the part of the cynical newspaper editor, a type they'd seen together in the movies so many times. Spino had arrived ready to tell his friend about his Sunday outing. The morning's newspaper, as always on Mondays, was almost exclusively given over to soccer and contained no news of any importance. He would have liked to have told Corrado that Sara was perhaps about to set off for a short vacation, and if he wanted to take him on free of charge as a private investigator, here was an occasion not to be missed.
But when Corrado said: "Another," holding up two fingers, Spino's good humor suddenly evaporated and he sat down without the courage to speak, waiting.
"The policeman died last night," Corrado said, and he made a gesture with his hand, a cutting gesture, as if to say "that's it" or "end of story." There was a long silence and Corrado began to leaf through the pages of a file as if there was nothing more to say about the matter. Then he took off his glasses and said calmly: "The funeral will be held tomorrow; the corpse is laid out in a mortuary room at the police barracks; the wire services have already released the text of the official telegrams of condolence." He put the file back on a shelf and fed a piece of paper into his typewriter. "I've got to write it up," he said. "I'm doing it myself because I don't want any trouble, just straightforward news, no speculation, no fancy stuff."
He made as if to start writing, but Spino put a hand on the machine. "Listen, Corrado," he said, "yesterday I spoke to a priest who knew him, I saw a letter. He was a sensitive person, maybe this business isn't as simple as it seems."
Corrado jumped to his feet, went to the door of his little glass office and closed it. "Oh, he was sensitive, was he?" he exclaimed, turning red. Spino didn't answer. He shook his head in a sign of denial, as if not understanding. So then Corrado said to listen very carefully, because there were only two possible explanations. First: that when the police arrived the dead man was already dead. In fact The Kid died by the door to the apartment. Now, the gun that killed both him and the policeman, from which six shots were fired, was found on the kitchen balcony at the end of a short passage. So obviously it wasn't suicide since a dead man couldn't possibly run back the whole length of the passage and go out on the balcony to leave the gun there. Second explanation: the gun, with somebody holding it, was on the balcony, waiting. The Kid knew this, or didn't know, impossible to be sure. At a certain point the police knock on the door and The Kid calmly goes to open it. And at that moment the gun pokes in from the night and fires repeatedly both on The Kid and on the police. So then, who was the dead man? Unknowing bait? Or aware that he was bait? A poor fool? Someone who wasn't involved at all? An inconvenient witness? Or something else again? All hypotheses were possible. Was it terrorism? Perhaps. But it could equally well have been something else: vendettas, fraud, something secret, blackmail, who knows. Perhaps The Kid was the key to everything, but he might also have been just a sacrificial victim, or someone who stumbled into an encounter with destiny. Only one thing Corrado was sure of: that it was best to forget the whole business.
"But you can't let people die in a vacuum," Spino said. "It's as if they'd died twice over."
Corrado got up and took his friend by the arm, pulling him gently to the door. He made an impatient gesture, pointing to the clock on the wall. "What do you think you're going to find out?" he said, pushing him outside.
# 10
"Indian summer, St. Martin's Day, winter can't be far away." Somebody used to say that to him when he was a boy, and in vain Spino struggled to remember who it was. He thought about it on a station platform swept by cold gusts of wind, waving as the train bellied out into the curve. He also thought that a lot could happen in three days. And in his mind a childish voice was laughing, saying: "Three little orphans! Three little orphans!" It was a piercing, malignant voice, but one he couldn't recognize, recovered from some distant past when memory had stored away the emotion but not the event that produced it. Leaving the station he turned to look at the lighted clockface on the façade and said to himself: Tomorrow is another day.
Sara had gone on vacation. Her school had organized a three-day trip to Lake Maggiore and Spino encouraged her to go. He asked her to send him some postcards from Duino and she smiled with complicity. If they had had some time they would have talked about it; once they talked a lot about Rilke; and now he would have liked to talk about a poem that takes as its subject a photograph of the poet's father, something he's been repeating to himself by heart all day.
At home he set up his instruments in the kitchen where there was more space to work than in the cubby-hole he normally used as his darkroom. In the afternoon he had picked up a supply of reagent and bought a plastic bowl in the gardening department of a big store. He arranged the paper on the dining table, setting the stand on the enlarger at maximum. He got a frame of light thirty centimeters by forty and inserted the negative of the contact photo which he'd had rephotographed in a lab where he knew he could trust people.
He printed the whole photograph, leaving the enlarger on a few seconds more than necessary since the contact shot was overexposed. In the bowl of reagent the outline appeared to be struggling to emerge, as if a distant reality, past now, irrevocable, were reluctant to be resurrected, were resisting the profanation of curious, foreign eyes, this awakening in a context to which it didn't belong. That family group, he sensed, was refusing to come back and exhibit itself in this theater of images he'd set up, refusing to satisfy the curiosity of a stranger in a strange place and in a time no longer its own. He realized too that he was evoking ghosts, trying to extort from them, through the ignoble stratagem of chemistry, a forced complicity, an ambiguous compromise that they had unknowingly underwritten with an unguarded pose delivered up to a photographer of long ago. Oh, the questionable virtue of the snapshot! They're smiling. And that smile is for him now, even if they don't like it. The intimacy of an unrepeatable instant of their lives is his now, stretched out across the years, always identical to itself, visible an infinite number of times, hung dripping on a string that crosses his kitchen. A scratch that the process has enlarged out of all proportion slashes diagonally across their bodies and their surroundings. Is it a chance fingernail scratch, the inevitable wear and tear things get, perhaps the scratch of a piece of metal (keys, watch, a lighter), something those faces have shared a pocket or drawer with? Or was it done intentionally, the work of a hand that wanted to destroy that past? But that past, like it or not, is part of another present now, offers itself up, despite itself, for interpretation. It shows the veranda of a modest suburban house, stone steps, a scrubby climbing plant with pale bell-shaped flowers twisting round the architrave. It must be summer. The light seems dazzling and the people photographed are wearing summer clothes. 
The man's face has a surprised and at the same time lethargic expression. He's wearing a white shirt, sleeves rolled up, and is sitting behind a small marble table. In front of him on the table is a glass jug with a folded newspaper propped against it. Obviously he was reading when the unexpected photographer said something to get him to look up. The mother is coming out of the door, she only just gets into the frame and doesn't even realize. She has a short apron with a flower pattern, her face is thin. She's still young, but her youth seems over. The two children are sitting on a step, but apart, strangers to each other. The girl has pigtails bleached by the sun, glasses with plastic frames, clogs. In her lap she holds a rag doll. The boy is wearing sandals and shorts. He's got his elbows on his knees, his chin on his hands. His face is round, his hair has a few glossy curls, his knees are dirty. Sticking out of his pocket is the fork of a slingshot. He's looking straight ahead, but his eyes are lost beyond the lens, as if he were watching some apparition in the air, some event of which the other people in the photograph are unaware. He's looking slightly upwards too, the pupils betray the fact, no doubt about it. Perhaps he's looking at a cloud, at the top of a tree. In the right-hand corner, where the space opens into a stone-flagged lane over which the roof of the veranda is tracing a staircase of shadow, you can just see the curled-up body of a dog. Not interested in the animal, the photographer has caught it in the frame by accident, but left out its head. It's a small dog with mottled black fur, something like a fox terrier, but definitely a mongrel.
There's something that disturbs him in this peaceful shot of nameless people, something that seems to be escaping his interpretation, a hidden signal, an apparently insignificant element which nevertheless he senses is crucial. Then he moves in closer, his attention caught by a detail. Through the glass of the jug, distorted by the water, the letters on the folded newspaper the man has before him spell _Sur_. Realizing he's getting excited, he says to himself: Argentina, we're in Argentina. Why am I getting excited? What's Argentina got to do with it? But now he knows what the boy's eyes are staring at. Behind the photographer, immersed in the foliage, is a pink and white country villa. The boy is staring at a window where the shutters are closed, because that shutter could slowly open just a crack, and then. . . .
And then what? Why is he dreaming up this story? What is this his imagination is inventing and trying to palm off as memory? But just then, not inventing, but really hearing it in his mind, a child's voice distinctly calls: "Biscuit! Biscuit!" Biscuit is the name of a dog, it can't be otherwise.
# 11
When you reach the top of Via della Salita Vecchia the town thins out into the hinterland, settles down into a dull plain that the ramparts of the hills would never have led you to suspect. Here the lava-flow of cement hasn't arrived yet and buildings put up in the twenties—the ones the bombs spared—are still standing: small villas built in a fanciful, petit-bourgeois Deco which, over the years, time's patina has managed one way or another to ennoble; and then more modest houses, surrounded by walls and vegetable patches, with a few tufts of yellow reeds near the fences, as though this were already the country. The main road is lined by two rows of identical two-story terraced houses with outside brick staircases and tiny windows. They were put up under Fascism. This area was planned as a residential suburb for the clerical staffs of municipal boards, the bureaucrats, members of the less important professions. What the place has preserved of that period and world is the formality, the sadness. Yet there is something charming in the landscape: there's a small square with a fountain, some flowerbeds, a few rusty swings, a bench where two old ladies with their shopping bags are chatting. And this meager, inert charm makes the place feel almost unreal: likewise improbable, perhaps non-existent, is the thing he is looking for. _F. Poerio, Tailor, Via Cadorna 15_. That's what the telephone directory says. The dead man's jacket is an old tweed with leather patches on the elbows. It could be ten years old, maybe fifteen. It's too insignificant a clue to lead to anything. And then who knows whether it's the same tailor. Perhaps there are other Poerios working as tailors in other cities in Italy.
And meanwhile he walks along Via R. Cadorna, a narrow avenue lined with lime trees. The houses here are small, detached, two-story villas preserving vestiges of the wealth of a bygone age. Many of them could do with a fresh coat of paint on walls and shutters, their scanty gardens show signs of neglect and washing has been hung out to dry from some of the windows. Number fifteen is a house with a wrought-iron fence which has been taken over by wild ivies. The entrance is sheltered by a little porch, likewise wrought-iron and of vaguely oriental design. A glass nameplate says: _Poerio, Tailor_. The letters, once gold, are sandy now and spotted with little stains, like an old mirror.
Signor Poerio has a warm smile and glasses with thick lenses that make his eyes small and distant. He seems protected by an indestructible candor; it must be his age, his sense of already being a part of the past. The glass door opens on a largish room decorated in an old pink color with narrow windows and a pattern of vine leaves painted along the ceiling moulding. The furniture is basic to the room's function: a nineteenth-century sofa, a stool with a Viennese wicker seat, a tailor's workbench in one corner. And then there are the mannequins, a few busts upright on poles left standing here and there about the room in no particular order. And for a moment Spino imagines that they are Poerio's old customers, presences from the past who've transformed themselves into wooden mannequins for old time's sake. Among them are some which do look like real people, with pink plaster faces that have turned almost brown and small white peelings on their cheekbones or noses. They are men with square jaws and short sideburns, plaster hairstyles imitating the Brylcreem look, thin lips and rather languid eyes. Poerio shows Spino some catalogues to help him choose a model. They must be catalogues from the sixties. The trousers are narrow and the jacket lapels long and pointed. He pauses a moment over one of the less ridiculous, more discreet models, then arranges the dead man's jacket on a mannequin and has the tailor look at it. If he could make him one like this, what does he reckon? Poerio considers, he's puzzled, twists his mouth wryly. "It's a sports jacket," he says doubtfully, "I don't know if it would be right for the kind of suit you're after." Spino agrees. Still, the old jacket has such a perfect cut that it wouldn't look out of place as a regular suit either. He shows the tailor the name tag inside, sewn onto the pocket. Poerio has no trouble recognizing it. It's his tag, though straight off he can't remember anything about the jacket.
It's an old jacket, he has put together so many jackets in his time. . . .
Spino says he appreciates that, but with a bit of effort could he remember something, that is, find the invoice . . . an old accounts register maybe? Poerio thinks about it. He has taken a flap of the jacket between forefinger and thumb and strokes the fabric thoughtfully. One thing he is sure of, he made it in the sixties, absolutely no doubt about that, it was part of a small roll of cloth, he remembers it perfectly, a remnant that cost him next to nothing because it was a warehouse leftover and the supplier wanted to get rid of it. Poerio now seems a little suspicious, he's not sure what Spino wants of him. "Are you from the police?" he asks. All of a sudden he's turned wary, obviously he's afraid of saying something that might harm him.
Spino tries to reassure him somehow: no, he says, he really does want a suit, there's nothing to worry about, on the contrary, he'd like to put down a deposit right away; and then he mumbles a strange explanation. It's pretty contrived and Poerio doesn't seem at all convinced. Still, he says he's willing to help, as far as he can. He does still have his little file of past customers, although many must be dead. To be honest he closed the shop eight years back, he laid off his apprentices and retired. There was no reason for keeping the business going anymore.
"Well then, let's see . . . let's see," he whispers, leafing mechanically through blocks of receipts. "This one is '59, but there are a few orders from 1960 as well . . ." He reads them carefully, holding the blocks a few inches from his nose. He's taken his glasses off and his eyes are childlike. "This is it, I think," he announces with a certain satisfaction. "'Jacket in _real tweed.'_ Yes, it must be this one." He pauses a moment. "'Guglielmo Faldini, Accountant, Tirrenica, Via della Dogana 15 (red).'" He lifts his eyes from the receipt and puts his glasses back on. He says that actually now he's thought about it he doesn't feel up to making a suit. His eyesight's so bad he can't even thread a needle. And then he wouldn't be able to make the kind of suits people are wearing these days.
# 12
He finds Faldini, the accountant, in a dusty office where, on a glass door leading to a dark corridor, a frosted sign says: "Tirrenica Import-Export." The window offers a view of harbor derricks, a sheet-iron warehouse and a tugboat pitching in an oily sea. Faldini has the face of someone who has spent his entire life addressing letters to distant countries while looking out across a landscape of derricks and containers. Under a sheet of glass, his desk is a patchwork of postcards. Behind him a brightly-colored calendar extols the delights of vacations in Greece. He has a placid look about him, big watery eyes, grey hair cut short and bristling in an old-fashioned style. He is truly amazed to see his jacket again. He lost it so many years ago. No, he couldn't say how many. Well, twenty maybe.
"You really lost it?"
Faldini toys with a pencil on his desk. The tug has moved through the frame of the window leaving light blue patches on the water. It's hard to say. He doesn't know. Or rather, he thinks not, let's say that it disappeared, so far as he can recall. From the harbor, in the distance, comes the sound of a siren. The accountant considers his visitor with a certain curiosity. Obviously he's asking himself, what on earth's this business of my old jacket, who is this man, what's he after? And Spino finds it so difficult to be convincing, and then he's not really trying. Faldini watches him with his placid expression. Of course, on the accounts book he keeps open in front of him there are numbers that tell of dream cities like Samarkand, where people maybe have a different way of being people. Spino feels he must tell him the truth, or something like the truth. So then, this is the truth, this is how things stand. Does he understand, this Faldini, the accountant? Perhaps. Or rather he senses it somehow, the same way he must sense his sedentary man's dreams. But it doesn't matter, yes, he remembers. It was in '59, or maybe '60. He always hung the jacket there, where he hangs the jacket he has now. On that coathanger behind the door. The office was exactly as it is now, identical. He makes a vague gesture in the air. In his memory the only thing different is himself, a young Faldini, a young accountant, who would never go to Samarkand. And there was a workman, a sort of porter that is. He often came into the office, did a bit of everything. He did it because he needed the work, but in the past, if Faldini remembers rightly, he'd had a clerical position at the Customs. He doesn't know why he'd lost that job. He'd had some personal catastrophe, he doesn't know what. He was a reserved, polite person, ill perhaps, he wasn't cut out for being a porter. His name was Fortunato, sometimes names are really ironic, but everyone called him Cordoba. He can't remember his surname. 
They called him Cordoba because he'd been out in Argentina, or some other Latin American country, yes, his wife had died in Argentina and he had come back to Italy with his son, a little boy. He always talked about his little boy, on the rare occasions when he did talk. He had no relatives here and he'd put him in a boarding school. That is, it wasn't a proper boarding school, it was a lodging house run by an old maid who kept a few children, a sort of private school, but on a small scale, where it was he wouldn't know, he has a vague impression it was near Santo Stefano, the church, perhaps. The boy was called Carlito. Cordoba was always talking about Carlito.
A phone rings in a nearby room. Faldini is brought up short, coming back to the present. He casts a worried glance towards the door and then to his accounts books. The morning is flying by say his eyes now, eyes in which Spino also catches an intimation of constraint and embarrassment. Well then, one last thing and he'll be off. If he'd just like to take a look at this photograph. This man sitting under the porch here, could it be Cordoba? Does he recognize him? And the boy? The accountant holds the photo delicately between thumb and forefinger. He holds it at arm's length, he's farsighted. No, he says, it's not Cordoba, although, odd, it does look very like him, maybe it's his brother, though he doesn't know if Cordoba had a brother. As for the boy, he never saw Carlito.
Faldini is toying nervously with his pencil now. He seems distracted. Yes, well, he wouldn't like to have been misunderstood, you know, belongings, they're always so slippery, these belongings of ours, they move about, they even get the better of your memory. How could he not have remembered? In any case, now he remembers perfectly. He gave that jacket to Cordoba. He gave it to him as a present one day. Cordoba was always badly dressed, and he was a decent person.
# 13
"They say I'm mad because I live alone with all these cats, but what do I care? You haven't come about the gate have you? The front gate. I had to have it repainted because a city van scraped it right across trying to turn round. It happened a while ago, you should know better than me, shouldn't you? Anyway, of course I remember Carlito. But I'm not sure if he's the boy in your photograph. You see, the boy in the photo looks too blond to be him, but then you never can tell. The Carlito I had here was a cheerful boy. He loved all the little creatures you find in the earth: beetles, ants, fireflies, green-and-yellow caterpillars, the ones with the sticking-out eyes and the furry bits. . . ."
The cat curled up in her lap shakes itself and with a jump bounds away. She gets up too. She still has some photographs, she never throws anything away, she likes to keep things. From a drawer she takes out some little boxes, ribbons, rosary beads, a mother-of-pearl album. She invites him to look through the album with her. Two pairs of eyes are better than one. There are yellowing photographs of surly men leaning on fake cardboard parapets with the name of the photographer stamped under their feet; and then an infantryman with an unhappy expression, this with a dedication written at an angle; then a view of Vittorio Veneto in 1918, an old woman sitting on a wicker armchair, a view of Florence with carriages in the streets, a church, a family portrait taken from too far away, a girl with white cheeks and hands pressed together, memento of a first communion. There are some empty pages, a dog with melancholy eyes, a house with wisteria and shutters under which a feminine hand has written, _scent of a summer_. On the last page a group of children have been arranged in pyramid shape in a little courtyard. The ones in front are crouching, then there's a row standing, and finally another higher row—perhaps the photographer had them climb on a bench. He counts them. There are twenty-four. On their right, standing and with her hands held together, is Signorina Elvira as she was then, although she really hasn't changed that much. The children have been arranged too far from the lens for their faces to be recognized with any confidence. The only one who might in some way resemble the face he is looking for is a little blond boy in the front row. His body has the same posture, chin propped up on one hand with the elbow resting on his knee. But definite identification is impossible.
And does Signorina Elvira remember the boy's father? No, not the father. All she knows is that he was dead, the mother too, all the boy had was an uncle. But is he sure he was called Carlito? She seems to remember Carlino. Anyhow, it's not important. He was a cheerful boy. He loved the little creatures you find in the earth: beetles, ants, fireflies, green-and-yellow caterpillars . . .
And so here he is again wandering about in search of nothing. The walls of these narrow streets seem to promise a reward he never manages to arrive at, as if they formed the board of a game of snakes and ladders, full of dead ends and trapdoors, on which he goes up and down, round and round, hoping that sooner or later the dice will take him to a square that will give the whole thing meaning. And meanwhile over there is the sea. He looks at it. Across its surface pass the shapes of ships, a few seagulls, clouds.
# 14
There are days when the jealous beauty of this city seems to unveil itself. On clear days, for example, windy days, when the breeze that announces the arrival of the south-westerly sweeps along the streets slapping like a taut sail. Then the houses and bell towers take on a brightness that is too real, the outlines too sharp; like a photograph with fierce contrasts, light and shade collide aggressively without blending together, forming a black-and-white check of splashes of shadow and dazzling light, of alleyways and small squares.
Once, if he had nothing else to do, he used to choose days like this to wander round the old dock area, and now, following the dead-end sidings the wagons use along the quay, heading back to town, he finds himself thinking of those days. He could catch the bus that goes to town through the tunnels of the beltway, but instead he chooses to walk across the docks, following the twists and turns of the wharves. He feels he wants to dawdle slowly through this grim landscape of railway lines that reminds him of his childhood, of diving from the landing stage with the tires along its sides, of those poor summers, the memory of which has remained etched inside him like a scar.
In the disused shipyard, where once they repaired steamships, he sees the hulk of a Swedish vessel lying on its side. It's called the _Ulla_. Strangely, the yellow letters of the name somehow escaped the fire that devastated the boat leaving enormous brown stains on the paint. And he has the impression that this old monster on the brink of extinction has always been there in that corner of the dock. A little further on he found a battered phone booth. He thought of phoning Corrado to put him in the picture. Anyway it was only right to let him know, since to a certain extent he owed the meeting to his friend.
"Corrado," he said, "it's me. I managed to speak to him."
"But where are you? Why did you disappear like that?"
"I didn't disappear at all. I'm at the docks. Don't worry."
"Sara was after you. She left you a message here at the paper. She says they're extending their vacation for three days, they're going to Switzerland."
A seagull, which had been wheeling about for some time, landed on the arm of a water pump right next to the phone booth and stood there quietly watching him while at the same time hunting through its feathers with its beak.
"There's a seagull next to me, it's right here next to the phone booth, it's as if it knew me."
"What are you talking about? . . . Listen, where did you find him, what did he tell you?"
"I can't explain now. There's a seagull here with its ears pricked, it must be a spy."
"Don't play the fool. Where are you, where did you find him?"
"I told you, I'm at the docks. We met at the Boat Club. There are boats for rent and we went out for a trip."
Corrado's voice dropped, perhaps someone had come into the office. "Don't trust him," he said. "Don't trust him an inch."
"It's not a question of trusting or not. He gave me a tip and I'm going to try it out. He didn't know anything about the business. But there's someone who maybe does know something and he told me who."
"Who?"
"I told you I can't tell you, I don't want to speak on the phone."
"There's no one here who can hear you. You can speak on my phone. Tell me who."
"Come on, you don't imagine he went and gave me name and surname, do you? He's very smart. He just gave me an idea."
"So then give me an idea."
"You wouldn't understand."
"So how come you understood?"
"Because it's someone I happened to know years back. A musician."
"Where does he play?"
"Corrado, please, I can't tell you anything."
"In any event I don't like it, and you're too naive, understand? It's quicksand. Anywhere you put your feet you risk sinking in."
"Sorry Corrado, have to say goodbye, it's getting late. And then this seagull is getting annoyed, he wants to make a call, he's waving his beak at me furiously."
"Come straight here, I'll wait for you at the paper. I won't go home, just so I can see you."
"What about tomorrow, okay? I'm tired now, and I've got something to do this evening."
"Promise me you won't trust anyone."
"Okay, talk to you tomorrow."
"Hang on a second, I heard something that might interest you. The coroner has arranged for the burial, the case has been dropped."
# 15
Twenty years ago the Tropical was a small nightclub with a shady atmosphere catering to American sailors. Now it's called the Louisiana and it's a piano bar with couches and table lamps. On the drinks list, on a green velvet noticeboard near the main door, it says: _Piano player—Peppe Harpo_.
Peppe Harpo is Giuseppe Antonio Arpetti, born in Sestri Levante in 1929, struck off the register of doctors in 1962 for his over-lavish prescription of addictive drugs. In his university days he played the piano at little parties. He was quite talented and could do perfect imitations of Erroll Garner. After the drug scandal he took to playing at the Tropical. He played mambos and pop songs through evenings thick with smoke, five hundred lire a drink. The emergency exit, behind the curtains, opened onto a stairwell where, above another door, a neon sign said: _Pensione—Zimmer—Rooms_. Then at a certain point he disappeared for six or seven years, to America, people said. When he reappeared it was with small round eyeglasses and a greying mustache. He had become Peppe Harpo, the jazz pianist. And with his return the Tropical became the Louisiana. Some said he had bought the place, that he'd made money playing in bands in America. That he had made money no one found strange. He seemed capable of it. That he had made money banging on the piano left many unconvinced.
Spino sat down at a table to one side and ordered a gin and tonic. Harpo was playing "In a Little Spanish Town," and Spino supposed his entry had passed unobserved, but then when his drink came there was no bill with it. He sat on his own for a long time, slowly sipping his gin and listening to old tunes. Then towards eleven Harpo took a break and a tape of dance music replaced the piano. Spino had the impression, as Harpo came towards him through the tables, that his face wore an expression at once remorseful and resolute, as if he were thinking: ask me anything, but not that, I can't tell you that. _He knows_ , a voice inside him whispered, _Harpo knows_. For a second Spino thought of putting the photo of The Kid as a child down on the table and then saying nothing, just smiling with the sly expression of one who knows what he knows. Instead he said straightforwardly that perhaps the time had come for Harpo to return him that favor. He was sorry if that was putting it bluntly. The favor, that is, of helping him find somebody, as he had once helped Harpo. A look of what seemed like genuine amazement crossed Harpo's face. He waited without saying anything. So Spino pulled out the group photograph. "Him," he said, pointing at the boy.
"Is he a relative of yours?"
Spino shook his head.
"Who is he?"
"I don't know. That's what I want to find out. Perhaps his name is Carlito."
Harpo looked at Spino suspiciously, as if expecting a trick, or afraid he was being made fun of. Was he mad? The people were wearing fifties-style clothes, it was an old photograph. The boy must be a man now, for God's sake.
"You know perfectly well what I'm talking about," Spino said. "He's got a dark beard now. His hair is darker too, not as light as in the photo, but his face still has something boyish about it. He's been in my freezer for a few days. The people who knew him are keeping quiet, nothing, not even an anonymous phone call, as if he'd never existed. They're wiping out his past."
Harpo was looking around rather uneasily. A couple at a nearby table was watching them with interest. "Don't speak so loud," he said. "No need to disturb the customers."
"Listen Harpo," Spino said, "if a person doesn't have the courage to go beyond appearances, he'll never understand, will he? All his life he'll just be forced to keep playing the game without understanding why."
Harpo called a waiter and ordered a drink. "But who's he to you?" he asked softly. "You don't know him, he doesn't mean anything to you." He was speaking in a whisper, uneasy, his hands moving nervously.
"And you?" Spino said. "Who are you to yourself? Do you realize that if you wanted to find that out one day you'd have to look for yourself all over the place, reconstruct yourself, rummage in old drawers, get hold of evidence from other people, clues scattered here and there and lost? You'd be completely in the dark, you'd have to feel your way."
Harpo lowered his voice even further and told him to try an address, though he wasn't certain. His face told Spino that in giving him that address the favor had been repaid in full.
# 16
It's called "Egle's." It's an old pie-house, or that's what he's heard people call it. The walls are covered in white tiles and behind a zinc-topped counter Signora Egle bustles about a small wood-fired oven serving cakes and pies. Spino sits at one of the little marble tables and a grey-aproned waitress with the haggard look of a cloistered nun comes with a cloth to wipe up the crumbs the last customer left. He orders a chickpea pie and then, as instructed, lays a copy of the _Gazzetta Ufficiale_ on the table in full view. He begins to check out the other customers and speculate as to who they are. At the table next to his are two middle-aged blonde women chattering in low voices, occasionally exploding in shrill laughter. They look well-heeled and are wearing gauche, expensive clothes. They could be two retired whores who've invested their earnings well and now run a shop, or some business related to their previous profession, but dignified now by this façade of respectability. Sitting in a corner is a young lout bundled up in a thick jacket and engrossed in a magazine from the cover of which a fat orange-clad guru wags a warning finger at the plate of pie in front of him. Then there's a spry-looking old man, hair dyed a black that takes on a reddish tint about the temples, as cheap dyes often do. He has a gaudy tie and brown-and-white shoes with patterns of tiny holes. Wheeler-dealer, pimp, widower in the grip of a mad desire for adventure? Could be anything. Finally there's a lanky man leaning against the counter. He's chatting to Signora Egle and smiling, showing off an enormous gap in his upper teeth. He has a horsey profile and greased-back hair, a jacket that doesn't manage to cover his bony wrists, jeans. Signora Egle seems determined not to concede something the lanky character is insisting on. Then, with an expression of surrender, she moves to one end of the counter and puts a record on a decrepit phonograph that looked as if it were purely decorative. 
The record is a 78 and rumbles; there are a couple of bursts from a band and then a falsetto voice starts up, distorted by the scratches the disc carries in its grooves. Incredibly, it's _Il tango delle capinere_ , sung by Rabagliati. The lanky character sends a nod of complicity in the direction of the waitress and she, unresisting but sullen, lets herself be led in a long-stepping tango that immediately captures the attention of the clientele. The girl leans a cheek on the chest of her beau, which is as far as her height allows her to reach, but she's having all kinds of trouble keeping up with his powerful strides as he leads her aggressively about the room. They finish with a supple _casqué_ and everybody claps. Even Spino joins in, then opens his paper, pushing his plate away, and pretends to be absorbed in the _Gazzetta Ufficiale_.
Meanwhile the boy with the guru on his magazine gets up dreamily and pays his bill. Going out he doesn't deign to give anyone in the room a single glance, as if he had too much on his mind. The two big blonde women are repairing their make-up and two cigarettes with traces of lipstick on the filters burn in their ashtray. They leave chuckling, but no one shows any special interest in Spino, nor in the paper he's reading. He raises his eyes from the paper and his gaze meets that of the spry old man. There follows a long and intense exchange of glances and Spino feels a light coating of sweat on the palms of his hands. He folds his paper and puts his pack of cigarettes on top, waiting for the first move. Perhaps he should do something, he thinks, but he's not sure what. Meanwhile the girl has finished clearing the tables and has started spreading damp sawdust on the floor, sweeping it along the tiles with a broom taller than herself. Signora Egle is going through the day's takings behind the counter. The room is quiet now, the air thick with breath, with cigarettes, with burnt wood. Then the spry little old man smiles: it's a trite, mechanical smile, accompanied by the slightest jerk of the head and then another gesture that tells all. Spino sees the misunderstanding he's been encouraging, immediately turns red with embarrassment, then senses, rising within him, a blind anger and intolerance towards this place, towards his own stupidity. He makes a sign to the girl and asks for his bill. She approaches wearily, drying her hands on her apron. She adds up his bill on a paper napkin; her hands are red and swollen with a coating of sawdust sticking to their backs, they might be two chops sprinkled with breadcrumbs. Then, giving him an insolent look, she mutters in a toneless voice: "You're losing your hair. Reading after eating makes you lose your hair." Spino looks at her astonished, as though not believing his ears. It can't be her, he thinks, it can't be. 
And he almost has to hold himself back from attacking this little monster who goes on giving him her arrogant stare. But she, still in that detached, professional tone, is telling him about a herbalist who sells things for hair, on Vico Spazzavento.
# 17
Vico Spazzavento—Windswept Lane—is the perfect name for this blind alley squeezed between walls covered with scars. The wind forms a whirling eddy right where a blade of sunshine, slipping into the narrow street between flapping washing seen high above against a corridor of sky, lights up a little heap of swirling detritus. A wreath of dry flowers, newspapers, a nylon stocking.
The shop is in a basement with a swing door. It looks like a coal cellar, and in fact on the floor there are some sacks of coal, although the sign on the doorpost says: "spices, paints." On the counter is a pile of newspapers used to wrap up goods sold. A little old man dozing on a small wicker-covered chair near the coal got to his feet. Spino was first to say hello. The old man mumbled a good evening. He propped himself up against the counter with a lazy and seemingly absent expression.
"Someone told me you sold hair lotions here," Spino said.
The old man answered knowledgeably. He leant over the counter a little to look at Spino's hair, listed various products with curious names: _Zolfex, Catramina_. Then some plants and roots: sage, nettle, rhubarb, red cedar. He thinks red cedar is what he needs, that's his guess at first glance, though one ought to do some tests on the hair.
Spino answered that maybe red cedar would be okay, he doesn't know, he doesn't know what properties red cedar has.
The old man looked at him doubtfully. He had metal-framed glasses and a two-day growth of beard. He didn't say anything. Spino tried not to let his nerves get the better of him. Calmly, he explained that he hadn't checked out his hair type, it was just brittle. In any case, he doesn't want a commercial product, he wants a special lotion. He stressed the word special, something that _only_ the shopkeeper knows the formula for. He has come on the advice of people he trusts. It's strange they haven't mentioned it to him.
The old man pushed aside a curtain, said to wait and disappeared. For a second Spino caught a glimpse of a poky little room with a gas-ring and a light bulb switched on, but he didn't see anybody. The old man started to speak, a few yards from Spino, in a whisper. A woman's voice answered, perhaps an old woman. Then they fell silent. Then they began to speak again, their voices very low. It was impossible to catch what they were saying. Then came a squeak as of a drawer being opened, and finally silence again.
The minutes passed slowly. Not a sound came from beyond the curtain now, as if the two had gone out by another door to leave him waiting there like an idiot. Spino coughed loudly, he made a noise with a chair, at which the old man reappeared at the curtain with a look of reproach. "Be patient," he said, "another few minutes."
He came out round the counter and went to close and bolt the swing door that opened onto the street. He moved somewhat cautiously, looked at his customer, lit a small cigar, and returned to the back room. The voices began to whisper again, more urgently than before. The shop was almost dark. The daylight coming in through the small barred window had grown dimmer. The sacks of coal along the walls looked like human bodies abandoned in sleep. Spino couldn't help thinking that the dead man might also have come to this shop once and like him have waited in the half-dark; perhaps the old man knew him well, knew who he was, his reasons, his motives.
Finally the little man came back all smiles. In his hand he had a small brown bottle of the kind they use in pharmacies to sell iodine. He wrapped it up carefully in a sheet of newspaper and pushed it across the counter without a word. Spino looked at it now, paused, smiled perhaps. "Hope you're not making a mistake," he said. "It's important."
The old man released the bolt on the door, went back to sit on his seat and started on the accounts he had previously broken off. He made a show of pretending not to have heard. "Off you go now," he said. "The instructions are on the label."
Spino slipped the little bottle into his pocket and left. When he said goodbye, the old man answered that he had put some sage in the lotion too, to give it some fragrance. And Spino had the impression he was still smiling. There was no one on Vico Spazzavento. He felt as though time hadn't passed, as though everything had happened too quickly, like some event that took place long ago and is revisited in the memory in a flash.
# 18
He asked the caretaker if he knew of a monument with an angel and an owl. The caretaker looked at the visitor and pretended to concentrate, though it was perfectly obvious he was disoriented. All the same, so as not to seem ignorant, he said it must be in the Western Gallery, and in revenge flaunted a knowledge that hadn't been asked for. "It must be one of the first graves," he said. "During the Romantic period the owl was in fashion." Then, as Spino was walking away in the direction indicated by his outstretched arm, the caretaker reminded him that the cemetery closed at five and that he'd better be careful not to get locked inside. "There's always someone gets left in, you know," he added, as if to tone down the bluntness of his warning.
Spino nodded to show he had understood and set off along the asphalt avenue that cuts across the central squares. The cemetery was all but deserted, perhaps because it was late and the weather was unpleasantly windy. A few little old women dressed in black were busy in the squares tidying the graves. It's strange how one can spend one's life in a city without getting to know one of its most famous sites. Spino had never set foot in this cemetery described in all the tourist guides. He thought that to get to know a cemetery maybe you had to have your own dead there, and his dead were not in this place, nor in any other, and now that he was at last visiting the cemetery it was because he had acquired one of the dead who was not his own and was not buried here and to whom he was not even connected by any memories of a common past.
He began to wander about among the graves, distractedly reading the stones of the recently dead. Then his curiosity drew him towards the steps of the ugly neoclassical temple which houses the urns of some of the great men of the Risorgimento and along the pediment on which a Latin inscription establishes an incongruous connection between God and country. He crossed a section in the eastern part of the cemetery where bizarrely ornamented graves, all spires and pinnacles, loom alongside ugly little neo-gothic palaces. And he could hardly help but notice how at a certain period all the titled dead of the city had been concentrated in this area: nobles, senators of the realm, admirals, bishops; and then families for whom the nobility of wealth had stood in for the rarer nobility of blood: shipbuilders, merchants, the first industrialists. From the pronaos of the temple one can make out the original geometry of the cemetery which later developments were to change considerably. But the concept it expressed has remained unchanged: to the South and East, the aristocracy; to the North and West, the monumental tombs of the bourgeois business class; in the central squares, in the ground, the common people. Then there are a few areas for floating categories, for those who don't belong; he noticed a portico beside the steps of the temple entirely given over to philanthropists: benefactors, men of science, intellectuals of various levels. It's curious how nineteenth-century Italy faithfully reproduced in its choreography of death the class divisions that operated in life. He lit a cigarette and sat down at the top of the steps, immersed in his thoughts. _Battleship Potemkin_ came to mind, as it does every time he sees an enormous, white flight of steps. And then a film about the Fascist period that he had liked for its sets. 
For a moment he had the impression that he too was in a scene in a film and that a director, from a low angle, behind an invisible movie camera, was filming his sitting there thinking. He looked at his watch and reassured himself it was only quarter past four. So then, he still had fifteen minutes before the appointment. He set off along the Western Gallery, stopping to look at the monuments and read the inscriptions. He stood a long time in front of the statue of the hazelnut seller, studying her carefully. Her face was modeled with a realism that showed no mercy in reproducing the features of a plebeian physiognomy. It was obvious that the old woman had posed for the sculpture in her Sunday best: the lace bodice peeps out from under a working woman's shawl, a smart skirt covers the heavy pleats of another skirt, her feet are in slippers. With the hazelnuts she sold her whole life at street corners strung in loops over her arms, she stands to have the statue sculpted, this statue that now, life-size, looks out at the visitor with pride. A little further on an inscription on a bas-relief clumsily representing the throne of the Ludovisi informs him that Matilde Giappichelli Romanengo, a virtuous and kindly woman, having scarcely passed her thirtieth year, left husband and daughters Lucrezia and Federiga distraught. The deceased passed away on the second day of September 1886, and the two daughters, who dutifully hold the sheet from which their mother Matilde is flying to heaven, also support an inscription alongside which says: "Dear Mummy, what shall we offer you if not prayers and flowers?"
He walked slowly along the gallery until he found the grave with the angel and the owl. He noticed that a solitary seagull, blown along perhaps by the southwest wind, was hovering over the squares as if intending to land. On days like this when the southwesterly blows hard it's not unusual to see seagulls even in those parts of the city furthest from the coast. They flock in, following the rubbish-strewn canal, then wander away from the water looking for food. It was exactly half past four. Spino sat on the low wall of the gallery, his back to the tomb, and lit another cigarette. There was no one under the porticos along the gallery and the old women in the middle of the squares had thinned out. Over to the other side of the squares, in a corner near the cypresses, he noticed a man who seemed deep in contemplation near a cross, and started to watch him. The minutes passed slowly but the man made no move. Then he got up quickly and set off towards the small square by the exit. Spino looked around, but could see no one. His watch indicated that it was a quarter to five, and he realized that no one would be coming now to keep this strange appointment. Or perhaps no one was supposed to come, they had simply wanted to know if he would, and now someone he couldn't see was watching him perhaps, was checking that he really was willing. It was a kind of test they had set him.
The seagull touched down lightly just a few yards away and began to walk awkwardly between the graves, quietly curious, like a pet. Spino felt in his pocket and threw it a sweet which the bird immediately swallowed, shaking its head from side to side and fluffing out its feathers in satisfaction. Then it took off for a moment, not much more than a hop, to come to rest on the shoulder of a little First World War soldier, from where it looked at him calmly. "Who are you?" Spino asked him softly. "Who sent you? You were spying on me at the docks too. What do you want?"
It was two minutes to five. Spino got up quickly and his brusque movement frightened the seagull, which took off obliquely to glide away over the other square near the steps. Before leaving, Spino glanced at the tomb with the angel and the owl and read the inscription which, in the suspense of waiting, he had overlooked. Only then did it come to him that someone had merely wanted him to read that inscription, this was what the appointment amounted to, this was the message. Under a foreign name, on a bas-relief scroll, was a Greek motto, and beside it the translation: _Man's body dies; virtue does not die_.
He began to run and the noise of his footsteps echoed high up under the vaults of the gallery. When he reached the exit the caretaker was sliding the gate along its rail and Spino bid him a hurried good night: "There's a seagull still inside," he said. "I think he's planning to sleep there." The man said nothing in reply. He took off his peaked cap and pushed back the hair on an almost bald scalp.
# 19
He found the message in his mail box on returning home: a note written in capital letters indicating a time and place.
He put it in his pocket and climbed the stairs of his old block. As he entered his apartment the bell tower of San Donato began to strike six. He ran to the door leading out to the terrace and threw it open, wanting the sound to come right into the apartment and fill it. He took off his tie and flopped down on an armchair, putting his feet up on the coffee table. From this position all he could see was the outline of the bell tower, the slate of a roof and then a stretch of the horizon. He found a white sheet of paper and wrote in large capitals: "Weep? What's Hecuba to him?"
He placed the paper next to the note and thought of the connection between them. He was tempted to phone Corrado and tell him: "Corrado, you remember this line? I've understood exactly what it means." He looked at the phone but didn't move. He realized that he wouldn't be able to explain. Perhaps he would put it in a letter to Sara, but without offering long explanations, just write it as now he had intuitively understood it, and as she too would understand, that the player who was weeping (but who was he?) saw, albeit in another shape and in another fashion, himself in Hecuba. He thought of the power things have to come back to us and of how much of ourselves we see in others. And like a wave sweeping across him, warm and overwhelming, he remembered a deathbed and a promise made and never kept. And now that promise demanded fulfillment, it was obvious, and found in him and in this quest a kind of accomplishment, a different kind, an apparently incongruous kind, but one which in fact followed an implacable logic, as of some unknown geometry, something one might intuit but could never pin down in a rational order or in an explanation. And he thought that things do follow an order and that nothing happens by chance, that chance in fact is just this: our incapacity to grasp the true connections between things. And he sensed the vulgarity and the arrogance with which we bring together the objects that surround us. He looked about him and thought, what was the connection between the jug on the chest of drawers and the window? They weren't related in any way, they were foreign to each other; they seemed plausible to him only because one day, many years ago, he had bought that jug and put it on the chest of drawers near the window. The only connection between the two objects was his eyes looking at them. Yet something, something more than this must have led his hand to buy that jug. 
And that forgotten, hurried gesture was the real connection; everything lay in the gesture, the world and life, and a universe.
And once again he thought of that young man, and now he saw the scene clearly. It had happened like this, he knew it. He saw him come out of his hiding-place and deliberately put himself in the path of the bullets, seeking out the exact position that would bring him his death. He saw him advance down the corridor with calculated determination, as if following the geometry of a particular trajectory so as to accomplish an expiation or achieve a simple connection between events. That was what Carlo Nobodi had done, who as a child had been called Carlito. He had established a connection. Through him things had found a way of tracing their pattern.
So he took the paper where he'd written the question about Hecuba and pegged it out on the washing line on the terrace, then came back in, sat in the same position as before, and looked at it. The paper fluttered like a flag in the stiff breeze. It was a bright, rustling stain against the falling night. He just watched it for a long time, establishing again a connection between that piece of paper flapping in the dusk and the edge of the horizon that was ever so slowly dissolving away into darkness. He got up slowly, overcome by a great tiredness. But it was a calm, peaceful tiredness that led him by the hand towards his bed as if he were a boy again.
And that night he had a dream. It was a dream he hadn't had for years, too many years. It was a childish dream and he felt light and innocent; and dreaming, he had the curious awareness of having rediscovered this dream, and this heightened his innocence, like a liberation.
# 20
He spent the day putting his books in order. It's incredible how many newspapers and notes can accumulate in a house. He threw away whole stacks of them, cleaning up the couch and the corners where they had piled up over the years. Likewise out in the rubbish went all sorts of things from the bottoms of drawers, old stuff, the kind of bric-a-brac you can never see your way to throwing out, either from laziness or because of that indefinable sadness that objects connected with our past arouse. When he'd finished it was as if it were another apartment. How pleased Sara would be, poor thing, having put up with that indescribable mess for so long. In the evening he wrote her a letter and sealed it in an envelope he had already stamped, intending to mail it on his way to the appointment. Then he telephoned Corrado, but only got the answering machine. He had to hang up because straight off he found himself unable to leave the message the recorded voice was asking for. Then he prepared something and dialed again. "Hi, Corrado," he said, "Spino here, I just wanted to say hello and tell you that I think of you with affection." Hanging up, he was reminded of a day many years before when he had dialed the same number and said: "Corrado, it's me again, you remember that day we went to see _Picnic_ and fell in love with Kim Novak?" Only when he had put the receiver down did he realize that he'd said something ridiculous, but by then it was too late to do anything about it. Then he thought that maybe Corrado wouldn't find it ridiculous, perhaps it would just seem strange hearing it on the answering machine.
At dinner time he made himself a snack with a can of salmon he'd had in the fridge heaven knows how long and some pineapple doused in port. When evening fell he turned on the radio without putting on the light and sat in the dark smoking and looking out of the window at the lights in the harbor. He let the time slip by. He enjoyed listening to the radio in the dark, it always gave him a sense of distance. Then the bell tower of San Donato struck eleven and he started. He washed the dishes and tidied the kitchen in candlelight because he couldn't face the violence of the electric light. He left at half past eleven, locking the door and leaving the key under the flowerpot on the landing, where he always left it for Sara.
He mailed the letter in the box near the stand, took Vico dei Calafati and went down the steps as far as the road along the sea front. The trattorias by the harbor were closing; a little old man sunk up to his thighs in rubber boots was washing his fishmonger's counter with a hose. He went down the Ripa Gallery as far as the harbor railway station, then crossed the road and walked on along the tramlines that have outlasted the asphalt there, keeping close to the safety fence between the two lines. A night watchman was heading in his direction on a moped and, passing by, wished him good night. He waited till he was some distance away, then slipped into the port area through a little turnstile next to the big gates of the Customs. There were still some lights on in the Customs building. He chose to cut across a small labyrinth of containers so as not to risk being seen. He walked along a wharf where a Revenue Department launch was tied up and found himself at the cargo docks. He went past the Old Wharf, cluttered with cotton bales, and stopped by the dry-docks. There was no trace of any human presence ahead of him now; the lights were all behind, the lamps of a ship moored to a wharf and two windows lit up in the harbor station. He walked on about five hundred meters, keeping the traffic-light hanging over the coast road to his right as a point of reference. Striking a match, he checked once again the route he was supposed to take, then screwed up the piece of paper and threw it in the water. He saw the dark outline of the warehouse under a skeleton of metal bridges. He sat down on an iron stairway at the water's edge and lit a cigarette. The bell tower of San Donato struck midnight. He hung on a few minutes more, looking out at the dark sea and an uncertain light on the horizon. To reach the warehouse he had to circle round some enormous containers scattered quite randomly along the wharf. 
The yard was lit by yellow foglights which drew four shadows from his body, projected in diametrically opposed directions as if they wanted to flee from him at every step. He reached the back of the warehouse, going down the side where the dusty light of the lamps was weakest. On the handle of the door was a chain without a padlock and he slipped it out through the rings holding it. He eased open the door and a long strip of yellow light slid into the darkness inside, snapping at a right angle where it met a pile of crates. He coughed three times, keeping the coughs distinct and decisive, as he was supposed to, but there was no reply from inside the building. He stood immobile on the strip of light, coughed again, and again no one answered.
"It's me," he said softly, "I've come."
He waited a moment, then repeated in a louder voice: "It's me, I've come." Only then did he suddenly feel absolutely certain that no one was there. Despite himself he began to laugh, first softly, then more loudly. He turned round and looked at the water a few meters away. Then stepped forward into the dark.
# _Author's Note_
This book owes much to a city, to a particularly cold winter and to a window. Writing it did not bring me an inordinate amount of levity. All the same I observed that the older one gets the more one tends to laugh on one's own; and that seems to me a step forward towards a more composed and somehow self-sufficient sense of humor.
Spino is a name I invented myself and one I have grown fond of. Some may point out that it's an abbreviation of Spinoza, a philosopher I won't deny I love; but it signifies other things, too, of course. Spinoza, let me say in parenthesis, was a Sephardic Jew, and like many of his people carried the horizon with him in his eyes. The horizon, in fact, is a geometrical location, since it moves as we move. I would very much like to think that by some sorcery my character did manage to reach it, since he too had it in his eyes.
A.T.
© Giangiacomo Feltrinelli Editore, Milano, 1986
Copyright © 1990 by Tim Parks
All rights reserved. Except for brief passages quoted in a newspaper, magazine, radio, or television review, no part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without permission in writing from the Publisher.
This translation of _Il filo dell'orizzonte_ published by arrangement with Chatto & Windus Ltd., London; first published clothbound in the United States in 1990.
eISBN 978-0-8112-2452-9
New Directions Books are published for James Laughlin by New Directions Publishing Corporation,
80 Eighth Avenue, New York 10011
The Nore pearl mussel (Margaritifera durrovensis) is a critically endangered species of freshwater pearl mussel, an aquatic bivalve mollusc in the family Margaritiferidae.
The species is endemic to Ireland and was first identified by R.A. Phillips in 1926, who later declared it a new species in Volume 18 of the Proceedings of the Malacological Society. This designation was controversial, and the taxonomic status of the Nore pearl mussel remains inconclusive. It is often described as a rare ecophenotype of M. margaritifera. The European Union's Habitats Directive on the conservation of natural habitats and wild fauna placed Margaritifera durrovensis on Annex II and Annex V as a separate taxon.
Distribution
The species is native to the Three Sisters - the rivers Barrow, Suir and Nore, the latter being the mussel's namesake. However, specimens have not been found outside of the River Nore since 1993. Unlike M. margaritifera, which can tolerate acidic conditions, the Nore pearl mussel requires highly calcareous waters, and generally inhabits sections of the River Nore which have CaCO3 concentrations of over 330 mg/L. The Nore pearl mussel also has a significantly shorter lifespan than M. margaritifera, typically living for 60 to 80 years.
Threats and conservation
Studies conducted on Nore pearl mussel distribution revealed that the population of the species had declined by approximately 75% between 1991 and 2009. The primary pressure identified was agricultural intensification leading to elevated levels of phosphorus, nitrate and suspended solids across the mussel's native range. A captive breeding program was set up in 2005 by the National Parks and Wildlife Service, whereby juvenile mussels complete their first growing season in captivity before being re-introduced to the River Nore.
References
Margaritifera
Freshwater bivalves
Bivalves described in 1928
Bivalves of Europe
Bivalves and humans
Q: Knowing if a table is in "native" Word Format

I'm sorry if this is a really dumb question, but I need to submit a document in Microsoft Word format, and furthermore all tables must also be in the "native format" (their words).
For the tables in my document, I first created them in Excel and then copied them into Word (using the default paste option, which seems to be similar to choosing the .rtf option in paste special). The tables are not linked back to Excel, they are independent. Also, when I click on the table, the ribbon switches to "Table Layout" options.
All this would seem to imply that the tables are in the correct format, but I just wanted to be sure.
\section{Introduction}
We consider the numerical solution of two related
problems which arise in the study of Brownian diffusion by a particle in
the exterior or interior of a porous sphere. We denote the open unit
ball centered at the origin
in $\mathbb{R}^3$ by $\Omega$, and assume that the sphere
$\partial \Omega$ is
partially covered by $N$ small patches of radius $\varepsilon$, measured
in arclength (Fig. \ref{fig:patches}).
For the sake of simplicity, we assume that the patches are
disk-shaped and comment briefly on more general shapes in the conclusion.
\begin{figure}[ht]
\centering
\includegraphics[width=.4\linewidth]{patches}
\caption{A sphere partially covered by disk-shaped patches. We
assume each patch is of radius $\varepsilon$. We also assume that distinct
patches are separated by a distance of at least $\varepsilon$.
In the figure, this means that the regions bounded by the dashed
lines do not overlap.}
\label{fig:patches}
\end{figure}
The union of the patches is referred to as the {\em absorbing boundary}
and denoted by $\Gamma_A$. The remainder of the boundary,
$\Gamma_R = \partial \Omega \backslash \Gamma_A$, is referred to as the
{\em reflecting boundary}.
The first problem, called the narrow capture problem, is to calculate
the concentration $\bar u(x)$, at equilibrium, of Brownian particles at $x
\in \mathbb{R}^3 \backslash \wb{\Omega}$ with a given fixed concentration far
from the origin, assuming that particles are absorbed (removed)
at $\Gamma_A$. The second problem,
called the narrow escape problem, is to
calculate the mean first passage time (MFPT) in $\Omega$, namely the
expected time $\bar v(x)$ for a Brownian particle released at $x \in \Omega$
to first reach $\Gamma_A$. In both settings, particles are
reflected from $\Gamma_R$. In this paper, we sometimes refer to the
narrow capture problem as the exterior problem, and the narrow escape
problem as the interior problem.
These problems have received quite a lot of attention in the mathematics
and biophysics communities since the seminal work of Berg and Purcell
\cite{Berg1977}. We do not seek to review the biophysical background
here, but note that the absorbing patches serve as a simplified model
for either surface receptors (the capture mechanism) or pores (the
escape mechanism) in an otherwise impermeable membrane. We refer the
reader to \cite{Berg1977,wiegel83,bressloff13,Holcmanbook2015,Holcman2014,kaizu14,isaacson11} for more detailed
discussions of applications and a selection of work on related biophysical
models.
Standard arguments from stochastic analysis show that both $\bar u$ and $\bar v$
satisfy a Poisson equation with mixed Dirichlet-Neumann boundary
conditions \cite{redner01,pavliotis14}. More precisely, for the capture
problem, if the far-field particle concentration is set to be $1$,
then $\bar u$ satisfies the exterior Laplace equation:
\begin{align}\label{eq:extBVP}
\begin{cases}
\Delta \bar u = 0 & x \in \mathbb{R}^3 \backslash \wb{\Omega} \\
\bar u = 0 & x \in \Gamma_A \\
\frac{\partial \bar u}{\partial n} = 0 & x \in \Gamma_R \\
\bar u(x) \to 1 & |x| \to \infty. \\
\end{cases}
\end{align}
A scalar quantity of interest is the
total equilibrium flux $J$ of particles through $\Gamma_A$:
\begin{equation}\label{eq:flux}
J = \int_{\Gamma_A} \frac{\partial \bar u}{\partial n} \, dS.
\end{equation}
This is sometimes referred to as the capacitance of the system
(see Remark \ref{rem:capacitance}).
For the escape problem, the MFPT $\bar v$ satisfies the interior Poisson equation:
\begin{align}\label{eq:intBVP}
\begin{cases}
\Delta \bar v = -1 & x \in \Omega \\
\bar v = 0 & x \in \Gamma_A \\
\frac{\partial \bar v}{\partial n} = 0 & x \in \Gamma_R.
\end{cases}
\end{align}
Here, the quantity of interest is the average MFPT $\mu$, that is, the
average, over all possible initial particle positions, of the expected
time to escape from $\Omega$ through $\Gamma_A$:
\begin{equation}\label{eq:avgmfpt}
\mu = \frac{1}{|\Omega|} \int_\Omega \bar v \, dV.
\end{equation}
Here, and in the remainder of the paper,
$\frac{\partial}{\partial n}$ refers to the derivative in the outward
normal direction;
$n$ points towards the interior of $\Omega$ for the exterior problem,
and towards the exterior of $\Omega$ for the interior problem.
In order to understand how the distribution of absorbing
patches on the surface affects $\bar u(x)$, $\bar v(x)$ and the associated
quantities $J$ and $\mu$,
a variety of asymptotic and numerical methods have been developed (see
\cite{Berg1977,Cheviakov2010,Lindsay2017,Bernoff2018,Singer2006,Holcmanbook2015,Holcman2014}
and the references therein).
\begin{remark}\label{rem:capacitance}
The total flux $J$ defined in \eqref{eq:flux} is sometimes referred
to as the capacitance because of a connection to
electrostatics. Imagine that
the ball $\Omega$ is a dielectric with
low permittivity, and that $\Gamma_A$ is a collection of perfectly conducting
patches on its surface, connected by infinitesimally thin wires so
that they act as a single conductor. Suppose also that this object
is surrounded by a dielectric with high permittivity and that the outer
dielectric is enclosed by an infinitely large perfectly conducting sphere,
with a unit voltage drop from
the outer conductor to the conducting patches. Then, letting the
ratio of the permittivity of the outer dielectric to that of
the inner dielectric approach $\infty$, the electrostatic potential
outside $\wb{\Omega}$ satisfies \eqref{eq:extBVP}, and the
electrostatic capacitance of the system is given by $J$.
\end{remark}
\begin{remark}\label{rem:mfpteqn}
The total flux $J$
is computed directly from the Neumann data on
$\Gamma_A$, as seen from \eqref{eq:flux}. Likewise, the average MFPT $\mu$ can be computed
directly from the Dirichlet data $\bar v$ on $\Gamma_R$. For this, we
use Green's second identity,
\[\int_\Omega \paren{\psi \Delta \varphi - \varphi \Delta
\psi} \, dV = \int_{\partial \Omega} \paren{\psi
\frac{\partial \varphi}{\partial n} - \varphi \frac{\partial
\psi}{\partial n}}\, dS\]
with $\psi(x) \equiv \bar v(x)$ and $\varphi(x) \equiv \frac{|x|^2}{6}$.
Using that $\Delta \frac{|x|^2}{6} = 1$,
$\int_\Omega \frac{|x|^2}{6} dV(x) =
\frac{2 \pi}{15}$, and that for $|x| = 1$, $n
\equiv x$ and $\frac{\partial}{\partial n} \frac{|x|^2}{6} = \frac13$,
we obtain
\[\int_\Omega \bar v \, dV = \frac13 \int_{\partial \Omega} \bar v \, dS - \frac16
\int_{\partial \Omega} \frac{\partial \bar v}{\partial n} \, dS - \frac{2
\pi}{15}.\]
Applying the divergence theorem to the second term, dividing by
$|\Omega|$, and using that $|\Omega| = \frac{4 \pi}{3}$,
$|\partial \Omega| = 4 \pi$ gives an alternative expression for
$\mu$:
\begin{equation}\label{eq:avgmfpt2}
\mu = \frac{1}{|\partial \Omega|}
\int_{\partial \Omega} \bar v \, dS + \frac{1}{15} \equiv \frac{1}{|\partial \Omega|}
\int_{\Gamma_R} \bar v \, dS + \frac{1}{15}.
\end{equation}
Thus the average MFPT over $\Omega$ may be obtained from the average
MFPT on $\partial \Omega$.
\end{remark}
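The constants appearing in this computation are easy to verify. The following Python snippet (illustrative only, not part of the paper's code) checks that $\int_\Omega \frac{|x|^2}{6} \, dV = \frac{2\pi}{15}$ for the unit ball, by reducing the volume integral to a radial one in spherical coordinates.

```python
import numpy as np
from scipy import integrate

# Integral of |x|^2/6 over the unit ball, in spherical coordinates:
# int_0^1 (r^2/6) * (4*pi*r^2) dr, which the remark states equals 2*pi/15.
val, err = integrate.quad(lambda r: (r**2 / 6.0) * 4.0 * np.pi * r**2, 0.0, 1.0)
assert abs(val - 2.0 * np.pi / 15.0) < 1e-12
```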
Given an arrangement of patches, we present here a
fast, high-order accurate numerical scheme for the evaluation
of $\bar u$, $J$, $\bar v$, and $\mu$,
of particular use when $N$ is large and
$\varepsilon$ is small.
Such computations are numerically challenging, partly because
solutions of elliptic boundary value problems of mixed type
are singular near Dirichlet-Neumann interfaces
\cite{Sneddon1966,Fabrikant1989}.
Direct discretization, using either PDE-based methods or integral
equation methods, would require many degrees of freedom to resolve the
singularities in $\bar u$ and $\bar v$. Further, the resulting linear systems would
be large and ill-conditioned, especially in cases involving large
numbers of small patches.
The formulation presented here is
{\em well-conditioned}, is nearly identical for the capture and escape
problems, and suffers no loss in accuracy
or increase in computational cost as $\varepsilon$ is decreased.
To make large-scale problems practical,
we have developed a fast algorithm, so that
the cost per GMRES iteration \cite{GMRES} is of the order
$\mathcal{O}(N \log N)$, rather than
$\mathcal{O}(N^2)$.
Our method involves the following ingredients:
\begin{itemize}
\item We make use of the Neumann Green's functions for the
interior and exterior
of the sphere to recast \eqref{eq:extBVP} and \eqref{eq:intBVP} as
first-kind integral equations for a density $\sigma$ on
$\Gamma_A$.
\item
Given a patch radius $\varepsilon$,
we precompute the solution operator for the
corresponding one-patch integral equation, assuming smooth Dirichlet
data which is expanded in a rapidly converging series of
Zernike polynomials. We analytically incorporate a square root singularity in the
induced density at the Dirichlet/Neumann interface.
\item To solve the many-patch integral equation, we use
the solution operator for the one-patch integral equation as
a block-diagonal ``right preconditioner''. This yields
a second-kind Fredholm system of equations which, upon discretization, is
well-conditioned and has a small number of degrees of freedom per patch.
\item We solve the resulting linear system by iteration, using GMRES,
and accelerate each matrix-vector product by means of a fast algorithm
modeled after the fast multipole method (FMM). The fast algorithm uses the
{\em interpolative decomposition} \cite{liberty07} to derive a compressed
representation of the outgoing field induced by the density on a patch,
a hierarchical organization of patches into groups at different length
scales, and a spectral representation of the smooth incoming field due to
densities on distant patches.
\end{itemize}
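The role of the block-diagonal right preconditioner in the third item can be illustrated on a toy linear-algebra analogue (a sketch of the algebraic idea only, not the paper's solver, with arbitrary block sizes and coupling strength chosen for illustration): writing $A = D + E$, where $D$ collects the self-interaction blocks and $E$ the weak off-diagonal coupling, applying $D^{-1}$ on the right converts $A\sigma = b$ into the well-conditioned second-kind system $(I + E D^{-1})y = b$ with $\sigma = D^{-1} y$.

```python
import numpy as np

# Toy algebraic analogue of right preconditioning (not the paper's solver):
# A = D + E, with D the block-diagonal "self" part (the one-patch operators)
# and E a weak off-diagonal coupling.  Applying D^{-1} on the right gives the
# second-kind system (I + E D^{-1}) y = b, and then sigma = D^{-1} y.
rng = np.random.default_rng(1)
nblk, blk = 20, 5
n = nblk * blk
D = np.zeros((n, n))
for i in range(nblk):
    B = rng.standard_normal((blk, blk))
    D[i*blk:(i+1)*blk, i*blk:(i+1)*blk] = B @ B.T + blk * np.eye(blk)
E = 0.01 * rng.standard_normal((n, n))
A = D + E
b = rng.standard_normal(n)

Dinv = np.linalg.inv(D)          # stands in for the precomputed one-patch solve
K = np.eye(n) + E @ Dinv         # near-identity second-kind system matrix
y = np.linalg.solve(K, b)
sigma = Dinv @ y
assert np.linalg.norm(A @ sigma - b) < 1e-8
```

In the actual method, $D^{-1}$ is never formed; the precomputed one-patch solution operator plays its role, and the second-kind system is solved iteratively by GMRES.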
Though most of the past work on the narrow capture and narrow escape
problems is based on asymptotics, we wish to highlight the numerical work
of Bernoff and Lindsay, who also proposed an integral equation method
for the narrow capture problem for the sphere and the plane based on the
Neumann Green's function \cite{Bernoff2018}. Our approach to discretization
shares several characteristics with theirs: both methods
incorporate a square root singularity into the density on each patch
analytically,
and both use a representation in terms of Zernike polynomials for
smooth Dirichlet data on each patch.
The paper is organized as follows.
In Section \ref{sec:analyticalsetup}, we introduce the analytical
framework for our method, reformulate the
boundary value problems as first-kind integral
equations using single layer potentials, and explain
how to calculate the scalar quantities $J$ and
$\mu$ directly as functionals of the layer potential densities. In
Section \ref{sec:manypatchsetuptools}, we show how to transform the
first-kind integral equations into Fredholm equations of the second-kind,
using the solution operator for the one-patch integral
equation as a preconditioner.
In Sections \ref{sec:zernike}, \ref{sec:onepatchprelim}, and
\ref{sec:manypatchsetup} we describe our discretization approach for the
full system of equations, and in Section \ref{sec:outgoingrep} we
introduce the technical tools involved in our fast algorithm.
In Section \ref{sec:algorithm} we describe the full method, including
our fast algorithm to accelerate the application of the system matrix.
In Section \ref{sec:onepatch}, we provide a detailed description of the
solver for the one-patch integral equation. We demonstrate the
performance of the method with numerical experiments
in Section \ref{sec:numresults}.
\begin{figure}[p!]
\centering
\includegraphics[width=.82\linewidth]{randpts_1e5_wblowup.png}
\caption{MFPT $\bar v$ plotted just inside the unit sphere for
an example with $N = 100\, 000$ random well-separated patches of radius $\varepsilon \approx 0.00141$. The
integral equation associated with this problem was solved in $63$
minutes on a 60-core workstation, to an $L^2$ residual error of
approximately $2.2 \times 10^{-8}$. Further details are given in Section
\ref{sec:numexamples}.}
\label{fig:randpts1e5}
\end{figure}
\section{Analytical setup}\label{sec:analyticalsetup}
Our approach to solving the exterior and interior problems \eqref{eq:extBVP}
and \eqref{eq:intBVP} uses a representation of each solution as
an integral involving the corresponding Neumann Green's function.
This representation leads
to an integral equation, and the scalar quantity of interest - $J$ or
$\mu$ - can be calculated directly from its solution.
\subsection{Neumann Green's functions for the sphere}\label{sec:greens}
Let us first consider the exterior Neumann problem:
\begin{align}\label{eq:extneu}
\begin{cases}
\Delta u = 0 & x \in \mathbb{R}^n \backslash \wb{\Omega} \\
\frac{\partial u}{\partial n} = g & x \in \partial \Omega \\
u(x) \to 0 & |x| \to \infty.
\end{cases}
\end{align}
Here $\Omega$ is a bounded domain, and $g$ a given continuous function on
$\partial \Omega$. This
problem has a unique solution, and if $\Omega$ is the unit ball in
$\mathbb{R}^3$, it may be obtained using the {\em exterior Neumann Green's
function} $G_E(x,x')$, which is known analytically \cite{koshlyakov64,kellogg53}.
$G_E$ is symmetric, and
satisfies
\begin{align}\label{eq:Gextproperties}
\begin{cases}
-\Delta G_E(x,x') = 4 \pi \delta(x-x') &x,x' \in \mathbb{R}^3 \backslash
\Omega \\
\frac{\partial}{\partial n_{x'}} G_E(x,x') = 0 &x \in \mathbb{R}^3
\backslash \Omega, \, x' \in \partial \Omega, \, x \neq x', \\
\end{cases}
\end{align}
with $G_E(x,x') = \mathcal{O}\paren{|x|^{-1}}$ as $|x| \to \infty$ for fixed $x' \in
\mathbb{R}^3 \backslash \Omega$. It can be shown, using Green's
second identity, that
\begin{equation}\label{eq:extneusoln}
u(x) = \frac{1}{4 \pi} \int_{\partial \Omega} G_E(x,x') g(x') \, dS(x')
\end{equation}
solves the exterior Neumann problem \eqref{eq:extneu}. When $x' \in
\partial \Omega$, $G_E$ is given explicitly by
\begin{equation}\label{eq:Gextoffsurface}
G_E(x,x') = \frac{2}{|x-x'|} + \log \left( \frac{|x| - x \cdot x'}{1 - x \cdot x' + |x -
x'|} \right).
\end{equation}
If, in addition, $x \in \partial \Omega$, then
\begin{equation}\label{eq:Gextonsurface}
G_E(x,x') = \frac{2}{|x-x'|} - \log \left( \frac{2}{|x-x'|} \right) -
\log \left(1 + \frac12 |x-x'| \right).
\end{equation}
The interior Neumann problem is given by
\begin{align}\label{eq:intneu}
\begin{cases}
\Delta v = 0 & x \in \Omega \\
\frac{\partial v}{\partial n} = g & x \in \partial \Omega,
\end{cases}
\end{align}
where $\Omega$ is a bounded domain and $g$ is a continuous function defined on the
boundary, with the additional constraint that $g$
must satisfy the consistency condition
\[\int_{\partial \Omega} g \, dS = 0.\]
This problem has a solution which is unique up to an additive constant.
The consistency condition precludes the existence of an interior Green's function
with zero Neumann data. Rather, for $\Omega$
the unit ball in $\mathbb{R}^3$, we have an {\em interior Neumann Green's
function} $G_I(x,x')$, also known analytically
\cite{koshlyakov64,kellogg53}. It is again
symmetric and satisfies
\begin{align}\label{eq:Gintproperties}
\begin{cases}
-\Delta G_I(x,x') = 4 \pi \delta(x-x') &x,x' \in \Omega \\
\frac{\partial}{\partial n_{x'}} G_I(x,x') = -1 &x \in
\wb{\Omega}, \, x' \in \partial \Omega, \, x \neq x'. \\
\end{cases}
\end{align}
As before,
\begin{equation}\label{eq:intneusoln}
v(x) = \frac{1}{4 \pi} \int_{\partial \Omega} G_I(x,x') g(x') \, dS(x')
\end{equation}
solves the interior Neumann problem \eqref{eq:intneu}. When $x' \in
\partial \Omega$, $G_I$ is given by
\begin{equation}\label{eq:Gintoffsurface}
G_I(x,x') = \frac{2}{|x-x'|} + \log \left( \frac{2}{1 - x \cdot x' +
|x - x'|} \right).
\end{equation}
If, in addition, $x \in \partial \Omega$, this reduces to
\begin{equation}\label{eq:Gintonsurface}
G_I(x,x') = \frac{2}{|x-x'|} + \log \left( \frac{2}{|x-x'|} \right) -
\log \left(1 + \frac12 |x-x'| \right).
\end{equation}
This is the same as \eqref{eq:Gextonsurface} except for the
sign of the second term. In other words,
the restrictions of the interior and exterior Green's functions
to the boundary $\partial \Omega$ are nearly identical.
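Both on-surface formulas follow from the off-surface ones by using the identity $1 - x \cdot x' = \frac12 |x-x'|^2$ when $|x| = |x'| = 1$. The following illustrative Python check (not part of the paper) confirms the reductions numerically for random pairs of points on the sphere.

```python
import numpy as np

def G_ext_off(x, xp):   # eq. (Gextoffsurface), x' on the sphere
    r = np.linalg.norm(x - xp)
    return 2.0/r + np.log((np.linalg.norm(x) - x @ xp) / (1.0 - x @ xp + r))

def G_ext_on(x, xp):    # eq. (Gextonsurface), both points on the sphere
    r = np.linalg.norm(x - xp)
    return 2.0/r - np.log(2.0/r) - np.log(1.0 + r/2.0)

def G_int_off(x, xp):   # eq. (Gintoffsurface), x' on the sphere
    r = np.linalg.norm(x - xp)
    return 2.0/r + np.log(2.0 / (1.0 - x @ xp + r))

def G_int_on(x, xp):    # eq. (Gintonsurface), both points on the sphere
    r = np.linalg.norm(x - xp)
    return 2.0/r + np.log(2.0/r) - np.log(1.0 + r/2.0)

rng = np.random.default_rng(0)
for _ in range(200):
    x, xp = rng.standard_normal((2, 3))
    x, xp = x / np.linalg.norm(x), xp / np.linalg.norm(xp)
    if np.linalg.norm(x - xp) < 1e-2:   # avoid cancellation for nearby points
        continue
    assert abs(G_ext_off(x, xp) - G_ext_on(x, xp)) < 1e-9
    assert abs(G_int_off(x, xp) - G_int_on(x, xp)) < 1e-9
```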
The following lemma, which we will require in the next section, follows
from the second property in \eqref{eq:Gintproperties} and the symmetry
of $G_I$.
\begin{lemma}\label{lem:intsigma}
Let $\Gamma$ be an open subset of $\partial \Omega$ and let $\sigma$
be continuous on $\Gamma$. Then for $x \in \partial \Omega \backslash
\bar{\Gamma}$,
\[\frac{\partial}{\partial n_x} \int_{\Gamma} G_I(x,x')
\sigma(x') \, dS(x') = -\int_{\Gamma} \sigma(x') \, dS(x').\]
\end{lemma}
\begin{figure}[ht]
\centering
\hspace*{3cm}\includegraphics[width=0.8\linewidth]{fibopts_1e4_wblowup.png}
\caption{MFPT $\bar v$ plotted just inside the unit sphere for
an example with $N = 10\, 000$ uniformly distributed patches of radius $\varepsilon
\approx 0.00447$. The
integral equation associated with this problem was solved in $114$
seconds on a 60-core workstation, and in $15$ minutes on a four-core,
eight-thread laptop, to an $L^2$ residual error of
approximately $6.4 \times 10^{-8}$. Further details are given in Section
\ref{sec:numexamples}.}
\label{fig:fibopts1e4}
\end{figure}
\subsection{The narrow capture problem}\label{sec:extsetup}
We turn now to the narrow capture problem, which is
the simpler of the two. We first modify the BVP
\eqref{eq:extBVP} by defining $u = 1 - \bar u$, so that solutions decay
as $|x| \to \infty$.
The function $u$ satisfies the modified equations
\begin{align}\label{eq:extBVP2}
\begin{cases}
\Delta u = 0 & x \in \mathbb{R}^3 \backslash \Omega \\
u = 1 & x \in \Gamma_A \\
\frac{\partial u}{\partial n} = 0 & x \in \Gamma_R \\
u(x) \to 0 & |x| \to \infty. \\
\end{cases}
\end{align}
Let us denote the unknown Neumann data on $\Gamma_A$ by
$\sigma(x')$. Then
\eqref{eq:extneusoln} implies that for $x \in \mathbb{R}^3
\backslash \wb{\Omega}$, we have
\begin{equation}\label{eq:urep}
u(x) = \frac{1}{4 \pi} \int_{\Gamma_A} G_E(x,x') \frac{\partial
u}{\partial n}(x') \, dS(x') \equiv \int_{\Gamma_A} G_E(x,x') \sigma(x')
\, dS(x').
\end{equation}
By analogy with classical potential theory, we refer to this as a {\em
single layer potential} representation with density $\sigma$ supported
on $\Gamma_A$. Since the dominant singularity of the kernel $G_E$ is that of
the free-space Green's function for the Laplace equation, this single layer potential is continuous up to $\partial \Omega$.
Taking the limit as $x \rightarrow \Gamma_A$ and using
the second condition in \eqref{eq:extBVP2}, we obtain the first-kind
integral equation
\begin{equation}\label{eq:extinteqn}
\int_{\Gamma_A} G_E(x,x') \sigma(x') \, dS(x') = f(x), \quad x \in \Gamma_A,
\end{equation}
where $f(x) \equiv 1$, with the weakly singular kernel $G_E$. Assuming that we can solve
\eqref{eq:extinteqn} for $\sigma$, it follows that
$u(x)$, given by \eqref{eq:urep}, is the solution to
\eqref{eq:extBVP2}, and that $\bar u = 1-u$ solves
\eqref{eq:extBVP}. Furthermore, since $\sigma \equiv \frac{\partial
u}{\partial n} \equiv -\frac{\partial \bar u}{\partial n}$ on
$\Gamma_A$, the total flux $J$ from \eqref{eq:flux} will be given by
\[J = -I_\sigma\]
where we have introduced the shorthand
\begin{equation}
\label{Isigmadef}
I_\sigma := \int_{\Gamma_A} \sigma \, dS.
\end{equation}
We will not prove the existence of a solution to \eqref{eq:extinteqn},
but sketch a possible approach.
If we replace the kernel $G_E$ in \eqref{eq:extinteqn} with its first term
$\frac{2}{|x-x'|}$, which is the free-space Green's function for the Laplace
equation (up to a constant scaling factor), we obtain the
first-kind integral equation for the Dirichlet problem
on an open surface, which we can denote in operator form by
\[ \mathcal{S}_0 \sigma = f. \]
This is
a well-studied problem, which has a unique solution in the Sobolev space
$H^{-\frac12}(\Gamma_A)$ given data in
$H^{\frac12}(\Gamma_A)$ \cite{Stephan1987}.
Writing the full single layer potential operator in the form
$\mathcal{S}_0 + K$, where $K$ is a
compact pseudodifferential operator of order $-2$,
we may rewrite
\eqref{eq:extinteqn} in the form of a Fredholm integral equation of the second kind:
\begin{equation}
(I + \mathcal{S}_0^{-1}K) \sigma = \mathcal{S}_0^{-1} \, f.
\label{preconsinglepatch}
\end{equation}
Thus, to prove existence and uniqueness for the single patch equation,
one can apply the Fredholm alternative to
\eqref{preconsinglepatch}.
That is, one need only show that the homogeneous
version of the single patch equation has no nontrivial solutions.
This is straightforward to prove when
$\varepsilon$ is sufficiently small, since the norm of $K$ goes to zero
as $\varepsilon$ goes to zero and the corresponding Neumann series
converges.
We conjecture that the result holds for any $\varepsilon$.
\subsection{The narrow escape problem}\label{sec:intsetup}
The analytical formulation of the narrow escape problem is somewhat more complicated
than that of the narrow capture problem, largely because of the
non-uniqueness of the interior Neumann problem, but it leads to a similar integral equation.
We first recast the Poisson problem
\eqref{eq:intBVP} as a Laplace problem
with inhomogeneous boundary conditions.
Assume that $v$ satisfies
\begin{align}\label{eq:intBVP2}
\begin{cases}
\Delta v = 0 & x \in \Omega \\
v = 1 & x \in \Gamma_A \\
\frac{\partial v}{\partial n} = D & x \in \Gamma_R,
\end{cases}
\end{align}
for some non-zero constant $D$. Then $\bar v$ given by
\begin{equation}\label{eq:vtilde}
\bar v = \frac{v - 1}{3D} + \frac{1 - |x|^2}{6}
\end{equation}
solves \eqref{eq:intBVP}. We will therefore seek a method to produce a
solution of \eqref{eq:intBVP2} for some $D \neq 0$.
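The substitution \eqref{eq:vtilde} can be checked directly: the particular term $\frac{1-|x|^2}{6}$ has Laplacian $-1$ and outward normal derivative $-\frac13$ on $|x| = 1$, so that $\frac{\partial \bar v}{\partial n} = \frac{D}{3D} - \frac13 = 0$ on $\Gamma_R$ and $\bar v = 0$ on $\Gamma_A$. An illustrative SymPy verification (not part of the paper):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2
phi = (1 - r2) / 6                      # particular term in eq. (vtilde)

lap = sum(sp.diff(phi, v, 2) for v in (x, y, z))
assert sp.simplify(lap + 1) == 0        # Laplacian of phi is -1

# Outward normal derivative on the unit sphere is x . grad(phi) = -r2/3,
# which equals -1/3 when r2 = 1.
nd = x*sp.diff(phi, x) + y*sp.diff(phi, y) + z*sp.diff(phi, z)
assert sp.simplify(nd + r2/3) == 0
```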
\begin{figure}[ht]
\centering
\hspace*{3cm}\includegraphics[width=0.8\linewidth]{cluspts_1e4_wblowup.png}
\caption{MFPT $\bar v$ plotted just inside the unit sphere for
an example with $N = 10\, 000$ random, clustered patches of radius $\varepsilon
\approx 0.0035$. The
integral equation associated with this problem was solved in $269$
seconds on a 60-core workstation, and in $35$ minutes on a four-core,
eight-thread laptop, to an $L^2$ residual error of
approximately $6.5 \times 10^{-8}$. Further details are given in Section
\ref{sec:numexamples}.}
\label{fig:cluspts1e4}
\end{figure}
\begin{lemma}
Let
\begin{equation}\label{eq:intBVPansatz}
v(x) = \int_{\Gamma_A} G_I(x,x') \sigma(x') \, dS(x'),
\end{equation}
where $\sigma$ satisfies the first-kind integral
equation
\begin{equation}\label{eq:intinteqn}
\int_{\Gamma_A} G_I(x,x') \sigma(x') \, dS(x') = 1
\end{equation}
for $x \in \Gamma_A$. Then $v$ solves \eqref{eq:intBVP2} with $D = -I_\sigma$, for $I_\sigma$
defined as in \eqref{Isigmadef}, and $I_\sigma \neq 0$.
\end{lemma}
\noindent {\em Proof}:
The function $v(x)$ is
harmonic in $\Omega$, and by Lemma \ref{lem:intsigma}, it
satisfies the third condition of \eqref{eq:intBVP2} with $D \equiv
-I_\sigma$, as long as $I_\sigma \neq 0$. Taking $x$ to $\Gamma_A$ and using the continuity of
the single layer potential up to
$\Gamma_A$, we find that $v$ will satisfy the second condition of
\eqref{eq:intBVP2} as long as $\sigma$ satisfies \eqref{eq:intinteqn}.
It remains only to show that if $\sigma$ satisfies
\eqref{eq:intinteqn}, then
$I_\sigma \neq 0$.
If not, then $v$ given by \eqref{eq:intBVPansatz} satisfies
\eqref{eq:intBVP2} with $D = 0$, as does the constant function $1$.
It follows from Green's identity
that solutions to \eqref{eq:intBVP2} with the same value of $D$ are unique, so we
must have $v \equiv 1$. The formula
\eqref{eq:Gintoffsurface} for $G_I$ shows that if $|x'| = 1$, then $G_I(0,x')
= 2$, so if $v \equiv 1$ we have
\[ 1 = v(0) = 2 \int_{\Gamma_A} \sigma(x') \, dS(x') = 2 I_\sigma, \]
a contradiction. \hfil \hfill$\Box$ \\
The question of the existence of a solution to \eqref{eq:intinteqn} is
analogous to that for \eqref{eq:extinteqn}, which was discussed in
Section \ref{sec:extsetup}.
To calculate the average MFPT $\mu$ directly from
$\sigma$, we plug \eqref{eq:vtilde} into \eqref{eq:avgmfpt2} to obtain
\begin{equation}\label{eq:mufromsigma1}
\mu = \frac{1}{3 D
|\partial \Omega|} \int_{\partial \Omega} v \, dS - \frac{1}{3D} +
\frac{1}{15}.
\end{equation}
To calculate $\frac{1}{|\partial \Omega|} \int_{\partial \Omega} v \, dS$, we use the
representation \eqref{eq:intBVPansatz}:
\begin{align*}
\frac{1}{|\partial \Omega|} \int_{\partial \Omega} v \, dS &= \frac{1}{|\partial \Omega|} \int_{\partial \Omega} \int_{\Gamma_A}
G_I(x,x') \sigma(x') \, dS(x') \, dS(x) \\
&= \int_{\Gamma_A} \sigma(x') \paren{ \frac{1}{|\partial \Omega|} \int_{\partial \Omega}
G_I(x,x') \, dS(x) } \, dS(x').
\end{align*}
A calculation using the explicit form \eqref{eq:Gintonsurface} of $G_I$ gives
\[\frac{1}{|\partial \Omega|} \int_{\partial \Omega} G_I(x,x') \, dS(x) = 2\]
for any $x' \in \partial \Omega$.
We therefore have
\[\frac{1}{|\partial \Omega|} \int_{\partial \Omega} v \, dS = 2 I_\sigma.\]
Plugging this into \eqref{eq:mufromsigma1} and replacing $D$ by
$-I_\sigma$ gives
\begin{equation}\label{eq:mufromsigma2}
\mu = \frac{1}{3 I_\sigma} - \frac35.
\end{equation}
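Two of the identities used in this derivation are easy to verify numerically: $G_I(0,x') = 2$ for $|x'| = 1$, and the surface average $\frac{1}{|\partial \Omega|}\int_{\partial \Omega} G_I(x,x')\,dS(x) = 2$. In the following illustrative Python check (not part of the paper), the average is reduced by symmetry to a one-dimensional integral, placing $x'$ at the north pole so that $|x - x'| = 2\sin(\theta/2)$ in the polar angle $\theta$.

```python
import numpy as np
from scipy import integrate

def G_int_on(r):
    # eq. (Gintonsurface) as a function of the chord distance r = |x - x'|
    return 2.0/r + np.log(2.0/r) - np.log(1.0 + r/2.0)

# G_I(0, x') = 2/|x'| + log(2 / (1 - 0 + |x'|)) = 2 + log(1) = 2 for |x'| = 1
assert abs((2.0/1.0 + np.log(2.0/(1.0 + 1.0))) - 2.0) < 1e-15

# Surface average with x' at the north pole:
# (1/4pi) int G_I dS = (1/2) int_0^pi G_I(2 sin(t/2)) sin(t) dt = 2
val, _ = integrate.quad(
    lambda t: 0.5 * G_int_on(2.0*np.sin(t/2.0)) * np.sin(t), 0.0, np.pi)
assert abs(val - 2.0) < 1e-8
```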
\section{A multiple scattering formalism}\label{sec:manypatchsetuptools}
We have shown that the solutions of the two boundary value problems of
interest, as well as the associated scalars $J$ and $\mu$, may be
obtained by solving \eqref{eq:extinteqn} and \eqref{eq:intinteqn},
respectively,
on the collection of absorbing patches. These integral equations differ only by the sign of one
term in their respective kernels, as seen in Section \ref{sec:greens}.
Since our treatment of the two cases is the same,
we drop the subscripts on $G_E$ and $G_I$, and discuss the solution of
\[ \int_{\Gamma_A} G(x,x') \sigma(x') \, dS(x') = 1 \quad x \in \Gamma_A,\]
where $\sigma$ is an unknown density on $\Gamma_A$. Letting
$\Gamma_A = \cup_{i=1}^N \Gamma_{i}$, where $\Gamma_{i}$ is the $i$th
patch, and letting $\sigma_i$ be the restriction of $\sigma$
to $\Gamma_{i}$, we write this equation in the form
\begin{equation}\label{eq:inteqns}
\sum_{j=1}^N \int_{\Gamma_{j}} G(x,x') \sigma_j(x') \, dS(x') = 1 \quad x \in
\Gamma_{i}, \, i = 1,\ldots,N.
\end{equation}
For the sake of simplicity, we assume that each patch has the same radius
$\varepsilon$. We also assume that the patches are {\em well-separated},
in the sense that the distance between the centers of any two patches
in arc length along the surface of the sphere is at least $3 \varepsilon$.
That is, any two patches are separated by a distance greater than or
equal to their own radius.
For $x \in \Gamma_i$, we define $\mathcal{S}_{ij}$ by
\[(\mathcal{S}_{ij} \sigma_j)(x) := \int_{\Gamma_j} G(x,x') \sigma_j(x')
\, dS(x'). \]
More specifically, we define each such operator in a coordinate system
fixed about the center of $\Gamma_j$. Since all the patches have the same radius, the operators $\mathcal{S}_{ii}$
are therefore identical, and we denote $\mathcal{S}_{ii}$ by $\mathcal{S}$. Thus we may rewrite the many-patch integral equation \eqref{eq:inteqns} in the form
\begin{equation}\label{eq:opinteqns}
\mathcal{S} \sigma_i + \sum_{j\neq i}^N \mathcal{S}_{ij} \sigma_j = 1 \quad i = 1,\ldots,N.
\end{equation}
The aim of this section is to reformulate \eqref{eq:opinteqns} as a
Fredholm system of the second kind in an efficient basis.
\begin{definition} \label{def:1pinteqn}
Let $f$ be a smooth function on some patch $\Gamma_i$.
The \emph{one-patch integral
equation with data $f$} is defined by
\begin{equation}\label{eq:onepatinteq}
\mathcal{S} \sigma_i = f,
\end{equation}
where $\sigma_i$ is an unknown density on $\Gamma_i$.
\end{definition}
\begin{remark} \label{rem:rightside}
Writing \eqref{eq:opinteqns} in the form
\[
\mathcal{S} \sigma_i = 1 - \sum_{j\neq i}^N \mathcal{S}_{ij} \sigma_j,
\]
and observing that $\mathcal{S}_{ij} \sigma_j$ is a smooth function for
$\Gamma_j$ well-separated from $\Gamma_i$, we see that each $\sigma_i$
satisfies a one-patch integral equation with smooth data.
Conversely, if $\sigma_1,\ldots,\sigma_N$ satisfy \eqref{eq:opinteqns}, then
each $\mathcal{S} \sigma_i$ is smooth on $\Gamma_i$.
\end{remark}
It is convenient to
make use of an orthonormal basis $\{ q_1,q_2,\dots \}$ of smooth functions
on each patch, so that for smooth $f$ on $\Gamma_i$ we have
\begin{equation} \label{fsynth}
f(x) = \sum_{n=1}^\infty \hat{f}_n q_n(x),
\end{equation}
in the usual $L^2$ sense, with
\[ \hat{f}_n = \int_{\Gamma_i} f(x) q_n(x) \, dx.
\]
We postpone until Section \ref{sec:zernike} a discussion of our
particular choice of the basis $\{q_n\}$, which will be constructed
using Zernike polynomials.
We denote by $\hat{f}^K$ the
vector of the first $K$ coefficients:
\[ \hat{f}^K = (\hat{f}_1,\hat{f}_2,\ldots,\hat{f}_K)^T.
\]
\begin{definition} \label{projectors}
Let $f$ be a smooth function on a patch $\Gamma_i$ defined by
\eqref{fsynth}, with $\hat{f}$, $\hat{f}^K$ computed as above.
The {\em projection}
operators $\mathcal{P}$ and $\mathcal{P}^K$ are defined by
\[ \paren{\mathcal{P}[f]}_n = \hat{f}_n, \]
with $\mathcal{P}^K$ defined in the same manner for $n \leq {K}$.
The {\em synthesis} operators $\mathcal{Q}$ and $\mathcal{Q}^K$ are
defined by
\[ \mathcal{Q}[\hat{f}](x) = \sum_{n=1}^\infty \hat{f}_n q_n(x),\quad
\mathcal{Q}^K[\hat{f}^{K}](x) = \sum_{n=1}^{K} \hat{f}_n q_n(x).
\]
\end{definition}
$\mathcal{P}$ and $\mathcal{P}^K$ are left inverses of
$\mathcal{Q}$ and $\mathcal{Q}^K$, respectively.
Finally, we define $b_n$ to be the solution of
the one-patch integral equation
with data given by the basis element $q_n$:
\begin{equation}\label{eq:onepatchinteqn}
b_n = \mathcal{S}^{-1} q_n.
\end{equation}
Thus, if a smooth function $f$ on $\Gamma_i$ is expanded
as $f = \sum_{n=1}^\infty \hat{f}_n q_n$, then the
solution of the one-patch integral equation with data $f$ is given by
$\mathcal{S}^{-1} f = \sum_{n=1}^\infty \hat{f}_n b_n$. This
motivates the following definition.
\begin{definition}\label{def:Bop}
We denote the solution operator of the one-patch integral equation in the
basis $\{q_n\}$ by
\[\mathcal{B} = \mathcal{S}^{-1} \mathcal{Q}.\]
For $\hat{f} = \{\hat{f}_1,\hat{f}_2,\ldots\}$ and $f(x) =
\sum_{n=1}^\infty \hat{f}_n q_n(x)$, $\mathcal{B}$ satisfies
\[\mathcal{B}[\hat{f}](x) = \sum_{n=1}^\infty \hat{f}_n b_n(x).\]
We denote the solution operator of the one-patch integral equation in the
truncated basis $\{q_n\}_{n=1}^K$ by
\[\mathcal{B}^K = \mathcal{S}^{-1} \mathcal{Q}^K.\]
For $\hat{f} = (\hat{f}_1,\hat{f}_2,\ldots,\hat{f}_K)$ and $f(x) =
\sum_{n=1}^K \hat{f}_n q_n(x)$, $\mathcal{B}^K$ satisfies
\[\mathcal{B}^K[\hat{f}](x) = \sum_{n=1}^K \hat{f}_n b_n(x).\]
\end{definition}
Note that the construction of $\mathcal{B}$ requires
solving the one-patch integral equations with data $q_1,q_2,\ldots$ to
obtain $b_1,b_2,\ldots$, and that the
construction of $\mathcal{B}^K$ requires solving the first $K$ of these
equations. For a fixed patch radius
$\varepsilon$, these solutions are universal and do not
depend on the number or arrangement of patches in the full problem.
Given $\mathcal{B}$, we are now able to rewrite the integral equation
\eqref{eq:opinteqns} as a well-conditioned Fredholm system of the second
kind in the basis $\{q_n\}$. On $\Gamma_i$, we {\em define}
a function $f_i$ by
\[ f_i = \mathcal{S} \sigma_i. \]
Substituting into \eqref{eq:opinteqns}, we have
\[f_i + \sum_{j\neq i}^N \mathcal{S}_{ij} \mathcal{S}^{-1} f_j = 1
\quad i = 1,\ldots,N. \]
To transform to the basis $\{q_n\}$, we write
$f_i$ in the form $f_i = \mathcal{Q} \hat{f}_i$
and multiply on the left by $\mathcal{P}$ to obtain
\begin{equation}\label{eq:opinteqnslim*}
\hat{f}_i + \mathcal{P} \sum_{j\neq i}^N \mathcal{S}_{ij}\mathcal{B} \hat{f}_j =
\mathcal{P} \, 1 \quad i = 1,\ldots,N.
\end{equation}
Since the patches $\Gamma_i$ and $\Gamma_j$ are well-separated,
$\mathcal{P} \mathcal{S}_{ij} \mathcal{B}$ is a compact operator for $i \neq j$,
so that \eqref{eq:opinteqnslim*} is a Fredholm system of the second kind.
The corresponding truncated system takes the form
\begin{equation}\label{eq:opinteqns*}
\hat{f}_i^K + \mathcal{P}^K \sum_{j\neq i}^N
\mathcal{S}_{ij}\mathcal{B}^K \hat{f}_j^K =
\mathcal{P}^K \, 1 \quad i = 1,\ldots,N,
\end{equation}
where we have used the approximation $f_i \approx \mathcal{Q}^K \hat{f}_i^K$.
\begin{remark}\label{rem:advantages}
We refer to the approach described above as a {\em multiple scattering}
formalism by analogy with the problem of wave scattering from multiple
particles in a homogeneous medium. In the language of scattering theory,
one would say that for the $i$th patch, the boundary
data is the known data ($\mathcal{S} \sigma_i = 1$), perturbed by
the potential ``scattered" from all other patches, namely
$\sum_{j\neq i}^N \mathcal{S}_{ij} \sigma_j$.
Solving the system \eqref{eq:opinteqns} corresponds to determining how the
collection of uncoupled single patch solutions
$\mathcal{S} \sigma_i = 1$ needs to be perturbed to account for the
``multiple scattering" effects.
The approach developed above, where
$f_i = \mathcal{S} \sigma_i$ are the unknowns, has many advantages
over solving \eqref{eq:opinteqns} directly, even with ${\mathcal{S}}^{-1}$ as
a left preconditioner. By working in the spectral basis, we avoid the
need to discretize $\sigma_i$ on each patch, the number
of degrees of freedom per patch is significantly reduced, and the
linear system is a well-conditioned Fredholm equation of the second kind.
\end{remark}
\begin{remark}\label{rem:sigrep}
The original unknowns $\sigma_i$ may be recovered from the
solution of \eqref{eq:opinteqnslim*} or \eqref{eq:opinteqns*}
using the formula
\begin{equation}\label{eq:recoversig}
\sigma_i = \mathcal{B} \hat{f}_i \approx \mathcal{B}^K \hat{f}_i^K.
\end{equation}
Thus, we may think of the unknowns
$\hat{f}_i$ as a representation of the unknown density $\sigma_i$ in the
basis $\{b_n\}$.
\end{remark}
We turn now to
the construction of an orthonormal basis $\{ q_n \}$ for
smooth functions on a patch, the
construction of the singular solutions $b_n = \mathcal{S}^{-1} q_n$, and
the efficient solution of the discretized multiple scattering system
\eqref{eq:opinteqns*}.
\section{A basis for smooth functions on a patch} \label{sec:zernike}
It is well-known that the Zernike polynomials are a
spectrally accurate, orthogonal basis for smooth functions on the
disk.
For a thorough discussion of these functions, we refer the reader
to \cite{boyd11}. Here, we simply summarize their relevant properties.
The Zernike polynomials on the unit disk $0 \leq r \leq 1$, $0 \leq
\theta < 2 \pi$ are given by
\begin{align*}
\begin{cases}
Z_n^m(r,\theta) &= R_n^m(r) \cos(m \theta) \\
Z_n^{-m}(r,\theta) &= R_n^m(r) \sin(m \theta),
\end{cases}
\end{align*}
with $0 \leq m < \infty$, $m \leq n < \infty$, and
\[R_n^m(r) = (-1)^{(n-m)/2} r^m P_{(n-m)/2}^{m,0}(1 - 2 r^2), \]
where $P_n^{\alpha,\beta}(x)$ is a Jacobi polynomial on
$[-1,1]$. The Jacobi polynomials are orthogonal on $[-1,1]$ with respect to the
weight function $(1-x)^\alpha (1+x)^\beta$. Thus, for fixed
$m$, the functions $R_n^m(r)$ are orthogonal on $[0,1]$ with respect to
the weight function $r$. This gives the orthogonality relation
\begin{equation} \label{eq:zorthog}
\int_0^{2 \pi} \int_0^1 Z_{n_1}^{m_1}(r,\theta)
Z_{n_2}^{m_2}(r,\theta) r \, dr \, d\theta = \frac{(1 + \delta_{m_1,0}) \pi}{2
n_1 + 2} \delta_{n_1,n_2} \delta_{m_1,m_2}.
\end{equation}
The natural truncation of this basis is to fix a cutoff mode $M$ in both
the radial and angular variables, and to let $0 \leq m \leq n \leq M$.
This yields $K = (M+1)(M+2)/2$ basis functions.
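The count $K = (M+1)(M+2)/2$ can be verified by direct enumeration. The following Python sketch (illustrative only; the index convention with $m<0$ denoting the $\sin$ branch is our own bookkeeping, and the parity restriction $n-m$ even is implicit in the definition of $R_n^m$) lists the retained basis indices.

```python
def zernike_indices(M):
    """Enumerate the Zernike indices retained by the truncation
    0 <= m <= n <= M: (n, m) for the cos branch, (n, -m) for the
    sin branch (m > 0 only), with n - m even so R_n^m is defined."""
    idx = []
    for n in range(M + 1):
        for m in range(n % 2, n + 1, 2):   # m must have the parity of n
            idx.append((n, m))
            if m > 0:
                idx.append((n, -m))
    return idx

# The count matches K = (M + 1)(M + 2) / 2 for every cutoff M.
for M in range(8):
    assert len(zernike_indices(M)) == (M + 1) * (M + 2) // 2
```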
To use this basis on a generic patch $\Gamma_i$,
we define a polar coordinate system
$(r,\theta)$ about the patch center, for which $r$ is the distance in
arc length along the sphere from the center, and $\theta$ is the
polar angle. We rescale the radial variable
from $[0,1]$ to $[0,\varepsilon]$, transforming the Zernike
polynomials to functions on $\Gamma_i$. Finally, the basis functions
$q_1,\ldots,q_K$ discussed in Section \ref{sec:manypatchsetuptools} can be defined as the scaled Zernike
polynomials up to mode $M$.
From the orthogonality relation \eqref{eq:zorthog},
the projection operators $\mathcal{P}$ and $\mathcal{P}^K$
are obtained as
normalized inner products against Zernike polynomials in polar coordinates.
This {\em Zernike transform} can be implemented
numerically using
a tensor product quadrature with a Gauss-Legendre rule in the radial
variable and a trapezoidal rule in the angular variable.
The number of grid points required to obtain the exact Zernike
coefficients of a function in the space spanned by
$q_1,\dots,q_K$ is $\mathcal{O}(K)$; we denote this number by $K^*$. We refer to these points as
the {\em Zernike sampling nodes} $x_1^z,\ldots,x_{K^*}^z$
(see \cite{boyd11} for further details).
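The orthogonality relation \eqref{eq:zorthog} and the tensor-product quadrature can be checked numerically. The sketch below (ours, not part of the solver; the grid sizes $n_g$, $n_t$ and the test indices are arbitrary) builds $R_n^m$ from its explicit factorial sum and integrates with Gauss-Legendre nodes in $r$ and the trapezoidal rule in $\theta$.

```python
import math

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes/weights on [-1, 1] via Newton iteration."""
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            p0, p1 = 1.0, x
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x * x) * dp * dp))
    return xs, ws

def R(n, m, r):
    """Radial Zernike polynomial R_n^m (n - m even)."""
    return sum((-1) ** k * math.factorial(n - k)
               / (math.factorial(k) * math.factorial((n + m) // 2 - k)
                  * math.factorial((n - m) // 2 - k)) * r ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

def Z(n, m, r, th):
    """Zernike polynomial; m < 0 denotes the sin branch."""
    return R(n, abs(m), r) * (math.cos(m * th) if m >= 0 else math.sin(-m * th))

def zernike_ip(n1, m1, n2, m2, ng=12, nt=32):
    """<Z1, Z2> on the unit disk: Gauss-Legendre in r, trapezoidal in theta."""
    xs, ws = gauss_legendre(ng)
    total = 0.0
    for x, w in zip(xs, ws):
        r, wr = 0.5 * (x + 1.0), 0.5 * w          # map [-1, 1] -> [0, 1]
        for j in range(nt):
            th = 2.0 * math.pi * j / nt
            total += Z(n1, m1, r, th) * Z(n2, m2, r, th) * r * wr
    return total * 2.0 * math.pi / nt

# Orthogonality: <Z_n^m, Z_n^m> = (1 + delta_{m0}) pi / (2n + 2).
assert abs(zernike_ip(2, 0, 2, 0) - math.pi / 3) < 1e-12
assert abs(zernike_ip(2, 2, 2, 2) - math.pi / 6) < 1e-12
assert abs(zernike_ip(4, 2, 2, 2)) < 1e-12
assert abs(zernike_ip(2, 2, 2, -2)) < 1e-12
```

Because the integrands are polynomials in $r$ and trigonometric polynomials in $\theta$, the quadrature is exact up to roundoff, which is why the discrete Zernike transform recovers exact coefficients with $\mathcal{O}(K)$ points.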
\begin{remark}\label{rem:zerniketrunc}
Rewriting \eqref{eq:opinteqns*} in the form
\begin{equation}\label{eq:opinteqns2*}
\hat{f}_i^K =
\mathcal{P}^K \, \left(1 - \sum_{j\neq i} \mathcal{S}_{ij}\mathcal{B}^K \hat{f}_j^K \right),
\end{equation}
we see that the truncation error compared with \eqref{eq:opinteqnslim*} depends on how well
the smooth function
\[1 - \sum_{j\neq i} \mathcal{S}_{ij}\mathcal{B}^K \hat{f}_j^K\]
is represented in the space spanned by
$q_1,\ldots,q_K$.
In the one-patch case, the summation term vanishes, and $K=1$ is sufficient.
For multiple patches, the choice of $K$ depends largely on
how well-separated the patches are.
Since the Zernike basis is spectrally accurate, $M$ grows only logarithmically
with the desired precision. In practice, {\em a posteriori} estimates
are easily obtained for any fixed configuration by inspection of the
decay of the Zernike coefficients $\hat{f}_i^K$ in the computed solution.
\end{remark}
\section{Informal description of the one-patch solver}
\label{sec:onepatchprelim}
While the details of our solver for the one-patch integral equation
\[ \mathcal{S} \sigma_i = f \]
are deferred to Section \ref{sec:onepatch}, we outline the general
approach here.
First, we note that in the absence of curvature
(i.e. a flat disk on a half-space) and with the associated terms of the
Green's function removed, the solution $\sigma_i$
is known to have a square root singularity at the disk edge
\cite{Bernoff2018,Sneddon1966,Fabrikant1989,Stephan1987,costabelsing}.
In our case, we will explicitly include this square root singularity in
the representation of $\sigma_i$, but also allow for weaker
singularities - which
we have observed and will demonstrate in Section
\ref{sec:sigsingularity} - by using a
discretization that is adaptively refined toward the edge
$\partial \Gamma_i$.
Assume then that we have discretized
the patch $\Gamma_i$ using a suitable polar mesh with
$n_f$ {\em fine grid points}, denoted by
$x_{i,1}^f,\ldots,x_{i,n_f}^f$. The fine grid points for different
patches are identical relative to the coordinate systems of their own
patches. We denote the corresponding samples
of the right-hand side $f$ and $\sigma_i$ by
\[
\begin{aligned}
\vec{f} &= (f(x_{i,1}^f),\ldots,f(x_{i,n_f}^f))^T, \\
\vec{\sigma}_i &= ((\vec{\sigma}_i)_1,\ldots,(\vec{\sigma}_i)_{n_f})^T
\approx
(\sigma_i(x_{i,1}^f),\ldots,\sigma_i(x_{i,n_f}^f))^T.
\end{aligned}
\]
We assume that $\mathcal{S}$ is discretized to high-order accuracy by a matrix
$S$ with
\begin{equation}\label{eq:finegrdquadraturea}
\mathcal{S}[\sigma_i](x_{i,k}^f)
\approx \sum_{l=1}^{n_f} S(k,l) (\vec{\sigma_i})_l,
\end{equation}
so that the discretized system takes the form
\begin{equation}\label{eq:onepatchdiscreteeqn}
S \vec{\sigma}_i = \vec{f}.
\end{equation}
We will also require a set of quadrature weights, denoted by
$w_1^f,\ldots,w_{n_f}^f$ and identical for each patch, that permit the accurate integration
over $\Gamma_i$ of the product of an arbitrary smooth function with the
discretized density $\vec{\sigma}_i$, taking into account the fact that
$\sigma_i$ has an edge singularity.
That is, we assume that
\begin{equation}\label{eq:finegrdquadrature}
\int_{\Gamma_i} g(x) \sigma_i(x) \, dS(x) \approx \sum_{l=1}^{n_f}
g(x_l^f) (\vec{\sigma}_i)_l w_l^f
\end{equation}
for any smooth $g$, with high-order accuracy.
In the next section, we
will use this quadrature to discretize the operators $\mathcal{S}_{ij}$.
The solutions of the $K$ one-patch integral
equations \eqref{eq:onepatchinteqn} may be obtained in a precomputation,
after which we have access
to the functions $b_1,\ldots,b_K$ sampled on the fine grid.
We assemble these
functions into an $n_f \times K$ matrix $B$ with
\[B(n,m) = b_m(x_n^f).\]
$B$ is then the discretization of the operator $\mathcal{B}^K$, mapping the
first $K$ Zernike coefficients of a smooth function to the solution of the
corresponding one-patch integral equation sampled on the fine grid.
If we denote by $Q$ the discretization of the synthesis operator
$\mathcal{Q}^K$ as an $n_f
\times K$ matrix,
\[Q(i,j) = q_j(x_i^f),\]
then we have, as in Definition \ref{def:Bop},
\[SB = Q.\]
In short, the precomputation amounts to solving this matrix system for
$B$.
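As a schematic illustration of this precomputation, the sketch below solves a dense system $SB = Q$ for several right-hand sides by Gaussian elimination with partial pivoting. The $4 \times 4$ matrix is a toy stand-in: the actual $S$ comes from the high-order discretization of Section \ref{sec:onepatch} and would be factored once per patch radius $\varepsilon$.

```python
def solve_multi(S, Q):
    """Solve S B = Q by Gaussian elimination with partial pivoting.
    S is n x n, Q is n x K (lists of rows); returns B with S B = Q."""
    n, K = len(S), len(Q[0])
    A = [S[i][:] + Q[i][:] for i in range(n)]        # augmented [S | Q]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + K):
                A[r][j] -= f * A[c][j]
    B = [[0.0] * K for _ in range(n)]
    for c in range(n - 1, -1, -1):                   # back substitution
        for k in range(K):
            s = A[c][n + k] - sum(A[c][j] * B[j][k] for j in range(c + 1, n))
            B[c][k] = s / A[c][c]
    return B

# Toy stand-ins for the discretized operator S and Zernike matrix Q.
S = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
Q = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
B = solve_multi(S, Q)
for i in range(4):                                   # verify S B = Q
    for k in range(2):
        assert abs(sum(S[i][j] * B[j][k] for j in range(4)) - Q[i][k]) < 1e-12
```

The key point is that the columns of $B$ (the solutions $b_1,\ldots,b_K$) are computed once and reused for every patch of the same radius.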
\section{Discretization of the multiple scattering system}\label{sec:manypatchsetup}
We return now to the multiple scattering system \eqref{eq:opinteqns*}.
The unknowns on $\Gamma_i$ are defined in the truncated Zernike basis as
$\hat{f}_i^K$. We will need as intermediate variables the fine grid
samples of $\sigma_i(x)$. From Remark \ref{rem:sigrep}, we define the
sampling vector $\vec{\sigma_i}$ by
\[ \vec{\sigma_i} = B \hat{f}_i^K \approx \mathcal{B}^K \hat{f}_i^K. \]
In order to discretize the integral operators $\mathcal{S}_{ij}$
for $i \neq j$, we note that $G(x,x')$ is smooth for $x \in \Gamma_i$,
$x' \in \Gamma_j$, and
use the quadrature \eqref{eq:finegrdquadrature}. This yields
\begin{equation}\label{eq:Sijquad}
\int_{\Gamma_{j}} G(x,x') \sigma_j(x') \, dS(x') \approx
\sum_{l=1}^{n_f} G(x,x_{j,l}^f) (\vec{\sigma_j})_l w_l^f.
\end{equation}
Setting $x = x_{i,k}^z$ to be the $k$th Zernike sampling node on
$\Gamma_{i}$, we define the matrix $S_{ij}$ by
\[S_{ij}(k,l) = G(x_{i,k}^z,x_{j,l}^f) w_l^f. \]
Thus, $S_{ij}$ maps a density sampled on the fine grid on $\Gamma_j$ to the
smooth field it induces at the Zernike sampling nodes on $\Gamma_i$.
Lastly, we discretize the truncated Zernike transform $\mathcal{P}^K$ as a $K
\times K^*$ matrix $P$ using the trapezoidal-Legendre scheme described in
Section \ref{sec:zernike}.
\begin{definition}
The {\em discrete Zernike transform} $P$ is defined to be the
mapping of a smooth function
sampled on the $K^*$ Zernike sampling nodes to its $K$ Zernike coefficients.
\end{definition}
We can now write the multiple scattering system
\eqref{eq:opinteqns*} in a fully discrete form,
\begin{equation}
\label{eq:opinteqnsdisc}
\hat{f}_i^K + P \sum_{j \neq i} S_{ij} B
\hat{f}_j^K = P \vec{1} \quad i=1,\ldots,N,
\end{equation}
where $\vec{1}$ is the vector of length $K^*$ with all entries
equal to $1$. Since $P \in \mathbb{R}^{K \times K^*}$,
$S_{ij} \in \mathbb{R}^{K^* \times n_f}$, and
$B \in \mathbb{R}^{n_f \times K}$, this is
a linear system of dimensions $K N \times K N$, with
$K \ll n_f$ degrees of freedom per patch. As a discretization of a
Fredholm system of the second kind, it is amenable to rapid solution
using an iterative method such as GMRES \cite{GMRES}.
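The reason second-kind structure matters for iterative solution can be seen in miniature. In the sketch below (a hypothetical $3 \times 3$ coupling matrix, not the actual $P S_{ij} B$ blocks), the system has the form $(I + C)x = b$ with $\|C\| < 1$, so even plain fixed-point iteration converges geometrically; GMRES, used in practice, converges at least as fast.

```python
def richardson(C_apply, b, iters=50):
    """Fixed-point iteration x <- b - C x for the second-kind system
    (I + C) x = b; converges geometrically when ||C|| < 1."""
    x = [0.0] * len(b)
    for _ in range(iters):
        x = [bi - ci for bi, ci in zip(b, C_apply(x))]
    return x

# Toy "off-diagonal coupling" with norm well below 1.
C = [[0.0, 0.2, -0.1],
     [0.1, 0.0, 0.2],
     [-0.2, 0.1, 0.0]]
def C_apply(x):
    return [sum(cij * xj for cij, xj in zip(row, x)) for row in C]

b = [1.0, 1.0, 1.0]
x = richardson(C_apply, b)
res = max(abs(xi + ci - bi) for xi, ci, bi in zip(x, C_apply(x), b))
assert res < 1e-12          # (I + C) x = b holds to roundoff
```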
We now describe how to calculate the constants $J$ and $\mu$ from the
solution of \eqref{eq:opinteqnsdisc}.
We saw in Sections \ref{sec:extsetup} and
\ref{sec:intsetup} that these can be computed directly from $I_\sigma =
\sum_{i=1}^N \int_{\Gamma_{i}} \sigma_i \, dS$.
Using the fine grid quadrature \eqref{eq:finegrdquadrature}, we have
\begin{equation}\label{eq:getI}
I_\sigma = \sum_{i=1}^N
\int_{\Gamma_{i}} \sigma_i \, dS \approx \sum_{i=1}^N \sum_{k=1}^{n_f} (B
\hat{f}_i^K)_k
w_k^f = (w_1^f,\ldots,w_{n_f}^f) B \sum_{i=1}^N \hat{f}_i^K.
\end{equation}
Since we may precompute the row vector
$I := (w_1^f,\ldots,w_{n_f}^f) B$ of length $K$,
the cost to compute $I_\sigma$ is $\mathcal{O}(NK)$.
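The saving from precomputing $I$ is just linearity, which the following sketch makes concrete (all sizes and matrix entries are arbitrary toy values): evaluating $\sum_i w^T (B \hat{f}_i^K)$ directly costs $\mathcal{O}(N n_f K)$, while $I \cdot \sum_i \hat{f}_i^K$ costs $\mathcal{O}(NK)$.

```python
# Toy sizes (hypothetical): n_f fine nodes, K coefficients, N patches.
nf, K, N = 6, 3, 4
w = [0.1 * (i + 1) for i in range(nf)]                              # weights
B = [[(i + 1) / (j + 2.0) for j in range(K)] for i in range(nf)]    # n_f x K

# Precomputed row vector I = w^T B (length K).
I = [sum(w[i] * B[i][k] for i in range(nf)) for k in range(K)]

fhat = [[float(i + k) for k in range(K)] for i in range(N)]  # per-patch coeffs

# Direct evaluation: sum_i w^T (B fhat_i)  -- O(N n_f K) work.
direct = sum(sum(w[r] * sum(B[r][k] * fhat[i][k] for k in range(K))
                 for r in range(nf)) for i in range(N))

# Precomputed: I . (sum_i fhat_i)  -- O(N K) work.
tot = [sum(fhat[i][k] for i in range(N)) for k in range(K)]
fast = sum(I[k] * tot[k] for k in range(K))
assert abs(direct - fast) < 1e-9
```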
When the system \eqref{eq:opinteqnsdisc} is solved iteratively,
each matrix-vector product is dominated by the computation
of the ``multiple scattering events''
\begin{equation}\label{eq:mssums}
P \sum_{j \neq i} S_{ij} B \hat{f}_j^K
\end{equation}
for $i=1,\ldots,N$. That is, for each patch $\Gamma_i$, we must compute the
Zernike coefficients of the field induced on that patch by the densities
on all other patches. Note that if we were to calculate the above sums
by simple matrix-vector products, the cost would be
$\mathcal{O}(n_f K N^2)$. We turn now to the description of a scheme that permits the
computation of these sums using $\mathcal{O}(K N \log N)$ operations,
with a constant which depends only on
the desired precision, but not on $n_f$.
\section{Efficient representation of outgoing
and incoming fields}\label{sec:outgoingrep}
Our fast algorithm relies on what is variously referred to as a
compressed, skeletonized, or sparsified representation of the far field
induced by a source density $\sigma_i$ on a single patch $\Gamma_i$ (Fig.
\ref{fig:skelpts}).
We define the far field region $\Theta_i$ for a patch $\Gamma_i$
to be the set of points whose distance from the center of $\Gamma_i$ (measured
in arc length along the surface of the sphere) is greater than
$2\varepsilon$.
In light of our restriction on the minimum patch
separation distance, this ensures that the far field region of a
particular patch contains every other patch.
\begin{figure}[ht]
\centering
\includegraphics[width=.4\linewidth]{outgoing_fig}
\caption{
For a patch $\Gamma_i$, the far field region $\Theta_i$ is defined as
the complement on the surface of the sphere of a disk of radius $2
\varepsilon$, measured in arclength, about the center of $\Gamma_i$. The black dots in the
figure represent the subset of the fine grid points used to
efficiently represent the outgoing field induced by the density
$\sigma_i$.}
\label{fig:skelpts}
\end{figure}
We start from \eqref{eq:Sijquad}, which was used to define the matrix
$S_{ij}$. We will show that there is a subset of $p$ fine grid points
with $p \ll n_f$ and modified source strengths $\vec{\rho}_i =
(\rho_{i,1},\rho_{i,2},\dots,\rho_{i,p})^T$ so that
\begin{equation}\label{eq:outgoingcompressed}
\int_{\Gamma_i} G(x,x') \sigma_i(x') \, dS(x') \approx
\sum_{l=1}^{n_f} G(x,x_{i,l}^f) (\vec{\sigma}_i)_l w_l^f
\approx \sum_{m=1}^p
G(x,x_{i,\pi(m)}^f) \rho_{i,m},
\end{equation}
for any $x \in \Theta_i$. Moreover, there is a stable algorithm
for obtaining this {\em compressed} or
{\em skeletonized outgoing representation}. Here,
$\pi(m)$ is an indexing function which maps
$\{1,\ldots,p\} \to \{1,\ldots,n_f\}$, and identifies which of the
original fine grid points are used in the representation.
The number $p$ represents the numerical rank, to a specified precision,
of the $n_f$ functions $\{G(x,x_{i,l}^f)\}$ on $\Theta_i$.
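The low numerical rank of the far field can be observed directly. The sketch below (a planar surrogate: sources on a disk of radius $\varepsilon$, targets on a well-separated circle, and the free-space kernel $1/4\pi|x-x'|$ standing in for the sphere Green's function) estimates the rank by greedy pivoted Gram-Schmidt; the geometry and tolerance are illustrative choices, not the paper's.

```python
import math

def greens(x, y):
    """Free-space kernel 1/(4 pi |x - y|), a surrogate for G."""
    return 1.0 / (4.0 * math.pi * math.dist(x, y))

def numerical_rank(cols, tol):
    """Greedy pivoted Gram-Schmidt: number of orthonormal directions
    needed before every residual column norm drops below tol times
    the largest original column norm."""
    norm = lambda v: math.sqrt(sum(vi * vi for vi in v))
    res = [c[:] for c in cols]
    scale = max(norm(c) for c in res)
    rank = 0
    while True:
        j = max(range(len(res)), key=lambda k: norm(res[k]))
        nj = norm(res[j])
        if nj <= tol * scale:
            return rank
        q = [v / nj for v in res[j]]
        rank += 1
        for c in res:                      # project q out of every column
            d = sum(qi * ci for qi, ci in zip(q, c))
            for i in range(len(c)):
                c[i] -= d * q[i]

eps = 0.01
# "Fine grid": 60 sources on a patch of radius eps; 80 targets on a
# well-separated unit training circle.
src = [(eps * (i + 1) / 3.0 * math.cos(2 * math.pi * j / 20),
        eps * (i + 1) / 3.0 * math.sin(2 * math.pi * j / 20), 0.0)
       for i in range(3) for j in range(20)]
trg = [(math.cos(2 * math.pi * j / 80), math.sin(2 * math.pi * j / 80), 0.0)
       for j in range(80)]
cols = [[greens(t, s) for t in trg] for s in src]
p = numerical_rank(cols, 1e-8)
assert 0 < p < len(src) // 2     # far-field rank is far below n_f = 60
```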
\begin{remark}
The existence of such low-rank factorizations is
discussed in detail in \cite{cheng05,Goreinov1997,gu96}.
For the purposes of computation, we will use the
interpolative decomposition (ID) \cite{liberty07,cheng05,id2011}, described
briefly below.
The ID and related compression schemes are essential and widely used in
hierarchical, fast algorithms
for applying and inverting dense matrices (see for example
\cite{siva,borm2003hierarchical,corona2014n,fong2009,
gillman2012,gimbutas2002,ho2011,martinsson2005fast,minden2017,ying2004}
and the references therein).
\end{remark}
\subsection{The interpolative decomposition}\label{sec:ID}
We consider a generic patch $\Gamma_i$ and, for simplicity, drop
the patch index $i$ on all quantities.
We first discretize $\Theta$ on a
training grid $x_1^t,\ldots,x_{n_t}^t$ of $n_t$ points chosen to
be sufficiently fine to
accurately represent smooth functions on $\Theta$. We can then obtain a
matrix $A$ of size $n_t \times n_f$, with entries $A_{jl} =
G(x_j^t,x_l^f)$, so that the $l$th column of $A$ is a discretization of
the function $G(x,x_l^f)$ on the training grid.
Given a user-specified tolerance $\epsilon$,
the ID takes as input a matrix $A$,
and returns the factorization $\wt{A} \Pi$ with
\begin{equation}\label{eq:idapprox}
\|A - \wt{A} \Pi \|_2 = \mathcal{O}(\epsilon),
\end{equation}
where $\wt{A}$ is $n_t \times p$ and $\Pi$ is $p
\times n_f$. The parameter $p$ is
the numerical rank of $A$ determined by the ID
as part of the factorization.
The columns of $\wt{A}$ are a $p$-column subset of the original
matrix $A$, chosen so that
the column space of $\wt{A}$ approximates that of $A$. The matrix
$\Pi$ contains the coefficients needed to approximately reconstruct the
columns of $A$ from those of $\wt{A}$.
If we define the indexing function
$\pi$ so that the $m$th column of $\wt{A}$ is the $\pi(m)$th column
of $A$, then
the approximation \eqref{eq:idapprox}
implies that
\[G(x_j^t,x_l^f) \approx \sum_{m=1}^p G(x_j^t,x_{\pi(m)}^f) \Pi_{ml}\]
for $l=1,\ldots,n_f$. Since the columns of $A$ represent the functions
$\{G(x,x_l^f)\}$ on a fine training grid, the expression above holds
not just for $x \in \{x_j^t\}$, but more generally for $x \in \Theta$.
That is,
\[G(x,x_l^f) \approx \sum_{m=1}^p G(x,x_{\pi(m)}^f) \Pi_{ml}.\]
Summing both sides of this expression
against $(\vec{\sigma})_l w_l^f$ and
rearranging yields
\[\sum_{l=1}^{n_f} G(x,x_l^f) (\vec{\sigma})_l w_l^f \approx \sum_{l=1}^{n_f}
\sum_{m=1}^p G(x,x_{\pi(m)}^f) \Pi_{ml} (\vec{\sigma})_l w_l^f =
\sum_{m=1}^p G(x,x_{\pi(m)}^f) (\Pi W \vec{\sigma})_m\]
where $W$ is a diagonal $n_f \times n_f$ matrix with $W_{ll} = w_l^f$.
Since $\vec{\sigma} = B \hat{f}^K$, we let $T := \Pi W B$ to obtain
the representation \eqref{eq:outgoingcompressed} with
\begin{equation} \label{eq:Tmap}
\vec{\rho} = T \hat{f}^K.
\end{equation}
$T$ is a generic $p \times K$
matrix which may be formed and stored
once $\Pi$, $W$, and $B$ are available. We emphasize that each of these
matrices is identical for all patches of a given radius $\varepsilon$
and may therefore be precomputed.
$\Pi$ is obtained from a single interpolative decomposition, $W$ is
simply a diagonal matrix of quadrature weights, and $B$ is computed by solving a
sequence of one-patch integral equations as explained in Section
\ref{sec:onepatchprelim}.
Using this compression scheme alone, it is straightforward to reduce the
cost of computing the sums \eqref{eq:mssums} from $\mathcal{O}(K n_f N^2)$ to
$\mathcal{O}(K p N^2)$. The tools introduced in the remainder of this section
will allow us to reduce the cost further to $\mathcal{O}(K p N \log N)$.
\subsection{Quadtree on the sphere}\label{sec:quadtreesphere}
We now describe a data structure which will enable us to organize groups
of patches in a hierarchical fashion. We first inscribe the sphere in a
cube (see Fig. \ref{fig:embeddedsphere}). We then project each patch
center onto the surface of the cube via the ray from the origin through
the patch center (indicated by the arrows in the figure). This defines
a set of points on the surface of the cube. We then build a quadtree on
each face of the cube, subdividing boxes until there is only one point
per box, and pruning empty boxes in the process. The union of these six
quadtrees is an FMM-like full tree data structure, which provides a
subdivision of the sphere itself into a hierarchy of levels. The
patches assigned to a particular box in the full tree will be said to
form a {\em patch group}. Each patch is a member of one patch group at
each level of the full tree. At the leaf level, each group consists of a
single patch.
We define parent, child, and neighbor boxes in the full tree
in the same way as in an ordinary quadtree. The only modification to the
definition of a neighbor box is that it wraps across cube edges and corners.
Thus, a box adjacent to an edge has eight neighbors (like an interior box)
unless it is a corner box, in which case it has seven neighbors.
Well-separatedness and the interaction list for boxes or their
corresponding patch groups are defined as in the usual FMM. Two
boxes at a given level are well-separated if they are not
neighbors, and the interaction list for a particular box consists of
the well-separated children of its parent's neighbors.
We will sometimes refer to a patch $\Gamma_i$ as being in the
interaction list of some patch group $\gamma$, by which
we mean that $\Gamma_i$ is
contained in a group which is in the interaction list of $\gamma$.
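The central projection and per-face indexing can be sketched as follows (our own minimal version, with a uniform rather than adaptive subdivision and an arbitrary face-encoding convention): each sphere point is scaled so its largest coordinate magnitude becomes $1$, which places it on a face of the cube $[-1,1]^3$, and the in-face coordinates then index a quadtree box.

```python
def project_to_cube(v):
    """Project a point on the unit sphere onto the circumscribed cube
    [-1, 1]^3 along the ray from the origin.  Returns (face, (u, w)),
    where face = (axis, sign) names the cube face and (u, w) are the
    in-face coordinates used for the per-face quadtree."""
    ax = max(range(3), key=lambda i: abs(v[i]))
    s = 1.0 / abs(v[ax])
    p = [s * vi for vi in v]
    face = (ax, 1 if v[ax] > 0 else -1)
    return face, tuple(p[i] for i in range(3) if i != ax)

def box_index(uw, level):
    """Index of the uniform quadtree box at `level` containing (u, w)."""
    n = 1 << level
    return tuple(min(n - 1, int((c + 1.0) / 2.0 * n)) for c in uw)

# North pole projects to the center of the +z face.
face, uw = project_to_cube((0.0, 0.0, 1.0))
assert face == (2, 1) and uw == (0.0, 0.0)
assert box_index(uw, 3) == (4, 4)

# A generic direction lands on the face of its dominant axis.
r3 = 3 ** -0.5
face, uw = project_to_cube((r3, r3, r3))
assert face == (0, 1) and max(abs(c) for c in uw) <= 1.0
```

In the adaptive tree of the text, boxes are instead subdivided until each leaf holds a single projected patch center, with empty boxes pruned.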
\begin{figure}[ht]
\centering
\includegraphics[width=.35\linewidth]{embeddedsphere} \hspace{.2in}
\includegraphics[width=.32\linewidth]{qtree}
\caption{The sphere is inscribed in a cube and each
patch center is projected to a face of the cube by a ray
emanating from the sphere center (left). An adaptive quad tree is
then built on each face until, at the finest level,
there is one patch in every non-empty leaf node in the quad tree
(right).}
\label{fig:embeddedsphere}
\end{figure}
\subsection{The representation of incoming fields on patch
groups}\label{sec:localrep}
Since the incoming field due to remote source patches in the interaction
list of a patch group $\gamma$ is smooth, it can be efficiently
represented on a spectral polar grid (see Fig.
\ref{fig:localgrids}). This requires the construction of a {\em bounding
circle} on the surface of the sphere, enclosing all of the patches in
$\gamma$, which circumscribes the grid. Incoming field values can
then be obtained at arbitrary points inside the bounding circle by interpolation. We refer to the
grid samples of the incoming field as an {\em incoming representation}.
\begin{figure}[ht]
\centering
\includegraphics[width=.4\linewidth]{incoming_fig}
\caption{
For a group of $m$ patches, the field due to well-separated
source patches may be captured with high order accuracy on a polar
grid which covers all $m$ patches.
}
\label{fig:localgrids}
\end{figure}
The bounding circle is straightforward to construct using a
``smallest circle algorithm" for a collection of
points in the plane, suitably adapted to the sphere (see
\cite{skyum91,welzl91,xu03} and the references therein for discussion of
the smallest circle problem).
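For orientation, a cheap planar approximation conveys the idea; the sketch below uses Ritter's two-pass heuristic rather than the exact smallest-circle algorithms cited above, and works in the plane rather than on the sphere, so it is a stand-in only.

```python
import math

def approx_bounding_circle(pts):
    """Ritter's two-pass approximate bounding circle for planar points:
    pick a diametral guess from two far-apart points, then grow the
    circle to cover any stragglers.  Within a few percent of optimal."""
    p0 = pts[0]
    p1 = max(pts, key=lambda p: math.dist(p0, p))   # farthest from p0
    p2 = max(pts, key=lambda p: math.dist(p1, p))   # farthest from p1
    cx, cy = (p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0
    r = math.dist(p1, p2) / 2.0
    for p in pts:                                   # grow to cover stragglers
        d = math.dist((cx, cy), p)
        if d > r:
            r = (r + d) / 2.0
            cx += (d - r) / d * (p[0] - cx)
            cy += (d - r) / d * (p[1] - cy)
    return (cx, cy), r

pts = [(math.cos(2 * math.pi * j / 12), math.sin(2 * math.pi * j / 12))
       for j in range(12)]
center, rad = approx_bounding_circle(pts)
assert all(math.dist(center, p) <= rad + 1e-9 for p in pts)
assert rad <= 1.05            # close to the optimal radius 1
```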
Given a bounding circle for a patch group, we can build a local polar
coordinate system $(r,\theta)$, for which $r = 0$ corresponds to the
center of the patch group, and $r = R$ corresponds to the bounding
circle. We must select an {\em incoming grid} in these coordinates
which can represent a
smooth incoming field in a high order manner with as few grid points
as possible. For this, we will use
a parity-restricted Chebyshev-Fourier
basis, formed by
taking products of scaled Chebyshev polynomials in the radial variable
$r \in [-R,R]$ with
trigonometric functions in the angular variable
$\theta \in [0, 2 \pi)$. The
coefficients of an expansion in these basis functions
corresponding to Chebyshev and Fourier modes of different parity can be
shown to be zero, hence the name of the basis.
This is an efficient and spectrally accurate basis with a simple
associated grid \cite{boyd11}.
Namely, the coefficients of the expansion may be computed from function samples
on a polar grid comprised of the scaled Chebyshev nodes
in $r \in [0,R]$
and equispaced nodes in $\theta \in [0,2 \pi)$.
The desired field may then be evaluated at any point
inside a patch group's bounding circle
by evaluating the resulting Chebyshev-Fourier expansion.
It is straightforward to verify that the number of grid points
and coefficients required to obtain an accuracy $\epsilon$ is $\mathcal{O}(\log^2(1/\epsilon))$.
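The parity restriction can be verified numerically: under $(r,\theta) \to (-r, \theta + \pi)$ a smooth function on the disk is unchanged, while $T_k$ picks up $(-1)^k$ and $\cos(m\theta)$ picks up $(-1)^m$, so mixed-parity coefficients vanish. The sketch below checks this for an arbitrary smooth test function (the normalization and grid sizes are illustrative; only the zero/nonzero pattern matters).

```python
import math

def cheb_fourier_coeff(f, k, m, R=1.0, nr=16, nt=16):
    """(Unnormalized) Chebyshev-Fourier cosine coefficient of f on the
    disk of radius R, sampled on Chebyshev-Gauss radial nodes in [-R, R]
    and equispaced angles."""
    c = 0.0
    for i in range(nr):
        x = math.cos((2 * i + 1) * math.pi / (2 * nr))   # node in [-1, 1]
        for j in range(nt):
            th = 2.0 * math.pi * j / nt
            c += (f(R * x * math.cos(th), R * x * math.sin(th))
                  * math.cos(k * math.acos(x)) * math.cos(m * th))
    return c / (nr * nt)

f = lambda X, Y: math.exp(X + 0.5 * Y)    # a smooth incoming-field surrogate
# Same-parity Chebyshev/Fourier modes carry the function...
assert abs(cheb_fourier_coeff(f, 1, 1)) > 1e-3
assert abs(cheb_fourier_coeff(f, 2, 0)) > 1e-4
# ...while mixed-parity modes vanish (up to roundoff).
assert abs(cheb_fourier_coeff(f, 1, 2)) < 1e-10
assert abs(cheb_fourier_coeff(f, 2, 1)) < 1e-10
```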
\section{Solution of the multiple scattering system}\label{sec:algorithm}
We now describe our method to solve the discretized many-patch system
\eqref{eq:opinteqnsdisc}, including the fast algorithm for accelerating
the computation of the multiple scattering interactions
\eqref{eq:mssums} within a GMRES iteration.
\vspace{.2in}
\noindent {\bf Step 1: Precomputation (for each choice of $\varepsilon$)}
Given the patch radius $\varepsilon$, select
the Zernike truncation parameter $K$ and form
the matrix $Q$.
(a) Solve the system $S B = Q$ described in Section \ref{sec:onepatch}.
(b)
Construct the matrix $T$ defined in Section
\ref{sec:ID} by building and composing the matrices $\Pi$,
$W$, and $B$. $\Pi$ need not be stored after $T$ is formed.
(c)
Construct the vector
$I = (w_1^f,\ldots,w_{n_f}^f) B$,
used to obtain the quantities $J$ and $\mu$ in \eqref{eq:getI}.
At this point we no longer need to store $B$,
only the $p \times K$ matrix $T$ and the $1 \times K$ vector $I$.
The storage associated with the outputs of the precomputation phase is
therefore negligible.
\vspace{.2in}
\noindent {\bf Step 2: Construction of hierarchical data structure}
Let $N$ denote the number of patches on the surface of the sphere,
assumed to satisfy the minimum patch separation
condition introduced in Section \ref{sec:manypatchsetuptools}.
(a) Form the quadtree on the sphere described in Section
\ref{sec:quadtreesphere}. The data structure should associate
each patch with its group at every level, and identify
the interaction list of every patch group.
(b) For each patch group, construct the incoming grid described in
Section \ref{sec:localrep}. For each patch, construct the Zernike
sampling grid described in Section \ref{sec:zernike}.
\vspace{.2in}
\noindent {\bf Step 3: Iteration}
We use GMRES to solve the system \eqref{eq:opinteqnsdisc}. At
each iteration, we must apply the system matrix; that is, we must compute
\begin{equation}\label{eq:sysmatapply}
\hat{f}_i^K + P \sum_{j \neq i} S_{ij} B \hat{f}_j^K
\end{equation}
for $i=1,\ldots,N$, where here $(\hat{f}_1^K,\ldots,\hat{f}_N^K)^T \in
\mathbb{R}^{KN}$ is the input vector at a given iteration.
The following algorithm computes this
expression in $\mathcal{O}(N \log N)$ operations.
\begin{enumerate}
\item Compute and store the outgoing
coefficients $\vec{\rho}_i = T \hat{f}_i^K$ for each patch, $i=1,\ldots,N$.
{\em \underline{Cost}: Approximately $p K N$.}
\item Loop through every patch group at every level. For each patch group $\gamma$, loop through all patches in its
interaction list. For each such patch $\Gamma_i$, evaluate the
field induced by the density on $\Gamma_i$ on the
incoming grid of $\gamma$, using the outgoing representation
\eqref{eq:outgoingcompressed}. Add together all such field values to obtain the total incoming
field on the incoming grid.
{\em \underline{Cost}: If $q$ is an upper bound on the number of points in each
incoming grid, the cost of evaluating a single outgoing representation on
an incoming grid is at most $q p$. At each level, the outgoing
representation corresponding to each patch must be
evaluated on at most $27$ incoming grids, since the
interaction list of each patch's group at that level contains at
most $27$ other groups. There are approximately $\log_4 N$ levels.
Therefore, the
cost of this step is approximately $27 q p N \log_4 N$.}
\item At the leaf level of the tree, each patch group $\gamma$ contains a
single patch, say $\Gamma_i$. Though we have already evaluated the outgoing
representation for $\Gamma_i$ on the incoming grids of all
(single-patch) groups in the interaction list of $\gamma$, we now do
so also for the neighbors of $\gamma$, which are
also single-patch groups but are not contained in the interaction
list of $\gamma$. We add these contributions to the field values
already stored on the incoming grids of these neighbor
patches.
{\em \underline{Cost}: Since each leaf-level single-patch group has at most $8$ neighbors,
the cost of this step is approximately $8 q p N$.}
{\em Note: For each patch $\Gamma_i$, the incoming field due to every other
patch has now been stored in the incoming grid of exactly one patch-group
of which $\Gamma_i$ is a member. Indeed, every other patch is
either a neighbor of $\Gamma_i$ at the leaf level, or it is contained
in exactly one of the interaction lists of the patch groups
containing $\Gamma_i$.}
\item Loop through each patch group. For every patch $\Gamma_i$ in a
group $\gamma$, evaluate the interpolant of the incoming field stored
on the incoming grid of $\gamma$ at the Zernike sampling nodes
on $\Gamma_i$.
{\em \underline{Cost}: There are $\mathcal{O}(K)$ Zernike sampling nodes,
so the cost of each
interpolation is approximately $q^2$ to form the interpolant and $K
q$ to evaluate it. Each patch is a member of a single group at each
level, so we must carry out approximately $N \log_4 N$ such
interpolations. The total cost is therefore approximately $(q^2
+ Kq) N \log_4 N$. (For large $q$, this step could be
accelerated with fast transform methods but $q$ is generally too small
for this to provide any significant benefit.)}
At this point, we have computed the field due to all other patches
on the Zernike sampling grid on each patch. That is, we have
computed the sums $\sum_{j \neq i} S_{ij} B \hat{\sigma}_j$ for $i =
1,\ldots,N$.
\item Apply the matrix $P$ to the values stored on the Zernike
sampling grid on each patch and add $\hat{f}_i^K$ to the result to
obtain \eqref{eq:sysmatapply}.
{\em \underline{Cost}: Approximately $K^2 N$.}
\end{enumerate}
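To make the cost accounting above concrete, the per-iteration operation counts for steps (1)--(5) can be tallied in a short sketch. This is a hypothetical cost model, not part of our implementation; the parameters $N$, $K$, $p$, and $q$ are as defined in the text.

```python
import math

def iteration_cost(N, K, p, q):
    """Approximate per-iteration operation count, summing the cost
    estimates stated for steps (1)-(5) in the text."""
    levels = math.log(N, 4)              # ~log_4 N levels in the quadtree
    step1 = p * K * N                    # outgoing coefficients
    step2 = 27 * q * p * N * levels      # interaction-list evaluations
    step3 = 8 * q * p * N                # leaf-level neighbor evaluations
    step4 = (q**2 + K * q) * N * levels  # incoming-grid interpolation
    step5 = K**2 * N                     # apply P and add the data
    return step1 + step2 + step3 + step4 + step5
```

Dividing by $N \log N$ as $N$ grows confirms the claimed $\mathcal{O}(N \log N)$ scaling of the total.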
The total cost of each iteration is therefore $\mathcal{O}(N \log N)$, with
asymptotic constants which involve the parameters $K$, $q$, and $p$
associated with the resolution of smooth functions on spectral grids.
The singular character of the problem is dealt with entirely during the
precomputation phase.
\subsection{Optimizations and parallelization}
While the algorithm described above has the desired computational
complexity, there are several practical considerations that are worth
discussing to optimize its performance.
{\em Selection of incoming grid parameters:}
Rather than making a uniform choice of the radial and azimuthal
truncation parameters for the incoming grid, we can compute these
adaptively as follows. For each patch group $\gamma$, we determine the
distance from its bounding circle to the nearest patch in its
interaction list. We then adaptively construct an incoming grid which
accurately interpolates a collection of point sources $G(x,x')$ at
points $x'$ this distance away. This adaptive interpolation is carried
out by increasing the incoming grid truncation parameters until the last
few Legendre-Fourier coefficients of the interpolant fall below some
specified tolerance.
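The adaptive loop just described can be sketched generically: increase the truncation until the tail coefficients of the interpolant fall below the tolerance. The sketch below (illustrative names, not our solver's interface) applies the idea to the azimuthal Fourier truncation alone, with a smooth point-source-like kernel standing in for $G(x,x')$; the radial Legendre truncation is handled analogously.

```python
import math

def fourier_coeffs(f, n_terms, n_quad=400):
    # trapezoidal-rule Fourier cosine coefficients of a smooth,
    # even, 2*pi-periodic function f (spectrally accurate)
    coeffs = []
    for k in range(n_terms):
        s = sum(f(2*math.pi*j/n_quad) * math.cos(k*2*math.pi*j/n_quad)
                for j in range(n_quad))
        coeffs.append((2.0 if k > 0 else 1.0) * s / n_quad)
    return coeffs

def adaptive_truncation(f, tol, n0=4, n_max=256):
    # double the truncation until the last few coefficients fall
    # below tol relative to the largest coefficient
    n = n0
    while n <= n_max:
        c = fourier_coeffs(f, n)
        scale = max(abs(x) for x in c)
        if all(abs(x) <= tol*scale for x in c[-3:]):
            return n
        n *= 2
    return n_max
```

As expected, a source closer to the patch group (a more nearly singular kernel) requires a larger truncation than a well-separated one.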
{\em Additional compression of the outgoing representation:}
Instead of using the same outgoing coefficients $\vec{\rho}_i$ for each
level of the quadtree, we can associate with each patch a different
outgoing representation for each level. Recall that the far field
regions $\Theta_i$ were constructed
identically for each patch $\Gamma_i$ to be as large as possible,
consistent with the minimum patch separation. This allowed us to build
a single generic matrix $T$ taking a density on a patch to its outgoing
representation, obtained by compressing the outgoing field due to a
generic patch $\Gamma$ against a grid on a generic far field region $\Theta$.
Instead, we can build one such matrix for each level of the quadtree by
constructing a generic far field region for each level. Each such far field
region is an annulus or disk on the surface of the sphere. For each
level, it is taken to be just large enough so that for any $i =
1,\ldots,N$, in the coordinate system of $\Gamma_i$, it covers the
bounding circle of every group $\gamma$ containing $\Gamma_i$ in its
interaction list at that level. Using the interpolative decomposition,
we can then recompress the outgoing representation for a generic patch
against training grids on each of the approximately $\log_4 N$ new far
field regions. We obtain one matrix $T$ per level, each of which has
fewer rows and therefore yields fewer outgoing coefficients than the
original.
{\em Parallelization:}
Each step of the algorithm to compute \eqref{eq:sysmatapply} may be
straightforwardly
parallelized. Steps (1) and (5) are parallelized over all patches; steps
(2) and (4) are parallelized over all patch groups at all levels; step
(3) is parallelized over all patch groups at the leaf level.
\section{The one-patch
integral equation}\label{sec:onepatch}
In this section, we describe in detail a solver for the
integral equation \eqref{eq:onepatinteq},
as well as the construction of the far-field quadrature nodes
$x_{i,1}^f,\ldots,x_{i,n_f}^f$ and weights
$w_1^f,\ldots,w_{n_f}^f$ discussed in Section \ref{sec:onepatchprelim}.
We assume that a patch $\Gamma$ has radius
$\varepsilon$ and make use of
cylindrical coordinates $(r,\theta,z)$. If we take the center of the
patch to be the north pole of the sphere, then $r = 0$ corresponds to
the $z$-axis, the points $r = 0$, $z = \pm 1$ to the north and south poles,
respectively, and $\theta = 0$ to the $x$-axis.
Following the approach of \cite{young12,helsing14}, we use the
rotational symmetry of $\Gamma$ to reduce the integral
equation over the patch to a sequence of one-dimensional integral equations,
each corresponding to a
Fourier mode in the variable $\theta$.
More precisely,
we denote by $C$ the arc which generates $\Gamma$ via rotation
about the $z$-axis:
$C(t) \equiv (r(t),z(t)) =
(\sin(t),\cos(t))$ for $t \in [0,\varepsilon]$.
In this parametrization, $t$ is simply the arclength along the sphere.
Let $x = (r,\theta,z)$ and $x' = (r',\theta',z')$.
Since $G_E$ and $G_I$ are functions of $|x-x'|$
and
\[|x-x'| = \sqrt{r^2 + r'^2 + (z-z')^2 -
2rr'\cos(\theta-\theta')},\]
we can write the dependence of the Green's function in cylindrical coordinates as
$G(x-x') = G(r,r',z-z',\theta-\theta')$.
In these coordinates, the one-patch integral equation
\eqref{eq:onepatinteq} takes the form
\[\int_0^\varepsilon
\int_0^{2 \pi} G(r(t),r'(t'),z(t)-z'(t'),\theta-\theta')
\sigma(r'(t'),z'(t'),\theta') r'(t') \, dt' \, d \theta' = f(r(t),z(t),\theta).\]
Representing $\sigma$ as a Fourier series in $\theta$,
\[\sigma(r(t),z(t),\theta) = \sum_{n=-\infty}^\infty \sigma_n(t) e^{i n
\theta},\]
and taking the Fourier transform in $\theta$ of both sides of this
equation gives, after rearrangement, the following integral equation
for the Fourier modes:
\begin{equation}\label{eq:modalinteqn}
2 \pi \int_0^\varepsilon G_n(t,t')
\sigma_n(t') \sin(t') \, dt' = f_n(t).
\end{equation}
Here $G_n(t,t')$, $\sigma_n(t)$,
and $f_n(t)$ are the Fourier transforms of
$G(r(t),r'(t'),z(t)-z'(t'),\theta)$,
$\sigma(r(t),z(t),\theta)$ and
$f(r(t),z(t),\theta)$ with respect to $\theta$.
Thus, after solving the one-dimensional modal equations
\eqref{eq:modalinteqn},
we can recover
$\sigma(r(t),z(t),\theta)$ from its Fourier series. Note that
the Fourier series is spectrally convergent because
$\sigma(r(t),z(t),\theta)$ is smooth as a function of $\theta$, even though
it is singular as a function of $t$ at the edge $t = \varepsilon$.
\subsection{Evaluation of the modal kernels}
Let
\begin{align*}
G_n^{(1)}(t,t') &= \frac{1}{\pi} \int_0^\pi \frac{2}{|x-x'|}
\cos(n \tilde{\theta}) \, d \tilde{\theta} \\
G_n^{(2)}(t,t') &= \frac{1}{\pi} \int_0^\pi
\log\left(\frac{2}{|x-x'|} \right) \cos(n \tilde{\theta}) \, d \tilde{\theta} \\
G_n^{(3)}(t,t') &= \frac{1}{\pi} \int_0^\pi \log \left( 1 +
\frac12 |x-x'| \right) \cos(n \tilde{\theta}) \, d \tilde{\theta}.
\end{align*}
Then, using the formulae \eqref{eq:Gextonsurface} and \eqref{eq:Gintonsurface},
it is straightforward to show that
$G_n = G_n^{(1)} + G_n^{(2)} - G_n^{(3)}$
for $G_E(x,x')$ and
$G_n = G_n^{(1)} - G_n^{(2)} - G_n^{(3)}$
for $G_I(x,x')$.
We can write $|x-x'|$ in terms of $t$, $t'$ and
$\tilde{\theta} = \theta-\theta'$
as
\[|x-x'| = \sqrt{2 \left( 1-\cos(t)\cos(t') -
\sin(t)\sin(t')\cos(\tilde{\theta}) \right)}.\]
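As a quick sanity check (not part of the solver), this identity can be verified against the Cartesian chordal distance between two points on the unit sphere; the function names below are illustrative.

```python
import math

def chord_dist(t, tp, dtheta):
    # |x - x'| via the closed form in (t, t', theta - theta') coordinates
    return math.sqrt(2.0*(1.0 - math.cos(t)*math.cos(tp)
                          - math.sin(t)*math.sin(tp)*math.cos(dtheta)))

def chord_dist_xyz(t, theta, tp, thetap):
    # the same distance computed from Cartesian points on the unit
    # sphere, with t, t' the colatitudes and theta, theta' the azimuths
    x = (math.sin(t)*math.cos(theta), math.sin(t)*math.sin(theta), math.cos(t))
    xp = (math.sin(tp)*math.cos(thetap), math.sin(tp)*math.sin(thetap), math.cos(tp))
    return math.dist(x, xp)
```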
The integrands are not smooth at $t = t'$, $\tilde{\theta} = 0$, so we
must use specialized methods to evaluate each kernel.
$G_n^{(1)}(t,t')$ is simply the cosine transform of the Coulomb kernel
and arises in boundary integral equations for
electrostatics on axisymmetric surfaces. In
\cite{helsing14}, an efficient evaluation algorithm is described
which involves writing the modal kernel in terms of
Legendre functions of half-integer order and using their
associated three-term recurrence. We refer the reader to this
paper for further details.
The kernel $G_n^{(2)}(t,t')$ is weakly singular and may be evaluated by
adaptive Gaussian quadrature. However, the following formula, discovered
by a combination of analytical manipulation and symbolic calculation
with Mathematica, has been numerically verified for a wide
range of values and is significantly faster:
\begin{equation*}
\frac{1}{\pi}\int_0^\pi \log\left( \frac{2}{|x-x'|} \right) \cos(n
\tilde{\theta}) \, d \tilde{\theta} =
\begin{cases}
-\log\left(\cos(t_1/2)\sin(t_2/2)\right) & n = 0 \\
\frac{1}{2 n} \left(\tan(t_1/2) \cot(t_2/2)\right)^n & n > 0,
\end{cases}
\end{equation*}
where $t_1 = \min(t,t')$ and $t_2 = \max(t,t')$.
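Since the formula above was obtained partly by symbolic computation, it is natural to check it numerically. The following sketch (illustrative helper names) compares the closed form against composite Simpson quadrature for well-separated $t \neq t'$, where the integrand is smooth:

```python
import math

def chord(t, tp, dth):
    # |x - x'| in terms of t, t', and theta - theta'
    return math.sqrt(2.0*(1.0 - math.cos(t)*math.cos(tp)
                          - math.sin(t)*math.sin(tp)*math.cos(dth)))

def gn2_closed(n, t, tp):
    # the closed-form expression for the modal kernel G_n^(2)
    t1, t2 = min(t, tp), max(t, tp)
    if n == 0:
        return -math.log(math.cos(t1/2)*math.sin(t2/2))
    return (math.tan(t1/2)/math.tan(t2/2))**n / (2.0*n)

def gn2_quad(n, t, tp, m=4000):
    # composite Simpson's rule on [0, pi]; valid for t != tp,
    # where the integrand is smooth
    h = math.pi/m
    def f(th):
        return math.log(2.0/chord(t, tp, th))*math.cos(n*th)/math.pi
    s = f(0.0) + f(math.pi)
    s += 4.0*sum(f((2*j + 1)*h) for j in range(m//2))
    s += 2.0*sum(f(2*j*h) for j in range(1, m//2))
    return s*h/3.0
```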
The integrand in the expression for $G_n^{(3)}(t,t')$ is even more weakly
singular, so $G_n^{(3)}(t,t^\prime)$ may be evaluated relatively quickly
by adaptive Gaussian quadrature.
\subsection{Discretization of the modal integral equations}
Since \eqref{eq:modalinteqn} is a singular integral equation, care must
be taken to discretize it accurately. The dominant singularity of the kernel $G_n(t,t')$ at $t = t'$
is the logarithmic singularity of $G_n^{(1)}(t,t')$.
An analogous classical problem is therefore the first-kind integral equation
arising from the solution of the Dirichlet problem on an open arc in two
dimensions by a single layer potential.
Stable and accurate numerical schemes for this problem can be found, for example,
in \cite{yan88,atkinson91,jiang04}. As described in
\cite{jiang04}, when the domain is the
interval $[-1,1]$, the solution of
\begin{equation}
\label{straighteq}
\int_{-1}^1 \log|t-s| \sigma(s) \, ds = f(t)
\end{equation}
can be computed with spectral accuracy in the form
$\sigma(t) = g(t)/\sqrt{(1+t)(1-t)}$,
where $g$ is a smooth function whose Chebyshev coefficients depend in a
simple manner on those of $f$.
For an open
arc, the corresponding integral equation can be preconditioned
using the solution of \eqref{straighteq}. This procedure results in
a Fredholm equation of the second kind
for which the density may be represented as a Chebyshev expansion
and computed stably with high order accuracy.
In the present context, the inclusion of the additional
weakly singular kernels $G_n^{(2)}$ and $G_n^{(3)}$
causes the singularity of $\sigma_n(t)$ to be more complicated, but our
numerical evidence suggests that there is still a dominant square root
singularity at $t = \varepsilon$. To be more precise, if we represent
$\sigma_n$ by
\begin{equation}
\sigma_n(t) = g_n(t)/\sqrt{\varepsilon - t}
\label{sigmangndef}
\end{equation}
near $t = \varepsilon$, we can investigate the effectiveness of
representing $g_n$ in a basis of orthogonal polynomials.
While the exact behavior of $g_n(t)$
is not understood analytically, the numerical results presented in Section
\ref{sec:sigsingularity} suggest that it is only
mildly non-smooth. We note that there is no singularity at the endpoint
$t = 0$, since this point corresponds to the patch center, at which
there is no physical singularity.
To resolve the endpoint singularity of $\sigma_n$, we discretize
it on a set of panels $[a_0,a_1],[a_1,a_2],\ldots,[a_{m-1},a_m]$ on
$[0,\varepsilon]$ which are dyadically
refined towards $t = \varepsilon$:
\[ a_0 = 0,\ a_1 = \frac{\varepsilon}{2},\ a_2 = \frac{3 \varepsilon}{4},
\dots,\ a_{m-1} = \frac{(2^{m-1}-1) \varepsilon}{2^{m-1}},\ a_{m} =
\varepsilon.\]
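For concreteness, these breakpoints can be generated as follows; `dyadic_panels` is an illustrative name, not a routine from our implementation.

```python
def dyadic_panels(eps, m):
    # breakpoints a_0, ..., a_m on [0, eps], dyadically refined toward
    # t = eps: a_j = eps*(1 - 2**(-j)) for j = 1, ..., m-1
    return [0.0] + [eps*(1.0 - 2.0**(-j)) for j in range(1, m)] + [eps]
```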
On each panel, except the last, $\sigma_n$ is represented as a Legendre series of fixed
order $k$. Since $\sigma_n$ is smooth on each such
panel and separated
from its singularity by a distance equal to the panel length, it can be
shown that
this representation has an
error of size $\mathcal{O}(e^{-k} \log_2 (1/\varepsilon))$.
This argument is widely used in handling endpoint and corner singularities in the context
of boundary integral equations
\cite{bremer1,bremer2,helsing1,helsing3,serkh2016jcp,trefethen}.
On the last panel, we analytically incorporate a square root singularity
into our representation of $\sigma_n$ as above, and expand $g_n(t) =
\sigma_n(t) \sqrt{\varepsilon - t}$ as a
series of Jacobi polynomials with $\alpha = -\frac12$ and $\beta = 0$.
If the singularity of $\sigma_n$ at $t = \varepsilon$ were exactly of
square root type, this would yield a spectrally accurate representation
of $\sigma_n$. Instead, as we will show in Section
\ref{sec:sigsingularity}, we obtain a
representation which is finite order but resolves the solution quite
well even for modest truncation parameters.
Thus we have rewritten \eqref{eq:modalinteqn} as
\begin{align*}
f_n(t)
&= 2 \pi \sum_{j=1}^{m-1} \int_{a_{j-1}}^{a_j} G_n(t,t')
\sigma_n(t') \sin(t') \, dt' \\
&\quad + 2 \pi \int_{a_{m-1}}^{\varepsilon}
\frac{G_n(t,t')}{\sqrt{\varepsilon - t'}} \left( \sigma_n(t')
\sqrt{\varepsilon - t'} \right) \sin(t') \, dt',
\end{align*}
and discretized $\sigma_n$ by Legendre polynomials for the first $m-1$
panels and by Jacobi polynomials for the last. Sampling the resulting
equations at the corresponding quadrature nodes (Gauss-Legendre for the
first $m-1$ panels and Gauss-Jacobi for the last) yields a collocation method for
$\sigma_n$, in which $\sigma_n$ is determined by its piecewise polynomial
basis coefficients. For each collocation node $t_i$, we compute the
system matrix entries by
adaptively integrating $G_n(t_i,t')$ in $t'$ against
the piecewise polynomial basis functions. We compute
the values $f_n(t_i)$ by discretizing the Fourier
transform of $f(r(t_i),z(t_i),\theta)$ in $\theta$ by the trapezoidal
rule, which is spectrally accurate for smooth, periodic functions. We
solve the resulting set of linear systems (one for each Fourier mode) by $LU$ factorization and
back substitution. The factorizations may be reused, since we must solve
a one-patch integral equation for many different right hand sides.
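The reuse of factorizations amounts to the standard split between a one-time factorization and inexpensive triangular solves. A minimal, self-contained sketch with partial pivoting is given below; a production implementation would of course call LAPACK rather than this illustrative code.

```python
def lu_factor(A):
    # Doolittle LU factorization with partial pivoting, computed once;
    # returns the packed factors and the pivot permutation for reuse
    n = len(A)
    LU = [row[:] for row in A]
    piv = list(range(n))
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k]*LU[k][j]
    return LU, piv

def lu_solve(LU, piv, b):
    # cheap solve for a new right hand side, reusing the factorization
    n = len(LU)
    x = [b[p] for p in piv]
    for i in range(n):            # forward substitution (unit lower part)
        x[i] -= sum(LU[i][j]*x[j] for j in range(i))
    for i in reversed(range(n)):  # back substitution (upper part)
        x[i] = (x[i] - sum(LU[i][j]*x[j] for j in range(i + 1, n))) / LU[i][i]
    return x
```

Factoring once and calling `lu_solve` for each new right hand side mirrors how the modal systems are treated across the many one-patch solves.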
We can now define the fine grid points and the smooth quadrature weights
introduced in Section \ref{sec:onepatchprelim}. The points
$x_{i,1}^f,\ldots,x_{i,n_f}^f$ are the tensor products of the
collocation nodes in the radial direction with equispaced points (the
trapezoidal rule quadrature nodes) in the azimuthal direction.
$w_1^f,\ldots,w_{n_f}^f$ are the corresponding quadrature weights,
given by the products of the panel-wise Gauss weights with the trapezoidal rule weight.
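This tensor-product construction can be sketched as follows (hypothetical names; in practice the radial nodes and weights come from the panel-wise Gauss rules described above):

```python
import math

def tensor_grid(rad_nodes, rad_weights, n_theta):
    # tensor product of radial collocation nodes with n_theta equispaced
    # azimuthal (trapezoidal-rule) nodes; each weight is the product of
    # the radial weight and the trapezoidal weight 2*pi/n_theta
    pts, wts = [], []
    for t, w in zip(rad_nodes, rad_weights):
        for j in range(n_theta):
            pts.append((t, 2.0*math.pi*j/n_theta))
            wts.append(w*2.0*math.pi/n_theta)
    return pts, wts
```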
\subsection{Numerical investigation of the singularity of
$\sigma_n$}\label{sec:sigsingularity}
In this section, we contrast two strategies for representing $\sigma_n$
in \eqref{sigmangndef}.
In the first, we use $m = 1$ panels, and represent
$g_n$ in a basis of Jacobi polynomials, which takes into account the square
root singularity in $\sigma_n$. This approach would yield spectral
accuracy with respect to $g_n$ if $\sigma_n$ only contained a square
root singularity. The second strategy is the one described above; we
use $m > 1$ panels with a Jacobi polynomial basis of fixed degree only in the
last panel. These experiments give us some insight into the nature of the
true singularity in $\sigma_n$, and justify our discretization choice.
In both cases, we solve the interior one-patch integral equation by the method
described above for a basis of Zernike polynomials with truncation
parameter $M = 15$. The results do not change significantly if we solve
the exterior equation instead. We do this for several different choices of
$\varepsilon$. The Fourier series truncation is fixed sufficiently large
to resolve the highest azimuthal Zernike mode. For each solution,
we measure the residual error in $L^2$, normalized by the patch size:
\begin{equation}\label{eq:patcherrl2}
\norm{\mathcal{S} \sigma - f}_{L^2(\Gamma)} / |\Gamma|.
\end{equation}
Here $|\Gamma|$ is the surface area of the patch, and $f$ is
a Zernike polynomial.
This measures the extent to which the
computed solution of the one-patch BVP satisfies the Dirichlet boundary
condition. This solution automatically satisfies the Neumann boundary
condition and the PDE, because of its representation as a single layer
potential with the Neumann Green's function, so a small $L^2$ residual error
corresponds to a solution which nearly satisfies the boundary value
problem. This error is computed by
quadrature on a Legendre-Fourier grid which does not overlap with the
grid on which the integral equation is solved, so it is {\em not} the same as
the residual of the solution to the discrete linear system.
Using the first strategy ($m=1$), we
measure the error
\eqref{eq:patcherrl2} for each Zernike polynomial as the number of
Jacobi basis functions is increased, and report the maximum error
taken over all Zernike polynomials.
The results are presented in the
left panel of Fig. \ref{fig:onepaterr}. We observe an initial regime of rapid convergence, followed by
much slower convergence. Indeed, $15$ basis functions are required
to resolve the highest Zernike modes we have used as data. Beyond this
point, the slow convergence suggests that $\sigma_n$ has a dominant square root singularity
and a subdominant term which is nonsmooth, but much smaller. We also notice
that performance improves as $\varepsilon$ is decreased, which is not
surprising since as $\varepsilon \to 0$, we approach the flat case in
which $\sigma_n$ has a pure square root singularity.
The second strategy is explored in the right panel of Fig.
\ref{fig:onepaterr}. Here, we fix $20$
basis functions per panel, which is sufficient to begin with a good error
constant according to the first experiment. We then increase the number
$m$ of panels. Although we can already obtain
quite good accuracy using the first strategy, the second allows us to
reach near-machine precision. The improvement is particularly dramatic
for larger choices of $\varepsilon$.
\begin{figure}[ht]
\centering
\includegraphics[width=.45\linewidth]{onepat_jacobi.pdf} \hspace{.2in}
\includegraphics[width=.45\linewidth]{onepat_mpanels.pdf}
\caption{Left panel: $g_n$ is represented by a basis of
Jacobi polynomials on a single panel. We plot the
maximum residual error \eqref{eq:patcherrl2} vs.
the number of Jacobi basis functions. Right panel: $g_n$ is represented in
a Legendre basis on every panel except the last, where a Jacobi basis is used.
We plot the maximum residual error vs. the number of panels.}
\label{fig:onepaterr}
\end{figure}
\section{Numerical experiments}\label{sec:numresults}
An important parameter in studying narrow escape and narrow capture problems
is the {\em patch area fraction} $f_{N,\varepsilon}$.
Since the surface area
of a single patch of radius $\varepsilon$ is given by
\[A_\varepsilon = 4 \pi \sin^2(\varepsilon/2), \]
we have
\begin{equation}
f_{N,\varepsilon} = N \sin^2(\varepsilon/2).
\end{equation}
Assuming $\varepsilon$ is sufficiently small, we may write
\begin{equation}\label{eq:patareafrac}
f_{N,\varepsilon} \approx \varepsilon^2 N / 4.
\end{equation}
Given $N$, we will use \eqref{eq:patareafrac} to compute
the patch radius $\varepsilon$ for a given patch area fraction.
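In code, the conversion between $N$, $\varepsilon$, and $f_{N,\varepsilon}$ is immediate. The sketch below (illustrative names) implements both the exact relation and the small-$\varepsilon$ approximation \eqref{eq:patareafrac}:

```python
import math

def eps_exact(N, f):
    # invert f = N*sin^2(eps/2) exactly
    return 2.0*math.asin(math.sqrt(f/N))

def eps_approx(N, f):
    # invert the small-eps approximation f ~ eps^2*N/4
    return 2.0*math.sqrt(f/N)
```

For $f_{N,\varepsilon} = 0.05$ and $N = 1000$, both give $\varepsilon \approx 0.0141$, matching the values reported in Section \ref{sec:numexamples}.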
\subsection{Convergence with respect to the Zernike basis} \label{sec:Mconv}
We first investigate the convergence
of the solution with respect to the Zernike truncation parameter $M$,
which determines the largest radial and azimuthal Zernike modes used to
represent the smooth incoming field on each patch.
We fix the patch area fraction at $f_{N,\varepsilon} = 0.05$ and
carry out
experiments with $N = 10$, $100$, and $1000$ patches. $\varepsilon$
is computed from \eqref{eq:patareafrac}.
The patch locations are drawn from a uniform random
distribution on the sphere, with a minimal patch separation of $2
\varepsilon$ enforced.
In each case, we solve the one-patch problems with the
truncation parameter $M$ set to $1,3,5,\ldots,15$.
The one-patch solutions are obtained, guided
by the results in Fig. \ref{fig:onepaterr},
using $13$ panels with $20$ basis functions
per panel, and the number of Fourier modes set equal to the number of
azimuthal modes in the Zernike basis. The ID and GMRES tolerances are
set to $10^{-15}$, and the incoming grid
tolerance is set to $10^{-12}$.
We measure error in two ways. The first,
as in \eqref{eq:patcherrl2}, is to examine the relative $L^2$ residual
of the multiple scattering system \eqref{eq:opinteqns}
(the discrepancy of the computed boundary values with the Dirichlet data)
on a random patch $\Gamma_i$:
\begin{equation}\label{eq:l2res}
\frac{1}{|\Gamma_i|} \norm{\left(\mathcal{S} \sigma_i + \sum_{j \neq i}^N \mathcal{S}_{ij} \sigma_j
\right) - 1}_{L^2(\Gamma_i)}.
\end{equation}
The second is to examine the difference between the computed average
mean first passage time (MFPT) $\mu$ and
a reference value, denoted by $\mu_{\text{ref}}$.
We obtain $\mu_{\text{ref}}$ by carrying out a more refined simulation,
with $M = 17$ on each patch, while also increasing
the number of panels and basis
functions used to solve the one-patch problem to $19$ and $30$,
respectively, and doubling the numbers of both radial and azimuthal modes
used in the incoming grids of all patch groups.
This is a self-consistent convergence
test for $\mu$.
The results are presented in Fig. \ref{fig:Mconvergence}. In all cases, we
observe the expected spectral convergence with respect to $M$, and
can reach errors of approximately $10^{-12}$ or less. We
also find that the residual error appears to provide a good upper bound
on the error of $\mu$ until convergence is reached.
\begin{figure}[ht]
\centering
\includegraphics[width=.32\linewidth]{mconv_10.pdf} \hspace{.01in}
\includegraphics[width=.32\linewidth]{mconv_100.pdf} \hspace{.01in}
\includegraphics[width=.32\linewidth]{mconv_1000.pdf}
\caption{$L^2$ residual error and self-consistent convergence error
of the average MFPT $\mu$ for random patches with
$f_{N,\varepsilon} = 0.05$. Left panel: $N = 10$, $\varepsilon
\approx 0.141$. Middle panel: $N =
100$, $\varepsilon \approx 0.0447$. Right panel: $N = 1000$,
$\varepsilon \approx 0.0141$.}
\label{fig:Mconvergence}
\end{figure}
\subsection{Large scale simulations} \label{sec:numexamples}
We next study the performance of our solver
as $N$ is increased and $\varepsilon$ is decreased.
The error is measured by computing the $L^2$ residual
\eqref{eq:l2res} on a random
patch. The parameters for the one-patch solver are set as in the
previous section with $M = 15$, but we fix the ID tolerance at
$10^{-11}$, the GMRES tolerance at $10^{-10}$, and the incoming grid
truncation tolerance at $10^{-8}$. This selection of parameters yields
errors in the range $10^{-7}$ to $10^{-10}$ for all of our experiments.
Our calculations are performed on either a laptop with a 4-core Intel
i7-3630QM 2.40GHz processor or a
workstation with four Intel Xeon E7-4880 2.50GHz processors,
each of which has 15 cores.
The algorithm has been implemented in
Fortran, and in both cases, the hierarchical fast algorithm is
parallelized over all available cores using OpenMP.
We consider randomly located patches, uniformly located patches, and
highly clustered patches.
For each experiment we report $N$, $\varepsilon$,
the computed value of the average MFPT $\mu$,
truncated at $8$ significant digits,
the $L^2$ residual error on a random patch,
the total number of GMRES iterations,
the total solve time, and the
time per GMRES iteration.
We also compute the {\em parallel
scaling factor}, namely the ratio of the time to compute
the matrix-vector product \eqref{eq:sysmatapply} using a
single core to the time required using all cores on the 60-core
workstation.
\subsubsection{Example 1: Random patches with area fraction
$f_{N,\varepsilon} = 0.05$}
Fixing the patch area fraction at $f_{N,\varepsilon} = 0.05$,
we let $\varepsilon$ be given by
\eqref{eq:patareafrac} for $N = 10,100,1000,10\, 000,100\, 000$,
with patches randomly distributed on the sphere
with a minimum patch separation of $2
\varepsilon$.
The corresponding results are given in Table \ref{tab:exrand}.
In the left panel of Fig. \ref{fig:timings}, we plot the time
per GMRES iteration as a function of $N$ using the 4-core
laptop and the 60-core workstation, as well as a reference curve
with $\mathcal{O}(N \log N)$ scaling.
In Fig. \ref{fig:sphereplots}, we also plot the
computed MFPT $\bar v$ just inside the unit sphere, on a sphere of
radius $1-\varepsilon/5$,
for $N= 10,100,1000,10\, 000$.
The case $N = 100\, 000$ was plotted earlier,
in Fig. \ref{fig:randpts1e5}.
Note that the number of GMRES iterations increases with $N$,
as one would expect from the increased complexity of the problem,
but slowly.
The computation with $N = 100\, 000$ required just over
an hour to complete using the 60-core workstation.
The computation with $N = 10\, 000$
required just over $45$ minutes to solve on the 4-core laptop,
and the computation with $N = 1000$ required approximately
one minute.
(The case $N = 100\, 000$ was not attempted on the laptop because of
memory requirements.)
Note from the data in Table \ref{tab:exrand} that
we achieve approximately $85\%$ parallel efficiency at
$N=1000$ and an efficiency near
$90\%$ for the largest calculation.
Note also from Fig. \ref{fig:timings} that the complexity
of the fast algorithm is consistent with the expected
$\mathcal{O}(N \log N)$ scaling.
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|r|r|r|r|r|}
\hline
$N$ & $10$ & $100$ & $1000$ & $10\, 000$ & $100\, 000$ \\ \hline
$\varepsilon$ & $\approx 0.14$ & $\approx 0.045$ & $\approx
0.014$ & $\approx 0.0045$ & $\approx 0.0014$ \\ \hline
Average MFPT $\mu$ & $0.64277353$ & $0.24999828$ & $0.12308716$ &
$0.084405945$ & $0.072275200$ \\ \hline
$L^2$ residual error & $3.6 \times 10^{-9}$ & $1.6 \times 10^{-9}$ & $5.3 \times 10^{-9}$ & $4.8 \times 10^{-8}$ & $2.2 \times 10^{-8}$ \\ \hline
$\#$ GMRES iterations & $7$ & $12$ & $17$ & $25$ & $35$ \\ \hline
Total iteration time (s) (60 cores) & $0.11$ & $0.54$ & $8.9$ &
$215$ & $3793$ \\ \hline
Time per iteration (s) (60 cores) & $0.02$ & $0.05$ & $0.5$ &
$8.6$ & $108$ \\ \hline
Total iteration time (s) (laptop) & $0.10$ & $2.63$ & $68.9$ &
$1731$ & \\ \hline
Time per iteration (s) (laptop) & $0.01$ & $0.22$ & $4.1$ &
$69$ & \\ \hline
Parallel scaling factor (60 cores) & $2.1$ & $25.7$ & $51.4$ & $52.3$ & $53.5$ \\
\hline
\end{tabular}
\caption{Narrow escape problem with random patches at patch
area fraction $f_{N,\varepsilon}
= 0.05$.}
\label{tab:exrand}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=.32\linewidth]{randpts_times.pdf} \hspace{.01in}
\includegraphics[width=.32\linewidth]{fibpts_times.pdf} \hspace{.01in}
\includegraphics[width=.32\linewidth]{cluspts_times.pdf}
Example 1 \hspace{1.4in} Example 2 \hspace{1.4in} Example 3
\caption{Time per GMRES iteration for the 4-core laptop and 60-core
workstation. A reference curve with $\mathcal{O}(N \log N)$ scaling is
also plotted.}
\label{fig:timings}
\end{figure}
\subsubsection{Example 2: Uniform patches with area fraction
$f_{N,\varepsilon} = 0.05$}
Using the same patch area fraction as in the previous example,
we let $N$ take the same values, but place the patch centers at the Fibonacci
spiral points, which are approximately uniform
on the sphere \cite{Bernoff2018}.
Results are shown in Table
\ref{tab:exunif} and the middle panel of Fig. \ref{fig:timings}.
The computed MFPT $\bar v$ on the sphere
of radius $1-\varepsilon/5$ was plotted in
Fig. \ref{fig:fibopts1e4} for the case $N = 10\, 000$. The MFPT is plotted
for the $N = 100$ and $N = 1000$ cases in Fig. \ref{fig:sphereplots}.
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|r|r|r|r|r|}
\hline
$N$ & $10$ & $100$ & $1000$ & $10\, 000$ & $100\, 000$ \\ \hline
$\varepsilon$ & $\approx 0.14$ & $\approx 0.045$ & $\approx
0.014$ & $\approx 0.0045$ & $\approx 0.0014$ \\ \hline
Average MFPT $\mu$ & $0.62771752$ & $0.23201408$ & $0.11813387$ &
$0.082870386$ & $0.071784189$ \\ \hline
$L^2$ residual error & $3.0 \times 10^{-9}$ & $1.5 \times 10^{-9}$ & $3.2 \times 10^{-8}$ & $6.4 \times 10^{-8}$ & $8.4 \times 10^{-8}$ \\ \hline
$\#$ GMRES iterations & $6$ & $9$ & $11$ & $16$ & $20$ \\ \hline
Total iteration time (s) (60 cores) & $0.10$ & $0.38$ & $5.1$ &
$114$ & $1803$ \\ \hline
Time per iteration (s) (60 cores) & $0.02$ & $0.04$ & $0.47$ &
$7.1$ & $90$ \\ \hline
Total iteration time (s) (laptop) & $0.087$ & $1.45$ & $40.7$ &
$926$ & \\ \hline
Time per iteration (s) (laptop) & $0.014$ & $0.16$ & $3.7$ & $58$ & \\ \hline
Parallel scaling factor (60 cores) & $5.0$ &
$29.9$ & $53.7$ & $54.0$ & $54.8$ \\
\hline
\end{tabular}
\caption{Narrow escape problem with uniform patches
at patch area fraction $f_{N,\varepsilon} = 0.05$.}
\label{tab:exunif}
\end{table}
\subsubsection{Example 3: Clustered patches}
In our final example, we configure the patches to form a collection
of $20$ clusters. Each
cluster is contained within a disk on the surface of the sphere centered
at the vertices of a dodecahedron inscribed in
the sphere, and the radii of the disks are chosen so that all $20$ disks
cover one quarter of the area of the sphere. Patch centers are placed
randomly on the sphere, and a proposed center is accepted if it falls
within one of the disks, while enforcing a minimum patch separation distance
of $2 \varepsilon$. We choose $\varepsilon$ empirically to be as large
as possible so that our random placement process yields the desired
number $N$ of patches in a reasonable amount of time. For sufficiently
large $N$, this results in a
much denser packing of patches within each cluster than we had in our
previous examples.
The results of our simulations are provided in Table
\ref{tab:exclus} and the right panel of Fig. \ref{fig:timings}.
The MFPT is plotted on a sphere of radius $1-\varepsilon/5$ in Fig. \ref{fig:cluspts1e4} for the $N =
10\, 000$ case and in Fig. \ref{fig:sphereplots} for the $N = 100$ and $N =
1000$ cases. The
denser packing of patches leads to
a greater number of GMRES iterations than in the previous examples and
longer computation times, but the
difference is mild. The case with $N = 100\, 000$ required
just over an hour and a half to solve on our 60-core workstation.
The simulation with $N = 10\, 000$
required $75$ minutes on the 4-core laptop, and the simulation with $N =
1000$ required about one minute.
\begin{table}[ht]
\small
\centering
\begin{tabular}{|c|r|r|r|r|r|}
\hline
$N$ & $10$ & $100$ & $1000$ & $10\, 000$ & $100\, 000$ \\ \hline
$\varepsilon$ & $0.25$ & $0.047$ & $0.012$ & $0.0035$ & $0.001$ \\ \hline
Average MFPT $\mu$ & $0.29687267$ & $0.25519357$ & $0.20318506$ &
$0.17622000$ & $0.16531162$ \\ \hline
$L^2$ residual error & $4.9 \times 10^{-10}$ & $3.9 \times 10^{-9}$
& $1.2 \times 10^{-8}$ & $6.5 \times 10^{-8}$ & $1.2 \times 10^{-7}$
\\ \hline
$\#$ GMRES iterations & $8$ & $12$ & $19$ & $28$ & $42$ \\ \hline
Total iteration time (s) (60 cores) & $0.21$ & $0.43$ & $9.9$ &
$269$ & $5795$ \\ \hline
Time per iteration (s) (60 cores) & $0.03$ & $0.04$ & $0.52$ &
$9.6$ & $138$ \\ \hline
Total iteration time (s) (laptop) & $0.18$ & $2.7$ & $76.4$ &
$2112$ & \\ \hline
Time per iteration (s) (laptop) & $0.02$ & $0.22$ & $4.0$ & $75$ & \\ \hline
Parallel scaling factor (60 cores) & $2.9$ &
$43.9$ & $49.3$ & $51.4$ & $55.5$ \\
\hline
\end{tabular}
\caption{Narrow escape problem with clustered patches.}
\label{tab:exclus}
\end{table}
\begin{figure}[p!]
\centering
\includegraphics[width=.29\linewidth]{randpts_1e1.png} \hspace{.1in}
\includegraphics[width=.29\linewidth]{randpts_1e2.png}
\vspace{.1in}
\includegraphics[width=.29\linewidth]{randpts_1e3.png} \hspace{.1in}
\includegraphics[width=.29\linewidth]{randpts_1e4.png}
\vspace{.1in}
\includegraphics[width=.29\linewidth]{fibopts_1e2.png} \hspace{.1in}
\includegraphics[width=.29\linewidth]{fibopts_1e3.png}
\vspace{.1in}
\includegraphics[width=.29\linewidth]{cluspts_1e2.png} \hspace{.1in}
\includegraphics[width=.29\linewidth]{cluspts_1e3.png}
\caption{Plots of the MFPT $\bar v$ on a sphere of radius $1
- \varepsilon/5$ for the experiments described in
Section \ref{sec:numexamples}. The first two rows correspond to
Example 1 with $N =
10,100,1000,10\, 000$. The third row corresponds to Example 2
with $N = 100, 1000$. The final row corresponds to
Example 3 with $N = 100, 1000$.}
\label{fig:sphereplots}
\end{figure}
\begin{remark}
We carried out the simulations above for the corresponding
exterior problem as well (the narrow capture problem).
As expected (since the integral equations are nearly identical),
the timings and errors are similar and are therefore omitted.
\end{remark}
\section{Conclusions}\label{sec:conclusions}
We have developed a fast solver for the narrow capture and narrow escape
problems on the sphere with arbitrarily-distributed well-separated
disk-shaped patches. We solve the corresponding mixed
boundary value problems by an integral equation scheme
derived using the Neumann Green's functions for the sphere. Our
numerical method combines a high order accurate solver for the one-patch
problem, a multiple scattering formalism, and a hierarchical fast
algorithm. We have demonstrated the scheme on examples with $N$ as large
as $100\, 000$, significantly larger than previously accessible.
The ability to carry out such large-scale simulations will permit
a systematic study of the asymptotic
approaches described, for example, in \cite{Cheviakov2010} and
\cite{Lindsay2017}.
Possible extensions of our method include the consideration of
narrow escape and narrow capture problems when the
patches are asymmetric and have multiple shapes.
Assuming some separation between patches, the
multiple scattering formalism still applies, but the
single patch integral
equation will not be solvable by separation of variables and
the compressed representation of outgoing fields will need to be computed
for each distinct patch type.
Neither of these extra steps, however, affects the asymptotic
$\mathcal{O}(N \log N)$ scaling of the fast algorithm.
Exterior problems involving
multiple spheres with different arrangements of patches
could also be simulated by a simple modification of our multiple scattering
approach.
A more challenging problem is to extend our method to non-spherical
geometries. For this, one would either have to discretize the entire
domain surface, rather than just the absorbing patches, or construct the
Neumann Green's function for such a domain numerically.
In the latter case, aspects of
our multiple scattering approach would carry over. We are
currently investigating these issues and will report on our progress
at a later date.
\section*{Acknowledgments}
We would like to thank Michael Ward for suggesting this problem
and for several valuable insights. We would also like to
thank Mike O'Neil for many useful conversations. J.K.
was supported in part by the Research Training Group in Modeling
and Simulation funded by the National Science Foundation via grant
RTG/DMS-1646339.
\bibliographystyle{ieeetr}
{\footnotesize
PHOTOS: Jonathan Quick's Team USA Olympics Mask
Posted on January 29, 2014 by Chris Peters
The U.S. Olympic goaltending race is starting to get interesting with Jonathan Quick playing some of his best hockey in the month of January. Who knows how much he or Ryan Miller will play at this point, but we know that Team USA's top two goaltenders are going to be looking pretty sharp.
Ryan Miller already debuted his mask for Sochi, which is an awesomely updated version of his 2010 mask. Now we have a look at Quick's completed mask.
His designer, Steve Nash of EyeCandyAir offered a brief look via Vine, which you can see here (since for some reason I can't embed Vines yet on this here blog). To give you a longer look, I took a few screen caps.
Here's the top of the mask.
Quick is keeping with the theme of his current armored mask with the Kings, which is a design style he's used for years in LA. With the blue background and red and gold accents, this Olympic mask offers some subtle differences from what he wears in LA to America it up a bit. Then of course, there's the USA right in the middle.
Here's a look at the side. He has the stars in the middle there, which is a slight difference from the diamond pattern currently employed in that spot on his LA mask. The gold near the cage is a nice touch.
Here's a slight similarity between Quick's and Miller's mask. They both have the USA shield on the chin guard. That logo really is a perfect fit there and it is encased in the steel armor, in keeping with the mask's theme. Normally, Quick's mask has his name on the chin. Here's a look at his current lid for reference.
UPDATE: There's a new addition to the mask that is worth taking a peek at. Quick very much wanted to honor the troops with his 2010 mask that he never ended up getting a chance to wear. He will do so with his 2014 mask on his backplate. Steve Nash, who designed the mask, painted a beautiful tribute to fallen soldiers.
The artwork depicts an officer laying a wreath at the base of the Tomb of the Unknowns at Arlington National Cemetery. It is mostly black and white, save for the red wreath. It is extremely well done. It also doesn't include the language the IOC seems so antsy about, so they shouldn't have a problem with this nice artwork.
Quick obviously will hope to get to wear his mask this time around. He was on the 2010 U.S. Olympic Team but never dressed. He should see at least some time this year.
Quick had to alter the mask he planned to wear in 2010 because it included USA Hockey's "Waving S" logo about as big as you could possibly make it on the top of the helmet. The IOC barred federations from using their corporate logo for team equipment, which is why there's no more waving S on the jerseys. It can't be on goalie helmets either, so Quick would have had to cover it up if he played…. That would have taken a lot of stickers. Here's a look at what could have been…
Maybe it's because the IOC quashed the last mask or maybe he's just more comfortable with his armor design, but this year's mask is quite a departure from what he was going for in 2010, which included a "Support Our Troops" back plate. The IOC deemed that quotation as political propaganda, which meant he had to cover that as well.
It's hard to imagine the IOC will have any problem with his 2014 mask, as it is certainly a more understated look. It's actually quite cool in its simplicity, though. Quick also gets to keep some of his individuality by sticking with what has kind of become his trademark design, with a few Americanized alterations.
All photos via EyeCandyAir. You can check out their website here. They're also on Twitter here.
I'll be sure to update when full photos become available.
2 Responses to PHOTOS: Jonathan Quick's Team USA Olympics Mask
Zach W says:
Looks pretty neat, nobody's ever explained to me why the IOC bans USA Hockey and Hockey Canada from using their awesome logos on jerseys/helmets, do you happen to know the reasoning behind that rule?
The reason they do it is because the IOC views the waving S logo as USA Hockey's corporate mark, which it actually isn't. The corporate mark is the arc and star that you see on everything but the jerseys when it comes to USA Hockey. Unfortunately the IOC sees it as close enough. Therefore USA Hockey and Hockey Canada had to replace their normal logos with basically whatever Nike comes up with.
Q: Swift Codable Decode Date format for *One* key I have a JSON structure that looks like this:
{
"start_date": "2021-12-31",
"created_at": "2021-12-30T23:36:25-06:00"
}
Both of the values (start_date, created_at) have different date formats, but need to be parsed at the same level as each other. My Swift struct looks like:
struct ExampleDateFormatProblem: Codable {
var startDate: Date
var createdAt: Date
enum CodingKeys: String, CodingKey {
case startDate = "start_date"
case createdAt = "created_at"
}
}
And is being decoded using
let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd'T'HH:mm:ss'.'SSS'Z'"
let decoder = JSONDecoder()
decoder.dateDecodingStrategy = .formatted(formatter)
decoder.decode // (basic decoding stuff here, just showing the date formatting)
This struct decodes perfectly fine without the start_date field, but when I add it back in, decoding breaks because its date format doesn't match the one I've set on the JSONDecoder.
So here's my question, how do I define a date format to use for one property in a struct? In this example I need to say "start_date should be decoded using this format, and created_at using another". Is this even possible using Swift's Codable?
A: heavy thanks to @Fogmeister and @dahiya_boy for the inspiration here
Property wrappers can conform to Codable, so we can define a strategy protocol that makes a Date codable using a given format.
import Foundation

// MARK: - Property Wrappers
public protocol DateValueCodableStrategy {
associatedtype RawValue: Codable
static func decode(_ value: RawValue) throws -> Date
static func encode(_ date: Date) -> RawValue
}
/// Uses a format to decode a date from Codable.
@propertyWrapper struct DateFormatted<T: DateValueCodableStrategy>: Codable {
private let value: T.RawValue
var wrappedValue: Date
public init(wrappedValue: Date) {
self.wrappedValue = wrappedValue
self.value = T.encode(wrappedValue)
}
public init(from decoder: Decoder) throws {
self.value = try T.RawValue(from: decoder)
self.wrappedValue = try T.decode(value)
}
public func encode(to encoder: Encoder) throws {
try value.encode(to: encoder)
}
}
// MARK: - Date Formats
struct DayDateStrategy: DateValueCodableStrategy {
private static var formatter: DateFormatter = {
let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd"
return formatter
}()
static func decode(_ value: String) throws -> Date {
formatter.date(from: value) ?? Date()
}
static func encode(_ date: Date) -> String {
formatter.string(from: date)
}
}
Then use it like
struct ExampleDateFormatProblem: Codable {
@DateFormatted<DayDateStrategy> var startDate: Date
var createdAt: Date
enum CodingKeys: String, CodingKey {
case startDate = "start_date"
case createdAt = "created_at"
}
}
Thanks for the help guys!
Q: Why do I sometimes have a segmentation error? I have a segmentation error in the second_get_device_descriptor function, with this line:
ret = libusb_get_device_descriptor(dev, &device_descriptor);
while the same line in the first function work perfectly.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <libusb-1.0/libusb.h>
void first_get_device_descriptor(libusb_device ** devices, libusb_device * dev){
struct libusb_device_descriptor device_descriptor;
dev = devices[10];
int ret;
ret = libusb_get_device_descriptor(dev, &device_descriptor);
}
void second_get_device_descriptor(libusb_device *dev){
struct libusb_device_descriptor device_descriptor;
int ret;
ret = libusb_get_device_descriptor(dev, &device_descriptor);
}
int main(int argc, char* argv[]) {
libusb_device **devices;
libusb_device *dev;
libusb_context *context = NULL;
libusb_init(&context);
libusb_get_device_list(context, &devices);
first_get_device_descriptor(devices, dev);
//Android device found
second_get_device_descriptor(dev);
libusb_free_device_list(devices, 1);
libusb_exit(context);
return 0;
}
Anyone have an idea with this please?
A: In your first function, you assigned dev a value. In the 2nd you didn't. libusb_get_device_descriptor will dereference this uninitialized variable and your program will have undefined behavior as a result.
A: dev = devices[10];
discards the (thoroughly useless due to being uninitialized) pointer passed to you from the caller, so you neither use the caller's pointer, nor what it points to, nor do they have any access to what you changed it to (unless they also reference devices[10]). Thus, the caller's dev remains an uninitialized pointer.
The second_get_device_descriptor also receives that original uninitialized dev, does not initialize it to anything, and passes it to libusb_get_device_descriptor, that could only possibly do something useful if dev pointed to valid allocated memory, which it does not. Presumably libusb_get_device_descriptor tries to dereference that pointer at some point (to read or write it), it still points to garbage, and you seg fault.
What you wanted to do I can't say (I don't know these APIs), but odds are you should be receiving dev as a double pointer (if the function(s) are responsible for allocating the memory, so you can change the caller's pointer through the double-pointer), or it needs to be allocated by the caller so the functions called with it can use it without reallocating/reassigning it.
A: libusb_device *dev; /* dev is uninitialized here */
/* you're passing the still-uninitialized pointer by value here */
first_get_device_descriptor(devices, dev);
/* you still haven't initialized dev! */
second_get_device_descriptor(dev);
When you write a function like
void first_get_device_descriptor(libusb_device ** devices, libusb_device * dev)
{
dev = /*something*/
it doesn't change the caller's dev, it only alters the local variable dev inside the scope of this function. Don't be confused by the fact they happen to have the same name. After all, if you called first_get_device_descriptor(devices, NULL), it wouldn't change the value of NULL, would it?
You can - and really should - verify this by using a debugger or adding some print statements to your code.
Finally, there are two actual solutions:
*
*use dev as an out-parameter here, which in C means passing a pointer (to a pointer):
void first_get_device_descriptor(libusb_device **devices, libusb_device **dev)
{
*dev = /*something*/
}
/* called like */
first_get_device_descriptor(devices, &dev);
*or just return dev:
libusb_device* first_get_device_descriptor(libusb_device **devices)
{
libusb_device *dev = /* something */
...
return dev;
}
/* called like */
dev = first_get_device_descriptor(devices);
Hiroshi Miyazawa (Jap. , Miyazawa Hiroshi; born November 22, 1970 in Hokkaido) is a former Japanese footballer.
Career
Miyazawa learned to play football in the school team of Fujisawa Nishi High School and the university team of Chūō University. He signed his first contract in 1993 with JEF United Ichihara. The club played in the country's top division, the J1 League, and he made 19 top-flight appearances for it. In 1996 he moved to league rivals Bellmare Hiratsuka, for whom he made 29 top-flight appearances. In 1998 he moved to league rivals Sanfrecce Hiroshima, making 21 top-flight appearances. He then played for Canberra Cosmos and Football Kingz. At the end of 2003 he ended his playing career.
Honours
Sanfrecce Hiroshima
Emperor's Cup
Runner-up: 1999
External links
Footballer (JEF United Ichihara Chiba)
Footballer (Shonan Bellmare)
Footballer (Sanfrecce Hiroshima)
Japanese person
Born 1970
Man
Permaculture isn't just for those with vast garden spaces, according to Juliet Kemp, author of Permaculture in Pots, and she's quite right. In the January/February 2015 issue of Urban Farms Magazine I talk with Kemp about turning balconies, window sills, and back patios into productive permaculture havens and why it matters. You'll have to buy the magazine, but it's worth every cent!
Q: Build fails due to custom plugin's fileTree.getFiles() in configuration phase (Android Studio 3.2.0-alpha11) Top-level build.gradle:
com.android.tools.build:gradle:3.2.0-alpha11
org.jetbrains.kotlin:kotlin-gradle-plugin:1.2.31
I use a custom Gradle plugin where I create a FileTree in Gradle's configuration phase and then iterate over the files with fileTree.getFiles(). This has worked ever since I started using the plugin (Android Studio 2.3.3).
Now I recently downloaded Android Studio 3.2 Canary 11 (preview) and I am wondering why my code has stopped working: Caused by: com.android.builder.errors.EvalIssueException: Resolving this BuildableArtifact can only done during task execution.(Stacktrace below).
I can't even sync the project.
Does anybody know if this is a bug (or a feature), and whether there is a way to work around it while still being in the configuration phase?
org.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':app'.
at org.gradle.configuration.project.LifecycleProjectEvaluator.addConfigurationFailure(LifecycleProjectEvaluator.java:94)
at org.gradle.configuration.project.LifecycleProjectEvaluator.notifyAfterEvaluate(LifecycleProjectEvaluator.java:89)
at org.gradle.configuration.project.LifecycleProjectEvaluator.doConfigure(LifecycleProjectEvaluator.java:70)
at org.gradle.configuration.project.LifecycleProjectEvaluator.access$100(LifecycleProjectEvaluator.java:34)
at org.gradle.configuration.project.LifecycleProjectEvaluator$ConfigureProject.run(LifecycleProjectEvaluator.java:110)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
at org.gradle.configuration.project.LifecycleProjectEvaluator.evaluate(LifecycleProjectEvaluator.java:50)
at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:667)
at org.gradle.api.internal.project.DefaultProject.evaluate(DefaultProject.java:136)
at org.gradle.execution.TaskPathProjectEvaluator.configure(TaskPathProjectEvaluator.java:35)
at org.gradle.execution.TaskPathProjectEvaluator.configureHierarchy(TaskPathProjectEvaluator.java:60)
at org.gradle.execution.TaskSelector.getSelection(TaskSelector.java:100)
at org.gradle.execution.TaskSelector.getSelection(TaskSelector.java:81)
at org.gradle.execution.commandline.CommandLineTaskParser.parseTasks(CommandLineTaskParser.java:42)
at org.gradle.execution.TaskNameResolvingBuildConfigurationAction.configure(TaskNameResolvingBuildConfigurationAction.java:44)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.access$000(DefaultBuildConfigurationActionExecuter.java:25)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter$1.proceed(DefaultBuildConfigurationActionExecuter.java:54)
at org.gradle.execution.DefaultTasksBuildExecutionAction.configure(DefaultTasksBuildExecutionAction.java:44)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.access$000(DefaultBuildConfigurationActionExecuter.java:25)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter$1.proceed(DefaultBuildConfigurationActionExecuter.java:54)
at org.gradle.execution.ExcludedTaskFilteringBuildConfigurationAction.configure(ExcludedTaskFilteringBuildConfigurationAction.java:47)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.configure(DefaultBuildConfigurationActionExecuter.java:48)
at org.gradle.execution.DefaultBuildConfigurationActionExecuter.select(DefaultBuildConfigurationActionExecuter.java:36)
at org.gradle.initialization.DefaultGradleLauncher$CalculateTaskGraph.run(DefaultGradleLauncher.java:286)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
at org.gradle.initialization.DefaultGradleLauncher.constructTaskGraph(DefaultGradleLauncher.java:181)
at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:136)
at org.gradle.initialization.DefaultGradleLauncher.executeTasks(DefaultGradleLauncher.java:115)
at org.gradle.internal.invocation.GradleBuildController$1.call(GradleBuildController.java:78)
at org.gradle.internal.invocation.GradleBuildController$1.call(GradleBuildController.java:75)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:152)
at org.gradle.internal.invocation.GradleBuildController.doBuild(GradleBuildController.java:100)
at org.gradle.internal.invocation.GradleBuildController.run(GradleBuildController.java:75)
at org.gradle.tooling.internal.provider.runner.BuildModelActionRunner.run(BuildModelActionRunner.java:53)
at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
at org.gradle.tooling.internal.provider.ValidatingBuildActionRunner.run(ValidatingBuildActionRunner.java:32)
at org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner$1.run(RunAsBuildOperationBuildActionRunner.java:43)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110)
at org.gradle.launcher.exec.RunAsBuildOperationBuildActionRunner.run(RunAsBuildOperationBuildActionRunner.java:40)
at org.gradle.tooling.internal.provider.SubscribableBuildActionRunner.run(SubscribableBuildActionRunner.java:51)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:49)
at org.gradle.launcher.exec.InProcessBuildActionExecuter.execute(InProcessBuildActionExecuter.java:32)
at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:39)
at org.gradle.launcher.exec.BuildTreeScopeBuildActionExecuter.execute(BuildTreeScopeBuildActionExecuter.java:25)
at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:80)
at org.gradle.tooling.internal.provider.ContinuousBuildActionExecuter.execute(ContinuousBuildActionExecuter.java:53)
at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:57)
at org.gradle.tooling.internal.provider.ServicesSetupBuildActionExecuter.execute(ServicesSetupBuildActionExecuter.java:32)
at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:36)
at org.gradle.tooling.internal.provider.GradleThreadBuildActionExecuter.execute(GradleThreadBuildActionExecuter.java:25)
at org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:43)
at org.gradle.tooling.internal.provider.ParallelismConfigurationBuildActionExecuter.execute(ParallelismConfigurationBuildActionExecuter.java:29)
at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:64)
at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:29)
at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:59)
at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:44)
at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:45)
at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:30)
at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:67)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:37)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:26)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:34)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:74)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.call(ForwardClientInput.java:72)
at org.gradle.util.Swapper.swap(Swapper.java:38)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:72)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:55)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:62)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:82)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:36)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122)
at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:50)
at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:295)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: com.android.builder.errors.EvalIssueException: Resolving this BuildableArtifact can only done during task execution.
at com.android.build.gradle.internal.api.artifact.BuildableArtifactImpl.checkResolvable(BuildableArtifactImpl.kt:58)
at com.android.build.gradle.internal.api.artifact.BuildableArtifactImpl.getFiles(BuildableArtifactImpl.kt:72)
at com.android.build.gradle.internal.api.artifact.BuildableArtifactImpl.iterator(BuildableArtifactImpl.kt:67)
at org.gradle.util.GUtil.addToCollection(GUtil.java:163)
at org.gradle.util.GUtil.addToCollection(GUtil.java:174)
at org.gradle.api.internal.file.collections.DefaultFileCollectionResolveContext.doResolve(DefaultFileCollectionResolveContext.java:138)
at org.gradle.api.internal.file.collections.DefaultFileCollectionResolveContext.resolveAsFileTrees(DefaultFileCollectionResolveContext.java:88)
at org.gradle.api.internal.file.collections.DefaultFileCollectionResolveContext$FileTreeConverter.convertInto(DefaultFileCollectionResolveContext.java:203)
at org.gradle.api.internal.file.collections.DefaultFileCollectionResolveContext.doResolve(DefaultFileCollectionResolveContext.java:112)
at org.gradle.api.internal.file.collections.DefaultFileCollectionResolveContext.resolveAsFileTrees(DefaultFileCollectionResolveContext.java:88)
at org.gradle.api.internal.file.CompositeFileCollection$1.visitContents(CompositeFileCollection.java:111)
at org.gradle.api.internal.file.CompositeFileTree$FilteredFileTree.visitContents(CompositeFileTree.java:114)
at org.gradle.api.internal.file.CompositeFileTree$FilteredFileTree.visitContents(CompositeFileTree.java:114)
at org.gradle.api.internal.file.CompositeFileCollection.getSourceCollections(CompositeFileCollection.java:171)
at org.gradle.api.internal.file.CompositeFileTree.getSourceCollections(CompositeFileTree.java:38)
at org.gradle.api.internal.file.CompositeFileCollection.getFiles(CompositeFileCollection.java:55)
at org.gradle.api.file.FileTree$getFiles$0.call(Unknown Source)
[...]
Thank you!
Q: Using typeahead.js with jquery ajax call I am using
http://twitter.github.io/typeahead.js/ version 0.9.3
and JQuery version 2.0.3
I have the below example, which I know works correctly.
<input id="subCategory" name="subCategory" type="text" />
<script>
$('#subCategory').typeahead({
name: "subCategory",
local: ["how", "when", "where", "who", "why"],
limit: 10
});
</script>
How can I then change this so that I can use the successful result from an AJAX request for JSON?
The below example does not work, my first thought is because it is not waiting for the response from $.getJSON() and I to poll for updates or wait until the async call finishes.
<script>
$('#subCategory').typeahead({
name: "subCategory",
local: $.getJSON("/subcategories/all/"),
limit: 10
});
</script>
My first thought is that I would have to apply the typeahead configuration above inside the success callback of the $.getJSON() function instead. Is there a better way of approaching this?
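That intuition is the key point: $.getJSON returns immediately with a jqXHR request object, not the data, so whatever is assigned at configuration time is the wrong thing. A quick Node-style sketch of the ordering (fakeGetJSON is a stand-in for jQuery's $.getJSON, not the real API):

```javascript
// Stand-in for $.getJSON: returns a request-like object immediately and
// delivers the data to the callback only later.
function fakeGetJSON(url, callback) {
  setImmediate(() => callback(["how", "when", "where", "who", "why"]));
  return { readyState: 1 }; // a jqXHR-like object, NOT the JSON data
}

// What `local: $.getJSON(...)` actually assigns:
const local = fakeGetJSON("/subcategories/all/", () => {});
console.log(Array.isArray(local)); // false — it's the request object

// The data only exists inside the callback, which is why the typeahead
// setup has to happen there (or via typeahead's own prefetch/remote).
fakeGetJSON("/subcategories/all/", (data) => {
  console.log(Array.isArray(data), data.length); // true 5
});
```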
The JSON call is to an MVC controller action that returns a JSONResult similar to this basic example below:
public ActionResult All()
{
return Json(_subCategoryService.GetAll(), JsonRequestBehavior.AllowGet);
}
I have tested and know that this getJSON request works correctly.
UPDATE:
I get a bit further when I do the below instead of an async call, but the typeahead data shows one item as undefined. This seems more appropriate; however, I was only intending to populate the full list once and then filter it on the client, rather than make this remote call with a query parameter every time someone types into the input box.
<script>
$('#subCategory').typeahead({
name: "subCategory",
remote: "/subcategories/all/",
limit: 10
});
</script>
UPDATE 2:
I am also now realising that my first example is a list of primitives, whereas my subcategories are not :( duh.. example:
[{ id: 1, name: "subcategory-1" }, { id: 2, name: "subcategory-2" }]
So now I am starting to look at the typeahead prefetch option and its filter attribute, but I am really trying to use this as if it were a dropdown, so I want to use the id as the backing value for a particular entry in the list
UPDATE 3:
Since I was trying to use the typeahead input as if it was a combobox, I have since altered my example, but using local data rather than my JSON response and the below works and stores the backing id value in a hidden field.
<input id="subCategorySelection" type="hidden" />
<input id="subCategory" name="subCategory" type="text" />
<script>
$("#subCategory").typeahead({
name: "subCategories", // the name for the dataset
local: [{ id: 1, name: "subcategory-1" }, { id: 2, name: "subcategory-2" }],
limit: 10,
valueKey: "name" // the value shown in the textbox
}).on("typeahead:selected typeahead:autocompleted",
function(e, datum) {
$("#subCategorySelection").val(datum.id);
}
);
</script>
A: I'm afraid this is not yet supported, at least when I looked at it some weeks ago.
But...there's this pull request that does exactly what you are trying to do.
https://github.com/twitter/typeahead.js/pull/220
A: The below is an example of doing it within the success callback, but I don't really like how this needs to be used.
$.getJSON("/subcategories/all/", null, function(response) {
$("#subCategory").typeahead({
name: "subCategories", // the name for the dataset
local: response,
limit: 10,
valueKey: "name"
}).on("typeahead:selected typeahead:autocompleted",
function(e, datum) {
$("#subCategorySelection").val(datum.id);
}
);
});
For now I don't need to do this anyway, and have gone for this solution using prefetch:
$("#subCategory").typeahead({
name: "subCategories", // the name for the dataset
prefetch: {
url: "/subcategories/all/",
ttl: 360000 /* 1 hour of local storage */
},
limit: 10,
valueKey: "name"
}).on("typeahead:selected typeahead:autocompleted",
function(e, datum) {
$("#subCategorySelection").val(datum.id);
}
);
\section*{Appendix A. Algorithms}
\label{appendix:A}
\renewcommand{\theequation}{A.\arabic{equation}}
\renewcommand{\thealgorithm }{A.\arabic{algorithm}}
\setcounter{equation}{0}
\setcounter{algorithm}{0}
We here denote nodes by $p,q, c$ and ${\boldsymbol{\mathsf{x}}}_p,\,{\boldsymbol{\mathsf{x}}}_q, {\boldsymbol{\mathsf{x}}}_c\in\CX\subset\bbR^D$ are the corresponding points.
\begin{algorithm}[htb]
\caption{\textsc{Insert}(point $q$, node $p$, level $l$)}
\begin{algorithmic}[1]
\STATE We assume $q$ already satisfies $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\| \leq 2^{-l}r_0$.
\IF{$\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_c\| > 2^{-(l+1)}r_0$ for all $c\in \textit{Children}(p)$}
\STATE Insert $q$ into $\textit{Children}(p)$.
\STATE\textsc{Update\_CoverFraction}(\textit{Parent}($Q_l$), "No parent found")
\STATE \textbf{Break}
\ELSIF{$\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_c\| < 2^{-(l+1)}r_0$ for some $c\in\textit{Children}(p)$}
\STATE Consider all children of $c$, namely $\textit{Children}(c)$
\IF{$\textit{Children}(c)$ is empty}
\IF{the covering fraction of $p$ (Def. \ref{def:cover_fraction}) satisfies $\Fc\Ff(p)\ge \CD_{\Fc\Ff}$ for some threshold $\CD_{\Fc\Ff}$}
\STATE Insert $q$ into $\textit{Children}(c)$
\STATE \textbf{Break}
\ELSE
\STATE \textsc{Update\_CoverFraction}(p, "parent found")
\COMMENT{$c$ is found to be a potential parent. However, since $\Fc\Ff(p) < \CD_{\Fc\Ff}$ we cannot add $q$ to $\textit{Children}(c)$}
\ENDIF
\ELSE
\STATE\textsc{Insert}($q$, $c$, $l+1$)
\ENDIF
\ENDIF
\end{algorithmic}
\label{alg:DampedCoverTree}
\end{algorithm}
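To make the insertion routine concrete, the following Python sketch mirrors Algorithm \ref{alg:DampedCoverTree} together with the covering-fraction update of Algorithm \ref{alg:updateCoverFraction}. It is an illustration only: the damping constant \texttt{ALPHA}, the threshold \texttt{CF\_THRESHOLD} (standing in for $\CD_{\Fc\Ff}$), and the root radius \texttt{R0} are assumed values, and all level bookkeeping beyond the tree itself is omitted.

```python
import math

ALPHA = 0.02          # damping factor alpha (assumed value)
CF_THRESHOLD = 0.6    # stands in for the threshold D_CF (assumed value)
R0 = 1.0              # root radius r_0 (assumed value)

class Node:
    def __init__(self, x, level):
        self.x, self.level = x, level
        self.children = []
        self.cf = 0.0  # covering-fraction estimate CF(p)

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def update_cf(node, parent_found):
    # Exponentially damped average of the indicator "a covering child was found".
    node.cf = (1.0 - ALPHA) * node.cf + (ALPHA if parent_found else 0.0)

def insert(q, p, l):
    # Assumes dist(q, p.x) <= 2^-l * R0 already holds, as in Alg. A.1.
    radius = 2.0 ** -(l + 1) * R0
    cover = next((c for c in p.children if dist(q, c.x) <= radius), None)
    if cover is None:
        # No child covers q: q becomes a new child of p.
        p.children.append(Node(q, l + 1))
        update_cf(p, parent_found=False)
    elif not cover.children:
        if p.cf >= CF_THRESHOLD:
            cover.children.append(Node(q, l + 2))
        else:
            update_cf(p, parent_found=True)
    else:
        insert(q, cover, l + 1)
```

Since a point is only promoted to a new child when it is farther than $2^{-(l+1)}r_0$ from every existing child, the landmarks at each level remain $2^{-(l+1)}r_0$-separated.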
\begin{algorithm}[htb!]
\caption{\textsc{StreaMRAK}(point ${\boldsymbol{\mathsf{x}}}$, target $y$)}
\begin{algorithmic}[1]
\STATE Let $l$ be the level. Let $p_{(0)}$ be the root node, $r_0$ the radius of the root node.
\STATE \textbf{Sub-sampling thread}
\STATE Insert ${\boldsymbol{\mathsf{x}}}$ into the cover tree with $\textsc{Insert}({\boldsymbol{\mathsf{x}}}, p_{(0)}, l=0)$. \COMMENT{See Alg. \ref{alg:DampedCoverTree}}
\IF{a new level has $\Fc\Ff(Q_l) \geq \mathcal{D}_{level}$}
\STATE Extract the landmarks at level $l$ as sub-samples, namely $\Gamma^{(l)}_{m^{(l)}}$.
\ENDIF
\STATE \textbf{Training thread}
\STATE Consider level $l$ and assume that the landmarks $\Gamma^{(l)}_{m^{(l)}}$ are extracted.
\WHILE{$l$ is not sufficiently covered with training points according to Def. \ref{def:sufficien_training_points}.}
\STATE Update $ \big[ ({\mathbf{K}}^{(l)}_{nm})^\top {\mathbf{K}}^{(l)}_{nm} \big]_{ij}$ and ${\boldsymbol{\mathsf{z}}}^{(l)}_i$ according to Eq. \eqref{eq:updateFormula_KnmTKnm} and Eq. \eqref{eq:updateFromualte_Zm} as new samples $({\boldsymbol{\mathsf{x}}},y)$ arrive, using the landmarks in $\widetilde\Gamma^{(l)}_m$ from Def. \ref{def:landmarks}.
\STATE Continuously check if matrices have converged.
\IF{Matrices converge according to Def. \ref{def:sufficien_training_points}}
\STATE Update the \textsf{StreaMRAK } regression model $\widetilde{f}^{(L)}$, by including the correction term $s^{(l)}$ into the Laplacian pyramid, as described in Section \ref{section:TheLaplacianPyramid}. Let $L=l$ and update $l=l+1$.
\ENDIF
\ENDWHILE
\end{algorithmic}
\label{alg:PseudoCode_StreaMRAK}
\end{algorithm}
\begin{algorithm}[htb]
\caption{\textsc{Update\_CoverFraction}(node $p$, string s)}
\begin{algorithmic}[1]
\IF{s="No parent found"}
\STATE Update covering fraction of $p$ with $\Fc\Ff(p) = (1-\alpha)\Fc\Ff(p)$
\ELSIF{s= "parent found"}
\STATE Update covering fraction of $p$ with $\Fc\Ff(p) = (1-\alpha)\Fc\Ff(p) + \alpha$
\ENDIF
\end{algorithmic}
\label{alg:updateCoverFraction}
\end{algorithm}
\FloatBarrier
\section*{Appendix B. Preparatory material}
\label{appendixB}
\renewcommand{\theequation}{B.\arabic{equation}}
\renewcommand{\thetheorem }{B.\arabic{theorem}}
\setcounter{equation}{0}
\setcounter{theorem}{0}
We offer preparatory material on the damped cover-tree and kernel methods.
\subsection*{B.1 Preparatory material on the damped cover-tree}
This section shows how the recursive formula in Eq. \eqref{eq:estimator_of_covering_fraction} approximates a weighted average of the outcomes of the last $N$ random trials, where the trials are as described in Section \ref{subsection:DCT_construction}.
By expanding Eq. \eqref{eq:estimator_of_covering_fraction} we have $ (\Fc\Ff(p))_t = (1-\alpha)^t(\Fc\Ff(p))_1 + \alpha \sum_{i=1}^{t-1}(1-\alpha)^i\mathbbm{1}_{\mathcal{B}_c}({\boldsymbol{\mathsf{x}}}_{t-i})$. With $\alpha=1/N$, since $(1-\frac{1}{N})^N\approx 1/e$, the first term becomes negligible when $t \gg N$. Similarly, all terms with $i > N$ in the sum become negligible. This leaves,
\begin{equation*}
(\Fc\Ff(p))_t \approx \frac{1}{N}\sum_{i=1}^N\bigg(1-\frac{1}{N}\bigg)^i\mathbbm{1}_{\mathcal{B}_c}({\boldsymbol{\mathsf{x}}}_{t-i})
\end{equation*}
which is a weighted average of the outcomes of the last $N$ draws, as claimed.
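As a numerical sanity check of this claim (a sketch in which the success rate of the Bernoulli trials is an assumed value), the recursion with $\alpha=1/N$ indeed tracks the empirical frequency of the most recent outcomes:

```python
import random

def ema_update(cf, covered, alpha):
    # (CF(p))_t = (1 - alpha) * (CF(p))_{t-1} + alpha * 1_{covered}
    return (1.0 - alpha) * cf + (alpha if covered else 0.0)

random.seed(0)
N = 50
alpha = 1.0 / N
draws = [random.random() < 0.7 for _ in range(5000)]  # i.i.d. trials, assumed rate 0.7

cf = 0.0
for d in draws:
    cf = ema_update(cf, d, alpha)

window_avg = sum(draws[-N:]) / N   # plain average of the last N outcomes
print(cf, window_avg)
```

After a burn-in of a few multiples of $N$, the damped estimate and the plain window average both concentrate around the underlying rate.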
\begin{remark}
We mention a weakness of the estimator in Eq. \eqref{eq:estimator_of_covering_fraction}. As follows from Algorithm \ref{alg:DampedCoverTree}, every time a new point ${\boldsymbol{\mathsf{x}}}$ is not covered by the existing children, a new child is added. This consequently updates $\CB_c$, causing the posterior distribution $\text{Prob}(\mathbbm{1}_{\CB_c}({\boldsymbol{\mathsf{x}}})=0|{\boldsymbol{\mathsf{x}}})$ to change every time $\mathbbm{1}_{\CB_c}({\boldsymbol{\mathsf{x}}})=0$.
\label{remark:weakness_with_cf_estimator}
\end{remark}
\subsection*{B.2 Preparatory material on Kernel methods}
Kernel methods in the context of reproducing kernel Hilbert spaces (RKHS) offer a powerful approach to machine learning with a well-established mathematical foundation \cite{hofmann2008kernel, scholkopf2002learning}. In this paper we consider an input space $\CX \subset \bbR^D$, a corresponding target space $\CY \subset \bbR$ and let $\rho$ be the probability distribution on $\CX\times\CY$. Furthermore, we assume an RKHS $\CH_k$ generated by a positive definite kernel $k:\CX\times\CX\rightarrow\bbR$. In other words, the eigenvalues $\sigma_1,\dots,\sigma_n$ of the corresponding kernel matrix ${\mathbf{K}}_{nn}=(k({\boldsymbol{\mathsf{x}}}_i,{\boldsymbol{\mathsf{x}}}_j))_{i,j=1}^n$ satisfy $\sigma_i > 0$ for all $i\in [n]$. In this setting the inner product between two feature vectors $\phi({\boldsymbol{\mathsf{x}}}), \phi({\boldsymbol{\mathsf{x}}}^\prime) \in \CH_k$ satisfies the property that $\DP{\phi({\boldsymbol{\mathsf{x}}})}{\phi({\boldsymbol{\mathsf{x}}}^\prime)}_{\CH_k} = k({\boldsymbol{\mathsf{x}}}, {\boldsymbol{\mathsf{x}}}^\prime)$. This relation, known as the "kernel trick" \cite{aiserman1964theoretical, boser1992training}, effectively circumvents the need for explicit construction of non-linear mappings $\phi$.
Given a training set $\{({\boldsymbol{\mathsf{x}}}_i,y_i):i\in[n]\}$ sampled according to $\rho$ with $\Gamma_n=\{{\boldsymbol{\mathsf{x}}}_i:i\in[n]\}$, we formulate the kernel ridge regression (KRR) problem as
\begin{equation}
\widehat f_{n,\lambda} =\argmin_{f\in\widehat\CH_n} \frac{1}{n}\sum_{i=1}^n (f({\boldsymbol{\mathsf{x}}}_i)-y_i)^2 + \lambda \N{f}_\CH^2,
\label{eq:KRR_argmin_formulation}
\end{equation}
where $\lambda>0$ is a regularisation parameter and $\widehat\CH_n=\overline{\textrm{span}}\{k(\cdot,{\boldsymbol{\mathsf{x}}}_i):i\in[n]\}$ is a finite-dimensional subspace of $\CH_k$. What is more, for all $f\in\widehat\CH_n$ the Representer theorem \cite{kimeldorf1970correspondence, scholkopf2001generalized} guarantees that there exist coefficients $\alpha_1,\ldots,\alpha_n$ such that the solution to Eq. \eqref{eq:KRR_argmin_formulation} is of the form
\begin{equation*}
f({\boldsymbol{\mathsf{x}}}) = \sum_{i=1}^n \alpha_i k({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{x}}}_i).
\end{equation*}
Computing the KRR estimator is therefore reduced to solving the linear system
\begin{equation*}
({\mathbf{K}}_{nn}+\lambda n{\mathbf{I}}_n)\Valpha = {\boldsymbol{\mathsf{y}}},
\end{equation*}
where ${\boldsymbol{\mathsf{y}}}=(y_1,\ldots,y_n)^\top$, $\Valpha=(\alpha_1,\ldots,\alpha_n)^\top$, and $[{\mathbf{K}}_{nn}]_{ij}=k({\boldsymbol{\mathsf{x}}}_i,{\boldsymbol{\mathsf{x}}}_j)$.
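For concreteness, the KRR estimator above can be computed in a few lines. The sketch below assumes NumPy is available; the Gaussian bandwidth, ridge parameter, and toy target are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, r = 80, 1e-4, 0.5
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0])                      # noiseless toy target

def gauss(A, B, r):
    # Gaussian kernel matrix k(a, b) = exp(-||a - b||^2 / (2 r^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * r * r))

# Solve (K_nn + lambda * n * I) alpha = y
K = gauss(X, X, r)
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)

def f_hat(Xq):
    # f(x) = sum_i alpha_i k(x, x_i)
    return gauss(Xq, X, r) @ alpha

Xq = np.linspace(-0.8, 0.8, 9).reshape(-1, 1)
err = np.abs(f_hat(Xq) - np.sin(3 * Xq[:, 0])).max()
print(err)
```

With a small ridge term the fitted function tracks the smooth target closely on the interior of the domain.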
\section*{Appendix C. Proofs and definitions}
\label{appendixC}
\renewcommand{\theequation}{C.\arabic{equation}}
\renewcommand{\thetheorem }{C.\arabic{theorem}}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\begin{lemma}
Consider a domain $\CX\subset \bbR^D$, a ball $\CB({\boldsymbol{\mathsf{x}}}_p, r)\subset \CX$ and let $S=\{{\boldsymbol{\mathsf{x}}}_i,\,{\boldsymbol{\mathsf{x}}}_j\in \CB({\boldsymbol{\mathsf{x}}}_p,r) | \|{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j\|\geq \delta \, \text{for}\,\, i\neq j \}$. Furthermore, let the doubling dimension of the set $S$ be $\texttt{ddim}\coloneqq\texttt{ddim}(S, r)$. We
let $c_d\coloneqq |S|$ when $\Fc\Ff(p) = 1$. We then have $ 2^{\texttt{ddim}-1} \leq c_d \leq 5^\texttt{ddim}$.
\label{lemma:Bound_on_c_D}
\end{lemma}
\begin{proof}
The upper bound on $c_d$ follows from Lemma \ref{lemma:upperBoundOnNumberOfPointsInBall} with $r=r_0$ and $\delta=r_0/2$. The lower bound follows from the definition of the doubling dimension, Def. \ref{def:doubling_dim_at_range}.
\end{proof}
\begin{lemma} Let ${\boldsymbol{\mathsf{d}}}^{(l)}$ be the residual at level $l$ as defined in Eq. \eqref{eq:residual_at_level}. We then have,
\begin{equation*}
{\boldsymbol{\mathsf{d}}}^{(l+1)} = ({\mathbf{I}}-{\mathbf{K}}_{nn}^{(l)}({\mathbf{K}}_{nn}^{(l)}+\lambda n{\mathbf{I}})^{-1}){\boldsymbol{\mathsf{d}}}^{(l)}
\end{equation*}
\label{lemma:residual_expression}
\end{lemma}
\begin{proof}
Denote ${\boldsymbol{\mathsf{s}}}^{(l)}=s^{(l)}([{\boldsymbol{\mathsf{x}}}_n])$, and note that ${\boldsymbol{\mathsf{s}}}^{(l)}={\mathbf{K}}_{nn}^{(l)}({\mathbf{K}}_{nn}^{(l)}+\lambda n{\mathbf{I}})^{-1}{\boldsymbol{\mathsf{d}}}^{(l)}$.
For $l=1$, we have
\begin{align*}
{\boldsymbol{\mathsf{d}}}^{(1)}&={\boldsymbol{\mathsf{y}}}-{\boldsymbol{\mathsf{s}}}^{(0)}={\boldsymbol{\mathsf{y}}}-{\mathbf{K}}_{nn}^{(0)}\Valpha^{(0)}=({\mathbf{I}}-{\mathbf{K}}_{nn}^{(0)}({\mathbf{K}}_{nn}^{(0)}+\lambda n {\mathbf{I}})^{-1}){\boldsymbol{\mathsf{y}}}.
\end{align*}
We proceed by induction. Assume the statement holds for some $l\geq 1$.
We now have
\begin{align*}
{\boldsymbol{\mathsf{d}}}^{(l+1)}={\boldsymbol{\mathsf{y}}} - \sum_{j=0}^l{\boldsymbol{\mathsf{s}}}^{(j)}={\boldsymbol{\mathsf{d}}}^{(l)}-{\boldsymbol{\mathsf{s}}}^{(l)}={\boldsymbol{\mathsf{d}}}^{(l)}-{\mathbf{K}}_{nn}^{(l)}({\mathbf{K}}_{nn}^{(l)}+\lambda n{\mathbf{I}})^{-1}{\boldsymbol{\mathsf{d}}}^{(l)}
=({\mathbf{I}}-{\mathbf{K}}_{nn}^{(l)}({\mathbf{K}}_{nn}^{(l)}+\lambda n{\mathbf{I}})^{-1}){\boldsymbol{\mathsf{d}}}^{(l)}.
\end{align*}
\end{proof}
\subsection*{C.1 Proof of Thm. \ref{thm:LP_KRR_convergence}}
Let ${\mathbf{P}}_{nn}^{(l)}\coloneqq{\mathbf{K}}^{(l)}_{nn}({\mathbf{K}}^{(l)}_{nn}+\lambda n {\mathbf{I}})^{-1}$.
By definition of ${\boldsymbol{\mathsf{d}}}^{(l)}$ and Lemma \ref{lemma:residual_expression} it follows
\begin{equation}
f([{\boldsymbol{\mathsf{x}}}_n])-\widehat{f}^{(l+1)}([{\boldsymbol{\mathsf{x}}}_n])={\boldsymbol{\mathsf{d}}}^{(l+1)}=({\mathbf{I}}-{\mathbf{P}}_{nn}^{(l)}){\boldsymbol{\mathsf{d}}}^{(l)} =({\mathbf{I}}-{\mathbf{P}}_{nn}^{(l)})(f([{\boldsymbol{\mathsf{x}}}_n])-\widehat{f}^{(l)}([{\boldsymbol{\mathsf{x}}}_n])).
\label{eq:residual_relation_before_norm}
\end{equation}
We then have
\begin{equation}
\|\widehat{f}^{(l+1)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])\| \leq \|{\mathbf{I}}-{\mathbf{P}}_{nn}^{(l)}\| \|\widehat{f}^{(l)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])\|.
\label{eq:appendix_res_ident}
\end{equation}
Consider the SVD ${\mathbf{K}}^{(l)}_{nn} = {\mathbf{U}}\VSigma {\mathbf{U}}^\top$ where $\VSigma= \diag{(\sigma_{l,i})}$ and $\sigma_{l,n} \leq \dots \leq \sigma_{l,1}$. We then have
\begin{align}
\begin{split}
\|{\mathbf{I}}-{\mathbf{P}}^{(l)}_{nn}\| &=\bigg\| {\mathbf{U}}\diag{\Big(\frac{n\lambda}{n\lambda+\sigma_{l,i}}\Big)}{\mathbf{U}}^\top\bigg\|=\bigg\|\diag{\Big(\frac{n\lambda}{ n\lambda+\sigma_{l,i}}\Big)}\bigg\| \\
&= \frac{n\lambda}{n\lambda+\sigma_{l,n}}\coloneqq 1-\varepsilon(l),
\end{split}
\label{eq:defining_eps_l}
\end{align}
and Thm. \ref{thm:LP_KRR_convergence} follows recursively from Eq. \eqref{eq:appendix_res_ident} and Eq. \eqref{eq:defining_eps_l}.
\hfill\qedsymbol
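The contraction in Thm. \ref{thm:LP_KRR_convergence} can also be checked numerically. The sketch below (assuming NumPy, with assumed data, bandwidth schedule $r_l = 2^{-l}$, and regularisation) applies the residual recursion of Lemma \ref{lemma:residual_expression} and compares each residual norm against the factor $1-\varepsilon(l) = n\lambda/(n\lambda + \sigma_{l,n})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 40, 1e-3
X = rng.standard_normal((n, 2))
y = rng.standard_normal(n)

def gauss(X, r):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * r * r))

norms, bounds = [np.linalg.norm(y)], []
d = y.copy()
for l in range(4):
    K = gauss(X, 2.0 ** -l)                            # bandwidth r_l = 2^-l
    P = K @ np.linalg.solve(K + lam * n * np.eye(n), np.eye(n))
    d = d - P @ d                                      # d^{(l+1)} = (I - P) d^{(l)}
    sigma_min = np.linalg.eigvalsh(K).min()
    bounds.append(lam * n / (lam * n + sigma_min))     # ||I - P|| = 1 - eps(l)
    norms.append(np.linalg.norm(d))
print(norms)
```

Each level's residual norm is reduced by at least the predicted spectral factor, matching the recursion used in the proof.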
\subsection*{C.2 Proof of Thm. \ref{thm:LP_KRR_convergence_rate}}
To bound the smallest eigenvalue of the kernel matrix $[{\mathbf{K}}^{(l)}_{nn}]_{ij} = \Phi(\|{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j\|)$, namely $\sigma_{l,n}$, we will assume that there exists a lower bound on the minimal distance between any two points ${\boldsymbol{\mathsf{x}}}_i,{\boldsymbol{\mathsf{x}}}_j\in \CX$, namely $\delta\coloneqq\min\limits_{i\neq j\in \CX}\N{{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j}>0$. Consider the Gaussian $\Phi({\boldsymbol{\mathsf{x}}})=\exp(-\beta\|{\boldsymbol{\mathsf{x}}}\|_2^2)$, $\beta>0$, with the Fourier transform $\widehat{\Phi}(\omega) = (\pi/\beta)^{D/2}\exp(-\|\omega\|_2^2/4\beta)$. From \cite[Corollary 12.4]{Wendland2004Scattered} we have the bound
\begin{equation*}
\sigma_{l,n} \geq C_D 2^D (2\beta)^{-D/2} \delta^{-D}\exp(-4M_D^2/(\delta^2\beta)),
\end{equation*}
where
\begin{equation*}
M_D = 12\bigg(\frac{\pi\Gamma^2(D/2+1)}{9}\bigg)^{1/(D+1)} \quad \text{and} \quad
C_D = \frac{1}{2\Gamma(D/2+1)}\bigg(\frac{M_D}{2^{3/2}}\bigg)^D.
\end{equation*}
With $\beta = (\sqrt{2}2^{-l}r_0)^{-2}$ we then have
\begin{align*}
\begin{split}
\sigma_{l,n} &\geq C_D 2^D 2^{-Dl}\bigg(\frac{r_0}{\delta}\bigg)^D\exp\big(-(2\sqrt{2}M_D)^2(r_0/\delta)^2 4^{-l}\big) \\
& = C_{1,D} 2^{-Dl}\exp\big(-C_{2,D} 4^{-l}\big) \coloneqq B(l),
\end{split}
\end{align*}
where we define
\begin{equation*}
C_{1,D} = \frac{1}{2}(6\sqrt{2})^D\Gamma(D/2+1)^{\frac{D-1}{D+1}}\bigg(\frac{\pi}{9}\bigg)^{\frac{D}{D+1}}\bigg(\frac{r_0}{\delta}\bigg)^D \quad \text{and} \quad C_{2,D} = 1152\bigg(\frac{\pi\Gamma^2(D/2+1)}{9}\bigg)^{\frac{2}{D+1}}\bigg(\frac{r_0}{\delta}\bigg)^2.
\end{equation*}
The first bound in Thm. \ref{thm:LP_KRR_convergence_rate} follows from this result. \hfill\qedsymbol
\begin{remark}
In \cite[Thm. 12.3]{Wendland2004Scattered} they also offer an a fortiori bound corresponding to $M_D = 6.38D$, $C_{1,D}=\frac{1}{2}\big(\frac{12.76}{2^{3/2}}\big)^D\big(\frac{D^D}{\Gamma(D/2+1)}\big)\big(\frac{r_0}{\delta}\big)^D$ and $C_{2,D}=(12.76\sqrt{2}D)^2(r_0/\delta)^2$.
\label{remark:fortiori_bound}
\end{remark}
\begin{corollary}
We note that $B(l)$ has a maximum at
\begin{align*}
\begin{split}
l^* &= \frac{1}{2}\log_2\bigg(\frac{C_{2,D}\log 4}{D\log 2}\bigg) =\log_2\bigg(\sqrt{\frac{D}{2}}\bigg(\frac{r_0}{\delta}\bigg)\bigg) + \log_2\bigg(\frac{4M_D}{D}\sqrt{2}\bigg)
\end{split}
\end{align*}
and is monotonically increasing with $l$ on the interval $l\in(0, l^*)$. Furthermore, with the a fortiori expression for $M_D$ from Remark \ref{remark:fortiori_bound} we have
\begin{equation*}
l^* = \log_2\bigg(\sqrt{\frac{D}{2}}\bigg(\frac{r_0}{\delta}\bigg)\bigg) + \log_2\bigg(25.52\sqrt{2}\bigg).
\end{equation*}
\label{corollary:Monotonicaly_increasing}
\end{corollary}
When the level $l$ becomes sufficiently large, the kernel matrix ${\mathbf{K}}^{(l)}_{nn}$ becomes diagonally dominant, and we can therefore bound the eigenvalues using Gershgorin's theorem \cite[Thm. 1.1]{gomez2006more}, which gives
\begin{equation}
|\sigma_{l,i} - [{\mathbf{K}}^{(l)}_{nn}]_{jj}| = |\sigma_{l,i} - 1| < \sum_{\substack{q=1,\\ q\neq j}}^n |[{\mathbf{K}}^{(l)}_{nn}]_{jq}|\quad \text{for}\quad i,j\in [n].
\label{eq:gaurchinThm_bounding_eValues_close_to_1}
\end{equation}
To find a more explicit bound, we analyze the sum on the right-hand side using Lemma \ref{lemma:upperBoundOnNumberOfPointsInBall}.
\begin{lemma} Consider a ball $\CB({\boldsymbol{\mathsf{x}}},r) \subset \mathbb{R}^D$ and let $\delta>0$.
The number of points in any (discrete) set of points within $\CB({\boldsymbol{\mathsf{x}}},r)$ that are at least $\delta$ apart, $S=\{{\boldsymbol{\mathsf{x}}}_i\in \CB({\boldsymbol{\mathsf{x}}},r) | d({\boldsymbol{\mathsf{x}}}_i, {\boldsymbol{\mathsf{x}}}_j)\geq \delta \, \text{for}\,\, i\neq j \}$, is bounded by $ |S| \leq \bigg( \frac{2r}{\delta}+1\bigg)^D$.
\label{lemma:upperBoundOnNumberOfPointsInBall}
\end{lemma}
\begin{proof}
Since the points in $S$ are at least $\delta$ apart, it follows that the balls $\CB({\boldsymbol{\mathsf{x}}}_i, \delta/2)$ are disjoint. Consider now the ball $\CB({\boldsymbol{\mathsf{x}}}, r+\delta/2)$. All of the balls $\CB({\boldsymbol{\mathsf{x}}}_i, \delta/2)$ are entirely contained within $\CB({\boldsymbol{\mathsf{x}}}, r+\delta/2)$. Since the balls $\CB({\boldsymbol{\mathsf{x}}}_i, \delta/2)$ are disjoint, it follows that
\begin{equation*}
|S| \leq \frac{\operatorname*{Vol}\Big(\CB({\boldsymbol{\mathsf{x}}}, r+\delta/2)\Big)}{\operatorname*{Vol}\Big( \CB({\boldsymbol{\mathsf{x}}}_i, \delta/2)\Big)} = \bigg(\frac{2r}{\delta}+1\bigg)^D.
\end{equation*}
\end{proof}
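A quick Monte Carlo illustration of Lemma \ref{lemma:upperBoundOnNumberOfPointsInBall} (the dimension, radius, and separation are assumed values): greedily building a $\delta$-separated set inside $\CB(0, r)$ never exceeds the stated packing bound.

```python
import random, math

random.seed(1)
D, r, delta = 2, 1.0, 0.25

def norm(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Greedily build a delta-separated set S inside the ball B(0, r).
S = []
for _ in range(20000):
    p = [random.uniform(-r, r) for _ in range(D)]
    if norm(p, [0.0] * D) > r:
        continue
    if all(norm(p, q) >= delta for q in S):
        S.append(p)

bound = (2 * r / delta + 1) ** D   # Lemma: |S| <= (2r/delta + 1)^D
print(len(S), bound)
```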
Consider a family of annuli $\{R_t\}_{t=0}^\infty$ where $R_t = \CB({\boldsymbol{\mathsf{x}}}_j, 2^{t+1}\delta) \backslash \CB({\boldsymbol{\mathsf{x}}}_j, 2^t\delta)$.
Inspired by \cite{Leeb2019}, we can interpret the right hand side of Eq. \eqref{eq:gaurchinThm_bounding_eValues_close_to_1} as a sum over $\{R_t\}_{t=0}^\infty$. The entries of ${\mathbf{K}}^{(l)}_{nn}$ are defined as
\begin{equation*}
[{\mathbf{K}}^{(l)}_{nn}]_{ij} = \exp{\bigg(-\frac{\N{{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j}^2}{ 2 r_l^2}\bigg)},\quad \forall i,j\in [n],
\end{equation*}
where $r_l = 2^{-l}r_0$ for $r_0>0$. It follows
\begin{align*}
\begin{split}
\sum_{\substack{q=1,\\ q\neq j}}^n |[{\mathbf{K}}^{(l)}_{nn}]_{jq}| = \sum_{t=0}^{\infty}\sum_{{\boldsymbol{\mathsf{x}}}_q\in R_t} k^{(l)}({\boldsymbol{\mathsf{x}}}_j, {\boldsymbol{\mathsf{x}}}_q) \leq \sum_{t=0}^\infty \bigg(\frac{2^{t+2}\delta}{\delta}+1\bigg)^D
\exp\big(-(2^{t}\delta 2^{-1/2}r_l^{-1})^2\big)
\end{split}
\end{align*}
where for the first factor on the right-hand side we bound the number of summands in each $R_t$ using Lemma \ref{lemma:upperBoundOnNumberOfPointsInBall},
and for the second we use $\|{\boldsymbol{\mathsf{x}}}_q - {\boldsymbol{\mathsf{x}}}_j\| \geq 2^t\delta$ for ${\boldsymbol{\mathsf{x}}}_q\in R_t$.
Note now that for all $T\ge 1$ there exists $C_T>0$ such that $\exp(-r^2)\leq C_T r^{-T}$ holds for all $r>0$.
Such a constant is given by the Lambert W function and satisfies $C_T =\left(\frac{T}{2{\rm e}}\right)^{T/2}$.
Moreover, $2^{t+2}+1 \leq 2^{t+2+\alpha}$, for $\alpha\ge \ln(1+1/4)/\ln(2)$.
Thus,
\begin{align*}
\sum_{\substack{q=1,\\ q\neq j}}^n |[{\mathbf{K}}^{(l)}_{nn}]_{jq}| \leq C_T \left(\frac{r_l}{\delta}\right)^T 2^{(2+\alpha)D+T/2}\sum_{t=0}^\infty 2^{t(D-T)}\leq C_{DT}\left(\frac{r_l}{\delta}\right)^T,
\end{align*}
where using $\sum_{t=0}^\infty 2^{t(D-T)}\leq 2$, which holds for $D-T\leq -1$, we let
\[ 2 \cdot 2^{D(2+\alpha)+T/2} C_T
\le2 \cdot 2^{D(2+\alpha)-T/2(1+1/\ln(2))}T^{T/2}=:C_{DT} ,\]
where we used $\exp(1)\ge 2^{1+1/\ln(2)}$.
We now consider the function
\begin{align*}
\begin{split}
F(T) &\coloneqq 2^{-\frac{T}{2}(1+1/\ln{2})}2^{-lT}T^{\frac{T}{2}}\bigg(\frac{r_0}{\delta}\bigg)^T \\
& = 2^{-\frac{T}{2}(1+1/\ln{2})}2^{-lT}2^{T/2\log_2T} 2^{T\log_2(r_0/\delta)}\\
& = 2^{-T/2(B-\log_2 T)} = 2^{f(T)},
\end{split}
\end{align*}
where $B=1+\frac{1}{\ln 2} + 2l - 2\log_2\bigg(\frac{r_0}{\delta}\bigg)$. The function $F$ is minimized at
\begin{equation*}
T^{*} = 2^{B-1/\ln 2} = 2^{1+1/\ln 2 + 2l -2\log_2(r_0/\delta)-1/\ln 2} = 2\cdot 4^{l-\log_2(r_0/\delta)},
\end{equation*}
such that
\begin{equation*}
F(T^{*}) = 2^{-\frac{T^{*}}{2}(B-\log_2 T^{*})} = 2^{\frac{-T^{*}}{2\ln 2}} = 2^{-\frac{1}{\ln 2}4^{l-\log_2 (r_0/\delta)}}.
\end{equation*}
Inserting this back and with $\alpha=\ln (1+1/4)/\ln 2$, we have
\begin{equation*}
\sigma_{l,n} > 1-\sum_{\substack{q=1,\\ q\neq j}}^n |[{\mathbf{K}}^{(l)}_{nn}]_{jq}| \geq 1 - 2^{1+\frac{1}{\ln{2}}((\ln{(1+1/4)}+2\ln{2}) D - g(l))},\quad g(l) =4^{l-\log_2(r_0/\delta)}.
\end{equation*}
With $C_3=(\ln{(1+1/4)}+2\ln{2})$ this leads to
\begin{equation}
0 < 1-\varepsilon(l) < \bigg(1+\big(1-2^{1+\frac{1}{\ln{2}}(C_3 D - g(l))}\big)/n\lambda\bigg)^{-1}.
\label{eq:garschgorin_bound}
\end{equation}
We note that the bound in Eq. \eqref{eq:garschgorin_bound} holds for $T^{*}>D$ which means that $ l > \log_2(\sqrt{D/2}r_0/\delta)$.\hfill\qedsymbol
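A numerical illustration of the diagonal-dominance argument (assuming NumPy; the grid spacing and bandwidth are assumed values): once $r_l \ll \delta$, the off-diagonal row sums of ${\mathbf{K}}^{(l)}_{nn}$ are tiny and Gershgorin's theorem pins $\sigma_{l,n}$ close to $1$.

```python
import numpy as np

# Assumed setup: delta-separated grid points, bandwidth r_l well below delta.
delta, r_l = 1.0, 0.2
X = np.array([[i, j] for i in range(6) for j in range(5)], dtype=float) * delta
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * r_l ** 2))                    # [K]_jj = 1 on the diagonal
sigma_min = np.linalg.eigvalsh(K).min()
row_sums = (np.abs(K).sum(axis=1) - 1.0).max()      # largest off-diagonal row sum
print(sigma_min, 1.0 - row_sums)
```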
\subsection*{C.3 Proof of Corollary \ref{cor:spectrally_bandlim_res}}
Follows from Eq. \eqref{eq:residual_relation_before_norm}-\eqref{eq:defining_eps_l} with $P^{(l)}_{nn} =P^{(l)}_{nn,k} + \big(P^{(l)}_{nn,k}\big)^\perp $, where $P^{(l)}_{nn,k}$ is the projection on the eigenvectors associated with the $k$ largest eigenvalues and $\big(P^{(l)}_{nn,k}\big)^\perp(\widehat{f}^{(l)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])) = 0$.\hfill\qedsymbol
\section{Introduction}
Machine learning algorithms based on kernel ridge regression (KRR) \cite{scholkopf2002learning} form
an active field of research \cite{rudi2017falkon, Alaoui2014, Zhang2015divide, Avron2017, Burnaev2017}, with applications ranging from time series prediction in finance \cite{Exterkate2016Nonlinear}, parameter inference in dynamical systems \cite{Niu2016}, to pairwise learning \cite{Stock2018comparative}, face recognition \cite{An2007face} and drug estimation and gene analysis in biomedicine \cite{Li2015, Mohapatra2016Microarray}.
This paper develops a streaming variation of KRR using a radial kernel, a new sub-sampling scheme, and a multi-resolution formulation of the learning model.
Many popular data analysis software packages, such as $\mbox{Matlab}^{\mbox{\tiny TM}}$, require loading the entire dataset into memory. While the size of computer memory is growing fast, the size of available data sets is growing much faster, limiting the applicability of \textit{in-memory} methods.~\footnote{$\mbox{Simulink}^{\mbox{\tiny TM}}$, a companion software to $\mbox{Matlab}^{\mbox{\tiny TM}}$, supports streaming but has a much more limited computational model, targeted at signal processing applications.}
Streaming~\cite{muthukrishnan2005data} is a computational model where the input size is much larger than the size of memory. Streaming algorithms read one input item at a time, update their memory, and discard the item. The computer memory is used to store a {\em model} or a {\em sketch} of the overall data distribution, which is orders of magnitude smaller than the data itself. The development of streaming algorithms is experiencing increased popularity in the face of big data applications such as data mining \cite{fan2013mining} and bioinformatics \cite{lan2018survey}, where data sets are typically too large to be kept \textit{in-memory}.
Many big data applications call for non-linear and involved models, and thus, the development of non-parametric and non-linear models is critical for successful learning.
Among the most popular non-parametric learning algorithms are kernel methods, which include well-known learning schemes such as the support vector machine (SVM) and KRR, to name a few. The appeal of kernel methods lies in their strong theoretical foundation \cite{scholkopf2002learning, kivinen2002online}, as well as their ability to map complex problems to a linear space without requiring an explicit mapping. A common class of kernels is that of radial kernels $k({\boldsymbol{\mathsf{x}}},\tilde{\boldsymbol{\mathsf{x}}})=\Phi(\|{\boldsymbol{\mathsf{x}}}- \tilde{\boldsymbol{\mathsf{x}}}\|/r)$ for ${\boldsymbol{\mathsf{x}}}, \tilde{\boldsymbol{\mathsf{x}}}\in\CX\subseteq\bbR^D$ and $r>0$ \cite{scovel2010radial}. An example is the Gaussian kernel, for which the shape parameter $r$ is referred to as the kernel bandwidth. What is more, radial kernels are universal kernels (with a few exceptions \cite{micchelli2006universal}), meaning that they can approximate any bounded continuous function on $\CX$ arbitrarily well. However, in high dimensions kernel methods suffer from the "curse of dimensionality" and require large amounts of training data to converge. Furthermore, the computational complexity, memory requirement, and the number of parameters to learn grow unbounded with the number of training samples, a drawback known as the "curse of kernelization" \cite{wang2012breaking}. In the context of streaming, the prospect of unbounded data streams makes this shortcoming even more detrimental.
Although kernel-based learning schemes are typically formulated as convex optimization problems, which do not require tuning hyper-parameters such as learning rate etc., there is still a need to determine the optimal kernel. For the Gaussian kernel, this amounts to selecting the bandwidth. Classically, an optimal kernel is chosen through batch techniques such as leave-one-out and k-fold cross-validation \cite{loader1999bandwidth, cawley2004fast, arlot2010survey}. However, these approaches are inefficient as they spend significant time evaluating bad kernel hypotheses and often use multiple runs over the data, which is impossible in a streaming setting.
Despite the universality of radial kernels on $\CX\subset\bbR^D$, this only guarantees the convergence of the model in the asymptotic regime and does not provide finite sample bounds.
As a reaction, several works have shown the benefit of combining multiple kernels from a dictionary of kernel hypotheses.
These strategies include multi-kernel learning (MKL) \cite{Zhang2015divide, lanckriet2004learning, bach2004multiple, sonnenburg2006large, buazuavan2012fourier}, multi-scale analysis \cite{bermanis2013multiscale, Rabin2018multiscale}, and the Laplacian pyramid (LP) \cite{Rabin2012, Leeb2019}.
Combining these strategies with a localized kernel gives a frequency and location-based discretization similar to multi-resolution analysis, a well-established concept in signal processing and functional approximation through concepts such as wavelets \cite{graps1995introduction, akansu2010emerging}, diffusion wavelets \cite{coifman2006diffusion, maggioni2008diffusion}, and graph wavelets \cite{hammond2011wavelets, cloninger2021natural, vito2021waveletframes}.
To meet a need for non-linear non-parametric algorithms for streaming data, we propose the \textit{streaming multi-resolution adaptive kernel algorithm} (\textsf{StreaMRAK}) - a computationally and memory-efficient streaming variation of KRR. \textsf{StreaMRAK } is a streaming algorithm that combines a streaming sub-sampling scheme with a multi-resolution kernel selection strategy and adapts the kernel bandwidth $r$ and the sub-sample density to each other over several levels of resolution. Furthermore, \textsf{StreaMRAK } addresses the curse of dimensionality in a novel way, through the sub-sampling scheme and multi-resolution formulation.
\subsection{Setting}
\label{subsection:setting}
We consider a finite sample data-cloud $\CX$, $|\CX|=n$, that is sampled i.i.d. according to a fixed but unknown distribution ${\cal P}$ over $\bbR^D$. The target is a bounded and continuous function $f:\bbR^D \to \bbR$.
We assume that the points in $\CX$ are placed
in a {\em sequence} and that their order is random.~\footnote{The assumption that the sequence is randomly ordered allows us to draw statistical conclusions from prefixes.}
Each instance ${\boldsymbol{\mathsf{x}}}_i\in\CX$, for $i\in[n]$, is paired with a label $y_i$, where $y_i=f({\boldsymbol{\mathsf{x}}}_i)+\varepsilon_i$ and $\varepsilon_i\sim\CN(0,\sigma)$ represents noise. The task of learning is to train a model $\widehat{f}$ that is a good approximation of the target function $f$.
In this work, we think about the intrinsic dimension of $\CX$ as a local quantity, meaning it depends on the region $\CA \subseteq\CX$ and the radius $r$ at which we consider the point cloud.
Rooted in this way of thinking about the intrinsic dimension, \textsf{StreaMRAK } is designed to handle domains where the local intrinsic dimension changes across different regions and resolutions.
To estimate the local intrinsic dimension in a "location and resolution sensitive" manner, we use the concept of the doubling dimension of a set, defined in Def. \ref{def:doubling_dim_at_range}. We note that our definition of the doubling dimension is related to the definition used in \cite{krauthgamer2004navigating, Beygelzimer2006}.
\begin{definition}[Covering number]
Consider a set $\CA$ and a ball $\CB({\boldsymbol{\mathsf{x}}}, r)$, with $r>0$ and ${\boldsymbol{\mathsf{x}}}\in\CA$.
We say that a finite set $\CS\subset\CB({\boldsymbol{\mathsf{x}}},r)$ is a covering of $\CB({\boldsymbol{\mathsf{x}}},r)$ in $\CA$ if $\CA\cap\CB({\boldsymbol{\mathsf{x}}},r)\subset \cup_{{\boldsymbol{\mathsf{x}}}_i\in\CS} \CB({\boldsymbol{\mathsf{x}}}_i, r/2)$.
We define the covering number $\kappa(\CA, {\boldsymbol{\mathsf{x}}}, r)$ as the minimum cardinality of any covering of $\CB({\boldsymbol{\mathsf{x}}},r)$ in $\CA$.
\label{def:covering_number}
\end{definition}
\begin{definition}[Doubling dimension]
The doubling dimension $\texttt{ddim}(\CA, r)$ of a set $\CA$ is defined as $\texttt{ddim}(\CA, r) = \lceil\log_2 \max_{{\boldsymbol{\mathsf{x}}}\in\CA}\kappa(\CA,{\boldsymbol{\mathsf{x}}},r)\rceil$.
For an interval $\CI\subset \bbR_{>0}$ we define the doubling dimension as the least upper bound over $r\in\CI$, that is $\texttt{ddim}(\CA,\CI)=\max_{r\in\CI}\texttt{ddim}(\CA,r)$.
\label{def:doubling_dim_at_range}
\end{definition}
Using Def. \ref{def:doubling_dim_at_range} we say that the intrinsic dimension of $\CX$ changes with the location if there exist $\CA_1,\CA_2\subset\CX$ such that $\texttt{ddim}(\CA_1,r)\neq \texttt{ddim}(\CA_2,r)$ for some $r>0$. Similarly, we say that the intrinsic dimensionality of $\CX$ changes with the resolution, if there exist $r_1 \neq r_2$ such that the doubling dimension $\texttt{ddim}(\CA, r_1) \neq \texttt{ddim}(\CA, r_2)$ for $\CA\subset\CX$.
In Fig. \ref{fig:Examples_on_intrinsic_dim} we consider three examples to provide further insight into the doubling dimension. In Fig. \ref{subfig:change_loc} we see a domain shaped like a dumbbell, where the spheres are high-dimensional, and the bar connecting them is lower-dimensional, showing how the dimension can change with the location. Meanwhile, Fig. \ref{subfig:change_noise} illustrates a lower-dimensional manifold, embedded in $\bbR^3$, with manifold noise $\zeta_m$. We see that when the resolution is sufficiently small, so that $r\approx\zeta_m$, the doubling dimension increases towards the dimension of the ambient space $\bbR^D$. Furthermore, Fig. \ref{subfig:change_scale} shows a point cloud that is locally $2$-dimensional, but is embedded in a $3$-dimensional space. By reducing $r$ we can resolve this lower dimensionality, but if it is reduced further, we would eventually resolve the noise level, and the doubling dimension increases again.
We also mention two special cases.
First, for large enough $r$ any set in $\bbR^D$ has a doubling dimension of at most $D$.
Second, if $\CA$ is a finite set of points in $\bbR^D$ and $r$ is smaller than the minimal distance between two points, then any ball $\CB({\boldsymbol{\mathsf{x}}},r')$ with $r'\le r$ contains at most one point of $\CA$ and can be covered by a single ball. Therefore the doubling dimension of $\CA$ on the range $(0,r]$ is zero. In other words, {\em any} actual (and therefore finite) training set has dimension zero for a small enough $r$, as illustrated in Fig. \ref{subfig:change_scale}.
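For concreteness, the covering number of Def. \ref{def:covering_number} can be estimated empirically by a greedy net construction. The sketch below (plain NumPy; the helper names are ours, not from any reference implementation) upper-bounds $\kappa$ by covering the points inside $\CB({\boldsymbol{\mathsf{x}}},r)$ with greedily chosen balls of radius $r/2$, and takes $\lceil\log_2\kappa\rceil$ as the doubling-dimension estimate:

```python
import numpy as np

def covering_number(points, x, r):
    """Greedy upper bound on kappa(A, x, r): cover the points of A inside
    B(x, r) with balls of radius r/2 centred at greedily chosen points."""
    ball = points[np.linalg.norm(points - x, axis=1) <= r]
    n_centers = 0
    while len(ball) > 0:
        center = ball[0]                                   # pick any uncovered point
        ball = ball[np.linalg.norm(ball - center, axis=1) > r / 2]
        n_centers += 1
    return max(n_centers, 1)

def doubling_dimension(points, x, r):
    """ddim(A, r) = ceil(log2 kappa(A, x, r)) at the query point x."""
    return int(np.ceil(np.log2(covering_number(points, x, r))))

rng = np.random.default_rng(0)
# A locally 2-d point cloud embedded in R^3 (cf. the change-of-scale example)
disc = np.c_[rng.uniform(-1.0, 1.0, (2000, 2)), np.zeros(2000)]
dd = doubling_dimension(disc, np.zeros(3), r=0.5)          # small, reflecting d = 2
```

At very small $r$ each ball holds a single point and the estimate drops to zero, matching the finite-sample remark above.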
\input{Figures/1-Introduction/Fig1/Fig1}
As an example of how the intrinsic dimension might change with respect to regions and resolutions, we consider a double pendulum system, a well-known chaotic system that depends heavily on its initial conditions \cite{shinbrot1992chaos}.
Systems with multiple pendulum elements are well known in engineering applications such as mechanical and robotic systems with several joints and are studied for their chaotic properties \cite{marcelo2016chaos}.
Let $\omega_i=\dot{\theta_i}$, for $i=1,2$, be angular velocities. We initialize $500$ pendulums with $\theta_1,\theta_2=180$, $\omega_1=10^{-2}$ and $\omega_2=10^{-1}$, measured in degrees, and perturb the angular velocities by $\varepsilon_i \sim \CN(0, 10^{-2}|\omega_i|)$. We iterate the system for $T=500$ time steps and let ${\boldsymbol{\mathsf{s}}}^{(i)}_t=[\theta_1(t),\, \theta_2(t),\, \omega_1(t),\, \omega_2(t)]\in \bbR^4$ be the state of pendulum $i$ at $t\in \bbN$. We think of each state as a training point in a point cloud $\CX\subset\bbR^4$, where for instance the target function can be ${\boldsymbol{\mathsf{s}}}^{(i)}_{t+\Delta} = f({\boldsymbol{\mathsf{s}}}^{(i)}_t)$, with $\Delta\in\bbN$.
In Fig. \ref{subfig:phaseDiagramIntroduction_2d} we visualize the trajectory of four pendulums $P_0, P_1, P_2, P_3$, for which the trajectories are indistinguishable until a bifurcation occurs around $T=300$ time steps, and the trajectories start to diverge. In Fig. \ref{subfig:phaseDiagramIntroduction_3d_before_bifurcation} and Fig. \ref{subfig:phaseDiagramIntroduction_3d} we zoom in on the trajectory of all $500$ pendulums in regions before and after the bifurcation. These two regions, $\CA_1$ and $\CA_2$, are indicated by a blue and red circle, respectively, in Fig. \ref{subfig:phaseDiagramIntroduction_2d}. From the figures, it is clear that learning the trajectory in $\CA_1$ is significantly easier than in $\CA_2$, where learning the trajectory is more affected by the curse of dimensionality.
We also note that the trajectories that remain close after the bifurcation will remain so until a new bifurcation occurs. The take-home message is that predicting the trajectory of a double pendulum is a hard problem because of a few regions where the intrinsic dimension blows up and makes the prediction hard. However, between these regions, the trajectory is easier to describe.
In this spirit, this work aims to reduce the effort spent on regions like $\CA_2$, where the training data exhibit a high intrinsic dimension, and to focus on regions like $\CA_1$, where the data have a lower intrinsic dimension, i.e. are more well-behaved.
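A point cloud of the kind described above can be generated as follows. The sketch assumes a user-supplied \texttt{step} function implementing one integration step of the double-pendulum dynamics, and reads $\CN(0, 10^{-2}|\omega_i|)$ with its second argument as a variance; both are assumptions of this illustration, not prescriptions from the experiment.

```python
import numpy as np

def make_point_cloud(step, n_pendulums=500, T=500, rng=None):
    """Perturb the initial angular velocities and iterate the dynamics,
    collecting states s_t = [theta1, theta2, omega1, omega2] in R^4.
    `step` is a placeholder for one integration step of the pendulum."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta0 = np.array([180.0, 180.0])                # degrees
    omega0 = np.array([1e-2, 1e-1])
    states = np.empty((n_pendulums, T, 4))
    for i in range(n_pendulums):
        # assumed: eps_i ~ N(0, 1e-2 * |omega_i|), second argument a variance
        eps = rng.normal(0.0, np.sqrt(1e-2 * np.abs(omega0)))
        s = np.concatenate([theta0, omega0 + eps])
        for t in range(T):
            states[i, t] = s
            s = step(s)
    return states

def training_pairs(states, delta=1):
    """Input/target pairs for learning s_{t+delta} = f(s_t)."""
    X = states[:, :-delta].reshape(-1, 4)
    Y = states[:, delta:].reshape(-1, 4)
    return X, Y
```

Each row of \texttt{X} is a training point in the point cloud $\CX\subset\bbR^4$, with the corresponding row of \texttt{Y} as its target.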
\input{Figures/1-Introduction/Fig2/Fig2}
\subsection{Contribution and comparison to related work}
Contributions of this work can be divided into three components.
\begin{enumerate}[label=(C\arabic*)]
\item \label{contribution_LP} A multi-resolution variation of the state-of-the-art KRR solver \textsf{FALKON } \cite{rudi2017falkon}, using the LP, which refines the predictions at each level of resolution by regressing on the errors from the previous level.
\item \label{contribution_subSamp} A novel sub-sampling scheme for kernel methods, tailored for use in combination with the LP, that can handle the curse of dimensionality and does not require the data to be \textit{in-memory}.
\item \label{contribution_Stream} Development of a streaming variation of \textsf{FALKON}, where the time and memory requirements depend on the doubling dimensionality and the level of resolution, instead of the number of training points; see Props. \ref{prop:memory_streamrak_and_DCT}-\ref{prop:Time_req_solver}.
\end{enumerate}
In the following, we give further details on these contributions and compare them to related work.
The computational backbone of \textsf{StreaMRAK } is based on the state-of-the-art KRR solver \textsf{FALKON } \cite{rudi2017falkon}, which among other things combines sub-sampling and preconditioning to process large data sets efficiently.
However, \textsf{FALKON } relies on selecting an optimal kernel bandwidth, which can be inefficient and hard within a streaming setting.
Inspired by the success of existing multi-resolution approaches \cite{Zhang2015divide, lanckriet2004learning, bach2004multiple, sonnenburg2006large, buazuavan2012fourier,bermanis2013multiscale}, our first contribution \ref{contribution_LP} addresses the issue of selecting an optimal kernel bandwidth by introducing a multi-resolution reformulation of \textsf{FALKON } using a variation of the LP.
The LP scheme originated in image representation \cite{Burt2009} and was introduced to machine learning by \cite{Rabin2012} for efficient data representation.
The LP refines the prediction at several levels of resolution, and at each level, reduces the bandwidth used at the previous level by a constant factor.
In doing so, the selection of a single optimal bandwidth is avoided, and the resulting approach has greater flexibility.
The LP is similar to ideas in wavelet analysis that have shown great success in numerous applications.
However, typical wavelet architectures \cite{graps1995introduction, akansu2010emerging,coifman2006diffusion, maggioni2008diffusion, hammond2011wavelets, cloninger2021natural} require upfront construction of a wavelet basis, which is not compatible with a data-adaptive kernel. In this work, we aim to show that the LP is a viable multi-resolution scheme and can be adapted to the streaming setting. Furthermore, we experimentally show that it significantly improves the estimation accuracy, and inspired by \cite{Leeb2019} we provide convergence bounds for the LP in the context of radial kernels and KRR.
Let us now discuss our second contribution \ref{contribution_subSamp}.
\textsf{FALKON } addresses the curse of kernelization by combining Nystr\"{o}m sub-sampling, conjugate gradient, and preconditioning, and achieves time and memory requirements of $\mathcal{O}(n\sqrt{n})$ and $\mathcal{O}(n)$ respectively, where $n$ is the number of samples.
In recent years there have been several efforts to address the curse of kernelization in similar ways through sub-sampling techniques such as sketching \cite{Alaoui2014, Avron2017}, randomized features \cite{rahimi2008random, le2013fastfood, yang2015carte, zhang2021sigma} and Nystr\"{o}m sub-sampling \cite{williams2001using, smola2000sparse, cloninger2017prediction, ma2018power, ma2018kernel}. However, despite their successes, these techniques are in principle \textit{in-memory} type algorithms since they require access to the training data in advance of the training and are not optimized for streaming.
Furthermore, \textsf{FALKON } selects the sub-samples uniformly over the input domain $\CX$, and the LP uses the same training set for each level. However, when learning with a radial kernel, the density of samples should be related to the bandwidth of the kernel. Otherwise, a too-small bandwidth will lead to bad interpolation properties, while a too-large bandwidth gives an ill-conditioned system \cite{bermanis2013multiscale}.
Since the LP scheme reduces the kernel bandwidth at each level of resolution, it would be problematic to use the same sub-sample density. Furthermore, due to the curse of dimensionality, the covering number increases exponentially with the doubling dimension.
Therefore, if doubling dimensionality varies across different regions of the domain $\CX$, as illustrated by Fig. \ref{subfig:change_loc}, then the number of sub-samples necessary to maintain the density for a given bandwidth will also vary.
Our second contribution \ref{contribution_subSamp} provides an alternative sub-sampling strategy, which adapts the sub-sampling density to the kernel bandwidth. This strategy is similar to tuning the kernel bandwidth to the data, which is used in online algorithms to avoid the use of cross-validation \cite{Zhang2020, chen2016kernel, fan2016kernel}. Although expensive to calculate, especially in high dimensions, a similar strategy is used in graph-based methods, where the kernel bandwidth is adapted to the $k$-nearest neighbor distance of each training point \cite{cheng2020convergence}.
The sub-sampling strategy we propose is based on a damped cover-tree (DCT), which is a modified version of the cover-tree (CT) \cite{Beygelzimer2006}. The CT is a tree-based data structure originally intended for nearest neighbor search in metric spaces. It is closely related to navigation nets \cite{krauthgamer2004navigating}, but with improved space requirements: $\mathcal{O}(n)$ in memory and $\mathcal{O}(c^6n\log n)$ in time. In this work, we show how the CT structure can be used to organize the samples hierarchically with increasing density for each new level in the tree and how it can adapt the sub-sample density to kernel bandwidths in the LP.
However, the problem with an adaptive sub-sampling strategy is its vulnerability to the curse of dimensionality. In regions of high doubling dimensions, the number of samples to achieve a certain density increases exponentially, as quantified by Def. \ref{def:doubling_dim_at_range}. This means that the number of sub-samples from the CT will quickly grow too large for efficient computing. The danger is to waste resources on samples from subsets and levels where the doubling dimension is so large that good interpolation cannot be achieved for any viable sample sizes. This would only serve to slow down the computation and not increase the precision.
Due to this, the DCT introduces a damping property, which gradually suppresses the selection of sub-samples where the doubling dimensionality is large. This has the additional advantage of allowing more sub-samples to be chosen from regions where the doubling dimensionality is small. Thus, the DCT can diminish the impact of the curse of dimensionality. Furthermore, the DCT can be built continuously as new samples come in, making it ideal for a streaming computational model.
Our third contribution \ref{contribution_Stream}, relies on the changes implemented with \ref{contribution_LP} and \ref{contribution_subSamp}. In particular, \textsf{StreaMRAK } can operate as a streaming algorithm and efficiently organizes the sub-samples as it builds the multi-resolution kernel. Furthermore, the sub-sampling and kernel construction allows for continuous integration of new training points into the kernel matrix. Moreover, the DCT, the multi-resolution construction, and the KRR solver can all be multi-threaded and parallelized.
\subsection{Organization of the paper}
The paper is organized as follows. Section \ref{section:kernel_methods} introduces kernel methods and the \textsf{FALKON } algorithm, as well as the LP. Section \ref{sect:Adaptive sub-sampling} introduces the adaptive sub-sampling scheme and the DCT. \textsf{StreaMRAK } is described in Section \ref{section:TheStreaMRAKalgo} and an analysis of the algorithm is given in Section \ref{section:analysis}. Finally, Section \ref{section:ExperimentsMain} presents several numerical experiments and Section \ref{section:Outlook} gives an outlook for further work. The Appendix includes further mathematical background and the proofs.
\subsection{Notation}
We denote vectors ${\boldsymbol{\mathsf{a}}} \in \bbR^D$ with boldface and matrices $ {\mathbf{A}} \in \bbR^{n\times m}$ with bold uppercase, and ${\mathbf{A}}^\top$ denotes the matrix transpose. We use ${\mathbf{K}}_{nm}$ for kernel matrices, where the dimensionality is indicated by the subscripts. We reserve $n$ for the number of training samples and $m$ for the number of sub-samples.
The $ij$-th element of a kernel matrix is denoted $[{\mathbf{K}}_{nm}]_{ij}$, while for other matrices we use ${\mathbf{A}}_{ij}$. The notation $a_i$ indicates $i$-th element of a vector ${\boldsymbol{\mathsf{a}}}$.
Furthermore, we use $f([{\boldsymbol{\mathsf{x}}}_n])$ to denote $(f({\boldsymbol{\mathsf{x}}}_1),\ldots, f({\boldsymbol{\mathsf{x}}}_n))^\top \in \bbR^n$, and $[m]$ to denote $\{i\}^{m}_{i=1}$. The notation ${\boldsymbol{\mathsf{x}}}_i$ indicates the $i$-th training example.
We use ${\boldsymbol{\mathsf{a}}}^{(l)}$ and ${\mathbf{A}}^{(l)}$, where $l$ refers to a specific level in the LP and the DCT. We take $\|\cdot\|$ to be the $L^2$ norm and $\|\cdot\|_{\CH}$ to be the RKHS norm.
We denote the intrinsic dimension of a manifold with $d$ and the dimension of the embedding with $D$.
By $\mathbbm{1}_{\CS}({\boldsymbol{\mathsf{x}}})$ we denote the indicator function, which evaluates to $1$ if ${\boldsymbol{\mathsf{x}}}\in\CS$ and $0$ otherwise, of a set $\CS\subset \bbR^D$.
\section{Kernel methods}
\label{section:kernel_methods}
Consider a positive definite kernel $k:\CX\times\CX\rightarrow\bbR$, defined on an input space $\CX\subset\bbR^D$.
Given data $\{({\boldsymbol{\mathsf{x}}}_i,y_i):i\in[n]\}$ of samples from $\CX\times\bbR$, kernel ridge regression computes an estimator by minimizing
\begin{align*}
\widehat f_{n,\lambda} =\argmin_{f\in\CH} \frac{1}{n}\sum_{i=1}^n (f({\boldsymbol{\mathsf{x}}}_i)-y_i)^2 + \lambda \N{f}_\CH^2,
\end{align*}
where $\CH$ is the Hilbert space induced by the kernel.
By the representer theorem, the problem reduces to the linear system
\begin{align}\label{eqn:KRR}
({\mathbf{K}}_{nn}+\lambda n {\mathbf{I}}_n)\Valpha = {\boldsymbol{\mathsf{y}}}, \text{ for } [{\mathbf{K}}_{nn}]_{ij}=k({\boldsymbol{\mathsf{x}}}_i,{\boldsymbol{\mathsf{x}}}_j),\text{ and } {\boldsymbol{\mathsf{y}}}=(y_1,\ldots,y_n)^\top.
\end{align}
Coefficients $\Valpha=(\alpha_1,\ldots,\alpha_n)^\top$ define the estimator by $f({\boldsymbol{\mathsf{x}}})=\sum_{i=1}^n \alpha_ik({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{x}}}_i)$.
However, solving \eqref{eqn:KRR} using traditional direct methods has a time complexity of $\CO(n^3)$ and a memory complexity of $\CO(n^2)$, which can be costly for large $n$ \cite{rudi2017falkon}.
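As a concrete reference point, \eqref{eqn:KRR} can be solved directly for moderate $n$. The following NumPy sketch (using the Gaussian kernel adopted later in the paper; function names are ours) illustrates the baseline that \textsf{FALKON } accelerates:

```python
import numpy as np

def gaussian_kernel(X, Z, r):
    """k(x, z) = exp(-||x - z||^2 / (2 r^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * r * r))

def krr_fit(X, y, lam, r):
    """Solve (K_nn + lambda * n * I) alpha = y, i.e. the system above."""
    n = len(X)
    K = gaussian_kernel(X, X, r)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(Xtest, Xtrain, alpha, r):
    """f(x) = sum_i alpha_i k(x, x_i)."""
    return gaussian_kernel(Xtest, Xtrain, r) @ alpha
```

For a smooth scalar target (e.g. $\sin$ on $[0,2\pi]$ with a few hundred samples), a small $\lambda$ drives the training error close to zero, at the cubic cost noted above.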
\textsf{FALKON } \cite{rudi2017falkon} addresses this issue by sub-sampling the columns of ${\mathbf{K}}_{nn}$, which reduces the effective complexity while maintaining accuracy.
Namely, denote $\Gamma_n=\{{\boldsymbol{\mathsf{x}}}_1,\ldots,{\boldsymbol{\mathsf{x}}}_n\}$ and for $m\ll n$ let $\widetilde\Gamma_m=\{\widetilde{{\boldsymbol{\mathsf{x}}}}_1,\ldots, \widetilde{{\boldsymbol{\mathsf{x}}}}_m\}$ be Nystr\"{o}m centers (i.e. a randomly selected subset of $\Gamma_n$).
Minimizing
\begin{equation}
\widehat f_{n,m,\lambda} =\argmin_{f\in\widetilde\CH_m} \frac{1}{n}\sum_{i=1}^n (f({\boldsymbol{\mathsf{x}}}_i)-y_i)^2 + \lambda \N{f}_\CH^2,
\end{equation}
where $\widetilde\CH_m=\cspan{k(\cdot, \widetilde{{\boldsymbol{\mathsf{x}}}}_j):j\in[m]}$, leads to a linear system
\begin{align*}
{\mathbf{H}} \widetilde{\Valpha} = {\boldsymbol{\mathsf{z}}}, \text{ for } {\mathbf{H}} = {\mathbf{K}}_{nm}^\top{\mathbf{K}}_{nm} + \lambda n{\mathbf{K}}_{mm}, \text{ and } {\boldsymbol{\mathsf{z}}} ={\mathbf{K}}_{nm}{\boldsymbol{\mathsf{y}}}.
\end{align*}
Here $[{\mathbf{K}}_{nm}]_{ij}=k({\boldsymbol{\mathsf{x}}}_i, \widetilde{{\boldsymbol{\mathsf{x}}}}_j)\in\bbR^{n\times m}$ is the column-subsampled matrix and the estimator is given by $\widehat f_{n,m,\lambda}({\boldsymbol{\mathsf{x}}})=\sum_{j=1}^m \widetilde{\alpha}_j k({\boldsymbol{\mathsf{x}}},\widetilde{{\boldsymbol{\mathsf{x}}}}_j)$.
To further reduce the time complexity \textsf{FALKON } uses a suitable preconditioner to reduce the condition number.
The preconditioner is defined as $ {\mathbf{B}}\VB^\top = (n/m {\mathbf{K}}_{mm}^2+\lambda n{\mathbf{K}}_{mm})^{-1}$,
which is a natural (lower complexity) approximation of the ideal preconditioner ${\mathbf{A}}\VA^\top = ({\mathbf{K}}_{nm}^\top{\mathbf{K}}_{nm}+\lambda n{\mathbf{K}}_{mm})^{-1}$.
We now solve for $\widetilde{\Valpha}$ from the system of equations
\begin{equation}
{\mathbf{B}}^\top{\mathbf{H}}{\mathbf{B}}\Vbeta = {\mathbf{B}}^\top{\boldsymbol{\mathsf{z}}}, \text{ for } {\mathbf{H}} = {\mathbf{K}}_{nm}^\top{\mathbf{K}}_{nm} + \lambda n{\mathbf{K}}_{mm}, \text{ } {\boldsymbol{\mathsf{z}}} ={\mathbf{K}}_{nm}{\boldsymbol{\mathsf{y}}}, \text{ and } \widetilde{\Valpha}={\mathbf{B}}\Vbeta.
\label{eq:Falkon_system}
\end{equation}
This is solved iteratively, using conjugate gradients with early stopping.
Choosing $m=\CO(\sqrt{n})$ still ensures optimal generalization (i.e. the same as exact KRR), while reducing the computational complexity to $\CO(n\sqrt{n})$.
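The steps above can be condensed into a self-contained sketch: Nystr\"{o}m centers, the preconditioner ${\mathbf{B}}$ built from two Cholesky factors, and a hand-rolled conjugate-gradient loop. This is a simplified illustration of \eqref{eq:Falkon_system} (with a small jitter added for numerical stability, an assumption on our part), not the reference \textsf{FALKON } implementation:

```python
import numpy as np

def gaussian_kernel(X, Z, r):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * r * r))

def falkon_fit(X, y, m, lam, r, cg_iters=40, rng=None):
    """Nystroem sub-sampling + preconditioned conjugate gradients.
    B satisfies B B^T = (n/m Kmm^2 + lam n Kmm)^(-1) via Kmm = T^T T."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X)
    centers = X[rng.choice(n, size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, r)
    Kmm = gaussian_kernel(centers, centers, r)
    T = np.linalg.cholesky(Kmm + 1e-6 * np.eye(m)).T      # jitter: assumption
    A = np.linalg.cholesky(T @ T.T / m + lam * np.eye(m)).T
    B = np.linalg.inv(T) @ np.linalg.inv(A) / np.sqrt(n)
    H = Knm.T @ Knm + lam * n * Kmm
    M, b = B.T @ H @ B, B.T @ (Knm.T @ y)
    beta, res = np.zeros(m), b.copy()
    p = res.copy()
    for _ in range(cg_iters):                             # plain CG, early stopping
        Mp = M @ p
        step = (res @ res) / (p @ Mp)
        beta = beta + step * p
        res_new = res - step * Mp
        if np.linalg.norm(res_new) < 1e-12:
            break
        p = res_new + ((res_new @ res_new) / (res @ res)) * p
        res = res_new
    return centers, B @ beta                              # alpha_tilde = B beta

def falkon_predict(Xt, centers, alpha, r):
    return gaussian_kernel(Xt, centers, r) @ alpha
```

With $m\ll n$ centers the per-iteration cost is dominated by the $n\times m$ products, consistent with the $\CO(n\sqrt{n})$ scaling when $m=\CO(\sqrt{n})$.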
\subsection{Streaming adaptation of \textsf{FALKON }}
Matrices and vectors involved in the linear system in \eqref{eq:Falkon_system} can be separated into two classes: those that depend only on sub-samples in $\widetilde\Gamma_m$; and those (${\mathbf{K}}_{nm}^\top{\mathbf{K}}_{nm}$ and ${\boldsymbol{\mathsf{z}}}$) that also depend on all the training points $\Gamma_n$.
Critically, the terms in both classes have size at most $m\times m$, independent of $n$, which is what allows the reduction in complexity.
Consider now the set of sub-samples $\widetilde\Gamma_m$ to be fixed, and assume new training points, in the form $\{({\boldsymbol{\mathsf{x}}}_q, y_q):q=n+1,\ldots, n+t)\}$, are coming in a stream.
We can then update the second class of terms according to
\begin{align}
\big[ ({\mathbf{K}}_{(n+t)m})^\top {\mathbf{K}}_{(n+t)m} \big]_{ij} &= \big[ ({\mathbf{K}}_{nm})^\top {\mathbf{K}}_{nm} \big]_{ij} + \sum_{q=n+1}^{n+t} k({\boldsymbol{\mathsf{x}}}_q, \widetilde{{\boldsymbol{\mathsf{x}}}}_i)k({\boldsymbol{\mathsf{x}}}_q, \widetilde{{\boldsymbol{\mathsf{x}}}}_j),
\label{eq:updateFormula_KnmTKnm}\\
\big[({\mathbf{K}}_{(n+t)m})^\top {\boldsymbol{\mathsf{y}}}\big]_i&= z_i+\sum_{q=n+1}^{n+t} k({\boldsymbol{\mathsf{x}}}_q, \widetilde{{\boldsymbol{\mathsf{x}}}}_i)y_q.
\label{eq:updateFromualte_Zm}
\end{align}
Thus, only sub-samples $\widetilde\Gamma_m$, matrices $\big({\mathbf{K}}_{nm}\big)^\top{\mathbf{K}}_{nm}$, ${\mathbf{K}}_{mm}$ and ${\boldsymbol{\mathsf{z}}}$, need to be stored.
However, in order to continuously incorporate new training points into Eqs. \eqref{eq:updateFormula_KnmTKnm} and \eqref{eq:updateFromualte_Zm}, sub-samples $\widetilde\Gamma_m$ must be determined in advance.
Whereas this works if all the data is provided beforehand, it cannot be done if the data arrives sequentially.
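Concretely, the bookkeeping of \eqref{eq:updateFormula_KnmTKnm} and \eqref{eq:updateFromualte_Zm} amounts to accumulating two fixed-size arrays. A minimal sketch (hypothetical class and method names, Gaussian kernel assumed):

```python
import numpy as np

class StreamingState:
    """Accumulates K_nm^T K_nm and z = K_nm^T y for a fixed set of
    Nystroem centers; storage is O(m^2), independent of n."""

    def __init__(self, centers, r):
        self.centers, self.r = centers, r
        m = len(centers)
        self.KtK = np.zeros((m, m))
        self.z = np.zeros(m)
        self.n = 0

    def _k(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.r ** 2))

    def update(self, X_batch, y_batch):
        """Fold a batch of new points into K^T K and z."""
        Kb = self._k(X_batch)
        self.KtK += Kb.T @ Kb
        self.z += Kb.T @ y_batch
        self.n += len(X_batch)
```

Feeding the data in batches yields the same matrices as a single pass over all $n$ points, up to floating-point summation order.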
In this work, we address this through a multi-resolution framework. The overall estimator is composed of a sequence of estimators defined at different resolution levels of the domain.
Correspondingly, the set of sub-samples $\widetilde \Gamma_m$ consists of smaller sets $\widetilde \Gamma_{m^{(l)}}^{(l)}$ that correspond to individual levels of resolution.
The sets $\widetilde \Gamma_{m^{(l)}}^{(l)}$ are filled as the data streams in, and once a set for a given level is deemed complete, we proceed with updating \eqref{eq:updateFormula_KnmTKnm} and \eqref{eq:updateFromualte_Zm}.
Further details of how the sets $\widetilde \Gamma_{m^{(l)}}^{(l)}$ are constructed, and the corresponding criteria, are provided in Sections \ref{sect:Adaptive sub-sampling} and \ref{section:TheStreaMRAKalgo}.
We begin by describing the multi-resolution framework of estimators.
\subsection{The Laplacian pyramid}
\label{section:TheLaplacianPyramid}
The LP \cite{Burt2009, Rabin2012} is a multi-resolution regression method for extending a model $\widehat{f}$ to out-of-sample data points ${\boldsymbol{\mathsf{x}}}\in \CX\setminus\Gamma_n$. The LP can be formulated for radial kernels in the form
\begin{equation}
k({\boldsymbol{\mathsf{x}}}_i, {\boldsymbol{\mathsf{x}}}_j) = \Phi\bigg(\frac{\|{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j\|}{r}\bigg),
\label{eq:radialBasisFunction}
\end{equation}
where $r>0$ is a shape parameter that determines the decay of $\Phi$ with respect to $\|{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}_j\|$, see \cite{scovel2010radial}.
The idea underpinning the LP is to approximate the target function sequentially, where at each stage we regress on the errors from the previous stage. In other words, we begin with a rough approximation using a large shape parameter for which $\Phi$ decays slowly and then improve the approximation by fitting the resulting error and reducing the shape parameter.
In the LP, the estimator at level $L\in\bbN$ is defined recursively as
\begin{equation}
\widehat{f}^{(L)}({\boldsymbol{\mathsf{x}}}) = \sum_{l=0}^L s^{(l)}({\boldsymbol{\mathsf{x}}}) = s^{(L)}({\boldsymbol{\mathsf{x}}})+\widehat f^{(L-1)}({\boldsymbol{\mathsf{x}}}),
\label{eq:laplacian_pyramid_model}
\end{equation}
where $\widehat f^{(0)}=s^{(0)}$, and $s^{(l)}({\boldsymbol{\mathsf{x}}})$ is a correction term defined by
\begin{equation}
s^{(l)}({\boldsymbol{\mathsf{x}}}) = \sum_{i=1}^n \alpha^{(l)}_i k^{(l)}({\boldsymbol{\mathsf{x}}},{\boldsymbol{\mathsf{x}}}_i).
\label{eq:LP_approx_at_specific_level}
\end{equation}
The coefficients $\Valpha^{(l)}=(\alpha^{(l)}_1,\ldots,\alpha^{(l)}_n)^\top$ are computed by conducting KRR on the residuals, i.e. errors, from the estimator at the previous level. Namely,
$\Valpha^{(l)} = ({\mathbf{K}}^{(l)}_{nn} + \lambda n {\mathbf{I}})^{-1}{\boldsymbol{\mathsf{d}}}^{(l)}$, where
\begin{equation}
{\boldsymbol{\mathsf{d}}}^{(l)} = \begin{cases}
{\boldsymbol{\mathsf{y}}}, & \text{if}\quad l = 0 \\
{\boldsymbol{\mathsf{y}}}-\widehat{f}^{(l-1)}([{\boldsymbol{\mathsf{x}}}_n]), & \text{otherwise}
\end{cases}.
\label{eq:residual_at_level}
\end{equation}
For a \textsf{FALKON } adaptation of this scheme, we only need to modify how the per-level coefficients are computed.
Following \eqref{eq:Falkon_system} we iteratively solve
\begin{equation}
({\mathbf{B}}^{(l)})^\top {\mathbf{H}}^{(l)} {\mathbf{B}}^{(l)} \Vbeta^{(l)} = ({\mathbf{B}}^{(l)})^\top \big({\mathbf{K}}^{(l)}_{nm}\big)^\top{\boldsymbol{\mathsf{d}}}^{(l)},
\label{eq:FALKONlinearSystem_in_LP}
\end{equation}
where ${\mathbf{B}}^{(l)}$ is the corresponding preconditioner, and ${\mathbf{H}}^{(l)} = ({\mathbf{K}}_{nm}^{(l)})^\top{\mathbf{K}}_{nm}^{(l)} + \lambda n{\mathbf{K}}_{mm}^{(l)}$, and set $\widetilde{\Valpha}^{(l)} = {\mathbf{B}}^{(l)}\Vbeta^{(l)}$.
\begin{remark} In this paper, we construct the kernel matrices ${\mathbf{K}}^{(l)}$ from a particular radial kernel, namely the Gaussian kernel
\begin{equation*}
k^{(l)}({\boldsymbol{\mathsf{x}}}, \widetilde{{\boldsymbol{\mathsf{x}}}}_i) = \exp{\bigg(-\frac{\|{\boldsymbol{\mathsf{x}}}-\widetilde{{\boldsymbol{\mathsf{x}}}}_i\|^2}{2r_l^2}\bigg)},
\end{equation*}
where $r_l>0$ is the shape parameter (the kernel bandwidth) at level $l$.
\label{remark:we_use_radial_basis_kernels}
\end{remark}
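Putting \eqref{eq:laplacian_pyramid_model}-\eqref{eq:residual_at_level} together, the LP can be sketched in a few lines. For brevity, the sketch uses plain KRR at each level in place of the \textsf{FALKON } solve of \eqref{eq:FALKONlinearSystem_in_LP}, with bandwidths $r_l = r_0/2^l$:

```python
import numpy as np

def gaussian_kernel(X, Z, r):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * r * r))

def lp_fit(X, y, L, r0, lam):
    """At level l, regress on the residual d^(l) with bandwidth r_l = r0 / 2^l.
    Plain KRR stands in for the per-level FALKON solve (a simplification)."""
    n = len(X)
    levels, d = [], y.copy()
    for l in range(L + 1):
        r = r0 / 2.0 ** l
        K = gaussian_kernel(X, X, r)
        alpha = np.linalg.solve(K + lam * n * np.eye(n), d)
        levels.append((r, alpha))
        d = d - K @ alpha          # residual passed to the next level
    return levels

def lp_predict(Xt, X, levels):
    """f^(L)(x) = sum_l s^(l)(x)."""
    f = np.zeros(len(Xt))
    for r, alpha in levels:
        f += gaussian_kernel(Xt, X, r) @ alpha
    return f
```

Since each residual map is $\lambda n ({\mathbf{K}}^{(l)}+\lambda n {\mathbf{I}})^{-1}$, whose spectral norm is at most one, the training residual is non-increasing across levels.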
\section{The damped cover tree}
\label{sect:Adaptive sub-sampling}
This work introduces a data-driven sub-sampling method that we call the damped cover-tree (DCT). The DCT is a modification of the cover-tree (CT) \cite{Beygelzimer2006}, a data structure based on partitioning a metric space, initially designed to facilitate nearest neighbor search.
The goal of the DCT is to modify and simplify the CT to allow a viable sub-sampling scheme.
Let $(\CX, \|\cdot\|)$ be a normed space where the input domain $\CX\subset \bbR^D$ is bounded, such that the diameter $r_0=\diam(\CX)$ is finite. The DCT is a tree structure where each node $p$ of the tree is associated with a point ${\boldsymbol{\mathsf{x}}}_p\in \CX$, and which is built sequentially as data points arrive.
Furthermore, let $Q_l$ be a set (herein called a cover-set) containing all the nodes at a level $l\ge0$ in the given tree.
A level is associated with an integer $l$ and a radius $r_{l}=2^{-l}r_0$, where $l=0$ denotes the root level containing only one node and $l$ increases as we descend deeper into the tree.
DCT has three invariants, of which the first two are also invariants of the CT.
\begin{enumerate}[label=({I\arabic*})]
\setcounter{enumi}{0}
\item\label{inv:covering} (\textsf{Covering invariant}) For all $p \in Q_{l+1}$ there exists $q \in Q_l$ such that $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\| < r_{l}$.
\item\label{inv:separation} (\textsf{Separation invariant}) For all $q, p \in Q_l$ where ${\boldsymbol{\mathsf{x}}}_q\neq {\boldsymbol{\mathsf{x}}}_p$, we have $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\| > r_{l}$.
\end{enumerate}
We add that the standard CT includes a third invariant, the so-called nesting invariant, which requires $Q_l\subseteq Q_{l+1}$, but this is not desired for our purpose.
To introduce the last invariant of the DCT, we first need the following definition.
\begin{definition}[The covering fraction] Let $p\in Q_l$ be a node, and ${\boldsymbol{\mathsf{x}}}_p$ the associated point in $\CX$.
Furthermore, let $\widetilde{C}_p=\{c_i\}_{i=1}^k$ be the children of $p$, and ${\boldsymbol{\mathsf{x}}}_{c_i}$ the corresponding points in $\CX$. The covering fraction of a node $p$ is defined as
\begin{equation*}\label{eqn:cover_fraction}
\Fc\Ff(p) = \frac{\operatorname*{Vol}\bigg(\CB({\boldsymbol{\mathsf{x}}}_p,r_{l}) \cap \bigcup\limits_{c_i\in \widetilde{C}_p} \CB({\boldsymbol{\mathsf{x}}}_{c_i}, r_{l+1})\bigg)}{\operatorname*{Vol}{\big(\CB({\boldsymbol{\mathsf{x}}}_{p}, r_{l})\big)}}.
\end{equation*}
\label{def:cover_fraction}
\end{definition}
The covering fraction is the proportion of the volume of $\CB({\boldsymbol{\mathsf{x}}}_p, r_{l})$ that is covered by balls around its children of half the radius.
This quantity is directly related to \ref{inv:separation}, which forces the radius $r_{l}$ to halve at each new level, starting from an initial radius $r_0>0$. The covering fraction allows us to capture the vulnerability of the standard CT to the curse of dimensionality.
For example, consider two regions $\CA_1,\CA_2\subseteq\CX$ for which the doubling dimension at radius $r_l$ satisfies $\texttt{ddim}(\CA_1, r_l) > \texttt{ddim}(\CA_2, r_l)$. A node $p\in\CA_1$ at level $l$ then needs exponentially more children to be covered than a node $q\in\CA_2$ at the same level.
This effect is exacerbated the deeper we go into the tree, so the CT accumulates significantly more nodes in regions where the doubling dimension is large.
Recall that sub-sampling in kernel methods is intended to reduce the computational complexity.
For this purpose, it is desirable to keep the number of sub-samples from each level within a budget of reasonable size.
On the other hand, too low a sub-sample density will lead to poor interpolation performance.
Due to the exponential growth of the number of nodes with respect to the doubling dimension, it would be desirable to avoid wasting our budget on sub-samples from regions and radii with a large doubling dimension,
as this would require dedicating an (exponentially) large number of points to achieve good interpolation, which is not feasible.
Moreover, in high dimensional regions, we likely cannot learn anything more than a simple function, for which a lower sampling density would suffice.
To reduce the number of sub-samples from regions of large doubling dimensionality, we introduce the following damping invariant as the third invariant of the DCT.
\begin{enumerate}[label=({I\arabic*})]
\setcounter{enumi}{2}
\item\label{inv:damping}(\textsf{Damping invariant})
Let $\CD_{\Fc\Ff}\in (0,1)$ be some threshold and let $\widetilde{C}_p$ and $\Fc\Ff(p)$ be as in Def. \ref{def:cover_fraction}.
Then any node $q$ whose parent node $p$ does not satisfy $\Fc\Ff(p) \geq \CD_{\Fc\Ff}$ does not have children of its own.
\end{enumerate}
The damping invariant forces the tree to devote more resources to regions of lower doubling dimension by making it harder for nodes in regions with higher doubling dimensions to have children.
In other words, the practical effect of the damping invariant is to stop the vertical growth of the DCT if the doubling dimension becomes large.
This is because the covering number grows exponentially with the dimensionality, so satisfying $\Fc\Ff(p) \geq \CD_{\Fc\Ff}$ becomes correspondingly harder.
\begin{remark}
In Section \ref{section:analysis_of_the_DCT}, we analyze the damping invariant in more detail and show how the damping suppresses vertical growth of the DCT more for regions of high doubling dimension than for regions of lower doubling dimensionality.
\end{remark}
\subsection{Construction of the DCT}
\label{subsection:DCT_construction}
We now discuss how the DCT is constructed and updated as the data streams in.
First, it is important to restate that we use the DCT to replace the Nystr\"om sampling, which was in \textsf{FALKON } used to reduce the complexity of the ridge regressor.
Consequently, not all of the streamed data (that is, not every training point) will be added to the tree, but only those whose inclusion into the tree would not violate the invariants \ref{inv:covering}-\ref{inv:damping}.
In other words, the tree consists of only those training points that help resolve the data space at the relevant resolution level.
Thus, each node $p$ in the DCT is associated with a unique training sample ${\boldsymbol{\mathsf{x}}}_p$, but not every training sample will be represented by a node in the tree.
Note that this is different from the standard CT, which aims to organize all of the training data into a geometrical leveled data structure.
The construction of the DCT consists of a series of checks examining whether adding a given data point to the DCT would violate any of the invariants \ref{inv:covering}-\ref{inv:damping}.
When a new point ${\boldsymbol{\mathsf{x}}}_q$ arrives from the data stream the goal is to identify the deepest level $l$ for which there exists a node $p$ such that $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\|\leq r_{l}$. This corresponds to finding the nearest node in the tree that could serve as a parent.
We achieve this in the following way. The first training point is identified as the root node to which we associate the radius $r_0$.
For each new point, we proceed in a top-down manner, starting from the root node\footnote{We assume that all new points ${\boldsymbol{\mathsf{x}}}_q$ are within a ball of radius $r_0$ around this node, which holds for a large enough $r_0$}.
We then check whether ${\boldsymbol{\mathsf{x}}}_q$ would violate the separation invariant at the next level, that is, whether there exists a node $p$ such that $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\|<r_{l}$.
If such a node does not exist, then ${\boldsymbol{\mathsf{x}}}_q$ is added to the set of children of the root node, and we update the covering fraction estimate for the root node.
Otherwise, if such a node does exist, we repeat the process, checking the separation invariant among the children of the corresponding node, and proceed further down the tree.
Assume we have arrived at a node $p$ at level $l\ge 1$ with $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_p\|\le r_{l}$.
We then check if $p$ is allowed to have children, that is if the damping invariant is satisfied.
If it is not satisfied, the point ${\boldsymbol{\mathsf{x}}}_q$ is dismissed (it is not added to the tree).
On the other hand, if $p$ is allowed to have children, we check whether the separation invariant holds, i.e., if there exists a child $c$ of the node $p$ such that $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_c\|<r_{l+1}$.
If such a child exists, adding ${\boldsymbol{\mathsf{x}}}_q$ at this level would violate the separation invariant, so the recursion is applied again with $c$ as the potential parent node.
However, if such a child does not exist, that is, if the separation invariant is not violated, then ${\boldsymbol{\mathsf{x}}}_q$ is added to the set of children of the node $p$.
More details are given in Alg. \ref{alg:DampedCoverTree}.
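The insertion logic described above can be sketched as follows. This is a simplified illustration of Alg. \ref{alg:DampedCoverTree}, not the exact algorithm: class names are ours, and the covering fraction is tracked by the running-average estimator discussed later in this section.

```python
import numpy as np

class Node:
    def __init__(self, x, level):
        self.x, self.level = x, level
        self.children = []
        self.cf = 0.0                          # running covering-fraction estimate

class DampedCoverTree:
    """Simplified sketch of DCT insertion: top-down descent, separation
    checks, and the damping invariant (I3)."""

    def __init__(self, r0, cf_threshold=0.5, ema=0.05):
        self.r0, self.thresh, self.a = r0, cf_threshold, ema
        self.root = None

    def radius(self, level):
        return self.r0 * 2.0 ** (-level)

    def insert(self, x):
        """Returns True if x was added as a node, False if dismissed."""
        if self.root is None:
            self.root = Node(np.asarray(x), 0)  # first point becomes the root
            return True
        return self._insert(np.asarray(x), self.root, parent=None)

    def _insert(self, x, p, parent):
        # damping invariant (I3): p may have children only if its parent
        # is sufficiently well covered
        if parent is not None and parent.cf < self.thresh:
            return False                        # point dismissed
        r_child = self.radius(p.level + 1)
        hit = next((c for c in p.children
                    if np.linalg.norm(x - c.x) < r_child), None)
        # running-average update of the covering fraction of p
        p.cf = (1.0 - self.a) * p.cf + self.a * (1.0 if hit is not None else 0.0)
        if hit is not None:                     # separation (I2) would be violated
            return self._insert(x, hit, parent=p)
        p.children.append(Node(x, p.level + 1)) # covering (I1) holds w.r.t. p
        return True
```

By construction, any two children of the same node are at least $r_{l+1}$ apart, and points falling in poorly covered high-dimensional regions are eventually dismissed.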
Some comments are needed to elucidate how the steps described above are applied in practice.
First, note that the covering fraction from Def. \ref{def:cover_fraction} cannot be calculated explicitly, since the volume terms require knowing the intrinsic dimensionality.
Therefore, it is necessary to use an estimator instead.
For this purpose, we interpret $\Fc\Ff(p)$ as the probability that a sample ${\boldsymbol{\mathsf{x}}}\sim\textrm{Uni}(\CB({\boldsymbol{\mathsf{x}}}_p, r))$ will be within $\CB_{c} \coloneqq \bigcup_{c_i\in \widetilde{C}_p}\CB({\boldsymbol{\mathsf{x}}}_{c_i},r/2)$, where $\widetilde{C}_p$ are the children of $p$.
This probability can be estimated by considering the checks of the separation invariant \ref{inv:separation}, conducted on the last $N$ points that were inside $\CB({\boldsymbol{\mathsf{x}}}_p,r)$, as a series of independent random trials.
We use the following running average as an estimator of the covering fraction
\begin{equation}
(\Fc\Ff(p))_t = (1-\alpha)(\Fc\Ff(p))_{t-1} + \alpha \mathbbm{1}_{\CB_c}({\boldsymbol{\mathsf{x}}}_t),
\label{eq:estimator_of_covering_fraction}
\end{equation}
where $\mathbbm{1}_{\CB_c}({\boldsymbol{\mathsf{x}}}_t)$ is the indicator function, and $\alpha>0$ is a weighting parameter.
This approximates a weighted average of the outcome of the $N$ last draws (cf. Appendix \ref{appendixB}).
Note that this reduces the memory requirements, since instead of storing $N$ trial outcomes for each node in the tree, as required had we used an average of the last $N$ trials, we store only a single value for each node in the tree.
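Concretely, the estimator in Eq. \eqref{eq:estimator_of_covering_fraction} is an exponential moving average over the trial outcomes, which can be sketched as follows (the value of $\alpha$ is illustrative):

```python
def update_cover_fraction(cf_prev, inside, alpha=0.05):
    """One update of the covering-fraction estimate for a node p.

    `inside` indicates whether the latest sample that fell in B(x_p, r)
    also landed inside the union of the children's balls of radius r/2.
    """
    return (1.0 - alpha) * cf_prev + alpha * (1.0 if inside else 0.0)
```

A run of hits drives the estimate toward one, while only a single scalar per node is stored, in line with the memory argument above.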
Second, the separation invariant is in practice too strict since it results in too few points added to the tree, and thus a worse kernel estimator.
Moreover, checking the separation invariant adds to the computational complexity.
Therefore, we introduce the following relaxation.
Assume we have a new point ${\boldsymbol{\mathsf{x}}}_q$ and have arrived at a node $p$ at level $l$.
We then first conduct a random Bernoulli trial, with the failure probability
\begin{equation}
q_{\boldsymbol{\mathsf{x}}} = \frac{1}{1+\exp{\big[-h\tan\big(\pi(\|{\boldsymbol{\mathsf{x}}}_q- {\boldsymbol{\mathsf{x}}}_p\|/r_{l}-\frac{1}{2})\big) \big]}},
\end{equation}
where $h$ is the hardness of the threshold.
In other words, the probability of failure increases with the distance between ${\boldsymbol{\mathsf{x}}}_q$ and ${\boldsymbol{\mathsf{x}}}_p$: the larger the distance, the more likely the failure.
If the trial's outcome is a failure, then the check for the separation invariant is ignored, and the algorithm continues.
If it is a success, we proceed by first checking the separation invariant.
This means that the probability of ignoring the separation invariant increases as ${\boldsymbol{\mathsf{x}}}_q$ moves farther from ${\boldsymbol{\mathsf{x}}}_p$.
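The relaxation can be sketched as follows (the hardness value and the clamping of the argument near the endpoints of $[0,1]$, where the tangent diverges, are our own illustrative choices):

```python
import math, random

def skip_probability(d, r_l, h=5.0):
    """Failure probability q_x as a function of the distance d = ||x_q - x_p||.

    q -> 0 as d -> 0 and q -> 1 as d -> r_l, so distant points are
    increasingly likely to skip the separation-invariant check.
    """
    t = d / r_l - 0.5
    t = max(-0.499, min(0.499, t))  # keep tan() finite at the endpoints
    return 1.0 / (1.0 + math.exp(-h * math.tan(math.pi * t)))

def skip_separation_check(d, r_l, h=5.0, rng=random):
    """One Bernoulli trial: True means the separation check is skipped."""
    return rng.random() < skip_probability(d, r_l, h)
```

Note that at $d = r_l/2$ the failure probability is exactly $1/2$, and the hardness $h$ controls how sharply it transitions on either side.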
\subsection{Sub-sampling from the DCT}
\label{subsection:subsampling_from_DCT}
We now discuss how the DCT is used for sub-sampling the training points.
By organizing the training points into cover-sets $Q_l$ the DCT allows a hierarchical sub-sampling.
Even though cover-sets $Q_l$ significantly reduce the number of training points, they are for practical purposes still too large for efficient sub-sampling. Due to this, we restrict ourselves to a subset $\widetilde\Gamma^{(l)}\subseteq Q_l$ of candidate sub-samples called landmarks.
\begin{definition}[Landmarks]
Let $Q_l$ be the cover-set at level $l$ in a DCT.
We define the set of candidate landmarks at level $l$ as $\widetilde\Gamma^{(l)}=\{{\boldsymbol{\mathsf{x}}}_p \mid p\in Q_l \text{ and }\Fc\Ff(p) \geq \mathcal{D}_{\Fc\Ff} \}$, and the set of landmarks as any subset $\widetilde\Gamma^{(l)}_m = \{\widetilde{{\boldsymbol{\mathsf{x}}}}^{(l)}_1,\ldots, \widetilde{{\boldsymbol{\mathsf{x}}}}^{(l)}_m\}\subset \widetilde\Gamma^{(l)}$ of size $m$.
\label{def:landmarks}
\end{definition}
Some remarks are in order.
First, by Def. \ref{def:landmarks}, candidates for landmarks at level $l$ are only those nodes allowed to have children (according to the damping invariant \ref{inv:damping}).
This design choice implies that the set of candidate landmarks will contain more points from regions with a lower doubling dimension than points from regions with a higher doubling dimension.
This is because the larger the doubling dimension is, the more children nodes are needed to cover a given parent node.
Second, Def. \ref{def:landmarks} suggests using only a subset of candidate landmarks as sub-samples.
We refer to a result from \cite{rudi2017falkon} which states that good statistical accuracy of the estimator is achieved if the number of sub-samples is proportional to the square root of the number of samples.
At level $l$ we therefore use a set of landmarks which is of size $m^{(l)} = \delta_0 \sqrt{\textSN{Q_l}}$, where $\delta_0>0$ is a constant.
The third point that requires attention concerns the question of when the landmarks should be selected.
To that end, we use the covering fraction of a level, which, with a slight abuse of notation, we denote as $\Fc\Ff(Q_l)$. Moreover, we compute $\Fc\Ff(Q_l)$ as
\begin{equation}
(\Fc\Ff(Q_l))_t = (1-\alpha)(\Fc\Ff(Q_l))_{t-1} + \alpha \mathbbm{1}_{\CB_\text{level}}({\boldsymbol{\mathsf{x}}}_t),
\label{eq:estimator_of_covering_fraction_for_level}
\end{equation}
where $\CB_\text{level} = \bigcup\limits_{p\in Q_l} \CB({\boldsymbol{\mathsf{x}}}_{p}^{(l)}, r_{l})$.
Moreover, analogously to the damping invariant, let $\mathcal{D}_{level}\in(0,1)$ be some threshold.
We then say that a level $l$ is sufficiently covered when $\Fc\Ff(Q_l) \geq \mathcal{D}_{level}$.
\begin{remark} We note that as the level increases, our estimate of $\Fc\Ff(Q_l)$ through Eq. \eqref{eq:estimator_of_covering_fraction_for_level} will be increasingly more sensitive to subsets $\CA\subset\CX$ of low doubling dimension than to subsets of large doubling dimension.
This is because the damping invariant \ref{inv:damping} makes it harder for nodes in high dimensions to have children.
Consequently, we will have fewer points in deeper levels that belong to high dimensional regions. Because of this, the estimator in Eq. \eqref{eq:estimator_of_covering_fraction_for_level} is biased towards using more sub-samples from lower dimensional regions.
\end{remark}
Sub-sampling from a level $l$ goes as follows.
As training points arrive, we build the tree and continuously update the covering fraction of a level.
Once that level is sufficiently covered, that is, once $\Fc\Ff(Q_l) \geq \mathcal{D}_{level}$, we extract the set of landmarks by sub-sampling $m^{(l)}$ points from the pool of candidate landmarks $\widetilde\Gamma^{(l)}$.
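The extraction step can be sketched as follows (the representation of a level as a list of point/covering-fraction pairs and the constant $\delta_0$ are illustrative):

```python
import math, random

def extract_landmarks(level_nodes, d_cf, delta0=10.0, rng=random):
    """Sub-sample m^(l) = delta0 * sqrt(|Q_l|) landmarks at a level.

    `level_nodes` is a list of (point, cover_fraction) pairs for cover-set Q_l;
    only nodes whose covering fraction meets the damping threshold d_cf
    are candidates, per the definition of landmarks.
    """
    candidates = [x for x, cf in level_nodes if cf >= d_cf]
    m = min(len(candidates), int(delta0 * math.sqrt(len(level_nodes))))
    return rng.sample(candidates, m)
```

Restricting the draw to candidates with $\Fc\Ff(p)\geq\mathcal{D}_{\Fc\Ff}$ realizes the bias, discussed above, towards regions of lower doubling dimension.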
\section{StreaMRAK}
\label{section:TheStreaMRAKalgo}
In this section, we present \textsf{StreaMRAK } and clarify how it synthesizes concepts from Sections \ref{section:kernel_methods} and \ref{sect:Adaptive sub-sampling}, and utilizes them in a streaming context.
The workflow of \textsf{StreaMRAK } can be divided into three threads that can run in parallel, subject to some inter-dependencies.
These are the sub-sampling thread, the training thread, and the prediction thread. Overviews of these threads are given next, and the reader is referred to Algorithm \ref{alg:PseudoCode_StreaMRAK} in the Appendix for further details.
\subsection{Sub-sampling thread}
In the sub-sampling thread \textsf{StreaMRAK } collects and organizes the training data into a DCT.
Namely, as new training pairs are collected, the covering \ref{inv:covering} and separation \ref{inv:separation} invariants are checked, and the covering fraction is updated as described in Section \ref{subsection:DCT_construction}.
Moreover, the set of landmarks for each level is updated, as described in Section \ref{subsection:subsampling_from_DCT}. Once the set of landmarks for a given level $\widetilde\Gamma^{(l)}_m$ is completed, the landmarks and the estimator for the corresponding level can be used in the remaining two threads.
\subsection{Training thread}
The model is trained at level $l$ when two conditions are met. First, the coefficients of the previous level $l-1$ in the LP must have been calculated, i.e., the previous training thread must have finished.
Second, landmarks $\widetilde\Gamma^{(l)}_{m^{(l)}}$ at level $l$ must be ready.
In the first step, we define the kernel matrix on the landmarks by
\begin{equation}\label{eq:kernelmatrix_on_landmarks}
[{\mathbf{K}}_{mm}^{(l)}]_{ij} = k^{(l)}(\widetilde{{\boldsymbol{\mathsf{x}}}}_i,\widetilde{{\boldsymbol{\mathsf{x}}}}_j),\text{ for }\widetilde{{\boldsymbol{\mathsf{x}}}}_i \in \widetilde\Gamma^{(l)}.
\end{equation}
In the second step we consider $\big({\mathbf{K}}^{(l)}_{nm}\big)^\top{\mathbf{K}}^{(l)}_{nm} \in \mathbb{R}^{m^{(l)}\times m^{(l)}}$ and $\big({\mathbf{K}}^{(l)}_{nm}\big)^\top{\boldsymbol{\mathsf{d}}}^{(l)}_n \in \mathbb{R}^{m^{(l)}}$
which in addition to landmarks depend on the training points.
They are updated continuously as new training points come in, according to Eq. \eqref{eq:updateFormula_KnmTKnm} and Eq. \eqref{eq:updateFromualte_Zm}.
However, they are not updated indefinitely, but only until new training points no longer significantly alter the matrices, according to the following criterion.
\begin{definition}(Sufficient training points) Let ${\mathbf{A}}_n:=({\mathbf{K}}^{(l)}_{nm})^\top {\mathbf{K}}^{(l)}_{nm}$, and ${\boldsymbol{\mathsf{b}}}_n:=\big({\mathbf{K}}^{(l)}_{nm}\big)^\top{\boldsymbol{\mathsf{d}}}^{(l)}_n$. Let $\delta_1,\delta_2,\delta_3>0$ be three constants. We consider the number of training points at a level $l$ sufficient when either $n\geq \delta_3$ or
\begin{equation*}
\bigg\|\frac{{\mathbf{A}}_{n}}{n} -\frac{ {\mathbf{A}}_{n+1}}{n+1}\bigg\|_{\infty} \leq \delta_1 \text{ and } \quad \bigg\|\frac{{\boldsymbol{\mathsf{b}}}_{n}}{n} - \frac{{\boldsymbol{\mathsf{b}}}_{n+1}}{n+1}\bigg\| \leq \delta_2.
\end{equation*}
\label{def:sufficien_training_points}
\end{definition}
After enough training samples are collected according to Def. \ref{def:sufficien_training_points}, the correction term $s^{(l)}$ is obtained by solving for the coefficients $\widetilde\alpha^{(l)}_{1},\dots,\widetilde\alpha^{(l)}_{m^{(l)}}$ using Eq. \eqref{eq:FALKONlinearSystem_in_LP}.
The new prediction model $\widehat{f}^{(L)}$ is obtained by adding $s^{(l)}$ to the previous model, according to Eq. \eqref{eq:laplacian_pyramid_model}.
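The accumulation and stopping rule of Def. \ref{def:sufficien_training_points} can be sketched as follows. This is a simplification in which rows of ${\mathbf{K}}^{(l)}_{nm}$ and residuals arrive one at a time, and the threshold values are illustrative:

```python
import numpy as np

def accumulate_system(k_rows, residuals, delta1=1e-3, delta2=1e-3, delta3=10**6):
    """Accumulate A_n = (K_nm)^T K_nm and b_n = (K_nm)^T d_n sample by sample.

    Stops once new samples no longer change the normalized system beyond
    (delta1, delta2), or once n reaches delta3, per the sufficiency criterion.
    """
    m = len(k_rows[0])
    A, b, n = np.zeros((m, m)), np.zeros(m), 0
    for k, d in zip(k_rows, residuals):
        k = np.asarray(k, dtype=float)
        A_new, b_new, n_new = A + np.outer(k, k), b + d * k, n + 1
        if n > 0:
            dA = np.abs(A / n - A_new / n_new).max()
            db = np.linalg.norm(b / n - b_new / n_new)
            if (dA <= delta1 and db <= delta2) or n_new >= delta3:
                return A_new, b_new, n_new
        A, b, n = A_new, b_new, n_new
    return A, b, n
```

Once the loop terminates, the returned pair $({\mathbf{A}}_n, {\boldsymbol{\mathsf{b}}}_n)$ is what enters the linear system for the coefficients at that level.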
\subsection{Prediction thread}
In this thread \textsf{StreaMRAK } provides the latest version of the trained LP model in Eq. \eqref{eq:laplacian_pyramid_model}. This means that if $L$ is currently the highest level that has been trained, the prediction for new points ${\boldsymbol{\mathsf{x}}}$ is made using the model $\widehat{f}^{(L)}({\boldsymbol{\mathsf{x}}})$.
\section{Analysis}
\label{section:analysis}
In this section, we first analyze the damping invariant of the DCT. We then offer theoretical results on the convergence properties of the LP in the context of KRR. Finally, we offer estimates of the time and memory requirements of \textsf{StreaMRAK }.
\subsection{Analysis of the DCT}
\label{section:analysis_of_the_DCT}
As discussed in Section \ref{sect:Adaptive sub-sampling}, the DCT adds a given training point to the set of nodes of the tree if the invariants \ref{inv:separation} and \ref{inv:damping} are satisfied, and discards the point otherwise.
In particular, the damping invariant \ref{inv:damping} makes it harder for a node to have children.
The guiding idea is that damping should reduce the impact of the curse of dimensionality by making it harder for nodes in regions of higher doubling dimension to have children, and in doing so it should effectively stop the vertical growth of the tree in corresponding regions.
Therefore, it is critical to understand how and to what degree the damping affects high dimensional regions more than low dimensional ones.
In a statistical sense, the damping should treat all nodes in regions of the same doubling dimension equally.
Therefore, to gain insight into the damping, it suffices to analyze its effects concerning the doubling dimension on a single node $p$.
In this case, the effect of damping can be measured by analyzing how many training points must pass through $p$, in the sense of Alg. \ref{alg:DampedCoverTree}, before children of $p$ are allowed to have children of their own.
This can be modeled by considering the expected number of training points ${\boldsymbol{\mathsf{x}}}_i\sim{\rm Uni}\big( \CB({\boldsymbol{\mathsf{x}}}_p,r)\big)$ necessary to cover $\CB({\boldsymbol{\mathsf{x}}}_p, r)$ with balls of radius $r/2$ around points ${\boldsymbol{\mathsf{x}}}_i$.
Consider ${\boldsymbol{\mathsf{x}}}_i\sim {\rm Uni}\big( \CB({\boldsymbol{\mathsf{x}}}_p,r)\big)$, and let a set $\CS_p$ be built in a succession of trials $i=1,\ldots,N_{t}$ so that
\[{\boldsymbol{\mathsf{x}}}_i\in\CS_p \text{ if } \|{\boldsymbol{\mathsf{x}}}_i-{\boldsymbol{\mathsf{x}}}\|\geq \frac{r}{2} \text{ for all } {\boldsymbol{\mathsf{x}}} \in\CS_p.\]
In other words, a newly sampled point ${\boldsymbol{\mathsf{x}}}_i$ is added to the set $\CS_p$ only if its pairwise distances from all the points already in $\CS_p$ are at least $r/2$.
\begin{problem}\label{problem:Problem_setup_CT_filling_node}
Let $\widetilde C_p$ denote the set of children of the node $p$, constructed from the above-described trials.
What is the expected number of trials $N_{t}$ needed to ensure $\Fc\Ff(p)=1$?
\end{problem}
Since there is no unique set $\CS_p$ such that the corresponding set of children $\widetilde C_p$ ensures $\Fc\Ff(p)=1$, the sample space for Problem \ref{problem:Problem_setup_CT_filling_node} corresponds to all admissible sets $\CS_p$, which vary in both the number and the location of points they contain.
Characterizing all such sets corresponds to a disordered sphere packing problem \cite{jeffrey2012statistical}, which is an NP-hard combinatorial problem \cite{Hifi2009}.
For a theoretical analysis of this problem, defining a probability measure over the sample space is necessary.
However, at this level of generality, neither the sample space nor the probability measure admits a workable definition with currently available mathematical tools \cite{jeffrey2012statistical}.
Although some theoretical insights are possible under simplifications of the sample space, such analysis is restricted to a limited number of spheres and configurations.
Due to these difficulties, we consider a simplified setting where we instead consider an average case.
If the set $\CS_p$ is such that $\CB({\boldsymbol{\mathsf{x}}}_p,r)\subset\bigcup_{{\boldsymbol{\mathsf{x}}}_i\in\CS_p}\CB({\boldsymbol{\mathsf{x}}}_i,r/2)$, which corresponds to $\Fc\Ff(p)=1$,
then each of the balls $\CB({\boldsymbol{\mathsf{x}}}_i,r/2)$ occupies on average $\frac{1}{|\CS_p|}$ of the total volume of $\CB({\boldsymbol{\mathsf{x}}}_p,r)$, assuming none of the balls are covered by a union of other balls.
Therefore, as $\CS_p$ is being built, adding a point to $\CS_p$ will, on average, reduce the unoccupied volume of $\CB({\boldsymbol{\mathsf{x}}}_p,r)$ by $\frac{1}{|\CS_p|}$.
Moreover, it can be shown that the number of elements in such a set satisfies $2^{\texttt{ddim}-1}\leq |\CS_p| \leq 5^\texttt{ddim}$, see Lemma \ref{lemma:Bound_on_c_D}, where $\texttt{ddim}\coloneqq \texttt{ddim}(\CS_p, r)$ is the doubling dimension of $\CS_p$.
Based on these considerations we introduce a simplified setting for the average case of Problem \ref{problem:Problem_setup_CT_filling_node}.
\begin{assumption}
Problem \ref{problem:Problem_setup_CT_filling_node} can be approximated by dividing the ball $\CB({\boldsymbol{\mathsf{x}}}_p, r)$ into a union of $c_d$ fixed (and known) disjoint bins $\CB_i$ of size $(1/c_d)\operatorname*{Vol}\big(\CB({\boldsymbol{\mathsf{x}}}_p, r)\big)$.
\label{assumption:simplified_problem_setting}
\end{assumption}
Note that the bins referred to in Assumption \ref{assumption:simplified_problem_setting} correspond to regions around the children of the node $p$. Assumption \ref{assumption:simplified_problem_setting} reduces the average case of Problem \ref{problem:Problem_setup_CT_filling_node} to a form of the classical coupons collector's problem \cite{flajolet1992birthday}, which considers $n$ coupons with the same probability of being drawn.
Through a series of randomized trials with replacement, the goal is to obtain a copy of each coupon.
Relevant for Problem \ref{problem:Problem_setup_CT_filling_node} is estimating the stopping time $T$, which counts the number of trials before all coupons are collected, and which satisfies $\bbE[T] = n H_n$, where $n$ denotes the number of coupons and $H_n$ is the $n$-th harmonic number \cite{flajolet1992birthday}.
In terms of Problem \ref{problem:Problem_setup_CT_filling_node}, and under Assumption \ref{assumption:simplified_problem_setting}, we can therefore identify $T=N_{t}$, $n=|\CS_p|$ and $\bbE[N_{t}|\text{Node}\, p] = |\CS_p| H_{|\CS_p|}$. Combining the bound $\ln(n) + \frac{1}{2} \leq H_n \leq \ln(n) + 1$ (from \cite{klambauer1979problems}), with the bound on $|\CS_p|$ from Lemma \ref{lemma:Bound_on_c_D} we have
\begin{equation}
2^{\texttt{ddim}-1}((\texttt{ddim}-1)\ln 2+1/2) \leq \bbE[N_{t}|\text{Node}\, p] \leq 5^\texttt{ddim}(\texttt{ddim}\ln 5+1).
\label{eq:bound_T_p}
\end{equation}
With the same strategy, we can bound the number of trials until the cover-fraction of a level reaches $1$, as
\begin{equation}
2^{l(\texttt{ddim}-1)}({l(\texttt{ddim}-1)}\ln 2+1/2) \leq \bbE[N_{t}|\text{Level}\,l] \leq 5^{l\texttt{ddim}}(l\texttt{ddim}\ln 5+1).
\label{eq:bound_T_l}
\end{equation}
From Eq. \eqref{eq:bound_T_p} we see that the number of training points $\bbE[N_{t}|\text{Node}\, p]$ grows exponentially with the doubling dimension $\texttt{ddim}$.
In other words, significantly more trials are needed to achieve $\Fc\Ff(p) = \CD_{\Fc\Ff}$ for nodes in regions with a large doubling dimension than for nodes in regions with a lower doubling dimension.
Consequently, through the damping invariant, the DCT restricts the vertical growth of the tree comparatively more the higher the doubling dimension of the local region.
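To make the dependence on the doubling dimension tangible, the bounds in Eq. \eqref{eq:bound_T_p} can be evaluated directly:

```python
import math

def expected_trials_bounds(ddim):
    """Lower and upper bounds on E[N_t | node p] from the coupon-collector
    argument, per the bound on the expected number of trials above."""
    lower = 2 ** (ddim - 1) * ((ddim - 1) * math.log(2) + 0.5)
    upper = 5 ** ddim * (ddim * math.log(5) + 1)
    return lower, upper
```

For example, moving from $\texttt{ddim}=2$ to $\texttt{ddim}=8$ raises the lower bound by more than two orders of magnitude, which is what effectively halts the vertical growth of the tree in high dimensional regions.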
\subsection{Time and memory requirements}
\label{subsect:memory_usage_analysis}
This section analyzes the memory requirements of \textsf{StreaMRAK }, which involve storing the DCT and the linear system components used to update the coefficients. Furthermore, we consider the computational requirements, which consist of solving the coefficient equations.
Both the memory and computational requirements need to be analyzed per level $l$ of the tree due to the multi-resolution nature of the estimator and the tree organization of the data.
For the analysis, we consider a simplified setting where we assume that the doubling dimension is constant for all levels and all subsets of $\CX$, and that the number of children $c_d$ is the same for all nodes. At the end of the section we describe a more general setting.
In the following, we assume that the growth of the DCT stops at a level $L$.
In other words, level $L$ is the last level at which there are nodes.
In practice, the growth of the DCT slows down exponentially fast with the product of the doubling dimension $\texttt{ddim}\coloneqq\texttt{ddim}(\CX, r_L)$ and the level $l$.
This can be seen from Eq. \eqref{eq:bound_T_l}, which shows that the number of training points necessary to fill up a level grows exponentially with $l\texttt{ddim}$.
Therefore, in practice, no new levels will be added to the DCT when $l\texttt{ddim}$ is large enough, which effectively makes the last level $L$ independent of the number of training points.
Furthermore, from Lemma \ref{lemma:Bound_on_c_D} we know that $c_d$ is bounded by $2^{\texttt{ddim}-1} \leq c_d\leq 5^\texttt{ddim}$, which shows that also $c_d$ is independent of the number of training points.
\begin{proposition}
The memory requirement of \textsf{StreaMRAK } is
$\mathcal{O}\big(\sum_{l=0}^{L} c_d^{l}\big)$.
\label{prop:memory_streamrak_and_DCT}
\end{proposition}
\begin{proof}
The memory requirement of the DCT is determined by the number of nodes in the tree.
Since the number of children per node is assumed to be the same, $c_{d}$, the total number of nodes at level $l$ is $c_d^l$.
Thus, the memory needed to store the DCT with $L$ levels is $\CO(\sum_{l=0}^{L} c_d^l)$.
To store the linear system on level $l$ we need the matrices $\big({\mathbf{K}}_{nm^{(l)}}\big)^\top{\mathbf{K}}_{nm^{(l)}},\,{\mathbf{K}}_{m^{(l)}m^{(l)}}\in\bbR^{m^{(l)}\times m^{(l)}}$ and the vector ${\boldsymbol{\mathsf{z}}}\in\bbR^{m^{(l)}}$.
The number of landmarks $m^{(l)}$ at level $l$ is chosen as $m^{(l)}=\delta_0\sqrt{|Q_l|}$, where $|Q_l|$ is the number of nodes at level $l$. Since $|Q_l|$ is $\CO(c_d^l)$, it follows that $m^{(l)}\times m^{(l)}$ is also $\CO(c_d^l)$ per level, and the desired conclusion follows.
\end{proof}
Note that with a fixed $L$ and $n$ larger than $\mathcal{O}\big(\sum_{l=0}^{L} c_d^{l}\big)$, the memory requirement is independent of $n$.
We also note that if the deepest level satisfies $L\rightarrow\infty$, then the number of nodes is determined by the number of training points, and the memory requirement would thus, in the worst case, become $\mathcal{O}(n)$, the same as for the standard cover-tree.
Next, we discuss the construction of the DCT, where adding a new point to the set of nodes requires a search through the tree.
\begin{proposition}
\label{prop:Time_req_DCT_insertion}
Inserting a new point into the DCT, cf. Algorithm \ref{alg:DampedCoverTree}, requires $\CO(c_d L)$ operations.
\end{proposition}
\begin{proof}
For a point ${\boldsymbol{\mathsf{x}}}_q\in\CX$ to be analyzed at level $L$, we need to have analyzed it at the previous $l<L$ levels.
At each level, we must, in the worst case, check the separation invariant with all children of the current potential parent $p^{(l)}$, before finding a node $c$ such that $\|{\boldsymbol{\mathsf{x}}}_q-{\boldsymbol{\mathsf{x}}}_c\|\leq 2^{-l}r_0$, that would serve as the next potential parent.
This requires at most $c_d$ operations per level, giving $L c_d$ total operations over the $L$ levels.
The same number of operations is necessary if a node is discarded at level $L$.
\end{proof}
Lastly, we analyze the computational requirements for solving the linear system.
\begin{proposition}
\label{prop:Time_req_solver}
The time requirement for solving the linear system in Eq. \eqref{eq:Falkon_system} is $\mathcal{O}\big(\delta_3 m^{(l)}+\big(m^{(l)}\big)^3\big)$ per level, where $\delta_3$ is given in Def. \ref{def:sufficien_training_points}.
\end{proposition}
\begin{proof}
The time requirement of \textsf{FALKON } is $\CO(nmt+m^3)$ where $n$ is the number of training points, $m$ the number of landmarks and $t$ the number of iterations of the conjugate gradient (which has an upper bound).
By Def. \ref{def:sufficien_training_points}, \textsf{StreaMRAK } uses at most $\delta_3$ training samples at each level.
Since $m^{(l)}$ is the number of landmarks at level $l$, the result follows.
\end{proof}
Assume that the domain $\CX$ can be divided into disjoint subsets $\CA_1,\dots,\CA_t\subset\CX$ for which the doubling dimension $\texttt{ddim}(\CA_i, r_{l})$ differs based on $\CA_i$ and radius $r_{l}$. Let the number of children of a node ${\boldsymbol{\mathsf{x}}}_p\in \CA_i$ at level $l$ be $c_{d,i,l}$. In this scenario, the growth of the DCT will stop at different levels $L_i$ for different subsets $\CA_i$. The final time and memory requirements would therefore be the sum of the contribution from each subset $\CA_i$. In other words, the memory would be $\mathcal{O}(\sum_{i=1}^t\sum_{l=0}^{L_i}c_{d,i,l}^{l})$, and similarly the time requirement per point insertion would be $\mathcal{O}(\sum_{i=1}^t\sum_{l=0}^{L_i} c_{d,i,l})$. We note that $c_{d,i,l}$ and $L_i$ depend on the dimensionality of the data, but are independent of $n$. Therefore, so are the time and memory requirements.
\subsection{Convergence of the LP formulation of the KRR}
This section analyzes the conditions for which the LP approximates the training data $y_i=f({\boldsymbol{\mathsf{x}}}_i)$, with respect to the number of levels.
A similar analysis was previously done for the LP in the context of kernel smoothers \cite{Leeb2019}. However, to the best of our knowledge, this is the first time the LP formulation of KRR has been analyzed in this way.
Consider the LP estimator $\widehat{f}^{(l)}$ as defined in Eq. \eqref{eq:laplacian_pyramid_model}, but without sub-sampling.
From the recurrence relationship for the residuals ${\boldsymbol{\mathsf{d}}}^{(l)}$ in Eq. \eqref{eq:residual_at_level}, it follows by induction that
\begin{equation}
\widehat{f}^{(l+1)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])=({\mathbf{I}}-{\mathbf{P}}_{nn}^{(l)})\big( \widehat{f}^{(l)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])\big),
\end{equation}
where ${\mathbf{P}}_{nn}^{(l)}\coloneqq{\mathbf{K}}^{(l)}_{nn}({\mathbf{K}}^{(l)}_{nn}+\lambda n {\mathbf{I}})^{-1}$, cf. Lemma \ref{lemma:residual_expression}.
\begin{theorem}
Let $\widehat{f}^{(l)}$ be the LP estimator defined in Eq. \eqref{eq:laplacian_pyramid_model} and let $\lambda$ be a regularization parameter. Furthermore, let $0<\sigma_{l,n} \leq \dots \leq \sigma_{l,1}$ be the eigenvalues of ${\mathbf{K}}^{(l)}_{nn}$. For $L>0$ we then have
\begin{equation*}
\|\widehat{f}^{(L+1)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])\| \leq \prod_{l=0}^L(1-\varepsilon(l)) \|\widehat{f}^{(0)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n])\|, \quad \text{where} \quad \varepsilon(l)=\frac{\sigma_{l,n}}{n\lambda+\sigma_{l,n}}.
\end{equation*}
\label{thm:LP_KRR_convergence}
\end{theorem}
From Thm. \ref{thm:LP_KRR_convergence} it follows that the LP estimator will converge as $l\rightarrow \infty$, since $\sigma_{l,n}>0$ and therefore $1-\varepsilon(l) \in (0, 1)$ for all $l$. In Thm. \ref{thm:LP_KRR_convergence_rate} we characterize how $\varepsilon(l)$ depends on the level $l$ to give insight into the nature of this convergence.
\begin{theorem}
The LP estimator $\widehat{f}^{(l)}$ from Eq. \eqref{eq:laplacian_pyramid_model} converges with increasing level $L$ to the training data $f({\boldsymbol{\mathsf{x}}}_i)$, cf. Thm. \ref{thm:LP_KRR_convergence}, with the rate $\prod_{l=0}^L(1-\varepsilon(l))$,
where
\begin{equation}
1-\varepsilon(l) \leq \big(1+C_{1,D} 2^{-Dl}\exp\big(-C_{2,D} 4^{-l}\big)/n\lambda\big)^{-1},
\label{eq:first_bound_LP}
\end{equation}
for
\begin{equation*}
C_{1,D} = \frac{1}{2}(6\sqrt{2})^D\Gamma(D/2+1)^{\frac{D-1}{D+1}}\bigg(\frac{\pi}{9}\bigg)^{\frac{D}{D+1}}\bigg(\frac{r_0}{\delta}\bigg)^D \quad \text{and} \quad C_{2,D} = 1152\bigg(\frac{\pi\Gamma^2(D/2+1)}{9}\bigg)^{\frac{2}{D+1}}\bigg(\frac{r_0}{\delta}\bigg)^2,
\end{equation*}
where $\Gamma$ is the gamma function.
Furthermore, for $l > \log_2(\sqrt{D/2}(r_0/\delta))$ we have the tighter bound
\begin{equation}
1-\varepsilon(l) < \bigg(1+\big(1-2^{1+\frac{1}{\ln{2}}(C_3 D - g(l))}\big)/n\lambda\bigg)^{-1},
\label{eq:sec_bound_LP}
\end{equation}
where $g(l)=4^{l-\log_2{r_0/\delta}}$ and $C_3=(\ln{(1+1/4)}+2\ln{2})$.
\label{thm:LP_KRR_convergence_rate}
\end{theorem}
We note that the bound in Eq. \eqref{eq:first_bound_LP} underestimates the rate of convergence for lower levels but improves as the levels increase.
Furthermore, Thm. \ref{thm:LP_KRR_convergence_rate} shows that the convergence rate increases with the level $l$.
In fact, the bound in Eq. \eqref{eq:first_bound_LP} can be simplified with an \textit{a fortiori} bound of the same form, where $C_{1,D}=\frac{1}{2}\big(\frac{12.76}{2^{3/2}}\big)^D\big(\frac{D^D}{\Gamma(D/2+1)}\big)\big(\frac{r_0}{\delta}\big)^D$ and $C_{2,D}=(12.76\sqrt{2}D)^2(r_0/\delta)^2$, which ensures that $1-\varepsilon(l)$ decreases monotonically for $l<\log_2(\sqrt{D/2}(r_0/\delta))+\log_2(25.52\sqrt{2})$; see Remark \ref{remark:fortiori_bound} and Corollary \ref{corollary:Monotonicaly_increasing}.
On the other hand, when $l > \log_2(\sqrt{D/2}(r_0/\delta))$ the tighter bound from Eq. \eqref{eq:sec_bound_LP} ensures that $1-\varepsilon(l)$ continues to decrease monotonically.
Moreover, as $l\rightarrow\infty$ each new level reduces the residual error by $(1+1/n\lambda)^{-1}$.
We can also observe that the convergence rate is reduced by the number of training points $n$, but this effect can be mitigated by reducing the regularization parameter $\lambda$.
We also note that Thm. \ref{thm:LP_KRR_convergence} and Thm. \ref{thm:LP_KRR_convergence_rate} are derived for a vector of numbers on the training data $\Gamma_n\subset \CX$, without assumptions on the target function. In other words, the LP estimator can approximate the training data for any function $f:\Gamma_n\rightarrow\bbR$, to arbitrary precision, by including sufficiently many levels.
\begin{corollary} If the residual ${\boldsymbol{\mathsf{d}}}^{(l)}=(\widehat{f}^{(l)}([{\boldsymbol{\mathsf{x}}}_n])-f([{\boldsymbol{\mathsf{x}}}_n]))$ at level $l$ only projects non-trivially onto the eigenvectors with eigenvalue $\sigma_{l,n} \geq \sigma_{\text{cut-off}}$, then we say the residual is spectrally band-limited with respect to the kernel. If the residual ${\boldsymbol{\mathsf{d}}}^{(l)}$ is spectrally band-limited, then $1-\varepsilon(l)< n\lambda/(n\lambda + \sigma_{\text{cut-off}})$.
\label{cor:spectrally_bandlim_res}
\end{corollary}
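As a numerical illustration of Thm. \ref{thm:LP_KRR_convergence}, the per-level contraction factor $1-\varepsilon(l)$ can be computed from the smallest eigenvalue of the kernel matrix (a sketch; the kernel matrix is assumed symmetric positive definite):

```python
import numpy as np

def contraction_factor(K, lam):
    """Return 1 - eps(l), with eps(l) = sigma_min / (n*lam + sigma_min).

    K is the n x n kernel matrix at level l and lam the regularization;
    eigvalsh returns eigenvalues in ascending order, so index 0 is sigma_min.
    """
    n = K.shape[0]
    sigma_min = float(np.linalg.eigvalsh(K)[0])
    return 1.0 - sigma_min / (n * lam + sigma_min)
```

For any strictly positive-definite kernel matrix the factor lies in $(0,1)$, so adding levels contracts the residual, as the theorem states.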
\section{Experiments}
\label{section:ExperimentsMain}
This section presents comparative numerical experiments of the proposed estimator on three problems.
In Section \ref{section:varsinus_experiment} we consider a one-dimensional regression problem, and in Section \ref{section:dumbell_experiment} we consider a dumbbell-shaped domain that consists of two 5-dimensional spheres connected by a 2-dimensional plane.
Lastly, in Section \ref{section:doublePend}, we forecast the trajectory of a double pendulum, which is a well-known chaotic system \cite{shinbrot1992chaos}.
We compare \textsf{StreaMRAK } with \textsf{FALKON } \cite{rudi2017falkon} and an LP modification of KRR (\textsf{LP-KRR}). Both \textsf{FALKON } and \textsf{LP-KRR } rely on the standard Nystr\"{o}m sub-sampling \cite{williams2001using, smola2000sparse}. Furthermore, \textsf{FALKON } does not rely on a multi-resolution scheme but uses instead a single bandwidth, found by cross-validation.
Throughout the experiments, we set the threshold for the number of sub-samples (landmarks) in \textsf{StreaMRAK } to be $10\sqrt{\textSN{Q_l}}$, where $Q_l$ is the set of nodes at level $l$ in the DCT. We note that to choose the sub-sample size, \textsf{FALKON } and \textsf{LP-KRR } require $n$ to be known beforehand. For \textsf{FALKON } we let the number of Nystr\"{o}m landmarks be $10\sqrt{n}$, where $n$ is the number of training samples. Meanwhile, for \textsf{LP-KRR }{} we sub-sample $\sqrt{n}$ Nystr\"{o}m landmarks, which are then used for all levels.
We also need to pre-select the number of training points for \textsf{LP-KRR } and \textsf{FALKON}. For \textsf{FALKON }{} we use the entire training set, as in \cite{rudi2017falkon}.
Similarly, it is also common for the LP to use the entire training set at each level \cite{Rabin2012, Leeb2019}.
However, for large data sets, it might be better to include fewer data points. Therefore, we also use a version of the \textsf{LP-KRR } where we divide the total training data equally between the levels.
Throughout the experiments, we measure the performance of \textsf{StreaMRAK}, \textsf{FALKON}, and \textsf{LP-KRR } by estimating the mean square error
\begin{equation}
MSE(y, y_\textit{pred}) = \frac{1}{\Upsilon\Lambda}\sum_{k=1}^\Upsilon \frac{1}{n_k}||{\boldsymbol{\mathsf{y}}}_{k}-{\boldsymbol{\mathsf{y}}}_{k}^\textit{pred}||^2, \text{ with }
\Lambda= \max_{\substack{k\in[\Upsilon]\\ i\in[n_k]}}[{\boldsymbol{\mathsf{y}}}_k]_i - \min_{\substack{k\in[\Upsilon]\\ i\in[n_k]}}[{\boldsymbol{\mathsf{y}}}_k]_i,
\label{eq:MSEequationExperiments}
\end{equation}
where $\Upsilon$ is the number of test runs we average over, $n_k$ is the number of test points at test run $k$, ${\boldsymbol{\mathsf{y}}}_{k}, {\boldsymbol{\mathsf{y}}}_{k}^\textit{pred}\in \bbR^{n_k}$ are the target values and predictions respectively, and $\Lambda$ is the normalisation factor.
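As a reading aid, the normalised error above can be sketched in a few lines of Python. The sketch below is illustrative only: the list-of-runs data layout and the function name are our own assumptions, not part of the implementation used in the experiments.

```python
import numpy as np

def normalized_mse(targets, predictions):
    """Normalised MSE over several test runs: average ||y_k - y_k^pred||^2 / n_k
    over the runs k, then divide by the number of runs and by the target
    range Lambda taken across all runs."""
    # Lambda: spread of the target values over every run
    all_vals = np.concatenate([np.asarray(y, dtype=float).ravel() for y in targets])
    lam = all_vals.max() - all_vals.min()
    total = 0.0
    for y, y_pred in zip(targets, predictions):
        y = np.asarray(y, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        total += np.sum((y - y_pred) ** 2) / y.size
    return total / (len(targets) * lam)
```

With a single run, targets $[0,2]$ and predictions $[0,1]$, the range is $\Lambda=2$ and the per-run term is $1/2$, giving a normalised error of $0.25$.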
\subsection{Multi-resolution benchmark}
\label{section:varsinus_experiment}
We consider the function,
\begin{equation}\label{eq:varsinus_target}
f(x) = \sin{\bigg(\frac{1}{x+0.01}\bigg)}
, \text{ for } x\in\left[0,\frac{\pi}{4}\right].
\end{equation}
In the experiment we use a training set of $n=2.2\times 10^{6}$ samples and a test set of $1.3 \times 10^{5}$ samples. We use the non-uniform gamma distribution $\Gamma(\alpha, \beta)$ with $\alpha=1,\,\beta = 2$ to sample the training data.
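For concreteness, a training set of this kind could be generated as sketched below. The sketch is ours: reading $\beta$ as a rate parameter (so the scale is $1/2$) and rejecting samples outside $[0,\pi/4]$ are assumptions, not details given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # f(x) = sin(1 / (x + 0.01)) on [0, pi/4]
    return np.sin(1.0 / (x + 0.01))

def sample_training_data(n):
    # Draw from Gamma(alpha=1, beta=2); reading beta as a *rate* gives
    # scale = 1/2, which concentrates samples near x = 0, where f
    # oscillates fastest.  Draws outside [0, pi/4] are rejected so that
    # the samples stay on the input domain.
    x = rng.gamma(shape=1.0, scale=0.5, size=8 * n)
    x = x[x <= np.pi / 4][:n]
    return x, target(x)
```

The non-uniform density is what makes this benchmark interesting: most mass lands near $x=0$, exactly where the target has its high-frequency components.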
The number of training points used at each level in \textsf{StreaMRAK } is determined by setting $\delta_1$ and $\delta_2$ from Def. \ref{def:sufficien_training_points} to $10^{-3}$. With this choice, \textsf{StreaMRAK } selects between \SI{30244}{} and \SI{40100}{} training points for each level. For comparison, \textsf{FALKON } uses all the $\SI{2.2e6}{}$ training points. Furthermore, for \textsf{LP-KRR } we run two experiments: \textsf{LP-KRR } (1) using $\SI{1.1e5}{}$ training points at each level and \textsf{LP-KRR } (2) using $\SI{2.2e6}{}$ training points at each level.
\input{Tables/Table1/Table1}
\input{Figures/6-Experiments/Fig3/Fig3}
Results are presented in Table \ref{table:compare_falkon_mrfalkon_streamrak_on_varsinus}, and the prediction results are illustrated in Fig. \ref{subfig:varsinus_streamrak_pred_lvl3}-\ref{subfig:varsinus_falkon_pred}. The results show that \textsf{StreaMRAK } and both \textsf{LP-KRR } schemes perform much better than \textsf{FALKON}. The reason is that \textsf{FALKON } uses only one bandwidth $r$, while the multi-resolution schemes \textsf{StreaMRAK } and \textsf{LP-KRR}, utilize a bandwidth regime $r_l = 2^{-l}r_0$ that varies with the level $l$. The consequence is that \textsf{StreaMRAK } and \textsf{LP-KRR } approximate the low-frequency components of $f$ when the bandwidth is large, and then target the high-frequency components of $f(x)$ gradually as the bandwidth decreases. These results illustrate the benefits of a multi-resolution scheme over a single bandwidth scheme.
From Table \ref{table:compare_falkon_mrfalkon_streamrak_on_varsinus}, we also observe that \textsf{LP-KRR } (2) is significantly slower than \textsf{StreaMRAK } and \textsf{LP-KRR } (1). This is because it uses the entire training set at each level. Therefore, since \textsf{LP-KRR } (1) and \textsf{LP-KRR } (2) achieve comparable precision, we see that including all training points at each level is not always necessary.
A closer comparison of \textsf{StreaMRAK } and \textsf{LP-KRR } is given in Fig. \ref{fig:mse_varsin_mrfalkon_vs_streamrak}. In particular, in Fig. \ref{subfig:varsin_mse_vs_level} we see that the two algorithms achieve very similar precision. However, comparing the training times in Fig. \ref{subfig:varsin_mse_vs_time}, we see that \textsf{StreaMRAK } trains each level faster and therefore achieves better precision earlier than \textsf{LP-KRR } (1).
\input{Figures/6-Experiments/Fig4/Fig4}
\input{Figures/6-Experiments/Fig5/Fig5}
In Fig. \ref{fig:avg_nn_streamrak_mrfalkon} we show the average distance of each landmark to their 2 nearest neighbors (2-NN distance).
Two aspects of the landmark selection deserve attention.
First, as opposed to \textsf{LP-KRR}, \textsf{StreaMRAK } selects landmarks such that the 2-NN distance is comparable to the bandwidth used at each level.
Second, \textsf{StreaMRAK } saves computational power by not choosing landmarks in regions where the 2-NN distance is too low compared to the bandwidth.
In Fig. \ref{subfig:avg_nn_lvl_16_streamrak_mrfalkon} this can be observed for level $l=16$ for landmarks with $x\ge 0.2$.
Due to the non-uniform sample distribution with a higher density around $x=0$, the adaptive sub-sampling is able to select more landmarks in the region close to $x=0$, where $f$ oscillates with high frequency.
Furthermore, \textsf{StreaMRAK } stops predicting at level $16$ because level $17$ is not yet covered with a high enough density of landmarks.
Meanwhile, \textsf{LP-KRR } continues, but as seen from Fig. \ref{subfig:varsin_mse_vs_level} the improvements after level $15$ are not significant because the density of Nystr\"{o}m samples is too low compared to the bandwidth.
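The 2-NN diagnostic discussed above can be reproduced with a short brute-force computation; the sketch below is illustrative (the actual implementation is not specified in the text), and the function names are our own.

```python
import numpy as np

def avg_knn_distance(landmarks, k=2):
    """Mean distance from each landmark to its k nearest neighbours
    (brute force; fine for the ~10^4 landmarks used per level)."""
    X = np.asarray(landmarks, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    # pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    nearest = np.sort(d, axis=1)[:, :k]  # k smallest per row
    return nearest.mean(axis=1)

def bandwidth(level, r0=1.0):
    # dyadic bandwidth regime r_l = 2^{-l} r_0 used by the
    # multi-resolution schemes
    return r0 * 2.0 ** (-level)
```

Comparing `avg_knn_distance` against `bandwidth(l)` makes visible where a level is densely enough covered for the kernel at that resolution.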
\subsection{Adaptive sub-sampling benchmark}
\label{section:dumbell_experiment}
We consider a dumbbell-shaped domain embedded in $\bbR^5$, consisting of two 5-dimensional spheres connected by a 2-dimensional plane. A projection of the input domain in $\bbR^3$ is shown in
Fig. \ref{fig:Compare_landmarks_selection_StreaMRAK_vs_MRFALKON} (a)-(c). Furthermore, as target we consider the following function,
\begin{equation}\label{eq:dumbell_target}
f({\boldsymbol{\mathsf{x}}}) =
\begin{cases}
A\sin(Bx_1+ \phi) + (x_1 + 2), & 1 < x_1 < 3 \\
1, & \text{ otherwise }\\
\end{cases}, \text{ for } {\boldsymbol{\mathsf{x}}}\in[-1,5]\times[-1, 1]^4,
\end{equation}
where $A, B$ and $\phi$ are chosen so that $f\in \CC^1([-1,5]\times[-1, 1]^4, \bbR^5)$. For the experiments, we consider a training set of $\SI{1.9e+6}{}$ samples and a test set of $\SI{6e+5}{}$ samples, all sampled uniformly at random from the input domain. We note that we purposefully chose a simple function in the high dimensional regions because complicated functions in high dimensions require far too many points to be satisfactorily learned.
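The target above can be sketched as follows. The constants `A`, `B` and `PHI` are placeholders: the paper fixes them so that $f\in\CC^1$, but their exact values are not stated in the text.

```python
import numpy as np

# Placeholder constants: the paper chooses A, B and phi so that f is C^1
# at x1 = 1 and x1 = 3, but the exact values are not stated in the text.
A, B, PHI = -2.0, np.pi, 0.0

def dumbbell_target(x):
    """Target of the dumbbell experiment: oscillatory on the connecting
    plane (1 < x1 < 3), constant equal to 1 on the two 5-d spheres."""
    x1 = np.asarray(x, dtype=float)[..., 0]
    osc = A * np.sin(B * x1 + PHI) + (x1 + 2.0)
    return np.where((x1 > 1.0) & (x1 < 3.0), osc, 1.0)
```

Only the first coordinate matters for the target, so all the high-dimensional complexity of the benchmark sits in the geometry of the input domain, not in $f$ itself.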
To determine the number of training points for \textsf{StreaMRAK}, we let $\delta_1=\SI{1e-3}{}$ and $\delta_2 = \SI{1e-4}{}$, cf. Def. \ref{def:sufficien_training_points}.
With this choice \textsf{StreaMRAK } selects between \SI{30100}{} and \SI{40100}{} training points for each level. \textsf{FALKON } again uses all the $\SI{1.9e6}{}$ training points and for \textsf{LP-KRR } we consider two settings: \textsf{LP-KRR } (1) using $\SI{1.8e5}{}$ training points at each level, and \textsf{LP-KRR } (2) using $\SI{1.9e6}{}$ training points at each level.
The results for \textsf{StreaMRAK}, \textsf{LP-KRR}, and \textsf{FALKON } are presented in Table \ref{table:dumbell_experiment_complexSinus}. We observe that \textsf{StreaMRAK } achieves a better prediction than both \textsf{FALKON } and \textsf{LP-KRR} because it adapts the sub-sampling density to the level of resolution.
\input{Tables/Table2/Table2}
To understand the improvement in prediction accuracy, we need to discuss the effects of landmark selection.
In Fig. \ref{subfig:dumbell_lm_dist_lvl4}-\ref{subfig:dumbell_lm_dist_lvl6} we show the projections of landmarks for \textsf{StreaMRAK } and \textsf{LP-KRR } on $\bbR^3$, and in Fig. \ref{subfig:dumbell_lm_avg_distance_lvl4}-\ref{subfig:dumbell_lm_avg_distance_lvl6} the average distance of each landmark to its $7$ nearest neighbors.
These distances are compared with the bandwidth $r_l$ selected for the given level $l$. We see that \textsf{StreaMRAK } selects landmarks in regions where the average distance to nearest neighbors is comparable to the bandwidth.
This means that in high dimensional regions, which correspond to $x_1\in[-1,1]\cup[3,5]$, the algorithm effectively stops collecting landmarks since it cannot maintain high enough density.
On the other hand, \textsf{LP-KRR } uses Nystr\"{o}m sub-sampling, which imposes a uniform selection of landmarks.
Consequently, a significant number of landmarks come from high-dimensional regions.
\input{Figures/6-Experiments/Fig6/Fig6}
\input{Figures/6-Experiments/Fig7/Fig7}
Moreover, Fig. \ref{fig:Compare_landmarks_selection_StreaMRAK_vs_MRFALKON} shows that in the case of \textsf{LP-KRR}, the average distance between the landmarks in high dimensional regions is larger than the bandwidth $r_l$ when $l\geq5$.
As a knock-on effect, \textsf{LP-KRR } makes only small improvements in high dimensional regions for $l\geq 5$, as seen from Fig. \ref{fig:mse_low_and_high_dimensions}.
Analogous behavior can be observed for \textsf{StreaMRAK}.
However, since \textsf{StreaMRAK } devotes fewer resources to high dimensional regions, it sub-samples more from the low dimensional region, as illustrated in Fig. \ref{fig:comparison_low_and_high_dim_landmarks}. The consequence is that \textsf{StreaMRAK } makes bigger improvements in the low dimensional region than \textsf{LP-KRR}, as seen from Fig. \ref{fig:mse_low_and_high_dimensions}. Note that this was not the case in Section \ref{section:varsinus_experiment}, where the two methods had similar behavior, but unlike here, the input domain in Section \ref{section:varsinus_experiment} did not consist of regions with different dimensionalities.
\FloatBarrier
\subsection{Forecasting the trajectory of a double pendulum}
\label{section:doublePend}
We consider the double pendulum, illustrated in Fig. \ref{fig:DoublePendIllustration}, which we model by the Lagrangian system
\begin{equation}
\CL = ml^2(\omega_1^2+\frac{1}{2}\omega_2^2) +ml^2\omega_1\omega_2\cos{(\theta_1-\theta_2)} +mgl(2\cos{\theta_1}+\cos{\theta_2}),
\label{eq:lagrangian_dp}
\end{equation}
under the assumption that the pendulums are massless rods of length $l_1=l_2=l$ with masses $m_1=m_2=m$ centered at the end of each rod. Here $g$ is the standard gravity, $\omega_1\coloneqq \dot\theta_1$, $\omega_2\coloneqq \dot\theta_2$ are the angular velocities, and the angles $\theta_1$, $\theta_2$ are as indicated in Fig. \ref{fig:DoublePendIllustration}. For the experiments we let $m=1$, $l=1$ and $g=10$.
The learning task is to forecast the trajectory of the pendulum, given only its initial conditions. We let ${\boldsymbol{\mathsf{s}}}_t=[\theta_1(t),\, \theta_2(t),\, \omega_1(t),\, \omega_2(t)]\in \bbR^4$ be the state of the system at step $t\in \bbN$ and train \textsf{StreaMRAK}, \textsf{LP-KRR } and \textsf{FALKON } to learn how ${\boldsymbol{\mathsf{s}}}_t$ maps to a later state ${\boldsymbol{\mathsf{s}}}_{t+\Delta}$, for $\Delta\in\bbN$. The trained model $\widehat{f}$ is used to forecast the state ${\boldsymbol{\mathsf{s}}}_T$ for $T\gg 0$ by recursively predicting ${\boldsymbol{\mathsf{s}}}_{t+\Delta} = \widehat{f}({\boldsymbol{\mathsf{s}}}_t)$ from the initial state ${\boldsymbol{\mathsf{s}}}_0$ until $t=T$.
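The recursive forecasting loop just described can be sketched generically; here `f_hat` stands for any trained one-step predictor (the argument names and return layout are our own assumptions).

```python
import numpy as np

def forecast(f_hat, s0, T, delta=1):
    """Roll a trained one-step map s_{t+delta} = f_hat(s_t) forward from
    the initial state s0 until step T; returns all visited states."""
    states = [np.asarray(s0, dtype=float)]
    t = 0
    while t < T:
        states.append(np.asarray(f_hat(states[-1]), dtype=float))
        t += delta
    return np.stack(states)
```

Because each prediction feeds the next, small one-step errors compound over the rollout, which is why long-horizon forecasting of a chaotic system is hard even when the one-step map is learned accurately.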
For the experiments we consider two settings: a low energy system ${\boldsymbol{\mathsf{s}}}_0^\textit{low} = [-20\degree,\, -20\degree,\, 0\degree,\, 0\degree]$ and a high energy system ${\boldsymbol{\mathsf{s}}}_0^\textit{high} = [-120\degree,\, -20\degree,\, -7.57\degree,\, 7.68\degree]$. For these systems, we initialize $8000$ pendulums as ${\boldsymbol{\mathsf{s}}}_0\sim\CN({\boldsymbol{\mathsf{s}}}, \sigma({\boldsymbol{\mathsf{s}}}))$ for ${\boldsymbol{\mathsf{s}}} = {\boldsymbol{\mathsf{s}}}_0^\textit{low},\,{\boldsymbol{\mathsf{s}}}_0^\textit{high}$ respectively, where $\sigma({\boldsymbol{\mathsf{s}}})=[0.025|\theta_1|,\, 0.15|\theta_2|,\, 0.3|\omega_1|,\, 0.3|\omega_2|]$. Each pendulum is iterated for $500$ steps, which results in $5\times10^6$ training points distributed in $\bbR^4$. Furthermore, for the test data we consider $100$ pendulums ${\boldsymbol{\mathsf{s}}}_0\sim\CN({\boldsymbol{\mathsf{s}}}, 0.01|{\boldsymbol{\mathsf{s}}}|)$ for ${\boldsymbol{\mathsf{s}}} = {\boldsymbol{\mathsf{s}}}_0^\textit{low},\, {\boldsymbol{\mathsf{s}}}_0^\textit{high}$, iterated for 500 steps.
To determine the number of training points for \textsf{StreaMRAK}, we let $\delta_1,\delta_2=10^{-4}$, cf. Def. \ref{def:sufficien_training_points}. With this choice \textsf{StreaMRAK } selects between \SI{30219}{} and \SI{70282}{} training points for each level for the low energy system, and between \SI{36300}{} and \SI{130200}{} for the high energy system. Meanwhile, \textsf{FALKON } uses all $\SI{5.0e6}{}$ training points and \textsf{LP-KRR } uses $\SI{3.9e5}{}$ training points at each level.
Results are presented in Table \ref{table:doublePend_falkon_vs_streamrak_lowE} and \ref{table:doublePend_falkon_vs_streamrak_highE}. Furthermore, to illustrate the prediction results we consider the center of mass $\overline{M}_x({\boldsymbol{\mathsf{s}}}_t)=\frac{1}{2}(x_1(t)+x_2(t))\in \bbR$ at state ${\boldsymbol{\mathsf{s}}}_t$, where $x_1,\, x_2\in\bbR$ are the positions of the two pendulum masses as seen in Fig. \ref{fig:DoublePendIllustration}. The prediction results are illustrated in Fig. \ref{fig:Pred_doublePend_lowE} and \ref{fig:Pred_doublePend_highE} for the low and high energy pendulums respectively. We calculate the MSE at each step $t$ separately, such that for a given $t$ we use Eq. \ref{eq:MSEequationExperiments} with ${\boldsymbol{\mathsf{y}}}_k=\overline{M}_x({\boldsymbol{\mathsf{s}}}_t)$, ${\boldsymbol{\mathsf{y}}}^\textit{pred}_k=\overline{M}_x({\boldsymbol{\mathsf{s}}}^\textit{pred}_t)$ and $\Upsilon=100$.
\input{Tables/Table3/Table3}
\input{Tables/Table4/Table4}
For the low energy system, we see from Fig. \ref{subfig:mse_vs_time_dp_lowE} how \textsf{StreaMRAK } is trained significantly faster than \textsf{LP-KRR}, although at a cost of reduced precision. The reduced training time of \textsf{StreaMRAK } is a consequence of the low doubling dimension of the training data, which allows the selection of far fewer landmarks for \textsf{StreaMRAK } than what is used at each level in \textsf{LP-KRR}.
For the high-energy pendulum, we see from Fig. \ref{subfig:dp_mse_vs_time_highE} that \textsf{StreaMRAK } is again able to achieve good precision faster than \textsf{LP-KRR }. Furthermore, we see that the number of landmarks selected for \textsf{StreaMRAK } increases abruptly with the levels, reflecting the high doubling dimension of the training data. Due to this \textsf{StreaMRAK } stops the training after level $7$, as the next levels require too many landmarks. By continuing for $2$ more levels \textsf{LP-KRR } is able to achieve marginally better precision but at an increased computational cost.
\input{Figures/6-Experiments/Fig8/Fig8}
\input{Figures/6-Experiments/Fig9/Fig9}
As seen in Fig. \ref{subfig:cm_pred_doublePend_highE}, the forecasting of \textsf{StreaMRAK } and \textsf{LP-KRR } breaks down after $T\approx 200$ steps. In Fig. \ref{subfig:phase_diagram_with_bifurcation} we observe the trajectory of a pendulum with initial condition ${\boldsymbol{\mathsf{s}}}_0^\textit{high}$, as well as four pendulums with a $0.5\%$ perturbation on the angles $\theta_1$ and $\theta_2$ in ${\boldsymbol{\mathsf{s}}}_0^\textit{high}$. We observe that after roughly $T=205$ time steps the trajectories of the five pendulums diverge significantly from each other. It therefore seems that a bifurcation point occurs around this time, which may explain why none of the algorithms can forecast well beyond this point.
\input{Figures/6-Experiments/Fig10/Fig10}
\section{Outlook}
\label{section:Outlook}
Further development of \textsf{StreaMRAK } will focus on four objectives.
\begin{enumerate}[label=(O\arabic*)]
\item \label{Track_error} Augmentation of the DCT to track the error at each node.
\item \label{Improve_estim} Improve the estimator in Def. \ref{def:sufficien_training_points} and Eq. \ref{eq:estimator_of_covering_fraction}.
\item \label{refine_prev_lvl} Refinement of previously fitted levels in the LP as new data arrives.
\item \label{further_theory} Further theoretical analysis of the LP.
\end{enumerate}
Considering objective \ref{Track_error} we intend to develop the DCT to track the error at each node. This way the growth can be restricted in regions where the error is small, which allows for more focus on regions where the error is large. The intention is that this will reduce the problem complexity even further, while also increasing the precision. Regarding objective \ref{Improve_estim}, a drawback with the estimator in Eq. \ref{eq:estimator_of_covering_fraction} was already mentioned in Remark \ref{remark:weakness_with_cf_estimator}. Furthermore, for the estimator in Def. \ref{def:sufficien_training_points}, we intend to implement and evaluate alternative ways to estimate the convergence of the matrices. Another focus area will be objective \ref{refine_prev_lvl}, as we believe new information may be revealed as new training data arrive, and refinement of previously fitted levels can therefore be beneficial. Finally, the theoretical analysis in objective \ref{further_theory} will focus on analyzing the generalization error for the LP, particularly in combination with the adaptive sub-sampling scheme.
\section{Acknowledgement}
We would especially like to thank Prof. Pieter Abbeel at UC Berkeley and Asst. Prof. Sicun Gao at UC San Diego for their input on the double pendulum system, and for providing a code example for this system. We would also like to thank Sami Ortoleva at UC San Diego for his discussion on the analysis of the damped cover-tree. AO is part of the Simula-UCSD-UiO Research and Ph.D. training program (SUURPh), an international collaboration in computational biology and medicine funded by the Norwegian Ministry of Education and Research; {\v Z}K is funded by UK EPSRC grant EP/T000864/1; AC is funded by NSF DMS 1819222, 2012266, and Russell Sage Foundation grant 2196; and YF is funded by the NIH grant NINDS (PHS) U19NS107466 Reverse Engineering the Brain Stem Circuits that Govern Exploratory Behavior.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 5,221 |
{"url":"https:\/\/ask.openstack.org\/en\/answers\/116059\/revisions\/","text":"Ask Your Question\n\n# Revision history [back]\n\nReplace the server() call with\n\nfor server in conn.compute.servers(all_tenants=True):\n\n\nNote that earlier versions of the OpenStack SDK suffer from a bug that makes it ignore the all_tenants option. If that is the case for you, add \"all_tenants\" to the _query_parameters definition in ....\/site-packages\/openstack\/compute\/v2\/server.py.\n\nReplace the server() call withI was successful with this code:\n\n...\nconn = connection.Connection(auth_url=\"http:\/\/192.168.1.222:5000\/v3\/\",\nproject_name=\"admin\",username=\"admin\",\npassword=\"pw\",\nuser_domain_id=\"default\",\nproject_domain_id=\"default\")\n\nhost_names = []\nfor server in conn.compute.servers(all_tenants=True):\n...\n\n\nNote that earlier versions of the OpenStack SDK suffer from a bug that makes it ignore the all_tenants option. If that is the case for you, add \"all_tenants\" to the _query_parameters definition in ....\/site-packages\/openstack\/compute\/v2\/server.py.\n\nI was successful with this code:\n\n...\nconn = connection.Connection(auth_url=\"http:\/\/192.168.1.222:5000\/v3\/\",\nproject_name=\"admin\",username=\"admin\",\npassword=\"pw\",\nuser_domain_id=\"default\",\nproject_domain_id=\"default\")\n\nhost_names = []\nfor server in conn.compute.servers(all_tenants=True):\n...\n\n\nNote that earlier versions of the OpenStack SDK suffer from a bug that makes it ignore the all_tenants option. 
If that is the case for you, add the string \"all_tenants\" to the _query_parameters definition in ....\/site-packages\/openstack\/compute\/v2\/server.py.\n\nI was successful with this code:\n\n...\nconn = connection.Connection(auth_url=\"http:\/\/192.168.1.222:5000\/v3\/\",\nproject_name=\"admin\",username=\"admin\",\npassword=\"pw\",\nuser_domain_id=\"default\",\nproject_domain_id=\"default\")\n\nhost_names = []\nfor server in conn.compute.servers(all_tenants=True):\n...\n\n\nNote that earlier versions of the OpenStack SDK suffer from a bug that makes it ignore the all_tenants option. If that is the case for you, add the string \"all_tenants\" to the _query_parameters definition in ....\/site-packages\/openstack\/compute\/v2\/server.py.\n\nSee also the newest version of the openstacksdk.","date":"2020-12-02 20:14:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2783995568752289, \"perplexity\": 4628.127059251297}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141715252.96\/warc\/CC-MAIN-20201202175113-20201202205113-00552.warc.gz\"}"} | null | null |
Wallenfels is a town in Germany, in the federal state of Bavaria, in the administrative region of Upper Franconia, in the Oberfranken-West region, in the district of Kronach. It lies in the Franconian Forest, on the river Wilde Rodach, along federal road B173.

The town is located 11 km north-east of Kronach, 32 km west of Hof and 36 km north-west of Nuremberg.

Districts

The town consists of the following districts:
Geuser/Dörnach
Neuengrün
Schnaid
Wallenfels
Wolfersgrün

History

The locality was first mentioned in 1126 under the name Ilowa. The name Wallenfels first appears in 1248. Wallenfels was granted town rights in 1954.

Politics

The mayor is Peter Hänel. The town council consists of 17 members.

Partnerships

Partner town:
Bingham, United Kingdom

People born in Wallenfels

Lorenz-Günther Köstner, football coach

Kronach district
Towns in Bavaria | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,884 |
{"url":"https:\/\/carma.newcastle.edu.au\/event.php?n=469","text":"PHD CONFIRMATION SEMINAR Speaker: Cameron Rogers, School of Mathematical and Physical Sciences, The University of Newcastle Title: Automorphisms of totally disconnected, locally compact groups Location: Room V206, Mathematics Building (Callaghan Campus) The University of Newcastle Time and Date: 2:00 pm, Thu, 21st Nov 2013 Abstract: The scale function plays a key role in the structure theory of totally disconnected locally compact (t.d.l.c.) groups. Whereas it is known that the scale function is continuous when acting on a t.d.l.c. group, analysis of the continuity of the scale in a wider context requires the topologization of the group of continuous automorphisms. Existing topologies for Aut(G) are outlined and shown to be insufficient for guaranteeing the continuity of the scale function. Possible methods of generalising these topologies are explored. [Permanent link]","date":"2021-10-23 20:36:57","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8204532861709595, \"perplexity\": 1184.5023295685642}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585768.3\/warc\/CC-MAIN-20211023193319-20211023223319-00646.warc.gz\"}"} | null | null |
Roberto Marco Grego Todesco Assagioli (Venice, 27 February 1888 - Capolona, 23 August 1974) was an Italian pioneer of transpersonal psychology and psychotherapy. He was a physician, psychiatrist and psychotherapist, and the founder of psychosynthesis, a model of the human being that encompasses body, mind and soul. This laid the foundation for therapeutic psychosynthesis, but it is also applied in pedagogy, in social work, in personal development and in interpersonal relationships. Psychosynthesis is regarded as an important basis for the development of humanistic and transpersonal psychology and psychotherapy.

Life

Assagioli's mother, Elena Kaula, who was born in Egypt, came from a Venetian family; his father, Leone Grego, was an engineer from Verona. Both were Jewish. His father died when Roberto was two years old. His mother later remarried the physician Emmanuele Assagioli, who adopted Roberto. Thanks to a stimulating family environment and ample financial means, Roberto's cultural upbringing was very broad.

Roberto received a classical education. In Venice he attended the Foscarini lyceum. In 1904, at the age of 16, he passed his final examinations with very good marks. According to Alessandro Berti's study, this schooling nurtured both Assagioli's classical-humanistic and his scientific interests. He retained this broad range of interests, from philosophy and literature to science, throughout his life.

Besides his native Italian he spoke fluent German, English and French, and read classical ('ancient') Greek, Latin, Russian and Sanskrit. His interests spanned a wide spectrum of fields of knowledge; he did particularly intensive research in the border areas between medicine, pedagogy and religion.

In November 1904 the Assagioli family moved to Florence, where Roberto began studying medicine at the University of Florence: first with an emphasis on surgery, later on psychiatry. Psychology was not yet an independent discipline at the time, so his interest in the human psyche had to find another outlet. His cultural interests were equally broad; as early as 1906 he began contributing to the Florentine journal Leonardo.

In 1906 he travelled to Vienna, where he presumably met Sigmund Freud. In the same year he was in contact with the Roman theosophists. In Geneva he met the psychologists Édouard Claparède and Théodore Flournoy, who were also in contact with Carl Gustav Jung and with whom he remained in touch for a long time.

In 1907 an article entitled Il nuovo pensiero americano ('The new American thought') appeared in the journal Leonardo; it already contained the first ideas for his later life's work, psychosynthesis, for example the emphasis on the will as an important mental force in human beings.

In 1907 Assagioli visited the Burghölzli, the psychiatric university clinic of Zurich, and decided to take psychoanalysis as the subject of his doctoral thesis. Here he met Carl Gustav Jung and was regularly his guest. On 13 July 1909 Jung wrote to Sigmund Freud: "...unter ihnen ein gewisser Dr. Assagioli aus Florenz, von der dortigen psychiatrischen Klinik. Er ist ein offener aufnahmefähiger junger Mann..." ('...among them a certain Dr. Assagioli from Florence, from the psychiatric clinic there. He is an open and receptive young man...'). He remained in contact with Jung until the latter's death.

In August 1909 Assagioli attended the International Congress of Psychology in Geneva. There he engaged intensively with the psychology of religious expression, with mysticism, and with the special states of consciousness that belong to the field of transpersonal psychology.

In 1909 he began his doctoral thesis, entitled La Psicoanalisi, under Professor Tanzi in Florence, and graduated on 1 July 1910.

After graduating, Assagioli worked as an assistant physician under Eugen Bleuler at the Burghölzli clinic. Bleuler was the first to describe the 'splitting of the mind' as 'schizophrenia'. During the period Assagioli worked for him, Bleuler was extremely critical of Freud.

After his time as an assistant physician, Assagioli practised as a psychiatrist in Italy. He belonged to the circle of the first psychoanalysts and played a major role in spreading psychoanalysis in Italy. He was a member of the Arcane School founded by Alice Bailey and was its representative for Italy.

In 1922 he married Nella, and together they had a son, Ilario Assagioli.

In 1938 Assagioli was arrested and imprisoned by Mussolini's fascist government because of his Jewish descent and his humanistic writings. He was held in solitary confinement for over a month. During the Second World War his family's farm in Florence was destroyed, and he went into hiding with his family. His son died of a lung disease at the age of 28, attributed to severe stress caused by the harsh living conditions during the war.

After the war he resumed his work and began his life's work, known as psychosynthesis. The post-war years were relatively calm, and he founded several institutes devoted to psychosynthesis in Europe and North America.

Assagioli had a long and prosperous life and a happy forty-year marriage, until he died at the age of 86.

Psychosynthesis

From 1910 onwards Assagioli pointed out the limitations of the psychoanalytic concept: as long as human beings are seen only as dependent on their biological instincts, they can be understood only partially, not in their totality. It was Assagioli's desire to develop a scientific psychology that acknowledges the existence of the soul, and that encompasses joy, meaning, fulfilment, creativity, love and wisdom, that is, the higher energies and strivings of human existence, just as much as the impulses, drives and needs of the vital basis of human nature.

He created his psychological concept and worldview, psychosynthesis, with which he attempted to merge the scientific insights of medicine and psychology with the wisdom traditions of the peoples into an image of humanity that places the biological boundedness of human beings within a larger framework of free choice and responsibility, and this in turn within a still larger framework of spiritual connectedness and engagement.

Quotations

Bibliography

Original titles

Published during his lifetime:
Psychosynthesis. A Manual of Principles and Techniques, Hobbs, Dormann & Company, New York 1965.
Psicosintesi. Per l'armonia della vita, Mediterranee, Roma 1966.
The Act of Will, Viking Press, New York 1973.
Psychosynthesis: A Collection of Basic Writings, ISBN 0967857007

Published posthumously:
Psychosynthesis Typology (= translation of the Italian original I tipi umani), The Institute of Psychosynthesis, London 1983
Educare l'uomo domani, Ed. Istituto di Psicosintesi, Firenze 1988
Lo sviluppo transpersonale (a cura di M. Macchia Girelli), Astrolabio, Roma 1988
Comprendere la Psicosintesi (a cura di M. Macchia Girelli), Astrolabio, Roma 1991

Dutch translations
Psychosynthese - een veelzijdige benadering van heel de mens (Servire, Katwijk a/z, 4th edition, 1988, 316 pp., ISBN 90-6325-194-7)
Over de wil - sturend mechanisme in het menselijk handelen (Servire, Katwijk a/z, 1981, 252 pp., ISBN 90-6325-186-6)
Transpersoonlijke ontwikkeling (Servire, 1991, 309 pp., ISBN 90-6325-368-0)
Psychosynthese Typologie (Stichting Psychosynthese, Utrecht, 2010, 126 pp., ISBN 978-90-73129-02-3)

External links
Psychosynthese.nl
Psychosynthese-studies.nl
Psychosyntheticus.nl

Italian scientist
Psychiatrist
Italian psychologist | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,219 |
The Swords of Righteousness Brigade (Arabic: سرايا سيوف الحق, sometimes translated as the Swords of Truth Brigade) is a terrorist group that kidnapped four Western peace activists in Iraq on 26 November 2005, murdered one of them, Tom Fox, and held the remaining three hostage until 22 March 2006, when coalition forces raided the place where they were held, in what became known as the 2005–2006 Christian Peacemaker hostage crisis.
The group was unknown prior to this kidnapping. However, the U.S.-based SITE Institute, a terrorism research organization, said that it had found ties between the Swords of Righteousness Brigade and the Islamic Army in Iraq.
Hostage crisis
In 2006, the human rights workers were freed, except for Tom Fox, an American Quaker, who had been killed by the hostage takers. Along with Fox, two Canadians, James Loney and Harmeet Singh Sooden, and Briton Norman Kember had been kidnapped. Their release reportedly involved Canada's elite JTF-2, the British SAS and American Special Forces operatives.
See also
2005-2006 Christian Peacemaker hostage crisis
References
External links
Free The Captives
Factions in the Iraq War
Rebel groups in Iraq
Iraqi insurgency (2003–2011)
Webinar Focuses on America's Most Dangerous Rivals
The military strategies of America's most dangerous rivals will be the focus of a webinar hosted by the Association of the U.S. Army.
The event, part of the AUSA Noon Report series, will feature author Seth Jones, who will discuss his book Three Dangerous Men: Russia, China, Iran, and the Rise of Irregular Warfare.
It will take place at noon Eastern on March 30. The webinar is free, but registration is required here.
In Three Dangerous Men, Jones examines how three key figures in Moscow, Beijing and Tehran built irregular warfare campaigns that are eroding American power. They are Valery Gerasimov, chief of the Russian General Staff; Gen. Qassem Soleimani, the head of Iran's elite Quds Force who was killed in January 2020 in a U.S. airstrike; and Zhang Youxia, vice chairman of China's Central Military Commission.
Jones also argues that the U.S. is woefully unprepared for the future of global competition, according to a description of the book, which was published before Russia's invasion of Ukraine.
While America has focused on building fighter jets, missiles and conventional warfighting capabilities, its three principal rivals—Russia, China and Iran—have increasingly adopted irregular warfare, using cyberattacks, covert action, proxy conflicts, information and disinformation campaigns, espionage and economic coercion to undermine American power.
Jones is senior vice president, Harold Brown Chair, director of the International Security Program, and director of the Transnational Threats Project at the Center for Strategic and International Studies. He also teaches at Johns Hopkins University's School of Advanced International Studies and the Center for Homeland Defense and Security at the U.S. Naval Postgraduate School.
The author of many books and articles, Jones specializes in irregular warfare, counterterrorism and covert action.