text stringlengths 14 5.77M | meta dict | __index_level_0__ int64 0 9.97k ⌀ |
|---|---|---|
The Onego Orchestra of Russian Folk Instruments is an orchestra of Russian folk instruments of the Karelian State Philharmonic in the city of Petrozavodsk (Republic of Karelia).
The orchestra's repertoire includes about two thousand works, among them symphonic classics arranged for an orchestra of Russian folk instruments, arias, romances, and Russian folk songs and dances.
History
The first performance of the orchestra, founded at the House of Culture of the Onega Tractor Plant, took place at the Festival of Folk Art in Petrozavodsk on 29 March 1975.
In 1993 the orchestra became part of the Karelian State Philharmonic.
Since its first day the ensemble has been led by conductor Gennady Mironov, Honored Artist of Russia and Honored Worker of Culture of the Republic of Karelia.
In 2013 the orchestra was awarded the Republic of Karelia's Sampo Prize.
The orchestra currently numbers 45 musicians, most of them graduates of the Petrozavodsk State Conservatory named after A. K. Glazunov.
Notes
Links
The orchestra's page on the Karelian Philharmonic website
The orchestra's page on the official website of the Republic of Karelia
The Onego Orchestra celebrated its 40th anniversary
The Hypnosis of Music
Photo archive
Slavic folk music
Musical groups established in 1975
Music of Karelia
Musical groups of Petrozavodsk
Laureates of the Sampo Prize
Orchestra of Russian folk instruments | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,664 |
Hassell is an English surname. It should not be confused with the German surname Hassel.
Surname
(born 1980) — English footballer.
(c. 1767 – 1825) — English painter, engraver and publisher.
(born 1937) — American trumpeter and composer.
(born 1979) — American basketball player.
Other
— a town in North Carolina (USA).
Hassell — a national park in Western Australia.
— an Australian architectural firm.
See also
Hassel
Hasel
Hassall | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,535 |
EA may refer to:
Electronic Arts, a company
Evolutionary algorithm, an algorithm | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,822 |
{"url":"https:\/\/zbmath.org\/?q=an:1054.65079","text":"## Computing the characteristic roots for delay differential equations.(English)Zbl\u00a01054.65079\n\nThe paper deals with the computation of rightmost characteristic roots of a system of linear delay differential equations (DDE) with multiple discrete and distributed delays as follows: $$y'(t)=\\sum_{l=0}^kL_ly(t-\\tau_l) +\\int_{-\\sigma_2}^{-\\sigma_1}M(\\theta )y(t+\\theta ) \\,d\\theta$$, $$t\\geq 0$$, where $$L_0,\\dots,L_k\\in C^{m\\times m}$$, $$\\tau =\\tau_k>\\dots >\\tau_l>\\tau_0$$, $$\\sigma_2>\\sigma_1\\geq 0$$ and $$M:[-\\sigma_2, -\\sigma_1]\\rightarrow C^{m\\times m}$$ is a sufficiently smooth function.\nThe characteristic roots of this DDE constitute the spectrum of the infinitesimal generator $$A$$ of the semigroup of solution operators. $$A$$ is discretized by a suitable matrix and then its eigenvalues are computed. It avoids a complicated problem of the computation of roots of the characteristic equation of the DDE. The discretization scheme using the Runge-Kutta method is proposed. The convergence order of the computed approximate roots to the exact ones is proved for arbitrary meshes. Implementation issues lead to standard large and sparse eigenvalue problems. Numerical results for two different systems of DDEs are included.\n\n### MSC:\n\n 65L15 Numerical solution of eigenvalue problems involving ordinary differential equations 65L05 Numerical methods for initial value problems involving ordinary differential equations 34K28 Numerical approximation of solutions of functional-differential equations (MSC2010) 34L16 Numerical approximation of eigenvalues and of other parts of the spectrum of ordinary differential operators\n\nDDE-BIFTOOL\nFull Text:","date":"2022-07-06 10:07:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7491872310638428, \"perplexity\": 315.47846122403985}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104669950.91\/warc\/CC-MAIN-20220706090857-20220706120857-00734.warc.gz\"}"} | null | null |
classdef nlsaLocalDistanceData_scl < nlsaLocalDistanceData
%NLSALOCALDISTANCEDATA_SCL Class definition and constructor of distance data
% with scaling factors
%
% Modified 2019/11/24
%% PROPERTIES
properties
sclComponent = nlsaComponent();
end
methods
%% NLSALOCALDISTANCEDATA_SCL Class constructor
function obj = nlsaLocalDistanceData_scl( varargin )
nargin = numel( varargin );
if nargin > 0 && isa( varargin{ 1 }, 'nlsaLocalDistanceData' )
varargin = { 'component', getComponent( varargin{ 1 } ), ...
varargin{ 2 : end } };
end
nargin = numel( varargin );
ifParentArg = true( 1, nargin );
% Parse input arguments
iSclComponent = [];
for i = 1 : 2 : nargin
switch varargin{ i }
case 'sclComponent'
iSclComponent = i + 1;
ifParentArg( [ i i + 1 ] ) = false;
end
end
obj = obj@nlsaLocalDistanceData( varargin{ ifParentArg } );
if isempty( obj )
if ~isempty( iSclComponent )
                error( 'Attempted to initialize an empty nlsaLocalDistanceData_scl object with non-empty scaling data' )
end
return
end
% Set caller-defined values
if ~isempty( iSclComponent )
if ~isa( varargin{ iSclComponent }, 'nlsaComponent' )
error( 'sclComponent must be set to an nlsaComponent object' )
end
nC = getNComponent( obj );
if nC > numel( varargin{ iSclComponent } ) ...
&& iscolumn( varargin{ iSclComponent } )
obj.sclComponent = repmat( varargin{ iSclComponent }, ...
[ nC 1 ] );
elseif all( size( obj.component ) == size( varargin{ iSclComponent } ) )
obj.sclComponent = varargin{ iSclComponent };
else
error( 'Incompatible scaling data' )
end
end
end
end
end
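% Example usage (a minimal sketch; it assumes, as the argument parsing above
% suggests, that the parent class accepts a 'component' name-value pair and
% that scaling data are supplied as nlsaComponent objects):
%
%   cmp  = nlsaComponent();
%   scl  = nlsaComponent();
%   dist = nlsaLocalDistanceData_scl( 'component', cmp, 'sclComponent', scl );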
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,881 |
Q: Insert text format from an editable or textarea into the database I am currently working on a personal project for my own website, where I am trying to add a feature for storing formatted text in the database. So far I have been able to change the font from italic to bold as a sample, but I am completely clueless about how to pass this through to the database.
<style>
#fake_textarea {
width: 100%;
height: 200px;
border: 1px solid red;
}
#jBold {
    font-weight: bold;
}
#jItalic{
font-style:italic;
}
</style>
<script src="/scripts/snippet-javascript-console.min.js?v=1"></script>
<body>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<button id="jBold"><b>B</b></button><button id="jItalic"><i>I</i></button>
<div id='fake_textarea' contenteditable>
Select some text and click the button to make it bold...
<br>Or write your own text
</div>
<script type="text/javascript">
$(document).ready(function() {
$('#jBold').click(function() {
document.execCommand('bold');
});
});
</script>
<script type="text/javascript">
$(document).ready(function() {
$('#jItalic').click(function() {
document.execCommand('italic');
});
});
</script>
</body>
</html>
Sample work:
codepen
A: To access the content in that editable div, you can use:
let content = $('#fake_textarea').html();
Regarding sending the data through to PHP, the easiest solution would probably be to use Ajax.
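For example, something along these lines (a rough sketch — the URL and field name are placeholders):
$.post('save-content.php', { content: $('#fake_textarea').html() })
    .done(function (response) {
        console.log('Saved:', response);
    });
On the PHP side you would then read $_POST['content'] and insert it with a prepared statement, much like the postcontent.php example further down this page.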
Alternative
If you don't want to use Ajax but rather an ordinary form post, you could let the button trigger a function that gets the content and populates it into a hidden field in a form, which you then submit.
Something like this: (untested pseudo code)
HTML:
<form method="post" action="foo.php" id="some-form">
<input type="hidden" name="content" id="some-hidden-input" />
<div id="fake_textarea" ...></div>
<button id="submit-button"></button>
</form>
JS:
$('#submit-button').on('click', function (e) {
// Stop the default submission
e.preventDefault();
// Get the content from the div
let content = $('#fake_textarea').html();
// Store the content in a hidden input
$('#some-hidden-input').val(content);
// Submit the real form
$('#some-form').submit();
});
Note
I'm using jQuery in these examples since you show that you're using it. All this can of course be done in vanilla JS as well.
A: Alright so I have tweaked Magnus' code a bit and I do thank him a lot for helping me figure this out.
textarea.php
This is where you will write your own content, format the text and throw it to your php file that in turn would insert it to the database. I added comments for those that wants to learn from this as well.
<style>
#fake_textarea {
width: 100%;
height: 200px;
border: 1px solid red;
}
/* Add CSS to modify the text */
#jBold {
    font-weight: bold;
}
#jItalic{
font-style:italic;
}
#jUnderline{
text-decoration: underline;
}
#jLT{
text-decoration: line-through;
}
</style>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-3.4.1.min.js"></script>
<body>
<!-- Put buttons here to modify the format -->
<div>
<select id="select_font" onchange="changeFont(this);">
<option value="Arial">Arial</option>
<option value="Sans Serif" selected>Sans Serif</option>
<option value="Comic Sans MS">Comic Sans MS</option>
<option value="Times New Roman">Times New Roman</option>
<option value="Courier New">Courier New</option>
<option value="Verdana">Verdana</option>
<option value="Trebuchet MS">Trebuchet MS</option>
<option value="Arial Black">Arial Black</option>
<option value="Impact">Impact</option>
<option value="Bookman">Bookman</option>
<option value="Garamond">Garamond</option>
<option value="Palatino">Palatino</option>
<option value="Georgia">Georgia</option>
</select>
<select id="select-size" onchange="changeSize(this);">
<option value="4">4</option>
<option value="8">8</option>
<option value="12">12</option>
<option value="16">16</option>
<option value="20">20</option>
<option value="24">24</option>
<option value="28">28</option>
<option value="32">32</option>
<option value="36">36</option>
<option value="40">40</option>
<option value="44">44</option>
<option value="48">48</option>
<option value="52">52</option>
<option value="56">56</option>
<option value="58">58</option>
</select>
<button id="jBold"><b>B</b></button><button id="jItalic"><i>I</i></button><button id="jUnderline">U</button><button id="jSuperScript">A<sup>A</sup></button><button id="jSubScript">A<sub>A</sub></button>
<button id="jLT">A</button>
<div>
<!-- Add a form -->
<form method="post" action="postcontent.php" id="contentform">
<!-- Add some hidden input in order for the form to submit some sort of value -->
<input type="hidden" name="content" id="hiddeninput" />
<!-- Add a place to insert the content -->
<div id='fake_textarea' contenteditable>
Select some text and click the button to make it bold...
<br>Or write your own text
</div>
<!-- Add a submit button-->
<button id="submit">Submit</button>
</form>
<!-- Script to make a selected text bold-->
<script type="text/javascript">
$(document).ready(function() {
$('#jBold').click(function() {
document.execCommand('bold');
});
});
</script>
<!-- Script to make a selected text italic-->
<script type="text/javascript">
$(document).ready(function() {
$('#jItalic').click(function() {
document.execCommand('italic');
});
});
</script>
<!-- Script to make add an underline-->
<script type="text/javascript">
$(document).ready(function() {
$('#jUnderline').click(function() {
document.execCommand('underline');
});
});
</script>
<!-- Script to make make selected text a superscript-->
<script type="text/javascript">
$(document).ready(function() {
$('#jSuperScript').click(function() {
document.execCommand('superscript');
});
});
</script>
<!-- Script to make make selected text a subscript-->
<script type="text/javascript">
$(document).ready(function() {
$('#jSubScript').click(function() {
document.execCommand('subscript');
});
});
</script>
<!-- Script to add a line-through-->
<script type="text/javascript">
$(document).ready(function() {
$('#jLT').click(function() {
document.execCommand('strikeThrough');
});
});
</script>
<!-- Changes the font type -->
<script type="text/javascript">
function changeFont(font) {
var sel = window.getSelection(); // Gets selection
if (sel.rangeCount) {
// Creates a new element, and insert the selected text with the chosen font inside
var e = document.createElement('span');
e.style = 'font-family:' + font.value + ';';
e.innerHTML = sel.toString();
// https://developer.mozilla.org/en-US/docs/Web/API/Selection/getRangeAt
var range = sel.getRangeAt(0);
range.deleteContents(); // Deletes selected text…
range.insertNode(e); // … and inserts the new element at its place
}
}
</script>
<!-- Changes the font size -->
<script type="text/javascript">
function changeSize(size) {
var sel = window.getSelection(); // Gets selection
if (sel.rangeCount) {
// Creates a new element, and insert the selected text with the chosen font inside
var e = document.createElement('span');
e.style = 'font-size:' + size.value + 'px;';
e.innerHTML = sel.toString();
// https://developer.mozilla.org/en-US/docs/Web/API/Selection/getRangeAt
var range = sel.getRangeAt(0);
range.deleteContents(); // Deletes selected text…
range.insertNode(e); // … and inserts the new element at its place
}
}
</script>
<!-- Script to add value to the hidden input then submits it-->
<script type="text/javascript">
$( "#submit" ).click(function() {
var htmlString = $( "#fake_textarea" ).html();
$('#hiddeninput').val(htmlString);
// Submit the real form
$('#contentform').submit();
});
</script>
</body>
postcontent.php
This file will submit the value thrown from the hidden input to the database.
<?php
if ($_SERVER["REQUEST_METHOD"] == "POST") {
//grabs the name of the hidden input that was posted
$pcd= $_POST['content'];
$uid="";
$bid="";
$cnum="";
$cid="";
//connect to database
$mysqli = new mysqli("localhost","root","","nw");
//error checking the connection
if ($mysqli -> connect_errno) {
echo "Failed to connect to MySQL: " . $mysqli -> connect_error;
exit();
}
//submits it
$stmt= $mysqli->prepare("INSERT INTO usercontent (userid, bookid, chapterid, chapternum,data) VALUES (?,?,?,?,?)");
$stmt->bind_param("sssss", $uid, $bid,$cid, $cnum,$pcd);
$stmt->execute();
$stmt -> close();
$mysqli -> close();
}
?>
Hope this will help someone as much as this person helped me.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,691 |
Q: Add language to LANGUAGES django/python I am using Django 1.10 and Python 3.5.
How do I add en-ca Canada English to the {{ LANGUAGES }}?
This is the code I have used to temporarily display the available languages, just so I can see the languages:
{% for lang in LANGUAGES %}
{{ lang }}<br />
{% endfor %}
The only English languages in the list are:
('en', 'English')
('en-au', 'Australian English')
('en-gb', 'British English')
I am wanting to add English Canada as I must set the selected value to a language select list on the landing page.
EDIT:
The django docs LANGUAGES reference is here:
The LANGUAGES list is in the django global settings here:
A: For some very strange and unknown reason, when I added the LANGUAGES = [ ... ] code filled with my languages to my settings.py file, the code was not recognised.
After a re-start of my pc, the issue is resolved.
I hope this does help someone.
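For reference, a minimal sketch of the setting in settings.py (the display names are just labels; Django's own global settings wrap them in ugettext_lazy, but plain strings work too):
LANGUAGES = [
    ('en', 'English'),
    ('en-gb', 'British English'),
    ('en-ca', 'Canadian English'),
]
With this in place, Django limits the available languages to this list, so the template loop above will show only these entries.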
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,835 |
Q: Can we use a VM in Azure DevOps? I have a Neo4j VM as a database in Azure and I also use it to call Neo4j algorithms.
Is it possible to use this VM in Azure DevOps? I searched the Azure documentation and found nothing.
I don't know if I can use a VM in Azure DevOps.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,026 |
Ludwig Bechstein (24 November 1801 – 14 May 1860) was a German writer and collector of folk fairy tales.
He was born in Weimar, the illegitimate child of Johanna Carolina Dorothea Bechstein and Hubert Dupontreau, a French emigrant who disappeared before the birth of the child; Ludwig thus grew up very poor in his first nine years. His situation improved only when his uncle Johann Matthäus Bechstein, a renowned naturalist and forester living in Meiningen in the country of Duchy of Saxe-Meiningen, adopted him in 1810. He was sent to school in Meiningen, and in 1818, started an apprenticeship as a pharmacist.
From 1828 to 1831 he studied philosophy and literature in Leipzig and Munich thanks to a stipend granted by Duke Bernhard II of Sachsen-Meiningen, who hired him subsequently as a librarian. This lifetime post provided Bechstein with a continuous income, while leaving him a lot of freedom to pursue his own interests and writing. He lived from 1831 until his death in Meiningen. In his honor, a fountain was built in the English Garden.
Bechstein published many works and was a successful author of his time. His German Fairy Tale Book was even more popular than the Brothers Grimm's collection when it was first published in 1845. He published several collections of folk tales, and also published romances and poems.
Important works
Thüringische Volksmärchen (1823)
Sonettenkränze (1826, through which Duke Bernhard became interested in him)
The Children of Haymon (1830, epic poem)
Der Totentanz (The Dance of Death, 1831, epic poem)
Grimmenthal (1833, novel)
Luther (1834)
Der Sagenschatz und die Sagenkreise des Thüringerlandes (A treasury of the tales of Thuringian legends and legend cycles)(1835–38)
Fahrten eines Musikanten (Journeys of a Musician, 1836–37, novel)
Deutsches Märchenbuch (German Fairy-Tale Book, 1845; 41st ed., 1893); French translation with introduction and comments: Corinne and Claude Lecouteux, Paris, José Corti, 2010 (collection Merveilleux); English (complete) translation: Michael Haldane (see External Links)
New Natural History of Pet Birds (1846, humorous didactic poem)
Berthold der Student (1850, novel)
Deutsches Sagenbuch (1853)
Neues Deutsches Märchenbuch (New German Fairy-Tale Book, 1856; 105th ed., 1922); English (complete) translation: Michael Haldane (see External Links)
Thüringer Sagenbuch (1858)
Thuringia's Royal House (1865)
Schools named after Ludwig Bechstein
Staatliche Grundschule 6, Erfurt: Bechsteinschule (public elementary school)
Ludwig-Bechstein-Grundschule, Meiningen: (public elementary school)
Notes
References
External links
1801 births
1860 deaths
Writers from Weimar
People from Saxe-Weimar
Artists from Meiningen
People from Saxe-Meiningen
Leipzig University alumni
Ludwig Maximilian University of Munich alumni
German male writers | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,492 |
'Evolution of Evil' is the story of Lori and Christopher, a Pacific Northwest couple wanting to get away from the hustle and bustle of the city into the wilderness. The couple makes their way past the usual campsites, and backpacks into the remote Mt. Hood Wilderness area. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,732 |
Contact us today to reserve your FREE one-hour genealogy consultation.
All meetings will occur at the museum during business hours. No guarantee is given; best efforts will be made. We cannot recreate the 1890 census.
Fill out the contact form and we will get in touch to schedule your time.
Spring Grove Area Historical Preservation Society
100 Glenview Road, P.O. Box 383
Spring Grove, PA, 17362, US
Spring Grove Area Historical Preservation Society is a 501(c)3, not-for-profit, charitable organization located in Spring Grove, Pennsylvania. All donations stay within the organization to create a historical legacy for this and future generations of Spring Grove area residents. Donations may be tax deductible, check with your financial advisor.
Spring Grove Area Historical Preservation Society does not and shall not discriminate on the basis of race, color, religion (creed), gender, gender expression, age, national origin (ancestry), disability, marital status, sexual orientation, or military status, in any of its activities or operations.
© 2023 Spring Grove Area Historical Preservation Society | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,065 |
{"url":"https:\/\/www.physicsforums.com\/threads\/double-slit-experiment-and-gravity.790430\/","text":"# Double-slit experiment and gravity\n\n1. Jan 3, 2015\n\n### Ookke\n\nIf the particles used in double-slit experiment were massive enough and\/or our equipment sensitive enough, could we use gravity to spy what path the particles take even before they hit the detector? Would this kind of \"measurement\" destroy the interference?\n\n2. Jan 3, 2015\n\n### phinds\n\nANY measurement destroys the interference.\n\nI don't see how you could \"use gravity\" in the sense you are saying. What would you do, stop the particles, put them on a scale, then send them on their way. I'll bet that would destroy the interference for sure\n\n3. Jan 3, 2015\n\n### bahamagreen\n\nI think Ookke is thinking the opposite way, phinds... using the gravitational dislocation of resting test masses to infer the path of the massive particle going through the slits - a kind of indirect weak measurement.\n\nImagine we discovered a stream of microscopic black holes entering the solar system and set up the test for their arrival using Ricci ring deformations to measure the little BH paths through the apparatus\n\n4. Jan 3, 2015\n\n### Staff: Mentor\n\nYes. It's a historical accident that the words \"measurement\" and \"observation\" are so widely used here, when \"interaction\" might in hindsight have been less misleading. A detectable gravitational interaction, like any other detectable interaction, will suffice to eliminate the interference.\n\n5. Jan 3, 2015\n\n### Staff: Mentor\n\nWeak measurement or no weak measurement QM is unequivocal - if you know the path interference is destroyed\n\nA string of miniature black holes - well assuming such actually existed QM is clear - you know the path - interference disappears.\n\nThanks\nBill\n\n6. Jan 3, 2015\n\n### Ookke\n\nOk. As there is always some gravity present in double-slit experiments on Earth, and the particles e.g. electrons (I suppose) interact with the gravity field, there must be some threshold so that tiny gravitational effects do not eliminate the interference. This threshold must be something fundamental and not really depending on the accuracy of our measuring equipment.\n\n7. Jan 3, 2015\n\n### Staff: Mentor\n\nNugatory was referring to interactions that 'observe' the path - such, for example, would be placing a detector in one of the slits.\n\nGravity is not a which path observation.\n\nOver the size of your typical double slit experiment on earth gravity is effectively constant so has a path independent effect - assuming it actually has a measurable effect on a double slit experiment - which it doesn't - its far too weak to affect electrons in any significant way.\n\nThanks\nBill\n\nLast edited: Jan 3, 2015\n8. Jan 3, 2015\n\n### Ookke\n\nIf there was no limit for the accuracy of our equipment, we could put test masses at left and right side of the room, release an electron and see if there is any difference. I would expect that the electron creates tiny gravity field around it and interacts more with left side test mass, if it goes through left slit, and the same with right side.\n\nSure, but in principle. And the particle could be more massive than electron.\n\n9. Jan 4, 2015\n\n### Staff: Mentor\n\nWhy do you think that's a double slit type experiment where the electron goes through one slit or the other? 
I think you need to specify your exact set-up, what you expect to happen, and why.\n\nJust so you understand what is meant, and why we get interference effects, see the following:\nhttp:\/\/arxiv.org\/ftp\/quant-ph\/papers\/0703\/0703126.pdf\n\nThe reason you get an interference pattern is the state behind the screen is the superposition of the state going through each hole. If you know which hole it went through ie it goes through one hole, it's not a superposition and you do not get interference.\n\nPrecisely how do you propose 'gravity' to force it through one hole or the other, and not be a superposition of both?\n\nThanks\nBill\n\nLast edited: Jan 4, 2015\n10. Jan 4, 2015\n\n### vanhees71\n\nIt's pretty hard to study the effects of gravitation on subatomic particles, because it's the weakest of all the fundmental interaction. It's weaker by a factor of about $10^{40}$ or something in this order of magnitude compared to the electromagnetic interaction.\n\nNevertheless, nowadays one can check a gedanken experiment with \"cold\" neutrons. Besides photons (which are difficult to study on the beginner's level, because they are massless and thus always ultrarelativistic, and you need pretty advanced math to really understand them right) neutrons enable us to make the most precise measurements on the foundations of quantum theory.\n\nThe most simple thing is to study non-relativistic neutrons in the (homogeneous) gravitational field of the earth. This is a problem usually treated in quantum mechanics 1 as an example for an energy eigenvalue problem (time-independent Schr\u00f6dinger equation) solvable in momentum space, leading to Airy functions as solutions.\n\nAt the Institut Laue Langevin in Grenoble the experimentalists put neutrons above a neary ideal mirror (mirror for neutrons of course) such that the neutrons in the gravitational field got bound. You can evaluate the corresponding discrete energy eigenvalues analytically and compare to the measured values, which are in the peV (pico-electron volts, i.e., $10^{-12} \\; \\text{eV}$.\n\nHere are the two papers about the experiment (for the first one, I couldn't find a publicly available source, which is due to the restrictive copyright politics by Nature):\n\nhttp:\/\/www.nature.com\/nature\/journal\/v415\/n6869\/abs\/415297a.html\n\nPhys. Rev. D 67, 102002 (2003)\nhttp:\/\/arxiv.org\/abs\/hep-ph\/0306198\n\n11. Jan 5, 2015\n\n### Ookke\n\nI didn't mean an experiment that forces the particle to appear at either slit, but something where we compare tiny gravitational effects at different places to get hint of the path that particle goes. This is somewhat analogous to cellular network that is able to (at least roughly) get phone location by comparing phone signal at different nodes. Or something like that a submarine does with its passive sonar.\n\nMaybe this wouldn't work even with ideal equipment, but this was based on intuition that the particle must create gravitational field even in interfered state. If the field is stronger somewhere, this is probably where the particle is near. And if the field is uniform everywhere, this would be important result too, supporting the idea that particle (in its interfered state) has no specific location unless it's forced to appear. But that's true I need to study this more.\n\n12. Jan 5, 2015\n\n### Staff: Mentor\n\nSo you are talking about a hand-wavy very vague experimental set-up that somehow measures the path of the electron. 
QM is clear - you measure that path - interference disappears.\n\nWhy do you believe you can view a non localised particle as point particle with a field? In other words why do you think your classically formed intuition applies to QM?\n\nIts wrong BTW - QFT says the field of an electron is not like that - intuition is a very poor guide to QM.\n\nThanks\nBill\n\n13. Jan 5, 2015\n\n### StevieTNZ\n\n14. Jan 6, 2015\n\n### vanhees71\n\nI think the idea is the following (speaking in terms of Newtonian gravity, because relativistic QFT in a curved background space-time is a very tricky business): You use a double slit with horizontal slits. Then in principle the particles moving through the lower slit has a lower energy than the particle running through the upper slit, supposed you have a particle beam hitting the slits with a very well defined momentum. Of course, this is practically very unlikely to be achieved, because you can not make the distance between the slits pretty large if you want to see interference fringes, and the energy (momentum) difference of the particles due to the gravitational field of the earth is negligibly tiny and for sure not measurable within the accuracy limits you can prepare the particle's momentum to begin with.\n\nIn any case the which-way information has to be established by entangeling it with some intrinsic property of the particles (or photons). E.g., in the famous quantum eraser experiment one uses the photons' polarization to gain which-way information. To gain complete which-way information you must make sure that the polarization of the photons running through slit 1 is perpendicular to the polarization of photons running through slit 2, and thus you don't see interference effects anymore, because the corresponding polarization states are orthogonal and thus the interference term vanishes. If you make only a \"weak measurement\" of which-way information, you only get less contrast in your interference pattern but have a corresponding uncertainty in the which-way information.\n\nThe proposed experiment in Nature uses gravity to provide the entanglement between the way the particles go and some internal degrees of freedom providing a \"clock\". It's not clear to me, how they concretely want to realize the clock. I guess one has to follow the paper carefully and check the references. 
I wonder whether it's practically possible to make such a measurement, but with neutrons there may be some chance, because they can be handled with utmost precision.\n\nLast edited: Jan 6, 2015","date":"2018-07-18 11:23:41","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6051963567733765, \"perplexity\": 704.2363612313802}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-30\/segments\/1531676590127.2\/warc\/CC-MAIN-20180718095959-20180718115959-00355.warc.gz\"}"} | null | null |
Prochiloneurus nigriflagellum is a species of wasp first described by Girault in 1932. Prochiloneurus nigriflagellum belongs to the genus Prochiloneurus and the family Encyrtidae. No subspecies are listed in the Catalogue of Life.
Sources
Encyrtidae
nigriflagellum | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,402 |
Q: injecting a web user control with Ninject in WebForms I have an aspx page into which I would like to have a reference to a user control injected. The user control is stored in a separate assembly and loaded at runtime. After injecting the user control, it should then be loaded into the control collection of the page.
Everything seems to be working fine except for the point of adding the control to the page. There is no error, but the UI for the control doesn't show.
global.asax.cs
protected override Ninject.IKernel CreateKernel()
{
var modules = new INinjectModule[] { new MyDefaultModule() };
var kernel = new StandardKernel(modules);
// Loads the module from an assembly in the Bin
kernel.Load("ExternalAssembly.dll");
return kernel;
}
and here is how the external module is defined in the other assembly:
public class ExternalModule : NinjectModule
{
public ExternalModule() { }
public override void Load()
{
Bind<IView>().To<controls_MyCustomUserControlView>();
}
}
The debugger shows that when the app is run, the Load on the external module is being called, and the dependency gets injected into the page.
public partial class admin_MainPage : PageBase
{
[Inject]
public IView Views { get; set; }
At this point when trying to add the view (here a user control) to the page, nothing is shown. Is this something to do with the way the user control is created by Ninject? The loaded control seems to have an empty control collection.
inside aspx page
var c = (UserControl)Views;
// this won't show anything. even the Control collection of the loaded control (c.Controls) is empty
var view = MultiView1.GetActiveView().Controls.Add(c);
// but this works fine
MultiView1.GetActiveView().Controls.Add(new Label() { Text = "Nice view you got here..." });
and finally the view/user control:
public partial class controls_MyCustomUserControlView : UserControl, IView
{
}
It contains just one label:
<%@ Control Language="C#" AutoEventWireup="true" CodeFile="MyCustomUserControlView.ascx.cs" Inherits="controls_MyCustomUserControlView" %>
<asp:Label Text="Wow, what a view!" runat="server" />
A: I was able to get this working by calling Page.LoadControl with the user control as a resource.
Page.LoadControl(typeof(controls_MyCustomUserControlView), null) does not work; however, Page.LoadControl("MyCustomUserControlView.ascx") does.
As the control is in an external assembly, first create a VirtualPathProvider and VirtualFile as discussed at http://www.codeproject.com/KB/aspnet/ASP2UserControlLibrary.aspx
The custom VirtualPathProvider will be used to check if the user control is located in an external assembly, and the VirtualFile (user control) will get returned as a resource from the assembly.
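A rough sketch of what that provider/file pair can look like (the class names, the "~/Plugins/" convention and the resource-name handling are illustrative assumptions, not the exact code from that article):
using System;
using System.Collections;
using System.IO;
using System.Reflection;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public class PluginVirtualPathProvider : VirtualPathProvider
{
    // Treat anything under ~/Plugins/ as living inside an external assembly
    private static bool IsPluginPath(string virtualPath)
    {
        return VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith("~/Plugins/", StringComparison.OrdinalIgnoreCase);
    }

    public override bool FileExists(string virtualPath)
    {
        return IsPluginPath(virtualPath) || base.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsPluginPath(virtualPath)
            ? new PluginVirtualFile(virtualPath)
            : base.GetFile(virtualPath);
    }

    public override CacheDependency GetCacheDependency(
        string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        // There is no physical file to watch for plugin paths
        return IsPluginPath(virtualPath)
            ? null
            : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
    }
}

public class PluginVirtualFile : VirtualFile
{
    private readonly string appRelativePath;

    public PluginVirtualFile(string virtualPath) : base(virtualPath)
    {
        appRelativePath = VirtualPathUtility.ToAppRelative(virtualPath);
    }

    public override Stream Open()
    {
        // e.g. "~/Plugins/ExternalAssembly.dll/MyCustomUserControlView.ascx"
        string[] parts = appRelativePath.Split('/');
        var assembly = Assembly.LoadFrom(
            HostingEnvironment.MapPath("~/bin/" + parts[2]));

        // The manifest resource name usually needs the assembly's default namespace prefix
        return assembly.GetManifestResourceStream(parts[3]);
    }
}
The provider is registered once at application startup, e.g. in Application_Start:
HostingEnvironment.RegisterVirtualPathProvider(new PluginVirtualPathProvider());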
Next setup the Ninject module to load the user control:
public override void Load()
{
//Bind<IView>().To<controls_MyCustomUserControlView>();
Bind<IView>().ToMethod(LoadControl).InRequestScope();
}
protected IView LoadControl(Ninject.Activation.IContext context)
{
var page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null)
{
//var control = page.LoadControl(typeof(controls_MyCustomUserControlView), null);
var control = page.LoadControl("~/Plugins/ExternalAssembly.dll/MyCustomUserControlView.ascx");
return (IView)control;
}
return null;
}
"Plugins" is just a prefix for determining in the VirtualPathProvider if the control is located in another assembly.
If your user control has a namespace then make sure you prefix the control name in LoadControl. The other thing is making sure to use CodeBehind instead of CodeFile, as ASP.NET will try to load a CodeBehind file.
<%@ Control Language="C#" AutoEventWireup="true"
CodeBehind="~/MyCustomUserControlView.ascx.cs" Inherits="controls_MyCustomUserControlView" %>
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,263 |
Howie Fox Net Worth 2018: What is this baseball player worth?
Howie Fox was a pro baseball player who played Pitcher. Fox was born on March 1, 1921, in Coburg, Oregon. Fox died on October 9, 1955, in San Antonio, Texas. This page will take a closer look at Howie Fox's net worth.
Howie Fox Career, Earnings
Fox batted Right and threw Right. Fox debuted in the MLB on September 28, 1944 for the Cincinnati Reds. In all, Fox played for the Cincinnati Reds, Philadelphia Phillies, and Baltimore Orioles. Fox's career ended with the Baltimore Orioles in 1954.
Some of Fox's most prominent statistics in the MLB included a Win-loss record (pitching) stat of 43-72, a Earned run average stat of 4.33, and a Strikeouts stat of 342.
Howie Fox Net Worth 2018
Baseball player annual pay can range widely. In the MLB, the median pay is around $3 million annually. Top players can get $25 million or more per year, and lower rated players earn $1 million or less. In the minor leagues, most contracts are worth less than $10,000 a year.
Howie Fox net worth: baseball salary distribution
So what was baseball player Howie Fox's net worth at the time of death? Our estimate for Howie Fox's net worth at death is:
Want to see some related net worth articles? Check out these: Evan Skoug, Gerry Shea, Johnny Schaive, Rich Gee, John Leovich, Brian Dorsett, Herb Barnhill, Jesse Cannady, Eric Fryer, Happy Townsend, and Scott Cousins. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,025 |
Q: Modeling nested data structures in Entity Framework I'm trying out Entity Framework 7 RC1 (with every major version, I check back to see if it's worth the hassle), but I'm having some trouble understanding how I'm supposed to model certain entities.
As an example I went back to a simple real-world application I have lying around. It was made to manage Windows print servers (ugh).
Here's an example of some of the database tables:
* PrinterType: A table describing the different types of printers (Printers, plotters and MFPs)
* PrinterManufacturer: A table describing different manufacturers (Xerox, Canon, Samsung, HP, etc.)
* PrintServer: A table holding name and description of our print servers
* PrintServerSupport: A table mapping the Server ID, Type ID and Manufacturer ID to show which specific printers a print server supports.
Here's the DDL:
CREATE TABLE dbo.PrintServer
(
ID INT IDENTITY NOT NULL,
Name VARCHAR(MAX) NOT NULL,
Description VARCHAR(MAX) NULL,
CONSTRAINT [PK_PrintServer_ID] PRIMARY KEY (ID),
)
CREATE TABLE dbo.PrintServerSupport
(
ID INT IDENTITY NOT NULL,
ServerID INT NOT NULL,
TypeID INT NOT NULL,
ManufacturerID INT NOT NULL,
CONSTRAINT [PK_PrintServerSupport_ID] PRIMARY KEY (ID),
CONSTRAINT [FK_PrintServerSupport_ServerID] FOREIGN KEY (ServerID) REFERENCES PrintServer (ID) ON DELETE CASCADE,
CONSTRAINT [FK_PrintServerSupport_TypeID] FOREIGN KEY (TypeID) REFERENCES PrinterType (ID) ON DELETE CASCADE,
CONSTRAINT [FK_PrintServerSupport_ManufacturerID] FOREIGN KEY (ManufacturerID) REFERENCES PrinterManufacturer (ID) ON DELETE CASCADE
)
CREATE TABLE dbo.PrinterType
(
ID INT IDENTITY NOT NULL,
Type VARCHAR(MAX) NOT NULL,
CONSTRAINT [PK_PrinterType_ID] PRIMARY KEY (ID),
)
CREATE TABLE dbo.PrinterManufacturer
(
ID INT IDENTITY NOT NULL,
Manufacturer VARCHAR(MAX) NOT NULL,
CONSTRAINT [PK_PrinterManufacturer_ID] PRIMARY KEY (ID)
)
Now, turning this into C# POCO entities would apparently amount to something along these lines:
public partial class PrinterManufacturer
{
public PrinterManufacturer()
{
PrintServerSupport = new HashSet<PrintServerSupport>();
}
public int ID { get; set; }
public string Manufacturer { get; set; }
public virtual ICollection<PrintServerSupport> PrintServerSupport { get; set; }
}
public partial class PrinterType
{
public PrinterType()
{
PrintServerSupport = new HashSet<PrintServerSupport>();
}
public int ID { get; set; }
public string Type { get; set; }
public virtual ICollection<PrintServerSupport> PrintServerSupport { get; set; }
}
public partial class PrintServer
{
public PrintServer()
{
PrintServerSupport = new HashSet<PrintServerSupport>();
}
public int ID { get; set; }
public string Description { get; set; }
public string Name { get; set; }
public virtual ICollection<PrintServerSupport> PrintServerSupport { get; set; }
}
public partial class PrintServerSupport
{
public int ID { get; set; }
public int ManufacturerID { get; set; }
public int ServerID { get; set; }
public int TypeID { get; set; }
public virtual PrinterManufacturer Manufacturer { get; set; }
public virtual PrintServer Server { get; set; }
public virtual PrinterType Type { get; set; }
}
Now, imagine I want to select all print servers; I would merely have to do the following? (Please keep in mind my experience with EF is very limited)
using (var db = new DbContext())
{
var query = db.PrintServer.Include(s => s.PrintServerSupport);
}
However, when debugging, this returns the following rather strange resultset:
As you can see, the Manufacturer and Type fields are not populated. Curiously enough, the nested Server fields are...
To make things even more annoying, I'm also receiving JSON payloads with nested data. Here's an example:
[
{
"Name":"REDACTED",
"Description":"Xerox MFP TEST",
"SupportedPrinters": [
{
"Type":"Printer",
"Manufacturer":"XEROX"
},
{
"Type":"Plotter",
"Manufacturer":"XEROX"
},
{
"Type":"MFP",
"Manufacturer":"XEROX"
}
]
},
{
"Name":"REDACTED-2",
"Description":"Xerox MFP TEST 2",
"SupportedPrinters": [
{
"Type":"Printer",
"Manufacturer":"SAMSUNG"
},
{
"Type":"Plotter",
"Manufacturer":"SAMSUNG"
}
]
}
]
Marshaling and unmarshaling this data is a piece of cake, but what about unmarshaling data, and then updating the database? I always found it to be pretty difficult problem, and I'm curious as to how EF is supposed to help out here.
What is the correct way of both modeling the data and querying it?
A: I don't think lazy loading is enabled by default in EF 7 when you mark your navigation properties virtual (like in EF6). This is to reduce unnecessary trips to the database.
You can load your related entities by using ThenInclude
using (var db = new DbContext())
{
var query = db.PrintServer.Include(s => s.PrintServerSupport)
.ThenInclude(p => p.Manufacturer);
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,279 |
While UNIONs can be useful for certain cases, they can often be avoided completely with small changes to the query.
In this article we'll present various example cases where a UNION isn't necessary, and a simple Cypher query will do.
There are cases where you want all nodes connected to a common node in some way, including the starting node, and all of these nodes are connected by the same pattern.
A prototypical case is, for a given actor, to get all of the actors of movies they've worked on, including the starting actor.
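For example, the naive single-pattern query looks something like this (a sketch against the standard movie-graph schema; labels and relationship types such as :Person, :Movie and :ACTED_IN are assumptions):
MATCH (keanu:Person {name: 'Keanu Reeves'})-[:ACTED_IN]->(movie:Movie)<-[:ACTED_IN]-(coactor:Person)
RETURN DISTINCT coactor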
Since Cypher only allows a single traversal of a relationship in each matched path, this won't return Keanu Reeves as a coactor. The :ACTED_IN relationship used to match to the movie node can't be traversed back again when finding coactors.
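One way around this is to UNION the coactors with the starting actor, roughly:
MATCH (keanu:Person {name: 'Keanu Reeves'})-[:ACTED_IN]->(:Movie)<-[:ACTED_IN]-(coactor:Person)
RETURN DISTINCT coactor
UNION
MATCH (keanu:Person {name: 'Keanu Reeves'})
RETURN keanu AS coactor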
This allows Keanu Reeves to show up in the results as desired. However this is more complicated than it needs to be, and we can't continue processing the unioned result, if that's something we need later.
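The same result can be had without UNION by splitting the pattern across two MATCH clauses, along these lines:
MATCH (keanu:Person {name: 'Keanu Reeves'})-[:ACTED_IN]->(movie:Movie)
MATCH (movie)<-[:ACTED_IN]-(coactor:Person)
RETURN DISTINCT coactor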
By using a second MATCH, we've broken up the paths used, so there's no longer any restriction for the :ACTED_IN relationships when we match back out to coactors for a movie. The relationship to Keanu Reeves is treated as any other relationship from the MATCH, and Keanu Reeves appears in the results.
For some queries we may want the results from two similar patterns, but there may be some extra traversals on one that aren't present in the other.
For example, building on the previous query for coactors of Keanu Reeves, maybe we want to find coactors not just through the movies Keanu Reeves has acted in, but similar movies.
However the two match patterns are similar enough that we can actually get the results we want without UNION.
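A sketch of such a query (the :SIMILAR relationship type is an assumption):
MATCH (keanu:Person {name: 'Keanu Reeves'})-[:ACTED_IN]->(:Movie)-[:SIMILAR*0..1]-(movie:Movie)
MATCH (movie)<-[:ACTED_IN]-(coactor:Person)
RETURN DISTINCT coactor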
We are using a variable-length relationship for :SIMILAR with a lower bound of 0.
This means that the two connected nodes in the pattern may be the same node, with no actual relationship between them.
This will allow movie to match both to the movies Keanu Reeves acted in, as well as any :SIMILAR movies if such a relationship exists.
This [*0..1] trick basically represents an optional connection, and can be used when we want both a node and a connected node to be treated the same way (and represented by the same variable, if needed) in the pattern.
In the above example our optional connection was between :Movie nodes, allowing us to get nodes connected to both the starting node, and an adjacent node.
We can also use this approach when it's possible the initial node isn't the one we want, but an adjacent node that might possibly be beyond it.
If we want to return movie recommendations from friends, it's easy enough just to return :Movie nodes recommended by a friend.
But if we also want to return movies associated with non-movie recommendations (such as movies based on recommended books or other media), then the query is a little more complicated.
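One possible shape for that query (the :FRIEND, :RECOMMENDED and :BASED_ON relationship types are assumptions):
MATCH (me:Person {name: 'Me'})-[:FRIEND]->(:Person)-[:RECOMMENDED]->(item)
MATCH (movie:Movie)-[:BASED_ON*0..1]->(item)
RETURN DISTINCT movie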
If the recommended item is a :Movie, it will be included, and if it's something that a :Movie was based on, that movie will also be included. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,596 |
Have you ever found a curriculum that simply blows you away and makes you reconsider everything you've ever used for homeschooling? For me that curriculum company is Moving Beyond the Page. Their unit studies are so thorough, so simple to use and yet so much fun!
For this review, we were given access to our choice of products. Trust me, this was NOT an easy decision to make. Since we have been studying the Revolutionary War in relation to our Hometown of Georgetown, SC, I choose two units that I thought we best fit to our current lessons. For our Language Arts Package, I chose Abigail Adams and for our History Package, I chose Revolution.
As soon as I saw Moving Beyond the Page listed as one of our vendors, I knew I didn't want to miss out. I reviewed another set of packaged programs for this company with the Schoolhouse Team last year and we absolutely LOVED it. I wanted to see if we'd feel the same way this year and I'm happy to say we still LOVE them.
Our Language Arts Package provided us with an online guidebook to use and a physical book to read to learn all about the life of Abigail Adams. The book we received is Abigail Adams Witness to a Revolution by Natalie S. Bober. It is filled with information on Mrs. Adams life and the things she saw during this important time in the early beginning of America.
The online guide for Abigail Adams provides a wonderful lesson plan which is simple to follow. You simply log into the website and follow the links. There are activity pages that can be printed out to use with the lessons being learned. There are also bonus suggestions from others to help you extend your lessons even farther. I really enjoy the formatting of the online guide and the extra content that helps to bring the lessons to life.
I must admit, though, that my children found the book on Mrs. Adams lengthy and slightly boring, and so did I, since I was reading it to them. I think this book and lesson package would be better suited to a more structured homeschool setting than ours. I was able to find readings from various websites and books from the library to make our lessons more interesting. There were some areas where reading the information provided in Abigail Adams Witness to a Revolution could not be avoided, and we read through it and did learn a lot.
For our History Package, we were sent a physical copy of the Revolution Guidebook, along with the hands-on book Great Colonial America Projects You Can Build Yourself by Kris Bordessa and We Were There, Too! Young People in US History by Phillip Hoose. I will admit this package was immediately a favorite of both myself and the kiddos. Both these books are amazing and they add so much to the lesson itself.
Great Colonial America Projects contains so many interesting facts and super-fun projects to complete! It includes words to know which can be used for Vocabulary exercises and also for copywork. It features different persons from the time period throughout the book, providing a better look at those important historical figures and their influence on American History. This book also gives timeline tidbits would could easily be used to create a physical timeline in your lessons. There are so many things we learned from this one little book!
We Were There, Too! is a beautiful book itself. It's a glossy covered, hardback book that is sturdily made. It reads like a textbook but the stories are so interesting. They really draw the reader in and the students as well. Beginning with the discovery of America, this book takes you on a long walk through History. One that you won't soon forget.
We Were There, Too! also contains new words that can be used for Vocabulary, Spelling, Copywork and more. Lessons on the specific people who made History happen can be easily expanded upon, though the book is pretty thorough in examining these people and is historically accurate in its information. This book is a wonderful addition to our lessons and our homeschooling.
The Revolution Guidebook is approximately 168 pages long. It begins with covering history from the settling of America which did surprise me, as I was expecting it to only cover the actual Revolutionary period and the War itself. I was thrilled to discover it began with America's settlement and covered some information that we had already been discussing in our homeschooling over the weeks before we began this study.
The guidebooks for both of these packages are written so that the teacher can jump right in and begin right away. The packages come with the books and some packages come with extra activities that may otherwise be hard to locate so that you can easily begin the study. These packages are set up as unit studies and as most unit studies do, they cover the information being learned thoroughly both by providing excellent reading selections with the chosen books and by providing a thorough and completely planned guidebook to direct the teacher through the lesson.
We used these unit studies together in conjunction as much as possible. They fit well together, since Abigail Adams was a witness to the Revolution, which we were discussing in the Revolution study. We used these lessons almost daily over the review period and we learned a lot from them. We also learned a lot about the history of our hometown, and nearby Charleston, in the process. We enjoyed how the lessons brought so much of our home's past to life for us.
The guidebooks provide lessons activities, extensions, suggestions for extra books to read and so much more. Best yet, they are so so so easy to follow. We have learned so much with these lesson packages in such a short time because of the simple and easy layout which makes it easy to jump right in and learn. I'm really considering switching other parts of our learning program to Moving Beyond the Page because of the ease of using the program and it's completeness.
The packaging prices aren't bad either. The Language Arts Package for Abigail Adams with the Online Guidebook is priced at $26.88 at the time of this review. This includes access to the guidebook online and the physical book Abigail Adams: Witness to a Revolution. This package is designed for use with students ages 12-14 or who have a reading level of 8-9th grade.
The History Package for Revolution with a printed copy of the guidebook is $65.93. It comes with the guidebook in print copy, a copy of Great Colonial American Projects You Can Build Yourself and We Were There, Too!: Young People in US History. This package is designed for use with students age 12-14 or who have a reading level of 8-9th grade.
To read more reviews of Moving Beyond the Page and see other curriculum packages available, please click on the banner below! | {
"redpajama_set_name": "RedPajamaC4"
} | 4,771 |
using EventStoreLite;
using Raven.Client;
using Snittlistan.Web.Areas.V2.Domain;
using Snittlistan.Web.Areas.V2.Domain.Match;
using Snittlistan.Web.Areas.V2.Domain.Match.Events;
using Snittlistan.Web.Areas.V2.ReadModels;
#nullable enable
namespace Snittlistan.Web.Areas.V2.Handlers;
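// Projects match registration and scoring events into ResultForPlayerReadModel
// documents (one per player, match and roster combination) in the RavenDB document store.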
public class ResultForPlayerHandler :
IEventHandler<MatchResultRegistered>,
IEventHandler<MatchResult4Registered>,
IEventHandler<Serie4Registered>,
IEventHandler<SerieRegistered>,
IEventHandler<ScoreAwarded>
{
public IDocumentSession DocumentSession { get; set; } = null!;
public void Handle(MatchResultRegistered e, string aggregateId)
{
// need to delete some entries
ResultForPlayerReadModel[] modelsToDelete = DocumentSession.Load<ResultForPlayerReadModel>(
e.PreviousPlayerIds.Select(x => ResultForPlayerReadModel.GetId(x, e.BitsMatchId, e.RosterId)));
HashSet<string> toKeep = new(e.RosterPlayers.Select(x => ResultForPlayerReadModel.GetId(x, e.BitsMatchId, e.RosterId)));
foreach (ResultForPlayerReadModel modelToDelete in modelsToDelete)
{
if (toKeep.Contains(modelToDelete.Id) == false)
{
DocumentSession.Delete(modelToDelete);
}
}
Roster roster = DocumentSession.Load<Roster>(e.RosterId);
foreach (string playerId in e.RosterPlayers)
{
string id = ResultForPlayerReadModel.GetId(playerId, e.BitsMatchId, e.RosterId);
ResultForPlayerReadModel model = DocumentSession.Load<ResultForPlayerReadModel>(id);
if (model == null)
{
model = new ResultForPlayerReadModel(roster.Season, playerId, e.BitsMatchId, roster.Id!, roster.Date);
DocumentSession.Store(model);
}
model.Clear();
}
}
public void Handle(SerieRegistered e, string aggregateId)
{
foreach (MatchTable table in new[] { e.MatchSerie.Table1, e.MatchSerie.Table2, e.MatchSerie.Table3, e.MatchSerie.Table4 })
{
string id1 = ResultForPlayerReadModel.GetId(table.Game1.Player, e.BitsMatchId, e.RosterId);
DocumentSession.Load<ResultForPlayerReadModel>(id1).AddGame(table.Score, table.Game1);
string id2 = ResultForPlayerReadModel.GetId(table.Game2.Player, e.BitsMatchId, e.RosterId);
DocumentSession.Load<ResultForPlayerReadModel>(id2).AddGame(table.Score, table.Game2);
}
}
public void Handle(MatchResult4Registered e, string aggregateId)
{
// need to delete some entries
ResultForPlayerReadModel[] modelsToDelete = DocumentSession.Load<ResultForPlayerReadModel>(e.PreviousPlayerIds.Select(x => ResultForPlayerReadModel.GetId(x, e.BitsMatchId, e.RosterId)));
HashSet<string> toKeep = new(e.RosterPlayers.Select(x => ResultForPlayerReadModel.GetId(x, e.BitsMatchId, e.RosterId)));
foreach (ResultForPlayerReadModel modelToDelete in modelsToDelete)
{
if (toKeep.Contains(modelToDelete.Id) == false)
{
DocumentSession.Delete(modelToDelete);
}
}
Roster roster = DocumentSession.Load<Roster>(e.RosterId);
foreach (string playerId in e.RosterPlayers)
{
string id = ResultForPlayerReadModel.GetId(playerId, e.BitsMatchId, e.RosterId);
ResultForPlayerReadModel model = DocumentSession.Load<ResultForPlayerReadModel>(id);
if (model == null)
{
model = new ResultForPlayerReadModel(roster.Season, playerId, e.BitsMatchId, roster.Id!, roster.Date);
DocumentSession.Store(model);
}
model.Clear();
}
}
public void Handle(Serie4Registered e, string aggregateId)
{
foreach (MatchGame4 game in new[] { e.MatchSerie.Game1, e.MatchSerie.Game2, e.MatchSerie.Game3, e.MatchSerie.Game4 })
{
string id = ResultForPlayerReadModel.GetId(game.Player, e.BitsMatchId, e.RosterId);
DocumentSession.Load<ResultForPlayerReadModel>(id).AddGame(game);
}
}
public void Handle(ScoreAwarded e, string aggregateId)
{
foreach (string playerId in e.PlayerIdToScore.Keys)
{
int totalScore = e.PlayerIdToScore[playerId];
string id = ResultForPlayerReadModel.GetId(playerId, e.BitsMatchId, e.RosterId);
DocumentSession.Load<ResultForPlayerReadModel>(id).SetTotalScore(totalScore);
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,114 |
\section{Introduction}
The NASA Exoplanet Science Center (NExScI) hosts the Sagan Workshops, which are annual themed conferences aimed at introducing the latest techniques in exoplanet astronomy to young researchers. The workshops emphasize interaction with data, and include hands-on sessions where participants use their laptops to follow step-by-step tutorials given by experts. The 2012 workshop had the theme "Working With Exoplanet Light Curves," and posed special challenges for the conference organizers because the three applications chosen for the tutorials run on different platforms, and because over 160 persons attended, the largest attendance to date. One of the applications, PyKE, is a suite of Python tools designed to reduce and analyze Kepler light curves; it is called from the command line or from a GUI in PyRAF. The Transit Analysis Package (TAP) uses Markov Chain Monte Carlo (MCMC) techniques to fit light curves in the Interactive Data Language (IDL) environment, and Systemic Console analyzes Transit Timing Variations (TTV) with IDL and Java-based GUIs to confirm and detect exoplanets from timing variations in light curve fitting. Rather than attempt to run these diverse applications on the inevitable wide range of environments on attendees' laptops, the conference organizers, in consultation with the Virtual Astronomical Observatory, chose instead to run the applications on the Amazon Elastic Compute Cloud (EC2). This paper summarizes the system architecture, the Amazon resources consumed, and lessons learned and best practices.
\section{The System Architecture}
The Sagan Workshop took advantage of the EC2's capabilities to support Virtual Machines (VMs) that can be customized to meet local needs, then replicated, and then released on completion of the jobs. Fig. 1 shows the system architecture developed to support the Sagan Workshop. Participants logged into one of 20 tutorial servers via a Virtual Network Computing (VNC) connection. The Amazon Elastic Block Storage (EBS) system and the Network File System (NFS) were used to share common datasets and user home directories across all virtual machines. An IDL license server at IPAC received license requests through an ssh tunnel. The following list describes the architecture component by component and the rationale for the design choices.
\articlefigure {O03_f1.eps}{Fig 1}{Hardware architecture to support the Sagan Workshop}
\begin{itemize}
\item One master virtual machine image, built on the CentOS 64-bit operating system, was used for all servers.
A boot script determined the VM's identity.
Usernames and passwords were the same on all machines.
\item
1 TB of Elastic Block Storage (EBS), a
block-based storage service where
volumes appear as disk drives connected to VMs,
contained applications, tutorial data, and user home directories. Applications and tutorial data are installed on VM images, and so data are not lost if a tutorial server fails.
\item
The EC2 m1.2xlarge instance type was chosen to handle the load of 20 tutorial servers. It has enough memory to cache commonly accessed files, mounts all the partitions from the EBS volumes, and exports all partitions via NFS to the tutorial servers.
\item
The tutorial servers were EC2 c1.xlarge instance type, with
8 cores and 7 GB RAM, chosen because the applications were CPU-bound.
Server performance was benchmarked with 8 users, but the servers were in fact able to support up to 25 users.
\item
A Virtual Network Computing (VNC) server provided remote desktop logins to the tutorial servers. VNC is similar to the X Window system, but sends compressed images instead of drawing commands and proved more responsive than X in our tests. Each tutorial server ran one VNC server that supported up to 30 connections. The screen resolution was set to 1024x768 to balance usability and performance. In practice, the workshop used TigerVNC as the server and RealVNC as the client.
\item
The tutorial servers were connected via an ssh tunnel to an IDL license server at IPAC.
IDL VM sessions think the license server is on localhost, and the license server thinks IDL is inside IPAC's network. We used autossh to ensure the tunnel was re-established if it was disconnected.
\item
The Amazon AWS Security Rules limited access only to the VNC, SSH and IDL ports, and only from the Caltech and IPAC subnets used to support the workshop.
\end{itemize}
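As a concrete illustration of the replication step described in the list above, the following short Python sketch provisions a fleet of identical tutorial servers from a single master image using the boto3 AWS SDK. Note that boto3 post-dates the 2012 workshop and was not part of our setup, and that the image ID, key pair and security group below are placeholders only.
\begin{verbatim}
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch 20 identical tutorial servers from the master AMI.
instances = ec2.create_instances(
    ImageId="ami-00000000",           # placeholder: master VM image
    InstanceType="c1.xlarge",         # 8 cores, 7 GB RAM per server
    MinCount=20,
    MaxCount=20,
    KeyName="workshop-key",           # placeholder key pair
    SecurityGroupIds=["sg-00000000"], # limits access to VNC/SSH/IDL ports
)

# Name each server so a boot script (or an administrator) can identify it.
for i, inst in enumerate(instances):
    inst.create_tags(Tags=[{"Key": "Name", "Value": "tutorial-%02d" % i}])
\end{verbatim}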
\section{Cost of Using the Amazon Elastic Compute Cloud (EC2)}
Had the Sagan Workshop's Amazon EC2 costs not been met by an educational grant, the total cost of installation, testing and running the workshop sessions would have been \$2,876. The breakdown of the costs is shown in Table 1.
\begin{table}[!ht]
\caption{Breakdown Of The Costs of Using Amazon EC2 During The Sagan Workshop}
\smallskip
\begin{center}
{\small
\begin{tabular} {llr}
\tableline
\noalign{\smallskip}
Resource & Consumption & Cost (\$)\\
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
VM Instances & 4,159 hours & 2,738 \\
EBS Storage & 1.25 TB & 126 \\
I/O Requests & 12 million & 1 \\
Snapshot data storage & 22 GB & 3 \\
Use of elastic IP addresses & 604 hours & 3 \\
Data Transfer & 55 GB & 5\\
Total & ... & 2,876 \\
\noalign{\smallskip}
\tableline
\end{tabular}
}
\end{center}
\end{table}
\section{Lessons Learned and Best Practices}
These may be summarized as follows:
\begin{itemize}
\item Automate processes wherever possible, as this allows easier management of large numbers of machines and easy recovery in the case of failure. Tutorial servers automatically mounted NFS partitions when booted and SSH tunnels automatically reconnected on failure.
\item Test, test, and test again. Document and test all the steps required to recover if a VM fails, and step through the tutorials under as close to operational conditions as possible.
\item Develop a failover system. We copied the final software configuration to two local machines for use if Amazon failed.
\item Give yourself plenty of time to solve problems. In our case, we needed to assure the IDL vendor that licenses would not persist on the cloud, and we needed to understand the poor performance of X for remote access to the cloud.
\end{itemize}
\acknowledgements The Sagan Workshop was funded as part of the Sagan Program through NASA's Exoplanet Exploration Program. We thank Amazon Web Services for the award of a generous Educational Grant. ED, GJ and MR acknowledge support through NSF OCI-0943725. The VAO is jointly funded by NSF and NASA, and is being managed by the VAO, LLC, a non-profit 501(c)(3) organization registered in the District of Columbia and a collaborative effort of the Association of Universities for Research in Astronomy (AURA) and the Associated Universities, Inc. (AUI). We thank Dr. Peter Plavchan for suggesting we examine VNC.
\end{document}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,453 |
Q: 3 phase BLDC sinusoidal drive using PWM controller and/or DSP? My background is mechanical engineering (robotics). I apologize in advance.
I have a sensitive torque ripple (Vibrations) application and would like to use sinusoidal drive to run a 3 phase BLDC motor instead of trapezoidal for low speeds. This is for a speed controller (just inferring torque).
12V motors, 8W peak, estimated 2W steady state. Unfortunately, I am forced to use the UC1825 PWM controller chip. Can the UC1825 PWM controller output to the 6 MOSFETs (inverter) portion, whereupon it has 3-phase output and 1 for direction? What are comparable sinusoidal drive chips?
Datasheet, UC1825: http://www.ti.com/general/docs/lit/getliterature.tsp?genericPartNumber=uc1825&fileType=pdf
Also, if I could implement this using DSP inside an FPGA, would I still need a PWM controller (UC1825) or just a 3-phase inverter?
Thanks!
A: A single UC1825 is not capable of driving a 3 phase motor.
You could perhaps build a controller with 3 * UC1825, with one for each phase of the BLDC. However the complexity of using such a solution would seem hardly worth the effort.
I'd suggest you read this great application note from TI using an MSP430 to generate the PWM signals as a starting point.
How you proceed will depend on whether you have Hall Effect sensors on your current motors, or are constrained to build a sensorless driver solution.
You would definitely be better purchasing a BLDC integrated servo to simplify your task.
I'd recommend you look at Teknic's Clearpath-SDSK servo as a potential plug in replacement for your BLDC motors. This gives you a simple Step/Dir interface and would relieve you of much work designing your own solution.
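Not part of the original answer, but to make "sinusoidal drive" concrete: the Python sketch below (with arbitrary placeholder values for the PWM timer period and modulation depth) computes the three phase duty cycles that a microcontroller or FPGA would reload into its PWM compare registers once per carrier period.

    import math

    PWM_PERIOD_COUNTS = 1000   # placeholder timer period, in counts

    def three_phase_duties(electrical_angle_rad, amplitude=0.9):
        """Return (a, b, c) duty cycles in timer counts for one rotor angle.

        Each phase is a sine offset by 120 degrees, centred on 50% duty so
        the inverter output swings symmetrically about the bus midpoint.
        """
        duties = []
        for k in range(3):
            phase = electrical_angle_rad - k * 2.0 * math.pi / 3.0
            d = 0.5 + 0.5 * amplitude * math.sin(phase)   # maps the sine into [0, 1]
            duties.append(int(d * PWM_PERIOD_COUNTS))
        return tuple(duties)

    # Example: sweep one electrical revolution in 24 steps
    for step in range(24):
        print(step, three_phase_duties(2.0 * math.pi * step / 24))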
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,518 |
{"url":"https:\/\/projecteuclid.org\/euclid.bbms\/1378314513","text":"Bulletin of the Belgian Mathematical Society - Simon Stevin\n\nMultiplicity of solutions for a biharmonic equation with subcritical or critical growth\n\nAbstract\n\nWe consider the fourth-order problem $$\\left\\{ \\begin{array}{l} \\epsilon^4\\Delta^2u + V(x)u = f(u) + \\gamma|u|^{2_{**}-2}u \\mbox{in \\mathbb{R}^N}\\\\ u\\in H^2(\\mathbb{R}^N), \\end{array} \\right.$$ where $\\epsilon > 0$, $N\\geq 5$, $V$ is a positive continuous potential, $f$ is a function with subcritical growth and $\\gamma \\in \\{0,1\\}$. We relate the number of solutions with the topology of the set where $V$ attain its minimum values. We consider the subcritical case $\\gamma=0$ and the critical case $\\gamma=1$. In the proofs we apply Ljusternik-Schnirelmann theory.\n\nArticle information\n\nSource\nBull. Belg. Math. Soc. Simon Stevin, Volume 20, Number 3 (2013), 519-534.\n\nDates\nFirst available in Project Euclid: 4 September 2013\n\nhttps:\/\/projecteuclid.org\/euclid.bbms\/1378314513\n\nDigital Object Identifier\ndoi:10.36045\/bbms\/1378314513\n\nMathematical Reviews number (MathSciNet)\nMR3129056\n\nZentralblatt MATH identifier\n1282.35152\n\nCitation\n\nFigueiredo, Giovany M.; Pimenta, Marcos T. O. Multiplicity of solutions for a biharmonic equation with subcritical or critical growth. Bull. Belg. Math. Soc. Simon Stevin 20 (2013), no. 3, 519--534. doi:10.36045\/bbms\/1378314513. https:\/\/projecteuclid.org\/euclid.bbms\/1378314513","date":"2019-09-21 17:40:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7896666526794434, \"perplexity\": 1214.485469729213}, \"config\": {\"markdown_headings\": false, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-39\/segments\/1568514574588.96\/warc\/CC-MAIN-20190921170434-20190921192434-00463.warc.gz\"}"} | null | null |
Q: What does the line on top of variables in differential equations mean? For example,
[equation image]
What does the horizontal line on top of f mean?
A: In that context, taking $g$ to be a set $g=\{g_1,g_2,...,g_n\}$, it means
$$\overline{g}=\sum_{i=1}^n g_i x_i.$$
It is a weighted expected value, where each element $g_i$ has a weight $x_i$.
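(A tiny numerical illustration of this weighted average, added here with made-up numbers, in Python:)

    g = [2.0, 5.0, 10.0]   # the elements g_i
    x = [0.5, 0.3, 0.2]    # the weights x_i (they sum to 1 here)

    g_bar = sum(gi * xi for gi, xi in zip(g, x))
    print(g_bar)   # 0.5*2 + 0.3*5 + 0.2*10 = 4.5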
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,382 |
Command and Conquer to Get Co-Op Missions in 2014
Video 10/24/2013 at 9:00 AM by Nick Puleo
Still free to play
We last looked at the upcoming free to play Command and Conquer at E3 and came away impressed by the quality of the title. The game continues to be in a closed beta test while the rest of the world is on the outside looking in. Recently EA have announced further plans for the game, letting us know a full fledged episodic campaign would arrive in 2014.
Command and Conquer is set in the Generals universe; that means no GDI or NOD, but it also means more freedom in terms of the story. Here's what's planned for Episode 1.
The first Campaign Missions will follow the Asia-Pacific Alliance (APA), as they attempt to stabilize a world that once again teeters on the brink of war. Having sat comfortably as the world's foremost geopolitical power for nearly a decade, the APA finds itself challenged on all fronts. The upstart European Union (EU), a high-tech, single-state entity with an increasingly expansionist agenda continues to flex her muscles, while a series of increasingly bloody uprisings have torn key APA and EU satellite nations asunder. Rumors point to a newly reconstituted Global Liberation Army (GLA) as the instigator, but witnesses have described technology far beyond the reach of any normal terrorist organization. Concerned, the APA dispatches an elite force to infiltrate the latest, most volatile rebellion, one targeting an EU backed dictator. Their mission - learn the truth about the GLA, undermine the EU and restore peace to the globe.
We've been told that these missions will be playable in both single player and co-op with more dynamic content added as time goes on. These missions will of course be free to play in addition to the game's versus modes as well as Onslaught, an objective based co-op mode where players team up to defend their base from enemy hordes.
A new trailer has been unveiled teasing the campaign missions. Command and Conquer will launch before the end of 2013 and will be accessible via EA's Origin service.
2 player online co-op
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,981 |
\section{Introduction}
In this talk, we report on recent work~\cite{Florkowski:2017ruc}, where a novel hydrodynamic
framework for particles with spin ${\nicefrac{1}{2}}$ was introduced. The renewed interest in hydrodynamics
of spinning particles is based on two facts: first, relativistic hydrodynamics forms the basic framework that is used to describe
space-time evolution of matter created in relativistic heavy-ion collisions, studied experimentally at RHIC and
the LHC~\cite{Florkowski:2017olj}, second, recently measurements of particle polarization in heavy-ion collisions have become
available~\cite{STAR:2017ckg}. Thus, it is tempting to combine these two topics to explore polarization effects
in the context of hydrodynamic models (for a recent review of this and other related issues see,
for example, Ref.~\cite{Wang:2017jpl} and references therein).
\section{Local equilibrium distribution functions}
The main physics input for our approach is the definition of local equilibrium distribution functions for particles (plus signs) and antiparticles (minus signs) given in \cite{Becattini:2013fla}
\bel{fplusrsxp}
f^+_{rs}(x,p) = \f{1}{2m} {\bar u}_r(p) X^+ u_s(p), \quad
f^-_{rs}(x,p) = - \f{1}{2m} {\bar v}_s(p) X^- v_r(p).
\end{eqnarray}
Here $r,s = 1,2$ are spin indices, $u_r$ and $v_s$ are bispinors, and $X^{\pm}$ are the four by four matrices
\bel{XpmM}
X^{\pm} = \exp\left[\pm \xi(x) - \beta_\mu(x) p^\mu \right] M^\pm,
\end{eqnarray}
where
\bel{Mpm}
M^\pm = \exp\left[ \pm \f{1}{2} \omega_{\mu\nu}(x) {\hat \Sigma}^{\mu\nu} \right].
\end{eqnarray}
Here we use the notation $\beta^\mu= \umU/T$ and $\xi = \mu/T$, with the temperature $T$, chemical potential $\mu$, and the fluid four-velocity $\umU$ ($u \cdot u~=~1$). The quantity $\omega_{\mu\nu}$ is the polarization tensor, while ${\hat \Sigma}^{\mu\nu}$ is the spin operator expressed by the Dirac gamma matrices, ${\hat \Sigma}^{\mu\nu} = (i/4) [\gamma^\mu,\gamma^\nu]$.
It is convenient to express the polarization tensor $\omega_{\mu\nu}$ in terms of the four-vectors $k^\mu$ and $\omega^\mu$,
%
\bel{omunuL}
\omega_{\mu\nu} \equiv k_\mu u_\nu - k_\nu u_\mu + \epsilon_{\mu\nu\beta\gamma} u^\beta \omega^\gamma .
\end{eqnarray}
We can assume that both $k_\mu$ and $\omega_\mu$ are orthogonal to $\umU$ ($k \cdot u = \omega \cdot u = 0$), hence
\bel{kmuomu}
k_\mu = \omega_{\mu\nu} u^\nu, \quad \omega_\mu = \f{1}{2} \epsilon_{\mu\nu\alpha\beta} \, \omega^{\nu\alpha} u^\beta.
\end{eqnarray}
We also define the dual polarization tensor
\bel{omunuLD}
{\tilde \omega}_{\mu\nu} \equiv \f{1}{2} \epsilon_{\mu\nu\alpha\beta} \omega^{\alpha\beta} = \omega_\mu u_\nu - \omega_\nu u_\mu + \epsilon^{\mu\nu\alpha\beta} k_\alpha u_\beta.
\end{eqnarray}
It follows that $\f{1}{2} \omega_{\mu\nu} \omega^{\mu\nu} = k \cdot k - \omega \cdot \omega$ and $\f{1}{2} {\tilde \omega}_{\mu\nu} \omega^{\mu\nu} = 2 k \cdot \omega$. Using the constraint
\bel{conONE}
k \cdot \omega = 0
\end{eqnarray}
we find the compact form
\bel{Mpmexp}
M^\pm = \cosh(\zeta) \pm \f{\sinh(\zeta)}{2\zeta} \, \omega_{\mu\nu} {\hat \Sigma}^{\mu\nu} ,
\end{eqnarray}
where
\bel{zeta}
\zeta \equiv \f{1}{2} \sqrt{ k \cdot k - \omega \cdot \omega }.
\end{eqnarray}
\section{Basic physical observables}
The knowledge of the equilibrium distribution functions \rfn{fplusrsxp} allows us to compute the basic
physical observables such as the charge and energy density, pressure, and entropy density. For the charge
current we use the definition of Refs.~\cite{Becattini:2013fla, deGroot:1980}
\bel{jmu}
N^\mu = \int \f{d^3p}{2 (2\pi)^3 E_p} p^\mu \left[ \rm tr( X^+ ) - \rm tr ( X^- ) \right] = n \umU,
\end{eqnarray}
where ``$\rm tr$'' denotes the trace over spinor indices and $n$ is the charge density
\bel{nden}
n = 4 \, \cosh(\zeta) \sinh(\xi)\, n_{(0)}(T) = 2 \, \cosh(\zeta) \left(e^\xi - e^{-\xi} \right)\, n_{(0)}(T).
\end{eqnarray}
Here $n_{(0)}(T) = \langle(u\cdot p)\rangle_0$ is
the number density of spinless, neutral Boltzmann particles, obtained using the thermal average
\bel{avdef}
\langle \cdots \rangle_0 \equiv \int \f{d^3p}{(2\pi)^3 E_p} (\cdots) \, e^{- \beta \cdot p} ,
\end{eqnarray}
where $p^0 = E_p = \sqrt{m^2 + {\bf p}^2}$ is the particle energy.
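For later reference we note (this closed form is an addition for the reader's convenience; it is the standard Boltzmann-gas result obtained by evaluating the average \rfn{avdef} in the local rest frame) that
\bel{n0closed}
n_{(0)}(T) = \langle u\cdot p \rangle_0 = \f{T\, m^2}{2\pi^2}\, K_2\!\left(\f{m}{T}\right),
\end{eqnarray}
where $K_2$ denotes the modified Bessel function of the second kind.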
In the next step we calculate the energy-momentum tensor, again following Refs.~\cite{Becattini:2013fla,deGroot:1980}
\bel{Tmn}
T^{\mu \nu} = \int \f{d^3p}{2 (2\pi)^3 E_p} p^\mu p^\nu \left[ \rm tr( X^+ ) + \rm tr ( X^- ) \right] = (\varepsilon + P ) \umU u^\nu - P g^{\mu\nu}.
\end{eqnarray}
The energy density and pressure in \rfn{Tmn} are given by the formulas
\bel{enden}
\varepsilon = 4 \, \cosh(\zeta) \cosh(\xi) \, \varepsilon_{(0)}(T)
\end{eqnarray}
and
\bel{prs}
P = 4 \, \cosh(\zeta) \cosh(\xi) \, P_{(0)}(T),
\end{eqnarray}
respectively. In analogy with the particle density $n_{(0)}(T)$, we define the auxiliary quantities
$\varepsilon_{(0)}(T) = \langle(u\cdot p)^2\rangle_0$ and $P_{(0)}(T) = -(1/3) \langle \left[ p\cdot p -
(u\cdot p)^2 \right] \rangle_0$. We note that the energy-momentum tensor \rfn{Tmn} is symmetric and has the structure
characterizing perfect fluids.
For the entropy current we use a straightforward generalization of the Boltzmann expression:
\bel{s2}
S^\mu = -\int \f{d^3p}{2 (2\pi)^3 E_p} \, p^\mu \, \Big( \rm tr\left[ X^+ (\ln X^+ -1)\right] + \, \rm tr \left[ X^- (\ln X^- -1) \right] \Big) .
\end{eqnarray}
This leads to the entropy density which satisfies the equation
\bel{s}
s = u_\mu S^\mu = \f{\ed+P - \mu \, n - \Omega w}{T} ,
\end{eqnarray}
where $\Omega = \zeta \, T$ and
\bel{w}
w = 4 \, \sinh(\zeta) \cosh(\xi) \, n_{(0)}.
\end{eqnarray}
The last equation suggests that $\Omega$ can be used as a thermodynamic variable of the grand canonical potential, in addition to $T$ and $\mu$. Taking the pressure $P$ to be a function of $T, \mu$ and $\Omega$, $P=P(T,\mu,\Omega)$, one finds
\bel{dP}
s = \left.{\f{\partial P}{\partial T}}\right\vert_{\mu,\Omega}, \quad
n = \left.{\f{\partial P}{\partial \mu}}\right\vert_{T,\Omega}, \quad
w = \left.{\f{\partial P}{\partial \Omega}}\right\vert_{T,\mu}.
\end{eqnarray}
\section{Hydrodynamic equations}
Hydrodynamic equations are first-order differential equations for the Lagrange multipliers
appearing in the local equilibrium distribution functions. Since we use the constraint \rfn{conONE}
and introduce $\Omega$ to parametrize the contraction $\omega_{\mu\nu} \omega^{\mu\nu}$, ten independent
functions of space and time are needed for a complete description. These are chosen as:
$T(x)$, $\mu(x)$, $\Omega(x)$, three independent components
of $u^\mu(x)$, and the four remaining independent components of $\omega^{\mu\nu}(x)$.
The conservation of energy and momentum implies that
\bel{Tmncon1}
\partial_\mu T^{\mu \nu} = 0.
\end{eqnarray}
This equation can be split into two parts, one longitudinal and the other transverse with
respect to $u^\mu$:
\bel{Tmncon2}
\partial_\mu [(\ed + P) \umU ] &=& \umU \partial_\mu P \equiv \f{dP}{d\tau}, \\
(\ed + P ) \f{d \umU}{d\tau} &=& (g^{\mu \alpha} - u^\mu u^\alpha ) \partial_\alpha P.
\end{eqnarray}
Evaluating the derivative on the left-hand side of the first equation one finds
\bel{snwcon}
T \,\partial_\mu (s \umU) + \mu \,\partial_\mu (n \umU) + \Omega \,\partial_\mu (w \umU) = 0.
\end{eqnarray}
The term in the middle of the left-hand side vanishes due to charge conservation,
\bel{ncon}
\partial_\mu (n \umU)=0.
\end{eqnarray}
Thus, in order to have conservation of entropy in our system, $\partial_\mu (s \umU)~=~0$ (for the
perfect-fluid description we are aiming at), we demand that
\bel{wcon}
\partial_\mu (w \umU) = 0.
\end{eqnarray}
Equations \rfn{Tmncon1}, \rfn{ncon} and \rfn{wcon} form a closed system of six equations for six unknowns: $T(x)$, $\mu(x)$, $\Omega(x)$ and three components of $u^\mu(x)$. Since they do not determine the time evolution of the individual components
of the polarization tensor, we dub them the equations for the hydrodynamic background.
\section{Spin dynamics}
Our approach is based on the conservation of the angular momentum
in the form $\partial_\lambda J^{\lambda, \mu\nu}=0$, where $J^{\lambda, \mu\nu} = L^{\lambda, \mu\nu}
+ S^{\lambda, \mu\nu}$ with $L^{\lambda, \mu\nu}=x^\mu T^{\nu\lambda}- x^\nu T^{\mu\lambda}$
being the orbital angular momentum and $S^{\lambda, \mu\nu}$ the spin tensor.
Since the energy-momentum tensor \rfn{Tmn} is symmetric, the conservation law $\partial_\lambda J^{\lambda, \mu\nu}=0$
implies conservation of the spin tensor $S^{\lambda, \mu \nu}$~\cite{Hehl:1976vr},
\bel{spincon1}
\partial_\lambda S^{\lambda, \mu \nu} = 0.
\end{eqnarray}
For $S^{\lambda, \mu \nu}$ we use the form discussed in \cite{Becattini:2009wh}
\bel{st11}
S^{\lambda, \mu \nu} = \!\!\int\!\!\f{d^3p}{2 (2\pi)^3 E_p} \, p^\lambda \, {\rm tr} \left[(X^+\!-\!X^-) {\hat \Sigma}^{\mu\nu} \right] = \frac{w u^\lambda}{4 \zeta} \omega^{\mu \nu} .
\end{eqnarray}
Using the conservation law for the spin density and introducing the rescaled
polarization tensor ${\bar \omega}^{\mu\nu} = \omega^{\mu\nu}/(2\zeta)$, we find
\bel{st12}
u^\lambda \partial_\lambda \, {\bar \omega}^{\mu\nu} = \f{d{\bar \omega}^{\mu\nu} }{d\tau} = 0.
\end{eqnarray}
Since, ${\bar \omega}^{\mu\nu}$ is antisymmetric, \rf{st12} with the normalization condition
\bel{norm}
{\bar \omega}_{\mu\nu} \, {\bar \omega}^{\mu\nu} = 2
\end{eqnarray}
yields five independent equations. If the condition \rfn{conONE} is fulfilled on the initial
hypersurface, it remains fulfilled at later times, provided \rf{st12} holds. Hence, \rf{st12} used with \rfn{conONE} and \rfn{norm} yields four additional equations that are needed to determine the space-time evolution of a spinning fluid.
In Ref.~\cite{Florkowski:2017ruc} we have shown that this framework has a vortex-like solution that corresponds
to global equilibrium studied in Refs.~\cite{Becattini:2013fla,Becattini:2009wh}.
\section{Closing remarks}
In this work we have described a new hydrodynamic approach to relativistic perfect-fluid hydrodynamics of
particles with spin ${\nicefrac{1}{2}}$. The system of hydrodynamic equations follows directly from the conservation
laws for charge, energy, momentum and angular momentum. An important ingredient of our approach
is the form of the spin tensor defined by \rf{st11} that allows for the construction of a consistent system
of equations. We note that the form \rfn{st11} differs from those
used in \cite{Becattini:2013fla} and \cite{deGroot:1980}, respectively.
\bigskip
{\bf Acknowledgments:} This research was supported in part by the ExtreMe Matter Institute
EMMI at the GSI Helmholtzzentrum f\"ur Schwerionenforschung,
Darmstadt, Germany.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,983 |
Introducing Team
Our team is made up of not only passionate students but also experts and content maniacs. We breathe life into content created by the circusAR crew and send it out to the world, while researching new technologies that solidify our foundation.
- Interest in development of New Media Contents including AR, VR, MR, Kinect, etc.
- Experiences in VR projects such as Vive, Google Cardboard, etc.
Don't you want to develop something with us that will turn the world upside down!?
We are pirates of circusAR! You can be eccentric. Let's let our passion burn together!
We do not force overtime work. There are many opportunities for self-development. We welcome those who love sports! | {
"redpajama_set_name": "RedPajamaC4"
} | 9,973 |
Richard Snow Writer
Did people know this type of book exists?
April 14, 2011 Richard A Snow
Last week I went to "Supernova" a "pop culture expo" in Melbourne. It's basically an excuse for a lot of vendors of sci-fi, fantasy, vampire and zombie – related apparel, games, outfits, and comics to sell stuff to people who turn up in strange costumes. There was even one guy dressed up as Alex from A Clockwork Orange.
But I found a literary genre I didn't know existed.
Some people have taken the classics of literature and expanded them to include zombies, werewolves and other horror elements.
For example "Sense and Sensibility and Zombies," by Jane Austin and Seth Grahame Smith, which starts off, "It is a truth universally acknowledged that a Zombie in possession of brains must be in want of more brains." Apparently it sold so well he's written two sequels.
I also saw "Sense and Sensibility and Sea Monsters."
Did other people know this stuff exists, or have I been living under a rock the last ten years?
The Japanese Tsunami – Why are the Economic and Political Effects not Bigger?
A Note for non-Economists.
By Richard Snow, Melbourne, Australia, 17 March 2011
This note attempts to explain the possible economic and political effects of the tsunami and nuclear disaster of 11 March 2011 in Japan. It is aimed at non-economists, those not used to reading economic newsletters, or those interested in the political fallout (if any) the disaster could produce. Part of this note is to explain why the economic effects of the disaster are not as great as one might expect from watching media coverage. This note is NOT intended to downplay the human aspect of the tragedy in which 10,000 people may have died. The tragedy is real and horrific, but economic forecasts of its effect may seem very small to non-economists.
(i) The immediate economic effect of the tsunami and the nuclear meltdown is that a large amount of productive capacity (factories, offices, shops and electricity production) has been destroyed. GDP is the market value of output in a given year, which must equal the income generated by the sale of that output. It should not be confused with the value of the asset that produced the goods. The asset and the income enter the picture in different ways.
To take a totally hypothetical figure, if a factory worth $3 million generated output (say canned fish) of $1 million in a year, there must be $1 million of income going to some groups of people in the form of wages to fishermen and factory workers, rents to the factory owner, and dividends or retained company profits. The effect on GDP is a loss of $1 million this year, and again next year, and again each year for as long as the factory is not rebuilt.
To repeat, the effect on GDP (national income) in a particular year is not equal to the value of the asset destroyed ($3million), but rather the value of the output (which equals the incomes) it would have generated in the relevant year: ($1 million).
However, the area hit by the tsunami, the "Tohoku" (which means North-East) region of Honshu, accounts for 6-7 per cent of Japan's GDP and about the same proportion of the population: only half that of the area hit by the Kobe earthquake of 1995. The loss of GDP also includes the incomes not produced by those who have been killed in the disaster: estimated at about 10,000 people. The total population of Japan is about 127 million.
As of 17 March, most economists are predicting a loss of GDP of less than half a per cent in the first half of the year.
(ii) There will be a post-disaster reconstruction period. The reconstruction adds to GDP, since it is building activity which produces an output (e.g. a new factory, rebuilt roads and new power lines) and creates incomes for the construction workers, profits for the construction companies, and dividends and rents to anyone who is entitled to them from the construction companies. This addition to GDP may be stretched out over many years, depending on how long the re-building process takes. Most economists are predicting an increase to GDP of less than half a per cent in the second half of the year as reconstruction begins. Thus the effect on the Japanese economy is not as great as one might imagine from looking at the scenes on TV.
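(The short Python sketch below is not part of the original note; it simply re-runs the hypothetical arithmetic from points (i) and (ii), assuming for illustration that the factory takes two years to rebuild.)

    # Hypothetical numbers from the factory example above
    asset_value = 3.0      # $ million: value of the destroyed factory
    annual_output = 1.0    # $ million: yearly output (= income) it generated

    # Assumption for illustration only: rebuilding takes 2 years, with the
    # $3m of construction work spread evenly over that time.
    years_to_rebuild = 2
    rebuild_per_year = asset_value / years_to_rebuild

    for year in range(1, years_to_rebuild + 1):
        net = rebuild_per_year - annual_output
        print("Year %d: lost factory output %.1fm, construction output %.1fm, net %+.1fm"
              % (year, annual_output, rebuild_per_year, net))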
(iii) We might logically expect that there should be an effect on inflation. Electricity prices could rise; on the other side of the ledger, electricity prices are only a small part of the CPI, and, as stated above, the area affected is only a small part of Japan. Finally, instead of allowing power prices to rise, the government or the electricity companies could engage in quantity rationing: distributing the losses around by means of rolling blackouts. This is in effect rather like the quantity rations used in many countries in war-time: hold prices constant but limit how much people can have, so as to hold demand down in line with supply.
(iv) As reconstruction begins, the prices of building materials might rise. This would be concentrated on construction materials that are expensive to transport (cement, stone) since demand will rise strongly but supply will have trouble catching up. Ironically the faster the rebuilding effort is undertaken after a natural disaster, the greater the rise in inflation since the greater the extent to which demand will outstrip supply of difficult-to-transport materials. However the extent of the affected is small in terms of the world economy and the world price of raw materials is what matters here. Materials can be diverted from the 93 per cent of Japan not hit by destruction of buildings. It would also be very "Japanese" in the terms of social harmony for supply companies not to take advantage of the situation by raising price.
(v) We might anticipate that food prices may rise if farming areas have been subject to radiation or were rendered uninhabitable. However the extent of food price rises would be mitigated (or not) by the willingness (or not) to buy imported substitutes. Japan already imports 60 per cent of its food, and the Tohoku region is only 6-7 per cent of Japan's economy. Agriculture, forestry, fishing and mining (jointly referred to as "primary industry") account for only 1.3% of gross national product. So any inflationary effect on food prices would be small. I have been unable to obtain statistics on the percentage of Japanese agricultural production that takes place in the Tohoku region, but I find no references to it being a major producer or "bread basket" of the country.
(vi) Trust in the Japanese government may deteriorate. In any catastrophe, information comes out in bits and pieces, and the full picture may not be known for some time. (Example: the Deepwater Horizon oil spill of 2010). Governments also tend to want to reassure their populations in times of crisis. This may produce accusations in one or two years of "why didn't you tell us about X". The incumbent party may face an electoral backlash if it turns out that information was withheld or was incomplete (even if this was not intentional.) At present the ruling centre-left "Democratic Party of Japan" (Minshu-to) has 308 of the 480 seats in the Japanese Parliament. The conservative Liberal Democratic Party (Jiyu-Minshu-to) holds 119 seats. (The word "liberal" should not be confused with its American usage.) Minor parties or independents have 53 seats. For the conservatives to form a government in their own right with a bare majority would require them to gain an extra 123 seats, or to more than double their current representation. This is a "big ask" of any political party.
(vii) Some people will remain unemployed for a considerable time. Small business owners will face effective (if not legal) bankruptcy. But these people may blame nature, or accept the fact that Japan is prone to natural disasters. The government relief and reconstruction effort would have to be very seriously messed up before people blamed the government.
(viii) The disaster will likely produce a backlash against nuclear power in some areas. Transnational lobby groups will likely intensify efforts to suggest that alternative power sources (solar, wind, tidal) be used rather than a rebuilding of nuclear power plants. Thus there may be opportunities for companies producing those technologies to pitch for contracts. However, the effect of anti-nuclear lobby groups may be short term. Not many today can remember the Three Mile Island accident (1979) or the Chernobyl accident (1986). The damage to the nuclear reactor may be forgotten in less time than one might expect. On the other hand, the BBC reports (17 March) that China has suspended approval of any new nuclear power plants.
(ix) Expenditure by the Japanese government on rebuilding and relief will be accompanied by reduced tax revenues from affected areas. In most countries this would mean a rise in the budget deficit and an increase in government debt. This would normally lead to a downgrading of the government's credit rating and therefore higher interest rates for the Government borrowings. However, the Japanese government sets aside a contingency fund for natural disasters in its budget each year, and there is currently no talk of a budget blowout.
Conclusion: although the disaster is a human tragedy, the economic consequences are not as great as many non-economists might suspect. [This has been written 17 March, 6 days after the tsunami, on the assumption that there is no nuclear meltdown affecting a greater area of Japan.]
Additional Note: since the above was written on 17 March, there has been much discussion of continuing radiation leaks. The effect of these is difficult to predict. If people in the vicinity of the plant suffer radiation induced illness, GDP is reduced by the loss of their contribution to the economy. However the medical expenses involved in treating them are an addition to GDP. This is not meant to sound ghoulish – it's just how GDP works. The net effect of the radiation leaks may take years to play itself out.
2nd additional note: since the above was written the Japanese government and TEPCO (the electricity company that owned the plants) have admitted that at least three of the reactors did suffer meltdowns. What the final damage bill will be, or the cost in human life is anybody's guess. I've never seen such a situation before.
Author's contact: Richard Snow, Melbourne Australia, snowinmelbourne (at) hotmail (dot) com. The writer has a BEc(Hons) and MEc degree, worked as an economist in the Victorian State Government Dept of Treasury and Finance for 16 years and taught Economics at University level for 8 years.
A Few Photos of Arizona
April 3, 2011 Richard A Snow
After the writers' conference at San Diego, I went to Arizona. (Most of the east coast of the USA was having really bad weather.) In Phoenix, the capital, I went to the few museums they have in the center of town – there aren't that many. One museum, the Arizona Science Center, had the display "Body Worlds", by Gunther von Hagens, which consists of preserved plasticized human corpses which are opened up, so you can see all the internal organs. It's very strange to look at what once were real people. Is it morbid? Yes, sort of, but is it any different to looking at an ancient Egyptian mummy? Maybe not. The Heard Museum has displays of Native American crafts and artifacts.
Phoenix is full of cactus plants and red scoria gravel where other cities have grass. I guess they just have to fit in with their climate.
A friend of a friend was good enough to drive me to Sedona, a town built in the middle of canyons that show the layers of rocks very clearly. We also saw "Rawhide," a reconstructed Wild West town, complete with acted shootouts etc. I've included some photos below. I imagine the first settlers found this very inhospitable countryside to visit. It's a wonder they didn't pack up and go home. Still, America was founded with that spirit of exploration and setting off into possibly adverse conditions – kind of like the first explorers in Australia who crossed the Great Dividing Range that's a couple of hundred miles in from the east coast of Australia. When they got to the other side and saw the harsh conditions, it's surprising they didn't go, "Hey, I'm out of here." Burke and Wills, two of our most famous explorers, died because they miscalculated how far you had to travel before you'd find water. There are parallels. Anyway, some pictures are below.
Near Sedona (1)
Guess why not many people live out here?
Yours truly with some rocks in the background.
The rock formations are just beautiful.
Abandoned. It's hard to imagine someone lived here once.
Side view of Scorpion Gulch
Quite spectacular rock formations.
also near Sedona
Playing Sherrif at Rawhide
Entrance to rawhide
Didn't know there were puple cacti.
Horse riding competition at rawhide
On the road out of Sedona
Elizabeth who drove me around - thanks!
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,512 |
Gosh sometimes I really just need to hear some old school hymns, know what I mean? Not that there is anything wrong with contemporary worship music (I love me some All Sons and Daughters and Hillsong!) but there have been times lately where my soul just needs to go back to the basics and worship with some of the more beautiful words I've personally ever sang along to.
There's a band you've probably already heard me mention, Page CXVI. Sheesh friends...these guys rock. They've committed themselves to making hymns more accessible and known again to our younger generations. Amazing right? They've got albums upon albums of all the older hymns you may totally remember, but sung in a more modern, almost folk-like way. I just love it. Like, can't get enough of it.
Best part? These hymns that I used to sing in grade school and had forgotten about, are now what I wake up singing most mornings! Pretty much the best way to wake up. Ever.
They speak against you with malicious intent; your enemies take your name in vain. Do I not hate those who hate you, O Lord? And do I not loathe those who rise up against you?
Praying that this hymn and this scripture are a total breath of fresh air for you today. If you'd like it as a computer, iPad, or cell phone background, grab one of those sizes below.
Isn't it a blessing to know that He "knows" us? He knows exactly how He made us. Our temperament, our personality and He uses our unique gifts for His purpose. I love you. Blessed to know you. So thankful for the encouragement you are in my life.
The feeling is extremely mutual.
I love that hymn! It's so beautiful.
Come Thou Fount is my absolute favorite hymn! I've got to check out Page CXVI -never heard of them!
Come Thou Fount is my favorite hymn. I had it played at our wedding when we took Communion together.
I so very much agree Meg!
This is one of my favorite hymns too! I remember hearing it a lot in college and I loved singing the chorus over and over again.
"redpajama_set_name": "RedPajamaC4"
} | 2,515 |
{"url":"https:\/\/math.stackexchange.com\/questions\/3122242\/u-sub-vs-trig-sub-are-giving-different-answers-for-int-fracx4x22x5dx\/3122252#3122252","text":"# u-sub vs trig-sub are giving different answers for $\\int\\frac{x+4}{x^2+2x+5}dx$\n\nWhen I complete the square in the denominator and solve using u-sub, I can get the right answer:\n\n$$\\int\\frac{x+4}{x^2+2x+5}dx$$ $$\\int\\frac{x+4}{x^2+2x+1+4}dx$$ $$\\int\\frac{(x+1)+4}{(x+1)^2+4}dx$$ $$u=x+1$$ $$\\int\\frac{u+3}{u^2+4}du$$ $$I_1=\\int\\frac{u}{u^2+4}du+I_2=\\int\\frac{3}{u^2+4}du$$ $$I_1=(w=u^2+4, dw=2udu)$$ $$I_1=\\frac{1}{2}\\int\\frac{dw}{w}$$ $$w = u^2+4,$$\n\n$$w = (x+1)^2+4,$$ so: $$I_1=\\frac{1}{2}\\ln|x^2+2x+5|$$ $$I_2=3\\int\\frac{1}{u^2+4}du$$ $$u = 2\\tan\\theta$$\n\n$$du = 2\\sec^2\\theta d\\theta$$ $$I_2=3\\int\\frac{2\\sec^2\\theta}{(2\\tan\\theta)^2+4}d\\theta$$ $$I_2=3\\int\\frac{2\\sec^2\\theta}{4\\tan^2\\theta+4}d\\theta$$ $$I_2=3\\int\\frac{2\\sec^2\\theta}{4\\sec^2\\theta}d\\theta$$ $$I_2=\\frac{3}{2}\\int d\\theta$$ $$I_2=\\frac{3\\theta}{2}$$\n\nAt this point, I have to turn the integral back into terms of x, so I made the right triangle like normal:\n\nNow solving the integrals:\n\n$$\\frac{1}{2}\\ln|x^2+2x+5|+\\frac{3\\theta}{2}$$ $$\\frac{1}{2}\\ln|x^2+2x+5|+\\frac{3\\arctan(\\frac{x+1}{2})}{2}+C$$\n\nThis obviously is the correct answer, however, when I try to solve this with trig-sub (which is the first thing that came to my mind when I looked at the problem, hence my frustration) I am getting a similar, albeit incorrect answer:\n\n$$I_1=\\int\\frac{u}{u^2+4}du+I_2=\\int\\frac{3}{u^2+4}du$$ $$u = 2\\tan\\theta$$\n\n$$du = 2\\sec^2\\theta d\\theta$$\n\n$$I_1=\\int\\frac{2\\tan\\theta}{4\\tan^2\\theta+4}\\cdot\\frac{2\\sec^2\\theta}{1}d\\theta$$ $$I_1=\\int\\frac{2\\tan\\theta}{4\\sec^2\\theta}\\cdot\\frac{2\\sec^2\\theta}{1}d\\theta$$ $$I_1=\\int\\frac{2\\tan\\theta}{2}d\\theta$$ $$I_1=\\int \\tan\\theta$$ $$I_2=3\\int\\frac{2\\sec^2\\theta}{4\\sec^2\\theta}d\\theta$$ $$I_2=3\\int\\frac{1}{2}d\\theta$$ $$I_2=\\frac{3\\theta}{2}$$ $$\\int \\tan\\theta d\\theta+\\frac{3\\theta}{2}$$ $$-\\ln|\\cos\\theta|+\\frac{3\\theta}{2}$$ Now I put it back into terms of x like I did when solving it using u-sub: $$-\\ln\\left|\\frac{2}{u^2+4}\\right|+\\frac{3}{2}\\arctan\\frac{x+1}{2}$$ Since $$u=x+1$$: $$-\\ln\\left|\\frac{2}{(x+1)^2+4}\\right|+\\frac{3}{2}\\arctan(\\frac{x+1}{2})+C$$\n\nBut this is obviously wrong, since it looks like the $$ln$$ should have a $$\\frac{1}{2}$$ in front of it, so something must be wrong with my trig-sub on $$I_1$$? I know it's a lot to read but I just wanted to put it step by step to see if there's some dumb algebraic mistake I made. If anyone can help, thanks a ton in advance.\n\n\u2022 Still reading. First solution seems good. Feb 22 '19 at 6:20\n\u2022 Note the final integral of $\\tan(x)$ is $\\ln(\\sec(x))$ and $\\sec(x)=(\\tan^2(x)+1)^{1\/2}$...that square root is what is missing Feb 22 '19 at 6:27\n\u2022 It's all good until you convert back to x on the last couple lines. You should indeed get that $-\\log(\\cos(\\arctan(\\tfrac{x+1}{2}))) = \\tfrac{1}{2} \\log(x^2+2x+5)$ Feb 22 '19 at 6:27\n\u2022 $\\cos \\theta = \\frac {2} {\\sqrt {u^2+4}}.$ Right? Feb 22 '19 at 6:28\n\nYou plugged in the wrong expression for $$\\cos \\theta$$, it should be $$2\/\\sqrt{u^2+4}$$. Notice that the Pythagorean theorem tells you that the length of the hypotenuse should be $$\\sqrt{x^2+2x+5}$$.\nNote the minus sign outside of the logarithm. 
$$-\\ln\\left|\\dfrac{2}{\\sqrt{u^2+4}} \\right|=\\ln\\left|\\dfrac{\\sqrt{u^2+4}}{2}\\right|$$\nSince $$\\sqrt{f(x)}$$ can be written as $$(f(x))^{1\/2}$$. Therefore by the logarithm properties.\n1. $$\\ln \\left|\\sqrt{u^2+4}\\right|=1\/2\\cdot\\ln\\left|u^2+4\\right|$$\n2. $$1\/2\\cdot\\ln\\left|(u^2+4)\/\\sqrt{2}\\right|=1\/2\\cdot\\ln\\left|u^2+4\\right|-1\/2\\cdot\\ln\\sqrt{2}$$ which gets absorbed in the constant $$C$$.","date":"2021-12-03 22:49:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 48, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9153443574905396, \"perplexity\": 111.069682210686}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964362919.65\/warc\/CC-MAIN-20211203212721-20211204002721-00532.warc.gz\"}"} | null | null |
enum Baz { Qux = 0 };
int x;
void foo();
static int w;
void bar(int y) {
static int z;
int k;
}
extern int n;
static int wibble(int);
void ena(int (*dio)(int tria));
// CHECK: EnumDecl=Baz:3:6 (Definition)linkage=External
// CHECK: EnumConstantDecl=Qux:3:12 (Definition)linkage=External
// CHECK: VarDecl=x:4:5linkage=External
// CHECK: FunctionDecl=foo:5:6linkage=External
// CHECK: VarDecl=w:6:12linkage=Internal
// CHECK: FunctionDecl=bar:7:6 (Definition)linkage=External
// CHECK: ParmDecl=y:7:14 (Definition)linkage=NoLinkage
// CHECK: VarDecl=z:8:14 (Definition)linkage=NoLinkage
// CHECK: VarDecl=k:9:7 (Definition)linkage=NoLinkage
// CHECK: VarDecl=n:11:12linkage=External
// CHECK: FunctionDecl=wibble:12:12linkage=Internal
// CHECK: ParmDecl=:12:22 (Definition)linkage=NoLinkage
// CHECK: FunctionDecl=ena:14:6linkage=External
// CHECK: ParmDecl=dio:14:16 (Definition)linkage=NoLinkage
// CHECK: ParmDecl=tria:14:25 (Definition)linkage=NoLinkage
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,013 |
Salvation is completely free, but discipleship is costly. Our allegiance to Jesus must be far greater than our allegiance to our family or to our own self-interests. We have to love God more! Therefore, each of us must count the cost carefully before choosing to follow Him. Just as unsalty salt is useless, so a disciple who isn't all in with Jesus is useless to the kingdom of God.
Look who's coming to dinner…not those who have it all together, not the popular, not the connected, not the powerful, not the elite, but the poor, the crippled, the lame and the blind…the powerless, the forgotten, the overlooked, the left out and left behind…all those who recognize their need to be rescued.
Are just a few being saved? That's really the wrong question. The real question is, Are you going to be among those who are saved? Jesus says the door to the kingdom is a narrow door…there's only one way in. It's the way of Jesus. It's narrow door theology. Narrow because Jesus is the way, the truth and the life…the kingdom is only available through Him. But also very broad in that it's open to anyone…anyone who recognizes their need for repentance…their need to be rescued from their sin…and who believes that Jesus can rescue them and trusts in Him to do so. Jesus says that folks from all over the place will be there. John in the book of Revelation writes that there will be folks there from every tribe and tongue and people and nation gathered around the throne worshiping God in the kingdom. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,118 |
{"url":"https:\/\/www.physicsforums.com\/threads\/eletric-potential-inside-charged-sphere-with-hole-inside.716717\/","text":"# Eletric potential inside charged sphere with hole inside\n\n## Homework Statement\n\nConsider a charge density of \u03c1=k\/r , k>0 , located between a sphere surface of r=a and another sphere surface of r=b, b>a.\nI'm supposed to find the electric field on all space, which I did. Now I have to find the electric potential in all space, which I also did for r>b, but I'm having problems finding it for a<r<b and for r<a.\n\n## Homework Equations\n\nThese are the electric field equations I came up with:\nr<a : E(r)=0\nr>b: E(r)=(k*(b2-a2))\/(\u03b50*2*r2)\na<r<b: E(r)=(k*(1-a2\/r2))\/(2*\u03b50)\n\nElectric potential for r>b: V(r)=(b2-a2)\/(2\u03b50*r)\n\n## The Attempt at a Solution\n\nFor finding the EP at r>b I just had to integrate E(r) for r>b with limits between r and \u221e which is equal to V(r)-V(\u221e) with V(\u221e)=0, but I cant come up with any solution for the other Epotentials, if someone could give me a hint I would appreciate.\n\nThanks!\n\n#### Attachments\n\n\u2022 sphere.jpg\n15.4 KB \u00b7 Views: 403\n\nmfb\nMentor\nBased on your solution for r>b, what is V(b)?\nCan you calculate the potential between a and b, if you know V(b)? The method is similar to the region r>b.\nr<a is easy once you have the region a<r<b, as there is no field inside.\n\nto find V(b) I can use the equation of the potential for r>b right?\n\nThen to find V(r) for a<r<b:\n\nV(r)=-\u222brbE(r).dr + V(b)\n\nand then repeat the process to find V(r) for r<a, where there is no field, which means V(r) for r<a = V(r) for a<r<b.\n\nCorrect?\n\nLast edited:\nmfb\nMentor\nto find V(b) I can use the equation of the potential for r>b right?\n\nThen to find V(r) for a<r<b:\n\nV(r)=-\u222brbE(r).dr + V(b)\n\nand then repeat the process to find V(r) for r<a, where there is no field\nSure.\n\n, which means V(r) for r<a = V(r) for a<r<b.\n\nCorrect?\nI guess that is a typo here.\n\n1 person\nI guess that is a typo here.\n\nAh yes! It should be V(r) = V(a) for r<a.\n\nThat was the hint I needed, thanks!","date":"2021-06-22 21:25:55","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8437625765800476, \"perplexity\": 2491.6074399767413}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623488519735.70\/warc\/CC-MAIN-20210622190124-20210622220124-00554.warc.gz\"}"} | null | null |
\section{Introduction and Related Work}
Consider a strongly-connected network of $N$ agents, where information can flow in either direction between any two connected agents and, moreover, there is at least one self-loop in the topology \cite[p. 436]{ASayed2014}. We associate a strongly convex differentiable risk, $J_k(w)$, with each agent $k$ and assume in this work that all these costs share a common minimizer, $w^o\in\mathbb{R}^{M}$, where $\mathbb{R}$ denotes field of real numbers. This case models important situations where agents work cooperatively towards the same goal. The objective of the network is to determine the unique minimizer $w^o$ of the following aggregate cost, assumed to be strongly-convex:
\begin{equation}\label{equ1}
J^{\rm glob}(w)\triangleq\sum\limits_{k=1}^{N}J_k(w)
\end{equation}
It is also assumed that the individual cost functions, $J_k(w)$, are each twice-differentiable and satisfy
\begin{equation}\label{equ6}
0<{\nu_d}I_{M}\leq\nabla_w^2J_k(w)\leq{\delta_d}I_{M}
\end{equation}
where $\nabla^2_wJ_k(w)$ denotes the $M\times M$ Hessian matrix of $J_k(w)$ with respect to $w$, $\nu_d\leq\delta_d$ are positive parameters, and $I_{M}$ is the $M\times M$ identity matrix. In addition, for matrices $A$ and $B$, the notation $ A\leq B$ denotes that $B-A$ is positive
semi-definite. The condition in (\ref{equ6}) is automatically satisfied by important cases of interest, such as logistic regression or mean-square-error designs \cite{ASayed2014,Sayed2014}.
Starting from some initial conditions $\{\bm w_{k,-1}\}$, the agents work cooperatively in an adaptive manner to seek the minimizer $w^o$ of problem (\ref{equ1}) by applying the following diffusion strategy \cite{ASayed2014,Sayed2014}:
\begin{subequations}\label{equ2}
\begin{empheq}[left=\empheqlbrace]{align}
\bm \phi_{k,i-1} &= \displaystyle{\sum_{\ell\in\mathcal{N}_k}}a_{1,\ell k}\bm w_{\ell,i-1}\label{equ2c}\\
\bm \psi_{k,i} &= \bm \phi_{k,i-1} - \mu_k \widehat{\nabla_{w^{\sf T}}J}_k(\bm \phi_{k,i-1})\label{equ2a}\\
\bm w_{k,i} &= \displaystyle{\sum_{\ell\in\mathcal{N}_k}}a_{2,\ell k}\bm \psi_{\ell,i}\label{equ2b}
\end{empheq}
\end{subequations}
where the $M$-vector $\bm w_{k,i}$ denotes the estimate by agent $k$ at iteration $i$ for $w^o$, while $\bm{\psi}_{k,i}$ and $\bm \phi_{k,i-1}$ are intermediate estimates. Moreover, an approximation for the true gradient vector of $J_k(w)$, $\widehat{\nabla_{w^{\sf T}}J}_k(\cdot)$, is used in (\ref{equ2a}) since it is generally the case that the true gradient vector is not available (e.g., when $J_k(w)$ is defined as the expectation of some loss function and the probability distribution of the data is not known beforehand to enable computation of $J_k(\cdot)$ or its gradient vector). The symbol ${\cal N}_k$ in (\ref{equ2}) refers to the neighborhood of agent $k$. The $N\times N$ combination matrices $A_1=[a_{1,\ell k}]$ and $A_2=[a_{2,\ell k}]$ are left-stochastic matrices consisting of convex combination coefficients that satisfy:
\begin{equation}\label{equ3}
a_{j,\ell k}\geq0,\;\;\sum\limits_{\ell=1}^{N}a_{j,\ell k}=1,\;\;a_{j,\ell k}=0,\textrm{if}\,\, \ell\notin \mathcal{N}_{k}
\end{equation}
for $j =1,2$. Either of these two matrices can be chosen as the identity matrix, in which case algorithm (\ref{equ2}) reduces
to one of two common forms for diffusion adaptation: the adapt-then-combine (ATC) form when $A_1=I$
and the combine-then-adapt (CTA) form when $A_2=I$. We continue to work with the general
formulation (\ref{equ2}) in order to treat both algorithms, and other cases as well, in a unified manner. The parameter $\mu_k>0$ is a constant step-size factor used to drive the adaptation process. Its value is set to a constant in order to enable continuous adaptation in response to streaming data or drifting minimizers. We could also consider a distributed implementation of the useful consensus-type \cite{ASayed2014, Moura2012,Nedic2010,Kar2011,Dimakis2010,Sardellitti2010,Braca2008,Khan2008,Xiao2004}. However, it has been shown in \cite{ASayed2014, Tu12} that when constant step-sizes are used to drive adaptation, the diffusion networks have wider stability ranges and superior performance. This is because consensus implementations have an inherent asymmetry in their updates, which can cause network graphs to behave in an unstable manner when the step-size is constant. This problem does not occur over diffusion networks. Since adaptation is a core element of the proposed strategies in this work, we therefore focus on diffusion learning mechanisms.
The main distinction in this work relative to prior studies on diffusion or consensus adaptive networks is that we now assume that, at each iteration $i$, the adaptation step in (\ref{equ2a}) has only access to a {\em random subset} of the entries of the approximate gradient vector. This situation may arise due to missing data or a purposeful desire to reduce the computational burden of the update step. We model this scenario by replacing the approximate gradient vector by
\begin{equation}\label{equ4}
\widehat{\nabla_{w^{\sf T}}J}^{\rm miss}_k(\bm \phi_{k,i-1}) = \bm{\Gamma}_{k,i}\cdot\widehat{\nabla_{w^{\sf T}}J}_k(\bm \phi_{k,i-1})
\end{equation}
where the random matrix $\bm{\Gamma}_{k,i}$ is diagonal and consists of Bernoulli random variables $\{\bm{r}_{k,i}(m)\}$; each of these variables is either zero or one with probability
\begin{equation}\label{equ4a}
\mbox{\rm Prob}(\bm{r}_{k,i}(m)=0)\triangleq r_k
\end{equation}
where $0\leq r_k<1$ and
\begin{equation}\label{equ5}
\bm{\Gamma}_{k,i} = \mbox{\rm diag}\{\bm{r}_{k,i}(1),\bm{r}_{k,i}(2),\ldots,\bm{r}_{k,i}(M)\}
\end{equation}
In the case when $\bm{r}_{k,i}(m)=0$, the $m$-th entry of the gradient vector is missing, and then the $m$-th entry of $\bm \psi_{k,i}$ in (\ref{equ2a}) is not updated. Observe that we are attaching two subscripts to $\bm{r}$: $k$ and $i$, which means that we are allowing the randomness in the update to vary across agents and also over time.
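To make the resulting update concrete, the following minimal Python sketch implements one iteration of (\ref{equ2}) with the masked stochastic gradient (\ref{equ4}). The data layout (agents stored as columns of an $M\times N$ array) and the callback \texttt{grad\_hat} that returns an agent's stochastic gradient are our own illustrative assumptions and are not part of the analysis:

\begin{verbatim}
import numpy as np

def masked_diffusion_step(W, A1, A2, mu, r, grad_hat, rng):
    """One iteration of the diffusion strategy with random coordinate
    updates.  W is M x N (column k holds agent k's estimate), A1 and A2
    are N x N left-stochastic combination matrices, mu and r are
    length-N arrays of step-sizes and missing probabilities, and
    grad_hat(k, phi) returns agent k's stochastic gradient at phi."""
    M, N = W.shape
    Phi = W @ A1                  # first combination step
    Psi = np.empty_like(W)
    for k in range(N):
        g = grad_hat(k, Phi[:, k])
        # Bernoulli mask: entry m is zero with probability r[k]
        mask = (rng.random(M) >= r[k]).astype(float)
        Psi[:, k] = Phi[:, k] - mu[k] * mask * g   # masked adaptation step
    return Psi @ A2               # second combination step
\end{verbatim}

Setting all entries of \texttt{r} to zero recovers the standard full-gradient diffusion update.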
\subsection{Relation to Block-Coordinate Descent Methods}
Our formulation provides a nontrivial generalization of the powerful random coordinate-descent
technique studied, for example, in the context of deterministic optimization in \cite{nesterov2012,richtarik2014,lu2013} and the references
therein. Random coordinate-descent has been primarily applied in the literature to single-agent
convex optimization, namely, to problems of the form:
\begin{equation}
w^o=\arg\min_w J(w)
\end{equation}
where $J(w)$ is assumed to be known beforehand. The traditional gradient descent algorithm for
seeking the minimizer of $J(w)$, assumed differentiable, takes the form
\begin{equation}
w_i=w_{i-1}-\mu \nabla_{w^{\sf T}} J(w_{i-1})
\end{equation}
where the full gradient vector is used at every iteration to update $w_{i-1}$ to $w_i$. In a coordinate-descent implementation,
on the other hand, at every iteration $i$, only a subset of the entries of the gradient vector is
used to perform the update. These subsets are usually chosen as follows. First, a collection of $K$ partitions of
the parameter space $w$ is defined. These partitions are defined by diagonal matrices, $\{\Omega_k\}$.
Each matrix has ones and zeros on the diagonal and the matrices add up to the identity matrix:
\begin{equation}
\sum_{k=1}^K \Omega_k=I_M
\end{equation}
Multiplying $w$ by any $\Omega_k$ results in a vector of similar size, albeit one where the only
nontrivial entries are those extracted by the unit locations in $\Omega_k$. At every iteration $i$, one of
the partitions is selected randomly, say, with probability
\begin{equation}
\mbox{\rm Prob}(\bm{\Gamma}_i=\Omega_k)=\omega_k
\end{equation}
where the $\{\omega_k\}$ add up to one. Subsequently, the gradient descent iteration is replaced by
\begin{equation}\label{equ5a}
\bm w_i=\bm w_{i-1}-\mu \bm{\Gamma}_i \nabla_{w^{\sf T}} J(\bm w_{i-1})
\end{equation}
This formulation is known as the randomized block-coordinate descent (RBCD) algorithm \cite{nesterov2012,richtarik2014,lu2013}. At each
iteration, the gradient descent step employs only a collection of coordinates represented by the selected
entries from the gradient vector. Besides reducing complexity, this step helps alleviate the condition on the step-size parameter for convergence.
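For illustration, a minimal sketch of (\ref{equ5a}) with $K$ fixed coordinate blocks is given below; the quadratic cost used at the end is a hypothetical example chosen only to make the snippet self-contained:

\begin{verbatim}
import numpy as np

def rbcd(grad, w0, mu, partitions, omega, num_iter, rng):
    """Randomized block-coordinate descent: at every iteration one of
    the pre-defined index blocks is drawn with probability omega[k] and
    only those entries of the gradient are used in the update."""
    w = w0.copy()
    for _ in range(num_iter):
        k = rng.choice(len(partitions), p=omega)
        idx = partitions[k]           # unit locations of Omega_k
        w[idx] -= mu * grad(w)[idx]   # update the selected coordinates
    return w

# illustrative use on J(w) = 0.5 ||B w - c||^2
rng = np.random.default_rng(0)
M = 6
B = np.eye(M) + 0.1 * rng.standard_normal((M, M))
c = rng.standard_normal(M)
grad = lambda w: B.T @ (B @ w - c)
blocks = [np.arange(0, 3), np.arange(3, 6)]       # K = 2 partitions
w_hat = rbcd(grad, np.zeros(M), 0.05, blocks, [0.5, 0.5], 5000, rng)
\end{verbatim}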
If we reduce our formulation (\ref{equ2})--(\ref{equ4}) to the single agent case, it will become similar to (\ref{equ5a}) in that the desired cost function is optimized only along a \emph{subset} of the coordinates at each iteration. However, our algorithm offers more randomness in generating the coordinate blocks than the RBCD algorithm, by allowing more random combinations of the coordinates at each time index. In particular, we do not limit the selection of the coordinates to a collection of $K$ possibilities
pre-determined by the $\{\Omega_k\}$. Moreover, in our work we use a random subset of the \emph{{stochastic}} gradient vector instead of the \emph{true} gradient vector to update the estimate, which is necessary for adaptation and online learning when the true risk function itself is not known (since the
statistical distribution of the data is not known). Also, our results consider a general multi-agent
scenario involving distributed optimization where \emph{each} individual agent employs random coordinates for
its own gradient direction, and these coordinates are generally different from the coordinates used by other agents. In other
words, the networked scenario adds significant flexibility into the operation of the agents under model
(\ref{equ4}).
\subsection{Relation to Partial Updating Schemes}
It is also important to clarify the differences between our formulation and other works in the
literature, which rely on other useful notions of partial information updates. To begin with, our formulation (\ref{equ4}) is different from the models used in \cite{Zhao20151,Zhao20152,Zhao20153} where the step-size parameter was modeled as a random Bernoulli variable, $\bm{\mu}_k(i)$, which could assume the values $\mu_k$ or zero with certain probability. In that case, when the step-size is zero, all entries of $\bm \psi_{k,i}$ will not be updated and adaptation is turned off completely. This is in contrast to the current scenario where only a subset of the entries are left without update and, moreover, this subset varies randomly from one iteration to another.
Likewise, the useful work \cite{Arablouei2014} employs a different notion of partial sharing of information by focusing on
the exchange of partial entries of the weight estimates themselves rather than on partial entries of the
gradient directions. In other words, the partial information used in this work relates to the combination steps (\ref{equ2c}) and (\ref{equ2b}) rather than to the adaptation step (\ref{equ2a}). It also focuses on the special case in which the risks $\{J_k(w)\}$ are quadratic in $w$. In \cite{Arablouei2014}, it is assumed that only a subset of the weight entries are shared (diffused) among neighbors and that
the estimate itself is still updated fully. In comparison, the formulation we are considering diffuses all entries of the weight estimates. Similarly, in \cite{Gholami2013} it is assumed that some entries of the regression vector are missing, which causes changes to the gradient vectors. In order to undo these changes, an estimation scheme is proposed in \cite{Gholami2013} to estimate the missing data. In our formulation, more generally, a random subset of the entries of the gradient vector are set to zero at each iteration, while the remaining entries remain unchanged and do not need to be estimated.
There are also other criteria that have been used in the literature to motivate partial updating. For example, in \cite{Douglas1997}, the periodic and sequential least-mean-squares (LMS) algorithms are proposed, where the former scheme updates the whole coefficient vector every $N$-th iteration, with $N>1$, and the latter updates only a predetermined subset of the coefficients at each iteration. In \cite{Werner2008,Werner2010ASILOMAR} the weight vectors are partially updated by following a set-membership approach, where updates occur only when the {\em innovation} obtained from the data exceeds a predetermined threshold. In \cite{Douglas1994,Dogancay2001,Werner2010ASILOMAR}, only entries corresponding to the largest magnitudes in the regression vector or the gradient vector at each agent are updated. However, such scheduled updating techniques may suffer from non-convergence in the presence of nonstationary signals \cite{Godavarti2005}. Partial update schemes can also be based on dimensionality reduction policies using Krylov subspace concepts \cite{Chouvardas2013,Theodoridis2011,Chouvardas2012}. There are also techniques that rely on energy considerations to limit updates, e.g., \cite{Gharehshiran2013}.
The objective of the analysis that follows is to examine the effect of {\em random} partial gradient information on the learning performance and convergence rate of adaptive networks for general risk functions. We clarify these questions by adapting the framework described in \cite{ASayed2014,Sayed2014}.
{\emph{Notation}}: We use lowercase letters to denote vectors, uppercase
letters for matrices, plain letters for deterministic
variables, and boldface letters for random variables. We also
use $(\cdot)^{\sf T}$ to denote transposition, $(\cdot)^{-1}$ for matrix inversion,
$\mathrm{Tr}(\cdot)$ for the trace of a matrix, $\mbox{\rm diag}\{\cdot\}$ for a diagonal matrix, $\mathrm{col}\{\cdot\}$ for a column vector, $\lambda(\cdot)$ for the eigenvalues of
a matrix, $\rho(\cdot)$ for the spectral radius of a matrix, $\|\cdot\|$ for the two-induced norm of a matrix or the Euclidean
norm of a vector, $\|x\|_{\Sigma}^2$ for the weighted square value $x^{\sf T}\Sigma x$, $\otimes$ for Kronecker product, $\otimes_b$ for block Kronecker product.
We further use $p\succ 0$ to denote that all entries of the vector $p$ are positive. Moreover, $\alpha=O(\mu)$ signifies that $|\alpha|\leq c|\mu|$ for some constant $c > 0$, and $\alpha=o(\mu)$ signifies that $\alpha/\mu\to0$ as $\mu\to0$. Finally, $\limsup_{n\to\infty}a(n)$ denotes the limit superior of the sequence $a(n)$.
\section{Data Model and Assumptions}
Let $\bm{\mathcal{F}}_{i-1}$ represent the filtration (collection) of all random events generated by the processes $\{\bm w_{k,j}\}$ and $\{\bm\Gamma_{k,j}\}$ at all agents up to time $i-1$. In effect, the notation $\bm{\mathcal F}_{i-1}$ refers to the collection of all past $\{{\boldsymbol{w}}_{k,j},\bm{\Gamma}_{k,j}\}$ for all $j\leq i-1$ and all agents.
\begin{assumption}\label{ass2}
(\textbf{Conditions on indicator variables}). It is assumed that the indicator variables $\bm{r}_{k,i}(m)$ and $\bm{r}_{\ell,i}(n)$ are independent of each other, for all $\ell,k,m,n$. In addition, the variables $\{\bm{r}_{k,i}(m)\}$ are independent of $\bm{\mathcal{F}}_{i-1}$ and $\widehat{\nabla_{w^{\sf T}}J}_k(\bm w)$ for any iterates $\bm w\in\bm{\mathcal{F}}_{i-1}$ and for all agents $k$.
\hfill $\Box$
\end{assumption}
Let
\begin{equation}\label{equ16}
\bm s_{k,i}(\bm{\phi}_{k,i-1}) \triangleq \widehat{\nabla_{w^{\sf T}}J}_k(\bm{\phi}_{k,i-1})-{\nabla_{w^{\sf T}}J}_k(\bm{\phi}_{k,i-1})
\end{equation}
denote the gradient noise at agent $k$ at iteration $i$, based on the {\em complete} approximate gradient vector, $\widehat{\nabla_{w^{\sf T}}J}_k(\bm w)$.
We introduce its conditional second-order moment
\begin{equation}\label{equ41}
\bm R_{s,k,i}(\bm w)\triangleq\mathbb{E}[\bm s_{k,i}(\bm w)\bm s^{\sf T}_{k,i}(\bm w)|\bm{\mathcal{F}}_{i-1}].
\end{equation}
The following assumptions are standard and are satisfied by important cases of interest, such as logistic regression risks or mean-square-error risks, as already shown in \cite{ASayed2014,Sayed2014}. These references also motivate these conditions and explain why they are reasonable.
\begin{assumption}\label{ass4}(\textbf{Conditions on gradient noise}) \cite[pp. 496--497]{ASayed2014}. It is assumed that the first and
fourth-order conditional moments of the individual gradient noise processes satisfy the following conditions for any iterates
$\bm w \in \bm{\mathcal{F}}_{i-1} $ and for all $k,\ell= 1, 2,\ldots,N$:
\begin{align}
\mathbb{E}[\bm s_{k,i}(\bm w)|\bm{\mathcal{F}}_{i-1}]&=0\label{equ20}\\
\mathbb{E}[\bm s_{k,i}(\bm w)\bm s_{\ell,i}^{\sf{T}}(\bm w)|\bm{\mathcal{F}}_{i-1}]&=0,\,k\neq \ell\label{equ22}\\
\mathbb{E}[\|\bm s_{k,i}(\bm w)\|^4|\bm{\mathcal{{F}}}_{i-1}]&\leq{\beta}_k^4\|\bm w\|^4+{\sigma}_{s,k}^4 \label{equ23a}
\end{align}
almost surely, for some nonnegative scalars ${\beta}_k^4$ and ${\sigma}_{s,k}^4$.\hfill $\Box$
\end{assumption}
\begin{assumption}\label{ass5} (\textbf{Smoothness conditions}) \cite[pp. 552,576]{ASayed2014}.
It is assumed that the Hessian matrix of each individual cost function, $J_k(w)$,
and the covariance matrix of each individual gradient noise process are locally Lipschitz continuous in a small neighborhood around $w = w^o$ in the following manner:
\begin{align}
\|\nabla_w^2J_k(w^o+\triangle w)-\nabla_w^2J_k(w^o)\|&\leq \kappa_c\|\triangle w\|\label{equ7a}\\
\left\|\bm R_{s,k,i}(w^o+\triangle w)-\bm R_{s,k,i}(w^o)\right\|&\leq \kappa_d\|\triangle w\|^\gamma\label{equ7b}
\end{align}
for any small perturbations $\|\triangle w\|\leq\varepsilon$ and for some $\kappa_c\geq0$, $\kappa_d\geq0$, and parameter $0<\gamma\leq4$.\hfill $\Box$
\end{assumption}
\section{Main Results: Stability and Performance}
For each agent $k$, we introduce the error vectors:
\begin{align}
\widetilde{\bm{{w}}}_{k,i}&\triangleq w^o-\bm w_{k,i}\label{equ12}\\
\widetilde{\bm{{\phi}}}_{k,i}&\triangleq w^o-\bm \phi_{k,i}\label{equ12b}\\
\widetilde{\bm{{\psi}}}_{k,i}&\triangleq w^o-\bm \psi_{k,i}\label{equ13}
\end{align}
We also collect all errors, along with the gradient noise processes, from across the network into block vectors:
\begin{align}
\widetilde{\bm{{w}}}_{i} &\triangleq \mathrm{col}\left\{\widetilde{\bm{{w}}}_{1,i},\widetilde{\bm{{w}}}_{2,i},\ldots,\widetilde{\bm{{w}}}_{N,i}\right\}\label{equ14a}\\
\widetilde{\bm{{\psi}}}_{i}&\triangleq \mathrm{col}\left\{\widetilde{\bm{{\psi}}}_{1,i},\widetilde{\bm{{\psi}}}_{2,i},\ldots,\widetilde{\bm{{\psi}}}_{N,i}\right\}\label{equ14b}\\
\widetilde{\bm{{\phi}}}_{i} &\triangleq \mathrm{col}\left\{\widetilde{\bm{{\phi}}}_{1,i},\widetilde{\bm{{\phi}}}_{2,i},\ldots,\widetilde{\bm{{\phi}}}_{N,i}\right\}\label{equ14c}\\
{\bm{{s}}}_{i} &\triangleq \mathrm{col}\left\{{\bm{{s}}}_{1,i},{\bm{{s}}}_{2,i},\ldots,{\bm{{s}}}_{N,i}\right\}\label{equ14d}
\end{align}
For simplicity, in (\ref{equ14d}) we use the notation $\bm s_{k,i}$ in place of the gradient noise $\bm s_{k,i}(\bm \phi_{k,i-1})$ defined in (\ref{equ16}); note, however, that the vector ${\bm{{s}}}_{i}$ still depends on the collection $\{\bm \phi_{k,i-1}\}$ for all $k$.
We further introduce the extended matrices:
\begin{align}
\mathcal{M}&\triangleq\mbox{\rm diag}\{\mu_1,\mu_2,\ldots,\mu_N\}\otimes I_{M}\label{equ160}\\
\mathcal{A}_1&\triangleq A_1\otimes I_{M},\mathcal{A}_2\triangleq A_2\otimes I_{M}\label{equ160b}\\
\bm{\Gamma}_i&\triangleq\mbox{\rm diag}\{{\bm{{\Gamma}}}_{1,i},{\bm{{\Gamma}}}_{2,i},\ldots,{\bm{{\Gamma}}}_{N,i}\}\label{equ160c}
\end{align}
Note that the main difference between the current work and the prior work in \cite{ASayed2014} is the appearance of the random matrices $\{\bm \Gamma_{k,i}\}$ defined by (\ref{equ4}). In the special case when these random matrices are set to the identity matrix across the agents, i.e., $\{\bm \Gamma_{k,i}\equiv I_M\}$, the current coordinate-descent case reduces to the full-gradient update studied in \cite{ASayed2014}. The inclusion of the random matrices $\{\bm{\Gamma}_{k,i}\}$ adds a non-trivial level of complication because now agents update only random entries of their iterates at each iteration and, importantly, these entries vary randomly across the agents. This procedure adds a rich level of randomness into the operation of the multi-agent system. As the presentation will reveal, the study of the stability and limiting performance under these conditions is more challenging than in the stochastic full-gradient diffusion implementation due to at least two factors: (a) first, the evolution of the error dynamics now involves a {\em non-symmetric} matrix (matrix $\bm{D}_{11,i}$ defined later in (\ref{app9})); because of this asymmetry, the arguments of \cite{ASayed2014} do not apply and need to be modified; and (b) second, there is also randomness in the coefficient matrix for the error dynamics (namely, randomness in the matrix $\bm{\mathcal B}_{i}'$ defined by (\ref{equ31})). These two factors add nontrivial complications to the stability, convergence, and performance analysis of distributed coordinate-descent solutions, as illustrated by the extended derivations in Appendices \ref{APP1} and \ref{APP3}. These derivations illustrate the new arguments that are necessary to handle the networked solution of this manuscript. For this reason, in the presentation that follows, whenever we can appeal to a result from \cite{ASayed2014}, we will simply refer to it so that, due to space limitations, we can focus the presentation on the new arguments and proofs that are necessary for the current context. It is clear from the proofs in Appendices \ref{APP1} and \ref{APP3} that these newer arguments are demanding and not straightforward.
\begin{lemma}\label{lem1}
(\textbf{Network error dynamics}). Consider a network of $N$ interacting
agents running the diffusion strategy (\ref{equ2}) with the gradient vector replaced by (\ref{equ4}). The evolution of the error
dynamics across the network relative to the reference vector $w^o$ is described by the following recursion:
\begin{equation}\label{equ24}
\widetilde{\bm{w}}_i = \bm{\mathcal{B}}_{i}\widetilde{\bm{w}}_{i-1}+\mathcal{A}_2^{\sf{T}}\mathcal{M}\bm{\Gamma}_i\bm{s}_i
\end{equation}
where
\begin{align}
\bm{\mathcal{B}}_{i}&\triangleq\mathcal{A}_2^{\sf{T}}\left(I-\mathcal{M}\bm{\Gamma}_i\bm{\mathcal{H}}_{i-1}\right)\mathcal{A}_1^{\sf{T}}\label{equ25}\\
\bm{\mathcal{H}}_{i-1}&\triangleq\mbox{\rm diag}\{\bm H_{1,i-1},\,\bm H_{2,i-1},\ldots,\,\bm H_{N,i-1}\}\label{equ26}\\
\bm H_{k,i-1}&\triangleq\int_0^1\nabla^2_wJ_k(w^o-t\widetilde{\bm\phi}_{k,i-1})dt\label{equ27}.
\end{align}
\end{lemma}
\noindent\emph {Proof}: Refer to \cite[pp. 498--504]{ASayed2014}, which is still applicable to the current context. We only need to set in that derivation the matrix $A_o$ to $A_o =I$, and the vector $b$ to $b=0_{MN}$. These quantities were defined in (8.131) and (8.136) of \cite{ASayed2014}. The same derivation will lead to (\ref{equ24})--(\ref{equ27}), with the main difference being the appearance now of the random matrix $\bm{\Gamma}_i$ in (\ref{equ24}) and (\ref{equ25}). \hfill $\Box$
We assume that the matrix product $P=A_1A_2$ is primitive. This condition is guaranteed automatically,
for example, for ATC and CTA scenarios when the network is strongly-connected. This means, in view of the Perron-Frobenius Theorem \cite{ASayed2014,Sayed2014}, that $P$ has a single eigenvalue at one. We denote the corresponding eigenvector by $p$, and normalize the entries of $p$ to add up to one. It follows from the same theorem that the entries of $p$ are strictly positive, written as
\begin{equation}\label{equ11a}
Pp=p,\;\;\mathds{1}^{\sf T} p=1,\;\;p\succ0
\end{equation}
with $\mathds{1}$ being the vector of size $N$ with all its entries equal to one.
\begin{theorem}\label{theo1}
(\textbf{Network stability}). Consider a strongly-connected network of $N$ interacting agents running the diffusion strategy (\ref{equ2}) with the gradient vector replaced by (\ref{equ4}). Assume the matrix product $P=A_1A_2$ is primitive. Assume also that the individual cost functions, $J_k(w)$, satisfy the condition in (\ref{equ6}) and that Assumptions \ref{ass2}--\ref{ass4} hold. Then, the second and fourth-order moments of the network error vectors are stable for sufficiently small step-sizes, namely, it holds, for all $k= 1, 2,\ldots,N$, that
\begin{align}
\limsup_{i\to\infty}\mathbb{E}\|{\widetilde{\bm{w}}}_{k,i}\|^2&=O(\mu_{\mathrm{max}})\label{equ28a}\\
\limsup_{i\to\infty}\mathbb{E}\|{\widetilde{\bm{w}}}_{k,i}\|^4&=O(\mu^2_{\mathrm{max}})\label{equ28b}
\end{align}
for any $\mu_{\mathrm{max}}<\mu_o$, for some small enough $\mu_o$, where
\begin{equation}\label{app13}
\mu_{\mathrm{max}}\triangleq\max\{\mu_1,\mu_2,\ldots,\mu_N\}.
\end{equation}
\end{theorem}
\noindent\emph {Proof}:
The argument requires some effort and is given in Appendix \ref{APP1}.\hfill $\Box$
\begin{lemma}\label{lem2} (\textbf{Long-term network dynamics}). Consider a strongly-connected network of $N$ interacting agents running the diffusion strategy (\ref{equ2}) under (\ref{equ4}). Assume the matrix product $P=A_1A_2$ is primitive. Assume also that the individual cost functions satisfy (\ref{equ6}), and that Assumptions \ref{ass2}--\ref{ass4} and (\ref{equ7a}) hold. After sufficient
iterations, $i\gg1$, the error dynamics of the network relative to the reference vector $w^o$
is well-approximated by the following model:
\begin{equation}\label{equ30}
\widetilde{\bm{w}}_i' = \bm{\mathcal{B}}'_{i}\widetilde{\bm{w}}_{i-1}'+\mathcal{A}_2^{\sf{T}}\mathcal{M}\bm{\Gamma}_i\bm{s}_i,\,\, i\gg 1
\end{equation}
where
\begin{align}
\bm{\mathcal{B}}_{i}'&\triangleq\mathcal{A}_2^{\sf{T}}\left(I-\mathcal{M}\bm{\Gamma}_i\mathcal{H}\right)\mathcal{A}_1^{\sf{T}}\label{equ31}\\
{\mathcal{H}}&\triangleq\mbox{\rm diag}\{ H_{1},\,H_{2},\ldots,\,H_{N}\}\label{equ32}\\
H_{k}&\triangleq\nabla^2_wJ_k(w^o)\label{equ33}
\end{align}
More specifically, it holds for sufficiently small step-sizes that
\begin{align}\label{equ35}
\limsup_{i\to\infty}\mathbb{E}\|{\widetilde{\bm{w}}}_{k,i}'\|^2&=O(\mu_{\mathrm{max}})\\
\limsup_{i\to\infty}\mathbb{E}\|{\widetilde{\bm{w}}}_{k,i}'\|^4&=O(\mu_{\mathrm{max}}^2)\label{equ35b}
\end{align}
\begin{equation}\label{equ34}
\limsup_{i\to\infty}\mathbb{E}\|\widetilde{\bm{w}}_i'\|^2=\limsup_{i\to\infty}\mathbb{E}\|\widetilde{\bm{w}}_i\|^2+O(\mu_{\mathrm{max}}^{3/2}).
\end{equation}
\end{lemma}
\noindent\emph {Proof}:\label{pr2}
To establish (\ref{equ30}), we refer to the derivation in \cite[pp. 553--555]{ASayed2014}, and note that, in our case, $\|\bm{\Gamma}_i\|\leq 1$ and $b={0}_{MN}$ (which appeared in (10.2) of \cite{ASayed2014}). Moreover, the results in (\ref{equ35}) and (\ref{equ35b}) can be established by following similar techniques to the proof of Theorem \ref{theo1}, where the only difference is that the random matrix $\bm{\mathcal{H}}_{i-1}$ defined in (\ref{equ26}) is now replaced with the deterministic matrix $\mathcal{H}$ defined by (\ref{equ32}), and by noting that the matrices $\{H_k\}$ in (\ref{equ33}) still satisfy the condition (\ref{equ6}). With regards to result (\ref{equ34}), we refer to the argument in \cite[pp. 557--560]{ASayed2014} and note again that $\|\bm{\Gamma}_i\|\leq 1$.\hfill$\Box$
Result (\ref{equ28a}) ensures that the mean-square-error (MSE) performance of the network is on the order of $\mu_{\max}$. Using the long-term model (\ref{equ30}), we can be more explicit and derive the proportionality constant that describes the value of the network mean-square error to first-order in $\mu_{\max}$. To do so, we introduce the quantity
\begin{equation}\label{equ11}
q\triangleq \mbox{\rm diag}\left\{\mu_1,\mu_2,\ldots,\mu_N\right\}A_2 p
\end{equation}
and the gradient-noise covariance matrices:
\begin{align}
G_k&\triangleq\lim_{i\to\infty}\bm R_{s,k,i}(w^o)\label{app45e}\\
G_k'&\triangleq\mathbb{E}[\bm\Gamma_{k,i}G_k\bm\Gamma_{k,i}]\label{equ45d}.
\end{align}
Observe that $G_k$ is the limiting covariance matrix of the gradient noise process evaluated at $w^o$, and is assumed to be a constant value, while $G_k'$ is a
weighted version of it. A typical example for the existence of the limit in (\ref{app45e}) is the MSE network, where the covariance matrix of the gradient noise is a constant matrix, which is independent of the time index $i$ \cite[p. 372]{ASayed2014}. It follows by direct inspection that the entries of $G_k'$ are given by:
\begin{equation}\label{equ46}
G_k'(m,n)=\left\{\begin{array}{ll}(1-r_k)^2G_k(m,n),&m\neq n\\
(1-r_k)G_k(m,m),&m=n.\end{array}\right.
\end{equation}
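The structure in (\ref{equ46}) follows because the $(m,n)$ entry of $\bm\Gamma_{k,i}G_k\bm\Gamma_{k,i}$ equals $\bm r_{k,i}(m)\bm r_{k,i}(n)G_k(m,n)$, and the Bernoulli factors are independent for $m\neq n$. A quick numerical sanity check of (\ref{equ45d})--(\ref{equ46}) is sketched below; the particular covariance matrix and missing probability are arbitrary choices made only for illustration:

\begin{verbatim}
import numpy as np

def G_prime_formula(G, r):
    """Weighted covariance built entrywise from G and the
    missing probability r, as in the closed-form expression."""
    Gp = (1.0 - r) ** 2 * G.copy()
    np.fill_diagonal(Gp, (1.0 - r) * np.diag(G))
    return Gp

def G_prime_montecarlo(G, r, trials=100000):
    """Monte Carlo estimate of E[Gamma G Gamma] with i.i.d.
    Bernoulli(1 - r) entries on the diagonal of Gamma."""
    rng = np.random.default_rng(0)
    M = G.shape[0]
    acc = np.zeros_like(G)
    for _ in range(trials):
        gamma = (rng.random(M) >= r).astype(float)
        acc += np.outer(gamma, gamma) * G
    return acc / trials

A = np.random.default_rng(1).standard_normal((4, 4))
G = A @ A.T                       # an arbitrary symmetric PSD matrix
print(np.max(np.abs(G_prime_formula(G, 0.3) - G_prime_montecarlo(G, 0.3))))
\end{verbatim}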
We also define the mean-square-deviation (MSD) for each agent $k$, and the average MSD across the network to first-order in $\mu_{\max}$ --- see \cite{ASayed2014} for further clarifications on these expressions where it is explained, for example, that $\mbox{\rm MSD}_k$ provides the steady-state value of the error variance $\mathbb{E}\|
\widetilde{\bm w}_{k,i}\|^2$ to first-order in $\mu_{\max}$:
\begin{align}
\mathrm{MSD}_k &\triangleq\mu_{\mathrm{max}}\left(\lim_{\mu_\mathrm{max}\to0}\limsup_{i\to\infty}\frac{1}{\mu_{\mathrm{max}}}\mathbb{E}\|\bm{\widetilde{w}}_{k,i}\|^2\right)\label{equ42}\\
\mathrm{MSD}_{av} &\triangleq\frac{1}{N}\sum\limits_{k=1}^N\mathrm{MSD}_k \label{equ42b}.
\end{align}
Likewise, we define the excess-risk (ER) for each agent $k$ as the average fluctuation of the normalized aggregate cost
\begin{equation}\label{equ68}
\bar{J}^{\rm glob}(w)\triangleq\left(\sum\limits_{k=1}^{N}q_k\right)^{-1}\sum\limits_{k=1}^{N}q_kJ_k(w)
\end{equation}
with $\{q_k\}$ being entries of the vector $q$ defined by (\ref{equ11}), around its minimum value $\bar{J}^{\rm glob}(w^o)$ at steady state to first-order in $\mu_{\max}$, namely \cite[p. 581]{ASayed2014}:
\begin{multline}\label{equ69}
\mathrm{ER}_k \triangleq\mu_{\mathrm{max}}\\\times\left(\lim_{\mu_\mathrm{max}\to0}\limsup_{i\to\infty}\frac{1}{\mu_{\mathrm{max}}}\mathbb{E}\{\bar{J}^{\rm glob}(\bm{{w}}_{k,i})-\bar{J}^{\rm glob}({{w}}^o)\}\right).
\end{multline}
The average ER across the network is defined by
\begin{equation}\label{equ71}
\mathrm{ER}_{av} \triangleq\frac{1}{N}\sum_{k=1}^N\mathrm{ER}_k.
\end{equation}
By following similar arguments to \cite[p. 582]{ASayed2014}, it can be verified that the excess risk can also be evaluated by computing a weighted mean-square-error
variance:
\begin{equation}
\mathrm{ER}_k \triangleq\mu_{\mathrm{max}}\left(\lim_{\mu_\mathrm{max}\to0}\limsup_{i\to\infty}\frac{1}{\mu_{\mathrm{max}}}\mathbb{E}\|\bm{\widetilde{w}}_{k,i}\|^2_{\frac{1}{2}\bar{H}}\right)\label{equ42c}
\end{equation}
where $\bar{H}$ denotes the Hessian matrix of the normalized aggregate cost, $\bar{J}^{\rm glob}(w)$, evaluated at the minimizer $w=w^o$:
\begin{equation}\label{equ67}
\bar{H}\triangleq\left(\sum\limits_{k=1}^{N}q_k\right)^{-1}\sum_{k=1}^Nq_kH_k
\end{equation}
with $H_k$ defined by (\ref{equ33}).
Moreover, we define the convergence rate as the slowest rate at which the error variances, $\mathbb{E}\|\bm{\widetilde{w}}_{k,i}\|^2$, converge to the steady-state region. By iterating the recursion for the second-order moment of the error vector, we will arrive at a relation in the following form:
\begin{equation}\label{equ94}
\mathbb{E}[\|\widetilde{\bm{w}}_i\|^2]= \mathbb{E}\left\{\|\widetilde{\bm{w}}_{-1}\|^2_{{F}^{i+1}}\right\}+c
\end{equation}
for some matrix ${F}$ and constant $c$, where $\widetilde{\bm{w}}_{-1}$ denotes the network error vector at the initial time instant. The first term on the right-hand side corresponds to a transient component
that dies out with time, and the second term denotes the steady-state region that $\mathbb{E}[\|\widetilde{\bm{w}}_i\|^2]$ converges to. Then, the convergence rate of $\mathbb{E}[\|\widetilde{\bm{w}}_i\|^2]$ towards its steady-state region is dictated by $\rho({F})$ \cite[p. 395]{ASayed2014}. The following conclusion is one of the main results in this work. It shows how the coordinate
descent construction influences performance in comparison to the standard diffusion strategy where all
entries of the gradient vector are used at each iteration. Following the statement of the result, we
illustrate its implications by considering several important cases.
\begin{theorem}\label{theo2} (\textbf{MSD and ER performance}). Under the same setting of Theorem \ref{theo1}, and assuming additionally that Assumption \ref{ass5} holds, it holds for sufficiently small step-sizes that:
\begin{multline}\label{equ43}
\mathrm{MSD}_{\mathrm{coor},k}= \mathrm{MSD}_{\mathrm{coor,av}}\\=\frac{1}{2}\mathrm{Tr}\left(\left(\sum\limits_{k=1}^Nq_k(1-r_k)H_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2G_k'\right)
\end{multline}
\begin{equation}\label{equ72}
\mathrm{ER}_{\mathrm{coor},k}= \mathrm{ER}_{\mathrm{coor,av}}=\frac{1}{2}\mathrm{Tr}\left(X\sum\limits_{k=1}^Nq_k^2G_k'\right)
\end{equation}
where the subscript ``$\mathrm{coor}$'' denotes the stochastic coordinate-descent diffusion implementation, and matrix $X$ is the unique solution to the following Lyapunov equation:
\begin{equation}\label{equ73}
X\left(\sum\limits_{k=1}^Nq_k(1-r_k)H_k\right)+ \left(\sum\limits_{k=1}^Nq_k(1-r_k)H_k\right)X=\bar{H}
\end{equation}
with $\bar{H}$ defined by (\ref{equ67}). Moreover, for large enough $i$, the convergence rate of the error variances, $\mathbb{E}\|\bm{\widetilde{w}}_{k,i}\|^2$,
towards the steady-state region (\ref{equ43}) is given by
\begin{equation}\label{equ88}
\alpha_{\mathrm{coor}} = 1-2\lambda_{\min}\left(\sum_{k=1}^Nq_k(1-r_k)H_k\right) + O\left(\mu_{\max}^{(N+1)/N}\right)
\end{equation}
\end{theorem}
\noindent \emph {Proof}: See Appendix \ref{APP3}.
\hfill $\Box$
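For reference, the steady-state expressions (\ref{equ43})--(\ref{equ73}) can be evaluated numerically once the network quantities $\{q_k, r_k, H_k, G_k\}$ are available. The following minimal sketch is one possible implementation; in particular, the use of \texttt{scipy.linalg.solve\_sylvester} to solve the Lyapunov equation (\ref{equ73}) is our own choice and not prescribed by the analysis:

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

def theoretical_msd_er(q, r, H_list, G_list):
    """Evaluate the theoretical MSD and ER expressions from the
    network data: q, r are length-N arrays of the weights q_k and
    missing probabilities r_k; H_list, G_list collect the M x M
    matrices H_k and G_k."""
    N = len(q)
    Gp_list = []
    for k in range(N):                       # weighted covariances G_k'
        Gp = (1 - r[k]) ** 2 * G_list[k].copy()
        np.fill_diagonal(Gp, (1 - r[k]) * np.diag(G_list[k]))
        Gp_list.append(Gp)
    A = sum(q[k] * (1 - r[k]) * H_list[k] for k in range(N))
    S = sum(q[k] ** 2 * Gp_list[k] for k in range(N))
    Hbar = sum(q[k] * H_list[k] for k in range(N)) / np.sum(q)
    msd = 0.5 * np.trace(np.linalg.solve(A, S))
    X = solve_sylvester(A, A, Hbar)          # solves X A + A X = Hbar
    er = 0.5 * np.trace(X @ S)
    return msd, er
\end{verbatim}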
\section{Implications and Useful Cases} \label{sec4}
\subsection{Uniform Missing Probabilities}
Consider the case when the missing probabilities are identical across the agents, i.e., $\{r_k\equiv r\}$.
\subsubsection{Convergence time} Consider the ATC or CTA forms of the full-gradient or coordinate-descent diffusion
strategy (\ref{equ2c})--(\ref{equ2b}) and (\ref{equ4}). From (\ref{equ94}), we find that the error variances for the distributed strategies evolve according to a relation of the form:
\begin{equation}\label{app103}
\mathbb{E}\|\widetilde{\bm{w}}_{k,i}\|^2\leq {\alpha^{i+1}}\mathbb{E}\|\widetilde{\bm{w}}_{k,-1}\|^2+c
\end{equation}
for some constant $c>0$, and where the parameter $\alpha$ determines the convergence rate. Its value
is denoted by $\alpha_{\rm grad}$ for the full-gradient implementation and is given by \cite[p. 584]{ASayed2014}:
\begin{equation}
\alpha_{\mathrm{grad}} = 1-2\lambda_{\min}\left(\sum_{k=1}^Nq_kH_k\right) + o\left(\mu_{\max}\right)
\end{equation}
Likewise, the convergence rate for the coordinate-descent variant is denoted by $\alpha_{\rm coor}$
and is given by expression (\ref{equ88}). It is clear that $\alpha_{\mathrm{coor}}\geq\alpha_{\mathrm{grad}}$
for $0\leq r<1$, so that the coordinate-descent implementation converges at a slower rate as expected (since it only
employs partial gradient information). Thus, let $T_{\mathrm{coor}}$ and $T_{\mathrm{grad}}$ denote the largest number of iterations that are needed for the error variances, $\mathbb{E}\|\widetilde{\bm{w}}_{k,i}\|^2$, to converge to their steady-state regions. The values of $T_{{\rm coor}}$ and $T_{{\rm grad}}$ can be estimated by assessing the number of iterations that it takes for the transient term $\alpha^{i+1}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{k,-1}\|^2$ in (\ref{app103}) to assume a higher-order value in $\mu_{\max}$, i.e., for
\begin{align}
\alpha_{\mathrm{coor}}^{T_{\mathrm{coor}}}\mathbb{E}\|\widetilde{\bm{w}}_{k,-1}\|^2&=d\mu_{\max}^{1+\epsilon}\\
\alpha_{\mathrm{grad}}^{T_{\mathrm{grad}}}\mathbb{E}\|\widetilde{\bm{w}}_{k,-1}\|^2&=d\mu_{\max}^{1+\epsilon}
\end{align}
for some proportionality constant $d$, and small number $\epsilon >0$. Then, it holds that
\begin{align}\label{app111}
\frac{T_{\mathrm{coor}}}{T_{\mathrm{grad}}}
&=\frac{{{\rm ln}\,\alpha_{\mathrm{grad}}}}{{{\rm ln}\,\alpha_{\mathrm{coor}}}}\nonumber \\
&\stackrel{(a)}\approx\frac{{{\rm ln}\left(1-2\lambda_{\min}\left(\sum_{k=1}^Nq_kH_k\right)\right)}}{{{\rm ln}\left(1-2\lambda_{\min}\left(\sum_{k=1}^Nq_k(1-r)H_k\right)\right)}}\nonumber \\
&\stackrel{(b)}\approx\frac{-2\lambda_{\min}\left(\sum_{k=1}^Nq_kH_k\right)}{-2(1-r)\lambda_{\min}\left(\sum_{k=1}^Nq_kH_k\right)}\nonumber \\
&=\frac{1}{1-r}
\end{align}
where in step (a) we ignored the higher-order term in $\mu_{\max}$, and in (b) we used $\ln(1-x)\approx -x$ as $x\rightarrow 0$.
Expression (\ref{app111}) reveals by how much the convergence time is increased in the coordinate-descent
implementation. Note that, because of the longer convergence time, the stochastic coordinate-descent diffusion implementation may require a larger total amount of information to be exchanged across the network compared to the full-gradient case.
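For example, with $r=0.5$, expression (\ref{app111}) indicates that the coordinate-descent implementation requires roughly twice as many iterations as the full-gradient implementation to reach its steady-state regime.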
\subsubsection{Computational complexity}
Let us now compare the computational complexity of both implementations: the coordinate-descent and
the full-gradient versions. Assume that the computation required to calculate each entry of the gradient vector $\widehat{\nabla_{w^{\sf T}}J}_k(\bm \phi_{k,i-1})$ is identical, and let $c_m\geq0$ and $c_a\geq0$ denote the number of multiplications and additions, respectively, that are needed for each entry of the gradient vector.
Let $n_k\triangleq|\mathcal{N}_k|$ denote the degree of
agent $k$. Then, in the full-gradient implementation, the adaptation step (\ref{equ2a}) requires $c_mM+M$ multiplications and $c_aM+M$ additions, while the combination step (\ref{equ2c}) or (\ref{equ2b}) requires $n_kM$ multiplications and $(n_k-1)M$ additions. In the coordinate-descent implementation, the adaptation step (\ref{equ2a}) with the gradient vector replaced by (\ref{equ4}) requires $(1-r)\cdot(c_mM+M)$ multiplications and $(1-r)\cdot(c_aM+M)$ additions on average, while the combination step (\ref{equ2c}) or (\ref{equ2b}) requires $n_kM$ multiplications and $(n_k-1)M$ additions. Let $m_{\mathrm{coor},k}$ and $m_{\mathrm{grad},k}$ denote the combined number of multiplications required by the adaptation and combination steps per iteration at each agent $k$ in the coordinate-descent and full-gradient cases. Then,
\begin{align}
m_{\mathrm{grad},k}&=(c_m+n_k+1)M\label{app108}\\
m_{\mathrm{coor},k}
&=m_{\mathrm{grad},k}-(c_m+1)Mr\label{app106}
\end{align}
If we now consider that these algorithms take $T_{\rm coor}$ and $T_{\rm grad}$ iterations to reach their
steady-state regime, then the total number of multiplications at agent $k$, denoted by $M_{\mathrm{coor},k}$ and $M_{\mathrm{grad},k}$, are therefore given by
\begin{align}
M_{\mathrm{coor},k}&=m_{\mathrm{coor},k}T_{\mathrm{coor}}\label{app104}\\
M_{\mathrm{grad},k}&=m_{\mathrm{grad},k}T_{\mathrm{grad}}\label{app105}
\end{align}
so that using (\ref{app111}):
\begin{equation}\label{app109}
\frac{M_{\mathrm{coor},k}}{M_{\mathrm{grad},k}}=\frac{m_{\mathrm{coor},k}}{m_{\mathrm{grad},k}}\frac{1}{1-r}
\end{equation}
Now, the first term on the right hand side satisfies
\begin{align}
\frac{m_{\mathrm{coor},k}}{m_{\mathrm{grad},k}}
&=1-\frac{c_m+1}{c_m+n_k+1}r\label{app115}
\end{align}
so that from (\ref{app109}) and (\ref{app115}):
\begin{equation}
1\leq\frac{M_{\mathrm{coor},k}}{M_{\mathrm{grad},k}}=(1-r)^{-1}\left(1-\frac{c_m+1}{c_m+n_k+1}r\right)
\end{equation}
since $0\leq r<1$. It is clear that when it is costly to compute the gradient entries, i.e., when $c_m \gg n_k$, then $M_{{\rm
coor},k} $ and $M_{{\rm grad},k}$ will be essentially identical. This means that while the coordinate-descent
implementation will take longer to converge, the savings in computation per iteration that it
provides are such that the overall computational complexity until convergence remains largely invariant (it
is not increased). This is a useful conclusion. It means that in situations where computations at each
iteration need to be minimal (e.g., when low-end sensors are used), a coordinate-descent variant is recommended and it will be able to
deliver the same steady-state performance (to first-order in $\mu_{\max}$, see (\ref{equ93}) ahead) with the total computational demand spread over a larger
number of iterations. This also means that the complexity and convergence rate measures, when
normalized by the number of entries that are truly updated at each iteration, remain effectively
invariant. A similar analysis and conclusion hold if we examine the total number of additions (as opposed
to multiplications) that are necessary.
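The trade-off just described can be made explicit with a short calculation; the particular values of $c_m$, $n_k$, and $r$ below are illustrative only:

\begin{verbatim}
def multiplication_ratio(c_m, n_k, r):
    """Total multiplications until convergence, coordinate-descent
    relative to full-gradient: the per-iteration saving times the
    increase 1/(1 - r) in convergence time."""
    per_iter = 1.0 - (c_m + 1.0) * r / (c_m + n_k + 1.0)
    return per_iter / (1.0 - r)

print(multiplication_ratio(c_m=1, n_k=4, r=0.5))    # ~1.67 (cheap gradients)
print(multiplication_ratio(c_m=100, n_k=4, r=0.5))  # ~1.04 (costly gradients)
\end{verbatim}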
\subsubsection{MSD performance}
The matrix $G_k'$ defined by (\ref{equ46}) can be written as
\begin{align}\label{equ51}
G_k'&= (1-r)^2G_k+\left((1-r)-(1-r)^2\right)\mbox{\rm diag}\{G_k\}\nonumber \\
&=(1-r)^2\left(G_k+\frac{r}{1-r}\mbox{\rm diag}\{G_k\}\right)
\end{align}
where the term $\mbox{\rm diag}\{G_k\}$ is a diagonal matrix that consists of the diagonal entries of $G_k$. Then, the MSD expression (\ref{equ43}) gives
\begin{align}
\mathrm{MSD}_{\mathrm{coor},k}\hspace{0.05cm}
&\hspace{-0.15cm}\stackrel{(\ref{equ51})}=\hspace{-0.1cm}\frac{1}{2}(1-r)\mathrm{Tr}\Bigg(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\times\nonumber\\
&\hspace{2cm}\sum\limits_{k=1}^Nq_k^2\left(G_k+\frac{r}{1-r}\mbox{\rm diag}\{G_k\}\right)\Bigg)\nonumber
\end{align}
\begin{align}
&=\frac{1}{2}\mathrm{Tr}\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2G_k\right)+\nonumber\\
&\hspace{0.45cm}\frac{r}{2}\mathrm{Tr}\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2\mbox{\rm diag}\{G_k\}\right)-\nonumber\\
&\hspace{0.45cm}\frac{r}{2}\mathrm{Tr}\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2G_k\right).\label{equ52a}
\end{align}
By recognizing that the first term in (\ref{equ52a}) is exactly the MSD expression for the stochastic full-gradient diffusion case \cite[p. 594]{ASayed2014}, which is denoted by ``$\mathrm{MSD}_{\mathrm{grad},k}$'', we get
\begin{multline}\label{equ64}
\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\\
=\frac{r}{2}\mathrm{Tr}\Bigg(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}
\sum\limits_{k=1}^Nq_k^2\check{G}_k\Bigg)
\end{multline}
where
\begin{equation}\label{equ89}
\check{G}_k\triangleq\mbox{\rm diag}\{G_k\}-G_k.
\end{equation}
We show in Appendix \ref{APP6} that the difference in (\ref{equ64}) can be positive or negative, i.e., the MSD performance can be better or worse in the stochastic coordinate-descent case in comparison to the stochastic full-gradient case. Recall from (\ref{equ42}) that the MSD performance is evaluated to first-order in $\mu_{\max}$. Then, the MSD gap in (\ref{equ64}) is to first-order in the step-size
parameter. Observe that the missing probability $r$ on the right hand side of that equation is independent of $\mu_{\max}$. It thus follows that
\begin{equation}\label{equ90}
\mathrm{Tr}\Bigg(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}
\sum\limits_{k=1}^Nq_k^2\check{G}_k\Bigg)=O(\mu_{\max}).
\end{equation}
\begin{corollary}\label{cor5}(\textbf{Small missing probabilities}). Let $r=O(\mu_{\max}^\varepsilon)$ for a small number $\varepsilon>0$. It holds that
\begin{equation}\label{equ93}
\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}=O(\mu_{\max}^{1+\varepsilon})=o(\mu_{\max}).
\end{equation}
\end{corollary}
\noindent\emph {Proof}: It follows from (\ref{equ64}) and (\ref{equ90}). \hfill $\Box$
We proceed to provide a general upper bound for the difference between $\mathrm{MSD}_{\mathrm{coor},k}$ and $\mathrm{MSD}_{\mathrm{grad},k}$.
\begin{corollary}\label{cor1}(\textbf{Upper bound}). Under the same conditions of Theorem \ref{theo2}, and when the missing probabilities are uniform, namely, $\{r_k\equiv r\}$, it holds that:
\begin{multline}\label{equ52g}
|\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}|\leq\\\frac{r}{2}\left({\sum_{k=1}^Nq_k}\right)^{-1}\left(\frac{1}{\nu_d}-\frac{1}{\delta_d}\right)\sum\limits_{k=1}^Nq_k^2\mathrm{Tr}(G_k)
\end{multline}
where the positive numbers $\nu_d\leq\delta_d$ are defined in (\ref{equ6}), and the matrices $\{G_k\}$ are defined by (\ref{app45e}).
Furthermore, when the matrices $\{H_k\}$ or $\{G_k\}$ are diagonal, it follows that
\begin{equation}\label{equ52h}
\mathrm{MSD}_{\mathrm{coor},k}=\mathrm{MSD}_{\mathrm{grad},k}
\end{equation}
\end{corollary}
\noindent\emph {Proof}: See Appendix \ref{APP4}.
\hfill $\Box$
\begin{corollary}\label{cor4}(\textbf{Uniform step-sizes}). Continue with the setting of Corollary \ref{cor1} and assume now that the step-sizes are uniform across all agents and that $A_1=I$ or $A_2=I$ (corresponding to the ATC or CTA formulations, respectively). Let $\{p_k\}$ be the entries of the vector $p$ defined by (\ref{equ11a}). Then, in view of (\ref{equ11}) and (\ref{equ11a}), $q_k=\mu p_k$ and the $\{p_k\}$ add up to one. In this case, the sum of the $\{q_k\}$ is equal to $\mu$ and expression (\ref{equ52g}) simplifies to
\begin{multline}\label{equ51a}
|\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}|\leq\\\frac{r}{2}\mu\left(\frac{1}{\nu_d}-\frac{1}{\delta_d}\right)\sum\limits_{k=1}^Np_k^2\mathrm{Tr}(G_k).
\end{multline}
\end{corollary}
\hfill $\Box$
Consider now MSE networks where the risk function that is associated with each agent $k$ is the mean-square-error:
\begin{equation}\label{equ48}
J_k(w)=\mathbb{E}(\bm d_k(i)-\bm u_{k,i}w)^2
\end{equation}
where $\bm d_k(i)$ denotes the desired signal, and $\bm u_{k,i}$ is a (row) regression vector. In these networks, the data $\{\bm d_k(i),\bm u_{k,i}\}$ are assumed to be related via the linear regression model
\begin{equation}\label{equ49}
\bm d_k(i)=\bm u_{k,i}w^o+\bm v_k(i)
\end{equation}
where $\bm v_k(i)$ is zero-mean white measurement noise with variance $\sigma_{v,k}^2$ and assumed to be independent of all other random variables. The processes $\{\bm d_{k}(i),\bm u_{k,i},\bm v_{k}(i)\}$ are assumed to be jointly wide-sense stationary random processes. Assume also that the regression data $\{\bm u_{k,i}\}$ are zero-mean, and white over time and space with
\begin{equation}\label{equ49a}
\mathbb{E}\hspace{0.05cm} \bm u_{k,i}^{\sf T}\bm u_{\ell,j}\triangleq R_{u,k}\delta_{k,\ell}\delta_{i,j}
\end{equation}
where $R_{u,k}>0$, and $\delta_{k,\ell}$ denotes the Kronecker delta sequence. Consider the case when the covariance matrices of the regressors are identical across the network, i.e., $\{R_{u,k}\equiv R_u>0\}$. Then, it holds that \cite[p. 598]{ASayed2014}
\begin{equation}\label{equ49b}
H_k\equiv 2R_{u},\,\,G_k=4\sigma_{v,k}^2R_{u}.
\end{equation}
Substituting into (\ref{equ64}) we have
\begin{align}\label{equ49c}
&\mathrm{MSD}_{\mathrm{coor},k} - \mathrm{MSD}_{\mathrm{grad},k}\nonumber \\
&\hspace{0.2cm}=r\left(\sum\limits_{k=1}^Nq_k\right)^{-1}\left({\sum\limits_{k=1}^Nq_k^2\sigma_{v,k}^2}\right)\left(\mathrm{Tr}\left(R_u^{-1}\mbox{\rm diag}\{R_u\}\right)-M\right)\nonumber \\
&\hspace{0.2cm}\geq0
\end{align}
where the inequality in (\ref{equ49c}) holds because $\mathrm{Tr}\left(R_u^{-1}\mbox{\rm diag}\{R_u\}\right)\geq M$. This bound follows from the property that $\mathrm{Tr}\left(X\right)\mathrm{Tr}\left(X^{-1}\right)\geq M^2$ for any $M\times M$ symmetric positive-definite matrix $X$ \cite[p. 317]{ASayed2008}, by choosing $X=\mbox{\rm diag}^{\frac{1}{2}}\{R_u\}R_u^{-1}\mbox{\rm diag}^{\frac{1}{2}}\{R_u\}$ and noting that, for this choice, $\mathrm{Tr}(X)=\mathrm{Tr}\left(R_u^{-1}\mbox{\rm diag}\{R_u\}\right)$ while $\mathrm{Tr}(X^{-1})=\mathrm{Tr}\left(R_u\,\mbox{\rm diag}^{-1}\{R_u\}\right)=M$.
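The bound used in the last step is easy to verify numerically; the random construction of $R_u$ below is only for illustration:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = 5
A = rng.standard_normal((M, M))
R_u = A @ A.T + M * np.eye(M)        # a generic positive-definite covariance
D = np.diag(np.diag(R_u))            # diag{R_u}
value = np.trace(np.linalg.solve(R_u, D))
print(value >= M - 1e-12)            # True; equality holds iff R_u is diagonal
\end{verbatim}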
In the case of MSE networks, by exploiting the special relation between the matrices $\{H_{k}\}$ and $\{G_k\}$ in (\ref{equ49b}), we are able to show that the MSD in the stochastic coordinate-descent case is always larger (i.e., worse) than or equal to that in the stochastic full-gradient diffusion case (although by not more than $o(\mu_{\max})$, as indicated by (\ref{equ93})). We are also able to provide a general upper bound on the difference between these two MSDs.
\begin{corollary}\label{cor2}(\textbf{MSE networks}).
Under the same conditions of Corollary \ref{cor1}, and for MSE networks with uniform covariance matrices, i.e., $\{R_{u,k}\equiv R_u>0\}$, it holds that
\begin{multline}\label{equ65}
0\leq\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\leq\\{r}\left({\sum_{k=1}^Nq_k}\right)^{-1}\left(\sum\limits_{k=1}^Nq_k^2\sigma_{v,k}^2\right)\left(\frac{\delta_d}{\nu_d}-{1}\right)M
\end{multline}
Moreover, it holds that $\mathrm{MSD}_{\mathrm{coor},k}=\mathrm{MSD}_{\mathrm{grad},k}$ if, and only if, $R_u$ is diagonal.
\end{corollary}
\noindent\emph {Proof}: It follows from Corollary \ref{cor1} by using $\mathrm{Tr}\left(G_k\right)=4\sigma_{v,k}^2\mathrm{Tr}\left(R_{u}\right)$ and noting that $\nu_d/2\leq\lambda\left(R_u\right)\leq\delta_d/2$ according to (\ref{equ49b}) and (\ref{equ6}).
\hfill $\Box$
\subsubsection{ER performance}
Consider the scenario when the missing probabilities are identical across the agents, i.e., $\{r_k\equiv r\}$. Then, expression (\ref{equ73}) simplifies to
\begin{equation}\label{equ73b}
(1-r)\left(\sum\limits_{k=1}^Nq_k\right)X\bar{H}+ (1-r)\left(\sum\limits_{k=1}^Nq_k\right)\bar{H}X=\bar{H}
\end{equation}
where we used the equality $\sum_{k=1}^Nq_kH_k=\left(\sum_{k=1}^Nq_k\right)\bar{H}$. It follows that
\begin{equation}\label{equ730a}
X=\frac{1}{2}(1-r)^{-1}\left(\sum\limits_{k=1}^Nq_k\right)^{-1}I_M.
\end{equation}
Thus, the ER expression in (\ref{equ72}) can be rewritten as:
\begin{align}\label{equ730b}
\mathrm{ER}_{\mathrm{coor},k}
&=\frac{1}{4}(1-r)^{-1}\left(\sum\limits_{k=1}^Nq_k\right)^{-1}\mathrm{Tr}\left(\sum\limits_{k=1}^Nq_k^2G'_k\right)\nonumber \\
&\stackrel{(a)}=\frac{1}{4}\left(\sum\limits_{k=1}^Nq_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2\mathrm{Tr}\left(G_k\right)
\end{align}
which is exactly the same result as in the full-gradient case from \cite[p. 608]{ASayed2014}; the equality (a) holds because $\mathrm{Tr}\left(G'_k\right)=(1-r)\mathrm{Tr}\left(G_k\right)$ according to the definition in (\ref{equ46}).
\subsection{Uniform Individual Costs}
Consider the case when the individual costs, $J_k(w)$, are identical across the network, namely, \cite[p. 610]{ASayed2014}
\begin{equation}\label{equ75}
J_k(w)\equiv J(w)\triangleq\mathbb{E}Q(w;\bm x_{k,i})
\end{equation}
where $Q(w;\bm x_{k,i})$ denotes the loss function. In this case, it will hold that the matrices $\{H_k,G_k\}$ are uniform across the agents, i.e.,
\begin{equation}
H_k=\nabla^2_wJ(w^o)\equiv H\label{equ76}
\end{equation}
\begin{equation}
G_k=\mathbb{E}{\nabla_{w^{\sf T}}Q(w^o;\bm x_{k,i})}\left[{\nabla_{w^{\sf T}}Q(w^o;\bm x_{k,i})}\right]^{\sf T}\equiv G
\end{equation}
in view of ${\nabla_{w^{\sf T}}J(w^o)}=0$. Then, (\ref{equ76}) ensures that $\bar{H}=H$ according to the definition in (\ref{equ67}). By referring to (\ref{equ73}), we have
\begin{equation}\label{equ73a}
X=\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}I_M.
\end{equation}
Then, expressions (\ref{equ43}) and (\ref{equ72}) reduce to
\begin{multline}\label{equ43a}
\mathrm{MSD}_{\mathrm{coor},k}= \mathrm{MSD}_{\mathrm{coor,av}}\\=\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\sum\limits_{k=1}^Nq_k^2\mathrm{Tr}\left(H^{-1}G_k'\right)
\end{multline}
\begin{multline}\label{equ74}
\mathrm{ER}_{\mathrm{coor},k}= \mathrm{ER}_{\mathrm{coor,av}}\\=\frac{1}{4}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\sum\limits_{k=1}^Nq_k^2(1-r_k)\mathrm{Tr}\left(G\right).
\end{multline}
We proceed to compare the MSD and ER performance in the stochastic full-gradient and coordinate-descent cases. Let
\begin{align}
\alpha&\triangleq\frac{\sum_{k=1}^Nq_k^2(1-r_k)^2}{\sum_{k=1}^Nq_k(1-r_k)}-\frac{\sum_{k=1}^Nq_k^2}{\sum_{k=1}^Nq_k}\label{equ78}\\
\theta&\triangleq\frac{\sum_{k=1}^Nq_k^2(1-r_k)}{\sum_{k=1}^Nq_k(1-r_k)}-\frac{\sum_{k=1}^Nq_k^2}{\sum_{k=1}^Nq_k}\label{equ79}
\end{align}
and note that $\alpha \leq \theta$, with equality if, and only if, $\{r_k\equiv0\}$.
\begin{corollary}\label{cor3}(\textbf{Performance comparison}).
Under the same conditions of Theorem \ref{theo2}, when the individual costs $J_k(w)$ are identical across the agents, it holds that:
a) if $\alpha\geq0$:
\begin{equation}\label{app41}
0\leq\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\leq\frac{1}{2}\frac{\theta}{\nu_d}\mathrm{Tr}\left(G\right)
\end{equation}
b) if $\alpha<0$, and $\theta\geq\left(1-{\delta_d}/{\nu_d}\right)\alpha\geq0$:
\begin{multline}\label{equ78a}
0\leq
\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\leq\\\frac{1}{2}\left(\frac{\theta}{\nu_d}+\left(\frac{1}{\delta_d}-\frac{1}{\nu_d}\right)\alpha\right)\mathrm{Tr}\left(G\right)
\end{multline}
c) if $\alpha<0$, and $\theta\leq\left(1-{\nu_d}/{\delta_d}\right)\alpha\leq0$:
\begin{multline}\label{equ84}
\frac{1}{2}\left(\frac{\theta}{\delta_d}+\left(\frac{1}{\nu_d}-\frac{1}{\delta_d}\right)\alpha\right)\mathrm{Tr}\left(G\right)\leq\\
\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\leq0.
\end{multline}
Likewise, it holds that
\begin{equation}\label{equ80}
\mathrm{ER}_{\mathrm{coor},k}-\mathrm{ER}_{\mathrm{grad},k}=\frac{\theta}{4}\mathrm{Tr}\left(G\right).
\end{equation}
Then, in the case when either the missing probabilities or the quantities $\{q_k\}$ are uniform across the agents, namely, $\{r_k\equiv r\}$ or $\{q_k\equiv q\}$, it follows that
\begin{equation}\label{equ81}
\mathrm{ER}_{\mathrm{coor},k}=\mathrm{ER}_{\mathrm{grad},k}.
\end{equation}
\end{corollary}
\noindent \emph {Proof}: See Appendix \ref{APP5}.
\hfill $\Box$
Note that, for choices of the parameters $\alpha$ and $\theta$ not covered by Corollary \ref{cor3}, no general conclusion can be drawn on which of the two MSDs ($\mathrm{MSD}_{\mathrm{coor},k}$ or $\mathrm{MSD}_{\mathrm{grad},k}$) is lower.
\begin{figure}
\centering
\includegraphics[width=2.8in]{new_toplogy.pdf}
\caption{Network topology consisting of $N=100$ agents.}
\label{topology100}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.8in]{small_probabilities.pdf}
\caption{MSD learning curves, averaged over 200 independent runs, in the case of Corollary \ref{cor5} when $\{r_k=0.1\}$. The dashed lines show the theoretical MSD values from (\ref{equ43}).}
\label{small_probabilities}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.8in]{white_regressors.pdf}
\caption{MSD learning curves, averaged over 200 independent runs, in the case of Corollary \ref{cor1} when $\{H_k,G_k\}$ are diagonal. The dashed line along the horizontal axis shows the theoretical MSD value from (\ref{equ43}). Those along the learning curves show the reference recursion at rates formulated by (\ref{equ88}).}
\label{white_regressors}
\end{figure}
\begin{figure*}
\centering
\subfigure[]{\includegraphics[width=2.3in]{two_agents_msd2.pdf}}
\subfigure[]{\includegraphics[width=2.3in]{two_agents_msd1.pdf}}
\subfigure[]{\includegraphics[width=2.3in]{two_agents_er}}
\caption{Learning curves, averaged over 10000 independent runs, and theoretical results calculated from (\ref{equ43}) and (\ref{equ72}) respectively, for a two-agent MSE network, with parameters $\{\pi_1=-0.34,\pi_2=0.99\}$ in (a), and $\{\pi_1=0.34,\pi_2=0.99\}$ in (b) and (c).}
\label{two_agents}
\end{figure*}
\section{Simulation Results}
In this section, we illustrate the results by considering MSE networks and logistic regression networks; both settings satisfy condition (\ref{equ6}) and Assumptions \ref{ass2} through \ref{ass5}.
\subsection{MSE Networks}
In the following examples, we test the performance of the associated algorithms when uniform missing probabilities are utilized across the agents. We adopt the ATC
formulation and set the combination matrices to $A_1=I$, with $A_2$ chosen according to the averaging rule \cite[p. 664]{ASayed2014} in the first two examples and the Metropolis rule \cite[p. 664]{ASayed2014} in the third example.
In the first example, we test the case when the gradient vectors are missing with small probabilities across the agents. Figure \ref{topology100} shows a network topology with $N = 100$ agents. The parameter vector $w^o$ is randomly generated with $M=10$. The regressors are generated by the first-order autoregressive model
\begin{equation}\label{equ66}
\bm u_{k,i}(m) = \pi_k\bm u_{k,i}(m-1)+\sqrt{1-\pi_k^2}\bm t_{k,i}(m),\,1\leq m<M
\end{equation}
and the variances are scaled to be 1. The processes $\{\bm t_{k,i}\}$ are zero-mean, unit-variance, and independent and identically distributed (\emph{i.i.d}) Gaussian sequences. The parameters $\{\pi_k\}$ are generated from a uniform distribution on the interval $(-1,1)$. The noises, uncorrelated with the regression vectors, are zero-mean white Gaussian sequences with the variances uniformly distributed over $(0.001,0.1)$. The step-sizes $\{\mu_k\}$ across the agents are generated from a uniform distribution on the interval $(0.0001,0.0005)$. We choose a small missing probability $\{r_k=0.1\}$. Figure \ref{small_probabilities} shows the simulation results, which are averaged over 200 independent runs, as well as the theoretical MSD values calculated from (\ref{equ43}), which are $-57.72$dB and $-57.61$dB, respectively, for the full and partial update case. It is clear from the figure that, when the gradient information is missing with small probabilities, the performance of the coordinate-descent case is close to that of the full-gradient diffusion case.
In the second example, we test the case when the regressors are white across the agents. We randomly generate $w^o$ of size $M=10$. The white regressors are generated from zero-mean white Gaussian sequences, and the powers, which vary from entry to entry, and from agent to agent, are uniformly distributed over $(0.05,0.15)$. The noises $\{\bm v_k(i)\}$, uncorrelated with the regressors, are zero-mean white Gaussian sequences, with the variances $\{\sigma_{v,k}^2\}$ generated from uniform distribution on the interval $(0.0001,0.01)$. The step-sizes are uniformly distributed over $(0.001,0.01)$. The results, including the theoretical MSD value from (\ref{equ43}) in Theorem \ref{theo2}, the simulated MSD learning curves, and the convergence rates from (\ref{equ88}), are illustrated by Fig. \ref{white_regressors}, where the results are averaged over 200 independent runs. It is clear from the figure that, when white regressors are utilized in MSE networks, the stochastic coordinate-descent case converges to the same MSD level as the full-gradient diffusion case, which verifies (\ref{equ52h}), at a convergence rate formulated in (\ref{equ88}).
In the third example, we revisit the two-agent MSE network discussed in Appendix \ref{APP6}, i.e., $N=2$.
We randomly generate $w^o$ of size $M=2$. The step-sizes $\mu_1=\mu_2=0.005$ are uniform across the agents, which gives $q_1=q_2=2.5\times10^{-3}$. The missing probabilities $r_1=r_2=0.5$. The noises $\{\bm v_1(i),\bm v_2(i)\}$ are zero-mean white Gaussian sequences with the variances $\{\sigma_{v,1}^2=0.5,\sigma_{v,2}^2=5\times10^{-4}\}$. The regressors, uncorrelated with the noise sequences, are scaled such that the covariance matrices are of the form
\begin{equation}\label{equ54}
R_{u,1}=\left[\begin{array}{cc}|\pi_1|&\pi_1\\ \pi_1&1\end{array}\right],\,R_{u,2}=\left[\begin{array}{cc}|\pi_2|&\pi_2\\ \pi_2&1\end{array}\right]
\end{equation}
with $|\pi_1|<1, |\pi_2|<1$. Now we select parameters $\{\pi_1=-0.34,\pi_2=0.99\}$, which satisfy condition (\ref{equ86}), and $\{\pi_1=0.34,\pi_2=0.99\}$ to illustrate the cases of $\mathrm{MSD}_{\mathrm{coor},k}<\mathrm{MSD}_{\mathrm{grad},k}$ and $\mathrm{MSD}_{\mathrm{coor},k}>\mathrm{MSD}_{\mathrm{grad},k}$ respectively. Fig. \ref{two_agents} (a) shows the simulation results with the parameters $\{\pi_1=-0.34,\pi_2=0.99\}$. Figures \ref{two_agents} (b) and \ref{two_agents} (c) show the simulation results with the parameters $\{\pi_1=0.34,\pi_2=0.99\}$. All results are averaged over 10000 independent runs.
It is clear from the figures that the simulation results match well with the theoretical results from Theorem \ref{theo2}. In Fig. \ref{two_agents} (a), the steady-state MSD of the stochastic coordinate-descent case is slightly lower than that of the full-gradient diffusion case, by about $0.32$dB, which is close to the theoretical MSD difference of $0.41$dB from (\ref{equ64}). The MSD performance is better in the full-gradient diffusion case in Fig. \ref{two_agents} (b), and the difference between these two MSDs at steady state is $1.71$dB, which is close to the theoretical difference of $1.49$dB from (\ref{equ64}). The ER performance for both the stochastic coordinate-descent and full-gradient diffusion cases are the same as illustrated in Fig. \ref{two_agents} (c), which verifies the theoretical result in (\ref{equ730b}).
\begin{figure}
\centering
\includegraphics[width=2.4in]{fig1.pdf}
\caption{Network topology consisting of $N=20$ agents.}
\label{topology}
\end{figure}
\begin{figure*}
\centering
\subfigure[]{\includegraphics[width=2.3in]{logistic_uniform_stepsize.pdf}}
\subfigure[]{\includegraphics[width=2.3in]{logistic_coor_better.pdf}}
\subfigure[]{\includegraphics[width=2.3in]{logistic_full_better.pdf}}
\caption{ER learning curves, averaged over 1000 independent runs, and theoretical results from (\ref{equ74}) for diffusion learning over a logistic network with full or partial updates. Corollary \ref{cor3} is tested in (a) when a uniform step-size and a doubly-stochastic combination matrix are utilized across the network. Corollary \ref{cor3} is tested when the parameters $\{\mu_k\}$ and $\{r_k\}$ are scaled to make $\theta$ in (\ref{equ79}) negative in (b) and positive in (c).}
\label{logistic}
\end{figure*}
\subsection{Logistic Networks}
We now consider an application in the context of pattern classification. We assign with each agent the logistic risk
\begin{equation}\label{equ50}
J_k(w)=\frac{\rho}{2}\|w\|^2 + \mathbb{E}\left\{\ln\left[1+e^{-\bm \gamma_k(i)\bm h^{\sf T}_{k,i}w}\right]\right\}
\end{equation}
with regularization parameter $\rho>0$, and where the labels $\{\bm \gamma_k(i)=\pm 1\}$ are binary random and the $\{\bm h_{k,i}\}$ are feature vectors. The objective is for the agents to determine a parameter vector $w^o$ to enable classification by estimating the class labels via $\widehat{\bm \gamma}_{k}(i)=\bm h_{k,i}^{\sf T} w^o$.
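For this risk, the instantaneous (stochastic) gradient used in the adaptation step can be computed from a single data pair $(\bm\gamma_k(i),\bm h_{k,i})$. The following sketch is a straightforward implementation of the gradient of the regularized logistic loss and is provided only for illustration:

\begin{verbatim}
import numpy as np
from scipy.special import expit   # numerically stable logistic sigmoid

def logistic_stochastic_grad(w, h, gamma, rho):
    """Gradient of (rho/2)||w||^2 + ln(1 + exp(-gamma * h^T w)) with
    respect to w, evaluated at the single data pair (gamma, h)."""
    return rho * w - gamma * expit(-gamma * (h @ w)) * h
\end{verbatim}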
We proceed to test the theoretical findings in Corollary \ref{cor3}. Consider the network topology in Fig. \ref{topology} with $N=20$ agents. We still adopt the ATC
formulation, and set the combination matrices $A_1=I$, and $A_2$ according to the Metropolis rule in \cite[p. 664]{ASayed2014}. The feature vectors and the unknown parameter vector are randomly generated from uncorrelated zero-mean unit-variance \emph{i.i.d} Gaussian sequences, both of size $M = 10$. The parameter $\rho$ in (\ref{equ50}) is set to 0.01. To generate the trajectories for the experiments, the optimal solution to (\ref{equ50}), $w^o$, the Hessian matrix $H$, and the gradient noise covariance matrix, $G$, are first estimated off-line by applying a batch algorithm to all data points.
In the first example, we consider the case when a uniform step-size $\{\mu_k = 0.005\}$ is utilized across the agents. All entries of the stochastic gradient vectors are missing completely at random with positive probabilities that are uniformly distributed over $(0,1)$. Figure \ref{logistic} (a) shows the transient ER curves for the diffusion strategies with complete and partial gradients, where the results are averaged over 1000 independent runs. The figure also shows the theoretical result calculated from (\ref{equ74}). It is clear from Figure \ref{logistic} (a) that the same ER performance is obtained in the stochastic coordinate-descent and full-gradient diffusion cases, by utilizing a uniform step-size and a doubly-stochastic combination matrix across the agents (in which case the parameters $\{q_k\}$ in (\ref{equ11}) are identical across the agents), which is in agreement with the theoretical analysis in (\ref{equ81}).
In the second and third examples, we randomly generate the step-sizes $\{\mu_k\}$ and missing probabilities $\{r_k\}$ by following uniform distributions on the intervals $(0.001,0.01)$ and $(0,1)$ respectively. In Figure \ref{logistic} (b), the parameters $\{\mu_k\}$ and $\{r_k\}$ are scaled to get a negative value for $\theta$ in (\ref{equ79}), and in Fig. \ref{logistic} (c), those parameters are scaled to make $\theta$ positive. Figures \ref{logistic} (b) and \ref{logistic} (c) show respectively the transient ER learning curves in these two cases for the diffusion strategies with complete and partial gradients, where the results are averaged over 1000 independent runs. The figures also show the theoretical results calculated from (\ref{equ74}). It is clear from Figs. \ref{logistic} (b) and \ref{logistic} (c) that these learning curves converge to their theoretical results at steady state. In Fig. \ref{logistic} (b) where $\theta<0$, the stochastic coordinate-descent case converges to a lower ER level than the full-gradient diffusion case, and the difference between these two ERs is $0.637$dB, which is close to the theoretical difference of $0.640$dB from (\ref{equ80}). In Fig. \ref{logistic} (c) where $\theta>0$, the steady-state ER in the full-gradient diffusion case is lower than that of the stochastic coordinate-descent case, by about $0.726$dB, which is close to the theoretical difference of $0.750$dB from (\ref{equ80}).
\appendices
\section{Proof of Theorem 1}\label{APP1}
Let $P=A_1A_2$. It was argued in \cite[p.510]{ASayed2014} that $P$ admits a Jordan canonical decomposition of the form
$P=V_\epsilon J V_\epsilon^{-1}$
where
\begin{equation}
V_\epsilon\triangleq\left[\begin{array}{cc}p&V_R\end{array}\right],\,
V_\epsilon^{-1}\triangleq\left[\begin{array}{c}\mathds{1}^{\sf{T}}\\ V_L^{\sf{T}}\end{array}\right],\,
J=\left[\begin{array}{cc}1&0\\0&J_\epsilon\end{array}\right]\label{app4}
\end{equation}
$p$ is defined by (\ref{equ11a}), $\epsilon$ denotes an arbitrary positive scalar that we are free to choose, and the matrix $J_{\epsilon}$ has a Jordan structure with $\epsilon$ appearing in the first lower diagonal rather than unit entries.
All eigenvalues of $J_{\epsilon}$ are strictly inside the unit circle. Then,
\begin{align}\label{app6}
\mathcal{P}\triangleq P\otimes I_{M}
\triangleq\mathcal{V}_\epsilon\mathcal{J}\mathcal{V}_\epsilon^{-1}
\end{align}
where
$\mathcal{V}_\epsilon\triangleq{V}_\epsilon\otimes I_{M},\,\mathcal{J}\triangleq J\otimes I_{M}$.
Using (\ref{app6}), we can rewrite $\bm{\mathcal{B}}_{i}$ from (\ref{equ25}) as
\begin{align}\label{app7}
\bm{\mathcal{B}}_{i}
\triangleq\left(\mathcal{V}_\epsilon^{-1}\right)^{{\sf{T}}}(\mathcal{J}^{{\sf{T}}}-\bm{\mathcal{D}}_{i}^{{\sf{T}}})\mathcal{V}_\epsilon^{\sf{T}}
\end{align}
where
\begin{align}\label{app8}
\bm{\mathcal{D}}_{i}^{\sf{T}}&\triangleq\mathcal{V}_\epsilon^{{\sf{T}}}\mathcal{A}_2^{{\sf{T}}}\mathcal{M}\bm{\Gamma}_i\bm{\mathcal{H}}_{i-1}\mathcal{A}_1^{{\sf{T}}}\left(\mathcal{V}_\epsilon^{-1}\right)^{\sf{T}}\nonumber\\
&=\left[\begin{array}{cc}\bm{D}_{11,i}^{\sf{T}}&\bm{D}_{21,i}^{\sf{T}}\\\bm{D}_{12,i}^{\sf{T}}&\bm{D}_{22,i}^{\sf{T}}\end{array}\right]
\end{align}
and
\begin{equation}
\bm{D}_{11,i}=\sum\limits_{k=1}^Nq_k\bm{H}_{k,i-1}\bm{\Gamma}_{k,i}\label{app9}
\end{equation}
with the vector $q=\{q_k\}$ defined by (\ref{equ11}). With regards to the norm of $\bm{D}_{11,i}$, we observe that contrary to the arguments in \cite[p. 511]{ASayed2014}, this matrix is not symmetric anymore in the coordinate-descent case due to the presence of $\bm{\Gamma}_{k,i}$. We therefore need to adjust the arguments, which we do next.
Let
\begin{align}\label{app35}
\bm {\bar{D}}_{11,i}&\triangleq\mathbb{E}\left[\bm D_{11,i}|\bm{\mathcal{F}}_{i-1}\right]\nonumber \\
&=\sum_{k=1}^Nq_k\bm{H}_{k,i-1}\mathbb{E}\left[\bm\Gamma_{k,i}\right]\nonumber \\
&\stackrel{(\ref{equ4a})}=\sum_{k=1}^Nq_k(1-r_k)\bm{H}_{k,i-1}\nonumber \\
&=\mathbb{E}\left[\bm D^{\sf{T}}_{11,i}|\bm{\mathcal{F}}_{i-1}\right].
\end{align}
Noting that
\begin{equation}\label{app36}
\mathbb{E}\left[\bm\Gamma_{k,i}\bm\Gamma_{j,i}\right]=\left\{\begin{array}{cl}(1-r_k)(1-r_j),&k\neq j\\1-r_k,&k=j\end{array}\right.
\end{equation}
we introduce
\begin{align}
\bm R_{D_{11},i}&\triangleq\mathbb{E}\left[\left(\bm D_{11,i}-\bm {\bar{D}}_{11,i}\right)\left(\bm D_{11,i}-\bm {\bar{D}}_{11,i}\right)^{\sf{T}}|\bm{\mathcal{F}}_{i-1}\right]\nonumber \\
&{\hspace{0.1cm}=\mathbb{E}\left[\bm D_{11,i}\bm D^{\sf{T}}_{11,i}|\bm{\mathcal{F}}_{i-1}\right]-\bm {\bar{D}}_{11,i}\mathbb{E}\left[\bm D^{\sf{T}}_{11,i}|\bm{\mathcal{F}}_{i-1}\right]-}\nonumber \\
&{\hspace{0.5cm}\mathbb{E}\left[\bm D_{11,i}|\bm{\mathcal{F}}_{i-1}\right]\bm {\bar{D}}_{11,i}+\bm {\bar{D}}^2_{11,i}}\nonumber \\
&\hspace{-0.17cm}\stackrel{(\ref{app35})}=\mathbb{E}\left[\bm D_{11,i}\bm D^{\sf{T}}_{11,i}|\bm{\mathcal{F}}_{i-1}\right]-\bm {\bar{D}}^2_{11,i}\label{app38}\\
&=\sum_{k=1}^N\sum_{j=1}^Nq_kq_j\bm {H}_{k,i-1}\mathbb{E}\left[\bm \Gamma_{k,i}\bm \Gamma_{j,i}\right]\bm {H}_{j,i-1}-\nonumber \\
&\hspace{0.5cm}\sum_{k=1}^N\sum_{j=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}\nonumber \\
&\hspace{-0.15cm}\stackrel{(\ref{app36})}=\sum_{k=1}^N\sum_{j\neq k=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}-\nonumber \\
&\hspace{0.5cm}\sum_{k=1}^N\sum_{j=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}
+\nonumber\\
&\hspace{0.5cm}\sum_{k=1}^Nq_k^2(1-r_k)\bm {H}_{k,i-1}^2\nonumber \\
&=\sum_{k=1}^Nq_k^2(1-r_k)\bm {H}_{k,i-1}^2-\sum_{k=1}^Nq_k^2(1-r_k)^2\bm {H}_{k,i-1}^2\nonumber \\
&=\sum_{k=1}^Nq_k^2(1-r_k)r_k\bm {H}_{k,i-1}^2.\label{app37}
\end{align}
Recall, from (\ref{equ6}) and (\ref{equ27}), that
\begin{equation}\label{app53}
0<\nu_d I_M\leq\bm {H}_{k,i-1}\leq\delta_d I_M.
\end{equation}
Then, matrices $\bm {\bar{D}}_{11,i}$ and $\bm R_{D_{11},i}$ are symmetric positive-definite. Following similar arguments to those in \cite[pp. 511--512]{ASayed2014}, we have
\begin{equation}\label{app39}
\|I_M-\bm {\bar{D}}_{11,i}\|\leq1-\sigma_{11}\mu_{\max},\,\|\bm R_{D_{11},i}\|\leq\beta_{11}^2\mu_{\max}^2
\end{equation}
for some positive constants $\sigma_{11}$ and $\beta^2_{11}$, and sufficiently small $\mu_{\max}$.
Now, multiplying both sides of (\ref{equ24}) by $\mathcal{V}_\epsilon^{\sf{T}}$, we have
\begin{equation}\label{app19}
\mathcal{V}^{\sf{T}}_\epsilon\widetilde{\bm{w}}_i = (\mathcal{J}^{{\sf{T}}}-\bm{\mathcal{D}}_{i}^{{\sf{T}}})\mathcal{V}^{\sf{T}}_\epsilon\widetilde{\bm{w}}_{i-1}+\mathcal{V}^{\sf{T}}_\epsilon\mathcal{A}_2^{{\sf{T}}}\mathcal{M}\bm{\Gamma}_i\bm{s}_i
\end{equation}
where (\ref{app7}) was used. Let
\begin{equation}
\mathcal{V}^{\sf{T}}_\epsilon{\widetilde{{\bm{w}}}}_i=\left[\begin{array}{c}\left(p^{\sf{T}}\otimes I_{M}\right)\widetilde{\bm{w}}_i\\\left(V_R^{\sf{T}}\otimes I_{M}\right)\widetilde{\bm{w}}_i\end{array}\right]\triangleq\left[\begin{array}{c}\bar{{\bm{w}}}_i\\\check{{\bm{w}}}_i\end{array}\right]\label{app19a}
\end{equation}
\begin{equation}
\mathcal{V}^{\sf{T}}_\epsilon\mathcal{A}_2^{{\sf{T}}}\mathcal{M}\bm{\Gamma}_i\bm{s}_i=\left[\begin{array}{c}\left(q^{\sf{T}}\otimes I_{M}\right)\bm{\Gamma}_i\bm{s}_i\\\left(V_R^{\sf{T}}\otimes I_{M}\right)\mathcal{A}_2^{{\sf{T}}}\mathcal{M}\bm{\Gamma}_i\bm{s}_i\end{array}\right]
\triangleq\left[\begin{array}{c}\bar{{\bm{s}}}_i\\\check{{\bm{s}}}_i\end{array}\right]\label{app19b}
\end{equation}
We then rewrite (\ref{app19}) as
\begin{align}\label{app20}
\hspace{-0.3cm}
\left[\begin{array}{c}\bar{{\bm{w}}}_i\\\check{{\bm{w}}}_i\end{array}\right]=\left[\begin{array}{cc}I_{M}-\bm D^{\sf{T}}_{11,i}&-\bm D^{\sf{T}}_{21,i}\\-\bm D_{12,i}^{\sf{T}}&\mathcal{J}^{\sf{T}}_\epsilon-\bm D_{22,i}^{\sf{T}}\end{array}\right]\left[\begin{array}{c}\bar{{\bm{w}}}_{i-1}\\\check{{\bm{w}}}_{i-1}\end{array}\right]+\left[\begin{array}{c}\bar{{\bm{s}}}_i\\\check{{\bm{s}}}_i\end{array}\right]
\end{align}
where the asymmetry of the matrix $\bm{D}_{11,i}$ in this case leads to a difference in the first row, compared to the arguments in \cite[pp. 514--515]{ASayed2014}. We adjust the arguments as follows. Using Jensen's inequality, we have \cite[p. 515]{ASayed2014}:
\begin{multline}\label{app021}
\mathbb{E}[\|\bar{{\bm{w}}}_i\|^2|\bm{\mathcal{F}}_{i-1}]\leq\frac{1}{1-t}\mathbb{E}[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^2|\bm{\mathcal{F}}_{i-1}]\\+\frac{1}{t}\mathbb{E}[\|\bm D^{\sf{T}}_{21,i}\check{{\bm{w}}}_{i-1}\|^2|\bm{\mathcal{F}}_{i-1}]+\mathbb{E}[\|\bar{\bm s}_i\|^2|\bm{\mathcal{F}}_{i-1}]
\end{multline}
for any $0<t<1$, where the expectation of the cross term between $\bar{\bm s}_i$ and $(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}-\bm D^{\sf{T}}_{21,i}\check{{\bm{w}}}_{i-1}$ vanishes conditioned on $\bm{\mathcal{F}}_{i-1}$ and $\bm{\Gamma}_i$ in view of (\ref{equ20}), and the result in (\ref{app021}) follows by taking the expectations again on both sides over $\bm{\Gamma}_i$. Then, the first term on the right hand side of (\ref{app021}) can be bounded by
\begin{align}
&\mathbb{E}\left[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^2|\bm{\mathcal{F}}_{i-1}\right]\nonumber\\
&\hspace{0.1cm}=\left(\bar{{\bm{w}}}_{i-1}\right)^{\sf{T}}\mathbb{E}\left[\left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)|\bm{\mathcal{F}}_{i-1}\right]\bar{{\bm{w}}}_{i-1}\nonumber\\
&\hspace{0.05cm}\stackrel{(a)}\leq\lambda_{\max}\left(\mathbb{E}\left[\left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)|\bm{\mathcal{F}}_{i-1}\right]\right)\left\|\bar{{\bm{w}}}_{i-1}\right\|^2\nonumber \\
&\hspace{0.05cm}\stackrel{(b)}=\big\|\mathbb{E}\left[\left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)|\bm{\mathcal{F}}_{i-1}\right]\big\|\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{-0.1cm}\stackrel{(\ref{app35})}=\big\|I_M-2\bm{ \bar{D}}_{11,i}+\mathbb{E}\left[\bm D_{11,i}\bm D^{\sf{T}}_{11,i}|\bm{\mathcal{F}}_{i-1}\right]\big\|\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{-0.1cm}\stackrel{(\ref{app38})}=\big\|I_M-2\bm{ \bar{D}}_{11,i}+\bm {\bar{D}}^2_{11,i}+\bm R_{D_{11},i}\big\|\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{0.1cm}\leq\left(\big\|\left(I_M-\bm{ \bar{D}}_{11,i}\right)^2\big\|+\big\|\bm R_{D_{11},i}\big\|\right)\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{0.05cm}\stackrel{(c)}=\left(\big\|I_M-\bm{ \bar{D}}_{11,i}\big\|^2+\big\|\bm R_{D_{11},i}\big\|\right)\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{0.1cm}\leq\left((1-\sigma_{11}\mu_{\max})^2+\beta^2_{11}\mu_{\max}^2\right)\|\bar{{\bm{w}}}_{i-1}\|^2\label{app21}
\end{align}
where in step (a) we called upon the Rayleigh-Ritz characterization of eigenvalues \cite{Golub96,Johnson03}, and (b), (c) hold because $\|A\|=\lambda_{\max}(A)$ for any symmetric positive-semidefinite matrix $A$, and $\|A^2\|=\|A\|^2$ for any symmetric matrix $A$.
Computing the expectations again on both sides of (\ref{app21}), we have
\begin{align}\label{app44}
&\hspace{-0.3cm}\frac{1}{1-t}\mathbb{E}\left\{\mathbb{E}[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^2|\bm{\mathcal{F}}_{i-1}]\right\}\nonumber \\
&\leq\frac{1}{1-t}\left((1-\sigma_{11}\mu_{\max})^2+\beta^2_{11}\mu_{\max}^2\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\stackrel{(a)}=\left(1-\sigma_{11}\mu_{\max}+\frac{\beta^2_{11}\mu_{\max}^2}{1-\sigma_{11}\mu_{\max}}\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\stackrel{(b)}\leq\left( 1-\sigma'_{11}\mu_{\max}\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^2
\end{align}
where in step (a) we set $t= \sigma_{11}\mu_{\max}$, and in step (b) $\sigma'_{11}$ is a positive constant with $\sigma'_{11}<\sigma_{11}$, and $\mu_{\max}$ is small enough such that
$\sigma'_{11}\leq\sigma_{11}-\left(1-\sigma_{11}\mu_{\max}\right)^{-1}{\beta^2_{11}}\mu_{\max}$.
We can now establish (\ref{equ28a}) by substituting (\ref{app44}) into (\ref{app021}), and completing the argument starting from Eq. (9.69) in the proof of Theorem 9.1 in \cite[pp. 516--521]{ASayed2014}, where the quantity $b={0}_{MN}$ (which appears in (9.54) of \cite{ASayed2014}).
We next establish (\ref{equ28b}). Compared to the proof for Theorem 9.2 in \cite{ASayed2014}, the main difference, apart from the second-order moments evaluated in (\ref{app44}), is the term \begin{equation}\label{app60}
\frac{1}{(1-t)^3}\mathbb{E}\left[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^4\right]
\end{equation}
for any $0<t<1$, which appeared in (9.117) of \cite{ASayed2014}.
Let
\begin{align}
\bm K_i &\triangleq \left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)\bar{{\bm{w}}}_{i-1}\left(\bar{{\bm{w}}}_{i-1}\right)^{\sf{T}}\nonumber \\
&\hspace{0.5cm}\times\left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)\label{app24}\\
\bm L_i &\triangleq \left(\left(I_{M}-\bm D_{11,i}\right)\left(I_{M}-\bm D^{\sf{T}}_{11,i}\right)\right)^2\label{app25}.
\end{align}
Then, both matrices $\bm K_i$ and $\bm L_i$ are symmetric positive semi-definite.
Thus, we have
\begin{align}
&\hspace{-2cm}\mathbb{E}\left[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^4|\bm{\mathcal{F}}_{i-1}\right]\nonumber\\
&\hspace{-1.2cm}=\left(\bar{{\bm{w}}}_{i-1}\right)^{\sf{T}}\mathbb{E}\left[\bm K_i|\bm{\mathcal{F}}_{i-1}\right]\bar{{\bm{w}}}_{i-1}\nonumber\\
&\hspace{-1.2cm}\leq\lambda_{\max}\left(\mathbb{E}\left[\bm K_i|\bm{\mathcal{F}}_{i-1}\right]\right)\left\|\bar{{\bm{w}}}_{i-1}\right\|^2\nonumber\\
&\hspace{-1.2cm}\stackrel{(a)}\leq\mathrm{Tr}\left(\mathbb{E}\left[\bm K_i|\bm{\mathcal{F}}_{i-1}\right]\right)\left\|\bar{{\bm{w}}}_{i-1}\right\|^2\nonumber\\
&\hspace{-1.2cm}=\left(\bar{{\bm{w}}}_{i-1}^{\sf T}\mathbb{E}\left[\bm L_i|\bm{\mathcal{F}}_{i-1}\right]\bar{{\bm{w}}}_{i-1}\right)\|\bar{{\bm{w}}}_{i-1}\|^2\nonumber \\
&\hspace{-1.2cm}\leq\lambda_{\max}\left(\mathbb{E}\left[\bm L_i|\bm{\mathcal{F}}_{i-1}\right]\right)\|\bar{{\bm{w}}}_{i-1}\|^4\nonumber \\
&\hspace{-1.2cm}=\|\mathbb{E}\left[\bm L_i|\bm{\mathcal{F}}_{i-1}\right]\|\|\bar{{\bm{w}}}_{i-1}\|^4\label{app26}
\end{align}
where the inequality (a) holds because $\lambda_{\max}(\Sigma)\leq \mathrm{Tr}(\Sigma)$ for any symmetric positive semi-definite matrix $\Sigma$. We proceed to deal with the term $\mathbb{E}\left[\bm L_i|\bm{\mathcal{F}}_{i-1}\right]$. Note that
\begin{align}\label{app27}
\bm L_i=I_M - \bm L_{1,i} + \bm L_{2,i} - \bm L_{3,i} +\bm L_{4,i}
\end{align}
where
\begin{align}
&\bm L_{1,i}&\hspace{-0.4cm}\triangleq&\hspace{0.1cm}2\bm D_{11,i}+2\bm D^{\sf{T}}_{11,i}\label{app51a}\\
&\bm L_{2,i}&\hspace{-0.4cm}\triangleq& \hspace{0.1cm}3\bm D_{11,i}\bm D^{\sf{T}}_{11,i}+\bm D^{\sf{T}}_{11,i}\bm D_{11,i}+\left(\bm D_{11,i}\right)^2+\left(\bm D^{\sf{T}}_{11,i}\right)^2\label{app51b}\\
&\bm L_{3,i}&\hspace{-0.4cm}\triangleq&\hspace{0.05cm}\left(\bm D_{11,i}\right)^2\bm D^{\sf{T}}_{11,i}+\bm D_{11,i}\left(\bm D^{\sf{T}}_{11,i}\right)^2+\nonumber \\
&&&\bm D_{11,i}\bm D^{\sf{T}}_{11,i}\bm D_{11,i}+\bm D^{\sf{T}}_{11,i}\bm D_{11,i}\bm D^{\sf{T}}_{11,i}\label{app51c}\\
&\bm L_{4,i}&\hspace{-0.4cm}\triangleq&\hspace{0.05cm}\left(\bm D_{11,i}\bm D^{\sf{T}}_{11,i}\right)^2\label{app51d}
\end{align}
and we have
$\mathbb{E}\left[\bm L_{1,i}|\bm {\mathcal{F}}_{i-1}\right]=4\bm{\bar{D}}_{11,i}$
according to (\ref{app35}).
Let $X$ be a constant matrix of size $M\times M$. Then,
\begin{equation}\label{app48}
\mathbb{E}\left[\bm \Gamma_{k,i}X\bm \Gamma_{j,i}\right]=\left\{\begin{array}{cl}(1-r_k)(1-r_j)X,&k\neq j\\ X',&k=j\end{array}\right.
\end{equation}
where $X'$ has the same form as (\ref{equ46}), and we can further rewrite $X'$ as (\ref{equ51}).
It follows that:
\begin{align}\label{app52}
&\mathbb{E}\left[\bm D^{\sf{T}}_{11,i}\bm D_{11,i}|\bm {\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2 \nonumber \\
&\hspace{0.1cm}=\sum_{k=1}^N\sum_{j=1}^Nq_kq_j\mathbb{E}\left[\bm \Gamma_{k,i}\bm {H}_{k,i-1}\bm {H}_{j,i-1}\bm \Gamma_{j,i}|\bm {\mathcal{F}}_{i-1}\right]-\nonumber \\
&\hspace{0.5cm}\sum_{k=1}^N\sum_{j=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}\nonumber \\
&\hspace{0cm}\stackrel{(\ref{app48})}=\sum_{k=1}^N\sum_{j\neq k=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}+\nonumber \\
&\hspace{0.3cm}\sum_{k=1}^Nq_k^2(1-r_k)^2\bm {H}_{k,i-1}^2+\sum_{k=1}^Nq_k^2(1-r_k)r_k\mbox{\rm diag}\{\bm {H}_{k,i-1}^2\}-\nonumber \\
&\hspace{0.3cm}\sum_{k=1}^N\sum_{j=1}^Nq_kq_j(1-r_k)(1-r_j)\bm {H}_{k,i-1}\bm {H}_{j,i-1}\nonumber \\
&\hspace{0.1cm}=\sum_{k=1}^Nq_k^2(1-r_k)r_k\mbox{\rm diag}\{\bm {H}_{k,i-1}^2\}.
\end{align}
Recall from (\ref{app53}) that $\{\bm {H}_{k,i-1}>0\}$. Then, $\{\bm {H}_{k,i-1}^2>0\}$ and $\{\mbox{\rm diag}\{\bm {H}_{k,i-1}^2\}>0\}$. Computing Euclidean norms on both sides of (\ref{app52}), we have
\begin{align}
&\big\|\mathbb{E}\left[\bm D^{\sf{T}}_{11,i}\bm D_{11,i}|\bm {\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2\big\|\nonumber \\
&\hspace{0.3cm}\stackrel{(a)}\leq\sum_{k=1}^Nq_k^2(1-r_k)r_k\big\|\mbox{\rm diag}\{\bm {H}_{k,i-1}^2\}\big\|\nonumber \\
&\hspace{0.3cm}\stackrel{(b)}\leq\sum_{k=1}^Nq_k^2(1-r_k)r_k\left(\mathrm{Tr}\!\left[\bm {H}_{k,i-1}^2\right]\right)\nonumber\\
&\hspace{0.35cm}\leq\sum_{k=1}^Nq_k^2(1-r_k)r_k\left(M\lambda_{\max}\left(\bm {H}_{k,i-1}^2\right)\right)\nonumber \\
&\hspace{0.2cm}\stackrel{(\ref{app53})}\leq\sum_{k=1}^Nq_k^2(1-r_k)r_k\left(M\delta_d^2\right)\label{app54}
\end{align}
where in step (a) we used the property that $\|A+B\|\leq\|A\|+\|B\|$ \cite{Johnson03}, and (b) holds because $\|X\|\leq\mathrm{Tr}(X)$, for any symmetric positive semi-definite matrix $X$, and that $\mathrm{Tr}[\mbox{\rm diag}\{X\}]=\mathrm{Tr}[X]$.
Recall from (\ref{equ11}) that \cite[p. 509]{ASayed2014}
\begin{equation}\label{app90}
q_k=\mu_k(e_k^{\sf T}A_2p)
\triangleq\mu_{\max}\tau_k(e_k^{\sf T}A_2p)
\end{equation}
where $e_k$ denotes the $k$-th basis vector, which has a unit entry at
the $k$-th location and zeros elsewhere, and the parameter $\tau_k$ satisfies $\mu_k=\mu_{\max}\tau_k$.
Then, we have
\begin{equation}\label{app91}
\big\|\mathbb{E}\left[\bm D^{\sf{T}}_{11,i}\bm D_{11,i}|\bm {\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2\big\|=O(\mu_{\max}^2)
\end{equation}
Likewise, it follows that
\begin{align}
\Big\|\mathbb{E}\left[\left(\bm D_{11,i}\right)^2|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2\Big\|&=O(\mu_{\max}^2)\label{app55a}\\
\Big\|\mathbb{E}\left[\left(\bm D_{11,i}^{\sf T}\right)^2|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2\Big\|&=O(\mu_{\max}^2).\label{app55b}
\end{align}
Recall from (\ref{app39}) and (\ref{app38}) that
\begin{equation}\label{app56}
\big\|\mathbb{E}\left[\bm D_{11,i}\bm D_{11,i}^{\sf T}|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^2\big\|=O(\mu_{\max}^2).
\end{equation}
Substituting (\ref{app91})--(\ref{app56}) into (\ref{app51b}), we obtain
\begin{equation}\label{app57}
\big\|\mathbb{E}\left[\bm L_{2,i}|\bm{\mathcal{F}}_{i-1}\right]-6\left(\bm{\bar{D}}_{11,i}\right)^2\big\|=O(\mu_{\max}^2).
\end{equation}
Similarly, it can be verified that
\begin{eqnarray}
\big\|\mathbb{E}\left[\bm L_{3,i}|\bm{\mathcal{F}}_{i-1}\right]-4\left(\bm{\bar{D}}_{11,i}\right)^3\big\|=O(\mu_{\max}^3)\label{app58a}\\
\big\|\mathbb{E}\left[\bm L_{4,i}|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^4\big\|=O(\mu_{\max}^4).\label{app58b}
\end{eqnarray}
It follows that
\begin{align}
\|\mathbb{E}\left[\bm L_{i}|\bm{\mathcal{F}}_{i-1}\right]\|&=
\|I-\mathbb{E}\left[\bm L_{1,i}|\bm{\mathcal{F}}_{i-1}\right]+\mathbb{E}\left[\bm L_{2,i}|\bm{\mathcal{F}}_{i-1}\right]-\nonumber \\
&\hspace{0.5cm}\mathbb{E}\left[\bm L_{3,i}|\bm{\mathcal{F}}_{i-1}\right]+\mathbb{E}\left[\bm L_{4,i}|\bm{\mathcal{F}}_{i-1}\right]\|\nonumber \\
&=\big\|I-4\bm{\bar{D}}_{11,i}+6\left(\bm{\bar{D}}_{11,i}\right)^2-4\left(\bm{\bar{D}}_{11,i}\right)^3+\nonumber \\
&\hspace{0.25cm}\left(\bm{\bar{D}}_{11,i}\right)^4+\left(\mathbb{E}\left[\bm L_{2,i}|\bm{\mathcal{F}}_{i-1}\right]-6\left(\bm{\bar{D}}_{11,i}\right)^2\right)\nonumber \\
&\hspace{0.25cm}-\left(\mathbb{E}\left[\bm L_{3,i}|\bm{\mathcal{F}}_{i-1}\right]-4\left(\bm{\bar{D}}_{11,i}\right)^3\right)\nonumber \\
&\hspace{0.25cm}+\left(\mathbb{E}\left[\bm L_{4,i}|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^4\right)\big\|\nonumber \\
&\leq\big\|I-4\bm{\bar{D}}_{11,i}+6\left(\bm{\bar{D}}_{11,i}\right)^2-4\left(\bm{\bar{D}}_{11,i}\right)^3+\nonumber \\
&\hspace{0.25cm}\left(\bm{\bar{D}}_{11,i}\right)^4\big\|+\big\|\mathbb{E}\left[\bm L_{2,i}|\bm{\mathcal{F}}_{i-1}\right]-6\left(\bm{\bar{D}}_{11,i}\right)^2\big\|\nonumber \\
&\hspace{0.25cm}+\big\|\mathbb{E}\left[\bm L_{3,i}|\bm{\mathcal{F}}_{i-1}\right]-4\left(\bm{\bar{D}}_{11,i}\right)^3\big\|\nonumber \\
&\hspace{0.25cm}+\big\|\mathbb{E}\left[\bm L_{4,i}|\bm{\mathcal{F}}_{i-1}\right]-\left(\bm{\bar{D}}_{11,i}\right)^4\big\|\nonumber\\
&=\|\left(I-\bm{\bar{D}}_{11,i}\right)^4\|+O(\mu_{\max}^2)\nonumber \\
&=\|I-\bm{\bar{D}}_{11,i}\|^4+O(\mu_{\max}^2)\nonumber \\
&\hspace{-0.15cm}\stackrel{(\ref{app39})}\leq\left(1-\sigma_{11}\mu_{\max}\right)^4+O(\mu_{\max}^2).\label{app59}
\end{align}
Substituting into (\ref{app26}), and taking expectations again on both sides, we have
\begin{align}\label{app62}
&\frac{1}{(1-t)^3}\mathbb{E}\left[\|(I_{M}-\bm D^{\sf{T}}_{11,i})\bar{{\bm{w}}}_{i-1}\|^4\right]\nonumber \\
&\hspace{0.5cm}\leq\frac{1}{(1-t)^3}\left(\left(1-\sigma_{11}\mu_{\max}\right)^4+O(\mu_{\max}^2)\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^4\nonumber \\
&\hspace{0.45cm}\stackrel{(a)}=\left(1-\sigma_{11}\mu_{\max}+O(\mu_{\max}^2)\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^4\nonumber \\
&\hspace{0.5cm}\leq\left(1-\sigma''_{11}\mu_{\max}\right)\mathbb{E}\|\bar{{\bm{w}}}_{i-1}\|^4
\end{align}
for some positive constant $\sigma''_{11}<\sigma_{11}$, and for small enough $\mu_{\max}$, where in step (a) we set $t\triangleq\sigma_{11}\mu_{\max}$.
Then, the result in (\ref{equ28b}) can be obtained by continuing from Eq. (9.117) (by choosing $t = \sigma_{11}\mu_{\max}$) in the proof of Theorem 9.2 in \cite[pp. 523--530]{ASayed2014}.
\section{Proof of Theorem \ref{theo2}}\label{APP3}
Define
\begin{equation}\label{app21a}
\mathcal{F}\triangleq\mathbb{E}\left[\bm{\mathcal{B}}'_{i}\otimes_b\bm{\mathcal{B}}'_{i}\right]^{\sf T}
\end{equation}
Then, by following similar techniques shown in the proof of Lemma 9.5 \cite[pp. 542--546]{ASayed2014}, we have
\begin{equation}\label{equ37}
(I-\mathcal{F})^{-1}=[(p\otimes p)(\mathds{1}\otimes\mathds{1})^{\sf{T}}]\otimes Z^{-1}+O(1)
\end{equation}
where
\begin{equation}\label{equ38}
Z\triangleq\sum\limits_{k=1}^Nq_k(1-r_k)\left[\left(H_k\otimes I_{M}\right)+\left(I_{M}\otimes H_k\right)\right].
\end{equation}
The desired results (\ref{equ43}) and (\ref{equ72}) in Theorem \ref{theo2} now follow by referring to the proofs of Theorem 11.2 and Lemma 11.3 in \cite[pp. 583--596]{ASayed2014}, and Theorem 11.4 in \cite[pp. 608--609]{ASayed2014}.
Evaluating the squared Euclidean norms on both sides of (\ref{equ30}) and taking expectations conditioned on $\bm{\mathcal{F}}_{i-1}$, then
taking expectations again we get
\begin{multline}\label{app88}
\mathbb{E}[\|\widetilde{\bm{w}}_i'\|^2_{\mbox{\rm bvec}(I_{NM})}]= \mathbb{E}\left\{\|\widetilde{\bm{w}}_{i-1}'\|^2_{\mathcal{F}\mbox{\rm bvec}(I_{NM})}\right\}+\\\mathbb{E}\left\{\|\bm{s}_i\|^2_{\mathbb{E}\left[\left(\bm{\Gamma}_i\mathcal{M}\mathcal{A}_2\right)\otimes_b\left(\bm{\Gamma}_i\mathcal{M}\mathcal{A}_2\right)\right]\mbox{\rm bvec}(I_{NM})}\right\}
\end{multline}
where we used the weighted vector notation $\|x\|^2_{\sigma}=\|x\|^2_{\Sigma}$ with $\sigma=\mbox{\rm bvec}(\Sigma)$ and $\mbox{\rm bvec}(\cdot)$ denoting the block vector operation \cite[p. 588]{ASayed2014}.
Iterating the relation we get
\begin{multline}\label{app89}
\mathbb{E}[\|\widetilde{\bm{w}}_i'\|^2_{\mbox{\rm bvec}(I_{NM})}]= \mathbb{E}\left\{\|\widetilde{\bm{w}}_{-1}'\|^2_{\mathcal{F}^{i+1}\mbox{\rm bvec}(I_{NM})}\right\}+\\\sum_{n=0}^i\mathbb{E}\left\{\|\bm{s}_i\|^2_{\mathbb{E}\left[\left(\bm{\Gamma}_i\mathcal{M}\mathcal{A}_2\right)\otimes_b\left(\bm{\Gamma}_i\mathcal{M}\mathcal{A}_2\right)\right]\mathcal{F}^{n}\mbox{\rm bvec}(I_{NM})}\right\}
\end{multline}
where the first term corresponds to a transient component that dies out with time, and the convergence rate of $\mathbb{E}\|\bm{\widetilde{w}}_{k,i}\|^2$
towards the steady-state regime is seen to be dictated by $\rho\left(\mathcal{F}\right)$ \cite[p. 592]{ASayed2014}. Now, let
\begin{equation}
\Gamma\triangleq\mathbb{E}\bm \Gamma_i=\mbox{\rm diag}\{(1-r_k)\}_{k=1}^N\otimes I_M\label{app67a}
\end{equation}
\begin{equation}
\mathcal{M}'\triangleq\mathcal{M}{\Gamma}=\mbox{\rm diag}\{\mu_k(1-r_k)\}_{k=1}^N\otimes I_M\label{app67b}
\end{equation}
\begin{equation}
{\mathcal{B}'}\triangleq\mathbb{E}\bm{\mathcal{B}}'_{i}=\mathcal{A}_2^{\sf{T}}\left(I-\mathcal{M}'\mathcal{H}\right)\mathcal{A}_1^{\sf{T}}\label{app67c}
\end{equation}
We now rewrite (\ref{app21a}) in terms of ${\mathcal{B}}'$ as
\begin{align}\label{app74}
\mathcal{F}\hspace{-0.1cm}\stackrel{(\ref{equ31})}=&\mathbb{E}\left[\left(\mathcal{A}_2^{\sf{T}}\left(I-\mathcal{M}\bm \Gamma_i\mathcal{H}\right)\mathcal{A}_1^{\sf{T}}\right)\otimes_b\left(\mathcal{A}_2^{\sf{T}}\left(I-\mathcal{M}\bm \Gamma_i\mathcal{H}\right)\mathcal{A}_1^{\sf{T}}\right)\right]^{\sf{T}}\nonumber \\
&\hspace{-0.48cm}=\left(\mathcal{A}_1\otimes_b\mathcal{A}_1\right)\big(I-I\otimes_b\left(\mathcal{H}\Gamma\mathcal{M}\right)-\left(\mathcal{H}\Gamma\mathcal{M}\right)\otimes_b I+\nonumber \\
&\mathbb{E}\left[\left(\mathcal{H}\bm \Gamma_i\mathcal{M}\right)\otimes_b\left(\mathcal{H}\bm \Gamma_i\mathcal{M}\right)\right]\big)\left(\mathcal{A}_2\otimes_b\mathcal{A}_2\right)
\nonumber \\
&\hspace{-0.48cm}=\left[{\mathcal{B}}'\otimes_b{\mathcal{B}}'\right]^{\sf T} + \Delta_F(\mu_{\max}^{2})
\end{align}
where $\Delta_F(\mu_{\max}^{2})$ is a matrix whose entries are in the order of $O(\mu_{\max}^{2})$.
Following similar techniques to the proof of Theorem 9.3 \cite[pp. 535--540]{ASayed2014}, we make the same Jordan canonical decomposition for matrix $\mathcal{P}=\mathcal{A}_1\mathcal{A}_2$ as (\ref{app6}), then substituting into (\ref{app67c}) we get
\begin{equation}\label{app82}
{\mathcal{B}}'
=\left(\mathcal{V}_\epsilon^{-1}\right)^{{\sf{T}}}(\mathcal{J}^{{\sf{T}}}-{\mathcal{D}}'^{{\sf{T}}})\mathcal{V}_\epsilon^{\sf{T}}
\end{equation}
where
\begin{align}\label{app83}
{\mathcal{D}}'^{\sf{T}}&\triangleq\mathcal{V}_\epsilon^{{\sf{T}}}\mathcal{A}_2^{{\sf{T}}}\mathcal{M}'\mathcal{H}\mathcal{A}_1^{{\sf{T}}}\left(\mathcal{V}_\epsilon^{-1}\right)^{\sf{T}}\nonumber\\
&=\left[\begin{array}{cc}{D}_{11}'^{\sf{T}}&{D}_{21}'^{\sf{T}}\\{D}_{12}'^{\sf{T}}&{D}_{22}'^{\sf{T}}\end{array}\right]
\end{align}
and
\begin{equation}
{D}_{11}'=\sum\limits_{k=1}^Nq_k(1-r_k){H}_{k},\,
{D}_{21}'=O(\mu_{\max})\label{app84b}.
\end{equation}
We now introduce the eigen-decomposition ${D}_{11}'^{\sf T} \triangleq U\Lambda U^{\sf T}$ for the symmetric positive-definite matrix ${D}_{11}'^{\sf T}$,
where $U$ is unitary, and $\Lambda$ is a diagonal matrix composed of the eigenvalues of ${D}_{11}'^{\sf T}$.
Let
\begin{equation}\label{app79}
\mathcal{T} = \mbox{\rm diag}\{\mu_{\max}^{1/N}U,\mu_{\max}^{2/N}I_M,\ldots,\mu_{\max}^{(N-1)/N}I_M,\mu_{\max}I_M\}
\end{equation}
then we have
\begin{equation}\label{app71}
\mathcal{T}^{-1}\mathcal{V}_{\epsilon}^{\sf T}\mathcal{B}'\left(\mathcal{V}_{\epsilon}^{-1}\right)^{\sf T}\mathcal{T}=\left[\begin{array}{cc}B_{11}'&B_{12}'\\ B_{21}'&B_{22}'\end{array}\right].
\end{equation}
It follows that
$B_{11}'\triangleq I_M-\Lambda,\,B_{12}' = O(\mu_{\max}^{(N+1)/N})$ \cite[p. 538]{ASayed2014}.
The matrix $\mathcal{B}'$ has the same eigenvalues as the block matrix on the right hand side of (\ref{app71}).
By referring to Gershgorin's Theorem \cite{Golub96, Johnson03}, it is shown in \cite[pp. 539--540]{ASayed2014} that the union of the $M$ Gershgorin discs, each centered at an eigenvalue of $B_{11}'$ with radius $O(\mu_{\max}^{(N+1)/N})$, is disjoint from that of the other $M(N-1)$ Gershgorin discs, centered at the diagonal entries of $B_{22}'$, and therefore
\begin{equation}\label{app73}
\rho\left(\mathcal{B}'\right)=\rho\left(B_{11}'\right)+O(\mu_{\max}^{(N+1)/N}).
\end{equation}
Let
{\arraycolsep=1.4pt\def\arraystretch{1.5}
\begin{align}\label{app80}
\tilde{\Delta}_{F}&\triangleq\left(\mathcal{T}^{\sf T}\mathcal{V}_{\epsilon}^{-1}\right)\otimes_b\left(\mathcal{T}^{\sf T}\mathcal{V}_{\epsilon}^{-1}\right)\Delta_{F}(\mu_{\max}^2)\times\nonumber \\
&\hspace{0.4cm}\left(\mathcal{V}_{\epsilon}\left(\mathcal{T}^{-1}\right)^{\sf T}\right)\otimes_b\left(\mathcal{V}_{\epsilon}\left(\mathcal{T}^{-1}\right)^{\sf T}\right)\nonumber \\
&=\left[\begin{array}{c|ccc}O(\mu_{\max}^2)&&o(\mu_{\max}^{1/N})&\\%\vspace{0.1cm}
\hline
&O(\mu_{\max}^2)&&o(\mu_{\max}^{2/N})\\
o(\mu_{\max}^{2})&&\ddots&\\
&o(\mu_{\max}^{1+{1}/{N}})&&O(\mu_{\max}^2)\end{array}\right].
\end{align}}
\noindent It follows from (\ref{app80}) that all the diagonal blocks of $\tilde{\Delta}_{F}$ are in the order of $O(\mu_{\max}^2)$, the remaining block matrices in the first row are in the order of $o(\mu_{\max}^{1/N})$, the remaining block matrices in the first column are in the order of $o(\mu_{\max}^{2})$, and the upper and lower triangular blocks in the $(2,2)$th block of $\tilde{\Delta}_{F}$ are in the order of $o(\mu_{\max}^{2/N})$ and $o(\mu_{\max}^{1+1/N})$ respectively. Then, substituting (\ref{app71}) and (\ref{app80}) into (\ref{app74}), we have
\begin{align}\label{app75}
&\hspace{-0.3cm}\left(\mathcal{T}^{\sf T}\mathcal{V}_{\epsilon}^{-1}\right)\otimes_b\left(\mathcal{T}^{\sf T}\mathcal{V}_{\epsilon}^{-1}\right)\mathcal{F}\left(\mathcal{V}_{\epsilon}\left(\mathcal{T}^{-1}\right)^{\sf T}\right)\otimes_b\left(\mathcal{V}_{\epsilon}\left(\mathcal{T}^{-1}\right)^{\sf T}\right)\nonumber \\
&=\left(\left[\begin{array}{cc}B_{11}'&B_{12}'\\ B_{21}'&B_{22}'\end{array}\right]\otimes_b\left[\begin{array}{cc}B_{11}'&B_{12}'\\ B_{21}'&B_{22}'\end{array}\right]\right)^{\sf T}+\tilde{\Delta}_{F}\nonumber \\
&=\left[\begin{array}{cc}F_{11}&F_{12}\\ F_{21}&F_{22}\end{array}\right]^{\sf T}
\end{align}
where
\begin{equation}
F_{11}=B_{11}'\otimes B_{11}'+O(\mu_{\max}^{2}),\,
F_{12}=O(\mu_{\max}^{(N+1)/N})\label{app76b}.
\end{equation}
Recall that
$B_{11}'$ is a diagonal matrix, and so is $B_{11}'\otimes B_{11}'$; then we have
\begin{equation}\label{app81}
\mbox{\rm diag}\{F_{11}\}=\mbox{\rm diag}\{\ \lambda\left(B_{11}'\otimes B_{11}'\right)\} + O(\mu_{\max}^{2})
\end{equation}
which means that the diagonal entries of $F_{11}$ are the eigenvalues of $B_{11}'\otimes B_{11}'$ perturbed by a second-order term, $O(\mu^2_{\max})$.
Referring to Gershgorin's Theorem, the union of the $M^2$ Gershgorin discs, centered at the diagonal entries of $F_{11}$ with radius $O(\mu_{\max}^{(N+1)/N})$, is disjoint from the union of the other $M^2(N^2-1)$ Gershgorin discs, centered at the diagonal entries of $F_{22}$. Note that $\mathcal{F}$ has the same eigenvalues as the block matrix on the right hand side of (\ref{app75}), and that eigenvalues are invariant under a transposition operation.
It follows from (\ref{app73}) that
\begin{equation}\label{app77}
\rho\left(\mathcal{F}\right)=\rho\left(B_{11}'\otimes B_{11}'\right)+O(\mu_{\max}^{(N+1)/N}).
\end{equation}
Using the fact that
$\rho\left(B_{11}'\otimes B_{11}'\right)=\left[\rho\left(B_{11}'\right)\right]^2$,
we arrive at the desired result (\ref{equ88}).
\section{Examining the Difference in (\ref{equ64})}\label{APP6}
We revisit the MSE networks discussed in (\ref{equ48}).
Assume that there are only two agents in the network as shown in Fig. \ref{two_agents_network}, namely, $N = 2$, with $M = 2$.
Assume that
$\sigma_{v,1}^2>\sigma_{v,2}^2$.
\begin{figure}[htbp]
\centering
\includegraphics[width=2.2in]{two_agents_network.pdf}
\caption{A two-agent MSE network with a doubly-stochastic combination matrix.}
\label{two_agents_network}
\end{figure}
For simplicity, uniform parameters $\{q_k\equiv q\}$ are used across the agents (which may occur, for example, in the ATC or CTA forms when the step-sizes are uniform across the agents, i.e., $\{\mu_k\equiv\mu\}$, and doubly-stochastic combination matrices are adopted. In this case, we get $\{q_k\equiv \mu/N\}$ \cite[pp. 493--494]{ASayed2014}). Let
\begin{equation}\label{equ92}
R_{u,1}=\left[\begin{array}{cc}|\pi_1|&\pi_1\\ \pi_1&1\end{array}\right],\:R_{u,2}=\left[\begin{array}{cc}|\pi_2|&\pi_2\\ \pi_2&1\end{array}\right]
\end{equation}
where the numbers $|\pi_1|<1,|\pi_2|<1$ ensure that $R_{u,1}>0,R_{u,2}>0$. Then, expression (\ref{equ64}) can be rewritten as
\begin{align}\label{equ83}
&\hspace{-0.2cm}\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\nonumber \\
&\hspace{-0.2cm}\stackrel{(\ref{equ49b})}={r}q\mathrm{Tr}\Big(\left(R_{u,1}+R_{u,2}\right)^{-1}
\big(\sigma_{v,1}^2\left(\mbox{\rm diag}\{R_{u,1}\}-R_{u,1}\right)+\nonumber \\
&\hspace{0.5cm}\sigma_{v,2}^2\left(\mbox{\rm diag}\{R_{u,2}\}-R_{u,2}\right)\big)
\Big)\nonumber \\
&\hspace{-0.2cm}\stackrel{(\ref{equ92})}={r}q\mathrm{Tr}\Bigg(\left[\begin{array}{cc}|\pi_1|+|\pi_2|&\pi_1+\pi_2\\\pi_1+\pi_2&2\end{array}\right]^{-1}\times\nonumber \\
&\hspace{0.4cm}\left[\begin{array}{cc}0&-\pi_1\sigma_{v,1}^2-\pi_2\sigma_{v,2}^2\\-\pi_1\sigma_{v,1}^2-\pi_2\sigma_{v,2}^2&0\end{array}\right]\Bigg)\nonumber \\
&\hspace{0cm}=\frac{2rq(\pi_1+\pi_2)(\pi_1\sigma_{v,1}^2+\pi_2\sigma_{v,2}^2)}{2\left(|\pi_1|+|\pi_2|\right)-(\pi_1+\pi_2)^2}
\end{align}
where the denominator is positive
for all $|\pi_1|<1,|\pi_2|<1$. Then, $\mathrm{MSD}_{\mathrm{coor},k}<\mathrm{MSD}_{\mathrm{grad},k}$ if, and only if
$(\pi_1+\pi_2)(\pi_1\sigma_{v,1}^2+\pi_2\sigma_{v,2}^2)<0$,
which implies that
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
0<\pi_2<1,-\pi_2<\pi_1<-\left(\sigma_{v,2}^2/\sigma_{v,1}^2\right)\pi_2\label{equ86}\\
-1<\pi_2<0,-\left(\sigma_{v,2}^2/\sigma_{v,1}^2\right)\pi_2<\pi_1<-\pi_2.\label{equ87}
\end{empheq}
\end{subequations}
Otherwise, $\mathrm{MSD}_{\mathrm{coor},k}\geq\mathrm{MSD}_{\mathrm{grad},k}$.
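As a quick numerical check of (\ref{equ83}), the sketch below evaluates the closed-form difference and its sign for the two parameter choices used in the two-agent experiment; the values of $r$, $q$ and the noise variances are taken from that experiment, and only the sign is examined here (the corresponding dB gap also depends on $\mathrm{MSD}_{\mathrm{grad},k}$).
\begin{verbatim}
import numpy as np

def msd_gap(pi1, pi2, sv1, sv2, r=0.5, q=2.5e-3):
    # Closed-form MSD_coor - MSD_grad for the two-agent MSE network.
    num = 2 * r * q * (pi1 + pi2) * (pi1 * sv1 + pi2 * sv2)
    den = 2 * (abs(pi1) + abs(pi2)) - (pi1 + pi2) ** 2
    return num / den

sv1, sv2 = 0.5, 5e-4
for pi1, pi2 in [(-0.34, 0.99), (0.34, 0.99)]:
    gap = msd_gap(pi1, pi2, sv1, sv2)
    better = "coordinate-descent" if gap < 0 else "full-gradient"
    print(f"pi1={pi1:+.2f}, pi2={pi2:+.2f}: gap={gap:+.2e} ({better} better)")
\end{verbatim}
The first parameter pair satisfies condition (\ref{equ86}) and produces a negative difference, while the second produces a positive one, in line with the discussion above.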
\section{Proof of Corollary \ref{cor1}}\label{APP4}
In the case when the matrices $\{H_k\}$ or $\{G_k\}$ are diagonal, it follows from (\ref{equ64}) and (\ref{equ89}) that $\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}=0$, which verifies (\ref{equ52h}).
More generally, according to (\ref{equ6}) and (\ref{app45e}), we have
\begin{equation*}
\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}>0,\:\,
\sum\limits_{k=1}^Nq_k^2G_k\geq0,\:\,
\sum\limits_{k=1}^Nq_k^2\mbox{\rm diag}\{G_k\}\geq0
\end{equation*}
Then, applying the inequality \cite{Fang1994}:
\begin{equation}\label{app40}
\lambda_{\min}(A)\mathrm{Tr}(B)\leq\mathrm{Tr}(AB)\leq\lambda_{\max}(A)\mathrm{Tr}(B)
\end{equation}
for any symmetric positive semi-definite matrices $A$ and $B$, where $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ represent respectively the smallest and largest eigenvalues of $A$, we obtain
\begin{multline}\label{equ52d}
\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}
\leq\frac{r}{2}\Bigg\{\lambda_{\max}\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\right)-\\
\lambda_{\min}\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\right)\Bigg\}\sum\limits_{k=1}^Nq_k^2\mathrm{Tr}(G_k)
\end{multline}
where we substituted (\ref{equ89}) into (\ref{equ64}) and used the relation $\mathrm{Tr}(G_k)=\mathrm{Tr}\left(\mbox{\rm diag}\{G_k\}\right)$.
Then, noting that
\begin{equation}\label{equ52b}
0<\sum\limits_{k=1}^N q_k\lambda_{\min}\left(H_k\right)
\leq\lambda\left(\sum\limits_{k=1}^Nq_kH_k\right)\leq\sum\limits_{k=1}^N q_k\lambda_{\max}\left(H_k\right)
\end{equation}
we have
\begin{multline}\label{equ52c}
1\Big/\left(\delta_d\sum_{k=1}^Nq_k\right)\stackrel{(a)}\leq1\Big/\sum\limits_{k=1}^N q_k\lambda_{\max}\left(H_k\right)\leq\\
\lambda\left(\left(\sum\limits_{k=1}^Nq_kH_k\right)^{-1}\right)\leq\\
1\Big/\sum\limits_{k=1}^N q_k\lambda_{\min}\left(H_k\right)\stackrel{(b)}\leq1\Big/\left(\nu_d\sum_{k=1}^Nq_k\right)
\end{multline}
where the inequalities (a) and (b) hold due to (\ref{equ6}).
Substituting (\ref{equ52c}) into (\ref{equ52d}) gives the upper bound for the difference $\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}$ as shown by (\ref{equ52g}). Then, by following a similar argument, we obtain a lower bound equal to the negative of the upper bound, which leads to the desired result in Corollary \ref{cor1}.
\section{Proof of Corollary \ref{cor3}}\label{APP5}
We start from the MSD expression in (\ref{equ43a}) and note first that
\begin{align}\label{equ51c}
G_k'= (1-r_k)^2\left(G+\frac{r_k}{1-r_k}\mbox{\rm diag}\{G\}\right)
\end{align}
Substituting into (\ref{equ43a}) we have:
\begin{align}
\mathrm{MSD}_{\mathrm{coor},k}
=&\:\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\left(\sum\limits_{k=1}^Nq_k^2(1-r_k)^2\right)\times\nonumber \\
&\:\mathrm{Tr}\left(H^{-1}\left(G+\frac{r_k}{1-r_k}\mbox{\rm diag}\{G\}\right)\right)\nonumber \\
=&\:\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\left(\sum\limits_{k=1}^Nq_k^2(1-r_k)^2\right)\times\nonumber \\
&\:\mathrm{Tr}\left(H^{-1}G\right)+\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\times\nonumber \\
&\:\left(\sum\limits_{k=1}^Nq_k^2(1-r_k)r_k\right)\mathrm{Tr}\left(H^{-1}\mbox{\rm diag}\{G\}\right)\nonumber \\
=&\:\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k(1-r_k)\right)^{-1}\left(\sum\limits_{k=1}^Nq_k^2(1-r_k)^2\right)\times\nonumber \\
&\:\mathrm{Tr}\left(H^{-1}G\right)+
\frac{1}{2}(\theta-\alpha)\mathrm{Tr}\left(H^{-1}\mbox{\rm diag}\{G\}\right)\label{equ63a}
\end{align}
where (\ref{equ63a}) holds because
\begin{align}\label{equ70}
\theta-\alpha&=\frac{\sum_{k=1}^Nq_k^2(1-r_k)}{\sum_{k=1}^Nq_k(1-r_k)}-\frac{\sum_{k=1}^Nq_k^2(1-r_k)^2}{\sum_{k=1}^Nq_k(1-r_k)}\nonumber \\
&=\frac{\sum_{k=1}^Nq_k^2(1-r_k)r_k}{\sum_{k=1}^Nq_k(1-r_k)}
\end{align}
with the numbers $\alpha$ and $\theta$ being defined in (\ref{equ78}) and (\ref{equ79}), respectively. Recall that
\begin{equation}\label{equ63b}
\mathrm{MSD}_{\mathrm{grad},k}=\frac{1}{2}\left(\sum\limits_{k=1}^Nq_k\right)^{-1}\sum\limits_{k=1}^Nq_k^2\mathrm{Tr}\left(H^{-1}G\right)
\end{equation}
is the MSD performance for the full-gradient case. Thus,
\begin{align}\label{equ63c}
&\mathrm{MSD}_{\mathrm{coor},k}-\mathrm{MSD}_{\mathrm{grad},k}\nonumber \\
&\hspace{0.2cm}=\frac{1}{2}\left(\frac{\sum_{k=1}^Nq_k^2(1-r_k)^2}{\sum_{k=1}^Nq_k(1-r_k)}-\frac{\sum_{k=1}^Nq_k^2}{\sum_{k=1}^Nq_k}\right)\mathrm{Tr}\left(H^{-1}G\right)\nonumber \\
&\hspace{0.6cm}+\frac{1}{2}(\theta-\alpha)\mathrm{Tr}\left(H^{-1}\mbox{\rm diag}\{G\}\right)\nonumber \\
&\hspace{0.2cm}=\frac{\alpha }{2}\mathrm{Tr}\left(H^{-1}G\right)+\frac{1}{2}(\theta-\alpha)\mathrm{Tr}\left(H^{-1}\mbox{\rm diag}\{G\}\right).
\end{align}
Applying (\ref{app40}) and (\ref{equ6}) to (\ref{equ63c}), we obtain the desired results for the MSD performance shown in Corollary \ref{cor3}. The result for the ER performance in Corollary \ref{cor3} can be shown by subtracting the ER expression, $\mathrm{ER}_{\mathrm{grad},k}$, from both sides of (\ref{equ74}).
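To make the roles of $\alpha$ and $\theta-\alpha$ concrete, the following sketch evaluates the difference in (\ref{equ63c}) for given $\{q_k\}$, $\{r_k\}$, $H$ and $G$. The matrices and parameter ranges below are placeholders chosen only to exercise the formula; they are not taken from the experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M = 20, 10
q = rng.uniform(1e-3, 1e-2, N) / N          # placeholder values of q_k
r = rng.uniform(0.0, 1.0, N)                # missing probabilities r_k
B = rng.standard_normal((M, M))
H = B @ B.T + M * np.eye(M)                 # placeholder positive-definite H
C = rng.standard_normal((M, M))
G = C @ C.T                                 # placeholder covariance G

alpha = (q**2 * (1 - r)**2).sum() / (q * (1 - r)).sum() - (q**2).sum() / q.sum()
theta_minus_alpha = (q**2 * (1 - r) * r).sum() / (q * (1 - r)).sum()
Hinv = np.linalg.inv(H)
gap = 0.5 * alpha * np.trace(Hinv @ G) \
      + 0.5 * theta_minus_alpha * np.trace(Hinv @ np.diag(np.diag(G)))
print("MSD_coor - MSD_grad =", gap)
\end{verbatim}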
\bibliographystyle{IEEEbib}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 7,937 |
\section{Introduction}
Recently, the learnware~\citep{learnware} paradigm has been proposed. A learnware is a well-performed pre-trained machine learning model with a specification which explains the purpose and/or specialty of the model. The provider of a learnware can upload it to a market, and ideally, the market will be a pool of (model, specification) pairs solving different tasks. When a person is going to tackle her own learning task, she can identify a good or some useful learnwares from that market whose specifications match her requirements and apply them to her own problem.
One of the most important properties of learnware is to enable future users to build their own applications upon previous models without accessing the raw data used to train these models, and thus, the machine learning experience is shared but the data privacy violation and data improper disclosure issues are avoided. This property is named \emph{inaccessibility} of training data.
Note that it may be too optimistic to expect that there is a model in the pool which was trained exactly for the current task; there may be one, multiple, or even no helpful models. Thus, a key challenge is how to provide each model with a specification such that given a new learning task, it is possible to identify helpful models from the model pool. This property is named \emph{reusability} of pre-trained models.
It was thought that logical clauses or some simple statistics may be used to construct the model specification~\citep{learnware}, though there has been no effective approach yet. In this paper, we show that it is possible to construct a reduced kernel mean embedding (RKME) specification for this purpose, where both inaccessibility and reusability are satisfied under reasonable assumptions.
Kernel mean embedding~\citep{KME_survey} is a powerful technique for solving distribution-related problems, and has made widespread contribution in statistics and machine learning, like two-sample testing~\citep{Interpretable_TSS}, causal discovery~\citep{KME_casual}, and anomaly detection~\citep{KME_anomaly}. Roughly speaking, KME maps a probability distribution to a point in reproducing kernel Hilbert space (RKHS), and can be regarded as a representation of distribution. Reduced set construction~\citep{TNN99_RS} keeps the representation power of empirical KMEs, and blocks access to raw data points at the same time.
To clearly show why reduced KME is a valid specification in the learnware paradigm, we decompose the paradigm into a two-phase framework. Initially, in the upload phase, the pre-trained model provider is required to construct a reduced set of empirical KME as her model's specification and upload it together with the built predictive model into a public pool. The RKME represents the distribution of model's training data, without using any raw examples. Subsequently, in the deployment phase, we demonstrate that the user can select suitable pre-trained model(s) from the pool to predict her current task by utilizing the specifications and her unlabeled testing points in a systematic way.
RKME specification is a bridge between the current task and solved tasks upon which the pre-trained models are built. We formalize two possible relationships between the current and the solved tasks. The first one is \emph{task-recurrent assumption}, saying the data distribution of the current task matches one of the solved tasks. We then use the maximum mean discrepancy (MMD) criteria to find the unique fittest model in the pool to handle all testing points. The second one is \emph{instance-recurrent assumption}, saying the distribution of the current task is a mixture of solved tasks. Our algorithm estimates the mixture weight, uses the weight to generate auxiliary data mimicking the current distribution, learns a selector on these data, then uses the selector to choose the fittest pre-trained model for each testing point. Kernel herding~\citep{herding_thesis}, a fast sampling method for KME, is applied in mimic set generation.
Our main contributions are:
\begin{compactitem}
\item Propose using RKME as the specification under the learnware paradigm, and implement a two-phase framework to support the usage.
\item Show the inaccessibility of training data in the upload phase, i.e., no raw example is exposed after constructing specifications.
\item Prove the reusability of pre-trained models in the deployment phase, i.e., the current task can be handled with identified existing model(s).
\item Evaluate our proposal through extensive experiments including a real industrial project.
\end{compactitem}
In the following sections, we first present necessary background knowledge, then introduce our proposed framework, followed by theoretical analysis, related work, experiments and finally the conclusion.
\section{Background} \label{sec:background}
In this section, we briefly introduce several concepts and techniques. They will be incorporated and further explained in detail through this paper.
\subsection{Kernel Mean Embeddings}
Let $X \in \X$ be a random variable in $\mathbb{R}^d$ and $P_X$ be a measurable probability function of $X$. Let $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ be a reproducing kernel for $\X$, with associated RKHS $\mathcal{H}_k$ and $\phi:x\in\mathcal{X}\mapsto k(x,\cdot)\in\mathcal{H}_k$,
the corresponding canonical feature map on $\X$. Throughout this paper, we assume the kernel function is continuous, bounded, and positive-definite. The kernel function is considered a similarity measure on a pair of points in $\mathcal{X}$.
Kernel mean embedding (KME)~\citep{Smola07KME} is defined by the mean of a $\H_k$-valued random variable that maps the probability distributions to an element in RKHS associated with kernel $k$~\citep{Learning_with_Kernels}. Denote the distribution of an $\mathcal{X}$-valued random variable by $P$, then its kernel mean embedding is
\begin{equation}\label{eq:KME_def}
\mu_k(P)\coloneqq \int_\mathcal{X} k(x,\cdot)\diff P(x)
\end{equation}
By the reproducing property, $\forall f\in \H_{k}, \langle f, \mu_k(P)\rangle = \E_{P}[f(X)]$, which demonstrates the notion of mean.
By using characteristic kernels~\citep{Fukumizu07}, it was proved that no information about the distribution $P$ would be lost after mapping to $\mu_k(P)$. Precisely, $\lVert \mu_k(P)-\mu_k(Q) \rVert_{\mathcal{H}_k}=0$ is equivalent to $P=Q $. This property makes KME a theoretically sound method to represent a distribution. An example of characteristic kernel is the Gaussian kernel
\begin{equation} \label{eq:gaussian_kernel}
k(x,x^\prime)=\exp(-\gamma \lVert x-x^\prime \rVert^2), \gamma>0.
\end{equation}
In learning tasks, we often have no access to the true distribution $P$, and consequently to the true embedding $\mu_k(P)$. Therefore, the common practice is to use examples $X=\{x_n\}_{n=1}^N\sim P^N$, which constructs an empirical distribtuion $P_X$, to approximate \eqref{eq:KME_def}:
\begin{equation}\label{eq:KME_empirical}
\widehat{\mu_k}(P_X)\coloneqq \frac{1}{N}\sum_{n=1}^{N} k(x_n,\cdot)
\end{equation}
If all functions in $\mathcal{H}_k$ are bounded and the examples are drawn i.i.d., the empirical KME $\widehat{\mu_k}(P_X)$ converges to the true KME $\mu_k(P)$ at rate $O(1/\sqrt{N})$, measured by the RKHS norm \cite[Theorem 1]{Lopez15}.
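The RKHS distance between empirical embeddings can be evaluated purely through kernel evaluations, which is what later allows a new task to be matched against uploaded specifications. A minimal sketch, assuming a Gaussian kernel and using only pairwise kernel sums, is given below.
\begin{verbatim}
import numpy as np

def gaussian_gram(X, Z, gamma=1.0):
    # Pairwise Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kme_dist2(X, Z, gamma=1.0):
    # Squared RKHS distance between the empirical KMEs of samples X and Z.
    return (gaussian_gram(X, X, gamma).mean()
            + gaussian_gram(Z, Z, gamma).mean()
            - 2 * gaussian_gram(X, Z, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))     # sample from one distribution
Z = rng.normal(0.5, 1.0, size=(150, 2))     # sample from a shifted distribution
print(kme_dist2(X, Z))
\end{verbatim}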
\subsection{Reduced Set Construction}
Reduced set methods were first proposed to speed up SVM predictions~\citep{SVM_RS} by reducing the number of support vectors, and soon were found useful in kernel mean embeddings~\citep{TNN99_RS} to handle storage and/or computational difficulties.
The empirical KME $\widehat{\mu_k}(P_X)$ is an approximation of true KME $\mu_k(P)$, requiring all the raw examples. It is known that we can approximate the empirical KME with fewer examples. Reduced set methods find a weighted set of points $R=(\bm{\beta},Z)=\{(\beta_m,z_m) \}_{m=1}^M$ in the input space to minimize the distance measured by RKHS norm
\begin{equation}\label{eq:rs_obj}
\lVert\widehat{\mu_k}(P_X)-\widehat{\mu_k}(P_R) \rVert_{\mathcal{H}_k}^2=\Big\lVert\sum_{n=1}^{N}\frac{1}{N} k(x_n,\cdot)-\sum_{m=1}^{M} \beta_m k(z_m,\cdot) \Big\rVert_{\mathcal{H}_k}^2
\end{equation}
It is trivial to achieve perfect approximation if we are allowed to have the same number of points in the reduced set ($M=N$). Therefore we focus on the $M<N$ case by introducing additional freedom on real-value coefficients $\bm{\beta}$ and vectors $Z$. If points in $Z$ are selected from $X$, it is called reduced set selection. Otherwise, if $\{z_m\}$ are newly constructed vectors, it is called reduced set construction~\citep{ICCV_rs}. Since the latter does not expose raw examples, we apply reduced set construction to compute the specification in the upload phase of our proposal.
\subsection{Kernel Herding}
The kernel herding algorithm is an infinite-memory deterministic process that learns to approximate a distribution with a collection of examples~\citep{kernel_herding}. Suppose we want to draw examples $X=\{x_n\}_{n=1}^N$ from distribution $P$, but the probability distribution function of $P$ is unknown. Given the kernel mean embedding $\mu_k(P)$ of $P$, assume $k(x,x)$ is bounded for all $x\in \X$ and that the further restrictions on finite-dimensional discrete state spaces~\citep{Welling09} hold; then kernel herding will iteratively draw an example in terms of greedily reducing the following error at every iteration:
\begin{equation} \label{eq:herding_obj}
\lVert\widehat{\mu_k}(P_X)-\mu_k(P)\rVert^2_{\H_k}=\Big\lVert \frac{1}{N}\sum_{n=1}^N k(x_n,\cdot) -\mu_k(P)\Big\rVert^2_{\H_k}
\end{equation}
A remarkable result of kernel herding is that it decreases the squared error in~\eqref{eq:herding_obj} at a rate $O(1/N)$, which is faster than generating independent identically distributed random samples from $P$ at a rate $O(1/\sqrt{N})$.
Comparing with~\eqref{eq:rs_obj} in last section, if we set $\mu_k(P)=\widehat{\mu_k}(P_R)$, kernel herding looks like an ``inverse'' operation of reduced set construction. Reduce set construction in~\eqref{eq:rs_obj} is ``compressing'' the KME, while kernel herding in~\eqref{eq:herding_obj} is ``decompressing'' the information in reduced KME if $N$ is large. We will apply kernel herding in the deployment phase to help recover the information in reduced KMEs.
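As a concrete illustration, the sketch below herds samples from a weighted reduced set by a greedy search over a finite candidate pool. The greedy rule used here is the standard kernel herding update; restricting the maximization to a finite candidate set is a simplification made only to keep the sketch short.
\begin{verbatim}
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_herding(beta, Z, candidates, n_samples, gamma=1.0):
    # Greedily draw points whose empirical KME tracks the reduced KME
    # mu(.) = sum_m beta_m k(z_m, .), searching over a finite candidate pool.
    mu_vals = gaussian_gram(candidates, Z, gamma) @ beta    # mu at each candidate
    running = np.zeros(len(candidates))                     # sum_j k(., x_j) so far
    picked = []
    for t in range(n_samples):
        idx = int(np.argmax(mu_vals - running / (t + 1)))
        picked.append(candidates[idx])
        running += gaussian_gram(candidates, candidates[idx:idx + 1], gamma)[:, 0]
    return np.array(picked)

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 2))                  # reduced-set points
beta = np.full(8, 1.0 / 8)                   # reduced-set weights
pool = rng.normal(size=(500, 2))             # candidate pool for the greedy search
print(kernel_herding(beta, Z, pool, n_samples=20).shape)
\end{verbatim}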
\section{Framework}
In this section, we first formalize our problem setting with minimal notations, then show how to construct RKME in the upload phase and how to use RKME in the deployment phase.
\subsection{Problem Formulation}\label{sec:formulation}
Suppose there are in total $c$ providers in the upload phase; they build learnwares on their own tasks and generously upload them to a pool for future users. Each of them has a private local dataset $S_i=\{(x_n,y_n)\}_{n=1}^{N_i}$, which reflects the task $T_i$. Task $T_i$ is a pair $(P_i,f)$ defined by a distribution $P_i$ on input space $\X$ and a global optimal rule function $f:\X \rightarrow \Y$,
\begin{equation}
\begin{split}
\forall i \in [c], \forall (x,y)\in S_i,f(x)=y.
\end{split}
\end{equation}
All providers are competent, and the local datasets are sufficient to solve their tasks. Formally speaking, their models $\widehat{f}_i$ enjoy a small error rate $\epsilon>0$ with respect to a certain loss function $L$ on their task distribution $P_i$:
\begin{equation} \label{eq:small-loss}
\forall i \in [c], \mathcal{L}(P_i,f,\widehat{f}_i)=\mathds{E}_{x\sim P_i }\big[L(\widehat{f}_i(x),f(x))\big]\leq \epsilon.
\end{equation}
With a slight abuse of notation, here $L:\mathcal{Y}\times \mathcal{Y}\rightarrow \mathbb{R}^+$ can be either a regression loss or classification loss. Since tasks $\{T_i\}_{i=1}^c$ are equipped with low-error pre-trained models, they are referred to as solved tasks throughout this paper.
In the deployment phase, a new user wants to solve her current task $T_t$ with only unlabeled testing data $x\sim P_t$. Thus her mission is to learn a good model $\widehat{f}_t$ which minimizes $\mathcal{L}(P_t,f,\widehat{f}_t)$, utilizing the information contained in pre-trained models $\{\widehat{f}_i\}_{i=1}^c$.
This problem seems easy at the first glance. A naive reasoning is: since all the solved tasks share the same rule function $f$, and each $\widehat{f}_i$ is a low-error estimate of $f$, any of them is a good candidate for $\widehat{f}_t$. However, this is not the case because no assumption between $P_i$ and $P_t$ has been made so far. In an extreme case, the support of $P_t$ may not be covered by the union support of $P_i$, therefore there exist areas where all $\widehat{f}_i$'s can fail.
To make this concrete, consider the following example. The global rule function is a 4-class classifier $f:\X \rightarrow \{a,b,c,d\}$. There are two providers equipped with very ``unlucky'' distributions. One's local dataset only contains points with two classes $\{a,b\}$, and the other only sees points labeled $\{b,c\}$. They learn zero-error local classifiers $\{\widehat{f}_1,\widehat{f}_2\}$, which are perfect for their own tasks and uploaded to the public pool. Then in the deployment phase facing current task $T_t$, suppose all points drawn from $P_t$ are actually labeled class $d$ according to $f$. In this unfortunate case, both pre-trained models $\{\widehat{f}_1,\widehat{f}_2\}$ suffer from 100\% error on $T_t$.
The above example demonstrates that it is difficult to have a low-risk model $\widehat{f}_t$ on the current task without making any assumptions on $P_t$. To this end, we propose two realistic assumptions to model relationships between the current and solved tasks.
\textbf{Task-recurrent assumption}: The first type of assumption is that the distribution of the current task matches one of the solved tasks. The current task $T_t$ is said to be task-recurrent from the solved tasks $\{T_i\}_{i=1}^c$ if there exists $i\in [c]$, such that $P_t = P_i$.
\textbf{Instance-recurrent assumption}: The second type of assumption is that the distribution of the current task is a convex mixture of solved tasks, i.e. $P_t = \sum_{i=1}^c w_i P_i$, where $\bm{w}=(w_1,\cdots,w_c)\in \Delta^c$ lies in a unit simplex.
The second assumption is weaker as task-recurrent is a special case for instance-recurrent by setting $\bm{w}$ at a vertex of the unit simplex. However, if we are told that the first assumption holds a priori, it is expected to achieve better performance on the current task.
\subsection{Upload Phase}
In this section, we describe how to compute the reduced KME specification to summarize provider $i$'s local dataset $S_i$ in the upload phase. To lighten the notations, we focus on one provider and temporarily drop the subscript $i$.
Given a local dataset $S=\{(x_n,y_n)\}_{n=1}^N$, where $x_n \sim P$, we now use the empirical KME to map the empirical distribution defined by $X=\{x_n\}_{n=1}^N$ with a valid kernel function $k$. The empirical KME is $\widehat{\mu_k}(P_X)$ as defined in \eqref{eq:KME_empirical}.
Then our mission is to find the reduced set minimizing \eqref{eq:rs_obj}. Denote $\bm{\beta}=(\beta_1,\cdots,\beta_M)$ and $Z=\{z_1,\cdots,z_M\}$, expanding \eqref{eq:rs_obj} gives
\begin{equation}\label{eq:rs_obj_expand}
F(\bm{\beta},Z)=\sum_{n,m=1}^N \frac{1}{N^2} k(x_n,x_m)
+\sum_{n,m=1}^M \beta_n \beta_m k(z_n,z_m) -2 \sum_{n=1}^N \sum_{m=1}^M \frac{\beta_m}{N} \ k(x_n,z_m).
\end{equation}
We adopt the alternating optimization to minimize \eqref{eq:rs_obj_expand}.
\textbf{Fix $Z$ update $\bm{\beta}$.} Suppose vectors in $Z$ are fixed, setting $\frac{\partial F(\bm{\beta},Z)}{\partial \bm{\beta}}=0$ obtains the closed-form solution of $\bm{\beta}$:
\begin{equation} \label{eq:rebeta}
\bm{\beta}=K^{-1}C,
\end{equation}
where
\begin{equation*}
K \in \mathbb{R}^{M\times M}, K_{nm}=k(z_n,z_m), C \in \mathbb{R}^{M\times 1}, C_n=\frac{1}{N}\sum_{j=1}^N k(z_n,x_j).
\end{equation*}
\textbf{Fix $\bm{\beta}$ update $Z$.} When $\bm{\beta}$ is fixed, $\{z_1,\cdots,z_M\}$ in $Z$ are independent in \eqref{eq:rs_obj_expand}, therefore we can iteratively run gradient descent on each $z_m$ as
\begin{equation} \label{eq:re_z}
z_m^{(t)}=z_m^{(t-1)}-\eta \frac{\partial F(\bm{\beta},Z)}{\partial z_m}.
\end{equation}
The optimization is summarized in Algorithm~\ref{alg:rs}. If the step size $\eta$ is small, the objective value $F(\bm{\beta},Z)$ will decrease monotonically at both steps, and finally converges.
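For concreteness, we include a minimal NumPy sketch of this alternating optimization below. It is only an illustration under simplifying assumptions: the helper names are ours, the $k$-means initialization of Algorithm~\ref{alg:rs} is replaced by random subsampling, and a tiny ridge term is added before inverting $K$ for numerical stability.
\begin{verbatim}
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def reduced_kme(X, M=10, T=50, eta=0.1, gamma=0.01):
    """Construct a reduced KME (beta, Z) for the sample X with an RBF kernel."""
    N, d = X.shape
    # stand-in for the k-means initialization of the reduced set
    Z = X[np.random.choice(N, M, replace=False)].copy()
    beta = np.full(M, 1.0 / M)
    for _ in range(T):
        # fix Z, update beta in closed form: beta = K^{-1} C
        K = rbf_kernel(Z, Z, gamma=gamma)                  # (M, M)
        C = rbf_kernel(Z, X, gamma=gamma).mean(axis=1)     # (M,)
        beta = np.linalg.solve(K + 1e-8 * np.eye(M), C)    # small ridge for stability
        # fix beta, take one gradient step on every z_m (RBF gradients written out)
        for m in range(M):
            kz = rbf_kernel(Z[m:m+1], Z, gamma=gamma)[0]   # k(z_m, z_q) for all q
            kx = rbf_kernel(Z[m:m+1], X, gamma=gamma)[0]   # k(z_m, x_n) for all n
            grad = 2 * beta[m] * ((beta * kz * -2 * gamma)[:, None] * (Z[m] - Z)).sum(0)
            grad -= 2 * beta[m] / N * ((kx * -2 * gamma)[:, None] * (Z[m] - X)).sum(0)
            Z[m] -= eta * grad
    return beta, Z
\end{verbatim}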
After the optimization, each provider uploads her model $\widehat{f}_i$, paired with RKME specification $\Phi_i$ (represented by $\bm{\beta}$ and $Z$), into the learnware pool. Raw data examples are inaccessible after the construction by design. Differential privacy can be further ensured by applying techniques in \cite{Priave_Release}, which is an interesting issue but out of our scope.
An illustration of this phase is presented in Fig.~\ref{fig:upload_phase}. In this illustration, 3 providers upload pre-trained binary classification models and computed RKMEs into the public learnware pool. They are unaware of each other, and their pre-trained models disagree on many areas. The RKMEs ($\Phi_1,\Phi_2,\Phi_3$) are score functions in the raw feature space (denoted by contours, deeper means higher), and also points in the RKHS (denoted by points in a cloud). There is no optimal way to ensemble these models, but the RKME specifications allow future users to appropriately reuse them in the deployment phase.
\begin{algorithm}[htb]
\caption{Reduced KME Construction}
\label{alg:rs}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{input:}}
\renewcommand{\algorithmicensure}{\textbf{output:}}
\REQUIRE~~\\
Local dataset $X=\{x_n\}_{n=1}^N$, kernel function $k$, size of reduced set $M$, iteration parameter $T$
\ENSURE~~\\
Reduced KME $\Phi(\cdot)=\sum_{m=1}^{M}\beta_m k(z_m,\cdot)$
\renewcommand{\algorithmicrequire}{\textbf{procedure:}}
\REQUIRE~~\\
\STATE Initialize $z^{(0)}_m$ by running $k$-means on $X$, where $k=M$
\FOR{$t=1:T$}
\STATE Update $\bm{\beta}$ by \eqref{eq:rebeta}
\STATE Update each $z_m^{(t)}$ by \eqref{eq:re_z}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth]{./figure/upload_phase.pdf}
\caption{An illustration of the upload phase.}\label{fig:upload_phase}
\end{figure*}
\subsection{Deployment Phase}
In this section, we describe how to use RKME to identify useful models in the learnware pool for the current task. Algorithm~\ref{alg:deploy} shows the overall deployment procedure. As we mentioned in Section~\ref{sec:formulation}, the procedure treats two different recurrent assumptions separately.
\begin{algorithm}[tb]
\caption{Deployment Procedure}
\label{alg:deploy}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{input:}}
\renewcommand{\algorithmicensure}{\textbf{output:}}
\REQUIRE~~\\
Current task test data $X=\{x_n\}_{n=1}^{N}$, a pool of pre-trained models $\{\widehat{f}_i\}_{i=1}^c$, RKMEs $\{\Phi_i\}_{i=1}^c$
\ENSURE~~\\
Prediction $Y=\{y_n\}_{n=1}^{N}$
\renewcommand{\algorithmicrequire}{\textbf{procedure:}}
\REQUIRE~~\\
\IF{task-recurrent assumption}
\STATE $\Phi_t=\sum_{n=1}^{N}\frac{1}{N} k(x_n,\cdot)$
\STATE $i^*=\argmin_i \big\lVert \Phi_t-\Phi_i \big\rVert_{\mathcal{H}_k}^2$ \label{deploy:min_dist}
\STATE $Y=\widehat{f}_{i^*}(X)$ \label{deploy:task_re_predict}
\ENDIF
\IF{instance-recurrent assumption}
\STATE Estimate $\widehat{\bm{w}}$ as \eqref{eq:w_solution} \label{deploy:ira_start}
\STATE Initialize the mimic sample set $S=\emptyset$ \label{deploy:sampling_start}
\WHILE{$|S|$ is not big enough}
\STATE Sample a provider index $i$ by weight $\widehat{w}_i$ \label{deploy:pick_provider}
\STATE Sample an example $x$ by kernel herding as~\eqref{eq:herding_next} \label{deploy:herding}
\STATE $S=S\cup \{(x,i)\}$
\ENDWHILE \label{deploy:sampling_end}
\STATE Train a selector $g$ on mimic sample $S$
\FOR{$n=1:N$}
\STATE $i^*=g(x_n)$\label{deploy:phi_x_start}
\STATE $y_n=\widehat{f}_{i^*}(x_n)$ \label{deploy:phi_x_end}
\ENDFOR \label{deploy:ira_end}
\ENDIF
\end{algorithmic}
\end{algorithm}
\subsubsection{Task-recurrent assumption}
When the task-recurrent assumption holds, which means the current distribution matches one of the distributions solved before, our goal is to find out which one fits the best. In Line \ref{deploy:min_dist} of Algorithm \ref{alg:deploy}, we measure the RKHS distance between the testing mean embedding and reduced embeddings in the pool, and figure out the model which was trained on the closest data distribution. Then in Line \ref{deploy:task_re_predict}, we apply the matching model $\widehat{f}_{i^*}$ to predict all the points.
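For illustration, the RKHS distance in Line~\ref{deploy:min_dist} can be computed with kernel evaluations only. The following sketch assumes an RBF kernel and RKMEs stored as $(\bm{\beta}_i, Z_i)$ pairs; the function names are ours and not part of any released implementation.
\begin{verbatim}
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def rkhs_dist_sq(X_test, beta, Z, gamma=0.01):
    """Squared RKHS distance between the empirical test embedding and an RKME."""
    N = X_test.shape[0]
    t_t = rbf_kernel(X_test, X_test, gamma=gamma).sum() / N ** 2
    t_z = (rbf_kernel(X_test, Z, gamma=gamma) @ beta).sum() / N
    z_z = beta @ rbf_kernel(Z, Z, gamma=gamma) @ beta
    return t_t - 2 * t_z + z_z

def task_recurrent_predict(X_test, rkmes, models, gamma=0.01):
    """rkmes: list of (beta_i, Z_i) specifications; models: pre-trained predictors."""
    dists = [rkhs_dist_sq(X_test, b, Z, gamma) for (b, Z) in rkmes]
    i_star = int(np.argmin(dists))
    return models[i_star].predict(X_test)
\end{verbatim}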
\subsubsection{Overview of instance-recurrent assumption}
When the instance-recurrent assumption holds, which means no single pre-trained model can handle all the testing points, our goal is to determine which model is the most suitable for each instance. The general idea is that we ``mimic'' the test distribution by weighting existing distributions first, then ``recover'' enough data points and learn a model selector on them. Finally, the selector predicts the suitable model $\widehat{f}_i$ for each testing point.
\subsubsection{Estimate mixture weights}
Let us see how to estimate the mixture weights first. By instance-recurrent assumption, we have $P_t = \sum_{i=1}^c w_i P_i$, which implies
\begin{equation}\label{eq:mixture_model}
\mu_k(P_t) = \sum\nolimits_{i=1}^c w_i \mu_k(P_i).
\end{equation}
Let $\{x_n\}_{n=1}^N\sim P_t$ be the examples from the current task.
To estimate the weights $\widehat{\bm{w}}$, we aim to minimize:
\begin{equation*}\label{eq:mixture-dist-loss}
\min_{\bm{w}}\Big \lVert \frac{1}{N}\sum_{n=1}^N k(x_n,\cdot)- \sum_{i=1}^c w_i \Phi_i(\cdot) \Big \rVert^2_{\H_k},
\end{equation*}
which is similar to \eqref{eq:rs_obj}, thus the solution $\widehat{\bm{w}}$ is similar to \eqref{eq:rebeta}:
\begin{equation}\label{eq:w_solution}
\widehat{\bm{w}} = H^{-1}C,
\end{equation}
where
\begin{equation*}
H\in \mathbb{R}^{c\times c},H_{ij}=\langle \Phi_i, \Phi_j\rangle, C\in \mathbb{R}^{c\times 1},C_i=\frac{1}{N}\sum_{n=1}^N \Phi_i(x_n).
\end{equation*}
The $\widehat{\bm{w}}$ measures the weight of each provider's distribution. Given the weights, we are able to unbiasedly pick a provider $i$ in Line~\ref{deploy:pick_provider} of Algorithm~\ref{alg:deploy}, which is the first step to mimic the testing distribution.
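A rough sketch of this estimation step is given below. The ridge term and the clipping of $\widehat{\bm{w}}$ back onto the simplex are practical safeguards that we add for illustration; they are not part of \eqref{eq:w_solution}.
\begin{verbatim}
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def estimate_mixture_weights(X_test, rkmes, gamma=0.01):
    """rkmes: list of (beta_i, Z_i) specifications; returns estimated weights."""
    c, N = len(rkmes), X_test.shape[0]
    H, C = np.zeros((c, c)), np.zeros(c)
    for i, (bi, Zi) in enumerate(rkmes):
        C[i] = (rbf_kernel(X_test, Zi, gamma=gamma) @ bi).mean()  # (1/N) sum_n Phi_i(x_n)
        for j, (bj, Zj) in enumerate(rkmes):
            H[i, j] = bi @ rbf_kernel(Zi, Zj, gamma=gamma) @ bj   # <Phi_i, Phi_j>
    w = np.linalg.solve(H + 1e-8 * np.eye(c), C)
    w = np.clip(w, 0, None)   # keep the weights non-negative before renormalizing
    return w / w.sum()
\end{verbatim}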
\subsubsection{Sampling from RKME}
This subsection explains how to implement Line~\ref{deploy:herding} of Algorithm~\ref{alg:deploy}, i.e., sample examples from the distribution $P_i$ with the help of RKME $\Phi_i$, by applying kernel herding techniques~\citep{kernel_herding}.
For ease of understanding, we temporarily drop the subscript $i$, slightly abuse the notation $t$ as the iteration number here, and rewrite the iterative herding process in~\cite{herding_thesis} via our notations:
\begin{equation}\label{eq:herding_next}
x_{T+1}=\begin{cases}
\argmax_{x\in \X} \Phi(x), &\text{if $T=0$} \\
\argmax_{x\in \X} \Phi(x)-\frac{1}{T+1}\sum_{t=1}^{T}k(x_t,x), &\text{if $T\geq 1$}.
\end{cases}
\end{equation}
where $x_{T+1}$ is the next example we want to sample from $P$ when $\{x_t\}_{t=1}^T$ have already been sampled. Proposition 4.8 in~\cite{herding_thesis} shows that the following error $\mathcal{E}_T$ decreases as $O(1/T)$:
\begin{equation*}
\mathcal{E}_T=\Big\lVert \frac{1}{T}\sum_{t=1}^T k(x_t,\cdot) -\Phi\Big\rVert^2_{\H_k}.
\end{equation*}
Therefore, by iteratively sampling as in~\eqref{eq:herding_next}, we will eventually have a set of examples drawn from $P_i$. Combined with unbiased sampling from providers (Line~\ref{deploy:pick_provider} of Algorithm~\ref{alg:deploy}), a labeled sample set $S\sim \sum \widehat{w}_i P_i$ is constructed.
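As an illustration, when the $\argmax$ over $\X$ in \eqref{eq:herding_next} is replaced by a search over a finite candidate pool (one practical simplification among several possible implementations), herding from an RKME can be sketched as follows.
\begin{verbatim}
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def herd_from_rkme(beta, Z, candidates, T=100, gamma=0.01):
    """Greedily pick T points from `candidates` approximating the RKME (beta, Z)."""
    phi = rbf_kernel(candidates, Z, gamma=gamma) @ beta  # Phi(x) for every candidate
    K = rbf_kernel(candidates, candidates, gamma=gamma)
    chosen = []
    for t in range(T):
        if t == 0:
            scores = phi
        else:
            # Phi(x) - (1/(t+1)) * sum of kernels with the points chosen so far
            scores = phi - K[:, chosen].sum(axis=1) / (t + 1)
        chosen.append(int(np.argmax(scores)))
    return candidates[chosen]
\end{verbatim}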
\subsubsection{Final predictions and illustrations}
When all the previous steps are ready, it is quite easy to make the final prediction. The user will train a selector $g:\X\rightarrow \{1,\cdots,c\}$ on $S$ to predict which pre-trained model in the pool should be selected. The selector can be similar to pre-trained models except that its output space is the index of providers. The final prediction for a test instance $x$ will be $\widehat{f}_{i^*}(x)$, where $i^*=g(x)$ is the selected index.
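Putting the pieces together, Lines~\ref{deploy:ira_start}--\ref{deploy:ira_end} of Algorithm~\ref{alg:deploy} could be implemented roughly as below; the logistic-regression selector and the helper \texttt{herd\_fn} are illustrative choices, since any multi-class learner can play the role of $g$.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def instance_recurrent_predict(X_test, models, w_hat, herd_fn, n_mimic=500, seed=0):
    """herd_fn(i, n) should return n points herded from provider i's RKME."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n_mimic, w_hat / w_hat.sum())  # pick providers by weight
    X_mimic, y_mimic = [], []
    for i, n_i in enumerate(counts):
        if n_i > 0:
            X_mimic.append(herd_fn(i, n_i))
            y_mimic.append(np.full(n_i, i))
    X_mimic, y_mimic = np.vstack(X_mimic), np.concatenate(y_mimic)
    # train the selector g on the mimicked labeled sample
    # (assumes at least two providers receive nonzero weight)
    g = LogisticRegression(max_iter=1000).fit(X_mimic, y_mimic)
    idx = g.predict(X_test)
    return np.array([models[i].predict(x[None, :])[0] for i, x in zip(idx, X_test)])
\end{verbatim}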
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth]{./figure/deployment_phase.pdf}
\caption{An illustration of the deployment phase.}\label{fig:deployment_phase}
\end{figure*}
An illustration of the deployment phase including both task-recurrent and instance-recurrent assumptions is presented in Fig.~\ref{fig:deployment_phase}. It is easier to see the differences between assumptions in the RKHS plotted as a cloud. If task-recurrent, we are finding the closest RKME (which is $\Phi_1$ in that cloud) and only one pre-trained model will be used. If instance-recurrent, we are finding a combination of RKMEs (which is $\Phi_t=w_1 \Phi_1+w_2 \Phi_2$ in that cloud). Each $\Phi_i$ is like a basis in the reproducing kernel Hilbert space, and instance-recurrent assumption is actually saying that $\Phi_t$ can be decomposed by these bases. In that example, because $w_1>w_2$, more circles are generated than triangles in the mimicked sample. There is no square because $w_3=0$. The learned selector shows we should use $f_1$ in the left half and $f_2$ in the right half.
Since we are reusing pre-trained models without modifying them on the current task, our framework accepts any type of model from providers. They can be deep neural classifiers, SVMs with different kernel functions, or gradient-boosted tree regressors. As long as the input spaces are identical, these pre-trained models can even have different output spaces.
\section{Theory}
In this section, theoretical results are presented to rigorously justify the reusability of pre-trained models by using RKME specifications via our proposed way, either in task-recurrent assumption or in instance-recurrent assumption.
\subsection{Task-recurrent}
Here we introduce useful propositions regarding MMD and then show the guarantees for the task-recurrent assumption. For simplicity, we omit the subscript $k$ in $\mu_k$ and represent $\mu_k(P)$ as $\mu_P$.
\begin{Proposition}[Upper bound for empirical MMD]\label{prop:mmd-bound}
\textnormal{\cite[Theorem 7]{gretton2012kernel}} Let $P_i$ and $P_t$ be Borel probability measures defined on a topological space $\X$, and let the associated kernel function be bounded: $0\leq k(x,x')\leq K, \forall x,x'\in \X$. If $P_i = P_t$, then, with probability at least $1-2\delta$, the (biased) empirical \textnormal{MMD} in \eqref{eq:rs_obj} is bounded by:
\[
\frac{1}{2}\textnormal{MMD}_b(\widehat{P_i}, \widehat{P_t}) < \sqrt{\frac{K}{m}}+ \sqrt{\frac{K}{n}} +\sqrt{\frac{K(m+n)\log \frac{1}{\delta}}{2mn}}
\]
for arbitrary small $\delta>0$. And this bound converges to zero when $m,n\to \infty$.
\end{Proposition}
We then know that when task-recurrent assumption is satisfied, the empirical MMD is bounded by a small value. Using such an idea, we further bound the empirical estimation of the task-recurrent case.
\begin{Theorem}[Task-recurrent bound]\label{thm:task-recurrent} Assume $T_t$ is task-recurrent, i.e. $\exists i, P_t = P_i$. The learned model $\widehat{f}_i$ from each solved task satisfies
\eqref{eq:small-loss}.
Assume the loss function $L(\widehat{f}_i(x),f(x)) \in \H_k$ is upper bounded, $|L(\widehat{f}_i(x),f(x))|<K$ for all $i$. The empirical \textnormal{MMD} between the distribution $P_i$ and the current distribution $P_t$ can be estimated as
\[
\widehat{\textnormal{MMD}}_b^{(i)} = \Big\lVert\sum_{n=1}^{N}\frac{1}{N} k(x_n,\cdot)-\Phi_i(\cdot) \Big\rVert_{\mathcal{H}_k}^2
\]
where $x_n\sim P_t$.
Further assume that the minimum empirical \textnormal{MMD} is bounded above:
$\min_i \widehat{\textnormal{MMD}}_b^{(i)} \leq \eta$.
The selected model for current task is
$\widehat{f}_t = \widehat{f}_{i^*}$ s.t.
\[
i^{\ast}=\argmin_i \widehat{\textnormal{MMD}}_b^{(i)}.
\]
The finite sample loss satisfies:
\begin{equation}\label{eq:task-recurrent-bound}
\widehat{\mathcal{L}}(P_t,f,\widehat{f}_t)=\frac{1}{N}\sum_{x_n\sim P_t }\big[L(\widehat{f}_t(x_n),f(x_n))\big]\leq \epsilon + K\eta + O(m,n)
\end{equation}
where $O(m,n)=O(1/\sqrt{n}+1/\sqrt{m}) \to 0$ as $m,n \to \infty$.
\end{Theorem}
\begin{spacelessprf}
When task-recurrent assumption holds, our method either correctly identifies the recurrent pre-trained model or not. Let $j$ be the correct recurrent index. If $i^{\ast} = j$, the result follows directly from \eqref{eq:small-loss}. If $i^{\ast} = i \neq j$, the loss function is
$$
L(\widehat{f}_i(x), f(x)) = L_i(x) = \langle L_i, k(x,\cdot) \rangle.
$$
We then can represent the error of each model in the form of KME. For the current task $T_t$:
$$\mathds{E}_{x\sim P_t } \big[L_t(x)\big] = \langle L_t, \mu_{P_t}\rangle = \langle L_i, \mu_{P_j}\rangle $$
since $P_t = P_j$ and the selected model $\widehat{f}_t = \widehat{f}_i$. However, the correct matching should be $\widehat{f}_t = \widehat{f}_j$. Hence, we are applying model $\widehat{f}_i$ on distribution $P_j$. The empirical loss in $T_t$ is:
$$
|\langle L_i, \widehat{\mu}_{P_j}\rangle| \leq
\underbrace{|\langle L_i, \widehat{\mu}_{P_j}\rangle - \langle L_i, \widehat{\mu}_{P_i}\rangle|}_{(A)} + \underbrace{|\langle L_i, \widehat{\mu}_{P_i}\rangle|}_{(B)}
$$
We then bound $(A)$ and $(B)$ separately.
$$(B)\leq |\langle L_i, \mu_{P_i}\rangle|+ |\langle L_i, \widehat{\mu}_{P_i} - \mu_{P_i}\rangle|\leq \epsilon + O(1/\sqrt{n})$$
by \eqref{eq:small-loss} and convergence rate for empirical \textnormal{MMD}.
$$(A) = |\langle L_i, \widehat{\mu}_{P_j} - \widehat{\mu}_{P_i}\rangle| \leq \|L_i\|\|\widehat{\mu}_{P_j} - \widehat{\mu}_{P_i} \|\leq K \eta + O(m,n)$$
To see the bound for empirical embeddings, $$\|\widehat{\mu}_{P_j} - \widehat{\mu}_{P_i} \| \leq \underbrace{\|\widehat{\mu}_{P_t} - \widehat{\mu}_{P_i}\|}_{\leq \eta} + \underbrace{\|\widehat{\mu}_{P_j} - \widehat{\mu}_{P_t} \| }_{=O(m,n)}$$
the first term is by assumption and the second term is because $P_t=P_j$:
\begin{equation*}
\|\widehat{\mu}_{P_j} - \widehat{\mu}_{P_t}\| \leq\|\widehat{\mu}_{P_j} - \mu_{P_j} \|+\|\widehat{\mu}_{P_t} - \mu_{P_t} \| = O\Big(\frac{1}{\sqrt{m}}+\frac{1}{\sqrt{n}}\Big),
\end{equation*}
which completes the proof.
\end{spacelessprf}
The theorem shows that, in the task-recurrent setting, the test error from our procedure is bounded by the solved task error $\epsilon$ and the approximation error $\eta$ of empirical KME.
\subsection{Instance-recurrent}
In this subsection, we present the guarantee of our proposal when instance-recurrent assumption $P_t=\sum w_i P_i$ holds. Our analysis is based on the following four steps:
\begin{enumerate}
\item The estimation of $\{\widehat{w}_i\}$ is close to the true mixture weight $\{w_i\}$.
\item The quality of the mimicked sample set generated by kernel herding is good.
\item The error of the learned selector $g$ is bounded.
\item The error of our estimated predictor $\widehat{f}_t$ is bounded.
\end{enumerate}
\begin{Lemma}[$w$-estimation bound]\label{lem:w_estimation}
Consider the weights estimation procedure stated in~\eqref{eq:w_solution}: $\widehat{\bm{w}} = H^{-1}C,$
where
\begin{equation*}
H\in \mathbb{R}^{c\times c},H_{ij}=\langle \Phi_i, \Phi_j\rangle, C\in \mathbb{R}^{c\times 1},C_i=\frac{1}{N}\sum_{n=1}^N \Phi_i(x_n),
\end{equation*}
and consider the population embedding $\mu_k(P_t)$ for $\widetilde{\bm{w}} = H^{-1}\widetilde{C}$
where
$$\widetilde{C}\in \mathbb{R}^{c\times 1},\widetilde{C}_i=\langle \Phi_i, \mu_k(P_t) \rangle.$$
Assume $H$ is non-degenerate, i.e. the smallest eigenvalue is bounded below by a positive real number $\lambda$. Then,
$$\|\mu_k(P_t) - \sum_{i=1}^c \widehat{w}_i\Phi_i(\cdot) \| = O(\frac{1}{\sqrt{M}} + \frac{1}{\sqrt{N}})$$
\end{Lemma}
\begin{proof}
The proof starts from the instance-recurrent assumption where $\mu_k(P_t) = \sum_i w_i \mu_k(P_i) $. Rewrite
\begin{align}
\|\mu_k(P_t) - \sum_{i=1}^c \widehat{w}_i\Phi_i \|
\leq& \|\sum_i w_i \mu_k(P_i) - \sum_i \widetilde{w}_i \Phi_i \| \label{eq:term1} \\
&+\|\sum_i \widetilde{w}_i \Phi_i - \sum_i \widehat{w}_i \Phi_i\| \label{eq:term2} \\
=&O(\frac{1}{\sqrt{M}} + \frac{1}{\sqrt{N}})
\end{align}
\eqref{eq:term1} goes to $0$ since $\Phi_i$ is a $O(1/\sqrt{M})$-consistent estimator of $\mu_k(P_i)$.
Let $\widetilde{H}\in \mathbb{R}^{c\times c},\widetilde{H}_{ij}=\langle \Phi_i, \mu_k(P_j)\rangle$. $\|H-\widetilde{H}\|_F\leq c^2 O(\frac{1}{\sqrt{M}})$. As $c \ll M$, we have
$\widetilde{w} = H^{-1}\widetilde{H}w = w + H^{-1}(\widetilde{H} -H)w = w + O(\frac{1}{\sqrt{M}})$. As $\Phi_i \in \H$ is bounded, $\eqref{eq:term1} =O(\frac{1}{\sqrt{M}})$.
$\eqref{eq:term2}=\|\sum_i (\widetilde{w}_i-\widehat{w}_i)\Phi_i\|\leq K\|\widetilde{\bm{w}}-\widehat{\bm{w}}\|_F \leq K\lambda^{-1}\|C-\widetilde{C}\|_F \leq K^2\lambda^{-1}\|\mu_k(P_t)- \frac{1}{N}\sum_n k(x_n,\cdot)\| =K^2\lambda^{-1} O(\frac{1}{\sqrt{N}}) $, where $K<\infty$ since the kernel is bounded, $\lambda^{-1}<\infty$ since $H$ is non-degenerate, and
$$\|\mu_k(P_t)- \frac{1}{N}\sum_n k(x_n,\cdot)\|=O(\frac{1}{\sqrt{N}})$$
as $\{x_n\} \sim P_t$.
By such construction, we are able to see the difference, in terms of Frobenius norm, of the weight vector learned from true embedding $\widetilde{\bm{w}}$ versus from empirical embedding $\widehat{\bm{w}}$.
\end{proof}
To better understand the instance-recurrent case, for the mixture model in \eqref{eq:mixture_model}, we perceive the component assignment as a latent variable, $I$. Hence, by setting $P_i(x) = P(x|I=i)$ as the conditional distribution of $x$ given that $x$ comes from the $i$-th component, we can write the distribution of test data as:
$$P_t(x) = \sum_i w_i P_i(x) = \sum_i P(x|i)\Pi(I=i) = \sum_i P(x, i)$$
where $\Pi(I=i)$ is the marginal distribution of component assignment variable $I$. This corresponds to partition weights $w_i$, where $\sum_i w_i = 1$. We call the $P(x,i)$ the joint distribution of random variable pair $(X, I)$.
\begin{Proposition}\label{prop:herding_rate}
\cite[Proposition 4]{kernel_herding}
Let $p$ be the target distribution, $T$ be the number of samples generated from kernel herding, and $\hat{p}$ be the empirical distribution of the $T$ samples. For any $f\in \H$, the error $|\E[f]_p - \E[f]_{\hat{p}}| = O(T^{-1})$. Moreover, this bound holds uniformly: $\sup_{\|f\|\leq 1}|\E[f]_p - \E[f]_{\hat{p}}| = O(T^{-1})$. Thus $\|\mu_p - \mu_{\hat{p}}\|_{\H}= O(T^{-1})$.
\end{Proposition}
The proof can be found in~\cite{kernel_herding}, utilizing the Koksma--Hlawka inequality. Proposition~\ref{prop:herding_rate} shows that the convergence rate of the empirical embedding obtained from kernel herding is $O(T^{-1})$, which is faster than the convergence rate of the empirical mean embedding to its population version based on i.i.d. samples. Hence, sampling from $P(x,i)$ does not slow down the convergence of $\sum_i \hat{w}_i\Phi_i(\cdot)$ to $\sum_i w_i \mu_k(P_i)$.
\newline
\textbf{Learning classifier from samples:}
With the samples generated from the herding step, we train a selector $g$ via the following loss
$$
\mathcal{L}_c(\hat{P}(x,i),g) = \int L_c(g(x), i) \d\hat{P}(x,i)
$$
We do not have direct access to $P(x,i)$ as the selector is learned from the generated empirical samples.
We assume the loss $L_c \in \H$, and thus bounded, and that the training loss is small, i.e. $\mathcal{L}_c(P(x,i),g^{\ast}) \leq \varepsilon$ for $g^{\ast}= \argmin_g{\mathcal{L}_c(P(x,i),g)}$.
\begin{Lemma}\label{lem:classifier_error}
Let optimal selector $g^{\ast}= \argmin_g{\mathcal{L}_c(P(x,i),g)}$ and $\mathcal{L}_c(P(x,i),g^{\ast}) \leq \varepsilon$; let the estimated embedding from the deployment phase and \eqref{eq:w_solution} be $\sum_i\hat{w_i}\Phi_i(\cdot)$. Then, the population loss using the learned classifier $g$ is
$$
\mathcal{L}_c(P(x,i),g) = \int L_c(g(x), i) \d P(x,i)
\leq \varepsilon + O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}})$$
\end{Lemma}
\begin{proof}
Let $\hat{P}(x,i)$ be herding samples from $\sum_i \hat{w}_i \Phi_i(\cdot)$. Then
$$
\mathcal{L}_c(P(x,i),g) \leq
\underset{{\leq \varepsilon}}{\underbrace{\mathcal{L}_c(P(x,i),g^{\ast})}}
+ \langle L_c(g^{\ast}) , \mu_k(P) - \mu_k(\hat{P})\rangle.
$$
By Lemma \ref{lem:w_estimation} and Proposition \ref{prop:herding_rate}, we know $\mu_k(P) - \mu_k(\hat{P})$ approaches zero at rate $ O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}})$ in RKHS norm. As $L_c$ is a bounded function, the second term goes to zero at rate $ O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}})$, which completes the proof.
\end{proof}
Note that $L_c(g^{\ast})$ is a bounded function that only depends on $g^{\ast}$, as $P$ and $\hat{P}$ both represent the joint distribution of $(x,i)$. Alternatively, for discrete random variable $I$ that is finite, it is equivalent to see the embedding as $\mu_k(P) = \int k(x,\cdot) \otimes h(i,\cdot) \d P(x,i)$ where $h(i,\cdot)$ is the linear kernel w.r.t. $I$.
\begin{Theorem}[Instance-recurrent bound]
Assume for the source models, $$\forall i, \mathcal{L}(P_i,f, \hat{f}_i)=\mathds{E}_{x\sim P_i }\big[L(\widehat{f}_i(x),f(x))\big]\leq \epsilon,$$ where $L \in \H$ is bounded; the classification error of the trained classifier $g$ is small, i.e. $\mathcal{L}_c(P(x,i),g)\leq \varepsilon$, where $L_c \in \H$ is bounded. We further assume a Lipschitz condition between the loss used for the task and the loss used for training the classifier $g$: $\|L(f, \hat{f}_i) -L(f, \hat{f}_j)\| \leq \eta \| L_c(i,j) \| $ for some $\eta < \infty$. Then the RKME estimator $\hat{f}_t(x) = \hat{f}_{g(x)}(x)$ satisfies
$$\mathcal{L}(P_t,f,\hat{f}_t) \leq \epsilon + \varepsilon + O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}})$$
\end{Theorem}
\begin{proof}
The samples approximating $P_t$ are generated from estimated mean embedding via herding. Applying the result in Lemma \ref{lem:w_estimation} and Lemma \ref{lem:classifier_error}, $\widehat{w}_i\Phi_i(\cdot)$ are $\delta$-consistent and assignment error is $(\delta+\varepsilon)$-consistent, for $\delta =O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}}) $.
\begin{align*}
&\mathcal{L}(P_t,f, \hat{f}_t)=\int_{\X} L(f(x),\hat{f}_t(x)) \mathrm{d} P(x|i)p(i) \\
&= \sum_i w_i \int_{\X_i} L(f(x),\hat{f}_t(x))\mathrm{d} P(x|i)\\
&=\sum_i w_i \int_{\X_i} L(f(x),\hat{f}_{g(x)}(x)) \mathrm{d} P(x|i) \\
& \leq \sum_i w_i \int_{\X_i} L(f(x),\hat{f}_{i}(x)) \mathrm{d} P(x|i)
+ \sum_i w_i \int_{\X_i, g(x)\neq i} \hspace{-.9cm}L(f(x),\hat{f}_{g(x)}(x)) - L(f(x),\hat{f}_{i}(x)) \mathrm{d} P(x|i)\\
&\leq \sum_i w_i \epsilon + \eta \sum_i w_i \int_{\X_i, g(x)\neq i} \hspace{-0.9cm}L_c(g(x), i)\mathrm{d} P(x|i)\\
&\leq \sum_i w_i \epsilon + \eta \sum_i w_i \int_{\X_i} L_c(g(x), i)\mathrm{d} P(x|i)\\
&= \epsilon + \eta \int_{\X} L_c(g(x), i)\mathrm{d} P(x,i)\\
&\leq \epsilon + \eta \varepsilon + O(\frac{1}{\sqrt{N}} + \frac{1}{\sqrt{M}})
\end{align*}
where $\X_i = \{x: I=i\}$.
The third to last inequality holds as we restrict attention to the set where the component assignment is misclassified; using the Lipschitz condition, we bound the loss by the training loss of $g$.
The second to last inequality, which drops the restriction $g(x)\neq i$, holds because $L_c$ is a non-negative loss function.
The last inequality holds by Lemma \ref{lem:classifier_error}.
\end{proof}
\section{Related Work}
Domain adaptation~\citep{multisource_suvery} addresses the problem where the training examples (source domain) and the testing examples (target domain) come from different distributions. A common assumption is that the examples in the source domain are accessible when learning the target domain, while the learnware framework is designed to avoid such access at test time. Domain adaptation with multiple sources~\citep{DA_multiple08,DA_multiple18} is related to our problem setting. Their remarkable theoretical results clearly show that, given direct access to the distributions, a weighted combination of models can reach bounded risk on the target domain when the gap between distributions is bounded by R{\'{e}}nyi divergence~\citep{DA_UAI09}. Compared with this literature, it is the first time that the prediction is made by dynamic selection, which is capable of eliminating useless models and is more flexible for various types of models. Furthermore, density estimation is considered difficult in high-dimensional space, while our approach does not depend on an estimated density function but implicitly matches distributions via RKME for model selection.
Data privacy is a common concern in practice. For multi-participant settings like ours, multiparty learning~\citep{PP_Aggregation} and, more recently, a popular special case called federated learning~\citep{federated,federated_google} are related. Existing approaches for multiparty learning usually assume that each local dataset owner follows a predefined communication protocol, and they jointly learn one global model by continuously exchanging information with others or with a central party. Despite the success of that paradigm, such as Gboard presented in \cite{gboard}, we observe that in many real-world scenarios local data owners are unable to participate in such an iterative process because they have no continuous connection to others or to a central party. Our two-phase learnware framework avoids the intensive communication, which is preferable when each data owner has sufficient data to learn her own task.
Model reuse methods aim at reusing pre-trained models to help related learning tasks. In the literature, this is also referred to as ``learning from auxiliary classifiers''~\citep{Auxiliary} or ``hypothesis transfer learning''~\citep{Kuzborskij13,HTL_tranformation}. Generally speaking, there are two ways to reuse existing models so far. One is updating the pre-trained model on the current task, like fine-tuning neural networks. The other is training a new model with the help of existing models, e.g., via biased regularization~\citep{Tommasi14} or knowledge distillation~\citep{nec45,distillation}. Both ways assume all pre-trained models are useful as prior knowledge, without a specification describing the reusability of each model. Our framework shows the possibility of selecting suitable models from a pool by their reusabilities, which works well even when some existing models are ineffective for the current task.
These previous studies did not touch one of the key challenges of learnware~\citep{learnware}: given a pool of pre-trained models, how to judge whether there are some models that are helpful for the current task, without accessing their training data, and how to select and reuse them if there are. To the best of our knowledge, this paper offers the first solution.
\section{Experiments}
To demonstrate the effectiveness of our proposal, we evaluate it on a toy example, two benchmark datasets, and a real-world project at Huawei Technologies Co., Ltd about communication quality.
\subsection{Toy Example} \label{sec:toy}
In this section, we use a synthetic binary classification example including three providers to demonstrate the procedure of our method. This example recalls the intuitive illustration in Fig.~\ref{fig:upload_phase}\&\ref{fig:deployment_phase}, and we will provide the code in CodeOcean~\citep{codeocean}, a cloud-based computational reproducibility platform, to fully reproduce the results and figures.
Fig.~\ref{fig:toy_local_dataset} shows the problem setting. Circle, triangle, and square points are different local datasets from each provider. Each dataset is drawn from a mixture of two Gaussians. The means of these Gaussians are arranged around a circle denoted by the grey dashed line. Points inside the grey circle are labeled as the blue class, and points outside the circle are labeled as the yellow class, forming a binary classification problem. We emphasize that the local datasets are unobservable to others; they are plotted in the same figure only to save space. RBF kernel SVMs are used as pre-trained models.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_local_dataset.pdf}
\caption{Local datasets}\label{fig:toy_local_dataset}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_rs.pdf}
\caption{RKMEs}\label{fig:toy_rs}
\end{subfigure}
\caption{Upload phase. (a) Labeled three local private datasets owned by different providers. (b) Constructed points $\{z_m\}$ in the reduced set of KME, bigger marker means larger weight $\{\beta_m\}$. The deeper green contour means higher KME score.}\label{fig:preparation}
\end{figure}
The results of reduced set construction by running Algorithm \ref{alg:rs} are shown in Fig.~\ref{fig:toy_rs}. We set $M=5$ here, which is enough for approximating the empirical KME in this example. Different from the original empirical KME $\sum_{n=1}^N \frac{1}{N}k(x_n,\cdot)$, where all points contribute equally to the embedding, the constructed reduced KME $\sum_{m=1}^M \beta_m k(z_m,\cdot)$ introduced more freedom by using variable weights $\{ \beta_m\}$. In the figure, we use the size of markers to illustrate the value of weights. These reduced sets implicitly ``remember'' the Gaussian mixtures behind local datasets and serve as specifications to tell future users where each pre-trained model works well.
In the deployment phase, we evaluate both task-recurrent and instance-recurrent assumptions. In Fig.~\ref{fig:deployment_tra}, we draw test points from the same distribution as the ``circle'' dataset. As expected, our method successfully finds the match and predicts all the data with the pre-trained ``circle'' model.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_tra_unlabel.pdf}
\caption{Testing data}\label{fig:toy_ira_unlabel}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_tra_label.pdf}
\caption{Predictions}\label{fig:toy_tra_label}
\end{subfigure}
\caption{Deployment phase: task-recurrent. (a) Testing data when task-recurrent assumption holds. (b) Predictions, achieved accuracy 97\%.}\label{fig:deployment_tra}
\end{figure}
In the instance-recurrent setting, we set the mixture weight of (circle, triangle, square) to $\bm{w}=(0.7,0.3,0.0)$ and test our method. Our estimated mixture weight is $\widehat{\bm{w}}=(0.701,0.285,0.014)$, close to the ground truth. Given the accurately estimated mixture weight, we are able to generate a mimicked sample by kernel herding. It is clear in Fig.~\ref{fig:toy_ira_mimic} that the generated sample is similar to the testing data and comes with assigned labels. The weight of the square component is low but not zero, therefore there are still a few squares in the sample set. The learned selector divides the feature space into three regions, and all the testing points fall into the ``circle'' or ``triangle'' region. The predictions in Fig.~\ref{fig:toy_ira_label} achieve an accuracy of 92.5\%, and the errors are mainly made by the pre-trained models themselves, not by our selection.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_ira_unlabel.pdf}
\caption{Testing data}\label{fig:toy_ira_unlabel}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_ira_mimic.pdf}
\caption{Generated data}\label{fig:toy_ira_mimic}
\end{subfigure}
\begin{subfigure}[b]{0.229\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_ira_region.pdf}
\caption{Decision of selector}\label{fig:toy_ira_region}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\captionsetup{skip=1pt}
\includegraphics[width=\textwidth]{./figure/toy_ira_label.pdf}
\caption{Predictions}\label{fig:toy_ira_label}
\end{subfigure}
\caption{Deployment phase: instance-recurrent. (a) Testing data when instance-recurrent assumption holds. True mixture weight is $\bm{w}=(0.7,0.3,0.0)$. (b) Generated mimicked data by kernel herding with labels. (c) The decision regions of learned selector. (d) Predictions, achieved accuracy 92.5\%. The red line is the decision boundary of the learned selector.}\label{fig:deployment_ira}
\end{figure}
The toy example gives a visual demonstration of our two-phase framework. We can see from this example that both the inaccessibility of private training data and the reusability of pre-trained models are achieved. In the next section, we report results on two benchmark datasets.
\subsection{Benchmark} \label{sec:benchmark}
In this section, we evaluate our proposal on two widely used benchmark datasets: image dataset \texttt{CIFAR-100}~\citep{cifar100} and text dataset \texttt{20-newsgroup}~\citep{20newsgroup}.
\texttt{CIFAR-100} has 100 classes and they are grouped into 20 superclasses, and each superclass contains 5 classes. For example, the superclass ``flower'' includes \{orchid, poppy, rose, sunflower, tulip\}. It is natural to use this dataset to simulate our setting. We divide \texttt{CIFAR-100} into 20 local datasets, each having images from one superclass, and build 5-class local neural network classifiers on them.
\texttt{20-newsgroup} is a popular text classification benchmark and it has similar hierarchical structure as \texttt{CIFAR-100}. There are 5 superclasses \{comp, rec, sci, talk, misc\} and each is considered a local dataset for training local models in the upload phase.
Kernel methods usually cannot work directly on the raw-pixel level or raw-document level, therefore we use off-the-shelf deep models to extract meaningful feature vectors. For \texttt{CIFAR-100}, features are the outputs from the penultimate layer of ResNet-110.\footnote{Trained by running the command of ResNet-110 in \url{https://github.com/bearpaw/pytorch-classification/blob/master/TRAINING.md}} For \texttt{20-newsgroup}, an LSTM is built on GloVe~\citep{glove} word embeddings, and features are extracted from the global max-pooling layer. These feature vectors are used for RKME construction in the upload phase. Gaussian kernel as defined in \eqref{eq:gaussian_kernel} with $\gamma=0.01$ is used in both datasets, and the size of the reduced set is set to $M=10$, which is a tiny ratio of the original datasets.
We compare our method with a naive baseline MAX and a related method HMR~\citep{HMR}. MAX simply uses all the pre-trained models to predict each test instance and outputs the most confident predicted class. HMR incorporates a communication protocol which exchanges several selected key examples to update the models, and then makes predictions like MAX. In this comparison we allow HMR to exchange up to 1000 examples. All three methods use the same pool of pre-trained models. The instance-recurrent setting is simulated by randomly mixing testing data from different numbers of solved tasks. The mean accuracy over 10 runs of each setting is reported in Table~\ref{table:cifar100}\&\ref{table:newsgroup}, and the last row reports the non-private accuracy of a global model trained on the merged data.
\begin{table}
\caption{Results of \texttt{CIFAR-100} in accuracy(\%).}\label{table:cifar100}
\vspace{-0.2cm}
\centering
\begin{tabular}{c c c c c c}
\toprule
& \multicolumn{1}{c}{Task-recurrent} & \multicolumn{4}{c}{Instance-recurrent}\\
\#Mixing tasks& 1 & 2 & 5 & 10 & 20\\
\midrule
MAX & 43.00 & 42.10 & 41.51 & 41.62 & 41.44\\
HMR & 70.58 & 68.91 & 68.93 & 68.88 & \fontseries{b}\selectfont 68.81\\
Ours& \fontseries{b}\selectfont 86.22 & \fontseries{b}\selectfont 72.91 & \fontseries{b}\selectfont 72.57 & \fontseries{b}\selectfont 71.07 & 68.79\\
\midrule\midrule
Global&75.08& 73.24 & 73.31 & 71.86 & 73.24\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Results of \texttt{20-newsgroup} in accuracy(\%).}\label{table:newsgroup}
\vspace{-0.2cm}
\centering
\begin{tabular}{c c c c c c}
\toprule
& \multicolumn{1}{c}{Task-recurrent} & \multicolumn{4}{c}{Instance-recurrent}\\
\#Mixing tasks& 1 & 2 & 3 & 4 & 5\\
\midrule
MAX & 58.65 & 55.76 & 53.03 & 51.94 & 50.68\\
HMR & 72.01 & 72.19 & 70.86 & 70.53 & 70.09\\
Ours& \fontseries{b}\selectfont 83.13 & \fontseries{b}\selectfont 76.03 & \fontseries{b}\selectfont 75.10 & \fontseries{b}\selectfont 74.02 & \fontseries{b}\selectfont 72.68\\
\midrule\midrule
Global&72.06& 73.24 & 73.31 & 71.86 & 73.24\\
\bottomrule
\end{tabular}
\end{table}
It is clear that our method performs best by a large margin in the task-recurrent setting. Other methods cannot exploit the prior knowledge that the current task is identical to one of the solved tasks, while our minimum-MMD measure can successfully find the best-fitting pre-trained model.
In the instance-recurrent setting, ours is the best in most cases. We are even better than the non-private global model on the \texttt{20-newsgroup} dataset. This is possible because the global model is an ERM optimizer on the merged data, which is the best model for i.i.d. testing examples but is not adaptive to a changed, unknown distribution, while ours can estimate the mixing weights and adapt to a differently biased test distribution in the deployment phase. Ours is increasingly better as the number of mixing tasks becomes smaller, because we can preclude some impossible output classes by selecting the right pre-trained models.
Besides, we should keep in mind that the comparison is not entirely fair, because HMR and the global model are not fully privacy-preserving methods. Our proposal achieves better or competitive performance without exposing any raw data points.
Section \ref{sec:toy} and \ref{sec:benchmark} show results on classification problems. We then apply ours to a real regression problem.
\subsection{Real-World Project}
Communication quality is the key to user experience for a telecommunications company. We participated in an industrial project called \texttt{crystal-voice} at Huawei Technologies Co., Ltd. Huawei tested a novel technology, ``deep cover'', on base stations to improve the quality, but engineers observed that the quality gain varies because of differences in user behaviors and environments among stations. They want to predict how much gain can be obtained at a new base station, to decide whether it is profitable to deploy ``deep cover'' on it.
Every user covered by a base station is represented by a feature vector, and a real-valued quality gain. It is strictly forbidden to move users' information out of stations, but each station has enough data to build a strong local model and share it in a pool. Therefore, our proposal is a wise choice to handle this problem.
In the upload phase, a local ridge regression model is trained in each base station. We then construct RKME (set size $M=50$, Gaussian kernel $\gamma=0.5$) as the specification, and upload the models and specifications into a learnware pool. All the vectors in the specification are constructed ``pseudo'' users, protecting the raw information from thousands of users.
In the deployment phase, we run instance-recurrent procedure on a new base station. There are 8 anonymous base stations in total, therefore we test our method 8 times. At each time, we select one of them as the current task and the rest 7 as solved tasks.
Four methods are compared with ours: two model reuse baselines, RAND and AVG, and two transfer learning methods, KMM~\citep{KMM} and mSDA~\citep{MSDA}. RAND randomly selects one pre-trained model from other base stations to predict each user's gain. AVG averages the outputs of all regressors in the model pool as the prediction. KMM reweights source data to train a better model for the testing data. mSDA learns robust feature representations over all domains.
We should notice that the model reuse methods are private, but transfer learning methods are non-private because they need to see both testing and training data in the deployment phase. The mean results are reported in Table~\ref{table:crystal}. MAX and HMR in Section \ref{sec:benchmark} cannot be used for this regression task, while our framework is agnostic to the task and type of pre-trained models.
\begin{table}[htb]
\caption{Results on regressing quality gain of the \texttt{crystal-voice} project. ``$\downarrow$'' means the lower the better, ``$\uparrow$'' means the higher the better. ``Model reuse'' methods are private, while ``transfer'' methods are non-private.} \label{table:crystal}
\vspace{-0.2cm}
\setlength{\tabcolsep}{7.5pt}
\centering
\begin{tabular}{c c c c c c c}
\toprule
& & RMSE$\downarrow$ & 3p30$\uparrow$ & 5p30$\uparrow$ & 3f1$\uparrow$ \\
\midrule
\multirow{3}{*}{Model reuse}
& RAND & .0363 & .3730 & .4412 & .7320 \\
& AVG & .0326 & .4272 & .4535 & .7712 \\
& Ours & \fontseries{b}\selectfont .0279 & \fontseries{b}\selectfont .5281 & .5205 & \fontseries{b}\selectfont .8082 \\
\midrule\midrule
\multirow{2}{*}{Transfer}
& KMM & .0291 & .5018 & .5222 & .7911 \\
& mSDA & .0285 & .5105 & \fontseries{b}\selectfont .5324 & .8034 \\
\bottomrule
\end{tabular}
\end{table}
Our method not only outperforms the model reuse baselines in terms of root-mean-square error (RMSE), but is also superior on the other measurements required in the real business. ``3p30'' is the ratio of users whose gain value is above 3\% and whose prediction error is lower than 30\%. ``5p30'' is defined similarly. ``3f1'' is the F1-measure when users whose gain value is above 3\% are considered the positive class. Our method is even better than mSDA and KMM on some measurements. Considering that these two transfer learning methods break privacy, ours sacrifices a little performance in ``5p30'' while keeping the data safe inside the base stations.
\section{Conclusion}
In this paper, we propose reduced kernel mean embedding as the specification in the learnware paradigm, and implement a two-phase pipeline based on it. RKME is shown to protect raw training data in the upload phase and can identify reusable pre-trained models in the deployment phase. Experimental results, including a real industrial project at Huawei, validate its effectiveness. To the best of our knowledge, this is the first valid specification with practical success.
In the future, we plan to incorporate more powerful kernel methods to directly measure the similarity in the raw high-dimensional feature space when constructing RKME. It remains an open challenge to design other types of valid specifications, under assumptions which are even weaker than the instance-recurrent assumption.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,432 |
The Jindřichohradecký vikariát (Jindřichův Hradec vicariate) is one of the ten vicariates of the Diocese of České Budějovice. It consists of 31 parishes, in which 9 priests and one deacon serve. The district vicar is P. Mgr. Ivo Prokop, administrator in Jindřichův Hradec. The vicariate secretary is P. PhDr. Mgr. Jozef Gumenický.
List of parishes
References
External links
Jindřichohradecký vikariát on the website of the Diocese of České Budějovice
Vicariates of the Diocese of České Budějovice
Religion in the Jindřichův Hradec District
Religion in Jindřichův Hradec
Organisations in Jindřichův Hradec | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,833 |
Q: How do I check if two balls collide in tkinter? I'm working on a small project involving tkinter and I need to figure out how to get the balls to bounce off of each other.
Here's the code:
from tkinter import *
# dimensions of canvas
WIDTH=300
HEIGHT=400
# Create window and canvas
window = Tk()
canvas = Canvas(window, width=WIDTH, height=HEIGHT, bg='#ADF6BE')
canvas.pack()
# starting position of ball
x = 0
y = 10
# starting position of ball1
x1 = 100
y1 = 0
# distance moved each time step for ball 1
dx = 10
dy= 10
# distance moved each time step for ball 2
dx1 = 10
dy1 = 10
# diameter of ball
ballsize = 30
while True:
x = x + dx
y = y + dy
x1 = x1 + dx1
y1 = y1 + dy1
# if ball get to edge then we need to
# change direction of movement
if x >= WIDTH-ballsize or x <= 0 or x == x1:
dx = -dx
print("x=", x)
print('y=', y)
if y >= HEIGHT-ballsize or y <= 0 or y == y1:
dy = -dy
print("x=", x)
print('y=', y)
if x1 >= WIDTH-ballsize or x1 <= 0 or x1 == x:
dx1 = -dx1
print("x1=", x1)
print('y1=', y1)
if y1 >= HEIGHT-ballsize or y1 <= 0 or y1 == y:
dy1 = -dy1
print("x1=", x1)
print('y1=', y1)
# Create balls
ball=canvas.create_oval(x, y, x+ballsize, y+ballsize, fill="white", outline='white')
ball1 = canvas.create_oval(x1, y1, x1 + ballsize, y1 + ballsize, fill="white", outline='white')
# display ball
canvas.update()
canvas.after(50)
#remove ball
canvas.delete(ball)
canvas.delete(ball1)
window.mainloop()
They move and bounce off of the canvas walls but not off of each other.
Here's an image to show what I mean: instead of hitting each other and bouncing off, the balls pass straight through each other.
A: You have to check the distance between the balls.
If the distance between the center of the two circles is less than the sum of the radii of the circles, then they are colliding.
math.sqrt((ball1.x - ball2.x) ** 2 + (ball1.y - ball2.y) ** 2) <= sum_radii
Then change the dy and dx of the balls.
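For example, with the variables from your code (the x and y values are the top-left corners of the ovals, so the centres are offset by ballsize / 2; since both balls have radius ballsize / 2, the sum of the radii is simply ballsize), a rough sketch could look like this. Note the velocity swap is only a simple approximation of a bounce, not exact physics:
import math

def balls_collide(x, y, x1, y1, ballsize):
    # distance between the two centres vs. the sum of the radii (= ballsize here)
    cx, cy = x + ballsize / 2, y + ballsize / 2
    cx1, cy1 = x1 + ballsize / 2, y1 + ballsize / 2
    return math.sqrt((cx - cx1) ** 2 + (cy - cy1) ** 2) <= ballsize

Then, inside your while loop right after updating the positions:
if balls_collide(x, y, x1, y1, ballsize):
    # crude response: swap the velocities of the two balls
    dx, dx1 = dx1, dx
    dy, dy1 = dy1, dy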
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,914 |
Q: Can't reach my flask api running on port 5000 from other devices on the network. Tried firewall. Tried router. Stumped I am working on an Android app using a Windows 11 pc as both the app dev environment and also a flask/python backend api for it.
I can get the app in the emulator to reach the api at its address, ie. 192.168.0.100:5000. But the same address on my phone and other laptop just returns a timeout.
I've tried adding inbound rules to the Windows machine firewall (local port 5000, remote port all ports).
I've tried disabling the firewall.
I've tried adding port forwarding on the router from 5000 to 5000 for the reserved IP of the pc.
I am running the flask app as host=0.0.0.0 from the command line
Nothing seems to work. The service works fine when I'm on the pc both from browser, postman, and the android emulator. But it's dead everywhere else on the wifi and I need to be on an actual phone to properly test everything.
I'm at the end of my networking knowledge and don't know where else to look. Thank you!
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,610 |
Roof Dimension: 2430mm x 1520mm.
2390mm (7ft 8") Wide Plastic Shed - Wide practical shed design.
Two Ridge Skylights - Allows natural light to flood in.
The Lifetime 8x5 Plastic Shed is an 8ft wide apex roofed shed with a built in floor and Large double doors. This is without doubt the highest quality plastic shed available today. It's good looking and tough and will stand up to the harshest UK weather.
The Lifetime 8 x 5 is built from Double Wall High Density Polyethylene (HDPE) and powder-coated steel, making it super strong and resistant to weather. A Lifetime Shed will not warp, crack, chip, peel, fade or stain.
The Lifetime 8ft x 5ft has a hard-wearing integrated non-slip PVC floor that will take many years of punishment. Natural light floods into the shed through its skylight windows on the ridge. There are screened vents in both gable ends to allow for good airflow without letting in pests such as flies and wasps.
Lifetime Sheds are easy to assemble and can be erected in a matter of hours if you follow the instructions provided. As with all Lifetime sheds, there is a 10 Year Warranty on the Lifetime 8x5 Plastic Shed. | {
"redpajama_set_name": "RedPajamaC4"
} | 582 |
RISE Viktoria, formerly Viktoriainstitutet (the Viktoria Institute) and Viktoria Swedish ICT, is a Swedish IT research institute founded in 1997. It is part of RISE - Research Institutes of Sweden and is located at Lindholmen Science Park in Gothenburg.
External links
Official website
Swedish ICT Research | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,168 |
Enjoy S$15 off + free delivery with min. spend of S$150 - Limited to the first 500 redemptions per month.
This Promotion is valid from 1 April to 30 June 2019 (both dates inclusive) (the "Promotional Period").
The 15% discount storewide and free delivery are valid with a minimum spend of S$150, limited to the first 500 redemptions per month. Cardholders must enter the promo code "15HSBCRM" ("Code") upon checkout.
Codes are valid on www.lazada.sg/hsbc during Promotional Period.
Codes cannot be combined with other codes and promotions, and not exchangeable for cash. | {
"redpajama_set_name": "RedPajamaC4"
} | 108 |
Q: C++ dll throwing assertion failure I have a program written in C# that consumes a dll written in C++. I have the source for that, but changing it is out of scope. There are two files, of type .pak and .jrn, that get saved by the application. However, the location of these files is configurable. If I choose to save them in a local location (somewhere on the hard drive of the machine running my C# code) it works just fine. However, when I try to configure the system to store the files on a remote machine, I get an Assertion Failure error in C++.
This is really urgent. Any help will be greatly appreciated.
Thanks in advance,
A: I notice that you are configuring pakDir, but not jrnDir. So my guess is that jrnDir points to an invalid file path on the remote machine.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,472 |
<?php
class Scalr_Service_Gearman_Client extends Scalr_Service_Gearman
{
private $gmClient;
protected $processesPerJob = 1;
public function __construct(array $serversList)
{
parent::__construct();
$this->logger = Logger::getLogger("Scalr_Service_Gearman_Client");
$this->servers = $serversList;
}
    // Create the Gearman client and register all configured job servers
    public function initGearman()
    {
        $this->gmClient = new GearmanClient();
        foreach ($this->servers as $server)
            $this->gmClient->addServer($server['host'], $server['port']);
    }
    // Worker loop: fetch the pending tasks for the given job class, submit each
    // of them to Gearman as a background job, then sleep and repeat
    protected function childProcess($jobName)
    {
        $this->initGearman();
        while (true) {
            foreach ($jobName::getTasksList() as $job) {
                $this->gmClient->doBackground($jobName, $job['id'], $job['id']);
            }
            $this->logger->info("[".posix_getpid()."] {$jobName}: sleeping 5 seconds");
            sleep(5);
        }
    }
} | {
"redpajama_set_name": "RedPajamaGithub"
} | 1,662 |
/*
* base64.cpp
*
* Created on: Oct 15, 2012
* Author: ondra
*/
#include "base64.tcc"
namespace LightSpeed {
byte base64_encodeTable[65] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
byte base64_urlEncodeTable[65] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
byte base64_decodeTable[256] = "";
const ConstStrA ByteToBase64Convert::standardTable((char *)base64_encodeTable,64);
const ConstStrA ByteToBase64Convert::forUrlTable((char *)base64_urlEncodeTable,64);
const ConstStrA Base64ToByteConvert::standardTable((char *)base64_encodeTable,64);
const ConstStrA Base64ToByteConvert::forUrlTable((char *)base64_urlEncodeTable,64);
ByteToBase64Convert::ByteToBase64Convert()
:table(standardTable),padChar(standardPadding),wrbyte(0),readpos(0)
{
}
ByteToBase64Convert::ByteToBase64Convert(ConstStrA table, char paddingChar)
:table(table),padChar(paddingChar),wrbyte(0),readpos(0)
{
}
const char &ByteToBase64Convert::getNext() {
const char &out = outchars[readpos];
readpos = readpos+1;
if (readpos == 4) {
readpos = 0;
wrbyte = 0;
needItems = true;
hasItems = false;
} else {
hasItems = readpos < wrbyte;
}
return out;
}
const char &ByteToBase64Convert::peek() const {
return outchars[readpos];
}
void ByteToBase64Convert::write(const byte &item) {
//if eolb is active - reset state of convertor when new item is written
if (this->eolb) {
this->eolb = false;
wrbyte = 0;
readpos = 0;
}
switch (wrbyte) {
case 0:
//encode first 6 bits to character
outchars[0] = table[item>>2];
//store partial of the result - we will need it in next cycle
outchars[1] = (item & 0x3) << 4;
//increase write counter
++wrbyte;
break;
case 1:
//retrieve stored result and combine it with four bits from second byte
outchars[1] = table[outchars[1] | (item >> 4)];
//store remaining four bits - we will need it in next cycle
outchars[2] = (item & 0xF) << 2;
//increase write counter
++wrbyte;
break;
case 2:
//retrieve stored result and combine it with six bits from the third byte
outchars[2] = table[outchars[2] | (item >> 6)];
//also encode remaining six bits
outchars[3] = table[item & 0x3F];
//increase write counter twice - it unlocks last character for reading
++wrbyte;
++wrbyte;
//lock need items - because now reader must finish reading of generated four characters
needItems = false;
break;
//cannot write now, throw exception
default: throwIteratorNoMoreItems(THISLOCATION,typeid(*this));
break;
}
hasItems = true;
}
void ByteToBase64Convert::flush() {
//flush should finish current base64 sequence
//it is called when writting and source iterator is closed
//first check, whether flush has been called, we will use eolb flag
//if did, do nothing now
if (this->eolb) return;
//set eolb, because no more data are expected
this->eolb = true;
//close needItems, no more items are expected
this->needItems = false;
//depend on current state
switch (wrbyte) {
case 0:
//no bytes written, do nothing, hasItems will be false
break;
case 1:
//one byte written
//encode second char from stored remaining bits and padded by zeroes
outchars[1] = table[outchars[1]];
if (padChar) {
//write pad char
outchars[2] = padChar;
//write pad char
outchars[3] = padChar;
//adjust wrbyte
wrbyte=4;
} else {
wrbyte=2;
}
break;
case 2:
//two bytes written
//encode third char from stored remaining bots and padded by zeroes
outchars[2] = table[outchars[2]];
if (padChar) {
//write pad char
outchars[3] = padChar;
//adjust wrbytes
wrbyte=4;
} else {
wrbyte=3;
}
break;
default:
//three bytes writen, nothing need to be done
break;
}
//unlock hasItems only if there are chars to read.
hasItems = readpos < wrbyte;
}
Base64ToByteConvert::Base64ToByteConvert()
:padChar(standardPadding),readpos(0),wrpos(0) {
initTable(standardTable);
table[(unsigned int)'-'] = 62;
table[(unsigned int)'_'] = 63;
}
Base64ToByteConvert::Base64ToByteConvert(ConstStrA table, char paddingChar)
:padChar(paddingChar),readpos(0),wrpos(0) {
initTable(table);
}
const byte &Base64ToByteConvert::getNext() {
const byte &out = outBytes[readpos];
readpos = readpos+1;
if (readpos == 3) {
readpos = 0;
wrpos = 0;
needItems = true;
hasItems = false;
} else {
hasItems = readpos+1 < wrpos;
}
return out;
}
const byte &Base64ToByteConvert::peek() const {
return outBytes[readpos];
}
void Base64ToByteConvert::write(const char &item) {
if (item == padChar) {
        //this is correct - only a whole byte can be read and the extra characters only serve as padding
this->eolb = true;
} else {
//if eolb is active - reset state of convertor when new item is written
if (this->eolb) {
this->eolb = false;
wrpos = 0;
readpos = 0;
}
int x = (byte)item;
switch (wrpos) {
case 0: outBytes[0] = table[x] << 2;
++wrpos;
break;
case 1: outBytes[0] |= table[x] >> 4;
outBytes[1] = table[x] << 4;
++wrpos;
break;
case 2: outBytes[1] |= table[x] >> 2;
outBytes[2] = table[x] << 6;
++wrpos;
break;
case 3: outBytes[2] |= table[x];
++wrpos;
break;
default:throwIteratorNoMoreItems(THISLOCATION,typeid(*this));
break;
}
hasItems = readpos+1 < wrpos;
}
}
void Base64ToByteConvert::flush() {
this->eolb = true;
}
void Base64ToByteConvert::initTable(ConstStrA t) {
//for performance reason, do not fill other items, it expects valid format if base64
for (natural i = 0; i < t.length(); i++)
table[(unsigned int)(t[i])] = (byte)(i & 63);
}
void base64_initTables() {
if (base64_decodeTable[0] == 0) {
for (natural i = 0; i < 256; i++) base64_decodeTable[i] = 0xFF;
for (natural i = 0; i < 64; i++)
base64_decodeTable[base64_encodeTable[i]] = (byte)((i));
base64_decodeTable[(unsigned int)'-'] = 62;
base64_decodeTable[(unsigned int)'_'] = 63;
}
}
template class Base64EncoderT<char,char>;
template class Base64EncoderT<byte,char>;
template class Base64EncoderT<byte,byte>;
template class Base64DecoderT<char,byte>;
template class Base64DecoderT<char,char>;
template class Base64DecoderT<byte,byte>;
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,816 |
{"url":"http:\/\/tailieu.vn\/doc\/random-numbers-part-9-281001.html","text":"# Random Numbers part 9\n\nChia s\u1ebb: Dasdsadasd Edwqdqd | Ng\u00e0y: | Lo\u1ea1i File: PDF | S\u1ed1 trang:13\n\n0\n43\nl\u01b0\u1ee3t xem\n4\n\n## Random Numbers part 9\n\nM\u00f4 t\u1ea3 t\u00e0i li\u1ec7u\n\nAdaptive and Recursive Monte Carlo Methods This section discusses more advanced techniques of Monte Carlo integration. As examples of the use of these techniques, we include two rather different, fairly sophisticated, multidimensional Monte Carlo codes: vegas [1,2] , and miser [3]. The techniques that we discuss all fall under the general rubric of reduction of variance (\u00a77.6), but are otherwise quite distinct.\n\nCh\u1ee7 \u0111\u1ec1:\n\nB\u00ecnh lu\u1eadn(0)\n\nL\u01b0u","date":"2018-04-22 09:09:08","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9370372295379639, \"perplexity\": 7703.2503794678505}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-17\/segments\/1524125945552.45\/warc\/CC-MAIN-20180422080558-20180422100558-00570.warc.gz\"}"} | null | null |
\section*{\Large Appendices}
\section{Related Work}
\label{appendix:RW}
Recently, several novel deep learning-based techniques to estimate uncertainty with a single model have been proposed. For example, Deep Evidential Regression \cite{amini2019deep} is a
method for estimating epistemic uncertainty that is based on a parametric estimate of model variance. This is in line with previous work using evidential uncertainty estimation \cite{malinin2018predictive, sensoy18neural}. Orthonormal Certificates, a set of learned features with a suitable loss function, are used in \cite{tagasovska2018single}. These certificates capture the distance to the training set to learn an estimate of the epistemic uncertainty. This is further studied by \cite{Liu2020SimpleAP}, who formalize \emph{distance awareness}, which captures the model's ability to quantify the distance of a test sample from the training data manifold, as a necessary condition for uncertainty estimation. This distance awareness can be captured with a weight normalization step in training, in addition to using a GP as the output layer. DUE \cite{DBLP:journals/corr/abs-2102-11409} is an instance of Deep Kernel Learning \cite{pmlr-v51-wilson16}, which is defined as a GP with a deep feature extractor inside the kernel. DUE improves upon SNGP by using an inducing point GP and bi-Lipschitz constraints on the feature extractor, giving better test set accuracies as well as improved training efficiency. DDU \cite{mukhoti2021deep} further extends this line of work by fitting Gaussian Discriminant Analysis (GDA) on the feature space of a regularized neural network. DUN \cite{Antoran2020DepthUI} uses the disagreement between the outputs from intermediate layers as a measure of uncertainty. DUQ \cite{Amersfoort2020SimpleAS}, on the other hand, uses two-sided Jacobian regularization on RBF networks \cite{lecun1998gradient} for reliable uncertainty estimates.
\cite{Wen2020BatchEnsembleAA} present an efficient way of implementing ensembles of neural networks, by using one shared matrix and a rank-1 matrix per member. The weights for each member are then computed as the Hadamard product of the shared matrix and the rank-1 matrix of the member.
There has also been extensive work in scaling up Bayesian Neural Networks for high-dimensional data to capture epistemic uncertainty. SWAG \cite{maddoxfast} fits a Gaussian distribution capturing the SWA \cite{Izmailov2018AveragingWL} mean and a covariance matrix representing the first two moments of the SGD iterates. This distribution is then used as a posterior over the neural network weights. \cite{Dusenberry2020EfficientAS} parametrize the BNN with a distribution on a rank-1 subspace for each weight matrix, inspired by BatchEnsembles \cite{Wen2020BatchEnsembleAA}.
\cite{Vadera2020GeneralizedBP, Malinin2020Ensemble} propose approaches to improve the efficiency of ensembles by distilling the distribution of predictions rather than the average, thus preserving the information about the uncertainty captured by the ensemble.
There is also a large body of work on methods for approximating samples from the Bayesian posterior on large datasets with efficient MCMC-based approaches \cite{welling2011bayesian,Zhang2020CyclicalSG,Vadera2020URSABenchCB}. The variance of this posterior distribution can then be computed and used as an uncertainty estimate. However, as we have discussed in the sections above, this does not account for model misspecification and thus is not an accurate estimate for the lack of knowledge of the predictor.
\cite{Appice2015NovelDO} present several decompositions of the total expected loss, including the decomposition into the epistemic and irreducible (aleatoric) loss. They present additive adjustments that reduce the scoring rules like the log-loss and Brier score, but do not tackle the general problem of uncertainty estimation.
There are also interesting connections between the problem of out-of-distribution generalization arising in sequential model optimization and Bayesian optimization, discussed here, and the possibility of reweighing examples, see~\cite{farquhar2021statistical}.
\section{Epistemic Uncertainty in a general loss function setting}
\label{appendix:theorygeneral}
We consider the setting presented in section \ref{sec:theoryL2}, but with a general loss function $l$. We use the same notations as in section \ref{sec:theoryL2}.
\begin{definition}
The {\bf total uncertainty} of $f$ at $x$ is defined as:
\begin{align}
{\cal U}(f,x) = \int l(f(x),y) dP(y|x).
\end{align}
\end{definition}
\begin{definition}
\label{def:varianceL2-2}
For a learning algorithm $\cal L$ which produces a distribution $P_{\cal L}(f(x)|{\cal D}_n)$ over
possible solutions $f(x)$ at $x$, the {\bf model variance} at $x$ is defined as
\begin{align}\label{eq:variancegeneralloss}
V({\cal L},{\cal D}_n,x)& = \int l(f(x),\hat{f}(x)) \, dP_{\cal L}(f(x)|{\cal D}_n)
\end{align}
with $\hat{f}(x)=\arg\min_{\bar{f}(x)} \int l(f(x),\bar{f}(x)) \, dP_{\cal L}(f(x)|{\cal D}_n).$
Note that for a loss function that is different from the square loss, the semantics of variance (such as its non-negativity) might be lost.
\end{definition}
Let us consider the special cases of the negative log-likelihood loss in general (for outputs which may be discrete or continuous)
and that of the squared error loss (which ends up being a special case of the former for normally distributed outputs).
Below we see $Q(Y|x)$ as a probability mass or density function (over $y$), which is also a function of $x$.
\begin{definition}
The negative log-likelihood (NLL) loss takes as first
argument $Q(Y|x)$ a probability mass or density function and returns
\begin{align}
l_{NLL}(Q(Y|x),y) = - \log Q(Y=y|x).
\end{align}
\end{definition}
\begin{proposition}
\label{nll_total}
For the NLL loss with ground truth $P(Y|x)$
and predictor $Q(Y|x)$, the total uncertainty ${\cal U}(Q(Y|x),x)$ is a cross-entropy, i.e.,
\begin{align}
{\cal U}(Q(Y|\ . \ ),x) &= CE(P(Y|x)||Q(Y|x)) \nonumber \\
&= - \int dP(y|x) \log Q(y|x)
\end{align}
\end{proposition}
The proposition is shown by applying the definitions.
\begin{proposition}
For the NLL loss with ground truth $P(Y|x)$, the aleatoric uncertainty ${\cal A}(x)$ in this setting
is the entropy $H[P(Y|x)]$ of the ground truth conditional:
\begin{align}
{\cal A}(x) = - \int dP(y|x) \log P(y|x) = H[P(Y|x)],
\end{align}
\end{proposition}
The proposition follows from the fact that the cross-entropy
$CE(P(Y|x)||Q(Y|x))$ is minimized when $Q=P$.
\begin{proposition}
For the NLL loss with ground truth $P(Y|x)$
and predictor $Q(Y|x)$, the epistemic uncertainty ${\cal E}(Q(Y|x),x)$
is the Kullback-Liebler divergence between $P$ and $Q$ (with $P$
as the reference):
\begin{align}
{\cal E}(Q(Y|\ . \ ),x) &= KL(P(Y|x)||Q(Y|x)) \nonumber \\
&= \int dP(y|x) \log \frac{P(y|x)}{Q(y|x)}
\label{eq:KL}
\end{align}
\end{proposition}
The proposition is shown by combining the above two propositions
and the definition of epistemic uncertainty.
To move towards the MSE loss, consider the
special case of NLL with a conditionally Normal output density
for both $P$ and $Q$.
\begin{proposition}
\label{nll_epistemic}
For the NLL loss with a conditionally Normal output density
for both $P$ and $Q$, with respective means $f^*(x)$ and $\hat{f}(x)$
and respective variances
$\sigma^2_P(x)$ and $\sigma^2_Q(x)$, the epistemic
uncertainty is
\begin{align}
{\cal E}(Q(Y|\ . \ ),x) = \frac{1}{2 \sigma^2_Q(x)} l_{MSE}(\hat{f}(x),f^*(x))
+ KL(P(Y|x)||\tilde{Q}(Y|x)),
\end{align}
where $\tilde{Q}(\ . \ |x)$ is obtained by shifting $Q(\ . \ |x)$ towards $P(\ . \ |x)$ (i.e., $\tilde{Q}(\ . \ |x)$ is Gaussian with mean $f^*(x)$ and variance $\sigma^2_Q(x)$ ), and the Bayes-optimal mean predictor is \mbox{$f^*(x)=E_P[Y|x]$}. Note that if $\sigma_P=\sigma_Q$,
then the KL term is zero.
\end{proposition}
The proof is presented in Appendix~\ref{appendix:proofs}. We can compare with the MSE loss (which assumes a constant
variance $\sigma=\sigma_P=\sigma_Q$)
and obtain the same result up to variance scaling.
\section{Proofs}
\label{appendix:proofs}
\subsection{Proposition \ref{epistemic_mse}}
It is a well known result that, because $f^*(x)$ is the mean of $P(.|x)$, it is also the minimizer of $\hat{y} \mapsto \int (\hat{y} - y)^2 dP(y|x)$. $f^*$ is thus a Bayes Optimal predictor.
By definition of the total uncertainty:
\begin{align*}
{\cal U}(f, x) = \int (f(x) - y) ^ 2 dP(y | x) = E[(f(x) - Y)^2].
\end{align*}
Hence, by definition of aleatoric uncertainty:
\begin{align*}
{\cal A}(x) = {\cal U}(f^*, x) = E[(f^*(x) - Y)^2].
\end{align*}
and by definition of epistemic uncertainty, using the fact that $E[Y|x] = f^*(x)$:
\begin{align*}
{\cal E}(f, x) &= E\left[(f(x) - Y) ^ 2 - (f^*(x) - Y) ^ 2\right] \\
&= f(x) ^ 2 - f^*(x)^2 - 2 (f(x) - f^*(x))E[Y|x] \\
&= f(x) ^ 2 - f^*(x)^2 - 2 (f(x) - f^*(x))f^*(x) \\
&= (f(x) - f^*(x))^2,
\end{align*}
which concludes the proof.
\subsection{Proposition \ref{nll_epistemic}}
From Equation \ref{eq:KL}, we get:
\begin{align*}
{\cal E}(Q(Y|\ . \ ),x) &= KL(P(Y|x)||Q(Y|x)) \\
&= \log \frac{\sigma_Q(x)}{\sigma_P(x)} + \frac{\sigma_P^2(x) + (f(x) - f^*(x)) ^ 2}{2 \sigma^2_Q(x)} - \frac{1}{2} \\
&= \frac{1}{2 \sigma^2_Q(x)} l_{MSE}(f(x),f^*(x)) + \log \frac{\sigma_Q(x)}{\sigma_P(x)} + \frac{\sigma_P^2(x) + (f^*(x) - f^*(x)) ^ 2}{2 \sigma^2_Q(x)} - \frac{1}{2}
\\
&= \frac{1}{2 \sigma^2_Q(x)} l_{MSE}(f(x),f^*(x)) + KL(P(Y|x)|\tilde{Q}(Y|x))
\end{align*}
which concludes the proof.
\section{Pseudo Code for the fixed training set setting}
\label{appendix:pseudocodes}
Algorithm \ref{pseudocode:deupfixed} illustrates the training procedure when a held-out validation set is available. We focus on $y \in \mathbb{R}$ in this paper because it makes sense for active learning applied to black-box optimization (where we want to maximize it) but the algorithms can trivially be applied to the case where $y \in \mathbb{R}^d$ for any $d$. Similarly, the algorithms can be generalized to other losses besides the square loss, following the generalized theory presented above in Appendix~\ref{appendix:theorygeneral}.
\begin{algorithm}[ht]
\textbf{Data: }$\mathcal{D}$ the training dataset with pairs $(x, y)$ with $x \in \cal X$, $y \in \mathbb{R}$; $\mathcal{D}_{out}$ the out-of-sample dataset with pairs $(x, y)$, to train the uncertainty estimator \\
$\mathcal{X}$, the input/search space\\
$a: {\cal X}\mapsto \mathbb{R}$, trained estimator of aleatoric uncertainty \\
$f: {\cal X}\mapsto \mathbb{R}$, main predictor, trained on $\mathcal{D}$ \\
$u: {\cal X}\mapsto \mathbb{R}$, total uncertainty estimator (estimates error of $f$)
\kwTraining{\\
Initialize empty dataset of errors $\mathcal{D}_u$ \\
\textbf{Optional: }Pre-fill $\mathcal{D}_u$ using Algorithm \ref{pseudocode:crossval} and fit $u$ on $\mathcal{D}_u$\\
$x_{acq} \leftarrow \emptyset, y_{acq} \leftarrow \emptyset$
\For{every pair $(x, y)$ in $\mathcal{D} \cup \mathcal{D}_{out}$}
{$\mathcal{D}_u \leftarrow \mathcal{D}_u \cup \{(x, (y - f(x))^2)\}$}
Fit $u$ on $\mathcal{D}_u$\\
}
\textbf{Evaluation: }For every input $x$, return $u(x) - a(x)$ as an estimator of epistemic uncertainty at $x$
\caption{DEUP with a fixed training set: Training procedure to obtain estimates of epistemic uncertainty}
\label{pseudocode:deupfixed}
\end{algorithm}
\section{Estimating Aleatoric Uncertainty with access to an oracle}
\label{estimatealeatoric}
In scenarios like active learning, one has access to an oracle from which we can obtain samples of $Y \sim P(Y|x)$ at any given point $x$. In that case, one can train an estimator $a(x)$ of aleatoric uncertainty by obtaining $K > 1$ samples $y_1, \dots y_K$ at the same $x$, for a set of representative $x$'s.
More formally, if we have multiple independent outcomes $y_1, \dots, y_K \sim P(Y|x)$ for each input point $x$, then training a predictor $a$ with the squared loss on (input, target) examples
$\left(x,\frac{K}{K - 1}\overline{Var}(y_1, \dots, y_K)\right)$, where $\overline{Var}$ denotes the empirical variance, yields an estimator of the aleatoric uncertainty.
Naturally, this estimator is asymptotically unbiased, if the learning algorithm ensures asymptotic convergence to a Bayes-Optimal predictor.
This is due to the fact that $E_{y_1, \dots, y_K} \left[\frac{K}{K - 1}\overline{Var}(y_1, \dots, y_K) \right] = Var_{P(y|x)} [y|x] $, which according to Proposition \ref{epistemic_mse} is equal to $\mathcal{A}(x)$.
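As a concrete illustration, the following NumPy sketch builds the (input, target) pairs described above; \texttt{sample\_oracle} is a hypothetical helper returning $K$ independent outcomes of the oracle at a given $x$, and the $\frac{K}{K-1}$ correction corresponds to \texttt{ddof=1}.
\begin{verbatim}
import numpy as np

def aleatoric_targets(xs, sample_oracle, K=10):
    """Build (x, target) pairs whose targets estimate A(x) = Var[Y | x].

    `sample_oracle(x, K)` is assumed to return K i.i.d. outcomes y_1..y_K
    drawn from P(Y | x); `ddof=1` applies the K/(K-1) correction, so each
    target is an unbiased estimate of the aleatoric uncertainty at x.
    """
    targets = []
    for x in xs:
        ys = np.asarray(sample_oracle(x, K))
        targets.append(ys.var(ddof=1))
    return np.asarray(xs), np.asarray(targets)
\end{verbatim}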
\section{Sequential Model Optimization Experiments}\label{appendix:SMO}
We use BoTorch\footnote{\href{https://botorch.org/}{https://botorch.org/}}~\cite{balandat2020botorch} as the base framework for our experiments.
For all our Sequential Optimization algorithms, we use Algorithm \ref{pseudocode:deupactive} to train DEUP uncertainty estimators.
We found that the optional step of pre-filling the uncertainty estimator dataset $\mathcal{D}_u$ was important given the low number of available training points. We used half the initial training set (randomly chosen) as in-sample examples (used to train the main predictor and an extra-feature generator) and the other half as out-of-sample examples to provide instances of high epistemic uncertainty to train an uncertainty predictor; we repeated the procedure by alternating the roles of the two halves of the dataset. We repeated the whole procedure twice using a new random split of the dataset, thus ending up with $4$ training points in $\mathcal{D}_u$ for every initial training point in $\mathcal{D}_{init}$.
The error predictor is trained with $\log$ targets (i.e., the logarithm of the squared error between predicted and observed values). This helps since the scale of the errors varies over multiple orders of magnitude.
Computationally, the training time of DEUP-EI depends on various choices (e.g. the features used to train the epistemic uncertainty predictor, the dimension of the input, the learning algorithms, etc..). Additionally, the training time for the uncertainty predictor varies at each step of the optimization. In total, the sequential optimization experiments took about 1 CPU day.
\subsection{One-dimensional function toy example}
\label{appendix:SMO1}
In Figure \ref{multioptima_appendix}, we show the results of DEUP-EI, compared to GP-EI, MCDropout-EI and Ensembles-EI, in the task of optimizing a synthetic one-dimensional function. Because MCDropout and Ensembles are trained on in-sample data only, they are unable to generalize their uncertainty estimates, which makes them poor candidates for Sequential Model Optimization: they easily get stuck in local minima and require many iterations before the acquisition function gives more weight to the predicted uncertainties than to the current maximum.
For Random acquisition, we sampled $56$ points for each seed, and used the (average across the seeds of the) maximum of the first $6$ values as the first value in the plots (Figures \ref{ackley} and \ref{multioptima_appendix}). Note that because the function is specifically designed to have multiple local maxima, GP-EI also required more optimization steps, and actually performed worse than random acquisition.
As a stationarizing input feature, we used the variance of a GP fit on the available data at every step. We found that the binary (in-sample/out-of-sample) feature and density estimates were redundant with the variance feature and didn't improve the performance as captured by the number of additional function calls. We used a GP for the DEUP uncertainty estimator. Using a neural net provided similar results, but was computationally more expensive in this 1-D case with few datapoints. We used a 3-hidden layer neural network, with 128 neurons per layer and a ReLU activation function, with Adam \cite{kingma2014adam} and a learning rate of $10^{-3}$ (and default values for the other hyperparameters) to train the main predictor for DEUP-EI (in order to fit the available data). The same network architecture and learning rate were used for the Dropout and Ensemble baselines. We used 3 networks for the Ensemble baseline, and a dropout probability of 0.3 for the Dropout baseline, with 100 test-time forward passes to compute uncertainty estimates.
\begin{figure}[H]
\begin{center}
\vspace*{-5mm}
\centerline{\includegraphics[width=1.2\columnwidth]{figs/multioptima_results_appendix.pdf}}
\caption{{\em Left.} Synthetic function to optimize.
{\em Right.} Maximum value reached by the different methods on the synthetic function.
The shaded areas represent the standard error across 5 different runs, with different initial sets of 6 pairs. For clarity, the shaded areas are omitted for the two worst performing methods. In each run, all the methods start with the same initial set of 6 points. GP-EI tends to get stuck in local optima and requires more than 50 steps, on average, to reach the global maximum. }
\vspace*{-5mm}
\label{multioptima_appendix}
\end{center}
\end{figure}
\subsection{Two-dimensional function}
To showcase DEUP's usefulness for Sequential Model Optimization with more than one dimension, we consider the optimization of the Levi N.13 function, a known optimization benchmark. The function $f$ takes a point $(x, y)$ in 2D space and returns:
\begin{align*}
f(x, y) = - \left( \sin^2(3\pi x) + (x - 1)^2 (1 + \sin^2(3\pi y)) + (y-1)^2(1 + \sin^2(2\pi y))\right)
\end{align*}
We use the box $[-10, 10]^2$ as the optimization domain. In this domain, the maximum of the function is $0$, and it is reached at $(1, 1)$. The function has multiple local maxima, as shown in Figure \ref{fig:levi3d}\footnote{Plot of the function copied from \href{https://www.sfu.ca/~ssurjano/levy13.html}{https://www.sfu.ca/~ssurjano/levy13.html}}.
As with the previous one-dimensional function, MCDropout and Ensemble performed poorly and are omitted from the plot in Figure~\ref{fig:levi_results}. We used the same setting and hyperparameters for DEUP as for the previous function. DEUP-EI is again the only method that consistently reaches the global maximum in under $56$ function evaluations.
\begin{figure}[H]
\centering
\subfigure[Visualization of $(x, y) \mapsto - f(x, y)$] {\includegraphics[width=0.42\textwidth]{figs/levifigure.png}\label{fig:levi3d}}
\subfigure[Comparisons with GP-EI and Random acquisition] {\includegraphics[width=0.49\textwidth]{figs/levi_n13_results_appendix.pdf}\label{fig:levi_results} }
\caption{Sequential Model Optimization on the Levi N.13 function}
\end{figure}
\subsection{Additional details for the Ackley function experiment, for synthetic data in higher dimensions}
\label{appendix:SMO3}
The Ackley function of dimension $d$ is defined as:
\begin{align*}
Ackley_d: {\mathcal{B}} & \rightarrow {\mathbb{R}} \\
x & \mapsto A \exp\left(-B \sqrt{\frac{1}{d} \sum_{i=1}^d x_i^2}\right) +
\exp\left(\frac{1}{d} \sum_{i=1}^d \cos(c x_i)\right) - A - \exp(1)
\end{align*}
where ${\mathcal{B}}$ is a hyperrectangle of ${\mathbb{R}}^d$. $(0, \dots, 0)$ is the only global optimizer of $Ackley_d$, at which the function is equal to $0$. We use BoTorch's default values for $A, B, c$, which are $20, 0.2, 2 \pi$ respectively.
In our experiments, we used ${\mathcal{B}} = [-10, 15]^d$ for all dimensions $d$.
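For reference, a minimal NumPy implementation of this negated Ackley objective (with the default constants mentioned above) could look as follows; it is only meant to make the formula concrete and does not reproduce the BoTorch test-function class.
\begin{verbatim}
import numpy as np

def neg_ackley(x, A=20.0, B=0.2, c=2 * np.pi):
    """Negated d-dimensional Ackley function (to be maximized).

    Matches the formula above: the global maximum is 0, attained at the origin.
    `x` is a 1-D array of length d.
    """
    x = np.asarray(x, dtype=float)
    term1 = A * np.exp(-B * np.sqrt(np.mean(x ** 2)))
    term2 = np.exp(np.mean(np.cos(c * x)))
    return term1 + term2 - A - np.e
\end{verbatim}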
For the TurBO baseline, we use BoTorch's default implementation, with Expected Improvement as an acquisition function, and a batch size of $1$ (i.e. acquiring one point per step).
For fair comparisons, for DEUP, we use a Gaussian Process as the main model, and use its variance as the only input of the epistemic uncertainty predictor. This means that we calibrate the GP variance to match the out-of-sample squared error, using another GP to perform the regression. TurBO-DEUP is a combination of both, in which we perform the variance calibration task for the local GP models of TurBO. The uncertainty predictor, i.e. the GP regressor, is trained with $\log$ targets, as in Appendix~\ref{appendix:SMO1}, but also with $\log$ variances as inputs.
Note that only the stationarizing feature is used as input for the uncertainty predictor. When we used the input $x$ as well, we found that the GP error predictor overfits on the $x$ part of the input $(x, v)$, and it was detrimental to the final performances.
For all experiments, we used $20$ initial points.
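The variance-calibration step described above can be sketched as follows with scikit-learn GPs; the experiments use BoTorch models, so the regressor choices and names here are illustrative only. The error predictor regresses log out-of-sample squared errors on log GP variances, matching the description above.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_deup_calibrator(main_gp, X_out, y_out):
    """Calibrate the main GP's variance against out-of-sample squared errors.

    `main_gp` is a fitted GaussianProcessRegressor; (X_out, y_out) are points
    not used to fit it. The error predictor is another GP trained with log
    variances as inputs and log squared errors as targets.
    """
    mu, std = main_gp.predict(X_out, return_std=True)
    log_var = np.log(std ** 2 + 1e-12).reshape(-1, 1)
    log_sq_err = np.log((y_out - mu) ** 2 + 1e-12)
    error_gp = GaussianProcessRegressor(normalize_y=True)
    error_gp.fit(log_var, log_sq_err)
    return error_gp

def deup_uncertainty(main_gp, error_gp, X):
    """Predicted uncertainty at X (equal to EU for a noise-free objective)."""
    _, std = main_gp.predict(X, return_std=True)
    log_var = np.log(std ** 2 + 1e-12).reshape(-1, 1)
    return np.exp(error_gp.predict(log_var))
\end{verbatim}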
\section{Reinforcement Learning Experiments}\label{appendix:RL}
For RL experiments, we used \textit{bsuite} \cite{osband2020bsuite}, a collection of carefully designed RL environments. \textit{bsuite} also comes with a list of metrics which aim to evaluate RL agents from different aspects. We compare the agents based on the \textit{basic} metric and average regret as they capture both sample complexity and final performance. The default DQN agent is used as the base of our experiments with a 3-layer fully-connected (FC) neural network as its Q-network. For the Bootstrapped DQN baseline, we used the default implementation provided by \textit{bsuite}. To implement DQN + MC-Dropout, following the implementation from \cite{gal2016dropout}, two dropout layers with dropout probability of 0.1 are used before the second and the third FC layers. In order to take an action, the agent performs a single stochastic forward pass through the Q-network, which is equivalent to taking a sample from the posterior over the Q-values, as done in Thompson sampling, an alternative to $\epsilon$-greedy exploration.
As a density estimator, we used a Kernel Density Estimator (KDE) with a Gaussian kernel and bandwidth of 1 to map states to densities. This KDE is refit every 10000 steps (actions) with a batch of samples from the replay buffer (which is of size 10000). The uncertainty estimator network (E-network) has the same number of layers as the Q-network, with an additional Softplus layer at the end. All other hyperparameters are the same as in the default implementation by \cite{osband2020bsuite}.
One complete training run of the DEUP-DQN experiments with 5 seeds takes about 0.04-0.05 GPU days on a V100 GPU. In total, the RL experiments took about 0.15 GPU days on an Nvidia V100 GPU.
\begin{algorithm}[H]
Initialize replay buffer $\mathcal{D}$ with capacity $\mathcal{N}$ \\
$Q_\theta(s, a)$: state-action value predictor \\
$E_\phi(\log d)$: uncertainty estimator network, which takes the log-density of the states as the input \\
$d(s)$: Kernel density estimator (KDE)\\
K: KDE fitting frequency \\
W: Number of warm-up episodes \\
\For{episode=1 to $M$}
{
set $s_0$ as the initial state \\
\For{t=1 to \textit{max-steps-per-episode}}
{
\textbf{with probability $\epsilon$: } take a random action, \textbf{otherwise:} \\
\textbf{if} $episode \leq$ W: $a = \arg\max_{a} Q_\theta(s_t, a)$, \textbf{else:}
$a = \arg\max_{a} \big[Q_\theta(s_t, a) + \kappa \times E_\phi(\log d(s_t))(a)\big]$
\\
store $(s_t, a_t, r_t, s_{t+1})$ in $\mathcal{D}$
\\
Sample random minibatch B of transitions $(s_j , a_j , r_j , s_{j+1})$ from $\mathcal{D}$
\\
\textbf{if} $s_{j+1}$ is a final state: $y_j = r_j$, \textbf{else: }$y_j = r_j + \gamma \max_{a'} Q_\theta(s_{j+1}, a')$
\\
\textbf{Update Q-network:} \\
$\theta \leftarrow \theta - \alpha_Q \, \nabla_\theta \mathop{\mathbb{E}_{(s_j, a_j) \sim {B}}} \Big [\big(y_j - Q_\theta(s_j, a_j) \big)^2 \Big]$
\\
\textbf{Update E-network:} \\
$\phi \leftarrow \phi - \alpha_E \, \nabla_\phi \mathop{\mathbb{E}_{(s_j, a_j) \sim {B}}} \Bigg[ \Big [\big(y_j - Q_\theta(s_j, a_j) \big)^2 - E_\phi(\log d(s_j))(a_j)\Big]^2 \Bigg]$
\\
\textbf{if} mod(\textit{total-steps}, K) = 0: fit the KDE $d$ on the states of $\mathcal{D}$
}
}
\caption{DEUP-DQN}
\label{pseudocode:DEUP-DQN}
\end{algorithm}
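To make the action-selection step of Algorithm~\ref{pseudocode:DEUP-DQN} concrete, a minimal PyTorch/scikit-learn sketch is given below; the module names, the assumption that \texttt{e\_net} maps the scalar log-density to one predicted error per action, and the shapes are illustrative rather than a faithful reproduction of the \textit{bsuite} implementation.
\begin{verbatim}
import numpy as np
import torch
from sklearn.neighbors import KernelDensity

def select_action(q_net, e_net, kde, state, kappa=1.0, epsilon=0.05,
                  n_actions=4):
    """epsilon-greedy action selection with a DEUP exploration bonus.

    `q_net(state)` returns Q-values, `e_net(log_density)` returns one
    predicted error per action, and `kde` is a fitted sklearn KernelDensity
    over states (its score_samples gives the log-density).
    """
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        q = q_net(s).squeeze(0)                      # shape: [n_actions]
        log_d = kde.score_samples(np.asarray(state).reshape(1, -1))
        u = e_net(torch.as_tensor(log_d, dtype=torch.float32))
        return int(torch.argmax(q + kappa * u).item())
\end{verbatim}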
\section{Rejecting Difficult Examples}
\label{appendix:fixed_training_set}
We adapt the standard OOD rejection task~\cite{Amersfoort2020SimpleAS, Liu2020SimpleAP} to measure the Spearman Rank Correlation of the predicted uncertainty with the true generalization error, in addition to the OOD Detection AUROC. We use MC-Dropout~\cite{gal2016dropout}, Deep Ensemble~\cite{lakshminarayanan2016simple}, DUE\cite{DBLP:journals/corr/abs-2102-11409} and DUQ~\cite{Amersfoort2020SimpleAS} as the baselines~\footnote{MC-Dropout and Deep Ensemble baselines are based on \href{https://github.com/google/uncertainty-baselines}{https://github.com/google/uncertainty-baselines}, DUQ based on \href{https://github.com/y0ast/deterministic-uncertainty-quantification}{https://github.com/y0ast/deterministic-uncertainty-quantification} and DUE based on \href{https://github.com/y0ast/DUE}{https://github.com/y0ast/DUE}}. We use these baselines as representatives for the major approaches for uncertainty estimation in recent literature. For all the methods, including DEUP we consider two architectures for the main predictor, ResNet-18 and ResNet-50~\cite{he2016deep} (Table~\ref{app:Resnet50}) to study the effect of model capacity. Note that for the ResNet50 DEUP model we continue using the ResNet-18 based DUE as variance source.
\begin{table}[ht]
\caption{Spearman Rank Correlation between predicted uncertainty and the true generalization error on OOD data (SVHN) with ResNet-50 models (3 seeds) trained on CIFAR-10.}
\label{app:Resnet50}
\begin{center}
\begin{tabular}{ll}
\hline
\textbf{Model} & \textbf{ResNet-50} \\ \hline
MC-Dropout & $0.312 \pm 0.003$ \\
Deep Ensemble & $0.401 \pm 0.004$ \\
DUQ & $0.399 \pm 0.003$ \\
DEUP (D+V) & $\bm{0.465 \pm 0.002}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\paragraph{Training}
The baselines were trained with the CIFAR-10 training set with $10\%$ set aside as a validation set for hyperparameter tuning. The hyperparameters are presented in Table~\ref{dropouthyperparam} and Table~\ref{deuphyperparam}. The hyperparameters not specified are set to the default values. For DEUP, we consider the log-density, model-variance estimate and the seen-unseen bit as the features for the error predictor. The density estimator we use is Masked-Autoregressive Flows~\cite{papamakarios2017masked} and the variance estimator used is DUE~\cite{DBLP:journals/corr/abs-2102-11409}. Note that, as indicated earlier, $x$, the input image, is not used as a feature for the error predictor. We present those ablations in the next subsection. For training DEUP, the CIFAR-10 training set is divided into 5 folds, with each fold containing 8 unique classes. For each fold, we train an instance of the main predictor, density estimator and model variance estimator on only the corresponding 8 classes. The remaining 2 classes act as the out-of-distribution examples for training the error predictor. Using these folds we construct a dataset for training the error predictor, a simple feed forward network. The error predictor is trained with $\log$ targets (i.e., the logarithm of the squared error between predicted and observed values). This helps since the scale of the errors varies over multiple orders of magnitude. We then train the main predictor, density estimator and the variance estimator on the entire CIFAR-10 dataset, for evaluation. The hyperparameters are presented in Table~\ref{deuptraininghyperparam}. For all models, we train the main predictor for $75$ and $125$ epochs for ResNet-18 and ResNet-50 respectively. We use SGD with Momentum (set to $0.9$), with a multi-step learning rate schedule with a decay of $0.2$ at epochs $[25, 50]$ and $[45, 90]$ for ResNet-18 and ResNet-50 respectively. One complete training run for DEUP takes about 1.5-2 GPU days on a V100 GPU. In total, this set of experiments took about 31 GPU days on an Nvidia V100 GPU.
\begin{table}[ht]
\caption{\textbf{Left}: Hyperparameters for training Deep Ensemble~\cite{lakshminarayanan2016simple}. \textbf{Right}: Hyperparameters for training MC-Dropout~\cite{gal2016dropout}.}
\label{dropouthyperparam}
\begin{center}
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{\textbf{Parameters}} & \multicolumn{2}{c}{\textbf{Model}} \\ \cline{2-3}
& \textbf{ResNet-18} & \textbf{ResNet-50} \\ \hline
Number of members & 5 & 5 \\
Learning Rate & 0.05 & 0.01 \\\hline
\end{tabular}
}
\quad
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{\textbf{Parameters}} & \multicolumn{2}{c}{\textbf{Model}} \\ \cline{2-3}
& \textbf{ResNet-18} & \textbf{ResNet-50} \\ \hline
Number of samples & 50 & 50 \\
Dropout Rate & 0.15 & 0.1 \\
L2 Regularization Coefficient & 6e-5 & 8e-4 \\
Learning Rate & 0.05 & 0.01 \\\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[ht]
\caption{\textbf{Left}: Hyperparameters for training DUQ~\cite{Amersfoort2020SimpleAS}. \textbf{Right}: Hyperparameters for training DUE~\cite{DBLP:journals/corr/abs-2102-11409}.}
\label{deuphyperparam}
\begin{center}
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{\textbf{Parameters}} & \multicolumn{2}{c}{\textbf{Model}} \\ \cline{2-3}
& \textbf{ResNet-18} & \textbf{ResNet-50} \\ \hline
Gradient Penalty & 0.5 & 0.65 \\
Centroid Size & 512 & 512 \\
Length scale & 0.1 & 0.2 \\
Learning Rate & 0.05 & 0.025 \\\hline
\end{tabular}
}
\quad
\resizebox{0.3\linewidth}{!}{
\begin{tabular}{lc}
\hline
\multirow{2}{*}{\textbf{Parameters}} & \textbf{Model} \\ \cline{2-2}
 & \textbf{ResNet-18} \\ \hline
Inducing Points & 50 \\
Kernel & RBF \\
Lipschitz Coefficient & 2 \\
BatchNorm Momentum & 0.99\\
Learning Rate & 0.05\\
Weight Decay & 0.0005\\\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}
\caption{Hyperparameters for training DEUP.}
\label{deuptraininghyperparam}
\begin{center}
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{\textbf{Parameters}} & \multicolumn{2}{c}{\textbf{Model}} \\ \cline{2-3}
& \textbf{ResNet-18} & \textbf{ResNet-50} \\ \hline
Uncertainty Predictor Architecture & {[}1024{]} x 5 & {[}1024{]} x 5 \\
Uncertainty Predictor Epochs & 100 & 100 \\
Uncertainty Predictor LR & 0.01 & 0.01 \\
Main Predictor Learning Rate & 0.05 & 0.01 \\ \hline
\end{tabular}
\end{center}
\end{table}
\paragraph{Ablations}
We also perform some ablation experiments to study the effect of each feature for the error predictor. The Spearman rank correlation coefficient between the generalization error and the variance feature, $V$, from DUE~\cite{DBLP:journals/corr/abs-2102-11409} alone is 37.84 $\pm$ 0.04, and the log-density, $D$, from MAF~\cite{papamakarios2017masked} alone is 30.52 $\pm$ 0.03. With only the image ($x$), the SRCC is $36.58 \pm 0.16$.
Table~\ref{cifarablation} presents the results for these experiments. We observe that combining all the features performs best. Also note that using the log-density and variance as features for the error predictor yields better performance than using them directly, indicating that the error predictor perhaps captures a better target for the epistemic uncertainty. The boolean feature ($B$) indicating seen examples, discussed in Section~\ref{sec:interactive}, also leads to noticeable improvements.
\begin{table}[ht]
\caption{Spearman Rank Correlation between predicted uncertainty and the true generalization error on OOD data (SVHN) with variants of DEUP with different features as input for the uncertainty predictor. $D$ indicates the log-density from MAF~\cite{papamakarios2017masked}, $V$ indicates the variance from DUE~\cite{DBLP:journals/corr/abs-2102-11409} and $B$ is a bit indicating whether the data point was seen during training.}
\label{cifarablation}
\begin{center}
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{\textbf{Features}} & \multicolumn{2}{c}{\textbf{Model}} \\ \cline{2-3}
& \textbf{ResNet-18} & \textbf{ResNet-50} \\ \hline
$D$+$V$+$B$ & \textbf{0.426 $\pm$ 0.009} & \textbf{0.465 $\pm$ 0.002} \\
$D$+$V$ & 0.419 $\pm$ 0.003 & 0.447 $\pm$ 0.003 \\
$V$+$B$ & 0.401 $\pm$ 0.004 & 0.419 $\pm$ 0.004 \\
$D$+$B$ & 0.403 $\pm$ 0.003 & 0.421 $\pm$ 0.002 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Predicting Uncertainty under Distribution Shift}
We also consider the task of uncertainty estimation in the setting of shifted distributions \cite{ovadia2019can, hendrycks2019robustness}. We evaluate the uncertainty predictions of models trained on CIFAR-10 on CIFAR-10-C \cite{hendrycks2019robustness}, which consists of CIFAR-10 images distorted using 16 corruptions such as Gaussian blur and impulse noise, among others. Figure~\ref{fig:shifted} shows that even in the shifted distribution setting, the uncertainty estimates of DEUP correlate much better with the error made by the predictor, compared to the baselines.
\begin{figure}[h]
\begin{center}
\centerline{\includegraphics[width=0.6\columnwidth]{figs/cifar10c_eval.pdf}}
\caption{Spearman Rank Correlation Coefficient between the predicted uncertainty and true error for models trained with CIFAR-10, and evaluated on CIFAR-10-C. DEUP outperforms the baselines on all types of corruptions.}
\label{fig:shifted}
\end{center}
\end{figure}
\section{Drug Combination Experiments}\label{appendix:dgc}
To validate DEUP's uncertainty estimates in a real-world
setting, we measured its performance on a regression task predicting the synergy of drug combinations.
While much effort in drug discovery is spent on finding novel small molecules, a potentially cheaper method is identifying combinations of pre-existing drugs which are synergistic (i.e., work well together). Indeed, drug combinations are the current standard-of-care for a number of diseases including HIV, tuberculosis, and some cancers~\cite{cihlar2016current, world2010treatment, mokhtari2017combination}.
However, due to the combinatorial nature of drug combinations, identifying pairs exhibiting synergism is challenging. Compounding this problem is the high monetary cost of running experiments on promising drug combinations, as well as the length of time the experiments take to complete. Uncertainty models could be used by practitioners downstream to help accelerate drug combination treatment discoveries and reduce involved development costs.
To test DEUP's performance on this task we used the DrugComb and LINCS L1000 datasets~\cite{zagidullin2019drugcomb, subramanian2017next}. DrugComb is a dataset consisting of pairwise combinations of anti-cancer compounds tested on various cancer cell lines. For each combination, the dataset provides access to several synergy scores, each indicating whether the two drugs have a synergistic or antagonistic effect on cancerous cell death.
LINCS L1000 contains differential gene expression profiles for various cell lines and drugs. Differential gene expressions measure the difference in the amount of mRNA related to a set of influential genes before and after the application of a drug. Because of this, gene expressions are a powerful indicator of the effect of a single drug at the cellular level.
In our experiments, each drug is represented by its Morgan fingerprint~\cite{morgan1965generation}\footnote{The Morgan fingerprint represents a molecule by associating with it a boolean vector specifying its chemical structure. Morgan fingerprints have been used as a signal of various molecular characteristics to great success~\cite{ballester2010machine, zhang2006novel}.} (with 1,024 bits and a radius of 3) as well as two differential gene expression profiles (each of dimension 978) from two cell lines (PC-3 and MCF-7). In order to use gene expression features for every drug, we only used drug pairs in DrugComb where both drugs had differential gene expression data for cell lines PC-3 and MCF-7.
We first compared the quality of DEUP's uncertainty estimations
to other uncertainty estimation methods on the task of predicting
the combination sensitivity score~\cite{malyutina2019drug} for drug pairs
tested on the cell line PC-3 (1,385 examples).
We evaluated the uncertainty methods using a train, validation,
test split of $40\%$, $30\%$, and $30\%$, respectively.
The underlying model used by each
uncertainty estimation method consisted of a \textit{single drug} fully connected neural network (2 layers with 2048 hidden units and output of dimension 1024) and a \textit{combined drug} fully connected neural network (2 layers, with 128 hidden units). The embeddings of an input drug pair's drugs produced by the \textit{single drug} network are summed and passed to the \textit{combined drug} network, which then predicts final synergy. By summing the embeddings produced by the \textit{single drug} network, we ensure that the model is invariant to permutations in order of the two drugs in the pair.
The models were trained with Adam~\cite{kingma2014adam}, using a
learning rate of $1\text{e-}4$ and weight decay of $1\text{e-}5$.
For MC-Dropout we used a dropout probability of $0.1$ on the two layers of the \textit{combined drug} network and 3
test-time forward passes to compute uncertainty estimates.
The ensemble used 3 constituent models for its uncertainty estimates. Both Ensemble and MC-Dropout models were trained with the \textit{MSE} loss.
We also compared against DUE \cite{DBLP:journals/corr/abs-2102-11409} which combines a neural network feature extractor with an approximate Gaussian process. Spectral normalization was added to all the layers of the \textit{combined drug} network and of the \textit{single drug} network. Let $d_{\text{emb}}$ denote the dimension of the output of the \textit{combined drug} network, which is also the input dimension of the approximate Gaussian process.
We conducted a grid-search over different values of $d_{\text{emb}}$ (from 2 to 100), the number of \textit{inducing points} (from 3 to 200), the learning rate, and the kernel used by the Gaussian process. The highest correlation of uncertainty estimates with residuals was attained with $d_{\text{emb}} = 10$, 100 \textit{inducing points}, a learning rate of $1\text{e-}2$, and the \textit{Matern12} kernel.
\begin{algorithm}[H]
\KwData{$\mathcal{D}$ dataset of pairwise drug combinations, along with synergy scores $((d_1, d_2), y)$}
\kwInit{ \\
Split training set into two halves, \textit{in-sample} $\mathcal{D}_{in}$ and \textit{out-of-sample} $\mathcal{D}_{out}$ \\
$f_{\mu}(d_1, d_2)$: $\hat{\mu}$ predictor which takes a pair of drugs as input \\
$f_{\sigma}^{in}(d_1, d_2)$: In-sample $\hat{\sigma}_{in}$ error predictor\\
$f_{\sigma}^{out}(d_1, d_2)$: Out-of-sample $\hat{\sigma}_{out}$ error predictor}
\kwTraining{\\
\While{training not finished}{
\textit{In-sample update} \\
Get an \textit{in-sample} batch $(d_{1, in}, d_{2, in}, y_{in}) \sim \mathcal{D}_{in}$ \\
Predict $\hat{\mu}= f_{\mu}(d_{1, in}, d_{2, in})$ and \textit{in-sample} error $\hat{\sigma}_{in}= f_{\sigma}^{in}(d_{1, in}, d_{2, in})$ \\
Compute \textit{NLL}: $\frac{\log(\hat{\sigma}_{in}^2)}{2} + \frac{(\hat{\mu} - y_{in})^2}{2\hat{\sigma}_{in}^2}$ \\
Backpropagate through $f_{\mu}$ and $f_{\sigma}^{in}$ and update.\\
\textit{Out-of-sample update} \\
Get an \textit{out-of-sample} batch $(d_{1, out}, d_{2, out}, y_{out}) \sim \mathcal{D}_{out}$ \\
Estimate $\hat{\mu}= f_{\mu}(d_{1, out}, d_{2, out})$ and \textit{out-of-sample} error $\hat{\sigma}_{out}= f_{\sigma}^{out}(d_{1, out}, d_{2, out})$ \\
Compute \textit{NLL}: $\frac{\log(\hat{\sigma}_{out}^2)}{2} + \frac{(\hat{\mu} - y_{out})^2}{2\hat{\sigma}_{out}^2}$ \\
Backpropagate through $f_{\sigma}^{out}$ and update.
}}
\caption{DEUP for Drug Combinations}
\label{pseudocode:DrugComb}
\end{algorithm}
The DEUP model we used outputs two heads $\bigl[\begin{smallmatrix} \hat{\mu} \\ \hat{\sigma} \end{smallmatrix}\bigr]$ and is trained with the \textit{NLL} $\frac{\log(\hat{\sigma}^2)}{2} + \frac{(\hat{\mu} - y)^2}{2\hat{\sigma}^2}$ in a similar fashion as in~\cite{lakshminarayanan2016simple}. To obtain a predictor of the out-of-sample
error, we altered our optimization procedure so that
gradients do not always flow through both the $\mu$ and $\sigma$ heads. Specifically, we first split the training set
into two halves, terming the former the in-sample set $\mathcal{D}_{in}$ and the
latter the out-of-sample set $\mathcal{D}_{out}$.
We denote as $f_{\sigma}^{in}$ the in-sample error predictor and $f_{\sigma}^{out}$ the out-of-sample error predictor. $f_{\sigma}^{out}$ is used to estimate total uncertainty. Note that in this setting, $f_{\sigma}^{out}$ predicts the square root of the epistemic uncertainty ($\hat{\sigma}_{out}$) rather than the epistemic uncertainty itself ($\hat{\sigma}_{out}^2$).
In our experiments, an extra bit is added as input to the model in order to indicate whether a given batch is from $\mathcal{D}_{in}$ or $\mathcal{D}_{out}$.
Through this, the same model is used to estimate $f_{\sigma}^{in}$ and $f_{\sigma}^{out}$ with the model estimating $f_{\sigma}^{in}$ when the bit indicates an example is drawn from $\mathcal{D}_{in}$ and $f_{\sigma}^{out}$ otherwise. When the batch is drawn from $\mathcal{D}_{in}$, both heads are trained using NLL using a single forward pass. However, when the data is drawn from $\mathcal{D}_{out}$ only the $\hat{\sigma}$ head is trained. To do this, we must still predict $\hat{\mu}$ in order to compute the NLL. But the $\hat{\mu}$ predictor $f_\mu$ must be agnostic to the difference between $\mathcal{D}_{in}$ and $\mathcal{D}_{out}$. To solve this, we perform two separate forward passes. The first pass computes $\hat{\mu}$ and sets the indicator bit to 0 so $f_\mu$ has no notion of $\mathcal{D}_{out}$, while the second pass computes $\hat{\sigma}$, setting the bit to 1 to indicate the true source of the batch. Finally, we backpropagate through the $\hat{\sigma}$ head only.
The training procedure is described in Algorithm \ref{pseudocode:DrugComb}.
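A condensed PyTorch sketch of the two forward passes and the masked NLL updates described above is given below; the \texttt{model(d1, d2, bit)} signature returning $(\hat{\mu}, \log\hat{\sigma}^2)$ is an assumption made for illustration and does not reproduce our exact architecture.
\begin{verbatim}
import torch

def deup_drugcomb_step(model, opt, d1, d2, y, is_out_of_sample):
    """One training step of the DrugComb DEUP procedure (sketch).

    `model(d1, d2, bit)` is assumed to return (mu, log_sigma2); `bit` is the
    indicator appended to the input (0: in-sample, 1: out-of-sample).
    """
    if not is_out_of_sample:
        mu, log_s2 = model(d1, d2, bit=0.0)
        nll = 0.5 * log_s2 + 0.5 * (mu - y) ** 2 / torch.exp(log_s2)
        loss = nll.mean()                 # trains both heads
    else:
        with torch.no_grad():             # mu must stay blind to D_out
            mu, _ = model(d1, d2, bit=0.0)
        _, log_s2 = model(d1, d2, bit=1.0)
        nll = 0.5 * log_s2 + 0.5 * (mu - y) ** 2 / torch.exp(log_s2)
        loss = nll.mean()                 # only the sigma output receives NLL gradients
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}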
We report several measures for the quality of uncertainty predictions on a separate test set in Table \ref{fig:drugcomb_1_app}.
\begin{table}[ht]
\centering
\vcenteredhbox{\resizebox{\textwidth}{!}{
\begin{tabular}{lllllll}
\hline
\textbf{Model} & \textbf{Corr. w. res.} & \textbf{U. Bound} & \textbf{Ratio} & \textbf{Log Likelihood} & \textbf{Coverage Probability} & \textbf{CI width}\\ \hline
MC-Dropout & $0.14 \pm 0.07$ & $0.56 \pm 0.05$ & $0.25 \pm 0.12$ & $ -20.1 \pm 6.8 $ & $11.4\pm 0.2$ & $3.1\pm0.1$ \\
Deep Ensemble & $0.30 \pm 0.09$ & $0.59 \pm 0.04$ & $0.50 \pm 0.13$ & $ -14.3 \pm 4.7 $ & $10.8\pm 1.4$ & $3.4\pm 0.6$ \\
DUE & $0.12 \pm 0.12$ & $0.15 \pm 0.03$ & $\bm{0.80 \pm 0.79}$ & $-13.0 \pm 0.52$ & $15.2\pm 1.0$ & $3.5\pm 0.1$ \\
DEUP & $\bm{0.47 \pm 0.03}$ & $0.63 \pm 0.05$ & $\bm{0.75 \pm 0.07}$ & $\bm{-3.5 \pm 0.25}$ & $\bm{36.1\pm 2.5}$ & $\bm{13.1\pm 0.9}$\\\hline
\end{tabular}
\label{tab:table_drugcomb_uncertainty_analysis}
}
}
\caption{Drug combinations: quality of uncertainty estimates from different methods. \textit{Corr. w. res.} shows correlation between model residuals and predicted uncertainties $\hat{\sigma}$. A best-case \textit{Upper Bound} on \textit{Corr. w. res.} is obtained from the correlation between $\hat{\sigma}$ and true samples from $\mathcal{N}(0, \hat{\sigma})$. \textit{Ratio} is the ratio between col. 1 and 2 (larger is better). \textit{Log-likelihood}: average over 3 seeds of per-sample predictive log-likelihood. \textit{Coverage Probability}: Percentage of test samples which are covered by the 68\% confidence interval. \textit{CI width}: width of the 68\% confidence interval.}
\label{fig:drugcomb_1_app}
\end{table}
For each model, we report the per sample predictive log-likelihood, coverage probability and confidence interval width, averaged over 3 seeds.
We also computed the correlation between the residuals of the model $|\hat{\mu}(x_i) - y_i|$ and the predicted uncertainties $\hat{\sigma}(x_i)$. We noted that the different uncertainty estimation methods lead to different distributions $p(\hat{\sigma}(x))$. For example, predicted uncertainties obtained with DUE always have a similar magnitude. By contrast, DEUP yields a wide range of different predicted uncertainties.
These differences between the distributions $p(\hat{\sigma}(x))$ obtained with the different methods may have an impact on the correlation metric, possibly biasing the comparison of the different methods. In order to account for differences in the distribution $p(\hat{\sigma}(x))$ across methods, we report another metric which is the ratio between the observed correlation $Corr(|\hat{\mu}(x) - y|, \hat{\sigma}(x))$ and the maximum achievable correlation given a specific distribution $p(\hat{\sigma}(x))$.
This maximum achievable correlation (referred to as the \textit{upper bound}) is not \textit{per se} a comparison metric, and is estimated (given a specific $p(\hat{\sigma}(x))$) as follows: we assume that, for each example $(x_i, y_i)$, the predictive distribution of the model $\mathcal{N}(\hat{\mu}(x_i), \hat{\sigma}(x_i))$ corresponds exactly to the distribution of the target, \textit{i.e.} $y_i \sim \mathcal{N}(\hat{\mu}(x_i), \hat{\sigma}(x_i))$. Under this assumption, the residual of the mean predictor follows a distribution $\mathcal{N}(0, \hat{\sigma}(x_i))$. We can then estimate the upper bound by computing the correlation between the predicted uncertainties $\hat{\sigma}(x_i)$ and samples from the corresponding Gaussians $\mathcal{N}(0, \hat{\sigma}(x_i))$. 5 samples were drawn from each Gaussian for our evaluation. This upper bound is reported in the Table.
Finally, we reported our comparison metric: the ratio between the correlation $Corr(|\hat{\mu}(x) - y|, \hat{\sigma}(x))$ and the upper bound. The higher the ratio is, the closer the observed correlation is to the estimated upper bound and the better the method is doing.
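The upper-bound estimate can be sketched as follows; taking the absolute value of the simulated residuals and using the Spearman rank correlation are choices made here for concreteness.
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def correlation_upper_bound(sigma_hat, n_samples=5, seed=0):
    """Best-case correlation achievable for a given distribution of sigma_hat.

    For each test point, residual magnitudes are simulated from N(0, sigma_i)
    under the assumption that the predictive distribution is exactly right,
    and the correlation with sigma_hat is averaged over the simulated draws.
    """
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_samples):
        fake_residuals = np.abs(rng.normal(0.0, sigma_hat))
        corrs.append(spearmanr(sigma_hat, fake_residuals).correlation)
    return float(np.mean(corrs))
\end{verbatim}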
It is interesting to note that the upper bound is much lower for DUE compared to other methods, as its predicted uncertainties lie within a short range of values.
Predicted $\hat{\mu}$ and uncertainty estimates can be visualized in Figure \ref{fig:drugcomb_1} for different models. MC-dropout, Ensemble and DUE consistently underestimate uncertainty, while the out-of-sample uncertainties predicted by DEUP are much more consistent with the order of magnitude of the residuals. Moreover, we observed that DUE predicted very similar uncertainties for all samples, resulting in a lower upper-bound for the correlation between residuals and predicted uncertainties compared to other methods. We observed a similar pattern when experimenting with the other kernels available in the DUE package, including the standard Gaussian kernel.
Finally, we note that in the context of drug combination experiments, aleatoric uncertainty could be estimated by having access to replicates of a given experiment (\textit{c.f.} Section \ref{estimatealeatoric}), allowing us to subtract the aleatoric part from the out-of-sample uncertainty, leaving us with the epistemic uncertainty only.
\begin{figure}[H]
\centering
\subfigure[MC-dropout]{\includegraphics[width=0.24\textwidth]{figs/DrugComb_Mcdrop_seed_1.png}\label{fig:drugcomb_1a}}
\subfigure[Ensemble]{\includegraphics[width=0.24\textwidth]{figs/DrugComb_Ens_seed_1.png}\label{fig:drugcomb_1b_appendix}}
\subfigure[DUE]{\includegraphics[width=0.24\textwidth]{figs/DrugComb_DUE_seed_1.png}\label{fig:drugcomb_1_due_appendix}}
\subfigure[DEUP]{\includegraphics[width=0.24\textwidth]{figs/DrugComb_DEUP_seed_1.png}\label{fig:drugcomb_1c_appendix}}
\caption{Predicted mean and uncertainty for different models on a separate test set. 50 examples from the test set are ordered by increasing value of true synergy score (orange). Model predictions and uncertainties are visualized in blue. MC-Dropout, Ensemble and DUE consistently underestimate the uncertainty while DEUP seems to capture the right order of magnitude. Figures made using The Uncertainty Toolbox \cite{chung2020beyond}.}
\label{fig:drugcomb_1}
\end{figure}
One complete training run for the drug combination experiments takes about 0.01 GPU days on a V100 GPU. In total, this set of experiments took about 0.2 GPU days on an Nvidia V100 GPU.
\section{Aleatoric and Epistemic Uncertainty}
\label{sec:theoryL2}
Consider a supervised learning algorithm (or learner) $\cal L$ mapping a dataset ${\cal D}$
to a predictive function $\hat{f}={\cal L}({\cal D})$. $\cal L$
tries to minimize
the expected value of a supervised learning loss $l(\hat{f}(x),y) \in \mathbb{R}$ under unknown probability measure $P(Y|x)$. In this section, we focus on regression tasks with the squared loss $l(\hat{y}, y) = (\hat{y} - y)^2$, with $y \in {\mathbb{R}}$. In Appendix~\ref{appendix:theorygeneral}, we provide the corresponding results for a general loss function, with discrete or continuous outputs.
\begin{definition}
The {\bf expected loss} of a predictor $\hat{f}$ at $x$ is defined as:
\begin{align}\label{eq:totaluncertainty}
{\cal U}(\hat{f},x) = \int (\hat{f}(x) - y)^2 dP(y|x).
\end{align}
\vspace*{-5.5mm}
\label{def:u}
\end{definition}
The expected loss is an unknown scalar, as we generally do not have access to the true data distribution $P(Y|x)$. The errors made by any predictor $\hat{f}$ at $x$ are due to both the inherent randomness of $P(Y|x)$ (aleatoric uncertainty) and the lack of knowledge of the predictor that can be tackled by acquiring more information around $x$ (epistemic uncertainty). Because of this natural decomposition of the expected loss, we will also refer to it as {\bf total uncertainty}, and we will use the two terms interchangeably.
Bayes-optimal predictors $f^*$ satisfy the following equation at every $x$:
\begin{align*}
\forall \Tilde{y} \in {\mathbb{R}} \quad \int (f^*(x) - y)^2 dP(y|x) \leq \int (\Tilde{y} - y)^2 dP(y|x).
\end{align*}
They depend on the underlying data distribution only and not on learner $\cal L$ or trained predictor $\hat{f}$. Additionally, for any $x$, all Bayes-optimal predictors have the same total uncertainty at that $x$. The error made by a Bayes-optimal predictor is irreducible and we define it as the aleatoric uncertainty:
\begin{definition}
\label{def:a}
The {\bf aleatoric uncertainty} at $x$ is the total uncertainty
of any Bayes-optimal predictor $f^*$ at $x$:
\begin{align}
{\cal A}(x) = {\cal U}(f^*,x).
\end{align}
and we note that by definition ${\cal A}(x) \leq {\cal U}(f,x),\; \forall f \; \forall x$.
\end{definition}
$ \ $ \\
We now define the epistemic uncertainty of a predictor $\hat{f}$ as the
gap between the error of $\hat{f}$ at $x$ and the lowest possible error at $x$, i.e., the reducible part of the loss, given more knowledge.
\begin{definition}
\label{def:e}
The {\bf epistemic uncertainty} ${\cal E}(\hat{f},x)$ of a predictor $\hat{f}$ at $x$ is given by
\begin{align} \label{eq:epistemic}
{\cal E}(\hat{f},x) = {\cal U}(\hat{f},x) - {\cal A}(x) = {\cal U}(\hat{f},x) - {\cal U}(f^*,x).
\end{align}
\end{definition}
Using these definitions, we can present our main result for the regression setting.
\begin{proposition}
\label{epistemic_mse}
In a regression task with Gaussian ground truth $P(y|x) = {\mathcal{N}}(y; \ f^*(x), \ \sigma^2(x))$,
\begin{align*}
&{\cal E}(\hat{f},x) = (\hat{f}(x) - f^*(x))^2
\\
&{\cal U}(\hat{f},x) = \mathbb{E}_{P(y|x)}[(\hat{f}(x) - y)^2] = (\hat{f}(x) - f^*(x))^2 + \sigma^2(x)
\\
&{\cal A}(x) = \mathbb{E}_{P(y|x)}[(f^*(x) - y)^2] = \sigma^2(x)
\end{align*}
\end{proposition}
The proof is provided in Appendix~\ref{appendix:proofs}.
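The decomposition ${\cal U}(\hat{f},x) = {\cal E}(\hat{f},x) + {\cal A}(x)$ can also be checked numerically; the following self-contained sketch, with an arbitrary toy ground truth and predictor, estimates the three quantities by Monte Carlo.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

f_star = lambda x: np.sin(3 * x)          # Bayes-optimal predictor E[Y | x]
sigma  = lambda x: 0.1 + 0.2 * np.abs(x)  # heteroscedastic noise level
f_hat  = lambda x: 0.8 * np.sin(3 * x)    # some imperfect predictor

x = 0.7
y = rng.normal(f_star(x), sigma(x), size=1_000_000)  # samples from P(Y | x)

U = np.mean((f_hat(x) - y) ** 2)   # total uncertainty of f_hat at x
A = np.mean((f_star(x) - y) ** 2)  # aleatoric uncertainty, close to sigma(x)^2
E = (f_hat(x) - f_star(x)) ** 2    # epistemic uncertainty (Proposition above)

assert abs(U - (A + E)) < 1e-2     # U = A + E up to Monte Carlo error
\end{verbatim}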
\begin{wrapfigure}{l}{0.5\textwidth}
\vspace*{-4mm}
\begin{center}
\centerline{\resizebox{4.5cm}{!}{\input{figs/bias_variance_bis}}}
\caption{\sl Illustrating noise and bias. Observed $y$ is independent noise plus true $\mathbb{E}[Y|x]=f^*(x)$, itself best approximated by unknown $\tilde{f}(x)$ in parametric set $S$ (orange), e.g., using Bayesian posterior distribution $p(f|\mathcal{D})$ (grey) over parameters. $\tilde{f}$ is the closest function in $S$ to $f^*$, leading to a bias $b(x) = f^*(x) - \tilde{f}(x)$.
$\varepsilon(\hat{f})(x)=f^*(x)-\hat{f}(x)$ is the reducible error of the main predictor $\hat{f}$ (e.g. posterior mean),
whose square corresponds to EU or lack of knowledge, that DEUP aims at estimating.
With $\tilde{f}$ the unknown ideal predictor and $\hat{f}$ the actual (e.g. mean) predictor, the square of $\Delta f(x) = \tilde{f}(x) - \hat{f}(x)$ induces
variance in the posterior. $\varepsilon(\hat{f})(x) = b(x) + \Delta f(x)$ indicates that using the variance as a proxy for EU misses out a non-negligible quantity: the bias $b(x)$.
}
\label{fig:bias-variance-error}
\end{center}
\vspace*{-6mm}
\end{wrapfigure}
\textbf{Relation to existing notions of EU:} Consider a parametric model $p(Y|x, \theta)$ and a learner maintaining a distribution over parameters $\theta \in \Theta$, each corresponding to a predictor $f$ in a parametric set of functions $S$, possibly starting from a prior $p(\theta)$ that would lead to a posterior distribution $p(\theta| \mathcal{D})$. Clearly, the fact that multiple $\theta$'s and corresponding values of $f$ are compatible with the data and the prior indicates lack of knowledge. Because the lack of knowledge indicates where an interactive learner should acquire more information, this justifies the usage of dispersion measures, such as the variance or the entropy of the posterior predictive, as proxies for EU.
However, the limited capacity of $S$ or the prior $p(\theta)$ may keep the optimal $\tilde{f}$ in $S$ at a distance from the Bayes-optimal predictor $f^*$. We can refer to these self-imposed constraints as a form of bias, in the sense that the learner is usually biased towards the prior preferences, e.g., towards smoother predictors (typically more so when the dataset is smaller).
This arises when training neural networks with limited data, where the network may not use all of its capacity due to implicit (and not fully understood) regularization properties of SGD~\cite{Kale2021SGDTR}, explicit regularization, early stopping or a combination of these, which can induce a preference on the functions it learns. In Fig.~\ref{fig:bias-variance-error}, we illustrate this gap with the bias function $b(x) = f^*(x) - \tilde{f}(x)$. Because of this bias, model variance cannot be an accurate measure of EU $\mathcal{E}(\hat{f}, x)$ in the general case.
In Deep Ensembles \cite{lakshminarayanan2016simple}, for example, if all the networks in the ensemble tend to fail in systematic (i.e. potentially reducible) ways, this aspect of prediction failure will not be captured by the variance. While the ensemble variance tells us how uncertain we are about which of the networks we could draw is best, it does not tell us how poor that network is, even in a noise-free setting. On the other hand, with flexible models like neural networks, adding examples around $x$ where $b(x)^2$ is large may make it possible to increase capacity around $x$ and reduce $b(x)^2$.
\section{Direct Epistemic Uncertainty Prediction}
\label{sec:DEUPcore}
DEUP (Direct Epistemic Uncertainty Prediction) {\bf uses observed out-of-sample
errors in order to train an error predictor which
can be used to estimate epistemic uncertainty} elsewhere, as suggested directly by Def.~\ref{def:u}-\ref{def:e}.
These may be in-distribution or out-of-distribution errors,
depending on what we care about and the kind of data that is available.
In Algo.~\ref{pseudocode:deupactive}, we provide the pseudo-code for DEUP in interactive settings. The pseudo-code for the simpler version with a fixed training set is given in Appendix~\ref{appendix:pseudocodes}. We consider three separate cases for the aleatoric uncertainty:
\vspace{-2mm}
\begin{enumerate}[leftmargin=*,topsep=0pt,parsep=0pt,itemsep=1mm]
\item If we know that ${\cal A}(x)=0$
then $u(x)$ is an estimate of ${\cal E}(\hat{f},x)$ as well as of ${\cal U}(\hat{f},x)$. In this case, we choose $a$ to be the zero function in Algo.~\ref{pseudocode:deupactive}. Example: Sec.~\ref{smosection}. Also, in cases where it is not possible to estimate the aleatoric uncertainty, we can rely on the total uncertainty estimates as a proxy for EU. Example: Sec.~\ref{exp:ue}.
\item If an estimator $x\rightarrow a(x)$ of aleatoric uncertainty is available,
then $u(x)-a(x)$ becomes an estimate of the epistemic uncertainty of $\hat{f}$ at $x$.
As an example for this scenario, consider an active learning setting, where the evaluation of a new candidate requires wet lab experiments, where we know the margins of errors of the instruments used.
\item
When we have access to an oracle that samples $Y$ given query $x$ from the environment $P(Y|x)$ (e.g., in active learning or SMO), then ${\cal A}(x)$ can be estimated using the empirical variance of different outcomes of the oracle at the same input $x$; see Appendix~\ref{estimatealeatoric}. It is common practice to perform replicate experiments in biological assays~\cite{lee2000importance, schurch2016many}. Variation across replicates, for a given input x, provides information about the amount of noise associated with x, i.e. the magnitude of the aleatoric uncertainty ${\cal A}(x)$.
\end{enumerate}
\subsection{Fixed Training Set}
\label{fixedtrainingset}
Consider the scenario with a fixed training set $\mathcal{D}=\{(x_i, y_i)\}_{i=1}^{N_{train}}$, where $x_i \sim P(X)$ and $y_i \sim P(Y|X=x_i) = \mathcal{N}(Y; f^*(x_i), \sigma^2(x_i))$. The goal is to obtain an estimator of the total uncertainty $\mathcal{U}(\hat{f}, .)$ for $\hat{f} = \mathcal{L}(\mathcal{D})$ that can readily be applied to input points $x$ sampled from the same distribution $P(X)$.
According to Prop.~\ref{epistemic_mse}, we can train a secondary predictor $u$ on (input, target) pairs $(x, e)$, where $e=(y - \hat{f}(x))^2$, in order to obtain such an estimator. To see this, note that in the limit of infinite data, if the learning algorithm ensures asymptotic convergence to a Bayes-optimal predictor (such as $k$-nearest neighbors with $k$ increasing at a proper rate, or neural networks whose size and regularization are hyperparameters optimized on a growing validation set), the resulting estimates $u(x)$ converge to $\mathbb{E}_{y \sim P(y | x)} [ (\hat{f}(x) - y)^2] = \mathcal{U}(\hat{f}, x)$. These (input, target) pairs can be obtained from a hold-out dataset $\mathcal{D}^{OOS}=\{(x_i, y_i)\}_{i=N_{train}+1}^{N_{train} + N_{OOS}}$, sampled from the same distribution as $\mathcal{D}$.
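A minimal scikit-learn sketch of this procedure is given below; the choice of regressors and the \texttt{learner} helper are illustrative, not the models used in our experiments.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_deup_fixed(learner, X_train, y_train, X_oos, y_oos):
    """DEUP with a fixed training set.

    `learner()` is assumed to return a fresh scikit-learn regressor. f_hat is
    fit on the training set; u is fit on observed squared errors over both the
    training and hold-out points, following the pseudo-code above.
    """
    f_hat = learner().fit(X_train, y_train)
    errors = np.concatenate([
        (y_train - f_hat.predict(X_train)) ** 2,
        (y_oos   - f_hat.predict(X_oos)) ** 2,
    ])
    X_u = np.concatenate([X_train, X_oos])
    u = GradientBoostingRegressor().fit(X_u, errors)
    return f_hat, u  # u(x) estimates U(f_hat, x); subtract a(x) to get E(f_hat, x)
\end{verbatim}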
\vspace*{-2mm}
\subsection{Interactive Settings}
\label{sec:interactive}
\vspace*{-2.5mm}
\begin{wrapfigure}{r}{0.55\textwidth}
\vspace*{-13mm}
\begin{center}
\begin{minipage}{0.55\textwidth}
\begin{algorithm}[H]
\textbf{Data: }$\mathcal{D}_{init}$ initial dataset with pairs $(x, y) \in {\cal X} \times \mathbb{R}$\\
$a: {\cal X}\mapsto \mathbb{R}$, estimator of aleatoric uncertainty \\
$\hat{f}: {\cal X}\mapsto \mathbb{R}$, main predictor (of $y$ given $x$) \\
$u: {\cal X}\mapsto \mathbb{R}$, total uncertainty estimator \\
$\phi: {\cal X}\mapsto \mathbb{R}^k$, chosen stationarizing features \\
$\pi$: acquisition machinery that proposes new input points from $\cal X$, using the current $\hat{f}$ and EU estimates. \\
$\mathcal{D}_u \leftarrow \emptyset$, training dataset for $u$ \\
$\mathcal{D} \leftarrow \mathcal{D}_{init}$, dataset of training points for $\hat{f}$ seen so far \\
$x_{acq} \leftarrow \emptyset, y_{acq} \leftarrow \emptyset$ \\
\textbf{Optional: }Pre-fill $\mathcal{D}_u$ using Algo.~\ref{pseudocode:crossval} and fit $u$ on $\mathcal{D}_u$\\
\While{stopping criterion not reached}{
\textbf{Optional: }Fit $a$ on $\mathcal{D}$ if necessary (e.g. see Appendix~\ref{estimatealeatoric})\\
Fit predictor $\hat{f}$ and features $\phi$ on $\mathcal{D}$\\
$\mathcal{D}_u \leftarrow \mathcal{D}_u \cup \{ ( \phi(x_{acq}), (y_{acq} - \hat{f}(x_{acq}))^2) \}$
\\
Fit $u$ on $\mathcal{D}_u$ \\%(which contains stationarizing features $\phi$, and the error of $\hat{f}$) \\
$x_{acq} \sim \pi(. | \hat{f}, u - a)$ (can be either a single point, or a batch of points)\\
Sample outcomes from the ground truth distribution: $y_{acq} \sim P(\ . \ |x_{acq})$\\
$\mathcal{D}_u \leftarrow \mathcal{D}_u \cup \{ ( \phi(x_{acq}), (y_{acq} - \hat{f}(x_{acq}))^2 ) \}$\\
$\mathcal{D} \leftarrow \mathcal{D} \cup \{ (x_{acq}, y_{acq}) \}$
}
\vspace*{-1mm}
\caption{Training procedure for DEUP in an Active Learning setting}
\label{pseudocode:deupactive}
\end{algorithm}
\end{minipage}
\end{center}
\vspace*{-7mm}
\end{wrapfigure}
Interactive settings, in which EU estimates are used to guide the acquisition of new examples, provide a more interesting use case for DEUP. However, they introduce their own challenges, as the main predictor is retrained multiple times with the newly acquired points; we address these challenges below.
We consider an initial training dataset $\mathcal{D}^0 = \{(x^0_i, y^0_i)\}_{i=1}^{N_{0}}$. At each round $t>0$, we have access to a trained predictor $\hat{f}^{t-1}$, estimating the unknown ground-truth $f^*$, and an estimator $\hat{e}^{t-1}$ of its EU. Suppose there is an acquisition function, that given any predictor $\hat{f}$ and an estimator $\hat{e}$ of its EU, defines a distribution on the input space $\mathcal{X}$: $\pi(X | \hat{f}, \hat{e})$, from which we can sample points to add to the training set. Assume that at round $t > 0$, $N_t$ points $\{(x^t_i, y^t_i)\}_{i=1}^{N_t}$ are acquired, where $x^t_i \sim \pi( X | \hat{f}^{t-1}, \hat{e}^{t-1})$, and $y^t_i \sim P(Y | X=x^t_i) = \mathcal{N}(Y; f^*(x), \sigma^2(x))$ added to the training dataset: $\mathcal{D}^t = \mathcal{D}^{t-1} \cup \{(x^t_i, y^t_i)\}_{i=1}^{N_t}$. From $\mathcal{D}^t$, a predictor $\hat{f}^t = \mathcal{L}(\mathcal{D}^t)$ is obtained, which in turn will be used to decide which $N_{t+1}$ points are going to be acquired next, through the acquisition policy $\pi$ and an estimator $\hat{e}^t$ of its EU. How can we train such an EU estimator, without wasting $N_{OOS}$ (input, target) pairs that could be used to obtain a better estimator of $f^*$?
\textbf{The problem with the trivial dataset: } Inspired by the fixed training set setting, it might be tempting to train an error predictor $\hat{e}^t$ on $\mathcal{D}^t_{u, 1} = \cup_{\tau=0}^t \{(x^\tau_i, (y^\tau_i - \hat{f}^t(x^\tau _i))^2) \}_{i=1}^{N_\tau}$. However, $x^\tau_i$ was used to train $\hat{f}^t$ and so we have an in-sample rather than OOS error.
There would thus be no reason for $\hat{e}^t$ to provide accurate estimates in unexplored regions of the search space $\mathcal{X}$.
An alternative is to learn from the errors made by previous versions of the predictor, i.e. $\{f^\tau, \ \tau \in \{0, \dots, t\}\}$, on points that were not used to train each of them, which we can simply choose to be the subsequent acquired points. More formally, consider the dataset $\mathcal{D}^t_{u, 2} = \cup_{\tau=0}^t \cup_{\tau'=\tau}^t \{(x^{\tau'}_i, (y^{\tau'}_i - \hat{f}^\tau(x^ {\tau'} _i))^2) \}_{i=1}^{N_{\tau'}}$. A learning algorithm with good generalization properties (such as modern neural networks), would in principle be able to extrapolate from this dataset, and estimate the errors (or total uncertainty) made by $\hat{f}^t$ on points $x \in \mathcal{X}$ not seen so far, i.e. belonging to what we call the \textit{frontier of knowledge}. However, as each $x_i^{\tau'}$ appears $t - \tau' + 1$ times in $\mathcal{D}^t_{u, 2}$, with different targets at each time (each corresponding to error made by $\hat{f}^\tau$ on $x_i^{\tau'}$, for $\tau \in \{\tau', \dots, t\}$), there is no reason to assume that the resulting predictor would estimate the errors of $\hat{f}^t$. The reason is that the targets are actually functions of not only the input $x$, but also of the dataset used to train the different predictors $\hat{f}^\tau$, i.e. $\mathcal{D}^\tau$. This is actually obvious when we write $(y^{\tau'}_i - \hat{f}^\tau(x^{\tau'} _i))^2 = \mathcal{U}(\mathcal{L}(\mathcal{D}^\tau), x^{\tau'}_i)$. This means that in order to obtain an estimator $\hat{e}^t$, we should train a general error estimator $\hat{e}$, with inputs of the form $(\mathcal{D}, x)$ and targets of the form $(y - \mathcal{L}(\mathcal{D})(x))^2$, using the historical dataset $\mathcal{D}^t_{u, 2} = \cup_{\tau=0}^t \cup_{\tau'=\tau}^t \{((\mathcal{D}^\tau, x^{\tau'}_i), (y^{\tau'}_i - \hat{f}^\tau(x^ {\tau'} _i))^2) \}_{i=1}^{N_{\tau'}}$, and then define $\hat{e}^t(x)$ to be $\hat{e}(\mathcal{D}^t, x)$ for any $x\in \mathcal{X}$. In summary, it is the acquired points, before they are added to the training set, that play the role of out-of-sample (they are actually OOD) points to train the EU estimator.
However, $\mathcal{D}$ being a very high-dimensional object, with size growing with the number of acquired points, we may face severe overfitting issues when using $(\mathcal{D}, x)$ as inputs. We thus propose using \textbf{stationarizing features} of the dataset $\mathcal{D}$ at $x$, that we denote by $\phi_\mathcal{D}(x)$, as inputs to the error predictor instead of $(\mathcal{D}, x)$ (we typically include $x$ in $\phi_\mathcal{D}(x)$). These stationarizing features also address concerns with directly estimating uncertainty using a second order predictor trained simply using L2 error, raised in~\cite{bengs2022difficulty}.
In this paper, we explored $\phi_\mathcal{D}(x) = \left(x, s, \hat{q}(x | \mathcal{D}), \hat{V}(\tilde{\cal L}, \mathcal{D}, x)\right)$, where $\hat{q}(x | \mathcal{D})$ is a density estimate from data $\mathcal{D}$ at $x$, $s = 1$ if $x \in \mathcal{D}$ otherwise $0$, $\tilde{\cal L}$ is a learner that produces a distribution over predictors (e.g. a GP or a Deep Ensemble~\cite{lakshminarayanan2016simple}), and $\hat{V}(\tilde{\cal L}, \mathcal{D}, x)$ is an estimate of the model variance of $\tilde{\cal L}$ at $x$. Note that $\tilde{\mathcal{L}}$ can be chosen to be the same as $\mathcal{L}$. For numerical reasons, we found it preferable to use $\log \hat{q}$ and $\log \hat{V}$ instead of $\hat{q}$ or $\hat{V}$ as input features. $\hat{q}$ can be obtained by training a density estimator (such as a Kernel Density Estimator or a flow-based deep network \cite{rezende2015variational}, in our case). Like the other predictors, the density estimator also needs to be fine-tuned when new data is added to the training set. While these features are not required per se to train DEUP, they provide clues to help train the uncertainty estimator, and one can play with the trade-off of computational cost versus usefulness of each clue. They sometimes come at no greater cost, if our main predictor is the mean prediction of the learner's output distribution, and if we use the corresponding variance as the only extra feature, as is the case in the experiments of Sec.~\ref{smosection} with GPs.
In our experiments, we found that using inputs $\phi_{\cal D}(x)$, or even a subset of the $4$ possible features, is sufficient to train an uncertainty estimator with targets $(\hat{f}(x) - y)^2$.
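As an illustration, one possible way of assembling these features is sketched below (illustrative Python; the kernel density estimator and the GP are stand-ins for the density model and for the variance-producing learner $\tilde{\cal L}$, not the exact models used in our experiments):
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.gaussian_process import GaussianProcessRegressor

def make_phi(D_x, D_y):
    """Return phi_D(x) = (x, s, log q_hat(x | D), log V_hat(L_tilde, D, x)).
    D_x: (n, d) inputs and D_y: (n,) targets of the dataset D."""
    kde = KernelDensity(bandwidth=0.2).fit(D_x)        # density estimate q_hat
    gp = GaussianProcessRegressor().fit(D_x, D_y)      # variance estimate V_hat from L_tilde
    train_set = {tuple(np.round(r, 12)) for r in D_x}

    def phi(x):
        x = np.atleast_2d(x)
        s = np.array([1.0 if tuple(np.round(r, 12)) in train_set else 0.0 for r in x])
        log_q = kde.score_samples(x)                   # score_samples returns log-density
        _, std = gp.predict(x, return_std=True)
        log_v = np.log(std ** 2 + 1e-12)
        return np.column_stack([x, s, log_q, log_v])
    return phi

rng = np.random.default_rng(0)
D_x = rng.uniform(-1, 1, (50, 1)); D_y = np.sin(3 * D_x[:, 0]) + 0.1 * rng.standard_normal(50)
phi = make_phi(D_x, D_y)
print(phi(np.array([[0.0], [5.0]])))    # an in-distribution point and a far-away point
\end{verbatim}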
For computational reasons, it is unfeasible to store all previous datasets and all previous predictors $\hat{f}^\tau$, and we propose a subset of $\mathcal{D}^t_{u, 2}$, that makes it possible to have an online algorithm, without memorization:
\begin{equation*}
\mathcal{D}^t_u = \cup_{\tau=0}^t \cup_{\tau'=\tau}^{\tau + 1} \{(\phi_{\mathcal{D}^\tau}(x^{\tau'}_i), (y^{\tau'}_i - \hat{f}^\tau(x^{\tau'}_i))^2) \}_{i=1}^{N_{\tau'}}.
\end{equation*}
The corresponding pseudo-code is provided in Algo.~\ref{pseudocode:deupactive}. Note that ${\cal D}_u$ is incremented twice with $x_{acq}$ but $\phi_{\cal D}(x_{acq})$ is different each time because $x_{acq}$ first is and then is not yet in ${\cal D}$.
\vspace{-1mm}
\subsubsection{\textbf{Pretraining the error predictor}}
\label{sec:kickstarting}
\vspace{-1mm}
Consider an interactive setting, and suppose that the oracle is so costly that we cannot afford to wait for a few rounds of acquisition in order to build a training dataset large enough for the uncertainty estimator to provide reasonable EU estimates. To this end, we propose a cross-validation strategy to pretrain the secondary learner before any acquisition step, using pairs $(\phi_{\tilde{D}}(x), (f_{\tilde{D}}(x) - y)^2)$,
\begin{wrapfigure}{r}{0.51\textwidth}
\vspace*{-6mm}
\begin{center}
\begin{minipage}{0.51\textwidth}
\begin{algorithm}[H]
\textbf{Input: }$\mathcal{D}_u$ \\
\While{$|\mathcal{D}_u| < N_{pretrain}$}{
Split $\mathcal{D}_{init}$ into $K$ random subsets $\mathcal{D}_1, \dots, \mathcal{D}_K$ of equal size. Define $\tilde{\cal D} = \bigcup_{k=1}^{K-1} \mathcal{D}_k$\\
Fit a new predictor $\hat{f}$ on $\tilde{\mathcal{D}}$, and fit the features $\phi$ on $\tilde{\mathcal{D}}$\\
$\mathcal{D}_u \leftarrow \mathcal{D}_u \cup \bigcup_{(x, y) \in \mathcal{D}_{init}} \{( \phi(x), (y - \hat{f}(x))^2)\}$
}
\caption{Pre-filling the uncertainty estimator training dataset $\mathcal{D}_u$}
\label{pseudocode:crossval}
\end{algorithm}
\end{minipage}
\end{center}
\vspace*{-5mm}
\end{wrapfigure}where $\tilde{D}$ is any subset of the initially available data, which we use to train a predictor $f_{\tilde{D}} = \mathcal{L}(\tilde{D})$, and $(x, y)$ is any pair in $\mathcal{D}_{init}$, whether in $\tilde{D}$ or not. With this pretraining phase, we obtain better uncertainty estimates as soon as the first batch of acquired points comes in. There are several ways to build the pretraining dataset. We present one of them in Algo.~\ref{pseudocode:crossval}. The procedure stops when the training dataset for the secondary learner, $\mathcal{D}_u$, contains at least $N_{pretrain}$ elements. In our experiments, we choose $N_{pretrain}$ to be a small multiple of the number of initial training points.
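A minimal sketch of this pre-filling step is given below (illustrative Python; the $k$-NN predictor and the simple two-feature $\phi$ are placeholders):
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def prefill_Du(x_init, y_init, phi_fn, K=4, n_pretrain=400, seed=0):
    """Sketch of the pre-filling procedure: repeatedly hold out one of K random
    subsets, fit f_hat (and the features) on the other K-1, then record
    (phi(x), (y - f_hat(x))^2) for every pair in D_init, in-sample or not."""
    rng = np.random.default_rng(seed)
    Phi, E = [], []
    n = len(x_init)
    while sum(len(e) for e in E) < n_pretrain:
        train_idx = rng.permutation(n)[: (K - 1) * n // K]   # union of K-1 subsets
        f_hat = KNeighborsRegressor(n_neighbors=5).fit(x_init[train_idx], y_init[train_idx])
        Phi.append(phi_fn(x_init, x_init[train_idx]))
        E.append((y_init - f_hat.predict(x_init)) ** 2)
    return np.concatenate(Phi), np.concatenate(E)

# Toy usage with phi(x) = (x, s), where s = 1 iff x was used to train f_hat.
def phi_fn(x_all, x_train):
    s = np.isin(x_all[:, 0], x_train[:, 0]).astype(float)
    return np.column_stack([x_all, s])

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (100, 1)); y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(100)
features, errors = prefill_Du(x, y, phi_fn)
print(features.shape, errors.shape)
\end{verbatim}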
\vspace{-3mm}
\section{Related Work}
In \cite{KIUREGHIAN2009105}, the authors characterize the sources of uncertainty as aleatoric (inherent noise) and epistemic (incomplete knowledge). Gaussian Processes (GPs) \cite{Williams1995GaussianPF} are a popular tool for EU estimation because the variance among the functions in the posterior (given the data) can be computed analytically.
In a deep learning context, \cite{blundell2015weight, kendall2017uncertainties, depeweg2018decomposition} use the posterior distribution of network weights \cite{mackay1992practical} in Bayesian Neural Networks (BNNs) to capture EU.
Other techniques that rely on measuring the discrepancy between different predictors as a proxy for EU include MC-Dropout \cite{gal2016dropout}, that interprets Dropout \cite{hinton2012improving} as a variational inference technique in BNNs. These approaches, because they rely on sampling multiple sets of weights or dropout masks at inference time, share some similarities with ensemble-based methods, that include bagging \cite{breiman1996bagging} and boosting \cite{efron1994introduction}, in which multiple predictors are trained and their outputs are averaged to make a prediction, although the latter measure variability due to the training set instead of the spread of functions compatible with the given training set, as in Bayesian approaches. For example, \cite{shaker2020aleatoric} use Random Forests \cite{breiman2001random} to estimate EU. Deep Ensembles \cite{lakshminarayanan2016simple} are closer to the Bayesian approach, using an ensemble of neural networks that differ because of randomness in initialization and training (as you would have with MCMC, albeit in a more heuristic way). In addition to this, several other variants of this central idea of measuring discrepancy between different predictors have been proposed recently \cite{malinin2018predictive, tagasovska2018single, amini2019deep,Liu2020SimpleAP, Amersfoort2020SimpleAS, Wen2020BatchEnsembleAA, Antoran2020DepthUI, kirichenko2020normalizing, malinin2020regression, DBLP:journals/corr/abs-2102-11409,mukhoti2021deep}. We discuss some of these methods in
more detail in Appendix~\ref{appendix:RW}.
More closely related to DEUP, \cite{Yoo2019LearningLF} propose a loss prediction module for learning to predict the value of the loss function. \cite{Hu2020ANP} also propose using a separate network that learns to predict the variance of an ensemble. These methods, however, are trained only to capture the in-sample error, and do not capture the out-of-sample error which is more relevant for scenarios like active learning where we want to pick $x$ where the reducible {\em generalization error} (see Def.~\ref{def:e}) is large.
EpiOut \cite{Umlauft2020RealtimeUD, hafner2019noise} propose learning a binary output that simply distinguishes between low or high EU.
\vspace{-3mm}
\section{Experiments}
\vspace{-1.5mm}
\label{sec:experiments}
In our experiments, we focus on interactive settings, where having good uncertainty estimates is essential for efficient acquisition. As explained in Sec.~\ref{sec:interactive}, it is the acquired points, before they are used to retrain the main predictor, that act as the {\em out-of-sample} examples to train DEUP. In RL, because the targets (e.g. of Q-Learning) are themselves estimates and moving, data seen at any particular point is normally out-of-sample and can inform the uncertainty estimator, when the inputs are used with the stationarizing features. We emphasize that in order to make fair comparisons, \textbf{DEUP does not have access to any additional OOD data during training}, in any of the experiments that follow. Instead, we use Algo.~\ref{pseudocode:crossval} to generate the OOD data used for training the error predictor. Finally,
note that in terms of computational cost, training DEUP with density and model variance as stationarizing features is on par with training an ensemble of $5$ networks.
\vspace*{-3mm}
\subsection{Sequential Model Optimization}
\label{smosection}
\vspace*{-1mm}
Sequential model optimization is a form of active learning, where at each stage, the learner chooses query examples to label, looking for examples with a high value of the unknown oracle function. Such examples are selected so they have a high predicted value (to maximize the unknown oracle function) and a large predicted uncertainty (offering the opportunity of discovering
yet higher values). Acquisition functions, such as Upper Confidence Bound (UCB, \cite{srinivas2009gaussian}) and Expected Improvement (EI, \cite{movckus1975bayesian}) trade-off exploration and exploitation, and one can select the next candidate by looking for $x$'s maximizing the acquisition function. We combine UCB and EI with DEUP (DEUP-UCB and DEUP-EI) to perform active learning, treating the main predictor and DEUP EU predictions at $x$ respectively as mean and variance of a Gaussian distribution for the learner's guess of the value of the oracle at $x$.
We showcase how using DEUP to calibrate GP variances (used as the only extra input for DEUP) allows for better performances in higher-dimensional optimization tasks. Specifically, we compare DEUP-EI to TuRBO-EI \cite{eriksson2019scalable}, a state-of-the-art method for sequential optimization, that fits a collection of local GP models instead of a global one in order to perform efficient high-dimensional optimization, on the Ackley function \cite{ackley2012connectionist} as oracle, a common benchmark for optimization algorithms. The oracle function can be defined for arbitrary dimensions, and has many local minima.
In Fig.~\ref{ackley}, we compare different methods on the Ackley-10 function, in addition to the optimum reached in budget-constrained optimization problems for different oracle input dimensions, and we find that adapting DEUP to TuRBO consistently outperforms regular TuRBO, especially in higher dimensions.
See also Appendix~\ref{appendix:SMO} for 1D and 2D SMO tasks where DEUP-EI outperforms GP-EI \cite{bull2011convergence}, as well as neural networks with MC-Dropout or Ensembles.
We find GP-EI getting stuck in local optima whereas DEUP-EI was able to reach the global maximum consistently. Experimental details are provided in Appendix~\ref{appendix:SMO3}.
\begin{figure}[h!]
\vspace*{-5mm}
\begin{center}
\centerline{\includegraphics[width=0.9\linewidth]{figs/multioptima_results_new_full.pdf}}
\caption{{\em Left.} Max. value reached by the different optimization methods, for the 10 dimensional Ackley function. In each run, all the methods start with the same initial 20 points. Shaded areas represent the standard error across 3 runs.
{\em Right.} Max. value reached in the budget-constrained setting, on the Ackley functions of different dimensions. Error bars represent the standard error across 3 different runs, with different initial sets of 20 pairs. The budget is $120$ function calls in total. Higher is better and TuRBO-DEUP-EI is less hurt by dimensionality.}
\label{ackley}
\end{center}
\vspace{-8mm}
\end{figure}
\vspace{-2mm}
\subsection{Reinforcement Learning}
\vspace{-1mm}
\begin{wrapfigure}{l}{0.4\textwidth}
\vspace*{-6mm}
\begin{center}
\centerline{\includegraphics[width=0.4\textwidth]{figs/RL_regret}}
\caption{Average regret on CartPole task. Error bars represent standard error across 5 runs.}
\label{RL_regret}
\end{center}
\vspace*{-9mm}
\end{wrapfigure}
Similar to SMO, a key challenge in RL is efficient exploration of the state space. To investigate the effectiveness of DEUP's uncertainty estimates in the context of RL, we incorporate the epistemic uncertainties predicted by DEUP into DQN \cite{mnih2013playing}, which we refer to as DEUP-DQN. Specifically, we train the uncertainty predictor with the objective of predicting the TD-error, using log-density estimates as a stationarizing feature. The predicted uncertainties are then used as an exploration bonus on the Q-values. Details of the experimental setup are in Appendix~\ref{appendix:RL}.
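The following sketch illustrates how the predicted uncertainty can enter action selection as an exploration bonus (illustrative PyTorch; the small networks and the constant log-density stand-in are placeholders, not the architecture used in our experiments):
\begin{verbatim}
import torch

def act_with_deup_bonus(q_net, u_net, log_density, state, beta=1.0):
    """Pick argmax_a [ Q(s, a) + beta * u(s, a) ], where u is the DEUP error
    predictor (trained to regress the squared TD-error) taking the state plus
    a log-density stationarizing feature as input."""
    with torch.no_grad():
        q = q_net(state)                               # (n_actions,)
        feat = torch.cat([state, log_density(state)])  # stationarizing features
        bonus = u_net(feat)                            # (n_actions,) predicted uncertainty
    return int(torch.argmax(q + beta * bonus))

# Placeholder networks and density feature (not the architecture used in the paper).
state_dim, n_actions = 4, 2
q_net = torch.nn.Sequential(torch.nn.Linear(state_dim, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, n_actions))
u_net = torch.nn.Sequential(torch.nn.Linear(state_dim + 1, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, n_actions), torch.nn.Softplus())
log_density = lambda s: torch.zeros(1)                 # stand-in for a fitted density model
print(act_with_deup_bonus(q_net, u_net, log_density, torch.zeros(state_dim)))
\end{verbatim}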
We evaluate DEUP-DQN on CartPole, a classic RL task from \textit{bsuite} \cite{osband2020bsuite}, against DQN + $\epsilon$-greedy, DQN + MC-Dropout \cite{gal2016dropout} and Bootstrapped DQN \cite{osband2016deep}. Fig.~\ref{RL_regret} shows that DEUP achieves lower regret faster, compared to all the baselines, which demonstrates the advantage of DEUP's uncertainty estimates for efficient exploration. Future work should investigate ways to scale this method to more complex environments.
\vspace*{-2mm}
\subsection{Uncertainty Estimation}
\label{exp:ue}
\vspace*{-1.5mm}
\subsubsection{Epistemic Uncertainty Predictions for Rejecting Difficult Examples}
Epistemic uncertainty estimates can be used to reject difficult examples where the predictor might fail, such as OOD inputs\footnote{e.g. rare but challenging inputs can be directed to a human, avoiding a costly mistake}. We thus consider a standard OOD Detection task \cite{Amersfoort2020SimpleAS, DBLP:journals/corr/abs-2102-11409}, where we train a ResNet-18 \cite{he2016deep} for CIFAR-10 classification~\cite{Krizhevsky09learningmultiple} and reject OOD examples using estimated uncertainty in the prediction. To facilitate rejection of classes other than those in the training set,
\begin{wraptable}{r}{0.59\textwidth}
\vspace*{-2mm}
\begin{center}
{\smaller
\begin{tabular}{lll}
\hline
\textbf{Model} & \textbf{SRCC} & \textbf{AUROC} \\ \hline
MC-Dropout & $0.287 \pm 0.002$ & $0.894 \pm 0.008$ \\
Deep Ensemble & $0.381 \pm 0.004$ & $\bm{0.933 \pm 0.008}$ \\
DUQ & $0.376 \pm 0.003$ & $0.927 \pm 0.013$ \\
DUE & $0.378 \pm 0.004$ & $0.929 \pm 0.005$ \\
DEUP (D+V) & $\bm{0.426 \pm 0.009}$ & $\bm{0.933 \pm 0.010}$ \\ \hline
\end{tabular}
}
\end{center}
\caption{Spearman Rank Correlation Coefficient (SRCC) between predicted uncertainty and OOD generalization error (SVHN); Area under ROC Curve (AUROC) for OOD Detection (SVHN) with CIFAR-10 ResNet-18 models (3 seeds). DEUP significantly outperforms the baselines in terms of SRCC; in terms of the coarser AUROC metric it is on par with Deep Ensembles and scores better than the other methods.}
\label{CIFAREstimation}
\vspace*{-4mm}
\end{wraptable}
we use a Bernoulli Cross-Entropy Loss for each class~\cite{Amersfoort2020SimpleAS}: $l(\hat{f}(x),y) = - \sum_i \left[ y_i \log \hat{f}_i(x) + (1-y_i) \log (1-\hat{f}_i(x)) \right]$, where $y$ is a one-hot vector ($y_i=1$ if $i$ is the correct class, and $0$ otherwise), and $\hat{f}_i(x)$ is the predicted probability for class $i$, so the target for out-of-distribution data (from other classes) is $y = \{0, \dots, 0\}$. To ascertain how well an epistemic error estimate sorts non-training examples by the above NLL loss, we consider the rank correlation between the predicted uncertainty and the observed OOD generalization error on SVHN examples \cite{Netzer2011}. This metric focuses on the quality of the uncertainty estimates rather than just their ability to simply classify in- vs out-of-distribution. We also report the AUC for the OOD detection task; more details in Appendix~\ref{appendix:fixed_training_set}. Table~\ref{CIFAREstimation} shows that with the variance from DUE \cite{DBLP:journals/corr/abs-2102-11409} and the density from MAF \cite{papamakarios2017masked} as stationarizing features, we obtain uncertainty estimates that have high rank correlation with the underlying generalization errors and competitive AUROC, compared with the baselines. In addition, since the error predictor is trained separately from the main predictor, there is no explicit trade-off between the accuracy of the main predictor and the quality of uncertainty estimates. We achieve competitive accuracy of $93.89\%$ for the main predictor. We ignore the effect of aleatoric uncertainty (due to inconsistent human labelling), which would require a human study to ascertain. We note that we choose the DUE baseline as it is representative of related methods such as SNGP~\cite{Liu2020SimpleAP} and DDU~\cite{mukhoti2021deep}, and performs best in our experiments.
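In code, this per-class objective is simply a sum of binary cross-entropies (illustrative PyTorch sketch):
\begin{verbatim}
import torch
import torch.nn.functional as F

def per_class_bce(logits, targets):
    """Sum over classes (and the batch) of BCE between the per-class sigmoid
    outputs f_i(x) and the targets y_i; for OOD inputs y is the all-zero vector."""
    return F.binary_cross_entropy_with_logits(logits, targets, reduction="sum")

logits = torch.randn(4, 10)                                    # batch of 4, 10 classes
y = F.one_hot(torch.tensor([3, 1, 0, 7]), num_classes=10).float()
print(per_class_bce(logits, y))
\end{verbatim}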
We present additional results in a distribution shift setting in Appendix~\ref{appendix:fixed_training_set}.
Note that in the pretraining phase of the uncertainty estimator (Alg.~\ref{pseudocode:crossval}), we obtain the subsets by splitting the data based on classes, with each split containing $\floor{n/K}$ classes. So when we train on $K-1$ subsets, the $\floor{n/K}$ classes from the remaining subset become \emph{out-of-distribution}.
\subsubsection{Epistemic Uncertainty Estimation for Drug Combinations}
\begin{figure}[h!]
\centering
\subfigure[Ensemble]{\vcenteredhbox{\includegraphics[width=0.20\columnwidth]{figs/DrugComb_Ens_seed_1.png}}\label{fig:drugcomb_1b}}
\subfigure[DEUP] {
\vcenteredhbox{\includegraphics[width=0.20\columnwidth]{figs/DrugComb_DEUP_seed_1.png}}\label{fig:drugcomb_1c}}
\subtable[Quality of uncertainty estimates from different methods.] {
\label{tb:table_drugcomb_uncertainty_analysis}
\resizebox{0.5\columnwidth}{!}{
\vcenteredhbox{
\begin{tabular}{lllll}
\hline
\textbf{Model} & \textbf{Corr. w. res.} & \textbf{U. Bound} & \textbf{Ratio} & \textbf{Log Likelihood}\\ \hline
MC-Dropout & $0.14 \pm 0.07$ & $0.56 \pm 0.05$ & $0.25 \pm 0.12$ & $ -20.1 \pm 6.8 $ \\
Deep Ensemble & $0.30 \pm 0.09$ & $0.59 \pm 0.04$ & $0.50 \pm 0.13$ & $ -14.3 \pm 4.7 $ \\
DUE & $0.12 \pm 0.12$ & $0.15 \pm 0.03$ & $\bm{0.80 \pm 0.79}$ & $-13.0 \pm 0.52$ \\
DEUP & $\bm{0.47 \pm 0.03}$ & $0.63 \pm 0.05$ & $\bm{0.75 \pm 0.07}$ & $\bm{-3.5 \pm 0.25}$\\\hline
\end{tabular}
}}
}
\caption{Drug Combinations. Predicted mean and uncertainty (error bars) on 50 test examples ordered by increasing value of true synergy score (orange). Model predictions and uncertainties in blue. Ensemble \textbf{(a)} (and MC-dropout, not shown) consistently underestimate uncertainty while DEUP \textbf{(b)} captures the right order of magnitude. \textbf{(c)} \textit{Corr. w. res.} shows correlation between model residuals and predicted uncertainties $\hat{\sigma}$. A best-case \textit{Upper Bound} on \textit{Corr. w. res.} is obtained from the correlation between $\hat{\sigma}$ and true samples from $\mathcal{N}(0, \hat{\sigma})$. \textit{Ratio} is the ratio between col. 1 and 2 (larger is better). \textit{Log-likelihood}: average over 3 seeds of per sample predictive log-likelihood.}
\label{fig:drugcomb_1_main}
\vspace{-4mm}
\end{figure}
We validate DEUP in a real-world
regression task predicting the synergy of drug combinations. While much effort in drug discovery is spent on finding novel small molecules, a potentially cheaper method is identifying combinations of pre-existing drugs which are synergistic (i.e., work well together).
However, not every possible combination can be tested, due to the high monetary cost and time required to run experiments. Therefore, developing good estimates of EU can help practitioners select experiments that are both informative and promising.
As shown in Table~\ref{tb:table_drugcomb_uncertainty_analysis}, the out-of-sample error predicted by DEUP correlates better with residuals of the model on the test set in comparison to several other uncertainty estimation methods.
Moreover, DEUP better captured the order of magnitude of the residuals as shown in Fig.~\ref{fig:drugcomb_1_main}. Details on experiments and metrics are in Appendix~\ref{appendix:dgc}.
\section{Conclusion and Future Work}
\vspace{-2mm}
Whereas standard measures of epistemic uncertainty focus on variance, we argue that bias (misspecification) can also be reduced with flexible predictors like neural networks. In a regression setup, the expected out-of-sample squared error minus the aleatoric uncertainty thus captures all the uncertainty about the ground-truth function $\mathbb{E}[Y|x]$ that more data can reduce. This motivates the DEUP framework, where we train a second network to predict the errors of the first.
In interactive settings, this nonetheless raises non-stationarity challenges for this estimator; we propose extra input features to tackle this issue and show their advantages experimentally. Future work should investigate ways to improve the computational efficiency of DEUP, and ways to estimate aleatoric uncertainty when no estimator thereof is readily available and when one cannot simply query an oracle several times on the same input $x$.
\section*{Acknowledgements}
The authors would like to thank Tristan Deleu, Anirudh Goyal, Tom Bosc, Léna Néhale Ezzine, Doina Precup, Pierre Luc-Bacon, John Bradshaw, Jos\'e Miguel Hern\'andez-Lobato as well as anonymous reviewers for useful comments and feedback. This research was enabled in part by support provided by \href{www.computecanada.ca}{Compute Canada}, the Bill \& Melinda Gates Foundation, IVADO and a CIFAR AI Chair. The authors also acknowledge funding from Samsung, IBM, Microsoft.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,262 |
{"url":"https:\/\/www.timlrx.com\/blog\/understanding-automated-market-maker","text":"Published on\n\n# Automated Market Makers (AMM) Explained\n\nAuthors\n\n## Preface\n\nThis article is a summary of what I have learnt from the decentralized finance space - written in a friendly and accessible way. I get questions on DeFi fairly often in my day to day work but a two-sentence answer often feels insufficient to explain the innovation in the space. In this post, I contextualised such protocols in the wider financial market and explore the evolution of DeFi, or more specifically automated market makers, in an incremental fashion. By peeling off the layers of complexity of the topic, I hope you gain a broader appreciation and understanding of the topic.\n\n## Order books, market makers and exchanges\n\nIf you carry out a trade on a brokerage or stocks exchange platform, you might notice a side feed containing a list of buy and sell orders for a specific security or financial product. That is known as the order book and it shows the number of shares \/ securities being bid on or offered at each price point.\n\nThis information on market depth is very valuable. In fact companies like Robinhood make most of their money by selling data on order flow to trading giants like Citadel Securities. For example, each time you buy a share of Tesla, Robinhood sends that order to Citadel and receives a few pennies in return. Citadel also automatically takes the other side of the order, then returns to the market to flip the trade i.e. it \"makes the market\". Fun fact - Citadel's platform trades approximately 47% of U.S. listed retail volume.1\n\nBy standing ready to buy or sell a tradable asset on a regular basis at a publicly quote price, market makers (also known as liquidity providers) facilitate the speed and ease in which financial instruments can be bought or sold. Market makers like Citadel can be found in all types of markets from equity to currency exchanges to forex markets and are regarded as an important part of a well functioning and liquid market.\n\nFor a large part of the history of finance, market making activity was carried out by institutions with large capital and resources. However, the rise of cryptocurrency and decentralized finance has presented new alternative models of finance. This includes decentralized exchanges and automated market makers (AMM), the focus of this post.\n\nIn the first part of the post, I cover how market making can be carried out in a decentralized environment. I show how Uniswap the pioneer in the AMM space managed to automate the pricing of assets in a decentralized setting and how it incentivises traders and liquidity providers. Next, I cover major innovations in the AMM space - SushiSwap, Balancer and Curve and show how they refined the value proposition of decentralized trading.\n\n## Understanding automated market makers\n\nImagine market making happening without any centralized institution in control. How could that happen? Perhaps, it is better to flip the question around and ask why can't it happen?\n\nAfter all prices for goods and services in many well functioning markets are thought to be determined not through any central authority but through the \"invisible hand\" of the market. 
The innovation in that AMM introduces is the removal of a central market maker and converting what was a once a one-sided market into a two-sided market of traders and liquidity providers.\n\nFor such two-sided market making to happen without any central party in control, it needs to solve a series of challenges:\n\n1. Decentralized buying and selling of an asset\n2. Automated pricing and price discovery\n3. Incentivise liquidity providers\n4. Incentivise users\n\n### A brief history of Uniswap\n\nThe initial idea of using an AMM mechanism for decentralized exchanges original from a proposal in reddit by Vitalik Buterin in 2016.\n\nMy proposed solution is to use the style of \"on-chain automated market maker\" used in prediction markets in a decentralized exchange context.\n\nHayden Adams launched Uniswap in November 2018 and it began to quickly attract liquidity and trading volume. Its governance token (UNI) was introduced on September 2020 and allows token holders to participate in the governance of the protocol such as usage of the treasury and upgrade decisions. At the point of time of writing, the protocol has facilitated 55M trades worth 295B. Note: It's important to make a distinction between Uniswap the decentralized exchange and Uniswap the governance token (UNI). Unless specified explicitly, Uniswap in the course of the article refers to the exchange rather than the token. Let's examine how Uniswap, the pioneer in the Automated Market Maker (AMM) solves the 4 challenges highlighted above. ## Pools provide liquidity in a decentralized manner Prior to the invention of AMMs, decentralized exchanges face a problem of low liquidity as it is hard to find enough people willing to make trades on token pairs at the same time. Instead of trading between buyers and sellers or relying on a centralized market maker, on AMM platforms, users trade against a shared pool of tokens. The more assets in the pool, the higher the liquidity and the easier trades are being carried out. To visualize how it works, I describe an imaginary example involving two tokens - a Cat token and a Dog token.2 In our example, we have a cat-dog liquidity pool filled with a relatively equal value of each token. Having equal value ensures that there is sufficient liquidity for trades in both directions and also plays a part in the pricing decisions. ## Automated pricing via algorithms Users will only trade if there is sufficient liquidity and prices are clear and transparent. Pools promote liquidity, but how does that determine the pricing of assets in the pool? In a centralized exchange, one can easily infer this information from the order book, but in a decentralized setting this is determined by an algorithm. These algorithms are coded into smart contracts (computer programs on the blockchain created to execute certain code or logic in a permissionless setting), and are executed each time a trade is carried out. ### Constant product formula An example of a pricing formula is the constant product formula: $d * c = k$ Let $d$ and $c$ represent our two assets dog and cat token respectively. Assume we have 100 of each token - the constant $k$ is simply 100 * 100 or 10,000. The algorithm works but requiring the trader to maintain the constant value. For example, in order to buy 1 dog token, a trader must deposit a proportional amount of cat token to maintain the $k$.3 For an infinitesimally small quantity of dog token, we get the marginal price of 1 dog token is to 1 cat token. 
For 1 dog token, the true quantity of cat token that is has to be exchanged is: \\begin{aligned} Q_{c} &= \\frac{10000}{99} - 100 \\\\ &= 101.01010 - 100 \\\\ &= 1.01010 \\end{aligned} Price is then simply a construct of the ratio of both tokens i.e. the price of 1 dog token is 1.0101 cat token. The following picture illustrates the ratio of cat tokens required for dog tokens. Importantly, the market price changes as the ratio of the tokens in the pool changes. To put it another way, the price would approach infinity if someone plans to buy up all of a particular coin due to the parabolic nature of the curve. How does an AMM ensure that the price is \"right\"? Through the invisible hand of arbitrageurs. Imagine that each dog token is actually worth 2 cat tokens. Arbitrageurs would sell cat tokens and buy up the dog tokens in the pool until the next unit of dog token cost 2 cat tokens. This simple formula allows pricing to be determined automatically and price discovery to happen. ## Incentives for liquidity providers Our discussion so far takes for granted that there already exists a pool of tokens for users to trade with. But how are pools formed and why should individuals contribute tokens to the pool? In this section, we look at the tokenomics behind AMM protocols to entice liquidity providers. ### Liquidity providers, tokens and shares Recall that a pool is created through the combination of two tokens of proportional value. Each time that a user deposits token pairs in the pool, he gets a share of it. For example, if 10 cat and 10 dog tokens are deposited in our cat-dog pool to bring it to 100 cat and dog tokens in total, the user contributing those tokens, also known as the liquidity providers (LP) gets a 10% share of the pool. This allows him to claim the share and withdraw 10% worth of tokens from the pool (assuming no further contributions or removal). This happens automatically via smart contracts which \"mints\" liquidity tokens and distributes them to the user as proof of his share of the pool. The number of tokens that are \"minted\" is proportional to the value of coins that users contribute to the pool. For every withdrawal, the user gets back his share of the pool and the liquidity token will be \"burn\". A pool created by a single user can hardly be characterised as decentralized. In fact, a common scam known as rug pulls rely on tricking users to low liquidity pools which can be easily manipulated by the single liquidity provider. Thus, it is crucial that AMMs have a strong incentive mechanism to attract more liquidity provides to the pool. ### AMM LP incentives Unlike a centralized market maker system like Citadel in which revenue is dependent on the ability to flip good trades, in a decentralized system there is no flipping. Instead, a portion of every trade is distributed as fees to the liquidity providers. In Uniswap, 0.3% of all trade volume would be distributed proportionally to the liquidity providers.^[https:\/\/uniswap.org\/docs\/v2\/advanced-topics\/understanding-returns\/] Technical side note: Rather than distributing tokens for every trade that takes place to every liquidity provider, which would just add to \"shoe leather cost\", the fees are added back to the pool and the same number of liquidity tokens held by a liquidity provider can now be redeemed for a higher value. 
If every pool were to provide the same fees, liquidity providers who are hoping to maximise profit would choose pools that have the highest trade activity.4 For new pools consisting of less well-known tokens, additional incentives can be offered to liquidity providers such as bonus token distributions. ## Incentives for users Why should users use decentralized exchanges over centralized exchanges? Fees, user experience and a preference for things decentralized. ### Fees For token pairs with high liquidity and large pools, fees (trading price spreads + transaction fees) are often comparable to centralized exchanges. This is due to arbitrageur activity across centralized and decentralized exchanges resulting in comparable trading prices, though transaction fees tend to be relatively high for small exchanges (ETH specific). For illiquid token pairs, centralized exchanges offer better spreads. ### User experience Decentralized exchanges require no registration, KYC or verification procedures. One can easily execute a transfer and carry out other activities with a single wallet address without having to transfer funds in and out of exchanges. This contributes to a seamless Web3 experience. On the other hand, centralized exchanges tend to support more sophisticated trading functionalities (e.g. stop losses, trading margins etc.) and provide faster execution speed and customer support. ### Preference for decentralization \"Not your keys, not your coins\", as the popular saying goes. Relying on a centralized exchange means trusting the custody of your funds to a third-party which can be hacked, stolen or compromised. With a decentralized exchange, the responsibility falls on the user to verify the trustworthiness of the contract that he is interacting with. ## Innovation in the AMM space The above points summarise Uniswap in a nutshell. While there are other concepts like flash swaps, price oracles and governance, they are ancillary to the operations of an AMM system. If you have made it this far, congratulations - you now have a good understanding of the most popular decentralized finance (DeFi) protocol by market capitalisation, one that has powered 55M trades and nearly295B by market volume^[Accurate as at 10 July 2021, source: https:\/\/uniswap.org\/]. The rest of the article covers other protocols that have innovated on Uniswap.\n\nWhile there has been an explosive growth of AMM protocols in the DeFi space, many of them are just clones of a few popular ones with minor tweaks to the front-end user interface and deployed on alternative chains. Unfortunately, the DeFi space does not really value originality that highly and there are many clone projects abound. I choose to focus on 3 protocols that bring something new to the table - SushiSwap, Balancer and Curve.\n\nRanking of decentralized exchanges by total value locked as at 10 July 2021. Data from DeFi Pulse\n\n### SushiSwap and yield farming\n\nSushiSwap was launched by its anonymous \"chef\" Nomi in August 2020 as a fork of Uniswap with some minor modifications. On launch, it managed to attract nearly a billion dollars in staked liquidity from Uniswap sparking much discussion and debate within the Ethereum community on the ethics of copying. 
It is also famous for the exit scam fiasco in which chef Nomi liquidated his holders of the token before apologizing and transferring control of the protocol to FTX exchange CEO Sam Bankman-Fried.^[https:\/\/news.bitcoin.com\/sushiswap-returns-14-million-exit-scam\/]\n\nWhile seemingly a Uniswap copycat, SushiSwap introduced the idea of yield farming in the decentralized exchange space. Yield farming is the idea of staking cryptocurrency assets into platforms in return for payouts over time, popularised by the likes of Compound, an algorithmic, autonomous interest rate protocol.\n\nIn order to attract liquidity providers to its platform, SushiSwap gave out its governance tokens (SUSHI) as rewards to liquidity providers. These tokens also allow holders to receive a portion of the 0.3% trading fee - liquidity providers receive 0.25% of the fee and SUSHI holders receive the remaining 0.05%.5\n\nLiquidity tokens are also not used for bookkeeping sake and can be used for yield farming. This incentive system helped SushiSwap attract liquidity providers and users to the platform, creating a network effect that cementing its place as the next most popular platform after Uniswap.6\n\n### Balancer and ETFs\n\nIn Uniswap a pool consists of two assets, predominantly a very popular token like Ethereum and a less popular one. What happens when a user wants to trade between two less popular tokens? She would need to do multiple trades incurring higher transaction fees in the process. The two asset pool structure is also not capital efficient for liquidity providers who could potentially get more transaction fees is capital can flow freely across pools.\n\nBalancer was born out of a research project at BlockScience in 2018, founded by Fernando Martinelli and Mike McDonald. It was created with the idea of smart pools that allows for greater customizability. It allows up to 8 assets in a pool with arbitrary weights and fees, which can be dynamically adjusted and allow for more complex logic to be programmed. Thus, pools in balancer can be thought of as \"inverse exchange traded funds (ETFs)\". Like ETFs they are continuously rebalanced but differ as liquidity providers are paid for providing capital. Their documentation summarises it well:\n\nBalancer turns the concept of an index fund on its head: instead of a paying fees to portfolio managers to rebalance your portfolio, you collect fees from traders, who rebalance your portfolio by following arbitrage opportunities.\n\nIt uses a generalised version of the constant product formula:\n\n$const = \\prod_{t} B_{t}^{W_{t}}$\n\nwhere the constant, is derived by the product of the balance of tokens $B$ weighted by $W$ for all $t$ tokens in the pool. $\\sum_{t} W_{t} = 1$, giving a constant returns to scale function.\n\n### Curve, slippage and stableswap\n\nTrades conducted on AMM platforms affect the price of each of the tokens in the pool. This movement may be insignificant for pools with large liquidity or trades with low volume. But a large swap order for a token with a small market cap or liquidity can move the price significantly. This is known as price slippage.\n\nIn the constant product formula used by Uniswap and generalised in Balancer, a small change in the quantity bought or sold of particular tokens could result in negative price slippage of a couple of percentage points. 
How can we reduce price slippage, especially for assets like stablecoins (tokens pegged to fiat currency like the USD) where we have a good sense of the \"ideal\" price of the exchange?\n\nCurve, a brainchild of Michael Egorov, solves this problem by introducing a StableSwap formula, where the rate of change of price around the ideal price (i.e. 2nd derivative) does not change as quickly compared to the constant product formula. The stableswap invariant curve is shown below:\n\nFor the stableswap invariant curve, price changes very slowly at the \"ideal\" price of 1.0 at the initial portfolio composition of x = 5, y = 5. The graph is very close to a straight line like a \"zoomed in\" version of the constant product formula. In some way, this could be thought of as leveraging the existing liquidity of the tokens in the pool.\n\nThe capital efficiency benefit for liquidity providers as a result of Curve's invariant formula also leads to lower price slippage for users trading stable assets (stablecoins and wrapped coins). Since its launch in 2020, Curve has emerged as the leading decentralized exchange market for stable assets.\n\n## Conclusion\n\nSince the introduction of AMM technology, adoption and usage of decentralized exchanges have grown significantly. Uniswap is now in the top 10 cryptocurrency exchanges by daily traded volume, behind Binance, Upbit, Okex, Huobi, Coinbase and Bitfinex.7\n\nI hope this article gives you a good introduction and broader appreciation of the AMM space. To what extent will algorithmic protocols successfully replace or complement traditional exchanges? Can decentralized governance of AMM protocol truly happen? Will AMM technology branch out from crypto to other asset classes? It's still early days in the space but an interesting one to keep track of.\n\n1. From Citadel's website: Our automated equities platform trades approximately 26% of U.S. equities volume1 across more than 8,900 U.S.-listed securities and trades over 16,000 OTC securities. We execute approximately 47% of all U.S.-listed retail volume, making us the industry\u2019s top wholesale market maker.\n2. The example holds for any asset pair, but as the idea and adoption of AMM originated within the cryptocurrency space, I will explain in the context of a trade involving two different tokens.\n3. We ignore transaction fees for simplicity.\n4. To be more precise, profit maximisation is a function of trading fee share, trading activity, additional bonuses and token prices. LPs can suffer a loss due to a decline in token prices and\/or fluctuation in prices, also known as impermanent loss.\n5. This prompted Uniswap to issue its own governance token a few months later and shows how allowing people to own a stake in the protocol helps to create user loyalty via an \"illusion\" of aligned interest.\n6. Most clone projects on other platforms seem to be based on the SushiSwap model.\n7. 
Data taken from Coinmarketcap centralized exchange data, filtering out exchanges with scores greater than 5, and Coinmarketcap DEX data","date":"2021-08-04 07:22:49","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 11, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.30761638283729553, \"perplexity\": 2722.032136405802}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046154796.71\/warc\/CC-MAIN-20210804045226-20210804075226-00267.warc.gz\"}"} | null | null |
\section{Introduction}
Recently spin Hall effect \cite{Murakami03a,Sinova04}
has been studied theoretically and
experimentally. It has shed new light onto the physics of the
spin-orbit coupling.
In the spin Hall effect the spin-orbit coupling plays the role of the
spin-dependent effective magnetic field, causing the Hall effect
in a spin-dependent way.
As a related subject, the quantum spin Hall (QSH)
phase \cite{Kane05a,Kane05b,Bernevig06a}
was recently proposed theoretically.
The QSH phase is a novel topological phase, where the
bulk is gapped and insulating while there are
gapless states localized near the system boundaries.
This phase can be found among nonmagnetic insulators, and
has opened up a renewed interest in nonmagnetic insulators.
This phase is a kind of topological order, and it is less
evident than other types of ordering such as magnetism
and superconductivity.
This phase is not an ordered phase in the sense of Ginzburg-Landau
theory.
It is rather a topological order, which is encoded in the wavefunctions
themselves, and it appears only at the boundaries.
This topological order is hidden in the
bulk, and appears as topologically protected gapless states at the
sample boundaries and interfaces.
The topological number, in the present case the $Z_2$ topological number
\cite{Kane05a},
characterizes whether the system is in the QSH phase or not.
In a sense, it plays the role of the ``order parameter''.
This phase can be realized in 2D and in 3D \cite{Fu06b,Moore06,Roy};
the order guarantees the existence of gapless edge states for 2D and
gapless surface states for 3D.
The QSH phase resembles the quantum Hall (QH) phase, while
there are several important differences.
The QSH phase requires an absence of magnetic field,
while the QH phase requires a rather strong magnetic field.
Moreover, the QH phase is realized
usually in 2D, and it is not easily realized in 3D because the
motion along the magnetic field usually does not become gapful.
In contrast, in the QSH phase there is no such built-in direction
and it can be easily realized in 3D as well as in 2D {\it without
applying fields}.
These gapless edge states in 2D are peculiar
in the sense that they are robust against nonmagnetic disorder
and interactions \cite{Wu06,Xu06a}. This is in strong contrast with usual
states localized at the boundary, which are sensitive to
boundary roughness, impurities, and so forth.
Since general readers may not be familiar with this issue,
in the first half of this paper we give a general
instructive review for the whole subject: ``quantum spin Hall
phase for pedestrians''. In the latter half we explain
our recent research on this topic.
The paper is organized as follows.
In Section 2 we consider edge states of various 2D systems
and see whether they are robust or not.
Surface states on the 3D QSH phases is also mentioned.
In Section 3 we see how the quantum phase transition between the
QSH and insulator phases occurs.
Section 4 is devoted to explaining the existence of gapless helical edge states
based on the models obtained in Sec.~3.
In Section 5 we theoretically propose that the bismuth ultrathin film
will be a good candidate for the 2D QSH phase. In Section 6 we give concluding remarks.
\section{Edge States in Various Systems -- Fragile or Robust ?}
\subsection{Edge States in Graphene}
Graphene has been studied intensively in recent years.
One of the novel properties of graphene is the edge states.
It was theoretically proposed by Fujita et al.\ \cite{Fujita},
and was observed by STM experiments. This is a good
starting point for studying the edge states in various systems.
One simple way to see the edge states in graphene
is to use the nearest-neighbor tight-binding model on
a honeycomb lattice (Fig.~\ref{fig:BZ}(a)), described as
\begin{equation}
H=t\sum_{\langle i,j\rangle}c_{i}^{\dagger}c_j .
\end{equation}
We ignore other details of graphene, since they are inessential to
the subsequent discussions.
The primitive vectors are
\begin{equation}
\mathbf{a}_{1}=\frac{a}{2}(1,\sqrt{3}),\
\mathbf{a}_{2}=\frac{a}{2}(-1,\sqrt{3}),
\end{equation}
and the reciprocal lattice vectors are
\begin{equation}
\mathbf{G}_{1}=\frac{2\pi}{a}\left(1,\frac{1}{\sqrt{3}}\right),\
\mathbf{G}_{2}=\frac{2\pi}{a}\left(-1,\frac{1}{\sqrt{3}}\right).
\end{equation}
The Brillouin zone is shown as Fig.~\ref{fig:BZ}(b).
\begin{figure}
\centerline{\includegraphics[width=12cm]{honeycomb3.eps}}
\caption{(a) Honeycomb lattice and (b) Brillouin zone corresponding to the honeycomb lattice.}
\label{fig:BZ}
\end{figure}
When we consider the tight-binding model on
an infinite system, the eigenenergies are
\begin{equation}
E({\bf k})=\pm t \left| e^{i{\bf k}\cdot {\bf a}_{1}}+
e^{i{\bf k}\cdot {\bf a}_{2}}+1 \right|.
\end{equation}
The gap between the valence and the conduction bands
closes at two different points in the Brillouin zone, which are
called $K$ and $K'$ points. Their wavenumbers are
\begin{equation}
\mathbf{k}_{K}=\frac{1}{3}(\mathbf{G}_{1}
-\mathbf{G}_{2})=\left(\frac{4\pi}{3a},\ 0\right),\
\mathbf{k}_{K'}=-
\mathbf{k}_{K}.
\end{equation}
Around these points the dispersion is linear, forming massless
Dirac fermions.
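As a quick numerical check of these statements, the dispersion above can be evaluated directly (illustrative Python, with $t=a=1$):
\begin{verbatim}
import numpy as np

t, a = 1.0, 1.0
a1 = a * np.array([0.5, np.sqrt(3) / 2])    # primitive vectors defined above
a2 = a * np.array([-0.5, np.sqrt(3) / 2])

def energy(k):
    """|E(k)| = t |exp(i k.a1) + exp(i k.a2) + 1| (upper band)."""
    return t * abs(np.exp(1j * k @ a1) + np.exp(1j * k @ a2) + 1.0)

kK = np.array([4 * np.pi / (3 * a), 0.0])   # K point
print(energy(kK))                           # ~0: the gap closes at K
print(energy(np.zeros(2)))                  # Gamma point: |E| = 3t
\end{verbatim}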
To study edge states, we consider a system geometry with edges. For this purpose we make the system into
a ribbon geometry, with a finite width in one direction and
infinite extent in the perpendicular direction.
The honeycomb lattice allows
various types of edge shape, and we choose the
zigzag edge and the armchair edge for example. The band structures
for the two choices are shown in Fig.~\ref{fig:graphene} (a) and (b),
respectively.
To understand the obtained band structures in this ribbon geometry,
we relate the bulk band structure and
that of the ribbon as follows.
In the ribbon geometry, the translational symmetry in one direction
(perpendicular to the ribbon) is lost and the wavenumber along
this direction is no longer a good quantum number.
Therefore the bulk band structure is projected along this
direction. This almost corresponds to the band structure
for the ribbon geometry, calculated in Fig.~\ref{fig:graphene}(a) and (b).
In the zigzag-edge case
(Fig.~\ref{fig:graphene}(a)), however,
the states located at $E=0$, $\frac{2\pi}{3}<k_x a< \frac{4\pi}{3}$
are outside the bulk-band projection.
These are nothing but the edge states, because they are located
in the bulk band gap, i.e.\ within the region where the bulk states
are prohibited.
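The zigzag-ribbon band structure of Fig.~\ref{fig:graphene}(a), including the flat edge-state band, can be reproduced with a few lines, since in momentum space the ribbon maps onto a chain with alternating hoppings $2t\cos(k_x a/2)$ and $t$ (illustrative Python sketch; $N$ denotes the number of zigzag chains across the ribbon):
\begin{verbatim}
import numpy as np

def zigzag_ribbon_bands(k, N=20, t=1.0, a=1.0):
    """Eigenvalues of the zigzag-ribbon Bloch Hamiltonian at momentum k:
    a (2N)x(2N) tridiagonal matrix whose off-diagonal elements alternate
    between 2 t cos(k a / 2) (two bonds within a zigzag line) and t (the
    bond connecting neighboring zigzag lines)."""
    hop = np.empty(2 * N - 1)
    hop[0::2] = 2 * t * np.cos(k * a / 2)
    hop[1::2] = t
    H = np.diag(hop, 1) + np.diag(hop, -1)
    return np.linalg.eigvalsh(H)

N = 20
ks = np.linspace(0, 2 * np.pi, 201)
bands = np.array([zigzag_ribbon_bands(k, N=N) for k in ks])
# The two bands closest to E = 0 are (nearly) flat for 2 pi/3 < k a < 4 pi/3;
# deep inside that window their splitting vanishes exponentially with N:
inside = (ks > 0.8 * np.pi) & (ks < 1.2 * np.pi)
print(np.abs(bands[inside][:, N - 1:N + 1]).max())   # ~1e-4 for N = 20
\end{verbatim}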
\begin{figure}
\centerline{\includegraphics[width=12cm]{graphene2.eps}}
\caption{Band structure of the nearest-neighbor tight-binding model of graphene
in the ribbon geometry with (a) zigzag edges and (b) armchair edges.}
\label{fig:graphene}
\end{figure}
We note that the above tight-binding model involves drastic
simplifications. In reality there are many factors which have
been ignored here. We thus ask ourselves whether the
above properties survive perturbations.
For example,
within the above tight-binding model we can introduce various modifications
under which the edge states disappear.
When we vary the boundary conditions, the
edge states may vanish;
for example, the armchair edge does not support edge states.
Another perturbation which kills the edge states around $E=0$ is a staggered on-site potential,
although it may not be easy to realize in real systems.
This fragility of the edge states arises because the edge states in
graphene are not topologically protected, due to vanishing of the
bulk gap.
In the following sections we see various kinds of
robust edge states. All of these robust edge states
are associated with respective topological numbers.
A more physical reason why the edge states in graphene
are fragile is
that no current of any kind is flowing along the edge.
As we see from Fig.~\ref{fig:graphene}(a), the edge states form a flat band,
which means that
the velocity along the edge is zero.
The modes are localized
and do not move along the edge.
In fact, this flatness of the edge-state band is not a universal property;
some perturbations give a dispersion to the otherwise flat band.
For example, the next-nearest-neighbor hopping brings about a downward
shift of the dispersion around $k_x=\pi/a$, compared with
Fig.~\ref{fig:graphene} (a).
Even in this case, the net current along the edge sums up to zero.
\subsection{Edge States in Quantum Hall Systems}
In this section we consider an example of robust edge states.
We consider the integer QH systems.
The QH systems are realized experimentally in a two-dimensional
electron system in a strong magnetic field.
In the QH systems the bulk is gapped while the edge has gapless
edge states, which carry a chiral current.
The number of chiral edge states, $\nu$, is a topological quantity,
which does not change under weak perturbation.
In this case the robustness of the edge states comes from the
topological number $\nu$. The bulk states bear a topological order,
described by the topological number $\nu$ called the Chern number.
The edge states of the QH system look very different from those
in the graphene discussed in the previous section.
Nevertheless, by using a simple model we can relate them and
discuss their differences.
It is the model proposed by Haldane \cite{Haldane88}.
It is a tight-binding model on the honeycomb lattice,
where the nearest-neighbor hopping is real, and the
next-nearest neighbor hopping is complex.
The Hamiltonian is given by
\begin{equation}
H_{{\rm Haldane}}
=t_1\sum_{\langle i,j\rangle}c_{i}^{\dagger}c_j
+t_2\sum_{\langle\langle i,j\rangle \rangle}e^{-i\nu_{ij}\phi}c_{i}^{\dagger}
c_{j}+M\sum_{i}\xi_ic_i^{\dagger}c_i.
\label{eq:Haldane}
\end{equation}
Here $\nu_{ij}={\rm sgn}(\hat{{\bf d}}_{1}\times\hat{{\bf d}}_{2})_{z}=\pm 1$,
where $\hat{{\bf d}}_{1}$ and $\hat{{\bf d}}_{2}$ are unit vectors along
the two bonds which constitute the next-nearest-neighbor hopping.
$\xi_i$ represents a staggered on-site potential,
and takes the values $\pm 1$ depending on whether the $i$-th site is
in the A or B sublattice, respectively.
This simply means that the next nearest neighbor
hopping, going around the hexagonal plaquette in
the clockwise (counterclockwise) way, obtains the phase $e^{i\phi}$
($e^{-i\phi}$).
The quantization of $\sigma_{xy}$ for this model without impurities
can be seen from the Kubo formula, and the resulting
$\sigma_{xy}$ is rewritten as $\sigma_{xy}=\nu e^2/h$ \cite{Thouless82,Kohmoto85},
where
\begin{equation}
\nu=\int_{{\rm BZ}}\sum_{n:{\rm filled}}
\frac{d^{2}\bf k}{2\pi} B_z^{(n)}({\bf k}),
\label{eq:nu}
\end{equation}
and
\begin{equation}
B_z^{(n)}({\bf k})=
\frac{\partial A_y^{(n)}({\bf k})}{\partial k_x}-
\frac{\partial A_x^{(n)}({\bf k})}{\partial k_y},
A_i^{(n)}({\bf k})=-i\left\langle u_{n{\bf k}}\right|
\frac{\partial}{\partial k_{i}}\left|u_{n{\bf k}}\right\rangle.
\label{eq:Ch}
\end{equation}
The sum in Eq.~(\ref{eq:nu}) is taken over the
filled bands. The quantity $\nu$ has novel properties arising from
topology.
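For intuition, for a generic two-band model $H({\bf k})={\bf d}({\bf k})\cdot{\bm \sigma}$
the Chern number of the lower band reduces, up to a sign fixed by orientation conventions,
to the winding number of the unit vector $\hat{\bf d}={\bf d}/|{\bf d}|$,
\begin{equation}
\nu=\pm\frac{1}{4\pi}\int_{{\rm BZ}}d^{2}{\bf k}\
\hat{\bf d}\cdot\left(\partial_{k_x}\hat{\bf d}\times\partial_{k_y}\hat{\bf d}\right),
\end{equation}
i.e.\ the number of times $\hat{\bf d}({\bf k})$ wraps the unit sphere as ${\bf k}$ runs over the BZ.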
A na{\"i}ve application of the Stokes theorem casts
Eq.~(\ref{eq:nu}) into a contour integral along
the border of the Brillouin zone (BZ), and the periodicity of
the BZ results in $\nu=0$; however, this is not true \cite{Kohmoto85}.
In some cases as the present one,
the Bloch wavefunction cannot be expressed as a single continuous
function over the whole BZ; the BZ should be divided into
pieces, on each of which the Bloch wavefunction is continuous.
At the borders between two ``pieces'' the Bloch wavefunctions
differ by a U(1) phase. The number $\nu$ is expressed in terms
of this phase difference. As a result one can show that
this quantity is quantized to be an integer,
and it turns out to be the topological number called the Chern number
\cite{Thouless82,Kohmoto85}.
This Chern number represents the number of chiral gapless edge states
going around the sample edge.
Namely the existence of the gapless
edge states is guaranteed by the Chern number.
This comes from the Laughlin argument \cite{Laughlin81};
we roll the two-dimensional
system into an open cylinder by attaching two edges on the opposite sides, and
let a flux $\Phi$ penetrate the hole. The
flux $\Phi$
is to be increased from 0 to a flux quantum. Then
the number of electrons carried from one edge of the cylinder to the other is
equal to the Chern number. These carried electrons are on the gapless
edge states. Thus we can establish the correspondence between the
number of chiral edge states and the Chern number.
For the Haldane model it is not easy to
evaluate Eq.~(\ref{eq:Ch}) directly, and an analytical
calculation of $\nu$ looks almost impossible.
Instead, by considering how $\nu$ changes as some parameter is varied,
we can determine
$\nu$ easily.
First we notice that for $\phi=0$ the system is time-reversal
symmetric, which yields $\nu=0$, because $\nu$ changes sign
under time-reversal.
As long as the gap remains open, the Chern number
$\nu$ is quantized to be an integer
and
cannot change when a parameter in the Hamiltonian is
continuously changed. In the Haldane model,
by changing the system parameters, the gap closes only at
$K$ and $K'$ points.
At the gap closing the spectrum of Eq.~(\ref{eq:Haldane})
becomes linear in ${\bf k}$ near the gap-closing point,
namely the massless Dirac fermion is formed at the gap closing.
It is straightforward to show the following statement for the
change of the Chern number at the gap closing.
We consider a $2\times 2$ Hamiltonian matrix $H(M,{\bf k})$, which
depends on a parameter $M$.
Suppose the gap between the two bands closes at $M=M^{(0)}$ and
${\bf k}={\bf k}^{(0)}$. Then we have
$H(M^{(0)},{\bf k}^{(0)})=E_{0}(M^{(0)},{\bf k}^{(0)})\hat{1}$, where
$\hat{1}$ is an identity matrix, and we
can write
\begin{equation}
H(M,{\bf k})=E_{0}(M,{\bf k})\hat{1}+\sum_{i}a_{i}(M,{\bf k})
s_{i},
\end{equation}
where $s_{i}$ are Pauli matrices.
For notational brevity and convenience we write
$k_0\equiv M$, $k_1=k_x$, $k_2=k_y$.
The coefficients $a_{i}$ are expanded in $M$ and ${\bf k}$ to
linear order as
\begin{equation}
a_{i}=\sum_{j=0,1,2}(k_j-k_j^{(0)})a_{ij}.
\end{equation}
Then one can show
that the change of the Chern number $\nu$ across $M=M^{(0)}$ is
\begin{equation}
\nu(M=M^{(0)}+\delta)-\nu(M=M^{(0)}-\delta)
={\rm sgn}({\rm det}a),
\end{equation}
where $a$ is the matrix with elements $a_{ij}$ and $\delta$ is an
infinitesimal positive number.
Thus the Chern number changes by one at the gap closing.
In the present case, the gap closes when $M=\mp
3\sqrt{3}t_2 \sin\phi$,
at ${\bf k}=\pm {\bf k}_{K}$ (i.e.\ at the $K$ and $K'$ points),
respectively.
For the respective cases, the matrix $a$ is calculated by linearizing
the Hamiltonian in the vicinity of the gap closing as
\begin{equation}
a=\left(\begin{array}{ccc}
1&&\\& \mp\sqrt{3}a/2 &\\ &&-\sqrt{3}a/2
\end{array}\right)
\end{equation}
and ${\rm sgn}({\rm det}a)=\pm 1$. Thus when we increase $M$ across the
value $\mp
3\sqrt{3}t_2 \sin\phi$, the Chern number increases by $\pm 1$.
From this we can easily obtain the phase diagram as shown in
Fig.~\ref{fig:haldane}.
As we have seen, for analytical calculation
it is much easier to calculate the {\it change} of the
Chern number, than to calculate the Chern number itself.
\begin{figure}
\centerline{\includegraphics[width=8cm]{haldane.eps}}
\caption{Phase diagram of the Haldane honeycomb-lattice model.
$\nu$ represents the Chern number.}
\label{fig:haldane}
\end{figure}
What is the fundamental difference between the QH phase and graphene?
The difference comes from the bulk gap. The graphene
is gapless while the QH phase has a bulk gap.
The topological number in the QH phase protects the
existence of the gapless edge states.
\subsection{Edge States in 2D Quantum Spin Hall Systems}
The QSH system is insulating in the bulk, while
there are gapless edge states carrying a spin current.
The simplest case of the QSH system is realized by superposing
two QH subsystems with opposite spins \cite{Bernevig06a}.
We consider the up-spin subsystem with a QH state
($\sigma_{xy}=e^2/h$), and the down-spin subsystem with
a QH state ($\sigma_{xy}=-e^2/h$). The superposition of these states
is the QSH state. The edge states consist of two states
with opposite spins and velocities. These states
are thus carrying a spin current.
To realize this state we need a spin-dependent magnetic field, which
can be produced by the spin-orbit coupling.
This example is just a superposition of two QH systems with conserved
spin $s^z$.
Nevertheless, in real systems, the spin-orbit coupling
does not necessarily conserve $s^z$, and the above
two QH subsystems are mixed with each other.
The next question is what happens when $s^z$ is no
longer a good quantum number.
We consider this question in the following, and
we see that the physics coming from topology survives
partially.
This phase is realized in the model proposed by Kane and Mele
\cite{Kane05a,Kane05b}.
The model is given as
\begin{equation}
H_{KM}=t\sum_{\langle i,j\rangle}c_{i}^{\dagger}c_j
+i\lambda_{{\rm SO}}
\sum_{\langle\langle i,j\rangle \rangle}\nu_{ij}c_{i}^{\dagger}s_z
c_{j}
+i\lambda_{R}\sum_{\langle i,j\rangle}c_{i}^{\dagger}({\bf s}\times
\hat{{\bf d}}_{ij})_z c_j+ \lambda_{v}\sum_i \xi_{i}c_{i}^{\dagger}
c_{i}.
\end{equation}
$\xi_{i}$ represents a staggered on-site potential,
taking values $\pm 1$ depending on the sublattice index.
$\hat{{\bf d}}_{ij}$ is the unit vector along
the nearest neighbor bond from $i$ to $j$.
For a special case, if $\lambda_R=0$ and $\lambda_v=0$, the model
conserves $s^z$ and it reduces to a superposition of two Haldane models:
\begin{equation}
H_{KM}(\lambda_R=0,\lambda_v=0)=H^{\uparrow}_{{\rm Haldane}}(\phi=-\pi/2)+
H^{\downarrow}_{{\rm Haldane}}(\phi=\pi/2).
\end{equation}
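Here the identification is $t_1=t$ and $t_2=\lambda_{\rm SO}$; indeed, for
$s_z=\pm 1$ the spin-orbit hopping $\pm i\lambda_{\rm SO}\nu_{ij}$ coincides with the
Haldane next-nearest-neighbor hopping $t_2 e^{-i\nu_{ij}\phi}$ at $\phi=\mp\pi/2$,
since $e^{\pm i\nu_{ij}\pi/2}=\pm i\nu_{ij}$ for $\nu_{ij}=\pm 1$.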
For generic cases where $s^z$ is not conserved,
the key ingredient is the time-reversal
symmetry, which gives rise to the Kramers degeneracy
between ${\bf k}$ and $-{\bf k}$.
In the theory the wavenumbers satisfying ${\bf k}\equiv -{\bf k}$
$({\rm mod}\ {\bf G})$ play an important role.
Such momenta are called the time-reversal-invariant momenta (TRIM),
and are expressed as ${\bf k}={\bf \Gamma}_i$ where ${\bf \Gamma}_{i=(n_1 n_2)}
=\frac{1}{2}(n_1{\bf G}_{1}+n_2{\bf G}_{2})$ with $n_1,n_2=0,1$ in 2D
\cite{Kane05a,Fu06a,Fu07b}.
The $Z_2$ topological number $\nu$ is defined in the following way
\cite{Fu06a,Fu07b}.
First we introduce a $(2N)\times (2N)$ matrix $w$, defined as
\begin{equation}
w_{mn}({\bf k})=\langle u_{-{\bf k},m}|\Theta
|u_{{\bf k},n}\rangle,
\end{equation}
where $\Theta$ is the time-reversal operator
represented as $\Theta=i\sigma_y K$
with $K$ being complex conjugation.
$u_{{\bf k},m}$ is the periodic part of the $m$-th Bloch wavefunction lying
below $E_F$,
and
$N$ is the number of Kramers pairs below $E_F$.
This matrix $w({\bf k})$ is unitary at any ${\bf k}$,
and is also antisymmetric at ${\bf k}={\bf \Gamma}_{i}$.
Then for each TRIM we define an index $\delta_i$ as
\begin{equation}
\delta_{i}\equiv \frac{\sqrt{{\rm det}w({\bf \Gamma}_i)}}{
{\rm Pf}w({\bf \Gamma}_i)},
\label{eq:Iasym}
\end{equation}
for ${\cal I}$-asymmetric systems \cite{Fu06a}, and
\begin{equation}
\delta_{i}\equiv \prod_{m=1}^{N}
\xi_{2m}({\bf \Gamma}_i),
\label{eq:Isym}
\end{equation}
for ${\cal I}$-symmetric systems \cite{Fu07b}, where
$\xi_{2m}({\bf \Gamma}_i)$ ($=\pm 1$) is the parity eigenvalue of the
Kramers pairs at ${\bf \Gamma}_i$.
The index $\delta_i$ takes the values $\pm 1$.
The $Z_2$ topological number $\nu$ is then defined as
\begin{equation}
(-1)^{\nu}=\prod_{i=1}^{4}\delta_{i}, \ \
\label{eq:Z2-2D}
\end{equation}
where the product is taken over the TRIMs ${\bf k}={\bf \Gamma}_{i}$.
Hence $(-1)^{\nu}$ takes only two values $\pm 1$, which means
there are only two distinct cases $\nu=\mathrm{even}$ and
$\nu=\mathrm{odd}$. For simplicity of notations we henceforth call
these cases as $\nu=0$ and $\nu=1$.
If the resulting value is $\nu=1$ it is the QSH phase, and if
it is $\nu=0$ it is the ordinary insulator.
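For instance, if $\delta_i=+1$ at three of the four TRIMs and $\delta_i=-1$ at the remaining one,
Eq.~(\ref{eq:Z2-2D}) gives $(-1)^{\nu}=-1$, i.e.\ $\nu=1$ and the system is in the QSH phase;
flipping the sign of any single $\delta_i$ switches $\nu$ between 0 and 1.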
The expression of the $Z_2$ topological number is different between
systems with and without ${\cal I}$-symmetry.
This shows a crucial role of the ${\cal I}$-symmetry in the
theory of the QSH systems. It is because the ${\cal I}$ symmetry
is the only symmetry operation which relates ${\bf k}$ and
$-{\bf k}$.
To illustrate the physics of the $Z_2$ topological number, we calculate the
band structure for geometries with edges. Let us consider
a ribbon geometry, which is finite in one direction and is infinite
in the other direction.
To see the difference between two phases, QSH and ordinary insulator (I),
we take the sets of parameter values for the respective
phases, as employed in Ref.~\citen{Kane05b}. The result
is shown in Figs.~\ref{fig:QSH} and \ref{fig:SHI} for the QSH and I
phases, respectively.
As we see from Fig.~\ref{fig:QSH},
there exist gapless edge states in the QSH phase,
irrespective of the geometry.
In contrast, for the insulator phase
(Fig.~\ref{fig:SHI}) there are no gapless edge states.
In fact there are edge states, but they do not
go across the gap. These edge states may or may not
cross the Fermi energy. Even if they cross the Fermi
energy, it is not an intrinsic property, and the crossing
can disappear by perturbation.
\begin{figure}
\centerline{\includegraphics[width=12cm]{ptp-QSH2.eps}}
\caption{Band structure of the Kane-Mele model in the QSH phase
in the ribbon geometry with (a) zigzag edges and (b) armchair edges.
The parameters are $\lambda_v=0.1t$,
$\lambda_{SO}=0.06t$ and $\lambda_R=0.05t$.}
\label{fig:QSH}\end{figure}
\begin{figure}
\centerline{\includegraphics[width=12cm]{ptp-SHI2.eps}}
\caption{Band structure of the Kane-Mele model in the I phase
in the ribbon geometry with (a) zigzag edges and (b) armchair edges.
The parameters are $\lambda_v=0.4t$,
$\lambda_{SO}=0.06t$ and $\lambda_R=0.05t$.}
\label{fig:SHI}
\end{figure}
The bulk band-structure is gapped for both the QSH and I phases
and looks similar.
Namely, the topological order is not evident in the bulk band
structure. The existence of the robust edge states is encoded
in the bulk wavefunctions.
There is a bulk-edge correspondence between the following two statements:
\begin{enumerate}
\item the $Z_2$ topological number is $\nu=1$ ($\nu=0$);
\item there is an odd (even) number of Kramers pairs of gapless edge states.
\end{enumerate}
This correspondence can be shown
by an argument similar to the well-known Laughlin gedanken
experiment \cite{Laughlin81,Fu06a}.
Suppose we consider the system on a ribbon, with two opposite ends
attached. In one direction the system is periodic while in the
other direction there are edges.
When we increase the flux penetrating the hole from zero to half of
the flux quantum,
the change of a certain physical quantity (time-reversal polarization)
is zero for $\nu=0$ while it is unity for $\nu=1$ \cite{Fu06a}. This means
that for $\nu=1$ there is a Kramers pair of gapless edge states while for
$\nu=0$ there are no gapless edge states.
\subsection{Surface States in 3D Quantum Spin Hall Systems}
The analogous phase is also possible in 3D.
In this case this phase is an insulator in the bulk and
supports gapless surface states carrying spin currents.
In this case as well, there is a correspondence between
the bulk and the surface. The topology of
Fermi ``curve'' of the surface states
is related with the $Z_2$ topological numbers for
the bulk.
One can see this by the 3D tight-binding model introduced by
Fu {\it et al.} \cite{Fu06b} on a diamond lattice.
This model
exhibits a transition between QSH and I phases.
The model is written as
\begin{equation}
H=t\sum_{\langle ij\rangle}c_{i}^{\dagger}c_{j}
+i(8\lambda_{\mathrm{SO}}/a^{2})\sum_{\langle\langle
ij\rangle\rangle} c_{i}^{\dagger}\mathbf{s}\cdot
(\mathbf{d}_{ij}^{1}\times \mathbf{d}_{ij}^{2})c_{j}.
\end{equation}
Here $a$ is the size of the cubic unit cell, $t$ is the
nearest-neighbor
hopping, and ${\bf s}=(s_x, s_y, s_z)$ are the Pauli matrices.
The term with $\lambda_{\mathrm{SO}}$ is
a spin-dependent hopping
to the next nearest neighbor sites, representing the
spin-orbit coupling.
The vectors $\mathbf{d}_{ij}^{1}$ and
$\mathbf{d}_{ij}^{2}$ denote
those for the
two nearest neighbor bonds involved in
the next-nearest-neighbor hopping.
In 3D, the TRIMs are ${\bf \Gamma}_{i=(n_1 n_2 n_3)}
=\frac{1}{2}(n_1{\bf G}_{1}+n_2{\bf G}_{2}+n_3{\bf G}_{3})$
with $n_1,n_2,n_3=0,1$.
There are four
$Z_2$ topological numbers
$\nu_{0},\nu_{1},\nu_{2},\nu_{3}$ \cite{Moore06,Fu06b}, given by
\begin{equation}
(-1)^{\nu_{0}}=\prod_{i=1}^{8}\delta_{i}, \ \
(-1)^{\nu_{k}}=\prod_{n_k=1; n_{j\neq k}=0,1}\delta_{i=(n_1n_2n_3)}.
\label{eq:Z2-3D}
\end{equation}
Each phase is expressed as $\nu_0; (\nu_1\nu_2\nu_3)$, which
distinguishes 16 phases.
Because among $\nu_i$, $\nu_0$ is the only topological number which
is robust against disorder, the phases are mainly
classified by $\nu_0$.
When $\nu_{0}$ is odd the phase is called the strong topological
insulator (STI), and when $\nu_{0}$ is even it is called the weak topological
insulator (WTI). The STI and WTI correspond to the QSH and I phases,
respectively.
The other indices $\nu_1$, $\nu_2$, and $\nu_3$ are used
to distinguish various phases in the STI or WTI phases,
and each phase can be associated with a mod 2 reciprocal lattice vector
$\mathbf{G}_{\nu_1\nu_2\nu_3}=\nu_1\mathbf{b}_{1}
+\nu_2\mathbf{b}_{2}
+\nu_3\mathbf{b}_{3}$, as was proposed in Ref.~\citen{Fu06b}.
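As a simple example of this bookkeeping, if $\delta_i=-1$ only at ${\bf \Gamma}_{(000)}$ and
$\delta_i=+1$ at the other seven TRIMs, Eq.~(\ref{eq:Z2-3D}) gives $\nu_0=1$ and
$\nu_1=\nu_2=\nu_3=0$, i.e.\ the STI phase $1;(000)$; if instead $\delta_i=-1$ only at
${\bf \Gamma}_{(111)}$, one obtains $1;(111)$.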
These topological numbers in 3D determine the topology of the
surface states for arbitrary crystal surfaces \cite{Fu06b}.
We note that among the four $Z_2$ topological numbers in 3D,
only $\nu_{0}$ is robust against nonmagnetic impurities,
while the others ($\nu_k$ ($k=1,2,3$)) are meaningful only
for a relatively clean sample \cite{Fu06b}.
\section{Quantum Phase Transition with a Change of
Topological Number}
As we have seen in the Haldane model on the honeycomb lattice,
it is hard to calculate
the topological number itself and
to capture its physical meaning in an intuitive way.
The topological number involves an integral over the whole
Brillouin zone, both for the QSH systems and the QH systems.
On the other hand, the ``change'' of the topological number
is more accessible. This is because the change occurs locally
in ${\bf k}$ space. As we change an external parameter,
the system may undergo a phase transition.
It is accompanied by a closing of the bulk gap at a certain ${\bf k}$,
because this
is the only way to change the topological number.
When an external parameter is changed,
in some cases the phase transition occurs, while in other cases
it does not, because of the level repulsion.
In the following we consider ``generic'' gap closing by
tuning a {\it single} parameter, which we call $m$.
We exclude the cases where the gap closing is
achieved by tuning more than one parameter.
In such cases, the phase transition might be
circumvented by some perturbation.
In general, energy levels repel each other, thereby the
valence and the conduction bands do not touch if the number of tuned
parameters is not large enough. The number of tuned parameters to
achieve degeneracy, called the codimension, is sensitive to the symmetry
and the dimension of the system considered.
We henceforth consider only clean systems without any impurities or
disorder. The time-reversal symmetry is
assumed.
We also assume that the Hamiltonian is generic, and
we exclude the Hamiltonians which require fine tuning of parameters.
In other words, we exclude the cases which are vanishingly improbable
as a real material.
\subsection{2D Quantum Hall Systems}
For the 2D QH systems the situation is simple, and in fact we have already
seen the physics of gap closing in the Haldane model.
We can discuss the gap closing in generic systems,
and it is similar to the one already described in Section 2.
The Chern number is the total flux of $B_z({\bf k})$
inside the Brillouin zone.
Therefore, if we roll the Brillouin zone into a torus,
the Chern number is nothing but a total monopole charge
inside the torus, where a monopole and an antimonopole
have charges $+1$ and $-1$, respectively.
Each band is associated with the respective Chern number.
At the gap closing, the Chern number at the lower band
changes by $\pm 1$ and that of the upper band
changes by $\mp 1$. This means that a monopole (or an
antimonopole) goes out of the Brillouin-zone torus of the upper band,
and into the Brillouin-zone torus of the lower band.
Let $m$ denote the parameter which drives the phase transition.
In the Haldane model the on-site staggered potential
$M$ plays the role of the parameter $m$.
The feature of the phase transition
is further clarified by considering a hyperspace
$(m,\ k_x, k_y)=(k_0,\ k_1,\ k_2)$,
and characterizing
the gap closing in terms of the gauge field in $m$-${\bf k}$-space
\cite{Berry84, Volovik}.
Suppose the gap closes at an
isolated point $\tilde{\bf k}=(m,\mathbf{k})$.
Then the involved bands, which we call $\alpha$-th and $\beta$-th bands,
have monopoles in $m$-${\bf k}$ space, with their monopole charges
opposite in sign \cite{Berry84,Volovik}. More precisely,
the gauge field $\mathbf{A}_{\alpha}
(\tilde{\mathbf{k}})$ and the
corresponding field strength
$\mathbf{B}_{\alpha}(\tilde{\mathbf{k}})$ for the $\alpha$-th band
are defined as
\begin{align}
&\mathbf{A}_{\alpha}
(\tilde{\mathbf{k}})=-i\langle \psi_{\alpha}(\tilde{\mathbf{k}})|\nabla_{\tilde{\mathbf{k}}}|
\psi_{\alpha}(\tilde{\mathbf{k}})\rangle,\label{eq:A}\\
&\mathbf{B}_{\alpha}
(\tilde{\mathbf{k}})=\nabla_{\tilde{\mathbf{k}}}\times\mathbf{A}_{\alpha}(\tilde{\mathbf{k}}).
\label{eq:B}
\end{align}
The corresponding monopole density is defined as
\begin{equation}
\rho_{\alpha}
(\tilde{\mathbf{k}})=\frac{1}{2\pi}\nabla_{\tilde{\mathbf{k}}}\cdot\mathbf{B}_{\alpha}
(\tilde{\mathbf{k}}).
\end{equation}
Except at the points where the $\alpha$-th band touches other bands,
the monopole density $\rho_{\alpha}(\tilde{\mathbf{k}})$ vanishes
identically. At the $\tilde{\bf k}$ point where the $\alpha$-th band
touches another band (the $\beta$-th band), the wavefunction cannot be written
as a single analytic function around this point,
and the wavefunction is to be written with more than one ``patches''
which are related by gauge transformation \cite{Kohmoto85},
as is similar to the vector potential around the Dirac monopole
in electromagnetism \cite{WuYang}.
This leads to a $\delta$-function singularity of
$\rho(\tilde{\mathbf{k}})$ at the band touching; $\rho_{\alpha}(\tilde{\bf k})
\sim -\rho_{\beta}(\tilde{\bf k})\sim
\pm\delta(\tilde{\bf k}-\tilde{\bf k}_0)$.
As a result the monopole density is written in general
as $\rho(\tilde{\mathbf{k}})=\sum_{l}q_{l}
\delta(\tilde{\mathbf{k}}-\tilde{\mathbf{k}}_{l})$,
where $q_{l}$ is an
integer
representing
a monopole charge.
The monopole charge is conserved under a continuous change of the
Hamiltonian. The monopoles indicate the value of $(m,\mathbf{k})$ where
the gap closes.
At the monopole, the Chern number for the band considered
changes by $\pm 1$.
As an example, in the Haldane model on the honeycomb lattice
the gap closes either at $K$ or at $K'$, and at such a gap closing
the Chern number changes by one.
Meanwhile,
when $M$ is changed while $\phi=0$ is kept,
the gap closes simultaneously at $K$ and $K'$, and the changes of the
Chern number at $K$ and $K'$ cancel each other; the
Chern number does not change as a result.
\subsection{Quantum Spin Hall Phase and Universal Phase Diagram}
Because the QSH phase is roughly analogous to a
superposition of two QH systems, the phase transition
can be studied similarly.
The $Z_2$ topological number is preserved as long as the gap
remains open. Suppose the system goes from
the insulator to the QSH phase by changing
some parameter of the system. Then the gap should close somewhere
in between. Whether or not the gap closes as the parameter is varied reflects
the topological properties of the system.
We investigated the criterion
for the occurrence of the phase transition in Refs.~\citen{Murakami07a,Murakami07b,Murakami08a},
as we explain in the following. Through this study we can show that
the gap-closing physics is equivalent to the
physics of the $Z_2$ topological number.
\begin{figure}
\centerline{\includegraphics[width=12cm]{phase-diagram-ptp.eps}}
\caption{Universal phase diagram between the QSH and the insulator
phases in (a) 3D and (b) 2D. $m$ is a parameter driving the phase
transition, and $\delta$ is a parameter describing the degree
of inversion-symmetry-breaking.}
\label{fig:phase-diagram}
\end{figure}
\begin{figure}
\includegraphics[scale=0.52]{band-crossing-ptp.eps}
\caption{Generic gap-closing for (a-1) 2D inversion-asymmetric,
(a-2) 2D inversion-symmetric, (b-1) 3D inversion-asymmetric and
(b-2) 3D inversion-symmetric cases. In the cases (a-2)
(b-2) all the states are
doubly degenerate by Kramers theorem. In (a-1)(a-2) and (b-2), the
gap closing and concomitant phase transition occurs only
at a single value of $m$: $m=m_0$. Meanwhile in (b-1), by increasing
$m$,
the gap closes at $m=m_1$, and the bulk remains gapless in
$m_1\leq m\leq m_2$. The gap opens again at $m=m_2$.
Although in reality the ${\bf k}$ space is two-dimensional in (a-1)(a-2),
and three-dimensional in (b-1)(b-2), it is drawn as one-dimensional
in (a-1)(a-2), and two-dimensional in (b-1)(b-2) for clarity of illustration.}
\label{fig:degeneracy}\end{figure}
To study the phase
transition in 2D and in 3D, we consider a Hamiltonian matrix
\begin{equation}
H({\bf k})=\left(\begin{array}{cc}
h_{\uparrow\uparrow}({\bf k})& h_{\uparrow\downarrow}({\bf k})\\
h_{\downarrow\uparrow}({\bf k})& h_{\downarrow\downarrow}({\bf k})
\end{array}\right).\label{eq:Hamiltonian}
\end{equation}
We assume that the system is a band insulator,
and the Fermi energy $E_F$
lies within the gap.
The time-reversal symmetry gives
\begin{equation}
H({\bf k})=s_y H^{T}(-{\bf k})s_y,
\label{time-reversal}\end{equation}
which is rewritten as
$h_{\uparrow\uparrow}({\bf k})
=h_{\downarrow\downarrow}^{T}(-{\bf k})$,
$h_{\uparrow\downarrow}({\bf k})
=-h_{\uparrow\downarrow}^{T}(-{\bf k})$,
$h_{\downarrow\uparrow}({\bf k})
=-h_{\downarrow\uparrow}^{T}(-{\bf k})$.
The Kramers theorem guarantees that
the band structure of such time-reversal-symmetric spin-$1/2$ system is
symmetric with respect to ${\bf k}\leftrightarrow -{\bf k}$.
While the dimension of the Hamiltonian is arbitrary,
it will be taken as the number of states
involved in the gap closing.
The feature of the phase transition is
different depending on whether the system considered is (i) ${\cal I}$-symmetric
or (ii) ${\cal I}$-asymmetric \cite{Murakami07b,Murakami07a}.
It is because the degeneracy for each state is different
for the two cases.
According to the Kramers theorem, the time-reversal-symmetry says
$\varepsilon_{n \alpha}({\bf k}) = \varepsilon_{n{\bar \alpha}}(-{\bf k})$,
where $\varepsilon_{n \alpha}({\bf k})$ is the energy of
the $n$-th band with pseudospin $\alpha$, and ${\bar \alpha}$
is the pseudospin opposite to $\alpha$.
If in addition,
the system is ${\cal I}$-symmetric (i),
all the states are doubly
degenerate, because
the ${\cal I}$-symmetry imposes
$\varepsilon_{n \alpha}({\bf k}) = \varepsilon_{n \alpha}(-{\bf k})$, leading
to $\varepsilon_{n \alpha}({\bf k}) = \varepsilon_{n {\bar \alpha}}({\bf k})$.
If (ii) ${\cal I}$-symmetry is broken, double degeneracy occurs only at points
${\bf k} = {\bf \Gamma}_{i}$, where ${\bf \Gamma}_{i}$
is one of the TRIM.
By analyzing the respective cases (i)(ii) in 2D and 3D, we
obtain a universal phase diagram shown in Fig.~\ref{fig:phase-diagram}.
Here $m$ is a parameter driving the phase
transition, and a parameter $\delta$ describes the degree
of inversion-symmetry-breaking.
The derivation of this universal phase diagram is generic and
based on topological arguments. Hence it
does not depend on the details of the system. The parameter $m$
can be any parameter, and in the CdTe/HgTe/CdTe quantum well
the well thickness $d$ plays the role of $m$ here.
\subsubsection{Inversion asymmetric systems}
In ${\cal I}$-asymmetric systems,
when ${\bf k} \neq {\bf \Gamma}_{i}$,
each band is non-degenerate.
At the gap-closing point, one valence band and one conduction band
become degenerate. In this case a $2\times 2$ Hamiltonian matrix
is sufficient for our purpose;
\begin{equation}
H=\left(
\begin{array}{cc}
a & c\\
c^{*} & b
\end{array}
\right),
\end{equation}
where
$a$, $b$ are real functions of ${\bf k}$ and $m$, and $c$ is
a complex function of ${\bf k}$ and $m$.
The condition for the two eigenvalues to be identical
consists of three equations $a=b$, ${\rm Re}c=0$ and ${\rm Im}c=0$,
i.e. the codimension is three \cite{vonNeumann29,Herring37}.
To put it in a different way, the 2$\times$2 Hamiltonian $H(m,\mathbf{k})$
is expanded as
\begin{equation}
H(m,\mathbf{k})=a_0(m,\mathbf{k})+\sum_{i=1}^{3}a_{i}(m,\mathbf{k})
\sigma_i.
\end{equation}
The gap closes when the two eigenvalues are identical, i.e.
when the three conditions
$a_{i}(m,\mathbf{k})=0$ ($i=1,2,3$) are satisfied. This means that the codimension is three.
In 2D, the codimension three is equal to the number of parameters involved,
that is, $k_x$, $k_y$ and $m$. Thus the gap can close at some ${\bf k}$
when the parameter $m$ is tuned to a critical value.
Near the gap-closing point ${\bf k}={\bf k}_{0}(\neq {\bf \Gamma}_{i})$,
the system's Hamiltonian corresponds to a massive Dirac fermion,
and can be expressed
as
\begin{equation}
{\cal H}=(m-m_0)\sigma_z
+(k_x-k_{0x})\sigma_x+(k_y-k_{0y})\sigma_y
\end{equation}
after unitary and scale transformations.
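Its spectrum, $E=\pm\sqrt{(m-m_0)^{2}+(k_x-k_{0x})^{2}+(k_y-k_{0y})^{2}}$, shows explicitly
that the gap $2|m-m_0|$ at ${\bf k}={\bf k}_{0}$ closes only at $m=m_0$.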
The time-reversal-symmetry requires that
the gap closes simultaneously at ${\bf k}_0$ and
$-{\bf k}_0$ as shown in
Fig.~\ref{fig:degeneracy} (a-1), and that the masses of the Dirac fermions
at ${\bf k}=\pm
{\bf k}_{0}$ have
opposite signs.
In the Kane-Mele model for the QSH phase \cite{Kane05a,Kane05b}
the gap closes simultaneously
at the $K,K'$ points, corresponding to the present case.
In 3D, unlike in 2D, the gap closing at
${\bf k}=\pm {\bf k}_{0}\neq {\bf \Gamma}_{i}$
cannot directly lead to a phase transition. This is because the
codimension three is less than the number of
parameters $(m,k_x,k_y,k_z)$.
The three gap-closing conditions determine a curve
in the four-dimensional space $(m,k_x,k_y,k_z)$.
When $m$ is changed continuously the gap-closing ${\bf k}$
point moves in the ${\bf k}$ space, and the system
remains gapless.
This curve forms a loop $C$ in $m$-${\bf k}$ space, and
the gap opens when $m$ is changed across the extremum
of the loop.
The loop $C$ is a trajectory of the gap-closing points
i.e. monopoles
in the $\mathbf{k}$-space as we change $m$.
It forms a closed loop in the $m$-$\mathbf{k}$ space,
because of the conservation
of the monopole charge.
The time-reversal-symmetry requires
\begin{equation}
\mathbf{B}_{\alpha}(\mathbf{k})=-\mathbf{B}_{\bar{\alpha}}(-\mathbf{k}), \
\rho_{\alpha}(\mathbf{k})=\rho_{\bar{\alpha}}(-\mathbf{k}).
\end{equation}
The monopoles are symmetric
with respect to the origin. Therefore, the generic form of the
loop $C$ is as shown in Fig.~\ref{fig:monopole}.
Thus the loop $C$ occupies a finite region in the value of $m$, and
it follows that
in ${\cal I}$-asymmetric 3D systems, a gapless phase emerges \cite{Murakami07b},
which
is nonexistent in 2D.
\begin{figure}
\includegraphics[scale=0.45]{trajectory-ptp.eps}
\caption{Trajectory of the gap-closing points for (a) inversion-
(${\cal I}$-)asymmetric and
(b) symmetric systems. For (b) ${\cal I}$-symmetric systems, the gap-closing
point is located at $\mathbf{k}={{\bf \Gamma}}_i$, and isolated in the
$m$-$\mathbf{k}$ space. Only
at $m=m_0$ the system is gapless.
For (a) ${\cal I}$-asymmetric systems, the gap-closing points are created in monopole-antimonopole pairs
at $m=m_1$, and
move in $\mathbf{k}$-space as $m$ changes.
Solid and broken curves denote the trajectories of the
monopoles and antimonopoles, respectively. The system opens a gap when
these gapless points annihilate in pairs at $m=m_2$. }
\label{fig:monopole}\end{figure}
So far we discussed the gap closing at ${\bf k}\neq {\bf \Gamma}_{i}$.
To complete the discussion,
we show that the gap does not close at ${\bf k} = {\bf \Gamma}_{i}$,
when the system is ${\cal I}$-asymmetric.
At ${\bf k}={\bf \Gamma}_{i}$,
the bands
are doubly degenerate, and the codimension
is five \cite{Avron88,Avron89}, exceeding the number of
tunable parameters which is one (that is, $m$).
Thus, generic gap-closing
cannot occur at ${\bf k} = {\bf \Gamma}_{i}$.
One can see this as follows.
Because both the valence and the conduction bands
are doubly degenerate, we consider $4\times 4$
Hamiltonian matrix with time-reversal-symmetry.
From (\ref{time-reversal}) we get
\begin{equation}
{H}(m,{\bf k}={\bf \Gamma}_{i})=E_{0}+\sum_{i=1}^{5}a_{i}
\Gamma_{i}
\label{eq:asym-Gamma-i}
\end{equation}
where $a_{i}$'s and $E_{0}$ are real, and
$\Gamma_{1}=1\otimes\tau_{x}$, $\Gamma_{2}=\sigma_{z}\otimes\tau_{y}$,
$\Gamma_{3}=1\otimes\tau_{z}$,
$\Gamma_{4}=\sigma_{y}\otimes\tau_{y}$, and
$\Gamma_{5}=\sigma_{x}\otimes\tau_{y}$, and $\sigma_i$, $\tau_i$
are the Pauli matrices.
The eigenenergies are
$E_{0}\pm\sqrt{\sum_{i=1}^{5}a_{i}^{2}}$.
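This follows from the Clifford algebra $\{\Gamma_{i},\Gamma_{j}\}=2\delta_{ij}$ obeyed by
the five matrices above, which gives $(H-E_{0})^{2}=\sum_{i=1}^{5}a_{i}^{2}$, so that each of
the two eigenvalues is doubly degenerate.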
The condition for the gap-closing between
the two (doubly-degenerate) bands consists of five equations
$a_{i}=0$ for $i=1,\cdots,5$,
which are not satisfied by tuning only one parameter $m$. (Here
the wavenumber ${\bf k}$ is fixed to be ${\bf \Gamma}_{i}$.)
Thus the gap does not close at ${\bf k}={{\bf \Gamma}}_{i}$
by tuning a single parameter $m$.
\subsubsection{${\cal I}$-symmetric systems}
In ${\cal I}$-symmetric systems, the energies
are doubly degenerate for every ${\bf k}$ by the Kramers theorem.
The gap closes
between the two doubly-degenerate bands,
and
we set the Hamiltonian
matrix $H({\bf k})$ to be 4$\times$4.
The ${\cal I}$-symmetry is imposed as
\begin{equation}
H(-{\bf k})=PH({\bf k})P^{-1}, \ u(-{\bf k})=Pu({\bf k}),
\label{eq:H-inversion}
\end{equation}
where $P$ is a unitary matrix independent of ${\bf k}$ which
commutes with the spin matrices,
and $u({\bf k})$ is the periodic part
of the Bloch wavefunction: $\varphi_{{\bf k}}({r})
=u({\bf k})e^{i{\bf k}\cdot{r}}$.
Because $P^2=1$, the eigenvalues of $P$ are $\pm 1$.
By a unitary transformation which diagonalizes $P$,
one can rewrite
\begin{equation}
P=\left(
\begin{array}{cc}
P_{\uparrow}&\\
&P_{\downarrow}\end{array}
\right),\ \
P_{\uparrow}=P_{\downarrow}={\rm diag}(\eta_{a},\ \eta_{b}),\ \
\eta_a=\pm 1,\ \ \eta_b=\pm 1
\end{equation}
without loss of generality. $\eta_a$ and $\eta_b$ represent the
parity eigenvalues of the wavefunctions.
One of them corresponds to the valence band, and the other to the
conduction band.
While there are four combinations for $\eta_a=\pm 1$ and $\eta_b=\pm 1$,
the overall sign of $(\eta_a,\eta_b)$ can be changed arbitrarily
by a gauge transformation. Thus the only distinct cases are
(i) $\eta_a=\eta_b$ and
(ii) $\eta_a=-\eta_b$.
The case (i) $\eta_{a}=\eta_{b}=\pm 1$ means that
the wavefunctions (orbitals) $a$, $b$
have the same parity, e.g. two $s$-like orbitals or two $p$-like
orbitals.
When $\eta_{a}=\eta_{b}=\pm 1$, the Hamiltonian
becomes
\begin{equation}
{H}({\bf k})=E_{0}({\bf k})+\sum_{i=1}^{5}a_{i}({\bf k})
\Gamma_{i},
\label{eq:sym-same}
\end{equation}
where $a_{i}$'s and $E_{0}$ are real even functions of ${\bf k}$.
On the other hand, when (ii) $\eta_{a}=-\eta_{b}=\pm 1$,
where the two constituent wavefunctions have different parity,
the Hamiltonian is
\begin{equation}
{H}({\bf k})=E_{0}({\bf k})+a_{5}({\bf k})\Gamma'_{5}+\sum_{i=1}^{4}
b^{(i)}({\bf k})
\Gamma'_{i}
\label{eq:sym-different}
\end{equation}
where $E_0({\bf k})$ and $a_{5}({\bf k})$ are even functions of ${\bf k}$,
and $b^{(i)}({\bf k})$ are odd functions of ${\bf k}$.
The matrices $\Gamma'_{1}=\sigma_{z}\otimes
\tau_{x}$,
$\Gamma'_{2}=1\otimes\tau_{y}$,
$\Gamma'_{3}=\sigma_{x}\otimes\tau_{x}$,
$\Gamma'_{4}=\sigma_{y}\otimes\tau_{x}$,
and $\Gamma'_{5}=1\otimes\tau_{z}$ form the Clifford algebra.
Therefore, in both cases, $\eta_a=\eta_b$ and $\eta_a=-\eta_b$,
the codimension is five, which exceeds the number of tunable
parameters ($m$, $k_x$, $k_y$, $k_z$).
Hence, gap closing does not occur in general.
However, this counting holds true only when ${\bf k}$ is at a
generic point with ${\bf k}\neq {{\bf \Gamma}}_{i}$.
At ${\bf k}={\bf \Gamma}_{i}$,
the codimension (number of parameters to
achieve degeneracy)
remains 5 for $\eta_a=\eta_b$ (Eq.~(\ref{eq:sym-same})),
while it becomes 1 for $\eta_a=-\eta_b$ (Eq.~(\ref{eq:sym-different})).
This is because the odd functions $b^{(i)}({\bf k})$ vanish identically
at ${\bf k}={\bf \Gamma}_{i}$,
and one has only to tune
$a_{5}({\bf k})$ to be zero.
The number of parameters ($m$) is one, because the wavenumber is fixed
as ${\bf k}={{\bf \Gamma}}_{i}$. Thus, the gap closes by fine-tuning a
single parameter, only
when $\eta_a=-\eta_b$.
In the following we derive an effective Hamiltonian describing
the low-energy physics of the QSH-I phase transition.
As a gap-closing point, we take ${\bf k}=0$ as an example, and
write down the Hamiltonian explicitly. Extension to other ${\bf k}=
{\bf \Gamma}_{i}$ points is straightforward.
The Hamiltonian is expanded in terms of ${\bf k}$ as
\begin{equation}
{H}(m,{\bf k})\sim E_{0}+m\Gamma'_{5}+\sum_{i=1}^{4}
\left({\bf \beta}^{(i)}\cdot{\bf k}\right)
\Gamma'_{i},
\end{equation}
where $E_0$ and $m$ are constants, and
${\bf \beta}^{(i)}$ $(i=1,\cdots,4)$ are two-dimensional real constant vectors.
The critical value of $m$ is set as zero. After unitary transformations,
the Hamiltonian finally becomes block-diagonal,
\begin{equation}
{H}(m,{\bf k})=E_{0}+\left(
\begin{array}{cccc}
m&z_{-}&&\\
z_{+}&-m&&\\
&&m&-z_{+}\\
&&-z_{-}&-m
\end{array}\right),\label{eq:case-b}\end{equation}
where $z_{\pm}=b_{1}k_{x}+b_{3}k_{y}\pm ib_{2}k_{y}$ with real constants
$b_1$, $b_2$ and $b_3$.
If the system has fourfold rotational symmetry in the $xy$ plane for example,
one has $b_{1}=b_{2}$ and $b_{3}=0$, leading to $z_{\pm}\propto
k_{x}\pm ik_{y}$.
Thus we have shown that
a generic Hamiltonian with time-reversal- and ${\cal I}$- symmetries
decouples into a pair of
Hamiltonians describing two-component Dirac fermions,
with opposite signs of the
mass terms.
Such decoupling is nontrivial. This Hamiltonian
is identical with the one suggested for the HgTe quantum well
in Ref.~\citen{Bernevig06f}.
The eigenenergies are
$E=E_{0}\pm \sqrt{m^{2}+z_{+}z_{-}}$ and the gap closes at ${\bf k}=0$ when
the parameter $m$ is tuned to zero.
This kind of effective model can be used for studying disorder
effects in the QSH phase \cite{Shindou08}.
\subsubsection{$Z_2$ Topological Number}
We have discussed generic types of gap closing
in time-reversal invariant
systems, achieved by tuning a single parameter.
There are two types of gap closing: (a)
simultaneous gap closing at ${\bf k}=\pm{\bf k}_{0} (\neq{\bf \Gamma}_{i})$ occurs
in systems without ${\cal I}$-symmetry, and (b) gap closing between
two Kramers-degenerate bands (i.e. four bands) at ${\bf k}={\bf \Gamma}_{i}$ occurs in systems with
${\cal I}$-symmetry
(see Fig.~\ref{fig:degeneracy}).
Thus the gap-closing by tuning a single parameter
occurs only in limited cases,
due to level repulsion.
We note that we have not made any assumption on the $Z_2$ topological number
in deriving this result.
Nevertheless, we can show that both cases
(a) and (b) involve a change of the $Z_2$ topological number,
and are accompanied by a
quantum phase transition.
The Kane-Mele model on the honeycomb lattice \cite{Kane05b} belongs
to class (a) while the HgTe quantum-well model \cite{Bernevig06f} belongs to class (b).
In 2D ${\cal I}$-symmetric systems, the gap closing at
the QSH-I
transition occurs at TRIM $\mathbf{k}={{\bf \Gamma}}_i$. This is
accompanied by an exchange of
the parity eigenvalues between the valence and
the conduction bands. It corresponds
to the expression of the $Z_2$ topological number as a product of
the parity eigenvalues over all the TRIMs
$\mathbf{k}={{\bf \Gamma}}_i$ and over the
occupied states \cite{Fu06a} (Eq.~(\ref{eq:Isym})).
On the other hand, for ${\cal I}$-asymmetric 2D systems,
the gap closes at $\pm
\mathbf{k}_{0} (\neq {\bf \Gamma}_{i})$ by tuning $m$.
Because the $Z_2$ topological number should change at the gap closing,
the $Z_2$ topological number
should be expressed as an integral over the $\mathbf{k}$ space.
This is the Pfaffian expression of the $Z_2$ topological
number (Eq.~(\ref{eq:Iasym})) \cite{Kane05a,Fu06a}.
In retrospect, the gap closing in the ${\cal I}$-symmetric systems
occurs only at ${\bf \Gamma}_{i}$ between the bands with opposite parities.
This is because otherwise the codimension is five, exceeding the number
of tunable parameters.
On the other hand,
if ${\cal I}$-symmetry is absent, the level repulsion is less stringent,
and the gap can close at some ${\bf k}$ other than the TRIM
${\bf \Gamma}_{i}$.
This difference is reflected in the definition of the $Z_2$ topological number.
\subsection{Example: 3D Fu-Kane-Mele model}
The 3D Fu-Kane-Mele model \cite{Fu06b}
is an ideal model for studying
QSH-I phase transitions in 3D.
It is a 4-band tight-binding model on a diamond lattice
and is ${\cal I}$- and time-reversal-symmetric.
This means that every eigenstate is doubly degenerate by the Kramers theorem.
The doubly-degenerate conduction band and the valence band
touch at the three $X$ points, $X^r=(2\pi/a) \hat{r}$ $(r=x,y,z)$.
To describe the phases having a bulk gap,
one sets the nearest-neighbor hoppings for four bond directions to be
different; $t_i$ ($i=1,2,3,4$) \cite{Fu06b}.
The system then opens a gap.
When we set the hopping to be $t_{i}=t+\delta t_{i}$ and
$\delta t_{3}=0=\delta t_{4}$, the phase diagram is as shown
in Fig.~\ref{fig:phase-dia1}(a) as a function of $\delta t_{1}$ and
$\delta t_{2}$, obtained
in Ref.~\citen{Fu06b}.
At the phase boundaries the bulk gap vanishes.
\begin{figure}
\includegraphics[scale=0.5]{Phase_Diagram-ptp.eps}
\caption{Phase diagrams for the Fu-Kane-Mele model with $\delta t_3=0$,
$\delta t_4=0$. $t_1$ and $t_2$ are the bonds along the 111 and
$1\bar{1}\bar{1}$ directions.
We put $\lambda_{\mathrm{SO}}=0.1t$. The axes are in the unit
of $t$. (a) The phase diagram in
$\delta t_1$-$\delta t_2$ plane \cite{Fu06b}.
$\lambda_v$ is set as zero.
(b) The phase
diagram in the $\delta t_{+}$-$\lambda_v$ plane.
Here $\delta t_{+}=\delta t_{1}+\delta t_{2}$,
while $\delta t_{-}=\delta t_{1}-\delta t_{2}=0.1t$ is fixed.
The arrows in (a) and (b) refer to the same variation of parameters.}
\label{fig:phase-dia1}
\end{figure}
To verify the universal phase diagram in 3D, we introduce
an ${\cal I}$-symmetry-breaking term, which
does not exist in the original Fu-Kane-Mele model.
The simplest way to break ${\cal I}$-symmetry is
to introduce an alternating on-site energy $\lambda_v$ into
the system, as was done in the 2D Kane-Mele model on the
honeycomb lattice \cite{Kane05a}.
We then
calculate how the WTI-STI phase transition is modified by
the $\lambda_v$ term.
As we see from the phase diagram (Fig.~\ref{fig:phase-dia1}(a)),
we regard $\delta t_{+}=\delta t_{1}+\delta t_{2}$ as a parameter
$m$ driving the phase transition, while $\delta t_{1}-\delta t_{2}$ is
fixed to be $\delta t_{1}-\delta t_{2}=0.1t$ as an example.
This corresponds to
the arrow in Fig.~\ref{fig:phase-dia1}(a).
The phase diagram in the $\delta t_{+}$-$\lambda_{v}$ plane
is calculated as shown in Fig.~\ref{fig:phase-dia1}(b).
When the ${\cal I}$-symmetry is broken ($\lambda_{v}\neq 0$),
the gapless region appears in the phase diagram, in
accordance with our universal phase diagram \cite{Murakami08a}.
The trajectory (``string'')
of the gapless points in $\mathbf{k}$ space also agrees with
our theory.
As the parameter $\delta t_{+}$ is
changed along the arrow in Fig.~\ref{fig:trajectory-mono2}(a), the
gapless points move in $\mathbf{k}$ space as in
Fig.~\ref{fig:trajectory-mono2}(b) \cite{Murakami08a}.
The overall feature
of the trajectory, i.e. its pair creation and annihilation
with changing partners, perfectly agrees with
our theory.
The change of the $Z_2$ topological numbers is
also consistent with our theory \cite{Murakami08a}.
\begin{figure}[htb]
\includegraphics[scale=0.5]{dynamics_k-ptp.eps}
\caption{Trajectory of the gapless points in $\mathbf{k}$ space.
As we change $\delta t_{+}$ with $\lambda_v$ fixed as shown in
the arrow in (a),
the monopoles and antimonopoles
travel in the ${\bf k}$ space as shown in (b)
by the solid and broken curves, respectively.
The wavenumber $\mathbf{k}$ is shown in the unit of $(2\pi/a)$.}
\label{fig:trajectory-mono2}
\end{figure}
\section{Helical edge states}
From the effective model thus obtained,
we can calculate the helical edge states appearing at the
boundary between the QSH and the insulator phase.
Such a boundary is described by
making the mass parameter $m=m(x)$ dependent on space, with
$m(\pm \infty)= \pm m_{0}$, i.e.,
\begin{equation}
m=\left\{\begin{array}{l}
m_{0}\ \ : x\gg 0\\
-m_{0}\ \ : x\ll 0.
\end{array}\right.
\label{eq:DW}
\end{equation}
The detail of the crossover between $m_{0}$ and
$-m_{0}$ is unimportant and is left unspecified.
For 2D ${\cal I}$-asymmetric systems (Fig.~\ref{fig:degeneracy}(a-1)), one can
consider the Dirac fermions at ${\bf k}=\pm {\bf k}_{0}$ separately.
Masses of these Dirac fermions change sign at $m=0$; hence they yield
the edge states localized at the boundary, as explained in
Ref.~\citen{Niemi86}.
Because the Dirac fermions at ${\bf k}=\pm {\bf k}_{0}$ are
related by time-reversal symmetry,
the two edge states form a Kramers pair and carry a spin current.
For 2D ${\cal I}$-symmetric systems (Fig.~\ref{fig:degeneracy}(a-2)), we follow the discussion in
Refs.~\citen{Niemi86,Su79} to
show that such a boundary between phases with different $Z_2$
topological numbers has a Kramers pair of edge states.
By replacing $k_x$ by $-i\partial_x$ in Eq.~(\ref{eq:case-b}),
we consider
\begin{eqnarray}
&&\tilde{H}(k_{y})=E_{0}+b_1\partial_x\left(
\begin{array}{cccc}
0&-i &&\\
-i&0&&\\
&&0&i\\
&&i&0
\end{array}\right)\nonumber\\
&&\ \ \
+\left(
\begin{array}{cccc}
m& (b_3 -ib_2)k_y&&\\
(b_3 +ib_2)k_y&-m&&\\
&&m&-(b_3 +ib_2)k_y\\
&&-(b_3 -ib_2)k_y&-m
\end{array}\right).
\label{eq:case-b-DM}
\end{eqnarray}
To calculate the eigenstates it is convenient to perform a unitary
transformation as
\begin{equation}
H'(k_y)=Q^{\dagger}\tilde{H}({\bf k})Q=
E_{0}+\left(
\begin{array}{cccc}
b_2 k_y& m-b_1\partial_x&&\\
m+b_1\partial_x &-b_2 k_y&&\\
&&-b_2 k_y& m-b_1\partial_x\\
&&m+b_1\partial_x&b_2 k_y
\end{array}\right),\label{eq:case-b-DM-u}\end{equation}
where
\begin{equation}
Q=\frac{1}{\sqrt{2}}e^{-ib_3 k_{y}x/b_{1}}\left(
\begin{array}{cccc}
1&1&&\\
i&-i&&\\
&&-i&-i\\
&&-1&1
\end{array}\right).
\end{equation}
The eigenvalue problem reads as $H'(k_y)u_{k_{y}}(x)=E(k_y)u_{k_y}(x)$.
The term $E_0$ is absorbed by shifting the energy.
Because (\ref{eq:case-b-DM-u}) is block-diagonal, we first solve
the eigenvalue problem for the first two components of $u_{k_{y}}$.
By putting $u_{k_{y}}=(u_1,\ u_2,\ 0,\ 0)^{t}$, we get
\begin{eqnarray}
&&(E-b_2 k_y)u_1=Du_{2},\label{eq:Du2} \\
&&(E+b_2 k_y)u_2=D^{\dagger}u_{1}, \label{eq:Du1}
\end{eqnarray}
where $D=m-b_{1}\frac{\partial}{\partial x}$,
$D^{\dagger}=m+b_{1}\frac{\partial}{\partial x}$.
They yield eigenequations for $u_{1}$ and $u_{2}$, respectively:
\begin{eqnarray}
&&DD^{\dagger}u_{1}=(E^{2}-b_{2}^{2}k_{y}^{2})u_{1},\label{eq:DDdag}\\
&&D^{\dagger}Du_{2}=(E^{2}-b_{2}^{2}k_{y}^{2})u_{2}.\label{eq:DdagD}
\end{eqnarray}
Because (\ref{eq:DDdag}) is invariant under
$E\rightarrow -E$, the resulting spectrum
seems to be symmetric with respect to $E=0$; $E\leftrightarrow -E$.
However, it is not true, because in some cases the $u_1$ solutions to
(\ref{eq:DDdag}) have no corresponding solution for $u_2$.
If $E=-b_2 k_y$, (\ref{eq:Du1})
cannot be solved for $u_{2}$. Similarly, if
$E=b_2 k_y$, (\ref{eq:Du2}) cannot be solved for $u_1$. Thus
the solutions which are not symmetric with respect to $E=0$ are
as follows.
For $u_{1}(\neq 0)$ which satisfies $D^{\dagger}u_{1}=0$,
we get $E=b_{2}k_{y}$ and $u_2=0$ from Eqs.~(\ref{eq:Du2}) and
(\ref{eq:Du1}), whereas there is no
solution with $E=-b_{2}k_{y}$. By the same token, for
$u_{2}$ which satisfies $Du_{2}=0$, we get
$E=-b_{2}k_{y}$ from (\ref{eq:Du2}), whereas there is no
solution with $E=b_{2}k_{y}$.
Hence the spectral asymmetry is related to
the kernels for $D$ and $D^{\dagger}$.
For example, for $b_1>0$ and $m_0>0$, the solution at the
boundary (\ref{eq:DW}),
with $D^{\dagger}u_{1}=0$ gives
\begin{equation}
u_{1}\propto
\exp\left(-b_{1}^{-1}\int^{x}m(s)ds\right)
\end{equation} and $E=b_{2}k_{y}$,
while $Du_{2}=0$ has no normalizable solution.
Thus the energy dispersion in $k_{y}$ direction has a branch $E=b_{2}k_{y}$,
which crosses the Fermi energy $E\sim 0$. This state is gapless,
localized near $x=0$.
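For the simplest profile $m(x)=m_{0}\,{\rm sgn}(x)$ (with $m_{0},b_{1}>0$), the integral gives
$u_{1}\propto e^{-m_{0}|x|/b_{1}}$, i.e.\ the edge state decays into the bulk with the
localization length $b_{1}/m_{0}$.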
\begin{figure}[h]
\includegraphics[scale=0.9]{dispersion-ptp.eps}
\caption{Schematic dispersion curves for
the model (\ref{eq:case-b-DM}).}
\label{fig:dispersion}\end{figure}
We have thus far solved the eigenequation for the first two components.
The lower two components of the wavefunction $u$ are obtained from the above by
the time-reversal operation.
Therefore, the above-mentioned edge state with $E=b_2 k_y$ has a Kramers
partner with $E=-b_{2} k_{y}$. The whole dispersion is shown in
Fig.~\ref{fig:dispersion}.
Thus we have shown that
the Kramers pair of edge states exists
at the boundary between the QSH and I phases.
They cross at $k_y=0$, as follows from the Kramers theorem.
\section{Bismuth Ultrathin Films}
The QSH phase requires no magnetic field. This means that some
materials might realize the QSH phase by themselves without applying
any field. The only necessary conditions for the QSH systems
are as follows.
\begin{enumerate}
\item nonmagnetic insulator
\item the $Z_2$ topological number is odd ($\nu=1$)
\end{enumerate}
The latter condition means that the spin-orbit coupling
should be strong enough, which requires relatively heavy elements.
In the absence of the spin-orbit coupling the $Z_2$ topological
number $\nu$ is zero (i.e. even). When the spin-orbit coupling
is made stronger, some systems can change their $Z_2$ topological number
from $\nu=0$ to $\nu=1$. At the phase transition, the gap closes.
In this sense, the gap should be opened by the spin-orbit coupling.
This is an ambiguous statement. In Ref.~\citen{Murakami06a} we
clarified that systems with a large
susceptibility are a good starting point for the
materials search.
Among materials with heavy elements, we take bismuth as a candidate.
Bismuth is known as a strong diamagnet, due to interband
matrix elements induced by the spin-orbit coupling.
In this sense the gap originates from the spin-orbit coupling,
and bismuth is a good candidate for the 2D QSH phase.
Bismuth itself is a nonmagnetic semimetal, not an insulator.
We have to open a gap by some means to make it the QSH phase.
One idea is to make it into a thin film. Indeed, recent experiments
and first-principles calculations show that the (111) 1-bilayer
bismuth has a gap \cite{Koroteev08}.
In
Ref.~\citen{Murakami06a} we considered the (111) 1-bilayer bismuth ultrathin
film within a
simple tight-binding model, and theoretically proposed that it is in the
QSH phase.
We also calculated the parity eigenvalues for the tight-binding
model, and confirmed the result.
The other way to open a gap is to make an alloy with Sb. This leads us to
the 3D QSH, as has been proposed in Ref.~\citen{Fu06b}.
In the ARPES measurement \cite{Hsieh}, the Fermi ``surface'' for
the surface states has been observed, and the surface bands cross the Fermi
energy an odd number of times between the $\bar{\Gamma}$ point and the $\bar{M}$ point.
This shows that the surface state of Bi$_{0.9}$Sb$_{0.1}$ is
the topological one for the QSH system.
\section{Concluding Remarks}
With simple examples we have seen various kinds of edge states.
In graphene, the existence of edge states is sensitive to boundary
conditions, while in the QH and the QSH systems, the gapless edge states
exist irrespective of the boundary conditions.
This comes from the nontrivial topological number carried by
the bulk states, defined only for insulators.
On the other hand, graphene, being a zero-gap semiconductor,
cannot have such topological numbers,
which means that the edge states are not robust against perturbations.
The quantum phase transition between the QSH and insulator phases
has also been studied. We consider a generic time-reversal-invariant system
with a gap, and study the condition under which the bulk gap closes
by tuning a single external parameter. Due to level repulsion,
the gap does not always close by tuning a single parameter;
instead, in many cases fine tuning of more than one parameter is
needed to close the gap.
In the ${\cal I}$-symmetric systems, the gap closes only at
the TRIM ${\bf k}={\bf \Gamma}_{i}$, between the valence
and conduction bands with opposite parities.
In the ${\cal I}$-asymmetric systems, on the other hand,
the phase transition
is different between 2D and 3D. In 2D the gap closes simultaneously at
${\bf k}=\pm {\bf k}_{0}\neq {\bf \Gamma}_{i}$. In 3D there appears a
gapless region in the phase diagram between the QSH and the insulator
phases. The gap closing points are monopoles and antimonopoles, and
they are created and annihilated in pairs, when the system
transits from the gapless phase into the phases with a bulk gap (i.e.
QSH or insulator phases).
It is interesting to note that in each case with robust
gapless edge states, there is an associated current.
In the 2D QH phase it is the charge current which the edge states carry.
In the 2D QSH phase it is the spin current.
There are also other classes of systems which show this kind
of stable edge states.
One is the superconductor without time-reversal symmetry,
for example with gap function $p_x +ip_y$. In this
case a current of Majorana fermions is carried by the
edge states, and the edge states are nothing but the
surface Andreev bound states.
Another case of interest is found among
the superconductors/superfluids
with time-reversal symmetry.
In this case the edge carries a spin current of the Majorana fermion
\cite{Qi08}.
\section*{Acknowledgements}
We are grateful to N. Nagaosa, S.-C. Zhang, S. Iso, Y. Avishai, M. Onoda,
R. Shindou and S. Kuga
for collaborations and fruitful discussions, and
R.~Tsukui for making some of the
figures for the present paper.
Part of this work is based on discussions
during Yukawa International Seminar 2007 (YKIS 2007)
entitled ``Interaction and Nanostructural Effects in
Low-Dimensional Systems''.
This research is supported in part
by Grants-in-Aid
from the Ministry of Education,
Culture, Sports, Science and Technology of Japan.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,360 |
{"url":"https:\/\/datascience.stackexchange.com\/questions\/64521\/why-i-get-attributeerror-float-object-has-no-attribute-3f\/64522","text":"# Why I get AttributeError: 'float' object has no attribute '3f'?\n\nI am getting this error:\n\nAttributeError: 'float' object has no attribute '3f'\n\n\nI don't understand why I am getting it, I am following the example straight from the book \"applied text analysis\"\n\nThe chunk of code in python is:\n\ntotal = sum(words.values())\nfor gender, count in words.items():\npcent = (count \/ total) * 100\nnsents = sents[gender]\nprint(\n\"{0.3f}% {} ({} sentences)\".format(pcent, gender, nsents)\n)\n\n\nI see that pcent clearly will return a float, why the author tries to apply .3f what am I missing?\n\n\u2022 Weird\ud83e\udd14;Try this :.3f.format() Author is limiting the decimals precision. \u2013\u00a0Aditya Dec 10 '19 at 2:44\n\u2022 wow, it works! Is the ':' always required for this formatting? why the author has a mistake on the very first example of the book?? \u2013\u00a0Chicago1988 Dec 10 '19 at 2:48\n\u2022 It's not a mistake, thing's change with time but book's don't necessarily reflect that! \u2013\u00a0Aditya Dec 10 '19 at 2:49\n\u2022 out of curiosity would this have worked in python 2? \u2013\u00a0Chicago1988 Dec 10 '19 at 3:04\n\u2022 I am not actually sure but in very early days Python had introduced % formatting (similar to C\/C++ etc), after that in Py2.x they introduced string formatting (the example you have imho) and then in Py3.6+ they introduced the f-strings! Prefer fstrings always unless you are logging something where that string formatting comes more handy! \u2013\u00a0Aditya Dec 10 '19 at 3:11\n\n print(\n\"{:.3f}% {} ({} sentences)\".format(pcent, gender, nsents)\n)\n\n\nRefer the latest docs for more examples and check the Py version!\n\n\u2022 Well, to be honest I thought the book was the 'latest doc' so far, it was written only last year! \u2013\u00a0Chicago1988 Dec 10 '19 at 3:14\n\u2022 It will soon be 2 years old then.. \u2013\u00a0Aditya Dec 10 '19 at 3:23\n\nYou could also use {:.3%} instead of {:.3f}%.\n\nIt will transform the value into percentages automatically. That means \"{:.3%}\".format(0.3) will print \"30%\" while you have to write \"{:.3f}%\".format(0.3 * 100) to get \"30%\" as well.","date":"2021-06-15 13:54:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.37046122550964355, \"perplexity\": 4193.620352502489}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487621273.31\/warc\/CC-MAIN-20210615114909-20210615144909-00473.warc.gz\"}"} | null | null |
//SysTick ticks per microsecond and per millisecond (computed in InitClock)
uint32_t guTickFactor;
uint32_t gmTickFactor;
extern "C"
{
void InitClock()
{
//
// Set the clocking to run directly from the PLL at 80 MHz
//
SysCtlClockSet(SYSCTL_SYSDIV_2_5 | SYSCTL_USE_PLL | SYSCTL_XTAL_16MHZ |
SYSCTL_OSC_MAIN);
//we set a period to its max representation (24 bits)
SysTickPeriodSet(0x00FFFFFF + 1);
//enable systick counter
SysTickEnable();
guTickFactor = SysCtlClockGet() / 1000000;
gmTickFactor = SysCtlClockGet() / 1000;
//
// Enable global interrupts
//
IntMasterEnable();
}
}
namespace clock
{
uint32_t getTickCount()
{
//SysTick is a 24-bit down-counter; return its current raw value
return (NVIC_ST_CURRENT_R & 0xFFFFFF);
}
void decrease_ticks(uint32_t nbTicks)
{
//busy-wait until nbTicks SysTick ticks have elapsed, handling wrap-around of the 24-bit down-counter
uint32_t PreviousTickCounter = getTickCount();
uint32_t CurrentTickCounter;
//ticks count in one iteration
uint32_t elapseTicks = 0;
while( nbTicks > elapseTicks)
{
//decrease total ticks counter
nbTicks -= elapseTicks;
//calculate nb ticks between now and previous iteration
CurrentTickCounter = getTickCount();
if (CurrentTickCounter > PreviousTickCounter)
{
elapseTicks = 0xFFFFFF - (CurrentTickCounter - PreviousTickCounter);
}
else
{
elapseTicks = PreviousTickCounter - CurrentTickCounter;
}
PreviousTickCounter = CurrentTickCounter;
}
}
uint32_t elapsedMicros(unsigned int PreviousTickCounter)
{
//return the number of microseconds elapsed since PreviousTickCounter (a value previously returned by getTickCount())
uint32_t CurrentTickCounter = getTickCount();
uint32_t elapseTicks;
if (CurrentTickCounter > PreviousTickCounter)
{
elapseTicks = 0xFFFFFF - (CurrentTickCounter - PreviousTickCounter);
}
else
{
elapseTicks = PreviousTickCounter - CurrentTickCounter;
}
return (elapseTicks/guTickFactor);
}
void usleep(unsigned int nTime)
{
//ticks count for the requested time
decrease_ticks(nTime * guTickFactor);
}
void msleep(unsigned int nTime)
{
//ticks count for the requested time
decrease_ticks(nTime * gmTickFactor);
}
unsigned int getTickPerMs()
{
return gmTickFactor;
}
unsigned int getTickPerUs()
{
return guTickFactor;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,732 |
Q: How to get function return values into ROS topic msg to be published I'm a beginner when it comes to working with ROS.
Currently working in Melodic, outside of the catkin workspace. ie: not using catkin
My goal is to create a topic that contains an x and y value, then to update those values from the return value of my path prediction function, which returns a std::vector<Eigen::Vector2f> containing the x and y values.
this is what my code looks like:
ros::NodeHandle nh;
ros::Publisher topic_pub = nh.advertise<geometry_msgs::Point>("mytopic/path", 1000);
int counter=0;
ros::Rate rate(10);
while (ros::ok()) {
geometry_msgs::Point msg;
msg.data = engine.getPathFunction();
topic_pub.publish(msg);
rate.sleep();
}
The line where I set msg.data = engine.getPathFunction(); returns an error, of course.
The return type of getPathFunction() is std::vector<Eigen::Vector2f> ... with x and y values for the path.
What's the best way to handle this conversion and get those x and y values into msg?
Is this something ROS can handle? If not, is there another library useful for conversions?
Should I just write my own function? If so, what should that function look like in this example?
A: The best, and only, way to handle this conversion is to do it manually. A Point message has fields for (x,y,z) so you'll need to simply assign those values yourself. Another thing to note is that a Point msg doesn't have a .data field like std_msgs do.
geometry_msgs::Point msg;
std::vector<Eigen::Vector2f> tmp_vec = engine.getPathFunction();
msg.x = tmp_vec[0].x(); //x of the first path point
msg.y = tmp_vec[0].y(); //y of the first path point
topic_pub.publish(msg);
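If the goal is to publish every point of the path rather than only the first one, a minimal sketch (assuming the same engine.getPathFunction() and the geometry_msgs::Point publisher from the question) is to loop over the returned vector:
std::vector<Eigen::Vector2f> path = engine.getPathFunction();
for (const Eigen::Vector2f& p : path) {
    geometry_msgs::Point msg;
    msg.x = p.x(); // first component of the Eigen vector
    msg.y = p.y(); // second component
    msg.z = 0.0;   // Point is 3D; the planned path is planar
    topic_pub.publish(msg);
}
Subscribers then receive one Point per path sample; if the whole path should arrive in a single message, a nav_msgs/Path (a header plus a vector of stamped poses) is usually the more natural message type.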
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,688 |
He became known for having managed to intercept the Titanic's distress signals in real time (in the very early hours of 15 April 1912), before the news of the disaster became public knowledge in the United Kingdom a few days later. After the notoriety of this feat, he went on to a successful career in the sales, management and development of early radios, working in Marconi's company.
Biography
Moore was born in Pontllanfraith, the eldest son of the local miller, William Moore. At a young age Moore was involved in an accident at a mill, which caused the loss of the lower part of his right leg, and for the rest of his life he wore a wooden leg. At the age of ten, Moore developed an interest in amateur engineering. Thanks to his skills in amateur engineering, he entered a competition in the magazine The Model Engineer. As a prize he received a book by Sir Oliver Lodge entitled Modern Views of Magnetism And Electricity, which sparked his interest in wireless.
In the summer of 1912, Moore's activities and the publicity surrounding him after the Titanic disaster soon brought him to the attention of the Monmouthshire education committee, which offered him a scholarship to the British School of Telegraphy in Clapham, London. After studying for only three months, Moore took the government Wireless Telegraphy and Morse Code examination, which he passed.
Thanks to this examination, Guglielmo Marconi himself invited him to join his team of telegraphists.
In 1914, Moore was transferred to the "Naval Equipment Department" of the Marconi Company, and at the outbreak of the First World War he was employed as a technician on several merchant ships.
In 1932 he patented a very early form of sonar, called the Echometer.
Moore remained at the Marconi works in Avonmouth until his retirement in 1947, but in 1948, for health reasons, he moved to Jamaica to recuperate. He was 62 years old and would never return to Wales, his native land. After only six months in Jamaica, he left for England, and on Thursday 20 January 1949 he died in a convalescent home in Bristol.
His passion for wireless radio
Around 1909, while working in his mill, he began erecting wire aerials and building his rudimentary radio station, made up of a coherer-based receiver and a spark-gap transmitter. It was his engineering talent that allowed him to store electricity in his batteries by means of a generator coupled to the mill wheel. The same generator was also used to charge batteries for the local farms, which at the time were not connected to the electricity grid.
Staying awake in the early hours of the morning, sitting at his station, he often listened to the signals sent from ships, both passenger and merchant, travelling in the coastal waters around Wales.
In 1911 Moore contributed to the front page of the London newspaper The Daily Sketch, after intercepting the Italian government's declaration of war against Libya.
By 1912, Moore was 26 years old and his knowledge and skill in wireless construction had improved to the point that he was able to build more sensitive and sophisticated receiving equipment.
The message from the Titanic
The reception of the RMS Titanic's distress call marked the beginning of his career proper in the field of wireless.
In the early hours of 15 April 1912, Moore, using his rudimentary radio apparatus, received a faint signal in Morse code:
"CQD SOS Titanic Position 41.44 N 50.24 W. We require immediate assistance. Come at once. We have struck an iceberg. Sinking... We are putting the women off in the lifeboats..."
Moore continued to transcribe the Morse signals he was receiving: "We are putting the passengers off in small boats" "Women and children in boats, cannot last much longer..."
Then came the final signal: "Come as quickly as possible, old man" - addressed to the RMS Carpathia - "our engine room is filling with water".
Moore passed the news on to the local people and to the local police station, but nobody believed him. Two days later, the population received confirmation from the press. The newspapers also confirmed - as Moore had maintained - that the new distress signal "SOS" had been used for the first time by the Titanic's operators, thus proving that Moore had really received the signals from the liner.
Notes
See also
RMS Titanic
Sinking of the RMS Titanic
Telegraph
SOS
Wales
CQD
Amateur radio
RMS Titanic | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,579 |
The Otto-Mayr-Hütte is an Alpine club hut of the Augsburg section of the German Alpine Club (Deutscher Alpenverein). It is located in the rear Reintal in the Tannheim Mountains, a subgroup of the Allgäu Alps, in the municipal territory of Musau in the district of Reutte in Tyrol.
History
The hut was built in 1900 by the Augsburg section under the leadership of its chairman Otto Mayr and was opened on 8 July 1900. To minimise costs and working time, a "model house for mountaineers" based on a timber construction was bought at the German Sports Exhibition in Munich in 1899, dismantled into its individual parts and transported by rail to the Tannheim Mountains, where the parts were reassembled at the planned site in a short time.
The first extensions to the accommodation followed as early as 1903, with further ones in 1909. In the 1970s, the sharp increase in visitors made it necessary to build a biological sewage treatment plant. A complete renovation of the building fabric then followed in 1987.
Ascent
There are several ways to reach the Otto-Mayr-Hütte:
The direct approach is from the valley village of Musau on a small mountain path up to the Achsel, from which there is a fine view of the Lech valley. From there, forest roads lead via the Musauer Alm (refreshments available) to a junction, from which a trail leads up to the hut (total walking time about 2 to 2½ hours).
From Grän (1138 m) in the Tannheim valley, the hut is reached via the Füssener Jöchl. The ascent can be shortened by using the cable car up to the Füssener Jöchl (total walking time about 2½ hours without, or about 1 hour with, the cable car).
From Vils, a forest road leads to the Vilser Alpe (refreshments available). From there, a mountain trail leads over the Vilser Scharte to the Otto-Mayr-Hütte. Before the Vilser Scharte the path is secured with ropes and two ladders and is recommended only for sure-footed mountaineers (total walking time about 3½ hours).
Tours from the Otto-Mayr-Hütte
Several surrounding summits of the Tannheim Mountains can be climbed from the Otto-Mayr-Hütte. The Große Schlicke (walking time: 1½ hours), for example, is suitable for children aged six and over. Another summit destination is the Schartschrofen (walking time: 1½ hours). Demanding tours are the ascents of the Rote Flüh (walking time: 2 hours), the Gimpel (walking time: 3 hours) or the Gehrenspitze (walking time: 3½ hours). The highest mountain of the Tannheim Mountains, the Kellenspitze, can be reached in four hours.
Crossings to other huts
Bad Kissinger Hütte, walking time: 3 hours
Tannheimer Hütte, walking time: 3 hours
Gimpelhaus, walking time: 2 hours
The Füssener Hütte and the Willi-Merkl-Gedächtnis-Hütte are in the immediate vicinity
Maps
Alpenvereinskarte BY 5 Tannheimer Berge – Köllenspitze, Gaishorn (1:25,000)
Literature
Ludwig Hummel: Ein "Fertighaus" feiert seine Geschichte – 100 Jahre Otto-Mayr-Hütte. In: DAV Panorama. No. 4, 2001, pp. 44–47.
External links
Website of the Otto-Mayr-Hütte
Huts of the DAV Augsburg section
References
Allgäu Alps
Alpine hut in Tyrol
Musau
Built in the 1900s | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,369 |
var gulp = require('gulp');
var babel = require('gulp-babel');
var rimraf = require('rimraf');
var src = 'src/**/*.js';
function onError (err) {
if (err) throw err;
}
function compile () {
return gulp.src(src)
.pipe(babel())
.pipe(gulp.dest('./dist'));
}
gulp.task('clean', function (done) {
rimraf('./docs', onError);
rimraf('./coverage', onError);
rimraf('./dist', function (err) {
if (err) throw err;
done();
});
});
gulp.task('compile', compile);
gulp.task('build', ['clean'], function (done) {
compile();
done();
});
gulp.task('default', ['build'], function () {
return gulp.watch([src], ['compile']);
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,963 |
\section{Introduction}
In this article, $p$ is always a prime number. Given two non-empty subsets $A$ and $B$ of $\mathbb{F}_p$, we denote their sumset by $A+B=\{a+b\mid a\in A,\ b\in B\}$.
The first addition theorem in $\mathbb{F}_p$ is the Cauchy-Davenport theorem.
\begin{theo}[Cauchy-Davenport \cite{Ca,Da1,Da2}]
Let $A$ and $B$ be two non empty subsets of $\mathbb{F}_p$, then:
\[|A+B|\geqslant\min\left\{p,|A|+|B|-1\right\}.\]
\end{theo}
In the seminal article \cite{Al}, Alon described the Combinatorial Nullstellensatz and the polynomial method that relies on it (described in Section \ref{PM}). The method allows one to prove that a combinatorial problem has a solution or a contradiction, just by computing a certain coefficient in a polynomial. The combinatorial problem is reduced to a computation problem. The Cauchy-Davenport theorem is one of the first examples developed in that article. The binomial theorem is the key point that allows one to prove that the proper coefficient is non zero.
Surprisingly, a slight variation of the definition of the sumset has turned out to be much more difficult to tackle. For two subsets $A$ and $B$ of $\mathbb{F}_p$, we define their restricted sumset: $A\dot{+}B=\{a+b\mid a\in A,\ b\in B,\ a\neq b\}$. In $1964$, Erd\H os and Heilbronn made the following famous conjecture:
\begin{conj}[Erd\H os-Heilbronn] Let $A\subset\mathbb{F}_p$, then:
\[|A\dot{+}A|\geqslant\min\left\{p,2|A|-3\right\}.\]
\end{conj}
The first proof follows from the following generalization in $1994$ by Dias da Silva and Hamidoune, introducing the $h$-fold restricted sumset:
\begin{defi} Let $A\subset \mathbb{F}_p$ and $h\in[0,|A|]$, we denote $h^\wedge A$ the set of subsums of $h$ pairwise distinct elements of $A$:
\[h^\wedge A=\{a_1+\dots+a_h\mid a_i\in A,\ a_i\neq a_j\}.\]
\end{defi}
\begin{theo}[Dias da Silva, Hamidoune \cite{DH}] Let $A\subset \mathbb{F}_p$. For a natural integer $h\in[0,|A|]$,
\[|h^\wedge A|\geqslant\min\{p,h(|A|-h)+1\}.\]
\end{theo}
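Note that the case $h=2$ already gives the Erd\H os-Heilbronn conjecture, since $2^\wedge A=A\dot{+}A$ and $2(|A|-2)+1=2|A|-3$. The bound is moreover sharp: for $A=[1,d]$ (and $p$ large enough), one has $h^\wedge A=\left[\frac{h(h+1)}{2},hd-\frac{h(h-1)}{2}\right]$, of cardinality exactly $h(d-h)+1$.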
Their proof relies on exterior algebras. A second proof of this result was given the following year by Alon, Nathanson and Ruzsa, who applied the Combinatorial Nullstellensatz \cite{ANR1,ANR2}. To prove that the proper coefficient is non zero, they consider another combinatorial interpretation of it through strict ballot numbers.
Following this method, the author could prove a further statement considering the set of all subsums.
\begin{defi} Let $A\subset \mathbb{F}_p$, we denote its set of subsums by:
\begin{align*}
\Sigma(A)\ & =\left\{\sum_{x\in I}x\mid\emptyset\subset I\subset A\right\}=\bigcup_{h=0}^{|A|}(h^\wedge A)\\
\intertext{and we also denote its set of non-trivial subsums by:}
\Sigma^*(A) & =\left\{\sum_{x\in I}x\mid\emptyset\subsetneq I\subset A\right\}=\bigcup_{h=1}^{|A|}(h^\wedge A).
\end{align*}
\end{defi}
For the following result the computation of the coefficient relied on determinants of binomial coefficients: binomial determinants considered in the work of Gessel and Viennot.
\begin{theo}[Balandraud \cite{EB}]\label{EB} Let $A\subset\mathbb{F}_p$, such that $A\cap(-A)=\emptyset$. We have
\begin{align*}
|\Sigma(A)|\ & \geqslant\min\left\{p,\frac{|A|(|A|+1)}{2}+1\right\},\\
|\Sigma^*(A)| & \geqslant\min\left\{p,\frac{|A|(|A|+1)}{2}\right\}.
\end{align*}
\end{theo}
Among the applications of this result are algebraic invariants: the Noether number or variations on the Davenport constant \cite{CZ,OPSS,WS}. Many of these applications consider the bound on $\Sigma^*(A)$ in order to ensure the existence of a \emph{non trivial} zero-subsum of $A$. For these problems it is also of interest to consider subsums with a stronger restriction on the number of terms. This is the aim of the last and new result of this article. We define:
\begin{defi} Let $A\subset \mathbb{F}_p$, we denote $\Sigma_\alpha(A)$ the set of subsums of at least $\alpha$ pairwise distinct elements of $A$ and $\Sigma^\alpha(A)$ the set of subsums of at most $|A|-\alpha$ pairwise distinct elements of $A$.
\begin{align*}
\Sigma_\alpha(A) & = \{a_1+\dots+a_k\mid a_i\in A,\ \alpha\leq k\leq |A|,\ a_i\neq a_j\} & & =\bigcup_{k=\alpha}^{|A|}(k^\wedge A)\\
\Sigma^\alpha(A) & = \{a_1+\dots+a_k\mid a_i\in A,\ 0\leq k\leq |A|-\alpha,\ a_i\neq a_j\} & & =\bigcup_{k=0}^{|A|-\alpha}(k^\wedge A).
\end{align*}
\end{defi}
These sets of subsums satisfy the following elementary properties:
\begin{itemize}
\item Whenever $\alpha\in\{0,1\}$, one has $\Sigma_0(A)=\Sigma^0(A)=\Sigma(A)$ and $\Sigma_1(A)=\Sigma^*(A)$.
\item Whatever $\alpha$, one has the symmetry: $\Sigma_\alpha(A)=\left(\sum_{a\in A}a\right)-\Sigma^\alpha(A)$, what implies that $|\Sigma_\alpha(A)|=|\Sigma^\alpha(A)|$.
\item Whenever $\alpha\leq\alpha'$ one has $\Sigma_{\alpha'}(A)\subset\Sigma_\alpha(A)$.
\end{itemize}
The generalization of Theorem \ref{EB} is:
\begin{theo}\label{Main} Let $A\subset\mathbb{F}_p$, such that $A\cap(-A)=\emptyset$. For any natural integer $\alpha\in[0,|A|]$, we have:
\[|\Sigma_\alpha(A)|=|\Sigma^\alpha(A)|\geqslant\min\left\{p,\frac{|A|(|A|+1)}{2}-\frac{\alpha(\alpha+1)}{2}+1\right\}.\]
\end{theo}
Before the proof, we can make the following remarks:
\begin{itemize}
\item Whenever $\alpha\in\{0,1\}$, this is exactly Theorem \ref{EB}.
\item This bound is sharp since for $A=[1,d]$, one has:
\[\Sigma^\alpha(A)=\left[0,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}\right],\]
of cardinality exactly $\min\left\{p,\frac{|A|(|A|+1)}{2}-\frac{\alpha(\alpha+1)}{2}+1\right\}$.
\end{itemize}
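For a concrete instance, take $d=4$ and $\alpha=2$: the subsums of at most two pairwise distinct elements of $[1,4]$ form the set
\[\Sigma^2([1,4])=\{0,1,2,3,4,5,6,7\}=[0,7],\]
of cardinality $8=\frac{4\cdot 5}{2}-\frac{2\cdot 3}{2}+1$, so the bound is attained.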
The article is organized as follows. In the first section, we explain the method: we state the Combinatorial Nullstellensatz, the coefficient formula and the new proofs of the Cauchy-Davenport and Dias da Silva-Hamidoune theorems. The novelty in these proofs is that there is no need to compute the coefficients. In the second section, the proof of Theorem \ref{Main} is given; it follows the steps of the method of the first section. In the last section, we discuss the problem of the sets of subsums with upper and lower bounds on the number of terms. Surprisingly, the problem with a double bound turns out to be of a different nature than the three previous ones.
\section{Rewriting the polynomial proofs of Cauchy-Davenport and Dias da Silva-Hamidoune theorems}\label{PM}
\subsection{The polynomial method}
The Combinatorial Nullstellensatz is a result that generalizes to multivariate polynomials the fact that an univariate polynomial of degree $d$ cannot vanish on $d+1$ points.
\begin{theo}[Combinatorial Nullstellensatz \cite{Al}]\label{CNSS} Let $\mathbb{F}$ be any field and $P(\underline{X})\in\mathbb{F}[X_1,\dots,X_d]$. If $P$ has total degree $k_1+\dots+k_d$
and its coefficient of the monomial $\prod_{i=1}^dX_i^{k_i}$ is non-zero, then whatever is the family $(A_1,\dots,A_d)$ of subsets of $\mathbb{F}$ satisfying $|A_i|>k_i$, there is a point $\underline{a}\in A_1\times\dots\times A_d$ such that
\[P(\underline{a})\neq 0.\]
\end{theo}
This theorem has led to proofs of numerous combinatorial conjectures and to new proofs in many mathematical fields. It is called a Combinatorial Nullstellensatz because another formulation of it gives a generating family of the ideal of polynomials that vanish on a cartesian product. The previously stated formulation is a criterion for a polynomial not to belong to this ideal.
Applying the polynomial method (the one that relies on the Combinatorial Nullstellensatz) to a combinatorial problem consists of defining a (big enough) cartesian product and a polynomial of small degree, so that the Combinatorial Nullstellensatz asserts that there is a solution or a contradiction provided that a specific coefficient is nonzero. The combinatorial problem is then reduced to the computation problem of the appropriate coefficient.
In the three problems treated in this article, we will not need to compute the coefficient. We use the coefficient formula proved independently by Karasev-Petrov and by Laso\'n; it is equivalent to the Combinatorial Nullstellensatz:
\begin{theo}[Coefficient formula \cite{KP,Las}] Let $P\in\mathbb{F}[X_1,\dots,X_d]$ be a polynomial of degree $k_1+\dots+k_d$ and let $A_1,\dots,A_d$ be sets with $|A_i|=k_i+1$. Denoting $g_i(X)=\prod_{a\in A_i}(X-a)$, the coefficient of the monomial $\prod_{i=1}^dX_i^{k_i}$ in the expansion of $P$ is
\[\sum_{\underline{b}\in\prod_{i=1}^d A_i}\frac{P(\underline{b})}{\prod_{i=1}^dg_i'(b_i)}.\]
\end{theo}
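For instance, in the univariate case $d=1$, the formula can be checked directly: for $P(X)=X^2$ and $A_1=\{0,1,2\}$, one has $g_1(X)=X(X-1)(X-2)$, so $g_1'(0)=2$, $g_1'(1)=-1$, $g_1'(2)=2$, and
\[\sum_{b\in A_1}\frac{P(b)}{g_1'(b)}=\frac{0}{2}-\frac{1}{1}+\frac{4}{2}=1,\]
which is indeed the coefficient of $X^2$ in $P$.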
In \cite{KP}, Karasev and Petrov gave a new proof of Dyson's conjecture thanks to this formula using an auxiliary polynomial and cartesian product.
We will use the coefficient formula for some well-chosen sets to prove that the wanted coefficient is not zero. This does not require computing the coefficient. The coefficient formula will provide
an expression of the specified coefficient as a sum, all of whose terms but exactly one are zero.
In our context the bound is tight and reached by some arithmetical progressions. The way to choose the auxiliary polynomial and cartesian product will be to consider the same constructions for these arithmetical progressions. In conclusion, our method is a way to understand, via a kind of algebraic comparison, why these bounds are reached by these arithmetical progressions.
\subsection{A proof of the Cauchy-Davenport theorem}
\begin{proof}
Let us consider two non empty subsets $A$ and $B$ of $\mathbb{F}_p$, of respective cardinality, $|A|=n$ and $|B|=m$. Define $\delta=\max\{0,n+m-1-p\}$. Since $\max\{n,m\}\leq p$, one has $\delta<\min\{n,m\}$.
We will prove the theorem by contradiction. Let us suppose that $|A+B|<\min\{p,n+m-1\}$, then consider a set $C$ of cardinality $|C|=\min\{p-1,n+m-2\}=n+(m-\delta)-2<p$ that contains $A+B$.
Define the polynomial
\[P(X,Y)=\prod_{x\in C}(X+Y-x).\]
By definition, $P$ vanishes on the cartesian product $A\times B$. We have $\deg(P)=|C|=(n-1)+(m-\delta-1)$.
Using the Combinatorial Nullstellensatz, to obtain a contradiction, it suffices to prove that the coefficient $c_{n-1,(m-\delta)-1}$ of $X^{n-1}Y^{(m-\delta)-1}$ is not zero.
Now consider the sets $A'=[1,n]$ and $B'=[1,(m-\delta)]$; one has $A'+B'=[2,n+(m-\delta)]$. We also consider the polynomial
\[Q(X,Y)=\prod_{x=2}^{n+(m-\delta)-1}(X+Y-x).\]
The polynomial $Q(X,Y)$ is defined similarly to $P(X,Y)$, on a set $C'=[2,n+(m-\delta)-1]$ of cardinality $|C'|=n+(m-\delta)-2=|C|$. Since $|C'|<p$, the elements of $[2,n+(m-\delta)]$ are pairwise distinct modulo $p$.
The two polynomial $P$ and $Q$ have the same coefficients of maximal degree, in particular they have the same coefficient $c_{n-1,(m-\delta)-1}$ of the monomial $X^{n-1}Y^{(m-\delta)-1}$.
We can use the coefficient formula on the sets $A'$ and $B'$ to find this coefficient in $Q$. The key point of this proof is the fact that the polynomial $Q$ vanishes on all the elements of $A'\times B'$ but one: $Q(n,(m-\delta))\neq 0$. Therefore the coefficient is
\begin{align*}
c_{n-1,(m-\delta)-1}= & \sum_{(a,b)\in A'\times B'}\frac{Q(a,b)}{\prod_{a'\in A'\setminus\{a\}}(a-a')\prod_{b'\in B'\setminus\{b\}}(b-b')}\\
= & \frac{Q(n,(m-\delta))}{\prod_{i=1}^{n-1}(n-i)\prod_{i=1}^{(m-\delta)-1}((m-\delta)-i)}\neq 0.
\end{align*}
The expression as a sum that contains exactly one non-zero term suffices to assert that it is non zero.
\end{proof}
In this case, the computation is easy and the previous formula also proves that $c_{n-1,(m-\delta)-1}=\binom{n+(m-\delta)-2}{n-1}$.
\subsection{A proof of the Dias da Siva-Hamidoune theorem}
\begin{proof}
Consider a subset $A=\{a_1,\dots,a_d\}$ of $\mathbb{F}_p$ and $h\in[0,d]$.
We will prove the theorem by contradiction. Suppose that $h^\wedge A\subset C$, with $|C|=\min\{p-1,h(d-h)\}$.
Let us denote $\delta=\max\{0,h(d-h)+1-p\}$; this implies that $|C|=h(d-h)-\delta<p$. Since $h^\wedge(A\setminus\{a_d\})\subset h^\wedge A$, one may assume that $h((d-1)-h)+1<p$, which implies that $\delta<h$.
Let us consider the polynomial $P_{d,h,\delta}(\underline{X})\in\mathbb{F}_p[X_1,\dots,X_h]$:
\[P_{d,h,\delta}(\underline{X})=\prod_{x\in C}(X_1+X_2+\dots+X_h-x)\prod_{1\leq i<j\leq h}(X_j-X_i).\]
By definition of $C$, $P_{d,h,\delta}$ vanishes on the whole cartesian product $A^h$. In our context, we will consider the sub-cartesian product $A_1\times\dots\times A_h$, where:
\[\begin{array}{lcl}
A_1 & = & \{a_1,\dots,a_{d-h}\} \\
\ \vdots & & \ \ \ \vdots\hspace{1.8cm} \ddots\\
A_\delta & = & \{a_1,\hspace{.6cm}\dots\hspace{.6cm},a_{d-h+\delta-1}\} \\
A_{\delta+1} & = & \{a_1,\hspace{1cm}\dots\hspace{1cm},a_{d-h+\delta+1}\} \\
\ \vdots & & \ \ \ \vdots\hspace{3.7cm} \ddots \\
A_h & = & \{a_1,\hspace{1.8cm}\dots\hspace{1.8cm},a_d\} \\
\end{array}\]
On the first hand, one has
\begin{align*}
\deg(P)= & |C|+\frac{h(h-1)}{2}\\
= & h(d-h)+\frac{h(h-1)}{2}-\delta\\
= & dh-\frac{h(h+1)}{2}-\delta,
\end{align*}
and on the other hand $\sum_{i=1}^h(|A_i|-1)=dh-\frac{h(h+1)}{2}-\delta$.
Thanks to the Combinatorial Nullstellensatz, to obtain a contradiction, it suffices to prove that the coefficient $c_{d,h,\delta}$ of the monomial $\prod_{i=1}^hX_i^{|A_i|-1}=\prod_{i=1}^\delta X_i^{d-h+i-2}\prod_{i=\delta+1}^h X_i^{d-h+i-1}$ is not zero.
We now consider the same construction for the set $B=[1,d]$, which satisfies $h^\wedge B=\left[\frac{h(h+1)}{2},\frac{d(d+1)}{2}-\frac{(d-h)(d-h+1)}{2}\right]$, of cardinality $|h^\wedge B|=\min\{p,h(d-h)+1\}$.
Let us consider the cartesian product $B_1\times\dots\times B_h$:
\[\begin{array}{lcl}
B_1 & = & \{1,\dots,(d-h)\} \\
\ \vdots & & \ \ \vdots\hspace{1.8cm} \ddots\\
B_\delta & = & \{1,\hspace{.6cm}\dots\hspace{.6cm},(d-h+\delta-1)\} \\
B_{\delta+1} & = & \{1,\hspace{1cm}\dots\hspace{1cm},(d-h+\delta+1)\} \\
\ \vdots & & \ \ \vdots\hspace{3.7cm} \ddots \\
B_h & = & \{1,\hspace{2cm}\dots\hspace{2cm},d\}.
\end{array}\]
We also define the set $R=\left[\frac{h(h+1)}{2},\frac{d(d+1)}{2}-\frac{(d-h)(d-h+1)}{2}-\delta-1\right]$. (Since $h(d-h)-\delta<p$, the elements of $R$ are pairwise distinct modulo $p$ and do not cover $\mathbb{F}_p$.) Finally, we define the polynomial
\[Q_{d,h,\delta}(\underline{X})=\prod_{x\in R}(X_1+X_2+\dots+X_h-x)\prod_{1\leq i<j\leq h}(X_j-X_i).\]
Since $|R|=h(d-h)-\delta=|C|$, the two polynomials $Q_{d,h,\delta}$ and $P_{d,h,\delta}$ have the same degree. Moreover they differ only by constants in their linear factors, so they have the same coefficients of maximal degree. In particular, they share the same coefficient
$c_{d,h,\delta}$ of the monomial $\prod_{i=1}^\delta X_i^{d-h+i-2}\prod_{i=\delta+1}^h X_i^{d-h+i-1}$.
If we consider the sums $b_1+\dots+b_h$ of pairwise different values $b_i\in B_i$, one can reach any value in $\left[\frac{h(h+1)}{2},\frac{d(d+1)}{2}-\frac{(d-h)(d-h+1)}{2}-\delta\right]$. Only one of the values is missing from $R$, namely $\frac{d(d+1)}{2}-\frac{(d-h)(d-h+1)}{2}-\delta$, and this value is uniquely reached by the sum $(d-h)+\dots+(d-h+\delta-1)+(d-h+\delta+1)+\dots+d$. This implies that there is only one point $\underline{b}^*$ in the cartesian product $B_1\times\dots\times B_h$ such that $Q_{d,h,\delta}(\underline{b}^*)\neq 0$. Using the coefficient formula, one gets:
\[c_{d,h,\delta}=\sum_{\underline{b}\in \prod B_i}\frac{Q_{d,h,\delta}(\underline{b})}{\prod g'_i(b_i)}= \frac{Q_{d,h,\delta}(\underline{b}^*)}{\prod_{i=1}^dg_i'(b^*_i)} \neq 0,\]
where \[\underline{b}^*=(\underbrace{(d-h),\dots,(d-h+\delta-1)}_{i=1..\delta},\underbrace{(d-h+\delta+1),\dots,d}_{i=\delta+1..h}).\]
This coefficient is therefore different from zero and the proof is complete.
\end{proof}
\begin{rem}
The computation of the coefficient $c_{d,h,\delta}$ can be carried out to a closed expression; it is done in Proposition \ref{comput1} in the annex of this article.
\end{rem}
\section{Sets of subset sums whose number of terms is bounded}
We proceed now to the proof of theorem \ref{Main}:
\begin{proof}
Whenever $p=2$, the hypothesis $A\cap(-A)=\emptyset$ is impossible for a non-empty subset, so from now on $p$ is an odd prime. Consider that the set is $A=\{2a_1,2a_2,\dots,2a_d\}$, so $|A|=d$ and denote $m=\sum_{i=1}^da_i$.
We prove the theorem by contradiction. Suppose that $|\Sigma^\alpha(A)|<\min\left\{p,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}+1\right\}$, and consider a set $C$, such that $\Sigma^\alpha(A)\subset C$, with $|C|=\min\left\{p-1,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}\right\}$.
Denote:
\[\delta=\max\left\{0,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-(p-1)\right\}.\]
So that $|C|=\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta<p$.
Since one has $\Sigma^{\alpha+1}(A)\subset\Sigma^\alpha(A)$, one may assume that $\frac{d(d+1)}{2}-\frac{(\alpha+1)(\alpha+2)}{2}+1<p$. This implies that $\delta\leq\alpha$.
\bigskip
We define the polynomial:
\begin{align*}
P_{d,\alpha,\delta}(\underline{X})= & \prod_{x\in C}(X_1+\dots+X_d+m-x)\prod_{1\leq i<j\leq d}(X_j-X_i)\prod_{\substack{1\leq i<j\leq d\\\textrm{and}\ j>\alpha}}(X_j+X_i)\\
\end{align*}
This polynomial has degree
\begin{align*}
\deg(P_{d,\alpha,\delta})= & \left(\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta\right)+\left(\frac{d(d-1)}{2}\right)
+\left(\underbrace{(d-\alpha)\alpha}_{j>\alpha,\ \textrm{and}\ i\leq\alpha}+\underbrace{\frac{(d-\alpha)(d-\alpha-1)}{2}}_{\alpha<i<j}\right)\\
= & d^2+\frac{d(d-1)}{2}-\alpha^2-\delta.
\end{align*}
Let us consider the sets:
\[\begin{array}{lcll}
A_1 & = & \{-a_d,\dots,-a_{\alpha+1}\} & \\
\ \vdots & & \ \ \ \vdots\hspace{1.8cm} \ddots & \\
A_\delta & = & \{-a_d,\hspace{.3cm}\dots\hspace{.3cm},-a_{\alpha-\delta+2}\} & \\
A_{\delta+1} & = & \{-a_d,\hspace{1cm}\dots\hspace{1cm},-a_{\alpha-\delta}\} & \\
\ \vdots & & \ \ \ \vdots\hspace{3.7cm} \ddots & \\
A_\alpha & = & \{-a_d,\hspace{1.5cm}\dots\hspace{1.5cm},-a_1\} & \\
A_{\alpha+1} & = & \{ -a_d,\hspace{1.5cm}\dots\hspace{1.5cm},-a_1, & a_1,\hspace{.5cm}\dots\hspace{.5cm}, a_{\alpha+1}\}\\
\ \vdots & & \ \ \ \vdots\hspace{4.2cm} \vdots &\ \vdots \hspace{3.2cm}\ddots\\
A_d & = & \{ -a_d,\hspace{1.5cm}\dots\hspace{1.5cm},-a_1, & a_1,\hspace{1.5cm}\dots\hspace{1.5cm}, a_d\}\\
\end{array}\]
Moreover, one also have:
\begin{align*}
\sum_{i=1}^d(|A_i|-1)= & \left(d^2-\frac{\alpha(\alpha-1)}{2}\right)+\left(d(d-\alpha)-\frac{(d-\alpha)(d-\alpha-1)}{2}\right)-d-\delta\\
= & d(2d-\alpha-1)-\left(\alpha^2+\frac{d(d-1)}{2}-d\alpha\right)-\delta\\
= & d^2+\frac{d(d-1)}{2}-\alpha^2-\delta.
\end{align*}
For any element of the cartesian product, if the two last factors of $P_{d,\alpha,\delta}$ do not vanish, then its coordinate sum is of the type $\pm a_1\pm a_2\dots\pm a_d$ with at least $\alpha$ negative signs. So $\pm a_1\pm a_2\dots\pm a_d+m$ is a sum of at most $d-\alpha$ elements of $A$ and the first factor vanishes. In conclusion, $P_{d,\alpha,\delta}$ vanishes on the whole cartesian product $\prod_{i=1}^d A_i$.
To obtain a contradiction thanks to the Combinatorial Nullstellensatz, it suffices to prove that the coefficient $c_{d,\alpha,\delta}$ of the following monomial is non zero:
\[\prod_{i=1}^dX_i^{|A_i|-1}=\left(\prod_{i=1}^\delta X_i^{d-\alpha+i-2}\right)\left(\prod_{i=\delta+1}^\alpha X_i^{d-\alpha+i-1}\right) \left(\prod_{i=\alpha+1}^dX_i^{d+i-1}\right).\]
Let us now consider the same construction for the set $B=2.[1,d]$: one has
\[\Sigma^\alpha(B)=2.\left[0,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}\right],\] of cardinality $|\Sigma^\alpha(B)|=\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}+1$.
Define the sets:
\[\begin{array}{lcll}
B_1 & = & \{-d,\dots,-(\alpha+1)\} & \\
\ \vdots & & \ \ \ \vdots\hspace{1.8cm} \ddots & \\
B_\delta & = & \{-d,\hspace{.3cm}\dots\hspace{.3cm},-(\alpha-\delta+2)\} & \\
B_{\delta+1} & = & \{-d,\hspace{1cm}\dots\hspace{1cm},-(\alpha-\delta)\} & \\
\ \vdots & & \ \ \ \vdots\hspace{3.7cm} \ddots & \\
B_\alpha & = & \{-d,\hspace{1.5cm}\dots\hspace{1.5cm},-1\} & \\
B_{\alpha+1} & = & \{-d,\hspace{1.5cm}\dots\hspace{1.5cm},-1 & 1,\hspace{.5cm}\dots\hspace{.5cm},\alpha+1\}\\
\ \vdots & & \ \ \ \vdots\hspace{4.1cm} \vdots & \vdots \hspace{2.5cm} \ddots \\
B_d & = & \{-d,\hspace{1.5cm}\dots\hspace{1.5cm},-1 & 1,\hspace{1cm}\dots\hspace{1.3cm}, d\}\\
\end{array}\]
Let us denote $m'=\frac{d(d+1)}{2}$ and $R=\left[0,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta-1\right]$. Since $\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta<p$, the elements of $R$ are pairwise distinct modulo $p$ and do not cover $\mathbb{F}_p$.
Define the polynomial:
\[Q_{d,\alpha,\delta}(\underline{X})=\prod_{x\in R}(X_1+\dots+X_d+m'-x)\prod_{1\leq i<j\leq d}(X_j-X_i)\prod_{\substack{1\leq i<j\leq d\\\textrm{and}\ j>\alpha}}(X_j+X_i)\]
Since $|R|=\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta=|C|$, the two polynomials $Q_{d,\alpha,\delta}$ and $P_{d,\alpha,\delta}$ have the same degree. Moreover they differ only by constants in their linear factors, so they have the same coefficients of maximal degree. In particular, they have the same coefficient of the monomial $\left(\prod_{i=1}^\delta X_i^{d-\alpha+i-2}\right)\left(\prod_{i=\delta+1}^\alpha X_i^{d-\alpha+i-1}\right)\left(\prod_{i=\alpha+1}^dX_i^{d+i-1}\right)$.
If we consider all the sums $b_1+\dots+b_d+m'$ where $b_i\in B_i$ and
\[\prod_{1\leq i<j\leq d}(b_j-b_i)\prod_{\substack{1\leq i<j\leq d\\\textrm{and}\ j>\alpha}}(b_j+b_i)\neq 0,\] one can reach any value in $\left[0,\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta\right]$. Only one value of the sum is missing from $R$, namely $\frac{d(d+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\delta$, and there is only one element of this cartesian product, with coordinates pairwise neither equal nor opposite, that reaches this value. This implies that there is only one point $\underline{b}^*$ in the cartesian product $B_1\times\dots\times B_d$ such that $Q_{d,\alpha,\delta}(\underline{b}^*)\neq 0$. Using the coefficient formula, one gets
\[c_{d,\alpha,\delta}=\sum_{\underline{b}\in \prod B_i}\frac{Q_{d,\alpha,\delta}(\underline{b})}{\prod g'_i(b_i)}=\frac{Q_{d,\alpha,\delta}(\underline{b}^*)}{\prod_{i=1}^dg_i'(b_i^*)} \neq 0,\]
where \[\underline{b}^*=(\underbrace{-(\alpha+1),\dots,-(\alpha-\delta+2)}_{i=1..\delta},\underbrace{-(\alpha-\delta),\dots,-1}_{i=\delta+1..\alpha},\alpha+1-\delta,\underbrace{\alpha+2,\dots,d}_{i=\alpha+2..d})\]
This coefficient is therefore different from zero, which concludes the proof.
\end{proof}
\begin{rem} The value of $c_{d,\alpha,\delta}$ can be computed from this formula. It is written in Proposition \ref{comput2} in the annex of this article.
\end{rem}
\section{The trouble in the consideration of a double bound}
It seems natural at this point to define the sets of subsums whose number of terms is doubly bounded:
\begin{defi} Let $A\subset \mathbb{F}_p$, we denote $\Sigma_\alpha^\beta(A)$ the set of subsums of at least $\alpha$ and at most $|A|-\beta$ pairwise distinct elements of $A$
\[\Sigma_\alpha^\beta(A)= \{a_1+\dots+a_k\mid a_i\in A,\ \alpha\leq k\leq |A|-\beta,\ a_i\neq a_j\}=\bigcup_{k=\alpha}^{|A|-\beta}(k^\wedge A).\]
\end{defi}
At first glance, one could think that for a set $A\subset \mathbb{F}_p$ such that $A\cap(-A)=\emptyset$ the minimal cardinality of such a set of subsums is again reached on an arithmetical progression of type $[1,d]$, and hence that $|\Sigma_\alpha^\beta(A)|$ would be at least:
\[\min\left\{p,\frac{|A|(|A|+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\frac{\beta(\beta+1)}{2}+1\right\}.\]
This does not hold and several counterexamples can be given:
Let $k\geq 3$ and consider the set $A=\{1,-2,3,\dots,k\}$, then one has:
\begin{align*}
\Sigma_1^1(A)= & \left\{-2,-1,1,2,\dots,\frac{k(k+1)}{2}-5,\frac{k(k+1)}{2}-3,\frac{k(k+1)}{2}-2\right\},\\
\Sigma_2^1(A)= & \left\{-1,1,2,\dots,\frac{k(k+1)}{2}-5,\frac{k(k+1)}{2}-3,\frac{k(k+1)}{2}-2\right\}.
\end{align*}
Considered in $\mathbb{Z}$, one has $|\Sigma_1^1(A)|=\frac{k(k+1)}{2}-1=\frac{k(k+1)}{2}-1-1+1$ and $|\Sigma_2^1(A)|=\frac{k(k+1)}{2}-2=\left(\frac{k(k+1)}{2}-3-1+1\right)+1$.
So whenever $\frac{k(k+1)}{2}-4=p$, one has
\begin{align*}
|\Sigma_1^1(A)|= & p-1=\frac{k(k+1)}{2}-3<\min\left\{p,\frac{k(k+1)}{2}-1-1+1\right\},\\
|\Sigma_2^1(A)|= & p-1=\frac{k(k+1)}{2}-3<\min\left\{p,\frac{k(k+1)}{2}-3-1+1\right\}.
\end{align*}
It is conjectured (but not formally known) that there are infinitely many pairs $(k,p)$ such that $p=\frac{k(k+1)}{2}-4\in \mathbb{P}$. Here is the list of those with $p<1000$:
\[(5,11),(6,17),(9,41),(14,101),(17,149),(18,167),\]\[(21,227),(26,347),(29,431),(30,461),(33,557),(41,857).\]
Conversely, it can be seen that, for some other prime numbers, the conjecture is true. It implies that the problem is of a different nature from the Cauchy-Davenport, Dias da Silva-Hamidoune theorems and theorem \ref{Main}. These three theorems can be called universal, since the bound is universal in $p$, the cardinality of the set (and their parameters).
However, for this problem, it is still possible to define a polynomial and a cartesian product that would lead to a proof of the bound, provided a specified coefficient is non zero. Of course, since counterexamples are known, for some values of the parameters $d,\alpha,\beta,p$, the specified coefficient will be zero. The computations of these coefficients lead to the idea that the previous counterexamples are the only ones possible. This can be summarized in the following conjecture:
\begin{conj} Let $p$ be a prime number and $A\subset \mathbb{F}_p$ such that $A\cap(-A)=\emptyset$, then
\[|\Sigma_\alpha^\beta(A)|\geqslant\min\left\{p,\frac{|A|(|A|+1)}{2}-\frac{\alpha(\alpha+1)}{2}-\frac{\beta(\beta+1)}{2}+1\right\},\]
unless $A=\lambda.\{1,-2,3,\dots,k\}$, with $\lambda\in\mathbb{F}_p^*$, $\frac{k(k+1)}{2}=p+4$ and $(\alpha,\beta)\in\{(1,1),(1,2),(2,1)\}$.
\end{conj}
\section*{Annex: Computation of the coefficients}
In this annex, we denote $n!!=\prod_{i=0}^{n-1}i!$, the product of the first $n$ factorials. It is an unusual notation, but it satisfies the nice property $\prod_{1\leq i<j\leq n}(j-i)=n!!$.
\subsection{The coefficient involved in the proof of the Dias da Silva-Hamidoune theorem}
\begin{prop}\label{comput1} The coefficient involved in the proof of the Dias da Silva-Hamidoune theorem is:
\[c_{d,h,\delta}=(h(d-h))!\frac{\binom{d-h+\delta-1}{\delta}\binom{h}{\delta}}{\binom{h(d-h)}{\delta}}\frac{h!!(d-h)!!}{d!!}.\]
\end{prop}
\begin{proof}
The computation of the coefficient can be continued:
\[c_{d,h,\delta}=\frac{Q_{d,h,\delta}(\underline{b}^*)}{\prod_{i=1}^dg_i'(b^*_i)},\]
where \[\underline{b}^*=(\underbrace{(d-h),\dots,(d-h+\delta-1)}_{i=1..\delta},\underbrace{(d-h+\delta+1),\dots,d}_{i=\delta+1..h}).\]
Since $g'_i(b_i)=(|B_i|-1)!=(d-h+i-2)!$ if $i\leq \delta$ and $g'_i(b_i)=(|B_i|-1)!=(d-h+i-1)!$ if $i>\delta$, the multinomial sum gives $|R|!=(h(d-h)-\delta)!$ and the Vandermonde is $\binom h\delta h!!$
\begin{align*}
c_{d,h,\delta}= & \frac{(h(d-h)-\delta)!}{\prod_{i=1}^\delta(d-h+i-2)!\prod_{i=\delta+1}^h(d-h+i-1)!}\binom h\delta h!!\\
= & \frac{(h(d-h)-\delta)!}{\prod_{i=d-h-1}^{d-h+\delta-2} i!\prod_{i=d-h+\delta}^{d-1} i !}\binom h\delta h!!\\
= & \frac{(h(d-h)-\delta)!}{\frac{(d-h-1)!}{(d-h+\delta-1)!}\frac{d!!}{(d-h)!!}}\binom h\delta h!!\\
= & \delta!((h(d-h)-\delta)!)\binom{d-h+\delta-1}{\delta}\binom{h}{\delta}\frac{h!!(d-h)!!}{d!!}\\
= & (h(d-h))!\frac{\binom{d-h+\delta-1}{\delta}\binom{h}{\delta}}{\binom{h(d-h)}{\delta}}\frac{h!!(d-h)!!}{d!!}.\\
\end{align*}
\end{proof}
\subsection{The coefficient involved in the proof of Theorem \ref{Main}}
\begin{prop}\label{comput2} Denoting $m_{d,\alpha}=d(d+1)/2-\alpha(\alpha+1)/2$. One has\[c_{d,\alpha,\delta}=\frac{2^{m_{d,\alpha}-\delta}(m_{d,\alpha})!}{\binom{m_{d,\alpha}}{\delta}}\frac{\binom{d-\alpha+\delta-1}{\delta}\binom{\alpha+1}\delta\binom{d+\alpha+1}\delta}{\binom{2\alpha+2}\delta}\frac{\alpha!!(d-\alpha)!!(d+\alpha+1)!!}{d!!(2d+1)!!}\left(\prod_{i=\alpha+1}^d(2i-1)!\right).\]
\end{prop}
\begin{proof}
The computation of the coefficient can be continued:
\[c_{d,\alpha,\delta}=\frac{Q_{d,\alpha,\delta}(\underline{b}^*)}{\prod_{i=1}^dg_i'(b_i)},\]
with \[\underline{b}^*=(\underbrace{-(\alpha+1),\dots,-(\alpha-\delta+2)}_{i=1..\delta},\underbrace{-(\alpha-\delta),\dots,-1}_{i=\delta+1..\alpha},\alpha-\delta+1,\underbrace{\alpha+2,\dots,d}_{i=\alpha+2..d}).\]
One has:
\[g'_i(b^*_i)=\begin{cases} (d-\alpha+i-2)! & \textrm{if}\ i\leq\delta,\\
(d-\alpha+i-1)! & \textrm{if}\ \delta<i\leq\alpha,\\
(-1)^\delta \delta!\frac{(d+\alpha+1-\delta)!}{(\alpha-\delta+1)}=(-1)^\delta\frac{(d+\alpha+1)!}{(\alpha-\delta+1)\binom{d+\alpha+1}{\delta}}& \textrm{if}\ i=\alpha+1\\
\frac{(d+i)!}{i} & \textrm{if}\ i>\alpha+1
\end{cases}\]
so the product of their inverse is:
\begin{align*}
\frac{1}{\prod_{i=1}^dg'_i(b^*_i)}= & (-1)^\delta\left(\prod_{i=1}^\delta\frac{1}{(d-\alpha+i-2)!}\right)\left(\prod_{i=\delta+1}^\alpha\frac{1}{(d-\alpha+i-1)!}\right)\\
& \hspace{2cm}\times\binom{d+\alpha+1}{\delta}\frac{(\alpha-\delta+1)}{(d+\alpha+1)!}\left(\prod_{i=\alpha+2}^d\frac{i}{(d+i)!}\right)\\
\end{align*}
\begin{align*}
= & (-1)^\delta \frac{(d-\alpha+\delta-1)!}{(d-\alpha-1)!}\left(\prod_{i=1}^\alpha\frac{1}{(d-\alpha+i-1)!}\right)\\
& \hspace{2cm}\times\binom{d+\alpha+1}{\delta}(\alpha-\delta+1)\frac{d!}{(\alpha+1)!}\left(\prod_{i=\alpha+1}^d\frac{1}{(d+i)!}\right)\\
= & (-1)^\delta \frac{(d-\alpha+\delta-1)!}{(d-\alpha-1)!}\frac{(d-\alpha)!!}{d!!}\\
& \hspace{2cm}\times\binom{d+\alpha+1}{\delta}(\alpha-\delta+1)\frac{d!}{(\alpha+1)!}\frac{(d+\alpha+1)!!}{(2d+1)!!}\\
= & (-1)^\delta \delta!\binom{d-\alpha+\delta-1}{\delta}\binom{d+\alpha+1}{\delta}(\alpha-\delta+1)\frac{d!}{(\alpha+1)!}\frac{(d-\alpha)!!(d+\alpha+1)!!}{d!!(2d+1)!!}.
\end{align*}
The first factor of $Q_{d,\alpha,\delta}$ gives:
\[\prod_{x\in R}(b^*_1+\dots+b^*_d+m-x)=2^{|R|}|R|!=2^{m_{d,\alpha}-\delta}(m_{d,\alpha}-\delta)!\]
The second factor is
\begin{align*}
\prod_{1\leq i<j\leq d}(b^*_j-b^*_i) = & \left(\prod_{1\leq i<j\leq \alpha}\times\prod_{\substack{i=1\\(j=\alpha+1)}}^\alpha\times\prod_{\substack{1\leq i\leq\alpha\\\alpha+1<j\leq d}}\times\prod_{\substack{j=\alpha+2\\(i=\alpha+1)}}^d\times\prod_{\alpha+1<i<j\leq d}\right)(b^*_j-b^*_i)\\
= & \left(\alpha!!\binom\alpha\delta\right)\left(\frac{(2\alpha-\delta+2)!}{(2\alpha-2\delta+2)(\alpha-\delta+1)!}\right)\\
& \hspace{2cm}\times\left(\prod_{i=1}^\delta\frac{(d+\alpha-i+2)!}{(2\alpha-i+3)!}\prod_{i=\delta+1}^\alpha\frac{(d+\alpha-i+1)!}{(2\alpha-i+2)!}\right)\\
& \hspace{4cm}\times\left(\frac{(d-\alpha+\delta-1)!}{\delta!}\right) (d-\alpha-1)!!\\
= & \alpha!!\binom\alpha\delta\frac{(2\alpha-\delta+2)!}{(2\alpha-2\delta+2)(\alpha-\delta+1)!}\\
& \hspace{2cm}\times\frac{(d+\alpha+2)!!(2\alpha-\delta+3)!!}{(d+\alpha-\delta+2)!!(2\alpha+3)!!}\frac{(d+\alpha-\delta+1)!!(\alpha+2)!!}{(d+1)!!(2\alpha-\delta+2)!!}\\
& \hspace{4cm}\times\binom{d-\alpha+\delta-1}\delta(d-\alpha)!!\\
= & \binom{d-\alpha+\delta-1}\delta\binom\alpha\delta\alpha!!(d-\alpha)!!\\
& \hspace{2cm}\times\frac{(2\alpha-\delta+2)!}{(2\alpha-2\delta+2)(\alpha-\delta+1)!}\frac{(2\alpha-\delta+2)!}{(d+\alpha-\delta+1)!}\\
& \hspace{4cm}\times\frac{(d+\alpha+2)!!(\alpha+2)!!}{(d+1)!!(2\alpha+3)!!}\\
\end{align*}
\begin{align*}
= & \binom{d-\alpha+\delta-1}\delta\binom\alpha\delta\alpha!!(d-\alpha)!!\\
& \hspace{2cm}\times\frac{(2\alpha-\delta+2)!}{(2\alpha-2\delta+2)(\alpha-\delta+1)!}\frac{(2\alpha-\delta+2)!}{(d+\alpha-\delta+1)!}\frac{(d+\alpha+1)!(\alpha+1)!}{(2\alpha+1)!(2\alpha+2)!}\\
& \hspace{4cm}\times\frac{(d+\alpha+1)!!(\alpha+1)!!}{(d+1)!!(2\alpha+1)!!}\\
= & \frac{(\alpha+1)}{(\alpha-\delta+1)}\frac{\binom{d-\alpha+\delta-1}\delta\binom\alpha\delta\binom{\alpha+1}\delta\binom{d+\alpha+1}\delta}{\binom{2\alpha+2}\delta^2}\frac{\alpha!!(d-\alpha)!!(d+\alpha+1)!!(\alpha+1)!!}{(d+1)!!(2\alpha+1)!!}.
\end{align*}
The last factor is:
\begin{align*}
\prod_{\substack{1\leq i<j\leq d\\ j>\alpha}}(b^*_j+b^*_i)= & \left(\prod_{\substack{i=1\\(j=\alpha+1)}}^\alpha\times\prod_{\substack{1\leq i\leq\alpha\\\alpha+1<j\leq d}}\times\prod_{\substack{j=\alpha+2\\(i=\alpha+1)}}^d\times\prod_{\alpha+1< i<j\leq d}\right)(b^*_j+b^*_i)\\
= & \left((-1)^\delta\delta!(\alpha-\delta)!\right) \left(\prod_{j=\alpha+2}^d\frac{(j-1)!}{(j-\alpha+\delta-1)(j-\alpha-2)!}\right)\\
& \hspace{2cm}\times\left(\frac{(d+\alpha-\delta+1)!}{(2\alpha-\delta+2)!}\right) \left(\prod_{i=\alpha+2}^{d-1}\frac{(d+i)!}{(2i)!}\right)\\
= & (-1)^\delta\frac{\alpha!}{\binom\alpha\delta}\frac{\delta!}{(d-\alpha+\delta-1)!}\frac{d!!}{(\alpha+1)!!(d-\alpha-1)!!}\\
& \hspace{2cm}\times\frac{(d+\alpha-\delta+1)!}{(2\alpha-\delta+2)!}\frac{(2d)!!}{(d+\alpha+2)!!}\prod_{i=\alpha+2}^{d-1}\frac{1}{(2i)!}\\
= & (-1)^\delta\frac{\binom{2\alpha+2}{\delta}}{\binom\alpha\delta\binom{d+\alpha+1}\delta\binom{d-\alpha+\delta-1}\delta}\frac{d!!(2d)!!}{\alpha!!(d-\alpha)!!(d+\alpha+1)!!}\prod_{i=\alpha+1}^{d-1}\frac{1}{(2i)!}.
\end{align*}
This gives the value of $Q_{d,\alpha,\delta}(\underline{b}^*)$:
\begin{align*}
Q_{d,\alpha,\delta}(\underline{b}^*)= & (-1)^\delta2^{m_{d,\alpha}-\delta}(m_{d,\alpha}-\delta)!\frac{(\alpha+1)}{(\alpha-\delta+1)}\frac{\binom{\alpha+1}\delta}{\binom{2\alpha+2}\delta} \frac{(\alpha+1)!!(2d)!!}{d!(2\alpha+1)!!}\prod_{i=\alpha+1}^{d-1}\frac{1}{(2i)!}\\
= & (-1)^\delta 2^{m_{d,\alpha}-\delta}(m_{d,\alpha}-\delta)!\frac{(\alpha+1)}{(\alpha-\delta+1)}\frac{\binom{\alpha+1}\delta}{\binom{2\alpha+2}\delta}\frac{(\alpha+1)!!}{d!}\prod_{i=\alpha+1}^d(2i-1)!\\
\end{align*}
And finally
\begin{align*}
c_{d,\alpha,\delta}= & 2^{m_{d,\alpha}-\delta}(m_{d,\alpha}-\delta)!\frac{(\alpha+1)}{(\alpha-\delta+1)}\frac{\binom{\alpha+1}\delta}{\binom{2\alpha+2}\delta}\frac{(\alpha+1)!!}{d!}\prod_{i=\alpha+1}^d(2i-1)!\\
& \hspace{2cm}\times\delta!\binom{d-\alpha+\delta-1}{\delta}\binom{d+\alpha+1}{\delta}(\alpha-\delta+1)\frac{d!}{(\alpha+1)!}\frac{(d-\alpha)!!(d+\alpha+1)!!}{d!!(2d+1)!!}\\
= & \frac{2^{m_{d,\alpha}-\delta}(m_{d,\alpha})!}{\binom{m_{d,\alpha}}{\delta}}\frac{\binom{d-\alpha+\delta-1}{\delta}\binom{\alpha+1}\delta\binom{d+\alpha+1}\delta}{\binom{2\alpha+2}\delta}\frac{\alpha!!(d-\alpha)!!(d+\alpha+1)!!}{d!!(2d+1)!!}\prod_{i=\alpha+1}^d(2i-1)!\\
\end{align*}
\end{proof}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,044 |
It's time to add a little floral charisma to your bedroom setting with this floral printed designer comforter. This comforter is specially hand-picked from the Classic Collection of Comforters just for you. The filling inside is hollow siliconised polyester fibre (100% polyester), enveloped in a very soft, comfortable and durable cloth.
This Comforter is available in 120 GSM. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,336 |
Was $95.99 You save $55.80!
The Pyle PWRC82 dual-channel speaker system delivers stereo sound to any room in your home or office without cluttering floor space. This 8" moisture-resistant speaker with marine grade construction can be placed indoors, outdoors or anywhere excessive moisture exposure is possible. The PWRC series speakers feature impressive power handling capability, heavy-duty voice coils and high-efficiency response rates. These speakers can be used for both in-ceiling and in-wall applications for a clean look. Convenient push-type speaker terminals allow for quick and hassle-free speaker wire connection. The speaker includes mounting hardware and a cut-out template for easy installation. Achieve optimal sound performance and enhance your overall audio experience with the Pyle PWRC82 Speaker System.
Outdoor speaker, was an excellent product, but the service was extraordinary given that the shipping took place during Thanksgiving weekend. Prompt turnaround, outstanding.
"redpajama_set_name": "RedPajamaC4"
} | 8 |
#include "AbstractConfig.h"
#include <FileUtils.h>
#include <StringUtils.h>
#include <Application.h>
#include <Syslogger.h>
#include <algorithm>
#include <functional>
#include <cctype>
#include <locale>
#include <fstream>
#include <sstream>
#include <regex>
namespace {
const char g_listSeparator = ',';
const char g_equal = '=';
const char g_comment = ';';
const char g_iniGroupStart = '[';
const char g_groupCommandLineSeparator = '-';
const std::string g_assign = ":=";
}
namespace Wuild {
void AbstractConfig::ReadCommandLine(const StringVector& args)
{
for (const auto& arg : args) {
const auto groupPos = arg.find(g_groupCommandLineSeparator);
if (groupPos == std::string::npos)
SetArg("", arg);
else
SetArg(arg.substr(0, groupPos), arg.substr(groupPos + 1));
}
}
bool AbstractConfig::ReadIniFile(const std::string& filename)
{
if (filename.empty())
return false;
std::string currentGroup;
std::map<std::string, std::string> variables;
std::ifstream fin;
fin.open(filename);
if (!fin)
return false;
std::string line;
const std::regex varregex{ R"regex(\$\w+)regex" };
while (fin) {
std::getline(fin, line);
line = StringUtils::Trim(line);
line = line.substr(0, line.find(g_comment)); // remove comments;
if (!line.empty()) {
if (line[0] == g_iniGroupStart) {
currentGroup = line.substr(1, line.size() - 2); // hope we have closing ].
continue;
}
// Replace all $VarName in line. First, trying our variable map; second, try environment variable.
size_t offset = 0;
std::smatch match;
while (std::regex_search(line.cbegin() + offset, line.cend(), match, varregex)) {
auto wholeMatchPosition = match.position(0);
const std::string varName = match[0].str().substr(1);
std::string replacement = "???";
const auto varIt = variables.find(varName);
if (varIt != variables.cend()) {
replacement = varIt->second;
} else {
auto envValue = getenv(varName.c_str());
if (envValue)
replacement = envValue;
else
Syslogger(Syslogger::Err) << "Invalid variable '" << varName << "' found in config.";
}
const auto start = line.begin() + offset + wholeMatchPosition;
line.replace(start, start + match[0].length(), replacement);
offset += wholeMatchPosition + replacement.length();
}
size_t assignPos = line.find(g_assign);
if (assignPos != std::string::npos) {
const auto varname = StringUtils::Trim(line.substr(0, assignPos));
const auto value = StringUtils::Trim(line.substr(assignPos + g_assign.size()));
variables[varname] = value;
continue;
}
SetArg(currentGroup, line);
}
}
fin.close();
return true;
}
bool AbstractConfig::empty() const
{
return m_data.empty();
}
bool AbstractConfig::Exists(const std::string& group, const std::string& key) const
{
return Find(group, key) != nullptr;
}
int AbstractConfig::GetInt(const std::string& group, const std::string& key, int defValue) const
{
const auto* val = Find(group, key);
if (!val)
return defValue;
return std::atoi(val->c_str());
}
double AbstractConfig::GetDouble(const std::string& group, const std::string& key, double defValue) const
{
const auto* val = Find(group, key);
if (!val)
return defValue;
return std::stod(*val);
}
std::string AbstractConfig::GetString(const std::string& group, const std::string& key, const std::string& defValue) const
{
const auto* val = Find(group, key);
if (!val)
return defValue;
return *val;
}
StringVector AbstractConfig::GetStringList(const std::string& group, const std::string& key, const StringVector& defValue) const
{
const auto* val = Find(group, key);
if (!val)
return defValue;
StringVector res;
StringUtils::SplitString(*val, res, g_listSeparator, true, true);
return res;
}
bool AbstractConfig::GetBool(const std::string& group, const std::string& key, bool defValue) const
{
const std::string val = GetString(group, key);
if (val == "true" || val == "TRUE" || val == "ON" || val == "on")
return true;
return GetInt(group, key, defValue) > 0;
}
std::string AbstractConfig::DumpAllValues() const
{
std::ostringstream os;
for (const auto& group : m_data) {
if (!group.first.empty()) {
os << "\n\n[" << group.first << "]";
}
for (const auto& keyValue : group.second) {
os << "\n"
<< keyValue.first << " = " << keyValue.second;
}
}
return os.str();
}
void AbstractConfig::SetArg(const std::string& group, const std::string& arg)
{
const auto equalPos = arg.find(g_equal);
if (equalPos == std::string::npos)
return;
const auto key = StringUtils::Trim(arg.substr(0, equalPos));
const auto value = StringUtils::Trim(arg.substr(equalPos + 1));
m_data[group][key] = value;
}
const std::string* AbstractConfig::Find(const std::string& group, const std::string& key) const
{
auto groupIt = m_data.find(group);
if (groupIt == m_data.end())
return group.empty() ? nullptr : Find("", key);
auto valueIt = groupIt->second.find(key);
if (valueIt == groupIt->second.end())
return group.empty() ? nullptr : Find("", key);
return &(valueIt->second);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,444 |
Rock Ore Gold Mining Machine /gold Mining Equipment With Carbon In Leaching . targeted at the large and medium sized series of heavy mining equipment. Chat Online Henan Xingbang Heavy Machinery Mining Machine, AAC .
heavy equipment ore mining offers 5076 gold mining processing equipment products. About 1% of these are other metal metallurgy machinery, 1% are other food processing machinery. A wide variety of gold mining processing equipment options are available to you, such as free samples.
The TurboPan gold pan gets the gold and heavy minerals to the bottom of the pan and into a central trap quickly because the pan is shallow. The TurboPan is only around 5cm deep on the edges. TurboPan 16" Black Plastic Gold Pan.
EquipmentMine is a searchable database of new, used and surplus mining equipment and parts available for sale. . XGMA heavy equipment. machines used in heavy mines captainlee. Related Post quartz ore mines machines in sites vsi stone copper machines ghana gold machines washing used manganese ore magnetic separator machines in india .
May 21, 20150183;32;Wheel Drilling Machine. mining ore hitachi heavy equipment offers 10449 wheel drilling machine products. About 19% of these are mine drilling rig, 10% are wood router, and 1% are construction machinery.
Used Heavy Stone Mines Machines . Caiman ore MINING EQUIPMENT. Caiman Mining equipment manufacturers is focused on delivering projects and services that m. Welcome to the Boulder Patch Mines Used Gold Mining Equipment Store Check out our wide variety of used gold mining equip.
Iron Ore Mine Indonesia, Wholesale Various High Quality Iron Ore Mine Indonesia Products from Global Iron Ore Mine Indonesia Suppliers and Iron Ore Mine Indonesia Factory,Importer,Exporter iron ore mines equipments in india. ore mines in indiairon ore indonesia.
Gold Mining Equipment, Gold Mining Equipment Suppliers . China Small Gold Mining Equipment , Gold Processing Machine plant. Add to Compare .. Gongyi City Hua Sheng Ming Heavy Industry Machinery Factory. Get Price Gold Ore Sorting Wholesale, Gold Ore Suppliers. Mineral bulks processing machinery for ore sorting,Gold mining equipment,sorting machine.
46 best Machines images on Pinterest Heavy equipment, Mining This huge machine at a Xinhai Mining site is used for open pit mining. Japanese musings on odd construction equipment Art imitating life or vice versa Mine a pit from which coal, minerals, or precious stones are taken by digging. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,536 |
Q: Rails select not working on edit page For my @line model I have a form that works perfectly when included as a partial on my NEW line page but raises `undefined method 'empty?' for nil:NilClass` when included on my EDIT page.
The Edit page has:
<%= form_for(@line, :html => { :class => "form-horizontal"} ) do |f| %>
<%= render 'form', f: f %>
<%= f.submit "Submit changes", class: "btn btn-primary" %>
<% end %>
The form looks like this (minus divs):
<%= f.label :name, class: "control-label" %>
<%= f.text_field :name, placeholder: 'A relatively short line name' %>
<%= f.label :description, class: "control-label" %>
<%= f.text_area :description, placeholder: 'Full line name and any description' %>
<%= f.label :manufacturer_id, class: "control-label" %>
<%= f.select :manufacturer_id, options_from_collection_for_select(Manufacturer.all, :id, :name, {:selected => @line.manufacturer}) %>
<%= f.label :parent_id, class: "control-label" %>
<%= f.select :parent_id, @lines, {:selected => @line.parent, include_blank: true} %>
The problem is with the select options on the last list. I'm using @lines to populate the dropdown with all existing lines. I'll probably change that to use AJAX to populate it with only the lines that belong to the manufacturer selected above but for now I just want to get the edit function working.
I'm sure it's an obvious mistake but no amount of looking on here and reading the documentation has found a solution so far.
If it's useful, here is my controller actions:
def edit
@line = Line.find(params[:id])
end
def update
@line = Line.find(params[:id])
if @line.update_attributes(line_params)
flash[:success] = "Line updated. #{undo_link}"
redirect_to @line
else
render 'edit'
end
end
A: You aren't defining @lines anywhere in your edit action, so when you use it as an argument in your view, it is returning nil.
A: You have to change your edit action, because you aren't defining @lines anywhere:
def edit
@line = Line.find(params[:id])
@lines = Line.all
end
or you can call your index action as:
def edit
@line = Line.find(params[:id])
index
end
or you can change your form:
= f.select :user_id, Company.all, {:selected => @company.name, include_blank: true}
I think it will help you.
A: After trying the above answers and getting stuck on how to get the line names showing in the dropdown properly, I've got it working with the following:
<%= f.select :parent_id, ("<option></option>" + options_from_collection_for_select(Line.all, :id, :name, {:selected => @line.parent_id})).html_safe %>
Got everything working and didn't need any alterations to controller.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,094 |
.insert-image-container {
max-width: 100%;
border: 1px solid #eee;
padding: 10px;
display: flex;
justify-content: center;
align-items: center;
max-height: 365px;
} | {
"redpajama_set_name": "RedPajamaGithub"
} | 6,388 |
Q: Let $X'$ and $X$ be metric spaces with metrics $d$ and $d'$. Show that $m((p,p'), (q,q')) = \sqrt{d(p, q)^2 + d'(p,q)^2}$ is a metric. For the given metric $m((p,p'), (q,q')) = \sqrt{d(p, q)^2 + d'(p,q)^2}$. I have proven the first two properties for a metric hold (this is rather trivial). I am unable to prove the triangle inequality. My work so far is as follows:
$$m((p,p'), (q,q')) = \sqrt{d(p, q)^2 + d'(p',q')^2} \le \sqrt{(d(p,r) + d(r, q))^2 + (d'(p',r') + d'(r', q'))^2} \\ \le \sqrt{(d(p,r) + d(r, q))^2} + \sqrt{(d'(p',r') + d'(r', q'))^2}$$
How can I continue from here? Was invoking $\sqrt{x+y} \le \sqrt x + \sqrt y$ helpful?
A: Hint:
Fixing notation, I assume you mean
$$m((p,p'), (q,q')) = \sqrt{ d(p,q)^2 + d'(p',q')^2}$$
This is a generalization of a Euclidean norm on $\mathbb R^2$. You can prove the triangle inequality the same way as we do there. Namely, write
$$m((p,p'), (q,q'))^2 = \langle (d(p,q),d'(p',q')), (d(p,q),d'(p',q')) \rangle_{\mathbb R^2}^2$$
Your proof will require the Cauchy-Schwarz inequality.
Try writing out the proof for $\mathbb R^2$. Then write it out for your metric $m$.
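One way to carry the hint through (a sketch, using the corrected notation above): for a third point $(r,r')$, put $u=(d(p,r),\,d'(p',r'))$ and $v=(d(r,q),\,d'(r',q'))$, both vectors in $\mathbb R^2$ with nonnegative entries. Since $d(p,q)\le d(p,r)+d(r,q)$ and $d'(p',q')\le d'(p',r')+d'(r',q')$, and the Euclidean norm is monotone in each nonnegative coordinate,
$$m((p,p'),(q,q')) \le \|u+v\|.$$
Cauchy-Schwarz gives $\langle u,v\rangle\le\|u\|\,\|v\|$, hence
$$\|u+v\|^2=\|u\|^2+2\langle u,v\rangle+\|v\|^2\le(\|u\|+\|v\|)^2,$$
so $\|u+v\|\le\|u\|+\|v\|=m((p,p'),(r,r'))+m((r,r'),(q,q'))$, which is the triangle inequality for $m$. (The inequality $\sqrt{x+y}\le\sqrt x+\sqrt y$ is true, but it only bounds $m$ by the sum of the four individual distances, not by the two $m$-distances you actually need.)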
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,326 |
{"url":"https:\/\/msp.org\/ant\/2010\/4-2\/p01.xhtml","text":"#### Vol. 4, No. 2, 2010\n\n Recent Issues\n The Journal About the Journal Editorial Board Subscriptions Editors' Interests Submission Guidelines Submission Form Editorial Login Ethics Statement ISSN: 1944-7833 (e-only) ISSN: 1937-0652 (print) Author Index To Appear Other MSP Journals\nCanonical extensions of N\u00e9ron models of Jacobians\n\n### Bryden Cais\n\nVol. 4 (2010), No. 2, 111\u2013150\n##### Abstract\n\nLet $A$ be the N\u00e9ron model of an abelian variety ${A}_{K}$ over the fraction field $K$ of a discrete valuation ring $R$. By work of Mazur and Messing, there is a functorial way to prolong the universal extension of ${A}_{K}$ by a vector group to a smooth and separated group scheme over $R$, called the canonical extension of $A$. Here we study the canonical extension when ${A}_{K}={J}_{K}$ is the Jacobian of a smooth, proper and geometrically connected curve ${X}_{K}$ over $K$. Assuming that ${X}_{K}$ admits a proper flat regular model $X$ over $R$ that has generically smooth closed fiber, our main result identifies the identity component of the canonical extension with a certain functor ${Pic}_{X\u2215R}^{\u266e,0}$ classifying line bundles on $X$ that have partial degree zero on all components of geometric fibers and are equipped with a regular connection. This result is a natural extension of a theorem of Raynaud, which identifies the identity component of the N\u00e9ron model $J$ of ${J}_{K}$ with the functor ${Pic}_{X\u2215R}^{0}$. As an application of our result, we prove a comparison isomorphism between two canonical integral structures on the de\u00a0Rham cohomology of\u00a0${X}_{K}$.\n\n##### Keywords\ncanonical extensions, N\u00e9ron models, Jacobians, relative Picard functor, group schemes, Grothendieck's pairing, Grothendieck duality, integral structure, de Rham cohomology, abelian variety, rigidified extensions\n##### Mathematical Subject Classification 2000\nPrimary: 14L15\nSecondary: 14F30, 14F40, 11G20, 14K30, 14H30","date":"2019-11-15 21:42:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 19, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8497133851051331, \"perplexity\": 544.0394047720904}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496668712.57\/warc\/CC-MAIN-20191115195132-20191115223132-00485.warc.gz\"}"} | null | null |
Great shot of the mushrooms. I like the house in the distance.
Those look really nice, although I wouldn't eat them!
Cool close up shots. Good to hear you are still not smoking. Keep it up!
I love these shots Brian! Very nice. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,148 |
About Hunters Coaches
We care deeply about our environment and about doing our part to help protect not only the stunning, gorgeous Scotland in which we live and operate our business, but the planet as well.
Of course, it is well known that public transport, which moves many people at once in a coach, bus or other mass transit system, is much less harmful to the environment.
But for us, that's not enough.
We do our very best to route our vehicles so as to minimise needlessly wasted mileage and reduce fuel consumption without impacting our passengers' experience in any way.
We do as much as we can to avoid heavy traffic by scheduling smarter where we can.
These, however, are just the start of our efforts in order to minimise our environmental footprint as much as we possibly can.
We go to extraordinary lengths to maintain our fleet to the highest possible standards, to ensure not only that our vehicles are extremely reliable but also that they run as cleanly and efficiently as possible.
The fluids we have to replace, such as oil and brake fluid, are all recovered and disposed of in as environmentally friendly a way as possible.
Tyres, too, are dealt with in as environmentally responsible a way as we possibly can.
We even go so far as to clean the outside of our buses to help them cut through the air as efficiently as possible, and we keep the inside as clean as possible, asking our passengers to help by not leaving rubbish on board or by disposing of it responsibly, even if we are asked to assist with that.
And, as we replace and update our coaches and buses they are either replaced with or upgraded to the most modern and clean engines that are available, even if it costs us more.
These small details matter.
Some companies merely make statements that they care about the environment.
We don't just make statements; we do all this and more because, after all, we love the views just like all our passengers do, and we want them to still be there for generations to come.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,129 |
Circo: Powerful Docu about Circus Life
March 19, 2011 by EmanuelLevy
"Circo," a new documentary from Aaron Schock, hits on multiple levels. The director has found a supremely rich topic—a family-run circus on its last legs touring the small towns of Mexico—and taken the time to carefully capture it on screen. The result is a transcendent experience for filmgoers with a feature that has strong elements of social commentary, while also raising some bigger questions about the human condition in general in strange times such as these.
It is also a great circus movie period, often calling to mind circus-related classics, such as Tod Browning's "Freaks" (1932), Federico Fellini's "La Strada" (1954), Max Ophuls' "Lola Montes" (1955), and Robert Bresson's "Au Hasard Balthazar" (1966).
(Please see our reviews of these films).
"Circo" may not match the stature of those films, but it certainly belongs in the same beloved circus movie family.
Director Schock captures the beautiful yet disturbing magic of the Gran Circo Mexico: the blue big top rises out of the dust, and the town's children rush to see, in addition to lions and tigers, children of their own age contorting themselves, flying through the air on ropes, walking the tightrope, and clowning around. This circus seems bigger than the real world—but, like the real world, it somehow is barely able to hold itself together–the whole tent could collapse at any moment.
Can such a life be good for the children who must perform there night after night, some of them—especially the boys—pulling off dangerous stunts on a regular basis? This is a question that Ivonne, the mother of most of the children, is constantly asking herself and her husband, Tino.
Unlike Ivonne, Tino comes from generations of "circus blood." Tino, the likely inheritor of the Circo Mexico following his father, is uneducated; he's unable to read and write. But some of his observations about circus life are startlingly poetic. "The circus forever," he tells us as a statement of fact. "Through the good and the bad. Always the circus."
This is the philosophy his parents have taught him, but to which his wife does not subscribe—not for her children. "You have kids to give them everything," Ivonne argues. "Not for them to give everything to you. And they give us too much."
Tino and Ivonne fight about this point throughout the film, giving "Circo" its central thematic tension. What if, as Ivonne insists, Tino and the kids give up the life, which is obviously not working out too well for anyone? It would end his family's more than one hundred years in the business. And what new life can Tino build for his family given the freefall of the Mexican economy?
He is in the treacherous middle of everything: stuck between a lost past and a murky present, between his growing family with Ivonne and his crumbling family tradition. At one point, Tino precisely likens his life to tightrope walking. (Ivonne, meanwhile, says she feels like a caged animal.)
Schock sets up a compelling contrast between Tino's predicament and the circumstances of his five-year-old niece, Naydelin, who is just entering the circus world. This spirited girl loves the circus so much that she is resolved to live there separate from her mother, who used to be in the circus but has since settled down.
Although Naydelin is scarily firm about her decision for such a cute, tiny child, her mother has to seriously consider whether it would be best for her daughter to get a proper education in kindergarten, learning to read, write, do arithmetic, and the rest. Does a girl with this level of "circus blood" belong cooped up in a kindergarten?
A highly talented cinematographer, Schock works with editor Mark Becker to present many memorable montage sequences that, while not necessarily furthering the main thrust of the narrative, add so much to the ambience of "Circo." The filmmakers make this world, especially as seen through the eyes of children, a real place we can almost step into. By the end of the film, we feel that we have been somewhere; we have taken an actual trip with this family and survived something.
This is, after all, what documentary films at their best can do, take an unknown aspect of the real world and make it come alive for us, thus changing our perspective.
The finest sequence of "Circo" follows the kids wandering around the countryside on a rare day off. This is one of the few times we see them playing and exploring like "normal" kids would. When they come upon a deserted mansion and spend time peeking through its windows, one boy poignantly declares, "You don't believe me, but one day I'll live in a house like this." We are reminded of this line later in the film, when Tino muses, "The circus is your house."
Other striking sequences, some of them quite short but nevertheless powerful, include the unceremonious discarding of a dead llama. Tino explains of the llama: "It woke up dead, the poor animal."
Others show Tino's brother riding his motorcycle in loops in the "Globe of Death," and one of the boys sitting atop the circus tent like it is the top of the world, then running down it.
Schock gradually casts a dizzying spell on us in "Circo," which is backed up by a pitch-perfect soundtrack from the band Calexico.
If Terrence Malick were to make a documentary about circus life in Mexico, it might look and feel a whole lot like this.
A First Run Features release.
Directed by Aaron Schock.
Produced by Jannat Gargi.
Cinematography by Aaron Schock.
Edited by Mark Becker.
Running time: 75 minutes. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 169 |
Pomnik Najświętszego Serca Pana Jezusa w Poznaniu
Pomnik Najświętszego Zbawiciela w Poznaniu | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,466 |
{"url":"https:\/\/physics.stackexchange.com\/questions\/330063\/pulleys-and-rotational-motion-problem","text":"Pulleys and rotational motion problem\n\nTwo blocks are joined by a light string that passes over the pulley shown above, which has radius $R$ and moment of inertia $I$ about its center. $T_1$ and $T_2$ are the tensions in the string on either side of the pulley and $\u03b1$ is the angular acceleration of the pulley. Which of the following equations best describes the pulley's rotational motion during the time the blocks accelerate?\n(A) $m_2$$g$$R$ = $I$$\u03b1 (B) T_2$$R$ = $I$$\u03b1 (C) (T_2 - T_1)R = I$$\u03b1$\n(D) ($m_2$ \u2013 $m_1$)$g$$R = I$$\u03b1$\n\nThe correct answer is C. The reasoning is that the net torque is equal to $T$$_2R - T_1R. What I'm having trouble understanding is why the torque produced by m_2 is simply T$$_2$R. $m_2$ is accelerating downwards. Wouldn't the free body diagram of $m_2$ be $mg$ pointing downwards and $T_2$ pointing upwards? Thus, shouldn't the net force acting on the block be equal to $mg - T_2$, and consequently, the torque caused by $m_2$ be equal to $R(m_2g - T_2)$? Why are we allowed to ignore the weight of the block?\n\n$T_2$ has all the relevant information in it. It and $T_1$ are the only forces actually acting on the Pulley, $T_2$ will have some dependence on $m_2$, but since the question defines a tension force acting from the weight to the pulley for you, there's no reason to not just use that.\n\nSo, with the knowledge that all we need are $T_1$ and $T_2$, and the radius to calculate the torque, $$\\tau_1 = T_1 R$$ $$\\tau_2 = T_2 R$$ And so the net torque is: $\\tau_{net} = (T_2 - T_1)R = I\\alpha$\n\nSeparate free body diagrams are required for the block $m_2$ and the pulley. That is why they are called free body diagrams : we isolate them from each other and consider only the forces acting directly on each.\n\nThe forces which act directly on the pulley are the tensions $T_1, T_2$ in the string. The string transmits these forces between the blocks and the pulley. The weight of the block does not itself act on the pulley, so we ignore it when considering the forces on the pulley.\n\nThe weight of the block is not being ignored altogether. It affects the value of $T_2$. We take account of it when we look at the block as a free body and write an equation of motion for it : $m_2g-T_2=m_2a$.\n\nThe fact that the string is inextensible and does not slip against the pulley also relates the accelerations $a$ of $m_2$ and $m_1$ to the angular acceleration $\\alpha$ of the pulley.\n\n\u2022 \"Tension\" is a force within string molecules. It doesn't affect the pulley. The pulley exerts a reaction force on the string: and thus an equal force is exerted on the pulley. this is the force exerted on the pulley. 
\u2013\u00a0satan 29 Jun 11 at 19:08","date":"2020-09-21 03:59:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7199715971946716, \"perplexity\": 186.6127716845058}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600400198887.3\/warc\/CC-MAIN-20200921014923-20200921044923-00219.warc.gz\"}"} | null | null |
Q: How do I count columns of a table For example :
tbl_ifo
id | name | age | gender
----------------------------
1 | John | 15 | Male
2 | Maria | 18 | Female
3 | Steph | 19 | Female
4 | Jay | 21 | Male
How can I count the columns of this table using mysql?
A: SELECT count(*)
FROM information_schema.columns
WHERE table_name = 'tbl_ifo'
A: $cs = mysql_query("describe tbl_info");
$column_count = mysql_num_rows($cs);
Or just:
$column_count = mysql_num_rows(mysql_query("describe tbl_info"));
A: I think you also need to specify the name of the database:
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = 'SchemaNameHere'
AND table_name = 'TableNameHere'
if you don't specify the name of your database, chances are it will count the columns of every table that matches the name you give. For example, suppose you have two databases: DBaseA and DBaseB. DBaseA has two tables: TabA (3 fields) and TabB (4 fields). DBaseB again has two tables: TabA (4 fields) and TabC (4 fields).
if you run this query:
SELECT count(*)
FROM information_schema.columns
WHERE table_name = 'TabA'
it will return 7 because there are two tables named TabA. But by adding another condition table_schema = 'SchemaNameHere':
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = 'DBaseA'
AND table_name = 'TabA'
then it will only return 3.
A: To count the columns of your table precisely, you can query information_schema.columns, passing your desired database (schema) name and table name.
Reference the following Code:
SELECT count(*)
FROM information_schema.columns
WHERE table_schema = 'myDB'
AND table_name = 'table1';
A: I have a more general answer; but I believe it is useful for counting the columns for all tables in a DB:
SELECT table_name, count(*)
FROM information_schema.columns
GROUP BY table_name;
A: Simply fetch one row with mysql_fetch_assoc and count the resulting array with the count() function
A: this query may help you
SELECT COUNT(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE
TABLE_CATALOG = 'database' AND TABLE_SCHEMA = 'dbo'
AND TABLE_NAME = 'tbl_ifo'
A: I think you want to know the total entries count in a table!
For that use this code..
SELECT count( * ) as Total_Entries FROM tbl_ifo;
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,279 |
Georg Heinrich Crola (6 June 1804, Dresden - 6 May 1879, Ilsenburg am Harz) was a German landscape painter in the mid-19th century. He specialized in the representation of the German forest.
Life
Georg Heinrich Crola was born in Dresden in 1804, the son of the merchant Croll. Difficult domestic conditions forced his parents to send the four year-old Georg Heinrich to live in the family of his maternal grandfather, who was a painter at the Royal Porcelain Factory and an art teacher at the state boarding school of Meissen. His grandfather, recognising his artistic talent, introduced him in Dresden to the painters Johann Christian Klengel, Johann David Schubert and Johann Gottfried Jentzsch who took charge of his education.
After the death of his grandfather in 1822 he lived for a while a wandering life. Around this time he also changed his family name from Croll to Crola so as to avoid conscription by the Saxon government. Around 1823 he was back in Dresden where he made a living from painting boxes. He succeeded in capturing the attention of Caspar David Friedrich and Johan Christian Dahl who assisted him in his studies. His talent was also recognised by Ernest I, Duke of Saxe-Coburg and Gotha who gave him commissions to paint landscapes and castles in Gotha. He used the opportunity to paint many landscapes of the Gotha region.
Crola went to Munich in 1830, where he studied the old masters as well as the neighboring landscapes. He later traveled to northern Germany, where he got to know the work of the Düsseldorf school of painting.
He later visited Berlin where he met his future bride Elise (Elisabeth) Concordia, the daughter of a banker and an amateur painter herself. The couple married in Ilsenburg in 1840 and settled there. Together they founded a school for romantic landscape painters.
From that time onwards the surrounding low mountains became Crola's most important source of inspiration.
References
External links
1804 births
1879 deaths
Artists from Dresden
German landscape painters
19th-century German painters
19th-century German male artists
German male painters
Düsseldorf school of painting | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,976 |
Educate Girls, End Poverty
Students enrolled in Relief International's Educate Girls, End Poverty program walk home after school. © RI Staff/RI
The Educate Girls, End Poverty program aims to overcome political, economic, and cultural norms by providing Somali girls with the opportunity to receive an education and break the cycle of chronic poverty.
Twenty years of conflict in Somalia have ruined the country's public services and forced a generation of people from their homes. This massive displacement has cost a generation of Somali students the opportunity to receive an education and other benefits of a stable childhood.
Somalia has one of the world's lowest enrollment rates for primary school – with fewer than 50% of all Somali girls in school.
Violence and instability, coupled with extreme poverty, are the main drivers behind low enrollment rates, forcing many families to choose to keep their girls at home instead of in school. In many areas of Somalia, parents are required to pay for their children's school fees – a burdensome expense that many cannot afford. In addition, many families hold deeply-rooted beliefs about gender roles in the household that determine whether or not girls attend school.
In response, Relief International has implemented the Educate Girls, End Poverty program since 2013 with support from the U.K. Department for International Development (DFID) and in partnership with two other development NGOs. The program aims to overcome political, economic, and cultural norms by providing Somali girls with the opportunity to receive an education and break the cycle of chronic poverty.
Relief International supports 227 schools in fragile and conflict-affected areas of Somalia's newly formed Federal Member States of Galmudug, HirShabelle, and Benadir Regional Administration as well as schools in Puntland, and Somaliland. Our teams work hand-in-hand with girls, boys, parents, teachers, community leaders, and government ministries to devise and implement tailored interventions to provide marginalized girls a full education.
SOMALIA — Galmudug, HirShabelle, and Benadir Regional Administration; Puntland; Somaliland.
Spotlight on Educate Girls, End Poverty
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 7,148 |
package org.jbpm.executor.cdi.impl.jpa;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import org.jbpm.executor.ExecutorServiceFactory;
import org.jbpm.executor.impl.jpa.ExecutorQueryServiceImpl;
import org.jbpm.executor.impl.jpa.ExecutorRequestAdminServiceImpl;
import org.jbpm.executor.impl.jpa.JPAExecutorStoreService;
import org.kie.api.executor.ExecutorAdminService;
import org.kie.api.executor.ExecutorQueryService;
import org.kie.api.executor.ExecutorService;
import org.kie.api.executor.ExecutorStoreService;
import org.kie.internal.runtime.cdi.Activate;
/**
*
* IMPORTANT: please keep all classes from package org.jbpm.shared.services.impl as FQCN
* inside method body to avoid exception logged by CDI when used with in memory mode
*/
@Activate(whenAvailable="org.jbpm.runtime.manager.impl.RuntimeManagerFactoryImpl")
public class JPAExecutorServiceProducer {
@Inject
@PersistenceUnit(unitName = "org.jbpm.domain")
private EntityManagerFactory emf;
@Produces
public ExecutorService produceExecutorService() {
ExecutorService service = ExecutorServiceFactory.newExecutorService(emf);
return service;
}
@Produces
public ExecutorStoreService produceStoreService() {
ExecutorStoreService storeService = new JPAExecutorStoreService(true);
org.jbpm.shared.services.impl.TransactionalCommandService commandService = new org.jbpm.shared.services.impl.TransactionalCommandService(emf);
((JPAExecutorStoreService) storeService).setCommandService(commandService);
((JPAExecutorStoreService) storeService).setEmf(emf);
return storeService;
}
@Produces
public ExecutorAdminService produceAdminService() {
ExecutorAdminService adminService = new ExecutorRequestAdminServiceImpl();
org.jbpm.shared.services.impl.TransactionalCommandService commandService = new org.jbpm.shared.services.impl.TransactionalCommandService(emf);
((ExecutorRequestAdminServiceImpl) adminService).setCommandService(commandService);
return adminService;
}
@Produces
public ExecutorQueryService produceQueryService() {
ExecutorQueryService queryService = new ExecutorQueryServiceImpl(true);
org.jbpm.shared.services.impl.TransactionalCommandService commandService = new org.jbpm.shared.services.impl.TransactionalCommandService(emf);
((ExecutorQueryServiceImpl) queryService).setCommandService(commandService);
return queryService;
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,872 |
\section{Introduction}
\label{sec:intro}
A central research area in the domain of randomized algorithms is the \emph{occupancy problem} for \emph{balls-into-bins} processes~\cite{azar, perfect, heavily_load, lenzen, mitzen}. The framework of the problem involves the analysis of the online
allocation, wherein a set of independent balls is to be assigned to a set of bins. The occupancy problem helps to model several realistic problems into a formal mathematical structure, and hence opens an active area of work in probability theory
as well as in computer science.
In the classical ``balls-into-bins'' problem, $m$ balls are sequentially thrown into $n$ bins, where each ball is placed into one of the bins independently and uniformly at random (i.u.r.). The natural question then is to analyze the maximum load in any of the bins. Mapping the problem to the application domain, we may consider the balls to be jobs or tasks and the bins to be servers. The problem then reduces to scheduling the jobs with balanced load allocations among the servers.
Probably one of the earliest applications of randomized load balancing is in the context of \emph{hashing}. With the chaining method used to resolve hash collisions, the length of the list in a hash bucket is a measure of the retrieval complexity. For a uniform
hash function, the list lengths follow the same distribution as the number of balls in a bin in this setting.
The advent of parallel and distributed systems required efficient online load balancing among the servers to improve the
throughput of the system. Dependence on a centralized environment for uniform load balancing is highly undesirable for
such systems due to high communication complexity. With the introduction of the \textit{Cloud Computing} paradigm, the placement of virtual machines (VMs)
on servers provided a new dimension to the applicability of the randomized balanced allocation study.
Other applications such as the design of Multimedia or Data Servers use disk arrays where a data unit is partitioned and stored in a distributed fashion. These applications demand even (balanced) access of the disks on retrieval~\cite{sanders} and Karp in~\cite{karp} discusses applications in video-on-demand (termed \emph{k-orientability}~\cite{perfect}). The balls into bins problem accurately describes these applications only when the balls have uniform weights.
Other applications assume the loads to be of different weights to model its various dimensions.
This paper tackles the problem of sequential online allocation of balls into bins. Assuming we have $n$ bins and
$m$ balls arriving one at a time to be thrown into these bins, the problem is to devise an efficient algorithm such
that the allocation of the balls is nearly balanced among all the bins. In formal terms, the
load in each of the bins should be as close to the average ($m/n$) as possible. We initially study the case of single-dimensional
sequential placement of uniformly weighted balls into bins and then extend it to the general weighted case. Finally, we also observe
that $IDEA$ provides the same result w.h.p. for the multi-dimensional balls-into-bins problem when $m=n$.
In this context, we define \emph{Gap} to be the difference between the load of the heaviest bin and the average load. The currently best known algorithm bounds $Gap$ by $O(\log \log n)$ with high probability using the symmetric \emph{d-choice} placement strategy~\cite{azar, mitzen}. In the d-choice method, each ball selects $d$ bins i.u.r. among the $n$ bins and is allocated to the least loaded bin among them. It is well known that with $d = \Theta(\log n)$ choices, the gap is $O(1)$~\cite{soda08}.
In this paper we propose a novel algorithm, \emph{Improved D-choice with Estimated Average} ($IDEA$), for efficient placement of the balls in the bins. We prove that this technique provides a \emph{\textbf{constant}} $Gap$ with high probability (w.h.p.) even when $d$ is kept constant, albeit with an expected constant number of retries or rounds per ball. We further extend the result to show that the guarantee also holds for the heavily loaded case, i.e. $m>>n$, w.h.p. Our technique is different from the typical greedy $d$-choice process in that it places the ball in a bin whose load is equal to or lower than the \textit{estimated average} of that bin. Using an \textit{expected} constant number of retries such a bin can be found for each ball, and hence the load in each bin tends towards the estimated average, which in turn tends towards the actual average, resulting in a constant upper bound on the gap. Our strategy is also different from the typical asymmetric strategy~\cite{vocking-tree} where, in case of a tie over the load, the leftmost bin gets the ball. Our result can have profound implications, both theoretical and practical, for online load balancing algorithms.
The outline of the paper is as follows: Section~\ref{sec:rel} presents an introduction to the known works and results in this domain. In Section~\ref{sec:algo} we propose the detailed outline of the $IDEA$ algorithm for allocating the balls into the bins. Section~\ref{sec:proof} provides the theoretical proof for bounding the $Gap$ to a constant quantity with high probability. Section~\ref{sec:disc} provides insights into the execution of the $IDEA$ algorithm.
Section~\ref{subsec:ext_weight} depicts its extension for the general weighted balls case, Section~\ref{subsec:ext_multi} exhibits similar results for the multi-dimensional scenario, and Section~\ref{subsec:parallel} proposes the protocol for achieving the same results for the parallel scenario. Finally, Section~\ref{sec:conc} concludes the paper.
\section{Related Work}
\label{sec:rel}
The study of ``balls-into-bins'' problem dates back to the study of hashing by Gonnet. He showed that when $n$
balls are thrown into $n$ bins i.u.r., the fullest bin has an expected load of $(1+o(1))\log n / \log \log n$~\cite{gonnet}.
The maximum loaded bin in this approach was shown to be $O(\log n/ \log\log n)$ w.h.p.~\cite{ranjan}. It was also shown
that for $m \geq n\log n$ balls, a bin can have a maximum load of $m/n + \Theta(\sqrt{m\log n/ n})$.
Azar et al.~\cite{azar} showed that if the balls chose sequentially from $d \geq 2$ bins i.u.r. (the so-called \textit{Greedy[d]} algorithm) and greedily selected the bin currently with the lowest load, the $Gap$ could be bounded by $O(\log \log n/\log d)$ w.h.p. However, the solution worked only for the case when $m=n$. They also showed that the bound is stochastically optimal, i.e. any other greedy approach using the placement information of the previous balls to place the current ball majorizes their approach. However, if the alternatives are drawn from separate groups with different rules for tie breaking, it results in different allocations.~\cite{vocking-tree} presents such an \textit{asymmetric} strategy and, using a witness-tree based analysis, proves that this leads to an improvement in the load balance to $O(\frac{\log \log(n)}{d\log(\phi_d)})$ w.h.p., where $\phi_2$ is the golden ratio and $\phi_d$ is a simple generalization. Our algorithm is different from both these techniques in that it uses the \textit{estimated gap} as the criterion for choosing the bin and makes potentially multiple retries, where in each retry $d$ bins are chosen i.u.r.
For the heavily loaded case, $m>>n$, the bound of $O(\log \log n/\log d)$ w.h.p. was later proven in~\cite{heavily_load} using sophisticated techniques in two main high level steps. In the first step, they show that when the number of balls is polynomially bounded by the number of bins the gap can be bounded by $O(\ln\ln(n))$, using the concept of layered induction and some additional tricks. In particular, they consider the entire distribution of the bins in the analysis (while in typical $m = O(n)$ case the bins with load smaller than the average could be ignored). In the second step, they extend this result to general $m >> n$ case, by showing that the multiple-choice processes are fundamentally different from the classical single-choice process in that they have \textit{short memory}. This property states that given some initial configuration with gap $\Delta$, after adding $poly(n)$ more balls the initial configuration is \textit{forgotten}. The proof of the short memory property is done by analyzing the mixing time of the underlying Markov chain describing the load distribution of the bins. The study of the mixing time is via a new variant of the coupling method (called \textit{neighboring coupling}).
It was also shown that when $d = \Theta(\log n)$ the gap becomes $O(1)$~\cite{soda08}.
Cole et al.~\cite{routing-cole} showed that the two-choice paradigm can be applied effectively in a different context, namely, that of routing virtual circuits in interconnection networks with low congestion. They showed how to incorporate the two-choice approach to a well-studied paradigm due to Valiant for routing virtual circuits to achieve significantly lower congestion.
Kunal et.al.~\cite{kunal-weighted} prove that for weighted balls (weight distribution with finite fourth moment) and $m >> n$, the expected gap is independent of the number of balls and is less than $n^c$, where $c$ depends on the weight distribution. They first prove the weak gap theorem which says that w.h.p $Gap(t) < t^{2/3}$. Since in the weighted case the $d$ choice process is not dominated by the one choice process, they prove the weak gap theorem via a potential function argument. Then, the \textit{short memory theorem} is proved. While in~\cite{heavily_load} the short memory theorem is proven via coupling,~\cite{kunal-weighted} uses similar coupling arguments but defines a different distance function and use a sophisticated argument to show that the coupling converges.
The $(1+\beta)$-choice scheme~\cite{beta} proved that if a ball chooses with $\beta \in (0,1)$ probability the least loaded bin of $d=2$ randomly chosen bin, and otherwise i.u.r. a single bin, the $Gap$ becomes independent of $m$ and is given by $O(\log n / \beta)$.
In the parallel setting,~\cite{lenzen} showed that a constant bound on the gap is possible with $O(\log^* n)$ communication rounds. Adler et.al.~\cite{parallel} consider parallel balls and bins with multiple rounds. They present analysis for $O(\frac{\log\log(n)}{\log(d)})$ bound on the gap (for $m = O(n)$) using $O(\frac{\log\log(n)}{\log(d)} + O(d))$ rounds of communication.
For offline balls-into-bins problem, using maximum flow computations it was shown that the maximum load of a bin w.h.p. is $\lceil m/n \rceil +1$.~\cite{perfect} showed that for $m>cn\log n$ balls, where $c$ is a sufficiently large constant, a perfect distribution of the balls was possible w.h.p. However, no such similar result is found in the literature for the online sequential case for constant $d$ choice.
Mitzenmacher et al. in~\cite{multi} address both the single-choice and d-choice paradigms for multidimensional balls and
bins under the assumption that the balls are uniform D-dimensional (0, 1) vectors, where each ball has exactly $f$ populated
dimensions. They show that the gap for multidimensional balls and bins, using the two-choice process, is bounded by
$O(\log \log(nD))$. We provide a better bound of $O(1)$ w.h.p. for the $m=n$ case.
In this paper, we study a novel online sequential allocation algorithm for balls-into-bins based on a constant \emph{d-choice} strategy and prove a
constant gap bound both for $m=n$ and the heavily loaded case $m >> n$ along with for the general weighted balls and multi-dimensional scenario.
\section{The $IDEA$ Algorithm}
\label{sec:algo}
In this section we discuss the execution of the \emph{Improved D-choice with Estimated Average} ($IDEA$) algorithm. We consider
$n$ bins and $m$ balls which arrive in an online fashion. We initially assume that the balls are of uniform weight and are numbered according to
the order of their arrival. In hashing applications, numbering the balls by their arrival order plays no role in assisting
better or faster retrieval. Hence, this assumption does not decrease the complexity of the problem at hand. Later we also provide a blueprint
of the case when such a numbering of the balls is not allowed, and of the weighted-balls case with the weights of the balls drawn from an arbitrary
distribution with finite variance.
\begin{algorithm}[ht]
\begin{center}
\caption{IDEA Algorithm}
\label{algo:idea}
\begin{algorithmic}
\REQUIRE Number of bins ($n$), Number of balls ($m$) and Maximum iteration ($\gamma$)
\ENSURE Balanced Allocation of Balls-into-Bins
\medskip
\FORALL{bin $B_i$, $i \in [1,n]$}
\STATE Initialize the load, $L_{B_i}$ and estimated average, $\hat{A_{B_i}}$ to $0$
\ENDFOR
\FORALL{ball $b_j$, $j \in [1,m]$}
\STATE $loop \gets 0$
\WHILE{$loop \leq \gamma$}
\STATE Choose $d$ bins, $C = \{Bin_1, Bin_2, \cdots Bin_d\}$ i.u.r. from the $n$ bins
\IF{set $C$ contains at least one bin with negative or zero estimated gap, $\hat{Gap_{Bin_i}} = L_{Bin_i} - \hat{A_{Bin_i}}$}
\STATE Break \textbf{while}
\ENDIF
\STATE $loop \gets loop + 1$
\ENDWHILE
\STATE Place ball $b_j$ in the bin, $B \in C$ having the lowest estimated gap, $\hat{Gap_B}$
\STATE $L_B \gets L_B + 1$
\FORALL{bins, $Bin_i \in C$}
\IF{$\hat{A_{Bin_i}} > \lceil j/n \rceil$}
\STATE $flag \gets 1$
\ELSE
\STATE $flag \gets 0$
\ENDIF
\IF{$flag = 0$}
\STATE $\hat{A_{Bin_i}} \gets \hat{A_{Bin_i}} + 1/d$
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{center}
\end{algorithm}
Given that each bin has accurate knowledge of the average number of balls in the system, $m/n$, it is easy to distribute the balls so as
to obtain a perfectly balanced allocation. $IDEA$ operates on the above principle, where each bin independently calculates a fairly good
estimate of the current average number of balls in the system. Each bin is then loaded nearly equal to its estimated average
value. In the remainder of this section we show how each bin independently estimates its average which we later prove,
with a high probability, to be very close to the actual average, $m/n$. We also show that each bin is then loaded close to its
estimated average value, giving a maximum load of $\lceil m/n \rceil$ with a constant gap allocation w.h.p.
The $IDEA$ algorithm initially works as in the d-choice algorithm. On arrival of a ball $b_j$, it i.u.r. chooses $d$ bins ($d$ is constant) as
its possible candidates for placement. Each bin, $B_i, i \in [1,n]$ is characterised by two parameters: (i)~\emph{Current Load}, $L_i^j$,
and (ii)~\emph{Current Estimated Average}, $\hat{A_i^j}$. For each bin we define its \emph{estimated gap}, $\hat{Gap_i^j}$ as the difference
between its current load and its current estimated average. Formally, $\hat{Gap_i^j} = L_i^j - \hat{A_i^j}$.
The ball $b_j$ is then allocated to the bin having the lowest value of $\hat{Gap_i^j}$ among the $d$ chosen bins.
Given the definition of $Gap$ (in Section~\ref{sec:intro}) we would like to place the ball in a bin with \emph{negative}
or \emph{zero} $\hat{Gap}$. This would ensure that the loads in the bins be close to their estimated average values and thus lead
to a lower $Gap$. Hence, if in the $d$ choice a ball selects no bin with negative or zero $\hat{Gap_i}$, it re-chooses
its candidate $d$ bins. To boost the probability of a ball choosing a bin having such $\hat{Gap_i}$, this
re-choosing will be carried out $\gamma$ times, where $\gamma$ will later be shown to be approximately a constant.
The current estimated average for each of the $d$ bins finally selected by the ball is then incremented by
$1/d$. In the next paragraph we discuss the selection of such an increment value. We intuitively argue that for
each bin if $\hat{A_i}$ is finally close to the actual average ($m/n$) w.h.p., and its load $L_i$ is nearly equal to its estimated average,
the overall $Gap$ in the system will be minimized and the maximum load of a bin will be $\lceil m/n \rceil$. The pseudo-code of $IDEA$ algorithm
is shown in Algorithm~\ref{algo:idea}.
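For concreteness, the following sketch (in Python) simulates the placement rule described above for uniformly weighted balls. It is an illustrative simulation only; the default values of $d$ and of the retry budget \texttt{gamma} are arbitrary choices and are not part of the algorithm's specification or of its analysis.
\begin{verbatim}
import math
import random

def idea_allocate(m, n, d=2, gamma=3):
    """Toy simulation of the IDEA placement rule for uniformly weighted balls.

    load[i]    -- actual load of bin i
    est_avg[i] -- estimated average maintained by bin i
    Each ball redraws its d candidate bins up to gamma extra times until some
    candidate has estimated gap load - est_avg <= 0, then goes to the candidate
    with the smallest estimated gap.  Every candidate of the final draw raises
    its estimate by 1/d unless the estimate already exceeds ceil(j/n).
    """
    load = [0.0] * n
    est_avg = [0.0] * n
    for j in range(1, m + 1):
        cand = random.sample(range(n), d)
        for _ in range(gamma):
            if any(load[i] - est_avg[i] <= 0 for i in cand):
                break
            cand = random.sample(range(n), d)
        target = min(cand, key=lambda i: load[i] - est_avg[i])
        load[target] += 1
        for i in cand:
            if est_avg[i] <= math.ceil(j / n):  # mirrors the flag test in Algorithm 1
                est_avg[i] += 1.0 / d
    return load

# Example: max(idea_allocate(10**5, 10**3)) should stay within a small
# constant of the average load 10**5 / 10**3 = 100.
\end{verbatim}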
The probability that a bin is chosen by a ball in its $d$ choices is given by $d/n$. So when $n$ balls arrive, a bin will be chosen $d$ times
in expectation. For each such choice the estimated average of the bin is incremented by $1/d$ (Algorithm~\ref{algo:idea}). Hence, its
final estimated average will be $1$, which is indeed the actual average of the system. However, from Lemma~\ref{lem:ping} we observe that a bin might
be chosen up to $d \log n$ times w.h.p. Since we increase the estimated average by $1/d$,
the estimated average may increase beyond $1$ in such cases. Hence the estimated average of a bin may be greater than $1$ in two situations: \\
(i)~Not more than $n$ balls have arrived, but the bin has been chosen close to $d \log n$ times, or \\
(ii)~More than $n$ balls have arrived. \\
For case (i), the estimated average of the bin should still remain $1$, while in the other case, the estimated average should be
increased as usual. It is here that the numbering of the balls comes into effect. If the estimated average of a bin goes beyond $1$
and the next ball which selects this bin has a number less than $n$, the bin knows that it may be chosen $d \log n$ times and hence
refrains from increasing its estimated average until a ball with number more than $n$ selects it. Similarly when the estimated average
of a bin increases beyond $\alpha, \alpha \in \mathbb{N}$, it checks if the next ball selecting it has a number greater than $\alpha n$. Thus the balls
communicate their numbers as well while choosing the $d$ candidate bins.
However in the scenario where numbering of the balls is forbidden, to differentiate between the two cases, we use the sampling technique among the bins.
A bin with estimated average just above $\alpha$, in this case chooses $\log n$ bins i.u.r. and communicates with them for their estimated average.
If the average of the estimated averages of the sampled bins is less than $1$, the bin comprehends that case (i)
has happened, i.e., it is receiving more than $d$ balls out of $n$ balls, and thus refrains from increasing its estimated average. However, if the average of the estimated
averages is $1$, the bin decides that more than $\alpha n$ balls are arriving and increases its estimated
value as usual. The probability that the error in the sampled average is greater than $\epsilon$, a small constant, is at most $\frac{1}{n}$ with a \emph{constant}
number of samples when $m>n\log n$, and with $\log n$ sampled bins in the $m<n\log n$ scenario (\emph{sampling theorem}).
Hence, w.h.p. $1-\frac{2}{n}$, we obtain the right decision for each bin. In Appendix~\ref{sec:sampling} we discuss in detail
the proof of this claim, and also show that the total amount of such sampling is less than the communication incurred when $d=\log n$. More intelligent sampling
methods, such as \emph{Reservoir Sampling}~\cite{reservoir}, \emph{Subset-Sum Sampling}~\cite{subset1,subset}, or a combination of \emph{Sampling} and \emph{Sketching}~\cite{sketch1,sketch},
may be used to obtain better estimates. The study and effects of such methods are not discussed as a part of this paper.
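A minimal sketch of this sampling-based check, under the assumption that a bin can query the current estimates of a few randomly chosen peers, is given below; the sample size and the threshold are illustrative stand-ins rather than the exact values used in the analysis.
\begin{verbatim}
import random

def should_raise_estimate(current_estimate, peer_estimates, sample_size=30):
    """Sampling-based stand-in for the ball-numbering check (illustrative only).

    A bin whose estimate has just crossed the integer level alpha samples a few
    other bins.  If the sampled mean is still clearly below alpha, the bin
    assumes it was merely picked unusually often within the current batch of n
    balls and holds its estimate; otherwise it keeps incrementing as usual.
    """
    alpha = int(current_estimate)
    sample = random.sample(peer_estimates, min(sample_size, len(peer_estimates)))
    return sum(sample) / len(sample) >= alpha
\end{verbatim}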
Hence, we find that $IDEA$ dynamically adapts its estimated average to be closer to the actual average of the system. In either case, the estimated average of a bin is increased by at most $1$ for every
$n$ balls.
\section{Theoretical Framework}
\label{sec:proof}
In this section, we provide a theoretical proof of the constant gap performance of the $IDEA$ algorithm. First, we
bound the number of balls that may select each bin. We then establish that each
ball in the $IDEA$ algorithm chooses at least one bin having negative $\hat{Gap}$ with a high probability, which makes the
load of each bin converge to its estimated average value. Finally, we bound the $Gap$ of the system to a
constant value w.h.p. We assume $m$ balls to arrive in an online fashion and there are $n$ bins.
\begin{lemma}
\label{lem:ping}
If each ball chooses $d$ bins i.u.r. out of $n$ bins, each bin is chosen by $\frac{md}{n}$ balls on expectation,
and by at most $\frac{md}{n} \log n$ balls with high probability.
\end{lemma}
\begin{proof}
Define $Y_1, Y_2, \cdots Y_m$ to be indicator random variables corresponding to balls $b_1, b_2, \cdots, b_m$ respectively.
Let $Y_i = 1$ represent the event that the ball $b_i$ chose bin $B$ as one of its $d$ candidate bins, otherwise $Y_i = 0$, $\forall
i \in [1,m]$. Since the balls choose $d$ bins i.u.r., the probability that bin $B$ is chosen among the $d$ bins, or $\Pr (Y_i) = 1$,
is given by $d/n$. Let $X$ be a random variable depicting the number of balls that chose $B$ among its $d$ candidate bins.
Hence, $X = \sum_{i=1}^m Y_i$. The expected value of $X$ is,
%
{\small{
\begin{align}
\label{eq:exp_X}
E[X] = E[\sum_{i=1}^m Y_i] = \sum_{i=1}^m E[Y_i] = \sum_{i=1}^m \frac{d}{n} = \frac{md}{n} \qquad \qquad \text{[By Linearity of Expectation]}
\end{align}
}}
%
Applying Chernoff's bound on $X$ we obtain,
%
{\small{
\begin{align}
\label{eq:X_bound}
&P(X > (1+\delta)E[X] ) < \left(\frac{e^\delta}{\left(1+\delta \right)^{\left(1+\delta \right)}}\right)^{E[X]} \leq \frac{e^\delta}{\left(1+\delta \right)^{\left(1+\delta \right)}} \qquad \text{[since $E[X] = md/n \geq 1$]} \nonumber \\
&\therefore P(X > (1+\delta)\frac{md}{n} ) < \frac{e^\delta}{\left(1+\delta \right)^{\left(1+\delta \right)}} \nonumber \\
&\text{Substituting $\delta = \log n - 1$ we have,} \nonumber \\
&P(X > \frac{md}{n}\log n ) < \frac{e^{\log n -1}}{(\log n) ^{\log n}} = \frac{n}{e (\log n)^{\log n}} \\
&\text{Let $y = (\log n) ^{\log n}$. Hence, $\log y = \log n \log \log n$. We have,} \nonumber \\
&\Rightarrow \log (y/n) = \log n \left(\log \log n - 1\right) = \log n \left(\log \log n - \log \log e^e\right) \nonumber \\
&\text{For large values of $n$, $\log \log (n/e^e) \geq 1$, giving $\log (y/n) \geq \log n$. Therefore, we have $y > n^2$.} \nonumber \\
&\text{Substituting in Eq.~\eqref{eq:X_bound},} \nonumber \\
\label{eq:X_prob}
&P(X > \frac{md}{n}\log n ) < \frac{1}{en}
\end{align}
}}
%
Hence, bin $B$ is chosen by at most $\frac{md}{n} \log n$ balls with a high probability of $1 - \frac{1}{en}.$
\end{proof}
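As a quick empirical sanity check of this bound (not part of the proof), one can count in simulation how many of the $m$ balls include a fixed bin among their $d$ candidates; the snippet below is an illustrative sketch with arbitrary parameters.
\begin{verbatim}
import random

def selections_of_bin(m, n, d=2, bin_id=0):
    """Count how many of m balls include bin_id among their d i.u.r. candidates.

    The count is Binomial(m, d/n): its mean is m*d/n, and Lemma 1 says it stays
    below (m*d/n) * log n with high probability.
    """
    return sum(1 for _ in range(m) if bin_id in random.sample(range(n), d))
\end{verbatim}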
\begin{lemma}
\label{lem:av}
At any iteration, the estimated average of each bin is approximately equal to the current average with high probability.
\end{lemma}
\begin{proof}
We assume here that $Z$ balls have already arrived and have been placed among the $n$ bins. The number of balls that chose bin $B$ among its $d$ candidates is
$\frac{Zd}{n}$ on expectation, since each bin can be chosen by a ball with a probability of $\frac{d}{n}$. The number of such balls is also bounded
by $\frac{Zd}{n}\log n$ with high probability (by Lemma~\ref{lem:ping}). However, a bin does not increment its estimated average by more than $d$
times for every $n$ balls. For each choice the bin $B$ increases its estimated average by $\frac{1}{d}$. Hence the current value of $\hat{A_B}$ is given by,
{\small{
\begin{align}
\hat{A_B} = \frac{Zd}{n} \cdot \frac{1}{d} = \frac{Z}{n} \text{ , which is the current average.} \nonumber
\end{align}
}}
Hence, the estimated average $\hat{A}$ of any bin is nearly equal to the actual average w.h.p.
\end{proof}
\begin{observation}
\label{obs:var}
The variance of the estimated average of a bin $B$ for $n$ balls is,
{\small{
\begin{align}
Var[\hat{A_B}] &= Var[\frac{X}{d}] = Var[\frac{1}{d}.\sum_{i=1}^n Y_i] = \frac{1}{d^2} \sum_{i=1}^n Var[Y_i] \nonumber \\
&= \frac{1}{d^2}.n\frac{d}{n}(1-\frac{d}{n}) = \frac{1}{d} - \frac{1}{n} \qquad \qquad \qquad \text{[From Lemma~\ref{lem:ping}]} \nonumber
\end{align}
}}
\end{observation}
\begin{lemma}
\label{lem:gap}
The amortized sum of the estimated gap, $\hat{Gap}$ over all the bins is zero after every $n$ balls.
\end{lemma}
\begin{proof}
Each ball chooses $d$ candidate bins i.u.r. and is finally allocated to the bin having the least estimated gap. Hence for all
the $d$ chosen bins, their estimated average is increased by $1/d$. The bin which receives the ball witnesses an increase in its actual
load by $1$. Hence, overall its estimated gap increases by $1-1/d$. However, for the remaining $d-1$ bins their loads remain the same,
and thus their estimated gap decreases by $1/d$. Hence the overall change in estimated gap over the $d$ chosen bins is $1-1/d + (d-1)(-1/d) = 0$.
Initially, since the sum of the estimated gaps of the bins was $0$, the lemma holds.
Considering a batch of $n$ balls arriving in the system, a bin may be selected more than $d$ times (Lemma~\ref{lem:ping}). In such a case,
the bin samples other bins for their current estimated average value, and depending on it may or may not increase its estimated average
as discussed in Section~\ref{sec:algo}.
As such the change in the overall estimated gaps in this round will not add up to $0$. Such a scenario occurs when a bin is selected more
than $d$ times in the batch of $n$ balls. Such a bin may not increase its estimated average, and $IDEA$ experiences a positive change in
the overall estimated gap of the system for such a round.
However, it can be observed that for a batch of $n$ balls, the total number of bins that are selected by the balls is exactly $nd$.
Since we consider a bin to have been selected more than $d$ times, there exists at least one bin which was selected less than $d$ times.
Assume a \emph{bank} to exist, which loans a unit credit to the bin, selected more than $d$ times for $n$ balls, per extra selection.
If such a bin is selected $d+c$ times over a period of $n$ incoming balls, the total number of credit units in the bank is exactly $c$. However,
since the number of selections is fixed, the total number of \emph{holes} in the system will also be exactly equal to $c$. A \emph{hole} in a bin
refers to the difference of $d$ and the number of times the bin has been selected by $n$ balls, for bins selected less than $d$ times.
Each such bin can be considered to have extra unit credit points per hole, which it returns to the bank after $n$ balls have been
allocated to the system. Since the number of credits in the bank is exactly equal to the number of extra credits held by the bins in the
system, after $n$ balls the total credit points of the bank will be $0$.
It can easily be observed that the total credit in the system is always a non-negative quantity. Since the bins are chosen by the balls
i.u.r., all the bins are selected nearly the same number of times over a period of $n$ balls, so no bin tends to accumulate a large quantity of
extra credits that it must keep returning to the bank. This factor helps to maintain the estimated average of each bin close to the actual
average of the system. Hence, combining both the settings, we prove that on an \emph{amortized} notion, the sum of the estimated gap in all
the bins is $0$ after every $n$ balls.
\end{proof}
\begin{corollary}
\label{cor:gap_zero}
The sum of the \emph{estimated gap} over all bins is zero for an arbitrarily small (non-constant) number of balls allocated in the system.
\end{corollary}
\begin{proof}
Let the number of balls being allocated in the system be a function of $n$, $f(n)$. Given the constraint that the value of $f(n)$ is not a constant,
the arguments of Lemma~\ref{lem:gap} still hold true. Consider $f(n) = n^{\epsilon}$, where $\epsilon$ is arbitrarily small, respecting the
constraint that $f(n)$ is not a constant. Thus, the sum of the estimated gap in the system is $0$ after $f(n)$ balls have been allocated to the
bins.
\end{proof}
\begin{lemma}
\label{lem:negbins}
The number of bins having a \emph{zero} or \emph{negative} estimated gap, $\hat{Gap}$ is $\Theta(n)$.
\end{lemma}
\begin{proof}
In Lemma~\ref{lem:gap} and Cor.~\ref{cor:gap_zero}, we show that the sum of the estimated gap of the bins is $0$ even when
an arbitrarily small number of balls is allocated to the bins. As such, the number of bins with positive estimated gap cannot
increase by more than $n^{\epsilon}$.
Let there be $\alpha$ bins with positive $\hat{Gap}$, $\beta$ bins with negative estimated gap, and $\theta$ bins having $0$
estimated gap. Hence, $\alpha + \beta + \theta = n$. We would like to establish a lower bound on $\beta +\theta$.
In order to have minimum number of bins with negative or zero $\hat{Gap}$, the value of the gap should be minimum for bins with
a positive gap and maximum for bins with a negative gap. The minimum positive estimated gap for a bin is
$Z(1-\frac{d-1}{d})$ when $Z(d-1)$ balls have arrived in the system, of which only $Z$ balls have been committed into the bin.
The maximum negative estimated average that a bin may have in this case is $-\frac{Z(d-1)}{d}$. Hence,
{\small{
\begin{align}
&\alpha.(Z(1-\frac{d-1}{d})) + \beta.(-\frac{Z(d-1)}{d}) + \theta .0 = 0 \qquad \qquad \qquad \text{[From Lemma~\ref{lem:gap}]} \nonumber \\
&\therefore \alpha = \beta(d-1) \nonumber
\end{align}
}}
As $\alpha + \beta + \theta = n$, we have $d\beta + \theta = n$. Hence, the number of bins with zero or negative $\hat{Gap}$ is $\Theta(n)$.
In each round of $f(n)$ balls, the number of bins with zero or negative estimated gap may decrease by $f(n)$. Consider that in round $k$, the number
of bins with zero or negative gap is $N(c_k)$. In the $(k+1)^{th}$ round, the number of such bins may become $N(c_k) - f(n)$. However, as $f(n)$ is
considered to be very small, in the order notation the number of such bins still remains $\Theta(n)$. The amortized argument of the above lemma and its
corresponding corollary rules out any additive influence of $f(n)$ accumulating across rounds.
\end{proof}
\begin{lemma}
\label{lem:negchoice}
Each ball chooses at least one bin having negative estimated gap among its $d$ choices w.h.p. in $\gamma$ rounds.
\end{lemma}
\begin{proof}
Each ball selects independently and uniformly at random $d$ candidate bins for its placement among
the $n$ bins. Hence the probability that bin $B_i$ is chosen as a candidate for ball $b_j$ is,
$P_i^j = \binom{n-1}{d-1} / \binom{n}{d} = \frac{d}{n}$. Let there be $c$ bins with zero or negative $\hat{Gap}$. The
probability that none of these bins is selected as a candidate by a ball $ = \binom{n-c}{d} /
\binom{n}{d}$. The ball may re-select its candidates at most $\gamma$ times. Therefore, the probability
that none of the $c$ bins is selected in any of the $\gamma$ tries $=\left(\binom{n-c}{d} /
\binom{n}{d}\right)^\gamma$. Hence the probability that at least one bin with negative $\hat{Gap}$
is selected in the $\gamma$ iterations is given by,
{\small{
\begin{align}
\label{eq:one_sel}
P(\text{at least one selected}) = 1 - \left(\frac{\binom{n-c}{d}}{\binom{n}{d}}\right)^\gamma \approx 1 - \frac{1}{2^{d\gamma}} \qquad
\text{[Assuming $c=n/2$ from Lemma~\ref{lem:negbins}]}
\end{align}
}}
For $d=2$ and $\gamma=2$, we obtain a probability of around $0.94$. However, with $\gamma = \log n$, the probability becomes nearly $1-\frac{1}{n}$. Further, we can show that an approximately constant number of retries suffices.
Let the number of bins with positive gap at any point of time be $n^{1-\epsilon}$, where $0 \leq \epsilon \leq 1$. The probability $P_{bneg}$ with which a bin
with a zero or negative gap is chosen in $\gamma$ iterations is given by,
{\small{
\begin{align}
&P_{bneg} = 1 - \left(\frac{\binom{n^{1-\epsilon}}{d}}{\binom{n}{d}}\right)^{\gamma} \nonumber
\end{align}
}}
For a zero or a negative bin to be chosen with a high probability, we need $P_{bneg} \ge 1 - \frac{1}{n^{\phi}}$, where $\phi > 0$. Hence we require $1 - \left(\frac{n^{1-\epsilon}}{n}\right)^{d\gamma} > 1 - \frac{1}{n^{\phi}}$, which gives $\gamma > \frac{\phi}{d\epsilon}$.
Hence, at least one such bin is chosen by each ball in approximately
constant $\gamma$ re-polls or rounds per ball w.h.p.
\end{proof}
In the next lemma, we show that in practice only a couple of retries are needed to get a bin with zero or negative estimated gap.
\begin{lemma}
\label{lem:constgamma}
The expected number of rounds, $\gamma$ per ball to find a bin with zero or negative estimated gap is constant.
\end{lemma}
\begin{proof}
Let $p_i$ denote the probability that we find a zero or a negative bin at iteration $i$. Therefore, we have
{\small{
\begin{align}
p_i &= \left(\prod_{1}^{i-1} P_{pos}\right)\cdot P_{neg} = \prod_{1}^{i-1} \frac{1}{2^d} \cdot \left(1-\frac{1}{2^d}\right) = \frac{2^d - 1}{2^{id}} \nonumber
\end{align}
}}
where $P_{pos}$ is the probability that all $d$ bins chosen in an iteration have a positive estimated gap and $P_{neg}$ is the probability that at least one chosen bin has a zero or negative gap.
The expected number of rounds per ball, $\gamma$ to find a zero or a negative gap is given by,
{\small{
\begin{align}
\label{eq:exp}
&E[\gamma] = \sum ip_{i} = \left(2^d-1\right)\sum_{i=1}^{\frac{\log{n}}{d}}\frac{i}{2^{id}}
\end{align}
Let,
\begin{align}
\label{eq:si}
&S(i) = \sum_{i=1}^{\frac{\log{n}}{d}}\frac{i}{2^{id}} \\
\label{eq:si+1}
&\therefore \frac{S(i)}{2^d} = \sum_{i=1}^{\frac{\log{n}}{d}}\frac{i}{2^{(i+1)d}}
\end{align}
Subtracting Eq.~\eqref{eq:si+1} from Eq.~\eqref{eq:si}, we have
\begin{align}
\label{eq:sum}
&\left(1-\frac{1}{2^d}\right)S(i) = \frac{1}{2^d} - \frac{\log{n}}{d2^{1+\log{n}}} + \frac{1}{2^d\left(2^d-1\right)} \nonumber \\
&\therefore S(i) \approx \frac{2^d}{\left(2^d-1\right)^2}
\end{align}
Substituting Eq.~\eqref{eq:sum} in Eq.~\eqref{eq:exp}, we have
\begin{align}
E[\gamma] &\approx 1 + \frac{1}{2^d-1}\\ \nonumber
\Rightarrow E[\gamma] &< 2
\end{align}
}}
Given that the number of bins having negative or zero estimated gap always remains $\Theta(n)$, the number of retries per ball remains constant
throughout the execution of the $IDEA$ algorithm.
\end{proof}
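As a quick numerical illustration of Eq.~\eqref{eq:exp} and the closed form above (this short Python sketch is an illustration we add here, not part of the analysis; the logarithm is taken base 2), the truncated sum can be compared directly with $1+\frac{1}{2^d-1}$:
\begin{verbatim}
# Minimal check (illustration): truncated sum in Eq. (exp) vs. 1 + 1/(2^d - 1).
import math

def expected_retries(d, n):
    upper = max(1, int(math.log2(n) / d))
    return (2**d - 1) * sum(i / 2**(i * d) for i in range(1, upper + 1))

for d in (2, 3, 4):
    print(d, round(expected_retries(d, n=10**6), 4),
          round(1 + 1 / (2**d - 1), 4))
\end{verbatim}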
\begin{lemma}
\label{lem:load}
The load of each bin tends to its estimated average.
\end{lemma}
\begin{proof}
$IDEA$ places each ball into a bin with zero or negative $\hat{Gap}$ with high probability $1 - \frac{1}{n^\phi}$ (Lemma~\ref{lem:negchoice}) using $\gamma$ retries. When a ball is placed in a bin, its $\hat{Gap}$ increases. Thus, the probability that this bin will again get a ball decreases. On the other hand, the bins that had been chosen but in which the ball was not placed have a decrease in their estimated gap. Hence, the probability that a ball is placed in them increases. So, a bin with a negative or zero $\hat{Gap}$ has a higher probability of a ball being allocated to it, whereby its estimated gap tends towards $0$ (in the case of bins with negative estimated gap). On the other hand, bins with positive estimated gap receive a ball with low probability even when chosen as candidates, and their estimated gap decreases towards $0$. Hence, we observe that the estimated gap of any bin tends towards $0$. Since the estimated gap is the difference between the load and the estimated average of a bin, and the gap tends to zero, the load of the bins becomes nearly equal to their estimated average w.h.p.
\end{proof}
\begin{theorem}
\label{th:constgap}
The maximum load in any bin is $\lceil m/n \rceil + \Theta(1)$ w.h.p using the $IDEA$ allocation algorithm for the sequential, on-line and unweighted balls-into-bins problem.
\end{theorem}
\begin{proof}
Using the above lemmas we observe that the estimated average of each bin finally becomes $\lceil m/n \rceil$ and the load in each bin is equal to its estimated
average w.h.p. Hence the maximum load in any bin is $\lceil m/n \rceil + \Theta(1)$ w.h.p.
\end{proof}
\begin{corollary}
\label{cor:load}
The $IDEA$ algorithm provides a perfectly balanced allocation with constant gap.
\end{corollary}
\begin{proof}
Since the maximum loaded bin has a load of $\lceil m/n \rceil + \Theta(1)$ w.h.p. (Theorem~\ref{th:constgap}), the \emph{Gap} is of $\Theta(1)$ providing a perfectly balanced allocation
for the balls-into-bins problem with constant gap.
\end{proof}
\section{Discussion}
\label{sec:disc}
We note that the \textit{Greedy[d]} algorithm can also retry $\gamma$ times to find a bin with an even lower total number of balls than what it could find in a single round. Still, the distribution of the balls in the bins will be different from that of the $IDEA$ algorithm, because $IDEA$ explicitly uses the \textit{estimated gap} to decide where the ball is placed. The key question is whether the Greedy[d] algorithm can give a constant gap; the answer is negative for a single retry because of the well-known lower bound of $O(\ln\ln(n))$~\cite{azar}, while for multiple retries $\gamma$ has to be $\Theta(\log(n))$~\cite{soda08} to achieve a constant gap. $IDEA$, however, requires only a constant ($< 2$) number of retries in expectation (Lemma~\ref{lem:constgamma}) to achieve the constant gap. Further, it requires $\gamma = \frac{\phi}{d\epsilon}$ retries with high probability (Lemma~\ref{lem:negchoice}).
A bin $B$ is chosen by $d$ balls among $n$ balls in expectation. However, the bin may be chosen $\alpha d$ times, $0 \leq \alpha \leq 1$, among the first $\rho$ balls that arrive. As such, the $Greedy[d]$ choice algorithm
will place the balls in empty or less loaded bins if available. Among the remaining balls, $B$ is chosen $(1-\alpha)d$ times. Now, for large values of $\alpha$, even if all these balls are placed in it, $B$ will have
a load far less than the average of the system, so the $Gap$ increases. However, for $IDEA$ with large $\alpha$ values, the estimated average for $B$ will be large and hence its estimated gap will be significantly lower than
that of the other bins. So, it has a higher probability of a ball being allocated to it. Thus, when the remaining balls arrive and a small fraction of them are placed in $B$, its load will still be closer to the actual average
as compared to the d-choice algorithm. This sensitivity towards skewness in the random choices also enables $IDEA$ to arrive at a better allocation than the d-choice algorithm. A simulation sketch illustrating this comparison is given below.
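To make this comparison concrete, the following Python sketch (an illustration we add, not taken from the experimental setup of this paper) simulates both placement rules on a small instance: the $Greedy[d]$ rule places each ball in the least loaded candidate bin, while the $IDEA$-style rule places it in the candidate with the lowest estimated gap (load minus estimated average) and increments the estimated average of every candidate bin by $1/d$, as described in the text. The $\gamma$-retry loop is omitted for brevity, and the instance sizes are arbitrary.
\begin{verbatim}
# Illustrative simulation (not from the paper): Greedy[d] vs. an IDEA-style
# rule based on the estimated gap (load - estimated average).
import random

def simulate(m, n, d, use_idea, seed=0):
    rng = random.Random(seed)
    load = [0] * n
    est_avg = [0.0] * n                    # used only by the IDEA-style rule
    for _ in range(m):
        choices = rng.sample(range(n), d)
        if use_idea:
            target = min(choices, key=lambda b: load[b] - est_avg[b])
            for b in choices:              # every candidate learns of the ball
                est_avg[b] += 1.0 / d
        else:
            target = min(choices, key=lambda b: load[b])
        load[target] += 1
    return max(load) - m / n               # the Gap

m, n, d = 100000, 1000, 2
print("Greedy[d] gap :", simulate(m, n, d, use_idea=False))
print("IDEA-style gap:", simulate(m, n, d, use_idea=True))
\end{verbatim}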
\section{Extended Framework}
\subsection{Weighted Case}
\label{subsec:ext_weight}
In this section we consider the weighted case of the balls-into-bins problem where the balls have weights drawn from a distribution $\chi$ with an expected weight $W^{*}$, such that the weight of any ball $W$ has a finite variance and can be bounded by $(W^*-k) \leq W \leq (W^*+k)$, where $k$ is a constant. We apply the $IDEA$ algorithm and show that the gap is also constant w.h.p. in such scenarios.
\begin{theorem}
\label{th:wconstgap}
The maximum load in any bin is $W^*(\lceil m/n \rceil + \Theta(1))$ w.h.p using the $IDEA$ allocation algorithm for the sequential, on-line and weighted balls-into-bins problem.
\end{theorem}
\begin{proof}
Reworking the lemmas stated in Section~\ref{sec:proof}, we observe that the estimated average of each bin converges to $W^*\lceil m/n \rceil$ and that the load in each bin tends to its estimated
average w.h.p. Hence the maximum load in any bin is given by $W^*(\lceil m/n \rceil + \Theta(1))$ w.h.p. The complete proofs of the lemmas for the weighted case are provided in Appendix~\ref{sec:wproof}.
\end{proof}
\begin{corollary}
\label{cor:wload}
The $IDEA$ algorithm provides a perfectly balanced weighted allocation with constant gap even for the general weighted case of the Balls-into-bins problem.
\end{corollary}
\begin{proof}
From Theorem~\ref{th:wconstgap} we observe that the maximum load is $W^*(\lceil m/n \rceil + \Theta(1))$ w.h.p. Hence $IDEA$ provides a perfectly balanced allocation for the
weighted case w.h.p., having a constant gap of $W^*\Theta(1)$.
\end{proof}
\subsection{Multi-Dimensional Case}
\label{subsec:ext_multi}
In this section, we consider the multidimensional (md), variant of the balls and bins problem. One multidimensional
variant, proposed by~\cite{multi} is as follows: Consider throwing $m$ balls into $n$ bins, where each ball is a
uniform D-dimensional (0-1) vector of weight $f$. Here, each ball has exactly $f$ non-zero entries chosen uniformly
among all $\binom{D}{f}$ possibilities. The average load in each dimension for each bin is given as $mf/nD$.
Let $l(a, b)$ be the load in the dimension $a$ for the $b^{th}$ bin. The gap in a dimension (across the bins) is given by
$gap(a) = \max_b l(a, b) - avg(a)$, where $avg(a)$ is the average load in the dimension $a$. The maximum gap across all
the dimensions, $\max_a gap(a)$, then determines the load balance across all the bins and the dimensions. Thus, for the
multidimensional balanced allocation problem, the objective is to minimize the maximum gap (across any dimension).
We refer to the multidimensional ball as md-ball and the multidimensional bin as md-bin.
In another variation of multidimensional balanced allocation the constraint of uniform distribution for populated
entries is removed. Here again, each ball is a D dimensional 0-1 vector and each ball has exactly $f$ populated dimensions,
but these populated dimensions can have an arbitrary distribution. In the third variation that is most general of
the three, the number of populated dimensions, $f$, may be different across the balls, where $f$ then is a random variable
with an appropriate distribution.
Each md-ball has $f$ populated dimensions,
where $f$ could be constant across the balls or a random variable with a given distribution. Let $s_i(t)$ denote the sum
of the loads (minus corresponding dimension averages) across all $D$ dimensions for the bin $i$ at time $t$, expressed as
$s_i(t) = \sum_{d=1}^D x^d_i$. This reduces the problem to that of the \emph{scalar weighted case}. The $IDEA$ algorithm works based
on the sum of the dimensions for each bin. Also, for each choice of the bin, its estimated average is now incremented by $\frac{f}{d}$.
\begin{theorem}
\label{th:multi_constgap}
For the multi-dimensional scenario, the $IDEA$ algorithm provides a constant gap for uniform distribution of the $f$
populated dimensions for each ball with $m=n$.
\end{theorem}
\begin{proof}
Following the analysis in Section~\ref{subsec:ext_weight}, the $Gap$ in the system is bounded by $\Theta(1)$. Hence, the difference of the number of
balls in the maximum bin and the actual average of the system is constant. For $m=n$, the average is $1$ and so the number of
balls in the maximum bin is also a constant. Given a uniform distribution of the $f$ populated dimensions of each ball over $D$,
the $Gap$ is bounded by $\Theta(1)$.
\end{proof}
\subsection{Parallel Case}
\label{subsec:parallel}
In this section we describe the algorithmic protocol to extend $IDEA$ for the parallel balls-into-bins scenario. In the parallel scenario multiple
balls are allocated to bins simultaneously in a single round. The remaining balls are considered for allocation in the next round. This process is
repeated until all the balls are allocated. Later in this section we will show that the proposed protocol ensures that the algorithm completes
in a finite number of rounds. We consider that in any round, $r$, a bin may accept only one ball.
Let $x$ balls be simultaneously allocated in round $r$. We observe that the outcome of round $r$ can be obtained by sequentially allocating $x$
balls by $IDEA$. Hence any round in the parallel case can be replaced by a series of sequential processes of $IDEA$. Hence the gap remains
\emph{constant} even in the parallel case with $IDEA$.
\begin{algorithm}[ht]
\begin{center}
\caption{Communication Protocol}
\label{algo:parallel}
\begin{algorithmic}
\REQUIRE Number of bins ($n$), Number of choices per ball ($d$)
\ENSURE Parallel execution of $IDEA$
\medskip
\STATE Step 1. Each ball, $B_i$ chooses $d$ bins as candidates for allocation, and stores the choices as $M_i$.
\STATE Step 2. Ball $B_i$ queries its chosen bins ($M_i$) for the \emph{estimated gap}.
\STATE Step 3. The queried bins return their estimated gaps to the corresponding balls.
\STATE Step 4. Ball $B_i$ selects the bin $b_i$ with the lowest estimated gap among its chosen bins and sends a confirmation message, $C1_i$.
\STATE Step 5. A bin $b_j$ receiving a $C1_i$ message confirms allocation of ball $B_i$ and sends it a message $C2_{ij}$. If a bin receives multiple $C1_i$ messages, it arbitrarily selects one of them.
\STATE Step 6. Ball $B_i$ after receiving $C2_{ij}$ sends message $INC$ to all its $d$ chosen bins ($M_i$) and commits to bin $b_j$.
\STATE Step 7. All the bins in $M_i$ receiving $INC$ message increments their estimated average by $\frac{1}{d}$.
\end{algorithmic}
\end{center}
\end{algorithm}
The communication protocol, as given in Algorithm~\ref{algo:parallel} ensures that there is no deadlock in the system and that each bin accepts
at most one ball in each round. Since the allocation of a ball into a bin is done by \emph{two-way handshaking} between the
ball and the bin, a bin may receive multiple confirmations from the balls but will accept only one of them, and since each ball makes a single
choice of the bin where it prefers to be allocated, deadlock in the system is avoided. The update of the \emph{estimated average} of the bins receiving
the $INC$ message is similar to that of the sequential $IDEA$ with the use of sampling.
We now prove that the algorithm terminates in finite number of rounds to guarantee a constant gap.
\begin{theorem}
\label{th:parallel}
$IDEA$ in the parallel scenario using the communication protocol described in Algorithm~\ref{algo:parallel} provides a constant gap in
expected $O(\log \log n)$ rounds.
\end{theorem}
\begin{proof}
Since each round of the parallel case of $IDEA$ can be simulated with multiple sequential processes of it, $IDEA$ along with the communication
protocol described above provides a constant gap.
We observe that the execution of $IDEA$ is identical to that of the ordinary d-choice algorithm except for the parameter on which the allocations
of the balls are done. Hence Theorem 21 of~\cite{parallel} stating that the \emph{Threshold(1)} for parallel cases terminates after at most $\log \log n + O(1)$
steps, holds in our case as well. However, each ball will select a bin with zero or negative estimated gap in $\gamma$ retries. Hence the total number
of rounds taken by $IDEA$ in the parallel setting will be given by $\gamma \log \log n$. The expected value of $\gamma$ is a constant (Lemma~\ref{lem:constgamma}).
Hence the expected number of rounds for the algorithm to terminate is given by $O(\log \log n)$.
\end{proof}
It can easily be observed that this protocol still provides a constant gap even for the heavily loaded case when $m>>n$.
\section{Conclusions}
\label{sec:conc}
This paper proposes the \emph{Improved D-choice with Estimated Average}, $IDEA$ algorithm which w.h.p. provides a perfectly balanced allocation for the sequential, online
and uniform weighted balls-into-bins problem. We propose a better metric for greedy placement of the balls using the estimated average of the system for each bin.
We show that for a constant $d$ choice and expected constant number of rounds per ball, the maximum loaded
bin in $IDEA$ is $\lceil m/n \rceil + \Theta(1)$ w.h.p. This result holds for the $m=n$ case as well as the heavily loaded scenario where $m>>n$. We also extend the
solution to the general weighted case (with $m>>n$) to show similar results for balls with weights taken from an arbitrary distribution with finite variance and
for the multi-dimensional case with $m=n$ for uniform distribution of $f$ populated dimensions over the $D$ total dimensions. We also propose a communication
protocol which in conjunction with $IDEA$ provides a constant gap with expected $O(\log \log n)$ rounds.
\pagebreak
{\small{
\bibliographystyle{abbrv}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,226 |
Q: The application has stopped unexpectedly. Please try again
The application will run fine unless I add the following code.
public class TFView extends View{
private Paint p;
private ArrayList<Orb> O1;
/** Called when the activity is first created. */
public TFView (Context context){
super(context);
O1=new ArrayList<Orb>();
p = new Paint();
int Orby=0;
for(int j=0; j<3; j++){
int Orbx= 0;
for(int i=0; i<4; i++)
{
O1.add(new Orb(Orbx,Orby,true));
Orbx+= 40;
}
Orby+= 40;
}
}
@Override
protected void onDraw(Canvas canvas) {
for(Orb t:O1){
canvas.drawOval(t.drawOrb(), p);
}
p.setColor(Color.BLUE);
canvas.drawText(String.valueOf(main.getx()), 50, 50, p);
canvas.drawText(String.valueOf(main.gety()), 50, 80, p);
canvas.drawText(String.valueOf(O1.size()), 50, 110, p);
try {
Thread.sleep(30);
} catch (InterruptedException e) { }
invalidate();
}
}
So, I believe that my problem has something to do with the android not accepting my Arraylist. I have run a very similar code to this on my computer, but something about Android doesn't seem to want to accept it. Here is my Orb class that is used in my Arraylist.
import android.graphics.RectF;
public class Orb {
static int orbx;
static int orby;
public int size;
static RectF button;
boolean display;
public Orb(){
orbx=0;
orby=0;
display=false;
}
public Orb(int x, int y, boolean d){
orbx=x;
orby=y;
display=true;
}
public RectF drawOrb(){
button.set(orbx, orby,orbx+30, orby+30);
return button;
}
}
So, why doesn't the android accept my Arraylist drawings? Thank you for your help.
Edit: So, I fixed my problem with button being null with the following code
public RectF drawOrb(){
button.set(orbx, orby,orbx+30, orby+30);
if(button!=null)
return button;
else
return b;
}
Now I receive multiple null pointer exceptions that look like this
[2011-04-23 22:31:25 - ddms]null
java.lang.NullPointerException
at com.android.ddmlib.JdwpPacket.writeAndConsume(JdwpPacket.java:213)
at com.android.ddmlib.Client.sendAndConsume(Client.java:574)
at com.android.ddmlib.HandleHello.sendHELO(HandleHello.java:142)
at com.android.ddmlib.HandleHello.sendHelloCommands(HandleHello.java:65)
at com.android.ddmlib.Client.getJdwpPacket(Client.java:671)
at com.android.ddmlib.MonitorThread.processClientActivity(MonitorThread.java:317)
at com.android.ddmlib.MonitorThread.run(MonitorThread.java:263)
Thanks again
A: E/AndroidRuntime( 312): java.lang.NullPointerException
E/AndroidRuntime( 312): at com.djrobotfreak.Think_Fast.Orb.drawOrb(Orb.java:26)
You have a NullPointerException on line 26 of Orb.java. Based on your listings, that would appear to be:
button.set(orbx, orby,orbx+30, orby+30);
If so, button is null. Looking at the Orb class, the static RectF button field is declared but never assigned, so the first call to button.set(...) inside drawOrb() throws a NullPointerException; the null check added in the edit runs after that call, so it cannot help. Initializing the field before calling set() (for example, assigning it a new RectF() in the constructor) avoids the crash.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,332 |
Canti delle Terre divise is a trilogy inspired by the Divine Comedy, written by Francesco Gungui and composed of the three episodes Inferno, Purgatorio and Paradiso.
Plot
Inferno
The story is set in a dystopian universe where people are sent to the Inferno if they do not conform to the rules of the Oligarchy, the governing system of Europa. Alec lives right there with his mother and his sister, until he is transferred to Paradiso, the elite district of the World, to work in the house of an oligarch. There he meets Maj Shobert, the daughter of his employer, with whom he falls in love. Anton Shobert is accused of treason against the Oligarchy and is therefore condemned to the Inferno together with his daughter; Alec, realizing that he has fallen in love with the young woman and having found documents of his father that would reveal the way out of the Inferno, interrupts a government parade, so that he too is condemned. The two youths, who end up in the same circle, face various dangers inside the national prison, after which they meet the Amazons, condemned girls who fight for territory, as well as Guido and Jorgos, and find Maureen, Alec's childhood friend, again. The adventure ends with Maj and Alec setting out towards the exit from the Inferno.
Purgatorio
Paradiso
Characters
Inferno
Alec: a young boy living in Europa. He is sent to work in Paradiso, the wealthiest district of all Europa.
Maj Shobert: a girl who lives in Paradiso; she is the daughter of the oligarch Anton Shobert.
Elena and Beth: Alec's mother and younger sister.
Marcus: Alec's uncle, addicted to nepenthe, the cause of his multiple personalities.
Maureen: Alec's childhood friend; she lives in an abandoned school on the outskirts of Europa.
Guido: condemned to the Inferno, he helps the protagonists on their journey.
Jorgos: a child born in the Inferno who joins Alec and Maj.
Marvin: Maj's boyfriend, son of the Oligarch Kronous.
Purgatorio
Lando: Alec's father, condemned to the Inferno after having designed its system; he managed to escape after being believed dead by everyone.
Laura: Lando's assistant, a computer genius.
Filippo: Laura's brother, a sailor by trade.
Ivan: Guido's older brother, exiled to the Inferno for dealing nepenthe, the local drug.
Primo: leader of the anarchists in the Inferno; after a great clash with the Oligarchy's guards he escapes and becomes an important figure in the Movement.
Paradiso
Karl: a slave in America, he helps Alec escape from a slavers' camp.
Novel series
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,216 |
Jean Wallet (1930-2012) was a French organist, improviser and organ teacher.
Biography
Jean Wallet began his musical studies at the Nice conservatory in 1945, where he learned piano, violin and harmony.
In 1949 he began studying the organ at the Institut des jeunes aveugles in Paris under the direction of the master André Marchal. He obtained his first prize in 1963 at the Conservatoire de Paris.
Appointed titular organist of the great organ of Sainte-Réparate cathedral in Nice in 1964, he remained there for forty years and received the title of Knight of Saint Gregory.
Also in 1964 he became professor of organ (adult class) and improvisation at the CNR of Nice, a post he held until 1998.
Until 2012 he taught every summer at the session for liturgical organists in Annecy and at the national liturgical organ course in Montbrison.
Jean Wallet died in Nice.
Discography
Dix journées de l'Orgue Corse Jean Wallet joue sur l'Orgue de Cagnano (vinyl - Ref. 21038 Productions Kalliste)
Jean Sebastien Bach interprété par Jean Wallet sur l'Orgue de Notre-Dame des Blancs-Manteaux (Vinyl - Ref. 13175)
Œuvres pour Orgue Jean Wallet et Pascal Sabot à l'orgue Henri Saby de l'Église de Dunières (AC01)
Œuvres pour Orgue à l'orgue de l'Église Saint-Pierre des Carmes (Le Puy-en-Velay) (JMW01) - 2000
Musique d'Orgue à la collégiale de Montbrison Vol.1 (JMW02) - 2001
Musique d'Orgue à la cathédrale de Fréjus (JMW03) - 2001
Musique d'Orgue - Jean Wallet joue sur son orgue à l'église Saint-Pierre des Carmes au Puy-en-Velay (JMW04) - 2002
Musique d'Orgue à la cathédrale d'Annecy (JMW05) - 2004
Musique d'Orgue à la collégiale de Montbrison Vol.2 (JMW06) - 2005
Musique d'Orgue à la cathédrale Notre-Dame du Puy (JMW07) - 2005
Musique d'Orgue à la cathédrale d'Ajaccio (JMW08) - 2006
Musique d'Orgue à l'église Saint-Pierre de Manigod (JMW09) - 2008
Jean Wallet joue sur son orgue Allen - 2009
Jean Wallet joue sur son orgue Allen à Lyon - 2010
Jean Wallet joue sur son orgue Allen (recording generously offered by Franck Lhermet) - 2011
External links
La Collégiale de Montbrison
Orgue de Fréjus
Notes and references
French classical organist
Born in October 1930
Born in the 17th arrondissement of Paris
Student of the conservatoire à rayonnement régional de Nice
Teacher at the conservatoire à rayonnement régional de Nice
Died in Nice
Died in September 2012
Died at age 81
"redpajama_set_name": "RedPajamaWikipedia"
} | 970 |
\section{Introduction}
\label{intro}
Since the discovery of neutron-induced fission of Uranium in 1938 \cite{Hahn1939}, neutron-induced fission has been the subject of both theoretical and experimental studies. In the past, tremendous efforts were focused on low-energy actinide fission because of its particular importance for nuclear energy applications.
Nowadays, there is an increasing interest in studying neutron-induced fission of actinides at intermediate energies. It is motivated by nuclear data needs for new applications such as accelerator-driven system, thorium-based fuel cycle, and the next generation of exotic beam facilities. The pre-neutron-emission mass distribution is one of the most important quantities for neutron-induced fission. Its precise description is of great importance for both understanding the fission mechanism and the practical application. In addition, $^{238}$U is one of the most important actinides, and its disposal in spent fuel ($^{238}$U is up to 95\%) is an important feature of the utilization of nuclear power.
Although one can qualitatively describe the nuclear fission process as a deformation of a single nucleus, exactly understanding the fission process or quantitatively predicting the pre-neutron-emission fragment mass distributions or product yields is still very elusive for the existing theories and models \cite{Randrup2011}. An international working group has studied the overall problem and recommended the assembly of the required nuclear data (including fission products) at intermediate incident neutron energies up to 150 MeV \cite{Lammer200801}. Compared with low-energy fission, the modeling of neutron-induced fission at intermediate energies is severely complicated by the fact that fission follows pre-equilibrium particle emission and competes with neutron evaporation \cite{Ryzhov2011}.
Several important theories and models \cite{Randrup2011,Goutte2005,Vanin1999,Karpov2001,Hu1999,Dubray2008,Younes2009,Moller2001,Moller2004,Moller2009,Moller2011,Brosa1990,Duijvestijn2001,Koning2007} have been developed for
understanding the fission mechanism or quantitatively calculating
the fragment mass distributions or fission product yields. These models mainly focus on the dynamical processes. Systematic approaches
consisting of three to seven Gaussian functions have been developed for
quantitatively predicting the fragment mass distributions or
product yields \cite{Liu2008,Katakura2008,Wahl2008,Lammer2008}.
A combination method based on the driving potential from the Skyrme energy-density functional \cite{Liu2006,Wang2009} and a phenomenological fission potential was proposed in our previous work \cite{Sun2012}, and the experimental pre-neutron-emission mass distributions of neutron-induced actinide fission at low energies were reasonably well reproduced. The present study is an extension of this combination method to reaction $^{238}$U(n, f) at incident energies up to 60 MeV.
This paper is organized as follows. In section 1, the combination method and the potential parameters are introduced in detail. In section 2, the comparisons of the calculated results and the measured data for reaction $^{238}$U(n, f) are presented and analyzed. A brief summary is also given in that section.
\section{The combination method and the potential parameters}
\label{sec:1}
\begin{figure}
\resizebox{0.5\textwidth}{!}{%
\includegraphics{fig_1.eps}
}
\caption{Fission cross section of reaction $^{238}$U(n, f)
for incident neutron energies from threshold energy to 60 MeV. The experimental data are obtained from Refs. \cite{Shcherbakov2002}(squares), \cite{Lisowski1992}(circles) and \cite{Lisowski1991}(triangles), respectively. The solid line denotes the evaluated results of ENDF/B-VII, and the dash lines label the incident energy regions corresponding to the different multi-chance fission channels such as (n, f), (n, nf), (n, 2nf) and (n, 3nf), respectively. }
\label{CS}
\end{figure}
The sequential products of neutron-induced binary fission are elaborated on in Refs. \cite{Sun2012,Madland2006}.
A combination method for calculating the pre-neutron-emission mass
distributions of neutron-induced actinide fission at low energies
has been proposed in our previous work \cite{Sun2012}. In this model, the pre-neutron-emission mass distributions are described as
\begin{equation}\label{eq1}
P(A)=C\exp[-U(A)].
\end{equation}
Where $C$ is the normalization constant, and the variable $A$
denotes the mass number of the primary fragment. The
phenomenological fission potential $U(A)$ is described by three harmonic-oscillator functions, i.e.,
\begin{equation}\label{eq2}
U(A)= \left\{\begin{array}{l l}
\displaystyle u_1(A-A_1)^2 & A\leq a \\
\displaystyle -u_0(A-A_0)^2+R & a\leq A\leq b \\
\displaystyle u_2(A-A_2)^2 & A\geq b. \\
\end{array}\right.
\end{equation}
Where, $A_1$ and $A_2$ are the positions of the
light and heavy fragment peaks of the pre-neutron-emission mass
distributions, respectively. $A_0$ denotes the corresponding
position for symmetric fission. The fission potential parameters $u_1, u_0, u_2, a, b$ and $R$, which are the functions of $A_0, A_1$ and $A_2$, have been uniquely derived as Eq. (6) - Eq. (9) in our previous paper \cite{Sun2012}.
\begin{figure}
\resizebox{0.5\textwidth}{!}{%
\includegraphics{fig_2.eps}
}
\caption{Peak $P(A_1)$ and valley $P(A_0)$ of
the pre-neutron-emission mass distributions for reaction $^{238}$U(n, f) as a function of incident neutron energy. The experimental data are derived from the white neutron beam (circles) \cite{Zoller1995}, monoenergetic neutron (triangles) \cite{Vives2000} and the quasi-monoenergetic neutron (squares) \cite{Ryzhov2011,Simutkin2011}. The solid lines denote the results of this work. }\label{PA}
\end{figure}
\begin{table*}
\caption{The positions ($A_1, A_2)$ for the mass number of the light and heavy fragments mass distributions for reaction $^{238}$U(n, f) at different incident energy regions.}
\label{table1}
\begin{tabular}{llllllll}
\hline\noalign{\smallskip}
$E_n$ (MeV) &9-11 &16-18 &24-26 &33 &45 &60 &Ref.\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Experiment &(99, 138) &(99, 138) &(98, 138) &(99, 137) &(99, 137) &(99, 136) &\cite{Simutkin2011} \\
TALYS &(99, 139) &(99, 138) &(99, 138) &(98, 137) &(98, 137) &(98, 136) &\cite{Koning2007}\\
This work &(99, 139) &(99, 138) &(99, 137) &(99, 137) &(99, 137) &(99, 137) &\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
\begin{figure*}
\resizebox{1\textwidth}{!}{%
\includegraphics{fig_3.eps}
}
\caption{Pre-neutron-emission mass distributions
at incident energies from 1.3 to 5.5 MeV for reaction $^{238}$U(n, f). The
scattered symbols denote the experimental data, which are taken from Ref. \cite{Vives2000} (squares, measured by the monoenergetic neutron)
and from Ref. \cite{Zoller1995} (circles, measured by the white neutron beam), respectively.}\label{dist1}
\end{figure*}
Particular attention should be paid to the fact that these parameters are closely related to the neutrons evaporated before scission at different incident energies. For reaction $^{238}$U(n, f) at low incident energies ($E_n \leq 6.5$ MeV), the
positions $A_1$ and $A_2$ are obtained from the nucleus-nucleus
driving potential of the fissile nucleus $^{239}$U \cite{Liu2006,Wang2009}.
With the incident neutron energy increasing, the excitation energy of the
compound nucleus will become higher, and a few neutrons will be
evaporated before scission. The number
of evaporated neutrons can be derived from the corresponding multi-chance fission cross sections. Therefore, the fission cross
sections of reaction $^{238}$U(n, f) have been investigated as shown in Fig. \ref{CS}. The scattered dots denote the experimental data derived from Refs. \cite{Shcherbakov2002,Lisowski1992,Lisowski1991}, and the solid line denotes the evaluated results of ENDF/B-VII, which are recommended as the standard cross sections. The dashed lines denote the incident energy regions corresponding to the different multi-chance fission channels, labeled (n, f), (n, nf), (n, 2nf) and (n, 3nf), respectively. From Fig. \ref{CS}, one finds that the number $\tilde{n}(E_n)$ of evaporation neutrons before scission can be
roughly expressed as follows:
\begin{equation}\label{eq3}
\tilde{n}(E_n)=\left\{\begin{array}{llllll}
\displaystyle 0, & & E_{\rm th}\leq E_n \leq6.5 & \textrm{MeV} \\
\displaystyle 1, & & 6.5< E_n \leq 14.5 & \textrm{MeV} \\
\displaystyle 2, & & 14.5<E_n\leq21.5 & \textrm{MeV} \\
\displaystyle 3 & & 21.5<E_n\leq60 & \textrm{MeV}. \\
\end{array}\right.
\end{equation}
Where, $E_{\rm th}$ is the threshold energy for
$^{238}$U(n, f) reaction. Eq. (\ref{eq3}) is consistent with the result at low incident energies as shown in Ref. \cite{Sun2012}.
\begin{figure*}
\resizebox{1\textwidth}{!}{%
\includegraphics{fig_4.eps}
}
\caption{Pre-neutron-emission mass distributions
at incident energies from 10 to 60 MeV for reaction $^{238}$U(n, f). The
scattered symbols denote the experimental data, which are taken from Ref. \cite{Ryzhov2011,Simutkin2011} (squares, measured by the quasi-monoenergetic neutron)
and from Ref. \cite{Zoller1995} (circles, measured by the white neutron beam), respectively. The dash and solid
curves denote the calculated results of TALYS code \cite{Koning2007} and in this work, respectively.}\label{dist2}
\end{figure*}
It is assumed that a compound nucleus of mass number $A_{\rm CN}$, after evaporating $\tilde{n}(E_n)$ neutrons, separates into a pair of fragments in the fission process, so the mass number of the fissile nucleus is $A_{FN}=A_{CN}-\tilde{n}(E_n)$ in the different incident energy regions. For reaction $^{238}$U(n, f), the fissile nuclei are $^{239}$U, $^{238}$U, $^{237}$U and $^{236}$U, respectively, in the different incident energy regions listed in Eq. (\ref{eq3}).
Based on the nucleus-nucleus potential with the Skyrme energy-density functional \cite{Liu2006,Wang2009}, the driving potentials of these fissile systems are studied considering the deformations of the fragments. One sees that these driving potentials generally show a valley at
$A\sim$140 for the mass distributions of heavy fragments, as elaborated in Fig. 1 of Ref. \cite{Sun2012}. It should be noted that the driving potentials are only derived from the ground state or low excitation energies of the fragments. However, the fissile nuclei still have high excitation energies after evaporating neutrons in the different incident energy regions.
So the position $A_2$ of the heavy fragment peaks, as well as $A_1$ of the light fragment peaks and $A_0$ of the symmetrical fission, should be modified as
\begin{equation}\label{eq4}
\left\{\begin{array}{llllll}
\displaystyle A_2=A_2^{g.s.}-\tilde{n}(E_n), \\
\displaystyle A_1=A_{FN}-A_2, \\
\displaystyle A_0=A_{FN}/2. \\
\end{array}\right.
\end{equation}
Where $A_2^{g.s.}\simeq 140$ denotes the lowest position of the driving potential derived from the ground state of the fragments. The results of Eqs. (\ref{eq3}) and (\ref{eq4}) agree exactly with the positions of the maxima of the heavy-fragment mass distributions measured with the quasi-monoenergetic neutron beam from 10 MeV up to 60 MeV \cite{Ryzhov2011,Simutkin2011}, as listed in Table 1. For comparison, the results of the well-known TALYS code \cite{Koning2007} are also listed in this table.
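For illustration only, the prescription of Eqs. (\ref{eq3}) and (\ref{eq4}) can be coded directly; the short Python sketch below (added for clarity, with $A_2^{g.s.}=140$ as quoted above) returns the number of pre-scission evaporation neutrons and the peak positions for a given incident energy in MeV.
\begin{verbatim}
# Sketch of Eqs. (3)-(4) for 238U(n, f); A2_gs = 140 as quoted in the text.
A_CN = 239     # compound nucleus 239U
A2_GS = 140    # heavy-fragment peak from the ground-state driving potential

def evaporated_neutrons(E_n):
    # Eq. (3): neutrons evaporated before scission (E_n in MeV)
    if E_n <= 6.5:
        return 0
    if E_n <= 14.5:
        return 1
    if E_n <= 21.5:
        return 2
    return 3   # 21.5 < E_n <= 60 MeV

def peak_positions(E_n):
    # Eq. (4): light peak A1, heavy peak A2 and symmetric position A0
    n_evap = evaporated_neutrons(E_n)
    A_FN = A_CN - n_evap
    A2 = A2_GS - n_evap
    A1 = A_FN - A2
    A0 = A_FN / 2
    return A1, A2, A0

print(peak_positions(45.0))   # (99, 137, 118.0), cf. Table 1
\end{verbatim}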
Based on monoenergetic experimental data \cite{Vives2000} and the quasi-monoenergetic experimental data \cite{Ryzhov2011,Simutkin2011}, the heights $P(A_0)$ and
$P(A_1)$ of the valleys and peaks of the pre-neutron-emission mass
distributions have been fitted as functions of the incident
neutron energy. For reaction $^{238}$U(n, f) up to 60 MeV, the energy dependence of $P(A_1)$ and $P(A_0)$ is written as
\begin{eqnarray}\label{eq5}
P(A_1) &=& 3.850+2.600 e^{-0.040E_n},\nonumber\\
P(A_0) &=& 0.044+4.581 e^{-32.465/E_n}.
\end{eqnarray}
The measured results, including those from the white neutron beam \cite{Zoller1995}, monoenergetic neutrons \cite{Vives2000} and quasi-monoenergetic neutrons \cite{Ryzhov2011,Simutkin2011}, together with the calculation, are shown in Fig. \ref{PA}. The parameter $R$ in Eq. (2) can then be derived easily through Eq. (\ref{eq6}):
\begin{equation}\label{eq6}
R=\textmd{ln}\frac{P(A_1)}{P(A_0)}.
\end{equation}
Furthermore, Eq. (\ref{eq5}) approximately reproduces the results of Ref. \cite{Sun2012} at low incident energies ($E_n \leq 6.5$ MeV). One can see that the values of $P(A_1)$ and $P(A_0)$ change exponentially with the incident energy in general, which could provide some useful information at unmeasured energies.
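As a further illustration (a sketch we add, not part of the original calculation), the heights of Eq. (\ref{eq5}), the parameter $R$ of Eq. (\ref{eq6}) and the distribution of Eqs. (\ref{eq1})-(\ref{eq2}) can be evaluated as follows; the potential parameters $u_1, u_0, u_2, a, b$ enter only through the user-supplied function $U(A)$, since in the paper they follow from $A_0, A_1, A_2$ via Eqs. (6)-(9) of Ref. \cite{Sun2012}, which are not reproduced here.
\begin{verbatim}
# Sketch of Eqs. (1)-(2), (5)-(6); the piecewise potential U(A) is supplied
# by the caller with parameters taken from Ref. [Sun2012].
import math

def peak_and_valley(E_n):
    # Eq. (5): P(A1) and P(A0) as functions of the incident energy (MeV)
    p_a1 = 3.850 + 2.600 * math.exp(-0.040 * E_n)
    p_a0 = 0.044 + 4.581 * math.exp(-32.465 / E_n)
    return p_a1, p_a0

def R_parameter(E_n):
    # Eq. (6): R = ln(P(A1) / P(A0))
    p_a1, p_a0 = peak_and_valley(E_n)
    return math.log(p_a1 / p_a0)

def mass_distribution(masses, U):
    # Eqs. (1)-(2): P(A) = C * exp(-U(A)), normalized over the given masses
    weights = [math.exp(-U(A)) for A in masses]
    total = sum(weights)
    return [w / total for w in weights]
\end{verbatim}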
\begin{figure}
\resizebox{0.55\textwidth}{!}{%
\includegraphics{fig_5.eps}
}
\caption{(Color online) The calculated pre-neutron-emission mass distributions (\%)
at incident energies from the threshold energy up to 100 MeV for reaction $^{238}$U(n, f), as a function of the fragment mass number $A$ and the incident energy $E_n$.}\label{dist3}
\end{figure}
\subsection{Results and analysis}
\label{sec:2}
In this work, the evaporation neutrons $\tilde{n}(E_n)$ at different incident energy regions are derived from the fission cross sections in multi-chance fission channels as shown in Fig. \ref{CS}. In terms of the evaporation neutrons $\tilde{n}(E_n)$, the positions $A_2$ of the heavy fragment peaks of the pre-neutron-emission mass distributions are determined. So the positions $A_1$ of the light fragment peaks can be also obtained easily. Combined the heights $P(A_0)$ and
$P(A_1)$ of the valleys and peaks of the pre-neutron-emission mass
distributions as shown in Fig. \ref{PA}, the parameter $R$ is also obtained in terms of Eq. (\ref{eq6}). So the pre-neutron-emission mass
distributions can be calculated using Eq. (\ref{eq1}) and (\ref{eq2}) up to 60 MeV, as shown in Fig. \ref{dist1} and \ref{dist2}. Fig. \ref{dist1} shows the pre-neutron-emission mass
distributions of reaction $^{238}$U(n, f) at low incident energies from 1.3 MeV to 5.5 MeV, and one can see that these results agree with the previous results of Ref. \cite{Sun2012}. Fig. \ref{dist2} shows the calculated results up to 60 MeV. In this figure, the scattered symbols denote the experimental data, which are taken from Ref. \cite{Ryzhov2011,Simutkin2011} (squares, measured by the quasi-monoenergetic neutron)
and from Ref. \cite{Zoller1995} (circles, measured by the white neutron beam), respectively. The solid curves
denote the calculated results in this work. The dash curves denote the results calculated by TALYS code \cite{Koning2007}.
One can see that the experimental data of reaction $^{238}$U(n, f) can be reproduced well at different incident neutron
energies from the threshold energy up to 60 MeV. This indicates that the method combining the driving potential with the phenomenological fission potential is well suited to describe the pre-neutron-emission mass
distributions of reaction $^{238}$U(n, f) up to 60 MeV.
Fig. \ref{dist3} gives the contours of the predicted pre-neutron-emission mass distributions of reaction $^{238}$U(n, f) from the threshold energy to 100 MeV. One can see that several distinct characteristics of the pre-neutron-emission mass distributions are reasonably reproduced: 1) the double-humped shape; 2) the increase of the valley heights, as well as the decrease of the peak heights, with increasing incident energy; 3) the position $A_2$ of the heavy fragment peak is located at roughly 140 at low energies and gradually decreases because of the neutrons evaporated before scission at $E_n>$6.5 MeV. In contrast, the position $A_1$ of the light fragment peak is always located at roughly 99 from the threshold energy up to 100 MeV. This implies that the combination method in this paper can provide additional useful information for intermediate-energy neutron-induced actinide fission.
\\
\textbf{Acknowledgements}
We thank our colleagues L. Ou and M. Liu for some valuable suggestions. This work was
supported by Guangxi University Science and Technology Research Projects (Grant No. 2013ZD007), Guangxi Natural Science Foundation (Grant No. 2012GXNSFAA053008), the Th-based Molten Salt Reactor Power System of the Strategic Pioneer Science and Technology Projects from the Chinese
Academy of Sciences, and National Natural Science Foundation of China (Grants No. 11265004).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 169 |
env response stream ranges
--SKIPIF--
<?php include "skipif.inc"; ?>
--ENV--
HTTP_RANGE=bytes=2-4
--GET--
a=b
--FILE--
<?php
$f = tmpfile();
$r = new http\Env\Response;
$r->setContentType("text/plain");
$r->setContentDisposition(
array("attachment" => array(array("filename" => basename(__FILE__))))
);
$r->setBody(new http\Message\Body(fopen(__FILE__, "rb")));
$r->send($f);
rewind($f);
var_dump(stream_get_contents($f));
?>
--EXPECTF--
string(%i) "HTTP/1.1 206 Partial Content%c
Accept-Ranges: bytes%c
X-Powered-By: PHP/%s%c
Content-Type: text/plain%c
Content-Range: bytes 2-4/311%c
%c
php"
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,448 |
\section{Additional Experiments}
\subsection{Experiments with online switching}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/SAMUEL_alg1.pdf}
\caption{\mbox{{SAMUEL\ }} over geometric intervals on CIFAR-10.}
\label{fig:samuel_hard}
\end{figure}
In this section we conduct a preliminary sanity check to test the ability of \mbox{{SAMUEL\ }} to switch learning rates on the fly.
For this purpose we tested the full \mbox{{SAMUEL\ }} implementation with the original Algorithm 1 on CIFAR-10 classification. We compared training ResNet-18 with \mbox{{SAMUEL\ }} to training with AdaGrad with a constant learning rate multiplier, as shown in Fig.~\ref{fig:samuel_hard}. For the baseline learning rate multiplier, we considered multipliers of 0.01 and 0.1. For SAMUEL, we constructed the geometric interval set with a minimum length of 100 training iterations and provided 0.01 and 0.1 as candidate learning rate multipliers. Although \mbox{{SAMUEL\ }} can only alternate between two candidate learning rate multipliers, it demonstrates superior performance. The baselines and \mbox{{SAMUEL\ }} over geometric intervals were both trained for 220 epochs with a batch size of 256. We conducted experiments with 5 different random seeds for each of the three schedules: 0.01, 0.1 and \mbox{{SAMUEL\ }}. We report the average final test accuracy: 88.98\% with lr 0.01, 92.08\% with lr 0.1, and 92.43\% with \mbox{{SAMUEL\ }}.
In this experiment \mbox{{SAMUEL\ }} prefers lr 0.1 at first, then switches to lr 0.01 automatically around iteration 2500, where it starts to outperform the lr 0.1 baseline. This demonstrates the ability of \mbox{{SAMUEL\ }} to switch between learning rates on the fly.
This shows the promise of interpolating different algorithms in a manner that improves upon the individual methods. However, this implementation is not as efficient as the heuristic we test in the other experiments.
It remains to test how quickly we can shift optimizers in more challenging online tasks, such as domain shift and online reinforcement learning.
\subsection{CIFAR-100 Experiment}
We conducted image classification on the CIFAR-100 dataset. We compare a ResNet-18 \citep{he2016deep} model trained with our optimization algorithm to a model trained with AdaGrad using brute-force-searched learning rate schedules. Following \cite{he2016deep}, we applied per-pixel mean subtraction, random horizontal flips, and random cropping with 4-pixel padding for CIFAR data processing and augmentation. All experiments were conducted on TPU-V2 hardware. For training, we used a batch size of 256 and 250 total epochs with a step learning rate schedule. We fixed the learning rate stepping points at epochs 125 and 200, and provided five possible candidate learning rates \{0.0001, 0.001, 0.01, 0.1, 1\} for each region. Thus an exhaustive search yielded 125 different schedules for the baseline AdaGrad method. For a fair comparison, we adopted the same learning rate changing points for our method. Our method automatically determined the optimal learning rate at each transition point without the need to exhaustively search over learning rate schedules.
We display the CIFAR-100 test accuracy curves of AdaGrad with 125 exhaustively searched learning rate schedules, together with our method in a single run, in Fig.~\ref{fig:CIFAR100}. Fig.~\ref{fig:CIFAR100} shows that the best accuracy of the exhaustive search is 76.77\%, and the accuracy of \mbox{{SAMUEL\ }} using the same seed is 75.66\%.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/CIFAR100_CI.pdf}
\caption{CIFAR-100 comparison of exhaustive searched learning rate schedule and \mbox{{SAMUEL\ }}. Top: 125 parallel experiments with exhaustively searched learning rate schedules. Bottom: \mbox{{SAMUEL\ }} on one run with 10 different random seeds, no tuning needed.}
\label{fig:CIFAR100}
\end{figure}
\subsection{Comparison with Baselines}
We conducted additional experiments on CIFAR-10 with off-the-shelf learning rate schedulers from the optax library. We considered the same model and training pipeline as detailed in the experiment section. Instead of using the three phase learning rate stepping scheme, we tried more varieties of schedulers available in the optax library. Specifically, we finetuned the cosine annealing scheduler, the linear warmup followed by cosine decay scheduler, and the linear warmup followed by exponential decay scheduler. Their test accuracy curves together with different learning rate schedules are displayed in Fig.\ref{fig:cosineanneal}, Fig.\ref{fig:warmcosineanneal} and Fig.\ref{fig:warmexponent}, respectively.
For finetuning the cosine annealing scheduler, we experimented with 45 different initial learning rates in the range of 1e-5 to 0.9.
For the linear warmup followed by cosine decay scheduler, we finetuned the initial learning rate, the peak learning of the warmup and the duration of the warmup. We considered possible initial learning rate \{0, $1\times10^{-5}$, $1\times10^{-4}$\}, peak learning rate \{0.001, 0.01, 0.05, 0.1, 0.5, 1\}, and warmup epochs \{5, 10\} for the grid search.
For the linear warmup followed by exponential decay scheduler, we finetuned the initial learning rate, the peak learning of the warmup and the duration of the warmup, the exponential decay rate, and the transition steps. We considered possible initial learning rate \{0, $1\times10^{-5}$, $1\times10^{-4}$\}, peak learning rate \{0.05, 0.1, 0.5, 1\}, warmup epochs \{5, 10\}, exponential decay rate \{0.5, 0.8, 0.9\}, and transition step \{5, 10\} for the grid search.
As the figures demonstrate, the final test accuracy depends heavily on the learning rate schedule. For off-the-shelf learning rate schedulers, tuning the schedule-associated hyperparameters is not trivial.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/cosineanneal.pdf}
\caption{Tuning cosine annealing schedules on CIFAR-10. The best test accuracy out of all 45 trials is 95.37\%.}
\label{fig:cosineanneal}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/warm_cosineanneal.pdf}
\caption{Tuning the linear warmup followed by cosine decay scheduler on CIFAR-10. The best test accuracy out of 36 trials is 95.31\%.}
\label{fig:warmcosineanneal}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/warm_exponent.pdf}
\caption{Tuning the linear warmup followed by exponential decay scheduler on CIFAR-10. The best test accuracy out of 144 trials is 95.27\%.}
\label{fig:warmexponent}
\end{figure}
\section{Algorithm and Main Theorem}
Before giving the precise definition, we give a high-level description of our method.
The input to our method is any optimizer. To derive our best theoretical results, we apply the reduction with the AdaGrad or Adam optimizers, which have provable regret guarantees.
We choose a set of intervals $S$ and maintain a blackbox algorithm instance ${\mathcal A}_I$ for each $I$ in the set, then run a variant of the multiplicative weight algorithm over these blackbox algorithm instances. The idea is to use the multiplicative weight algorithm to achieve a loss comparable with the best among the experts, so that for any $I$ our algorithm is not much worse than the optimal expert ${\mathcal A}_I$. Our main technical contribution is to make copies of each 'expert' ${\mathcal A}_I$ with different learning rates in the multiplicative weight algorithm, so that one of them is guaranteed to be near optimal.
\begin{algorithm}[t]
\caption{Strongly Adaptive Regret MUltiplicative-wEight-based AdaGrad (\mbox{{SAMUEL\ }})}
\label{alg1}
\begin{algorithmic}
\STATE Input: OCO algorithm ${\mathcal A}$, interval set $S$, constant $Q= 4 \log(dTD^2G^2)$.
\STATE Initialize: for each $I\in S$, $Q$ copies of OCO algorithm ${\mathcal A}_{I,q}$.
\STATE Set $\eta_{I,q}=\frac{1}{2GD 2^q}$ for $q\in [1,Q]$.
\STATE Initialize $w_1(I,q)=\min \{1/2, \eta_{I,q}\}$ if $I=[1,s]$, and $w_1(I,q)=0$ otherwise for each $I\in S$.
\FOR{$\tau = 1, \ldots, T$}
\STATE Let $x_{\tau}(I,q) = {\mathcal A}_I(\tau)$
\STATE Let $W_{\tau}=\sum_{I\in S(\tau),q} w_{\tau}(I,q)$.
\STATE Let $x_{\tau}=\sum_{I\in S(\tau),q} w_{\tau}(I,q)x_{\tau}(I,q)/W_{\tau}$.
\STATE Predict $x_{\tau}$.
\STATE Receive loss $\ell_{\tau}(x_{\tau})$, define $r_{\tau}(I)=\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}(I,q))$.
\STATE For each $I=[s,t]\in S$, update $w_{\tau+1}(I,q)$ as follows,
$$
w_{\tau+1}^{(I,q)}=\left\{
\begin{array}{lcl}
0 & & {\tau+1\notin I}\\
{\min \{1/2, \eta_{I,q}\}} & & {\tau+1=s}\\
{w_{\tau}(I,q)(1+\eta_{I,q} r_{\tau}(I))} & & \textbf{else}
\end{array}\right.
$$
\ENDFOR
\end{algorithmic}
\end{algorithm}
We provide our main result of this paper, a full-matrix strongly adaptive regret bound for Algorithm \ref{alg1}.
\begin{thm}\label{thmain}
Under assumptions \ref{as1} and \ref{as2}, when Adagrad is used as the blackbox $\mathcal{A}$, the total regret $\mbox{{Regret}}(I)$ of the multiplicative weight algorithm in Algorithm \ref{alg1} satisfies that for any interval $I=[s,t]$,
$$ \mbox{{Regret}}(I)= \tilde{O}\left(\sqrt{ \min_{H \in {\mathcal H}} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}}\right) ,
$$
ignoring poly-logarithmic factors, and more precisely,
$$
\mbox{{Regret}}(I) = O(D \log(T) \max\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_{H \in {\mathcal H}} \sum_{\tau=s}^t \|\nabla_{\tau}\|_H^{*2}}\})
$$
\end{thm}
\begin{remark}
The reason we can use a convex combination in line 8 (instead of randomly sampling an expert) is that the loss $\ell_{\tau}$ and the domain $\ensuremath{\mathcal K}$ are both convex. The convexity of $\ensuremath{\mathcal K}$ guarantees that $x_{\tau}$ still lies in $\ensuremath{\mathcal K}$, and the convexity of $\ell_{\tau}$ guarantees that the loss suffered, $\ell_{\tau}(x_{\tau})$, is no larger than the expected loss of the randomized version: $\sum_{I\in S(\tau),q} w_{\tau}(I,q)\ell_{\tau}(x_{\tau}(I,q))/W_{\tau}$.
\end{remark}
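For concreteness, a minimal Python sketch of a single round of Algorithm~\ref{alg1} is given below (our illustration only): the blackbox predictions $x_\tau(I,q)$ and the loss function are supplied from outside, the predictions are taken to be scalars (or numpy arrays) so that the convex combination of line 8 is a plain weighted sum, and the (re)initialization of experts whose interval starts at the next round is left to the caller.
\begin{verbatim}
# Minimal sketch of one round of Algorithm 1 (illustration only).
# Experts are keyed by (I, q); eta[(I, q)] plays the role of 1/(2*G*D*2**q).
def samuel_round(weights, eta, active, predict, loss):
    # active: experts (I, q) whose interval contains the current round
    W = sum(weights[k] for k in active)
    # line 8: play the convex combination of the active experts' predictions
    x = sum(weights[k] / W * predict[k] for k in active)
    l_x = loss(x)
    for k in active:
        r = l_x - loss(predict[k])           # instantaneous regret r_tau(I)
        weights[k] *= (1.0 + eta[k] * r)     # multiplicative update
    return x
\end{verbatim}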
\section{Analysis}
The key contribution of this paper is an improved upper bound on $R_1(I)$, as stated in the following theorem. Recall that $R_1(I)=\sum_{t=k}^s r_t(I)$ where $I=[k,s]$.
\begin{theorem}\label{thm1}
Under assumptions \ref{as1} and \ref{as2}, the regret $R_1(I)$ of the multiplicative weight part of \mbox{{SAMUEL\ }} satisfies that for any interval $I=[k,s]$,
\begin{equation}
R_1(I)= O\left(D \log(T)\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
\end{theorem}
\begin{proof}
We define the pseudo weight $\tilde{w}_i(I,q)=w_i(I,q)/\eta_{I,q}$ for $i\le t$, and for $i>t$ we just set $\tilde{w}_i(I,q)=\tilde{w}_t(I,q)$. To proceed, we need the following lemma:
\begin{lemma}
Define $\tilde{W}_t=\sum_{I\in S(t)} \tilde{w}_t(I)$, then we have that
\begin{equation}
\tilde{W}_t\le t (\log(t)+1)\log(dTD^2G^2)\log(T)
\end{equation}
\end{lemma}
\begin{proof}
The proof is by induction. For $t=1$ it follows since on the interval $[1,1]$ the number of experts is exactly the number of possible values of $q$. Now we assume it holds for all $t'\le t$. We have
\begin{align*}
\tilde{W}_{t+1}&=\sum_{I\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&=\sum_{I=[t+1,s]\in S(t+1),q} \tilde{w}_{t+1}(I,q)+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&\le \log(t+1)\log(dTD^2G^2)\log(T)+1+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&= \log(t+1)\log(dTD^2G^2)\log(T)+1+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_t(I,q)(1+\eta_{I,q} r_t(I))\\
&\le \log(t+1)\log(dTD^2G^2)\log(T)+1+\tilde{W}_t+\sum_{I\in S(t),q} w_t(I,q) r_t(I)\\
&\le (t+1)(\log(t+1)+1)\log(dTD^2G^2)\log(T)+\sum_{I\in S_t,q} w_t(I,q) r_t(I)
\end{align*}
We complete the proof by showing that $\sum_{I\in S(t),q} w_t(I,q) r_t(I)=0$:
\begin{align*}
\sum_{I\in S(t),q} w_t(I,q) r_t(I)&=W_t \sum_{I\in S(t),q} p_t(I,q) (\ell_t(x_t)-\ell_t(x_t(I,q)))\\
&=W_t(\ell_t(x_t)-\ell_t(x_t))=0
\end{align*}
\end{proof}
With this lemma in hand, we only need to prove that for any $I=[k,s]$,
\begin{equation}
\sum_{t=k}^s r_t(I)= O\left(D\sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
By the lemma above, we have that
\begin{equation}
\tilde{w}_{s+1}(I)\le \tilde{W}_{s+1}\le (s+1)(\log(s+1)+1)\log(dTD^2G^2)\log(T)
\end{equation}
hence
\begin{equation}
\log(\tilde{w}_{s+1}(I))\le \log(s+1)+\log(\log(s+1)+1)+\log (\log(dTD^2G^2))+\log(\log(T))
\end{equation}
we also note that
\begin{equation}
\tilde{w}_{s+1}(I)=\prod_{t=k}^s (1+\eta_{I,q} r_t(I))
\end{equation}
By using the fact that $\log(1+x)\ge x-x^2, \forall x\ge -1/2$ and
\begin{equation}
|\eta_{I,q} r_t(I)|\le \frac{1}{4GD} ||x_t-x_t(I,q)||_2 G\le 1/2
\end{equation}
we obtain for any $q$
\begin{equation}
\log(\tilde{w}_{s+1}(I)) \ge \sum_{t=k}^s \eta_{I,q} r_t(I)-\sum_{t=k}^s \eta_{I,q}^2 r_t(I)^2
\end{equation}
Now we need to carefully upper bound the term $\sum_{t=k}^s r_t(I)^2$. By convexity we have that $r_t(I)=\ell_t(x_t)-\ell_t(x_t(I))\le g_t^{\top}(x_t-x_t(I))$, and also $g_t^{\top}(x_t-x_t(I))\le ||g_t||_{H^{-1}} ||x_t-x_t(I)||_H$ for any $H$. As a result, we have that for any $H$ which is PSD and $tr(H)\le d$,
\begin{equation}
r_t(I)^2\le g_t^{\top} H^{-1} g_t ||x_t-x_t(I)||_H^2\le 4g_t^{\top} H^{-1} g_t D^2 d
\end{equation}
where $ ||x_t-x_t(I)||_H^2\le 4D^2 d$ is by elementary algebra. Hence
\begin{equation}
\sum_{t=k}^s r_t(I)\le \frac{4\log(T)}{\eta_{I,q}}+4\eta_{I,q} D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t
\end{equation}
The optimal choice of $\eta$ is of course
\begin{equation}
4\sqrt{\frac{\log(T)}{ D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t}}
\end{equation}
When $D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t\le 64 G^2D^2 \log(T)$, $\eta_{I,1}$ gives the bound $O(GD\log(T))$. When $D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t> 64 G^2D^2 \log(T)$, there always exists $q$ such that $0.5\eta_{I,q}\le \eta\le 2\eta_{I,q}$ by the construction of $q$ so that the regret is still of order
\begin{equation}
O\left(D\sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
which concludes our proof.
\end{proof}
To combine regrets $R_0(I), R_1(I)$ and extend to any interval $J$, we need the following lemma from \cite{daniely2015strongly}:
\begin{lemma}\label{lem1}
For any interval $J$, there exists a set of intervals $S_J$ such that $S_J$ contains only disjoint intervals in $S$ whose union is exactly $J$, and $|S_J|= O(\log(T))$
\end{lemma}
We can now upper bound the total regret $\mbox{{Regret}}(J)$:
\begin{theorem}\label{thm2}
For any interval $J$, its regret $\mbox{{Regret}}(J)$ can be upper bounded by:
\begin{equation}
\mbox{{Regret}}(J)\le \sqrt{|S_J| \sum_{I\in S_J} (R_0(I)+R_1(I))^2}
\end{equation}
\end{theorem}
\begin{proof}
Clearly the total regret $\mbox{{Regret}}(I)$ on interval $I\in S$ is upper bounded by $R_0(I)+R_1(I)$, and the regret on $J$ can be controlled by
\begin{equation}
\mbox{{Regret}}(J)\le \sum_{I\in S_J} \mbox{{Regret}}(I)
\end{equation}
By Cauchy Schwarz we have that
\begin{equation}
(\sum_{I\in S_J} \mbox{{Regret}}(I))^2\le |S_J| \sum_{I\in S_J} \mbox{{Regret}}^2(I)
\end{equation}
which concludes our proof.
\end{proof}
Our main result of this paper can be derived as a corollary.
\begin{corollary}
Under assumptions \ref{as1} and \ref{as2}, when Adagrad is used as the blackbox $\mathcal{A}$, the total regret $\mbox{{Regret}}(I)$ of the multiplicative weight algorithm in Adagrad+ satisfies that for any interval $I=[k,s]$,
\begin{equation}
\mbox{{Regret}}(I)= O\left(D \log(T)\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
\end{corollary}
\begin{proof}
Observe that when Adagrad is used as the blackbox $\mathcal{A}$, $R_0(I)=O(R_1(I))$. Combining Lemma \ref{lem1}, Theorem \ref{thm1} and Theorem \ref{thm2} gives the desired bound.
\end{proof}
\begin{rem}
We notice that the algorithms with or without $q$ should have very similar behaviours, because $\mathcal{A}_{I,q}$ do exactly the same predictions on $I$ whatever $q$ is. Introducing the index $q$ is for the ease of analysis and only worsens the optimal full-matrix bound \cite{duchi2011adaptive} by a small $\log(T)$ factor.
\end{rem}
\section{Analysis}
In this section, we first provide the key technical contribution of this paper, an improved upper bound on the regret of the multiplicative weight part, as stated in the following theorem. We then prove our main result as a corollary, together with other extensions.
The total regret can be written as $R_0(I)+R_1(I)$, where $R_0(I)$ is the regret of ${\mathcal A}_I$ and $R_1(I)$ is the regret of the multiplicative weight part. Without loss of generality, we assume $T=2^k$ and define the geometric covering intervals:
\begin{definition}
Define $S_i=\{[1,2^i],[2^i+1,2^{i+1}],...,[2^k-2^i+1,2^k]\}$ for $0\le i\le k$. Define $S=\cup_i S_i$ and $S(\tau)=\{I\in S|\tau\subset I\}$.
\end{definition}
For $2^k<T<2^{k+1}$, one can similarly define $S_i=\{[1,2^i],[2^i+1,2^{i+1}],...,[2^i \lfloor \frac{T-1}{2^i}\rfloor+1,T]\}$, see \cite{daniely2015strongly}. Hence, at any time $\tau$ the number of `active' intervals is only $O(\log (T))$, which guarantees that the running time and memory cost per round of \mbox{{SAMUEL\ }} are only $O(\log (T))$.
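As a concrete illustration of the construction (for instance with $T=8$, i.e. $k=3$), the geometric covering consists of
$$
S_0=\{[1,1],[2,2],\ldots,[8,8]\},\quad S_1=\{[1,2],[3,4],[5,6],[7,8]\},\quad S_2=\{[1,4],[5,8]\},\quad S_3=\{[1,8]\},
$$
and, e.g., $S(6)=\{[6,6],[5,6],[5,8],[1,8]\}$, so at any time exactly $k+1=O(\log T)$ intervals are active.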
It is worth noting that $q$ does not affect the behavior of ${\mathcal A}_{I,q}$ and only takes effect in the multiplicative weight algorithm; in particular, $r_{\tau}(I,q)$ and $x_{\tau}(I,q)$ do not depend on $q$, so we may write $r_{\tau}(I)$ and $x_{\tau}(I)$ instead for simplicity.
\begin{theorem}\label{thm1}
Under assumptions \ref{as1} and \ref{as2}, the regret $R_1(I)$ of the multiplicative weight algorithm part in Algorithm \ref{alg1} satisfies that for any interval $I=[s,t]\in S$,
\begin{eqnarray*}
&R_1(I)= O(\sqrt{\log(T)}\\
&\max\{D G\sqrt{\log(T)}, \sqrt{ \sum_{\tau=s}^t (\nabla_{\tau}^{\top} (x_{\tau}-x_{\tau}(I)))^2}\})
\end{eqnarray*}
\end{theorem}
To combine regrets $R_0(I), R_1(I)$ and extend to any interval $J$, we need the following lemma from \cite{daniely2015strongly}:
\begin{lemma}\label{lem1}
For any interval $J$, there exists a set of intervals $S_J$ such that $S_J$ contains only disjoint intervals in $S$ whose union is exactly $J$, and $|S_J|= O(\log(T))$
\end{lemma}
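As a concrete illustration of Lemma \ref{lem1} (again with $T=8$), the interval $J=[2,7]$, which does not itself belong to $S$, can be written as the disjoint union
$$
J=[2,2]\cup[3,4]\cup[5,6]\cup[7,7],
$$
using $4=O(\log T)$ members of $S$; the general construction in \cite{daniely2015strongly} is similar, with interval lengths that (roughly) first increase and then decrease geometrically.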
We can now upper bound the total regret $\mbox{{Regret}}(J)$:
\begin{lemma}\label{lem2n}
For any interval $J$, its regret $\mbox{{Regret}}(J)$ can be upper bounded by:
$$
\mbox{{Regret}}(J)\le \sqrt{|S_J| \sum_{I\in S_J} (R_0(I)+R_1(I))^2}
$$
\end{lemma}
\begin{proof}
Clearly the total regret $\mbox{{Regret}}(I)$ on interval $I\in S$ is upper bounded by $R_0(I)+R_1(I)$, and the regret on $J$ can be controlled by
$$
\mbox{{Regret}}(J)\le \sum_{I\in S_J} \mbox{{Regret}}(I)
$$
By Cauchy Schwarz we have that
$$
(\sum_{I\in S_J} \mbox{{Regret}}(I))^2\le |S_J| \sum_{I\in S_J} \mbox{{Regret}}^2(I)
$$
which concludes our proof.
\end{proof}
With Theorem \ref{thm1} and Lemma \ref{lem2n} in hand, we are ready to present our main results in the full-matrix and diagonal-matrix settings.
\subsection{Full-matrix adaptive regularization}
Our main result of this paper can be derived as a corollary from Theorem \ref{thm1} and Lemma \ref{lem2n}, by a refined bound on the term $\sum_{\tau=s}^t (\nabla_{\tau}^{\top} (x_{\tau}-x_{\tau}(I)))^2$.
\begin{corollary}\label{cor1}
Under assumptions \ref{as1} and \ref{as2}, when Adagrad is used as the blackbox $\mathcal{A}$, the total regret $\mbox{{Regret}}(I)$ of the multiplicative weight algorithm in Algorithm \ref{alg1} satisfies that for any interval $I=[s,t]$,
$$
\mbox{{Regret}}(I) = O(D \log(T)\max\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_{H \in {\mathcal H}} \sum_{\tau=s}^t \|\nabla_{\tau}\|_H^{*2}}\})
$$
\end{corollary}
\begin{rem}
We notice that the algorithms with or without $q$ should have very similar behaviours, because $\mathcal{A}_{I,q}$ do exactly the same predictions on $I$ whatever $q$ is. Introducing the index $q$ is for the ease of analysis and only loses a small $\log(T)$ factor compared with the optimal full-matrix bound \cite{duchi2011adaptive}.
\end{rem}
\subsection{Diagonal-matrix adaptive regularization}
If we restrict our expert optimization algorithm to be diagonal Adagrad, we can derive a similar guarantee for the adaptive regret.
\begin{corollary}\label{cor3}
Under assumptions \ref{as1} and \ref{as2}, when diagonal Adagrad is used as the blackbox $\mathcal{A}$, the total regret $\mbox{{Regret}}(I)$ of the multiplicative weight algorithm in Algorithm \ref{alg1} satisfies that for any interval $I=[s,t]$,
$$
\mbox{{Regret}}(I)= \tilde{O}\left(D_{\infty} \sum_{i=1}^d \|\nabla_{s:t,i}\|_2 \right)
$$
\end{corollary}
Here $\nabla_{s:t,i}$ denotes the vector $(\nabla_{s,i},\ldots,\nabla_{t,i})$ of $i$-th coordinates of the gradients over the interval, so that $\|\nabla_{s:t,i}\|_2=\sqrt{\sum_{\tau=s}^t \nabla_{\tau,i}^2}$.
\begin{proof}
The proof is almost identical to that of the previous corollary, observing that the regret $R_0(I)$ is $\tilde{O}(D_{\infty} \sum_{i=1}^d \|\nabla_{s:t,i}\|_2 )$ due to \cite{duchi2011adaptive}, and the regret $R_1(I)$ remains $\tilde{O}(D \sqrt{ \min_{H\in {\mathcal H}} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}})$, which is upper bounded by $\tilde{O}(D_{\infty} \sum_{i=1}^d \|\nabla_{s:t,i}\|_2 )$.
\end{proof}
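For concreteness, a minimal sketch of a diagonal-Adagrad blackbox of the kind referred to in Corollary \ref{cor3} is given below. This is only an illustrative Python implementation: the step-size constant \texttt{lr}, the damping term \texttt{eps}, and the projection \texttt{project} are placeholder choices of ours and are not prescribed by the analysis.
\begin{verbatim}
import numpy as np

def make_diag_adagrad(x0, lr=1.0, eps=1e-8, project=lambda x: x):
    """Sketch of diagonal Adagrad: per-coordinate step sizes proportional
    to the inverse square root of the accumulated squared gradients."""
    x = np.array(x0, dtype=float)
    g2_sum = np.zeros_like(x)            # running sum of squared gradients

    def step(grad):
        nonlocal x, g2_sum
        g2_sum += grad ** 2              # accumulate per-coordinate statistics
        x = x - lr * grad / (np.sqrt(g2_sum) + eps)
        x = project(x)                   # keep the iterate inside the domain K
        return x

    return step
\end{verbatim}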
\section{Proof of Theorem \ref{thm1}}
\begin{proof}
We define the pseudo weight $\tilde{w}_{\tau}(I,q)=w_{\tau}(I,q)/\eta_{I,q}$ for $\tau\le t$, and for $\tau>t$ we just set $\tilde{w}_{\tau}(I,q)=\tilde{w}_t(I,q)$. To proceed, we need the following lemma:
\begin{lemma}
Define $\tilde{W}_{\tau}=\sum_{I\in S(\tau)} \tilde{w}_{\tau}(I)$, then we have that
$$
\tilde{W}_{\tau}\le \tau (\log(\tau)+1)\log(dTD^2G^2)\log(T)
$$
\end{lemma}
\begin{proof}
The proof is by induction. For $\tau=1$ it follows since on the interval $[1,1]$ the number of experts is exactly the number of possible $q$s. Now we assume it holds for all $\tau'\le \tau$. We have
\begin{align*}
\tilde{W}_{\tau+1}&=\sum_{I\in S(\tau+1),q} \tilde{w}_{\tau+1}(I,q)\\
&=\sum_{I=[\tau+1,t]\in S(\tau+1),q} \tilde{w}_{\tau+1}(I,q)+\sum_{I=[s,t], s\le \tau\in S(\tau+1),q} \tilde{w}_{\tau+1}(I,q)\\
&\le \log(\tau+1)\log(dTD^2G^2)\log(T)+1+\sum_{I=[s,t], s\le \tau\in S(\tau+1),q} \tilde{w}_{\tau+1}(I,q)\\
&= \log(\tau+1)\log(dTD^2G^2)\log(T)+1+\sum_{I=[s,t], s\le \tau\in S(\tau+1),q} \tilde{w}_{\tau}(I,q)(1+\eta_{I,q} r_{\tau}(I))\\
&\le \log(\tau+1)\log(dTD^2G^2)\log(T)+1+\tilde{W}_{\tau}+\sum_{I\in S(\tau),q} w_{\tau}(I,q) r_{\tau}(I)\\
&\le (\tau+1)(\log(\tau+1)+1)\log(dTD^2G^2)\log(T)+\sum_{I\in S(\tau),q} w_{\tau}(I,q) r_{\tau}(I)
\end{align*}
We complete the proof by showing that $\sum_{I\in S(\tau),q} w_{\tau}(I,q) r_{\tau}(I)\le 0$:
\begin{align*}
\sum_{I\in S(\tau),q} w_{\tau}(I,q) r_{\tau}(I)&=W_{\tau} \sum_{I\in S(\tau),q} p_{\tau}(I,q) (\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}(I,q)))\\
&\le W_{\tau} \sum_{I\in S(\tau),q} p_{\tau}(I,q) (\sum_{J\in S(\tau),q} w_{\tau}(J,q)\ell_{\tau}(x_{\tau}(J,q))/W_{\tau}-\ell_{\tau}(x_{\tau}(I,q)))\\
&=0
\end{align*}
\end{proof}
With this lemma in hand, we only need to prove that for any $I=[s,t]$,
$$
\sum_{\tau=s}^t r_{\tau}(I)= O\left(\sqrt{\log(T)}\max\left\{D G\sqrt{\log(T)}, \sqrt{ \sum_{\tau=s}^t (\nabla_{\tau}^{\top} (x_{\tau}-x_{\tau}(I)))^2}\right\}\right)
$$
By the lemma above, we have that
$$
\tilde{w}_{t+1}(I)\le \tilde{W}_{t+1}\le (t+1)(\log(t+1)+1)\log(dTD^2G^2)\log(T)
$$
hence
$$
\log(\tilde{w}_{t+1}(I))\le \log(t+1)+\log(\log(t+1)+1)+\log (\log(dTD^2G^2))+\log(\log(T))
$$
we also note that
$$
\tilde{w}_{t+1}(I)=\prod_{\tau=s}^t (1+\eta_{I,q} r_{\tau}(I))
$$
By using the fact that $\log(1+x)\ge x-x^2, \forall x\ge -1/2$ and
$$
|\eta_{I,q} r_{\tau}(I)|\le \frac{1}{4GD} \|x_{\tau}-x_{\tau}(I,q)\|_2 G\le 1/2
$$
we obtain for any $q$
$$
\log(\tilde{w}_{t+1}(I)) \ge \sum_{\tau=s}^t \eta_{I,q} r_{\tau}(I)-\sum_{\tau=s}^t \eta_{I,q}^2 r_{\tau}(I)^2
$$
Now we need to carefully upper bound the term $\sum_{\tau=s}^t r_{\tau}(I)^2$. By convexity we have that $r_{\tau}(I)=\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}(I))\le \nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I))$, hence
$$
\sum_{\tau=s}^t r_{\tau}(I)\le \frac{4\log(T)}{\eta_{I,q}}+4\eta_{I,q} \sum_{\tau=s}^t (\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I)))^2
$$
The optimal choice of $\eta$ is of course
$$
4\sqrt{\frac{\log(T)}{\sum_{\tau=s}^t (\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I)))^2}}
$$
When $\sum_{\tau=s}^t (\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I)))^2\le 64 G^2D^2 \log(T)$, $\eta_{I,1}$ gives the bound $O(GD\log(T))$. When $\sum_{\tau=s}^t (\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I)))^2> 64 G^2D^2 \log(T)$, there always exists $q$ such that $0.5\eta_{I,q}\le \eta\le 2\eta_{I,q}$ by the construction of $q$ so that the regret is still of order
$$
O\left(\sqrt{\log(T)}\max\left\{D G\sqrt{\log(T)}, \sqrt{ \sum_{\tau=s}^t (\nabla_{\tau}^{\top} (x_{\tau}-x_{\tau}(I)))^2}\right\}\right)
$$
which concludes our proof.
\end{proof}
\section{Proof of Corollary \ref{cor1}}
\begin{proof}
We need to carefully upper bound the term $ \nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I))$. By Cauchy-Schwarz we have that $\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I))\le \|\nabla_{\tau}\|_{H^{-1}} \|x_{\tau}-x_{\tau}(I)\|_H$ for any $H$. As a result, we have that for any $H$ which is PSD and $tr(H)\le d$,
$$
(\nabla_{\tau}^{\top}(x_{\tau}-x_{\tau}(I)))^2\le \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau} \|x_{\tau}-x_{\tau}(I)\|_H^2\le \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau} 4D^2 d
$$
where $ \|x_{\tau}-x_{\tau}(I)\|_H^2\le 4D^2 d$ follows by elementary algebra: let $H=V^{-1} M V$ be its eigendecomposition, where $V$ is an orthogonal matrix and $M$ is diagonal. Then
\begin{align*}
\|x_{\tau}-x_{\tau}(I)\|_H^2&= (x_{\tau}-x_{\tau}(I))^{\top} H (x_{\tau}-x_{\tau}(I))\\
&= (V(x_{\tau}-x_{\tau}(I)))^{\top} M V(x_{\tau}-x_{\tau}(I))\\
&\le (V(x_{\tau}-x_{\tau}(I)))^{\top} d I V(x_{\tau}-x_{\tau}(I))\\
&\le 4D^2 d
\end{align*}
Hence
$$
\sum_{\tau=s}^t r_{\tau}(I)\le \frac{4\log(T)}{\eta_{I,q}}+4\eta_{I,q} D^2 d \min_{H} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}
$$
The optimal choice of $\eta$ is of course
$$
4\sqrt{\frac{\log(T)}{ D^2 d \min_{H} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}}}
$$
When $D^2 d \min_{H} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}\le 64 G^2D^2 \log(T)$, $\eta_{I,1}$ gives the bound $O(GD\log(T))$. When $D^2 d \min_{H} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}> 64 G^2D^2 \log(T)$, there always exists $q$ such that $0.5\eta_{I,q}\le \eta\le 2\eta_{I,q}$ by the construction of $q$ so that the regret $R_1(I)$ is of order
$$
O\left(D\sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_{H\in {\mathcal H}} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}}\right\}\right)
$$
Observe that when Adagrad is used as the blackbox $\mathcal{A}$, $R_0(I)=O(R_1(I))$. Combining Lemma \ref{lem1}, Theorem \ref{thm1} and Lemma \ref{lem2n} gives the desired bound.
\end{proof}
\section{Conclusion}
In this paper we study adaptive gradient methods with local guarantees. The methodology is based on adaptive online learning, in which we contribute a novel twist on the multiplicative weights method that we show has better adaptive regret guarantees than the state of the art. This, combined with known results in adaptive gradient methods, gives an algorithm \mbox{{SAMUEL\ }} with local adaptive regret guarantees. We give a provable guarantee that \mbox{{SAMUEL\ }} achieves full-matrix second-order regret bounds for strongly adaptive regret, combining the advantages of both worlds: the optimality of Adagrad and the local guarantee of strongly adaptive methods from online learning.
We demonstrate the effectiveness and robustness of \mbox{{SAMUEL\ }} in experiments, where we show that \mbox{{SAMUEL\ }} can automatically adapt to the optimal learning rate and achieve task accuracy comparable to a fine-tuned optimizer, in a single run. While these experiments do not show an improvement over the state of the art, they show the potential of local adaptive gradient methods to be more robust to hyperparameter tuning.
\section{Exponentially-concave Loss}
For exp-concave loss functions there exist faster algorithms with logarithmic regret, such as Online Newton Step (ONS) \cite{hazan2007logarithmic}. In this section we show how to achieve logarithmic adaptive regret for all intervals with the help of another variant of MW, the Fixed Share algorithm from \cite{herbster1998tracking}.
\begin{definition}
A function $f(x)$ is called $\alpha$-exponential-concave, if $g(x)=e^{-\alpha f(x)}$ is concave in $x$.
\end{definition}
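For intuition, standard examples (well-known facts which we recall here) include the log-loss $f(x)=-\log(a^{\top}x)$ arising in universal portfolio selection, which is $1$-exp-concave since $e^{-f(x)}=a^{\top}x$ is linear, and, more generally, any $\sigma$-strongly-convex function with gradients bounded by $G$, which is $\sigma/G^2$-exp-concave.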
Our proposed algorithm maintains, for any interval $I$, a copy of an OCO algorithm ${\mathcal A}_I$ which achieves logarithmic regret over $I$. We then use the Fixed Share algorithm to achieve adaptive regret, losing only an additional logarithmic term in the regret.
\begin{algorithm}[t]
\caption{Strongly Adaptive Multiplicative-weight for Exponential-concave Loss}
\label{alg3}
\begin{algorithmic}[1]
\STATE Input: OCO algorithm ${\mathcal A}$, interval set $S$, constant $\alpha$, parameter $\delta<1/2$.
\STATE Initialize: for each $I\in S$, a copy of OCO algorithm ${\mathcal A}_{I}$.
\STATE Initialize $w_1(I)=1/|S|$ for each $I\in S$.
\FOR{$\tau = 1, \ldots, T$}
\STATE Let $x_{\tau}(I) = {\mathcal A}_I(\tau)$
\STATE Let $x_{\tau}=\sum_{I\in S(\tau)} w_{\tau}(I)x_{\tau}(I)$.
\STATE Predict $x_{\tau}$.
\STATE Receive loss $\ell_{\tau}(x_{\tau})$.
\STATE For each $I\in S$, update:
$$\tilde{w}_{\tau+1}(I)=\frac{w_{\tau}(I)e^{-\alpha \ell_{\tau}(x_{\tau}(I))}}{\sum_{J\in S} w_{\tau}(J)e^{-\alpha \ell_{\tau}(x_{\tau}(J))}} $$
\STATE For each $I\in S$, update $w_{\tau+1}(I)$ as follows.
$$
w_{\tau+1}(I)=(1-\delta) \tilde{w}_{\tau+1}(I)+\frac{\delta}{|S|}
$$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm3}
Assuming \ref{as1}, \ref{as2} and the loss $\ell_t$ is $\alpha$-exponential-concave, the regret $R_1(I)$ of the multiplicative weight algorithm part in Algorithm \ref{alg3} with $\delta=1/2T$ satisfies that for any interval $I=[s,t]$ (not necessarily in $S$),
$$
R_1(I)= O\left(\frac{\log(T)}{\alpha}\right)
$$
\end{theorem}
\begin{corollary}\label{cor2}
Assuming \ref{as1}, \ref{as2} and the loss $\ell_{\tau}$ is $\alpha$-exponential-concave, when ONS is used as the blackbox $\mathcal{A}$, the total regret $\mbox{{Regret}}(I)$ of the multiplicative weight algorithm in Algorithm \ref{alg3} with $\delta=1/2T$ satisfies that for any interval $I=[s,t]$,
$$
\mbox{{Regret}}(I)= O\left(\frac{\log^2(T)}{\alpha}\right)
$$
\end{corollary}
\section{Experiments}
In this section we demonstrate the effectiveness of our methodology on standard benchmarks in vision and NLP domains. We first implement Algorithm \ref{alg1} on CIFAR-10 image classification and achieve better results than constant learning rate baselines. However, Algorithm \ref{alg1} requires computation and memory cost growing linearly in $T$, which is less efficient in practice.
To that end, we modify our theoretical Algorithm \ref{alg1} in several ways to make it more practical. The first is that Algorithm \ref{alg1} requires a number of experts that grows with time; instead, we take a fixed number of experts, each with an exponential decay factor on the history. Second, we do not take a convex combination of the experts' DNN weights, but rather sample an expert and take its weights. Also, the reinitialization mechanism is different: in the original algorithm every expert's state is initialized to that of the previously chosen one once it becomes active; since we no longer have `hard' intervals, we instead reinitialize all experts at fixed time-points.
These are significant simplifications, and the heuristic algorithm we get can be summarized simply as follows: run several copies of your optimization method; at each evaluation/switching time, keep only the best copy; then re-spawn new copies with the same parameters, continue evaluating them with the candidate learning rates, and so forth.
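The following Python-style sketch summarizes this heuristic; it is purely illustrative, and the helpers \texttt{make\_optimizer}, \texttt{train\_one\_period}, and \texttt{evaluate} are hypothetical placeholders rather than part of our implementation.
\begin{verbatim}
# Illustrative sketch of the simplified heuristic described above.
def run_heuristic(params, learning_rates, num_periods,
                  make_optimizer, train_one_period, evaluate):
    for period in range(num_periods):
        # Re-spawn one optimizer copy per candidate learning rate,
        # all starting from the current parameters.
        copies = [(lr, make_optimizer(params, lr)) for lr in learning_rates]
        results = []
        for lr, opt in copies:
            candidate = train_one_period(opt, params)  # run this copy
            results.append((evaluate(candidate), candidate))
        # At the switching time, keep only the best copy's parameters.
        _, params = max(results, key=lambda r: r[0])
    return params
\end{verbatim}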
The equation below is the update rule of the Adagrad variant we use in experiments; the parameter $\alpha$ represents the memory length. This can be seen as a `soft' version of Algorithm \ref{alg1}.
$$
x_{t+1}=x_t-\frac{\eta}{\sqrt{\epsilon I+\sum_{\tau=1}^t \alpha^{t-\tau} \nabla_{\tau} \nabla_{\tau}^{\top}}}\nabla_t
$$
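A minimal sketch of this update follows (our own illustrative implementation, assuming the matrix inverse square root is computed via an eigendecomposition; the damping constant \texttt{eps} is an assumption of ours and not taken from the experiments).
\begin{verbatim}
import numpy as np

def decayed_adagrad_step(x, grad, G_accum, eta, alpha, eps=1e-8):
    """One step of the exponentially-decayed full-matrix Adagrad variant:
    x <- x - eta * (eps*I + sum_s alpha^(t-s) grad_s grad_s^T)^(-1/2) grad."""
    # Decay the accumulated outer-product matrix and add the new gradient.
    G_accum = alpha * G_accum + np.outer(grad, grad)
    # Inverse square root via eigendecomposition (the matrix is symmetric PSD).
    evals, evecs = np.linalg.eigh(G_accum + eps * np.eye(len(x)))
    precond = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return x - eta * precond @ grad, G_accum
\end{verbatim}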
Properly setting $\eta$ also matters in practice. In theory we upper bound $\eta$ by $\frac{1}{4GD}$ to guarantee that the weight update works in \mbox{{SAMUEL\ }}, but it is usually unrealistic to know $G,D$ in advance. In experiments, we start with $\eta=\frac{1}{2}$ on easier datasets such as CIFAR-10 and set it smaller accordingly on larger datasets in which the loss values are larger. The key is to set $\eta$ small enough that its product with the largest loss is no more than $\frac{1}{2}$.
\begin{algorithm}[t]
\caption{\mbox{{SAMUEL\ }} experiment pseudocode for CIFAR-10 (check out Algorithm \ref{alg1})}
\label{pseudo}
\begin{algorithmic}[1]
\STATE Input: AdaGrad optimizer ${\mathcal A}$, constant Q, a set of learning rates $\{1, 0.1, 0.001, 0.0001, 0.00001\}$, reinitialize frequency K.
\STATE Initialize: for each learning rate $i \in S$, a copy of ${\mathcal A}_i$.
\STATE Set $\eta_{i, q}=\frac{1}{2^q}$ for $q\in [1,Q]$.
\STATE Initialize $w_1(i, q)=\min \{1/2, \eta_{i,q}\}$. Initialize NN params $x_0$
\FOR{$\tau = 1, \ldots, T$}
\STATE Let updated NN params $x_{\tau}(i,q) = {\mathcal A}_i(\tau)$
\STATE Let $W_{\tau}=\sum_{i,q} w_{\tau}(i,q)$.
\STATE sample $x_{\tau}$ according to $w_{\tau}(i,q)/W_{\tau}$.
\STATE Receive batch loss $\ell_{\tau}(x_{\tau})$, define $r_{\tau}(i)=\ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau}(i,q))$.
\STATE For each $i$, update $w_{\tau+1}(i,q)$ as follows.
$$w_{\tau+1}(i,q)=
w_{\tau}(i,q)(1+\eta_{i,q} r_{\tau}(i))$$
\IF{$\tau \% K = 0$}
\STATE Re-initialize $w_\tau(i, q)=\min \{1/2, \eta_{i,q}\}$
\STATE All copies ${\mathcal A}_i$ start from NN params $x_\tau$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
We demonstrate the effectiveness of our optimization method on diverse vision and language tasks commonly used for benchmarking: image classification on CIFAR-10, CIFAR-100 and ImageNet, and sentiment classification on SST-2. On all tasks, we show that \mbox{{SAMUEL\ }} does not require any learning rate schedule fine tuning to produce stable, high task accuracy.
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/ImageNet_CI.pdf}
\caption{ImageNet comparison of exhaustive searched decaying learning rate schedule and \mbox{{SAMUEL\ }}. Top: 125 parallel experiments with exhaustively searched learning rate schedules. Bottom: \mbox{{SAMUEL\ }} no tuning needed.}
\label{fig:imagenet}
\end{figure}
\subsection{\textbf{Vision tasks}}
\noindent\textbf{CIFAR-10}: We conducted image classification on the CIFAR-10 dataset. We compare a ResNet-18 \citep{he2016deep} model trained with our optimization algorithm to a model trained with AdaGrad using brute-force searched learning rate schedules. Following \cite{he2016deep}, we applied per-pixel mean subtraction, horizontal random flip, and random cropping with 4 pixel padding for CIFAR data processing and augmentation. All experiments were conducted on TPU-V2 hardware. For training, we used a batch size of 256 and 250 total epochs with a step learning rate schedule. We fixed the learning rate stepping point at epoch 125 and 200, and provided five possible candidate learning rates \{0.0001, 0.001, 0.01, 0.1, 1\} for each region. Thus an exhaustive search yielded 125 different schedules for the baseline AdaGrad method. For a fair comparison, we adopted the same learning rate changing point for our method. Our method automatically determined the optimal learning rate at the transition point without the need to exhaustively search over learning rate schedules.
We display the CIFAR-10 test accuracy curves of AdaGrad with 125 exhaustively-searched learning rate schedules, together with a single run of our method, in Fig.\ref{fig:CIFARcompare}. The top plot in Fig.\ref{fig:CIFARcompare} shows the 125 curves of all possible step learning rate schedules, the best of which achieves 94.95\% test accuracy; \mbox{{SAMUEL\ }} achieves 94.76\% test accuracy in a single run using the same seed (94.50\% on average over 10 different random seeds). The accuracy achieved by \mbox{{SAMUEL\ }} ranks in the top 3 among the 125 exhaustively searched schedules.
The point of this experiment is not to treat the 125-schedule exhaustive search as a practical scheduler to compare with. Rather, the search space is designed to be finite so that we can afford a brute-force search, and comparing \mbox{{SAMUEL\ }} with the best of these schedules justifies that \mbox{{SAMUEL\ }} can achieve comparable performance without manual fine-tuning. The results in Fig.\ref{fig:CIFARcompare} demonstrate exactly this ability of our algorithm to match the performance of the optimal offline schedule, automatically.\\
\noindent\textbf{ImageNet}:
We continue examining the performance of SAMUEL on the large-scale ImageNet dataset, which consists of 1.2M training images and 50K validation images of size 224$\times$224 labeled for 1000 classes. We trained a ResNet-50 model with an exhaustive search of learning rate schedules and compare with SAMUEL. In this setting, we consider a more practical step learning rate schedule in which the learning rate decays after each stepping epoch. Specifically, the candidate learning rates are \{0.2, 0.4, 0.6, 0.8, 1.0\} in the first phase, and decay by 10$\times$ when stepping into the next phase. The total number of training epochs is 100 and the learning rate stepping positions are set at epochs 50 and 75. We adopted the pipeline from \cite{flax} for image pre-processing and model training. For both the baselines and SAMUEL, we used the SGD optimizer with Nesterov momentum of 0.9. All experiments were conducted on TPU-V2 hardware with a training batch size of 1024.
This setting differs from that of CIFAR-10 in that we use a 10$\times$ decay for the learning rate schedules of both the exhaustive search and \mbox{{SAMUEL\ }}; therefore the exhaustive search no longer contains ``obviously bad'' schedules, which makes the comparison more practical. The top plot of Fig. \ref{fig:imagenet} shows the validation accuracy curves of all possible practical schedules, among which the best accuracy is 76.32\%. At the bottom of Fig.\ref{fig:imagenet} is \mbox{{SAMUEL\ }} with no fine-tuning. The accuracy of \mbox{{SAMUEL\ }} using the same seed as the baselines is 76.22\% (76.15\% on average over 5 different random seeds). The performance comparison again validates that our algorithm is capable of achieving performance comparable to a fine-tuned scheduler on large-scale datasets.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/SST2_CI.pdf}
\caption{SST-2 sentiment classification with bi-directional LSTM. Top: 125 parallel experiments with exhaustively searched learning rate schedules. Bottom: \mbox{{SAMUEL\ }} with 10 different random seeds.}
\label{fig:SAMcurve}
\end{figure}
\subsection{\textbf{Language task}}
We also consider the language domain and conduct experiments on the Stanford Sentiment Treebank (SST-2) dataset. We used the pipeline from \citep{flax} for pre-processing the SST-2 dataset and trained a simple bi-directional LSTM text classifier. The total number of training epochs is 25, with learning rate stepping positions at epochs 15 and 20. We used SGD with momentum of 0.9 and additive weight decay of 3e-6. The training batch size for both the baseline and SAMUEL is 64. The learning rate schedule setting is the same as that of CIFAR-10.
Fig. \ref{fig:SAMcurve} shows that the best accuracy of the exhaustive search is 86.12\%, while \mbox{{SAMUEL\ }} achieves 85.55\% using the same seed (85.58\% on average over 10 different random seeds), showing that our algorithm achieves comparable performance not only on vision datasets but also on language tasks.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/stability.pdf}
\caption{stability study of \mbox{{SAMUEL\ }} with different hyperparameters. We demonstrate the stability of \mbox{{SAMUEL\ }} with 18 sets of different hyperparameters on CIFAR-10. They all converge to high evaluation accuracy.}
\label{fig:stability}
\end{figure}
\subsection{\textbf{Stability of \mbox{{SAMUEL\ }}}}
We demonstrate the stability of SAMUEL under hyperparameter tuning. Since our algorithm automatically selects the optimal learning rate, the only tunable hyperparameters are the number of learning rates $\eta$ and the number of decay factors $\alpha$. We conducted 18 trials with different hyperparameter combinations and display the test accuracy curves in Fig.\ref{fig:stability}. Specifically, we considered the number of decay factors $\alpha$ with values $\{2, 3, 6\}$ and the number of $\eta$ with values $\{5, 10, 15, 20, 25, 30\}$. As Fig.\ref{fig:stability} shows, all trials of \mbox{{SAMUEL\ }} converge to nearly the same final accuracy regardless of the exact hyperparameters. \mbox{{SAMUEL\ }}'s robustness to hyperparameter choice reduces the need to fine-tune in order to achieve high task performance in deep learning applications.
\section{Introduction}
Adaptive gradient methods have revolutionized optimization for machine learning and are routinely used for training deep neural networks. These algorithms are stochastic gradient based methods, that also incorporate a changing data-dependent preconditioner (multi-dimensional generalization of learning rate). Their empirical success is accompanied with provable guarantees: in any optimization trajectory with given gradients, the adapting preconditioner is comparable to the best in hindsight, in terms of rate of convergence to local optimality.
Their success has been a source of intense investigation over the past decade, since their introduction, with a literature spanning thousands of publications; some highlights are surveyed below.
The common intuitive understanding of their success is their ability to change the preconditioner, or learning rate matrix, per coordinate and on the fly. A methodological way of changing the learning rate allows treating important coordinates differently as opposed to commonly appearing features of the data, and thus achieve faster convergence.
In this paper we investigate whether a more refined goal can be attained: namely, can we adapt the learning rate per coordinate, and also over short time intervals? The intuition guiding this search is the rising popularity of ``exotic learning rate schedules'' for training deep neural networks. The hope is that an adaptive learning rate algorithm can automatically tune its preconditioner, on a per-coordinate and per-time basis, so as to guarantee optimal behavior even locally.
To pursue this goal, we use and improve upon techniques from the literature on adaptive regret in online learning to create a provable method that is capable of attaining optimal regret in any subinterval of the optimization trajectory. We then test the resulting method and compare it to learning a learning rate schedule from scratch.
\subsection{Informal statement of our results}
The (stochastic/sub)-gradient descent algorithm is given by the following iterative update rule:
$$ x_{\tau+1} = x_{\tau} - \eta_{\tau} \nabla_{\tau} , $$
where $x_{\tau}$ are the parameters to be optimized, $\nabla_{\tau}$ is a random variable whose expectation is the (sub)gradient of the objective, and $\eta_{\tau}$ is the learning rate parameter, which can be either a scalar or a matrix. If $\eta_{\tau}$ is a matrix, it is usually called a preconditioner. A notable example for a preconditioner is when $\eta_{\tau}$ is equal to the inverse Hessian (or second differential), which gives Newton's method.
In a nutshell, the convergence rate of adaptive gradient methods can be explained as follows. For simplicity, we state convergence rates to global optimality in convex optimization. Regret and convergence to stationary point in non-convex smooth optimization follow similarly, and are discussed in Appendix. Let $\nabla_1,...,\nabla_T$ be the gradients observed in an optimization trajectory. It is well established that stochastic gradient descent (SGD) approaches optimality at a rate of $\sim \frac{1}{\sqrt{T}}$, or more precisely,
$$ \sim \frac{ \sqrt{ \sum_{\tau} \|\nabla_{\tau} \|^{2} } }{T} . $$
Adaptive gradient methods can improve upon this rate in a subtle way. The AdaGrad algorithm (and more general adaptive gradient methods) instead achieves a rate of
$$ \sim \frac{ \sqrt{ \min_{H \in {\mathcal H}} \sum_{\tau} \|\nabla_{\tau} \|_H^{*2} } }{T} , $$
where ${\mathcal H}$ is a family of matrix norms, most commonly those with a bounded trace. Thus, adaptive gradient methods can improve upon the rate of vanilla SGD in certain geometrically favorable scenarios.
In this paper we improve upon this guarantee in terms of the local performance over any interval of the optimization trajectory. The adaptive component of adaptive gradient methods is the preconditioner, or the matrix norm by which we measure the gradients. Roughly speaking, we can take the minimum of the norms in the above expression over any subinterval of time, rather than using a single global preconditioner. This allows the algorithm to change its learning rate in any interval, and over any coordinate.
The formal guarantee is given below:
\begin{theorem}\label{thm0}
The convergence rate of Algorithm \ref{alg1} can be upper bounded by:
\begin{equation*}
\sim \tilde{O}\left( \frac{ \min_k \min_{H_1,...,H_k \in {\mathcal H}} \sum_{j=1}^k \sqrt{ \sum_{ \tau \in I_j} \|\nabla_{\tau} \|_{H_j}^{*2} } }{T}\right)
\end{equation*}
\end{theorem}
\subsubsection{Our Results in Online Learning}
The convergence results above, as well as more general convergence results in the setting of approximating stationary points in nonconvex smooth optimization, are derived using the methodology of regret in online convex optimization (OCO).
Regret measures the difference in loss between an online optimizer, and the best decision in hindsight. We consider a stronger metric of performance, called adaptive regret, which was originally proposed in the literature for learning in changing environments.
Adaptive regret is the maximum regret over any interval of time. Our main technical contribution is a strengthening of the best known adaptive regret bound for general OCO. Namely, the guarantee of Theorem \ref{thm0} is derived using the following stronger regret bound for general OCO:
\begin{theorem}\label{thm0r}
The regret of Algorithm \ref{alg1} can be upper bounded by:
\begin{equation*}
\tilde{O}\left( \min_k \min_{H_1,...,H_k \in {\mathcal H}} \sum_{j=1}^k \sqrt{ \sum_{ \tau \in I_j} \|\nabla_{\tau} \|_{H_j}^{*2} } \right)
\end{equation*}
\end{theorem}
Our new upper bound improves over all previous results in terms of adaptive regret. For general convex loss, the previously best known bound is
$\tilde{O}(\sqrt{|I|})$
which depends only on the length of the interval $I = [s,t] \subseteq [T]$, which we also denote by $|I|$ with some abuse of notation. We improve it to the adaptive geometry-dependent bound
$\tilde{O}(\min_{H\in {\mathcal H}} \sqrt{\sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}})$, analogous to AdaGrad for standard regret.
A comparison of our results in terms of adaptive regret are given in Table \ref{table:result_summary}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Algorithm &
Regret over $I=[s,t]$ \\
\hline
\cite{hazan2007adaptive} & $\sqrt{T \log T}$\\
\hline
\cite{daniely2015strongly} & $\sqrt{|I| \log^2 T}$ \\
\hline
\cite{jun2017improved} & $\sqrt{|I| \log T}$\\
\hline
\cite{cutkosky2020parameter} & $\sqrt{\sum_{\tau=s}^t \|\nabla_{\tau}\|^2}$\\
\hline
\mbox{{SAMUEL\ }} (ours)& $\min_{H\in {\mathcal H}} \sqrt{\sum_{\tau=s}^t \|\nabla_{\tau}\|_{H}^{*2}}$\\
\hline
\end{tabular}
\caption{Comparison of results. We evaluate the regret performance of the algorithms on any interval $I=[s,t]$. For the ease of presentation we hide secondary parameters and $O, \tilde{O}$ notations. Our algorithm achieves the tight bound of AdaGrad on any interval.
}
\label{table:result_summary}
\end{center}
\end{table}
\subsection{Examples of improvements}
\paragraph{Theoretical example.}
In this section we give an example of when we can expect an improvement over non-adaptive AdaGrad. Divide the whole time interval $[1,T]$ into $d$ pieces $I_1,...,I_d$, each of length $\frac{T}{d}$. For each $I_i$ we set the gradients $\nabla_{\tau}=e_i$ if $\tau\in I_i$. Then our algorithm can achieve the optimal convergence rate in each $I_i$, with the total rate upper bounded by $\tilde{O}(\frac{1}{\sqrt{T}})$: in each such interval, choosing $H_i$ (essentially) proportional to $d\, e_i e_i^{\top}$, which has trace $d$, gives $\|e_i\|_{H_i}^{*2}\approx \frac{1}{d}$, so $\min_{H\in {\mathcal H}} \sum_{ \tau \in I_i} \|\nabla_{\tau} \|_H^{*2}$ is upper bounded by $\frac{T}{d^2}$. In comparison, regular AdaGrad still leads to an $\tilde{O}(\frac{\sqrt{d}}{\sqrt{T}})$ rate.
\paragraph{Example in experiments.}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/CIFAR10_CI.pdf}
\caption{test accuracy on CIFAR-10 with exhaustively searched step learning rate schedule (top) and \mbox{{SAMUEL\ }} with 10 different random seeds (bottom). A single run of \mbox{{SAMUEL\ }} without any tuning achieves comparable test accuracy as the top performers of the 125 exhaustively searched learning rate schedules.}
\label{fig:CIFARcompare}
\end{figure}
Here we empirically demonstrate the strength of our method over AdaGrad on CIFAR-10 classification. We first conducted experiments using an off-the-shelf AdaGrad optimizer with 125 exhaustively searched step learning rate schedules. The top plot in Fig.\ref{fig:CIFARcompare} reveals that training behavior and task accuracy strongly depend on learning rate schedule fine-tuning. We next conducted an experiment using our method with only a single run and no learning rate fine-tuning. This single run, shown in Fig.\ref{fig:CIFARcompare}, achieves test accuracy and convergence behavior comparable to the top runs of the exhaustive search. Our method's ability to bypass exhaustive learning rate schedule search makes deep learning dramatically more accessible to the public, who may have limited access to compute resources for fine-tuning. We provide experiment details in the Experiment section.
\subsection{Related Work}
Our work lies in the intersection of two related areas: adaptive gradient methods, and adaptive regret algorithms for regret minimization, surveyed below.
\paragraph{Adaptive Gradient Methods.}
Adaptive gradient methods and the AdaGrad algorithm were proposed in \citep{duchi2011adaptive}. Soon afterwards followed other popular algorithms, most notable amongst them are Adam \citep{kingma2014adam}, RMSprop \citep{ tieleman2012lecture}, and AdaDelta \citep{zeiler2012adadelta}.
Numerous efforts were made to improve upon these adaptive gradient methods in terms of parallelization, memory consumption and computational efficiency of batch sizes \citep{shazeer2018adafactor,agarwal2019efficient,gupta2018shampoo,chen2019extreme}.
A multitude of rigorous analyses of AdaGrad, Adam and other adaptive methods have appeared in recent literature, notably \citep{ward2019adagrad,li2019convergence,defossez2020convergence}. However, fully understanding the theory and utility of adaptive methods remains an active research area, with diverse (and sometimes clashing) philosophies \citep{wilson2017marginal,reddi2018convergence,agarwal2020disentangling}.
\cite{bernstein2020learning} use the multiplicative weights update method for training deep neural networks that is more robust to learning rates than vanilla adaptive gradient methods.
\paragraph{Learning Rate Schedules.}
Due to the complexity of the optimization landscape of deep neural networks, tuning the learning rate during optimization is necessary to achieve state-of-the-art results. This motivates our study of a methodological way of learning the preconditioner.
On top of adaptive gradient methods, a plethora of nonstandard learning rate schedules have been proposed. The most commonly used one is the step learning rate schedule, which changes the learning rate at fixed time-points. A cosine annealing rate schedule was introduced by \cite{loshchilov2016sgdr}. Alternative learning rates were studied in \cite{agarwal2021acceleration}. Learning rate schedules which increase the learning rate over time were proposed in \cite{li2019exponential}. Learning the learning rate schedule itself was studied in \cite{wu2018wngrad}.
\paragraph{Adaptive Regret Minimization in Online Convex Optimization.}
The concept of competing with a changing comparator was pioneered in the work of \citep{HW,BW} on tracking the best expert.
Motivated by computational considerations for convex optimization,
the notion of adaptive regret was first introduced by \cite{hazan2007adaptive},
which generalizes regret by considering the regret of every interval. They also provide an algorithm Follow-The-Leading-History which attains $\tilde{O}(\sqrt{T})$ adaptive regret.
\cite{daniely2015strongly} consider the worst regret performance among all intervals of the same length, obtain interval-length dependent bounds, and give an efficient algorithm that achieves $O(\sqrt{|I| \log^2 T})$ adaptive regret. This bound was later improved by \cite{jun2017improved} to $O(\sqrt{|I| \log T})$.
Recently, \cite{cutkosky2020parameter} improves previous results to a more refined second-order bound $\tilde{O}(\sqrt{\sum_{\tau\in I}\|\nabla_{\tau}\|^2})$, but in a more restricted setting assuming the loss is linear.
For other related work, some consider the dynamic regret of strongly adaptive methods \cite{zhang2018dynamic,zhang2020minimizing}. \cite{zhang2019adaptive} considers smooth losses and proposes SACS which achieves an $O(\sum_{\tau=s}^t \ell_{\tau}(x_{\tau}) \log^2 T)$ regret bound. There are also works utilizing strongly adaptive regret in online control \cite{zhang2021adversarial,minasyan2021online}.
\paragraph{Hyperparameter Optimization.}
Related to our paper are general approaches for hyperparameter optimization (HPO), not limited to learning rate. These methods can be applied to learn the best learning rate schedule in principle, although not often applied for this purpose, and are usually reserved for learning deep architectures.
In critical applications, researchers usually use a grid search over the entire parameter space, but that quickly becomes prohibitive as the number of hyperparameters grows.
More sophisticated methods include gradient-based approaches \citep{Maclaurin15,scalablegradientbasedtuning,drmad,gradientbasedoptimizationofhyperparamters}, which are applicable to continuous hyperparameters, but not to the schedules we consider. Bayesian optimization (BO) algorithms \citep{tpe,bayesianOPT,multitaskBO,inputBO,inequBO,highDim,rbfbayesian}
tune hyperparameters by assuming a prior distribution of the loss function, and then keep updating this prior distribution based on the new observations. Each new observation is selected according to an acquisition function, which balances exploration and exploitation such that the new observation gives us a better result, or helps gain more information.
Another approach is an adaptive resource allocation technique found in the multi-armed bandit literature \citep{successive,hyperband}.
\cite{hazan2018hyperparameter} propose a spectral approach for hyperparameter optimization based on Fourier domain learning.
\section{Setting}
\elad{I also want to investigate ``parameter free" bound from the MW method, TBD today...}
We consider the problem of online convex optimization. At each round $t$, the learner picks a point $x_t\in X\subset R^d$, where the domain $X$ is convex and compact; then the adversary picks a loss function $\ell_t(x)$, and the learner suffers loss $\ell_t(x_t)$ and observes a sub-gradient $g_t$. Our goal is to achieve strongly-adaptive, full-matrix, second-order regret bounds, such that for any interval $I=[s,t]$, we have
\begin{equation}\label{eq1}
\mbox{{Regret}}(I)=\sum_{i=s}^t \ell_i(x_i) -\min_x \sum_{i=s}^t \ell_i( x)=\tilde{O}\left( \sqrt{\min_{H: H\succeq 0, tr(H)\le d} \sum_{i=s}^t g_i^{\top} H^{-1} g_i}\right)
\end{equation}
We make the following basic assumptions:
\begin{assumption}
$X$ contains zero and there exists $D>1$ such that $||x||_2\le D$ for all $x\in X$.
\end{assumption}
\begin{assumption}
There exists $G>1$ such that $\|g_t\| \le G, \forall t\in [1,T]$.
\end{assumption}
At a high level, our algorithm maintains many blackbox algorithm instances on a special set of intervals, each achieving regret $R_0(I)$ on its interval $I$. We then run a variant of the multiplicative weight algorithm on these blackbox instances to achieve regret $R_1(I)$ on any of these intervals, where $R_1(I)$ is the regret of the multiplicative weight algorithm against these blackbox instances. Note that the overall regret is $R_0(I)+R_1(I)$; we already have full-matrix regret bounds for $R_0(I)$ from algorithms such as Adagrad \cite{duchi2011adaptive}, but it is not clear how to achieve the optimal bound for $R_1(I)$, and existing results only give an unsatisfactory $O(\sqrt{|I|})$ bound \cite{daniely2015strongly}.
\elad{state the bound in terms of $R_0$ and $R_1$}
A key contribution of this paper is improving $R_1(I)$ to enjoy the optimal full-matrix bound by running several copies of the blackbox algorithm instance on the same interval $I$, which differ only in their value of $\eta$ in the multiplicative weight algorithm. We design the set of $\eta$ so that we only lose a logarithmic term due to the increased number of `experts', while at least one of them is guaranteed to have a near-optimal $\eta$, which significantly improves over previous methods.
\elad{state the bound in terms of $R_0$ and $R_1$}
The blackbox algorithm we consider is Adagrad from \cite{duchi2011adaptive}, which achieves regret for $I=[s,t]$
\begin{equation}\label{eq2}
R_0(I)=O(D d^{\frac{1}{2}} \min_{H: H\succeq 0, tr(H)\le d} \sqrt{\sum_{i=s}^t g_i^{\top} H^{-1} g_i})
\end{equation}
For the ease of presentation, we will drop the constraints on $H$ and write $\min_H \sqrt{\sum_{i=s}^t g_i^{\top} H^{-1} g_i}$ for simplicity.
\section{Algorithm}
Without loss of generality, we assume $T=2^k$ and define the geometric covering intervals:
\begin{definition}
Define $S_i=\{[1,2^i],[2^i+1,2^{i+1}],...,[2^k-2^i+1,2^k]\}$ for $0\le i\le k$. Define $S=\cup_i S_i$.
\end{definition}
\begin{definition}
Define $S(t)=\{I\in S|t\subset I\}$
\end{definition}
Our intuition is that for each $I\in S$, we hold $O(\log(dTD^2G^2))$ independent instances of Adagrad, called $B_{I,q}$, each achieving the regret in \ref{eq2} on the interval $I$. The index $q$ takes values in $[1, O(\log(dTD^2G^2))]$. It is worth noting that $q$ does not affect the behavior of $B_{I,q}$ and only takes effect in the multiplicative weight algorithm.
We then treat them as experts and run a variant of multiplicative weight (MW) algorithm so that the meta algorithm on each interval $I\in S$ has low regret $R_1(I)$ against each $B_{I,q}$. Finally we prove any interval $J\subset [1,T]$ can be written as the union of $O(\log T)$ disjoint intervals from $S$, so by Cauchy-Schwarz the regret over $J$ can be controlled by $R(I)$ as well.
The weight update rule of our MW algorithm is the following. Define the regret of our meta algorithm against $B_{I,q}$ at time $t$ to be $r_t(I,q)=\ell_t(x_t)-\ell_t(x_t(I,q))=r_t(I)$, where $x_t(I,q)$ is the action of $B_{I,q}$ at time $t$. For any interval $I=[a,b]\in S$, the weight $w_t(I,q)$ is set to zero when $t<a$. At $t=a$, we set $w_t(I,q)=\min \{1/2, \eta_{I,q}\}$. For $t\in (a,b]$, for every $q$ we update
\begin{equation}
w_t(I,q)=w_{t-1}(I,q)(1+\eta_{I,q} r_{t-1}(I))
\end{equation}
where $\eta_{I,q}=\frac{1}{2GD 2^q}$.
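To clarify the role of this grid of learning rates (a brief remark we add for intuition; the constants below follow from the definition of $\eta_{I,q}$ above), note that the $\eta_{I,q}$ decrease geometrically by a factor of $2$ as $q$ increases. Hence for any target value $\eta^*$ with
$$
\frac{1}{2GD\, 2^{Q}}\le \eta^*\le \frac{1}{4GD},\qquad Q=O(\log(dTD^2G^2)),
$$
there exists some $q\in[1,Q]$ with $\tfrac{1}{2}\eta_{I,q}\le \eta^*\le 2\eta_{I,q}$. This is the property used in the analysis when picking a (nearly) optimal $\eta$ for each interval, at the cost of only a logarithmic number of extra experts.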
\begin{algorithm}[t]
\caption{Adagrad+}
\label{alg1}
\begin{algorithmic}[1]
\STATE Input: OCO algorithm ${\mathcal A}$
\STATE Initialize $w_1(I)=1/2$ if $I=[1,1]$, and $w_1(I)=0$ otherwise.
\FOR{$t = 1, \ldots, T$}
\STATE Let $x_t(I,q) = {\mathcal A}( ...) $
\STATE Let $W_t=\sum_{I\in S(t),q} w_t(I,q)$.
\STATE Choose $I$ with probability $w_t(I,q)/W_t$.
\STATE Predict $x_t(I,q)$.
\STATE Update weights as stated above.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Analysis}
Let $\mbox{{Regret}}_I({\mathcal A}) $ be the regret upper bound of algorithm ${\mathcal A}$ on interval $I = [s,t]$, initialized at $s$.
Now the theorem should state:
$$ R(I) = O( \mbox{{Regret}}_I({\mathcal A}) \cdot \log .. + ... )$$
Corollary is theorem 3, if we plug in the AG bound.
\elad{state this theorem in terms of $R_0(I)$ and $R_1(I)$. The derive the theorem in the form of a corollary}
\begin{theorem}
The regret of Adagrad+ satisfies that for any interval $I=[s,t]$,
\begin{equation}
R(I)= O\left(D \log(T)\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{i=s}^t g_i^{\top} H^{-1} g_i}\right\}\right)
\end{equation}
\end{theorem}
\begin{proof}
We will first prove such regret guarantee for any interval $I\in S$, which can be divided into the sum of the regret of $B_I$ and the regret of the MW algorithm.
Then we prove the regret bound for any interval $J$.
For any interval $I=[s,t]\in S$, we already know that the regret of $B_I$ on $I$ is $R_0(I)=O(D d^{\frac{1}{2}} \min_H \sqrt{\sum_{i=s}^t g_i^{\top} H^{-1} g_i})$. If we are able to prove that the regret of the MW algorithm is
\begin{equation}
O\left(D \sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{i=s}^t g_i^{\top} H^{-1} g_i}\right\}\right)
\end{equation}
then the overall regret of Adagrad+ on $I$ is the same as above. Notice that any interval $J=[a,b]$ can be written as the union of at most $\log(T)+1$ disjoint intervals in $S$; by Cauchy Schwarz we have that
\begin{equation}
(\sum_{i=1}^n \sqrt{x_i})^2\le n \sum_{i=1}^n x_i
\end{equation}
thus the regret on $J$ can be upper bounded by:
\begin{equation}
O\left(D \log(T)\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{i\in J} g_i^{\top} H^{-1} g_i}\right\}\right)
\end{equation}
We define the pseudo weight $\tilde{w}_i(I,q)=w_i(I,q)/\eta_{I,q}$ for $i\le t$, and for $i>t$ we just set $\tilde{w}_i(I,q)=\tilde{w}_t(I,q)$. To proceed, we need the following lemma:
\begin{lemma}
Define $\tilde{W}_t=\sum_{I\in S(t)} \tilde{w}_t(I)$, then we have that
\begin{equation}
\tilde{W}_t\le t (\log(t)+1)\log(dTD^2G^2)
\end{equation}
\end{lemma}
\begin{proof}
The proof is by induction. For $t=1$ it is trivial because on the interval $[1,1]$ the number of experts is exactly the number of possible $q$s. Now we assume it holds for all $t'\le t$. We have
\begin{align*}
\tilde{W}_{t+1}&=\sum_{I\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&=\sum_{I=[t+1,s]\in S(t+1),q} \tilde{w}_{t+1}(I,q)+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&\le \log(t+1)\log(dTD^2G^2)+1+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_{t+1}(I,q)\\
&= \log(t+1)\log(dTD^2G^2)+1+\sum_{I=[k,s], k\le t\in S(t+1),q} \tilde{w}_t(I,q)(1+\eta_{I,q} r_t(I))\\
&\le \log(t+1)\log(dTD^2G^2)+1+\tilde{W}_t+\sum_{I\in S(t),q} w_t(I,q) r_t(I)\\
&\le (t+1)(\log(t+1)+1)\log(dTD^2G^2)+\sum_{I\in S(t),q} w_t(I,q) r_t(I)
\end{align*}
We complete the proof by showing that $\sum_{I\in S(t),q} w_t(I,q) r_t(I)=0$:
\begin{align*}
\sum_{I\in S(t),q} w_t(I,q) r_t(I)&=W_t \sum_{I\in S(t),q} p_t(I,q) (\ell_t(x_t)-\ell_t(x_t(I,q)))\\
&=W_t(\ell_t(x_t)-\ell_t(x_t))=0
\end{align*}
\end{proof}
With this lemma in hand, we only need to prove that for any $I=[k,s]$,
\begin{equation}
\sum_{t=k}^s r_t(I)= O\left(D\sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
By the lemma above, we have that
\begin{equation}
\tilde{w}_{s+1}(I)\le \tilde{W}_{s+1}\le (s+1)(\log(s+1)+1)\log(dTD^2G^2)
\end{equation}
hence
\begin{equation}
\log(\tilde{w}_{s+1}(I))\le \log(s+1)+\log(\log(s+1)+1)+\log (\log(dTD^2G^2))
\end{equation}
we also note that
\begin{equation}
\tilde{w}_{s+1}(I)=\prod_{t=k}^s (1+\eta_{I,q} r_t(I))
\end{equation}
By using the fact that $\log(1+x)\ge x-x^2, \forall x\ge -1/2$ and
\begin{equation}
|\eta_{I,q} r_t(I)|\le \frac{1}{4GD} ||x_t-x_t(I,q)||_2 G\le 1/2
\end{equation}
we obtain for any $q$
\begin{equation}
\log(\tilde{w}_{s+1}(I)) \ge \sum_{t=k}^s \eta_{I,q} r_t(I)-\sum_{t=k}^s \eta_{I,q}^2 r_t(I)^2
\end{equation}
Now we need to carefully upper bound the term $\sum_{t=k}^s r_t(I)^2$. By convexity we have that $r_t(I)=\ell_t(x_t)-\ell_t(x_t(I))\le g_t^{\top}(x_t-x_t(I))$, and also $g_t^{\top}(x_t-x_t(I))\le ||g_t||_{H^{-1}} ||x_t-x_t(I)||_H$ for any $H$. As a result, we have that for any $H$ which is PSD and $tr(H)\le d$,
\begin{equation}
r_t(I)^2\le g_t^{\top} H^{-1} g_t ||x_t-x_t(I)||_H^2\le 4g_t^{\top} H^{-1} g_t D^2 d
\end{equation}
where $ ||x_t-x_t(I)||_H^2\le 4D^2 d$ is by elementary algebra. Hence
\begin{equation}
\sum_{t=k}^s r_t(I)\le \frac{2\log(s+1)+\log(\log(dTD^2G^2))}{\eta_{I,q}}+4\eta_{I,q} D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t
\end{equation}
The optimal choice of $\eta$ is of course
\begin{equation}
\sqrt{\frac{8\log(s+1)+4\log(\log(dTD^2G^2))}{ D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t}}
\end{equation}
When $D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t\le 32 G^2D^2 \log(T)$, $\eta_{I,1}$ gives the bound $O(GD\log(T))$. When $D^2 d \min_{H} \sum_{t=k}^s g_t^{\top} H^{-1} g_t> 32 G^2D^2 \log(T)$, there always exists $q$ such that $0.5\eta_{I,q}\le \eta\le 2\eta_{I,q}$ so that the regret is still of order
\begin{equation}
O\left(D\sqrt{\log(T)}\max\left\{G\sqrt{\log(T)}, d^{\frac{1}{2}}\sqrt{ \min_H \sum_{t=k}^s g_t^{\top} H^{-1} g_t}\right\}\right)
\end{equation}
which concludes our proof.
\end{proof}
\begin{rem}
We notice that the algorithms with or without $q$ should have very similar behaviours, because $B_{I,q}$ do exactly the same predictions on $I$ whatever $q$ is. Introducing the index $q$ is for the ease of analysis and only worsens the optimal full-matrix bound \cite{duchi2011adaptive} by a small $\log^{\frac{3}{2}}(T)$ factor.
\end{rem}
\section{Deriving Local Optima from Regret}
Though our theory so far is mostly for the convex setting, most practical optimization problems have non-convex loss functions, and it's important to derive convergence guarantees for the non-convex setting as well. The goal is now to find an approximate first order stationary point $x_{\tau}$ with small $\|\nabla_{\tau}\|_2$. In this section, we give a brief discussion on how to reduce the convergence rate of finding a first-order stationary point of a non-convex function $\ell$, to the regret bound of $\ell$.
In a nutshell, we adopt a method like GGT \cite{agarwal2019efficient}, a proximal-point-style algorithm that solves a sequence of convex sub-problems and is guaranteed to output an approximate stationary point. We assume that $\ell(x)$ is $\beta$-smooth and $\ell(x_1)-\min_x \ell(x)\le M$. The use of Algorithm \ref{alg1} can accelerate the convergence of each sub-problem, i.e., make $w_{\tau}$ smaller. The following proposition follows directly from Theorem \ref{thm0}.
\begin{algorithm}[t]
\caption{Finding Stationary Point with \mbox{{SAMUEL\ }}}
\label{alg4}
\begin{algorithmic}
\STATE Input: non-convex loss function $\ell$, horizon $T$, $\lambda\ge \frac{\beta}{2}$.
\FOR{$\tau = 1, \ldots, T$}
\STATE Let $\ell_{\tau}(x)=\ell(x)+\lambda\|x-x_{\tau}\|_2^2$.
\STATE Update $x_{\tau+1}$ to be the output of Algorithm \ref{alg1} with $\mathcal{A}$ to be Adagrad, starting at $x_{\tau}$, for $w_{\tau}$ steps.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{proposition}
$\ell_{\tau}(x_{\tau+1})-\min_x \ell_{\tau}(x)=$
$$
\tilde{O}\left( \frac{ \sqrt{ \min_k \min_{H_1,...,H_k \in {\mathcal H}} \sum_{j=1}^k \sum_{ \tau \in I_j} \|\nabla_{\tau} \|_{H_j}^{*2} } }{w_{\tau}}\right)
$$
\end{proposition}
And we define the adaptive ratio $\mu(w_{\tau})$ to be
$$
\mu(w_{\tau})=\sqrt{\frac{ \min_k \min_{H_1,...,H_k \in {\mathcal H}} \sum_{j=1}^k \sum_{ \tau \in I_j} \|\nabla_{\tau} \|_{H_j}^{*2} }{w_{\tau}(\ell(x_0)-\min_x \ell(x))}}
$$
which quantifies the improvement of our adaptive algorithm by its advantage over the usual worst-case bound
of vanilla SGD/Adagrad in $w_{\tau}$ rounds; see \cite{agarwal2019efficient}, whose proof idea we follow, for more details. We are now ready to analyze the convergence rate of Algorithm \ref{alg4}. We begin by proving the following useful property for any $\eta>0$:
\begin{align*}
\ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)&\ge \ell(x_{\tau})-\ell_{\tau}(x_{\tau}-\eta \nabla_{\tau})\\
&\ge \eta \|\nabla_{\tau}\|_2^2 -\frac{\beta \eta^2}{2}\|\nabla_{\tau}\|_2^2 -\lambda \eta^2 \|\nabla_{\tau}\|_2^2
\end{align*}
Setting $\eta=\frac{1}{\beta+2\lambda}$, we have that
\begin{equation}\label{eq1}
\ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)\ge \frac{\|\nabla_{\tau}\|_2^2 }{2(\beta+2\lambda)}
\end{equation}
Meanwhile, we have the following bound
\begin{align*}
\ell(x_{\tau})-\ell(x_{\tau+1})&\ge \ell_{\tau}(x_{\tau})-\ell_{\tau}(x_{\tau+1})\\
&=\ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)-(\ell_{\tau}(x_{\tau+1})-\min_x \ell_{\tau}(x))\\
&\ge \ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)-\mu(w_{\tau}) \sqrt{\frac{\ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)}{w_{\tau}}}
\end{align*}
Fix $\epsilon>0$, denote $w_{\tau}(\epsilon)$ to be the smallest integer that makes
$$
\frac{ \sqrt{ \min_k \min_{H_1,...,H_k \in {\mathcal H}} \sum_{j=1}^k \sum_{ \tau \in I_j} \|\nabla_{\tau} \|_{H_j}^{*2} } }{w_{\tau}(\epsilon)(\ell(x_0)-\min_x \ell(x))}\le \sqrt{\frac{\epsilon^2}{8(\beta+2\lambda)}}
$$
Suppose for contradiction now, that for all $\tau$, $\|\nabla_{\tau}\|_2> \epsilon$, then $\ell(x_{\tau})-\ell(x_{\tau+1})\ge \frac{\ell_{\tau}(x_{\tau})-\min_x \ell_{\tau}(x)}{2} \ge \frac{\|\nabla_{\tau}\|_2^2 }{4(\beta+2\lambda)}$ by property \ref{eq1} and the definition of $w_{\tau}(\epsilon)$. Summing over $[1,T]$ we get
$$
\ell(x_1)-\ell(x_{T+1})\ge \frac{T\epsilon^2}{4(\beta+2\lambda)}
$$
If we set $T=\frac{4M(\beta+2\lambda)}{\epsilon^2}$, then the above inequality leads to a contradiction. Therefore, within $\sum_{\tau=1}^T w_{\tau}(\epsilon)$ calls of Algorithm \ref{alg4}, our algorithm is guaranteed to output some $x_{\tau}$ such that $\|\nabla_{\tau}\|\le \epsilon$. We can rewrite the number of calls in terms of the adaptive ratio as $O(\frac{\overline{\mu(w_{\tau}(\epsilon))}}{\epsilon^4})$, keeping only the dependence on $\epsilon$ and letting $\overline{\mu(w_{\tau}(\epsilon))}$ denote the average of all $\mu(w_{\tau}(\epsilon))$. Compared with the $O(\frac{1}{\epsilon^4})$ convergence rate of SGD, we improve whenever the optimization trajectory is more adaptive.
\begin{theorem}[Informal]
The convergence rate of Algorithm \ref{alg4}, is $O(\frac{\overline{\mu(w_{\tau}(\epsilon))}}{\epsilon^4})$ ignoring parameters except $\epsilon$.
\end{theorem}
\section{Setting and Preliminaries}
We consider the problem of online convex optimization. At each round $\tau$, the learner outputs a point $x_{\tau}\in \ensuremath{\mathcal K}$ for some convex domain $\ensuremath{\mathcal K} \subset R^d$, then suffers a convex loss $\ell_{\tau}(x_{\tau})$ chosen by the adversary. In the most common setting, the learner gains access to the sub-gradients $\nabla_{\tau}$ of $\ell_{\tau}(\cdot)$ at any $x_{\tau}$.
The goal of the learner in OCO is to minimize regret, defined as
$$
\mbox{{Regret}} = \sum_{\tau=1}^T \ell_{\tau}(x_{\tau}) -\min_{x\in \ensuremath{\mathcal K}} \sum_{\tau=1}^T \ell_{\tau}( x) .
$$
Henceforth we make the following basic assumptions for simplicity (these assumptions are known in the literature to be removable):
\begin{assumption}\label{as1}
There exists $D, D_{\infty}>1$ such that $\|x\|_2\le D$ and $\|x\|_{\infty}\le D_{\infty}$ for any $x\in \ensuremath{\mathcal K}$.
\end{assumption}
\begin{assumption}\label{as2}
There exists $G>1$ such that $\|\nabla_{\tau}\|_2 \le G, \forall \tau \in [1,T]$.
\end{assumption}
For any PSD matrix $H$, we define the norm $\|\nabla\|_H$ to be:
$$\|\nabla\|_H=\sqrt{\nabla^{\top} H \nabla}$$
Its dual norm is $\|\nabla\|_H^*=\sqrt{\nabla^{\top} H^{-1} \nabla}$. In particular, we denote ${\mathcal H}=\{ H \mid H\succeq 0,\ {\rm tr}(H)\le d\}$. Our construction also requires an optimal blackbox online learning algorithm. We use Adagrad \cite{duchi2011adaptive}, which achieves the following regret when run on an interval $I=[s,t]$:
$$
\mbox{{Regret}}(I)=O\left(D d^{\frac{1}{2}} \min_{H\in {\mathcal H}} \sqrt{\sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}}\right)
$$
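For intuition (this is not needed for our results), the minimisation over ${\mathcal H}$ in this bound can be carried out in closed form: writing $G_I=\sum_{\tau=s}^t \nabla_{\tau}\nabla_{\tau}^{\top}$ and assuming $G_I\succ 0$ (the general case is handled in \cite{duchi2011adaptive}), the minimiser is attained at $H\propto G_I^{1/2}$ and
$$
\min_{H\in {\mathcal H}} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}=\frac{1}{d}\left({\rm tr}\, G_I^{1/2}\right)^2,
$$
which is the familiar full-matrix Adagrad quantity. Restricting $H$ to diagonal matrices instead gives $\frac{1}{d}\big(\sum_{i=1}^d\sqrt{\sum_{\tau=s}^t \nabla_{\tau,i}^2}\big)^2$, which, combined with the $D d^{\frac{1}{2}}\sqrt{\cdot}$ prefactor above, recovers Adagrad's usual per-coordinate regret bound.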
\subsection{Adaptive regret}
Classical online learning algorithms usually focus on the regret over the entire trajectory of optimization. Our goal is more nuanced and requires a refined performance measure. In particular, we want to construct an algorithm such that, for any interval $I$, the regret with respect to the local optimum is (almost) as good as when $I$ is treated as the whole horizon. Formally, define
$$
\textbf{Adaptive-Regret}=\sup_{I=[s,t]\subset [1,T]} \sum_{\tau=s}^t \ell_{\tau}(x_{\tau})-\min_{x^*\in X}\sum_{\tau=s}^t \ell_{\tau}(x^*) .
$$
However, in the well-studied setting of online convex optimization, the optimal (non-adaptive) regret is known to be
$$
\tilde{O} \left( \sqrt{ \min_{ H \in {\mathcal H}} \sum_{\tau=1}^T \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}} \right)
$$
achieved by Adagrad \cite{duchi2011adaptive}. Therefore all existing bounds for strongly adaptive regret are still suboptimal in view of their non-adaptive counterpart.
In contrast, our algorithm \mbox{{SAMUEL\ }} closes this gap and attains an adaptive regret bound of
$$
\tilde{O} \left( \sqrt{ \min_{ H \in {\mathcal H}} \sum_{\tau=s}^t \nabla_{\tau}^{\top} H^{-1} \nabla_{\tau}} \right)
$$
Thus, the minimization is also over the norm in which the gradients are measured. Our bound is especially superior in cases where the gradients $\nabla_{\tau}$ are mostly sparse; see the examples in the introduction.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,320 |
\section{#1}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\topmargin 0pt
\advance \topmargin by -\headheight
\advance \topmargin by -\headsep
\textheight 8.9in
\oddsidemargin 0in
\evensidemargin \oddsidemargin
\marginparwidth 0.5in
\textwidth 6.5in
\advance\hoffset by -3mm
\advance\voffset by 8mm
\def\rm Vol{\rm Vol}
\def\rm constant{\rm constant}
\def\rm Herm{\rm Herm}
\def\mathcal{D}{\mathcal{D}}
\defSO(d,1){SO(d,1)}
\defSO(d-1,1){SO(d-1,1)}
\def{\rm tr}{{\rm tr}}
\def\mathcal{P}{\mathcal{P}}
\def\mathcal{M}{\mathcal{M}}
\def\Bbb R{\Bbb R}
\def\Bbb C{\Bbb C}
\def\Bbb H{\Bbb H}
\def\Bbb O{\Bbb O}
\def\Bbb K{\Bbb K}
\def\Bbb Z{\Bbb Z}
\def\Bbb E{\Bbb E}
\def\Bbb P{\Bbb P}
\def\Bbb I{\Bbb I}
\def\Bbb W{\Bbb W}
\defe^0{e^0}
\def{\rm irr}{{\rm irr}}
\begin{document}
\title{Deformed General Relativity and Torsion}
\author{Gary W. Gibbons\footnote{\tt gwg1@damtp.cam.ac.uk}, Steffen Gielen\footnote{\tt sg452@damtp.cam.ac.uk}}
\affiliation{D.A.M.T.P., Cambridge University, Wilberforce Road, Cambridge CB3 0WA, U.K.}
\begin{abstract}
We argue that the natural framework for embedding the ideas of deformed, or doubly, special relativity (DSR) into a curved spacetime is a generalisation of Einstein-Cartan theory, considered by Stelle and West. Instead of interpreting the noncommuting ``spacetime coordinates" of the Snyder algebra as endowing spacetime with a fundamentally noncommutative structure, we are led to consider a connection with torsion in this framework. This may lead to the usual ambiguities in minimal coupling. We note that observable violations of charge conservation induced by torsion should happen on a time scale of $10^3$ s, which seems to rule out these modifications as a serious theory. Our considerations show, however, that the noncommutativity of translations in the Snyder algebra need not correspond to noncommutative spacetime in the usual sense.
\\
\\Keywords: doubly special relativity, Cartan geometry, Einstein-Cartan theory, torsion, noncommutative geometry
\end{abstract}
\pacs{02.20.Sv, 04.50.Kd, 04.60.Bc}
\maketitle
\section{Introduction}
\label{intro}
It is commonly assumed that quantum gravity sets a fundamental length scale, the Planck scale \cite{planck}, which can not be resolved by any physical experiment. Different approaches to quantum gravity, such as string theory or loop quantum gravity, incorporate such a scale. This leads to the idea that some kind of ``space discreteness" should be apparent even in a low-energy ``effective" theory.
The idea of putting quantum mechanics on a discrete lattice\footnote{with spacing equal to the Compton wavelength of the proton, $l_c\approx 1.3$ fm} seems to have been first considered by Heisenberg in the spring of 1930 \cite{heisenberg}, in an attempt to remove the divergence in the electron self-energy. Because the absence of continuous spacetime symmetries leads to violations of energy and momentum conservation, this approach was not pursued further, but later in the same year he considered modifying the commutation relations involving position operators instead \cite{heisenberg}.
A fundamental length scale is absent in special relativity, where two observers will in general not agree on lengths or energies they measure. Hence the usual ideas of Lorentz and Poincar\'e invariance must be modified in some way. Snyder observed \cite{snyder} that this could be done by deforming the Poincar\'e algebra into the de Sitter algebra, i.e. considering the isometry group of a (momentum) space of constant curvature. From an algebraic viewpoint, if one maintains the structure of a Lie algebra and considers deformations of the Poincar\'e algebra, the de Sitter algebra is the unique way of implementing a modified kinematic framework \cite{bacry1}.
A $d$-dimensional de Sitter momentum space with curvature radius $\kappa$ is defined as the submanifold of a $(d+1)$-dimensional flat space with metric signature ($d,1$) by
\begin{equation}
(P^1)^2 + (P^2)^2 + \ldots + (P^{d-1})^2 - (P^d)^2 + (P^{d+1})^2 = \kappa^2\,,
\label{ads}
\end{equation}
where $\kappa$ has dimensions of mass. Its isometry group is generated by the algebra
\begin{eqnarray}
[M_{ab},M_{cd}] =\eta_{ac}M_{bd}+\eta_{bd}M_{ac}-\eta_{bc}M_{ad}-\eta_{ad}M_{bc}\,, \nonumber
\\ \left[ X_{a},M_{bc} \right]=\eta_{ac}X_b-\eta_{ab}X_c\,, \quad [X_a,X_b]=\frac{1}{\kappa^2}M_{ab}\,.
\label{algebra}
\end{eqnarray}
Here $M_{ab}$ correspond to a Lorentz subalgebra of the de Sitter algebra, while $X_a\equiv\frac{1}{\kappa}M_{d+1,a}$ are interpreted as (noncommuting) translations. These translations are then interpreted as corresponding to coordinates on spacetime; Snyder thought of operators acting on a Hilbert space. Since the operators $X_1,X_2$ and $X_3$ correspond to rotations in the $(d+1)$-dimensional space, their spectrum is discrete. In this way, one obtains ``quantised spacetime", while maintaining Lorentz covariance.
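As a quick check of the last relation in (\ref{algebra}), note that with $X_a=\frac{1}{\kappa}M_{d+1,a}$, and with $\eta_{d+1\,d+1}=+1$ and $\eta_{a\,d+1}=0$ in the signature of (\ref{ads}), the first commutation relation gives
$$
[X_a,X_b]=\frac{1}{\kappa^2}\,[M_{d+1\,a},M_{d+1\,b}]=\frac{1}{\kappa^2}\left(\eta_{d+1\,d+1}M_{ab}+\eta_{ab}M_{d+1\,d+1}-\eta_{a\,d+1}M_{d+1\,b}-\eta_{d+1\,b}M_{a\,d+1}\right)=\frac{1}{\kappa^2}M_{ab}\,,
$$
so the noncommutativity of the translations simply reflects the fact that they are generators involving the extra $(d+1)$-th direction of the ambient space.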
One can give explicit expressions for the algebra elements by choosing coordinates on de Sitter space (\ref{ads}). The choice made by Snyder is taking Beltrami coordinates
\begin{equation}
p^1=\kappa\frac{P^1}{P^{d+1}}\,,\;p^2=\kappa\frac{P^2}{P^{d+1}}\,,\ldots\,,\;p^d=\kappa\frac{P^d}{P^{d+1}}\,,
\end{equation}
whence one has $(P^{d+1})^2=\kappa^4/(\kappa^2+\eta_{ab}p^a p^b)$ to satisfy (\ref{ads}), and $\eta_{ab}p^a p^b \ge -\kappa^2$, corresponding to an apparent maximal mass if $p^a$ were interpreted as Cartesian coordinates on a Minkowski momentum space. (Up to this point one could in principle have chosen anti-de Sitter instead of de Sitter space. Then this inequality becomes $\eta_{ab}p^a p^b \le \kappa^2$, which perhaps seems less motivated physically.) A necessary sign choice means that these coordinates only cover half of de Sitter space. In these coordinates, the translation generators
\begin{equation}
X_a=\frac{1}{\kappa}\left(P^{d+1}\frac{\partial}{\partial P^a}-P_a\frac{\partial}{\partial P^{d+1}}\right)=\frac{\partial}{\partial p^a}+\frac{1}{\kappa^2}p_a p^b\frac{\partial}{\partial p^b}
\end{equation}
generate ``displacements" in de Sitter space. (In this notation, indices are raised and lowered with $\eta_{ab}$, the $d$-dimensional Minkowski metric, so that $p_a=\eta_{ab}p^b$.)
The motivation behind these ideas was to cure the infinities of quantum field theory, which evidently arise from allowing arbitrary high momenta (or short distances). In a somewhat similar spirit, Gol'fand suggested \cite{golfand} to define quantum field theory on a momentum space of constant curvature, using Beltrami coordinates as momentum variables. This makes the volume of the corresponding Riemannian space finite and so presumably leads to convergent loop integrals in the Euclideanised theory. The consequences for standard quantum field theory were further explored in \cite{kada,golfand2}.
Gol'fand only assumed that $\kappa\gg m$ for all elementary particles; thinking of quantum gravity, one would perhaps identify $\kappa$ with the Planck scale, whereas the original authors seem to have thought of the Fermi scale.
The induced metric on de Sitter space in terms of the coordinates $p^a$ is
\begin{equation}
g_{nr}=\frac{\kappa^2}{\kappa^2+p\cdot p}\left(\eta_{nr}-\frac{p_n p_r}{\kappa^2+p\cdot p}\right)\,,
\label{metric}
\end{equation}
where $p\cdot p\equiv \eta_{cd}p^c p^d$. The metric (\ref{metric}) becomes singular when $p \cdot p\rightarrow -\kappa^2$, and negative definite when extended to what Gol'fand calls the exterior region $p \cdot p < -\kappa^2$. In four dimensions,
\begin{equation}
\det g = -\kappa^{10}(\kappa^2+p\cdot p)^{-5}\,,
\end{equation}
and the volume element is $d^4 p\,\kappa^5(\kappa^2+p\cdot p)^{-5/2}$.
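A quick way to obtain this determinant is to note that (\ref{metric}) is a rank-one modification of $\eta_{nr}$: with $A=\frac{\kappa^2}{\kappa^2+p\cdot p}$, the matrix determinant lemma gives $\det\left(\eta-\frac{p\,p^{\top}}{\kappa^2+p\cdot p}\right)=\det\eta\,\left(1-\frac{p\cdot p}{\kappa^2+p\cdot p}\right)$, so that in four dimensions
$$
\det g=A^4\,\det\eta\,\frac{\kappa^2}{\kappa^2+p\cdot p}=-\frac{\kappa^{10}}{(\kappa^2+p\cdot p)^{5}}\,,
$$
in agreement with the expressions above.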
In Gol'fand's approach (assuming $d=4$ of course), the standard Feynman rules were modified by replacing the addition of momenta $p$ and $k$ at a vertex by
\begin{equation}
(p(+)k)^a=\frac{\kappa}{\kappa^2 - p\cdot k}\left(p^a\sqrt{\kappa^2+k\cdot k}+k^a\left(\kappa-\frac{p\cdot k}{\kappa+\sqrt{\kappa^2+k\cdot k}}\right)\right)\,,
\end{equation}
which corresponds to a translation by $k$ of the vector $p$. (Again $p\cdot k \equiv \eta_{ab}p^a k^b$, etc.) It was also noted that spinors now transform under ``displacements" as well, which is made more explicit in \cite{kada} and \cite{golfand2}. As is well known, five-dimensional Dirac spinors still have four components and the matrix $\gamma^5$ appears in the Dirac Lagrangian, hence there is no chirality. This alone seems to imply that the original Gol'fand proposal cannot be used for an appropriate model of the known particles.
Gol'fand's approach is very different from more recent approaches to quantum field theory on noncommutative spaces (see e.g. \cite{nekrasov}) in that the field theory is defined on a momentum space which is curved, but neither position nor momentum space are noncommutative in the usual sense.
In this paper, we attempt to embed the old idea of a curved momentum space into general relativity by describing a geometric framework in which an internal de Sitter space is associated to a curved spacetime. This internal space replaces the usual (co-)tangent space in general relativity. We will make use of the interpretation of Einstein-Cartan theory given by Stelle and West \cite{stellewest}. Since we are staying within conventional differential geometry, this formalism provides an alternative to the usual interpretation of the Snyder algebra as describing a noncommutative spacetime.
The paper is organised as follows: We give a brief introduction into the ideas of {\it deformed (doubly) special relativity} (DSR) most relevant to the following discussion in section \ref{dsr}. In section \ref{gauge} we outline how Einstein-Cartan theory can be formulated as a gauge theory of gravity with the de Sitter group $SO(d,1)$ as gauge group; this theory includes a gauge field that plays a crucial role in what follows. In this section we essentially rederive the results of Stelle and West, using a different set of coordinates which we find more closely related to the DSR literature. Since we claim that this geometric framework can be used to generalise the ideas of DSR, we show in section \ref{synth} how, if spacetime is taken to be Minkowski space, the simplest non-trivial choice of zero section leads to a connection with torsion, providing a geometric interpretation for the noncommuting ``coordinates" appearing in the Snyder algebra. We close with a discussion of our results and their possible physical implications, which show that the theory, at least in its given form, is not physically viable. We conclude that there may be different physical interpretations of algebraic commutation relations such as those used in DSR.
Since the two most obvious extensions of general relativity are admitting either connections with torsion or non-metric connections, we briefly discuss the theory of a torsion-free non-metric connection, known as symmetric affine theory, in an appendix. It does not fit as well into a description by Cartan geometry as the case highlighted in this paper. A more mathematical account of Cartan geometry is given in a second appendix.
We use units in which $\hbar=c=1$, such that momenta have the dimension of inverse length. Lower-case Latin indices such as $a,b,c$ denote either Lorentz indices or label coordinates, as will hopefully be clear from the context.
\setcounter{equation}{0}
\section{Deformed Special Relativity}
\label{dsr}
The idea that the classical picture of Minkowski spacetime should be modified at small length scales or high energies was re-investigated in more recent times, motivated by the apparent existence of particles in ultra high energy cosmic rays whose energies could not be explained within special relativity \cite{experiment}. The proposed framework of deformed special relativity (DSR) \cite{amelino} modifies the Poincar\'e algebra, introducing an energy scale $\kappa$ into the theory, in addition to the speed of light $c$. This leads to a quantum ($\kappa$-)deformation of the Poincar\'e algebra \cite{majid}, with the parameter $\kappa$ associated with the newly introduced scale.
It was soon realised \cite{kowalski} that this deformed algebra is the algebra of the isometry group of de Sitter space, and that the symmetries of DSR could hence be obtained by identifying momentum space with de Sitter space, identifying $X_a$ as the generators of translations on this space. The constructions of DSR thus appear to be a resurrection of Snyder's and Gol'fand's ideas. We take this observation as the defining property of DSR, and will seek to describe a framework in which momentum space, or rather the (co-)tangent space in general relativity, is replaced by an ``internal" de Sitter space. We will see that this can best be done using Cartan geometry.
When discussing DSR as a modification of special relativity, we take the view that special relativity is defined as a kinematic framework with preferred inertial systems, related to one another by (proper) Lorentz transformations. That is, one has a flat spacetime on which there exist certain preferred coordinate systems, those in which the metric is diagonal with entries $\pm 1$. From this point of view, the choice of coordinates on the internal de Sitter space plays quite an important role if one is looking for a ``deformation" of special relativity including an energy scale $\kappa$. Such a deformation can only arise if the chosen coordinate system reduces to Cartesian coordinates on Minkowski space as $\kappa\rightarrow\infty$. The choice of coordinates is obviously not unique.
The generators of the algebra will take different explicit forms when different coordinate systems (on four-dimensional de Sitter space) are chosen. In \cite{kowalski} ``natural coordinates" are defined by, in the notation of section \ref{intro}, \footnote{Capital Latin indices such as $I$ and $J$ used in this section only run over spatial coordinates (from 1 to 3).}
\begin{equation}
g=\exp\left[p^I (M_{I4}+X_I)\right] \exp\left[p^4 X_4\right]\mathcal{O}\,,
\end{equation}
where $\mathcal{O}=(0,0,0,0,\kappa)$ is taken to be the origin of de Sitter space in five-dimensional Minkowski space, and $M_{I5}$ and $M_{45}$ correspond to translations in space and time. The coordinates one obtains are related to the five-dimensional coordinates by
\begin{equation}
P^I=p^I e^{\frac{p^4}{\kappa}}\,,\quad P^4=\kappa \sinh\left(\frac{p^4}{\kappa}\right)+\frac{\vec{p}^2}{2\kappa}e^{\frac{p^4}{\kappa}}\,,\quad P^5=\kappa \cosh\left(\frac{p^4}{\kappa}\right)-\frac{\vec{p}^2}{2\kappa}e^{\frac{p^4}{\kappa}}\,.
\end{equation}
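One can verify directly that these coordinates satisfy (\ref{ads}): since $P^4+P^5=\kappa\, e^{\frac{p^4}{\kappa}}$ and $P^5-P^4=\kappa\, e^{-\frac{p^4}{\kappa}}-\frac{\vec{p}^{\,2}}{\kappa}e^{\frac{p^4}{\kappa}}$,
$$
\sum_I (P^I)^2-(P^4)^2+(P^5)^2=\vec{p}^{\,2}e^{\frac{2p^4}{\kappa}}+(P^5-P^4)(P^5+P^4)=\vec{p}^{\,2}e^{\frac{2p^4}{\kappa}}+\kappa^2-\vec{p}^{\,2}e^{\frac{2p^4}{\kappa}}=\kappa^2\,.
$$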
Again, these cover only half of de Sitter space where $P^4+P^5>0$. The metric in these ``flat" coordinates is
\begin{equation}
ds^2=-(dp^4)^2+e^{\frac{2p_4}{\kappa}}\delta_{IJ}dp^I \, dp^J\,.
\end{equation}
Slices of constant $p_4$ are flat; to an observer using these coordinates the spacetime appears as expanding exponentially. An illuminating discussion of different coordinate systems and kinematics on de Sitter space is given in \cite{special}.
The Magueijo-Smolin model \cite{msmodel} corresponds to the following choice of coordinates:
\begin{equation}
p^1=\kappa\frac{P^1}{P^5-P^4}\,,\;p^2=\kappa\frac{P^2}{P^5-P^4}\,,\;p^3=\kappa\frac{P^3}{P^5-P^4}\,,\;p^4=\kappa\frac{P^4}{P^5-P^4}\,,
\end{equation}
The generators of boosts in de Sitter space take the form
\begin{equation}
K^I\equiv p^I\frac{\partial}{\partial p^4}+p^4\frac{\partial}{\partial p^I}+\frac{1}{\kappa}p^I p^J\frac{\partial}{\partial p^J}\,,
\end{equation}
and translations (not considered by the authors) would take the form
\begin{equation}
X_I=\frac{p^4+\kappa}{\kappa}\frac{\partial}{\partial p^I}+\frac{1}{\kappa^2} p_I p^b\frac{\partial}{\partial p^b}\,,\quad X_4=\frac{1}{\kappa}p^b\frac{\partial}{\partial p^b}+\frac{p^4+\kappa}{\kappa}\frac{\partial}{\partial p^4}\,.
\end{equation}
This choice of coordinates is somewhat peculiar as $p^4$ takes a special role, as is also apparent from the modified dispersion relations presented in \cite{msmodel}. The quantity
\begin{equation}
||p||^2=\frac{\eta_{ab}p^a p^b}{(1+\frac{1}{\kappa}p^4)^2}
\end{equation}
is invariant under boosts and rotations in de Sitter space, as would $\eta_{ab}p^a p^b$ be in Beltrami coordinates.
Each DSR model corresponds to a choice of coordinates on de Sitter space, such that all expressions reproduce the expressions for special-relativistic Minkowski coordinates as $\kappa\rightarrow\infty$. What Smolin and Magueijo call a ``$U$ map" is essentially a coordinate transformation from Beltrami coordinates to a different set of coordinates, which becomes the identity as $\kappa\rightarrow\infty$. In the remaining sections we shall use Beltrami coordinates. Note that this means we always have $p\cdot p\ge -\kappa^2$.
\setcounter{equation}{0}
\section{A de Sitter Gauge Theory of Gravity}
\label{gauge}
The most direct implementation of the ideas discussed so far into a framework describing more general spacetimes is replacing the cotangent (or tangent) bundle usually taken as phase space by a general symplectic manifold $\{\mathcal{P},\omega\}$, which can be locally viewed as a product $U \times \mathcal{D}$ of a subset $U\subset \mathcal{M}$ of spacetime $\mathcal{M}$ with de Sitter space $\mathcal{D}$. We want to retain the differentiable structure of a manifold, which we do not assume to be present in a full theory of quantum gravity. We also assume that the structure of momentum space is fixed and in particular does not depend on matter fields, as suggested in \cite{moffat}.
If phase space is described as such a manifold, with a choice of origin in the ``tangent" de Sitter space at each point, the appropriate mathematical language is that of fibre bundles. The theory of connections on fibre bundles of this type, called {\it homogeneous bundles} in \cite{russians}, was developed by \'Elie Cartan (e.g. in \cite{cartan}). Adopting this framework means there is now an $\frak{so}(d,1)$ connection, instead of an $\frak{so}(d-1,1)$ connection, defining parallel transport on spacetime.
It was noted by MacDowell and Mansouri \cite{mmgravity} that gravity with a cosmological term in four dimensions could be obtained from a theory of such an $\frak{so}(d,1)$ connection by projecting it onto its $\frak{so}(d-1,1)$ part in the action. A more elaborate description in terms of Einstein-Cartan theory was then given by Stelle and West \cite{stellewest}. Their analysis included the gauge field needed to identify the fibres at different spacetime points, which will be crucial for the interpretation of the theory. The mathematical side of MacDowell-Mansouri gravity as a theory of a Cartan connection is nicely illustrated in \cite{wise}; we follow this article as well as the more computationally based presentation of \cite{stellewest}, who use the language of non-linear realizations. An overview over the mathematics of Cartan connections is given in \cite{sharpe}.
For clarity we first describe the framework in a language more common to physicists; a more mathematical account of Cartan connections on homogeneous bundles is given in appendix \ref{app}.
The usual description of general relativity as a gauge theory of the Lorentz group is known as vier-/vielbein formalism, method of moving frames, etc. Since the tangent bundle is in our description replaced by a homogeneous bundle with a curved ``tangent" space, one effectively uses a ``double vielbein" formalism, in which spacetime vectors are mapped to vectors in the tangent space to the internal (curved) space by a soldering form (vielbein). The picture we have in mind is that of a de Sitter space rolled along the manifold. One then needs to introduce a new field which specifies the point of tangency, expressed in a given coordinate system on the internal space, at each spacetime point. We denote it by $p^a(x)$. This corresponds mathematically to a necessary choice of zero section (see appendix), and physically to a gauge field. Picking a point of tangency at each spacetime point breaks the gauge group $SO(d,1)$ down to the Lorentz subgroup $SO(d-1,1)$ leaving this point invariant.
Since we consider a theory with gauge group $SO(d,1)$, the connection $A$ takes values in the Lie algebra $\frak{so}(d,1)$. It can be split as (introducing a length $l$ on dimensional grounds)
\begin{equation}
A=\left(\matrix{& & & \cr & {\omega^a}_b & & \frac{1}{l}e^i \cr & & & \cr & -\frac{1}{l}e_i & & 0}\right)\,,
\end{equation}
so that ${\omega^a}_b$ acts as the usual $\frak{so}(d-1,1)$-valued connection of general relativity and $e^i$ as a vielbein one-form. In doing this we have simultaneously unified the usual connection and the vielbein, and replaced the (flat) tangent space by a curved ``internal" space, such that the de Sitter group and not the Poincar\'e group now appears as a gauge group. (Lorentz) indices on ${\omega^a}_b$ and $e^i$ are now raised and lowered using $\eta^{ab}$.
A gauge transformation, i.e. a local transformation $g(x)$ taking values in the de Sitter group, can be split as $g(x)=s(x)\Lambda(x)$, where $s(x)$ changes the zero section, i.e. changes the local identification of points of tangency at each spacetime point, and $\Lambda(x)$ is a usual local Lorentz transformation in the vielbein formalism of general relativity which does not mix the ${\omega^a}_b$ and $e^i$ parts of the connection. The connection transforms under a gauge transformation as
\begin{equation}
A(x)\rightarrow A'(x)=\Lambda^{-1}(x)s^{-1} (x) A(x) s(x)\Lambda(x) + \Lambda^{-1}(x)s^{-1} (x) ds(x)\Lambda(x) + \Lambda^{-1}(x)d\Lambda(x)\,.
\label{gaugetransf}
\end{equation}
One can use this equation to relate the connection $A_0$ corresponding to the trivial zero section, where the point of tangency is the origin of the internal space at each spacetime point, $p^a(x)\equiv (0,0,0,0)$, to a connection corresponding to any given zero section. The physical significance of this is the following. Assume we have fixed $p^a(x)\equiv (0,0,0,0)$. Then an action can be defined from the curvature of the connection $A$ (here $R$ is the curvature of the $\frak{so}(d-1,1)$ part of $A$),
\begin{equation}
F=dA+A\wedge A=\left(\matrix{& & & \cr & {R^a}_b - \frac{1}{l^2}(e^a \wedge e_b) & & \frac{1}{l}T^i\equiv\frac{1}{l}(de^i + {\omega^i}_j \wedge e^j) \cr \cr & -\frac{1}{l}T_i & & 0}\right)\,.
\end{equation}
In four dimensions, the MacDowell-Mansouri action \cite{mmgravity, wise} is
\begin{equation}
S=-\frac{3}{32\pi G\Lambda}\int \epsilon_{abcd} \left(F^{ab} \wedge F^{cd}\right)= -\frac{3}{32\pi G\Lambda}\int d^4 x \, \frac{1}{4}\epsilon_{abcd}\epsilon^{\mu\nu\rho\tau}F_{\mu\nu}^{ab}F_{\rho\tau}^{cd}\,,
\label{akshn}
\end{equation}
where the Latin indices run from 1 to 4, and so one projects $F$ to its $\frak{so}(d-1,1)$ part in this action.
Apart from a topological Gauss-Bonnet term, the action (\ref{akshn}) is equivalent to the Einstein-Hilbert action with a cosmological term
\begin{equation}
S = \frac{3}{16\pi G\Lambda}\frac{1}{l^2}\int \epsilon_{abcd} \left(e^a\wedge e^b \wedge R^{cd} - \frac{1}{2 l^2}e^a\wedge e^b \wedge e^c \wedge e^d \right)\,,
\label{einsthilb}
\end{equation}
where we have to identify
\begin{equation}
\Lambda=\frac{3}{l^2}\,.
\end{equation}
as the cosmological constant.
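This equivalence is immediate once the curvature is expanded. Writing the $\frak{so}(d-1,1)$ block of $F$ as $F^{ab}=R^{ab}-\frac{1}{l^2}\,e^a\wedge e^b$,
$$
\epsilon_{abcd}\,F^{ab}\wedge F^{cd}=\epsilon_{abcd}\left(R^{ab}\wedge R^{cd}-\frac{2}{l^2}\,e^a\wedge e^b\wedge R^{cd}+\frac{1}{l^4}\,e^a\wedge e^b\wedge e^c\wedge e^d\right)\,;
$$
the first term integrates to the topological Gauss-Bonnet invariant, while the remaining two terms reproduce (\ref{einsthilb}) with the stated identification of $\Lambda$.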
In order to define the projection of $F$ in the action (\ref{akshn}), one has used a splitting
\begin{equation}
\frak{so}(d,1) \simeq \frak{so}(d,1)/\frak{so}(d-1,1) \oplus \frak{so}(d-1,1),
\label{split1}
\end{equation}
which depends on the gauge field since the subgroup $SO(d-1,1)$ leaving a given point in de Sitter space invariant depends on the choice of this point.
When the action (\ref{akshn}) is coupled to matter, the $\frak{so}(d,1)/\frak{so}(d-1,1)$ part $e^a$ of the connection appears in a volume element in the matter Lagrangian. By varying the action one obtains the field equations of Einstein-Cartan theory with a cosmological constant $\Lambda=3/l^2$. The length scale $l$, which is so far arbitrary, can be chosen to reproduce the $\Lambda$ of the observed universe, which means it must be chosen to be very large (the ``cosmological constant problem"). By the field equations, one can determine for a given matter distribution a connection $A_0$ consisting of an $\frak{so}(d-1,1)$ connection $({\omega^a}_b)_0$ and a vielbein $e^i_0$.
The MacDowell-Mansouri action reproducing Einstein-Cartan theory with a cosmological constant includes a gauge choice. We can hence view it as the gauge-fixed version of a more general theory. Since (\ref{gaugetransf}) determines how the connection transforms under a gauge transformation, we can generalise a given solution of Einstein-Cartan theory to an arbitrary gauge choice. The extension of the theory to arbitrary configurations of the gauge field, and hence arbitrary choices of tangency points of the internal space to spacetime, is what we call {\it Einstein-Cartan-Stelle-West theory}. Any solution of Einstein-Cartan theory, in particular any (torsion-free) solution of general relativity, gives rise to more general solutions of Einstein-Cartan-Stelle-West theory via (\ref{gaugetransf}). We will later see that one can construct an $\frak{so}(d-1,1)$ connection with torsion from a torsion-free one.
In (\ref{gaugetransf}), $s(x)$ takes values in the de Sitter group, more precisely in the subgroup generated by ``translations" which leaves no point of de Sitter space invariant. The correspondence between Beltrami coordinates $p^a(x)$ on de Sitter space and such group elements is given explicitly by
\begin{equation}
s(p(x))=\exp\left[ \frac{p^i(x)}{\sqrt{-p(x)\cdot p(x)}}\,{\rm Artanh}\left(\frac{\sqrt{-p(x)\cdot p(x)}}{\kappa}\right)\kappa\,X_i \right]\,.
\label{param}
\end{equation}
Then the group element $s(p(x))$ maps $(0,0,0,0)$ to $(p^1(x),p^2(x),p^3(x),p^4(x))$ in Beltrami coordinates. A different choice of coordinates in the internal de Sitter space would correspond to a different parametrisation of the elements of the subgroup of translations of the de Sitter group.
Inserting (\ref{param}) into (\ref{gaugetransf}) and setting $\Lambda(x)\equiv e$, we obtain
\begin{eqnarray}
\omega^{ab}(p(x)) & = & \frac{p^a e_0^b}{l \kappa \gamma(p)} + \left(1 - \frac{1}{\gamma(p)}\right)\frac{p^a dp^b + \omega_0^{ca}p^b p_c}{p\cdot p} + \frac{1}{2}\omega_0^{ab} - (a \leftrightarrow b)\,,
\label{explicit}
\\ e^i(p(x)) & = & \frac{l \kappa}{p\cdot p+\kappa^2}\left(p^i \frac{p_c dp^c }{p\cdot p}(1-\gamma(p)) +dp^i \gamma(p) + ({\omega^i}_b)_0 p^b \gamma(p)\right)+p^i e_0^a p_a\frac{1 + \frac{\kappa^2}{p\cdot p}(1-\gamma(p))}{p\cdot p+\kappa^2} + \frac{e_0^i}{\gamma(p)}\,, \nonumber
\end{eqnarray}
where
\begin{equation}
\gamma(p)\equiv\sqrt{\frac{p\cdot p +\kappa^2}{\kappa^2}}=1+\frac{p\cdot p}{2\kappa^2}+\ldots\,
\end{equation}
Because $p\cdot p \ge -\kappa^2$ in Beltrami coordinates, the square root is always real. In the limit $p\cdot p\rightarrow 0$, our parametrisation is the same as that used in \cite{stellewest}, and we recover their results
\begin{eqnarray}
\omega^{ab}(p(x)) & = & \left({1\over 2}\omega^{ab}_0+\frac{1}{l \kappa} p^a e_0^b + \frac{1}{2\kappa^2} \left(p^a dp^b + \omega_0^{ca}p^b p_c \right)\right) - (a \leftrightarrow b)\,,
\\ e^i(p(x))& = & e_0^i + \frac{l}{\kappa}\left(-\frac{1}{2\kappa^2}p^i p_c dp^c +dp^i + \omega^{ib}_0 p_b \right)+\frac{1}{2\kappa^2} p^i e_0^a p_a \,. \nonumber
\end{eqnarray}
Near $p=0$, we have
\begin{equation}
\omega^{ab}(p(x)) = \omega_0^{ab} + O\left(\frac{p}{\kappa}\right)\,, \quad e^i(p(x)) = e_0^i + \frac{l}{\kappa}dp^i + O\left(\frac{p}{\kappa}\right)\,. \nonumber
\label{smallp}
\end{equation}
As mentioned above, the $\frak{so}(d,1)/\frak{so}(d-1,1)$ part of the connection $A$ acts as a vielbein and maps vectors in the tangent space at a point $x$ in spacetime to vectors in the tangent space at $p(x)$ in the internal de Sitter space, given in components with respect to an orthonormal basis at $p(x)$. In order to give their components in the coordinate-induced basis $\{\frac{\partial}{\partial p^a}\}$, we need another vielbein, which can be obtained from (\ref{explicit}) by setting $\omega_0=e_0=0$ (corresponding to spacetime being de Sitter space with cosmological constant $\Lambda$) and $p^a(x)=\frac{\kappa}{l}x^a$, as in \cite{stellewest}. We obtain
\begin{equation}
{l_n}^a(p(x))=\kappa^2\frac{{\delta_n}^a (p\cdot p) \gamma(p) - p^a p_n (\gamma(p)-1)}{(p \cdot p)(p\cdot p+\kappa^2)}\,,
\label{vierbein}
\end{equation}
where $n$ is a coordinate index in the internal space and $a$ denotes a Lorentz index, as before. This vielbein is of course independent of the underlying spacetime.
Parallel transport can be defined for the $\frak{so}(d,1)$ connection using the notion of development, which generalises the usual covariant derivative. One introduces a development operator \cite{stellewest}
\begin{equation}
D = d - {1\over 2}\omega^{ab}M_{ab} - (e\cdot V)\,,
\end{equation}
where the second term is the usual infinitesimal relative rotation of tangent spaces at different spacetime points, and the last term compensates for the change of point of tangency and hence generates maps from the tangent space at one point of de Sitter space to the tangent space at a different point of de Sitter space. Again one should think of an internal space rolled along spacetime \cite{wise}.
In components, in our conventions we have
\begin{equation}
{(\omega^{ab}M_{ab})^c}_d=-2{\omega^c}_d\,,
\end{equation}
and the combination $e^a V_a$ acts on Lorentz indices as an element of $\frak{so}(d-1,1)$, representing the map from one tangent space to another in the respective bases. We use the result obtained by \cite{stellewest} using the techniques of non-linear realizations\footnote{For an exposition of the theory of non-linear realizations, see \cite{coleman}.}, namely that when expressed as an $\frak{so}(d-1,1)$ matrix,
\begin{equation}
l(e\cdot V) = \kappa\,s(p)^{-1}(e^a X_a )s(p)-s(p)^{-1}\left[s(p+\delta p)-s(p)\right]\,,
\end{equation}
where $s(p)$ is defined according to (\ref{param}) and $\delta p$ is determined from the equation
\begin{equation}
{\left[s(p+\delta p)\right]^a}_5={\left[(1+e^b X_b \kappa)s(p)\right]^a}_5
\end{equation}
where only terms linear in $e^a$ are kept in $\delta p$. An explicit calculation shows that
\begin{equation}
\delta p^a = \frac{p^a}{\kappa}(\eta_{bc}e^b p^c)+e^a\kappa\,,
\end{equation}
and hence near $p=0$, we have $\delta p^a = \kappa e^a$, as expected. We find that $(e\cdot V)$ has components
\begin{equation}
{(e\cdot V)^b}_c=\frac{\kappa(e^b p_c - e_c p^b)(1-\gamma(p))}{l (p\cdot p)}\,.
\label{development}
\end{equation}
One then has a notion of holonomy, mapping closed loops in spacetime into the internal space by development. In particular, if one develops the field $p(x)$ describing the point of tangency around an infinitesimal closed loop at $x_0$, the developed value will in general differ from the original value at $x_0$ \cite{stellewest}:
\begin{equation}
\Delta p^a (x_0) \propto {T_{\mu\nu}}^i(x_0){l^a}_i(p(x_0))\oint x^{\mu} dx^{\nu}\,,
\end{equation}
where ${l^a}_i(p(x))$ is the inverse of the vielbein (\ref{vierbein}) and ${T_{\mu\nu}}^i$ are the components of the torsion tensor $T=de+\omega\wedge e$. The situation for Minkowski space, which we will discuss next, is illustrated in figure \ref{fig}. The central result we will try to justify in the following is that, starting from Minkowski spacetime, if we assume the internal space is rolled along Minkowski space in a non-trivial way, we obtain a connection with torsion. In our interpretation, this is the only way that ``coordinates" can act as translations on momentum space, as one normally assumes when associating the Snyder algebra with a noncommutative spacetime.
\begin{figure}[h]
\begin{picture}(300,120)
\put(0,0){\line(1,0){200}}\put(0,0){\line(1,2){50}}
\put(50,100){\line(1,0){200}}\put(200,0){\line(1,2){50}}
\put(100,60){\circle{40}}\bezier{313}(80,60)(100,50)(120,60)
\bezier{125}(100,40)(115,40)(120,30)\bezier{578}(100,40)(115,20)(120,30)
\bezier{125}(100,40)(110,42)(112,50)\bezier{578}(100,45)(110,55)(112,50)
\end{picture}
\caption{When the curved internal space is rolled along Minkowski space, a path in spacetime corresponds to a path in the internal space. Because of the curvature of the internal space, a closed path in Minkowski space does not correspond to a closed path in the internal space, which is manifest as torsion.}
\label{fig}
\end{figure}
\setcounter{equation}{0}
\section{Synthesis}
\label{synth}
The notion of development along curves in spacetime is central to the interpretation of Einstein-Cartan-Stelle-West theory, because it allows ``spacetime coordinates" to act as translations in the internal de Sitter space. The situation described by DSR, where noncommuting translations on a curved momentum space are interpreted as noncommuting spacetime coordinates, here corresponds to a Minkowski spacetime with an internal de Sitter space rolled along this Minkowski space. The gauge field $p^a(x)$ specifies the points of tangency of the internal space at each spacetime point, and we have chosen Beltrami coordinates on de Sitter space which look like Cartesian coordinates on Minkowski space near the ``origin" of de Sitter space. Since the internal space has a natural scale $\kappa$ and we needed to introduce a natural scale $l$ in spacetime, we choose the gauge field to be
\begin{equation}
p^a(x)=\frac{\kappa}{l}x^a
\label{choice}
\end{equation}
in a vicinity of the origin of spacetime, which is now taken to be Minkowski space, where $x^a$ are the standard Minkowski coordinates such that the connection vanishes in general relativity. In general a closed path in spacetime will not correspond to a closed path traced out on the internal space, hence such an identification is only local and, strictly speaking, only valid at the origin of Minkowski spacetime. On dimensional grounds, the effects of torsion scale as $\frac{x}{l}$ or $\frac{p}{\kappa}$. For (\ref{choice}) to be well-defined, we must guarantee that $x\cdot x \ge -l^2$, so $l$ should be large in Planck units. We will comment on the significance of the scale $l$ at the end of this section.
It should perhaps be emphasised that the gauge field $p^a(x)$ does not represent physical momentum, but determines the point of tangency of the internal space we have introduced which is to some extent arbitrary. Tangent vectors to the original spacetime can be mapped to tangent vectors to the internal space via the vielbein. The physical interpretation of motion in an internal ``momentum" space which is related to motion in spacetime seems obscure, but if coordinates are to act as translations in the internal space, the two must be connected in some way. In this sense, we are constructing the minimal non-trivial gauge field which leads to observable effects, and an alternative interpretation of noncommuting generators $X_a$ in the Snyder algebra.
Since we do not interpret different points in the internal de Sitter space as representing different values for physical four-momentum, we avoid problems with the physical interpretation of DSR, such as the ``spectator problem" of noncommutative momentum addition and the ``soccer ball problem" of how to describe extended objects. In our framework, tangent vectors representing a particle's (or extended body's) velocity remain vectors and as such live in an unbounded space with commutative addition.
As explained before, we can use equations (\ref{explicit}) to obtain the connection components $\omega$ and $e$ corresponding to this choice of our gauge field; we set $\omega_0=0$ and $({e_{\mu}}^a)_0={\delta_{\mu}}^a$ and substitute (\ref{choice}) to get
\begin{eqnarray}
{\omega_{\mu}}^{ab} & = & \left(x^a {\delta_{\mu}}^b - x^b {\delta_{\mu}}^a \right)\frac{x\cdot x+l^2(\gamma(p)-1)}{l^2 (x\cdot x) \gamma(p)}\,,
\label{connection}
\\{e_{\mu}}^i & = & \frac{1}{(x\cdot x)(x\cdot x + l^2)}\Big(x^i x_{\mu}(x\cdot x-2 l^2(\gamma(p)-1))+{\delta_{\mu}}^i (x\cdot x) 2 l^2 \gamma(p)\Big) \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\partial_{\nu}{e_{\mu}}^i-\partial_{\mu}{e_{\nu}}^i & = & \left(x_{\nu}{\delta_{\mu}}^i-x_{\mu}{\delta_{\nu}}^i\right)\frac{l^2(2l^2(\gamma(p)-1)-3(x\cdot x))-(x\cdot x)^2}{(x\cdot x)(x\cdot x+l^2)^2}\,,\nonumber
\\{\omega_{\nu}}^{ib}e_{\mu b}-{\omega_{\mu}}^{ib}e_{\nu b} & = & \frac{x\cdot x+ l^2(\gamma(p)-1)}{(x\cdot x)(x\cdot x+l^2)l^2\gamma(p)}\left(x_{\nu}{\delta_{\mu}}^i-x_{\mu}{\delta_{\nu}}^i\right)\left(2 l^2+x\cdot x\right)\,,
\end{eqnarray}
which gives a non-zero torsion
\begin{equation}
{T_{\mu\nu}}^i=\left(x_{\nu}{\delta_{\mu}}^i-x_{\mu}{\delta_{\nu}}^i\right)\frac{1}{l^2 \sqrt{\frac{x\cdot x}{l^2}+1}}\,.
\end{equation}
Interestingly enough, for the choice of zero section (\ref{choice}) the scale $\kappa$ drops out of all expressions. Expressed in coordinates on the internal space, one has
\begin{equation}
{T_{\mu\nu}}^i=\left(p_{\nu}{\delta_{\mu}}^i-p_{\mu}{\delta_{\nu}}^i\right)\frac{1}{l \kappa \sqrt{\frac{p\cdot p}{\kappa^2}+1}}\,.
\end{equation}
The quantity ${T_{\mu\nu}}^i$ will be multiplied by an infinitesimal closed loop $\oint x^{\mu} dx^{\nu}$ to give the difference in the value $p(x)$ caused by development along this loop. In momentum coordinates, this is equal to $\frac{l}{\kappa}\oint p^{\mu} dp^{\nu}$, and the effect of going around the developed curve in the internal space is (near $x=0$ or $p=0$) proportional to $\kappa^{-2}$, just as was suggested by (\ref{algebra}).
Expressing Minkowski space in the usual coordinates, together with the (local) identification $p^a(x)=\frac{\kappa}{l}x^a$, in this framework gives a connection with torsion. Developing a closed curve in spacetime in the internal space will give a curve that does not close in general, which is the effect of noncommuting translations in the internal space.
The reader may wonder how the ``deformation" of the Minkowski solution described here is manifest in a metric. We can define a metric by the usual expression
\begin{equation}
g_{\mu\nu}=e_{\mu}^a e_{\nu}^b \eta_{ab}\,.
\end{equation}
This metric would not determine the connection, but could be used to define distances in the spacetime in the usual way. Then, from (\ref{connection}), we get
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}\,\frac{4}{1+\frac{x \cdot x}{l^2}}+x_{\mu}x_{\nu}\frac{(x\cdot x)}{\left((x\cdot x)+l^2\right)^2}\,.
\end{equation}
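This expression can be checked directly from (\ref{connection}): writing ${e_{\mu}}^i=B\,{\delta_{\mu}}^i+A\,x^i x_{\mu}$ with $B=\frac{2l^2\gamma(p)}{x\cdot x+l^2}$, $A=\frac{x\cdot x-2l^2(\gamma(p)-1)}{(x\cdot x)(x\cdot x+l^2)}$ and $\gamma(p)^2=1+\frac{x\cdot x}{l^2}$ for the choice (\ref{choice}), one finds
$$
B^2=\frac{4}{1+\frac{x\cdot x}{l^2}}\,,\qquad A^2\,(x\cdot x)+2AB=\frac{x\cdot x}{\left((x\cdot x)+l^2\right)^2}\,,
$$
which are precisely the coefficients of $\eta_{\mu\nu}$ and $x_{\mu}x_{\nu}$ above.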
It should be stressed that the connection on spacetime is {\it not} the Levi-Civita connection of this metric. There is a factor of 4 because of a term in (\ref{smallp}) which does not necessarily go to zero as $p\rightarrow 0$. With the identification (\ref{choice}), the soldering form always gets a contribution
\begin{equation}
{e_{\mu}}^i(x) = ({e_{\mu}}^i)_0 + {\delta_{\mu}}^i + O\left(\frac{x}{l}\right)\,.
\end{equation}
The limit $\kappa\rightarrow\infty$ is now identified with the limit $l\rightarrow\infty$, in which we recover the (rescaled) Minkowski metric.
In deriving the expressions (\ref{connection}) we started with Minkowski space, which clearly solves the field equations of the Einstein-Cartan theory for an energy-momentum tensor cancelling the cosmological constant term, and vanishing internal spin. In changing the zero section, we then performed a $SO(d,1)$ gauge transformation, under which the curvature $F$ transformed as
\begin{equation}
F(s(x))=s^{-1} (x) F(x) s(x).
\end{equation}
Since this is a general $SO(d,1)$ rotation, it mixes up the $\frak{so}(d-1,1)$ and $\frak{so}(d,1)/\frak{so}(d-1,1)$ parts of the connection and the curvature. Hence, the resulting connection will no longer solve the original field equations, but the field equations for an energy-momentum tensor which has also undergone a $SO(d,1)$ transformation. This mixes up the energy-momentum and internal spin parts, combining them into an element of the Lie algebra $\frak{so}(d,1)$, the interpretation of which seems obscure at least.
A comment is in order with regard to physical units. In addition to the energy scale $\kappa$, which is perhaps naturally identified with the Planck scale, the identification of lengths with momenta, necessary in the framework presented here, requires the choice of a unit of length $l$ which is not necessarily connected to the scale $\kappa$. It may well be that it is instead the cosmological constant which sets this length scale, leading to an astronomical scale instead of a sub-atomic one. And indeed, some more recent approaches to quantum gravity (e.g. \cite{friedel, ita}) use the product ${G\Lambda}$ as a dimensionless parameter in a perturbative expansion. A fixed positive $\Lambda$ is also required in non-perturbative approaches to quantum gravity \cite{quantsym}. Then the cosmological constant may play the role of a fundamental parameter in quantum gravity.
\setcounter{equation}{0}
\section{Discussion}
It has been argued that the algebra of DSR describes the symmetries of a semiclassical limit of (a generic theory of) quantum gravity (see e.g. \cite{quantsym}). If this claim is taken seriously, one has to give an interpretation of the noncommuting translations appearing in the algebra, and usually they are supposed to represent a spacetime with a fundamentally noncommutative structure \cite{madore}. Alternatively, one may view the apparent noncommutativity as an artefact of the finite resolution of lengths \cite{oriti}. However, there are fundamental difficulties in associating these operators directly with coordinates on spacetime, as position is not {\it additive} in a way that momentum and angular momentum are \cite{stability}. Furthermore, as also pointed out in \cite{stability}, a proposed noncommutativity of spacetime of the form (\ref{algebra}), proportional to angular momentum or boost generators, and hence vanishing at a given ``origin", seems deeply at odds with any idea of (even Galilean) relativity. This would also be an obvious criticism of the framework presented in this note, when taken as a theory that is supposed to describe the real world.
What we have shown here is that, using the framework of Einstein-Cartan-Stelle-West theory, one reaches a different conclusion from the usual one: the noncommutativity of translations on a momentum space of constant curvature is interpreted as torsion of a connection that solves the equations of Einstein-Cartan theory with a modified energy-momentum tensor that mixes with the spin tensor. If one takes this seriously, one is led to conclude that quantum gravity induces torsion, whose effects would however only become measurable over distances comparable to $l$, a length scale presumably associated with the cosmological constant.
No such effect appears in de Sitter space with an appropriate cosmological constant, or indeed any vacuum solution of the theory. Vacuum solutions are then just described by the Poincar\'e algebra, and hence undeformed special relativity.
Any non-zero energy-momentum tensor, however, will lead to a connection having torsion. In theories such as Einstein-Cartan theory, this leads to well-known problems when trying to couple the gravitational field to Maxwell fields, for instance, as there is no unambiguous procedure of minimal coupling. This is because the statement that the exterior derivative is independent of the choice of connection,
\begin{equation}
(dA)_{\mu\nu}\propto\partial_{[\mu}A_{\nu]}=\nabla_{[\mu}A_{\nu]}\,,
\end{equation}
is true precisely when torsion vanishes. Using an $\frak{so}(d-1,1)$ connection, this is apparent from
\begin{equation}
d(e^i A_i)=\nabla(e^i A_i)=A_i \nabla e^i - e^i \wedge\nabla A_i = - e^i \wedge \nabla A_i + A_i T^i
\end{equation}
where $\nabla e^i=de^i+{\omega^i}_j\wedge e^j$ etc. One has two different candidates for the field strength $F$, namely $e^i\wedge \nabla A_i$ and $d(e^i A_i)$, with possibly observable differences between these choices, although it could be argued that $F=dA$ is the only meaningful choice because it preserves gauge invariance \cite{benn}.
In the framework of Einstein-Cartan-Stelle-West theory, gauge fields should be coupled to gravity via development, i.e. replacing $F=dA$ by $F=DA$. We compute from (\ref{explicit}) and (\ref{development}) that development can be expressed in terms of $\omega_0$ and $e_0$ by
\begin{equation}
D=d - {1\over 2}\omega^{ab}M_{ab} - (e\cdot V) = d + \omega - (e\cdot V) = d + \omega_0 + 2(p \otimes_A e_0)\kappa\frac{(\gamma(p)-1)}{l(p\cdot p)}=: d + \omega_{{\rm eff}}\,,
\end{equation}
where $\otimes_A$ is an antisymmetrised tensor product, $2(U\otimes_A V)^{ab} = U^a V^b - U^b V^a$. Parallel transport is effectively described by the connection $\omega_{{\rm eff}}$, whose torsion is in general non-zero. One can give an explicit formula for the torsion which is however rather complicated and does not seem to give much insight; to linear order in $p^i$, one has
\begin{equation}
T^i = \frac{l}{\kappa}({R^i}_b)_0 p^b - \frac{1}{2 l\kappa}e^i_0 \wedge (e^0_j p^j) + \frac{1}{2\kappa^2}\left(p^i e_{j0}\wedge dp^j - p_j e_0^i \wedge dp^j\right)+O(p^2).
\end{equation}
If we assume a universal relation of internal momenta and spacetime lengths of the form $p\sim\frac{\kappa}{l}x$, the second and third terms seem to give contributions of order $x/l^2$. The first term is proportional to the local curvature of $\omega_0$, ${R^i}_b = d{\omega^i}_b + {\omega^i}_j\wedge{\omega^j}_b$, contracted with $x^b$. Note that it is the Riemann tensor, not the Ricci tensor, that appears, so that propagating degrees of freedom of the gravitational field are included. This first term should in realistic situations, even in vacuum, give the dominant contribution.
Assuming that minimal coupling is achieved through the development operator $D$, or equivalently by using the effective connection which has torsion, one would couple vector or matter fields (using $D\psi$ for spinors) to torsion, breaking gauge invariance. Such an effect of course leads to the absence of charge conservation, and this should be experimentally observable in the presence of a non-trivial gravitational field, i.e. in regions where spacetime is not exactly de Sitter. Let us recall that in standard tensor calculus one uses the identity
\begin{equation}
[\nabla_{\mu},\nabla_{\nu}]M_{\lambda\rho}={R_{\mu\nu\lambda}}^{\sigma}M_{\sigma\rho}+{R_{\mu\nu\rho}}^{\sigma}M_{\sigma\lambda}-{{T_{\mu}}^{\sigma}}_{\nu}\nabla_{\sigma}M_{\lambda\rho}
\end{equation}
which gives for an antisymmetric $M_{\lambda\rho}$ when contracted
\begin{equation}
[\nabla^{\lambda},\nabla^{\rho}]M_{\lambda\rho}=-g^{\mu\lambda}g^{\nu\rho}{{T_{\mu}}^{\sigma}}_{\nu}\nabla_{\sigma}M_{\lambda\rho},
\end{equation}
to establish that the right-hand side of Maxwell's equation $\nabla^{\lambda}F_{\lambda\rho}=4\pi J_{\rho}$ satisfies a continuity equation in the absence of torsion. With torsion present, one has then for any region $R$
\begin{equation}
\int_{\partial R}d^3x \;\sqrt{h}\;n^{\lambda} J_{\lambda} = \frac{1}{4\pi}\int_R d^4x\; \sqrt{g}\;\left(-g^{\mu\lambda}g^{\nu\rho}{{T_{\mu}}^{\sigma}}_{\nu}\nabla_{\sigma}F_{\lambda\rho}\right).
\end{equation}
Effects become important when the size of the region $R$ is comparable to the length scale of torsion.
As an example consider the Schwarzschild solution, which has Kretschmann scalar
\begin{equation}
R_{abcd}R^{abcd} \sim \frac{r_S^2}{r^6},
\end{equation}
so roughly $R_{abcd}\sim r_S r^{-3}$. Assuming that the origin of the $x$ coordinate system corresponds to the centre of the Earth, we would, on the surface of the Earth, measure a torsion of order $r_S R^{-2}$, where $R$ is the radius of the Earth. Since $r_S\sim 10^{-2}$ m and $R^2 \sim 10^{13}\,{\rm m}^2$, this means that the length scale for effects of torsion would be about $10^{11}$ m. The other two contributions, given that $l\sim 10^{26}$ m, would be much smaller. Although this crude estimate suggests that effects will be very small, even tiny violations of charge conservation should have been observed experimentally. For a discussion of experimental tests of charge conservation and possible extensions of Maxwell theory in Minkowski space, see \cite{exptest}. Processes such as electron decay on a length scale of $10^{11}$ m, or a time scale of $10^{3}$ s, can clearly be ruled out.
The example presented here shows that the correct physical interpretation of purely algebraic relations, such as the commutators of the Snyder algebra, may not be the seemingly obvious one. We conclude that the physical motivation for assuming spacetime is ``noncommutative" may not be as clear as often assumed.
\section{Gauge Invariance Broken?}
The idea that an asymmetry between the proton and electron charges could have interesting astrophysical consequences goes back to Lyttleton and Bondi \cite{bondi}, who argued that a charge difference, and hence a net charge of the hydrogen atom, of $10^{-18}$ elementary charges, might explain the observed expansion of the universe by electrostatic repulsion. This idea was proposed in connection with Hoyle's ideas of a universe in a steady state, which required continuous production of material via a ``creation field" \cite{hoyle}, and a modification of Maxwell's equations was proposed to accommodate charge nonconservation. From Hoyle's perspective, however, the steady state model was incompatible with expansion of the universe by electrostatic repulsion, and should lead to electrostatic attraction instead \cite{hoyle2}.
There seems to be a need for the electron and proton charges to be of equal magnitude in order to maintain gauge invariance. However, if the universe as a whole is not neutral but is homogeneous, gauge invariance must be broken. Hence the two issues are closely related. Modern laboratory experiments \cite{laboratorium} give a bound of $10^{-21}$ elementary charges on the difference of electron and proton charge; astrophysical considerations give bounds of $10^{-26}$ elementary charges using the isotropy of the cosmic microwave background \cite{cmbbounds}, or $10^{-29}$ elementary charges by considering cosmic rays \cite{raybounds}. Recently, an interesting proposal to measure net charges of atoms and neutrons, sensitive to $10^{-28}$ elementary charges, was put forward \cite{expproposal}.
From a theoretical viewpoint, if gauge invariance is broken, it is natural to assume a nonvanishing photon mass. One then considers Einstein-Proca theory, an outline of which can be found in \cite{hejna}. The photon may also be charged. Here, experimental bounds on the charge are $10^{-29}$ elementary charges using pulsars \cite{pulsar}, and possibly $10^{-35}$ elementary charges from CMB isotropy \cite{cmbbounds}.
Experimental bounds on violations of gauge invariance in electrodynamics are very tight, and hence any theory predicting torsion which is coupled to electromagnetism faces severe problems when confronted by experiment. In the framework of Einstein-Cartan-Stelle-West theory, it is possible to maintain gauge invariance by choosing $F=dA$, but using the development operator is the most natural choice.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,863 |
{"url":"https:\/\/www.math.kyushu-u.ac.jp\/eng\/seminars\/view\/803","text":"Top > Seminars & Events > Seminars > Representing Milnor\u2019s ${\\mu}$-invariant by HOMF...\n\n## Seminars\n\n### Representing Milnor\u2019s ${\\mu}$-invariant by HOMFLY polynomials\n\nHold Date\n2014-10-31 16:00\u301c2014-10-31 17:00\nPlace\nSeminar Room 1, Faculty of Mathematics building, Ito Campus\nObject person\n\nSpeaker\nYuka KOTORII (Tokyo University)\n\nAbstract:\nFor an ordered oriented link in the 3-sphere, J. Milnor defined a family of invariants, known as Milnor's $\\overline{\\mu}$-invariants. Those invariants are determined by a sequence of integers. Polyak showed that any Milnor's $\\overline{\\mu}$-invariants of length 3 sequence can be represented as a combination of the Conway polynomials of knots. On the other hand, Habegger and Lin showed that Milnor invariants are also invariants for string links, called $\\mu$-invariants. In this talk, we show that any Milnor's ${\\mu}$-invariant can be represented as a combination of the HOMFLYPT polynomials of knots under some assumption of string links, by using a finite type property of Milnor's ${\\mu}$-invariant.In particular, for any ${\\mu}$-invariants of length $3$ sequence are given by a combination of the HOMFLYPT polynomials and linking numbers without the assumption of string links. This result is a string link version of Polyak's result.","date":"2020-09-28 19:25:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8643873929977417, \"perplexity\": 1353.8944920302529}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600401604940.65\/warc\/CC-MAIN-20200928171446-20200928201446-00514.warc.gz\"}"} | null | null |
\section{Introduction}
The dynamics of a dense ensemble of quantum emitters driven by an electromagnetic field is a topic of current interest and much excitement. Experimental and theoretical research on dense collections of atoms have studied numerous effects such as superradiance~\cite{PhysRevA.75.033802, PhysRevA.93.033407, PhysRevA.95.033839}, dipole blockade~\cite{PhysRevLett.85.2208}, collective Lamb shift~\cite{Friedberg1973101}, etc. In nano-optics, the interest is in developing the properties of hybrid systems such as quantum dots or organic dye molecules in proximity to metal nanoparticles~\cite{PhysRevLett.89.203002, PhysRevA.95.043834,Singh,Hughes,Malinovskaya}.
In all these studies, the type and strength of interactions between the quantum emitters (henceforth referred to as ``atoms") are specific to the type of phenomenon studied.
The effects of interatomic interactions in dense ensembles that are excited by low-intensity electromagnetic fields are typically studied computationally. Large-scale simulations have shown that interatomic interactions can shift resonance absorptions in cold dense gases \cite{PhysRevLett.116.183601, PhysRevLett.112.113603}, modify spontaneous emission rates and decoherence rates \cite{PhysRevA.90.012511,PhysRevA.94.033848,JETP112.246,PhysRevA.95.033839}, and affect overall scattering processes \cite{Sokolov2011}. These large-scale simulations are computationally intensive as they require the evaluation of the interaction between numerous atoms (or lattice sites). The best scaling that we have found in the literature scales as the fourth power of the number of lattice sites \cite{PhysRevA.95.033839}. A popular approximation is the mean-field approximation, such as the one used in Refs.~\cite{PhysRevA.75.033802,DIET,PhysRevB.95.115406,PhysRevA.89.022501}. This indeed reduces the computational effort; however, it remains computationally demanding. Thus, many calculations use further approximations such as short-pulse methods \cite{PhysRevB.95.115406, PhysRevA.89.022501}, or quantum basis sets of reduced dimension~\cite{DIET,PhysRevA.89.022501}. The first of these, the broadband or short-pulse approximation, is used in scattering calculations of driven ensembles of classical dipoles \cite{PhysRevA.84.043802, PhysRevA.89.022501,lumerical,kall}. In this methodology, a broadband short pulse illuminates the system and the scattered field is tracked and Fourier-transformed to yield an appropriate intensity spectrum. In the latter approximation, the quantization axis of the quantum emitter is fixed along one direction, namely the polarization of the incident electromagnetic field~\cite{DIET,PhysRevA.89.022501}, or some other time-independent quantization axis is used \cite{PhysRevA.95.033839}. However, these approximations may not be able to accurately capture spontaneous emission from the ensemble, or inelastic scattering \cite{PhysRevA.84.043802}. In this study, we show that these approximations are inadequate for studying a strongly driven, dense quantum ensemble.
We examine the ensemble behaviour of a dense collection of approximately 4000 atoms, modelled as two-level quantum systems (2LS), that is driven by a strong plane wave electromagnetic field. Each two-level atom interacts with the environment, and the interaction is modelled by a radiative decay rate ($\gamma$). The states of the individual atoms therefore significantly affect local electromagnetic field intensities, and the local fields mediate the inter-atomic interactions. In our methodology, we model the quantum evolution of the state of each atom; the spontaneous emission from each atom, in turn, changes the electromagnetic field perceived by its neighbours. Thus, both the field propagation and the density matrix evolution must be calculated simultaneously. We solve both Maxwell's equations and the Liouville-Von Neumann equation concurrently using a pseudo-spectral time domain (PSTD) method discussed in Sec.~\ref{PSTD}. This is a method based on a self-consistent, mean-field approach that is free of problematic self-interactions~\cite{PhysRevA.84.043802, PhysRevA.89.022501}.
By examining the dynamics of the strongly-driven, dense ensemble of 2LS, we find that interatomic interactions create strong disorder in the ensemble states over a characteristic time that is much shorter than the excited state lifetime of a single 2LS. This disorder imposes an effective lifetime for quantum scattering effects in an ensemble. This implies that in order to understand the long-term dynamics of a driven, dense quantum ensemble, short-pulse/broadband techniques are inadequate. The interatomic interactions also lead to excitation of atoms in directions other than the incident field polarization. This indicates that for modelling a general ensemble of dense emitters, a full, three-dimensional state basis is required. Our calculation therefore goes beyond the standard approximations by using a plane wave excitation, and uses a full three-dimensional state basis as discussed in Sec.~\ref{3dbasis}. For an example case of an ensemble of 1 eV emitters, our calculation shows that there is a transient upshifting of incident photons that disappears in the steady state. This disappearance is correlated with the onset of disorder in the ensemble-averaged quantum state of the ensemble.
We propose that the overall behaviour of these dense ensembles can be modelled by a single-particle, rotating-wave-approximation solution to the Lindblad-von Neumann equation. In this model, interatomic excitations are modelled by introducing decoherence terms inspired by models of the F\"orster resonance energy transfer (FRET) process in biophysical systems. This approach allows the response of a dense quantum ensemble to be rapidly approximated with a single-atom model. This representation also isolates the processes that are most significant in determining the optical response of a nanoscale, dense quantum ensemble to strong electromagnetic excitation.
In Section \ref{sec:method}, we discuss the computational and numerical approach that was implemented to calculate the response of nanoscale dense ensembles driven at high intensities. In Section \ref{sec:disorder}, we discuss, via a practical example, the effects that strong driving fields and its associated decoherence have on the ensemble-averaged quantum state of a nanospherical ensemble. Section \ref{sec:single} describes the approximation technique in which we use a single particle model to simulate the average behaviour of the ensemble. Lastly, Section \ref{sec:conclusion} summarizes our main conclusions and future outlook of this work.
\section{Theory and Implementation} \label{sec:method}
We model a dense ensemble of two-level atoms driven by a strong, linearly-polarized, electromagnetic field. Though the driving field is polarized in one direction, spontaneous emission from each atom excites transitions in nearby atoms in other directions. Each of the atoms contributes to a ``mean field'' that mediates the interactions between various quantum emitters. This mean field in the ensemble is a spatially varying, 3D-vector. Therefore, the dynamics of an individual quantum system involves a ground state and three excited states, one for each Cartesian direction of the atomic dipole interacting with the mean field as suggested in Ref.~\cite{PhysRevA.78.013806}. The calculation involves numerically evaluating the coupled Maxwell-Liouville equations in a computational space that includes the ensemble.
The electromagnetic field evolves in time according to Maxwell's equations:
\begin{equation}
\nabla \times \vec{E}(\vec{r},t)= -\mu_0 \frac{\partial \vec{H}(\vec{r},t)}{\partial t}\label{MaxwellH_bare},
\end{equation}
\noindent and
\begin{equation}
\nabla \times \vec{H}(\vec{r},t)= \epsilon_0 \frac{\partial \vec{E}(\vec{r},t)}{\partial t} + \vec{J}(\vec{r},t), \label{MaxwellE_bare}
\end{equation}
\noindent where, $\vec{H}(\vec{r},t)$ and $\vec{E}(\vec{r},t)$ are the magnetic and electric fields respectively, and $\vec{J}(\vec{r},t)$ is the free current density.
The quantum state of each emitter evolves in time according to the Lindblad-Von Neumann equation
\begin{equation}
\dot{\rho}(\vec{r},t) = - \frac{i}{\hbar} [H(\vec{r},t),\rho(\vec{r},t)] - L(\rho(\vec{r},t)). \label{eq:Lindblad}
\end{equation}
\noindent In this evolution equation, the Lindblad superoperator, $L(\rho(\vec{r},t))$, models the decoherence in the system. This term is linear in the state density operator and is of the form:
\begin{equation}
L(\rho) =\sum_{d}{\frac{\gamma_d}{2} (\sigma_d^\dagger \sigma_d \rho + \rho\sigma_d^\dagger \sigma_d - 2 \sigma_d \rho \sigma_d^\dagger) }. \label{eqn:linbladsuper}
\end{equation}
\noindent In this equation, $\sigma_d$ are the Lindblad operators, which are assumed to model spontaneous emissions from an excited state to the ground state, and $\gamma_d$ is the rate of spontaneous emission. For the emission from $\ket{i} \rightarrow \ket{j}$, these operators would take the form $\sigma_d =\sigma_{ij} =\ket{j}\bra{i}$. All non-allowed emissions have $\gamma_d=0$, and each allowed emission has a spontaneous emission rate determined by Fermi's Golden Rule \cite{nanopticsbook}.
The quantum states of the atoms contribute to the electromagnetic field via the free current density ($\vec{J}$), whose directional components ($\eta = x, y, z$) can be found by \cite{PhysRevA.84.043802}:
\begin{equation}
J_\eta(\vec{r}) = N_A \braket{\frac{\partial}{\partial t} \hat{\mu_\eta}(\vec{r}) } = N_A Tr (\dot{\rho}(\vec{r}) \hat{\mu_\eta} ),
\end{equation}
\noindent where $N_A$ is the number density of emitters, $\rho(\vec{r})$ is the density matrix of an emitter located at position $\vec{r}$, and $\hat{\mu_\eta}$ is the transition dipole moment operator corresponding to the $\eta^{\text{th}}$ Cartesian component of the dipole moment. The transition dipole moment operator is directly related to the Hamiltonian of the atom as:
\begin{equation}
\hat{\mu_\eta} = -\frac{\partial \hat{H}}{ \partial E_\eta}.
\end{equation}
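To make the coupling between the quantum evolution and the free-current source term concrete, the fragment below gives a minimal, illustrative Python/NumPy sketch (it is not the production PSTD code): it simply evaluates the right-hand side of the Lindblad-Von Neumann equation (\ref{eq:Lindblad}) and the corresponding Cartesian current components for a single emitter, and all function names and inputs are placeholders of our own.

\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34  # J s

def lindblad_rhs(rho, H, lindblad_ops, rates):
    """Right-hand side of the Lindblad-Von Neumann equation:
    drho/dt = -(i/hbar)[H, rho] - L(rho), with L(rho) as defined above."""
    drho = (-1j / HBAR) * (H @ rho - rho @ H)
    for sigma, gamma in zip(lindblad_ops, rates):
        sd = sigma.conj().T
        drho -= 0.5 * gamma * (sd @ sigma @ rho + rho @ sd @ sigma
                               - 2.0 * sigma @ rho @ sd)
    return drho

def free_current(rho_dot, dipole_ops, number_density):
    """Cartesian free-current components J_eta = N_A Tr(rho_dot mu_eta).
    The trace is real for a Hermitian rho_dot and dipole operator."""
    return np.array([number_density * np.trace(rho_dot @ mu).real
                     for mu in dipole_ops])
\end{verbatim}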
\subsection{Generalized Directional State Basis}
\label{3dbasis}
In the quantum control of a single two-level atom by an incident electromagnetic field, the quantization axis is assumed to be along the direction of polarization, and the two atomic levels $\ket{g}$ and $\ket{e}$ are coupled with a transition strength proportional to $\mu E(\vec{r})$. In a driven ensemble of atoms, though the driving field is polarized in one direction, spontaneous emission from each quantum system excites transitions in nearby quantum systems in other directions. This requires the consideration of all three components of the dipole moment operator. Rather than work in the angular momentum basis, a simpler way to approach this problem is to introduce a ``directional" state basis \cite{PhysRevA.78.013806}. These ``directional" states are those accessed by transitions that are driven by a single field polarization as depicted in Figure~\ref{fig:DirectionalStates}. This results in an effective four-level system which can display quantum interference.
\begin{figure*}
\centering
\includegraphics[width = 5in]{directionals.jpg}
\caption{a) When the polarization of an electromagnetic field sets the quantization axis of an atom, the effective quantum system is a two-level system with the direction of the transition dipole oriented along that polarization direction. b) When the polarization of an incident control field is different from the quantization axis of an atom, the effective quantum system is a four-level system with a dipole transition oriented along each field component. $\Omega$'s represent the field-atom interaction frequency, $\Delta$'s are the detuning between the frequency of the driving field and the transition frequency of the 2LS, and $\gamma$'s are the spontaneous emission rates from the excited states to the ground state. \label{fig:DirectionalStates}}
\end{figure*}
The Hamiltonian of a two-level atom interacting with an electromagnetic field in this directional state basis is:
\begin{equation}
H = \left ( \begin{matrix}
0 & \hbar\Omega_{e_x,g} & \hbar\Omega_{e_y,g} & \hbar\Omega_{e_z,g} \\
\hbar\Omega^*_{e_x,g}&E&0&0\\
\hbar\Omega^*_{e_y,g} &0&E&0\\
\hbar\Omega^*_{e_z,g} &0&0&E\\
\end{matrix} \right ),
\end{equation}
\noindent where the energy of the ground state is set to zero; the degenerate, excited, directional states have energy $E$, and the dipole-field interaction takes the form $\Omega_{e_\eta,g} = \frac{\mu_{e_\eta,g} E_\eta}{\hbar}$, where $\eta=(x,y,z)$.
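As a concrete illustration of how this matrix is assembled from the local field components, the short fragment below builds the $4\times4$ Hamiltonian in the basis $(\ket{g},\ket{e_x},\ket{e_y},\ket{e_z})$. It is a schematic sketch only: the transition dipole moment is assumed real and identical for the three directional transitions, and the numerical inputs are left to the caller rather than taken from the simulations reported here.

\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34  # J s

def directional_hamiltonian(E_field, dipole_moment, excited_energy):
    """4x4 Hamiltonian in the basis (|g>, |e_x>, |e_y>, |e_z>).

    E_field        : length-3 local electric field (V/m)
    dipole_moment  : transition dipole moment mu (C m), assumed equal
                     for all three directional transitions
    excited_energy : energy E of the degenerate excited states (J)
    """
    omega = dipole_moment * np.asarray(E_field, dtype=float) / HBAR
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1:] = HBAR * omega        # <g|H|e_eta> couplings
    H[1:, 0] = HBAR * omega        # Hermitian conjugates (omega is real here)
    H[1, 1] = H[2, 2] = H[3, 3] = excited_energy
    return H
\end{verbatim}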
\subsection{Mean-field Interatomic Interaction}
A microscopic representation of a large number of open quantum systems interacting with one another is computationally intensive. Since the Lindblad-Von Neumann equation involves matrix multiplication, this computation becomes onerous for a large number of atoms in the ensemble: even the most modern, optimized methods scale worse than $M^2$ \cite{davie2013improved}, where $M$ is the total number of states (for $N$ atoms, $M=4N$ for the atomic structure in Fig.~\ref{fig:DirectionalStates}b).
Therefore we describe the interaction between the members of the ensemble using a mean-field method. In this method, spatially separated atoms do not directly interact with one another through the Hamiltonian or Lindblad operators. Instead each atom interacts with and contributes to a local, mean field and sees the behaviour of other atoms through this mean field. The mean field is a sum of the external incident field that excites the ensemble and a local field created by the driven and spontaneously emitting atoms (quantum emitters) in the ensemble:
\begin{equation}
\vec{E}(\vec{r},t) = \vec{E}_{inc}(\vec{r},t) +\vec{E}_{local}(\vec{r},t) .
\end{equation}
This method of using a mean-field interaction is used in numerous areas in computational physics, such as in polymer self-consistent field theory \cite{MscThesis}, and computational electrodynamics \cite{PhysRevLett.72.1651}. For clarity, the ``mean'' in the mean field refers to a mean of the interactions between molecules and not a spatial mean of the fields themselves.
This simplification allows the overall quantum state space to remain relatively small. For a system consisting of $N$ four-level atoms, the fully coupled many-body problem is replaced by $M=4N$ local quantum states coupled only through $3N$ local field components. This greatly simplifies the equations, and allows us to solve the problem by evolving the density matrices locally with an efficient parallel implementation. In this study, we model an ensemble of approximately 4000 atoms. With the ensemble state basis reduced to a more manageable size, one now needs to determine how the quantum emitters create local fields.
\subsection{Numerical Implementation}
\label{PSTD}
To implement this calculation numerically, we modify and extend the methodology used by Sukharev and Nitzan \cite{PhysRevA.84.043802}. In our method, Maxwell's equations are solved numerically in time for a coarse-grained grid using a pseudo-spectral time domain method (PSTD) \cite{PTSD1, PTSD2}. The choice of using a PSTD method over the FDTD method used in Ref.\cite{PhysRevA.84.043802} is largely because the PSTD method is computationally more efficient than the FDTD method \cite{PTSD1, PTSD2}. There is also the added benefit of using a single lattice grid as opposed to the staggered grid required of the FDTD method \cite{kunz1993finite}. A uniaxial perfectly matched layer (PML) \cite{PMLbook1,PMLbook2} is used to eliminate reflection at the boundaries, and to strongly attenuate the signal so as to prevent signal wraparound in the simulation \cite{PTSD1}. For a plane wave, we modify the PML size and coefficients to reduce the relative reflected and wraparound field amplitudes to at most $10^{-5}$ of the incident field amplitude.
The simulation space is broken into a 3D computational grid, with each cell having associated with it an electric and magnetic field. This grid is chosen to be cubic with spacing of $l=1$ nm; this spacing corresponds to the interatomic spacing associated with the approximate atomic density used in the calculations ($N_A= 1 \times 10^{27}m^{-3}=l^{-3}$). The individual quantum emitters are assumed to be point emitters.
The order of operations at each time-step is as follows (a schematic sketch of the resulting loop is given after the list):
\begin{itemize}
\item The fields of the ``source cells" are updated analytically so that a plane wave is produced~\cite{PTSD1}.
\item Maxwell's equations are solved numerically in time for this coarse-grained grid using the pseudo-spectral time domain method. Firstly, the magnetic field $\vec{H}(\vec{r})$ is updated.
\item If there is a quantum emitter present in a cell, the density matrix of that cell is evolved by solving the Lindblad-Von Neumann equation (\ref{eq:Lindblad}) using a fourth-order Runge-Kutta method\cite{press1989numerical}, and the electric fields at the previous time-step as input. Going beyond previous studies \cite{PhysRevA.84.043802}, we include interatomic interactions in all three directions by implementing a generalized three-directional state basis described in subsection~\ref{3dbasis}.
\item The free current in each cell $\vec{J}(\vec{r})$ is determined for cells containing one or more quantum emitters.
\item The free current is used to update the local electric field, $\vec{E}(\vec{r})$, using Maxwell's equations.
\item The process is repeated and items of interest are recorded.
\end{itemize}
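The control flow implied by this list can be summarised in the schematic fragment below. This is only an outline, not the parallel production code: the grid and emitter objects, and all of the update routines passed in by the caller, are hypothetical placeholders, and the PML, source injection and spectral curl operations are assumed to live inside them.

\begin{verbatim}
def run_simulation(grid, emitters, n_steps, dt,
                   inject_source, update_H, update_E, rk4_step):
    """Schematic main loop of the coupled Maxwell-Liouville (PSTD) solver.

    All arguments are user-supplied placeholder objects/callables:
    grid must provide E_at(position) and J_at(position); each emitter
    carries its density matrix rho, a position and a free_current() method.
    """
    for step in range(n_steps):
        inject_source(grid, step * dt)     # analytic plane-wave source cells
        update_H(grid, dt)                 # PSTD update of the magnetic field
        for atom in emitters:
            # Lindblad-Von Neumann step using the previous-step electric field
            atom.rho = rk4_step(atom.rho, grid.E_at(atom.position), dt)
            # free-current contribution of this emitter
            grid.J_at(atom.position)[:] = atom.free_current()
        update_E(grid, dt)                 # PSTD update of E using H and J
        # observables would be recorded here at each step
\end{verbatim}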
Each simulation is run until the density matrix of the ensemble reaches an approximate steady state. For a collection of approximately 4000 emitters, a simulation takes between 8 and 12 CPU days on 8 cores \cite{SHARCNET}.
\section{Example Application: Increasing Solar-Cell Efficiency} \label{sec:disorder}
Thermal upconversion is a very important process of interest in the design of highly efficient solar cells \cite{lenert2014nanophotonic}. In silicon solar cells, electricity is only produced by photons with $\lambda < 1100$ nm due to the band gap in silicon; therefore solar photons of much longer wavelengths are ``wasted'' \cite{strumpel2007modifying}. The goal of many in the scientific community is to design a nanoscale system that can blueshift significant amounts of infrared photons, thus recouping some of this under-utilized energy.
The Lorentz-Lorenz model of an atomic electron driven by an incident electromagnetic field predicts that the induced polarization has a frequency that is blueshifted~ \cite{lorentz_book}. It can be expected that the induced electromagnetic field will also be at a blueshifted frequency compared to the incident field. According to this model, a driven neodymium atom, for which the ground-to-excited-state transition energy is $\approx$ 1 eV, when placed onto silicon that has a bandgap of just above 1 eV, could theoretically blueshift the incident light, and increase the silicon's absorption. We speculate that a dense arrangement of neodymium atoms on the silicon would be able to amplify this blueshifting effect. Therefore we model a dense ensemble of atoms driven by a plane wave electromagnetic field, with an aim to exploit the macroscopic/collective effects amplified from the microscopic dynamics.
Using the methodology described in the previous section, we calculate the response of a dense quantum ensemble to a monochromatic, plane-wave, driving field with a photon energy of $1.0$ eV (frequency of $241$ THz) in order to determine whether or not the frequency of the near field around the ensemble can be blueshifted. The collection of dense quantum emitters is arranged in the form of a 10 nm nanosphere with an origin of coordinates at its centre. The incident monochromatic, plane wave is polarized in the $\hat{y}$-direction, and propagates along the $\hat{z}$-direction. We monitor the electric field amplitude a short distance (3 nm) outside the nanosphere for 200 fs (0 fs to 200 fs). Taking a Fourier transform of this field amplitude, we see that the electromagnetic field around the nanosphere is no longer purely monochromatic (Fig.~\ref{fig:shift_plasmon}(a)) even if the input is. There is a blueshifted component that appears. Although this appears promising, if we continue the evolution and take a Fourier transform of the field for the window from 100--300 fs, the spectrum transforms to that depicted in Figure \ref{fig:shift_plasmon}(b). The blueshifted peak has disappeared.
\begin{figure}
\centering
\includegraphics[width = 3in]{3nm_field_fft.png}
\caption{Fourier transform of the electric field over a 200 fs time window at $\vec{r}$ = (0, 13 nm, 0), a point outside a 10 nm radius spherical ensemble of atoms centered at the origin. Each atom in the ensemble has an energy level structure as shown in Fig.~\ref{fig:DirectionalStates}(b), with energy spacing between the ground and excited states of 1 eV, and a spontaneous emission rate of 2.95 MHz. The number density of atoms in the ensemble is $4 \times 10^{27}$ atoms per cubic metre. The incident plane-wave electromagnetic field, of frequency 241 THz and electric field amplitude 1.5 GV/m, is polarized in the $\hat{y}$-direction, and propagates along the $\hat{z}$ direction. (a) Frequency components that appear in the time window 0--200 fs after the start of excitation include a distinct blue-shifted peak. (b) Frequency components that appear in the time window 100--300 fs after the start of excitation. Notice that the blue-shifted frequency components have died out. \label{fig:shift_plasmon}}
\end{figure}
This loss of upshifted frequencies at long times indicates that an ensemble of quantum emitters is not suitable for thermal upshifting in solar cells.
In order to probe why the frequency-shifted components disappear, we examine the spatial distribution of free-current density components ($J_\eta (\vec{r})$) in the nanospherical ensemble as a function of time. Snapshots of the free-current in the $xy$-plane are depicted in Fig.~\ref{fig:order_disorder}. It is immediately seen that the distribution of free currents in the ensemble becomes disordered as time goes on. Initially, the ensemble responds to the incident field in what is effectively an ordered phase; all the individual atoms respond to the field by oscillating in an identical manner. This phase is characterized by a near-uniform free current distribution anti-aligned with the incident field polarization. The spatial distribution of the free currents in directions perpendicular to the incident field polarization shows weak, quadrupolar patterns. At later times, due to the build-up of electric field components perpendicular to the incident polarization, the overall ordered pattern is lost, and small instantaneous domains are formed that do not move in phase with one another. These two phases, which we refer to as `ordered' and `disordered', correspond to the two time windows: one that has a blueshifted frequency component and one that does not. The time-scale of this onset of disorder ($\approx 28$ fs) in the free-current distribution is much faster than what one would expect from the normal spontaneous emission rates of the individual emitters ($1/\gamma_0 \approx 344$ ns).
\begin{figure*}
\centering
\includegraphics[width = 5in]{order-disorder.png}
\caption{Snapshots of the spatial distribution of free-current density components $\vec{J}_y(\vec{r})$ (row (a)) and $\vec{J}_x(\vec{r})$($A/m^2$) (row (b)) in the $x-y$ plane (with $\hat{y}$ being horizontal) bisecting a 10 nm nanosphere of atoms at times (i) 10 fs, (ii) 100 fs and (iii) 250 fs after start of excitation. Parameters of the ensemble and the incident field are the same as in Fig.\ref{fig:shift_plasmon}. The snapshots show that at early times, there are ordered patterns in the spatial distribution of the free current density; and as time goes on, disorder sets in due to interatomic interactions, finally ending in a disordered `phase'. \label{fig:order_disorder}}
\end{figure*}
Examination of the spatially-averaged ensemble density matrix ($\bar{\rho} = \frac{1}{V}\int d^3 \vec{r}\rho(\vec{r}) =\frac{1}{N} \sum^N_n \rho_n$) reveals some interesting connections between the macroscopic and microscopic dynamics. In Fig.~\ref{ensemble-pop}, we see that the ensemble-averaged excited-state population that lies along the incident polarization axis ($\bar{\rho}_{yy}$) appears to quickly reach a steady-state. As the free-current distribution quickly becomes disordered, non-directly-driven excited states ($\ket{e_x}$ and $\ket{e_z}$) gain and retain state population, as seen from the increase in $\bar{\rho}_{xx}$ and $\bar{\rho}_{zz}$. This directly shows that inter-atomic interactions (mediated through a mean field) with strong driving fields lead to a mixing of multi-directional excited states. As all of the ensemble state populations rapidly reach an approximate steady-state that oscillates only with the incident frequency, the time-averaged coherences in the rotating frame reduce to a small net coherence oscillating in the incident field polarization direction with the frequency of the incident field. By examining the purity ($Tr(\bar{\rho}^2)$) of the ensemble in Fig.\ref{ensemble-pop}(b), it is seen that the ensemble state undergoes decoherence over the same timescale as the population leakage.
\begin{figure}
\centering
\includegraphics[width = 3in]{s12.png}
\includegraphics[width = 3in]{purity.png}
\caption{(a) Spatially-averaged populations in the $\hat{x}$, $\hat{y}$ and $\hat{z}$-directional excited states and (b) ensemble-averaged purity for a 10 nm radius nanosphere of atoms with atomic number density $N_A = 4.0 \times 10^{27} m^{-3}$. Other parameters of the ensemble and the incident field are the same as in Fig.\ref{fig:shift_plasmon}. \label{ensemble-pop}}.
\end{figure}
The onset of disorder in the current density distribution is directly linked to the fact that population from the excited state corresponding to the polarization direction of the incident light (in our case $|e_y\rangle$) is redistributed due to interatomic interactions into the other excited states ($|e_x\rangle$ and $|e_z\rangle$), which in turn is linked to decoherence in the ensemble state. This ``directional state leakage'' and the associated decoherence effects occur on a time-scale that is faster, by several orders of magnitude, than what would be predicted by normal spontaneous emission (the lifetime of the ensemble excited state is about $28$ fs, in comparison to the lifetime of a single 2LS, which is $\approx 344$ ns). Note that this disorder is purely an ensemble effect; the local purity of individual coarse grains remains close to unity on this time-scale since the individual spontaneous emission rate is low ($2.95 \times 10^6$ Hz).
By examining the dynamics of the strongly-driven, dense ensemble of 2LS, we find that interatomic interactions create strong disorder in the ensemble states over a characteristic time. This disorder imposes an overall effective lifetime for quantum scattering effects.
Figure \ref{fig:dens_vary_pop_y} shows the ensemble-averaged excited state populations as a function of increasing number density. At very low number density, the ensemble-averaged excited state population oscillates much in the same way as a single, driven two-level system with spontaneous emission. Since the interatomic interactions are weak, the population in the non-directly driven excited states remains small. As the number density of atoms increases, the interatomic interactions cause population leakage into the non-directly driven excited states. At the same time, we see that the oscillation in the directly-driven excited state is damped much more quickly than in the low-density case. Increasing interatomic interactions appear to increase the rate of spontaneous emission in the ensemble, which we already saw is linked to the onset of disorder in the free-current density. As the number density increases further, screening makes it more difficult to excite population into the directly-driven excited state, and hence the populations in the non-directly driven excited states increase at slower rates.
\begin{figure}
\centering
\includegraphics[width = 3in]{xyz.png}
\caption{Spatially averaged populations ($\bar{\rho}_{xx}$, $\bar{\rho}_{yy}$, $\bar{\rho}_{zz}$) in the $\hat{x}$, $\hat{y}$ and $\hat{z}$-directional excited states for a 10 nm radius spherical ensemble of atoms with varying number densities. Parameters of the incident field are the same as in Fig.\ref{fig:shift_plasmon}. \label{fig:dens_vary_pop_y}}
\end{figure}
The presence of these extremely strong, decoherent processes in a driven quantum ensemble has immediate consequences for the numerical modelling of a driven ensemble of quantum emitters. Firstly, these results indicate that the ``short-pulse method'' \cite{PhysRevA.84.043802} (the use of ultra-short, sub-fs pulses to determine continuous scattering amplitudes) may not be generally applicable when modelling quantum systems driven at high intensities. Secondly, these results indicate that for an ensemble of quantum emitters, a one- or two-directional basis set (such as in Refs.~\cite{rand2008,DIET,PhysRevA.84.043802}) is insufficient to fully capture inter-atomic interactions, and can lead to overestimates in their long-term coherent behaviours at high densities. This indicates that for a general ensemble of dense emitters, a full directional state basis is required. Our calculation therefore goes beyond the standard approximations by using a plane wave excitation, and a full three-dimensional state basis.
\subsection{Quantifying Disorder in Driven, Dense Quantum Ensembles}
The ensemble-averaged excited state density in the incident field polarization direction $\bar{\rho}_{yy}$ can be fit to a phenomenological model of a driven two-level system in which there is spontaneous decay from the excited state to the ground state, as well as a loss of population density.
\begin{equation}
\bar{\rho}_{yy} = a \exp(-\gamma_{ens} t) \cos(\Omega t) + b + c \exp(-g t),
\end{equation}
\noindent where $a$, $b$, and $c$ are dimensionless constants, $\gamma_{ens}$ is analogous to the damping rate of the driven excited state ($\hat{y}$), which we call ``the disorder-onset rate'', $g$ represents the rate at which state population ``leaks'' from the $\ket{e_y}$ state to the $\ket{e_x}$ and $\ket{e_z}$ excited states, and $\Omega$ is the Rabi frequency, which is proportional to the electric field amplitude of the near-resonant driving field.
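In practice, the rates in this model can be extracted by, for example, a standard nonlinear least-squares fit of the simulated $\bar{\rho}_{yy}(t)$ traces. The fragment below is a minimal SciPy sketch of such a fit; the synthetic data generated in it merely stand in for the PSTD output, and all numerical values are illustrative rather than taken from Table~\ref{table:fit1}.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rho_yy_model(t, a, b, c, gamma_ens, g, omega):
    """Damped Rabi oscillation plus a slowly decaying leakage term.
    Times are in fs, so fitted rates come out in 1/fs (1/fs = 1e15 Hz)."""
    return (a * np.exp(-gamma_ens * t) * np.cos(omega * t)
            + b + c * np.exp(-g * t))

# Synthetic stand-in for the ensemble-averaged population from the PSTD run.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 300.0, 2000)                       # fs
data = rho_yy_model(t, 0.2, 0.1, -0.1, 0.036, 0.018, 0.2)
data += 1e-3 * rng.normal(size=t.size)

# Initial guesses; the frequency guess would normally come from the spectrum.
p0 = (0.15, 0.1, -0.1, 0.03, 0.015, 0.2)
popt, _ = curve_fit(rho_yy_model, t, data, p0=p0, maxfev=20000)
print("gamma_ens = %.2e Hz, g = %.2e Hz" % (popt[3] * 1e15, popt[4] * 1e15))
\end{verbatim}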
Just as the spontaneous emission rate of an individual quantum state tells us how long a 2LS can remain viable as a qubit, the effective disorder-onset rate of the system tells us how long true quantum behaviour stays relevant in the ensemble.
A table summarizing the fits for disorder-onset rates and state leakage rates as a function of increasing number density of the atoms in the ensemble can be found in Table \ref{table:fit1}.
\begin{table}
\centering
\begin{tabular}
{ | c | c | c |}
\hline
Number Density $N_A$ ($m^{-3}$) & $\gamma_{ens}$ (Hz) & $g$ (Hz) \\ \hline \hline
$1 \times 10^{27}$ & $6.243 \times 10^{11} $ & $8.983 \times 10^{11}$
\\ \hline
$2.5 \times 10^{27}$ & $1.455 \times 10^{13} $ & $6.173 \times 10^{12}$
\\ \hline
$4 \times 10^{27}$ & $3.555 \times 10^{13} $ & $1.845 \times 10^{13}$
\\ \hline
$5 \times 10^{27}$ & $5.072 \times 10^{13}$ & $2.637 \times 10^{13}$
\\ \hline
$7.5 \times 10^{27}$ & $5.305 \times 10^{13} $ & $9.193 \times 10^{12}$
\\ \hline
$1 \times 10^{28}$ & $5.194 \times 10^{13} $ & $1.475 \times 10^{12}$
\\ \hline
\end{tabular}
\caption{Disorder-onset rates ($\gamma_{ens}$) and excited-state population leakage rates ($g$) for a 10 nm radius spherical ensemble of atoms with varying number density ($N_a$). The amplitude of the driving electromagnetic wave is $E=1.5 \times 10^9$ V/m. The spontaneous emission rate of a single atom in the ensemble is 2.95 MHz.\label{table:fit1}}
\end{table}
As the number density increases, the disorder onset rate $\gamma_{ens}$ increases. At very high number density, $\gamma_{ens}$ becomes so large that the $\ket{e_y}$ state cannot be significantly populated, so the ``leakage'' to other directional states starts to disappear. We note that the onset of disorder in denser ensembles is largely dominated by $\gamma_{ens}$. The dependence of $\gamma_{ens}$ on the number density ($N_a$) is plotted in Figure \ref{fig:logistic_fit} for a dense ensemble driven with strong fields ($\Omega\gg\gamma_0$). From this figure, it is clear that a strongly driven, dense quantum ensemble experiences a fast (compared to a single atom's spontaneous emission rate $\gamma_0 = 2.95 \times 10^6$ Hz) onset of disorder, and the disorder-onset rate increases as the density of atoms in the ensemble increases. This indicates that both rapid onset of disorder and leakage to three-directional states via interatomic interactions are important in the dynamics of a strongly driven ensemble of atoms. Any quantum control calculations that are applied to dense collections of atoms should not use short-pulse methods and/or reduced basis sets that ignore directional states unless they are driven by extremely rapid pulses or have a low number density.
The dependence of $\gamma_{ens}$ on $N_A$ is particularly interesting. Figure~\ref{fig:logistic_fit} shows that this dependence is nonlinear: although $\gamma_{ens}$ grows quickly at low densities, its growth slows at high densities and it converges to a saturation value. This behaviour appears to be best described by a saturation curve that takes the form of the logistic function \cite{gershenfeld1999nature}
\begin{equation}
\gamma_{ens} = \frac{L}{1+\exp(-k(x-a))}, \label{eq:logistic}
\end{equation}
\noindent where $L$ is the saturation value of $\gamma_{ens}$, $k$ is a rate constant, $x$ is the number density, and $a$ is the number density at the inflection point, where the disorder-onset rate begins to saturate. This saturation curve is typically used in evolutionary systems in which there is a competition between different processes \cite{verhulst1845mathematical}. In this particular ensemble system, there is a competition between the incident field that is trying to force the ensemble to oscillate coherently, and the disorder (i.e. the mean-field mediated inter-atomic interactions) that is trying to prevent this coherent oscillation.
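As a quick consistency check, Eq.~\ref{eq:logistic} can be evaluated with the fitted parameters listed in Fig.~\ref{fig:logistic_fit}; for the stronger driving field this reproduces the directly fitted rates of Table~\ref{table:fit1} reasonably well at the higher densities. The short fragment below is purely illustrative.

\begin{verbatim}
import numpy as np

def gamma_ens_logistic(N_a, L, k, a):
    """Logistic saturation of the disorder-onset rate, Eq. (logistic)."""
    return L / (1.0 + np.exp(-k * (N_a - a)))

# Fitted parameters for the E = 1.5e9 V/m case (table in the figure).
L, a, k = 5.316e13, 3.337e27, 1.353e-27
for N_a in (1e27, 2.5e27, 4e27, 1e28):
    rate = gamma_ens_logistic(N_a, L, k, a)
    print(f"N_a = {N_a:.1e} m^-3  ->  gamma_ens ~ {rate:.2e} Hz")
\end{verbatim}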
Figure \ref{fig:logistic_fit} shows the fit of the disorder onset rate $\gamma_{ens}$ to the logistic function (Eq.~\ref{eq:logistic}) for two different incident field intensities. One conclusion that can be easily drawn from such fits is that, as the intensity of the incident light is increased, the saturation point ($L$) of the disorder-onset rate also increases. This is because, at higher intensities, the coherent driving by the incident field excitation is able to more strongly overcome the decoherence caused by interatomic interactions.
\begin{figure}
\centering
\includegraphics[width = 3in]{s5.png}
\begin{tabular}
{ | c | c | c | c |}
\hline
Field amplitude ($V/m$) & $L$ (Hz) & $a$ ($m^{-3}$) & $k$ ($m^3$) \\ \hline \hline
$1.5 \times 10^9$ & $5.316 \times 10^{13} $ & $3.337 \times 10^{27}$ & $1.353 \times 10^{-27}$
\\ \hline
$7.5 \times 10^8$ & $3.055 \times 10^{13}$ & $1.762 \times 10^{27}$ & $2.286 \times 10^{-27}$
\\ \hline
\end{tabular}
\caption{Effective ensemble disorder-onset rates ($\gamma_{ens}$) for a 10 nm radius spherical ensemble of atoms with varying number density ($N_a$), for two different amplitudes of the driving plane-wave electromagnetic wave. The spontaneous emission rate of a single atom in the ensemble is 2.95 MHz. The data are fit to a logistic function as in Eq.~\ref{eq:logistic}. The saturation value of the disorder-onset rate $L$ increases as the amplitude of the driving field increases. \label{fig:logistic_fit}}
\end{figure}
This dependence of the disorder-onset rate ($\gamma_{ens}$) on the amplitude of the driving field indicates that, for strongly-driven, dense quantum systems, \emph{the disorder-onset rate} depends on the density matrix and is therefore \emph{time-dependent}. For dense collections of quantum emitters, a better model of $\gamma_{ens}$ than a constant value would be to estimate it from the instantaneous quantum state of the ensemble.
\section{Modelling Dense Ensemble Dynamics with Single Particle Techniques}\label{sec:single}
Examining the dynamics of a driven, nanoscale ensemble of quantum systems, one notable observation is that the evolution of the ensemble state population in the incident field polarization direction is qualitatively similar to that of a driven two-level system with two competing decoherence mechanisms --- spontaneous emission, and a loss of population from the excited state parallel to the incident field polarization. Therefore we aim to approximate this behaviour with a single-particle model by modifying the decoherence scheme.
This single-particle model should be similar in nature to the individual particles that make up the ensemble. Its basis consists of a ground state $\ket{g}$ and three directional excited states, $\ket{e_x}$, $\ket{e_y}$ and $\ket{e_z}$, and it is excited by an incident plane wave. The Hamiltonian of this system, after making the rotating wave approximation, is
\begin{equation}
H = \left ( \begin{matrix}
0 & \frac{\hbar\Omega_{x}}{2} & \frac{\hbar\Omega_{y}}{2} & \frac{\hbar\Omega_{z}}{2} \\
\frac{\hbar\Omega_{x}^*}{2}&-\bigtriangleup&0&0\\
\frac{\hbar\Omega_{y}^*}{2} &0&-\bigtriangleup&0\\
\frac{\hbar\Omega_{z}^*}{2} &0&0&-\bigtriangleup\\
\end{matrix} \right ),
\end{equation}
\noindent where, $\bigtriangleup$ represents the detuning between the atomic transition frequency and the frequency of the incident light, and the Rabi frequency-like terms $\Omega_i, i=x,y,z$ are proportional to the electric field amplitudes in each of the three Cartesian directions.
For this Hamiltonian, the electric field terms included are the external incident field ($E_y$), and perpendicular scattered field components $E_x$ and $E_z$ that are much smaller than the incident field. For the perpendicular scattered field components, we assume they arise from the field of a dipole with $E_{x,z} \approx E_y \frac{\mu}{e r}\sin(\theta) \hat{\theta}$ \cite{sharma}. In this case, $r= \sqrt[3]{\frac{3 \sqrt{8}}{4 N_a \pi}}$ is the separation between diagonal nearest neighbours, $\theta = \pi/4$ is the angle between them, $e$ is the charge of an electron and $\mu$ is the transition dipole moment. For dense systems, the magnitude of this scattered field is about 1--2 orders of magnitude less than that of the incident field.
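The size of these perpendicular components can be estimated in a few lines. The fragment below evaluates the ratio $E_{x,z}/E_y \approx (\mu/er)\sin\theta$ using the diagonal nearest-neighbour separation defined above; the dipole moment of a few debye used at the bottom is an illustrative assumption, not a value taken from the simulations.

\begin{verbatim}
import numpy as np

E_CHARGE = 1.602176634e-19   # C

def perpendicular_field_ratio(mu, N_a, theta=np.pi / 4):
    """Estimate E_{x,z}/E_y ~ (mu / (e r)) sin(theta), with
    r = (3*sqrt(8) / (4*pi*N_a))**(1/3) the diagonal-neighbour separation."""
    r = (3.0 * np.sqrt(8.0) / (4.0 * np.pi * N_a)) ** (1.0 / 3.0)
    return mu * np.sin(theta) / (E_CHARGE * r)

mu = 5 * 3.33564e-30          # illustrative dipole moment of 5 debye, in C m
print(perpendicular_field_ratio(mu, N_a=4e27))
\end{verbatim}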
In the ensemble, an individual quantum system can spontaneously emit radiation from the $\ket{e_x}$, $\ket{e_y}$, and $\ket{e_z}$ excited directional states with rates $\gamma_{xg}$, $\gamma_{yg}$ and $\gamma_{zg}$ respectively. This emitted radiation can then excite either the $\ket{g} \rightarrow \ket{e_x}$, $\ket{g} \rightarrow \ket{e_y}$, or $\ket{g} \rightarrow \ket{e_z}$ transitions in nearby atoms. This process is similar to the F\"orster resonance energy transfer (FRET) process commonly seen in biophysical systems \cite{nanopticsbook}. We adapt this FRET picture to model the decoherence in our single-particle model. Decoherence couplings are added that look like forbidden electric-dipole transitions, as shown in Figure \ref{fig:DecayCoupledDirectionalStates}. Although these transitions look similar to spontaneous emission, they do not result in net emission of a photon. Rather, they represent the emission of a photon and the reabsorption of that photon by another transition in an adjacent atom. This makes these transition rates behave more like dephasing rates ($\delta_{ij}$), as they do not emit energy from the system. $\delta_{ij}$ represents the dephasing rate due to emission of a photon from the state $\ket{e_i}$ of one atom that is absorbed by a neighbouring atom, exciting it to state $\ket{e_j}$. $\delta_{xx}$, $\delta_{yy}$ and $\delta_{zz}$ are referred to as ``parallel'' dephasing rates, whereas $\delta_{xy}$, $\delta_{yz}$, and $\delta_{zx}$ are referred to as ``perpendicular'' dephasing rates. A diagram of all the decoherence processes in the directional state basis of the single-atom model is provided below in Figure \ref{fig:DecayCoupledDirectionalStates}.
\begin{figure}
\centering
\includegraphics[width = 3in]{decay-coupled-directionals.jpg}
\caption{ Modified decoherence structure in the single-particle model of a driven atomic ensemble. When significant inter-atomic interactions are present in an ensemble, it becomes possible for the spontaneously emitted radiation from the excited state of one atom to excite state population from the ground state of a nearby atom. This emission followed by absorption is modelled by a dephasing process between states that have electric-dipole forbidden transitions ($\delta_{ij}$'s in red). These dephasing rates do not affect the total state population, they only reduce the overall coherence of the single particle state that models the ensemble. \label{fig:DecayCoupledDirectionalStates}}
\end{figure}
\subsection{Estimating Decoherence Rates}
We want to estimate the decoherence rates $\gamma_{ig}$'s and $\delta_{ij}$'s that will be inputs into the single particle model. Let $\vec{E}_d$ be the amplitude of the field driving the ensemble, and $\vec{E}_{local}$ be the local field at the location of the atom. Let $\gamma_0$ be the vacuum spontaneous emission rate of a single two-level atom.
To calculate the spontaneous emission rate $\gamma_d$ from an excited state to a ground state of an atom in an ensemble, one can define an `enhancement factor' by comparing the power emitted by an atom in an ensemble $P$ to that which it emits in free space $P_0$, calculated via the Larmor formula. This takes the form:
\begin{equation}\begin{aligned}
\frac{\gamma_d}{\gamma_{0}} &= \frac{P}{P_{0}} = \frac{Re(\vec{j}_d^* \cdot \vec{E}_{local})}{Re(\vec{j}_d^* \cdot \vec{E}_{d})},
\end{aligned}
\end{equation}
\noindent where $\vec{j}_d$ is the free current of the transition. The local field $\vec{E}_{local}$ is the sum of the driving field $\vec{E}_d$, and the field scattered by other atoms $\vec{E}_{ext}$. Therefore,
\begin{equation}\begin{aligned}
\frac{\gamma_d}{\gamma_{0}} &= \frac{P}{P_{0}} = 1 + \frac{Re (\vec{j}_d^* \cdot \vec{E}_{ext})}{Re(\vec{j}_d^* \cdot \vec{E}_{d})}.
\label{eqn:hi_td_decay_enhance},
\end{aligned}
\end{equation}
If the ensemble contains many strongly interacting quantum elements, the decay rate enhancement in various directions will be a complicated function of time and therefore cannot be easily evaluated with a single, constant enhancement factor. If the transitions are treated as oscillating dipole emitters, however, Eq.~\ref{eqn:hi_td_decay_enhance} can be simplified to \cite{nanopticsbook},
\begin{equation} \begin{aligned}
\frac{\gamma_d}{\gamma_{0}} = \frac{P}{P_{0}} &= 1 - \frac{6 \pi \epsilon_0}{|\mu_0|^2} \frac{c^3}{\omega^4} Re(\vec{j_0}^* \cdot \vec{E_{ext}}), \\
&= 1 + \frac{6 \pi \epsilon_0}{|\mu_0|^2} \frac{1}{k^3} Im(\vec{\mu_{0}}^* \cdot \vec{E_{ext}}).
\label{eqn:td_decay_enhance}
\end{aligned}\end{equation}
In the single particle model, there are no fields due to scattering from other atoms, i.e., $E_{ext}=0$. Therefore the spontaneous emission rates $\gamma_{ig}$ are all equal to $\gamma_0$.
The dephasing rates ($\delta_{ij}$) associated with energy transfer between atomic transitions can be calculated by the following process \cite{nanopticsbook}. The magnitudes of the dephasing rates depend on the excitation transfer between atoms. At different spatial locations, these dephasing rates can be quantified by
\begin{equation}
\frac{\delta_{i \rightarrow j}}{\gamma_0} = \frac{P_{i \rightarrow j}}{P_0},
\end{equation}
\noindent where $\delta_{i \rightarrow j}$ is the rate of energy transfer from transition $i$ in one atom ($\ket{e_i} \rightarrow \ket{g}$) to transition $j$ ($\ket{g} \rightarrow \ket{e_j}$) in a neighbouring atom, and $P_{i \rightarrow j}$ is the power received by the ``acceptor'' transition ($j$) from the field created by the ``donor'' transition ($i$). $P_{i \rightarrow j}$ is computed by
\begin{equation}
P_{i \rightarrow j} = \frac{1}{2} Re(\vec{j}^*_j(\vec{r_j}) \cdot \vec{E}_i(\vec{r_i})),
\label{eqn:Fret_Power2}
\end{equation}
\noindent where $\vec{j}^*_j(\vec{r_j})$ is the free current of the acceptor and $\vec{E}_i(\vec{r_i})$ is the field created by the donor.
Starting with the near field of a radiating point dipole
\begin{equation}
\vec{E_i}(\vec{r}) = \frac{1}{4\pi\epsilon_0} \left( \frac{3(\vec{\mu_i}\cdot \hat{r})\hat{r}-\vec{\mu_i}}{r^3} \right ).
\end{equation}
where $\vec{E_i}(\vec{r})$ is the electric field, $\vec{\mu_i}$ is the dipole moment of transition $i$ in a single particle, and $\vec{r}$ is the spatial position. Assuming that the atoms are spherically distributed two atomic radii apart, so that $1/r^3 =\frac{1}{8} \frac{4 \pi}{3} N_a$, this yields
\begin{eqnarray}
\vec{E_i}(\vec{r}) &=& \frac{1}{4\pi\epsilon_0} \frac{1}{8} \frac{4 \pi}{3} N_a \left(3(\vec{\mu_i}\cdot \hat{r})\hat{r}-\vec{\mu_i} \right ) \nonumber\\&=& \frac{N_a}{24 \epsilon_0} |\mu_i|\left( 3(\hat{\mu_i}\cdot \hat{r})\hat{r}-\hat{\mu_i} \right ).
\end{eqnarray}
Therefore the power transferred due to interaction between two individual particle transitions ($i$ and $j$), assuming that $\vec{j} \sim \omega \vec{\mu}$
\begin{equation}
P_{i \rightarrow j} = \frac{1}{2} Re(\vec{j}^*_j(\vec{r}) \cdot \vec{E}_i(\vec{r})) \approx \frac{1}{2} \omega\vec{\mu_j} \cdot \vec{E}_i(\vec{r})
\end{equation}
becomes
\begin{equation}
P_{i \rightarrow j} = \frac{1}{2} \omega\vec{\mu_j} \cdot (\frac{N_a}{24 \epsilon_0} |\mu_i|\left( 3(\hat{\mu_i}\cdot \hat{r})\hat{r}-\hat{\mu_i} \right )).
\end{equation}
Given that the dipole moments for each transition are the same ($|\mu_{i,j}|=|\mu|$) (as all atoms are identical), this reduces to
\begin{eqnarray}
P_{i \rightarrow j} &=& \frac{N_a \omega}{48 \epsilon_0} |\mu|^2\left (\hat{\mu_j} \cdot\left( 3(\hat{\mu_i}\cdot \hat{r})\hat{r}-\hat{\mu_i} \right )\right ) \\
&=& \frac{N_a \omega}{48 \epsilon_0} |\mu|^2\left ( 3(\hat{\mu_i}\cdot \hat{r})(\hat{\mu_j} \cdot\hat{r})-(\hat{\mu_j} \cdot\hat{\mu_i}) \right ).
\end{eqnarray}
With this, we can calculate $\delta_{i \rightarrow j}$ by normalizing to the power output of a classical oscillating dipole
\begin{equation}
\frac{\delta_{i \rightarrow j}}{\gamma_0} = \frac{P_{i \rightarrow j}}{P_0} = \frac{\frac{N_a \omega}{48 \epsilon_0} |\mu|^2\left (3(\hat{\mu_i}\cdot \hat{r})(\hat{\mu_j} \cdot\hat{r})-(\hat{\mu_j} \cdot\hat{\mu_i})\right )}{\frac{\mu_0 \omega^4 |\mu|^2}{12\pi c}}.
\end{equation}
This yields
\begin{eqnarray}
\frac{\delta_{i \rightarrow j}}{ {\gamma_0}} &=&\frac{ N_a \pi c^3} {4 \omega^3} \left( 3(\hat{\mu_i}\cdot \hat{r})(\hat{\mu_j}\cdot\hat{r})-\hat{\mu_j}\cdot\hat{\mu_i} \right ).
\end{eqnarray}
Lastly we add a factor of $\sqrt{\rho_{ii} \rho_{gg}} \sqrt{\rho_{jj} \rho_{gg}}$ which serves as an estimate of the fraction of atoms in the ensemble that experience the $\ket{i} \rightarrow \ket{j}$ energy transfer.
\begin{eqnarray}
\frac{\delta_{i \rightarrow j}}{ {\gamma_0}} &=&\frac{ N_a \pi c^3} {4 \omega^3} \left( 3(\hat{\mu_i}\cdot \hat{r})(\hat{\mu_j}\cdot\hat{r})-\hat{\mu_j}\cdot\hat{\mu_i} \right )\nonumber \\ && (\sqrt{\rho_{ii} \rho_{gg}} \sqrt{\rho_{jj} \rho_{gg}})\label{eqn:para};
\end{eqnarray}
For the ``parallel'' transitions (for example $\delta_{xx}$), we use Equation \ref{eqn:para} and normalize to the power of a radiating dipole of the transition frequency $\omega$,
\begin{equation}
\frac{\delta_{i \rightarrow j}}{ {\gamma_0}} =\frac{ N_a \pi c^3} {2 \omega^3} (\sqrt{\rho_{ii} \rho_{gg}} \sqrt{\rho_{jj} \rho_{gg}}).
\end{equation}
For the transitions that are ``perpendicular'' (for example $\delta_{xy}$), we use the nearest diagonal neighbour, instead of the nearest neighbour, as this diagonal neighbour is the closest lattice site in which a dipole can produce radiated fields in a perpendicular direction to its dipole moment. This involves multiplying Equation \ref{eqn:para} by a factor of $\frac{1}{\sqrt{8}}$, since $r' = \sqrt{2}\, r$, and setting $\theta = \pi/4$. The dephasing rate of a perpendicular transition is calculated as,
\begin{equation}
\frac{\delta_{i \rightarrow j}}{ {\gamma_0}} =\frac{ 3 N_a \pi c^3} {16 \sqrt{2} \omega^3} (\sqrt{\rho_{ii} \rho_{gg}} \sqrt{\rho_{jj} \rho_{gg}}).
\end{equation}
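Putting the two expressions together, the state-dependent dephasing rates that enter the single-particle model can be evaluated directly from the instantaneous density matrix. The fragment below is a minimal sketch; the density matrix, transition frequency and single-atom decay rate used at the bottom are illustrative placeholders only.

\begin{verbatim}
import numpy as np

C = 2.99792458e8  # m/s

def dephasing_rates(rho, gamma0, N_a, omega):
    """Parallel and perpendicular FRET-like dephasing rates (Hz) for the
    single-particle model; rho is the 4x4 density matrix in the basis
    (|g>, |e_x>, |e_y>, |e_z>)."""
    pre_par = N_a * np.pi * C**3 / (2.0 * omega**3)
    pre_perp = 3.0 * N_a * np.pi * C**3 / (16.0 * np.sqrt(2.0) * omega**3)
    pg = rho[0, 0].real
    delta = {}
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            weight = np.sqrt(rho[i, i].real * pg) * np.sqrt(rho[j, j].real * pg)
            delta[(i, j)] = gamma0 * (pre_par if i == j else pre_perp) * weight
    return delta

# Illustrative inputs: 1 eV transition, gamma0 = 2.95 MHz, N_a = 4e27 m^-3.
omega = 1.602176634e-19 / 1.054571817e-34      # rad/s for a 1 eV transition
rho = np.diag([0.7, 0.05, 0.2, 0.05]).astype(complex)
delta = dephasing_rates(rho, 2.95e6, 4e27, omega)
print(delta[(2, 2)], delta[(2, 1)])            # delta_yy and delta_yx, in Hz
\end{verbatim}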
Placing these decoherence parameters into the single-particle Liouville equation, and solving numerically, yields the excited state populations depicted in Figs.~\ref{fig:hola}(a) and (b). The single-particle calculation is overlaid with the ensemble-averaged calculation described in the previous section. Comparing the results of the single particle approximation to the full ensemble calculation, we see that there is relatively good agreement between the two methods. The two curves are not identical; however, they are close enough to suggest that this single-particle, modified-decoherence scheme captures a significant amount of the underlying physical processes involved, and that the FRET process is a good model of interatomic interactions in our mean-field calculation.
\begin{figure}
\centering
\includegraphics[width = 3in]{hola1-colour.png}
\includegraphics[width = 3in]{hola2-colour.png}
\caption{Comparison between the single particle model calculation, and the mean field PSTD calculation of spatially averaged excited state populations for a 10 nm radius spherical ensemble of atoms. The amplitude of the driving electromagnetic wave is $E=1.5 \times 10^9$ V/m. The number densities of atoms in the ensemble are (a) $4.0 \times 10^{27}$ and (b) $2.5 \times 10^{27}$ atoms per cubic metre. \label{fig:hola}}
\end{figure}
The success of the effective single particle model shows that a FRET-like decoherence process takes place in a dense, driven ensemble. The full calculation required $\approx 16$ CPU days of runtime; in comparison the single particle calculation required $\approx 2$ CPU minutes of runtime. Thus, the single particle model can provide a reasonably accurate, quick estimate of the quantum dynamics in an ensemble before attempting a full calculation.
The main limitation of this single-particle model is that it does not explicitly include coherent scattering of a field emitted by one emitter from another emitter. That is, in the Hamiltonian, only the incident electromagnetic field appears. In reality, this Hamiltonian should also depend on the instantaneous state and overall geometry of the ensemble. Another limitation of this model is that it assumes that only nearest-neighbour interactions are relevant to the couplings; in truth, couplings to more distant neighbours and interference effects between atoms are required to increase the model's accuracy. In future work, one could improve this model by adopting a more robust coupling geometry to account for scattered driving fields, and farther neighbours.
\section{Conclusion} \label{sec:conclusion}
We have studied the behaviour of a dense ensemble of quantum emitters driven by an intense, electromagnetic plane wave. The state of each quantum emitter evolves according to the Lindblad-Von Neumann equation. The evolution of the ensemble reflects not only the interaction between the driving field and individual atoms, but also the interactions between individual emitters. To study this evolution, we have implemented a coarse-grained, mean field method based on the PSTD technique in which the Lindblad-Von Neumann equation for each quantum emitter is solved in conjunction with a solution to Maxwell's equations over the whole ensemble. In order to correctly model the excitation of the quantum elements in 3D due to spontaneous emission from nearby neighbours, we have implemented a multi-directional basis for the quantum state of each emitter.
The dynamics of the driven quantum ensemble is characterized by a ``disorder onset rate'' that is a function of number density. This ensemble disorder-onset rate reflects the effect of interactions between atoms and is relatively high for dense, strongly-interacting systems. The presence of this disorder is immediately significant as it sets an effective time limit in which quantum optical effects are relevant in ensemble dynamics. It also serves as a limit on the applicability of theoretical techniques such as the short-pulse method and simplified basis sets, the use of which may lead to overestimates of coherent effects in quantum ensembles.
Lastly, we have provided a theoretical method in which the disorder produced during the evolution of a driven ensemble of quantum emitters can be modelled as decoherence of a single particle, specifically as dephasing. We have used this model to approximate the state evolution of a dense quantum ensemble using an effective single-particle density matrix. This method works by allowing for FRET-like coupling between multiple quantum emitters in the ensemble. It provides a close approximation to the full, mean-field simulation in significantly less computational time. This single-particle model also highlights how decoherence processes affect overall ensemble behaviour, which may prove useful in designing protocols for decoherence control.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,055 |
<?php
declare(strict_types=1);
namespace Octris\Process;
/**
* Class for handling IPC between processes. The communication is handled using
* socket streams and the data is transferred json encoded using binary mode with
* null-byte termination.
*
* @copyright copyright (c) 2015-present by Harald Lapp
* @author Harald Lapp <harald@octris.org>
*/
class Messaging
{
/**
* Maximum block size to read from socket.
*
* @type int
*/
const BLOCK_SIZE = 4096;
/**
* Socket handle for receiving messages from process.
*
* @type resource
*/
protected $reader;
/**
* Socket handle for sending messages to process.
*
* @type resource
*/
protected $writer;
/**
* Constructor.
*/
protected function __construct($reader, $writer)
{
$this->reader = $reader;
$this->writer = $writer;
}
/**
* Destructor.
*/
public function __destruct()
{
socket_close($this->reader);
socket_close($this->writer);
}
/**
* Send message to process.
*
* @param mixed $msg Message to write.
*/
public function send(mixed $msg): void
{
$this->socketWrite($this->writer, $msg);
}
/**
* Receive message from process.
*
* @return mixed Received message or false if no message received.
*/
public function recv(): mixed
{
$sockets = array($this->reader); $null = null;
$changed = socket_select($sockets, $null, $null, 0);
if ($changed === false) {
throw new \Octris\Process\Exception\SocketException();
} elseif ($changed > 0) {
return $this->socketRead($this->reader);
} else {
return false;
}
}
/**
* Write to socket.
*
* @param resource $socket Socket to write to.
* @param mixed $msg Message to write.
*/
protected function socketWrite($socket, mixed $msg): void
{
$msg = json_encode($msg) . "\x00"; // add termination character
$len = strlen($msg);
do {
$sent = socket_write($socket, $msg, $len);
if ($sent === false) {
throw new \Octris\Process\Exception\SocketException();
} elseif ($sent < $len) {
$msg = substr($msg, $sent);
$len -= $sent;
} else {
break;
}
} while(true);
}
/**
* Read from socket.
*
* @param resource $socket Socket to read from.
* @return mixed Read value.
*/
protected function socketRead($socket): mixed
{
$msg = '';
do {
//$chunk = socket_read($socket, self::BLOCK_SIZE);
$bytes = socket_recv($socket, $chunk, self::BLOCK_SIZE, MSG_DONTWAIT);
if ($bytes === false) {
$code = socket_last_error($socket);
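// error codes 11 (EAGAIN) and 115 (EINPROGRESS) on Linux simply mean
// that no more data is available right now, so they are not fatal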
if ($code != 11 && $code != 115) {
throw new \Octris\Process\Exception\SocketException();
}
} elseif ($bytes > 0) {
$msg .= rtrim($chunk, "\x00");
}
} while($bytes > 0 && substr($chunk, -1) !== "\x00");
if ($msg !== '') {
$data = json_decode($msg, true);
if (($code = json_last_error()) !== JSON_ERROR_NONE) {
// unable to unserialize message
throw new \Octris\Process\Exception\MessagingException(json_last_error_msg(), $code);
}
} else {
$data = false;
}
return $data;
}
/**
* Create pair of channels.
*/
public static function create(): array
{
if (!socket_create_pair(AF_UNIX, SOCK_STREAM, 0, $sockets_ch1) ||
!socket_create_pair(AF_UNIX, SOCK_STREAM, 0, $sockets_ch2)) {
throw new \Octris\Process\Exception\SocketException();
}
return array(
new static($sockets_ch2[0], $sockets_ch1[1]),
new static($sockets_ch1[0], $sockets_ch2[1])
);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,407 |
{"url":"http:\/\/cloud.originlab.com\/doc\/LabTalk\/ref\/Mod2-func","text":"3.5.2.1.52 Mod2\n\nDescription\n\nThis function returns the integer modulus (the remainder from division) of integer n divided by integer m, which is denoted by formula\n\n$n-X*m$\n\nwhere $X$ is the integer of $\\frac{n}{m}$ round toward negative infinity. It is functionally similar to MS Excel's mod function.\n\nSyntax\n\nint mod2(int n ,int m)\n\nParameter\n\nn\n\nis the dividend. Can be any integer.\n\nm\n\nis the divisor. Can be any integer.\n\nReturn\n\nReturns the integer modulus of integer n divided by integer m.\n\nExample\n\naa = mod(5, -3);\naa =\u00a0; \/\/should return 2\nbb = mod2(5, -3);\nbb =\u00a0; \/\/should return -1","date":"2020-01-27 07:42:19","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 3, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3550195097923279, \"perplexity\": 7745.899774126646}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579251694908.82\/warc\/CC-MAIN-20200127051112-20200127081112-00345.warc.gz\"}"} | null | null |
"[Christ] is the image of the invisible God, the firstborn of all creation. For by him all things were created, in heaven and on earth, visible and invisible, whether thrones or dominions or rulers or authorities—all things were created through him and for him. And he is before all things, and in him all things hold together. And he is the head of the body, the church. He is the beginning, the firstborn from the dead, that in everything he might be preeminent. For in him all the fullness of God was pleased to dwell, and through him to reconcile to himself all things, whether on earth or in heaven, making peace by the blood of his cross. And you, who once were alienated and hostile in mind, doing evil deeds, he has now reconciled in his body of flesh by his death, in order to present you holy and blameless and above reproach before him, if indeed you continue in the faith, stable and steadfast, not shifting from the hope of the gospel that you heard, which has been proclaimed in all creation under heaven"!
What a tremendous passage this is! Doesn't it naturally transport us into the realms of worship! Paul holds up the Lord Jesus before us and declares: Here is your King's banner. Behold His flag, His standard. Behold your king who leads you into battle to advance His kingdom in your disorderly world. As one of His generals, Paul more or less proclaims Jesus to us this morning and calls upon us the foot soldiers to stand at attention as the King comes to inspect His troops ready for battle. That is the gist of what we have here. Make your formation behind your king!
It is often the case that many Christians forget that they are indeed called Christians; that they are named after Christ! But the fact is Christ is the root of the name Christian. Christians follow Christ. Not a Church. Not a denomination! Not a religion. Not a set of principles and rules! Not a particular form of ritual and practices. Christians follow Christ. As James puts it, we are called by "the Noble Name" (James 2:7). When we go into this disorderly world, what we put on offer is Jesus and Him crucified. All of us are like Paul. We are chosen and saved, as Ananias was told in Acts 9:15, "to bear my name before the Gentiles, and kings, and the children of Israel". That Name is what we have on offer to the world. The Name Jesus! It is the name that defines us. We are what we are because of Him! So simple a truth! Yet we too often forget! Well, what Paul does here is to hold Jesus before the Colossians as their king, and in so doing define who they are.
Last Sunday, I was with the Kings Baptist Church in Cleethorpes where we reflected on some of the memorable one-liners that Pilate uttered when Christ the King was brought before him, according to John's Gospel. One of those one-liners was when Pilate brought Jesus to the balcony of his headquarters, with Jesus scourged and bleeding and dressed in a purple royal robe and with a crown of thorns upon his head. And I imagine Pilate went near Him, took the Lord's right hand, lifted it up into the air and shouted out loud, of course he did so sarcastically, derisively, and contemptuously, but he shouted out loud nevertheless: "Behold your King" (Jn 19:14)! Friends, what Pilate unknowingly, ironically and scornfully did that day, Paul does here in Col 1:15-23 reverently and worshipfully. Behold Your King! Behold Him, the Founder of Your kingdom. All hail King Jesus! All hail Emmanuel! King of Kings; Lord of Lords, Bright Morning Star!
As I mentioned the last time, this passage is extremely important that it deserves a whole series of sermons on its own. Let me make three introductory comments about this passage before we look at it in details. In the first place it is one of a few passages in the New Testament devoted to a sustained description of the Person and Work of the Lord Jesus Christ. In that sense Col 1:15-23a belongs to a special elite class of passages, with John 1:1-5, Heb 1:1-4, Phil 2:5-11 and a few other verses in Revelation. Our passage shares that exalted place in the New Testament. So we are entering holy ground this morning. Remove your sandals, my dear brothers and sisters, remove your sandals!
Secondly the clauses and the phrases in the passage are so crisp, rhythmic and poetically arranged that it has led many Scholars to label it, or at least part of it (1:15-20) as a Christ hymn. In other words, it sounds like the lyrics of a song. Not all scholars agree with this view. And even among those who agree, there are differences of opinion as to whether it was a hymn that previously existed before Paul quotes it here as part of his letter, or a new hymn that the Apostle himself has composed and is now sharing it with the brothers and sisters in Colossae.
Whichever view is correct, Paul has certainly worked this hymn well into the letter that even if it was written by someone else, he has made it his own, and as good preachers do, adapted it into his letter. Regardless therefore of who originally wrote it, this hymn tells us something very important about the worship of the first Christians. Their hymns were focused on exalting Jesus. Their worship was all about Him! Every Sunday, when the first Christians gathered, they did what, according to Rev 5:9-10, the twenty-four elders in heaven did when the Slain Lamb took the scroll from the One who sat on the throne. John says, they sang a new song: "Worthy are you to take the scroll and to open its seals, for you were slain, and by your blood you ransomed people for God from every tribe and language and people and nation, and you have made them a kingdom and priests to our God, and they shall reign on the earth." That is what they sung. In other words, every Sunday, the first Christians practiced for heaven. And Col 1:15-23a was one of their songs. So must we!
The third comment I need to make is about the structure of the hymn. It is evidently made up of two stanzas. And Col 1:17 around its middle summarizes the whole hymn. The first part of the verse, 1:17a: "He is before all things" summarizes the first stanza, and Col 1:17b "in him all things hold together" summarizes the second stanza. In other words 1:15-17 is the first stanza, and speaks of Christ and Creation. The second stanza, 1:18-23 speaks of Christ and Recreation. The first stanza describes the Rulership of the King over God`s Creation. And the second stanza describes the Redemption and Reconciliation of Creation through the King. The first stanza summarizes what happened in Gen 1:1-3:15, and the second stanza summarizes what has been happening since Gen 3:15. As I say, this is rich stuff indeed and we can go on and on reflecting on it.
Paul first says, this Jesus "is the image of the invisible God" (1:15a). What a profound statement to make! The Bible everywhere emphasizes the invisibility of God. Paul himself describes God in 1 Tim 1:17 as "King of the ages, immortal, invisible, the only God, be honour and glory forever and ever". First Jn 4:12a says, "No one has ever seen God". Hebrews 11:27 says, "[God] is invisible". And many more verses like that. We sometimes sing the hymn Immortal, invisible, God only wise, in light inaccessible hid from our eyes, most blessed, most glorious, the Ancient of Days, Almighty, victorious, thy great name we praise! We sing that song and rightly so.
Well this morning, Paul says, Jesus "is the image of this invisible God". This Immortal, Invisible Only Wise God that we sing of and to, He is Visible in and as Jesus. Jesus is all of God that we can see. Just as the pictures we see on the television screen represent the invisible electrons which have hit the back of the screen at various different frequencies and speeds, so also it is that Jesus is the image of the invisible God. He is all of God that we can see. The word "image" here in Col 1:15a is not speaking of an inferior representation, as if a cartoon is drawn to give you an idea of what the real thing looks like. In fact in ancient Greek thought, an image was regarded as the more perfect form of the real thing. So to call Jesus the "image of the invisible God" is to say that He is the perfect representation of God whom none can see. Hebrews 1:3 says the same thing: "[Jesus] is the radiance of the glory of God and the exact imprint of his nature". If you want to see God, look to Jesus. For He is His exact representation! That was also John's testimony: "No one has ever seen God; he said, the only God, who is at the Father's side, he has made him known" (Jn 1:18). Jesus Himself told the disciples, "Whoever has seen me has seen the Father" (Jn 14:9). To put it simply, friends, Jesus is God. The reason why Christians worship Jesus is this: He is God in Person, who therefore is worthy of our worship!
Occasionally, you meet people in churches, sometimes Church ministers and theologians; especially people who have grown in a western culture where they had hitherto not seen other religions in operation before but have now become exposed to them in this country. And they are surprised that devotees of these other religions are often as pious and as religious and in fact sometimes with better ethical conduct than some of us. They see people of other religions like that, and they think: Oh, there is not much of a difference between Christianity and these other religions.
Not too long ago, I attended a conference on Johannine discipleship and during the coffee break; one of my fellow speakers said exactly that! That he found the Moslem religion simpler and more straight forward than Christianity. They have clear-cut rules on what to do and not do. They have explicit regulations on how and what and when to pray. Which posture to adopt in your prayers! Where to face in your prayer. When and how much fasting you have to do, and so on! And he spoke of that religion with a feeling of even envy. Because his denomination was always arguing about almost everything! And here you have a religion that appears so organized and so explicit. So he envied them! You know, there are people like that in some of our churches. They compare and they feel that there is no need to make sharp distinctions between Christianity and these religions. They even think we are worshipping the same God and what's not.
Well, let me say this to you, my dear friends. Jesus your king "is the image of the invisible God". That is how wide a gulf it is between the Christian God and what these religions proclaim. You will have to cut out a chunk of the Bible, in fact the whole New Testament; you will have to cut it all out from the Bible, in order to make the so called God of other religions the same as the God of Christians. We worship Jesus as God. That kind of worship, Judaism and Islam regard as idolatry. Let me put it another way. If you refuse to worship Jesus as God! If you hesitate from acknowledging Him as the "exact representation of God"! If you are reluctant to hail Him as the image of the invisible God! If you are unwilling to worship Him! Well, then I am afraid you cannot call yourself a Christian. That is how wide the gulf is between Christianity and these religions.
Paul then moves on to talk about Jesus' relationship with God's Creation. He says six things, the first more or less summarizing the rest.
Paul says Jesus is "the firstborn of all creation". A lot of people find this word, "firstborn" difficult to understand. The Jehovah's Witnesses for example argue that it means that Jesus was the first to be created. This of course is incorrect, for, as we find in the very next phrase, he says, "by him all things were created". The Bible would be contradicting itself in a most spectacular way if that were the case. No! In the ancient Jewish conception, the word "firstborn" was a title of rank rather than a statement of chronology. To be called firstborn did not necessarily mean that you were born first. It means you have the title of pre-eminence. Of course while this title of "firstborn" customarily went to the one who was born first, in reality it could go to anyone. So for example as we all know, Esau was born first, but he despised the title of firstborn, sold it away for pottage, and Jacob became firstborn. It is in the same sense that God promised David in Ps 89:27, "I will make him the firstborn, the highest of the kings of the earth". In other words, I will give Him the pre-eminence and the supremacy. It is the same thought that is expressed here in Col 1:15b. Jesus is the firstborn of all creation means he holds the title, the entitlement to all of creation. He inherits it. As I say, this first clause summarizes the rest of the things Paul says about Jesus' relationship with creation.
Secondly, he says, "For by him all things were created" (1:16a). And in case we have missed what all things are, and so are tempted to exclude something special and extraordinary in creation, the Apostle goes further to catalogue the taxonomy of the universe, the way they "scientifically" did it at the time: all things "in heaven and on earth, visible and invisible, whether thrones or dominions or rulers or authorities" (1:16b). In other words, whether in historical, or geographical or spiritual phenomenon! Whether in the political and non-political realms! Whether in the simplest of systems or the most complex of ecosystems! Whether normal, supra-normal or paranormal! All things were Created by Christ! Regardless of your taxonomy of the world, whether Darwinian or Creationist! Christ created them all! Amen!
Thirdly, he says, "all things were created through him" (1:16c). He is the Agent of creation. "All things were made through him, and without him was not anything made that was made" (Jn 1:3). Are you still in any doubt about who your King is?
Fourthly, Paul says, all things were created "for Him" (1:16d). Of course, that should be, for He is the firstborn, the inheritor of creation. Creation was the Father's gift to the Son. You know, my brothers and sisters. There is a supreme purpose for redemption. God has good reasons why He has not and will not give up on His creation. Someone may say, if creation is all gone wrong, why doesn't God just call it quits and start afresh all over again? Why doesn't He create a new Adam who will obey Him? Why doesn't He spirit us all away, wipe the slate clean and start afresh? Well the reason is simple. Creation is the Father's inheritance for His Son Jesus! He couldn't just abandon this inheritance? He must redeem it for His Son! It is all for King Jesus. Amen!
Fifthly, Paul says in 1:17a, "He is before all things". The "before" here, is spatial rather than chronological. It means He is the Leader and the King of Creation! Once again, we know He is not created. He was there before all things were created. He was there when God said, "let us make man in our image" (Gen 1:26). And then accordingly, man was made in the image of God. He is indeed Above all powers, above all kings; Above all nature and all created things! Above all wisdom and all the ways of man! You were here, before the world began! Above all kingdoms, above all thrones! Above all wonders the world has ever known! Above all wealth and treasures of the earth! There's no way to measure what you're worth…And so the hymn goes! That is what He is! He is before all things because He is God! Worship Him!
Then finally, Paul says in 1:17b, "in him all things [all creation] hold together". "Hold together" in a physical spatial sense! He is more or less the Force that keeps the planets in their orbit! Hebrews puts this even more graphically: "He upholds the universe by the word of his power" (Heb 1:3)! That is who Jesus is! But "hold together" may also be in the logical sense, not just physical spatial gravitational sense. He gives sense to creation! It is only in Jesus that this universe makes sense!
I don't know about you but this passage makes me feel light headed and giddy! Jesus is worthy of our worship. Because He is God in the flesh! Because He is Pre-eminent over all of Creation! Because He Produced all of Creation! Because He is the Prime Agent of Creation! Because He Possesses all of Creation! Because He Precedes all of Creation! And because He Preserves all of Creation! Seven reasons to worship Jesus! Seven reasons to rise and worship Him again with the hymn, My Jesus, My Saviour!
| {
"redpajama_set_name": "RedPajamaC4"
} | 4,418 |
NCAA basketball roundup: Michigan rallies to stun Kansas in OT
ARLINGTON, Texas — Trey Burke shook off one of his worst starts with the best shot of his life.
Burke bounced back from a scoreless first half to score 23 points, including a long, never-a-doubt 3-pointer in the final seconds of regulation, and Michigan rallied to beat Kansas 87-85 in overtime in the South Regional semifinals Friday night.
The fourth-seeded Wolverines wiped out a 10-point Kansas lead in the last 3 minutes of regulation, and Burke gave them their first lead since early in the game with another long 3 to open Michigan's scoring in overtime.
"This guy was a champ all the way through it," Michigan coach John Beilein said.
They'll certainly remember this one in Ann Arbor for a while.
The Wolverines (29-7) reached a regional final for the first time since the Fab Five era 19 years ago, the last time they were in the round of 16.
Ben McLemore had 20 points to lead the Jayhawks (31-6), who looked to be on their way to a third straight regional final before Michigan's improbable rally.
Instead, they became the third No. 1 seed to fall in this tournament, joining Gonzaga and Indiana.
The Wolverines were down five when Tim Hardaway Jr. missed a 3-pointer with 35 seconds left, but Glenn Robinson III won a scramble for the ball and hit a reverse layup to force Kansas to win the game at the free throw line.
The Jayhawks couldn't do it. Burke's tying shot — he pulled up from well beyond the arc just right of key — came with 4.2 seconds left after Elijah Johnson missed a free throw moments after hitting two to keep the Kansas lead at five. Burke had scored on a layup to get Michigan back to within three.
"I'm so proud of my team because a lot of people say we're young, but we stuck with it tonight," Burke said. "I'm just so happy right now. We stayed together and we got the win."
The lead changed hands five times in overtime — the first OT game of the tournament — the last when Mitch McGary, who led Michigan with 25 points and 14 rebounds, hit a short jumper with Johnson in his face to put Michigan ahead for good 83-82.
Midwest Regional
INDIANAPOLIS — Even a nasty cold can't stop Russ Smith.
With his teammates struggling with the virus he gave them and top-seeded Louisville facing its toughest test of the postseason, Russ put on his best show yet. He matched his career high of 31 points and the Cardinals proved they can win close games, too, beating Oregon in the Midwest Regional semifinals.
"Without Russ Smith, we couldn't win," said Louisville coach Rick Pitino, who improved to 11-0 in the regional semifinals of the NCAA tournament.
Louisville (32-5) is hoping to advance to the Final Four for the second straight year.
Louisville has been nearly untouchable during its 13-game winning streak, beating opponents by an average of 17 points. And it looked as if this was going to be more of the same when Smith outscored Oregon 9-8 through the first 10 minutes.
But the 12th-seeded Ducks (28-9) made a game of it late.
After Louisville went up 66-48 with 9:01 left, Oregon made six straight field goals to close to 70-64 — the closest anyone's been to the Cardinals in weeks. But Kevin Ware scored on a layup and Chane Behanan threw down a monstrous dunk to put the game out of reach.
Ware finished with 11, topping his previous career best by one, and Gorgui Dieng had 10 points, nine rebounds and four blocked shots.
"Russ Smith is a talented young man. They've got a lot of talented players," Oregon coach Dana Altman said. "When he got going, we didn't have an answer."
Duke 71
Michigan St. 61
INDIANAPOLIS — Seth Curry shot Duke right into the regional finals — and put Mike Krzyzewski on the verge of another major milestone.
Curry scored 29 points to lead the second-seeded Blue Devils past third-seeded Michigan State 71-61 on Friday night and into the Midwest Regional final.
If Duke (30-5) beats top-seeded Louisville (32-5) in Sunday's regional final, Krzyzewski would tie John Wooden's record with 12 Final Four trips.
Michigan State (27-9) just couldn't keep up with Curry and Duke's shooters. The Spartans were led by Keith Appling with 16 points and Adreian Payne with 14.
Curry's sixth 3 of the game broke a 38-38 tie early in the second half, sending Duke on a 9-0 run. It never trailed again. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,479 |
{"url":"https:\/\/de.maplesoft.com\/support\/help\/Maple\/view.aspx?path=updates\/Maple12\/index","text":"Index of New Features - Maple Help\n\nHome : Support : Online Help : System : Information : Updates : Maple 12 : Index of New Features\n\nIndex of New Maple 12 Features\n\n Maple 12 contains many new capabilities and improvements to existing facilities. For a summary, see What's New in Maple 12?.\n The important changes in Maple 12 are described in the following topics.","date":"2021-12-08 09:29:26","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8470380306243896, \"perplexity\": 2687.2988148309346}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964363465.47\/warc\/CC-MAIN-20211208083545-20211208113545-00166.warc.gz\"}"} | null | null |
\section{Introduction}
Over the past fifteen years, $L_1$ minimization-based methods have shown very interesting features for the interpolation and approximation of continuous or discontinuous functions and of irregular geometric data. In \cite{Moskona1995}, Moskona \emph{et al.} showed that the Gibbs phenomenon arising in best $L_1$ trigonometric approximation of the Heaviside function is smaller than the one observed with the $L_2$ norm. Saff and Tashev carried out a similar study with polygonal lines and reached the same conclusion \cite{SaffTashev1999}.\\
By analogy with classical cubic interpolation splines, which minimize the $L_2$ norm of the second derivative, Lavery defined cubic Hermite interpolation splines that minimize the $L_1$ norm of the second derivative \cite{Lavery2000}. He observed that this strategy completely removes the Gibbs phenomenon exhibited by classical $L_2$ cubic interpolation splines on the Heaviside function, a fact later proven formally by Auquiert \emph{et al.} \cite{Auquiert2007}.\\
Subsequent work focused on a suitable combination of the best $L_1$ approximation functional and the variational $L_1$ functional used for the interpolation problem. Lavery first proposed a linear combination of the two functionals and called the resulting splines $L_1$ smoothing splines \cite{Lavery2000b}. Contrary to $L_2$ smoothing splines, they do not introduce oscillations on multiscale univariate datasets. However, the regularization parameter weighting the two $L_1$ functionals is not easy to choose.\\
Lavery then proposed another kind of $L_1$ splines, named $L_1$ spline fits \cite{Lavery2004}. They are best $L_1$ approximations in an appropriate spline set obtained as the union of $L_1$ interpolation splines. Like $L_1$ smoothing splines, they do not introduce oscillations, with the additional asset of requiring no extra parameter. The existence of such splines was not established in \cite{Lavery2004}. We prove in this paper that $L_1$ spline fits at a given set of nodes exist for every function in $L_1[a,b]$.\\
One must admit that the intrinsic non-linearity of $L_1$-norm problems implies that a closed-form solution is not available in general, so a global numerical resolution is commonly used \cite{Lavery2000,Lavery2000b,Lavery2004,Auquiert2007b,Dobrev2010}. Another strategy was introduced in 2011 by Nyiri, Auquiert and Gibaru for the interpolation problem \cite{Nyiri2011}. Their algorithm is based on a sliding-window process: it computes local solutions on a limited number of successive points (five for the interpolation problem) and, by keeping appropriate information, namely the derivative at the middle point of the window, it easily constructs a global interpolating function with shape-preserving properties similar to those of the global solution. Moreover, this process yields a linear-complexity algorithm that can be parallelized. It has also been applied in recent articles to the approximation of data with prescribed error in the $L_1$ norm \cite{Gajny2013,Gajny2014}.\\
Recently, Wang \emph{et al.} proposed a method to compute $L_1$ spline fits with a global algorithm based on a five-point interpolation rule to fix the derivatives at the spline nodes \cite{Wang2014}: the first derivative at a given node is determined using only its four neighbours, while the values of the spline are determined by a minimization process over the whole dataset. In this article we propose another approach, following the work in \cite{Nyiri2011,Gajny2013,Gajny2014}. We investigate an appropriate sliding-window process to compute local $L_1$ spline fits that remain close to the global one.\\
In the first section, we recall some generalities about $L_1$ cubic Hermite interpolation splines. We show that the union of such splines over all possible Lagrange interpolation data is a closed set. This helps, in the second section, to prove the existence of the $L_1$ spline fits previously introduced in the literature. In sections 3 and 4, we introduce sliding-window algorithms to compute local $L_1$ spline fits and we compare them with each other. Conclusions are drawn in a last section.
\section{The set of $L_1$ cubic Hermite interpolation splines}
Let $(x_i,y_i)$, $i=1,\dots,n$, where $x_1<x_2<\dots<x_n$, be $n$ data points belonging to the graph of a function $f$. Let $Her(\mathbf{x})$ be the space of cubic Hermite splines with nodes $\mathbf{x}=\{x_1,x_2,\dots,x_n\}$. An $L_1$ cubic Hermite interpolation spline of these data is a cubic Hermite spline $\gamma^*\in Her(\mathbf{x})$ solution of:
\begin{equation}
\min_{\gamma \in Her(\mathbf{x})} \int_{x_1}^{x_n} \vert \gamma''(x) \vert \mathrm{d}x,
\end{equation}
under the Lagrange interpolation constraints:
\begin{equation}
\gamma(x_i)=y_i, \ i=1,2,\dots,n.
\end{equation}
Lavery showed that a solution of this problem always exists. By means of numerical experiments, he noted that the resulting splines are very efficient at preserving the shape of the Heaviside function (see Figure \ref{L1L2interp}). Auquiert later proved that an $L_1$ cubic Hermite interpolation spline with six knots or more, with at least three knots on each side of the discontinuity, preserves both linear pieces of the Heaviside function and therefore does not produce a Gibbs phenomenon \cite{Auquiert2007}. This is the major asset of $L_1$ cubic Hermite interpolation splines.\\
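Since the second derivative of a cubic Hermite segment is an affine function, the functional above can be evaluated exactly segment by segment. The following Python fragment is given only as an illustration (it is not the solver used in the works cited above); it computes $\int_{x_i}^{x_{i+1}} \vert \gamma''(x)\vert \,\mathrm{d}x$ from the length $h$ of the segment, its endpoint values $y_0,y_1$ and its endpoint derivatives $b_0,b_1$.
\begin{verbatim}
def segment_l1_energy(h, y0, y1, b0, b1):
    # second derivative of the cubic Hermite segment at its two endpoints
    d0 = (6.0 * (y1 - y0) - h * (4.0 * b0 + 2.0 * b1)) / h**2
    d1 = (-6.0 * (y1 - y0) + h * (2.0 * b0 + 4.0 * b1)) / h**2
    if d0 * d1 >= 0.0:
        # no sign change: the integral of the |affine| function is a trapezoid
        return 0.5 * h * (abs(d0) + abs(d1))
    # sign change inside the segment: split at the zero of the affine function
    return 0.5 * h * (d0 * d0 + d1 * d1) / (abs(d0) + abs(d1))
\end{verbatim}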
\begin{figure}[!h]
\centering
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{interpol_L1_Auquiert}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{interpol_L2_zhang_martin}
\end{minipage}
\caption{$L_1$ (left) and $L_2$ (right) interpolation splines (solid lines) of the Heaviside function (dotted line) with 10 and 30 equally spaced knots.}
\label{L1L2interp}
\end{figure}
We now consider the union of all $L_1$ cubic Hermite interpolation splines, defined as follows:
\begin{equation}
\mathcal{F}_\mathbf{x} = \bigcup_{\mathbf{y}\in \mathbf{R}^n} \mathrm{argmin}\left\{ \int_{x_1}^{x_n} \vert \gamma''(x) \vert \ \mathrm{d}x, \ \gamma \in Her(\mathbf{x}), \ \gamma(x_i)=y_i, \ i=1,\dots,n\right\}.
\end{equation}
This set is fundamental to the definition of $L_1$ spline fits. We now give an important property of this set which, to the best of our knowledge, has never been proved before and which will be essential in the next section.
\begin{proposition}
Given $\mathbf{x}=\{x_1<x_2<\dots<x_n\} \in \mathbf{R}^n$, the set $\mathcal{F}_\mathbf{x}$ is closed in $Her(\mathbf{x})$.
\end{proposition}
\begin{proof}
Equip $Her(\mathbf{x})$ with a norm; since $Her(\mathbf{x})$ is finite dimensional, all norms on it are equivalent. In the sequel we write $a=x_1$ and $b=x_n$. Let $s \in \overline{\mathcal{F}_{\mathbf{x}}}$; then by definition there exists a sequence
\[\left(s_p\in \mathrm{argmin} \left\{\int_{a}^{b} |\gamma''(x)| \,\mathrm{d}x, \ \gamma\in Her(\mathbf{x}),\ \gamma(x_k)=q_k^{(p)},\ k=1,\dots,n\right\}\right)_{p\in \mathbf{N}}\]
which converges to $s\in Her(\mathbf{x})$. For all $p \in \mathbf{N}$, $s_p$ is a cubic Hermite spline and is therefore determined by the $2n$ coefficients $q_k^{(p)}$, $b_k^{(p)}$, $k=1,\dots,n$, which are respectively the values and the first derivative values of $s_p$ at the abscissae $x_k$. By the convergence hypothesis in $Her(\mathbf{x})$, there exist real values $q_k^*$, $b_k^*$, $k=1,\dots,n$, such that
\begin{equation}
\begin{split}
q_k^{(p)} & \underset{p\rightarrow +\infty}{\longrightarrow} q_k^*,\\
b_k^{(p)} & \underset{p\rightarrow +\infty}{\longrightarrow} b_k^*.
\end{split}
\label{eq_conv}
\end{equation}
By uniqueness of the limit, $s$ is determined by these $2n$ coefficients. We now show that the minimization property of the splines $s_p$ is preserved when passing to the limit.\\
We deduce from \eqref{eq_conv} that $(s_p'')_{p\in \mathbf{N}}$ converges pointwise almost everywhere to $s''$. Moreover, for all $p\in\mathbf{N}$, $s_p''$ is piecewise linear, so the sequence can easily be bounded on the interval $[a,b]$ by an integrable function. By the dominated convergence theorem, it follows that:
\begin{equation}
\int_a^b |s_p''(x)| \ \mathrm{d}x \underset{p\rightarrow +\infty}{\longrightarrow} \int_a^b |s''(x)| \ \mathrm{d}x.
\end{equation}
Let $\gamma \in Her({\mathbf{x}})$ be such that $\gamma(x_k)=q_k^*$, $k=1,\dots,n$. By the first assertion in \eqref{eq_conv}, there exists a sequence $(\gamma_p)_{p\in\mathbf{N}}$ in $Her({\mathbf{x}})$ converging to $\gamma$ and such that, for all $p\in \mathbf{N}$ and $k=1,\dots,n$, $\gamma_p(x_k)=q_k^{(p)}$. We easily show that:
\begin{equation}
\int_a^b |\gamma_p''(x)| \ \mathrm{d}x \underset{p\rightarrow + \infty}{\longrightarrow} \int_a^b |\gamma''(x)| \ \mathrm{d}x.
\end{equation}
For all $p \in \mathbf{N}$, since $s_p$ minimizes $\int_a^b |\gamma''(x)|\,\mathrm{d}x$ among the splines $\gamma \in Her(\mathbf{x})$ satisfying $\gamma(x_k)=q_k^{(p)}$, $k=1,\dots,n$, it follows that:
\begin{equation}
\int_a^b |s_p''(x)| \ \mathrm{d}x \le \int_a^b |\gamma_p''(x)| \ \mathrm{d}x.
\end{equation}
Passing to the limit, we obtain that, for all $\gamma \in Her({\mathbf{x}})$ such that $\gamma(x_k)=q_k^*$, $k=1,\dots,n$:
\begin{equation}
\int_a^b |s''(x)| \ \mathrm{d}x \le \int_a^b |\gamma''(x)| \ \mathrm{d}x.
\end{equation}
We conclude that $\mathcal{F}_{\mathbf{x}}$ is closed in $Her(\mathbf{x})$.
\end{proof}
\section{Best approximation using $L_1$ spline fits}
Let us first define these splines introduced in \cite{Lavery2004}.
\begin{definition}
Given a function $f\in L_1[a,b]$, $a,b \in \mathbf{R}$, and a set of knots $\mathbf{x}=\{a=x_1<x_2<\dots<x_n=b\}$, an $L_1$ spline fit of the function $f$ at the knots $\mathbf{x}$ is a best $L_1$ approximation of $f$ in $\mathcal{F}_\mathbf{x}$. In other words, it is a solution of:
\begin{equation}
\min_{s \in \mathcal{F}_\mathbf{x}} \int_{a}^{b} \vert s(x) - f(x)\vert \ \mathrm{d}x.
\end{equation}
\end{definition}
We prove with the next theorem that $L_1$ spline fits are well defined.
\begin{theorem}
$L_1$ spline fits exist for every function $f\in L_1[a,b]$ and every set of knots $\mathbf{x}=\{a=x_1<x_2<\dots<x_n=b\}$.
\end{theorem}
\begin{proof}
Let $f\in L_1[a,b]$ and a set of knots $\mathbf{x}=\{a=x_1<x_2<\dots<x_n=b\}$ be given. Since $\mathcal{F}_{\mathbf{x}}$ is a closed subset of the finite dimensional subspace $Her(\mathbf{x})$ of $L_1[a,b]$, there exists a best $L_1$ approximation of $f$ in $\mathcal{F}_{\mathbf{x}}$. \hfill $\square$
\end{proof}
One can easily define an equivalent tool using exclusively the $L_2$ norm, called $L_2$ spline fits. We compare both methods in Figure \ref{fig_heaviside_glob}.
\begin{figure}[!h]
\centering
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{function_heaviside_globalL1_10esk}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{function_heaviside_globalL2_10esk}
\end{minipage}
\caption{Global $L_1$ spline fits (left) and global $L_2$ spline fits (right) of the Heaviside function with ten equally spaced knots.}
\label{fig_heaviside_glob}
\end{figure}
$L_1$ spline fits have also been defined for discrete data \cite{Lavery2004}. Let $(\hat{x}_i,\hat{y}_i)$, $i=1,\dots,m$, be $m$ data points with $m\ge n$. An $L_1$ spline fit of this dataset is a best $\ell_1$ approximation of these data in $\mathcal{F}_\mathbf{x}$. In other words, it is a solution of:
\begin{equation}
\min_{s \in \mathcal{F}_\mathbf{x}} \sum_{i=1}^m \vert s(\hat{x}_i) - \hat{y}_i \vert.
\end{equation}
As in the continuous case, these splines exist since they are solutions of a best approximation problem in a closed set of a finite dimensional subspace of a normed linear space.
The results presented in Figure \ref{fig_L1SFG} indicate that, contrary to $L_2$ spline fits, $L_1$ spline fits preserve the shape of multiscale data well. Moreover, $L_1$ spline fits do not require human intervention to choose a parameter balancing the approximation functional and the variational functional. However, their computational cost is generally higher than that of $L_1$ smoothing splines and, more obviously, of least squares methods.
\begin{figure}[!h]
\centering
\begin{minipage}{\linewidth}
\includegraphics[width=\linewidth]{dataset1_globalL1}
\end{minipage}
\begin{minipage}{\linewidth}
\includegraphics[width=\linewidth]{dataset1_globalL2}
\end{minipage}
\caption{Global $L_1$ spline fits (top) and global $L_2$ spline fits (bottom).}
\label{fig_L1SFG}
\end{figure}
\section{Sliding window algorithms for $L_1$ spline fits}
\subsection{Best approximation of functions}
We define sliding-window methods with window sizes $m=3,\ 5,\ 7$, called respectively $L_1$SFL3, $L_1$SFL5 and $L_1$SFL7. For each set of $m$ consecutive knots $\mathbf{x}_{i,m}=\{x_{i-\lfloor \frac{m}{2}\rfloor},\dots,x_i,\dots,x_{i+\lfloor \frac{m}{2}\rfloor}\}$, we determine numerically a cubic Hermite spline $s_{i,m}^*$ solution of:
\begin{equation}
\min_{\gamma\in \mathcal{F}_{\mathbf{x}_{i,m}}} \int_{x_{i-\lfloor \frac{m}{2}\rfloor}}^{x_{i+\lfloor \frac{m}{2}\rfloor}} \vert \gamma(x) - f(x) \vert \ \mathrm{d}x.
\end{equation}
Then we keep only the information at the middle knot, namely $z_{i}=s_{i,m}^*(x_i)$ and $b_{i}=s_{i,m}^{*'}(x_i)$.
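The whole procedure can be summarised by the Python-style sketch below, given only as an illustration: \texttt{solve\_local\_l1\_fit} is a placeholder for any routine returning the local solution $s_{i,m}^*$ on a window, and the treatment of the first and last $\lfloor \frac{m}{2}\rfloor$ knots is omitted.
\begin{verbatim}
def sliding_window_fit(x, f, m):
    # x: list of knots, f: function or data to approximate, m: odd window size
    half = m // 2
    z, b = {}, {}
    for i in range(half, len(x) - half):
        window = x[i - half : i + half + 1]
        s_loc = solve_local_l1_fit(window, f)  # placeholder local solver
        z[i] = s_loc.value(x[i])               # keep the value at the middle knot
        b[i] = s_loc.derivative(x[i])          # keep the derivative at the middle knot
    return z, b  # the global cubic Hermite spline is built from (z, b)
\end{verbatim}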
\begin{figure}[!h]
\centering
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{function_heaviside_L1SFL3_10esk}
Continuous $L_1$SFL3
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{function_heaviside_L1SFL5_10esk}
Continuous $L_1$SFL5
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{function_heaviside_L1SFL7_10esk}
Continuous $L_1$SFL7
\end{minipage}
\caption{$L_1$ spline fits computed by the three proposed sliding-window methods on the Heaviside function with ten equally spaced knots.}
\label{fig_Heaviside_cont}
\end{figure}
These methods have been tested on the Heaviside function with ten equally spaced knots; the results are summarized in Figure \ref{fig_Heaviside_cont}. The three-point and five-point methods fail to reproduce the linear shape on both sides of the discontinuity. We are facing here a typical case of non-invariance of the numerical solution under rotation of the data: on both sides of the discontinuity, the two windows considered are geometrically similar and should lead to the same solution. Since the three-point and five-point methods preserve linearity on one side, they should be able to do so on the other side as well. Further work will be done to make these methods invariant under rotation. The seven-point method seems more robust to rotation of the data and should therefore be preferred. In this case, the seven-point solution and the global solution are identical.\\
We have also compared the computing times of the different methods. The results are summarized in the graph in Figure \ref{fig_CPU_cont}. We notice a great improvement in computing time when using the local methods, the fastest being of course the three-point method. We also notice a dual behaviour in these results: the numerical solution differs according to whether the number of knots is even or odd, which is linked to whether or not a knot lies at the discontinuity.\\
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{CPUtime_contL1SFL}
\caption{Comparison of computational times between global and local methods}
\label{fig_CPU_cont}
\end{figure}
Regarding both the graphical results and the computing times, the seven-point method is a good compromise. We confirm this tendency below in the study of the discrete case.
\subsection{Best approximation of discrete data}
In this section, we apply the three-point, five-point and seven-point methods to discrete multiscale data. In other words, for each set of $m$ consecutive knots $\mathbf{x}_{i,m}=\{x_{i-\lfloor \frac{m}{2}\rfloor},\dots,x_i,\dots,x_{i+\lfloor \frac{m}{2}\rfloor}\}$, we determine numerically a cubic Hermite spline $s_{i,m}^*$ solution of:
\begin{equation}
\min_{\gamma\in \mathcal{F}_{\mathbf{x}_{i,m}}} \sum_{j=i-\lfloor \frac{m}{2}\rfloor}^{i+\lfloor \frac{m}{2}\rfloor} \vert \gamma(\hat{x}_j) - \hat{y}_j \vert.
\end{equation}
Then we keep only the information at the middle point of the window, $z_{i}=s_{i,m}^*(x_i)$ and $b_{i}=s_{i,m}^{*'}(x_i)$. The results are illustrated in Fig. \ref{fig_dataset1_L1SFL}, \ref{fig_dataset2_L1SFL} and \ref{fig_dataset3_L1SFL}. While the three-point and the seven-point methods give smooth curves, the five-point method fails badly: in Fig. \ref{fig_dataset1_L1SFL} we notice an undershoot phenomenon, and in Fig. \ref{fig_dataset2_L1SFL} oscillations are created.\\
\begin{figure}[p]
\centering
\includegraphics[width=0.9\linewidth]{dataset1_L1SFL3}
Discrete $L_1$SFL3\\
\includegraphics[width=0.9\linewidth]{dataset1_L1SFL5}
Discrete $L_1$SFL5\\
\includegraphics[width=0.9\linewidth]{dataset1_L1SFL7}
Discrete $L_1$SFL7
\caption{Local (solid lines) and global (dashed line) $L_1$ spline fits on a multi-scale data set.}
\label{fig_dataset1_L1SFL}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.9\linewidth]{dataset2_L1SFL3}
Discrete $L_1$SFL3\\
\includegraphics[width=0.9\linewidth]{dataset2_L1SFL5bis}
Discrete $L_1$SFL5\\
\includegraphics[width=0.9\linewidth]{dataset2_L1SFL7}
Discrete $L_1$SFL7
\caption{Local (solid lines) and global (dashed line) $L_1$ spline fits on a multi-scale data set.}
\label{fig_dataset2_L1SFL}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.9\linewidth]{dataset3_L1SFL3}
Discrete $L_1$SFL3\\
\includegraphics[width=0.9\linewidth]{dataset3_L1SFL5}
Discrete $L_1$SFL5\\
\includegraphics[width=0.9\linewidth]{dataset3_L1SFL7}
Discrete $L_1$SFL7
\caption{Local (solid lines) and global (dashed line) $L_1$ spline fits on a multi-scale data set.}
\label{fig_dataset3_L1SFL}
\end{figure}
As in the continuous case, the seven-point method is graphically the closest to the global method. In some cases, such as Fig. \ref{fig_dataset1_L1SFL}, linear shapes are even better preserved.
\section{Modification of $L_1$SFL5 and $L_1$SFL7}
The methods presented above may exhibit some undesirable features on multiscale configurations. We have observed them with the discrete $L_1$SFL5 in Fig.\ref{fig_dataset1_L1SFL} and \ref{fig_dataset2_L1SFL} and with the discrete $L_1$SFL3 in Fig.\ref{fig_dataset3_L1SFL}. This is typically due to a lack of consistency between the different windows. To reduce this phenomenon, we propose two other sliding-window methods, $L_1$SFL5-3 and $L_1$SFL7-3, which are respectively a five-point and a seven-point method. The difference with the previous $L_1$SFL5 and $L_1$SFL7 is that we now keep the three middle pieces of information (approximation points and derivative values) instead of a single one; a short sketch of the modified loop is given after the list below. In other words, for the sets of $m$ consecutive knots $\mathbf{x}_{i,m}=\{x_{i-\lfloor \frac{m}{2}\rfloor},\dots,x_i,\dots,x_{i+\lfloor \frac{m}{2}\rfloor}\}$ with $i$ going from $\lfloor \frac{m}{2}\rfloor+1$ to $n-\lfloor \frac{m}{2}\rfloor$ by steps of 3, we determine numerically a cubic Hermite spline $s_{i,m}^*$ solution of:
\begin{equation}
\min_{\gamma\in \mathcal{F}_{\mathbf{x}_{i,m}}} \sum_{j=i-\lfloor \frac{m}{2}\rfloor}^{i+\lfloor \frac{m}{2}\rfloor} \vert \gamma(\hat{x}_j) - \hat{y}_j \vert.
\end{equation}
Then we keep information at the three central knots:
\begin{itemize}
\item $z_{i-1}=s_{i,m}^*(x_{i-1})$, $z_{i}=s_{i,m}^*(x_i)$ and $z_{i+1}=s_{i,m}^*(x_{i+1})$.
\item $b_{i-1}=s_{i,m}^{*'}(x_{i-1})$, $b_{i}=s_{i,m}^{*'}(x_i)$ and $b_{i+1}=s_{i,m}^{*'}(x_{i+1})$.
\end{itemize}
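Compared with the sketch of the previous section, only the window loop changes: the centre index now advances by three knots at a time and the three central values and derivatives of each local solution are kept. This is again only an illustrative sketch, with the same placeholder solver and the notations of the previous sketch.
\begin{verbatim}
    for i in range(half, len(x) - half, 3):    # centre index advances by 3
        window = x[i - half : i + half + 1]
        s_loc = solve_local_l1_fit(window, f)  # placeholder local solver
        for j in (i - 1, i, i + 1):            # keep the three central knots
            z[j] = s_loc.value(x[j])
            b[j] = s_loc.derivative(x[j])
\end{verbatim}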
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{dataset2_L1SFL5-3}
Discrete $L_1$SFL5-3\\
\includegraphics[width=\linewidth]{dataset2_L1SFL7-3}
Discrete $L_1$SFL7-3\\
\caption{Application of discrete $L_1$SFL5-3 and $L_1$SFL7-3 (solid line) on a multiscale dataset. Comparison with previous discrete $L_1$SFL5 and $L_1$SFL7 (dotted line) and global $L_1$SF (dashed line).}
\label{fig_dataset2_L1SFL-3}
\end{figure}
These methods also have the advantage of requiring less computation than the previous $L_1$SFL5 and $L_1$SFL7. Indeed, with $L_1$SFL5-3 and $L_1$SFL7-3 the window slides more rapidly, since we no longer treat every sequence of five, resp. seven, consecutive knots.\\
In this way, we were able to enhance the consistency of the five-point solution. However, the seven-point method remains the closest to the initial global method. Since the global method is for now our reference, we select the seven-point method for further tests on noisy datasets.\\
We first applied our $L_1$SFL7-3 method, in Fig.\ref{fig_test_heav}, to a 100-point configuration initially evenly distributed on the Heaviside function and then corrupted by Gaussian noise with zero mean and 0.03 standard deviation.\\
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{test_heav}
Discrete $L_1$SFL7-3\\
\includegraphics[width=\linewidth]{test_heav_error}
Pointwise error to the global solution
\caption{Application of discrete $L_1$SFL7-3 (solid line) on a noisy Heaviside-like dataset. Comparison with previous discrete $L_1$SFL7 (dotted line) and global $L_1$SF (dashed line).}
\label{fig_test_heav}
\end{figure}
The results are compared with the global method and the $L_1$SFL7 method. The solutions are not identical but are similar, as the error plot in Fig.\ref{fig_test_heav} suggests. We then applied the method to a 300-point configuration lying initially on the sine function and then corrupted by Gaussian noise with zero mean and 0.05 standard deviation. The observations are the same and the graphical results are given in Fig.\ref{fig_test_sine}.
\begin{figure}[!h]
\centering
\includegraphics[width=\linewidth]{test_sine}
Discrete $L_1$SFL7-3\\
\includegraphics[width=\linewidth]{test_sine_error}
Pointwise error to the global solution
\caption{Application of discrete $L_1$SFL7-3 (solid line) on a noisy sinus-like dataset. Comparison with previous discrete $L_1$SFL7 (dotted line) and global $L_1$SF (dashed line).}
\label{fig_test_sine}
\end{figure}
\newpage
\section{Conclusion}
In this article, we have shown the existence of $L_1$ spline fits, which are very efficient for approximating data with abrupt changes but are time-consuming to compute. In order to obtain methods of lower algorithmic complexity, we have tested different sliding-window schemes for computing $L_1$ spline fits, in both the continuous and the discrete case. At the end of this study, the seven-point method named $L_1$SFL7-3 should be chosen: it is currently a good compromise between keeping the geometrical properties of global $L_1$ spline fits and reducing the computations. The method has linear computational complexity, can be parallelized, and has shown good results on both multiscale and noisy datasets.
\section{Acknowledgments}
The authors deeply thank Shu-Cherng Fang and Ziteng Wang from the Industrial and Systems Engineering Department of North Carolina State University and John E. Lavery, retired from the Army Research Office, for their comments and suggestions that improved the contents of this paper.
\newpage
\bibliographystyle{alpha}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 4,371 |
package it.gov.aifa.invoice_processor.constant;
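/**
 * String constants naming the kinds of documents that can be linked to an
 * invoice (contract, agreement, transport document, linked invoice, purchase
 * line, purchase order, receipt, summary). Non-instantiable constant holder.
 */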
public final class DocumentoCorrelatoType {
public static final String CONTRATTO = "contratto";
public static final String CONVENZIONE = "convenzione";
public static final String DDT = "ddt";
public static final String FATTURA_COLLEGATA = "fatturaCollegata";
public static final String LINEA_ACQUISTO = "lineaAcquisto";
public static final String ORDINE_ACQUISTO = "ordineAcquisto";
public static final String RICEZIONE = "ricezione";
public static final String RIEPILOGO = "riepilogo";
private DocumentoCorrelatoType(){ }
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 3,465 |
18 Worst Things to Do If You Have a Hangover
After a long night drinking, you're going to want to avoid making your hangover worse. Here's what not to do.
By Stephanie Osmanski
Many people like drinking alcohol, but nobody likes having a hangover. A hangover happens when you indulge in too much alcohol and typically the next morning, your body becomes dehydrated. Categorized by super unpleasant symptoms such as vomiting, fatigue and weakness, excessive thirst, dry mouth, headaches, muscle aches, poor or interrupted sleep, increased sensitivity to light and sound, shakiness, mood disturbances, lack of concentration, and rapid heartbeat, the Mayo Clinic says the more you drink, the more likely you are to develop a hangover the next day. And there's a chance you're most likely doing things to magically cure your hangover that are simply making you feel worse.
Fortunately, hangover symptoms go away on their own. If they lasted forever, nobody would probably ever drink again. (Eh, maybe.) And while many myths out there cite the wonders that solutions like coffee and showers and sleep can work, most often, the only things that really combat a hangover are water, electrolytes, and time.
Never want to feel the unpleasantness that is a hangover ever again? It's important to know what you can do to help mitigate (or prevent!) a hangover and also know which actions exacerbate hangovers. You know, so you can avoid them at all costs.
Keep reading for 18 of the worst things to do if you have a hangover.
Taking Acetaminophen
Wait, what?! But the first thing you do when you wake up after a crazy night on the town is reach for the pill bottle, right? That's fine—as long as it's not acetaminophen.
"Do not take Tylenol when you are hungover," says Michael Betancourt, owner, operator, and manager at hydration therapy company Vida-Flo in Hoboken, New Jersey. "Acetaminophen and alcohol don't do well together. [There's] increased risk for liver damage and it slows the metabolism process, which will prolong the effects of alcohol overindulgence."
In short, only rely on Tylenol if you want to feel hungover longer. And no one wants to do that, right?
Having a cup of coffee
Nope. Do not have a cup of coffee. In fact, avoid any drinks high in caffeine, as this could worsen hangover symptoms and even heighten symptoms of mental health conditions such as anxiety, depression, and mood disorders.
"Stay away from caffeine," says Betancourt. "Nausea and jitters are common side effects associated with hangovers, and caffeine will only increase this."
Skipping breakfast
"Breakfast really is the most important meal of the day, especially when you have a hangover," says holistic nutritionist Kyria Marie, MA, NC, CHD, RYT, founder of Kyria Health. "Eating a breakfast full of healthy fats and proteins helps to stabilize blood sugar levels."
Believe it or not, skipping breakfast may make you feel sicker longer, especially because food and nutrients absorb alcohol.
"Consider a breakfast that includes eggs, small amounts of high-quality sausage or bacon, fruits, and vegetables," she says. "While alcohol depletes the body of vitamins and minerals, eating a well-balanced breakfast helps to add more nutrients to the body, thereby helping to reduce the negative effects of alcohol consumption."
Indulging in Hair of the Dog
Another hangover myth is allowing yourself a little hair of the dog. You know, an early morning mimosa with brunch or what's lovingly known as The Next Day Vodka Soda. Many people believe that a little more alcohol will actually cause relief symptoms, but it's a hangover no-no.
"Do not have more drinks after you are hungover," says Betancourt. "You confuse your body and have the potential to introduce greater amounts of Formaldehyde—a highly toxic substance converted from Methanol, which is found in dark alcohol—to your liver and obviously, prolonging the hangover relief process."
Hitting the gym
"You need to exercise to sweat out all those margarita toxins!" Do you have a friend that always says that? Don't listen to her and don't sign up for that early morning barre class with her. She is not looking out for your best interests.
"Do not try to 'sweat it out,'" says Betancourt. "Alcohol acts as a diuretic, causing you to urinate more often, and reducing the amount of fluid retained in your body."
This may not seem like a big deal, but exercising while you're hungover can actually worsen dehydration.
"This causes dehydration, which 75 percent of Americans suffer from anyway, so trying to sweat it out will just exacerbate the dehydration and cause you to feel worse," Betancourt says.
Laying on the couch all day
Trust us, sleep is great but laying on the couch all day long won't really help you feel any better.
"Don't lay around and feel sorry for yourself," says certified health and wellness coach and nutritionist Lynell Ross. "If you can, drink water, plain or sparkling with some fizz, eat something light, then get out and take a walk in the fresh air. Just laying there can make you feel worse. Moving around can help your blood circulation and help your body burn off some of the alcohol messing with your blood sugar."
Take that "moving around" bit with a grain of salt though. "Moving around" could mean something as simple as a walk outside. You don't want to intensify your symptoms of dehydration.
Staying up until you're sober
Fighting off sleep while you're drunk is a sure-fire way to induce a hangover the next day. No, staying up will not cause you to sober up quicker, and it will certainly not stave off any hangover symptoms. In fact, it can make hangovers even worse.
"Do not try to stay awake. This will exacerbate the fatigue and impairment," says Betancourt. "While REM sleep is significantly affected by alcohol, and restless will occur, sleeping nonetheless is an important step to getting through a hangover."
Only relying on water
Drinking water is an effective way to counteract the symptoms of dehydration when you are hungover, but Betancourt recommends looking to other fluids that are high in electrolytes, as well.
"Remember, alcohol is a diuretic, which makes you urinate frequently. You lose more than just water," Betancourt says. "Electrolytes are lost in the urination process, [too], and thus it is extremely important to replenish these using drinks containing electrolytes!"
But that recommendation also comes with a warning: Some drinks high in electrolytes can also have high levels of added sugar. Be wary of those, as sugar does not directly affect your blood alcohol level, but it can worsen your symptoms by causing a stomach ache and a sugar rush that later results in a crash.
Eating a burger, fries, a grilled cheese, and a milkshake
Aside from the negative effects that added sugar can have on your hangover, using greasy food to outrun your hangover isn't a good idea all around.
"Stay away from greasy foods," Betancourt says. "These types of foods would traditionally upset your stomach anyway, so if your stomach is already upset, you just compounded the two!"
Two better alternatives to greasy foods are crackers and toast. Alcohol lowers your blood sugar and disrupted sleep only lowers blood sugar levels further, so bland foods like toast and crackers can actually help boost your levels so that you feel better quicker.
"Low blood sugar is frequently associated with hangovers and low energy," says Marie. "In addition, low blood sugar symptoms feel very similar to a hangover such as fatigue, nausea, lightheadedness, mood swings, weakness, dizziness, and confusion."
Plus, both toast and crackers can help with any nausea you may be experiencing.
Not eating at all
While greasy foods may not be the way to go, that's not to say that you should eat nothing. Before drinking, while drinking, and after drinking, you should be intermittently eating or snacking on something nutritious. Drinking on a full stomach can help you bounce back quicker.
"While there are many cautions to take with [which] food to eat or not eat while drinking, eating nonetheless will help reduce the alcohol absorption rate," says Betancourt. "While you may not feel well and may not want to eat, eating will give your body a substance to replace the alcohol."
Drinking energy drinks
Caffeine isn't just present in coffee, it's also in many energy drinks, too.
"High energy drinks are loaded with caffeine and other junk," says Ross. "If you have a hangover, you need to be hydrated with pure water, or drinks with electrolytes or coconut water, not things that dehydrate you and make you jittery."
Taking a promotional "hangover cure"
We've all seen them on Instagram ads or heard about them before—hangover cures are everywhere, in the form of pills, supplements, or even beverage mix-ins.
"Do not try to take some sort of 'hangover cure' medication orally," says Betancourt. "These are all unsupported by research and can introduce unsuspected side effects."
What you can do is pop a B6 vitamin prior to drinking or the next morning if your head is aching. Vitamin B6 has been linked to easing hangover symptoms. If you don't have any B6, it can naturally be found in poultry, fish, liver, potatoes, and non-citrus fruits.
Nothing can beat a combination of electrolytes, water, and time. Oh, and some bland food!
Sitting in silence
Yes, really. While hangovers can make you hyper-sensitive to light and sound, sitting in silence may not be as beneficial to people with hangovers as listening to music. In fact, listening to "pleasant music" that you personally enjoy has been shown to relieve the pain associated with a hangover more efficiently.
According to a study in the Journal of the Acoustical Society of America, a listener's "preferred" music can also help alleviate nausea.
If you're not sure which playlist to make your hangover go-to, classical music is always a safe bet.
Eating high-sodium foods
Greasy isn't the only type of food you should avoid. Foods high in sodium can also worsen the symptoms of a hangover and fast.
"Snacks like pretzels, chips, olives, pickles and ham are high in sodium, so unless you plan on drinking plenty of water to correct all the fluid imbalance, it's better to lay off the salty snacks until you feel better," says Ross, who is also the founder of Zivadream.
Taking medications
You already know that acetaminophen is a big no-no when you're hungover because of the way it reacts with alcohol. But taking other medications that aren't doctor-approved could also be detrimental to your hangover.
"Do not take any medications without consulting your doctor or reading the warning labels," says Betancourt. "Some medications have adverse effects when mixed with alcohol, and the last thing you need to do is increase your symptoms."
Some medications that may cause negative effects when mixed with alcohol include antidepressants, antibiotics, painkillers, ADHD medication, diabetes medication, cold and flu treatments, and more.
Eating or drinking anything high in citrus
"Stay away from citrus—lemons, limes, oranges," Betancourt says. "Citrus irritates the stomach lining and alcohol irritates the stomach lining… Get it?"
Relying on hangover-curing ice cream
Um, what? Yes, some believe that an ice cream in Korea has been specifically designed to cure hangovers, thanks to the "miracle" ingredient of Hovenia dulcis—aka oriental raisin tree fruit juice.
While oriental raisin tree fruit juice has been used as a hangover cure for centuries, particularly in traditional Eastern medicine practices, it's interesting that the ice cream is grapefruit-flavored.
As mentioned above, citrus can be notoriously bad for treating hangovers. However, the hangover ice cream—which is called Gyeondyo-bar—relies on the power of a small amount of Hovenia dulcis to get the hungover person chipper again.
Our take? If you get the opportunity, sure, try it. If it works, awesome, but we don't recommend relying on grapefruit ice cream to cure your painful hangover symptoms on the reg.
Driving or operating machinery
We know, we know, but you have to drive your car home from your friend's place where you slept last night. But Marie doesn't recommend driving or operating machinery if you're hungover from the night before.
"Although alcohol is typically out of your system about 12 hours after drinking, the effects of the hangover [can] last much longer than that. In fact, some of the symptoms such as headaches and fatigue reach their maximum after 24 hours," says Marie. "A hangover can leave you feeling disoriented, uncoordinated, confused, anxious, fatigued, and nauseous."
None of which are ideal for operating heavy machinery.
"In addition, studies also prove that hangovers inhibit your reaction time, even with a blood alcohol content level of zero or close to zero," says Marie. "If you have somewhere to drive the next day, be mindful not to drink too much the night before."
Also, there's always Uber or Lyft.
Stephanie Osmanski
Stephanie Osmanski is a freelance sustainability, health, and wellness writer.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,656 |
\section{Introduction}
The H$\alpha$ line is frequently used to detect star-forming galaxies and quasars at low to medium redshifts ($z \lesssim 2.3$). Thanks to
its weak metallicity dependence, its relatively high luminosity and favorable observing frequency (relative to
the Ly$\alpha$ line), H$\alpha$ is the main target line of several new space instruments. Due to their very large survey volumes, this new generation of instruments will revolutionize the study of the global properties of galaxy
formation and evolution and put very stringent constraints on dark energy models \citep{2015Gehrels,2013Amendola}.
Being a hydrogen recombination line, H$\alpha$ emission is dominated by massive, short-lived stars, such as type O stars
and early type B stars, characterized by timescales ${\rm < 10-20\, Myr}$. Hence, it is a good tracer of the instantaneous SFR
\citep{2009Lee,2015McQuinn}. To constrain the H$\alpha$ intensity, a proper sampling of the high-mass
end of the initial mass function is required to overcome cosmic variance.
In addition, accounting for medium luminosity
systems, which also contribute significantly to the overall line intensity requires both large volumes and relatively high
sensitivities \citep{2015Sobral,2016Pozzetti}.
At low redshift, $z \lesssim 2.3$, the dust attenuation of H$\alpha$ flux is of the order of 1 mag \citep{2013Sobral,2016Sobral.Stroe}.
However, it is expected to decline at higher redshifts given the lower dust content of the interstellar medium (ISM). Furthermore,
both the observed and the intrinsic H$\alpha$ line emission correlate well with the overall galaxy luminosity. In contrast, other metal UV lines,
such as [OIII]500.7 nm or [OII]372.7 nm depend strongly on the metallicity and dust content of the galaxy \citep{2015Khostovan}.
In recent years, a method for probing gas in
the Universe, called Intensity Mapping, has been proposed \citep{1997madau.Meiksin}.
The basic idea is to observe the intensity of a specific line emission, e.g. H\,I 21 cm radiation \citep{1997madau.Meiksin, chang10, masui13},
and map it at every point in space and redshift, down to the resolution and sensitivity of the telescope.
Since this method probes a certain spectral line, the redshift of the observed gas parcel comes for free.
Therefore, the 3D mapping made available by this method contains an enormous
amount of information about the integrated galaxy and IGM emission from each voxel. This new technique is expected to provide a wealth of information
that is not yet available to current probes. For example, using IM surveys in combination with galaxy surveys will link the distribution of
galaxies with IGM overdensities. Given its simplicity and
many advantages, IM has been extended to several other atomic and molecular emission lines, such as CO, CII and
Ly$\alpha$, used to probe the Epoch of Reionization (EOR) \citep{Visbal:2010rz,2011Lidz,2012GongCII,2013Silva,2015Silva}.
Moreover, IM has also been proposed as a probe of the medium to low redshift Universe in several lines
\citep{2014Pullen,2014Uzgil,2016Silva,2017Fonseca}.
Recently, two H$\alpha$ IM instruments have been proposed. The first is SPHEREx (the
Spectrophotometer for the History of the Universe, Epoch of Reionization, and Ice Explorer), which is a NASA Medium-Class Explorer mission selected for phase
A study in 2017 \citep{2014Dore,2016Dore}. The second instrument is CDIM (the Cosmic Dawn Intensity Mapper \citep{2016CoorayCDIM}) space telescope. \citet{2017Fonseca} have explored the use of the surveys of these two instruments to constrain cosmological models. They found that these missions are able to constrain
the H$\alpha$ power spectrum at $k>0.02\, {\rm h/Mpc}$ down to few-percent accuracy. This makes them useful for probing
the baryonic acoustic oscillations (BAO) scale. Furthermore, \citet{2017Gong} has demonstrated the
potential of SPHEREx to put strong constraints
on the star formation rate density (SFRD) for $z\lesssim5$, assuming that it scales linearly with the observed H$\alpha$ intensity and
overlooking the contamination by background lines in the H$\alpha$ intensity maps.
Two other interesting instruments for H$\alpha$ studies are the Euclid \citep{2011Laureijs} and WFIRST (Wide-Field Infrared Survey Telescope \citep{2012Green,2015Spergel}) space telescopes. This new generation of instruments will carry out galaxy
surveys in a very wide area, particularly compared to ground-based instruments. Their spectroscopic
capabilities will provide data with high-frequency resolution to allow the distinction between two lines with
small frequency separation, hence significantly reducing line contamination \citep{2016Pozzetti}.
Although their spectroscopic sensitivity is low, these telescopes will have high photometric sensitivity that will
allow them to probe down to faint sources -- fainter than the best available ground-based instruments.
We note here that these two instruments will not carry out IM surveys.
Furthermore, the wide-band filters used by Euclid and WFIRST will result in a poor
determination of the emitted line and thus of the galaxy redshift. Spectroscopic follow-up of the photometric sources
will only be possible for a few cases and usually for relatively bright galaxies. Another option would be to use
IM surveys, covering the same area of the sky, to provide better redshift estimates for these galaxies.
In this paper, we study the H$\alpha$ emission at $z\lesssim 5$ expected to be observed by these four instruments. We improve upon previous predictions by estimating the uncertainty in the
intensity of the intrinsic H$\alpha$ signal and of the dust extinction suffered by H$\alpha$ photons. We also
account for the contribution of quasars to the overall H$\alpha$ luminosity density. Moreover, we explore several
possible constraints on astrophysical quantities obtained from the surveys targeting H$\alpha$ emission.
We investigate the contamination of the relevant background/foreground lines in H$\alpha$ intensity maps at each redshift. These interloping
lines include the hydrogen H$\beta$ (486.1 nm) and \lya (121.6 nm) lines, the [SII] (671.7 nm)(673.1 nm) doublet, the [NII] (658.3 nm)(654.8 nm)
doublet and the ionized oxygen [OII] (372.7 nm) and [OIII] (500.7 nm) lines.
By doing this study using simulations, we are able to self-consistently explore the foreground removal technique needed to recover the H$\alpha$ signal from
observational intensity maps.
The paper is organized as follows. In Section~\ref{sec:Ha_physics}, we present the physical processes that give rise to the observed H$\alpha$ line luminosity in galaxies.
In Section~\ref{sec:Ha_constraints} we present and discuss current constraints on the galaxy H$\alpha$ LF, and on the SFR-halo mass relation.
The simulation code we run in order to predict the intensity and spatial fluctuations of H$\alpha$ emission from both
galaxies and the IGM is presented in Section~\ref{sec:Simulations}. We follow by describing the four instruments and their planned H$\alpha$ survey characteristics in Section~\ref{sec:Surveys}. In Section~\ref{sec:LineContam} and Section~\ref{sec:contamination4}
we discuss foreground removal strategies in H$\alpha$ intensity maps in the context of these instruments. A comparison
between constraints from galaxy and IM surveys is made in Section~\ref{sec:Constraints}. The final
conclusions are presented in Section~\ref{sec:Summary}.
Throughout this paper we assume the best fit cosmological parameters from Planck + WMAP \citep{2014Ade}
($\Omega_b h^2=0.022032$, $\Omega_m=0.3089$, $h=0.6704$, $Y_P=0.2477$, $n_s=0.9619$ and $\sigma_8=0.8347$).
\section{H-alpha physics: From SFR to line luminosity}
\label{sec:Ha_physics}
The H$\alpha$ line is a Balmer line that corresponds to a transition between energy levels $n = 3$ to $n = 2$ of neutral hydrogen. It is predominantly emitted during
hydrogen recombinations, but can also arise due to collisional excitation of this transition. The latter process is mainly relevant
for warm and neutral gas, such as the boundary region between ionized and neutral gas. Therefore, recombination emission usually dominates the overall H$\alpha$ emission in galaxies \citep{2008Cantalupo}.
It is commonly assumed that the volume-averaged escape fraction of ionizing photons from galaxies is very small. This quantity
is poorly constrained from observations. However, in the few cases where it has been
measured (along a few lines of sight), it is found to be below the 10\% level \citep{2016Vasei}.
The escape fraction of ionizing photons is highly dependent
on the gas conditions in the ISM. It can thus range from almost zero up to a few tens of percents. High values of escape fraction
are a signal that the ISM contains low column density channels along which ionizing photons can easily escape. Given the
current results from simulations and observations, in the relevant redshift range, escape fractions above $10 - 20$\% are unlikely \citep{2011Boutsia,2014Yajima}.
As a first approximation, and given that this parameter is degenerate with the galaxy SFR, we will assume that it is zero. The
number of hydrogen atom ionizations can then be inferred from the stellar emission spectrum of the galaxy. It should be noted that a zero escape
fraction is the common assumption in the estimation of the SFRD from observations of nebular emission lines \citep{2008Geach,2016Suzuki}.
Following \cite{1998KennicuttApj}, we connect the SFR to the galaxy luminosity ($L_{\nu}$) in the 1500-2800 \r{A}
wavelength range by averaging over a \cite{1955Salpeter} initial mass function (IMF) with solar metallicity and with mass
limits 0.1 to 100 ${\rm M_{\odot}}$. This gives
\begin{equation}
SFR({\rm M_{\odot}\, yr^{-1}})=1.4 \times 10^{-28}\, L_{\nu}\, ({\rm ergs\, s^{-1}\, Hz^{-1}}).
\label{eq:sfr_vs_Lnu}
\end{equation}
Taking the population synthesis galaxy spectra from \cite{1993Bruzual.Charlot} we relate the luminosity density
at 1500 \r{A} to the rate of hydrogen ionizing photon emission ($Q_{\rm H}$). This yields the following relation:
\begin{equation}
SFR({\rm M_{\odot}\, yr^{-1}})=1.08\times 10^{-53}\,Q_{\rm H}\, ({\rm s^{-1}}),
\end{equation}
which is valid for star formation in the age interval $0.1-1$ Gyr, dictated by the assumed IMF.
The timescale for hydrogen ionization is of the order of a few years and the timescale for recombination in the dense
($n\sim10^2-10^4 {\rm cm^{-3}}$) and ionized ISM is of the order of few hundred years. From a cosmological point of view,
these are instantaneous processes. Hence, one can safely assume ionization-recombination equilibrium.
For a case B recombination coefficient (the choice of recombination coefficient has little impact on this result),
an average gas temperature of $10^4\, {\rm K}$ will result in the emission of $\sim 0.45$ H$\alpha$ photons per hydrogen
recombination \citep{1989Osterbrock,1996Madau.Ferguson}.
This results in the commonly used relation between SFR and the intrinsic H$\alpha$ luminosity
\citep{1998KennicuttAraa}:
\begin{equation}
L^{\rm int}_{\rm H\alpha}\, ({\rm erg\, s^{-1} })=1.26 \times 10^{41} SFR\, ({\rm M_{\odot}\, yr^{-1}}).
\label{eq:Lum_Ha}
\end{equation}
Ideally, one should choose the IMF according to the target population. However, this information is usually not available,
which introduces IMF related uncertainty into our calculations. For example, using a
Kroupa IMF \citep{2003Kroupa.Weidner}
or a Chabrier IMF \citep{2003Chabrier} would increase the $L^{\rm int}_{\rm H\alpha}/SFR$ ratio
by a factor of $\sim 1.54$ to $1.64$ compared to the Salpeter IMF. Additional uncertainty in this relation
arises from the choice of population synthesis galaxy spectra \citep{2009Lee}.
The observed H$\alpha$ luminosity is obtained by correcting the intrinsic luminosity for dust extinction. This extinction is usually taken
to be of the order of $A_{\rm H\alpha}=$1 mag, defined as:
\begin{equation}
L^{\rm obs}_{\rm H\alpha} = L^{\rm int}_{\rm H\alpha}\times 10^{-A_{\rm H\alpha}/2.5}.
\end{equation}
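As a minimal numerical illustration of the two relations above (a sketch only, not part of any analysis pipeline, and with an arbitrary example SFR), the conversion from SFR to observed H$\alpha$ luminosity can be written as:
\begin{verbatim}
# Illustrative sketch: SFR -> observed H-alpha luminosity, combining the
# Kennicutt (1998) conversion with the magnitude-based dust correction above.
def L_halpha_observed(sfr_msun_per_yr, A_halpha_mag=1.0):
    L_int = 1.26e41 * sfr_msun_per_yr            # erg/s, intrinsic
    return L_int * 10.0**(-A_halpha_mag / 2.5)   # erg/s, dust-attenuated

# Example: a galaxy forming 10 Msun/yr with A_Halpha = 1 mag gives ~5e41 erg/s
print("%.2e" % L_halpha_observed(10.0))
\end{verbatim}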
Dust extinction increases with stellar mass and environment density \citep{2016Sobral.Stroe}. The amount of dust, and therefore
its line extinction power, correlates with metallicity and is expected to decrease with increasing redshift.
The steep increase of the extinction with stellar mass has a strong impact on the overall extinction, affecting surveys with different flux sensitivity.
The commonly used extinction value, $A_{\rm H\alpha}\, =\, 1\, {\rm mag}$, corresponds to galaxies with stellar masses
of a few times $10^{10}\, {\rm M}_{\odot}$ \citep{2016Sobral.Stroe}. From semi-analytical studies \citep{2016Mitchell}, the dark matter
halo masses corresponding to these stellar masses straddle two orders of magnitude, centered at $M_{\rm halo}\sim 10^{12}\, {\rm M}_{\odot}$. Due
to their sensitivity limits, Euclid and WFIRST spectroscopic surveys will only be able to observe these luminous and massive galaxies. Therefore,
for these surveys the extinction value of $A_{\rm H\alpha}$ = 1 mag is appropriate. However, for surveys capable of probing low luminosity galaxies,
the overall extinction might be smaller.
Note that H$\alpha$ emission will also suffer extinction due to the dust in the Milky Way. This decrement in the H$\alpha$ flux can be estimated with dust maps of the Milky Way. As a reference, corrections due to interstellar extinction in the COSMOS field, in the relevant frequency bands for this study, are of the order of $\lesssim 0.05$ mag \citep{2007ApJS..172...99C}. Moreover, intensity maps will need to be corrected for continuum galactic dust emission and zodiacal light \citep{1998ApJ...500..525S}.
Unlike Ly$\alpha$, H$\alpha$ photons are not efficiently absorbed by neutral hydrogen. Therefore, their flux is expected to suffer
little to no attenuation due to scattering or dust extinction along their path through to the IGM.
\section{Modelling H-alpha emission}
\label{sec:Ha_constraints}
\subsection{H-alpha constraints from observational LFs}
We make use of the H$\alpha$ LFs compiled by \citet{2016Pozzetti}. These include data from the ground-based imaging
Hi-Z Emission Line Survey (HiZELS) with UKIRT, Subaru and VLT \citep{2013Sobral}, the WISP slitless space-based spectroscopic
survey \citep{2013Colbert} with the Wide Field Camera 3 on HST (HST+WFC3) and from the HST Near Infrared Camera and
Multi-Object Spectrograph (HST-NICMOS) \citep{2009Shim,2000Hopkins,1999Yan}.
For simplicity, we fit the H$\alpha$ LF using a Schechter fitting function \citep{1976Schechter}:
\begin{equation}\label{eq:LF}
\Phi(L)dL = \phi_*\left( \frac{L}{L_*} \right)^{\alpha} {\rm exp}\left(-\frac{L}{L_*}\right)\frac{dL}{L_*},
\end{equation}
where $\phi_*$ is a normalization factor with units of inverse volume, $L_*$ is the characteristic luminosity at which there is a break in
the luminosity function and $\alpha$ is the faint-end slope. This formula usually applies to the intrinsic continuum luminosity of galaxies.
However, it has also been shown to be a good fit to galaxy line luminosities. In the case of the H$\alpha$ line LF, it reproduces the emission from star-forming galaxies well, but
not the additional contribution from AGN. The observed line luminosity function is then better described by a
Schechter fitting function plus a power law \citep{2017Matthee}.
Alternatively, these observations can be fitted to other functions, such as the \cite{1990Saunders} fitting function.
Therefore, one should bear in mind that the choice of a Schechter fitting function can introduce certain biases into our estimates.
We evolve the H$\alpha$ LF in the redshift range $0<z<2.3$ following the observed LFs mentioned above. Given the small number
of H$\alpha$ flux observations at higher $z$, we infer the H$\alpha$ luminosity from observed UV fluxes following the method described in \citet{2016Smit}.
Basically, this method consists of converting the LFs at 1600\textup{\AA} into SFRs and then to H$\alpha$ luminosities using
Equations~\ref{eq:sfr_vs_Lnu} and \ref{eq:Lum_Ha}, respectively.
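As a purely illustrative sketch of this conversion chain (dust corrections are discussed below and are not applied here), one may write:
\begin{verbatim}
# Illustrative sketch: UV (1500-2800 A) luminosity density -> SFR -> intrinsic
# H-alpha luminosity, chaining the two conversions quoted above.
def L_halpha_from_Luv(L_nu_uv):        # L_nu in erg/s/Hz
    sfr = 1.4e-28 * L_nu_uv            # Msun/yr (Salpeter IMF)
    return 1.26e41 * sfr               # erg/s, intrinsic H-alpha
\end{verbatim}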
We consider two prescriptions for the H$\alpha$ dust extinction. In the first case, we assume an H$\alpha$ dust extinction
of $A_{{\rm H}\alpha}=1$ mag for the entire redshift range, while in the second case we assume an H$\alpha$ dust extinction of
$A_{{\rm H}\alpha}=1$ mag for the low $z$ ($z\lesssim 2.3$) sample and a
decreasing extinction towards high $z$, reaching $A_{{\rm H}\alpha}=0.475$ mag at $z\sim5$. The latter extinction value is obtained by
requiring the UV data to fit the H$\alpha$ LFs. The high $z$ constraints on the H$\alpha$ LF are based on the offset
between 3.6 $\mu$m fluxes from Spitzer/IRAC and the best spectral energy distribution (SED) fits from the HST upper
limits and the Spitzer/IRAC photometry \citep{2016Smit}.
The observed low-$z$ H$\alpha$ LFs include emission from both star-forming galaxies and AGNs. The percentage of H$\alpha$ emitters
powered by AGN activity depends on the luminosity limits. It can range from a few tens of percent in luminous galaxies to
practically zero for faint galaxies. For example, \citet{2016Sobral.Kohn} has measured an AGN contribution of $30 \pm 8$\% in the redshift
range $z \sim 0.8-2.23$ for bright galaxies ($L_{{\rm H}\alpha}>L_{\ast}$).
The mean intensity of H$\alpha$ emission is
\begin{equation} \label{eq:I_LF}
\bar{I}_{{\rm H}\alpha}(z) = \int_{L_{\rm min}}^{L_{\rm max}} dL \frac{dn}{dL} \frac{L_{{\rm H}\alpha}}{4\pi D_{\rm L}^2}y(z)D_{\rm A}^2,
\end{equation}
where $D_{\rm L}$ is the proper luminosity distance, $D_{\rm A}$ the comoving angular diameter distance and
$y(z)[{\rm Mpc\, h^{-1}\, Hz^{-1}}]=d\chi/d\nu$, where $\chi$ is the comoving distance.
In galaxy surveys, the lower luminosity limit, $L_{\rm min}$, is set by the sensitivity of the instrument. However, the mean intensity due
to the whole H$\alpha$ population would require $L_{\rm min}=0$. The contribution of very faint galaxies, i.e., more
than two or three orders of magnitude below $L_*$, to the
total line intensity is quite small for all possible LF shapes. The choice of the upper limit in luminosity for
the integration, $L_{\rm max}$, should
be the same for any survey. It is usually taken as the maximum observed luminosity, which is a
few orders of magnitude above $L_*$ \citep{2016Pozzetti}.
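For a Schechter LF, the luminosity integral entering Equation~\ref{eq:I_LF} can be written in terms of incomplete Gamma functions. The following Python sketch (with placeholder parameter values, not the fits adopted in this work) evaluates this first luminosity moment and makes it easy to check the sensitivity to the adopted $L_{\rm min}$ and $L_{\rm max}$:
\begin{verbatim}
# Illustrative sketch: integral of L * Phi(L) dL for a Schechter LF,
# expressed through upper incomplete Gamma functions (valid for alpha > -2).
from scipy.special import gamma, gammaincc

def schechter_L_moment(phi_star, L_star, alpha, L_min, L_max):
    a = alpha + 2.0
    upper = lambda x: gammaincc(a, x) * gamma(a)   # upper incomplete Gamma
    return phi_star * L_star * (upper(L_min / L_star) - upper(L_max / L_star))

# Placeholder values: phi* = 1e-3 Mpc^-3, L* = 1e42 erg/s, alpha = -1.4
print(schechter_L_moment(1e-3, 1e42, -1.4, 1e-4 * 1e42, 1e3 * 1e42))
\end{verbatim}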
The constraints on the mean intensity of H$\alpha$ emission from observational LFs are shown as symbols in Figure~\ref{fig:IHa}. Also
shown is the best fit to the observational data derived from the \citet{2001Cole} SFRD fitting function (black lines). The solid and
the dashed-dotted lines denote the H$\alpha$ intensity with and without a
15\% boost due to the presence of AGN, respectively \citep{2010Garn,2013Sobral}. The intensity points were derived by integrating the observed
luminosity functions down to H$\alpha$ luminosities of $10^{30}\, {\rm erg\, s^{-1}}$, which is well below the observed flux limit. We fixed this lower
luminosity limit and determined the one-sigma error bars from the one-sigma uncertainty in the low luminosity end slope.
Given that this model fits the observational
constraints well, it will be our base model. The total H$\alpha$ intensity (including the AGN contribution) corresponding to this fit is
\begin{eqnarray} \label{eq:I_Ha_fit}
\bar{I}_{{\rm H}\alpha}\, (z)\, &=& 2.892\times 10^{-9}\frac{0.027+0.28z}{1+(z/4.8)^{5.3}}\frac{(1+0.15)}{(1+z)^2} \\ \nonumber
&\times& \frac{y(z)}{\rm [Mpc\, h^{-1}\, Hz^{-1}]}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}\, Hz^{-1}}.
\end{eqnarray}
We will further discuss this model in Section \ref{sec:Ha_sim}. The uncertainties in the model will be discussed in Section~\ref{sec:Constraints}.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F1.pdf}
\caption{Intensity of observed H$\alpha$ emission from galaxies. The red triangles correspond to observed H$\alpha$ fluxes at $z\lesssim2.3$
\citep{1995Gallego,2013Sobral,2015Sobral,2015Stroe}. At higher z, yellow symbols correspond to H$\alpha$ intensities obtained
from the offset between 3.6 $\mu$m fluxes from Spitzer/IRAC and best SED fits from HST upper limits and Spitzer/IRAC photometry,
following \citet{2016Smit}. The latter correspond to star-forming H$\alpha$ emitters only. Also shown is the fit to the observational points based on the \citet{2001Cole} SFRD fitting formula, with (solid line) and without (dotted line) assuming an intensity
boost of 15\% due to AGN-powered H$\alpha$ emission. Error bars correspond to the one-sigma uncertainty in
the low luminosity end slope of the observational LFs.
}
\label{fig:IHa}
\end{centering}
\end{figure}
\subsection{H-alpha bias constraints from simulations}
\label{sec:Ha_sim}
We make use of a SFR model based on simulations, in order to predict H$\alpha$ line luminosities where they are not observationally available. This is usually
the case at higher redshifts and/or at low line luminosities. The use of simulations, which provide a relation between SFR and halo mass, will also allow us
to estimate the H$\alpha$ bias needed to compute the line power spectra.
In this section, we compare how well different analytical functions are able to reproduce our simulated and observational constraints. We choose the best of these fits
as our base SFR/SFRD model.
\subsubsection{SFR}
\label{subsec:SFR}
We adopt the catalog of \cite{2013Guo.White} (hereafter, Guo2013), who used a semi-analytic prescription that incorporates astrophysical
properties of the Sloan Digital Sky Survey (SDSS) to the dark matter halos in the Millennium
and Millennium II cosmological simulations \citep{2005Springel.White,2009Boylan-Kolchin}.
This catalog includes an estimate of the galaxy SFR based on its cold gas mass, the fuel for star formation. It is assumed that during a single
orbital period, 20\% of the cold gas in the galaxy is converted into stars. This is usually presented as the galaxy having an $\epsilon=0.2$ efficiency
per dynamical cycle of converting cold gas into stars.
The average star formation rate of these galaxies as a function of halo mass can be parameterized with the function
\begin{equation}
SFR(M)=10^{a}\left(\frac{M}{M_1} \right)^b\left(1+\frac{M}{M_2} \right)^c\, [{\rm M_{\odot}\, yr^{-1}}],
\label{eq:SFR_Guo}
\end{equation}
where $M_1=10^8 {\rm M_{\odot}}$. The remaining fitting parameters for redshifts
ranging from $z \sim 0-4.8$ can be found in Table \ref{tab:SFR}.
This fit is valid in the $M\sim (10^8-10^{13})\, {\rm M_{\odot}} $ mass range for $z<4$ and in the
$M\sim (10^8-10^{12})\, {\rm M_{\odot}} $ mass range for $z\gtrsim 4$. At higher halo masses the SFR is assumed to be constant.
As an example, Figure~\ref{fig:sfr} shows this relation at $z=2.2$. In Section~\ref{subsec:SFRD} we discuss a normalization
to this relationship that we applied in order to better fit current observational constraints.
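As a reference implementation (illustrative only; the normalization to our observationally based SFRD, discussed in Section~\ref{subsec:SFRD}, is not applied here), Equation~\ref{eq:SFR_Guo} with the $z=2.2$ parameters of Table~\ref{tab:SFR} reads:
\begin{verbatim}
# Illustrative sketch: average SFR as a function of halo mass at z ~ 2.2,
# using the parameters quoted in the text (a=-8.2, b=2.7, c=-2.95, log10 M2=11.6).
def sfr_of_halo_mass(M_msun, a=-8.2, b=2.7, c=-2.95, log10_M2=11.6, M1=1e8):
    M2 = 10.0**log10_M2
    return 10.0**a * (M_msun / M1)**b * (1.0 + M_msun / M2)**c   # Msun/yr

print(sfr_of_halo_mass(1e12))   # ~10 Msun/yr for a 1e12 Msun halo at z ~ 2.2
\end{verbatim}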
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F2.pdf}
\caption{
SFR as a function of halo mass at $z\sim2.2$ from the
\citet{2013Guo.White} galaxy catalog.
The dashed line marks the average in this relation. The yellow and green dots correspond respectively to DM halos extracted from the
Millennium and Millennium II cosmological dark matter simulations.
}
\label{fig:sfr}
\end{centering}
\end{figure}
\begin{table}
\centering
\caption{Fitting parameters for the SFR-Halo Mass relation}
\begin{tabular}{l c c c c c c }
\hline\hline
& $z=0$ & $z=0.8$ & $z=1$ & $z=2.2$ & $z=4.8$\\
\hline
a & $ -9.3 $ & $-8.75$ & $-8.75$ & $-8.2$ & $-6.7$ \\
b & $ 2.6 $ & $2.7 $ & $2.7$ & $2.7$ & $2.5$ \\
c & $ -2.9 $ & $-3.0$ & $-3.0$ & $-2.95$ & $-2.6$ \\
${\rm log_{\rm 10}}M_2$ & $ 11.9 $ & $11.7$ & $11.7$ & $11.6$ & $11.3$ \\
\hline
\end{tabular}
\label{tab:SFR}
\end{table}
\subsubsection{SFRD}
\label{subsec:SFRD}
It is worth pointing out the large
uncertainties in the evolution of the SFRD predicted from simulations, especially towards
high redshifts. This uncertainty is currently unavoidable due to the poor understanding of
feedback effects and the large range of halo masses contributing to the SFRD.
The fitting parameters shown in Table~\ref{tab:SFR} and used in Equation~\ref{eq:SFR_Guo}
underestimate the SFRD at $z>1$ compared to other observationally based models, such as the
ones by \citet{2006Hopkins.Beacom} (hereafter, HB+06), \citet{2013Behroozi} (hereafter, Be+13),
or \citet{2014MadauDickinson} (hereafter, MD+14).
These models are fits of the SFRD estimated using different sets of observational data in the
infrared, optical, radio and UV bands. We compare these models' predictions with the SFRD
derived from H$\alpha$ measurements to illustrate how different probes
of incomplete data samples at high-$z$ lead to a large uncertainty in the cosmic SFRD evolution.
For a Salpeter IMF and for the conversion factor between UV luminosity and SFR shown in Equation \ref{eq:sfr_vs_Lnu}, the HB+06 SFRD is
\begin{equation}
{\rm SFRD}\, (z)\, =\, \frac{h}{0.77}\, \frac{0.012+0.091z}{1+(z/3.3)^{5.3}}\, {\rm M_{\odot}\, yr^{-1}\, Mpc^{-3}},
\end{equation}
whereas the Be+13 SFRD is
\begin{equation}
{\rm SFRD}\, (z)\, =\, \frac{0.311}{10^{-0.997(z-z_0)}+10^{0.241(z-z_0)}}\, {\rm M_{\odot}\, yr^{-1}\, Mpc^{-3}},
\end{equation}
with $z_0=1.243$. Finally, the MD+14 SFRD is given by
\begin{equation}
{\rm SFRD}\, (z)\, =0.015\frac{\left( 1+z \right)^{2.7}}{1+\left[(1+z)/2.9 \right]^{5.6}}\, {\rm M_{\odot}\, yr^{-1}\, Mpc^{-3}}.
\end{equation}
In Figure~\ref{fig:sfrd} we show the SFRD evolution predicted by the aforementioned models. This figure
shows that none of these SFRD models provide a good fit to the star formation traced by H$\alpha$ emission.
We note that the high redshift points in Figure~\ref{fig:sfrd} are derived from H$\alpha$ emitters, which show an excess H$\alpha$ flux compared
to the measured UV fluxes. This might be due to dust correction uncertainties, a bursty or rising star formation history, the shape of
the ionizing spectrum or other reasons. For a further discussion on this subject see \citet[]{2016Smit}. As a result, at high $z$,
the SFRs and SFRDs inferred from the H$\alpha$ flux using Equation~\ref{eq:Lum_Ha} might be overestimated. Nevertheless, we will start by
ignoring this H$\alpha$ excess since its origin is not yet clear. In order to be consistent, we will use Equation~\ref{eq:Lum_Ha} to connect SFR
and H$\alpha$ emission for the $z\sim$0-5 redshift range. In Sections~\ref{sec:Summary} and \ref{subsec:Const_SFRD}
we will discuss the impact of this decision on our conclusions.
We fit the SFRD traced by H$\alpha$ emission by updating the parameters in the \citet{2001Cole} fitting function:
\begin{equation}
{\rm SFRD}\, (z)\, =\, \frac{0.01+0.1036z}{1+(z/z_1)^{e}}\, {\rm M_{\odot}\, yr^{-1}\, Mpc^{-3}},
\label{eq:sfrd_model}
\end{equation}
where $z_1=4$ and $e=5$. These parameters can be derived from the H$\alpha$ intensity given by Equation \ref{eq:I_Ha_fit}, using
the conversion factor between SFR and H$\alpha$ luminosity in Equation \ref{eq:Lum_Ha}. We assume a dust extinction of
$A_{{\rm H}\alpha}=1\, {\rm mag}$ at low redshift ($z\lesssim 2.3$) and a
lower extinction of $A_{{\rm H}\alpha}=0.45\, {\rm mag}$ at $z\sim4-5$ (shown as the middle thickness black solid line
in Figure~\ref{fig:sfrd}). The same fit for a constant H$\alpha$ extinction of $A_{{\rm H}\alpha}=1\, {\rm mag}$
can be obtained using $z_1=4.8$ and $e=5.3$ (shown as the thin black solid line in Figure~\ref{fig:sfrd}).
The SFRD formula for a Small Magellanic Cloud (SMC) extinction curve \citep{2003Gordon}, can be obtained with $z_1=3.25$ and $e=5.9$ (shown
as the thick black solid line in Figure~\ref{fig:sfrd}).
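The three parameter sets can be compared directly with a short numerical sketch (illustrative only, evaluated here at an arbitrary example redshift):
\begin{verbatim}
# Illustrative sketch: the Cole et al. (2001)-type SFRD fit quoted above,
# for the three extinction-dependent parameter sets.
def sfrd(z, z1, e):
    return (0.01 + 0.1036 * z) / (1.0 + (z / z1)**e)   # Msun/yr/Mpc^3

params = {"A_Ha = 1 -> 0.45 mag (base model)": (4.0, 5.0),
          "A_Ha = 1 mag (constant)":           (4.8, 5.3),
          "SMC extinction curve":              (3.25, 5.9)}
for label, (z1, e) in params.items():
    print(label, "SFRD(z=2.2) = %.3f" % sfrd(2.2, z1, e))
\end{verbatim}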
From the current H$\alpha$ observations, one can trace the LF and use it to derive the cosmic SFRD. However, in order to estimate the H$\alpha$
intensity power spectrum we need to establish a relation between SFR and DM halo mass. Since we cannot observationally probe the masses of the DM
halos hosting the H$\alpha$ emitters we adopt the SFR-halo mass relations described in Section \ref{subsec:SFR}. We normalize these relations to our
observationally based SFRD fit given by Equation~\ref{eq:sfrd_model} (with $z_1=4$ and $e=5$) and adopt them as our base SFR model.
In the following sections, we use this SFR model to estimate the average H$\alpha$ flux originating from a DM halo,
the H$\alpha$ bias and finally the power spectra of H$\alpha$ intensity spatial fluctuations. Note that at low redshift there are a
few observational points below our theoretical model.
However, the adopted simple, yet physically based model cannot properly fit these points and the constraints at $z\sim1$ at the same time.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F3.pdf}
\caption{
Cosmic SFRD as a function of redshift according to several models.
The \citet{2013Guo.White} SFRD is shown for the default
efficiency of conversion of gas into stars of $\epsilon=0.2$ (green thick dashed line) and also for $\epsilon=0.4$ (green thin dashed line).
The symbols correspond to observational constraints based on H$\alpha$ emitters. The red triangles are derived from
H$\alpha$ LFs at $z=0.08 $ \citep{2007Ly.Malkan}, at $z=0.0225 $ \citep{1995Gallego}, at $z=(0.4,0.84,1.47,2.23)$
\citep{2013Sobral}, at $z=0.81 $ \citep{2015Sobral} and at $z=0.2$ \citep{2015Stroe}. The yellow symbols correspond to
SFRD constraints, derived from observed H$\alpha$ fluxes and UV continuum luminosities \citep{2016Smit}. These symbols assume different values for the extinction
suffered by H$\alpha$ emission. Squares, circles and diamonds correspond respectively to: $A_{{\rm H}\alpha}=1.0$, $A_{{\rm H}\alpha}=0.45$
and $A_{{\rm H}\alpha}=0.03$. The latter value was obtained assuming an SMC extinction law.}
\label{fig:sfrd}
\end{centering}
\end{figure}
\subsubsection{Power Spectrum}
The total power spectrum of H$\alpha$ emission can be written as
\begin{equation}
P_{\rm H\alpha}^{\rm tot}(k,z)=P_{\rm H\alpha}^{\rm clus}(k,z)+P^{\rm shot}_{\rm H\alpha}(z),
\end{equation}
where the first term accounts for the galaxy clustering and is given by:
\begin{equation}\label{eq:P_Lya}
P_{\rm H\alpha}^{\rm clus}(k,z) = \bar{b}_{\rm H\alpha}^2 \bar{I}_{\rm H\alpha}^2 P_{\delta \delta}(k,z),
\end{equation}
where $b_{\rm H\alpha}$ is the bias between H$\alpha$ emission and the matter power spectrum ($P_{\delta \delta}$).
The H$\alpha$ luminosity weighted bias is
\begin{equation}
\label{eq:lumbias}
\bar b_{\rm H\alpha} \left( z \right) \equiv\frac{\int^{M_{\rm max}}_{M_{\rm min}}{dM} ~b\left( M,z\right) L_{{\rm H}\alpha}(M,z) ~\frac{dn}{dM} }{\int^{M_{\rm max}}_{M_{\rm min}}{dM}~ L_{{\rm H}\alpha}(M,z) ~\frac{dn}{dM} }\,,
\end{equation}
where $b\left( M,z\right)$ is the halo bias and $dn/dM$ is the DM halo mass function. The integration limits for the halo mass are
$M_{\rm min}=10^{\rm 8}{\rm M_{\odot}}$ and $M_{\rm max}=10^{\rm 15}{\rm M_{\odot}}$.
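A direct numerical implementation of Equation~\ref{eq:lumbias} is straightforward once the halo bias, the halo mass function and the $L_{{\rm H}\alpha}(M)$ relation are specified; in the sketch below (illustrative only) these are assumed to be provided as external callables, e.g. from a standard halo-model code together with the SFR prescription of Section~\ref{subsec:SFR}:
\begin{verbatim}
# Illustrative sketch of the luminosity-weighted bias integral.
import numpy as np

def _trapz(y, x):
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def luminosity_weighted_bias(halo_bias, L_halpha, dn_dM,
                             M_min=1e8, M_max=1e15, n=512):
    lnM = np.linspace(np.log(M_min), np.log(M_max), n)
    M = np.exp(lnM)
    w = L_halpha(M) * dn_dM(M) * M      # extra factor of M from dM = M dlnM
    return _trapz(halo_bias(M) * w, lnM) / _trapz(w, lnM)
\end{verbatim}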
The clustering term dominates the observed line power spectrum on large scales,
whereas on small scales the power spectrum is dominated by the second term, namely the shot noise caused by
the discrete distribution of galaxies \cite[e.g.,][]{Visbal:2010rz}:
\begin{equation}\label{eq:Pshot_Lya}
P^{\rm shot}_{\rm H\alpha}(z) = \int_{L_{\rm min}}^{L_{\rm max}} dL \frac{dn}{dL} \left[\frac{L_{\rm gal}^{\rm H\alpha}}{4\pi D_{\rm L}^2}y(z)D_{\rm A}^2\right]^2.
\end{equation}
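Similarly, for a Schechter LF the luminosity integral in Equation~\ref{eq:Pshot_Lya} reduces to incomplete Gamma functions; the sketch below (illustrative only) returns the second luminosity moment, i.e. the shot-noise amplitude up to the squared flux-conversion prefactor $\left[y D_{\rm A}^2/(4\pi D_{\rm L}^2)\right]^2$:
\begin{verbatim}
# Illustrative sketch: second luminosity moment of a Schechter LF, which sets
# the shot-noise amplitude once the geometric prefactor is included.
from scipy.special import gamma, gammaincc

def schechter_L2_moment(phi_star, L_star, alpha, L_min, L_max):
    a = alpha + 3.0
    upper = lambda x: gammaincc(a, x) * gamma(a)
    return phi_star * L_star**2 * (upper(L_min / L_star) - upper(L_max / L_star))
\end{verbatim}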
The relation between
line luminosity and halo mass, used in Equation~\ref{eq:lumbias}, has a large scatter. Moreover, it is rarely possible to observationally probe DM halo masses.
The bias of H$\alpha$ emitters
is observationally constrained at $z\sim2.24$ to be $b_{\rm H\alpha}\, =\, 2.4^{+0.1}_{-0.2}$ \citep{2012Geach}.
This bias was derived by comparing a sample of 370 H$\alpha$ emitters detected by HiZELS with predictions from GALFORM semi-analytic models \citep{2000Cole}.
The HiZELS uniform complete sample contains emitters with luminosities down to $L_{\rm H\alpha} = 2\times 10^{42}\, {\rm erg\, s^{-1}}$. However, the
referred bias was estimated assuming a lower luminosity limit of $L_{\rm H\alpha}^{\rm min}\, =\, 10^{41}\, {\rm erg\, s^{-1}}$. Since the H$\alpha$ bias
decreases towards lower luminosities, the total H$\alpha$ emission bias might be lower than the quoted value.
Given the lack of observational constraints for the full redshift range of interest, we estimate the H$\alpha$ bias assuming that H$\alpha$ emission scales with the
galaxy SFR following Equation~\ref{eq:Lum_Ha}.
The cumulative H$\alpha$ bias as a function of redshift is shown in Figure~\ref{fig:bias} using our SFR model. This figure clearly shows that the current observational constraints (limited to high fluxes) cannot be used to calculate the bias for a wide range of galaxy luminosity, hence a theoretical model is needed.
The total H$\alpha$ bias depends on the evolution of the slope of the relation between the SFR and the mass of a DM halo, but it is independent of the amplitude of this
relation. Nonetheless, we
use our normalized SFR-halo mass relation described in Sections~\ref{subsec:SFR} and \ref{subsec:SFRD}, so that we can estimate the
H$\alpha$ bias as a function of minimum halo mass. This allows us to estimate the cumulative H$\alpha$ bias according to several flux cuts, relevant for the upcoming H$\alpha$ surveys.
At low redshifts and for large halo masses each DM halo contains several galaxies. However, there is usually one galaxy dominating the SFR in the halo and
so, for simplicity, we assume that the H$\alpha$ flux in a halo corresponds to the emission from one galaxy.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F4.pdf}
\caption{Cumulative bias of H$\alpha$ emission. The x-axis corresponds to the minimum mass of a halo used in the bias integration. The
circles, pentagons, triangles, and squares correspond respectively to flux limits of ($10^{-18}$, $10^{-17}$, $1.2\times10^{-16}$,
$3\times10^{-16}$) ${\rm erg\, s^{-1}\, cm^{-2}}$.}
\label{fig:bias}
\end{centering}
\end{figure}
\section{Simulated H-alpha emission from galaxies and from the IGM}
\label{sec:Simulations}
Intensity mapping surveys will be sensitive to H$\alpha$ emission from galaxies. However, these surveys will also detect H$\alpha$ emission
from the large-scale IGM filaments connecting the DM halos in which galaxies reside. In this section, we use simulations to estimate
the H$\alpha$ emission from these two media
and compare their relative contribution to the total H$\alpha$ emission intensity and power spectrum.
\subsection{The simulation}
\label{subsec:simulated_emission}
The intensity of IGM H$\alpha$ emission scales nonlinearly with the gas density and temperature.
Therefore, we run a simulation code with a high spatial resolution in order to model the local properties of the gas.
We start with a dark matter only run made with the parallel
code Gadget 2 \citep{2001Springel,2005Springel}. This simulation covers a volume of $(200\, {\rm Mpc\, h^{ -1}})^3$ with $1024^3$ particles, each with a mass of $6.5\times10^8\, {\rm M_{\odot}\, h^{-1}}$.
Simulation outputs in the redshift range $\sim0-5$ are used to estimate the H$\alpha$ intensity. In addition, outputs
up to $z\sim10$ are used to estimate the contamination by background lines in H$\alpha$ intensity maps.
The particles are distributed in 3D boxes with $1200^3$ cells following the cloud in cell method.
In order to model the conditions of the IGM gas we assume that the spatial distribution of the baryonic matter follows that of dark matter.
The gas temperature, the neutral hydrogen number density ($n_{\rm HI}$), the ionized hydrogen number density ($n_{\rm HII}$)
and the electron number density ($n_{\rm e}$),
are estimated following the prescription outlined in \cite{2017Kooistra}.
Additionally, the Amiga halo finder code \citep{2004Gill} is used in order to
extract DM halos from the Gadget 2 outputs. The minimum halo mass in our simulation is $M_{\rm min}= 6.5\times 10^{9}\, {\rm M}_{\odot}$.
To each of these halos we attribute a SFR (normalized to our SFRD model) from a random halo with a similar mass from the Guo2013 galaxy catalog.
The H$\alpha$ emission from the DM halos is obtained with Equation~\ref{eq:Lum_Ha}, assuming a dust extinction of $A_{{\rm H}\alpha}=1\, {\rm mag}$.
Figure~\ref{fig:map_Ha} shows a map of the total H$\alpha$ emission from galaxies and from the IGM at redshift 2 (notice the logarithmic color scale).
This figure clearly shows that the contribution of the diffuse component is subdominant. The procedure used to derive the IGM H$\alpha$ emission
from the simulation is outlined in Section \ref{subsec:IGM_emission}.
\begin{figure}
\hspace{-5 mm}
\includegraphics[angle=0,width=0.55\textwidth]{./figures/F5.pdf}
\caption{Map of H$\alpha$ emission from galaxies and from the IGM at $z\sim2$.
The map scale corresponds to ${\rm log_{10}}\left( \nu {\rm I}_{\rm H\alpha}[{\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}] \right)$.}
\label{fig:map_Ha}
\end{figure}
\subsection{H-alpha IGM emission}
\label{subsec:IGM_emission}
At $z\sim 5$, the IGM gas is kept highly ionized by the ultraviolet background radiation (UVB) together with ionizing radiation from local sources. Most of this gas is located in a
large scale filamentary structure connecting galaxies and galaxy clusters. The relative overdensity of filamentary gas allows for the existence
of small clumps of neutral gas. Therefore, in these filaments, there will be H$\alpha$ emission originating in hydrogen recombinations and
collisional excitations.
The luminosity density per comoving volume of H$\alpha$ emission from hydrogen recombinations in the IGM is
\begin{equation}
{\rm \ell}^{\rm IGM}_{\rm rec}(z)= f_{\rm rec}\, \dot{n}_{\rm rec}\, E_{\rm H\alpha},
\label{eq:Ha_IGM_rec}
\end{equation}
where $E_{\rm H\alpha}=1.89\, {\rm eV}$ is the energy of an H$\alpha$ photon. $f_{\rm rec}$ is the probability of emission of an H$\alpha$ photon
per recombination of a hydrogen atom. The value of $f_{\rm rec}$ at a gas temperature $T\, =\, 10^4\, {\rm K}$, is $0.45$ \citep{2006Osterbrock}.
The number density of recombinations per second, $\dot{n}_{\rm rec}$, is:
\begin{equation}
\dot{n}_{\rm rec}(z)=\alpha_B(T)\, n_e(z)\, n_{\rm HII}(z).
\label{eq:nrec_s}
\end{equation}
Here $\alpha_B$ is the case B recombination coefficient for hydrogen.
The gas temperature in the IGM can be much lower than that of the ISM. Therefore, we
also consider a temperature dependent effective recombination coefficient for the H$\alpha$ emission,
$\alpha_B^{\rm H\alpha}(T)=f_{\rm rec}(T)\, \alpha_B(T)$. We take the fitting formula from \citet{2015Raga.Castellanos}, hereafter Raga+15:
\begin{equation}
\alpha_{\rm B}^{\rm H\alpha}=\frac{4.85\times10^{-23}}{T^{0.568}+3.85\times10^{-5}T^{1.5}}\, {\rm cm^3\, s^{-1}}.
\end{equation}
This fit follows the same trend as the tabulated values from \citet{2006Osterbrock}.
The H$\alpha$ luminosity due to collisional excitations is
\begin{equation}
{\rm \ell}_{\rm coll}^{\rm IGM}=E_{{\rm H}\alpha}\, n_{\rm e}\, n_{\rm HI}\, q_{\rm eff}^{{\rm H}\alpha},
\end{equation}
where $n_{\rm HI}$ is the neutral hydrogen number density. The parameter $q_{\rm eff}^{\rm H\alpha}$ is the effective
collisional excitation coefficient for H$\alpha$ emission, which is taken from Raga+15 as
\begin{equation}
q_{\rm eff}^{{\rm H}\alpha}=\frac{3.57\times10^{-17}}{T^{0.5}}e^{-140360/T}\left(1+\frac{7.8}{1+5\times10^5/T} \right).
\end{equation}
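Both fitting formulae are straightforward to evaluate numerically; the following sketch (illustrative only) returns the two coefficients in ${\rm cm^3\, s^{-1}}$ and can be used to reproduce the trends shown in Figure~\ref{fig:coll_rec}:
\begin{verbatim}
# Illustrative sketch: Raga et al. (2015) fits for the effective H-alpha
# recombination and collisional-excitation coefficients (both in cm^3 s^-1).
import numpy as np

def alphaB_halpha(T):
    return 4.85e-23 / (T**0.568 + 3.85e-5 * T**1.5)

def q_eff_halpha(T):
    return (3.57e-17 / np.sqrt(T)) * np.exp(-140360.0 / T) \
           * (1.0 + 7.8 / (1.0 + 5e5 / T))

# Relative emissivity weights for gas with ionized fraction x_i (per n_e * n_H):
# recombinations scale as x_i * alphaB, collisions as (1 - x_i) * q_eff.
T, x_i = 2.0e4, 0.1
print(x_i * alphaB_halpha(T), (1.0 - x_i) * q_eff_halpha(T))
\end{verbatim}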
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F6.pdf}
\caption{Luminosity density of H$\alpha$ emission from gas with a hydrogen density of $n_{\rm H}\, =\, 10^{-6}\, {\rm cm^{-3}}$. Recombination
H$\alpha$ emission (blue thick lines) and collisional excitation H$\alpha$ emission (red thin lines) are shown for gas with an ionized
fraction of $x_i\, =\, (0.5\, ,0.1\, ,0.0001)$, for respectively the solid, dashed and dot-dashed lines.
Note, that in the IGM the average hydrogen ionized fraction is very small, of the order of $10^{-5}$ at $z\sim0$.}
\label{fig:coll_rec}
\end{centering}
\end{figure}
The relative importance of recombination and collisional excitation for H$\alpha$ emission is illustrated in Figure~\ref{fig:coll_rec}. The
plotted luminosity densities assume hydrogen number densities of $n_{\rm H}\, =\, 10^{-6}\, {\rm cm^{-3}}$, $n_{\rm HII}=n_{\rm e}=x_i n_{\rm H}$ and
$n_{\rm HI}=(1-x_i)n_{\rm H}$, where $x_i$ is the gas ionized fraction. As a reference, the average hydrogen number density in the IGM is about ${n_{\rm H}\, =1.9 \times 10^{-7} \rm cm^{-3}}$. The assumed ionized fractions
of the gas are presented in the figure.
The gas temperature and its ionization state are both positively correlated with the strength of the extragalactic background radiation.
The assumption of thermal and ionization equilibrium sets the HI gas temperature to $T\, \sim\, 10^4\, {\rm K}$. Hence, recombinational
emission is the dominant process for generating H$\alpha$ photons in this medium.
The intensity of H$\alpha$ emission from the IGM is
\begin{equation}
I_{\rm H\alpha}^{\rm IGM}(z)=\frac{\left({\rm \ell}^{\rm IGM}_{\rm rec}+{\rm \ell}^{\rm IGM}_{\rm coll}\right)\, D_A^2}{4\pi D_L^2}y(z).
\end{equation}
Using this equation, we estimate the H$\alpha$ intensity for each cell of our simulation boxes.
Cells with densities above the threshold for collapse,
which at $z\sim0$ is $\Delta_{\rm c} \sim 328$, should contain galaxies. These galaxies will contain most of the baryonic mass in the cell. The remaining
baryonic mass will be highly heated and ionized by
the local sources. Therefore, we assume that the H$\alpha$ IGM emission in these regions is zero.
Figure~\ref{fig:I_Ha_IGM_GAL} shows the redshift evolution of the simulated H$\alpha$ intensity from the IGM and from galaxies. The considered model results in galaxy
H$\alpha$ emission dominating over H$\alpha$ emission from the IGM. This figure also shows that, after masking the emission from galaxies with H$\alpha$ fluxes above
$10^{-18}\, {\rm erg\, s^{-1}\, cm^{-2}}$, the
remaining signal is still dominated by emission from low luminosity galaxies and not from IGM emission. At $z\sim0$, the
intensities of these two faint sources are similar due to the low redshift emission being highly dominated by bright sources.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F7.pdf}
\caption{Intensity of H$\alpha$ emission from galaxies (two top lines) and from the IGM (bottom lines). The black thin solid
line accounts for the emission from galaxies with H$\alpha$ fluxes ${\rm flux}<10^{-18}\, {\rm erg\, s^{-1} cm^{-2}}$. The blue dashed-dotted line
assumes $f_{\rm rec}\, =\, 0.45$, while the red lines assume the
temperature dependent H$\alpha$ emission rates from Raga+15.
}
\label{fig:I_Ha_IGM_GAL}
\end{centering}
\end{figure}
\begin{table*}
\centering
\caption{Parameters for the spectroscopic surveys}
\begin{tabular}{l c c c c c c }
\hline\hline
Instrument & $\lambda $ & $z_{{\rm H}\alpha}$ & $\delta_{\theta}$ & $R$ & Flux limit & FOV\, total \\
& ${\rm (\mu m)}$ & & ${\rm (arcsec)} $ & & ${\rm (erg\, s^{-1}\, cm^{-2})}$ & $({\rm deg^2})$\\
\hline
SPHEREx\, deep & $0.75-4.18$ & $0.1428-5.369$ & $6.2$ & $41.4$ & $10^{-17}$ & $200$ \\
SPHEREx\, deep & $4.18-5.0$ & $5.369-6.6187$ & $6.2$ & $135$ & $10^{-17}$ & $200$ \\
CDIM\, deep & $0.75-7.5$ & $0.14-10.43$ & $1$ & $500$ & ${\rm \le 4 \times 10^{-18}}$ & $25$ \\
CDIM & $0.75-7.5$ & $0.14-10.43$ & $1$ & $500$ & ${\rm 10^{-17}}$ & $300$ \\
Euclid\, deep & $1.1-2.0$ & $0.68-2.036$ & $0.3$ & $250$ & ${3\times 10^{-16}}$ & $40$ \\
WFIRST & $1.35-1.95$ & $1.05-1.96$ & $0.15$ & $75$ & ${1.2\times 10^{-16}}$ & $2227$ \\
\hline
\end{tabular}
\label{tab:instruments}
\end{table*}
\subsection{H-alpha emission power spectra}
\label{subsec:Ha_power_spectra}
\begin{figure*}
\vspace{-10pt}
\centerline{
\hspace{2pt}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.35\textwidth]{./figures/F8a.pdf}}
\hspace{-12pt}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.35\textwidth]{./figures/F8b.pdf}}
\hspace{-12pt}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.35\textwidth]{./figures/F8c.pdf}}
}
\caption{Simulated power spectra of H$\alpha$ emission from galaxies as it will be observed by the IM surveys (black dashed lines).
The solid lines show the theoretical clustering power spectrum associated with each survey.}
\label{fig:ps_Ha}
\end{figure*}
Power spectrum analysis is the most common statistic with which intensity maps are studied.
Figure~\ref{fig:ps_Ha} shows the power spectra of simulated H$\alpha$ emission from galaxies at different redshifts. The theoretical H$\alpha$ power
spectrum is also presented in this figure in order to show this line power at large scales. The power spectra in Figure~\ref{fig:ps_Ha}
correspond to emission from galaxies in DM halos with masses above $6.5\times 10^9\, {\rm M_{\odot}}$. Our theoretical estimates indicate that even at $z\sim 5$, where the emission from low mass halos is more important, their relative contribution to the total H$\alpha$ intensity is of the order of 5\%. Therefore, there should be no meaningful loss in power due to the lack of emission from lower mass halos in our simulations.
The IGM H$\alpha$ emission is characterized by a small bias (from our simulations we obtain $b_{{\rm H}\alpha}^{\rm IGM}\sim 1$, consistent
with most of the emission originating in low-density gas). Given the low intensity of the IGM emission, its power spectrum amplitude
is much smaller than that of galaxy emission. We, therefore, do not show the IGM power spectrum in Figure~\ref{fig:ps_Ha}.
\section{Surveys of H$\alpha$ emission}
\label{sec:Surveys}
Here we make predictions for four instruments: Euclid and WFIRST that will measure H$\alpha$ emission from resolved galaxies, and SPHEREx and CDIM that will operate as
H$\alpha$ intensity mapping probes. The properties of the spectroscopic surveys of these instruments are listed in Table \ref{tab:instruments}.
Euclid is a space mission under development by the European Space Agency that should start observing galaxies in 2020.
WFIRST is a competing experiment being developed by NASA, also projected to be launched in 2020.
In addition to their spectroscopic capabilities, the two satellites will carry out photometric surveys that will cover slightly higher
frequency/redshift ranges than their
spectroscopic surveys. More importantly, the photometric surveys will make it possible to probe galaxies to much fainter magnitudes. Euclid will reach $\sim$24 mag in $5\sigma$
detections of point sources using three filters (Y, J and H) and WFIRST will reach close to 26.5 mag using four filters (Y, J, H and F184).
The frequency resolution of these photometric surveys is enough to probe and
constrain cosmology using the BAO scale. However, due to the short time scales associated with the astrophysical
processes at hand, H$\alpha$ emission is better constrained using the spectroscopic surveys of these satellites.
The SPHEREx instrument is meant to be used as an explorer and so it has a broad range of science goals
spanning from cosmic inflation via non-Gaussianity to galaxy evolution and
Galactic ices \citep{2016Dore}. On the other hand, the CDIM instrument is a NASA Probe that focuses on greatly improving our knowledge of galaxy
formation and evolution. This mission's main objective is to probe galaxies and IGM emission during the Epoch of Reionization. The SPHEREx mission
is at an advanced stage of formulation and was selected for a Medium-Class Explorers Mission concept study by NASA in 2017. It will be a shallow
all-sky survey but with deep imaging data collected at the ecliptic poles, which are
imaged at every orbital pass. In this study we will only consider the SPHEREx deep survey since its field of view is wide enough to constrain H$\alpha$ emission. CDIM remains a concept study and its
exact survey strategy as well as final details are yet to be determined.
Intensity mapping survey data consist of three-dimensional intensity maps, where each observational voxel
is set by a certain angular resolution and covers a wide range of frequency. Given the large volume covered by each observed voxel, it will contain emission from several unresolved sources. Instead of resolving galaxies, IM surveys aim at detecting the overall emission from both bright and faint sources, as
well as extended emission sources. SPHEREx and CDIM plan to detect H$\alpha$ emission in IM mode in order to be
unconstrained by flux limits and therefore probe the full signal. In these surveys, the recovery of the target signal, i.e., the intensity and spatial fluctuations of H$\alpha$ emission, is usually done through power spectrum analysis. However, the same instruments can also operate as galaxy surveyors. As such, they will characterize the galaxies down to the flux limit listed in Table~\ref{tab:instruments}. In this case, due to time limitations, SPHEREx and CDIM will observe over smaller areas.
IM surveys are a better choice for cosmological purposes since they can cover large volumes in a short time. On the other hand, in traditional galaxy survey mode, SPHEREx and CDIM instruments will be able to map bright interloping lines. Furthermore, in the case where the foreground removal strategies in intensity maps are not successful, these instruments will still be able to use the signal from resolved galaxies for astrophysical purposes.
The large resolving power of galaxy surveys will allow them to provide detailed information on the emission from
different galaxy populations. The Euclid and WFIRST galaxy surveys will probe the H$\alpha$ LF.
On the other hand, SPHEREx and CDIM IM surveys will provide measurements of the integrated H$\alpha$ intensity.
Therefore, IM surveys will probe the entire galaxy population and
suffer far less from selection biases than galaxy surveys.
An additional advantage of IM surveys is that the
redshift of the source of emission is obtained automatically. This
makes them particularly useful for probing the time evolution of global galaxy properties.
Moreover, given the large frequency range spanned by the SPHEREx and CDIM surveys, they will be able to target several emission lines and to probe
galaxy emission at higher redshifts than Euclid and WFIRST.
In particular, the CDIM surveys will cover the high frequencies corresponding
to Ly$\alpha$ emission from the EoR. CDIM also has the advantage of having a frequency resolution that makes it
possible to separate between emission in the H$\alpha$ line and
in the nearby NII doublet lines.
The potential of each type of instrument to probe galaxies' H$\alpha$ emission and to constrain different galaxy properties as well as their
redshift evolution is described in detail in Appendix~\ref{app:constraints}.
\section{Contamination in H$\alpha$ observations}
\label{sec:LineContam}
In both galaxy surveys and IM surveys, observations of H$\alpha$ emission will be contaminated by emission from interloping lines. This contamination needs to be
identified, evaluated and, if necessary and possible, removed.
Euclid and WFIRST low-resolution spectra will be fitted
with galaxy SED templates, which can, if the signal-to-noise ratio is high, be used to identify the observed line. This will, however, not
always be possible due to the narrow frequency range covered by these instruments' filters. Also, the lack of information on the redshift of the source might result
in line confusion. Moreover, neither of these two surveys has enough frequency resolution to separate the peak of the H$\alpha$ line from that of the NII line doublet.
Nevertheless, the bright galaxies detected by these surveys are important to constrain the physical properties of the H$\alpha$ emitters. These galaxies will also be
useful to determine the role of the environment in the extinction suffered by H$\alpha$ emitters.
Line contamination is also a problem for IM surveys, given that they are intrinsically characterized by detecting emission from all types of
unresolved sources.
The amount of contribution from line contaminants depends on the target line and on the observed frequency. Therefore, it has to be evaluated
according to the frequency range covered by the survey. In the
case that this contamination is higher or comparable to the signal from the target line, part of it needs to be removed (masked) from the observational maps.
Generally speaking, the H$\alpha$ intensity maps will be contaminated by several strong interloping lines; hence, their contamination needs to be accounted for.
The observational voxel size will determine the percentage of voxels that need to be masked in order to efficiently reduce the contamination in the maps.
In the case where a high portion of the voxels is masked ($\gtrsim10\%$), the recovery of the target signal might require a correction
for the loss of flux in the target line. Therefore,
the appropriate contamination removal strategy for an intensity mapping study is highly dependent on the survey properties.
In Subsection~\ref{sec:contamination1} we model the intensity and dust extinction suffered by each of the line contaminants. For the
estimation of the observed intensity, we take into account that galaxy surveys will mainly observe bright systems, whereas
IM surveys are expected to observe the total galaxy population. We continue by estimating the power spectra of line contamination, which is
relevant for IM surveys (subsection~\ref{sec:contamination2}). We find that the contamination power spectrum is of the same order as that of the signal.
Using simulations we determine the masking fractions required to reduce the
contamination power spectra to a level well below the predicted H$\alpha$ signal. We also present estimates of the masking fractions associated with
increasing flux cuts (subsection~\ref{sec:contamination3}).
We assume that the voxels that need to be masked, corresponding to bright foreground emission, will be identified independently by a galaxy survey.
In the case of the SPHEREx and CDIM missions, the foregrounds survey can be performed by the same instrument operating in a different mode.
Additionally, intensity maps will suffer from continuum emission originating in the stellar and AGN continua, as
well as from free-free, free-bound, two-photon and dust emission in the ISM. In the frequency range relevant
for H$\alpha$ IM, this continuum contamination is dominated by stellar emission, since AGN emission
only dominates the extragalactic continuum background at higher frequencies/energies \citep{2016Silva}.
Continuum emission is expected to vary more smoothly with frequency compared to the signal from emission lines, which should quickly fluctuate in the frequency direction \citep{2015MNRAS.447..400A}. This smoothness is used in IM studies
to fit and remove the continuum emission. Therefore, we assume that continuum emission can easily be fitted out of these maps and focus only on line contamination. The case for galactic contamination is similar, since these foregrounds are fitted in frequency and removed in the same way as extragalactic continuum foregrounds.
\subsection{Observed intensity of line contaminants}
\label{sec:contamination1}
In H$\alpha$ intensity maps, at $z<5.0$, the main contaminants include: the ionized oxygen [OII] 372.7 nm and [OIII] 500.7 nm
lines, and the hydrogen H$\beta$ 486.1 nm and \lya 121.6 nm lines. We also consider the contamination by
the [NII] 658.3 nm/654.8 nm and [SII] 671.7 nm/673.1 nm doublet lines.
Table~\ref{tab:line_cont} presents a list of the contaminating lines and the
redshifts from which they originate, in comparison to the redshift of the H$\alpha$ line. Given
the very small frequency separation, the NII doublet lines originate from approximately the same redshift as the H$\alpha$ line and so they are not included in this table.
In order to model the contamination by each interloping line, we take the published relations between line luminosity
and SFR and then compare/adjust them to the existing LF constraints, when possible. The intensity of these contaminating lines
can be calculated by integrating over the SFR or the line LF, in the same way as was done for the H$\alpha$ line, using Equation~\ref{eq:I_LF}.
In most cases, the intensities of the interloping lines are only constrained at low redshifts, and the extrapolation of their LFs to higher redshifts is uncertain.
The factors involved in the redshift evolution of the line LFs include the expected decrease in galaxy metallicity,
the lower ionization state of the ISM gas, the possible existence of low-density channels which would
affect the galaxy extinction rates independently of the galaxy dust content, and others.
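For illustration, the following minimal Python sketch evaluates this type of integral for a single line, assuming a Schechter form for the LF and the same volume factors used later in the shot-noise expression of Subsection~\ref{sec:contamination2}; the Schechter parameters, the integration limits and the example redshift are placeholders for illustration only, not the fits adopted in this work.
\begin{verbatim}
import numpy as np
import astropy.units as u
import astropy.constants as const
from astropy.cosmology import Planck15 as cosmo

# Illustrative Schechter parameters (placeholders, not the LF fits used in the text)
phi_star = 1.0e-3      # Mpc^-3
L_star   = 1.0e42      # erg/s
alpha    = -1.35       # faint-end slope

def schechter(L):
    """dn/dL in Mpc^-3 (erg/s)^-1, for L in erg/s."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

def mean_line_intensity(z, lambda_rest, L_min=1e38, L_max=1e45):
    """Mean nu*I_nu of a line, I_nu = int dL (dn/dL) L/(4 pi D_L^2) y D_A^2,
    returned in erg/s/cm^2/sr."""
    # Luminosity-weighted LF integral (comoving emissivity), done in log-space.
    lnL = np.linspace(np.log(L_min), np.log(L_max), 512)
    L = np.exp(lnL)
    emissivity = np.trapz(schechter(L) * L * L, lnL) * u.erg / u.s / u.Mpc**3
    D_L = cosmo.luminosity_distance(z)
    D_A = cosmo.comoving_distance(z)          # comoving transverse distance (flat cosmology)
    y = (lambda_rest * (1 + z)**2 / cosmo.H(z)).to(u.Mpc * u.s)   # d(chi)/d(nu)
    nu_obs = (const.c / (lambda_rest * (1 + z))).to(u.Hz)
    I_nu = emissivity / (4 * np.pi * D_L**2) * y * D_A**2 / u.sr
    return (I_nu * nu_obs).to(u.erg / u.s / u.cm**2 / u.sr)

# Example: H-alpha at z = 1 (illustrative only)
print(mean_line_intensity(1.0, 656.28 * u.nm))
\end{verbatim}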
We now explore the contamination from each line in further detail.\\
\begin{table*}
\centering
\caption{Contaminant background lines in H$\alpha$ intensity maps}
\begin{tabular}{l c c c c c c c}
\hline\hline
Line & $\lambda_0\, {\rm [nm]}$ & $z$($\lambda=656.28\, {\rm nm}$) & $z$($\lambda=721.91\, {\rm nm}$) & $z$($\lambda=1181.30\, {\rm nm}$) & $z$($\lambda=1968.84\, {\rm nm}$) & $z$($\lambda=2625.12\, {\rm nm}$) & $z$($\lambda=3937.68\, {\rm nm}$)\\
\hline
${\rm H}_{\alpha}$ & 656.28 & 0.00 & 0.20 & 0.80 & 2.00 & 3.00 & 5.00 \\
SII & 671.7 & ... & 0.07 & 0.76 & 1.93 & 2.91 & 4.86 \\
OIII & 500.7 & 0.31 & 0.57 & 1.36 & 2.93 & 4.24 & 6.86 \\
${\rm H}_{\beta}$ & 486.1 & 0.35 & 0.62 & 1.43 & 3.05 & 4.40 & 7.10 \\
OII & 372.7 & 0.76 & 1.13 & 2.17 & 4.28 & 6.04 & 9.57 \\
Ly$\alpha$ & 121.6 & 4.40 & 5.48 & 8.71 & 15.19 & 20.59 & 31.38 \\
\hline
\end{tabular}
\label{tab:line_cont}
\end{table*}
\textbf{NII contamination:} The relative contribution from the NII doublet to the H$\alpha$ plus
NII lines scales with the equivalent width (EW) of this peak and is usually in the range 10\% to 50\% \citep{2009Sobral,2012Sobral}.
As a reference, the average value of the NII line contribution at $z=1.47\pm0.02$ and for H$\alpha$ fluxes above
$7\times 10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ is $\sim$ 22\% of the sum of the H$\alpha$ and the NII line intensities \citep{2012Sobral}.
For galaxy surveys, we assume that the NII line contamination can be estimated from the H$\alpha$ EW following the
relation observed in SDSS galaxies and parameterized by \citet{2012Sobral} as:
\begin{eqnarray}
{\rm log\left([NII]/H\alpha\right)}&=&-0.924+4.802E-8.892E^2\nonumber \\
&+&6.701E^3-2.27E^4+0.279E^5,
\end{eqnarray}
where $E{\rm = log\left[ EW_0 (N_{\rm II}+H\alpha) \right]}$.
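For reference, this parameterization is straightforward to evaluate numerically; the short Python sketch below implements the polynomial above and returns both the [NII]/H$\alpha$ ratio and the NII fraction of the blended peak (the example equivalent width is an arbitrary illustrative value).
\begin{verbatim}
import numpy as np

def nii_over_halpha(ew0_blend):
    """[NII]/H-alpha ratio as a function of the rest-frame EW (in Angstrom)
    of the blended (NII + H-alpha) peak, using the polynomial quoted above."""
    E = np.log10(ew0_blend)
    log_ratio = (-0.924 + 4.802*E - 8.892*E**2
                 + 6.701*E**3 - 2.27*E**4 + 0.279*E**5)
    return 10.0**log_ratio

# Example: a blended rest-frame EW of 175 Angstrom (arbitrary value)
ratio = nii_over_halpha(175.0)
print(ratio, ratio / (1.0 + ratio))  # [NII]/Halpha and NII/(NII + Halpha)
\end{verbatim}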
However, IM surveys do not resolve individual galaxies, and so this parameterization cannot be applied. Given that, in intensity maps, each voxel contains
emission from several galaxies, our default assumption is that the NII line contributes $22\%$ of the sum of the H$\alpha$ and NII line intensities.
A similar NII line contribution is
predicted by Galaxy Evolutionary Synthesis (GALEV) models for galaxies with solar metallicities \citep{2003Anders}. For sub-solar metallicities
the NII contribution to the NII plus H$\alpha$ line intensities should be smaller.
One of the main advantages of the CDIM surveys is that they will have sufficient frequency resolution
to distinguish between the H$\alpha$ line emission and the NII line doublet emission;
of the surveys considered in this work, only CDIM will be able to do this.
It should be noted that the evolution of the ${\rm NII/H\alpha}$ ratio at $z\gtrsim1.47$
is uncertain, due to the lack of observations. Therefore, the assumption that this ratio is fixed will add a large systematic error to the estimation of the H$\alpha$ signal.
For further discussion on this topic see Appendix~\ref{subsec:Z_ion_param}. \\
\textbf{SII contamination:} Most surveys also do not have good enough frequency resolution to separate the peak of the H$\alpha$ line
from that of the SII line doublet. Therefore, one needs to estimate the average contribution of this doublet line,
using the same method we applied to the NII line.
The strength of the SII doublet has been found to vary between $\sim10-60\%$ of the H$\alpha$ emission in SDSS star-forming galaxies
\citep{2006Kewley}. However, in most cases the contribution of the SII lines to the (H$\alpha$ + NII + SII) peak is $\sim$12\%. This
value is taken from \citet{2016Marmol}, who evaluated it using H$\alpha$ selected galaxies in the redshift range
$1.23\lesssim z\lesssim 1.49$ and with an average equivalent width of $175\pm14$\,\AA.
The contribution from these doublet lines should be estimated directly from the survey data. Whenever that is not possible, we assume
that the SII doublet line intensity is of the order of 12\% of the H$\alpha$ + NII+ SII line intensity. We note that GALEV models for galaxies with solar
metallicities predict that the SII line intensity corresponds to 9\% of the total H$\alpha$ + NII + SII line flux \citep{2003Anders}.\\
\textbf{H$\beta$ contamination:} We estimate the H$\beta$ intrinsic luminosity from Eq.~\ref{eq:Lum_Ha} and the recombination emission line ratios, which yields,
\begin{equation}
L^{\rm int}_{\rm H\beta}\, ({\rm erg\, s^{-1} })\, =\, 4.43\times 10^{40} SFR\, ({\rm M_{\odot}\, yr^{-1}}).
\label{eq:Lum_Hb}
\end{equation}
Based on the \citet{2000Calzetti} extinction curve and the dust attenuation by OB galaxies in the COSMOS survey up to $z\, \sim\, 6.5$ \citep{2015Scoville},
the extinction suffered by the \Hb line is of the order of $A_{\rm H\beta}=1.35\times A_{\rm H\alpha}$. We use this to estimate the average \Hb intensity.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F9.pdf}
\caption{Intensity of observed OII line emission as a function of redshift. The lines show the OII intensity derived from our SFRD model
(solid line and dashed line) and from
the MD+14 SFRD model (dotted line). The solid and the dotted black lines were corrected for a
dust extinction of $A_{\rm OII}\,=\, 0.62\, {\rm mag}$, while the blue dashed line was corrected for a dust extinction of $A_{\rm OII}\, =\, 1.0\, {\rm mag}$.
The dots correspond to observational points from UKIDSS \citep{2013Drake} and HiZELS \citep{2015Khostovan}.}
\label{fig:I_OII}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.5\textwidth]{./figures/F10.pdf}
\caption{Intensity of observed OIII line emission as a function of redshift. The black solid and the black dotted lines
show, respectively, the OIII intensity derived from our SFRD model and from
the MD+14 SFRD model. To the intrinsic line intensities, we applied a dust extinction of $A_{\rm OIII}\, =\, 1.35\, {\rm mag}$.
The dots correspond to observational points from
the UKIDSS Ultra Deep Survey Field \citep{2013Drake} and from HiZELS \citep{2015Khostovan}. Also shown in the
cyan lines is the H$\beta$ intensity obtained using Equation~\ref{eq:Lum_Hb}, to which we applied
a dust extinction of $A_{\rm H\beta}\, =\, 1.35\, {\rm mag}$. The cyan solid line assumes our SDRD model while the cyan dotted lines uses the MD+14 SFRD model.}
\label{fig:I_OIII}
\end{centering}
\end{figure}
\textbf{OII contamination:} The OII line luminosity can be estimated from the galaxy SFR using existing fits to the
observational data. As a reference, we use the relation from
\citet{1998KennicuttAraa} based on a ratio of 0.57 between the OII and H$\alpha$ fluxes observed in local galaxies:
\begin{equation}
L_{\rm OII}\, ({\rm erg\, s^{-1}})\, =\, (7.18\pm 2.2) \times 10^{40}\, SFR\, ({\rm M_{\odot}\, yr^{-1}}).
\label{eq:LOII_SFR}
\end{equation}
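To make explicit how the SFRD-based curves in Figure~\ref{fig:I_OII} are constructed, the Python sketch below converts a cosmic SFRD into an observed OII intensity through Equation~\ref{eq:LOII_SFR}, using $\nu I_\nu = c\,\epsilon(z)/[4\pi H(z)(1+z)]$ for the mean intensity of a line with comoving emissivity $\epsilon$; the SFRD parameterization shown is the MD+14 fitting function and is used here purely as an illustration of the procedure, not as the exact model adopted in this work.
\begin{verbatim}
import numpy as np
import astropy.units as u
import astropy.constants as const
from astropy.cosmology import Planck15 as cosmo

def sfrd_md14(z):
    """Cosmic SFR density in Msun/yr/Mpc^3 (Madau & Dickinson 2014 fit),
    used here only as an illustrative stand-in for the SFRD models of this work."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def nuInu_OII(z, A_OII=0.62):
    """Observed nu*I_nu of the OII line in erg/s/cm^2/sr:
    nu*I_nu = c * epsilon(z) / (4 pi H(z) (1+z)), with
    epsilon = 7.18e40 * SFRD (Eq. above), dimmed by A_OII magnitudes of extinction."""
    eps = 7.18e40 * sfrd_md14(z) * u.erg / u.s / u.Mpc**3   # comoving line emissivity
    nuInu = (const.c * eps / (4 * np.pi * cosmo.H(z) * (1 + z))).to(
        u.erg / u.s / u.cm**2) / u.sr
    return nuInu * 10**(-0.4 * A_OII)

for z in (0.5, 1.0, 2.0):
    print(z, nuInu_OII(z))
\end{verbatim}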
In Figure~\ref{fig:I_OII}, we compare the OII intensities predicted by SFRD based models with observational
constraints from the UKIDSS \citep{2013Drake} and HiZELS \citep{2015Khostovan} surveys. At low $z$, the theoretical
curves are systematically above the predictions from \cite{2013Drake}. However, at $z\sim1.5-2$ our predictions fit quite well. As indicated
by the more recent measurements from \cite{2017Matthee}, this reflects a lower OII$/$H$\alpha$ ratio at low $z$. The evolution of this ratio can be due
to an increasing dust extinction at high $z$. It can also simply be due to technical issues with observations in the local Universe given that the
surveys only measure the central kpc region of the galaxies \cite{2017Matthee}.
The high end of the OII LF is observationally probed up to a redshift of $z\, \sim\,5$. However, this line intensity is only well
constrained up to $z\, \sim\, 1.5$ \citep{2015Khostovan}. Recent observational
constraints, reaching $z\sim4.5$, predict this line EW to increase up to
$z\sim3.5$ and then to decline as a result of a higher ionization state of the ISM \citep{2016Khostovan}.
The OII intensity in this figure assumes a low
dust extinction of $A_{\rm OII}\, =\, 0.62$ mag \citep{2013Hayashi.Sobral}. This is the average dust extinction
of the observed OII emitting galaxies. However, the relevant extinction for IM studies should be that of the full
galaxy population, which should be closer to $A_{\rm OII}\, =\, 1$ mag, see discussion in \cite{2013Hayashi.Sobral}.\\
\textbf{OIII contamination:} Similarly, the OIII line luminosity can be estimated from the galaxy SFR, using existing fits to observational data. We take the
relation from \citet{2007Ly.Malkan}, based on observations in the $z\sim 0.07-1.47$ redshift range and given by:
\begin{equation}
L_{\rm OIII}\, ({\rm erg\, s^{-1}})\, =\, (1.32\pm 2.7) \times 10^{41}\, SFR\, ({\rm M_{\odot}\, yr^{-1}}).
\label{eq:LOIII_SFR}
\end{equation}
In Figure~\ref{fig:I_OIII}, we compare the OIII intensities predicted by SFRD based models with observational
constraints from the UKIDSS \citep{2013Drake} and HiZELS \citep{2015Khostovan} surveys. To the OIII and the H$\beta$ line intensities, we applied a dust extinction
of $A_{\rm OIII}=A_{\rm H\beta}=1.35\, {\rm mag}$, which corresponds to $A_{\rm H\alpha}\, =\,1\, {\rm mag}$ \citep{2015Khostovan}.
As in the case of the OII line, our theoretical OIII line curves fit better with observational LFs at $z\gtrsim1.5$. We note, however,
that the observational line ratios
OIII/OII and OIII/\Hb are larger in high $z$ galaxies \citep{2017Castellano}.
Also, the EW of the OIII line should increase towards galaxies with small
stellar masses. Therefore, these high EWs are increasingly important towards high $z$, where galaxies are on average
smaller \citep{2016Khostovan}. This indicates that the ISM in high $z$
galaxies is characterized by a higher ionization parameter than that of low $z$ galaxies.
With the few currently available observations it is still unclear whether these differences are due to redshift evolution
of the ratios between lines, dust extinction or technical differences in observations, such as aperture effects \citep{2017Matthee}. Nonetheless, in the redshift
range important for this study, predictions based on observations fit well with those from our SFRD based model. Therefore, we estimate the contamination
by OIII line emitters using the latter model. \\
\textbf{\lya contamination:} \lya from high redshift galaxies, $z > 4.4$, contaminates H$\alpha$ line observations in the redshift range $z\sim 0\, -\, 5$.
We take the \lya LFs at $z\sim5.7$ and $z\sim6.6$, from \citet{2016Santos.Sobral}, and integrate them down to
$L_{\rm Ly\alpha}=10^{38}\, {\rm erg\, s^{-1}}$. For the different possible values of the low luminosity slope of the \lya LF, the
observed line intensity is between $(0.13-3.9)\times10^{-8}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}$ at $z=5.7$
and between $(0.036-1.1)\times10^{-8}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}$ at $z=6.6$. This large uncertainty
arises because, at these redshifts, the \lya LF is only observationally constrained down to a luminosity of $\sim10^{42.5}\, {\rm erg\, s^{-1}}$.
The \lya intensity can also be estimated from the SFR as
\begin{equation}
L_{\rm Ly\alpha}\, ({\rm erg\, s^{-1}})\, =\, 1.1 \times 10^{42} \,SFR\, ({\rm M_{\odot}\, yr^{-1}}).
\label{eq:Lya_SFR}
\end{equation}
This observational relation was obtained using galaxies in the local Universe \citep{1998KennicuttAraa}.
Using our SFR model, the \lya line intrinsic intensity is then $6\times10^{-8}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}$
at $z = 5.7$ and $2.6\times10^{-8}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}$ at $z = 6.6$.
The observed line intensity was obtained by correcting the intrinsic \lya intensity for the average \lya photon escape fraction ($f_{\rm esc}^{\rm Ly\alpha}$).
Due to the lack of good observational constraints on the \lya dust extinction in galaxies, we take $f_{\rm esc}^{\rm Ly\alpha}$ from the high-resolution
simulations of \citet{2014Yajima}. As a reference, this study predicts the average value of this escape fraction to be $f_{\rm esc}^{\rm Ly\alpha}(z=5.5)\sim0.5$.
The \lya intensity estimated from LFs is smaller than that estimated using our SFR model. LFs are obtained using galaxy survey data which are
unable to detect scattered \lya photons around the galaxy. These photons will, however, be detectable by IM surveys. Therefore, we estimate
this line contamination in intensity maps using our SFR estimate.
There will be additional contamination in the H$\alpha$ intensity
maps from \lya emission originating from recombinations, collisional
excitations and scattering of Lyman-n photons in the IGM \citep{2013Silva, 2016Comaschi}.
Even for the higher estimates for this scattered emission obtained by \cite{2016Comaschi}, its power spectrum will
be considerably below the signal from galaxies.
Moreover, the bias associated with the IGM emission is very close to unity, so in power spectrum analysis, these IGM
contributions are not important. For simplicity, we,
therefore, ignore these contributions to the \lya power spectrum.
\begin{figure*}
\vspace{-4pt}
\centerline{
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11a.pdf}}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11b.pdf}}
}
\vspace{-1pt}
\centerline{
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11c.pdf}}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11d.pdf}}
}
\vspace{-1pt}
\centerline{
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11e.pdf}}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.50\textwidth]{./figures/F11f.pdf}}
}
\caption{
Left panels: Theoretical power spectra of contamination by background lines in H$\alpha$ intensity maps at $z_{{\rm H}\alpha}\, =\, 0.2, 0.8, 2.0$
(from top to bottom). The power spectra from the several interloping lines were scaled to the redshift of the H$\alpha$ line. The upper three
dotted yellow lines correspond to \lya emission from galaxies.
The IGM \lya emission power spectrum (bottom three dotted yellow lines) was calculated using an intensity with double the value of the intensity
from galaxies, a bias with respect to the underlying density of one, and no shot noise.
Right panels:
Simulated power spectra of contamination by background lines in H$\alpha$ intensity maps at $z_{{\rm H}\alpha}\, =\, 0.2, 0.8, 2.0$
(from top to bottom). These
plots were obtained from our simulation and assume the cell resolution of the CDIM survey. The result would be very similar for a survey with the
spatial and frequency resolution of SPHEREx. The top dotted line denotes
the total contamination power spectrum (by the OIII, OII, and H$\beta$ lines), while the remaining dotted lines denote the contamination power
spectra, after masking cells with fluxes in one of the foreground lines above a given threshold. For $z=0.2$ and $z=0.8$ the flux thresholds
for masked cells are $1.2\times10^{-16}\, {\rm erg\, s^{-1}\, cm^{-2}}$ (middle dotted line) and $5.0\times10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ (bottom dotted line).
For $z=2$ the flux thresholds for masked cells are $5.0\times10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ (middle dotted line) and
$1.0\times10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ (bottom dotted line).}
\label{fig:ps_lines}
\end{figure*}
\subsection{Power spectra of line contamination}
\label{sec:contamination2}
Intensity maps can be analyzed in several ways. However, given the low signal-to-noise ratio in the maps expected in many experiments, the signal
will most likely be detected statistically. The most obvious statistic to use is the intensity power spectrum \cite[see e.g.,][]{2012Pritchard}.
The target line and the interloping lines are emitted from different redshifts and so their emission originates in different volumes.
One needs to account for a volume conversion factor when estimating the contamination power spectra.
We estimate the contamination by background lines in the observed H$\alpha$ power spectrum as a function of the perpendicular and parallel components of the wavevector k,
using the \cite{2014GongLya} formula, given by:
\begin{eqnarray}
P_{\rm obs}(k_{\perp},k_{\parallel})&=& \left[ P^{\rm clus}_{\rm line}(z_{\rm f},k_{\rm f}) + P_{\rm line}^{\rm shot}(z_{\rm f},k_{\rm f}) \right] \nonumber \\
&\times& \left(\left[ \frac{\chi(z_{\rm s})}{\chi(z_{\rm f})}\right]^2 \left[\frac{y(z_{\rm s})}{y(z_{\rm f})}\right] \right),
\label{eq:ContPSshift}
\end{eqnarray}
where the clustering power spectrum is,
\begin{eqnarray}
P^{\rm clus}_{\rm line}(z_{\rm f},k_{\rm f})&=& \bar{I}^2_{\rm f}(z_{\rm f}) b^2_{\rm f}(z_{\rm f}) P_{\delta\delta}(z_{\rm f},k_{\rm f}).
\end{eqnarray}
The indexes $s$ and $f$ indicate the source, i.e.,
${\rm H}\alpha$, or the foreground/background line redshifts, respectively. The parameter $\chi$ corresponds to the comoving distance,
$y(z)\, =\, d\chi/d\nu\, =\, \lambda_{\rm line}(1+z)^2/H(z)$ converts the observed frequency interval into a comoving length, and
$|\vec{k}_{\rm f}|\, =\, \left[ (\chi_{\rm s}/\chi_{\rm f})^2 k^2_{\perp}\, +\, (y_{\rm s}/y_{\rm f})^2 k^2_{\parallel} \right]^{1/2}$ is the three-dimensional k vector at
the redshift of the foreground/background line. $P_{\delta\delta}$ is the matter power spectrum and $b_{\rm line}$ is
the bias between the interloping line luminosity and the dark matter fluctuations.
Finally, the shot noise power spectrum due to the discrete nature of galaxies is:
\begin{equation}
P^{\rm shot}_{\rm line}(z)=\int^{M_{\rm max}}_{M_{\rm min}} dM \frac{dn}{dM} \left[ \frac{L(M,z)}{4 \pi D^2_{\rm L}}y(z)D^2_{\rm A} \right]^2.
\end{equation}
Notice that the contaminant power spectrum shown in Eq.~\ref{eq:ContPSshift} experiences a scale-dependent shift, as well as an amplitude modification.
This change will increase or decrease the measured power spectrum. On the scales measured by the IM surveys,
the contamination by background lines will be attenuated relative to the H$\alpha$ power spectrum, whereas the foreground interloping
lines will have the opposite effect. Therefore, most studies ignore the influence of background lines \cite[see e.g.,][]{2014GongLya,2017Fonseca}. However, we show here, after a careful estimation, that some of the background lines cannot be ignored.
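The geometric part of this projection is compact enough to write out. The Python sketch below maps a contaminant clustering-plus-shot-noise power spectrum from its emission redshift into the comoving frame of the H$\alpha$ map, following Equation~\ref{eq:ContPSshift}; the matter power spectrum used here is a crude placeholder (in practice it would be taken from CAMB), and the intensity, bias and shot-noise values are arbitrary illustrative numbers rather than the model predictions of this work.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def y_of_z(z, lambda_rest):
    """d(chi)/d(nu): comoving length per unit observed frequency, in Mpc/Hz."""
    return (lambda_rest * (1 + z)**2 / cosmo.H(z)).to(u.Mpc * u.s).value

def P_dd(k, z):
    """Placeholder matter power spectrum in Mpc^3 (use CAMB output in practice)."""
    return 2.0e4 * (k / 0.1)**(-1.5) / (1 + z)**2

def observed_contaminant_power(k_perp, k_par, z_s, z_f, lam_s, lam_f,
                               I_f, b_f, P_shot_f):
    """Project P_line(z_f) into the frame of the target map at z_s (Eq. ContPSshift)."""
    chi_s = cosmo.comoving_distance(z_s).value
    chi_f = cosmo.comoving_distance(z_f).value
    y_s, y_f = y_of_z(z_s, lam_s), y_of_z(z_f, lam_f)
    # Wavevector at which the contaminant power spectrum is actually evaluated.
    k_f = np.sqrt((chi_s / chi_f)**2 * k_perp**2 + (y_s / y_f)**2 * k_par**2)
    P_line = I_f**2 * b_f**2 * P_dd(k_f, z_f) + P_shot_f
    return P_line * (chi_s / chi_f)**2 * (y_s / y_f)

# Example: OIII emitted at z_f = 0.57 seen in an H-alpha map centred at z_s = 0.2
# (intensity, bias and shot noise below are arbitrary illustrative values).
print(observed_contaminant_power(0.1, 0.1, 0.2, 0.57,
                                 656.28 * u.nm, 500.7 * u.nm,
                                 1.0e-8, 1.2, 1.0e-19))
\end{verbatim}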
Left panels in Figure~\ref{fig:ps_lines} show the power spectrum of the most important background lines that can be confused with the H$\alpha$ emission.
The amplitude of the lines shown in this plot were calculated assuming a survey capable of detecting the full line emission. This is a reasonable
assumption for IM surveys. The matter
power spectrum was theoretically estimated using the publicly available code CAMB \citep{2000Lewis}.
Figure~\ref{fig:ps_lines} clearly shows that the contamination power spectrum from background lines is of the order of the H$\alpha$ power spectrum.
Therefore, it is clear that this contamination is not negligible and, hence, cannot be ignored. Namely, some of this background
contamination needs to be removed from observational maps in order to accurately recover the signal from H$\alpha$ emission.
We note that the amplitude of the \lya line contamination power spectrum is very small and can always be ignored. We show its
values only in the top left panel of Figure~\ref{fig:ps_lines}; in the lower panels it is omitted because it is even smaller.
\subsection{Contaminants masking fractions derived from simulations}
\label{sec:contamination3}
\begin{figure*}
\vspace{-10pt}
\centerline{
\hspace{2pt}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.5\textwidth]{./figures/F12a.pdf}}
\hspace{-12pt}
\resizebox{!}{!}{\includegraphics[angle=0,width=0.5\textwidth]{./figures/F12b.pdf}}
}
\caption{Intensity of several emission lines in the $( 0.8- 4.3)\times10^{14}\, {\rm Hz}$ frequency range (corresponding to a
wavelength range of ${\rm 3.75-0.70\, \mu m}$), as it will be observed by the SPHEREx and CDIM surveys in galaxy survey mode. These plots were obtained from
our simulation and assume both the cell resolution and the flux sensitivity of these surveys. Note that for the CDIM survey we assumed a
flux sensitivity of $10^{-18}\, {\rm erg\, s^{-1}\, cm^{-2}}$. Also, for the NII and SII doublet lines we simply assumed that,
following the discussion in Section~\ref{sec:contamination1}, they would have 0.282 and 0.176 of the intensity of the H$\alpha$ line, respectively.
The intensity of \lya emission is only shown up to $z\sim8$, given that the intensity of this line at higher $z$ is highly dependent on the
assumed hydrogen reionization history.}
\label{fig:I_foreg_nu}
\end{figure*}
We use the simulations described in Section~\ref{sec:Simulations} to estimate the flux cuts that we need to mask
contaminant line emission, in order to efficiently reduce their power spectra. With the same simulations we
calculate the percentage of observational voxels that need to be masked to achieve these flux cuts.
The right panel of
Figure~\ref{fig:ps_lines}, shows the residual line contamination after masking the brightest background emitters from the observational maps. Note that these power spectra
have a bit more power at small scales than the theoretical ones
given that our simulation does not have halos below
$M_{\rm min}= 6.5\times 10^{9}\, {\rm M}_{\odot}$ and so we are overestimating the shot noise.
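The masking step itself is simple to express. The Python sketch below applies a flux cut to a simulated contaminant cube, zeroes the corresponding voxels in the target map, and reports the masked voxel fraction together with the contaminant and H$\alpha$ flux removed; the lognormal random cubes, and all numbers in them, are crude stand-ins for the simulated light cones described in Section~\ref{sec:Simulations}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the simulated light cones: voxel fluxes in erg/s/cm^2 for the
# target line and for the sum of the contaminant lines (arbitrary parameters).
shape = (128, 128, 128)
f_halpha = rng.lognormal(mean=np.log(3e-18), sigma=1.0, size=shape)
f_contam = rng.lognormal(mean=np.log(1e-18), sigma=1.2, size=shape)

def apply_masking(f_target, f_foreground, flux_cut):
    """Zero every voxel whose foreground flux exceeds flux_cut; report the
    masked voxel fraction and the flux lost in each map."""
    mask = f_foreground > flux_cut
    frac_voxels = mask.mean()
    frac_contam_removed = f_foreground[mask].sum() / f_foreground.sum()
    frac_target_lost = f_target[mask].sum() / f_target.sum()
    return np.where(mask, 0.0, f_target), frac_voxels, frac_contam_removed, frac_target_lost

for cut in (1.2e-16, 5.0e-17, 1.0e-17):
    _, fv, fc, ft = apply_masking(f_halpha, f_contam, cut)
    print("cut = %.1e : voxels masked = %.2f%%, contaminant flux removed = %.2f%%,"
          " H-alpha flux lost = %.2f%%" % (cut, 100*fv, 100*fc, 100*ft))
\end{verbatim}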
The contamination in H$\alpha$ intensity maps is
considerably reduced at $z\sim0.2$ by masking voxels containing the signal from background emitters with fluxes above
$1.2\times 10^{-16}\, {\rm erg\, s^{-1}\, cm^{-2}}$. This would decrease the contamination power spectrum to about $5-25\%$ of its
initial amplitude. Moreover, it would only require masking less than $1\%$ of the observational voxels, both for SPHEREx and for CDIM.
At $z\sim0.8$, masking voxels contaminated by line fluxes above $1.2\times10^{-16}\, {\rm erg\, s^{-1}\, cm^{-2}}$ would only slightly
decrease the signal from background lines. This would leave the line contamination in the observed maps at the level of $10-20\%$ of
the total observed power spectrum. A flux cut of $5.0\times10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ would be much more successful at
reducing this contamination. However, it is very observationally challenging to individually detect all the galaxies responsible for this
emission down to this low flux level.
At $z\sim2.0$ and at a scale of $k\sim 0.1\, {\rm h\, Mpc^{-1}}$, the H$\alpha$ signal will be approximately $5$ times
higher than the background lines signal. Meaningfully decreasing this contamination (to about $50\%$ of its initial value) at small scales would also
require the masking of contaminant lines with fluxes down to $\sim 5.0\times10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ (see Fig.~\ref{fig:ps_lines}).
The percentage of voxels lost, assuming these flux cuts and the voxel size of the CDIM and SPHEREx surveys, is always at the $0.1-1\%$ level.
OIII emitters dominate the contamination in these maps. Therefore, detecting and masking only OIII line contaminants would be observationally easier and would still result in a meaningful reduction of the contamination in H$\alpha$ intensity maps.
The quoted masking fractions will only decrease the H$\alpha$
intensity by less than $1\%$. All these results were confirmed using our simulations.
An alternative masking procedure is the so-called blind masking, in which the brightest pixels are masked under the assumption that they belong to foreground galaxies. However, this type of masking will not work in the presence of background line contaminants, since, by doing so, a large fraction of the target line would also inevitably be masked. For the previously quoted flux cuts,
the percentage of H$\alpha$ signal that would be erased is of the order of $40-85\%$.
As a side note, contamination by interloping lines causes anisotropies in the angular power spectrum because the different lines originate at different redshifts.
The angular power spectrum can, therefore, be used to test if the masking procedure was successful.
\section{Alternative foreground removal/avoiding methods}
\label{sec:contamination4}
In the case where the foreground removal strategies in intensity maps are not successful, CDIM and SPHEREx will still be able to use their deep surveys (in which they function as traditional galaxy surveys and resolve galaxies) for astrophysical purposes.
In Figure~\ref{fig:I_foreg_nu}, we show estimates for CDIM and SPHEREx constraints on bright line intensities, in order to predict how useful they will be in tracing global astrophysical quantities when operating in galaxy survey mode.
Note that with the assumed flux limit for CDIM, the observed line intensity is very close to its total value.
Figure~\ref{fig:I_foreg_nu} indicates that at $z\sim 0-5$, the intensity of H$\alpha$ emission should be stronger than that of the other lines.
Therefore, when our target is the H$\alpha$ line and the contaminant signal cannot be efficiently removed, it might be easier to estimate the H$\alpha$
line intensity from the total detected intensity than through power spectrum analysis.
Figure~\ref{fig:I_foreg_nu} also shows that, for $z_{{\rm H}\alpha}\lesssim 2$, OIII line emitters are the strongest line contaminants.
The ratio of the intensity of the oxygen lines to that of the H$\alpha$ line emission is uncertain by a factor of about two at high redshift ($z\gtrsim5$).
The uncertainty is due to the expected decrease in galaxy metallicity with increasing redshift. This contrasts with a few of the observed high redshift galaxies, where
the line ratios of oxygen lines are actually quite high. Future observations with, for example, JWST will help to further constrain these lines intensities.
Another way of separating the different line contributions is by attributing a weight to the intensity of each line and using this
information together with the frequency of the emitting lines to iteratively determine the line intensities. This algorithm would take advantage of
the known/fixed separation between different lines and directly fit the spectral information of all the line
intensities, in a similar way to what was done for continuum infrared background data by \citet{2015Kogut}. However, the accuracy of this
method would be affected by the additional differential extinction suffered by each line.
Moreover, the possibility of using the angular information of each of the lines in an intensity map was explored by
\citet{2016Cheng} for the case of the CII emission line. This study uses an MCMC approach to recover the intensity and bias
information from each line. For the case of H$\alpha$ emission, however, this is a less promising approach, because there are several interloping lines originating from the same redshift. Moreover, the intensity of these lines is uncorrelated, whereas for CII the main contaminant lines, the different CO transitions, are highly correlated. This results in a much higher number of parameters for the case of the H$\alpha$ line and so the MCMC result would be very difficult to interpret. The solution for this problem would be to independently obtain strong constraints on some of the parameters and to properly determine the correlations between them prior to using the MCMC approach.
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.52\textwidth]{./figures/F13.pdf}
\caption{Fraction of H$\alpha$ emission powered by star formation detected by a survey as a function of its flux sensitivity. Solid
and dotted lines assume, respectively, an extinction in H$\alpha$ emission of $A_{{\rm H} \alpha}=1\,{\rm mag}$ and
$A_{{\rm H} \alpha}=0.475\,{\rm mag}$. The lines shown correspond to redshifts $z=0.2$, $z=0.81$ and $z=1.47$, from right to left. This
figure is based on observational LFs from \citet{2015Stroe,2013Sobral,2016Smit}.}
\label{fig:I_Ha_fcut}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[angle=0,width=0.52\textwidth]{./figures/F14.pdf}
\caption{Intensity of H$\alpha$ emission powered by star formation, detected by SPHEREx (blue dashed line) and by CDIM (red lines). The
red solid line assumes a flux limit of $4\times10^{-18}\, {\rm erg\, s^{-1}\, cm^{-2}}$, whereas the red dashed-dotted line assumes a flux
sensitivity of $1\times10^{-18}\, {\rm erg\, s^{-1}\, cm^{-2}}$.}
\label{fig:I_Ha_IM_perc}
\end{centering}
\end{figure}
Additionally, cross-correlating H$\alpha$ intensity maps with, for example, galaxy
surveys or HI radio data, as was suggested by \citet{2017Gong}, can be used to avoid foreground contamination. Since foregrounds contaminating H$\alpha$ emission
and HI 21 cm data are uncorrelated to first order, the cross-correlation power spectrum would make it possible to probe the signal from these two lines.
The contamination by higher-order correlations in this statistical measure is unfortunately not well explored yet.
Also, this procedure is limited to
the cases where an HI survey covering the same position in the sky and redshift range as the H$\alpha$ survey is available.
\section{Astrophysical constraints from galaxy surveys versus IM surveys}
\label{sec:Constraints}
While galaxy surveys can observe a small fraction of the Universe in great detail, IM surveys provide a global picture of our
Universe by blindly detecting emission from all types of sources.
IM surveys directly probe the global quantities, whereas we can only try to infer these quantities from the limited data provided by the
selection-biased and flux-limited galaxy surveys.
In Figure~\ref{fig:I_Ha_fcut} we show the percentage of H$\alpha$ emission powered by star formation that can be probed by a survey as a
function of its flux sensitivity. This figure is appropriate to infer the flux sensitivity limits required for a galaxy survey
to be able to detect a meaningful fraction of the H$\alpha$ emission originating from a given redshift. In Figure~\ref{fig:I_Ha_IM_perc} we show the intensity
of H$\alpha$ emission powered by star formation that can be detected (assuming flux sensitivity limits at least as good as the ones indicated
in Table~\ref{tab:instruments}) by the SPHEREx and CDIM instruments. We note that using a statistical analysis, such as the power spectrum analysis, would allow the
detection of emission below these flux sensitivity cuts.
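As an illustration of how a curve like the one in Figure~\ref{fig:I_Ha_fcut} is built, the Python sketch below computes the fraction of the total line intensity carried by galaxies brighter than a given flux cut, as a luminosity-weighted integral over a Schechter LF; the Schechter parameters and flux cuts are placeholders for illustration, not the observational LFs used in this work.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

# Illustrative Schechter parameters (placeholders, not the observational LFs used here)
phi_star, L_star, alpha = 1.0e-3, 1.0e42, -1.35   # Mpc^-3, erg/s, faint-end slope

def lum_weighted_integral(L_lo, L_hi=1e46, n=2048):
    """int dL phi(L) L over [L_lo, L_hi] for the Schechter form above."""
    lnL = np.linspace(np.log(L_lo), np.log(L_hi), n)
    L = np.exp(lnL)
    phi = (phi_star / L_star) * (L / L_star)**alpha * np.exp(-L / L_star)
    return np.trapz(phi * L * L, lnL)

def detected_fraction(flux_cut_cgs, z):
    """Fraction of the total line intensity carried by galaxies brighter than flux_cut."""
    D_L = cosmo.luminosity_distance(z).to(u.cm).value
    L_cut = 4.0 * np.pi * D_L**2 * flux_cut_cgs          # erg/s
    return lum_weighted_integral(L_cut) / lum_weighted_integral(1e36)

for f_cut in (1e-18, 1e-17, 1e-16):
    print(f_cut, detected_fraction(f_cut, z=1.0))
\end{verbatim}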
Throughout this study, we mainly focus on five central points that illustrate how each of the surveys described in Table~\ref{tab:instruments} is useful for
astrophysical purposes and for specifically probing H$\alpha$ emission. These points are: the survey collecting area, its
flux sensitivity, the ability to distinguish between emission powered by AGN or by star formation, the certainty in the identification of the emission line
responsible for the observed flux and finally, the accuracy to which the dust extinction suffered by the observed line can be determined.
We now discuss how well these points are attained for galaxy surveys, and then for IM surveys.
The properties of the Euclid and WFIRST surveys are similar, and for that reason, they will be able to detect the
same types of galaxies. This study shows that, due to their sensitivity limits, these instruments will mainly detect bright galaxies. It also shows that,
thanks to the planned large fields of view of these surveys, enough bright galaxies will be detected to further constrain the high luminosity end of the H$\alpha$ LF.
Galaxies that are bright in H$\alpha$ emission can vary a lot in their properties, such as in their mass or the virial mass of the DM halo they
belong to. Although to a first approximation, massive
galaxies are supposed to be the brightest, this will not always be true. These galaxies can suffer from quenched star
formation, which would considerably decrease their H$\alpha$ emission. Also, massive galaxies are usually dusty galaxies and so their observed luminosity can be low.
For these reasons, it is not necessarily true that the observed galaxies will correspond to a specific type of galaxy. Moreover, the observed bright
galaxies might not be representative of the main H$\alpha$ galaxy population. As an example, bright galaxies might have
particularly high or low luminosity ratios between different
observational bands, compared to the majority of the H$\alpha$ emitters, which are much fainter.
Except for the small volumes magnified due to lensing, the spectroscopic surveys performed by WFIRST and Euclid will not
reach the necessary low luminosities to probe the galaxies that dominate the overall H$\alpha$ line intensity.
These surveys will also not probe the $\alpha$ slope (see Eq.~\ref{eq:LF}) of the H$\alpha$ luminosity function, since the lensed volumes
are too small to beat cosmic variance. Furthermore, the uncertainty in the
modelling of the lenses themselves can be considerable, especially for highly lensed sources \citep{2017Livermore}. These surveys might, however, help
to identify the properties of a few relatively faint galaxies, given that the number density of these systems is high. Therefore, the lensed galaxies
observed by Euclid and WFIRST will be important to understand the properties of low luminosity systems and therefore, to probe the evolution of
the relation between H$\alpha$ emission and SFR in a galaxy.
For the WFIRST and Euclid planned surveys, the combination of the low-frequency range covered, the relatively broad photometric filters
and the large number of sources that they are expected to detect, will make it impossible to properly probe the extinction suffered by each source.
Line extinction will mainly be probed using galaxy templates. Although template-fitting algorithms will take into account dust
extinction, it will not be possible to obtain accurate measurements for all sources, in particular for highly extincted sources \citep{2017Galametz}.
On the other hand, these galaxy surveys will be particularly good at identifying AGNs and thus in distinguishing between star
formation and AGN powered H$\alpha$ emission.
Moreover, Euclid and WFIRST are mainly being built with the objective of probing dark energy. For that purpose, it is important to correctly
identify the redshift of the source of emission. The spectroscopic and photometric capabilities of these instruments will then, at
least for sources with a high signal to noise ($S/N > 10$), allow them to
determine which emission line is responsible for the observed signal \citep{2016Bisigello}. Ancillary data at lower wavelengths will also be used to help
identify line contaminants in these surveys \citep{2016Bisigello}.
The case for IM surveys is very different. The advantage of the IM surveys performed by the SPHEREx and CDIM instruments
resides in their low flux limits, combined with large frequency ranges and large FOVs. These factors will allow them to probe a large fraction
of the intensity of H$\alpha$ emission over a large redshift range. This will also allow them to probe the time evolution of H$\alpha$ emitters.
Moreover, by detecting emission from the sources mainly responsible for the total intensity of the H$\alpha$ line, their observational intensity maps
can be used to probe the global star formation in these galaxies.
Constraining the SFRD with the SPHEREx and CDIM IM surveys will require updating the relation between H$\alpha$ emission and the SFRD. These updates should be made
using constraints from other emission lines obtained with the same surveys, as discussed in Section~\ref{subsec:Z_ion_param}.
Moreover, the data from IM surveys will be difficult to interpret and separate in terms of the source of the emission.
\section{Summary and discussion}
\label{sec:Summary}
In this study, we explored the potential of different instruments to constrain H$\alpha$ line emission. Namely, we compared the galaxy
surveys that will be performed by the Euclid and WFIRST instruments, with the IM surveys that are planned for the SPHEREx and CDIM instruments.
Starting from observations, which we then extend by using physically motivated relations deduced from theory and simulations, we modeled
the intensity and power spectra of the H$\alpha$ line over the $z\sim 0 - 5$ redshift range.
We find that the intensity of this line is currently uncertain by a factor of up to a few until $z\sim 2$, and up to one order of magnitude at $z\sim5$. The higher uncertainty
towards high redshift lies both in the lack of observations of H$\alpha$ emitters and in the increasing uncertainty of dust extinction corrections.
Still, the available constraints led us to estimate that this line intensity, in the relevant redshift interval,
should remain in the $I_{\rm H\alpha} \sim 10^{-8}-10^{-7}\, {\rm erg\, s^{-1}\, cm^{-2}\, sr^{-1}}$ range and peak at the same time as the cosmic SFRD at $z\sim 2-3$.
According to the properties of the considered CDIM and SPHEREx instruments, we predict that their planned IM surveys will be good enough to make a statistical detection of the overall H$\alpha$ line intensity. Moreover, when operating in galaxy survey mode, CDIM should detect more than $90\%$ of this line intensity up to $z\sim 4-5$, whilst SPHEREx can only do the same up to $z\sim1$. SPHEREx will still be able to detect more than $50\%$ of this emission up to $z\sim4$. These percentages assume a minimum flux per observational voxel corresponding to the flux limits quoted for these surveys.
On the other hand, the Euclid and WFIRST galaxy surveys will only detect H$\alpha$ emission in a narrow frequency range. This will include H$\alpha$ emission only up to $z\sim2$. Given
their flux sensitivity limits, in spectroscopic mode, these surveys will not probe the H$\alpha$ line intensity.
However, they will probe the high end of the H$\alpha$ LF over a large enough volume to beat cosmic variance. Moreover, at least for bright sources, Euclid and WFIRST will
be able to distinguish between SF and AGN powered H$\alpha$ emission.
The photometric capabilities of these surveys will also be used to probe the galaxy dust extinction and
to help distinguish between emission from different lines. These two points will only be easily achievable for relatively bright galaxies.
Using the same methodology as for the H$\alpha$ line, we modeled the emission by the SII and NII doublet lines and by the OII, OIII, H$\beta$ and Ly$\alpha$ lines
in the redshift range where these lines will contaminate H$\alpha$ intensity maps. We found that for a survey not suffering from flux limitations,
the signal from contaminant lines will increase the observed H$\alpha$ power spectra by a factor of up to two at $z\lesssim 2$. At higher
redshift this contamination is expected to decrease.
We implemented the several models for line emission in a simulation code and obtained a light cone for both the H$\alpha$ line and the background
contaminant lines. The observational light cones assume the flux sensitivity and voxel resolution of each of the planned IM surveys. We find that,
besides the contamination by background interloping lines being quite strong (both in terms of intensity and power spectrum), it is also difficult to remove.
We applied flux cuts to the background lines of $(3.0,~1.2,~0.5)\times 10^{-16}~{\rm erg\, s^{-1}\, cm^{-2}}$, in order to estimate the effect
that a masking procedure would have on the contamination power spectrum.
We find that the required flux cuts for masking need to be stronger towards increasing redshift. Still, overall they were successful at decreasing
the contamination power spectrum to a maximum of $\sim 10\%$ of the observed H$\alpha$ signal. The removal of contamination by these bright background
galaxies would only require masking less than $1\%$ of the voxels for both the SPHEREx and the CDIM surveys.
The decrease in the H$\alpha$ power spectrum due to putting the strongly contaminated voxels to zero would also be below $1\%$. The
recovery of the target signal would thus not be compromised.
At $z\lesssim 0.8$, the detection of interloping contaminants in H$\alpha$ intensity maps can be
reasonably done with data from a galaxy survey with a flux sensitivity of $1.2\times 10^{-16}\, {\rm erg\, s^{-1}\, cm^{-2}}$. This corresponds
to the sensitivity of the WFIRST instrument, although it lies outside the frequency range covered by this survey. For H$\alpha$ intensity maps
in the range $z\sim0.8-2$, a flux cut of $5.0\times 10^{-17}\, {\rm erg\, s^{-1}\, cm^{-2}}$ would produce a similar reduction of the contaminant power spectrum.
Unfortunately, there is no available galaxy survey that can detect contaminant galaxies up to this low flux level and over the large volumes covered by the IM surveys.
Given the lack of surveys that can individually detect the main contaminant galaxies, we believe that the best option for
H$\alpha$ IM studies is to jointly model the evolution of all the strong emission lines contributing to the observed line fluxes. After recovering the observed H$\alpha$ signal, intensity maps should be corrected for dust extinction. We explored the possibility of
doing this with the data from the same IM surveys and found that this can be done, to a certain point, using ratios of emission lines.
This would require efficient separation of the contribution from the different lines in these intensity maps, which is not trivial.
Alternatively, the extinction rates and extinction curves from galaxy surveys can be used to at least predict the overall evolution
of the dust extinction rate at the relevant frequencies.
We, therefore, conclude that IM surveys can be used to probe the overall intrinsic H$\alpha$ intensity up to an uncertainty of the order of $20\%$, as long
as this line signal can be accurately disentangled from the signal of other emission lines.
Many of our results are sensitive to our choice of using a high redshift SFRD model based on observations of H$\alpha$ emitters. These have
inferred SFRs higher than the SFRs traced by the UV continuum. Regardless of the reasons for this discrepancy, our choice allowed us to make
predictions for the H$\alpha$ intensity and power spectrum consistent with observations. Moreover, OIII emitters, which are the most important contaminants
in H$\alpha$ intensity maps, also have very large equivalent widths at high z. Consequently, our SFRD based predictions for the intensity of this
line are consistent with observational constraints.
In the case of OIII emitters, the large EWs are likely to be the result of the highly ionized state of the ISM in high redshift galaxies.
Therefore, we might be overestimating the contamination by H$\beta$ and OII emitters. However, this does not affect our main
conclusions, given the small contribution from these lines ($\lesssim10\%$) to the contamination power spectra in H$\alpha$ intensity maps.
\section*{Acknowledgements}
The authors thank the anonymous referee whose comments and suggestions helped to improve the quality of the article.
We also thank the Netherlands Foundation for Scientific Research for its support through the VICI grant 639.043.006.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,308 |
package com.olegych.scastie.instrumentation
import java.lang.System.{lineSeparator => nl}
case class DiffFailure(title: String,
expected: String,
obtained: String,
diff: String)
extends Exception(title + nl + Diff.error2message(obtained, expected))
object Diff {
def error2message(obtained: String, expected: String): String = {
val sb = new StringBuilder
sb.append(nl)
sb.append(s"""
## Obtained
#${trailingSpace(obtained)}
""".stripMargin('#'))
sb.append(s"""
## Expected
#${trailingSpace(expected)}
""".stripMargin('#'))
sb.append(s"""
## Diff
#${trailingSpace(compareContents(obtained, expected))}
""".stripMargin('#'))
sb.toString()
}
def assertNoDiff(obtained: String,
expected: String,
title: String = ""): Unit = {
val result = compareContents(obtained, expected)
if (result.nonEmpty) {
throw DiffFailure(title, expected, obtained, result)
}
}
def trailingSpace(str: String): String = str.replaceAll(" \n", "∙\n")
def compareContents(original: String, revised: String): String = {
compareContents(original.trim.split(nl), revised.trim.split(nl))
}
def compareContents(original: Seq[String], revised: Seq[String]): String = {
import collection.JavaConverters._
val diff = difflib.DiffUtils.diff(original.asJava, revised.asJava)
if (diff.getDeltas.isEmpty) ""
else
difflib.DiffUtils
.generateUnifiedDiff(
"original",
"revised",
original.asJava,
diff,
1
)
.asScala
.drop(3)
.mkString(nl)
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,438 |
The Investigation (Danish: Efterforskningen) is a six-part series, directed by Tobias Lindholm. The series is based on the investigation of the death of Kim Wall, a 30-year-old Swedish journalist. The series follows the criminal investigation of the case, featuring Søren Malling as Chief Inspector Jens Møller, Laura Christensen as Police Investigator Maibritt Porse, Pilou Asbæk as Special Prosecutor Jakob Buch-Jepsen, and Rolf Lassgård and Pernilla August as Wall's parents. The series originally aired on 28 September 2020 on TV2 in Denmark and Sweden's SVT. It was broadcast on the UK's BBC Two between 22 January and 5 February 2021. HBO began showing the series on 1 February 2021.
Cast
Søren Malling as chief investigator Jens Møller Jansen
Pilou Asbæk as special prosecutor Jakob Buch-Jepsen
Pernilla August as Ingrid Wall (victim's mother)
Rolf Lassgård as Joachim Wall (victim's father)
Laura Christensen as investigator Maibritt Porse
Hans Henrik Clemensen as investigator Nikolaj Storm
Dulfi Al-Jabouri as investigator Musa Amin
Charlotte Munck as Kirstine (Jens Møller Jansen's wife)
Anders Juul as investigator Christian Skov
Henrik Birch as investigator Lars Møller
See also
Murder of Kim Wall
References
Danish crime television series
2020 Danish television series debuts
Television shows set in Denmark
True crime television series
Peter Madsen
DR TV original programming
Sveriges Television original programming | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,020 |
44th Rifle Division (44-я стрелковая дивизия) may refer to:
44th Rifle Division (USSR, 1919)
44th Rifle Division (USSR, 1941)
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,617 |
Description: Deep within the darkness of secluded forest land in rural Ireland dwells an ancient evil. Feared by the nearby superstitious villagers as cursed creatures who prey upon the lost, their secrets have been kept from civilization and remain on their hallowed ground. But when a conservationist from London moves in with his wife and infant child in order to survey the land for future construction, his actions unwittingly disturb the horde of demonic forces. Alone in a remote wilderness, he must now ensure his family's survival from their relentless attacks. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,834 |
{"url":"http:\/\/nicolasbehr.com\/publication-type\/3\/","text":"# 3\n\n## Tracelet Hopf algebras and decomposition spaces\n\nTracelets are the intrinsic carriers of causal information in categorical rewriting systems. In this work, we assemble tracelets into a symmetric monoidal decomposition space, inducing a cocommutative Hopf algebra of tracelets. This Hopf algebra \u2026\n\n## Concurrency Theorems for Non-linear Rewriting Theories\n\nSesqui-pushout (SqPO) rewriting along non-linear rules and for monic matches is well-known to permit the modeling of fusing and cloning of vertices and edges, yet to date, no construction of a suitable concurrency theorem was available. The lack of \u2026\n\n## Explicit formulae for all higher order exponential lacunary generating functions of Hermite polynomials\n\nFor a sequence $P=(p\\_n(x))\\_{n=0}^{\\\\infty}$ of polynomials $p\\_n(x)$, we study the $K$-tuple and $L$-shifted exponential lacunary generating functions \\$\\\\mathcal{G}\\_{K,L}(\\\\lambda;x):=\\\\sum\\_{n=0}^{\\\\infty}\\\\frac{\\\\lambda^n}{n!} p\\_{n\\\\cdot \u2026\n\n## Combinatorics of chemical reaction systems\n\nWe propose a concise stochastic mechanics framework for chemical reaction systems that allows to formulate evolution equations for three general types of data: the probability generating functions, the exponential moment generating functions and the \u2026\n\n## The algebras of graph rewriting\n\nThe concept of diagrammatic combinatorial Hopf algebras in the form introduced for describing the Heisenberg-Weyl algebra in [Blasiak et al. 2010](https:\/\/arxiv.org\/abs\/1001.4964) is extended to the case of so-called rule diagrams that present graph \u2026","date":"2021-12-08 03:46:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6324443817138672, \"perplexity\": 1737.323204040596}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964363437.15\/warc\/CC-MAIN-20211208022710-20211208052710-00007.warc.gz\"}"} | null | null |
var parent = require('../../stable/instance/find-index');
module.exports = parent;
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,448 |
{"url":"http:\/\/gate-exam.in\/CS\/CSE-GATE-2013-Question-6","text":"# GATE Papers >> CSE >> 2013 >> Question No 6\n\nQuestion No. 6\n\nWhich one of the following is the tightest upper bound that represents the number of swaps required to sort n numbers using selection sort?","date":"2020-10-22 21:28:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 3, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.596993625164032, \"perplexity\": 2564.652823656376}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107880038.27\/warc\/CC-MAIN-20201022195658-20201022225658-00277.warc.gz\"}"} | null | null |
Q: Accessing the values from a View in a particular List. How can I fetch the values of a View in a particular List by using jQuery or JavaScript?
A: Since executeQueryAsync is asynchronous, the success handler cannot return the items to the caller; pass a callback instead and hand it the loaded item collection:
function getItemsFromView(listTitle, viewTitle, onItemsLoaded) {
    var context = SP.ClientContext.get_current();
    var list = context.get_web().get_lists().getByTitle(listTitle);
    context.load(list);
    var view = list.get_views().getByTitle(viewTitle);
    context.load(view);
    context.executeQueryAsync(
        function (sender, args) {
            // Build a CAML query from the view definition and load the matching items.
            var query = new SP.CamlQuery();
            query.set_viewXml("<View><Query>" + view.get_viewQuery() + "</Query></View>");
            var items = list.getItems(query);
            context.load(items);
            context.executeQueryAsync(
                function () {
                    onItemsLoaded(items); // Hand the loaded list item collection to the caller
                },
                function (sender, args) { alert("error in inner request: " + args.get_message()); }
            );
        },
        function (sender, args) { alert("error: " + args.get_message()); }
    );
}
//Example of usage
getItemsFromView("Tasks", "My Tasks", function (items) {
    var listEnumerator = items.getEnumerator();
    var i = 0;
    while (listEnumerator.moveNext()) {
        i++;
    }
    alert("items retrieved: " + i);
});
Here is a similar question for your reference:
JavaScript library to get list items based on a view using CSOM/JSOM
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,226 |
Q: Providing UI notifications when the ViewModel detects a file permissions issue. Being a neophyte in C#, WPF and MVVM, I am refactoring my first app prototype from a kludge that was almost fully implemented in the code-behind to an MVVM pattern. I have everything working nicely: data binding, commands, etc. Almost everything has been moved to its correct location but I am running into a little bit of an issue trying to figure out how I should handle files.
My model supports a chunk of data that is sent to and read from a remote electronic device. That data is transformed in the VM and exchanged with the V via binding. The user may optionally select to stream output to a CSV. This can be done via an OpenFileDialog or by entering the file name directly in the text box.
I am relatively certain about two considerations (correct me if you disagree):
1) It's acceptable to handle the OpenFileDialog in the V and send the filename to the MV via binding. I've seen this answered in other discussions.
2) I'll implement a filehandler class that will open the file, check permissions, format the CSV record, etc.
What I am unsure about is how the file checking should occur. If a file is locked, or has not been selected, or already exists, how can this be communicated to the V so that the user is notified? In my first cut at the app, I simply implemented this logic in the code-behind, which does not seem correct:
private bool CSVReady()
{
if (filenameTextbox.Text == "<no file selected>")
{
MessageBox.Show("Please select an output file.");
return false;
}
if (File.Exists(filenameTextbox.Text))
{
var r = MessageBox.Show("File already exists. Append to it?",
"File Warning",
MessageBoxButton.YesNo,
MessageBoxImage.Warning);
if (r == MessageBoxResult.No)
return false;
try
{
File.OpenWrite(filenameTextbox.Text).Close();
}
catch (IOException)
{
MessageBox.Show("File is already open. Please close it.");
return false;
}
}
else // file does not exist, create it and initialize the column labels.
{
if ((MessageBox.Show("File does not exist. Create it?", "File Creation", MessageBoxButton.YesNo, MessageBoxImage.Warning) ==
MessageBoxResult.No))
return false;
File.Create(filenameTextbox.Text).Close();
File.AppendAllText( // Blah blah blah
}
return true;
}
A: If you have correctly data bound an instance of your view model to your view's DataContext property, then you can access your view model from your view's code behind simply, like this:
...
// DialogResult here implies the System.Windows.Forms.OpenFileDialog;
// the WPF Microsoft.Win32.OpenFileDialog returns bool? instead.
DialogResult result = fileDialog.ShowDialog();
if (result == DialogResult.OK) filePath = fileDialog.FileName;
...
ViewModel viewModel = (ViewModel)DataContext; // <-------- cast the bound DataContext back to the view model type
viewModel.DoSomethingWithNewFilePath(filePath);
UPDATE >>>
What I am unsure about is how the file checking should occur. If a file is locked, or has not been selected, or already exists, how can this be communicated to the V so that the user is notified?
The way that you have written your file checking process is fine as far as MVVM goes. You're not breaking any rules. It's the 'quick' way to implement such functionality, but of course, there are different levels of coding quality.
In my applications, I have xxxManager classes that perform a variety of tasks for me... some might call them service classes. For example, I have a WindowManager class that handles all Window-related tasks, including showing MessageBoxes and various dialogs. The reason for this is that I can put these classes behind interfaces and provide mock implementations for testing, so that the tests don't actually open Windows that need someone to close.
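A minimal sketch of that idea, with hypothetical IWindowManager / WindowManager names (placeholders, not types from any library): the view model talks to the interface, and unit tests can substitute a fake that never opens a real MessageBox.
using System.Windows;

// Abstraction the view model depends on.
public interface IWindowManager
{
    void ShowMessage(string text, string caption);
    bool Confirm(string text, string caption);
}

// Implementation used by the running application.
public class WindowManager : IWindowManager
{
    public void ShowMessage(string text, string caption)
    {
        MessageBox.Show(text, caption, MessageBoxButton.OK, MessageBoxImage.Information);
    }

    public bool Confirm(string text, string caption)
    {
        return MessageBox.Show(text, caption, MessageBoxButton.YesNo, MessageBoxImage.Warning)
               == MessageBoxResult.Yes;
    }
}

// In the view model, the append prompt from the question becomes testable:
//   if (!_windowManager.Confirm("File already exists. Append to it?", "File Warning"))
//       return false;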
If you're not testing and not likely to want to change or add a web interface, then you really don't need that level of separation. If however, you are working on a business application, then it is generally considered good practice to separate the various concerns of the application into different folders or projects; data access, data manipulation, services and UI.
I also have a HardDriveManager class that lets the view model perform all of those System.IO operations without taking a direct dependency on the relevant namespaces. So the answer is that what you're doing is OK, but you could split the different responsibilities out into separate helper classes.
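The same pattern applied to the file checks from the question, again with placeholder names (this hand-rolled IFileSystem is just an illustration; it is not the System.IO.Abstractions package):
using System.IO;

// Wrapper so the view model never calls System.IO directly and
// file-locking behaviour can be faked in unit tests.
public interface IFileSystem
{
    bool FileExists(string path);
    bool IsWritable(string path);
    void AppendAllText(string path, string contents);
}

public class FileSystem : IFileSystem
{
    public bool FileExists(string path) => File.Exists(path);

    public bool IsWritable(string path)
    {
        try
        {
            // Mirrors the probe in the question; note that OpenWrite creates
            // the file if it does not already exist.
            using (File.OpenWrite(path)) { }
            return true;
        }
        catch (IOException)
        {
            return false;
        }
    }

    public void AppendAllText(string path, string contents) => File.AppendAllText(path, contents);
}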
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,923 |
The application submission period has closed for Brewster, Wellfleet and Mashpee.
You can stay up-to-date about homeownership opportunities by subscribing to our e-news. Questions? Contact our Family Programs Manager, Mary Ann Mills-Lassiter, at 508 362 3559 x21 or send an email to maryann@habitatcapecod.org.
Click here to see a SAMPLE APPLICATION and SAMPLE APPLICATION INSTRUCTIONS.
Habitat for Humanity of Cape Cod does not discriminate in the selection of applicants. Habitat for Humanity of Cape Cod is a not-for-profit organization and we do business in accordance with Federal and Massachusetts Fair Lending Laws. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,148 |