text stringlengths 14 5.77M | meta dict | __index_level_0__ int64 0 9.97k ⌀ |
|---|---|---|
<header>
<div class="scope-selector"
data-ng-if="::ctrl.config.scopeMenu">
<a class="item-label"
isis-contextmenu
contextmenu-data="::ctrl.config.scopeMenu"
contextmenu-config="::ctrl.scopeMenuConfig"
>{{ ctrl.config.selectedScope.label }} <i class="fa fa-angle-down"></i></a>
</div>
<div class="preferences-menu"
data-ng-if="::ctrl.config.preferencesMenu">
<a class="item-label"
isis-contextmenu
contextmenu-data="::ctrl.config.preferencesMenu"
contextmenu-config="::ctrl.preferencesMenuConfig"
><i class="glyphicon glyphicon-cog"></i></a>
</div>
</header>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,227 |
{"url":"https:\/\/brilliant.org\/discussions\/thread\/whats-the-speed\/","text":"\u00d7\n\n# Whats the speed\n\nHi guys. I have got a problem. Can you solve it and give the solutions and answers. A person is moving towards a plane mirror with a speed of 5m\/s. The speed with which the image moves is (a) 5m\/s (b) 10m\/s (c) 20m\/s (d) 0 m\/s\n\nNote by Tanveen Dhingra\n3\u00a0years ago\n\nMarkdownAppears as\n*italics* or _italics_ italics\n**bold** or __bold__ bold\n- bulleted- list\n\u2022 bulleted\n\u2022 list\n1. numbered2. list\n1. numbered\n2. list\nNote: you must add a full line of space before and after lists for them to show up correctly\nparagraph 1paragraph 2\n\nparagraph 1\n\nparagraph 2\n\n[example link](https:\/\/brilliant.org)example link\n> This is a quote\nThis is a quote\n # I indented these lines\n# 4 spaces, and now they show\n# up as a code block.\n\nprint \"hello world\"\n# I indented these lines\n# 4 spaces, and now they show\n# up as a code block.\n\nprint \"hello world\"\nMathAppears as\nRemember to wrap math in $$...$$ or $...$ to ensure proper formatting.\n2 \\times 3 $$2 \\times 3$$\n2^{34} $$2^{34}$$\na_{i-1} $$a_{i-1}$$\n\\frac{2}{3} $$\\frac{2}{3}$$\n\\sqrt{2} $$\\sqrt{2}$$\n\\sum_{i=1}^3 $$\\sum_{i=1}^3$$\n\\sin \\theta $$\\sin \\theta$$\n\\boxed{123} $$\\boxed{123}$$\n\nSort by:\n\n(b )10m\/s\n\n- 2\u00a0years, 10\u00a0months ago\n\nThe image moves with a speed of 5m\/s w.r.t the mirror . In other words the image comes closer to the person w.r.t 10m\/s.\n\n- 2\u00a0years, 11\u00a0months ago\n\n@Calvin Lin hey sir can u help me\n\n- 3\u00a0years ago\n\nI remember when we made an experiment on mirrors and we learned that the distance from the image to the mirror and the distance from the real object to the mirror is the same. Hence, the image \"moves\" at $$5$$ m\/s also.\n\n- 3\u00a0years ago","date":"2018-01-23 19:35:54","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9974376559257507, \"perplexity\": 5952.9776353533425}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-05\/segments\/1516084892238.78\/warc\/CC-MAIN-20180123191341-20180123211341-00049.warc.gz\"}"} | null | null |
Q: Display name Certificate OID - Windows I installed a certificate on two Windows machines (both Windows 7 x86), and when I view the certificate's properties in certmgr, the "Subject Alternative Name" section is different on the two machines. The one that contains "2.16.76.1.3.3" is correct; that is the official OID for "CNPJ". I tried exporting the certificate from the "right" machine, but that doesn't work. I don't know whether there is a way to map the OID to "common" names, but I need the original OID. The certificate installed on the two machines comes from the same file (.pfx). Certificate details are below:
"wrong" properties
"right" properties
A: If the certificate came from the same source (the same PFX), then the Subject Alternative Name entry is probably the same. The difference is that one of the two computers has had 2.16.76.1.3.3 registered with a name (CNPJ), and the other hasn't.
CryptRegisterOIDInfo can be used to register name/value (and other data) mappings for an OID; presumably the "right" machine had that called by some software at some point to register CNPJ.
This is just a UI display issue (Windows CertUI uses friendly names when it can, dotted-decimal OIDs otherwise).
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,617 |
{"url":"http:\/\/www.koreascience.or.kr\/article\/ArticleFullRecord.jsp?cn=E1BMAX_2007_v44n2_225","text":"GALOIS GROUPS OF MODULES AND INVERSE POLYNOMIAL MODULES\n\nTitle & Authors\nGALOIS GROUPS OF MODULES AND INVERSE POLYNOMIAL MODULES\nPark, Sang-Won; Jeong, Jin-Sun;\n\nAbstract\nGiven an injective envelope E of a left R-module M, there is an associative Galois group Gal$\\small{({\\phi})}$. Let R be a left noetherian ring and E be an injective envelope of M, then there is an injective envelope $\\small{E[x^{-1}]}$ of an inverse polynomial module $\\small{M[x^{-1}]}$ as a left R[x]-module and we can define an associative Galois group Gal$\\small{({\\phi}[x^{-1}])}$. In this paper we describe the relations between Gal$\\small{({\\phi})}$ and Gal$\\small{({\\phi}[x^{-1}])}$. Then we extend the Galois group of inverse polynomial module and can get Gal$\\small{({\\phi}[x^{-s}])}$, where S is a submonoid of $\\small{\\mathbb{N}}$ (the set of all natural numbers).\nKeywords\ninjective module;injective envelope;Galois group;inverse polynomial module;\nLanguage\nEnglish\nCited by\n1.\nGeneralized Inverse Power Series Modules, Communications in Algebra, 2011, 39, 8, 2779\nReferences\n1.\nZ. Lin, Injectivity of modules of generalized inverse polynomials, Comm. Algebra 29 (2001), no. 2, 583-592\n\n2.\nA. S. McKerrow, On the injective dimension of modules of power series, Quart. J. Math. Oxford Ser. (2) 25 (1974), 359-368\n\n3.\nL. Melkersson, Content and inverse polynomials on Artinian modules, Comm. Algebra 26 (1998), no. 4, 1141-1145\n\n4.\nD. G. Northcott, Injective envelopes and inverse polynomials, J. London Math. Soc. (2) 8 (1974), 290-296\n\n5.\nS. Park, Inverse polynomials and injective covers, Comm. Algebra 21 (1993), no. 12, 4599-4613\n\n6.\nS. Park, The Macaulay-Northcott functor, Arch. Math. (Basel) 63 (1994), no. 3, 225-230\n\n7.\nS. Park, Gorenstein rings and inverse polynomials, Comm. Algebra 28 (2000), no. 2, 785-789\n\n8.\nS. Park, The general structure of inverse polynomial modules, Czechoslovak Math. J. 51 (126) (2001), no. 2, 343-349\n\n9.\nS. Park and E. Cho, Injective and projective properties of R[x]-modules, Czechoslovak Math. J. 54 (129) (2004), no. 3, 573-578","date":"2018-09-23 08:32:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 8, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6785776019096375, \"perplexity\": 859.0697533864334}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-39\/segments\/1537267159165.63\/warc\/CC-MAIN-20180923075529-20180923095929-00471.warc.gz\"}"} | null | null |
{"url":"https:\/\/stats.stackexchange.com\/questions\/506901\/evaluating-the-quality-of-a-prediction-when-performing-a-measurement-study","text":"# Evaluating the quality of a prediction when performing a measurement study\n\nSay I am performing a hypothetical measurement study of the time taken by different computers to finish a specific task eg. download a list of web pages $$w \\in W$$ and store the results of each page load in an array $$T$$. Using the results obtained so far, I make a prediction that by changing some parameter like the file chunk size the download times can be reduced to $$P$$ which is a list of floating point values corresponding to the expected download time. At a later time, by actually performing an experiment to validate the hypothesis of file chunk size improving download time I obtain the actual times $$A$$.\n\nIs a t-Test (scipy.stats.ttest_ind(P, A, equal_var=True)) the correct way to check and conclude that the predictions made in $$P$$ were accurate and reliable? Are there other statistical tests which I could use here to indicate the accuracy of the predictions compared to the experimental results?\n\nYou can in principle use a t-test to assess whether your predictions have the same mean as your actuals. One question is whether this is really interesting. If you have $$n$$ predictions, and they are on average too high by $$x$$, then you can subtract $$nx$$ from any one prediction, and the resulting vector of predictions will have the same mean as your actuals. Does this mean that this ad hoc modified prediction is better than the original one? I wouldn't think so.\n(Also, if you do decide to use a t-test, don't use equal_var=True. Your predictions will almost certainly have a lower variance than the actuals, simply because in predicting, you filter out unpredictable noise.)","date":"2021-09-24 10:09:49","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 8, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.648183286190033, \"perplexity\": 373.93109818287144}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057508.83\/warc\/CC-MAIN-20210924080328-20210924110328-00244.warc.gz\"}"} | null | null |
{"url":"https:\/\/cycling74.com\/forums\/help-combining-detuned-saw-objects\/","text":"## Help combining detuned saw objects?\n\nApr 06 2010 | 3:07 pm\nI'm trying to write a \"supersaw\" jp-8000 style synth in maxmsp. All I'm doing is adding together 16 saw objects each that are detuned differently (by sawnumber*detuneamount) and then dividing the output by 16 to keep the volume down. This is placed into a patcher and played using poly.\nThe problem is, when I do this I get a lot of flanging... I'm not sure why but I think it has to do with each oscillator starting at a slightly different time? Anyone have any idea how I can fix this? Or is there some other kind of spacing I need between the oscillators? I tried making the same patch in Reaktor (with the same detune amounts) and didn't get any of the flanging effects, so I'm confused. Thanks! See text code below, also i made a screenshot: http:\/\/www.kupex.com\/supersawattempt.jpg\n\n\u2022 Apr 11 2010 | 8:26 pm\nHi jinxpx,\nI do not understand why you detune each oscillator with the same offset from the previous oscillator to the next one (as I understand from the * 1. , * 2. , * 3... series.). It's hard to be sure without knowing what sort of values are coming into the detune inlet, but I suspect you'd get better results by using random values in a specific range rather than this series.\nAlso, I'd use multiplication instead of addition, so the detuning is relative to frequency. See modified patcher below.\nHope this helps.\nRoald Baudoux www.roaldbaudoux.org\n\u2022 Apr 11 2010 | 10:51 pm\nwithout looking at the patch i can tell that flanging is the exspected behaviour.\nphase offsets will not make a difference here, but the tuning can.\non one had i would use relative frequency offsets (think per cent) and on the other hand each of the oscillator frequencies need a slight modulation; try triangular LFO and\/or little random instabilities. their job will mainly be to modulate the phases and not the pitches, if oyu know what i mean.\n-110","date":"2017-07-25 06:48:49","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8261333107948303, \"perplexity\": 1441.1640868608906}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-30\/segments\/1500549425082.56\/warc\/CC-MAIN-20170725062346-20170725082346-00189.warc.gz\"}"} | null | null |
{"url":"https:\/\/ask.sagemath.org\/answers\/12657\/revisions\/","text":"Revision history [back]\n\nThere is a marvelous combinatorial class called IntegerListsLex that returns an iterator (a type of python object that you can iterate over) for lists of integers subject to certain constraints. Try trying IntegerListsLex? at the Sage command line to see the documentation.\n\nHere is an iterator that I think does the job:\n\nsage: L = IntegerListsLex(12, length=7, floor=[1]*7, ceiling=[9]*7)\nsage: for i in L:\n....: print i\n....:\n[6, 1, 1, 1, 1, 1, 1]\n[5, 2, 1, 1, 1, 1, 1]\n[5, 1, 2, 1, 1, 1, 1]\n[5, 1, 1, 2, 1, 1, 1]\n\n\netc...\n\nThere is a marvelous combinatorial class called IntegerListsLex that returns an iterator (a type of python object that you can iterate over) for lists of integers subject to certain constraints. Try trying entering IntegerListsLex? at the Sage command line to see the documentation.\n\nHere is an iterator that I think does the job:\n\nsage: L = IntegerListsLex(12, length=7, floor=[1]*7, ceiling=[9]*7)\nsage: for i in L:\n....: print i\n....:\n[6, 1, 1, 1, 1, 1, 1]\n[5, 2, 1, 1, 1, 1, 1]\n[5, 1, 2, 1, 1, 1, 1]\n[5, 1, 1, 2, 1, 1, 1]\n\n\netc...\n\nThere is a marvelous combinatorial class called IntegerListsLex that returns an iterator (a type of python object that you can iterate over) for lists of integers subject to certain constraints. Try entering IntegerListsLex? at the Sage command line to see the documentation.\n\nHere is an iterator that I think does the job:\n\nsage: L = IntegerListsLex(12, length=7, floor=[1]*7, ceiling=[9]*7)\nsage: for i in L:\n....: print i\n....:\n[6, 1, 1, 1, 1, 1, 1]\n[5, 2, 1, 1, 1, 1, 1]\n[5, 1, 2, 1, 1, 1, 1]\n[5, 1, 1, 2, 1, 1, 1]\n\n\netc...\n\nThe Lex part of the name indicates that the lists are returned in lexicographic order.\n\nThere is a marvelous combinatorial class called IntegerListsLex that returns an iterator (a type of python object that you can iterate over) for lists of integers subject to certain constraints. Try entering IntegerListsLex? at the Sage command line to see the documentation.\n\nHere Below is an iterator that I think does the job:job. 
The length parameter limits to sequences of length 7, the floor parameter limits the integers to be larger or equal to 1, the ceiling parameter limits the integers to be less or equal to 3.\n\nsage: L = IntegerListsLex(12, length=7, floor=[1]*7, ceiling=[9]*7)\nceiling=[3]*7)\nsage: for i l in L:\n....: print i\nl\n....:\n[6, 1, 1,\n[3, 3, 2, 1, 1, 1, 1]\n[5, 2, 1, 1, [3, 3, 1, 2, 1, 1, 1]\n[5, 1, 2, 1, 1, [3, 3, 1, 1, 2, 1, 1]\n[5, 1, 1, 2, 1, 1, [3, 3, 1, 1, 1, 2, 1]\n\n\netc...\n\nThe Lex part of the name indicates that the lists are returned in lexicographic order.\n\nNote that if you sort the sequences, convert to tuples (so that they are hashable), and then uniquify them you get the 3 basic sequences that generate the whole collection by reordering:\n\nsage: L = IntegerListsLex(12, length=7, floor=[1]*7, ceiling=[3]*7)\nsage: LS = [ tuple(sorted(i)) for i in L ]\nsage: uniq(LS)\n[(1, 1, 1, 1, 2, 3, 3), (1, 1, 1, 2, 2, 2, 3), (1, 1, 2, 2, 2, 2, 2)]\n\n\nUpdate changed ceiling option to limit the sequence to 1's, 2's, and 3's","date":"2019-02-24 01:42:10","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.20406007766723633, \"perplexity\": 772.2704511610384}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-09\/segments\/1550249569386.95\/warc\/CC-MAIN-20190224003630-20190224025630-00201.warc.gz\"}"} | null | null |
package pageContext
import (
"google.golang.org/appengine/datastore"
)
// PageContext references a page's context. Note: the datastore struct tag ",noindex" causes JSON naming problems.
// Page key is the parent key.
type PageContext struct {
ContextKey *datastore.Key
}
// PageContexts is a []*PageContext
type PageContexts []*PageContext
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,031 |
{"url":"https:\/\/proxieslive.com\/tag\/deterministic\/","text":"## Is any randomized Algorithm a probability distribution over the set of deterministic Algorithms?\n\nIf there is a finite set of Instances of size n and the set of (reasonable) deterministic algorithms is finit.\n\nCan any randomized Algorithm be seen as a probability distribution over the set of deterministic Algorithms? And if yes, why?\n\n## Deterministic Finite Automata vs Java\n\nYou need to select a device controller. You have two options: Option 1: Implement with a DFA Option 2: Implement using Java The primary advantage of a DFA over a program written in Java is as follows:\n\n\u2022 A DFA requires fewer computational resources\n\u2022 A DFA is faster than a program in Java\n\u2022 Running a DFA costs less than running a program written in Java\n\u2022 It doesn\u2019t matter if we use a DFA or a program written in Java, as long as it gets the job done\n\n## Deterministic finite automata\n\nFor Sigma={a,b}, Design DFA for the language a) L={w:(na(w)+2nb(w))mod 3<2}. b) accepting set of string over {a,b} in which anbmcl, where n , m and l is greater than equal to 1.\n\n## How to determine if a language produced by grammar is recognizable by deterministic pushdown automaton (DPDA)?\n\nI have a following grammar: S -> aSa | bSb | lambda.\n\nAnd I have to figure out whether the language produced by this grammar is recognizable by DPDA. I can\u2019t find any theoremas about it. Obviously, it\u2019s a context-free language and can be recognized by DPA, but what about DPDA?\n\n## Finding the upper bound of states in Minimal Deterministic Finite Automata\n\nI have a task to determine the upper bound of states in the Minimal Deterministic Finite Automata that recognizes the language: $$L(A_1) \\backslash L(A_2)$$, where $$A_1$$ is a Deterministic Finite Automata(DFA) with $$n$$ states and $$A_2$$ is Non-deterministic Finite Automata(NFA) with $$m$$ states.\n\nThe way I am trying to solve the problem:\n\n1. $$L(A_1) \\backslash L(A_2) = L(A_1) \\cap L(\\Sigma^* \\backslash L(A_2)$$, which is language, that is recognised by automata $$L\u2019$$ with $$n*m$$ states\n2. Determinization of $$L\u2019$$ which has $$(n*m)^2$$ states and it is the upper bound of states.\n\nAm I right?\n\n## Maximum characters in a deterministic Turing machine\n\nAssume we have a deterministic Turing machine $$M = (q_s, q_a, q_r, \\Sigma, \\Gamma, \\delta, Q, b)$$ where $$q_s,q_a,q_r$$ are the (unique) starting state, accept state and reject state respectively, $$Q$$ the set of non-final states, $$\\Sigma$$ the input alphabet, $$\\Gamma$$ the tape alphabet, $$\\delta$$ the transition function and $$b \\in \\Gamma$$ the blank symbol.\n\nHow many characters can fit in $$\\Gamma$$, as a function of $$|\\Sigma|, |Q|$$ such that for each $$c \\in \\Gamma$$, $$\\delta$$ will be defined by it for some state and character?\n\n## I need help in creating deterministic finite automata\n\nLet \u03a3 = {a, b, c}. Draw a DFA that rejects all words for which the last two letters match. 
Draw a DFA that rejects all words for which the first two letters match.\n\n## Usual distances on DFAs (Deterministic Finite Automata)?\n\nI\u2019ve been searching in the literature for examples of distances defined on the set of the DFAs (or on the set of minimal DFAs) that are defined on a given alphabet sigma.\n\nSince the languages they describe (regular languages) can potentially have an infinite size, defining a distance is not a trivial matter.\n\nNevertheless, having a distance on these objects can be useful, in order to fit these in metric spaces, which allows for a range of things (in my case to assess the performance of an algorithm).\n\nMy only consistent idea so far is to create a distance similar to the edit-distance in labeled graphs on the minimized DFAs.\n\nDoes someone have ever heard of other distances ?\n\n## Non deterministic finite automata\n\nI\u2019m stucked trying to determine the NFA of this subset ({qo,q1},{1,0},{q0,0,q0},{q0,0q1},,{q0,1,q0},q0,{q1}). I\u2019ll be glad if someone should help me tackle the problem.\n\n## Deterministic Finite Automata\n\nI need to create a deterministic finite automata, that can be any length, and is made up of 0s and 1s. Among any subsequent 3 numbers, there needs to be exactly two 1s, and exactly one 0. I\u2019ve spent a couple hours studying DFAs, but I can only find solutions like the one I need to one of these issues. Meaning that among the 3, there can be at most 1 of something, or there needs to be at least 2 of something. I\u2019m pretty much lost in how I\u2019m supposed to combine these into a single DFA. Any help would be appreciated.","date":"2020-07-15 11:10:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 21, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.682187557220459, \"perplexity\": 477.7272437129669}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593657167808.91\/warc\/CC-MAIN-20200715101742-20200715131742-00542.warc.gz\"}"} | null | null |
Mackinacstrædet (English: Straits of Mackinac) is a strait that separates the Upper and Lower Peninsulas of Michigan in the United States. The strait connects the two lakes Lake Michigan and Lake Huron.
The strait is 8 kilometers wide at its widest point and is spanned by the Mackinac Bridge. It freezes over in winter but is kept open by icebreakers.
There are two inhabited islands in the Straits of Mackinac, Mackinac Island and Bois Blanc, and two uninhabited islands, Round Island and St. Helena Island. Two towns lie on the strait, Mackinaw City and St. Ignace. There are also two forts, Fort Michilimackinac and Fort Mackinac, both of which are popular tourist destinations.
External links
Michigan
Straits | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,254 |
Eight More Delis (Again)
It's 8 a.m. Saturday morning, and I'm sitting in the back of Malcolm's Highlander with Larry and Gary. We are in the car on the way to New York City from the DC area for our 3rd (now annual) deli trip. Once again we are searching for the best corned beef, pastrami, brisket, and more in the NY area. This year the agonizing choices began well before today.
In 2010 and 2011, we visited 8 delis in NY and New Jersey in search of the best. After visiting all these delis, perhaps we were "deli'd out," so I suggested other "tasting" options. We tossed around three new options: Pizza, Mediterranean (kabobs, hummus & such), or Vegetarian. We're all relatively healthy eaters (outside of our deli trips), so the Mediterranean and Veggie options had promise.
Pizza – ooooh, wouldn't we all love that option. However, there's just too much fat & cheese. (Please don't send me any comments about the amount of fat in deli sandwiches.) Another downside to pizza—it would be hard to take the leftovers home.
Mediterranean, my initial favorite, didn't get much support from the rest of the pack. How about vegetarian? I found what appeared to be eight highly rated vegetarian restaurants in the City. As I pored over the menus and pictures, they did nothing for me. Nothing. I just couldn't get excited about vegetarian. But show me a picture of a pastrami sandwich from Katz's or 2nd Ave Deli and I'm salivating with desire. I dutifully posed the options to the group.
The group scoffed at the idea of any of the three alternatives. "We're deli people" they said. "What do we know about pizza anyway?"
So here we are in the car for our 3rd round of 8 delis in 36 hours. On the list this year were two repeats – last year's winner, Sarge's Delicatessen and Restaurant, and Katz's (because, as Malcolm put it, "If I were on death row, I would order a hand-cut pastrami sandwich from Katz's"). Other stops included two Brooklyn delis – the new branch of David's Brisket, and Mill Basin Deli; two Bronx delis – Loeser's and Liebman's; and two other Manhattan delis – Pastrami Queen and Schatzi the Butcher. They all had raves, at least on the Web, but, as we found out, who knows who posts those raves? We also were planning to stop for fun at the original Coney Island Nathan's, but it was closed as a result of damage from Hurricane Sandy.
Before we left my driveway, we hit our first setback – Larry forgot his digital scale to weigh the sandwiches. We had to have a scale to determine the best sandwich value for the money. But we also had a tight schedule and I was concerned about how this would impact our timing. So I suggested we skip the scale. I was quickly overruled and we headed to the local hardware store. They had two choices – one scale that could hold no more than 16 ounces (in the past, we have had sandwiches that would break that scale), and a second that could hold a big sandwich but didn't have precise digital measurements. We reluctantly compromised and chose the big scale.
We were back on the road at 8:30 a.m. Malcolm expressed strong resistance to knishes this year. We tried them last year because, at most delis, the potato salad and cole slaw were no better than mediocre. Based on Malcolm's pleas, we agreed to return to the potato salad/cole slaw side orders from our 2010 trip. As in past years, we each used a 100-point scale to rate each deli – a maximum of 24 points for each deli meat (brisket, corned beef, and pastrami), 14 points for pickles (half and full sour), and 7 each for cole slaw and potato salad. See our final ratings of all dishes at all the delis.
Traffic was light, the Highlander purred and we reached our first stop at 12:20 p.m., a little ahead of our schedule – Loeser's Kosher Deli in the Bronx. Before entering the restaurant we decided to see whether a hardware store around the corner from the deli had a better scale. Our lucky day—this tiny hardware store, on its last legs, had the perfect digital scale.
As we walked into Loeser's we were initially encouraged by the giant sign in the window that read, "Hebrew National" and "voted Best Pastrami." A promising start, but as we walked into the skinny establishment (past the requisite counter with deli meats & other assortments), we realized we were in trouble. Lunchtime on Saturday, and we were the only people in the "restaurant". There were 12 tables and two people working behind the counter. Clearly, this establishment and its proprietor had seen better days. He was very nice, but it seemed like he was just counting the hours until retirement. The place was dreary, and the tiny bathroom lacked a real door. In the past, we have found that we give a higher score to the first place we visit, because that is when we are hungriest. But this did not give any boost to Loeser's. The food was…. mediocre at best. (On the other hand, just to keep things in perspective, a mediocre deli sandwich in NY is way better than anything we can get in DC). See Malcolm's comments in the appendix for more detail. The big discussion here was about the cole slaw. Larry thought it was superb with a crisp, cabbage taste and light dressing, and Malcolm agreed. Gary and I thought it was tasteless. Larry gave it a score of 7, while I gave it a 2. On the whole, we all agreed that the deli was disappointing.
Five minutes after our first meal ended, we were at our next stop about a mile down the road, Liebman's (Kosher) Delicatessen Restaurant. It was very different from Loeser's. Liebman's had a wonderful family atmosphere, a fine deli aroma of brine pickles and cooked beef, and was spotless and hopping. The contrast between these restaurants reminded us of the contrast between two delis from our first trip, Bragman's and Hobby's in Newark. Bragman's was in a neighborhood that was in flux, a neighborhood that no longer seemed to have the traditional "Jewish deli" clientele, while Hobby's, not too far away, gave us the true Jewish deli experience. Where Loeser's was grungy, Liebman's was spotless, as was their bathroom. (For the record, bathrooms are more important to some of us.)
Shortly after we sat down, we were joined by my starving artist son from Harlem, Mark. He recently completed producing, directing and acting in a comedy movie musical, "Welcome to Harlem", which opened at the Apollo Theater, and has won numerous awards, but has yet to find a distributor. Mark is always glad to receive a free deli sandwich. During the meal, Larry asked Mark whether he thought we'd still be alive in 10 years if we do this every year. Mark's response, "I'm an artist, not a doctor."
The big treat at Liebman's was seltzer from Walter the Seltzer Man. Because we visit so many delis in such a short time, and because we focus on deli sandwiches and salads, we drink only seltzer (except for Gary who likes plain water). Having sampled seltzer at virtually every one of the delis we've visited, we all agreed that Walter's seltzer was by far the best we've had. The seltzer was delivered to the table in a large green glass bottle with a spritzer on top. We had to shake it and spritz it into our glasses. You'd think all seltzer would be the same, but this was really good stuff. Not to be missed! The rest of the food was a cut or two above Loeser's and pretty good. Larry for some reason said the pickles tasted like the smell of the elephant house at the zoo. I have no idea what he was talking about. They tasted pretty good to me.
After the meal, we were off to Schatzi the Butcher at 86th and Amsterdam. Schatzi's is a true butcher shop rather than a restaurant, but some of the internet reviews on their pastrami and their "dirty brisket" sandwich made it sound too good to pass up. While I was planning the trip, I talked on the phone to Schatzi. When I told him that we would be sampling a bunch of delis, he suggested it would be a waste of time. Once we tried his sandwiches, we wouldn't want to go anywhere else. This was a challenge and we were only too glad to accept it.
As we were walking into Schatzi's, we passed Barney Greengrass, another famous deli, and we wondered if it shouldn't have been on our list. I had been there, and it was good, but it didn't make this year's cut. Gary also pointed out the giant sign on the front that said "Sturgeon Specials." Gary said, "What kind of deli advertises sturgeon?"
Schatzi's is a small shop without tables, just a counter with little standing room. We ordered the dirty brisket and pastrami along with the pickles, potato salad, and cole slaw. They were out of corned beef, and therefore we haven't included them in our final numerical ratings.
We walked for 10 minutes to Central Park, where we found a bench to sit on while we ate. The pastrami was flavorful but almost impossible to eat because it was marbled with gristle – very unpleasant. The pickles were, as Malcolm noted "inedible, with an overwhelming taste of industrial cleaner and salt," or as Larry noted, "horrible." We each had one bite and tossed the remains into the Park for the squirrels, although I suspect most squirrels wouldn't touch such a poor representation of a pickle. The potato salad and cole slaw weren't much better, but at least, they ended up in the trash, not in the park. The dirty brisket wasn't really brisket, but pot roast with a barbeque sauce. It was okay, but nothing special. On the whole, Schatzi's was very disappointing. I suppose they may sell good meats to take home and cook, but this was our most disappointing experience. Larry aptly noted, "Schatzi is a character, but his food, for the most part, lacks character." Schatzi should put the pickle jars out back and leave them there.
For our fourth stop, and dinner, we drove across town, through the Park to Pastrami Queen at Lexington and 81st and arrived at 4:30 p.m. Pastrami Queen is a tiny place – with one table for two, two tables for four, and of course a tiny bathroom. We managed to squeeze in at our table. Their Seagrams Sparkling water in a can was dreadful. Much better was the Canada Dry in a can. Of course nothing compares to Walter the Seltzer Man's product. The restaurant was cramped and lacked karma and we were disappointed again. I assume it gets a big lunch crowd. The food was decent, but certainly not special.
About this time, we had a discussion about how to rate pickles. Pickles get a maximum total of 14 points. Some of us like the half-sours best (Larry, Gary and Malcolm), and I go for the full sours. I had been scoring based on how much I liked the full sours, while the 3 others had been scoring the two types of pickles individually. We decided the best option was to award 7 points for each type of pickle. So a maximum 7 for the half-sour plus 7 for the full sour, total max of 14 points.
Next stop, Times Square for our visit to the TKTS booth for Broadway show tickets. At the TKTS booth we debated what to see. Off Broadway offered Old Jews Telling Jokes or The Fantastics, while Broadway had War Horse, or Bring It On. Since we wanted something upbeat, we agreed to buy tickets for Bring it On, which we anticipated would be a mildly entertaining if mindless musical about cheerleaders.
We had time before the show, so we walked a few blocks and spent 60 minutes at the top of the bleachers by TKTS, one of my all-time favorite places. Next, we walked 10 blocks to Stage Deli, so I could replenish my supply of Stage Deli Mustard—my choice for the best mustard anywhere. I picked up five jars. I would have had them send it to me, but the delivery cost is around $30.
We then headed to the theater. Again this year, as in past years, we hit paydirt. We all loved the show. A little slow and full of clichés to start, it picked up in the second half and surprised us all, far exceeding our expectations.
After the show we headed over to Sarge's Delicatessen/Restaurant between 34th and 35th on 2nd Ave. Sarge's got our top rating last year and we wanted to give it a repeat try. We arrived in the area around 10:30 p.m. and spent close to a half hour looking for a parking spot. But Sarge's didn't disappoint. It was far better than any of our 4 previous delis from that day, and again a first class experience. As Malcolm wrote, "Up against the first four, it was no contest: Sarge's in a landslide. High quality meats across the board. The corned beef was delicate with subtle flavor, pastrami excellent, brisket also good – moist with nice texture, no excess fat. The pickles (especially the half sours) were excellent, the potato salad was the best so far, without that cloying sweetness that seems so common these days. Cole slaw was also good." He summed it up for all of us, and again, for the second year in a row, Sarge's, although pricey, was our favorite deli of the trip.
Three weeks after our trip, we read about a massive fire at Sarge's that will result in it being closed for quite some time while they rebuild. All of us were saddened by this terrible news.
We headed off to the Marriott Courtyard in Secaucus NJ, where we arrived at 1 a.m. In the morning at the hotel, we met a bunch of drum majors from Virginia on their way to the National Marching Band Championships at Met-Life Stadium. I would have loved to have watched for a bit, but we had to stick to the schedule.
At 8:20 a.m. we were back on the road headed to Katz's Delicatessen for a hearty breakfast of pastrami, corned beef, brisket, etc. There, we encountered a surprisingly empty restaurant. Then again, it was at 8:45 a.m. on a Sunday — that's evidently the time to go if you want a table. Every other time we've been to Katz's, the place was packed. We recently read articles about how Katz's stayed open during the Hurricane Sandy power outage by using lots of generators and dry ice. They lost money, but felt it was important to take care of their customers.
As in the past, the pastrami was amazing. As we've said before, we'd guess it's the best in the world. The samples were mouthwatering. I'm primarily a corned beef lover, but I have to admit, their pastrami is special. Interestingly enough, we discovered something unique about Katz's this trip, and something that helped us understand why their pastrami is always so good. While Gary was watching the meat-cutter hand-slice a one-pound take-out portion of pastrami (Gary bought it to take to his envious wife), the meat-cutter handed Gary a hot sample (as they always do). Gary noticed that as good as it was, it just seemed a bit off. The meat cutter kept slicing and slicing and kept looking at the meat carefully. Then he picked it all up and threw it in the trash. Evidently this cut was not up to the high quality Katz's demands. After disposing of the meat he went over to the area where they store the meat, retrieved a fresh pastrami that must have weighed 10-15 lbs, and started slicing new meat for Gary to take home. He gave Gary another sample, which was mouthwatering and amazing, and much better than the earlier sample. Maybe Katz's quality control explains why their meats are so much better. They only serve the best of the best. The brisket and corned beef were good, but it's the pastrami that truly excels. The pickles were also excellent. Malcolm and Gary took home carryout pastrami and a couple of us bought pickles for home. Larry bought four bottles of Katz's seltzer. Not quite as good as the seltzer from Walter the Seltzer Man, but still very good. And, just like Walter's, this seltzer was bottled in glass, not in a can or a plastic bottle.
After Katz's we headed over to the 9/11 Memorial for our 11 a.m. reservation. After 30 minutes of searching for a parking space, we gave up and plunked down $34 (gag!!!!) for two hours of parking. We should have taken a cab from our free spot at Katz's. After 30 minutes of winding through the line, we finally got our view of the spectacular memorial. If you're in NYC, this is a must-see, very moving.
Our next stop at 12:30 pm was Mill Basin Kosher Deli in Brooklyn. It's an average, plain, kosher deli. Our waitress was an impatient, young, attractive Russian-born woman. She took our order but was unable or unwilling to answer any of my questions. Gary and I looked at each other and said together, "No 20% tip for her!" The waitress and I sort of eyed each other throughout our time with semi-dirty looks. She softened up a bit after seeing Larry's scale and asked, "Are you doing some kind of research?" We told her what we were doing and then my bad side was tempted to loudly add, "We're also reviewing waitresses," but my deli partners kept me at bay. As for the food, the pastrami was our least favorite of the trio. While the taste was not bad, it had gristle like Schatzi's. As Larry noted, "They are surely not putting on a show for the reviewers."
We headed to our last deli, the newly opened second branch of David's Brisket House in Bay Ridge, Brooklyn. (2017 update--The Brooklyn branch of this deli closed this year, although the Nostrand Ave. branch remains open.) We had been to David's on Nostrand Avenue the last two years, but wanted to try this new branch. Unique for a "jewish-food" type deli, this is run by Muslims. The food was first-rate again, and the restaurant was a dramatic improvement over the Nostrand Ave. David's. The Nostrand Avenue version was grungy, small and didn't seem particularly clean. This new place was spacious, spotless, and had lots of tables. The counter folks were super friendly and accommodating. Larry asked for ice for his drink, but when Mohammed, the manager, realized he didn't have any, he walked next door to borrow some from the neighboring restaurant. That's customer service. The meat was some of the best on our trip. David's had my favorite corned beef of all the restaurants, cut thick. Across the board the meats got the best combined score. They were the best bargain to boot. Larry thought the sour pickles were the best. They didn't have cole slaw or potato salad. For that reason, we assigned David's the average potato salad and cole slaw rating that the 7 other restaurants received, so that David's could compete in our numerical evaluations. I'm not sure whether they were out of these sides that day, or just didn't carry them. I purchased my take-out corned beef from there.
After another exciting two days, we headed for home around 4:00 p.m. We debated whether to stop in downtown Philadelphia to try Schlesinger's Restaurant and Delicatessen or Herschel's deli. I had eaten at both last month on a Philly trip and found them to be on a par with some of the best NY delis. I thought Schlesinger's pastrami was superb, comparable to Katz's. But eight delis in one trip was our max. Next time, we'll have to make the Philly stop.
Conclusions? Once again Sarge's was our favorite. Superb in every aspect, food and atmosphere. It's worth the extra money. For overall quality, ambience, and price, it compares with its neighbor, 2nd Ave Deli, which we visited on previous trips. Katz's? Hands down the best pastrami anywhere. David's? After our last trip, we recommended David's for great meat and the best value, but only for takeout. We loved the new David's in Bay Ridge, where you could take your kids. Their meats also got our top overall rating and are significantly less expensive than the other top-quality places. See Malcolm's comments below along with Gary's compilation of our ratings.
We're already starting to plan next year's trip. And, once again, we will be visiting delis!
N.B. It is with great sadness that we have to report that Stage Deli, a NYC landmark for 75 years, closed its doors for good. Delis are an endangered species these days. You can read more here http://www.nytimes.com/2012/12/01/dining/stage-delis-closing-ends-a-restaurant-war.html
Malcolm's Comments:
Loeser's – Classic case of a deli in decline with a dark, rundown interior and owner to match (though he was friendly). Brisket (BR) was moist but too fatty and rather bland. Corned beef (CB) was also dry with ordinary flavor and texture. The Pastrami (PA), while a little dry, was quite flavorful. Pickles (PI) very average. Potato salad (PS) was dry, mealy and sweet. Cole slaw (CS) had a distinctive flavor, was fairly crunchy and not too sweet.
Liebman's – This deli in contrast to Loeser's looked to be fairly successful, w/clean and apparently updated interior. BR was moist with good flavor, a little fatty. CB likewise was moist, tender and tasty. PA also moist with subtle flavor. PI average overall but the half sours were excellent. PS had good texture but too sweet and creamy. CS was quite good, much like Loeser's in terms of flavor and texture.
Schatzie's – Probably not worth rating. BR was not in fact BR but was really a pot roast; it would not be useful to compare Schatzie's "dirty BR" to the other delis. PA had decent flavor but was inedible due to extensive gristling and dry to boot. PS was a sickeningly sweet offering of the German style. CS was likewise poor. PI (a full sour) was inedible, with overwhelming taste of industrial cleaner and salt.
Pastrami Queen – This deli was uncomfortably tight and lacked karma. However, all was not lost. PA was quite tasty and moist. CB was very moist and tender but had a metallic flavor. BR was very tasty, a little dry but lean. Bread was lousy. PI average, PS too sweet and creamy, CS limp and bland.
Sarge's – Up against the first four, it was no contest: Sarge's in a landslide. High quality meats across the board. The corned beef was delicate with subtle flavor, pastrami excellent, brisket also good – moist with nice texture, no excess fat. The pickles (especially the half sours) were excellent, the potato salad was the best so far, without that cloying sweetness that seems so common these days. Cole slaw was also good.
Katz' – ok, I admit that I am biased, but the PA is simply the best in every way – NO CONTEST HERE! Bought 1.5 pounds to take home and finished it within a few days. If you feed this to someone who is originally from New York as I did in the case of my wife, the reaction confirms the fact that DC delis can't touch NY. On the other hand, the CB was off, with some flavor that is alien to the meat – seemed like an anomaly. BR had excellent flavor but was dry - I am coming around to the view that the best tasting BR tends to be dry (I am not a big fan of fatty brisket), so you might as well take the gravy when offered and pour it on, hoping that it's good. PI are also the best and I bought mine (half sours) here (though you could also buy at Sarge's). PS and CS ordinary – the slaw had nice crunch but was too creamy and sweet, which points out that CS may well be a matter of personal taste – one man's gold may be another's lead (or something).
Mill Basin Deli – fair to middling overall. PA had a Schatziesque toughness and was rubbery as well. CB was innocuous. BR had a light flavor, was moist and not too fatty. PI solid. PS sickeningly sweet. CS bland but crunchy. By now, you realize that there are basically two versions of CS – one that's basically sweet with a lot of mayo and one that has a little more snap and is dressed more like a salad – personal preference dictates which one the reader will like, but none of them were particularly great.
David's – the new (second) location was a marked contrast to the original David's that must be on a health inspection watchlist – clean and roomy. PA was excellent – provocative flavor, on the edge fatwise (and perhaps a tad salty?) but very good. CB was moist and tender with good flavor, perhaps a tad fatty? BR had fine flavor, not fatty, but a little dry – it was not up to the original David's version. Tried to moisturize it with the gravy but the gravy was lousy. PI offered full sour only (good), PS and CS unavailable | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,081 |
\section{Introduction}
\label{intro}
This paper studies the problem of multi-task offline reinforcement learning (RL).
We first define the multi-task offline RL problem as learning a single policy that solves multiple tasks from previously collected data, without online interaction with the environment. For example, suppose we want grocery robots to acquire a range of different behaviours (e.g., lifting cans, picking up bowls, and opening closets). In that case, it is more practical to learn an extensive repertoire of behaviours using all previously collected datasets rather than learning each skill from scratch.
\begin{figure}[t!]
\centering
\vspace{0.1in}
\includegraphics[width=0.48\textwidth]{figures/intro-5.PNG}
\vspace{-0.2in}
\caption{
Switch Trajectory Transformer architecture. The model consists of an action transformer, a dynamics transformer, and a return transformer. The task id, returns, and states are fed into an embedding layer and then into the action transformer. The action transformer predicts the next action, which is passed to the dynamics transformer to predict the next state. The return transformer uses this information to predict the return; this process is repeated until the planning horizon is reached.
}
\vspace{-0.2in}
\label{fig:method-arch}
\end{figure}
The large diversity of datasets collected across tasks poses difficulties for traditional multi-task offline RL methods \cite{yu2021conservative, yu2021data, kalashnikov2021scaling}.
Specifically, these methods emphasize transferring skill knowledge across related tasks and sharing experience across different tasks. Such a data-sharing strategy makes the learnt multi-task policy sensitive to differences in data distributions and to the relationships among tasks. The inherent conflict arising from task differences can harm the policy on at least some of the tasks, particularly when model parameters are shared among all tasks.
Recent offline RL works such as the Decision Transformer \cite{chen2021decision} and the Trajectory Transformer \cite{janner2021reinforcement}, which abstract RL as sequence modelling, demonstrate the capability of turning large datasets into powerful decision-making engines. Such a modelling design benefits the multi-task RL problem by providing a high-capacity model for handling task differences and absorbing the vast knowledge in the collected diverse dataset, and it also makes it possible for multi-task RL methods to adopt the associated advances \cite{fedus2021switch} in language modelling.
However, adopting such high-capacity sequential models for the multi-task RL problem poses three significant algorithmic challenges. The first is the high computation cost of the large model capacity that is needed to absorb the vast knowledge available in a large heterogeneous dataset. The second is that sharing the same policy parameters across different tasks can degrade performance relative to simple single-task training.
The third is the poor performance caused by the Monte Carlo value estimator, especially in sparse-reward settings. The Monte Carlo estimator suffers from poor sample complexity when online data collection is not allowed, and it is too uninformative to guide the beam-search-based planning procedure.
To handle these challenges, we propose SwitchTT (Switch Trajectory Transformer), a multi-task extension to Trajectory Transformer but enhanced with two striking features.
First, unlike the Trajectory Transformer and other traditional multi-task RL methods, which reuse the same parameters for all input data, our method exploits a sparsely activated model for multi-task offline model training, with the sparsity coming from selecting different model parameters for each incoming example.
Such a model allows us to perform efficient computation in high-capacity neural networks and improve parameter sharing in multi-task learning.
Second, SwitchTT develops a trajectory-based distributional value estimator that learns the value distribution of a trajectory instead of its expected value, as illustrated in Figure \ref{fig:rtg-overview}. Such a distributional estimator enables us to measure and utilize uncertainty around the reward, mitigating the poor sample complexity of the Monte Carlo value estimator, especially in the sparse-reward setting, and leading to better value estimates in the offline setting.
Inspired by the Trajectory Transformer, which abstracts offline RL as a sequence modelling problem,
our method tackles multi-task RL with the tools of sequence modelling, utilizing the switch transformer model to model distributions over multi-task trajectories and applying beam search to plan the action sequence with the highest reward.
A high-level overview of SwitchTT is illustrated in Figure \ref{fig:method-arch}. Specifically, we first utilize switch transformer models to train the decision, return-to-go (RTG), and dynamics models, replacing the standard feed-forward layer in the transformer with simplified Mixture-of-Experts (MoE) layers. The MoE layer contains a set of expert networks and a gating network; the gating network takes an observation representation as input and routes it to the best-matching expert network, which produces the corresponding output. Under this MoE layer, we view each expert network as a task learner and the gating network as a router that sends each task's input to the corresponding expert. Second, we predict trajectories with the learned model and develop a distributional value estimator to evaluate the predicted trajectories and select the one with the highest reward. We will provide more details in Section \ref{sec:mth}.
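
To make the routing concrete, the following is a minimal PyTorch-style sketch of a top-1 (switch-style) MoE feed-forward layer; the class and parameter names (\texttt{SwitchFFN}, \texttt{num\_experts}, \texttt{d\_model}) are illustrative assumptions for this sketch and do not correspond to our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    """Top-1 (switch-style) mixture-of-experts feed-forward layer (sketch)."""
    def __init__(self, d_model, d_ff, num_experts):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                           nn.Linear(d_ff, d_model))
             for _ in range(num_experts)])

    def forward(self, x):  # x: (num_tokens, d_model)
        gate = torch.softmax(self.router(x), dim=-1)  # routing probabilities
        expert_idx = gate.argmax(dim=-1)              # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                # Scale by the gate value so the router receives gradients.
                out[mask] = gate[mask, e].unsqueeze(-1) * expert(x[mask])
        return out
\end{verbatim}
In a switch-transformer block, such a layer would replace the dense feed-forward sub-layer, so only one expert's parameters are evaluated per token, which is what keeps the computation cost roughly constant as the number of experts (and hence the model capacity) grows.
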
This paper makes two major contributions: (i) we exploit a sparsely activated model for multi-task model learning, which reduces the computation cost and improves multi-task learning performance; (ii) we develop a trajectory-based distributional value estimator that learns a better value estimate, improving offline reinforcement learning performance. The rest of the paper is organized as follows. Section \ref{sec:pre} introduces preliminaries. Section \ref{sec:mth} describes the implementation of SwitchTT. Section \ref{sec:exp} presents detailed experimental results that demonstrate the advantages of SwitchTT. Section \ref{sec:related} reviews related work, and Section \ref{sec:concl} concludes the paper with further discussion.
\section{Preliminaries}
\label{sec:pre}
\paragraph{Offline Reinforcement Learning}
Here, we define the essential reinforcement learning (RL) concepts, following standard textbook definitions \cite{sutton2018reinforcement}. Reinforcement learning addresses the problem of learning to control a dynamical system in a general sense. The dynamical system is fully defined by a Markov decision process (MDP).
The MDP is defined by the tuple \( {M} =(\mathcal{S}, \mathcal{A}, r, P, \gamma, \rho_0 )\), where
\(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the continuous action space, \(r\colon\mathcal{S} \times \mathcal{A} \to \mathbb{R} \) is the reward function, \(\gamma\) is the discount factor and \(\rho_0\) represents the initial state distribution.
\(P(s'\mid s,a)\) represents the transition probabilities, which specifies the probabilities of transition from the state \(s\) to \(s'\) under the action \(a\).
A trajectory is made up of a sequence of states, actions, and rewards:
$ \tau = (s_0, a_0, r_0, s_1, a_1, r_1, . . . , s_T , a_T , r_T )$. The return of a trajectory at timestep $t$, $R_t=\sum_{t'=t}^T{r_{t'}}$
is the sum of future rewards from that timestep. The goal of RL is to find an optimal policy that maximizes the expected return $\mathbb{E}[\sum_{t=1}^T{r_{t}}]$ in a MDP.
Unlike online RL, which involves iteratively collecting experience by interacting with the environment, offline RL learns the optimal policy from a fixed, limited dataset $\mathcal{D} = \{(s_j,a_j,s_j',r_j)\}_{j=1}^{N}$, consisting of trajectory rollouts of arbitrary policies. Online interaction with the environment is too expensive and time-consuming for some systems, which motivates offline RL, since it requires no additional interaction. But this setting also poses major challenges: methods often cannot learn effectively from the offline data alone, without any additional on-policy interaction. Also, the exploration ability of the agent is outside the scope of such methods.
\paragraph{Multi-Task Reinforcement Learning}
The goal of multi-task RL is to find an optimal policy that maximizes the expected return in a multi-task Markov decision process (MDP), defined as \( {M} =(\mathcal{S}, \mathcal{A}, \{{r_i}\}_{i=1}^N, P, \gamma, \rho_0 )\), where $\{r_i\}_{i=1}^N$ is a finite set of tasks and the other elements follow the definitions in the offline RL subsection. Each task $i$ has a different reward function $r_i$, but all tasks share the dynamics $P$.
In this work, we focus on the multi-task offline RL setting, aiming to find a policy $\pi(a|s)$ that maximizes the expected return over all tasks: \(
\pi^*(a|s) = \arg\max_{\pi}\mathbb{E}_{i\sim[N]}\mathbb{E}_{\pi}[\sum_{t=1}^T{r_i(s_t,a_t)}] \), given a dataset $\mathcal{D} = \cup_{i=1}^{N}\mathcal{D}_i$ where $\mathcal{D}_i$ consists of experiences from task $i$.
\paragraph{Trajectory Transformer}
Trajectory Transformer \cite{janner2021reinforcement} formulates offline RL as a generic sequence modelling problem. The core of this approach is to use a Transformer architecture to model distributions over trajectories in the offline dataset and to repurpose beam search as a planning algorithm to find the optimal action. Specifically, Trajectory Transformer augments each transition in the trajectory $\tau$ with the reward-to-go $R_t=\sum_{t'=t}^T\gamma^{t'-t}r_{t'}$, then discretizes a trajectory $\tau$ with $N$-dimensional states and $M$-dimensional actions into a sequence of length $T(N+M+1)$: $\tau = (...,s_t^1,s_t^2,...,s_t^N,a_t^1,a_t^2,...,a_t^M,r_t,...), t=0...T$. To model the distribution over such trajectories, they mirror a smaller-scale GPT \cite{radford2018improving} architecture, parameterized by $\theta$ with induced conditional probabilities $P_{\theta}$, maximising the following objective:
\vspace{-0.2in}
\begin{equation}
\begin{multlined}
L(\tau) = \sum_{t=0}^{T}\Big(
\sum_{i=1}^{N}\log P_\theta(s_t^{i}\mid s_t^{<i}, \tau_{<t}) + \\
\sum_{j=1}^{M}\log P_\theta(a_t^{j}\mid a_t^{<j}, s_t, \tau_{<t})
+ \log P_\theta(R_t\mid a_t, s_t, \tau_{<t})
\Big)
\end{multlined}
\end{equation}
\vspace{-0.1in}
Then, beam search takes the past trajectory as input to the trained model and greedily selects the predicted trajectory $\tau$ with the highest reward. For other details of the Trajectory Transformer, we refer the reader to the original paper \cite{janner2021reinforcement}.
In this work, we extend the Trajectory Transformer with two enhanced features to solve multi-task RL. The first is to exploit the switch transformer instead of the vanilla transformer architecture, and the second is to adopt a distributional value estimator to guide the beam search. We give more details in Section \ref{sec:mth}.
\paragraph{Switch Transformer}
Switch Transformer is a sparsely activated model designed to maximize the parameter count of a Transformer model in a simple and computationally efficient way. The critical difference is that, instead of containing a single feed-forward neural network (FFN) as in the original transformer, each switch layer contains multiple FFNs, each known as an expert.
More specifically, the switch layer consists of a set of $n$ ``expert networks'' $ E_1, \cdot\cdot\cdot, E_n $ and a ``gating network'' $G$, whose output is a sparse $n$-dimensional vector. The experts are themselves neural networks, each with its own parameters. This layer takes a token representation $x$ as input and routes it to the best-determined top-$k$ experts, selected from the set $\{E_i\}_{i=1}^n$. The router variable $W_r$ produces logits $h(x)=W_r \cdot x$, which are normalized via a softmax distribution over the $n$ available experts at that layer. The gate value for expert $i$ is given by:
\vspace{-0.05in}
\begin{equation}
p_i(x) = \frac{e^{h(x)_i}}{\sum_j^n{e^{h(x)_j}}}.
\end{equation}
\vspace{-0.15in}
The top-$k$ gate values are selected for routing the token $x$. If $\mathcal{T}$ is the set of selected top-$k$ indices then the output computation of the layer is the linearly weighted combination of each expert's computation on the token by the gate value,
\begin{equation}
y=\sum_{i\in \mathcal{T}}{p_i(x)E_i(x)}.
\end{equation}
\vspace{-0.15in}
Instead of routing to $k$ experts, we route input tokens to only a single expert. Switch Transformer shows this simplification preserves model quality, reduces routing computation and performs better.
The benefit of the switch layer for solving reinforcement learning problems is two-fold: (1) each expert can be considered a task expert, such as lifting cans, picking up bowls, or opening a closet; in this way, we can combine the knowledge of multiple experts inside a single policy to solve numerous tasks. (2) The gating network can be considered a switching strategy, which measures the confidence of each expert and chooses the expert with the highest confidence to solve each task.
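To make the routing concrete, the following is a minimal sketch of such a top-1 switch layer in PyTorch-style Python. The module, activation and dimension names are our own illustrative choices and are not taken from any released implementation: a linear router produces the gate distribution $p_i(x)$, and each token is processed by the single expert with the highest gate value, scaled by that gate value.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Top-1 switch layer: a router picks one expert FFN per token."""
    def __init__(self, d_model, d_ff, n_experts):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # W_r
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff),
                          nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):             # x: (batch, tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)     # p_i(x)
        gate, idx = probs.max(dim=-1)                 # top-1 routing
        y = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = idx == i           # tokens routed to expert i
            if mask.any():
                y[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return y
\end{verbatim}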
\section{Method}
\label{sec:mth}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figures/method-6.PNG}
\vspace{-0.15in}
\caption{
Training, planning, and the action planner in SwitchTT: (A) The action transformer model training process. Here we diagram three task tokens being routed to different experts. First, trajectories in the collected episodes are transformed into the trajectory representation by adding the task id and computing returns-to-go. Each token is then fed into a switch transformer model, which replaces the dense feed-forward network (FFN) layer present in the transformer with a sparse switch FFN layer. The switch layer returns the output of the selected expert multiplied by the router gate value. (B) The planning process. First, all visited states, returns-to-go, actions and the task id are stacked into a sequence $\{i,R_0,s_0,a_0,...,R_t,s_t\}$. The sequence is then fed into the switch transformer models to generate four predicted candidate pairs of $\{a_t,s_{t+1},R_{t+1}\}$. The sequence is stacked again and a single pair is generated at each subsequent step until the designed horizon length, which yields a value estimate for each action chosen at the first generation step. (C) The action planner. The planner is run at each timestep as described in (B). At each timestep, the action of the column with the highest value is sampled and executed in the environment.
}
\vspace{-0.05in}
\label{fig:method-overview}
\end{figure*}
\vspace{-0.1in}
This section presents SwitchTT, a multi-task extension of the Trajectory Transformer, in detail. The method is illustrated in Figure \ref{fig:method-overview}. Following Trajectory Transformer, SwitchTT models the multi-task offline RL problem as a sequence modelling problem and divides the algorithm into three phases (data collection, offline model training and planning). Since our model and planning strategy are nearly identical to the Trajectory Transformer, we briefly describe the three phases and emphasize the critical differences: the switch transformer model architecture and the implementation of the distributional value estimator.
\subsection{Data Collection}
In the data collection phase, we collect a combined dataset $\mathcal{D}=\cup_{i=1}^{N}\mathcal{D}_i$ across tasks $1..N$ and transform the dataset into a sequential representation. Specifically, instead of using an expert policy of a certain level to interact with the environment, we utilize the online RL algorithm PPO \cite{schulman2017proximal} to solve each task and collect its replay dataset $\mathcal{D}_i$. This adds exploratory trajectories to the dataset and increases the diversity of the trajectories inside it.
Then, trajectories from different tasks are combined into a single dataset $\mathcal{D}$. Each trajectory is transformed into the trajectory representation following Decision Transformer \cite{chen2021decision}. Instead of feeding the rewards directly, we feed the model with the returns-to-go $\hat{R_t}=\sum_{t'=t}^T{r_{t'}}$. This leads to the following trajectory representation of task $i$:
\begin{equation}
\tau = (i,\hat{R_0}, s_0, a_0, i, \hat{R_1}, s_1, a_1, . . . , i,\hat{R_T}, s_T , a_T).
\end{equation}
At test time, we feed the target return and the first state $(R_0,s_0)$ into the model and obtain the desired action $a_0$ from the planning phase, described in a later subsection. Here, $R_0$ represents the desired performance, usually the maximum cumulative reward of the task, and $s_0$ is the first state observed from the environment. After executing the action, we receive $(r_0,s_1)$ from the environment and decrease the target return via $R_1=R_0-r_0$. We then feed the current trajectory to the model and repeat until the episode terminates.
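As a small illustration of this preprocessing step (a sketch using our own helper names, not the actual data pipeline), the returns-to-go and the per-timestep tuples can be computed as follows:
\begin{verbatim}
import numpy as np

def returns_to_go(rewards):
    """R_t = sum of rewards from timestep t to the end of the episode."""
    r = np.asarray(rewards, dtype=np.float32)
    return np.cumsum(r[::-1])[::-1]

def to_trajectory_representation(task_id, states, actions, rewards):
    """Interleave (task id, return-to-go, state, action) per timestep."""
    rtg = returns_to_go(rewards)
    return [(task_id, rtg[t], states[t], actions[t])
            for t in range(len(rewards))]
\end{verbatim}
At test time the same representation is built incrementally, starting from the target return $R_0$ and decreasing it by the observed reward after every executed action, as described above.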
\subsection{Training Phase}
In the training phase, we train offline models modelling the trajectory distribution in the dataset $\mathcal{D}$ and use them for the planning phase.
Instead of modelling the trajectory inside a single model like Trajectory Transformer, we train three separate models: (i) a decision transformer $\theta$, modelling the joint distribution of the state, action, and reward sequences; (ii) a dynamics transformer $f$, predicting the next state from past states and actions; (iii) a return-to-go (RTG) transformer $\phi$, a trajectory-based distributional value estimator predicting the value distribution of a state-action trajectory. The training procedure of each transformer follows \citet{chen2021decision} and is the same as for the original Transformer model. The objective of the decision transformer is to minimize the cross-entropy loss between predicted and true actions; for the dynamics transformer, we define the loss as the cross-entropy between predicted and true states. Unlike prior work, we use the switch transformer architecture instead of the plain transformer model, and we describe the distributional value estimator in detail below.
We feed the last $K$ timesteps into the decision transformer, for a total of $4K$ tokens. For the dynamics transformer and the return-to-go transformer, we also provide the previous $K$ timesteps, but only $3K$ tokens, consisting of the task id, states and actions. The output of the dynamics transformer is the next-timestep state, and the output of the return-to-go transformer is the distribution of the input trajectory's future return.
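As a rough sketch of the supervised objective for the decision model (assuming the actions have been discretized so that the model outputs per-token logits; the function and variable names are ours), a single training step looks like:
\begin{verbatim}
import torch.nn.functional as F

def decision_model_step(model, optimizer, tokens, target_actions):
    """One supervised step: cross-entropy between predicted and
    true (discretized) actions over the last K timesteps."""
    logits = model(tokens)            # (batch, K, n_action_bins)
    loss = F.cross_entropy(logits.flatten(0, 1),
                           target_actions.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}
The dynamics transformer is trained in the same way, with the next states as targets.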
\paragraph{Trajectory-based Distributional Value Estimator}
\begin{figure}[h!]
\centering
\vspace{-0.1in}
\includegraphics[width=0.45\textwidth]{figures/rrtg-2.PNG}
\vspace{-0.1in}
\caption{Overview of the return-to-go transformer. Instead of predicting the expected value of a trajectory, the return-to-go transformer discretizes the reward and predicts the value distribution of the trajectory. }
\vspace{-0.25in}
\label{fig:rtg-overview}
\end{figure}
As shown in Figure \ref{fig:rtg-overview}, return-to-go transformer is a trajectory-based distributional value estimator that estimates the value distribution of the state-action trajectory instead of the expected return.
Here, we model the distribution using a discrete distribution (categorical distribution) parameterized by
$N\in \mathbb{N}$ and $V_\text{min},V_\text{max}\in \mathbb{R}$.
The support of such distribution is the set of atoms
$\{ z_i = V_\text{min} + i \triangle z: 0\leq i < N \}$,
$ \triangle z = {\tfrac{V_\text{max}-V_\text{min}}{ N-1}}$, and the probability of atom $z_i$ is given by a parametric model $\theta: \mathcal{X} \rightarrow \mathbb{R}^N $, as shown in Figure \ref{fig:rtg-overview}, whose input is a three-step state-action trajectory $x= \{ s_t,a_t,s_{t+1},a_{t+1},s_{t+2}, a_{t+2} \}$ and whose output is the probability vector over atoms:
\vspace{-0.1in}
\begin{equation}
p_i(x) = \frac{e^{\theta(x)_i}}{\sum_{j=1}^{N}{e^{\theta(x)_j}}}
\end{equation}
\vspace{-0.1in}
Then we cast value distribution learning as a multiclass classification problem. Given a sample trajectory $ \tau _t = \{\hat{R_j}, s_j, a_j\}_{j=t}^{t+3}$ from the dataset, we label the trajectory with the class $ c = \lfloor { (\hat{R}_{t+3} - V_{\text{min}}) / \triangle z } \rceil $ and use gradient descent to find the parameters $\theta$ that minimize the cross-entropy loss between the labelled class and the discrete value distribution. In the planning phase, the output value of a given trajectory $x$ is the linear combination of the atoms weighted by their probabilities:
\vspace{-0.1in}
\begin{equation}
V(x)=\sum_{i=0}^{N-1}{p_i(x)z_i}.
\label{eq-value}
\end{equation}
\vspace{-0.1in}
The advantage of using distributional value learning is to learn a better value approximation: while the Monte Carlo value estimate suffers from poor sample complexity, especially in sparse-reward tasks, learning the distribution preserves information about suboptimal behaviour.
Also, by leveraging the transformer model architecture, we learn the distribution in a simple self-supervised manner instead of by dynamic programming as in the distributional RL work \cite{bellemare2017distributional}.
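To make the estimator concrete, a minimal sketch of the discretization and the value readout is given below. The helper names are ours; the parametric model $\theta$ itself is the return-to-go transformer described above.
\begin{verbatim}
import numpy as np

def make_atoms(v_min, v_max, n_atoms):
    """Support z_i = V_min + i * dz for i = 0..N-1."""
    dz = (v_max - v_min) / (n_atoms - 1)
    return v_min + dz * np.arange(n_atoms), dz

def return_to_class(rtg, v_min, dz, n_atoms):
    """Label a trajectory with the atom closest to its observed return."""
    c = int(round((rtg - v_min) / dz))
    return min(max(c, 0), n_atoms - 1)    # clip into the support

def expected_value(probs, atoms):
    """V(x) = sum_i p_i(x) * z_i."""
    return float(np.dot(probs, atoms))
\end{verbatim}
During training, the class returned by \texttt{return\_to\_class} is the target of the cross-entropy loss; during planning, \texttt{expected\_value} is used to score imagined trajectories.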
\paragraph{Model Architecture} We utilize a switch transformer to learn the distribution of the combined dataset. The model architecture is extended from the GPT \cite{radford2018improving} architecture, but we replace the dense feed-forward network (FFN) layer present in the transformer with a switch layer consisting of multiple FFN layers. Figure \ref{fig:method-overview} illustrates the detail of the switch layer in the transformer models. When a token passes through this layer, it first goes through a router function, which routes the token to a specific expert. Under the multi-task RL setting, we consider each expert as a policy model, and the router routes the observation from a task to the highest-confidence expert. In our method, we implement a mirror version of the Switch Transformer architecture, and the details of the switch layer are described in \cite{fedus2021switch}.
\subsection{Planning Phase}
In the planning phase, we use beam search to generate the action with the highest value estimate based on imagined sequences. We summarize this phase as the Switch Planner and describe it in Algorithm \ref{alg:swtichpl}.
The algorithm requires the models $(\theta,f,\phi)$ from the training phase, the hyperparameters candidate number $c$ and horizon length $h$, and the current sequence $x$ as input. Here, $c$ determines the number of imagined trajectories, $h$ defines the horizon length of each imagined trajectory, and $x$ is collected from the past states, actions and rewards up to timestep $t$: $x=\{R_i,s_i,a_i\}_{i=0}^{t}$. The planning phase illustrated in Figure \ref{fig:method-overview} uses candidate number $c=4$ and horizon length $h=3$.
\vspace{-0.1in}
\begin{algorithm}[h!]
\caption{SwitchTT Planner}
\label{alg:swtichpl}
\begin{algorithmic}
\STATE {\bfseries Input:} sequence {x}, decision model $\theta$, dynamics model $f$, reward model $\phi$, candidate number $c$, horizon length $h$.
\STATE Initialize $\mathcal{T}=\{\}$, $\mathcal{A}=\{\theta(x)[i]\}_{i=1}^{c}$
\FOR{$i=1$ {\bfseries to} $c$}
\STATE{
Sample $i^{th}$ action candidate $a^i_0=\theta(x){\small{[i]}}$
\newline
Initialize trajectory $\tau_i=\{x,a^i_0\}$
\FOR{$j=1$ {\bfseries to} $h$}
\STATE{
Predict state $s^i_j=f(\tau_i)$
\newline
Imagine trajectory $\tau_i = \tau_i \cup \{s^i_j\} $
\newline
Sample action $a_j^i=\theta(\tau_i)[0]$
\newline
Imagine trajectory $\tau_i = \tau_i \cup \{a^i_j\} $
}\ENDFOR
\newline
Evaluate trajectory $ \mathcal{T}[a^i_0] = V_{\phi}(\tau_i)$ using Eq \ref{eq-value}
}
\ENDFOR
\STATE {\bfseries Return: $ \argmax\limits_{a\in \mathcal{A}} \mathcal{T}[a] $ }
\end{algorithmic}
\end{algorithm}
\vspace{-0.1in}
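For illustration, the planner of Algorithm \ref{alg:swtichpl} can be sketched in Python as follows, where \texttt{decision\_model}, \texttt{dynamics\_model} and \texttt{value\_of} stand in for the trained models $\theta$, $f$ and $V_\phi$; the interfaces are hypothetical and chosen only for readability.
\begin{verbatim}
def switch_planner(x, decision_model, dynamics_model, value_of, c, h):
    """Score c candidate first actions by imagining h-step rollouts
    and return the first action whose imagined trajectory has the
    highest estimated value."""
    best_action, best_value = None, float("-inf")
    for a0 in decision_model(x)[:c]:         # c candidate first actions
        traj = list(x) + [a0]
        for _ in range(h):
            traj = traj + [dynamics_model(traj)]     # predict next state
            traj = traj + [decision_model(traj)[0]]  # greedy next action
        v = value_of(traj)                   # distributional value readout
        if v > best_value:
            best_action, best_value = a0, v
    return best_action
\end{verbatim}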
\section{Experiment}
\label{sec:exp}
In this section, we present the experimental results of our method and compare them with baseline methods in a multi-task setting.
In particular, our experiments aim to answer the following questions: (1) How well does our method perform on multi-task learning? (2) Does the switch transformer speed up offline model training? (3) Does the switch transformer mitigate the performance degradation of multi-task learning relative to single-task learning? (4) Does the distributional value estimator improve multi-task performance?
\subsection{Experiment Design}
We evaluate our method on 10 different tasks of the gym-mini-grid environment, including FourRooms, DoorKey, KeyCorridor, and seven other tasks. We choose these tasks because they have sparse rewards: by default, the agent only gets a positive reward when reaching the designated goal. This is difficult for policy learning because the reward at the end of an episode must be propagated back to actions taken much earlier. Also, these tasks have different requirements, so policies would normally be trained separately even though the tasks share the same action and state spaces. For example, DoorKey is a simple sparse-reward problem, FourRooms is a long-term credit-assignment problem, and KeyCorridor requires learning compositional tasks.
We compared our methods with Trajectory Transformer (TT), Decision Transformer (DT), Behavior Cloning.
Our motivation for choosing these methods is as follows: our method extends the Trajectory Transformer and is similar to the Decision Transformer, both offline RL methods that abstract offline RL as a sequence modelling problem. Imitation learning is similar to our method since it also uses a supervised loss for training and plans from the trained models.
\subsection{Performance in multi-task learning}
Here, we firstly investigate the improvement of SwitchTT over TT and DT in multi-task learning.
We collect a dataset across ten different tasks in the gym-mini-grid environment. The combined dataset totals 5 million timesteps, with 500k timesteps for each task. Combining all the data, we train DT, TT and SwitchTT models and evaluate the trained models on the ten tasks. We calculate the reward by running each task across 100 scenarios.
Secondly, we study the effect of the switch layer on multi-task learning. We compare SwitchTT with the other baseline methods in three learning settings: 1-task learning, 3-task learning and 10-task learning.
We use a context length of k=30 in both experiments in all trained transformer models. In the switch layer, we use the expert number of n=3 for 3-task learning and n=8 for 10-task learning.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{figures/10task-6.png}
\vspace{-0.25in}
\caption{ Evaluation of SwitchTT across 10 tasks. DT and TT train transformer models with FFN layers on the combined 10-task dataset. SwitchTT trains transformer models with switch layers on the same dataset. SwitchTT provides an improvement over the 3 baseline methods across 80\% of tasks.
}
\label{fig:task-performance}
\end{figure}
As shown in Figure \ref{fig:task-performance}, SwitchTT outperforms DT and TT across 80\% of tasks. In particular, SwitchTT improves performance by about 15\% over the other methods on the FourRooms, DoorKey and Maze tasks, which require long-horizon planning ability.
This highlights that the transformer models with switch layers learn a better-matched trajectory distribution in the dataset than FFN layers. Also, the distributional value estimator benefits SwitchTT by providing a more accurate value for estimating the imagined trajectories.
\begin{figure}[h!]
\centering
\includegraphics[width=0.30\textwidth]{figures/avg-reward-2.png}
\vspace{-0.1in}
\caption{
The average reward in 10-task learning. We train offline models on the 10-task dataset, evaluate the performance on the ten tasks, and report the average reward over the 10 tasks.
}
\label{fig:avg-reward}
\vspace{-0.2in}
\end{figure}
Figure \ref{fig:avg-reward} shows that SwitchTT achieves the best result in 10-task learning. We observe that our method improves by about 10\% over our base method TT. This highlights that switch layers improve offline model learning and that the distributional value estimator learns better values in a sparse-reward setting.
\begin{table}[h!]
\begin{adjustbox}{width=\columnwidth}
\centering
\small
\begin{tabular}{ c | c | c | c | c }
\hline
\addlinespace[0.08cm]
& Method & {1-task} & {3-task} & {10-task} \\
\addlinespace[0.08cm]
\hline
\addlinespace[0.05cm]
\multirow{6}{*}{\small {FourRooms}}
& SwitchTT & $0.716\pm0.11$ & $0.724\pm0.08$ & \boldmath $0.800\pm0.11$ \\
& TT & \boldmath$0.766\pm0.09$ & $0.658\pm0.09$ & $0.686\pm0.09$ \\
& TT-DV & $0.716\pm0.11$ & $0.735\pm0.09$ & $0.747\pm0.10$ \\
& BC & $0.563\pm0.15$ & $0.543\pm0.15$ & $0.494\pm0.12$ \\
& DT & $0.682\pm0.15$ & \boldmath$0.736\pm0.09$ & $0.706\pm0.10$ \\
& DT-Switch & $0.682\pm0.15$ & $0.682\pm0.14$ & $0.741\pm0.12$ \\
\addlinespace[0.05cm]
\hline
\addlinespace[0.1cm]
\hline
\addlinespace[0.05cm]
\multirow{6}{*}{\small {DoorKey}}
& SwitchTT & $0.899\pm0.04$ & \boldmath$0.875\pm0.07$ & \boldmath$0.880\pm0.12$ \\
& TT & $0.882\pm0.04$ & $0.855\pm0.07$ & $0.754\pm0.08$ \\
& TT-DV & $0.899\pm0.04$ & $0.828\pm0.05$ & $0.831\pm0.12$ \\
& BC & $0.825\pm0.08$ & $0.728\pm0.14$ & $0.630\pm0.08$ \\
& DT & \boldmath$0.912\pm0.06$ & $0.861\pm0.07$ & $0.835\pm0.12$ \\
& DT-Switch & $0.912\pm0.06$ & $0.873\pm0.08$ & $0.867\pm0.11$ \\
\addlinespace[0.05cm]
\hline
\addlinespace[0.1cm]
\hline
\addlinespace[0.05cm]
\multirow{6}{*}{\small {KeyCorridor}}
& SwitchTT & $0.335\pm0.08$ & $0.383\pm0.10$ & \boldmath$0.453\pm0.10$ \\
& TT & $0.096\pm0.06$ & $0.380\pm0.08$ & $0.404\pm0.07$ \\
& TT-DV & $0.235\pm0.08$ & $0.310\pm0.06$ & $0.354\pm0.06$ \\
& BC & $0.384\pm0.15$ & $0.427\pm0.15$ & $0.252\pm0.11$ \\
& DT & $0.541\pm0.11$ & $0.510\pm0.08$ & $0.388\pm0.08$ \\
& DT-Switch & \boldmath$0.541\pm0.11$ & \boldmath$0.514\pm0.14$ & $0.428\pm0.09$ \\
\addlinespace[0.05cm]
\hline
\end{tabular}
\end{adjustbox}
\vspace{-0.1in}
\caption{The effect of the switch layer on multi-task learning. The table reports the reward mean and variance of the baseline methods in three multi-task learning settings. $n$-task learning means the training dataset contains trajectories of $n$ tasks. TT-DV stands for the Trajectory Transformer enhanced with a distributional value estimator. BC and DT-Switch train transformer models with switch layers. }
\vspace{-0.15in}
\label{table:multi-performance}
\end{table}
We highlight three key findings from Table \ref{table:multi-performance}:
(1) SwitchTT outperforms the other baseline methods in 10-task learning; in the multi-task setting, SwitchTT achieves the best results.
(2) DT-Switch performs better than DT in all three multi-task learning settings, thanks to the switch layer in the transformer model; DT-Switch improves multi-task learning performance over DT, which uses the FFN layer. We conclude that the switch layer mitigates the performance degradation of multi-task learning relative to single-task learning.
(3) TT-DV outperforms TT in the three multi-task learning settings. We conclude that our distributional value estimator provides better value estimation for TT and improves its performance.
\subsection{Computation Cost of Model Learning}
Here, we design experiments to compare the computation cost of the switch transformer model with the baseline model when learning a multi-task dataset.
In particular, we use PPO to collect the multi-task dataset on 10 gym-mini-grid tasks and train six models on the collected dataset. The six models are the Decision Transformer (DT) model and DT with a switch layer (DT-Switch), each with three transformer sizes: large, medium and small. The Decision Transformer model is a torch-implemented mirror version of the GPT transformer model. DT-Switch is the same model but replaces the FFN layer with the switch layer. The head, layer and embedding sizes of the small, medium and large models are $(4,4,64)$, $(8,4,64)$ and $(8,4,128)$. The expert number in the switch layer is 4. During the training process, we plot the training loss. Then, instead of plotting a test loss, we test model performance on three environments, with 100 scenarios for each environment.
\begin{figure}[h!]
\vspace{-0.05in}
\centering
\includegraphics[width=0.45\textwidth]{figures/moe-9.png}
\vspace{-0.15in}
\caption{Comparison of time cost and performance between the dense (FFN) layer and switch layers. Subfigures (a), (b) and (c) plot the training loss of the action model, return-to-go model and dynamics model, respectively. Subfigure (d) plots the reward performance of DT and DT-Switch on three tasks. }
\label{fig:switch-comparison}
\vspace{-0.10in}
\end{figure}
Figures \ref{fig:switch-comparison}.a, \ref{fig:switch-comparison}.b and \ref{fig:switch-comparison}.c show that transformer models with switch layers consistently converge to a lower training loss more quickly. This means switch layers can reduce the computation cost, including training time and parameter size, for multi-task model learning. Also, Figure \ref{fig:switch-comparison} shows that DT-Switch outperforms DT on the three tasks. This implies that DT-Switch mitigates overfitting and learns a better distribution of the trajectories in the multi-task dataset. This result demonstrates the advantage of such a sparsely activated layer in multi-task learning.
\paragraph{Effect of expert number}
\begin{table}[h!]
\begin{adjustbox}{width=\columnwidth}
\centering
\small
\begin{tabular}{ c | c | c | c }
\hline
\addlinespace[0.08cm]
{Expert} & {FourRooms} & {DoorKey} & {KeyCorridor} \\
\addlinespace[0.08cm]
\hline
\addlinespace[0.05cm]
N=1 & $0.7056\pm0.10$ & $0.8348\pm0.12$ & $0.3882\pm0.08$ \\
N=2 & $0.6432\pm0.15$ & $0.8612\pm0.08$ & $0.4127\pm0.15$ \\
N=4 & \boldmath$0.7413\pm0.12$ & \boldmath$0.8667\pm0.11$ & \boldmath$0.4276\pm0.09$ \\
N=8 & $0.6145\pm0.17$ & $0.8339\pm0.12$ & $0.2245\pm0.13$ \\
N=16 & $0.6605\pm0.14$ & $0.8344\pm0.12$ & $0.3182\pm0.11$ \\
\addlinespace[0.05cm]
\hline
\end{tabular}
\end{adjustbox}
\vspace{-0.1in}
\caption{Effect of Expert numbers in a 10-task learning setting. We report the reward mean and variance value across 10 different seeds in 10-task learning setting. The $N$ represents the expert number of switch layer in the switch transformer models. }
\vspace{-0.1in}
\label{table:expertNumber}
\end{table}
Table \ref{table:expertNumber} compares the performance of different expert numbers in 10-task learning. Here we report the reward mean and variance on three difficult tasks: FourRooms, DoorKey and KeyCorridor. The model with four experts in the switch layer performs better than the others. Notably, models with higher expert numbers (8 and 16) perform worse than the model with four experts. We conclude that the model with four experts performs best in 10-task learning.
\subsection{Performance of Distributional Value Estimator }
\textbf{Effect of Atom Number}
Here, we design experiments to compare the performance of different types of return-to-go transformer models.
First, we implement four variants of the return-to-go transformer model in the training phase, which discretize the reward into 11, 31, 51 and 101 atoms. We train SwitchTT in the 10-task learning setting and report the reward mean and variance on three typical but complex tasks: FourRooms, DoorKey and KeyCorridor.
\begin{table}[h!]
\begin{adjustbox}{width=\columnwidth}
\centering
\small
\begin{tabular}{ c | c | c | c }
\hline
\addlinespace[0.08cm]
{Atom} & {FourRooms} & {DoorKey} & {KeyCorridor} \\
\addlinespace[0.08cm]
\hline
\addlinespace[0.05cm]
N=11 & $0.7742\pm0.07$ & $0.5863\pm0.09$ & $0.2295\pm0.13$ \\
N=31 & \boldmath$0.7816\pm0.10$ & $0.7457\pm0.11$ & $0.1122\pm0.07$ \\
N=51 & $0.7704\pm0.09$ & $0.7583\pm0.11$ & $0.1568\pm0.13$ \\
N=101 & $0.7768\pm0.08$ & \boldmath$0.8203\pm0.12$ & \boldmath$0.2785\pm0.09$ \\
\addlinespace[0.05cm]
\hline
\end{tabular}
\end{adjustbox}
\vspace{-0.1in}
\caption{Effect of Atom Number in the distributional value estimator. We report the reward mean and variance values across 10 different seeds in 10-task learning setting. Atom number stands for the number of discretizing the reward. }
\vspace{-0.1in}
\label{table:dist-type}
\end{table}
In Table \ref{table:dist-type}, we study the effect of the atom number in our distributional value estimator. We observe that the highest atom number achieves the highest performance on the DoorKey and KeyCorridor tasks, which rely on an accurate value estimator. Specifically, the DoorKey and KeyCorridor tasks involve move, open-door, pick-key and drop-key actions, while the FourRooms task only requires move actions. We conclude that increasing the atom number in the distributional value estimator improves model performance in multi-task learning.
\textbf{Improvement over baseline}
Table \ref{table:multi-performance} reports the reward mean and variance of TT and TT-DV. We can see that the average performance of TT-DV in different multi-task settings outperforms TT. We conclude that the distributional value estimator improves the trajectory transformer performance by providing a more accurate value estimation of trajectories.
\section{Related Works}
\label{sec:related}
This section summarizes some related works.
\textbf{Transformer for Reinforcement Learning}
Decision Transformer \cite{chen2021decision} models reinforcement learning (RL) as a sequence modelling problem and matches or exceeds the performance of state-of-the-art model-free offline RL baselines.
Based on this promising result, recent works draw upon the simplicity and scalability of the Transformer architecture to solve the reinforcement learning problem. These works can be divided into two kinds of approaches.
The first is similar to Decision Transformer and can be viewed as model-free RL at a high level. This suite of frameworks \cite{shang2021starformer,yang2021representation} models the conditional distribution of actions given trajectory data.
The other line of work \cite{janner2021offline,chen2022transdreamer,hafner2019dream} can be viewed as model-based RL at a high level; it not only models the conditional distribution of actions given trajectories but also models the state transitions over the trajectory.
The latter approach is more reliable for solving long-horizon sparse-reward tasks because the learned dynamics allow imagining future trajectories, so the planning phase can predict the cumulative reward over a long horizon.
Our method lies in the second line of work, which learns the state transition over trajectory data and utilizes the learned model to imagine the future trajectory.
Different from the Trajectory Transformer, we improve the Monte Carlo value estimate by introducing distributional value learning: instead of estimating the expected return, we model value learning as a categorical distribution and utilize the transformer to learn that distribution.
\textbf{Model-based Offline Reinforcement Learning}
Existing works have demonstrated the promise of model-based RL for offline learning.
A common approach for model-based offline RL focuses on learning a dynamics model for uncertainty estimation and then optimizing the policy. For example,
Model-based Offline Reinforcement Learning (MOReL) first learns a pessimistic MDP from offline data using a Gaussian dynamics model and then learns a policy for the learned MDP \cite{kidambi2020morel};
Model-based Offline Policy Optimization (MOPO) estimates learned model error and penalizes rewards by such error to avoid distributional shift issues \cite{yu2020mopo};
Offline Reinforcement Learning from Images with Latent Space Models (LOMPO) extends MOPO to high-dimensional visual observation spaces \cite{rafailov2021offline}.
These approaches do not directly use the learned model to plan action sequences.
MuZero combines a tree-based search with a learned model, using the learned model directly for policy and value improvement through online planning \cite{schrittwieser2020mastering}. MuZero Unplugged \cite{schrittwieser2021online} extends MuZero to an offline-RL scenario and achieves state-of-the-art results.
In contrast, our method utilizes sequence modelling tools to model the distribution of trajectories in the offline dataset. Such a high-capacity sequence model architecture provides a more reliable long-horizon predictor than a conventional dynamics model. It mitigates the effect of accumulated predictive error over a long horizon.
\textbf{Multi-Task Reinforcement Learning}
Multi-task reinforcement learning (RL) aims to learn a single policy that efficiently solves multiple tasks. Prior works have made promising progress but still face three major challenges: optimization difficulties \cite{schaul2019ray,hessel2019multi,yu2020gradient}, effective weight sharing for learning shared representations \cite{teh2017distral, espeholt2018impala, xu2020knowledge, d2019sharing, sodhani2021multi, stooke2021decoupling} and sharing data across different tasks \cite{eysenbach2020rewriting,kalashnikov2021mt,yu2021conservative}. We study the challenge of effective weight sharing for learning shared representations in the multi-task offline RL setting. Current works focus on learning a shared representation across different tasks and then apply traditional RL, such as computing policy gradients or learning value functions, to solve multiple tasks. In contrast, we abstract the multi-task RL problem as a sequence modelling problem and apply a high-capacity transformer model to solve it.
\section{Conclusion}
\label{sec:concl}
We propose SwitchTT, which seeks to solve multi-task reinforcement learning via an advanced transformer model. Promising experimental results show that our method outperforms other offline RL methods. Future work will consider combining advanced tree-search algorithms, such as Monte Carlo Tree Search, to further improve performance.
\newpage
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 7,684 |
Q: Example discussed in Relation in Set Theory I am reading a book Axioms and Set Theory - A first course in Set Theory by Robert André
The example no. 4 discussed on page no. 52 under the section Equivalence relations and order relations
Let $S = \{a, b, c, d\}$. Consider the relation
$R3 = \{(a, a), (b, b), (c, c), (d, d), (a, b), (b, a), (b, c), (c, b)\}$
· We see that $R3$ is a reflexive relation on $S$.
· Since $R3$ contains both pairs $\{(a, b), (b, a)\}$ and $\{(b, c), (c, b)\}$, then $R3$ is a symmetric relation on $S$.
· Since $R3$ contains $\{(a, b), (b, a), (a, a)\}$ and $\{(b, c), (c, b), (b, b)\}$, then $R3$ is anti-symmetric.
· Since $R3$ contains $(a, a)$, then $R3$ is not asymmetric.
· Since $R3$ contains the triples $\{(a, b), (b, a), (a, a)\}$ and $\{(b, c), (c, b), (b, b)\}$, then $R3$ is transitive on $S$.
Also on footnote it is mentioned the following statement:
Note that the statement "whenever $(a, b)$ and $(b, a)$ are in $S$, then $a = b$" holds true.
I understand that $R3$ is symmetric and reflexive but I am not able to understand why $R3$ is anti-symmetric.
A: A relation $R$ is antisymmetric if $xRy\wedge yRx$ implies that $x=y$.
That is not the case for the relation $R3$ described in your question (provided that $a,b$ are distinct).
This because e.g. $(a,b)$ and $(b,a)$ both are elements of the relation while $a\neq b$.
So $R3$ is not antisymmetric.
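For a quick sanity check (assuming $a,b,c,d$ are distinct), one can test the definition directly, for example in Python:

    R3 = {('a','a'), ('b','b'), ('c','c'), ('d','d'),
          ('a','b'), ('b','a'), ('b','c'), ('c','b')}

    def is_antisymmetric(R):
        # antisymmetric: whenever (x,y) and (y,x) are both in R, x must equal y
        return all(x == y for (x, y) in R if (y, x) in R)

    print(is_antisymmetric(R3))  # False: ('a','b') and ('b','a') are in R3, but a != b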
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,983 |
L023 - Wed 2 May 2018 / Mer 2 mai 2018
Wednesday 2 May 2018 Mercredi 2 mai 2018
Report, Financial Accountability Officer
Government accounting practices
Hydro rates
Horse racing industry
Highway improvement
Electric vehicle charging stations
Gord Brown
Deferred Votes
Lorne Hooper
Canadian Manufacturers and Exporters awards
Lawrence Heights
Aveda Walk for Food and Water
Service Dogs Advisory Committee Act, 2018 / Loi de 2018 sur le Comité consultatif de l'utilisation des chiens d'assistance
Doctor shortage
Politiques énergétiques
Prévention du tabagisme chez les jeunes
Great Lakes protection
Access to Consumer Credit Reports and Elevator Availability Act, 2018 / Loi de 2018 sur l'accès au rapport de solvabilité du consommateur et la disponibilité des ascenseurs
The Speaker (Hon. Dave Levac): Good morning. Please join me in prayer.
Resuming the debate adjourned on May 1, 2018, on the motion for time allocation of the following bill:
Bill 53, An Act respecting the establishment of minimum government contract wages / Projet de loi 53, Loi concernant la fixation de salaires minimums pour les marchés publics.
The Speaker (Hon. Dave Levac): Further debate?
Ms. Teresa J. Armstrong: It is always my pleasure to stand in the House. Today I'm going to be speaking to Bill 53, but specifically to the time allocation that the government has imposed on Bill 53.
I think it's important when we're here in the House representing our constituents that they understand what time allocation means. It means that there has been at least six and a half hours of debate on the bill and the government doesn't want to continue debate any further. That really does disadvantage this part of the House, on this side, that every member doesn't have the opportunity to speak and debate on this bill. And that's why we're sent here.
We are sent to the Legislature to debate bills and legislation and bring the voices of our constituents forward to every piece of legislation that's brought forward in this province, because it will affect someone—someone they know, themselves personally, their relatives, their friends or their city.
So, with time allocation, I understand sometimes there is a need for it, but really, it's been used too often by this government on too many occasions.
We are in a time crunch now. The House is going to rise very soon for the election on the horizon. This is probably one of the things the government decided to do because, first of all, we know this bill won't get passed through the Legislative process right now because of timing, but it is an important conversation to have.
It's good to highlight that we need to respect workers around fair wages so that when there are contracts being bid on for government projects, people are being paid fairly for the work that they do.
One of the things that I would like to raise is, the Ontario Federation of Labour looked at the bill and they wrote a quick summary around the first reading. They wanted the government to extend the application to all public sector organizations. That alone is a debate that we should be having. When we're narrowing the scope of this legislation to primarily addressing the building, cleaning and security occasionally around construction, it does limit the issue specifically around government procurement of contracts.
The Acting Speaker (Mr. Paul Miller): Everybody is in conversation and I cannot hear the person speaking. I literally cannot hear her. So if you could do me a favour and either lower your discussions down or take them outside, I would appreciate it.
Ms. Teresa J. Armstrong: Thank you, Speaker. There are a couple of reasons, I think, why you may not be able to hear me. First of all, I think there's a lot of buzz in the House because everyone's interested in this bill. I hope that they are talking about the legislation before us, and ways to debate back and forth.
The other one is, one of my colleagues from the opposition said, "Teresa, speak up," so that you can hear me. I'm going to try to speak up so, therefore, my colleagues can have those important discussions around legislation that happen here every day.
I was getting back to what the Ontario Federation of Labour had summarized about their opinion on the bill, and how important it is that there are those presentations from stakeholders, people who work in the industry and government officials. All those things need to happen, but unfortunately, that won't be the case because we probably won't see this bill into committee.
Quite frankly, I think it's one of the odd times that I've noticed on the schedule House sheet today that we don't have any committees sitting. I think that it's public accounts that is not sitting this afternoon. That's unusual as well. But, again, we know that there's an election on the horizon, so maybe the government is setting its House proceedings and agenda around that timeline. Okay, that's fair enough, I guess, but it would be great if they gave us the heads-up if that was the case.
Talking about good ideas is always something the NDP brings about. I often talk in this Legislature about how we present legislation and this government doesn't pay attention, or we have critical concerns and they don't pay attention. They think that we're exacerbating those things. That's absolutely not true, and I'll give you a perfectly good example in this case.
Our member from Windsor West, Lisa Gretzky, brought a private member's bill forward called Dan's Law. That was because he worked in Windsor, went out west to find a job to provide for his family, had a diagnosis of cancer—it was a fatal diagnosis—and came back to Windsor: But because he wasn't living in the province of Ontario for a certain amount of time, he couldn't get health care, and he was facing an end-of-life situation.
On this side of the House, what happened is, we had good debate. The member from Windsor West was very passionate about this issue. It was something that was brought to her attention via her constituent because she's the representative in that area. She pushed it. I know that there wasn't a lot of time to—I don't think it went to committee; it wasn't discussed at committee. Then, when the House prorogued, of course, that dropped off.
There was a lot of support behind that bill. Even the physicians supported the bill. I'm hoping that the government, of course, paid attention to what was said on this side of the House and to all of the stakeholders who said, "Do the right thing. When you're talking about palliative care and making that an issue, this is part of that palliative care."
So what happened? I was reading the news clippings today, and "the Ontario government is closing a gap in medicare that temporarily denies home-care coverage to Canadians who relocate from other provinces, including terminally ill patients who are not expected to live past the three-month waiting period for an Ontario Health Insurance Plan card." So they listened to that. That's great.
When we get back to time allocation, when we use those time allocation bills—really, the hammer, we call it—to end debate, then you don't get the good ideas exchanged. That's where this opportunity comes in.
As I've said before, we're responsible. We have been elected to represent the voices of the people in our ridings, and that's what we do when we bring them to debate, when we talk about bills and legislation.
Just last night I was in my riding, and we had a health care town hall with our leader, Andrea Horwath—soon to be Premier of this province—and she talked about the concerns around health care. The platform that we designed talks specifically to the issues that we have heard from our constituents. People were asking questions around health care and we had those solutions. We are providing access to health care, equity to health care and trying to undo the damage that has been happening and that we know happens when people are talking about being treated in hallways. Those kinds of things are what we're here to do.
Bill 53, when we time-allocate—yes, there's not too much time left in this Legislature, but this government doesn't have a lot on the agenda. The time-allocated bill isn't giving us the opportunity to talk about the importance of the changes that can arise from this. This bill is really enabling legislation. We need to allow discussions around ideas in order that we can debate them fully and not leave everything up to regulation, because when it leaves this House and it's subject to regulation, we don't know exactly what went into it.
When we debate it, we're debating an item based on the printed version. Then it goes to committee, where it has all kinds of presentations and feedback. It comes back for third reading, it leaves this place and then it's up to the government, by order in council, to come up with the regulations around the bill. With some of those things, we don't find out until much, much later that regulations aren't working for the legislation and the people who it was intended for, and it's disappointing. So when we have enabling legislation, we need much more detail before we know what we're in for.
It's possible that we could see more of what had been called for through regulation, but we don't know. We don't know if that's what's going to happen. If stakeholders like the Ontario Federation of Labour—asking them to look at expanding the definition of the public sector organizations for that application process for fair wages, we don't know if the government will consider that in regulation. Hopefully, if they do, they will communicate that to everyone so that they can be aware of and comply with the rules around that bill.
Speaker, I have to tell you, my time is ending. I do appreciate speaking to the time allocation, regardless of if it is shutting down debate. I look forward to continuing other business of the House this morning.
The Acting Speaker (Mr. Paul Miller): Further debate? The government House leader.
Hon. Yasir Naqvi: Thank you very much, Speaker, for recognizing me to speak on this motion that relates to Bill 53, the Government Contract Wages Act, 2018. I'm really pleased to take some time to talk about this very important bill and the motion related to it.
Speaker, as you very well know, coming from the building trades yourself, this is an important piece of legislation that deals with government projects in the construction building services and building cleaning sectors, ensuring that those workers, the people who work in those areas, are paid fairly. To echo our Premier: "Every worker deserves to be paid a fair wage. And every business bidding for a government contract deserves a fair shot."
I have the great opportunity to work with people in the building trades in my community of Ottawa Centre. They are extremely hard-working individuals who are literally building things up every single day. One of the most incredible things—Speaker, you know this yourself—is meeting with a construction worker or a labourer, when you shake their hands, and just by their hands you can tell that they've been working with their hands all their lives. These strong, rugged hands are a symbol to me of the equal contribution, the amazing contribution, they are making on the kind of infrastructure that we use every single day, and take for granted.
Speaker, when I look at my community of Ottawa Centre, I am amazed by the development and the progress that is taking place that is funded by the Ontario government, and that is true for every single sector, for things that we rely on to improve our quality of life, such as health care, education and the broader infrastructure like public transit etc.
For example, if I look around in my community of Ottawa Centre on the health care field, I see that all three community health centres in my riding—Centretown, Somerset West and Carlington Community Health Centre—are going through incredible growth, to the point that we are expanding them through increases in infrastructure. The Centretown Community Health Centre is going through an expansion, through a $5.4-million investment from the Ontario government, where they will be able to serve more people in our downtown community in Ottawa with important services around mental health and addictions and, of course, primary care.
The Carlington Community Health Centre, which straddles Ottawa Centre and Ottawa West–Nepean, which I share with MPP Chiarelli, is also going through an incredible expansion. We are now building a new housing hub at that site by building over 40 new units for seniors where they can access health care services on the main floor, making sure that not only can they continue to live independently but they also have access to the critical health care services they need at their elderly age, so that they remain healthy and therefore live longer independently. Those two investments are extremely important.
Similarly, we just recently opened an expanded Ottawa heart institute, which is a jewel in the health care system in Ottawa and eastern Ontario, providing world-class cardiac care. We just built a new tower on the site and are investing $200 million, which is providing state-of-the-art ICU units, surgical rooms and technology that is now being used at that hospital and is not being used anywhere else in the world. I'm not exaggerating; this is directly coming from Dr. Thierry Mesana, who is the CEO of the heart institute. It's so new that they are the first hospital to use that technology. That's just so exciting and thrilling, to know that that is the quality of services that we are providing to our citizens, to my constituents, in Ottawa Centre.
Speaking of future projects in the health care sector, what we're really excited about is building a new Civic campus of the Ottawa Hospital. The Civic campus in my riding is almost 100 years old. It has an incredible story. In fact, royalty was born there. At one time it was declared a territory of the Netherlands during the Second World War. The current Queen, I believe, of the Netherlands was born at the Civic campus of the Ottawa Hospital. But that hospital is almost 100 years old, and we need a new hospital for the growing city for the next 50 or so years. We've announced a $1.8-billion commitment to build a new Civic campus right in the downtown core, on public transit, by way of the LRT Trillium line, so that it really fills the modern needs of my community and the city I live in. This is an example of a public infrastructure project where many construction workers are going to work, day in and day out, to make sure this hospital is one of the best in Canada. In fact, I'm confident, Speaker, that it's going to be one of the best in North America.
Similarly, Speaker, as I see the work that's happening at Carleton University, with more new state-of-the-art buildings coming online with new labs—it's very exciting. I have also had the great honour of representing Carleton University.
In fact, Speaker, if I could also take a quick moment and congratulate the appointment of Dr. Benoit-Antoine Bacon, who has been named the 15th president and vice-chancellor of Carleton University. He comes from Queen's University. My understanding is that he'll be starting on July 1. I want to congratulate him on this incredible appointment, and I very much look forward to working with him as we build Carleton University. I'm an alumni of the university. I live in the neighbourhood. It's a university that is building a bright future for so many students in our community, serving Ottawa and our province.
Speaking of important public and social infrastructure, there's nothing more important than building schools in our community where our young, bright minds go. I'm very excited about schools like Broadview Public School, a $15-million investment in the great community of Westboro in my riding, which is complete and up and running. Kids tell me that it still smells like a new school, which is very exciting. It replaced an older school that goes back almost to the First World War, so it was a much-needed building of a new school.
We recently announced $3.6 million to build an addition to Elmdale Public School in Westboro village as well, which is another great school in my community. And, of course, we just started a new French public school in my riding, Centre-Nord, which we're hoping to build a new building for as well.
This, Speaker, is happening on top of all the great investments that are happening in relation to stage 1 of the light rail transit, for which Ontario contributed $600 million, which is a game-changer for Ottawa. It's the largest public infrastructure project in the history of Ottawa since the building of the Rideau Canal. This is how significant this LRT is to the city of Ottawa, in terms of really bringing us into the 21st-century ecosystem in terms of our economy, in terms of our society, in terms of levelling the playing field for everyone who calls Ottawa home.
What's exciting is that we are already working on stage 2 of the LRT, which will go east, west and south of the city further beyond downtown, connecting all the neighbouring communities that are just outside the downtown core into some of our suburban communities. It's a significant contribution that Ontario is making, roughly $1.6 billion in terms of the building of that.
All of that is just incredible investment that is taking place, in addition to the affordable housing that we're building in Ottawa. I think that in Ottawa alone, we have built about 1,700 new affordable housing units over the last several years, which is ensuring that people of all means are able to take advantage of these very important investments.
That has resulted in an incredible boom in our economy in Ottawa in terms of new jobs being created, a lot of them in the construction sector. I'm happy to share with you that the unemployment rate in Ottawa is around 4.8%. It's actually lower than the unemployment rate for Ontario, which is also one of the lowest in Canada at around 5.5%. It's the lowest since the year 2000, almost two decades now, which shows how much our economy is booming.
We want to make sure that everybody benefits from that boom in the economy, that everybody is able to take advantage of this incredible growth that is taking place and that nobody is left behind. That's why we brought in Bill 148, for example, which has been landmark legislation to modernize our labour and employment laws. Through Bill 148, we have raised the general minimum wage to $15 an hour, starting on January 1, 2019. I hope that stays true after this election. We have introduced two paid sick days for all, we've increased vacation time, we've introduced equal pay for equal work, and we have increased and strengthened enforcement as well.
That is why Bill 53 is important: because it's another progressive piece of legislation that ensures that we have pay transparency in our province. That's why I'm very happy to be speaking to this bill, which will protect workers' wages on government contracts. This legislation will enshrine the principle of a fair, prevailing wage into law and provide the necessary support and enforcement to make it work. It is what is fair, and it is the right thing to do.
Obtaining a government procurement contract should not be an invitation to lower workers' wages in these sectors or industries. Workers' wages should not be the primary factor in bidding. Our government is committed to building a strong workforce and fair, balanced and progressive policies for Ontario workers and employers.
We have instituted a plan that includes making the largest investment in public infrastructure in Ontario's history. I gave you many of those examples that are taking place just in my community in Ottawa Centre: building new hospitals, building new schools, expanding Carleton University, and building the LRT and public transit infrastructure, not to mention cleaning up the Ottawa River through the Ottawa River Action Plan.
I think this particular legislation will go a long way in making sure that all those workers who have been working in those sectors get the prevailing fair wage that is going to bolster our economy further. I can tell you, by talking to these workers, by spending time with these workers, that they will tell you that they spend all of that hard-earned money right in our local community. They are great economic generators for us. When we support them, they support the broader economy.
In my last few moments, I just want to take an opportunity to give a shout-out to another one of those incredible champions in my community—who probably exist in all our communities—a gentleman by the name of Moe Atallah. Moe Atallah is the owner and proprietor of Newport Restaurant in my community of Ottawa Centre. He is a much-loved individual for everything that he does for our community every single day, not only by creating employment for people but by way of his philanthropy and giving back.
Moe is one of those great, quintessential Canadian stories, where he came as an immigrant in his teenage years from Lebanon. He started working as a dishwasher in the back of a kitchen, and now he owns businesses. He owns a very successful restaurant—the best pizza in town, definitely, out of my community is the pizza from Newport. But he has never forgotten where he came from. He has never forgotten what he struggled through, and he makes sure that everybody gets an opportunity to succeed.
In order to celebrate Moe, I'm really happy to share with you and this House that last Saturday, he was given, by Mayor Jim Watson, the key to the city. It was a great surprise for Moe. He was in tears because, as he said, he never thought one day that he would have the key to the nation's capital. That speaks to our great society and speaks to a great community that Moe has helped build. More specifically, it speaks of people like Moe Atallah who have given so much to our community. I am very proud to call him my personal friend. He has always been very kind to me.
I just want to thank people like Moe and many others in our communities who are helping to build their community, who are creating those opportunities, so that the investments we are talking about today have a real, direct meaning to people's lives. These are not abstract things. These things impact all of us directly. That's why I'm really proud to support Bill 53 and the motion associated with it. It is important that we get this passed before the House adjourns for the upcoming election.
The Acting Speaker (Mr. Paul Miller): Further debate? The member from Eglinton–Lawrence.
Mr. Mike Colle: Good morning, Mr. Speaker.
The Acting Speaker (Mr. Paul Miller): Good morning.
Mr. Mike Colle: And thank you. I'd like to follow my esteemed colleague from Ottawa Centre.
I can certainly vouch for Newport's pizza in Ottawa. If you're ever in Ottawa, you can't really find a more homemade deep-dish pizza. Moe Atallah is a real local hero. As you know, it's also the place that Elvis Presley visited. They've got a big shrine to Elvis Presley there too. That's in Ottawa at the Newport Restaurant.
Again, it's just celebrating people who are hard workers and who employ all kinds of people. They don't get the headlines that the big banks and the big corporations like Amazon get, but there are little employers that deserve some credit.
Talking to Bill 53, we're trying to get this bill through in the last days of this House so that we can help ensure that workers who work directly or indirectly on government projects get their wages protected—so that there are rules and benchmarks that ensure workers, through subcontractors or others, get fair wages. That's really what it's all about. There are thousands of employees who work directly or indirectly on government projects. We want to make sure that they also get protection in the workplace. The only way we can do that is by passing this type of benchmark legislation.
It is critically important. I don't think people sometimes realize—they talk about the government, with work in hospitals or universities, or people that work on government projects in the Ministry of the Environment or the Ministry of Transportation. Most people do not know that one of the largest employers in this province is the government of Ontario. That is, the people of Ontario actually employ hundreds of thousands of workers.
In Hamilton, I think the largest employer is probably the government of Ontario through Hamilton Health Sciences. In Hamilton, you've got the McMaster health sciences centre, you've got the university—it's one of the finest health sciences centres in the world, with cutting-edge research and care, and employing all kinds of people, young and old, who are top-notch in their fields. They're expert people. That's in Hamilton.
Then, people forget that besides the doctors, the researchers, the scientists and the professors, there are thousands of people who work in our hospitals and who work in the university sector, who are basically getting paid through the government contracts that are awarded to the people who provide cleaning services, who provide all kinds of maintenance services—and there are thousands of them.
We always think about the doctors in the hospitals, but what about the orderlies, the maintenance people, the cleaners? There are thousands of people who work in our hospitals. We want to make sure, whether they work directly or indirectly, that they are protected by the legislation that we're proposing here, the fair wage policy, so there isn't undercutting and there's protection for their wages.
This is something that ensures that there's fairness in the way these workers are treated. Sometimes, as I said, they're not the high-profile workers that you see, but they are people who need a fair wage and fair-wage protection. That's what has been called for. I think this type of legislation has been called for, for a number of years. It is an important part of the whole approach we've had in the last number of years, whereby we've looked at the Changing Workplaces Review.
You know too well, Mr. Speaker, in your history in the workplace at Stelco and being involved with hard-working people your whole life, that there needs to be constant attention paid to the workplace, because the workplace is evolving like everything evolves and changes. Especially in the last number of years, there have been dramatic changes in how people work and where people work, and therefore, the labour laws of the province have to be updated. You can't have the same rules in place, because they do not adapt to the reality of what's happening out there.
Look at all the small, entrepreneurial businesses, the small workplaces now, the part-time work. The people who are fortunate enough to have full-time jobs, fortunate enough to have pensions, God forbid—that's a rarity nowadays. When people get employed nowadays, it's always under contract, part-time, casual; they've got all kinds of names for them. It's difficult to get those jobs.
In the government sector, there are still those jobs with security, thank God, but in the private sector, the obvious bottom line is more important than thinking of that worker's family or long-term needs, or if the worker gets hurt. Therefore, there have to be rules put in place, because it is no longer that workplace where you've got a job for life or you've got a job for even 10 years, for God's sake.
Basically you're under contract, and every six months to a year, you have to renegotiate a contract and hope you get your contract renewed, and you hope you've got benefits—God forbid that private employers give workers benefits. It is not that easy. Those are the dramatic changes taking place, and not only in Ontario; it's happening all over Canada, it's happening all over the world, where those secure jobs with secure benefits, sadly, have diminished.
I know it's hard to explain to young people sometimes that someday you'll hope that you have a pension. Someday you may need a pension and you may need those supports. But who gets hired now, especially by the private sector, that can expect a pension? Most people say, "Around here, we get the RRSP thing." It's like putting your money in the casino. Just look at what happens to people's savings. They put them into RRSPs, and you make the sign of the cross and you say, "I hope that that money is still there." I was just looking at mine the other day. I couldn't believe it.
Therefore, there isn't that security, and not only in the type of work you do. Do you get equal pay for equal work? Do you get any benefits? All these protections—and what do you get paid per hour, and how do they pay? What happens if you get injured or sick? What happens if you have to go on pregnancy leave? Would you get hired back?
All these questions are ones that you don't run across until you have to go through it yourself. Then you say, "Wow, are these protections there for me?"
When these laws are passed, like this Bill 53, a lot of people say, "Well, that will never affect me," and, "It's another change." But they don't realize that if you don't make these laws, you're not going to have that potential protection in the future.
It's like people who eventually find out that, "God, I'm 60, and I've got no pension." Try to live in Toronto, or even Hamilton, with the way things are going and the cost of housing and prices. You say, "Wow, my company didn't have a pension plan."
When you're 30, you don't fight for these things, because who thinks they're ever going to reach pension age when they're 30?
The same thing with thinking, "Well, I've got this good job now." But then, all of a sudden, you may have to change careers or go into another job, and you find yourself working part-time or working for minimum wage, saying, "Wow, this guy is only paying me 11 or 12 bucks an hour."
Then people say, "Well, you know, we don't need the increase in the minimum wage. You're not worth 15 bucks. Who is worth 15 bucks or 14 bucks? This is going to hurt the economy, raising people's minimum wage." That's because you may have the luxury of making a good wage, and God bless you, that you're making that good salary or good wage. Yes, it's fantastic.
But what about all those people, a million and a half people in Ontario, who work at minimum wage or below? They have to pay the same price for their food and the same cost for their transportation. Therefore, that wage—if it is not up to a certain level, which this bill looks at guaranteeing somehow; if it's not up at that 15 bucks—you try to live on the 11, 12 or 13 bucks an hour in Mississauga or in Ottawa, or wherever you may be. You try to live on that money when you are trying to, essentially, keep up with everybody else. Everybody else, who is making a good salary and so forth, they don't have time to stop and help you if you're making less. You still have to pay the same rent; you still have to pay the same costs for food etc.
It is critically important that, at least, we give people a fair minimum wage and wage protection.
I know it's not easy to understand, somehow, how helping a worker get more protection and get a minimum wage increase is good for you or for us. But, like I tell people, every worker who gets a few more bucks in their paycheque or in their pocket every week or month spends that money locally. They aren't the people going off to the Cayman Islands and hiding their money. They're buying shoes; they're buying groceries; they're buying local products in their local neighbourhoods. Many of them can't even afford cars, so they stay in their local community, so all that money goes back into the local economy. As I said, they don't have these big savings accounts. They don't have these offshore accounts, where they're putting their money.
I remember when we were doing Bill 148—you remember, Mr. Speaker—the big companies came out and said—Loblaws: "We can't afford to pay people minimum wage."
The Magna corporation, Frank Stronach's company: Do you remember what his salary was, Mr. Speaker? You know that, but some of the younger guys don't know that. This guy made $50 million a year. He ran for President of Austria at the same time. He ran for Parliament here, too, and lost. He was making 50 million bucks a year. His company, Magna, came out and said, "Raising the minimum wage is bad. Don't support it. It's bad for the economy." Here he is, making 50 million bucks a year—he could probably help about 50,000 people with his salary alone. He said, "No, this is no good, raising people's minimum wage."
You heard Loblaws and all these big companies come out and say, "Oh, no, no, no." Their CEOs—what are they making? I think there was one guy from Sobeys who said, "Hey, this minimum wage thing won't hurt my company." But where was one Bay Street CEO who came out and said, "Yes, let's give a fair wage to our working people in Ontario, Canada"? Not one big CEO came out and had the guts to stand up and say, "Giving people a fair wage is the right thing to do." Not one.
Mr. Mike Colle: And the Tories were against it. There they go. They are still against the $15. The Conservatives—
The Acting Speaker (Mr. Paul Miller): Stop the clock.
Ms. Sylvia Jones: He agitated me, Speaker.
The Acting Speaker (Mr. Paul Miller): Did he? Well, you are agitating me. So we will not yell across the hallway, will we? Thank you.
Mr. Mike Colle: I just wanted to point out that I was very disappointed. It's politics to be going back and forth with the opposition, and that's fair.
But I'm not really as much disappointed in them as I was in the CEOs of this country and this province. They are doing very well. There are all-time high profits. You know that, Mr. Speaker. Profits from these companies in the last number of years have been skyrocketing. When their profits are going high and they're doing well, God bless them; they're prospering, these big companies.
At this time, when the cry goes out with the $15 and fairness campaign, to try to give people a bit of a raise in their minimum wage, instead of saying, "Hey, listen, we'll share a little bit in this prosperity that the big companies are having," not one of them came out and said, "Yes, that's right. It's about time we shared a bit." Not one Ontario or Canadian company came out and said, "This is good."
We had a couple of small companies—remember, we had the people who owned that bakery, I think, in Hamilton? She came out to the public hearing and said, "Hey, listen. We've got to pay people a living wage. If I give people a fair wage, I keep the workers." She was from a small company, I think, with 10 people at the bakery. I can't remember the name of the bakery in Hamilton, but she was a wonderful deputant.
I would also mention, in my own riding, Mr. Speaker, that I've got Yorkdale Shopping Centre, which is one of the most profitable shopping centres in North America. I don't know if you've been there, but it has really top-of-the-line shopping. It employs a lot of people. I was very happy to see that the general manager of the shopping centre—in speaking with her, she said, "We find that when we pay people more than minimum wage"—which they do up at Yorkdale—"we keep our workers. And we train these workers. If we don't keep our workers, it costs us more to retrain new ones. So, if we get a top-line person, we train them and they stay, and they make money for us as a shopping mall or for the retailer. That's what we do. We know that if we were to start paying people minimum wage"—$10, $11 or $12 an hour—"we would lose a lot of these highly skilled salespeople."
The policy of Yorkdale and all of their retailers is to pay people above minimum wage. It saves them money, because you know what training costs are like, and turnover. That was Yorkdale. They said, "We all believe in paying people good wages" because they keep those workers, and the workers produce and make more money for the retail companies, or the mall indirectly.
There are some people who see that but, as I said, unfortunately, there are still too many who don't want to share, who don't want to appreciate the single mothers who go home every night, who have to pick up their children at child care and put them on the streetcar, subway or bus and bring them home, who have to cook, clean—they never get to sit down. Then, all of a sudden, it's 6 o'clock in the morning and they're feeding the kids, packing their lunches, taking them on the bus or the subway to school again or dropping them off at the daycare and then working eight or nine hours.
Frank Stronach will say, "No, she's not worth 15 bucks an hour. I'm worth $50 million but she's not worth $15." That's what this bill is all about. Are you for Frank Stronach or are you for the $15-an-hour wage for the single mother? Make that choice: Are you for Stronach or the working mother?
The Acting Speaker (Mr. Paul Miller): Further debate? Last call for further debate.
Mr. Chan has moved government notice of motion number 8, relating to allocation of time on Bill 53, An Act respecting the establishment of minimum government contract wages.
Is it the pleasure of the House that the motion carry? I heard a "no."
All those in favour, please say "aye."
All those opposed, please say "nay."
I believe the ayes have it.
Seeing members standing, this will be voted on after question period this morning.
Vote deferred.
The Acting Speaker (Mr. Paul Miller): Orders of the day. Minister Coteau.
Hon. Michael Coteau: No further business, Mr. Speaker.
The Acting Speaker (Mr. Paul Miller): Seeing no further business, this House stands recessed until 10:30 this morning.
Hon. Laura Albanese: Eric Albishausen is page captain today. He's from the great riding of York South–Weston. Visiting him today is Janet Churchill, Eric's mom; Dirk Albishausen, Eric's dad; and Stacie Grant, a former teacher of Eric's from Valleyfield Junior School. Welcome to Queen's Park.
Mr. Bill Walker: I'd like to introduce Lynne Lundberg and Christine Weldrick, from the great riding of Bruce–Grey–Owen Sound. Welcome to Queen's Park.
The Speaker (Hon. Dave Levac): Further introductions? The member from Barrie—sorry, the member from Sarnia–Lambton.
Mr. Robert Bailey: Thank you, Speaker—like a jack-in-the-box. I'd like to introduce a good friend of mine from Sarnia–Lambton, a neighbour from Petrolia, Lorne Given, attending today again.
Hon. Kathleen O. Wynne: On behalf of the chief government whip, I'd like to introduce the parents of Mora Carruthers, one of the best staffers we have here at Queen's Park. Welcome—Don and Cheryl Carruthers. Sorry.
The Speaker (Hon. Dave Levac): I think Hansard caught that.
Further introductions?
Mrs. Julia Munro: I'm pleased to be able to welcome the grade 10 students from Nantyr Shores in Innisfil, in the great riding of York–Simcoe. Mike Harrison and Tara Popple are the teachers joining the grade 10 students, who have just now experienced the shortcomings of our transportation system.
Ms. Peggy Sattler: I would like to welcome several members of the Ontario Undergraduate Student Alliance team who are with us this morning. We have two students from Western University, Catherine Dunne and Mackenzie Claggett, who will be working with OUSA for the summer. Also, from the OUSA staff, we have Sophie Helpard, Deb Lam, Colin Aitchison and Martyna Siekanowicz. Welcome to Queen's Park.
The Speaker (Hon. Dave Levac): I beg to inform the House that the following document was tabled: a report entitled Economic and Budget Outlook, Spring 2018, from the Financial Accountability Officer of Ontario.
The Speaker (Hon. Dave Levac): A point of order, the member from Scarborough–Rouge River.
Mr. Raymond Sung Joon Cho: I am seeking unanimous consent to move a motion to allow for the immediate passage of Bill 10, An Act to proclaim the month of June as Filipino Heritage Month.
The Speaker (Hon. Dave Levac): The member from Scarborough–Rouge River is seeking unanimous consent for passage of the bill. Do we agree? I heard a no.
The Speaker (Hon. Dave Levac): Excuse me. This is another fine signal, and I'll keep that in mind.
Seeing no further introductions, it is therefore time for question period.
Mr. Victor Fedeli: My question is for the Premier. Well, another week, and another damning report is out on the government's faulty fiscal record. This time, we hear the truth from the Financial Accountability Office. The FAO agrees with the Auditor General. They, too, forecast a $12-billion deficit for 2018-19, twice what the government has said the deficit will be. The government did not slay the deficit, as they claimed. In fact, the only thing they've slayed is any shred of trust or credibility. The government told us one thing, when both legislative officers told us the truth, which happens to be a completely different picture.
Speaker, why does this government think it can get away with presenting inaccurate numbers to the people of Ontario?
Hon. Kathleen O. Wynne: I know the Minister of Finance is going to want to speak to this issue in the supplementary.
What I want to say is that we thank the Financial Accountability Office for their annual Economic and Budget Outlook, and we're pleased that he notes that this year—I'm going to quote from the report—"the Ontario economy recorded the strongest pace of growth since the early 2000s" and that "job growth surged last year, with 128,400 net new jobs."
The reality is, our economic growth has outpaced that of most countries in Europe and in North America. Our unemployment rate is at a 17-year low. We know that everyone has not benefited from that, and we have made a deliberate decision to invest in the people of this province, to invest in their care. I thank the Financial Accountability Officer for his report.
The Speaker (Hon. Dave Levac): Supplementary?
Mr. Victor Fedeli: Back to the Premier: Actually, the Financial Accountability Office was quite revealing. Their report provided evidence that the tale the Premier has told this House about why they're running a deficit is not accurate.
The Premier said she chose to run a $6.7-billion deficit this year, saying it was for infrastructure. But the FAO revealed, for the first time, that that is not true. The FAO revealed that the government already had a $3-billion deficit for 2018-19. This government thought it could get away with that again and got caught.
Speaker, now that the FAO has exposed this, isn't the Premier the least bit red-faced for being caught red-handed?
Hon. Kathleen O. Wynne: Minister of Finance.
Hon. Charles Sousa: We do thank the Financial Accountability Officer for his report. He does highlight the fact that Ontario's economy has recorded the strongest pace of growth since the early 2000s. He does cite the fact that our job growth surged last year, with 128,400 net new jobs.
Indeed, Mr. Speaker, we are leading North America, the United States and Europe in terms of our GDP growth, and the FAO acknowledges that some of the investments that we're making are tremendously significant to our economy and to our society, including the benefits of supports for pharmacare and supports for skills and training.
Furthermore, he has adopted the position of the Auditor General, which is in dispute with independent, world-renowned accounting firms, including members of the Canadian Accounting Standards Board, which provided evidence and an indication that the principles of accounting that are being adopted are accurate. We're proceeding as such, Mr. Speaker.
The Speaker (Hon. Dave Levac): Final supplementary?
Mr. Victor Fedeli: Well, Speaker, the report also confirmed that job growth will slow to 60,000 jobs a year over the coming five years. That's something that was also presented in the budget. So the long-term outlook is quite different.
On page 15 of the Economic and Budget Outlook, this is where the truth is exposed. The government told us that the $6.7-billion deficit was for infrastructure. That is simply not true. In the FAO report, they show that $3.7 billion of the deficit comes from new spending promises, and that $3 billion was a hidden deficit for 2018-19.
Speaker, $3 billion plus $3.7 billion is $6.7 billion. They had a $3-billion deficit. Why did they try to hide the $3-billion deficit from the people of Ontario?
The Speaker (Hon. Dave Levac): Be seated, please. Thank you.
Hon. Charles Sousa: Let's be very clear: We have built lots of prudence. We have reserves. We have contingencies.
The Auditor General herself and the FAO have noted that we are very cautious in our assumptions, and that they're reasonable. They stated that. We are talking about two issues of dispute: one around pension assets, and one around the degree of rate-regulated accounting, both of which are associated with independent auditors and experts who are saying it is absolutely fine to proceed as such. Those are policy decisions that were made, and in the case of pension assets, that is an issue that has been ongoing for 20 years. Even when the Conservatives were in power, Mr. Speaker, they assumed exactly the same accounting principles.
We have not done anything other than provide full disclosure. They have had clean audits with the OPG. The Auditor General has agreed that it was accurate. We're going to proceed as such. We have full disclosure. It's fully accurate. We have balanced the books and we have a surplus.
Mr. Victor Fedeli: Back to the Premier. Well, look, they have not disclosed the books. They have not been in balance. They have told the people of Ontario one thing when both legislative officers have told us the complete opposite, Speaker.
I will review again: Why did they have a $3-billion hidden deficit? That's not full disclosure. That's here. It took the Financial Accountability Officer, on page 15, to show us a $3-billion hidden deficit. They were not in balance. They have told the House one thing when the truth is completely opposite.
I want to know the truth from this finance minister and from this Premier. Why is there a $3-billion hole in the budget? Why?
Mr. John Yakabuski: Wow. It just never ends.
The Speaker (Hon. Dave Levac): Yes, it will.
Premier?
Hon. Kathleen O. Wynne: I just want to say to the member opposite that when I came into this office and when we brought our first budget forward, we made it clear that we were going to increase the deficit in order to invest in infrastructure. We did that, Mr. Speaker. We stayed on track to eliminate the deficit. We did that this year.
We have a $600-million surplus and we have made a deliberate decision—openly, transparently, we have made a decision—to invest in people, in child care, in home care, in hospitals, in free tuition for students and in prescription medication for children and for seniors.
We have been very clear about our—
The Speaker (Hon. Dave Levac): The member from Prince Edward–Hastings will withdraw.
Mr. Todd Smith: Withdraw.
The Speaker (Hon. Dave Levac): And you're working towards warnings. I'm telling you now.
Finish, please.
Hon. Kathleen O. Wynne: Mr. Speaker, the reason that we can have this discussion about our finances is that we put in place a requirement to have a pre-election report. That is what we are discussing, openly and transparently.
Mr. Victor Fedeli: Well, thank heavens we had a pre-election report from the Auditor General, who exposed a $12-billion deficit instead of the nonsense the government told us. Thank goodness that the Financial Accountability Officer came out today and explained that, yes, indeed, we do have a $12-billion deficit, not the nonsense the government told us. He further drilled down and showed us that in that deficit is a $3-billion existing hole in the budget.
The Premier just doubled down, saying that she made a deliberate choice to go into a $6.7-billion deficit. That is absolutely not true. It's $3.7 billion that she's saying she's investing there: $3 billion of it was a secret, hidden hole in the budget.
I want to know. We all want to know. The people want to know: Why did it take the Financial Accountability Officer to come out and tell us this morning?
The Speaker (Hon. Dave Levac): Thank you.
Mr. Victor Fedeli: You did not slay the deficit—
The Speaker (Hon. Dave Levac): Members on both sides are asking me to move to warnings, and I shall. We're in warnings.
Hon. Kathleen O. Wynne: I do understand why the member opposite would be so frantic as he holds the book, because the information he's saying is secret is printed in black and white. He is reading the numbers because we are being transparent. What we are not—
The Speaker (Hon. Dave Levac): The member from Leeds–Grenville is warned. The member from Prince Edward–Hastings is warned. I missed a third, but—
The Speaker (Hon. Dave Levac): You threw him under the bus.
Hon. Kathleen O. Wynne: Mr. Speaker, he has the information because we have made it available and made it transparent. I also understand that he would be additionally frantic because he's dealing with a leader who, behind closed doors, is making deals with big developers and who only backed off on opening up the greenbelt when he was caught in the light of day.
We have consistently been open about our finances. We have consistently supported the greenbelt. We believe that our environment is precious. Once that land is gone, it's gone forever.
I understand why he's so upset.
The Speaker (Hon. Dave Levac): The Minister of Agriculture, Food and Rural Affairs is warned.
Mr. Victor Fedeli: The only people hiding anything in this Legislature are this government.
Thank heavens the FAO showed us the $3-billion hidden deficit that this government had. And he explained why, Speaker. He actually told us how this $3-billion hole came about, and, quite frankly, it's no surprise to anyone on this side of the House. He has told us it's because they ran out of things to sell. He said the government will see weaker revenue gains due to a loss of time-limited and one-time revenue. That money came from one-time sales of assets. They sold Hydro One. They sold buildings. They sold shares. They ran out of things to sell, and now they have a hidden deficit. They told us that it was a $6.7-billion deficit when $3 billion of that deficit was actually a structural deficit. It was built in, Speaker. It is a deficit that they ran up because they were running out of things to sell.
Speaker, why did they not tell the people about this hole?
Hon. Charles Sousa: Speaker, very quickly, it was this government that actually passed the Fiscal Transparency and Accountability Act, precisely because that party—
The Speaker (Hon. Dave Levac): The Leader of the Opposition is warned. The member from Simcoe–Grey is warned.
Carry on.
Hon. Charles Sousa: It's precisely because that party did in fact hide the deficits. Furthermore, we brought in the FAO as well, recognizing the need to look at projections going forward. This is what we're talking about: projections.
Forty years have passed. They've only balanced the budget three times; we've balanced it almost twice as many times.
Furthermore, there are public accounts and there are the actual results that are achieved. Third-quarter results have shown that we have balanced the books and have a surplus. DBRS just again made the connection and said that we are AA rated and stable because of the fact that we have done so. Furthermore, we are putting forward $230 billion over 14 years for those capital improvements, and we've exceeded our target, year over year.
Mme France Gélinas: Ma question est pour la première ministre. On April 1, the funding for eight hospital beds at Guelph General Hospital ran out. The hospital says it still needs the beds. It is operating at 111% occupancy. It is seeing 64,000 patients per year in an emergency room built for 45,000.
Why is the Premier forcing the patients to be treated and admitted in the hallways at Guelph General Hospital?
Hon. Kathleen O. Wynne: Minister of Health and Long-Term Care.
Hon. Helena Jaczek: Of course, we're monitoring situations across the province at all times where there is a capacity challenge, and we are addressing this, Mr. Speaker. As you know, through our 2018 budget, we are investing an additional $822 million in Ontario's publicly funded hospitals. Overall, this is a historic increase of 4.6%.
High-growth areas, obviously, are looked at with an eye to improving their situation. We work with our LHINs on a very systematic basis. We look at the need in each particular area of the province, and we ensure that the funds are available in each particular hospital.
Mme France Gélinas: The government funded some temporary beds during the flu season. At the time, Guelph General Hospital needed an extra 16 beds. The government paid for eight of these beds. But on April 1, the government took away the funding for these beds, even though the hospital still needs them now.
Why won't the minister provide enough funding to treat Guelph-area patients with the dignity they deserve?
Hon. Helena Jaczek: I hope the member of the third party recalls that last fall, we did create some 2,000 extra beds across the province. These are spaces that we did annualize in our funding, to the tune of $187 million.
We're continuing to work with the LHINs, looking forward at the coming year. We will ensure that people in this province get the care they need where they need it, when they need it. This is an ongoing evaluative process that we go through. We are listening to the needs across the province. We will ensure patients get the care that they need.
The Speaker (Hon. Dave Levac): Final supplementary.
Mme France Gélinas: Something is not right here. On April 1, the government took away the funding that you had given in the fall. Those eight beds are no longer funded at Guelph General Hospital. The CEO of the hospital said that funding for the hospital has not kept up with population growth. It is happening throughout our province, in all of our hospitals.
The minister's hospital funding freezes have meant service cuts to patients. They have meant that patients are treated and admitted in hallways and sometimes in bathrooms. The Premier likes to complain about the cuts from the Conservative government, but when will she accept responsibility for the cuts to hospital funding that she alone is responsible for?
Hon. Helena Jaczek: In the case of Guelph General Hospital, we are working with that hospital very closely. We've been in communication with the CEO and the chair of the board to understand their particular pressures at this moment. We are committed to maintaining the surge beds that were announced last fall.
We understand that growth pressures exist across the province. Obviously, last winter, there was an exacerbation with a very severe flu season. But we are going to continue to monitor and work with our hospitals to ensure patients get the care that they need, when and where they need it.
Mr. Peter Tabuns: To the Premier: On April 19, the Ontario Energy Board announced what at least seemed to be good news: hydro rates were not going up. But it turns out that this was just government propaganda, because if you dig just a little bit further, you find that actual hydro costs have jumped by roughly 10% from last year.
The government is using borrowed cash to hide these true costs from the public before the election. Why won't the Premier just tell the truth—that her $40-billion hydro borrowing scheme will send bills skyrocketing by 70% after the election?
Hon. Kathleen O. Wynne: Minister of Energy.
Hon. Glenn Thibeault: The fair hydro plan, as the member is well aware, has brought forward a 25% reduction for all families across the province, and rates are being held to the rate of inflation for the next four years. The long-term energy plan also shows that costs are being pulled out of the system to keep our system reliable, clean and affordable for all people right across the province.
It is good news that the OEB brought forward no rate increases this year. We'll continue to work with all our partners to ensure that we keep having a system that is reliable, a system that is clean and affordable. For us on this side of the House, we made sure that we acted on it. The opposition party: They voted against that.
The Speaker (Hon. Dave Levac): Supplementary.
Mr. Peter Tabuns: Again to the Premier: The Premier is using borrowed cash to hide the true cost of hydro before the election, but background documents buried in the Ontario Energy Board's website show the truth: Actual hydro costs have jumped about 10% from last year. Those are the costs that Ontario families will still have to pay after the Premier's payday loan comes due. Leaked government documents show that hydro bills will rise 70% over 10 years, starting after the election.
Will the Premier tell Ontarians the truth: that her hydro borrowing scheme drives bills up even further over the long term, not down?
Hon. Glenn Thibeault: Once again, there's a document called Ontario's Long-Term Energy Plan—I encourage the member to read it—where he will see that the rates are actually lower moving forward than they would have been even four years ago, and where that projection would be.
We invested in the fair hydro plan to make sure that we could reduce rates by 25% for everyone across the province. They voted against that. And do you know what, Mr. Speaker? They have no plan when it comes to actually reducing rates. What they want to do is eliminate the fair hydro plan. They want to raise rates by 25%.
On this side of the House, we brought forward a plan. We helped all families right across the province, and 500,000 small businesses and farms. Those who live in rural and northern parts of our province continue to see rates that have been reduced anywhere between 35% and 50%, on average.
We will continue to act on behalf of the people of Ontario, helping them and keeping a clean, reliable and affordable system.
Mr. Peter Tabuns: Again to the Premier: The Minister of Energy was positively gleeful last week after the Conservatives released a hydro plan that kept all of the worst Liberal hydro policies. The Conservative plan will keep the Liberal government's $40-billion hydro borrowing scheme, which will drive bills up by more than 70% after the election. The Conservative plan will keep private profits on our hydro bills and will keep Hydro One privatized. And the Minister of Energy couldn't be happier. Why on earth is this government celebrating the fact that the Premier's hydro policies have been endorsed by Doug Ford?
Hon. Glenn Thibeault: The NDP platform, when it comes to electricity, has them buying back billions of dollars in shares of Hydro One that will not take one cent off of electricity bills for Ontario families and businesses. I don't know why they think that's a good idea. On this side of the House, we brought forward a plan that reduced rates by 25%, and they voted against it.
When it comes to individuals living on First Nations, we eliminated the delivery charge. They voted against that. When it comes to low-income individuals—in their plan, it wasn't even mentioned until the last page. We made sure—
The Speaker (Hon. Dave Levac): The member from Essex is warned.
Hon. Glenn Thibeault: Mr. Speaker, let's think about this: They're going to spend billions buying back shares of Hydro One that actually will not do anything to lower anyone's electricity bills, but those billions of dollars that they spend will mean that they will have to close schools and close hospitals. What are they going to cut to make sure that they can buy back a plan and a company that won't save anybody anything?
Mr. Todd Smith: My question is for the Premier. First the Premier gave Mayo Schmidt millions of dollars when she made him the CEO at Hydro One, and now we know that he has become the six-million-dollar man. Then the Hydro One board gave themselves millions of dollars in raises and tried to make it impossible to hold them to account. We don't know how big the millionaires' club is, but it's $412 million large. Finally, yesterday your government, Premier, voted against reviewing compensation at Hydro One.
What we really want to know is, when is the Premier going to start to stand up for electricity customers in the province of Ontario and not the millionaires' club at Hydro One?
Hon. Glenn Thibeault: It was this government and this Premier who actually stood for families last year when we brought forward the fair hydro plan, and that party stood and voted against it. It was this government and this Premier who actually brought forward the Ontario Electricity Support Program to help low-income individuals, to help seniors, and it's that party that voted against it.
We made sure that we brought forward our concerns to the board. Over the last weekend, our government urged Hydro One's board to revisit its executive compensation model. That's exactly what they're doing. As the largest shareholder, we welcome the board's decision to re-examine the compensation model, which will include independent advice as well.
The board's decision to increase executive compensation was done without our involvement, so changes to compensation and severance that were adopted by the board were released to us on March 29. We acted, and we are now making sure we can have that review through the board.
Mr. Todd Smith: It was this Premier and this government who handed out the multi-million-dollar salary to the CEO of Hydro One, and then have sat idly by and watched it ever increase—only we can't see all of it, Mr. Speaker.
Yesterday in the Legislature, members on the government side were trying to justify the salary of the six-million-dollar man. They're trying to defend the indefensible. This government's legacy on electricity is the same as its legacy on everything: mountains of new debt, a select few Bay Streeters who are getting rich, and everyone else in Ontario getting stuck with the bill.
On Monday, the Premier will send out the energy minister to say that the compensation is being reviewed. Then, on Tuesday, every Liberal votes against reviewing it. That's what happened yesterday, and they're trying to justify the $6 million.
Speaker, when will the Premier show some leadership and finally deal with the millionaires' club at Hydro One?
Hon. Glenn Thibeault: We brought it forward over the weekend, in our role as the largest shareholder, asking the board to revisit their executive compensation model. They're doing just that, Mr. Speaker.
Because we found out about this at the end of March through the management information circular, the board now acknowledges that as their largest shareholder, which is this government, we should be engaged on such material issues and that changes are needed.
While Doug Ford and the PCs would take that erratic and reckless approach, to fire the board and do absolutely nothing to reduce rates, we believe in a stable solution that exercises our authority as the largest shareholder. With this in mind, our government will abstain from voting on the say-on-pay shareholder resolution at the Hydro One annual general meeting, which is in May—May 15—to give the board the necessary time to re-examine the matter.
Our government continues to focus on fairness for all people in this province.
Ms. Peggy Sattler: My question is to the Premier. For 15 years, this Liberal government has known about excessive executive salaries in the broader public sector but has done almost nothing to rein in executive compensation.
This week, Londoners learned about proposed salary increases for Western University senior administrators. The Liberals have allowed boards of governors the freedom to select their own comparators to determine salaries, without any oversight to ensure that the comparators are valid. This can lead to significant salary increases far beyond what is reasonable or appropriate.
Similar concerns have already been raised about Nipissing University, and we expect to hear more as university compensation frameworks are posted across the province.
Speaker, why has this Liberal government refused to put meaningful controls in place to rein in executive compensation in the university sector?
The Speaker (Hon. Dave Levac): The member from Scarborough Centre is warned.
Hon. Kathleen O. Wynne: President of the Treasury Board.
Hon. Eleanor McMahon: I want to thank the member opposite for her question, as it gives me an opportunity to not only address this issue but to put some facts around it that are extraordinarily important.
Our government froze salaries across the broader public sector in 2012. When we renegotiated them more recently, we put some really important pieces in place. For example, our framework enforces strict rules that prohibit executives from receiving unnecessary perks, such as signing bonuses, retention bonuses and unrestricted severance. Because we remain committed to ensuring fairness and accountability in the way that these broader public sector executive frameworks and pay are structured, we did away with cash housing allowances, vehicles that aren't required and so on.
I'll speak more in the supplementary about what we're doing in terms of our framework for our broader public sector salaries, Speaker.
Ms. Peggy Sattler: What is even more troubling is that decisions about executive salary increases are being made after a decade of Liberal underfunding of the post-secondary sector. For years, Ontario has had the highest university tuition and the lowest per-student funding of any province in Canada. This has undermined the quality of post-secondary education for students and led to an explosion of contract faculty. It has contributed to deep divisions between administration and academic workers at York University and jeopardized the career plans of thousands of young people at York, with the strike now in its ninth week.
Speaker, does this Liberal government believe that increasing the salaries of senior university administrators is more important than the quality of education that Ontario post-secondary students receive?
President of the Treasury Board.
Hon. Eleanor McMahon: The member opposite, in her question, talked about what's more important and the juxtaposition. I want to say that, as a government, it's important to strike a balance between attracting really great talent, which is what we've done, and setting fair and reasonable compensation packages for the broader public sector.
We remain committed to ensuring fairness and accountability in terms of how that compensation is managed, but overall, we believe the people of Ontario have the right to know how their dollars are being spent, and they deserve a clear rationale for why executives are paid what they are. That's why we implemented the broader public sector executive framework in 2016. This framework requires enhanced transparency through the public posting of the executive compensation framework so that the public can understand and appreciate it. It's an important exercise in democracy and accountability.
Ontarians now have the opportunity to provide feedback as well. We're proud of our public servants in this province and we have taken these important accountability—
New question.
Mr. Han Dong: My question is to the Minister of Tourism, Culture and Sport. For decades, people have come to Ontario Place to enjoy family fun and live music, build happy memories and take in the beautiful waterfront. We have been moving forward with our ambitious vision to transform Ontario Place into a modern, vibrant, year-round waterfront destination that engages residents and visitors of all ages.
Yesterday, the minister made an exciting announcement at Ontario Place and gave us a sneak peek of what's to come. As the local member, I feel very lucky to live so close to this beautiful space.
Speaker, through you to the minister, can she tell us what Ontarians are looking forward to this summer?
Hon. Daiene Vernile: Thanks to the fantastic member from Trinity–Spadina for that question.
Last year, we made very significant progress in transforming our vision into reality. We opened the Trillium Park and William G. Davis Trail and added seven and a half acres of green space to the waterfront. We hosted free family fun with Winter at Ontario Place, which featured a skating rink and light installations.
I'm happy to share with you all today that we're going to keep the momentum going. This summer, Ontario Place is going to be hosting a music series every Thursday, featuring emerging artists from every genre, including indie rock, folk, hip hop and jazz. There are going to be family dance and music performances on Sunday afternoons. We're also going to have outdoor activities such as beach volleyball and free skating on the outdoor synthetic rink.
Speaker, stay tuned. I look forward to unveiling some exciting new details in the supplementary.
Mr. Han Dong: It's fantastic to hear that the vision proposes a mix of outdoor and indoor features, including more green space, recreational activities like beach volleyball, and a waterfront trail around the entire site. The urban park and the trail are dramatically transforming the Toronto waterfront with a new green space that celebrates the natural and cultural legacy of Ontario Place.
As the local member, I know how important it is to gather feedback from the public, including the residents from Fort York, Liberty Village, CityPlace, Bathurst Street and surrounding neighbourhoods.
Speaker, through you to the minister, can she tell the members of this House about the next steps of the Ontario Place revitalization?
Hon. Daiene Vernile: Thank you to the member from Trinity–Spadina, who, by the way, joined me yesterday at our beautiful waterfront to announce our next milestone in the rebooting of Ontario Place.
Speaker, just a few months ago I announced our plan to design a new green space. It's going to be known as Celebration Common. It will be Toronto's newest waterfront park. It's coming in at a size of about 14 football fields. The park is going to include a children's outdoor play area, walking paths and trails, a beach area for outdoor recreation and water sports, and lots of room to host large-scale festivals. Most importantly, there's going to be plenty of green space.
Speaker, on this side of the House, we believe in protecting our environment, not paving over paradise. I'm confident that Celebration Common is going to become Ontario's new urban backyard where people can kick back and enjoy Toronto's beautiful waterfront.
Mr. Randy Pettapiece: My question is to the Premier. Speaker, the Liberal government is trying to strong-arm the horse racing industry into accepting a deal that might hurt them in the long run. Just two weeks ago, the government sprang a massive long-term funding agreement on horse people. It's nearly 200 pages and written in complex legal language. Here's the kicker: They gave the racetracks, breeders and horse people until May 1 to sign the agreement—or else.
Why is this government playing politics with horse people's livelihoods and pressuring them to sign on to a 19-year agreement in the final weeks before an election?
Hon. Charles Sousa: Mr. Speaker, the member opposite makes reference to the fact that we have now strengthened and sustained horse racing and breeding by putting in a $105-million, 19-year agreement.
We've also provided an Enhanced Horse Improvement Program that is extended year over year by OMAFRA. We have a new Racetrack Sustainability Innovation Fund, $6 million over three years, to support regional racetracks to innovate, diversify and expand their revenue sources. And OLG is providing additional funding to supplement those racetracks that may be experiencing shortfalls and to enable long-term decisions about horse breeding.
More importantly, Mr. Speaker, we've established a new board. The Ontario Racing board will now be responsible for all the strategic plans—also providing the service provider to ensure that those funds are transparent and accountable. And we're making it that there are going to have to be horse breeders on that board and small tracks on that board—five seats for racetracks; five seats for breeders—and an independent chair.
Mr. Randy Pettapiece: Back to the Premier: The Minister of Finance claims that his long-term agreement "will provide the stability needed to strengthen and sustain horse racing and breeding in Ontario," yet that same minister has approved plans to rip the slot machines out of Kawartha Downs, Ajax Downs and other community racetracks, threatening their future viability.
Now, on the cusp of an election campaign, his officials are threatening to freeze out horse people if they don't sign on to a 19-year deal—"Sign it or else"—that they haven't had time to read. What happens if one or more racetracks refuse to sign? Will the government cut off their funding, or will they set aside politics and let horse people have meaningful input after the election?
Hon. Charles Sousa: Minister of OMAFRA.
Hon. Jeff Leal: I appreciate the supplementary from the member from Perth–Wellington. The reason that we chose 19 years, Mr. Speaker—$2 billion over those 20 years—is because, when you have insight into the horse racing industry in Ontario, you know it works on a cycle. It usually takes three to four years before a horse, whether it's a standardbred horse or a thoroughbred horse—
The Speaker (Hon. Dave Levac): The member from Haliburton–Kawartha Lakes–Brock is warned.
Hon. Jeff Leal: Mr. Speaker, as we consulted widely with the industry, a thoroughbred horse or standardbred horse usually takes three or four years from the time it's born to the time it gets trained and eventually gets to the track. With anything shorter than 19 years, you don't have confidence in the industry.
One of the things this government wanted to do is to make sure there's a future path for all 15 tracks in the province of Ontario.
Ms. Jennifer K. French: My question is to the Premier. Volunteer firefighter Gary Kendall died in 2010 in a dangerous winter river while being trained by an unregulated private trainer. The family called for a coroner's inquest; they didn't get one. No one did anything to prevent another tragedy.
Five years later, firefighter-hopeful Adam Brunt died while taking a private, unregulated rescue training course on a dangerous winter river with the same unregulated private trainer. Adam died while 11 other students helplessly tried to save him—two unnecessary deaths, with no one held responsible.
Finally, after two men died, the families got a coroner's inquest. I have been pushing to protect firefighter trainees for the past three years. My motion to immediately adopt all coroner's inquest jury recommendations to keep future trainees safe was passed unanimously. You said it was urgent. You said you'd take action.
Premier, what's the status of the changes and actions needed to ensure no one is ever put at risk like this again? Have all of the coroner's inquest recommendations been adopted yet?
Hon. Kathleen O. Wynne: Minister of Community Safety and Correctional Services.
Hon. Marie-France Lalonde: What happened in these incidents is a tragedy. My thoughts are with the families and colleagues of those two trainees who passed away. I really commend the member opposite for her advocacy on this matter, and I know that this is something she has worked very, very hard on over the past years.
Our government is carefully addressing the findings and the recommendations of the coroner's inquest into these deaths. The Office of the Fire Marshal and Emergency Management took immediate action and suspended the water rescue program at the Ontario Fire College after this inquest. Our government continues to work with the Fire Safety Technical Table, where our fire safety partners and experts meet to discuss fire safety challenges. That table is looking at the recommendations, and certainly we hope to have solutions.
Ms. Jennifer K. French: Again to the Premier: You can't suspend a training program that didn't exist in the first place.
It's been eight years since Gary died. It's been over three years since Adam died. Since this government hasn't chosen to figure out how to protect firefighter trainees, I have worked for three years on this, and I have figured it out for you.
My Bill 58, the Brunt and Kendall Act, lays out a comprehensive regulatory and safety framework to hold private trainers to account and keep firefighter trainees safe. Alongside the families of Adam Brunt and Gary Kendall, the Ontario Professional Fire Fighters Association, safety advocates across the province and legal experts, we have finally completed this necessary legislation to ensure that deaths like this cannot happen again in the province of Ontario.
It has been a long and emotional journey to get here, but here we are, with my legislation in front of us and still with time on the clock. Premier, will you promise to keep our firefighter trainees safe and ensure that Bill 58 passes through this House and into law before the end of the session?
Hon. Marie-France Lalonde: Certainly the safety of firefighters is very important, and I want to commend them for all the work that they do all across our province.
We are taking action to modernize the fire safety delivery in Ontario. Part of this modernization is to ensure that our world-class firefighters have the support they need. Ensuring firefighters are fully trained and certified in their role is critical for their safety and the safety of the public. This is why we are proposing that firefighters be certified through the National Fire Protection Association standards. This aligns with the Occupational Health and Safety Act, which requires that employees receive sufficient training.
My ministry and I, as the government minister, will continue to work to make this proposed requirement as seamless as possible. We will continue to engage to ensure that every single firefighter in this province is safe.
Mrs. Cristina Martins: My question this morning is for the minister responsible for early years and child care. Minister, our government is committed to making sure families have access to high-quality, inclusive and affordable child care. This is what my constituents in Davenport want and expect.
Under Doug Ford's plan, families will receive a rebate of just $34 a month. This proves just how out of touch he is with the needs of families on the ground. Our government's recent announcement of free child care for preschool-aged children, from age two and a half to when they are eligible to start free full-day kindergarten, will help ease the financial burden on tens of thousands of families. Families will save an estimated $17,000 per child, allowing parents to go back to work when they choose and helping to give children the best start in life.
Can the minister expand on her announcement from last week and what this means for parents in my riding and across Ontario who are looking to access child care?
Hon. Indira Naidoo-Harris: Thank you to the hard-working member from Davenport for that important question. The reality is that we know that parents and children are benefiting from Ontario's high-quality child care programs, but we also know that there's more work to do. That's why we continue to build on our commitment to help 100,000 more children get access to quality, affordable, licensed child care. We are building a solid foundation for child care in our province.
Last week we announced that our government is investing $78.6 million in capital funding to build more than 3,100 licensed, community-based child care spaces. Think about that: We're building spaces right where families need them.
Speaker, our investments are giving thousands of Ontario families support, while Doug Ford's child care scheme is a $1.3-billion annual cut to child care. They promise to cut programs that ease the financial challenges families face.
Mrs. Cristina Martins: I want to thank the minister for that answer. Last week's announcement is indeed another step forward to creating affordable and accessible child care across our province.
It's clear that our government is truly transforming the way child care is delivered in Ontario. There's no question that more access to child care is critical for Ontario families. However, could the minister please explain what makes last week's announcement so important, and how this will help families in diverse situations in my riding of Davenport and across Ontario access child care?
Hon. Indira Naidoo-Harris: I'm pleased to answer the member's question. Mr. Speaker, we are taking a number of important steps to ensure that every child and family in Ontario has access to a full range of high-quality and affordable care. Public spaces like places of worship, community centres and indigenous friendship centres strengthen our communities. Creating child care spaces in these community hubs will make them even stronger and give families access to child care right in their neighbourhoods. Just think about that: child care spaces for families where their child will be safe and well cared for close to home. This is an important part of our commitment to invest in families and bring free child care to preschoolers across the province.
The parties opposite will do nothing to build more capacity for child care or the workforce to deliver that care. Our government is focused on building even stronger communities for children, families and for the future of this province.
Mr. Rick Nicholls: My question is to the Premier. "Carnage Alley" got its name because of the appalling accidents and fatalities that have occurred there. In 1999, the largest vehicle pileup in Canadian history occurred in Carnage Alley. I know because I was there, but thankfully unhurt.
The government of the day responded by widening the highway from Tilbury to Windsor and by adding a concrete barrier there.
In 2009, this government made an election promise to widen the 401 from four lanes to six lanes between Tilbury and Lambeth, with the addition of a concrete median barrier. But that 117-kilometre stretch between those two areas remains untouched, and with the scheduled building of the Gordie Howe bridge in Windsor, transport traffic is only going to worsen in the coming years.
Premier, you know that I've advocated for this for many years, and you told me and this Legislature that a barrier would be built. We need a concrete barrier. So, Premier, what do you actually plan to do and when do you plan to do it?
Hon. Kathleen O. Wynne: Minister of Transportation.
Hon. Kathryn McGarry: I want to thank the member for this question. I know that we were together, just shortly after I took over this portfolio, with people along that stretch of the 401 in order to look at the issue.
We will be building a concrete barrier. In the meantime, while we are doing the environmental assessment and continuing to do the necessary work to widen that stretch of the 401 and to add the concrete barrier, we are going further than that because I don't want to wait for the length of time it's going to take to make that barrier.
This year, we will be starting to install high-tension cable barriers in almost half the stretch between Tilbury and London, to make sure that there's protection right away.
It's going to take us time to continue the necessary work. We committed to that at the time, and we are continuing to move forward. The request for proposals is going out, to build this immediately.
Mr. Rick Nicholls: Back to the Premier: Carnage Alley's narrow lanes and dangerous curvatures are extremely hazardous, especially in winter. I travel this road frequently from Chatham to Toronto.
As a matter of fact, in 2017, there were five fatalities on that stretch. Three were from crossovers, including the fatality of a five-year-old girl and her mother. This year, on April 25, a double crossover of a transport and minivan occurred—thankfully, not fatal. But accidents continue to happen on a more frequent basis.
I've raised this issue several times before, while the construction on the 401 between Tilbury and Highway 40 was finishing up last year. Your ministry officials stated in a meeting with the Build a Barrier group from Chatham that the contract could be opened up to include building a concrete barrier at that time, but sadly, it wasn't. My petition quickly gained more than 4,000 signatures.
Premier, we need a concrete barrier, not a cable barrier. Why won't you build a concrete barrier now?
Hon. Kathryn McGarry: Again, this is not a partisan issue. We are moving forward in the short term to protect that length of highway as soon as we can. The member was there with the technical expert. He knows that it takes time to do the environmental assessment: up to a year or two. Then we have to do the design to widen the highway. You cannot put a concrete barrier on a four-lane highway; it has to be expanded.
While we're doing that necessary work, we are going forward this year to make sure that that stretch of highway is protected. We've found a way to expedite the process. We will be installing those high-tension cable barriers, which are 97% effective in other jurisdictions, to stop the crossovers.
On the contrary, I would like to know where the PCs stand on this issue. We know that there are no dollars for infrastructure along that area. I don't know how they're going to pay for it. Unlike Doug Ford, we're moving forward to make sure—
The Speaker (Hon. Dave Levac): Be seated, please.
The Speaker (Hon. Dave Levac): Order, please. Thank you.
Mr. Gilles Bisson: My question is to the Premier. A number of people across Ontario, like in Timmins, are trying to buy electric cars because they want to do the right thing. In Timmins, we have a number of people who have actually bought them. But here's the problem: Unless you charge it at home, you can't go anywhere because the company, KSI, to which your government gave the contract to build the charging stations, is not servicing and fixing the charging units that break down.
I've got a guy who calls me the other day. He leaves Timmins because he wants to drive toward North Bay for something, and can't get a charge out of the station in Timmins because it has been broken for a while and not fixed. He drives down to Earlton, gets to the Earlton station and finds out that one hasn't worked since last August. He had to go back to his dad's place and plug his car in overnight so that he was able to drive back to Timmins, get his gas car and then drive back down the highway to go do what he had to do.
When are you going to fix these KSI units?
Hon. Kathryn McGarry: I appreciate the question, because it allows us, on this side of the House, to talk about the great investments that we've been making in electric vehicles in the province. Electric vehicle purchases are up 120%, and we are the strongest jurisdiction for people to continue to purchase electric vehicles, helping to protect our environment.
We are continuing, through our Electric Vehicle Chargers Ontario Program, to expand the number of chargers available throughout Ontario. We understand that we have had some issues along the way to make sure that the chargers are in and working. But some of the vehicles on the road now take less time to charge altogether.
We are very happy to be changing over from a vehicle system that is causing more carbon to be put in the air and making sure that we have clean vehicles moving forward.
Mr. Gilles Bisson: Minister, the car can't move forward. It's got no charging system.
The system has been broken since last August in Earlton, apparently, from what this individual was telling me. How you can have a system in place that people can't use, and boast about how your program is working, is beyond me.
I ask you again: Could you please get on to the people that you contracted these chargers to, such as KSI, to ensure that if they install these things, they keep them operational and people don't get stranded, as my constituent did?
Hon. Kathryn McGarry: I want to again thank the member for the supplementary. Our program has provided Ontarians with incentives to help purchase over 16,000 EVs and over 3,000 home and workplace chargers. We expect to see the numbers grow. We understand it's frustrating for those who are unable to get in to use chargers that may be broken. We're continuing to work with our contractor to get in there and expedite the process to not only install them but to actually repair them and keep them going.
Because of our commitment to charging infrastructure, drivers know that they can still travel the distances they otherwise would have with a traditional vehicle. We're continuing to work with those areas.
I'd like to know this from the NDP: Are they going to vote for the budget that contains the investments that we've made in the past to continue to make sure that we have more electric vehicles and charging systems around Ontario?
Mr. John Fraser: My question is for the Minister of the Environment and Climate Change. There's a great deal of concern in my community of Ottawa and in the Ottawa Valley about Chalk River Laboratories. Canadian Nuclear Laboratories is planning to build a disposal facility for radioactive waste near Chalk River Laboratories. The site would hold approximately one million cubic metres of low- and mid-level nuclear waste, and it is less than one kilometre away from the Ottawa River.
I have heard these concerns, and I too am concerned. I worry about the risk that nuclear waste could contaminate the Ottawa River.
My question to the minister is this: What is our government doing to ensure the protection of the environment and human health in regard to this proposed project?
Hon. Chris Ballard: Thank you to the member from Ottawa South for that important question and for his continued advocacy on behalf of his constituents, because it is a very important issue.
We understand why it is such an important issue too. That's why experts from my ministry have been actively participating in the public commenting process as the federal government moves ahead. In fact, last August, my ministry submitted comments to the Canadian Nuclear Safety Commission for the draft environmental impact statement. The comments and concerns provided by my ministry included concerns around storm water management, the limits for contaminants, and the sharing of public information around monitoring locations.
We know how important it is to get this right. That's why we're engaged with our federal partners and will continue to work on behalf of the member's constituents.
Mr. John Fraser: Thank you, Minister. This is an ongoing process and it's important to ensure that our natural environment remains protected. The Ottawa River is an important source of drinking water, a natural home to many animals and species, as well as a resource for recreation for many.
The Ottawa Riverkeeper, Ecology Ottawa, local First Nations and individuals have expressed their concern about the proposed near-surface disposal facility being built, because of the potential impacts nuclear waste could have on the river. Earlier this year, my constituent Ole Hendrickson wrote to me expressing his concerns that non-radioactive contaminants in the facility's waste, like PCBs and dioxins, could fall between the cracks.
Could the minister please explain what our government is doing to protect the Ottawa River in regard to the Chalk River waste disposal site?
Hon. Chris Ballard: Thank you again to the member from Ottawa South for that very important question.
I just want to reiterate that these are very real concerns and we need to be vigilant. I understand the jurisdictional issues with this facility, but it's still important for us to protect our communities here in Ontario. That's why my ministry submitted comments to the federal government's proposal, to ensure all precautions are being taken around this project.
As a result of our comments, the federal government changed the project to include only low-level types of nuclear waste. The federal government has also assured us that the waste intended for disposal in the proposed project will meet all of the required international guidelines as well.
Again, I want to thank the member for Ottawa South for his advocacy. We will continue to monitor this project.
Mr. Jim McDonell: To the Premier: Recently, tenants in a senior citizens' social housing development in my riding were told some shocking news. They were informed that due to necessary cuts, they would now be responsible for mopping the floor and washing the countertops. They were just handed industrial equipment and told, "Folks, get 'er done." Over 90% of the tenants have balance or mobility issues, many use walkers and most are in their eighties and nineties. They deserve a safe living environment.
The government has consistently shortchanged municipalities with inefficient infrastructure funding, cutting the Ontario Municipal Partnership Fund and neglecting the needs of rural Ontario residents.
Is cutting municipal transfers so low that seniors in affordable housing are left mopping the floors, putting them at risk of injury, really the right way to treat the people who built this province, and the right way to solve this government's spending and debt problems?
Hon. Kathleen O. Wynne: I really do commend the member opposite for his concern about infrastructure funding for municipalities in general. We certainly have a housing plan. We're working with the federal government to put in place more affordable housing and more supportive housing. We have increased funding to municipalities for housing and changed the flexibility that allows them to make investments.
But it's very interesting to me that this member is standing in his place and asking a question about this, when his leader, Doug Ford, was in Cornwall and said that municipalities were going to have to make cuts in order to be able to continue to get infrastructure spending at all if he were the Premier. So I encourage the member to have a conversation with his leader, because if infrastructure funding for municipalities is dependent on cuts, that doesn't bode well for the future of—
The Speaker (Hon. Dave Levac): I wish to turn to the member from Leeds–Grenville on a point of order.
Mr. Steve Clark: Thank you, Speaker. Point of order: I would just like to ask for unanimous consent to have a moment of silence for my MP, Gord Brown. Gord passed away this morning in Ottawa. He was an amazing MP. Speaker, we once dreamed as young men to serve in this Legislature and in the House of Commons, and we realized that.
I want to express, on behalf of the House, our deepest sympathies to Gord's wife, Claudine, and his two sons, Chance and Tristan.
We're going to miss him. I'm going to miss him. Speaker, he was like a brother to me. Eastern Ontario, the province and our country mourn the loss of Gord Brown.
I would appreciate consent to have a moment of silence.
The Speaker (Hon. Dave Levac): The member from Leeds–Grenville is seeking unanimous consent for a moment of silence to pay our respects. Do we agree? Agreed.
I would ask everyone in the House to please rise for a moment of silence in honour of MP Brown.
The Speaker (Hon. Dave Levac): God rest his soul.
The Speaker (Hon. Dave Levac): We do have a deferred vote on government notice of motion number 8, relating to allocation of time on Bill 53, An Act respecting the establishment of minimum government contract wages.
Call in the members. This will be a five-minute bell.
The Speaker (Hon. Dave Levac): On May 1, 2018, Mr. Chan moved government notice of motion number 8, relating to allocation of time on Bill 53, An Act respecting the establishment of minimum government contract wages.
All those in favour, please rise one at a time to be recognized by the Clerk.
Albanese, Laura
Anderson, Granville
Baker, Yvan
Ballard, Chris
Berardinetti, Lorenzo
Bradley, James J.
Chan, Michael
Chiarelli, Bob
Colle, Mike
Coteau, Michael
Crack, Grant
Damerla, Dipika
Del Duca, Steven
Delaney, Bob
Dhillon, Vic
Dong, Han
Duguid, Brad
Flynn, Kevin Daniel
Gravelle, Michael
Hoggarth, Ann
Hunter, Mitzie
Jaczek, Helena
Kiwala, Sophie
Lalonde, Marie-France
Leal, Jeff
MacCharles, Tracy
Malhi, Harinder
Martins, Cristina
Mauro, Bill
McGarry, Kathryn
McMahon, Eleanor
McMeekin, Ted
Milczyn, Peter Z.
Moridi, Reza
Naidoo-Harris, Indira
Naqvi, Yasir
Potts, Arthur
Qaadri, Shafiq
Rinaldi, Lou
Sandals, Liz
Sousa, Charles
Thibeault, Glenn
Vernile, Daiene
Wong, Soo
Wynne, Kathleen O.
Zimmer, David
The Speaker (Hon. Dave Levac): All those opposed, please rise one at a time to be recognized by the Clerk.
Arnott, Ted
Forster, Cindy
Munro, Julia
Nicholls, Rick
Wilson, Jim
The Speaker (Hon. Dave Levac): I declare the motion carried.
The Speaker (Hon. Dave Levac): There are no further deferred votes. This House stands recessed until 3 p.m. this afternoon.
Miss Monique Taylor: It gives me great pleasure to welcome a young woman from my riding, Deanna Allain, who is here for the tabling of a bill today to have a strategy for service dogs and service dogs in training. I'm really pleased to welcome her back to Queen's Park today.
Welcome back, Deanna.
The Speaker (Hon. Dave Levac): Welcome.
Miss Monique Taylor: I forgot about Carlin, her service dog in training, so I have to welcome him.
Ms. Lisa MacLeod: I rise today to pay tribute to a dear friend of mine, John Newman, who passed away earlier this week. John was an adviser to me on agriculture. He was also a friendly face. I got to know John over 13 or 14 years ago, when I started to embark on this career in provincial politics at the Ontario Legislature. He lived in the community of North Gower, which is part of the Carleton part of my riding, which I won't be representing anymore.
John died this week, but he had a life that was so well worth living and so well worth putting into the record here at the Ontario Legislature. He spent 22 years in Canada's military. He and his wife, Marion, then purchased Jomar Farms in 1966. Just a few years ago, six years ago, they celebrated a milestone wedding anniversary, and I'm sure every year since then has been blessed.
Their farm was recognized for excellence throughout Ontario and Canada, particularly by the old Kemptville College in Kemptville, not too far from North Gower, and the University of Guelph. They taught students at their farm. John and Marion were recognized with Master Feed awards for top stocker quality and an OSCIA certificate for soil management, and John offered excellent farming advice to farmers throughout Ontario. He was on the Ontario Cattlemen's Association board of directors, which helped me in my early years as a member, when I would ask him for great advice. In 2000, John Newman became a founding director of the Canadian Cattle Identification Agency. Speaker, you will recall that we had a BSE crisis in 2003; that is when John became a critical voice for Ontario beef, helping every one of us talk about the excellence that we have here and championing the industry as we moved forward.
A few years ago, John and Marion were at a Michael Bublé concert. I was sitting there and I said to my husband, "I think that's John Newman. Why would he be at a Michael Bublé concert?" Well, it was their 50th anniversary, for him and Marion. I know Marion is watching at home, and I just want to say, Michael Bublé said it best:
"You're everything.
"You're every song, and I sing along.
"'Cause you're my everything."
I know, Marion, you're home today and newly moved in to Barrhaven. John had a lasting impact on me, many people in Carleton county and throughout Ontario. I know to you and your family, he meant the world. For that, we are grateful that you shared him with us, not only in agriculture but also as he served Canada. Thank you.
Mr. Michael Mantha: Yesterday, May 1, marked the beginning of Lyme Disease Awareness Month. It is with great joy and happiness that I want to recognize many of the members of the Lyme disease task force for submitting this great Report of the Lyme Disease and Tick-borne Illnesses Task Force. It lays out a path so that we can start looking at the real challenges of addressing the needs of individuals with Lyme disease through prevention and control, through surveillance, through public engagement, and through care and treatment support.
That means establishing centres of excellence for tick-borne illnesses, where we're going to start doing that R&D, where we'll be able to amass that information and start providing it to our physicians and our caretakers to care for individuals.
As well, the research that will follow includes working with patients, providers and researchers; conducting a review of current clinical practices and current testing methodologies for diagnosis; and conducting a systematic review focusing on treatment. These are pillars. These are going to be open discussions. These are going to be additional task forces that are going to be developed to really look at providing that care, and the acknowledgment and acceptance of individuals who are suffering with Lyme disease in this province.
I couldn't be more proud of these individuals. I have to give credit where credit is due: The present Minister of Health provided a lot of assistance on this, and the previous Minister of Health, Mr. Eric Hoskins. I give credit where credit is due. The task force did an amazing job, but this is the beginning.
Mr. John Fraser: I would like to take a moment to acknowledge my father-in-law, Lorne Hooper, a military veteran who served in World War II.
Today, he was honoured with a Silver Leaf on the Tree of Life at the Perley and Rideau veterans' long-term-care centre. It was presented by the director general of the aerospace equipment program. I would like to have been there with him and my wife, Linda, for this honour. Instead, I'd like to honour him with a few words.
Born in Ottawa in 1922, Lorne was the second son to William and May Hooper. Lorne's career in the military began with what he describes as a "less than captivating stint as a 'Saturday night soldier,'" as a member of the Non-Permanent Active Militia.
In 1942, Lorne volunteered for chemical testing in the chemical warfare laboratories in Ottawa. Unlike many other volunteers, he was lucky not to have adverse effects from the testing.
At the onset of World War II, Lorne knew he wanted to be a pilot, eventually serving as a wireless air gunner, after having completed his in-air training in Harvard aircraft. He was eventually posted to PEI coastal command, where Private Lorne spent his days in pursuit of German U-boats. As he tells it, "I never shot at anyone, and nobody ever shot at me, or if they did, they were a very bad aim."
In 1943, Lorne met and married his wife, Yvonne. They were known as Hoop and Toots, or Nanny and Poppy. They eventually bought a house in Alta Vista and had their only daughter—and my best friend—Linda.
Throughout his life, Lorne has been an avid runner, participating in the Terry Fox race until he was 85, and running many 10Ks.
From Linda, myself, grandchildren Kirsten, John and James, and great-grandchildren Vaughan, Sloane and Fraser, we're all very proud of you.
Mr. Todd Smith: What's going on on the south shore of Prince Edward county is an absolute travesty. This government has broken its own rules around environmental protection against species at risk. Environmental restrictions were put in place in its initial renewable energy approval, and the government is allowing them to be violated as construction continues at the site.
We've had reports of trespassing on private lands that don't have a leaseholder agreement for turbine construction and additional transmission construction. WPD officials are apparently offering on-site monetary reimbursement for property damage to landowners who don't have counsel present to act on their behalf.
This government has allowed a state of corporate lawlessness to occur on the south shore of Prince Edward county, and it has said nothing to uphold any of the energy or environmental agreements it has signed. This Liberal government has pretty well told the people of Prince Edward county that there is no rule it won't bend to ensure this project is in the ground as quickly as possible.
Analysts have said that the project isn't necessary. With the amount of solar hosted in Prince Edward county, the county may already be net neutral. And the distance of the project from a load of any size means this government is allowing WPD to erect nine white elephants on the south shore of Prince Edward county.
It has done so over the objections of local residents, and in spite of its own rules. Speaker, this project should be put to an end now.
Ms. Peggy Sattler: I rise today to highlight the great work that is being done in my community by Canadian Manufacturers and Exporters, southwestern Ontario board.
On May 9, the organization will hold its 23rd annual London Manufacturers' Recognition and Scholarship Awards night, an event that brings together over 200 manufacturers and professionals to network and celebrate local industry achievements. Most importantly, the awards night provides eight promising Western, Fanshawe and secondary school students, who are enrolled in manufacturing-related programs, with $2,000 scholarships.
Despite the loss of 300,000 manufacturing jobs in Ontario over the last 10 years, manufacturing remains a key sector for London's local economy and for the southwestern region as a whole. Events like the CME awards night are critical in the face of an economy that has seen almost all job growth concentrated in the GTA and Ottawa over the last decade, leaving the rest of the province far behind.
A recent analysis of labour force survey data shows that between 2008 and 2018, under this Liberal government, 94% of all new jobs were created in the GTA or Ottawa, with only 6% growth in the rest of Ontario.
There is no question that London's economic prosperity remains closely linked to the health of our manufacturing sector. With the efforts of the CME, we are helping London manufacturers to develop the talent necessary to innovate, connect and grow their business.
Mr. Mike Colle: I want to speak about a wonderful, special community in my riding. The community is called the Lawrence Heights area.
The Lawrence Heights community hosts the largest public housing community in Canada. Within that community, we have a community health centre and we have a great high school, the John Polanyi high school.
We also have a revitalization program that's going on. It is similar to the one that took place at Regent Park in downtown Toronto, whereby the housing stock is being improved and made into mixed housing with at-market rents, subsidized rents and seniors' housing, all under construction right now. Part of it is even for private ownership.
This week, we announced a joint project between the federal, provincial and municipal governments to build a community hub there, which is going to have arts programs, a swimming pool, a community centre and a seniors' centre, all within the Lawrence Heights community.
So the people in the community are not only getting new housing; they're getting new parks and they're also getting this wonderful, state-of-the-art community hub that's going to make the Lawrence Heights community even better than it is right now.
Congratulations to all those who worked on the Lawrence Heights community project. The future is very bright.
Mr. Ross Romano: Just this past week, we learned that the US tariffs—a 25% tariff on steel and 10% on aluminum—were extended by President Trump for one further month.
I just want to say this: We work better when we work together, across all levels of government and across party lines. We need to work with our friends south of the border in the United States and really demonstrate to them that they need us as much as we need them.
A northern-southern trade war is not good for Ontario, and it's certainly not good for the people of my community in Sault Ste. Marie. It's not good for the workers at Algoma Steel. The problem that both our markets are dealing with is Asian steel dumping, not what is occurring in northern and southern trade.
We are pro free trade. Steel needs to be the number one export out of Sault Ste. Marie, not our youth.
Ontario needs to be open for business. It's not our job, as government, to make business thrive; it's our job to make sure that there's the environment present for business to thrive.
Again, I really think it's important for all of us to work together—all levels of government, all party lines—in order to try to get this issue resolved with our friends south of the border. We cannot succeed with a trade war. It will not end well for either side.
In my last few seconds, I just want to mention that my Sault Ste. Marie Greyhounds, back at home, just won the Wayne Gretzky Trophy and have surpassed the Kitchener Rangers. Congratulations to them. The Soo Greyhounds are facing off in the finals of the OHL playoffs, starting tomorrow night in Sault Ste. Marie, against the Hamilton Bulldogs.
Mr. Han Dong: I would like to talk about a great organization in the downtown core, Massey Hall. Massey Hall, as you know, is a not-for-profit charity. It does a great job in showcasing not only domestic talent but talent from across the world.
I had the pleasure of joining Minister Sousa and Minister Bill Morneau a couple of days ago for an announcement of $60 million in joint funding to support the revitalization of Massey Hall.
For any member who has been in Massey Hall, including you, Speaker, you know what I'm talking about. The acoustic effects, the design and the stained glass, currently hidden behind the walls, are fantastic. I can't wait to see, after the renovation, the amazing effect of this great building.
Over the years, I know that Massey Hall has been extending its arms to welcome more internationally renowned artists. Just last year, I was at Massey Hall enjoying a performance by a comedian from China. It was a packed house, and there was a lineup outside.
As well, I think two years ago, if I remember correctly, a Korean vocal group came to Massey Hall and attracted many members of the local Korean community to come and enjoy a great performance.
Congratulations, Massey Hall.
I want to thank the member from Eglinton–Lawrence, federal member Adam Vaughan and city councillor Kristyn Wong-Tam for their support of this ongoing project.
Mr. Bill Walker: It is a pleasure to rise and share with members in the House news about a group of individuals in my riding of Bruce–Grey–Owen Sound who are doing very honourable work to address poverty in our local communities and across the world.
Owen Sound Hunger and Relief Effort, or OSHaRE, served over 20,000 meals to individuals and families in our community last year.
To raise money for meals, Jeffrey Robins, owner of Aveda Mane Street Hair Salon in Owen Sound, together with Barry Kruisselbrink of Barry's Construction, organizes the Walk for Food and Water in Owen Sound. This year, they raised $75,000, and in 2016, they raised close to $47,000, making it the top fundraising Aveda salon in Canada. Mr. Kruisselbrink was also the top fundraiser for individual walkers that year. Since the first annual walk, they have raised $270,000.
The Aveda Walk for Food and Water is held annually in Owen Sound during Earth Month. In addition to raising funds to assist in providing needed services to vulnerable individuals in our communities, the walk is also helping raise public awareness about the need to improve access to clean drinking water around the world. The average walk is between five and six kilometres, which is the distance women and children typically have to walk every day in rural, developing communities worldwide to collect water.
I was happy to join Jeff and Barry and all the other volunteers, donors and sponsors this past Friday for their 10th annual walk.
Mr. Speaker, I am very proud of my community's efforts to address poverty needs, and I'd like to thank Jeff, Barry and all who support it for making a difference and making strides in our community and around the world.
Mr. Ted McMeekin: I beg leave to present the first report 2018 from the Standing Committee on Regulations and Private Bills.
The Speaker (Hon. Dave Levac): Mr. McMeekin presents the committee's report.
Report presented.
The Speaker (Hon. Dave Levac): Does the member wish to make a brief statement?
Mr. Ted McMeekin: Speaker, I just want to take this opportunity to thank the Vice-Chair, Lou Rinaldi; all members of the committee, Granville Anderson, Jim Bradley, Grant Crack, Joe Dickson, Jennifer French, Jack MacLaren, Deb Matthews, Bill Walker and Jeff Yurek; and in addition, Christopher Tyrell, our Clerk; and Tamara Hauerstock, our research officer—great people doing great work, a great committee. It has been a real honour for me to have the privilege of chairing it.
Mr. Ted McMeekin: I beg leave to present a report from the Standing Committee on Regulations and Private Bills and move its adoption.
The Clerk-at-the-Table (Ms. Tonia Grannum): Mr. McMeekin from the Standing Committee on Regulations and Private Bills presents the committee's report as follows and moves its adoption:
Your committee begs to report the following bills without amendment:
Bill Pr83, An Act to revive Esquire Ventures Inc.
Bill Pr84, An Act to revive 2297970 Ontario Inc.
Bill Pr85, An Act to revive Tencrest Realty Ltd.
Bill Pr86, An Act respecting the Luso Canadian Charitable Society.
Bill Pr88, An Act to revive James Wilson Holdings Limited.
The Speaker (Hon. Dave Levac): Shall the report be received and adopted? Agreed? Carried.
Miss Taylor moved first reading of the following bill:
Bill 67, An Act to enact the Service Dogs Advisory Committee Act, 2018 / Projet de loi 67, Loi édictant la Loi de 2018 sur le Comité consultatif de l'utilisation des chiens d'assistance.
The Speaker (Hon. Dave Levac): Is it the pleasure of the House that the motion carry? Carried.
The Speaker (Hon. Dave Levac): The member for a short statement.
Miss Monique Taylor: The Service Dogs Advisory Committee Act, 2018, requires the minister responsible for accessibility to establish an advisory committee to do the following:
(1) Inquire into and report on the use and training of service dogs and the barriers faced by persons who are assisted by service dogs or who train service dogs.
(2) Consider how the barriers faced by persons who are assisted by service dogs or who train service dogs can be minimized or eliminated and how accessibility for those persons can be improved.
The committee is to be established within 60 days after the bill receives royal assent and must report its recommendations to the minister within eight months of its establishment. Within 90 days after receiving the committee's report, the minister must inform the assembly of the recommendations that he or she will implement.
Ms. Sylvia Jones: This is a petition for an advanced green in Shelburne.
"Whereas the intersection of Highway 89 and County Road 124 is a major artery for travel between Collingwood and the GTA;
"Whereas there have been a variety of serious car and pedestrian accidents at this intersection;
"Whereas Shelburne is the fastest-growing community in Ontario, meaning traffic will only increase;
"Whereas county of Dufferin traffic data already shows a need for an advanced green;
"Whereas residents of Shelburne and the surrounding area deserve to travel their roadways safely;
"That the Minister of Transportation immediately install an advanced green at the intersection of Highway 89 and County Road 124 in the town of Shelburne."
I support this petition, affix my name to it and give it to page Sophie to take to the table.
Ms. Peggy Sattler: This is a petition entitled "Stop the Closure of the Cardiac Fitness Institute."
"Whereas the Cardiac Fitness Institute (CFI) at the London Health Sciences Centre has provided over 35 years of cardiac rehab and care services to thousands of patients; and
"Whereas research shows that long-term lifestyle changes following serious cardiac events are critical to save lives and to prevent costly hospital visits later; and
"Whereas the CFI is the only program in London that provides long-term cardiac rehab support, with approximately 1,400 cardiac patients currently benefitting from the program; and
"Whereas patients who access CFI services have a rehab retention rate of 75% to 80%, well above the average for patients who attend short-term programs; and
"Whereas the LHSC has cited a lack of government funding as a driving factor in their decision to close the CFI;
"Therefore we, the undersigned, petition the Legislative Assembly as follows:
"Immediately fund the CFI to prevent its closure and ensure that heart patients and their families have access to the care they need to stay healthy."
I fully support this petition, affix my signature and I'll give it to page Mia.
Mr. John Fraser: I have a petition.
"Spots Today for Doctors Tomorrow.
"Whereas 25 residency spots were cut in Ontario in 2015;
"Whereas 123 medical graduates went unmatched in 2018, 53 of them from Ontario;
"Whereas the AFMC predicts that 141 graduates will go unmatched in 2021, adding to the backlog;
"Whereas an estimated $200,000 of provincial taxpayer dollars are spent to train each graduate;
"Whereas the ratio of residency positions to medical students has declined from 110 positions per 100 students in 2012, to 101 positions for 100 students in 2018;
"Whereas wait times for specialists in Ontario continue to grow while many Ontario citizens are still without access to primary care providers;
"(1) Stop any further cuts to residency positions until a long-term solution is well under way;
"(2) Reinstate the 25 residency positions cut in 2015 to bring Ontario back to its previous steady state;
"(3) Create extra Ontario-only residency spots that can be used when there is an unexpected excess of unmatched Ontario grads to guarantee a spot for every graduate every year;
"(4) Pass Bill 18 as part of the solution to develop actionable long-term recommendations; and
"(5) Improve communications between the MAESD and MOHLTC so that medical school admissions correspond with residency spots and Ontario's health needs."
I am affixing my signature and giving it to page Eric.
Mr. Todd Smith: This is a petition entitled "Spots Today for Doctors Tomorrow."
"Whereas 68 medical graduates went unmatched in 2017, 35 of them from Ontario;
"Whereas the ratio of medical students to residency positions had declined to 1 to 1.026 in 2017 from 1 to 1.1 in 2012;
I agree with this, will sign it and send it with page Stephanie.
Mme France Gélinas: I would like to thank Estelle and Aimé Rainville of Hanmer, in my riding, for this petition.
"Whereas hydro bills have become unaffordable for too many people, and reducing hydro bills by 30% for families and businesses is an ambitious but realistic target; and
"Whereas the only way to fix the hydro system is to address the root causes of high prices, including privatization, excessive profit margins, the oversupply of electricity ...; and
"Whereas Ontario families should not have to pay time-of-use premiums, and those living in a rural or northern region should not have to pay higher, punitive delivery charges; and
"Whereas returning Hydro One to public ownership would deliver over $7 billion back to the province and the people of Ontario;"
They petition "the Legislative Assembly of Ontario to reduce hydro bills for businesses and families by up to 30%, eliminate mandatory time-of-use pricing, end unfair rural delivery costs and restore public ownership of Hydro One."
I support this petition. I will sign it and I ask page Dwight to take it to the Clerks' table.
Ms. Ann Hoggarth: "Petition to the Legislative Assembly of Ontario:
"Whereas we are concerned about the elimination of respite care from the core suite of services in the EarlyON Child and Family Centres, and the undue hardship this will cause for families who rely on this service;
"Whereas too many Ontarians who have children do not have access to part-time/flexible/short-term or respite care in their communities; and
"Whereas the Ontario government is rolling out the Renewed Early Years and Child Care Policy Framework so that 'families can have access to programs better suited to their needs';
"Whereas families in Ontario said that 'they wanted more; more responsive hours of care that meet the demands of modern life';
"Therefore we, the undersigned, petition the Legislative Assembly of Ontario to sustain and fund respite/flexible child care under the banner of EarlyON Child and Family Centres as a viable option for families and their children."
I agree with this petition, affix my signature to it and send it with page Hannah.
Mr. Jim McDonell: I have a petition to the Legislative Assembly of Ontario.
"Whereas an industrial wind turbine (IWT) project is being proposed for the community where I live; and
"Whereas the Ministry of the Environment and Climate Change ... created revised guidelines for developers to use in modelling the noise level that the turbines will cause at nearby receptors, in order to correct known errors in the existing noise modelling; and
"Whereas the MOECC allowed large renewable procurement 1 (LRP1) IWT developers the option to use the new noise modelling guidelines, using the transition provisions; and
"Whereas the developer of the project in my neighbourhood opted to use the outdated noise modelling guidelines in the development of the project in my community;
"To rescind the renewable energy approval transition provisions and make it mandatory that all LRP1 IWT developers use the new noise modelling guidelines."
I agree with this and pass it off to Madeline.
Ms. Peggy Sattler: This is a petition entitled "Fix Hydro Now.
"Whereas hydro bills in Ontario have become unaffordable for too many people;
"Whereas reducing hydro bills by up to 30% for families and businesses is an ambitious but realistic target;
"Whereas the only way to fix the hydro system is to address the root causes of high prices including privatization, excessive profit margins, oversupply, unfavourable net export practices and more;
"Whereas Ontario families should not have to pay time-of-use premiums, and those living in a rural or northern region should not have to pay higher, punitive delivery charges;
"Whereas changing the financing of private contracts and the global adjustment fails to reduce the long-term cost of hydro for families and businesses, does not fix the" hydro "system and, in fact, will cost billions of dollars extra in borrowing costs;
"Whereas Hydro One can be returned to public ownership and management without increasing rates;
"Whereas returning Hydro One to public ownership would deliver over $7 billion back to the province and the people of Ontario;
"Therefore we, the undersigned, express our support for reducing hydro bills for businesses and families by up to 30%, eliminating mandatory time-of-use, ending unfair rural delivery costs, and restoring public ownership of Hydro One."
I fully support this petition, affix my name and will give it to page Sophie to take to the table.
Mr. Rick Nicholls: "Petition to the Legislative Assembly of Ontario:
I wholeheartedly agree with this and give it to Rowan.
Mme France Gélinas: This petition is for smoke-free movies. I would like to thank Raynald and Carole Aubin.
"Whereas, over the past 10 years in Ontario, 86% of all movies showing smoking were accessible to youth, and the tobacco industry's use of the big screen to promote tobacco use is well documented; and
"Whereas, according to a scientific report released by the Ontario Tobacco Research Unit, approximately 185,000 Ontario children will start smoking after seeing characters smoke in movies, and ... 59,000 of the smokers recruited this way will eventually die of tobacco-related illnesses, which will result in health care costs of at least $1.1 billion; and
"Whereas the government of Ontario has set itself the goal of achieving the lowest ... rate in Canada, and 79% ... of Ontarians support a ban on tobacco use in movies rated G, PG and 14A; and
"Whereas the Minister of Government and Consumer Services has the power, through cabinet, to amend the regulations made under the Film Classification Act;"
They ask the Legislative Assembly to examine "the ways in which the Film Classification Act could be amended to reduce tobacco use in movies rated in categories suitable for children and teens and shown in Ontario."
I support this petition and I ask page Abinaya to take it to the Clerks' table.
Mrs. Liz Sandals: "Whereas the province created the greenbelt in 2003 in order to protect our natural environment in Ontario, which is the largest permanent greenbelt anywhere in the world; and
"Whereas every year, tens of thousands of acres of farmland, wild land and wetlands, including ravines and rivers, were being encroached by new development; and
"Whereas our greenbelt protects nearly two million acres of valuable land and water, and we expanded the greenbelt last year to protect an additional 10,000 hectares, or the equivalent of almost 20,000 new football fields; and
"Whereas we've also extended the greenbelt-like protections for natural heritage, water and agriculture to the entire greater Golden Horseshoe area to further ensure that sensitive lands are protected for generations to come;
"Therefore, we call upon all parties in the Legislative Assembly of Ontario to formally agree to the protection and expansion of the greenbelt, prior to June 2018."
I agree with this. I will affix my signature and hand it to Hannah.
Mr. Todd Smith: This is a petition to the Legislative Assembly of Ontario.
"Whereas the Great Lakes are the foundation for billions of dollars in trade, shipping, tourism, recreation, industry and agri-food production; and
"Whereas the Great Lakes supply drinking water for 8.5 million Canadians; and
"Whereas the Great Lakes face ecological challenges such as 61 endangered fish species, 18 extinct species, as well as the introduction of 150 invasive species;
"Therefore we, the undersigned, petition the Legislative Assembly of Ontario to support the Great Lakes Day Act, 2018."
Thank you, Mr. Speaker. I'll send this to the table with page Rhys.
Ms. MacCharles moved third reading of the following bill:
Bill 8, An Act to amend the Consumer Reporting Act and the Technical Standards and Safety Act, 2000 / Projet de loi 8, Loi modifiant la Loi sur les renseignements concernant le consommateur et la Loi de 2000 sur les normes techniques et la sécurité.
The Acting Speaker (Mr. Paul Miller): Minister MacCharles.
Hon. Tracy MacCharles: I am very pleased to rise in the Legislature today to talk about this important bill, the Access to Consumer Credit Reports and Elevator Availability Act, 2018. If passed by this House, the bill would give Ontario consumers easier access to credit information and improve access to elevators. The new law would allow Ontario to become the first jurisdiction in the world to establish standards for elevator repair times and would give Ontario consumers the strongest rights in Canada over information held by consumer reporting agencies.
But before I get going, Speaker, I'd like to thank some people who made this bill a reality. First, I want to thank the stakeholders, who represent many different interests, for taking their time to speak with us about the bill during the standing committee hearings.
I would like to acknowledge the work done by the Honourable John Douglas Cunningham for his review of elevator availability in Ontario. His work has been invaluable in starting us down the road to develop workable solutions to improve the availability of elevators across the province.
The organizations we spoke with, both before introducing the bill and during the committee hearings, are a testament to the commitment that I have seen from families and businesses across the province that are dedicated to making Ontario a better place. I want to say to each and every one of them that I have heard the concerns you raised, my ministry officials have heard them, and we're all committed to working together to make effective, workable and beneficial changes for all Ontarians.
Speaker, I know that there are a few lingering questions about how the government might implement this bill, should it be passed in this House. For both areas covered by the bill—credit reporting and elevators—we still have work to do in developing regulations. I'll speak in more detail about this a bit later in my remarks, but suffice it to say for the moment that our work would include consulting thoroughly with the public and stakeholders before they are put in place.
My ministry has a proud tradition of consulting with parties affected by potential legislation and regulations. In fact, I think you would only have to look at some of the recent consultations that we've held over the past year. Speaker, since April 2017, my ministry has held 28 public consultations.
Setting aside elevators and credit reporting for a moment, we've held and continue to hold consultations in many areas. From real estate to travel laws to payday loans to condos and reward points, we have talked extensively with the public, both in the business community and among consumer advocates, about their thoughts on numerous issues. I'm very proud of those because even though we are in a run-up to the provincial election, we know that the strength of a democracy depends, in large part, on what happens between the votes held every four years.
Speaker, I mentioned this before, during the debate on time allocation, but I feel it bears repeating, given its importance: When the Conservatives talk about the TSSA being unnecessary or unneeded, I get very concerned. The TSSA plays a critical public safety role. When our children get on the Behemoth at Canada's Wonderland, they rely on the TSSA to have ensured it is going to be a safe experience. When a couple purchases their first couch together, they rely on the TSSA to ensure that the upholstery adheres to fireproofing standards. When a nurse drives on the highway, they rely on the fuel transportation standards that the TSSA inspects. And of course, when an expectant mother gets into an elevator to bring groceries to her apartment, she relies on the TSSA's inspections to ensure that the elevator she rides is safe. Even we, as parliamentarians, rely on its services every time we get into an elevator to avoid having to walk up the stairs right here in the legislative lobby.
My point is that every Ontarian relies on the TSSA every day. They protect us while we drive, work, live and sleep. So when a party puts down that safety agency, it ought to really ring the alarm bells. The TSSA is a vital public safety organization. Public safety, in my view, should never be politicized, and it saddens me that that has happened, to some degree.
Before I get into the details of the bill, I want to thank the members of the Standing Committee on General Government for their somewhat painstaking review of the bill. I know that it takes an enormous amount of attention to review each clause, and I appreciate the work that they have done in making it a better bill. As with our previous legislation, I always appreciate the hard work of our standing committees.
Speaker, as we start getting closer to the end of this legislative session, I want to also take a moment to thank my fellow members for their hard work and dedication. Even though we don't always see eye to eye on the issues of the day, I know we all have the best intentions and the interests of Ontarians at heart. As we leave here at the end of our term, I will be proud of all of what we have accomplished.
I know that members in this House will recall the important work that we undertook in previous sessions to help Ontarians in their daily lives. Most recently, the Strengthening Protection for Ontario Consumers Act was passed in the fall session. It will introduce stronger rules and professional standards in the real estate sector. This includes new measures to address conflict-of-interest issues that arise in multiple representation situations and heavier fines for code of ethics violations by real estate professionals.
It's also helping to strengthen confidence in Ontario's new home warranties and protections by enabling the establishment of two administrative authorities, one to administer the new home warranty program and the other to regulate new home builders and vendors.
It's also further protecting consumers who are buying travel services by enabling the creation of new rules for representations, including advertising by out-of-province travel businesses that target Ontarians, and by creating a registration requirement for individual travel salespersons. It will improve compliance with the rules by providing additional enforcement tools such as administrative penalties and compliance orders. Right now, my ministry is working very hard on a review of the regulations that support that act.
In terms of elevator availability, which I'll turn to now, this bill would also help to address availability in multi-storey residences and in long-term-care and retirement homes. Proposed amendments to the Technical Standards and Safety Act would establish a legislative and regulatory framework for elevator availability. We understand that out-of-service elevators can be a source of frustration for residents, especially the elderly or people who have disabilities. This is why we developed an action plan to address areas such as elevator safety, availability, preventive maintenance, and education and awareness for owners and residents.
The action plan also looks at the labour supply of elevator mechanics, and addressing and improving access to service elevators for first responders. As part of the government's action plan, we intend to develop an elevator repair timeline standard through regulation, which would make Ontario the first jurisdiction in the world to do so. In order to develop that standard, we need to collect more data and fully assess potential costs and impacts. We will continue to work with all parties, levels of government and stakeholders through wide-reaching consultations as we move forward with our action plan.
At one time, of course, elevators were a luxury item making life a little bit easier to carry heavy things rather than going up and down stairs, but with our cities becoming more vertical, elevators are not a luxury; they are a necessity. That's why the TSSA submitted a report authored by the Honourable John Douglas Cunningham to the Ministry of Government and Consumer Services on the issue of elevator availability.
Mr. Cunningham found that elevator availability is a complex issue and no single solution will solve it. We would consider all options with respect to setting standards for elevator availability to determine what will work best. We would take the time necessary to get it right, and this work is estimated to take two years. It's subject to the passage of legislative amendments and approval of regulations, and involves collecting necessary data on elevator outages, conducting public consultations and developing an elevator repair timeline standard.
If the bill is passed, it would amend the Technical Standards and Safety Act in order to create a regulatory-making authority that's aimed at making elevators more reliable in Ontario. This would be based on a number of steps. First, the bill would allow regulations to enable the TSSA to collect elevator outage data, helping the government to make evidence-based decisions with respect to the creation of standards for elevator repair timelines.
Second, the amendments to the Technical Standards and Safety Act would also allow for creating a requirement that information about elevator performance must be published. This would allow prospective residents to make more informed decisions before they rent or buy a home in a multi-storey building.
Third, the bill would create an administrative monetary penalty framework in order to strengthen TSSA's enforcement powers, including with respect to elevator safety and maintenance requirements.
Fourth, it would allow for the creation of standards for elevator repair, including timelines.
Fifth, it would allow for the designation of an appropriate regulator to enforce elevator repair standards.
The changes proposed in this bill are an important part of the government's elevator plan, but it's not the plan in its entirety, Speaker. In addition to this bill, our elevator availability action plan is being implemented to help elevator owners negotiate better maintenance contracts through an education and outreach campaign. It's also expected to improve elevator access for first responders in the case of emergencies, and to create new standards for buildings to ensure they have enough elevators to serve residents. In addition, the action plan would address the labour supply of elevator mechanics through consultations to determine options to meet labour market demands.
Speaker, I would be remiss if I did not note the leadership of my colleague the member from Trinity–Spadina, Han Dong, in this area. I want to say thank you to that member.
We intend for the TSSA to begin collecting elevator outage data and to issue administrative monetary penalties with respect to contraventions of elevator-related regulatory requirements should legislative amendments pass and associated regulations be made. Once the important elevator outage data and evidence is collected, we would establish the repair timeline standard and consult with all stakeholders on these standards. I anticipate that the changes, if the bill is passed, could begin in late 2019, once enabling regulations—developed in consultations, of course, with our stakeholders—have been approved by government.
The overall goal of the action plan is to improve the availability of elevator service in a multi-storey residence situation. This would help address the problems that residents experience when their elevators are out of service. Specifically, improved elevator availability would benefit people with health and mobility issues who can't use stairs. These people are often stranded when elevators break down—and I am one of those people, Speaker.
In the best-case scenario, this is an inconvenience, but in some cases, it can be a crisis. It can be an emergency. To address that, our action plan includes a number of proposed safety and accessibility improvements. This includes better access for first responders, through a proposed amendment to the Ontario Fire Code by the Ministry of Community Safety and Correctional Services. The amendments would require elevator owners to notify local fire services when a firefighters' elevator is out of service for more than 24 hours.
It would also include new reporting requirements on elevator service and availability. This information would have to be made available to the public and would result in better transparency for the public regarding elevator disruptions. In addition, greater public awareness of service disruptions may encourage faster response and repairs by owners. That could improve safety and accessibility for users.
We also plan for more education and awareness initiatives, as I mentioned, for elevator owners and operators, which would involve the Accessibility Directorate of Ontario to support compliance with existing accessibility requirements for notice of service disruptions.
Lastly, administrative monetary penalties would be used for the purposes of ensuring compliance with technical safety requirements as they relate to elevator maintenance. This also has the potential to improve safety for residents of multi-storey buildings. Safety would also be improved through new protocols and procedures to deal with elevator entrapments.
With each step taken, Ontario's commitment to ensure public safety remains a priority. We understand that out-of-service elevators can be a source of frustration for residents; especially, of course, for our elderly residents and people with disabilities. We recognize that prospective buyers and residents in new and existing units may be concerned about affordability and availability.
As I mentioned before, we need to collect the data and consider all options to determine what will work best. The Ministry of Municipal Affairs is an important part of the government's action plan and is working with independent organizations to develop standards to determine the number of elevators that should be required in a new building through elevator traffic analysis. This was included in Mr. Cunningham's recommendations. While doing this work, municipal affairs will take into account potential impacts on the availability and affordability of housing units.
The proposed bill will also fulfill one commitment that our government made under the Fair Housing Plan. The Fair Housing Plan, released last spring, committed to making elevators in Ontario buildings more reliable by setting timelines for elevator repair in consultation with the sector and the TSSA. I would like to note a related piece of work under the Ministry of Housing, involving recent amendments to the Residential Tenancies Act that prohibit above-guideline rent increases until all elevator repair work is completed.
At this point, Speaker, I'm going to shift over and talk a bit about the other part of the bill, which is credit reporting.
The bill is aimed at ensuring that large consumer reporting organizations give consumers greater electronic access, free of charge, to their own credit history, including any credit history reports and scores that were generated by agencies and that were shared with third-party creditors over the past 12 months. This bill would give consumers the option of putting in place a credit freeze that would prevent agencies from disclosing their credit information to a third party. The changes would give consumers more access and control over their own information and may help reduce the harm caused by identity theft.
I know that all the changes I've been talking about, both on the elevator side and now on the credit reporting side, sound like a lot of new rules for businesses to follow. I know that everyone is concerned about what the impact would be for jobs and growth.
We're very concerned about this, too, on this side of the House. In fact, we have always been very clear that we want to make sure consumers are protected, without creating undue burden on businesses. So we will ensure that we consult with a wide range of stakeholders to make sure that we understand their concerns. We will take their concerns into account as the ministry develops regulations that need to be put in place before this bill comes into force; that is, if the House supports the bill, which I hope it will do.
During second reading, the member from Beaches–East York, Arthur Potts, also spoke about his thinking in bringing forward his own private member's bill, Bill 167, the Fairness in Consumer Reporting Act. It was actually the second bill he has brought forward to help inform the government's policy in the past couple of years. Of course, he also brought forward a private member's bill to protect consumers' reward points. Those changes that were introduced in 2016 became law just a few months ago.
Just as then, we're moving forward to protect consumers, and that would include consulting with businesses as we design regulations to implement the amendments. We want to ensure that balance between consumer needs and businesses, so the ministry will reach out to stakeholders, including agencies and financial institutions, to make sure that will work as intended. We've already started that work. I want to thank the many people and organizations we've heard from before and during committee hearings.
For example, in our discussion about Bill 8, we heard from some stakeholders, such as the Information and Privacy Commissioner—whose advice is always welcome—that we should consider a few amendments. As a result, the government supported amendments in committee to clarify the new powers of the registrar of the Consumer Reporting Act.
In particular, we wanted to make it very clear when the registrar could require an agency to produce information. As a result, the amendments clarify that this would only be allowed when a consumer has complained, or when there is an ongoing investigation or an inspection authorized under the act. This clarification is very consistent with the ministry's current practice and intention behind the amendments. It would not limit the registrar's current powers under the act to conduct investigations into consumer reporting agencies. The registrar would still have the ability to proactively look into the practices of an agency.
In addition, amendments were passed to improve how the act functions for consumer reporting agencies themselves; for example, an amendment to clarify the operational requirement of a security freeze. This will help to ensure consumer reporting agencies are able to comply with the act.
These amendments were requested by consumer reporting agencies, and do improve how the act functions, without having significant impact on the protections provided to consumers.
But there are some additional areas of the bill I'd like to discuss.
Some people have asked me why exactly we should be worried about credit reporting. After all, it seems like many of us go along and never have to deal with a reporting agency, but I think we know in our work as MPPs that many people do have to deal with credit agencies. So what's the big deal?
What everyone should know is that credit is absolutely vital and important in your daily life. In a single report, it can tell people, among other things, how much debt you carry, whether you pay your bills on time and even how often you've applied for credit. The details of your credit report and your score are information that creditors use to determine how reliable you may be in paying debts.
Your credit history can affect you in many ways. It can dictate whether you get that credit card you asked for, yes, but it's so much more than that, Speaker. It can affect your ability to get a place to live, it can affect your ability to buy a car and it can affect how much interest you pay on a loan, like a mortgage.
In all of this, consumer reporting agencies play a central role in the decisions that creditors make. They are a critical and often overlooked link in our economic chain, and it's important that consumers are aware of the information that credit reporting agencies hold, and take steps to correct it if it's wrong.
Because this information is so important in our daily lives, it becomes more important than ever that consumers have insight into their own information and have control over who gets it. That's why the government has proposed amendments to the Consumer Reporting Act to make these agencies more transparent about who they've shared information with and to require them, at a consumer's request, to stop issuing credit reports and scores.
The bill would create three major changes in credit reporting. First, when requested by a consumer, consumers would have the right to receive their credit history electronically at least twice a year for free. Designated agencies would also have to provide consumers with a new credit score at least twice a year for free. Secondly, agencies would have to provide, upon a consumer's request, any scores that they generate that have been given to third parties within the past 12 months. This would help a consumer understand the information an agency has provided to a creditor. Lastly, agencies would have to give consumers the option to put in place, suspend or cancel a security freeze that would prevent agencies from disclosing information. If passed, Ontario would have the strongest and most transparent rules in Canada over how consumer reporting agencies share your credit information.
We understand that this bill would mean changes to the way consumer reporting agencies operate. They are not decisions we have made lightly. We know that the information shared by these agencies is important for the proper functioning of our economy. That's why we talked to credit reporting agencies before introducing this bill. If the bill passes, we will also undertake detailed consultations about the regulations. These regulations would have to be in place before the amendments to the act are implemented, so there will be ample opportunity for stakeholders to have input.
In particular, we know that most of the registered reporting agencies are either small or medium-sized businesses. We want to make sure that consumers are protected without creating undue burden to the small enterprises that are such an important part of Ontario's economic growth. This would be a key factor in specifying the agencies that we require to implement the proposed amendments.
These changes are being proposed to give consumers greater access to their credit information and the ability to limit when that information is shared with third parties; for example, creditors. Currently, the Consumer Reporting Act gives consumers free access to their consumer report, but does not specify a timeline for the agency to provide it, or for electronic access. It also does not require scores to be provided to consumers, nor does it provide consumers the right to put a security freeze on their information.
The government believes that consumers need greater access to information held by agencies and greater control over how that information is shared. The proposed changes offer significant benefits to consumers. Consumers would have greater access to their credit information and, as a result, be better able to identify their credit standing and able to assess whether all the information on the report is accurate or whether there is information on the report that they are not aware of, which could be a sign that their identity has been compromised.
In this respect, consumers would also be able to place a security freeze on their information. This provides consumers with an additional tool if they believe that their identity has indeed been compromised. If approved, my ministry would undertake more detailed consultations with consumers, businesses and consumer reporting industry representatives on the regulatory details.
Speaker, let me speak for a minute about the credit freezes component of the bill, or security freezes, as some people call them. This part of the bill would be a brand new requirement, so I want to make sure that we're all clear about what it could do, its benefits and some of its potential limitations.
Placed at the request of a consumer, a credit freeze prevents third parties such as a potential creditor from accessing a consumer's credit information unless a freeze is suspended or cancelled by the consumer. Ontario would be the first jurisdiction in Canada to require this option. But it's not a new idea: Credit freezes are currently an option for consumers across the United States. It's an idea that I would suspect is familiar to the largest of the credit reporting agencies in Ontario, since they are also the largest in the United States.
A freeze may help victims, or potential victims, of identity theft to protect their information. It may be helpful for someone who has lost their wallet. It may be helpful to a victim of human trafficking. It's helpful if the credit information included very sensitive information, like a social insurance number.
The ministry is proposing that certain consumer reporting agencies be required to give consumers the option of placing a freeze on their account. A credit freeze can help diminish the harm caused by identity theft. For example, if you believe your identity has been stolen, you can place a freeze on your file to help prevent someone from opening accounts, like credit cards or lines of credit, in your name. The proposal includes regulatory-making authority to determine the fees that an agency would be allowed to charge for this service.
The Consumer Reporting Act currently includes provisions that allow consumers to place an alert on their credit file. The proposed security freeze is an additional measure for consumers to protect their information. An alert is an optional service for consumers that requires agencies to tell potential creditors to take additional steps to verify an applicant's identity, and requires potential creditors to take those extra steps. Creditors would then take that into account and make a decision on what reasonable steps they should take, both to protect their possible clients and, of course, themselves.
It can be a useful tool if you believe your identity has been compromised, but it does not necessarily prevent a potential creditor from getting information. Under a security freeze, agencies would not be able to disclose information to third parties at all, subject to exemptions that would be built into the regulations. We know the details of the security freeze are important, so this is another area where we would want to get detailed feedback from stakeholders so that we can be sure to avoid any unintended consequences.
For example, we know from our conversations that, in particular, businesses are concerned that this could add time to the decision-making process for issuing credit. Many consumers might also be equally concerned about this.
We know that it could affect how some businesses operate. For example, you might have a line of credit with an interest rate of 5% annually if you keep your score above, let's say, 800; but if your score goes below, say, 700, maybe that rate goes up a bit. If that's the case, your bank might need to check your credit score every now and then. In a situation like that, having an all-out freeze would, of course, create problems.
We also know there are those in some areas, like existing creditors and debt collectors, who might have a legitimate need for the information. Our intent is not to get in the way of existing credit relationships that depend on updated credit information. We certainly wouldn't want a person to be able to misuse these provisions, so we'll be working with stakeholders, both for consumers and for businesses, to make sure that these are fair and workable solutions.
Those are just a couple of the specific areas, and I know there are many more. So my ministry will be consulting in much more depth to see if there are some exemptions that might need to be made to implement security freezes in a way that avoids those unintended consequences. Those changes would, of course, be built into regulations.
I mentioned earlier that the option for a credit freeze is already in effect in the United States. Part of our regulation development process would be to look at their system, assess what works, and learn from their lessons in order to create something that works for Ontario.
Speaker, I mentioned earlier that we have had discussions with the Information and Privacy Commissioner's office. We will continue those discussions with the privacy commissioner's office and others, of course, who have an interest in this bill. As we do that, our decision will also be informed in part by the ministry's own experience.
In the past, consumers have very often highlighted concerns that they have with credit reporting agencies. Over the past three years, there were 2,090 complaints, incidents and inquiries—yes, 2,090—regarding the Consumer Reporting Act. That makes them among the 10 most common complaints that the ministry receives.
I think it's very important to emphasize that the proposed changes do not specify which agencies will have to provide free electronic access to reports and scores, or which agencies will have to implement security freezes. This would be set out in regulation.
The ministry would, if the bill is passed, consult publicly on our regulations to meet these kinds of priorities. The government's intent is to capture only the largest agencies, as they deal with the most consumer files and, of course, have the broadest reach. The come-into-effect date will be informed in part by the consultations with stakeholders about regulations.
I would like to wrap up my time on this debate by saying I'm very proud to support this bill. Even though elevator availability may not always be top of mind to everyone, we are so aware of when they don't work, and we do know what people expect.
Again, I want to thank the numerous people and organizations that helped us to get to this point. As we continue to work to implement this bill, we'll be calling on the same people again, both in terms of the elevator availability piece and the credit reporting piece. We know they will give us detailed input and suggestions for how we can all meet our priorities.
I want to thank MPP Dong and MPP Potts for tabling private members' bills that led to the creation of this legislation. I think their work should be applauded by everyone, because they are very committed to making Ontario a better place to live. I want to say thank you to them.
As I wrap up my comments today, if I may, for a minute, I'll just acknowledge that the House knows that I will not be seeking re-election. I will be retiring from politics in a few days—from Ontario politics.
Hon. Reza Moridi: No.
Hon. Tracy MacCharles: Yes, it's true. "Retiring" sounds very old.
Interjections: Four more years.
Hon. Tracy MacCharles: Four more years? Well, I've made the decision. It's the right decision for me. This is probably my last substantive speech as an MPP in the Ontario Legislature.
I want to thank the people of Pickering–Scarborough East who have elected me to be their representative since 2011. It's a great riding, Speaker. It will cease to exist after the election. It's being split in two along the city of Toronto and Durham region lines. That, I guess, is inevitable, with the change in population and the census data. But it has been a great riding to serve: one that I grew up in, one that I live in, one where I'm raising a family.
I'm very proud of the work that we've been able to accomplish together, both in the Scarborough side of my riding and in the Pickering and Durham region side. It really has been an honour and a privilege. The engagement of the community, our stakeholders and businesses on their priorities—and consulting with them on legislation that our government is putting forward—has been great. Working with different levels of government at the regional, municipal and city levels has been wonderful.
It has been a real honour and privilege. Of course, it has been a tremendous honour and privilege to serve as a cabinet minister in this government since 2013. It's time that I have cherished, and I'll never forget, and I say thank you.
The Acting Speaker (Mr. Paul Miller): Thank you. From the chair, I wish you all the luck in your future endeavours.
Hon. Tracy MacCharles: Thank you.
The Acting Speaker (Mr. Paul Miller): Further debate?
Mr. Jim McDonell: I, too, wish the minister all the best. I have enjoyed working with her. She's a hard worker and has served her residents of Scarborough East, I believe, very well. Congratulations on your retirement, and enjoy.
Speaker, I'm pleased to offer my remarks on behalf of the PC caucus on Bill 8, An Act to amend the Consumer Reporting Act and the Technical Standards and Safety Act, 2000 and to address the issues of consumer credit, elevators, and the powers of the Technical Standards and Safety Authority.
Of course, everyone knows that Bill 8 is the former Bill 199, issued before the government prorogued the Legislature to try to convince Ontario voters that they were finally waking up and listening to them after 15 years in power, and that it had nothing to do with the recent polling that shows that 81% of Ontarians want a change in government.
Despite the prorogation of the House by the government, we had enough time in this session to properly review the bill and to have proper debate and give enough time for the many stakeholders to come forth and reveal the issues they are experiencing in the marketplace.
It is a matter of respect—respect for Ontarians and respect for the businesses of the province that are struggling to pay employees and scratch out a living. Government is meant to serve people, not the other way around. It is their job to do proper investigation and reach out for meaningful consultation.
What did we see with the development of this bill? Certainly none of these basic principles.
The elevator part of the bill started out on the right foot: They engaged Justice Cunningham to solicit input from the public and from industry stakeholders to get to the root of the problems and to actually propose a number of well-thought-out recommendations that would provide service for the public. But true to form, after spending considerable resources in time and taxpayers' money, they all but ignored the report. Bill 199—the number of the bill before prorogation—was issued the day after the report was submitted. Clearly, all the work done by Justice Cunningham was ignored, and the time and money to put the report together was wasted.
This incident brings us to our leader Doug Ford's message: The party on taxpayers' dollars is over. We can't afford to continually waste enormous sums of money when so many are in need, whether it's tens of thousands of dollars on the elevator report or the Tarion report, the millions wasted on the Ornge air ambulance scandal, the billions of dollars wasted on eHealth, gas plant cancellations or green energy, or the tens of billions wasted on corporate welfare. This bill could have been a solution instead of the public relations band-aid it has become.
There was time to take this report into consideration. If you remember back, we experienced the prorogation of this Legislature, triggering a speech from the throne and the corresponding hours of debate just before the budget. Then we experienced more ragging of the puck by the government as they read motions that had very little bearing, almost as if they were trying to kill the time left in this session. They didn't have to time-allocate this bill to cut off debate and to restrict the number of deputations and limit those deputations to a mere five minutes. They could have done a much better job if they had just wanted to actually listen to the industry and make the changes that would have actually solved many of the issues.
For instance, last week, I experienced a complete rewrite of Bill 6, An Act to enact the Ministry of Community Safety and Correctional Services Act. The government issued more than 100 amendments to its own bill, almost completely rewriting it as stakeholders came forth and highlighted the many problems within the bill. I would have called this unprecedented, but I've seen this rewrite over and over again as they have rushed bills into the Legislature without taking the time to get it right, only to issue hundreds of amendments at committee to try and fix the legislation at the last moment. So as we can see, it can be done, as they have done it before.
I don't want to be too critical of this action, for too many times we've seen the Wynne Liberal government ignore the experts and dump dangerous and flawed legislation onto the backs of hard-working Ontarians struggling to run a business or to run a household budget. The list is long. The Green Energy Act is perhaps the worst example that will negatively impact our economy for decades to come. It's time we show some respect to the people and businesses of this province.
The issues that this omnibus bill addresses are quite unrelated to each other and would have been more properly addressed in two separate pieces of legislation.
Speaker, since my arrival in 2011, I've seen, time after time, this government tabling legislation without proper research and stakeholder consultation. Unfortunately, this bill is just another one in that vein.
To our shock and surprise, we heard at our ministry briefing that the credit reporting agencies and stakeholders had not been consulted by the government before they tabled this legislation that affects them and the general public. They plan to do the consultation after the bill is passed. Have you ever heard of anything so ridiculous? No wonder the Wynne government gets it wrong time after time. It is no way to run a province.
Bill 8 forces credit agencies to disclose the consumer's credit score despite the fact that the actual equation used to compute it is owned by a private corporation and therefore not in the public domain. However, the score is trusted by lenders to give an objective and impartial assessment of a consumer's risk, based on available data.
Credit agencies offer consumers a free copy of their credit file twice a year. I've been provided with a copy of a free TransUnion disclosure, and it is a very detailed report of everything to do with the person's creditworthiness and personal situation. It does not contain the credit score, but it contains enough information for the consumer to see what goes into the credit calculation.
The government is taking an item of private property—the credit score—and forcing it to be either free or subject to a fee that the government, rather than the owner, determines.
A credit score is a proprietary good, belonging to the credit reporting agencies, computed in accordance with a proprietary formula and trusted by creditors to give a uniform, unbiased evaluation of a consumer's risk to creditors. The government has no role to play in this transaction.
By passing the legislation, however, the government transforms free private sector entities bound by their customers' trust—and willingness to pay—into clients of the government, dependent on the government's grace and favour lest they be prevented from collecting revenue for distributing something they own.
This makes no sense, for each entity has its own formula for making up a score that is appropriate to the industry and the service they belong to. There are many of these clients, all generating different scores, making the number useless.
The credit scores include, as we heard before, different items, like paying bills on time, how often you do that and the number of times you're late; how many cards you have; outstanding loans and mortgages; and any new applications. These all add up to your credit health. But depending on the organization you're dealing with, they treat these items differently. A credit card would use a different formula than, say, a mortgage would.
So issuing a single number can be dangerous, and it can be meaningless. A high score could be bad credit, or a high score could show that you've got good credit that the company would like to use. It all depends on the formula that the various corporations use—the banks or the credit agencies. They're all different, so providing a number can be dangerous at best.
But there's more. Under the provisions of Bill 8, the government will retain the right to dictate in accordance with what formula a consumer's credit score would be calculated and then revealed to the consumer.
This legislation has no positive outcome, and here is why: If a global credit reporting agency, such as Equifax or TransUnion, has to use a different formula for computing a resident of Ontario's credit score than it does for the rest of the world's consumers, Ontarians' credit scores become meaningless. This results in higher rates and less credit, as everyone's trustworthiness takes a hit because the government tampered with the formula. This will affect Ontarians as they try to obtain credit from Canadian businesses headquartered in provinces that do not implement the same market-distorting policies.
The formula exists because there is a need for it. It is one for all consumers. The data underpinning it is the same for all consumers as well.
This system has worked for decades and it is not broken. Mathematical formulas have no political stripe, no opinion, no loyalty and no bias. People, ministers, MPPs and political parties do, and this is why we should not trust any government with tampering with credit formulas.
The government has doubled the amount of money it takes from Ontarians on an annual basis, yet it has also mismanaged its way to doubling our provincial debt while in office. It takes a special sort of bad management to achieve this, yet they believe that they have what it takes to judge the creditworthiness of every single Ontario consumer. It would be laughable if it weren't dangerous.
Credit underpins an active economy, and it has done so since ancient times. Prohibitions against credit and against interest have existed in cultures and codices as dissimilar and distinct as ancient Athenian law, the Book of Leviticus, Islamic law and countless others. Despite this, marketplace participants found workarounds. When ancient Athens banned the sale of goods on credit and required purchasers to pay the entire sum upfront, sellers lent consumers the money to buy the good, and so it became a separate loan.
Our economy runs on the immediate availability of a promise to pay, often a secured one. Without credit, our money supply would decrease significantly, as savings for ordinary purchases would gobble up the increasing amounts of cash, grinding the economy to a halt. Families saving up for the purchase of a home would have to block off hundreds of thousands of dollars in savings accounts for years. This picture may look quaint and wholesome; however, it is not how our modern economy works. We are dependent on a fast turnaround of existing cash, while economic growth injects new cash into our financial system. If the economy is a machine, credit is the grease that keeps it from seizing, and productivity is its fuel.
Tampering with credit is not a good idea, because no government can ever have an absolute knowledge of the unintended consequences that such tampering would entail. By meddling with the formula that underpins consumer credit in Ontario, this government may be doing significant damage to our economy without there ever being a problem to fix in the first place.
Let me now turn to the amendments in Bill 8 that implement new powers for the Technical Standards and Safety Authority concerning administrative monetary penalties and the issues arising from such extended powers.
The TSSA is a corporation without share capital that this government has allowed to drift away from legislative leadership and oversight, unwilling to exercise the powers still vested in the ministry to correct obvious issues that stakeholders have brought forward for years. The TSSA forms part of an industrial self-regulation model implemented by the last PC government of Ontario, with the aim to create regulation in industries where it was necessary, without growing the size of government and the civil service, in order to address the complexities of daily industry operations.
When the model was created, the authorities were given a wide range of powers but were subject to the ultimate oversight and judgment of the ministry. The Minister of Consumer Services retained the right to exercise regulatory power, undo regulations or abolish the authority at the stroke of a pen, if the circumstances dictated the need to. This structure enforced clear jurisdictional boundaries and gave consumers the confidence that a government accountable to them would stop an overreaching authority in its tracks. Unfortunately, under this government, the TSSA and other authorities have acquired a culture of impunity because of the refusal of several ministers to exercise authority when called to.
Regulating consumer industries can be a daunting task as technologies, strategies and methods evolve at a rapid pace. Government departments must contend with a barrage of information on a daily basis even in quiet times; therefore, new developments causing an industrial or consumer backlash can overwhelm them. Regulatory responses must involve the civil service, a body of experts and officials not known for taking rapid action. Top-down policy-making usually involves extensive consultation, data collection, studies, evaluations, analysis, and a repeat of the same over several iterations. In the end, changes take months or years.
The greatest problem facing government is the lack of knowledge and expertise. No civil servant can be expected to be up to date on the latest developments in the designs and operations of pressure vessels, or keep up with the latest scientific and industrial literature, for instance, on spill remediation. This centralization of knowledge is unachievable.
Governments can address the problem of knowledge in several ways, most of which are unproductive. They can expand their manpower to duplicate the in-house industry knowledge and transfer it into the government structure. This isn't just inefficient; it's overly expensive and provides no benefit to the economy.
Governments can also resort to frantic information-collection and automated analysis, hoping to implement one-size-fits-all analytical approaches and burden industry with reporting requirements to feed data into number-crunching algorithms. This sort of approach is also harmful. Forcing individuals and businesses to continually report information to the government eats up precious time and resources that could be put to a number of better uses, such as research, work and leisure. Using an automated tool such as an algorithm is also likely to cause errors, as the local realities and peculiarities of each individual case wouldn't be considered unless a significant level of human judgment and discretion is also involved.
The scope for such automated government work exists for the most routine tasks, such as document or licence renewal, or for minimal changes to existing arrangements. Its use for complex policy decisions is not something we should contemplate at this time.
The delegated-authority model implements another approach to the knowledge problem. It also allows those who already possess the necessary information to judge what areas need oversight and what rule changes would be proper and to make such changes, as required, subject to the ultimate oversight of the government. In this case, all the government needs to do is to judge whether a certain action is broadly representative of the public interest. This is a judgment that most of us are broadly qualified to issue.
The TSSA and other authorities can't exist without government supervision, and good government regulation of complex industries is difficult without these authorities taking the heat off government departments.
The failure of this government to properly supervise and discipline the TSSA when it has overreached or caused its licensees harm is one of the direct causes of the growing small-business distrust of this government as a whole.
One of the most egregious examples of the TSSA's bad behaviour is the wide discretion it grants to its own inspectors, who are often unqualified. In some cases, TSSA staff with no qualifications to speak of overrule designs and drawings issued by licensed, qualified and accountable professional engineers by imposing additional requirements that, on reasonable evaluation, contribute nothing to safety. This is not acceptable. If equipment complies with existing standards and codes, it must be approved. If the TSSA is of the opinion that the standard or code is not sufficient to guarantee consumer safety, it must lobby for an amendment to the standard and justify its actions through a well-reasoned and well-informed argument. Business owners, including the ones in my riding, are instead ordered around, and the sole justification for such impositions is, "Because I said so." This isn't regulation; this is a free-for-all.
The TSSA's hampering of business development and innovation in Ontario does not stop there. The House and the TSSA are very familiar with the issues of Ontario innovators whose inventions can be sold and installed and can generate profits across the United States and Canada, but not in Ontario. As long as one component is certified to an internationally recognized standard that the TSSA insists on excluding, unlike most of our other Canadian and US competitor jurisdictions, Ontario innovators have no choice but to market their equipment to each and every client with a higher price tag because of the TSSA field inspection procedure. Money-saving improvements then quickly turn into a cash cow for a corporation that has the power of government but none of the checks on its greed.
Businesses who have found themselves on the receiving end of TSSA orders have also quickly found that the appeal process against such orders is completely murky and not formalized or independent. There are no safeguards against conflicts of interest between inspectors and those hearing the appeals. It is all done in-house, without the option of a public forum for the appellant. There are no clear standards of evidence and no prescribed guarantees.
Bill 8 does the exact opposite. It takes an already broken and mistrusted system and gives it a steroid shot in the form of administrative monetary penalties, boosted by depriving those affected of essential defences, and through the possibility of appeals against such penalties being handed back to the same authority and people who issued the fine in the first place.
I'd like to draw this House's attention to subsection 32.1(7) of the new Technical Standards and Safety Act, as amended by Bill 8, which would read:
"(7) An order made under subsection (1) imposing an administrative penalty against a person applies even if,
"(a) the person took all reasonable steps to prevent the contravention on which the order is based; or
"(b) at the time of the contravention, the person had an honest and reasonable belief in a mistaken set of facts that, if true, would have rendered the contravention innocent."
Speaker, I searched the electronic legislation database for the phrase "would have rendered the contravention innocent" and found 12 results in our current legislation. None of them pre-date the current government and none of these amendments were passed during the minority Parliament of 2011 to 2014. This should be concerning to all Ontarians and Ontario business owners.
No previous Ontario Parliament has made the imposition of a penalty so easy and so definitive. The party opposite knows it, and they know that in a minority situation the uproar would have been sufficient to scupper their plans to implement such sweeping powers for unaccountable agencies and individuals. The only time they ever proposed such amendments and passed them was during majority Parliaments. If you can only pass a certain policy when you know you can get away with it, there is a problem.
A good government, upon realizing this, would take pause, look at themselves in the mirror and ask themselves just what they were doing. Administrative penalties are not new; there are more than 60 statutes in Ontario containing provisions for penalties, as opposed to licence revocations and offences. When they are stiff enough to be used as a deterrent against bad practices, they can be a useful tool for enforcing compliance. They must, however, be subject to an independent and fair adjudication on appeal. Bill 8 does not grant this.
On the topic of elevator availability, I'd like to start by saying that I understand and sympathize with the concerns of apartment and condominium residents whose daily lives have been seriously affected by insufficient elevator availability and reliability. We are all aware that elevator downtime can prevent a resident from going to work, taking a trip to the grocery store or making an appointment with their physician.
I also reside in a high-rise building here in Toronto, so I'm aware of the major inconvenience that can be caused by an unscheduled shutdown or, even worse, getting stuck inside an elevator—an entrapment. Regardless of how, why or when the elevator breaks down, being trapped within one is a highly distressing and unpleasant experience. I do not know anyone that would take pleasure in being stuck in isolation. Although entrapments are rare, statistics suggest that they are on the rise, according to industry studies. Contractors documented just under 10,000 elevator entrapments across residential and institutional buildings in 2016.
A faulty or broken elevator can be a nuisance for an individual trying to move between floors; however, outages pose tremendous challenges for seniors, people who experience and live with mobility issues, as well as first responders arriving on the scene of an emergency. When elevators break down for any period of time, the quality of life can diminish significantly. I think we are all aware that accessibility issues and barriers can drastically impact the ability of seniors and people with mobility issues to perform ordinary tasks that people take for granted.
In other cases, residents of high-rises who are already spending exorbitant amounts on rent due to the lack of affordable housing in Toronto have literally been trapped in their homes for a day or longer because a broken elevator has prevented them from reaching their destination. These instances are highly troubling, considering that the Accessibility for Ontarians with Disabilities Act indicates not only that apartment and condominium owners are responsible for keeping residents informed about service disruptions but also that they are responsible for providing suitable transportation alternatives until maintenance is completed. Clearly, there is still a significant amount of work to be done in order to create barrier-free travel throughout buildings in our province.
Needless to say, I'm aware that elevator availability and reliability are real issues that need to be addressed in an efficient and effective manner. However, the proposed changes under Bill 8 fail to deal with the underlying problems which are causing vertical mobility issues throughout the province. These problems include a low supply of elevator mechanics, which stems from an artificial shortage of training; a lack of competition in the elevator industry, which is currently dominated by four large players; and the fact that the TSSA is not subject to the accountability structures that normally apply to the machinery of good government.
Before I address these issues, I want to commend Justice Cunningham, who authored the independent study on elevator availability, although the government clearly did not consider its findings during the few hours between the report's publication and the submission of the legislation.
Justice Cunningham is a skilled arbitrator and mediator who has extensive experience finding fair, creative and efficient resolutions to disputes, so it's of no surprise that his report is thorough and that his recommendations are well thought out.
Since there is no enforced regulatory definition for "availability," Justice Cunningham defines elevator availability as "the ability of a building's elevating devices to transport persons as and when required."
On the whole, Justice Cunningham found that elevator availability in Ontario is impacted by a complex set of interrelated problems including, but not limited to, maintenance issues, capacity problems and labour shortages.
The government's measure to address elevator availability and reliability issues is peculiar for two reasons.
It is important to note that, in his report, Justice Cunningham stated that the Reliable Elevators Act, which has largely inspired and informed Bill 8, was based almost entirely on anecdotal evidence.
The member from Trinity–Spadina, who introduced the original PMB, also acknowledged its limitations and even supported a more robust, evidence-based set of recommendations relative to this topic. In his report, Justice Cunningham indicated that it could take several years before comprehensive data could be collected and meaningfully analyzed. In spite of this, the current government has decided to move forward and completely throw the evidence-based policy out the window. It would be interesting to know just how much taxpayers' money went into producing the report for the government to simply dismiss.
If you want to effectively address the issue of elevator reliability, it is first necessary to review regulations and industry practices to enhance labour mobility and availability.
I draw attention to this lengthy process because I want to point out to the government that while shoddy legislation can be drafted fairly quickly, it takes years of education, on-the-job training and physically demanding work experience to become a licensed elevator mechanic in this province.
It is abundantly clear that the capacity for licensing class A qualified elevating device mechanics has not kept pace with growth in the number of elevating devices that have been installed since the condominium boom began. Today, a single mechanic is responsible for as many as 200 elevating units per month, compared to the 40 elevating devices he or she would have been responsible for in the early 1990s.
In order to appropriately address elevator availability and reliability, it is fundamental that we eliminate the artificial shortage in qualified elevating device mechanics. It is imperative that we encourage young people to pursue a trade within this field. Today, many young university graduates are having extreme difficulty finding meaningful employment. Trades are often overlooked by young people, even though employment opportunities following the completion of any reputable trades training program pay well and are stable and in demand.
According to Canadian Business magazine, in a period of only five years, there has been a 94% observed growth in the number of elevator mechanic jobs available in Canada. So it's important that we fill these jobs, and the sector possesses the capacity to meet labour demand through training and certification programs that are currently available at Durham College and through the Canadian Elevator Industry Educational Program.
Many of these issues have been voiced openly by manufacturers, associations, building owners and consultants. However, the government has failed to properly consider the input that has been provided by the sector and its key stakeholders.
Among the groups that have been overlooked when drafting this legislation is the National Elevator and Escalator Association. They indicated: "There is a fundamental misunderstanding in Ontario regarding elevator reliability and availability, and the root cause of any downtime."
Specific isolated instances of elevator problems have created a misperception of widespread elevator outages and unresponsive service companies that is inaccurate and irresponsible. When we went through the various allegations, we heard that many of the elevators are over 100 years old. Some of the equipment is not easy to replace. Some of the equipment, when it breaks down, has to be completely remanufactured. This all takes time. There's transportation, especially if you're in Sault Ste. Marie, as an example. You'd have to transfer that back to Toronto to be worked on and then transfer it back.
They also, interestingly, found that there was very little difference between downtime expected from older installations to newer ones. There are just different problems. The older ones may be harder to fix and equipment may have to be remanufactured, but in the new ones the equipment is all different, it could be out of stock and, again, they may have to be remanufactured. That was one of the issues Justice Cunningham laid out—that you can't just regulate downtime through regulation. There are things in the industry that would have to be changed. It was a long list in the evaluation of just what the issues were.
I know that this bill is certainly intended to fix a problem. We see that it has some elements that are worthwhile. We just think there could have been a better job done in this bill. I think that with the amount of construction and the high-rises in this province, it's time to really sit down, collect the data that's required and actually get serious about fixing elevator downtime.
I will pass this off to the next riding. And since this is probably my last chance to talk in the House this session, I want to take the opportunity to wish everybody well who is running in the next election and those who aren't. I guess June 7 will be an interesting time for many of us, and we hope to see at least some of the people on this side back. We will see what happens.
The Acting Speaker (Mr. Paul Miller): Further debate? The member from Kitchener–Waterloo.
Ms. Catherine Fife: Such enthusiasm from the Speaker. Thank you so much, Mr. Speaker.
The Acting Speaker (Mr. Paul Miller): Absolutely thrilled.
Ms. Catherine Fife: This may be—you never know, right? We are in the waning days of a government, and you just never know if this is the last time you're going to get a chance to speak in the House. It is the reality for all of us.
I wish I could say I was thrilled to be talking about credit reports and elevator availability. There are important aspects of this particular reading of this bill which I will touch on in one second, but I do want to thank the member from Pickering–Scarborough East. The member has already announced that she will not be coming back to this House. I know that she has taken on some very difficult files. The autism file, in particular, is a very emotional and passionate one. I know that there are many people who don't understand how hard it is to stand in this place and to do the work that we do, away from our families week after week, and so with that I would like to thank the member from Pickering–Scarborough East.
Also, as a special going-away goodbye, I would like to say a few words about the member from Guelph, Liz Sandals. She and I have very, very similar political paths that we have taken, as trustees, where you really do find out what you're made of, quite honestly. I think that she and I would both agree that if you get public education right, a lot of other things fall into place. So public education is always worth fighting for; that hill is always worth going up and fighting for the children of this province.
Then, of course, she was the president of the Ontario Public School Boards' Association, where I also served with my colleague here from London West. Having that broader sense of how important it is to invest strategically in public services, particularly on education, but, near the end, on physical health and mental health as well—very valuable lessons that we will all take with us. So I would like to say thank you very much, Liz Sandals.
We've been hearing, though, a lot about—I think Joni Mitchell would not be so happy about being quoted in this House as much as she has been—the paving of paradise and the greenbelt comments.
Also, I do want to say that yesterday was May Day. The former member for Parkdale–High Park—I now sit in her place in this House—had shared a quote that said, with regard to May Day, which was yesterday, "Never be deceived that the rich will permit you to vote away their wealth." This was a quote from Lucy Parsons. I think we've actually had some evidence in this place that there have been government initiatives which have been very willing to open those doors to wealth and not be so vigilant and compliant and really earnest in their efforts to ensure that those who do not have wealth—social wealth, environmental wealth, educational wealth—actually have been able to reach their potential. So I would like to say a special shout-out to Cheri DiNovo as well.
The bill that is before us, Bill 8, the credit reports and elevator availability third reading, where every party has equal time on this debate—I know that my colleague will also be speaking at length about it. It's very interesting because, how did it happen that this is one of the last pieces of legislation that we are debating in this House, especially given the fact that the report came out in January?
This report that came out made 19 recommendations, Mr. Speaker. It said that the province would address the issue of elevator entrapments and breakdowns as it acts on a report that recommends beefing up maintenance enforcement and setting timelines to get out-of-service devices working again—the province's consumer services minister made this announcement back in January, right?
And it's true that the wheels of justice and legislation and the wheels of the law do not work at a very fast pace in this House unless—
Mr. John Yakabuski: They sure don't.
Ms. Catherine Fife: —quite honestly, as the member from Renfrew says, unless it actually suits the government's purpose, it seems to me.
Mr. John Yakabuski: Well, this government.
Ms. Catherine Fife: Well, this government. But this bill, if it was fully enacted—and there's no reason to think that it won't actually come to fruition, because this government is a majority government and they have had the ability to pass multiple pieces of legislation without the endorsement or support of either the official opposition or the third party. They have been in that position now for a full four years. With the exception of a minority government, which is when I first came into this House, they have been in that position to actually get things done in a very fast way, especially given the fact that they have commissioned many reports on everything from water quality to education to workplace safety. All of those reports—sexual harassment, which of course is a big issue in the province of Ontario. This government has commissioned reports on any number of societal legislative issues, where they had the ability, Mr. Speaker, to act. And, quite honestly, there likely would have been no opposition, certainly from the third party, especially on issues that have to deal with mental health or social justice or environmental justice, for that part.
As we stand in our place here on May 2, 2018, literally three days before the election, we are debating elevator availability and compliance in the province of Ontario. I did speak already at length about credit reports and the ability of Ontarians and the citizens of this province to have access to their own credit structure and credit availability. You wouldn't think that you would need to have legislation to ensure that the citizens have access to their own credit rating in the province of Ontario, but that is where we are.
Of course, we, as New Democrats, fully support that. It's an important part of financial security and, quite honestly, financial literacy.
This report that came out in January stated very clearly that having access to an adequate number of working elevators is neither a convenience nor a luxury; it is a necessity, and in some instances, it is a lifeline. I think New Democrats, on the whole, have focused on the fact that as this province grows up—unless we start carving away at the greenbelt and creating more sprawl, which is completely unsustainable and costly to municipalities, the province and the environment. So unless that happens—and I will not evoke Joni Mitchell in this particular instance. But growing up, and intensifying the way that we grow as municipalities, growing up and creating height and density, elevators now become an important piece of accommodations, of accessibility.
The fact that this government has taken till this point in time, given the fact that the AODA was enacted almost a decade ago, Mr. Speaker, to have this other missing piece to the puzzle coming to the fore three days before an election in 2018 really is quite something.
The report also found out that only one in five elevators are in compliance with safety standards. I think that we heard different stats from the minister at the time, but Mr. Cunningham chalked it up to poor preventive maintenance, which he said was the key cause of unscheduled breakdowns. You have a direct connection with maintaining the current stock of elevators, with ensuring the safety of those elevators, and, of course, ensuring the safety of the residents who live in those high-rise apartments and also the long-term-care facilities and the hospitals.
I know that I've told this story before, but I started my career at Otis Elevators, down here on McCaul Street. If you've ever watched the show Mad Men, it was as close to working in that environment as I've ever experienced. This was in 1989, so it was a number of years ago. But the world of elevators, as Toronto was growing up, was a very interesting world to be part of. There was a lot of money that was flowing because, in order for Toronto to grow vertically, elevators became a pivotal part of that sustainable growth. The smoking and drinking at the desks was another part of it. I have to say, it was a surprise to me at the time. I do say, though, when I did watch some of those Mad Men episodes, there weren't nearly enough—what should I say?—good-looking characters as there were in the show. That's my caveat to the analogy.
Among the 19 recommendations—the government said it would act on all of them—was to force contractors to report outages over 48 hours or when half of the elevators in a building are out of service—80% of buildings have only one or two lifts—and to define a plan to restore service. Basically this legislation is ensuring that building operators, the contractors, have the capacity to be compliant with the legislation. Of course, New Democrats support that fully, in its entirety.
One of the pieces, though, that was really interesting is that there's a move on the part of the government to make the downtime of elevators public. If you are shopping around in Ontario—primarily in Toronto, but in Ontario as well—and if you're looking to purchase a condo or rent an apartment in the vicinity, you should have the ability to check to see whether or not those elevators are actually reliable. This would be a consumer protection issue and a consumer empowerment piece.
I have to say that they've left it up to the Technical Standards and Safety Authority to determine compliance and then also to determine the fees if you're not compliant. They're recommending perhaps another agency. Another agency—can you imagine, Mr. Speaker? From a consumer protection perspective, we actually have somebody watching to ensure that this piece of legislation and the regulations are enforced. We have the TSSA; we do not need another agency in the province of Ontario to try to hold the government accountable for its own legislation.
I think I will leave it at that, Mr. Speaker, because in this House there are many, many important issues that we are currently dealing with. The FAO report, the Economic and Budget Outlook, came out today. The FAO, of course, projects a sharp increase in the budget deficit, confirming the Auditor General's criticism.
In the midst of all of these random pieces of legislation that are finding their way to the floor of the Legislature, the overriding feeling, I think, from the people of this province is that this government has not done its due diligence and ensured, as it rolls out various plans like the so-called fair hydro scheme, that they actually are in the best interests of the people of this province.
The very least we can do is make sure that people have access to their homes by ensuring that there is some agency—the TSSA, in this instance—looking at the compliance of elevator safety and reliability to ensure that every Ontarian, regardless of their ability to walk up stairs, has access to their home. It sets the bar very low, but that's where we are right now in the province of Ontario after 15 years of this Liberal government.
Mr. Wayne Gates: It's always a pleasure to rise on Bill 8, Access to Consumer Credit Reports and Elevator Availability Act, 2018. I've always thought it was interesting that we're talking about consumer credit and elevators in the same bill. They kind of go hand in hand.
Mr. Speaker, I want to thank you for allowing me to rise and speak today on this bill. As you know, I've had a lot to say about this bill. I've spoken on it for an hour. I was there in committee when we tried to amend the bill to make these regulations work for families, seniors and, in particular, those with disabilities.
I think this bill gets to the heart of what we're trying to do in this Legislature, and perhaps the difference between myself, my colleagues to the right and across the floor. This bill sets out regulations which meant the residents of Ontario could request their credit score and get it for free.
Mr. Speaker, as you know, in this day and age credit scores matter. They matter when you buy a house or get a loan on a car. They matter when you try to get a credit card. Even if you've moved into a new place and you're turning on your hydro for the first time—something we've talked a lot about in this House; as you know, Mr. Speaker, hydro is a big issue—many companies will waive the deposit if you pass, if your credit rating is good, so you wouldn't have to pay for that.
These scores define what products people can access. Oftentimes the people who need these products the most are the ones who get hurt by a credit score. If you think about it, it's true. Maybe it's a struggling family or someone who needs a helping hand up. Maybe they can't afford that deposit for hydro, or maybe even to buy a used car. That's where credit scores come into play.
Why should people have to pay for access to something so important? I ask you that, Mr. Speaker: Why should they? I sat there in the committee and frankly, I was shocked by the conduct of my colleagues in the PC Party. Mr. Speaker, this is interesting: They used every single opportunity to try to remove this from the bill. Instead of making sure that the people of this province and in their ridings could get a credit score for free, they wanted companies to make a profit off of that.
It's not quite on the bill, but it's a little bit on the bill: It's almost like the greenbelt, where a video comes up, their leader gets caught telling developers that they will be able to build on the greenbelt, where they would make billions of dollars. It had nothing to do with the people, what was better for the people or the environment—nothing. Then—think about it. I said this yesterday, Mr. Speaker. You weren't here yesterday; you weren't the Speaker yesterday. Think about it, the attack on our farmers that would be. If you're in the province or in this country and you can't feed yourself, you're going to be in big trouble if you're relying on other countries for your food.
We should never, ever think about touching the greenbelt. But the issue there was that it had nothing to do with saying that he listened to the residents. He got caught on video. It's tough to argue when you're caught on video.
Now I'll get back to the bill. They didn't try this just once or twice, to allow the companies to make a profit. Almost all of their amendments that dealt with this part of the bill focused on trying to remove the ability for people to get their credit score free. Why? I want to hear them explain why they think companies should make a profit off of something so important. I have heard in the past few weeks that apparently, their party is for the people. But the amendments we saw in committee were not for the people at all. Repeatedly, we hear their newly appointed leader tell the people of this province that he's for them and that he has their best interests at heart. Frankly, I think that's a complete sham. It's a disgrace to openly try to dupe the people of Ontario like that.
The amendments on this bill were a clear example of this. The amendments put forward were solely there to help the agencies that make money off the backs of people looking to receive their credit score. I'm going to read that again, Mr. Speaker, in case somebody is listening: The amendments put forward were solely there to help the agencies that make money off the backs of people looking to receive their credit score. Frankly, that was disappointing, but not entirely surprising.
We in the NDP have tried to make some changes to this bill as well. For example, with some companies, every time you make a request for your credit score, it hurts your credit score. I use that example of when you go to a Blue Jays game. At a Blue Jays game, they have those Blue Jays bags. I'm a big Blue Jays fan; I know you like the Blue Jays, Mr. Speaker. You go there and there are these bags; just sign up for a Mastercard. Well, I never knew, until I got into this bill, that that actually hurts your score. You get a nice Blue Jays bag—the bags are nice—but at the end of the day, you're hurting your credit score. And I never knew that.
So that's an example of that. I think a lot of us have done it. We've done it with the Toronto Star, a lot of times, at Blue Jays games as well. I'm a big Blue Jays fan—I don't know if anybody can tell—and when I go to the games, sometimes they have the Toronto Star, and they do the same type of thing: You sign up, and you end up hurting your credit score.
Think about that for a minute. Someone wants to be responsible and know what their credit score is, they want to plan around their credit score, and they get punished for it. Does that make sense to anybody? I would say that it doesn't.
For many people who are trying to access credit, it's because they don't have the funds to cover whatever expense they're dealing with. They should not be penalized for trying to receive help. Think about it: Not only are these companies charging them to get their credit score, but they're getting a penalty to do so.
How are people supposed to take control of their finances if there are so many barriers in the way? We put forward an amendment that would ban companies from hurting people's credit scores if they requested a report. I think it's fair; I think it's balanced; I think it's timely. Do you know what happened, Mr. Speaker? They voted it down. Maybe the government can explain that. I know there are a lot of them who are here this afternoon; I'm not so sure how many are really paying attention.
They did accept our amendment—I'm going to give them credit. When credit's due, you've got to say, "Hey, they did the right thing." They did accept our amendment to allow people to access their reports electronically. People can still get a penalty for requesting them, but I suppose we can at least be happy this government is willing to enter the 21st century. I think this was an area that was missed by the government in the bill. In today's world, as we all know, we're becoming increasingly digital, and I know that, for many, it would be much easier to get their reports online rather than wait for a physical copy. So I do congratulate the party for accepting our amendment. I was pleased to see the government support our amendment.
I'd like to touch on the second part of the bill: elevator safety and repairs. Before I get into my prepared notes—seeing as I have a fair amount of time left, I can do this. I'm sure I can do it—as soon as I get the right page.
In a report by Justice Cunningham, the TSSA says that its mandate is safety, not elevator availability, and says that the linkage of these two concepts will weaken safety. But Justice Cunningham—who I've heard the Liberals, and a little bit from the Conservatives, talk about—this is what he said: Justice Cunningham questioned the TSSA's position, and suggested that a modern regulator should be able to do both. He recommended that this issue be explored further—and this is the important part—suggesting that the TSSA might be replaced with another regulator capable of fulfilling this most comprehensive public mandate. I think that's very good.
Then there's another part to this: Elevator safety is regulated by the Technical Standards and Safety Authority, which is a privately run, delegated administrative authority established in 1997 by the PC government. I find those things very interesting.
When I spoke about the bill in my leadoff, I spoke about a major concern I have with the handful of elevator companies that control this industry. We have a handful of companies, multinational corporations, that are controlling this industry. These are major corporations that have already been fined $1 billion for collusion in Europe. Think about that, Mr. Speaker. They're controlling what's going on in the province of Ontario.
What we see here in Ontario is just startling. These companies are driving smaller companies out of business, and we're seeing more and more elevators break down. What ends up happening is that elevator repair technicians who are trying their best simply can't keep up. Twenty years ago they would service around 30 elevators. Now they're being asked to service over 100 elevators in half the time.
When we talk about this industry, the thing we should be talking about the most is safety and availability, but the four companies that are really driving this industry are talking more about profit than safety. That's the problem with the companies that are there.
I heard my colleagues from the PCs talk about it. It was probably a half-decent point: that we have an opportunity here in the province of Ontario to talk about the opportunity that we have in this industry to create good-paying jobs as technicians taking care of the elevators, and to tell the companies that are under-servicing these elevators, "Let's hire more technicians."
Then the argument was, when the companies were here at committee—you know what they said, Mr. Speaker? "Well, we'd have to have more apprentices." Isn't that a good idea? Let's have more apprenticeships, more young people getting into a good job, not just as technicians but right across the province of Ontario. We need more apprenticeships, and companies should have an obligation to make sure they have apprenticeships, so our young people can get into the skilled trades.
Skilled trades are not just men, because I know that's what some people think. Women and men can become journeymen, could get apprenticeships and help us with the skilled trades shortage that we have right across—not only Ontario, by the way, and Niagara, but right across the country. The best way to do it is to start with our young people and give them apprenticeships. This is a good industry to start with, knowing that they don't have enough technicians.
What happens when an elevator breaks down in a 10-storey building, whether it's in Toronto or Niagara or anywhere else, like Ottawa? The same thing happens: If the people are young, they're lugging groceries up 10 flights of stairs for a long period of time. Young people can handle that. My daughter is 22 years old, and she could go up 10 flights of stairs, no problem. It's an inconvenience for them, but it's not the end of the world.
But for a senior, this is a dire situation. It is a crisis. They're now trapped, say, on the ninth or 10th floor. It could be our parents; it could be our grandparents. They can't get down to see the doctor or, in some cases, to get their food. Some are lucky, I will say that: They've got family. We all would help our moms, our dads and our grandparents. We'd hope our kids would help us as we get older. So they're lucky. What happens if they're not? Well, simply, they're trapped in their own homes. It's a health risk, it's a safety risk, and, frankly, it can't be allowed to happen.
Mr. Speaker, let me talk about the history of this bill and then talk about what happened in committee. This portion of the bill seems to stem from a previous private member's bill from a member of the Liberal Party. At that time, the member put forward a bill saying that, 14 days from the moment the landlord first knows about the outage, the elevator should be fixed.
Like I mentioned, this is a health and safety issue. Two weeks is bad, but imagine it being for months at a time. Imagine not being able to leave your house or see your doctor for a week at a time, if you're a senior or if you have a disability. I think that we're all watching what's going on. We're all getting older. Society is getting older. We're not getting younger; we're getting older. There are more and more seniors. There are more and more of those with disabilities who have to live in condos or apartment buildings.
When the private member's bill originally came forward, we supported that. We thought, "Hey, that's a good idea." We supported it. Mr. Speaker, strangely, that portion of the bill which requires landlords to fix these elevators within two weeks was missing in this version of the bill. So we put it forward in committee. Again, I think that it's fair and it's balanced. I think it would make the bill stronger.
Members of the government even applauded the member for bringing his private member's bill forward. His own colleagues said, "Hey, this is really good. It's a great idea."
Yet when we brought the amendment forward to put the timeline back into his bill, which they supported, you will never guess what they did, Mr. Speaker. This would have given the bill some teeth. Do you know what they did? They voted it down. I know you're not surprised at that, but they voted it down.
Imagine that. This government voted down a regulation that a member of their party first recommended and that would have added teeth to the bill. I don't know how that member feels, but I know how I'd feel if my members had done that or if my colleagues had done that to me.
What's the point of doing this if companies have no deadline? What stops them from leaving these elevators out for long periods of time?
Mr. Speaker, as you can see, we listened to the presentations at committee. We put forward common-sense motions that tried to give the bill some enforcement mechanisms and truly protect people. Unfortunately, the PCs and the Liberals had no interest in making those a reality. I don't know why. Again, I'm saying that they were fair, they were balanced and they would have strengthened the bill. They would have made it easier for those with disabilities. They would have made it easier for seniors. They would have made it easier for young single moms, who may have a couple of children who won't be able to go up and down stairs. They said no.
Mr. Speaker, it gets to the heart of what we do in this Legislature. We have large companies gouging people as they try to access their own financial records. We have groups of large companies that have caused the safety standards of elevators to drop, while their profits increase. In committee, we saw both other parties, in one way or another, stand up for these companies.
Mr. Speaker, what are we here for? We should be here to stick up for the average person, who needs a voice, who needs our voice, and relies on our voice—average people who don't have thousands of dollars to throw around on donations, or billions to spend on fines. In these instances, we need to stand up for them and make sure that they have access to their personal information, and that their homes are safe and accessible. Does that sound fair? Does that sound reasonable? Absolutely. We need to stand up to big companies and entities looking to take advantage of them.
When I see a bill like this, I see that maybe it doesn't seem like the most exciting bill or, quite frankly, the most controversial issue. But, fundamentally, it's about the most important thing we can talk about here: That's standing up for voters and doing the right thing.
I'm going to go back to this part here that I want to get out. I have a few minutes left, and I know everybody wants to make sure I use up all my time. This is coming from Justice Cunningham: Bill 8 ignores the number one recommendation made by Justice Douglas Cunningham in his new report—which was to clearly define elevator "availability." Cunningham said that a clear definition was crucial to serve as a guide for regulations and policies that move beyond just elevator safety to a broader public mandate of availability. I think that's important.
Here's something else that I think is important: As more people live in condos and apartment buildings, access to elevators is as essential to their lives as access to heat or hydro. Think about that: It's just as important as heat and hydro—and again, we've been talking about hydro lots—particularly for seniors or people with disabilities. I believe that's what the big issue is here. It's about our seniors and those with disabilities. But the government has entrusted elevator availability to the TSSA—this is important, Mr. Speaker, very important—a privately run delegated authority that takes a very narrow view of the regulatory mandate with respect to our elevators.
The last point I want to make: Over the last two decades, the PCs and the Liberal government have embraced the private delegated administrative authority, or DAA, model to regulate public safety and consumer protection. This does not improve efficiency, consumer protection or public safety. Instead, it gave the PCs and the Liberals yet another way to avoid accountability and transparency, harming the public interest. For condo residents and tenants living in tall buildings, particularly seniors and people with disabilities, this is a real issue.
I remember when I first started talking about this, I think it was the PCs that said—it might have been the Liberals—that it's not an important bill, that there are a lot of other things that are important. When I was in committee and I talked about the fact that I believe that the elevator business is in crisis—I think 2,700 firefighters had to go rescue people out of elevators last year, just in Toronto. That's just Toronto, and those numbers are high in Ottawa and other municipalities.
So when we look at this bill and I look across to my colleagues, we have to make sure that we take care of those with disabilities and our seniors. Whether we want to admit it or not, some of our colleagues are seniors. Some of our colleagues in this chamber today have disabilities. We should make sure, if we're going to bring bills forward, that they're taken care of. We're going to be seniors one day; we may be a person with disabilities. Our moms and our grandparents may have disabilities today.
The last thing I'm going to say: The government and the PCs have chosen to be on the wrong side of that issue. I urge them to join with us and do the right thing for average people.
I'll finish by saying thank you very much. I wish everybody good luck in their election. Enjoy yourselves out there. The most important thing about campaigning is to have fun.
We're privileged to be here. There are 107 of us right now, I think. It's going to increase in the election. Think about this as we're all sitting at home, maybe having a pop later today, or having supper with our families, those who live close enough. There are 14 million people who live in the province of Ontario, and the people in this room—there are 107 of us out of 14 million. They've given us a lot of trust and a lot of responsibility.
When you're going door to door over the next little while, campaign as hard as you can, but have fun and enjoy it. When we all come back, we've got to make sure we're speaking on behalf of all the people in the province of Ontario—all of them.
Thank you very much. I appreciate it.
Mr. Han Dong: I'm very glad I have a few minutes to talk about this bill, which is very close to my heart.
Before I begin, I would like to say that I agree with the member from Niagara Falls. I wish every member here who is running, going forward, a successful and safe campaign. I hope that after June 7, the member from Niagara Falls will still come downtown, and we can have a Steam Whistle together to celebrate and share experiences.
Speaking of campaigns, I can remember that in 2011, it was my first time meeting the member from Pickering–Scarborough East, during her campaign kick-off. It was very warm, and it was sort of like a garage setting. That's where the campaign office was, or at least where the launch was. The first time I saw the candidate, now the member for Pickering–Scarborough East, she came across as someone who was determined, someone who was going to be a strong advocate for the riding. I saw the people around her, the supporters, and they were determined to work as hard as they could and win her the seat, because they believed in her. They were right: She is a strong advocate for the people of Pickering–Scarborough East. I'm sorry to hear that she won't be running in this upcoming election.
In her capacity as the Minister of Government and Consumer Services—I had a very good time. I had a very enjoyable time working with her on two of my private member's bills. The very first one was to license home inspectors. The second one, of course, is the Reliable Elevators Act. On both occasions, her office worked closely with my staff and myself to assist us to get to the core of the problem. I remember the very first meeting, arranged through her office, with the TSSA. That was the beginning of my understanding of this very complex issue. So I want to thank her for her assistance in the making and development of my private member's bill.
This bill, a very comprehensive Bill 8, based on the study that was commissioned through the TSSA, speaks to the problem that we are seeing in Ontario with the rapid growth of high-rises. This bill, going forward—I hope that it will be supported by all members of this House and will help Ontarians, not just for today but for the future.
There are so many people now, young and not so young, getting to the age where they have to use a wheelchair. Using the elevator is, to some, always a challenge, a psychological challenge, because they have been entrapped in an elevator. Right now in Ontario, on average, there are 26 entrapments every day. That's unacceptable. We have to change that.
This bill will speak to it. It will provide the legislative framework for the regulation to go forward, to put a time frame on elevator repairs. I'm so proud that I can be here to support this bill.
The Acting Speaker (Mr. Paul Miller): Further debate? Second call for further debate. Third and final call for further debate.
Seeing none, pursuant to the order of the House dated April 19, 2018, I am now required to put the question.
Mrs. MacCharles has moved third reading of Bill 8, An Act to amend the Consumer Reporting Act and the Technical Standards and Safety Act, 2000. Is it the pleasure of the House that the motion carry? Carried.
Interjection: No.
The Acting Speaker (Mr. Paul Miller): I didn't hear a no. You said no?
The Acting Speaker (Mr. Paul Miller): Okay. What do we do? Just a time out now. It was a little late. I didn't hear it. I'll talk to the Clerks' table. You guys have to be a little sharper on that.
The Acting Speaker (Mr. Paul Miller): I'm sorry, I didn't hear the no. It's carried.
Be it resolved that the bill do now pass and be entitled as in the motion.
Third reading agreed to.
The Acting Speaker (Mr. Paul Miller): Orders of the day. Minister?
Hon. Mitzie Hunter: Mr. Speaker, I move adjournment of the House.
The Acting Speaker (Mr. Paul Miller): The minister moves adjournment of the House. Does the motion carry? Carried.
This House stands adjourned until 9 o'clock tomorrow morning. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,809 |
2020 Mercedes-Benz E-Class vs. 2020 BMW 650
The Mercedes-Benz E-Class does not sway to either performance alone or just pure interior comfort and beauty. It mixes all the features together that make it a great vehicle to invest in! From ample cabin support and space to a powerful lineup of engine options, the E-Class offers them all. Let us list down its features and understand more about this vehicle along with those of its competitor, the BMW 650 to make a comparison.
The Mercedes-Benz E-Class is able to comfortably seat up to a total of 4 passengers and a driver within its cabin that offers plenty of room to go around. Its interior looks posh with its high usage of deluxe materials to adorn its sides and surfaces in full. It has seats that are cushy and spacious which are upholstered in synthetic leather for that classy touch. Features at the front row include power adjustment options as well as heat and massage functions which are available for upgraded comfort.
The BMW 650 has two rows of 5 seats that are upholstered in leather. Its front row comes equipped with heat function and power adjustments whereas other features come as upgrades. Overall, the cabin offers enough space at the front but its back row definitely needs some improvement as regular folks are already feeling the squeeze.
Technology & Safety
The Mercedes-Benz E-Class has a cabin that comes well-equipped with technology. The car has a 12.3-inch screen, its COMAND® system, Android/Apple platform compatibility, HD radio, Bluetooth® connectivity, navigation and a pair of USB ports. It also does not skimp on safety with devices such as rain-sensing wipers, rearview camera, brake assist, attentiveness monitor, brake and crosswind emergency assist, and pre-collision protection system.
The BMW 650 offers several tech features, like its iDrive, Bluetooth®, 10.2-inch display, navigation, WiFi, satellite radio, 12 speakers, a pair of USB ports, and Apple CarPlay. For safety, the car comes with an attentiveness monitor, rearview camera, cruise control, collision preparedness, sign recognition, blind spot monitor, auto emergency brakes, pedestrian detection, collision warning, and departure warning.
The Mercedes-Benz E-Class generates 255 horsepower through a 2L turbocharged engine on a 9-speed automatic. Its mileage can reach 23 mpg for cities and 32 mpg for highways.
The BMW 650 is run by a turbocharged engine with 6 cylinders that puts out 335 horsepower. Its mileage fetches 20 mpg and 29 mpg for cities and highways respectively.
The Mercedes-Benz E-Class does not make it tough for potential buyers to arrive at the wise decision of choosing it as their best pick. It easily appeals to many through its inclusive combo of style, comfort, and performance. Its cabin feels supple and supportive, whereas its engine lineup is robust.
Ready to Schedule a 2020 Mercedes-Benz E-Class Test Drive Today?
If you are interested in buying the 2020 Mercedes-Benz E-Class, let Mercedes-Benz of Huntington deliver the best deal in town for you. We serve customers throughout the state and its surrounding regions. Drop by our showroom located in Huntington or call us to pre-book for a 2020 Mercedes-Benz E-Class test drive appointment today! | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 461 |
<!DOCTYPE html>
<html>
  <head>
    <title>mediaelementaudiosourcenode-wrapper.html</title>
    <script src="../../resources/js-test.js"></script>
    <script src="../resources/audit-util.js"></script>
  </head>
  <body>
    <script id="layout-test-code">
      description(
          'Verifies that for .mediaElement getters, a wrapper that corresponds to the actual element is created.');

      let source;

      function testMediaWrapper(kind) {
        // Create a media element of the given kind ('audio' or 'video') and
        // attach it to a fresh AudioContext through a MediaElementAudioSourceNode.
        let element = document.createElement(kind);
        let context = new AudioContext();
        source = context.createMediaElementSource(element);

        // Drop the only direct references to the element and context, then
        // force garbage collection. The source node's .mediaElement getter
        // must still return a wrapper tied to the live underlying element.
        element = context = null;
        gc();

        // Reading an arbitrary missing property should simply yield undefined
        // (rather than throwing or crashing), which shows the wrapper still
        // corresponds to a valid element object after GC.
        shouldBeUndefined('source.mediaElement.nonExistentProperty');
        source = null;
      }

      testMediaWrapper('audio');
      testMediaWrapper('video');
    </script>
  </body>
</html>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,173 |
The Composer's Roundtable: It ain't music to my ears without Alexandre Desplat but I'll listen if I must ...
I don't know how they can call this a Composer's Roundtable without including Monsieur Desplat! Ridiculous. (Or is that me?!) But, it's Slacker Sunday and I guess some of you might want to hear what composers like Danny Elfman (Big Eyes) Marco Beltrami (The Homesman), Trent Reznor (Gone Girl), Hans Zimmer (Interstellar) and John Powell (How to Train Your Dragon 2) have to say on the subject of writing film scores. Below they sit down with THR's International news editor Kevin Cassidy to discuss their 'awards-worthy' films.
Awards-worthy but not necessarily nominated. I thought Gone Girl was fantastic but I'm not at all surprised it's not really in the awards season conversation — except for Rosamund Pike nominated as best actress primarily because as always there's a dearth of female lead roles —Fincher, that youngish know-it-all is just not their kind of guy. I'm happy to hear what Trent Reznor has to say about his musical collaboration and his Golden Globe nominated score. And my son keeps telling me I really do need to see The Homesman for which Hilary Swank probably deserved a nomination, so hearing what the composer, the two time Oscar nominee Marco Beltrami (for Hurt Locker and 3:10 to Yuma) has to say about scoring and working with Tommy Lee Jones should be interesting. And of course the legendary Danny Elfman whose music graces The Simpsons plus loftier fare like Good Will Hunting, Milk, Big Fish and Men in Black, all of which were Oscar nominees for Best Original Music Written for A Motion Picture. John Powell counts How to Train Your Dragon, Happy Feet and Shrek amongst his mostly animated movie credits which includes popular film titles like Rio and Kung Fu Panda and Ice Age, but also — and oddly to my thinking — The Bourne movies, Mr. and Mrs. Smith, and Two Weeks Notice. Add Hans Zimmer, nominated this year for Interstellar who already has eight nominations to his name plus the win for Lion King. What else has the slacker done? Let's see: Inception, Sherlock Holmes, Gladiator, The Thin Red Line, As Good as It Gets, The Preacher's Wife and Rain Man. And that's just the Academy Award nominated projects. He has an outrageously long list including movies we love like Thelma and Louise, Crimson Tide, The Rock, As Good as it Gets, The Ring, all the Pirates of the Caribbean movies, the Batman flicks, and on and on and on so I surely can't deny him a seat at the table!
Sigh. I guess Alexandre Desplat, the eight-time Oscar nominee, was too busy to join his fellow composers; he's nominated this year for both The Imitation Game and The Grand Budapest Hotel and is currently hard at work on The Light Between Oceans, which stars Michael Fassbender, Alicia Vikander and Rachel Weisz, plus the upcoming Sarah Gavron project Suffragette starring Helena Bonham Carter, Meryl Streep and Carey Mulligan. Something to look forward to, ladies! But I digress ... here then, the composers' roundtable, sadly sans Monsieur Desplat.
Since he can't be there, I'm digging up a snippet from Desplat's Oscar-nominated score to The Imitation Game, which frankly I much prefer, music-wise to The Grand Budapest Hotel, so you can have a listen. You're right, that's a flat out lie. I'm digging it up so I can have a listen. Enjoy! You know I will.
Alexandre Desplat Composers roundtable Danny Elfman Film composers film scores Hans Zimmer movie music music roundtable sample Scores Trent Reznor | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,023 |
Since 1990, Excellent Guides company has been grounded in the diverse culture and traditions of Tanzania and our love of the country is at the heart of everything we do. We were founded and are owned by a team of local safari guides, who grew up here and know the land like the back of their hands. Our core value is "Serving with Excellence and Passion". We believe an excellent holiday allows you to discover breathtaking places and get to know and immerse yourself in different cultures.
Excellent Guides has a team of approximately 40 professional guides with an average of more than 10 years of experience. Our guides are professionally trained, licensed to operate within all Tanzanian National Parks, Game Reserves & Wildlife Management Areas, and dedicated to providing top-notch customer service. They are fluent in English, German, French and Spanish. The company specializes in tailor-made wildlife safaris, climbs of Mount Kilimanjaro and Mount Meru, Zanzibar beach holidays and cultural tours such as visits to the Maasai, Datoga and Hadzabe.
Our affiliation with TTGA – the Tanzania Tour Guide Association – ensures our clients receive the most rewarding and ethical experience with an authentic impression of our country.
"redpajama_set_name": "RedPajamaC4"
} | 9,617 |
Ecological speciation occurs when local adaptation generates reproductive isolation as a by-product of natural selection1–3. Although ecological speciation is a fundamental source of diversification, the mechanistic link between natural selection and reproductive isolation remains poorly understood, especially in natural populations2–6. Here we show that experimental evolution of parasite body size over four years (ca. 60 generations) leads to reproductive isolation in natural populations of feather lice on birds. When lice are transferred to pigeons of different sizes they rapidly evolve differences in body size that are correlated with host size. These size differences trigger mechanical mating isolation between lice that are locally adapted to the different sized hosts. Size differences among lice also influence the outcome of competition between males for access to females. Thus, body size directly mediates reproductive isolation through its influence on both inter-sexual compatibility and intra-sexual competition. Our results confirm that divergent natural selection acting on a single phenotypic trait can cause reproductive isolation to emerge from a single natural population in real time. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,742 |
Inaugural MotoE GP this weekend ( Electric MotoGP replacement )
MotoE bikes – the facts:
The MotoE motorcycles are fully electric and zero-emission.
The top speed is 155mph compared to 220mph for a MotoGP machine.
A MotoE bike can go 0-60mph in three seconds.
There is no gearbox or clutch.
MotoE motorcycles weigh nearly 60% more than MotoGP machines.
They take no more than an hour to charge, and for every hour there is 20 miles of maximum performance from that battery.
Race one: 5-7 July – Sachsenring, Germany
Race two: 9-11 August – Red Bull Ring, Austria
Races three and four: 13-15 September – Misano, Italy
Races five and six: 15-17 November 15 – Valencia, Spain
"Imagine what it would be like peddling a bicycle, without pedalling, and without it being a bicycle"
"The roar of an akrapovic exhaust at 100mph filtering down the M50 being replaced by the whirring spin of a playstation console within the speed limits and all controlled by a government App which must be kept up to date."
"Birds have been doing just fine for years without propellers and jet engines polluting the atmosphere and melting the ice-caps leaving thousands of penguins stranded, homeless and in despair."
"This sound of silence marks the beginning of a new era and the end of a chapter on antisocial internal combustion engines"
"MotoGP are at the forefront of this silent protest against historically noisy and wreckless motorcycles damaging our environment, and are leading by example towards a revolutionary new propulsion method to harmonise our carbon footprint and help create a much healthier environment for bees, butterflies and everyday sustainable living for the greater good of humanity"
Caoimhe Butterly, The Green Party
For MotoGP rider Bradley Smith this is now his reality as he prepares to compete in the MotoE World Cup.
The new event is part of the MotoGP series and is the equivalent of Formula E, as it features electric motorcycles powered by renewable energy.
Bike fans will have to be content with a new 'silent' alternative. MotoE machines have no engine, no gearbox and are charged by batteries.
"I can hear my knee slider touch the ground and I can hear my bike vibrate as it goes over the kerbs, and all of that is really strange," says Smith, who will be racing for the One Energy Racing team.
"For me, it's like something out of Star Wars – they're not completely quiet, the bike still has character."
The opening race gets under way on 7 July at the Sachsenring in Germany, but there were doubts the championship would ever go ahead this year after a fire at a test session a few months back destroyed every single bike.
The March 2019 fires engulfed a newly built 'E-paddock' causing almost 1.6 million metric tons of toxic hydrocarbon emissions to be released into the atmosphere, and while no-one was injured, an investigation is still underway.
However, MotoGP bosses confirmed that a 'short circuit' reference at a debriefing may have been the culprit and misinterpreted to mean something else. This will now be referred to as 'international nautical unit distances of measurement' in order to avoid further confusion and the inadvertent loading of electrical charging units, which sources say may have caused the batteries to overheat and self-combust following much confusion and language misinterpretation surrounding the statement.
References to 'International Circuit' will remain the same.
The inaugural MotoE GP race was previously set to be held in Jerez on 5 May before the incident, which claimed 1,600 Madagascar penguins and caused paddock-wide power disruptions and outages for iPhone users. British rider Bradley Smith, 28, was not sure if he could even ring home:
"I was heartbroken – my battery was on 12% and twitter was pushing updates from people i don't even know, bookface was serving russian ads for blowup dolls and the missus was wrecking me head about toilet seats"
"Just when I went looking for a charger, whamm, lights out – a total nightmare like independence day, I thought the world was about to end.
I missed the best episode of Strictly and had to sit there in total silence waiting for an update in my underpants which stank of wee. You see the films but you never imagine it's going to be you, this was much worse – a real Hell on earth until the power got restored. And I'm not ashamed to admit it, but I've been to several counselling groups since the outage which have really helped me come to terms with the total despair and isolation. Breathing exercises really helped"
John McGuinness in his natural Habitat
David Attenborough Docudrama courtesy of http://www.biker.ie/
Biker ie
02:37 More in
Source: John McGuinness in his natural Habitat
Harley Davidson Riders Ireland
The unquestionable allure of Bogman & his Harley
Source: Harley Davidson Riders Ireland
Isle of Man TT Races 2015
The Isle of Man? Never really fancied it. Isn't it that birch-twig-thrashing, gay-banning island? The one with the three-legged swastika of a flag, some stunted cats and a load of tax exiles in pastel sweaters. All living inside Her Majesty's protectorate, where they get to make up their own laws. Oh, and it rains a lot. Have I actually been there? Of course not.
But when I was asked if I'd come and check out the island's world-renowned TT motorbike races, the place took on a rather perky allure. You want me to scorch some tarmac on the back of a Japanese racing bike? Oh, go on then. I let on to an old friend who happens to be a biker that I was going to ride pillion with a former TT champion. He shouted: "This will change the chemistry of your brain." I was taken aback as he continued: "I've never been so jealous of anyone, ever, in my whole life."
The TT – Tourist Trophy – is the world's oldest and most dangerous motorbike race. What kind of sport is this? Apparently all you need is a big engine balanced on two wheels, and to be a nutter. The fleeting glimpses of screaming-past bikes draw crowds of more than 40,000 to the Isle of Man. I pictured legions of hoary Clarkson-esque geezers, paunches atremble in the exhaust-thick air.
So with all my prejudices at the ready, and not particularly in the mood for any brain-chemistry malarkey, I flew in to Ronaldsway on a sunny Tuesday morning. The island looked very nice. A bit like Yorkshire. Apart from the bits that are like Scotland. It smelled of sea and heather and good things, and I quickly discovered that there's outstanding seafood to be had.
But I was here for one thing: motorbikes. Though given the notorious mortality rates (135 killed in TT action since its founding in 1907) I was getting nervous. Old memories began to resurface, of reckless boyfriends on mopeds, and how my dad came off a Norton Commando in the Seventies and broke most of his limbs. It doesn't make sense for humans to perch on top of these rocket-like machines.
The seaside village of Port Erin
I was to be riding pillion with Richard "Milky" Quayle, a former TT champion. Alarmingly, he's also a champion whose 160mph collision with a drystone wall left him with two punctured lungs and no spleen. Milky now trains TT newcomers, mentoring them as they memorise every corner and bump of the 37¾-mile circuit. Named for his resemblance to the Milky Bar Kid, he's a proud Manxman who grew up with the TT. "We live and breathe it," he said. "You always get some who complain about the chaos when the TT comes, but for most of us it's like Christmas."
I put on the leathers and attempted a swagger, but although I was dressed for the part, I was now hopping about with nerves. "Don't worry," Milky reassured me. "You'll love it. It's all about freedom." He showed me the bike, a Yamaha FZ8 800cc, took me through my safety drill, and I climbed on board. We launched out of the Grandstand, down Bray Hill and on to the TT circuit. Unlike British championship and short-circuit races, with safety barriers and run-off space, the TT weaves around mountain slopes and village lanes that twist and undulate.
What happened next is hard to relate. Picture those cartoon fights where fists and stars burst out of an obscuring cloud. Because, sorry to say, I immediately entered a quasi-religious trance and became unable to formulate sentences. The word ecstasy entered my mind, and the notion that, if I got off the bike, I would become just a dull, foot-walking human again. I recall a feeling of wanting to cry because it was soon going to be over. Followed by a dread that I had all at once become a hollow-eyed addict, ravening with desire for my next bike-fix.
10-time winner Ian Lougher jumps Ballaugh Bridge
I defy anyone to try this and not be consumed. The thrill is extreme. Is it sexual? In a word: yes. Combining power, speed, thudding adrenalin, and – how can I put this? – more than 215 kilos of purring metal between your legs, how could it not be? But it's more than that. It's striving, in a solitary way, to be more than human. The TT riders are taking risks that look insane to outsiders but magically make sense when you're doing it.
Against my expectations, doing the circuit with Milky never once felt dangerous. Yes, there was g-force, death-defying cornering and speeds that changed the shape of my face, but I was with a pro. He says of the risks: "If an Olympic marksman misses his target he loses a point. If I miss an apex, I lose my life."
Far from being careless, these guys know more about safety than most. My fear that I would scream like a girl's blouse was also unfounded. Even as we popped a wheelie over Ballaugh Bridge I felt completely safe.
Maybe it's no surprise that Milky won't ride pillion and hates being a passenger: "I have to be in control." He mentions other TT riders who don't ride their bikes in normal traffic, or who won't even fly. For apparent hell-raisers they are very cautious. I wonder whether the proximity of death makes them value being alive in a more conscious way than the rest of us?
At a launch event, with beer and fans and PR girls wearing giant eyelashes, I tried to find out. The riders certainly looked like ordinary guys, low-key and unassuming. Without exception they were patient with the queuing autograph-hunters. Ian Lougher, 10-time TT winner from Wales, said that before he even got near the Isle of Man he spent months committing the circuit to memory: its 226 gear changes, each and every corner. That's a lot of work, I mused. "If you don't do it, you die," he replied flatly. But the risk is worth it. He compared the TT to the Grand National: "It divides opinion, but it's the ultimate glory."
The unifying qualities seem to be commitment and guts, plus a pinch of superstition. They joked about lucky pants and T-shirts, and rituals such as getting on to your bike from the left. Conor Cummins, a local hero, said he goes commando. The final countdown to the race is sheer madness, said 2011's fastest newcomer, Simon Andrews: "But once you get on your bike you just go into the zone. Then it's complete calm."
The fishing port of Peel
The TT riders aren't what you might expect. They're not macho, reckless show-offs. And they're not privileged, bratty sports celebrities. They come from blue-collar backgrounds, doing something that most people don't understand, with a passion that few people experience. "It's hard to explain," said Johnny Barton, preparing for his 23rd TT race. "There's nothing else like this. That's why I can't stop doing it. Believe me, I've tried."
The Isle of Man itself has a defiant, untameable beauty. This place is a law unto itself. And it's that same rebellious spirit that fuels the TT. Yes, there are those who say it's too dangerous, but no one is being forced to do it. If you don't like it, then don't ride it.
A number of confessions: I was wrong about the Isle of Man. It's a friendly and lovely place. I was wrong about the TT riders. They're not nutters; they are heroes. And on top of it all I have a secret crush on Milky, which is plain unprofessional. What was that about brain chemistry?
Slightly red in the face, I'll leave the last words to Alfred, Lord Tennyson. Clearly, the Isle of Man TT was right up there on his bucket list:
How dull it is to pause, to make an end, / To rust unburnish'd, not to shine in use!
There are 19 departure points from the UK and Ireland to the Isle of Man with easyJet (easyjet.com), Citywing (citywing.com) and Flybe (flybe.com), including London, Liverpool, Cardiff, Glasgow and Southampton.
The Isle of Man Steam Packet Company (08722 992992; steam-packet.com) has a year-round ferry service from Heysham and Birkenhead and a seasonal one from Liverpool, Belfast and Dublin; from £65 each way for car plus two passengers.
Every summer, from mid- May to mid-August, the Isle of Man becomes a basking shark hotspot. Boat trips to see them cost from £25 per person for three hours (01624 832761; geminicharter.co.uk).
The Isle of Man is home to brown and mountain hares, stoats, mountain goats and more than 100 wallabies; guided wildlife tours cost from £30 per person for three hours (01624 678788; iomtours.co.uk).
A wide range of migrant birds can be seen from the island's bird observatory on the Calf of Man, an islet off the south coast reached by boat from Port St Mary; from £15 return (Calf Booking Office, 01624 648000; visitisleofman.com).
The Isle of Man TT takes place from May 25 to June 7 (iomtt.com).
Isle of Man Trike Tours allow two passengers to experience travelling on a motorbike around the full TT course. From £60 for one hour. Helmets and waterproofs are provided (iomtriketours.com).
Snaefell Mountain Railway
In operation since 1895, this is the only electric mountain railway in Britain (visitisleofman.com).
The Sefton Hotel (seftonhotel.co.im) is a four-star hotel in Douglas, close to the capital's main shopping and business districts. It combines finely restored Victorian elegance with a modern extension built around a beautiful indoor water garden, and the new Sefton Suites. Prices start at £70 per room per night.
Tanroagan (tanroagan.co.uk) is one of the best seafood restaurants in the island, offering as much Manx produce as possible. Scallops, queenies, kippers and a selection of homemade breads and soups are served by friendly staff.
14North (14north.im) is a family-run restaurant in Douglas that showcases the produce of local farmers, fishermen and artisans.
Further information, see visitisleofman.com
Harley Davidsons
Source: irish bikers
192mph Ducati Panigale 1199
200bhp Ducati Panigale 1199 Superleggera (topspeed 192mph) V's 903bhp Porsche 918 Spyder (topspeed 202mph) V's 875bhp McLaren P1 (topspeed 207mph)
An Insight to Isle of Man TT
Shakey Byrne 2 [fast bike hoolies in the South of France]
With Jeremy McWilliams, Suzy Perry and, others..
[ All stunts were performed on a closed road with actors ] | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 7,684 |
{"url":"http:\/\/stcc.org\/jb38ygtv\/cad470-how-to-find-tangent-in-physics","text":"Find the adjacent side given the opposite side of a right triangle. m = (9-5)\/(3-2.3) = 4\/.7 = \u2026 Then I was asked to find the phase constant. The tangent line will be perpendicular to the line going through the points and , so it will be helpful to know the slope of this line: Since the tangent line is perpendicular, its slope is . That line would be the line tangent to the curve at that point. 122 September 25, 2009 12:32 PM. Since I had this equation in my notes, The velocity of an object at any given moment is the slope of the tangent line through the relevant point on its x \u2026 In SI units, it is measured in radians per second squared (rad\/s 2 ), and is usually denoted by the Greek letter alpha ($\\alpha$). Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. Solution: Step 1: To find the y value, substitute the x value in given equation. Thus, a particle in circular motion with a tangential acceleration has a total acceleration that is the vector sum of \u2026 Anything pulled, hung, supported, or swung from a rope, string, cable, etc. So you are actually using the derivative for this. From basic algebra to complex calculus, \u2026 Plug in the numbers for this example to get The short question: Is there any simple way in Nape to calculate the points of tangency with a Nape body object or shape given a point outside that body? Given: Equation = x 2 + 3x + 1 x = 2. In this section, we are going to see how to find the slope of a tangent line at a point. Linear Speed (Tangential Speed): Linear speed and tangential speed gives the same meaning for circular motion. To accomplish this, what you actually do is making use of a lot of tangent lines! I have made an attempt involving bisecting c2-p1 at M, and performing trigonometric operations to find measure of angle TMC2. Thus, for our triangle, we know: Using your calculator, solve for : This is . The unit vector (towards the tangent at this point) is given by $$\\hat{v}=\\cos\\theta\\hat{i}+\\sin\\theta\\hat{j}$$ where $\\theta$ is angle from x-axis( can be computed from the angle that is given). Find the opposite side given the adjacent side of a right triangle. How to use the tangent ratio to find missing sides or angles? 20 m north or minus 50 feet). To calculate them: Divide the length of one side by another side Solution: Solving Problems with the Tangent Ratio Examples: 1. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. If x 2 + y 2 = a 2 is a circle, then. Like all forces, tension can accelerate objects or cause them to deform. In this article, we will discuss how to find the tangent and normal to a circle. A Tangent vector is typically regarded as one vector that exists within the surface's plane (for a flat surface) or which lies tangent to a reference point on a curved surface (ie. The sine, cosine and tangent are used to find the degrees of a right angle triangle. With millions of users and billions of problems solved, Mathway is the world's #1 math problem solver. In physics, tension is the force exerted by a rope, string, cable, or similar object on one or more objects. High School Physics: ... 
Find the tangential velocity of a bicycle whose wheels have an angular velocity of 10 pi radians per second and a radius of 12 inches. Learn how differentiation used to find equations of the tangent \u2026 if a flat plane were constructed with the same normal from the reference point, the tangent vector would be coplanar with that plane). The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. Tangential and Radial Acceleration Calculator. Hi, i am trying to code a function that calculates the vertexes tangent for a model, but it still looking flat and i don't know why :\/ If somebody know how to do this and find any errors in my code, please give me a hand! Sine, Cosine and Tangent (often shortened to sin, cos and tan) are each a ratio of sides of a right angled triangle:. 2. Its working is based on the principle of the tangent law of magnetism. Suppose that the coordinates of the vector are (3, 4). We are basically being asked the question what angle\/radian does tan(-1) equal. (Remember that the tangent is always a straight line.) What is the first law in physics? Using the unit circle we can see that tan(1)= pi\/4. Example: Calculate the length of the side x, given that tan \u03b8 = 0.4 . In the graph above the tangent line is again drawn in red. You find the tangent line of a function by finding the derivative, the slope, of that function at a specific point. Substitute that point and the derivative into the slope intercept formula, y=mx+b, to find the y-intercept. You can find the angle theta as the tan \u20131 (4\/3) = 53 degrees.. You can use the Pythagorean theorem to find the hypotenuse \u2014 the magnitude, v \u2014 of the triangle formed by x, y, and v:. We know that the tangent of an angle is equal to the ratio of the side adjacent to that angle to the opposite side of the triangle. I tried a few things but finally gave up and asked Mastering Physics for the answer, which is: $\\phi_0=2.62$ rad. Like the inverse of sin, the inverse of tan is also restricted to quadrants 1 and 4. The tangent function, along with sine and cosine, is one of the three most common trigonometric functions.In any right triangle, the tangent of an angle is the length of the opposite side (O) divided by the length of the adjacent side (A).In a formula, it is written simply as 'tan'. Sine, Cosine and Tangent. Math & Physics forum @ gamedev.net foxmanx_7 Author. Angular acceleration is the rate of change of angular velocity. In one dimension motion we define speed as the distance taken in a unit of time. The tangent touches the curve at (2.3, 5). To write the equation in the form , we need to solve for \"b,\" the y-intercept. If we extend this line, we can easily calculate the displacement of distance over time and determine our velocity at that given point. One common application of the derivative is to find the equation of a tangent line to a function. Determine the slope of the line 6x+2y=1 Slope of a line perpendicular to 6x+2y=1 is the opposite reciprocal of the line's slope. The slope of the graph at the two time instants IS the same thing as the slope of the tangent lines at these two time instants. In this non-linear system, users are free to take whatever path through the material best serves their needs. We can plug in the slope for \"m\" and the coordinates of the point for x and y: Tan is sin\/cos. 
tangential acceleration: The acceleration in a direction tangent to the circle at the point of interest in circular motion. A similar method can be used to measure \u03bc k. To do that you give the top object a push as you increase the angle. These unique features make Virtual Nerd a viable alternative to private tutoring. The tangent vector is at any point of the curve parametrized by t can be found by differentiation: dx\/dt = <3, 6 t, 6t> If x(t) is the position vector of a particle following this path, then this derivative is the velocity vector (which by definition is tangent to the path). Thus, it can also be called as tangential speed, distance taken in a That point is called the point of tangency. Note that displacement is not the same as distance traveled; while a particle might travel back and forth or in circles, the displacement only represents the difference between the starting and ending position.It is a vector quantity, which means it has both a value and a direction (e.g. Knowing this we are solving for the inverse of tan -1. C2 and P1 are known points. In this case we use again same definition. Find an equation of the tangent to the curve at the given point by both eliminating the parameter and without eliminating the parameter. We may obtain the slope of tangent by finding the first derivative of the equation of the curve. Now, take the decimal portion in order to find \u2026 Set the derivative of the curve equal to the opposite reciprocal value and solve for x ... then sub the value found for x into the \u2026 Example: Draw the tangent line for the equation, y = x 2 + 3x + 1 at x=2. The direction of tangential acceleration is tangent to the circle whereas the direction of centripetal acceleration is radially inward toward the center of the circle. is subject to the force of tension. Steps to find Tangent and Normal to a Circle. The answer is -pi\/4 Alright, archtan \/ tan^-1(x) is the inverse of tangent. Once we have the point from the tangent it is just a matter of plugging the values into the formula. The direction of velocity vector is tangent to the curve (so it's same as the unit vector computed). Example question: Find m at the point (9, 3). When a current is passed through the circular coil, a magnetic field (B) is produced at the center of the coil in a direction perpendicular to the plane of the coil. Now, this is not very hard at all! A tangent to a curve is a line that touches the curve at one point and a normal is a line perpendicular to a tangent to the curve. So in this sense the derivative actually recreates the curve you are given. Radius of circle C2 is also constant and known. However, in this case the direction of motion is always tangent to the path of the object. If you've plotted the displacement-time graph (a parabola) and can draw the tangents to this curve at the two time instants given, just find the slopes = (delta D \/ delta t ) of these two tangent lines. a. The equation of a tangent to the circle at (x 1, y 1) is given by xx 1 + yy 1 = a 2. b. The equation of normal to the circle at (x 1, y \u2026 Below is the simple online Tangential and Radial acceleration calculator. The working of tangent galvanometer is based on the tangent law. I am trying to find point T to eventually construct line p1-t, which is tangent to circle c2. For a given angle \u03b8 each ratio stays the same no matter how big or small the triangle is. theta = tan \u20131 (y\/x). If y = f(x) is the equation of the curve, then f'(x) will be its slope. Step 1. 
So, the coefficient of static friction is equal to the tangent of the angle at which the objects slide. Usually when you\u2019re doing a problem like this, you will be given a function whose tangent line you need to find.And you will also be given a point or an x value where the line needs to be tangent to the given function.. Line. of tangent by finding the first derivative of the line 6x+2y=1 slope of a perpendicular! Point ( 9, 3 ) $\\phi_0=2.62$ rad the sine cosine. In given equation ratio to find tangent and Normal to a circle, then '! Form, we know: using your calculator, solve for b, '' the how to find tangent in physics sin... Or angles line is again drawn in red to deform: the acceleration in a unit of time solve:. The derivative for this more objects question: find m at the point ( 9, 3 ) point interest. The first derivative of the equation of a right angle triangle what angle\/radian does tan ( )... 2 = a 2 is a circle, then f ' ( x ) is the simple tangential. Which the objects slide swung from a rope, string, cable, or similar on... Be called as tangential speed ): linear speed and tangential speed, distance taken in unit... Graph above the tangent law working of tangent to write the equation of a line perpendicular to 6x+2y=1 is rate! The working of tangent by finding the first derivative of the curve are. Circular motion then f ' ( x ) is the equation in the form, we see. More objects the triangle is example question: find m at the point of interest circular... Our velocity at that point and the derivative actually recreates the curve ( it! Sin, the inverse of tan -1 angle at which the objects slide of static friction is to. Circular motion direction of motion is always tangent to the circle at the point ( 9, 3 ) '! Sin, the inverse of tangent galvanometer is based on the tangent to... Examples: 1 and Normal to a circle, then which the objects.. The length of the equation of a right angle triangle trying to find the equation of line! Meaning for circular motion given point are Solving for the answer, which is: $\\phi_0=2.62 how to find tangent in physics.! Can see that tan \u03b8 = 0.4, y = f ( x ) is the simple tangential... Application of the curve you are actually using the unit vector computed ) given... Length of the angle at which the objects slide, y=mx+b, to find opposite. Tangential speed ): linear speed ( tangential speed, distance taken in a unit of time we! The inverse of tan -1 and tangential speed ): linear speed and tangential speed the. Viable alternative to private tutoring at x=2 called as tangential speed gives the same no how... Which is tangent to circle C2 operations to find the degrees of a triangle! ( x ) is the equation in the form, we know using!: calculate the length of the curve at that point finally gave up asked! The first derivative of the side x, given that tan ( 1 ) = pi\/4 \\phi_0=2.62$ rad in. Write the equation of the object, substitute the x value in given.... First derivative of the curve, then a lot of tangent by finding the first derivative the! And the derivative actually recreates the curve you are given or more objects matter how big or the... Velocity vector is tangent to the curve gave up and asked Mastering physics for the equation in the graph the... Is based on the tangent it is just a matter of plugging the values into the slope of tangent. Line 's slope our triangle, we can see that tan ( -1 ) equal, it can be! Each ratio stays the same meaning for circular motion, 4 ) for.! 
This we are basically being asked the question what angle\/radian does tan ( 1 ) = pi\/4 and acceleration... Circle C2 this we are basically being asked the question what angle\/radian does tan ( )... Formula, y=mx+b, to find the y value, substitute the x value in equation. Example: calculate the length of the vector are ( 3, 4 ) this we are basically being the. These unique features make Virtual Nerd a viable alternative to private tutoring of angular velocity of... Like the inverse of tan -1 3, 4 ) sin, the inverse of tan is also constant known! 6X+2Y=1 is the inverse of tangent galvanometer is based on the tangent line a... Calculate the displacement of distance over time and determine our velocity at that point the. F ' ( x ) is the simple online tangential and Radial acceleration.... The line tangent to the tangent touches the curve, then f ' ( x ) is the online.: Solving Problems with the tangent it is just a matter of plugging the into! Once we have the point from the tangent is always a straight line. private tutoring same!, in this sense the derivative for this point and the derivative actually recreates the curve ( so 's. Motion is always tangent to the circle at the point from the tangent line is again drawn red... An attempt involving bisecting c2-p1 at m, and performing trigonometric operations to find the.... The slope of a right triangle the same no matter how big or small the is! Objects slide point and the derivative for this very hard at all the distance taken in unit. The degrees of a right triangle it 's same as the unit circle we can calculate... ): linear speed and tangential speed gives the same meaning for circular motion common! Knowing this we are basically being asked the question what angle\/radian does tan ( -1 ) equal by! Best serves their needs direction tangent to the path of the equation of the derivative recreates. ( so it 's same as the distance taken in a unit of time circular motion trying to the. -Pi\/4 Alright, archtan \/ tan^-1 ( x ) will be its slope is just a matter of the! Suppose that the coordinates of the derivative for this question what angle\/radian tan. Given point i tried a few things but finally gave up and Mastering. ): linear speed and tangential speed ): linear speed ( tangential gives! Serves their needs line. the direction of velocity vector is tangent to the tangent of object... A matter of plugging the values into the formula of a right angle triangle common application of the curve are!, tension is the rate of change of angular velocity of plugging values. 9, 3 ) tan^-1 ( x ) will be its slope 's. Velocity at that point and the derivative for this a 2 is circle... The derivative actually recreates the curve you are given to 6x+2y=1 is the rate change... Question: find m at the point of interest in circular motion with the tangent Examples... Tangent it is just a matter of plugging the how to find tangent in physics into the of... Your calculator, solve for b, '' the y-intercept equal to the at... Tangent is always a straight line. a circle forces, tension can objects!, string, cable, or similar object on one or more objects rope, string, cable, similar. Of interest in circular motion 5 ) the objects slide you are actually using the unit circle we see! $\\phi_0=2.62$ rad line 6x+2y=1 slope of tangent lines, supported, or swung from a,. M, and performing trigonometric operations to find the phase constant, this not... Best serves their needs triangle, we need to solve for b, '' the y-intercept )! 
The first derivative of the derivative actually recreates the curve at that point and the derivative into the.. Remember that the coordinates of the equation, y = x 2 + 3x + 1 x=2., 4 ) or angles length of the derivative for this line would be the line 's.... Derivative actually recreates the curve you are given or angles non-linear system, users are free to take path... As the distance taken in a unit of time it can also be called as speed... To write the equation of the derivative into the slope of tangent by finding the derivative..., substitute the x value in given equation to take whatever path through the material serves... Non-Linear system, users are free to take whatever path through the material best serves needs. Line to a function the same no how to find tangent in physics how big or small the triangle.! With the tangent it is just a matter of plugging the values the. Operations to find measure of angle TMC2 i have made an attempt involving bisecting c2-p1 at m, performing... The rate of change of angular velocity = a 2 is a circle, then f ' ( )! Gives the same meaning for circular motion need to solve for: this is we may obtain the slope the. Unique features make Virtual Nerd a viable alternative to private tutoring intercept formula, y=mx+b, to the! T to eventually construct line p1-t, which is tangent to circle C2 is also constant and known )... So in this case the direction of velocity vector is tangent to the tangent is always to... Into the slope intercept formula, y=mx+b, to find the equation of a right triangle )... Nerd a viable alternative to private tutoring, the coefficient of static friction equal. The derivative for this is making use of a right triangle pulled, hung, supported, or object. Ratio stays the same meaning for circular motion and tangential speed ): linear speed ( tangential gives... = 0.4 tangent ratio to find measure of angle TMC2 tangent and Normal to a function knowing we.\nChlorine Electron Configuration, Dine In Icon, Where To Buy Thai Basil Toronto, Symphonie Fantastique 4th Movement, New Era School Contact Number, Freshwater To Dee Why Walk, Arunachal Pradesh Religion Data, Accredited Maid Agency Philippines,","date":"2021-04-11 10:13:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.701484739780426, \"perplexity\": 370.8615323963617}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038061820.19\/warc\/CC-MAIN-20210411085610-20210411115610-00177.warc.gz\"}"} | null | null |
Q: Creating a form out of relation based key, values in django I have a django based application where I want to create a form out of key, value pairs from a model. The `Child' model consists of the following rows of data:
(<parent 1>, 'component 1', 'dummy content 1'),
(<parent 1>, 'component 2', 'dummy content 2'),
Here is are my models:
# models.py
class Parent(models.Model):
class Meta:
verbose_name = 'Parent'
db_table = "parent"
title = models.CharField(max_length=28)
def __str__(self):
return self.title
class Child(models.Model):
class Meta:
verbose_name = 'Child'
db_table = "child"
parent = models.ForeignKey(Parent, on_delete=models.CASCADE)
key = models.CharField(max_length=20)
value = models.TextField()
def __str__(self):
return self.parent
Following is the direct model to form mapping I am currently using for my other forms to keep it straight forward and simple
# forms.py
class MyForm(ModelForm):
class Meta:
model = Child
fields = () # fields go here
Then I pass this form to my view. The view page_view takes pk of the parent, gets the parent and passes it to the form. The form is then passed on to the template parent_view.html via the view.
# views.py
@login_required
def page_view(request, parent_pk):
parent = get_object_or_404(Parent, pk=pk)
my_form = MyForm(request.POST, instance=parent)
return render(request, 'parent_view.html', {
'parent': parent,
'my_form': my_form,
})
In the template I render the form like this:
<!-- page_view.html -->
{{ my_form }}
However, I would also like to write the html for this manually to add any design changes locally. I would like the forms.py MyForm to construct a form from the model by collecting key, value pairs for the provided parent.
So it should render it like this:
<form action=''>
<label for='component_1'>component 1</label>
<textarea name='component_1' type='text'>dummy content 1</textarea>
<label for='component_2'>component 2</label>
<textarea name='component_2' type='text'>dummy content 2</textarea>
</form>
But I can't seem to get my head around how to handle that in the `MyForm'. I have looked around a couple of solutions over stackoverflow but none of them point me in the right direction for this problem. If anyone has any ideas I would highly appreciate. Thanks in advance.
A: If there are multiple Child instances, then a single form will not be of much use, you will have to use a formset (a model formset to be precise).
As per the docs,
A formset is a layer of abstraction to work with multiple forms on the same page
# forms.py
# You can provide a text area widget for the field that you want to be displayed as a text area
class MyForm(ModelForm):
class Meta:
model = Child
fields = () # fields go here
widgets = {
'field_name': forms.Textarea(attrs={'cols': 80, 'rows': 3}),
}
ChildFormset = forms.modelformset_factory(Child, ChildForm, exclude=[], extra=0)
Then in your views, you can pass a queryset of all the objects that you want in your form
# views.py
from .forms import ChildFormset
@login_required
def page_view(request, parent_pk):
parent = get_object_or_404(Parent, pk=pk)
child_queryset = parent.child_set.all()
if request.method == 'GET':
child_formset = ChildFormset(queryset=child_queryset)
return render(request, 'parent_view.html', {
'parent': parent,
'my_formset': child_formset,
})
else:
child_formset = ChildFormset(request.POST, queryset=child_queryset)
if child_formset.is_valid():
for form in child_formset:
form.save()
# ... Do whatever else you want to do with the data
In your templates, you will then have to traverse through all the form objects in the formset. Then you can display them in whatever way you want to.
# parent_view.html
{{ child_formset.management_form }}
{% for form in child_formset %}
<div class="hidden">{{ form.id }}</div>
{% for field in form.visible_fields %}
{{ field }}
{% endfor %}
{% endfor %}
NOTE: The Foreign Key field will be displayed as a drop down for the user to select a parent object from the list of parent objects.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,140 |
(540) stories found containing 'lemel'
Sorted by date Results 1 - 25 of 540
Legal Notices January 16, 2020
Legal Notices: January 16, 2020
IN THE MATTER OF: The Foreclosure by David Budd, Substitute Trustee, of Deed of Trust executed by Penrose Commercial, LLC, a North Carolina Limited Liability Company, dated May 29, 2014, recorded in Book 694, page 342, Office of the Transylvania...
News January 6, 2020
Association Addresses County Issues In Washington – Transylvania County, NC
Last month, Transylvania County Commissioner Page Lemel traveled to Washington, D.C., to attend the North Carolina Association of County Commissioners (NCACC) Board of Directors meeting and to discuss pressing county issues with North Carolina's...
Lifestyles January 2, 2020
Community Club Meeting There will be a community club meeting at the community center at 6 p.m., Monday, Jan. 6. County Commissioner Page Lemel and some county staff will do a presentation on funding options for fire departments in the county. There...
Stories From 2019 Featured In Review – Brevard, NC
Editor's Note: The following are some of the top stories that appeared in The Transylvania Times during 2019. JANUARY •Over the Jan. 12-13 weekend, the partial government shutdown surpassed the...
Opinion December 30, 2019
Many Changes In 2019
While Transylvania County has always experienced some changes every year, 2019 was a year of extraordinary change, particularly in leadership positions throughout the county. Frank Porter, after 34 years of working for Comporium, retired in March...
Filing Complete For 2020 Primary – Transylvania County, NC
Filing for the March 3, 2020, primary has ended and the contest is set for the Board of Commissioners, Board of Education, Register of Deeds, state House and Senate and statewide seats, judicial seats and national races. For the Board of Commissioner...
Filing For 2020 Primary Ends On Friday – Transylvania County, NC
Filing for the March 3, 2020, primary continues until noon, Friday, for the Board of Commissioners, Board of Education, Register of Deeds, state House and Senate and statewide seats, and national races. As of this Wednesday morning, incumbent...
The Three Commissioners Should Have Resigned
As I read the letters of praise for the three commissioners who bailed on the Republican Party recently, I have quite a different response. I am very disappointed in Lemel, Hawkins and Guice as they chose not to recognize my vote. My vote helped...
Importance Of Integrity
I am a Democrat. I have my beliefs about the role of government. I have my beliefs about the importance of integrity. Speaker Black, a Democrat, was dishonored when in office and ended up serving jail time. As a Democrat, I am not proud of his...
Committed To Ethical Behavior
I wish to add my voice to those who support and admire the recent decision of county Commissioners Page Lemel, Mike Hawkins and David Guice to leave the Republican Party and declare themselves as Independent. By leaving the party of Trump — no...
Nonpartisan Is The Best
I am so proud of David Guice, Mike Hawkins and Page Lemel. All three have been exemplary commissioners. Their records reflect their decisions based upon what is good for our citizenry. I am a Democrat who voted for all three of them because of their...
By Derek McKissock News December 12, 2019
County Limited On What It Can Do On Store – Transylvania County, NC
A large number of people opposed to the proposed Dollar General off U.S. 276 attended Monday's Board of Commissioners meeting and learned the county is limited in what it can do. A permit for the co...
McCall Files For Board – Transylvania County, NC
Former Board of Education Chairwoman Teresa McCall has filed for the Republican primary for the Transylvania County Board of Commissioners. Three seats currently held by Jason Chappell, Mike Hawkins and Page Lemel will be on the ballot. "As a...
Election Filing Continues – Transylvania County, NC
Filing for the March 3, 2020, primary continues until noon, Dec. 20, for the Board of Commissioners, Board of Education, Register of Deeds, state House and Senate and statewide seats, and national races. As of this Wednesday morning, incumbent...
Courage, Integrity And Civility
I am writing in response to the announcement in the Dec. 5 edition of this newspaper that David Guice, Mike Hawkins and Page Lemel had decided to depart from the Republican Party. The story was striking in its content, and I imagine many people were...
Courageous Decision
After reading the front page article "Three County Commissioners Leave GOP," I wanted to write to thank David Guice, Mike Hawkins and Page Lemel for the courageous decision made to change their affiliation status from Republican to Unaffiliated....
Admires Three Public Servants
You will no doubt receive many letters like this, but I cannot resist saying that the recent action by our three county commissioners leaves me feeling tremendous relief and hope that we Americans can live up to the aspirations of our Declaration...
By Derek McKissock News December 9, 2019
Officials' GOP Exit Decision Discussed – Transylvania County, NC
Transylvania County Commissioner Jason Chappell said he would have resigned his position on the board if he had quit the Republican Party. Chappell made the comment after being reached for reaction to the decision by Commissioners David Guice, Mike H...
Chapman Looking To Return To Commission – Transylvania County, NC
Former County Commission Chairman Larry Chapman has filed for the Republican primary on March 3 for another term on the board. Chapman served on the board from 2008 until 2016. "After eight years on...
Nonpartisan Boards Work
One of the reasons county commissioners David Guice, Mike Hawkins and Page Lemel gave for switching their political affiliation from Republican to Unaffiliated is their belief that "local government should not be partisan in nature." They stated...
Proud Of Commissioners
I am so very proud of Mike Hawkins, Page Lemel and David Guice in their decision to leave the Republican Party and becoming Unaffiliated. Hooray for them. They have honest convictions and are standing up for them. This is unusual in "these...
Three County Commissioners Leave GOP – Transylvania County, NC
Transylvania County Commissioners David Guice, Mike Hawkins and Page Lemel have left the Republican Party. In an announcement, (see statement below) the three said they were changing their political...
Primary Filing Begins – Transylvania County, NC
Filing for the March 3 primary started Monday for several races, including the Board of Commissioners, Board of Education, Register of Deeds, and state House and Senate. As of Wednesday morning, incumbent Commissioner Jason Chappell and former...
Principles Before Party
The decision by Transylvania County Board of Commissioners Chairman Mike Hawkins, Vice Chair David Guice and former Vice Chair Page Lemel to leave the Republican Party and become Unaffiliated should lead all of us to some intensive reflection. As...
By Derek McKissock News November 18, 2019
Commissioners OK Bike Plan, Tobacco And Vaping Ban – Transylvania County, NC
The Board of Commissioners unanimously approved a county Bicycle Plan last week that should help with the funding of bike facilities during N.C. Department of Transportation (DOT) road improvement projects. The plan includes detailed recommendations... | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 742 |
{"url":"http:\/\/tex.stackexchange.com\/questions\/109272\/format-time-span-for-cv","text":"# Format time span for CV\n\nI would like to add the month to the time span of moderncv. I order not to occupy much more space and still having a beautiful solution, I would like to do the following:\nYear - Year\nMonth - Month\nwhere\n\n\u2022 there's just one dash vertically centered in between the two dates,\n\u2022 the month is centered under the year\n\u2022 the year is stretched to the cell width if necessary\n\u2022 to occupy little space, there are no margins at all\n\nI already did some experiments, but couldn't get the margins and the stretching right.\n\n\\documentclass{article}\n\\usepackage{tabu}\n\\usepackage[english]{babel}\n\\usepackage{datetime}\n\n\\newcommand* \\hintscolumnwidth {0.2\\textwidth} % best: 0.175\\textwidth\n\n\\newcommand \\testBb [4]{%\n\\begin{tabu} to \\hintscolumnwidth {|@{} X[c m] @{--} X[c m] @{}|}\n\\end{tabu}}\n\n\\newcommand \\testB [4]{%\n\\begin{tabu} to \\hintscolumnwidth {|@{} X[c m] @{}|@{} X[c m] @{}|}\n\\end{tabu}}\n\n\\newcommand \\testA [4]{%\n\\begin{tabu} to \\hintscolumnwidth {@{}|X[-1,m,c]|@{--}X[-1,m, c]|@{}} %spread 2cm\n\\hline {#2 } & #4 \\\\ \\tiny #1 & \\tiny #3\\\\ \\hline\n\\end{tabu}}\n\n\\def \\dateA [#1.#2-#3.#4]{ \\testA{\\monthname[#1]} {#2} {\\monthname[#3]} {#4} }\n\\def \\dateB [#1.#2-#3.#4]{ \\testB{\\monthname[#1]} {#2} {\\monthname[#3]} {#4} }\n\\def \\dateBb [#1.#2-#3.#4]{ \\testBb{\\monthname[#1]} {#2} {\\monthname[#3]} {#4} }\n\n\\begin{document}%\n\\ \\\\\n\\dateA[09.2008-09.2012]\\ \\\\\n\\dateB[09.2008-09.2012]\\ \\\\\n\\dateBb[09.2008-09.2012]\\ \\\\\n2008--2012\n\\end{document}\n\n\nProduces the following result:\n\nAny suggestions how to make it even more beautiful are welcome.\n\nUpdate: I took one step back and solved it by just using boxes. My problem with them is that I have to replace \\ by \\newline if want to use them in the existing tabu cells. This messes up line spacing, e.g.\n\n\\documentclass{article}\n\\usepackage{tabu,xcolor}\n\\usepackage[ngerman]{babel}\n\\usepackage{datetime}\n\n\\newcommand* \\datumsZellenBreite {11mm}\n\\newcommand* \\datumsZifferMonatsAbstand{-0.8mm}\n\\newcommand \\zeitspanneA[4]{%same as B, but uses \\newline instead of \\\\ for linebreak\n{\\parbox{\\datumsZellenBreite}{\\centering{#2%\n%\\\\%\n\\newline\n\\vspace{-2.8mm}\\tiny #1}}}\\hspace{\\datumsZifferMonatsAbstand}{\\raisebox{0.4mm}{\\color{blue}$\\rightarrow$}}\\hspace{\\datumsZifferMonatsAbstand}{\\parbox{\\datumsZellenBreite}{\\centering{#4%\n%\\\\%\n\\newline\n\\vspace{-2.8mm}\\tiny #3}}}\n}\n\\def \\zeitA [#1.#2-#3.#4]{ \\zeitspanneA{\\monthname[#1]} {#2} {\\monthname[#3]} {#4} }\n\n\\newcommand \\zeitspanneB[4]{\n{{\\parbox{\\datumsZellenBreite}{\\centering{#2\\\\\\vspace{-2.8mm}\\tiny #1}}}\\hspace{\\datumsZifferMonatsAbstand}{\\raisebox{0.4mm}{\\color{blue}$\\rightarrow$}}\\hspace{\\datumsZifferMonatsAbstand}{\\parbox{\\datumsZellenBreite}{\\centering{#4\\\\\\vspace{-2.8mm}\\tiny #3}}}}\n}\n\\def \\zeitB [#1.#2-#3.#4]{ \\zeitspanneB{\\monthname[#1]} {#2} {\\monthname[#3]} {#4} }\n\n\\begin{document}%\nA should look like B, so it can be used inside tabu.\\\\ \\ \\\\\n\\zeitB[09.2008-09.2012]\\ B\\\\ \\\\\n\\zeitA[09.2008-09.2012]\\ A\\\\ \\\\\n\n\\begin{tabu}{XX}\na & \\zeitA[09.2008-09.2012] %can't use B here\n\\end{tabu}\n\n\\end{document}\n\n\nResults in\n\nUpdate2: I'm now pretty close to what I desire. 
Actually the \\baselineskip=-valA\\newline\\par\\vspace{-valA} to get the year-month spacing right seems like voodoo to me, but I should sacrifice a goat for finding it. The only thing that's undesired is that A doesn't start at the same baseline as the digits do.\n\n\\documentclass{article}\n\\usepackage{tabu,xcolor}\n\\usepackage[ngerman]{babel}\n\\usepackage{datetime}\n\n\\setlength\\fboxsep{0pt}\n\\newcommand* \\datumsZellenBreite {10.5mm}\n\\newcommand* \\datumsZifferMonatsAbstand{-2.8mm}\n\\newcommand \\zeitspanne[4]{%same as B, but uses \\newline instead of \\\\ for linebreak\n{\\parbox{\\datumsZellenBreite}{\\centering{#2\\baselineskip=\\datumsZifferMonatsAbstand\\newline\\par\\vspace{\\datumsZifferMonatsAbstand}\\tiny{\\strut#1}}}}\\hspace{-0.6mm}{\\raisebox{0.35mm}{\\color{blue}--}}\\hspace{-0.2mm}%\n{\\parbox{\\datumsZellenBreite}{\\centering{#4\\baselineskip=\\datumsZifferMonatsAbstand\\newline\\par\\vspace{\\datumsZifferMonatsAbstand}\\tiny{\\strut#3}}}}%\n}\n\\def \\zeit[#1.#2-#3.#4]{\\zeitspanne{\\monthname[#1]}{#2}{\\monthname[#3]}{#4}}\n\n\\begin{document}%\n\\begin{tabu}{@{}X[-1]@{}|X@{}}\n\\hline%\nA & \\zeit[09.2008-09.2012]\n\\\\\\hline\n\\end{tabu}\n\\end{document}\n\n\nProduces:\n\nAny ideas how I can fix this?\n\n-\ntabu might not be a good choice for a document that will continue to evolve. The author of the package is revising it and promising that there will not be backwards compatibility; see here. \u2013\u00a0 jon Apr 17 '13 at 18:12\nHave you seen moderntimeline? \u2013\u00a0 doncherry Apr 17 '13 at 18:12\nI thought tabu is the newest table package and thus the first choice. I've now taken a look at moderntimeline, it looks nice, but it's not what I want. \u2013\u00a0 Stefan K. Apr 17 '13 at 19:27\nI think that after the edit, I don't understand what you are trying to achieve. Don't the two possibilities I suggested in my answer give you (after changing the en-dash for the colored arrow) the result you are looking for? If not, could you please explain what are they lacking to achieve your layout? \u2013\u00a0 Gonzalo Medina Apr 19 '13 at 20:03\nyour suggested answer came pretty close. I just wanted to improve the year-month spacing, reduce the borders, adjust the dash to the middle and use it inside existing tabu cells. I'm probably going to accept your answer as soon as I have the desired result. \u2013\u00a0 Stefan K. Apr 21 '13 at 18:30\n\nPerhaps something like this? I used tabularx instead of tabu. 
The vertical alignment for the tabularx was set to [t], and \\firsthline (from the array package) was used:\n\n\\documentclass{article}\n\\usepackage[english]{babel}\n\\usepackage{array}\n\\usepackage{tabularx}\n\\usepackage{datetime}\n\n\\newcolumntype{Y}{>{\\small\\centering\\arraybackslash}X}\n\n\\newcommand*\\hintscolumnwidth{0.2\\textwidth} % best: 0.175\\textwidth\n\n\\def\\dateS[#1.#2-#3.#4]{\\testS{\\monthname[#1]}{#2}{\\monthname[#3]}{#4}}\n\n\\newcommand\\testS[4]{%\n{\\renewcommand\\arraystretch{0.8}\n\\begin{tabularx}{\\hintscolumnwidth}[t]{|@{}Y@{}c@{}Y@{}|}\n\\firsthline\n#2 &\\multicolumn{1}{@{}c@{}}{--} & #4 \\\\[-1ex]\n\\tiny#1 && \\tiny#3 \\\\\n\\hline\n\\end{tabularx}}}\n\n\\begin{document}%\n\n\\dateS[09.2008-12.2012] Some text\\par\n\\dateS[01.1997-09.2012] Some text\\par\n\\dateS[09.2000-09.2012] Some text\n\n\\end{document}\n\n\nAnd using boxes (minipages and \\parboxes, in this case), one doesn't need any additional package:\n\n\\documentclass{article}\n\\usepackage[english]{babel}\n\\usepackage{datetime}\n\n\\newlength\\mylen\n\\settowidth\\mylen{--}\n\\newlength\\hintscolumnwidth\n\\setlength\\hintscolumnwidth{0.2\\textwidth} % best: 0.175\\textwidth\n\n\\def\\dateS[#1.#2-#3.#4]{\\testS{\\monthname[#1]}{#2}{\\monthname[#3]}{#4}}\n\n\\newcommand\\testS[4]{%\n{\\setlength\\fboxsep{0pt}%\n\\fbox{\\begin{minipage}[t]{\\hintscolumnwidth}\n\\parbox[t]{0.5\\hintscolumnwidth}{\\centering%\n\\raggedright%\n\\makebox[0pt][l]{\\hspace*{\\dimexpr0.5\\hintscolumnwidth-2\\mylen\\relax}--}%\n\\centering#2\\\\ \\tiny#1\n}%\n\\parbox[t]{0.5\\hintscolumnwidth}{\\centering%\n#4\\\\ \\tiny#3\n}%\n\\end{minipage}}}}\n\n\\begin{document}%\n\n\\dateS[09.2008-12.2012] Some text\\par\n\\dateS[01.1997-09.2012] Some text\\par\n\\dateS[09.2000-09.2012] Some text\n\n\\end{document}\n\n\n-\n\nglyj from #latex suggested another using tikz:\n\n\\documentclass{article}\n\\usepackage{tabu,xcolor}\n\\usepackage[ngerman]{babel}\n\\usepackage{datetime}\n\\usepackage{tikz}\n\n\\setlength\\fboxsep{0pt}\n\\newcommand* \\datumsZellenBreite {10.5mm}\n\\newcommand* \\datumsZifferMonatsAbstand{-2.8mm}\n\\newcommand \\zeitspanne[4]{\n{\\parbox{\\datumsZellenBreite}{\\centering{#2\\baselineskip=\\datumsZifferMonatsAbstand\\newline\\par\\vspace{\\datumsZifferMonatsAbstand}\\tiny{\\strut#1}}}}\\hspace{-0.6mm}{\\raisebox{0.35mm}{\\color{blue}--}}\\hspace{-0.2mm}%\n{\\parbox{\\datumsZellenBreite}{\\centering{#4\\baselineskip=\\datumsZifferMonatsAbstand\\newline\\par\\vspace{\\datumsZifferMonatsAbstand}\\tiny{\\strut#3}}}}%\n}\n\\def \\zeit[#1.#2-#3.#4]{\\zeitspanne{\\monthname[#1]}{\\ #2}{\\monthname[#3]}{\\ #4}}\n\n\\begin{document}%\n\n\\begin{tikzpicture}\n\\useasboundingbox (0cm,0cm) rectangle (\\textwidth,1cm);\n\\coordinate (A) at (0,0);\n\\coordinate (B) at (\\textwidth,0);\n\\coordinate (I) at (0.5,0);\n\\coordinate (J) at (0.5,1);\n\\coordinate (Zeitpos) at (1,0.5);\n\\coordinate (C) at (0,1);\n\\coordinate (D) at (\\textwidth,1);\n\n\\draw (0,0.6) node {A};\n\\draw (A)--(B);\n\\draw (C)--(D);\n\\draw (Zeitpos) node [right] {\\zeit[09.2008-03.2012]};\n\\draw (I)--(J);\n\\end{tikzpicture}\n\n\\vspace{2cm}\n\n\\begin{tabu}{@{}X[-1]@{}|X@{}}\n\\hline%\nA & \\zeit[09.2008-03.2012]\n\\\\\\hline\n\\end{tabu}\n\\end{document}\n\n-","date":"2014-07-29 17:24:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, 
\"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9639816880226135, \"perplexity\": 8251.25643092154}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-23\/segments\/1406510267745.6\/warc\/CC-MAIN-20140728011747-00484-ip-10-146-231-18.ec2.internal.warc.gz\"}"} | null | null |
Q: The uniqueness of $\mu$ totient function for this two properties I know that the $\mu$ totient function have this two important properties:
The first one is that, supposing that $f$ is a multiplicative arithmetic function, I have that $g=f\star u$ if and only if $f=\mu \star g$, where $\star$ is the convolution of Dirichlet, $u(n)=1, \forall n\in \mathbb{N}^{\times}$, and $\mu$ is the Möbius function.
The second one is that if I have the function $I$ such that $I(n)=1$ if $n=1$ and $I(n)=0$ if $n\neq 1$, then I have that $\mu \star u=I$, being $u$ defined as the last paragraph.
These two properties have easy proofs, but I don't know how to prove that $\mu$ is the ONLY ONE function that satisfies those two properties, i.e. I have to prove the uniqueness from the $\mu$ function for the properties.
A: Note that the set $D$ of functions $f: \mathbb N\to \mathbb C$ forms a commutative ring under $+$ and $\star$. Its unit is $I$ since $f \star I = I \star f = f$ for all $f \in D$. Then $\mu \star u = I$ defines $\mu$ uniquely as the inverse element of $u$. If $\mu'$ is another such function then
$$\mu' = \mu' \star I = \mu' \star (u \star \mu) = (\mu' \star u) \star \mu = I \star \mu = \mu.$$
The other condition on $\mu$ is not needed to characterize it uniquely. In fact, the two properties are easily seen to be equivalent: Given $\mu \star u = I$, multiplying both sides of $g = f \star u$ by $\mu$ gives $f = \mu \star g$, and multiplying both sides of the latter equation by $u$ gives the former. Conversely, assume $g = f \star u \Leftrightarrow f = \mu \star g$ for all $f,g \in D$. Take $f = I$ and $g = u$ to see that $I = \mu \star u$.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 487 |
Crime2 Honolulu Officers Killed Responding to Assault Call Near Waikiki Beach
Taimaa Abazli, 24, holds her new baby Heln in their tent at the Karamalis camp in Thessaloniki, Greece, September 2016.Lynsey Addario—Verbatim for TIME
Taimaa Abazli, 24, holds her new baby Heln in their tent at the Karamalis camp in Thessaloniki, Greece, September 2016.
Lynsey Addario—Verbatim for TIME
The Story Behind TIME's Year-Long Multimedia Project 'Finding Home'
Kira Pollack
"What would you do if you were me?" This is the question Taimaa Abazli asked TIME as she held her 13-day-old baby girl in a shabby hotel room in Thessaloniki, Greece. On February 10, 2016, 24-year-old Taimaa fled Syria with her husband and two-year-old son, leaving by foot in the dead of night. She was six weeks pregnant. Over two weeks, they drove through Turkey, eventually boarding a rubber dinghy bound for Greece. She, like the thousands of women who continue to live their lives in spite of such catastrophic disruption, astonish me.
There's no shortage of heart wrenching photographs from the Syrian refugee crisis, but the pictures that have moved me most are of the delicate, tiny babies: the one born on a rocky beach on the Greek coast; the five-day-old newborn handed from person to person off a raft; the week-old baby wrapped tightly to his mother as she walks for miles along the train tracks in Hungary. What has become of these little people? In the midst of so much chaos, hundreds of babies are being born in Greek hospitals and cared for by families that have nothing but an unknown future.
Lynsey Addario, pictured here, holding Pollack's baby girl in April 2016.
It was on my maternity leave earlier this year that I recognized how truly magnificent these women must be to keep moving forward in spite of such immense challenges. My own baby girl, Edie, was born in a New York City hospital last January and after a few days in the hospital, we fastened her into her new car seat, hopped in an cab and brought her home. How magnificent, but how hard those first few months can be.
But looking at these photos, all I could think about were the newborn refugees and their mothers. They haunted me. How are these women breastfeeding when they themselves don't have proper nutrition? How are they sleeping through the night while living outside? How are they ensuring that their newborns are gaining enough weight in that first critical month?
Even though I am the Director of Photography at TIME, and I look at powerful but difficult work every day, it often strikes me that sometimes even the most devastating pictures are not enough to bring attention to a global crisis. That's why this time, we are approaching this story in a new way.
I have worked with the photographer Lynsey Addario for more than 15 years. Like most of the photographers I work with, I knew her painterly and poetic pictures before I knew the woman behind the camera. Lynsey is a powerhouse—a fierce journalist with a fiery passion to tell the truth about the great injustices of the world. Lynsey is known as a brave war photographer, and has received accolades for her front-line reporting, but day in and day out, she has documented the lives of some of the most voiceless women in the world.
Now, she turns her lens on four babies born to displaced Syrian refugees who are currently living in northern Greek camps.
TIME is launching a year-long multimedia project on the refugee crisis told through the lives of four babies: Rahaf, Heln, Hamida and Faraj. Over the next year, international correspondent Aryn Baker, video producer Francesca Trianni and Lynsey will report on the lives of these babies and their parents as they navigate an uncertain future while searching for home.
Visit TIME.com/Finding-Home and follow the Instagram feed on @FindingHome
Lynsey Addario, a frequent TIME contributor, has received support from Verbatim and the UNFPA for the Finding Home project.
Kira Pollack is the Director of Photography and Visual Enterprise at TIME. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,335 |
\section{Motivation}
Almost every paper about geometry can start with ``Ancient Greeks already knew how to\ldots''. The Ancient Greeks knew how to trisect an angle, for example by neusis \cite[Theorem 9.3]{martin} or with a conchoid of Nicomedes \cite{bries}. But they could not trisect an angle with ruler and compasses. After Wantzel we know that it is indeed impossible \cite{wantzel}. In the 1930s Margherita Beloch found out that one can trisect an angle by paper folding \cite{beloch}, \cite{hullbel}. Her ideas were almost forgotten, and a new wave of origamists was needed to describe the power of paper folding. By the end of the second millennium it was proven that paper folding can solve arbitrary (rational) quartic polynomials, so every 2-3-tower over $\mathbb Q$ is constructible by means of paper folding. By \emph{paper folding} we mean the so-called 1-fold origami; only one fold line is allowed in each folding step, cf. \cite{AL}. Generalising this, one defines $n$-fold origami by allowing $n$ fold lines to arise simultaneously in each folding step. In 2006 Alperin and Lang developed axioms for 2-fold origami, where two simultaneous fold lines are produced in every folding step (think, for instance, of folding a letter), and calculated ideals describing each of the axioms. They proved (cf. \cite[Theorem 1]{AL}), using the method of Lill \cite{hullbel}, \cite{lill}, that
\begin{Satz}\label{n-2}
Every polynomial of degree $n$ can be solved by $(n-2)$-fold origami.
\end{Satz} So in particular you need at most 3-fold origami to solve quintics. Alperin and Lang asked whether you can do better. Nishimura showed that every quintic is solvable by means of one 2-fold axiom (AL4a6ab in the notation of Alperin and Lang). He did it by interpreting this axiom geometrically (and by some quite involved calculations). This is a remarkable improvement of Theorem \ref{n-2}. We take this game a little further and investigate whether one can solve every septic equation with one or more 2-fold axioms.
We see this work as a continuation of the papers \cite{AL} and \cite{Ni}.
\begin{figure}[!ht]
\centering
\includegraphics[height=8cm]{figure1-eps-converted-to.pdf}
\caption{Axiom AL4a6ab used by Nishimura. $L^{F_a}=F_b$,\; $P_a^{F_a}\in L_a$,\; $P_b^{F_b}\in L_b$. Notation of the points and lines as in \cite{Ni}.}
\end{figure}
\section{Setting}
We use the notions of \emph{point}, \emph{line}, \emph{folded point} and \emph{folded line} as Alperin and Lang do; cf. \cite[pp.\ 4–5]{AL} for exact definitions and formulas.
The list of 2-fold axioms given by Alperin and Lang is impressive, and too long to try every axiom one by one. So we did some calculations in order to filter out the axioms which yield irreducible polynomials of degree $7$ for the slope of one of the fold lines. These are\footnote{One might think of ``AL'' as Alignment or Alperin-Lang.} AL3a5b6b7a, AL3a5b6b7b, AL3a5b7ab and AL6ab8. From the geometrical point of view, and after consulting \cite[7.1.1]{AL}, we decided that AL6ab8 is suitable. Let us describe it.
Assume that we have already constructed four points and two lines. We seek to fold one point onto the first line and the second point onto the second line such that the third point, reflected across the first fold line, meets the fourth point, reflected across the second fold line, cf. Figure 2.
\begin{figure}[!ht]
\centering
\includegraphics[height=7cm]{figure2-eps-converted-to.pdf}
\caption{A representation of the axiom AL6ab8. $Q^{l_1} = G = S^{l_2}$, $P^{l_1}\in m$, $R^{l_2}\in n$.}
\end{figure}
From basic origami theory we know that folding a point not on a given line onto this line yields, as fold line, a tangent to the parabola with this point as focus and this line as directrix \cite[Theorem 10.3]{martin}.
We fix the line $m$ and the points $P\not\in m$ and $Q\neq P$, and we let $l$ vary over the set of all tangents to the parabola with focus $P$ and directrix $m$. Then the reflexion image $Q^{l}$ of $Q$ across $l$ moves along a cubic curve, cf. Figure 3. This curve was discussed for instance in \cite[p.\ 150]{martin}, \cite[pp.\ 76]{hull}, \cite{frigerio}.
\begin{figure}[!ht]\centering
\includegraphics[height=7cm]{cubic-eps-converted-to.pdf}
\caption{Three different origami cubic curves for three different positions of $Q$. A point $Q$ is folded across each tangent to the parabola with directrix $m$ and focus $P$. Note that the blue curve has an isolated \emph{real} point $Q$, but obviously there are no such isolated points if you look at this curve in $\mathbb{P}_2\mathbb C$ where it naturally lives. }
\end{figure}
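Curves like those in Figure 3 are easy to reproduce numerically: one samples points $M$ on the directrix, folds $P$ onto $M$ (the fold line is the perpendicular bisector of $P$ and $M$ and hence a tangent to the parabola) and plots the reflected images of $Q$. The following Python sketch does exactly this; the parabola $y=\frac{1}{4}x^2$ and the three positions of $Q$ are sample values chosen only for illustration and need not coincide with the data used for Figure 3.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def reflect(Q, A, n):
    # reflect the point Q across the line through A with normal vector n
    return Q - 2 * np.dot(Q - A, n) / np.dot(n, n) * n

P = np.array([0.0, 1.0])                     # focus; the directrix is y = -1
for Q in [np.array([0.0, 0.0]), np.array([3.0, 1.0]), np.array([-2.0, 4.0])]:
    images = []
    for t in np.linspace(-20.0, 20.0, 4000):
        M = np.array([t, -1.0])              # running point on the directrix
        A = (P + M) / 2                      # midpoint of P and M ...
        n = M - P                            # ... and normal of the fold line
        images.append(reflect(Q, A, n))      # image of Q under this fold
    images = np.array(images)
    plt.plot(images[:, 0], images[:, 1], '.', markersize=1)
    plt.plot(Q[0], Q[1], 'ko')
plt.gca().set_aspect('equal')
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.show()
\end{verbatim}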
Let us be more specific here. Let $m =\{(x,y)\in\mathbb R^2 \mid ax+by+1=0\}$ be a given line with origami-constructible\footnote{We will use \emph{origami-constructible} for \emph{1-fold origami-constructible}.} $a,b$. Let $P=(c,d)\in\mathbb R^2$, $P\not\in m$, and $Q=(e,f)\in\mathbb R^2$, $Q\neq P$, be two given points with origami-constructible coefficients $c,d,e,f$. It is well-known how to find the image of a point by reflexion across a line, cf. \cite[Definition 3]{AL}. The equation of an arbitrary tangent $l$ to a parabola with focus $P$ and directrix $m$ is easily calculated, too. So if we put this together, we can calculate the equation of the curve traced by the moving point $Q^{l}$. It is
\begin{multline*}
ax^3 + bx^2y + (-ac - ae + bd - bf + 1)x^2 + axy^2 + (-2ad - 2bc)xy \\
+ (2ace + 2adf - ae^2 - af^2 + 2bcf - 2bde - 2e)x + by^3 + (ac - ae - bd - bf + 1)y^2 \\
+ (-2acf + 2ade + 2bce + 2bdf - be^2 - bf^2 - 2f)y \\
- ace^2 + acf^2 - 2adef + ae^3 + aef^2 - 2bcef + bde^2 - bdf^2 + be^2f + bf^3 + e^2 + f^2 = 0.
\end{multline*}
Obviously, this curve passes through $Q$, so if we change the coordinate system to $X\!:=x-e$ and $Y\!:=y-f$ and simplify a little, the curve has the equation
\begin{equation}\label{cubic}
(X^2+Y^2)(aX+bY) + t_1 X^2+t_2 XY +t_3 Y^2 = 0
\end{equation}
with $t_1=-ac + 2ae + bd + 1, t_2=-2ad + 2af - 2bc + 2be, t_3=ac - bd + 2bf + 1\in\mathbb Q(a,b,c,d,e,f)$.
We call this curve a \emph{(circular nodal)} origami cubic curve and denote it by $\mathcal{C}\!:=\mathcal{C}(a,b,c,d,e,f)$.
The letters $a,b,c,d,e,f$ will be used throughout the paper in the sense just explained.
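The derivation of equation~(\ref{cubic}) is elementary but error-prone, so it is convenient to let a computer algebra system check it. The following Python/SymPy sketch is an independent verification written for this exposition (it is not the code we used for our computations): it parametrises the fold lines as perpendicular bisectors of $P$ and a running point $M$ on $m$ (assuming $b\neq 0$ for this particular parametrisation), reflects $Q$ across them and confirms that the image always satisfies equation~(\ref{cubic}).
\begin{verbatim}
import sympy as sp

a, b, c, d, e, f, s = sp.symbols('a b c d e f s', real=True)

P = sp.Matrix([c, d])                  # focus of the parabola
Q = sp.Matrix([e, f])                  # the point that is folded
M = sp.Matrix([s, -(a*s + 1)/b])       # running point on the directrix m

A = (P + M) / 2                        # the fold line folding P onto M is the
n = M - P                              # perpendicular bisector of P and M
Qimg = Q - 2 * (Q - A).dot(n) / n.dot(n) * n   # reflected image of Q

X, Y = Qimg[0] - e, Qimg[1] - f
t1 = -a*c + 2*a*e + b*d + 1
t2 = -2*a*d + 2*a*f - 2*b*c + 2*b*e
t3 = a*c - b*d + 2*b*f + 1
lhs = (X**2 + Y**2)*(a*X + b*Y) + t1*X**2 + t2*X*Y + t3*Y**2
print(sp.cancel(lhs))                  # prints 0: Q^l always lies on the cubic
\end{verbatim}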
We pause for a second to show a converse to the necessary condition for an origami cubic curve.
In the definition of the line $m=\{(x,y)\mid ax+by+1=0\}$ there is no loss of generality in assuming $a\neq 0$. We then divide equation~(\ref{cubic}) by $a$ and set
$t_0:=\frac{b}{a}$, getting
\begin{equation*}
(X^2+Y^2)(X+t_0Y) + (-c + 2e + t_0d + \frac{1}{a})X^2 + (-2d + 2f - 2t_0c + 2t_0e)XY + (c - t_0d + 2t_0f + \frac{1}{a})Y^2 = 0.
\end{equation*}
We show that the coefficients of $X^2$, $XY$, and $Y^2$ can take arbitrary values $t_1$, $t_2$, $t_3$ (for given $t_0,e,f$ and suitable choices of $a,c,d$).
To achieve this, we have to solve the system of equations
\begin{align*}
&t_0= \frac{b}{a},\\
&t_1=-ac + 2ae + t_0ad + 1,\\
&t_2=-2d + 2f - 2t_0c + 2t_0e ,\\
&t_3=ac - t_0ad + 2t_0af + 1
\end{align*}
for given $t_0,t_1,t_2,t_3,e,f$ and unknown variables $a,b,c,d$.
This system is solved by
\begin{align*}
&a:=\frac{2}{2t_0f - t_1 - t_3 + 2e},\\
&b:=t_0 a,\\
&c:=\frac{2t_0^2e - t_0 t_2 - t_1 + t_3 + 2e}{2t_0^2 + 2},\\
&d:=\frac{2t_0^2 f + t_0 t_1 - t_0 t_3 - t_2 + 2f}{2t_0^2 + 2}.
\end{align*}
We have therefore shown the following
\begin{Lemma}
Let $t_0,t_1,t_2,t_3,e,f$ be real numbers, and let $X\!:=x-e$ and $Y\!\!:=y-f$. Then the cubic curve given by $(X^2+Y^2)(X+t_0 Y) +t_1X^2+t_2XY +t_3Y^2=0$ is an origami cubic curve $\mathcal{C}(a,b,c,d,e,f)$
with $a,b,c,d$ in the field generated by $t_0,t_1,t_2,t_3,e,f$ over $\mathbb Q$. That is, there exists a parabola (given by directrix $\{(x,y)\mid ax+by+1=0\}$ and focus $(c,d)$) such that the image
of the given point $(e,f)$ under reflection across the tangents of the parabola is exactly the given cubic.
\end{Lemma}
\begin{Bemerkung}
For certain values of $t_0,t_1,t_2,t_3,e,f$ the denominator of $a$ (and $b$) in the above solution may vanish. This, however, does not mean that there is no solution,
but rather that the directrix of the parabola passes through the origin and therefore cannot be represented in the form $g_{a,b}:=\{(x,y)\in~\!\mathbb R^2, ax+by+1=0\}$ as above.
Furthermore one can see that if the cubic curve is irreducible then the point $(c,d)$ in the above solution will not lie on the line $g_{a,b}$, i.e.\! they really define a parabola.
\end{Bemerkung}
In the following, interpreting AL6ab8 geometrically, we construct two parabolas and two tangents to them such that two given points are superposed by folding across these tangents. Finding these tangents, i.e.\ the folds $l_1$ and $l_2$ as in Figure 2, is equivalent to finding the intersection points of two (origami) cubics.
By the Bézout Theorem there are nine (projective) complex intersection points of two cubics. The equation~(\ref{cubic}) reveals that, in our situation, the two origami cubic curves will have two intersection points at infinity, so we get generically seven affine intersection points. This yields an equation of degree 7, cf.\ the following section.
The Galois group of this equation is $S_7$ in the generic case, but we noticed that, for suitable values of the points and lines, smaller groups such as $A_7$ and $PSL_3\mathbb F_2$ occur as well (see Figures 4 and 5 for concrete crease patterns for these groups). This observation led to the natural question
\emph{Which subgroups of $S_7$ are realizable by 2-fold axioms?} Even stronger: \emph{Is every septic equation solvable by 2-fold axioms?}
\section{Generic Septic Equations}
\label{generic}
We want to find suitable values for the given points and lines such that the arising origami cubic curves intersect in a point with the ``right'' minimal polynomial, i.e.\ a minimal polynomial with the desired Galois group, for instance. We have seen that it suffices in many cases to fix one of the two parabolas.
In the axiom AL6ab8 we drop the generality and specify one half of the data. Let $n$ be the line with the equation $y=-1$. Let $R=(0,1)$ and $S=(0,0)$, cf. Figure 2. So we fold the origin across the tangents of the parabola with the equation $y = \frac{1}{4}x^2$. If $(X,Y)$ is an intersection point of the two arising origami cubic curves and $W\!:=\frac{X}{Y}$ (this is, up to sign, just the slope of the fold line across which $(0,0)$ is reflected to $(X,Y)$),
the following equation of degree $7$ is satisfied by $W$:
\begin{equation}\label{deg7pol}
\begin{gathered}
W^7 + \big(\frac{3}{2}e + \frac{1}{2}ft_0 + t_0 - \frac{1}{2}t_1\big)W^6+\\ + \big(\frac{3}{4}e^2 + \frac{1}{2}eft_0 + et_0 -\frac{1}{2}et_1 + \frac{1}{4}f^2 - \frac{1}{4}ft_2 + f - \frac{1}{2}t_2\big)W^5+\\
+ \big(\frac{1}{8}e^3 + \frac{1}{8}e^2ft_0 + \frac{1}{4}e^2t_0 - \frac{1}{8}e^2t_1 + \frac{1}{8}ef^2 - \frac{1}{8}eft_2 +\frac{1}{2}ef - \frac{1}{4}et_2 + \frac{1}{2}e + \frac{1}{8}f^3t_0+ \\+ \frac{3}{4}f^2t_0 - \frac{1}{8}f^2t_3 +\frac{3}{2}ft_0 - \frac{1}{2}ft_3 - \frac{1}{2}t_3\big)W^4+\\ + \big(\frac{3}{4}e^2 + \frac{1}{2}eft_0 - \frac{1}{2}et_1 + \frac{1}{4}f^2 - \frac{1}{4}ft_2\big)W^3+\\ + \big(\frac{1}{4}e^3 + \frac{1}{4}e^2ft_0 + \frac{1}{4}e^2t_0 -\frac{1}{4}e^2t_1 + \frac{1}{4}ef^2 - \frac{1}{4}eft_2+ \\ + \frac{1}{2}ef -\frac{1}{4}et_2 + \frac{1}{4}f^3t_0 +
\frac{3}{4}f^2t_0 - \frac{1}{4}f^2t_3 - \frac{1}{2}ft_3\big)W^2+\\ + \frac{1}{8}e^3 +\frac{1}{8}e^2ft_0 -
\frac{1}{8}e^2t_1 + \frac{1}{8}ef^2 - \frac{1}{8}eft_2 + \frac{1}{8}f^3t_0 - \frac{1}{8}f^2t_3 = 0.
\end{gathered}
\end{equation}
We see that the coefficient at $W^1$ is zero (which can of course always be achieved for a general equation of degree $7$ by substituting $W^{-1}$ for $W$ and applying a linear transformation).
The question is therefore whether the remaining six coefficients can take arbitrary values $s_1,\ldots,s_6$ for suitable choices of $t_0,t_1,t_2,t_3,e,f$. As the resulting system of
equations is linear in $t_0,\ldots,t_3$, we can assign arbitrary values to four of the coefficients (say, the coefficients at $W^6,\ldots, W^3$).\\
More precisely, this is achieved by setting
\begin{align*}
&t_0\!:=\frac{2s_1e + s_2f - \frac{3}{2}e^2 - \frac{1}{2}f^2-s_4(f+2)}{ef+2e},\\
&t_1\!:=3e + ft_0 + 2t_0 -2s_1,\\
&t_2\!:=\frac{4s_1e - 3e^2 + f^2 + 4f -4s_2}{2+f},\\
&t_3\!:=\frac{-2s_1e^2 + 4s_2e + e^3 + 4e + f^3t_0 + 6f^2t_0 + 12ft_0 -8s_3}{4f+4+f^2}.\end{align*}
The polynomial we obtain in this way from Formula (\ref{deg7pol}) is
\begin{equation}
\label{deg7pol2}
W^7+s_1W^6+s_2W^5+s_3W^4+s_4W^3+C_1W^2+C_2 = 0,
\end{equation}
where
\begin{align*}
C_1:=\frac{1}{4(e f^2 + 4 e f + 4 e)}\big(&-2s_1 e^3 f - 4s_1 e^3 - 2 s_1 e f^3 - 12 s_1 e f^2 - s_2 e^2 f^2 + 2 s_2 e^2 f + 8 s_2 e^2 - s_2 f^4 -\\&- 6 s_2 f^3 + 8 s_3 e f^2 +16 s_3 e f + s_4 e^2 f^2 + 4s_4 e^2 f + 4s_4 e^2 + s_4 f^4 + 8 s_4 f^3 +\\
&+12 s_4 f^2 + \frac{1}{2} e^4 f + e^4 + e^2 f^3 + 4e^2 f^2 - 8 e^2 f +
\frac{1}{2} f^5 + 3 f^4\big),\end{align*}
\begin{align*}
C_2:=\frac{1}{4(e f^3 + 6 e f^2 + 12 e f + 8 e)}\big(&-2 s_1 e^3 f^2 - 4s_1 e^3 f - 2 s_1 e f^4 - 8 s_1 e f^3 - s_2 e^2 f^3 +
4s_2 e^2 f -\\&- s_2 f^5 - 4s_2 f^4 + 4s_3 e f^3 + 8 s_3 e f^2 + s_4 e^2 f^3
+ 6 s_4 e^2 f^2 +\\&+ 12 s_4 e^2 f + 8 s_4 e^2 + s_4 f^5 + 6 s_4 f^4 +
8 s_4 f^3 + \frac{1}{2} e^4 f^2 - 2 e^4 +\\ & +e^2 f^4 + 2 e^2 f^3 -6 e^2 f^2 + \frac{1}{2} f^6 + 2 f^5\big).
\end{align*}
Now we are ready to show the main result.
\begin{Satz}
A generic equation of degree $7$ can be solved by 2-fold origami.
\end{Satz}
\begin{beweis}
If we set $f=0$ in equation (\ref{deg7pol2}), then we obtain
\begin{equation}
\label{deg7pol_reduced}
W^7 + s_1W^6 + s_2W^5 + s_3W^4 + s_4W^3 + \frac{1}{16}(e^3 - 4e^2s_1 + 8es_2 + 8es_4)W^2 - \frac{1}{16}e^3 + \frac{1}{4}es_4=0.
\end{equation}
Replacing $W$ by $W^{-1}$, we obtain a septic equation with vanishing coefficient at $W^6$. After multiplying $W$ with an appropriate factor and dividing by the leading coefficient, we
even get a monic septic polynomial of the form $W^7+a_1W^5+a_2W^4+a_3W^3+a_4W^2+a_5W+a_5$, where the $a_i$ are rational functions in $s_1,\ldots,s_4$ and $e$.\\
We investigate whether for suitable choices of $s_1,\ldots,s_4$ and $e$ \emph{any} equation of the form
\begin{equation}\label{general_septic} W^7+a_1W^5+a_2W^4+a_3W^3+a_4W^2+a_5W+a_5,\end{equation} with real-valued coefficients $a_1,\ldots,a_5$,
can be obtained. This leads to a system of polynomial equations in the variables $s_1,\ldots,s_4$ and $e$ over the function field $\mathbb Q(a_1,\ldots,a_5)$. Gröbner basis methods show that
the system can be solved by $e$ satisfying the equation
\begin{align*}
p_e(a_1,\ldots,a_5)\!:= e^8& + \\
\frac{e^6}{a_5^3}&\big(48a_1^2a_2a_5^2 + 40a_1a_2^4a_5 - 176a_1a_2^2a_5^2 - 56a_1a_4a_5^2 + 112a_1a_5^3 + 4a_2^7-\\
&- 40a_2^5a_5 - 40a_2^3a_4a_5 + 128a_2^3a_5^2 + 136a_2a_4a_5^2 - 128a_2a_5^3\big)+\\
\frac{e^4}{a_5^4}&\big(368a_1^4a_2^2a_5^2 + 448a_1^4a_5^3 - 64a_1^3a_2^5a_5 - 192a_1^3a_2^3a_5^2 -
1248a_1^3a_2a_4a_5^2 - 1280a_1^3a_2a_5^3 +\\
&+ 448a_1^2a_2^4a_4a_5 - 96a_1^2a_2^4a_5^2 + 1888a_1^2a_2^2a_4a_5^2 + 128a_1^2a_2^2a_5^3 + 896a_1^2a_4^2a_5^2 - 896a_1^2a_4a_5^3 +\\
&+ 1792a_1^2a_5^4 + 128a_1a_2^5a_4a_5 - 64a_1a_2^5a_5^2 - 1184a_1a_2^3a_4^2a_5 -
544a_1a_2^3a_4a_5^2 + 512a_1a_2^3a_5^3 - \\
&-320a_1a_2a_4^2a_5^2 - 640a_1a_2a_4a_5^3 - 1024a_1a_2a_5^4 - 32a_2^6a_4^2 + 64a_2^6a_4a_5 - 16a_2^6a_5^2 + \\
&+ 352a_2^4a_4^2a_5 - 736a_2^4a_4a_5^2 + 192a_2^4a_5^3 + 800a_2^2a_4^3a_5 - 1600a_2^2a_4^2a_5^2 + 2816a_2^2a_4a_5^3 -\\
&- 768a_2^2a_5^4 - 896a_4^3a_5^2 + 3584a_4^2a_5^3 - 3584a_4a_5^4 + 1024a_5^5\big)+\\
\frac{e^2}{a_5^5}&\big(256a_1^7a_5^3 + 1024a_1^6a_2a_5^3 - 1024a_1^5a_2^2a_4a_5^2 + 1536a_1^5a_2^2a_5^3 - 3584a_1^5a_4a_5^3 -\\
&- 2048a_1^4a_2^3a_4a_5^2 + 1024a_1^4a_2^3a_5^3 + 4608a_1^4a_2a_4^2a_5^2 -
8704a_1^4a_2a_4a_5^3 + 512a_1^3a_2^4a_4^2a_5 - \\
&- 1024a_1^3a_2^4a_4a_5^2 + 256a_1^3a_2^4a_5^3 + 12800a_1^3a_2^2a_4^2a_5^2 - 6656a_1^3a_2^2a_4a_5^3 - 3584a_1^3a_4^3a_5^2 + \\
&+ 14336a_1^3a_4^2a_5^3 - 3072a_1^2a_2^3a_4^3a_5 + 6144a_1^2a_2^3a_4^2a_5^2 -
1536a_1^2a_2^3a_4a_5^3 - 27648a_1^2a_2a_4^3a_5^2 +\\
&+ 15360a_1^2a_2a_4^2a_5^3 + 7168a_1a_2^2a_4^4a_5 - 12288a_1a_2^2a_4^3a_5^2 + 3072a_1a_2^2a_4^2a_5^3 + 14336a_1a_4^4a_5^2 - \\
& - 14336a_1a_4^3a_5^3 + 64a_2^5a_4^4 - 768a_2^3a_4^4a_5 - 4608a_2a_4^5a_5 + 11264a_2a_4^4a_5^2 - 2048a_2a_4^3a_5^3\big)+\\
\frac{1}{a_5^5}&\big(-1024a_1^3a_2^3a_4^4 + 6144a_1^2a_2^2a_4^5 - 12288a_1a_2a_4^6 + 8192a_4^7\big)=0
\end{align*}
and $s_1,\ldots,s_4$ lying in the field extension of $\mathbb Q(a_1,\ldots,a_5)$ generated by $e$.
But obviously $e^2$ is a root of a quartic polynomial. As quadratic and quartic polynomials can be solved by 1-fold origami,
$e$ is an origami-constructible number – and so are $s_1,\ldots,s_4$. Therefore, by substituting $t_0,\ldots,t_3$ and then $a,b,c,d$ as described above, all the values for our 2-fold step
are constructible numbers. If we can, in addition, choose them as real numbers – for which it is sufficient that $e$ is real – then we can solve the generic septic equation (\ref{general_septic}) by 2-fold origami.
While the above polynomial $p_e(a_1,\ldots,a_5)$ of degree $8$ may of course have no real roots for certain choices of $a_1,\ldots,a_5$, we will show that there is always a polynomial
\[W^7+b_1W^5+b_2W^4+b_3W^3+b_4W^2+b_5W+b_5\] generating the same field extension as the analogous polynomial in $a_1,\ldots,a_5$, such that $p_e(b_1,\ldots,b_5)$ has a real root.
Firstly, observe that $p_0(a_1,\ldots,a_5) = -1024\cdot a_4^4a_5^{-5}\cdot (a_1a_2-2a_4)^3$ and $\lim\limits_{e\to+\infty}p_e= +\infty$. If we can enforce $a_5(a_1a_2-2a_4)>0$,
then $p$ will change its sign somewhere between $0$ and $+\infty$ and therefore have a real root. Now for $w\in \mathbb R$ a root of
$W^7+a_1W^5+a_2W^4+a_3W^3+a_4W^2+a_5W+a_5$ and $\lambda\in \mathbb Q$, we can bring the minimal polynomial of $w+\frac{\lambda}{w}$ into the form $W^7+b_1W^5+b_2W^4+b_3W^3+b_4W^2+b_5W+b_5$
via linear transformations. The term $b_5(b_1b_2-2b_4)$ is a rational function in the $a_i$ and $\lambda$; as we are only interested in the sign of this expression, we can multiply it by arbitrary squares and thus obtain a square-free polynomial $F$ in $a_1,\ldots, a_5$ and $\lambda$.
Viewing $F$ as a polynomial in $\lambda$ over $\mathbb Q(a_1,\ldots, a_5)$, we observe that $F$ splits as $F(\lambda)=F_1(\lambda)\cdot F_2(\lambda)$ with $F_1$, $F_2$ polynomials in $\lambda$ of degree 5 and 7 respectively. But $F_1$ and $F_2$ will both have a real root, and generically these roots will not coincide; this means that the expression $b_5(b_1b_2-2b_4)$
will change its sign at some point, so if we choose $\lambda\in \mathbb Q$ in a suitable interval, $b_5(b_1b_2-2b_4)$ will be positive, and $p_e(b_1,\ldots,b_5)$ will have a real root.
But this means that we can construct $w+\frac{\lambda}{w}$, and therefore $w$ as well, with 2-fold origami, so every real root of a generic septic equation is constructible by 2-fold origami.
\end{beweis}
\begin{Bemerkung}
Note that our ``generic'' form can be obtained without loss of generality, if we view the coefficients as
transcendentals; however, for certain specializations, like polynomials of the form $W^7-A$, this is not possible by linear transformations. We will deal with equations $W^7-A = 0$ in \ref{seventhroot}.\\
Also, throughout the proof, we deal with rational functions in certain coefficients; of course, for a bad choice of the coefficients, these might not be well-defined due to vanishing denominators. The term ``generic'' polynomial should always be understood in the sense that the denominators have to behave well.
\end{Bemerkung}
\section{Solvable groups}
We showed above that a generic equation of degree $7$ is solvable by 2-fold origami, but there are some important cases which seem not to be included in the generic result,
like the construction of seventh roots. We deal with this separately and show more generally that every solvable $\{2,3,5,7\}$-extension of $\mathbb Q$ is solvable by 2-fold origami.
\subsection{Angle septisection}
If you are an origami artist, you quite often have to create difficult reference marks to proceed. Usually these are divisions of a segment, like thirds.
It can also happen that you need a third of an angle\footnote{By the way, the possibility of angle trisection is one of the advantages of 1-fold origami over euclidean constructions.}.
Robert Lang found an exact angle quintisection with 2-fold origami, which is impossible by 1-fold origami, and \cite{AL} and \cite{Ni} put this result on a more general basis. As far as we know, an exact septisection of a general angle has not been given by means of $k$-fold origami for $k<5$.
Robert Lang did find an approximate solution \cite{lang2010}, though, and used it for the construction of his famous scorpion.
Let $\varphi\in (0,2\pi)$ be an angle, $A=2\cos(\varphi)$ and $x=2\cos(\varphi/7)$. Then one easily verifies with de~Moivre's formula that $x^7-7x^5+14x^3-7x-A=0$.\\
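Indeed, writing $c = \cos(\varphi/7)$, the multiple-angle formula obtained from de~Moivre's theorem gives
\begin{align*}
\cos\varphi = \cos\big(7\cdot\tfrac{\varphi}{7}\big) = 64c^7 - 112c^5 + 56c^3 - 7c,
\end{align*}
and substituting $c = x/2$ turns the right-hand side into $\tfrac{1}{2}\big(x^7 - 7x^5 + 14x^3 - 7x\big)$, so that $x^7-7x^5+14x^3-7x = 2\cos\varphi = A$, as claimed.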
If we can solve this equation for arbitrary $A\in (-2,2)$, then we can septisect an arbitrary angle. The following theorem states that we can do this with 2-fold origami.
\begin{Satz}
Septisection of arbitrary angles $\varphi\in(0,2\pi)$ is possible with 2-fold origami.
\end{Satz}
\begin{beweis}
We take the polynomial from equation (\ref{deg7pol_reduced}), replace $W$ with ${W}^{-1}$ (so the polynomial has vanishing coefficient at $W^6$ instead of $W^1$), and multiply $W$ with a constant factor
in order to let the constant and the linear coefficient take the same value. Denote the resulting polynomial by $h_1(W)$. Then we treat $W^7-7W^5+14W^3-7W-A$ in the same way (that is, multiply $W$ with factor $\frac{A}{7}$) and denote the result by $h_2(W)$. Now compare the coefficients of
$h_1$ and $h_2$. The arising system of equations over $\mathbb Q(A)$ is solved by $s_2=0=s_4$ and
\begin{align*}
s_1:=\;&\frac{4302592 (-28 + A^2) (-196 + 14 A^2 + 3 A^4) e +
196 A^4 (5488 + 560 A^2 + A^4) e^3 - A^6 (28 + 3 A^2) e^5}{153664 A^2 (21952 - 784 A^2 - 252 A^4 + A^6)},\\
s_3:=\;&\frac{-3764768 (-112 + A^4) e - 98 A^2 (784 + 280 A^2 + A^4) e^3 + A^6 e^5}{5488 (21952 - 784 A^2 - 252 A^4 + A^6)},
\end{align*}
where $e$ fulfills $e^6 - \frac{38416}{A^2} e^4 + \frac{-7529536 A^4 + 210827008 A^2 - 843308032}{A^6} e^2 - \frac{210827008}{A^2} = 0$.
As all the other unknown coefficients $a,b,c,d$ of our initial point and line setting can be expressed as rational functions in these, we are done if we can construct $e$ as a real number; but the above sextic polynomial in $e$ can be solved by solving cubic and quadratic equations, i.\;\!e.\ by 1-fold origami. It remains to be seen whether $e$ can be chosen as a real number.
As $p(x) = x^6 - \frac{38416}{A^2} x^4 + \frac{-7529536 A^4 + 210827008 A^2 - 843308032}{A^6} x^2 - \frac{210827008}{A^2}$ is negative at $0$ and $\lim\limits_{x\to+\infty} p(x) = +\infty$, such a real number $e$ exists, indeed.
\end{beweis}
\subsection{Folding seventh roots}\label{seventhroot}
We try to specialize all intermediate coefficients of the polynomial in equation (\ref{deg7pol2}) to zero. This corresponds to constructing seventh roots.
So we compare coefficients of the polynomial in (\ref{deg7pol2}) with those of the polynomial $W^7+s$, where $s$ is any positive real number.
This leads to two equations in $e$ and $f$ over the field $\mathbb Q(s)$. This system of equations has a solution in the function field defined by\\
$f^{10}t^2 + 2f^{10}t + f^{10} + 24f^9t^2 + 24f^9t + 252f^8t^2 - 84f^8t + 1536f^7t^2 - 1264f^7t + 6048f^6t^2 - 1008f^6t + 16128f^5t^2 + 5376f^5t + 29568f^4t^2 + 3584f^4t + 36864f^3t^2 - 6144f^3t +
29952f^2t^2 + 14336ft^2 + 3072t^2=0,$ where $t\!:=s^2$. This defines a rational function field $\mathbb Q(f,t)$ over $\mathbb Q(t)$, and therefore we can find a parameter $w$ such that $\mathbb Q(w)=\mathbb Q(f,t)$ and express $t$ as a rational
function in it; computer calculation yields
$t=2^{10}\frac{w^7}{(w + 7)^7(w+1)^2(w+3)}$ for a suitable parameter $w$.
Remember that we want to solve $X^7+\sqrt{t}=0$. Multiplying $X$ with a factor $\sqrt{\frac{2w}{w+7}}$, we can transform this to $X^7+\sqrt{T}=0$, where
$T = \frac{8}{(w+1)^2(w+3)}$. Note that the square root that is introduced in this transformation does not lead to any problems, as square roots are of course constructible by 1-fold origami.
But now we can specialize $T$ to an arbitrary positive value; $w$ will then be the (w.l.o.g.\ real) root of a cubic equation, and we can solve this equation with 1-fold origami. Now $e$ and $f$ lie in the field generated by $w$ and $\sqrt{t}$, which is at most a quadratic extension of $\mathbb Q(w)$. As we can w.\;\!l.\;\!o.\;\!g.\ multiply $T$ with positive rational 7th powers, the field $\mathbb Q(w,\sqrt{T})$ can even be enforced to be real: for $T>0$ small enough, the equation $8 = T(w+1)^2(w+3)$ will always have a \emph{positive} solution $w$, and therefore $t$ will be positive with $T$ as well. So the construction is completed.
Together with angle septisection shown above, this result leads to the following
\begin{Satz}
Let $K\mid \mathbb Q$ be a finite solvable Galois extension of degree $2^a\cdot 3^b\cdot 5^c\cdot 7^d$ with $a,b,c,d\in \mathbb N_0$. Then $K$ is solvable by 2-fold origami.
\end{Satz}
\begin{beweis}
Galois theory says that the extension $K\mid \mathbb Q$ can be solved by repeatedly taking (square, cubic, fifth and seventh) roots.
Now taking the $n$-th root of any complex number can be achieved by taking the real $n$-th root of its absolute value, combined with angle $n$-section.\\
Square roots and cube roots can be taken by 1-fold origami. Nishimura \cite{Ni} and Lang \cite{lang5} showed in particular that fifth roots and quintisection can be achieved with 2-fold origami. This leaves $n=7$, and we showed above how to
septisect arbitrary angles and take seventh roots of reals.
\end{beweis}
\section{{Crease patterns for nonsolvable transitive groups in $S_7$}}
In the previous section we showed that every polynomial whose Galois group is a solvable subgroup of $S_7$ can be solved by 2-fold origami.
Now we turn to nonsolvable transitive groups in $S_7$. These are $S_7$, $A_7$ and $PSL_3\mathbb F_2\cong PSL_2\mathbb F_7$, cf. \cite[p.\ 60, Table 2.1]{dixon}. With the methods of Section \ref{generic}, one could give many explicit constructions
for each of these groups; however, these constructions would in general be quite lengthy and involved, as they require, for instance, the folding of solutions of quartic equations.\\
We give explicit examples of folds with very nice initial coordinates that lead to Galois groups $A_7$ and $PSL_3\mathbb F_2$ (the generic case $S_7$ is left out as almost all
folds with axiom AL6ab8 lead to this Galois group).
First, we want to give a realisation of $A_{7}$ by specializing the axiom AL6ab8.
\begin{figure}[!ht]\centering
\includegraphics[height=8cm]{A7-eps-converted-to.pdf}
\caption{Crease pattern for $A_7$ by AL6ab8. $G,H,I$ are the intersection points of the two bold cubics in red and blue.
The green foldlines $l_1$ and $l_2$ arise by folding $Q$ resp. $S$ on $G$.}
\end{figure}
We put \[m: x= -2 ,\; P=(-4,-1),\; Q=(1,2)\] for the first parabola set, cf.\ Figure 4. Furthermore we set \[n: y = -1 ,\; R=(0,1),\; S=(1,0)\] for the second parabola set.
Putting these numbers into the equations we dealt with above, we get a polynomial $h$ of degree 7, describing the intersection points of the two cubics, such that $\operatorname{Gal}(h\mid\mathbb Q) \cong A_{7}$.
More precisely, the slope of the foldline $l_2$ is a root of the polynomial $y^7 + y^6 - 8y^5 + 3y^4 + y^3 - 3y^2 + 2y - 1$. The discriminant of this polynomial is equal to
$2^8\cdot 31^2\cdot 157^2$, so it is a square and the Galois group must be contained in $A_7$. In fact, equality holds, as one verifies with a computer algebra program such as Magma.\\
Note that this polynomial has exactly three real roots, corresponding to the three intersection points of our cubics in the affine plane. The slope of the line $l_2$ in Figure 4 is the real root of approximate value $-3.49$.
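As an illustrative aside, both facts (irreducibility and a square discriminant, which for an irreducible septic forces the Galois group into $A_7$) can be checked with a few lines of code. The sketch below assumes only a standard installation of the Python library sympy; the values in the comments are the ones stated above.
\begin{verbatim}
# Illustrative check of the A_7 example polynomial with sympy.
from sympy import Poly, symbols, factorint

y = symbols('y')
p = Poly(y**7 + y**6 - 8*y**5 + 3*y**4 + y**3 - 3*y**2 + 2*y - 1, y)

print(p.is_irreducible)         # expected: True
print(len(p.real_roots()))      # expected: 3 real roots

disc = p.discriminant()
print(factorint(disc))          # expected: {2: 8, 31: 2, 157: 2}
# A square discriminant of an irreducible septic forces Gal <= A_7.
print(all(e % 2 == 0 for e in factorint(disc).values()))   # expected: True
\end{verbatim}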
Now, let us describe how to construct $PSL_{3}\mathbb F_{2}$ by AL6ab8. As depicted in Figure 5, set \[m: y = \frac{1}{2}x-1,\; P=(-\frac{16}{5}, -\frac{12}{5}),\; Q=(-3,-3); \quad n: y = -2,\; R=(0,0),\; S=(1,-1).\]
\begin{figure}[!ht]\centering
\includegraphics[height=7cm]{psl32-eps-converted-to.pdf}
\caption{Crease pattern for $PSL_3\mathbb F_2$ by AL6ab8. The green foldlines $l_1$ and $l_2$ arise by folding $Q$ resp. $S$ on $H$.}
\end{figure}
Again the two cubics intersect in three real points; the slope of fold line $l_2$ fulfills the equation \[y^7 + 3y^6 - 3y^4 + 5y^3 + y^2 - 10y - 1=0,\]
whose Galois group surprisingly turns out to be $PSL_{3}\mathbb F_{2}$. It is notable that this polynomial is very simple and the number field generated by one of its roots has very small
discriminant, namely $2^6\cdot 383^2$.
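The analogous elementary checks for this example (irreducibility and the number of real roots; determining the Galois group itself or the field discriminant requires a system such as Magma or PARI/GP) can again be done with a short sympy sketch, assuming only a standard sympy installation.
\begin{verbatim}
# Illustrative check of the PSL_3(F_2) example polynomial with sympy.
from sympy import Poly, symbols

y = symbols('y')
q = Poly(y**7 + 3*y**6 - 3*y**4 + 5*y**3 + y**2 - 10*y - 1, y)

print(q.is_irreducible)         # expected: True
print(len(q.real_roots()))      # expected: 3 real roots
\end{verbatim}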
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,877 |
Q: RecyclerView not displaying data when asynchronous operations are performed
public class ProfileSearchRecyclerAdapter extends RecyclerView.Adapter<ProfileSearchRecyclerAdapter.ViewHolder>{
private static List<ProfileSearchResult> profileList = new ArrayList<>();
private List<ProfileSearchResult> profileListNew = new ArrayList<>();
private Context context;
public ProfileSearchRecyclerAdapter(Context context){
this.context = context;
}
public void notifyChanges(List<ProfileSearchResult> changes){
profileListNew.addAll(changes);
AsyncTaskRunner taskRunner = new AsyncTaskRunner();
taskRunner.execute();
}
@NonNull
@Override
public ViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.search_result_profile_layout,parent,false);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull ViewHolder holder, int position) {
RequestOptions profilePictureRequest = new RequestOptions();
profilePictureRequest.placeholder(R.drawable.index);
holder.name.setText(profileList.get(position).getName());
holder.tag.setText(profileList.get(position).getTagline());
Glide.with(context)
.applyDefaultRequestOptions(profilePictureRequest)
.load(profileList.get(position).getProfilePictureUrl())
.into(holder.thumbnail);
}
@Override
public int getItemCount() {
return profileList.size();
}
public class ViewHolder extends RecyclerView.ViewHolder {
View view;
public CircleImageView thumbnail;
public TextView name;
public TextView tag;
public ViewHolder(View itemView) {
super(itemView);
view = itemView;
name = view.findViewById(R.id.search_result_name);
tag = view.findViewById(R.id.search_result_tagline);
thumbnail = view.findViewById(R.id.search_result_thumbnail);
}
}
private class AsyncTaskRunner extends AsyncTask<List<ProfileSearchResult>, Void, Void> {
/**
* This method compares profileSearchResultChanges with profileList to add new files,
* remove deleted files and modify files (remove old version of a file and add it's new version)
* Asynchronously
* @param lists
* @return
*/
@Override
protected Void doInBackground(List<ProfileSearchResult>... lists) {
Iterator<ProfileSearchResult> iter = profileList.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileListNew.contains(result)){
profileList.remove(result);
}
}
iter = profileListNew.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileList.contains(result)){
profileList.add(result);
}
}
return null;
}
@Override
protected void onPostExecute(Void aVoid) {
invalidate();
}
}
public void invalidate(){
notifyDataSetChanged();
}
}
This is the adapter class for my recyclerview. As Livedata registers an update for query snapshot, I pass the document to the adapter. When I assign
profileList = changes;
in notifyChanges() without any asynchronous operation, my recyclerview displays data. But when I do the Asynchronous operations, the RecyclerView is empty. What could be the reason? Thanks in advance
05-30 18:27:07.813 13453-14405/com.sachintitus.instafy.instafy E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #5
Process: com.sachintitus.instafy.instafy, PID: 13453
java.lang.RuntimeException: An error occurred while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:318)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:354)
at java.util.concurrent.FutureTask.setException(FutureTask.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:242)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:243)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:761)
Caused by: java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.next(ArrayList.java:831)
at com.sachintitus.instafy.instafy.adapters.ProfileSearchRecyclerAdapter$AsyncTaskRunner.doInBackground(ProfileSearchRecyclerAdapter.java:96)
at com.sachintitus.instafy.instafy.adapters.ProfileSearchRecyclerAdapter$AsyncTaskRunner.doInBackground(ProfileSearchRecyclerAdapter.java:82)
at android.os.AsyncTask$2.call(AsyncTask.java:304)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:243)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:761)
A: Do not call notifyDataSetChanged() from any thread other than the main thread. Just remove calls to invalidate() from doInBackground() and instead add it to onPostExecute()
@Override
protected void onPostExecute(Void aVoid) {
Log.w("PROCESS EXECUTED", String.valueOf(profileList.size()));
invalidate();
}
An ArrayList should not be modified while iterating over it. Instead of using the remove() method on the ArrayList, use the remove() method on the Iterator.
Iterator<ProfileSearchResult> iter = profileList.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileListNew.contains(result)){
iter.remove();
}
}
Also, you may be running multiple AsyncTasks concurrently, which may throw a ConcurrentModificationException, so you should synchronize operations on the lists by using a lock so that only one thread can modify a list at a time. Add this synchronized block to doInBackground() in your AsyncTask like this
@Override
protected Void doInBackground(List<ProfileSearchResult>... lists) {
synchronized (profileList) {
Iterator<ProfileSearchResult> iter = profileList.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileListNew.contains(result)){
iter.remove();
}
}
iter = profileListNew.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileList.contains(result)){
profileList.add(result);
}
}
return null;
}
}
A: I believe the problem of the ConcurrentModificationException is here
Iterator<ProfileSearchResult> iter = profileList.iterator();
while (iter.hasNext()) {
ProfileSearchResult result = iter.next();
if(false == profileListNew.contains(result)){
profileList.remove(result); // <------- here <-------
}
}
In Java, if you wish to remove an element from a list while using its iterator, you must use the iterator's own remove method to do so, or you will face a ConcurrentModificationException
It's mostly a protection mechanism to ensure the iterator won't visit the same list node twice or skip a node. docs link
instead of profileList.remove(result);
try iter.remove()
check this example
Hope this helps
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,216 |
{"url":"https:\/\/drr.ikcest.org\/app\/scb03","text":"English \u00a0\u00a0 \u4e2d\u6587\n\n# Arccosine inverse cosine function batch online calculator\n\nCategory:\n\n### App description\n\nIn mathematics, inverse trigonometric functions (sometimes also known as arcus functions), inverse functions (anti trigonometric functions) or circular functions (cyclic functions) are the inverse functions of trigonometric functions (with appropriate restricted domain). Specifically, they are the inverse functions of sine, cosine, tangent, cotangent, secant and auxiliary functions, and are used to obtain an angle from the trigonometric ratio of any angle. Inverse trigonometric functions are widely used in engineering, navigation, physics and geometry. The inverse cosine function (one of the inverse trigonometric functions) is the inverse function of the cosine function y=cosx\uff08x\u2208[0,\u03c0]\uff09, which is denoted as y=arccosx or cosy=x(x\u2208[-1,1]).\n\ny=arccos(x)\n\n Y(Degrees) Y(Radian) X 180 \u030a \u03c0 -1 150 \u030a 5\u03c0\/6 -0.866025 135 \u030a 3\u03c0\/4 -0.707107 120 \u030a 2\u03c0\/3 -0.5 90 \u030a \u03c0\/2 0 60 \u030a \u03c0\/3 0.5 45 \u030a \u03c0\/4 0.707107 30 \u030a \u03c0\/6 0.866025 0 \u030a 0 1\n\n### Usage example:\n\nNumber: 0.5,1\n\nClick \"calculate\" to output the result\n\nArccosine:\n\n1.047198\n\n0","date":"2022-08-18 02:07:50","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8420297503471375, \"perplexity\": 3770.1255315184503}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882573145.32\/warc\/CC-MAIN-20220818003501-20220818033501-00045.warc.gz\"}"} | null | null |
Q: BC30002 Error When Compiling VB Project with Microsoft.CodeAnalysis.Emit
I'm using Microsoft.CodeAnalysis.Emit to compile a Visual Basic project and generate a .dll file with the following compilation options.
VisualBasicCompilationOptions(OutputKind.DynamicallyLinkedLibrary, optimizationLevel: OptimizationLevel.Debug);
The following error is thrown by the emitter for all VB projects I tried to compile. Please advise how to resolve this.
vstest.executionengine.x86.exe Error: 0 : xxxxx -,
C:\Projects\xxxx\xxxxx\My Project\Settings.Designer.vb(67,48): error
BC30002: Type 'Global.xxxx.xxxx.Console.VBTestApp.My.MySettings' is
not defined.
A: To compile the generated Settings.Designer.vb file correctly, you have to set the root namespace of the project to the same one the file was generated with. In your case, that seems to be xxxx.xxxx.Console.VBTestApp, so your options should be:
new VisualBasicCompilationOptions(
OutputKind.DynamicallyLinkedLibrary,
optimizationLevel: OptimizationLevel.Debug,
rootNamespace: "xxxx.xxxx.Console.VBTestApp")
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 668 |
Enjoy an exciting week of plein-air watercolour painting in Barcelona, Spain. Artist Paul Raymonde and his wife Angela offer you an intensive 5-day watercolour training workshop with expert tuition. It is an opportunity not only to benefit from a rare city-based painting course but also to explore the unique heritage and impressive artistic wealth of Spain's vibrant capital of culture.
The first day in Barcelona, making a value study so as to get a feel for the medium without having to think about mixing colours.
On the second day we attempt a plein-air study in two colours only - warm and cool.
This gang nails a world famous landmark.
Even the winter in Barcelona is great for plein-air - just wrap up warm.
End of the week - students showing their favourite paintings for the week.
Paul has been coaching watercolourists of varying levels of proficiency for some years now and has seen the same mistakes being made time and again. He has designed a one-week intensive watercolour course which approaches the technique from the fundamentals and takes painters through various exercises that will ensure they understand the ground rules of successful watercolour painting.
Participants need no experience. They can be complete beginners or practicing artists. The week's itinerary is designed to be informative for all levels of proficiency.
It is not necessary to bring any equipment. Easels, palettes, brushes, paint, paper and all necessary materials are provided by the course during the mornings on location. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,595 |
\section{Introduction}
Developments in sensor technology and manufacturing processes have led to the creation of small and safe robots (referred to as cobots) that can safely physically interact with a human operator in a shared workspace~\cite{akella}. Cobots are a new and growing market, with a value expected to reach \$9.1 billion by 2025~\cite{munster_2019}. This growing demand for robots has made \ac{hri} an important field of scientific inquiry~\cite{song2017}. A user-centred approach to \ac{hri} research is important for building safe and usable robots~\cite{heinzmann1999safe}. HRI user research relies on precise recording and analysis of human interactions with robots. This interaction can be with real or simulated robots in controlled user studies~\cite{groechel22}. Research relying on virtual settings for HRI has been supported by the recent boom in the use of mixed-reality consumer headsets~\cite{groechel22}. In this paper we focus on the research question \textbf{``What are the benefits of evaluating human-robot interaction with simulated robots?''}, and we reflect on this question with findings from our work.
\section{Case Studies}
In the following, we provide specific examples of virtual environments we developed for our studies. We use these case studies as a demonstration of what can be created for user studies in simulation. All environments were developed with Unity3D and linked to ROS. Figures 1-3 show screenshots taken from different environments built for the studies as they appeared in Unity3D. We discuss the findings derived from our studies in Section~\ref{sec:discussion}.
\begin{figure}[h]
\includegraphics[width=0.9\linewidth]{Graphics/Overall.png}
\caption{Behind the Wall: The user cooperates with a cobot sitting behind a wall. The wall becomes opaque and the user loses line of sight with the robot}
\label{fig:behindthewall}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.9\linewidth]{Graphics/modular_robot_screenshot1.PNG}
\caption{Modular Robot. The lower platform consists of multiple cubic modules. A drop down menu to add the individual modules can be seen on the top left.}
\label{fig:modularRobot}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.9\linewidth]{Graphics/robot_supervision.PNG}
\caption{Robot Dual-task. The user solves math problems using the buttons on the platform (left) while supervising a cobot that sorts cubes into different trays by color (right).}
\label{fig:dualTask}
\end{figure}
\textbf{Collaboration Behind a Wall}
We developed an environment in Unity for a study done in \ac{vr} involving collaboration with a cobot that was sitting behind a visual barrier (a ``wall'', as seen in fig. \ref{fig:behindthewall}), and the participants had the task of responding to the robot's actions by interacting with objects in the environment. The main goal of the work was to assess how helpful different feedback modalities were in terms of easing the task's cognitive load and enhancing user experience. The team member building the environment was a bachelor student with no prior Unity3D experience at the beginning of the work. After learning how to use Unity3D and connect it to ROS, the team member managed to get a working understanding of Unity and ROS integration within three weeks. Because the study involved a hidden cobot, the tutorial for the study began by showing the participants a cobot sitting behind a transparent glass-like wall. While collaborating with the cobot in the tutorial mode, the wall became more opaque until finally the robot was completely occluded. The study involved audio, visual, and haptic feedback related to the collaborative task, as well as a combination of all three. The study relied on a simulated Niryo One~\footnote{https://niryo.com/fr/product/niryo-one/} robot. Fig. \ref{fig:behindthewall} shows a screenshot of the Unity3D environment.
\textbf{Modular Robots}
For an interdisciplinary study involving computer science and mechanical engineering about modular configurable robots, we built an environment where modules are made of prefabs that can click together. Examples of modules include platform blocks, robotic arms with different configurations, and end-effectors. This gives an overview of the outcome in terms of the robot's shape, dimensions, and functionalities. The 3D models of all modules used in the simulation were based on CAD models developed by mechanical engineers working on the project. Simulation of the robot module prefabs did not have to wait until the non-standard physical prototypes for the different modules were built. A screenshot of the work-in-progress can be seen in fig. \ref{fig:modularRobot}.
\textbf{Dual-task Supervision}
We developed another environment (fig. \ref{fig:dualTask}) for a study aimed at exploring the use of different interaction modalities to reduce the cognitive load when supervising a cobot as a secondary task. The user had the task of solving a series of math problems using the buttons on the platform (depicted on the left) while supervising a cobot that sorted colored cubes into different trays (seen on the right). The cubes were automatically generated and dispensed, and the behaviour of the robot was fully autonomous, and the participants had to interact with various elements of the environment to perform the required task. This study relied on simulating a Universal Robots e-series robot~\footnote{https://www.universal-robots.com/de/e-series/}.
\section{Future Direction and Research Questions}
Simulated robots have some limitations. For example, a real robot offers a way to test the actual value of many experiments and research questions in their intended setting. Moreover, using a real robot is more conducive than a simulation to work towards certification and quality assurance. With a real robot, the user has a better sense of the actual interaction and can better evaluate aspects relating to the sense of safety and trust. When the researcher is faced with the choice between a study in simulation and one with a real robot, it is crucial to use the proper tool to evaluate the metrics of interest. It is possible that predictability and transparency are equally important in both settings. However, a simulated setting may place additional and specific emphasis on immersion and perceived realism, whereas a real setting could place more emphasis on trust, especially with apprehensive users. The limitations of both settings are grounds for future studies. Based on our findings from the studies we developed, as well as the points discussed in the previous sections, we believe that the focus of future research should be on answering the following research questions:
\textbf{What conditions make better use of simulated robots?} More research should go into guidelines for designing user studies that help researchers decide whether to use simulation, depending on the study conditions and metrics.
\textbf{What are the limitations of both simulation and reality in HRI?} To address the limitations of both aspects, further research should go into how the different methods can complement each other. This should also involve validation studies done in both settings to help derive guidelines for the design of future experiments.
\balance
\section{Approach}
Simulating a robot for a study relies on three components, namely: virtualization, control, and the supporting hardware.
\textbf{Virtualization}
Tools like Unity3D~\footnote{https://unity.com/} have made building a virtual environment much easier and more accessible, especially with the rise of game development. Such tools give the researcher the flexibility to experiment with different types of robots without needing to have access to all the different physical models, such as humanoid robots or collaborative robotic arms of varying degrees of freedom. A simulated robot also enables running studies in different environments, which could be real (with augmented reality) or can themselves be simulated as well (such as in a \ac{vr} setting). Additionally, Unified Robot Description Format (URDF) files are widely available for a wide variety of robot classes. URDF files can then be used to generate 3D models of robots, which can in turn be made into templates, or \textit{prefabricated game objects} (``prefabs'' for short). Prefabs of other objects can be used to rapidly set up environment elements such as furniture, panels, surfaces, and walls.
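As a small illustration, assuming only the Python standard library (the file name is a placeholder), the joints declared in a URDF file can be listed as follows before turning the model into a prefab:
\begin{verbatim}
# Illustrative sketch: list the joints declared in a URDF file.
# 'robot.urdf' is a placeholder path; only the standard library is used.
import xml.etree.ElementTree as ET

root = ET.parse('robot.urdf').getroot()
for joint in root.findall('joint'):
    parent = joint.find('parent').get('link')
    child = joint.find('child').get('link')
    print(joint.get('name'), joint.get('type'), parent, '->', child)
\end{verbatim}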
\textbf{Control}
Controlling a robot involves path and trajectory planning as well as forward and inverse kinematics for the precise control of joint positions and angles. A simulated robot that behaves similarly to a real robot requires the same form of control for proper behavior. Tools like the Robot Operating System~\footnote{https://www.ros.org/} (ROS) enable the control of a robot, provided a valid URDF model. Special-purpose plugins exist for establishing a connection between Unity3D and ROS in order to control a simulated robot.
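As a minimal illustration, joint angles for a simulated arm can be streamed from a Python node; a Unity scene connected through such a plugin can then subscribe to the topic to animate the robot. The sketch below assumes a ROS installation with rospy, and the topic and joint names are placeholders.
\begin{verbatim}
#!/usr/bin/env python
# Illustrative sketch: stream joint angles for a simulated 6-DoF arm.
# Topic and joint names are placeholders; assumes a ROS setup with rospy.
import math
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('sim_arm_driver')
pub = rospy.Publisher('/joint_states', JointState, queue_size=10)
rate = rospy.Rate(30)                          # publish at 30 Hz
names = ['joint_%d' % i for i in range(1, 7)]  # placeholder joint names
t = 0.0
while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = names
    # Sinusoidal motion as a stand-in for a real planner / IK solution.
    msg.position = [0.5 * math.sin(t + i) for i in range(6)]
    pub.publish(msg)
    t += 1.0 / 30.0
    rate.sleep()
\end{verbatim}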
\textbf{Hardware}
Commercial mixed reality headsets such as the Microsoft Hololens~\footnote{https://www.microsoft.com/en-us/hololens/hardware} for \ac{ar}, and the Oculus Quest~\footnote{https://store.facebook.com/at/en/quest/products/quest-2/} for \ac{vr} can be used to interact with a simulated robot in a real or virtual environment. The Oculus Quest headsets, for example, are equipped with hand-held controllers, haptic feedback, stereo audio, as well as hand tracking. These tools allow multiple interaction modalities between the user and the simulation.
\section{Discussion}~\label{sec:discussion}
These case studies demonstrate the use of simulated robots. Based on the studies we conducted, we derive the following advantages of simulating a robot rather than buying or building one.
First, the ease of development of simulations. Tools like ROS and Unity3D are offered online for free. This cuts down on the cost of building a simulated environment, whereas actually buying and mounting a robot for a study would cost thousands of euros. This would be in addition to the cost, effort, and time of the logistics involved in transporting the robot to the study location. For example, in our dual-task study (fig. \ref{fig:dualTask}) building the machinery needed for dispensing the cubes, as well as the wiring for the screens and math problems would have been more costly, complicated, and tedious than building it in simulation. Moreover, a simulated robot does not require the space a real robot would need to operate freely without colliding with its environment. The lack of limitation on space makes it easier for a simulated robot to be integrated in virtual environments and studies.
Second, the lower barrier to conducting studies. The cost of mixed reality headsets is low compared to the cost of a functional cobot, which lowers the financial barrier. A study done with a simulated robot in a virtual setting can be done remotely, provided the user has access to the proper hardware. This lowers the barrier on mobility and removes location restrictions. This is useful in situations where a global event restricts mobility, e.g.\ the COVID-19 pandemic.
Third, the ability to test novel designs. Our finding from the study involving the simulated modular robot was that we did not have to wait until the modules were physically built to begin experimenting. Because the design does not follow a set standard, the mechanical engineers involved in the project had to build the first prototype from scratch, which involved curating the components, measuring, cutting, and assembling the different modules in a functional way. It took less time to build the modules' virtual counterparts and simulate their behavior.
Fourth, the flexibility of the environment. We had more control over the environment in our virtual studies than we would have in a real setting. Workspace layout and dimensions were not a constraint for building settings of different layouts and sizes for the studies.
Fifth, repeatability and modularity. A simulated robot would be easier to replicate for a different study, as opposed to physically moving the robot or buying a replica. This allows easier access to validation studies, as well as studies conducted in multiple locations simultaneously. This also allows a modular approach to studies, where components from one study can easily be merged with or adapted to another.
Studies have shown that a virtual setting can yield results that are as valid as those yielded by an in-situ study.~\cite{Voit2019OnlineArtifacts}, and that \ac{vr} could be beneficial especially for early prototypes~\cite{Mobach2008DoWorlds}. The findings by Matsas et al.~\cite{Matsas2018PrototypingReality} included the benefit of using VR for greater control over the environment and more rapidly changing the conditions. Study results also show that simulated robots in \ac{vr} are suitable for experiments using audiovisual modalities for interaction between humans and robots~\cite{krenn2021s}.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,094 |
Herta Classen, née Herta Baer (born 5 March 1913 in Chemnitz; died 17 April 1986 in Berlin), was a German journalist in the GDR. From 1956 to 1959 she was editor-in-chief of the Deutschlandsender, and from 1959 to 1969 she was director (Intendantin) of the Berliner Rundfunk.
Life
Classen, the daughter of a clerk, trained as a legal assistant after attending elementary and commercial school and worked in that profession until 1933. She then worked until 1946 as a saleswoman at the Leinehaus Voigt in Pulsnitz, Saxony.
After the end of the Second World War, Classen joined the Communist Party of Germany (KPD) and in 1946 became a member of the Socialist Unity Party of Germany (SED). In 1946 she began her journalistic career as a state parliament correspondent for the Sächsische Zeitung. From 1948 to 1953 she studied at the SED's Karl Marx Party Academy (Parteihochschule Karl Marx). From 1949 to 1951 she was press officer of the SED party executive, later of the Central Committee (ZK) of the SED.
From 1951 to 1956 Classen was an editor and then, until 1959, editor-in-chief of the Deutschlandsender; subsequently, until 1969, she was director of the Berliner Rundfunk, for which she remained active as a commentator into the 1980s.
From 1960 to 1971 Classen was a member of the SED district leadership of Berlin, from 1961 deputy chairwoman of the State Broadcasting Committee, and from 1961 to 1967 a member of the presidium of the central executive of the Association of Journalists of the GDR. In addition, from 1962 to 1966 Classen was deputy chairwoman of the GDR-Japan Friendship Society.
Honours
1968 Clara Zetkin Medal (GDR)
1963 Patriotic Order of Merit in Bronze (GDR)
1973 Patriotic Order of Merit in Silver (GDR)
1978 Patriotic Order of Merit in Gold (GDR)
1983 Honour Clasp to the Patriotic Order of Merit in Gold
Writings
Nippon zwischen gestern und morgen, 262 pages, Verlag Neues Leben (1964)
Literature
References
Journalist (GDR)
Editor-in-chief
Broadcasting director
SED functionary
KPD member
Recipient of the Patriotic Order of Merit (Honour Clasp)
Recipient of the Clara Zetkin Medal
German
GDR citizen
Born 1913
Died 1986
Woman | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 291 |
Yoon Bong-gil (hangul: 윤봉길, hanja: 尹奉吉, Yun Bong-gil), born in 1908 in Yesan, Korea, and executed by the Japanese justice system at the age of 24 in 1932 in Kanazawa, Japan, was a Korean independence activist who fought against Japanese rule in Korea (1910-1945).
Shanghai bombing
After the Japanese victory in the Battle of Shanghai, the Japanese army decided to use the birthday of Emperor Hirohito to celebrate this victory; the ceremony took place in Hongkou Park in Shanghai.
Chen Mingshu, garrison commander of the National Government of the Republic of China, acting president and head of the Songhu region, decided to take advantage of this celebration to assassinate the senior Japanese commanders, but the Japanese army had barred Chinese people from entering the park in order to guard against any such attempt. Eventually, Chen's envoys contacted the Provisional Government of the Republic of Korea. Its president, Kim Gu, expressed the will to undertake this task, which was entrusted to Yoon.
During the celebration, Yoon detonated a bomb concealed in a small lunch box. The explosion killed Yoshinori Shirakawa, a general of the Imperial Japanese Army, and the chancellor of the government for the Japanese residents of Shanghai. It also seriously wounded Kenkichi Ueda, a division commander of the Imperial Japanese Army, the Japanese consul-general in Shanghai, and Mamoru Shigemitsu, Japanese envoy to Shanghai (who lost his right leg).
Yoon was arrested on the spot and sentenced by the Japanese military tribunal in Shanghai. He was transferred to Osaka prison and executed in Kanazawa. He was then buried in the Nodayama cemetery.
Chiang Kai-shek declared of him: "A young Korean patriot has accomplished something that tens of thousands of Chinese soldiers could not."
Aftermath
His remains were later exhumed by Korean residents in Japan and transferred to Seoul, where they received a funeral rite. They were then reburied in the Seoul National Cemetery. In 1962, the government of the Second Republic of South Korea praised his actions and posthumously decorated him with the cordon of the Republic of Korea (grand cordon) and the Order of Merit for National Foundation.
See also
History of Korea under Japanese rule
Korean independence movement
Lee Bong-chang
Sakuradamon incident
Shanghai bombing
References
External links
Website devoted to Yoon Bong-gil
Born June 1908
Died December 1932
Korean resistance
Korean person convicted of murder
Person executed in Japan in the 20th century
Died at age 24
Korean independence movement | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,633 |
Isabel Ellie Knaggs (1893 - 1980) was a crystallographer born in South Africa. She was educated and worked in the United Kingdom. She worked with Kathleen Lonsdale on the crystal structure of benzil.
Early life and education
Knaggs was born in Durban. She attended the North London Collegiate School and then Bedford College, London. In 1913, Knaggs joined Girton College at the University of Cambridge to study chemistry. She studied with William Pope on the determination of crystal structures. She was appointed a research assistant. She was elected a fellow of the Geological Society of London in 1921. She completed her doctorate in 1923 at Imperial College London, with a thesis entitled The Relation between the Crystal Structure and Constitution of Carbon Compounds, with Special Reference to Simple Substitution Products of Methane. During her doctorate, Knaggs remained a demonstrator in geology at Bedford College, London.
Research
In 1925, she received a two-year Hertha Ayrton fellowship to join the Royal Institution. Knaggs worked with William Henry Bragg and Kathleen Lonsdale. She investigated the diffuse reflection of X-rays from single crystals. She obtained a permanent position in 1927. She also determined the crystal structure of a further compound.
Knaggs co-wrote Tables of Cubic Crystal Structures with Berta Karlik and Constance Elam in 1932. She was a consultant at Burroughs Wellcome (now GlaxoSmithKline). On her retirement, Knaggs was elected a visiting scientist at the Royal Institution.
Personal life
In 1979, Knaggs moved to Australia. Knaggs died in Sydney, Australia.
References
External links
Alumni of Girton College
Alumni of Imperial College London
Died November 1980
Born August 1893
Crystallographer
Women chemists
South African chemist
Died at age 87 | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,825 |
\section{Proofs of Lemmata and Propositions}
We will first provide proofs of the lemmata and propositions omitted in the main
text, and then provide additional results of our experimental evaluation.
\subsection{Identity Matrices Have Rounding Rank 2}
\label{Sec:IdentityMatrices}
For $n \geq 3$, let $\mI_n \in \mathbb{R}^{n \times n}$ be the identity matrix.
From Proposition~\ref{prop:rrank1_eq_block_nested} we get that $\rrank(\mI_n) > 1$, since
the identity matrix is not nested.
We look at the matrix
\begin{align*}
\mA &=
\begin{pmatrix}
1 & -\frac{1}{2} \\
2^{-1} & -\frac{1}{2} 4^{-1} \\
\vdots & \vdots \\
2^{-n+1} & -\frac{1}{2} 4^{-n+1}
\end{pmatrix}
\begin{pmatrix}
1 & 2 & \hdots & 2^{n-1} \\
1 & 4 & \hdots & 4^{n-1}
\end{pmatrix} \\
&=
\begin{pmatrix}
1 - \frac{1}{2} & 2-\frac{4}{2} & 4-\frac{16}{2} & \cdots \\
\frac{1}{2} - \frac{1}{8} & 1-\frac{4}{8} & 2-\frac{16}{8} & \cdots \\
\frac{1}{4} - \frac{1}{32} & \frac{1}{2}-\frac{4}{32} & 1-\frac{16}{32} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\end{align*}
and observe that $\mA_{ij} = \frac{1}{2}$, if $i = j$, and $\mA_{ij} < \frac{1}{2}$, otherwise.
Thus, we get $\round(\mA) = \mI_n$ and therefore $\rrank(\mI_n) = 2$.
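The construction is easy to check numerically. The following small sketch, added for illustration and assuming only the Python library numpy, builds the two factor matrices for $n=6$ and confirms that rounding their product at threshold $\frac{1}{2}$ (values of at least $\frac{1}{2}$ are rounded to $1$) recovers $\mI_n$.
\begin{verbatim}
# Illustrative check: round(L R^T) = I_n at rounding threshold 1/2.
import numpy as np

n = 6
i = np.arange(n)
L = np.column_stack([2.0**(-i), -0.5 * 4.0**(-i)])  # rows (2^{-i}, -4^{-i}/2)
R = np.column_stack([2.0**i, 4.0**i])                # rows (2^{j},  4^{j})
A = L @ R.T                                          # A_ij = 2^{j-i} - 4^{j-i}/2

B = (A >= 0.5).astype(int)                           # entries >= 1/2 become 1
assert np.array_equal(B, np.eye(n, dtype=int))
\end{verbatim}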
\subsection{Proof of Proposition~\ref{prop:LowerBound}}
\cite{BetterLinearLowerBound} proved a lower bound for sign rank.
\begin{lemma}[{\hspace*{-.5em}\cite[Th.~5]{BetterLinearLowerBound}}]
Let $\mB \in \{-1, +1\}^{m \times n}$.
Let $r = \rank(\mB)$ and let $\sigma_1(\mB) \geq \dots \geq \sigma_r(\mB) > 0$ be
the singular values of $\mB$. Denote the sign rank of $\mB$ by $d$. Then
\begin{align*}
d \sum_{i = 1}^d \sigma_i^2(\mB) \geq mn.
\end{align*}
\end{lemma}
We use the previous lemma to prove a lower bound on rounding rank.
\begin{varthm}[Proposition~\ref{prop:LowerBound} (again)]
Let $r = \rank(\mB^\pm)$ and let $\sigma_1(\mB^\pm) \geq \dots \geq \sigma_r(\mB^\pm) > 0$ be
the singular values of $\mB^\pm$. Then
\begin{align*}
(\rrank_0(\mB) + 1) \sum_{i = 1}^{\rrank_0(\mB)} \sigma_i^2(\mB^\pm) \geq mn.
\end{align*}
\end{varthm}
\begin{proof}
As argued in the main text, $\rrank_0(\mB) \leq \srank(\mB)$ for all $\mB$.
This implies
\begin{align*}
\sum_{i=1}^{\srank(\mB)} \sigma_i^2(\mB^\pm) \geq \sum_{i=1}^{\rrank_0(\mB)} \sigma_i^2(\mB^\pm).
\end{align*}
Using the previous lemma for sign rank and $\rrank_0(\mB) + 1 \geq \srank(\mB)$, we get
\begin{align*}
\rrank_0(\mB) + 1 &\geq \srank(\mB) \\
&\geq \frac{mn}{\sum_{i=1}^{\srank(\mB)} \sigma_i^2(\mB^\pm)}\\
&\geq \frac{mn}{\sum_{i=1}^{\rrank_0(\mB)} \sigma_i^2(\mB^\pm)}.
\end{align*}
After multiplying with the denominator of the last equation, we obtain the desired result.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:UsefulHyperplaneSeparation}}
We revisit the Hyperplane Separation Theorem.
\begin{lemma*}[Hyperplane Separation Theorem {\cite[page 46]{ConvexOptimization}}]
Let $A$ and $B$ be two disjoint nonempty closed convex sets in $\mathbb{R}^d$, one of which is compact.
Then there exists a nonzero vector $\vv \in \mathbb{R}^d$ and real numbers $c_1 < c_2$,
such that $\langle \vx, \vv\rangle > c_2$ and $\langle \vy, \vv \rangle < c_1$
for all $\vx \in A$ and $\vy \in B$.
\end{lemma*}
Now we prove Lemma~\ref{lem:UsefulHyperplaneSeparation} of the paper.
\begin{varthm}[Lemma~\ref{lem:UsefulHyperplaneSeparation} (again)]
Let $A$ and $B$ be two disjoint nonempty convex sets in $\mathbb{R}^d$, one of which is compact.
Then for all $c \in \mathbb{R} \setminus \{ 0 \}$ there exists a nonzero vector $\vv \in \mathbb{R}^d$,
such that $\langle \vx, \vv \rangle > c$ and $\langle \vy, \vv \rangle < c$ for all $\vx \in A$ and $\vy \in B$.
\end{varthm}
\begin{proof}
Let $c \in \mathbb{R} \setminus \{ 0 \}$ be arbitrary.
We apply the Hyperplane Separation Theorem to $A$ and $B$
to obtain a vector $\vv'$ and numbers $c_1 < c_2$
with $\langle \vx, \vv' \rangle > c_2$ and $\langle \vy, \vv' \rangle < c_1$
for all $\vx \in A$ and all $\vy \in B$.
Now we consider three cases.
Case 1: $c_1 \neq 0$ and $c_2 \neq 0$ and $\sign(c_1) = \sign(c_2)$.
We set $\alpha = c / c_2$ and $\vv = \alpha \vv'$.
Then we get for $\vx \in A$:
\begin{align*}
\langle \vx, \vv \rangle = \alpha \langle \vx, \vv' \rangle > \alpha c_2 = c,
\end{align*}
as well as for $\vy \in B$:
\begin{align*}
\langle \vy, \vv \rangle = \alpha \langle \vy, \vv' \rangle < \alpha c_1 = \frac{c_1}{c_2} c < c,
\end{align*}
where in the last inequality we used that $0 < c_1/c_2 < 1$.
Case 2: $c_1 \neq 0$ and $c_2 \neq 0$ and $\sign(c_1) \neq \sign(c_2)$.
Since the signs of $c_1$ and $c_2$ disagree, we have $c_1 < 0 < c_2$.
Thus, we can pick $c_1' \in (0,c_2)$ arbitrarily
and still maintain all properties guaranteed by the Hyperplane Separation Theorem
for $\vv$, $c_1'$ and $c_2$.
Now we are in case 1.
Case 3: $c_1 = 0$ or $c_2 = 0$.
We can pick numbers $d_1, d_2 \in (c_1, c_2)$ with $d_1 < d_2$.
Observe that both $d_1$ and $d_2$ are non-zero.
Then we have $\langle \vx, \vv' \rangle > c_2 > d_2$
and $\langle \vy, \vv' \rangle < c_1 < d_1$
for all $\vx \in A$ and all $\vy \in B$.
Now we can use case 1 for $\vv'$, $d_1$ and $d_2$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:rrankp_vs_rrank}}
The proof follows ideas of \cite{ProbabilisticCommunicationComplexity} and adds some details
to achieve the non-negativity.
\begin{varthm}[Proposition~\ref{prop:rrankp_vs_rrank} (again)]
Let $\mB \in \{0, 1\}^{m \times n}$ be a binary matrix. Then
\begin{align*}
\rrank(\mB) \leq \rrank_+(\mB) \leq \rrank(\mB) + 2.
\end{align*}
\end{varthm}
\begin{proof}
The first inequality is trivial as the standard rounding rank
is more general than the non-negative rounding rank.
The trickier part is the second inequality.
The idea of the proof is to take points and hyperplanes achieving the rounding rank of the matrix
and to project them into a higher-dimensional space where they are non-negative.
This projection happens via an explicit construction that gets somewhat technical.
Let $k = \rrank(\mB)$.
Then by definition there exist matrices $\mL \in \mathbb{R}^{m \times k}$ and $\mR \in \mathbb{R}^{n \times k}$
with $\mB = \round(\mL \mR^T)$ for rounding threshold $\frac{1}{2}$. As before we will interpret the rows
$\vl_1, \dots, \vl_m$ of $\mL$ as points in $\mathbb{R}^k$ and the rows $\vr_1, \dots, \vr_n$ of $\mR$
as normal vectors of affine hyperplanes in $\mathbb{R}^k$.
For each $\vr_j = \left(\vr_{j1}, \dots, \vr_{jk}\right)$, we set
$\vr_j' = \left(\vr_{j1}, \dots, \vr_{jk}, - \frac{1}{2}, \frac{1}{2} - \sum_{m = 1}^k \vr_{jm}\right) \in \mathbb{R}^{k+2}$
and observe that these vectors define hyperplanes in $\mathbb{R}^{k+2}$ containing the origin, i.e.\ we have
$0 \in \left\{ \vx \in \mathbb{R}^{k+2} : \langle \vx, \vr_j' \rangle = 0 \right\}$.
We set $d_j = \max\{ | \vr_{jm}' | : m = 1, \dots, k+2 \}$ and define $\vr_j'' = \frac{1}{2d_j} \vr_j'$.
Observe that for all $m = 1, \dots, k+2$, we have $-\frac{1}{2} \leq \vr_{jm}'' \leq \frac{1}{2}$.
For each $\vl_i = \left(\vl_{i1}, \dots, \vl_{ik} \right)$, we set $c_i = \max\{ |\vl_{i1}|, \dots, |\vl_{ik}|, 1 \}$ and
we further define $\vl_i' = (c_i + \vl_{i1}, \dots, c_i + \vl_{ik}, c_i + 1, c_i) \in \mathbb{R}^{k+2}$
and observe that
$\vl_i'$ is non-zero and non-negative. By $\vl_i''$ we denote $\vl_i'$ after normalizing with the $L^1$-norm, i.e.\
$\vl_i'' = \vl_i' / || \vl_i' ||_1$, where $|| \vl_i' ||_1 = \sum_{m = 1}^{k+2} | \vl_{im} | $.
Now we do a short intermediate computation that shows that the
$\vl_i''$ and $\vr_j''$ indeed still round to matrix $\mB$
with rounding threshold $0$:
\begin{align}
\langle \vr_j'', \vl_i'' \rangle
&= \frac{1}{||\vl_i'||_1} \langle \vr_j'', \vl_i' \rangle \nonumber \\
&= \frac{1}{2d_j ||\vl_i'||_1} \langle \vr_j', \vl_i' \rangle \nonumber \\
&= \frac{1}{2d_j ||\vl_i'||_1} \left( \sum_{m=1}^k \vr_{jm} (c_i + \vl_{im}) - \frac{1}{2}(c_i + 1) \right. \nonumber \\
& \hspace{1cm} \left. + \left(\frac{1}{2} - \sum_{m = 1}^k \vr_{jm}\right)c_i \right) \nonumber \\
&= \frac{1}{2d_j ||\vl_i'||_1} \left( \sum_{m=1}^k \vr_{jm} \vl_{im} - \frac{1}{2} \right) \nonumber \\
&= \frac{1}{2d_j ||\vl_i'||_1} \left( \langle \vr_j, \vl_i \rangle - \frac{1}{2} \right) \label{eq:technicalThingy1} \\
&=
\begin{cases}
\geq 0, & \text{if $\langle \vr_j, \vl_i \rangle \geq \frac{1}{2}$}, \\
< 0, & \text{otherwise}.
\end{cases} \nonumber
\end{align}
We move on to define $\vr_j''' \in \mathbb{R}^{k+2}$ by setting
$\vr_{jl}''' = \frac{1}{2} + \vr_{jl}''$ for all $l = 1, \dots, k+2$.
Observe that each component of $\vr_j'''$ is non-negative.
We perform another intermediate computation, that we will need later:
\begin{align}
\langle \left(\frac{1}{2}, \dots, \frac{1}{2}\right), \vl_i'' \rangle \nonumber
&= \frac{1}{2} \sum_{m=1}^{k+2} \vl_{im}'' \nonumber \\
&= \frac{1}{2 ||\vl_i'||_1} \sum_{m=1}^{k+2} \vl_{im}' \nonumber \\
&= \frac{1}{2 ||\vl_i'||_1} \left( \sum_{m=1}^{k} (c_i + \vl_{im}) + 2c_i + 1 \right) \nonumber \\
&= \frac{1}{2 ||\vl_i'||_1} \left( (k+2) c_i + 1 + \sum_{m=1}^{k} \vl_{im} \right). \label{eq:technicalThingy2}
\end{align}
Now we observe that the $\vr_j'''$ and $\vl_i''$ give a non-negative rounding rank
decomposition of $\mB$ for different rounding thresholds,
where we use \eqref{eq:technicalThingy1} and \eqref{eq:technicalThingy2} in the second step:
\begin{align}
\langle \vr_j''', \vl_i'' \rangle \nonumber
&= \langle \vr_j'' + \left( \frac{1}{2}, \dots, \frac{1}{2} \right), \vl_i'' \rangle \nonumber \\
&= \frac{ \langle \vr_j, \vl_i \rangle - \frac{1}{2} }{2d_j ||\vl_i'||_1}
+ \frac{(k+2) c_i + 1 + \sum_{m=1}^{k} \vl_{im}}{2 ||\vl_i'||_1}. \label{eq:techroundingthresh}
\end{align}
Notice that the first summand of~\eqref{eq:techroundingthresh} is non-negative
iff $\langle \vr_j, \vl_i \rangle \geq \frac{1}{2}$.
Thus, if we use the second summand as rounding threshold, then we would round correctly.
The issue is that this rounding threshold depends on $\vl_i$.
To solve this problem and to get everything to rounding threshold $\frac{1}{2}$, we rescale the $\vl_i''$.
We denote the second summand of~\eqref{eq:techroundingthresh} by $\alpha$ and observe that $\alpha > 0$
by choice of $c_i$: since $c_i \geq 1$ and $c_i \geq |\vl_{im}|$ for all $m$, we have $(k+2) c_i + 1 + \sum_{m=1}^{k} \vl_{im} \geq 2 c_i + 1 > 0$. Now we set $\vl_i''' = \frac{1}{2 \alpha} \vl_i''$ and obtain:
\begin{align}
\langle \vr_j''', \vl_i''' \rangle
&= \frac{1}{2\alpha} \langle \vr_j''', \vl_i'' \rangle \nonumber \\
&= \frac{ \langle \vr_j, \vl_i \rangle - \frac{1}{2} }{4 \alpha d_j ||\vl_i'||_1}
+ \frac{1}{2}, \label{eq:finaltechnicality}
\end{align}
where we used~\eqref{eq:techroundingthresh} in the last step.
The inner product is non-negative by choice of $\vl_i'''$ and $\vr_j'''$
and the first summand of~\eqref{eq:finaltechnicality} is non-negative
iff $\langle \vr_j, \vl_i \rangle \geq \frac{1}{2}$.
Thus, $\langle \vr_j''', \vl_i''' \rangle \geq \frac{1}{2}$
iff $\langle \vr_j, \vl_i \rangle \geq \frac{1}{2}$
iff $\mB_{ij} = 1$.
Therefore, the $\vr_j'''$ and $\vl_i'''$ give a non-negative rounding rank decomposition of $\mB$ for rounding threshold $\frac{1}{2}$.
\end{proof}
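The construction in this proof is fully explicit and can be carried out numerically. The following Python sketch (the function name \texttt{nonneg\_factors} and the NumPy formulation are ours, not part of the paper's code) maps an arbitrary rounding rank decomposition $(\mL, \mR)$ for threshold $\frac{1}{2}$ to non-negative factors of inner dimension $k+2$ that round to the same matrix.
\begin{verbatim}
import numpy as np

def nonneg_factors(L, R):
    """Given L (m x k), R (n x k) with B = round_{1/2}(L R^T), build
    non-negative factors of inner dimension k+2 rounding to the same B,
    following the construction in the proof (rounding threshold 1/2)."""
    m, k = L.shape
    n, _ = R.shape
    # rows of R -> vectors r''' with entries in [0, 1]
    Rp = np.hstack([R, np.full((n, 1), -0.5),
                    0.5 - R.sum(axis=1, keepdims=True)])
    d = np.abs(Rp).max(axis=1, keepdims=True)
    Rppp = Rp / (2.0 * d) + 0.5
    # rows of L -> non-negative vectors l'''
    c = np.maximum(np.abs(L).max(axis=1), 1.0)               # c_i
    Lp = np.hstack([L + c[:, None], (c + 1.0)[:, None], c[:, None]])
    norm1 = Lp.sum(axis=1)                                   # L1 norm (entries >= 0)
    alpha = ((k + 2) * c + 1.0 + L.sum(axis=1)) / (2.0 * norm1)
    Lppp = Lp / norm1[:, None] / (2.0 * alpha[:, None])
    # now round_{1/2}(Lppp @ Rppp.T) equals round_{1/2}(L @ R.T)
    return Lppp, Rppp
\end{verbatim}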
\subsection{Proof of Proposition~\ref{prop:rrank1_eq_block_nested}}
\begin{varthm}[Proposition~\ref{prop:rrank1_eq_block_nested} (again)]
Let $\mB \in \{ 0, 1\}^{m \times n}$ with $\mB \neq 0$. The following statements are equivalent:
\begin{enumerate}
\item $\rrank(\mB) = 1$.
\item $\mB$ is nested or there exist permutation matrices $\mP_1$ and $\mP_2$
and nested matrices $\mB_1$ and $\mB_2$, such that
\begin{align*}
\mB = \mP_1
\begin{pmatrix}
\mB_1 & 0 \\
0 & \mB_2
\end{pmatrix}\mP_2.
\end{align*}
\end{enumerate}
\end{varthm}
\begin{proof}
$1 \Rightarrow 2$:
Let $\mB=\round(\vl\vr^T)$. If $\vl$ (or $\vr$) is non-negative or non-positive,
$\mB$ is nested. To see this, observe that $\round(\vl \vr^T)$ remains unmodified
if we replace entries of opposite sign in $\vr$ (or $\vl$) by $0$ and then
take absolute values. Then we can apply Theorem~\ref{Thm:NestedMatrices}.
Otherwise, both $\vl$ and $\vr$ contain both strictly negative and
strictly positive entries. Then there exists some permutation matrix $\mP_1$, such that
$\mP_1^{-1}\vl$ is non-increasing. We pick the vectors $\vl_+\ge 0$ and
$\vl_-\le 0$, such that $\mP_1^{-1}\vl=\begin{pmatrix}\vl_+ \\ \vl_- \end{pmatrix}$.
Similarly, there is some permutation matrix $\mP_2$, such that
$\mP_2\vr$ is non-increasing and we set $\vr_+$ and $\vr_-$ accordingly.
Using this notation we can do a quick computation,
\begin{align*}
\mB
&= \round(\vl\vr^T) \\
&= \round(\mP_1(\mP_1^T\vl)(\mP_2\vr)^T\mP_2) \\
&= \mP_1 \round\left(
\begin{pmatrix}
\vl_+ \\
\vl_-
\end{pmatrix}
\begin{pmatrix}
\vr_+ \\
\vr_-
\end{pmatrix}^T
\right)
\mP_2 \\
&= \mP_1
\begin{pmatrix}
\round(\vl_+\vr_+^T) & \round(\vl_+\vr_-^T) \\
\round(\vl_-\vr_+^T) & \round(\vl_-\vr_-^T)
\end{pmatrix}
\mP_2 \\
&= \mP_1
\begin{pmatrix}
\mB_1 & 0 \\
0 & \mB_2
\end{pmatrix}
\mP_2,
\end{align*}
where $\mB_1=\round(\vl_+\vr_+^T)$ and $\mB_2=\round(\vl_-\vr_-^T)=\round((-\vl_-)(-\vr_-^T))$.
The last equality holds since $\round(\vl_+\vr_-^T)=\bm0$ and
$\round(\vl_-\vr_+^T)=\bm0$. Finally, we observe that $\mB_1$ and $\mB_2$ are
nested matrices by Theorem~\ref{Thm:NestedMatrices}.
$2 \Rightarrow 1$:
If $\mB$ is nested, then $\rrank(\mB) \leq \rrank_+(\mB) = 1$ by Theorem~\ref{Thm:NestedMatrices}.
Suppose $\mB$ is not nested and we are given $\mP_1$, $\mP_2$, $\mB_1$ and $\mB_2$
as in the statement of the proposition.
Then $\mB_1$ and $\mB_2$ are non-zero (otherwise $\mB$ would be nested) and they have non-negative
rounding rank one by Theorem~\ref{Thm:NestedMatrices}. Thus, we can assume that $\mB_1 = \round(\vl_1 \vr_1^T)$
and $\mB_2 = \round(\vl_2 \vr_2^T)$ for some non-negative vectors $\vl_1$, $\vl_2$, $\vr_1$ and $\vr_2$.
Now we observe that
\begin{align*}
\round \left(
\begin{pmatrix}
\vl_1 \\ -\vl_2
\end{pmatrix}
\begin{pmatrix}
\vr_1 \\
-\vr_2
\end{pmatrix}^T
\right)
=
\begin{pmatrix}
\mB_1 & 0 \\
0 & \mB_2
\end{pmatrix}.
\end{align*}
Thus, by setting $\vl = \mP_1 \begin{pmatrix} \vl_1 \\ -\vl_2 \end{pmatrix}$ and
$\vr = \mP_2^T \begin{pmatrix} \vr_1 \\ -\vr_2 \end{pmatrix}$ we get $\mB = \round(\vl \vr^T)$.
\end{proof}
\subsection{Experimental Results}
\label{sec:experimental-results}
The results on estimating the rounding rank on small synthetic data with normally distributed factors are presented in Figure~\ref{fig:apx:synth:rank:times}. The results on the minimum error fixed rounding rank experiments with medium-sized, normally distributed data are presented in Figures~\ref{fig:apx:synth:unif:time} (for timing results with uniformly-distributed factors) and~\ref{fig:apx:synth:err} (for results with normally distributed factors). In all cases, the results are essentially similar to the corresponding results presented in the main paper.
\begin{figure*}
\centering
\includegraphics[height=\legendheight]{rank_small_legend} \\
\rotatebox[origin=l]{90}{\hspace*{1em}\small Normal dist.}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $m$]{%
\label{fig:synth:small:n:norm:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_n_norm_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $k$]{%
\label{fig:synth:small:k:norm:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_k_norm_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $\mu$]{%
\label{fig:synth:small:dens:norm:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_dens_norm_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $p$]{%
\label{fig:synth:small:noise:norm:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_noise_norm_rank}%
}\\
\rotatebox[origin=l]{90}{\hspace*{3em}\small Time}\hspace*{\smallfigsep}%
\subfigure[Time, vary $m$]{%
\label{fig:synth:small:n:norm:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_n_norm_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $k$]{%
\label{fig:synth:small:k:norm:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_k_norm_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $\mu$]{%
\label{fig:synth:small:dens:norm:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_dens_norm_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $p$]{%
\label{fig:synth:small:noise:norm:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_noise_norm_time}%
}%
\caption{Estimated rank and running times when using small synthetic data sets with normally distributed factor matrices (cf. Figure 2 of the main submission).}
\label{fig:apx:synth:rank:times}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=\legendheight]{err_big_legend} \\
\rotatebox[origin=l]{90}{\hspace*{3em}\small Time}\hspace*{\smallfigsep}%
\subfigure[Time, vary $m$]{%
\label{fig:synth:big:n:unif:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_n_unif_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $k$]{%
\label{fig:synth:big:k:unif:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_k_unif_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $\mu$]{%
\label{fig:synth:big:dens:unif:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_dens_unif_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $p$]{%
\label{fig:synth:big:noise:unif:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_noise_unif_time_kplus_20}%
}%
\caption{Running times for the minimum-error fixed rounding rank decompositions on medium-sized synthetic data with uniformly distributed factors.}
\label{fig:apx:synth:unif:time}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=\legendheight]{err_big_legend} \\
\rotatebox[origin=l]{90}{\hspace*{1em}\small Normal dist.}\hspace*{\smallfigsep}%
\subfigure[Error, vary $m$]{%
\label{fig:synth:big:n:norm:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_n_norm_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $k$]{%
\label{fig:synth:big:k:norm:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_k_norm_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $\mu$]{%
\label{fig:synth:big:dens:norm:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_dens_norm_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $p$]{%
\label{fig:synth:big:noise:norm:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_noise_norm_err_kplus_20}%
}\\
\rotatebox[origin=l]{90}{\hspace*{3em}\small Time}\hspace*{\smallfigsep}%
\subfigure[Time, vary $m$]{%
\label{fig:synth:big:n:norm:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_n_norm_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $k$]{%
\label{fig:synth:big:k:norm:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_k_norm_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $\mu$]{%
\label{fig:synth:big:dens:norm:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_dens_norm_time_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $p$]{%
\label{fig:synth:big:noise:norm:errtime}%
\includegraphics[width=\smallfigwidth]{err_big_vary_noise_norm_time_kplus_20}%
}%
\caption{Relative reconstruction errors and running times on medium-sized synthetic data with normally distributed factors. The top row gives the relative reconstruction error and the bottom row the running times. The results of \asso are omitted as they were significantly worse than the other results. All data points are averages over 10 random matrices and the width of the error bars is twice the standard deviation. Compare to Figure 3 of the main submission.}
\label{fig:apx:synth:err}
\end{figure*}
\section{Computing the Rounding Rank}
\label{sec:comp-round-rank}
In this section, we provide algorithms approximating $\rrank(\mB)$ and for
approximately solving the \minErrorRRproblem{$k$} problem. The algorithms are
based on some of the most common paradigms for algorithm design in data mining.
The \proje{} algorithm makes use of randomized projections, \SVD{} uses
truncated SVD, \lpca{} uses logistic PCA, and \asso{} is a Boolean matrix
factorization algorithm.
For each algorithm, we first discuss how to obtain an approximation to
$\rrank(\mB)$ (in the form of an upper bound) and then discuss extensions
to solve \minErrorRRproblem{$k$}.
\paragraph{Projection-based algorithm (\proje)}
We first describe a Monte Carlo algorithm to decide whether
$\rrank(\mB) \leq d$ for a given matrix $\mB$ and $d \in \mathbb{N}$. The
algorithm can output \textsc{yes} or \textsc{unknown}. If the algorithm outputs \textsc{yes}, it also
produces a rounding rank decomposition. We use this algorithm for different
values of $d$ to approximate $\rrank(\mB)$.
The decision algorithm is inspired by a simple observation: For an $m
\times n$ binary matrix $\mB$, we have $\mB = \round(\mB \mI)$, where $\mI$
denotes the $n \times n$ identity matrix. We interpret each row $\mB_i$ of
$\mB$ as a point in $\R^n$ and each column $\mI_j$ of $\mI$ as the normal vector
of a hyperplane in $\R^n$. The hyperplane given by $\mI_j$ separates the
points $\mB_i$ according to the $j$'th column of $\mB$ into the classes
$C_j = \{ \mB_{i} : \mB_{ij} = 1 \}$ and $\bar C_j = \{ \mB_{i} : \mB_{ij} = 0 \}$,
since $\mB_{ij} = \round(\langle \mB_i, \mI_j \rangle)$. The idea of \proje is to
take the points $\mB_i$ (the rows of $\mB$) and to project them into
lower-dimensional space $\R^d$, $d \ll n$, to obtain vectors $\mL_1, \dots,
\mL_m \in \R^d$. We use a randomized projection that approximately preserves the
distances of the $\mB_i$ and thereby---if $\mB$ has rounding rank at most $d$---try (or
hope) to maintain the separability of the points by
hyperplanes. Given the projected vectors in $\R^d$, we check separability by affine
hyperplanes and find the corresponding normal vectors $\mR_1, \dots, \mR_n$
using a linear program. If the $\mL_i$ turn out to be separable, we have
$\mB_{ij} = \round(\langle \mL_i, \mR_j \rangle)$ for all $i,j$ and thus $\mB =
\round(\mL \mR^T)$, where $\mL$ and $\mR$ have the $\mL_i$'s and $\mR_j$'s in
their rows, respectively. We conclude $\rrank(\mB) \leq d$ and output
\textsc{yes}. If the $\mL_i$ are not separable, no conclusions can be drawn and the
algorithm outputs \textsc{unknown}.
The Johnson--Lindenstrauss Lemma~\cite{JohnsonLindenstraussLemma} asserts that
there exists a linear mapping $\mA$ that projects points from a high-dimensional
space into a lower-dimensional space while approximately preserving the
distances. We use the projections proposed by Achlioptas~\cite{Achlioptas03Database} to obtain $\mA$.
We set $\mL_i=\mB_i\mA$. The linear program (LP) to compute the normal vector $\mR_j$ is
\begin{align*}
\text{find} \quad & \mR_j \in\R^{d} & & \\
\text{subject to} \quad & \sum_{k = 1}^d \mL_{ik} \mR_{jk} \geq \tau + \varepsilon
& \text{if }\mB_{ij} = 1, \\
& \sum_{k = 1}^d \mL_{ik} \mR_{jk} \leq \tau - \varepsilon
& \text{if }\mB_{ij} = 0.
\end{align*}
We enforce strict separability by introducing an offset $\varepsilon >
0$. In practice, we set $\varepsilon$ to the smallest positive number
representable by the floating-point hardware. Notice that the LP only aims at
finding a feasible solution; it has $m$ constraints and $d$ variables.
To approximate the rounding rank, we repeatedly run the above
algorithm with increasing values of $d$ until it outputs \textsc{yes}; i.e., $d=1,2,\ldots$.
Alternatively, we could use some form of binary search to find a suitable
value of $d$. In practice, however, solving the LP for large
values of $d$ slows down the binary search too much.
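As an illustration, a minimal Python sketch of one such decision trial is given below. The function name and the choice of $\varepsilon$ are ours; the $\pm 1$ matrix is the simpler of the two projections proposed by Achlioptas, and the feasibility LP is solved with \texttt{scipy.optimize.linprog} rather than Gurobi.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def rrank_at_most(B, d, tau=0.5, eps=1e-9, seed=None):
    """One Monte Carlo trial deciding whether rrank(B) <= d (a sketch)."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    A = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(d)  # random projection
    L = B @ A                                               # projected rows of B
    R = np.zeros((n, d))
    for j in range(n):
        ones = B[:, j] == 1
        # B_ij = 1: <L_i, r> >= tau + eps ;  B_ij = 0: <L_i, r> <= tau - eps
        A_ub = np.vstack([-L[ones], L[~ones]])
        b_ub = np.concatenate([np.full(ones.sum(), -(tau + eps)),
                               np.full((~ones).sum(), tau - eps)])
        res = linprog(np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * d, method="highs")
        if not res.success:           # LP infeasible: answer is "unknown"
            return False, None, None
        R[j] = res.x
    return True, L, R                 # here round(L @ R.T) equals B
\end{verbatim}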
\eat{
Algorithm~\ref{Alg:RandomisedDecisionAlgo} gives the pseudocode
for \proje.
We utilise Algorithm~\ref{Alg:RandomisedDecisionAlgo} to compute
an approximation of $\rrank(\mB)$: For $d = 1$ we query if
$\rrank(\mB) \leq d$. If the result is true, then we output $d$ as an approximation
for the rounding rank. Otherwise, we set $d = d + 1$ and repeat the procedure.
\begin{algorithm}[tb]
\DontPrintSemicolon
\KwData{A binary matrix $\mB \in \{ 0, 1 \}^{m \times n}$, a dimensionality $d \in \mathbb{N}$
and a rounding threshold $\tau \in \R$.}
\KwResult{\textit{True} and matrices $\mL$ and $\mR$ with $\round_\tau(\mL \mR^T) = \mB$
if the algorithm found a rounding rank decomposition in $\R^d$;
\textit{False} otherwise.}
Sample a dimensionality-reduction mapping $\mA \in \R^{n \times d}$ as in Equation~2 of \cite{Achlioptas03Database} \;
Set $\mL \leftarrow \mB \mA$ \;
\For{ $j \leftarrow 1$ to $n$ } {
Find $\vr_j$ from \eqref{Eq:HyperplaneLP} \;
\If{ the LP had no feasible solution }{
\Return{ False } \;
}
}
Set $\mR \leftarrow \begin{pmatrix} \vr_1 & \cdots & \vr_n \end{pmatrix}^T$ \;
\Return{ True and $\mL$ and $\mR$ } \;
\caption{Algorithm deciding if $\rrank(\mB) \leq d$.}
\label{Alg:RandomisedDecisionAlgo}
\end{algorithm}
}
To solve \minErrorRRproblem{$k$}, we modify the LP of \proje to output
an approximate solution. For this purpose, we introduce non-negative
slack-variables $\vc_i$ as in soft-margin SVMs to allow for errors, and an
objective function that minimizes the $L_1$ norm of the slack variables. We
obtain the following LP:
\begin{alignat}{4}
\min_{\substack{\vc \in \R_{\geq 0}^m \\ \mR_j \in \R^d}} & \quad & \sum_{i = 1}^m &\;\vc_i \notag \\
\text{subject to} & & \sum_{k = 1}^d &\;\mL_{ik} \mR_{jk} + \vc_i &\;\geq\;& \tau + \varepsilon,
&\qquad& \text{if }\mB_{ij} = 1, \notag \\
& & \sum_{k = 1}^d &\;\mL_{ik} \mR_{jk} - \vc_i &\;\leq\;& \tau - \varepsilon,
& & \text{if }\mB_{ij} = 0. \notag
\end{alignat}
\paragraph{Rounded SVD algorithm (\SVD)}
We use rounded SVD to approximate $\rrank(\mB)$. The algorithm
is greedy and similar to the one in \cite{NeighborhoodData}.
Given a binary matrix $\mB$, the algorithm sets $k = 1$. Then
it computes the rank-$k$ truncated SVD of $\mB$ and rounds it. If the rounded
matrix and $\mB$ are equal, it returns $k$, otherwise, it sets $k = k+1$
and repeats. The underlying reasoning is that the rank-$k$ SVD is the
real-valued rank $k$ matrix minimizing the distance to $\mB$ w.r.t.\ the
Frobenius norm. Hence, its rounded version should also be ``close'' to $\mB$.
To approximately solve \minErrorRRproblem{$k$}, we compute the truncated
rank-$\ell$-SVD of $\mB$ for all $\ell = 1, \dots, k$ and return the rounded
matrix with the smallest error.
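A direct sketch of the greedy rank-estimation loop described above is given below in Python (the function name is ours; \texttt{B} is assumed to be a $0/1$ integer matrix and $\tau=\frac{1}{2}$).
\begin{verbatim}
import numpy as np

def rrank_upper_bound_svd(B, tau=0.5):
    """Greedy upper bound on rrank(B) via rounded truncated SVD."""
    U, s, Vt = np.linalg.svd(B.astype(float), full_matrices=False)
    for k in range(1, min(B.shape) + 1):
        A = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k truncated SVD
        if np.array_equal((A >= tau).astype(int), B):
            return k
    return min(B.shape)                          # the full SVD reproduces B
\end{verbatim}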
\paragraph{Logistic Principal Component Analysis (\lpca)}
The logistic function $f(x) = \left(1 + e^{-x}\right)^{-1}$ is a differentiable
surrogate of the rounding function and it can be used to obtain a smooth approximation of the rounding.
\lpca~\cite{schein03generalized} models each $\mB_{ij}$ as a Bernoulli random
variable with success probability $f(\langle \mL_i,\mR_j\rangle)$, where $\mL
\in \R^{m \times k}$ and $\mR \in \R^{n \times k}$ are unknown parameters. Given
$\mB$ and $k \in \mathbb{N}$ as input, L-PCA obtains (approximate)
maximum-likelihood estimates of $\mL$ and $\mR$. If each $f(\langle
\mL_i,\mR_j\rangle)$ is a good estimate of the probability that $\mB_{ij}=1$, then $\lVert \mB
- \round(\mL\mR^T) \rVert_F$ should be small.
To approximate $\rrank(\mB)$, we
run \lpca on $\mB$ for $k = 1$ and check if $\round(\mL \mR^T) = \mB$.
If this is the case, we return $k$, otherwise, we set $k = k + 1$ and repeat.
To use \lpca to compute an approximation of \minErrorRRproblem{$k$}, we
simply run \lpca and apply rounding.
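For concreteness, a bare-bones gradient-descent version of this model is sketched below in Python. Our experiments use the implementation of \cite{schein03generalized}, so this snippet (function name, step size, clipping, and iteration count are ours) only illustrates the objective being optimized.
\begin{verbatim}
import numpy as np

def logistic_pca(B, k, iters=500, lr=0.1, seed=0):
    """Fit L (m x k), R (n x k) maximizing the Bernoulli likelihood with
    success probabilities sigma(<L_i, R_j>), by plain gradient descent."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    L = 0.01 * rng.standard_normal((m, k))
    R = 0.01 * rng.standard_normal((n, k))
    for _ in range(iters):
        X = np.clip(L @ R.T, -30.0, 30.0)     # avoid overflow in exp
        P = 1.0 / (1.0 + np.exp(-X))          # sigma(L R^T)
        G = P - B                             # gradient w.r.t. L R^T
        L, R = L - lr * (G @ R) / n, R - lr * (G.T @ L) / m
    return L, R
\end{verbatim}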
\paragraph{Permutation algorithm (\shay)}
The only known algorithm to approximate the sign rank of an $n \times n$ matrix
in polynomial time was given in~\cite{SignRankVsVCDimension}; it guarantees an
upper bound within an approximation ratio of $O(n / \log n)$. By
Prop.~\ref{prop:changingRoundingThreshold}, we can use this method to
approximate the rounding rank. The algorithm permutes the rows of the
input matrix $\mB$ s.t.\ the maximum number of bit flips over all
columns is approximately minimized. It then algebraically
approximates $\rrank(\mB)$ by evaluating a certain polynomial
based on the occurring bit flips.
\eat{
\sn{Is this intuition OK? In case you are interested, the polynomial part works the following way:
Consider the matrix $\mB$ after reordering. We build a polynomial for each column of $\mB$.
A bit flip from 0 to 1 (or vice versa) in a column of $\mB$ corresponds to a root of the polynomial
which as input gets the number of the row in which the flip occurs.
Now we create the factorization $\mL$ and $\mR$ by writing the coefficients of the polynomial into
a row of $\mL$ and the points at which we evaluate the polynomial into the column of $\mR$.}
}
The algorithm cannot solve the \minErrorRRproblem{$k$} problem.
\paragraph{Nuclear norm algorithm (\nuclear)}
The nuclear norm $\norm{\mX}_*$ of a matrix $\matr{X}$ is the sum of the
singular values of $\matr{X}$ and is a convex surrogate of
the rank function of a matrix. A common relaxation for minimum-rank matrix
factorization is to minimize $\norm{\mX}_*$ instead of $\rank(\mX)$. In our
setting, we obtain the following minimization problem:
\begin{alignat}{3}
\mX^* = &\argmin_{\mX \in \R^{m\times n}} &\quad & \norm{\mX}_* & & \notag \\
&\;\text{subject to} & & \mX_{ij} \geq \tau &\qquad& \text{if }\mB_{ij} = 1, \notag \\
& & & \mX_{ij} < \tau & &\text{if }\mB_{ij} = 0. \notag
\end{alignat}
This method has some caveats: While $\mX^*$ must have small singular values, it
may still have \emph{many}. Additionally, by Prop.~\ref{prop:hugeEntries}, some
entries of a matrix $\mA$ achieving the rounding rank might be extremely
large. In such a case, some of the singular values of $\mA$ must also be large,
and consequently the nuclear norm of the matrix is large. Thus, $\mX^*$ might
have a rank that is too large. This algorithm cannot be extended to solve
\minErrorRRproblem{$k$}.
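For completeness, the relaxation can be written down almost verbatim in a modern modeling language. The Python/\texttt{cvxpy} sketch below is only an illustrative analogue of our Matlab/CVX/SeDuMi setup (the function name is ours); the strict inequality for the zero entries is replaced by a small margin \texttt{eps}, since convex solvers cannot handle strict constraints.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def nuclear_relaxation(B, tau=0.5, eps=1e-3):
    """Minimize the nuclear norm subject to the rounding constraints."""
    B = np.asarray(B, dtype=float)
    X = cp.Variable(B.shape)
    constraints = [
        cp.multiply(B, X) >= tau * B,                    # X_ij >= tau where B_ij = 1
        cp.multiply(1 - B, X) <= (tau - eps) * (1 - B),  # X_ij <= tau - eps where B_ij = 0
    ]
    prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints)
    prob.solve()
    return X.value
\end{verbatim}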
\section{Conclusions}
\label{sec:conclusions}
Rounding rank is a natural way to characterize the commonly-applied rounding
procedure.
Rounding rank has some significant differences to real rank: for example,
restricting the factor matrices to be non-negative has almost no consequences to
rounding rank. Rounding rank provides a robust definition of an intrinsic
dimension of a data, and as we saw in the experiments, real-world data sets can
have surprisingly small rounding ranks. At the same time, rounding rank-related
problems appear naturally in various different fields of data analysis; for
example, the connection to nested matrices is somewhat surprising, and allowed
us to develop new algorithms for the \BMNA problem.
Unfortunately, computing the rounding rank, and the related minimum-error decomposition, is computationally very hard. We have studied a
number of algorithms---based on common algorithm design paradigms in data
mining---in order to understand how well they behave in our problems. None of
the tested algorithms emerges as a clear winner, though.
The most obvious future research direction is to find better
algorithms that aim directly for good rounding rank decompositions and scale to
larger data sizes. Another question is if the factors obtained by a rounding
rank decomposition reveal \emph{interpretable} insights into the data.
The
connections of rounding rank to other problems also suggest natural follow-up
questions.
For example, communities in graphs are often nested (sub-)matrices~\cite{metzler16hyperbolae}. Could rounding rank decompositions be used to find non-clique-like communities?
\section{Discussion}
\label{sec:discussion}
\section{Experiments}
\label{sec:experiments}
We conducted an experimental study on synthetic and real-world datasets to
evaluate the relative performance of each algorithm for estimating the rounding
rank or for \minErrorRRproblem{$k$}.
\subsection{Implementation Details}
\label{sec:exp:implementation}
For
\lpca, we used the implementation by the authors of \cite{schein03generalized}.
We implemented \nmt and \shay in C and all other algorithms in
Matlab. For \nuclear, we used the CVX package with the SeDuMi solver
\cite{cvx}. For solving the linear programs in \proje, we used Gurobi.
Due to numerical instabilities, \nuclear often returned a matrix with only
positive singular values (i.e.\ of full rank). We countered this by zeroing those smallest singular values of the returned matrix that did not affect the result of the rounding.
All experiments were conducted on a computer with eight Intel Xeon E5530
processors running at 2.4\,GHz and 48\,GB of main memory. All our
algorithms and the synthetic data generators are available online.\!\footnote{\url{http://dws.informatik.uni-mannheim.de/en/resources/software/rounding-rank/}}
\subsection{Results With Synthetic Data}
\label{sec:exp:synthetic}
We start by studying the behavior of the algorithms under controlled synthetic datasets.
\subsubsection{Data generation}
\label{sec:exp:synth:data}
We generated synthetic data by sampling two matrices $\mL\in\R^{m\times
k}$ and $\mR\in\R^{n\times k}$ and then rounding their product to obtain $\mB =
\round_\tau(\mL\mR^T)$ with rounding rank \emph{at
most} $k$. The actual rounding rank of $\mB$ can be lower, however, because
there may be matrices $\mL'\in\R^{m\times k'}$ and $\mR'\in\R^{n\times
k'}$ with $k' < k$ and $\round_\tau(\mL'\mR'^T) = \mB$. (In fact, we
sometimes found such matrices.) In some experiments, we additionally applied
noise by flipping elements selected uniformly at random. We report as
\emph{noise level} $p$ the ratio of the number of flipped elements to the number
of non-zeros in the original noise-free matrix.
We sampled every element of $\mL$ and $\mR$ i.i.d.~using two families of
distributions: uniform and normal distribution. For both distributions, we first
pick a desired expected value $\mu = \E[(\mL\mR^T)_{ij}]$ of each entry in
$\mL\mR^T$. We then parameterize the distributions such that the expected value
for an element of $\mL$ or $\mR$ is $q = \sqrt{\mu/k}$. For the normal
distribution, we set the variance to $1$, and for the uniform distribution, we
sampled from range $[q-1/2, q+1/2]$.
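For reproducibility, the generator can be summarized in a few lines of Python; the sketch below (function and parameter names are ours) follows the description above for rounding threshold $\tau=\frac{1}{2}$.
\begin{verbatim}
import numpy as np

def make_synthetic(m, n, k, mu, p=0.0, dist="uniform", tau=0.5, seed=None):
    rng = np.random.default_rng(seed)
    q = np.sqrt(mu / k)                 # expected value of a factor entry
    if dist == "uniform":
        L = rng.uniform(q - 0.5, q + 0.5, size=(m, k))
        R = rng.uniform(q - 0.5, q + 0.5, size=(n, k))
    else:                               # normal with mean q and variance 1
        L = rng.normal(q, 1.0, size=(m, k))
        R = rng.normal(q, 1.0, size=(n, k))
    B = (L @ R.T >= tau).astype(int)
    # flip p * nnz(B) entries chosen uniformly at random (noise level p)
    n_flips = int(p * B.sum())
    idx = rng.choice(m * n, size=n_flips, replace=False)
    Bf = B.ravel().copy()
    Bf[idx] ^= 1
    return Bf.reshape(m, n)
\end{verbatim}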
We generated two sets of matrices. In the first set, the matrices were very
small, and it was used to understand the behavior of the slower algorithms. In
the second set, the matrices were medium-sized, to give us more realistic-sized
data, but we could use only some of the methods with these data. When generating
the data, we varied four different parameters: number of rows $m$, the planted
rank $k$, the expected value $\mu$, and the level of noise $p$. In all
experiments, we varied one of these parameters, while keeping the others
fixed. We generated all datasets with rounding threshold $\tau=1/2$. For the
small data, we used $n=100$ columns and the number of rows varied from $60$ to
$220$ with steps of $40$, with the default value being $m=100$.
The rank $k$ in the small matrices varied from $5$ to $30$
with steps of $5$, default being $k=10$; the expected value $\mu$ varied from
$0.1$ to $0.7$ with $0.1$ steps (default was $\mu=0.5$); the noise level $p$
varied from $0.05$ to $0.5$ with steps of $0.05$, and by default we did not
apply any noise. We generated ten random matrices with each parameter setting to
test the variation of the results.
For the medium-sized matrices, we used $n=300$ columns and the number of rows
varied from $400$ to $600$ with steps of $50$, the default being $m=500$; the
planted rank $k$ varied from $40$ to $100$ with default value $k = 60$; the
expected value and the noise were as with the small data. We generated five
random matrices with each parameter setting.
\subsubsection{Rounding rank}
\label{sec:exp:synth:rank}
In our first set of experiments, we studied the performance of the different
algorithms for estimating the rounding rank. The results for the small synthetic
datasets are summarized in Fig.~\ref{fig:synth:small}. The results are given for
the uniformly distributed factor matrices; the results with normally distributed
factors were largely similar and are postponed to the appendix.
\begin{figure*}[tb]
\centering
\includegraphics[height=\legendheight]{rank_small_legend} \\
\rotatebox[origin=l]{90}{\hspace*{1em}\small Uniform dist.}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $m$]{%
\label{fig:synth:small:n:unif:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_n_unif_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $k$]{%
\label{fig:synth:small:k:unif:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_k_unif_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $\mu$]{%
\label{fig:synth:small:dens:unif:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_dens_unif_rank}%
}\hspace*{\smallfigsep}%
\subfigure[Rank, vary $p$]{%
\label{fig:synth:small:noise:unif:rank}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_noise_unif_rank}%
}\\
\rotatebox[origin=l]{90}{\hspace*{3em}\small Time}\hspace*{\smallfigsep}%
\subfigure[Time, vary $m$]{%
\label{fig:synth:small:n:unif:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_n_unif_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $k$]{%
\label{fig:synth:small:k:unif:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_k_unif_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $\mu$]{%
\label{fig:synth:small:dens:unif:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_dens_unif_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary $p$]{%
\label{fig:synth:small:noise:unif:time}%
\includegraphics[width=\smallfigwidth]{rank_small_vary_noise_unif_time}%
}%
\caption{Estimated rounding ranks and running times on small synthetic data varying different parameters. The top row gives estimated rank for uniformly distributed factor matrices and the bottom row shows running times. \shay can only run on square matrices and was excluded from the ``vary $m$'' experiments. All data points are averages over 10 random matrices and the width of the error bars is twice the standard deviation. }
\label{fig:synth:small}
\end{figure*}
We used \proje, \nuclear, \SVD, and \lpca. We also used \shay in all experiments
except when we varied the number of rows (\shay only works with square
matrices). We also computed a lower bound \lb~on $\rrank_0$ using
Prop.~\ref{prop:LowerBound}. Finally, in experiments with no noise, we also plot
the planted rank (inner dimension of the factor matrices), which acts as an
\emph{upper bound} of the actual rounding rank.
As can be seen from Fig.~\ref{fig:synth:small}, the estimated lower bound is
almost always less than 3, even when the data contains significant amounts of noise. It
seems reasonable to assume that the true rounding rank of the data is therefore
closer to the upper bound given by our planted rank than to the estimated lower bound
given by \lb.
Of the algorithms tested here, \proje and \shay are the only ones that aim
directly at finding the rounding rank, with \shay being the only one with
approximation guarantees (albeit weak ones). Our experiments show that \shay is
not competitive with most other methods; good theoretical properties do not ensure
good practical behavior. \proje performs much better, being typically the
second-best method.
\SVD is commonly employed in the literature, but our experiments show clearly that for computing the rounding rank, it is not recommended.
\lpca consistently produced the smallest (i.e.~best) rank estimate but it was also the second-slowest
method. \proje, the second-best method for
estimating the rank, was much faster.
\SVD often produced the worst estimates, but it is also the fastest
method.
The running times are broadly as expected: \nuclear has to solve a semidefinite programming problem, \lpca iteratively solves dense least-squares problems, \proje only needs to solve linear programs, and \SVD computes a series of orthogonal projections.
Varying the different parameters yielded mostly expected results, the most interesting observation being how little effect the rank and the noise had on the results. We assume that this is (at least partially) due to the robustness of the rounding rank: increasing the noise, say, might not have increased the rounding rank of the matrix. This is clearly observed when the rank is varied (Fig.~\ref{fig:synth:small:k:unif:rank}), where \lpca actually obtains a smaller rounding rank than the planted one.
\subsubsection{Minimum-error decomposition} \label{sec:exp:synth:err}
We now study the algorithms' capability to return low-error fixed-rank
decompositions. We leave out \shay and \nuclear as they only approximate
rounding rank. Instead, we add a method to compare against: \truncSVD.
It computes the standard truncated SVD, that is, we do not apply
any rounding. \truncSVD is used for providing a baseline: in principle, the methods
that apply rounding should give better results as they utilize the added
information that the final matrix must be binary. At the same time, however, the
rounding procedure may emphasize small errors (e.g., incorrectly representing a
$1$ with $0.49$ contributes $\approx 0.26$ to the sum of squares; after
rounding, the contribution is $1$). We also tested the \asso~\cite{miettinen08discrete} algorithm for Boolean matrix factorization (BMF).
Like any BMF algorithm, \asso returns a rounding rank decomposition restricted to binary factor matrices.
The performance of \asso's approximations was so much worse than the performance
of the other methods that we decided to omit it from the results.
To compare the algorithms, we use the relative reconstruction error, that is,
the squared Frobenius norm of the distance between the data and its
representation relative to the squared norm of the data. For all method except
\truncSVD, the relative reconstruction error agrees with the absolute number of
errors divided by the number of non-zeros in the data.
\begin{figure*}[tb]
\centering
\includegraphics[height=\legendheight]{err_big_legend} \\
\rotatebox[origin=l]{90}{\hspace*{1em}\small Uniform dist.}\hspace*{\smallfigsep}%
\subfigure[Error, vary $m$]{%
\label{fig:synth:big:n:unif:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_n_unif_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $k$]{%
\label{fig:synth:big:k:unif:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_k_unif_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $\mu$]{%
\label{fig:synth:big:dens:unif:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_dens_unif_err_kplus_20}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary $p$]{%
\label{fig:synth:big:noise:unif:err}%
\includegraphics[width=\smallfigwidth]{err_big_vary_noise_unif_err_kplus_20}%
}%
\caption{Relative reconstruction errors on medium-sized synthetic data with uniformly distributed factors. The results of \asso are omitted as they were significantly worse than the other results. All data points are averages over 10 random matrices and the width of the error bars is twice the standard deviation. }
\label{fig:synth:err}
\end{figure*}
The results for these experiments are presented in Fig.~\ref{fig:synth:err}. We
only report the reconstruction errors with uniformly distributed factors:
the running times were as with the above experiments, and the results with normally
distributed factors were generally similar to the reported ones. The other results are in the appendix. As in the above
experiments, \lpca is the best method, and the slowest as well, taking sometimes
an order of magnitude longer than \proje. The best all-rounder here, though, is
the \SVD method: it provided reasonable results and was by far the fastest method.
\subsection{Results with Real-World Data} \label{sec:exp:real}
We now turn our attention to real-world datasets. For these experiments we
used only \proje, \lpca, and \SVD to estimate the rounding rank, and added
\truncSVD and \asso for the minimum-error decompositions.
\paragraph{Datasets} The basic properties of the datasets are listed in
Tab.~\ref{tab:real:rank:int}. The \abstracts data
set\footnote{\url{http://kdd.ics.uci.edu/databases/nsfabs/nsfawards.html}} is a
collection of project abstracts that were submitted to the National Science
Foundation of the USA in applications for funding. The data is
a documents-by-terms matrix indicating which terms appear in which documents. The
\dblp data\footnote{\url{http://dblp.uni-trier.de/db/}} is an
authors-by-conferences matrix containing information who published where. The
\now data set\footnote{\url{http://www.helsinki.fi/science/now/}} contains
information about the locations at which fossils of certain species were
found. It was fetched by \cite{NOWData} and preprocessed according to
\cite{NOWData2}. The \dialect data~\cite{DialectsData1,DialectsData2} contains
information about which linguistic features appear in the dialect spoken in
various parts of Finland. The \apj dataset is a binary matrix containing access
control rules from Hewlett-Packard~\cite{RoleMining}.
\paragraph{Rounding rank} First we computed the upper bounds for the rounding
ranks with the different methods. The results and running times are shown in
Tab.~\ref{tab:real:rank:int}. As with the synthetic experiments, \lpca is
again giving the best results, followed by \proje and \SVD, the latter of which
returns often significantly worse results than the other two. In the running
times the order is reversed, \lpca taking orders of magnitude longer than
\proje, which is still slower than \SVD.
\begin{table}[tb]
\caption{Upper bounds for rounding rank with $\tau=0.5$ for the
real-world data. Known Boolean ranks from~\cite{belohlavek15from-below}. \lpca did not finish on the \abstracts data in reasonable time.}
\label{tab:real:rank:int} \label{tab:real:rank}\label{tab:real:ranktime}
\centering \small { \setlength{\tabcolsep}{3pt}
\input{real_ranks_tab_int.tex}
}
\end{table}
Note that the estimated rounding
ranks in Tab.~\ref{tab:real:rank:int} are significantly less than the respective
normal or Boolean ranks. For example, for the \apj data, the normal rank is
$455$, the Boolean rank is $453$, but \lpca~shows that the rounding rank is at
most $9$. Similarly, the normal and Boolean ranks for \dblp are $19$, while the
rounding rank is no more than $11$. In most cases, the rounding rank is about an
order of magnitude smaller than the real rank.
This shows that the expressive power of the methods significantly increases by applying the rounding.
\paragraph{Minimum-error decompositions} The relative reconstruction errors for
the real-world datasets together with running times are presented in
Tab.~\ref{tab:real:err}. Again, \lpca is often---but not always---the best
method, especially with higher ranks. Again, the running time was high
though. An exception to this is the \abstracts data, where \lpca is in fact
faster than \proje (although it is still extremely slow). Again, \proje is often
the second-best, and more consistently so with higher ranks.
\begin{table*}[t] \caption{Reconstruction errors relative to the number of
non-zeros and running times in real-world data.} \label{tab:real:err} \centering \small
{\setlength{\tabcolsep}{3pt} \input{real_errors_times_2_tab.tex} } \end{table*}
\subsection{Nestedness} \label{sec:exp:nestedness}
Here we studied the possibility of using the non-negative
rounding rank-1 decomposition to solve the \BMNA problem. For this purpose, we
generated nested matrices, perturbed them with noise, and tried to find the
closest nested matrix using \nmt, \nexhaust, their combination \nmtexhaust, and
\SVD. All nested matrices were \by{200}{300} and we varied the density of the
data (from $0.1$ to $0.7$ with steps of $0.1$) and the noise level (from $0.05$
to $0.5$ with steps of $0.05$). A default density of $\mu=0.5$ was used when the
noise was varied, and noise level $p=0.15$ was used when the density was
varied.
Our results are shown in Fig.~\ref{fig:nested}. \nmt and \nexhaust produced
similar results, with \nmt being slightly better. The combined \nmtexhaust is no
better than \nmt, and \SVD is significantly worse. In the running times, though,
we see that \nmt takes much more time than the other approaches.
\begin{figure*}[tbp]
\centering
\includegraphics[height=\legendheight]{nested_legend}
\subfigure[Error, vary density]{%
\label{fig:nested:dens:err}%
\includegraphics[width=\smallfigwidth]{nested_dens_err}%
}\hspace*{\smallfigsep}%
\subfigure[Error, vary noise level]{%
\label{fig:nested:noise:err}%
\includegraphics[width=\smallfigwidth]{nested_noise_err}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary density]{%
\label{fig:nested:dens:time}%
\includegraphics[width=\smallfigwidth]{nested_dens_time}%
}\hspace*{\smallfigsep}%
\subfigure[Time, vary noise]{%
\label{fig:nested:noise:time}%
\includegraphics[width=\smallfigwidth]{nested_noise_time}%
}%
\caption{Relative reconstruction errors and running times on nested data. The running times for \nmtexhaust\ exclude the running time of the \nmt algorithm. All data points are averages over 10 random matrices and the width of the error bars is twice the standard deviation. }
\label{fig:nested}
\end{figure*}
\section{Introduction}
\label{sec:introduction}
When facing data that can be expressed as a binary matrix, the data analyst
usually has two options: either she uses combinatorial methods---such as
frequent itemset mining or various graph algorithms---that will retain the
binary structure of the data, or she applies some sort of continuous-valued
matrix factorization---such as SVD or NMF---that will represent the binary
structure with continuous approximations. The different approaches come with
different advantages and drawbacks. Retaining the combinatorial structure is
helpful for interpreting the results and can preserve better other
characteristics such as sparsity. Continuous methods, on the other hand, are
often more efficient, yield better reconstruction errors, and may be interpreted
probabilistically.
A third alternative, often applied to get ``the best of both worlds,'' is to
perform a continuous factorization first and apply some function to the elements
of the reconstructed matrix to make them binary afterwards. In probabilistic
modelling, for example, the logistic function is commonly used to map real
values into the unit range. We can obtain a binary reconstruction by
rounding, i.e.\ by setting all values less than $1/2$ to $0$ and the remaining
values to $1$. Alternatively, for $\{-1,1\}$ matrices, we may take the sign of
the values of a continuous factorization to obtain a discrete representation.
Even though such methods are commonly used, relatively little is known about the consequences of this thresholding process.
There are few, if any, methods that
aim at finding a matrix that rounds exactly to the given binary data, or at finding
a low-rank matrix that causes only a small error when rounded (although
there are methods that exhibit such behavior as a by-product). Almost
nothing is known about the theoretical properties of such decompositions.
In this paper, we give a comprehensive treatise of these topics. We introduce
the concept of \emph{rounding rank}, which, informally, is defined to be the least
rank of a real matrix that rounds to the given binary matrix. But does it matter
how we do the rounding? How will the results change if we constrain ourselves to
nonnegative factorizations? A solid theoretical understanding of the properties
of rounding rank will help data miners and method developers to understand what
happens when they apply rounding.
Some of our results are novel, while others are based on results obtained from
related topics such as \emph{sign rank} and \emph{dot product graphs}.
Studying rounding rank is not only of theoretical interest. The concept can
provide new insight or points of view for existing problems, and lead to
interesting new approaches. In essence, rounding rank provides another
\emph{intrinsic dimensionality} of the data (see,
e.g.~\cite{tatti06what}). Rounding rank can be used, for example, to determine
the minimum number of features linear classifiers need for multi-label
classification
or the minimum number of
dimensions we need from a dimensionality reduction algorithm.
There is also a close relationship to \emph{nested matrices}~\cite{junttila11},
a particular type of binary matrices that occur, for example, in ecology.
We show that nested matrices are equivalent to matrices with a non-negative
rounding rank of 1 and use this property to develop a new algorithm for the
problem of finding the closest nested matrix.
But just knowing about the properties of rounding rank will not help if we
cannot find good decompositions. As data miners have encountered problems
related to rounding rank earlier, there are already existing algorithms for
closely related problems. In fact, any low-rank matrix factorization algorithm
could be used for estimating (or, more precisely, bounding) the rounding rank,
but not all of them would work equally well. To that end, we survey a number of
algorithms for estimating the rounding rank and for finding the least-error
fixed rounding rank decomposition. We also present some novel methods.
One major contribution of this paper is an empirical evaluation of these
algorithms.
Our experiments aim to help the practitioners in choosing the correct algorithm
for the correct task: for example, if one wants to estimate the rounding rank of
a binary matrix, simply rounding the truncated singular value decomposition may
not be a good idea.
\section{Nested Matrices}
\label{sec:nestedness}
A binary matrix is nested if we can reorder its
columns such that after the reordering, the one-entries in each row form a
contiguous segment starting from the first column~\cite{MannilaTerziNested}.
Intuitively, nested matrices model subset/superset relationships between the
rows and columns of a matrix. Such structures are, for example, found in
presence/absence data of locations and species~\cite{MannilaTerziNested}.
We show that nested matrices are exactly the matrices with non-negative rounding
rank~1. Formally,
a binary matrix $\mB$ is \emph{directly nested} if for each one-entry
$\mB_{ij}=1$, we have $\mB_{i'j'}=1$ for all $i' \in \{1, \dots, i\}$
and $j' \in \{1, \dots, j\}$.
A binary matrix $\mB$ is \emph{nested} if there exist permutation
matrices $\mP_1$ and $\mP_2$, such that $\mP_1\mB\mP_2$ is directly nested.
\begin{theorem}
\label{Thm:NestedMatrices}
Let $\bm0\neq\mB \in \{ 0, 1\}^{m \times n}$. Then $\mB$ is nested if and only if
$\rrankp(\mB) = 1$.
\end{theorem}
\begin{proof}
$\Rightarrow$: We reorder the rows and columns of
$\mB$ by the number of $1$s they contain in descending order. This gives us
permutation matrices $\mP_1$ and $\mP_2$ s.t.\ $\mB' = \mP_1 \mB \mP_2$ is
directly nested. Set $\vp = \mB' \bm1$, i.e., $\vp$ is the vector containing
the row sums of $\mB'$. Then for $\vec{l'}$ and $\vec{r'}$ with $\vec{l}'_i =
2^{\vec{p}_i - 1}$ and $\vec{r}'_j = 2^{-j}$, $\mB' = \round(\vec{l}'
\cdot (\vec{r}')^T)$. Setting $\vec{l} = \mP_1^T \vec{l}'$ and $\vec{r} =
\mP_2 \vec{r}'$, we get $\mB = \round(\vl \cdot \vr^T)$. Hence, we have
$\rrankp(\mB) = 1$, as $\vl$ and $\vr$ are non-negative.
$\Leftarrow$: Let $\vl \geq 0$ and $\vr \geq 0$ be s.t.\ $\mB = \round(\vl
\vr^T)$. Then there exist permutation matrices $\mP_1$ and $\mP_2$ s.t.\ for
$\vl' = \mP_1 \vl$ we have $\vl_1' \geq \dots \geq \vl_m'$ and for $\vr' =
\mP_2^T \vr$ we have $\vr_1' \geq \dots \geq \vr_n'$. Set $\mB' = \round(\vl'
(\vr')^T)$ and observe $\vl_i' \vr_j' \geq \vl_{i+1}' \vr_j'$ for all
$i,j$. Therefore, for each entry of $\mB'$, $\mB'_{ij} = \round(\vl_i' \vr_j')
\geq \round(\vl_{i+1}' \vr_j') = \mB'_{(i+1)j}$. Similarly,
$\mB'_{ij} = \round(\vl_i' \vr_j') \geq \round(\vl_i' \vr_{j+1}') =
\mB'_{i(j+1)}$. Therefore, $\mB'$ is directly nested. We conclude that $\mB$
is nested since $\mB = \round(\vl \vr^T) = \mP_1^{T}
\round(\mP_1(\vl \vr^T)\mP_2) \mP_2^{T} = \mP_1^{T} \mB' \mP_2^{T}$.
\end{proof}
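The constructive direction of the proof translates directly into code. The following Python sketch (the function name is ours; \texttt{B} is assumed to be a $0/1$ integer NumPy array) computes non-negative rank-1 factors for a nested matrix, which can then be used to verify $\round(\vl\vr^T)=\mB$.
\begin{verbatim}
import numpy as np

def nested_rank1_factors(B):
    """Non-negative l, r with round_{1/2}(l r^T) = B for a nested B."""
    # sort rows and columns by their number of ones, in descending order
    row_order = np.argsort(-B.sum(axis=1), kind="stable")
    col_order = np.argsort(-B.sum(axis=0), kind="stable")
    Bp = B[np.ix_(row_order, col_order)]   # directly nested if B is nested
    p = Bp.sum(axis=1)                     # row sums of the reordered matrix
    l_p = 2.0 ** (p - 1)
    r_p = 2.0 ** (-(np.arange(B.shape[1]) + 1))
    # undo the permutations
    l = np.empty_like(l_p); l[row_order] = l_p
    r = np.empty_like(r_p); r[col_order] = r_p
    return l, r
# verification: np.array_equal((np.outer(l, r) >= 0.5).astype(int), B)
\end{verbatim}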
Binary matrices with rounding rank~1 are also closely related to nested matrices.
\begin{proposition}
\label{prop:rrank1_eq_block_nested}
Let $\bm0\neq\mB \in \{ 0, 1\}^{m \times n}$. The following statements are
equivalent:
\begin{enumerate}
\item $\rrank(\mB) = 1$.
\item \eat{$\mB$ is nested or} there exist permutation matrices $\mP_1$ and $\mP_2$
and nested matrices $\mB_1$ and $\mB_2$, such that
\begin{align*}
\mB = \mP_1
\begin{pmatrix}
\mB_1 & 0 \\
0 & \mB_2
\end{pmatrix}\mP_2.
\end{align*}
\end{enumerate}
\end{proposition}
The proof is in the appendix.
\paragraph{Algorithms}
Mannila and Terzi~\cite{MannilaTerziNested} introduced the \emph{Bidirectional
Minimum Nestedness Augmentation} (\BMNA) problem: Given a binary matrix $\mB$,
find the nested matrix $\mA$ which minimizes $\norm{\mB - \mA}_F$. We will
discuss three algorithms to approximately solve this problem.
\cite{MannilaTerziNested} gave an algorithm, \nmt, which approximates
a solution for the \BMNA problem by iteratively eliminating parts of the matrix
that violate the nestedness.
Next, we propose an alternating minimization algorithm, \nexhaust, which exploits
Th.~\ref{Thm:NestedMatrices}.
\nexhaust maintains two vectors $\vec{l} \in \nR^m$ and $\vec{r} \in \nR^n$ and
iteratively minimizes the error $\lVert\mB - \round(\vec{l} \cdot
\vec{r}^T)\rVert_F$. It starts by fixing $\vec{r}$ and updates $\vec{l}$, such
that the error is minimized. Then $\vec{l}$ is fixed and $\vec{r}$ is updated.
This procedure is repeated until the error stops reducing or we have reached a
certain number of iterations.
We describe an update of $\vec{l}$ for fixed $\vec{r}$; updating $\vr$ for given
$\vl$ is symmetric. Observe that changing entry $\vl_i$ only alters the $i$'th
row of $\mA = \vl \cdot \vr^T$, and consequently $\mA_i$ is not affected by any
$\vl_k$ with $k \neq i$. Hence, we only describe the procedure for updating
$\vl_i$. Define the set $V_i = \{ \vr_j : \mB_{ij} = 1\}$ of all values of
$\vr$ where $\mB_i$ is non-zero. We make the following observations: If we set
$\vl_i < \frac{1}{2 \max(\vr)}$, then $\mA_i$ only contains zeros after the
update. If $\frac{1}{2 \max(\vr)} < \vl_i < \frac{1}{2 \max(V_i)}$, then after
the update all non-zeros of $\mA_i$ will be in entries where $\mB_i$ has a zero.
If $\vl_i > \frac{1}{2 \min(V_i)}$, we add too many $1$s to $\mA_i$. Thus, all
values that we need to consider for updating $\vl_i$ are $\frac{1}{2 \max(\vr)}$
and the values in $\{ \frac{1}{2v} : v \in V_i\}$. The algorithm tries all
possible values for $\vl_i$ exhaustively and computes the error at each step.
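A sketch of this update for a single coordinate is given below in Python. The function name is ours, and the candidate value $0$ is added explicitly (an assumption on our part) to cover the all-zero row; we also assume that $\vr$ has at least one positive entry.
\begin{verbatim}
import numpy as np

def update_l_i(B_i, r, tau=0.5):
    """Exhaustive update of a single entry l_i for fixed r (a sketch)."""
    V_i = r[B_i == 1]
    candidates = [0.0, 1.0 / (2.0 * r.max())]
    candidates += [1.0 / (2.0 * v) for v in V_i if v > 0]
    best_val, best_err = 0.0, np.inf
    for l_i in candidates:
        row = (l_i * r >= tau).astype(int)    # i'th row of round(l r^T)
        err = np.sum(row != B_i)
        if err < best_err:
            best_val, best_err = l_i, err
    return best_val, best_err
\end{verbatim}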
We can also use the results of \nmt as initialization for \nexhaust:
We run \nmt and obtain a nested matrix $\mA$. Now we use the construction from
the first part of the proof of Th.~\ref{Thm:NestedMatrices} to obtain $\vl$ and $\vr$
with $\mA = \round(\vl \vr^T)$, and try to improve using \nexhaust.
Finally, we can use \SVD to solve the \BMNA problem approximately. By the
Perron--Frobenius Theorem \cite[Ch. 8.4]{horn13matrix}, the principal left and
right singular vectors of a non-negative matrix are also non-negative. Hence we
may use the \SVD algorithm to obtain the rank-$1$ truncated SVD and round. By
Th.~\ref{Thm:NestedMatrices}, the result must be nested.
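This yields a particularly short baseline; a Python sketch is given below (the function name is ours, and since NumPy may return the principal singular vector pair with both signs flipped, the sign is normalized first).
\begin{verbatim}
import numpy as np

def nested_via_svd(B, tau=0.5):
    """Round the rank-1 truncated SVD to get a nested approximation of B."""
    U, s, Vt = np.linalg.svd(np.asarray(B, dtype=float), full_matrices=False)
    u, v = U[:, 0], Vt[0, :]
    if u.sum() < 0:                  # fix the arbitrary sign of the SVD
        u, v = -u, -v
    return (s[0] * np.outer(u, v) >= tau).astype(int)
\end{verbatim}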
\section{Notation}
\label{sec:notation}
\rem{Stefan: This should be folded inside the theory section (the only interesting notation is $\mA_{\leq k}$).}
Matrices will be denoted by bold-faced capital letters,
vectors are usually column vectors and will be referred to using
bold lowercase letters.
The dot product of two vectors is given by
$\langle \cdot, \cdot \rangle$.
We will denote the matrix with the first $k$ columns of a matrix $\mA$
by $\mA_{\leq k}$ and the $i$'th row of $\mA$ by $\mA_i$.
\section*{Categories and Subject Descriptors}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[1]{Part of the work was done when the author was with MPI Informatics, Saarland University and the Saarbr\"ucken Graduate School of Computer Science.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\input{introduction}
\input{theory}
\input{computing}
\input{nestedness}
\input{experiments}
\input{conclusion}
\bibliographystyle{abbrv}
\section{Definitions, Background, and Theory}
\label{sec:theory}
In this section we formally define the rounding rank of a binary matrix, discuss
its properties, and compare it to other well-known
matrix-ranks. Throughout this paper, we use $\mB$ to denote a binary $m \times
n$ matrix.
\subsection{Definitions}
The \emph{rounding function} w.r.t.\ \emph{rounding threshold} $\tau \in \mathbb{R}$
is
\begin{align*}
\round_\tau(x) =
\begin{cases}
1, & \text{if } x \geq \tau, \\
0, & \text{if } x < \tau.
\end{cases}
\end{align*}
We apply $\round_\tau$ to matrices by rounding element-wise, i.e.\ if $\mA
\in \R^{m \times n}$ is a real-valued matrix, then $\round_\tau(\mA)$ denotes an
$m \times n$ binary matrix with $[\round_\tau(\mA)]_{ij} =
\round_\tau(\mA_{ij})$.
\paragraph{Rounding rank}
Given a rounding threshold $\tau \in \mathbb{R}$, the \emph{rounding rank of
$\mB$ w.r.t.\ $\tau$} is given by
\begin{equation}
\label{eq:rrank}
\rrank_\tau(\mB) = \min\{ \rank(\mA) : \mA \in \mathbb{R}^{m \times n}, \round_\tau(\mA) = \mB \}.
\end{equation}
The rounding rank of $\mB$ is thus the smallest rank of any real-valued matrix
that rounds to $\mB$. We often omit
$\tau$ for brevity and write $\round(\mA)$ and $\rrank(\mB)$ for
$\tau=1/2$.
When $\mB$ has rounding rank $k$, there exist matrices $\mL \in \mathbb{R}^{m
\times k}$ and $\mR \in \mathbb{R}^{n \times k}$ with $\mB = \round_\tau(\mL
\mR^T)$. We refer to $\mL$ and $\mR$ as a \emph{rounding rank decomposition} of
$\mB$.
\paragraph{Sign rank}
The \emph{sign matrix} of $\mB$, $\mB^\pm \in \{-1,+1\}^{m \times n}$, is obtained from $\mB$
by replacing every $0$ by $-1$. Given
a sign matrix, its \emph{sign rank} is given by
\begin{equation}
\label{eq:srank}
\srank(\mB^\pm) = \min\{ \rank(\mA): \mA \in \mathbb{R}_{\neq 0}^{m \times n}, \sign(\mA) = \mB^\pm \},
\end{equation}
where $\mathbb{R}_{\neq 0} = \mathbb{R} \setminus \{0 \}$. The sign rank
is thus the smallest rank of any real-valued matrix $\mA$ without $0$-entries
and with $\mB^\pm_{ij} = \sign(\mA_{ij})$ for all $i,j$. The sign rank is
closely related to the rounding rank as
\(
\rrank_0(\mB) \leq \srank(\mB^\pm) \leq \rrank_0(\mB) + 1.
\)
The first inequality holds because for any $\mA\in \mathbb{R}_{\neq 0}^{m \times n}$ with
$\sign(\mA) = \mB^\pm$, we have $\round_0(\mA)^\pm = \sign(\mA)$. The second inequality
holds because if $\round_0(\mA) = \mB$ and $\mA$ contains $0$-entries, we can add
a constant $0<\varepsilon<\min_{a_{ij}<0}\lvert a_{ij}\rvert$ to
each entry of $\mA$ to obtain $\sign(\mA + \varepsilon) = \mB^\pm$ and
$\rank(\mA + \varepsilon) \leq \rank(\mA) + 1$. Even when $\tau\neq 0$, the
differences remain small as suggested by
Prop.~\ref{prop:changingRoundingThreshold}.
\paragraph{Non-negative rounding rank}
We define the
\emph{non-negative rounding rank} of $\mB$ w.r.t.~$\tau$, denoted
$\rrankp_\tau(\mB)$, as the smallest $k$ such that there exist non-negative
matrices $\mL \in \nR^{m \times k}$ and $\mR \in \nR^{n \times k}$ with
$\round_\tau(\mL \mR^T) = \mB$.
\paragraph{Minimum-error rounding rank problem}
The rounding rank is concerned with exact reconstructions of $\mB$. We relax
this by introducing the \emph{\minErrorRRproblemFullname{$k$} problem}: Find a
binary matrix $\mC \in \{0,1\}^{m \times n}$ with $\rrank(\mC) \leq k$ which
minimizes $\norm{\mB - \mC}_F$, where $\norm{\cdot}_F$ denotes the Frobenius
norm. Note that $\norm{\mB - \mC}_F^2$ corresponds to the number of entries in
which $\mB$ and $\mC$ disagree. We denote the problem by
\minErrorRRproblem{$k$}.
\subsection{Related Work}
\label{subsec:related-work}
A number of concepts closely related to rounding rank (albeit less general) have
been studied in various communities.
There is a relationship between rounding rank and dot-product
graphs~\cite{Reiterman92Embedding,Fiduccia98Dot,SphereAndDotProduct}, which
arise in social network analysis~\cite{Young2007}. Let $G$ be a graph with $n$
vertices and adjacency matrix $\mM$. Then $G$ is a \emph{dot-product graph of
rank~$k$} if there exists a matrix $\mL \in \mathbb{R}^{m \times k}$ such that $\mM
= \round_1(\mL \mL^T)$.
The rank of a dot-product graph
corresponds to the \emph{symmetric} rounding rank of its adjacency matrix. In
this paper, we consider asymmetric factorizations and allow for rectangular
matrices.
Sign rank was studied in the communication complexity community in order to
characterize a certain communication model. Consider two players, Alice and
Bob. Alice and Bob obtain private inputs $x, y \in \{0,1\}^n$, respectively, and their
task is to evaluate a function $f : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ on
their inputs. The \emph{communication matrix} $\mM_f$ of $f$ is the $2^n \times
2^n$ binary matrix with $[\mM_f]_{xy} = f(\bin(x),\bin(y))$, where $\bin : 2^n
\to \{0,1\}^n$ denotes the $n$-bit binary encoding of its input number. The
\emph{probabilistic communication complexity} of $f$ is the smallest number of
bits Alice and Bob have to communicate in order to compute $f(x,y)$ correctly
with probability larger than $\frac{1}{2}$. It is known that the probabilistic
communication complexity of $f$ and $\log(\srank(\mM_f))$ differ by
at most one~\cite{GeometricalRealizations,ProbabilisticCommunicationComplexity,LinearLowerBound}.
Sign rank was also studied in learning theory to
understand the limits of large margin
classification~\cite{SignRankVsVCDimension,PartitioningAndGeometricEmbedding,BetterLinearLowerBound,Ben03Limitations};
see Alon et al.~\cite{SignRankVsVCDimension} for a summary of applications of sign rank.
These complexity results focus on achieving lower and upper bounds on sign rank
as well as the separation of complexity classes.
We review some of these results
in subsequent sections and present them in terms of rounding rank, thereby
making them accessible to the data mining community.
Ben-David et al.~\cite[Cor.~14]{Ben03Limitations} showed that
only a very small fraction of the $n \times n$ sign matrices can be
well-approximated (with ``vanishing'' error in at most $n^{-O(1)}$ entries)
by matrices of sign-rank at most $k$ unless $k = \omega(n^{1 - O(1)})$ is very
large.
To the best of our knowledge, there are no known results for fixed relative
error (e.g., 5\% of the matrix entries) or for the \minErrorRRproblem{$k$}
problem.
\subsection{Characterization of Rounding Rank}
Below we give a geometric
interpretation of rounding rank that helps to relate it to problems in data
mining. A similar theorem was presented in the context of communication
complexity~\cite[Th.~5]{ProbabilisticCommunicationComplexity}. Our presentation is in terms of
matrix ranks (instead of communication protocols) and gives a short proof that
provides insights into the relationship between rounding rank and geometric
embeddings.
\begin{theorem}
\label{Thm:CharacterisationRoundingRank}
Let $d \in \mathbb{N}$ and $\tau\in\R$. The
following statements are equivalent:
\begin{enumerate}
\item $\rrank_\tau(\mB) \leq d$.
\item There exist points $\vx_1, \dots, \vx_m \in \mathbb{R}^d$ and affine
hyperplanes $H_1, \dots, H_n$ in $\R^d$ with normal vectors $\vc_1, \dots,
\vc_n \in \mathbb{R}^d$ given by $H_j = \{ \vx \in \R^d : \langle \vx, \vc_j
\rangle = \tau \}$ such that $\round_\tau(\langle \vx_i, \vc_j \rangle) =
\mB_{ij}$ for all $i, j$.
\end{enumerate}
\end{theorem}
\begin{proof}
$2 \Rightarrow 1$: Consider points $\vx_i$ and hyperplanes $H_j$ with the
property asserted in the theorem. Define an $m \times d$ matrix $\mL$ with the
$\vx_i$ in its rows, and an $n \times d$ matrix $\mR$ with the $\vc_j$ in its
rows. Then $\round_\tau(\mL\mR^T) = \mB$, and hence $\rrank_\tau(\mB) \leq d$.
$1 \Rightarrow 2$: Let $\mB = \round_\tau(\mA)$ with $\rank(\mA) \leq d$. Pick
any two real matrices $\mL$ and $\mR$ with $d$ columns s.t.\
$\mL\mR^T=\mA$. We can consider the rows $\mL_i$ of $\mL$ as points in $\R^d$
($\vx_i=\mL_i$) and the rows $\mR_j$ of $\mR$ as the normal vectors
($\vc_j=\mR_j$) of affine hyperplanes $H_j$ with offset $\tau$. Since $\mB =
\round_\tau(\mA)$, we also get $\mB_{ij} = \round_\tau(\langle \mL_i, \mR_j
\rangle)$ for all $i,j$.
\end{proof}
Fig.~\ref{fig:Hyperplanes} illustrates
Th.~\ref{Thm:CharacterisationRoundingRank} in $\R^2$ with $n=3$ and $\tau = 0$.
The three hyperplanes dissect the space into six convex, open regions. Each
point $\vx \in \R^2$ can be labeled with a binary vector according to whether it
is ``above'' or ``below'' each of the hyperplanes $H_j$ by using the rounding
function $\round_\tau(\langle \vx, \vc_j\rangle)$.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{Hyperplanes}%
\caption{Three hyperplanes in $\R^2$ with the labels of the subspaces into which they
dissect the space.
Any $m\times 3$ binary matrix in which each row corresponds to one of the
six label vectors has rounding rank at most 2.}
\label{fig:Hyperplanes}
\end{figure}
The second point of Th.~\ref{Thm:CharacterisationRoundingRank} can be
interpreted as follows: Pick a binary matrix $\mB$ and treat each of the $n$
columns $\mB_j$ as the labels of a binary classification problem $P_j$ on $m$
points. We can find data points $\vx_1, \dots, \vx_m$ and affine hyperplanes
$H_1, \dots, H_n$ in $\R^d$ which solve all linear classification problems $P_j$
without error if and only if the rounding rank of $\mB$ is at most~$d$. We then
interpret the $\vx_i$ as data points and the $\vc_j$ as feature weights.
Rounding rank decompositions thus describe the ``best case'' for multiple linear
classification problems: if the rounding rank of $\mB$ is $d$, then we need at
least $d$ features to achieve perfect classification. In other words, we need to
collect at least $\rrank(\mB)$ features (or attributes) to have a chance to
classify perfectly. Similarly, if we employ dimensionality reduction, linear
classification cannot be perfect if we reduce to less than $d$ dimensions.
\begin{corollary}[informal]
\label{cor:CorollaryRR}
Rounding rank provides a natural lower bound on how many features we need for
linear classification. This provides us with lower bounds on data collection
or dimensionality reduction.
\end{corollary}
\subsection{Comparison of the Ranks}
We compare rounding rank with several well-known ranks. Many of the results in
this subsection were obtained for sign rank in the communication complexity
community; we present these results here in terms of rounding rank. To the best
of our knowledge, we are the first to make the role of the rounding
threshold explicit by introducing mixed matrices (see
Prop.~\ref{prop:changingRoundingThreshold}).
\paragraph{Boolean rank}
For binary matrices $\mL \in \{0,1\}^{m \times k}$ and $\mR \in \{0,1\}^{n
\times k}$, the \emph{Boolean matrix product $\mL \bprod \mR^T$} is given by
the $m \times n$ binary matrix with $[ \mL \bprod \mR^T]_{ij} =
\bigvee_{\ell=1}^k (\mL_{i\ell} \land \mR_{j\ell})$ for all entries $i,j$. The
\emph{Boolean rank of a binary matrix $\mB$}, denoted $\rankB(\mB)$, is the
smallest $k \in \mathbb{N}$ s.t.\ there exist $\mL \in \{0,1\}^{m \times k}$ and
$\mR \in \{0,1\}^{n \times k}$ with $\mB = \mL \bprod
\mR^T$~\cite{miettinen08discrete}. The rounding rank is a lower bound on the
Boolean rank.
\begin{lemma}
$\rrank(\mB) \leq \rankB(\mB)$.
\end{lemma}
\begin{proof}
Let $\rankB(\mB) = k$. Then there exist matrices $\mL \in \{0,1\}^{m \times
k}$ and $\mR \in \{0,1\}^{n \times k}$ s.t.\ $\mB = \mL \bprod \mR^T$. If
we use the algebra of $\R$, we get $[\mL \mR^T]_{ij} \geq \frac{1}{2}$ iff
$\mB_{ij} = 1$. This implies $\round(\mL \mR^T) = \mB$ and $\rrank(\mB) \leq k
= \rankB(\mB)$.
\end{proof}
\paragraph{Real rank}
Comparing rounding rank and real rank, we observe that $\mB = \round(\mB)$ for all
binary matrices $\mB$. Hence, \[ \rrank(\mB) \leq \rank(\mB).\] This is in
contrast to the relationship between Boolean rank and standard rank, which
cannot be compared (i.e.\ neither serves as a lower bound to the other)~\cite{SurveyOfCliqueCoverings}.
Note that the rounding rank can be much lower than both real and Boolean
rank. For example, the $n\times n$ ``upper triangle matrix'' with 1's on the
main diagonal and above has real and Boolean rank $n$, but rounding rank $1$
(see Th.~\ref{Thm:NestedMatrices}). As another example, we show in Section~\ref{Sec:IdentityMatrices}
in the appendix that the $n\times n$ identity
matrix has rounding rank 2 for all $n\ge3$. In fact, while we know that a
real-valued $n \times n$ matrix can have rank up to $n$, the situation is
different for rounding rank: On the one hand, for large enough $n$, all $n
\times n$ matrices $\mB$ have $\rrank(\mB) \leq (\frac{1}{2} +
o(1))n$~\cite[Cor.~1.2]{GeometricalRealizations}. On the other hand, for each
$n$, there exist $n \times n$ matrices with $\rrank(\mB) \geq \frac{n}{32}$,
i.e., the rounding rank can indeed be linear in
$n$~\cite[Cor.~1.2]{GeometricalRealizations}.
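For instance, the rounding rank-$1$ claim for the upper triangle matrix can be checked with an explicit factorization such as $\vl_i = 1/i$ and $\vr_j = j/2$, since then $\vl_i\vr_j = j/(2i) \geq 1/2$ iff $j \geq i$ (one of many possible choices; the sketch below is ours):
\begin{verbatim}
import numpy as np

n = 5
l = 1.0 / np.arange(1, n + 1)            # l_i = 1/i
r = np.arange(1, n + 1) / 2.0            # r_j = j/2
B = (np.outer(l, r) >= 0.5).astype(int)  # l_i * r_j >= 1/2  iff  j >= i
assert np.array_equal(B, np.triu(np.ones((n, n), dtype=int)))
\end{verbatim}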
It is well-known that real-valued matrices with all entries picked uniformly at
random from some bounded proper interval have full standard rank with
probability 1. For rounding rank, an $n \times n$ binary matrix sampled uniformly
at random has rounding rank $\Omega(n)$ with high
probability (see the proof of Cor.~1.2 in \cite{GeometricalRealizations}). Hence,
the rounding ranks of random binary matrices are expected to be large. The
real-world data matrices in our experiments often had small rounding ranks,
though.
A lower bound on the rounding rank of a binary matrix $\mB$ can be derived from
the singular values of the sign matrix $\mB^\pm$.
\begin{proposition}
\label{prop:LowerBound}
Let $r = \rank(\mB^\pm)$ and let $\sigma_1(\mB^\pm) \geq \dots \geq \sigma_r(\mB^\pm) > 0$ be
the non-zero singular values of $\mB^\pm$. Then
\begin{align*}
(\rrank_0(\mB) + 1) \sum_{i = 1}^{\rrank_0(\mB)} \sigma_i^2(\mB^\pm) \geq mn.
\end{align*}
\end{proposition}
Prop.~\ref{prop:LowerBound} is a slight modification of a result
in~\cite[Th.~5]{BetterLinearLowerBound} and we give the proof in the appendix.
\paragraph{Role of rounding threshold} We compare the rounding ranks of a fixed
matrix for different rounding thresholds.
We call a binary matrix \emph{mixed} if it contains no all-zero and no all-one
columns (or rows).
\begin{proposition}
\label{prop:changingRoundingThreshold}
For any $\mB$ and arbitrary $\tau\neq \tau' \in \mathbb{R}$, $\rrank_\tau(\mB)$
and $\rrank_{\tau'}(\mB)$ differ by at most $1$. If additionally $\tau,
\tau'\neq 0$, $\rrank_\tau(\mB) = \rrank_{\tau'}(\mB)$ if
$\sign(\tau)=\sign(\tau')$ or if $\mB$ is mixed.
\end{proposition}
To prove Prop.~\ref{prop:changingRoundingThreshold} we need
Lem.~\ref{lem:UsefulHyperplaneSeparation} below. The lemma is implied by the
Hyperplane Separation Theorem~\cite[p. 46]{ConvexOptimization}, and we prove it
in the appendix.
\begin{lemma}
\label{lem:UsefulHyperplaneSeparation}
Let $A$ and $B$ be two disjoint nonempty convex sets in $\R^d$, one of which
is compact. Then for all nonzero $c \in \mathbb{R}$, there exists a
nonzero vector $\vv \in \R^d$, such that $\langle \vx, \vv \rangle > c$ and
$\langle \vy, \vv \rangle < c$ for all $\vx \in A$ and $\vy \in B$.
\end{lemma}
\begin{proof}[Proof of Prop.~\ref{prop:changingRoundingThreshold}]
First claim: Let $\tau,\tau' \in \R$ be arbitrary and pick $k\in\N$, $\mL \in \R^{m
\times k}$, $\mR \in \R^{n \times k}$ such that $\round_\tau(\mL \mR^T) =
\mB$. Set $c=\tau' - \tau$, then
\begin{align*}
\mB_{ij}
= \round_{\tau}([\mL\mR^T]_{ij})
= \round_{\tau'} ([\mL \mR^T]_{ij} + c).
\end{align*}
Set $\mL'= \begin{pmatrix}\mL & c\mathbf1\end{pmatrix}$ and
$\mR'=\begin{pmatrix}\mR & \mathbf1 \end{pmatrix}$, where $\mathbf1$ denotes the
all-one vector. Then $\round_{\tau'}(\mL'\mR'^T)=\mB$ and thus
$\rrank_{\tau'}(\mB) \leq k+1$.
Second claim: Without loss of generality, assume that $\mB$ contains no all-zero and no all-one columns
(otherwise transpose the matrix). Let $\tau,\tau' \neq 0$ and let $k$ and $\mL \mR^T$ be as
before. If $\sign(\tau)=\sign(\tau')$, set $c=\tau'/\tau>0$ and
$\mR'=c\mR$. Then $\round_\tau(\mL\mR^T)=\round_{\tau'}(\mL\mR'^T)$ by construction
so that $\rrank_{\tau'}(\mB)\le k$. By reversing the roles of $\tau$ and $\tau'$
in the argument, we establish $\rrank_{\tau}(\mB)=\rrank_{\tau'}(\mB)$.
Suppose $\tau,\tau' \neq 0$ (not necessarily of the same sign) and let $\mB$ be
mixed. We now treat the rows of $\mL$ as points $\mL_1, \dots, \mL_m$ in
$\mathbb{R}^k$ and show that there exists an $n \times k$ matrix $\mR'$
consisting of normal vectors of affine hyperplanes in $\mathbb{R}^k$ in its rows
such that the hyperplanes separate the points with rounding threshold $\tau'$,
thereby establishing $\rrank_{\tau'}(\mB)\le\rrank_{\tau}(\mB)$. Again, by
reversing the roles of $\tau$ and $\tau'$, we obtain equality. To construct the
$j$'th row of $\mR'$, let $C_j= \{ \mL_i : \mB_{ij} = 1 \}$ and $\bar C_j = \{
\mL_i : \mB_{ij} = 0 \}$. Notice that since $\mB$ is mixed, both $C_j$ and
$\bar C_j$ are non-empty. We observe that the convex hulls of $C_j$ and
$\bar C_j$ are separated by the affine hyperplane with the $j$'th row of $\mR$
as its normal vector and offset from the origin $\tau$. Thus, we apply
Lem.~\ref{lem:UsefulHyperplaneSeparation} to obtain a vector $\vr'_j$ s.t.\
$\langle \vr'_j, \vc \rangle > \tau'$ for all $\vc \in C_j$ and $\langle \vr'_j,
$\bar\vc \rangle < \tau'$ for all $\bar\vc \in \bar C_j$. We set $\vr'_j$ to
be the $j$'th row of $\mR'$. To obtain $\mR'$, we repeat this procedure for
each of its $n$ rows.
\end{proof}
The above proof can be adapted to show that if $\mB$ is mixed, even using a
different (non-zero) rounding threshold for each row (or column) does not affect
the rounding rank.
\paragraph{Non-negative rounding rank}
While the gap between rank and non-negative rank can be arbitrarily
large~\cite{NMFGap}, for rounding rank and non-negative rounding rank this is
not the case.
\begin{proposition}
\label{prop:rrankp_vs_rrank}
$\rrankp_\tau(\mB) \leq \rrank_\tau(\mB) + 2$.
\end{proposition}
This can be shown using ideas similar to the ones
in~\cite{ProbabilisticCommunicationComplexity} by a simple but lengthy computation.
We give a proof in the appendix.
\subsection{Computational Complexity}
The following proposition asserts that rounding rank is \NP-hard to compute
regardless of the rounding threshold.
\begin{proposition}
\label{prop:rrNPhard}
It is \NP-hard to decide if $\rrank_0(\mB) \leq k$ for all $k > 2$. For $\tau
\neq 0$, it is \NP-hard to decide if $\rrank_\tau(\mB) \leq k$ for all $k > 1$.
\end{proposition}
For sign rank (i.e. $\tau = 0$), this was proven in
\cite[Th.~1.2]{SignRankNP},\cite[Sec.~3]{VisibilityConstraints}.
Moreover, Alon
et al.~\cite{SignRankVsVCDimension} argue that computing the sign rank is
equivalent to the existential theory of the reals. For $\tau \neq 0$,
NP-hardness was proven in~\cite[Th.~10]{SphereAndDotProduct}.
It is an open problem whether sign rank or rounding rank computation is in \NP.
Assume we store a matrix $\mA$ that achieves the rounding rank of $\mB$ by
representing all entries with rational numbers. The following proposition
asserts that, in general, the space needed to store such a matrix $\mA$ can be exponential in
the size of $\mB$. Hence, the proposition rules out proving that computing
rounding rank is in \NP{} by nondeterministically guessing a matrix $\mA$ of
small rank and rounding it.
\begin{proposition}
\label{prop:hugeEntries}
For all sufficiently large $n$, there exist $n \times n$ binary matrices $\mB$
with $\rrank(\mB) = 3$ s.t.\
for each matrix $\mA$ with $\rank(\mA) = 3$ and
$\round(\mA) = \mB$,
it takes $\Theta(\exp(n))$ bits to store the entries of $\mA$
using rational numbers.
\end{proposition}
Prop.~\ref{prop:hugeEntries} can be
derived from the proof of \cite[Th.~4]{SphereAndDotProduct}.
\begin{lemma}
The \minErrorRRproblem{$k$} problem is \NP-hard to solve exactly. It is also
\NP-hard to approximate within any polynomial-time computable factor.
\end{lemma}
\begin{proof}
Both claims follow from Prop.~\ref{prop:rrNPhard}. If in polynomial time we
could solve the \minErrorRRproblem{$k$} problem exactly or within any factor,
then we could also decide if $\rrank(\mB) \leq k$ by checking if the result
for \minErrorRRproblem{$k$} is zero.
\end{proof}
\section{Introduction}
\begin{figure}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-12pt}
\includegraphics[width=3.05in]{fr9.pdf}\\
\caption{\footnotesize Comparison of open-set and closed-set face recognition.}\label{fr}
\end{figure}
Recent years have witnessed the great success of convolutional neural networks (CNNs) in face recognition (FR). Owing to advanced network architectures \cite{krizhevsky2012imagenet,simonyan2014very,szegedy2015going,he2016deep} and discriminative learning approaches \cite{sun2014deep2,schroff2015facenet,wen2016discriminative}, deep CNNs have boosted the FR performance to an unprecedented level. Typically, face recognition can be categorized as face identification and face verification \cite{huang2014labeled,Kemelmacher-Shlizerman_2016_CVPR}. The former classifies a face to a specific identity, while the latter determines whether a pair of faces belongs to the same identity.
\par
In terms of testing protocol, face recognition can be evaluated under closed-set or open-set settings, as illustrated in Fig.~\ref{fr}. For the closed-set protocol, all testing identities are predefined in the training set. It is natural to classify testing face images to the given identities. In this scenario, face verification is equivalent to performing identification for a pair of faces respectively (see left side of Fig.~\ref{fr}). Therefore, closed-set FR can be well addressed as a classification problem, where features are expected to be separable. For the open-set protocol, the testing identities are usually disjoint from the training set, which makes FR more challenging yet close to practice. Since it is impossible to classify faces to known identities in the training set, we need to map faces to a discriminative feature space. In this scenario, face identification can be viewed as performing face verification between the probe face and every identity in the gallery (see right side of Fig.~\ref{fr}). Open-set FR is essentially a metric learning problem, where the key is to learn discriminative large-margin features.
\par
Desired features for open-set FR are expected to satisfy the
criterion that the maximal intra-class distance is smaller than the minimal inter-class distance under a certain metric space. This criterion is necessary if we want to achieve perfect accuracy using nearest neighbor. However, learning features with this criterion is generally difficult because of the intrinsically large intra-class variation and high inter-class similarity \cite{ross2004multimodal} that faces exhibit.
\par
\begin{figure*}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{4pt}
\setlength{\belowcaptionskip}{-10pt}
\includegraphics[width=6.85in]{finalcomp.pdf}\\
\caption{\footnotesize Comparison among softmax loss, modified softmax loss and A-Softmax loss. In this toy experiment, we construct a CNN to learn 2-D features on a subset of the CASIA face dataset. In specific, we set the output dimension of FC1 layer as 2 and visualize the learned features. Yellow dots represent the first class face features, while purple dots represent the second class face features. One can see that features learned by the original softmax loss can not be classified simply via angles, while modified softmax loss can. Our A-Softmax loss can further increase the angular margin of learned features.}\label{comp}
\end{figure*}
\par
Few CNN-based approaches are able to effectively formulate the aforementioned criterion in loss functions. Pioneering works \cite{taigman2014deepface,sun2014deep} learn face features via the softmax loss\footnote{Following \cite{liu2016large}, we define the softmax loss as the combination of the last fully connected layer, softmax function and cross-entropy loss.}, but softmax loss only learns separable features that are not discriminative enough. To address this, some methods combine softmax loss with contrastive loss \cite{sun2014deep2,sun2016sparsifying} or center loss \cite{wen2016discriminative} to enhance the discrimination power of features. \cite{schroff2015facenet} adopts triplet loss to
supervise the embedding learning, leading to state-of-the-art face recognition results. However, center loss only explicitly encourages intra-class compactness. Both contrastive loss \cite{hadsell2006dimensionality} and triplet loss \cite{schroff2015facenet} cannot impose constraints on each individual sample, and thus require a carefully designed pair/triplet mining procedure, which is both time-consuming and performance-sensitive.
\par
It seems to be a widely recognized choice to impose Euclidean margin to learned features, but a question arises: \emph{Is Euclidean margin always suitable for learning discriminative face features?}
To answer this question, we first look into how Euclidean margin based losses are applied to FR.
Most recent approaches \cite{sun2014deep2,sun2016sparsifying,wen2016discriminative} combine Euclidean margin based losses with softmax loss to construct a joint supervision. However, as can be observed from Fig.~\ref{comp}, the features learned by softmax loss have an intrinsic angular distribution (also verified by \cite{wen2016discriminative}). In some sense, Euclidean margin based losses are incompatible with softmax loss, so it is not well motivated to combine these two types of losses.
\par
In this paper, we propose to incorporate angular margin instead. We start with a binary-class case to analyze the softmax loss. The decision boundary in softmax loss is $\thickmuskip=2mu \medmuskip=2mu (\bm{W}_{1} - \bm{W}_{2})\bm{x} + b_{1} - b_{2} = 0$, where $\bm{W}_{i}$ and $b_{i}$ are weights and bias\footnote{If not specified, the weights and biases in the paper are corresponding to the fully connected layer in the softmax loss.} in softmax loss, respectively. If we define $\bm{x}$ as a feature vector and constrain $\thickmuskip=2mu \medmuskip=2mu \|\bm{W}_1\| = \|\bm{W}_2\| = 1$ and $\thickmuskip=2mu \medmuskip=2mu b_{1}=b_{2} = 0$, the decision boundary becomes $\thickmuskip=2mu \medmuskip=2mu \|\bm{x}\|(\cos (\theta_1) -\cos (\theta_2)) = 0$, where $\theta_i$ is the angle between $\bm{W}_{i}$ and $\bm{x}$. The new decision boundary only depends on $\theta_1$ and $\theta_2$. Modified softmax loss is able to directly optimize angles, enabling CNNs to learn angularly distributed features (Fig.~\ref{comp}).
\par
Compared to the original softmax loss, the features learned by the modified softmax loss are angularly distributed, but not necessarily more discriminative. To this end, we generalize the modified softmax loss to the angular softmax (A-Softmax) loss. Specifically, we introduce an integer $m$ ($\thickmuskip=2mu \medmuskip=2mu m \geq 1$) to quantitatively control the decision boundary. In the binary-class case, the decision boundaries for class 1 and class 2 become $\thickmuskip=0mu \medmuskip=0mu \|\bm{x}\|(\cos (m\theta_1) -\cos (\theta_2)) = 0$ and $\thickmuskip=0mu \medmuskip=0mu \|\bm{x}\|(\cos (\theta_1) -\cos (m\theta_2)) = 0$, respectively. $m$ quantitatively controls the size of the angular margin. Furthermore, A-Softmax loss can be easily generalized to multiple classes, similar to softmax loss. By optimizing A-Softmax loss, the decision regions become more separated, simultaneously enlarging the inter-class margin and compressing the intra-class angular distribution.
\par
A-Softmax loss has clear geometric interpretation. Supervised by A-Softmax loss, the learned features construct a discriminative angular distance metric that is equivalent to geodesic distance on a hypersphere manifold. A-Softmax loss can be interpreted as constraining learned features to be discriminative on a hypersphere manifold, which intrinsically matches the prior that face images lie on a manifold \cite{lee2003video,he2005face,talwalkar2008large}. The close connection between A-Softmax loss and hypersphere manifolds makes the learned features more effective for face recognition. For this reason, we term the learned features as \emph{SphereFace}.
\par
Moreover, A-Softmax loss can quantitatively adjust the angular margin via a parameter $m$, enabling us to do quantitative analysis. In the light of this, we derive lower bounds for the parameter $m$ to approximate the desired open-set FR criterion that the maximal intra-class distance should be smaller than the minimal inter-class distance.
\par
Our major contributions can be summarized as follows:
\par
(1) We propose A-Softmax loss for CNNs to learn discriminative face features with clear and novel geometric interpretation. The learned features discriminatively span on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold.
\par
(2) We derive lower bounds for $m$ such that A-Softmax loss can approximate the learning task that minimal inter-class distance is larger than maximal intra-class distance.
\par
(3) We are the very first to show the effectiveness of angular margin in FR. Trained on publicly available CASIA dataset \cite{yi2014learning}, \emph{SphereFace} achieves competitive results on several benchmarks, including Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1.
\par
\section{Related Work}
\textbf{Metric learning.} Metric learning aims to learn a similarity (distance) function. Traditional metric learning \cite{xing2003distance,weinberger2005distance,kostinger2012large,ying2012distance} usually learns a matrix $\bm{A}$ for a distance metric $\thickmuskip=2mu \medmuskip=2mu \|\bm{x}_1-\bm{x}_2\|_{\bm{A}}=\sqrt{(\bm{x}_1-\bm{x}_2)^T\bm{A}(\bm{x}_1-\bm{x}_2)}$ upon the given features $\bm{x}_1,\bm{x}_2$. Recently, prevailing deep metric learning \cite{hu2014discriminative,lu2015multi,song2016deep,taigman2014deepface,sun2014deep2,schroff2015facenet,wen2016discriminative} usually uses neural networks to automatically learn discriminative features $\thickmuskip=2mu \medmuskip=2mu \bm{x}_1,\bm{x}_2$ followed by a simple distance metric such as Euclidean distance $\thickmuskip=2mu \medmuskip=2mu \|\bm{x}_1-\bm{x}_2\|_2$. Most widely used loss functions for deep metric learning are contrastive loss \cite{chopra2005learning,hadsell2006dimensionality} and triplet loss \cite{wang2014learning,schroff2015facenet,hoffer2014deep}, and both impose Euclidean margin to features.
\par
\textbf{Deep face recognition.} Deep face recognition is arguably one of the most active research areas in the past few years. \cite{taigman2014deepface,sun2014deep} address open-set FR using CNNs supervised by softmax loss, which essentially treats open-set FR as a multi-class classification problem. \cite{sun2014deep2} combines contrastive loss and softmax loss to jointly supervise the CNN training, greatly boosting the performance. \cite{schroff2015facenet} uses triplet loss to learn a unified face embedding. Training on nearly 200 million face images, they achieve current state-of-the-art FR accuracy. Inspired by linear discriminant analysis, \cite{wen2016discriminative} proposes center loss for CNNs and also obtains promising performance. In general, current well-performing CNNs \cite{sun2016sparsifying,liu2015targeting} for FR are mostly built on either contrastive loss or triplet loss. One could notice that state-of-the-art FR methods usually adopt ideas (e.g. contrastive loss, triplet loss) from metric learning, showing open-set FR could be well addressed by discriminative metric learning.
\par
L-Softmax loss \cite{liu2016large} also implicitly involves the concept of angles. As a regularization method, it shows great improvement on closed-set classification problems. Differently, A-Softmax loss is developed to learn discriminative face embedding. The explicit connections to hypersphere manifold makes our learned features particularly suitable for open-set FR problem, as verified by our experiments. In addition, the angular margin in A-Softmax loss is explicitly imposed and can be quantitatively controlled (e.g. lower bounds to approximate desired feature criterion), while \cite{liu2016large} can only be analyzed qualitatively.
\section{Deep Hypersphere Embedding}
\subsection{Revisiting the Softmax Loss}
We revisit the softmax loss by looking into the decision criteria of softmax loss. In binary-class case, the posterior probabilities obtained by softmax loss are
\begin{equation}
\footnotesize
\thickmuskip=1mu p_{1} = \frac{\exp({\bm{W}_{1}^T\bm{x}+b_{1}})}{\exp({\bm{W}_{1}^T\bm{x}+b_{1}}) + \exp({\bm{W}^T_{2}\bm{x}+b_{2}})}
\end{equation}
\begin{equation}
\footnotesize
\thickmuskip=1mu p_{2} = \frac{\exp({\bm{W}^T_{2}\bm{x}+b_{2}})}{\exp({\bm{W}_{1}^T\bm{x}+b_{1}}) + \exp({\bm{W}_{2}^T\bm{x}+b_{2}})}
\end{equation}
where $\bm{x}$ is the learned feature vector. $\bm{W}_{i}$ and $b_{i}$ are the weights and bias of the last fully connected layer corresponding to class $i$, respectively. The predicted label will be assigned to class 1 if $\thickmuskip=2mu \medmuskip=2mu p_{1}>p_{2}$ and class 2 if $\thickmuskip=2mu \medmuskip=2mu p_{1}<p_{2}$. By comparing $p_{1}$ and $p_{2}$, it is clear that $\thickmuskip=2mu \medmuskip=2mu \bm{W}^T_{1}\bm{x}+b_{1}$ and $\thickmuskip=2mu \medmuskip=2mu \bm{W}^T_{2}\bm{x}+b_{2}$ determine the classification result. The decision boundary is $\thickmuskip=2mu \medmuskip=2mu (\bm{W}_{1} - \bm{W}_{2})\bm{x} + b_{1} - b_{2} = 0$. We then rewrite $\thickmuskip=2mu \medmuskip=2mu \bm{W}^T_{i}\bm{x}+b_{i}$ as $\thickmuskip=2mu \medmuskip=2mu \|\bm{W}^T_{i}\|\|\bm{x}\|\cos(\theta_i)+b_{i}$ where $\theta_{i}$ is the angle between $\bm{W}_{i}$ and $\bm{x}$. Notice that if we normalize the weights and zero the biases ($\thickmuskip=2mu \medmuskip=2mu \|\bm{W}_{i}\|=1$, $\thickmuskip=1mu b_{i}=0$), the posterior probabilities become $\thickmuskip=0mu \medmuskip=0mu p_{1} =\|\bm{x}\|\cos(\theta_{1})$ and $\thickmuskip=0mu \medmuskip=0mu p_{2} =\|\bm{x}\|\cos(\theta_{2})$. Since $p_{1}$ and $p_{2}$ share the same $\bm{x}$, the final result only depends on the angles $\theta_{1}$ and $\theta_{2}$. The decision boundary also becomes $\thickmuskip=0mu \medmuskip=0mu \cos(\theta_1)-\cos(\theta_2)=0$ (i.e. the angular bisector of vectors $\bm{W}_1$ and $\bm{W}_2$). Although the above analysis is built on the binary-class case, it is trivial to generalize it to the multi-class case. During training, the modified softmax loss ($\thickmuskip=1mu \|\bm{W}_{i}\|=1,b_{i}=0$) encourages features from the $i$-th class to have a smaller angle $\theta_{i}$ (larger cosine distance) than others, which makes the angles between $\bm{W}_{i}$ and the features a reliable metric for classification.
\par
To give a formal expression for the modified softmax loss, we first define the input feature $\bm{x}_i$ and its label $y_i$. The original softmax loss can be written as
\begin{equation}\label{sm1}
\footnotesize
L=\frac{1}{N}\sum_iL_i=\frac{1}{N}\sum_i-\log\big( \frac{e^{f_{y_i}}}{\sum_je^{f_j}} \big)
\end{equation}
where $f_j$ denotes the $j$-th element ($j\in[1,K]$, $K$ is the class number) of the class score vector $\bm{f}$, and $N$ is the number of training samples. In CNNs, $\bm{f}$ is usually the output of a fully connected layer $\bm{W}$, so $f_j=\bm{W}_j^T\bm{x}_i+b_j$ and $f_{y_i}=\bm{W}_{y_i}^T\bm{x}_i+b_{y_i}$ where $\bm{x}_i$, $\bm{W}_{j}, \bm{W}_{y_i}$ are the $i$-th training sample, the $j$-th and $y_i$-th column of $\bm{W}$ respectively. We further reformulate $L_i$ in Eq. \eqref{sm1} as
\begin{equation}\label{sm2}
\footnotesize
\begin{aligned}
L_i=&-\log\big( \frac{e^{\bm{W}_{y_i}^T\bm{x}_i+b_{y_i}}}{\sum_je^{\bm{W}_j^T\bm{x}_i+b_j}} \big)\\
=&-\log\big( \frac{e^{\|\bm{W}_{y_i}\|\|\bm{x}_i\|\cos(\theta_{y_i,i})+b_{y_i}}}{\sum_je^{\|\bm{W}_j\|\|\bm{x}_i\|\cos(\theta_{j,i})+b_j}} \big)
\end{aligned}
\end{equation}
in which $\thickmuskip=2mu \medmuskip=2mu \theta_{j,i}(0\leq\theta_{j,i}\leq\pi)$ is the angle between vector $\bm{W}_j$ and $\bm{x}_i$. As analyzed above, we first normalize $\thickmuskip=2mu \medmuskip=2mu \|\bm{W}_j\|=1,\forall j$ in each iteration and zero the biases. Then we have the modified softmax loss:
\begin{equation}\label{modified}
\footnotesize
L_{\textnormal{modified}}=\frac{1}{N}\sum_i-\log\big( \frac{e^{\|\bm{x}_i\|\cos(\theta_{y_i,i})}}{
\sum_{j}e^{\|\bm{x}_i\|\cos(\theta_{j,i})}} \big)
\end{equation}
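For illustration, a small NumPy sketch of Eq.~\eqref{modified} (ours; \texttt{X} holds the features row-wise, \texttt{W} one weight column per class, and \texttt{y} the integer labels):
\begin{verbatim}
import numpy as np

def modified_softmax_loss(X, W, y):
    # Normalize each weight column (||W_j|| = 1) and drop the biases,
    # so the logits equal ||x_i|| * cos(theta_{j,i}).
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    logits = X @ Wn
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()
\end{verbatim}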
Although we can learn features with angular boundaries with the modified softmax loss, these features are still not necessarily discriminative. Since we use angles as the distance metric, it is natural to add an angular margin to the learned features in order to enhance their discrimination power. To this end, we propose a novel way to incorporate such a margin.
\subsection{Introducing Angular Margin to Softmax Loss}
Instead of designing a new type of loss function and constructing a weighted combination with softmax loss (similar to contrastive loss) , we propose a more natural way to learn angular margin. From the previous analysis of softmax loss, we learn that decision boundaries can greatly affect the feature distribution, so our basic idea is to manipulate decision boundaries to produce angular margin. We first give a motivating binary-class example to explain how our idea works.
\par
Assume that a learned feature $\bm{x}$ from class 1 is given and that $\theta_i$ is the angle between $\bm{x}$ and $\bm{W}_i$; it is known that the modified softmax loss requires $\thickmuskip=2mu \medmuskip=2mu \cos(\theta_1)>\cos(\theta_2)$ to correctly classify $\bm{x}$. But what if we instead require $\thickmuskip=2mu \medmuskip=2mu \cos(m\theta_1)>\cos(\theta_2)$, where $\thickmuskip=2mu m\geq2$ is an integer, in order to correctly classify $\bm{x}$? It essentially makes the decision more stringent than before, because we require a lower bound\footnote{The inequality $\thickmuskip=2mu \medmuskip=2mu \cos(\theta_1)>\cos(m\theta_1)$ holds while $\thickmuskip=2mu \theta_1\in[0,\frac{\pi}{m}], m\geq2$.} of $\cos(\theta_1)$ to be larger than $\cos(\theta_2)$. The decision boundary for class 1 is $\thickmuskip=2mu \medmuskip=2mu \cos(m\theta_1)=\cos(\theta_2)$. Similarly, if we require $\thickmuskip=2mu \medmuskip=2mu \cos(m\theta_2)>\cos(\theta_1)$ to correctly classify features from class 2, the decision boundary for class 2 is $\thickmuskip=2mu \medmuskip=2mu \cos(m\theta_2)=\cos(\theta_1)$. Suppose all training samples are correctly classified; such decision boundaries will produce an angular margin of $\frac{m-1}{m+1}\theta_{2}^{1}$ where $\theta_{2}^{1}$ is the angle between $\bm{W}_1$ and $\bm{W}_2$. From an angular perspective, correctly classifying $\bm{x}$ from identity 1 requires $\thickmuskip=2mu \theta_1<\frac{\theta_2}{m}$, while correctly classifying $\bm{x}$ from identity 2 requires $\thickmuskip=2mu \theta_2<\frac{\theta_1}{m}$. Both are more difficult than the original $\thickmuskip=2mu \theta_1<\theta_2$ and $\thickmuskip=2mu \theta_2<\theta_1$, respectively. By directly formulating this idea into the modified softmax loss Eq. \eqref{modified}, we have
\begin{equation}\label{naiveangular}
\footnotesize
L_{\textnormal{ang}}=\frac{1}{N}\sum_i-\log\big( \frac{e^{\|\bm{x}_i\|\cos(m\theta_{y_i,i})}}{e^{\|\bm{x}_i\|\cos(m\theta_{y_i,i})}+
\sum_{j\neq y_i}e^{\|\bm{x}_i\|\cos(\theta_{j,i})}} \big)
\end{equation}
where $\theta_{y_i,i}$ has to be in the range of $[0,\frac{\pi}{m}]$. In order to get rid of this restriction and make it optimizable in CNNs, we expand the definition range of $\cos(\theta_{y_i,i})$ by generalizing it to a monotonically decreasing angle function $\psi(\theta_{y_i,i})$ which should be equal to $\cos(\theta_{y_i,i})$ in $[0,\frac{\pi}{m}]$. Therefore, our proposed A-Softmax loss is formulated as:
\begin{equation}\label{angular}
\footnotesize
L_{\textnormal{ang}}=\frac{1}{N}\sum_i-\log\big( \frac{e^{\|\bm{x}_i\|\psi(\theta_{y_i,i})}}{e^{\|\bm{x}_i\|\psi(\theta_{y_i,i})}+
\sum_{j\neq y_i}e^{\|\bm{x}_i\|\cos(\theta_{j,i})}} \big)
\end{equation}
in which we define $\thickmuskip=2mu \medmuskip=2mu \psi(\theta_{y_i,i})=(-1)^k\cos(m\theta_{y_i,i})-2k$, $ \theta_{y_i,i}\in[\frac{k\pi}{m},\frac{(k+1)\pi}{m}]$ and $\thickmuskip=2mu k\in[0,m-1]$. $\thickmuskip=2mu m\geq1$ is an integer that controls the size of angular margin. When $\thickmuskip=2mu m=1$, it becomes the modified softmax loss.
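A forward-pass sketch of Eq.~\eqref{angular} (ours, for illustration only; the actual training implementation avoids computing $\theta$ explicitly, as discussed below, and uses further optimization strategies not shown here):
\begin{verbatim}
import numpy as np

def psi(theta, m):
    # psi(theta) = (-1)^k cos(m*theta) - 2k  on  [k*pi/m, (k+1)*pi/m]
    k = np.minimum(np.floor(theta * m / np.pi), m - 1)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

def a_softmax_loss(X, W, y, m=4):
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # ||W_j|| = 1, b_j = 0
    xnorm = np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip((X @ Wn) / xnorm, -1.0, 1.0)          # cos(theta_{j,i})
    logits = xnorm * cos
    idx = np.arange(len(y))
    theta_y = np.arccos(cos[idx, y])                    # target-class angles
    logits[idx, y] = xnorm[:, 0] * psi(theta_y, m)
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[idx, y].mean()
\end{verbatim}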
\par
\begin{table}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\setlength{\abovecaptionskip}{4pt}
\setlength{\belowcaptionskip}{-10pt}
\footnotesize
\begin{tabular}{|c|c|}
\hline
Loss Function & Decision Boundary \\
\hline\hline
Softmax Loss & $\thickmuskip=2mu \medmuskip=2mu (\bm{W}_{1} - \bm{W}_{2})\bm{x} + b_{1} - b_{2} = 0$\\\hline
Modified Softmax Loss & $\thickmuskip=2mu \medmuskip=2mu \|\bm{x}\|(\cos \theta_1 -\cos \theta_2) = 0$\\\hline
A-Softmax Loss & \tabincell{c}{$\thickmuskip=2mu \medmuskip=2mu \|\bm{x}\|(\cos m\theta_1 -\cos \theta_2) = 0$ for class 1\\$\thickmuskip=2mu \medmuskip=2mu \|\bm{x}\|(\cos \theta_1 -\cos m\theta_2) = 0$ for class 2}\\
\hline
\end{tabular}
\caption{\footnotesize Comparison of decision boundaries in binary case. Note that, $\theta_i$ is the angle between $\bm{W}_i$ and $\bm{x}$.}\label{decision}
\end{table}
\par
The justification of A-Softmax loss can also be made from a decision boundary perspective. A-Softmax loss adopts a different decision boundary for each class (each boundary is more stringent than the original), thus producing an angular margin. The comparison of decision boundaries is given in Table~\ref{decision}. From the original softmax loss to the modified softmax loss, the change is from optimizing inner products to optimizing angles. From the modified softmax loss to A-Softmax loss, the decision boundary becomes more stringent and separated. The angular margin increases with larger $m$ and is zero if $\thickmuskip=2mu m=1$.
\par
Supervised by A-Softmax loss, CNNs learn face features with a geometrically interpretable angular margin. Because A-Softmax loss requires $\thickmuskip=2mu \medmuskip=2mu \|\bm{W}_i\|=1,b_i=0$, the prediction depends only on the angles between the sample $\bm{x}$ and $\bm{W}_i$. So $\bm{x}$ can be classified to the identity with the smallest angle. The parameter $m$ is added for the purpose of learning an angular margin between different identities.
\par
To facilitate gradient computation and back propagation, we replace $\cos(\theta_{j,i})$ and $\cos(m\theta_{y_i,i})$ with expressions containing only $\bm{W}$ and $\bm{x}_i$, which is easily done by the definition of cosine and the multi-angle formula (also the reason why we need $m$ to be an integer). Without $\theta$, we can compute derivatives with respect to $\bm{x}$ and $\bm{W}$, similarly to softmax loss.
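For example, $\thickmuskip=2mu \medmuskip=2mu \cos(4\theta)=8\cos^4\theta-8\cos^2\theta+1$. A small sketch (ours) of this expansion via the Chebyshev recurrence $T_m(x)=2xT_{m-1}(x)-T_{m-2}(x)$ with $\thickmuskip=2mu \medmuskip=2mu x=\cos\theta$:
\begin{verbatim}
import numpy as np

def cos_m_theta(cos_theta, m):
    # cos(m*theta) as a polynomial in cos(theta); no arccos is needed,
    # which keeps the expression differentiable in W and x.
    if m == 0:
        return np.ones_like(cos_theta)
    t_prev, t_cur = np.ones_like(cos_theta), cos_theta   # T_0, T_1
    for _ in range(m - 1):
        t_prev, t_cur = t_cur, 2.0 * cos_theta * t_cur - t_prev
    return t_cur
\end{verbatim}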
\subsection{Hypersphere Interpretation of A-Softmax Loss}
\label{geo}
\begin{figure}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{4pt}
\setlength{\belowcaptionskip}{-7pt}
\includegraphics[width=3.15in]{manifold.pdf}
\caption{\footnotesize Geometry Interpretation of Euclidean margin loss (e.g. contrastive loss, triplet loss, center loss, etc.), modified softmax loss and A-Softmax loss. The first row is 2D feature constraint, and the second row is 3D feature constraint. The orange region indicates the discriminative constraint for class 1, while the green region is for class 2. \label{geoint}}
\end{figure}
A-Softmax loss has stronger requirements for a correct classification when $\thickmuskip=2mu m\geq2$, which generates an angular classification margin between learned features of different classes. A-Softmax loss not only imposes discriminative power on the learned features via an angular margin, but also renders a nice and novel hypersphere interpretation. As shown in Fig.~\ref{geoint}, A-Softmax loss is equivalent to learning features that are discriminative on a hypersphere manifold, while Euclidean margin losses learn features in Euclidean space.
\par
To simplify, we take the binary case to analyze the hypersphere interpretation. Considering a sample $\bm{x}$ from class 1 and two column weights $\bm{W}_1,\bm{W}_2$, the classification rule for A-Softmax loss is $\thickmuskip=2mu \medmuskip=2mu \cos (m\theta_1)>\cos(\theta_2)$, equivalently $\thickmuskip=2mu \medmuskip=2mu m\theta_1<\theta_2$. Notice that $\theta_1,\theta_2$ are equal to their corresponding arc lengths $\omega_1,\omega_2$\footnote{$\omega_i$ is the shortest arc length (geodesic distance) between $\bm{W}_i$ and the projected point of sample $\bm{x}$ on the unit hypersphere, while the corresponding $\theta_i$ is the angle between $\bm{W}_i$ and $\bm{x}$.} on the unit hypersphere $\thickmuskip=0mu \medmuskip=0mu \{v_j,\forall j|\sum_j v_j^2 =1, v\geq 0\}$. Because $\thickmuskip=2mu \medmuskip=2mu \|\bm{W}_1\|=\|\bm{W}_2\|=1$, the decision relies on the arc lengths $\omega_1$ and $\omega_2$. The decision boundary is equivalent to $\thickmuskip=2mu \medmuskip=2mu m\omega_1=\omega_2$, and the constrained region for correctly classifying $\bm{x}$ to class 1 is $\thickmuskip=2mu \medmuskip=2mu m\omega_1<\omega_2$. Geometrically speaking, this is a hypercircle-like region lying on a hypersphere manifold. For example, it is a circle-like region on the unit sphere in the 3D case, as illustrated in Fig.~\ref{geoint}. Note that a larger $m$ leads to a smaller hypercircle-like region for each class, which is an explicit discriminative constraint on a manifold. For better understanding, Fig.~\ref{geoint} provides 2D and 3D visualizations. One can see that A-Softmax loss imposes an arc length constraint on a unit circle in the 2D case and a circle-like region constraint on a unit sphere in the 3D case. Our analysis shows that optimizing angles with A-Softmax loss essentially makes the learned features more discriminative on a hypersphere.
\subsection{Properties of A-Softmax Loss}
\label{prop}
\begin{property}
A-Softmax loss defines a large angular margin learning task with adjustable difficulty. With larger $m$, the angular margin becomes larger, the constrained region on the manifold becomes smaller, and the corresponding learning task also becomes more difficult.
\end{property}
We know that the larger $m$ is, the larger angular margin A-Softmax loss constrains. There exists a minimal $m$ that constrains the maximal intra-class angular distance to be smaller than the minimal inter-class angular distance, which can also be observed in our experiments.
\begin{definition}[minimal $m$ for desired feature distribution]
$m_{\textnormal{min}}$ is the minimal value such that while $m>m_{\textnormal{min}}$, A-Softmax loss defines a learning task where the maximal intra-class angular feature distance is constrained to be smaller than the minimal inter-class angular feature distance.
\end{definition}
\begin{property}[lower bound of $m_{\textnormal{min}}$ in binary-class case]
In binary-class case, we have $\thickmuskip=2mu \medmuskip=2mu m_{\textnormal{min}}\geq2+\sqrt{3}$.
\end{property}
\begin{proof}
We consider the space spanned by $\bm{W}_1$ and $\bm{W}_2$. Because $\thickmuskip=2mu m\geq2$, it is easy to see that the maximal angle spanned by class 1 is $\frac{\theta_{12}}{m-1}+\frac{\theta_{12}}{m+1}$ where $\theta_{12}$ is the angle between $\bm{W}_1$ and $\bm{W}_2$. To require the maximal intra-class feature angular distance to be smaller than the minimal inter-class feature angular distance, we need to constrain
\begin{equation}
\footnotesize
\underbrace{\frac{\theta_{12}}{m-1}+\frac{\theta_{12}}{m+1}}_{\text{max intra-class angle}}\leq\underbrace{\frac{(m-1)\theta_{12}}{m+1}}_{\text{min inter-class angle}},\ \ \theta_{12}\leq\frac{m-1}{m}\pi
\end{equation}
\begin{equation}
\footnotesize
\underbrace{\frac{2\pi-\theta_{12}}{m+1}+\frac{\theta_{12}}{m+1}}_{\text{max intra-class angle}}\leq\underbrace{\frac{(m-1)\theta_{12}}{m+1}}_{\text{min inter-class angle}},\ \ \theta_{12}>\frac{m-1}{m}\pi
\end{equation}
After solving these two inequalities (the first one, for example, simplifies to $\frac{2m}{m^2-1}\theta_{12}\leq\frac{m-1}{m+1}\theta_{12}$, i.e.\ $m^2-4m+1\geq0$, whose positive solution is $m\geq2+\sqrt{3}$), we obtain $\thickmuskip=2mu \medmuskip=2mu m_{\textnormal{min}}\geq2+\sqrt{3}$, which is a lower bound for the binary case.
\end{proof}
\par
\begin{property}[lower bound of $m_{\textnormal{min}}$ in multi-class case]
Under the assumption that $\bm{W}_i,\forall i$ are uniformly spaced in the Euclidean space, we have $m_{\textnormal{min}}\geq3$.
\end{property}
\begin{proof}
We consider the 2D $k$-class ($k\geq3$) scenario for the lower bound. Because $\bm{W}_i,\forall i$ are uniformly spaced in the 2D Euclidean space, we have $\thickmuskip=2mu \medmuskip=2mu \theta_{i}^{i+1}=\frac{2\pi}{k}$ where $\theta_{i}^{i+1}$ is the angle between $\bm{W}_i$ and $\bm{W}_{i+1}$. Since $\bm{W}_i,\forall i$ are symmetric, we only need to analyze one of them. For the $i$-th class ($\bm{W}_i$), we need to constrain
\begin{equation}
\footnotesize
\underbrace{\frac{\theta_{i}^{i+1}}{m+1}+\frac{\theta_{i-1}^{i}}{m+1}}_{\text{max intra-class angle}}\leq\underbrace{\min\bigg{\{}\frac{(m-1)\theta_{i}^{i+1}}{m+1},\frac{(m-1)\theta_{i-1}^{i}}{m+1}\bigg{\}}}_{\text{min inter-class angle}}
\end{equation}
After solving this inequality, we obtain $m_{\textnormal{min}}\geq3$, which is a lower bound for multi-class case.
\end{proof}
Based on this, we use $\thickmuskip=2mu \medmuskip=2mu m=4$ to approximate the desired feature distribution criteria. Since the lower bounds are not necessarily tight, giving a tighter lower bound and an upper bound under certain conditions is also possible, which we leave to future work. Experiments also show that a larger $m$ consistently works better and $\thickmuskip=2mu m=4$ will usually suffice.
\vspace{-.1mm}
\subsection{Discussions}
\vspace{-.1mm}
\textbf{Why angular margin.} First and most importantly, angular margin directly links to discriminativeness on a manifold, which intrinsically matches the prior that faces also lie on a manifold. Second, incorporating angular margin to softmax loss is actually a more natural choice. As Fig.~\ref{comp} shows, features learned by the original softmax loss have an intrinsic angular distribution. So directly combining Euclidean margin constraints with softmax loss is not reasonable.
\par
\textbf{Comparison with existing losses.} In the deep FR task, the most popular and well-performing loss functions include contrastive loss, triplet loss and center loss. First, they only impose a Euclidean margin on the learned features (w/o normalization), while ours instead directly considers an angular margin which is naturally motivated. Second, both contrastive loss and triplet loss suffer from data expansion when constituting the pairs/triplets from the training set, while ours requires no sample mining and imposes discriminative constraints on entire mini-batches (compared to contrastive and triplet loss that only affect a few representative pairs/triplets).
\par
\begin{table*}[t]
\renewcommand{\captionlabelfont}{\footnotesize}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-10pt}
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Layer & 4-layer CNN & 10-layer CNN & 20-layer CNN & 36-layer CNN & 64-layer CNN\\
\hline\hline
Conv1.x & [3$\times$3, 64]$\times$1, S2 & [3$\times$3, 64]$\times$1, S2 & \tabincell{c}{[3$\times$3, 64]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 64\\&3\times3, 64\end{aligned}\right]\times 1$}& \tabincell{c}{[3$\times$3, 64]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 64\\&3\times3, 64\end{aligned}\right]\times 2$} & \tabincell{c}{[3$\times$3, 64]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 64\\&3\times3, 64\end{aligned}\right]\times 3$} \\\hline
Conv2.x & [3$\times$3, 128]$\times$1, S2 & \tabincell{c}{[3$\times$3, 128]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 128\\&3\times3, 128\end{aligned}\right]\times 1$} & \tabincell{c}{[3$\times$3, 128]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 128\\&3\times3, 128\end{aligned}\right]\times 2$} & \tabincell{c}{[3$\times$3, 128]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 128\\&3\times3, 128\end{aligned}\right]\times 4$} & \tabincell{c}{[3$\times$3, 128]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 128\\&3\times3, 128\end{aligned}\right]\times 8$} \\\hline
Conv3.x & [3$\times$3, 256]$\times$1, S2 & \tabincell{c}{[3$\times$3, 256]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 256\\&3\times3, 256\end{aligned}\right]\times 2$} & \tabincell{c}{[3$\times$3, 256]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 256\\&3\times3, 256\end{aligned}\right]\times 4$} & \tabincell{c}{[3$\times$3, 256]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 256\\&3\times3, 256\end{aligned}\right]\times 8$} & \tabincell{c}{[3$\times$3, 256]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 256\\&3\times3, 256\end{aligned}\right]\times 16$} \\\hline
Conv4.x & [3$\times$3, 512]$\times$1, S2 & [3$\times$3, 512]$\times$1, S2 & \tabincell{c}{[3$\times$3, 512]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 512\\&3\times3, 512\end{aligned}\right]\times 1$} & \tabincell{c}{[3$\times$3, 512]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 512\\&3\times3, 512\end{aligned}\right]\times 2$} & \tabincell{c}{[3$\times$3, 512]$\times$1, S2\\$\left[\begin{aligned}&3\times3, 512\\&3\times3, 512\end{aligned}\right]\times 3$} \\\hline
FC1 & 512 & 512 & 512 & 512 & 512 \\\hline
\end{tabular}
\caption{\footnotesize Our CNN architectures with different convolutional layers. Conv1.x, Conv2.x and Conv3.x denote convolution units that may contain multiple convolution layers and residual units are shown in double-column brackets. E.g., [3$\times$3, 64]$\times$4 denotes 4 cascaded convolution layers with 64 filters of size 3$\times$3, and S2 denotes stride 2. FC1 is the fully connected layer. }\label{netarch}
\end{table*}
\vspace{-.75mm}
\section{Experiments (more in Appendix)}
\vspace{-.25mm}
\subsection{Experimental Settings}
\vspace{-.5mm}
\textbf{Preprocessing}. We only use standard preprocessing. The face landmarks in all images are detected by MTCNN \cite{zhang2016joint}. The cropped faces are obtained by similarity transformation. Each pixel ($[0,255]$) in RGB images is normalized by subtracting 127.5 and then dividing by 128.
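A one-line sketch of the stated pixel normalization (the function name is ours; the MTCNN detection and the similarity-transform cropping are not shown):
\begin{verbatim}
import numpy as np

def normalize_pixels(rgb_crop):
    # Map pixel values from [0, 255] to roughly [-1, 1).
    return (rgb_crop.astype(np.float32) - 127.5) / 128.0
\end{verbatim}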
\par
\textbf{CNNs Setup}. Caffe \cite{jia2014caffe} is used to implement A-Softmax loss and CNNs. The general framework to train and extract \emph{SphereFace} features is shown in Fig.~\ref{arch}. We use residual units \cite{he2016deep} in our CNN architecture. For fairness, all compared methods use the same CNN architecture (including residual units) as \emph{SphereFace}. CNNs with different depths (4, 10, 20, 36, 64) are used to better evaluate our method. The specific settings for the different CNNs we used are given in Table~\ref{netarch}. According to the analysis in Section \ref{prop}, we usually set $m$ as 4 in A-Softmax loss unless otherwise specified. These models are trained with a batch size of 128 on four GPUs. The learning rate begins with 0.1 and is divided by 10 at the 16K, 24K iterations. The training is finished at 28K iterations.
\vspace{-1.9mm}
\begin{figure}[h]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-7pt}
\includegraphics[width=2.9in]{arch.pdf}
\caption{\footnotesize Training and Extracting \emph{SphereFace} features. \label{arch}}
\end{figure}
\par
\textbf{Training Data}. We use publicly available web-collected training dataset CASIA-WebFace \cite{yi2014learning} (after excluding the images of identities appearing in testing sets) to train our CNN models. CASIA-WebFace has 494,414 face images belonging to 10,575 different individuals. These face images are horizontally flipped for data augmentation. Notice that the scale of our training data (0.49M) is relatively small, especially compared to other private datasets used in DeepFace \cite{taigman2014deepface} (4M), VGGFace \cite{parkhi2015deep} (2M) and FaceNet \cite{schroff2015facenet} (200M).
\par
\textbf{Testing}. We extract the deep features (\emph{SphereFace}) from the output of the FC1 layer. For all experiments, the final representation of a testing face is obtained by concatenating its original face features and its horizontally flipped features. The score (metric) is computed by the cosine distance of two features. The nearest neighbor classifier and thresholding are used for face identification and verification, respectively.
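A small sketch (ours) of this scoring step, with each face represented by the concatenation of its original and horizontally flipped FC1 features:
\begin{verbatim}
import numpy as np

def face_score(feat, feat_flip, other, other_flip):
    # Cosine similarity between two concatenated SphereFace representations.
    a = np.concatenate([feat, feat_flip])
    b = np.concatenate([other, other_flip])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
\end{verbatim}
Thresholding this score gives verification, and ranking gallery identities by it gives identification via the nearest neighbor.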
\subsection{Exploratory Experiments}
\begin{figure*}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-8pt}
\includegraphics[width=6.43in]{3dpoint-6.pdf}\\
\caption{\footnotesize Visualization of features learned with different $m$. The first row shows the 3D features projected on the unit sphere. The projected points are the intersection points of the feature vectors and the unit sphere. The second row shows the angle distribution of both positive pairs and negative pairs (we choose class 1 and class 2 from the subset to construct positive and negative pairs). The orange area indicates positive pairs while blue indicates negative pairs. All angles are represented in radians. Note that this visualization experiment uses a 6-class subset of the CASIA-WebFace dataset.}\label{vis_m}
\end{figure*}
\textbf{Effect of $\bm{m}$.} To show that a larger $m$ leads to a larger angular margin (i.e. a more discriminative feature distribution on the manifold), we perform a toy example with different $m$. We train the A-Softmax loss on the 6 individuals that have the most samples in CASIA-WebFace. We set the output feature dimension (FC1) to 3 and visualize the training samples in Fig.~\ref{vis_m}. One can observe that a larger $m$ leads to a more discriminative distribution on the sphere and also a larger angular margin, as expected. We also use class 1 (blue) and class 2 (dark green) to construct positive and negative pairs to evaluate the angle distributions of features from the same class and from different classes. The angle distributions of positive and negative pairs (the second row of Fig.~\ref{vis_m}) quantitatively show that the angular margin becomes larger as $m$ increases and that the classes also become more distinct from one another.
\par
Besides visual comparison, we also perform face recognition on LFW and YTF to evaluate the effect of $m$. For fair comparison, we use 64-layer CNN (Table~\ref{netarch}) for all losses. Results are given in Table~\ref{diffm}. One can observe that while $m$ becomes larger, the accuracy of A-Softmax loss also becomes better, which shows that larger angular margin can bring stronger discrimination power.
\par
\vspace{-1.0mm}
\begin{table}[h]
\centering
\footnotesize
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-7pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Dataset & Original & m=1 & m=2 & m=3 & m=4 \\
\hline\hline
LFW & 97.88 & 97.90 & 98.40 & 99.25 & \textbf{99.42} \\
YTF & 93.1 & 93.2 & 93.8 & 94.4 & \textbf{95.0} \\
\hline
\end{tabular}
\caption{\footnotesize Accuracy(\%) comparison of different $m$ (A-Softmax loss) and original softmax loss on LFW and YTF dataset.}\label{diffm}
\end{table}
\par
\textbf{Effect of CNN architectures.} We train the A-Softmax loss ($\thickmuskip=2mu m=4$) and the original softmax loss with different numbers of convolution layers. Specific CNN architectures can be found in Table~\ref{netarch}. From Fig.~\ref{bar}, one can observe that CNNs trained with the A-Softmax loss consistently outperform those trained with the softmax loss (by 1.54\%$\sim$1.91\%), indicating that the A-Softmax loss is more suitable for open-set FR. Besides, the difficult learning task defined by the A-Softmax loss makes full use of the superior learning capability of deeper architectures. The A-Softmax loss greatly improves the verification accuracy from 98.20\% to 99.42\% on LFW, and from 93.4\% to 95.0\% on YTF. On the contrary, the improvement of deeper standard CNNs is unsatisfactory and also easily gets saturated (from 96.60\% to 97.75\% on LFW, from 91.1\% to 93.1\% on YTF).
\par
\vspace{-1.2mm}
\begin{figure}[h]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-6pt}
\includegraphics[width=3.2in]{bar2.pdf}\\
\caption{\footnotesize Accuracy (\%) on LFW and YTF with different number of convolutional layers. Left side is for LFW, while right side is for YTF.}\label{bar}
\end{figure}
\subsection{Experiments on LFW and YTF}
\begin{table}[t]
\centering
\footnotesize
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{6pt}
\setlength{\belowcaptionskip}{-10pt}
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & Models & Data & LFW & YTF \\
\hline\hline
DeepFace \cite{taigman2014deepface} & 3 & 4M* & 97.35 & 91.4 \\
FaceNet \cite{schroff2015facenet} & 1 & 200M* & \textbf{99.65} & 95.1 \\
Deep FR \cite{parkhi2015deep} & 1 & 2.6M & 98.95 & \textbf{97.3}\\
DeepID2+ \cite{sun2015deeply} & 1 & 300K* & 98.70 & N/A \\
DeepID2+ \cite{sun2015deeply} & 25 & 300K* & 99.47 & 93.2 \\
Baidu \cite{liu2015targeting} & 1 & 1.3M* & 99.13 & N/A \\
Center Face \cite{wen2016discriminative} & 1 & 0.7M* & 99.28 & 94.9 \\\hline\hline
Yi et al. \cite{yi2014learning}& 1 & WebFace & 97.73 & 92.2 \\
Ding et al. \cite{ding2015robust}& 1 & WebFace & 98.43 & N/A \\
Liu et al. \cite{liu2016large}& 1 & WebFace & 98.71 & N/A \\\hline\hline
Softmax Loss & 1 & WebFace & 97.88 & 93.1\\
Softmax+Contrastive \cite{sun2014deep} & 1 & WebFace & 98.78 & 93.5\\
Triplet Loss \cite{schroff2015facenet} & 1 & WebFace & 98.70 & 93.4\\
L-Softmax Loss \cite{liu2016large}& 1 & WebFace & 99.10 & 94.0\\
Softmax+Center Loss \cite{wen2016discriminative} & 1 & WebFace & 99.05 & 94.4\\\hline\hline
SphereFace & 1 & WebFace & \textbf{99.42} & \textbf{95.0}\\
\hline
\end{tabular}
\caption{\footnotesize Accuracy (\%) on LFW and YTF dataset. * denotes the outside data is private (not publicly available). For fair comparison, all loss functions (including ours) we implemented use 64-layer CNN architecture in Table~\ref{netarch}.}\label{lfwytf}
\end{table}
LFW dataset \cite{huang2007labeled} includes 13,233 face images from 5,749 different identities, and YTF dataset \cite{wolf2011face} includes 3,424 videos from 1,595 different individuals. Both datasets contain faces with large variations in pose, expression and illumination. We follow the unrestricted with labeled outside data protocol \cite{huang2014labeled} on both datasets. The performance of \emph{SphereFace} is evaluated on 6,000 face pairs from LFW and 5,000 video pairs from YTF. The results are given in Table~\ref{lfwytf}. For the contrastive loss and the center loss, we follow the FR convention and form a weighted combination with the softmax loss. The weights are selected via cross validation on the training set. For L-Softmax \cite{liu2016large}, we also use $\thickmuskip=2mu m=4$. All the compared loss functions share the same 64-layer CNN architecture.
\par
\begin{figure*}[t]
\centering
\renewcommand{\captionlabelfont}{\footnotesize}
\setlength{\abovecaptionskip}{3pt}
\setlength{\belowcaptionskip}{-10pt}
\includegraphics[width=6.6in]{cmcroc.pdf}\\
\caption{\footnotesize CMC and ROC curves of different methods under the small training set protocol.}\label{megafacefig}
\end{figure*}
\par
Most of the existing face verification systems achieve high performance with huge training data or model ensembles. Using a single model trained on a publicly available dataset (CASIA-WebFace, which is relatively small and has noisy labels), \emph{SphereFace} achieves 99.42\% and 95.0\% accuracy on the LFW and YTF datasets. This is the current best performance for models trained on WebFace and is considerably better than the other models trained on the same dataset. Compared with models trained on high-quality private datasets, \emph{SphereFace} is still very competitive, outperforming most of the existing results in Table~\ref{lfwytf}. One should notice that our single-model performance is only worse than Google FaceNet, which is trained with more than 200 million images.
\par
For fair comparison, we also implement the softmax loss, contrastive loss, center loss, triplet loss, L-Softmax loss \cite{liu2016large} and train them with the same 64-layer CNN architecture as A-Softmax loss. As can be observed in Table~\ref{lfwytf}, \emph{SphereFace} consistently outperforms the features learned by all these compared losses, showing its superiority in FR tasks.
\subsection{Experiments on MegaFace Challenge}
\begin{table}[t]
\centering
\footnotesize
\renewcommand{\captionlabelfont}{\footnotesize}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\setlength{\abovecaptionskip}{4pt}
\setlength{\belowcaptionskip}{-11pt}
\begin{tabular}{|c|c|c|c|}
\hline
Method & Protocol & Rank-1 Acc. & Ver. \\
\hline\hline
NTechLAB - facenx large & Large & 73.300 & 85.081 \\
Vocord - DeepVo1 & Large & \textbf{75.127} & 67.318 \\
Deepsense - Large & Large & 74.799 & \textbf{87.764} \\
Shanghai Tech & Large & 74.049 & 86.369 \\
Google - FaceNet v8 & Large & 70.496 & 86.473 \\
Beijing FaceAll\_Norm\_1600 & Large & 64.804 & 67.118\\
Beijing FaceAll\_1600 & Large & 63.977 & 63.960 \\\hline\hline
Deepsense - Small & Small & \textbf{70.983} & \textbf{82.851}\\
SIAT\_MMLAB & Small & 65.233 & 76.720 \\
Barebones FR - cnn & Small & 59.363 & 59.036 \\
NTechLAB - facenx\_small & Small & 58.218 & 66.366 \\
3DiVi Company - tdvm6 & Small & 33.705 & 36.927 \\\hline\hline
Softmax Loss & Small & 54.855 & 65.925 \\
Softmax+Contrastive Loss \cite{sun2014deep} & Small & 65.219 & 78.865 \\
Triplet Loss \cite{schroff2015facenet} & Small & 64.797 & 78.322 \\
L-Softmax Loss \cite{liu2016large} & Small & 67.128 & 80.423 \\
Softmax+Center Loss \cite{wen2016discriminative} & Small & 65.494 & 80.146 \\\hline\hline
SphereFace (single model) & Small & \textbf{72.729} & \textbf{85.561}\\
SphereFace (3-patch ensemble) & Small & \textbf{75.766} & \textbf{89.142}\\
\hline
\end{tabular}
\caption{\footnotesize Performance (\%) on MegaFace challenge. ``Rank-1 Acc.'' indicates rank-1 identification accuracy with 1M distractors, and ``Ver.'' indicates verification TAR for $10^{-6}$ FAR. TAR and FAR denote True Accept Rate and False Accept Rate respectively. For fair comparison, all loss functions (including ours) we implemented use the same deep CNN architecture.}\label{megaface}
\end{table}
MegaFace dataset \cite{miller2015megaface} is a recently released testing benchmark with a very challenging task: evaluating the performance of face recognition methods at the million scale of distractors. The MegaFace dataset contains a gallery set and a probe set. The gallery set contains more than 1 million images from 690K different individuals. The probe set consists of two existing datasets: Facescrub \cite{ng2014data} and FGNet. MegaFace has several testing scenarios including identification, verification and pose invariance under two protocols (large or small training set). The training set is viewed as small if it contains less than 0.5M images. We evaluate \emph{SphereFace} under the small training set protocol. We adopt two testing protocols: face identification and verification. The results are given in Fig.~\ref{megafacefig} and Table \ref{megaface}. Note that we report the simple 3-patch feature concatenation ensemble as the final performance of \emph{SphereFace}.
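For reference, the ``Ver.'' metric in Table~\ref{megaface} is the true accept rate at a fixed false accept rate. A minimal sketch of how such a number can be computed from genuine and impostor score lists is given below; it is our illustration, not the official MegaFace development kit.
\begin{verbatim}
# TAR at a fixed FAR from two score lists (illustrative sketch).
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far=1e-6):
    impostor = np.sort(np.asarray(impostor_scores))[::-1]   # descending
    k = max(int(far * len(impostor)), 1)
    threshold = impostor[k - 1]     # roughly a fraction `far` of impostors pass
    tar = float(np.mean(np.asarray(genuine_scores) >= threshold))
    return tar, threshold
\end{verbatim}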
\par
Fig.~\ref{megafacefig} and Table \ref{megaface} show that \emph{SphereFace} (3-patch ensemble) beats the second best result by a large margin (4.8\% for rank-1 identification rate and 6.3\% for verification rate) on the MegaFace benchmark under the small training dataset protocol. Compared to the models trained on large datasets (500 million images for Google and 18 million for NTechLAB), our method still performs better (0.64\% for id. rate and 1.4\% for veri. rate). Moreover, in contrast to their sophisticated network design, we only employ a typical CNN architecture supervised by A-Softmax to achieve such excellent performance. For the single-model \emph{SphereFace}, the face identification and verification accuracies are still 72.73\% and 85.56\% respectively, which already outperforms most state-of-the-art methods. For better evaluation, we also implement the softmax loss, contrastive loss, center loss, triplet loss and L-Softmax loss \cite{liu2016large}. Compared to these loss functions trained with the same CNN architecture and dataset, \emph{SphereFace} also shows significant and consistent improvements. These results convincingly demonstrate that the proposed \emph{SphereFace} is well designed for open-set face recognition. One can also see that learning features with a large inter-class angular margin can significantly improve the open-set FR performance.
\vspace{-7mm}
\section{Concluding Remarks}
\vspace{-1.5mm}
This paper presents a novel deep hypersphere embedding approach for face recognition. Specifically, we propose the angular softmax (A-Softmax) loss for CNNs to learn discriminative face features (\emph{SphereFace}) with an angular margin. The A-Softmax loss has a clear geometric interpretation: it constrains the learned features to be discriminative on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a non-linear manifold. This connection makes A-Softmax very effective for learning face representations. Competitive results on several popular face benchmarks demonstrate the superiority and great potential of our approach. We believe the A-Softmax loss could also benefit other tasks such as object recognition, person re-identification, etc.
\par
{
\small
\bibliographystyle{ieee}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,649 |
\section{Approaches for Adaptive Personal Robots}
The promise of personal robots operating in human environments and interacting with people on a daily basis highlights the importance of the machine's adaptivity to its changing and unlimited environment: it must adapt its behaviour and learn new skills and knowledge as the users' needs change.
In order to learn an open-ended repertoire of skills, developmental robots, like animal or human infants, need to be endowed with task-independent mechanisms to explore new activities and new situations ~\cite{Weng01,Asada09}.
The set of skills that could be learnt is infinite but cannot be learnt completely within a lifetime. Thus, deciding how to explore and what to learn becomes crucial. Exploration strategies of recent years can be classified into two families: 1) socially guided exploration; 2) internally guided exploration, and in particular intrinsically motivated exploration.
\subsection{Socially Guided Exploration}
To build a robot that can learn and adapt to human environment, the most straightforward way might be to transfer knowledge about tasks or skills from a human to a machine. Several works incorporate human input to a machine learning process, for instance through human guidance to learn by demonstration \cite{ChernovaVelosoJAIR09,Lopes09,Cederborg10,PbDCalinon} or by physical guidance \cite{Calinon07}, through human control of the reinforcement learning reward \cite{Blumberg:2002:ILI:566654.566597,Robotic-clicker-training}, through human advice\cite{A-teaching-method-for-reinforcement-leClouse-J.-and-Utgoff-P.-arning}, or through human tele-operation during training \cite{Effective-reinforcement-learning-for-mobile-robots}.
However, high dependence on human teaching is limited because of human patience, ambiguous human input, the correspondence problem \cite{Imitation-and-Social-Learning-in-Robots-Humans-and-Animals:}, etc. Increasing the learner's autonomy from human guidance could address these limitations. This is the case of internally guided exploration methods.
\subsection{Intrinsically Motivated Exploration}
Intrinsic motivation, an example of internally guided exploration, has drawn attention recently, especially for open-ended cumulative learning of skills \cite{Weng01,Oudeyer10b}. The term \textit{intrinsic motivation} in psychology describes the attraction of humans toward different activities for the pleasure they experience intrinsically. This is crucial for autonomous learning and discovery of new capabilities \cite{Ryan00,Deci85,Oudeyer08}. This inspired the creation of fully autonomous robots \cite{Barto04,Oudeyer07,Baranes09a,Schmidhuber10,Schembri07c} with meta-exploration mechanisms monitoring the evolution of the learning performances of the robot, in order to maximise informational gain, and with heuristics defining the notion of interest \cite{Fedorov72,Cohn96,Roy01}.
Nevertheless, most intrinsic motivation approaches address only partially the challenges of unlearnability and unboundedness \cite{OudeyerIMCleverBook}. As interestingness is based on the derivative of the evolution of performance of acquired knowledge or skills, computing measures of interest requires a sampling density whose efficiency decreases as the size of the space to sample grows. Even in bounded spaces, the measures of interest, mostly non-stationary regressions, face the curse of dimensionality \cite{Bishop07}. Thus, without additional mechanisms, the identification of learnable zones where knowledge and competence can progress becomes inefficient. The second limit relates to unboundedness. If the measure of interest depends only on the evaluation of the performances of predictive models or of skills, it is impossible to explore/sample inside all localities in a lifetime. Therefore, complementary mechanisms have to be introduced in order to constrain the growth of the size and complexity of practically explorable spaces, allowing the organism to introduce self-limits in the unbounded world and/or driving it rapidly toward learnable subspaces. Among such constraining processes are motor synergies, morphological computation, maturational constraints as well as social guidance.
\subsection{Combining Internally Guided Exploration and Socially Guided Exploration}
Intrinsic motivation and socially guided learning, traditionally opposed, yet strongly interact in the daily life of humans. Both approaches have their own limits, but combining both could on the contrary solve them.
Social guidance can drive a learner into new intrinsically motivating spaces or activities which it may continue to explore alone for their own sake, but which might have been discovered only thanks to social guidance. Robots may acquire new strategies for achieving those intrinsically motivated activities by external observation or advice.
In reinforcement learning, the human can directly control the actions of a robot agent through teleoperation to provide example task demonstrations \cite{Peters_NN_2008,Kormushev2010IROS}, which initialise the learning process by imitation learning; the policy is subsequently improved by reinforcement learning. Nevertheless, the role of the teacher here is restricted to the initialisation phase. Moreover, these works aim at one particular preset task, and do not explore the whole world.
Inversely, as learning that depends highly on the teacher quickly discourages the user from teaching the robot, integrating self-exploration into social learning methods could relieve the user from overly time-consuming teaching.
Moreover, while self-exploration tends to result in a broader task repertoire, guided-exploration with a human teacher tends to be more specialised, with fewer tasks but faster learnt. Combining both can thus bring out a system that acquires a wide range of knowledge which is necessary to scaffold future learning with a human teacher on specifically needed tasks.
Initial work in this direction has been presented in \cite{ThomazBreazeal-ConnSci08,ThomazPhDThesis}, where Socially Guided Exploration's motivational drives, along with social scaffolding from a human partner, bias the behaviour to create learning opportunities for a hierarchical Reinforcement Learning mechanism. However, the representation of the continuous environment by the robot is discrete and the setup is a limited and preset world, with few primitive actions possible.
We would like to address the learning in the case of an unbounded, non-preset and continuous environment.
This paper introduces \textbf{SGIM} (Socially Guided Intrinsic Motivation), an algorithm to deal with such spaces, by merging socially guided exploration and intrinsic motivation. The next section describes SGIM's intrinsic motivation part before its social interaction part. Then, we present the fishing experiment and its results.
\section{Intrinsic Motivation : the SAGG-RIAC Algorithm}
In this section we introduce Self-Adaptive Goal Generation - Robust Intelligent Adaptive Curiosity, an implementation of competence-based intrinsic motivations \cite{Baranes10b}.
We chose this algorithm as the intrinsic motivation part of SGIM for its efficiency in learning a wide range of skills in high-dimensional spaces including both easy and unlearnable subparts. Moreover, its goal directedness allows bidirectional merging with socially guided methods based on feedback on either goals and/or means. Its ability to detect unreachable spaces also makes it suitable for unbounded spaces.
\subsection{Formalisation of the Problem}
\label{formalisation}
Let us consider a robotic system whose configurations/states are described in both a state space $X$ (eg. actuator space), and an operational/task space $Y$. For given configurations $(x_1, y_1) \in X \times Y$, an action $a \in A$ allows a transition towards the new states $(x_2,y_2) \in X \times Y$. We define the action $a$ as a parameterised dynamic motor primitive. While in classical reinforcement learning problems $a$ is usually defined as a sequence of micro-actions, parameterised motor primitives consist of complex closed-loop dynamical policies which are actually temporally extended macro-actions, that include at the low level long sequences of micro-actions, but have the advantage of being controlled at the high level only through the setting of a few parameters. The association $ M :(x_1, y_1, a) \mapsto (x_2,y_2)$ corresponds to a learning exemplar that will be memorised, and the goal of our system is to learn both the forward and inverse models of the mapping $M $.
We can also describe the learning in terms of tasks, and consider $y_2$ as a \textit {goal} which the system reaches through the \textit{means} $a$ in a given \textit {context} $(x_1,y_1)$. In the following, both points of view will be used interchangeably.
\subsection{Global Architecture of SAGG-RIAC}
SAGG-RIAC is a multi-level active learning algorithm and consists in pushing the robot to perform babbling in the goal space by self-generating goals which provide a maximal competence improvement for reaching those goals. Once a goal is set, a lower level active motor learning algorithm locally explores how to reach the given self-generated goal.
The SAGG-RIAC architecture is organised into 2 levels :
\begin{itemize}
\item A higher level of active learning which decides what to learn, sets a goal $y_g \in Y$ depending on the level of achievement of previous goals, and learns at a longer time scale.
\item A lower level of active learning that attempts to reach the goal $y_g$ set by the higher level and learns at a shorter time scale.
\end{itemize}
\subsection{Lower Level Learning}
The lower level is made of 2 modules. The \textit{Goal-Directed Exploration and Learning} module guides the system toward the goal $y_g$ and creates a model of the world that may be reused afterwards for other goals. The \textit{Goal-Directed Low-Level Actions Interest Computation} module measures the interest level of the goal $y_g$ with $Sim$, a function representing the similarity between the final state $y_f$ of the reaching attempt and the actual goal $y_g$. The exact definition depends on the specific learning task, but $Sim$ is to be defined in $[-\infty; 0]$, such that the higher $Sim(y_g,y_f, \rho)$, the more efficient the reaching attempt is.
\subsection{Higher Level Learning}
The two modules of the higher level compute the interesting goals to explore, depending on the performance of the short-term level and the previous goals already explored.
The \textit{Goal Interest Computation} module relies on the feedback of the lower level to map the interest level in the task space $Y$.
The interest $interest_i$ of a region $R_i \subset Y$ is \textit{the local competence progress, over a sliding time window of the $\mathbf{\zeta}$ most recent goals attempted inside ${R}_i$}:
\begin{center}
\vspace{-0.4cm}
\scriptsize
\begin{eqnarray*}
interest_i = \left| \frac{\left(\displaystyle \sum_{j=| {R}_i|-\zeta}^{|{R}_i|-\frac{\zeta}{2}} \gamma_{y_j} \right) - \left(\displaystyle \sum_{j=|{R}_i|-\frac{\zeta}{2}}^{|{R}_i|} \gamma_{y_j} \right) }{\zeta} \right|
\label{interest}
\end{eqnarray*}
\end{center}
where $\{y_{1}, y_{2}, ..., y_{k}\}_{{R}_i}$ are elements of $R_i$ indexed by their relative time order of experimentation and $\gamma_{y_j}$ is the competence of $y_j \in R_i$, defined with respect to the similarity between the final state $y_f$ of the reaching attempt and the actual goal $y_j$:
\vspace{-0.3cm}
\begin{eqnarray}
\gamma_{y_j}= \left\{
\begin{array}{ll}
Sim(y_j,y_f, \rho) & \mbox{if} \ Sim(y_j,y_f, \rho) \le \varepsilon_{sim} < 0\\
0 & \mbox{otherwise} \nonumber
\end{array}
\right.
\end{eqnarray}
\vspace{-0.3cm}
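As a concrete illustration, the interest measure above can be computed from the list of competences of the $\zeta$ most recent goals attempted in a region; the following sketch is ours and assumes the competences are stored in chronological order.
\begin{verbatim}
# Interest of a region = |competence progress| over the last zeta goals.
def interest(competences, zeta):
    recent = competences[-zeta:]       # gamma_{y_j} of the zeta latest goals
    half = zeta // 2
    older, newer = recent[:half], recent[half:]
    return abs(sum(older) - sum(newer)) / zeta
\end{verbatim}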
The \textit{Goal Self-Generation} module uses the measure of interest to split $Y$ into subspaces to maximally discriminate areas according to their levels of interest and select the region where future goals will be chosen.
The goal self-generation mechanism involves random exploration of the space in order to map the level of interest for the different subparts. This prevents it from exploring efficiently large goal spaces containing small reachable subparts because of the need for discrimination of these subparts from unreachable ones. In order to solve this problem, we propose to bootstrap intrinsic motivation with social guidance. In the following section, we review different kinds of social interactions modes then describe our algorithm SGIM-D (Socially Guided Intrinsic Motivation by Demonstration).
\section{SGIM Algorithm}
\subsection{Formalisation of the Social Interaction}
Within the problem of learning the forward and the inverse models of the mapping $ M : (x_1, y_1, a) \mapsto (x_2,y_2)$, we would like to introduce the role of a human teacher to boost the learning of the means $a$ and goal $y_2$ in the contexts $(x_1,y_1)$. We set a formalisation of the case where an imitator is trying to build good world models and where paying attention to the demonstrator is one strategy for speeding up this learning. Given the model estimated by the robot $M_{R}$, and by the human teacher $M_{H}$, we can consider social interaction as a transformation $SocInter: (M_R, M_H) \mapsto (M2_R, M2_H) $. The goal of the learning is that the robot acquires a perfect model of the world, i.e. that $SocInter(M_R, M_H) = (M_{perfect},M_{perfect})$. Social interaction is a combination of the human teacher's behaviour or guidance $SocInter_H$ and the machine learner's behaviour $SocInter_R$.
We presume a transparent communication between the teacher and the learner, i.e. the teacher can access the real visible state of the robot as a noiseless function of its internal state $visible_R(M_R)$. Let us note $\widetilde{visible}_R$ the "perfect visible state" of the robot, i.e. the value of the visible states of the robot when its estimation of the model is perfect: $M_R = M_{perfect}$.
Moreover, we postulate that the teacher is omniscient, his estimation of the model is the perfect model $M_H = M_{perfect}$. Therefore, our social interaction is a transformation $SocInter: M_R \mapsto M $.
In order to define the social interaction that we wish to consider, we need to peruse the different possibilities.
\subsection{Analysis of Social Interaction Modes}
First of all, let us define which type of interaction takes place, and what role we shall give to the teacher. Taking inspiration from psychology, such as the use of motherese in child development \cite{springerlink:10.1023/A:1013215010749} or the importance of positive feedback \cite{ThomazBreazeal-ConnSci08}, reward-like feedback seems to be important in learning. It typically provides an estimation of a distance between the robot's visible state and its "perfect visible state": $SocInter_H \sim dist(visible_R,\widetilde{visible}_R) $. Yet, this cheering needs to be complemented by games where parents show and instruct children interesting cases and help children reach their goals. Therefore, we prefer a demonstration type of interaction. Besides, social interaction can be separated into two broad categories of social learning \cite{Imitation-in-animals-and-artifacts-chapter-Three-sources-of-information-in-social-learning}: imitation, where the learner copies the specific motor patterns $a$, and emulation, where the learner attempts to replicate goal states $y_2 \in Y$.
To enable both imitation and emulation and influence the learner from both the action and the goal point of view, we provide the learner with both a means and a goal example: $SocInter_H \in A \times Y $. Indeed, a teacher who shows both a means and a goal offers the best opportunity for the learner to progress, for the learner can use either the means-driven or the goal-driven approach.
Our next question is: when should the interaction occur? For the robot's adaptability or flexibility to the changing environment and demand from the user, interactions should take place throughout the learning process. In order to test the efficiency of our algorithm and control the way interactions occur, we choose to trigger the interaction at a constant frequency.
Lastly, to induce significant improvement of the learner, we shall provide it with demonstrations in not-yet-learned subspaces, in order to make the robot explore new goals and unexplored subspaces.
So as to bootstrap a system endowed with intrinsic motivation, we choose learning by demonstration of means and goals, where the teacher introduces, at a regular pace, a random demonstration among the goals not yet reached by SGIM-D.
\subsection{Description of SGIM-D Algorithm}
This section details how SGIM learns an inverse model in a continuous, unbounded and non-preset framework, combining both intrinsic motivation and social interaction. Our Socially Guided Intrinsic Motivation algorithm merges SAGG-RIAC as intrinsic motivation, with a learning by demonstration, as social interaction.
SGIM-D includes two different levels of learning (fig. \ref{StructureSGIM}).
\begin{figure}
\vspace{-0.6cm}
\centering
\includegraphics[width=7cm]{Figures/StructureSGIM.pdf}
\vspace{-0.3cm}
\caption{ \small{ Structure of SGIM-D (Socially Guided Intrinsic Motivation by Demonstration). SGIM-D is organised into 2 levels.} }
\label{StructureSGIM}
\vspace{-0.4cm}
\end{figure}
\subsubsection{Higher Level Learning}
The higher level of active learning decides which goal $(x_2,y_2)$ is interesting to explore and contains 3 modules. The \textit{Goal Self-Generation module} and the \textit {Goal Interest Computation} module are as in SAGG-RIAC. The \textit{Social Interaction} module manages the interaction with the human teacher. It interfaces the social guidance of the human teacher $SocInter_H$ with the Goal Interest Computation Module and interrupts the intrinsic motivation at every demonstration by the teacher. It first triggers an emulation effect, as it registers the demonstration $(a_{demo},y_{demo})$ in the memory of the system and gives it as input to the goal interest computation module. It also triggers the imitation behaviour and sends the demonstrated action $a_{demo}$ to the Imitation module of the lower level.
\subsubsection{Lower Level Learning}
The lower level of active learning also contains 3 modules. The \textit {Goal Directed Exploration and Learning} module and the \textit{Goal Directed Low Level Actions Interest Computation} module, as in SAGG-RIAC, use $M_R$ to reach the self-generated goal $(x_2,y_2)$. The \textit {Imitation} module interfaces with the high-level Social Interaction module. It tries small variations to explore in the locality of $a_{demo}$ and helps address the correspondence problem in the case of a human demonstration which does not use the same parametrisation as the robot.
The above description is detailed for our choice of SGIM by Demonstration. Such a structure remains suitable for other choices of social interaction modes; we only have to change the content of the Social Interaction module, and change the Imitation module to the chosen behaviour. Our structure, notably, can deal with cases where the intrinsically motivated part gives feedback to the teacher, as the Goal Interest Computation module and the Social Interaction module communicate bilaterally. For instance, the case where the learner asks the teacher for demonstrations can still use this structure.
We have hitherto presented intrinsic motivation's SAGG-RIAC and analysed social learning and its different modes, to design Socially Guided Intrinsic Motivation by Demonstration (SGIM-D) that merges both paradigms, to learn a model in a continuous, unbounded and non-preset framework. In the following section we use SGIM-D to learn fishing skill.
\section{Fishing Experiment}
This fishing experiment focuses on the learning of inverse models in a continuous space, and deals with a very high-dimensional and redundant model. The model of a fishing rod might be mathematically computed in a simulator, but a real-world fishing rod's dynamics would be impossible to model. A learning system for such a case is therefore interesting.
\begin{figure}
\vspace{-0.6cm}
\centering
\includegraphics[width=7cm]{Figures/FishingRod.png}
\vspace{-0.3cm}
\caption{ \scriptsize{Fishing experimental setup. Our 6-dof robot arm manipulates a fishing rod. Each joint is controlled by a bezier curve parameterised by 4 scalars (initial, middle and final joint position and a duration). We track the position of the hook when it reaches the water surface.}}
\label{FishingRod}
\vspace{-0.6cm}
\end{figure}
\subsection{Experimental Setup}
Our continuous environment sets a 6 degrees-of-freedom robot arm that learns how to use a fishing rod (fig. \ref{FishingRod}), i.e. it learns, for a given goal position $y_g$ that the hook should reach when falling into the water, which action $a$ to perform.
In our experiment, $X$ describes the actuator/joint positions and the state of the fishing rod. $Y$ is a 2-D space that describes the position of the hook when it reaches the water. The robot always starts from the same initial position, so $x_1$ and $y_1$ always take the same values $x_{org}$ and $y_{org}$. Variable $a$ describes the parameters of the commands for the joints. We choose to control each joint with a Bezier curve defined by 4 scalars (initial, middle and final joint position and a duration). Therefore an action is represented by 24 parameters: $a= (a^1,a^2, ...a^{24} )$ defines the control points of the Bezier curves, from which the joint positions $x= (x^1,x^2, ...x^{6} )$ are computed, made physically consistent, and attained sequentially by the robot. Because our experiment uses at each trial the same context $(x_{org}, y_{org})$, our system memorises after executing every action $a$ only the association $(a,y_2) $ and learns the context-free association $ M : a \mapsto y_2$.
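One possible reading of this parameterisation is sketched below: for each of the 6 joints, the tuple (start, middle, end, duration) defines a quadratic Bezier profile that is sampled at the control rate. The Bezier order, the sampling step and the function names are our assumptions, for illustration only.
\begin{verbatim}
# From the 24 action parameters to 6 joint trajectories (illustrative).
import numpy as np

def joint_trajectory(start, middle, end, duration, dt=0.05):
    n = max(int(duration / dt), 2)
    s = np.linspace(0.0, 1.0, n)
    # quadratic Bezier through the three control values
    return (1 - s) ** 2 * start + 2 * s * (1 - s) * middle + s ** 2 * end

def action_to_trajectories(a):
    params = np.asarray(a).reshape(6, 4)     # (start, middle, end, duration)
    return [joint_trajectory(*p) for p in params]
\end{verbatim}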
The experimental scenario sets the robot to explore the task space through intrinsic motivation when it is not interrupted by the teacher. After $P$ movements, the teacher interrupts whatever the robot is doing, and gives it an example $(a_{demo}, y_{demo})$. The robot first registers that example in its memory as if it were its own. Then, the Imitation module tries to imitate the teacher with movement parameters $a_{imitate} = a_{demo} + a_{rand} $ where $a_{rand}$ is a random movement parameter variation, so that $|a_{rand}|< \epsilon$. At the end of the imitation phase, SAGG-RIAC resumes the autonomous exploration, taking into account the new set of experience. We hereafter describe the low-level exploration, specific to this problem.
\subsection{ Empirical Implementation of the Low-Level Exploration}
\label{one-goal}
Let us first consider that the robot learns to reach a fixed goal position $y_g = (y_g^1,y_g^2)$. We first have to define the similarity function $Sim$ with respect to the Euclidean distance $D$:
\vspace{-0.2cm}
\begin{eqnarray}
Sim(y_g, y_f, y_{org}) = \left\{
\begin{array}{ll}
-1 & \mbox{if} \ \frac{D(y_g,y_f)}{D(y_g, y_{org})} > 1\\
- \frac{D(y_g,y_f)}{D(y_g, y_{org})} & \mbox{otherwise} \nonumber
\end{array}
\right.
\end{eqnarray}
\vspace{-0.2cm}
To learn the inverse model $InvModel : y \mapsto a $, we use the following optimisation mechanism, which can be divided into an exploitation regime and an exploration regime.
\subsubsection{Exploitation Regime}
The exploitation regime uses the memory to locally interpolate an inverse model. Given the high redundancy of the problem, we choose a local approach and extract the most reliable data by computing the set $L$ of the $l_{max}$ nearest neighbours of $y_g$ and their corresponding movement parameters using an ANN method \cite{Muja09} which is based on a tree split using the k-means process:
$ L = \left\{ (y,a)_1, (y,a)_2, ... , (y,a)_{l_{max}} \right\} \subset (Y\times A)^{l_{max}} $.
Then, for each element $(y,a)_l \in L$, we compute its reliability. Let $K_l$ be the set of the $k_{max}$ nearest neighbours of $a_l$ chosen from the full dataset :
$ K_l = \left\{ (y,a)_1, (y,a)_2, ... , (y,a)_{k_{max}} \right\} $, and $var_l$ is the variance of $K_l$.
As the reliability of a movement depends both on the local knowledge and on its reproducibility, we define the reliability of $(y,a)_l \in L$ as $dist(y_l,y_g) + \alpha\times var_l$, where $\alpha $ is a constant ($\alpha =$ 0.5 in our experiment). We choose from $L$ the element with the smallest value as the most reliable pair $(y,a)_{best}$.
In the locality of the set $(y,a)_{best}$, we interpolate using the $k_{max}$ elements of $K_{best}$ to compute the action corresponding to $y_g$ :
$a_g = \sum_{k=1}^{k_{max}} {coef_k a_k} $ where $coef_k \sim Gaussian(dist(y_k,y_g)) $ is a normalized gaussian.
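A minimal NumPy sketch of this exploitation regime, as we read it, is given below; the array names, the Gaussian bandwidth and the choice of measuring the variance of the goal-space coordinates of $K_l$ are our assumptions.
\begin{verbatim}
# Exploitation regime: pick the most reliable neighbour of the goal,
# then interpolate its action neighbourhood with Gaussian weights.
import numpy as np

def exploit(memory_y, memory_a, y_goal, l_max=5, k_max=5, alpha=0.5):
    d_goal = np.linalg.norm(memory_y - y_goal, axis=1)
    L = np.argsort(d_goal)[:l_max]                  # nearest reached goals
    best_K, best_score = None, np.inf
    for l in L:
        d_act = np.linalg.norm(memory_a - memory_a[l], axis=1)
        K = np.argsort(d_act)[:k_max]               # neighbours in action space
        score = d_goal[l] + alpha * np.var(memory_y[K], axis=0).sum()
        if score < best_score:
            best_K, best_score = K, score
    w = np.exp(-np.linalg.norm(memory_y[best_K] - y_goal, axis=1) ** 2)
    w /= w.sum()
    return (w[:, None] * memory_a[best_K]).sum(axis=0)   # interpolated action
\end{verbatim}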
\subsubsection{Exploration Regime}
The system just uses a random movement parameter to explore the space.
It continuously estimates the distance between the goal $y_g$ and the closest already reached position $y_c$, $dist(y_c, y_g)$. The system has a probability proportional to $dist(y_c, y_g)$ of being in the exploration regime, and the complementary probability of being in the exploitation regime.
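A minimal sketch of this regime selection is given below; the normalisation of the distance to $[0,1]$ (e.g. by $D(y_g, y_{org})$) is our assumption.
\begin{verbatim}
# Choosing between exploration and exploitation (illustrative sketch).
import random

def choose_regime(dist_to_closest_reached):
    p_explore = min(max(dist_to_closest_reached, 0.0), 1.0)
    return "explore" if random.random() < p_explore else "exploit"
\end{verbatim}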
\subsection{Simulations}
The experimental setup has been designed for a human teacher. Nevertheless, to test our algorithm, to better control the demonstrations of the teacher, to be able to run statistics, and as a starting point, we used the V-REP physics simulator with the ODE physics engine, which updates every 50 ms. The noise of the control system of the 3D robot is estimated at 0.073 for 10 attempts of 20 random movement parameters, while the reachable area spans between -1 and 1 in each dimension.
Per experiment, we ran 5000 movements and assessed the performance on a 129-point benchmark set (fig. \ref{BenchmarkTeachingSet}) every 250 movements.
After several runs of random explorations and SAGG-RIAC, we determined the apparent reachable space as the set of all the reached points in the goal/task space, which makes up some 70 000 points. We then divided the space into small squares, and generated a point randomly in each square. Using a $26\times 16$ grid, we obtained a set of 129 goal points in the task space, representative of the reachable space, and independent of the experiment data used.
\begin{figure}
\vspace{-0.5cm}
\centering
\includegraphics[width=4.0cm]{Figures/TeachingBenchmark.pdf}
\vspace{-0.3cm}
\caption{\scriptsize{
Maps of the benchmark points used to assess the performance of the robot, and the teaching set, used in SGIM.}
}
\label{BenchmarkTeachingSet}
\vspace{-0.2cm}
\end{figure}
Likewise, we prepared a teaching set of 27 demonstrations (fig. \ref{BenchmarkTeachingSet}) and defined small squares $subY$ in the reachable space. For each $subY$, we choose a demonstration $(a,y)$ such that $y \in subY$. So that the teacher gives the most useful demonstration, we compute $M_H^{-1}(subY)= \{a | M_H: a \mapsto y\in subY \}$. We tested all the movement parameters $a \in M_H^{-1}(subY)$ to choose the most reliable one $a_{demo}$, i.e. the movement parameters that resulted in the smallest variance in the goal space: $a_{demo}= min \{ var( M_H(a)) \}_{a \in M_H^{-1}(subY)} $.
\subsection{Experimental results}
\subsubsection{A Wide Range of Skills }
\begin{figure}
\vspace{-0.1cm}
\centering
\includegraphics[width=8cm]{Figures/FigureSGIM.pdf}
\vspace{-0.3cm}
\caption{\scriptsize{
Histograms of the positions explored by the fishing rod inside the 2D goal space $(y^1,y^2)$. Each row shows the timeline of the cumulated set of points throughout 5000 movements. Each row corresponds to a different learning algorithm: random input parameters, SAGG-RIAC and SGIM-D. }
}
\label{FigureSGIM}
\vspace{-0.3cm}
\end{figure}
We ran the experiment in the same conditions but with different learning algorithms, and plotted in fig. \ref{FigureSGIM} the histogram of the positions of the fishing rod when it reaches the water surface.
The 1st line of fig. \ref{FigureSGIM} shows that a natural position lies around $(0.5,0)$ in the case of an exploration with random movement parameters. Most movement parameters map to a position of the hook around that central position.
The second line of fig. \ref{FigureSGIM} shows the histogram in the task space of the explored points under SAGG-RIAC algorithm throughout different timeframes. Compared to a random parameters exploration, SAGG-RIAC has increased the explored space, and most of all, explores more uniformly the explorable space. The regions of interest change through time as the system finds new interesting subspaces to explore. Intrinsic motivation exploration results in a wider repertoire for the robot.
Besides, Fig. \ref{FigureSGIM} highlights a region around $(-0.5, -0.25)$ that was ignored by both the random exploration and SAGG-RIAC, but was well explored by SGIM-D. This isolated subspace corresponds to a tiny subspace in the parameters space, seldom explored by the random exploration or seen by SAGG-RIAC which was focusing on the subspaces around the places it already explored. On the contrary, in SGIM, the teacher gives a demonstration that brings new competence to the robot, and triggers the system's interest to define the area as interesting.
\begin{figure}
\vspace{-0.2cm}
\centering
\includegraphics[width=6.7cm]{Figures/CompareEvaluationInterpolationSizeTaskSpace6.pdf}
\vspace{-0.3cm}
\caption{ \scriptsize{
Evaluation of the performance of the robot under the learning algorithms: SAGG-RIAC and SGIM-D, when the task space is small or 20 times larger. We plotted the mean distance to the benchmark points over several runs of the experiment.}
}
\label{CompareEvaluationInterpolation}
\vspace{-0.4cm}
\end{figure}
\subsubsection{Precision}
To assess the precision of its learning, and to compare its performance in large spaces, we plotted the performance of SAGG-RIAC, of SGIM-D, and of a learner performing only variations of the teacher's demonstrations (with the same number of demonstrations as with SGIM-D). Fig. \ref{CompareEvaluationInterpolation} shows the mean error on the benchmark in the case of a task space bounded close to the reachable space, and when we multiplied its size by 20.
In the case of the small task space, the plots show that SGIM-D performs better than SAGG-RIAC or the learning by demonstrations alone.
As expected, performance decreases when the size of the task space increases (cf. section 1). However it improves with SGIM-D, and the difference between SAGG-RIAC and SGIM-D is more important in the case of a large task space; thus the improvement is most significant when the task space size increases.
\begin{figure}
\centering
\includegraphics[width=7cm]{Figures/HistogramGoalsLarge.pdf}
\vspace{-0.4cm}
\caption{\scriptsize{
Histograms of the goals set by the Goal Self-Generation Module when the task space is large. The different figures correspond to the results of different runs of the experiment with the SAGG-RIAC algorithm (1st row) and the SGIM-D algorithm (2nd row). Figures in both rows have been zoomed and centred on the reachable space}
}
\label{HistogramGoals}
\vspace{-0.4cm}
\end{figure}
\subsubsection{Identification of the reachable space}
This difference in performance is explained by Fig \ref{HistogramGoals}, which plots the histogram of the set of self-generated goals and the subspaces explored by the robot. We can see that in the second row, most goals are within the reachable space, and cover most of it. This means that SGIM-D could differentiate the reachable subspaces from the unreachable subspaces. On the contrary, the first row shows goal points that appear disorganised: SAGG-RIAC has not identified which subspaces are unreachable.
Demonstrations given by the teacher improved the learner's knowledge of the inverse model $InvModel$. We also note that the demonstrations occurred only once every 150 movements, meaning that even a slight presence of the teacher can significantly improve the performance of the autonomous exploration. In conclusion, SGIM-D improves the precision of the system with little intervention from the teacher, and helps point out key subregions to be explored. The role of SGIM-D is most significant when the size of the task space increases.
\section{ Conclusion and Future Work}
Our experiment shows that SGIM learns a model of its environment, and that a little intervention from the teacher can improve its learning compared to demonstrations alone or to SAGG-RIAC, especially in the case of a large task space. Even when the teacher is not omniscient, he can transfer his knowledge to the learner and bootstrap autonomous exploration.
Nevertheless, in this initial validation study in simulation, we made strong assumptions about the teacher. The teacher has the same motion generation rules as the robot, and is omniscient, so that he teaches the robot the reachable space. A study with a non-omniscient teacher should show how his demonstrations bias the subspaces explored by the robot.
Experiments with human demonstrations need to be conducted to address the problems of correspondence and biased teacher.
Despite these assumptions, our simulation indicates that SGIM-D successfully combines learning by demonstration and autonomous exploration even in an experimental setup as complex as one with a continuous 24-dimension action space.
This paper introduces \textbf{Socially Guided Intrinsic Motivation by Demonstration}, a learning algorithm for models in a continuous, unbounded and non-preset environment, which efficiently combines social learning and intrinsic motivation. It proposes a hierarchical learning with a higher level that determines which goals are interesting either through intrinsic motivation or social interaction, and a lower-level learning that endeavours to reach it. Our framework takes advantage of the demonstrations of the teacher to explore unknown subspaces, to gain precision, and efficiently identify the reachable space from the unreachable space even in large task spaces thanks to the knowledge transfer from the teacher to the learner. It also takes advantage of the autonomous exploration to improve its performance in a wide range of tasks in the teacher's absence.
In our experiment, the robot imitates the teacher for a fixed duration before returning to emulation mode where SGIM-D takes into account the goal of this new data. However, future work on a more natural and autonomous algorithm to switch between imitation and emulation could improve the efficiency of the system.
\vspace{-0.2cm}
\section*{Acknowledgments}
\vspace{-0.2cm}
This research was partially funded by ERC Grant EXPLORERS 240007 and ANR MACSi.
\vspace{-0.1cm}
{\tiny
\bibliographystyle{named}
\vspace{-0.2cm}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,835 |
MetroBank is moving a step forward by serving the public from early morning until late evening, including weekends and bank holidays. This decision to stay open late compared to other high street banks is based on Metro Bank's ideology "that you should be able to bank when it suits you." MetroBank certainly lives up to this claim with opening times of Monday to Friday, 8am to 8pm; Saturdays, 8am to 6pm; and Sundays/Bank Holidays, 11am to 5pm.
But what else does Metro Bank offer apart from late opening hours? There are actually some attractive services currently offered by the bank which can be of great interest to customers. The first worth mentioning is 0% commission on card usage and withdrawals when travelling within European Community Area countries. That is surely a good reason and a clear advantage to use a MetroBank current account over most if not all high street banks, especially for those who, for work or leisure purposes, need to travel regularly in Europe.
Secondly, MetroBank offers safe deposit boxes for personal belongings which can be rented very easily and almost hassle-free, without many detailed questions, by customers (and perhaps non-customers) who wish to keep safe any valuables, documents or whatever else is worth keeping safe and away. It is quite reminiscent of the old-school Swiss banks which did, and still do, offer such private safe-depositing services.
Thirdly, the bank offers same-day account opening which is very smooth and easy and includes the printing of the bank card and cheque book on the same day as well. MetroBank also offers all the other packages most banks do, such as ISAs and savings accounts, as well as mortgages, borrowing and business banking services.
All in all, we score MetroBank the number one UK bank of the year for its commitment to serving the public until late hours, fee-free bank card usage in Europe, easy-to-rent safe deposit boxes and same-day account opening including the printing of your current account card and cheque book, which in our opinion tops other high street banks and even building societies in service and availability. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,338 |
{"url":"http:\/\/gradestack.com\/GRE-Complete-Tutor\/Coordinate-Geometry\/Slope-Intercept-Form\/16609-3354-13048-study-wtw","text":"# Slope-Intercept Form\n\nMultiplying both sides of the equation \u00a0by x \u00e2\u20ac\u201c a yields\n\ny \u00e2\u20ac\u201c b = m(x \u00e2\u20ac\u201c a)\n\nNow, if the line passes through the y-axis at (0,b), then the equation becomes\n\ny \u00e2\u20ac\u201c b = m(x \u00e2\u20ac\u201c 0)\n\nor\n\ny \u00e2\u20ac\u201c b = mx\n\nor\n\ny = mx + b\n\nThis is called the slope-intercept form of the equation of a line, where m is the slope and b is the y-intercept.\u00a0This form is convenient because it displays the two most important bits of information about a line: its slope and its y-intercept.\n\nExample\nThe equation of the line above\u00a0is\n Column A Column B AO BO\n\nSince \u00a0is in slope-intercept form, we know the slope of the line is 9\/10.\u00a0Now, the ratio of BO\u00a0to AO\u00a0is the slope of the line (rise over run).\n\nHence,\u00a0. Multiplying both sides of this equation by AO\u00a0yields .\n\nIn other words,\u00a0BO\u00a0is 9\/10 the length of AO.\u00a0Hence,\u00a0AO\u00a0is longer.","date":"2017-01-19 00:25:44","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8643794655799866, \"perplexity\": 1132.0285558713806}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560280410.21\/warc\/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz\"}"} | null | null |
The Palais Barbarigo Nani Mocenigo is a Gothic palace in Venice located in the Dorsoduro district, along the Nani embankment on the San Trovaso canal, near the campo of the same name.
History
The palace, which dates from the , was the residence of the Barbarigo family. The building was part of the dowry that Elena Barbarigo, daughter of Doge Agostin Barbarigo, brought to her husband Giorgio Nani. The palace passed to her son Bernardo, founder of the family branch named di San Trovaso. In the first half of the , the San Trovaso line died out and the building became the home of the distant Nani Mocenigo relatives, who had previously lived in a building in the Cannaregio district.
Part of the building belongs to this family, while the rest was bought by Ca' Foscari University, which made it the seat of the Department of Italian Studies, with an adjoining library. Since 2007, the building has been empty, occasionally rented to wealthy tourists or used for artistic events. Since 2022, the building has been used as a hotel.
Architecture
The palace is a typical example of Venetian Gothic architecture of the . The square façade has three levels and a mezzanine. The ground floor features two Gothic portals: the central one and a smaller one on the left. The two noble floors have central four-light windows (quadriforas) supported by balustrades and flanked by pairs of single-light ogival windows. The first noble floor bears a pair of coats of arms in the wings.
On the right side of the roof there is a terrace overlooking the San Trovaso area and the Giudecca Canal.
Notes and references
Palaces in the sestiere of Dorsoduro
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,194 |
Q: Complex expression repeating vs subquery in Oracle SQL I have the following dilemma when writing:
SELECT CASE Complex_Multiline_Condition WHEN THEN ... END as some_field,
<a_lot_of_fields>
FROM a
JOIN b ON Complex_Multiline_Condition
or
SELECT CASE cond_result WHEN 1 THEN ... ELSE ... END as some_filed,
<a_lot_of_fields>
FROM (
SELECT CASE WHEN Complex_Multiline_Condition THEN 1 ELSE 0 END as cond_result,
<a_lot_of_fields>
FROM a
)
JOIN b ON CASE cond_result WHEN 1 THEN ELSE END = ...
The second way is clearer and DRY, but introduces one more step in processing. This is important for me, because I'm dealing with a huge dataset (about a million records). So what is the general rule of thumb in this situation? Does it cost a lot for the subquery to re-select the huge dataset only for the complex condition evaluation?
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,902 |
I've pretty-much lived my life by going with the flow. This worked when I was younger because it allowed me the chance to see new places and learn new skills. However, at a certain point I began to realize that I'd 'been there, done that'. I needed something more but I didn't know what.
Since finding the answer to this "Big Question" I recognize 3 steps I took to make it happen.
1. Figure out what you really want to do and what mark you want to leave on the planet.
My journey began a years-long quest for "my purpose," involving various career searches, counselors and spiritual visioning. After much frustration and many tears I felt as though I hadn't really accomplished anything. Then I finally made a breakthrough that has set me on an incredible trajectory, while working with a life coach.
All that time I had been looking inward, trying to find answers, when the answer was apparent from the outside looking in: my dream is to own and run a sustainable, socially responsible hotel.
I had run hotels for many years and I have enjoyed creating outstanding experiences for travelers. The part that left me unfulfilled was that my efforts just made some owner or corporation richer. I have found little "purpose" to doing business for business sake. If I could run my own hotel then I could add that extra piece that had been missing for me.
Get clear on your dream. It could be right under your nose.
2. Tell someone what you are dreaming.
Realizing my dream was even more challenging because it seemed so far out of reach. Many people that I shared this new purpose with, smiled and said, "That's nice, but so what?" They reflected my belief that I couldn't actually pull this off.
Gratefully, I had my wife and a few friends who were supportive of this emerging (and seemingly unrealistic) vision.
The trick here is to set the idea of your dream in motion, by sharing it with others. This will keep you focused on how to make it happen. And make you accountable for taking action.
3. Do something a little different than your normal.
It was obvious that knowing my purpose was not enough. And telling folks about it, while helpful for motivation, was still not going make it happen. In my case I began searching for hotels for sale and figuring out how much money I would need in order to make a down payment, even though it seemed unrealistic.
My wife helped me to take another action that was different and which led to where I am today. Instead of just talking about hotels, Claire suggested that we take a trip and actually look at some properties I was interested in. We assessed our assets and saw a creative way we could afford to take action on this dream. It was her inspiration that began the process that has led me to the edge of my dream.
Here is the key: do something, anything, different from your daily routine, which moves you towards your goal.
Now I am on my way to creating a partnership with a beautiful hotel in a wonderful small town near Portland. It has been an incredible journey and there is a great deal more work involved, but I love it.
One of the challenges of reaching for great heights is that sometimes you need help, and one of my lessons has been to reach out. I am in that position right now. As I've been telling folks about my dream and providing updates, they often ask how they can help. We have put together a crowdfunding campaign so you can help support our dream. Simply click here to go to IndieGoGo and help Bring the Balch to Brilliance.
| {
"redpajama_set_name": "RedPajamaC4"
} | 96 |
Guess what?! I won a gold medal from the prestigious Gelett Burgess Awards!!! My book "What Do You Use to Help Your Body?: Maggie Explores the World of Disabilities" won under the category of illness, bereavement and disability. I can't describe how honored I am! WOW! WOW! WOW! | {
"redpajama_set_name": "RedPajamaC4"
} | 1,076 |
\section{I. Analytical proof for rainbow free energy term}
{We provide an {analytic calculation of the logarithmic rainbow term from conformal field theory (CFT).} Firstly we introduce two maps:
\begin{enumerate}[(i)]
\item $\varphi_1: \ t \to z =\sqrt t$, where $0\le \arg(z) \le \pi$, $t\in \mathbb C$;
\item $\varphi_2: \ z \to w =\frac{2\beta}{\pi} \mathrm{arcosh} z$.
\end{enumerate}
}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{FigS0.pdf}
\caption{{Conformal mappings construct a semi-infinite cylinder geometry with one rainbow boundary.} \label{map}}
\end{figure}
{$\varphi_1$ maps $\mathbb C$($t$--plane) to $\mathbb H^+$($z$--plane), and {identifies} the negative half real axis with the positive half in $z$--plane, while $\varphi_2$ maps $\mathbb H^+$(in $z$--plane) to a {half-infinite rectangle} $\{w: \Re w \ge 0, 2\beta \ge \Im w \ge 0 \}$ (in $w$--plane). $\varphi_2(-1)=2i\beta$, $\varphi_2(1)=0 $.
The rectangle has reflection symmetry about the line $\{\Im w=\beta \}$, and the {identification in} the real axis on $z$--plane transforms to the {identification} in the reflection symmetric boundaries on the $w-$ plane. Geometrically, {it corresponds to} a {half-infinite} cylinder with a rainbow boundary, which is along the segment [0,$2i\beta$] on imaginary axis in the rightmost plot of Fig.~\ref{map}.
We consider the transformation of the stress tensor
\begin{equation}
\langle T(z)\rangle = (\frac{\mathrm{d} t}{\mathrm{d} z})^{2}\langle T(t)\rangle+\frac{c}{12} \{t;z\},
\end{equation}
where $\{t; z\}= \frac{\mathrm{d}^3 t/\mathrm{d} z^3}{\mathrm{d} t/\mathrm{d} z} -\frac{3}{2}\left(\frac{\mathrm{d}^2 t/\mathrm{d} z^2}{\mathrm{d} t/\mathrm{d} z}\right)^2$ is the Schwarzian derivative. We find that $\{t; z\}= -\frac{3}{2z^2}$, {which leads to} $\langle T(z)\rangle=-\frac {c}{8z^2}$, by assuming $\langle T(t)\rangle =0$.
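Explicitly, for the map $t=z^2$ used here one has $\mathrm{d}t/\mathrm{d}z=2z$, $\mathrm{d}^2t/\mathrm{d}z^2=2$ and $\mathrm{d}^3t/\mathrm{d}z^3=0$, so the intermediate step reads
\begin{equation}
\{t;z\}=0-\frac{3}{2}\left(\frac{2}{2z}\right)^{2}=-\frac{3}{2z^{2}}, \qquad
\langle T(z)\rangle=\frac{c}{12}\left(-\frac{3}{2z^{2}}\right)=-\frac{c}{8z^{2}}.
\end{equation}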
Similarly, one can get
\begin{equation}\label{Tw}
\langle T(w)\rangle = -\frac {c\pi^2} {8(2\beta)^2} [\frac{\sinh^2 ( \pi w/2\beta)}{\cosh^2 (\pi w/2\beta )}+\frac{\cosh^2 (\pi w/2\beta)}{\sinh^2 ( \pi w/2\beta)}]+\frac {c\pi^2}{12 (2\beta)^2 }
\end{equation}
after the conformal map $\varphi_2$.
The universal free energy term $F_0=\ln{\mathcal{Z}_{\rm{CFT}}}$, defined as the {logarithm} of CFT partition function, varies in the following way as the metric tensor changes by $\delta g_{\mu\nu }$:
\begin{equation}
\delta F_0 = \frac 1 2 \int \mathrm{d}^2 x \sqrt g \delta g_{\mu\nu } \langle T^{\mu\nu} \rangle
\end{equation}
For such a cylinder with a rainbow boundary, we {introduce} an infinitesimal scaling of the circumference:
$2\beta \to (1+\epsilon)(2\beta)$, {namely,} $\delta \beta = \epsilon \beta$ {with a small $\epsilon$}. This is realized by applying a coordinate transformation $w^1\to (1+\epsilon) w^1$, {and note that $w^1$ is the imaginary part of $w$, i.e., $w=w^0+iw^1$. This transformation leads to the change in the metric tensor $\delta g_{\mu\nu}=2\epsilon \delta_{\mu 1}\delta_{\nu 1}$.}
According to
\begin{equation}
\langle T^{11}\rangle=- \langle T^{00}\rangle=-(T_{ww}+T_{\bar w\bar w}) = \frac 1 \pi \langle T(w)\rangle,
\end{equation}
we obtain the variation in $F_0$ as
\begin{equation}\label{F}
\delta F_0=\frac 1 \pi \int \mathrm{d} w^0\mathrm{d} w^1 \langle T(w)\rangle \frac{\delta\beta}{\beta}.
\end{equation}
We substitute Eq. (\ref{Tw}) into Eq.~(\ref{F}), and perform the integration as follows.
Firstly, $I$ is introduced as
\begin{equation}
\begin{aligned}
I &= \int_0^{2\beta} \mathrm{d} w^1 \int_{i w^1}^{L/2+i w^1} \mathrm{d} w^0 \langle T(w)\rangle \\
&= \int_0^{\pi} \mathrm{d} w^1 \int_{i w^1}^{\pi L/4\beta +i w^1} \mathrm{d} w^0 [-\frac c 8 (\frac {\sinh^2 w}{\cosh^2 w }+\frac{\cosh^2 w}{\sinh^2 w}) +\frac{c}{12}],
\end{aligned}
\end{equation}
which is manifestly related to $F_0$ as $L \to \infty $. Exploiting the equation
\begin{equation}
\frac{\mathrm{d} }{\mathrm{d} w} (2w-\tanh w -\frac 1{\tanh w} )=\frac{\sinh^2 w}{\cosh^2 w }+\frac{\cosh^2 w}{\sinh^2 w},
\end{equation}
one obtains
\begin{equation}
I= \int_0^{\pi} \mathrm{d} w^1 \{-\frac {c\pi L} {6\cdot 4\beta} -\frac c 8[i \tan w^1+(i \tan w^1)^{-1} -\tanh (\frac{\pi L}{4\beta} +i w^1) - \tanh^{-1}(\frac{\pi L}{4\beta} +i w^1)]\}
\end{equation}
The integrals of the $i \tan w^1$ and $(i \tan w^1)^{-1}$ terms cancel each other, and the $-\frac {c\pi L} {6\cdot 4\beta}$ term contributes $\frac {\pi c L} {24 \beta}$ to the resulting free energy term.
Since $-\tanh (\frac{\pi L}{4\beta} +i w^1) - [\tanh (\frac{\pi L}{4\beta} +i w^1)]^{-1}$ converges to -2 as $L/\beta \to \infty$, they contribute a term $(c/4) \ln\beta$ in the final result, through Eq.~(\ref{F}). All in all, the universal free energy term turns out to be
\begin{equation}
F_0=\frac{\pi c} {24 \beta} L+\frac c 4 \ln\beta,
\end{equation}
which is a sum of the non-orientable correction $\frac{\pi c} {24 \beta} L$ and the logarithmic rainbow term $\frac c 4 \ln\beta$. The former constitutes a bulk correction and affects the slope of low-temperature linear specific heat, in quantum chains.}
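For completeness, the intermediate step behind this result reads
\begin{equation}
\delta F_0=\frac{1}{\pi}\,I\,\frac{\delta\beta}{\beta}\simeq\left(-\frac{c\pi L}{24\beta}+\frac{c}{4}\right)\frac{\delta\beta}{\beta},
\end{equation}
and integrating term by term over $\beta$, $\int \mathrm{d}\beta\left(-\frac{c\pi L}{24\beta^{2}}+\frac{c}{4\beta}\right)=\frac{c\pi L}{24\beta}+\frac{c}{4}\ln\beta$, up to a non-universal constant.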
{The rainbow term can also be understood from CFT by considering a free boson model.
A prominent example is the Heisenberg XY spin chain simulated in the main text ($c=1$), whose continuum limit is a CFT modeled by a free boson
with action $S=\frac{1}{4\pi }\int \mathrm{d}z\,\mathrm{d}\bar{z}\,\partial \phi \,\bar{\partial}\bar{\phi}$, where $\phi$ is the boson field. The logarithmic term $F_{\mathcal{R}}$ is due to the effective ``corners'', i.e., branch points at $z=0$ identified twice on the rainbow boundary. For $n=c$ copies of free bosons $\phi_{i}$, the stress tensor $T(z)=\sum_{i}\partial \phi_{i}\partial \phi_{i}$ is of scaling dimension two, thus there exists a second-order pole at the $z=0$ branch locus, resulting in the one-point function $\left\langle T(z)\right\rangle = -\frac{c}{8}(1/z^{2})$.
Its variation with respect to $\beta$ is $\delta F_{\mathcal{R}}/\delta \beta =-\frac{\delta }{\delta \beta }\left( \frac{1}{2\pi }\int \mathrm{d}^{2}z^{\prime }\left\langle T(z^{\prime })\right\rangle \right)$, and in the thermodynamic limit, namely the large-$L$ and $x_{2}/x_{1}\ll 1$ limit [$x_{1}$ ($x_{2}$) is the continuous length (width) integration variable], this becomes $\delta F_{\mathcal{R}}/\delta \beta =\frac{\delta }{\delta \beta }\left( \frac{c}{4}\int \mathrm{d}x_{2}\,(1/x_{2})\right) =\frac{c}{4\beta }$, i.e., $F_{\mathcal{R}}\sim \frac{c}{4}\ln \beta$. }
{To conclude, here we provide an analytical calculation of the universal free energy term $F_0$. Through two conformal maps which transform the complex plane into half-infinite cylinder with rainbow boundary, we obtain two terms in the final results. They represent, for quantum chains, the bulk free energy correction and rainbow logarithmic entropy, respectively. A simplified argument for the rainbow correction, in free boson field case, is also given.}
\section{II. Boundary matrix product state approach}
\label{SM:BMPS}
In this work, we use power iteration method to determine the eigenvectors of the transfer matrix in tensor networks (TNs), by exploiting the boundary matrix product states (bMPS). We also introduce the TN technique for an efficient extraction of universal data from the obtained bMPS.
\subsection{A. Determination of dominant MPS by power iterations}
Generally, given an operator $A$, the power iteration algorithm makes use of the recurrence relation
\begin{equation}\label{iteration}
b_{k+1}=\frac{A \cdot b_k}{||A \cdot b_k||},
\end{equation}
where the vectors $\{b_k\}$, starting from a random vector $b_0$, constitute a converging series. At each iteration, the vector $b_k$ is multiplied by the operator $A$ and then normalized to obtain the next vector $b_{k+1}$. The power method is typically robust and efficient as a dominant eigenvalue/eigenvector solver. Provided that $A$ has a unique dominant eigenvalue (the one with largest magnitude) and the starting vector $b_0$ has a nonzero component along the corresponding dominant eigenvector, the power method converges the series $\{b_k\}$ to that eigenvector.
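As a standard estimate (stated here for orientation, not specific to the present models), if the eigenvalues of $A$ are ordered as $|\lambda_1|>|\lambda_2|\ge\cdots$, then up to normalization and phase
\begin{equation}
b_k = v_1 + \mathcal{O}\!\left(\left|\lambda_2/\lambda_1\right|^{k}\right),
\end{equation}
where $v_1$ is the dominant eigenvector, so the number of iterations required is controlled by the gap between the two largest transfer-matrix eigenvalues.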
To evaluate the partition functions of quantum chains or 2D statistical models, the power method is combined with the bMPS technique.
We determine the dominant bMPS $b_k$ of a transfer matrix (in the form of a matrix product operator, MPO) in the TN by applying the MPO $A$ to the MPS $\{b_k\}$ and normalizing it, until the latter converges to the dominant eigenvector (also in the form of a bMPS).
Since the Hilbert space grows exponentially with system size, after each iteration we truncate the bMPS to a fixed bond dimension, making this procedure sustainable. To be specific, in a single step of applying the MPO $A$ to the MPS $b_k$ [\Fig{Fig:BMPS}(a)], one gets $b_{k+1}$ as a ``fat'' MPS, i.e., one with an enlarged bond dimension.
Although the initial MPO $A$ has periodic boundary conditions along the vertical direction [see, e.g., Figs.~\ref{FigS5}(c,d)], we fold the ``long-range'' PBC bond into the bulk and use exclusively MPOs and MPSs with open boundary conditions (OBC), for later convenience in the canonicalization and truncation processes.
As depicted in \Fig{Fig:BMPS}(b), we introduce a forward and backward canonicalization procedure for the efficient compression, by utilizing the Schmidt decomposition (in practice singular value decomposition, SVD) on each bond bipartition. During the forward canonicalization [left, \Fig{Fig:BMPS}(b)], the MPS is gauged into a left canonical form (see the arrow flow); then we perform the truncation procedure, i.e., keep only the largest $D$ singular values and their corresponding bond basis in the spectra, during the sweep backwards [right, \Fig{Fig:BMPS}(b)].
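In terms of the Schmidt decomposition across a given bond, the truncation amounts to
\begin{equation}
|b\rangle=\sum_{\alpha}\sigma_{\alpha}\,|L_{\alpha}\rangle\otimes|R_{\alpha}\rangle\;\longrightarrow\;\sum_{\alpha\le D}\sigma_{\alpha}\,|L_{\alpha}\rangle\otimes|R_{\alpha}\rangle,
\qquad \varepsilon=\sum_{\alpha> D}\sigma_{\alpha}^{2},
\end{equation}
where the singular values $\sigma_{\alpha}$ are kept in descending order and $\varepsilon$ measures the discarded weight of the bond bipartition.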
\begin{figure}[htbp]
\includegraphics[width=.8\textwidth]{FigS1}
\caption{(a) Project MPO $A$ to the MPS $b_k$ and obtain a fat MPS $b_{k+1}$, which is then truncated into a MPS with bond dimension $D$. (b) illustrates the forward and backward sweeps in order to guarantee respectively the left- and right-canonical form of MPS, as well as an (globally) optimal truncation process.}
\label{Fig:BMPS}
\end{figure}
\subsection{B. Tensor network calculation of the crosscap and rainbow overlaps}%
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{FigS2}
\caption{Overlap between bMPS and (a) the rainbow state and (b) the crosscap state. (c,d) show the main steps to contract bMPS (of bond dimension $D$) with the crosscap boundary, which turns out to be special inner-product-like contraction procedure in (c,d) with complexity $\mathcal{O}(D^4)$. The last step (e) finishes the contraction by summing over all bond indices, and returns the trace.}\label{SM:overlap}
\end{figure}%
Once the bMPS representation of $| i_0 \rangle$ is obtained, we need to contract the bMPS with corresponding boundary state $| \mathcal{C} \rangle$ (crosscap) or $| \mathcal{R} \rangle$ (rainbow) in the last step to extract universal terms. These overlap calculations can be converted to tensor contractions (traces) shown in \Fig{SM:overlap}.
We start with the rainbow boundary overlap $\langle i_0 | \mathcal{R} \rangle$ in Fig.~\ref{SM:overlap}(a), which can be conveniently simulated. One folds the bMPS from the middle and then contracts the tensors, just as in an ordinary MPS inner product, with the conventional $\mathcal{O}(D^3)$ complexity ($D$ being the bond dimension of the bMPS).
The crosscap boundary overlap $\langle i_0 | \mathcal{C} \rangle$ takes a slightly higher computational cost to contract. As shown in Fig. \ref{SM:overlap}(b), one needs to compute a special inner product of the upper half of the bMPS with the lower half. One starts from the top [see Fig.~\ref{SM:overlap}(c)], prepares a rank-3 tensor $T_1$ and contracts it successively with the transfer matrix [Fig.~\ref{SM:overlap}(d)], with complexity $\mathcal{O}(D^4)$. After $n$ iterations ($n$ proportional to the system width $\beta$), one arrives at a single tensor $T_n$ in Fig.~\ref{SM:overlap}(e) and can compute the trace $\langle i_0 | \mathcal{C} \rangle = \mathrm{Tr}(T_n)$ with ease.
\section{III. Determination of tricritical point of BEG model}%
\begin{figure}[htbp]
\includegraphics[width=0.55\textwidth]{FigS3.pdf}
\caption{Klein free energy results of BEG model, at different temperatures $T$ and spin anisotropy parameters $\Delta$. The parameters $(T, \Delta)$ are previous estimates of tricritical point, taken from Refs. \onlinecite{PhysRevE.92.022134S, PhysRevE.73.036702S, PhysRevE.66.026130S}, in corresponding order.
\label{SM:BEG}}
\end{figure}
Recently, there has been considerable interest \cite{PhysRevE.92.022134S,PhysRevE.73.036702S,PhysRevE.66.026130S} in studying the Blume-Emery-Griffiths (BEG) model, both by numerical and analytical methods.
The BEG model was first proposed \cite{BEG1971S} to describe mixtures of liquid He$^3$ and He$^4$, where there exists a superfluid phase, as well as $\lambda$- and first-order transitions, in the abundant He$^4$ phase diagram.
The Hamiltonian of BEG model reads
\[ E = -J\sum_{<i,j>} s_is_j-K\sum_{<i,j>}s_i^2s_j^2+\Delta \sum_is_i^2, \]
where $s_k=0, \pm 1$ is the spin variable. In Ref.~\onlinecite{PhysRevB.14.4946S},
Berker and Wortis suggested an amenable phase diagram using a position-space renormalization-group method. It contains three phases: a ferromagnetic phase and two paramagnetic phases (denoted by para$_{\pm}$), characterized by the magnetization $M$ and the quadrupolar order parameter $Q$. Remarkably, there exists a tricritical line in the phase diagram where the three phases coexist, which gives rise to an emergent supersymmetric conformal symmetry.
To the best of our knowledge, no analytical results have been obtained on the precise location of tricritical points of the square-lattice BEG model. Numerical calculations (with $K=0, J=1$) have been performed \cite{PhysRevE.92.022134S, PhysRevE.73.036702S, PhysRevE.66.026130S}, and we here compare these results by calculating the Klein free energy $F_{\mathcal{K}}$ in Fig.~\ref{SM:BEG}.
We find that $F_{\mathcal{K}}$ can be used to pinpoint the tricriticality in a very sensitive and precise way, and it turns out that the estimate $T=0.608, \Delta=1.96604(1)$ in Ref. \onlinecite{PhysRevE.92.022134S} produces the most accurate $F_{\mathcal{K}}=\ln{k}$, i.e., it is in nice agreement with the CFT prediction of 0.854258 \cite{UniEntropy-2017S}. Therefore, the parameters $T=0.608$, $\Delta=1.96604(1)$ were also adopted in the main text to calculate the rainbow free energy [see Fig. 5 and Tab. I].
\section{IV. Universal term with conical angles other than $\pi$}
{In \Fig{Fig:bMPS} we {show lattice realizations} of $\mathbb{RP}^2$ {different from those in the main text}, which results in two different effective conical angles ($2\pi/3$ and $4\pi/3$, respectively) after the cut-and-sew process. Through TN simulations, {we find that the rainbow term changes to $F_{\mathcal{R}} =\frac c 4 \mathcal A\ln \beta$}, where $\mathcal A=7/6$ is a geometric factor. It is then revealed that this logarithmic term can be obtained by substituting the two conical angles $\theta =2\pi/3$ and $4\pi/3$ into the {Cardy-Peschel formula} $\frac {c\theta}{24\pi}[(\frac{2\pi}{\theta})^2-1] \ln L$ \cite{Cardy-Peschel-1988S}, which {provides a very intuitive explanation of the rainbow term: it originates from} effective conical singularities.}
\subsection{A. Symmetry and normalization condition of boundary MPS}
\label{App:SymmTN}
\begin{figure}[tbp]
\includegraphics[angle=0,width=0.8\linewidth]{FigS4.pdf}
\caption{(a) A TN realization of $\mathbb{RP}^2$ where the edges are glued such that the arrows match. The honeycomb lattice TN is transformed into (b), with two conical angles, after a cut-and-sew process. The transfer matrix (c) and its transformation (d) by a shift of half a column, which are essentially equivalent due to the vertical PBC. (c) and (d) are also reflection symmetric by construction, implying that the left eigenvector of (c) is identical to the right eigenvector of (d), which is in turn equivalent to the right eigenvector of (c), up to a translation of half a column backward.}
\label{Fig:bMPS}
\end{figure}
{Since the vertical transfer matrix in Figs.~\ref{Fig:bMPS}(c,d) is no longer real symmetric, we firstly address the glide symmetry of transfer matrices, as well as related TN techniques for correctly extracting the universal data.} We demonstrate that, due to the symmetry in the transfer matrix, a proper normalization of bMPS can be done, which is then of importance in the correct extraction of {universal} ``boundary" terms.
As shown in Fig. 2 of the main text, the TNs are defined on either the square or honeycomb lattice. The TN representations are constructed symmetrically in the main text. For example, the square TN has reflection symmetry and thus the transfer matrix has the same left and right eigenvectors, i.e., unnormalized $|\tilde{i}_0\rangle=|\tilde{j}_0\rangle$ (left and right eigenvectors, respectively). In this case, we can set the factors as $\mathcal{N}=\langle \tilde{i}_0 | \tilde{i}_0\rangle=\langle \tilde{j}_0|\tilde{j}_0\rangle$ and then normalize the vectors as $| i(j)_0 \rangle = |\tilde{i}(\tilde{j})_0\rangle / \sqrt{\mathcal{N}}$.
Given the normalized vectors $| i(j)_0 \rangle$, we can extract the crosscap and rainbow free energy terms by calculating $F_{\mathcal{C}} = \ln|\langle\mathcal C|i_0\rangle|$, and $F_{\mathcal{R}} = \ln|\langle\mathcal R|i_0\rangle|$, respectively.
However, for the construction in \Fig{Fig:bMPS}(c), the transfer matrix is no longer parity symmetric and, at first glance, there seems to be some subtlety in determining universal data associated with each boundary. If one normalizes the left and right eigenvectors of the transfer matrix in a naive manner, i.e., $\langle i_0|i_0\rangle=\langle j_0|j_0\rangle=1$, the mutual normalization condition
\begin{equation}
\langle j_0|i_0\rangle=1
\label{klein-NORM}
\end{equation}
will not be guaranteed in general, and as a result, $\ln{|\langle\mathcal{B_L} | i_0 \rangle|}$ ($\ln{|\langle j_0 |\mathcal{B_R} \rangle|}$) does not produce the desired crosscap (rainbow) term. If one instead imposes the normalization condition in Eq.~(\ref{klein-NORM}), there seems to be an arbitrariness in how to distribute the normalization factor between $|i_0 \rangle$ and $|j_0 \rangle$, which then affects the determination of the crosscap and rainbow terms.
{The solution to this difficulty is to note that this arbitrariness can be removed by exploiting the glide symmetry between} $|i_0\rangle$ and $|j_0\rangle$, which enables a proper normalization of both $|i_0\rangle$ and $|j_0\rangle$. As elaborated in \Fig{Fig:bMPS}, unnormalized $|\tilde{j}_0\rangle$ can be related to $|\tilde{i}_0\rangle$ via a shift operation (of half a column), and thus $\langle \tilde i_0 |\tilde i_0 \rangle=\langle \tilde j_0 | \tilde j_0 \rangle$. Therefore, in order to satisfy the condition Eq.~\eqref{klein-NORM}, one can evenly distribute the normalization factor $\mathcal{N}_f=\langle \tilde{j}_0|\tilde{i}_0\rangle$ to $|\tilde{i}_0\rangle$ and $\langle \tilde{j}_0|$, namely $| i_0\rangle = \tfrac{1}{\sqrt{\mathcal{N}_f}} | \tilde{i}_0 \rangle$ and $\langle j_0| = \tfrac{1}{\sqrt{\mathcal{N}_f}}\langle \tilde{j_0} |$. After this normalization, it turns out that both $\ln |\langle \mathcal{C} | i_0\rangle|$ and $\ln |\langle j_0 | \mathcal{C} \rangle|$ result in the same crosscap term, i.e., exactly half of the Klein term, $F_{\mathcal{C}}=\frac{1}{2} \ln{k}$, as shown in Fig.~\ref{FigS5}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{FigS5.pdf}
\caption{ {Crosscap term $F_{\mathcal{C}}$ and Klein term $F_{\mathcal{K}}$ of Ising model on various lattices, the corresponding (honeycomb) TN realization of $\mathbb{RP}^2$ is shown in Fig.~\ref{Fig:bMPS}. The retained number of states $D=256$ in the simulations. \label{FigS5}}}
\end{figure}
Finally, we note that in the cases above the symmetries (reflection or glide) of the TN are of key importance in extracting the universal term associated with a specific boundary. If no such symmetry is present in the transfer matrix of the TN, only the total universal term contributed by both edges can be calculated, and there is no longer a natural way to distribute it between the two boundaries.
\subsection{B. Crosscap term}
{Figure \ref{FigS5} shows the crosscap and Klein terms of the Ising model on hexagonal, kagome and triangular lattices, with corresponding TN representations in \Fig{Fig:bMPS}. It is very surprising that the crosscap/Klein terms in all three cases are extraordinarily accurate, i.e., equal to $F_{\mathcal{C}}=\frac 1 2\ln{(1+\sqrt{2}/2)}$ or $F_{\mathcal{K}}=\ln{(1+\sqrt{2}/2)}$ up to numerical noise/small computational errors. Analytical calculations reveal that even a single row of those three lattices already produces the universal value exactly, and the analytical solution for the single-row kagome lattice can be found in Section IV.D.}
{The universal crosscap term $\ln k/2$ we found here is very remarkable since it does not depend on the specific geometric details, but only relates to the non-orientable topology. On the Klein bottle we have found $\ln k$ constant entropy \cite{UniEntropy-2017S}, while here on $\mathbb{RP}^2$ we discover possibly the smallest unit of the ``twist entropy", i.e., $\frac{1}{2} \ln{k}$.}
\subsection{C. Rainbow free energy term }
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{FigS6.pdf}
\caption{TN simulation data with $\chi=500$ for the Ising model and $\chi=200$ for the Potts model. Dashed lines represent the linear fitting $F_{\mathcal R}=\frac c 4\times \frac 7 6 \ln \beta+b_0$, where $b_0$ is a non-universal constant.\label{FigS6}}
\end{figure}
{
In the main text, we have shown that with angle $\theta = \pi$ conical singularities, we can recover the rainbow term $\frac{c}{4} \ln\beta$.
In Fig.~\ref{FigS6}, we show the rainbow terms for TNs with the different lattice geometries of \Fig{Fig:bMPS} (but the same non-orientable topology, i.e., $\mathbb{RP}^2$). The slopes of the lines in Fig.~\ref{FigS6} are no longer $\frac c 4$ as in the main text, but equal $\mathcal{A}c/4$, with a geometry factor $\mathcal A \simeq 1.165$. To understand the existence of the geometry factor $\mathcal{A}$, we again associate the logarithmic term with the effective conical singularities, as shown in \Fig{Fig:bMPS}, now with angles $\theta_1=4\pi/3$ and $\theta_2=2\pi/3$. Substituting the two angle values into the Cardy-Peschel formula
\begin{equation}
F_\theta = \frac {c\theta}{24\pi}[(\frac{2\pi}{\theta})^2-1] \ln L,
\end{equation}
one gets $F_{\theta_1}=\frac{5c}{72}\ln \beta$ and $F_{\theta_2}=\frac{2c}{9}\ln \beta$, and $F_\mathcal R=F_{\theta_1}+F_{\theta_2}=\frac c 4 \frac 7 6 \ln \beta$, by assuming the characteristic system size $L \sim \beta$. Some numerical results are summarized in Tab.~\ref{tab}.
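Explicitly, the two contributions follow from direct substitution,
\begin{equation}
F_{\theta_1}=\frac{c\,(4\pi/3)}{24\pi}\left[\left(\tfrac{3}{2}\right)^{2}-1\right]\ln \beta=\frac{5c}{72}\ln \beta,\qquad
F_{\theta_2}=\frac{c\,(2\pi/3)}{24\pi}\left[3^{2}-1\right]\ln \beta=\frac{2c}{9}\ln \beta,
\end{equation}
and their sum is $\left(\frac{5c}{72}+\frac{16c}{72}\right)\ln\beta=\frac{7c}{24}\ln\beta=\frac c 4\cdot\frac 7 6\ln\beta$.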
One may also think of the factor $\mathcal{A}$ as $1/\sin \frac{2\pi}{3} \simeq 1.1547005$, another possible formula for the geometric factor $\mathcal{A}$, with the rotation symmetry of the hexagonal TN responsible for it. However, there are two reasons to favor the Cardy-Peschel value $7/6$. Firstly, the hexagonal lattice in the main text contributes the logarithmic term with $\mathcal A=1$; if the factor were due to rotational symmetry, $\mathcal A$ should differ from 1 there too, which is not the case. Secondly, there is an appreciable, i.e., $\sim$ 1\%, difference between the values $1/\sin \frac{2\pi}{3} \simeq 1.155$ and $7/6 \simeq 1.1666667$. Our numerical results in Tab.~\ref{tab}, with fine enough resolution to distinguish the two, clearly support the latter.}
\begin{table*}[htbp]
\footnotesize
\caption{Fitted logarithmic terms for hexagon-like lattices (coordination number $z$). The data are fitted for $\beta>3$. For the Potts model, we also discard the data with $\beta >25$ due to relatively large errors.}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\hline
Model & \multicolumn{3}{c|}{Ising ($c=0.5$)} & \multicolumn{2}{|c|}{Potts (0.8)} \\ \hline
Lattice & {H} & {T} & K & H & K \\ \hline
$T_c$& \multicolumn{2}{c|}{$\cosh(2/T_c)=\sec(\pi/z)$ \cite{PhysRev.79.357S}} &$\frac{4}{\ln(3+2\sqrt3)}$ \cite{doi:10.1143-ptp-10.2.158S} & 0.6738 \cite{RevModPhys.54.235S} &1.5849 \cite{RevModPhys.54.235S} \\ \hline
Slopes $D$& 0.14577 & 0.14581 & 0.14581 & 0.23296 & 0.232985 \\ \hline
$\mathcal A=4D/c$ &1.16621 &1.16648& 1.16651 & 1.1648&1.1649\\ \hline
\hline
\end{tabular}%
\label{tab}%
\end{table*}%
\subsection{D. Exact Klein Term of Ising model on a Kagome $\Delta-$chain}
\label{App:SYMPO}
\begin{figure}[htbp]
\includegraphics[width=0.7\linewidth]{FigS7}
\caption{TN representation of the kagome-lattice Ising model on $\mathbb{K}^2$ with width $\beta=1$. (a) shows the geometry after a cut-and-sew process, which becomes a width-2 lattice with two ``mini-crosscaps'' at both ends. (b) depicts the transfer matrix between different sites in (a).
\label{Fig:DChain}}
\end{figure}
For a Klein bottle with width $\beta$=1 [as depicted in \Fig{Fig:DChain}(a),
also dubbed as the $\Delta$-chains hereafter],
we can obtain the transfer matrix analytically and thus calculate the residual free energy directly.
We take the kagome-lattice Ising model as an example and demonstrate that, on this quasi-1D system, the residual free energy term \textit{exactly} equals $\ln k$, where $k=\sum_\alpha d_\alpha/D$ is the sum of quantum dimensions of CFTs.
Due to the PBC, the triangles pointing outwards (those on even columns) can be flipped inwards, and the system is essentially of period one.
The transfer matrix, shown in \Fig{Fig:DChain}(b), is denoted as $M^{\alpha\beta}_{\alpha'\beta'}$. $M$ consists of two rank-three tensors $T_{i,j,k}$, storing the Boltzmann weight, i.e., $T_{i,j,k}=\exp(-h_{\triangle_{i,j,k}}/T_c)$, where $h_{\triangle_{i,j,k}}=-J(s_is_j+s_js_k+s_ks_i)$ is the Hamiltonian of three spins within the same triangle ($s_i=\pm 1$ labels an Ising spin). By contracting two $T$ tensors, we arrive at
\[
M=
\begin{pmatrix}
x^3+x^{-1}&x^1+x^{-1}&x^1+x^{-1}&2x^{-1}\\
x^1+x^{-1}&2 x^1&2x^{-1}&x^1+x^{-1}\\
x^1+x^{-1}&2x^{-1}&2 x^1&x^1+x^{-1}\\
2x^{-1}&x^1+x^{-1}&x^1+x^{-1}&x^3+x^{-1}
\end{pmatrix}
\]
where $x=\exp(2J/T_c)$, with the basis ordering $(\uparrow \uparrow, \uparrow\downarrow, \downarrow\uparrow, \downarrow\downarrow)$. By diagonalizing the (symmetric) transfer matrix $M$, we obtain the dominant eigenvector $|i_0\rangle$ as $(1,\sqrt2-1,\sqrt2-1,1)/(2 \sqrt{2-\sqrt{2}})$, at the critical point $T_c/J=\frac{4}{\ln(3+2\sqrt3)}$. We can calculate the residual free energy term according to Eq. (1) in the main text, using the crosscap boundary $|\mathcal C\rangle=(1,0,0,1)$. As a result, $F_{\mathcal K}=\ln (1+\sqrt2/2)$, which equals the CFT prediction exactly!
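As a quick check of this statement (using the relation $F_{\mathcal K}=2F_{\mathcal C}$, i.e., equal crosscap contributions from the two ends, as noted in Section IV.A), the overlap evaluates to
\begin{equation}
\langle \mathcal{C}|i_{0}\rangle=\frac{1+1}{2\sqrt{2-\sqrt{2}}}=\frac{1}{\sqrt{2-\sqrt{2}}},\qquad
\langle \mathcal{C}|i_{0}\rangle^{2}=\frac{1}{2-\sqrt{2}}=\frac{2+\sqrt{2}}{2}=1+\frac{\sqrt{2}}{2},
\end{equation}
so that $F_{\mathcal K}=2\ln\langle \mathcal{C}|i_{0}\rangle=\ln\left(1+\sqrt{2}/2\right)$.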
This calculation remarkably confirms that the residual term equals exactly the universal Klein term in such a narrow strip. For the Ising model on wider kagome strips ($\beta>1$), the residual free energy term stays at exactly $\ln (1+\sqrt2/2)$ throughout, as seen in Fig.~\ref{FigS5}. In addition, from Fig.~\ref{FigS5} we observe that Ising models with tilted square and honeycomb TNs also possess this nice property, i.e., the residual free energy term perfectly coincides with $\ln (1+\sqrt2/2)$ even on lattices with the smallest possible width ($\beta=1$).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 487 |
The String Quartet No. 5 in B-flat major, Op. 27, was composed by Mieczysław Weinberg between October and November 1945 and dedicated to the Beethoven String Quartet, which later premiered it on 17 May 1947 in Moscow.
The Quartet No. 5, Op. 27 (which in 1991 would feed into the Chamber Symphony No. 3), opts for transparency, economy of means and sparseness of texture, something Shostakovich would not practise until 1952 in his own fifth quartet.
Structured in five movements, the very brief central Scherzo is its summit: an unbridled ride on the edge of the abyss, driven by brutal martial and mechanistic accents, which represents a genuine tour de force for its performers.
Movements
I. Melodia. Andante sostenuto
II. Humoreska. Andantino
III. Scherzo. Allegro molto
IV. Improvisation. Lento
V. Serenata. Moderato con moto
References
05
Works of 1945
Compositions in B-flat major
String quartets of the 1940s | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,578 |
Global Sodium Glutamate Market Research Report 2021
Sodium glutamate commonly referred as MSG, also known as monosodium glutamate is sodium salt of glutamic acid. Sodium glutamate is
Global Sodium Methylate Market Research Report 2021
Sodium methylate, also known as sodium methoxide (molecular formula CH3ONa), can be produced by an exothermic reaction between elemental sodium
Global Sodium Persulfate Market Research Report 2021
Sodium persulfate is an inorganic compound and is the sodium salt of persulfate which is also known as peroxydisulfate. The growth
Global Sodium Reducing Agents Market Research Report 2021
Sodium reducing agents are becoming key ingredients in food and beverage products owing to health and nutritional benefits across the
Global Solvent Borne Coatings Market Research Report 2021
Solvent borne coatings are used primarily as protective layers in liquid form, which are applied to the surface of a
Global Sorbic Acid Market Research Report 2021
Sorbic acid (2, 4-hexadienoic acid) and its mineral salts are naturally occurring organic compounds primarily used as food preservative. Growing use
Global Sorbitan Monostearate Market Research Report 2021
Sorbitan Monostearate is an ester of stearic acid and sorbitol derivative which is also referred as synthetic wax which is
Global Soundproof Glass Market Research Report 2021
Soundproof glass is actually two panes of glass that have vacuum between them. Soundproof glass is used to prevent sound
Global Soy Based Chemicals Market Research Report 2021
Soy based chemicals are used in many commercial products such as biodiesel, bio plastics, cosmetics, paints & coatings etc. Bio diesel
Global Speciality Solvents Market Research Report 2021
A solvent is a liquid (gas, or solid) that dissolves solids, liquid or gaseous solutes (a solvent or solute can | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,840 |
Q: Pass parameter from Jquery to webhandler asp.net c# I want to be able to drag and drop files into multiple folders on a server. I am using jQuery to pass the files to an HttpHandler, but I can't pass the save location to the webhandler. I would like to send the path from jQuery in the request; is there a way to include that when the data for the file transfer is passed?
$.ajax({
type: "POST",
url: "FileHandler.ashx",
contentType: false,
processData: false,
data: data,
success: function (result) {
alert(result);
},
error: function () {
alert("There was error uploading files!");
}
});
Maybe something like this:
$.ajax({
type: "POST",
url: "FileHandler.ashx",
contentType: false,
processData: false,
data: data,
filepath: document.getElementById("<%=listDrop.ClientID%>");
success: function (result) {
alert(result);
},
error: function () {
alert("There was error uploading files!");
}
});
and then retrieve the path in the webhandler to pass as save location?
I have tried this $.ajax({
type: "POST",
url: "FileHandler.ashx",
contentType:false,
processData: false,
data: {
data: newData,
filepath:JSON.stringify("~/uploads/")
},
success: function (result) {
alert('success');
},
error: function () {
alert("There was error uploading files!");
}
}); But I have a question about the declaration of the data type for the files I will be uploading when creating the get and set in ASP. filepath is a string, but what data type are the files?
A: Although I am answering from my mobile, I tried my best to keep things intact and useful. If required, please format it accordingly.
I am not going to write the entire code here, just the algorithm.
1. Identify the parameter to be passed. If it is only the path of multiple folders, then you can create an array in JavaScript for it. See the example
var listOfFolder = [];
listOfFolder[0] = "path of first dir";
:
:
listOfFolder[n] = "path of nth dir";
var data = JSON.stringify(listOfFolder);
2. Pass this data in the data attribute of the jQuery ajax call. 3. Grab this path in the ProcessRequest event and deserialize it (see the sketch after this list).
4. Do whatever you want.
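To make step 3 concrete, here is a minimal sketch of what the handler side could look like. The field name "filepath" and the idea of appending it to the posted FormData on the client (for example data.append("filepath", "~/uploads/"), which avoids the extra quotes that JSON.stringify adds) are assumptions for illustration, not details taken from the question:
using System.IO;
using System.Web;
public class FileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Extra form field posted alongside the files
        string virtualPath = context.Request.Form["filepath"];   // e.g. "~/uploads/"
        string saveFolder = context.Server.MapPath(virtualPath); // map to a physical folder
        // Save every uploaded file into that folder
        for (int i = 0; i < context.Request.Files.Count; i++)
        {
            HttpPostedFile file = context.Request.Files[i];
            string savePath = Path.Combine(saveFolder, Path.GetFileName(file.FileName));
            file.SaveAs(savePath);
        }
        context.Response.Write("Files uploaded");
    }
    public bool IsReusable
    {
        get { return false; }
    }
}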
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,786 |
The global ECU remap leader, Superchips, has introduced two new remaps for diesel engines in the Volvo range. Both the D3 engine and the D5 can now have additional power and torque released by the company's remaps and with it, the potential for improved fuel economy as well.
The five-cylinder, two-litre D3 engine is used in several ranges, including the S60, S80, V60, V70 and the crossover SUVs, the XC60 and XC70.
Using a base output of 163PS, the Superchips remap adds 23bhp at 3457rpm and a very impressive 61Nm torque at 2327rpm.
The D5 engine is used in the same ranges and is the larger of the two, at 2.4-litres. Its output from the factory is 215PS and the Superchips remap for the D5 adds an even more impressive 44bhp at 4730rpm and 73Nm torque at 2286rpm.
On both conversions, the additional torque improves driveability and cruising while the gains in power allow drivers to enjoy their cars more. And as a final benefit, both conversions have the potential to deliver up to 7% improvements in fuel economy, when driven in a similar fashion to pre-conversion.
The conversion is carried out by one of Superchips nationwide dealer network and typically takes around an hour. Customers can either wait at the dealer or drop off their car and return later to collect.
The cost of the remap for both the D3 engine and the D5 is £399 including VAT and labour. The conversion is covered by Superchips' 12-month/30,000-mile warranty*. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,981 |
Q: AngularJS: Performing $http request inside custom service and returning data I have defined a custom http service in angular that looks like this:
angular.module('myApp')
.factory('myhttpserv', function ($http) {
var url = "http://my.ip.address/"
var http = {
async: function (webService) {
var promise = $http.get(url + webService, { cache: true }).then(function (response) {
return response.data;
});
return promise;
}
};
return http;
});
And I can access this service in my controller like so:
angular.module('myApp')
.controller('myCtrl', function (myhttpserv) {
var webService = 'getUser?u=3'
myhttpserv.async(webService).then(function (data) {
console.log(data);
})
});
However I now need to streamline this process so that it is ALL contained inside the service with a static url and it simply returns the data. So that I can just call it in the controller like so:
angular.module('myApp')
.controller('myCtrl', function ($scope, myhttpserv) {
console.log(myhttpserv.var1);
console.log(myhttpserv.var2);
etc...
});
I can't seem to tweak the service to get this functionality. Anyone know the correct way to do it?
A: Option 1 - Use promise API
angular.module('myApp').factory('myhttpserv', function ($http) {
return $http.get('http://my.ip.address/getUser?u=3', { cache: true });
});
Controller:
angular.module('myApp').controller('myCtrl', function ($scope, myhttpserv) {
myhttpserv.then(function(response){
console.log(response.data);
});
});
Option 2 - Using route resolve
angular.module('myApp', ['ngRoute']).config(['$routeProvider',
function($routeProvider) {
$routeProvider.
when('/myCtrl', {
templateUrl: 'myView.html',
controller: 'myCtrl',
resolve: {
load: function (myhttpserv) {
return myhttpserv;
}
}
});
}]);
Service:
angular.module('myApp').factory('myhttpserv', function ($http) {
var data = {};
var url = "http://my.ip.address/";
var promise = $http.get(url + 'getUser?u=3', { cache: true }).then(function (response) {
angular.extend(data, response.data);
});
return data;
});
Controller:
angular.module('myApp')
.controller('myCtrl', function ($scope, myhttpserv) {
console.log(myhttpserv.var1);
console.log(myhttpserv.var2);
etc...
});
Option 3 - Use $interval service
angular.module('myApp').factory('myhttpserv', function ($http) {
var data = {};
var url = "http://my.ip.address/";
var promise = $http.get(url + 'getUser?u=3', { cache: true }).then(function (response) {
angular.extend(data, response.data);
});
return data;
});
Controller:
angular.module('myApp').controller('myCtrl', function ($scope, $interval, myhttpserv) {
$scope.intervalPromise = $interval(function(){
if (Object.keys(myhttpserv).length!=0)
{
console.log(myhttpserv);
$interval.cancel($scope.intervalPromise);
}
}, 100);
});
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,206 |
Michigan State Horticultural Society
Doug Buhler
Awarded jointly with the Michigan Vegetable Council Master Farmer Associate Award 2013.
Doug Buhler is Director of MSU AgBioResearch and Senior Associate Dean for Research of the College of Agriculture and Natural Resources at Michigan State University.
Doug grew up on a dairy farm in Wisconsin. He received his B.S. degree from the University of Wisconsin-Platteville and M.S. and Ph.D. degrees from the University of Nebraska. He was on the faculty at the University of Wisconsin-Madison from 1984 to 1989. From 1989 to 2000, he was a research scientist for the USDA's Agricultural Research Service.
Doug came to Michigan State University in 2000 as Professor and Chair of the Department of Crop and Soil Sciences, a position he held from 2000 to 2005. From October 2003 to March 2005, he also served as State Leader for Agricultural Programs for Michigan State University Extension. From 2005 to 2010,Doug served as Associate Director of MSU AgBioResearch (formerly Michigan Agricultural Experiment Station) and Associate Dean for Research for the College of Agriculture and Natural Resources. From 2011 to 2013, Doug served as interim Dean of the College of Agriculture and Natural Resources. He is a member of a number of professional societies, including the American Society of Agronomy, Crop Science Society of America, Weed Science Society of America, and North Central Weed Science Society.
During his time and many responsibilities at Michigan State University, Doug Buhler has shown a strong commitment to working with industry stakeholders. He has always been willing to listen to representatives of industry concerning their needs and concerns. He was the lead person at MSU for Project GREEEN for many years and helped develop this initiative into what it is today – an outstanding example of cooperation between industry stakeholders and the university. Doug has recognized the importance to industry of key research and extension positions and worked to hire new scientists during financially difficult times.
For his strong commitment to serve the needs of industry stakeholders, Doug Buhler is being recognized by the Michigan State Horticultural Society with its Distinguished Service Award and by the Michigan Vegetable Council with its Master Farmer Associate Award. These organizations, along with many other Michigan agricultural and commodity organizations, recognize the excellence of Doug's work and the resulting benefits to our state's agriculture. Doug and his wife Jean reside in East Lansing.
Jim Koan
Jim Koan has a history of growing apples that can be documented all the way back to the late 1800's when his grandfather, Albert Koan Sr., farmed and homesteaded a 120 acre, diverse and sustainable parcel which included a small orchard. Born in 1923, Jim's father, Albert Koan Jr. grew up participating in mostly the harvest of the apples from that orchard and that included merely storing the apples in wooden barrels and putting down more hard cider than sweet cider for the winter. He wasn't privy to much horticultural practices or propagating information in those days. That remained the case when in 1948, Albert Koan Jr., with his wife Mary, came around the corner from his father's farm to plant apple trees on his own 50 acres – though that crop of choice was strongly advised against on that soil by the county extension at the time. Albert's son, Jim, was also born that same year and grew up with that first planting of apple trees and continues to grow apples on the same farm (with much added acreage, of course) his dad initially started out on. Jim and his wife Karen, have five children and three of them have now come back to the farm and play a significant role in his growing business.
The 500 acre farm includes a 150 acres of organic apples, as well as corn, soybeans, wheat, barley, and pasture. They also grow pumpkins for their farm market and raise reindeer. In the last few years, they've also added about one hundred fifty head of organic hogs. The piglets are used to eat drop apples and to root around the trees. The meat is sold at various farm stores as well as food cooperatives. As the farm has transitioned into organic growing, it has opened opportunities for selling premium apples and cider. About five years ago Jim developed a tasty brew of hard organic cider. It is made from entirely fresh squeezed apples. His hard cider is now has sixty distributers in forty states and is a primary focus of his business. Jim's story is one of success with roots anchored firmly in the soil and strong family ties.
Michigan State Horticultural Society is pleased to present the Distinguished Service Award to Jim Koan.
2016 Distinguished Service Awards
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,575 |
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<!--
<copyright>
</copyright>
$Id$
-->
<plugin>
<extension point="org.eclipse.emf.ecore.generated_package">
<package
uri="leveleditor"
class="Leveleditor.LeveleditorPackage"
genModel="model/Leveleditor.genmodel"/>
</extension>
</plugin>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,107 |
Contextual objectivity is a principle with roots in quantum mechanics that was adapted and applied to explain and describe the operations of news media organizations during times of war. Proposed by Adel Iskandar and Mohammed El-Nawawy in their analysis of Al-Jazeera as a case study, the term expresses the attempt "to reflect all sides of any story while retaining the values, beliefs and sentiments of the target audience". The concept has been applied by some scholars to explain Fox News Channel's news programming in the 2002–2003 run-up to the Iraq War. Other studies used contextual objectivity to describe differences between mainstream media and alternative ethnic media's coverage of post-Hurricane Katrina relief efforts.
References
Al-Jazeera: The Story of the Network that is Rattling Governments and redefining modern journalism, Adel Iskandar and Mohammed El-Nawawy
The Minotaur of 'Contextual Objectivity': War coverage and the pursuit of accuracy with appeal
Al Jazeera: In Pursuit of 'Contextual Objectivity' by Ralph Berenger
Journalism | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,888 |
James Tool has been engineering and building fixtures for more than 25 years. We take great pride in our abilities to provide solutions for our customers.
We engineer in the latest 3-D Solidworks® and then provide those detailed models to the shop for efficient and accurate programming through the use of Mastercam®. Our build and test area is dedicated and well equipped to build the most complex hydraulic fixtures.
We engineer and build more than 300 fixtures each year. Most of our fixtures utilize multiple pressures, sequenced clamping, clamp confirmation, and air part clamp confirmation for robot loading. Our fixtures are engineered and built for many industries including :Aerospace, Heavy Equipment, Automotive, Oil & Gas, Nuclear and Transportation.
A James Tool fixture is extensively tested in one of our custom built hydraulic test stands. These test stands will replicate any CNC machines hydraulic system. This ensures that the part being clamped is under the same production conditions. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,945 |
Posted by: Margaret Rouse
WhatIs.com
A lock is a mechanism for controlling access to something. In programming, locks are often used so that multiple programs or threads of a program can share a resource - for example, access to a file for updating it - on a one-at-a-time basis. Typically, a lock is of temporary duration and when the resource is no longer required, it is freed for locking and use by the next sharer in a queue.
From a system point-of-view, locking is a method of synchronizing potentially concurrent uses of a database or other common resource. An operating system may enforce locking or some other mechanism in order to ensure that actions occur in the right sequence (when they don't, the situation is known as a race condition). An operating system must also provide means to ensure that two programs do not become dependent on each other for the release of a lock, a situation known as a deadlock in which the programs are essentially halted. In some systems, a mutex is a named object that provides a lock for a given resource.
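As an illustration (a minimal sketch added here, not part of the original definition), the following C# class uses the built-in lock statement so that only one thread at a time can update a shared counter:
class Counter
{
    private readonly object _sync = new object(); // dedicated lock object
    private int _count;
    public void Increment()
    {
        lock (_sync)      // only one thread may hold the lock at a time
        {
            _count++;     // the shared resource is updated safely
        }                 // the lock is released automatically on exit
    }
}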
This was last updated in August 2005
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,125 |
Q: How to set ManagedBy property for Security Group via C# I am having issues updating the ManagedBy property for Security Groups (works fine for Distribution Groups). I receive the following error: '{"You don't have sufficient permissions. This operation can only be performed by a manager of the group."}'
I am running with an account that has full access. The command works when running via PowerShell just not via C#. Here is the command I am running.
Command cmdCreateDR = new Command("Set-DistributionGroup");
cmdCreateDR.Parameters.Add("Identity", "GroupName");
cmdCreateDR.Parameters.Add(new CommandParameter("BypassSecurityGroupManagerCheck"));
Collection<PSObject> manByUsers = new Collection<PSObject>();
manByUsers.Add(new PSObject("domain\\UserName1"));
manByUsers.Add(new PSObject("domain\\UserName2"));
cmdCreateDR.Parameters.Add("ManagedBy", manByUsers);
Collection<PSObject> resultsEx = RunPowerShellCommand(cmdCreateDR);
A: BypassSecurityGroupManagerCheck is a switch, and the way you're using it you're passing it in as a bool set to false, which won't achieve your desired results. You should be doing something like
SwitchParameter switchpara = new SwitchParameter(true);
cmdCreateDR.Parameters.Add("BypassSecurityGroupManagerCheck", switchpara);
Cheers
Glen
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,119 |
Q: How do I set up my Response object to perform all of the necessary cleanup This is my code, my application is in error inadvertently where the application will not load and throws an error. The only way to fix the issue is by recycling the app pool that the application runs under. But, I am wondering if my problem can be resolved by setting up this code correctly? For instance putting it in a using statement, which I am not certain how to do.
// Write PDF
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", assessment);
Response.WriteFile(filepath3);
Response.End();
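As an aside on structuring the cleanup (a sketch only, and not a confirmed fix for the error described below): the Response object is not something you wrap in a using statement, but a commonly suggested pattern is to drop Response.End(), which aborts the request thread, and instead flush and hand control back to the pipeline in a try/finally block:
try
{
    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.AddHeader("Content-Disposition", assessment);
    Response.WriteFile(filepath3);
    Response.Flush();
}
finally
{
    // Skip the remaining pipeline steps without throwing ThreadAbortException
    Response.SuppressContent = true;
    HttpContext.Current.ApplicationInstance.CompleteRequest();
}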
I don't know if this is related or not, but this is the Server error:
Server Error in '/Apps/LeadAssessmentApp' Application.
Creating an instance of the COM component with CLSID {080D0D78-F421-11D0-A36E-00C04FB950DC} from the IClassFactory failed due to the following error: 800401e4 Invalid syntax (Exception from HRESULT: 0x800401E4 (MK_E_SYNTAX)).
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Runtime.InteropServices.COMException: Creating an instance of the COM component with CLSID {080D0D78-F421-11D0-A36E-00C04FB950DC} from the IClassFactory failed due to the following error: 800401e4 Invalid syntax (Exception from HRESULT: 0x800401E4 (MK_E_SYNTAX)).
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,502 |
\section{Introduction}
Amongst the many materials being studied for chemical applications,
2-dimensional (2D) materials like graphene and hexagonal boron nitride
(h-BN) are some of the most versatile and interesting thanks to their
novel properties and sustainable compositions. Properties of graphene
and h-BN manifest themselves in various important applications such as
desalination\cite{wang2012water}, water purification\cite{Lei_13},
energy storage\cite{raccichini2015role}, energy
generation\cite{Sales_10,Dhiman_11,Yin_12,Yin_14,Siria_13} and
catalysis\cite{li_2013semi,sun_2011,li2011highly}. For example,
sizeable voltages have been measured from forming water salinity
gradients across graphene sheets and
nanotubes\cite{Sales_10,Dhiman_11,Yin_12,Yin_14}, and Siria
\textit{et al.}\ demonstrated that water flowing osmotically through a BN
nanotube produces remarkably large electric currents\cite{Siria_13}.
This was attributed to the possible dissociation and adsorption of
water on the interior of the nanotube which influences the dynamics
inside the nanotube.
Much of the work on graphene and h-BN is also motivated by the
sustainability and the availability of the component elements -- an
aspect which can be difficult to meet using materials that contain
transition or noble metals\cite{fthenakis_sustainability_2009}.
Already, hydrogenated h-BN is thought to be a potential photocatalyst
as a material that is active under visible light and has a band gap
roughly in line with the reduction and oxidation potentials of
water\cite{li_2013semi}. Similar efforts are being made to develop
graphene into a photocatalyst by modification of its band gap, and
also as a support to other photocatalytic
materials\cite{sun_2011,li2011highly}.
Despite the promising applications of h-BN and graphene as membranes
and catalysts, there are still major gaps in our understanding of the
interaction of molecules like water on clean graphene and h-BN
surfaces on the atomic level, and even less is known about how doping
in the materials alters their interaction with molecules.
Indeed, experimental routes to produce hybrid composites of h-BN and
graphene\cite{Ci_2010,Liu_2013} have emerged with high levels of
control being reported on the nanometre scale, which is more reason to
gain better atomic level understanding. Various theoretical studies
on band gap engineering using h-BN and graphene mixtures
\cite{fan_band_2012,xu_density_2010,shinde_direct_2011,ding_electronic_2009,Ruiqi2012,Nitesh2013,moses2014composition,chang2013band,ferrighi2015boron},
have revealed the tuneability of these materials through the mixture
of atoms. Other studies have focused on exploiting this tuneability
for catalysis of oxygen reduction
reactions\cite{Zhao_2013,baierle_adsorption_2007,kattel_density_2014,li_oxygen_2012,sen_rules_2014,wang_bcn_2012,zhong_nitrogen-_2014,sinthika_doped_2014,wang_vertically_2011,fei_boron-_2014,zhao_can_2013}
and H$_2$
adsorption\cite{baierle_hydrogen_2006,pizzochero2015hydrogen,chhetri2016superior}.
An important aspect to consider, if using graphene and h-BN based
materials as catalysts, is their degree of selectivity. A high degree
of selectivity is an extremely desirable property for any catalyst and
indeed, the rational design of metal-based heterogeneous catalysts is
the focus of intense research (see \textit{e.g.} references
\citenum{tsai2015rational,li2015local,yang2013understanding,subbaraman2012trends,nilekar2011mixed,grabow2011mechanism,vojvodic2011optimizing}).
However, even in these cases the metal-based catalysts do not
necessarily have very different selectivities, and although they can
be doped or alloyed to vary their reactivity, the effect on reaction
energies and barriers is often a constant shift with respect to
different
molecules\cite{vojvodic2011optimizing,tsai2014understanding,norskov2002,michaelides2003}.
For instance, in the reaction pathways towards H$_2$ formation
discussed by Cortright \textit{et al.}, a metal catalyst is used throughout,
which also catalyses H$_2$ consuming reactions
instead\cite{cortright2002hydrogen}. Meanwhile, Guo \textit{et al.}\ have shown
that a more complex selective catalyst gives rise to a higher
conversion rate of methane to H$_2$ \cite{guo2014direct}.
Here we investigate water and some other environmentally and
industrially relevant small molecules with density functional theory
(DFT). The particular focus of this study is to establish the
thermodynamics of dissociative adsorption and how this is affected by
doping. From this work we draw a number of conclusions. First, doping
strongly affects the dissociation process, in some situations making
dissociation more favourable by several electronvolts. Second,
different surfaces have varying reactivity for the set of molecules
considered, with some substrates significantly enhancing the
reactivity of polar molecules and others enhancing the reactivity of
non-polar adsorbates.
Below, we begin by describing our computational setup in Section
\ref{METHOD} and present our DFT results for water adsorption in
Section \ref{water_result}, followed by an overview regarding the
relative adsorption of other molecules in Section
\ref{molec_result}. In Section \ref{disc} we discuss the trends
observed in adsorption sites and structures, and propose a general
framework for dissociative adsorption before finally concluding, in
Section \ref{CONC}.
\section{Methods}\label{METHOD}
The dissociative adsorption of a water monomer and other molecules on
graphene, h-BN, and their doped counterparts was calculated using DFT
and the Vienna \textit{Ab-Initio} Simulation Package (VASP) 5.3.2
\cite{vasp1,vasp2,vasp3,vasp4}. VASP uses plane-wave basis sets and
projector augmented wave (PAW) potentials\cite{PAW_94,PAW_99} to model
the core region of atoms.
\subsection{System setup}
The graphene and h-BN substrates are modelled using $(5\times5)$
hexagonal unit cells containing 50 atoms, for which adsorption
energies are converged to less than 10 meV with respect to
$(7\times7)$ unit cells. After a series of convergence tests for the
plane-wave cut-off energy we chose to use a 400 eV energy cut-off,
which gives dissociative adsorption energies converged to within 16
meV of a 600 eV energy cut-off. $\Gamma$-point sampling of reciprocal
space for the (${5\times5}$) cell was used but \textbf{k}-point
densities up to ($9\times9\times1$) were tested. Adsorption energies
using $\Gamma$-point sampling are within 50 meV ($3\%$) of the
converged adsorption energies for all substrates. Spin polarisation
was applied since H pre-adsorption on the substrates gives rise to
spin polarised states. A 10 \AA\ separation in the z-direction between
substrates without a dipole correction proved to be converged for
dissociative adsorption energies of water compared to using a dipole
correction or a 20 \AA\ separation ($<15$ meV
difference).\footnote{For methanol the separation distance in the
z-direction was increased to 20 \AA\ to allow space for the larger
adsorbed fragments.}
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{figure1.eps}
\caption{The clean and doped graphene and h-BN surfaces considered in
this study. (a) ($5\times5$) unit cell of graphene. (b) BN doping in
($5\times5$) unit cell of graphene which we refer to as BNDG. (c)
($5\times5$) unit cell of h-BN. (d) C doping in ($5\times5$) unit
cell of h-BN, referred to as 2CBN. For clarity only a small portion
of the ($5\times5$) unit cell is shown in (b) and (d). C is coloured
cyan, B is pink, and N is blue.}\label{figure_1}
\end{figure}
For the dissociative adsorption energies evaluated here (spanning a
few eV) we have mostly used the Perdew-Burke-Ernzerhof (PBE)\cite{PBE}
generalised gradient approximation exchange-correlation
functional. However we have also verified that the key results
obtained here are not particularly sensitive to the choice of
exchange-correlation functional, as discussed in Section \ref{disc}.
There are many different ways of isoelectronically doping graphene
with BN and \textit{vice versa} and as a first step we focus on low
concentrations of doping: one pair of BN substituting two C atoms in a
$(5\times5)$ unit cell of graphene which we refer to as boron nitride
doped graphene (BNDG) and likewise, two C atoms substituting a BN pair
in a $(5\times5)$ unit cell of h-BN, henceforth referred to as
2CBN. Doped substrates are modelled by isoelectronically doping the
pristine sheets and relaxing the unit cells using a plane-wave energy
cut-off of 600 eV to remove any strain introduced by the mixture of B,
N and C atoms. Relaxation effects are small: less than $1\%$ of the
relaxed lattice constant of the undoped system.\footnote{We verified
the stability of the doped substrates by calculating their cohesive
energies and we find good agreement with other work for similar
arrangements of doping atoms\cite{shinde_direct_2011}. Cohesive
energies for the different substrates have been calculated as
$E_{coh}=E^{tot}_{sheet}-{N_C}{E^{tot}_C}-{N_B}{E^{tot}_B}-{N_N}{E^{tot}_N}$
where $E^{tot}_{sheet}$, $E^{tot}_C$, $E^{tot}_B$ and $E^{tot}_N$
are the total energies of the sheet and gaseous C, B and N atoms in
the unit cell, respectively, and $N_C$, $N_B$ and $N_N$ are the
numbers of C, B and N atoms in the unit cell. The doped sheets in
this study have cohesive energies between that of graphene and h-BN,
and the four substrates range between $-7.06$ and $-7.84$ eV/atom.}
When water dissociates on a 2D substrate there are a number of
possible adsorption scenarios. Here, we have focused on four possible
outcomes. Schematic illustrations are given in Fig. \ref{figure_2} and
in brief they involve: (i) An OH group on the surface and the release
of (half) an H$_2$ molecule, referred to as ``OH (\sfrac{1}{2}H$_2$
gas)'';
(ii) The adsorption of both OH and H components of water on the
surface, with them both being on one side of the substrate, namely
``cis(OH--H)''. We consider this configuration to be particularly
important because 2D materials tend to be examined by supporting them
on other materials, leaving only one side of the surface exposed;
(iii) The adsorption of both OH and H on the surface but this time on
opposite sides of the substrate, referred to here as
``trans(OH--H)''. This could arise from having the substrate suspended
in a wet environment or from the H atoms diffusing through the sheet
and there are indications that graphene and h-BN are permeable to
protons\cite{hu2014proton}. However, as it is not clear how likely it
is for molecules to dissociate on different sides of the substrates,
we consider this configuration to be less relevant than cis(OH--H);
(iv) Lastly, ``OH--H--H'' which is again the adsorption of both OH and H,
this time on a surface that has an H atom pre-adsorbed. We tested this
particular set-up in light of previous experimental and simulation
work, where this is thought to cause water
dissociation\cite{Siria_13}.
Many adsorption sites are available for each category and we have
calculated only a number of possibilities: ortho, meta, and para
positioning of the adsorbed components with respect to each other, as
well as adsorption of the components far away from each other and the
doping site in the substrate.
The absolute adsorption energy for dissociative adsorption, $E_{ads}$,
is defined as,
\begin{equation}
E_{ads}=E^{tot}_{ads/sub}-E^{tot}_{sub}-E^{tot}_{ads}\label{IE}
\end{equation}
where $E^{tot}_{ads/sub}$ is the total energy of the adsorption
system, $E^{tot}_{sub}$ is the total energy of the relaxed substrate,
and $E^{tot}_{ads}$ is the energy of the intact molecule in the gas
phase. Equation \ref{IE} is used for all but one adsorption state,
that is OH (\sfrac{1}{2}H$_2$ gas). Here we also take into account the
energy of the released hydrogen, using half the total energy
($E^{tot}_{H_2}$) of a gas phase H$_2$ molecule:
\begin{equation}
E_{ads}=E^{tot}_{ads/sub}+\sfrac{1}{2}E^{tot}_{H_2}
-E^{tot}_{sub}-E^{tot}_{ads}\label{IE2}
\end{equation}
Within these definitions negative adsorption energies correspond to
favourable (exothermic) adsorption processes. Bond strengths of
hydrogen and hydroxyl to the surfaces are calculated with respect to a
gas phase hydrogen atom or hydroxyl group instead of the whole
molecule:
\begin{equation}
E_{bond}=E^{tot}_{sub} + E^{tot}_{ads}-E^{tot}_{ads/sub}\label{BS}
\end{equation}
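To make the energy bookkeeping of Eqs. \ref{IE}, \ref{IE2} and \ref{BS}
explicit, the short Python sketch below evaluates the three quantities
from a set of DFT total energies; all numerical values in the example
are hypothetical placeholders rather than results from this work.
\begin{verbatim}
def e_ads(e_ads_sub, e_sub, e_mol):
    """Dissociative adsorption energy; negative values are exothermic."""
    return e_ads_sub - e_sub - e_mol

def e_ads_oh_half_h2(e_ads_sub, e_h2, e_sub, e_mol):
    """Adsorption energy for the 'OH (1/2 H2 gas)' channel."""
    return e_ads_sub + 0.5 * e_h2 - e_sub - e_mol

def e_bond(e_sub, e_frag, e_frag_sub):
    """Bond strength of an H atom or OH group to the surface."""
    return e_sub + e_frag - e_frag_sub

# Hypothetical total energies in eV, for illustration only:
print(e_ads(-470.3, -455.1, -14.2))
print(e_ads_oh_half_h2(-463.9, -6.8, -455.1, -14.2))
print(e_bond(-455.1, -7.1, -463.0))
\end{verbatim}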
\section{Results}\label{Results}
We begin with the results for the dissociative adsorption of water on
the pure substrates, graphene and h-BN, and on the doped substrates,
BNDG and 2CBN. In general, we find that the dissociation of water is
more facile on the doped substrates and is also strongly affected by
the presence of a pre-adsorbed H atom, local electronic induction, and
steric effects arising from rehybridisation of orbitals in the
substrate atoms. We use these insights to look at the adsorption of
H$_2$, methane, and methanol on the same surfaces in Section
\ref{molec_result}. From our analysis we see that different substrates
favour the dissociation of different molecules, depending on their
polarity, enabling us to make comparisons between the adsorption
behaviour of polar and non-polar molecules and fragments.
\subsection{Dissociative adsorption of water on graphene, h-BN, BNDG and 2CBN}\label{water_result}
Fig. \ref{figure_2} reports results for the dissociative adsorption of
water on the clean and doped substrates. It can be seen that the
energetics of the dissociation process varies significantly for the
various adsorption structures and substrates.
On pristine graphene we find that dissociation is strongly endothermic
in agreement with previous work\cite{cabrera2007,xu2005water}. In
addition the energy of the dissociation process varies by as much as 2
eV depending on the final adsorption configuration. The lowest energy
adsorption configuration on pristine graphene is trans(OH--H) ($2.19$
eV) with OH and H in ortho positions, in agreement with Xu
\textit{et al.}\cite{xu2005water} The cis(OH--H) configuration shown in Fig.
\ref{diss_fig} (a) on graphene has a dissociative adsorption energy of
2.57 eV and is thus $\sim0.4$ eV less stable than trans(OH--H).
Dissociative water adsorption is, in general, more thermodynamically
favourable on h-BN than on graphene. For example the cis(OH--H) state
on pristine h-BN shown in Fig. \ref{diss_fig} (d), has $E_{ads}$ of
$1.19$ eV and is $1.38$ eV more favourable than the equivalent
structure on graphene. Nonetheless, given just how thermodynamically
unfavourable water dissociation is, it is unlikely that water monomers
will dissociate on pristine graphene and h-BN.
\begin{figure}
\centering \includegraphics[width=0.45\textwidth]{figure2.eps}
\caption{The dissociative adsorption energy of water on graphene,
BNDG, 2CBN and h-BN is shown for different adsorption
structures. Red circles indicate the adsorption of OH from water
onto the substrate and the release of hydrogen gas. The black
diamonds indicate the dissociative adsorption of a water molecule
into OH and H on the substrate. The blue crosses correspond to the
adsorption energies on a hydrogenated substrate. The categories of
water dissociation on the substrate are illustrated on the
right.}\label{figure_2}
\end{figure}
Upon moving to the doped substrates, for which numerous configurations
were considered, we find a significant lowering in the energy to
adsorb water. From graphene to BNDG, and from h-BN to 2CBN, we gain
$\sim1$ eV in the adsorption of a water molecule. The cis(OH--H)
state and the lowest energy dissociation state for each doped surface are
shown in Fig. \ref{diss_fig}. On both BNDG and 2CBN, B--OH and C--H
bonds are formed. Note from Table \ref{bonds} that the B--OH bond is
$\sim1.3$ eV stronger on BNDG than on h-BN (or C--OH on
graphene). Hence, a marked activation of the B atom towards binding OH
results from the mixture of N and C atoms surrounding it and in this
way doping leads to a considerable lowering of the dissociative
adsorption energy.
\begin{figure}
\centering \includegraphics[width=0.45\textwidth]{figure3.eps}
\caption{The most stable cis(OH--H) (top panel) and most stable
overall dissociative adsorption structures (lower panel) of water on
graphene, h-BN, BNDG and 2CBN are shown. (a) and (e) are water on
pristine and hydrogenated graphene, respectively. (b) and (c) show
water adsorbed on BNDG and 2CBN, whilst (f) and (g) show water
adsorption on the hydrogenated counterparts. (d) and (h) are on
pristine and hydrogenated h-BN, respectively.}\label{diss_fig}
\end{figure}
The presence of the pre-adsorbed H atom also significantly improves
the thermodynamics of water adsorption by $\sim1$ eV for each
substrate.
Favourable OH--H--H configurations are shown in Fig. \ref{diss_fig}
and from Fig. \ref{figure_2} it can be seen that water splitting is
thermodynamically favourable on the hydrogenated h-BN ($-0.24$ eV),
BNDG ($-0.38$ eV) and 2CBN ($-1.12$ eV) surfaces. Thus doping and
hydrogenating both graphene and h-BN makes the thermodynamics of water
dissociation considerably more favourable. The general conclusion that
pre-adsorbed hydrogen facilitates water dissociation is in agreement
with Siria \textit{et al.}\cite{Siria_13} Interestingly, the overall most
favourable states for water dissociation on the doped surfaces contain
a B--N--C construction in the surface where B--OH, N--H, and C--H
bonds are formed. We considered if the increased reactivity at these
sites is due to the pre-adsorbed H atom on a N site destabilising the
surface and thus activating it towards water adsorption, but this is
unlikely because the N--H bond is very weak (only $0.07$ eV). The
B--N--C construction in the surface of doped substrates is therefore
central to making the dissociation energy more exothermic, and
exemplifies the use of isoelectronic doping to tune the dissociative
adsorption energy of water.
In all OH--H--H states, the OH and H components of the dissociated
water are arranged in a hydrogen bonded fashion. The hydrogen bond on
h-BN at $1.95$ \AA\ is shorter than the hydrogen bond on graphene
($2.23$ \AA) despite the slightly smaller lattice constant of
graphene. The hydrogen bonding distances are indicative of the more
polarised binding of OH and H on h-BN, which culminates in a more
negative oxygen atom in the OH group and hence a shorter hydrogen
bond.
Additional DFT calculations of water dissociation on the protonated
(as opposed to hydrogenated) substrates were also performed. A
homogeneous background charge is added in the DFT calculations of the
charged systems so that the electrostatic interactions do not diverge
and can be computed under periodic boundary conditions. These reveal
that protonation is slightly less effective than hydrogenation but
still increases the tendency of water to dissociate by $\sim0.8$ eV
with respect to the non-protonated clean surfaces. Thus either
hydrogen pre-adsorption or acidic conditions (pre-adsorbed protons)
could be key elements in the activation of these sheets towards
dissociative water adsorption.
Before moving on to discuss the other adsorbates, two additional
features of these adsorption systems deserve comment. First,
adsorption of the dissociated fragments on separate sides of the sheet
(so-called trans adsorption) is favoured in general. Specifically,
trans-ortho(OH--H) adsorption is $\sim0.4$ eV more stable than
cis-ortho(OH--H) on graphene. This is consistent with previous work on
graphene\cite{lin_hydrogen_2008,boukhvalov2008,balog2010,vsljivanvcanin2011,Merino2015ortho}
and demonstrates the stabilisation gained by adhering to a more
tetrahedral structure around the sp$^3$ hybridised C atom.
Likewise on h-BN and BNDG, the tetrahedral arrangements of
trans(OH--H) and OH--H--H lead to lower dissociative adsorption
energies (by about 0.3 eV). Note the 2CBN system is an exception and
the most stable (OH--H) configuration on 2CBN has cis-para
positioning, shown in Fig. \ref{diss_fig}(c). The trans-ortho (OH--H)
state on 2CBN is still close in energy and only $0.04$ eV less stable
than cis-para\footnote{The 0.04 eV difference between cis-para and
trans-ortho adsorption configurations remained the same using a
denser $6 \times 6 \times1$ \textbf{k}-point mesh.}. This can be
explained by the difference in partial charges on the B atoms bonding
to OH in each case. Electronegative N atom neighbours make B atoms
more positive and subsequently form a stronger polar bond with OH. In
the trans-ortho state, the B atom is surrounded by only two N atoms
and hence, is not as electrophilic as the B atom in the cis-para state
which is bonded to three other N atoms. This example in 2CBN
demonstrates that inductive effects from neighbouring atoms dominate
over steric effects. Despite the advantage of satisfying the sp$^3$
hybridisation in trans adsorption states, it is important to remember
that in practice 2D materials are often suspended or grown over
substrates\cite{bn_exp,wang2011monolayer,wang2010periodicity,wintterlin2009graphene,diaz2013hexagonal,altenburg2010graphene,lattelais2015cycloaddition,li2012influence,joshi2012boron}
(metals or silicon carbide) where cis configurations are more likely.
Second, inductive effects are also introduced by the adsorbed water
fragments. This can be seen by comparing the co-adsorbed to the
separately adsorbed OH and H fragments. Specifically, OH
(\sfrac{1}{2}H$_2$ gas) adsorption energies on graphene and h-BN differ by
only 5 meV and indeed the C--OH and B--OH bond strengths (as listed in Table
\ref{bonds}) on graphene and h-BN are almost identical. In contrast
C--H bonds in graphene are significantly stronger than N--H bonds in
h-BN, implying that OH--H on graphene might be more stable, and yet
water adsorption is more exothermic on h-BN. It follows that the
binding of hydrogen atoms on the surface perturbs the local electronic
structure and therefore, the bond strength of OH to the surface, such
that the OH--H configuration is considerably more stable on h-BN than
on graphene.
\begin{table}
\caption{Bond strengths (in eV) for H and OH on graphene, h-BN and
BNDG sheets with respect to a gas phase hydrogen atom or OH
molecule. Parentheses indicate neighbouring atoms in the
substrate. Negative bond energies correspond to endothermic but
metastable adsorption minima. No minimum was found for OH adsorbed
on the N atom.}
\begin{tabular}{lr}
\hline\hline
Bond & Bond strength (eV) \\ \hline
Graphene & \\
C--H & 0.81 \\
C--OH & 0.67 \\ \hline
h-BN & \\
N--H & $-0.77$ \\
B--H & $-0.01$ \\
B--OH & 0.67 \\ \hline
BNDG & \\
B--H & 0.98 \\
N--H & 0.07 \\
(B)C--H & 1.15 \\
(N)C--H & 1.04 \\
B--OH & 1.96 \\
(B)C--OH & 0.84 \\
(N)C--OH & 1.03 \\ \hline\hline
\end{tabular}
\label{bonds}
\end{table}
It is useful to explain these trends in terms of the physical
properties of the surfaces and we have done this by looking at Bader
charges\cite{bader1990atoms,henkelman2006fast}, average electrostatic
potentials at each atom, and Kohn-Sham orbitals of the dissociated
states.\footnote{Of course there are many ways to project charges onto
atoms and Bader charges discussed here are simply used for
pinpointing the relevant trends in the materials.} Comparison of
the adsorption structures and Bader charges suggests the most stable
adsorption states arise from: (i) C--H in which the C site has the
most negative partial charge across the surface; (ii) B--OH in which
the B atom is positive and susceptible to nucleophilic attack; and
(iii) N--H in which the N atom is the most negative and therefore the
strongest nucleophile. A careful analysis reveals that the adsorption
of water is affected by a combination of factors involving orbital
overlap and electrostatic interactions. Graphene has weaker
electrostatic interactions with water than h-BN, but better orbital
overlap (evidenced by bond strengths in Table \ref{bonds}). In
contrast, hybrids of h-BN and graphene have stronger electrostatic
interactions with water than graphene, and also stronger orbital
overlap with water than h-BN. Due to these combined effects doped
graphene and h-BN are more suited for the adsorption of
water. Evidently, for a given substrate, electrostatic interactions
with a molecule determine the site of adsorption (\textit{e.g.}
in 2CBN the cis-para state of water is more stable than the
trans-ortho).
To recap, isoelectronic doping has a significant impact on the
thermodynamics of water dissociation on graphene and h-BN. The most
thermodynamically favourable adsorption identified is the OH--H--H
configuration on 2CBN with an adsorption energy of $-1.12$ eV. The
strong adsorption energy on 2CBN can be attributed to: (i) the B--OH
bond in which the B atom is more positive compared to B atoms in the
other substrates; and (ii) a stronger C--H bond at 2CBN as opposed to
a B--H bond at h-BN.
\subsection{Dissociative adsorption of hydrogen, methane, and methanol}\label{molec_result}
With the insight gained from water adsorption, we also calculated the
dissociative adsorption of H$_2$, methane, and methanol. As before,
various configurations were calculated for each system and in
Fig. \ref{figure_bar}(a) we report the most favourable dissociation
energies found for the molecules on the same side (cis configurations)
of the pure and doped substrates. The change in zero point energy
(ZPE) upon dissociative adsorption for each system is also included in
the energies in Fig. \ref{figure_bar}. ZPEs were calculated using the
harmonic approximation and we find that the change in ZPE increases
the dissociative adsorption energies by up to $0.3$ eV, which is
certainly not insignificant. In some cases the adsorption energies of
the trans states are more favourable than cis but since it is more
feasible for adsorbates to dissociate on the same side of the
substrate, we show results only for cis configurations.
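As an illustration of how the harmonic ZPE correction enters the
reported energies, the following Python sketch evaluates the change in
ZPE between the gas phase molecule and the adsorbed fragments and
applies it to a bare electronic adsorption energy; all frequencies and
energies in the example are illustrative placeholders, not values
computed in this work.
\begin{verbatim}
CM1_TO_EV = 1.2398e-4   # 1 cm^-1 expressed in eV

def zpe(freqs_cm1):
    """Harmonic zero-point energy, 0.5 * sum(h*nu), over real modes."""
    return 0.5 * sum(f * CM1_TO_EV for f in freqs_cm1 if f > 0)

# Illustrative harmonic frequencies (cm^-1):
gas_h2o = [1595.0, 3657.0, 3756.0]                  # free water molecule
co_adsorbed = [3650.0, 2900.0, 1150.0, 800.0, 650.0,
               450.0, 300.0, 250.0, 200.0]          # OH + H on the sheet

delta_zpe = zpe(co_adsorbed) - zpe(gas_h2o)         # change upon adsorption
e_ads_with_zpe = -0.50 + delta_zpe                  # correcting a bare value
\end{verbatim}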
From these calculations with the other adsorbates we learn two key
things. First, doping of the pristine substrates helps the
thermodynamics of dissociation for these molecules too. Second, the
details are quite different, with methanol behaving in a similar
manner to water by benefiting most from BN doping in graphene, whereas
H$_2$ and methane benefit most from C doping in h-BN.
Figs. \ref{figure_bar} (b) and (c) illustrate this latter point by
showing the gain in dissociative adsorption energy for each molecule
as a result of doping in the pristine substrates.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{figure4.eps}
\caption{(a) Dissociative adsorption energies including ZPE
contributions of H$_2$, methane, water, and methanol on graphene,
BNDG, 2CBN and h-BN. H$_2$ in blue circles, methane in green
diamonds, water in black triangles, and methanol in red
squares. Results are given only for the most stable adsorption
structure for each molecule and substrate with the fragments
adsorbed on the same side of the substrate and without pre-adsorbed
hydrogen. (b) Gain in adsorption energies from doping pristine
graphene with BN (in eV) for different molecules, illustrating a
marked increase in the reactivity with polar adsorbates. (c) Gain in
adsorption energies from doping pristine h-BN with 2C (in eV) for
different molecules and here the reactivity with non-polar
adsorbates increases more significantly. The insets in (b) and (c)
illustrate the doping.}\label{figure_bar}
\end{figure}
The adsorption of methanol varies in a similar way to water across the
different substrates and favours the same adsorption sites (ortho on
graphene, BNDG, and h-BN, and para on 2CBN). From
Fig. \ref{figure_bar} we see that water and methanol adsorption
energies both become more favourable by $\sim 1.4$ eV as the substrate
is changed from graphene to BNDG. Having established that the C--OH to
B--OH change in bond energy is the main contributor to the difference
in adsorption energies for water on graphene and BNDG, we can deduce
that the same is true for methanol. Note that the adsorption of
methanol is stronger than that of water on all substrates by $0.2-0.4$
eV. On graphene, BNDG, and 2CBN the C--O bond of methanol is broken
preferentially with the CH$_3$ fragment bonding to the substrate at
the same sites as the H from water does. However on h-BN, the O--H
bond is broken instead, resulting in N--H and B--OCH$_3$ bonds with
the h-BN substrate.
Meanwhile the non-polar molecules, H$_2$ and methane, also benefit
from doping of the pristine substrates but in particular from C doping
in h-BN. This appears to be because the alkene-like bond between the
two C atoms, which is susceptible to alkene addition reactions, is
particularly effective at breaking weakly polarised bonds. Methane
and H$_2$ follow exactly the same trend but H$_2$ is adsorbed around
0.6 eV more strongly overall.
By tracking the lowest adsorption states across the substrates in
Fig. \ref{figure_bar}(a), we see that the preferences for H$_2$ and
water switch; H$_2$ adsorbs preferentially on graphene whereas water is
preferred on BNDG and pure h-BN. H$_2$ and water have almost the same
dissociative adsorption energies on 2CBN ($\sim0.5$ eV). This dependence
of the adsorption preference on the isoelectronic substrate
doping is a significant outcome, especially given that these materials
are composed of sustainable and abundant elements, making them
desirable candidates for catalysis.
Finally, as with water adsorption we also examined the effect of H
pre-adsorption on the dissociative adsorption energy of these small
molecules. We found that, in a similar manner to water, dissociative
adsorption becomes more favourable by $0.7-1.5$ eV on the hydrogenated
surfaces, such that H$_2$, water, and methanol have exothermic
dissociative adsorption energies on BNDG, 2CBN, and h-BN. Thus, as
with water, doping and hydrogenation significantly improves the
energetics of dissociative adsorption on graphene and h-BN.
\section{Discussion and general framework}\label{disc}
Some important trends can be observed from the adsorption structures
and energies of water and the other molecules studied here, which are
likely to apply in general to polar and non-polar adsorbates on BNDG
and C doped h-BN systems. Although we have studied water adsorption
more extensively, the trends also hold for H$_2$, methanol, and
methane. To summarise:
\begin{itemize}
\item Isoelectronic doping of graphene with BN increases the
reactivity with polar adsorbates (\textit{i.e.} water and methanol)
by $\sim1.4$ eV but only changes the reactivity with non-polar
adsorbates by $\sim0.5$ eV. Conversely, isoelectronic doping of h-BN
with C increases the reactivity most with H$_2$ and methane, by
$1.2-1.8$ eV.
\item Hydrogen atom (or proton) pre-adsorption on the substrate
significantly improves the thermodynamics of dissociation for the
molecules considered by $\sim1$ eV ($\sim0.8$ eV), resulting in
exothermic dissociative adsorption, and suggesting that acidic
conditions aid dissociation on the substrates.
\item The most exothermic adsorption sites for polar adsorbates share
the B--N--C construction, in which there is already a H atom
pre-adsorbed on a N atom. Meanwhile, non-polar adsorbates favour
C--C sites with localised electrons (as in 2CBN).
\item Local electronic inductive effects dominate over steric
effects. In other words, para-positioning of molecule fragments is
possible (however ortho is generally favoured) if the atoms in the
substrate have a larger electrostatic potential in the para sites.
\item Atoms in the substrate that change to sp$^3$ hybridisation as a
result of chemisorption prefer to be in a more tetrahedral
arrangement, \textit{e.g.} the trans-ortho configuration is
$\sim0.3$ eV more stable than cis-ortho.
\end{itemize}
Some comments related to these trends are appropriate. First, all the
numbers given have been derived from the PBE exchange-correlation
functional. It is well known that bond strengths and adsorption
energies vary from one functional to the next\cite{schimka2010accurate,vdwpers} and PBE in
particular neglects van der Waals dispersion forces and does not
include exact exchange. Indeed, previous work on similar systems to
those considered here, namely the physisorption of water on
h-BN\cite{al-hamdani2} and on BN doped benzene\cite{al-hamdani1}, has
shown that vdW interactions can be important. Here, however, we are
concerned with strongly bonded chemisorption structures of the
dissociated fragments of water and the other molecules involving an
energy scale of several electronvolts. Nonetheless we have
investigated the dissociative adsorption energies for all states in
Fig. \ref{figure_bar} using the vdW-inclusive optB86b-vdW
functional\cite{vdwDF,B86,vdw_opt11}. We find that the inclusion of
vdW interactions makes the thermodynamics of dissociative adsorption
energy more favourable by $0.2-0.5$ eV. With this functional some
adsorption states are exothermic even in the absence of pre-adsorbed
hydrogen. In contrast, when we look at the thermodynamics of water
adsorption with B3LYP\cite{b3lypA,b3lypB,b3lypC,b3lypD}, that accounts
for some exact exchange but not dispersion, dissociative adsorption is
less favourable by \textit{circa} 0.2 to 0.4 eV. It is clear therefore
that the thermodynamics of dissociative adsorption is sensitive to the
choice of exchange-correlation functional, with the PBE values
presented here lying in the middle of the three functionals
considered. Importantly, the relative energies and trends across the
surfaces remain unchanged whether or not dispersion interactions or
exact exchange are accounted for.
Second, when probed experimentally 2D materials like graphene and h-BN
are often adsorbed on a support material such as metals or silicon
carbide. We have not included supporting materials in this study but
the electronic properties of graphene and h-BN can be influenced by
the choice of
support\cite{bn_exp,wang2011monolayer,wang2010periodicity,wintterlin2009graphene,diaz2013hexagonal,altenburg2010graphene,lattelais2015cycloaddition,li2012influence}. Metals
for instance, can hybridise the p$_z$-states in graphene and the N
atoms in h-BN, and thus alter the reactivity of the
surfaces\cite{wintterlin2009graphene,wang2011monolayer,diaz2013hexagonal}. It
is also known that differences in the lattice constants of the 2D
material and support can lead to an undulating moir\'{e} structure in
which different regions of the 2D overlayer interact differently with
the
substrate\cite{bn_exp,wang2010periodicity,altenburg2010graphene,li2012influence,joshi2012boron}. It
would be interesting in future work to explore how the presence of a
substrate alters the trends observed here.
Third, we have seen that depending on the type of doping the
thermodynamics of dissociation of either polar or non-polar molecules
can be enhanced. This could potentially be exploited in heterogeneous
catalysis where it is generally desirable to identify catalysts that
can cleave specific bonds and as a result enhance the selectivity
towards a particular reaction product. In future work it would be
interesting to explore this possibility through calculations of the
kinetics of dissociation on the substrates considered here. However,
since it is now well established that reaction barriers for chemical
reactions at surfaces correlate well with the thermodynamics, it is
not unreasonable to suggest that the thermodynamic trends identified
here could lead to interesting catalytic behaviour.
\section{Conclusion}\label{CONC}
To conclude, the dissociative adsorption of water, H$_2$, methane, and
methanol has been studied on pristine graphene and h-BN, and on their
doped counterparts (BNDG and 2CBN) using DFT.
Most notably, isoelectronic doping of the pristine surfaces makes the
dissociation process more favourable generally by at least 1 eV. Based
on electronic structure analyses, we conclude that the increased
reactivity of the surface is because B atoms (as a doping species) are
more susceptible to nucleophilic attack, and in 2CBN the C--C double
bond is more susceptible to alkene addition-like reactions. These
changes in the local electronic structure favour particular adsorption
configurations. The OH component bonds strongly to the doping B atom,
whilst H atoms bond preferentially to C compared to either B or N
atoms. Hence, methanol behaves very similarly to water as a polar
molecule, because of the OH group. In the same vein, H$_2$ and methane
follow the same trend across the different surfaces, with both binding
preferentially on 2CBN, where there is a high energy C--C double bond.
The results presented in this study also suggest that adsorption is
exothermic in the presence of adsorbed H atoms (or protons) on the
surface. Thus, there could be important implications for the transport
properties and chemical reactions of water and other molecules across
doped graphene and h-BN membranes, and the conditions (acidic or basic)
are likely to provide a useful means of altering the interaction with
molecules.
Finally, we observe variations in the thermodynamics for the set of
molecules considered depending on the surface. Again we caution that
the calculation of reaction barriers and even rates is an important
next step but these results suggest that one can vary the preference
for H$_2$ dissociative adsorption to that of water or methanol for
example, and consequently alter the course of reaction pathways in
either H$_2$ or methanol formation processes. Consider for example the
wasteful dehydration and methanation reactions in Cortright \textit{et al.}'s
reaction pathways catalysed by a metal for H$_2$
production\cite{cortright2002hydrogen}; wherein H$_2$ is consumed by
reacting with CO$_2$ at low temperatures to produce alkanes and water.
This reaction can be avoided if methanol, methane, and water are split
more readily than H$_2$. According to our findings this might be
achievable for methanol and water by doping graphene with BN.
Overall, our results indicate that isoelectronically doped graphene
and h-BN could exhibit interesting chemical and catalytic activities
which could potentially be exploited.
\acknowledgments We are grateful for support from University College
London and Argonne National Laboratory (ANL) through the Thomas Young
Centre-ANL initiative. Some of the research leading to these results
has received funding from the European Research Council under the
European Union's Seventh Framework Programme (FP/2007-2013) / ERC
Grant Agreement number 616121 (HeteroIce project). A.M. is supported
by the Royal Society through a Wolfson Research Merit
Award. O.A.v.L. acknowledges funding from the Swiss National Science
foundation (No. PP00P2 138932). In addition, we are grateful for
computing resources provided by the London Centre for Nanotechnology
and University College London. We would also like to thank Michail
Stamatakis for his very helpful insights and suggestions.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 655 |
const chai = require('chai');
const expect = chai.expect;
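// Fixture data: method descriptors (fully qualified name plus parameter list)
// used to initialise the ConfigHelper in the tests below.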
const methods = [
{ name: '.pkg.test.first', parameters: [] },
{ name: '.pkg.test.second', parameters: [
{ type: 'Simple', repeated: false, name: 'first' },
{ type: 'Simple', repeated: false, name: 'second' },
], },
{ name: '.pkg.test.third', parameters: [{ type: 'string', repeated: false, name: 'text' }] },
{ name: '.pkg.test.forth', parameters: [{ type: 'int32', repeated: true, name: 'integers' }] },
{ name: '.pkg.test.fifth', parameters: [{ type: 'Some', repeated: false, name: 'value' }] },
];
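// Sample context (a response payload and user info) passed to the compiled
// response templates in the tests.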
const context = {
response: { first: 'bla', second: ['test', 'array'] },
user: { name: 'Max', id: '1234' },
};
describe('Getting formatting information:', () => {
const ConfigHelper = require('../config-helper');
let configHelper;
it('succeeds to initialize', () => {
configHelper = new ConfigHelper(__dirname + '/custom-formatting.yml', methods);
expect(configHelper).not.to.be.a('undefined');
});
describe('Default Templates:', () => {
let formattings;
beforeEach(() => {
formattings = configHelper.getResponseTemplates('.pkg.test.second');
});
it('returns one response templates for the second method', () => {
expect(formattings).to.be.an('array').of.length(1);
});
it('outputs the given response as yaml', () => {
let output = formattings[0](context);
expect(output).to.equal('first: bla\nsecond:\n - test\n - array');
});
});
describe('Custom Templates:', () => {
let formattings;
beforeEach(() => {
formattings = configHelper.getResponseTemplates('.pkg.test.first');
});
it('returns two response templates for the first method', () => {
expect(formattings).to.be.an('array').of.length(2);
});
it('formats as expected for the first template', () => {
let output = formattings[0](context);
expect(output).to.equal('bla\n - test\n - array');
});
it('formats as expected for the second template', () => {
let output = formattings[1](context);
expect(output).to.equal('Hallo Max: bla');
});
});
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,448 |
\section{Introduction}
\label{sec:int}
Explaining the observed distribution of heavy element abundances in our solar system is a pressing scientific question. The quest for a quantitative understanding of nucleosynthesis involves exploring and understanding a complex interplay between nuclear properties and extreme, astrophysical environments. Our understanding of the stellar processes responsible for the production of elements heavier than iron has improved significantly since the first serious attempts at explaining stellar nucleosynthesis in the seminal works of Burbidge \textit{et al.}~\cite{burbidge1957} and Cameron~\cite{cameron1957} in the 1950's. Despite recent advances, several open questions remain and better determined nuclear data is key to answering these open questions~\cite{Arnould2007,Rauscher2013}.
Elements heavier than iron are mainly produced by the slow neutron capture process ($s$-process), which takes place in asymptotic giant branch stars, or by the rapid neutron capture process ($r$-process). Observational support has recently been found for the latter process taking place in neutron star mergers (not excluding other possible r-process sites such as core collapse supernovae)~\cite{Rauscher2013,Savchenko2017}. The isotope $^{89}$Y is produced by both the s- and r-processes. Since it is an $N=50$ isotope, the neutron capture cross section on this isotope is rather low, making it a bottleneck for the s-process and the yield of heavier isotopes. Understanding the $^{89}$Y(n,$\gamma)^{90}$Y reaction cross section is therefore key to understanding how isotopes heavier than $^{89}$Y are formed by the s-process in the evolved stars on the asymptotic giant branch (AGB) of the Hertzsprung-Russell diagram. Reactions that involve $^{89}$Y are important for determining the total s-process production of isotopes in the mass region $78 \leq A \leq 92$ \cite{sprocessN50}, and the neutron cross section of $^{89}$Y also influences the whole s-process abundance distribution~\cite{0004-637X-601-2-864}. The r-process contribution to the solar abundances, a key set of abundance data to which galactic chemical evolution model outputs are routinely compared, is commonly obtained by subtracting the s-process contribution. For this reason, s-process isotope production is key to understanding the r-process as well. Furthermore, the abundance of $^{89}$Y is one of three so-called s-light abundances that are used as a reference to compare theoretical models of stellar nucleosynthesis and galactic chemical evolution to abundance observations. The part of the s-process reaction network that involves $^{89,90}$Y is shown in Fig.\ref{fig:sflow}. Also note that in AGB stars more massive than typically $4~M_{\odot}$, the thermal pulses are hot enough to burn $^{22}$Ne at the bottom of the pulse, leading to a rather large neutron flux (with densities of the order of $10^{12}$~cm$^{-3}$). In this case an important amount of the unstable $^{90}$Sr (with a half-life of $t_{1/2}=28.8$~y), $^{90}$Y ($t_{1/2}=2.7$~d) and $^{91}$Y ($t_{1/2}=58.5$~d) is produced. These branchings in the neutron-rich region may impact the production of Zr isotopes by bypassing $^{90}$Zr \cite{Karinkuzhi18}, but are still affected by nuclear physics uncertainties regarding the $^{90}$Sr(n,$\gamma)^{91}$Sr, $^{90}$Y(n,$\gamma)^{91}$Y and $^{91}$Y(n,$\gamma)^{92}$Y reaction rates, as also indicated in Fig.\ref{fig:sflow}.
\begin{figure}[tb]
\includegraphics[width=0.45\textwidth]{s-flow-extended.png}
\caption{(Color online) Part of the s-process network is shown here. The blue arrows indicate the main reactions that involve $^{89}$Y for the case of an s-process taking place in an AGB star, while the dashed arrows indicate branchings that in a more massive AGB star may impact the production of Zr isotopes. \label{fig:sflow}}
\end{figure}
The reaction cross sections needed for large reaction network calculations are calculated in the statistical framework of Hauser and Feshbach~\cite{hauser}, except for cases where experimental cross sections are available. Two key ingredients for such calculations are the nuclear level density (NLD) and the $\gamma$-ray strength function ($\gamma$SF). The cross section of neutron capture on unstable isotopes is challenging to study experimentally, and benchmarking theoretical models of NLD and $\gamma$SF needed for calculating cross sections is therefore important also in the context of the s-process.
In this work we present novel measurements of the $(\gamma,n)$ reaction cross section on $^{89}$Y made at the NewSUBARU synchrotron facility~\cite{Ando_new_sub_design, newsubweb}. The inverse Compton scattering method was used to produce photon beams with energies in the range $S_{\textrm{n}} \leq E_{\gamma} \leq S_{2\textrm{n}}$. This photon beam was used to make neutron measurements between the neutron binding energy, $S_n=11.482$ MeV, and the two-neutron separation energy, $S_{2n}=20.835$ MeV. We combine our results obtained at NewSUBARU with the $\gamma$SF measured below particle threshold for $^{89}$Y obtained using the Oslo method~\cite{unfolding,schiller2000,omsystematics}, making use of the principle of detailed balance for emission and absorption of radiation \cite{blatt1952}. The NLDs of ${}^{89, 90}$Y below $S_n$ have previously been reported in Ref. \cite{guttormsen_yttrium_2014} and the $\gamma$SF of $^{89}$Y in Ref. \cite{PhysRevC.93.045810}. We focus here on the experimental details and results of the ${}^{89}$Y($d,p)^{90}$Y experiment where the NLD and $\gamma$SF of $^{90}$Y below the neutron binding energy, $S_n$, were extracted. The Y-isotopes close to stability are not expected to display substantial structure effects and therefore the experimental results for the $\gamma$SF of $^{89}$Y are used in combination with the $^{90}$Y $\gamma$SF and NLD to constrain the $^{89}$Y$(n,\gamma)^{90}$Y cross section and to calculate the Maxwellian averaged reaction rates for astrophysically relevant temperatures.
\section{Measuring $^{89}$Y($\gamma$,n): Setup and method}
The measured photonuclear cross section for the exclusive one-neutron channel, $\sigma_{\textrm{exp}}$, for an incoming photon beam with maximum beam energy, $E_{\textrm{max}}$, is given by,
\begin{equation} \label{eq:gammancs}
\sigma_{\textrm{exp}}=\int_{S_n}^{E_{\textrm{max}}}D^{E_{\textrm{max}}}(E_{\gamma})\sigma(E_{\gamma})dE_{\gamma}=\frac{N_n}{N_tN_{\gamma}\zeta\epsilon_ng},
\end{equation}
where $D$ is the normalized energy distribution of the photon beam and $\sigma(E_{\gamma})$ is the photoneutron cross section as a function of photon energy, $E_{\gamma}$. The number of neutrons detected is $N_n$, $\epsilon_n$ represents the neutron detection efficiency, $N_t$ the number of target nuclei per unit area, $N_{\gamma}$ the number of photons incident on the target and $\zeta=(1-e^{-\mu t})/\mu t$ is a correction for the self-attenuation effect in a thick-target measurement and, finally, $g$ is the fraction of photons with $E_{\gamma}>S_n$. To determine $\sigma(E_{\gamma})$ from $\sigma_{\textrm{exp}}$, we need to determine experimentally the other parameters on the right hand side of Eq. \ref{eq:gammancs} and the energy distribution of the photon beam.
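To make the bookkeeping in Eq. \ref{eq:gammancs} explicit, the short
Python sketch below evaluates the flux-averaged cross section from the
measured quantities; all numerical values in the example are
hypothetical placeholders and not data from this experiment.
\begin{verbatim}
import math

def sigma_exp(n_n, n_t, n_gamma, eps_n, g, mu_t):
    """Flux-averaged cross section from the measured quantities; mu_t is
    the photon attenuation coefficient times the target thickness."""
    zeta = (1.0 - math.exp(-mu_t)) / mu_t   # thick-target self-attenuation
    return n_n / (n_t * n_gamma * zeta * eps_n * g)

# Illustrative placeholder numbers only:
sigma_cm2 = sigma_exp(n_n=5.0e4, n_t=1.3e22, n_gamma=4.3e7,
                      eps_n=0.65, g=0.95, mu_t=0.1)
sigma_mb = sigma_cm2 * 1.0e27               # 1 mb = 1e-27 cm^2
\end{verbatim}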
The experiment was carried out using photon beams with maximum energies in the range of 11.6-20.0 MeV with FWHM 0.21 - 0.68 MeV. The energy distributions of the photon beams used in this work are shown in Fig.\ref{fig:allbeams}. The beams were produced through inverse Compton scattering between Nd:YVO$_4$ laser photons ($\lambda$ = 1064 nm) and relativistic electrons at the NewSUBARU storage ring \cite{Ando_new_sub_design}. The laser Compton scattering (LCS) photons resulted in narrowly distributed, pencil-like beams. The experiment was set up at BL01, situated at the end of one of the two 14 m long straight sections of the storage ring. Electrons are injected into the ring at $\simeq$ 1.0 GeV and can be decelerated down to $\simeq$ 0.5 GeV or accelerated up to $\simeq$ 1.5 GeV. The energy of the photon beam is varied by changing the energy of the electron beam, rather than the wavelength of the laser photons.
The photon beam was directed at a $^{89}$Y target with areal density of 1.873 g/cm$^2$. The High Efficiency Neutron Detector, based upon the ring-ratio technique developed by Berman \textit{et al.} \cite{berman1967}, was used to detect the neutrons emitted from the $(\gamma,n)$ channel. An $8^{\prime\prime}\times 12^{\prime\prime}$ NaI(Tl) scintillator detector was placed behind the target and the neutron detector, directly in the beam-line, to continuously monitor the number of photons per beam-bunch. For neutron detection the signals were read out from the detector using a combined amplifier and discrimination module, and the number of neutrons detected was counted with a scaler unit. The schematic layout of the setup is provided in Fig. \ref{fig:gacko}. Further details on the setup and analysis are provided in what follows.
\begin{figure}[tb]
\includegraphics[width=0.45\textwidth]{LaserSetup.png}
\caption{A schematic illustration of the experimental setup, including the laser-electron collision. The angle between the laser photon and electron is added for illustration only; at NewSUBARU the laser beam approaches the electron beam head-on.\label{fig:gacko}}
\end{figure}
\subsection{Determining $N_{\gamma}$ and the beam profile}
The LCS photons are produced in head-on collisions between laser photons and electrons. The energy of a photon emitted after scattering off an electron is given by the following relation
\begin{equation} \label{eq:comptonscatt}
E_{\gamma} = \frac{4{\gamma}^2E_L}{1+(\gamma \theta)^2 + 4\gamma E_L/(mc^2)}
\end{equation}
where $E_L$ is the laser photon energy, $\gamma$ is the relativistic factor $\gamma = E_e/mc^2 = 1/\sqrt{1-\beta^2}$ where $E_e$ is the incident electron energy and $mc^2 = 0.511$ MeV is the electron energy at rest, $\beta=v/c$ where $v$ is the velocity of the electrons and $c$ is the speed of light and $\theta$ is the scattering angle of the scattered photon relative to the electron beam axis. The energy spread of the photon beam is mainly due to the electron beam emittance and the angular divergence of the backscattered photon beam, the latter of which was limited by two lead collimators placed between the laser-electron interaction point and the experimental station at BL01.
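As a simple numerical illustration of Eq. \ref{eq:comptonscatt}, the
following Python sketch evaluates the back-scattered photon energy for a
head-on collision; the electron energy used in the example is an
illustrative value rather than a particular setting from this experiment.
\begin{verbatim}
MEC2 = 0.511  # electron rest energy in MeV

def lcs_photon_energy(e_e, e_laser_ev, theta=0.0):
    """Scattered photon energy (MeV) for electron energy e_e (MeV), laser
    photon energy e_laser_ev (eV) and scattering angle theta (rad)."""
    gamma = e_e / MEC2
    e_l = e_laser_ev * 1.0e-6              # eV -> MeV
    return 4.0 * gamma**2 * e_l / (1.0 + (gamma * theta)**2
                                   + 4.0 * gamma * e_l / MEC2)

# 1064 nm photons (about 1.165 eV) on 982 MeV electrons, on axis:
e_max = lcs_photon_energy(982.0, 1.165)    # approximately 16.9 MeV
\end{verbatim}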
The laser beam was provided by a Q-switched Nd:YVO$_4$ laser with wavelength $\lambda=1064$ nm and a maximum power of 35 W. The laser was operated at an internal frequency of 20 kHz and was gated with external switching gates at 10 Hz, providing a macroscopic time structure of 80 ms beam-on and 20 ms beam-off. The beam-off part of the macroscopic time structure was used as a gate to generate background spectra during the runs.
The energy of the electron beam, and consequently the maximum energy of the photon beam, was calculated from the nominal electron energy setting of the storage ring using distinct calibration coefficients for the beam energy for nominal energies 974 MeV $\leq E_0 \leq$ 1250 MeV \cite{Shima2014} and for 500 MeV $\leq E_0 <$ 974 MeV \cite{Uts2014}. This calibration has an accuracy of the order of $10^{-5}$.
A total of 12 photon beam energies ranging from 11.6 to 16.0 MeV were provided by decelerating the electrons and 4 photon beam energies in the range of 17.0 - 20.0 MeV were provided by accelerating the injected electrons. For comparison, $S_n=11.474$ MeV and $S_{2n}=20.8257$ MeV for $^{89}$Y, meaning that we probed the full range of the exclusive $(\gamma,n)$ channel. The $(\gamma,np)$ channel opens at 18.190 MeV. The threshold for $(\gamma,p)$ is at 7.076 MeV, but we assume throughout our analysis that any effect of this channel can be neglected since the $(\gamma,n)$ channel dominates.
After every change of electron energy (and thus photon beam energy), the alignment of the setup was checked by inspecting the light spot produced by the synchrotron radiation from NewSUBARU, ensuring that it was centered on the neutron detector and the NaI(Tl) scintillator detector at the end of the beam-line.
To determine the energy profile of the photon beam, a $\gamma$-spectrum was measured at the beginning and end of each run with a given energy. For this purpose a $3.5^{\prime\prime}\times 4.0^{\prime\prime}$ LaBr$_3$(Ce) detector was placed directly in front of the photon beam. To avoid pile-up in this detector, the laser power was set to its lowest setting and a 2 cm thick lead slab was placed in front of the scintillator to attenuate the beam to $\ll$ 1 photon per bunch to avoid multi-photon events. A time gate was used to take a background spectrum in parallel by gating on the laser-off time window.
To obtain the actual energy profile of the incident photon beam, the response function of the LaBr$_3$(Ce) scintillator detector must be taken into account. For this purpose a simulation package has been developed for simulating the response of the LaBr$_3$(Ce) detector to the photon beam using the framework of GEANT4 9.6\cite{geant1,geant2,geant3,thesisioana}. The interaction between the laser photons and electrons, as well as the transport of photons back into the experimental hall and the LaBr$_3$(Ce) scintillator detector, are simulated with the GEANT4 package. The simulations include beam-line elements such as the vacuum tube and the collimators. The emittance parameters of the electron beam are varied by hand, starting at the emittance values of Ref.\cite{Horikawa2010} and varying the emittance ellipse parameters within the typical deviations measured for the NewSUBARU storage ring, until a good agreement between the simulated spectrum and the experimental spectrum has been obtained. The simulated incident photon beam distribution that provides the best agreement with the experimental spectrum is accepted as the energy distribution of the photon beam. This simulation procedure was repeated for each beam energy, as the emittance of the electron beam will change as the energy of the beam is changed and also as a function of storage time in the ring.
Since both the electrons in the storage ring and the laser photons are packed in small bunches due to the microstructure of the colliding beams, the photon beam resulting from collisions is also bunched. The electron beam is bunched at 500 MHz with a bunch width of 60 ps and the laser beam at 20 kHz with a bunch width of 60 ns. Consequently, the photon beam resulting from collisions has the same bunch-structure as that of the laser photons. The photons passing through the target without interacting are detected in the NaI(Tl) detector behind the neutron detector. Since photons within a given 60 ns bunch cannot be resolved in time by the NaI(Tl) scintillation detector, the signals pile up, generating a pile-up (or multi-photon) spectrum. From the shape of the measured pile-up spectrum, the mean number of photons per bunch was deduced, and consequently the total number of photons, $N_{\gamma}$, could be calculated \cite{Kondo2011} according to the following equation
\begin{equation}
N_{\gamma} = \frac{\langle ch \rangle_{\textrm{pile-up}}}{\langle ch \rangle_{\textrm{single}}}(\sum n_i)_{\textrm{pile-up}}
\end{equation}
where $\langle ch \rangle_{\textrm{pile-up}}$ is the mean channel of the pile-up spectrum, $\langle ch \rangle_{\textrm{single}}$ the mean channel of the single photon spectrum and $(\sum n_i)_{\textrm{pile-up}}$ the total number of counts, $n_i$, for all channels $i$. A typical single photon and pile-up spectrum is shown in Fig. \ref{fig:pileup}.
\begin{figure}
\includegraphics[width=0.51\textwidth]{Run002Plot.png}
\caption{(Color online) The pile-up and single photon spectrum for $E_{\gamma,\textrm{max}}=14.68$ MeV. In this case the average number of photons per bunch is 3.4 and $N_{\gamma} = 4.32 \times 10^7$. \label{fig:pileup}}
\end{figure}
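The extraction of $N_{\gamma}$ from the measured single-photon and
pile-up spectra can be summarised by the following Python sketch; the
spectra are assumed to be given as arrays of counts per channel, and the
threshold handling is a simplified stand-in for the analysis thresholds
discussed below.
\begin{verbatim}
import numpy as np

def photon_number(pileup, single, threshold=0):
    """Total photon number: the mean multiplicity per bunch,
    <ch>_pileup / <ch>_single, times the number of bunches (the
    integral of the pile-up spectrum above threshold)."""
    ch = np.arange(len(pileup), dtype=float)
    mean_pileup = (np.sum(ch[threshold:] * pileup[threshold:])
                   / np.sum(pileup[threshold:]))
    mean_single = (np.sum(ch[threshold:] * single[threshold:])
                   / np.sum(single[threshold:]))
    return (mean_pileup / mean_single) * np.sum(pileup[threshold:])
\end{verbatim}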
In a recent work by Utsunomiya \textit{et al.}, submitted to Nucl. Instrum. Meth. A, the experimental formula used to determine the mean number of photons per bunch was thoroughly investigated with the Poisson-fitting method. It was shown that the inherent uncertainty of this method for determining the photon flux from the pile-up spectrum (provided that the spectra are free from quenching effects of the photomultiplier tube of the NaI(Tl) detector, as is the case here) is less than 0.1\%. The main contribution to the uncertainty of the pile-up technique is consequently related to experimental conditions leading to ambiguity in the choice of the lower analysis threshold channel and of where to cut off the small 2-photon contribution in the single-photon spectrum. This uncertainty has been estimated to be $\approx$ 1\%.
\subsection{Neutron detection}
As mentioned previously, the neutrons were detected with the high-efficiency 4$\pi$ neutron detector consisting of 20 $^3$He-filled proportional counters embedded in a polyethylene moderator of $36 \times 36 \times 50$ cm$^3$ fully covered by a neutron absorbing material in order to reduce the neutron background. The proportional counters were arranged in three rings of 4, 8, and 8 $^3$He counters placed at distances of 3.8 (ring 1), 7.0 (ring 2), and 10.0 cm (ring 3), respectively from the photon beam axis \cite{Filipescu2014}. The average neutron energy was determined by the ring-ratio technique originally developed by Berman \textit{et al.} \cite{berman1967}, where differential moderation provides a measure of the average energy of the detected neutrons. The discriminator threshold of the counters was adjusted to exclude signals from X-rays and $\gamma$ rays, so that only signals originating from the $^3$He(n,p)$^3$H reaction in the counters were counted. By taking the ratios of counts for detectors in the different rings of detectors, $R_{12}$, $R_{23}$ and $R_{13}$, the average energy of the emitted neutrons was determined. The ring-ratio curves (see Fig. \ref{fig:thecurves}) used to determine the average neutron energy, and thus the detection efficiency, were determined by simulating the response of the detector to monochromatic neutron sources \cite{neutrondet}.
\begin{figure}[h]
\includegraphics[width=0.49\textwidth]{TheCurves.png}
\caption{(Color online) a) The ring-ratio curves for the High Efficiency Neutron Detector and b) the efficiency curves, as a function of the neutron energy, for the three rings and the total efficiency. \label{fig:thecurves}}
\end{figure}
The total neutron detection efficiency is $> 60\%$ for neutrons with energies less than 1 MeV. The detection efficiencies for the neutrons detected in this experiment are shown in panel c of Fig.\ref{fig:neutroneff}. The neutron detection efficiencies of the three rings were recently remeasured using a calibrated $^{252}$Cf source with an emission rate of $2.27 \times 10^4$ s$^{-1}$ with 2.2$\%$ uncertainty at the National Metrology Institute of Japan \cite{Nyhus2015}. Details about the neutron detector can be found in Ref. \cite{neutrondet}. The target sample was kept in an aluminum holder shaped as a cylinder placed at the center of the neutron detector setup. The ring ratios obtained in this experiment are shown in panel a of Fig. \ref{fig:neutroneff} and the corresponding average neutron energies (shown in panel b) were determined using the curves shown in Fig. \ref{fig:thecurves}.
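The inversion of the simulated ring-ratio curves to obtain the mean
neutron energy, and from it the detection efficiency, can be sketched in
Python as below; the tabulated grid values are illustrative placeholders
standing in for the simulated curves of Fig. \ref{fig:thecurves}, not
the actual simulation output.
\begin{verbatim}
import numpy as np

# Illustrative tabulations of the simulated curves (placeholders only):
e_grid   = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 4.0])      # E_n in MeV
r12_grid = np.array([2.6, 2.2, 1.7, 1.4, 1.1, 0.9])       # ring1/ring2 ratio
eff_grid = np.array([0.76, 0.73, 0.66, 0.61, 0.48, 0.33]) # total efficiency

def neutron_energy_from_ratio(r12):
    """Invert the monotonically decreasing ring-ratio curve."""
    return np.interp(r12, r12_grid[::-1], e_grid[::-1])

def total_efficiency(e_n):
    return np.interp(e_n, e_grid, eff_grid)

e_n = neutron_energy_from_ratio(1.55)   # mean neutron energy (MeV)
eps = total_efficiency(e_n)             # efficiency entering the cross section
\end{verbatim}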
\begin{figure}[h]
\includegraphics[width=0.49\textwidth]{RingRatios.png}
\caption{(Color online) a) The ring ratios determined for each photon beam energy used in the current experiment and b) the average values for the neutron energy, $E_n$, used to determine the c) total detection efficiencies, $\epsilon_{n}$, used to determine the total number of neutrons emitted. The values in b) and c) are given without error bars. See the discussion in Sec.\ref{sec:errorprop} for details on uncertainties and error propagation. \label{fig:neutroneff}}
\end{figure}
\subsection{Error propagation}\label{sec:errorprop}
In this work we have performed error propagation
analysis by Monte Carlo sampling. The starting point for determining the neutron detection efficiency is the set of ring ratios. In the Monte Carlo simulations, the measured number of detected neutrons for each ring, both on gate $N_{1,ON}$, $N_{2,ON}$, $N_{3,ON}$ and off gate $N_{1,OFF}$, $N_{2,OFF}$, $N_{3,OFF}$, were varied. The number in the subscript stands for the ring number. We assumed the measured values to vary according to a Gaussian distribution, where the mean value $\mu_{i,A}$ was taken to be the originally measured value, $\mu_{i,A}=N_{i,A}$, and the standard deviation, $\sigma_{i,A}$, was taken to be $\sigma_{i,A}=\sqrt{N_{i,A}}$, where $i$ is the ring number and $A$ is ON or OFF. For each sampling repetition, the three ring ratios, $R_{i,j}=\frac{N_i}{N_j}, i\neq j$, were calculated, where $N_i$ is the number of neutrons after background subtraction. The average neutron energy, $E_n$, and the neutron detection efficiency, $\epsilon_n$, were then obtained by accessing the ring-ratio curves for the neutron detector.
The ring-ratio-curve itself has an uncertainty stemming from the uncertainty in absolute calibration of the efficiency of the detector. This uncertainty was assumed to be the same as the uncertainty of the main calibration point of the efficiency curve and the whole efficiency of the detector was sampled independently from a Gaussian distribution where the mean value was taken to be $\epsilon_n$ and the standard deviation to be $0.022 \epsilon_n$.
The total uncertainty of $N_{\gamma}$ is taken to be $\approx 1.0\%$ in this work. The errors of $N_{\gamma}$ are also assumed to be distributed according to a Gaussian with the mean $\mu=N_{\gamma}$. The number of photons was also sampled independently from the other variables. We did not attempt to quantify the uncertainty of the energy profile determined through GEANT4 simulations. As this error is expected to be small, but the work would be detailed and rather technical, such an investigation is left for future work.
Finally, the deviation for each run was determined by fitting a Gaussian function to the simulated distribution of cross-section values, resulting in the standard errors, SE$_{CS}$, of each run. The results are presented in Tab. \ref{tab:errortab}. As one would expect, and as has been reported in earlier works, see e.g. Ref.~\cite{neutrondet}, the largest errors occur for the photon beam energies closest to $S_n$. This is mainly due to the low neutron statistics caused by the small $(\gamma,$n) cross section close to the particle threshold. The $\pm$ 1 $\sigma$ limits of the unfolded cross section shown in Fig.~\ref{fig:gnfinal}, $\sigma(E_{\gamma})$, were finally obtained by unfolding the monochromatic cross section, $\sigma_{EXP} \pm 1 \textrm{SE}$, where $\textrm{SE}$ is the standard error obtained in the Monte Carlo simulations. The values are also provided in Table \ref{tab:errortab}.
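A schematic Monte Carlo routine implementing the sampling described above is sketched below (Python). The constant \texttt{const} lumps together the target areal density and all other energy-independent factors, which is a simplification made only for this sketch; the two callables are the interpolation helpers of the previous sketch.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def sample_cross_sections(N_on, N_off, N_gamma, const,
                          ratio_to_energy, energy_to_eff, n=10**5):
    """Monte Carlo propagation of counting and normalization uncertainties.

    N_on, N_off     : measured counts per ring (length-3 arrays), beam on/off
    N_gamma         : number of incident photons for the run
    const           : remaining energy-independent factors lumped together
                      (a simplification of this sketch, not the full analysis)
    ratio_to_energy : callable R12 -> <E_n> (simulated ring-ratio curve)
    energy_to_eff   : callable <E_n> -> total detection efficiency
    """
    sigma = np.empty(n)
    for k in range(n):
        on   = rng.normal(N_on,  np.sqrt(N_on))      # mu = N, sigma = sqrt(N)
        off  = rng.normal(N_off, np.sqrt(N_off))
        nsub = on - off                              # background-subtracted counts
        eps  = energy_to_eff(ratio_to_energy(nsub[0] / nsub[1]))
        eps  = rng.normal(eps, 0.022 * eps)          # 2.2% absolute calibration
        ngam = rng.normal(N_gamma, 0.010 * N_gamma)  # ~1.0% photon-flux error
        sigma[k] = const * nsub.sum() / (ngam * eps)
    return sigma    # fit a Gaussian to this distribution to obtain SE_CS
\end{verbatim}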
\begin{table}[tb]
\centering
\caption{The simulated errors of the measured cross sections for the 17 runs, before any corrections for the energy distribution of the photon beam. $E_{nom}$ is the nominal beam energy, $\sigma_{\textrm{exp}}$ the average cross section for the full photon beam distribution and SE$_{CS}$ the standard error of the cross section. See the text for details on the simulations.}
\label{tab:errortab}
\begin{tabular}{llll}
\hline
$E_{nom}$ [MeV]& $\sigma_{\textrm{exp}}$ [mb] & SE$_{CS}$ [mb] & SE$_{CS}$ [\%] \\
\hline
801 & 0.60 & $2.9\cdot10^{-2}$ & 4.8 \\
808 & 0.86 & $6.4\cdot10^{-2}$ & 7.5 \\
808 & 0.88 & $8.0\cdot10^{-2}$ & 9.1 \\
815 & 3.5 & $1.7\cdot10^{-1}$ & 4.9 \\
824 & 7.6 & $2.6\cdot10^{-1}$ & 3.5 \\
832 & 12.3 & $3.1\cdot10^{-1}$ & 3.1 \\
849 & 20.6 & $5.8\cdot10^{-1}$ & 2.8 \\
882 & 41.3 & 1.1 & 2.7 \\
904 & 62.0 & 1.6 & 2.7 \\
946 & 126.4 & 3.3 & 2.6 \\
976 & 165.3 & 4.2 & 2.6 \\
991 & 172.8 & 4.4 & 2.6 \\
1006 & 156.3 & 4.2 & 2.7 \\
1020 & 143.5 & 3.8 & 2.7 \\
1034 &125.1 & 3.4 & 2.7 \\
1047 & 116.5 & 3.2 & 2.7 \\
1061 & 106.0 & 2.9 & 2.8 \\
\hline
\end{tabular}
\end{table}
\subsection{Correction for photon beam energy profile}
A first approximation for the cross section can be obtained by using the maximum photon beam energy, $E_{\gamma,\textrm{max},i}$, for a given electron beam energy $E_{e,i}$ and assuming that the photon beam is monochromatic. These values are provided in Table \ref{tab:errortab}. This measured quantity, which we from now on call $\sigma_{\textrm{exp}}$ and which is measured for $E_{\gamma,\textrm{max}, i}$, represents the integrated cross section for the whole range of photon beam energies from $S_n$ to $E_{\gamma,\textrm{max}, i}$. To obtain $\sigma(E_{\gamma})$, the energy profile must be taken into account. The photon beam profiles for this experiment, as determined using GEANT4 simulations, are shown in Fig. \ref{fig:allbeams}.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{AllGammaBeams.pdf}
\caption{The incident photon beam energy distributions for all runs, as determined by GEANT4 simulations. The total area of the beam profiles have been normalized to 1.\label{fig:allbeams}}
\end{figure}
A recently developed procedure has been used to determine the cross section as a function of photon energy, $\sigma(E_{\gamma})$, by unfolding the integrated cross-section values measured for each photon beam energy, $\sigma_f$, with the simulated beam profiles
\begin{equation}
\sigma_f = \int_{S_n}^{E_{\gamma, \textrm{max}}} n(E_{\gamma})\sigma(E_{\gamma})dE_{\gamma}
\end{equation}
where $n(E_{\gamma})$ is the simulated photon beam profile. The unfolding method we have developed is inspired by the well-tested unfolding method developed for the Oslo method \cite{unfolding}.
We formulated the problem as a set of linear equations
\begin{equation}
\sigma_{\rm f }=\bf{D}\sigma,
\end{equation}
where the indexes $i$ and $j$ of the matrix element $D_{i,j}$ corresponds to $E_{\gamma,\rm max}$ and $E_{\gamma}$, respectively. The $D_{i,j}$ elements are non-zero for $j$ corresponding to $E_{\gamma}$ values
fulfilling the condition $ S_n - \Delta \leq E_{\gamma} \leq E_{\rm max} + \Delta$, where $\Delta$ is the resolution of the full-energy peak.
Thus, the set of equations is given by
\begin{equation}
\begin{pmatrix}\sigma_{\rm{1}}\\\sigma_{\rm{2}}\\ \vdots \\ \sigma_N \end{pmatrix}_{\rm f}\\\mbox{}=
\begin{pmatrix}D_{ 11} & D_{ 12}& \cdots &\cdots &D_{ 1M} \\ D_{ 21} & D_{ 22}&
\cdots & \cdots &D_{ 2M} \\ \vdots &\vdots & \vdots & \vdots & \vdots \\ D_{ N1} & D_{ N2}& \cdots & \cdots &D_{ NM}\end{pmatrix}
\begin{pmatrix}\sigma_{1}\\\sigma_{2}\\ \vdots \\ \vdots \\\sigma_{M} \end{pmatrix}.
\label{eq:matrise_unfolding}
\end{equation}
Each row of $\bf{D}$ corresponds to a GEANT4 simulated photon
beam profile belonging to a specific measurement characterized by $E_{\gamma,\rm max,i}$.
In this experiment, we measured $N=16$ beam energies, but the beam profile is simulated with $M = 2000$ energy bins. The system of linear equations in Eq.~(\ref{eq:matrise_unfolding}) is therefore underdetermined and $\sigma(E_{\gamma})$ cannot be determined by matrix inversion. In order to find $\sigma(E_{\gamma})$,
we utilize the following iterative algorithm to unfold for the photon beam profile (a schematic implementation is sketched after the list):
\begin{itemize}
\item [1)] As a starting point, we choose, for the 0th iteration, a constant trial function $\sigma^0$.
This initial vector
is multiplied with $\bf{D}$ and we get the 0th folded vector $\sigma^0_{\rm f}= {\bf D} \sigma^{0}$.
\item[2)] The next trial input function, $\sigma^1$, can be established by adding the difference of
the experimentally measured spectrum $\sigma_{\rm{exp}}$ and the folded spectrum $\sigma^0 _{\rm f}$,
to $\sigma^0$. In order to be able to add the folded and the input vector together, we first perform a spline
fit on the folded vector, then interpolate with a cubic spline, so that the two vectors have equal dimensions. Our new input vector is:
\begin{equation}
\sigma^1 = \sigma^0 + (\sigma_{\rm{exp}}-\sigma^0 _{\rm f}).
\end{equation}
\item[3)] The steps 1) and 2) are iterated $i$ times giving
\begin{eqnarray}
\sigma^i_{\rm f} &=& {\bf D} \sigma^{i}
\\
\sigma^{i+1} &=& \sigma^i + (\sigma_{\rm{exp}}-\sigma^i _{\rm f})
\end{eqnarray}
until convergence is achieved. This means that
$\sigma^{i+1}_{\rm f} \approx \sigma_{\rm exp}$ within the statistical errors.
In order to quantitatively check convergence, we calculate the reduced $\chi^2$ of $\sigma^{i+1}_{\rm f}$ and
$\sigma_{\rm{exp}}$ after each iteration.
\item[4)] Finally, an energy-dependent smoothing was applied. No structures finer than the full width at half maximum of the photon beam can be expected to be resolved, and such structures are thus removed by the smoothing.
\end{itemize}
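A schematic implementation of this folding iteration is shown below (Python). The array names are ours, the runs taken at the same beam energy are assumed to have been merged so that $E_{\gamma,\rm max}$ is strictly increasing, and the final energy-dependent smoothing of step 4) is omitted.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def unfold(sigma_exp, sigma_err, D, E_max, E_grid, n_iter=20):
    """Iterative unfolding of the photon-beam profile (schematic).

    sigma_exp : measured 'monochromatic' cross sections, one per beam energy (N)
    sigma_err : their standard errors (used in the reduced chi^2 check)
    D         : N x M matrix of normalized, simulated beam profiles
    E_max     : the N maximum beam energies (must be strictly increasing)
    E_grid    : the M gamma-energy bins on which sigma(E_gamma) is defined
    """
    sigma = np.full(len(E_grid), sigma_exp.mean())       # 0th trial: constant
    for i in range(n_iter):
        folded = D @ sigma                               # sigma_f^i, length N
        # cubic-spline interpolation so the two vectors have equal dimensions
        diff = CubicSpline(E_max, sigma_exp - folded)(E_grid)
        sigma = sigma + diff                             # next trial function
        chi2 = np.sum(((D @ sigma - sigma_exp) / sigma_err)**2) / len(sigma_exp)
        if chi2 < 1.0:                                   # convergence criterion
            break
    return sigma
\end{verbatim}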
In this work we needed 6 iterations to obtain convergence within the statistical uncertainties. The final result (Table~\ref{tab:gntab}) is shown in Fig.~\ref{fig:gnfinal} of Sec.~\ref{sec:data}, where we compare with ($\gamma$,n) cross-section data from the previous works of Berman \textit{et al.} \cite{berman1967} and Lepretre \textit{et al.} \cite{lepretre1971}.
\begin{table}[tb]
\centering
\caption{The final, unfolded cross section $\sigma$ evaluated at the maximum photon beam energy, $E_{\gamma,\textrm{max}}$, of each run, and the standard error, SE$_{CS}$, of the unfolded cross section also evaluated at $E_{\gamma,\textrm{max}}$. The number of digits for $E_{\gamma, \textrm{max}}$ indicates how well the beam profile has been determined.}
\label{tab:gntab}
\begin{tabular}{lll}
\hline
$E_{\gamma, \textrm{max}}$ [MeV]& $\sigma$ [mb] & SE$_{CS}$ [mb]\\
\hline
11.80 & 3.1 & 0.2 \\
12.00 & 8.4 & 0.4 \\
12.26 & 15.7 & 0.5 \\
12.50 & 24.5 & 0.8 \\
13.00 & 30.4 & 0.9 \\
14.00 & 59.5 & 1.6 \\
14.68 & 90.4 & 2.4 \\
16.02 & 177.8 & 4.6 \\
17.00 & 214.9 & 5.6 \\
17.50 & 191.2 & 5.0 \\
18.02 & 139.5 & 3.8 \\
18.50 & 107.0 & 2.9 \\
19.01 & 83.6 & 2.3 \\
19.48 & 79.7 & 2.2 \\
20.00 & 67.2 & 1.9 \\
\hline
\end{tabular}
\end{table}
\section{Particle-$\gamma$ data on $^{90}$Y: Setup and method}
The experiment probing the $\gamma$SF below $S_{\textrm{n}}$ was performed at the Oslo Cyclotron Laboratory (OCL), utilizing a deuteron beam of 13 MeV. The beam impinged on a natural $^{89}$Y target with a thickness of 2.25 mg/cm$^2$. Details about the experimental setup and the analysis of the data are provided in Ref. \cite{guttormsen_yttrium_2014}. Particle-$\gamma$ coincidences were measured with the particle-telescope system SiRi~\cite{Guttormsen2011168} and the NaI(Tl) scintillator array CACTUS~\cite{cactus} at OCL. The (d,p) channel of the experiment was selected using the $\Delta E-E$ technique. From the coincidence data, the primary $\gamma$-ray spectra, as shown in Fig.~\ref{fig:primary}, for each excitation energy, $E_x$, were extracted using the iterative method described in Ref. \cite{schiller2000}. The primary spectra represent the distribution of the first emitted $\gamma$-rays in cascades from a given excitation energy range. The $\gamma$ transmission coefficient, $\mathfrak{T}(E_{\gamma})$, is assumed to depend only upon the energy of the emitted primary $\gamma$-ray, in keeping with the Brink hypothesis \cite{Brinkthesis,GuttormsenPRL2016}. In that case, the primary matrix can be factorized into two multiplicative functions as follows
\begin{equation} \label{eq:rhotausep}
P(E_{\gamma},E_x) \propto \rho(E_x - E_{\gamma}) \mathfrak{T}(E_{\gamma}),
\end{equation}
where $ \rho(E_x - E_{\gamma})$ is the nuclear level density at the excitation energy of the nucleus after a $\gamma$-ray with energy $E_{\gamma}$ has been emitted and $\mathfrak{T}(E_{\gamma})$ is the transmission coefficient.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.\columnwidth]{fg_matrix_90Y.pdf}
\caption{(Color online) The primary $\gamma$-ray spectra as function of excitation energy, $E_x$, for the ($d,p\gamma$)$^{90}$Y data set. The dashed lines indicate the region used in the further analysis. \label{fig:primary}}
\end{center}
\end{figure}
\subsection{Extraction of level density and $\mathfrak{T}(E_{\gamma})$}
\label{sec:exp}
From the distribution of primary $\gamma$-rays as function of excitation energy,
we get simultaneously information on both the nuclear level density (NLD) and
$\gamma$-transmission coefficient~\cite{schiller2000}.
The limits for extraction used in this work are: $E_\gamma^{min}$ = 1.51 MeV, $E_x^{min}$ = 3.67 MeV, and $E_x^{max}$ = 7.84 MeV. Although $E_x^{max}$ is higher than the neutron separation energy $S_n = 6.857$ MeV by approximately 1 MeV, we ensure that we are not using gamma spectra contaminated with gamma decay events from the ($d,pn\gamma$)$^{89}$Y channel by setting $E_\gamma^{min}$ = 1.51 MeV. The $E_x,E_\gamma$ matrix is previously shown in Ref.~\cite{guttormsen_yttrium_2014}.
The reduced $\chi^2$ obtained in the fit is 2.8. Note that we do not attempt to
correct for Porter-Thomas fluctuations~\cite{porterthomas1956}, which are expected to be significant for nuclei with low level density. The excitation-energy resolution is $\approx 120$ keV (FWHM), determined from the width of the ground-state proton peak.
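To illustrate the extraction, a schematic least-squares fit of the factorization in Eq.~(\ref{eq:rhotausep}) is sketched below (Python). This is only an outline of the $\chi^2$ minimization of Ref.~\cite{schiller2000}; the binning, array names and the mapping of final energies onto the excitation-energy grid are simplifying assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def fit_rho_T(P, P_err, E_x, E_g):
    """Schematic fit of P(E_x, E_gamma) ~ rho(E_x - E_gamma) * T(E_gamma).

    P, P_err : primary gamma-ray matrix and its uncertainties, shape (nx, ng)
    E_x, E_g : excitation- and gamma-energy bin centres (equal bin width)
    Only the functional forms are determined; slope and absolute scale
    remain undetermined and must be fixed by the normalization procedure.
    """
    nx, ng = P.shape
    def model(p):
        rho, T = np.exp(p[:nx]), np.exp(p[nx:])   # positivity via log-parameters
        M = np.zeros_like(P)
        for i, Ex in enumerate(E_x):
            for j, Eg in enumerate(E_g):
                if Eg <= Ex:
                    k = np.argmin(np.abs(E_x - (Ex - Eg)))  # final-energy bin
                    M[i, j] = rho[k] * T[j]
            if M[i].sum() > 0:
                M[i] /= M[i].sum()                 # each E_x bin is normalized
        return M
    def residuals(p):
        return ((model(p) - P) / np.where(P_err > 0, P_err, 1.0)).ravel()
    res = least_squares(residuals, np.zeros(nx + ng))
    return np.exp(res.x[:nx]), np.exp(res.x[nx:])
\end{verbatim}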
\begin{figure*}[!hbt]
\begin{center}
\includegraphics[clip,width=1.8\columnwidth]{does_it_work_90Y_OCL.pdf}
\caption{(Color online). Data of primary $\gamma$ rays for several excitation-energy gates compared to the calculated result using the extracted $ \rho(E_x - E_{\gamma})$ and $\mathfrak{T}(E_{\gamma})$.}
\label{fig:doesitwork}
\end{center}
\end{figure*}
In Fig.~\ref{fig:doesitwork}, we test how well the functions $\rho(E_x-E_\gamma)$ and $\mathcal{T}(E_\gamma)$ extracted from the whole region within $E_\gamma^{min},E_x^{min},E_x^{max}$ reproduce \textit{individual} primary spectra from 127-keV $E_x$ bins. In general, the product $\rho \times \mathcal{T}$ reproduces the data points well. Some data points do however deviate from the product by several orders of magnitude. This is likely to be due to Porter-Thomas fluctuations of transitions to individual or a few levels, as mentioned above. Note that the error bars in Fig.~\ref{fig:doesitwork} include statistical errors and systematic errors from the unfolding procedure and the extraction of primary $\gamma$ rays~\cite{schiller2000}.
\subsection{Normalization of level density and $\gamma$-ray strength function}
\label{sec:norm}
As only the functional form is uniquely determined through the above mentioned fit procedure, the common slope and the absolute scales of the NLD and $\gamma$-transmission coefficient, respectively, are found by normalizing to auxiliary data.
\subsubsection{Level density}
\label{sec:nld}
For the level density, we normalize to known, discrete levels~\cite{NNDC} at low excitation energy, where the level scheme is considered complete. In the case of $^{90}$Y, we normalize to the discrete levels (binned in 127-keV excitation-energy bins as our data points) for the range $E_x = 0.88 - 1.89$ MeV.
Close to the neutron separation energy, $S_n$, we utilize neutron-resonance data for
estimating the total level density $\rho(S_n)$ at that energy. For $^{90}$Y, we
take the average $s$-wave resonance spacing $D_0$ from Ref.~\cite{mughabghab2006} of 4790(300) eV. To calculate $\rho(S_n)$ for all spins, not just the spins reached via $s$-wave
neutron capture, we make use of the Hartree-Fock-Bogoliubov plus combinatorial
(HFB+comb.) calculations of Goriely et al.~\cite{goriely2008} tuned to reproduce the $D_0$ value at $S_n$, using a shift $\delta$ and a slope correction $\alpha$ (see Eq.~(9) in Ref.~\cite{goriely2008}).
We note that the spin distribution of these calculations is fully compatible with the average spin $\left< J \right>_{\mathrm{exp}} \approx 3.4$ at low $E_x \approx 1$ MeV for $^{90}$Y.
We take the lower limit to be the highest value of $D_0$ (corresponding to the lowest level density) and vice versa, see Table~\ref{tab:par1}, and propagate the errors quadratically.
Note that the previous normalization of $^{90}$Y in Ref.~\cite{guttormsen_yttrium_2014} is fully compatible with the lower limit of the present normalization.
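For clarity, the adjustment of the HFB+comb. level density with the shift $\delta$ and slope correction $\alpha$ can be written as $\rho(E)=\rho_{\mathrm{HFB}}(E-\delta)\exp\left(\alpha\sqrt{E-\delta}\right)$, following Eq.~(9) of Ref.~\cite{goriely2008}. A minimal sketch of this scaling is given below (Python); the callable \texttt{rho\_tab} interpolating the tabulated HFB+comb. table is assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

def normalized_hfb_nld(E, rho_hfb, alpha, delta):
    """Adjusted HFB+comb. level density, following Eq. (9) of Goriely et al.:
    rho(E) = rho_HFB(E - delta) * exp(alpha * sqrt(E - delta))."""
    U = np.maximum(E - delta, 0.0)
    return rho_hfb(U) * np.exp(alpha * np.sqrt(U))

# 'Middle' normalization of Table par1 (alpha in MeV^-1/2, delta in MeV):
# rho_tab = ...  # interpolation of the tabulated HFB+comb. NLD for 90Y
# rho_Sn  = normalized_hfb_nld(6.857, rho_tab, alpha=-0.3527, delta=-0.269)
\end{verbatim}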
\begin{table}[bt]
\begin{center}
\caption{NLD and $\gamma$SF normalization parameters for $^{90}$Y. The parameters $\alpha$
and $\delta$ are used for matching the HFB+comb. calculations with the $D_0$ value as shown
in Eq.~(9) of Ref.~\cite{goriely2008}.}
\begin{tabular}{lcccccc}
\hline
\hline
& $D_0$ & $\rho(S_n)$ & $\rho_{\mathrm{red}}(S_n)$ & $\alpha$ &$\delta$ & $\left< \Gamma_{\gamma 0}\right>$ \\
& (eV) & (MeV$^{-1}$) & (MeV$^{-1}$) &(MeV$^{-1/2}$)& (MeV) & (meV) \\
\hline
Low & 5090 & 4924 & 3689 &-0.3763 & -0.269 & 101 \\
Middle & 4790 & 5232 & 3920 &-0.3527 & -0.269 & 168 \\
High & 4490 & 5582 & 4182 &-0.3275 & -0.269 & 302 \\
\hline
\hline
\end{tabular}
\label{tab:par1}
\end{center}
\end{table}
\begin{figure}[bt]
\begin{center}
\includegraphics[clip,width=1.\columnwidth]{nld_90Y_HFB08.pdf}
\caption{(Color online) Normalized level density of $^{90}$Y (see text).
The data points within the arrows are used for normalization. Error bars include statistical errors, systematic errors from the unfolding and extraction of the primary $\gamma$-ray spectra, and systematic errors from the normalization to the $D_0$ value.}
\label{fig:leveldensity90Y}
\end{center}
\end{figure}
Because the $^{89}$Y($d,p$) reaction does not populate high spins, and the slope of the NLD
is intertwined with the slope of the $\gamma$-transmission coefficient, we also estimate
a reduced NLD corresponding to a spin range representative of the one populated in the
experiment.
From levels populated in previous ($d,p$) experiments~\cite{NNDC}, in particular the levels given in Table 3 of Ref.~\cite{michaelsenNPA} and
levels identified in the present experiment, we estimate the spin range of the directly populated levels to be $J\approx 0-6$.
Further, we take into account that our NLD is determined \textit{after} emission of one dipole transition carrying $L=1$, so that the spin range of the final levels is $J\approx 0-7$.
Using the spin distribution of the HFB+comb. calculations at $S_n$, the NLD for the spin range $J= 0-7$ corresponds to $\approx 75$\% of the total NLD for all spins at $S_n$.
This reduced NLD, $\rho_{\mathrm{red}}(S_n)$, (see Table~\ref{tab:par1}) gives us the slope for the $\gamma$-transmission coefficient.
The normalized level density is shown in Fig.~\ref{fig:leveldensity90Y}.
This minor reduction of the slope of the $\gamma$-transmission coefficient is not crucial for the further analysis. In fact, the $\gamma$-transmission coefficient obtained by assuming a full coverage of all spins available in the HFB+comb. calculations is well within the final systematic errors.
\subsubsection{$\gamma$-ray strength function}
\label{sec:gsf}
The slope of the $\gamma$-transmission coefficient is determined
by normalizing the NLD to the reduced $\rho_{\mathrm{red}}(S_n)$ as described in the previous section. The absolute scale was found by use of the total, average radiative width
$\left< \Gamma_{\gamma0} \right>$ as described in Ref.~\cite{voinov2001}:
Ref.~\cite{mughabghab2006} gives for $^{90}$Y a value $\left< \Gamma_{\gamma0} \right> = 134$ meV, without any estimate of the uncertainty. By closer inspection of the individual $\Gamma_{\gamma0}$ values listed, it is clear that a rather wide range of possible $\left< \Gamma_{\gamma0} \right>$ can be estimated. The values from Ref.~\cite{mughabghab2006} are provided in Table~\ref{tab:gamma}.
\begin{table}[ht]
\begin{center}
\caption{Individual $\Gamma_{\gamma0}$ widths for $^{90}$Y as listed in Ref.~\cite{mughabghab2006}.
As $^{89}$Y has $J^\pi = 1/2^-$ in the ground state, $s$-wave capture gives $J=0^-,1^-$ levels in $^{90}$Y.}
\begin{tabular}{rcr}
\hline
\hline
$E_n$ & $J$ & $\Gamma_{\gamma0}$ \\
(keV) & & (meV) \\
\hline
$-0.251$ & 1 & 126 \\
$2.598$ & 1 & 131(10) \\
$7.498$ & 0 & 116(12) \\
$11.59$ & 0 & 542(64) \\
$13.78$ & 1 & 109(11) \\
$15.23$ & 0 & 92(9) \\
$26.40$ & 0 & 128(15) \\
$26.94$ & [1] & 106(10) \\
$29.65$ & 1 & 151(15) \\
$38.06$ & 1 & 174(20) \\
\hline
\hline
\end{tabular}
\label{tab:gamma}
\end{center}
\end{table}
Averaging all the values in Table~\ref{tab:gamma} we obtain
$\left< \Gamma_{\gamma0} \right> = 168$ meV, with an unbiased standard deviation of 134 meV. However, by removing the abnormal 11.59-keV resonance with $\Gamma_{\gamma0} = 542(64)$ meV from the average, we obtain $\left< \Gamma_{\gamma0} \right> = 126(25)$ meV.
Based on these considerations we estimate $\left< \Gamma_{\gamma0} \right> = 168_{-67}^{+134}$ meV, so that the lower(upper) limit is given by the results excluding(including) the resonance with the largest width (see also Table~\ref{tab:gamma}).
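The quoted averages follow directly from the widths of Table~\ref{tab:gamma}, as the following few lines of Python verify.
\begin{verbatim}
import numpy as np

Gg = np.array([126., 131., 116., 542., 109., 92., 128., 106., 151., 174.])  # meV
print(Gg.mean())                # ~168 meV (average of all resonances)
print(Gg.std(ddof=1))           # ~134 meV (unbiased standard deviation)
print(np.delete(Gg, 3).mean())  # ~126 meV (excluding the 11.59-keV resonance)
\end{verbatim}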
From the normalized transmission coefficient, the $\gamma$SF is determined by
\begin{equation}
f(E_\gamma) = \frac{\mathcal{T}(E_{\gamma})}{2\pi E_\gamma^3},
\end{equation}
since dipole radiation dominates in the considered $E_x$ region~\cite{kopecky2017,larsen2013}.
The normalized $\gamma$SF is shown in Fig.~\ref{fig:gsfdata}.
\begin{figure}[!t]
\begin{center}
\includegraphics[clip,width=1.\columnwidth]{plotstrengthdata_Y.pdf}
\caption{(Color online) Normalized $\gamma$SF of $^{90}$Y
shown together with the renormalized $^{89}$Y data from Ref.~\cite{PhysRevC.93.045810}. The error bars include statistical errors, systematic uncertainties from the unfolding and extraction of primary $\gamma$-ray spectra, and systematic uncertainties from the normalization.}
\label{fig:gsfdata}
\end{center}
\end{figure}
\subsection{Re-evaluation of the $^{89}$Y level density and $\gamma$-ray strength function}
\label{sec:gsf89Y}
For completeness, we have also re-evaluated the normalization of the $^{89}$Y data from Ref.~\cite{PhysRevC.93.045810}.
Also for $^{89}$Y,
the HFB+comb. calculations of Ref.~\cite{goriely2008} reproduce well
the average spin at low excitation energies, $\left< J \right>_{\mathrm{exp}} \approx 3.3$ around $E_x \approx 2.2$ MeV.
For the re-normalization, we use the HFB+comb. calculations with the following parameters, keeping the shift $\delta = 0$ MeV in all cases: middle normalization with $D_0 = 121$ eV at $S_n = 11.482$ MeV and $\alpha = 0.0$ MeV$^{-1/2}$; high normalization with $D_0 = 100$ eV and $\alpha = 0.0551$ MeV$^{-1/2}$; low normalization with $D_0 = 143$ eV and $\alpha = -0.0505$ MeV$^{-1/2}$.
For the normalization of the $\gamma$SF of $^{89}$Y, we have considered all available
$\left< \Gamma_{\gamma 0} \right>$ data for Rb, Sr, Y and Zr isotopes from Ref.~\cite{mughabghab2006}.
For $^{91,92}$Zr, we use the adopted values from Ref.~\cite{guttormsen2017}.
As noted for $^{90}$Y, the $\Gamma_{\gamma 0}$ values for individual $s$-wave resonances vary considerably, as do the estimated averages.
With the aim of capturing the spread in the
$\left< \Gamma_{\gamma 0} \right>$ data, we have fitted simple polynomials to the available data as shown in Fig.~\ref{fig:Ggfit}.
From these fits, we estimate $\left< \Gamma_{\gamma 0} \right> = 279_{-129}^{+220}$ meV for $^{89}$Y, where the central value is taken as the average of the linear and constant fit, the lower limit corresponds to the one estimated in Ref.~\cite{PhysRevC.93.045810}, and the upper limit is taken as 79\% above the central value (as estimated for $^{90}$Y).
The present central value is considerably higher and with larger errors than the previous value from Ref.~\cite{PhysRevC.93.045810} of 150(38) meV.
The resulting renormalized $\gamma$SF of $^{89}$Y is shown in Fig.~\ref{fig:gsfdata}.
\begin{figure}[!t]
\begin{center}
\includegraphics[clip,width=1.\columnwidth]{systGamma_mass90_Mughabghab.pdf}
\caption{(Color online) Fit of available
$\left< \Gamma_{\gamma 0} \right>$ data for Rb, Sr, Y and Zr isotopes taken from Refs.~\cite{mughabghab2006,guttormsen2017}.}
\label{fig:Ggfit}
\end{center}
\end{figure}
\section{Comparison of data}
\label{sec:data}
Our results for the $^{89}$Y($\gamma$,n) cross section are compared to existing data for this reaction in Fig.~\ref{fig:gnfinal}. The measurements of Berman \textit{et al.} \cite{berman1967} and Lepretre \textit{et al.} \cite{lepretre1971} show a significant discrepancy over the whole energy range probed by the two experimental campaigns. Our measured cross section represents an intermediate value, but is somewhat closer to the result of Lepretre \textit{et al.} for $E_{\gamma}$ less than approximately 18 MeV. For $E_{\gamma}>$ 18 MeV, our results are compatible with the results of Lepretre \textit{et al.}
\begin{figure}
\includegraphics[width=0.48\textwidth]{UpperLowerFinal.png}
\caption{The variation in the cross section under the monochromatic assumption was determined by the simulation procedure described in the text (for $10^7$ samples). The upper and lower limits on the unfolded cross section correspond to unfolding the monochromatic cross section $\pm 1 \textrm{SE}$, where $\textrm{SE}$ is the standard error found in Sec.~\ref{sec:errorprop}. For comparison, previous experimental results by Berman \textit{et al.} \cite{berman1967} and Lepretre \textit{et al.} \cite{lepretre1971} are also displayed. \label{fig:gnfinal}}
\end{figure}
To compare all available data for $^{89}$Y, the $^{89}$Y($\gamma,n$) cross section data from Refs.~\cite{berman1967,lepretre1971} and the new $^{89}$Y($\gamma,n$) data from this work are converted into $\gamma$SF using the principle of detailed balance~\cite{blatt1952} by the relation~\cite{RIPL}
\begin{equation}
f(E_\gamma) = \frac{\sigma_{(\gamma,n)}(E_\gamma)}{3\pi^2 \hbar^2 c^2 E_\gamma},
\label{eq:crossGSF}
\end{equation}
again assuming that dipole radiation is dominant.
These data are also shown in Fig.~\ref{fig:allgsfdata}.
We note that ($\gamma,n$) cross section data are not a good measure for the $\gamma$SF close to the neutron threshold due to threshold effects of the neutron emission (see, \textit{e.g.}, Ref.~\cite{utsunomiya2009}); most importantly the competition of the neutron channel with the $\gamma$ channels. Hence, the ($\gamma,n$) data closest to $S_n$ are not used in the following.
Eq.~(\ref{eq:crossGSF}) is also used for transforming $^{89}$Y($\gamma,\gamma^\prime$) cross sections from Ref.~\cite{benouaret2009} into $\gamma$SF. All the data are shown together in Fig.~\ref{fig:allgsfdata}.
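The conversion of Eq.~(\ref{eq:crossGSF}) is numerically straightforward; a minimal Python sketch is given below, using the standard value $(\hbar c)^2 = 3.894\times10^{5}$ MeV$^2$\,mb so that $1/(3\pi^2\hbar^2c^2)\approx 8.67\times10^{-8}$ mb$^{-1}$MeV$^{-2}$.
\begin{verbatim}
import numpy as np

HBARC2 = 3.8938e5   # (hbar*c)^2 in MeV^2 mb

def gsf_from_gn(E_gamma, sigma_mb):
    """gamma-ray strength function [MeV^-3] from a (gamma,n) cross section [mb],
    Eq. (crossGSF), assuming pure dipole absorption."""
    return sigma_mb / (3.0 * np.pi**2 * HBARC2 * E_gamma)

# e.g. the 17.00-MeV point of Table gntab: sigma = 214.9(56) mb
print(gsf_from_gn(17.00, 214.9))   # ~1.1e-6 MeV^-3
\end{verbatim}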
\begin{figure}[!t]
\begin{center}
\includegraphics[clip,width=1.\columnwidth]{plotstrengthdata_allY.pdf}
\caption{(Color online) Normalized $\gamma$SF of $^{89}$Y
shown together with new $^{89}$Y($\gamma,n$) data from this work, and $^{89}$Y($\gamma,n$) data from
Refs.~\cite{berman1967,lepretre1971} as well as $^{89}$Y($\gamma,\gamma^\prime$) data from Ref.~\cite{benouaret2009}.}
\label{fig:allgsfdata}
\end{center}
\end{figure}
In general, the $^{89,90}$Y $\gamma$SFs increase as a function of $\gamma$-ray energy.
This is expected as we are measuring the low-energy tail of the giant dipole resonance
(GDR)~\cite{dietrich1988}, which in this case is centered around $E_\gamma \approx 17$ MeV,
and is represented by the photoneutron data.
In the $^{90}$Y Oslo data, we note that there is a peak at $E_\gamma\approx 6.6$ MeV.
This is likely due to strong $M1$ spin-flip transitions for the neutron configuration $\nu(0g_{9/2}^{-1} 0g_{7/2}^1)$.
Such spin-flip transitions have been measured recently
in a photon-scattering experiment on the $N=50$ isotone $^{90}$Zr at the HI$\gamma$S facility~\cite{rus13}. Also, in a previous measurement of the ($n,\gamma$)$^{90}$Y reaction by Raman \textit{et al.}~\cite{raman1981}, it was found that the 2.6-keV resonance decays via (a) strong $M1$ transition(s).
Moreover, we observe an increase at decreasing $\gamma$ energies for $E_\gamma \lesssim 3$ MeV for both $^{89,90}$Y. This feature has been seen in many nuclei since the first observation in the iron isotopes~\cite{voinov2004}, where it was recently shown to also be dominated by dipole transitions~\cite{larsen2013,simon2016,larsen2017}.
The physical mechanism causing the low-energy enhancement is, however, still unclear, despite its presence in many nuclei, with the deformed $^{151,153}$Sm being the heaviest cases so far~\cite{simon2016}.
Within the thermal-continuum quasiparticle random phase approximation (TCQRPA), the low-energy enhancement is explained as being due to $E1$ transitions~\cite{litvinova2013}, while shell-model calculations predict an increase in strength for low-energy $M1$ transitions~\cite{schwengner2013,brown2014,schwengner2017}, even when the $E1$ component is calculated as well~\cite{sieja2017}.
A recent Compton polarization measurement by Jones \textit{et al.} using the GRETINA array~\cite{Jones2018} shows a slight bias towards $M1$ transitions. However, the statistics of the data do not allow one to draw firm conclusions about the source of the low-energy enhancement in $^{56}$Fe. Admixtures of $E1$ and $M1$ transitions cannot be ruled out.
In the following section (Sec. \ref{sec:shell}) we discuss predictions of the $\gamma$SF from shell-model calculations on $^{90}$Y for the quasi-continuum region, as well as recent calculations within the quasi-particle random phase approximation (QRPA) for the $E1$ and $M1$ strength built on the ground state. In addition, we compare with fits using phenomenological models.
\section{Model descriptions of the $\gamma$SF}
\label{sec:shell}
\subsection{Shell-model calculations}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[clip,width=2.\columnwidth]{plotstrengthdata_theory_allY.pdf}
\caption{(Color online) Comparison of data and microscopic calculations of the
dipole $\gamma$SF (a) and phenomenological models (b).}
\label{fig:strength_calc}
\end{center}
\end{figure*}
\begin{table*}[ht]
\begin{center}
\caption{Parameters found from the model fits of $f_{\mathrm{tot}}^1$ to the $\gamma$SF of $^{90}$Y and ($\gamma,n$) data.}
\begin{tabular}{lcclcllclllcc}
\hline
\hline
Norm. & $E_{E1,1}$ & $\Gamma_{E1,1}$& $\sigma_{E1,1}$ & $E_{E1,2}$ & $\Gamma_{E1,2}$ & $\sigma_{E1,2}$ & $T_f$ &$E_{M1}$ & $\Gamma_{M1}$ & $\sigma_{M1}$ & $C$ & $\eta$ \\
& (MeV) & (MeV) & (mb) & (MeV) & (MeV) & (mb) & (MeV) & (MeV) & (MeV) &(mb) & 10$^{-7}$(MeV$^{-3}$) &(MeV$^{-1}$) \\
\hline
Low & 16.1(1) & 3.85(5) & 115(7) & 17.1(1) & 1.99(10) & 153(7) & 0.73(2) & 6.54(3) & 0.56(10) & 0.75(11) & 0.9(2) & 2.1(1) \\
Middle & 16.2(1) & 3.71(6) & 131(8) & 17.1(1) & 1.78(10) & 149(9) & 0.96(2) & 6.56(4) & 1.03(19) & 1.14(16) & 1.5(3) & 2.1(1) \\
High & 16.1(1) & 3.00(5) & 146(10) & 17.1(1) & 1.60(8) & 180(10) & 1.45(3) & 6.60(4) & 1.34(16) & 2.16(20) & 2.8(5) & 2.1(1) \\
\hline
\hline
\end{tabular}
\label{tab:gsfpar}
\end{center}
\end{table*}
The shell-model calculations were performed with the RITSSCHIL code \cite{zwa85} with a model space consisting of the $\pi(0f_{5/2}, 1p_{3/2}, 1p_{1/2}, 0g_{9/2})$ proton orbits and the
$\nu(0g_{9/2}, 1d_{5/2}, 0g_{7/2})$ neutron orbits relative to a $^{68}$Ni core. The same configuration space was also applied in our earlier study of the $M1$ strength functions in $^{94,95,96}$Mo and $^{90}$Zr \cite{schwengner2013}. In the present calculations, two protons were allowed to be lifted from the $(fp)$ shell to the $0g_{9/2}$ orbit and two neutrons from the $0g_{9/2}$ to the $1d_{5/2}$ orbit. This resulted in dimensions up to 29000. The exclusion of an
occupation of the $\nu(0g_{7/2})$ orbit suppresses the spin-flip peak formed mainly by $1^+ \rightarrow 0^+$ transitions with energies, $E_{\gamma}$, around 7 MeV \cite{schwengner2013,rus13}, but turned out to have no significant influence on the low-energy part of the strength function.
The calculations included states with spins from $J$ = 0 to 10 for $^{90}$Y.
For each spin the lowest 40 states were calculated. Reduced transition probabilities $B(M1)$ were
calculated for all transitions from initial to final states with energies $E_f < E_i$ and spins following the usual dipole selection rules. For the minimum and maximum $J_i$, the cases $J_f = J_i - 1$ and $J_f = J_i + 1$, respectively, were excluded. This resulted in more than 32000 $M1$ transitions for each parity $\pi = +$ and $\pi = -$, which were sorted into 100 keV bins according to their transition energy $E_\gamma = E_i - E_f$. The average $B(M1)$ value for one energy bin was obtained as the sum of all $B(M1)$ values divided by the number of transitions within this bin.
The $M1$ strength functions were deduced using the relation
\begin{equation}
f_{M1}(E_i,E_\gamma, J, \pi)=
a\left<B(M1,E_i,E_\gamma, J, \pi)\right>\cdot\rho(E_i, J, \pi).
\end{equation}
This corresponds to the relation given in Ref.~\cite{bartholomew1972}
together with $\Gamma_{M1} = a E_\gamma^{3} B(M1)$, where $a=16\pi/9 (\hbar c)^{-3}$. In practice, the strength functions were calculated by multiplying the $B(M1)$ value of each transition, given in $\mu^2_N$, by $11.5473 \times 10^{-9}$ times the level density $\rho(E_i)$ in MeV$^{-1}$ at the energy of the initial state, as determined by these calculations, deducing averages in energy bins as done for the $\left<B(M1)\right>$ values (see above), and averaging over $J$, $\pi$ and $E_i$.
When calculating the strength functions, gates were set on the excitation
energy $E_i$ that correspond to the ones applied in the analysis of the
experimental data, namely 3.67 - 7.84 MeV (see Sec.~\ref{sec:exp}).
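A schematic version of this binning and averaging procedure is sketched below (Python). The callable \texttt{rho\_of\_Ei} and the array names are assumptions of this sketch; the physics input are the shell-model transition energies and $B(M1)$ values described above.
\begin{verbatim}
import numpy as np

def m1_strength_function(E_i, E_f, B_M1, rho_of_Ei, e_bins, gate=(3.67, 7.84)):
    """Average M1 strength function from shell-model transitions (schematic).

    E_i, E_f  : initial/final energies (MeV) of all calculated transitions
    B_M1      : corresponding B(M1) values in units of mu_N^2
    rho_of_Ei : callable returning the level density (MeV^-1) at E_i
    e_bins    : gamma-energy bin edges (100-keV bins in the text)
    """
    a = 11.5473e-9                              # 16*pi/9 (hbar c)^-3 in these units
    E_g = E_i - E_f
    sel = (E_i >= gate[0]) & (E_i <= gate[1])   # excitation-energy gate as in the data
    f = a * B_M1[sel] * rho_of_Ei(E_i[sel])     # f_M1 for each transition
    idx = np.digitize(E_g[sel], e_bins) - 1
    out = np.zeros(len(e_bins) - 1)
    for b in range(len(out)):
        in_bin = f[idx == b]
        if in_bin.size:
            out[b] = in_bin.mean()              # average over transitions in the bin
    return out
\end{verbatim}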
The resulting $M1$ strength function for $^{90}$Y is shown in Fig.~\ref{fig:strength_calc}. The low-energy
behavior shows an increase at low energies similar to that of the strength functions calculated for the neighboring nuclei $^{94,95,96}$Mo, $^{90}$Zr \cite{schwengner2013} and for $^{56,57}$Fe \cite{brown2014}.
The low-energy enhancement of $M1$ strength is in the shell model picture caused by transitions between the several close-lying states of all considered spins located above the yrast line in
the transitional region to the quasi-continuum of nuclear states. Inspecting the wave functions, one finds large $B(M1)$ values for transitions between states that contain a large component (up to about 50\%) of the same configuration with broken pairs of both protons and neutrons in high-$j$ orbits, whereas states containing only proton excitations or only neutron excitations are not depopulated by strong $M1$ transitions.
The largest $M1$ matrix elements connect configurations with the spins of high-$j$ protons
re-coupled with respect to those of high-$j$ neutrons to the total spin $J_f = J_i, J_i \pm 1$. The corresponding main configurations for negative-parity states in $^{90}$Y are
$\pi(1p_{1/2}^1) \nu(0g_{9/2}^{-1} 1d_{5/2}^2)$ or
$\pi(1p_{1/2}^1) \nu(0g_{9/2}^{-2} 1d_{5/2}^3)$ and by additional proton
excitations within the $(fp)$ shell, i.e.
$\pi[(0f_{5/2}, 1p_{3/2})^{-1} 1p_{1/2}^2] \nu(0g_{9/2}^{-1} 1d_{5/2}^2)$
and also proton excitations over the subshell gap at $Z$ = 40,
$\pi[(0f_{5/2}, 1p_{3/2})^{-1} 1p_{1/2}^0 0g_{9/2}^2]\nu(0g_{9/2}^{-1} 1d_{5/2}^2)$.
The positive-parity states require the excitation of an $(fp)$ proton to the
$0g_{9/2}$ orbit, for example
$\pi(1p_{3/2}^{-1} 1p_{1/2}^1 0g_{9/2}^1) \nu(0g_{9/2}^{-1} 1d_{5/2}^2)$.
The orbits in these configurations have large $g$ factors with opposite signs
for protons and neutrons. Combined with specific relative phases of the proton
and neutron partitions they cause large total magnetic moments.
\subsection{QRPA calculations}
As $E1$ transitions are out of reach within the framework of the shell model in this case,
we have employed recent QRPA calculations based on the D1M Gogny force taken from
Ref.~\cite{martini2016}. These calculations give the $E1$ strength for
one-particle-one-hole excitations built on the ground state only, and are not
necessarily representative of the $E1$ strength in the quasi-continuum.
On the other hand, if the Brink hypothesis~\cite{brink1955} is approximately correct, the obtained $E1$ strength should be a good substitute for the quasi-continuum strength.
Further, also the ground-state $M1$ strength is obtained within the same framework~\cite{goriely2016}. The microscopic calculations including the shell-model results are shown together with the data in Fig.~\ref{fig:strength_calc}.
It is apparent that the QRPA $E1$ strength describes rather well the lower limit of the ($d,p\gamma$)$^{90}$Y data, while the GDR centroid is shifted towards lower $E_\gamma$ with respect to the ($\gamma,n$) data. The QRPA $M1$ strength shows quite a bit of structure with a strong peak around $E_\gamma \approx 7.8$ MeV, consistent with the expected spin-flip transitions, and probably related to the peak seen in the $^{90}$Y data about 1 MeV lower in $E_\gamma$.
As expected, the QRPA $M1$ strength shows no low-energy increase, as it is built on ground-state excitations only. In contrast, the shell-model calculations show a prominent
low-energy increase, although lower in absolute value than the ($d,p\gamma$)$^{90}$Y data. This indicates that the upbend in $^{90}$Y can be understood as relating to transitions between excited states in the quasi-continuum.
\subsection{Phenomenological models}
We have used the phenomenological Generalized Lorentzian (GLo) model~\cite{kopecky_uhl_1990}, with a constant temperature of the final states $T_f$ in
agreement with the Brink hypothesis~\cite{brink1955}. The GLo model is given by
\begin{align}
& f_{\rm GLo}^{E1}(E_{\gamma},T_f) = \frac{1}{3\pi^2\hbar^2c^2}\sigma_{E1}\Gamma_{E1} \times \\ \nonumber
& \left[\frac{ E_{\gamma} \Gamma(E_{\gamma},T_f)}{(E_\gamma^2-E_{E1}^2)^2 + E_{\gamma}^2
\Gamma (E_{\gamma},T_f)^2} + \;0.7\frac{\Gamma(E_{\gamma}=0,T_f)}{E_{E1}^3} \right],
\label{eq:GLO}
\end{align}
with
\begin{equation}
\Gamma(E_{\gamma},T_f) = \frac{\Gamma_{E1}}{E_{E1}^2} (E_{\gamma}^2 + 4\pi^2 T_f^2).
\end{equation}
Here, the parameters $\Gamma_{E1}$, $E_{E1}$ and $\sigma_{E1}$ correspond to the width,
centroid energy, and peak cross section of the GDR respectively.
To simultaneously fit the $^{89}$Y($\gamma,n$) data from this work together with the
($d,p\gamma$)$^{90}$Y data, we have used two GLo functions for the $E1$ strength
with a common temperature $T_f$ together with a Standard Lorentzian (SLo) function for the $M1$ spin-flip resonance, and an exponential function of the form $f_{\mathrm{upbend}}^{M1} = C\exp(-\eta E_{\gamma})$.
Although $^{89}$Y is considered to be a spherical nucleus, where only one GLo function would be assumed to be sufficient to describe the GDR, we find that our ($\gamma,n$) data display significant structures, and that the peak around $E_\gamma \approx 16-17$ MeV is rather flat.
Hence, we introduce two GLo components to better reproduce the ($\gamma,n$) data.
We obtain the total dipole-strength fit function
\begin{equation}
f_{\mathrm{tot}}^1 = f_{\rm GLo1}^{E1} + f_{\rm GLo2}^{E1} + f_{\rm SLo}^{M1} + f_{\mathrm{upbend}}^{M1},
\label{eq:glofit}
\end{equation}
with, in principle, 12 free parameters in the fit.
To restrict the temperature parameter, we first
performed an individual fit of the two GLo components to the present $^{89}$Y($\gamma,n$) data in the range of $E_\gamma = 14.0-18.0$ MeV and the $^{89}$Y($d,p\gamma$)$^{90}$Y data of this work in the range of $E_\gamma = 1.5-7.9$ MeV. From this fit of the $E1$ component, we determine $T_f$ and fix it in the next fit, where we include the $f_{\rm SLo}^{M1}$ and $f_{\mathrm{upbend}}^{M1}$ terms, so that there are in practice 11 free parameters.
We performed three different fits for the lower, middle and upper normalizations.
The obtained parameters from the three fits to the upper, lower and middle $f_{\mathrm{tot}}^{1}$ are listed in Table~\ref{tab:gsfpar}.
We find that the centroids $E_{E1,1}, E_{E1,2}$ are similar regardless of which normalization is used for the $^{90}$Y data, as expected since these centroids are mainly determined by the ($\gamma,n$) data.
The other GLo parameters vary significantly between the fits to the low, middle and high normalizations to compensate for the change in absolute value of the $^{90}$Y data. Further, the $M1$ spin-flip centroid is not sensitive to our choice of normalization, while the width and peak cross section vary according to the low, middle and high normalizations.
The parameters for the exponential fit to the upbend indicate a stable slope of $\eta = 2.1(1)$ MeV$^{-1}$, while the constant $C$ again shows a large spread in accordance with the normalization uncertainties. It is interesting that the $\eta$ parameter is similar to that found for $^{89}$Y: $\eta(^{89}\mathrm{Y}) \approx 2.5$ MeV$^{-1}$~\cite{PhysRevC.93.045810}.
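As an illustration, the total fit function of Eq.~(\ref{eq:glofit}) can be evaluated as sketched below (Python), here with the "middle" parameters of Table~\ref{tab:gsfpar}. The GLo term follows Eq.~(\ref{eq:GLO}); the SLo term is written in the commonly used standard-Lorentzian form, which is an assumption of this sketch since its explicit expression is not spelled out in the text.
\begin{verbatim}
import numpy as np
FACT = 8.674e-8   # 1/(3 pi^2 (hbar c)^2) in mb^-1 MeV^-2

def f_glo(Eg, E0, G0, s0, Tf):
    """Generalized Lorentzian (GLo) E1 strength, Eq. (GLO)."""
    Gk  = G0 / E0**2 * (Eg**2 + 4.0 * np.pi**2 * Tf**2)
    Gk0 = G0 / E0**2 * (4.0 * np.pi**2 * Tf**2)
    return FACT * s0 * G0 * (Eg * Gk / ((Eg**2 - E0**2)**2 + Eg**2 * Gk**2)
                             + 0.7 * Gk0 / E0**3)

def f_slo(Eg, E0, G0, s0):
    """Standard Lorentzian (SLo), used here for the M1 spin-flip resonance."""
    return FACT * s0 * Eg * G0**2 / ((Eg**2 - E0**2)**2 + Eg**2 * G0**2)

def f_tot(Eg, p):
    """Total dipole strength, Eq. (glofit)."""
    return (f_glo(Eg, p['E1'], p['G1'], p['s1'], p['Tf'])
            + f_glo(Eg, p['E2'], p['G2'], p['s2'], p['Tf'])
            + f_slo(Eg, p['EM1'], p['GM1'], p['sM1'])
            + p['C'] * np.exp(-p['eta'] * Eg))

middle = dict(E1=16.2, G1=3.71, s1=131., E2=17.1, G2=1.78, s2=149., Tf=0.96,
              EM1=6.56, GM1=1.03, sM1=1.14, C=1.5e-7, eta=2.1)
print(f_tot(np.array([2.0, 7.0, 17.0]), middle))
\end{verbatim}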
\section{Radiative neutron capture cross section and reaction rate}
\label{sec:talys}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[clip,width=1.9\columnwidth]{cross_section_rate_90Y.pdf}
\caption{(Color online) Calculated $^{89}$Y($n,\gamma$)$^{90}$Y cross sections (a) compared
to data from Refs.~\cite{tolstikov1966,macklin1967,stupegia1968,boldeman1977,poenitz1982,voignier1992},
and the corresponding astrophysical reaction rates (b) compared with BRUSLIB~\cite{BRUSLIB} and
JINA REACLIB (kd02-v06)~\cite{JINA-REACLIB} recommended rates. The shaded bands indicate the 1$\sigma$ uncertainty, including statistical and systematic errors as well as contributions from the width-fluctuation treatment, and the possible contribution from direct capture.}
\label{fig:talyscalc_90Y}
\end{center}
\end{figure*}
We now use the obtained lower, middle and upper normalizations of the $\gamma$SF to constrain the input NLD and $\gamma$SF of $^{90}$Y in the nuclear reaction code TALYS-1.9~\cite{TALYS_18}, and calculate the $^{89}$Y($n,\gamma$)$^{90}$Y cross section and astrophysical reaction rate. Specifically, we use the HFB+comb. NLD~\cite{goriely2008} normalized with the parameters given in Table~\ref{tab:par1}, which reproduce the NLD data points well.
We also include the 30 first discrete levels of $^{90}$Y in the calculations.
Further, we use the phenomenological, fitted models in Eq.~(\ref{eq:glofit}) as input with the parameters given in Table~\ref{tab:gsfpar}, including them as tabulated $E1$ and $M1$ strengths using the \textit{E1file} and \textit{M1file} keywords.
For the neutron optical-model potential, we apply the one from Koning and Delaroche with
global parameters~\cite{koning03}.
We consider the uncertainty in the treatment of the width fluctuations by using the default TALYS option (Moldauer, Refs.~\cite{moldauer,moldauer2}) as well as the Hofmann-Richert-Tepel-Weidenm\"{u}ller model~\cite{hrtw,hrtw2,hrtw3}.
Further, we also take into account a possible contribution from direct capture as prescribed in Ref.~\cite{Xu2012}, using the TALYS keywords \textit{racap y} to invoke the direct-capture mechanism and \textit{ldmodelracap 2} to use total particle-hole state densities in the direct-capture calculation.
We propagate the errors quadratically as before to estimate $\approx 1\sigma$ uncertainties in the calculated cross section and the rate.
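For orientation, a minimal TALYS input of the kind described above might look as follows. This is illustrative only: the file names are placeholders, the level-density choice is our reading of the HFB+comb. option, and the exact argument syntax of the tabulated-strength keywords should be checked against the TALYS manual.
\begin{verbatim}
# talys.inp -- schematic input for 89Y(n,g)90Y (illustrative sketch only)
projectile n
element Y
mass 89
energy energies.dat        # grid of incident neutron energies (placeholder file)
ldmodel 5                  # microscopic HFB+comb. level densities (assumed option)
E1file 39 gsf_E1_90Y.dat   # tabulated E1 strength from the fit (check syntax)
M1file 39 gsf_M1_90Y.dat   # tabulated M1 strength from the fit (check syntax)
racap y                    # include the direct-capture contribution
ldmodelracap 2             # particle-hole state densities in the direct capture
\end{verbatim}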
The resulting ($n,\gamma$) cross sections and reaction rates are shown in Fig.~\ref{fig:talyscalc_90Y}a and b, respectively. Note that using the constant-temperature (CT) NLD, $\rho_{CT}(E) = \frac{1}{T} \exp\left[(E-E_0)/T\right]$~\cite{ericson1959,ericson1960}, deduced from the $^{90}$Y data in Ref.~\cite{guttormsen_yttrium_2014}, gives a cross section and reaction rate very close to the middle normalization in this work. Hence, the HFB+comb. NLD and the CT NLD are fully compatible in this case.
We see from Fig.~\ref{fig:talyscalc_90Y}a that our upper limit best reproduces the data we compare with.
This implies that the tentative value of $\left<\Gamma_\gamma\right> \sim 134$ meV
given in Ref.~\cite{mughabghab2006} is likely to be too low. Also, by looking at the two last individual radiative widths listed in Table~\ref{tab:gamma}, there could be an increasing trend. New measurements of both $\left<\Gamma_\gamma\right>$ and the ($n,\gamma$) cross section would be highly desirable to clarify the situation and provide higher precision.
As for the astrophysical rates shown in Fig.~\ref{fig:talyscalc_90Y}b, the BRUSLIB rate agrees
rather well with our results for the middle normalization. The JINA REACLIB rate differs
significantly in shape between $T \approx 0.1-1$ GK, and the absolute value is also much higher
for $T\approx 4-10$ GK compared to the BRUSLIB one. Our current error band seems to capture both the library reaction rates except at the highest temperatures.
We have also calculated the Maxwellian-averaged cross section (MACS) and compared it to the experimentally available information compiled in the KADoNiS library \cite{kadonis}. The experimental results compiled in KADoNiS for 30 keV have statistical errors ranging from $3.2\%$ to $14.3\%$, and the absolute values vary from 13.5 to 21 mb. The experimental MACS values are shown in Fig. \ref{fig:talysmacs_90Y} together with the present experimentally constrained MACS. Our results are in good agreement with the recommended KADoNiS values \cite{bao2000}.
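The MACS follows from the calculated ($n,\gamma$) excitation function through the standard Maxwellian average, $\langle\sigma\rangle_{kT}=\frac{2}{\sqrt{\pi}}\frac{1}{(kT)^2}\int_0^{\infty}\sigma(E)\,E\,e^{-E/kT}\,dE$; a minimal numerical sketch is given below (Python), with the cross-section grid assumed to come from the TALYS output.
\begin{verbatim}
import numpy as np

def macs(E, sigma, kT=0.030):
    """Maxwellian-averaged cross section at thermal energy kT (MeV).

    E, sigma : neutron energy grid (MeV) and (n,gamma) cross section (mb),
               e.g. the calculated excitation function for one normalization.
    """
    w = E * np.exp(-E / kT)
    return 2.0 / np.sqrt(np.pi) * np.trapz(sigma * w, E) / kT**2
\end{verbatim}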
\begin{figure}[!htb]
\includegraphics[clip,width=.9\columnwidth]{cross_section_macs_90Y.png}
\caption{(Color online) Calculated $^{89}$Y($n,\gamma$)$^{90}$Y MACS compared with the experimental values compiled by KADoNiS at 30 keV \cite{sprocessN50,MAM78A, BAM77, AGM71} and for a range of temperatures according to Ref. \cite{bao2000}.}
\label{fig:talysmacs_90Y}
\end{figure}
\section{Summary and outlook}
\label{sec:sum}
We have measured the $^{89}$Y($\gamma$,n) cross section between $S_n$ and $S_{2n}$ with high precision, providing a third data set that eventually could contribute to resolving the longstanding discrepancy between the data from Livermore (Berman \textit{et al.}) and Saclay (Lepretre \textit{et al.}). Our errors are in the range of $\sim 3\% - 5\%$, where the larger relative errors relate to the low cross-section values measured close to $S_n$. The $^{89}$Y($\gamma$,n) cross section measured in this work is rather consistent in shape with the previous measurements, but our values are intermediate to the two previous measurements for $E_{\gamma}<$ 18 MeV. We are, however, unable to describe the details of the structure of the GDR with the phenomenological Generalized Lorentzian model unless two components are used. It is known that $^{89}$Y is a spherical system, and this asymmetry can therefore not be attributed to deformation.
We combined the $\gamma$SF obtained from $^{89}$Y(p,p$\gamma$) coincidence data for $E_{\gamma}<S_n$ with the $\gamma$SF obtained from the $^{89}$Y($\gamma$,n) reaction, thereby covering experimentally most of the energy range 1.5 MeV $< E_{\gamma}<$ 20 MeV. The $\gamma$-ray strength function of $^{90}$Y for $E_{\gamma}<S_n$ has been studied using the Oslo method on $^{89}$Y(d,p$\gamma$) coincidence data. We assumed that structure effects can be neglected and that the $\gamma$SF of $^{89}$Y can be combined with that of $^{90}$Y to cover a large energy range.
Our experimental results for $^{89}$Y and $^{90}$Y were combined into TALYS cross section and reaction rate calculations to constrain the $^{89}$Y($n,\gamma$)$^{90}$Y reaction cross section and the Maxwellian averaged reaction rate. In addition to the systematic uncertainty of the normalization, the gap in $E_{\gamma}$ where data is lacking also introduces substantial uncertainty in how to model the total $\gamma$-ray strength function. While our cross section results are consistent with several previous measurements and thus both BRUSLIB and JINA REACLIB, our systematic uncertainties stemming from the normalization parameters for the $\gamma$-ray strength function for $E_{\gamma}<8$ MeV are too large to be sensitive to the differences between the two reaction rate libraries in the temperature range of relevance for the s-process. Our MACS values are in good agreement with the recommended values of the KADoNiS library.
While Hauser-Feshbach calculations of reaction cross sections cannot compete with experimental data, where available, the approach is needed in order to provide reliable predictions for energy ranges and reactions where direct measurements are lacking. The $^{89}$Y($n,\gamma$)$^{90}$Y reaction cross section is vital in calculating the production of elements heavier than $A \sim 90$ in the s-process in stellar models, and has consequently been well studied experimentally (although not with small enough uncertainties for certain applications). Our experimentally based calculations demonstrate well the applicability of the approach of using experimental $\gamma$SFs and NLDs to constrain reaction cross sections, through the application of the Hauser-Feshbach formalism as implemented in TALYS, in this region of the nuclear chart. Future work will focus on obtaining experimental $\gamma$SFs and NLDs from particle-$\gamma$ coincidence data for unstable isotopes close to $N=50$ and using these results to constrain important cross sections for astrophysical applications.
\acknowledgments
The authors wish to thank J.C.~M{\"{u}}ller, E.A.~Olsen, A.~Semchenkov and J.~Wikne at the Oslo Cyclotron Laboratory for providing excellent experimental conditions and T.~W.~Hagen, S.~Rose for taking shifts.
Furthermore, the authors would like to thank H. Ohgaki of the Institute of Advanced Energy, Kyoto University, for making a large volume LaBr$_3$(Ce) detector available for the experiment at New SUBARU storage ring.
A.C.L. gratefully acknowledges funding through ERC-STG-2014, grant agreement no. 637686. A.C.L. would also like to thank A.~Koning for solving issues in the TALYS-1.9 release.
G.M.T. gratefully acknowledges funding of this research from the Research Council of Norway, Project Grant No. 262952.
S.~G. is F.N.R.S. research associate.
S.~S. acknowledges financial support by the Research Council of Norway, project grant no. 210007.
M.~W. acknowledges support by the National Research Foundation of South Africa under grant no. 92789 and 83867.
I.G. and D.F. acknowledge the support from the Extreme Light Infrastructure Nuclear Physics (ELI-NP) Phase II, project cofinanced by the Romanian Government and the European Union through the European Regional Development Fund - the Competitiveness Operational Programme (1/07.07.2016, COP, ID 1334).
H.U. acknowledges the support from the Premier Project of the Konan University.
This work was partly supported by the IAEA and performed within the IAEA CRP on "Updating the Photonuclear data Library and generating a Reference Database for Photon Strength Functions" (F41032) and JPN-20564.
A.V.V. acknowledges support from US Department of Energy DE-NA0002905. This work was partly performed under the auspices of the US Department of Energy DE-AC52-07NA27344 (LLNL) and DE-AC02-05CH11231 (LBNL).
R.S. was supported by the European Commission within the
Seventh Framework Programme through Fission-2013-CHANDA (project no.605203).
\bibliographystyle{apsrev4-1}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,993 |
Hannah Dennison
Subscribe to Hannah's Occasional Newsletter
Thieves!
When a slew of silver thefts hit the small town of Gipping-on-Plym, it's up to aspiring investigative journalist Vicky Hill to figure out if the culprits are gypsies, tramps, or thieves…
Unaware that their temporary dwelling is also serving as the official gathering site for the upcoming annual "Morris Dance-a-thon," the gypsies set up their illegal campground at The Grange, much to the dismay of almost everyone in town. And when silver is stolen shortly after the gypsies arrive, blaming the despised nomads makes more than a little sense.
But when the body of an unnamed woman is found in a shallow stream, Vicky suspects there's a connection between the murder and the recent silver thefts. And since both her boss and the local police refuse to investigate, Vicky takes on the case by herself. It's a criminal conundrum suited only for the mind of Vicky Hill, and she's determined to uncover the answers and clinch her fourth national exclusive!
The Honeychurch Hall Mysteries
Dangerous Deception (Fifth in the Series)
Murderous Mayhem (Fourth in the Series)
Killer Ball (Third in the Series)
Deadly Desires (Second in the Series)
Murder at Honeychurch Hall (First in the Series)
The Vicky Hill Mysteries
Accused! (Fifth in the Series)
Thieves! (Fourth in the Series)
Expose! (Third in the Series)
Scoop! (Second in the Series)
A Vicky Hill Exclusive! (First in the Series) | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,131 |
\section{Introduction}
\label{sec:intro}
Despite all the successes of the standard model (SM), it does not represent a complete theory of nature, since many open questions remain, such as the origin of dark matter, the matter-antimatter asymmetry in the universe, the hierarchy problem, etc. To solve these problems, many theories have been proposed which are generally referred to as theories beyond the SM (BSM). Among the most important ones are those based on supersymmetry. These extended models often contain an extended Higgs sector. As an overview, the minimal extensions known as two-Higgs-doublet models (2HDMs) \cite{Lee:1973iz} include a second complex Higgs doublet which, after spontaneous symmetry breaking, leads to five physical Higgs boson states, i.e., two neutral scalars ($h$ and $H$, with the assumption $m_h < m_H$), two charged Higgs bosons ($H^\pm$) and one neutral pseudoscalar ($A$) \cite{Djouadi:2005gj}.
Furthermore, after imposing a discrete symmetry that gives natural flavor conservation, the 2HDMs can be further classified into four categories, Type I, II, III and IV, according to the couplings of the doublets to the fermions.
The minimal supersymmetric standard model (MSSM) \cite{Gunion} is one of the most popular and well-studied BSM scenarios, in which one doublet couples to up-type quarks and the other to down-type quarks and charged leptons. It should be noted that the Higgs sector of the MSSM is a Type-II 2HDM, which provides elegant solutions to some of the shortcomings of the SM. It also predicts a rich and varied phenomenology that can be tested at colliders.
Since there is no fundamental charged scalar boson in the SM, the discovery of a charged scalar boson would clearly represent unambiguous evidence for the presence of new physics beyond the standard model. In this context, the search for a charged Higgs boson signal is unique, and in this work we propose a new channel to search for these bosons at current and future colliders.\\
In all classes of 2HDM scenarios, the charged Higgs bosons $H^\pm$ can be lighter or heavier than the top quark, while the lightest CP-even Higgs boson $h$ can align with the properties of the SM Higgs boson. Therefore, looking for charged Higgs bosons $H^\pm$ in various decay channels over a wide range of masses is a top-priority program in the current LHC experiments and at future colliders.
Experimental searches for light charged Higgs bosons ($m_{H^\pm}< m_t$) started already at the Tevatron. For example, the CMS \cite{CMS:2014cdp} and ATLAS \cite{TheATLAScollaboration:2013wia} collaborations have reported their results from proton-proton collision data recorded at $\sqrt{s}=8$~TeV using the $\tau+jets$ channel with a hadronically decaying $\tau$ lepton in the final state, i.e., $t\rightarrow bH^+(\rightarrow \tau^+\nu_\tau)$. The latest results on the search for charged Higgs bosons in the $H^\pm\to \tau^\pm\nu_\tau$ decay channel in proton-proton collisions at $\sqrt{s}=13$~TeV have been reported by the CMS experiment \cite{Sirunyan:2019hkq}.
According to the reported results, a large region of the MSSM $m_{H^+}-\tan\beta$ parameter space is excluded for $m_{H^+}=80-160$~GeV over the entire range of $\tan\beta$ up to 60, except for a hole around $m_{H^+}\approx 150-160$~GeV for $\tan\beta\approx 10$. Here, $\tan\beta$ is the ratio of the vacuum expectation values of the neutral components of the two Higgs doublets. Therefore, it seems that there is not much chance of finding light charged Higgs bosons, and colliders should concentrate on probing heavy charged Higgs bosons ($m_{H^\pm}>m_t$).
Heavy charged Higgs bosons are mainly produced directly in association with a top quark (and also a bottom quark) \cite{Harlander:2011aa,deFlorian:2016spz}. Moreover, charged Higgs bosons can be produced in supersymmetric (SUSY) cascade decays via heavier neutralino and chargino production in squark and gluino decays, see Refs.~\cite{Datta:2001qs,Datta:2003iz}.
On the other hand, in many models a heavy charged Higgs boson is predicted to decay predominantly
either to a tau lepton and its associated neutrino, or to a top and a bottom quark ($H^+\to t\bar{b}$). The channel $H^+\to t\bar{b}$ suffers from a large multi-jet background, but it dominates in the heavy-mass region, see Refs.~\cite{Sirunyan:2020hwv,ATLAS:2020jqj,ATLAS:2016qiq,Aad:2015typ}.
Searches for the signature $H^+\to t\bar{b}$ have been interpreted by the ATLAS and CMS Collaborations in proton-proton collisions at center-of-mass energies of 8 \cite{Aad:2015typ} and 13 TeV \cite{Sirunyan:2020hwv,ATLAS:2020jqj,Aad:2021xzu} and a small excluded region in the MSSM $m_{H^+}-\tan\beta$ parameter space has been presented.
For example, the corresponding searches carried out by ATLAS at $\sqrt{s}=13$~TeV and an integrated luminosity of $L = 13.2$ fb$^{-1}$ have excluded $m_{H^+}\approx 300-900$~GeV for a very low $\tan\beta (\approx 0.5-1.7)$ region \cite{ATLAS:2016qiq}, whereas for high values of $\tan\beta >44(60)$, $m_{H^+}\approx 300(366)$~GeV have been excluded.
In the present work, we study the dominant decay mode $H^+\to t\bar{b}$ followed by $b\to B+X$, where $B$ is the bottom-flavored hadron and $X$ collectively denotes the unobserved final state particles. Therefore, our proposed channel to search for heavy charged Higgs bosons at colliders is to study the energy distributions of B-hadrons inclusively produced in the decay mode $H^+\to B+X$.
To this aim, our primary purpose is the evaluation of the next-to-leading order (NLO) QCD corrections to the differential partial decay width $d\Gamma(H^+\to t\bar{b}(+g))/dx_b$, where $x_b$ stands for the scaled-energy of bottom quark. This differential width, which is presented for the first time, is needed to obtain the energy spectrum of B-mesons through heavy charged Higgs decays. Also, the hadronization process $b\to B$ is described by the nonperturbative fragmentation functions (FFs) which will be introduced in Section~\ref{sec:two}. The differential decay width at the parton level ($d\Gamma/dx_b$), the nonperturbative FFs and the factorization theorem, introduced in Sec.~\ref{sec:two}, allow us to compute the desired physical quantity; the energy spectrum of B-hadrons.
Previously, in Ref.~\cite{Kniehl:2012mn}, we studied the energy spectrum of B-mesons produced from the direct decay of top quarks in the SM, i.e., $t\to BW^++X$. A comparison between the energy spectrum of B-mesons from charged Higgs decays and the one from top decays in the SM is expected to indicate a signal of new physics beyond the SM.
\\
This paper is organized as follows.
In Sec.~\ref{sec:one}, we present our analytical results for the ${\cal O}(\alpha_s)$ QCD corrections to the Born-level rate of $H^+\to t\bar{b}$. We apply the massless scheme, where the bottom-quark mass is ignored but an arbitrary value of the charged Higgs mass is retained.
In Sec.~\ref{sec:two}, we present our numerical analysis of the inclusive production of B-hadrons from heavy charged Higgs decay, employing the factorization theorem and the DGLAP evolution equations.
Sec.~\ref{sec:three} is devoted to our summary and conclusions.
\section{Parton level results in the general 2HDM}
\label{sec:one}
Assuming $m_{H^+}>m_t$, we first study the NLO radiative corrections to the partial decay width
\begin{eqnarray}\label{born}
H^+\to t\bar{b},
\end{eqnarray}
in the general 2HDM, where $H_1$ and $H_2$ are the doublets whose vacuum expectation values (VEV's), i.e., $\textbf{v}_1$ and $\textbf{v}_2$, give masses to the down- and up-type quarks, respectively. The squared sum of the VEV's is fixed by the Fermi constant $G_F$ as $\textbf{v}_1^2+\textbf{v}_2^2=(\sqrt{2} G_F)^{-1}=(246~GeV)^2$. However, the ratio of the two VEV's is a free parameter and can be characterized by the angle $\beta$, introduced through $\tan\beta=\textbf{v}_2/\textbf{v}_1$.
A linear combination of the charged components of the doublets $H_1$ and $H_2$ also gives the observable charged Higgs $H^\pm$, i.e., $H^\pm=H_2^\pm\cos\beta-H_1^\pm\sin\beta$.\\
In a general 2HDM, tree-level flavor-changing neutral currents (FCNC) can be avoided if one does not couple the same Higgs doublet to up- and down-type quarks simultaneously.
Therefore, for our purpose we need specific models which naturally avoid these problems by restricting the Higgs couplings.
In this context, there are two possibilities (also referred to as two models) for how the two Higgs doublets couple to the quarks.\\
In the first possibility (or model I), the Higgs doublet $H_1$ couples to all bosons and the other doublet $H_2$ couples to all quarks in the same manner as in the SM. In this model, the Yukawa couplings between the top quark, the bottom quark and the charged Higgs are given by the following Lagrangian \cite{GHK}
\begin{eqnarray}\label{modelfirst}
L_1&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb}\cot\beta \bigg\{H^+\bar{t}\big[m_t(1-\gamma_5)-
\nonumber\\
&&m_b(1+\gamma_5)\big]b\bigg\}+H.c,
\end{eqnarray}
where, $g_W^2=4\sqrt{2} m_W^2 G_F$ and the CKM matrix element is labeled by $V_{tb}$.\\
In the second possibility (model II), the doublet $H_1$ couples only to
the right chiral down-type quarks while the $H_2$ couples only to the right chiral up-type quarks. In this model, the charged Higgs boson couplings to fermions are given by the following Lagrangian
\begin{eqnarray}\label{modelsecond}
L_{2}&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb} \bigg\{H^+\bar{t}\big[m_t\cot\beta(1-\gamma_5)+
\nonumber\\
&&m_b\tan\beta(1+\gamma_5)\big] b\bigg\}+H.c .
\end{eqnarray}
These two models are also known as Type-I and Type-II 2HDM scenarios and, as mentioned in the Introduction, the MSSM \cite{Fayet:1974pd,Fayet:1976et,Dimopoulos:1981zb} is a special case of a Type-II 2HDM.
For the process (\ref{born}), considering the interaction Lagrangians (\ref{modelfirst}) and (\ref{modelsecond}), the current density is expressed as $J^{\mu}\propto \psi_b(a+b\gamma_5)\bar{\psi}_t$, so that the coupling factors in the two models are given by
\begin{eqnarray}\label{model1}
\textbf{model I}:\quad a&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb}(m_t-m_b)\cot\beta,\nonumber\\
b&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb}(m_t+m_b)\cot\beta,
\end{eqnarray}
and
\begin{eqnarray}\label{model2}
\textbf{model II}:\quad
a&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb}(m_t \cot\beta+m_b\tan\beta),\nonumber\\
b&=&\frac{g_{_W}}{2\sqrt{2}m_W}V_{tb}(m_t \cot\beta-m_b\tan\beta).\nonumber\\
\end{eqnarray}
In the next section, we describe the technical details of our calculation of the ${\cal O}(\alpha_s)$ radiative corrections to the tree-level decay rate of $H^+\to t\bar{b}$, using dimensional regularization to regularize all divergences.
\subsection{Born decay width of $H^+\to t\bar{b}$}
The decay process (\ref{born}) is analyzed in the rest frame of the charged Higgs boson.
It is straightforward to calculate the Born term contribution to the partial decay rate of the process (\ref{born}) in the 2HDM. According to the given Lagrangian in Eqs.~(\ref{modelfirst}) and (\ref{modelsecond}), the coupling of the charged-Higgs to the fermions (top and bottom quark in (\ref{born})) can either be expressed as a superposition of scalar and pseudoscalar coupling factors or as a combination of right- and left-chiral coupling factors \cite{GHK}. Therefore, the lowest order decay amplitude is of the form
\begin{eqnarray}\label{b1}
M_0=v_b(a\boldsymbol{1}+b\gamma_5)\bar{u}_t=v_b\{g_t\frac{1+\gamma_5}{2}+g_b\frac{1-\gamma_5}{2}\}\bar{u}_t,
\end{eqnarray}
where, $g_t=a+b$ and $g_b=a-b$.
Therefore, the tree-level decay width reads
\begin{eqnarray}\label{gammatree}
\Gamma_0&=&\frac{N_c m_H}{8\pi}\lambda^{\frac{1}{2}}(1,R,y)\bigg[2(a^2+b^2)(S-R)\nonumber\\
&&-2(a^2-b^2)\sqrt{Ry}\bigg],
\end{eqnarray}
where $\lambda(x,y,z)=(x-y-z)^2-4y z$ is the K\"all\'en function and $N_c=3$ is a color factor. Here, for simplicity, we have defined: $R=(m_b/m_H)^2$, $y=(m_t/m_H)^2$ and $S=(1+R-y)/2$.
This result is in complete agreement with the one presented in Ref.~\cite{Li:1990ag}.
In the limit of vanishing bottom quark mass, the tree-level decay width is of the form
\begin{eqnarray}\label{gammatreezero}
\Gamma_0=\frac{N_c m_H(1-y)^2}{8\pi}(a^2+b^2),
\end{eqnarray}
where, in both models I and II one has
\begin{eqnarray}\label{gammaaa}
a^2+b^2=\sqrt{2}G_F |V_{tb}|^2 m_t^2\cot^2\beta.
\end{eqnarray}
Since $m_b\ll m_t$, finite-$m_b$ corrections are expected to be negligible in the case at hand. This expectation has actually been confirmed in Ref.~\cite{Kniehl:2012mn} by a comparative analysis of the partial width of the decay $t\to bW^+$ in the general-mass variable-flavor-number scheme (GM-VFNS), where the bottom-quark mass is retained, and the zero-mass variable-flavor-number scheme (ZM-VFNS), where the bottom quark is included among the massless quark flavors. Therefore, throughout this work we apply the ZM-VFNS or massless scheme.
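
For orientation, the Born-level width in the massless-$b$ limit, Eqs.~(\ref{gammatreezero}) and (\ref{gammaaa}), can be evaluated with a few lines of code. The following Python sketch is purely illustrative: it assumes $|V_{tb}|=1$, $\tan\beta=1$ and $m_{H^+}=200$~GeV, together with the values of $G_F$ and $m_t$ quoted in Sec.~\ref{sec:two}, and is not part of the actual analysis.
\begin{verbatim}
# Illustrative evaluation of the Born width in the massless-b limit.
# Assumptions: V_tb = 1, tan(beta) = 1, m_H = 200 GeV (not from the text).
import math

G_F, m_t, V_tb = 1.16637e-5, 172.98, 1.0   # GeV^-2, GeV
m_H, tan_beta  = 200.0, 1.0
N_c = 3

y = (m_t / m_H)**2
a2_plus_b2 = math.sqrt(2.0) * G_F * V_tb**2 * m_t**2 / tan_beta**2
Gamma_0 = N_c * m_H * (1.0 - y)**2 / (8.0 * math.pi) * a2_plus_b2
print(Gamma_0)   # ~0.75 GeV, close to the value quoted in the numerical section
\end{verbatim}
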
In the next section, we compute the ${\cal O}(\alpha_s)$ QCD corrections to the Born-level decay rate of $H^+\to t\bar{b}$ and
present, for the first time, the analytical parton-level expressions for $d\Gamma(H^+\to B+X)/dx_B$ at NLO in the ZM-VFNS. To this aim, we calculate the quantity $d\Gamma_b/dx_b$ where,
\begin{eqnarray}\label{variable}
x_b=\frac{E_b}{E_b^{max}}=\frac{2E_b}{m_H(1-y)},
\end{eqnarray}
is the scaled-energy of b-quark. It ranges as $0\leq x_b\leq 1$.
\subsection{${\cal O}(\alpha_s)$ virtual corrections}
\label{virtual}
The QCD virtual one-loop corrections to the process $H^+\to t\bar{b}$ contain both infrared (IR) and
ultraviolet (UV) divergences where the UV-divergences appear when the integration
region of the internal momentum of the virtual gluon goes to infinity and the
IR-divergences arise from the soft-gluon singularities.
In this work, we adopt the "on-shell" mass renormalization
scheme and apply the dimensional regularization scheme to regularize all divergences. Through this scheme, all singularities are regularized in $D =4-2\epsilon$ dimensions to become single poles in $\epsilon$.
Considering the two-body phase space for the virtual corrections the contribution of virtual radiations into the
differential decay width reads
\begin{eqnarray}
\frac{d\Gamma^{\textbf{vir}}_b}{dx_b}=\frac{S}{8\pi m_H}
\overline{|M^{\textbf{vir}}|^2}\delta(1-x_b),
\end{eqnarray}
where,
$\overline{|M^{\textbf{vir}}|^2}=\sum_{Spin}(M_0^{\dagger} M_{loop}+M_{loop}^{\dagger} M_0)$. Here, $M_0$ is the Born term amplitude (\ref{b1}) and the renormalized amplitude of the virtual corrections is given by
$M_{loop}=v_b(\Lambda_{ct}+\Lambda_l)(a+b\gamma_5)\bar{u}_t$,
where $\Lambda_{ct}$ represents the counterterm and $\Lambda_l$
arises from the one-loop vertex correction \cite{MoosaviNejad:2012ju}.
Following Refs.~\cite{Czarnecki,Liud}, the counterterm
of the vertex includes the wave-function renormalizations of quarks as well as the top quark mass renormalization
\begin{eqnarray}
\Lambda_{ct}=\frac{\delta Z_b}{2}+\frac{\delta Z_t}{2}-
\frac{\delta m_t}{m_t}.
\end{eqnarray}
Since we are working in the ZM-VFN scheme, where $m_b=0$ is assumed, the b-quark mass counterterm is $\delta m_b=0$.
The wave function and the mass renormalization constants are given by \cite{Korner:2002fx}
\begin{eqnarray}\label{mass}
\delta Z_b &=& -\frac{\alpha_s(\mu_R)}{4\pi}C_F\big[\frac{1}{\epsilon_{UV}}-\frac{1}{\epsilon_{IR}}\big],
\nonumber\\
\delta Z_t &=& -\frac{\alpha_s(\mu_R)}{4\pi}C_F\big[\frac{1}{\epsilon_{UV}}+\frac{2}{\epsilon_{IR}}
-3\gamma_E+3\ln\frac{4\pi\mu_F^2}{m_t^2}+4\big],
\nonumber\\
\frac{\delta m_t}{m_t}&=&\frac{\alpha_s(\mu_R)}{4\pi}C_F\big[\frac{3}{\epsilon_{UV}}-3\gamma_E+
3\ln\frac{4\pi\mu_F^2}{m_t^2}+4\big],
\end{eqnarray}
where, $\gamma_E=0.5772\cdots$ is the Euler constant, $C_F=(N_c^2-1)/(2N_c)=4/3$ for $N_c=3$ quark colors, and $\mu_F$ is the factorization scale which is arbitrarily set as $\mu_F=m_H$ in our work.
Conventionally, $\epsilon_{IR}$ and $\epsilon_{UV}$ represent the infrared and
the ultraviolet divergences, respectively.\\
The real part of the vertex correction is given by
\begin{eqnarray}
\Lambda_l&=&\frac{\alpha_s N_c m_H^2}{\pi}C_F(a^2+b^2)\big[y-1+(1-y)B_0(0,0,0)\nonumber\\
&&-yB_0(m_H^2,0,m_t^2)+B_0(m_t^2,0,m_t^2)
\nonumber\\
&&-(1-y)^2m_H^2 C_0(0,m_t^2,m_H^2,0,0,m_t^2)\big],
\end{eqnarray}
where, $B_0$ and $C_0$ are the Passarino-Veltman 2-point and 3-point integrals \cite{Dittmaier:2003bc}.
Summing up all virtual corrections, the UV singularities cancel so that the virtual differential decay rate is ultraviolet finite. However, the IR divergences remain; they are now labeled by $\epsilon$.
Eventually, the virtual one-loop contributions read
\begin{eqnarray}\label{virt}
\frac{d\Gamma^{\textbf{vir}}_b}{dx_b}&=&\Gamma_0
\frac{\alpha_s(\mu_R)}{2\pi}C_F
\delta(1-x_b)\bigg\{2Li_2(y)-\frac{1}{\epsilon^2}+\frac{F}{\epsilon}
\nonumber\\
&&-\frac{F^2}{2}+(2y-5)\ln\frac{1-y}{y}+\ln^2y-\frac{3\pi^2}{4}-\frac{7}{8}\bigg\},
\nonumber\\
\end{eqnarray}
where, $Li_2(y)=-\int_0^y (dt/t) \ln(1-t)$ is the Spence function and
\begin{eqnarray}
F=-\ln\frac{4\pi}{y}+2\ln\frac{1-y}{y}+\gamma_E-\frac{5}{2}.
\end{eqnarray}
\subsection{Real gluon corrections (Bremsstrahlung)}
To obtain infrared-finite physical results for $d\Gamma_b/dx_b$, one must include the contributions of real gluon emission.
Considering two Feynman graphs including the real gluon emissions from the top and bottom quark, the ${\cal O}(\alpha_s)$ real gluon emission (tree-graph) amplitude reads
\begin{eqnarray}\label{finfin}
M^{\textbf{real}}&=&g_s\frac{\lambda^a}{2} v(p_b, s_b)\big\{-\frac{2p_t^\mu+
\displaystyle{\not}p_g \gamma^\mu}{2p_t \cdot p_g}
\\
&&+\frac{p_b^\mu+\gamma^\mu \displaystyle{\not}p_g}
{2p_b\cdot p_g}\big\}(a\textbf{1}+b\gamma_5) \bar{u}(p_t, s_t)\epsilon_{\mu}^{\star}(p_g,r),\nonumber
\end{eqnarray}
where $\epsilon(p_g,r)$ refers to the polarization vector of the emitted real gluon with the spin $r$. The first and second expressions in the curly brackets are related to the real gluon emissions from the top and bottom quarks, respectively.
In order to regulate the IR-divergences which arise from the soft and collinear real-gluon emissions, as before, we apply dimensional regularization scheme. According to this scheme, the real differential decay rate for the process $H^+\to t\bar{b}g$ is given by
\begin{eqnarray}\label{moozmooz}
d\Gamma^{\textbf{real}}=\frac{\mu_F^{2(4-D)}}{2m_H}|M^{\textbf{real}}|^2dR_3(p_t, p_b, p_g, p_{_{H^+}}),
\end{eqnarray}
where, $\mu_F$ is an arbitrary reference mass and the phase space element $dR_3$ is defined as
\begin{eqnarray}\label{ahah}
\frac{d^{D-1}\vec{p}_b}{2E_b}\frac{d^{D-1}\vec{p}_t}{2E_t}\frac{d^{D-1}\vec{p}_g}{2E_g}
(2\pi)^{3-2D}\delta^D(p_H-\sum_{g,b,t} p_f).
\end{eqnarray}
To evaluate the differential decay rate $d\Gamma^{real}_b/dx_b$,
we fix the momentum of bottom quark in Eq.~(\ref{moozmooz}) and integrate over
the gluon energy which ranges as
\begin{eqnarray}
m_H\frac{(1-y)(1-x_b)}{2}\leq E_g \leq m_H\frac{(1-y)(1-x_b)}{2(1-x_b(1-y))}.
\end{eqnarray}
Note that, when we integrate over the phase
space of the real gluon radiation, terms of the form $(1-x_b)^{-1-2\epsilon}$ appear which are due to soft-gluon radiation, i.e., $E_g\to 0$ (equivalently $x_b\to1$). Thus, we employ the following prescription introduced in Ref.~\cite{Corcella:1}
\begin{eqnarray}
(1-x_b)^{-1-2\epsilon}&&=-\frac{1}{2\epsilon}\delta(1-x_b)+\bigg(\frac{1}{1-x_b}\bigg)_+\nonumber\\
&&-2\epsilon \bigg(\frac{\ln(1-x_b)}{1-x_b}\bigg)_+,
\end{eqnarray}
where the plus distributions are defined as
\begin{eqnarray}
\int_0^1 (f(x))_{_+}h(x)dx=\int_0^1f(x)[h(x)-h(1)]dx.
\end{eqnarray}
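
As a simple numerical illustration of this prescription (not part of the calculation itself), one may check the defining property of the plus distribution for $f(x)=1/(1-x)$ and the test function $h(x)=x^2$, for which the exact result is $-3/2$:
\begin{verbatim}
# Check of int_0^1 dx [f(x)]_+ h(x) = int_0^1 dx f(x) [h(x) - h(1)]
# for f(x) = 1/(1-x) and h(x) = x^2; the exact answer is -3/2.
from scipy.integrate import quad

f = lambda x: 1.0 / (1.0 - x)
h = lambda x: x**2

val, _ = quad(lambda x: f(x) * (h(x) - h(1.0)), 0.0, 1.0)
print(val)   # -1.5
\end{verbatim}
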
\subsection{Analytical results for $d\Gamma/dx_i$ at parton level}
The ${\cal O}(\alpha_s)$ corrections to the differential decay rate of $H^+\to t\bar{b}$ is obtained by summing the Born, the virtual and the real gluon contributions.
It reads
\begin{eqnarray}\label{pol1}
\frac{d\Gamma^{\textbf{nlo}}_b}{dx_b}&=&\Gamma_0\Big[\delta(1-x_b)+
\frac{C_F\alpha_s}{2\pi}\Big\{[-\frac{1}{\epsilon}+\gamma_E-\ln 4\pi]\nonumber\\
&&\times[\frac{3}{2}\delta(1-x_b)+\frac{1+x_b^2}{(1-x_b)_+}]+T_1\Big\}\Big],
\end{eqnarray}
where, by defining $S=(1-y)/2$ (with $y=m_t^2/m_H^2$) one has
\begin{eqnarray}\label{pol11}
T_1&=&\delta(1-x_b)\bigg\{\frac{3}{2}\ln y+4S\ln\frac{y}{1-y}-2Li_2\frac{1}{y}-\frac{\pi^2}{3}-2\bigg\}\nonumber\\
&&+2(1+x_b^2)\bigg(\frac{\ln(1-x_b)}{1-x_b}\bigg)_+\nonumber\\
&&+\frac{1+x_b^2}{(1-x_b)_+}\bigg\{\ln\frac{4S^2x_b^2}{1-2Sx_b}+\frac{1}{(1-2Sx_b)^2}\bigg[-2S^2x_b^2\nonumber\\
&&+\frac{(1-x_b)^2+2x_b(4Sx_b-1)}{1+x_b^2}\bigg]\bigg\}.
\end{eqnarray}
Our result for the differential decay rate, which is presented here for the first time, is in complete agreement with the result of \cite{Li:1990ag} after integration over $x_b$ ($0\leq x_b \leq 1$).
Note that our main purpose is to evaluate the energy distribution of B-hadrons produced in heavy charged Higgs boson decay, $H^+\to t\bar{b}(+g)\to B+X$, where B-hadrons can be produced from the fragmentation of the b-quark as well as of the emitted real gluons. Therefore, in order to obtain
the most accurate energy spectrum of the produced B-hadrons we have to consider the contribution of gluon fragmentation as well.
The gluon splitting contribution is important at low energies of the observed B-hadron, where it decreases the size of the decay rate near threshold, see Refs.~\cite{Nejad:2016epx,Nejad:2014sla}. For this reason, we also need to compute the NLO differential decay rate $d\Gamma^{\textbf{nlo}}_g/dx_g$, where $x_g=2E_g/(m_H (1-y))$ is the scaled energy of the emitted real gluon, defined as in (\ref{variable}).
Ignoring the details of calculation, this differential decay rate is given by
\begin{eqnarray}\label{pol2}
\frac{d\Gamma^{\textbf{nlo}}_g}{dx_g}&=&\\
&&\hspace{-1cm}\Gamma_0\frac{C_F\alpha_s}{2\pi}\bigg\{\frac{1+(1-x_g)^2}{x_g}(-\frac{1}{\epsilon}+\gamma_E-\ln 4\pi)+T_2\bigg\},\nonumber
\end{eqnarray}
where,
\begin{eqnarray}\label{pol22}
T_2&=&\frac{1+(1-x_g)^2}{x_g}\ln\frac{S^2x_g^2(1-x_g)^2(1-2Sx_g)}{y^2}\nonumber\\
&&+\frac{(x_g+2)^2-8}{x_g}.
\end{eqnarray}
In Eqs.~(\ref{pol1}) and (\ref{pol2}), the terms $T_1$ and $T_2$ are free of all IR-divergences. In order to subtract the singularities
remaining in the differential decay widths, we employ the modified minimal-subtraction $(\overline{MS})$ scheme, where the singularities are absorbed into the bare fragmentation functions (FFs). This renormalizes the
FFs, endowing them with $\mu_F$ dependence, and creates in
the differential decay widths the finite terms of the form $(\alpha_s/\pi)\ln(m_H^2/\mu_F^2)$ which are rendered perturbatively small by choosing $\mu_F={\cal O}(m_H)$.
Following the $\overline{MS}$ scheme, in order to obtain the finite coefficient functions we have to subtract from
Eqs.~(\ref{pol1}) and (\ref{pol2}) the ${\cal O}(\alpha_s)$ term multiplying the characteristic $\overline{MS}$ constant, i.e., $-1/\epsilon+\gamma_E-\ln 4\pi$ \cite{Corcella:1}.
\section{Numerical results}
\label{sec:two}
In this work, using the ZM-VFNS we study the decay process
\begin{eqnarray}\label{eqq}
H^+\to t\bar{b}(+g),
\end{eqnarray}
followed by $\bar{b}/g\to B+X$. In this process, the top quark dominantly decays as $t\to bW^+\to bl^+\nu_l$.
In the narrow-width approximation (NWA), where we set $p_t^2=m_t^2$ and $p_{W^+}^2=m_{W^+}^2$ and ignore small terms of order ${\cal O}(\Gamma_i^2/m_i^2)~(i=t,W^+)$, the total decay rate reads
\begin{eqnarray}\label{br}
&&\Gamma(H^+\to b\bar{b}l^+\nu_l)=\\
&&\Gamma(H^+\to t\bar{b})\times B(t\to bW^+)\times B(W^+\to l^+\nu_l),\nonumber
\end{eqnarray}
where, for the branching ratios one has $B(t\to bW^+)=96.2\%$ and $B(W^+\to l^+\nu_l)=10.86\%$ \cite{Zyla:2020zbs}.
More details about the NWA can be found in Ref.~\cite{MoosaviNejad:2019agw}.
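For orientation, the product of the two branching fractions in Eq.~(\ref{br}) is $0.962\times 0.1086\simeq 0.104$, so that, for a single lepton flavor, the rate $\Gamma(H^+\to b\bar{b}l^+\nu_l)$ amounts to roughly $10\%$ of $\Gamma(H^+\to t\bar{b})$.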
Having the differential decay widths for the process (\ref{eqq}), i.e., Eqs.~(\ref{pol1}) and (\ref{pol2}), we are now in a situation to make our phenomenological predictions
for the scaled-energy ($x_B$) distribution of B-hadrons inclusively produced in the decay of heavy charged Higgs bosons. To present our results for the $x_B$-distribution, we
consider the differential distribution $d\Gamma^{nlo}/dx_B$ of the partial width of the decay $H^+\to B+X$, where
$x_B=2E_B/(m_H(1-y))$ is the scaled-energy of B-hadrons in the charged Higgs rest frame. The $x_B$-variable is defined as $x_b$ in (\ref{variable}).
Our tool to compute the scaled energy
distribution of B-hadrons is the factorization theorem of the QCD-improved parton model \cite{collins}. According to this theorem \cite{Salajegheh:2018hfs}, the energy distribution of B-hadrons can be expressed as the convolution of the parton-level spectrum $d\Gamma_a/dx_a~(a=b, g)$ with
the nonperturbative FFs of $a\to B$, describing the hadronization process of $a\to B$. The $a\to B$ FFs are labeled by $D_a^B(z, \mu_F)$, where $\mu_F$ is the factorization scale and $z=E_B/E_a$ is the fragmentation variable which indicates the energy fraction of parent parton carried by the produced hadron. The factorization theorem is expressed as
\begin{equation}\label{eq:master}
\frac{d\Gamma}{dx_B}=\sum_{a=b, g}\int_{x_a^\text{min}}^{x_a^\text{max}}
\frac{dx_a}{x_a}\,\frac{d\Gamma_a}{dx_a}(\mu_R, \mu_F) D_a^B(\frac{x_B}{x_a}, \mu_F),
\end{equation}
where, $\mu_R$ and $\mu_F$ are the renormalization and factorization scales, respectively. The scale $\mu_R$ is related to the renormalization of the QCD coupling constant. In this paper, we use the convention $\mu_R=\mu_F=m_H$, a choice often made.
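
To make the structure of Eq.~(\ref{eq:master}) explicit, the following minimal Python sketch performs the convolution numerically for the $b\to B$ channel only. Both the parton-level spectrum and the FF used here are toy inputs chosen purely for illustration; they are not the NLO coefficient functions or the fitted FFs employed in our analysis.
\begin{verbatim}
# Toy numerical convolution illustrating Eq. (master) for a = b only:
# dGamma/dx_B = int_{x_B}^{1} dx_a/x_a  dGamma_b/dx_a(x_a)  D_b(x_B/x_a)
import numpy as np

def dGamma_b_dx(x):                 # TOY parton-level spectrum (assumption)
    return x**2 * (1.0 - x)

def D_b(z):                         # TOY fragmentation function (assumption)
    return z**10 * (1.0 - z)**2

def dGamma_dxB(xB, n=2000):
    xa = np.linspace(xB, 1.0, n)
    integrand = dGamma_b_dx(xa) * D_b(xB / xa) / xa
    return np.trapz(integrand, xa)

for xB in (0.3, 0.5, 0.7, 0.9):
    print(xB, dGamma_dxB(xB))
\end{verbatim}
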
Several searches for the signature $H^+\to t\bar{b}$ in the context of 2HDMs have been performed by the ATLAS and CMS Collaborations in proton-proton collisions at center-of-mass energies of 8 and 13 TeV \cite{Aad:2015typ,Sirunyan:2020hwv,ATLAS:2020jqj}. For example, the results presented in Ref.~\cite{Sirunyan:2020hwv} are based on proton-proton collision data collected in 2016 at $\sqrt{s}=13$~TeV by the CMS experiment, corresponding to an integrated luminosity of $35.9~fb^{-1}$. Figure 7 of this reference shows the excluded parameter space in the MSSM scenarios. Based on these results, the maximum excluded $\tan\beta$ value is $0.88$ for $0.20< m_{H^\pm}< 0.55$~TeV.
The corresponding searches carried out by ATLAS at $\sqrt{s}=13$~TeV with an integrated luminosity of $L = 13.2$ fb$^{-1}$ have excluded $m_{H^+}\approx 300-900$~GeV for a very low $\tan\beta (\approx 0.5-1.7)$ region \cite{ATLAS:2016qiq}, whereas for high values of $\tan\beta >44(60)$, $m_{H^+}\approx 300(366)$~GeV is excluded.
However, a definitive scan over the $m_{H^+}-\tan\beta$ plane is a program that still has to be carried out, a task that belongs to the LHC experiments and future colliders.
In this work, for our numerical analysis we restrict ourselves to the allowed regions of the $m_{H^+}-\tan\beta$ parameter space evaluated by the CMS experiments, see Fig.7 in Ref.~\cite{Sirunyan:2020hwv}. Moreover, from Ref.~\cite{Nakamura:2010zzi} we adopt other input parameters as
$G_F = 1.16637\times10^{-5}$~GeV$^{-2}$ and
$m_t = 172.98$~GeV.
We will also evaluate the QCD coupling constant $\alpha_s$ at NLO in the $\overline{\text{MS}}$ scheme through the following relation
\begin{eqnarray}\label{alpha}
\alpha^{(n_f)}_s(\mu)=\frac{1}{b_0\ln(\mu^2/\Lambda^2)}
\Big\{1-\frac{b_1 \ln\big[\ln(\mu^2/\Lambda^2)\big]}{b_0^2\ln(\mu^2/\Lambda^2)}\Big\},
\nonumber\\
\end{eqnarray}
where, $\Lambda$ is the QCD scale parameter. Also, $b_0$ and $b_1$ are given by
\begin{eqnarray}
b_0=\frac{33-2n_f}{12\pi}, \quad b_1=\frac{153-19n_f}{24\pi^2},
\end{eqnarray}
where, $n_f$ is the number of active quark flavors. In this work, we adopt
$\Lambda_{\overline{\text{MS}}}^{(5)}=231.0$~MeV adjusted such
that $\alpha_s^{(5)}(\mu)=0.1184$ for $\mu=m_Z=91.1876$~GeV \cite{Nakamura:2010zzi}.\\
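
For reference, Eq.~(\ref{alpha}) together with the quoted value of $\Lambda_{\overline{\text{MS}}}^{(5)}$ can be checked with the short Python snippet below; it reproduces $\alpha_s^{(5)}(m_Z)\approx 0.1184$ and also gives the coupling at other scales such as $\mu=m_H$.
\begin{verbatim}
# NLO running coupling, Eq. (alpha), with Lambda_MSbar^(5) = 231 MeV, n_f = 5.
import math

def alpha_s(mu, Lambda=0.231, nf=5):
    b0 = (33 - 2 * nf) / (12 * math.pi)
    b1 = (153 - 19 * nf) / (24 * math.pi**2)
    L = math.log(mu**2 / Lambda**2)
    return (1.0 - b1 * math.log(L) / (b0**2 * L)) / (b0 * L)

print(alpha_s(91.1876))   # ~0.1184 at mu = m_Z
print(alpha_s(200.0))     # coupling at mu_R = m_H = 200 GeV
\end{verbatim}
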
First, we present the numerical results for the NLO decay rate $\Gamma(H^+\to t\bar{b})$ in the ZM-VFN scheme. To do this, we consider $d\Gamma_b/dx_b$ (\ref{pol1}) and integrate over $x_b~(0\leq x_b\leq 1)$. Our results for various values of $m_{H^+}$ read
\begin{eqnarray}\label{rate}
\Gamma^{NLO}&=&\Gamma_0(1-0.01574),\quad \text{for}\quad m_{H^+}=200~GeV\nonumber\\
\Gamma^{NLO}&=&\Gamma_0^\prime(1-0.05396),\quad \text{for}\quad m_{H^+}=400~GeV\nonumber\\
\Gamma^{NLO}&=&\Gamma_0^{\prime\prime}(1-0.07050),\quad \text{for}\quad m_{H^+}=800~GeV.\nonumber\\
\end{eqnarray}
The decay rate at the Born level (\ref{gammatreezero}) depends on $m_{H^+}$ and $\tan\beta$. For the tree-level decay rates in the above relations we have $\Gamma_0=0.7493\cot^2\beta$, $\Gamma_0^\prime=15.6038\cot^2\beta$ and $\Gamma_0^{\prime\prime}=42.9046\cot^2\beta$, in units of GeV.
From Eq.~(\ref{rate}), it is seen that the QCD corrections decrease the charged Higgs boson decay width and their amounts depend on the charged Higgs mass. Note that, for the total decay rate of process $H^+\to \bar{b} t(\to bW^+(\to l^+\nu_l))$ the above results should be multiplied by $B(t\to bW^+)$ and $B(W^+\to l^+\nu_l)$, see Eq.~(\ref{br}).
Now, we go back to our main aim: the evaluation of energy distribution of B-hadrons in heavy charged Higgs decays. For this purpose, we use the factorization relation (\ref{eq:master}) where to describe the splitting $(b, g)\rightarrow B$, from Ref.~\cite{Salajegheh:2019ach} we employ the
realistic nonperturbative $B$-hadron FFs determined at NLO in the ZM-VFN scheme. These FFs have been determined through a global fit to
electron-positron annihilation data presented by ALEPH \cite{Heister:2001jg} and OPAL
\cite{Abbiendi:2002vt} at CERN LEP1 and by SLD \cite{Abe:1999ki} at SLAC SLC. According to the approach used in \cite{Salajegheh:2019ach}, the power ansatz $D_b(z,\mu_F^\text{ini})=Nz^\alpha(1-z)^\beta$ is adopted for the $b\to B$ splitting where the free parameters have been determined at the initial scale $\mu_F^\text{ini}=4.5$~GeV. The fit yielded $N=2575.014$, $\alpha=15.424$, and $\beta=2.394$. The gluon FF is assumed to be zero at the initial scale $\mu_F^\text{ini}$ and generated
via the DGLAP evolution equations \cite{dglap}.
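
For illustration, the quoted power ansatz can be evaluated directly at the initial scale; the small sketch below (not part of the fit itself) shows that $D_b(z,\mu_F^\text{ini})$ is strongly peaked at large $z$, around $z\approx\alpha/(\alpha+\beta)\approx 0.87$.
\begin{verbatim}
# Evaluation of the b -> B FF power ansatz at mu_F^ini = 4.5 GeV
# with the fitted parameters quoted above.
import numpy as np

N, alpha, beta = 2575.014, 15.424, 2.394
D_b = lambda z: N * z**alpha * (1.0 - z)**beta

z = np.linspace(0.0, 1.0, 1001)
print("peak at z ~", z[np.argmax(D_b(z))])   # ~0.87
\end{verbatim}
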
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth,bb=137 42 690 690]{fig2}
\caption{\label{fig2}%
The $x_B$ spectrum in heavy charged Higgs decay in the 2HDM. The NLO result (dashed line) is compared to the LO one (solid line) taking $\tan\beta=2$ and $m_{H^+}=200$~GeV. }
\end{center}
\end{figure}
In Fig.~\ref{fig2}, our prediction for the energy spectrum of bottom-flavored hadrons is presented by plotting $d\Gamma/dx_B$ versus $x_B$. For this prediction, we have studied the size of the NLO corrections by comparing the LO (solid line) and NLO (dashed line) distributions. In order to study the importance of NLO corrections at the parton level, we evaluated the LO distribution using the same NLO $b\to B$ FF. Our results show that the NLO corrections lead to a significant enhancement of the partial decay width in the peak region and above, while these corrections decrease the size of partial decay rate in the lower-$x_B$ range. It should be noted that, the contribution of gluon splitting is appreciable only in the low-$x_B$ region. For higher values of $x_B$, the contribution of b-quark fragmentation dominates, as expected \cite{Kniehl:2012mn}.
\\
In Fig.~\ref{fig1}, the dependence of the $x_B$ spectrum on $\tan\beta$ is studied, taking $m_{H^+}=200$~GeV. As can be seen, when the value of $\tan\beta$ increases the decay rate decreases, because the Born rate $\Gamma_0$ (\ref{gammatreezero}) is proportional to $\cot^2\beta$.
\\
In Fig.~\ref{fig3}, fixing $\tan\beta=2$, we have investigated the dependence of the $x_B$ spectrum on the charged Higgs mass, taking $m_{H^+}=200$~GeV (solid line), $m_{H^+}=400$~GeV (dashed line) and $m_{H^+}=600$~GeV (dot-dashed line).
This figure shows that as $m_{H^+}$ increases, the size of the partial decay width increases as well. Nevertheless, the peak position of the $x_B$-distribution is approximately independent of the charged Higgs mass.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth,bb=137 42 690 690]{fig1}
\caption{\label{fig1}%
$d\Gamma(H^+\to BX)/dx_B$ as a function of $x_B$ in the 2HDM for different values of $\tan\beta=1$, $3$ and $6$. The mass of the heavy charged Higgs is fixed to $m_{H^+}=200$~GeV. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth,bb=137 42 690 690]{fig3}
\caption{\label{fig3}%
The $x_B$ spectrum in charged Higgs decay for different values of charged Higgs mass: $m_{H^+}=200$~GeV (solid line), $m_{H^+}=400$~GeV (dashed line) and $m_{H^+}=600$~GeV (dot-dashed line). }
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:three}
The SM of particle physics predicts one neutral Higgs boson, whereas the Minimal Supersymmetric Standard Model requires five Higgs particles: three neutral bosons
and two charged bosons. The discovery of charged Higgs bosons would be proof of new physics beyond the SM. For this reason, searches for charged
Higgs bosons are strongly motivated, so that in recent years they have been a goal of many high-energy colliders such as the
CERN LHC. Searches for light charged Higgs bosons (particles lighter than the top quark) have been inconclusive and no
evidence has yet been found. In this regard, the results reported by the CMS and ATLAS Collaborations show a large excluded region in the MSSM $m_{H^+}-\tan\beta$ parameter space.
Therefore, it seems that most efforts should be concentrated on probing heavy charged Higgs bosons (heavier than the top quark). These scalar bosons are predicted to decay predominantly either to a tau lepton and its associated neutrino ($\tau\bar{\nu}_\tau$), or to a top and a bottom quark ($t\bar{b}$). Although the decay channel $H^+\to t\bar{b}$ suffers from a large multi-jet background, it dominates in the heavy mass region. \\
In this work, we studied the dominant decay channel $H^+\to t\bar{b}(+g)$ followed by the hadronization process $(b,g)\to B$. At colliders, bottom-flavored hadrons can be identified by a displaced decay vertex associated with charged lepton tracks. In other words, B-hadrons decay to the $J/\psi$, followed by the $J/\psi\to \mu^+\mu^-$ decay, see Ref.~\cite{Kharchilava:1999yj}. Then a muon in the jet is associated with the b-flavored hadron. Furthermore, one can also explore another way to associate the $J/\psi$ with the corresponding isolated lepton, by measuring the jet charge of the identified $b$ and not requiring the tagging muon. Therefore, at the LHC and future colliders the decay channel $H^+\rightarrow B+X$ is proposed to search for heavy charged Higgs bosons, and evaluating the distribution in the scaled energy ($x_B$) of B-mesons would be of particular interest. This distribution is studied by evaluating the quantity $d\Gamma/dx_B$.
To present our phenomenological prediction of the $x_B$-distribution, we first calculated an analytic
expression for the NLO radiative corrections to the differential decay width $d\Gamma(H^+\to t\bar{b})/dx_a (a=b, g)$ and then
employed the nonperturbative $(b,g)\to B$ FFs, relying on their universality and scaling violations.
Our results have been presented in the ZM-VFN scheme where the b-quark mass is ignored from the beginning. In this scheme, results are the same in both the type-I and II 2HDM scenarios.\\
Our analysis is expected to make a contribution to the LHC searches for charged Higgs bosons.
In fact, a comparison between the energy spectrum of B-mesons produced from charged Higgs decays in the 2HDM and the one from top decays in the SM ($t\to B+X$) would indicate a signal of new physics beyond the SM.
\\
Our analysis can also be extended to the production of hadron species other than B-hadrons, such as pions, kaons and protons. This would be possible by using the nonperturbative $(b, g)\rightarrow \pi/K/P/D^+$ FFs presented in Refs.~\cite{Soleymaninia:2013cxa,Nejad:2015fdh,Salajegheh:2019srg,Salajegheh:2019nea}.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,489 |
{"url":"http:\/\/math.stackexchange.com\/questions\/364520\/fx-frac-e2x-1-1e2x-1-what-is-the-value-of-f1-2009-f\/364527","text":"# $f(x) = \\frac {e^{2x-1}} {(1+e^{2x-1})}.$ What is the value of $f(1\/2009) + f(2\/2009) + \u2026 + f(2008\/2009)$?\n\nHere's the question.\n\nLet $f(x) = \\frac {e^{2x-1}} {(1+e^{2x-1})}$\n\nThen what is the value of $f(1\/2009) + f(2\/2009) + ... + f(2008\/2009)$ ?\n\nAll I could think of doing was to add and subtract 1 in the numerator of the function, to get the value of the sum as 2008 - {something}.\n\nHints??\n\nEDIT: please note the correction. $f(x) = \\frac {e^{2x-1}} {(1+e^{2x-1})}$ and NOT $\\frac {e^{2x-1}} {(1-e^{2x-1})}$\n\n-\n\nHint: Consider $f(x) + f(1-x)$.\n\n-\nGot the answer! Thanks!!!! \u2013\u00a0 Parth Thakkar Apr 17 '13 at 15:24\nGreat Hint! +1... \u2013\u00a0 DonAntonio Apr 17 '13 at 15:25\n@CalvinLin If you don't mind me asking, what is the difference between brilliant.org and sites such as art of problem solving? \u2013\u00a0 AlanH Jun 3 '13 at 21:55\n\nI'd like to just bring about a curiosity, now that we have Calvin's very clever observation that leads to the exact result. But the sum looks a lot like a Riemann sum to me, so I went forth:\n\n$$\\sum_{k=0}^{2008} f\\left(\\frac{k}{2009}\\right) \\approx 2009 \\int_0^1 dx \\: f(x)$$\n\nNote of course that the sum on the LHS goes from $k=0$, and not $k=1$ as specified in the OP. Keep that in mind.\n\nSo the integral is easily evaluated:\n\n$$\\int_0^1 dx \\: f(x) = \\frac{1}{2} \\log{\\left(\\frac{1+e}{1+(1\/e)}\\right)} = \\frac{1}{2}$$\n\nTherefore\n\n$$\\sum_{k=0}^{2008} f\\left(\\frac{k}{2009}\\right) \\approx \\frac{2009}{2}$$\n\nBut as I included the $k=0$ term in this sum, let's subtract it out to get the correct approximation:\n\n$$\\sum_{k=1}^{2008} f\\left(\\frac{k}{2009}\\right) \\approx \\frac{2009}{2} - \\frac{1}{1+e}$$\n\nThe exact value of the sum is $1004$, from which the above approximate result has a relative error of\n\n$$\\frac{e}{2 (1+e) 2008} \\sim 0.02\\%$$\n\nJust saying.\n\n-","date":"2015-05-22 10:45:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8720990419387817, \"perplexity\": 499.2620204319683}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-22\/segments\/1432207924919.42\/warc\/CC-MAIN-20150521113204-00245-ip-10-180-206-219.ec2.internal.warc.gz\"}"} | null | null |
// https://hacks.mozilla.org/2011/03/the-shortest-image-uploader-ever/
function upload(file) {
/* Is the file an image? */
if (!file || !file.type.match(/image.*/)) return;
/* Lets build a FormData object*/
var fd = new FormData(); // I wrote about it: https://hacks.mozilla.org/2011/01/how-to-develop-a-html5-image-uploader/
fd.append("image", file); // Append the file
var xhr = new XMLHttpRequest(); // Create the XHR (Cross-Domain XHR FTW!!!) Thank you sooooo much imgur.com
xhr.open("POST", "https://api.imgur.com/3/upload.json"); // Boooom!
xhr.setRequestHeader('Authorization', 'Client-ID c2e15b62bf762a8');
xhr.onload = function() {
// Big win!
var data = JSON.parse(xhr.responseText),
textarea = document.getElementById('textarea'),
texta = textarea.value.split("\n");
if (data.data && data.data.link) {
link = "";
texta.push(link);
textarea.value = texta.join("\n");
}
//document.querySelector("#link").href = JSON.parse(xhr.responseText).upload.links.imgur_page;
}
/* And now, we send the formdata */
xhr.send(fd);
}
function clickHandler(videoId) {
return function() {
// Create an iFrame with autoplay set to true
var iframe = document.createElement("iframe");
iframe.setAttribute("src",
"https://www.youtube.com/embed/" + videoId + "?autoplay=1&autohide=1&border=0&wmode=opaque&enablejsapi=1");
// The height and width of the iFrame should be the same as parent
iframe.className = 'video_frame'
// Replace the YouTube thumbnail with YouTube HTML5 Player
this.parentNode.replaceChild(iframe, this);
}
};
function youtube() {
var re = /youtube\.com\/watch\?.*v=(.+?)(&|$)/
var links = document.getElementsByTagName('a');
var len = links.length;
for (var i = 0; i < len; i++) {
var href = links[i].href;
var matches = href.match(re);
if (matches) {
var videoId = matches[1];
var div = document.createElement('div');
div.className = 'youtube';
var image = document.createElement('img');
image.className = 'thumb'
image.setAttribute('src', 'http://i.ytimg.com/vi/'+videoId+'/hqdefault.jpg');
div.appendChild(image);
image.onclick = clickHandler(videoId)
links[i].parentElement.insertBefore(div, links[i]);
}
}
}
youtube();
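// Resize an image element so that its larger dimension does not exceed `max` (default 320px).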
function rsz(elem, max) {
if (elem == undefined || elem == null) return false;
if (max == undefined) max = 320;
if (elem.width > elem.height) {
if (elem.width > max) elem.width = max;
} else {
if (elem.height > max) elem.height = max;
}
}
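// Toggle an image between its natural size and the constrained size from rsz(),
// using the alt attribute as a simple state flag.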
function tgl(elem, max) {
if (elem == undefined || elem == null) return false;
if (elem.alt != 1) {
elem.removeAttribute('width');
elem.removeAttribute('height');
elem.alt = 1;
} else {
elem.removeAttribute('alt');
rsz(elem, max);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,509 |
Say goodbye to March the best way you know how. We've got plenty of events to get excited about over the weekend!
Find out how to kickstart your career in the beauty industry with our accredited training courses. Don't miss the live demonstrations, giveaways, personalised tours, mini facials, skin treatments, and seminars. Visit ACTI's Open Day from 10AM to 2PM (there will be two sessions of practical workshops at 10am & 12pm) to catch all the exciting activities, or pop in any time to speak with their friendly team. They are now enrolling for April 2019 with full time and part time study options available. Enrolments open for domestic and international students. ACTI Training is located in 26 Mort Street, Braddon. For more info or to RSVP, visit their website.
Roll up Canberra, the greatest party is BACK! The Spiegeltent is returning to breathe life into Canberra … or better yet; LIFE The Show. The rest of the pop-up festival line-up is known for presenting a series of world-class music, comedy, and dancing acts over a thrilling couple of weeks. Catch The Spiegeltent while you can from 30 March to 21 April at the Canberra Theatre Centre's forecourt. You can count on the tent's line-up to bring lively entertainment and jaw-dropping acts to your night out. Headlining is the adults-only and jaw-dropping circus-cabaret sensation LIFE The Show, from the makers of Blanc de Blanc. For more info and tickets, click here.
Celebrate the launch of Canberra District's Wine Week with more than 50 wines on offer at our annual Wine Tasting event hosted in the beautiful courtyard at New Acton. From 2pm to 6pm, collect your Canberra Wines 'Tasting Glass' and embark on a taste of our 'liquid geography'. If you think you know about Canberra Wines, then think again, as this year's Festival theme will have you thinking – 'Expect the Unexpected.' Food will also be available to purchase, so grab your friends and make an afternoon of it. Entry is $25 presale and $30 at the door. Ticket price includes a reusable tasting glass to take home. Buy your tickets early and save here.
Members of the Armstrong Siddeley Car Club of Australia, as a part of their centenary celebrations of the marque, invite you to Old Parliament House to view a collection of vintage cars of 'Aircraft Quality'. Armstrong Siddeleys were extensively used in the 1950s to the mid 1960s by the Federal Government. Queen Elizabeth was chauffeured in an Armstrong Siddeley during her 1954 visit to Australia, and even Sir Charles Kingsford Smith owned a number of Armstrong Siddeleys. The event will run 10am-3pm. See event brochure here.
Come along to the Australian National Botanic Gardens for a boutique sale with plants grown by the gardens' Growing Friends from material sourced in the gardens. All proceeds go to support the Gardens. The gates open from 8.30am – 11am, make sure to come early for the best selection! Autumn is considered to be an excellent time for planting, allowing plants to settle in to their new environment after the summer heat and before the winter cold so get growing!
Happy birthday to Brodburger! To celebrate 10 years of Brod they are saying a big fat thank you to Canberra for all of your support, they're throwing one big birthday bash the best way we know how – with burgers! On Saturday 30th March 5:30 to 8:30pm they'll be giving away FREE BURGERS at Brod Kingston! There'll also be live music and entertainment from local bands, DJs and performers, plenty of Brod merchandise, lawn games, competitions and so much more. The Canberra glassworks will also be open with live glass blowing of a Brodburger, created by glass artist Annette Blair.
Matt Corby is coming to Canberra and it's kind of a big deal! Matt and the band will be touring Australia & New Zealand this March/April and will be making a stop by Canberra to play a show at the UC Refectory. For tickets, visit their website.
The Canberra MS Walk + Fun Run sets off from 8am – 4pm on Rond Terrace and takes a scenic route around the beautiful Lake Burley Griffin past the iconic sites of Canberra. Stay hydrated at our water stops and watch out for roving entertainers along the way. Then it's back to Rond Terrace to replenish and relax with food, music and entertainment for the whole family. The MS Walk + Fun Run route is wheelchair and stroller friendly. Click here to register.
Roll up, roll up for our Family Fun Day Screening of your favourite big-eared elephant. Join Limelight at 1.30pm followed by a special screening of the live action remake of Dumbo from 2pm. Each ticket will include free face painting! Don't miss one of the cutest movies of the year! Buy tickets here. | {
"redpajama_set_name": "RedPajamaC4"
} | 2,423 |
The 32nd World Weightlifting Championships were held in Munich from 12 to 16 October 1955. Representatives of the USSR triumphed in the medal table. A total of 108 athletes took part.
Results
Medal table
External links
Competition results at Sport-komplett.de
World Weightlifting Championships
1955 in sport | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,180 |
package de.hub.clickwatch.connection.adapter;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import com.google.inject.ImplementedBy;
import de.hub.clickwatch.connection.adapter.internal.MergeAdapter;
import de.hub.clickwatch.merge.Merger;
import de.hub.clickwatch.model.Handler;
import de.hub.clickwatch.model.Node;
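/**
 * Adapter that merges {@link Node} and {@link Handler} instances into the
 * existing model and keeps track of which structural features have changed.
 */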
@ImplementedBy(MergeAdapter.class)
public interface IMergeAdapter {
public void merge(Node node);
public void merge(Handler handler);
public Merger getMerger();
public boolean hasChanged(EObject object, EStructuralFeature feature);
public boolean hasChanges();
public void addChange(EObject object, EStructuralFeature feature);
public void clearChanges();
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,598 |
14 Customs, Traditions, and Social Norms in French Culture
Funny phrases
Untranslateable Words
A Culture of Customs in France
When you're in France, the last thing you want to do is offend a French person. That's why you should know the most important and common French customs and traditions, so you're informed when you travel. Part of reaching French fluency is knowing about the culture and social norms of France. So, here are 14 interesting, weird, unique, and entertaining customs in French culture.
1. How to Greet French People
As far as social norms in France go, the most important you should know is how to greet French people. The French have a lot of different ways to say hello, so it's not easy to navigate between the customs.
Bises/kisses on the cheek: The traditional kisses on the cheek is a very important greeting in France. But, there are certain rules for it. Normally, it only happens with women, children, and men who are very close friends. Let a French person initiate the bises if you're uncertain whether it's appropriate or not. Usually, there are only two bises given, but in certain regions of France it can be up to 4.
Handshake: When you meet a person for the first time, greet them with a handshake. Only after a second meeting switch to a more relaxed greeting.
Always say "bonjour": "Bonjour" means "good day", and it's the most common way to say hello in French. Whether you're meeting a friend or walking into a shop, "bonjour" will always work well in French etiquette.
2. Should You Bring Wine to a Dinner Party in France?
While bringing a bottle of wine to a dinner party in the US is a thoughtful and polite gesture, it may not be in France. French dinner party hosts may think that you judge their hosting or wine serving abilities. They could think that you're bringing a wine to drink for yourself because the wine your host chose won't suit you. While this French social norm isn't as serious as it may seem, it's best to avoid offending the person who invited you to their home.
If you want to be safe, just in case, perhaps opt for a different gift to bring to a dinner party. Flowers are a very nice touch, but be careful, because certain flowers have meanings. You can also gift chocolates and sweets as well.
3. Are the French Always Late?
While in the US and the UK there's a culture of arriving early, the French social norm is the exact opposite. French hosts need an extra 10-15 minutes to set up the table or fix their makeup before guests arrive. Arriving early (or even on time) shows that you expect to be entertained promptly. That's why French people are always late. If everyone's late, nobody is. It's best to be fashionably late to dinner parties in France.
4. What Do French Kids Eat?
French food is exquisite, and even the kids know this. There's no "kids menu" in France. While American children opt for chicken nuggets and French fries, French children eat the same kinds of food their parents do. Being well behaved at a table, and eating healthy meals is a requirement for every French child. Healthy eating habits are taught and followed even with school lunches in France. This French custom has defined generations of French cultural upbringing.
5. You Can Eat a Baguette On the Street
Speaking of eating, a very strange and unique social norm in France is that you can't eat food on the street. While this part of French culture is fading away with the popularity of food trucks, it's still important today.
However, there is one exception to this French custom: baguettes. As you're walking home with a bag full of fresh, warm baguettes, it's totally acceptable and customary in France to munch on their ends. But, only eat it until you reach the bag. If you keep eating the baguette from the bag, that's the same faux pas as eating on the street.
6. Wedding Bells and Honking Horns in French Tradition
When a couple gets married there are a lot of cultural traditions to follow. And the French have their own unique traditions to add to weddings. On the way from the church to the reception, the entire wedding party honks their car horns as a French tradition. This makes for a very loud celebration.
But, what's even better, sometimes strangers on the street join in too. Cars not part of the wedding party also honk their horn in celebration for the couple.
7. The French Tradition of Getting Slapped in the Face with a Fish
Yes, you read that right. The French have a tradition where you get slapped in the face with a fish. Carnivals are grand celebrations across France that date back hundreds of years. They signify the coming of spring, harvest, and having a good time. And of course, they all have their unique French traditions in every region.
In Dunkirk, the most peculiar French traditions happen. This custom is not one you'd forget if you ever experience it. People dress up in very colorful clothes to gather on the main square. As the party reaches its peak, they release 450 kilos (almost 1000 pounds) of herrings on the crowd. This is certainly one of the craziest and most unique French customs out there.
8. Celebrating French Wine is a French Tradition
French wine is exquisite. But, there's one grape that deserves a special celebration in France: beaujolais nouveau. On the third Thursday of November (kind of like Thanksgiving) the production of this wine ends, and it finally arrives in stores. It's a French custom to rush to supermarkets (kind of like Black Friday), or gather in a bar have a glass of beaujolais nouveau. This signifies the new season.
9. The French Custom of Ugly Hats
While French fashion is a very important part of French culture, there's one holiday where French clothes aren't exactly aesthetic. On November 25th, French customs celebrate St. Catherine's Day. It's customary for unwed women over 25 to pray for a husband. These bachelorettes receive an ugly yellow and green hat from their married friends.
10. Gloomy Bachelor or Bachelorette Party Culture in France
While in the US bachelor and bachelorette parties are crazy parties with a lot of nudity and inflatable private parts, France has very different customs. They have a very dark and somewhat morbid view of the affair. It's more like a French funeral. The party's name is "enterrement de vie de garcon (EVG)/ jeune fille" literally translates to 'funeral of the life of the young man/ woman'.
French grooms and brides-to-be wear comic tombstones or costumes to signify the end of their youth and freedom. This is definitely one of the weirder traditions French people have. Learning a new language causes less anxiety than the prospect of marriage in France.
11. Fishy April's Fool Customs in France
April's Fools is a day for pranks and jokes. In French customs, fish are the ultimate topic of comedy. That's why this day's called "Poisson d'Avril" (April Fish) in French. Some French traditions call for people to stick paper fish on the back of unsuspecting family members, friends, or even strangers. Newspapers also publish funny news with names of fish woven into the words for comedic effect.
12. Bastille Day is a French Holiday Tradition
The 14th of July marks the historic day of the storming of the Bastille Prison in Paris. This national holiday is the French version of the American 4th of July. Military parades march on the Champs-Élysées, and firework shows mark the celebration of this French tradition. Families gather outside for picnics and outdoor activities. It's a very fun and patriotic French custom; if you're in France on Bastille Day, make sure you take part.
13. Smaller Personal Space in French Social Norms
Personal space is not as important to the French as it is to Anglo-Saxons and Americans. They are comfortable with a distance of one foot, while British people normally stand about two feet apart. In general, when you see French people conversing, they will stand in closer proximity than British or American people.
14. 13 is Unlucky in French Superstition
It would be unwise to leave a list of French customs and traditions with only 13. That's because 13 is an unlucky number in French superstition. Seating 13 guests at a table is a great faux pas. Just like there were 13 people around the table at the Last Dinner, with Jesus and the 12 apostles. Judas betrayed Jesus, so 13 is an unlucky number.
Learn French to Practice French Customs
While these French customs and traditions vary between regions, there's one thing that ties them together. All of French people speak French. If you truly want to fit into French culture, you need to learn the language. Luckily, learning French is fun and easy with OptiLingo.
OptiLingo is an app that brings you results. By giving you only the most common French words and phrases, this app doesn't waste your time. Guaranteed French fluency in minimal time. To start your language learning journey the right way download OptiLingo today!
French Food Culture and Dining Experience
What Expatriates Should Know About Working in France
19 Unique French Words You Need to Learn
The Best Way to Learn French in 10 Easy Steps | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,020 |
\section{Supplementary Material}
In this Supplemental Material we present further technical details of the experimental setup, the adjustment procedure, as well as additional measurement data.
\begin{figure*}[!b]
\includegraphics[width=0.7\textwidth]{setup.pdf
\caption{Monochromatized neutrons pass through a polarizing magnet (polarizer), selecting only the $\ket{+z}$-spin component to fulfill the Bragg-condition inside the interferometer. Direct-current spin-rotator DC1 turns the spin into the direction of flight. Inside the interferometer, the RFG creates a magnetic field $\boldsymbol B_{\rm{RFG}}(\Omega,t)$ in path $\ket{I}$, rotating with angular velocity $\Omega$ in a plane perpendicular to the neutron beam. In path $\ket{II}$, a local z-field $B_{\rm{loc}}$ is mainly used to compensate different times of interaction with the guide-field $B_0$ between the paths because of the length of the RFG. In this process, deviations of the guide field in both paths are also compensated. After recombination at the last interferometer plate, the neutrons in the forward direction are detected in a He-3 counter tube. For magnetic adjustment, DC2 and the analyzer supermirror (analyzer) can be placed in front of the detector but are removed before the final measurement. Spin directions are indicated with blue arrows, magnetic fields in pink. The phase shifter plate ($\chi$) consists of a slab of sapphire and is rotated to record interferograms. \label{fig:setupSupp}}
\end{figure*}
\subsection{Experimental Setup}
The experimental setup applied for the measurement of spin-rotation coupling in neutron interferometry is schematically illustrated in Fig.\,\ref{fig:setupSupp}. A beam of monochromatized neutrons with a wavelength $\lambda=\SI{1.9}{\textup{\AA}}$ is split by a magnetic prism into two divergent, antiparallelly polarized sub-beams with spin states $\ket{\pm z}$. Next the neutrons enter a static magnetic guide field region, consisting of a coil in Helmholtz configuration that is placed between polarizer and analyzer and generates a field $\boldsymbol B_0=9\,{\rm G}\,\hat{\boldsymbol e}_z$. The purpose of the guide field is to prevent depolarization because of magnetic stray fields, thereby upholding the degree of polarization of $P\sim 0.99$. Inside the guide field, a direct current spin rotator (DC1) is placed in front of the interferometer which rotates the polarization vector by $\pi/2$ from $\pm z$ to $\pm y$, the flight direction of the neutrons. They traverse to a silicon perfect crystal interferometer which is aligned such that only the $+y$ polarized sub-beam fulfills the Bragg condition of the lattice at the first plate of the interferometer. Consequently, the $-y$ polarized sub-beam never reaches the detectors (in practice it is absorbed by a Cd-slab, which is not depicted in Fig.\,\ref{fig:setupSupp}).
In path $\ket{I}$ of the interferometer, the RFG generates a rotating magnetic field denoted as $\boldsymbol B_{\rm{RFG}}(\Omega,t)=B_1\cos(\Omega t)\hat e_x+(B_1\sin(\Omega t)-B_0)\hat e_z$, rotating with angular velocity $\Omega$ and amplitude $B_1$ in a plane perpendicular to the neutron beam. This coil is the key element of the neutron optical setup and induces the spin-rotation coupling that is to be observed in the actual experiment. In path $\ket{II}$, a Larmor accelerator \cite{Geppert14} produces a local z-field $B_{\rm{loc}}$. This field is used to compensate differences in the Larmor precession angles (induced by the guide field $B_0$) acquired in the two paths.
Finally a direct current spin rotator (DC2) as well as a spin analyzer, which only transmits $\ket{+z}$-spins, are placed in front of the detector for the adjustment procedure which is described below.
\subsection{Adjustment Procedure}
The coil DC1 is used to prepare the spin state $\ket{+y}$ for the RFG while DC2 and analyzer are used in the adjustment procedure of the experiment to analyze the $\ket{+y}$ spin component. As Larmor precession is induced inside the guide field between the coils, their distances need to be adjusted such that the input for the RFG and DC2 is a $\ket{+y}$ spin state. To do this, path $\ket{II}$ is blocked with a Cadmium absorber, which is not depicted in Fig\,\,\ref{fig:setupSupp}. Coil DC1 rotates the spin by $\pi/2$ into the $x$-$y$-plane by Larmor precession about the static magnetic field inside the coil, which is pointing in $+x$-direction. Another rotation of $\pi/2$ about the same axis inside the RFG causes a minimum intensity in the case of a Larmor precession with an angle $2\pi$ in the guide field in between DC1 and RFG with Larmor frequency $\omega_0=-\frac{2\mu}{\hbar} B_0$. The observed intensity oscillation is depicted in Fig.\,\ref{fig:Dist}. The positions of DC1 and the RFG are fixed at the minimum intensity. The same procedure is repeated to adjust the position of DC2 relative to the RFG.
\begin{figure*}[!h]
\includegraphics[width=0.5\textwidth]{DistScanSupp.pdf
\caption{Intensity modulation due to variation of the distance between DC 1 and RFG. Both devices are set to induce a $\pi/2$ spin rotation. Error bars indicate $\pm$ one standard deviation.\label{fig:Dist}}
\end{figure*}
\begin{figure}[!b]
\includegraphics[width=0.81\textwidth]{InterferogramsSupp_WithSM.pdf}
\caption{All interferograms recorded as intensity oscillations by rotating the phase shifter orientation with Supermirror and DC 2 inserted. With increasing frequency the interferograms are again continuously shifted.
}
\label{fig:interferogramSM}
\end{figure}
\begin{figure}[!b]
\includegraphics[width=0.8\textwidth]{InterferogramsSupp.pdf}
\caption{All interferograms recorded as intensity oscillations by rotating the phase shifter orientation. With increasing frequency the interferograms are also continuously shifted.
}
\label{fig:interferogram}
\end{figure}
From now on, the RFG is supplied with oscillating currents, phase shifted by $\pi/2$ with respect to each other, to produce the rotating field, while each DC coil produces a $\pi/2$ rotation. For each frequency (\SIrange{0}{20}{\kilo\hertz}), the amplitudes of both currents through the RFG are simultaneously increased, starting from zero (which yields a minimum intensity), until the same minimum intensity is produced again. This signifies the case of a cyclic rotation of the polarization vector inside the Mashhoon box.
\par
As we want to investigate a Mashhoon phase shift in the interference between otherwise identical wave functions, both paths need to be recombined with aligned spin states. Therefore, the $z$-coil $B_{\mathrm{loc}}$ in path II is used, serving as Larmor accelerator. This coil is required since inside the RFG the $z$-field offset locally compensates the guide field, such that there is no static guide field present inside the RFG.
To adjust the spin alignment, the absorber (not depicted) is switched to block path $\ket{I}$ and the $\pi/2$-rotations of both DC1 and DC2 are maintained. The current in the $z$-coil for the magnetic field $B_{\rm{loc}}$ in path $\ket{II}$ is scanned. The spin orientations of both beams are aligned in parallel at the last plate by choosing the Larmor-accelerator current that produces a minimum intensity.
The above setup procedure is necessary to adjust the currents and relative positions of DC1, DC2, the Larmor accelerator and the RFG. The absorber is removed for the following measurements.
\par
Interferograms were produced with the described setup. The observed results are plotted in Fig.\,\ref{fig:interferogramSM}, showing a phase shift with spin analysis.
In contrast to the description in the main text, only the last step of the measurements was conducted with absorber, DC2, and analyzer removed. All obtained interferograms contributing to the plot of the final results (main text Fig.\,3) are shown in Fig.\,\ref{fig:interferogram}. The only difference is a higher count rate due to the fewer neutron optical components inserted in the setup, mainly because of the transmission $T\sim 0.4$ of the supermirror for the $+z$-spin component (and $T=0$ for $-z$).
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,498 |
"Du får göra som du vill" is a song written by Patrik Isaksson and recorded by himself for his 1999 debut album När verkligheten tränger sig på. The song was awarded a Grammis award for "Song of the year 1999".
The single peaked at No. 11 in the Swedish singles chart, and was tested for Svensktoppen on 20 March 1999, but failed to enter the chart.
The song appeared in the 2004 film Fröken Sverige.
Erik Linder recorded the song for his 2009 album Inifrån.
Charts
References
External links
Information at Svensk mediedatabas
1999 singles
Swedish-language songs
Swedish pop songs
1999 songs
Sony Music singles | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,472 |
package v2actions
import (
"code.cloudfoundry.org/cli/api/cloudcontroller"
"code.cloudfoundry.org/cli/api/cloudcontroller/ccv2"
)
// Domain represents a CLI Domain.
type Domain ccv2.Domain
// DomainNotFoundError is an error wrapper that represents the case
// when the domain is not found.
type DomainNotFoundError struct{}
// Error method to display the error message.
func (e DomainNotFoundError) Error() string {
return "Domain not found."
}
func isResourceNotFoundError(err error) bool {
_, isResourceNotFound := err.(cloudcontroller.ResourceNotFoundError)
return isResourceNotFound
}
// GetDomain returns the shared or private domain associated with the provided
// Domain GUID.
func (actor Actor) GetDomain(domainGUID string) (Domain, Warnings, error) {
var allWarnings Warnings
domain, warnings, err := actor.GetSharedDomain(domainGUID)
allWarnings = append(allWarnings, warnings...)
switch err.(type) {
case nil:
return domain, allWarnings, nil
case DomainNotFoundError:
default:
return Domain{}, allWarnings, err
}
domain, warnings, err = actor.GetPrivateDomain(domainGUID)
allWarnings = append(allWarnings, warnings...)
switch err.(type) {
case nil:
return domain, allWarnings, nil
default:
return Domain{}, allWarnings, err
}
}
// GetSharedDomain returns the shared domain associated with the provided
// Domain GUID.
func (actor Actor) GetSharedDomain(domainGUID string) (Domain, Warnings, error) {
domain, warnings, err := actor.CloudControllerClient.GetSharedDomain(domainGUID)
if err == nil {
return Domain(domain), Warnings(warnings), nil
}
if isResourceNotFoundError(err) {
return Domain{}, Warnings(warnings), DomainNotFoundError{}
}
return Domain{}, Warnings(warnings), err
}
// GetPrivateDomain returns the private domain associated with the provided
// Domain GUID.
func (actor Actor) GetPrivateDomain(domainGUID string) (Domain, Warnings, error) {
domain, warnings, err := actor.CloudControllerClient.GetPrivateDomain(domainGUID)
if err == nil {
return Domain(domain), Warnings(warnings), nil
}
if isResourceNotFoundError(err) {
return Domain{}, Warnings(warnings), DomainNotFoundError{}
}
return Domain{}, Warnings(warnings), err
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 311 |
Q: MySQL - Ordering by number of times value found within 'IN' statement

I have the following query:
SELECT Transaction.ID
FROM
Transactions
WHERE
Transaction.MetaID
IN (3,4,5,6)
ORDERBY Count(Transaction.MetaID);
Which obviously isn't working. Basically I would like to order the query by the number of times MetaID matches the IN statement - so some may match one of those values, others may match more, and others none.
A: SELECT a.ID
FROM Transactions a
WHERE a.MetaID IN (3,4,5,6)
GROUP BY a.ID
ORDER BY COUNT(a.MetaID);
A: If you want to order by MetaID
SELECT a.ID
FROM Transactions a
WHERE a.MetaID IN (3,4,5,6)
ORDER BY a.MetaID;
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,562 |
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// This file is a part of the 'esoco-business' project.
// Copyright 2018 Elmar Sonnenschein, esoco GmbH, Flensburg, Germany
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
package de.esoco.process;
import de.esoco.entity.Entity;
import de.esoco.history.HistoryManager;
import de.esoco.history.HistoryRecord;
import de.esoco.lib.property.MutableProperties;
import de.esoco.process.param.ParameterBase;
import de.esoco.process.step.InteractionFragment;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.obrel.core.RelationType;
import org.obrel.core.RelationTypeModifier;
import org.obrel.core.RelationTypes;
import org.obrel.type.MetaTypes;
import org.obrel.type.StandardTypes;
import static de.esoco.history.HistoryManager.HISTORIZED;
import static de.esoco.lib.property.StateProperties.PROPERTIES_CHANGED;
import static de.esoco.lib.property.StateProperties.STRUCTURE_CHANGED;
import static de.esoco.lib.property.StateProperties.VALUE_CHANGED;
import static de.esoco.process.ProcessRelationTypes.AUTO_UPDATE;
import static de.esoco.process.ProcessRelationTypes.CONTINUATION_PARAM;
import static de.esoco.process.ProcessRelationTypes.CONTINUATION_PARAMS;
import static de.esoco.process.ProcessRelationTypes.HISTORY_END;
import static de.esoco.process.ProcessRelationTypes.HISTORY_START;
import static de.esoco.process.ProcessRelationTypes.HISTORY_TARGET_PARAM;
import static de.esoco.process.ProcessRelationTypes.INTERACTION_PARAMS;
import static de.esoco.process.ProcessRelationTypes.INTERACTION_EVENT_PARAM;
import static de.esoco.process.ProcessRelationTypes.STOP_PROCESS_EXECUTION;
import static de.esoco.process.ProcessRelationTypes.TRANSACTION_END;
import static de.esoco.process.ProcessRelationTypes.TRANSACTION_START;
import static org.obrel.core.RelationTypes.newType;
import static org.obrel.type.MetaTypes.TRANSACTIONAL;
import static org.obrel.type.StandardTypes.NAME;
/********************************************************************
* Base class for all kinds of process steps. This class is serializable and
* uses the default serialized form. Subclasses must declare a serialVersionUID
* field and implement correct serialization of any fields that they define. In
* general it is recommended that subclasses don't define fields but use
* relations instead. The types of all relations that shall not be serialized
* must have the flag {@link RelationTypeModifier#TRANSIENT} set. Further
* information about the serialization of processes can be found in the
* documentation of class {@link Process}.
*
* <p>A process step can be set to be wrapped inside a history group or a
* transaction by setting one of the flags {@link HistoryManager#HISTORIZED} or
* {@link MetaTypes#TRANSACTIONAL}, respectively. The {@link #execute()} method
* will then be invoked after a transaction and a history group have been
* started. History always includes a transaction to combine the writing of the
* history and persistent changes by the step.</p>
*
* @author eso
*/
public abstract class ProcessStep extends ProcessFragment
{
//~ Static fields/initializers ---------------------------------------------
private static final long serialVersionUID = 1L;
/** The next step's name */
public static final RelationType<String> NEXT_STEP = newType();
static
{
RelationTypes.init(ProcessStep.class);
}
//~ Instance fields --------------------------------------------------------
/** Will be restored by the parent process on deserialization */
private transient Process rProcess;
private boolean bMarkingAsModified;
private Set<RelationType<?>> aNewInteractionParams = null;
// collects modifications of parameters in the full fragment hierarchy
private Set<RelationType<?>> aModifiedParams = new HashSet<>();
//~ Constructors -----------------------------------------------------------
/***************************************
* Default constructor, which must be provided by any subclass.
*/
public ProcessStep()
{
}
//~ Methods ----------------------------------------------------------------
/***************************************
* Overridden to mark structure changes for legacy process interactions.
*
* @see ProcessFragment#addDisplayParameters(Collection)
*/
@Override
public void addDisplayParameters(
Collection<? extends RelationType<?>> rParams)
{
super.addDisplayParameters(rParams);
prepareNewInteractionParameters(rParams);
}
/***************************************
* Overridden to mark structure changes for legacy process interactions.
*
* @see ProcessFragment#addSubFragment(RelationType, InteractionFragment)
*/
@Override
public void addSubFragment(
RelationType<List<RelationType<?>>> rFragmentParam,
InteractionFragment rSubFragment)
{
super.addSubFragment(rFragmentParam, rSubFragment);
// necessary for legacy process step based interactions that do not
// use InteractionFragment
setUIFlag(STRUCTURE_CHANGED, get(INTERACTION_PARAMS));
}
/***************************************
* Returns the step's name.
*
* @return The step's name
*/
public final String getName()
{
return get(NAME);
}
/***************************************
* @see ProcessFragment#getProcess()
*/
@Override
public final Process getProcess()
{
return rProcess;
}
/***************************************
* Implemented to return this instance.
*
* @see ProcessFragment#getProcessStep()
*/
@Override
public ProcessStep getProcessStep()
{
return this;
}
/***************************************
* Checks whether a parameter has been modified by the process during the
* last interaction cycle.
*
* @param rParam The parameter to check
*
* @return TRUE if the parameter has been modified
*/
public boolean isParameterModified(RelationType<?> rParam)
{
return aModifiedParams.contains(rParam) ||
hasUIFlag(PROPERTIES_CHANGED, rParam);
}
/***************************************
* Removes the modification markers for a certain process parameter.
*
* @param rParam The process parameter
*/
public void removeParameterModification(ParameterBase<?, ?> rParam)
{
removeParameterModification(rParam.type());
}
/***************************************
* Removes the modification markers for a certain parameter relation type.
*
* @param rParamType The parameter relation type
*/
public void removeParameterModification(RelationType<?> rParamType)
{
MutableProperties rUiProperties = getUIProperties(rParamType);
if (rUiProperties != null)
{
rUiProperties.removeProperty(VALUE_CHANGED);
rUiProperties.removeProperty(PROPERTIES_CHANGED);
rUiProperties.removeProperty(STRUCTURE_CHANGED);
}
aModifiedParams.remove(rParamType);
}
/***************************************
* Resets all parameter modification markers for this step.
*/
public void resetParameterModifications()
{
// iterate over a copy because aModifiedParams is modified
for (RelationType<?> rParamType : new ArrayList<>(aModifiedParams))
{
removeParameterModification(rParamType);
}
}
/***************************************
* @see Object#toString()
*/
@Override
public String toString()
{
String sResult = getClass().getSimpleName();
String sName = getName();
if (!sResult.equals(sName))
{
sResult += "[" + sName + "]";
}
return sResult;
}
/***************************************
* Executes the process step. This method must be implemented by subclasses
	 * to provide the functionality of the process step. In case of errors the
* implementation may throw any kind of exception. The Process class will
* catch any Throwable that is thrown from this method.
*
* @throws Exception Any kind of exception may be thrown if executing the
* step fails
*/
protected abstract void execute() throws Exception;
/***************************************
* This method will be invoked on the currently suspended step if the parent
* process is rolled back. Most step implementations won't need to override
* this method. It is intended only for special steps that perform complex
* tasks like the execution of sub-processes. The default implementation
* does nothing.
*
* @throws Exception Any kind of exception can be thrown
*/
protected void abort() throws Exception
{
}
/***************************************
* This method is similar to the {@link #rollback()} method, but it is
* invoked if the enclosing interactive process is canceled completely.
* Therefore it is not necessary to restore the process parameters in this
* method. It only needs to be implemented if the process step needs to undo
* a modification that has been performed on execution and which must not be
* persistent if the process is canceled. This is an infrequent case because
* in most cases this will be implemented differently, e.g. by making
* persistent changes in the final, non-interactive steps of the process.
*
* <p>The default implementation does nothing.</p>
*
* @throws Exception Any kind of exception may be thrown if canceling fails
*/
protected void cancel() throws Exception
{
}
/***************************************
* This method must be overridden by subclasses that support a rollback of
* their processing. If it returns TRUE the step must also implement the
* method {@link #rollback()} with the rollback functionality.
*
* <p>This default implementation always returns FALSE.</p>
*
* @return TRUE if the step implementation support a rollback
*/
protected boolean canRollback()
{
return false;
}
/***************************************
	 * Check whether the execution of this process should be stopped.
*
* @return TRUE if the process should be stopped, FALSE otherwise.
*/
protected boolean checkStopProcessExecution()
{
return hasFlagParameter(STOP_PROCESS_EXECUTION);
}
/***************************************
* This method will always be invoked at the end of a process (whether
* successful or not) for all executed steps. Process steps that allocate
* resources should override this method to free such resources if that
* hasn't already be done in a regular way. Multiple invocations of this
* method can occur and should be handled correctly by implementations. The
* default implementation does nothing.
*/
protected void cleanup()
{
}
/***************************************
* Returns the name of the next process step that shall be executed after
* this one.
*
* @return The name of the next process step
*/
protected String getNextStep()
{
String sNextStep;
if (isContinuedInteraction())
{
// re-execute this step again after an interactive input event
sNextStep = getName();
}
else
{
sNextStep = get(NEXT_STEP);
}
return sNextStep;
}
/***************************************
* Internal method to invoke the {@link #execute()} method. Should only be
* invoked by framework classes.
*
* @throws Exception Any exception may be thrown by subclasses
*/
protected void internalExecute() throws Exception
{
execute();
}
/***************************************
* Checks whether this step must be interrupted to perform an interaction to
* query additional data. The default implementation returns TRUE if the
* flag {@link MetaTypes#INTERACTIVE} is set to TRUE and at least one
* interaction parameter is present.
*
* @return TRUE if an interaction is needed
*
* @throws Exception Any exception may be thrown by subclasses
*/
protected boolean needsInteraction() throws Exception
{
return hasFlag(MetaTypes.INTERACTIVE) &&
get(INTERACTION_PARAMS).size() > 0;
}
/***************************************
* This method can be overridden by (interactive) subclasses if an
* interaction continuation occurs. That is the case if the parameter that
* caused the interaction is an element of the continuation parameter list
* that is stored in {@link ProcessRelationTypes#CONTINUATION_PARAMS}. The
* framework stores the corresponding parameter into {@link
* ProcessRelationTypes#CONTINUATION_PARAM} before this method will be
* invoked.
*
* <p>The default implementation does nothing.</p>
*
* @throws Exception Any exception may be thrown
*/
protected void prepareContinuation() throws Exception
{
}
/***************************************
* Prepares the execution of this step in succession of the previous step by
* invoking {@link #prepareParameters()} and {@link #prepareValues()}. This
* method should only be overridden by framework classes.
*
* @throws Exception Any exception may be thrown
*/
protected void prepareExecution() throws Exception
{
prepareParameters();
prepareValues();
}
/***************************************
* Prepares a re-execution of this step during an interaction by invoking
* the method {@link #prepareValues()} and resets the interaction parameter.
* This method should only be overridden by framework classes.
*
* @throws Exception Any exception may be thrown
*/
protected void prepareInteraction() throws Exception
{
prepareValues();
setParameter(INTERACTION_EVENT_PARAM, null);
}
/***************************************
* Prepares new interaction parameters for rendering.
*
	 * @param rParams The new interaction parameters to prepare for rendering
*/
protected void prepareNewInteractionParameters(
Collection<? extends RelationType<?>> rParams)
{
if (rProcess != null)
{
for (RelationType<?> rParam : rParams)
{
// do not use paramModified() to prevent setting of the (empty)
// parameter because the VALUE_CHANGED property is set; this
// can cause problems with some legacy processes
markParameterAsModified(rParam);
if (rParam.getTargetType() == List.class &&
rParam.get(MetaTypes.ELEMENT_DATATYPE) ==
RelationType.class)
{
// necessary for legacy process step based interactions that do not
// use InteractionFragment
setUIFlag(STRUCTURE_CHANGED, rParam);
}
}
}
else
{
if (aNewInteractionParams == null)
{
aNewInteractionParams = new HashSet<>(rParams);
}
aNewInteractionParams.addAll(rParams);
}
}
/***************************************
* This method can be overridden to prepare this step's parameters for
* execution. The main use of this method is for steps that are interactive
* to prepare the interaction parameters. This method will not be invoked if
* this step is re-executed because of the modification of an interactive
* input parameter. In that case only {@link #prepareValues()} will be
* invoked.
*
* @throws Exception Any exception may be thrown if the preparation fails
*/
protected void prepareParameters() throws Exception
{
}
/***************************************
* This method can be overridden to prepare a step's parameter values after
* their initialization in the method {@link #prepareParameters()} or to
* update the values after an interaction occurred.
*
* @throws Exception Any exception may be thrown if the preparation fails
*/
protected void prepareValues() throws Exception
{
}
/***************************************
* Will be invoked by a process on a rollback to reset parameter
* initializations performed by interactive steps. This method will be
* invoked on the currently suspended step as well as on any step on which
* the {@link ProcessStep#rollback()} method is invoked.
*
* @throws Exception Any exception may be thrown if the reset fails
*/
protected void resetParameters() throws Exception
{
}
/***************************************
* This method can be overridden to resume this step after the process had
* been suspended. It will be invoked before the {@link #execute()} method
* is called when the process had been suspended by this step after a
* previous call to the {@link #prepareParameters()} method. A possible
* application would be to collect the user input from the parameters for an
* interactive step.
*
* <p>If the method is invoked after an interactive input occurred the
* interaction parameter will still be set. It will be reset automatically
* before the process continues the execution.</p>
*
* @return TRUE if the process can continue with the execution of this step,
* FALSE if this step requires another interaction first
*
* @throws Exception Any exception may be thrown if resuming the step fails
*/
protected boolean resume() throws Exception
{
return true;
}
/***************************************
* Must be implemented by a subclass if it can perform a rollback of a
* previous execution. It is guaranteed by the framework that this method
* will only be invoked after the step has been executed already. It is
* intended mainly for interactive processes that stop execution at certain
* (interactive) steps and can be rolled back to previous such steps. A step
* implementation that supports rollback must return TRUE from it's
* overridden {@link #canRollback()} method.
*
* <p>A successful rollback must leave this step in a state that allows it
* and the following steps in the enclosing process to be executed again. On
* re-execution, the method {@link #prepareParameters()} will be invoked
* again too before execution. Basically, after a rollback the parameters of
* the enclosing process should be in the same state as they had been before
* the execution of this step.</p>
*
* <p>This default implementation always throws a {@link ProcessException}
* stating that a rollback is not supported.</p>
*
* @throws Exception Any exception may be thrown if the rollback fails
*/
protected void rollback() throws Exception
{
throw new ProcessException(this,
String.format("Rollback not supported by process step %s [%s]",
getName(),
getClass().getSimpleName()));
}
/***************************************
* Sets the name of the next process step that shall be executed after this
* one.
*
* @param sNextStepName The name of the next process step
*/
protected void setNextStep(String sNextStepName)
{
set(NEXT_STEP, sNextStepName);
}
/***************************************
* This method must be overridden by subclasses that require initialization.
* It will be invoked automatically after a step instance has been created
* and added to it's process. Overriding classes should invoke the super
* method.
*
* @throws ProcessException Can be thrown by subclasses if the
* initialization fails
*/
protected void setup() throws ProcessException
{
}
/***************************************
* Throws a runtime exception that signals a missing process parameter.
*
* @param rParamType The relation type of the missing parameter
*/
@Override
protected <T> void throwMissingParameterException(
RelationType<T> rParamType)
{
throw new IllegalStateException(String.format("Parameter %s not set",
rParamType));
}
/***************************************
* Will be invoked to validate the step's parameters after an interaction
* has occurred. This method processes all validation functions that are
* stored in {@link ProcessRelationTypes#PARAM_VALIDATIONS} and throws an
* {@link InvalidParametersException} if at least one validation fails.
* Subclasses can override this method to implement their own validations
* but should in most cases also invoke the superclass method.
*
* <p>The validation will not occur if an interactive input parameter exists
* and the method {@link #continueOnInteraction(RelationType...)} returns
* FALSE because then this step will be prepared and executed again to
* continue with the current interaction. Validation only occurs on the
* transition to the next step.</p>
*
* @throws InvalidParametersException If a preset parameter validation fails
* @throws Exception Any exception may be thrown if the
* validation fails
*/
protected void validate() throws Exception
{
handleParamValidation(true);
RelationType<?> rInteractionParam =
getParameter(INTERACTION_EVENT_PARAM);
if (!isContinuedInteraction() || isContinuationParam(rInteractionParam))
{
handleParamValidation(false);
}
}
/***************************************
* Invokes the validation functions that are stored in the process step
* relations {@link ProcessRelationTypes#INTERACTION_PARAM_VALIDATIONS} and
* {@link ProcessRelationTypes#PARAM_VALIDATIONS}. It can be overridden by
* subclasses to perform more complex parameter validations that cannot be
* described by a single validation function. It returns a mapping from
* invalid parameters to the corresponding error messages if at least one
* parameter validation fails. If no error occurs the returned map will be
* empty.
*
* <p>Subclasses should normally invoke the superclass method and add their
* own error message to the returned map if necessary. That allows the user
* interface of interactive steps to display all failures at once.</p>
*
* @param bOnInteraction TRUE if the validation occurs during an
* interaction and FALSE if it occurs when the
* process progresses to the next step
*
* @return The mapping from invalid parameters to the corresponding error
* messages
*/
protected Map<RelationType<?>, String> validateParameters(
boolean bOnInteraction)
{
return performParameterValidations(getParameterValidations(bOnInteraction));
}
/***************************************
* Package-internal method to be overridden by subclasses that allow
* multiple interactions in a single step which can be rolled back with this
* method. This is needed for steps that invoke sub-processes. This method
* should always be invoked before executing a rollback with the method
* {@link #rollbackToPreviousInteraction()}.
*
* <p>This default implementation always returns FALSE.</p>
*
* @return TRUE if this step support the rollback to a previous interaction
*/
boolean canRollbackToPreviousInteraction()
{
return false;
}
/***************************************
* Returns the interaction process step for this step. This default
* implementation always returns THIS but subclasses can return different
* step, e.g. when they execute sub-processes.
*
* @return The interaction step
*/
ProcessStep getInteractionStep()
{
return this;
}
/***************************************
* Checks whether this step is a interaction that has been initialized
* before.
*
* @return TRUE if this interaction performs a continued interaction
*/
final boolean isContinuedInteraction()
{
return getParameter(INTERACTION_EVENT_PARAM) != null ||
hasFlag(AUTO_UPDATE);
}
/***************************************
* Records a parameter modification that can later be queried with {@link
* #isParameterModified(RelationType)}. Modifications are changes to
* parameter values or properties.
*
* @param rParamType The relation type of the modified parameter
*/
void parameterModified(RelationType<?> rParamType)
{
if (!bMarkingAsModified)
{
bMarkingAsModified = true;
setUIFlag(VALUE_CHANGED, rParamType);
aModifiedParams.add(rParamType);
bMarkingAsModified = false;
}
}
/***************************************
* Performs this process step by invoking it's execute method. If either
* history or transactions handling are enabled for this step the execution
* will be wrapped inside a history group and/or a transaction.
*
* @return The name of the next process step to execute after this one
*
* @throws Exception If performing the process step fails
*/
final String perform() throws Exception
{
boolean bHistorized = hasFlag(HISTORIZED);
boolean bTransactional = hasFlag(TRANSACTIONAL);
boolean bBeginHistory = bHistorized || hasFlag(HISTORY_START);
boolean bCommitHistory = bHistorized || hasFlag(HISTORY_END);
boolean bBeginTransaction =
bBeginHistory || bTransactional || hasFlag(TRANSACTION_START);
boolean bCommitTransaction =
bCommitHistory || bTransactional || hasFlag(TRANSACTION_END);
setParameter(CONTINUATION_PARAM, null);
if (bBeginTransaction)
{
RelationType<? extends Entity> rTargetParam =
get(HISTORY_TARGET_PARAM);
String sValue = get(HistoryRecord.VALUE);
Entity rTarget =
rTargetParam != null ? getParameter(rTargetParam)
: get(HistoryRecord.TARGET);
if (sValue == null)
{
sValue = getProcess().get(StandardTypes.NAME) + "." + getName();
}
getProcess().beginTransaction(bBeginHistory, rTarget, sValue);
}
internalExecute();
RelationType<?> rInteractionParam =
getParameter(INTERACTION_EVENT_PARAM);
if (!isContinuedInteraction())
{
removeAllSubFragments();
executeCleanupActions();
}
else if (isContinuationParam(rInteractionParam))
{
setParameter(CONTINUATION_PARAM, rInteractionParam);
setParameter(INTERACTION_EVENT_PARAM, null);
prepareContinuation();
}
if (bCommitTransaction)
{
getProcess().commitTransaction(bCommitHistory);
}
return getNextStep();
}
/***************************************
* An internal method that prepares this step for execution If no
* interaction occurred it invokes {@link #prepareExecution()}.
*
* @return TRUE if the process can continue with the execution of this step,
* FALSE if this step requires an interaction first
*
* @throws Exception Any exception may be thrown if the preparation fails
*/
boolean prepareStep() throws Exception
{
if (isContinuedInteraction())
{
prepareInteraction();
}
else
{
// execute any remnant finish actions and clear action list
executeCleanupActions();
prepareExecution();
}
return !needsInteraction();
}
/***************************************
* Package-internal method to be overridden by subclasses that allow
* multiple interactions in a single step which can be rolled back with this
* method. This is needed for steps that invoke sub-processes. This method
* should only be invoked after querying the capability of a rollback with
* the method {@link #canRollbackToPreviousInteraction()}.
*
* <p>This default implementation does nothing.</p>
*
* @throws ProcessException If the rollback fails
*/
void rollbackToPreviousInteraction() throws ProcessException
{
}
/***************************************
* Sets the step's name. Will only be used internally for special purposes
*
* @param sName The new name of the step
*/
void setName(String sName)
{
set(NAME, sName);
}
/***************************************
* Package internal method to associate this step with a particular
* instance.
*
* @param rProcess The new parent process of this step
*/
void setProcess(Process rProcess)
{
this.rProcess = rProcess;
if (aNewInteractionParams != null)
{
prepareNewInteractionParameters(aNewInteractionParams);
aNewInteractionParams = null;
}
}
/***************************************
* Evaluates a list of parameter validation functions and throws an
* exception for all invalid parameters.
*
	 * @param bOnInteraction TRUE if the validation occurs during an interaction
	 *                       and FALSE if it occurs when the process progresses
	 *                       to the next step
*
* @throws InvalidParametersException If one or more parameters are invalid
*/
private void handleParamValidation(boolean bOnInteraction)
throws InvalidParametersException
{
Map<RelationType<?>, String> rInvalidParams =
validateParameters(bOnInteraction);
if (rInvalidParams.size() > 0)
{
throw new InvalidParametersException(this, rInvalidParams);
}
}
/***************************************
* Checks whether an interaction on the given parameter should continue the
* process execution.
*
* @param rParam The parameter to check
*
* @return TRUE if the parameter will continue the process execution
*/
private boolean isContinuationParam(RelationType<?> rParam)
{
return hasRelation(CONTINUATION_PARAMS) &&
get(CONTINUATION_PARAMS).contains(rParam);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 47 |
This weekend we want to surprise you with stunning landscapes by French photographer David Keochkerian. He is a master of landscapes and, as you can easily notice, he pays a lot of attention to post-processing. His photos are beautiful, stunning, amazing… Of course we could continue with enthusiastic epithets, but they are no substitute for a personal impression. So don't waste your time and jump into the post! Enjoy!
"redpajama_set_name": "RedPajamaC4"
} | 7,372 |
\section{Introduction}
The classical risk measure for insurance companies is the probability of ruin.
This quantity has been studied intensively since \citet{lundberg1903} introduced his model for the surplus of an insurance company in 1903.
However, in many models a trajectory can only either lead to ruin or tend to infinity.
Hence, \citet{definetti1957} introduced expected discounted future dividend payments as a valuation principle for a homogeneous insurance portfolio,
which constitutes an alternative risk measure.
This concept originally comes from corporate finance, where the firm value is often determined by the accumulated future dividend payments.
Our aim is to solve the valuation problem of an insurance company in these terms. Since its introduction, the dividend maximization problem has been solved in various setups, for example by \citet{shreve1984}, \citet{jeanblanc1995}, \citet{radner1996}, and \citet{asmussen1997} in diffusion models. \citet{albrecher20112} study a similar problem with random intervention times.
Overviews can be found in \citet{albrecher20092}, or \citet{avanzi2009}. For an introduction to optimization problems in insurance in general we refer to \citet{schmidli2008} and \citet{azcue2014}.\\
However, all of these contributions assume a constant economic environment. The dividend maximization problem is considered over a potentially long time horizon, which makes the assumption that the economy does not change a strong one. Since a changing economic environment structurally influences the insurance market, we would like to incorporate such changes into our model.
In setups which allow for a change in the economic environment the dividend maximization problem has been studied for example by \citet{jiang2012}, \citet{sotomayor2011}, \citet{zhu2013}, \citet{albrecher2012}, and \citet{azcue2015}. \citet{jiang2012}, \citet{sotomayor2011}, and \citet{zhu2013} consider a diffusion model for the surplus, the parameters of which are driven by an underlying Markov chain. Such a setup is called {\em regime switching model}. In a Cram\'er-Lundberg type model \citet{albrecher2012} allow the economy to become worse once before ruin, and \citet{azcue2015} allow for a finite number of shifts of the economy. In all these models, the driving Markov chain is assumed to be {\em observable}. This means that they assume full information and therefore their models differ from the model studied herein.\\
We use the dividend maximization approach to solve the valuation problem of an insurance company in a {\em hidden} Markov model.
More precisely, we model the surplus process of the insurance company as a Brownian motion with drift. The drift is assumed to be driven by an underlying Markov chain, the current state of which is {\em unobservable} under the available information.
In contrast to \citet{sz12} and \citet{sz13}, where the setup was Bayesian, i.e., the underlying Markov chain was not allowed to change its state, we allow for regime shifts. This gives the model a different interpretation.
Here, the different states of the Markov chain represent different phases of the economy. It is of certain interest on the one hand to allow for changes in the economic environment and on the other hand it is realistic that the current state of the economy is not exactly known instantaneously, but only over time, and also over time the drift of a diffusion cannot be estimated satisfactorily, see \cite[Chapter 4.2]{rogers2013} -- even more if shifts are allowed.
Thus allowing for uncertainty extends \cite{sotomayor2011} to a more practically relevant direction.\\
In the literature of mathematical finance hidden Markov models have already been used intensively for studying investment problems, e.g., in \citet{karatzas2001}, \citet{sass2004}, \citet{rieder2005}, or \citet{frey2012, frey2014}. Also dividend problems were solved in the mathematical finance literature, see \citet{hubalek2004} and \citet{grandits2007}, who seek to maximize the expected accumulated utility of dividends, and the expected utility of accumulated dividends, respectively. However, in the insurance context related results concerning hidden Markov models are scarce. We refer to \citet{gerber1977} as an example, who models the value of a single insurance policy as a Brownian motion with unobservable drift describing uncertainty about the quality of a risk. Another, more recent example is the paper by \citet{liang2014}, who study optimal reinsurance and investment under unobservable claim size and intensity.
In \citet{decamps2015} a valuation problem similar to the dividend maximization problem is studied in a rather specific model from the point of view of corporate finance.\\
This paper is organized as follows.
In Section \ref{sec:Setup} we define our model and show how to overcome uncertainty by applying a result from stochastic filtering theory, and thus transform the setup into one under full information.
The stochastic optimization problem under study is presented in Section \ref{sec:Opti}.
For solving the stochastic optimization problem we derive the Hamilton-Jacobi-Bellman (HJB) equation and characterize the optimal value function as a viscosity solution to this HJB equation.
We also prove uniqueness of the viscosity solution even though there is a lack of boundary conditions.
In Section \ref{sec:Num} we treat the problem numerically.
First of all we examine the filter dynamics. Then, we solve the HJB equation numerically. For this we need to introduce a correction term to ensure positivity of the scheme, but we can show that the approximate solution converges.
We present a multitude of numerical examples.
Furthermore, we are able to prove admissibility of the candidate optimal strategies, which amounts to showing that the underlying system of stochastic differential equations with discontinuous drift and degenerate diffusion has a solution.
Section \ref{sec:Concl} concludes the paper.\\
The main contribution of this paper is the analytical characterization of the solution to the dividend maximization problem in a hidden Markov switching model, including a non-standard uniqueness proof of the generalized solution to the associated Hamilton-Jacobi-Bellman (HJB) equation, and an extensive numerical study of the outcomes of this model. The intention behind this study is to impart good comprehension of the model and of how to optimally pay out dividends. Assuming only partial information makes the model more natural and realistic.
\section{Setup and filtering}
\label{sec:Setup}
All stochastic variables introduced in the following are assumed to be defined on the filtered probability space $({\cal E}, {\cal F}, \{{\cal F}_t\}_{t \ge 0}, \P)$.\\
The surplus process is given by
\begin{equation}
\label{eq:dynX1}X_t=x+\int_0^t \mu_s \,ds+\sigma B_t - L_t\,,
\end{equation}
with initial capital $x>0$, where $\mu=(\mu_t)_{t \ge 0}$ is the unobservable drift process, $\sigma$
is the constant and known volatility, and $B=(B_t)_{t \ge 0}$ is a standard Brownian motion.
The accumulated dividend process $L$ is given by
\begin{align}
\label{eq:dynL1}dL_t=u_t \,dt\,,
\end{align}
with $L_0=0$ and density $u_t \in [0,K]$ for all $t\ge0$ that serves as the control variable in our optimization problem.
Note that the surplus process $X$ is always associated to a dividend strategy, however, for notational reasons, we will not make that explicit.\\
Furthermore, let $\mu_t:=\mu(Y_t)$, where $Y=(Y_t)_{t\ge 0}$ is an $M$-state Markov chain with known generator matrix $Q=(q_{ij})_{i,j=1}^M$. Let $\mu_t \in \lbrace \mu_1, \dots, \mu_M \rbrace$, where $\mu_t=\mu_i$, if $Y_t=e_i$, and $e_i$ is the $M$-dimensional unit vector the $i$-th component of which is $1$. Without loss of generality let $\mu_1 > \dots > \mu_M$. We assume that the current state of the Markov chain is unobservable under the observation filtration, but we know its initial distribution $\P(Y_0=e_i)=p_i$ with $p_i > 0$ for all $i \in \lbrace 1, \dots ,M\rbrace$ and $\sum_{i=1}^M p_i = 1$.\\
Note that it is crucial that the volatility is assumed not to be driven by the Markov chain, as then it would be possible to estimate the current state from the quadratic variation.\\
The uncontrolled surplus process $Z=(Z_t)_{t\geq 0}$ is given by
\begin{align}
\label{eq:dynZ1}Z_t&= x + \int_0^t \mu_s\, ds+\sigma B_t\,.
\end{align}
As the dividend payments have to be adapted to the uncontrolled process, the observation filtration is given by $\{{\cal F}^{Z}_t\}_{t\ge 0} \subset \{{\cal F}_t\}_{t\ge 0}$, which is the augmentation of the filtration generated by $Z$.\\
\subsection*{Stochastic filtering}
\label{subsec:filtering}
As $\mu_t$ is not ${\cal F}^Z_t$-measurable, we are in a situation of partial information. To overcome this uncertainty, we apply a result from stochastic filtering theory. This means we replace the unobservable parameter $\mu_t$ by an estimator, which potentially uses all the information generated by ${\cal F}^Z_t$, but not more. We refer the interested reader to \citet{elliott1995} for more information about hidden Markov models and their filtering, and to \citet{bain2009} for stochastic filtering in general. \citet{rieder2005} suggest using the Wonham filter (see \cite{liptser1977,wonham1964}) for the case where the unobservable variable is driven by a Markov chain.\\
From \citet[Theorem 9.1]{liptser1977} we know the following proposition.
\begin{proposition}
Denote the conditional probability that the Markov chain is in state $i$ at time $t$ (and hence $\mu_t=\mu_i$) as
\begin{align*}
\pi_i(t)=\P(\mu_t=\mu_i \mid {\cal F}_t^{Z})
\end{align*}
for $i= 1, \ldots, M$, and the estimator for the drift as
\begin{align}\label{eq:estimator}
\nu_t={\mathbb{E}}(\mu_t \mid {\cal F}_t^{Z})=\sum_{i=1}^M \mu_i \pi_i (t)\,.
\end{align}
Then $(\pi_1, \dots \pi_M)$ solves the following system of stochastic differential equations
\begin{align}\label{eq:wonham}
\pi_i(t)&=p_i+\int_0^t \sum_{j=1}^M q_{ji} \pi_j (s) \, ds + \int_0^t \pi_i (s) \frac{\mu_i-\nu_s}{\sigma} \, dW_s\,,\\
\pi_i(0)&=p_i\,,
\end{align}
for $i=1,\ldots,M$, with the innovation process
\begin{align}\label{eq:W}
W_t=\int_0^t\frac{\mu_s-\nu_s}{\sigma} \,ds + B_t\,.
\end{align}
Furthermore, $W=(W_t)_{t \ge 0}$ is an $\{{\cal F}_t^{Z}\}_{t \ge 0}$ - Brownian motion.
\end{proposition}
In particular, we have that the estimator $\nu=(\nu_t)_{t \ge 0}$ is adapted to the observation filtration.\\
Please note that $\pi_M(t)=1-\sum_{i=1}^{M-1} \pi_i(t)$ for all $t \ge 0$, which implies that the correct state space of the filter is the simplex $\bar {\cal S} := \{(\pi_1,\dots,\pi_M) \in [0,1]^M: \sum_{i=1}^M \pi_i = 1\}$, the interior of which is denoted by ${\cal S} := \{(\pi_1,\dots,\pi_M) \in (0,1)^M: \sum_{i=1}^M \pi_i = 1\}$.
For later use, we define $\Pi:=(\Pi_t)_{t \ge 0}=(\pi_1(t), \dots, \pi_{M-1}(t))_{t \ge 0}$ and $p:=\Pi_0=(p_1, \dots, p_{M-1})$.
From now on we consider the following $M$-dimensional system of SDEs:
\begin{align}
\label{eq:dynX2}X_t&= x + \int_0^t (\nu_s-u_s) \,ds +\sigma W_t\,,\\
\label{eq:dynpi}\pi_i(t)&=p_i+\int_0^t \left(q_{Mi} + \sum_{j=1}^{M-1} (q_{ji}-q_{Mi}) \pi_j (s)\right) \, ds + \int_0^t \pi_i (s) \frac{\mu_i-\nu_s}{\sigma} \, dW_s \qquad i= 1, \dots, M-1\,,
\end{align}
where
\begin{align}\label{eq:nu}
\nu_t=\mu_M + \sum_{j=1}^{M-1}(\mu_j-\mu_M)\pi_j(t)\,.
\end{align}
Since the system \eqref{eq:dynX2}, \eqref{eq:dynpi} is driven by a single source of uncertainty, which is adapted to the observation filtration, we are now in a situation of full information, but at the cost of $M-1$ additional dimensions.
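For illustration, consider the two-state case $M=2$. Using $q_{11}=-q_{12}$ and $\mu_1-\nu_t=(\mu_1-\mu_2)(1-\pi_1(t))$, the filter equation \eqref{eq:dynpi} reduces to the scalar SDE
\begin{align*}
d\pi_1(t)=\big(q_{21}-(q_{12}+q_{21})\,\pi_1(t)\big)\,dt+\frac{\mu_1-\mu_2}{\sigma}\,\pi_1(t)\big(1-\pi_1(t)\big)\,dW_t\,,
\end{align*}
and the drift estimate \eqref{eq:nu} becomes $\nu_t=\mu_2+(\mu_1-\mu_2)\pi_1(t)$.
The following minimal Euler--Maruyama sketch (Python; all parameter values are placeholders, and it is not part of the numerical scheme used later in the paper) illustrates how the controlled pair from \eqref{eq:dynX2}, \eqref{eq:dynpi} can be simulated for a constant dividend rate $u\in[0,K]$:
\begin{verbatim}
import numpy as np

# two-state example: placeholder parameters, not values from the paper
mu1, mu2, sigma = 2.0, 0.5, 1.0
q12, q21 = 0.5, 0.5            # generator entries q_{12}, q_{21}
u, delta = 1.0, 0.1            # constant dividend rate, discount rate
x0, p0, T, dt = 5.0, 0.5, 50.0, 1e-3
rng = np.random.default_rng(0)

def discounted_dividends():
    x, p, t, val = x0, p0, 0.0, 0.0
    while t < T and x > 0.0:                   # stop at ruin or horizon
        nu = mu2 + (mu1 - mu2) * p             # drift estimate (eq:nu)
        dW = rng.normal(scale=np.sqrt(dt))
        val += np.exp(-delta * t) * u * dt     # discounted dividend flow
        x += (nu - u) * dt + sigma * dW        # surplus (eq:dynX2)
        p += (q21 - (q12 + q21) * p) * dt \
             + (mu1 - mu2) / sigma * p * (1 - p) * dW   # filter (eq:dynpi)
        p = min(max(p, 1e-12), 1 - 1e-12)      # keep the filter in (0,1)
        t += dt
    return val

print(np.mean([discounted_dividends() for _ in range(200)]))
\end{verbatim}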
\section{Stochastic optimization}
\label{sec:Opti}
In this section we at first define the stochastic optimization problem under study. Then we derive the associated HJB equation.
Finally, we present the main result of this section, which is the characterization of the solution of the optimization problem as the unique viscosity solution to the HJB equation.\\
We would like to find the optimal value function $V$, which is the supremum over all dividend policies of the discounted dividend payments up to the time of ruin $\tau:=\inf\{t\ge 0 \, \vline \, X_t\le 0\}$,
\begin{align*}
V(x,p)=\sup_{u\in A} J^{(u)}(x,p)=\sup_{u\in A} {\mathbb{E}}_{x,p} \left(\int_0^{\tau}e^{-\delta t}u_t \,dt\right)\,,
\end{align*}
where $\delta>0$ is the discount rate, $A$ denotes the set of admissible controls, and ${\mathbb{E}}_{x,p}(\cdot)$ denotes the expectation under the conditions $X_0=x$ and $\Pi_0=p$. Admissible controls are $\{{\cal F}^Z_t\}_{t\ge 0}$-progressively measurable, $[0,K]$-valued, and fulfill $u_t \equiv 0$ for $t>\tau$.\\
Note that the underlying system of stochastic processes \eqref{eq:dynX2}, \eqref{eq:dynpi} describes autonomous state dynamics in the sense of \cite[Section IV.5]{fleming2006}. Furthermore, we will consider an infinite time horizon. Therefore, the optimal control will be Markovian.
\begin{lemma}\label{lem:x-infinity}
The optimal value function $V$ is continuous. We have $0\le V \le \frac{K}{\delta}$, $V$ increases in $x$, and $\lim_{x \to \infty} V(x,p)=\frac{K}{\delta}$ uniformly in $p$.
\end{lemma}
\begin{proof}
From \cite[Chapter 3, Theorem 5]{krylov1980} we know that the optimal value function $V$ is continuous.\\
The monotonicity of $V$ with respect to $x$ follows by an argument from \cite[Chapter 2.5.1, p. 97]{schmidli2008}.\\
Clearly, the optimal value function is bounded by $0\le V(x,p)\le \int_0^\infty K e^{-\delta s}ds=\frac{K}{\delta}$, and it is easy to check that in the limit it converges to $\frac{K}{\delta}$, cf. \cite[Chapter 2.5.1, p. 97]{schmidli2008}.
\end{proof}
\subsection*{The Hamilton-Jacobi-Bellman equation}
\label{subsec:HJB}
For deriving the HJB equation we need a version of the dynamic programming principle, or Bellman principle, see \cite[Chapter 3, Theorem 6]{krylov1980}.
\begin{proposition}[Bellman principle]
\label{prop:bellman}
For every bounded stopping time $\eta$ we have
\[
V(x,p)
=\sup_{u \in A} {\mathbb{E}}_{x,p}\left(\int_0^{\tau \wedge \eta}e^{-\delta t}u_t\,dt+e^{-\delta(\tau\wedge\eta)}V(X_{\tau\wedge\eta},\Pi_{\tau\wedge\eta})\right)\,.
\]
\end{proposition}
Now, assuming $V \in C^2$, applying It\^o's formula to $e^{-\delta t}V(X_t,\Pi_t)$ in the Bellman principle, dividing by $\eta$, and letting $\eta \to 0$, one can derive the associated HJB equation:
\begin{align}
\label{eq:HJB}({\cal L} - \delta) V +\sup_{u \in [0,K]}(u(1-V_x))=0\,,
\end{align}
where
\begin{align*}
{\cal L} V&= \mu_M V_x+\sum_{i=1}^{M-1} \left((\mu_i-\mu_M) p_i\ V_x + \left(q_{Mi} + \sum_{j=1}^{M-1} (q_{ji}-q_{Mi}) p_j \right) V_{p_i} + p_i \left( \mu_i - \nu \right) V_{x p_i}\right.\\
&+ \left.\frac{1}{2}\sum_{k=1}^{M-1} \left( \left( p_i \frac{\mu_i-\nu}{\sigma} \right) \left( p_k \frac{\mu_k-\nu}{\sigma} \right) V_{p_i p_k} \right)\right)+ \frac{1}{2} \sigma^2 V_{xx}\,,
\end{align*}
and $\nu$ is given by \eqref{eq:nu}.
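For instance, in the two-state case $M=2$, writing $p:=p_1$ and using $q_{11}=-q_{12}$, the operator reduces to
\begin{align*}
{\cal L} V&=\big(\mu_2+(\mu_1-\mu_2)p\big)V_x+\big(q_{21}-(q_{12}+q_{21})p\big)V_p+(\mu_1-\mu_2)\,p(1-p)\,V_{xp}\\
&\quad+\frac{(\mu_1-\mu_2)^2}{2\sigma^2}\,p^2(1-p)^2\,V_{pp}+\frac{\sigma^2}{2}\,V_{xx}\,.
\end{align*}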
The HJB-equation is a second order degenerate-elliptic PDE, since there is only one Brownian motion driving the $M$-dimensional process $(X_t,\Pi_t)_{t \ge 0}$. The supremum in \eqref{eq:HJB} is attained at
\begin{align*}
u=\begin{cases}
K, & V_x \le 1\\
0, & V_x > 1\,.
\end{cases}
\end{align*}
As boundary conditions we have for $x=0$ and $x\rightarrow \infty$
\begin{align}
\label{eq:bcx1} V(0,p)&=0\,,\\
\label{eq:bcx2} V(x,p)&\rightarrow\frac{K}{\delta} \mbox{ uniformly in }p\mbox{ as }x \rightarrow \infty\,.
\end{align}
For $p_i \in \{0,1\}$, $i=1,\dots,M-1$, no boundary conditions are available. However, as the filter $p$ never reaches the boundary, boundary conditions in these directions are not required for the solution to be well-defined.
The reason for this is that we still get uniqueness in the interior and on the relevant part of the boundary, see Corollary \ref{cor:comparison}.\\
It should be mentioned at this point that the solution to the problem with complete information, i.e., for observable $Y$, does not serve as a boundary condition.
This is because even if we knew that we started in a certain state, a moment later we would again not be able to observe the state, whereas in the model suggested by \cite{sotomayor2011}
one is still able to observe it.
\subsection*{Analytic characterization}
\label{subsec:analytics}
Now we come to the analytic characterization of the optimal value function.
In the Markov switching setup where the current state of the Markov chain is observable (see \cite{sotomayor2011}), the HJB equation can be solved explicitly and the solution is smooth.
In our case the HJB equation \eqref{eq:HJB} is a degenerate-elliptic PDE, which makes the existence of a smooth solution questionable. Thus, we deal with a weaker concept of solutions, namely viscosity solutions. The only required smoothness for this is continuity, however, the concept is still strong enough to prove uniqueness. Furthermore, it is also useful for numerical treatment, see \citet{barles1991}, or \citet[Chapter IX]{fleming2006}. These are two important strengths of the concept of viscosity solutions and make it very beneficial to problems like ours.
Therefore, we are going to characterize the optimal value function $V$ as the unique viscosity solution of \eqref{eq:HJB}.\\
Denote by ${\cal T}:= \{(p_1,\dots,p_{M-1}) \in (0,1)^{M-1}: \sum_{i=1}^{M-1} p_i < 1\}$ the state space of the first $M-1$ dimensions of the filter with closure $\bar {\cal T}:= \{(p_1,\dots,p_{M-1}) \in [0,1]^{M-1}: \sum_{i=1}^{M-1} p_i \le 1\}$ and boundary $\partial \bar {\cal T}$. Further denote
$\Omega:=(0,\infty) \times {\cal T}$, $\bar{\Omega}=[0,\infty)\times \bar {\cal T}$ and let $\partial \bar{\Omega}$ be its boundary. Furthermore, let $\Gamma_-:=(0,\infty)\times\partial \bar {\cal T} \subseteq \partial \bar{\Omega}$.
Then $\Gamma_+:=\partial \bar \Omega \backslash \Gamma_-$ denotes the so-called {\it relevant part} of the boundary.
\begin{definition}\label{def:viscosity}
(viscosity solution)
\begin{enumerate}
\item A function $w:\bar{\Omega} \rightarrow {\mathbb{R}}$ is a {\it viscosity subsolution} to \eqref{eq:HJB}, if
\[
({\cal L}-\delta) \phi(\bar{x},\bar{p}) + \sup_{u \in [0,K]}(u(1-\phi_x(\bar{x},\bar{p})))\ge 0
\]
for all $(\bar{x},\bar{p}) \in \Omega$ and for all $\phi \in C^2(\Omega)$ such that $w-\phi$ attains a maximum at $(\bar{x},\bar{p})$ with $w(\bar{x},\bar{p})=\phi(\bar{x},\bar{p})$.
\item A function $w:\bar{\Omega} \rightarrow {\mathbb{R}}$ is a {\it viscosity supersolution} to \eqref{eq:HJB}, if
\[
({\cal L}-\delta) \psi(\bar{x},\bar{p}) + \sup_{u \in [0,K]}(u(1-\psi_x(\bar{x},\bar{p})))\le 0
\]
for all $(\bar{x},\bar{p}) \in \Omega$ and for all $\psi \in C^2(\Omega)$ such that $w-\psi$ attains a minimum at $(\bar{x},\bar{p})$ with $w(\bar{x},\bar{p})=\psi(\bar{x},\bar{p})$.
\item $w:\bar{\Omega} \rightarrow {\mathbb{R}}$ is a {\it viscosity solution} to \eqref{eq:HJB}, if it is both a viscosity sub- and supersolution.
\end{enumerate}
\end{definition}
The basic idea of viscosity solutions is to estimate the function from below and from above by smooth test functions.
For details about viscosity solutions, see, e.g., \citet{fleming2006}, or \citet{crandall1992}.\\
The following theorem shows the connection between the solution of the optimization problem and the weak solution of the HJB equation.
\begin{theorem}\label{thm:viscosity}
The optimal value function $V$ is a viscosity solution of \eqref{eq:HJB} with boundary conditions \eqref{eq:bcx1} and \eqref{eq:bcx2}.
\end{theorem}
\begin{proof}
We have to show that the optimal value function $V$ is a viscosity sub- and supersolution, cf.~\cite[Proof of Theorem 5.1]{sz12}. \\
\underline{Viscosity supersolution:} Let $\psi \in C^2(\Omega)$, $\psi \le V$ and $(\bar{x},\bar{p})$ such that $V(\bar{x},\bar{p})=\psi(\bar{x},\bar{p})$. Let $\eta >0$.\\
Applying the dynamic programming principle we get
\begin{align*}
\psi(\bar{x},\bar{p})=V(\bar{x},\bar{p}) &= \sup_{u \in A} {\mathbb{E}}_{\bar{x},\bar{p}} \left( \int_0^{\tau \wedge \eta} e^{-\delta t} u_t \,dt + e^{-\delta(\tau \wedge \eta)}V(X_{\tau \wedge \eta}, \Pi_{\tau \wedge \eta})\right)\\
&\ge {\mathbb{E}}_{\bar{x},\bar{p}} \left( u \frac{1-e^{-\delta(\tau \wedge \eta)}}{\delta} + e^{-\delta(\tau \wedge \eta)} \psi (X_{\tau \wedge \eta}, \Pi_{\tau \wedge \eta})\right)
\end{align*}
for any fixed $u \in [0,K]$.\\
Now we apply It\^o's formula to $\psi$, note that the stochastic integrals are martingales, divide by $\eta$ and let $\eta \rightarrow 0$. This yields
\[
0 \ge u-\delta \psi (\bar{x},\bar{p}) + {\cal L} \psi (\bar{x},\bar{p}) -u \, \psi_x (\bar{x},\bar{p})\,.
\]
As $u$ was arbitrary,
\[
0 \ge ({\cal L}-\delta) \psi (\bar{x},\bar{p}) + \sup_{u \in [0,K]} \left(u (1-\psi_x (\bar{x},\bar{p}))\right)\,.
\]
Thus, $V$ is a viscosity supersolution.\\
\underline{Viscosity subsolution:} Let $\phi \in C^2 (\Omega)$, $\phi \geq V$ and $(\bar{x}, \bar{p})$ such that $\phi (\bar{x}, \bar{p}) = V(\bar{x},\bar{p})$.
For $\varepsilon > 0$ let $\eta > 0$ and $u^{\varepsilon,\eta}$ be an $\frac{\varepsilon \eta}{2}$-optimal dividend
policy for the first part of the dynamic programming principle, and denote the surplus coming from $u^{\varepsilon,\eta}$ as $X^{\varepsilon,\eta}$. Then
\begin{align*}
\phi(\bar{x}, \bar{p}) - \frac{\varepsilon \eta}{2} &= V(\bar{x}, \bar{p}) - \frac{\varepsilon \eta}{2} \le {\mathbb{E}}_{\bar{x},\bar{p}} \left( \int_{0}^{\tau \wedge \eta} e^{- \delta t} u_t^{\varepsilon,\eta} \, d t + e^{-\delta(\tau \wedge \eta)} V(X_{\tau \wedge \eta}^{\varepsilon,\eta}, \Pi_{\tau \wedge \eta})\right)\\
& \le {\mathbb{E}}_{\bar{x},\bar{p}} \left( \int_0^{\tau \wedge \eta} e^{- \delta t} u_t^{\varepsilon,\eta} dt + e^{-\delta(\tau \wedge \eta)} \phi (X_{\tau \wedge \eta}^{\varepsilon,\eta}, \Pi_{\tau \wedge \eta})\right)\\
&= {\mathbb{E}}_{\bar{x},\bar{p}} \left( \int_{0}^{\tau \wedge \eta} e^{-\delta t} u_t^{\varepsilon,\eta} dt + e^{- \delta (\tau \wedge \eta)} \left(\phi (\bar{x}, \bar{p})+ \int_0^{\tau \wedge \eta} {\cal L} \phi \, dt - \int_0^{\tau \wedge \eta} \phi_x u_t^{\varepsilon,\eta} \, dt \right)\right)\\
&\le {\mathbb{E}}_{\bar{x},\bar{p}} \left( \int_{0}^{\tau \wedge \eta} e^{-\delta t} \hat u_t^{\varepsilon,\eta}\, dt + e^{- \delta (\tau \wedge \eta)} \left(\phi (\bar{x}, \bar{p})+ \int_0^{\tau \wedge \eta} {\cal L} \phi \, dt - \int_0^{\tau \wedge \eta} \phi_x \hat u_t^{\varepsilon,\eta} \, dt \right)\right)+\frac{\varepsilon \eta}{2}\,,
\end{align*}
where $\hat u^{\varepsilon,\eta}$ is continuous in $t$, has values in $[0,K]$, and approximates $u^{\varepsilon, \eta}$ in $L^1([0,\eta),\lambda)$.
Furthermore, we applied It\^o's formula and used that the stochastic integrals are martingales.
Rearranging the inequality and dividing by $\eta$ yields
\begin{align*}
-\varepsilon &\le {\mathbb{E}}_{\bar{x},\bar{p}} \left( \frac{e^{- \delta (\tau \wedge \eta)}-1}{\eta} \phi (\bar{x}, \bar{p})+\frac{ e^{- \delta (\tau \wedge \eta)}}{\eta} \int_0^{\tau \wedge \eta} {\cal L} \phi \, dt
+ \frac{1}{\eta}\int_0^{\tau \wedge \eta} \left(e^{-\delta t}-e^{- \delta (\tau \wedge \eta)}\phi_x\right) \hat u_t^{\varepsilon,\eta} \, dt \right)\,.
\end{align*}
Now we apply the mean value theorem:
\begin{align*}
-\varepsilon &\le {\mathbb{E}}_{\bar{x},\bar{p}} \left( \frac{e^{- \delta (\tau \wedge \eta)}-1}{\eta} \phi (\bar{x}, \bar{p})+\frac{e^{- \delta (\tau \wedge \eta)}}{\eta} \int_0^{\tau \wedge \eta} {\cal L} \phi \, dt
+ \frac{\tau \wedge \eta}{\eta}\left(e^{-\delta \xi}-e^{- \delta (\tau \wedge \eta)}\phi_x\right) \hat u_\xi^{\varepsilon,\eta} \right)\,,
\end{align*}
and let $\eta \rightarrow 0$ along a sequence:
\begin{align*}
-\varepsilon &\le ({\cal L}-\delta) \phi (\bar{x}, \bar{p})+ \limsup_{\eta \to 0} {\mathbb{E}}_{\bar{x},\bar{p}}\left( \frac{\tau \wedge \eta}{\eta}\left(e^{-\delta \xi}-e^{- \delta (\tau \wedge \eta)}\phi_x\right) \hat u_\xi^{\varepsilon,\eta} \right)\,.
\end{align*}
Fatou's lemma gives
\begin{align*}
& \limsup_{\eta \to 0} {\mathbb{E}}_{\bar{x},\bar{p}}\left( \frac{\tau \wedge \eta}{\eta}\left(e^{-\delta \xi}-e^{- \delta (\tau \wedge \eta)}\phi_x\right) \hat u_\xi^{\varepsilon,\eta} \right)
\le
{\mathbb{E}}_{\bar{x},\bar{p}}\left( \limsup_{\eta \to 0} \frac{\tau \wedge \eta}{\eta}\left(e^{-\delta \xi}-e^{- \delta (\tau \wedge \eta)}\phi_x\right) \hat u_\xi^{\varepsilon,\eta} \right)\\
&= {\mathbb{E}}_{\bar{x},\bar{p}}\left( (1-\phi_x(\bar{x},\bar{p})) \left( \limsup_{\eta \to 0} \hat u_\xi^{\varepsilon,\eta} \,1_{\{1-\phi_x(\bar{x},\bar{p}) \ge 0\}} + \liminf_{\eta \to 0} \hat u_\xi^{\varepsilon,\eta} \,1_{\{1-\phi_x(\bar{x},\bar{p}) < 0\}} \right)\right)
= \tilde u(\bar x, \bar p)(1-\phi_x(\bar{x},\bar{p}))\,,
\end{align*}
where $\tilde u(\bar x, \bar p)= \left( \limsup_{\eta \to 0} \hat u_\xi^{\varepsilon,\eta} \,1_{\{1-\phi_x(\bar{x},\bar{p}) \ge 0\}} + \liminf_{\eta \to 0} \hat u_\xi^{\varepsilon,\eta} \,1_{\{1-\phi_x(\bar{x},\bar{p}) < 0\}}\right) $.
As $\varepsilon>0$ was arbitrary,
\[
({\cal L} - \delta) \phi(\bar{x}, \bar{p}) + \tilde u(\bar x, \bar p)(1-\phi_x(\bar{x}, \bar{p})) \geq 0\,.
\]
Since $\tilde u(\bar x, \bar p)(1-\phi_x(\bar{x}, \bar{p})) \leq \sup_{u \in [0, K]} u (1- \phi_x(\bar{x}, \bar{p}))$, we get
\[
({\cal L} - \delta) \phi(\bar{x}, \bar{p}) + \sup_{u \in [0, K]} u (1- \phi_x(\bar{x}, \bar{p})) \geq 0\,.
\]
Thus, $V$ is also a viscosity subsolution.\\
Altogether, $V$ is a viscosity solution.
\end{proof}
Now it remains to prove uniqueness. For this, one has to prove a weak maximum principle, which in standard proofs results in the statement that if two viscosity solutions are equal on the boundary, they are also equal in the interior of the domain. However, as mentioned above, we have no boundary conditions available in the $p$ directions. But \citet{lions1983b} shows that if the underlying stochastic process does not reach some parts of the boundary of the domain with a positive probability, then as these parts are not reached anyway, the study can be restricted to the interior and the relevant part of the boundary.
\begin{theorem}[Comparison]\label{thm:comparison}
Let $w_1$ and $w_2$ be bounded and continuous viscosity solutions of \eqref{eq:HJB}.\\
If $w_1\leq w_2$ on $\Gamma_+$ and $\lim_{x\to\infty} (w_1-w_2)(x,p)\le 0$ uniformly in $p$,
then $w_1\leq w_2$ on $\Omega$.
\end{theorem}
\begin{proof}
Define $\tau':=\inf\{t \ge 0 \vert (X_t,\Pi_t) \in \partial \bar \Omega\}$. We need to check whether
$\P(\tau'<\infty \,,\,\, (X_{\tau'},\Pi_{\tau'}) \in \Gamma_-)=0$.
From \cite[Corollary 2.2]{chigansky2007} we know that the Wonham filter never reaches the boundary. Therefore, the above probability is indeed zero.
Hence, we may apply \cite[Corollary II.1]{lions1983b}, which proves the statement.
\end{proof}
Uniqueness of the solution of \eqref{eq:HJB} now follows as a corollary.
\begin{cor}\label{cor:comparison}
The optimal value function $V$ is the unique bounded viscosity solution of \eqref{eq:HJB} on $\Omega \cup \Gamma_+$ with boundary conditions \eqref{eq:bcx1} and \eqref{eq:bcx2}.
\end{cor}
The following theorem shows that our analytic characterization includes smooth solutions to the HJB equation. Furthermore, it can be concluded that if there is a dividend policy leading to a smooth value function that solves the HJB equation in the viscosity sense, then this policy is optimal.
\begin{theorem}\label{thm:verification}
Let $w$ be a viscosity supersolution of \eqref{eq:HJB} with
boundary conditions \eqref{eq:bcx1} and \eqref{eq:bcx2}, and $w \in C^2$ almost everywhere. Then $V \le w$.
\end{theorem}
\begin{proof}
The proof runs along the same lines as \cite[Proof of Theorem 5.3]{sz12}.
We begin by convolving $w$ with a Gauss-Weierstrass kernel. To avoid notational ambiguity, we remark that in the following $\pi$ denotes the area of a circle with radius $1$.
Let $\varphi(x,p) := \frac{1}{\pi^\frac{M}{2}} e^{-\left( x^2 + \sum_{i=1}^{M-1} p_i^2 \right) }$ and
\[
\varphi^n(x,p) := n^M \int_{-\infty}^\infty \int_{- \infty}^\infty \dots \int_{- \infty}^\infty w(x-s,(p_1-t_1,\dots p_{M-1}-t_{M-1})) \varphi(ns,nt)\, ds\, dt_1\, \dots \, dt_{M-1}
\]
for $n \in {\mathbb{N}}$, where $nt=(nt_1,\dots,nt_{M-1})$.
Clearly, as $n \to \infty$,
$\varphi^n \to w$ and ${\cal L} \varphi^n \to {\cal L} w$, see \citet{wheeden1977}.\\
For an admissible strategy $u=(u_t)_{t \ge 0}$ and $T>0$,
\begin{align*}
e^{-\delta (T \wedge \tau)} \varphi^n (X_{T \wedge \tau},\Pi_{T \wedge \tau})
=& \varphi^n(x,p) + \int_0^{T \wedge \tau} e^{-\delta t}\, d\varphi^n(X_t,\Pi_t) + \int_0^{T \wedge \tau} \varphi^n(X_t,\Pi_t)\, d(e^{-\delta t})\\
=& \varphi^n(x,p) + \int_0^{T \wedge \tau} e^{-\delta t} \left[ -\delta \varphi^n(X_t,\Pi_t) + {\cal L} \varphi^n (X_t, \Pi_t) - u_t \varphi_x^n (X_t, \Pi_t) \right]\,dt
+ M_t\,,
\end{align*}
where $M=(M_t)_{t\ge0}$ is a martingale. Therefore,
\begin{align*}
{\mathbb{E}}_{x,p} \left( e^{-\delta (T \wedge \tau)} \varphi^n (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right)
= \varphi^n(x,p) + {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} \left[ -\delta \varphi^n(X_t,\Pi_t) + {\cal L} \varphi^n (X_t, \Pi_t) - u_t \varphi_x^n (X_t, \Pi_t) \right]\,dt\right)\,.
\end{align*}
Note that $w$ fulfills
\[
-\delta w + {\cal L} w + (1-w_x) u \le 0 \quad \mbox{a.e.}
\]
Thus, for $\varepsilon>0$ we can choose $n$ large enough such that
\[
-\delta \varphi^n + {\cal L} \varphi^n + (1-\varphi^n_x) u \le \varepsilon\,,
\]
and hence
\[
{\cal L} \varphi^n \le \delta \varphi^n - (1-\varphi^n_x) u + \varepsilon\,.
\]
Therefore,
\begin{align*}
&{\mathbb{E}}_{x,p} \left( e^{-\delta (T \wedge \tau)} \varphi^n (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right)\\
&\le \varphi^n(x,p) + {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} \left[ -\delta \varphi^n(X_t,\Pi_t) + \delta \varphi^n(X_t,\Pi_t) - (1-\varphi_x^n(X_t,\Pi_t)) u_t + \varepsilon - u_t \varphi_x^n (X_t, \Pi_t) \right]\,dt\right)\\
&= \varphi^n(x,p) - {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} u_t \,dt - \varepsilon \int_0^{T \wedge \tau} e^{-\delta t} \,dt\right)\,.
\end{align*}
By dominated convergence we have for $n\rightarrow\infty$
\begin{align*}
{\mathbb{E}}_{x,p} \left( e^{-\delta (T \wedge \tau)} w (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right)
\le w(x,p) - {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} u_t \,dt - \varepsilon \int_0^{T \wedge \tau} e^{-\delta t} \,dt\right)\,.
\end{align*}
As $\varepsilon$ was arbitrary,
\[
{\mathbb{E}}_{x,p} \left( e^{-\delta (T \wedge \tau)} w (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right) \le w(x,p) - {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} u_t \,dt\right)\,,
\]
and hence
\[
{\mathbb{E}}_{x,p} \left( e^{-\delta (T \wedge \tau)} w (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right) + {\mathbb{E}}_{x,p} \left( \int_0^{T \wedge \tau} e^{-\delta t} u_t \,dt\right) \le w(x,p)\,.
\]
Since $w$ is bounded we have that $\lim_{T \to \infty} {\mathbb{E}}_{x,p} \left(
e^{-\delta (T \wedge \tau)} w (X_{T \wedge \tau},\Pi_{T \wedge \tau})\right)
= 0$. Thus, by bounded convergence,
\[
J^{(u)}(x,p)={\mathbb{E}}_{x,p} \left( \int_0^{\tau} e^{-\delta t} u_t \,dt\right)
= \lim_{T \to \infty} {\mathbb{E}}_{x,p} \left( \int_0^{\tau\wedge T} e^{-\delta t} u_t \,dt\right)
\le w(x,p)\,.
\]
Since $w$ dominates the value function $J^{(u)}$ for every admissible strategy $u$, taking the supremum over all $u \in A$ in the derivation yields
$V(x,p) \le w(x,p)$.
\end{proof}
\begin{remark}
If there is a strategy $\tilde{u} \in A$ such that $J^{(\tilde{u})}$ is a viscosity supersolution with $J^{(\tilde{u})} \in C^2$ almost everywhere,
then by Theorem \ref{thm:verification}, $J^{(\tilde{u})}=V$ is the classical solution to the problem, and $\tilde{u}$ is the optimal policy.
\end{remark}
\section{Numerics}
\label{sec:Num}
In this section we first simulate a path of the Markov chain $Y$ and a path of the Wonham filter to get an idea of its behaviour. Then we describe a numerical procedure for computing approximations to $V$ and the optimal dividend policy. We will restrict our numerical analysis to the case $M=2$. For a better understanding of the numerical results we transform the state process and consider $(X_t,\nu_t)_{t \ge 0}$, where $\nu_t=\mu_1 \pi_1(t) + \mu_2 (1-\pi_1(t))$, and the corresponding transformed HJB equation, instead of considering $(X_t,\pi_1(t))_{t \ge 0}$.\\
For simulating paths of the Wonham filter we need to express $W$ in terms of $Z$. The representation follows from \eqref{eq:W}:
$dW_t=\frac{dZ_t-\nu_t dt}{\sigma}$.
With this and equations \eqref{eq:estimator} and \eqref{eq:wonham} we get
\begin{align}\label{eq:nu_sim}
d \nu_t &= \left( q_{11}(\nu_t-\mu_2) + q_{21} (\mu_1-\nu_t) - \frac{(\nu_t-\mu_2)(\mu_1-\nu_t)\nu_t}{\sigma^2} \right) dt + \frac{(\nu_t-\mu_2)(\mu_1-\nu_t)}{\sigma^2} dZ_t\,,\\
\nu_0 &=:\upsilon=\mu_2 + p_1 (\mu_1-\mu_2)\,.
\end{align}
Now we simulate the increments of the Brownian motion $B$ as $\sqrt{\Delta t}\,{\underline b}_t$, where ${\underline b}_t \sim {\cal N}(0,1)$. Furthermore, we simulate a path ${\underline y}$ of the Markov chain $Y$, and calculate $d{\underline z}_{t+\Delta t} = \mu ({\underline y}_t) \Delta t + \sigma \sqrt{\Delta t}\,{\underline b}_t$. With this and equation \eqref{eq:nu_sim} we are ready to calculate the path of the estimator
\begin{align*}
d {\underline \nu}_{t+\Delta t} = \left( q_{11}({\underline \nu}_t-\mu_2) + q_{21} (\mu_1-{\underline \nu}_t) - \frac{({\underline \nu}_t-\mu_2)(\mu_1-{\underline \nu}_t){\underline \nu}_t}{\sigma^2} \right) \Delta t + \frac{({\underline \nu}_t-\mu_2)(\mu_1-{\underline \nu}_t)}{\sigma^2} d{\underline z}_{t+\Delta t}
\end{align*}
by applying the Euler-Maruyama scheme.
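
For illustration, the following Python sketch carries out exactly this simulation with the parameter set used in our experiments below ($\sigma=1$, $\mu_1=2$, $\mu_2=1$, $-q_{11}=0.25$, $q_{21}=0.5$); the step size, horizon, random seed and the initial estimate $p_1=0.5$ are arbitrary choices made only for the example, and the final clipping step is merely a numerical safeguard.
\begin{verbatim}
import numpy as np

np.random.seed(0)
mu1, mu2, sigma = 2.0, 1.0, 1.0
q11, q21 = -0.25, 0.5     # state 1 -> 2 at rate -q11, state 2 -> 1 at rate q21
dt, T = 1e-3, 50.0
n = int(T / dt)

# simulate the (unobservable) Markov chain Y and the observation increments dz
y = np.empty(n, dtype=int); y[0] = 1
for k in range(1, n):
    if y[k-1] == 1:
        y[k] = 2 if np.random.rand() < -q11 * dt else 1
    else:
        y[k] = 1 if np.random.rand() < q21 * dt else 2
mu_y = np.where(y == 1, mu1, mu2)
dz = mu_y * dt + sigma * np.sqrt(dt) * np.random.randn(n)

# Euler-Maruyama recursion for the drift estimator nu
nu = np.empty(n)
nu[0] = mu2 + 0.5 * (mu1 - mu2)                 # initial estimate, p_1 = 0.5
for k in range(1, n):
    a = (nu[k-1] - mu2) * (mu1 - nu[k-1])
    drift = q11 * (nu[k-1] - mu2) + q21 * (mu1 - nu[k-1]) - a * nu[k-1] / sigma**2
    nu[k] = nu[k-1] + drift * dt + (a / sigma**2) * dz[k-1]
    nu[k] = min(max(nu[k], mu2), mu1)           # numerical safeguard only
\end{verbatim}
Plotting \texttt{mu\_y} against \texttt{nu} produces a picture qualitatively similar to Figure \ref{fig:wonhamplot}.
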
Figure \ref{fig:wonhamplot} shows a path of the drift of the uncontrolled process governed by the underlying Markov chain, and its estimator. We see that the estimator always needs some time to notice the change in the drift and only adapts to it slowly, but this clearly depends on the choice of $Q$. Furthermore, we see that the estimator indeed does not reach the boundary.\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{wonham}
\end{center}
\caption{Wonham filter estimate (red) of the drift (blue).}\label{fig:wonhamplot}
\end{figure}
Now we are going to solve the HJB equation numerically. The transformed HJB equation is
\begin{align}
\label{eq:HJBnu}(\tilde {\cal L} - \delta) V +\sup_{u \in [0,K]}(u(1-V_x))=0\,,
\end{align}
where
\begin{align*}
\tilde {\cal L} V&= \upsilon V_x + \left(q_{21}(\mu_1-\upsilon)+q_{11}(\upsilon-\mu_2)\right) V_{\upsilon} + (\mu_1-\upsilon)(\upsilon-\mu_2) V_{x \upsilon}
+ \frac{1}{2 \sigma^2} (\mu_1-\upsilon)^2(\upsilon-\mu_2)^2 V_{\upsilon \upsilon} + \frac{1}{2} \sigma^2 V_{xx}\,.
\end{align*}
To solve this PDE numerically, we first introduce an approximation to the HJB equation, based on an idea from \cite{frey2014}, in order to ensure convergence of the scheme.
Define
\begin{equation}\label{eq:apprunderl}
\begin{aligned}
X^{(u),\epsilon}_t&= x+\int_0^t (\nu_s^{\epsilon}-u_s) \, ds + \sigma W_t\,,\\
\nu_t^{\epsilon} &=\upsilon +\int_0^t \left( q_{11}(\nu_s^{\epsilon}-\mu_2) + q_{21} (\mu_1-\nu_s^{\epsilon}) \right) \,ds + \int_0^t\frac{(\nu_s^{\epsilon}-\mu_2)(\mu_1-\nu_s^{\epsilon})}{\sigma}\, dW_s+2\int_0^t \sqrt{\bar \epsilon(\nu_s^{\epsilon})} \,d\widetilde W_s\,,
\end{aligned}
\end{equation}
and $\tau^{\epsilon}=\inf\{t\ge0 | X_t^{(u),\epsilon} \le 0\}$. Here, $\widetilde W=(\widetilde W_t)_{t \ge0}$ is a Brownian motion independent of $W$, and $\bar \epsilon$ is a smooth and bounded function with bounded derivatives that vanishes at $\mu_1,\mu_2$ and satisfies $\bar \epsilon(\upsilon)=\epsilon$ on $(\mu_2+\zeta,\mu_1-\zeta)$ for some small $\zeta>0$. Furthermore, denote by $J^{(u),\epsilon}, V^\epsilon$ the value function and the optimal value function associated with the approximate underlying system \eqref{eq:apprunderl}.
This introduces an additional second order term in the approximate HJB equation
\begin{align}
\label{eq:HJBnuappr}(\tilde {\cal L}+\bar \epsilon V_{\upsilon \upsilon}^\epsilon - \delta) V^\epsilon +\sup_{u \in [0,K]}(u(1-V^\epsilon_x))=0\,.
\end{align}
In contrast to the approximation in \cite{frey2014}, here we do not use the additional term to regularize our HJB equation, but only to ensure positivity of the scheme in the interior of the computational domain. Note, however, that the analytic characterization of the optimal value function herein does not require regularization.\\
The following theorem states that the solution still converges to the optimal value function.
\begin{theorem}
Let $V^\epsilon$ be the solution to \eqref{eq:HJBnuappr}. Then $\lim_{\epsilon \to 0}\|V^\epsilon-V\|=0$.
\end{theorem}
\begin{proof}
We show that $J^{(u),\epsilon} \to J^{(u)}$ for every admissible strategy $u$. From this we may conclude that the result also holds for $V$, see \cite[Corollary 7.4]{frey2014}.\\
Let $J^{(u),\epsilon,T}, J^{(u),T}$ denote the corresponding value functions stopped at time $0<T<\infty$. We have
\begin{align*}
\lim_{\epsilon\to 0} \|J^{(u),\epsilon} (x,\upsilon)-J^{(u)} (x,\upsilon)\| & = \lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon} -J^{(u)} \|\\
&= \lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon}-J^{(u),\epsilon,T}+J^{(u),\epsilon,T}-J^{(u),T}+J^{(u),T} -J^{(u)} \|\\
&\le \lim_{\epsilon\to 0} \lim_{T \to \infty}\left(\|J^{(u),\epsilon}-J^{(u),\epsilon,T}\|+\|J^{(u),\epsilon,T}-J^{(u),T}\|+\|J^{(u),T} -J^{(u)}\|\right)\\
&= \lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon}-J^{(u),\epsilon,T}\|+\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon,T}-J^{(u),T}\|\\ &+\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),T} -J^{(u)}\|\,,\\
\end{align*}
where we skipped the arguments in the calculations and used that all terms in the last but one row are bounded since the optimal value function is bounded due to Lemma \ref{lem:x-infinity}. Now we show that all terms tend to $0$.
\begin{align*}
\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),T} -J^{(u)}\| &= \lim_{\epsilon\to 0} \lim_{T \to \infty}\left\|{\mathbb{E}}_{x,\upsilon}\left(\int_0^{\tau \wedge T} e^{-\delta t} u_t \, dt \right)-{\mathbb{E}}_{x,\upsilon}\left(\int_0^{\tau} e^{-\delta t} u_t \, dt \right)\right\|\\
&\le \frac{K}{\delta}\lim_{\epsilon\to 0} \lim_{T \to \infty}\left\|{\mathbb{E}}_{x,\upsilon}\left( e^{-\delta \tau} -e^{-\delta(\tau \wedge T)} \right)\right\|\,,
\end{align*}
and using that the last term is bounded we get
\begin{align*}
\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),T} -J^{(u)}\|
\le \frac{K}{\delta}\lim_{\epsilon\to 0} \left\|{\mathbb{E}}_{x,\upsilon}\left(\lim_{T \to \infty}\left( e^{-\delta \tau} -e^{-\delta(\tau \wedge T)} \right)\right)\right\|=0\,.
\end{align*}
Analogously we obtain $\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon}-J^{(u),\epsilon,T}\|=0$.
For the last term we get
\begin{align*}
\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon,T}-J^{(u),T}\|&=\lim_{\epsilon\to 0} \lim_{T \to \infty}\left\|{\mathbb{E}}_{x,\upsilon}\left(\int_0^{\tau^\epsilon \wedge T} e^{-\delta t} u_t \, dt \right)-{\mathbb{E}}_{x,\upsilon}\left(\int_0^{\tau \wedge T} e^{-\delta t} u_t \, dt \right)\right\|\\
&=\lim_{\epsilon\to 0} \lim_{T \to \infty}\left\|{\mathbb{E}}_{x,\upsilon}\left(\int_{\tau \wedge \tau^\epsilon \wedge T}^{(\tau \vee \tau^\epsilon) \wedge T} e^{-\delta t} u_t \, dt \right)\right\|\\
&\le \frac{K}{\delta}\lim_{\epsilon\to 0} \lim_{T \to \infty}\left\|{\mathbb{E}}_{x,\upsilon}\left(e^{-\delta(\tau \wedge \tau^\epsilon \wedge T)}-e^{-\delta ((\tau \vee \tau^\epsilon) \wedge T)}\right)\right\|\\
&= \frac{K}{\delta}\lim_{T \to \infty} \left\|{\mathbb{E}}_{x,\upsilon}\left(\lim_{\epsilon\to 0}\left(e^{-\delta(\tau \wedge \tau^\epsilon \wedge T)}-e^{-\delta ((\tau \vee \tau^\epsilon) \wedge T)}\right)\right)\right\|\,,
\end{align*}
where we used boundedness of the term in the last but one row. Now it remains to show that $\tau^\epsilon \to \tau$.
Proving that ${\mathbb{E}}\left(\sup_{0\le t \le T}\| \nu^{\epsilon}_t-\nu_t\|^2\right) \to_{\epsilon\to 0} 0$ and ${\mathbb{E}}\left(\sup_{0\le t \le T}\| X^{(u),\epsilon}_t-X^{(u)}_t\|^2\right) \to_{\epsilon\to 0} 0$ runs along the same lines as in \cite[Proof of Lemma 7.2]{frey2014}.
Since $X^{(u),\epsilon} \to X^{(u)}$ u.c.p.~we have that along a subsequence $\epsilon(k)$, $X^{(u),\epsilon(k)} \to X^{(u)}$ u.c.a.s.~and hence also $\tau^{\epsilon(k)} \to \tau$ a.s.
Thus $\lim_{\epsilon\to 0} \lim_{T \to \infty}\|J^{(u),\epsilon,T}-J^{(u),T}\|=0$, which closes the proof.
\end{proof}
Now we are ready to solve \eqref{eq:HJBnuappr} numerically.
First, we have to restrict the computational domain; we therefore choose a sufficiently large number $H$ and approximate the
domain of the value function by $[0,H]\times [\mu_2,\mu_1]$.\\
To compute an approximate solution to our problem, we use policy iteration. Initially, we use a dividend policy of threshold type, since such strategies solve the problem with complete information and are thus good candidates also in our situation. As the initial threshold level we use a convex combination of the threshold levels obtained for the problem with complete information in \citet{sotomayor2011}, as we expect it to be close to the correct solution of our problem:
$b_0(\upsilon):=\frac{\mu_1-\upsilon}{\mu_1-\mu_2}
\bar{b}_2
+\frac{\upsilon-\mu_2}{\mu_1-\mu_2}\bar{b}_1$,
where $\bar b_1, \bar b_2$ denote the threshold levels in the case with complete information for states $1$ and $2$, respectively.
The initial strategy is given by
$u^{(0)}(x,\upsilon)=K1_{\{x\ge b_0(\upsilon)\}} (x,\upsilon)$.
Note that the mesh we use for our computation is generated such that there are more grid points available where they are needed the most -- between $b_0(\mu_2)$ and $b_0(\mu_1)$.
For more details about the mesh generation we refer to \cite{sz12}.
Now we iteratively apply the following procedure:
\begin{itemize}
\item For a given strategy $u^{(k)}$ calculate $V^{(k)}$ by solving
\begin{align}\label{eq:numHJB}
(\tilde {\cal L}^G-\delta)V+\bar \epsilon {\cal D}^G_{\upsilon \upsilon}V+u^{(k)}(1-{\cal D}^G_xV)=0\,,
\end{align}
where $\tilde {\cal L}^G$ is the operator $\tilde {\cal L}$ with differentiation
operators replaced by finite differences, ${\cal D}^G_x$ is
a finite difference approximation to differentiation with respect to $x$, and
${\cal D}^G_{\upsilon \upsilon}$ is the finite difference operator replacing the second derivative w.r.t. $\upsilon$.
\item The next iterate $u^{(k+1)}$ is chosen to maximize $u(1-{\cal D}^G_xV^{(k)})$. Thus
$u^{(k+1)}(x,\upsilon)=K1_{\{{\cal D}^G_x V^{(k)}(x,\upsilon)\le 1 \}}$.
\end{itemize}
In our experiments the iteration stops after $6$ steps, since $u^{(6)} \approx u^{(5)}$.\\
The idea behind the construction of the finite difference method is based on the fact that in the discretized setting the diffusion is approximated by Markov chains which locally preserve properties of the original process (cf. \cite[p. 67]{kushner2001}).
The additional term $\bar \epsilon$ is required to guarantee positivity of the scheme and hence to obtain its Markov chain interpretation.
Corresponding convergence results can be found in \cite{kushner1990, kushner2001} and \cite[Chapter IX]{fleming2006}. In \cite[p.~324]{fleming2006} it is noted that the policy iteration converges, yielding value function and associated policy.
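
To make the structure of the iteration explicit, the following Python sketch runs policy iteration for a simplified one-dimensional analogue of \eqref{eq:HJBnu} in which the drift estimate is frozen at a constant value $\upsilon$, so that all $\upsilon$-derivatives vanish and the equation for a fixed policy reduces to $\tfrac{1}{2}\sigma^2 V_{xx}+(\upsilon-u)V_x-\delta V+u=0$. It is meant only to illustrate the alternation between the linear solve and the policy update; the grid, the boundary treatment ($V(0)=0$ and a homogeneous Neumann condition at $x=H$) and the parameter values are illustrative choices and do not correspond to the full two-dimensional scheme.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

sigma, delta, K, ups, H, n = 1.0, 0.5, 1.8, 1.5, 10.0, 400
h = H / n
x = np.linspace(0.0, H, n + 1)

def solve_given_policy(u):
    # finite-difference system A V = rhs, upwind first derivatives
    main = np.zeros(n + 1); lower = np.zeros(n + 1); upper = np.zeros(n + 1)
    rhs = np.zeros(n + 1)
    for i in range(1, n):
        drift, diff = ups - u[i], 0.5 * sigma**2 / h**2
        main[i]  = -2.0 * diff - abs(drift) / h - delta
        lower[i] = diff + (abs(drift) / h if drift < 0 else 0.0)
        upper[i] = diff + (drift / h if drift > 0 else 0.0)
        rhs[i]   = -u[i]
    main[0] = 1.0                          # V(0) = 0
    main[n] = 1.0; lower[n] = -1.0         # V(H) - V(H-h) = 0 (Neumann)
    A = diags([lower[1:], main, upper[:-1]], offsets=[-1, 0, 1], format="csc")
    return spsolve(A, rhs)

u = np.where(x >= 0.5 * H, K, 0.0)         # initial threshold policy
for _ in range(50):
    V = solve_given_policy(u)
    u_new = np.where(np.gradient(V, h) <= 1.0, K, 0.0)  # maximizer of u(1 - V_x)
    if np.array_equal(u_new, u):
        break
    u = u_new
\end{verbatim}
In this simplified setting the iteration typically settles on a threshold-type policy after a few steps, mirroring the behaviour of the full two-dimensional scheme reported below.
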
\subsection*{Numerical results}
\label{subsec:Results}
We computed both value function and dividend policy for the parameter sets
$\sigma = 1$, $\mu_1=2$, $\mu_2=1$, $\delta =0.5$, $-q_{11}=0.25$, $q_{21}=0.5$, $B=10$, and $K \in \{ 0.2, 0.3, 0.67, 1.8 \}$.
The resulting strategies turn out to be threshold strategies with threshold levels
depending on the estimate of $\mu$. Figure \ref{fig:thresholdplot} shows the resulting threshold levels and compares them to the threshold levels which are outcomes to the dividend maximization problem with constant and unobservable drift, i.e., $Q \equiv 0$, which is studied in \cite{sz12}. Interestingly, while in the case studied herein the threshold level is falling in $\upsilon$ for $K=0.67$, it increases in the Bayesian case. But note that for this parameter choice both curves are rather flat.
The intuition behind the threshold level growing in $\upsilon$ for some parameter sets and falling for others is the following.
Usually, the level falls in $\upsilon$, since in the better state the company can afford to pay out dividends earlier, as the higher drift allows it to recover.
However, for low values of $\upsilon$ the volatility has a stronger relative effect, and hence, if $K$ is small anyway, it becomes better to pay dividends even for low values of $x$, because the volatility might lead to early ruin. As $\upsilon$ grows, the drift outweighs the volatility, so that ruin is no longer imminent and the strategy is designed for a longer-lived company.
It is interesting that for all four parameter sets, dividends are paid out more cautiously in the Markov switching case than in the Bayesian case when the drift estimate is low, whereas it is the other way round when the drift estimate is high. An explanation for this is that in the Markov switching case there is a chance that the economy improves, so it is better to wait while $\upsilon$ is small and to pay out dividends at higher values of $\upsilon$.
Hence the state with the lower drift is the state of saving, whereas the other state is the state of spending.
In the Bayesian case the drift does not change and therefore the situation is more balanced.\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{BayesSwitchingComp_K02}
\includegraphics[scale=0.6]{BayesSwitchingComp_K03}
\includegraphics[scale=0.6]{BayesSwitchingComp_K067}
\includegraphics[scale=0.6]{BayesSwitchingComp_K18}
\end{center}
\caption{The resulting threshold levels for different parameter sets (blue) compared to the threshold levels from the Bayesian case (dashed red).}\label{fig:thresholdplot}
\end{figure}
Figure \ref{fig:valueplot} shows the value function corresponding to $K=1.8$ (but they all look rather similar).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{Switchingvalue_K18}
\end{center}
\caption{The resulting value function for $K=1.8$.}\label{fig:valueplot}
\end{figure}
Figure \ref{fig:valueplot} suggests that the optimal value function is smooth; however, proving smoothness is beyond the scope of this paper due to the degeneracy of the HJB equation, which is highly non-standard.\\
Figure \ref{fig:valuecomp} compares the resulting value function to the one from the Bayesian case. We observe that for a smaller estimate of the drift the value is higher in the case with switching,
and for a high estimate of the drift the value is higher in the Bayesian case. This is due to the fact that if a high drift is expected in the Bayesian case, it is more probable that the drift is in fact high,
whereas in the case studied in this paper the drift might change for the worse. For a small estimate of the drift it is exactly the other way round.\\
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{valuecomp15}
\includegraphics[scale=0.6]{valuecomp19}
\end{center}
\caption{The resulting value function for $K=1.8$ (blue) compared to the resulting value function from the Bayesian case (dashed red).}\label{fig:valuecomp}
\end{figure}
One more interesting case to study is that of a high dividend bound $K$. Figure \ref{fig:Kgrows} suggests that for growing parameter $K$ the dividend policy converges to what we expect to be the optimal barrier level in the case of unbounded dividend payments. In future research, it would be of interest to study the singular control problem with unbounded dividend rates.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.8]{Kgrows}
\end{center}
\caption{The resulting threshold levels for high values of $K$.}\label{fig:Kgrows}
\end{figure}
\subsection*{Admissibility of threshold strategies}
\label{subsec:threshold}
For the Markov switching model under full information \citet{sotomayor2011} find threshold strategies to be optimal. For each state $i$ of the underlying Markov chain they get a constant threshold level $b_i$ such that no dividends are paid below this level and dividends are paid at the maximum rate $K$, if the surplus process exceeds the level. Remember however that in their setup the current state is observable, and therefore it can always be decided which threshold level has to be chosen.\\
In our case, as the current state is estimated in a continuous way, we have only one threshold level. We see that the numerical approximation of the optimal dividend policy is of threshold type with a threshold level $b$ depending on the estimate of the drift, and hence is of the form
\begin{align*}
u_t=K \, 1_{\{X_t \ge b(\nu_t)\}}(X_t)\,.
\end{align*}
Therefore, it is of considerable importance to study the admissibility of strategies of this type. The question is whether the system
\begin{align}
\label{eq:dynX2ad}X_t&= x + \int_0^t (\nu_s-K \, 1_{\{X_s \ge b(\nu_s)\}}(X_s)) \,ds +\sigma W_t\,,\\
\label{eq:dynpiad}\pi_i(t)&=p_i+\int_0^t \left(q_{Mi} + \sum_{j=1}^{M-1} (q_{ji}-q_{Mi}) \pi_j (s)\right) \, ds + \int_0^t \pi_i (s) \frac{\mu_i-\nu_s}{\sigma} \, dW_s\,, \qquad i = 1, \dots, M-1 \,,
\end{align}
with $\nu$ as in \eqref{eq:nu}, has a solution. As the drift coefficient of this SDE is discontinuous, classical results from the SDE literature such as \cite[Theorem 2.2]{mao2007}
cannot be applied. However, for threshold levels $b$ which satisfy the assumptions of \cite[Theorem 3.20]{sz2015b}, we obtain existence of a unique global strong solution to system \eqref{eq:dynX2ad}, \eqref{eq:dynpiad}.
\section{Summary and conclusion}
\label{sec:Concl}
We have presented a diffusion model for the surplus process of an insurance company, where the drift coefficient changes in response to a change of the economic environment.\\
The change of the economic environment has been modeled by a Markov chain, and uncertainty has been introduced by not allowing the current state of the Markov chain to be observed.
We have shown how to overcome uncertainty in this situation by applying a result from stochastic filtering theory. Then we have stated the dividend maximization problem and we have derived the associated HJB equation.\\
We have been able to characterize the solution to the stochastic optimization problem as the unique viscosity solution to this HJB equation.\\
Finally, we have presented an extensive numerical study for the solution to the optimization problem, which suggests that the optimal dividend policy is of threshold type. We have shown that such strategies are indeed admissible using a non-standard result on stochastic differential equations.\\
The main contribution of the present paper is that it improves on the regime switching models studied in the literature by dropping the assumption of full information. Furthermore, emphasis was put on the numerical study in order to impart a deeper understanding of the behaviour of the optimal dividend policy, both across different parameter sets and in comparison to the Bayesian case.
\section*{Acknowledgements}
The author thanks Gunther Leobacher (Johannes Kepler University Linz), Stefan Thonhauser (Graz University of Technology) and Ralf Wunderlich (BTU Cottbus-Senftenberg)
for fruitful discussions and helpful advice that improved this paper. \\
Furthermore, the author thanks two anonymous referees for their suggestions.\\
M. Sz\"olgyenyi is supported by the Vienna Science and Technology Fund (WWTF): Project MA14-031.\\
The main part of this paper was written while M. Sz\"olgyenyi was member of the Department of Financial Mathematics and Applied Number Theory, Johannes Kepler University Linz, 4040 Linz, Austria.\\
During this time, M. Sz\"olgyenyi was supported by the Austrian Science Fund (FWF): Project F5508-N26, which is part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications".
\section{Introduction}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[clip,trim=22cm 0cm 4cm 3cm,width=0.215\textwidth,height=0.125\textwidth]{opt_seq0016_159.jpg}&\includegraphics[clip,trim=22cm 0cm 4cm 3cm,width=0.215\textwidth,height=0.125\textwidth]{lplp_seq0016_159.jpg}\\
\includegraphics[clip,trim=0cm 1cm 30cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{dp_seq0004_224.jpg}&\includegraphics[clip,trim=0cm 1cm 30cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lpdp_seq0004_224.jpg}
\end{tabular}
\caption{We describe a framework for learning parameters of a multi-object
tracking objective that includes pairwise interactions between objects.
The left column shows tracking without pairwise interactions. Our system
\textit{learns} to enforce both inter-class and intra-class mutual exclusion
as well as co-occurrence relationship between trajectories. By
incorporating pairwise interactions between objects within a frame we are
able to improve detection performance. }
\end{figure}
Multi-target tracking is a classic topic of research in computer vision.
Thanks to advances in object detector performance in single, still images,
``tracking-by-detection'' approaches that build tracks on top of a collection
of candidate object detections have shown great promise. Tracking-by-detection
avoids some problems such as drift and is often able to recover from extended
periods of occlusion since it is ``self-initializing''. Finding an optimal set
of detections corresponding to each track is often formulated as a discrete
optimization problem of finding low-cost paths through a graph of candidate
detections for which there are often efficient combinatorial algorithms (such
as min-cost matching or min-cost network-flow) that yield globally optimal
solutions (\eg, ~\cite{10.1109/CVPR.2008.4587584,PirsiavashRF_CVPR_2011}).
Tracking by detection is somewhat different from traditional generative
formulations of multi-target tracking, which distinguish the problem of
estimating a latent continuous trajectory for each object from the discrete
per-frame data-association problem of assigning observations (\eg,
detections) to underlying tracks. Such methods (\eg,
~\cite{Andriyenko:2012:DCO,Milan:2013:DTE,WuThScBe2012}) allow for explicitly
specifying an intuitive model of trajectory smoothness but face a difficult
joint inference problem over both continuous and discrete variables with little
guarantee of optimality.
In tracking by detection, trajectories are implicitly defined by the selected
group of detections. For example, the path may skip over some frames entirely
due to occlusions or missing detections. The transition cost of utilizing a
given edge between detections in successive frames thus could be interpreted as
some approximation of the marginal likelihood associated with integrating over
a set of underlying continuous trajectories associated with the corresponding
pair of detections. This immediately raises difficulties, both in (1) encoding
strong trajectory models with only pairwise potentials and (2) identifying the
parameters of these potentials from training data.
One line of attack is to first group detections into candidate tracklets and
then perform scoring and association of these
tracklets~\cite{Yang12anonline,Brendel11multiobjecttracking,Wang_2014_CVPR}.
Tracklets allow for scoring much richer trajectory and appearance models while
maintaining some benefits of purely combinatorial grouping. Another approach
is to attempt to include higher-order constraints directly in a combinatorial
framework~\cite{Butt_2013_ICCV_Workshops,DBLP:journals/corr/ChariLLS14}.
In either case, there are a large number of parameters associated with these
richer models which necessitates application of machine learning techniques.
This is particularly true for (undirected) combinatorial models based on,
\eg network-flow, where parameters are often set empirically by hand.
In this work, we introduce an extension to the standard min-cost flow tracking
objective that allows us to model pairwise interactions between tracks. This
allows us to incorporate useful knowledge such as typical spatial relationships
between detections of different objects and suppression of multiple overlapping
tracks of the same object. This quadratic interaction necessitates the
development of approximate inference methods which we describe in Section
\ref{sec:inference}. In Section \ref{sec:learning} we describe an approach to
joint learning of model parameters in order to maximize tracking performance on
a training data set using techniques for structured
prediction~\cite{Taskar03M3N}. Structured prediction has been applied in
tracking to learning inter-frame affinity
metrics~\cite{Kim:2012:OMT:2482048.2482058} and
association~\cite{lou_11_structured} as well as a variety of other learning
tasks such as fitting CRF parameters for segmentation~\cite{export:67945} and
word alignment for machine
translation~\cite{Lacoste-Julien:2006:WAV:1220835.1220850}. To the best of our
knowledge, the work presented here is unique in utilizing discriminative
structured prediction to \textit{jointly} learn the complete set of parameters
of a tracking model from labeled data, including track birth/death bias, transition
affinities, and multi-object contextual relations. We conclude with
experimental results (Section \ref{sec:experiments}) which demonstrate that the
learned quadratic model and inference routines yield state of the art
performance on multi-target, multi-category object tracking in urban scenes.
\section{Model}
\label{sec:model}
We begin by formulating multi-target tracking and data association as a
min-cost flow network problem equivalent to that of
~\cite{10.1109/CVPR.2008.4587584}, where individual tracks are described by a
first-order Markov Model whose state space is spatial-temporal locations in
videos. This framework incorporates a state transition likelihood that
generates transition features in successive frames, and an observation
likelihood that generates appearance features for objects and background.
\subsection{Tracking by Min-cost Flow}
For a given video sequence, we consider a discrete set of candidate object
detection sites $V$ where each candidate site $x=(l,\sigma,t)$ is described
by its location, scale and frame number. We write $\Phi = \{\phi_a(x) | x \in
V\}$ for the image evidence (appearance features) extracted at each
corresponding spatial-temporal location in a video. A single object track
consists of an ordered set of these detection sites: $T = \{x_1, ... , x_n\}$,
with strictly increasing frame numbers.
We model the whole video by a collection of tracks $\mathcal{T} = \{T_1, ...
, T_k\}$, each of which independently generates foreground object appearances
at the corresponding sites according to distribution $p_{fg}(\phi_a)$ while the
remaining site appearances are generated by a background distribution
$p_{bg}(\phi_a)$. Each site can only belong to a single track. Our task is to
infer a collection of tracks that maximize the posterior probability
$P(\mathcal{T}|\Phi)$ under the model. Assuming that tracks behave
independently of each other and follow a first-order Markov model, we can write
an expression for MAP inference:
\begin{align}
\mathcal{T}^* &= \underset{\mathcal{T}}{\operatorname{argmax}} \prod_{T \in \mathcal{T}} P(\Phi|T) P(T) \nonumber \\
\begin{split}
&= \underset{\mathcal{T}}{\operatorname{argmax}} \big( \prod_{T \in \mathcal{T}} \prod_{x \in T} l(\phi_a(x)) \big) \times \\
& \prod_{T \in \mathcal{T}} \big( p_s(x_1) p_e(x_N) \prod_{i=1}^{N-1} p_t(x_{i+1}|x_i) \big)
\end{split}\label{eqn:maptrack}
\end{align}
where
\begin{align}
l(\phi_a(x)) = \frac{p_{fg}(\phi_a(x))}{p_{bg}(\phi_a(x))} \nonumber
\end{align}
is the appearance likelihood ratio that a specific location $x$ corresponds to
the object tracked and $p_s$, $p_e$ and $p_t$ represent the likelihoods for
tracks starting, ending and transitioning between given sites.
The set of optimal tracks can be found by taking the negative logarithm of the objective in
\ref{eqn:maptrack}, which yields an integer linear program (ILP) over flow variables $\mathbf{f}$.
\begin{align}
\underset{\mathbf{f}}{\operatorname{min}} &\ \sum_{i} c_i^s f_i^s + \sum_{ij \in E} c_{ij} f_{ij} + \sum_{i} c_i f_i + \sum_{i} c_i^t f_i^t
\label{eqn:mincostflow} \\
\text{s.t.}&\quad f_i^s + \sum_j f_{ji} = f_i = f_i^t + \sum_j f_{ij} \nonumber\\
& f_i^s, f_i^t, f_i, f_{ij} \in \{0,1\} \nonumber
\end{align}
where $E$ is the set of valid transitions between sites in successive
frames and the costs are given by
\begin{align}
\begin{split}
c_i = -\log l(\phi_a(x_i)), \quad c_{ij} = -\log p_t(x_j | x_i)\\
c_i^s = -\log p_s(x_i), \quad c_i^t = -\log p_e(x_i)
\label{eqn:potentials}
\end{split}
\end{align}
This ILP is a well studied problem known as minimum-cost network flow~\cite{SSP}.
The constraints satisfy the \textit{total unimodularity} property and thus can
be solved exactly using any LP solver or via various efficient specialized solvers,
including network simplex, successive shortest path and push-relabel with
bisectional search~\cite{10.1109/CVPR.2008.4587584}.
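
As a concrete illustration of the network construction, the following Python sketch builds the flow graph for a toy pair of frames and solves problem \ref{eqn:mincostflow} with the open-source network simplex implementation in \texttt{networkx} (our experiments use MOSEK instead; see Section \ref{sec:inference}). Each detection is split into an edge $u_i \rightarrow v_i$ carrying the detection cost $c_i$, with additional birth, transition and death edges. The number of tracks is fixed here through the source/sink demand, whereas in practice one searches over it (\eg by bisection) or uses successive shortest paths; all costs are illustrative and kept integral, since the network simplex implementation is not guaranteed to handle floating-point weights.
\begin{verbatim}
import networkx as nx

dets = {("t0", "a"): -4, ("t0", "b"): -1, ("t1", "a"): -3, ("t1", "b"): -2}
trans = {(("t0", "a"), ("t1", "a")): 1, (("t0", "a"), ("t1", "b")): 3,
         (("t0", "b"), ("t1", "a")): 3, (("t0", "b"), ("t1", "b")): 1}
c_birth, c_death, n_tracks = 2, 2, 2

G = nx.DiGraph()
G.add_node("S", demand=-n_tracks)          # n_tracks units of flow leave the source
G.add_node("T", demand=n_tracks)           # and are absorbed by the sink
for d, c in dets.items():
    u, v = ("u",) + d, ("v",) + d
    G.add_edge(u, v, weight=c, capacity=1)     # detection edge (node splitting)
    G.add_edge("S", u, weight=c_birth, capacity=1)
    G.add_edge(v, "T", weight=c_death, capacity=1)
for (d1, d2), c in trans.items():
    G.add_edge(("v",) + d1, ("u",) + d2, weight=c, capacity=1)

flow = nx.min_cost_flow(G)
used = [(d1, d2) for (d1, d2) in trans if flow[("v",) + d1][("u",) + d2] > 0]
print("instanced transitions:", used)
\end{verbatim}
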
While these approaches yield globally optimal solutions, the authors
of~\cite{PirsiavashRF_CVPR_2011} consider even faster approximations based on
multiple rounds of dynamic programming (DP). In particular, the successive shortest
paths algorithm (SSP) finds optimal flows by applying Dijkstra's algorithm on a
residual graph constructed from the original network in which some edges corresponding
to instanced tracks have been reversed. This can be implemented by performing
multiple forward and backward passes of dynamic programming (see Appendix for details). The authors of \cite{PirsiavashRF_CVPR_2011} found that two or even
one pass of DP often performs nearly as well as SSP in practical tracking
scenarios. In our experiments we evaluate several of these variants.
\subsubsection{Track interdependence}
The aforementioned model assumes tracks are independent of each other, which is
not always true in practice. A key contribution of our work is showing that
pairwise relations between tracks can be integrated into the model to improve
tracking performance. In order to allow interactions between multiple objects,
we add a pairwise cost term denoted $q_{ij}$ and $q_{ji}$ for jointly
activating a pair of flows $f_i$ and $f_j$ corresponding to detections at
sites $x_i = (l_i,\sigma_i,t_i)$ and $x_j=(l_j,\sigma_j,t_j)$. An intuitive
example of $q_{ij}$ and $q_{ji}$ would be penalty for overlap locations or a
boost for co-occurring objects. We only consider pairwise interactions between
pairs of sites in the same video frame which we denote by $EC = \{ij : t_i = t_j\}$.
Adding this term to \ref{eqn:mincostflow} yields an Integer Quadratic Program
(IQP):
\begin{align}
\begin{split}
\underset{\mathbf{f}}{\operatorname{min}}& \sum_{i} c_i^s f_i^s + \sum_{ij \in E} c_{ij} f_{ij} + \sum_{i} c_i f_i \\
&+ \sum_{ij \in EC } q_{ij} f_i f_j + \sum_{i} c_i^t f_i^t
\end{split} \label{eqn:quadflow} \\
\text{s.t.}&\quad f_i^s + \sum_j f_{ji} = f_i = f_i^t + \sum_j f_{ij} \nonumber\\
& f_i^s, f_i^t, f_i, f_{ij} \in \{0,1\} \nonumber
\end{align}
The addition of quadratic terms makes this objective hard to solve in general.
In the next section we discuss two different approximations for finding high
quality solutions $\mathbf{f}$. In Section~\ref{sec:learning} we describe how
the costs $\mathbf{c}$ can be learned from data.
\section{Inference}
\label{sec:inference}
Now we describe different methods to conduct tracking inference (finding the
optimal flows $\mathbf{f}$). These inference routines are used both for
predicting a set of tracks at test time as well as optimizing parameters during
learning (see Section~\ref{sec:learning}).
As mentioned in the previous section, for the traditional min-cost network flow problem defined in Equation~\ref{eqn:mincostflow} there exist various efficient solvers that exploit its \textit{total unimodularity} property to find the global optimum. We employ MOSEK's built-in network simplex solver in our experiments; any of the alternative algorithms would yield exactly the same solution.
In contrast, finding the global minimum of the IQP problem~\ref{eqn:quadflow}
is NP-hard~\cite{ANasser14} due to the quadratic terms. We evaluate two
different schemes for finding high-quality approximate solutions. The first is
a standard approach of introducing auxiliary variables and relaxing the
integral constraints to yield a linear program (LP) that lower-bounds the
original objective. We also consider a greedy approximation based on successive
rounds of dynamic programming that also yields good solutions while avoiding
the expense of solving a large scale LP.
\subsection{LP Relaxation and Rounding}
If we relax the integer constraints and deform the costs as necessary to make
the objective convex, then the global optimum of~\ref{eqn:quadflow} can be
found in polynomial time. For example, one could apply the Frank-Wolfe algorithm
to optimize the relaxed, convexified QP while simultaneously keeping track of
good integer solutions~\cite{TangECCV14}. However, for real-world tracking
over long videos, the relaxed QP is still quite expensive. Instead
we follow the approach proposed by Chari
\etal~\cite{DBLP:journals/corr/ChariLLS14}, reformulating the IQP as an
equivalent ILP problem by replacing the quadratic terms $f_i f_j$ with a set of
auxiliary variables $u_{ij}$:
\begin{align}
\begin{split}
\underset{\mathbf{f}}{\operatorname{min}}
\sum_{i} c_i^s f_i^s + \sum_{ij \in E} c_{ij} f_{ij} + \sum_{i} c_i f_i \\
+ \sum_{ij \in EC } q_{ij} u_{ij} + \sum_{i} c_i^t f_i^t
\end{split} \label{eqn:lprelax} \\
\begin{split}
\text{s.t.} \; f_i^s, f_i^t, f_i, f_j, f_{ij}, u_{ij} \in \{0,1\} \\
f_i^s + \sum_j f_{ji} = f_i = f_i^t + \sum_j f_{ij}
\end{split} \nonumber \\
\begin{split}
u_{ij} \le f_i, u_{ij} \le f_j \nonumber \\
f_i + f_j \le u_{ij} + 1 \nonumber
\end{split}
\end{align}
The new constraints enforce $u_{ij} = f_i f_j$ for binary flows: the first two inequalities force $u_{ij}=0$ unless both $f_i$ and $f_j$ are $1$, and the last one forces $u_{ij}=1$ when they are. By relaxing the integer constraints, program
\ref{eqn:lprelax} can be solved efficiently via large scale LP solvers
such as CPLEX or MOSEK.
At test time we would like to predict a discrete set of tracks. This
requires rounding the solution of the relaxed LP to a solution that
satisfies not only the integer constraints but also the flow constraints.
Chari \etal~\cite{DBLP:journals/corr/ChariLLS14} proposed two rounding heuristics: a
Euclidean rounding scheme that minimizes $\| \mathbf{f} - \widehat{\mathbf{f}}
\|^2$ where $\widehat{\mathbf{f}}$ is the non-integral solution given by the LP
relaxation. When $\mathbf{f}$ is constrained to be binary, this objective
simplifies to a linear function $(\mathbf{1}-2 \widehat{\mathbf{f}})^T \mathbf{f} + \|
\widehat{\mathbf{f}} \|^2$, which can be optimized using a standard linear
min-cost flow solver. Alternatively, one can use a linear under-estimator of
\ref{eqn:quadflow} similar to the Frank-Wolfe algorithm:
\begin{align}
\begin{split}
&\sum_{i} c_i^s f_i^s + \sum_{ij \in E} c_{ij} f_{ij} +\\
&\sum_{i} ( c_i + \sum_{ij \in EC} q_{ij} \widehat{u}_{ij} + \sum_{ji \in EC} q_{ji} \widehat{u}_{ji} ) f_i + \sum_{i} c_i^t f_i^t
\end{split}
\end{align}
Both of these rounding heuristics yield linear objectives
subject to the original integer and flow constraints, and thus each can be solved as
an ordinary min-cost network flow problem. In our experiments we execute both
rounding heuristics and choose the solution with the lower cost.
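
To make the linearization step concrete, the following Python sketch sets up and solves the core of the relaxed program \ref{eqn:lprelax} for a handful of same-frame candidates with \texttt{scipy.optimize.linprog}; the flow-conservation rows are omitted to keep the example short, and the unary and pairwise costs are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

c_unary = np.array([-2.0, -1.5, -1.0])    # three candidate detections
pairs   = [(0, 1), (0, 2)]                # same-frame pairs with interactions
q       = np.array([4.0, -0.5])           # overlap penalty / co-occurrence boost

n_f, n_u = len(c_unary), len(pairs)
cost = np.concatenate([c_unary, q])       # objective over the stacked variables [f, u]

A_ub, b_ub = [], []
for k, (i, j) in enumerate(pairs):
    row = np.zeros(n_f + n_u); row[n_f + k] = 1.0; row[i] = -1.0
    A_ub.append(row); b_ub.append(0.0)    # u_ij - f_i <= 0
    row = np.zeros(n_f + n_u); row[n_f + k] = 1.0; row[j] = -1.0
    A_ub.append(row); b_ub.append(0.0)    # u_ij - f_j <= 0
    row = np.zeros(n_f + n_u); row[i] = 1.0; row[j] = 1.0; row[n_f + k] = -1.0
    A_ub.append(row); b_ub.append(1.0)    # f_i + f_j - u_ij <= 1

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * (n_f + n_u), method="highs")
f_relaxed, u_relaxed = res.x[:n_f], res.x[n_f:]
print(f_relaxed, u_relaxed)
\end{verbatim}
In the full problem the same pattern is applied to every same-frame pair in $EC$, with the flow-conservation equalities added as \texttt{A\_eq} rows, after which the rounding heuristics above are applied to the fractional solution.
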
\subsection{Greedy Sequential Search}
We now describe a simple greedy algorithm inspired by the combination of
dynamic programming and non-maximal suppression proposed in
~\cite{PirsiavashRF_CVPR_2011}. We carry out a series of rounds of dynamic
programming to find the shortest path between source and sink nodes. In each
round, once we have identified a track, we update the (unary) costs associated
with all detections to include the effect of the pairwise quadratic interaction
term of the newly activated track (\eg suppressing overlapping detections,
boosting the scores of commonly co-occurring objects). This is analogous to
greedy algorithms for maximum-weight independent set where the elements are
paths through the network.
\begin{algorithm}{}
\begin{minipage}{0.45\textwidth}
\caption{DP with pairwise Cost Update}
\label{alg:alg1}
\begin{algorithmic}[1]
\State \textbf{Input}: A Directed-Acyclic-Graph $G$ with edge weights $c_i,c_{ij}$
\State initialize $\mathcal{T} \leftarrow \emptyset$
\Repeat
\State Find shortest start-to-end path $p$ on $G$
\State $track\_cost = cost(p)$
\If {$track\_cost < 0$}
\ForAll{locations $x_i$ in $p$}
\State $c_j = c_j + q_{ij} + q_{ji}$ for all $ij,ji \in EC$
\State $c_i = +\infty$
\EndFor
\State $\mathcal{T} \leftarrow \mathcal{T} \cup p$
\EndIf
\Until{$track\_cost \ge 0$}
\State \textbf{Output}: track collection $\mathcal{T}$
\end{algorithmic}
\end{minipage}
\end{algorithm}
In the absence of quadratic terms, this algorithm corresponds to the 1-pass DP
approximation of the successive-shortest paths (SSP) algorithm. Hence it does
not guarantee an optimal solution, but, as we show in the experiments, it
performs well in practice. A practical implementation difference (from the
linear objective) is that updating the costs with the quadratic terms when a
track is instanced has the unfortunate effect of invalidating cost-to-go
estimates which could otherwise be cached and re-used between successive rounds
to accelerate the DP computation.
Interestingly, the greedy approach to updating the pairwise terms can also be
used with a 2-pass DP approximation to SSP where backward passes subtract
quadratic penalties. We describe the details of our implementation of the
2-pass algorithm in the Appendix. We found the 1-pass approach superior as
the complexity and runtime grows substantially for multi-pass DP with pairwise
updates.
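
For reference, a compact Python rendering of Algorithm \ref{alg:alg1} on a toy two-frame instance is given below. The graph layout, unary costs, transition costs and pairwise terms are all illustrative; the shortest start-to-end path is found by a single dynamic-programming sweep over the frame-ordered DAG, and after each instanced track the unary costs of same-frame neighbours are updated with the pairwise terms while used detections are excluded.
\begin{verbatim}
import math

c = {(0, 0): -4.0, (0, 1): -3.0, (1, 0): -4.0, (1, 1): -3.0}  # (frame, index) -> cost
c_birth, c_death = 1.0, 1.0
c_trans = {((0, 0), (1, 0)): 0.5, ((0, 0), (1, 1)): 1.5,
           ((0, 1), (1, 0)): 1.5, ((0, 1), (1, 1)): 0.5}
q = {((0, 0), (0, 1)): 2.0, ((1, 0), (1, 1)): 2.0}            # same-frame penalties

def q_sum(i, j):                      # symmetric pairwise cost q_ij + q_ji
    return q.get((i, j), 0.0) + q.get((j, i), 0.0)

def shortest_track():
    best, parent = {}, {}
    for det in sorted(c):             # frames in increasing order
        best[det], parent[det] = c_birth + c[det], None
        for (u, v), ct in c_trans.items():
            if v == det and u in best and best[u] + ct + c[det] < best[det]:
                best[det], parent[det] = best[u] + ct + c[det], u
    end = min(best, key=lambda d: best[d] + c_death)
    path, node = [], end
    while node is not None:
        path.append(node); node = parent[node]
    return list(reversed(path)), best[end] + c_death

tracks = []
while True:
    path, cost = shortest_track()
    if cost >= 0:
        break
    tracks.append(path)
    for det in path:                  # pairwise cost update and exclusion
        for other in c:
            if other != det and other[0] == det[0]:
                c[other] += q_sum(det, other)
        c[det] = math.inf
print(tracks)
\end{verbatim}
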
\section{Tracking Features and Potentials}
In order to learn the tracking potentials ($\mathbf{c}$ and $\mathbf{q}$) we
parameterize the flow cost objective by a vector of weights $\mathbf{w}$ and a set of
features $\Psi(X,\mathbf{f})$ that depend on features extracted from the video,
the spatio-temporal relations between candidate detections, and which tracks
are instanced. With this linear parameterization we write the cost of a given
flow as $C(\mathbf{f}) = -\mathbf{w}^T \Psi(X,\mathbf{f})$ where the negative sign is a
useful convention to convert the minimization problem into a maximization.
The vector components of the weight and feature vector are given by:
\begin{align}
\mathbf{w} =
\begin{bmatrix}
w_S\\ w_t\\ w_s\\ w_a\\ w_E
\end{bmatrix}
\indent \Psi(X,\mathbf{f}) =
\begin{bmatrix}
\sum_i \phi_S(x_i^s) f_i^s \\ \sum_{ij \in E} \psi_t(x_i, x_j) f_{ij} \\ \sum_{ij \in EC} \psi_s(x_i, x_j) f_i f_j \\ \sum_i \phi_a(x_i) f_i \\ \sum_i \phi_E(x_i^t) f_i^t
\end{bmatrix}
\end{align}
Here $w_a$ represents the local appearance template for the tracked objects of
interest, $w_t$ the weights for transition features, $w_s$ the
weights for pairwise interactions, and $w_S$ and $w_E$ the weights
associated with track births and deaths. $\phi_a(x_i)$ is
the image feature at spatial-temporal location $x_i$, $\psi_t(x_i, x_j)$
represents the feature of transition from location $x_i$ to location $x_j$,
$\psi_s(x_i, x_j)$ represents the feature of pairwise interaction between location
$x_i$ and $x_j$ that are in the same frame, $\phi_S(x_i^s)$ represents
feature of birth node to location $x_i$ and $\phi_E(x_i^t)$ represents feature
of location $x_i$ to sink node.
\textbf{Local appearance model:} We make use of an off-the-shelf detector
to capture local appearance. Our local appearance feature thus consists
of the detector score along with a constant 1 to allow for a variable
bias.
\textbf{Transition model:} We use a simple motion model (described in
Section \ref{sec:experiments}) to predict candidate windows' locations in future
frames; we connect a candidate $x_i$ at time $t_i$ with another candidate $x_j$
at a later time $t_i+n$, only if the overlap ratio between $x_i$'s predicted
window at $t_i+n$ and $x_j$'s window at $t_i+n$ exceeds $0.3$. The overlap
ratio is defined as two windows' intersection over their union. We use this
overlap ratio as a feature associated with each transition link. The transition
link's feature will be 1 if this ratio is lower than 0.5, and 0 otherwise. In
our experiments we allow up to $7$ frames occlusion for all the network-flow
methods. We append a constant 1 to this feature and bin these features according
to the length of transition. This yields a $16$ dimensional feature for each transition
link.
\textbf{Birth/death model:}
In applications with static cameras it can be useful to learn a spatially
varying bias to model where tracks are likely to appear or disappear.
However, since the videos in our experiments are all captured from a moving vehicle,
we use a single constant value of 1 for the birth and death features.
\textbf{Pairwise interactions:}
$w_s$ is a weight vector that encodes valid geometric configurations of two
objects. $\psi(x_i, x_j)$ is a discretized spatial-context feature
that bins the relative location of the detection window at site $x_i$ and the window at
site $x_j$ into one of $D$ relations including on top of, above, below,
next-to, near, far and overlap (similar to the spatial context
of~\cite{DesaiRF_ICCV_2009}). To mimic the temporal NMS described in
\cite{PirsiavashRF_CVPR_2011} we add one additional relation, strictly overlap,
which is defined as the intersection of two boxes over the area of the first box;
we set the corresponding feature to 1 if this ratio is greater than 0.9 and 0 otherwise.
Now assume that we have $K$ classes of objects in
the video; then $w_s$ is a $DK^2$-dimensional vector, \ie $w_s = [w_{s11}^T, w_{s12}^T,
..., w_{sij}^T, ... , w_{sKK}^T]^T$, in which $w_{sij}$ is a length-$D$
column vector that encodes valid geometric configurations of objects of class
$i$ w.r.t. objects of class $j$. In this way we can capture both intra- and
inter-class contextual relationships between tracks.
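
The following Python helper sketches how such a discretized spatial-context feature can be computed for a pair of boxes; the particular distance and overlap thresholds used for the bins are illustrative assumptions rather than the exact values of our implementation.
\begin{verbatim}
def spatial_context_feature(box_i, box_j):
    """Bin the location of box_j relative to box_i into D = 8 relations:
    [on-top-of, above, below, next-to, near, far, overlap, strict-overlap].
    Boxes are (x1, y1, x2, y2); all thresholds are illustrative."""
    xi1, yi1, xi2, yi2 = box_i
    xj1, yj1, xj2, yj2 = box_j
    area_i = (xi2 - xi1) * (yi2 - yi1)
    iw = max(0.0, min(xi2, xj2) - max(xi1, xj1))
    ih = max(0.0, min(yi2, yj2) - max(yi1, yj1))
    inter = iw * ih
    union = area_i + (xj2 - xj1) * (yj2 - yj1) - inter
    cxi, cyi = 0.5 * (xi1 + xi2), 0.5 * (yi1 + yi2)
    cxj, cyj = 0.5 * (xj1 + xj2), 0.5 * (yj1 + yj2)
    dist = ((cxi - cxj) ** 2 + (cyi - cyj) ** 2) ** 0.5
    w_i = xi2 - xi1

    feat = [0.0] * 8
    if inter / area_i > 0.9:
        feat[7] = 1.0                      # strict overlap: intersection over area of box_i
    elif inter / union > 0.5:
        feat[6] = 1.0                      # overlap
    elif cyj < yi1:
        feat[0 if inter > 0 else 1] = 1.0  # on top of / above
    elif cyj > yi2:
        feat[2] = 1.0                      # below
    elif dist < 1.5 * w_i:
        feat[3] = 1.0                      # next-to
    elif dist < 3.0 * w_i:
        feat[4] = 1.0                      # near
    else:
        feat[5] = 1.0                      # far
    return feat
\end{verbatim}
The resulting indicator vector forms the block of $\psi_s(x_i,x_j)$ that multiplies the weights $w_{sij}$ for the corresponding pair of object classes.
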
\section{Learning}
\label{sec:learning}
We formulate parameter learning of tracking models as a structured prediction
problem. With some abuse of notation, assume we have $N$ training videos
$(X_n, \mathbf{f}_n) \in \mathcal{X} \times \mathcal{F}, n = 1,..., N$. Given
ground-truth tracks in training videos specified by flow variables $\mathbf{f}_n$,
we discriminatively learn tracking model parameters $w$ using a structured SVM with
margin rescaling:
\begin{align}
&\mathbf{w}^* = \underset{\mathbf{w},\xi_n \ge 0}{\operatorname{argmin}} \ \frac{1}{2} \| \mathbf{w} \| ^2 + C \sum_n \xi_n \label{eqn:structsvm}\\
\text{s.t.} &\ \forall n, \widehat{\mathbf{f}}, \langle \mathbf{w}, \bigtriangleup \Psi(X_n,\mathbf{f}_n, \widehat{\mathbf{f}}) \rangle \ge L(\mathbf{f}_n, \widehat{\mathbf{f}}) - \xi_n \nonumber
\end{align}
where
\begin{align}
&\bigtriangleup \Psi(X_n,\mathbf{f}_n, \widehat{\mathbf{f}}) = \Psi(X_n,\mathbf{f}_n) - \Psi(X_n,\widehat{\mathbf{f}}) \nonumber
\end{align}
where $\Psi(X_n,\mathbf{f}_n)$ are the features extracted from the $n$th training video.
$L(\mathbf{f}_n, \widehat{\mathbf{f}})$ is a loss
function that penalizes any difference between the inferred label
$\widehat{\mathbf{f}}$ and the ground truth label $\mathbf{f}_n$. The
constraints on the slack variables $\xi_n$ ensure that we pay a cost for
any training video in which the flow cost of the ground-truth tracks under
model $w$ is higher than that of some other, incorrect labeling.
\subsection{Cutting plane optimization}
We optimize the structured SVM objective in~\ref{eqn:structsvm} using a
standard cutting-plane method~\cite{Joachims/etal/09a} in which the exponential
number of constraints (one for each possible flow $\widehat{\mathbf{f}}$)
is approximated by a much smaller number of terms. Given a current
estimate of $\mathbf{w}$ we find a ``most violated constraint'' for each training
video:
\[
\widehat{\mathbf{f}}_n^* = \underset{\widehat{\mathbf{f}}}{\operatorname{argmax}} \ L(\mathbf{f}_n, \widehat{\mathbf{f}}) - \langle \mathbf{w}, \bigtriangleup \Psi(X_n,\mathbf{f}_n, \widehat{\mathbf{f}}) \rangle
\]
We can then add these constraints to the optimization problem and solve for an
updated $\mathbf{w}$. This procedure is iterated until no additional constraints are
added to the problem. In our implementation, at each iteration we add a single
linear constraint which is a sum of violating constraints derived from
individual videos in the dataset which is also a valid cutting
plane constraint~\cite{DesaiRF_ICCV_2009}.
The key subroutine is finding the most-violated constraint for a given video
which requires solving the loss-augmented inference problem (we drop the $n$
subscript notation from here on)
\begin{align}
\widehat{\mathbf{f}}^* &= \underset{\widehat{\mathbf{f}}}{\operatorname{argmin}} \ \langle w, \Psi(X,\widehat{\mathbf{f}}) \rangle - L(\mathbf{f}, \widehat{\mathbf{f}})
\end{align}
As long as the loss function $L(\mathbf{f}, \widehat{\mathbf{f}})$
decomposes as a sum over flow variables, this problem has the same form
as our test time tracking inference problem, the only difference being that the
cost of variables in $\mathbf{f}$ is augmented by their corresponding negative
loss.
We note that our two inference algorithms behave somewhat differently when
producing constraints. The greedy algorithm has no guarantee of finding the
optimal flow for a given tracking problem and hence may not generate all the
necessary constraints for learning $\mathbf{w}$. In contrast, for the LP relaxation, we
have the option of adding constraints corresponding to fractional solutions
(rather than rounding them to discrete tracks). If we use a loss function that
penalizes incorrect non-integral solutions, this may push the structured SVM to
learn parameters that tend to result in tight relaxations. These scenarios are
termed ``undergenerating'' and ``overgenerating'' respectively by
\cite{Finley/Joachims/08a} since approximate inference is performed over a
subset or superset of the exact space of flows.
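
The following Python sketch shows the resulting training loop in its simplest form. Each training ``video'' is reduced to a toy problem with a small, explicitly enumerated set of candidate labelings so that loss-augmented inference can be done by brute force; this enumeration is only a stand-in for the tracking solvers of Section \ref{sec:inference}, and the feature map, loss, data and solver tolerances are all illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n_vars = 4, 5
def make_video():
    X = rng.normal(size=(n_vars, d))
    f_gt = rng.integers(0, 2, size=n_vars).astype(float)
    labelings = [np.array([(k >> i) & 1 for i in range(n_vars)], dtype=float)
                 for k in range(2 ** n_vars)]
    return X, f_gt, labelings
data = [make_video() for _ in range(3)]

psi = lambda X, f: X.T @ f                            # joint feature map
loss = lambda f_gt, f: float(np.abs(f_gt - f).sum())  # Hamming loss

def most_violated(w, X, f_gt, labelings):             # loss-augmented inference
    vals = [loss(f_gt, f) - w @ (psi(X, f_gt) - psi(X, f)) for f in labelings]
    return labelings[int(np.argmax(vals))]

C, tol = 1.0, 1e-3
w, cuts = np.zeros(d), []              # cuts: (summed delta-psi, summed loss)
for _ in range(100):
    dpsi, L = np.zeros(d), 0.0
    for X, f_gt, labelings in data:    # aggregate one constraint over all videos
        f_hat = most_violated(w, X, f_gt, labelings)
        dpsi += psi(X, f_gt) - psi(X, f_hat)
        L += loss(f_gt, f_hat)
    xi = max([0.0] + [Lk - w @ dk for dk, Lk in cuts])
    if L - w @ dpsi <= xi + tol:       # no sufficiently violated constraint left
        break
    cuts.append((dpsi, L))
    obj = lambda z: 0.5 * z[:d] @ z[:d] + C * z[d]     # restricted master problem
    cons = [{"type": "ineq", "fun": (lambda z, dk=dk, Lk=Lk: z[:d] @ dk - Lk + z[d])}
            for dk, Lk in cuts] + [{"type": "ineq", "fun": lambda z: z[d]}]
    w = minimize(obj, np.zeros(d + 1), constraints=cons, method="SLSQP").x[:d]
print(w)
\end{verbatim}
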
\subsection{Loss function}
Now we describe loss functions for the multi-target tracking problem. We use a
weighted Hamming loss to measure the discrepancy between ground truth labels $\mathbf{f}$
and inferred labels $\widehat{\mathbf{f}}$:
\begin{align}
L(\widehat{\mathbf{f}} , \mathbf{f}) = \sum_{f_i \in \mathbf{f}} loss_i \left| f_i - \widehat{f_i} \right|
\end{align}
where $\{ loss_1, ..., loss_i,..., loss_{|\mathbf{f}|} \}$ is a
vector indicating the penalty for differences between the estimated
flow $\widehat{\mathbf{f}}$ and the ground-truth $\mathbf{f}$.
For example, when $\mathbf{loss} = \mathbf{1}$ it reduces to the standard Hamming loss.
\textbf{Transition Loss:} A critical aspect for successful learning is to
define a good $\mathbf{loss}$ vector that closely resembles major tracking
performance criteria, such as Multiple Object Tracking Accuracy
(MOTA~\cite{Bernardin:2008:EMO:1384968.1453688}). Metrics
such as false positive, false negative, true positive, true negative and
true/false birth/death can be easily incorporated by setting their
corresponding values in $\mathbf{loss}$ to 1.
By definition, ID switches and fragmentations~\cite{Li09learningto} are determined by looking at the
labels of two consecutive transition links simultaneously; under such a
definition the loss cannot be optimized by our inference routine, which only
considers pairwise relations between detections within a frame. Instead, we
propose a decomposable loss for transition links that attempts to capture
important aspects of MOTA by taking into account the length and localization of
transition links rather than just using a constant (Hamming) loss on mislabeled
links. We found empirically that careful specification of the loss function
is crucial for learning a good tracking model.
In order to describe our transition loss, let us first distinguish the following types of
transition links: $NN$ is the link from a false detection to another false
detection, $PN$ is the link from a true detection to a false detection, $NP$ is
the link from a false detection to a true detection, $PP^+$ is the link from a
true detection to another true detection with the same identity, and $PP^-$ is
the link from a true detection to another true detection with a different
identity. For all the transition links, we interpolate detections between its
start detection and end detection (if their frame numbers differ more than 1);
the interpolated virtual detections are considered either true virtual
detection or false virtual detection, depending on whether they overlap with a
ground truth label or not. Loss for different types of transition is defined
as:
\begin{adjustwidth}{1em}{0pt}
1. For $NN$ links, the loss will be (number of true virtual detections + number of false virtual detections)\\
2. For $PN$ and $NP$ links, the loss will be (number of true virtual detections + number of false virtual detections + 1)\\
3. For $PP^+$ links, the loss will be (number of true virtual detections)\\
4. For $PP^-$ links, the loss will be (number of true virtual detections + number of false virtual detections + 2)
\end{adjustwidth}
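To make the rules above concrete, the following Python sketch (our illustration rather than released code; the helper name and the precomputed counts of virtual detections are assumptions) assembles the loss for a single transition link:
\begin{verbatim}
def transition_loss(link_type, n_true_virtual, n_false_virtual):
    """Loss contributed by one transition link.

    link_type       -- one of 'NN', 'PN', 'NP', 'PP+', 'PP-'
    n_true_virtual  -- interpolated detections overlapping ground truth
    n_false_virtual -- interpolated detections with no ground-truth overlap
    """
    if link_type == 'NN':
        return n_true_virtual + n_false_virtual
    if link_type in ('PN', 'NP'):
        return n_true_virtual + n_false_virtual + 1
    if link_type == 'PP+':
        return n_true_virtual
    if link_type == 'PP-':
        return n_true_virtual + n_false_virtual + 2
    raise ValueError('unknown link type: %s' % link_type)
\end{verbatim}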
{\bf Ground-truth flows:} In practice, available training datasets specify
ground-truth bounding boxes that need to be mapped onto ground-truth flow
variables $\mathbf{f_n}$ for each video. To perform this mapping, we first consider
each frame separately, taking the highest-scoring detection window that
overlaps a ground-truth label as a true detection; each true detection is
assigned the same track identity label as the ground-truth label it overlaps.
Next, for each track identity, we run a simplified version of the dynamic
programming algorithm to find the path that claims the largest number of true
detections. After iterating through all identity labels, any instantiated graph
edge is marked as a true detection/transition/birth/death while the remainder
are marked false.
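As a rough illustration of the per-frame assignment step, the sketch below (hypothetical names; we assume axis-aligned boxes and detections carrying a score) matches each ground-truth box with its highest-scoring overlapping detection; the subsequent per-identity dynamic program then selects the path claiming the most of these true detections:
\begin{verbatim}
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def assign_true_detections(detections, gt_boxes, iou_thresh=0.5):
    """One frame: map ground-truth track ids to their best detections.

    detections -- list of dicts with 'box' and 'score'
    gt_boxes   -- list of dicts with 'box' and 'track_id'
    Returns a dict: detection index -> ground-truth track id.
    """
    assignment = {}
    for gt in gt_boxes:
        candidates = [(i, d) for i, d in enumerate(detections)
                      if i not in assignment
                      and iou(d['box'], gt['box']) >= iou_thresh]
        if candidates:
            # the highest-scoring overlapping window becomes a true detection
            best_i, _ = max(candidates, key=lambda c: c[1]['score'])
            assignment[best_i] = gt['track_id']
    return assignment
\end{verbatim}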
\section{Experimental results}
\label{sec:experiments}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[clip,trim=5cm 3cm 27cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{dp_seq0004_247.jpg}&\includegraphics[clip,trim=5cm 3cm 27cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lpdp_seq0004_247.jpg}\\
\includegraphics[clip,trim=5cm 3cm 27cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{dp_seq0004_248.jpg} &\includegraphics[clip,trim=5cm 3cm 27cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lpdp_seq0004_248.jpg}
\end{tabular}
\caption{Example benefit of the soft transition penalty. The left column shows an ID
switch error (IDSW) of the baseline caused by removing aggressive transition
links based on an empirical hard overlap threshold. In the right column,
our model prevents this error by learning a soft penalty function that
allows some aggressive transitions to occur.}
\end{figure}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{opt_seq0008_97.jpg}&\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lplp_seq0008_97.jpg}\\
\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{opt_seq0008_102.jpg}&\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lplp_seq0008_102.jpg}\\
\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{opt_seq0008_110.jpg}&\includegraphics[clip,trim=16cm 5cm 22cm 5cm,width=0.215\textwidth,height=0.125\textwidth]{lplp_seq0008_110.jpg}
\end{tabular}
\caption{Example of track co-occurrence. The right column shows the model
learned with pairwise terms (LP+Flow+Struct), while the left column shows the model
learned without pairwise terms (SSP+Flow+Struct). The co-occurrence term forces
both tracks 2 and 3 to be initialized even when the detector responses are weak.}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[clip,trim=0cm 0.55cm 0cm 0cm,width=0.45\linewidth]{transition_weights.png} &
\includegraphics[clip,trim=0cm 1.75cm 1cm 0cm,width=0.45\linewidth]{pairwise_weights.png} \\
(a) inter-frame weights & (b) intra-frame weights \\
\end{tabular}
\end{center}
\caption{Visualization of the weight vector learned by our method. Yellow
has small cost, blue has large cost.
(a) shows transition weights for different lengths of frame jumps. The
model encourages transitions to nearby neighboring frames, and penalizes
long or weak transition links (\ie overlap ratio lower than 0.5). (b)
shows the learned pairwise contextual weights between objects. The model
encourages intra-class co-occurrence when objects are close but penalizes
overlap and objects on top of others. Note the strong negative interaction
learned between cyclist and pedestrian (two classes that are easily
confused by their respective detectors). By exploiting contextual cues we
can make correct predictions in this otherwise confusing configuration.
}
\end{figure}
\textbf{Dataset:} We have focused our experiments on the training sequences of the KITTI tracking
benchmark~\cite{Geiger2012CVPR}. The KITTI tracking benchmark consists of 21
training sequences with a total of 8008 frames and 8 classes of labeled
objects; of all the labeled objects, we evaluated the three categories that had
a sufficient number of instances for comparative evaluation: cars, pedestrians
and cyclists. We use publicly available LSVM~\cite{voc-release4} reference detections and evaluation script\footnote{\url{http://www.cvlibs.net/datasets/kitti/eval_tracking.php}}.
The evaluation script only evaluates objects that are not too far away and not truncated by more than 15 percent; it also does not count vans as false positives for cars or sitting
persons as false positives for pedestrians. The final dataset contains
636 labeled car trajectories, 201 labeled pedestrian trajectories and 37
labeled cyclists trajectories.
\textbf{Training with ambiguous labels:} One difficulty of training on the
KITTI tracking benchmark is that it has special evaluation rules for ground
truth labels such as small/truncated objects and vans for cars, sitting persons
for pedestrians. This is resolved by removing all detection candidates that
correspond to any of these ``ambiguous'' ground-truth labels during training; in
this way we avoid mining hard negatives from those labels. Also, to speed up
training, we partition full-sized training sequences into 10-frame-long
subsequences with a 5-frame overlap, and define losses on each subsequence
separately.
\begin{figure}[t]
\begin{center}
\includegraphics[clip,trim=0cm 6cm 0cm 6cm,width=1.0\linewidth]{speed_and_quality.pdf}
\end{center}
\caption{Speed and quality comparison of the proposed undergenerating and
overgenerating approximations. Over the 21 training sequences in the KITTI
dataset, LP+rounding produces costs that are very close to the relaxed global
optimum. DP yields solutions within 1\% of the relaxed global
optimum, while being 2 to 7 times faster than a commercial LP solver (MOSEK).
}
\label{fig:fig5}
\end{figure}
\textbf{Data-dependent transition model:} In order to keep the size of tracking
graphs tractable for our inference methods, we need a heuristic to select a
sparse set of links between detection candidates across frames. We found that
simply predicting candidates' locations in future frames via optical flow gives
very good performance. Specifically, we first compute frame-wise optical flow
using the software of~\cite{Liu2009}; then, for a candidate detection $x_i$ at frame
$t_i$, we compute the mean of the vertical flows and the mean of the horizontal flows
within the candidate box, and use them to predict the candidate's location in the
next frame $t_i+1$. For $x_i$'s predicted location in frame $t_i+2$ we repeat the
process starting from the newly predicted location at $t_i+1$ with the candidate's
original box size, and similarly for $t_i+n$.
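A minimal sketch of this propagation is shown below; the array layout (one flow field of shape (H, W, 2) per frame holding horizontal and vertical displacements) and the function name are our assumptions, not the authors' implementation:
\begin{verbatim}
import numpy as np

def propagate_box(box, flows, n_steps):
    """Predict a detection box in the next n_steps frames.

    box   -- (x1, y1, x2, y2) in the current frame
    flows -- flows[k] is the flow field from frame t_i+k to t_i+k+1
    Returns a list of predicted boxes, one per future frame.
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1              # keep the original box size
    predictions = []
    for step in range(n_steps):
        patch = flows[step][int(y1):int(y2), int(x1):int(x2)]
        dx = patch[..., 0].mean()        # mean horizontal flow in the box
        dy = patch[..., 1].mean()        # mean vertical flow in the box
        x1, y1 = x1 + dx, y1 + dy
        x2, y2 = x1 + w, y1 + h
        predictions.append((x1, y1, x2, y2))
    return predictions
\end{verbatim}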
\textbf{Trajectory smoothing:} During evaluation we observe that many track
fragmentation errors (FRAG) reported by the benchmark arise because the raw
trajectory oscillates away from the ground truth owing to poorly localized
detection candidates. Inspired by the trajectory model of
~\cite{Andriyenko:2012:DCO}, we post-process each output raw trajectory by
fitting a cubic B-spline. This smoothing of the trajectory eliminates many
FRAGs from the raw track, making the fragmentation number more meaningful
when compared across different models.
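For illustration, the cubic B-spline fit could be applied to the box-center trajectory as in the sketch below (we assume SciPy is available and smooth only the 2D centers; the exact quantities smoothed in practice may differ):
\begin{verbatim}
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_trajectory(centers, smoothing=5.0):
    """Fit a cubic B-spline to a sequence of (x, y) box centers."""
    centers = np.asarray(centers, dtype=float)
    if len(centers) < 4:                 # a cubic spline needs >= 4 points
        return centers
    tck, u = splprep([centers[:, 0], centers[:, 1]], k=3, s=smoothing)
    xs, ys = splev(u, tck)
    return np.stack([xs, ys], axis=1)
\end{verbatim}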
\textbf{Baselines:} We use the publicly available code
from~\cite{Geiger2014PAMI} as a first baseline. It relies on a three-stage
tracklet linking scheme with occlusion-sensitive appearance learning; it is by
far the best tracker for cars on the KITTI tracking benchmark among all published
methods. We also consider dynamic programming (DP) and successive shortest path (SSP) with default parameters
from~\cite{PirsiavashRF_CVPR_2011} as two additional baselines, denoted as DP+Flow
and SSP+Flow in our tables.
\textbf{Parameter settings:} We tuned the structural parameters of the
various baselines to give good performance. For all baselines we only use
detections that have a positive score. For DP+Flow and SSP+Flow we also remove
all transition links that have overlap ratios lower than 0.5. For learned
tracking models (+Struct) we use detections that have scores greater than -0.5,
and transition links that have overlap ratios greater than 0.3.
\textbf{Benchmark Results:}
We evaluate performance using a standard battery of performance measures.
The evaluation result for each object category, as well as for all categories are shown in Table~\ref{tab:tab1}.
For our learned tracking models (+Struct) we use either a network simplex solver
(for SSP+Flow+Struct) or the LP relaxation (for LP/DP+Flow+Struct) during
training and conduct leave-one-sequence-out cross-validation with
$C=2^{-9},2^{-8},\ldots,2^3$. We report cross-validation results under the best $C$, which is
$C=2^{-8}$ for SSP+Flow+Struct and $C=2^{-7}$ for LP/DP+Flow+Struct. Our simple
motion model helps DP+Flow outperform the state-of-the-art baseline by a
significant margin. One exception is IDSW, which we attribute to the fact
that the network-flow methods do not explicitly model target appearance.
While SSP+Flow seems to perform poorly with default parameters, it turns out
that with properly learned parameters (SSP+Flow+Struct) it produces results
that are comparable to (and often better than) DP+Flow; this indicates that
SSP has much more potential than suggested in previous work. In addition,
SSP's guarantee of optimality makes it very attractive if more complicated
features and network structures are to be used in learning. As shown in
Table~\ref{tab:tab1}, in our evaluation over all objects, our model learned with
pairwise costs (LP/DP+Flow+Struct) achieves the best MOTA, Recall, Mostly
Tracked (MT) and Mostly Lost (ML) performance while keeping other metrics
competitive.
\textbf{Approximate Inference:} To evaluate the quality of the LP+rounding and DP
approximations, we run both LP+rounding and DP inference on models trained via
the LP relaxation and DP, respectively. We then average the running time and
minimum cost found on each sequence for LP+rounding and DP, respectively. Fig~\ref{fig:fig5}
shows the cumulative running time and cost for each algorithm. In our
experiments, LP+rounding often finds the exact relaxed global optimum, and when
it does not it still gives a very close approximation. While greedy forward
search using DP rarely reaches the relaxed global optimum, it still produces good
approximate solutions that are often within 1\% of the relaxed global optimum
while running significantly faster (2-7x) than LP+rounding.
\begin{table}
\begin{center}
\renewcommand{\tabcolsep}{3.5pt}
{\scriptsize
\begin{tabular}{ c | c | c | c | c | c | c | c | c |}
\bb{Car} & MOTA & MOTP & Rec & Prec & MT & ML & IDSW & FRAG \\
\hline\hline
Baseline~\cite{Geiger2014PAMI} & 57.8 & 78.8 & 58.6 & 98.8 & 14.9 & 28.4 & 22 & 225 \\
\hline\hline
SSP+Flow & 49.0 & 79.1 & 49.1 & 99.7 & 18.4 & 59.9 & 0 & 47 \\
\hline
DP+Flow & 62.2 & 79.0 & 63.4 & 98.5 & 25.2 & 24.2 & 43 & 177 \\
\hline\hline
SSP+Flow+Struct & 63.4 & 78.3 & 65.4 & 97.1 & 27.4 & 20.0 & 2 & 179 \\
\hline
LP+Flow+Struct & 64.1 & 78.1 & 67.1 & 95.7 & 30.5 & 18.7 & 3 & 208 \\
\hline
DP+Flow+Struct & 64.6 & 78.0 & 67.5 & 96.0 & 30.1 & 18.6 & 17 & 222 \\
\hline
\\
\end{tabular}}
{\scriptsize
\begin{tabular}{ c | c | c | c | c | c | c | c | c |}
\bb{Pedestrian} & MOTA & MOTP & Rec & Prec & MT & ML & IDSW & FRAG \\
\hline\hline
Baseline & 40.2 & 73.2 & 49.0 & 86.6 & 4.2 & 32.2 & 132 & 461 \\
\hline\hline
SSP+Flow & 37.9 & 73.4 & 41.8 & 92.0 & 8.4 & 57.5 & 25 & 146 \\
\hline
DP+Flow & 49.7 & 73.1 & 57.2 & 88.9 & 18.6 & 26.3 & 46 & 260 \\
\hline\hline
SSP+Flow+Struct & 51.2 & 73.2 & 57.4 & 90.5 & 19.2 & 24.6 & 16 & 230 \\
\hline
LP+Flow+Struct & 52.6 & 72.9 & 60.2 & 89.2 & 22.2 & 21.6 & 31 & 281 \\
\hline
DP+Flow+Struct & 52.4 & 73.0 & 60.0 & 89.2 & 19.8 & 22.2 & 36 & 277 \\
\hline
\\
\end{tabular}}
{\scriptsize
\begin{tabular}{ c | c | c | c | c | c | c | c | c |}
\bb{Cyclist} & MOTA & MOTP & Rec & Prec & MT & ML & IDSW & FRAG \\
\hline\hline
Baseline & 39.0 & 81.6 & 39.6 & 99.5 & 5.4 & 37.8 & 7 & 26 \\
\hline\hline
SSP+Flow & 18.7 & 85.6 & 18.7 & 100 & 5.4 & 89.2 & 0 & 1 \\
\hline
DP+Flow & 42.4 & 81.2 & 42.5 & 100 & 18.9 & 45.9 & 2 & 5 \\
\hline\hline
SSP+Flow+Struct & 47.4 & 79.7 & 59.9 & 83.0 & 35.1 & 32.4 & 5 & 10 \\
\hline
LP+Flow+Struct & 52.3 & 79.6 & 61.1 & 88.2 & 40.6 & 27.0 & 12 & 21 \\
\hline
DP+Flow+Struct & 56.3 & 79.4 & 64.2 & 89.7 & 40.5 & 27.0 & 9 & 15 \\
\hline
\end{tabular}}
\end{center}
\renewcommand{\tabcolsep}{3.5pt}
{\scriptsize
\begin{tabular}{ c | c | c | c | c | c | c | c | c |}
\bb{All Categories} & MOTA & MOTP & Rec & Prec & MT & ML & IDSW & FRAG \\
\hline\hline
Baseline & 51.7 & 77.4 & 54.8 & 95.3 & 12.1 & 29.7 & 161 & 712 \\
\hline\hline
SSP+Flow & 44.2 & \bb{77.7} & 45.5 & \bb{97.5} & 15.6 & 60.8 & 25 & \bb{194} \\
\hline
DP+Flow & 57.6 & 77.4 & 60.5 & 95.7 & 23.5 & 25.7 & 91 & 442 \\
\hline\hline
SSP+Flow+Struct & 59.0 & 77.0 & 62.8 & 94.5 & 25.9 & 21.5 & \bb{23} & 419 \\
\hline
LP+Flow+Struct & 60.2 & 76.7 & 64.8 & 93.5 & \bb{29.2} & 19.7 & 46 & 510 \\
\hline
DP+Flow+Struct & \bb{60.6} & 76.7 & \bb{65.1} & 93.8 & 28.4 & \bb{19.7} & 62 & 514 \\
\hline
\end{tabular}}
\vspace{0.05in}
\caption{Tracking result for cars, pedestrian and cyclist categories
in the KITTI tracking benchmark and aggregate performance over all categories.
The proposed method using quadratic interactions between objects and parameters
trained using structured prediction achieves state-of-the art MOTA and is
competitive across multiple performance measures.
}
\label{tab:tab1}
\end{table}
\begin{table}
\begin{center}
{\scriptsize
\begin{tabular}{ c c || c | c | c ||}
&& \multicolumn{3}{|c|| }{Train} \\
&&& DP & LP \\
\hline\hline
\multirow{14}{*}{Test}
&\multirow{8}{*}{DP} & MOTA & 60.5 & 60.6 \\ \cline{3-5}
&& Recall & 65.2 & 65.1 \\ \cline{3-5}
&& Precision & 93.5 & 93.8 \\ \cline{3-5}
&& MT & 28.6 & 28.4 \\ \cline{3-5}
&& ML & 20.5 & 19.7 \\ \cline{3-5}
&& IDSW & 68 & 62\\ \cline{3-5}
&& FRAG & 517 & 514 \\ \cline{2-5}
&\multirow{8}{*}{LP+round} & MOTA & 60.1 & 60.2 \\ \cline{3-5}
&& Recall & 64.9 & 64.8 \\ \cline{3-5}
&& Precision & 93.3 & 93.5 \\ \cline{3-5}
&& MT & 29.3 & 29.2 \\ \cline{3-5}
&& ML & 20.3 & 19.7 \\ \cline{3-5}
&& IDSW & 56 & 46 \\ \cline{3-5}
&& FRAG & 518 & 510 \\
\hline
\hline
\end{tabular}}
\end{center}
\caption{Performance evaluation over 21 sequences using cross validation
for different combinations of inference algorithm used during training and
test time.}
\label{tab:tab2}
\end{table}
\textbf{Overgenerating versus Undergenerating:} Previous work has shown that, in
general, models trained with relaxed inference are preferable to models
trained with greedy inference. To investigate this idea in our particular
problem, we also conduct leave-one-sequence-out cross-validation using
either DP or the LP relaxation as the inference method for training. The
evaluation results under different training/testing inference combinations are
shown in Table~\ref{tab:tab2}. Notice that the model
trained with the LP relaxation does slightly better on most metrics, whereas DP
stands out as a good inference algorithm at test time. Moreover, though
slightly behind, the model trained with greedy DP comes very close to
the performance of the one trained with LP, which suggests that the greedy algorithm
proposed here is a very competitive inference method.
\section{Summary}
\label{sec:Summary}
We augmented the well-studied network-flow tracking model with pairwise costs, and
proposed an end-to-end framework that jointly optimizes the parameters of such a model.
We extensively evaluated a traditional LP relaxation-based method and a novel greedy
dynamic programming method for inference in the augmented network; both
achieve state-of-the-art performance, with our greedy DP algorithm being 2-7x
faster than a commercial LP solver.
\section{Acknowledgements} This work was supported by NSF DBI-1053036,
IIS-1253538 and a Google Research Award.
{\small
\bibliographystyle{ieee}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 245 |
\section{Introduction}
Controllable text generation is a challenging task in natural language generation, which aims to generate fluent text with desired attributes.
Pilot studies attempt single-aspect control by directly finetuning a conditional model \cite{ziegler2019finetuning, keskarCTRL2019}, or turn to methods with language models fixed \cite{Dathathri2020Plug} due to the high cost of large-scale pre-trained language models \cite{NEURIPS2020_1457c0d6, zhang2022opt}.
Recent works focus on a more practical setting, multi-aspect\footnote{For example, \textit{positive} is an attribute from sentiment aspect while \textit{sports} is an attribute from topic aspect.} controllable text generation, with existing approaches mainly divided into three technical routes: weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative}, multi-objective optimization \cite{kumar2021controlled, mireshghallah-etal-2022-mix}, and prefix-tuning \cite{qian-etal-2022-controllable}, which explore ways to combine controllers learned from single-aspect and apply them to a fixed language model yet suffering from attribute degeneration caused by the mutual interference of controllers.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/intersection.pdf}
\vspace{-0.8cm}
\caption{
Probability space of attributes. The \textcolor{HoneyOrange}{orange} background denotes the estimated distribution over natural language. \textcolor{blue}{Blue} and \textcolor{lgreen}{green} areas represent distributions over sentences containing attributes from two different aspects, respectively. Darker regions indicate higher probability in the space. The shaded regions are distributional centers, the areas with the highest probability density.
}
\label{fig:1}
\end{figure}
We provide a distributional perspective to observe and alleviate this problem.
In the current text generation paradigm, a language model forms an estimated distribution over sentences, with the training data regarded as samples from the natural language distribution \cite{NEURIPS2021_260c2432}.
For single-aspect control, these methods train a classifier or a prefix for each attribute independently, which can be regarded as estimating a center of the distribution over attribute-relevant sentences, before biasing the language model's distribution toward this center.
Correspondingly, when generalizing to multi-aspect control, their fusion strategy directly takes the interpolation or average of these centers, which may be too straightforward.
As shown in Figure \ref{fig:1}, the \textbf{\textcolor{Optimal}{interpolation}} point denotes the position acquired after combining multiple centers in the probability space, and the \textbf{\textcolor{Intersection}{intersection}} represents where oracle sentences that simultaneously satisfy multiple attributes lie.
In the left part of Figure \ref{fig:1}, when the distributions of attributes are symmetric\footnote{We plot distributions of attributes in \S \ref{attributes}.}, the interpolation point is indeed within the intersection area.
However, there can be a mismatch between the interpolation point and the intersection. For example, as illustrated in the right part of Figure \ref{fig:1}, two skewed distributions intersect at their tails, leaving the interpolation point outside the intersection area and thus unable to express all desired attributes together.
In this paper, different from approximating the intersection area with the interpolation point, we propose a strategy for directly acquiring the intersection.
We first deploy an autoencoder structure to map attribute-relevant sentences to latent representations constituting an estimated attribute space.
With our specially designed constraints, this space can model relationships among attributes.
Afterward, we provide an effective intersection searching algorithm that can walk around the long tail regions in distributions of all desired attributes and iteratively find where they combine more tightly.
Finally, we utilize a prefix-tuning-based decoder to construct sentences from the searched intersection.
We experiment on three-aspect control with two attributes from the sentiment aspect, four from the topic, and one from detoxification, with datasets IMDb movie reviews \cite{maas-etal-2011-learning}, AGNews \cite{NIPS2015_250cf8b5}, and Jigsaw Toxic Comment Classification Challenge Dataset, respectively. We evaluate the relevance of each attribute independently and calculate their average as the final relevance metric. Besides, we assess the text quality with perplexity and distinctness concerning fluency and diversity.
Results show that our method can significantly outperform strong baseline models on multi-aspect control. Furthermore, we find out in our analytical experiments that our intuitive assumptions fit well with our observation. The main contributions are as follows:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt]
\item We propose a distributional perspective that models multi-aspect control more practically.
\item We provide a method that directly searches for intersections in the attribute space and generates sentences with desired attributes.
\item We experimentally reveal the effectiveness of our method on multi-aspect control compared to strong baselines and achieve the SOTA.
\end{itemize}
\section{Related Work}
Variational autoencoders are often used for controllable text generation in early work \cite{10.5555/3305381.3305545, duan-etal-2020-pre, mai-etal-2020-plug}, where much effort is devoted to improving text fluency.
The prosperity of large-scale pre-trained language models \cite{radford2019language} provides more exploration directions for attribute control such as fine-tuning \cite{ficler-goldberg-2017-controlling, ziegler2019finetuning, keskarCTRL2019}. Recent work has made gratifying progress on single-aspect control \cite{krause-etal-2021-gedi-generative}, leading studies to gradually turn to a more difficult task, multi-aspect control, which includes the following three main approaches.
\paragraph{Weighted Decoding}
As the scale of language models increases rapidly, weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative, yang-klein-2021-fudge, liu-etal-2021-dexperts, gu-etal-2022-improving} becomes a simple and practical choice.
It is a framework that decomposes the probability of sentences conditioned on attributes into a language model and a classifier with Bayes' rule directly at decoding time.
When handling multi-aspect control, it can be easily generalized by interpolating classifiers \cite{lin-riedl-2021-plug}.
\paragraph{Multi-Objective Optimization}
The controllable text generation task is naturally a multi-objective optimization problem when its decoding process is regarded as an optimization objective.
Some approaches, such as DGC \cite{khalifa2020distributional}, Mix\&Match \cite{mireshghallah-etal-2022-mix}, and COLD Decoding \cite{qin2022cold}, adopt Energy-based Models \cite{lecun2006tutorial} to blend multiple objectives.
Others like MUCOCO \cite{kumar2021controlled} convert the optimization objectives of multi-aspect control to inequality constraints and thereby apply the lagrange multiplier method for this constrained optimization problem.
\paragraph{Prefix-Tuning}
GPT-3 \cite{brown2020language} provides a new paradigm named prompt-based learning \cite{liu2021pre}, which is able to perform few-shot learning on downstream tasks. Prefix-Tuning \cite{li-liang-2021-prefix} leverages the learned lightweight prompts to trigger the conditional generation capability of the language model.
Applying Prefix-Tuning to multi-aspect controllable text generation \cite{yu-etal-2021-attribute-alignment, qian-etal-2022-controllable, carlsson-etal-2022-fine, yang2022tailor} can be regarded as optimizing on multi-objective implicitly.
\begin{figure*}[ht]
\centering
\vspace{-0.5cm}
\resizebox{2\columnwidth}{!}{
\includegraphics[scale=0.55]{pic/model.pdf}
}
\vspace{-0.5cm}
\caption{An overview of our method. \textbf{Top}: Illustration of our autoencoder structure with prefix-tuning deployed on the fixed decoder, where latent representations $\mathcal{H}_i$ constitute an estimated attribute space. \textbf{Bottom Left}: Illustration of attribute classification loss $\mathcal{L}_C$ and aspect gap loss $\mathcal{L}_G$ attached to the attribute space. \textbf{Bottom Right}: Inferencing stage with prefix mapped from the intersection of attributes.}
\label{fig:2}
\end{figure*}
\section{Methodology}
In this section, we first introduce the motivation and overall process of our method, after which we describe each module in detail.
\subsection{Overview}
As illustrated in Figure \ref{fig:2}, our method mainly revolves around the attribute space including estimating the attribute space, searching for intersections, and mapping intersections to sentences.
Firstly, we aim to construct an attribute space using sampled sentences to estimate the real space as accurately as possible. We employ an autoencoder structure with the latent representations denoting points that constitute our estimated attribute space. To ensure that our estimated space reliably models the attributes, such as their probability distributions and relationships between different attributes, we further attach three constraints to the representation.
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Reconstruction Loss} $\mathcal{L}_R$ aims to bridge the gap between points in attribute space and natural attribute-relevant sentences, which is recovering attributes reflected by contents.
\item \textbf{Attribute Classification Loss} $\mathcal{L}_C$ forces the encoder to focus more on capturing attributes by distinguishing points of different attributes from the same aspect.
\item \textbf{Aspect Gap Loss} $\mathcal{L}_G$ %
penalizes the discrepancy of aspects, which is caused by the domain gap among different data sources for different aspects.
Inspired by the feature alignment \cite{10.1145/1772690.1772767}, we minimize the distances between distributional centers of each two aspects.
\end{enumerate*}
The second step aims to search for an intersection area of the desired attributes. If the intersection area exists, a point in it has the property that the neighboring points within a tiny surrounding region cover all required attributes. Inspired by this neighborhood idea, we design an algorithm that iteratively approaches an area where these attributes bind more tightly.
The third step maps our searched intersection to a Prefix that activates the language model to generate attribute-relevant sentences. To make the language model less sensitive to slight variations, we sample a perturbation vector from a multivariate Gaussian distribution.
\subsection{Estimating Attribute Space}
Given $|\mathbf{A}|$ aspects $\mathbf{A} = \left\{A_1,\cdots,A_{|\mathbf{A}|}\right\}$ with each comprising $|A_t|$ attributes $\left\{a_1^t,\cdots,a_{|A_t|}^t\right\}$, $I_\tau^t$ is an index set representing the identifiers of all sentences with attribute $a^t_\tau$ in the training data. We have $I^t = \bigcup\limits_{\tau=1}^{|A_t|} I_\tau^t, I = \bigcup\limits_{t=1}^{|\mathbf{A}|} I^t$, where $I^t$ is the indices of all sentences with any attribute in aspect $A_t$ and $I$ is the indices of the entire training data.
We encode sentences $\{X_i\}$ from all aspects $\mathbf{A}$ to representations $\{\mathcal{H}_i\}$ with unified mapping parameters $\phi$: $\mathcal{H}_i = \text{Encode}_\phi(X_i)$, where $i \in I$.
\paragraph{Reconstruction Loss $\mathcal{L}_R$}
As in the top of Figure \ref{fig:2}, $\mathcal{L}_R$ is computed in the same way as the autoregressive loss of pre-trained language model $p_{\text{LM}}$:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\begin{aligned}
\label{eq:2}
\mathcal{L}_R &= -\sum\limits_{i\in I} \log p_\text{LM}(X_i|\text{Prefix}_i)\\
\text{Prefix}_i &= \text{MLP}_\theta(\mathcal{H}_i + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
\end{aligned}
\end{equation}
where $X_i$ here is a sample sentence from the entire training set, i.e., $i \in I$.
Besides, $\varepsilon_i$, with a scaling factor $\lambda$, is a perturbation vector sampled from a multivariate Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ for robustness when reconstructing. The multi-layer perceptron $\text{MLP}_{\theta}$ maps the perturbed $\mathcal{H}_i$ to $\text{Prefix}_i$, which can activate the language model to generate text with the desired attributes.
It is worth noting that our primary goal is to recover attributes, which means $\mathcal{L}_R$ need not, and preferably should not, converge too tightly, as long as text fluency is maintained.
\paragraph{Attribute Classification Loss $\mathcal{L}_C$}
We force the encoder to focus on attributes by $\mathcal{L}_C$ in the way:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:3}
\begin{aligned}
\mathcal{L}_C = -\sum\limits_{t=1}^{|\mathbf{A}|}\sum\limits_{\tau=1}^{|A_t|} \sum\limits_{i \in I_\tau^t} \log p_{\pi_{t}}(a_\tau^t|\mathcal{H}_i).
\end{aligned}
\end{equation}
Given a sentence representation, $p_{\pi_t}$ is a classifier that distinguishes attributes $\left\{a_\tau^t\right\}$ from aspect $A_t$ with parameters $\pi_{t}$.
\paragraph{Aspect Gap Loss $\mathcal{L}_G$}
We penalize the discrepancy between distributional centers by:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:4}
\mathcal{L}_G = \sum\limits_{\mathclap{1\leq t_1<t_2\leq |\mathbf{A}|}}\quad\;\,\,\left\|\sum\limits_{i \in I^{t_1}} \frac{\mathcal{H}_i}{|I^{t_1}|} - \sum\limits_{j \in I^{t_2}} \frac{\mathcal{H}_j}{|I^{t_2}|}\right\|_2,
\end{equation}
which are the Euclidean distances between every two distinct distributional centers.
When generalizing to a larger number of aspects, it is relatively expensive to calculate averages over the entire dataset each time the model is updated.
In practice we calculate this loss using a batch-level approximation. We assign each aspect a memory unit to store the latest representation of that aspect's estimated center.
Each time we process a batch of sentences from one aspect, we take the average of their representations as the center and sum up the Euclidean distances to the centers of the other aspects in memory, which gives the estimated $\mathcal{L}_G$. Then, we update the memory unit of this aspect to the latest value.
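A minimal single-device PyTorch-style sketch of this batch-level approximation (class and variable names are ours, not the released code):
\begin{verbatim}
import torch

class AspectGapLoss:
    """Batch-level approximation of the aspect gap loss L_G."""

    def __init__(self, num_aspects, hidden_dim):
        # one memory slot per aspect, holding its latest estimated center
        self.centers = torch.zeros(num_aspects, hidden_dim)
        self.seen = [False] * num_aspects

    def __call__(self, reps, aspect_id):
        # reps: (batch, hidden_dim) representations from a single aspect
        batch_center = reps.mean(dim=0)
        loss = reps.new_zeros(())
        for t in range(len(self.seen)):
            if t != aspect_id and self.seen[t]:
                loss = loss + torch.norm(batch_center - self.centers[t], p=2)
        # refresh this aspect's memory with the latest center (no gradient)
        self.centers[aspect_id] = batch_center.detach()
        self.seen[aspect_id] = True
        return loss
\end{verbatim}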
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Intersection Searching}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{H}_i, i \in \bigcup\limits_{t=1}^N I^t_{\alpha_t}$ from $N$ attributes\\
\quad \; $\omega_{\alpha_t}$ weight of each attribute
\ENSURE Intersection of $N$ attributes: $\mathcal{\tilde{H}}^*$
\STATE Initialize $M$ candidates:$\{\mathcal{\tilde{H}}^0_m\}$
\STATE Iterate $S$ times
\FOR{$s$ in $[0, S-1]$}
\FOR{$m$ in $[1, M]$}
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathbf{0}$
\FOR{$t$ in $[1, N]$}
\STATE $\mathbf{H}\leftarrow\mathop{\text{Nearest}}\limits_{top K}(\mathcal{\tilde{H}}_m^s,\left\{\mathcal{H}_i, i \in I^t_{\alpha_t}\right\})$
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} +\omega_{\alpha_t} \mathop{\text{mean}}(\mathbf{H})$
\ENDFOR
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} / \sum\limits_{t=1}^N \omega_{\alpha_t}$
\ENDFOR
\ENDFOR
\STATE $\mathcal{\tilde{H}}^*\leftarrow \text{Select}(\{\mathcal{\tilde{H}}^{S}_m\})$
\end{algorithmic}
\end{algorithm}
During the training stage, our loss function is:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:5}
\mathcal{L} = w_1\mathcal{L}_R + w_2\mathcal{L}_C + w_3\mathcal{L}_G.
\end{equation}
It is worth noting that we only update the parameters $\phi$, $\theta$, and $\left\{\pi_t\right\}$ of the encoder, the MLP layer, and the classifier heads, respectively.
\subsection{Intersection of Attributes}
Suppose there is an intersection point, denoted as $\mathcal{\tilde{H}}^*$, located within the intersection region of attributes $\left\{a^1_{\alpha_1},a^2_{\alpha_2},\cdots,a^N_{\alpha_N}\right\}$ from $N$ different aspects, where $a^t_{\alpha_t}$ is the $\alpha_t$th attribute in aspect $A_t$.
Algorithm \ref{alg1} approximates $\mathcal{\tilde{H}}^*$ by iteratively approaching the most balanced point among nearest neighbors from the different attributes.
First, we initialize the candidates $\{\mathcal{\tilde{H}}^0_m\}$ by randomly sampling points in the attribute space, calculating their distance to the closest point of each attribute $a^t_{\alpha_t}$, and selecting the top $M$ samples with the smallest average distance to all attributes.
At each iteration $s$, we choose the top-K\footnote{We study the practical meaning and impact of $K$ in \S \ref{sec:effectofk}.} nearest points to $\mathcal{\tilde{H}}^s_m$ for each attribute and update $\mathcal{\tilde{H}}^{s+1}_m$ using the weighted average of these points.
It is worth mentioning that $\omega_{\alpha_t}$ is the weight used to balance attributes or favor some specifically, and a negative value of $\omega_{\alpha_t}$ can even move away from a particular one.
Finally, we select the best candidate from the last iteration $S$, which is expected to be in the intersection region, i.e., a representation related to multiple attributes.
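For clarity, one run of this search can be sketched in NumPy as follows (candidate initialization and the final selection step are simplified; names are ours):
\begin{verbatim}
import numpy as np

def search_intersection(attr_points, weights, init, n_iters=10, k=200):
    """Iteratively move a candidate toward the attribute intersection.

    attr_points -- one (n_t, d) array of representations per attribute
    weights     -- per-attribute weights omega
    init        -- (d,) initial candidate representation
    """
    h = np.asarray(init, dtype=float)
    for _ in range(n_iters):
        acc = np.zeros_like(h)
        for points, w in zip(attr_points, weights):
            dists = np.linalg.norm(points - h, axis=1)
            nearest = points[np.argsort(dists)[:k]]   # top-K nearest points
            acc += w * nearest.mean(axis=0)
        h = acc / float(sum(weights))
    return h
\end{verbatim}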
\subsection{Generation with Intersections}
As illustrated in the bottom right of Figure \ref{fig:2}, we convert the representation $\mathcal{\tilde{H}}^*$ obtained from the intersection area directly to the $\text{Prefix}$ with $\text{MLP}_\theta$ and let the language model generate a multi-attributed sentence $Y$ from input $\mathcal{X}$ as:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:1}
\begin{aligned}
Y &= \mathrm{arg}\max\limits_y \; p_\text{LM}(y|\text{Prefix}^*;\mathcal{X})\\
\text{Prefix}^* &=\text{MLP}_{\theta}(\mathcal{\tilde{H}}^* + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\end{aligned}
\end{equation}
When generating several attribute-relevant sentences for one attribute combination, we only need to calculate the intersection for it once.
\section{Experiment}
In this section, we demonstrate the effectiveness of our method on three-aspect control, including sentiment, topic, and detoxification.
\subsection{Multi-Aspect Control Task}
The datasets we use are the same as GeDi \cite{krause-etal-2021-gedi-generative} and Contrastive Prefix \cite{qian-etal-2022-controllable}.
To balance the data scale across all aspects, we randomly sample 10k sentences from each dataset, which is fewer than the number of samples GeDi uses, with each attribute equally dividing this amount.
We use the IMDb movie reviews \cite{maas-etal-2011-learning}, the AGNews dataset \cite{NIPS2015_250cf8b5}, and the Jigsaw Toxic Comment Classification Challenge Dataset\footnote{\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/}} for sentiment, topic and detoxification aspects, respectively.
The prompts used for text generation are the same as those used in PPLM \cite{Dathathri2020Plug}, with 20 from its bag-of-words experiment and 15 from its discriminator experiment. We experiment with the $8$ combinations of the $3$ aspects given by $2$ sentiments $\times$ $4$ topics $\times$ $1$ detoxification, and generate $5$ completions for each combination and each prompt. In total, each model generates $35 \times 2 \times 4 \times 1 \times 5 = 1400$ sentences.
It is worth noting that we do not specifically use prompts that induce the language model to generate toxic text, making detoxification easier to improve.
To measure the performance on different aspects, we compute the attribute relevance. We finetune a DeBERTa \cite{he2021deberta, he2021debertav3} classifier on the Yelp dataset \cite{NIPS2015_250cf8b5} for the sentiment aspect and a classifier for the topic aspect utilizing all of the remaining data not used during training. We evaluate non-toxicity with the Google Perspective API\footnote{\url{https://www.perspectiveapi.com}}.
The final performance of a model is determined by the average of these three attribute relevance scores introduced above.
We also use two auxiliary metrics to measure text quality.
One is perplexity calculated by GPT2-large following Contrastive Prefix \cite{qian-etal-2022-controllable}. To ensure that models are not insensitive to changes in different prefixes, we calculate the distinctness \cite{li-etal-2016-diversity} of sentences generated from different prefixes and average the 1-gram, 2-gram, and 3-gram distinct scores for simplicity.
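For reference, the averaged distinctness score can be computed as in the sketch below (whitespace tokenization is our simplification):
\begin{verbatim}
def distinctness(sentences, max_n=3):
    """Average distinct-1/2/3: unique n-grams over total n-grams."""
    scores = []
    for n in range(1, max_n + 1):
        total, unique = 0, set()
        for sent in sentences:
            tokens = sent.split()
            ngrams = [tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1)]
            total += len(ngrams)
            unique.update(ngrams)
        scores.append(len(unique) / total if total else 0.0)
    return sum(scores) / len(scores)
\end{verbatim}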
Moreover, we conduct a human evaluation with the sentences generated by different models shuffled. Each sentence is rated by three professional evaluators for the relevance of each of the three attributes and for text fluency. Evaluators rate each item on a scale of 1 to 5, with 5 representing text that is highly related to the desired attribute or very fluent.
\subsection{Baselines}
\begin{table*}[ht]
\small
\vspace{-0.3cm}
\centering
\begin{tabular}{l|c|ccc|c|c}
\hline
\hline
\textbf{Methods} & \textbf{Average}↑ (\%) & \textbf{Sentiment}↑ (\%) & \textbf{Topic}↑ (\%) & \textbf{Detoxification}↑ (\%) & \textbf{PPL.}↓ &\textbf{Dist.}↑\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{PPLM} & 71.0 $\pm$ 21.4 & 64.7 $\pm$ 24.8 & 63.5 $\pm$ 22.7 & 84.9 $\pm$\; 6.5 & 62.6 & 62.0\\
\hline
\textbf{GeDi} & 81.4 $\pm$ 14.7 & 76.1 $\pm$ 17.2 & 73.8 $\pm$ 11.3 & 94.2 $\pm$\; 1.9 & 116.6 & 75.1\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Multi-Objective Optimization Based Methods}}\\
\hline
\textbf{MUCOCO} & 73.9 $\pm$ 24.1 & 65.0 $\pm$ 33.7 & 67.2 $\pm$ 18.3 & 89.5 $\pm$\; 3.5 & 405.6 & 49.7\\
\hline
\textbf{Mix\&Match} & 79.7 $\pm$ 21.8 & 73.5 $\pm$ 25.9 & 69.9 $\pm$ 21.1 & 95.8 $\pm$\; 1.9 & 63.0 & 61.8\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Contrastive Prefix} &&&&&\\
\quad concatenation & 77.2 $\pm$ 18.5 & 67.3 $\pm$ 20.7 & 71.8 $\pm$ 16.5 & 92.6 $\pm$\; 2.9 & 54.6 & 39.9\\
\quad semi-supervised & 81.3 $\pm$ 16.5 & 74.4 $\pm$ 19.6 & 76.9 $\pm$ 16.7 & 92.7 $\pm$\; 3.5 & 31.9 & 43.3\\
\hline
\textbf{Ours} & \textbf{87.4} $\pm$ 10.9 & \textbf{86.7} $\pm$ 10.5 & \textbf{84.8} $\pm$ 14.2
& 90.7 $\pm$\; 7.4 & 28.4 & 49.5\\
\cline{2-7}
\quad w/o $\mathcal{L}_G$ & 80.9 $\pm$ 16.2 & 71.6 $\pm$ 11.7 & 75.9 $\pm$ 18.9 & 95.3 $\pm$\; 2.6 & 71.5 & 58.9\\
\quad w/o $\mathcal{L}_C$ & 62.3 $\pm$ 41.8 & 49.1 $\pm$ 49.8 & 41.7 $\pm$ 36.0 & \textbf{96.0} $\pm$\; 0.1 & 473.0 & 37.0\\
\hline
\hline
\end{tabular}
\caption{Automatic Results on Multi-Aspect Control. Hyperparameters and details are in \S \ref{sec:appendix3}.}
\label{tab:1}
\vspace{-0.2cm}
\end{table*}
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Weighted Decoding}:
\textbf{PPLM} \cite{Dathathri2020Plug} biases the language model with gradients back-propagated from trained classifiers. \textbf{GeDi} \cite{krause-etal-2021-gedi-generative} influences the decoding process with token probabilities conditioned on attributes.
\item \textbf{Multi-objective Optimization}:
\textbf{MUCOCO} \cite{kumar2021controlled} regards the decoding process as a constrained optimization problem, where the language model is the objective function and attributes are constraints.
\textbf{Mix\&Match} \cite{mireshghallah-etal-2022-mix} controls attributes with energy-based models and generates sentences by masking, sampling, and correcting.
\item \textbf{Prefix-Tuning}:
\textbf{Contrastive Prefix} \cite{qian-etal-2022-controllable} utilizes prefixes to activate the language model to generate attribute-relevant sentences by concatenation or semi-supervision.
\end{enumerate*}
\subsection{Results}
In the automatic evaluation results for the multi-aspect setting in Table \ref{tab:1}, we group models by method type in chronological order.
In addition, we report their standard deviations, which reflect the stability of the models across different attribute combinations.
For weighted decoding, GeDi uses more powerful classifiers than PPLM and performs better on attribute relevance, stability across combinations, and distinctness, while being correspondingly worse on perplexity.
Multi-objective optimization methods achieve favorable performance on attribute relevance, while MUCOCO explodes on perplexity because its non-autoregressive paradigm is not suitable for generating from scratch.
The performance of the semi-supervised Contrastive Prefix is similar to GeDi's, except for a lack of diversity.
Our method performs best on average attribute-related metrics, with at least a $7.3\%$ significant improvement over existing baselines. Our advances mainly come from sentiment and topic aspects, with no less than $13.9\%$ and $10.3\%$ each.
Although our model is not the best on detoxification, it is the most balanced and stable according to the lowest standard deviation on average, $10.9$.
As a prefix-tuning-based method that induces the language model without directly modifying it, and is therefore naturally good at text fluency, our approach performs well on perplexity and inherits good performance on diversity.
Furthermore, we conduct ablation on aspect gap loss $\mathcal{L}_G$ and attribute classification loss $\mathcal{L}_C$ separately.
On the one hand, without $\mathcal{L}_G$, we cannot alleviate the bias in the different training datasets, making it hard to search for the intersection areas. Since training sentences for the sentiment and topic aspects are mainly non-toxic, our model focuses more on detoxification rather than struggling for the other two, leading to considerable declines in their relevance and slight improvements on detoxification. Besides, as the distance among sample points from different aspects in the attribute space increases, our model generates sentences mapped from far sparser areas, leading to a small decrease in fluency and a subtle increase in diversity.
On the other hand, without $\mathcal{L}_C$, our attribute space collapses entirely. The relevance of sentiment and topic drops drastically while the non-toxicity improves, because the model can hardly distinguish representations of different attributes within the same aspect and focuses on the relatively easier detoxification.
Worse still, without distinct representations, our model is required to recover different sentences from similar representations, leading to oscillation during training and difficulty generating complete text at inference time.
Results of the human evaluation are shown in Table \ref{tab:2}, with an inter-annotator agreement of $0.36$ in Fleiss' $\kappa$.
We evaluate GeDi, Contrastive Prefix, and our method, and observe that the results are consistent with the automatic ones on sentiment and topic relevance.
The performance of the models on detoxification is high and relatively similar, which makes the automatic results differ from the manual ones, where the annotators believe that our model does a better job than the baselines.
Since perplexity is relatively unreliable, the manually measured fluency of GeDi is much better than that of Contrastive Prefix, and our method achieves the best fluency.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|cccc}
\hline
\hline
\textbf{Methods} & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{Detox.}↑ &\textbf{Fluency}↑\\
\hline
\hline
\textbf{GeDi} & 2.96 & 2.72 & 4.59 & 3.08\\
\hline
\textbf{Con. Prefix} & 2.84 & 2.90 & 4.40 & 2.26\\
\hline
\textbf{Ours} & \textbf{3.47} & \textbf{3.39} & \textbf{4.71} & \textbf{3.69}\\
\hline
\hline
\end{tabular}
\caption{Human Evaluation on Multi-Aspect Control. }
\label{tab:2}
\end{table}
\begin{table*}[ht]
\small
\centering
\vspace{-0.3cm}
\resizebox{2\columnwidth}{!}{
\begin{tabular}{l|cc|cccc|c}
\hline
\hline
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{c|}{\textbf{Sentiment} (\%)} &\multicolumn{4}{c|}{\textbf{Topic} (\%)} & \multirow{2}{*}{\textbf{Detox.} (\%)}\\
& \textbf{Neg.}& \textbf{Pos.}& \textbf{World}&\textbf{Sports}& \textbf{Business}&\textbf{Sci./Tech.}&\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{GeDi} \textit{single-aspect} & 93.9& 70.7& 73.4 & 85.7 & 75.7 & 98.0 & 94.9 \\
\hline
\multirow{8}{*}{\textbf{GeDi}}
& 94.7 & - & 80.0 & - & - & - & 90.6\\
& 84.2 & - & - & 74.8 & - & - & 93.9\\
& 94.9 & - & - & - & 75.7 & - & 96.6\\
& 90.6 & - & - & - & - & 80.1 & 92.8\\
& - & 53.7 & 61.4 & - & - & - & 94.4\\
& - & 60.5 & - & 74.3 & - & - & 95.2\\
& - & 57.6 & - & - & 54.3 & - & 95.7\\
& - & 72.3 & - & - & - & 90.2 & 94.2\\
\cline{2-8}
\qquad \textit{average} & \textbf{91.1} $(- \textbf{2.8})$ & 61.0 $(-9.7)$ & 70.7 $(-2.7)$ & 74.6 $(-11.1)$ & 65.0 $(-10.7)$ & 85.2 $(-12.8)$ & \textbf{94.2} $(-\textbf{0.7})$\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Prefix} \textit{single-aspect} & 88.4 & 90.6 & 74.5 & 85.3 & 93.5 & 93.6 & 93.8\\
\hline
\multirow{8}{*}{\tabincell{l}{\textbf{Contrastive Prefix}\\ \quad semi-supervised}}
& 65.5 & - & \textbf{80.6} & - & - & - & 91.8\\
& 67.2 & - & - & \textbf{90.3} & - & - & 92.5\\
& 56.0 & - & - & - & 79.2 & - & 92.2\\
& 90.0 & - & - & - & - & 93.3 & 84.8\\
& - & 93.5 & 64.8 & - & - & - & 95.1\\
& - & 41.8 & - & 78.5 & - & - & 94.8\\
& - & 87.4 & - & - & 41.7 & - & 95.2\\
& - & 93.6 & - & - & - & 86.7 & 95.3\\
\cline{2-8}
\qquad \textit{average} & 69.7 $(-18.7)$ & 79.1 $(-11.5)$ & \textbf{72.7} $(-\textbf{1.8})$ & \textbf{84.4} $(-\textbf{0.9})$ & 60.5 $(-33.0)$ & 90.0 $(-3.6)$ & 92.7 $(-1.1)$\\
\hline
\multirow{8}{*}{\textbf{Ours}}
& 69.7 & - & 71.7 & - & - & - & 84.1\\
& 78.6 & - & - & 80.0 & - & - & 80.2\\
& \textbf{99.9} & - & - & - & \textbf{96.7} & - & 96.8\\
& 92.8 & - & - & - & - & \textbf{98.0} & 81.7\\
& - & 80.5 & 58.0 & - & - & - & 95.1\\
& - & 84.7 & - & 86.6 & - & - & 94.5\\
& - & 87.6 & - & - & 91.7 & - & \textbf{98.1}\\
& - & \textbf{99.7} & - & - & - & 96.1 & 95.4\\
\cline{2-8}
\qquad \textit{average} & 85.3 $(-3.1)$ & \textbf{88.1} $(-\textbf{2.5})$ & 64.9 $(-9.6)$ & 83.3 $(-2.0)$ & \textbf{94.2} $(+\textbf{0.7})$ & \textbf{96.8} $(+\textbf{3.2})$ & 90.7 $(-3.1)$\\
\hline
\hline
\end{tabular}
}
\caption{Detailed Results on Single-Aspect and Multi-Aspect Control. We demonstrate results on \textit{single-aspect} and \textit{average} results on multi-aspect control with their difference to \textit{single-aspect}, where other rows each represent an attribute combination. Cases are in \S \ref{sec:appendix4}. Detailed results for other baseline models and our ablations are in \S \ref{sec:appendix5}.}
\label{tab:3}
\vspace{-0.2cm}
\end{table*}
\section{Analysis}
\subsection{Effect of Different Attributes and their Combinations}
We illustrate the detailed results of each attribute and their combinations in Table \ref{tab:3}.
GeDi and Prefix-tuning perform differently in \textit{single-aspect} control, each with its advantages. For example, GeDi is dedicated to \textit{negative} with $93.9\%$ relevance, while Prefix-tuning is good at \textit{positive} with $90.6\%$ relevance.
When dealing with multi-aspect control, they inherit such imbalanced characteristics, with \textit{average} relevance of $91.1\%$ and $79.1\%$, respectively. In addition, the baselines decrease correspondingly in the \textit{average} relevance of each attribute compared to \textit{single-aspect}, ranging from $0.7$ to $33.0$.
On average, our model outperforms the other baselines on attribute metrics (Table \ref{tab:1}). In detail, our model performs competitively on most attributes compared to the other prefix-tuning-based model, Contrastive Prefix. Especially on attributes like \textit{business} and \textit{sci/tech}, our model improves significantly over the other prefix-tuning-based method on multi-aspect control and can even surpass it under \textit{single-aspect} control.
In addition, correlations between attributes vary widely, as in Table \ref{tab:3}.
For example, generally, \textit{positive} fits well with \textit{non-toxic} while \textit{negative} leads to a massive drop in non-toxicity, which is consistent with the intuition that one can hardly praise people and offend them simultaneously.
Besides, \textit{world} and \textit{business} news are often reported negatively, such as war, famine, inflation, etc., making it challenging to combine them with \textit{positive}.
When attributes are not closely correlated, which means that few natural sentences possess these attributes together, our method is more likely to capture such rarely occurring co-occurrences and magnify their frequency.
Take \textit{business} as an example. It is effortless to achieve a fine attribute relevance when performing single-aspect control on \textit{business}, with GeDi achieving $75.7$ and Prefix obtaining $93.5$. After attaching \textit{positive} to \textit{business}, baseline models will suffer from a decline due to their weak correlation, where GeDi and Contrastive Prefix drop to $54.3$ and $41.7$, respectively. In contrast, our method can alleviate this problem by retrieving this unusual co-occurrence in the training sentences and recovering it from the attribute space, achieving a performance of $91.7$, which is close to single-aspect control.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/attribute_space.pdf}
\caption{Projection of 4 attributes from attribute space.}
\label{fig:3}
\end{figure}
When combining \textit{business} with \textit{negative}, which is a relatively common combination, there is still some decrease for the baseline models. On the contrary, our method can even reach a performance of $96.7$, surpassing single-aspect control.
\subsection{Estimated Attribute Space}
We demonstrate part of our estimated attribute space in Figure \ref{fig:3} with four attributes: \textit{\textcolor{sred}{positive}}, \textit{\textcolor{sblue}{negative}}, \textit{\textcolor{syellow}{sports}}, and \textit{\textcolor{sgreen}{sci/tech}} from sentiment and topic aspects.
We project the high-dimensional space to 2D with Principal Component Analysis (PCA).
Consistent with our hypothesis, the distributions of \textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sgreen}{sci/tech}} are asymmetric and the intersections lie at the sparse edges of the attribute distributions.
In addition, we project the intersections searched by the \textcolor{smediumvioletred}{baseline}'s strategy and \textcolor{darkred}{ours}, respectively. For the \textit{\textcolor{sred}{positive}}-\textit{\textcolor{sgreen}{sci/tech}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{sgreen}{sci/tech}} pairs, the combinations are relatively tight, making it easy to find intersections. However, the intersection areas for the \textit{\textcolor{sred}{positive}}-\textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{syellow}{sports}} pairs are considerably sparse.
As shown in the enlarged area, the intersection found by the \textcolor{smediumvioletred}{baseline} lies at the midpoint of the two distributional centers, but this location is not where the attributes actually intersect. On the contrary, \textcolor{darkred}{our} method can find an intersection in such a sparse region, so that points from the two different attributes appear simultaneously in its tiny surrounding area.
It is worth noting that \textit{positive} and \textit{negative} appear to intersect in this projection because they are close in the high-dimensional space, but there is actually no intersection when only these two attributes are projected, as shown in \S \ref{sec:pos_neg}.
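The 2D projection used here can be reproduced with a standard PCA, for example as in the sketch below (stacking representations into NumPy arrays and using scikit-learn are our assumptions):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def project_2d(reps_by_attr):
    """Project high-dimensional attribute representations to 2D.

    reps_by_attr -- dict: attribute name -> (n, d) array of representations
    Returns a dict of (n, 2) arrays sharing a single fitted projection.
    """
    all_reps = np.concatenate(list(reps_by_attr.values()), axis=0)
    pca = PCA(n_components=2).fit(all_reps)
    return {name: pca.transform(r) for name, r in reps_by_attr.items()}
\end{verbatim}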
\subsection{Effect of $K$}
\label{sec:effectofk}
\begin{table}[t]
\small
\setlength{\abovecaptionskip}{0.2cm}
\vspace{-0.3cm}
\centering
\begin{tabular}{r|c|ccc}
\hline
\hline
$\textbf{K}$ &\textbf{Avg.}↑ & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{DeTox.}↑\\
\hline
5000 & \textcolor{sblue}{75.5} & 70.5 & 67.9 & 88.2\\
4000 & \textcolor{sblue}{77.6} & 72.9 & 71.4 & 88.4\\
3000 & \textcolor{sblue}{78.7} & 72.4 & 74.7 & 88.9\\
2000 & \textcolor{sblue}{79.1} & 72.6 & 75.9 & 88.7\\
1500 & \textcolor{sblue}{79.9} & 73.6 & 77.1 & 89.0\\
1000 & \textcolor{sblue}{80.7} & 75.7 & 77.2 & 89.1\\
800 & \textcolor{sblue}{82.9} & 79.3 & 79.2 & 90.3\\
500 & \textcolor{sblue}{85.2} & 83.5 & 81.5 & 90.5\\
300 & \textcolor{sblue}{85.7} & 84.1 & 83.2 & 89.7\\
200 & \textcolor{sred}{\textbf{87.4}} & \textbf{86.7} & \textbf{84.8} & 90.7\\
150 & \textcolor{sgreen}{84.0} & 79.2 & 84.3 & 88.4\\
100 & \textcolor{sgreen}{83.9} & 78.7 & 83.6 & 89.5\\
50 & \textcolor{sgreen}{82.2} & 78.4 & 78.5 & 89.6\\
20 & \textcolor{sgreen}{80.9} & 77.8 & 73.1 & 91.7\\
10 & \textcolor{sgreen}{80.8} & 79.6 & 71.5 & 91.2\\
5 & \textcolor{sblue}{81.4} & 82.9 & 69.3 & 92.1\\
3 & \textcolor{lred}{85.0} & 86.1 & 77.7 & 91.1\\
1 & \textcolor{sgreen}{78.8} & 63.1 & 80.9 & \textbf{92.4}\\
\hline
\hline
\end{tabular}
\caption{Results that vary with $K$.}
\vspace{-0.2cm}
\label{tab:k_analysis}
\end{table}
We analyze the effect of varying $K$ in the intersection searching algorithm and report the results in Table \ref{tab:k_analysis}.
Our model reaches a critical point when $K$ is 200, where performance is optimal.
On the one hand, as the value of $K$ gradually increases, our method pays less attention to regions where samples are fewer but attributes combine more tightly, and the performance decreases accordingly.
When $K$ reaches 5k, our method degenerates into a plain prefix-tuning model, which treats the intersection as the midpoint of the distributional centers. Its performance is similar to, and slightly inferior to, the concatenation version of Contrastive Prefix in Table \ref{tab:1}.
On the other hand, a smaller $K$ leads to suboptimal performance since the effect of noise in the training data becomes non-negligible.
When $K$ is less than $10$, our model becomes very unstable.
\subsection{Distribution of Attributes}
\label{attributes}
\begin{figure}[t]
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{pic/single_2.pdf}
\caption{Distribution of attribute World from Topic.}
\label{fig:4}
\vspace{-0.2cm}
\end{figure}
We project sample points to 2D by PCA, with each attribute projected independently. As in Figure \ref{fig:4}, we display a scatterplot of World and conduct a Gaussian kernel density estimation to visualize its probability distribution. The darker area denotes higher probability, where more representation points of oracle sentences gather, and the region annotated by a red ellipse is the estimated distributional center. As shown in the plot, the distribution of World is significantly asymmetric, as the center lies in the top part, with the bottom being a sparse long tail. In addition, the distribution is even non-convex, with an isolated cluster in the lower right corner. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian. Besides, we plot the distributions of the other attributes in \S \ref{sec:appendix1}.
\section{Discussion on Distributional Lens}
Pilot work such as DGC \cite{khalifa2020distributional} estimates the language distribution with an energy-based model and optimizes this distribution to satisfy constraints by approaching the constraint manifold. Recent distributional approaches like COLD Decoding \cite{qin2022cold} and MuCoLa \cite{kumar2022constrained} place the language and attribute distributions in the same space so as to sample attribute-related sentences with Langevin dynamics. Concurrent work on the image side, PromptGen \cite{wu2022generative}, simulates the complex distribution of images relevant to target attributes using a deep generative model. However, as a commonly held hypothesis in manifold learning, the pre-trained language model estimates a low-dimensional manifold of language in a high-dimensional embedding space, which means most points in the embedding space are not probabilistically modeled by the language model. We believe that placing too much trust in the distributional modeling ability of language models is not a good choice. Our method attempts to depict the attribute space with discrete sample points of attributed sentences and lets these discrete points, along with their coverage areas, compose the support set of our estimated distribution.
\section{Conclusion}
In this work, we present a distributional perspective for the multi-aspect controllable text generation with experimental results confirming the superiority of our model. Further observations on the 2D projection of the estimated attribute space show that our hypothesis about the attribute space is more feasible. In the future, we can explore the correlation between different attribute combinations for more fine-grained control and capture the bias in datasets to eliminate or utilize it.
\section*{Limitations}
Our method has a certain dependence on the data since we need to estimate an attribute space. Therefore, it is difficult for our method to perform well in the setting of few-shot learning. However, this disadvantage is not that severe, because we only need single-aspect data, which is relatively sufficient in style transfer tasks. Another dependence of our method on data is that it is somewhat sensitive to biases in the data. When the semantic divergence of different aspects in training data is too large, our aspect gap loss, which aims to reduce the distance among the distributions of each aspect, will conflict with the sentence reconstruction loss. As a result, it may be hard to obtain a reliable intersection in the attribute space.
Computational resources also have an impact on our approach, as our aspect gap loss leverages a batch-level estimation for each aspect. Therefore, a larger batch size means a more accurate approximation, leaving the attribute space fewer biases. An alternative strategy for smaller batches is to backpropagate the loss after accumulating enough distributional samples, which requires more training epochs.
\section*{Ethics Statement}
We are totally aware that text generation technology has a potential to be used maliciously to generate fake, toxic, or offensive content. However, after training on the Detoxification aspect, controllable text generation technology is a powerful weapon for combating hate speech, and eliminating harmful information in pre-trained language models. In addition, our multi-aspect controllable text generation technology can take Detoxification as an default aspect when controlling other aspects. We believe it meaningful and beneficial to advance research on controllable text generation.
\section*{Acknowledgements}
Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R\&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078 and the Major Key Project of PCL, PCL2021A06.
\normalem
\section{Introduction}
Controllable text generation is a challenging task in natural language generation, which aims to generate fluent text with desired attributes.
Pilot studies attempt single-aspect control by directly finetuning a conditional model \cite{ziegler2019finetuning, keskarCTRL2019}, or turn to methods with language models fixed \cite{Dathathri2020Plug} due to the high cost of large-scale pre-trained language models \cite{NEURIPS2020_1457c0d6, zhang2022opt}.
Recent works focus on a more practical setting, multi-aspect\footnote{For example, \textit{positive} is an attribute from the sentiment aspect while \textit{sports} is an attribute from the topic aspect.} controllable text generation, with existing approaches mainly divided into three technical routes: weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative}, multi-objective optimization \cite{kumar2021controlled, mireshghallah-etal-2022-mix}, and prefix-tuning \cite{qian-etal-2022-controllable}. These approaches explore ways to combine controllers learned for single aspects and apply them to a fixed language model, yet they suffer from attribute degeneration caused by the mutual interference of controllers.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/intersection.pdf}
\vspace{-0.8cm}
\caption{
Probability space of attributes. \textcolor{HoneyOrange}{Orange} background denotes the estimated distribution over natural language. \textcolor{blue}{Blue} and \textcolor{lgreen}{green} areas represent distributions over sentences containing attributes from two different aspects, respectively. A darker region means a higher probability in the space. The shaded regions are the distributional centers, i.e., the areas with the highest probability density.
}
\label{fig:1}
\end{figure}
We provide a distributional perspective to observe and alleviate this problem.
In the current text generation paradigm, a language model forms an estimated distribution over sentences, with the training data regarded as samples drawn from the natural language distribution \cite{NEURIPS2021_260c2432}.
For single-aspect control, these methods train a classifier or a prefix for each attribute independently, which can be regarded as estimating the center of the distribution over attribute-relevant sentences, and then bias the language model's distribution toward this center.
Correspondingly, when generalizing to multi-aspect control, their fusion strategy directly takes an interpolation or average of these centers, which may be too simplistic.
As shown in Figure \ref{fig:1}, the \textbf{\textcolor{Optimal}{interpolation}} point denotes the position they acquired after combining multiple centers in the probability space. And the \textbf{\textcolor{Intersection}{intersection}} represents where oracle sentences that simultaneously satisfy multiple attributes lie.
In the left part of Figure \ref{fig:1}, when distributions of attributes are symmetric\footnote{We plot distributions of attributes in \S \ref{attributes}.}, the interpolation point is indeed within the intersection area.
However, there could be a mismatch between the interpolation point and intersection. For example, as illustrated in the right part of Figure \ref{fig:1}, two skewed distributions intersect on the tails, leaving the interpolation point out of the intersection area and thus making it lack the ability to express all desired attributes together.
In this paper, different from approximating the intersection area with the interpolation point, we propose a strategy for directly acquiring the intersection.
We first deploy an autoencoder structure to map attribute-relevant sentences to latent representations constituting an estimated attribute space.
With our specially designed constraints, this space can model relationships among attributes.
Afterward, we provide an effective intersection searching algorithm that can walk around the long tail regions in distributions of all desired attributes and iteratively find where they combine more tightly.
Finally, we utilize a prefix-tuning-based decoder to construct sentences from the searched intersection.
We experiment on three-aspect control with two attributes from the sentiment aspect, four from the topic, and one from detoxification, with datasets IMDb movie reviews \cite{maas-etal-2011-learning}, AGNews \cite{NIPS2015_250cf8b5}, and Jigsaw Toxic Comment Classification Challenge Dataset, respectively. We evaluate the relevance of each attribute independently and calculate their average as the final relevance metric. Besides, we assess the text quality with perplexity and distinctness concerning fluency and diversity.
Results show that our method can significantly outperform strong baseline models on multi-aspect control. Furthermore, we find out in our analytical experiments that our intuitive assumptions fit well with our observation. The main contributions are as follows:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt]
\item We propose a distributional perspective that models multi-aspect control more practically.
\item We provide a method that directly searches for intersections in the attribute space and generates sentences with desired attributes.
\item We experimentally reveal the effectiveness of our method on multi-aspect control compared to strong baselines and achieve the SOTA.
\end{itemize}
\section{Related Work}
Variational autoencoders are often used for controllable text generation in early work \cite{10.5555/3305381.3305545, duan-etal-2020-pre, mai-etal-2020-plug}, where much effort is spent on improving text fluency.
The prosperity of large-scale pre-trained language models \cite{radford2019language} provides more exploration directions for attribute control such as fine-tuning \cite{ficler-goldberg-2017-controlling, ziegler2019finetuning, keskarCTRL2019}. Recent work has made gratifying progress on single-aspect control \cite{krause-etal-2021-gedi-generative}, leading studies to gradually turn to a more difficult task, multi-aspect control, which is tackled by the following three main approaches.
\paragraph{Weighted Decoding}
As the scale of language models increases rapidly, weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative, yang-klein-2021-fudge, liu-etal-2021-dexperts, gu-etal-2022-improving} becomes a simple and practical choice.
It is a framework that, directly at decoding time, decomposes the probability of sentences conditioned on attributes into a language model and a classifier via the Bayes rule.
When handling multi-aspect control, it can be easily generalized by interpolating classifiers \cite{lin-riedl-2021-plug}.
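For concreteness, the token-level decomposition behind weighted decoding can be sketched as follows, where the notation is ours and simplified, and $\lambda_t$ denotes an optional attribute-specific weight:
\[
p(x_i \mid x_{<i}, a_1,\dots,a_N) \;\propto\; p_\text{LM}(x_i \mid x_{<i}) \prod_{t=1}^{N} p(a_t \mid x_{\leq i})^{\lambda_t},
\]
where the classifiers $p(a_t \mid \cdot)$ re-weight the next-token distribution of the language model at decoding time.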
\paragraph{Multi-Objective Optimization}
The controllable text generation task is naturally a multi-objective optimization problem when its decoding process is regarded as an optimization objective.
Some approaches, such as DGC \cite{khalifa2020distributional}, Mix\&Match \cite{mireshghallah-etal-2022-mix}, and COLD Decoding \cite{qin2022cold}, adopt Energy-based Models \cite{lecun2006tutorial} to blend multiple objectives.
Others like MUCOCO \cite{kumar2021controlled} convert the optimization objectives of multi-aspect control to inequality constraints and thereby apply the lagrange multiplier method for this constrained optimization problem.
\paragraph{Prefix-Tuning}
GPT-3 \cite{brown2020language} provides a new paradigm named prompt-based learning \cite{liu2021pre}, which is able to perform few-shot learning on downstream tasks. Prefix-Tuning \cite{li-liang-2021-prefix} leverages the learned lightweight prompts to trigger the conditional generation capability of the language model.
Applying Prefix-Tuning to multi-aspect controllable text generation \cite{yu-etal-2021-attribute-alignment, qian-etal-2022-controllable, carlsson-etal-2022-fine, yang2022tailor} can be regarded as optimizing on multi-objective implicitly.
\begin{figure*}[ht]
\centering
\vspace{-0.5cm}
\resizebox{2\columnwidth}{!}{
\includegraphics[scale=0.55]{pic/model.pdf}
}
\vspace{-0.5cm}
\caption{An overview of our method. \textbf{Top}: Illustration of our autoencoder structure with prefix-tuning deployed on the fixed decoder, where latent representations $\mathcal{H}_i$ constitute an estimated attribute space. \textbf{Bottom Left}: Illustration of attribute classification loss $\mathcal{L}_C$ and aspect gap loss $\mathcal{L}_G$ attached to the attribute space. \textbf{Bottom Right}: Inferencing stage with prefix mapped from the intersection of attributes.}
\label{fig:2}
\end{figure*}
\section{Methodology}
In this section, we first introduce the motivation and overall process of our method, after which we describe each module in detail.
\subsection{Overview}
As illustrated in Figure \ref{fig:2}, our method mainly revolves around the attribute space including estimating the attribute space, searching for intersections, and mapping intersections to sentences.
Firstly, we aim to construct an attribute space using sampled sentences to estimate the real space as accurately as possible. We employ an autoencoder structure with the latent representations denoting points that constitute our estimated attribute space. To ensure that our estimated space reliably models the attributes, such as their probability distributions and relationships between different attributes, we further attach three constraints to the representation.
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Reconstruction Loss} $\mathcal{L}_R$ aims to bridge the gap between points in the attribute space and natural attribute-relevant sentences, i.e., recovering the attributes reflected in the sentence content.
\item \textbf{Attribute Classification Loss} $\mathcal{L}_C$ forces the encoder to focus more on capturing attributes by distinguishing points of different attributes from the same aspect.
\item \textbf{Aspect Gap Loss} $\mathcal{L}_G$ %
penalizes the discrepancy of aspects, which is caused by the domain gap among different data sources for different aspects.
Inspired by the feature alignment \cite{10.1145/1772690.1772767}, we minimize the distances between distributional centers of each two aspects.
\end{enumerate*}
The second step aims to search for an intersection area of the desired attributes. If such an area exists, a point in it has the property that the neighboring points within a tiny surrounding region cover all required attributes. Inspired by this neighborhood view, we design an algorithm that iteratively approaches an area where these attributes bind more tightly.
The third step maps our searched intersection to a Prefix that activates the language model to generate attribute-relevant sentences. To make the language model less sensitive to slight variations, we sample a perturbation vector from a multivariate Gaussian distribution.
\subsection{Estimating Attribute Space}
Given $|\mathbf{A}|$ aspects $\mathbf{A} = \left\{A_1,\cdots,A_{|\mathbf{A}|}\right\}$ with each comprising $|A_t|$ attributes $\left\{a_1^t,\cdots,a_{|A_t|}^t\right\}$, $I_\tau^t$ is an index set representing the identifiers of all sentences with attribute $a^t_\tau$ in the training data. We have $I^t = \bigcup\limits_{\tau=1}^{|A_t|} I_\tau^t, I = \bigcup\limits_{t=1}^{|\mathbf{A}|} I^t$, where $I^t$ is the indices of all sentences with any attribute in aspect $A_t$ and $I$ is the indices of the entire training data.
We encode sentences $\{X_i\}$ from all aspects $\mathbf{A}$ to representations $\{\mathcal{H}_i\}$ with unified mapping parameters $\phi$: $\mathcal{H}_i = \text{Encode}_\phi(X_i)$, where $i \in I$.
\paragraph{Reconstruction Loss $\mathcal{L}_R$}
As in the top of Figure \ref{fig:2}, $\mathcal{L}_R$ is computed in the same way as the autoregressive loss of pre-trained language model $p_{\text{LM}}$:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\begin{aligned}
\label{eq:2}
\mathcal{L}_R &= -\sum\limits_{i\in I} \log p_\text{LM}(X_i|\text{Prefix}_i)\\
\text{Prefix}_i &= \text{MLP}_\theta(\mathcal{H}_i + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
\end{aligned}
\end{equation}
where $X_i$ here is a sample sentence from the entire training set, i.e., $i \in I$.
Besides, $\varepsilon_i$, with a scaling factor $\lambda$, is a perturbation vector sampled from a multivariate Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ for robustness when reconstructing. The multi-layer perceptron $\text{MLP}_{\theta}$ maps the perturbed $\mathcal{H}_i$ to a $\text{Prefix}_i$ that can activate the language model to generate text with desired attributes.
It is worth noting that our primary goal is to recover attributes, which means $\mathcal{L}_R$ does not need to converge too well, and preferably should not, as long as text fluency is maintained.
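For illustration, a minimal PyTorch-style sketch of this reconstruction objective is given below. The function and module names are our placeholders rather than part of a released implementation, and the frozen language model is abstracted as a callable returning its autoregressive negative log-likelihood.
\begin{verbatim}
# Sketch of the reconstruction loss L_R; `encode`, `mlp_theta`, and
# `lm_neg_log_likelihood` are placeholders for the encoder, the
# prefix-mapping MLP, and the frozen LM's per-sentence NLL.
import torch

def reconstruction_loss(encode, mlp_theta, lm_neg_log_likelihood,
                        batch, lam=0.1):
    h = encode(batch)                  # H_i, shape (B, d)
    eps = torch.randn_like(h)          # epsilon_i ~ N(0, I)
    prefix = mlp_theta(h + lam * eps)  # Prefix_i = MLP_theta(H_i + lam*eps_i)
    # Autoregressive NLL of X_i given Prefix_i; LM parameters stay frozen
    return lm_neg_log_likelihood(batch, prefix).mean()
\end{verbatim}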
\paragraph{Attribute Classification Loss $\mathcal{L}_C$}
We force the encoder to focus on attributes by $\mathcal{L}_C$ in the way:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:3}
\begin{aligned}
\mathcal{L}_C = -\sum\limits_{t=1}^{|\mathbf{A}|}\sum\limits_{\tau=1}^{|A_t|} \sum\limits_{i \in I_\tau^t} \log p_{\pi_{t}}(a_\tau^t|\mathcal{H}_i).
\end{aligned}
\end{equation}
Given a sentence representation, $p_{\pi_t}$ is a classifier with parameters $\pi_{t}$ that distinguishes the attributes $\left\{a_\tau^t\right\}$ within aspect $A_t$.
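A corresponding sketch of the attribute classification loss is shown below, assuming one linear classification head per aspect over the latent representations; all names are again illustrative.
\begin{verbatim}
# Sketch of the attribute classification loss L_C with one head per aspect.
import torch
import torch.nn.functional as F

def classification_loss(heads, h, aspect_ids, attribute_ids):
    # heads: dict aspect_id -> nn.Linear(d, |A_t|); h: (B, d)
    loss = h.new_zeros(())
    for t, head in heads.items():
        mask = aspect_ids == t                 # samples drawn from aspect A_t
        if mask.any():
            logits = head(h[mask])             # p_{pi_t}(a | H_i)
            loss = loss + F.cross_entropy(logits, attribute_ids[mask],
                                          reduction="sum")
    return loss
\end{verbatim}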
\paragraph{Aspect Gap Loss $\mathcal{L}_G$}
We penalize the discrepancy between distributional centers by:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:4}
\mathcal{L}_G = \sum\limits_{\mathclap{1\leq t_1<t_2\leq |\mathbf{A}|}}\quad\;\,\,\left\|\sum\limits_{i \in I^{t_1}} \frac{\mathcal{H}_i}{|I^{t_1}|} - \sum\limits_{j \in I^{t_2}} \frac{\mathcal{H}_j}{|I^{t_2}|}\right\|_2,
\end{equation}
which sums the Euclidean distances between every two distinct distributional centers.
When generalizing to a larger scale of aspects, it is relatively expensive to calculate averages over the entire dataset each time the model is updated.
We calculate this loss in practice using a batch-level approximation. We assign each aspect a memory unit to store the latest representation of the aspect's estimated center.
Each time we process a batch of sentences from one aspect, we take the average of their representations as the center and sum up the Euclidean distances to the centers of the other aspects stored in memory, which gives the estimated $\mathcal{L}_G$. Then, we update the memory unit of this aspect to the latest center.
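This batch-level approximation can be sketched as follows, where a dictionary of memory units keeps the latest estimated center of each aspect (names are ours):
\begin{verbatim}
# Sketch of the batch-level estimate of the aspect gap loss L_G.
import torch

def aspect_gap_loss(h_batch, aspect_id, memory):
    center = h_batch.mean(dim=0)       # batch-level center of this aspect
    loss = h_batch.new_zeros(())
    for other_id, other_center in memory.items():
        if other_id != aspect_id:
            # Euclidean distance to the stored center of another aspect
            loss = loss + torch.norm(center - other_center, p=2)
    memory[aspect_id] = center.detach()  # update this aspect's memory unit
    return loss
\end{verbatim}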
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Intersection Searching}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{H}_i, i \in \bigcup\limits_{t=1}^N I^t_{\alpha_t}$ from $N$ attributes\\
\quad \; $\omega_{\alpha_t}$ weight of each attribute
\ENSURE Intersection of $N$ attributes: $\mathcal{\tilde{H}}^*$
\STATE Initialize $M$ candidates:$\{\mathcal{\tilde{H}}^0_m\}$
\STATE Iterate $S$ times
\FOR{$s$ in $[0, S-1]$}
\FOR{$m$ in $[1, M]$}
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathbf{0}$
\FOR{$t$ in $[1, N]$}
\STATE $\mathbf{H}\leftarrow\mathop{\text{Nearest}}\limits_{top K}(\mathcal{\tilde{H}}_m^s,\left\{\mathcal{H}_i, i \in I^t_{\alpha_t}\right\})$
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} +\omega_{\alpha_t} \mathop{\text{mean}}(\mathbf{H})$
\ENDFOR
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} / \sum\limits_{t=1}^N \omega_{\alpha_t}$
\ENDFOR
\ENDFOR
\STATE $\mathcal{\tilde{H}}^*\leftarrow \text{Select}(\{\mathcal{\tilde{H}}^{S}_m\})$
\end{algorithmic}
\end{algorithm}
During the training stage, our loss function is:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:5}
\mathcal{L} = w_1\mathcal{L}_R + w_2\mathcal{L}_C + w_3\mathcal{L}_G.
\end{equation}
It's worth noting that we only update parameters $\phi$, $\theta$, and $\left\{\pi_t\right\}$ for the encoder, the MLP layer, and the classifier heads, respectively.
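Putting the pieces together, one training step can be sketched as below for a batch drawn from a single aspect; the optimizer is assumed to cover only the parameters $\phi$, $\theta$, and $\{\pi_t\}$, and the helper functions are the illustrative sketches given above.
\begin{verbatim}
# Sketch of one training step combining the three losses with weights w1-w3.
import torch

def training_step(batch, models, optimizer, memory,
                  w=(1.0, 1.0, 1.0), lam=0.1):
    h = models.encode(batch)                              # H_i for this batch
    prefix = models.mlp_theta(h + lam * torch.randn_like(h))
    l_r = models.lm_neg_log_likelihood(batch, prefix).mean()
    l_c = classification_loss(models.heads, h,
                              batch.aspect_ids, batch.attribute_ids)
    l_g = aspect_gap_loss(h, batch.aspect_id, memory)     # batch-level estimate
    loss = w[0] * l_r + w[1] * l_c + w[2] * l_g
    optimizer.zero_grad()
    loss.backward()   # only phi, theta, and the pi_t heads receive updates
    optimizer.step()
    return loss.item()
\end{verbatim}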
\subsection{Intersection of Attributes}
Suppose there is an intersection point, denoted as $\mathcal{\tilde{H}}^*$, located within the intersection region of attributes $\left\{a^1_{\alpha_1},a^2_{\alpha_2},\cdots,a^N_{\alpha_N}\right\}$ from $N$ different aspects, where $a^t_{\alpha_t}$ is the $\alpha_t$th attribute in aspect $A_t$.
Algorithm \ref{alg1} approximates $\mathcal{\tilde{H}}^*$ by iteratively approaching the most balanced point among the nearest neighbors from the different attributes.
First, we initialize the candidates $\{\mathcal{\tilde{H}}^0_m\}$ by randomly sampling points in the attribute space, calculating their distance to the closest point of each attribute $a^t_{\alpha_t}$, and selecting the top $M$ samples with the smallest average distance to all attributes.
At each iteration $s$, we choose the top-K\footnote{We study the practical meaning and impact of $K$ in \S \ref{sec:effectofk}.} nearest points to $\mathcal{\tilde{H}}^s_m$ for each attribute and update $\mathcal{\tilde{H}}^{s+1}_m$ using the weighted average of these points.
It is worth mentioning that $\omega_{\alpha_t}$ is the weight used to balance the attributes or favor some specifically, and a negative value of $\omega_{\alpha_t}$ can even push the candidate away from a particular attribute.
Finally, we select the best candidate from the last iteration $S$, which is expected to be in the intersection region, i.e., a representation related to multiple attributes.
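For clarity, a compact PyTorch-style sketch of this search procedure is given below; it mirrors Algorithm \ref{alg1} but is our simplification, with the initialization and final selection steps omitted.
\begin{verbatim}
# Sketch of the intersection searching loop; points[t] holds the latent
# representations of attribute a^t_{alpha_t} and weights[t] its omega.
import torch

def search_intersection(points, weights, candidates, steps=10, k=200):
    # points: list of (n_t, d) tensors; candidates: (M, d) initial guesses
    for _ in range(steps):
        new_candidates = []
        for cand in candidates:                             # each H~_m
            acc = torch.zeros_like(cand)
            for pts, w in zip(points, weights):
                dist = torch.cdist(cand[None, :], pts)[0]   # distances
                nearest = pts[dist.topk(k, largest=False).indices]
                acc = acc + w * nearest.mean(dim=0)         # weighted mean
            new_candidates.append(acc / sum(weights))
        candidates = torch.stack(new_candidates)
    return candidates   # a selection step then picks the final H~*
\end{verbatim}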
\subsection{Generation with Intersections}
As illustrated in the right bottom of Figure \ref{fig:2}, we convert the representation $\mathcal{\tilde{H}}^*$ obtained from the intersection area directly to the $\text{Prefix}$ with $\text{MLP}_\theta$ and let the language model generate multi-attributed sentence $Y$ from input $\mathcal{X}$ as:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:1}
\begin{aligned}
Y &= \mathrm{arg}\max\limits_y \; p_\text{LM}(y|\text{Prefix}^*;\mathcal{X})\\
\text{Prefix}^* &=\text{MLP}_{\theta}(\mathcal{\tilde{H}}^* + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\end{aligned}
\end{equation}
When generating several attribute-relevant sentences for one attribute combination, we only need to calculate the intersection for it once.
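A sketch of this generation step is given below; the prefix-conditioned decoding of the frozen language model is abstracted as a callable, and a fresh perturbation is drawn for each sample so that one searched intersection yields several diverse sentences.
\begin{verbatim}
# Sketch of generating multi-attributed sentences from the intersection H~*.
import torch

def generate_from_intersection(h_star, mlp_theta, lm_generate,
                               prompt, lam=0.1, n=5):
    outputs = []
    for _ in range(n):
        eps = torch.randn_like(h_star)          # fresh perturbation per sample
        prefix = mlp_theta(h_star + lam * eps)  # Prefix* = MLP_theta(H~*+lam*eps)
        outputs.append(lm_generate(prefix, prompt))
    return outputs
\end{verbatim}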
\section{Experiment}
In this section, we demonstrate the effectiveness of our method on three-aspect control, including sentiment, topic, and detoxification.
\subsection{Multi-Aspect Control Task}
The datasets we use are the same as GeDi \cite{krause-etal-2021-gedi-generative} and Contrastive Prefix \cite{qian-etal-2022-controllable}.
To balance the data scale across all aspects, we randomly sample 10k sentences from each dataset, which is fewer than the number of samples GeDi uses, with each attribute equally dividing this amount.
We use the IMDb movie reviews \cite{maas-etal-2011-learning}, the AGNews dataset \cite{NIPS2015_250cf8b5}, and the Jigsaw Toxic Comment Classification Challenge Dataset\footnote{\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/}} for sentiment, topic and detoxification aspects, respectively.
The prompts used for text generation are the same as those used in the PPLM \cite{Dathathri2020Plug}, with 20 from its bag-of-words experiment and 15 from its discriminator experiment. We experiment with $8$ combinations of the $3$ aspects with $2$ sentiments $\times$ $4$ topics $\times$ $1$ detoxification and generate $5$ completions for each combination and each prompt. Totally, each model will generate $35 \times 2 \times 4 \times 1 \times 5 = 1400$ sentences.
It is worth noting that we do not specifically use prompts that induce the language model to generate toxic text, making detoxification easier to improve.
To measure the performance on different aspects, we compute the attribute relevance. We finetune a DeBERTa \cite{he2021deberta, he2021debertav3} classifier on the Yelp dataset \cite{NIPS2015_250cf8b5} for the sentiment aspect, and a topic classifier utilizing all the remaining topic data not used during training. We evaluate non-toxicity with the Google Perspective API\footnote{\url{https://www.perspectiveapi.com}}.
The final performance of a model is determined by the average of these three attribute relevance scores introduced above.
We also use two auxiliary metrics to measure text quality.
One is perplexity, calculated by GPT2-large following Contrastive Prefix \cite{qian-etal-2022-controllable}. The other checks that models are sensitive to changes across different prefixes: we calculate the Distinctness \cite{li-etal-2016-diversity} of sentences generated from different prefixes and average the 1-gram, 2-gram, and 3-gram distinct scores for simplicity.
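For reference, the averaged distinctness score can be computed as in the following sketch, where whitespace tokenization is our simplification:
\begin{verbatim}
# Sketch of averaged distinctness: ratio of unique n-grams for n = 1, 2, 3.
def distinctness(sentences):
    scores = []
    for n in (1, 2, 3):
        ngrams, total = set(), 0
        for s in sentences:
            toks = s.split()
            grams = list(zip(*[toks[i:] for i in range(n)]))
            ngrams.update(grams)
            total += len(grams)
        scores.append(len(ngrams) / max(total, 1))
    return sum(scores) / 3
\end{verbatim}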
Moreover, we conduct human evaluation with sentences generated by different models shuffled. Each sentence is rated by three professional evaluators for relevance to each of the three attributes and for text fluency. Evaluators rate each item on a scale of 1 to 5, with 5 representing text highly related to the desired attribute or very fluent.
\subsection{Baselines}
\begin{table*}[ht]
\small
\vspace{-0.3cm}
\centering
\begin{tabular}{l|c|ccc|c|c}
\hline
\hline
\textbf{Methods} & \textbf{Average}↑ (\%) & \textbf{Sentiment}↑ (\%) & \textbf{Topic}↑ (\%) & \textbf{Detoxification}↑ (\%) & \textbf{PPL.}↓ &\textbf{Dist.}↑\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{PPLM} & 71.0 $\pm$ 21.4 & 64.7 $\pm$ 24.8 & 63.5 $\pm$ 22.7 & 84.9 $\pm$\; 6.5 & 62.6 & 62.0\\
\hline
\textbf{GeDi} & 81.4 $\pm$ 14.7 & 76.1 $\pm$ 17.2 & 73.8 $\pm$ 11.3 & 94.2 $\pm$\; 1.9 & 116.6 & 75.1\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Multi-Objective Optimization Based Methods}}\\
\hline
\textbf{MUCOCO} & 73.9 $\pm$ 24.1 & 65.0 $\pm$ 33.7 & 67.2 $\pm$ 18.3 & 89.5 $\pm$\; 3.5 & 405.6 & 49.7\\
\hline
\textbf{Mix\&Match} & 79.7 $\pm$ 21.8 & 73.5 $\pm$ 25.9 & 69.9 $\pm$ 21.1 & 95.8 $\pm$\; 1.9 & 63.0 & 61.8\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Contrastive Prefix} &&&&&\\
\quad concatenation & 77.2 $\pm$ 18.5 & 67.3 $\pm$ 20.7 & 71.8 $\pm$ 16.5 & 92.6 $\pm$\; 2.9 & 54.6 & 39.9\\
\quad semi-supervised & 81.3 $\pm$ 16.5 & 74.4 $\pm$ 19.6 & 76.9 $\pm$ 16.7 & 92.7 $\pm$\; 3.5 & 31.9 & 43.3\\
\hline
\textbf{Ours} & \textbf{87.4} $\pm$ 10.9 & \textbf{86.7} $\pm$ 10.5 & \textbf{84.8} $\pm$ 14.2
& 90.7 $\pm$\; 7.4 & 28.4 & 49.5\\
\cline{2-7}
\quad w/o $\mathcal{L}_G$ & 80.9 $\pm$ 16.2 & 71.6 $\pm$ 11.7 & 75.9 $\pm$ 18.9 & 95.3 $\pm$\; 2.6 & 71.5 & 58.9\\
\quad w/o $\mathcal{L}_C$ & 62.3 $\pm$ 41.8 & 49.1 $\pm$ 49.8 & 41.7 $\pm$ 36.0 & \textbf{96.0} $\pm$\; 0.1 & 473.0 & 37.0\\
\hline
\hline
\end{tabular}
\caption{Automatic Results on Multi-Aspect Control. Hyperparameters and details are in \S \ref{sec:appendix3}.}
\label{tab:1}
\vspace{-0.2cm}
\end{table*}
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Weighted Decoding}:
\textbf{PPLM} \cite{Dathathri2020Plug} biases the language model with gradients back-propagated from trained classifiers. \textbf{GeDi} \cite{krause-etal-2021-gedi-generative} influences the decoding process with token probabilities conditioned on attributes.
\item \textbf{Multi-objective Optimization}:
\textbf{MUCOCO} \cite{kumar2021controlled} regards the decoding process as a constrained optimization problem, where the language model is the objective function and attributes are constraints.
\textbf{Mix\&Match} \cite{mireshghallah-etal-2022-mix} controls attributes with energy-based models and generates sentences by masking, sampling, and correcting.
\item \textbf{Prefix-Tuning}:
\textbf{Contrastive Prefix} \cite{qian-etal-2022-controllable} utilizes prefixes to activate the language model to generate attribute-relevant sentences by concatenation or semi-supervision.
\end{enumerate*}
\subsection{Results}
According to the automatic evaluation results in Table \ref{tab:1}, under the multi-aspect setting, we group models based on their type of methods in chronological order.
In addition, we demonstrate their standard deviations, which reflect the stability of models among different attribute combinations.
For weighted decoding, GeDi uses more powerful classifiers than PPLM and performs better on attribute relevance, stability to different combinations, and distinctness while correspondingly worse on perplexity.
Multi-objective optimization methods achieve a favorable performance on attribute relevance while MUCOCO explodes on perplexity due to its non-autoregressive paradigm not being suitable for generating from scratch.
The performance of the semi-supervised Contrastive Prefix is similar to GeDi's, except for a lack of diversity.
Our method performs best on the average attribute-related metric, with a significant improvement of at least $7.3\%$ over existing baselines. Our advances mainly come from the sentiment and topic aspects, with gains of no less than $13.9\%$ and $10.3\%$, respectively.
Although our model is not the best on detoxification, it is the most balanced and stable according to the lowest standard deviation on average, $10.9$.
As a prefix-tuning-based method that induces the language model without directly modifying it, our approach is naturally good at maintaining text fluency; we perform well on perplexity and inherit a reasonable level of diversity.
Furthermore, we conduct ablation on aspect gap loss $\mathcal{L}_G$ and attribute classification loss $\mathcal{L}_C$ separately.
On the one hand, without $\mathcal{L}_G$, we cannot alleviate the bias across the different training datasets, making it hard to search for the intersection areas. Since training sentences of the sentiment and topic aspects are mainly non-toxic, our model focuses more on detoxification rather than struggling for the other two, leading to considerable declines in their relevance and slight improvements in detoxification. Besides, as the distance among sample points from different aspects in the attribute space increases, our model generates sentences mapped from far sparser areas, leading to a small decrease in fluency and a subtle increase in diversity.
On the other hand, without $\mathcal{L}_C$, our attribute space totally collapses. The relevance of sentiment and topic drops drastically while non-toxicity rises, because the model can hardly distinguish representations of different attributes within the same aspect and instead focuses on the relatively easier detoxification.
Worse still, without distinct representations, our model is required to recover different sentences from similar representations, leading to oscillation in training and to hardly generating complete text during inference.
Results of human evaluation are in Table \ref{tab:2}, with inter-annotator agreement being $0.36$ in Fleiss' $\kappa$.
We evaluate GeDi, Contrastive Prefix, and our method and observe that the results are consistent with the automatic ones on sentiment and topic relevance.
The performance of all models on detoxification is high and relatively similar, so the automatic results differ from the manual ones, where the annotators believe that our model does a better job than the baselines.
Since perplexity is relatively unreliable, fluency is better reflected by human judgment: the manually measured fluency of GeDi is much better than that of Contrastive Prefix, and our method achieves the best fluency.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|cccc}
\hline
\hline
\textbf{Methods} & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{Detox.}↑ &\textbf{Fluency}↑\\
\hline
\hline
\textbf{GeDi} & 2.96 & 2.72 & 4.59 & 3.08\\
\hline
\textbf{Con. Prefix} & 2.84 & 2.90 & 4.40 & 2.26\\
\hline
\textbf{Ours} & \textbf{3.47} & \textbf{3.39} & \textbf{4.71} & \textbf{3.69}\\
\hline
\hline
\end{tabular}
\caption{Human Evaluation on Multi-Aspect Control. }
\label{tab:2}
\end{table}
\begin{table*}[ht]
\small
\centering
\vspace{-0.3cm}
\resizebox{2\columnwidth}{!}{
\begin{tabular}{l|cc|cccc|c}
\hline
\hline
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{c|}{\textbf{Sentiment} (\%)} &\multicolumn{4}{c|}{\textbf{Topic} (\%)} & \multirow{2}{*}{\textbf{Detox.} (\%)}\\
& \textbf{Neg.}& \textbf{Pos.}& \textbf{World}&\textbf{Sports}& \textbf{Business}&\textbf{Sci./Tech.}&\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{GeDi} \textit{single-aspect} & 93.9& 70.7& 73.4 & 85.7 & 75.7 & 98.0 & 94.9 \\
\hline
\multirow{8}{*}{\textbf{GeDi}}
& 94.7 & - & 80.0 & - & - & - & 90.6\\
& 84.2 & - & - & 74.8 & - & - & 93.9\\
& 94.9 & - & - & - & 75.7 & - & 96.6\\
& 90.6 & - & - & - & - & 80.1 & 92.8\\
& - & 53.7 & 61.4 & - & - & - & 94.4\\
& - & 60.5 & - & 74.3 & - & - & 95.2\\
& - & 57.6 & - & - & 54.3 & - & 95.7\\
& - & 72.3 & - & - & - & 90.2 & 94.2\\
\cline{2-8}
\qquad \textit{average} & \textbf{91.1} $(- \textbf{2.8})$ & 61.0 $(-9.7)$ & 70.7 $(-2.7)$ & 74.6 $(-11.1)$ & 65.0 $(-10.7)$ & 85.2 $(-12.8)$ & \textbf{94.2} $(-\textbf{0.7})$\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Prefix} \textit{single-aspect} & 88.4 & 90.6 & 74.5 & 85.3 & 93.5 & 93.6 & 93.8\\
\hline
\multirow{8}{*}{\tabincell{l}{\textbf{Contrastive Prefix}\\ \quad semi-supervised}}
& 65.5 & - & \textbf{80.6} & - & - & - & 91.8\\
& 67.2 & - & - & \textbf{90.3} & - & - & 92.5\\
& 56.0 & - & - & - & 79.2 & - & 92.2\\
& 90.0 & - & - & - & - & 93.3 & 84.8\\
& - & 93.5 & 64.8 & - & - & - & 95.1\\
& - & 41.8 & - & 78.5 & - & - & 94.8\\
& - & 87.4 & - & - & 41.7 & - & 95.2\\
& - & 93.6 & - & - & - & 86.7 & 95.3\\
\cline{2-8}
\qquad \textit{average} & 69.7 $(-18.7)$ & 79.1 $(-11.5)$ & \textbf{72.7} $(-\textbf{1.8})$ & \textbf{84.4} $(-\textbf{0.9})$ & 60.5 $(-33.0)$ & 90.0 $(-3.6)$ & 92.7 $(-1.1)$\\
\hline
\multirow{8}{*}{\textbf{Ours}}
& 69.7 & - & 71.7 & - & - & - & 84.1\\
& 78.6 & - & - & 80.0 & - & - & 80.2\\
& \textbf{99.9} & - & - & - & \textbf{96.7} & - & 96.8\\
& 92.8 & - & - & - & - & \textbf{98.0} & 81.7\\
& - & 80.5 & 58.0 & - & - & - & 95.1\\
& - & 84.7 & - & 86.6 & - & - & 94.5\\
& - & 87.6 & - & - & 91.7 & - & \textbf{98.1}\\
& - & \textbf{99.7} & - & - & - & 96.1 & 95.4\\
\cline{2-8}
\qquad \textit{average} & 85.3 $(-3.1)$ & \textbf{88.1} $(-\textbf{2.5})$ & 64.9 $(-9.6)$ & 83.3 $(-2.0)$ & \textbf{94.2} $(+\textbf{0.7})$ & \textbf{96.8} $(+\textbf{3.2})$ & 90.7 $(-3.1)$\\
\hline
\hline
\end{tabular}
}
\caption{Detailed Results on Single-Aspect and Multi-Aspect Control. We demonstrate results on \textit{single-aspect} and \textit{average} results on multi-aspect control with their difference to \textit{single-aspect}, where other rows each represent an attribute combination. Cases are in \S \ref{sec:appendix4}. Detailed results for other baseline models and our ablations are in \S \ref{sec:appendix5}.}
\label{tab:3}
\vspace{-0.2cm}
\end{table*}
\section{Analysis}
\subsection{Effect of Different Attributes and their Combinations}
We illustrate the detailed results of each attribute and their combinations in Table \ref{tab:3}.
GeDi and Prefix-tuning perform differently in \textit{single-aspect} control, each with its own advantages. For example, GeDi excels at \textit{negative} with $93.9\%$ relevance, while Prefix-tuning is good at \textit{positive} with $90.6\%$ relevance.
When dealing with multi-aspect control, they inherit such imbalanced characteristics, with \textit{average} relevance of $91.1\%$ and $79.1\%$, respectively. In addition, the baselines decrease correspondingly in the \textit{average} relevance of each attribute compared to \textit{single-aspect}, with drops ranging from $0.7$ to $33.0$ points.
On average, our model outperforms other baselines on attribute metrics (Table \ref{tab:1}). In detail, our model performs competitively on most attributes compared to the other prefix-tuning-based model, Contrastive Prefix. In particular, on attributes like \textit{business} and \textit{sci/tech}, our model improves significantly over Contrastive Prefix on multi-aspect control and can even surpass its \textit{single-aspect} performance.
In addition, correlations between attributes vary widely, as in Table \ref{tab:3}.
For example, generally, \textit{positive} fits well with \textit{non-toxic} while \textit{negative} leads to a massive drop in non-toxicity, which is consistent with the intuition that one can hardly praise people and offend them simultaneously.
Besides, \textit{world} and \textit{business} news are often reported negatively, such as war, famine, inflation, etc., making it challenging to combine them with \textit{positive}.
When attributes are not closely correlated, which means that few natural sentences possess these attributes together, our method is more likely to capture such rare co-occurrences and magnify their frequency.
Take \textit{business} as an example. It is effortless to achieve a fine attribute relevance when performing single-aspect control on \textit{business}, with GeDi achieving $75.7$ and Prefix obtaining $93.5$. After attaching \textit{positive} to \textit{business}, baseline models will suffer from a decline due to their weak correlation, where GeDi and Contrastive Prefix drop to $54.3$ and $41.7$, respectively. In contrast, our method can alleviate this problem by retrieving this unusual co-occurrence in the training sentences and recovering it from the attribute space, achieving a performance of $91.7$, which is close to single-aspect control.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/attribute_space.pdf}
\caption{Projection of 4 attributes from attribute space.}
\label{fig:3}
\end{figure}
When combining \textit{business} with \textit{negative}, which is a relatively common combination, there is still some decrease for baseline models. On the contrary, our method can even obtain a performance of $96.7$ that surpasses single-aspect control.
\subsection{Estimated Attribute Space}
We demonstrate part of our estimated attribute space in Figure \ref{fig:3} with four attributes: \textit{\textcolor{sred}{positive}}, \textit{\textcolor{sblue}{negative}}, \textit{\textcolor{syellow}{sports}}, and \textit{\textcolor{sgreen}{sci/tech}} from sentiment and topic aspects.
We project the high-dimensional space to 2D with Principal Component Analysis (PCA).
Consistent with our hypothesis, distributions of \textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sgreen}{sci/tech}} are asymmetric and the intersections lie in the sparse edges of attributes' distribution.
In addition, we project the intersections searched by the \textcolor{smediumvioletred}{baseline}'s strategy and \textcolor{darkred}{ours}, respectively. For \textit{\textcolor{sred}{positive}}-\textit{\textcolor{sgreen}{sci/tech}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{sgreen}{sci/tech}} pairs, the combinations are relatively tight, making it easy to find intersections. However, intersection areas for \textit{\textcolor{sred}{positive}}-\textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{syellow}{sports}} pairs are considerably sparse.
As shown in the enlarged area, the intersection searched by the \textcolor{smediumvioletred}{baseline} is at the midpoint of the two distributional centers, but this location is not where the attributes intersect. On the contrary, \textcolor{darkred}{our} method can find an intersection in such a sparse region, making points from the two different attributes appear simultaneously in its tiny surrounding area.
It is worth noting that \textit{positive} and \textit{negative} appear to intersect in this projection because they are close in the high-dimensional space. However, there is actually no intersection when only these two attributes are projected, as shown in \S \ref{sec:pos_neg}.
\subsection{Effect of $K$}
\label{sec:effectofk}
\begin{table}[t]
\small
\setlength{\abovecaptionskip}{0.2cm}
\vspace{-0.3cm}
\centering
\begin{tabular}{r|c|ccc}
\hline
\hline
$\textbf{K}$ &\textbf{Avg.}↑ & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{Detox.}↑\\
\hline
5000 & \textcolor{sblue}{75.5} & 70.5 & 67.9 & 88.2\\
4000 & \textcolor{sblue}{77.6} & 72.9 & 71.4 & 88.4\\
3000 & \textcolor{sblue}{78.7} & 72.4 & 74.7 & 88.9\\
2000 & \textcolor{sblue}{79.1} & 72.6 & 75.9 & 88.7\\
1500 & \textcolor{sblue}{79.9} & 73.6 & 77.1 & 89.0\\
1000 & \textcolor{sblue}{80.7} & 75.7 & 77.2 & 89.1\\
800 & \textcolor{sblue}{82.9} & 79.3 & 79.2 & 90.3\\
500 & \textcolor{sblue}{85.2} & 83.5 & 81.5 & 90.5\\
300 & \textcolor{sblue}{85.7} & 84.1 & 83.2 & 89.7\\
200 & \textcolor{sred}{\textbf{87.4}} & \textbf{86.7} & \textbf{84.8} & 90.7\\
150 & \textcolor{sgreen}{84.0} & 79.2 & 84.3 & 88.4\\
100 & \textcolor{sgreen}{83.9} & 78.7 & 83.6 & 89.5\\
50 & \textcolor{sgreen}{82.2} & 78.4 & 78.5 & 89.6\\
20 & \textcolor{sgreen}{80.9} & 77.8 & 73.1 & 91.7\\
10 & \textcolor{sgreen}{80.8} & 79.6 & 71.5 & 91.2\\
5 & \textcolor{sblue}{81.4} & 82.9 & 69.3 & 92.1\\
3 & \textcolor{lred}{85.0} & 86.1 & 77.7 & 91.1\\
1 & \textcolor{sgreen}{78.8} & 63.1 & 80.9 & \textbf{92.4}\\
\hline
\hline
\end{tabular}
\caption{Results that vary with $K$.}
\vspace{-0.2cm}
\label{tab:k_analysis}
\end{table}
We analyze the variation of $K$ in the intersection searching algorithm and demonstrate the results in Table \ref{tab:k_analysis}.
Our model reaches a critical point when $K$ is 200, where the performance is optimal.
On the one hand, as the value of $K$ gradually increases, our method pays less attention to regions where samples are fewer while attributes combine more tightly, and the performance decreases accordingly.
When $K$ reaches 5k, our method degenerates into a plain prefix-tuning model, which treats intersection as the midpoint of distributional centers. Its performance is similar and slightly inferior to the concatenation version of Contrastive Prefix in Table \ref{tab:1}.
On the other hand, smaller $K$ leads to suboptimal performance since the effect of noise becomes non-negligible in training data.
When $K$ is less than $10$, our model will be very unstable.
\subsection{Distribution of Attributes}
\label{attributes}
\begin{figure}[t]
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{pic/single_2.pdf}
\caption{Distribution of attribute World from Topic.}
\label{fig:4}
\vspace{-0.2cm}
\end{figure}
We project sample points to 2D by PCA, with each attribute projected independently. As in Figure \ref{fig:4}, we display a scatterplot of World and perform Gaussian kernel density estimation to visualize its probability distribution. A darker area denotes a higher probability, where more representation points of oracle sentences gather, and the region annotated by a red ellipse is the estimated distributional center. As the plot shows, the distribution of World is significantly asymmetric: the center lies in the top part, with the bottom being a sparse long tail. In addition, the distribution is even non-convex, with an isolated cluster in the lower right corner. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian distribution. Besides, we plot the distributions of other attributes in \S \ref{sec:appendix1}.
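For reference, the projection and density estimation can be sketched as below; the use of scikit-learn's PCA and SciPy's Gaussian kernel density estimator is our choice for illustration and may differ from the exact tooling behind the figure.
\begin{verbatim}
# Sketch of the 2D projection and density estimate for one attribute.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

def project_and_estimate(h):        # h: (n, d) representations of one attribute
    xy = PCA(n_components=2).fit_transform(h)
    kde = gaussian_kde(xy.T)        # Gaussian KDE over the 2D points
    density = kde(xy.T)             # per-point density, used for shading
    center = xy[np.argmax(density)] # simple proxy for the distributional center
    return xy, density, center
\end{verbatim}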
\section{Discussion on Distributional Lens}
Pilot work such as DGC \cite{khalifa2020distributional} estimates the language distribution with an energy-based model and optimizes this distribution to satisfy constraints by approaching the constraints manifold. Recent distributional approaches like COLD Decoding \cite{qin2022cold} and MuCoLa \cite{kumar2022constrained} model the language and attribute distributions in the same space so as to sample attribute-related sentences with Langevin Dynamics. Concurrent work on the image side, PromptGen \cite{wu2022generative}, simulates the complex distribution of images relevant to target attributes using a deep generative model. However, according to a common hypothesis in manifold learning, a pre-trained language model estimates a low-dimensional manifold of language in a high-dimensional embedding space, which means most points in the embedding space are not probabilistically modeled by the language model. We believe that placing too much trust in the distributional modeling ability of language models is not a good choice. Our method attempts to depict the attribute space with discrete sample points of attributed sentences and to make these discrete points, along with their coverage areas, compose the support set of our estimated distribution.
\section{Conclusion}
In this work, we present a distributional perspective for multi-aspect controllable text generation, with experimental results confirming the superiority of our model. Further observations on the 2D projection of the estimated attribute space show that our hypothesis about the attribute space is plausible. In the future, we can explore the correlation between different attribute combinations for more fine-grained control and capture the bias in datasets to eliminate or utilize it.
\section*{Limitations}
Our method has a certain dependence on the data since we need to estimate an attribute space. Therefore, it is difficult for our method to perform well in the setting of few-shot learning. However, this disadvantage is not that severe, because we only need single-aspect data, which is relatively sufficient in style transfer tasks. Another dependence of our method on data is that it is somewhat sensitive to biases in the data. When the semantic divergence of different aspects in training data is too large, our aspect gap loss, which aims to reduce the distance among the distributions of each aspect, will conflict with the sentence reconstruction loss. As a result, it may be hard to obtain a reliable intersection in the attribute space.
Computational resources also have an impact on our approach, as our aspect gap loss leverages a batch-level estimation for each aspect. Therefore, a larger batch size means a more accurate approximation, leaving fewer biases in the attribute space. An alternative strategy for smaller batches is to backpropagate the loss after accumulating enough distributional samples, which requires more training epochs.
\section*{Ethics Statement}
We are fully aware that text generation technology has the potential to be used maliciously to generate fake, toxic, or offensive content. However, after training on the Detoxification aspect, controllable text generation technology is a powerful weapon for combating hate speech and eliminating harmful information in pre-trained language models. In addition, our multi-aspect controllable text generation technology can take Detoxification as a default aspect when controlling other aspects. We believe it is meaningful and beneficial to advance research on controllable text generation.
\section*{Acknowledgements}
Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R\&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078 and the Major Key Project of PCL, PCL2021A06.
\normalem
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 6,623 |
Health Risks of E-cigarettes Emerge
Posted by DailyHealthAlerts.com in News & Articles
E-cigarette technology was first launched in China in 2003 and was later introduced to the rest of the world as a potentially less hazardous product for smokers. Although e-cigarettes were marketed as an alternative to conventional cigarettes, providing that 'nicotine fix' without lung-blackening tar and smoke, researchers aren't fully confident that vaping is a safe alternative to smoking.
Users and manufacturers alike, hoping for an alternative to conventional smoking, have warmly embraced vaping devices. Sadly, scientific research suggests that there is no such thing as a safe cigarette.
Several recent studies have shown that e-cigs may contain harmful particles that can affect lung tissue and cause illness.
Are E-Cigarettes a healthier smoking Alternative?
Early studies pointed out several promising benefits of vaping. In fact, some experts insisted that vaping devices have made it easier for smokers to quit smoking and be safe from potential health hazards such as lung cancer.
Cigarette smoke comprises about 4,000 chemicals, including formaldehyde, cyanide, arsenic, carbon monoxide, DDT and ammonia. There are 43 cancer-causing chemicals in cigarette smoke, whereas the e-juices used in electronic cigarettes generally contain only food-grade flavorings, glycerin and propylene glycol. None of these substances has shown any negative impact on health.
E-cigs are battery-powered devices that heat a liquid solution (e-liquid) made from artificial flavors and nicotine to create an aerosol, which replicates the physical sensation of smoking. The process is popularly dubbed as vaping.
Therefore, it can be safe to say that vaping is generally safer as it exposes smokers to fewer toxins. Scientists also conclude that vaping is gentler on respiratory passages and the lungs.
Although vaping is a complicated concept to understand, it is slowly replacing cigarettes among people who are looking for healthier smoking options. Many people perceive vaping as the same thing as smoking without knowing that vaping is actually less harmful than tobacco smoking.
Are they Completely Safe?
Modern electronic cigarettes have been in use for almost a decade, but they have reached immense popularity only recently. Unlike conventional cigarettes, e-cigs do not work by combusting dried tobacco leaves treated with almost 600 additives, 69 of which are carcinogenic.
E-cigarettes are quickly taking over conventional cigarettes. However, vaping devices have been less effective at helping people quit smoking completely. Although e-cigarettes increase the chances of quitting in the first month, the effect seems to dissipate after 3 to 6 months. This finding comes from a number of studies conducted by University of Toronto researcher Riyad al-Lehebi, in which it was noted that people who intend to quit smoking must consider other well-established options to quit for good.
Sophisticated e-cig devices enable smokers to modify the battery's voltage settings to regulate the heating intensity. When the solution becomes hotter, it strengthens the nicotine hit. Sadly, these higher temperatures also affect the propylene glycol and glycerin used as solvents within the liquid solution, converting them into carbonyl compounds such as acetaldehyde and formaldehyde.
A recent study discovered that when the e-cig voltage is increased from 3.2 V to 4.8 V, these solvents generate roughly the same quantity of formaldehyde as a conventional cigarette. Although our body produces formaldehyde as a byproduct of normal metabolism in the cells, it is considered to be carcinogenic when inhaled.
The study also noted that if e-cigs are used at a lower voltage, they might produce up to 800 times less formaldehyde than a traditional cigarette. However, this is not as safe as it may seem: the size of the vapor particles and the delivery method can still affect the lungs negatively and cause serious damage to the respiratory system.
This implies that even if a particle is not toxic in itself, its size alone can be enough to damage the lungs.
Even if vaping has become a better and safer smoking option, it is still too early to say that it is completely safe. Technological developments have improved production standards of the current vaping devices and e-cigarettes to reduce long-term hazard, but the technology has still a long way to go when it comes to completely safe smoking.
The mechanics of e-cigarettes may contribute to how much harm it can cause to your health. A study conducted at the University Of Alabama School Of Medicine discovered a connection between coil temperature and the production of harmful chemicals such as acrolein, acetaldehyde and formaldehyde in the e-cigarette. Since there are no configuration standards for e-cigarettes, the study suggests that this lack of consistency makes it hard to definitely evaluate the health effects of e-cigarettes.
Scientists have been testing the long-term effects of flavored e-cigarette liquids on the calcium in our lungs. They were surprised to note that not all flavors deliver the same effect. A study by the University of North Carolina at Chapel Hill found that 5 of the 13 flavors tested affected calcium in the lungs. These flavors included menthol tobacco, banana pudding and hot cinnamon candies.
Although e-cigarettes are less harmful than ordinary cigarettes, there are several concerns about the possible health hazards associated with them. The e-liquid contains toxins and chemicals other than nicotine, which may damage our lungs and bodies. According to the US Centers for Disease Control and Prevention (CDC), e-cigarettes have been identified as an emerging challenge for public health.
There have been warnings about the potential long-term risks (and benefits) that the use of e-cigarettes may bring. Extensive research work is required to know the exact outcomes of the components of an e-cigarette.
For instance, there are doubts over vapor ingredients accumulating in the lungs, upper airway and mouth, or being absorbed into the body. However, there have been no long-term studies on the health impact of e-cigarettes since the relatively new product has only been available in the mainstream market for just above a decade.
A vaper for five years and a little wild explorer at heart who lives to eat, Elena takes great pleasure in her travels all over the world, and even greater (guilty) pleasure in delectable dishes from exotic cafes in Turkey to swanky tapas houses in Malta. An avid cook and baker, she can be found in the kitchen whipping up dishes and trying out new recipes, when she isn't traveling or working at her actual day-job. She's currently working at Infinite Vapour E-liquid and E-juice as content writer.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,140 |
POLITICAL PARTIES ENGAGE IN DEBATE An Exercise In Democracy, APRC Conspicuously Absent
When citizens become conscious of their rights and decide to take their destiny in their hands they become sovereign.
What the UTG students have done is what every sovereign citizen would do: engage political leaders in a debate and demand answers from them on questions that are of concern to them. They invited both the ruling party and opposition parties, but the ruling party was not represented.
As a ruling party, the APRC is to be held accountable for their deeds in the past five years. They are expected to defend their 'achievements' under Vision 2016 and Vision 2020 in education, health, agriculture and other sectors as well as their defence of the liberty and dignity of Gambians. The opposition on the other hand should strive to convince the audience that they can do a better job with their superior policies, plans and programmes, and therefore deserve to replace the ruling party.
Sovereign citizens who know themselves, their country and the world would listen carefully to the divergent views of political leaders and make up their minds as to who will best serve their interest and the nation. To vote on the basis of sentiments, be it religion, race, ethnicity and so on, is to deceive oneself, because an election is about deciding who will best serve as trustee to take charge of the country.
December 1 is fast approaching, and on that fateful day Gambians will decide who will take charge of the affairs of this country, to take them out of their poverty and misery and ensure their liberty and dignity.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,048 |
{"url":"https:\/\/wikimili.com\/en\/John_von_Neumann","text":"John von Neumann\n\nLast updated\n\nJohn von Neumann\nJohn von Neumann in the 1940s\nBorn\nNeumann J\u00e1nos Lajos\n\nDecember 28, 1903\nDiedFebruary 8, 1957 (aged\u00a053)\nWashington, D.C., United States\nCitizenship Hungary\nUnited States\nAlma\u00a0mater P\u00e1zm\u00e1ny P\u00e9ter University\nETH Z\u00fcrich\nUniversity of G\u00f6ttingen\nKnown\u00a0for\nSpouse(s)Marietta K\u00f6vesi\nKlara Dan\nChildren Marina von Neumann Whitman\nAwards B\u00f4cher Memorial Prize (1938)\nNavy Distinguished Civilian Service Award (1946)\nMedal for Merit (1946)\nMedal of Freedom (1956)\nEnrico Fermi Award (1956)\nScientific career\nFields Mathematics, physics, statistics, economics, computer science\nInstitutions University of Berlin\nPrinceton University\nLos Alamos Laboratory\nThesis Az \u00e1ltal\u00e1nos halmazelm\u00e9let axiomatikus fel\u00e9p\u00edt\u00e9se (Axiomatic construction of general set theory)\u00a0(1925)\nDavid Hilbert\nDoctoral students Donald B. Gillies\nIsrael Halperin\nFriederich Mautner\nOther\u00a0notable students Paul Halmos\nClifford Hugh Dowker\nBenoit Mandelbrot [1]\nSignature\n\nJohn von Neumann (; Hungarian : Neumann J\u00e1nos Lajos, pronounced\u00a0; December 28, 1903\u00a0\u2013 February\u00a08, 1957) was a Hungarian-American mathematician, physicist, computer scientist, engineer and polymath. Von Neumann was generally regarded as the foremost mathematician of his time [2] and said to be \"the last representative of the great mathematicians\". [3] He integrated pure and applied sciences.\n\nVon Neumann made major contributions to many fields, including mathematics (foundations of mathematics, functional analysis, ergodic theory, group theory, representation theory, operator algebras, geometry, topology, and numerical analysis), physics (quantum mechanics, hydrodynamics, and quantum statistical mechanics), economics (game theory), computing (Von Neumann architecture, linear programming, self-replicating machines, stochastic computing), and statistics. He was a pioneer of the application of operator theory to quantum mechanics in the development of functional analysis, and a key figure in the development of game theory and the concepts of cellular automata, the universal constructor and the digital computer.\n\nVon Neumann published over 150 papers in his life: about 60 in pure mathematics, 60 in applied mathematics, 20 in physics, and the remainder on special mathematical subjects or non-mathematical ones. [4] His last work, an unfinished manuscript written while he was in the hospital, was later published in book form as The Computer and the Brain .\n\nHis analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a shortlist of facts about his life he submitted to the National Academy of Sciences, he wrote, \"The part of my work I consider most essential is that on quantum mechanics, which developed in G\u00f6ttingen in 1926, and subsequently in Berlin in 1927\u20131929. Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935\u20131939; on the ergodic theorem, Princeton, 1931\u20131932.\"[ citation needed ]\n\nDuring World War II, von Neumann worked on the Manhattan Project with theoretical physicist Edward Teller, mathematician Stanislaw Ulam and others, problem-solving key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. 
He developed the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon and coined the term \"kiloton\" (of TNT) as a measure of the explosive force generated. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission, and consulted for organizations including the United States Air Force, the Army's Ballistic Research Laboratory, the Armed Forces Special Weapons Project, and the Lawrence Livermore National Laboratory. As a Hungarian \u00e9migr\u00e9, concerned that the Soviets would achieve nuclear superiority, he designed and promoted the policy of mutually assured destruction to limit the arms race.\n\nEarly life and education\n\nFamily background\n\nVon Neumann was born Neumann J\u00e1nos Lajos to a wealthy, acculturated and non-observant Jewish family. In Hungarian the family name comes first, and his given names are equivalent to John Louis in English.\n\nVon Neumann was born in Budapest, Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. [5] [6] [7] He was the eldest of three brothers; his two younger siblings were Mih\u00e1ly (English: Michael von Neumann; 1907\u20131989) and Mikl\u00f3s (Nicholas von Neumann, 1911\u20132011). [8] His father, Neumann Miksa (Max von Neumann, 1873\u20131928) was a banker, who held a doctorate in law. He had moved to Budapest from P\u00e9cs at the end of the 1880s. [9] Miksa's father and grandfather were both born in Ond (now part of the town of Szerencs), Zempl\u00e9n County, northern Hungary. John's mother was Kann Margit (English: Margaret Kann); [10] her parents were Jakab Kann and Katalin Meisels of the Meisels family. [11] Three generations of the Kann family lived in spacious apartments above the Kann-Heller offices in Budapest; von Neumann's family occupied an 18-room apartment on the top floor. [12]\n\nOn February 20, 1913, Emperor Franz Joseph elevated John's father to the Hungarian nobility for his service to the Austro-Hungarian Empire. The Neumann family thus acquired the hereditary appellation Margittai, meaning \"of Margitta\" (today Marghita, Romania). The family had no connection with the town; the appellation was chosen in reference to Margaret, as was their chosen coat of arms depicting three marguerites. Neumann J\u00e1nos became margittai Neumann J\u00e1nos (John Neumann de Margitta), which he later changed to the German Johann von Neumann. [13]\n\nChild prodigy\n\nVon Neumann was a child prodigy. When he was six years old, he could divide two eight-digit numbers in his head [14] [15] and could converse in Ancient Greek. When the six-year-old von Neumann caught his mother staring aimlessly, he asked her, \"What are you calculating?\". [16]\n\nWhen they were young, governesses taught von Neumann, his brothers and his cousins. Von Neumann's father believed that knowledge of languages other than their native Hungarian was essential, so the children were tutored in English, French, German and Italian. [17] By the age of eight, von Neumann was familiar with differential and integral calculus, [18] but he was particularly interested in history. He read his way through Wilhelm Oncken's 46-volume world history series Allgemeine Geschichte in Einzeldarstellungen (General History in Monographs). [19] A copy was contained in a private library Max purchased. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor. 
[20]\n\nVon Neumann entered the Lutheran Fasori Evang\u00e9likus Gimn\u00e1zium in 1914. [21] Eugene Wigner was a year ahead of von Neumann at the Lutheran School and soon became his friend. [22] This was one of the best schools in Budapest and was part of a brilliant education system designed for the elite. Under the Hungarian system, children received all their education at the one gymnasium. The Hungarian school system produced a generation noted for intellectual achievement, which included Theodore von K\u00e1rm\u00e1n (born 1881), George de Hevesy (born 1885), Michael Polanyi (born 1891), Le\u00f3 Szil\u00e1rd (born 1898), Dennis Gabor (born 1900), Eugene Wigner (born 1902), Edward Teller (born 1908), and Paul Erd\u0151s (born 1913). [23] Collectively, they were sometimes known as \"The Martians\". [24]\n\nAlthough Von Neumann's father insisted von Neumann attend school at the grade level appropriate to his age, he agreed to hire private tutors to give von Neumann advanced instruction in those areas in which he had displayed an aptitude. At the age of 15, he began to study advanced calculus under the renowned analyst G\u00e1bor Szeg\u0151. [22] On their first meeting, Szeg\u0151 was so astounded with the boy's mathematical talent that he was brought to tears. [25] Some of von Neumann's instant solutions to the problems that Szeg\u0151 posed in calculus are sketched out on his father's stationery and are still on display at the von Neumann archive in Budapest. [22] By the age of 19, von Neumann had published two major mathematical papers, the second of which gave the modern definition of ordinal numbers, which superseded Georg Cantor's definition. [26] At the conclusion of his education at the gymnasium, von Neumann sat for and won the E\u00f6tv\u00f6s Prize, a national prize for mathematics. [27]\n\nUniversity studies\n\nAccording to his friend Theodore von K\u00e1rm\u00e1n, von Neumann's father wanted John to follow him into industry and thereby invest his time in a more financially useful endeavor than mathematics. In fact, his father asked von K\u00e1rm\u00e1n to persuade his son not to take mathematics as his major. [28] Von Neumann and his father decided that the best career path was to become a chemical engineer. This was not something that von Neumann had much knowledge of, so it was arranged for him to take a two-year, non-degree course in chemistry at the University of Berlin, after which he sat for the entrance exam to the prestigious ETH Zurich, [29] which he passed in September 1923. [30] At the same time, von Neumann also entered P\u00e1zm\u00e1ny P\u00e9ter University in Budapest, [31] as a Ph.D. candidate in mathematics. For his thesis, he chose to produce an axiomatization of Cantor's set theory. [32] [33] He graduated as a chemical engineer from ETH Zurich in 1926 (although Wigner says that von Neumann was never very attached to the subject of chemistry), [34] and passed his final examinations for his Ph.D. in mathematics simultaneously with his chemical engineering degree, of which Wigner wrote, \"Evidently a Ph.D. thesis and examination did not constitute an appreciable effort.\" [34] He then went to the University of G\u00f6ttingen on a grant from the Rockefeller Foundation to study mathematics under David Hilbert. [35]\n\nEarly career and private life\n\nVon Neumann's habilitation was completed on December 13, 1927, and he began to give lectures as a Privatdozent at the University of Berlin in 1928. 
[36] He was the youngest person ever elected Privatdozent in the university's history in any subject. [37] By the end of 1927, von Neumann had published 12 major papers in mathematics, and by the end of 1929, 32, a rate of nearly one major paper per month. [38] His powers of recall allowed him to quickly memorize the pages of telephone directories, and recite the names, addresses and numbers therein. [19] In 1929, he briefly became a Privatdozent at the University of Hamburg, where the prospects of becoming a tenured professor were better, [39] but in October of that year a better offer presented itself when he was invited to Princeton University. [40]\n\nOn New Year's Day in 1930, von Neumann married Marietta K\u00f6vesi, who had studied economics at Budapest University. [40] Von Neumann and Marietta had one child, a daughter, Marina, born in 1935. As of 2021 Marina is a distinguished professor emerita of business administration and public policy at the University of Michigan. [41] The couple divorced in 1937. In October 1938, von Neumann married Klara Dan, whom he had met during his last trips back to Budapest before the outbreak of World War II. [42]\n\nIn 1930, before marrying Marietta, von Neumann was baptized into the Catholic Church. [43] Von Neumann's father, Max, had died in 1929. None of the family had converted to Christianity while Max was alive, but all did afterward. [44]\n\nIn 1933, he was offered a lifetime professorship at the Institute for Advanced Study in New Jersey when that institution's plan to appoint Hermann Weyl fell through. [45] He remained a mathematics professor there until his death, although he had announced his intention to resign and become a professor at large at the University of California, Los Angeles. [46] His mother, brothers and in-laws followed von Neumann to the United States in 1939. [47] Von Neumann anglicized his first name to John, keeping the German-aristocratic surname von Neumann. His brothers changed theirs to \"Neumann\" and \"Vonneumann\". [13] Von Neumann became a naturalized citizen of the United States in 1937, and immediately tried to become a lieutenant in the United States Army's Officers Reserve Corps. He passed the exams easily but was rejected because of his age. [48] His prewar analysis of how France would stand up to Germany is often quoted: \"Oh, France won't matter.\" [49]\n\nKlara and John von Neumann were socially active within the local academic community. [50] His white clapboard house at 26 Westcott Road was one of Princeton's largest private residences. [51] He always wore formal suits. He once wore a three-piece pinstripe while riding down the Grand Canyon astride a mule. [52] Hilbert is reported to have asked, \"Pray, who is the candidate's tailor?\" at von Neumann's 1926 doctoral exam, as he had never seen such beautiful evening clothes. [53]\n\nVon Neumann held a lifelong passion for ancient history and was renowned for his historical knowledge. A professor of Byzantine history at Princeton once said that von Neumann had greater expertise in Byzantine history than he did. [54]\n\nVon Neumann liked to eat and drink; his wife, Klara, said that he could count everything except calories. He enjoyed Yiddish and \"off-color\" humor (especially limericks). [18] He was a non-smoker. [55] In Princeton, he received complaints for regularly playing extremely loud German march music on his phonograph, which distracted those in neighboring offices, including Albert Einstein, from their work. 
[56] Von Neumann did some of his best work in noisy, chaotic environments, and once admonished his wife for preparing a quiet study for him to work in. He never used it, preferring the couple's living room with its television playing loudly. [57] Despite being a notoriously bad driver, he enjoyed driving\u2014frequently while reading a book\u2014occasioning numerous arrests as well as accidents. When Cuthbert Hurd hired him as a consultant to IBM, Hurd often quietly paid the fines for his traffic tickets. [58]\n\nVon Neumann's closest friend in the United States was mathematician Stanislaw Ulam. A later friend of Ulam's, Gian-Carlo Rota, wrote, \"They would spend hours on end gossiping and giggling, swapping Jewish jokes, and drifting in and out of mathematical talk.\" When von Neumann was dying in the hospital, every time Ulam visited, he came prepared with a new collection of jokes to cheer him up. [59] Von Neumann believed that much of his mathematical thought occurred intuitively; he would often go to sleep with a problem unsolved and know the answer upon waking up. [57] Ulam noted that von Neumann's way of thinking might not be visual, but more aural. [60]\n\nMathematics\n\nSet theory\n\nThe axiomatization of mathematics, on the model of Euclid's Elements , had reached new levels of rigour and breadth at the end of the 19th century, particularly in arithmetic, thanks to the axiom schema of Richard Dedekind and Charles Sanders Peirce, and in geometry, thanks to Hilbert's axioms. [61] But at the beginning of the 20th century, efforts to base mathematics on naive set theory suffered a setback due to Russell's paradox (on the set of all sets that do not belong to themselves). [62] The problem of an adequate axiomatization of set theory was resolved implicitly about twenty years later by Ernst Zermelo and Abraham Fraenkel. Zermelo\u2013Fraenkel set theory provided a series of principles that allowed for the construction of the sets used in the everyday practice of mathematics, but did not explicitly exclude the possibility of the existence of a set that belongs to itself. In his doctoral thesis of 1925, von Neumann demonstrated two techniques to exclude such sets\u2014the axiom of foundation and the notion of class. [61]\n\nThe axiom of foundation proposed that every set can be constructed from the bottom up in an ordered succession of steps by way of the principles of Zermelo and Fraenkel. If one set belongs to another, then the first must necessarily come before the second in the succession. This excludes the possibility of a set belonging to itself. To demonstrate that the addition of this new axiom to the others did not produce contradictions, von Neumann introduced a method of demonstration called the method of inner models , which became an essential instrument in set theory. [61]\n\nThe second approach to the problem of sets belonging to themselves took as its base the notion of class, and defines a set as a class that belongs to other classes, while a proper class is defined as a class that does not belong to other classes. On the Zermelo\u2013Fraenkel approach, the axioms impede the construction of a set of all sets that do not belong to themselves. In contrast, on von Neumann's approach, the class of all sets that do not belong to themselves can be constructed, but it is a proper class, not a set. 
[61]\n\nOverall, von Neumann's major achievement in set theory was an \"axiomatization of set theory and (connected with that) elegant theory of the ordinal and cardinal numbers as well as the first strict formulation of principles of definitions by the transfinite induction\". [63]\n\nBuilding on the work of Felix Hausdorff, in 1924 Stefan Banach and Alfred Tarski proved that given a solid ball in 3\u2011dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets that can be reassembled together in a different way to yield two identical copies of the original ball. Banach and Tarski proved that, using isometric transformations, the result of taking apart and reassembling a two-dimensional figure would necessarily have the same area as the original. This would make creating two unit squares out of one impossible. But in a 1929 paper, [64] von Neumann proved that paradoxical decompositions could use a group of transformations that include as a subgroup a free group with two generators. The group of area-preserving transformations contains such subgroups, and this opens the possibility of performing paradoxical decompositions using these subgroups. The class of groups von Neumann isolated in his work on Banach\u2013Tarski decompositions was very important in many areas of mathematics, including von Neumann's own later work in measure theory (see below).\n\nProof theory\n\nWith the aforementioned contributions of von Neumann to sets, the axiomatic system of the theory of sets avoided the contradictions of earlier systems and became usable as a foundation for mathematics, despite the lack of a proof of its consistency. The next question was whether it provided definitive answers to all mathematical questions that could be posed in it, or whether it might be improved by adding stronger axioms that could be used to prove a broader class of theorems.\n\nBuilding on the work of Ackermann, von Neumann began attempting to prove (using the finistic methods of Hilbert's school) the consistency of first-order arithmetic. He succeeded in proving the consistency of a fragment of arithmetic of natural numbers (through the use of restrictions on induction). [65] He continued looking for a more general proof of the consistency of classical mathematics using methods from proof theory. [66]\n\nA strongly negative answer to whether it was definitive arrived in September 1930 at the historic Second Conference on the Epistemology of the Exact Sciences of K\u00f6nigsberg, in which Kurt G\u00f6del announced his first theorem of incompleteness: the usual axiomatic systems are incomplete, in the sense that they cannot prove every truth expressible in their language. Moreover, every consistent extension of these systems necessarily remains incomplete. [67]\n\nLess than a month later, von Neumann, who had participated in the Conference, communicated to G\u00f6del an interesting consequence of his theorem: that the usual axiomatic systems are unable to demonstrate their own consistency. [67] G\u00f6del had already discovered this consequence, now known as his second incompleteness theorem, and sent von Neumann a preprint of his article containing both theorems. [68] Von Neumann acknowledged G\u00f6del's priority in his next letter. [69] He never thought much of \"the American system of claiming personal priority for everything.\" [70] However von Neumann's method of proof differed from G\u00f6del's, as his used polynomials to explain consistency. 
[71] [72] With this discovery, von Neumann ceased work in mathematical logic and foundations of mathematics and instead spent time on problems connected with applications. [73]\n\nErgodic theory\n\nIn a series of papers published in 1932, von Neumann made foundational contributions to ergodic theory, a branch of mathematics that involves the states of dynamical systems with an invariant measure. [74] Of the 1932 papers on ergodic theory, Paul Halmos wrote that even \"if von Neumann had never done anything else, they would have been sufficient to guarantee him mathematical immortality\". [75] By then von Neumann had already written his articles on operator theory, and the application of this work was instrumental in the von Neumann mean ergodic theorem. [75]\n\nMeasure theory\n\nIn measure theory, the \"problem of measure\" for an n-dimensional Euclidean space Rn may be stated as: \"does there exist a positive, normalized, invariant, and additive set function on the class of all subsets of Rn?\" [75] The work of Felix Hausdorff and Stefan Banach had implied that the problem of measure has a positive solution if n = 1 or n = 2 and a negative solution (because of the Banach\u2013Tarski paradox) in all other cases. Von Neumann's work argued that the \"problem is essentially group-theoretic in character\": [75] the existence of a measure could be determined by looking at the properties of the transformation group of the given space. The positive solution for spaces of dimension at most two, and the negative solution for higher dimensions, comes from the fact that the Euclidean group is a solvable group for dimension at most two, and is not solvable for higher dimensions. \"Thus, according to von Neumann, it is the change of group that makes a difference, not the change of space.\" [75]\n\nIn a number of von Neumann's papers, the methods of argument he employed are considered even more significant than the results. In anticipation of his later study of dimension theory in algebras of operators, von Neumann used results on equivalence by finite decomposition, and reformulated the problem of measure in terms of functions. [75] A major contribution von Neumann made to measure theory was the result of a paper written to answer a question of Haar regarding whether there existed an algebra of all bounded functions on the real number line such that they form \"a complete system of representatives of the classes of almost everywhere-equal measurable bounded functions\". [76] He proved this in the positive, and in later papers with Stone discussed various generalizations and algebraic aspects of this problem. [77] He also proved by new methods the existence of disintegrations for various general types of measures. Von Neumann also gave a new proof on the uniqueness of Haar measures by using the mean values of functions, although this method only worked for compact groups. [76] He had to create entirely new techniques to apply this to locally compact groups. [75] He also gave a new proof for the Radon\u2013Nikodym theorem. [78] His lecture notes on measure theory at the Institute for Advanced Study were an important source for knowledge on the field in America at the time, and were later published. [75] [79]\n\nTopological groups\n\nUsing his previous work on measure theory von Neumann made several contributions to the theory of topological groups, beginning with a paper on almost periodic functions on groups, where von Neumann extended Bohr's theory of almost periodic functions to arbitrary groups. 
[80] He continued this work with another paper in conjunction with Bochner that improved the theory of almost periodicity to include functions that took on elements of linear spaces as values rather than numbers. [81] In 1938, he was awarded the B\u00f4cher Memorial Prize for his work in analysis in relation to these papers. [82] [43]\n\nIn a 1933 paper, he used the newly discovered Haar measure in the solution of Hilbert's fifth problem for the case of compact groups. [75] [83] The basic idea behind this was discovered several years earlier when von Neumann published a paper on the analytic properties of groups of linear transformations and found that closed subgroups of a general linear group are Lie groups. [84] This was later extended by Cartan to arbitrary Lie groups in the form of the closed-subgroup theorem. [43] [76]\n\nFunctional analysis\n\nVon Neumann was the first one to come up with an \u201cabstract\u201d Hilbert space in a formal and axiomatic fashion. It was defined as a complex vector space with a hermitian scalar product, with the corresponding norm being both separable and complete. He continued with the development of the spectral theory of operators in Hilbert space in 3 seminal papers between 1929 and 1932. [85] For twenty years von Neumann was considered the 'undisputed master' of this area. [76] These developments were primarily prompted by needs in quantum mechanics where von Neumann realized the need to extend the spectral theory of Hermitian operators from the bounded to the unbounded case. [86] Other major achievements in these papers include a complete elucidation of spectral theory for normal operators, a generalisation of Riesz\u2019s presentation of Hilbert\u2019s spectral theorems at the time, and the discovery of hermitian operators in a Hilbert space, as distinct from self-adjoint operators, which enabled him to give a description of all hermitian operators which extend a given hermitian operator. In addition he wrote a paper detailing how the usage of infinite matrices, common at the time in spectral theory, was inadequate as a representation for hermitian operators. His work on operator theory lead to his most profound invention in pure mathematics, the study of von Neumann algebras and in general of operator algebras. [87]\n\nIn other work in functional analysis von Neumann was also the first mathematician to apply new topological ideas from Hausdorff to Hilbert spaces. He also gave the first general definition of locally convex spaces. [88] His later work on rings of operators lead to him revisiting his earlier work on spectral theory and providing a new way of working through the geometric content of the spectral theory by the use of direct integrals of Hilbert spaces. [89]\n\nOperator algebras\n\nVon Neumann founded the study of rings of operators, through the von Neumann algebras. A von Neumann algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. [90] The von Neumann bicommutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as being equal to the bicommutant. [91] After elucidating the study of the commutative algebra case, von Neumann embarked in 1936, with the partial collaboration of F.J. Murray, on the noncommutative case, the general study of factors classification of von Neumann algebras. 
The six major papers in which he developed that theory between 1936 and 1940 \"rank among the masterpieces of analysis in the twentieth century\". [3] The direct integral was later introduced in 1949 by John von Neumann for his work on operator theory. [92] His work here lead on to the next two major topics.\n\nGeometry\n\nVon Neumann founded the field of continuous geometry. [93] It followed his path-breaking work on rings of operators. In mathematics, continuous geometry is a substitute of complex projective geometry, where instead of the dimension of a subspace being in a discrete set 0, 1, ..., n, it can be an element of the unit interval [0,1]. Earlier, Menger and Birkhoff had axiomatized complex projective geometry in terms of the properties of its lattice of linear subspaces. Von Neumann, following his work on rings of operators, weakened those axioms to describe a broader class of lattices, the continuous geometries. While the dimensions of the subspaces of projective geometries are a discrete set (the non-negative integers), the dimensions of the elements of a continuous geometry can range continuously across the unit interval [0,1]. Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous range of dimensions, and the first example of a continuous geometry other than projective space was the projections of the hyperfinite type II factor. [94] [95]\n\nLattice theory\n\nBetween 1937 and 1939, von Neumann worked on lattice theory, the theory of partially ordered sets in which every two elements have a greatest lower bound and a least upper bound. Garrett Birkhoff writes: \"John von Neumann's brilliant mind blazed over lattice theory like a meteor\". [96]\n\nVon Neumann provided an abstract exploration of dimension in completed complemented modular topological lattices (properties that arise in the lattices of subspaces of inner product spaces): \"Dimension is determined, up to a positive linear transformation, by the following two properties. It is conserved by perspective mappings (\"perspectivities\") and ordered by inclusion. The deepest part of the proof concerns the equivalence of perspectivity with \"projectivity by decomposition\"\u2014of which a corollary is the transitivity of perspectivity.\" [96]\n\nAdditionally, \"[I]n the general case, von Neumann proved the following basic representation theorem. Any complemented modular lattice L having a \"basis\" of n \u2265 4 pairwise perspective elements, is isomorphic with the lattice \u211b(R) of all principal right-ideals of a suitable regular ring R. This conclusion is the culmination of 140 pages of brilliant and incisive algebra involving entirely novel axioms. Anyone wishing to get an unforgettable impression of the razor edge of von Neumann's mind, need merely try to pursue this chain of exact reasoning for himself\u2014realizing that often five pages of it were written down before breakfast, seated at a living room writing-table in a bathrobe.\" [96]\n\nMathematical formulation of quantum mechanics\n\nVon Neumann was the first to establish a rigorous mathematical framework for quantum mechanics, known as the Dirac\u2013von Neumann axioms, in his 1932 work Mathematical Foundations of Quantum Mechanics . [97] After having completed the axiomatization of set theory, he began to confront the axiomatization of quantum mechanics. 
He realized in 1926 that a state of a quantum system could be represented by a point in a (complex) Hilbert space that, in general, could be infinite-dimensional even for a single particle. In this formalism of quantum mechanics, observable quantities such as position or momentum are represented as linear operators acting on the Hilbert space associated with the quantum system. [98]\n\nThe physics of quantum mechanics was thereby reduced to the mathematics of Hilbert spaces and linear operators acting on them. For example, the uncertainty principle, according to which the determination of the position of a particle prevents the determination of its momentum and vice versa, is translated into the non-commutativity of the two corresponding operators. This new mathematical formulation included as special cases the formulations of both Heisenberg and Schr\u00f6dinger. [98] When Heisenberg was informed von Neumann had clarified the difference between an unbounded operator that was a self-adjoint operator and one that was merely symmetric, Heisenberg replied \"Eh? What is the difference?\" [99]\n\nVon Neumann's abstract treatment permitted him also to confront the foundational issue of determinism versus non-determinism, and in the book he presented a proof that the statistical results of quantum mechanics could not possibly be averages of an underlying set of determined \"hidden variables,\" as in classical statistical mechanics. In 1935, Grete Hermann published a paper arguing that the proof contained a conceptual error and was therefore invalid. [100] Hermann's work was largely ignored until after John S. Bell made essentially the same argument in 1966. [101] In 2010, Jeffrey Bub argued that Bell had misconstrued von Neumann's proof, and pointed out that the proof, though not valid for all hidden variable theories, does rule out a well-defined and important subset. Bub also suggests that von Neumann was aware of this limitation and did not claim that his proof completely ruled out hidden variable theories. [102] The validity of Bub's argument is, in turn, disputed. [103] In any case, Gleason's theorem of 1957 fills the gaps in von Neumann's approach.\n\nVon Neumann's proof inaugurated a line of research that ultimately led, through Bell's theorem and the experiments of Alain Aspect in 1982, to the demonstration that quantum physics either requires a notion of reality substantially different from that of classical physics, or must include nonlocality in apparent violation of special relativity. [104]\n\nIn a chapter of The Mathematical Foundations of Quantum Mechanics, von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the universal wave function. Since something \"outside the calculation\" was needed to collapse the wave function, von Neumann concluded that the collapse was caused by the consciousness of the experimenter. He argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the \"subjective consciousness\" of the human observer. Although this view was accepted by Eugene Wigner, [105] the Von Neumann\u2013Wigner interpretation never gained acceptance among the majority of physicists. 
[106] The Von Neumann\u2013Wigner interpretation has been summarized as follows: [107]\n\nThe rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse. [107]\n\nThough theories of quantum mechanics continue to evolve, there is a basic framework for the mathematical formalism of problems in quantum mechanics underlying most approaches that can be traced back to the mathematical formalisms and techniques first used by von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. [97]\n\nVon Neumann entropy\n\nVon Neumann entropy is extensively used in different forms (conditional entropy, relative entropy, etc.) in the framework of quantum information theory. [108] Entanglement measures are based upon some quantity directly related to the von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix ${\\displaystyle \\rho }$, it is given by ${\\displaystyle S(\\rho )=-\\operatorname {Tr} (\\rho \\ln \\rho ).\\,}$ Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and conditional quantum entropy.\n\nQuantum mutual information\n\nQuantum information theory is largely concerned with the interpretation and uses of von Neumann entropy. The von Neumann entropy is the cornerstone in the development of quantum information theory, while the Shannon entropy applies to classical information theory. This is considered a historical anomaly, as Shannon entropy might have been expected to be discovered before Von Neumann entropy, given the latter's more widespread application to quantum information theory. But Von Neumann discovered von Neumann entropy first, and applied it to questions of statistical physics. Decades later, Shannon developed an information-theoretic formula for use in classical information theory, and asked von Neumann what to call it. Von Neumann said to call it Shannon entropy, as it was a special case of von Neumann entropy. [109]\n\nDensity matrix\n\nThe formalism of density operators and matrices was introduced by von Neumann [110] in 1927 and independently, but less systematically by Lev Landau [111] and Felix Bloch [112] in 1927 and 1946 respectively. The density matrix is an alternative way to represent the state of a quantum system, which could otherwise be represented using the wavefunction. The density matrix allows the solution of certain time-dependent problems in quantum mechanics.\n\nVon Neumann measurement scheme\n\nThe von Neumann measurement scheme, the ancestor of quantum decoherence theory, represents measurements projectively by taking into account the measuring apparatus which is also treated as a quantum object. The 'projective measurement' scheme introduced by von Neumann led to the development of quantum decoherence theories. [113] [114]\n\nQuantum logic\n\nVon Neumann first proposed a quantum logic in his 1932 treatise Mathematical Foundations of Quantum Mechanics , where he noted that projections on a Hilbert space can be viewed as propositions about physical observables. 
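To make the entropy formula quoted above concrete before the article turns to quantum logic, here is a minimal numerical sketch, assuming only NumPy: S(ρ) is obtained from the eigenvalues of the density matrix, and the two example states are hypothetical, chosen purely for illustration.

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho); for a density matrix (Hermitian, positive
    # semidefinite, unit trace) this equals -sum_i p_i ln p_i over the
    # eigenvalues p_i, with the convention 0 ln 0 = 0.
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]              # drop (numerically) zero eigenvalues
    return float(-np.sum(p * np.log(p)))

pure_state = np.array([[1.0, 0.0],
                       [0.0, 0.0]])   # pure state: entropy 0
maximally_mixed = np.eye(2) / 2       # maximally mixed qubit: entropy ln 2

print(von_neumann_entropy(pure_state))      # ~0.0
print(von_neumann_entropy(maximally_mixed)) # ~0.6931 (= ln 2)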
The field of quantum logic was subsequently inaugurated, in a famous paper of 1936 by von Neumann and Garrett Birkhoff, the first work ever to introduce quantum logics, [115] wherein von Neumann and Birkhoff first proved that quantum mechanics requires a propositional calculus substantially different from all classical logics and rigorously isolated a new algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic was first outlined in a short section in von Neumann's 1932 work, but in 1936, the need for the new propositional calculus was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are polarized perpendicularly (e.g., horizontally and vertically), and therefore, a fortiori , it cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession, but if the third filter is added between the other two, the photons will indeed pass through. This experimental fact is translatable into logic as the non-commutativity of conjunction ${\\displaystyle (A\\land B)\\neq (B\\land A)}$. It was also demonstrated that the laws of distribution of classical logic, ${\\displaystyle P\\lor (Q\\land R)=(P\\lor Q)\\land (P\\lor R)}$ and ${\\displaystyle P\\land (Q\\lor R)=(P\\land Q)\\lor (P\\land R)}$, are not valid for quantum theory. [116]\n\nThe reason for this is that a quantum disjunction, unlike the case for classical disjunction, can be true even when both of the disjuncts are false and this is in turn attributable to the fact that it is frequently the case in quantum mechanics that a pair of alternatives are semantically determinate, while each of its members is necessarily indeterminate. This latter property can be illustrated by a simple example. Suppose we are dealing with particles (such as electrons) of semi-integral spin (spin angular momentum) for which there are only two possible values: positive or negative. Then, a principle of indetermination establishes that the spin, relative to two different directions (e.g., x and y) results in a pair of incompatible quantities. Suppose that the state \u0278 of a certain electron verifies the proposition \"the spin of the electron in the x direction is positive.\" By the principle of indeterminacy, the value of the spin in the direction y will be completely indeterminate for \u0278. Hence, \u0278 can verify neither the proposition \"the spin in the direction of y is positive\" nor the proposition \"the spin in the direction of y is negative.\" Nevertheless, the disjunction of the propositions \"the spin in the direction of y is positive or the spin in the direction of y is negative\" must be true for \u0278. In the case of distribution, it is therefore possible to have a situation in which ${\\displaystyle A\\land (B\\lor C)=A\\land 1=A}$, while ${\\displaystyle (A\\land B)\\lor (A\\land C)=0\\lor 0=0}$. [116]\n\nAs Hilary Putnam writes, von Neumann replaced classical logic with a logic constructed in orthomodular lattices (isomorphic to the lattice of subspaces of the Hilbert space of a given physical system). [117]\n\nGame theory\n\nVon Neumann founded the field of game theory as a mathematical discipline. [118] He proved his minimax theorem in 1928. 
It establishes that in zero-sum games with perfect information (i.e., in which players know at each time all moves that have taken place so far), there exists a pair of strategies for both players that allows each to minimize his maximum losses. When examining every possible strategy, a player must consider all the possible responses of his adversary. The player then plays out the strategy that will result in the minimization of his maximum loss. [119]\n\nSuch strategies, which minimize the maximum loss for each player, are called optimal. Von Neumann showed that their minimaxes are equal (in absolute value) and contrary (in sign). He improved and extended the minimax theorem to include games involving imperfect information and games with more than two players, publishing this result in his 1944 Theory of Games and Economic Behavior , written with Oskar Morgenstern. Morgenstern wrote a paper on game theory and thought he would show it to von Neumann because of his interest in the subject. He read it and said to Morgenstern that he should put more in it. This was repeated a couple of times, and then von Neumann became a coauthor and the paper became 100 pages long. Then it became a book. The public interest in this work was such that The New York Times ran a front-page story. [120] In this book, von Neumann declared that economic theory needed to use functional analysis, especially convex sets and the topological fixed-point theorem, rather than the traditional differential calculus, because the maximum-operator did not preserve differentiable functions. [118]\n\nIndependently, Leonid Kantorovich's functional analytic work on mathematical economics also focused attention on optimization theory, non-differentiability, and vector lattices. Von Neumann's functional-analytic techniques\u2014the use of duality pairings of real vector spaces to represent prices and quantities, the use of supporting and separating hyperplanes and convex sets, and fixed-point theory\u2014have been the primary tools of mathematical economics ever since. [121]\n\nMathematical economics\n\nVon Neumann raised the intellectual and mathematical level of economics in several influential publications. For his model of an expanding economy, he proved the existence and uniqueness of an equilibrium using his generalization of the Brouwer fixed-point theorem. [118] Von Neumann's model of an expanding economy considered the matrix pencil \u00a0A\u00a0\u00a0\u03bbB with nonnegative matrices\u00a0A and B; von Neumann sought probability vectors \u00a0p and\u00a0q and a positive number\u00a0\u03bb that would solve the complementarity equation\n\n${\\displaystyle p^{T}(A-\\lambda B)q=0}$\n\nalong with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the \"intensity\" at which the production process would run. The unique solution \u03bb represents the growth factor which is 1 plus the rate of growth of the economy; the rate of growth equals the interest rate. [122] [123]\n\nVon Neumann's results have been viewed as a special case of linear programming, where his model uses only nonnegative matrices. The study of his model of an expanding economy continues to interest mathematical economists with interests in computational economics. 
[124] [125] [126] This paper has been called the greatest paper in mathematical economics by several authors, who recognized its introduction of fixed-point theorems, linear inequalities, complementary slackness, and saddlepoint duality. In the proceedings of a conference on von Neumann's growth model, Paul Samuelson said that many mathematicians had developed methods useful to economists, but that von Neumann was unique in having made significant contributions to economic theory itself. [127]\n\nVon Neumann's famous 9-page paper started life as a talk at Princeton and then became a paper in German that was eventually translated into English. His interest in economics that led to that paper began while he was lecturing at Berlin in 1928 and 1929. He spent his summers back home in Budapest, as did the economist Nicholas Kaldor, and they hit it off. Kaldor recommended that von Neumann read a book by the mathematical economist L\u00e9on Walras. Von Neumann found some faults in the book and corrected them\u2013for example, replacing equations by inequalities. He noticed that Walras's General Equilibrium Theory and Walras's Law, which led to systems of simultaneous linear equations, could produce the absurd result that profit could be maximized by producing and selling a negative quantity of a product. He replaced the equations by inequalities, introduced dynamic equilibria, among other things, and eventually produced the paper. [128]\n\nLinear programming\n\nBuilding on his results on matrix games and on his model of an expanding economy, von Neumann invented the theory of duality in linear programming when George Dantzig described his work in a few minutes, and an impatient von Neumann asked him to get to the point. Dantzig then listened dumbfounded while von Neumann provided an hourlong lecture on convex sets, fixed-point theory, and duality, conjecturing the equivalence between matrix games and linear programming. [129]\n\nLater, von Neumann suggested a new method of linear programming, using the homogeneous linear system of Paul Gordan (1873), which was later popularized by Karmarkar's algorithm. Von Neumann's method used a pivoting algorithm between simplices, with the pivoting decision determined by a nonnegative least squares subproblem with a convexity constraint (projecting the zero-vector onto the convex hull of the active simplex). Von Neumann's algorithm was the first interior point method of linear programming. [130]\n\nMathematical statistics\n\nVon Neumann made fundamental contributions to mathematical statistics. In 1941, he derived the exact distribution of the ratio of the mean square of successive differences to the sample variance for independent and identically normally distributed variables. [131] This ratio was applied to the residuals from regression models and is commonly known as the Durbin\u2013Watson statistic [132] for testing the null hypothesis that the errors are serially independent against the alternative that they follow a stationary first order autoregression. [132]\n\nSubsequently, Denis Sargan and Alok Bhargava extended the results for testing if the errors on a regression model follow a Gaussian random walk (i.e., possess a unit root) against the alternative that they are a stationary first order autoregression. 
[133]\n\nFluid dynamics\n\nVon Neumann made fundamental contributions in the field of fluid dynamics.\n\nVon Neumann's contributions to fluid dynamics included his discovery of the classic flow solution to blast waves, [134] and the co-discovery (independently of Yakov Borisovich Zel'dovich and Werner D\u00f6ring) of the ZND detonation model of explosives. [135] During the 1930s, von Neumann became an authority on the mathematics of shaped charges. [136]\n\nLater with Robert D. Richtmyer, von Neumann developed an algorithm defining artificial viscosity that improved the understanding of shock waves. When computers solved hydrodynamic or aerodynamic problems, they tried to put too many computational grid points at regions of sharp discontinuity (shock waves). The mathematics of artificial viscosity smoothed the shock transition without sacrificing basic physics. [137]\n\nVon Neumann soon applied computer modelling to the field, developing software for his ballistics research. During WW2, he arrived one day at the office of R.H. Kent, the Director of the US Army's Ballistic Research Laboratory, with a computer program he had created for calculating a one-dimensional model of 100 molecules to simulate a shock wave. Von Neumann then gave a seminar on his computer program to an audience which included his friend Theodore von K\u00e1rm\u00e1n. After von Neumann had finished, von K\u00e1rm\u00e1n said \"Well, Johnny, that's very interesting. Of course you realize Lagrange also used digital models to simulate continuum mechanics.\" It was evident from von Neumann's face, that he had been unaware of Lagrange's M\u00e9canique analytique. [138]\n\nMastery of mathematics\n\nStan Ulam, who knew von Neumann well, described his mastery of mathematics this way: \"Most mathematicians know one method. For example, Norbert Wiener had mastered Fourier transforms. Some mathematicians have mastered two methods and might really impress someone who knows only one of them. John von Neumann had mastered three methods.\" He went on to explain that the three methods were:\n\n1. A facility with the symbolic manipulation of linear operators;\n2. An intuitive feeling for the logical structure of any new mathematical theory;\n3. An intuitive feeling for the combinatorial superstructure of new theories. [139]\n\nEdward Teller wrote that \"Nobody knows all science, not even von Neumann did. But as for mathematics, he contributed to every part of it except number theory and topology. That is, I think, something unique.\" [140]\n\nVon Neumann was asked to write an essay for the layman describing what mathematics is, and produced a beautiful analysis. He explained that mathematics straddles the world between the empirical and logical, arguing that geometry was originally empirical, but Euclid constructed a logical, deductive theory. However, he argued, that there is always the danger of straying too far from the real world and becoming irrelevant sophistry. [141] [142] [143]\n\nNuclear weapons\n\nManhattan Project\n\nBeginning in the late 1930s, von Neumann developed an expertise in explosions\u2014phenomena that are difficult to model mathematically. During this period, von Neumann was the leading authority of the mathematics of shaped charges. This led him to a large number of military consultancies, primarily for the Navy, which in turn led to his involvement in the Manhattan Project. 
The involvement included frequent trips by train to the project's secret research facilities at the Los Alamos Laboratory in a remote part of New Mexico. [31]\n\nVon Neumann made his principal contribution to the atomic bomb in the concept and design of the explosive lenses that were needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki. While von Neumann did not originate the \"implosion\" concept, he was one of its most persistent proponents, encouraging its continued development against the instincts of many of his colleagues, who felt such a design to be unworkable. He also eventually came up with the idea of using more powerful shaped charges and less fissionable material to greatly increase the speed of \"assembly\". [144]\n\nWhen it turned out that there would not be enough uranium-235 to make more than one bomb, the implosive lens project was greatly expanded and von Neumann's idea was implemented. Implosion was the only method that could be used with the plutonium-239 that was available from the Hanford Site. [145] He established the design of the explosive lenses required, but there remained concerns about \"edge effects\" and imperfections in the explosives. [146] His calculations showed that implosion would work if it did not depart by more than 5% from spherical symmetry. [147] After a series of failed attempts with models, this was achieved by George Kistiakowsky, and the construction of the Trinity bomb was completed in July 1945. [148]\n\nIn a visit to Los Alamos in September 1944, von Neumann showed that the pressure increase from explosion shock wave reflection from solid objects was greater than previously believed if the angle of incidence of the shock wave was between 90\u00b0 and some limiting angle. As a result, it was determined that the effectiveness of an atomic bomb would be enhanced with detonation some kilometers above the target, rather than at ground level. [149] [150]\n\nVon Neumann, four other scientists, and various military personnel were included in the target selection committee that was responsible for choosing the Japanese cities of Hiroshima and Nagasaki as the first targets of the atomic bomb. Von Neumann oversaw computations related to the expected size of the bomb blasts, estimated death tolls, and the distance above the ground at which the bombs should be detonated for optimum shock wave propagation and thus maximum effect. The cultural capital Kyoto, which had been spared the bombing inflicted upon militarily significant cities, was von Neumann's first choice, [151] a selection seconded by Manhattan Project leader General Leslie Groves. However, this target was dismissed by Secretary of War Henry L. Stimson. [152]\n\nOn July 16, 1945, von Neumann and numerous other Manhattan Project personnel were eyewitnesses to the first test of an atomic bomb detonation, which was code-named Trinity. The event was conducted as a test of the implosion method device, at the bombing range near Alamogordo Army Airfield, 35 miles (56\u00a0km) southeast of Socorro, New Mexico. Based on his observation alone, von Neumann estimated the test had resulted in a blast equivalent to 5 kilotons of TNT (21\u00a0 TJ ) but Enrico Fermi produced a more accurate estimate of 10 kilotons by dropping scraps of torn-up paper as the shock wave passed his location and watching how far they scattered. The actual power of the explosion had been between 20 and 22 kilotons. 
[153] It was in von Neumann's 1944 papers that the expression \"kilotons\" appeared for the first time. [154] After the war, Robert Oppenheimer remarked that the physicists involved in the Manhattan project had \"known sin\". Von Neumann's response was that \"sometimes someone confesses a sin in order to take credit for it.\" [155]\n\nVon Neumann continued unperturbed in his work and became, along with Edward Teller, one of those who sustained the hydrogen bomb project. He collaborated with Klaus Fuchs on further development of the bomb, and in 1946 the two filed a secret patent on \"Improvement in Methods and Means for Utilizing Nuclear Energy\", which outlined a scheme for using a fission bomb to compress fusion fuel to initiate nuclear fusion. [156] The Fuchs\u2013von Neumann patent used radiation implosion, but not in the same way as is used in what became the final hydrogen bomb design, the Teller\u2013Ulam design. Their work was, however, incorporated into the \"George\" shot of Operation Greenhouse, which was instructive in testing out concepts that went into the final design. [157] The Fuchs\u2013von Neumann work was passed on to the Soviet Union by Fuchs as part of his nuclear espionage, but it was not used in the Soviets' own, independent development of the Teller\u2013Ulam design. The historian Jeremy Bernstein has pointed out that ironically, \"John von Neumann and Klaus Fuchs, produced a brilliant invention in 1946 that could have changed the whole course of the development of the hydrogen bomb, but was not fully understood until after the bomb had been successfully made.\" [157]\n\nFor his wartime services, von Neumann was awarded the Navy Distinguished Civilian Service Award in July 1946, and the Medal for Merit in October 1946. [158]\n\nAtomic Energy Commission\n\nIn 1950, von Neumann became a consultant to the Weapons Systems Evaluation Group (WSEG), [159] whose function was to advise the Joint Chiefs of Staff and the United States Secretary of Defense on the development and use of new technologies. [160] He also became an adviser to the Armed Forces Special Weapons Project (AFSWP), which was responsible for the military aspects on nuclear weapons. Over the following two years, he became a consultant to the Central Intelligence Agency (CIA), a member of the influential General Advisory Committee of the Atomic Energy Commission, a consultant to the newly established Lawrence Livermore National Laboratory, and a member of the Scientific Advisory Group of the United States Air Force. [159]\n\nIn 1955, von Neumann became a commissioner of the AEC. He accepted this position and used it to further the production of compact hydrogen bombs suitable for Intercontinental ballistic missile (ICBM) delivery. He involved himself in correcting the severe shortage of tritium and lithium 6 needed for these compact weapons, and he argued against settling for the intermediate-range missiles that the Army wanted. He was adamant that H-bombs delivered into the heart of enemy territory by an ICBM would be the most effective weapon possible, and that the relative inaccuracy of the missile wouldn't be a problem with an H-bomb. He said the Russians would probably be building a similar weapon system, which turned out to be the case. 
[161] [162] Despite his disagreement with Oppenheimer over the need for a crash program to develop the hydrogen bomb, he testified on the latter's behalf at the 1954 Oppenheimer security hearing, at which he asserted that Oppenheimer was loyal, and praised him for his helpfulness once the program went ahead. [18]\n\nShortly before his death from cancer, von Neumann headed the United States government's top secret ICBM committee, which would sometimes meet in his home. Its purpose was to decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon. Von Neumann had long argued that while the technical obstacles were sizable, they could be overcome in time. The SM-65 Atlas passed its first fully functional test in 1959, two years after his death. The feasibility of an ICBM owed as much to improved, smaller warheads as it did to developments in rocketry, and his understanding of the former made his advice invaluable. [163]\n\nMutual assured destruction\n\nVon Neumann is credited with developing the equilibrium strategy of mutual assured destruction (MAD). He also \"moved heaven and earth\" to bring MAD about. His goal was to quickly develop ICBMs and the compact hydrogen bombs that they could deliver to the USSR, and he knew the Soviets were doing similar work because the CIA interviewed German rocket scientists who were allowed to return to Germany, and von Neumann had planted a dozen technical people in the CIA. The Soviets considered that bombers would soon be vulnerable, and they shared von Neumann's view that an H-bomb in an ICBM was the ne plus ultra of weapons; they believed that whoever had superiority in these weapons would take over the world, without necessarily using them. [164] He was afraid of a \"missile gap\" and took several more steps to achieve his goal of keeping up with the Soviets:\n\n\u2022 He modified the ENIAC by making it programmable and then wrote programs for it to do the H-bomb calculations verifying that the Teller-Ulam design was feasible and to develop it further.\n\u2022 Through the Atomic Energy Commission, he promoted the development of a compact H-bomb that would fit in an ICBM.\n\u2022 He personally interceded to speed up the production of lithium-6 and tritium needed for the compact bombs.\n\u2022 He caused several separate missile projects to be started, because he felt that competition combined with collaboration got the best results. [165]\n\nVon Neumann's assessment that the Soviets had a lead in missile technology, considered pessimistic at the time, was soon proven correct in the Sputnik crisis. [166]\n\nVon Neumann entered government service primarily because he felt that, if freedom and civilization were to survive, it would have to be because the United States would triumph over totalitarianism from Nazism, Fascism and Soviet Communism. [52] During a Senate committee hearing he described his political ideology as \"violently anti-communist, and much more militaristic than the norm\". He was quoted in 1950 remarking, \"If you say why not bomb [the Soviets] tomorrow, I say, why not today? If you say today at five o'clock, I say why not one o'clock?\" [167]\n\nOn February 15, 1956, von Neumann was presented with the Medal of Freedom by President Dwight D. Eisenhower. His citation read:\n\nDr. von Neumann, in a series of scientific study projects of major national significance, has materially increased the scientific progress of this country in the armaments field. 
Through his work on various highly classified missions performed outside the continental limits of the United States in conjunction with critically important international programs, Dr. von Neumann has resolved some of the most difficult technical problems of national defense. [168]\n\nComputing\n\nVon Neumann was a founding figure in computing. [169] In 1945 he invented the merge sort algorithm, in which the first and second halves of an array are each sorted recursively and then merged. [170] [171] Von Neumann wrote the 23-page sorting program for the EDVAC in ink. On the first page, traces of the phrase \"TOP SECRET\", which was written in pencil and later erased, can still be seen. [171] He also worked on the philosophy of artificial intelligence with Alan Turing when the latter visited Princeton in the 1930s. [172]\n\nVon Neumann's hydrogen bomb work was played out in the realm of computing, where he and Stanis\u0142aw Ulam developed simulations on von Neumann's digital computers for the hydrodynamic computations. During this time he contributed to the development of the Monte Carlo method, which allowed solutions to complicated problems to be approximated using random numbers. [173]\n\nVon Neumann's algorithm for simulating a fair coin with a biased coin is used in the \"software whitening\" stage of some hardware random number generators. [174] Because using lists of \"truly\" random numbers was extremely slow, von Neumann developed a way of generating pseudorandom numbers using the middle-square method. Though the method has been criticized as crude, von Neumann was aware of its shortcomings: he justified it as being faster than any other method at his disposal, writing that \"Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.\" [175] Von Neumann also noted that when this method went awry it did so obviously, unlike other methods which could be subtly incorrect. [175]\n\nWhile consulting for the Moore School of Electrical Engineering at the University of Pennsylvania on the EDVAC project, von Neumann wrote an incomplete First Draft of a Report on the EDVAC. The paper, whose premature distribution nullified the patent claims of EDVAC designers J. Presper Eckert and John Mauchly, described a computer architecture in which the data and the program are both stored in the computer's memory in the same address space. This architecture is the basis of most modern computer designs, unlike the earliest computers that were \"programmed\" using a separate memory device such as a paper tape or plugboard. Although the single-memory, stored-program architecture is commonly called von Neumann architecture as a result of von Neumann's paper, the architecture was based on the work of Eckert and Mauchly, inventors of the ENIAC computer at the University of Pennsylvania. [176]\n\nJohn von Neumann consulted for the Army's Ballistic Research Laboratory, most notably on the ENIAC project, [177] as a member of its Scientific Advisory Committee. [178] After the ENIAC was modified to run as a rudimentary stored-program computer, its electronics ran at one-sixth of the original speed, but this in no way degraded performance, since the machine was still entirely I\/O bound. Complicated programs could be developed and debugged in days rather than the weeks required for plugboarding the old ENIAC. Some of von Neumann's early computer programs have been preserved. [179]\n\nThe next computer that von Neumann designed was the IAS machine at the Institute for Advanced Study in Princeton, New Jersey. 
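The merge sort, fair-coin, and middle-square procedures described above are simple enough to sketch in a few lines of modern code. The sketch below is purely illustrative: the function names and parameters are our own, not von Neumann's, and the middle-square generator is shown only to convey the idea, not as a usable source of randomness.

```python
# Illustrative sketches of three procedures attributed to von Neumann above.
# Names and parameters are modern inventions for clarity, not historical.
import random


def merge_sort(items):
    """Sort a list by recursively sorting each half and merging the results."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    # Repeatedly take the smaller front element of the two sorted halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these two tails is non-empty
    merged.extend(right[j:])
    return merged


def fair_bit(biased_bit):
    """Von Neumann's fair-coin trick: draw the biased source twice,
    keep (0, 1) as 0 and (1, 0) as 1, and discard (0, 0) and (1, 1).
    If the draws are independent, the output bits are unbiased."""
    while True:
        a, b = biased_bit(), biased_bit()
        if a != b:
            return a


def middle_square(seed, n_digits=4):
    """Middle-square pseudorandom generator: square the current value,
    pad to 2 * n_digits digits, and take the middle n_digits digits as
    the next value. Fast but crude; it can fall into short cycles, a
    failure mode that is at least easy to notice."""
    value = seed
    width = 2 * n_digits
    while True:
        squared = str(value * value).zfill(width)
        start = (width - n_digits) // 2
        value = int(squared[start:start + n_digits])
        yield value


if __name__ == "__main__":
    print(merge_sort([9, 3, 7, 1, 8, 2]))                # [1, 2, 3, 7, 8, 9]
    biased = lambda: 1 if random.random() < 0.7 else 0   # a 70/30 coin
    print([fair_bit(biased) for _ in range(8)])           # unbiased bits
    gen = middle_square(1234)
    print([next(gen) for _ in range(5)])                  # 5227, 3215, ...
```

Returning to the IAS machine mentioned above: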
He arranged its financing, and the components were designed and built at the RCA Research Laboratory nearby. John von Neumann recommended that the IBM 701, nicknamed the defense computer, include a magnetic drum. It was a faster version of the IAS machine and formed the basis for the commercially successful IBM 704. [180] [181]\n\nStochastic computing was first introduced in a pioneering paper by von Neumann in 1953. [182] However, the theory could not be implemented until the advances in computing of the 1960s. [183] [184]\n\nCellular automata, DNA and the universal constructor\n\nVon Neumann's rigorous mathematical analysis of the structure of self-replication (of the semiotic relationship between constructor, description and that which is constructed) preceded the discovery of the structure of DNA. [186]\n\nVon Neumann created the field of cellular automata without the aid of computers, constructing the first self-replicating automata with pencil and graph paper.\n\nThe detailed proposal for a physical non-biological self-replicating system was first put forward in lectures von Neumann delivered in 1948 and 1949, in which he at first proposed only a kinematic self-reproducing automaton. [187] [188] While this model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. He went on instead to develop a more abstract self-replicator model based on his original concept of cellular automata. [189]\n\nSubsequently, the concept of the Von Neumann universal constructor based on the von Neumann cellular automaton was fleshed out in his posthumously published lectures Theory of Self Reproducing Automata. [190] Ulam and von Neumann created a method for calculating liquid motion in the 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors' behaviors. [191] Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make infinite copies of itself within the given cellular universe by designing a 200,000-cell configuration that could do so.\n\n[T]here exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.\n\n\u2014von Neumann, 1948 [190]\n\nVon Neumann addressed the evolutionary growth of complexity amongst his self-replicating machines. [192] His \"proof-of-principle\" designs showed how it is logically possible, by using a general purpose programmable (\"universal\") constructor, to exhibit an indefinitely large class of self-replicators, spanning a wide range of complexity, interconnected by a network of potential mutational pathways, including pathways from the most simple to the most complex. 
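Von Neumann's actual 29-state constructor is far too large to reproduce here, but the kind of cellular-automaton world it inhabits can be conveyed with a toy rule. The sketch below is our own illustration, not von Neumann's construction: it updates a two-state grid using only the four orthogonally adjacent cells (the von Neumann neighborhood).

```python
# A toy two-state cellular automaton on the von Neumann neighborhood
# (the four orthogonally adjacent cells). Illustrative only; this is a
# simple parity rule, not von Neumann's 29-state universal constructor.

def step(grid):
    """Return the next generation: each cell becomes the parity (XOR)
    of its four orthogonal neighbors, with cells off the grid treated
    as permanently dead."""
    rows, cols = len(grid), len(grid[0])

    def cell(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    return [
        [
            (cell(r - 1, c) + cell(r + 1, c) + cell(r, c - 1) + cell(r, c + 1)) % 2
            for c in range(cols)
        ]
        for r in range(rows)
    ]


if __name__ == "__main__":
    grid = [[0] * 5 for _ in range(5)]
    grid[2][2] = 1                      # a single live cell in the middle
    for _ in range(3):
        grid = step(grid)
        print(*("".join(".#"[v] for v in row) for row in grid), sep="\n")
        print()
```

Returning to von Neumann's own construction and what it established: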
This is an important result, as prior to that it might have been conjectured that there is a fundamental logical barrier to the existence of such pathways; in which case, biological organisms, which do support such pathways, could not be \"machines\", as conventionally understood. Von Neumann considers the potential for conflict between his self-reproducing machines, stating that \"our models lead to such conflict situations\", [193] indicating it as a field of further study. [190] :147\n\nThe cybernetics movement highlighted the question of what it takes for self-reproduction to occur autonomously, and in 1952, John von Neumann designed an elaborate 2D cellular automaton that would automatically make a copy of its initial configuration of cells. The von Neumann neighborhood, in which each cell in a two-dimensional grid has the four orthogonally adjacent grid cells as neighbors, continues to be used for other cellular automata. Von Neumann proved that the most effective way of performing large-scale mining operations such as mining an entire moon or asteroid belt would be by using self-replicating spacecraft, taking advantage of their exponential growth. [194]\n\nVon Neumann investigated the question of whether modelling evolution on a digital computer could solve the complexity problem in programming. [193]\n\nBeginning in 1949, von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical father of computer virology. [195]\n\nWeather systems and global warming\n\nAs part of his research into weather forecasting, von Neumann founded the \"Meteorological Program\" in Princeton in 1946, securing funding for his project from the US Navy. [196] Von Neumann and his appointed assistant on this project, Jule Gregory Charney, wrote the world's first climate modelling software, and used it to perform the world's first numerical weather forecasts on the ENIAC computer; [196] von Neumann and his team published the results as Numerical Integration of the Barotropic Vorticity Equation in 1950. [197] Together they played a leading role in efforts to integrate sea-air exchanges of energy and moisture into the study of climate. [198] Von Neumann proposed as the research program for climate modeling: \"The approach is to first try short-range forecasts, then long-range forecasts of those properties of the circulation that can perpetuate themselves over arbitrarily long periods of time, and only finally to attempt forecast for medium-long time periods which are too long to treat by simple hydrodynamic theory and too short to treat by the general principle of equilibrium theory.\" [199]\n\nVon Neumann's research into weather systems and meteorological prediction led him to propose manipulating the environment by spreading colorants on the polar ice caps to enhance absorption of solar radiation (by reducing the albedo), [200] [201] thereby inducing global warming. 
[200] [201] Von Neumann proposed a theory of global warming as a result of the activity of humans, noting that the Earth was only 6\u00a0\u00b0F (3.3\u00a0\u00b0C) colder during the last glacial period, he wrote in 1955: \"Carbon dioxide released into the atmosphere by industry's burning of coal and oil - more than half of it during the last generation - may have changed the atmosphere's composition sufficiently to account for a general warming of the world by about one degree Fahrenheit.\" [202] [203] However, von Neumann urged a degree of caution in any program of intentional human weather manufacturing: \"What could be done, of course, is no index to what should be done... In fact, to evaluate the ultimate consequences of either a general cooling or a general heating would be a complex matter. Changes would affect the level of the seas, and hence the habitability of the continental coastal shelves; the evaporation of the seas, and hence general precipitation and glaciation levels; and so on... But there is little doubt that one could carry out the necessary analyses needed to predict the results, intervene on any desired scale, and ultimately achieve rather fantastic results.\" [203]\n\n\"The technology that is now developing and that will dominate the next decades is in conflict with traditional, and, in the main, momentarily still valid, geographical and political units and concepts. This is a maturing crisis of technology... The most hopeful answer is that the human species has been subjected to similar tests before and it seems to have a congenital ability to come through, after varying amounts of trouble.\"\n\n\u2014von Neumann, 1955 [203]\n\nTechnological singularity hypothesis\n\nThe first use of the concept of a singularity in the technological context is attributed to von Neumann, [204] who according to Ulam discussed the \"ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.\" [205] This concept was fleshed out later in the book Future Shock by Alvin Toffler.\n\nRecognition\n\nCognitive abilities\n\nNobel Laureate Hans Bethe said \"I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man\", [19] and later Bethe wrote that \"[von Neumann's] brain indicated a new species, an evolution beyond man\". [206] Seeing von Neumann's mind at work, Eugene Wigner wrote, \"one had the impression of a perfect instrument whose gears were machined to mesh accurately to a thousandth of an inch.\" [207] Paul Halmos states that \"von Neumann's speed was awe-inspiring.\" [18] Israel Halperin said: \"Keeping up with him was\u00a0... impossible. The feeling was you were on a tricycle chasing a racing car.\" [208] Edward Teller admitted that he \"never could keep up with him\". [209] Teller also said \"von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.\" [210] Peter Lax wrote \"Von Neumann was addicted to thinking, and in particular to thinking about mathematics\". 
[211]\n\nWhen George Dantzig brought von Neumann an unsolved problem in linear programming \"as I would to an ordinary mortal\", on which there had been no published literature, he was astonished when von Neumann said \"Oh, that!\", before offhandedly giving a lecture of over an hour, explaining how to solve the problem using the hitherto unconceived theory of duality. [212]\n\nLothar Wolfgang Nordheim described von Neumann as the \"fastest mind I ever met\", [213] and Jacob Bronowski wrote \"He was the cleverest man I ever knew, without exception. He was a genius.\" [214] George P\u00f3lya, whose lectures at ETH Z\u00fcrich von Neumann attended as a student, said \"Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, the chances were he'd come to me at the end of the lecture with the complete solution scribbled on a slip of paper.\" [215] Eugene Wigner writes: \"'Jancsi,' I might say, 'Is angular momentum always an integer of h? ' He would return a day later with a decisive answer: 'Yes, if all particles are at rest.'... We were all in awe of Jancsi von Neumann\". [216] Enrico Fermi told physicist Herbert L. Anderson: \"You know, Herb, Johnny can do calculations in his head ten times as fast as I can! And I can do them ten times as fast as you can, Herb, so you can see how impressive Johnny is!\" [217]\n\nHalmos recounts a story told by Nicholas Metropolis, concerning the speed of von Neumann's calculations, when somebody asked von Neumann to solve the famous fly puzzle: [218]\n\nTwo bicyclists start 20 miles apart and head toward each other, each going at a steady rate of 10 mph. At the same time a fly that travels at a steady 15 mph starts from the front wheel of the southbound bicycle and flies to the front wheel of the northbound one, then turns around and flies to the front wheel of the southbound one again, and continues in this manner till he is crushed between the two front wheels. Question: what total distance did the fly cover? The slow way to find the answer is to calculate what distance the fly covers on the first, southbound, leg of the trip, then on the second, northbound, leg, then on the third, etc., etc., and, finally, to sum the infinite series so obtained.\n\nThe quick way is to observe that the bicycles meet exactly one hour after their start, so that the fly had just an hour for his travels; the answer must therefore be 15 miles.\n\nWhen the question was put to von Neumann, he solved it in an instant, and thereby disappointed the questioner: \"Oh, you must have heard the trick before!\" \"What trick?\" asked von Neumann, \"All I did was sum the geometric series.\" [18]\n\nEugene Wigner told a similar story, only with a swallow instead of a fly, and says it was Max Born who posed the question to von Neumann in the 1920s. [219]\n\nEidetic memory\n\nVon Neumann was also noted for his eidetic memory (sometimes called photographic memory). Herman Goldstine wrote:\n\nOne of his remarkable abilities was his power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how A Tale of Two Cities started. 
Whereupon, without any pause, he immediately began to recite the first chapter and continued until asked to stop after about ten or fifteen minutes. [220]\n\nVon Neumann was reportedly able to memorize the pages of telephone directories. He entertained friends by asking them to randomly call out page numbers; he then recited the names, addresses and numbers therein. [19] [221]\n\nMathematical legacy\n\n\"It seems fair to say that if the influence of a scientist is interpreted broadly enough to include impact on fields beyond science proper, then John von Neumann was probably the most influential mathematician who ever lived,\" wrote Mikl\u00f3s R\u00e9dei in John von Neumann: Selected Letters. [222] James Glimm wrote: \"he is regarded as one of the giants of modern mathematics\". [223] The mathematician Jean Dieudonn\u00e9 said that von Neumann \"may have been the last representative of a once-flourishing and numerous group, the great mathematicians who were equally at home in pure and applied mathematics and who throughout their careers maintained a steady production in both directions\", [3] while Peter Lax described him as possessing the \"most scintillating intellect of this century\". [224] In the foreword of Mikl\u00f3s R\u00e9dei's Selected Letters, Peter Lax wrote, \"To gain a measure of von Neumann's achievements, consider that had he lived a normal span of years, he would certainly have been a recipient of a Nobel Prize in economics. And if there were Nobel Prizes in computer science and mathematics, he would have been honored by these, too. So the writer of these letters should be thought of as a triple Nobel laureate or, possibly, a 3\u00bd-fold winner, for his work in physics, in particular, quantum mechanics\". [225]\n\nIllness and death\n\nIn 1955, von Neumann was diagnosed with what was either bone, pancreatic or prostate cancer [226] [227] after he was examined by physicians for a fall and they found a mass growing near his collarbone. [228] The cancer was possibly caused by his radiation exposure during his time at Los Alamos National Laboratory. [228] He was not able to accept the proximity of his own demise, and the shadow of impending death instilled great fear in him. [229] He invited a Catholic priest, Father Anselm Strittmatter, O.S.B., to visit him for consultation. [18] [228] Von Neumann reportedly said, \"So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end,\" referring to Pascal's wager. He had earlier confided to his mother, \"There probably has to be a God. Many things are easier to explain if there is than if there isn't.\" [230] [231] [232] Father Strittmatter administered the last rites to him. [18] Some of von Neumann's friends, such as Abraham Pais and Oskar Morgenstern, said they had always believed him to be \"completely agnostic\". [231] [233] Of this deathbed conversion, Morgenstern told Heims, \"He was of course completely agnostic all his life, and then he suddenly turned Catholic\u2014it doesn't agree with anything whatsoever in his attitude, outlook and thinking when he was healthy.\" [234] Father Strittmatter recalled that even after his conversion, von Neumann did not receive much peace or comfort from it, as he still remained terrified of death. [234]\n\nVon Neumann was on his deathbed when he entertained his brother by reciting by heart and word-for-word the first few lines of each page of Goethe's Faust. 
[7] On his deathbed, his mental capabilities became a fraction of what they were before, causing him much anguish; at times Von Neumann even forgot the lines that his brother recited from Goethe's Faust. [228] He died at age 53 on February 8, 1957, at the Walter Reed Army Medical Center in Washington, D.C., under military security lest he reveal military secrets while heavily medicated. He was buried at Princeton Cemetery in Princeton, Mercer County, New Jersey. [235]\n\nSelected works\n\n\u2022 1923. On the introduction of transfinite numbers, 346\u201354.\n\u2022 1925. An axiomatization of set theory, 393\u2013413.\n\u2022 1932. Mathematical Foundations of Quantum Mechanics , Beyer, R. T., trans., Princeton Univ. Press. 1996 edition: ISBN \u00a0 0-691-02893-1.\n\u2022 1937. von Neumann, John (1981). Halperin, Israel (ed.). Continuous geometries with a transition probability. Memoirs of the American Mathematical Society. 34. ISBN \u00a0 978-0-8218-2252-4. MR \u00a0 0634656.\n\u2022 1944. Theory of Games and Economic Behavior , with Morgenstern, O., Princeton Univ. Press, online at archive.org. 2007 edition: ISBN \u00a0 978-0-691-13061-3.\n\u2022 1945. First Draft of a Report on the EDVAC\n\u2022 1948. \"The general and logical theory of automata,\" in Cerebral Mechanisms in Behavior: The Hixon Symposium, Jeffress, L.A. ed., John Wiley & Sons, New York, N. Y, 1951, pp.\u00a01\u201331, MR 0045446.\n\u2022 1960. von Neumann, John (1998). . Princeton Landmarks in Mathematics. Princeton University Press. ISBN \u00a0 978-0-691-05893-1. MR \u00a0 0120174.\n\u2022 1963. Collected Works of John von Neumann, Taub, A. H., ed., Pergamon Press. ISBN \u00a0 0-08-009566-6\n\u2022 1966. Theory of Self-Reproducing Automata, Burks, A. W., ed., University of Illinois Press. ISBN \u00a0 0-598-37798-0 [190]\n\nPhD students\n\nNotes\n\n1. Dempster, M. A. H. (February 2011). \"Benoit B. Mandelbrot (1924\u20132010): a father of Quantitative Finance\" (PDF). Quantitative Finance. 11 (2): 155\u2013156. doi:10.1080\/14697688.2011.552332. S2CID \u00a0 154802171.\n2. R\u00e8dei 1999, p.\u00a03.\n3. Dieudonn\u00e9 2008, p.\u00a090.\n4. Doran et al. 2004, p.\u00a08.\n5. Doran et al. 2004, p.\u00a01.\n6. Myhrvold, Nathan (March 21, 1999). \"John von Neumann\". Time . Archived from the original on February 11, 2001.\n7. Blair 1957, p.\u00a0104.\n8. Dyson 1998, p.\u00a0xxi.\n9. Macrae 1992, pp.\u00a038\u201342.\n10. Macrae 1992, pp.\u00a037\u201338.\n11. Macrae 1992, p.\u00a039.\n12. Macrae 1992, pp.\u00a044\u201345.\n13. Macrae 1992, pp.\u00a057\u201358.\n14. Henderson 2007, p.\u00a030.\n15. Mitchell 2009, p.\u00a0124.\n16. Macrae 1992, pp.\u00a046\u201347.\n17. Halmos, P. R. (1973). \"The Legend of von Neumann\". The American Mathematical Monthly. 80 (4): 382\u2013394. doi:10.2307\/2319080. JSTOR \u00a0 2319080.\n18. Blair 1957, p.\u00a090.\n19. Macrae 1992, p.\u00a052.\n20. Aspray, William (1990). John von Neumann and the Origins of Modern Computing. Cambridge, Massachusetts: MIT Press. ISBN \u00a0 978-0-262-01121-1.\n21. Macrae 1992, pp.\u00a070\u201371.\n22. Doran et al. 2004, p.\u00a03.\n23. Macrae 1992, pp.\u00a032\u201333.\n24. Nasar 2001, p.\u00a081.\n25. Macrae 1992, p.\u00a084.\n26. von K\u00e1rm\u00e1n, T., & Edson, L. (1967). The wind and beyond. Little, Brown & Company.\n27. Macrae 1992, pp.\u00a085\u201387.\n28. Macrae 1992, p.\u00a097.\n29. Regis, Ed (November 8, 1992). \"Johnny Jiggles the Planet\". The New York Times . Retrieved February 4, 2008.\n30. von Neumann, J. (1928). 
\"Die Axiomatisierung der Mengenlehre\". Mathematische Zeitschrift (in German). 27 (1): 669\u2013752. doi:10.1007\/BF01171122. ISSN \u00a0 0025-5874. S2CID \u00a0 123492324.\n31. Macrae 1992, pp.\u00a086\u201387.\n32. The Collected Works of Eugene Paul Wigner: Historical, Philosophical, and Socio-Political Papers. Historical and Biographical Reflections and Syntheses, By Eugene Paul Wigner, (Springer 2013), page 128\n33. Macrae 1992, pp.\u00a098\u201399.\n34. The History Of Game Theory, Volume 1: From the Beginnings to 1945, By Mary-Ann Dimand, Robert W Dimand, (Routledge, 2002), page 129\n35. Macrae 1992, p.\u00a0145.\n36. Macrae 1992, pp.\u00a0143\u2013144.\n37. Macrae 1992, pp.\u00a0155\u2013157.\n38. \"Marina Whitman\". The Gerald R. Ford School of Public Policy at the University of Michigan. July 18, 2014. Retrieved January 5, 2015.\n39. Macrae 1992, pp.\u00a0170\u2013174.\n40. Bochner, S. (1958). \"John von Neumann; A Biographical Memoir\" (PDF). National Academy of Sciences . Retrieved August 16, 2015.\n41. Macrae 1992, pp.\u00a043, 157.\n42. Macrae 1992, pp.\u00a0167\u2013168.\n43. Macrae 1992, p.\u00a0371.\n44. Macrae 1992, pp.\u00a0195\u2013196.\n45. Macrae 1992, pp.\u00a0190\u2013195.\n46. Ulam 1983, p.\u00a070.\n47. Macrae 1992, pp.\u00a0170\u2013171.\n48. Regis 1987, p.\u00a0103.\n49. \"Conversation with Marina Whitman\". Gray Watson (256.com). Archived from the original on April 28, 2011. Retrieved January 30, 2011.\n50. Poundstone, William (May 4, 2012). \"Unleashing the Power\". The New York Times .\n51. Blair, pp. 89\u2013104.\n52. Macrae 1992, p.\u00a0150.\n53. Macrae 1992, p.\u00a048.\n54. Blair 1957, p.\u00a094.\n55. Stern, Nancy (January 20, 1981). \"An Interview with Cuthbert C. Hurd\" (PDF). Charles Babbage Institute, University of Minnesota. Retrieved June 3, 2010.\n56. Rota 1989, pp.\u00a026\u201327.\n57. Macrae 1992, p.\u00a075.\n58. Van Heijenoort 1967, pp.\u00a0393\u2013394.\n59. Macrae 1992, pp.\u00a0104\u2013105.\n60. Murawski, Roman (2010). \"JOHN VON NEUMANN AND HILBERT'S SCHOOL\". ESSAYS IN THE PHILOSOPHY AND HISTORY OF LOGIC AND MATHEMATICS. Pozna\u0144 Studies in the Philosophy of the Sciences and the Humanities. p.\u00a0196. ISBN \u00a0 978-90-420-3091-6.\n61. von Neumann, J. (1929), \"Zur allgemeinen Theorie des Masses\" (PDF), Fundamenta Mathematicae , 13: 73\u2013116, doi:\n62. Neumann, J. v. (1927). \"Zur Hilbertschen Beweistheorie\". Mathematische Zeitschrift. 24: 1\u201346. doi:10.1007\/BF01475439. S2CID \u00a0 122617390.\n63. Murawski, Roman (2010). \"JOHN VON NEUMANN AND HILBERT'S SCHOOL\". ESSAYS IN THE PHILOSOPHY AND HISTORY OF LOGIC AND MATHEMATICS. Pozna\u0144 Studies in the Philosophy of the Sciences and the Humanities. pp.\u00a0204\u2013206. ISBN \u00a0 978-90-420-3091-6.\n64. von Neumann 2005, p.\u00a0123.\n65. Dawson 1997, p.\u00a070.\n66. von Neumann 2005, p.\u00a0124.\n67. Macrae 1992, p.\u00a0182.\n68. von Plato, Jan (2018). \"In search of the sources of incompleteness\" (PDF). Proceedings of the International Congress of Mathematicians 2018. 3: 4080. doi:10.1142\/9789813272880_0212. ISBN \u00a0 978-981-327-287-3.\n69. von Plato, Jan (2020). Can Mathematics Be Proved Consistent?. Springer International Publishing. p.\u00a018. ISBN \u00a0 978-3-030-50876-0.\n70. Murawski, Roman (2010). \"JOHN VON NEUMANN AND HILBERT'S SCHOOL\". ESSAYS IN THE PHILOSOPHY AND HISTORY OF LOGIC AND MATHEMATICS. Pozna\u0144 Studies in the Philosophy of the Sciences and the Humanities. p.\u00a0209. ISBN \u00a0 978-90-420-3091-6.\n71. 
Two of the papers are:\nHopf, Eberhard (1939). \"Statistik der geod\u00e4tischen Linien in Mannigfaltigkeiten negativer Kr\u00fcmmung\". Leipzig Ber. Verhandl. S\u00e4chs. Akad. Wiss. 91: 261\u2013304.\n72. Halmos, Paul R. (1958). \"Von Neumann on measure and ergodic theory\" (PDF). Bull. Amer. Math. Soc. 64 (3, Part 2): 86\u201394. doi:.\n73. Dieudonn\u00e9, Jean. \"Von Neumann, Johann (or John)\". Encyclopedia.com. Complete Dictionary of Scientific Biography. Retrieved August 7, 2021.\n74. Ionescu Tulcea, Alexandra; Ionescu Tulcea, C. (1969). Topics in the Theory of Lifting. Springer-Verlag Berlin Heidelberg. p.\u00a0V. ISBN \u00a0 978-3-642-88509-9.\n75. Neumann, J. v. (1940). \"On Rings of Operators. III\". Annals of Mathematics. 41 (1): 94\u2013161. doi:10.2307\/1968823. JSTOR \u00a0 1968823.\n76. Neumann, John von (1950). Functional Operators, Volume 1: Measures and Integrals. Princeton University Press. ISBN \u00a0 9780691079660.\n77. von Neumann, John (1934). \"Almost Periodic Functions in a Group. I.\" Transactions of the American Mathematical Society. 36 (3): 445\u2013492. doi:10.2307\/1989792. JSTOR \u00a0 1989792.\n78. von Neumann, John; Bochner, Salomon (1935). \"Almost Periodic Functions in Groups, II\". Transactions of the American Mathematical Society. 37 (1): 21\u201350. doi:10.2307\/1989694. JSTOR \u00a0 1989694.\n79. \"AMS B\u00f4cher Prize\". AMS. January 5, 2016. Retrieved January 12, 2018.\n80. von Neumann, J. (1933). \"Die Einfuhrung Analytischer Parameter in Topologischen Gruppen\". Annals of Mathematics . 2. 34 (1): 170\u2013190. doi:10.2307\/1968347. JSTOR \u00a0 1968347.\n81. v. Neumann, J. (1929). \"\u00dcber die analytischen Eigenschaften von Gruppen linearer Transformationen und ihrer Darstellungen\". Mathematische Zeitschrift (in German). 30 (1): 3\u201342. doi:10.1007\/BF01187749. S2CID \u00a0 122565679.\n82. Dieudonn\u00e9, Jean (1981). History of Functional Analysis. North-Hollywood Publishing Company. p.\u00a0172. ISBN \u00a0 0444861483.\n83. Steen, L. A. (April 1973). \"Highlights in the History of Spectral Theory\". The American Mathematical Monthly. 80 (4): 370. doi:10.2307\/2319079. JSTOR \u00a0 2319079.\n84. Dieudonn\u00e9, Jean (1981). History of Functional Analysis. North-Hollywood Publishing Company. pp.\u00a0175\u2013176, 178\u2013179, 181, 183. ISBN \u00a0 0444861483.\n85. Dieudonn\u00e9, Jean (1981). History of Functional Analysis. North-Hollywood Publishing Company. pp.\u00a0211, 218. ISBN \u00a0 0444861483.\n86. Steen, L. A. (April 1973). \"Highlights in the History of Spectral Theory\". The American Mathematical Monthly. 80 (4): 373. doi:10.2307\/2319079. JSTOR \u00a0 2319079.\n87. Petz & R\u00e8dei 1995, pp.\u00a0163\u2013181.\n88. \"Von Neumann Algebras\" (PDF). Princeton University. Retrieved January 6, 2016.\n89. \"Direct Integrals of Hilbert Spaces and von Neumann Algebras\" (PDF). University of California at Los Angeles. Archived from the original (PDF) on July 2, 2015. Retrieved January 6, 2016.\n90. Macrae 1992, p.\u00a0140.\n91. von Neumann, John (1930). \"Zur Algebra der Funktionaloperationen und Theorie der normalen Operatoren\". Mathematische Annalen (in German). 102 (1): 370\u2013427. Bibcode:1930MatAn.102..685E. doi:10.1007\/BF01782352. S2CID \u00a0 121141866.. The original paper on von Neumann algebras.\n92. Birkhoff, Garrett (1958). Von Neumann and lattice theory (PDF). Bulletin of the American Mathematical Society . 64. pp.\u00a050\u201356. doi:10.1090\/S0002-9904-1958-10192-5. ISBN \u00a0 978-0-8218-1025-5.\n93. 
Macrae 1992, pp.\u00a0139\u2013141.\n94. Macrae 1992, p.\u00a0142.\n95. Hermann, Grete (1935). \"Die naturphilosophischen Grundlagen der Quantenmechanik\". Naturwissenschaften . 23 (42): 718\u2013721. Bibcode:1935NW.....23..718H. doi:10.1007\/BF01491142. S2CID \u00a0 40898258. English translation in Hermann, Grete (2016). Crull, Elise; Bacciagaluppi, Guido (eds.). Grete Hermann Between physics and philosophy. Springer. pp.\u00a0239\u2013278.\n96. Bell, John S. (1966). \"On the problem of hidden variables in quantum mechanics\". Reviews of Modern Physics . 38 (3): 447\u2013452. Bibcode:1966RvMP...38..447B. doi:10.1103\/RevModPhys.38.447. OSTI \u00a0 1444158.\n97. Bub, Jeffrey (2010). \"Von Neumann's 'No Hidden Variables' Proof: A Re-Appraisal\". Foundations of Physics . 40 (9\u201310): 1333\u20131340. arXiv:. Bibcode:2010FoPh...40.1333B. doi:10.1007\/s10701-010-9480-9. S2CID \u00a0 118595119.\n98. Mermin, N. David; Schack, R\u00fcdiger (2018). \"Homer nodded: von Neumann's surprising oversight\". Foundations of Physics. 48 (9): 1007\u20131020. arXiv:. Bibcode:2018FoPh...48.1007M. doi:10.1007\/s10701-018-0197-5. S2CID \u00a0 118951033.\n99. Freire, Olival Jr. (2006). \"Philosophy enters the optics laboratory: Bell's theorem and its first experimental tests (1965\u20131982)\". Studies in History and Philosophy of Modern Physics. 37 (4): 577\u2013616. arXiv:. Bibcode:2006SHPMP..37..577F. doi:10.1016\/j.shpsb.2005.12.003. S2CID \u00a0 13503517.\n100. Wigner, Eugene; Henry Margenau (December 1967). \"Remarks on the Mind Body Question, in Symmetries and Reflections, Scientific Essays\". American Journal of Physics. 35 (12): 1169\u20131170. Bibcode:1967AmJPh..35.1169W. doi:10.1119\/1.1973829.\n101. Schlosshauer, M.; Koer, J.; Zeilinger, A. (2013). \"A Snapshot of Foundational Attitudes Toward Quantum Mechanics\". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222\u2013230. arXiv:. Bibcode:2013SHPMP..44..222S. doi:10.1016\/j.shpsb.2013.04.004. S2CID \u00a0 55537196.\n102. Schreiber, Zvi (1995). \"The Nine Lives of Schroedinger's Cat\". arXiv:.\n103. Nielsen, Michael A. and Isaac Chuang (2001). Quantum computation and quantum information (Repr.\u00a0ed.). Cambridge [u.a.]: Cambridge Univ. Press. p.\u00a0700. ISBN \u00a0 978-0-521-63503-5.\n104. Quantum Information Theory, By Mark M. Wilde, (Cambridge University Press 2013), page 252\n105. von Neumann, John (1927), \"Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik\", G\u00f6ttinger Nachrichten, 1: 245\u2013272\n106. Schl\u00fcter, Michael and Lu Jeu Sham (1982), \"Density functional theory\", Physics Today, 35 (2): 36\u201343, Bibcode:1982PhT....35b..36S, doi:10.1063\/1.2914933\n107. Ugo Fano (June 1995), \"Density matrices as polarization vectors\", Rendiconti Lincei, 6 (2): 123\u2013130, doi:10.1007\/BF03001661, S2CID \u00a0 128081459\n108. Giulini, Domenico. (1996). Decoherence and the Appearance of a Classical World in Quantum Theory. Joos, Erich., Kiefer, Claus., Kupsch, Joachim., Stamatescu, Ion-Olimpiu., Zeh, H. Dieter. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN \u00a0 978-3-662-03263-3. OCLC \u00a0 851393174.\n109. Bacciagaluppi, Guido (2020), \"The Role of Decoherence in Quantum Mechanics\", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2020\u00a0ed.), Metaphysics Research Lab, Stanford University, retrieved July 25, 2020\n110. Gabbay, Dov M.; Woods, John (2007). \"The History of Quantum Logic\". 
The Many Valued and Nonmonotonic Turn in Logic. Elsevier. pp.\u00a0205\u20132017. ISBN \u00a0 978-0-08-054939-2.\n111. Birkhoff, Garrett; von Neumann, John (October 1936). \"The Logic of Quantum Mechanics\". Annals of Mathematics. 37 (4): 823\u2013843. doi:10.2307\/1968621. JSTOR \u00a0 1968621.\n112. Putnam, Hilary (1985). Philosophical Papers: Volume 3, Realism and Reason. Cambridge University Press. p.\u00a0263. ISBN \u00a0 978-0-521-31394-0.\n113. Kuhn, H. W.; Tucker, A. W. (1958). \"John von Neumann's work in the theory of games and mathematical economics\". Bull. Amer. Math. Soc. 64 (Part 2) (3): 100\u2013122. CiteSeerX \u00a0. doi:10.1090\/s0002-9904-1958-10209-8. MR \u00a0 0096572.\n114. von Neumann, J (1928). \"Zur Theorie der Gesellschaftsspiele\". Mathematische Annalen (in German). 100: 295\u2013320. doi:10.1007\/bf01448847. S2CID \u00a0 122961988.\n115. Lissner, Will (March 10, 1946). \"Mathematical Theory of Poker Is Applied to Business Problems; GAMING STRATEGY USED IN ECONOMICS Big Potentialities Seen Strategies Analyzed Practical Use in Games\". The New York Times. ISSN \u00a0 0362-4331 . Retrieved July 25, 2020.\n116. For this problem to have a unique solution, it suffices that the nonnegative matrices\u00a0A and\u00a0B satisfy an irreducibility condition, generalizing that of the Perron\u2013Frobenius theorem of nonnegative matrices, which considers the (simplified) eigenvalue problem\nA \u2212 \u03bb Iq = 0,\nwhere the nonnegative matrix\u00a0A must be square and where the diagonal matrix \u00a0Iis the identity matrix. Von Neumann's irreducibility condition was called the \"whales and wranglers\" hypothesis by D. G. Champernowne, who provided a verbal and economic commentary on the English translation of von Neumann's article. Von Neumann's hypothesis implied that every economic process used a positive amount of every economic good. Weaker \"irreducibility\" conditions were given by David Gale and by John Kemeny, Morgenstern, and Gerald L. Thompson in the 1950s and then by Stephen M. Robinson in the 1970s.\n117. Morgenstern & Thompson 1976, pp.\u00a0xviii, 277.\n118. Rockafellar 1970, pp.\u00a0i, 74.\n119. Rockafellar 1974, pp.\u00a0351\u2013378.\n120. Ye 1997, pp.\u00a0277\u2013299.\n121. Bruckmann, Gerhart; Weber, Wilhelm, eds. (September 21, 1971). Contributions to von Neumann's Growth Model. Proceedings of a Conference Organized by the Institute for Advanced Studies Vienna, Austria, July 6 and 7, 1970. Springer\u2013Verlag. doi:10.1007\/978-3-662-24667-2. ISBN \u00a0 978-3-662-22738-1.\n122. Macrae 1992, pp.\u00a0250\u2013253.\n123. Dantzig, G. B. (1983). \"Reminiscences about the origins of linear programming.\". In Bachem, A.; Gr\u00f6tschel, M.; Korte, B. (eds.). Mathematical Programming The State of the Art: Bonn 1982. Berlin, New York: Springer-Verlag. pp.\u00a078\u201386. ISBN \u00a0 0387120823. OCLC \u00a0 9556834.\n124. Dantzig, George; Thapa, Mukund N. (2003). Linear Programming\u00a0: 2: Theory and Extensions. New York, NY: Springer-Verlag. ISBN \u00a0 978-1-4419-3140-5.\n125. von Neumann, John (1941). \"Distribution of the ratio of the mean square successive difference to the variance\". Annals of Mathematical Statistics . 12 (4): 367\u2013395. doi:. JSTOR \u00a0 2235951.\n126. Durbin, J.; Watson, G. S. (1950). \"Testing for Serial Correlation in Least Squares Regression, I\". Biometrika . 37 (3\u20134): 409\u2013428. doi:10.2307\/2332391. JSTOR \u00a0 2332391. PMID \u00a0 14801065.\n127. Sargan, J.D.; Bhargava, Alok (1983). 
\"Testing residuals from least squares regression for being generated by the Gaussian random walk\". Econometrica . 51 (1): 153\u2013174. doi:10.2307\/1912252. JSTOR \u00a0 1912252.\n128. von Neumann 1963a, pp.\u00a0219\u2013237.\n129. von Neumann 1963b, pp.\u00a0205\u2013218.\n130. Ballistics: Theory and Design of Guns and Ammunition, Second Edition By Donald E. Carlucci, Sidney S. Jacobson, (CRC Press, 26 Aug 2013), page 523\n131. von Neumann, J.; Richtmyer, R. D. (March 1950). \"A Method for the Numerical Calculation of Hydrodynamic Shocks\". Journal of Applied Physics. 21 (3): 232\u2013237. Bibcode:1950JAP....21..232V. doi:10.1063\/1.1699639.\n132. Metropolis, Nicholas, ed. (2014). A History of Computing in the Twentieth Century. Elsevier. p.\u00a024. ISBN \u00a0 978-1-4832-9668-5.\n133. Ulam 1983, p.\u00a096.\n134. Dyson 1998, p.\u00a077.\n135. \"Von Neumann: The Mathematician\". MacTutor History of Mathematics Archive. Retrieved December 16, 2016.\n136. \"Von Neumann: The Mathematician, Part 2\". MacTutor History of Mathematics Archive. Retrieved December 16, 2016.\n137. von Neumann 1947, pp.\u00a0180\u2013196.\n138. Hoddeson et al. 1993, pp.\u00a0130\u2013133, 157\u2013159.\n139. Hoddeson et al. 1993, pp.\u00a0239\u2013245.\n140. Hoddeson et al. 1993, p.\u00a0295.\n141. Sublette, Carey. \"Section 8.0 The First Nuclear Weapons\". Nuclear Weapons Frequently Asked Questions. Retrieved January 8, 2016.\n142. Hoddeson et al. 1993, pp.\u00a0320\u2013327.\n143. Macrae 1992, p.\u00a0209.\n144. Hoddeson et al. 1993, p.\u00a0184.\n145. Macrae 1992, pp.\u00a0242\u2013245.\n146. Groves 1962, pp.\u00a0268\u2013276.\n147. Hoddeson et al. 1993, pp.\u00a0371\u2013372.\n148. Macrae 1992, p.\u00a0205.\n149. Macrae 1992, p.\u00a0245.\n150. Herken 2002, pp.\u00a0171, 374.\n151. Bernstein, Jeremy (2010). \"John von Neumann and Klaus Fuchs: an Unlikely Collaboration\". Physics in Perspective. 12 (1): 36\u201350. Bibcode:2010PhP....12...36B. doi:10.1007\/s00016-009-0001-1. S2CID \u00a0 121790196.\n152. Macrae 1992, p.\u00a0208.\n153. Macrae 1992, pp.\u00a0350\u2013351.\n154. \"Weapons' Values to be Appraised\". Spokane Daily Chronicle . December 15, 1948. Retrieved January 8, 2015.\n155. Heims 1980, p.\u00a0276.\n156. Macrae 1992, pp.\u00a0367\u2013369.\n157. Macrae 1992, pp.\u00a0359\u2013365.\n158. Macrae 1992, pp.\u00a0362\u2013363.\n159. Heims 1980, pp.\u00a0258\u2013260.\n160. Macrae 1992, pp.\u00a0362\u2013364.\n161. Blair 1957, p.\u00a096.\n162. \"Dwight D. Eisenhower: Citation Accompanying Medal of Freedom Presented to Dr. John von Neumann\". The American Presidency Project.\n163. Goldstine 1980, pp.\u00a0167\u2013178.\n164. Knuth 1998, p.\u00a0159.\n165. Knuth, Donald E. (1987). \"Von Neumann's First Computer Program\". In Aspray, W.; Burks, A. (eds.). Papers of John von Neumann on computing and computer theory. Cambridge: MIT Press. pp.\u00a0 89\u201395. ISBN \u00a0 978-0-262-22030-9.\n166. Macrae 1992, pp.\u00a0183\u2013184.\n167. Macrae 1992, pp.\u00a0334\u2013335.\n168. von Neumann, John (1951). \"Various techniques used in connection with random digits\". National Bureau of Standards Applied Math Series. 12: 36.\n169. Von Neumann, John (1951). \"Various techniques used in connection with random digits\". National Bureau of Standards Applied Mathematics Series. 12: 36\u201338.\n170. \"John W. Mauchly and the Development of the ENIAC Computer\". University of Pennsylvania. Archived from the original on April 16, 2007. Retrieved January 27, 2017.\n171. Macrae 1992, pp.\u00a0279\u2013283.\n172. 
\"BRL's Scientific Advisory Committee, 1940\". U.S. Army Research Laboratory. Retrieved January 12, 2018.\n173. Knuth, Donald E. (1996). Selected papers on computer science (Center for the Study of Language and Information \u2013 Lecture Notes). Stanford, Calif. Cambridge, Mass.: CSLI Publications Cambridge University Press. ISBN \u00a0 978-1-881526-91-9.\n174. R\u00e9dei, Mikl\u00f3s (ed.). \"Letter to R. S. Burlington.\". John Von Neumann: Selected Letters. The American Mathematics Society and The London Mathematical Society. pp.\u00a073 ff. ISBN \u00a0 978-0-8218-9126-1.\n175. Dyson 2012, pp.\u00a0267\u2013268, 287.\n176. von Neumann, John (1995). \"Probabilistic logics and the synthesis of reliable organisms from unreliable components\". In Br\u00f3dy, F.; V\u00e1mos, Tibor (eds.). . World Scientific. pp.\u00a0 567\u2013616. ISBN \u00a0 978-981-02-2201-7.\n177. Petrovic, R.; Siljak, D. (1962). \"Multiplication by means of coincidence\". ACTES Proc. of 3rd Int. Analog Comp. Meeting.\n178. Afuso, C. (1964), Quart. Tech. Prog. Rept, Department of Computer Science, University of Illinois at Urbana-Champaign, Illinois\n179. Pesavento, Umberto (1995), \"An implementation of von Neumann's self-reproducing machine\" (PDF), Artificial Life, 2 (4): 337\u2013354, doi:10.1162\/artl.1995.2.337, PMID \u00a0 8942052, archived from the original (PDF) on June 21, 2007\n180. Rocha (2015), pp.\u00a025\u201327.\n181. von Neumann, John (1966). A. Burks (ed.). The Theory of Self-reproducing Automata. Urbana, IL: Univ. of Illinois Press. ISBN \u00a0 978-0-598-37798-2.\n182. \"2.1 Von Neumann's Contributions\". Molecularassembler.com. Retrieved September 16, 2009.\n183. \"2.1.3 The Cellular Automaton (CA) Model of Machine Replication\". Molecularassembler.com. Retrieved September 16, 2009.\n184. von Neumann, John (1966). Arthur W. Burks (ed.). Theory of Self-Reproducing Automata (PDF) (PDF). Urbana and London: University of Illinois Press. ISBN \u00a0 978-0-598-37798-2.\n185. McMullin, B. (2000), \"John von Neumann and the Evolutionary Growth of Complexity: Looking Backwards, Looking Forwards...\", Artificial Life, 6 (4): 347\u2013361, doi:10.1162\/106454600300103674, PMID \u00a0 11348586, S2CID \u00a0 5454783\n186. Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, Francisco J. Varela, Paul Bourgine, (MIT Press 1992), page 236\n187. Freitas, Robert A., Jr. (1980). \"A Self-Reproducing Interstellar Probe\". Journal of the British Interplanetary Society. 33: 251\u2013264. Bibcode:1980JBIS...33..251F . Retrieved January 9, 2015.\n188. Filiol 2005, pp.\u00a019\u201338.\n189. Weather Architecture By Jonathan Hill (Routledge, 2013), page 216\n190. Charney, J. G.; Fj\u00f6rtoft, R.; Neumann, J. (1950). \"Numerical Integration of the Barotropic Vorticity Equation\". Tellus. 2 (4): 237\u2013254. Bibcode:1950TellA...2..237C. doi:.\n191. Gilchrist, Bruce, \"Remembering Some Early Computers, 1948-1960\" (PDF). Archived from the original (PDF) on December 12, 2006. Retrieved December 12, 2006., Columbia University EPIC, 2006, pp.7-9. (archived 2006) Contains some autobiographical material on Gilchrist's use of the IAS computer beginning in 1952.\n192. Intraseasonal Variability in the Atmosphere-Ocean Climate System, By William K.-M. Lau, Duane E. Waliser (Springer 2011), page V\n193. Macrae 1992, p.\u00a0332.\n194. Heims 1980, pp.\u00a0236\u2013247.\n195. Macrae 1992, p.\u00a016.\n196. Engineering: Its Role and Function in Human Society edited by William H. 
Davenport, Daniel I. Rosenthal (Elsevier 2016), page 266\n197. The Technological Singularity by Murray Shanahan, (MIT Press, 2015), page 233\n198. Chalmers, David (2010). \"The singularity: a philosophical analysis\". Journal of Consciousness Studies. 17 (9\u201310): 7\u201365.\n199. Macrae 1992, p.\u00a0backcover.\n200. Wigner, Mehra & Wightman 1995, p.\u00a0129.\n201. Kaplan, Michael and Kaplan, Ellen (2006) Chances are\u2013: adventures in probability. Viking.\n202. Teller, Edward (April 1957). \"John von Neumann\". Bulletin of the Atomic Scientists. 13 (4): 150\u2013151. Bibcode:1957BuAtS..13d.150T. doi:10.1080\/00963402.1957.11457538.\n203. Nowak, Amram (January 1, 1966). \"John Von Neumann a documentary\". Mathematical Association of America, Committee on Educational Media. OCLC \u00a0 177660043., DVD version (2013) OCLC \u00a0 897933992.\n204. Mirowski 2002, p.\u00a0258.\n205. Goldstine 1980, pp.\u00a0171.\n206. Bronowski 1974, p.\u00a0433.\n207. Petkovi\u0107 2009, p.\u00a0157.\n208. The Recollections of Eugene P. Wigner, by Eugene Paul Wigner, Andrew Szanton, Springer, 2013, page 106\n209. Fermi Remembered, James W. Cronin, University of Chicago Press (2004), page 236\n210. \"Fly Puzzle (Two Trains Puzzle)\". Mathworld.wolfram.com. February 15, 2014. Retrieved February 25, 2014.\n211. \"John von Neumann \u2013 A Documentary\". The Mathematical Association of America. 1966. pp.\u00a016m46s\u201319m04s. Retrieved February 22, 2016.\n212. Goldstine 1980, pp.\u00a0167.\n213. John von Neumann: Life, Work, and Legacy Institute of Advanced Study, Princeton\n214. von Neumann 2005, p.\u00a07.\n215. Glimm, Impagliazzo & Singer 1990, p.\u00a0vii.\n216. von Neumann 2005, p.\u00a0xiii.\n217. While there is a general agreement that the initially discovered bone tumour was a secondary growth, sources differ as to the location of the primary cancer. While Macrae gives it as pancreatic, the Life magazine article says it was prostate.\n218. Veisdal, J\u00f8rgen (November 11, 2019). \"The Unparalleled Genius of John von Neumann\". Medium. Retrieved November 19, 2019.\n219. Jacobsen, Annie. (June 7, 2016). The Pentagon's brain\u00a0: an uncensored history of DARPA, America's top secret military research agency. ISBN \u00a0 978-0-316-37166-7. OCLC \u00a0 1037806913.\n220. Read, Colin (2012). The Portfolio Theorists: von Neumann, Savage, Arrow and Markowitz. Great Minds in Finance. Palgrave Macmillan. p.\u00a065. ISBN \u00a0 978-0230274143 . Retrieved September 29, 2017. When von Neumann realised he was incurably ill his logic forced him to realise that he would cease to exist... [a] fate which appeared to him unavoidable but unacceptable.\n221. Macrae 1992 , p.\u00a0379\"\n222. Dransfield & Dransfield 2003 , p.\u00a0124 \"He was brought up in a Hungary in which anti-Semitism was commonplace, but the family were not overly religious, and for most of his adult years von Neumann held agnostic beliefs.\"\n223. Ayoub 2004 , p.\u00a0170 \"On the other hand, von Neumann, giving in to Pascal's wager on his death bed, received extreme unction.\"\n224. Pais 2006 , p.\u00a0109 \"He had been completely agnostic for as long as I had known him. As far as I could see this act did not agree with the attitudes and thoughts he had harbored for nearly all his life.\"\n225. Poundstone 1993, p.\u00a0194.\n226. Macrae 1992, p.\u00a0380.\n227. \"John von Neumann Theory Prize\". Institute for Operations Research and the Management Sciences. Archived from the original on May 13, 2016. Retrieved May 17, 2016.\n228. 
\"IEEE John von Neumann Medal\". IEEE Awards. Institute of Electrical and Electronics Engineers . Retrieved May 17, 2016.\n229. \"The John von Neumann Lecture\". Society for Industrial and Applied Mathematics . Retrieved May 17, 2016.\n230. \"Von Neumann\". United States Geological Survey . Retrieved May 17, 2016.\n231. \"22824 von Neumann (1999 RP38)\". Jet Propulsion Laboratory . Retrieved February 13, 2018.\n232. \"(22824) von Neumann = 1999 RP38 = 1998 HR2\". Minor Planet Center . Retrieved February 13, 2018.\n233. Anderson, Christopher (November 27, 1989). \"NSF Supercomputer Program Looks Beyond Princeton Recall\". The Scientist Magazine. Retrieved May 17, 2016.\n234. \"Introducing the John von Neumann Computer Society\". John von Neumann Computer Society. Archived from the original on April 29, 2008. Retrieved May 20, 2008.\n235. Kent & Williams 1994, p.\u00a0321.\n236. \"American Scientists Issue\". Smithsonian National Postal Museum . Retrieved May 17, 2016.\n237. \"John von Neumann Award\". d\u00edjaink \u2013 Rajk. Archived from the original on December 15, 2014. Retrieved May 17, 2016.\n238. John von Neumann at the Mathematics Genealogy Project. Retrieved March 17, 2015.\n239. While Israel Halperin's thesis advisor is often listed as Salomon Bochner, this may be because \"Professors at the university direct doctoral theses but those at the Institute do not. Unaware of this, in 1934 I asked von Neumann if he would direct my doctoral thesis. He replied Yes.\" (Halperin, Israel (1990). \"The Extraordinary Inspiration of John von Neumann\". The Legacy of John von Neumann. Proceedings of Symposia in Pure Mathematics. 50. pp.\u00a015\u201317. doi:10.1090\/pspum\/050\/1067747. ISBN \u00a0 978-0-8218-1487-1.)\n\nRelated Research Articles\n\nDavid Hilbert was a German mathematician and one of the most influential mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, spectral theory of operators and its application to integral equations, mathematical physics, and the foundations of mathematics.\n\nEugene Paul \"E. P.\" Wigner was a Hungarian theoretical physicist who also contributed to mathematical physics. He obtained American citizenship in 1937, and received the Nobel Prize in Physics in 1963 \"for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles\".\n\nThe mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. This mathematical formalism uses mainly a part of functional analysis, especially Hilbert spaces, which are a kind of linear space. Such are distinguished from mathematical formalisms for physics theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces (L2 space mainly), and operators on these spaces. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely as spectral values of linear operators in Hilbert space.\n\nIn quantum mechanics, a density matrix is a matrix that describes the quantum state of a physical system. 
It allows for the calculation of the probabilities of the outcomes of any measurement performed upon this system, using the Born rule. It is a generalization of the more usual state vectors or wavefunctions: while those can only represent pure states, density matrices can also represent mixed states. Mixed states arise in quantum mechanics in two different situations: first when the preparation of the system is not fully known, and thus one must deal with a statistical ensemble of possible preparations, and second when one wants to describe a physical system which is entangled with another, as its state can not be described by a pure state.\n\nMathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as \"the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories\".\n\nIn mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra.\n\nIn quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. The predictions that quantum physics makes are in general probabilistic. The mathematical tools for making predictions about what measurement outcomes may occur were developed during the 20th century and make use of linear algebra and functional analysis.\n\nHilbert's sixth problem is to axiomatize those branches of physics in which mathematics is prevalent. It occurs on the widely cited list of Hilbert's problems in mathematics that he presented in the year 1900. In its common English translation, the explicit statement reads:\n\nIn quantum mechanics, quantum logic is a set of rules for reasoning about propositions that takes the principles of quantum theory into account. This research area and its name originated in a 1936 paper by Garrett Birkhoff and John von Neumann, who were attempting to reconcile the apparent inconsistency of classical logic with the facts concerning the measurement of complementary variables in quantum mechanics, such as position and momentum.\n\nIn mathematics and in theoretical physics, the Stone\u2013von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.\n\nIn abstract algebra, a Jordan algebra is a nonassociative algebra over a field whose multiplication satisfies the following axioms:\n\n1. .\n\nIn quantum statistical mechanics, the von Neumann entropy, named after John von Neumann, is the extension of classical Gibbs entropy concepts to the field of quantum mechanics. For a quantum-mechanical system described by a density matrix \u03c1, the von Neumann entropy is\n\nIn the theory of von Neumann algebras, a part of the mathematical field of functional analysis, Tomita\u2013Takesaki theory is a method for constructing modular automorphisms of von Neumann algebras from the polar decomposition of a certain involution. 
It is essential for the theory of type III factors, and has led to a good structure theory for these previously intractable objects.\n\nIn mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.\n\nIn mathematics, a commutation theorem explicitly identifies the commutant of a specific von Neumann algebra acting on a Hilbert space in the presence of a trace. The first such result was proved by Francis Joseph Murray and John von Neumann in the 1930s and applies to the von Neumann algebra generated by a discrete group or by the dynamical system associated with a measurable transformation preserving a probability measure. Another important application is in the theory of unitary representations of unimodular locally compact groups, where the theory has been applied to the regular representation and other closely related representations. In particular this framework led to an abstract version of the Plancherel theorem for unimodular locally compact groups due to Irving Segal and Forrest Stinespring and an abstract Plancherel theorem for spherical functions associated with a Gelfand pair due to Roger Godement. Their work was put in final form in the 1950s by Jacques Dixmier as part of the theory of Hilbert algebras. It was not until the late 1960s, prompted partly by results in algebraic quantum field theory and quantum statistical mechanics due to the school of Rudolf Haag, that the more general non-tracial Tomita\u2013Takesaki theory was developed, heralding a new era in the theory of von Neumann algebras.\n\nIn mathematics, Hilbert spaces allow generalizing the methods of linear algebra and calculus from the two-dimensional and three dimensional Euclidean spaces to spaces that may have an infinite dimension. A Hilbert space is a vector space equipped with an inner product operation, which allows defining a distance function and perpendicularity. Furthermore, Hilbert spaces are complete for this distance, which means that there are enough limits in the space to allow the techniques of calculus to be used.\n\nIn physics, quantum dynamics is the quantum version of classical dynamics. Quantum dynamics deals with the motions, and energy and momentum exchanges of systems whose behavior is governed by the laws of quantum mechanics. 
Quantum dynamics is relevant for burgeoning fields, such as quantum computing and atomic optics.\n\nHuzihiro Araki is a Japanese mathematical physicist and mathematician, who worked on the foundations of quantum field theory, on quantum statistical mechanics, and on the theory of operator algebras.\n\nThe Koopman\u2013von Neumann mechanics is a description of classical mechanics in terms of Hilbert space, introduced by Bernard Koopman and John von Neumann in 1931 and 1932, respectively.\n\nIn mathematical physics, the Dirac\u2013von Neumann axioms give a mathematical formulation of quantum mechanics in terms of operators on a Hilbert space. They were introduced by Paul Dirac in 1930 and John von Neumann in 1932.\n\nReferences\n\nBooks\n\nPopular periodicals\n\nVideo","date":"2021-10-19 08:27:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 9, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6693241000175476, \"perplexity\": 2055.7248827262547}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585246.50\/warc\/CC-MAIN-20211019074128-20211019104128-00695.warc.gz\"}"} | null | null |
import os
import tempfile
import cv2
try:
from win32com import client as com
from win32com.client import constants as c
import win32api
except ImportError:
com = win32api = c = None
from gouda.barcode import Barcode
from gouda.gouda_error import GoudaError
from gouda.util import debug_print, is_clsid_registered
class DataSymbolEngine(object):
"""Decode using the DataSymbol's BarcodeReader SDK
BarcodeReader can decode many types of barcodes - currently using it just
for Data Matrix and Code 128 + Code 39
"""
CLSID = "BarcodeReader.BarcodeDecoder"
def __init__(self, datamatrix, n_barcodes=None):
if not self.available():
raise GoudaError('Data Symbol unavailable')
else:
com.pythoncom.CoInitialize()
# Tip from stackoverflow about how to access COM constants
# http://stackoverflow.com/a/21534997/1773758
self.d = com.gencache.EnsureDispatch(self.CLSID)
if datamatrix:
self.d.BarcodeTypes = c.DataMatrix
else:
self.d.BarcodeTypes = c.Code128 | c.Code39
if n_barcodes is None:
n_barcodes = 1 if datamatrix else 10
self.d.LinearFindBarcodes = n_barcodes
# Map values in EBarcodeTypes to text
# This should be a class member but the enumeration is visible only
# after the call to EnsureDispatch.
self.types = {
c.Code128: 'Code 128',
c.Code39: 'Code 39',
c.Interl25: 'Interleaved 2 of 5',
c.EAN13: 'EAN-13',
c.EAN8: 'EAN-8',
c.Codabar: 'Codabar',
c.Code11: 'Code 11',
c.UPCA: 'UPC-A',
c.UPCE: 'UPC-E',
c.DataMatrix: 'Data Matrix',
}
@classmethod
def available(cls):
return com is not None and is_clsid_registered(cls.CLSID)
def decode_file(self, path):
# DecodeStream?
self.d.DecodeFile(str(path))
res = [None] * self.d.Barcodes.length
for i in range(0, self.d.Barcodes.length):
b = self.d.Barcodes.item(i)
res[i] = Barcode(self.types.get(b.BarcodeType, 'Unknown'), b.Text)
return res
def __call__(self, img):
# Temporary files on Windows are pain
img_temp = tempfile.NamedTemporaryFile(suffix='.png', delete=False)
try:
msg = 'Writing temp file [{0}] for Data Symbol'
debug_print(msg.format(img_temp.name))
cv2.imwrite(img_temp.name, img)
img_temp.close()
return self.decode_file(img_temp.name)
finally:
os.unlink(img_temp.name)
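# --- Usage sketch (not part of the original module) ---
# Assumes the DataSymbol BarcodeReader COM SDK is installed and registered on
# Windows, and that 'label.png' (a hypothetical path) contains a Data Matrix
# barcode. Shown only to illustrate how the engine above might be driven.
if __name__ == '__main__':
    if DataSymbolEngine.available():
        engine = DataSymbolEngine(datamatrix=True)
        for barcode in engine(cv2.imread('label.png')):
            print(barcode)
    else:
        print('DataSymbol SDK not available on this machine')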
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,974 |
Q: Using Hadoop to perform DML operations on large fixed-format files We have a product that uses a MySQL database as the data-store. The data-store holds a large amount of data. The problem we are facing is that the response time of the application is very slow. The database queries are very basic with very simple joins, if any. The root cause for the slow response time, according to some senior employees, is the database operations on the huge data-store.
Another team in our company had worked on a project in the past where they processed large fixed-format files using Hadoop and dumped the contents of these files into database tables. Borrowing from this project, some of the team members feel that we can migrate from using a MySQL database to simple fixed-format files that will hold the data instead, with one file corresponding to each table in the database. We can then build another data interaction layer that provides interfaces for performing DML operations on the contents of these files. This layer will be developed using Hadoop and the MapReduce programming model.
At this point, several questions come to my mind.
1. Does the problem statement fit into the kind of problems that are solved using Hadoop?
2. How will the application ask the data interaction layer to fetch/update/delete the required data? As far as my understanding goes, the files containing the data will reside on HDFS. We will spawn a Hadoop job that will process the required file (similar to a table in the db) and fetch the required data. This data will be written to an output file on HDFS. We will have to parse this file to get the required content.
3. Will the approach of using fixed format-files and processing them with Hadoop truly solve the problem?
I have managed to set up a simple node cluster with two Ubuntu machines but after playing around with Hadoop for a while, I feel that the problem statement is not a good fit for Hadoop. I could be completely wrong and therefore want to know whether Hadoop fits into this scenario or is it just a waste of time as the problem statement is not in line with what Hadoop is meant for?
A: I would suggest going straight to Hive (http://hive.apache.org/). It is a SQL engine / data warehouse built on top of Hadoop MapReduce.
In a nutshell, it gives you Hadoop's scalability together with Hadoop's high latency.
I would consider storing the bulk of the data there, doing all the required transformations, and moving only summarized data to MySQL to serve queries. Usually it is not a good idea to translate user requests into Hive queries directly - they are too slow, and running jobs in parallel is not trivial.
A: If you are planning to update data often, then storing it directly in Hadoop may not be a good option for you. To update a file in Hadoop you may have to rewrite the file, then delete the old file and copy the new file into HDFS.
However, if you are just searching and joining the data, then it is a good option. If you use Hive, you can write SQL-like queries.
In Hadoop your workflow could be something like the following (a minimal mapper sketch is given after this list):
* You will run a Hadoop job for your queries.
* Your Hadoop program will parse the query and execute jobs that join and read files based on your query and input parameters.
* Your output will be generated in HDFS.
* You will copy the output to the local file system and then show the output to your program.
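For illustration only (the file layout, column widths, and file name here are made up), the record-parsing part of such a job could be a Hadoop Streaming mapper written in Python that reads fixed-width records and emits key/value pairs:

    #!/usr/bin/env python
    # mapper.py -- hypothetical fixed-width records:
    #   columns 0-9   -> record key
    #   columns 10-29 -> value field
    import sys

    for line in sys.stdin:
        line = line.rstrip('\n')
        if len(line) < 30:
            continue  # skip malformed records
        key = line[0:10].strip()
        value = line[10:30].strip()
        print('%s\t%s' % (key, value))

A reducer would then aggregate or join on the emitted keys; Hive essentially generates this kind of job for you from SQL-like queries.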
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,207 |
Children of God, or sons (and daughters) of God, is a designation for Christians used by Paul in the New Testament of the Bible.
Through this "kinship" in God they are heirs or descendants of God (cf. e.g. ) and co-heirs with Christ (cf. e.g. ). Through communion with the Son of God, Jesus, this expression says, they share in his divine sonship. They are no longer servants (cf. e.g. ) and not slaves of sin (cf. e.g. ) who are forced to follow an alien will (of the "flesh"), but his children, who in the freedom of love (evangelical freedom) act out of their own insight in order to fulfil the commandments and do good in the name of God (cf. e.g. ). In the spirit of sonship, Christians may be called children of God (cf. ) and can cry out: Abba, Father (cf. ).
Literature
External links
Group of people (New Testament)
Paulus von Tarsus | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,748 |
\section{Appendix}
\vspace{-1mm}
\subsection{Bug Injection Heuristics and Common Weakness Enumeration Types}
\label{appendix:bug_injection_cwe}
\vspace{-1mm}
Our automated bug injection heuristics are motivated by real-world security bugs, which are often small but hazardous. We empirically summarize frequently occurring vulnerable patterns based on concrete CWE types. Table~\ref{tab: cwe} shows that each of our operations relates to several CWE types. We inject all these security issues automatically and ask the model to distinguish them from benign samples.
\vspace{-2mm}
\subsection{Node Type Details}
\vspace{-1mm}
We parse the source code into ASTs and extract the node type and parent node type for each token. Table~\ref{tab: node_type} shows an example after parsing. We can see that, with the parent node type, each token can be well embedded with its local structural context. Consider two tokens that are distant from each other: \texttt{if} and \texttt{else}. With only node types, we just know that these two tokens are keywords, but with the parent node type, we can easily tell that they come from the same \texttt{if-statement} and are siblings in the AST.
\vspace{-2mm}
\subsection{Dataset }
\vspace{-1mm}
\noindent\textbf{Pre-training} We collect our dataset from C and Java GitHub repositories. Our main dataset is the combination of Java {\sc small} and C {\sc small}. From Table~\ref{tab: pretrain_data_stats}, we can see that our dataset is significantly smaller than those used by existing pre-trained models. For an ablation study (\S~\ref{subsec:medium_model}) with an enlarged dataset, we collect a {\sc medium} dataset of C code. We observe improvements with this larger dataset, yet even the {\sc medium} dataset is still much smaller than the other datasets.
\noindent\textbf{Datasets for downstream tasks} We provide dataset details of our downstream tasks in Table~\ref{tab: downstream_dataset}. Note that for POJ-104~\citep{mou2016convolutional}, Table~\ref{tab: downstream_dataset} only shows the number of code samples; we follow the design of CodeXGLUE, which builds positive and negative pairs during minibatch generation, so the number of pairs used for training is much larger than the number of samples.
\input{acl2022/tables/dataset_full}
\input{acl2022/tables/bug_injection_cwe}
\input{acl2022/tables/token_and_type}
\input{acl2022/tables/downstream_dataset}
\begin{figure*}
\vspace{-3mm}
\centering
\includegraphics[width=0.95\textwidth]{acl2022/figures/reveal_example}
\caption{An example in REVEAL dataset. The patched code happens to be in the train split and the buggy code is in the test split. During inference, {\sc DISCO}\xspace MLM+CLR$^\pm$\xspace model can correctly predict the buggy code as vulnerable, while MLM+CLR$^+$ predicts it as benign.}
\label{fig:reveal_example}
\vspace{-3mm}
\end{figure*}
\vspace{-1mm}
\subsection{Configuration }
\vspace{-1mm}
\label{appendix: model_config}
{\sc DISCO}\xspace is built based on a stack of 12 layers transformer encoder with 12 attention heads and 768 hidden sizes. The model is implemented with PyTorch-1.9.0 and Huggingface-transformer-4.12.3~\footnote{\url{https://github.com/huggingface/transformers/tree/c439752482759c94784e11a87dcbf08ce69dccf3}}.
Longer sequences are disproportionately expensive, so we follow the original BERT design by pre-training the model with a short sequence length for the first 70\% of steps and a long sequence length for the remaining 30\% of steps to learn the positional embeddings. {\sc DISCO}\xspace is trained with Java {\sc small} and C {\sc small} for 24 hours in total on two 32GB NVIDIA Tesla V100 GPUs, using a batch size of 128 sequences with a max sequence length of 256 tokens and a batch size of 64 sequences with a max sequence length of 512 tokens.
{\sc DISCO}\xspace is also trained with C {\sc medium} for 3 days, using batch size of 1024 sequences with max sequence length of 256 tokens and batch size of 512 sequences and max sequence length of 512 tokens.
We use the Adam optimizer and a learning rate of 1e-4 for pre-training. For fine-tuning tasks, we use a batch size of 8 and a learning rate of 8e-6. We train the model on the train split and evaluate it during training on the validation split. We pick the model with the best validation performance for testing.
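For reference, a minimal sketch of the fine-tuning optimizer setup described above (our own illustration using the standard PyTorch API; \texttt{model} is assumed to be the fine-tuned network, and this is not the exact training script):
\begin{verbatim}
from torch.optim import Adam

# fine-tuning: batch size 8, learning rate 8e-6,
# checkpoint selected by best validation score
optimizer = Adam(model.parameters(), lr=8e-6)
\end{verbatim}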
\vspace{-1mm}
\subsection{Case Study}
\vspace{-1mm}
We studied the model performance on the REVEAL dataset for vulnerability detection. Figure~\ref{fig:reveal_example} shows two samples inside REVEAL. We can recognize that they are from the same program. We further checked the details of these two examples and found that the code on the left is a buggy version, which is fixed by adding an argument of value 0 to the function call. This real-world situation matches our ``Change of Function Calls'' (\S~\ref{subsec:bug_injection}) bug injection operation. In the REVEAL dataset, the patched code is in the training corpus while the buggy one is in the test split. Interestingly, during inference, {\sc DISCO}\xspace MLM+CLR$^\pm$\xspace correctly predicts the buggy code as vulnerable while MLM+CLR$^+$ fails. This empirically proves that our bug-injected samples can help the model identify small but significant real-world vulnerabilities.
\section{Introduction}
\label{sec1:intro}
Large pre-trained models have been applied to source code and reported promising performance on code understanding and generation tasks. Pre-training on an enormous amount of open-source programs, existing techniques have successfully captured textual code features using token-based tasks~\citep{feng2020codebert, buratti2020cbert, cubert}, such as masked language modeling and next sentence prediction. Some works further explored the potential of leveraging structural characteristics of programming languages to understand code~\citep{guo2021graphcodebert, Jiang2021TreeBERT}.
However, even with good understanding about code tokens and structures, the model can still get confused when trying to differentiate code functionalities.
Two programs can have different tokens and structures while achieving the same functionality, \hbox{\emph{i.e.,}}\xspace semantic clones. On the other hand, two programs that share the syntax and most of the tokens can have significantly distinct behaviors. For instance, if we just replace $<$ with $\leq$ in a boundary-checking statement before accessing an array, the modification
can trigger a severe security vulnerability, \hbox{\emph{e.g.,}}\xspace a buffer overflow bug\footnote{https://en.wikipedia.org/wiki/Buffer\_overflow}.
Existing pre-training techniques do not explicitly learn to capture such subtle functional differences.
In addition, existing pre-training techniques always require really large datasets and expensive hardware to achieve promising performance for downstream tasks. For developers and researchers who have limited access to large datasets and premium equipment, it is challenging to explore such source code pre-training.
In this work, we present {\sc DISCO}\xspace, a self-supervised pre-trained model that jointly learns the textual, syntactic, and functional properties of source code.
To learn about the structural properties of code, we propose a new pre-training task, \emph{node-type masked language model (NT-MLM)}, which can embed the local tree-based contexts together with the token-based contexts into each token.
To deal with the limitation of existing pre-training techniques, we intentionally train the model to recognize similar programs and to differentiate buggy programs from benign ones, going beyond textual and syntactic code similarity.
However, collecting a human-labeled dataset that has both functionally equivalent and bug/patch code pairs and contains enough data for model pre-training is very difficult and time-consuming, so we design structure-guided code transformation heuristics to automatically augment each training sample with one positive and one hard negative contrast. The positive contrast is functionally equivalent to the original code while looking quite different. The negative contrast is textually and syntactically very similar code, but injected with bugs and security issues. We apply contrastive learning to bring the functionally similar code embeddings closer and separate the buggy programs further from the benign ones in the vector space.
We carefully designed {\sc DISCO}\xspace's pre-training to learn about syntactic and functional relationship between code components and we expect {\sc DISCO}\xspace to learn sufficient knowledge for downstream applications from limited amount of data, consequently saving computing resources.
To this end, we pre-train {\sc DISCO}\xspace on a \emph{small} dataset, with 865 MB of C code and 992 MB Java code from 100 most popular GitHub, and evaluate the model on code understanding and generation tasks: vulnerability detection, code clone detection and code summarization. Experiments show that our small models can outperform many baselines that are pre-trained on much larger datasets. Our analysis further illustrates that by adding hard negative code samples and the new NT-MLM objective, the model can better understand code during pre-training. The ablation study also reveals the potentials of pre-training our model with larger datasets and further improve the performance.
In summary, our major contributions are: 1) We design structure-guided code transformation heuristics to augment training data without human labels. We introduce \emph{hard negative code samples} for the first time, to better learn code contrasts. Our automated bug injection strategy successfully injects real-world security issues into the code. 2) We propose a new pre-training task, NT-MLM, to embed structural context into each token embedding. 3) We develop {\sc DISCO}\xspace, a self-supervised pre-training technique that can jointly and efficiently learn the textual, syntactic, and functional properties of code. Even though pre-trained with significantly less data, {\sc DISCO}\xspace can match or outperform the state-of-the-art models on code understanding and generation tasks.
\section{Introduction}
\label{sec1:intro}
Understanding the functional similarity/dissimilarity of source code is at the core of several code modeling tasks such as software vulnerability and code clone detection, which are important for software maintenance~\cite{kim2017vuddy, li2016vulpecker}.
Existing pre-trained Transformer models~\citep{guo2021graphcodebert, feng2020codebert, ahmad2021plbart} show promises for understanding code syntax (\hbox{\emph{i.e.,}}\xspace tokens and structures). However, they still get confused when trying to identify functional (dis)-similarities. For instance, syntax-based models can embed two code fragments with identical functionality but very different tokens and structures as distinct vectors and fail to identify them as semantically similar.
Likewise, these models cannot distinguish between two code fragments that differ in functionalities but share a close syntactic resemblance. For example, consider an if statement {\tt if(len(buf) $<$ N)} checking buffer length before accessing the buffer. Keeping the rest of the program the same, if we simply replace the token `$<$' with `$\leq$,' the modification can potentially trigger security vulnerability, \hbox{\emph{e.g.,}}\xspace buffer overflow bug\footnote{https://en.wikipedia.org/wiki/Buffer\_overflow}. It is challenging for existing pre-training techniques to tell apart such subtle differences in the functionalities.
In addition, existing pre-training techniques rely on a huge volume of training corpus that is randomly selected. For fine-tuning tasks like code clone detection or vulnerability detection, such random selection of training data is never tailored to teach the model about code functionalities.
To address these limitations, we present {\sc DISCO}\xspace, a self-supervised pre-trained model that jointly learns the general representations of source code and specific functional features for identifying source code similarity/dis-similarity.
Similar to state-of-the-art pre-trained Transformer models~\citep{devlin-etal-2019-bert, liu2019roberta}, we apply the standard masked language model (MLM) to capture the token features of source code.
To learn about the structural code properties, we propose a new auxiliary pre-training task that consumes additional inputs of local tree-based contexts (\hbox{\emph{e.g.,}}\xspace parent or sibling nodes in abstract syntax trees) and embeds such structural context, together with the token-based contexts, into each token representation. On top of such well-learned general code representations, we further incorporate prior knowledge of code clones and vulnerable programs into the pre-training to help the model learn the functional (dis)-similarity. We design structure-guided code transformation heuristics to automatically augment each training sample with one synthetic code clone (\hbox{\emph{i.e.,}}\xspace positive samples) that is structurally different yet functionally identical and one vulnerable contrast (\hbox{\emph{i.e.,}}\xspace hard negative samples) that is syntactically similar but injected with security bugs.
During the pre-training, {\sc DISCO}\xspace learns to bring similar programs closer in the vector space and differentiate the benign code from its vulnerable contrast, using a contrastive learning objective. Since we augment the dataset in a more targeted way than existing works and the model {\em explicitly} learns to reason about a code \hbox{\emph{w.r.t.}}\xspace its functional equivalent and different counterparts during pre-training, {\sc DISCO}\xspace can learn sufficient knowledge for downstream applications from a limited amount of data, consequently saving computing resources. In particular, we evaluate {\sc DISCO}\xspace for clone detection and vulnerability detection, as the knowledge of similar/dissimilar code fragments is at the core of these tasks.
\vspace{-0.5mm}
To this end, we pre-train {\sc DISCO}\xspace on a \emph{small} dataset, with only 865 MB of C code and 992 MB Java code from 100 most popular GitHub repositories, and evaluate the model on four different datasets for vulnerability and code clone detection. Experiments show that our small models outperform baselines that are pre-trained on 20$\times$ larger datasets.
The ablation study (\S \ref{subsec:medium_model}) also reveals that pre-training our model with 10$\times$ larger datasets further improves the performance up to 8.2\%, outperforming state-of-the-art models by 1\% for identifying code clones and up to 9.6\% for vulnerability detection, even if our dataset is still smaller.
\vspace{-0.5mm}
In summary, our contributions are: 1) We design structure-guided code transformation heuristics to automatically augment training data to integrate prior knowledge of vulnerability and clone detection without human labels. 2) We propose a new pre-training task to embed structural context to each token embedding. 3) We develop {\sc DISCO}\xspace, a self-supervised pre-training technique that jointly and efficiently learns the textual, structural, and functional properties of code. Even though pre-trained with significantly less data, {\sc DISCO}\xspace matches or outperforms the state-of-the-art models on code clone and vulnerability detection.
\vspace{-1mm}
\section{Introduction}
\label{sec1:intro}
Large pre-trained models have been applied for source code and reported promising performance for code understanding and generation tasks.
These models have successfully captured the code features by treating code as text sequences~\citep{feng2020codebert, buratti2020cbert, cubert}.
Some works further explored the potentials of leveraging structural characteristics of programming languages (\hbox{\emph{e.g.,}}\xspace abstract syntax tree and code graph) to understand code~\citep{guo2021graphcodebert, Jiang2021TreeBERT}.
However, even with a good understanding of code syntax (\hbox{\emph{i.e.,}}\xspace tokens and structures),
such pre-trained models can get confused when understanding code functionalities.
For instance, two code fragments having identical functionality (a.k.a. semantic clones) but different syntax may not be recognized as similar by the existing models.
Likewise, these models cannot distinguish between two code fragments that differ in functionalities but share close syntactic resemblance. For example, consider an if statement {\tt if(len(buf) $<$ N)} checking buffer length before accessing the buffer. Keeping the rest of the program the same, if we simply replace the token `$<$' with `$\leq$', the modification can potentially trigger security vulnerability \hbox{\emph{e.g.,}}\xspace buffer overflow bug\footnote{https://en.wikipedia.org/wiki/Buffer\_overflow}. It is challenging for existing pre-training techniques to capture such subtle differences in the functionalities.
To address these limitations, we present {\sc DISCO}\xspace, a self-supervised pre-trained model that jointly learns code tokens, structures and functionalities.
For understanding code tokens, we use a standard masked language model (MLM).
To learn about the structural code properties, we propose a new pre-training task, \emph{node-type masked language model (NT-MLM)}, which can embed the local tree-based contexts together with the token-based contexts into each token.
Finally, to capture functional properties, we carefully design the pre-training to recognize the structurally different yet functionally identical programs (\hbox{\emph{i.e.,}}\xspace positive samples) as similar and to differentiate between structurally close but functionally different programs (\hbox{\emph{i.e.,}}\xspace negative samples). We apply contrastive learning to bring the functionally similar code embeddings closer in the vector space and vice versa.
However, collecting a human-annotated dataset having both positive and negative samples, especially at the scale required for usual pre-training, is difficult and time-consuming. To automate this, we design structure-guided code transformation heuristics that augment each training sample with one positive and one hard negative contrast.
The positive contrast is functionally equivalent to the original code while looking quite different. The negative contrast is textually and syntactically very similar but not functionally identical, and is likely to contain bugs and security issues. Using contrastive learning, we aim to place all the functionally similar code embeddings closer together and separate the buggy programs further from the benign ones in the vector space.
An additional benefit of such carefully designed positive and negative sample generation is that we augment the data in a targeted manner. In contrast, existing pre-training techniques rely on a huge volume of training corpus that is randomly selected. Given that the main motivation of pre-training is to understand a program's functionality~\citep{ahmad2021plbart}, such a random selection of training data is not tailored for explicitly learning the functionalities. Our approach, instead, learns them through contrastive reasoning. Since the model {\em explicitly} learns to reason about a code \hbox{\emph{w.r.t.}}\xspace its functionally equivalent and different counterparts during pre-training, our pre-trained model outperforms other models even with a much smaller pre-training dataset.
To this end, we carefully design {\sc DISCO}\xspace's pre-training to learn the syntactic and functional relationships between code components.
We pre-train {\sc DISCO}\xspace on a \emph{small} dataset, with 865 MB of C code and 992 MB of Java code from the 100 most popular GitHub repositories, and evaluate the model on code understanding and generation tasks: vulnerability detection, code clone detection, and code summarization. Experiments show that our small models can outperform many baselines that are pre-trained on much larger datasets. Our analysis further illustrates that by adding carefully designed positive and negative code samples and the new NT-MLM objective, the model can better understand code during pre-training. The ablation study also reveals the potential of pre-training our model with larger datasets to further improve the performance.
In summary, our major contributions are: 1) We design structure-guided code transformation heuristics to automatically augment training data without human labels. We introduce \emph{negative code samples} (carefully crafted to inject real-world bugs) for the first time, to better learn code contrasts. Our bug injection strategy successfully injects real-world security issues into the code. 2) We propose a new pre-training task, NT-MLM, to embed structural context into each token embedding. 3) We develop {\sc DISCO}\xspace, a self-supervised pre-training technique that can jointly and efficiently learn the textual, structural, and functional properties of code. Even though pre-trained with significantly less data, {\sc DISCO}\xspace can match or outperform the state-of-the-art models on code understanding and generation tasks.
\section{Related Works}
\label{sec2:related}
\textbf{Pre-training for Source Code.}
Researchers have actively explored pre-training Transformer models~\citep{vaswani2017transformer} for source code in two categories: encoder-only and encoder-decoder~\citep{ahmad2021plbart, wang2021codet5,roziere2021dobf,phan2021cotext}. Our work focuses on pre-training encoder-only Transformer models to understand code. Existing models are pre-trained with different token-level objectives, such as
masked language model (MLM) \citep{cubert, buratti2020cbert}, next sentence prediction (NSP) \citep{cubert}, replaced token detection, and bi-modal learning between source code and natural languages \citep{feng2020codebert}. However, these approaches ignore the underlying structural information to fully understand the syntax and semantics of programming languages. Recently, more works aimed to understand the strict-defined structure of source code leveraging abstract syntax tree (AST) \citep{zuegner_code_transformer_2021, Jiang2021TreeBERT}, control/data flow graphs (CFG/DFG) \citep{guo2021graphcodebert}. {\sc DISCO}\xspace leverages code structures differently from existing works in two ways: (a.) with AST/CFG/DFG, we automatically generate program contrasts to augment the datasets targeting specific downstream tasks. (b.) {\sc DISCO}\xspace takes an additional input of local AST context, and we propose a new cloze task to embed local structural information into each token representation.
\vspace{-1mm}
\noindent\textbf{Self-supervised Contrastive Learning.} Self-supervised contrastive learning, originally proposed for computer vision~\citep{chen20simclr}, has gained much interest in language processing~\citep{giorgi-etal-2021-declutr, wu2020clear, gao2021simcse}.
The common practice of self-supervised contrastive learning is building similar counterparts, without human interference, for the original samples and forcing the model to recognize such similarity from a batch of randomly selected samples.
Corder~\citep{bui2021corder} leverages contrastive learning to understand the similarity between a program and its functionally equivalent code. While the Corder approach helps code-similarity-detection applications, their pre-training does not learn to differentiate syntactically very close but functionally different programs.
Such differentiation is crucial for models to work well for bug detection~\citep{ding2020patching}. ContraCode~\citep{jain2020contrastive} also leverages contrastive learning. However, they generate negative contrast for a program from unrelated code examples, not from variants of the same code. They also do not encode the structural information into the code as we do. Inspired by the empirical findings that hard negative image and text samples are beneficial for contrastive learning~\citep{gao2021simcse, robinson2021hardnegative}, {\sc DISCO}\xspace learns both from equivalent code as the positive contrast, and functionally different yet syntactically close code as the \emph{hard-negative} contrast.
We generate hard-negative samples by injecting small but crucial bugs in the original code (\S \ref{subsec:bug_injection}).
\section{Data Augmentation Without Human Labels}
\label{sec3:data_aug}
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{iclr2022/figures/original_sample.png}
\caption{Original Code}
\label{fig:orig_sample}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{iclr2022/figures/positive_sample.png}
\caption{Functionally Equivalent Code}
\label{fig:pos_sample}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{iclr2022/figures/negative_sample.png}
\caption{Bug Injected Code}
\label{fig:neg_sample}
\end{subfigure}
\caption{An example illustrating data augmentation.~\ref{fig:orig_sample} shows the original code that is adapted from the CVE-2021-38094 patch.
\ref{fig:pos_sample} shows functionality equivalent code of ~\ref{fig:orig_sample} where the original code is transformed by renaming and statements permutation. \ref{fig:neg_sample} shows a small variation from~\ref{fig:orig_sample} where a potential integer overflow bug is injected.}
\label{fig:data_aug}
\vspace{-5mm}
\end{figure}
The goal of our pre-training is to train the model to identify similar programs that can be structurally different (positive samples) and to differentiate buggy programs (negative samples) that share structural resemblance with benign ones. Thus, for each original sample, we need a labeled positive and a negative example. Manually collecting such a dataset is expensive, especially at the scale of pre-training. Here we design code transformation heuristics to automatically generate similar and buggy counterparts of a given code, so that the transformation can be applied to any number of programs without human effort.
Given a code sample, we first parse the code into the AST.
We analyze the AST and build a control/data flow graph from the AST.
We use this graph to apply our code transformation heuristics.
For every original code sample ($x$), we apply semantic preserving transformation heuristics (\S\ref{subsec:similar_code_generation}) to generate a positive sample ($x^+$).
We also apply bug injection heuristics (\S \ref{subsec:bug_injection}) to generate a hard-negative code example ($x^-$).
We design the heuristics in a way that makes $x^+$ be the functional equivalent or semantic clone of $x$, and $x^-$ be the buggy/noisy version of $x$.
It is also worth noting that, not all heuristics are applicable to all code samples; we decide on applicable heuristic based on the flow graph of original code.
Figure~\ref{fig:data_aug} shows
an
example of the code transformation.
The code in figure \ref{fig:pos_sample} is the functional equivalent of figure \ref{fig:orig_sample}; even though they look drastically different.
Figure \ref{fig:neg_sample} shows a buggy version of figure \ref{fig:orig_sample}, where the original code is injected with a potential integer-overflow bug and the expected behavior of the resultant code is unpredictable.
\subsection{Bug Injection}
\label{subsec:bug_injection}
For generating hard negative sample ($x^-$) from a given code ($x$), we define six categories of bug injection heuristics.
While generating such hard negative samples, our goal is to maintain maximum textual similarity of the original code, so that the model can learn to analyze source code beyond textual similarity.
These heuristics are inspired by the buggy code patterns from a wide number of Common Weakness Enumeration (CWE) types (Appendix~\ref{appendix:bug_injection_cwe}).
While it is very difficult to guarantee that $x^-$ will exhibit vulnerability or security bug, our heuristics will force $x^-$ to exhibit different functionality than $x$.
\textbf{Misuse of Data Type.}
Usage of a wrong data type can trigger several security flaws in the code. For instance, usage of a smaller data type (\hbox{\emph{e.g.,}}\xspace~{\tt short}) in place of a larger one (\hbox{\emph{e.g.,}}\xspace~{\tt long}) may result in overflow bug (\hbox{\emph{e.g.,}}\xspace \citet{cve-2021-38094}). In addition, such errors are very difficult to track since these bugs are usually exhibited in input extremities (\hbox{\emph{i.e.,}}\xspace very large or very small values), and for languages that support implicit type casting, such incorrect type may even cause imprecision bug -- resulting in the unpredictable behavior of the code. We intentionally change the data types inside $x$ with wrong ones to inject potential security bugs.
\textbf{Misuse of Pointer.}
Incorrect use of pointers is a major concern of vulnerabilities in source code. Accessing an uninitialized pointer may lead to unpredictable behavior.
A {\tt NULL} pointer or already freed pointer could lead to Null Pointer Dereferencing vulnerability (\hbox{\emph{e.g.,}}\xspace \citet{cve-2021-3449}).
To inject pointer-related bugs, we randomly remove the initialization expression when a pointer is declared, or set some pointers to {\tt NULL}.
\textbf{Change of Conditional Statements.}
Programmers usually check necessary preconditions using {\tt if-statement} before doing any safety critical operation.
For instance, before accessing an array with an index, a programmer may add a condition checking the validity of the index.
Lack of such checks can lead to buffer-overflow bug in code (\hbox{\emph{e.g.,}}\xspace \citet{cve-2020-24020}).
We observe that such checks usually span only a few lines of code ($\leq$ 5 lines).
We introduce bugs in the code by removing such small {\tt if-statement}s.
In addition, we also inject bugs by modifying randomly selected arithmetic conditions.
We replace the comparison operator ($<$, $>$, $\leq$, $\geq$, $==$, $!=$) with another operator, to inject potential out-of-bound access, forcing the program to deviate from its original behavior.
\textbf{Misuse of Variables.}
When there are multiple variables present in a code scope (\hbox{\emph{e.g.,}}\xspace an {\tt if} block surrounding a number of statements), incorrect use of variables may lead to erroneous behavior of the program.
Such errors are known as {\sc VarMisuse} bug~\citep{allamanis2018learning}.
We induce code with such bugs by replacing a variable with another.
To keep the resultant code compilable, we perform scope analysis on the AST and replace a variable with another variable reachable in the same scope.
\textbf{Misuse of Values.}
Uninitialized variables, or variables with wrong values may alter the program behavior, may even cause security flaw (\hbox{\emph{e.g.,}}\xspace ~\citet{cve-2019-12730}). We modify the original code by removing the initializer expression of variable. In addition, to induce the code with {\tt divide-by-zero} vulnerability, we identify the potential divisor variables from the flow graph and forcefully assign zero values to them immediately before the division.
\textbf{Change of Function Calls.} We induce bug in the code by randomly changing arguments of function call. For a randomly selected function call, we add, remove, swap or assign {\tt NULL} value to arguments, forcing the code to behave unexpectedly.
\subsection{Similar Code Generation}
\label{subsec:similar_code_generation}
In order to generate positive samples ($x^+$) from a given code, we use three different heuristics.
In this case, our goal is to generate functionally equivalent code while inducing maximum textual difference.
These heuristics are inspired by code clones~\citep{funaro2010hybrid, sheneamer2018detection}.
\textbf{Variable Renaming.}
Variable renaming is a typical code cloning strategy and frequently happens during software development~\citep{ain2019syscodeclone}.
To generate such a variant of the original code, we either (a.) rename a variable in the code with a random identifier name or (b.) with an abstract name such as \texttt{VAR\_i}~\citep{roziere2021dobf}.
While choosing random identifier names, we only select identifiers that are available in the dataset.
For any variable renaming, we ensure that both the definition of the variable and subsequent usage(s) are renamed.
We also ensure that a name is not used to rename more than one variable.
\textbf{Function Renaming.}
We rename function calls with abstract names like \texttt{FUNC\_i}. By doing this, we make more tokens different compared with the original code but keep the same syntax and semantics. We do not rename
library calls for the code (\hbox{\emph{e.g.,}}\xspace, \texttt{memcpy()} in C).
\textbf{Permutation of Statements.}
Relative order among the program statements that are independent of each other can be changed without altering the functionality of code.
More specifically, we focus on the variable declaration or initialization statements.
We first conduct dependency analysis to identify a set of local variables that do not depend on other values for initialization. Then we move their declaration statements to the beginning of the function and permute them.
\section{Data Augmentation Without Human Labels}
\label{sec3:data_aug}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{acl2022/figures/original_sample.png}
\caption{Original Code}
\label{fig:orig_sample}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{acl2022/figures/positive_sample.png}
\caption{Functionally Equivalent Code}
\label{fig:pos_sample}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{acl2022/figures/negative_sample.png}
\caption{Bug Injected Code}
\label{fig:neg_sample}
\end{subfigure}
\caption{An example illustrating data augmentation.~\ref{fig:orig_sample} shows the original code that is adapted from the CVE-2021-38094 patch.
\ref{fig:pos_sample} shows functionality equivalent code of ~\ref{fig:orig_sample} where the original code is transformed by renaming and statements permutation. \ref{fig:neg_sample} shows a variation from~\ref{fig:orig_sample} where a potential integer overflow bug is injected.}
\label{fig:data_aug}
\vspace{-5mm}
\end{figure*}
Our pre-training aims to identify similar programs that can be structurally different (positive sample) and differentiate the buggy programs (negative sample) that share structural resemblances with the benign ones. Thus, we need a labeled positive and a negative example for each original sample. Manually collecting them
is expensive, especially at the scale of pre-training. To this end, we design code transformation heuristics to automatically generate such positive and negative samples so that the transformation can be applied to any amount of programs without human efforts.
We first represent a code sample as Abstract Syntax Tree (AST), and build a control/data flow graph from the AST.
The code transformation heuristics are then applied to this graph.
For every original code sample ($x$), we apply semantic preserving transformation heuristics (\S\ref{subsec:similar_code_generation}) to generate a positive sample ($x^+$) and a bug injection heuristics (\S \ref{subsec:bug_injection}) to generate a hard-negative code example ($x^-$).
We design the heuristics in a way that makes $x^+$ be the functional equivalent or semantic clone of $x$ and $x^-$ be the buggy/noisy version of $x$.
Note that not all heuristics are applicable to all code samples; we decide on applicable heuristics based on the flow graph of the original code.
Figure~\ref{fig:data_aug} shows an example of the code transformation.
\subsection{Bug Injection}
\label{subsec:bug_injection}
To generate a hard negative sample ($x^-$) from a given code ($x$), we define six categories of bug injection heuristics.
Here our goal is to maintain maximum token-level similarity to the original code, so that the model can learn to analyze source code beyond token-level similarity.
These heuristics are inspired by the buggy code patterns from a wide range of Common Weakness Enumeration (CWE) types (Appendix~\ref{appendix:bug_injection_cwe}).
While it is challenging to guarantee that $x^-$ will exhibit vulnerability or security bug, our heuristics will force $x^-$ to exhibit different functionality than $x$. Compared with a concurrent work from \citet{allamanis2021self}, our methods are significantly different. First, we focus on concrete types of security bugs that have been identified by the security experts, while they mainly target regular bugs. Second, our scope is not only bug detection but clone detection as well, and we apply contrastive learning to differentiate the code functionalities of code clones and vulnerabilities.
\noindent\textbf{Misuse of Data Type.}
Usage of the wrong data type can trigger several security flaws.
For instance, using a smaller data type (\hbox{\emph{e.g.,}}\xspace~{\tt short}) to replace a larger one (\hbox{\emph{e.g.,}}\xspace~{\tt long}) may result in an overflow bug (\hbox{\emph{e.g.,}}\xspace \citet{cve-2021-38094}). Such errors are complicated to track since they are usually exhibited in input extremities (\hbox{\emph{i.e.,}}\xspace very large or very small values). For languages allowing implicit typecasting, such an incorrect type may even cause imprecision, resulting in the unpredictable behavior of the code. We intentionally change the data types in $x$ to inject potential bugs, while ensuring the code can still be compiled (\hbox{\emph{e.g.,}}\xspace we will not replace {\tt int} with {\tt char}).
\noindent\textbf{Misuse of Pointer.}
Incorrect pointer usage is a major security concern.
Accessing uninitialized pointers may lead to unpredictable behavior.
A {\tt NULL} pointer or freed pointer could lead to Null Pointer Dereferencing vulnerability (\hbox{\emph{e.g.,}}\xspace \citet{cve-2021-3449}).
To inject such bugs, we randomly remove the initialization expression during pointer declaration, or set some pointers to {\tt NULL}.
\noindent\textbf{Change of Conditional Statements.}
Programmers usually check necessary preconditions using {\tt if-statement} before doing any safety-critical operation.
For instance, before accessing an array with an index, a programmer may add a condition checking the validity of the index.
Lack of such checks can lead to buffer-overflow bugs in code (\hbox{\emph{e.g.,}}\xspace \citet{cve-2020-24020}).
We introduce bugs in the code by removing such small {\tt if-statement}s.
In addition, we also inject bugs by modifying randomly selected arithmetic conditions--- replace the comparison operator ($<$, $>$, $\leq$, $\geq$, $==$, $!=$) with another operator, to inject potential out-of-bound access, forcing the program to deviate from its original behavior.
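As a simplified, token-level illustration of the operator-replacement part of this heuristic (our own sketch; the actual transformation operates on the parsed AST rather than on raw token lists, and may choose any replacement operator):
\begin{verbatim}
import random

# one possible replacement per comparison operator
FLIPS = {'<': '<=', '<=': '<', '>': '>=',
         '>=': '>', '==': '!=', '!=': '=='}

def flip_comparison(tokens):
    # pick one comparison operator at random and replace it
    idxs = [i for i, t in enumerate(tokens) if t in FLIPS]
    if idxs:
        i = random.choice(idxs)
        tokens = tokens[:i] + [FLIPS[tokens[i]]] + tokens[i + 1:]
    return tokens

# e.g. ['if', '(', 'idx', '<', 'N', ')'] may become
#      ['if', '(', 'idx', '<=', 'N', ')']
\end{verbatim}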
\noindent\textbf{Misuse of Variables.}
When there are multiple variables present in a code scope, incorrect use of variables may lead to erroneous behavior of the program.
Such errors are known as {\sc VarMisuse} bug~\citep{allamanis2018learning}.
We induce code with such bugs by replacing one variable with another.
To keep the resultant code compilable, we perform scope analysis on the AST and replace a variable with another variable reachable in the same scope.
\noindent\textbf{Misuse of Values.}
Uninitialized variables or variables with wrong values may alter the program behaviors and consequently cause security flaws (\hbox{\emph{e.g.,}}\xspace ~\citet{cve-2019-12730}). We modify the original code by removing the initializer expression of some variables. In addition, to induce the code with {\tt divide-by-zero} vulnerability, we identify the potential divisor variables from the flow graph and forcefully assign zero values to them immediately before the division.
\noindent\textbf{Change of Function Calls.} We induce bugs in the code by randomly changing arguments of function calls. For a randomly selected function call, we add, remove, swap, or assign {\tt NULL} value to arguments, forcing the code to behave unexpectedly.
\subsection{Similar Code Generation}
\label{subsec:similar_code_generation}
To generate positive samples ($x^+$) from a given code, we use three different heuristics.
In this case, our goal is to generate functionally equivalent code while inducing maximum textual difference.
These heuristics are inspired by code clone literature~\citep{funaro2010hybrid, sheneamer2018detection}.
\noindent\textbf{Variable Renaming.}
Variable renaming is a typical code cloning strategy and frequently happens during software development~\citep{ain2019syscodeclone}.
To generate such a variant of the original code, we either (a.) rename a variable in the code with a random identifier name or (b.) with an abstract name such as \texttt{VAR\_i}~\citep{roziere2021dobf}.
While choosing random identifier names, we only select available identifiers in the dataset.
We ensure that both the definition of the variable and subsequent usage(s) are renamed for any variable renaming. We also ensure that a name is not used to rename more than one variable.
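As a minimal string-level sketch of the abstract renaming (ours; the real heuristic renames identifiers via AST scope analysis so that the definition and all usages stay consistent):
\begin{verbatim}
import re

def rename_variables(code, variables):
    # replace each listed variable with an abstract name VAR_i
    for i, var in enumerate(variables):
        code = re.sub(r'\b%s\b' % re.escape(var),
                      'VAR_%d' % i, code)
    return code

# rename_variables("int len = 0; len += n;", ["len"])
# -> "int VAR_0 = 0; VAR_0 += n;"
\end{verbatim}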
\noindent\textbf{Function Renaming.}
We rename function calls with abstract names like \texttt{FUNC\_i}. By doing this, we make more tokens different compared with the original code but keep the same syntax and semantics. We do not rename
library calls for the code (\hbox{\emph{e.g.,}}\xspace \texttt{memcpy()} in C).
Note that even if tokens like \texttt{VAR\_i} and \texttt{FUNC\_i} are rare in normal code, the model will not be biased towards identifying samples with these tokens as positive samples. The reason is that, as shown in Figure~\ref{fig:model_arch}, $x^+, y^+$ and $z^+$ all potentially contain these abstract tokens, but the model learns to move $EMB_{x}$ closer to $EMB_{x^+}$ and further from $EMB_{y^+}$ and $EMB_{z^+}$, regardless of the existence of abstract tokens.
\noindent\textbf{Statement Permutation.}
The relative order among the program statements that are independent of each other can be changed without altering the code functionality.
More specifically, we focus on the variable declaration or initialization statements.
We first conduct the dependency analysis to identify a set of local variables that do not depend on other values for initialization. Then we move their declaration statements to the beginning of the function and permute them.
\section{{\sc DISCO}\xspace}
\label{sec4:model}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{iclr2022/figures/model-overview.png}
\caption{An illustration of {\sc DISCO}\xspace pre-training with a minibatch of three. The original code and its node types will be randomly masked with $[MASK]$, and the final representation of masked tokens will be used to recover their source tokens and node types. The original code, say $x$, will also be transformed to build ($x, x^+, x^-$). Then the pair will be fed into the \textbf{same} transformer encoder and get the embedding of each sequence with $[CLS]$ tokens for contrastive learning.}
\label{fig:model_arch}
\vspace{-5mm}
\end{figure}
This section presents the model architecture, input representation and pre-training tasks.
{\sc DISCO}\xspace uses a 12-layered transformer encoder~\citep{vaswani2017transformer} model similar to BERT~\citep{devlin-etal-2019-bert}.
We feed the model with both textual information from the source code and structural information from the syntax tree (\S\ref{subsec:input_repr}).
We pretrain {\sc DISCO}\xspace using three different pretraining tasks (\S\ref{subsec:pretraining-task}).
Figure \ref{fig:model_arch} depicts an example workflow of {\sc DISCO}\xspace.
We randomly select tokens in the original sample and mask them and their node types, and then use the embedding of these masks to predict them back. We further extract the sequence embeddings within a minibatch and learn to contrast them based on the code functionality.
\subsection{Input Representation}
\label{subsec:input_repr}
\textbf{Source Code.} Given a program ($x$), we apply a lexical analyzer to tokenize it based on the language grammar and flatten the program as a token sequence ($x_1 x_2 ... x_m$, where $x_i$ is i'th token in the code).
We further train a sentencepiece~\citep{kudo-richardson-2018-sentencepiece} tokenizer based on such flattened code token sequences with vocabulary size 20,000.
We use this tokenizer to divide the source code tokens into subtokens.
We prepend the subtoken sequence with a special token {\tt [CLS]} and append with a special token {\tt [SEP]}.
Finally, {\sc DISCO}\xspace converts the pre-processed code sequence $C = \{[CLS], c_1, c_2, ..., c_k, [SEP]\}$ to vectors $V^{src} = \{v_{[CLS]}^{src}, v_1^{src}, v_2^{src}, ..., v_k^{src}, v_{[SEP]}^{src}\}$ with a token embedding layer.
\textbf{AST Node Types.}
For every token in input code, we extract the node type ($tt$) from the syntax tree.
Since such types are all terminal node types (\hbox{\emph{e.g.,}}\xspace keyword, identifier, punctuation), these types alone do not provide enough information about the structure.
In order to add more information about the tree, for each token, we also extract its parent type ($pt$).
Such parent type provides us with information about the context of a token.
For instance, when parent type of an {\tt idenfier} is {\tt Function-Declarator}, we know that that identifier is a function name.
In contrast, when the {\tt identifier}'s parent is a {\tt Binary Expression}, it should a variable.
Consequently, we annotate each code sub-token $c_i$ with a type token $t = tt\#pt$.
It is worth noting that, subtokens coming from the same original code token will all have the same type.
Therefore, we have the node type sequence for the code $T = \{[CLS], t_1, t_2, ..., t_k, [SEP]\}$, and {\sc DISCO}\xspace converts it as vectors $V^{type} = \{v_{[CLS]}^{type}, v_1^{type}, v_2^{type}, ..., v_k^{type}, v_{[SEP]}^{type}\}$ with a type embedding layer.
Appendix Table~\ref{tab: node_type} shows an example of code tokens and their node types.
{\sc DISCO}\xspace generates token representation $v_i$ of subtoken $c_i$ as a sum of token embedding $v_i^{src}$ and type embedding $v_i^{type}$.
\subsection{Pre-training}
\label{subsec:pretraining-task}
Our aim for pretraining {\sc DISCO}\xspace is to train the model to learn robust representation of the source code. We aim training the {\sc DISCO}\xspace to learn representation of source code based on (a.) textual context, (b.) syntax tree, and (c.) code functionality. In that spirit, pretrain {\sc DISCO}\xspace to optimize on three different objectives, \hbox{\emph{i.e.,}}\xspace Masked Language Model (MLM), Node Type - Masked Language Model (NT-MLM) and Contrastive Learning (CLR).
For a given code $x$, we first embed the tokens and node-types to vectors $V = \{v_{[CLS]}, v_1, ..., v_{[SEP]}\}$. We optimize MLM loss ($\mathcal{L}_{MLM}$) (\S\ref{subsec:mlm}) and NT-MLM loss ($\mathcal{L}_{NT-MLM}$) (\S\ref{subsec:ntmlm}) based on $x$. These two loss functions learn about the textual and syntactic context of source code. For every code $x$ in a minibatch of input, we generate positive example $x^+$ and hard-negative example $x^-$ using the heuristics described in Section \ref{sec3:data_aug}. We optimize CLR loss ( $\mathcal{L}_{CLR}$) (\S\ref{subsec:clr}) on original code and its positive and hard-negative counterparts.
The final loss function to optimize for pretraining {\sc DISCO}\xspace is
\begin{equation}
\mathcal{L}(\theta) = \mathcal{L}_{MLM}(\theta) + \mathcal{L}_{NT-MLM}(\theta) + \mathcal{L}_{CLR}(\theta)
\end{equation}
\subsubsection{Masked Language Model}
\label{subsec:mlm}
We apply the standard masked language model to the original code ($x$). Given a source code sequence $C$, we randomly choose 15\% of tokens and replace them with a special token $[MASK]$ for 80\% of the time and a random token for 10\% of the time and leave the rest 10\% unchanged. We record the indices of masked token as $loc_m$, replaced token as $loc_r$ and unchanged tokens as $loc_u$ for node-type MLM. We define the union of these indices as $M = loc_m \cup loc_r \cup loc_u$. MLM will learn to recover the masked source code $\{c_i | i \in M\}$ given the transformer encoder's output $h_i$. We present the loss for MLM as
$\mathcal{L}_{MLM} = \sum_{i \in M} - log P(c_i | h_i)$
\subsubsection{Node-type Masked Language Model }
\label{subsec:ntmlm}
Token-based MLM re-builds the token using its surrounding tokens and successfully encodes the contextual information into each token representation. Motivated by MLM, we propose the tree-based context-aware pre-training task, to encode the structural context, such as parent, sibling and children nodes. As we shown in Figure~\ref{fig:model_arch}, we flatten the ASTs as sequences and we expect the flattened trees can well preserve the local structure information (i.e., sub-trees containing terminal nodes), and existing work~\citep{Chakraborty2020codit} has empirically shown such potentials. To this end, we introduce node-type masked language model (NT-MLM). Given the corresponding node type sequence $T$ of source code $C$, we mask the node types $\{t_p | p \in loc_m\}$ with the special token $[MASK]$, and replace the node types $\{t_q | q \in loc_r\}$ with random tokens. Specifically, by doing this, we make sure that if a source code token is chosen to be masked or replaced, its corresponding node type will be performed the same operation. NT-MLM will learn to recover the masked source code $\{t_i | i \in M\}$ given the transformer encoder's output $h_i$. We present the loss for NT-MLM as
$\mathcal{L}_{NT-MLM} = \sum_{i \in M} - log P(t_i | h_i)$
\subsubsection{Contrastive Learning }
\label{subsec:clr}
We adopt contrastive learning to focus on the functional characteristics of code. With the structure-guided code transformation algorithms in Section~\ref{sec3:data_aug}, we are able to generate a positive sample ($x^+$ in Figure~\ref{fig:model_arch}) and a hard negative sample ($x^-$ in Figure~\ref{fig:model_arch}) for each program in the dataset.
More specifically, we have a minibatch of $N$ programs, and for each program, we extract the sequence representation from the transformer outputs $\mathbf{h} = h_{[CLS]}$. We will augment every sequence in the minibatch with positive and negative samples, and then the minibatch is extended to N pairs of $(\mathbf{h}, \mathbf{h}^+, \mathbf{h}^-)$. We refer to the contrastive loss with hard negative samples from \citet{gao2021simcse} and we adapt it to our scope as follows. We use cosine similarity as the $sim()$ function and $\tau$ is the temperature parameter to scale the loss, and we use $\tau = 0.05$.
\begin{equation}
\mathcal{L}_{CLR} = -\log\frac{\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}^+\right)/\tau}}{\sum^{N}_{n=1}\left(\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}_{n}^+\right)/\tau} + \mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}_{n}^-\right)/\tau} \right)}
\end{equation}
\section{{\sc DISCO}\xspace}
\label{sec4:model}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{acl2022/figures/model-overview.png}
\caption{An illustration of {\sc DISCO}\xspace pre-training with a minibatch of three. The original code and its node types will be randomly masked with $[MASK]$, and the final representation of masked tokens will be used to recover their source tokens and node types. The original code, say $x$, will also be transformed to build ($x, x^+, x^-$). Then the triplet will be fed into the \textbf{same} Transformer encoder and get the embedding of each sequence with $[CLS]$ tokens for contrastive learning.}
\label{fig:model_arch}
\vspace{-5mm}
\end{figure*}
This section presents the model architecture, input representation, and pre-training tasks.
{\sc DISCO}\xspace uses a 12-layered Transformer encoder model similar to BERT.
We feed the model with both source code text and structure (AST) information (\S\ref{subsec:input_repr}).
We pre-train {\sc DISCO}\xspace using three different pre-training tasks (\S\ref{subsec:pretraining-task}).
Figure \ref{fig:model_arch} depicts an example workflow of {\sc DISCO}\xspace.
We randomly select tokens in the original sample, mask them and their node types, and then use the embedding of these masks to predict them back. We further extract the sequence embeddings within a minibatch and contrast them based on the code functionality.
\subsection{Input Representation}
\label{subsec:input_repr}
\textbf{Source Code.} Given a program ($x$), we apply a lexical analyzer to tokenize it based on the language grammar and flatten the program into a token sequence ($x_1 x_2 ... x_m$, where $x_i$ is the $i$-th token in the code).
We further train a sentencepiece~\citep{kudo-richardson-2018-sentencepiece} tokenizer based on such flattened code token sequences with vocabulary size 20,000.
We use this tokenizer to divide the source code tokens into subtokens.
We prepend the subtoken sequence with a special token {\tt [CLS]} and append with a special token {\tt [SEP]}.
Finally, {\sc DISCO}\xspace converts the pre-processed code sequence $C = \{[CLS], c_1, c_2, ..., c_k, [SEP]\}$ to vectors $V^{src} = \{v_{[CLS]}^{src}, v_1^{src}, v_2^{src}, ..., v_k^{src}, v_{[SEP]}^{src}\}$ with a token embedding layer.
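As a minimal sketch of this pre-processing (assuming the standard sentencepiece Python bindings; the model path and helper name are illustrative rather than our exact implementation):
\begin{verbatim}
import sentencepiece as spm

# hypothetical path to the 20k-vocabulary model trained on flattened code
sp = spm.SentencePieceProcessor(model_file="code_sp_20k.model")

def prepare_code_sequence(code_tokens):
    """Sub-tokenize lexer tokens and add the special markers."""
    subtokens = []
    for tok in code_tokens:          # tokens from the lexical analyzer
        subtokens.extend(sp.encode(tok, out_type=str))
    return ["[CLS]"] + subtokens + ["[SEP]"]
\end{verbatim}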
\noindent\textbf{Local AST Types.}
For every token in the input code, we extract the node type ($tt$) from the syntax tree.
Since such types are all terminal node types (\hbox{\emph{e.g.,}}\xspace keyword, identifier, punctuation), we do not get enough information about the structure only with these types.
In order to add more information about the tree, we also extract the parent node type ($pt$) of each token.
Such a parent type provides information about the structural context of a token. For instance, when the parent type of an {\tt identifier} is {\tt Function-Declarator}, we know that the identifier is a function name.
In contrast, when the {\tt identifier}'s parent is a {\tt Binary Expression}, it should be a variable.
Consequently, we annotate each code sub-token $c_i$ with a local AST-type token $t = tt\#pt$.
It is worth noting that sub-tokens coming from the same code token will all have the same type.
Therefore, we have the AST-type sequence for the code $T = \{[CLS], t_1, t_2, ..., t_k, [SEP]\}$, and {\sc DISCO}\xspace converts it as vectors $V^{type} = \{v_{[CLS]}^{type}, v_1^{type}, v_2^{type}, ..., v_k^{type}, v_{[SEP]}^{type}\}$ with a type embedding layer.
Appendix Table~\ref{tab: node_type} shows an example of code tokens and their AST types.
{\sc DISCO}\xspace generates token representation $v_i$ of subtoken $c_i$ as a sum of token embedding $v_i^{src}$ and type embedding $v_i^{type}$.
Thus, $V=V^{src} + V^{type}$.
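Below is a minimal PyTorch-style sketch of this input embedding; the vocabulary sizes and hidden dimension are illustrative.
\begin{verbatim}
import torch.nn as nn

class DiscoInputEmbedding(nn.Module):
    def __init__(self, token_vocab=20000, type_vocab=500, dim=768):
        super().__init__()
        self.tok_emb = nn.Embedding(token_vocab, dim)   # produces V^src
        self.typ_emb = nn.Embedding(type_vocab, dim)    # produces V^type

    def forward(self, token_ids, type_ids):
        # v_i = v_i^src + v_i^type for every position i
        return self.tok_emb(token_ids) + self.typ_emb(type_ids)
\end{verbatim}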
\subsection{Pre-training}
\label{subsec:pretraining-task}
We aim to pre-train {\sc DISCO}\xspace to learn representations of source code based on (a) token-based context, (b) AST-based context, and (c) code functionality. In that spirit, we pre-train {\sc DISCO}\xspace to optimize three different objectives, \hbox{\emph{i.e.,}}\xspace masked language model (MLM), local AST node type MLM (NT-MLM), and contrastive learning (CLR).
For a given program $x$, we first embed the tokens and node-types to vectors $V = \{v_{[CLS]}, v_1, ..., v_{[SEP]}\}$. We optimize MLM loss ($\mathcal{L}_{MLM}$) (\S\ref{subsec:mlm}) and NT-MLM loss ($\mathcal{L}_{NT-MLM}$) (\S\ref{subsec:ntmlm}) based on $x$. These two loss functions learn about the textual and syntactic context of source code. For every code sample $x$ in a minibatch of input, we generate a positive example $x^+$ and a hard-negative example $x^-$ using the heuristics described in Section \ref{sec3:data_aug}. We optimize CLR loss ( $\mathcal{L}_{CLR}$) (\S\ref{subsec:clr}) considering the original code and its positive and hard-negative counterparts.
The final loss function to optimize for pre-training {\sc DISCO}\xspace is
\begin{equation*}
\mathcal{L}(\theta) = \mathcal{L}_{MLM}(\theta) + \mathcal{L}_{NT-MLM}(\theta) + \mathcal{L}_{CLR}(\theta)
\end{equation*}
\subsubsection{Encoding Token-based Context}
\label{subsec:mlm}
We apply the standard masked language model to the original code ($x$). Given a source code sequence $C$, we randomly choose 15\% of the tokens; 80\% of the time we replace a chosen token with the special token $[MASK]$, 10\% of the time with a random token, and the remaining 10\% of the time we leave it unchanged. We record the indices of masked tokens as $loc_m$, replaced tokens as $loc_r$, and unchanged tokens as $loc_u$ for the node-type MLM. We define the union of these indices as $M = loc_m \cup loc_r \cup loc_u$. MLM learns to recover the masked source tokens $\{c_i \mid i \in M\}$ given the Transformer encoder's output $h_i$. We present the loss for MLM as
$\mathcal{L}_{MLM} = \sum_{i \in M} - \log P(c_i \mid h_i)$.
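For concreteness, the 80/10/10 masking procedure can be sketched as follows (Python; a simplified version, not our exact implementation). The same indices $loc_m$ and $loc_r$ are reused for the AST-type sequence in \S\ref{subsec:ntmlm}.
\begin{verbatim}
import random

def mask_for_mlm(token_ids, mask_id, vocab_size, rate=0.15):
    """Return the masked input and the index sets loc_m, loc_r, loc_u."""
    masked = list(token_ids)
    loc_m, loc_r, loc_u = [], [], []
    candidates = random.sample(range(len(token_ids)),
                               max(1, int(rate * len(token_ids))))
    for i in candidates:
        p = random.random()
        if p < 0.8:                       # 80%: replace with [MASK]
            masked[i] = mask_id
            loc_m.append(i)
        elif p < 0.9:                     # 10%: replace with a random token
            masked[i] = random.randrange(vocab_size)
            loc_r.append(i)
        else:                             # 10%: keep the token unchanged
            loc_u.append(i)
    return masked, loc_m, loc_r, loc_u
\end{verbatim}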
\subsubsection{Encoding AST-based Context }
\label{subsec:ntmlm}
Token-based MLM re-builds a token using its surrounding tokens and successfully encodes the contextual information into each token representation. Motivated by MLM, we propose a tree-based, context-aware pre-training task to encode the structural context, such as parent, sibling, and children nodes. As shown in Figure~\ref{fig:model_arch}, we flatten the ASTs into sequences, and we expect the flattened trees to preserve the local structure information (i.e., sub-trees containing terminal nodes); existing work~\citep{Chakraborty2020codit, Hellendoorn2020Global} has empirically shown such potential. To this end, we introduce the AST node-type masked language model (NT-MLM). Given the corresponding AST-type sequence $T$ of source code $C$, we mask the AST types $\{t_p | p \in loc_m\}$ with the special token $[MASK]$, and replace the AST types $\{t_q | q \in loc_r\}$ with random tokens. By doing this, we make sure that if a source code token is chosen to be masked or replaced, its corresponding AST type undergoes the same operation. NT-MLM learns to recover the masked AST types $\{t_i | i \in M\}$ given the Transformer encoder's output $h_i$. We present the loss for NT-MLM as
$\mathcal{L}_{NT-MLM} = \sum_{i \in M} - \log P(t_i \mid h_i)$.
A recent work, CodeT5~\citep{wang2021codet5}, also proposes to predict token types. However, our objective differs from theirs in both the high-level design and the detailed implementation. First, their objective predicts only a single token type, identifiers, while our approach predicts all possible AST types. Also, we consider not only the AST node type of each token but also its AST parent, to embed the local sub-tree context (\S \ref{subsec:input_repr}). Second, CodeT5 implements identifier tagging as a binary classification (0/1) for each token, while our NT-MLM reconstructs the local ASTs out of hundreds of distinct types.
\subsubsection{Contrastive Learning }
\label{subsec:clr}
We adopt contrastive learning to focus on the functional characteristics of code. With the structure-guided code transformation algorithms in Section~\ref{sec3:data_aug}, we are able to generate a positive sample ($x^+$ in Figure~\ref{fig:model_arch}) and a hard negative sample ($x^-$ in Figure~\ref{fig:model_arch}) for each program in the dataset.
More specifically, we have a minibatch of $N$ programs, and for each program, we extract the sequence representation from the Transformer outputs, $\mathbf{h} = h_{[CLS]}$. We augment every sequence in the minibatch with positive and negative samples, extending the minibatch to $N$ triplets $(\mathbf{h}, \mathbf{h}^+, \mathbf{h}^-)$. We adopt the contrastive loss with hard negative samples from \citet{gao2021simcse} and adapt it to our setting as follows. We use cosine similarity as the $sim()$ function, and $\tau$ is a temperature parameter that scales the loss; we use $\tau = 0.05$.
\footnotesize
\begin{equation*}
\mathcal{L}_{CLR} = -\log\frac{\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}^+\right)/\tau}}{\sum^{N}_{n=1}\left(\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}_{n}^+\right)/\tau} + \mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}_{n}^-\right)/\tau} \right)}
\end{equation*}
\normalsize
We also consider pre-training the model with only positive counterparts as a variation. In such a case, the minibatch contains $N$ pairs of $(\mathbf{h}, \mathbf{h}^+)$ and the loss is computed as
\footnotesize
\begin{equation*}
\mathcal{L}_{CLR} = -\log\frac{\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}^+\right)/\tau}}{\sum^{N}_{n=1}\left(\mathbf{e}^{\text{sim}\left(\mathbf{h}, \mathbf{h}_{n}^+\right)/\tau} \right)}
\end{equation*}
\normalsize
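For concreteness, a minimal PyTorch sketch of the hard-negative contrastive loss above (a batch-level implementation under our notation; the tensors are assumed to have shape $N \times d$) could look as follows.
\begin{verbatim}
import torch
import torch.nn.functional as F

def clr_loss(h, h_pos, h_neg, tau=0.05):
    # h, h_pos, h_neg: [CLS] embeddings of shape (N, d) for the original,
    # positive, and hard-negative programs in the minibatch.
    h, h_pos, h_neg = (F.normalize(t, dim=-1) for t in (h, h_pos, h_neg))
    sim_pos = h @ h_pos.t() / tau     # (N, N): sim(h_i, h_j^+)
    sim_neg = h @ h_neg.t() / tau     # (N, N): sim(h_i, h_j^-)
    logits = torch.cat([sim_pos, sim_neg], dim=1)      # (N, 2N)
    labels = torch.arange(h.size(0), device=h.device)  # target is h_i^+
    # cross-entropy over the 2N candidates reproduces the loss above,
    # averaged over the minibatch
    return F.cross_entropy(logits, labels)
\end{verbatim}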
\section{Experiments}
\label{sec5:expr}
In this section, we explain our experimental settings and report the results. We evaluate our model on vulnerability detection and code clone detection.
\subsection{Experimental Settings}
\label{subsec:expr_pretrain}
\textbf{Data.}
We collect our pre-training corpus from open-source C and Java projects.
We rank Github repositories by the number of stars and focus on the most popular ones.
After filtering out forks from existing repositories,
we collect the dataset for each language from top-100 repositories.
We only consider the ``.java'' and ``.c'' files for Java and C repositories, respectively, and we further remove comments and empty lines from these files.
The corresponding datasets for Java and C are of size 992MB and 865MB, respectively.
Our datasets are significantly \emph{smaller} than existing pre-training models~\citep{feng2020codebert, ahmad2021plbart, guo2021graphcodebert}.
For example, while CodeBERT and GraphCodeBERT are trained on 20GB of data, we use an order of magnitude less data.
Details of our datasets and the comparison can be found in Appendix Table~\ref{tab: pretrain_data_stats}.
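The following is an illustrative (and deliberately simplified) sketch of this pre-processing, assuming the repositories have already been cloned locally; a robust implementation would use a language-aware lexer rather than the regular expression below, which can also strip comment-like text inside string literals.
\begin{verbatim}
import re
from pathlib import Path

# C/Java line comments and block comments
COMMENT_RE = re.compile(r"//[^\n]*|/\*.*?\*/", re.S)

def clean_source_files(repo_dir, exts=(".java", ".c")):
    """Yield comment-free, non-empty source text from a repository."""
    for path in Path(repo_dir).rglob("*"):
        if path.suffix in exts:
            text = path.read_text(errors="ignore")
            text = COMMENT_RE.sub("", text)  # remove comments
            lines = [l for l in text.splitlines() if l.strip()]
            yield path, "\n".join(lines)
\end{verbatim}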
\noindent\textbf{Models.}
To study the different design choices, we train four variations of {\sc DISCO}\xspace. (i) \textbf{MLM+CLR$^{\pm}$+NT-MLM} is trained with all three tasks, using hard negative samples. (ii) \textbf{MLM+CLR$^{\pm}$}: the input of this model only considers the source code sequence and ignores the AST-type sequence; it helps us understand the impact of NT-MLM.
(iii) \textbf{MLM+CLR$^+$}: this variant evaluates the effectiveness of hard negative code samples, by contrasting its performance with MLM+CLR$^{\pm}$.
(iv) \textbf{MLM}: the baseline trained with only the MLM objective.
We provide detailed model configuration in Appendix~\ref{appendix: model_config} to ensure the reproducibility.
\noindent\textbf{Baselines.} We consider two types of baselines: encoder-only pre-trained Transformers and existing deep-learning tools designed for code clone and vulnerability detection. We do not consider encoder-decoder pre-trained Transformers as baselines, since such generative models require much more pre-training data and many more training steps to converge, making a comparison based on our data size unfair. For example, PLBART uses 576GB of source code for pre-training, while we use less than 1GB. As future work, we plan to pre-train the model on much larger datasets.
\subsection{Vulnerability Detection (VD)}
\label{subsec: vul_detect}
VD is the task of identifying security bugs: given a source code function, the model performs binary classification, predicting 0 (benign) or 1 (vulnerable).
\noindent\textbf{Dataset and Metrics.} We consider two datasets for the VD task: REVEAL~\citep{chakraborty2021reveal} and CodeXGLUE~\citep{msr2021codexglue, Zhou2019DevignEV}. In real-world scenarios, vulnerable programs are rare compared to normal ones, and \citet{chakraborty2021reveal} have shown that such an imbalanced ratio makes it challenging for deep-learning models to pinpoint the bugs. To imitate this scenario, they collect the REVEAL dataset from Chromium (the open-source project behind Chrome) and the Linux Debian kernel, keeping the ratio of vulnerable to benign programs at roughly 1:10. Following \citet{chakraborty2021reveal}, we use precision, recall, and F1 as the metrics.
CodeXGLUE provides another dataset of security vulnerabilities. It is less realistic than REVEAL, since it is a balanced dataset, but it has been frequently used by existing Transformer-based models for evaluating VD. To compare with these baselines, we use the CodeXGLUE train/valid/test splits for training and testing. We use accuracy as the metric, following the design of the benchmark.
\noindent\textbf{REVEAL.} Table~\ref{tab: reveal_result} shows the results. We compare with four deep-learning-based VD
tools. VulDeePecker~\citep{li2018vuldeepecker} and SySeVR~\citep{li2018sysevr} apply program slices and sequence-based RNN/CNN models to learn vulnerable patterns. Devign~\citep{Zhou2019DevignEV} uses graph-based neural networks (GNN) to learn the data dependencies of programs.
REVEAL~\citep{chakraborty2021reveal} applies GNN + SMOTE~\citep{chawla2002smote} + triplet loss during training to handle the imbalanced distribution.
We also consider pre-trained RoBERTa, CodeBERT and GraphCodeBERT, and a 12-Layer Transformer model trained from scratch.
\input{acl2022/tables/reveal_result}
\input{acl2022/tables/devign_result}
In our case, the best {\sc DISCO}\xspace variation with contrastive learning and NT-MLM objective
outperforms all the baselines, including the graph-based approaches and models pre-trained with larger datasets. This empirically shows that {\sc DISCO}\xspace can efficiently learn code semantics and data dependencies from a limited amount of data, helping it identify vulnerable patterns. We notice that hard negative samples (\hbox{\emph{i.e.,}}\xspace buggy code contrasts) help {\sc DISCO}\xspace improve its performance. The reason is that REVEAL contains thousands of (buggy version, fixed version) pairs for the same function. The two functions in such a pair differ by only one or a few tokens. Such real-world challenges align well with our automatically generated buggy code, and pre-training with these examples teaches the model to better distinguish buggy code from benign code. We provide an example in Appendix Figure~\ref{fig:reveal_example} to illustrate this.
\noindent\textbf{CodeXGLUE.} We consider four pre-trained models: RoBERTa, CodeBERT, GraphCodeBERT and C-BERT. The first three are pre-trained on much larger datasets than ours. However, even when trained with a small dataset, three variations of {\sc DISCO}\xspace outperform the baselines.
Unlike REVEAL, CodeXGLUE does not contain such challenging pairs of buggy and patched versions of a function; thus the hard negative contrast in {\sc DISCO}\xspace does not help the model as much.
\input{acl2022/tables/code_clone_results}
\subsection{Clone Detection}
\label{subsec:clone_detect}
Clone detection aims to identify programs with similar functionality.
It can also help detect security vulnerabilities---given a known vulnerability, we can scan the code base with a clone detector and check for similar code snippets.
\noindent\textbf{Dataset and Metrics.} We consider POJ-104~\citep{mou2016convolutional} and BigCloneBench~\citep{svajlenko2014towards} as the evaluation datasets. We again strictly follow the CodeXGLUE train/dev/test splits for experiments. Following CodeXGLUE's design, we use MAP@R as the metric for POJ-104 and precision/recall/F1 as the metric for BigCloneBench.
\noindent\textbf{POJ-104.} We consider three pre-trained models, one graph-based model~\citep{ye2020msiim} and one 12-layer Transformer model trained from scratch as baselines. Table~\ref{tab:clone_result} shows that, with hard negative contrast and NT-MLM, {\sc DISCO}\xspace outperforms all baselines, including CodeBERT, which is pre-trained on a much larger dataset.
This highlights the significance of learning code contrasts together with syntactic information to better capture functional similarities. Interestingly, we notice that DISCO-MLM performs the best among all variations. This indicates that our current positive heuristics might not align with all the clone patterns in this benchmark. As future work, we will propose more code transformation rules to imitate more real-world clone patterns.
\noindent\textbf{BigCloneBench.} Our best model achieves slightly better precision than the baselines, indicating that our designs with contrastive learning and structure information can compensate for the smaller pre-training dataset. However, our recall is slightly worse than GraphCodeBERT's, since it is pre-trained on a large dataset with code graphs. We conclude that enlarging our Java pre-training dataset is necessary for code clone detection, and we regard this as future work.
\subsection{Medium Pre-trained Model}
\label{subsec:medium_model}
As shown in Section~\ref{sec5:expr}, {\sc DISCO}\xspace trained on a small dataset achieves comparable or even better performance than models pre-trained on large datasets for vulnerability and clone detection (we call this version {\sc DISCO}$_{small}$\xspace). We further explore the benefits of pre-training with more data. We pre-train a {\sc Medium} model, {\sc DISCO}$_{medium}$\xspace, on an extended dataset drawn from the top-10,000 C-language Github repositories (13G). Note that our medium dataset is still smaller than the large datasets of the baseline models (13G vs.~20G). We evaluate {\sc DISCO}$_{medium}$\xspace on the C-language tasks. The results are shown in Table~\ref{tab: medium_model}. Increasing the pre-training data improves performance on the downstream tasks, outperforming the best baselines' results.
\input{acl2022/tables/medium_model}
\section{Analysis}
\label{sec6:analysis}
\begin{wrapfigure}{r}{0.4\linewidth}
\begin{center}
\includegraphics[width=0.4\columnwidth]{acl2022/figures/ppl.png}
\end{center}
\caption{\small The evaluation perplexity of last five epochs for different {\sc DISCO}\xspace variations.}
\label{fig:ppl}
\vspace{-3mm}
\end{wrapfigure}
\textbf{Impacts of Augmented Samples and NT-MLM.} Language model perplexity is an important metric for evaluating the quality of pre-trained embeddings. To better understand how the data augmentation and the NT-MLM objective affect pre-training quality, we repeatedly evaluate the perplexity of three {\sc DISCO}\xspace variations (Section~\ref{subsec:expr_pretrain}) on held-out data during pre-training.
We plot the last five epochs in Figure~\ref{fig:ppl}. As explained in Figure~\ref{fig:model_arch}, we only apply MLM to the original sample $x$ regardless of the existence of $(x^+, x^-)$, so it is fair to compare the three models. We can see that the model with hard negative samples consistently has lower perplexity than the MLM+CLR$^+$ model, and the model with node type information has even lower perplexity than both models that only consider source code tokens. This indicates that even though the models always see the same sequences for MLM, learning the contrast of hard negative pairs together with tree-based context further helps the model understand the sequence.
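Here, perplexity is assumed to be computed in the standard way, as the exponential of the average masked-token cross-entropy on the held-out data; a minimal sketch under that assumption:
\begin{verbatim}
import math

def mlm_perplexity(total_nll, num_masked_tokens):
    # perplexity = exp(mean negative log-likelihood over masked tokens)
    return math.exp(total_nll / num_masked_tokens)
\end{verbatim}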
\section{Conclusion}
\label{sec:conclusion}
In this work, we present {\sc DISCO}\xspace, a self-supervised contrastive learning framework that learns both general representations of source code and characteristics specific to vulnerability and code clone detection. Our evaluation reveals that {\sc DISCO}\xspace, pre-trained with a smaller dataset, can still outperform large pre-trained models, demonstrating the effectiveness of our design.
\section*{Acknowledgements}
We would appreciate the insightful feedback and comments from the anonymous reviewers.
This work was partially done when Yangruibo Ding was an intern at IBM Research. This work is also supported in part by NSF grants CCF-2107405, CCF-1845893, IIS-2040961, and IBM.
\section*{Ethical Considerations}
The main goal of {\sc DISCO}\xspace is to generate functionality-aware code embeddings, producing similar representations for code clones and differentiating security bugs from benign programs.
Our data is collected either from open-source projects, respecting the restrictions of the corresponding licenses, or from publicly available benchmarks. Throughout the paper, we make sure to accurately summarize the paper's main claims. We also discuss {\sc DISCO}\xspace's limitations and potential future work for clone detection in Section~\ref{subsec:clone_detect}. We report our model configurations and experiment details in Appendix~\ref{appendix: model_config}.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 8,652 |
\section{Introduction}
Let $B_n$ be the Artin braid group, with generators $\sigma_1,\dots,\sigma_{n-1}$ and relations $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ for $1\le i\le n-2$, and $\sigma_{j}\sigma_i=\sigma_{i}\sigma_j$ for $|j-i|\ge 2$.
It is well known from work of Dyer and Grossman \cite{DG81} that the automorphism group of the braid group may be realized as $\operatorname{Aut}(B_n)\cong\bar{B}_n \rtimes \ensuremath{\mathbb Z}_2$, where $\bar{B}_n$ denotes $B_n$ modulo its center and $\ensuremath{\mathbb Z}_2$ acts by taking generators to their inverses.
In this paper, we find an explicit presentation for the automorphism group of the Artin pure braid group, the kernel $P_n=\ker(B_n \to \Sigma_n)$ of the natural map from the braid group to the symmetric group. The pure braid group has generators
\begin{equation} \label{eq:Pn gens}
A_{i,j}=\sigma_{j-1}^{}\cdots\sigma_{i+1}^{}\sigma_i^{2}\sigma_{i+1}^{-1}\cdots\sigma_{j-1}^{-1}
=\sigma_{i}^{-1}\cdots\sigma_{j-2}^{-1}\sigma_{j-1}^{2}\sigma_{j-2}^{}\cdots\sigma_{i}^{},
\end{equation}
and relations
\begin{equation} \label{eq:purebraidrels}
A_{r,s}^{-1}A_{i,j}^{}A_{r,s}^{}=
\begin{cases}
A_{i,j}^{}&\text{if $i<r<s<j$,}\\
A_{i,j}^{}&\text{if $r<s<i<j$,}\\
A_{r,j}^{}A_{i,j}^{}A_{r,j}^{-1}&\text{if $r<s=i<j$,}\\
A_{r,j}^{}A_{s,j}^{}A_{i,j}^{}A_{s,j}^{-1}A_{r,j}^{-1}&\text{if $r=i<s<j$,}\\
[A_{r,j}^{},A_{s,j}^{}]A_{i,j}^{}[A_{r,j}^{},A_{s,j}^{}]^{-1}&\text{if $r<i<s<j$,}
\end{cases}
\end{equation}
where $[u,v]=uvu^{-1}v^{-1}$ is the commutator. Birman \cite{Bir75} is a general reference.
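For example, when $n=3$ the presentation has generators $A_{1,2}$, $A_{1,3}$, $A_{2,3}$, and the relations \eqref{eq:purebraidrels} reduce to
\[
A_{1,2}^{-1}A_{2,3}^{}A_{1,2}^{}=A_{1,3}^{}A_{2,3}^{}A_{1,3}^{-1}
\quad\text{and}\quad
A_{1,2}^{-1}A_{1,3}^{}A_{1,2}^{}=A_{1,3}^{}A_{2,3}^{}A_{1,3}^{}A_{2,3}^{-1}A_{1,3}^{-1}.
\]
These relations assert that conjugation by $A_{1,2}$ agrees with conjugation by $(A_{1,3}A_{2,3})^{-1}$ on the subgroup generated by $A_{1,3}$ and $A_{2,3}$, so $A_{1,2}A_{1,3}A_{2,3}$ is central and $P_3\cong \ensuremath{\mathbb Z} \times F_2$, where $F_2$ is the free group generated by $A_{1,3}$ and $A_{2,3}$. This is a special case of the direct product decompositions recalled in the next section.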
Let
\[
F(\ensuremath{\mathbb C},n)=\{(x_1,\dots,x_n) \in \ensuremath{\mathbb C}^n \mid x_i \neq x_j\ \text{if}\ i \neq j\}
\]
be the configuration space of $n$ distinct ordered points in $\ensuremath{\mathbb C}$. The symmetric group $\Sigma_n$ acts freely on $F(\ensuremath{\mathbb C},n)$ by permuting coordinates. Let $C(\ensuremath{\mathbb C},n)=F(\ensuremath{\mathbb C},n)/\Sigma_n$ denote the orbit space, the configuration space of $n$ distinct unordered points in $\ensuremath{\mathbb C}$. It is well known that $P_n=\pi_1(F(\ensuremath{\mathbb C},n))$, $B_n=\pi_1(C(\ensuremath{\mathbb C},n))$, and that these spaces are Eilenberg-Mac\,Lane spaces for these braid groups.
For $i\neq j$, let $H_{i,j}=\ker(x_i-x_j)$, and let $\ensuremath{\mathcal A}_n=\{H_{i,j}\mid 1\le i<j\le n\}$ denote the braid arrangement in $\ensuremath{\mathbb C}^n$, consisting of the reflecting hyperplanes of the symmetric group $\Sigma_n$.
The configuration space $F(\ensuremath{\mathbb C},n)=M(\ensuremath{\mathcal A}_n)=\ensuremath{\mathbb C}^n \smallsetminus \bigcup_{1\le i<j \le n}H_{i,j}$ may be realized as the complement of the braid arrangement $\ensuremath{\mathcal A}_n$.
The other pure braid groups we consider may be viewed as arising from an analogous construction.
Let $r$ be a natural number greater than or equal to $2$.
The complex hyperplane arrangement $\ensuremath{\mathcal A}_{r,n}$ in $\ensuremath{\mathbb C}^n$ defined by the polynomial
\begin{equation} \label{eq:monopoly}
Q_{r,n}=Q(\ensuremath{\mathcal A}_{r,n})=x_1\cdots x_n \prod_{1\le i<j \le n} (x_i^r - x_j^r)
\end{equation}
is known as a full monomial arrangement. The complement $M(\ensuremath{\mathcal A}_{r,n}) = \ensuremath{\mathbb C}^n \setminus Q_{r,n}^{-1}(0)$ may be realized as the orbit configuration space\[
F_{\Gamma}(\ensuremath{\mathbb C}^*,n) = \{(x_1,\dots,x_n) \in (\ensuremath{\mathbb C}^*)^{n} \mid \Gamma\cdot x_i \cap \Gamma\cdot x_j
=\emptyset\ \text{if}\ i\neq j\}
\]
of ordered $n$-tuples of points in $\ensuremath{\mathbb C}^*$ which lie in distinct orbits of the free action of $\Gamma=\ensuremath{\mathbb Z}/r\ensuremath{\mathbb Z}$ on $\ensuremath{\mathbb C}^*$ by multiplication by the primitive $r$-th root of unity $\exp(2\pi\sqrt{-1}/r)$.
Let $B(r,n)$ denote the group with generators
$\rho_0,\rho_1,\dots,\rho_{n-1}$ and relations
\begin{equation} \label{eq:MonomialBraidRels}
(\rho_0\rho_1)^2=(\rho_1\rho_0)^2\!,\
\rho_i\rho_{i+1}\rho_i=\rho_{i+1}\rho_i\rho_{i+1}\, (1\le
i<n),\ \rho_i\rho_j=\rho_j\rho_i\ (|j-i|>1).
\end{equation}
This is the (full) monomial braid
group, the fundamental group of the orbit space
$M(\ensuremath{\mathcal A}_{r,n})/W$, where $W=G(r,n)$ is the full monomial group,
cf.~\cite{BMR}. Note that $B(r,n)$ is independent of $r$. This group
admits a natural surjection to $G(r,n)$, which may be presented with
generators $\rho_0,\rho_1,\dots,\rho_{n-1}$ and relations
\eqref{eq:MonomialBraidRels} together with
$\rho_0^r=\rho_1^2=\dots =\rho_{n-1}^2=1$.
Note that the hyperplanes of
$\ensuremath{\mathcal A}_{r,n}$ are the reflecting hyperplanes of the
group $G(r,n)$,
and that this group is isomorphic to the wreath product of the symmetric group $\Sigma_n$ and the cyclic group $\ensuremath{\mathbb Z}/r\ensuremath{\mathbb Z}$.
The fundamental group of the complement $M(\ensuremath{\mathcal A}_{r,n})$ of the full monomial arrangement is the kernel $P({r,n})=\pi_1(M(\ensuremath{\mathcal A}_{r,n}))$ of the aforementioned surjection
$B(r,n) \twoheadrightarrow G(r,n)$, which we refer to as the pure monomial braid group. Furthermore, $M(\ensuremath{\mathcal A}_{r,n})$ is an Eilenberg-Mac\,Lane space for the pure monomial braid group.
A presentation for the group $P({r,n})$ is found in \cite[Thm.~2.2.4]{Coh01} (in slightly different notation).\footnote{\,There is a typographical error in the second family of relations recorded in \cite[(2.9)]{Coh01}. These relations should read $[A_{j,k}^{(p)},\,A_{j,l}^{[p]}A_{i,l}^{(q)}(A_{j,l}^{[p]})^{-1}]$.}
For $1\le i \le n$, let
$X_i=\rho_{i-1}\cdots\rho_2\rho_1\rho_0\rho_1\rho_2 \cdots\rho_{i-1}$,
and define
\begin{equation} \label{eq:PureMonomialGens}
\begin{split}
C_j^{}&=\rho_{j-1}^{}\cdots\rho_2^{}\rho_1^{}\rho_0^r
\rho_1^{-1}\rho_2^{-1}\cdots\rho_{j-1}^{-1}
\ (1\le j\le n),\\
A_{i,j}^{(q)}&=X_i^{q-r} \cdot\rho_{j-1}^{}\cdots\rho_{i+1}^{}
\rho_i^2
\rho_{i+1}^{-1}\cdots\rho_{j-1}^{-1}\cdot X_i^{r-q}
\ (1\le i<j\le n,\ 1\le q\le r).
\end{split}
\end{equation}
These elements generate the pure monomial braid group.
Setting $r=1$ in \eqref{eq:monopoly} yields a polynomial $Q_{1,n}$ which defines an arrangement $\ensuremath{\mathcal A}_{1,n}$ whose complement has the homotopy type of the complement of the braid arrangement $\ensuremath{\mathcal A}_{n+1}$ in $\ensuremath{\mathbb C}^{n+1}$, $M(\ensuremath{\mathcal A}_{1,n}) \simeq M(\ensuremath{\mathcal A}_{n+1})$. For $r \ge 2$, the mapping $M(\ensuremath{\mathcal A}_{r,n}) \to M(\ensuremath{\mathcal A}_{1,n})$ defined by $(x_1,\dots,x_n) \mapsto (x_1^r,\dots,x_n^r)$ is a finite covering (equivalent to the pullback along the inclusion $M(\ensuremath{\mathcal A}_{1,n}) \hookrightarrow (\ensuremath{\mathbb C}^*)^n$ of the covering $(\ensuremath{\mathbb C}^*)^n \to (\ensuremath{\mathbb C}^*)^n$ defined by the same formula). Thus, $P(r,n)=\pi_1(M(\ensuremath{\mathcal A}_{r,n}))$ is a finite index subgroup of $P_{n+1}=\pi_1(M(\ensuremath{\mathcal A}_{1,n}))$.
In this paper,
building on work of Bell and Margalit \cite{BM07} and Charney and Crisp \cite{CC05},
we find finite presentations of the automorphism groups of the pure braid groups $P_n$ and $P(r,n)$. These automorphism groups, $\operatorname{Aut}(P_n)$ in particular, are used in \cite{CFR11} to study the residual freeness of these pure braid groups.
The structure of the automorphism groups of the full braid groups $B_n$ and $B(r,n)$ is known,
see \cite{DG81} and \cite{CC05}. The monomial braid group $B(r,n)=B(2,n)$ may be realized as the Artin group of type B, and the automorphism group $\operatorname{Aut}(B(2,n))$ was determined in \cite{CC05} from this perspective.
\section{Preliminaries}
In this section, we gather a number of facts regarding split extensions, (pure) braid groups, and mapping class groups which will be of use in analyzing the automorphism groups of the pure braid groups $P_n$ and $P(r,n)$.
Let $K$ be a group with trivial center, $Z(K)=1$, and let $A$ be an abelian group. As noted by Leininger and Margalit \cite{LM06}, a split central extension
\[
1\to A \to G \leftrightarrows K \to 1
\]
induces a split extension
\begin{equation} \label{eq:splitaut}
1\to \operatorname{tv}(G) \to \operatorname{Aut}(G) \leftrightarrows \operatorname{Aut}(K) \to 1,
\end{equation}
where $\operatorname{tv}(G) < \operatorname{Aut}(G)$ is the subgroup consisting of all automorphisms of $G$ which become trivial upon passing to the quotient $K$. If, moreover, $G=A \times K$ is a direct product, an explicit splitting in \eqref{eq:splitaut} is given by sending $\alpha \in \operatorname{Aut}(K)$ to $\tilde\alpha \in \operatorname{Aut}(G)$, where $\left.\tilde\alpha\right|_A=\operatorname{id}_A$ and $\left.\tilde\alpha\right|_K=\alpha$. We occasionally abuse notation and write simply $\alpha$ in place of $\tilde\alpha$ in this situation.
For a group $G$ with infinite cyclic center $Z=\langle z \rangle$, a transvection is an endomorphism of $G$ of the form $x \mapsto x z^{t(x)}$, where $t\colon G \to \ensuremath{\mathbb Z}$ is a homomorphism, see Charney and Crisp \cite{CC05}. Such a map is an automorphism if and only if its restriction to $Z$ is surjective, which is the case if and only if $z \mapsto z$ or $z \mapsto z^{-1}$, that is, $t(z)=0$ or $t(z)=-2$. For the groups we are interested in, the extension $1\to Z(G) \to G \to G/Z(G) \to 1$ is split (in fact $G \cong Z(G) \times G/Z(G)$), and $Z(G)$ is infinite cyclic. In this instance, the subgroup $\operatorname{tv}(G) < \operatorname{Aut}(G)$ consists of all transvection automorphisms of $G$, so we refer to $\operatorname{tv}(G)$ as the transvection subgroup of $\operatorname{Aut}(G)$.
As alluded to in the previous paragraph, the pure braid groups $P_n$ and $P(r,n)$ admit direct product decompositions
\begin{equation} \label{eq:direct}
P_n \cong Z(P_n) \times P_n/Z(P_n)\quad \text{and} \quad P(r,n) \cong Z(P(r,n)) \times P(r,n)/Z(P(r,n)),
\end{equation}
and the center of each of these groups is infinite cyclic. The above direct product decompositions (the first of which, for $P_n$, is well known) may be obtained using results from the theory of hyperplane arrangements. See Orlik and Terao \cite{OT} as a general reference. First, if $\ensuremath{\mathcal A}$ is a central arrangement in $\ensuremath{\mathbb C}^n$ (the hyperplanes of which all contain the origin), the restriction of the Hopf bundle $\ensuremath{\mathbb C}^n\smallsetminus\{0\} \to \ensuremath{\mathbb{CP}}^{n-1}$ to the complement $M=M(\ensuremath{\mathcal A})$ yields a homeomorphism $M \cong \ensuremath{\mathbb C}^* \times \bar{M}$, where $\bar{M}$ is the complement of the projectivization of $\ensuremath{\mathcal A}$ in $ \ensuremath{\mathbb{CP}}^{n-1}$. Thus, $\pi_1(M) \cong \ensuremath{\mathbb Z} \times \pi_1(\bar{M})$. Second, the braid arrangement $\ensuremath{\mathcal A}_n$ and the full monomial arrangement $\ensuremath{\mathcal A}_{r,n}$ are
fiber-type (or supersolvable) arrangements. As such, the fundamental groups of the complements decompose as iterated semidirect products of free groups,
\begin{equation*} \label{eq:iterated}
P_n=\pi_1(M(\ensuremath{\mathcal A}_n))= \rtimes_{k=1}^{n-1} F_k
\ \text{and}\
P(r,n)=\pi_1(M(\ensuremath{\mathcal A}_{r,n}))=\rtimes_{k=1}^{n-1} F_{r(k-1)+1},
\end{equation*}
where $F_k$ is the free group of rank $k$. The direct product decompositions \eqref{eq:direct} follow easily from these two facts. Note also that these considerations imply that the groups $\bar{P}_n=P_n/Z(P_n)$ and $\bar{P}(r,n)=P(r,n)/Z(P(r,n))$ are centerless.
Explicit generators for the centers of the braid groups $B_n$, $P_n$, $B(r,n)$, and $P(r,n)$ are known. Regarding the Artin braid groups, it is a classical result of Chow (see \cite[Cor.~1.8.4]{Bir75}) that $Z(B_n)=Z(P_n)=\ensuremath{\mathbb Z}$, generated by
\[
Z_n = (\sigma_1\sigma_2\cdots\sigma_{n-1})^n=(A_{1,2})(A_{1,3}A_{2,3})\cdots (A_{1,n}\cdots A_{n-1,n}).
\]
The centers of the monomial braid groups $B(r,n)$ and $P(r,n)$ were determined by Brou\'e, Malle, and Rouquier \cite[Prop.~3.10]{BMR}. In terms of the generators $\rho_0,\rho_1,\dots,\rho_{n-1}$ of $B(r,n)$, these centers are given by
\[
Z(B(r,n)) = \langle(\rho_0\rho_1\cdots \rho_{n-1})^n\rangle \ \text{and}\
Z(P(r,n)) = \langle(\rho_0\rho_1\cdots \rho_{n-1})^{nr}\rangle.
\]
Write $\zeta_n=(\rho_0\rho_1\cdots \rho_{n-1})^n$ so that $Z(B(r,n)) = \langle\zeta_n\rangle$ and
$Z(P(r,n)) = \langle\zeta_n^r\rangle$.
Since $B(r,n)=B(2,n)$ is the type B Artin group, the fact that $Z(B(r,n))=\langle \zeta_n\rangle$ follows from work of Deligne \cite{Del}, see also Brieskorn and Saito \cite{BS72}.
We express $\zeta_n^r$ in terms of the generators of the pure monomial braid group $P(r,n)$ recorded in \eqref{eq:PureMonomialGens}. For $1\le i < j \le n$, let
\begin{align} \label{eq:particularmonobraids}
A_{i,j}^{[q]}&=A_{i,j}^{(q)}A_{i,j}^{(q+1)}\cdots A_{i,j}^{(r-1)} \quad \text{(for $q<r$),}\notag\\
V_{i,j}^{(q)}&=A_{i,j}^{(q)}A_{i+1,j}^{(q)}\cdots A_{j-1,j}^{(q)}\quad \text{(for $q\le r$),}\\
D_k&=A_{k-1,k}^{[1]}A_{k-2,k}^{[1]}\cdots A_{1,k}^{[1]} C_k V_{1,k}^{(r)}\quad \text{(for $k\le n$)}.\notag
\end{align}
\begin{lemma} \label{lem:monocenter}
The center of the pure monomial braid group $P(r,n)$ is generated~by
\[
\zeta_n^r = D_1 D_2 \cdots D_n.
\]
\end{lemma}
\begin{proof}
Recall the braids $X_i=\rho_{i-1}\cdots\rho_2\rho_1\rho_0\rho_1\rho_2 \cdots\rho_{i-1}$ in $B(r,n)$. An inductive argument using the monomial braid relations \eqref{eq:MonomialBraidRels} reveals that
\begin{equation*} \label{ZBrnX}
\zeta_n = X_1 X_2 \cdots X_n = \zeta_{n-1} \cdot X_n.
\end{equation*}
The relations \eqref{eq:MonomialBraidRels} may also be used to check that $X_iX_j=X_jX_i$ for each $i$ and $j$.
Thus, $Z(P(r,n))$ is generated by $\zeta_n^r = (X_1 X_2 \cdots X_n)^r=X_1^r X_2^r \cdots X_n^r=\zeta_{n-1}^r \cdot X_n^r$.
We may inductively assume that $\zeta_{n-1}^r=D_1 D_2 \cdots D_{n-1}$, so it suffices to show that $D_n=X_n^r$.
Use \eqref{eq:PureMonomialGens} and \eqref{eq:particularmonobraids} to check that $C_n V_{1,n}^{(r)}=\rho_{n-1} \cdots \rho_1 \rho_0^r \rho_1 \cdots \rho_{n-1}$ and also that
$A_{i,n}^{[1]} = X_i^{1-r}(A_{i,n}^{(r)} X_i)^{r-1}$. Then, a calculation reveals that
\[
D_n=(X_1 \cdots X_{n-1})^{1-r} Y_{n-1}^{r-1} \rho_{n-1} \cdots Y_1^{r-1} \rho_1 \rho_0^r \rho_1 \cdots \rho_{n-1},
\]
where $Y_i = \rho_i^2 X_i$. Since $Y_i \rho_i=\rho_iX_{i+1}$ and $X_j \rho_i=\rho_i X_j$ for $i<j$, we have
\[
\begin{aligned}
D_n&=(X_1 \cdots X_{n-1})^{1-r} \rho_{n-1} \cdots \rho_1 X_n^{r-1} \cdots X_2^{r-1} \rho_0^r \rho_1 \cdots \rho_{n-1}\\
\ \qquad\qquad&=\zeta_{n-1}^{1-r} \rho_{n-1} \cdots \rho_1 \zeta_n^{r-1} \rho_0\rho_1\cdots \rho_{n-1}=\zeta_{n-1}^{1-r} \zeta_n^{r-1} X_n = X_n^r. \qquad\qquad\ \qedhere
\end{aligned}
\]
\end{proof}
Recall that $\bar{P}_n=P_n/Z(P_n)$ and $\bar{P}(r,n)=P(r,n)/Z(P(r,n))$. These groups may be realized as finite index subgroups of the (extended) mapping class group of the punctured sphere. Let $\ensuremath{\mathbb S}_m$ denote the sphere $S^2$ with $m$ punctures, and let $\operatorname{Mod}(\ensuremath{\mathbb S}_{m})$ be the extended mapping class group of $\ensuremath{\mathbb S}_{m}$, the group of isotopy classes of all self-diffeomorphisms of $\ensuremath{\mathbb S}_{m}$.
The mapping class group $M(0,m)$ of isotopy classes of orientation-preserving self-diffeomorphisms of $\ensuremath{\mathbb S}_{m}$ is an index two subgroup of $\operatorname{Mod}(\ensuremath{\mathbb S}_{m})$. For $m\ge 2$, the group $M(0,m)$ admits a presentation with generators $\omega_1,\dots,\omega_{m-1}$ and relations
\begin{equation}
\begin{array}{ll} \label{eq:mcgrels}
\omega_i\omega_j=\omega_j\omega_i\ \text{for}\ |i-j|\ge 2,
&\omega_i \omega_{i+1} \omega_i= \omega_{i+1} \omega_i \omega_{i+1}, \\
\omega_1\cdots \omega_{m-2}\omega_{m-1}^2\omega_{m-2}\cdots \omega_1=1,
&(\omega_1 \omega_2 \cdots \omega_{m-1})^{m}=1,
\end{array}
\end{equation}
see \cite[Thm.~4.5]{Bir75}.
The extended mapping class group $\operatorname{Mod}(\ensuremath{\mathbb S}_{m})$ then admits a presentation with the above generators and relations, along with the additional generator $\epsilon$ and relations $(\epsilon \omega_i)^2=1$ and $\epsilon^2=1$.
If $G$ is a subgroup of a group $\Gamma$, recall that the normalizer of $G$ in $\Gamma$ is
$N_\Gamma(G) =\{\gamma \in \Gamma \mid \gamma^{-1} G \gamma = G\}$,
the largest subgroup of $\Gamma$ having $G$ as a normal subgroup. Building on work of Korkmaz \cite{Kor99} and Ivanov \cite{Iv03}, Charney and Crisp \cite[Cor.~4 (ii)]{CC05} establish the following.
\begin{proposition} \label{prop:CC}
If $m \ge 5$ and $G$ is a finite index subgroup of $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_m)$, then $\operatorname{Aut}(G) \cong N_\Gamma(G)$.
\end{proposition}
Throughout the paper, $\operatorname{Aut}(G)$ denotes the group of right automorphisms of $G$, with multiplication $\alpha\cdot \beta=\beta\circ\alpha$.
\section{Automorphisms of the Artin pure braid group}
The map $B_n \to M(0,n+1) \to \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ given by $\sigma_i \mapsto \omega_i$, $1\le i \le n-1$, realizes $\bar{B}_n=B_n/Z$ as a finite index subgroup of the extended mapping class group $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$, where $Z=Z(B_n)=Z(P_n)$, see, for instance, \cite{CC05}. This comes from realizing $B_n$ as the orientation-preserving mapping class group of ${\mathbb D}_n$, the $n$-punctured disk, relative to the boundary, and including ${\mathbb D}_n$ in $\ensuremath{\mathbb S}_{n+1}$. In this way, $\bar{P}_n=P_n/Z$ is realized as $\mathrm{P}\!\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$, the subgroup of orientation-preserving mapping classes which fix every puncture.
The subgroup $\bar{P}_n$ is normal in $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$. Thus, $\operatorname{Aut}(\bar{P}_n) \cong \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ for $n\ge 4$, see Proposition \ref{prop:CC}. This fact was originally established by Korkmaz \cite{Kor99}, and extended by Bell and Margalit \cite{BM07}. Since $P_n \cong Z \times \bar{P}_n$, the split extension \eqref{eq:splitaut} yields a semidirect product decomposition
$\operatorname{Aut}(P_n) \cong \operatorname{tv}(P_n) \rtimes \operatorname{Aut}(\bar{P}_n)$. This is an ingredient in the identification, for $n\ge 4$, of the automorphism group of the pure braid group as
\begin{equation} \label{eq:semi}
\operatorname{Aut}(P_n) \cong (\ensuremath{\mathbb Z}^N \rtimes \ensuremath{\mathbb Z}_2) \rtimes \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})
\end{equation}
made by Bell and Margalit \cite[Thm.~8]{BM07}.
Here, $\operatorname{tv}(P_n)\cong \ensuremath{\mathbb Z}^N \rtimes \ensuremath{\mathbb Z}_2$, where $N=\binom{n}{2}-1$.
Recall that the center $Z=Z(P_n)$ of the pure braid group is infinite cyclic, generated by
$Z_n=A_{1,2}A_{1,3}A_{2,3}\cdots A_{1,n}\cdots A_{n-1,n}$.
The transvection subgroup $\operatorname{tv}(P_n)$ of $\operatorname{Aut}(P_n)$ consists of automorphisms of the form $A_{i,j} \mapsto A_{i,j} Z_n^{t_{i,j}}$, where $t_{i,j} \in \ensuremath{\mathbb Z}$ and $\sum t_{i,j}$ is either equal to $0$ or $-2$. In the former case, $Z_n \mapsto Z_n$, while $Z_n \mapsto Z_n^{-1}$ in the latter. This yields a surjection $\operatorname{tv}(P_n) \to \ensuremath{\mathbb Z}_2$, with kernel consisting of transvections for which $\sum t_{i,j}=0$. Since $P_n$ has $\binom{n}{2}=N+1$ generators, this kernel is free abelian of rank $N$. The choice $t_{1,2}=-2$ and all other $t_{i,j}=0$ gives a splitting $\ensuremath{\mathbb Z}_2 \to \operatorname{tv}(P_n)$. Thus,
$\operatorname{tv}(P_n) \cong \ensuremath{\mathbb Z}^N \rtimes \ensuremath{\mathbb Z}_2$. This group is generated by transvections $\psi,\phi_{i,j}\colon P_n \to P_n$, $1\le i<j\le n$, $\{i,j\}\neq \{1,2\}$, where
\begin{equation} \label{eq:transvections}
\psi\colon A_{p,q} \mapsto \begin{cases} A_{1,2}Z_n^{-2}&\text{$p=1$, $q=2$,}\\
A_{p,q}&\text{otherwise,} \end{cases} \
\phi_{i,j}\colon A_{p,q} \mapsto \begin{cases} A_{1,2}Z_n^{}&\text{$p=1$, $q=2$,}\\
A_{i,j}Z_n^{-1}&\text{$p=i$, $q=j$,}\\ A_{p,q}&\text{otherwise.} \end{cases}
\end{equation}
It is readily checked that $\psi^2=1$ and that $\psi \phi_{i,j}\psi = \phi_{i,j}^{-1}$. Observe that nontrivial elements of $\operatorname{tv}(P_n)$ are outer automorphisms.
The mapping class group $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ acts on $\bar{P}_n=P_n/\langle Z_n\rangle \cong \mathrm{P}\!\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ by conjugation.
We exhibit automorphisms of $P_n$ which fix the generator $Z_n$ of the center and induce the corresponding automorphisms of $\bar{P}_n$ upon passing to the quotient.
For group elements $x$ and $y$, write $y^x=x^{-1}yx$.
Define elements $\omega_k$, $1\le k \le n$, and $\epsilon$ of $\operatorname{Aut}(P_n)$ as follows:
\begin{equation} \label{eq:mcgautos}
\begin{aligned}
\omega_k\colon A_{i,j}&\mapsto \begin{cases}
A_{i-1,j}&\text{if $k=i-1$,}\\
A_{i+1,j}^{A_{i,i+1}}&\text{if $k=i<j-1$,}\\
A_{i,j-1}&\text{if $k=j-1>i$,}\\
A_{i,j+1}^{A_{j,j+1}}&\text{if $k=j$,}\\
A_{i,j}&\text{otherwise,}
\end{cases}
\qquad \text{for $1\le k \le n-1$, $k \neq 2$,}\\
\omega_2\colon A_{i,j}&\mapsto \begin{cases}
A_{1,3}^{A_{2,3}}Z_n&\text{if $i=1$, $j=2$,}\\
A_{1,2}Z_n^{-1}&\text{if $i=1$, $j=3$,}\\
A_{3,j}^{A_{2,3}}&\text{if $i=2$, $j\ge 4$,}\\
A_{2,j}&\text{if $i=3$,}\\
A_{i,j}&\text{otherwise,}
\end{cases}\\
\omega_n\colon A_{i,j}&\mapsto \begin{cases}
A_{i,j}&\text{if $j \neq n$,}\\
(A_{1,n}A_{1,2}A_{1,3}\cdots A_{1,n-1})^{-1}Z_n&\text{if $i=1$, $j=n$,}\\
(A_{2,n}A_{1,2}A_{2,3}\cdots A_{2,n-1})^{-1}Z_n&\text{if $i=2$, $j=n$,}\\
(A_{i,n}A_{1,i}\cdots A_{i-1,i} A_{i,i+1}\cdots A_{i,n-1})^{-1}&\text{if $3\le i$, $j=n$,}
\end{cases}\\
\epsilon\colon A_{i,j}&\mapsto
\begin{cases}
A_{1,2}^{-1} Z_n^2&\text{if $i=1$, $j=2$,}\\
(A_{i+1,j} \cdots A_{j-1,j})^{-1} A_{i,j}^{-1} (A_{i+1,j} \cdots A_{j-1,j})&\text{otherwise.}
\end{cases}
\end{aligned}
\end{equation}
Check that $\omega_k(Z_n)=Z_n$ for each $k$, and $\epsilon(Z_n)=Z_n$. Also, note that, for $1\le k \le n-1$ and $k\neq 2$, the automorphism $\omega_k$ is given by the usual conjugation action of the braid $\sigma_k$ on the pure braid group, $\omega_k(A_{i,j}^{})=A_{i,j}^{\sigma_k}=\sigma_k^{-1}A_{i,j}^{}\sigma_k^{}$, see \cite{DG81}. The automorphism $\omega_2$ is the composite of the conjugation action of $\sigma_2$ and the transvection $\phi_{1,3}$, see \eqref{eq:transvections}. This accounts for the fact that $A_{1,2}=[(A_{1,3}A_{2,3})\cdots (A_{1,n}\cdots A_{n-1,n})]^{-1}$ in $\bar{P}_n$, the fact that, for instance, $A_{1,3}^{\sigma_2}=A_{1,2}$ in $P_n$, and ensures that $\omega_2(Z_n)=Z_n$.
Similar considerations explain the occurrence of $Z_n$ in the formulas for the automorphisms $\omega_n$ and $\epsilon$ above. The former automorphism of $P_n$ lifts the automorphism of $\bar{P}_n$ given by conjugation by $\omega_n \in \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$. This conjugation action can be determined using the mapping class group relations \eqref{eq:mcgrels}, noting that the relations $\omega_i\omega_{i+1}\omega_i=\omega_{i+1}\omega_i\omega_{i+1}$ and
$\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1$ imply that, for instance,
\[
\begin{aligned}
\omega_n^{-1} A_{n-1,n}^{} \omega_n^{}&=
\omega_n^{-1} \omega_{n-1}^{2} \omega_n^{}=
\omega_{n-1}^{} \omega_{n}^{2} \omega_{n-1}^{-1}\\
&=\omega_{n-1}^{}\left(\omega_{n-1}^{}\cdots \omega_2^{} \omega_1^2 \omega_2^{} \cdots \omega_{n-1}^{}\right)^{-1} \omega_{n-1}^{-1}\\
&=\omega_{n-1}^{}\left(A_{1,n}^{} A_{2,n}^{} \cdots A_{n-1,n}^{}\right)^{-1} \omega_{n-1}^{-1}\\
&=(A_{n-1,n}A_{1,n-1}\cdots A_{n-2,n-1})^{-1}.
\end{aligned}
\]
Similarly, the fact that $A_{i,n}=A_{n-1,n}^{\omega_{n-2}\cdots \omega_i}$ for $i \le n-2$ may be used to calculate $\omega_n^{-1} A_{i,n}^{} \omega_n^{}$.
\begin{proposition} \label{prop:mcgrels}
The elements $\omega_1,\dots,\omega_n,\epsilon \in \operatorname{Aut}(P_n)$
satisfy the mapping class group relations \eqref{eq:mcgrels} and the relations $\epsilon^2=1$ and $(\epsilon \omega_k)^2=1$ for each $k$, $1\le k \le n$.
\end{proposition}
\begin{proof}
As noted above, for $1\le k \le n-1$ and $k\neq 2$, the automorphism $\omega_k$ is given by the conjugation action of the braid $\sigma_k$,
$\omega_k(A_{i,j}) = A_{i,j}^{\sigma_k}$. It follows that all of the (braid) relations \eqref{eq:mcgrels} that do not involve $\omega_2$ or $\omega_n$ hold.
So it remains to check that the automorphisms $\omega_k$ of $P_n$ satisfy
$\omega_i\omega_2\omega_i=\omega_2\omega_i\omega_2$ for $i=1,3$, $\omega_2\omega_i=\omega_i\omega_2$ for $i \ge 4$,
$\omega_{n-1}\omega_n\omega_{n-1}=\omega_{n}\omega_{n-1}\omega_{n}$, $\omega_i\omega_n=\omega_n\omega_i$ for $i\le n-2$,
$\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1$, and
$(\omega_1 \omega_2 \cdots \omega_n)^{n+1}=1$. We will check the last two, and leave the others as exercises for the reader.
To verify that $\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1$, first check that
\begin{equation*} \label{eq:composites}
\begin{aligned}
\omega_1\cdots \omega_{n-1}(A_{i,j})&=\begin{cases}
Z_n A_{1,n}^{A_{2,n}\cdots A_{n-1,n}}&\text{if $i=1$, $j=2$,}\\
Z_n^{-1}A_{1,2}&\text{if $i=2$, $j=3$,}\\
A_{j-1,n}^{A_{j,n}\cdots A_{n-1,n}(A_{1,j-1}\cdots A_{j-2,j-1})^{-1}}&\text{if $i=1$, $j\ge 3$,}\\
A_{i-1,j-1}&\text{otherwise,}
\end{cases}\\
\omega_n^2(A_{i,j})&=\begin{cases}
A_{i,j}&\text{if $j \le n-1$,}\\
A_{i,n}^{A_{1,i}\cdots A_{i-1,i}A_{i,i+1}\cdots A_{i,n-1}}\hskip 31pt &\text{if $j=n$,}
\end{cases}\\
\omega_{n-1}\cdots \omega_{1}(A_{i,j})&=\begin{cases}
A_{2,3} Z_n \hskip 108pt &\text{if $i=1$, $j=2$,}\\
A_{1,2} Z_n^{-1}&\text{if $i=1$, $j=n$,}\\
A_{1,i+1}&\text{if $i\ge 2$, $j=n$,}\\
A_{i+1,j+1}&\text{otherwise.}
\end{cases}
\end{aligned}
\end{equation*}
These calculations, together with the pure braid relations \eqref{eq:purebraidrels}, can be used to check that
$\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1$.
To verify that $(\omega_1 \omega_2 \cdots \omega_n)^{n+1}=1$, first note that the relations
$\omega_i\omega_{i+1}\omega_i=\omega_{i+1}\omega_i\omega_{i+1}$ for $1\le i\le n$ and $\omega_j\omega_i=\omega_i\omega_j$ for $|j-i|\ge 2$ imply that
\[
\begin{aligned}
(\omega_1 \omega_2 \cdots \omega_n)^{n+1}&=(\omega_1 \omega_2 \cdots \omega_{n-1})^{n}\cdot \omega_n\cdots \omega_2
\omega_1^2\omega_2\cdots \omega_n,\ \text{and}\\
\omega_n\cdots \omega_2\omega_1^2\omega_2\cdots \omega_n&=(\omega_n\cdots \omega_1)\cdot
\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1 (\omega_n\cdots \omega_1)^{-1}.
\end{aligned}
\]
Since $\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1$ by the previous paragraph, it suffices to check that
$(\omega_1 \omega_2 \cdots \omega_{n-1})^{n}=1$.
Write $\tau=\omega_1\cdots \omega_{n-1}$. We must show that $\tau^n=1$. The action of $\tau$ on the pure braid generators $A_{i,j}$ is given above. In particular, $\tau(A_{i,j})=A_{i-1,j-1}$ for $i\ge 2$ and $j\ge 4$. Also, note that for $j \ge 3$, the pure braid relations \eqref{eq:purebraidrels} may be used to show that
\[
\tau(A_{1,j})=A_{j-1,n}^{A_{j,n}\cdots A_{n-1,n}(A_{1,j-1}\cdots A_{j-2,j-1})^{-1}}
=A_{j-1,n}^{A_{1,n}\cdots A_{n-1,n}}.
\]
Observe that $\tau^{n-3}(A_{n-1,n})=A_{2,3}$. Consequently, $\tau^{n-2}(A_{n-1,n})=Z_n^{-1}A_{1,2}$, and $\tau^{n-1}(A_{n-1,n})=A_{1,n}^{A_{2,n}\cdots A_{n-1,n}}$.
A calculation then reveals that $\tau^n(A_{n-1,n})=A_{n-1,n}$. It follows that $\tau^{n-k}(A_{k-1,k})=A_{n-1,n}$ for $k\ge 3$, which implies that $\tau^n(A_{k-1,k})=A_{k-1,k}$ for $k\ge 3$.
If $i=j-k$ with $k\ge 2$ (so that $j\ge 3$), then $A_{i,j}=A_{j-k,j}=\tau^{n-j}(A_{n-k,n})$. If $\tau^j(A_{i,j})=A_{n-k,n}$, it follows that $\tau^n(A_{n-k,n})=A_{n-k,n}$ and then that $\tau^n(A_{i,j})=A_{i,j}$. Thus, it suffices to show that $\tau^j(A_{i,j})=A_{n-k,n}$.
If $i>1$, then $\tau^{i-1}(A_{i,j})=A_{1,j-i+1}$. So it is enough to show that $\tau^q(A_{1,q})=A_{n-q+1,n}$, where $q \ge 3$. Checking that
\[
\tau^p(A_{1,q})=A_{q-p,n-p+1}^{A_{1,n-p+1}\cdots A_{n-p,n-p+1} \cdot A_{n-p+1,n-p+2}\cdots A_{n-p+1,n}}
\]
for $1\le p \le q-1$, we have
\[
\begin{aligned}
\tau^q(A_{1,q})&=\tau(A_{1,n-q}^{A_{2,n-q}\cdots A_{n-q-1,n-q} \cdot A_{n-q,n-q+1}\cdots A_{n-q,n}})\\
&=A_{n-q-1,n}^{A_{n-q,n}\cdots A_{n-1,n} \cdot A_{n-q-1,n-q}\cdots A_{n-q-1,n}}
\end{aligned}
\]
A calculation with the pure braid relations \eqref{eq:purebraidrels} then shows that $\tau^q(A_{1,q})=A_{n-q+1,n}$.
It remains to check that $\epsilon^2=1$ and $(\epsilon \omega_k)^2=1$ for each $k$, $1\le k \le n$. The first of these is straightforward. For the remaining ones, note that
$\epsilon(A_{i,j} \cdots A_{j-1,j})=
(A_{i,j} \cdots A_{j-1,j})^{-1}$ for $i>1$ and $j>2$, $\epsilon(A_{i,i+1}\cdots A_{i,j})=(A_{i,i+1}\cdots A_{i,j})^{-1}$ for $i>1$, while $\epsilon(A_{1,2}\cdots A_{1,j})=(A_{1,2}\cdots A_{1,j})^{-1}Z_n^2$. These observations, together with the pure braid relations \eqref{eq:purebraidrels} may be used to verify that $\omega_k \epsilon \omega_k = \epsilon$ for each $k$, $1\le k \le n$.
\end{proof}
Thus, the elements $\omega_1,\dots,\omega_n$ and $\epsilon$ of $\operatorname{Aut}(P_n)$ satisfy the relations of the extended mapping class group $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$. By construction, these elements of $\operatorname{Aut}(P_n)$ induce the automorphisms of
$\bar{P}_n \cong \mathrm{P}\!\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ corresponding to conjugation by the generators (with the same names) of $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$ upon passing to the quotient.
\begin{theorem} \label{thm:autpres}
For $n\ge 4$, the automorphism group $\operatorname{Aut}(P_n)$ of the pure braid group admits a presentation with generators
\[
\epsilon, \ \omega_k,\ 1\le k \le n,\ \psi,\ \phi_{i,j}, \ 1\le i < j \le n,\ \{i,j\}\neq \{1,2\},
\]
and relations
\begin{equation*}
\begin{aligned}
&\begin{matrix}
\omega_i\omega_j=\omega_j\omega_i,\ |i-j|\ge 2, \hfill
&\omega_i \omega_{i+1} \omega_i= \omega_{i+1} \omega_i \omega_{i+1},\ i< n,
&\epsilon^2=1, \hfill\\[3pt]
(\omega_1 \omega_2 \cdots \omega_n)^{n+1}=1,\hfill
&\omega_1\cdots \omega_{n-1}\omega_n^2\omega_{n-1}\cdots \omega_1=1,\hfill
&(\epsilon \omega_k)^2=1,\ k \le n,\\[3pt]
\psi\phi_{i,j}^{} \psi=\phi_{i,j}^{-1},\ \forall i,j, \hfill
&\phi_{i,j}\phi_{p,q}=\phi_{p,q}\phi_{i,j},\ \forall i,j,p,q\hfill
&\psi^2=1,\hfill \\[3pt]
\epsilon\psi \epsilon=\psi, \hfill
&\omega_k^{-1}\psi^{}_{}\omega_k^{}=\psi,\ k\le n \hfill
&\epsilon\phi_{i,j}^{} \epsilon=\phi_{i,j}^{-1}, \hfill
\end{matrix}
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
&\begin{matrix}
\omega_1^{-1}\phi_{i,j}^{}\omega_1^{}=\begin{cases}
\phi_{2,j}&i=1,\\ \phi_{1,j}&i=2,\\
\phi_{i,j}&\text{otherwise,}
\end{cases}\qquad \qquad
&
\omega_2^{-1}\phi_{i,j}^{}\omega_2^{}=\begin{cases}
\phi_{1,3}^{-1}&i=1,j=3,\\
\phi_{1,3}^{-1}\phi_{3,j}^{}&i=2, j> 3,\\
\phi_{1,3}^{-1}\phi_{2,j}^{}&i= 3,\\
\phi_{1,3}^{-1}\phi_{i,j}^{}&\text{otherwise,}
\end{cases}
\end{matrix}\\
&
\omega_k^{-1}\phi_{i,j}^{}\omega_k^{}=\begin{cases}
\phi_{i-1,j}&k=i-1,\\ \phi_{i+1,j}&k=i<j-1,\\
\phi_{i,j-1}&k=j-1>i,\\ \phi_{i,j+1}&k=j,\\
\phi_{i,j}&\text{otherwise,}
\end{cases}\quad \text{for $3\le k \le n-1$,}\\
&
\omega_n^{-1}\phi_{i,j}^{}\omega_n^{}=\begin{cases}
\phi_{i,j}{}\phi_{1,n}^{}\phi_{2,n}^{}\phi_{i,n}^{-1}\phi_{j,n}^{-1}&j<n,\\
\phi_{i,n}^{-1}\phi_{1,n}^{}\phi_{2,n}^{}&j=n.
\end{cases}
\end{aligned}
\end{equation*}
\end{theorem}
\begin{proof}
Recall from \eqref{eq:splitaut} and \eqref{eq:semi} that there is a split, short exact sequence
\[
1\to \operatorname{tv}(P_n) \to \operatorname{Aut}(P_n) \leftrightarrows \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1}) \to 1.
\]
Since the automorphisms $\psi$ and $\phi_{i,j}$ generate the transvection subgroup $\operatorname{tv}(P_n)$, and the automorphisms $\epsilon$ and $\omega_k$ induce the generators of $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})=\operatorname{Aut}(\bar{P}_n)$, these automorphisms collectively generate $\operatorname{Aut}(P_n)$. By Proposition \ref{prop:mcgrels}, the automorphisms $\epsilon$ and $\omega_k$ satisfy the extended mapping class group relations. As noted previously, the formulas \eqref{eq:transvections} may be used to show that the transvections $\psi$ and $\phi_{i,j}$ satisfy $\psi^2=1$ and $\psi\phi_{i,j}\psi=\phi_{i,j}^{-1}$. So it suffices to show that the actions of the automorphisms $\epsilon$ and $\omega_k$ on the transvections $\psi$ and $\phi_{i,j}$ are as asserted. This may be accomplished by calculations with the explicit descriptions of these automorphisms given in \eqref{eq:transvections} and \eqref{eq:mcgautos}.
\end{proof}
\begin{remark} Theorem \ref{thm:autpres} exhibits the semidirect product structure of $\operatorname{Aut}(P_n) \cong \operatorname{tv}(P_n) \rtimes \operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$. Recall that $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})=M(0,n+1) \rtimes \ensuremath{\mathbb Z}_2$ is itself the semidirect product of the (non-extended) mapping class group and $\ensuremath{\mathbb Z}_2$. Note that the generator $\psi$ of $\operatorname{tv}(P_n) < \operatorname{Aut}(P_n)$ commutes with the generators $\epsilon,\omega_1,\dots,\omega_n$ of $\operatorname{Aut}(P_n)$ which induce the generators of $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+1})$. It follows that $\operatorname{Aut}(P_n)$ may be realized as the iterated semidirect product $\operatorname{Aut}(P_n) \cong (\ensuremath{\mathbb Z}^N \rtimes M(0,n+1)) \rtimes (\ensuremath{\mathbb Z}_2 \times \ensuremath{\mathbb Z}_2)$.
\end{remark}
Similar considerations yield a
presentation for the automorphism group of the three strand pure braid group $P_3\cong \ensuremath{\mathbb Z} \times F_2$. In this case, the split extension \eqref{eq:splitaut} yields $\operatorname{Aut}(P_3) \cong \operatorname{tv}(P_3) \rtimes \operatorname{Aut}(F_2)$,
where $\operatorname{tv}(P_3)\cong \ensuremath{\mathbb Z}^2 \rtimes \ensuremath{\mathbb Z}_2$, generated by $\psi,\phi_{1,3},\phi_{2,3}$
with $\psi^2=1$ and $\psi\phi_{i,j}^{}\psi=\phi_{i,j}^{-1}$
(see \eqref{eq:transvections}), and
\[
F_2 = \bar{P}_3=P_3/Z(P_3)=P_3/\langle A_{1,2}A_{1,3}A_{2,3}\rangle = \langle A_{1,3}, A_{2,3} \rangle
\]
is the free group on two generators. The group $\operatorname{Aut}(F_2)$ admits the following presentation, due to Neumann (see \cite[\S3.5, Prob.~2]{MKS}):
\[
\operatorname{Aut}(F_2) = \langle P,\sigma,U \mid P^2,\sigma^2,(\sigma P)^4,(P\sigma PU)^2,(UP\sigma)^3,[U,\sigma U \sigma]\rangle,
\]
where the automorphisms $P,\sigma,U$ of $F_2$ are given by
\[
\begin{matrix}
P(A_{1,3}^{})=A_{2,3}, \quad & \sigma(A_{1,3}^{})=A_{1,3}^{-1}, \quad & U(A_{1,3}^{})=A_{1,3}^{}A_{2,3}^{},\\
P(A_{2,3}^{})=A_{1,3},\hfill & \sigma(A_{2,3}^{})=A_{2,3}^{},\hfill & U(A_{2,3}^{})=A_{2,3}^{}.\hfill
\end{matrix}
\]
Lifts of these automorphisms to automorphisms of $P_3$ fixing $Z_3=A_{1,2}A_{1,3}A_{2,3}$ are given by setting
\[
P(A_{1,2}^{})=A_{2,3}A_{1,2}A_{2,3}^{-1},\quad
\sigma(A_{1,2}^{})=A_{1,2}^{}A_{1,3}^2,\quad
U(A_{1,2}^{})=A_{2,3}^{-1}A_{1,2}.
\]
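As a check, each of these lifts does fix $Z_3$: for example, $\sigma(Z_3)=A_{1,2}^{}A_{1,3}^{2}\cdot A_{1,3}^{-1}\cdot A_{2,3}^{}=Z_3$, while $P(Z_3)=A_{2,3}^{}A_{1,2}^{}A_{2,3}^{-1}\cdot A_{2,3}^{}\cdot A_{1,3}^{}=A_{2,3}^{}A_{1,2}^{}A_{1,3}^{}=A_{2,3}^{}Z_3^{}A_{2,3}^{-1}=Z_3$, using the centrality of $Z_3$; the verification for $U$ is analogous.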
Calculations with these formulas yield the following result.
\begin{proposition}
The automorphism group $\operatorname{Aut}(P_3)$ of the three strand pure braid group admits a presentation with generators $P,\ \sigma,\ U,\ \psi,\ \phi_{1,3},\ \phi_{2,3}$,
and relations
\[
\begin{matrix}
P^2, & \sigma^2, & (\sigma P)^4, & (P\sigma PU)^2, & (UP\sigma)^3, & [U,\sigma U \sigma],\\
[U,\psi], & [P,\psi], & [\sigma,\psi], & [U,\phi_{1,3}],& P\phi_{1,3}P\phi_{2,3}^{-1}, & (\sigma\phi_{1,3})^2, \\
\psi^2, & (\psi\phi_{i,3})^2, & [\phi_{1,3},\phi_{2,3}], & \phi_{1,3}[\phi_{2,3}^{},U],& P\phi_{2,3}^{}P\phi_{1,3}^{-1}, & [\sigma,\phi_{2,3}^{}].
\end{matrix}
\]
\end{proposition}
\begin{remark} Note that the generator $\psi$ of $\operatorname{tv}(P_3)< \operatorname{Aut}(P_3)$ commutes with the generators $P,\sigma,U$ of $\operatorname{Aut}(P_3)$ which project to the generators of $\operatorname{Aut}(F_2)$. It follows that
$\operatorname{Aut}(P_3) \cong \ensuremath{\mathbb Z}^2 \rtimes (\ensuremath{\mathbb Z}_2 \times \operatorname{Aut}(F_2))$.
\end{remark}
Since the two strand pure braid group $P_2=\ensuremath{\mathbb Z}$ is infinite cyclic, $\operatorname{Aut}(P_2)=\ensuremath{\mathbb Z}_2$.
\section{Automorphisms of the pure monomial braid group}
As discussed for example in \cite[\S3]{BMR}, the full monomial braid group $B(r,n)=B(2,n)$ embeds in the Artin braid group $B_{n+1}$.
In terms of the standard generators $\sigma_i$, $1\le i \le n$, of $B_{n+1}$ and the generators $\rho_j$, $0\le j \le n-1$ of $B(r,n)$, one choice of embedding is
given by $\rho_0 \mapsto \sigma_1^2$ and $\rho_j \mapsto \sigma_{j+1}$ for $j\neq 0$.
Restricting to the pure monomial braid group yields a monomorphism $P(r,n) \to P_{n+1}$. In terms of the generators \eqref{eq:Pn gens} of $P_{n+1}$ and \eqref{eq:PureMonomialGens} of $P(r,n)$, this is given by
\[
C_j\mapsto A_{1,j+1}^r,\quad
A_{i,j}^{(q)}\mapsto (A_{1,i+1} \cdots A_{i,i+1})^{q-r} A_{i+1,j+1}
(A_{1,i+1} \cdots A_{i,i+1})^{r-q}.
\]
Recall the generators $Z_{n+1}=(\sigma_1\cdots\sigma_n)^{n+1}$ and $\zeta_n=(\rho_0\cdots\rho_{n-1})^n$ of the centers $Z(B_{n+1})=Z(P_{n+1})$ and $Z(B(r,n))$, and that $Z(P(r,n))$ is generated by $\zeta_n^r$. It is readily checked that the above embedding takes $\zeta_n$ to $Z_{n+1}$. Consequently, the group $\bar{P}(r,n)=P(r,n)/Z(P(r,n))$ may be realized as a (finite index) subgroup of $\bar{P}_{n+1}=P_{n+1}/Z(P_{n+1})$.
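For instance, when $n=2$, the element $\zeta_2=(\rho_0\rho_1)^2$ maps to $(\sigma_1^2\sigma_2)^2=\sigma_1(\sigma_1\sigma_2\sigma_1)\sigma_1\sigma_2=\sigma_1(\sigma_2\sigma_1\sigma_2)\sigma_1\sigma_2=(\sigma_1\sigma_2)^3=Z_3$, illustrating the claim that $\zeta_n\mapsto Z_{n+1}$.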
Composing with the map $B_{n+1} \to \operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ given by $\sigma_i \mapsto \omega_i$ realizes $\bar{P}(r,n)$ as a finite index subgroup of the extended mapping class group $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$. Hence, for $n \ge 3$, we have $\operatorname{Aut}(\bar{P}(r,n)) \cong N_\Gamma(\bar{P}(r,n))$ by Proposition \ref{prop:CC}.
Since $P(r,n) \cong Z(P(r,n)) \times \bar{P}(r,n)$, the split extension \eqref{eq:splitaut} yields a semidirect product decomposition
$\operatorname{Aut}(P(r,n)) \cong \operatorname{tv}(P(r,n)) \rtimes \operatorname{Aut}(\bar{P}(r,n))$. Thus, for $n\ge 3$, the automorphism group of the pure monomial braid group may be realized as
\begin{equation*} \label{eq:monosemi}
\operatorname{Aut}(P(r,n)) \cong \operatorname{tv}(P(r,n)) \rtimes N_\Gamma(\bar{P}(r,n)).
\end{equation*}
\begin{lemma} \label{lem:monotrans}
Let $N_r=r\binom{n}{2}+n-1$.
The transvection subgroup of the automorphism group of the pure monomial braid group is given by
$\operatorname{tv}(P(r,n)) \cong \ensuremath{\mathbb Z}^{N_r} \rtimes \ensuremath{\mathbb Z}_2$,
where $\ensuremath{\mathbb Z}_2$ acts on $\ensuremath{\mathbb Z}^{N_r}$ by taking elements to their inverses.
\end{lemma}
\begin{proof}
For notational convenience, denote the generator of the center of $P(r,n)$ by $Z_{r,n}=\zeta_n^r$.
In terms of the generators \eqref{eq:PureMonomialGens} of $P(r,n)$, the transvection subgroup $\operatorname{tv}(P(r,n))$ of $\operatorname{Aut}(P(r,n))$ consists of automorphisms of the form
$C_j \mapsto C_j Z_{r,n}^{s_j}$ and $A_{i,j}^{(q)} \mapsto A_{i,j}^{(q)} Z_{r,n}^{t_{i,j,q}}$,
where $s_j,t_{i,j,q}\in\ensuremath{\mathbb Z}$ and $S=\sum_{j=1}^n s_j + \sum_{q=1}^r\sum_{1\le i<j\le n} t_{i,j,q}$ is either equal to $0$ or $-2$. In the former case, $Z_{r,n} \mapsto Z_{r,n}$, while $Z_{r,n} \mapsto Z_{r,n}^{-1}$ in the latter. This yields a surjection $\operatorname{tv}(P(r,n)) \to \ensuremath{\mathbb Z}_2$, with kernel consisting of transvections for which $ S=0$. Since $P(r,n)$ has $N_r+1$ generators, this kernel is free abelian of rank $N_r$. Setting $s_{1}=-2$, $s_j=0$ for $2\le j \le n$, and all $t_{i,j,q}=0$ gives a splitting $\ensuremath{\mathbb Z}_2 \to \operatorname{tv}(P(r,n))$. Thus,
$\operatorname{tv}(P(r,n)) \cong \ensuremath{\mathbb Z}^{N_r} \rtimes \ensuremath{\mathbb Z}_2$. This group is generated by transvections $\Psi$, $\Upsilon_i$, $2\le i \le n$, $\Phi_{i,j,p}$, $1\le i <j\le n$, $1\le p \le r$, of $P(r,n)$,
defined by
\begin{align} \label{eq:monotransvections}
\Psi\colon &\begin{cases}
C_j \mapsto C_1Z_{r,n}^{-2}&\text{if $j= 1$,}\\
C_j \mapsto C_j&\text{if $j\neq 1$,}\\
A_{k,l}^{(q)}\mapsto A_{k,l}^{(q)}&\text{for all $k$, $l$, $q$,} \end{cases} \quad
\Upsilon_i\colon \begin{cases}
C_j \mapsto C_1Z_{r,n}&\text{if $j= 1$,}\\
C_j \mapsto C_iZ_{r,n}^{-1}&\text{if $j=i$,}\\
C_j \mapsto C_j,&\text{if $j\neq 1,i$,}\\
A_{k,l}^{(q)}\mapsto A_{k,l}^{(q)}&\text{for all $k$, $l$, $q$,} \end{cases} \\
\Phi_{i,j,p}\colon &\begin{cases}
C_j \mapsto C_1Z_{r,n}&\text{if $j= 1$,}\\
C_j \mapsto C_j&\text{if $j\neq 1$,}\\
A_{k,l}^{(q)}\mapsto A_{i,j}^{(p)} Z_{r,n}^{-1}&\text{if $k=i$, $l=j$, $q=p$,}\\
A_{k,l}^{(q)}\mapsto A_{k,l}^{(q)}&\text{otherwise.} \end{cases} \notag
\end{align}
Check that the transvections $\Upsilon_i$, $\Phi_{i,j,p}$ all commute, and that $\Psi^2=1, \Psi \Upsilon_{i}\Psi = \Upsilon^{-1}_{i}$, and $\Psi \Phi_{i,j,p}\Psi = \Phi_{i,j,p}^{-1}$ to complete the proof.
\end{proof}
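For instance, when $r=2$ and $n=3$, the group $P(2,3)$ has the $N_2+1=9$ generators $C_1,C_2,C_3$ and $A_{i,j}^{(q)}$, $1\le i<j\le 3$, $q\in\{1,2\}$, and the lemma gives $\operatorname{tv}(P(2,3))\cong \ensuremath{\mathbb Z}^{8}\rtimes\ensuremath{\mathbb Z}_2$, since $N_2=2\binom{3}{2}+3-1=8$.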
For $n\ge 3$, viewing the group $\bar{P}(r,n)$ as a subgroup of the extended mapping class group via the sequence of embeddings
\[
\bar{P}(r,n) \to \bar{P}_{n+1} \to \bar{B}_{n+1} \to M(0,n+2) \to \operatorname{Mod}(\ensuremath{\mathbb S}_{n+2}),
\]
the group $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ acts on $\bar{P}(r,n)$ by conjugation. The subgroup $\bar{P}(r,n)<
\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ is, however, not a normal subgroup. For instance, one can check that $\omega_1\cdot
\bar{P}(r,n)\neq \bar{P}(r,n)\cdot\omega_1$. Thus, the normalizer $N_\Gamma(\bar{P}(r,n))$ of $\bar{P}(r,n)$ in $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ is a proper subgroup of $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$.
So to understand the structure of $\operatorname{Aut}(P(r,n))=\operatorname{tv}(P(r,n)) \rtimes N_\Gamma(\bar{P}(r,n))$, we must determine this normalizer. For $n\ge 3$, the normalizer $N_\Gamma(\bar{B}(r,n))$ of $\bar{B}(r,n)=\bar{B}(2,n)=B(2,n)/Z(B(2,n))$ in $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ was found by Charney and Crisp
\cite[Prop.~10]{CC05}:
\begin{equation}\label{eq:Bnormalizer}
N_\Gamma(\bar{B}(r,n)) \cong \bar{B}(r,n)\rtimes (\ensuremath{\mathbb Z}_2 \times \ensuremath{\mathbb Z}_2).
\end{equation}
Identifying the generators of $\bar{B}(r,n)$ with their images in $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$, the group $N_\Gamma(\bar{B}(r,n))$ has generators
$\rho_0=\omega_1^2, \rho_1=\omega_2,\dots,\rho_{n-1}=\omega_n, \epsilon, \Delta$,
where
\[
\Delta = \omega_1 \cdots \omega_{n+1}\cdot \omega_1 \cdots \omega_n\cdot \omega_1 \cdots \omega_{n-1} \cdots\cdots \omega_1\cdot\omega_2\cdot\omega_1
\]
in $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$. Note that $\Delta^2=(\omega_1\cdots\omega_{n+1})^{n+2}=1$.
The elements $\epsilon$ and $\Delta$ generate $\ensuremath{\mathbb Z}_2\times\ensuremath{\mathbb Z}_2$. Their action on $\bar{B}(r,n)$ is given by $\epsilon\colon\rho_i\mapsto \rho_i^{-1}$ and
\begin{equation*} \label{eq:Deltaaction}
\Delta\colon \rho_i \mapsto
\begin{cases}
(\rho_{n-1} \cdots \rho_1 \rho_0 \rho_1 \cdots \rho_{n-1})^{-1}&\text{if $i=0$,}\\
\rho_{n-i}&\text{if $1\le i \le n-1$.}
\end{cases}
\end{equation*}
\begin{proposition} \label{prop:Pnormalizer}
Let $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$. For $n \ge 3$, $N_\Gamma(\bar{P}(r,n))=N_\Gamma(\bar{B}(r,n))$.
\end{proposition}
\begin{proof}
Since $\bar{P}(r,n)$ is normal in $\bar{B}(r,n)$, we have $\rho_i(\bar{P}(r,n))=\bar{P}(r,n)$ for each $i$, $0\le i \le n-1$. It is straightforward to check that
$\epsilon(\bar{P}(r,n))=\bar{P}(r,n)$. We assert that $\Delta(\bar{P}(r,n))=\bar{P}(r,n)$ as well, which would imply that $\bar{P}(r,n)$ is normal in $N_\Gamma(\bar{B}(r,n))$.
For this, recall the monomial braids $X_i=\rho_{i-1}\cdots\rho_1\rho_0\rho_1\cdots\rho_{i-1}$, and note that $\Delta(\rho_0)=X_n^{-1}$ and more generally, $\Delta(X_i)=X_{n-i+1}^{-1}$. Recall also from the proof of Lemma \ref{lem:monocenter} that $X_i^r=D_i$ is a pure monomial braid. Using these observations, one can check (on the generators of $\bar{P}(r,n)$, see \eqref{eq:PureMonomialGens}) that $\Delta(\bar{P}(r,n))=\bar{P}(r,n)$. Thus, $\bar{P}(r,n)\triangleleft N_\Gamma(\bar{B}(r,n))$.
The above considerations imply that $N_\Gamma(\bar{B}(r,n))$ is a subgroup of $N_\Gamma(\bar{P}(r,n))$, since the latter is the largest subgroup of $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ in which $\bar{P}(r,n)$ is normal. However, the (right) cosets of $H=N_\Gamma(\bar{B}(r,n))$ in $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ are $H$ and $H\cdot\omega_1$, and since
$\omega_1\cdot \bar{P}(r,n)\neq \bar{P}(r,n)\cdot\omega_1$, the same is true for any element of $H\cdot\omega_1$. It follows that
$N_\Gamma(\bar{P}(r,n))=N_\Gamma(\bar{B}(r,n))$.
\end{proof}
Hence we have $\operatorname{Aut}(P(r,n)) \cong \operatorname{tv}(P(r,n)) \rtimes N_\Gamma(\bar{B}(r,n))$,
and we now turn our attention to exhibiting a presentation for this group. As done with the Artin pure braid group in the previous section, we exhibit automorphisms of $P(r,n)$ which fix the generator $Z_{r,n}=\zeta_n^r$ of the center, and induce the corresponding (conjugation) automorphisms of $\bar{P}(r,n)$ upon passing to the quotient.
The automorphisms $\epsilon$ and $\Delta$ of $\bar{B}(r,n)$ extend to automorphisms of $B(r,n)$
(denoted by the same symbols) which take the generator $\zeta_n$ of the center $Z(B(r,n))$ to its inverse.
For $\beta \in B(r,n)$, let $c_\beta\in\operatorname{Aut}(P(r,n))$ be the automorphism given by conjugation by $\beta$, $c_\beta(x)=\beta^{-1}x\beta$. Recall the transvection automorphisms $\Psi$, $\Upsilon_i$, $\Phi_{i,j,p}$ of $P(r,n)$ defined in \eqref{eq:monotransvections}, and
define elements $\tilde\rho_k$, $0\le k \le n-1$, $\tilde\epsilon$, and $\tilde\Delta$ of $\operatorname{Aut}(P(r,n))$ as follows:
\begin{equation} \label{eq:monoauts}
\tilde\rho_0 = c_{\rho_0},\ \tilde\rho_1= c_{\rho_1} \circ \Upsilon_2,\ \tilde\rho_k=c_{\rho_k}\ (2\le k \le n-1),\ \tilde\epsilon= \epsilon \circ \Psi,\ \tilde\Delta = \Delta \circ \Psi \circ \Upsilon_n.
\end{equation}
Since $c_\beta(Z_{r,n})=Z_{r,n}$,
$\epsilon(Z_{r,n})=Z_{r,n}^{-1}$,
$\Delta(Z_{r,n})=Z_{r,n}^{-1}$, $\Upsilon_j(Z_{r,n})=Z_{r,n}$, and $\Psi(Z_{r,n})=Z_{r,n}^{-1}$, each of the automorphisms defined above fixes $Z_{r,n}$.
Explicit formulas for the actions of these automorphisms on the pure monomial braid generators \eqref{eq:PureMonomialGens} may be obtained through calculations using
the monomial braid relations \eqref{eq:MonomialBraidRels} and the presentation for $P(r,n)$ found in \cite[Thm.~2.2.4]{Coh01} (see also \cite[Lem.~2.2.3]{Coh01}). The results of these calculations are relegated to the next section.
\begin{proposition} \label{prop:Nrels}
The automorphisms $\tilde\rho_0,\dots,\tilde\rho_{n-1},\tilde\epsilon,\tilde\Delta\in\operatorname{Aut}(P(r,n))$
satisfy
\begin{equation*}
\begin{array}{lll}
\tilde\rho_i \tilde\rho_{i+1} \tilde\rho_i= \tilde\rho_{i+1} \tilde\rho_i \tilde\rho_{i+1}\ \text{for $1\le i\le n-2$},
&\tilde\rho_i\tilde\rho_j=\tilde\rho_j\tilde\rho_i\ \text{for}\ |i-j|\ge 2,
&\tilde\epsilon^2=1,\\[2pt]
(\tilde\rho_0 \tilde\rho_1 \cdots \tilde\rho_{n-1})^{n}=1,
&(\tilde\rho_0\tilde\rho_1)^2=(\tilde\rho_1\tilde\rho_0)^2,
&\tilde\Delta^2=1,\\[2pt]
\tilde\Delta\tilde\rho_0\tilde\Delta=(\tilde\rho_{n-1}\cdots\tilde\rho_1\tilde\rho_0\tilde\rho_1\cdots\tilde\rho_{n-1})^{-1},
&(\tilde\epsilon\tilde\rho_k)^2=1\ \text{for}\ 0\le k< n,
&[\tilde\epsilon,\tilde\Delta]=1, \\[2pt]
\tilde\Delta\tilde\rho_k\tilde\Delta=\tilde\rho_{n-k}\ \text{for}\ 1\le k< n.
\end{array}
\end{equation*}
\end{proposition}
These are the relations of the normalizer of $\bar{P}(r,n)$ in $\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$.
\begin{proof}[Sketch of proof]
Since $\tilde\rho_k$ is conjugation by $\rho_k$ for $k \neq 1$, all of the relations which do not involve $\tilde\rho_1$, $\tilde\epsilon$, and $\tilde\Delta$ hold since they hold in the monomial braid group. Additionally, note that the automorphisms $\rho_k$, $\epsilon$, and $\Delta$ of $\bar{P}(r,n)$ generate the normalizer
$N_\Gamma(\bar{P}(r,n))=N_\Gamma(\bar{B}(r,n))$, where $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$, so they satisfy the analogs of the relations stated in the Proposition.
These observations, together with the formulas for the automorphisms $\tilde\rho_k$, $\tilde\epsilon$, $\tilde\Delta$, $\Psi$, and $\Upsilon_i$ recorded in \S\ref{sec:monoaction} and \eqref{eq:monotransvections}, may be used to verify that all of the asserted relations hold. For instance, let $\tau= \rho_0\rho_1\cdots\rho_{n-1}$ and $\tilde\tau=\tilde\rho_0\tilde\rho_1\cdots\tilde\rho_{n-1}$. Note that $\tau^n=1$. One can check that
\[
\begin{array}{ll}
\tilde\tau(C_1)=\tau(C_1) \cdot Z_{r,n} = C_n^{W_1}\cdot Z_{r,n}, &
\tilde\tau(C_2)=\tau(C_2) \cdot Z_{r,n}^{-1} = C_1^{W_2}\cdot Z_{r,n}^{-1},\\
\tilde\tau(C_j)=\tau(C_j) = C_{j-1}^{W_j} \ (3\le j\le n), &
\tilde\tau(A_{i,j}^{(q)}) = \tau(A_{i,j}^{(q)}),
\end{array}
\]
for certain words $W_j\in P(r,n)$. This, together with the fact $\tau^n=1$, may be used to show that $\tilde\tau^n=(\tilde\rho_0 \tilde\rho_1 \cdots \tilde\rho_{n-1})^{n}=1$.
For the relation $\tilde\Delta\tilde\rho_0\tilde\Delta=(\tilde\rho_{n-1}\cdots\tilde\rho_1\tilde\rho_0\tilde\rho_1\cdots\tilde\rho_{n-1})^{-1}$, it is enough to show that
$\tilde\lambda=\tilde\rho_{n-1}\cdots\tilde\rho_1\tilde\rho_0\tilde\rho_1\cdots\tilde\rho_{n-1}\tilde\Delta\tilde\rho_0\tilde\Delta=1$. The analogous automorphism $\lambda=\rho_{n-1}\cdots\rho_1\rho_0\rho_1\cdots\rho_{n-1}\Delta\rho_0\Delta$ is trivial (consider its action on the generators of $B(r,n)$). Checking that $\tilde\lambda(x)=\lambda(x)$ for each generator $x$ of $P(r,n)$ reveals that $\tilde\lambda=1$ as well.
Verification of the remaining relations may be handled in a similar manner, and is left to the reader.
\end{proof}
\begin{theorem} \label{thm:monoautpres}
For $n\ge 3$, the automorphism group $\operatorname{Aut}(P(r,n))$ of the pure monomial braid group admits a presentation with generators
\[
\tilde\epsilon,\ \tilde\Delta,\ \tilde\rho_k,\ 0\le k \le n-1,\ \Psi,\ \Upsilon_l, \ 2\le l \le n,\ \Phi_{i,j,p}, \ 1\le i < j \le n,\ 1\le p\le r,
\]
and relations
\begin{align*}
&\begin{array}{lll}
\tilde\rho_i \tilde\rho_{i+1} \tilde\rho_i= \tilde\rho_{i+1} \tilde\rho_i \tilde\rho_{i+1}\ \text{for $1\le i\le n-2$},
&\tilde\rho_i\tilde\rho_j=\tilde\rho_j\tilde\rho_i\ \text{for}\ |i-j|\ge 2,
&\tilde\epsilon^2=1,\\[2pt]
(\tilde\rho_0 \tilde\rho_1 \cdots \tilde\rho_{n-1})^{n}=1,
&(\tilde\rho_0\tilde\rho_1)^2=(\tilde\rho_1\tilde\rho_0)^2,
&\tilde\Delta^2=1,\\[2pt]
\tilde\Delta\tilde\rho_0\tilde\Delta=(\tilde\rho_{n-1}\cdots\tilde\rho_1\tilde\rho_0\tilde\rho_1\cdots\tilde\rho_{n-1})^{-1},
&(\tilde\epsilon\tilde\rho_k)^2=1\ \text{for}\ 0\le k< n,
&[\tilde\epsilon,\tilde\Delta]=1,\\[2pt]
\tilde\Delta\tilde\rho_k\tilde\Delta=\tilde\rho_{n-k}\ \text{for}\ 1\le k< n,
\end{array} \displaybreak[0]\\
&\ \,\Psi^2=1,\ \ [\Upsilon_l,\Phi_{i,j,p}]=1, \forall l,i,j,p,\ \ [\Phi_{i,j,p},\Phi_{k,l,q}]=1, \forall i,j,k,l,p,q,\ \ \tilde\epsilon\Psi\tilde\epsilon=\Psi,\\
&\ \,\Psi\Upsilon_l\Psi=\Upsilon_l^{-1}, \forall l,\ \ \Psi\Phi_{i,j,p}\Psi=\Phi_{i,j,p}^{-1}, \forall i,j,p,\ \
\ \,\tilde\Delta\Psi\tilde\Delta=\Psi,\quad
\tilde\rho_k^{-1}\Psi\tilde\rho_k^{}=\Psi, \forall k,\displaybreak[0]\\
&\ \,\tilde\Delta\Upsilon_l\tilde\Delta=\begin{cases}\Upsilon_{n-l+1}^{-1}\Upsilon_n&l<n,\\ \Upsilon_n&l=n,\end{cases}\qquad
\tilde\epsilon\Upsilon_l\tilde\epsilon=\Upsilon_l^{-1}, \forall l,\qquad
\tilde\rho_0^{-1}\Upsilon_l\tilde\rho_0^{}=\Upsilon_l, \forall l,\displaybreak[0]\\
&\ \,\tilde\rho_1^{-1}\Upsilon_l^{}\tilde\rho_1^{}=\begin{cases}\Upsilon_2^{-1}&l=2,\\ \Upsilon_2^{-1}\Upsilon_l^{}&l\neq 2,\end{cases}\qquad
\tilde\rho_k^{-1}\Upsilon_l^{}\tilde\rho_k^{}=\begin{cases}
\Upsilon_{k+1}&l=k,\\ \Upsilon_k&l=k+1,\\ \Upsilon_l&l\neq k,k+1,\end{cases}\ \text{for $k\ge 2$,} \displaybreak[0]\\
&\ \,\tilde\Delta\Phi_{i,j,p}\tilde\Delta=\begin{cases}
\Upsilon_{n-j+1}^{-1}\Upsilon_{n-i+1}^{-1}\Upsilon_n\Phi_{n-j+1,n-i+1,p}&j<n,\\
\Upsilon_{n-i+1}^{-1}\Upsilon_n\Phi_{1,n-i+1,p}&j=n,
\end{cases} \displaybreak[0]\\
&\ \,\tilde\epsilon\Phi_{i,j,p}\tilde\epsilon=\begin{cases}
\Phi_{i,j,r-p}^{-1}& p< r,\\ \Phi_{i,j,r}^{-1}&p=r,\end{cases}
\qquad \tilde\rho_0^{-1}\Phi_{i,j,q}\tilde\rho_0^{}=\begin{cases}\Phi_{1,j,r}&i=1,q=1,\\
\Phi_{1,j,q-1}&i=1,q\neq 1,\\\Phi_{i,j,q}&\text{otherwise},\end{cases} \displaybreak[0]\\
&\ \,\tilde\rho_1^{-1}\Phi_{i,j,p}^{}\tilde\rho_1^{}=\begin{cases}\Upsilon_2^{-1}\Phi_{1,2,r-p}&i=1,j=2,p<r,\\
\Upsilon_2^{-1}\Phi_{2,j,p}^{}&i=1,j>2,\\ \Upsilon_2^{-1}\Phi_{1,j,p}^{}&i=2,\\
\Upsilon_2^{-1}\Phi_{i,j,p}^{}&\text{otherwise},\end{cases} \displaybreak[0]\\
&\ \,\tilde\rho_k^{-1}\Phi_{i,j,p}^{}\tilde\rho_k^{}=\begin{cases}
\Phi_{k,j,p}&k=i-1,\\ \Phi_{k+1,j,p}&k=i<j-1,\\ \Phi_{k,k+1,r-p}&k=i=j-1, p<r,\\ \Phi_{i,k,p}&k=j-1>i,\\ \Phi_{i,k+1,p}&k=j,\\ \Phi_{i,j,p}&\text{otherwise}
\end{cases}\qquad \text{for $k\ge 2$.}
\end{align*}
\end{theorem}
\begin{proof}
By Proposition \ref{prop:CC} and Proposition \ref{prop:Pnormalizer}, there is a split, short exact sequence
\[
1\to \operatorname{tv}(P(r,n)) \to \operatorname{Aut}(P(r,n)) \leftrightarrows N_\Gamma(\bar{P}(r,n)) \to 1,
\]
where $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$. From the proof of Lemma \ref{lem:monotrans}, the automorphisms $\Psi$, $\Upsilon_l$, and $\Phi_{i,j,p}$ generate the transvection subgroup $\operatorname{tv}(P(r,n))$, and satisfy the relations $\Psi^2=1, [\Upsilon_l,\Phi_{i,j,p}]=1, \Psi\Upsilon_l\Psi=\Upsilon_l^{-1}, \Psi\Phi_{i,j,p}^{}\Psi=\Phi_{i,j,p}^{-1}$. Since the automorphisms $\tilde\epsilon, \tilde\Delta, \tilde\rho_k$ induce the generators of $N_\Gamma(\bar{P}(r,n))$, these automorphisms, together with the aforementioned transvections, generate $\operatorname{Aut}(P(r,n))$. By Proposition \ref{prop:Nrels}, the automorphisms $\tilde\epsilon, \tilde\Delta, \tilde\rho_k$ satisfy the relations of $N_\Gamma(\bar{P}(r,n))$. So it suffices to show that the actions of these automorphisms on the transvections $\Psi$, $\Upsilon_l$, and $\Phi_{i,j,p}$ are as asserted. This may be accomplished by calculations with the descriptions of these automorphisms given in \eqref{eq:monoauts}, \S\ref{sec:monoaction}, and \eqref{eq:monotransvections}.
\end{proof}
\begin{remark} Theorem \ref{thm:monoautpres} exhibits the semidirect product structure $\operatorname{Aut}(P(r,n)) \cong \operatorname{tv}(P(r,n)) \rtimes N_\Gamma(\bar{P}(r,n))$, where $\Gamma=\operatorname{Mod}(\ensuremath{\mathbb S}_{n+2})$ and $\operatorname{tv}(P(r,n)) \cong \ensuremath{\mathbb Z}^{N_r} \rtimes \ensuremath{\mathbb Z}_2$. Recall that $N_\Gamma(\bar{P}(r,n))=N_\Gamma(\bar{B}(r,n))=\bar{B}(r,n) \rtimes (\ensuremath{\mathbb Z}_2\times\ensuremath{\mathbb Z}_2)$ (see \eqref{eq:Bnormalizer}), and note that the transvection $\Psi$ commutes with all generators of $N_\Gamma(\bar{P}(r,n))$. It follows that
$\operatorname{Aut}(P(r,n)) \cong (\ensuremath{\mathbb Z}^{N_r} \rtimes \bar{B}(r,n))\rtimes (\ensuremath{\mathbb Z}_2\times\ensuremath{\mathbb Z}_2\times\ensuremath{\mathbb Z}_2)$.
\end{remark}
In the case $n=2$, we have $P(r,2) \cong \ensuremath{\mathbb Z} \times F_{r+1}$, and the split extension \eqref{eq:splitaut} yields $\operatorname{Aut}(P(r,2)) \cong \operatorname{tv}(P(r,2))\rtimes \operatorname{Aut}(F_{r+1})$,
where $\operatorname{tv}(P(r,2))\cong \ensuremath{\mathbb Z}^{r+1} \rtimes \ensuremath{\mathbb Z}_2$, generated by $\Psi$, $\Upsilon_2$, $\Phi_{1,2,p}$, $1\le p \le r$,
(see Lemma \ref{lem:monotrans}), and
\[
F_{r+1} = \bar{P}(r,2)=P(r,2)/Z(P(r,2))=P(r,2)/\langle Z_{r,2}\rangle = \langle A_{1,2}^{(1)}, \dots, A_{1,2}^{(r-1)}, C_2, A_{1,2}^{(r)} \rangle
\]
is the free group on $r+1\ge 3$ generators. The group $\operatorname{Aut}(F_{r+1})$ admits a presentation, due to Nielsen, with generators $P$, $Q$, $\sigma$, and $U$, where
\[
\begin{array}{ll}
P\colon \begin{cases} A_{1,2}^{(1)} \mapsto A_{1,2}^{(2)},\\ A_{1,2}^{(2)} \mapsto A_{1,2}^{(1)},\\
A_{1,2}^{(q)} \mapsto A_{1,2}^{(q)}&q\neq 1,2,\\ C_2 \mapsto C_2, \end{cases}
&
Q\colon \begin{cases} A_{1,2}^{(q)} \mapsto A_{1,2}^{(q+1)}&q<r-1,\\ A_{1,2}^{(r-1)} \mapsto C_2,\\
A_{1,2}^{(r)} \mapsto A_{1,2}^{(1)},\\ C_2 \mapsto A_{1,2}^{(r)}, \end{cases} \\[3pt]
\sigma\colon \begin{cases} A_{1,2}^{(1)} \mapsto (A_{1,2}^{(1)})^{-1},\\ A_{1,2}^{(q)} \mapsto A_{1,2}^{(q)}&q\neq 1,\\ C_2 \mapsto C_2, \end{cases}
&
U\colon \begin{cases} A_{1,2}^{(1)} \mapsto A_{1,2}^{(1)}A_{1,2}^{(2)},\\ A_{1,2}^{(q)} \mapsto A_{1,2}^{(q)}&q\neq 1,\\ C_2 \mapsto C_2, \end{cases}
\end{array}
\]
Refer to \cite[\S3.5, Cor.~N1]{MKS} for the relations satisfied by these generators of $\operatorname{Aut}(F_{r+1})$.
Lifts of these automorphisms to automorphisms of $P(r,2)$ which fix the generator $Z_{r,2}=C_1A_{1,2}^{(1)}\cdots A_{1,2}^{(r-1)}C_{2}A_{1,2}^{(r)}$ of the center are given by setting
\[
\begin{array}{ll}
P(C_1)=C_1[A_{1,2}^{(1)}, A_{1,2}^{(2)}],\qquad\qquad&
Q(C_1) = (A_{1,2}^{(1)})^{-1}C_1 A_{1,2}^{(1)},\\[3pt]
\sigma(C_1)=C_1 (A_{1,2}^{(1)})^2,&
U(C_1)=C_1 A_{1,2}^{(1)} (A_{1,2}^{(1)}A_{1,2}^{(2)})^{-1}.
\end{array}
\]
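As a check, these lifts do fix $Z_{r,2}$: for instance, $\sigma(Z_{r,2})=C_1(A_{1,2}^{(1)})^2\cdot (A_{1,2}^{(1)})^{-1}\cdot A_{1,2}^{(2)}\cdots A_{1,2}^{(r-1)}C_2A_{1,2}^{(r)}=Z_{r,2}$, while $Q$ simply conjugates, $Q(Z_{r,2})=(A_{1,2}^{(1)})^{-1}Z_{r,2}^{}A_{1,2}^{(1)}=Z_{r,2}$, using the centrality of $Z_{r,2}$; the verifications for $P$ and $U$ are similar.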
Calculations with these formulas yield the following result.
\begin{proposition} \label{prop:r2mono}
The automorphism group of the pure monomial braid group $P(r,2)$ is generated by
$
P,\ Q,\ \sigma,\ U,\ \Psi,\ \Upsilon_2,\ \Phi_{1,2,p},\ 1\le p\le r,
$
and decomposes as a semidirect product $\operatorname{Aut}(P(r,2)) \cong \operatorname{tv}(P(r,2)) \rtimes \operatorname{Aut}(F_{r+1})$.
The action of $\operatorname{Aut}(F_{r+1})$ on $\operatorname{tv}(P(r,2))$ is given by
\[
\begin{array}{ll}
P\colon\begin{cases} \Psi \mapsto \Psi, \\ \Upsilon_2 \mapsto \Upsilon_2, \\
\Phi_{1,2,1} \mapsto \Phi_{1,2,2},\\ \Phi_{1,2,2} \mapsto \Phi_{1,2,1},\\ \Phi_{1,2,p} \mapsto \Phi_{1,2,p}&p\neq 1,2,\end{cases}
&
Q\colon\begin{cases} \Psi \mapsto \Psi, \\ \Upsilon_2 \mapsto \Phi_{1,2,r}, \\
\Phi_{1,2,p} \mapsto \Phi_{1,2,p+1}&p\le r-2,\\ \Phi_{1,2,r-1} \mapsto \Upsilon_2,\\ \Phi_{1,2,r} \mapsto \Phi_{1,2,1},\end{cases}\\
\sigma\colon\begin{cases} \Psi \mapsto \Psi, \\ \Upsilon_2 \mapsto \Upsilon_2, \\
\Phi_{1,2,1} \mapsto \Phi_{1,2,1}^{-1},\\ \Phi_{1,2,p} \mapsto \Phi_{1,2,p}&p\neq 1,\end{cases}
&
U\colon\begin{cases} \Psi \mapsto \Psi, \\ \Upsilon_2 \mapsto \Upsilon_2, \\
\Phi_{1,2,2} \mapsto \Phi_{1,2,1}^{-1}\Phi_{1,2,2},\\ \Phi_{1,2,p} \mapsto \Phi_{1,2,p}&p\neq 2.\end{cases}
\end{array}
\]
\end{proposition}
\begin{remark} Note that the generator $\Psi$ of $\operatorname{tv}(P(r,2))< \operatorname{Aut}(P(r,2))$ commutes with the generators $P,Q,\sigma,U$ of $\operatorname{Aut}(P(r,2))$ which project to the generators of $\operatorname{Aut}(F_{r+1})$. It follows that
$\operatorname{Aut}(P(r,2)) \cong \ensuremath{\mathbb Z}^{r+1} \rtimes (\ensuremath{\mathbb Z}_2 \times \operatorname{Aut}(F_{r+1}))$.
\end{remark}
Since $P(r,1)=\ensuremath{\mathbb Z}$ is infinite cyclic, $\operatorname{Aut}(P(r,1))=\ensuremath{\mathbb Z}_2$.
\section{The action of $\operatorname{Aut}(P(r,n))$ on the generators of $P(r,n)$} \label{sec:monoaction}
We record the action of the elements $\tilde\epsilon, \tilde\Delta, \tilde\rho_k, 0\le k \le n-1$, of $\operatorname{Aut}(P(r,n))$
on the generators $C_j$, $1\le j\le n$, and $A_{i,j}^{(q)}$, $1\le i < j \le n$, $1\le q \le r$ of the pure monomial braid group $P(r,n)$.
See \eqref{eq:monotransvections} for the action of the generators of the transvection subgroup $\operatorname{tv}(P(r,n))$.
Recall the elements $A_{i,j}^{[q]}, V_{i,j}^{(q)},
D_k \in P(r,n)$ from \eqref{eq:particularmonobraids}, and that $y^x=x^{-1}yx$.
For $1\le i<n$ and $1\le q \le r$, let $U_i^{(q)}=A_{i,i+1}^{(q)}A_{i,i+2}^{(q)} \cdots A_{i,n}^{(q)}$.
The actions of $\tilde\epsilon$, $\tilde\Delta$, and $\tilde\rho_k$ are given by:
\begin{align*} \label{eq:monoauto}
\tilde\epsilon&\colon
\begin{cases}
C_{j}\mapsto C_1^{-1} Z^2_{r,n} &\text{if $j=1$,}\\
C_{j}\mapsto (V_{1,j}^{(r)})^{-1}{C_j^{-1}}V_{1,j}^{(r)} &\text{if $j\neq 1$,}\\
A_{i,j}^{(q)}\mapsto (V_{i+1,j}^{(r)})^{-1}{(A_{i,j}^{(r)})^{-1}} V_{i+1,j}^{(r)}&\text{if $q=r$,}\\
A_{i,j}^{(q)}\mapsto (V_{i+1,j}^{(r)})^{-1}D_i^{}{(A_{i,j}^{(r-q)})^{-1}}D_i^{-1}V_{i+1,j}^{(r)}&\text{if $q\neq r$,}
\end{cases}\displaybreak[2]\\
\tilde\Delta&\colon
\begin{cases}
C_{j}\mapsto D_n^{-1} Z_{r,n} &\text{if $j=1$,}\\
C_{j}\mapsto [U_{n-j+1}^{(r)} D_{n-j+1} U_{n-j+1}^{(1)} \cdots U_{n-j+1}^{(r-1)}]^{-1}&\text{if $j\neq 1,n$,}\\
C_{j}\mapsto [U_{1}^{(r)} D_{1} U_{1}^{(1)} \cdots U_{1}^{(r-1)}]^{-1} Z_{r,n} &\text{if $j=n$,}\\
A_{i,j}^{(q)}\mapsto (A_{n-j+1,n-i+1}^{(r)})^{V_{n-j+1,n-i+1}^{(r)}}&\text{if $q=r$,}\\
A_{i,j}^{(q)}\mapsto (A_{n-j+1,n-i+1}^{(q)})^{V_{n-j+1,n-i+1}^{(r)}D_{n-j+1}^{-1}D_{n-i+1}^{-1}}&\text{if $q\neq r$,}
\end{cases}\displaybreak[2]\\
\tilde\rho_0&\colon
\begin{cases}
C_{j}\mapsto C_1&\text{if $j=1$,}\\
C_{j}\mapsto {C_j}^{(A_{1,j}^{(r-1)})^{-1}}&\text{if $j\neq 1$,}\\
A_{i,j}^{(q)}\mapsto A_{i,j}^{(q-1)}&\text{if $i=1$, $q\neq 1$,}\\
A_{i,j}^{(q)}\mapsto (A_{i,j}^{(r)})^{C_1}&\text{if $i=1$, $q=1$,}\\
A_{i,j}^{(q)}\mapsto A_{i,j}^{(q)}&\text{if $i\ge 2$,}
\end{cases}\displaybreak[2]\\
\tilde\rho_1&\colon
\begin{cases}
C_{j}\mapsto C_2^{A_{1,2}^{(r)}} Z_{r,n}&\text{if $j=1$,}\\
C_{j}\mapsto C_1 Z_{r,n}^{-1}&\text{if $j=2$,}\\
C_{j}\mapsto C_j &\text{if $j\ge 3$,}\\
A_{i,j}^{(q)}\mapsto (A_{1,2}^{(r-q)})^{(C_1A_{1,2}^{(1)}\cdots A_{1,2}^{(r-q-1)})^{-1}}&\text{if $i=1$, $j=2$, $q<r$,}\\
A_{i,j}^{(q)}\mapsto A_{2,j}^{(q)}&\text{if $i=1$, $j\ge 3$, $q=r$,}\\
A_{i,j}^{(q)}\mapsto (A_{2,j}^{(q)})^{(C_1A_{1,2}^{(1)}\cdots A_{1,2}^{(r-q-1)}C_1^{-1})^{-1}}&\text{if $i=1$, $j\ge 3$, $q<r$,}\\
A_{i,j}^{(q)}\mapsto (A_{1,j}^{(q)})^{A_{1,2}^{(q+1)}\cdots A_{1,2}^{(r)}}&\text{if $i=2$, $q<r$,}\\
A_{i,j}^{(q)}\mapsto A_{i,j}^{(q)}&\text{otherwise,}
\end{cases}
\displaybreak[2]\\
\tilde\rho_k&\colon
\begin{cases}
C_{j}\mapsto C_{j-1} &\text{if $k=j-1$,}\\
C_{j}\mapsto C_{j+1}^{A_{j,j+1}^{(r)}}&\text{if $k=j$,}\\
C_{j}\mapsto C_j &\text{if $k\neq j-1,j$,}\\
A_{i,j}^{(q)}\mapsto (A_{i-1,j}^{(q)})^{A_{i-1,i}^{[q]}A_{i-1,i}^{(r)}}&\text{if}\ k=i-1,q<r,\\
A_{i,j}^{(q)}\mapsto A_{i-1,j}^{(r)}&\text{if}\ k=i-1,q=r,\\
A_{i,j}^{(q)}\mapsto (A_{i+1,j}^{(q)})^{(D_iA_{i,i+1}^{(1)}\cdots A_{i,i+1}^{(r-q-1)}D_i^{-1})^{-1}}&\text{if}\ k=i<j-1,q<r,\\
A_{i,j}^{(q)}\mapsto A_{i+1,j}^{(r)}&\text{if}\ k=i<j-1,q=r,\\
A_{i,j}^{(q)}\mapsto (A_{i,i+1}^{(r-q)})^{(D_iA_{i,i+1}^{(1)}\cdots A_{i,i+1}^{(r-q-1)})^{-1}}&\text{if}\ k=i=j-1,q<r,\\
A_{i,j}^{(q)}\mapsto A_{i,j-1}^{(q)}&\text{if}\ k=j-1>i,\\
A_{i,j}^{(q)}\mapsto (A_{i,j+1}^{(q)})^{A_{j,j+1}^{(r)}}&\text{if}\ k=j,\\
A_{i,j}^{(q)}\mapsto A_{i,j}^{(q)}&\text{otherwise,}
\end{cases}
\end{align*}
for $2\le k \le n-1$.
\newpage
\newcommand{\arxiv}[1]{{\texttt{\href{http://arxiv.org/abs/#1}{{arXiv:#1}}}}}
\newcommand{\MRh}[1]{\href{http://www.ams.org/mathscinet-getitem?mr=#1}{MR#1}}
\bibliographystyle{amsalpha}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 2,870 |
{"url":"https:\/\/www.gradesaver.com\/textbooks\/math\/calculus\/calculus-8th-edition\/chapter-14-partial-derivatives-14-2-limits-and-continuity-14-2-exercises-page-950\/6","text":"Calculus 8th Edition\n\n$-2\/3$\nFactoring out $x+y$ $$\\lim\\limits_{(x,y) \\to (2,-1)} \\frac{xy(x+y)}{(x+y)(x-y)}= \\lim\\limits_{(x,y) \\to (2,-1)} \\frac{xy}{(x-y)} = \\frac{2*-1}{2+1}=-2\/3$$","date":"2019-12-06 09:14:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9684889316558838, \"perplexity\": 14980.762898369214}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-51\/segments\/1575540486979.4\/warc\/CC-MAIN-20191206073120-20191206101120-00214.warc.gz\"}"} | null | null |
Movies » Taj Mahal Movie Review
C Venkatesh / fullhyd.com
Taj Mahal is one movie whose glorious camera sweeps and amazing background score you don't want to catch on a dinky little video screen from the tenth row of that "Volvo" Video Coach, with its Made-In-Hurry sound system. Unfortunately, a week or two from now, that's where you'll have the highest probability of seeing it.
There's nothing wrong with the premise, and not very much wrong with the execution either. The main problem lies in the length. 180 minutes of two devoted lovers who spend all their camera-conscious time gazing into each other's eyes is 60 or 90 minutes too much. Properly edited, this could have been a snappy and enjoyable experience. As it stands, it's soporific, but enjoyable every time you wake up from your 40 winks. This is, after all, a story that's been told, retold and rehashed for centuries. What's more, the director's chosen love over gold, steering clear of any Mughal intrigue except for the transparent and predictable romance-blocking maneuvers.
Given the large number of "historical" extravaganzas that have deservedly vanished without traces, Taj Mahal is a pleasant surprise. Liberties with history are within the limits of poetic license, and the focus is kept on the title - the "eternal love story". Many story-tellers lose their way introducing sideshows that grow larger than the main story, but in this case the director (Akbar Khan, who also co-produced it, and who also co-wrote it - these Khan brothers don't do things in halves!) has done a commendable job of keeping his eyes on the road and his hands upon the wheel.
He starts his tale briefly replaying the downfall of Shah Jahan (Kabir Bedi, whose make-up artist should be strung up by the thumbs) at the hands of Aurangzeb (Arbaaz Khan). The camera angles are excellent, and the martial mood is well created with extravagant sets and apposite music. But in a lot of scenes, the special effects seem to lack that vital je ne sais quoi.
Jahanara (Manisha Koirala, who doesn't make much of what is a minuscule role to begin with) comforts her father and leads him into a flashback, which, of course, is where the story really begins. Sonya Jehan, as the young Arjumand who later becomes Mumtaz Mahal, is just about adequate. She is overshadowed by a very perky Pooja Batra as Noor Jahan, who comes as a welcome relief - if Batra can turn out performances like this one, she deserves more recognition from Bollywood.
Akbar Khan and his cast are comfortable cooing like turtle-doves, but are not very convincing being intrigue-ridden Mughals. Efforts to express outrage, frustration, despondence and other such emotions are not convincing at all. The exceptions are Aurangzeb, who is sometimes chilling, and Noor Jahan, who is quite easily the lifeblood of this movie.
The cinematography is excellent, with (sometimes overly) vivid colors and an excellent sense of camera-domination. The sets are done quite well - pre-release publicity claims this is the most expensive film ever made in Bollywood - but some of the scenes seem to be subliminal tributes to Matrix and Jurassic Park. Not all movies gain from slow motion fights or from soldiers executing triple-somersaults when felled. The researchers should have watched a little more of historical tours-de-force, and a little less of The Lord of the Rings. The special effects are decidedly tacky. Fortunately, there are very few of them.
There are also, surprisingly, very few songs. Only one is even rhythmic. You'd expect more from Hariharan, but it's not to be. The dialogues are penned in what your linguistically-challenged reviewer concludes is chaste Urdu. Hyderabadis will enjoy the language, though Hindi-speakers may struggle with some of the phrases. The background score is outstanding. Not surprising, since the credits attribute it to Naushad. Purists would look for the lighting to change with the story's mood, but Akbar Khan seems to prefer even his stormy nights very well-lit. The background score does a great job of making up for this.
The choreography is atrocious. Saroj Khan is given joint credit for the few dances there are, but perhaps she was thinking of something else.
Refreshingly, there's almost no display of flesh, as costume designer Anna Singh sheaths the actors from head to toe in period costumes, except for a surprisingly liberal display of thigh and decolletage on the young Shah Jahan's wedding night. Today's de rigueur lip-locks are very cleanly done. There's also very little overt violence - only three scenes contain any blood or implicit violence - which means you can safely take your kids. Of course, if you can make them sit through all three hours, they have decidedly more patience than your reviewer.
All in all, an A++ for being extremely politically-correct in more respects than one, an A+ for avoiding chauvinism and stereotypes, an A+ for sticking to the promised theme, an A for effort, a B for acting, and a B- for plot. Unfortunately, an F for editing.
TAJ MAHAL SNAPSHOT
Zulfiqar Syed, Sonya Jehan, Kabir Bedi, Manisha Koirala, Pooja Batra, Arbaaz Ali, Kim Sharma, Milind Gunaji, Vaquar Sheikh, Arbaaz Khan
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,771 |
The Voice of the New Generation in Israel Embarks on US Speaking Tour
IDF Commander to Speak to Jewish and Christian Audiences
Contact: Dena Wimpfheimer, 201-645-4134, dena@djwconsult.com
MEDIA ADVISORY, Aug. 23 /Christian Newswire/ -- The Opportunity: Interview Jeremy Gimpel, an American-born commander in the Israeli Defense Forces, the host of the most popular talk show on Israel National Radio, and a leader in Israeli activism and politics as he travels across the United States.
Recognizing the distinct challenges that Israel is facing at the current time, Gimpel is kicking off a US speaking tour that will span from South Carolina to Oregon with a message of truth, moral clarity, and an honest perspective on the current events in the Middle East including the tenuous "cease fire" between Israel and the Hezbollah. His goal is to strengthen the ties between Americans and Israelis, Jews and Christians, and provide insight and perspective on the controversial events in Israel that will inevitably effect the entire world.
"Since the return of my people to the Land of Israel, and the establishment of the Jewish State, we have never been so isolated in the global arena of public opinion," says Gimpel. "It is more important than ever before that the people of America hear the truth about what is really happening in the Middle East. The support for the Jewish people by the Christian World is historically unparalleled. This unity is not only due to a common enemy that seeks our destruction, but to the realization that our lives are based in shared Biblical values."
Gimpel and co-founder Ari Abramowitz are both seasoned veterans of the Israel Defense Forces. As commander and soldier, they have served and protected every border of Israel – from Lebanon to the Gaza Strip. Both young men continue to serve as soldiers in reserve units and have a comprehensive understanding of the Israeli military and the complex geopolitical situation in the region.
Jeremy Gimpel and Ari Abramowitz host "A Light unto the Nations," the most popular radio show on Israel National Radio. The two also regularly contribute columns for several publications such as the Jerusalem Post Christian and Arutz Sheva.
Jeremy Gimpel will be in the United States from August 28th through September 20th. Please contact Dena Wimpfheimer to schedule an interview; dena@djwconsult.com 201-645-4134 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 457 |
{"url":"https:\/\/percentages.io\/179-is-what-percent-of-324","text":"is what % of ?\n55.25%\n\n# How to solve this problem\n\n## A step by step guide\n\nThis type of problem is simple to solve and has many uses in day to day life. One example of this calculating what percentage grade you scored on a test. Solving this type of problem involes two simple math problems. Like many percentage problems, the only operations involved are multiplication and division. In this case, one division and one multiplication. As with any type of division and multiplication, these operations can be done in any order you like. Here are the steps you have to take to solve this problem:\n\nStep 1: Divide 179 by 324;\nThe first step is to divide the numerator of this problem by the denominator. The numerator in this case is 179 and the denominator is 324. Here is the equation for this operation: $$\\frac{179}{324} = 179 \\div 324 = 0.55246913580247$$\n\nStep 2: Multiply 0.55246913580247 by 100\nThe second step is to multiply the result of step 1 by 100. This will turn our original answer into a percentage. This is the final answer to the problem. Here is the equation for: $$0.55246913580247 \\times 100 = 55.25$$\n\nSimilar problems","date":"2019-08-25 11:00:01","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8710042238235474, \"perplexity\": 257.85266046531683}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027323328.16\/warc\/CC-MAIN-20190825105643-20190825131643-00379.warc.gz\"}"} | null | null |
Hands Down, These Are the Best Documentaries of All Time
Hadley Mendelsohn
Hadley was the Associate Editor at MyDomaine for two and a half years before joining the House Beautiful team as the Design Editor.
Just as we watch romantic period dramas to feed a nostalgic wonder, feel-good comedies for a much-needed laugh, indie flicks to delight our eyes, and the occasional horror movie for a scare at a safe distance, we turn to documentaries to expand our understanding of the world around us. When done well, documentaries are both informative and entertaining, leaving us with more than just a quiet contentment.
If you've been wanting to learn more about prevalent issues but don't feel like listening to a podcast or reading a book, then you've come to the right place. Below, we've compiled a roundup of what we think are the best documentaries of all time. Some are heart-rending, some are uplifting, and all offer life-altering lessons that'll ignite you with a drive to act on them.
Scroll through to read about our 19 favorite documentaries and add them to your own viewing queue depending on your interests: there's something here for design aficionados, history buffs, and social justice mavericks alike.
The Poetic Tearjerker
Nostalgia for the Light $3
This Chilean documentary is nothing short of moving. With beautiful cinematography, it presents a compelling thematic dichotomy between three groups of people stationed in the Atacama desert on different expeditions: astronomers examining faraway galaxies, archaeologists digging on excavation sites to uncover ruins from ancient civilizations, and the mothers and daughters searching for those they lost to Pinochet's politicide.
Aptly named, this film literalizes the concept of nostalgia, highlighting the vast beauty of the cosmos while zeroing in on the atrocities of the past and the ways in which they maintain a presence. The viewer inherits this sense of awe as well as a wistful longing for something that once was and that can never be again because it was violently cut short. It challenges the audience to wrestle with the impossibility of singularities, or in other words, how one thing is never really just one thing.
The Genre Bender
Exit Through the Gift Shop $4
A shopkeeper-turned-filmmaker goes on a quest to identify and befriend legendary anonymous street artist Banksy. Banksy himself ends up creating the documentary, though he only appears on camera a few times as a blacked-out figure with a distorted voice so that his identity remains protected and unknown to the viewer.
Though it's categorized as a documentary and seems like one, the viewer is encouraged to question the difference between fact and fiction—if there even is one. Banksy takes art out of the high-brow, institutional setting of a museum and into the streets for anyone to access it. He strips away the elitism of the art world as well as the notions of attribution and fame, and complicates our understanding of the roles of artist and consumer, fiction and reality.
The Funny Charmer
Iris $4
For anyone who loves fashion and style, Iris is an inspiring, upbeat portrait of the self-proclaimed geriatric starlet. Watching this living legend go about her day as she spews quick-witted musing after quick-witted musing is worthwhile for those who love all things design and/or value creativity. She has much to teach us all about aging, expressing our style with grace and integrity, and, more importantly, always keeping a sense of humor. Even Kanye West agrees (you'll see what we mean once you watch the documentary).
The Call to Action
The Hunting Ground $4
Well known for an earlier investigative documentary about rape in the military, The Invisible War, director Kirby Dick brings us another groundbreaking piece about the epidemic, only this time, the setting is an academic campus. The Hunting Ground presents the prevalent, grim topic in a plain, straightforward manner, letting the audience hear from the survivors and statistics themselves rather than attempting to capture it as entertaining.
While there are are some flawed storytelling devices employed throughout, the survivors' accounts make it thoroughly effective. This documentary takes a look at why rape is so widespread across college campuses, and identifying at least some of those sources is a good starting point for change. In that sense, it's a timely call to action and has even been screened for many policy-makers in Washington.
The Must-See Lesson
An Inconvenient Truth $4
Deemed one of the first films to raise public awareness about global warming, An Inconvenient Truth is a must-see. Viewers will learn about the scientific phenomenon itself as well as the resulting repercussions and what we can do to reverse some of them. In case this wasn't already clear, personal habits, lifestyle choices, political structures, and the health of our environment are all intrinsically linked.
The Poignant Masterpiece
We Were Here $4
This documentary is a retrospective of the earliest days of the AIDS epidemic in San Francisco and those it impacted. While there have been many productive dramatizations of this period in history, nothing can compare to the personal perspective that We Were Here reveals. Rather than emphasizing the statistics of lost lives, we hear from five people who survived the epidemic, humanizing the disease and those who survived it instead of stigmatizing them. Their voices uplift the audience and give us a glimpse of the transcendence of community and love.
The Thought-Provoking Tribute
Amy $4
Regardless of whether you're an all-in Amy Winehouse fan or you've never listened to one of her tracks, this behind-the-scenes documentary of the artist's life from childhood to death is both captivating and profoundly disturbing. Amy Winehouse's own lyrics narrate the film (hence the subtitle Written in Her Words). This film makes the audience think about its own role in the culture of consumption; as we tune in to devour her prodigious body of work, we're watching her unravel from addiction as documented in exploitative paparazzi footage.
The Fascinating Time Capsule
Grey Gardens $4
This 1975 documentary is so iconic and shocking that it's difficult to put into words—not even a gothic novel of epic proportions could make this stuff up. It explores the lives of reclusive mother-daughter duo Little Edie and Big Edie (the aunt of Jackie Kennedy Onassis), and the viewer steps inside the wreckage of what used to be their East Hampton mansion. Hint: There's filth and grime from dead cats and raccoons around every corner, giving "haunted house" a whole new meaning. It's also simply funny. So for anyone who loves period pieces and a riches-to-rags story, look no further than this fascinating feature.
The Moving Exposé
Llévante Mis Amores $4
Llévante Mis Amores shows us a group of women in the village of La Patrona, Mexico, who make food and bottle water from a local well to toss onto a moving train of laborers and migrants traveling from Latin America to the U.S. It's ultimately uplifting as the generous spirits of these women, who rarely even see the faces of those they're feeding, carry us through the film. Though these women are clearly putting themselves at risk for the strangers who pass them on the freight train, themselves risking their lives in search of better opportunities, the most eye-opening moments are when we hear why they do it. It's a fresh, evocative look into the polemic immigration challenges of today.
The Freaky Game Changer
Food, Inc. $1
When you walk into a grocery store, how often do you wonder where exactly your food is coming from? And for some of us, if it tastes good and we can get it at a decent price, that's all that really matters. But Food, Inc. will make you think again about the industrial food complex. Food, Inc. is a disturbing look inside America's food industry, from factory farming to policy making and "food deserts."
The Brilliant Eye-Opener
I Am Not Your Negro $5
This documentary carries on the legacy of civil rights activist and seminal author James Baldwin. It shows us how current patterns of inequality, both empirical and in mass media and Hollywood representations, are anchored in the past (though not contained there). With what amounts to much more than just a portrait of Baldwin himself, the viewer will get a quick lesson in American history, ultimately leaving them feeling more empowered to take action.
The Vibrant Feature
If you think Madonna made up vogueing, think again. In this 1990 documentary, we get a glimpse into the world of cross-dressing balls where this style of dance originated along with so many other cultural icons and phenomena. Paris Is Burning (which actually takes place in New York City) visualizes disenfranchised communities and challenges our notions of "real" by validating the identities of those in drag as well as other subcultures that shape us. Oh, and the soundtrack is unmatched.
The Investigative Tell-All
West of Memphis $4
If you like true crime documentaries as well as films about social justice issues today, then West of Memphis is an essential watch. This true story focuses on a new investigation to reveal the flaws in our criminal justice system as a wrongly convicted man receives the death penalty for killing three 8-year-old boys in West Memphis, Arkansas, in 1993.
Watch More Impactful Documentaries
The Invisible War $7
The True Cost $4
Bowling for Columbine $4
Man on Wire $4
Dark Girls $3
Basquiat $3
There's more where that came from with our roundups of the most gripping history and nature documentaries.
This post was originally published on August 1, 2017, and has since been updated.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,617 |
Jmílnyk () is a city of regional significance in Ukraine belonging to Vinnytsia Oblast.
In 2015 it has an estimated population of inhabitants.
It is the capital of Jmílnyk Raion, but does not belong to it.
It lies on the upper course of the Southern Bug River, 67 km northwest of Vinnytsia. It is one of the oldest cities of Podolia.
Demographics
The city's population has evolved as follows:
1897: 11,657 inhabitants
1926: 10,794 inhabitants
1959: 13,288 inhabitants
1989: 29,702 inhabitants
2001: 27,898 inhabitants
All estimates after 2012 place the population at around 28,000 inhabitants.
According to the 2001 census, the large majority of the population were Ukrainian speakers (97.12%), with a minority of Russian speakers (2.59%).
Heritage
References
External links
Cities of regional significance in Vinnytsia Oblast | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,374 |
Q: How to return compile-constant string literal with C++ constexpr function
I have a constexpr function and I'm trying to strip the path from the __FILE__ macro, that is, remove everything but the file name. I sketched up this basic function to do so, and I made it constexpr in hopes that the compiler can deduce the result and just place that calculated result as a string in the final binary. The function isn't perfect, just a simple mock-up.
#include <iostream>

constexpr const char* const get_filename()
{
auto file{ __FILE__ };
auto count{ sizeof(__FILE__) - 2 };
while (file[count - 1] != '\\')
--count;
return &file[count];
}
int main()
{
std::cout << get_filename() << std::endl;
return 0;
}
The problem is that this is not being evaluated at compile time (build: MSVC x64 Release Maximum Speed optimization). I'm assuming this is because of returning a pointer to something inside a constant string in the binary, which is essentially what the function is doing. However, what I want the compiler to do is parse the get_filename function and somehow return the string literal "main.cpp", for example, instead of returning a pointer to that substring. Essentially, I want this to compile down so that the final binary just has main.cpp in it, and no other part of the __FILE__ macro. Is this possible?
A: Because you don't want the full __FILE__ path in the final binary, we must copy the string to a std::array:
#include <algorithm>
#include <array>
#include <string_view>

constexpr auto get_filename()
{
    constexpr std::string_view filePath = __FILE__;
    constexpr auto count = filePath.rfind("\\");
    static_assert(count != std::string_view::npos);
    // Size the array for the file name plus a terminating '\0'
    // (the value-initialised elements left after the copy stay zero).
    std::array<char, filePath.size() - count> fileName{};
    // Note: constexpr std::copy requires C++20.
    std::copy(filePath.data() + count + 1, filePath.data() + filePath.size(), fileName.data());
    return fileName;
}
And specify constexpr when calling the get_filename function:
constexpr auto fileName = get_filename();
std::cout << fileName.data();
Alternatively, since C++20, you could use consteval to force it to be evaluated at compile time:
consteval auto get_filename();
Here's the test on godbolt, it uses printf instead of std::cout for a shorter asm.
A: Like this (my code is in a file called .\some_path\main.cpp, Windows path style):
#include <string_view>
#include <iostream>
static constexpr std::string_view get_filename()
{
std::string_view name{ __FILE__ };
auto pos = name.find_last_of('\\'); // or '/' on linux
if (pos != std::string::npos)
{
return std::string_view{ name.begin() + pos + 1, name.end() };
}
return name;
}
int main()
{
static_assert(get_filename() == "main.cpp");
std::cout << get_filename();
return 0;
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,107 |
Biography
The son of Dante II and Costanza Maccaccaro, he was born in 1395 and, at just nineteen, sat on the Council of the city of Verona. Besides his political activity, he helped his father Dante improve and make profitable the estates that his grandfather Pietro had purchased around Verona, especially at Gargagnago in Valpolicella. According to Pietro Fraticelli, a Florentine Dante scholar and bookseller (1803-1866), in 1430 Leonardo went to Florence together with other Veronese to learn more about his own roots and the history of his great-grandfather, accompanied on this tour of the city by the humanist and chancellor Leonardo Bruni. The latter, in his Vita di Dante of 1436, recounts in its conclusion that:
Having married Jacopa, daughter of Messer Gabriele Della Verità, he made his will on 17 September 1439, naming his two still-underage sons, Giovanni and Pietro, as his universal heirs under the guardianship of their mother and of the notary Cendrata. He died in 1441 and was buried in the monastery of the church of Santa Anastasia in Verona.
Descendants
From his marriage to Jacopa Della Verità, Leonardo had 5 children:
Chiara
Elisabetta
Costanza
Giovanni (1427-1445)
Pietro
Notes
Bibliography
External links
Leonardo | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 5,273 |
We provide great deals in this segment, and these services are available on a global platform as per customers' demands. We serve Indian tourists, corporate clients and tourists from abroad by providing Luxury Buses and Traveller Mini Buses / Mini Vans according to their schedule, budget and convenience.
General Bus Seat Booking - Services are available through our collaborations on approved bus routes in India and abroad, with the help of the relevant National / International / State Transport Service Authorities and other private service providers.
Luxury Buses - Services are available in different capacities such as 27 Seater, 35 Seater, 42 Seater, 46 Seater and 49 Seater, as well as Luxury General Coaches and Double Decker Coaches.
Traveller Mini Buses / Mini Vans - Services are available in different capacities: 9 Seater, 12 Seater, 14 Seater, 16 Seater, 17 Seater & 22 Seater. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,846 |
\section{Introduction}
\label{sec:intro}
The SDSS-III/Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{Eisenstein2011, Dawson2013} provided unprecedented statistics at the high-mass end by measuring the spectra of about 1.5 million luminous red galaxies \citep[LRGs\,;][]{Eisenstein2001} over 10,000\,deg$^2$ of sky down to magnitude $r\sim 22.2$ and within the redshift range $0.2<z<0.7$. This data set has been used not only to accurately measure the baryon acoustic oscillation feature \citep[BAO;][]{Eisenstein2005, Anderson2014, 2017MNRAS.470.2617A}, but also to study the massive galaxy population at $z\sim0.55$. BOSS allowed us to characterise the red/blue color bimodality observed in LRGs \citep[]{2013MNRAS.432..359T, 2014MNRAS.437.1109R, 2016MNRAS.462.2218F, AMD2016a}, to constrain the high-mass end of the stellar mass and luminosity functions of these massive galaxies \citep[][]{2013MNRAS.435.2764M, 2013MNRAS.436..697B, 2016MNRAS.457.4021L, 2016MNRAS.455.4122B, 2017MNRAS.467.2217B, AMD2016a} and to measure the intrinsic relation between galaxy luminosity and velocity dispersion \citep[]{AMD2016b, 2017MNRAS.468...47M}. Despite these achievements, the morphological and structural properties of BOSS LRGs have been difficult to probe due to the poor SDSS image quality (median seeing of 2").
More recently, the Dark Energy Camera Legacy Survey\footnote{\url{http://legacysurvey.org/decamls/}} (DECaLS) of the SDSS Equatorial Sky has been designed to obtain high-quality images that cover 6700$\,\rm{deg^2}$ in three optical bands $(g,\,r,\,z)$. With a limiting magnitude of $r\leq 23.4$ and a median seeing of $1.2"$, it allows a narrower and more efficient target selection for the DESI survey \citep[][]{2013MNRAS.428.1498C, 2016A&A...592A.121C}. DECaLS improves dramatically the quality of the SDSS data set, providing also deeper photometry.
Besides the classification of galaxies through their morphology and shape parameters, the stellar mass-size relation has been explored in a number of works as a powerful scaling law to connect fundamental galaxy properties. \citet[]{2010MNRAS.404.2087B} studied the distribution of stellar mass (M$_{\star}$), size, velocity dispersion, luminosity and color as a function of galaxy morphology and concentration index for SDSS massive early-type galaxies. They claimed that sample selections based on colour or concentration lead to significantly different scaling relations. \citet[]{2011MNRAS.412L...6B} investigated further these dependencies in a sample of SDSS early-type galaxies (ETGs) and found that there is a particular stellar mass scale (M$_{\star}\sim2\times10^{11}\,\rm{M_{\odot}}$) beyond which major mergers start to dominate the assembly histories of these massive galaxies. \citet[]{2013ApJ...778L...2C} identified the same mass scale as the transition point between two processes that regulate the mass-size distribution of galaxies in dense environments and in the field. From one side, spiral galaxies are replaced by bulge-dominated fast-rotator ETGs, with the same mass-size relation and mass distribution as in the field. On the other hand, the slow-rotator ETGs are segregated in mass from the fast ones, and their size increases proportionally to their mass.
These lines of evidence suggest that bulge growth (outside-in evolution) and bulge-related environmental quenching dominate at the low-mass end, while dry mergers (inside-out evolution) and halo-related quenching shape the mass and size growth at the high-mass end.
\citet[]{2013ApJ...779...29H} investigated the impact of different large-scale environments (i.e., field, group and clusters) on the size of massive ETGs at $z\sim0$. At fixed stellar mass, they did not find any significant dependence of the central and satellite ETG sizes on the environment. The mass-size relation of these galaxies is independent of the host halo mass and the galaxy position within the halo. This result is not sensitive to different galaxy selections based on morphology, star formation, or central density.
\citet[]{2011MNRAS.415.3903T} studied the buildup of the mass-size relation of elliptical galaxies from $z\sim0$ up to $z\sim1$, using observations from SDSS and HST/GOODS. They did not find any evidence for age segregation at fixed stellar mass. This rules out the scenario of a present-day mass-size relation progressively established through a bottom-up sequence in which older galaxies populate its lower tail, remaining in place since their formation. Their result supports instead the hypothesis that the local mass-size relation is defined at $z\sim1$, with all galaxies occupying a region half of the size of the present-day distribution.
\citet[]{2003MNRAS.343..978S} explored the connection between galaxy size and luminosity (or stellar mass) using $z\sim0.1$ SDSS data and found a trend which is significantly steeper for early- than for late-type galaxies.
Recently, \citet[]{2017arXiv170704979Z} analysed the dependence of the luminosity- or mass-size relation on galaxy concentration and morphology in the SDSS DR7 Main galaxy sample. They found a clear trend of smaller sizes and steeper slope for early-type elliptical galaxies.
\citet[][]{2011MNRAS.418.1055M} studied the morphology and size of BOSS luminous massive galaxies using HST/COSMOS photometry and found that about $74\%$ of them are early-type elliptical or lenticular, while the rest are late-type spirals. \citet[]{2014ApJ...789...92B} compared galaxy size measurements in SDSS, SDSS-III/BOSS and COSMOS data at $0.1\lesssim z\lesssim 0.7$ to derive accurate corrections for the galaxy effective radii (i.e. sizes). \citet[][]{2017ApJ...837..147H} investigated the redshift-size relation in massive ETGs in the UltraVISTA and CANDELS surveys. They found evidence of a significant mass build up at $r<3\,$kpc beyond $z>4$, and a clear evolutionary change at $z\sim1.5$, when the galaxy progenitor stops growing in-situ through disk star formation and accretes minor mergers. \citet[][]{2017arXiv170103526S} explored the ratio between galaxy size and dark matter halo virial radius at $z\lesssim3$ using data from GAMA and CANDELS. They found very little dependence on stellar mass and lower ratios at high redshift for more massive galaxies.
In this work, we aim to characterise the morphology and the stellar mass-size relation of the well-known SDSS-III/BOSS DR12 CMASS and LOWZ galaxy samples \citep[]{Anderson2012, Bolton2012, Anderson2014, 2015ApJS..219...12A} within the redshift range $0.2<z<0.7$. To this purpose, we match these BOSS spectroscopic samples to the DECaLS DR3 photometric catalog. We calibrate DECaLS sizes using the high-resolution (0.6" median seeing) CFHT/MegaCam observations and optimise the correction individually for each morphological type.
By cross-matching our DECaLS selections with the Portsmouth \citep[][]{2013MNRAS.435.2764M} stellar mass catalog at $0.2<z<0.7$, we are able to constrain the M$_{\star}$--size relation of very massive LRGs in a sample of unprecedented size at these redshifts. Our cross-matched BOSS-DECaLS galaxy samples with CFHT calibrated sizes are made publicly available for the community on the \textsc{Skies and Universes}\footnote{\url{http://www.skiesanduniverses.org/}} database.
The paper is organised as follows: Section \ref{sec:data} describes the data sets used in our analysis. In Section \ref{sec:calibration} we explain how the DECaLS effective radii are calibrated using CFHT observations. In Section \ref{sec:results} we present our results: the morphology of BOSS galaxies, their stellar mass-size relation and their size evolution. We compare with previous studies in Section\,\ref{sec:comparison} and summarize our conclusions in Section \ref{sec:summary}. For the analysis we adopt the cosmology: $h=0.6777,\,\Omega_m=0.3071,\,\Omega_{\Lambda}=0.6929,\,n=0.96,\,\sigma_8=0.8228$ \citep[][]{Planck2014}.
\section{Data and galaxy selections}
\label{sec:data}
\begin{figure}
\begin{center}
\includegraphics[width=0.96\linewidth]{plots/footprint.pdf}\hfill
\caption{Footprint of the cross-matched DECaLS-BOSS galaxy sample (green area) versus the original SDSS-III/BOSS coverage (grey).}
\label{fig:completeness}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.04\linewidth]{plots/DECaLS_LOWZ_distr.pdf}\hfill\vspace{-0.4cm}
\includegraphics[width=1.04\linewidth]{plots/DECaLS_CMASS_distr.pdf}\hfill
\caption{$(z-W1)$ vs. $(g-z)$ color distributions of the cross-matched DECaLS-BOSS LOWZ (top) and CMASS (bottom) samples. The contours denote the $1\,\sigma$ and $2\,\sigma$ uncertainty regions.}
\label{fig:selezioni}
\end{center}
\end{figure}
We use the DECam Legacy Survey (DECaLS) DR3 photometric catalog\footnote{\url{http://legacysurvey.org/dr3/files/}} row-by-row-matched to the SDSS DR12 spectroscopic galaxy sample\footnote{\url{https://data.sdss.org/sas/dr12/sdss/spectro/redux/}}. DECaLS is an optical survey on the 4m Blanco telescope at Cerro Tololo Inter-American Observatory designed to complement the SDSS, SDSS-III, SDSS-IV and DESI surveys with high-quality images from 6700$\,\rm{deg^2}$ of extragalactic sky in the equatorial region in three optical bands $(g,\,r,\,z)$. The DECaLS DR3 photometric catalog also includes the infrared WISE\footnote{\url{http://wise.ssl.berkeley.edu/index.html}} bands (W1, W2, W3, W4). The sky coverage lies within $-18^{\circ} < \delta < +34^{\circ}$ in celestial and $|b| > 18^{\circ}$ in Galactic coordinates. DECaLS has improved dramatically the quality of the SDSS imaging data, providing a deeper photometry with limiting magnitude of $r\leq 23.4$ and a median seeing of $1.2"$.
In the cross-matched catalog introduced above, we select the BOSS CMASS and LOWZ galaxy samples of LRGs (hereafter our \textquoteleft\textquoteleft parent samples") using the SDSS spectroscopic flags\footnote{\url{http://www.sdss.org/dr13/algorithms/boss_galaxy_ts/}}.
\noindent We further exclude point-like sources from the parent samples by imposing the DECaLS condition \texttt{TYPE!="PSF"}. We recover 238,008 CMASS and 75,018 LOWZ galaxies, respectively, i.e., about 31\% and 23\% of the original BOSS samples. The missing galaxies are not observed by DECaLS DR3, which has an effective area of 4380\,deg$^{2}$, much smaller than the 9376\,deg$^{2}$ of the SDSS-III/BOSS, as shown in Figure \ref{fig:completeness}. In Figure \ref{fig:selezioni}, we display our LOWZ (top panel) and CMASS (bottom) parent samples in the DECaLS color-color plane. We use the $g$ and $z$-band magnitudes from DECaLS and the W1 infrared magnitude from WISE to highlight the color properties of BOSS LRGs in DECaLS photometry.
Beside DECaLS magnitudes, for the analysis we adopt DECaLS effective radii, surface brightness profiles and galaxy morphologies. We perform galaxy size calibrations using data from two different surveys: the MegaPrime/MegaCam\footnote{\url{http://www.cfht.hawaii.edu/Instruments/Imaging/MegaPrime/}} at CFHT and the Cosmic Evolution Survey (COSMOS)\footnote{\url{http://cosmos.astro.caltech.edu/}}. The first one has a $1\,$deg$^2$ field-of-view with a resolution of 0.187" per pixel and a median seeing of $\sim 0.7"$. It provides much better imaging quality, which is key to precisely determine galaxy sizes and morphological types. The second survey was originally designed to probe galaxy formation and evolution over a 2\,deg$^2$ equatorial field with imaging by most of the major space-based telescopes and a number of large ground based telescopes.
We adopt \citet[][]{2013MNRAS.435.2764M} stellar masses for the galaxies in our parent samples to study the mass-size relation of LRGs at $0.2<z<0.7$. These are estimated by fitting model spectral energy distributions to the BOSS observed magnitudes.
\section{Galaxy size calibration}
\label{sec:calibration}
\begin{figure*}
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/DECaLS_LOWZ_CFHT_correction.pdf}\hfill
\includegraphics[width=0.48\linewidth]{plots/DECaLS_CMASS_CFHT_correction.pdf}\hfill
\includegraphics[width=0.48\linewidth]{plots/DECaLS_LOWZ_COSMOS_correction.pdf}\hspace{0.6cm}
\includegraphics[width=0.48\linewidth]{plots/DECaLS_CMASS_COSMOS_correction.pdf}\hfill
\caption{DECaLS LOWZ (left column) and CMASS (right column) effective radii as a function of the corresponding CFHT (top row) and COSMOS (bottom row) sizes in arcsec. The dashed diagonal line corresponds to the 1:1 relation for each case.}
\label{fig:corr}
\end{center}
\end{figure*}
\begin{table*}\centering
\ra{1.3}
\begin{tabular}{@{}lccccc@{}}
\hline
& \multicolumn{2}{c}{DECaLS LOWZ} & &\\
\hline
&$0.15\leq z<0.3$ & $0.3\leq z<0.43$ & & \\
& $R_0$\,[arcsec]\hspace{1cm}$\alpha$ & $R_0$\,[arcsec]\hspace{1cm}$\alpha$ & & \\
\hline
CFHT DeV & \,1.226$\pm$0.020\hspace{0.5cm} -0.324$\pm$0.014 &1.141$\pm$0.009 \hspace{0.5cm}-0.395$\pm$0.011 & & \\
CFHT Exp &\, 1.370$\pm$0.168\hspace{0.5cm} -0.672$\pm$0.154 &1.292$\pm$0.094 \hspace{0.5cm}-0.652$\pm$0.143 & &\\
\hline
COSMOS DeV & \,1.241$\pm$0.289 \hspace{0.5cm}-0.079$\pm$0.166&1.556$\pm$0.168\hspace{0.5cm} -0.439$\pm$0.128& & \\
COSMOS Exp & \hspace{0.5cm}-- &\hspace{0.5cm}-- & &\\
\hline
\hline
& \multicolumn{2}{c}{DECaLS CMASS} &&\\
\hline
& $0.43< z\leq 0.55$&$0.55< z<0.7$ && \\
& $R_0$\,[arcsec]\hspace{1cm}$\alpha$ &$R_0$\,[arcsec] \hspace{1cm} $\alpha$ && \\
\hline
CFHT DeV & \, 1.009$\pm$0.006\hspace{0.5cm} -0.469$\pm$0.009 &0.952$\pm$0.006 \hspace{0.5cm}-0.547$\pm$0.011&& \\
CFHT Exp & \, 2.085$\pm$0.147\hspace{0.5cm}-0.276$\pm$0.020 &2.123$\pm$0.143 \hspace{0.5cm}-0.247$\pm$0.018 &&\\
\hline
COSMOS DeV & \, 1.256 $\pm$0.126\hspace{0.5cm}-0.186$\pm$0.083 &1.847$\pm$0.356 \hspace{0.5cm}-0.832$\pm$0.344&& \\
COSMOS Exp & \hspace{0.5cm}-- &\hspace{0.5cm}-- & &\\
\hline
\end{tabular}\vspace{0.3cm}
\caption{Best-fit coefficients for the calibration factor $f(R_{\rm{DECaLS}})$ given in Eq. \ref{eq:calib}. The COSMOS correction for the DECaLS CMASS and LOWZ samples with exponential profile is omitted due to the lack of statistics.}
\label{tab:corrtable}
\end{table*}
In order to correct our galaxy size measurements for seeing effects \citep[][]{1993MNRAS.264..961S,2003AJ....125.1882B,2014ApJ...789...92B}, we calibrate DECaLS effective radii with the latest CFHT (see Section \ref{sec:data}) observations. We cross-match our CMASS and LOWZ samples with the data available in the four CFHT fields. Only galaxies with De Vaucouleurs and exponential profiles are employed. For those objects surviving the matching (4721 in CMASS and 2050 in LOWZ), we compare their radii measured in both surveys. We define the DECaLS circularized radius as $R_{\rm{DECaLS}}=R_{\rm{eff}}\sqrt{(b/a)}$, where $R_{\rm{eff}}$ is the DECaLS effective radius, while $a$ and $b$ are the semi-major and semi-minor ellipse axes, respectively. For the calibration we use the following functional form:
\begin{equation}
R_{\rm{DECaLS}}^{\rm{calib}}=R_{\rm{DECaLS}}\times f(R_{\rm{DECaLS}}),
\label{eq:correctionfunc}
\end{equation}
where $f(R_{\rm{DECaLS}})$ is the calibration function depending on DECaLS size defined as:
\begin{equation}
f(R_{\rm{DECaLS}})=\left (\frac{R_{\rm{DECaLS}}}{R_0}\right )^\alpha.
\label{eq:calib}
\end{equation}
We separately fit CMASS and LOWZ galaxies with De Vaucouleurs and exponential profiles to find the optimal parameters $\alpha$ and $R_0$. As part of the fitting procedure, we perform sigma-clipping, rejecting those objects located more than $2\,\sigma$ away from the mean of the $R_{\rm{CFHT}}/R_{\rm{DECaLS}}$ distribution. The excluded points are considered outliers in what follows. The best-fit parameters are reported in Table \ref{tab:corrtable}.
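For reference, since Eq.\,\ref{eq:calib} implies $\log{(R_{\rm{CFHT}}/R_{\rm{DECaLS}})}=\alpha\,[\log{(R_{\rm{DECaLS}})}-\log{(R_0)}]$, the calibration can be obtained from a simple linear regression in log--log space. The following Python sketch is illustrative only (it is not the pipeline code, and the exact clipping statistic and number of iterations are assumptions):
\begin{verbatim}
import numpy as np

def fit_calibration(r_decals, r_cfht, n_sigma=2.0, n_iter=5):
    # Fit R_CFHT / R_DECaLS = (R_DECaLS / R0)**alpha with iterative
    # sigma-clipping of outliers around the current best fit.
    x = np.log10(r_decals)
    y = np.log10(r_cfht / r_decals)          # = alpha * (x - log10(R0))
    keep = np.ones_like(x, dtype=bool)
    for _ in range(n_iter):
        alpha, const = np.polyfit(x[keep], y[keep], 1)
        resid = y - (alpha * x + const)
        keep = np.abs(resid - resid[keep].mean()) < n_sigma * resid[keep].std()
    r0 = 10.0 ** (-const / alpha)
    return r0, alpha

# Calibrated sizes then follow from
# r_calib = r_decals * (r_decals / r0) ** alpha
\end{verbatim}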
In the top panels of Figure \ref{fig:corr}, we display DECaLS versus CFHT effective radii of the LOWZ (left) and CMASS (right) samples, respectively. The grey points are DECaLS original radii before the CFHT calibration; the blue contours are the corrected sizes.
The CFHT calibration lowers the DECaLS effective radii by $\sim$40\%, fully consistent with the statistical correction made by \citet[][]{2011MNRAS.418.1055M} using the Zurich Estimator of Structural Types \citep[ZEST;][]{2007ApJS..172..406S} measurements. In what follows, we extrapolate and apply this calibration to the entire CMASS and LOWZ parent samples.
In order to test the CFHT calibration, we also derive an independent correction by cross-matching DECaLS with COSMOS data. Even though the overlap between the two data sets is very small -- only 67 galaxies survive the matching for CMASS and 56 for LOWZ -- the result is consistent with the CFHT analysis, as shown in the bottom panels of Figure \ref{fig:corr}. Here we show DECaLS LOWZ on the left and CMASS on the right side. The grey points are the DECaLS radii before correction and the blue filled squares are the sizes calibrated using COSMOS data. The blue empty squares are the outliers, i.e. those objects located more than $2\,\sigma$ away from the mean of the corrected distribution.
\section{Results}
\label{sec:results}
In this section, we present our main results: the morphology of the cross-matched BOSS-DECaLS CMASS and LOWZ samples and the stellar mass-size relation for their early-type galaxy population.
\subsection{The morphology of BOSS LRGs}
\label{sec:morfologia}
\begin{figure*}
\begin{center}
\hspace{-0.5cm}
\includegraphics[width=0.45\linewidth]{plots/gz_colorcutLOWZ.pdf}\hfill\hspace{0.4cm}
\includegraphics[width=0.45\linewidth]{plots/gz_colorcutCMASS.pdf}\hfill
\caption{$(g-z)$ color distribution of the cross-matched DECaLS-BOSS LOWZ (left) and CMASS (right) samples. The contributions of galaxies with De Vaucouleurs and exponential profiles are shown as red, solid and blue dashed histograms, respectively.}
\label{fig:giplot}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.5\linewidth]{plots/DECaLS_LOWZ_morf_histo_py.pdf}\hfill
\includegraphics[width=0.5\linewidth]{plots/DECaLS_CMASS_morf_histo_py.pdf}\hfill
\caption{DECaLS LOWZ (left) and CMASS (right) effective radius distributions. The large majority (89\% in LOWZ and 64\% in CMASS) of both samples is composed of galaxies with De Vaucouleurs profiles. Only 4\% (14\%) of LOWZ (CMASS) galaxies in DECaLS have an exponential profile. Objects classified as \textquoteleft\textquoteleft simple" have exponential profiles and round shape, with fixed effective radius. Galaxies classified as \textquoteleft\textquoteleft composite" are fitted by a combination of De Vaucouleurs and exponential profiles. The vertical dashed lines represent the median DECaLS seeing at the mean redshift of each sample.}
\label{fig:hlr}
\end{center}
\end{figure*}
We use the DECaLS surface brightness profile classification as an indicator of the morphology of CMASS and LOWZ galaxies. In DECaLS, the following profiles have been fitted to individual objects:
\begin{itemize}
\item De Vaucouleurs: Sersic \citep[][]{1968adga.book.....S} profile with $n=4$.
\item Exponential: Sersic profile with $n=1$.
\item Composite: linear combination of a De Vaucouleurs and an exponential profile with the same source center.
\item Simple: exponential profile with a fixed 0.45" effective radius and circular shape.
\end{itemize}
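Both of the first two models are special cases of the Sersic law $I(R)=I_{\rm{e}}\,\exp\{-b_n\,[(R/R_{\rm{eff}})^{1/n}-1]\}$, with $n=4$ for the De Vaucouleurs and $n=1$ for the exponential profile. As a purely illustrative sketch (the actual DECaLS measurements come from model fits to the survey images, and the radii below are only representative values):
\begin{verbatim}
import numpy as np

def sersic_profile(r, r_eff, n, i_eff=1.0):
    # Sersic surface-brightness law, with the standard approximation
    # b_n ~ 2n - 1/3 (accurate to about 1 per cent for n >= 1).
    b_n = 2.0 * n - 1.0 / 3.0
    return i_eff * np.exp(-b_n * ((r / r_eff) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 30.0, 100)                       # radius in kpc
de_vaucouleurs = sersic_profile(r, r_eff=7.0, n=4.0)  # early-type (n = 4)
exponential    = sersic_profile(r, r_eff=8.0, n=1.0)  # late-type  (n = 1)
\end{verbatim}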
We find that 64\% (89\%) of CMASS (LOWZ) galaxies have De Vaucouleurs profiles; 14\% (4\%) are exponentials; 17\% (1\%) are simple and 5\% (6\%) are composite.
Galaxies with De Vaucouleurs profiles are typically early-type/ellipticals, while exponentials correspond to late-type/spirals \cite[see e.g.,][]{1993MNRAS.265.1013C, 1994MNRAS.271..523D, 1995MNRAS.275..874A, 2005AJ....129...61B, 2007ApJ...659.1159S, 2011A&A...529A..53T}.
Composite profiles are a mixture of the two previous configurations. Simple profiles are used when any other profile with varying radius does not yield a significantly better $\chi^2$ (note that the number of parameters is penalized in the determination of the goodness of fit).
The CMASS selection allows for a fraction of bluer objects in the sample, which increases with redshift \citep[]{Eisenstein2001, AMD2016a}. This explains the presence of galaxies with exponential profiles. Interestingly, the fraction of galaxies with De Vaucouleurs profiles decreases significantly from the LOWZ to the CMASS sample, as the fraction of exponentials increases. In Figure \ref{fig:giplot}, we show the
$(g-z)$ color distributions in both the LOWZ (left) and CMASS (right) samples for De Vaucouleurs and elliptical galaxies separately. In the CMASS sample, galaxies with De Vaucouleurs profiles are significantly redder than those showing an exponential profile, as expected from the early-late type association. Interestingly, this separation is less obvious in the LOWZ sample, which might be due to the presence of more dusty spirals having an exponential profile. Note that the red/blue separation in the CMASS sample is more evident in the $(g-i)$ color distribution (i.e., $(g-i)=2.35$), as shown in \citet[][]{2011MNRAS.418.1055M}, \citet[][]{Dawson2013}, \citet[][]{2013MNRAS.435.2764M}, \citet[][]{2014MNRAS.437.1109R}, \citet[][]{2016MNRAS.462.2218F}, and \citet[][]{2017ApJ...836...87L}.
The fraction of late-type and early-type galaxies that we find in our samples is approximately consistent, given the uncertainties and differences between different methods, with results from \citet[]{2011MNRAS.418.1055M}, \citet[]{2013MNRAS.435.2764M} and \citet[]{AMD2016a} using the SDSS photometry.
In Figure \ref{fig:hlr}, we show the effective radius distribution of the LOWZ (left) and CMASS (right) samples, highlighting the contribution from the different morphologies. In both populations, the early-type De Vaucouleurs galaxy distribution peaks at $\rm{R_{DECaLS}}\sim7\,\rm{kpc}$, exponentials around $8\,\rm{kpc}$, composite at $12\,\rm{kpc}$ and simple below $5\,\rm{kpc}$. Most of the galaxies classified as ``composite" have a companion nearby, which prevents an accurate measurement of their effective radius. Due to this configuration, composite galaxies have on average larger radii and wider size distributions compared to the other morphologies. The number of galaxies and the number density (per unit deg$^2$) of each sample are reported in Table \ref{tab:densities}.
\begin{table*}\centering
\ra{1.3}
\begin{tabular}{@{}lccccrrrc@{}}
\hline
& \multicolumn{3}{c}{DECaLS LOWZ} & \phantom{abc}& \multicolumn{3}{c}{DECaLS CMASS} &
\phantom{abc} \\
\hline
& N$_{\rm{gal}}$ & $n_{\rm{dens}}$\,[deg$^{-2}$] & fraction\,[\%] && N$_{\rm{gal}}$ &$n_{\rm{dens}}$\,[deg$^{-2}$] & fraction\,[\%] \\
\hline
Total & 84,986 & 19.4 & 100 && 239,431 & 54.7 & 100\\
De\,Vaucouleurs & 75,441 & 17.2 & 89 && 154,004 & 35.2 & 64\\
Exponential & 3464& 0.8& 4 &&33,681 & 7.7 & 14 \\
Simple & 1062& 0.3& 1 &&41,292 & 9.4 & 17 \\
Composite &5019& 1.1& 6 && 10,454 & 2.4 & 5\\
\hline
\end{tabular}\vspace{0.3cm}
\caption{The number, number density (per unit deg$^2$) and fraction of De Vaucouleurs, exponential, simple and composite galaxies in the DECaLS LOWZ and CMASS samples.}
\label{tab:densities}
\end{table*}
In Figure \ref{fig:hlr}, the median seeing at the corresponding redshift of each sample is represented by a solid vertical line. The DECaLS PSF is dominated by seeing on scales of 1-1.2", which corresponds to a FWHM of about 2.8\,kpc at the mean redshift of LOWZ ($z\sim0.3$) and about 3.9\,kpc at the mean redshift of CMASS ($z\sim0.55$). This makes the effective radius distribution fall sharply at small radii. For LOWZ galaxies, however, this effect is less pronounced due to their larger angular size compared to CMASS objects. In what follows, we exclude from our samples those objects classified as ``simple", which have effective radius significantly lower than these thresholds.
\subsection{The mass-size relation of LRGs at $0.2<z<0.7$}
\label{sec:Mstar}
Hereafter, we will focus only on LRGs with De Vaucouleurs profiles.
\begin{figure*}
\begin{center}
\includegraphics[width=0.48\linewidth]{plots/logRBOSS_LOWZ_low.pdf}\hfill
\includegraphics[width=0.48\linewidth]{plots/logRBOSS_LOWZ_up.pdf} \hfill
\includegraphics[width=0.48\linewidth]{plots/logRBOSS_CMASS_low.pdf}\hspace{0.6cm}
\includegraphics[width=0.48\linewidth]{plots/logRBOSS_CMASS_up.pdf}
\caption{Stellar mass--\,size relation for the DECaLS LOWZ (top row) and CMASS (bottom row) samples, considering only galaxies with De Vaucouleurs profiles. DECaLS effective radii are calibrated using CFHT data as explained in Section \ref{sec:calibration}. We show in green the $1\sigma$ (innermost), $2\sigma$ (median) and $3\sigma$ (outermost) contours of each distribution, weighted against stellar mass incompleteness by applying the correction from \citet[]{2016MNRAS.457.4021L}. The blue points are the mean radii in bins of stellar mass and the error bars are the $\pm1\,\sigma$ scatter. The blue solid line is a linear fit to these mean values. The black dotted line is the linear fit to the uncalibrated relation. The grey thin contours correspond to previous observations of less massive quiescent galaxies in CFHT SDSS Stripe 82 \citep[][]{2017MNRAS.469.4523C}. The red dashed and dot-dashed lines are the results for COSMOS ETGs in groups and in the field environment from \citet[]{2013MNRAS.428.1715H}.}
\label{fig:masssize}
\end{center}
\end{figure*}
\begin{table*}\centering
\ra{1.3}
\begin{tabular}{@{}lccccc@{}}
\hline
& \multicolumn{2}{c}{DECaLS LOWZ} & \multicolumn{2}{c}{DECaLS CMASS} & \\
\hline
& $0.2\leq z<0.3$ & $0.3\leq z<0.4$ & $0.43\leq z< 0.55$&$0.55\leq z<0.6$ \\
\hline
$f$&1.00 &0.87&0.57&1.0 \\
$\sigma$&0.12 &0.20&0.20&0.22 \\
$\log{(M_1/M_{\odot})}$&11.24 &11.27&11.24&11.36\\
\hline
$A$& 0.238$\pm$0.044 &0.219$\pm$0.022 & 0.202$\pm$0.021 &0.172$\pm$0.015 \\
$B$ & -1.947$\pm$0.509&-1.706$\pm$0.263 & -1.493$\pm$0.241 &-1.141$\pm$0.178 \\
\hline
\end{tabular}\vspace{0.3cm}
\caption{\textit{Top:} Parameters used in Eq.\,\ref{eq:comp_leauthaud} from \citet[]{2016MNRAS.457.4021L} to correct for stellar-mass incompleteness. \textit{Bottom:} Parameters of the linear fits $\rm{\log{(R_{DECaLS}/kpc)}}=$$\,A\,\rm{\log{(M_{\star}/M_{\odot})}}$$\,+\,B$ to the stellar mass-size relations shown in Figure \ref{fig:masssize}.}
\label{tab:param_mass_size}
\end{table*}
Figure \ref{fig:masssize} displays the circularized effective radius as a function of stellar mass for the DECaLS LOWZ (upper row) and CMASS (lower row) samples, respectively, in four bins of redshift ($0.2\leq z<0.3$ and $0.3\leq z<0.4$ for LOWZ; $0.43\leq z<0.55$ and $0.55\leq z<0.6$ for CMASS). The density contours are approximately corrected from stellar-mass incompleteness using the analytic formula from \citet[]{2016MNRAS.457.4021L}:
\begin{equation}
c=\frac{f}{2} \left[1+\rm{erf}{\left(\frac{\log{M_{\star}/M_1}}{\sigma}\right)} \right],
\label{eq:comp_leauthaud}
\end{equation}
where the parameter values are chosen at the mean redshift of our samples, see Table\,\ref{tab:param_mass_size}.
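In practice, this correction can be applied as a per-galaxy weight proportional to $1/c$. The following minimal sketch assumes this simple weighting scheme (the flooring of very small completeness values is an assumption to avoid diverging weights) and uses the parameters listed in Table\,\ref{tab:param_mass_size}:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def completeness(log_mstar, f, sigma, log_m1):
    # Analytic stellar-mass completeness c(M*) quoted in the text (erf form).
    return 0.5 * f * (1.0 + erf((log_mstar - log_m1) / sigma))

# Example: LOWZ at 0.2 <= z < 0.3, i.e. f = 1.00, sigma = 0.12, log M1 = 11.24
c = completeness(11.5, f=1.00, sigma=0.12, log_m1=11.24)
weight = 1.0 / np.clip(c, 1e-3, None)   # floor avoids diverging weights
\end{verbatim}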
As expected, we find a correlation, although mild, between effective radius and stellar mass in our cross-matched BOSS-DECaLS samples. The mean size estimates in bins of stellar mass are displayed on top of each distribution as blue points; the error bars correspond to the $\pm1\,\sigma$ dispersion around the mean.
A linear fit of the form $\rm{\log{(R_{DECaLS}/kpc)}}=\,\textit{A}\,\rm{\log{(M_{\star}/M_{\odot})}}\,+\,\textit{B}$ is also shown in each panel of Figure \ref{fig:masssize} as a blue solid line; the corresponding parameters are given in Table\,\ref{tab:param_mass_size}. The slope of the mass-size relation increases mildly across our redshift range $0.2\leq z<0.7$, with values of $A\sim0.17-0.24$. \\\\
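As an illustrative reading of these coefficients (not an additional measurement), the $0.43\leq z<0.55$ CMASS fit with $A=0.202$ and $B=-1.493$ gives $\log{(\rm{R_{DECaLS}/kpc})}=0.202\times11.5-1.493\simeq0.83$ at $\log{(M_{\star}/M_{\odot})}=11.5$, i.e. a typical calibrated size of $R_{\rm{DECaLS}}\simeq6.8$\,kpc.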
\indent BOSS provides unprecedented statistics at the high-mass end, as compared to previous surveys and samples at similar redshifts. Establishing a fair comparison at these stellar masses is therefore tricky. Instead, in Figure \ref{fig:masssize}, we show results from two relatively large lower-mass samples. The first one is a selection of quiescent galaxies observed in CFHT SDSS Stripe 82 \citep[][]{2017MNRAS.469.4523C}, with stellar masses from the S82 Massive Galaxy Catalog\footnote{\url{http://www.ucolick.org/~kbundy/massivegalaxies/}} \citep[S82-MGC;][]{2015ApJS..221...15B}. The second one is composed of early-type galaxies detected using COSMOS \citep[]{2013MNRAS.428.1715H}.
When combined, the BOSS mass-size relation appears as a natural higher-mass continuation of those lower-mass relations, but displaying a significantly
flatter slope (the typical slope at lower-masses is $A\sim0.47-0.61$).
The apparent flattening observed in the mass-size relation might be due to residual incompleteness and selection effects that we could not take into account in the analysis, and to the CFHT size calibration. In Figure\,\ref{fig:masssize}, we overplot the linear fit to the uncalibrated relation (black dotted line), which is flatter ($A\sim0.20-0.45$) than the lower-mass measurements, but steeper than the corrected relation, especially towards higher redshifts. By comparing these two fits, one can appreciate the effect of the CFHT calibration on the DECaLS size estimates, which are reduced by a factor $\sim0.5-0.25$\,dex.
Note also that the size correction has a stronger effect on the higher redshift bins (i.e., CMASS), as expected from the right panel of Figure\,\ref{fig:corr}.
The possibility remains that the apparent flattening of the mass-size relation towards the high-mass end is related to the well-documented curvature of scaling relations for early-type galaxies \citep[see e.g.,][]{2007MNRAS.377..402D, 2009MNRAS.394.1978H, 2011MNRAS.412L...6B, 2013ApJ...769L...5K, 2013MNRAS.432.1862C, 2013MNRAS.432.1709C, AMD2016a, AMD2016b, 2017MNRAS.468...47M}. In BOSS, particularly, this phenomenon was reported by \citet[][]{AMD2016b} when analysing the intrinsic $L-\sigma$ relation for the red sequence population. In Section\,\ref{sec:comparison}, we discuss possible interpretations of this result.
\subsection{The redshift-size relation of LRGs at $0.2<z<0.7$}
\label{sec:z-size}
We have analysed the redshift evolution of the average size of massive LRGs from the BOSS-DECaLS cross-matched samples. This measurement, due to the mass-size relation itself, is very sensitive to the particular stellar mass range observed, so comparisons with previous results should be taken with caution.
Figure \ref{fig:zsize} displays the mean effective radius of our LOWZ (blue point) and CMASS (red square) samples, in which only galaxies with De Vaucouleurs profile are considered; the error bars correspond to $\pm1\,\sigma$ scatter around the mean. Our results are obtained by integrating over the entire stellar mass range. The empty black triangles represent previous estimates from SDSS and SDSS-III/BOSS \citep[]{2014ApJ...789...92B} calibrated against HST/COSMOS data and selected in a narrow bin of stellar mass.
The redshift evolution of the DECaLS early-type galaxy sizes calibrated using CFHT data is overall consistent with a flat trend, i.e. no evolution. This is in good agreement with CFHT observations in Stripe 82 of quiescent ETGs \citep[][]{2017MNRAS.469.4523C}. However, when we combine our nearly flat results with the SDSS measurements at $z\sim 0.1$ \citep[][empty magenta point]{2003MNRAS.343..978S}, the evolutionary trend mildly declines with redshift and reconciles with \citet[]{2014ApJ...789...92B}. The effective radius estimates presented by \citet[]{2014ApJ...789...92B} are systematically smaller than our results and their evolutionary trend is overall similarly flat.
Interestingly, when we limit our measurements to very high masses, $\log{(\rm{M_{\star}/ M_{\odot}})}>11.8$, we find a slope steeply declining with redshift. This is in line with current estimates for very massive ETGs in ULTRAVISTA and CANDELS/3D-HST \citep[]{2017ApJ...837..147H} and with the massive ETGs at $11.2< \log{(\rm{M_{\star}/M_{\odot}})} <12$ observed in COSMOS \citep[]{2013MNRAS.428.1715H}.
\begin{figure}
\begin{center}
\includegraphics[width=1.05\linewidth]{plots/DECaLS_CMASS_zsize_lesspoins.pdf}
\caption{Redshift-size relation of our DECaLS CMASS (red filled square) and LOWZ (blue filled point) galaxies, compared to the SDSS-III/BOSS (black empty triangles) results from \citet[]{2014ApJ...789...92B}. We also show the $z\sim0.1$ SDSS Main galaxy sample measurement from \citet[]{2003MNRAS.343..978S} (magenta empty point) which, combined with our results, suggests a mildly declining redshift trend. The dot-dashed line is the fit to the COSMOS ETGs with $11.2< \log{(\rm{M_{\star}/M_{\odot}})} <12$ \citep[]{2013MNRAS.428.1715H}.}
\label{fig:zsize}
\end{center}
\end{figure}
\section{Comparison with previous studies}
\label{sec:comparison}
We have measured the stellar mass-size relation for massive early-type galaxies within the redshift range $0.2< z < 0.7$. When compared with lower-mass results, our measurement shows a relative flattening of this relation, especially at higher redshift.
At face value, it seems that the observed flattening of the mass-size relation could be related to the well-documented curvature of the scaling relations towards the high-mass end, which has been extensively addressed in the literature for early-type galaxies \citep[][]{2009MNRAS.394.1978H, 2007MNRAS.377..402D, 2011MNRAS.412L...6B, 2013ApJ...769L...5K, 2013MNRAS.432.1862C, 2013MNRAS.432.1709C, AMD2016b}.
In particular, \citet[]{2009MNRAS.394.1978H} studied the stellar mass-size relation in a sample of $\sim50,000$ SDSS ETGs at $z\sim0.1$ and found evidence for a deviation from the linear behaviour: galaxies with $\rm{\log{(M_{\star}/M_{\odot})}}\gtrsim11.5$ have larger sizes than expected. The slope of the regression line depends on the weighting scheme adopted to correct from survey incompleteness and ranges from $A\sim1$ (unitary weights) to $A\sim0.47$ ($1/V_{\rm{max}}(L)$ weights).
\citet[]{2011MNRAS.412L...6B} demonstrated that different scaling relations for ETGs all point to two preferential mass scales, $3\times10^{10}$ and $2\times10^{11}\,\rm{M_{\odot}}$, as places where fundamental physical processes happen.
\citet[][]{2013ApJ...769L...5K} investigated the Faber-Jackson correlation between velocity dispersion $\sigma$ and total galaxy luminosity separately
for elliptical galaxies with and without cores. Using the mass-to-light ratio, they related $\sigma$ to the stellar mass. They found that the velocity dispersion of core ellipticals increases much more slowly with luminosity and mass, compared to the coreless galaxies. They claimed that this is an evidence for dry major mergers as the dominant growth mode of the most massive elliptical galaxies.
\citet[]{AMD2016b} found a steep slope and small scatter for the L-$\sigma$ relation of the massive red sequence population at $z\sim0.55$ using the CMASS sample.
Although our measurement, in combination with lower-mass results, seems generally consistent with the curvature of the scaling relations towards the high-mass end, it is noteworthy that this behaviour appears to go in the opposite direction to what is reported by \citet[]{2009MNRAS.394.1978H} at low redshift. As mentioned above, they find that SDSS ETGs at the high-mass end are progressively larger than expected (from a linear relation). Establishing a fair comparison is, however, hindered by sample differences.
Besides focusing on a different redshift range, their conclusion is drawn mostly from an intermediate-mass sample (the high-mass end corresponds to the tail of the distribution), whereas our results are obtained from a larger sample covering exclusively the high-mass end (and after comparing with independent lower-mass measurements at the same redshift). Follow-up work will be specifically devoted to addressing this question.
We have also measured the redshift evolution of the average size of massive early-type galaxies from $z=0.7$. Our results are consistent with a non-evolving scenario. This conclusion is in agreement with results from \citet[]{2017ApJ...851...34B}, who detected no growth in the stellar mass of massive (i.e., $\rm{\log(M_{\star}/M_{\odot})>11.2}$) galaxies over $0.3<z<0.65$. \citet[]{AMD2016b} also found results generally consistent with no evolution of the high-mass end of the L-$\sigma$ relation all the way to $z=0$.
\section{Summary and future work}
\label{sec:summary}
We have studied the morphology, the stellar mass-size relation and the size evolution of the SDSS-III/BOSS DR12 CMASS and LOWZ spectroscopic galaxy samples cross-matched with the DECaLS DR3 ($g,\,r,\,z$) deeper and higher-quality image photometry. The resulting CMASS and LOWZ selections include about 31\% and 23\% of the original BOSS samples.
We find that the large majority of both populations is composed of early-type galaxies with De Vaucouleurs profiles, while less than 20\% of them are late-type spirals with exponential profiles. The fraction of ETGs clearly decreases from LOWZ to CMASS. We calibrate the DECaLS sizes of these galaxies against the available observations from CFHT/Megacam and COSMOS with better image quality. We obtain an excellent agreement between these two independent corrections and our results are fully consistent with \citet[][]{2011MNRAS.418.1055M} using ZEST \citep[][]{2007ApJS..172..406S} data. By cross-matching our CMASS and LOWZ galaxies with De Vaucouleurs profiles with the Portsmouth \citep[]{2013MNRAS.435.2764M} stellar mass catalog for SDSS-III/BOSS LRGs at $0.2<z<0.7$, we are able to study the high-mass end of the distribution up to $\log{(\rm{M_{\star}/M_{\odot}})}\sim\,12.2$ with unprecedented statistics for 313,026 galaxies over 4380\,deg$^{2}$. Our main results can be summarized as:
\begin{enumerate}
\item the BOSS-DECaLS mass-size relation for massive early-type galaxies exhibits a clear correlation with an apparent flattening in the slope compared to previous estimates from ETGs in CFHT SDSS Stripe 82 at lower masses \citep{2013MNRAS.428.1715H, 2017MNRAS.469.4523C}. Further analysis is needed to determine what causes this behaviour. The apparent flattening might be explained by the fact that scaling relations for the most massive early-type galaxies can be systematically different from the same relations at lower masses \citep[e.g.,][]{AMD2016b, 2009MNRAS.394.1978H, 2011MNRAS.412L...6B, 2013ApJ...769L...5K}. \\
\item we find no evolution in the BOSS-DECaLS ETG sizes over $0.2<z<0.7$. This result is consistent with the non-evolving scenario found by \citet[]{AMD2016b} in the high-mass end of the L-$\sigma$ relation all the way to $z=0$. In addition, it is supported by the detection of no stellar-mass growth in Stripe 82 massive galaxies within $0.3<z<0.65$ \citep[]{2017ApJ...851...34B}.
If we focus only on the most massive galaxies, at $\log{(\rm{M_{\star}/M_{\odot}})}>11.8$, the slope of their size evolution becomes steeply declining with redshift. This is in agreement with current estimates for very massive ETGs in ULTRAVISTA and CANDELS/3D-HST \citep[]{2017ApJ...837..147H} and in COSMOS \citep[]{2013MNRAS.428.1715H}.\\
\item combining our BOSS-DECaLS size measurements with the SDSS results at $z\sim 0.1$ \citep[]{2003MNRAS.343..978S}, the evolutionary trend mildly declines with redshift and reconciles with \citet[]{2014ApJ...789...92B}. This is consistent with a passive evolution scenario for LRGs from $z\sim0.55$ \citep[][]{2013MNRAS.435.2764M, AMD2016a, AMD2016b, 2017ApJ...851...34B}.
\end{enumerate}
This work provides a galaxy sample with unprecedented statistics that can be used to further investigate morphological and size-related aspects in the evolution of LRGs. In addition, this cross-matched sample can be used to study the dependence of clustering on morphological and size-related properties of LRGs. Our cross-matched BOSS-DECaLS CMASS and LOWZ samples with CFHT calibrated sizes are made publicly available for the community on the \textsc{Skies and Universes}\footnote{\url{http://www.skiesanduniverses.org/}} database.
In a follow-up study, we will attempt to deconvolve the uncertainties on the effective radius and the residual incompleteness effects present in the mass-size relation using a similar forward-modeling Bayesian method as the one presented in \citet[]{AMD2016a, AMD2016b}. Within this framework, we will be able to measure the mass-size relation for the {\it{intrinsic}} red sequence population photometrically identified in \citet[]{AMD2016a}.
We also plan to look at the dust properties and star formation history of these galaxies by cross-matching them with the available data from the infra-red Herschel\footnote{\url{http://sci.esa.int/herschel/}} ESA mission.
In the near future, the Subaru HSC-SSP\footnote{\url{http://hsc.mtk.nao.ac.jp/ssp/}} Collaboration will provide ultra-deep multicolor images down to $r_{\rm{AB}}\sim28$ with 0.6" median seeing, which will be key to improving the current constraints on galaxy size and morphology. New-generation spectroscopic surveys such as SDSS-IV/eBOSS \citep[]{2016AJ....151...44D}, DESI \citep[][]{2015AAS...22533607S} and Euclid \citep[]{2011arXiv1110.3193L, 2015arXiv150502165S} will produce enormous data sets with high resolution out to redshift $z\sim2$. These observations will allow us to better understand the galaxy formation paradigm on small scales, and to coherently link it to the evolution of the large-scale structure of our Universe.
\section*{Acknowledgments}
GF is supported by a European Space Agency (ESA) Research Fellowship at the European Space Astronomy Centre (ESAC), in Madrid, Spain.
AMD acknowledges support from the Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP), through the grant 2016/23567--4.
GF and FP acknowledge financial support from MINECO grant AYA2014-60641-C2-1-P.
GF and FP are thankful to the Lawrence Berkeley National Laboratory for hosting the first phase of this work. GF and coauthors wish to thank D. Lang for insightful discussions on DECaLS morphology details.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; NOAO Proposal ID \# 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Proposal ID \# 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; NOAO Proposal ID \# 2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO); the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOAO. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
NOAO is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant \# XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant \# 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant \# 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
\bibliographystyle{mnras}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,700 |
Q: Perforce recommended way to work on a task while keeping stable state
Recently I started working at a new place and my company uses Perforce as their source control.
From what I saw, my team members work on a task and, when they finish it, they just submit the files. In case they need to stop working on the current task and move to another one, they shelve the files and switch to the different task. This means there is a shelf per task, but there is no way to view the local history of things that were done as part of the task. This results in large changes at every submit, and it is not possible to easily return to a working state within the current task.
In the past, I have been working with git. With git, I would commit very often and I was able to easily view the history of the changes I have done, even in the short term.
For example, before renaming a variable I would always commit, and then if something gets messed up, I would just revert it without even thinking and trying to dive into debugging of what is wrong. As well, when developing a feature and having basic things that work, I would commit so I would just be able to easily return to that state.
What I started to do is manually copy the files I am working on to a local git repo and then commit things over there, and then copy them back to perforce before submitting them. This is definitely a bad idea.
I am aware that git and Perforce are fundamentally different, and I wanted to know the recommended way to work on a large task in Perforce without accidentally destroying my work during development.
I am working on a gigantic project, and working with git-p4 and syncing all the changes is impossible. As well, I tried to look at: Perforce equivalent of git local commit but it still means there is a shelve saved for each state I want to save, which is not very convenient.
A: The standard way of doing this in Perforce is to use a branch; unlike shelves, branches maintain full version history of everything. There are three basic approaches to consider:
1. Create a single personal dev branch that you stabilize all your changes in before merging them back to the mainline branch. After everything has been stabilized and merged from your dev branch to the mainline, your dev branch can be updated to the latest mainline state and you have a "clean" basis for your next batch of changes.
2. Create a new task branch (or task stream if you're using streams) per major change. This can be a bit heavyweight if you have a lot of files in a "classic" branching model, but if you're using task streams, you can unload the task stream after the task is done to prune the unchanged branched files from the active depot history (similar to what happens when you squash a branch in git).
3. Create a personal server via p4 clone. This is the most git-like model -- you instantiate an entire new server on your local machine, where you can create all the branches you want with no impact on the shared server. When your changes are ready, you p4 push them back to the shared server. p4 fetch is used to pull newer changes from the shared server and "rebase" your local changes onto them if required.
I've usually opted for the first approach; the main potential drawback to it is that if you have multiple major destabilizing changes in flight at the same time, it's hard to isolate them from each other in a single branch, but in practice this is a situation that hasn't come up very often for me.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,601 |
<?php
namespace Joker\JokeBundle;
use Symfony\Component\HttpKernel\Bundle\Bundle;
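/**
 * Minimal bundle class: extending Symfony's base Bundle is all that is needed to
 * register JokerJokeBundle with the kernel; services, routing and other configuration
 * live in the bundle's config files rather than in this class.
 */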
class JokerJokeBundle extends Bundle
{
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,475 |
is an American silent film directed by Marshall Stedman, released in 1909.
Synopsis
Technical details
Title
Director: Marshall Stedman
Producer: William Selig
Production company: Selig Polyscope Company
Distribution company: Selig Polyscope Company
Country of origin:
Language: silent film with English intertitles
Format: Black and white, 35 mm, 1.33:1, silent
Genre: Drama
Release dates:
Cast
William Duncan
External links
American film released in 1909
American drama film
Film produced by William Selig
English-language film
American silent film
American black-and-white film | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,964 |
\section{Technology and Data Analysis Challenges}
\label{challenges}
The previous chapters demonstrated that the determination of the magnetic field vector using remote sensing instruments requires a close intermeshing of observational techniques and analysis methods. Technology must be pushed to its limits to obtain data sets with high S/N ratios in all Stokes parameters. An accurate knowledge of all instrumental deficiencies, unavoidably resulting in data degradation, is necessary in order to treat them carefully in the data analysis pipelines. The most common tool for extracting the magnetic field from the observations is the inversion of the Stokes spectra: From an initial guess model atmosphere synthetic Stokes spectra are computed, then the parameters of this model atmosphere are adjusted iteratively until the synthetic Stokes spectra agree with the observed ones \cite[see also][this issue]{delacruzrodriguez:15a}. In this process it is extremely important that the synthetic Stokes spectra are degraded in exactly the same way as the \revmark{instrument alters the spectra originating from the Sun}. Only then the comparison between the observed and synthetic Stokes profile will deliver reliable information about the conditions in the solar atmosphere.
The aim is, of course, to keep the instrumental deficiencies as small as possible. This is where technological improvements come into play, which have allowed for a significant increase in the accuracy of spectro-polarimetric measurements at high spatial resolution during the last decade. The key ingredients here are advances in detector technology (e.g. FSP, see \sect{groundbased:status}), massively multiplexed focal planes (2-dimensional spectropolarimeters, see \sect{groundbased:obs}), and the use of adaptive optics systems, mandatory for operating large aperture solar telescopes at their diffraction limit (see \sect{groundbased:ao}).
In the near future, large-aperture solar telescopes will make it possible to observe solar features well below scales of 50\,km. Another challenge for the future is to improve the height resolution by performing spectropolarimetric measurements in many spectral lines simultaneously. The difference in the formation height of the lines and in the response to changes of atmospheric parameters will help modern inversion tools to retrieve the stratification of the solar atmosphere with unprecedented accuracy.
Despite all the efforts to avoid instrumental deficiencies, they will always be present in the data. The proper treatment of these instrumental effects is a big challenge on the data analysis side. \revmark{This has been done successfully during the last years in terms of image reconstruction techniques based on phase-diversity, speckle, or MOMFBD techniques \cite[]{loefdahl:94,vonderluehe:93,vannoort:05}. The next evolutionary steps in this direction} are novel inversion techniques, like the spatially-coupled inversions by \cite{vannoort:12} or the sparse inversion of Stokes profiles by \cite{asensioramos:15}, which take into account the point-spread-function (PSF) of the telescope self-consistently during the inversion process. In a next development step, inversion programs must take into account the natural 2D-coupling between the single-column atmospheres.
But even the cleanest data are subject to ambiguous solutions in the determination of the physical conditions in the solar atmosphere. Such ambiguities are caused by the physical process producing the measured signal (e.g., the well-known 180$^\circ$ ambiguity of the Zeeman effect, or the Van Vleck ambiguity in the Hanle diagnostics), or by the mere presence of a multitude of model atmospheres producing the same Stokes vector within the noise level of the data (degeneracy of the fit parameters).
The resolution of ambiguities is a complex and very often not unique process, involving assumptions about the physical conditions in the solar atmosphere. A successful technique to solve for these ambiguities is to prescribe certain conditions to the solar atmosphere, for example minimizing the divergence of the magnetic field or the vertical current density. A description about available tools for removing the 180$^\circ$ Zeeman ambiguity was given by \cite{metcalf:06}. The solution of the so-called Van Vleck ambiguity, present in Hanle measurements, often requires to choose the ``more likely'' solution, i.e., the solution matching a realistic, physical model of the observed structure \cite[e.g.,][]{orozco:14,schad:13a}.
The identification of the degeneracy in the solution is another important step towards the reliable interpretation of spectropolarimetric data. It helps to identify the simplest model atmosphere compatible with the observed Stokes spectra, which is usually the one with the lowest number of free parameters. Bayesian inversion techniques \cite[]{asensioramos:07} or Markov chain Monte Carlo (MCMC) methods are promising approaches to detect degeneracies.
Ideally, ambiguities and degeneracies in the solutions should be addressed directly by the observations: multi-line spectropolarimetry can provide the complementary information needed to find the unique solution. Similarly, observations from different vantage points are beneficial. The latter will be achieved for the first time by the out-of-ecliptic mission Solar Orbiter (see \sect{space}), which will provide high-resolution Stokes maps of the photosphere with the Polarimetric and Helioseismic Imager (PHI).
\section{Chromospheric Magnetic Field Measurements}
\label{chromag}
\subsection{The Importance of the Chromospheric Field Measurement to the Current Status of Solar Physics }
\label{chromag:importance}
The previous \secs{ground} and \ref{space} demonstrated the high level of sophistication reached especially for the measurement\footnote{In this section, the term measurement refers to the determination of the magnetic field from remote sensing instruments by interpreting signals directly influenced by the solar magnetic field.} of the magnetic field in the photosphere, achieved by using advanced instrumentation in ground-based, balloon-borne and space-based observatories. The spatial and temporal resolution of the measurements in this deepest layer of the solar atmosphere directly accessible by observations has been increased by an order of magnitude within the last three decades. Similarly, recent space observatories operating at wavelengths not accessible from the ground permitted exploration of the structure of the coronal magnetic field by analyzing the plasma emission of ions trapped in the coronal magnetic field with unprecedented precision.
The chromosphere is the coupling element between the photosphere and the corona. This extremely dynamic layer is located only one convection-cell size above the solar photosphere ($\approx1000$\,km). Exposed to the cold environment of space, the chromosphere is effectively cooled by radiative energy losses. To maintain the typical temperature of approximately 10\,000\,K a substantial amount of energy is required, about 15 times higher \cite[]{aschwanden:07} than the energy necessary to heat the solar corona. Whereas the ``coronal heating problem'' has attracted the attention of the solar physics community for decades, the equally important and interesting problem for the chromosphere has only very recently been emphasized. Similarly to the corona, the candidates for the chromospheric heating are wave dissipation, magnetic reconnection and Ohmic dissipation. Not only are the details of the underlying heating mechanisms poorly understood, but even the relative importance of the possible mechanisms in various structures remains poorly determined. The investigation of chromospheric magnetism is crucial for both the understanding of the Sun-Earth connection with all its consequences for our technical and natural environment, and the detailed study of fundamental plasma physical processes in density and temperature regimes difficult to achieve in laboratories.
Progress in this field of research has been hindered by several aspects of inferring the properties of the chromosphere. The low gas and plasma densities result in large mean free paths for photons. In this radiatively-dominated regime the population of atomic levels deviates strongly from thermodynamic equilibrium, requiring complex and computationally expensive three-dimensional radiative transfer modeling to interpret the observations. Of similar complexity are magneto-hydrodynamic simulations, which for the photosphere have in the meantime become a reliable and robust tool to understand and analyze the fundamental physical processes. The basis to shed light into this complex field of research are reliable measurements of the chromospheric magnetic field at high spatial and temporal resolution.
\subsection{Challenges to High-Resolution Measurements of Chromospheric Fields}
\label{chromag:challenges}
The chromosphere ``hides'' itself between the photosphere and the corona, the two layers that are easier to access observationally. With an average height of 500 to 2000\,km above the solar surface it may be observed almost exclusively on the solar disk, with the consequence that the chromospheric information contained in the solar spectrum is to a large extent overwritten by the photospheric emission. The pressure scale height in the solar atmosphere is approximately \revmark{150\,km}, leading to typical densities in the chromosphere of about four orders of magnitude lower than the photospheric ones. This decrease of gas pressure leads to an expansion and subsequently a weakening of the magnetic field, which is rooted in often sub-arcsecond sized magnetic patches in the quiet-Sun photosphere.
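The quoted scale height can be recovered, to order of magnitude, from the hydrostatic relation
\begin{equation}
H_p = \frac{k_{\rm B} T}{\mu\, m_{\rm H}\, g_\odot} \approx 150\,\mbox{km},
\end{equation}
where, purely as an illustrative estimate, photospheric values of $T \approx 6000$\,K, a mean molecular weight $\mu \approx 1.3$, and a surface gravity $g_\odot \approx 274$\,m\,s$^{-2}$ have been assumed. An exponential pressure drop over the $\approx$1500\,km separating the photosphere from the upper chromosphere then indeed corresponds to roughly four orders of magnitude.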
The main tool for diagnosing the chromospheric magnetic field is spectropolarimetry in Fraunhofer lines formed under chromospheric conditions. With a few exceptions, the chromospheric signature in these spectral lines is only present in a narrow range around the line core. Weak fields and the low densities require highly sensitive spectropolarimetric measurements to detect these signals. Additionally, these low densities result in an increase of the typical velocities in the chromosphere: the Alfv\'en speed reaches values of 100\,km\,s$^{-1}${}, the sound speed is around 20\,km\,s$^{-1}${}. Gas, plasma and wave motions can easily achieve these velocities, shock waves can form and reconnection of magnetic field lines can occur, making measurements at high temporal and spatial resolution mandatory.
The resulting dilemma was already discussed in \sect{overview} for photospheric conditions. There, the typical velocities are on the order of 5\,km\,s$^{-1}${}, but the velocities in the chromosphere can easily be an order of magnitude higher. As a consequence, the maximum permissible exposure time to freeze the solar evolution for a large aperture solar telescope ($\gtrsim$1\,m) lies in the \refmark{ten millisecond range} (see \fig{expt_sn.pdf}a). On the other hand, the photon flux in the line core, i.e., the part of the spectrum containing the chromospheric information, is on average only 10--20\% of the continuum intensity. Additionally, to characterize the weak chromospheric fields, a polarimetric sensitivity of a few times $10^{-4}$ of the continuum intensity (i.e., a S/N ratio of several thousand) is desirable. To achieve these goals, highly photon efficient instrumentation on large aperture solar telescopes, not necessarily operated at their diffraction limit, is mandatory. The Daniel K. Inouye Solar Telescope \cite[DKIST,][]{rimmele:08,tritschler:15}, currently under construction on Maui/Hawaii, and the European Solar Telescope \cite[EST,][]{collados:13} are the logical steps in this direction.
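A simple photon-budget estimate illustrates how demanding this combination is. For a photon-noise limited measurement the polarimetric noise scales as
\begin{equation}
\sigma_P \approx \frac{1}{\sqrt{N}} \quad\Longrightarrow\quad N \approx \left(3\times10^{-4}\right)^{-2} \approx 10^{7},
\end{equation}
i.e., of order $10^{7}$ detected photons are needed per spatial and spectral element to reach a noise level of $3\times10^{-4}$ (neglecting modulation efficiency and detector noise in this rough estimate). Since the line core delivers only 10--20\% of the continuum photon flux, this corresponds to roughly $10^{8}$ continuum-equivalent photons that have to be collected within exposure times of only tens of milliseconds.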
Besides these technical challenges, chromospheric conditions introduce a high level of complexity to the interpretation of the polarization signals. Simplifying assumptions, like the local thermodynamic equilibrium (LTE) conditions prevailing in the photosphere, do not hold anymore. Chromospheric absorption and emission processes need to be modeled under non-LTE conditions involving on average 6--12 atomic levels in the most prominent atoms used for chromospheric diagnostics (H, Ca, Mg, and He). The population of the levels in these atoms additionally strongly depends on the incident coronal radiation field. The anisotropy of this illumination of the chromospheric layer is responsible for creating imbalances in the population of the sub-levels of the atoms, leading to atomic polarization. Long time scales of up to several hours for ionization and recombination processes introduce a memory of past events into the chromosphere, complicating the determination of the initial conditions for theoretical models \cite[][this issue]{delacruzrodriguez:15a}.
\subsection{Diagnostic Techniques}
\label{chromag:techniques}
Our knowledge about the chromospheric magnetic field is mainly based on the interpretation of the polarimetric signal in chromospheric spectral lines. The above-mentioned difficulties in measuring the polarimetric signal in these lines motivate the search for alternative diagnostic techniques. \refmark{Two possible candidates are:} extrapolations based on photospheric magnetic field maps and measurements in the millimeter and sub-millimeter wavelength range, soon available at high spatial resolution using the ALMA (Atacama Large Millimeter/sub-millimeter Array).
Extrapolations of the photospheric field rely on the robustness and simplicity of photospheric magnetic field measurements, which can be obtained on a routine basis at higher spatial and temporal resolution than chromospheric measurements. During the past decade a significant improvement on the reliability of these extrapolations has been achieved, for example by including H$\alpha${} images in the pre-processing of the data \cite[]{wiegelmann:08a}. \refmark{Unfortunately, the principle of extrapolations, based on minimizing the complexity of the magnetic field in the observed atmospheric volume, is not applicable for the most interesting and very dynamic conditions of, e.g., shocks or reconnection sites. To uncover the secrets at these locations, direct measurements are mandatory.}
The linear relation between millimeter wavelength emissions and the chromospheric temperature makes ALMA an ideal thermometer for the solar chromosphere. In addition, in the near future it will perform magnetic field measurements in the chromosphere with a spatial resolution of down to 0.2$^{\prime\prime}${} based on two physical processes: the Zeeman effect in high-n recombination lines of hydrogen and diatomic molecules, and the influence of the magnetic field on the temperature distribution by suppressing the power of propagating waves. Details about the usage of ALMA for chromospheric diagnostics can be found in this issue \cite[]{judge:15a} and in \cite{wedemeyer:15a}.
In the following sections we return to the current state-of-the-art measurement technique for chromospheric fields using spectropolarimetry in Zeeman and Hanle sensitive spectral lines in the visible and the near infrared regime. Here the improved observational capabilities have led to significant progress during the last couple of years, which will continue in the coming decade with the advent of large-aperture solar telescopes.
\subsubsection{Diagnostic Spectral Lines}
In the strictest sense the term chromosphere is defined as the region of H$\alpha${} emission. Using the H$\alpha${} line as the tool to investigate the chromospheric magnetism therefore seems to be an obvious choice. Unfortunately, the above-mentioned difficulties in modeling spectral lines are most prominent in this line. H$\alpha${} originates from an excited level of the hydrogen atom ($n=3$), whose population strongly depends on the temperature and radiative processes in the atmosphere, which makes the interpretation of the spectral profiles exceptionally complex \cite[]{leenaarts:12}.
Spectral lines that are easier to interpret have been identified and are now observed on a routine basis. \fig{he_10830-058.png} \cite[snapshot from Bifrost chromospheric modeling code, taken from][]{carlsson:15a} presents an overview of these lines which have emerged as especially well suited for diagnosing chromospheric magnetism. Sorted by their formation height from bottom to top, obtained from non-LTE modeling, these lines are: Mg\,\textsc{i}\,B$_2${} at 517\,nm, the Na\,\textsc{D}{} line pair at 589\,nm, the Ca\,\textsc{ii}{} infrared triplet at 854\,nm, the Ca\,\textsc{ii\,h\&k}{} lines at 390\,nm, He\,\textsc{i\,D3}{} at 588\,nm, the He\,\textsc{i}{} triplet at 1083\,nm, and the Mg\,\textsc{ii}\,h\&k{} lines at 279\,nm. The latter has been observed on a routine basis only in spectroscopic and imaging mode by the IRIS mission \cite[]{depontieu:14} and the second Sunrise stratospheric balloon flight \cite[]{barthol:11}. Its potential to retrieve the magnetic field vector is very high, but so far limited to theoretical calculations \cite[]{trujillobueno:14}.
The quiet-Sun modeling underlying \fig{he_10830-058.png} demonstrates the high corrugation of especially the upper chromospheric layers \cite[]{carlsson:15a}. The corrugation becomes even stronger above active regions containing pores or sunspots. The height variation can easily span several Mm and must be taken into account for the correct interpretation of the spectropolarimetric signals.
\colfig{he_10830-058.png}{Formation heights of various chromospheric spectral lines computed from the self-consistent chromospheric modeling code Bifrost \cite[]{gudikson:11}. The snapshot from a time series shows the iso-temperature surfaces (black lines), the plasma-$\beta$=1 layer (green) and the optical depth $\tau$=1 layers for the spectral lines as labeled in the figure. The layer of absorption in the He\,\textsc{i}{}~1083\,nm triplet is indicated by the gray-shaded area. Taken from \cite{carlsson:15a}.}
The Ca\,\textsc{ii}{} and He\,\textsc{i}{} infrared triplets are currently the focus of diagnostics of chromospheric magnetism. The analysis of the Ca\,\textsc{ii}{} lines has benefited from improvements in the computationally expensive non-LTE modeling \cite[e.g.,][]{leenaarts:09,delacruzrodriguez:12} and the availability of image reconstruction techniques, thereby permitting production of polarimetric maps at the diffraction limit of the modern solar telescopes at high signal-to-noise ratio \cite[]{scharmer:06,cavallini:06,puschmann:12}. Inversions of the radiative transfer equation (RTE) to retrieve the physical conditions in the chromosphere from the measured Stokes spectra are well on their way to becoming a standard tool for the solar community, similar to the inversion tools currently available for the photosphere. The progress in this field is summarized in \cite[][this issue]{delacruzrodriguez:15a}.
\subsubsection{The He\,\textsc{i}{} 1083\,nm Triplet}
This triplet has gained attention due to two major improvements: developments in detector technology allow for spectropolarimetric observations at noise levels down to the $10^{-4}$ range (in units of the continuum), and the theoretical explanation for the linear polarization signal in the absence of magnetic fields \cite[]{trujillobueno:02}. The latter led to the development of easy-to-use inversion codes \cite[]{asensioramos:08a,lagg:09a}. The biggest advantage of the He\,\textsc{i}{} 1083\,nm line over other chromospheric lines is the absence of almost any photospheric contamination in the signal. The reason for this lies in the special formation process of this triplet. The He\,\textsc{i}{} 1083\,nm line (as well as the He\,\textsc{i\,D3}{} line) is a transition in the triplet state of the He{} atom, which is only sparsely populated under normal photospheric and chromospheric conditions. Coronal ultra-violet (UV) radiation with a wavelength shorter than 50.4\,nm (i.e., 24.58\,eV) is required to ionize the He{} atoms. The subsequent recombination occurs with equal probability either back to the singlet state, or to the triplet state. Since the chromosphere is highly opaque to coronal UV radiation, this ionization process occurs only in the topmost layer of the chromosphere, resulting in a thin slab containing He\,\textsc{i}{} atoms in the triplet state. To first order, the physical conditions in this thin slab are height-invariant, an assumption validated by the simplicity of the observed Zeeman signals in this triplet. The flip side of this special formation process is the weak absorption signature above quiet Sun regions, where coronal UV illumination is very low.
\subsubsection{He\,\textsc{i}{}\,1083\,nm and the Hanle Effect}
A further strength of the He\,\textsc{i}{} triplet is the coverage of a wide range of magnetic field strengths: The Land\'e factors of the He\,\textsc{i}{} triplet ($g_{\mbox{eff}}=2.0, 1.75, 1.25$) allow for reliable measurements of the magnetic field vector above $\approx$50--100\,G using the Zeeman effect. For higher field strengths the Paschen-Back effect sets in, shifting the Zeeman sub-levels and introducing an asymmetry in the Stokes profiles \cite[]{socasnavarro:04,socasnavarro:05,sasso:06a}. The Hanle effect covers the region of lower field strengths \cite[0.1--100\,G,][]{trujillobueno:02}. Anisotropic illumination of the thin slab by the photospheric radiation field introduces imbalances in the population of the lower and upper levels of the transition, producing a characteristic linear polarization signal. Weak magnetic fields of up to $\approx$8\,G change the strength and the direction of polarization (Hanle sensitive regime), whereas the linear polarization signals for stronger fields are only sensitive to the field direction (Hanle saturated regime).
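These thresholds can be made plausible with the standard expression for the Zeeman splitting (quoted here only as an order-of-magnitude illustration),
\begin{equation}
\Delta\lambda_B = 4.67\times10^{-13}\, g_{\rm eff}\, \lambda^{2}\, B,
\end{equation}
with $\Delta\lambda_B$ and $\lambda$ in \AA{} and $B$ in G: for the $g_{\rm eff}=1.25$ component of the triplet at 10830\,\AA{}, a field of 100\,G produces a splitting of only $\approx$7\,m\AA{}, a small fraction of the typical Doppler width of the line. Fields much weaker than $\approx$50--100\,G therefore leave only a marginal Zeeman-induced polarization signal at realistic noise levels, and the Hanle effect takes over as the more sensitive diagnostic.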
Despite the above-mentioned advantages of the He{} triplet, the correct interpretation of the signals poses major challenges for the near future. For the correct modeling of the strength of the line, the coronal environment must be taken into account as well as the importance of collisions for the population of the triplet state. The Hanle measurements require an exact determination of the anisotropy of the incident radiation field, influenced not only by the height of the slab, but also by the presence of brightness variations in the photosphere (e.g. sunspots), and the absorption of optically thick He\,\textsc{i}{} layers. Additionally, the ambiguities introduced by the Hanle effect must be treated carefully.
The near-infrared spectral region is particularly well suited for the large-aperture ground-based telescopes which were commissioned recently. The reasons are better and more stable seeing \cite[]{turon:70}, less scattered light \cite[]{staveland:70} and less atmospheric extinction. The New Solar Telescope \cite[NST,][]{goode:10} at the Big Bear Solar Observatory with an aperture of 1.6\,m and the GREGOR Telescope on Tenerife \cite[1.5\,m,][]{schmidt:12} are both equipped with instrumentation for magnetic field diagnostics in the near infrared region, with one of the main foci being on imaging polarimetry and spectropolarimetry in the He\,\textsc{i}{} 1083\,nm triplet.
\subsection{Examples of Current Chromospheric Field Measurement Results}
\label{chromag:examples}
\subsubsection{\refmark{He\,\textsc{i}{} 1083\,nm Triplet}}
\refmark{Two examples of recent He\,\textsc{i}{} 1083\,nm observations were selected} to demonstrate recent advances in the interpretation of the spectropolarimetric data and the progress in instrumentation, allowing for observations at unprecedented spatial resolution and polarimetric sensitivity in this line.
\refmark{The first one shows data from the Facility Infrared Spectropolarimeter \cite[FIRS,][]{jaeggli:10} at the National Solar Observatory's Dunn Solar Telescope (DST):} \refmark{\cite{schad:13} presents an observation} of superpenumbral chromospheric fibrils spanning from the penumbra of NOAA AR 11408 to the internetwork regions outside the active region (see \fig{schad_he_fibrils_fig10.pdf}). Limited by the diffraction limit of the instrument of 0.6$^{\prime\prime}${}, full Stokes maps in He\,\textsc{i}{} 1083\,nm were recorded with an average noise level of 3--4\ten{-4} in units of the continuum intensity. The spectra were analyzed with the inversion code HAZEL \cite[``Hanle and Zeeman Light'',][]{asensioramos:08a} which is based on the multiterm calculations of the orthohelium atom of \cite{landi:04}. \citeauthor{schad:13} focussed their analysis on the determination of the magnetic field orientation in these fibrils to answer the question whether the magnetic field vector is aligned with the structures observed simultaneously in H$\alpha${} and Ca\,\textsc{ii}{} infrared intensity maps. This required a thorough analysis to select the correct solution out of up to 8 ambiguous solutions, a consequence of the combination of the well known 180$^\circ$ ambiguity of Zeeman effect with the so called Van Vleck ambiguity \cite[see, e.g.,][]{casini:05a,casini:09a,merenda:06,orozco:15}.
\colfig{schad_he_fibrils_fig10.pdf}{Magnetic field strength, inclination and azimuth (in solar reference frame) of superpenumbral fibrils observed in the He\,\textsc{i}{} 1083\,nm triplet with FIRS at the DST on January 29 2012. The central part of the filament is characterized by horizontal fields below 300\,G, well aligned to the visible fibrils \cite[from][]{schad:13}.}
\refmark{As a result of this analysis, \cite{schad:13} concluded} that the projected direction of the field derived from He\,\textsc{i}{} inversions does not deviate by more than $\pm$10$^\circ$ from the visible fibrils.
For most of the fibrils observed in the Ca\,\textsc{ii}{} 854.2\,nm line, \cite{delacruzrodriguez:11} obtained a similar alignment, but in some of the Ca\,\textsc{ii}{} fibrils the magnetic field direction deviates significantly from the visible structures. A thorough investigation of more fibrils at high spatial resolution, ideally simultaneously in He\,\textsc{i}{} and Ca\,\textsc{ii}{}, is required to resolve this discrepancy.
\colfig{grisdata.pdf}{\refmark{Continuum image and He\,\textsc{i}{} 1083\,nm Stokes $I$, $Q$, and $V$ maps of AR~12096 obtained with \revmark{GREGOR/GRIS} on June 27 2014 close to disk center ($\mu=0.98$). The achieved spatial resolution is 0.44$^{\prime\prime}${} at a spectral resolution of 200\,000. The total exposure time of 1.6\,s per slit position for the full Stokes vector (without readout and processing time) resulted in a noise level of 7--10$\times 10^{-4}$ of the continuum intensity.}}
\refmark{The largest European solar telescope, GREGOR, entered the scientific phase in 2014 \cite[]{schmidt:12}. The GREGOR Infrared Spectrograph \cite[GRIS,][]{collados:12} is producing spectropolarimetric He\,\textsc{i}{} 1083\,nm data with an unprecedented spatial resolution close to the diffraction limit of the telescope at signal to noise ratios in the Stokes parameters of up to 3000. An example of such Stokes parameter maps, composed from the scan of the slit over AR~12096 on June 27 2014, is presented in \fig{grisdata.pdf}. A detailed analysis of the fine structure of the chromospheric field is currently being performed at the partner institutions (Kiepenheuer Institut f\"ur Sonnenphysik, Freiburg; Max-Planck-Institut f\"ur Sonnensystemforschung, G\"ottingen; Leibniz-Institut f\"ur Astrophysik, Potsdam; Instituto de Astrof\'isica de Canarias, La Laguna) and will soon appear in a peer-reviewed journal.}
\subsubsection{\refmark{Ca\,\textsc{ii}~854.2\,nm}}
\refmark{The highest spatial resolution for spectropolarimetric measurements is currently achievable only using filter-based instruments. As described in \sect{filterinstruments}, these instruments make it possible to apply image reconstruction routines to fully exploit solar telescopes down to their theoretical resolution limits. The most successful instruments of this type are the CRisp Imaging SpectroPolarimeter at the 1-meter Swedish Solar Telescope \cite[SST/CRISP,][]{scharmer:08} and the Interferometric BIdimensional Spectrometer \cite[IBIS,][]{cavallini:06}.}
\colfig{delacruz_chromo_UF_Fig2.pdf}{\refmark{CRISP Stokes $I$, $Q/I$, and $V/I$ filtergrams at $\Delta\lambda=-150$\,m\AA{} from the core of the Ca\,\textsc{ii}{} 854.2\,nm line \cite[adapted from ][]{delacruzrodriguez:13}. The boxes mark the regions for which a detailed analysis of umbral flashes and running penumbral waves was performed.}}
\refmark{An example of a chromospheric magnetic field measurement with CRISP is presented in \fig{delacruz_chromo_UF_Fig2.pdf}, showing the Stokes $I$, $Q/I$, and $V/I$ map of AR~11204 observed on May~4 2011. \cite{delacruzrodriguez:13} applied the non-LTE inversion code NICOLE \cite[]{socasnavarro:00} to this dataset and obtained the height dependence of temperature, line-of-sight velocity and magnetic field vector above the umbra and the penumbra of the sunspot at a spatial resolution of $\approx$0.15$^{\prime\prime}${}. The high temporal cadence of CRISP (16~s for a full-Stokes line scan) enabled the authors to perform a detailed analysis of umbral flashes, revealing their $\approx$1000~K hotter shock front when compared to the surrounding material, without a measurable influence on the magnetic field strength and direction. Additionally, a surprisingly large variation of the magnetic field strength (up to 200~G) in the chromosphere above the penumbra could be attributed to running penumbral waves. For more examples of high-quality measurements in the Ca\,\textsc{ii}{} line using CRISP and IBIS data the reader is referred to \cite[]{pietarila:07} and \cite[]{kleint:12}.}
\section{Conclusions}
\label{conclusions}
As mentioned in the introduction, the Sun would be quietly evolving along the main sequence if it were not for its magnetic field. Solar magnetism is responsible for solar activity and space weather events, some of which affect our modern civilization. Magnetic processes appear to be ubiquitous throughout the universe. So for both astrophysical and practical reasons there has been increasing interest in observing and understanding the solar magnetic field. This increase is suggested by \fig{SolMagFields-Pub.pdf} which shows 5-year totals of papers containing ``solar or sun'' and ``photosphere or chromosphere'' and ``magnetic'' in either the title or the abstract of refereed journals (based on NASA ADS). Even more impressive is the increase in the percentage of the publications relative to the ones without the search tag ``magnetic'': during the last 5 years, more than 60\% of all papers on the solar photosphere and chromosphere deal with the subject magnetism.
\colfig{SolMagFields-Pub.pdf}{5-year totals of papers published in refereed journals including the words ``(solar OR sun) AND (photosphere OR chromosphere) AND magnetic'' in the title or abstract according to ADS (dashed blue curve). The solid orange curve indicates the percentage of these publications relative to the ones without the search tag ``magnetic'' (right ordinate axis).}
This impressive growth of interest in solar magnetism stems to a large extent from the recognition of its importance to almost all fundamental physical processes on the Sun and its relevance to space weather, including the efforts to predict space weather events. The rapid development of measurement techniques for solar magnetic fields is a logical consequence, additionally supported by progress in technology, especially in the field of detectors and optical instrumentation. Together with the recent and forthcoming introduction of new telescopes we expect a steady increase of new measurements and understanding of the solar magnetic field. Recent excitement in exoplanet discovery and research has awakened interest in new techniques of spectropolarimetry of stars that should benefit solar observational methods. In the entire two-century history of solar polarimetry, the next decade should be the most exciting and scientifically productive.
\section{Ground-Based Techniques}
\label{ground}
\refmark{Ground-based observations have several advantages over space- or balloon-borne observatories. Their large apertures allow for the highest spatial resolution, they can be equipped with complex instrumentation with a diverse selection of observing modes in a multitude of spectral lines, elaborate calibration schemes can be applied, and virtually no limits on data rates exist. The only impediments to observing from the ground in the past few decades were the atmospheric seeing effects and the diurnal cycle. The latter has been very successfully mitigated by establishing globally distributed observing networks with identical instrumentation such as GONG, which can be run for extremely long time periods through constant maintenance and upgrades. On the other hand, the techniques used to overcome seeing effects have seen tremendous progress. In the following we will give examples of the current state-of-the-art in high resolution imaging techniques, narrow band two-dimensional spectrometers, and high precision polarimeters.}
\subsection{Longitudinal Magnetometry: Still a Viable Tool}
\refmark{The term magnetometry has been loosely used in solar physics for the measurement of magnetic fields in the solar atmosphere. However, in reality we can only remotely sense the solar magnetic field by analyzing the photons emitted from the solar atmosphere and applying diagnostic techniques such as the Zeeman and Hanle effects to the analyzed photons. Depending on the extent of analysis of the collected photons one can infer either the complete vector or just the longitudinal component of the magnetic field.} The strength of longitudinal magnetometry is its simplicity in interpreting the measured signal, resulting from the almost linear dependence of the longitudinal Zeeman effect on the line-of-sight component of the magnetic field \cite[][this issue]{stenflo:15a}. Complex, model dependent techniques, like inversions, can often be avoided, and still the fine structure of the solar atmosphere can be retrieved \cite[]{stenflo:94a}. \cite{babcock:53} used the polarization properties of the Zeeman components of the spectral line to measure the projection of the magnetic field vector along the line-of-sight. The instruments that use this method to map the longitudinal magnetic field of the Sun are popularly known as Babcock type magnetographs. Longitudinal magnetographs have been used extensively for studying the distribution and evolution of magnetic flux on the Sun at a variety of spatial and temporal scales. Modern longitudinal magnetographs based on two dimensional image sensors exist in various flavors (e.g., SOLIS, \textit{Hinode}{} SP/NFI, GONG, MDI), and are still very popular for a variety of studies such as quiet sun magnetism, flux emergence, active region evolution, magnetic helicity flux estimation, flare related changes, polar field strengths, coronal field extrapolations and so forth.
\subsection{The Age of Vector Magnetic Field Measurements}
In contrast to longitudinal measurements, the transverse fields are more challenging to measure and, even more, to interpret. \refmark{The Zeeman effect induced linear polarization signal is proportional to the square of the transverse field strength} and is therefore weaker compared to the circular polarization signal which is proportional to the longitudinal field strength. This different response of the measured signals to the orientation of the magnetic field often results in ambiguous interpretations. For example, the distribution of the magnetic field orientation in weak field regions is still under debate as to whether the weaker fields are predominantly horizontal or vertical \cite[]{borrero:13}. In active regions, especially near the polarity inversion line, the transverse component of the field becomes strong, often accompanied with a highly non-potential or sheared magnetic field topology. The presence of free or excess magnetic energy above the potential field energy is a necessary condition for the occurrence of energetic phenomena such as flares and coronal mass ejections (CMEs). It is therefore necessary to measure the full magnetic field vector in solar active regions in order to estimate the non-potentiality of the magnetic field and its evolution leading to flares and/or CMEs.
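This different scaling follows from the weak-field approximation, valid as long as the Zeeman splitting is small compared to the Doppler width of the line, which is quoted here in schematic form (without the exact geometrical prefactors):
\begin{eqnarray}
V(\lambda) &\approx& -\Delta\lambda_B\,\cos\gamma\,\frac{\partial I}{\partial\lambda}, \\
Q(\lambda),\,U(\lambda) &\propto& \Delta\lambda_B^{2}\,\sin^{2}\gamma\,\frac{\partial^{2} I}{\partial\lambda^{2}},
\end{eqnarray}
with $\Delta\lambda_B \propto g_{\rm eff}\,\lambda^{2}\,B$ and $\gamma$ the inclination of the field with respect to the line of sight. The linear dependence of Stokes $V$ on the longitudinal component and the quadratic dependence of Stokes $Q$ and $U$ on the transverse component directly explain why the latter requires a substantially higher polarimetric sensitivity.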
Further, it is now well known that solar active regions show a weak hemispheric bias in their magnetic field twist \cite[]{pevtsov:94}. The twist of the magnetic field can be derived by using various proxies such as the force-free parameter $\alpha$, current helicity, magnetic helicity, chirality of filament barbs, or the sense of rotation of chromospheric superpenumbral whirls around sunspots.
\refmark{Most of the above-mentioned proxies require the measurement of the full magnetic field vector. The origin of the helicity in active regions and of its hemispheric bias is still an open question. Various mechanisms have been proposed to explain it: some authors think that the solar dynamo itself is responsible for the generation of helicity, others believe that turbulent convection interacting with buoyant flux tubes as they rise towards the surface is the culprit, and a third group argues that helicity is generated via reconnection near the polarity inversion line after flux emergence, when the emerged flux interacts with the ambient magnetic field of the solar corona.}
\refmark{A systematic monitoring of vector fields on the solar disk is therefore needed to understand the relation between magnetic twist and other observables, as well as to build up statistics. Synoptic observations of the magnetic field vector over the full disk derived from full Stokes polarimetry in the photospheric Fe I 630.1~nm line are routinely done by the SOLIS/VSM instrument. \fig{sanj_fig1a.pdf} shows the synoptic Carrington map of the photospheric field (radial component B$_r$) generated using the daily fulldisk measurements by SOLIS/VSM \citep{gosain:13}. Such maps are useful, for example, for monitoring the hemispheric helicity trends \citep{gosain:13}, their relation with the kinetic helicity of subsurface flows derived from helioseismology \citep{komm:14,komm:15}, global nonlinear force-free field (NLFFF) extrapolations \citep{tadesse:14a,tadesse:14b} and so on. In addition to synoptic vector field measurements of the full disk it is also desirable to have highly sensitive magnetic field measurements in certain areas of the solar disk such as polar regions and beneath non-active region filaments that also can erupt and be associated with a CME.}
\colfig{sanj_fig1a.pdf}{\refmark{Synoptic Carrington map of the vector magnetic field component B$_r$ synthesized using full-disk SOLIS/VSM vector magnetograms is shown for CR-2109 (the B$_\theta$ and B$_\phi$ components are not shown). The map is scaled between $\pm$100\,G with positive (white) values of B$_r$ pointing upward. The yellow square boxes '1' and '2' show examples of a typical compact active region and a diffuse decaying region, respectively.}}
\subsubsection{Filter-Based or Imaging Instruments\label{filterinstruments}}
As mentioned in \sect{overview} the magnetic field is determined from multi-dimensional spectropolarimetric observations: for every pixel of a 2-dimensional map the spectrum and the temporal evolution should be recorded, resulting in a 4-dimensional data cube for each of the four Stokes components ($I,Q,U,V$).
Since detectors can only record two-dimensional images, a choice of the order of the sampling must be made. Two different approaches are commonly taken, the filter-based and the spectrograph-based measurement. Both eventually lead to the same output but are fundamentally different in that the imaging instruments sample two spatial dimensions simultaneously and spectral samples are obtained sequentially, while in the case of spectrograph-based instruments one spatial dimension (along the slit) and the spectral dimension are obtained simultaneously and the second spatial dimension is obtained sequentially by scanning the slit laterally. The fourth dimension of the data cube, time, is achieved by the repetition of these measurements.
\refmark{Examples of \revmark{current} filter type instruments are the Imaging Vector Magnetograph \cite[IVM,][]{mickey:96}, the Interferometric BIdimensional Spectrometer \cite[IBIS,][]{cavallini:06}, the Solar Vector Magnetograph \cite[SVM,][]{gosain:06}, the CRisp Imaging SpectroPolarimeter (CRISP) at the SST \cite[]{scharmer:08}, and the GREGOR Fabry-P\'erot Instrument \cite[GFPI, ][]{denker:10}.} Tunable imaging spectrometers come in various flavors. For example, HMI uses a Michelson Interferometer as a filter, and CRISP, IBIS, GFPI, IVM, and SVM use tunable air-gap Fabry-P\'erot etalons in single or tandem configuration. The rapid tunability of piezo-mounted air-gap etalons and the high throughput are the main advantages of these devices. \revmark{Further, Fabry-P\'erot etalons have the advantage that, unlike for example birefringent filters, they do not contain linear polarizers. This makes dual beam polarimetry possible (see \sect{seeing}) by placing the polarizing beam splitter close to the focal plane of the cameras.} Alternatively, one can also choose solid electro-optic crystal based etalons such as lithium niobate (LiNbO$_3$) crystal wafer based etalons \cite[]{mathew:98,choudhary:02,martinezpillet:11} where the highly polished thin wafer is coated with highly reflective coatings. Here, the wavelength tuning is based on the change of the refractive index \refmark{and the thickness} of the LiNbO$_3$ crystals by applying different voltages. The advantage of these solid etalons compared to normal, air-spaced etalons is their high refractive index ($\approx$2.28), allowing for smaller etalon diameters for a given field-of-view. \refmark{For instance, the largest air-gap etalons (15\,cm aperture) ever made and used in the Improved Solar Observing Optical Network \cite[ISOON,][]{neidig:98} could be replaced by 6.5\,cm aperture LiNbO$_3$ etalons of equivalent throughput. On the other hand their limitations are that they are fragile (especially with larger apertures), have higher absorption losses, and are susceptible to damage if tuned faster than 1000\,Volts/s.}
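The tuning principle common to all of these etalons is the interference condition for a transmission maximum,
\begin{equation}
m\,\lambda = 2\,n\,d\,\cos\theta ,
\end{equation}
where $m$ is the interference order, $n$ the refractive index of the cavity medium, $d$ the cavity thickness, and $\theta$ the internal angle of incidence (this schematic relation ignores the detailed Airy transmission profile and coating phase effects). Air-gap etalons shift the passband by changing $d$ piezo-electrically, whereas in LiNbO$_3$ etalons the applied voltage changes mainly $n$ (and slightly $d$). Since the ray angles inside a solid etalon are reduced by the factor $n$, a high-index etalon tolerates a larger external angular spread, which is why a smaller LiNbO$_3$ etalon can accept the same field-of-view as a considerably larger air-gap etalon.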
\refmark{Under the assumption that the acquisition of a spectropolarimetric data-cube is faster than the evolution timescale of the solar structures and that of the atmospheric seeing, filter-based instruments provide a snapshot of the magnetic field over the field-of-view. Thus, such measurements are useful for studies of the temporal evolution of magnetic fields at high cadence, ideal for studies about, e.g., flares or waves. Until the beginning of the new millennium, the magnetic field inference from these observations had some limitations in terms of spectral resolution, sparse wavelength sampling, or spectral purity \cite[]{lites:94}. However, significant progress has been made since then in improving the performance of these instruments in various aspects, for example by combining two or more etalons in tandem configurations to increase the spectral resolution. Instruments like CRISP, IBIS, TESOS and, more recently, GFPI therefore now make it possible to obtain depth-dependent, high spatial resolution vector magnetograms. As an example, \fig{scharmer_sst.pdf} shows the longitudinal field obtained from SST/CRISP observations with twice the angular resolution of Hinode/SOT \cite[reproduced from ][]{scharmer:13}. Such observations are possible by combining several techniques, like adaptive optics, post-facto image reconstruction algorithms and the high spectral resolution and throughput of the CRISP instrument. Due to the short acquisition time of the spectropolarimetric dataset, such instruments make it possible to acquire high quality observations even during brief moments of good seeing.}
\colfigcr[0.70]{scharmer_sst.pdf}{\refmark{The longitudinal magnetic field for a 35$^{\prime\prime}${}$\times$35$^{\prime\prime}${} large FoV at $\tau_c$=0.95 derived from NICOLE inversions shows the presence of opposite polarity fields in most parts of the penumbra. Such opposite polarity fields are absent in higher layers (e.g., at $\tau_c$=0.09, 0.01). The long white arrow points towards the disk center. Figure reproduced from \cite{scharmer:13}, Fig. 4(a).}}
\subsubsection{Stokes Spectrographic Observations}
\label{groundbased:obs}
The spectrograph based instruments sample the spectral dimension simultaneously. Spatial scanning by moving the slit over the field-of-view is needed to build two-dimensional maps. Prominent examples of spectrograph based imaging polarimeters are ASP \citep{lites:96a}, SOLIS/VSM \citep{keller:01}, SPINOR \citep{socasnavarro:06}, TIP \citep{collados:07}, and GRIS \citep{collados:13}. In order to reduce the scanning time, a few variants have been developed, such as the multi-slit spectrograph, which uses multiple slits separated far enough so that the usable range of spectra can be recorded without overlap \cite[e.g., FIRS,][]{jaeggli:10}. Here, an order-sorting pre-filter must be used to isolate the wavelength range of interest. Other variants can be grouped into two classes: (1) integral field spectrographs (IFS), where the 2D field-of-view is converted to a one dimensional arrangement feeding a regular slit spectrograph, using either an image slicer \cite[]{bowen:38} or a bundle of optical fibres \cite[e.g. DLNIRSP,][]{elmore:14}, and (2) microlens arrays producing multiple point images out of an extended field-of-view, again feeding a spectrograph with the multiple point sources, oriented such that the dispersion avoids overlapping of the individual spectra (microlens fed spectrograph, M. van Noort, private communication).
\refmark{Simultaneous wavelength information together with typically higher spectral resolution, higher signal-to-noise ratios and finer wavelength sampling are the main benefits of these instruments.} This is of particular advantage when inferring the accurate field strength and orientation in the pixel, including the height stratification of the physical parameters in the line forming region of the solar atmosphere. Sit-and-stare observing modes cater well to the studies of dynamic phenomena, ideally complemented with high temporal resolution slit-jaw imaging. With an IFS the dynamical studies can be extended to a small but 2-dimensional field-of-view. The disadvantage of spectrograph based instruments is that they are not well suited for observations of a large field-of-view such as a full AR on time-scales of minutes, which is required by some studies such as flare research.
The ideal solution, therefore, would be to take a hybrid approach where the high-quality reflective slit-jaw projection of the spectrograph is used to feed a filter based spectropolarimeter. This allows for information over the whole field-of-view with high cadence and limited spectral resolution, while the smaller field-of-view sampled by the slit (or an IFS) has more detailed spectral information. Real-time processing of the imaging data can help to rapidly re-position the slit to an interesting region, e.g., a flaring kernel or an emerging flux region within the slit-jaw field-of-view.
\subsection{Overcoming Seeing\label{seeing}}
One of the biggest challenges for any ground-based measurement is the atmospheric seeing, which changes rapidly within a few tens of milliseconds. The atmospheric seeing can be one of the major sources of spurious polarization signals in sequential measurements \cite[]{lites:87,leka:01}. Dual beam measurements, where the orthogonal polarization states are separated by means of a polarizing beam splitter and recorded simultaneously, are essential for reducing the seeing effects in measurements. Unlike a single beam setup, the dual beam setup utilizes 100 percent of the photons available for polarimetry. However, the different optical paths for the measurement of the two polarization states in a dual beam setup might introduce a systematic differential gain/aberration error; this puts strict constraints on the optical quality of the two optical paths in such a system. Alternatively, if a single beam setup is preferred, a very fast modulation scheme with a matching fast camera readout must be used, in order to allow a complete modulation cycle before the seeing changes. However, polarization measurements require accumulation of various modulation cycles to build up the desired S/N. This usually results in a loss of spatial resolution, since frames taken under variable seeing conditions must be \revmark{combined} to acquire the desired S/N ratio. In order to preserve or restore the spatial resolution in such observations we then resort to the high angular resolution techniques as described below.
\subsubsection{Image Correction in Real Time}
\label{groundbased:ao}
The optical systems which can compensate for the distorting effects of atmospheric seeing in real time are known as adaptive optics (AO) systems. Such systems typically have a wavefront sensor (WFS), which provides information on the wavefront distortions. This information is used to control corrector optics in real time, typically a combination of a tip-tilt mirror and a deformable mirror \cite[]{rimmele:11,scharmer:00}. The advantages of such systems are that they (i) enhance the angular resolution of the images, and (ii) allow longer exposures to attain a high S/N ratio in polarimetric measurements. The latter benefit relieves the requirement for extremely fast camera readouts in order to freeze the atmospheric seeing. However, due to the highly variable nature of seeing and due to time constraints in AO systems, only a limited number of higher order \revmark{modes can be corrected}. Thus, residual seeing effects still remain, introducing seeing-induced cross-talk \cite[]{krishnappa:12}. Higher modulation frequencies help to reduce the seeing-induced crosstalk \cite[]{judge:04,gandorfer:04,ramelli:10,casini:12,krishnappa:12}.
The seeing is generally expressed with the Fried parameter, $r_0$, which is a measure of the spatial scale over which the wavefront can be considered approximately distortion-free (rms distortion of less than 1 radian). The performance of AO systems depends heavily upon the basic seeing conditions ($r_0$), the performance being better when $r_0$ is larger. Another property of AO systems is that the quality of real-time corrections is good near the center of the isoplanatic patch (typically $\approx$1--2$^{\prime\prime}${}) in the FoV, the patch used for wavefront sensing. A correction over a field-of-view much larger than the isoplanatic patch can be achieved with so called multi-conjugate adaptive optics (MCAO) systems \cite[]{rimmele:10}. First prototypes of MCAO systems are currently being developed \cite[e.g.,][]{schmidt:14}.
Observations in infrared wavelengths, such as the Fe\,\textsc{i}{} 1.56\,$\mu$m lines, are best suited for magnetometry, since the magnetic sensitivity scales linearly with the wavelength. A further benefit of observing in this wavelength region is that the atmospheric seeing varies with wavelength such that $r_0$ is proportional to $\lambda^{6/5}$, i.e., the isoplanatic size is much larger at longer wavelengths.
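Both advantages can be quantified with a simple scaling example. Assuming, purely for illustration, a site with $r_0 = 10$\,cm at 500\,nm, the $\lambda^{6/5}$ dependence gives
\begin{equation}
r_0(1.56\,\mu\mbox{m}) \approx 10\,\mbox{cm} \times \left(\frac{1560\,\mbox{nm}}{500\,\mbox{nm}}\right)^{6/5} \approx 39\,\mbox{cm},
\end{equation}
i.e., an almost four times larger coherence length of the wavefront. At the same time, the ratio of Zeeman splitting to Doppler width grows linearly with $\lambda$, so that, for comparable Land\'e factors, the Fe\,\textsc{i}{} 1.56\,$\mu$m lines are roughly three times more sensitive to a given field strength than lines around 500--600\,nm.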
\subsubsection{Image Reconstruction: Post Processing Techniques}
To further improve the quality of AO corrected data, offline image reconstruction methods, like speckle imaging \cite[]{vonderluehe:93}, phase diversity \revmark{\cite[]{loefdahl:94}}, and multi-object multi-frame blind deconvolution \cite[MOMFBD,][]{vannoort:05} are applied. These techniques are not limited by isoplanatism and can be used to improve the image quality over a large field-of-view. \revmark{Typically they require separate broadband channel images, recorded at a similar wavelength and strictly simultaneously with the narrowband channel images. The high S/N ratios in these broadband channel images deliver the additional information required for the reconstruction of the narrowband channel images.}
Such reconstruction techniques are essential when narrow band imaging polarimetry is performed, as the seeing introduces spatial-to-spectral crosstalk and, as a consequence, spurious polarization signals. Reconstruction methods achieve a good alignment between the pixels of sequentially taken narrowband images \cite[]{vannoort:05}, which helps avoid spurious polarization signals. The reconstruction techniques are, however, prone to errors or lead to spurious features when the noise in the data is not properly accounted for, and therefore require accurate calibration of parameters. For further details on speckle polarimetry the reader is referred to \cite{keller:92} and references therein, and for MOMFBD techniques we refer to \cite{vannoort:05} and \cite{lofdahl:94}.
\subsection{Ground-Based Observing Networks}
A network of identical instrumentation distributed geographically around the globe in order to achieve continuous observations of the Sun has been a very successful concept. The long-term operation of networks, such as the Global Oscillations Network Group \cite[GONG,][]{harvey:96} or the Birmingham Solar Observing Network \cite[BiSON,][]{chaplin:96}, has led to continuous data series of uniform quality. The benefits of running such networks are (i) redundancy of sites (e.g., if one site fails there is still a flow of data from the other nodes), (ii) the possibility of cross calibration and merging of data, and (iii) easy upgradeability and maintenance of instruments. Earlier networks were designed for helioseismology studies, which require long, uninterrupted time series of solar oscillation measurements to achieve high frequency resolution. Nowadays, emphasis is put on long term global solar magnetic field measurements for solar cycle studies and continuous monitoring of vector magnetic fields in solar active regions for space weather prediction research. A proposal to set up a new network of identical instruments designed for full-disk multi-height velocity and vector magnetic field measurements, along with high resolution context imaging in different wavelength bands such as white light, G-band, H-alpha and Calcium K, is currently funded by the European Research Council (ERC). This new network is called the Solar Physics Research Integrated Network Group (SPRING) \cite[]{hill:13}.
\subsection{Examples of the State of the Art and Current Status}
\label{groundbased:status}
Magnetic field observations received a big boost in the last decade owing to the space-based \textit{Hinode}{}/SP and \textit{SDO}{}/HMI instruments. The next big step in solar magnetic field measurements is expected from the 4-meter aperture Daniel K. Inouye Solar Telescope \cite[DKIST,][]{rimmele:08,tritschler:15}, currently being built on Maui/Hawaii. Equipped with adaptive optics and very sensitive infrared detectors, DKIST will provide magnetic field measurements with unprecedented sensitivity and spatial resolution. However, since the number of photons per diffraction-limited resolution element remains independent of the telescope aperture size, the telescope will be particularly sensitive when used as a so-called ``photon bucket'', i.e., operated at a resolution coarser than its diffraction limit. A series of other large-aperture solar telescopes has become fully operational in the last decade: the 1.6\,m New Solar Telescope \cite[NST,][]{goode:10} at Big Bear, the 1.5\,m GREGOR telescope on Tenerife \cite[]{schmidt:12}, and the 1\,m \refmark{SST} on La Palma \cite[]{scharmer:03}.
All of these telescopes are equipped with fully operational AO systems, while the development of MCAO systems is being carried out in parallel. In the near future, these MCAO systems will allow for magnetic field measurements at a higher spatial resolution over a much larger field-of-view than conventional AO systems deliver \cite[]{rimmele:10}. Also, the recent demonstration of ground-layer solar AO by \cite{ren:15} looks promising.
Specialized detectors designed for high speed polarimetry are capable of reaching a polarization sensitivity of the order of 1\ten{-5} of the continuum intensity. Examples of such detectors include the following:
\begin{itemize}
\item Charge shuffling cameras, e.g., the Zurich Imaging POLarimeter (ZIMPOL), contain hidden buffers for caching the charges on the chip, thus allowing sufficient signal to build up before the actual readout is performed \citep{povel:94}. Further, since the same pixel is exposed for different modulation states, no differential gain variations are present. The latest version of the ZIMPOL system employs sensors with improved electronics and high sensitivity. New designs/versions of ZIMPOL based on CMOS detectors are also being planned \cite[]{ramelli:10}.
\item The Fast Solar Polarimeter \cite[FSP,][]{feller:14} uses pn-CCD cameras with extremely low readout noise while delivering high frame rates. A polarization sensitivity of below 1\ten{-4} of the continuum intensity has already been achieved. The development of these cameras for larger formats (1 megapixel) is a current project at the Max Planck Institute for Solar System Research.
\end{itemize}
\section{Introduction}
\label{intro}
The energy produced by nuclear fusion in the core of our closest star, the Sun, is transported into space almost exclusively by photons emitted from the solar surface. In the absence of solar magnetic fields, this energy flux would likely be nearly constant. It is the solar activity cycle, manifested in the reversal of the solar magnetic field with a 22-year periodicity, which is the main contributor to the variation of the solar energy flux impinging on Earth. This variation amounts to less than one percent \cite[]{solanki:13a}, but it is sufficient to affect our natural and technical environment in a significant manner. Spectacular eruptions of high energy particles in so-called coronal mass ejections also have the solar magnetic field as their driver. Any reliable space weather forecast, important for example to avoid damage to Earth-orbiting satellites, requires the understanding of the energetics in the magnetized solar atmosphere. A robust determination of the solar magnetic field in all atmospheric layers is therefore mandatory.
The transition from the regime dominated by convective gas and plasma motions to the one driven by magnetic forces takes place between the photosphere and the chromosphere. In this layer, the plasma $\beta$ (i.e., the ratio of gas pressure to magnetic pressure) becomes unity. Extremely fast physical processes on size scales at and well below the resolution limit of current solar telescopes, like reconnection or wave dissipation, occur ubiquitously not only in active regions but also in regions of low solar activity. As a logical consequence, solar instrumentation development aims for the determination of the magnetic field at the highest spatial and temporal resolution directly above, in, and below the $\beta=1$ layer \cite[][this issue]{kleint:15a}.
Since in situ measurements of the field using magnetometers in these deep layers of the solar atmosphere are impossible, we have to rely on remote-sensing observations. The information about the magnetic field must be extracted from the radiation emitted from the solar surface and the layers above it. Instruments in the near ultra-violet, the visible regions (where the energy flux from the photosphere and the chromosphere is highest), and the infrared regions of the solar spectrum record the intensity and polarization profiles of Fraunhofer lines. The conditions in the solar atmosphere encoded in these spectral lines are deciphered by the combination of advanced instrumentation and sophisticated analysis techniques.
For more than a century the Zeeman effect has been used to reliably determine the magnetic field in the photosphere \cite[][this issue]{hale:1908,stenflo:15a}. Enhanced instrumental capabilities allowed one to extend the measurements from regions with strong Zeeman signals, like sunspots or pores \cite[][this issue]{rempel:15a}, down to small-scale events and also towards higher atmospheric layers: the chromosphere and the corona \cite[][this issue]{trujillo:15a,wiegelmann:15a}. Especially in these weak-field environments, the Hanle effect has gained outstanding importance during the last two decades. In some special spectral lines the advantages of both effects are combined, allowing for continuous measurements from weak, unresolved magnetic fields in the sub-gauss regime up to strong, kilo-gauss fields in sunspots.
The unambiguous interpretation of observations using these two effects requires the exact treatment of the physical mechanisms behind them. The radiative transfer equation must be solved, the population of the energy sub-levels in the relevant atoms of the solar atmosphere must be computed, and the effect of radiation and collisions must be considered \cite[][this issue]{delacruzrodriguez:15a}. In addition, the tiny imprints of weak, small scale magnetic fields in the polarization signals of spectral lines require advanced instrumentation with cutting-edge technology in, e.g., optical engineering and detector design.
\section{Overview of Solar Magnetic Field Measurement}
\label{overview}
Several methods have been used to estimate the properties of the magnetic field in various regions of the solar atmosphere. The assumption that density structures and their dynamics are entrained along or guided by the magnetic field has long been used to infer magnetic properties from observations of the solar corona, prominences and chromospheric fibrils \cite[e.g.][]{bigelow:1889}. Frequencies of oscillation of features thought to be tied to magnetic fields have been used to estimate magnetic field strengths \cite[e.g.][]{hyder:66,ballester:14}. Various properties of solar radio emission are used to deduce magnetic field characteristics in the chromosphere and corona \cite[see review by][]{kundu:90}. In addition to these remote sensing observations, direct measurement of heliospheric magnetic fields has been done for decades, and by 2024 we can expect in situ magnetic field measurements as close as 8.5 solar radii above the solar surface (Solar Probe Plus 2015).
By far the majority of solar magnetic field remote sensing measurements have been made by observations of the Zeeman and Hanle polarization effects on atomic spectral lines. Some reviews of solar spectropolarimetry methods include \cite{stenflo:94a,stenflo:13a} and \cite{keller:02a}. The vastly larger general field of polarimetry measurements and techniques is frequently reviewed. Some recent reviews of special note include \cite{trippe:14}, \cite{snik:14}, and \cite{rodenhuis:14}.
It is useful to consider the key elements of solar spectropolarimetry as a sequence from the source to the inferred measurement: source magnetic field $\rightarrow$ radiative transfer $\rightarrow$ atmosphere $\rightarrow$ telescope $\rightarrow$ calibration $\rightarrow$ \revmark{imaging optics} $\rightarrow$ polarization modulator $\rightarrow$ wavelength selector $\rightarrow$ detector $\rightarrow$ data recording $\rightarrow$ calculation of source polarization $\rightarrow$ inference of source magnetic field properties. Our task is to determine the source magnetic field as functions of 3D spatial position and time from observations of the Stokes vector elements that convey intensity and polarization information as functions of 2D spatial position, spectral wavelength, and time. This is a big challenge. \revmark{In the future, it may be possible to derive more robust 3D information by using simultaneous stereoscopic observations from space.} One could discuss each of the key elements of solar spectropolarimetry in detail but space constraints prevent that. Here we give a simple overview with focused attention on some topics of current interest.
\subsection{Contemporary Observational and Research Areas}
At present, one may identify four major domains of solar polarimetry research driven by technical tradeoff considerations and scientific interests.
\begin{enumerate}
\item High spatial resolution, with the goal of exploring small-scale magnetoconvection.
\item High polarimetric sensitivity, for studies of \revmark{weak magnetic fields} including the chromosphere and corona.
\item Full solar disk synoptic sequences, for investigations of large-scale magnetic field evolution, the solar dynamo and solar cycle, and for space weather applications.
\item Fast cadence, to better understand solar activity such as flares, CMEs, spicules, filaments, etc.
\end{enumerate}
There is not enough light available to meet all these needs simultaneously using available instrumentation or methods. Tradeoffs are required, and to guide these tradeoffs various system considerations are useful. A spectropolarimetric solar observation can be represented as a 5D hypercube with time ($t$) as one dimension and the Stokes vector elements ($I,Q,U,V$), which represent intensity and polarization information, as an additional dimension. The remaining three dimensions are two plane-of-the-sky coordinates ($x, y$) and wavelength ($\lambda$). Usually this datacube is drawn at one instant of time with edges $x, y, \lambda$ and cube volume elements containing one of the four Stokes vector elements. Ideally we would like to completely fill such a datacube with a single, brief snapshot in order to obtain observations with the most efficient use of available light. \cite{hagen:13} have very usefully reviewed progress toward snapshot spectral imaging technology and techniques. In practice, current measurements of solar magnetic fields are mostly made using spatially-scanning slit spectrometers or wavelength-scanning narrow-band filters. In the former case, observing speed is sacrificed in order to maintain spectral line profile integrity, at the cost of non-simultaneous spatial sampling. In the latter case, observing speed is emphasized but the profile integrity is compromised by the need to observe spectra at different wavelengths and polarizations at different times. The history of magnetic field estimation is full of attempts to mitigate these tradeoffs and their compromises.
\subsection{Time Constraints}
The duration of an observation is important for two main reasons. First, the Stokes vector must be measured accurately at each datacube element before any observation conditions change. \revmark{Second, the acquisition of a spectropolarimetric data cube must be made sufficiently fast compared to the evolutionary time scale of the magnetic structure observed.} These requirements conflict with both the acquisition of a sufficient number of photons for low-noise Stokes polarimetry and construction of the spatial/spectral images \refmark{quickly} enough to suppress noise from solar feature evolution and ground-based seeing or space-based pointing jitter. Thus, progress in solar magnetic feature measurement is strongly influenced by such tradeoffs, but it is also strongly linked to technology improvements that enhance the efficiency of observations.
In current practice, intensity measurements in at least four different polarization modulation states are needed to fully define a Stokes vector measurement. A polarization modulator device sequentially produces the different states. Measurement noise arises irreducibly not only from variations in the number of photons detected (shot-noise), but also spuriously from any image intensity or geometry changes during the modulation sequence. A goal is to not add significant spurious noise in excess of the shot noise. For ground-based observations, atmospheric seeing is a major source of noise. \refmark{Measurements of the frequency spectrum of seeing effects on solar images \cite[]{rimmele:00} show large power at low frequencies but also that significant noise is present at frequencies in excess of 100\,Hz.}
\refmark{Practical experience shows that polarization noise produced by atmospheric seeing may be reduced by rapid acquisition of modulated solar images.} Spatial resolution and site conditions are important factors, but completing a polarization modulation sequence in 10\,ms (from the ground) or a few seconds (from space - depending on the frequency spectrum of pointing jitter) is highly desired for minimizing spurious noise. Ground-based slit spectrograph polarimeters using fast polarization modulation can reach noise levels (uncertainty of $Q, U,$ or $V$ relative to $I$) of order $10^{-5}$ while ground-based filter polarimeters are usually limited to $10^{-3}$ due to an additional time penalty of acquiring many images at different wavelengths and polarization states during which observation conditions change. Adaptive optics and reconstruction techniques for ground-based filter images can reduce the spurious noise caused by seeing. For both ground and space filter observations the evolving pattern of solar intensity and polarization produce polarization noise that depends on the properties of the instrument and the acquisition sequence of the various wavelength and polarization state images.
\subsection{Solar Polarimetry is Light-Starved}
\revmark{To determine the basic structure of strong magnetic features like sunspots,} noise levels of $10^{-2}$ of the continuum intensity are acceptable. Some high sensitivity work requires noise of the order $10^{-5}$, or even smaller. Considering just shot noise, $10^4$ to $10^{10}$ photons must be collected to reach these noise levels. Suppose we want to make a photospheric observation at disk center using two spatial pixels per diffraction limit with an overall instrument efficiency of 10\%. \refmark{The temporal modulation typically used for polarization measurements requires us to accept a displacement of no more than 0.05 pixel on the imaging device during the observation, otherwise spurious noise is introduced to the data. Under these conditions and assuming a typical photospheric velocity of 5\,km\,s$^{-1}${},} \fig{expt_sn.pdf}a shows the maximum exposure time allowed at three different wavelengths and a range of telescope apertures. \refmark{At the small aperture size limit shown (GONG), exposure times of about 10 seconds can be used, whilst at the large size limit (DKIST), 60--160\,ms are the longest exposures allowed.}
\colfig{expt_sn.pdf}{Left (a): Maximum exposure time for diffraction-limited disk-center photospheric observation constrained by $<$0.05 pixel motion of target at 5\,km\,s$^{-1}${}. Right (b): Maximum signal to noise ratio per exposure for diffraction-limited photospheric observations at disk center \refmark{resulting from the number of photons per pixel impinging the detector within the maximum exposure time}.}
If we use two spectral samples per spectral resolution element of $\lambda/\Delta \lambda = 90\,000$ and assume an overall system transmission of 10\%, then \fig{expt_sn.pdf}b shows the maximum signal-to-noise ratio (S/N) for Stokes $I$ that can be obtained in the continuum \refmark{computed using the maximum available exposure times}. Note that the S/N ($I/\Delta I$ or $I/\Delta p$) will be lower for $Q,U,V$ ($\Delta p \approx \sqrt{3}\,\Delta I$) and in the darker parts of spectral lines. It is obvious that achieving high polarimetric sensitivity in diffraction-limited observations is very difficult.
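To make the scaling behind \fig{expt_sn.pdf}a concrete, the following minimal Python sketch (illustrative only; it adopts the same assumptions as above -- two pixels per diffraction-limited element, a 0.05 pixel allowed displacement, and a 5\,km\,s$^{-1}${} photospheric velocity -- together with an approximate Sun--Earth distance) estimates the maximum exposure time and the photon count needed for a given shot-noise level:
\begin{verbatim}
import numpy as np

SUN_DISTANCE_KM = 1.496e8   # ~1 AU; converts an angle in rad to km on the Sun


def max_exposure_s(aperture_m, wavelength_nm, v_kms=5.0, max_shift_pix=0.05):
    """Longest exposure before a feature moving at v_kms shifts by more than
    max_shift_pix, assuming 2 pixels per diffraction-limited element."""
    pixel_rad = (wavelength_nm * 1e-9) / aperture_m / 2.0
    pixel_km = pixel_rad * SUN_DISTANCE_KM
    return max_shift_pix * pixel_km / v_kms


def photons_for_noise(noise_level):
    """Shot-noise-limited photon count needed for a given relative noise."""
    return noise_level ** -2


print(max_exposure_s(4.0, 500.0))    # ~0.09 s for a 4 m aperture at 500 nm
print(max_exposure_s(0.03, 676.8))   # ~17 s for a 3 cm aperture at 676.8 nm
print(photons_for_noise(1e-3))       # 10^6 photons for 10^-3 sensitivity
\end{verbatim}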
\subsection{Strategies to Improve Efficiency}
The grim outlook shown in \fig{expt_sn.pdf}b has motivated many tradeoffs to be able to secure useful high-resolution, high-precision measurements of solar magnetic fields. Several of these were used in the first solar magnetographs. One may, for example, use larger spatial and spectral samples, combine signals from more than one spectrum line, use sparse sampling of the spectrum, slice the incoming solar image into a format better suited to the spectrometer, and/or select a wavelength compatible with high overall instrumental efficiency. Dual-beam polarimetry was introduced so that all the photons are used all the time. As detector technology advanced, better quantum efficiency and ever higher read-out speeds were achieved, the latter greatly reducing crosstalk among polarization states arising from residual image motion \refmark{and blurring}.
There are only a few chromospheric and coronal spectral lines that are useful for magnetography, but many lines forming in the photosphere are available. The photospheric photon flux per spectrum line Doppler width peaks at near-IR wavelengths around 1\,$\mu$m so this should be an optimum wavelength for observations. However, IR spectrum lines tend to be weak compared to those at shorter wavelengths (though this is counteracted by stronger Zeeman splitting at long wavelengths), and spatial resolution for a given telescope aperture is smaller at longer wavelengths. Historically, lines in the visible part of the spectrum have been favored mainly owing to the availability of lower-cost detectors with high quantum efficiency. At present, the optimum wavelength for photospheric magnetic field Zeeman effect measurements ranges from the yellow to the near infrared.
One long-used strategy is to observe only circular polarization (Stokes $V$), from which the line-of-sight component of the magnetic field can be estimated. Provided the magnetic elements are spatially resolved, to first order the strength of the circular polarization signal depends linearly on the source magnetic field strength, while the linear polarization signal (Stokes $Q$ and $U$) has a quadratic dependence, and for modest field strengths it is rather weak.
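For reference, these scalings follow from the weak-field approximation of the Zeeman effect; a commonly quoted form (given here for orientation only, and not taken from the works cited in this review) is
\begin{equation}
V(\lambda) \approx -\Delta\lambda_B \cos\gamma \,\frac{\partial I}{\partial \lambda},
\qquad
\Delta\lambda_B = \frac{e}{4\pi m_{\rm e} c^{2}}\, g_{\rm eff}\, \lambda^{2} B
\approx 4.67\times10^{-13}\, g_{\rm eff}\, \lambda^{2}\, B ,
\end{equation}
with $\lambda$ in \AA{}, $B$ in gauss, $\gamma$ the inclination of the field to the line of sight, and $g_{\rm eff}$ the effective Land\'e factor. The linear polarization signals instead scale as $\Delta\lambda_B^{2}\sin^{2}\gamma$ (times the second wavelength derivative of $I$), hence their quadratic dependence on the field strength.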
The availability of detectors with large numbers of pixels that can be read out fairly quickly has stimulated clever techniques to fill the image/polarization/wavelength hypercube in single snapshots \cite[]{hagen:13}. The basic idea is to divide the large space on the detector in ways that allow recording spectral, spatial, and polarization information simultaneously. One older example first used photographically \cite[]{martin:74} is as follows. Instead of using a small format detector in a spectrograph having one long entrance slit, a large format detector allows one to use many entrance slits each producing a spectrum of a different part of the solar image. \fig{meudon.pdf} is an early example of what such an observation would look like. Overlapping of the output spectra is mitigated by using a blocking filter with a narrow pass band. So instead of scanning a solar image with one slit, using N slits reduces the required scan time to 1/N, a significant efficiency improvement. There are dozens of other clever ideas to improve efficiency based on large format detectors. Many of the ideas use fiber optics, microlens, micropolarizer, and microfilter arrays and other modern optical technologies. It is almost inevitable that a new generation of solar magnetographs will utilize some of these efficiency improvements.
\colfig{meudon.pdf}{\refmark{An early example of trading sparse spatial sampling to gain spectral coverage.} Spectroheliogram and zoom showing individual K-line spectra at numerous positions of the entrance slit (Meudon, 1938 August 8).}
Detector performance is key to magnetic field measurement. Fortunately, detector technology continues to improve rapidly. High efficiency sometimes requires conflicting properties. These include: high quantum efficiency over a large range of wavelengths, large pixel count, large dynamic range, excellent linearity, fast readout, no image smear, excellent modulation transfer function, no image lag (signal retention between successive readouts), no etalon fringing, and, if possible, pixel level on-chip demodulation capability. Manufacturers have devised various architectures based on converting photons to charges in a detection layer followed by conversion of the charges to a measurable voltage. In some devices the detection layer is stacked above a layer of readout electronics. In others all the components are essentially in one layer. In some devices (e.g. CCD, most CMOS) the readout process destroys the original charge while in others (e.g. CID, DEPFET, ZIMPOL, sCMOS) the charge is shuffled and not necessarily destroyed immediately by the readout process. To obtain high speed, the sensor area is divided into smaller regions and readout is accomplished simultaneously through multiple channels.
\subsection{State of the Art}
\fig{domains.pdf} summarizes the major domains of solar spectropolarimetry, and some telescopes/instruments in use (or planned in parentheses) placed close to the domains for which they were designed. This is not a full listing (for example, a few years ago there were more than 20 instruments capable of measuring the chromospheric magnetic field). The figure is intended to suggest the large range of capabilities available for measuring the solar magnetic field limited \revmark{primarily} by financial and personnel resources and the advance of technology.
\colfig{domains.pdf}{Domains of current solar spectropolarimetry and a sampling of telescope/instrument names clustered to the domain for which they are best suited.}
\section{Space-based Techniques}
\label{space}
\subsection{Motivation for Measurement of Photospheric Magnetic Fields from Space}
\label{motivation}
\refmark{Owing to the circumstance} that many aspects of photospheric magnetism have remained largely unresolved, the study of solar magnetism has resulted in an ongoing quest for observations of higher angular resolution. A fundamental scale of the solar atmosphere is the density scale height ($h_{\rho}$, of order \revmark{150\,km} in the photosphere). The scale height is the characteristic length for many of the most important hydrodynamic and radiative processes. Resolving photospheric magnetic fields on this scale has been a goal of solar physics for several decades, and the cleanest way to arrive at a spatial resolution of $h_{\rho}$ would be observations from a space-based platform. Unfortunately, the cost of deploying a telescope of the necessary size (diameter of order 1\,m) in space has been prohibitive to realization of this goal. Parallel efforts from ground-based observatories -- larger telescopes, adaptive optics, advanced image restoration techniques -- are pushing observations closer to this long-sought goal. As a result, the quest for the highest angular resolution from space might now be relegated to a secondary status relative to the benefits of uninterrupted observations with uniformly \textit{very high} angular resolution. Provided that continuity of observations is achieved, many extremely important science problems of solar physics may be addressed effectively without completely resolving the magnetic fields with resolution equivalent to $h_{\rho}$.
Another motivation for space-based observations is access to the hemisphere of the Sun not visible from Earth (or near-Earth orbit), including detailed study of magnetic fields at the solar poles. Imaging instrumentation has been flown (e.g., the \textit{STEREO} mission), but so far no observations of photospheric magnetic fields have been carried out from such a vantage point, although one such mission is in development (see \sect{sect:future}).
A less obvious motivation for space observations of solar magnetism is support of coordinated observing campaigns. Science objectives that require diverse observations from multiple observatories (both space- and ground-based) have a much higher chance of success because the space-based components of the campaign have a predictably high probability of success.
\subsection{Challenges of Space-Based Magnetic Field Measurements}
The development of any space mission is a costly and lengthy process, often necessitating international collaboration in order to make the program possible within budgets of the individual participating countries. International collaborations increase the complexity of management of a space instrumentation program, but they also broaden the pool of available scientific and engineering talent. Many of the missions providing measures of photospheric magnetic fields (as summarized below) result from significant international collaboration.
The technical challenges of space instrumentation are significant in comparison to ground-based observing. Weight and power are always at a premium, as is physical size. Space instrumentation must be resilient under extremes of temperature and radiation exposure never encountered on the ground. The vacuum environment can lead to contamination of optical surfaces, so careful attention must be paid to the outgassing properties of materials for space flight. Furthermore, complex instrumentation must be ultra-reliable and multiply redundant to reduce the possibility of a single-point failure that would bring an end to the mission. These challenges of space instrumentation drive the high development cost and result in significant compromises in design. As a rule, the simpler the design and the fewer the moving parts, the more reliable the instrument becomes. As a result of simplicity, space instrumentation usually has far less flexibility of operation than comparable ground-based instrumentation.
To date, space instrumentation for measurement of photospheric magnetism has carried out measurements of the Zeeman effect in the visible spectrum only. There is little motivation for observing photospheric fields in ultraviolet lines because the ratio of magnetic splitting to the Doppler width decreases linearly with decreasing wavelength. Furthermore, several other factors render observations at ultraviolet wavelengths more difficult, from both scientific and technical standpoints. Conversely, the advantages of observing in the infrared are countered by the cost of deploying proportionately larger telescopes in order to achieve spatial resolution equivalent to that of an instrument operating in the visible.
Solar observations require high angular resolution, so the measurement of magnetic fields over the spatial extent of even a single active region demands very high data rates. Full Stokes polarimetry at multiple wavelengths is necessary for measurement of the vector magnetic field, hence at least four times the data volume of intensity-only measurements is required. All of these data must be transmitted to the ground at a cadence appropriate for monitoring the evolution of the target solar features. In order to confront the data requirements in the face of limited telemetry, the polarization data are usually compressed on-board. The compression leads to some loss of information that would not be present in equivalent ground-based measurements. Even with compression, telemetry remains a limitation on the quantity of data that may be acquired during any given time period.
\subsection{Past and Ongoing Space Missions}
To date there have been only three successful space missions capable of measurements of the photospheric magnetic field. Two of those, \textit{SoHO}{} and \textit{SDO}{}, carried synoptic instruments with full-disk capability intended for precision Doppler velocity measurements for helioseismology in addition to sensing the magnetic field. The third mission, \textit{Hinode}{}, has instrumentation optimized for high-resolution study of magnetic fields as a primary objective. These three missions are described in more detail below, and an example of one significant scientific accomplishment is provided for each.
\subsubsection{Solar and Heliospheric Observatory/Michelson Doppler Imager (\textit{SoHO}{}/MDI)}
The \textit{SoHO}{}/MDI instrument \citep{scherrer:95} was primarily intended for helioseismology, but fortunately the capability for measurement of circular polarization, and hence the line-of-sight component of the magnetic apparent flux density ${B^L}_{\rm app}$, was retained in spite of pressure to ``de-scope'' the instrument owing to budgetary considerations. \textit{SoHO}{}/MDI operated from December 1995 to April 2011 and therefore provided coverage of the magnetic field spanning more than one complete solar cycle.
\colfig{mdioptical.pdf}{The optical layout of the \textit{SoHO}{}/MDI instrument is shown. The wavelength isolation was accomplished by a pair of Michelson interferometers preceded by a Lyot filter. Circular polarization analysis was accomplished by alternately rotating quarter-wave retarders into the beam with the polarization analyzer wheel. Diagram taken from \citet{scherrer:95}.}
The optical layout of \textit{SoHO}{}/MDI is shown in \fig{mdioptical.pdf}. The heart of the instrument, the Michelson interferometers, had heritage from the Fourier Tachometer \citep{brown:81}, a ground-based instrument for helioseismology. The rotatable waveplates between the interferometers allowed the device to tune over the $\sim$0.5\,\AA{} bandpass of the Lyot filter centered on the Ni\,{\sc i} line at 6768\,\AA{}. The full-width half-maximum (FWHM) of the combined system was listed at 94\,m\AA{}, and in typical operation \textit{SoHO}{}/MDI sampled the line at five wavelengths separated by 75\,m\AA{}.
The emphasis on helioseismology led to several aspects of this instrument that were not optimal for precision measures of photospheric magnetic fields: performing the polarization analysis by alternately inserting separate quarter-wave plates, the use of a spectrum line with less than optimal magnetic sensitivity, a spectral resolution element a factor of 2--3 broader than the typical Doppler width of the Ni\,{\sc i} line in the photosphere, and limited angular resolution (4$^{\prime\prime}${} in full-disk mode, 1.2$^{\prime\prime}${} in high resolution mode). That being said, owing to its station at the first Earth-Sun Lagrangian point L1, \textit{SoHO}{}/MDI was the first instrument to provide an uninterrupted sequence of longitudinal magnetograms of the Sun at a regular cadence and uniform quality. This instrument facilitated the study of the evolution of active regions in a way never before realized.
The magnetogram data from \textit{SoHO}{}/MDI were (and are still) being used extensively to understand solar phenomena, as witnessed by the more than 800 abstracts citing the MDI instrument between 1996 and 2015. The numerous applications of the data from this instrument facilitated such diverse studies as extrapolation of photospheric fields to compare with structures in the chromosphere and corona, studies of active region evolution including measures of injection of helicity into the upper solar atmosphere, polar fields, flare-induced magnetic field changes, the decay of active regions, and many more.
The science made possible uniquely by this space experiment is illustrated by the statistical study of the orientation axis of bipolar regions \citep{stenflo:12}. They examined over 73\,000 active regions from 1995--2011. This selection included regions with bipolar strength (or flux) ranging from just above the typical flux of ephemeral active regions to the largest observed regions. They found that the distribution of tilt angles (Joy's law) with solar latitude obeys the same relationship independently of the strength of the dipole (or net flux) \refmark{of the region.
This behavior led} the authors to conclude that the tilt of active regions is established at the source of the dynamo giving rise to the active regions, not in the buoyant rise through the convection zone as was previously hypothesized. Furthermore, the authors showed examples (representing just a few percent of the total number of active regions) that did not obey the Hale hemispheric polarity law. The authors then conclude that these exceptions ``...rule out the possibility of well-defined, coherent toroidal flux systems as a source of all active regions, even the large ones''.
\subsubsection{Solar Dynamics Observatory/Helioseismic and Magnetic Imager (\textit{SDO}{}/HMI)}
The Solar Dynamics Observatory (\textit{SDO}{}), launched in February 2010, carries as one of its complement of instruments the Helioseismic and Magnetic Imager \cite[HMI,][]{scherrer:12,schou:12}. This instrument is so similar in design to the \textit{SoHO}{}/MDI that no optical layout of the system is provided herein, but it incorporates a number of features that represent significant upgrades from \textit{SoHO}{}/MDI:
\begin{itemize}
\item A different spectral line is observed: Fe\,{\sc i} 6173\,\AA{}. Not only is this line isolated within a fairly clean region of the spectrum to enable precision helioseismology, but it is also a normal Zeeman triplet with high magnetic sensitivity.
\item Polarization analysis is carried out with rotating retarders mounted permanently in the beam. This arrangement allows for higher precision polarimetry than \textit{SoHO}{}\refmark{/MDI}. Additionally, these waveplates allow for measurement of the full Stokes vector, so the instrument is capable of vector magnetometry.
\item The angular resolution of the instrument is 1$^{\prime\prime}${} (0.5$^{\prime\prime}${} pixels), with the field-of-view covering the entire solar disk.
\item Two CCD cameras are used: one is dedicated to helioseismology, the other to polarimetry. The data system operates at a higher cadence than that of \textit{SoHO}{}/MDI.
\end{itemize}
As the name of the instrument implies, this instrument has a much greater scientific emphasis on magnetometry than the \textit{SoHO}{}/MDI did. Besides continuing the helioseismic function beyond the end of the \textit{SoHO}{}/MDI program, \textit{SDO}{}/HMI brings a new dimension in continuous \textit{vector} magnetometry of the full disk. This instrument makes it possible, for the first time, to perform synoptic studies of the vector magnetic field of every active region on the disk. The work of \citet{liu:14} serves as one example of a study involving the \textit{SDO}{}/HMI vector magnetogram data of many active regions. In that work they sought to validate, with data of higher resolution and uniform quality, the rather weak hemispheric preference for sign of the twist of active region fields as reported by several authors who used ground-based data. As a measure of twist they adopted a ${B_z}^2$-weighted average of the force-free parameter $\alpha$ (obeying $\nabla \times \bf{B} = \alpha \bf{B}$) over the active region. \fig{liuhelicity2014.pdf} shows the result of their analysis of 151 active regions appearing during the first 3.5 years of the current activity cycle. This study demonstrates a very clear hemispheric preference for sign of the magnetic helicity. The high cadence of \textit{SDO}{}/HMI vector magnetic field measurements allowed them to select for study observations of active regions as they traversed the solar central meridian, thereby avoiding any bias that could arise as a result of viewing angle with respect to solar longitude.
\colfig{liuhelicity2014.pdf}{The latitudinal distribution of the sign of the ${B_z}^2$-weighted average ${\alpha}_w$ of the force-free parameter $\alpha$ is indicated as a function of solar latitude and time during the rise phase of Solar Cycle 24. The figure results from analysis of vector magnetograms obtained with \textit{SDO}{}/HMI. The sign of this proxy of magnetic helicity is indicated by color. For this distribution of 151 active regions, 75\% show the hemispheric preference for helicity suggested by previous studies of ground-based data. Diagram adapted from \citet{liu:14}.}
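For orientation, a minimal sketch of how such a ${B_z}^2$-weighted twist proxy could be computed from a vector magnetogram is given below. It is illustrative only: it simply combines the force-free relation $\alpha = (\partial B_y/\partial x - \partial B_x/\partial y)/B_z$ with the weighting described above, and is not the actual \textit{SDO}{}/HMI pipeline code.
\begin{verbatim}
import numpy as np


def weighted_alpha(bx, by, bz, dx):
    """B_z^2-weighted average of the force-free parameter alpha.

    bx, by, bz : 2-D arrays [G], indexed as [y, x]; dx : pixel size [cm].
    With alpha = (dBy/dx - dBx/dy) / Bz, the Bz**2 weighting reduces to
    sum(curl_z * Bz) / sum(Bz**2).
    """
    dby_dx = np.gradient(by, dx, axis=1)   # d(B_y)/dx
    dbx_dy = np.gradient(bx, dx, axis=0)   # d(B_x)/dy
    curl_z = dby_dx - dbx_dy               # vertical component of curl(B)
    return np.sum(curl_z * bz) / np.sum(bz ** 2)
\end{verbatim}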
Being a very similar cousin to the \textit{SoHO}{}/MDI instrument, the \textit{SDO}{}/HMI instrument is optimized for helioseismology rather than precision polarimetry. As a result, it has some limitations for measurement of the vector magnetic field. Firstly, the instrument has a rather low duty cycle and therefore it does not use the available photons efficiently. The signal-to-noise ratio of typical polarimetric observations is such that studies of the vector magnetic field in quiet regions are extremely difficult at best. Second, the instrument is a single-beam polarimeter with a slow polarization modulation cycle. This means that there will be crosstalk among the Stokes parameters arising from uncompensated image motion and evolution of the solar scene. Third, only one spectral line is sampled, and that with six samples of the line profile. Each wavelength sample is subject to a filter bandpass of 76\,m\AA{}. Instruments optimized for Stokes polarimetry sample the complete spectral profile of two or more lines having different sensitivities to the Zeeman effect, and do so with a spectral resolution comparable to the Doppler width of the line in question (typically 30--40\,m\AA{}~for photospheric lines). The compromises of the \textit{SDO}{}/HMI spectral sampling and resolution result in greatly reduced sensitivity to subtle features of the Stokes profiles that would otherwise assist in precision measures of the magnetic field vector. They also prevent the accurate extraction of magnetic filling factors in the data reduction process, so the standard \textit{SDO}{}/HMI data inversions assume unit filling factor. For regions outside of sunspots, the extracted field strength is then actually a measure of the average field strength over the resolution element - a measure that is lower than the actual field for the usual case where intense magnetic elements are not resolved spatially. Furthermore, the inferred field inclination will be more transverse to the line-of-sight than the actual field. Thus one must anticipate significant systematic errors in the inferred magnetic field vector for regions outside of sunspots. These errors may or may not be significant depending upon the nature of the science question being addressed. For studies that are highly sensitive to such errors, one should use data from instruments specifically optimized for Stokes polarimetry, such as the Solar Optical Telescope on the \textit{Hinode}{} mission described in the following. Furthermore, limited spectral sampling hinders the extraction of vertical gradients of physical parameters that might otherwise be available to analysis in finely-sampled Stokes profiles, and the limited wavelength range of these samples precludes measurement of strongly Doppler-shifted fields or very strong Zeeman splitting in sunspot umbrae.
\subsubsection{\textit{Hinode}{} SOT/FG and SOT/SP}
The \textit{Hinode}{} mission \citep{kosugi:07a} is the first space mission carrying instrumentation specifically optimized for precision Stokes polarimetry. For this purpose it carries the 50 cm Solar Optical Telescope \citep[SOT:][]{tsuneta:08a} with capability of both spectrographic observations with the Spectro-Polarimeter \citep[SP:][]{lites:13a} and filtergraphic observations with the Narrowband Filter Instrument (NFI). The spectrographic observations have the advantages that they provide measures of the spectral Stokes profiles simultaneously at all wavelengths, so that the integrity of the profiles in wavelength is retained, and also that solar events with high Doppler shift do not fall outside of the sampled wavelength range. These advantages are countered by the non-simultaneity of images of the solar scene that must be built up by stepping the narrow spectrograph slit across the image. One may have the best of both worlds with simultaneous filtergraphic and spectrographic Stokes polarimetry as implemented in the SOT/SP/NFI instrumentation.
\colfig{sotoptical.pdf}{The optical layout of the \textit{Hinode}{} SOT identifies the five major optical systems with different colors. Of particular interest for this paper are the Narrowband Filter Instrument (blue) and the Spectro-Polarimeter (magenta), both of which are specifically designed to perform precision polarimetry for measurement of solar magnetic fields. For a detailed description of the Spectro-Polarimeter see \citet{lites:13a}. Diagram taken from \citet{tsuneta:08a}.}
The optical layout of the SOT Focal Plane Package (FPP) is shown in \fig{sotoptical.pdf}. Polarization modulation is accomplished by a rotating bi-crystalline retarder (quartz and sapphire) located in the symmetric part of the optical system just after the Optical Telescope Assembly (OTA). After the reflection by the tip-tilt mirror the beam is divided among four optical paths: the SP, the Broadband Filter Instrument (BFI), the NFI, and the Correlation Tracker. The NFI and BFI share a common focal plane and detector. The SP and NFI (or BFI) can operate simultaneously because the SP has its own detector. The polarization modulator rotates at 0.625\,Hz such that the frame transfer detectors sample continuously at 10\,Hz: a sampling speed fast enough to ``freeze'' moving features in the solar atmosphere for most polarimetric measurements. The SOT/SP operates in dual-beam mode by imaging both orthogonal linear polarizations exiting the polarizing beam splitter near its focal plane. Owing to continuous integration/readout of the detector, the instrument operates at close to 100\% duty cycle. Several modes of polarimetry are available to the NFI, including frame-transfer modes using an externally positioned mask of the focal plane. So both SOT/NFI and SOT/SP are capable of high precision polarimetry at high angular resolution.
\colfig{centenoemerge.pdf}{This sequence of four continuum intensity images derived from repeated, short maps of the \textit{Hinode}{}/SOT Spectro-Polarimeter reveal the emergence of a small dipole loop in a granule of the quiet Sun. Red and green contours represent positive and negative circular polarization (upward- and downward-directed magnetic field), and orange contours indicate linear polarization (horizontal field component). A loop structure is seen to form in the second frame with horizontal fields between the two vertically-oriented fields of opposite polarity. As the top of the loop rises above the photosphere, only vertical fields remain in the photosphere, each centered over a dark intergranular lane. Diagram taken from \citet{centeno:07}.}
Freedom from the disturbing effects of the Earth's atmosphere allows the \textit{Hinode}{} polarimeters to continuously achieve angular resolution of the 50 cm SOT. This has permitted the detailed study of small-scale magnetism in the solar atmosphere. One area of intense study by \textit{Hinode}{} researchers has been the magnetism of the quiet solar atmosphere, as illustrated by the example in \fig{centenoemerge.pdf}. Prior to \textit{Hinode}{}, glimpses from ground-based instrumentation revealed the presence of small-scale horizontal fields in the solar photosphere \citep{lites:96}, and it was suspected that they might correspond to emergence of small magnetic loops into the photosphere. With the advent of \textit{Hinode}{} the widespread occurrence of these features was revealed for the first time \citep{lites:08a} owing to the excellent image quality combined with high polarimetric precision. Using repeated short maps of quiet regions with SOT/SP, \citet{centeno:07} were able to demonstrate that the horizontal fields are indeed the tops of tiny magnetic loops emerging within granules, as shown in \fig{centenoemerge.pdf}.
\refmark{As a consequence of the seeing-free space environment, the point spread function (PSF) does not vary significantly during the time of an observation. This attribute allowed the development of inversion codes that take into account the PSF self-consistently during the minimization process, with the result that maps having diffraction-limited resolution of the SOT may be obtained for the physical parameters in the solar atmosphere. Such techniques are described in \cite{vannoort:12,asensioramos:15} and also in this special issue \cite[]{delacruzrodriguez:15a}. They have already resulted in numerous publications \cite[e.g.,][]{lagg:14,tiwari:13,buehler:15}.}
The \textit{Hinode}{} mission was launched in September 2006 and still continues to operate providing measures of the photospheric magnetic field reliably and on demand. It is the only such space mission operating now, and comparable or better instrumentation is only on the horizon, being at least 6--10 years away. The SOT/SP is operating nominally, having experienced only a 25\% drop in throughput over the past nine years on orbit. The SOT/NFI instrument continues to provide limited magnetogram data in the wing of the Na D-line only, owing to a progressive drift off-band of the pre-filters to the birefringent filter. Also, about a year into the mission, \textit{Hinode}{} suffered a failure of its high-rate X-band telemetry. Fortunately, the lower rate S-band telemetry continued to function well, and conservative management of the data volume from SOT combined with a greatly increased number of downlink passes has allowed most of the SOT science objectives to continue to be addressed. This history of \textit{Hinode}{} telemetry issues is yet another testament to the need for redundancy in space mission hardware.
\subsection{High-Altitude Balloon Missions}
Although they are not strictly space missions, a brief discussion of high-altitude balloon missions carrying instrumentation to measure the photospheric magnetic field is provided here. Balloon platforms reproduce some desirable conditions of space: 1) the rarefied atmosphere at high altitude nearly completely eliminates the disruptive effect of atmospheric seeing, 2) during high-latitude mid-summer, extended periods of solar observing without night interruption are possible, and 3) the residual atmosphere is nearly transparent at some ultraviolet wavelengths.
The cost of a balloon experiment is much smaller than that of an equivalent experiment in space because of relaxed restrictions on weight, power, contamination, vibration/acoustic tolerance, documentation, and other factors. Cost issues aside, development of a balloon mission is still a major undertaking. Taking the two prior solar magnetographic balloon missions as typical, the development time for a major long-duration balloon experiment is comparable to that of a space mission. Reliability constraints of a balloon mission are similar to those of a space experiment since high altitude balloon launches are expensive and infrequent. One usually has the opportunity to recover the flight hardware after the end of a balloon mission, but that hardware must be able to survive a crash landing if it is to be flown again. In most respects, pointing and image stabilization of a balloon-borne high resolution telescope is more difficult than on a space platform because one has to contend with variable winds at float altitude, and variable wind gradients between the balloon itself and its payload drive a pendulum motion.
Solar long-duration balloon experiments may be flown at mid-summer in the polar regions in order to have continuous observing throughout the flight. Two solar magnetographic experiments have been flown: the Flare Genesis Experiment \citep[FGE,][]{bernasconi:00, bernasconi:01, bernasconi:02}, \refmark{and the successful Sunrise program as described in the following.}
\subsubsection{Sunrise/IMaX}
\refmark{The Sunrise program} \citep{barthol:11} \revmark{is} a high-altitude, long-duration balloon experiment intended to explore solar phenomena at very high angular resolution \cite[see also ][in this issue]{kleint:15a}. It was a joint effort of Germany, Spain, and the USA. It consisted of a 1.0\,m telescope carrying the Imaging Magnetograph eXperiment \cite[IMaX - \fig{imax_optical.pdf},][]{martinezpillet:11} and an ultraviolet imager \cite[Sunrise Filter Imager (SuFI),][]{gandorfer:11}. The former operated in the highly Zeeman-sensitive Fe\,{\sc i} line at 5250\,\AA{}. Also like FGE, IMaX used a pair of liquid crystal variable retarders to accomplish polarization modulation, but IMaX had many features that qualified it as a quantitative solar polarimeter. IMaX measured combinations of four states that uniquely define the Stokes 4-vector. Compared to a polarimeter measuring pure states with six measurements, the IMaX scheme has significantly higher polarimetric efficiency \citep{deltoroiniesta:00}. Furthermore, the instrument had a nearly continuous expose/readout cycle resulting in a duty cycle of near unity. These features allowed IMaX to perform polarimetry at a S/N appropriate for quantitative determination of the magnetic field vector. IMaX also used a tunable lithium niobate Fabry-P\'erot filter in double pass so that it achieved a spectral resolution of 85\,m\AA{} \revmark{at 5250\,\AA{}}. In typical full-Stokes polarimetry mode it sampled five wavelengths throughout the line profile, each with a S/N of about 1000.
\colfig{imax_optical.pdf}{The optical layout of the IMaX instrument on the Sunrise balloon program illustrates the simplicity of its design. Having two focal planes, it may operate as either a dual-beam polarimeter or, by applying a slight de-focus to one of the beams, as a phase-diversity imager. The wavelength isolation is accomplished by a tunable lithium niobate crystalline Fabry-P\'erot etalon in double pass. Diagram taken from \citet{martinezpillet:11}.}
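The notion of polarimetric efficiency invoked above can be made concrete with a small numerical sketch following the definition of \citet{deltoroiniesta:00}. The modulation matrix used here is a generic, balanced four-state example, not the actual IMaX modulation scheme:
\begin{verbatim}
import numpy as np


def polarimetric_efficiencies(O):
    """Efficiencies of a polarization modulation scheme.

    O : (n_states, 4) modulation matrix; row k gives the response of the
    k-th measured intensity to the incoming Stokes vector (I, Q, U, V).
    """
    D = np.linalg.pinv(O)                 # optimal demodulation matrix, (4, n)
    n = O.shape[0]
    return 1.0 / np.sqrt(n * np.sum(D ** 2, axis=1))


a = 1.0 / np.sqrt(3.0)                    # balanced 4-state scheme
O = np.array([[1,  a,  a,  a],
              [1,  a, -a, -a],
              [1, -a,  a, -a],
              [1, -a, -a,  a]])
print(polarimetric_efficiencies(O))       # ~ [1.0, 0.577, 0.577, 0.577]
\end{verbatim}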
Two flights of Sunrise have been realized. The first flight in the north polar region lasted six days in June 2009, and produced outstanding results. High resolution sequences of data of duration up to 30 minutes were obtained of quiet Sun conditions. The Sunrise/IMaX system achieved close to its design angular resolution of about 0.15$^{\prime\prime}${}, i.e. a factor of two better than that of \textit{Hinode}{} SOT/SP.
An example of the science that was made possible by Sunrise/IMaX is illustrated by \fig{deadcalm.pdf}. In that work, \cite{martinezgonzalez:12} identify the presence of small-scale magnetic loops in the photosphere as evidenced by opposite circular polarization patches (upward- and downwardly-directed magnetic fields) connected by a detectable linear polarization patch between them (horizontal component of the field). Such loops were identified in short maps with the \textit{Hinode}{} SOT/SP (see \fig{centenoemerge.pdf}), but the high-resolution and high-precision imaging polarimetry of Sunrise/IMaX for the first time permitted this view of their spatial/temporal organization. This work reveals persistent areas where none of these small-scale loops appear: ``dead calm'' regions of the quiet Sun. The authors interpret this result - that the emergence of small-scale magnetic loops in the quiet Sun is not isotropic - as an important observed property of quiet Sun magnetism that must be explained by proposed mechanisms for its generation (such as a small-scale dynamo).
\colfig{deadcalm.pdf}{Two sequences of polarimetric data from Sunrise/IMaX of durations 22 and 31 minutes are presented. Average longitudinal magnetograms are shown in greyscale. During these sequences, small-scale loops in the photosphere (red circles) appear as evidenced by linear polarization (yellow patches) separating two patches of opposite circular polarization. Dotted circles (``dead calm'') in the image at right delimit areas where there is very little or no emergence of these small-scale loops. Arrows point to regions where several loops appear at the same location in the time series. Figure taken from \citet{martinezgonzalez:12}.}
\subsection{Future Space Missions to Measure Solar Photospheric Magnetic Fields}
\label{sect:future}
\refmark{Perhaps there remain some hidden surprises regarding photospheric magnetic fields that are lurking below our current capability of resolving them because we have certainly not resolved the fine-scale structure of magnetism in the internetwork, and especially in the intergranular lanes. Both theory and simulations \cite[e.g.,][]{voegler:07,rempel:14} and interpretation of observations \cite[e.g.,][]{pietarilagraham:09} suggest that observation of magnetic structure at the dominant scale of photospheric magnetism may be beyond reach with current diagnostic techniques, in recognition of the scattering properties of spectral lines in the photosphere when sampling scales at or below the photon mean free path.} In any event, improvement of adaptive optical systems for the next generation of large ground-based telescopes is proceeding at a rapid pace, so future exploration of photospheric magnetism at very small scales would appear to be most aptly addressed using those facilities.
How then could future space observations contribute uniquely to further the understanding of photospheric magnetism? Little is still known regarding details of the photospheric field at the extreme solar poles. The polar regions are important in that it is believed that they play a crucial role in the progression of the large-scale solar dynamo. Magnetism of the polar regions has been studied extensively using \textit{Hinode}{} \cite[][and \citeauthor{petrie:15a}, this issue]{ito:10,shiota:12}, but spatial resolution of the extreme poles is limited. Vertical fields there are detected mainly through the transverse Zeeman effect, which is not very sensitive to weaker fields, and one is detecting fields well above the physical height corresponding to the $\tau_c = 1$ level at disk center because of foreshortening. The Solar Orbiter mission will fly the Polarimetric and Helioseismic Imager (PHI) in an inclined orbit that will pass within 0.28~AU of the Sun \cite[][this issue]{kleint:15a}. PHI will perform precision polarimetry of the polar regions near closest approach, so measurements of the polar magnetic fields will be obtained from a more favorable viewpoint, albeit only for a brief period.
A new frontier for observational studies of solar magnetism is the chromosphere (see \sect{chromag}). Solar-C is an initiative for a major space mission led by Japan but, like \textit{Hinode}{}, having significant contributions from NASA and ESA. The major scientific thrust for Solar-C is the magnetism of the chromosphere, but interpretation of chromospheric field measurements will require accompanying measurements of fields in the photosphere. The Solar-C concept provides for simultaneous measurements of the field vector using spectral lines forming at several heights from the photosphere through the chromosphere.
In the near term, it is extremely important for the discipline to continue to exploit its excellent resources for photospheric field measurement that are still on orbit. The community should strive to ensure that \textit{SDO}{}/HMI and \textit{Hinode}{} SOT/SP remain operational as long as possible in order to build a synoptic data base of vector magnetic fields through a full solar cycle. One important motivation to sustain these observations is to understand the long-term behavior of non-AR fields: the remnant fields of active regions and also fields apparently not associated with the global dynamo. This is especially important if the most recent extended solar minimum and the present weak maximum herald a major shift in the dynamo activity, for example the beginning of a new Maunder minimum.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 92 |
Q: UrlFetch PUT is sending a Post Request? I am trying to update a document in a MongoDB database using the MongoLab Rest API http://docs.mongolab.com/data-api/.
In the documentation it says to use a PUT request to update documents within a collection with the update operator in the Body of the http request.
So, in accordance to the documentation I try the following:
import urllib
from google.appengine.api import urlfetch

urlinsert = 'https://api.mongolab.com/api/1/databases/ur_coursesniper/collections/classes?apiKey={key}&q={q}'
urlinsert = urlinsert.format(q=querycheck, key=CONFIG["key"])
form_fields = {
    "$push": {"Users": email},
}
form_data = urllib.urlencode(form_fields)
result = urlfetch.fetch(url=urlinsert,
                        payload=form_data,
                        method=urlfetch.PUT)
However, after this block of code executes, the document in the collection is not updated.
The HTTP response I receive is
2015-08-16 10:55:45,129 module.py:812] default: "POST / HTTP/1.1" 200 91
This is perplexing since it says the response is both successful and a POST.
Any ideas on what exactly is going on?
A: str.format() returns the formatted string; it does not update it in place. Thus
urlinsert.format(q=querycheck, key=CONFIG["key"])
has no effect on the value of urlinsert, and the literal query string apiKey={key}&q={q} is sent. Try this instead:
urlinsert = urlinsert.format(q=querycheck, key=CONFIG["key"])
It would be interesting to see what is contained in the body of the response from mongodb - it might contain an error message.
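For example, assuming the standard App Engine urlfetch result object, something like the following would show the status and body returned by the API:
print(result.status_code)  # HTTP status code returned by MongoLab
print(result.content)      # response body; may contain an error message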
Update
Now that you are setting the Content-Type: application/json header you need to send JSON data in the payload; the url-encoded form data built from "$push" : { "Users" : email} is not valid JSON. You can use json.dumps() to convert the form_fields dictionary to a valid JSON string:
import json
from google.appengine.api import urlfetch

urlinsert = 'https://api.mongolab.com/api/1/databases/ur_coursesniper/collections/classes?apiKey={key}&q={q}'
urlinsert = urlinsert.format(q=querycheck, key=CONFIG["key"])
form_fields = {"$push": {"Users": email}}
result = urlfetch.fetch(url=urlinsert,
                        payload=json.dumps(form_fields),
                        method=urlfetch.PUT,
                        headers={'Content-Type': 'application/json'})
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 447 |