# Is there a way to measure the degree of superposition of a quantum state? I am wondering if there is a way to calculate the amount of superposition that a quantum state is in. For example, if I have a $$2$$-qubit quantum system, with basis $$\mathcal{B} = \{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$$, and a state $$|\psi\rangle$$ of this system, I would assume that if such a function that computes the degree of superposition exists, it would be defined as $$f : \mathcal{H} \rightarrow [0,1]$$, and satisfy the following properties: $$f(|\psi\rangle) = 0 \ \text{iff} \ |\psi\rangle \in \mathcal{B} \ \text{(up to global phase)}, \text{and}$$ $$f(|\psi\rangle) = 1 \ \text{iff} \ |\psi\rangle = \frac{1}{\sqrt{4}}(|00\rangle + |01\rangle +|10\rangle + |11\rangle) \ \text{(up to relative phase)}.$$ As well, the following would be true: if $$|\phi\rangle = \sqrt{\frac{1}{10}}|00\rangle + \sqrt{\frac{9}{10}}|01\rangle$$, and $$|\theta\rangle = \sqrt{\frac{2}{10}}|00\rangle + \sqrt{\frac{8}{10}}|01\rangle$$, then $$f(|\phi\rangle) < f(|\theta\rangle)$$. Has such a function been defined as of yet? Thanks for any help! • So, in other words, this function $f$ is a way of measuring how close a state is to an element of the basis? Notice that the function depends on the choice of the basis, so it would not reflect an information about an internal structure of the system, but its relation to the measurement apparatus. That said, I have never seen this done. – Lucas Baldo Jul 9 at 4:40
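For concreteness, one candidate satisfying the listed properties (an illustration only, and basis-dependent as the comment notes — not an established standard measure) is the normalized Shannon entropy of the measurement probabilities $|c_i|^2$ in the chosen basis:

```python
import math

def superposition_degree(amplitudes):
    """Normalized Shannon entropy of |c_i|^2: 0 on basis states,
    1 on the uniform superposition, and basis-dependent by construction."""
    probs = [abs(c) ** 2 for c in amplitudes]
    total = sum(probs)
    probs = [p / total for p in probs]            # normalize the state
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))               # scale to [0, 1]

print(superposition_degree([1, 0, 0, 0]))          # basis state -> 0.0
print(superposition_degree([0.5, 0.5, 0.5, 0.5]))  # uniform superposition -> 1.0
```

This also satisfies the monotonicity example in the question: the state with weights $(0.1, 0.9)$ scores lower than the one with weights $(0.2, 0.8)$, since entropy increases as the distribution flattens.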
{}
## Wednesday, March 18, 2020

### How to Calculate the Median of Exponential Distribution

The median of a set of data is the midway point wherein exactly half of the data values are less than or equal to the median. In a similar way, we can think about the median of a continuous probability distribution, but rather than finding the middle value in a set of data, we find the middle of the distribution in a different way. The total area under a probability density function is 1, representing 100%, and as a result, half of this can be represented by one-half or 50 percent. One of the big ideas of mathematical statistics is that probability is represented by the area under the curve of the density function, which is calculated by an integral, and thus the median of a continuous distribution is the point on the real number line where exactly half of the area lies to the left. This can be more succinctly stated by the following improper integral. The median of the continuous random variable X with density function f(x) is the value M such that:

$$\int_{-\infty}^{M} f(x)\,dx = 0.5$$

Median for Exponential Distribution

We now calculate the median for the exponential distribution Exp(A). A random variable with this distribution has density function f(x) = e^(-x/A)/A for x any nonnegative real number. The function also contains the mathematical constant e, approximately equal to 2.71828. Since the probability density function is zero for any negative value of x, all that we must do is integrate the following and solve for M:

$$0.5 = \int_{0}^{M} f(x)\,dx$$

Since the antiderivative $\int e^{-x/A}/A\,dx = -e^{-x/A}$, the result is that

$$0.5 = -e^{-M/A} + 1$$

This means that 0.5 = e^(-M/A), and after taking the natural logarithm of both sides of the equation, we have:

ln(1/2) = -M/A

Since 1/2 = 2^(-1), by properties of logarithms we write:

-ln 2 = -M/A

Multiplying both sides by A gives us the result that the median M = A ln 2.
Median-Mean Inequality in Statistics

One consequence of this result should be mentioned: the mean of the exponential distribution Exp(A) is A, and since ln 2 is less than 1, it follows that the product A ln 2 is less than A. This means that the median of the exponential distribution is less than the mean. This makes sense if we think about the graph of the probability density function. Due to the long tail, this distribution is skewed to the right, and when a distribution is skewed to the right, the mean typically lies to the right of the median. For statistical analysis this means the mean and median of right-skewed data generally differ; the median-mean inequality makes this precise by stating that the median always lies within one standard deviation of the mean (a bound provable with Jensen's inequality, with a weaker version following from Chebyshev's inequality). As an example, consider a data set recording that a person receives a total of 30 visitors in 10 hours, with a mean wait time of 20 minutes between visitors; the median wait time could nevertheless sit well away from 20 minutes if over half of those visitors came in the first five hours.
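The result M = A ln 2 is easy to check numerically. The sketch below uses an assumed example mean A = 2 and compares the sample median of simulated exponential draws against A ln 2:

```python
import math
import random

A = 2.0  # assumed example mean of the exponential distribution
analytic_median = A * math.log(2)

random.seed(1)
# random.expovariate takes the rate lambda = 1/mean
samples = sorted(random.expovariate(1.0 / A) for _ in range(100_000))
empirical_median = samples[len(samples) // 2]

print(analytic_median)    # A ln 2 ~ 1.386
print(empirical_median)   # close to A ln 2, and below the mean A
```

The empirical median lands near 1.386, visibly below the mean of 2, illustrating the median-mean relationship for this right-skewed distribution.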
{}
# There is a 3-connected 5-regular simple $n$-vertex planar graph iff $n$ satisfies....?

Is there any characterization on the set of integers $$n$$ such that there is a 3-connected 5-regular simple $$n$$-vertex planar graph?

There is a 3-connected 5-regular simple $$n$$-vertex planar graph if and only if $$n=12$$ or $$n \ge 16$$ is even. See Recursive generation of 5-regular graphs by Mahdieh Hasheminezhad, Brendan D. McKay, Tristan Reeves in WALCOM: Algorithms and Computation, eds. Das and Uehara, Lecture Notes in Computer Science, vol 5431, Springer 2009. The number of such graphs is given in OEIS A308489. They use a set of 7 graphs that are irreducible under a system of expansions & reductions and, as is common for contemporary graph theory, computer assistance. E.g., "The program completed execution in 21 seconds. In total, 39621 induced subgraphs were found..."

There are no such graphs when $$n$$ is odd, by the handshaking lemma. Conversely, for all even $$n \geq 226$$, we claim such a graph exists. In particular, given two planar 5-regular graphs $$G$$, $$H$$ each drawn on the surface of a sphere, we can define the 'connected sum' of the graphs as follows:

• remove a small disk (containing one vertex) from the sphere on which $$G$$ is drawn;
• remove a small disk (containing one vertex) from the sphere on which $$H$$ is drawn;
• combine the two resulting hemispheres at their equator.

The resulting graph (which may depend on the chosen vertices) has $$|G| + |H| - 2$$ vertices, and inherits the planarity, 5-regularity, and 3-connectedness of $$G$$ and $$H$$. Now, given an even integer $$n \geq 226$$, we can find integers $$i, j \geq 0$$ such that $$n = 2 + 10i + 58j$$: the Frobenius number of 5 and 29 is $$5 \cdot 29 - 5 - 29 = 111$$, so every integer $$(n-2)/2 \geq 112$$ is representable as $$5i + 29j$$. Then we can construct an $$n$$-vertex graph with the desired properties by taking the connected sum of $$i$$ copies of the icosahedron (12 vertices) and $$j$$ copies of the snub dodecahedron (60 vertices). This leaves finitely many values of $$n$$ to check, namely the even numbers between 14 and 224. • great answer! 
Great that you present this simple construction Jul 22, 2020 at 9:39 • Fun application of the Frobenius problem going from $2+10i+58j$ to 224. Using your construction with the icosahedron and the 16-vertex graph Brendan & collaborators found covers graphs with $n = 2+10i+14j$ vertices and brings the cases to check down to 14, 18, 20, 24, 28, 34, 38, 48. Jul 23, 2020 at 1:13
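A quick brute-force check (a sketch, not part of the cited paper) of which even $n$ the form $2 + 10i + 58j$ reaches; consistent with the Frobenius argument for 5 and 29, the largest even value it misses is 224:

```python
def representable(n):
    """True if n - 2 = 10*i + 58*j for some integers i, j >= 0."""
    target = n - 2
    j = 0
    while 58 * j <= target:
        if (target - 58 * j) % 10 == 0:
            return True
        j += 1
    return False

# Even values the icosahedron/snub-dodecahedron construction does not cover:
gaps = [n for n in range(12, 300, 2) if not representable(n)]
print(gaps)
```

Everything even above 224 is covered, so only finitely many small cases need the separate check described in the comment above.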
{}
# Ubuntu – Removing Unwanted Wine Icon from Launcher

Tags: launcher, shortcuts, wine

I recently tried to install the Logitech Gaming Software (version 9.02.65) in my Ubuntu 18.04 environment. Since Logitech only provides this program for Windows and Mac, I wanted to install it on Ubuntu using Wine 4.0. However, this didn't work out so well (in fact the GUI was completely unusable), so I decided to remove the Logitech software and put it inside a Windows virtual machine. I am now, however, left with an unwanted (and unusable) Logitech Gaming Software link in my Ubuntu launcher. The link is not located in /usr/share/applications, nor is it in ~/.local/share/applications. My question: Where could the Logitech Gaming shortcut be situated and, more importantly, how can I remove it from the launcher?

Sometimes Wine drops a folder in /usr/share/applications/wine and/or ~/.local/share/applications/ with a lot of .desktop entries in it. You could check those, if you haven't already.
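A sketch of how to hunt for the stray entry in the locations mentioned above (the directories searched and the "logitech" match pattern are guesses; adjust to what your entry is named):

```shell
# Search common launcher locations for a stray Wine-generated .desktop file
for dir in "$HOME/.local/share/applications" /usr/share/applications/wine; do
  if [ -d "$dir" ]; then
    grep -ril 'logitech' "$dir" || true   # list matching files, if any
  fi
done
# After deleting the offending .desktop file, refresh the launcher database:
# update-desktop-database "$HOME/.local/share/applications"
```

`grep -ril` recursively lists, case-insensitively, every file whose contents mention the pattern, which is usually enough to locate a leftover `.desktop` entry whatever it was named.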
{}
# What is this integration? This is from the book Introduction to Mechanics by Kleppner D. Is it some typo, or something I don't understand? Does $F\,dt$ come down in front of the integration? • Definitely a "bug" in the book typesetting. – John Alexiou Jul 31 at 12:06 $$\underbrace{\int F dt}_{1\text{ collision}}$$ • the "1 collision" just looks like a description of the integral bounds, so I think it should be $\int_\text{1 collision} F dt$. – Umaxo Jul 31 at 5:42
{}
GATE Questions & Answers of Flexural and Shear Stresses

What is the weightage of Flexural and Shear Stresses in the GATE exam? In total, 7 questions have been asked from the Flexural and Shear Stresses topic of the Solid Mechanics subject in previous GATE papers, with average marks of 1.71.

1. A solid circular beam with a radius of 0.25 m and a length of 2 m is subjected to a twisting moment of 20 kNm about the z-axis at the free end, which is the only load acting, as shown in the figure. The shear stress component $\tau_{xy}$ at point M in the cross-section of the beam, at a distance of 1 m from the fixed end, is

2. A cantilever beam of length 2 m with a square section of side length 0.1 m is loaded vertically at the free end. The vertical displacement at the free end is 5 mm. The beam is made of steel with Young's modulus of 2.0×10^11 N/m^2. The maximum bending stress at the fixed end of the cantilever is

3. For the stress state (in MPa) shown in the figure, the major principal stress is 10 MPa. The shear stress $\tau$ is

4. A haunched (varying depth) reinforced concrete beam is simply supported at both ends, as shown in the figure. The beam is subjected to a uniformly distributed factored load of intensity 10 kN/m. The design shear force (expressed in kN) at the section X-X of the beam is ______

5. A 450 mm long plain concrete prism is subjected to the concentrated vertical loads as shown in the figure. The cross section of the prism is 150 mm × 150 mm. Considering linear stress distribution across the cross-section, the modulus of rupture (expressed in MPa) is ________
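As a worked example for the cantilever question above (a sketch assuming the standard end-loaded cantilever formulas; all numbers come from the problem statement):

```python
# Cantilever, point load P at the tip:
#   tip deflection  delta = P L^3 / (3 E I)
#   max stress      sigma_max = P L c / I, with c = a/2 for a square section
# Eliminating the unknown P: sigma_max = 3 E delta c / L^2
E = 2.0e11      # Young's modulus, N/m^2
L = 2.0         # beam length, m
a = 0.1         # square side length, m
delta = 5e-3    # tip deflection, m

c = a / 2
sigma_max = 3 * E * delta * c / L**2
print(sigma_max / 1e6)  # in MPa
```

The load P drops out entirely, so the maximum bending stress follows directly from the measured tip deflection: 37.5 MPa.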
{}
Volume 22, Issue 3

Relationship Between the Stiffly Weighted Pseudoinverse and Multi-Level Constrained Pseudoinverse

Musheng Wei. J. Comp. Math., 22 (2004), pp. 427-436. Published online: 2004-06. URL: http://global-sci.org/intro/article_detail/jcm/10316.html

• Abstract

It is known that for a given matrix $A$ of rank $r$, and a set $D$ of positive diagonal matrices, $\sup_{W\in D}\|(W^{\frac{1}{2}}A)^\dagger W^{\frac{1}{2}}\|_2=\big(\min_i \sigma_+(A^{(i)})\big)^{-1}$, in which $A^{(i)}$ is a submatrix of $A$ formed with $r = \mathrm{rank}(A)$ rows of $A$, such that $A^{(i)}$ has full row rank $r$. In many practical applications this value is too large to be used. In this paper we consider the case that both $A$ and $W (\in D)$ are fixed, with $W$ severely stiff. We show that in this case the weighted pseudoinverse $(W^{\frac{1}{2}}A)^\dagger W^{\frac{1}{2}}$ is close to a multi-level constrained weighted pseudoinverse, and therefore $\|(W^{\frac{1}{2}}A)^\dagger W^{\frac{1}{2}}\|_2$ is uniformly bounded. We also prove that in this case the solution set of the stiffly weighted least squares problem is close to that of the corresponding multi-level constrained least squares problem.

• Keywords

Weighted least squares, stiff, multi-level constrained pseudoinverse.
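A tiny numerical illustration (a sketch under assumed inputs, not taken from the paper) of the boundedness claim: for a fixed full-column-rank $A$ and one severely stiff weight, $\|(W^{1/2}A)^\dagger W^{1/2}\|$ approaches a finite limit rather than growing with the stiffness:

```python
import math

# Fixed 3x2 matrix of full column rank (an arbitrary example choice)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

def weighted_pinv_frobenius(stiff):
    """Frobenius norm of (W^{1/2} A)^+ W^{1/2} with weights (stiff, 1, 1)."""
    w = [stiff, 1.0, 1.0]
    sq = [math.sqrt(x) for x in w]
    B = [[sq[i] * A[i][j] for j in range(2)] for i in range(3)]   # W^{1/2} A
    # B has full column rank, so B^+ = (B^T B)^{-1} B^T
    g = [[sum(B[k][i] * B[k][j] for k in range(3)) for j in range(2)]
         for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    inv = [[g[1][1] / det, -g[0][1] / det],
           [-g[1][0] / det, g[0][0] / det]]
    # P = (B^T B)^{-1} B^T W^{1/2}
    P = [[sum(inv[i][k] * B[j][k] for k in range(2)) * sq[j] for j in range(3)]
         for i in range(2)]
    return math.sqrt(sum(x * x for row in P for x in row))

for s in (1e0, 1e4, 1e8, 1e12):
    print(f"{s:.0e}  ||P||_F = {weighted_pinv_frobenius(s):.4f}")
```

As the stiffness grows by twelve orders of magnitude, the norm settles to a constant, which is the uniform-boundedness behavior the abstract describes (the limit being the constrained pseudoinverse).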
{}
# The spontaneous voltaic cell

###### Question: [OCR-garbled] The spontaneous voltaic cell: the cathode reaction is: … reaction is: … the cell reaction is: … standard Hg half cell (E° = 0.855 V) … standard Cd half cell (E° = …) … If unknown, leave it blank.

#### Similar Solved Questions

##### Answer each question in 3-5 sentences. How would you help educate people on the importance of prevention? Why do you think mental health issues are so common in adolescents?

##### A child psychologist believes that controlled physical outbursts of anger (like punching a pillow) may improve the mood of young boys with emotional impairment. He believes that the proportion of boys who benefit from the treatment is greater than the proportion of girls. A random sample from each po...

##### Go to the SEC website (www.sec.gov). a) Click on the "Enforcement" tab and then on "Accounting and Auditing." Determine how many AAERs were issued in the current year. b) Open AAER 3824. Briefly summarize the charges against JP Morgan Chase & Co. Was the company fined? ...

##### QUESTION 19: A multiple-choice quiz has 100 questions, each with several possible answers of which only 1 is correct. Use the Normal approximation to find the probability that sheer guesswork yields from 21 to 74 correct answers. Options: 0.1303, 0.32, 0.5497, 0.8697
##### [-/1 Points] DETAILS SCalcET8 3.1.060. At what point on the curve … is the tangent line parallel to the line … ? Illustrate by graphing the curve and both lines.

##### What skill does a student exemplify if he used the word "dad" instead of using "father"?

##### [OCR-garbled] C(t) = … measures the concentration of a drug in a person's system t hours after the drug is administered. When does the concentration reach its peak? (Round to … decimal places.)

##### Draw the ammonium salt formed in each reaction (A, B, C). (Nothing in Reaction C is cut off; the bond attached to N is just close to the bottom of the screen.) Reaction A. … Reaction B. HCl … Reaction C. NH, HCl …
##### If your decision in part (4) was to reject, perform a Tukey test to determine which pairwise means are significantly different using a familywise error rate of α = 0.05. 5.a) How many pairwise comparisons are possible among the treatment levels? 5.b) Compute x̄1, x̄2, x̄3, and x̄4. 5.c) Compute the test statistic, q, for each pairwise comparison. 5.d) Write the null hypothesis and the alternative hypothesis for each pairwise comparison. 5.e) What is your decision about the null hypotheses? Explain.

##### The wave of a plucked, taut wire is represented by the function y(x, t) = 3/[(x − 2.0t)² + 1]. a) Sketch the amplitude of the wave pulse as a function of position for t = 0 s, t = 1.0 s, t = 2.0 s. b) The displacement of a wave is given by the expression y(x, t) = 15cos(1.0x − 100xt). T...

##### Suppose there are n identical firms in the market for plums. Each firm's cost function is given by C(q) = 25 + q², where q represents the amount that an individual firm will produce. Also, the market demand for plums is given by P = 100 − 2Q, where Q is the total amount of the good produced by all t...
##### SORU-5: Select the correct name of the compound whose structure is shown. A) 2,5-Diethyl-6-methyloctane B) 6-Ethyl-3,7-dimethylnonane C) 4,7-Diethyl-3-methyloctane D) 4-Ethyl-3,7-dimethylnonane E) More than one of these answers

##### Which of the following is not a broad key principle of effective corporate governance articulated in the 2010 report of the NYSE? a) Independence and … are necessary attributes of board members; however, companies also must strike the right balance in the appointment of expert...

##### Question 1 (16 points): Two long horizontal parallel wires are separated by …. The wire at the top carries a current of 15 A, and the wire at the bottom carries a current of 5 A. If the currents are in the same direction and pointing to the right, find (a) the magnitude and direction of the magnetic field at a point between the two wires … from the top wire, and (b) the magnitude and direction of the force per unit length acting on the wire at the bottom.

##### [OCR-garbled] Graph the following function: y = … Choose the best graph.
##### Find the accumulated present value of an investment over a …-year period if there is a continuous money flow of $9,000 per year and the interest rate is 9% compounded continuously.

##### QUESTION 11: What pin number is being pointed to by the arrow?

##### When we conclude that β1 = 0 in a test of hypothesis or a test for significance of regression, we can also conclude that the correlation, ρ, is equal to ______.

##### Let A be a 5 × 10 matrix and let B be a 10 × 5 matrix. Then: the matrix AB has the size 5 × 5 / the multiplication is not defined / the matrix AB has the size 15 × 15 / the matrix AB has the size 10 × 10 / the matrix AB has the size 10 × 5 / the matrix AB has the size 5 × 10

##### Find $\iint_R \dots \, dA$, where R is the parallelogram enclosed by the lines 4x + y = 0, 4x + y = 2, 2x − 5y = 1, 2x − 5y = 9.

##### Exercise 20-4: The following facts apply to the pension plan of Boudreau Inc. for the year 2017: Plan assets, January 1, 2017; Projected benefit obligation, January 1, 2017; Settlement rate; Service cost; Contributions (funding); Actual and expected return on plan assets; Benefits paid to retirees $490,000 ...

##### Use the Counting Rule for Permutations formula to calculate the total number of permutations when the professor is picking three students from the original group of five and the order matters. 46. If the professor picks three students at random from the original group of five, what is the probability that Bob will present on Monday, Ed will present on Wednesday, and Chun will present on Friday? Give your answer as a fraction.

##### Choose the best answer to each of the following. Explain your reasoning with one or more complete sentences. Where is most of the $\mathrm{CO}_{2}$ that has outgassed from Earth's volcanoes? (a) in the atmosphere (b) in space (c) in rocks

##### Question 28: What is the 4-bit (straight) binary equivalent of the decimal 13? [2 marks] 1011 / 0111 / 1101 / 1110

##### Financial statement data for Pat's Pigpens, Inc. are given below. All figures are in dollars. Use this data to construct an Income Statement for the year ending December 31, 2018, and use your constructed statement to answer the following 4 questions. Advertising; Beginning of year inventory; Depre...
##### If L is a linear operator and x ∈ Ker(L), then L(v + x) = L(v) for all v ∈ V. Select one: True / False

##### Switch the order of integration of $\iint \dots xy^2 \, dy\,dx$. Do not evaluate the integral.

##### L&D Scenario: At 0830 hrs Leanne arrived by car to your unit, with her husband George and her mother Catherine. She states the membranes ruptured at 0800 hrs and she is feeling uncomfortable with contractions that started at 0500 hrs. She is G1 P0 A0; her LMP was April 2nd, 2019. The ultrasound ...

##### SITUATION 1: Direct evidence of Newton's universal law of gravitation was provided by the renowned experiment of Henry Cavendish (1731-1810). In the experiment, masses of objects (treated as samples) were determined by weighing, and the measured force of attraction was used to calculate the density of the earth. The values of the earth's density (in g/cm³), in time order by row, are: 5.36 5.29 5.58 5.65 5.57 5.53 5.62 5.29 5.44 5.34 5.79 5.10 5.27 5.39 5.42 5.47 5.63 5.34 5.46 5.30 5.75 5.68 5.85. (a) Determine th...

##### A 643-nm thick soap film (n = 1.33) that is surrounded by air on both sides is illuminated with white light. Which wavelength of yellow light will not be reflected by this film due to destructive interference? The yellow region of the visible spectrum spans the range of wavelengths from 565 - 580 nm...

##### The Morris Corporation has $400,000 of debt outstanding, and it pays an interest rate of 8% annually. Morris's annual sales are $2 million, its average tax rate is 30%, and its net profit margin on sales is 3%. If the company does not maintain a TIE ratio of at least 4 to 1, its bank will refuse...

##### How do you convert 100 mm Hg to Pa?

##### Find the magnitude and the direction of the resultant of each of the following systems of forces using geometric vectors. a) Forces of 3 N and 8 N acting at an angle of 60 degrees to each other. Please help me with this question. I don't understand the wording, so I don't know how to graph thi...

##### What is the product in the following reaction? … + 2 equiv. HBr

##### Solve the following problem: n = 26; j = 0.024; PMT = 5219. PV = ? PV = $… (Round to two decimal places.)

##### Issued 200 credit to Babchap Inc. for defective merchandise shipped on 8/23. What is the journal entry?

##### Graph the family of polynomials in the same viewing rectangle, using the given values of $c$. Explain how changing the value of $c$ affects the graph. $P(x)=x^{4}+c ; \quad c=-1,0,1,2$

##### Solve the following for 0° ≤ θ ≤ 360°. (3 marks each) a) sin … = 0 b) cos θ = 0.276 …

##### BIO-340 Activity 5: Hardy-Weinberg, chi-square analysis (page 1 of 2). [OCR-garbled] The following data regarding the USA population was obtained from a previous BIO 340 class … Determine and provide the "Expected" frequency … Tasters (p) … Non-tasters (q) … Allele frequency …
{}
# Four short links: 24 June 2016

Science Fiction Economics, Behavioural Economics, Neural Recording, and Sensitive Alexa

June 24, 2016

1. Bladerunner Futurism: Dr Floyd's PicturePhone call in "2001: A Space Odyssey" cost $1.70 for a 90-second call. (This is substantially cheaper than the $9-a-minute that the PicturePhone cost when the system first launched.) By comparison, Deckard's 30-second call to Rachael costs $1.25.
2. AI in Apple and Google: One way to see the whole development of computing over the past 50 years is as removing questions that a computer needed to ask, and adding new questions that it could ask. […] Apple has been making computers that ask you fewer questions since 1984. […] Since buying PA Semi in 2008 (if not earlier), Apple has approached the design of the SOCs in its devices as a fundamental core competence and competitive advantage […]. It's not clear whether Apple looks at AI in the same way.
3. The 2016 Behavioural Economics Guide — large, comprehensive, and useful guide for designers and product managers.
4. Physical Principles for Scalable Neural Recording — Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. We also study the physics of powering and communicating with microscale devices embedded in brain tissue.
5. Alexa Learning Emotions — we've been swearing at our computers for years. Now they'll know.
{}
# Homework Help: Physical Chemistry Question (work done during decomposition)

1. Mar 15, 2010

### Paint

1. The problem statement, all variables and given/known data

A sample consisting of 1.0 mol of calcium carbonate CaCO3(s) was heated to 800°C, when it decomposed (CaCO3 → CaO + CO2). The heating was carried out in a container fitted with a piston which was initially resting on the solid. Calculate the work done during complete decomposition at 1.0 atm. What work would be done if instead of having a piston the container was open to the atmosphere?

2. Relevant equations

Expansion work against constant external pressure: $w = -p_{\text{ex}}\Delta V$

3. The attempt at a solution

Ok, the textbook gives an example, so I tried following that. Because $V_f \gg V_i$, and $V_f = nRT/p_{\text{ex}}$, then $w = -p_{\text{ex}} \times nRT/p_{\text{ex}} = -nRT$ (I'm assuming n is the number of moles of CO2?). 1 mole of CaCO3 makes 1 mole of CO2, so plugging in numbers, I get 8.9 kJ, although I don't use the 1 atm pressure at all, so I'm thinking I'm doing the second part of the question first. Any help would be appreciated.
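A quick numeric check of the attempt above (assuming ideal-gas CO2 at T = 800 °C ≈ 1073 K). Note that the external pressure cancels precisely because $V_f = nRT/p_{\text{ex}}$, which is why the 1 atm never enters the arithmetic:

```python
# Expansion work against constant external pressure, with Vf >> Vi:
#   w = -p_ex * dV = -p_ex * (n R T / p_ex) = -n R T
R = 8.314          # gas constant, J/(mol K)
T = 800 + 273.15   # decomposition temperature, K
n = 1.0            # mol of CO2 produced per mol of CaCO3

w = -n * R * T
print(w / 1000)    # kJ; about -8.9 kJ (work done by the gas on the surroundings)
```

The magnitude matches the poster's 8.9 kJ; the negative sign indicates work done by the system on the surroundings as the CO2 pushes the piston (or atmosphere) back.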
# Testing the influence of testosterone administration on men’s honesty in a large laboratory experiment

## Abstract

The literature on the impact of testosterone on decision-making is growing, with several reports of economically relevant outcomes. Similar to Wibral et al. (2012), we investigate the effects of exogenous testosterone administration on deception in a double-blind placebo controlled study. Participants (N = 242) were asked to roll a die in private and were paid according to their reported roll, which creates the opportunity to lie about the outcome to increase earnings. We find evidence for self-serving lying in both treatment and control groups and a statistically insignificant negative effect (d = −0.17, 95% CI[−0.42, 0.08]) indicating more honest behavior (i.e., lower reports) following testosterone administration. Although insignificant, the direction was the same as in the Wibral et al. study, and the meta-analytic effect of the two studies demonstrates lower reporting (i.e., more honesty) following testosterone (vs. placebo) administration, significant at the 0.05 level (d = −0.27, 95% CI[−0.49, −0.06]). We discuss how our results and methodology compare with Wibral et al. and identify potential causes for differences in findings. Finally, we consider several plausible connections between testosterone and lying that may be further investigated using alternative methodologies.

## Introduction

Lying plays an important role in interpersonal relationships and many types of economic transactions, as it can create strategic advantages from informational asymmetries.
Investigations of the determinants of lying have recently attracted widespread attention, and include research on the roles played by other-regarding preferences1, social and cultural norms2,3, the size and nature of incentives4,5,6, the likelihood and costs of detection7, performance in an antecedent competition8, the opportunity for self-justification or self-signalling9,10,11, and the role of individual differences, and gender in particular12,13. Deception is a part of the behavioral repertoire of many animal species14,15,16. The understanding of the biological foundations of deceptive behavior or lying in humans, however, is limited. Functional Magnetic Resonance Imaging (fMRI) studies suggest that deception is associated with increased activation in brain regions involved in socio-cognitive processes, such as the right temporo-parietal junction, precuneus and anterior frontal gyrus, and executive functions, such as the anterior cingulate cortex and amygdala17,18,19,20. In addition, two studies reported that intranasal administration of the neuropeptide oxytocin promotes group-serving dishonesty21 and decreases the ability to detect lies told by members of the opposite sex22. However, it should be noted that several methodological reviews have recently challenged the validity of the intranasal oxytocin literature, casting uncertainty over these findings23,24,25. The male sex steroid hormone testosterone plays a central role in physical development, and has been shown to have considerable psychological effects, such as on mood in hypogonadal men26,27 and cognition28,29,30,31. There are also several reports documenting the hormone’s impact on decision making in a variety of economically important contexts, such as financial risk taking32, asset trading33,34,35, and economic games assessing trust, reciprocity, and cooperation36,37,38.
While much of testosterone behavioral research has focused on antisocial behaviors, such as aggression39,40, testosterone has also been shown to promote prosocial behavior in certain contexts, such as increasing fair bargaining behavior38. A common explanation for testosterone’s promotion of prosocial behavior in some contexts and antisocial behavior in others is that testosterone may increase the desire for social status and thus promotes status seeking behavior41,42,43. Along this line of argumentation, lying is a socially complex behavior that can affect social status. Hence, testosterone may impact lying in ways that increase social status, even at an economic cost. Consistent with this notion, a study of Dutch females who were administered testosterone before playing bluff poker found that the participants who received testosterone were less likely to bluff and more likely to call bluffs44. The authors argued that while random bluffing was the payoff maximizing strategy in the game, exhibiting dishonesty was harmful for the player’s social status. In a study closely related to our own, Wibral et al.45 investigated the influence of testosterone administration on lying with a die-roll task (originally introduced in Fischbacher & Heusi4) — an active behavioral measure of deception that has also been shown to predict dishonest behavior in the field46. In this study, German male participants (N = 91) were randomly administered testosterone or placebo under a double-blind exogenous administration protocol and were given monetary incentive to lie without possibility of being discovered. Wibral et al. found that testosterone, in comparison to placebo, significantly reduced deception. The authors speculated that this decrease in lying was caused by testosterone’s effects on pride and self-image, two psychological constructs that are related to status concerns but do not require the actualization of status outcomes to impact behavior. 
The current study aims to further test the robustness and generalizability of the findings of Wibral et al., for the following reasons. First and foremost, recent large scale investigations have repeatedly demonstrated the importance of building a robust epistemological foundation that allows science to progress cumulatively47. While an encouraging fraction of laboratory economics experiments can be successfully replicated, a considerable proportion of significant effects either cannot be replicated or the replicated effect size is of a smaller magnitude48. It should be noted that this experiment is not a direct replication of the Wibral et al. study. Although the overall experimental design is similar, several methodological modifications (detailed later in Differences from Wibral et al.45) were purposefully made to increase our likelihood of detecting behavioral effects from testosterone administration. Such iterative methodological changes, along with employing ample sample sizes, are important for testing the robustness and generalizability of the effect. Second, although the effect reported by Wibral et al. is seemingly strong and with a relatively small p-value (2-sided t-test, t(89) = 2.65, p = 0.01), the die-roll task was part of an experimental battery comprising 11 tasks - a common research practice in behavioral endocrinological research that is aimed at maximizing the knowledge gained from each participant undergoing a pharmacological treatment49. The Bonferroni corrected p-value is p = 0.11, which means that the statistical evidence was not overwhelming. Finally, while the sample size (N = 91) was larger than previous testosterone administration studies, the number of participants who faced an opportunity to lie was effectively smaller, due to the random nature of the task. This is because participants whose die roll outcome is high face no incentive to misreport.
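The Bonferroni arithmetic above is easy to verify; a minimal sketch (the per-test p of roughly 0.01 follows from the reported t-statistic, and the count of 11 tasks is from the text — nothing here uses the study's raw data):

```python
def bonferroni(p_value, n_tests):
    """Bonferroni correction: scale the p-value by the number of tests, cap at 1."""
    return min(1.0, p_value * n_tests)

# t(89) = 2.65, two-sided, corresponds to p of roughly 0.01;
# the experimental battery comprised 11 tasks:
print(round(bonferroni(0.01, 11), 2))  # -> 0.11
```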
To this end, we conducted a double-blind placebo-controlled investigation of exogenous testosterone’s effect on the die-roll task, in a sample of N = 242 American participants (mostly college students; for full demographic details see Supplementary Materials S1). Our sample size is over 2.5 times larger than that of the Wibral et al.45 study - a difference in magnitude in line with the “small telescope” heuristic, which provides the statistical power to test whether the original study was underpowered to detect the reported effect size50. As in the Wibral et al. study, participants were seated privately in cubicles without the possibility of observation by researchers or other participants. They were each given a 6-sided die, a pen, and a slip of paper, and instructed via computer to roll the die and report the outcome both on the slip of paper and into the computer, thus earning them the dollar amount of what they reported. As in the original study, the task was part of an experimental battery. Given the Wibral et al. report, we hypothesized that participants who received testosterone would be more honest (i.e., report lower outcomes compared to placebo).

## Methods

### Participants

Males over the age of 18 (mean = 23.65, SD = 7.24), mostly college students, were recruited via e-mail and posters to participate in an experiment on testosterone and economic decision making at the Center for Neuroeconomics Studies, Claremont Graduate University. 125 participants were administered testosterone gel and 118 were administered a placebo gel. One participant who was administered the placebo gel left the experiment before participating in the die-roll task and was therefore excluded from all analyses, bringing the total number of participants used in analysis to N = 242. The institutional review boards of Caltech and Claremont Graduate University approved this study, all participants gave informed consent, and no adverse events occurred.
The study was performed in accordance with the guidelines set forth by both IRBs. Descriptive statistics of the participants are presented in Supplementary Materials Table S1.

### Procedure

On each experimental day there were two sessions, one in the morning and one in the afternoon. The morning session lasted from 9:00 am to 9:45 am, and the afternoon session lasted from 2:00 pm until roughly 4:15 pm. Participants provided 4 saliva samples, one in the morning and three in the afternoon. Participants completed the die roll task immediately before the 4th saliva sampling, which took place on average at 4:17 pm (SD = 12.2 minutes), and were dismissed shortly thereafter. Chronologically, participants arrived at the laboratory in the morning in groups of 12 or 16, whereupon they were given an informed consent form and signed it upon assent. They then proceeded to a separate room where their hands were scanned (digit ratio is a purported measure of pre-natal testosterone exposure51) and facial photographs were taken (facial characteristics are associated with testosterone levels52). Next, they went to another room where they completed brief demographic and mood surveys in randomly assigned private cubicles. The private cubicles had a desk, computer, keyboard, monitor, and mouse, and all activity on the computer or desk was out of sight of any other participant or researcher. A saliva sample was taken at the cubicles to assess baseline testosterone levels, the first of a total of 4 samples taken for each participant (the three others were taken during the afternoon session).
Participants then proceeded to another separate room in groups of 2–6 where they were given a small paper cup containing either 10 g of topical testosterone 1% (2 × 50 mg packets Vogelxo® by Upsher-Smith) or a volume equivalent of an inert placebo of similar texture and viscosity (80% alcogel, 20% Versagel®) under a double-blind protocol (the paper cups were filled by the lab manager, who did not interact with the participants or reveal their contents to the research assistants). Participants were instructed to remove their shirts and self-apply the entirety of the cup’s contents to their shoulders, upper arms, and chest, as demonstrated by a research assistant. Participants were also instructed to not put their shirts back on until the gel had fully dried. Following application of the gel, all participants were asked to avoid touching any part of their body before washing their hands, and were then brought into an adjacent restroom in order to thoroughly wash their hands with warm water and soap. Participants were then given a strict set of instructions (which were also in the informed consent and recruitment materials), both verbally to the group and on a printed hand-out given to each participant, of what to do preceding the afternoon session and for the next 23 hours. Participants were told to refrain from bathing or any activities that might cause excessive perspiration, not to eat after 1:00 pm (in order to produce high quality saliva samples), and to return to the lab by 1:55 pm. Participants were then dismissed from the laboratory until the afternoon session. They were also told to abstain from any skin-to-skin contact with females, as per the recommendations of the testosterone gel manufacturer. A researcher contacted each participant via text message shortly before 1 pm to remind them not to eat any more and that they were only allowed to consume water before the afternoon session.
Upon return for the afternoon session, a researcher verbally confirmed with each participant whether they had adhered to the guidelines, and no participants admitted noncompliance. Participants were not allowed to drink water for the 10 minutes preceding a saliva sample, which was enforced via observation by researchers. Saliva samples were also inspected for abnormalities, e.g. whether a sample was dark from smoking or oral bleeding, and any such samples were marked in an experimental log for monitoring and potential exclusion. For the afternoon session, participants returned to the same private cubicle they had used in the morning session. They then provided a second saliva sample at 2:05 pm. In each cubicle there were also a standard 6-sided die, slip of paper, and pen, which were used in the die-roll task. Upon arrival (with no incidents of lateness) participants took part in a battery of seven behavioral tasks that included math-based competitions, risk preference questionnaires, the cognitive reflection test, and others as part of another experiment. The third saliva sample was taken at 3:15 pm, and the fourth sample was taken at 4:15 pm, with the die task immediately preceding the fourth sample. Once the researcher had collected the reported rolls from all participants, the participants were paid in cash for their earnings in the study, and then provided their final saliva samples. Through Qualtrics, an online survey platform, participants were given instructions to roll the die on their desk, and to both record the result of the die roll into the survey and onto the slip of paper (see Supplementary Materials S2 for full instructions). The instructions informed participants that they would receive the dollar value of their reported roll - a report of 3 would earn $3, a report of 5 would earn $5, et cetera. The instructions stated that participants could roll the die more than once, but that only their first roll would count.
Once participants had recorded their roll, they brought their slips to the research assistant, who was standing at the far end of the room, on the other side of the cubicle walls that surrounded each participant. This ensured that the roll and the recording of the roll outcome were both done privately.

### Differences from Wibral et al

In this section, we consider the a priori differences between our study design and that of Wibral et al.45, and how these differences may impact results. These differences are summarized in Table 1. First, our research differs from that of Wibral et al. in the loading period of testosterone. Our choice of testing schedule was based on the recommendations of a report by Eisenegger et al.53, which studied the pharmacokinetics of testosterone in healthy young men. The study documented clear elevation in testosterone levels between 3 and 7 hours after topical administration. The Eisenegger et al. report explicitly recommended testing for behavioral effects 7 hours after administration, and noted that peak testosterone levels were at 3 hours after administration. The findings of the Eisenegger et al. report are qualitatively similar to those of Chik et al.54, who also found that a transdermal application of testosterone (of lower dose than Eisenegger et al. or our study) in healthy young men led to peak serum testosterone levels roughly four hours after administration. In the Wibral et al. study, the die roll task took place about 21–24 hours after administration (thus between 18 and 21 hours after peak testosterone levels), whereas in our study it took place roughly 7 hours after testosterone administration, as suggested by Eisenegger et al.53. One reason to be concerned that this methodological difference might cause the attenuation of the behavioral effect is lower treatment potency. Because our study used saliva sampling and Wibral et al. used blood sampling, we cannot directly compare measurements of testosterone levels.
However, we confirmed a significant elevation in testosterone levels in our experiment, and this elevation did lead to behaviorally significant impacts in other tasks31,43. Relatedly, another study found that testosterone administration significantly increased aggression in some participants after only an hour40. Given the pharmacokinetics of testosterone, it is likely that this administration schedule led to higher testosterone levels in our study than in Wibral et al.45, and thus we would expect to see greater treatment potency (though we acknowledge that non-linear dose-dependency cannot be entirely ruled out). It should be noted, however, that the Eisenegger et al. study stopped sampling saliva after 7 hours, and more information is needed on the pharmacokinetics of testosterone over longer time periods as in Wibral et al.45. Second, we differ in the amount of testosterone administered to participants. Whereas Wibral et al.45 used 50 mg, and the Eisenegger et al.53 study used 150 mg of topical testosterone gel, we decided to use 100 mg of testosterone gel. Our reasoning for using a larger dosage than Wibral et al. is that we wanted to increase the potency of our treatment in order to increase our probability of detecting the behavioral effects of testosterone. However, we did not increase the dose up to 150 mg, as in Eisenegger et al.53, in order to maintain ecological validity: 50 mg and 100 mg, but not 150 mg, are typical dosages indicated by prescription guidelines provided by the manufacturer of Vogelxo® (the maximum recommended dose is 100 mg, with the advice to begin all patients at 50 mg daily for 14 days and adjust the dose upwards if serum testosterone levels are measured to still be below the normal range). The Eisenegger et al. report also notes that the pharmacokinetic data found in their study are qualitatively similar to those found by Chik et al.54, who studied the effects of 50 mg of testosterone in healthy young men.
This suggests that the pharmacokinetics of the intermediate dose of 100 mg are likely to be similar as well. Third, in the Wibral et al.45 study participants were paid the monetary value of their reported rolls of 1–5, but paid 0 for a reported roll of 6, whereas our study used a simpler payment scheme, with payoffs matching the reported roll. Although the salient decision individuals faced in either methodological design is essentially the same - whether to misreport a private die roll in order to increase earnings - this change does modify, to some degree, the stakes of the game - as in our study the worst a participant can do is earn $1, as opposed to nothing, and therefore the incentive to lie may be reduced. However, as a meta-analysis of the die-roll task found no differences in reporting even when the differences in stakes are 500 times larger55, we find it unlikely that this difference made a substantial impact. Fourth, in both studies the die-roll task was a part of an experimental battery (which is common practice in pharmacological experiments), but the batteries consisted of different behavioral tasks. In our experiment, the die roll was the last behavioral task, and it took place immediately following a task where participants made a series of either risky or safe bets, and then were publicly ranked and identified as “winners” and “losers” according to whether they were in the top or bottom half of earners. Previous research has shown that participants who win a competition tend to lie more afterwards in a die-roll game where the reported roll of one participant is subtracted from a shared amount to be split with another participant8. This study differs from our own in that reported rolls in our study did not impact the earnings of other participants. We test for any effect of winning or losing in the risk task, as well as an interaction with treatment, in our Results section.
The antecedent task in Wibral et al.45 was the Devil’s Task, a risk preference measure in which participants either take or reject a series of gambles, wherein winning the gamble adds to a cumulative payoff or losing the gamble eliminates the entire payoff56. Fifth, in our study we used saliva to measure testosterone levels, compared to blood draws in Wibral et al. The advantage of using a saliva test is that it is operationally simpler as it does not require a blood draw. Relevant to the behavioral differences between the studies, blood draws cause some amount of pain and stress as compared to a saliva draw. This pain and stress could lead to an increase in cortisol levels, and the interaction between testosterone and cortisol might be important for deceptive behavior, as it is for aggressive behavior57. However, we did not find evidence that cortisol moderated the relationship between treatment and reported die roll in our study (OLS, coefficient for interaction between treatment and log cortisol levels β = 0.22, 95% CI[−0.43, 0.86], p = 0.51, see Supplementary Materials S6). Last, participants in Wibral et al. were German and our study participants were American. This difference may have non-trivial consequences, as culture may influence perceptions of social status and actions which will lead to its elevation58. It may be the case that money is relatively more important for social status in America than Germany, and thus the same increased drive for social status will produce relatively less honesty among Americans. Propensities for honest behavior indeed vary substantially between cultures. For instance, one study found that in a task where participants were instructed to anonymously report the result of a coin flip (with a material incentive to misreport) only 3.4% of British participants lied, compared to 70% of Chinese participants59.
While we are not aware of any studies that directly compare German and American behavior using similar methodologies, it is worth investigating if the impact of testosterone on behavior may differ according to cultural context in line with broader mechanisms associated with testosterone (e.g., status seeking).

### Measures

#### Saliva Sampling

A total of 4 saliva samples were taken throughout the experimental day, the 1st occurring before treatment administration between 9:25 and 9:34 am, the 2nd upon return to the lab for the afternoon session between 1:55 and 2:15 pm, the 3rd in the middle of the behavioral tasks battery between 3:02 and 3:38 pm, and the 4th at the very end between 4:10 and 4:44 pm. Participants were not allowed to bring food or drink into the laboratory, and the only water break allowed was immediately following the 3rd saliva sample, which occurred an hour before the 4th sample.

#### Hormonal Assays

Saliva samples were immediately stored on dry ice in coolers after collection and shipped to ZRT Laboratories (Beaverton, OR) for assay. Salivary steroids (estrone, estradiol, estriol, testosterone, androstenedione, DHEA, 5-alpha DHT, progesterone, 17OH-progesterone, 11-deoxycortisol, cortisol, cortisone, and corticosterone) were measured by liquid chromatography tandem mass spectrometry (LC-MS/MS) using an AB Sciex Triple Quad 5500. Further details about the assay procedure are available in the Supplementary Materials. A series of one-sample Kolmogorov-Smirnov tests for conformity to a Gaussian distribution (Supplementary Materials Table S2) indicated that all hormonal measurement distributions were better approximated by a Gaussian distribution following a log transformation, as indicated by higher p-values (i.e., the Gaussian normality hypotheses were less likely to be rejected after log-transformations). Thus, all hormonal measurements were log transformed prior to data analysis in order to make their distributions closer to Gaussian.
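The motivation for the log transformation can be illustrated with simulated data (the lognormal "hormone levels" below are synthetic, not the study's measurements): a strongly right-skewed distribution becomes roughly symmetric, and thus closer to Gaussian, after taking logs.

```python
import math
import random

def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(42)
# Synthetic right-skewed "hormone levels" (lognormal-like):
raw = [math.exp(random.gauss(5.0, 0.8)) for _ in range(1000)]
logged = [math.log(x) for x in raw]

# The raw values are strongly right-skewed; the logged values are close to symmetric.
print(round(skewness(raw), 2), round(skewness(logged), 2))
```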
It should be noted that these log transformations only impact our supplementary analysis, which is based primarily on OLS regression and thus benefits from a normal distribution. Three saliva samples (two from sample 2, and one from sample 3) could not be analyzed due to insufficient fluid and were thus excluded from analyses involving these hormonal samples. After experimental session 13 (of 17) it was discovered that some of the pre-treatment baseline saliva samples from both treatment groups had testosterone measures exceeding those expected in healthy young men (i.e., greater than 400 pg/mL). Crucially, hormonal panel data show normal levels of the upstream and downstream testosterone metabolites androstenedione and dihydrotestosterone, respectively, among all participants with these abnormally high samples. We interpret this singular hormonal abnormality as indicating that only the samples themselves were affected, not participants’ physiological levels of testosterone. Following discovery of the contamination of samples, a thorough experimental sterilization protocol was enacted, and the number of samples with abnormally high testosterone was drastically reduced. Ultimately, it was deduced that the testosterone gel had been transferred from common surfaces (e.g., door knobs, mouse pads) onto participants’ hands, and then into the saliva sampling tubes. Full details of the issue and our response are available in the Supplementary Materials S4. In light of the resulting unreliability of measured testosterone levels, we avoid relying on measured hormones for analysis such as a regression of die rolls on testosterone levels; instead, we use treatment groups in our analysis.

#### Digit Ratio and Facial Masculinity

Digit ratio was calculated by first measuring the length of the second and fourth digits from the hand scans taken during the morning session, and then dividing the length of the second digit by that of the fourth.
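The digit-ratio computation just described, together with the two-rater averaging and the 5% senior-review threshold used for these measurements, can be sketched as follows; the pixel lengths are hypothetical, not the study's data:

```python
def rater_mean(m1, m2, review_threshold=0.05):
    """Average two raters' pixel measurements; flag for senior review
    when they disagree by more than the threshold (5% in the study)."""
    avg = (m1 + m2) / 2
    needs_review = abs(m1 - m2) / avg > review_threshold
    return avg, needs_review

# Hypothetical pixel lengths of the 2nd and 4th digits from two raters:
d2, review_d2 = rater_mean(310, 314)
d4, review_d4 = rater_mean(330, 334)
digit_ratio = d2 / d4  # 2D:4D ratio; values near 1.0 are typical
print(round(digit_ratio, 3))  # ~0.94 for these hypothetical lengths
```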
Facial masculinity was defined as the facial width-height ratio, calculated by measuring the distance between the cheekbones (width) and from the upper eyelid to the top of the upper lip (height) and dividing width by height. Both facial and hand measurements were made using a software tool which counted the number of pixels between two points selected on an image. Two trained research assistants independently made each measurement, and the mean of the two measurements was used. Any discrepancies between the two measurements greater than 5% were reviewed by a senior researcher, of which there was only one instance. Further details on these measures are available in the Supplementary Materials.

### Statistical Approach

The first aim of our analysis is to provide a straightforward comparison between our results and those of Wibral et al. To that end, we perform the same set of statistical tests as those reported by Wibral et al., juxtapose their results against our own, and note differences. Our second aim is to make an assessment of the cumulative evidence on the relationship between exogenous testosterone and lying on the die roll task. To do so, we perform a joint analysis using a fixed effects model.

## Results

### Manipulation Check

We observed elevated levels of testosterone and its metabolites (e.g., dihydrotestosterone) in the saliva measurements of the testosterone group but not in the placebo group following gel administration relative to baseline, and average levels of testosterone were significantly higher in the testosterone group than in the placebo group following gel administration. In order to verify that the participants who had received testosterone gel indeed experienced an elevation in their testosterone levels compared to those who received placebo, we submitted the logged testosterone levels to a repeated-measures ANOVA that included treatment status as a between-subject factor, measurement time as a within-subject factor, and the interaction between the two.
The F-ratio of the interaction term was significant (F(3, 716) = 311.58, p < 0.001), indicating unequal mean levels of testosterone across sampling points and treatment status. We further tested for differences in logged testosterone levels between the two treatment groups at each of the four time points of saliva sampling, using 2-sided t-tests. Comparing log testosterone levels in the morning baseline sample across treatment groups yielded a non-significant difference (t(239) = 1.440, p = 0.15). The mean (SD) non-logged testosterone levels in the morning were 480.13 (826.95) pg/mL in the treatment group, and 616.24 (1052.93) pg/mL in the placebo group. However, testosterone levels were significantly higher in the treatment group in the second (t(239) = −18.61, p < 0.001), third (t(239) = −24.70, p < 0.001) and fourth (t(239) = −25.80, p < 0.001) saliva samples, providing a robust and successful manipulation check for our pharmacological testosterone treatment. Mean (SD) non-logged testosterone levels were 11,342.27 (15,270.73) pg/mL in treatment and 249.00 (274.20) pg/mL in placebo at the second saliva sample, 20,609.34 (20,027.17) pg/mL in treatment and 353.36 (570.76) pg/mL in placebo at the third saliva sample, and 9.16 (1.40) pg/mL in treatment and 5.19 (0.92) pg/mL in placebo at the fourth saliva sample. These changes in salivary testosterone levels are in line with other studies which also used topical testosterone and progesterone administration60,61. There were no treatment effects on mood, treatment expectancy, or levels of any other measured hormones, ruling out these potential indirect treatment influences on the task (see Supplementary Materials Table S4 for further details).
### The Influence of Testosterone Administration on Deception

We use three non-parametric measures to compare the two treatment groups: the distribution of rolls via a χ2-test against equal distributions, the mean reported roll via a Mann-Whitney U-test of differences, and the reported proportion of the highest possible roll via a Fisher’s exact test. A χ2-test confirms that both treatment groups exhibited evidence of self-serving lying, as indicated by a right-skewed distribution (χ2-test of even distribution, Testosterone χ2(5) = 26.71, p < 0.001, Placebo χ2(5) = 13.10, p = 0.02, see Fig. 1). This prevalence of lying is in line with other research using the die-roll task4,10,55,62 and demonstrates that participants grasped that they were able to misreport their rolls, presumably in order to increase their earnings, and did so. A Mann-Whitney U-test of differences in the distributions of reported rolls could not reject the null that the distributions are the same (z(240) = 1.57, p = 0.12). A Fisher’s exact test could not reject the null that the proportions of 6’s reported in each treatment group are the same (Fisher’s exact, p = 0.17). We also report the result of a parametric t-test, and use this summary statistic in order to perform a joint analysis of our study together with the results of Wibral et al. Overall, our findings are similar, regardless of whether we use a parametric or non-parametric approach. The average reported die roll in the placebo group was 4.21 (95% CI[3.91, 4.52]) and in the treatment group was 3.94 (95% CI[3.67, 4.22]), which by a 2-sided t-test did not significantly differ (t(240) = −1.31, p = 0.19, Cohen’s d = −0.17, 95% CI[−0.42, 0.08], see Fig. 2).

### Comparison to Wibral et al

In Table 2 we juxtapose the major statistical results from our study and those from Wibral et al.45.
Overall, our results are directionally the same, in that testosterone is associated with a decrease in reported rolls, but we do not find statistical significance by any measure at the 10% level. The key reported measures of Wibral et al. were the Mann-Whitney U-test of different distributions between treatment groups (z(89) = 2.78, p = 0.01) and the Fisher’s exact test of different frequencies of reporting the number with the highest material incentive (p = 0.01), which we contrast with our Mann-Whitney U-test result (z(240) = 1.57, p = 0.12) and Fisher’s exact test result (p = 0.18). In terms of effect size, Wibral et al. found a medium effect size of Cohen’s d = −0.56 (95% CI[−0.97, −0.14]) for the impact of testosterone on the average reported die roll. Based on this effect size, a sample size of 81 would be sufficient for power of 0.80 at the 5% level. A typical finding in the replication literature is that the replicated effect size is smaller than the original, by about a half in psychological experiments47 and a third in experimental economics48. With our sample size of N = 242, we achieved power of 0.88 at the 5% level for detecting 2/3 of the original effect size, or power of 0.68 at the 5% level for detecting one half of the original effect size. Our small effect size of Cohen’s d = −0.17 (95% CI[−0.42, 0.08]) suggests that we would have needed a sample size of N = 878 to detect a significant difference in means for the point estimate, and N = 142 to detect the upper bound of our confidence interval, at the 5% level with power of 0.80.

### Joint Analysis of Studies

To perform the joint analysis we use a fixed effects model with a weighted average of both studies, in line with previous work on replications47,48. Because the system of payoffs in the Wibral et al.
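The sample-size arithmetic above can be sanity-checked with a standard normal approximation to the power of a two-sample t-test. This is a rough sketch, not the exact calculation the authors ran; exact power software, the t-distribution correction, and one- versus two-sided choices will all shift the figures somewhat:

```python
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05, tails=2):
    """Approximate per-group N for a two-sample t-test on means,
    using the normal approximation 2 * ((z_alpha + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / tails)
    z_power = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_power) / d) ** 2

# Smaller effect sizes demand much larger samples (N scales as 1/d^2):
print(2 * n_per_group(0.56))  # total N for the original effect size
print(2 * n_per_group(0.17))  # total N for the effect size observed here
```

For d = 0.5, power 0.80, and a two-sided 5% test this gives about 63 per group, close to the textbook value of 64; the 1/d² scaling is why halving an effect size roughly quadruples the required sample.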
study was such that reporting a 6 earned nothing, we transformed their data to match our own based on the payoffs associated with each report, such that a report of 5 was coded as 6, 6 was coded as 1, 1 was coded as 2, et cetera. Using the fixed effects model, Cohen’s d is equal to −0.27 (95% CI[−0.49, −0.06]) and a test of d = 0 is rejected at the 0.05 level (z(1) = 2.46, p = 0.01, see Fig. 3). The achieved power is >0.999, calculated using G*Power. Further details, including robustness to a random effects specification and a search for comparable studies, are in the Supplementary Materials.

### Effect of Winning or Losing in Risk Task

As discussed previously in Differences From Wibral et al., this task was part of an experimental battery, and the preceding task was a risk task in which participants were divided into winners and losers based on their performance. To test for an association between competition outcome and reported rolls, as well as a potential interaction between competition outcome and treatment, we ran a two-way ANOVA with competition outcome (winning/losing) and treatment (testosterone/placebo) as between-subject factors, together with their interaction. We found no significant effect of competition outcome (F(1, 234) = 0.71, p = 0.401), treatment (F(1, 234) = 3.21, p = 0.074), or their interaction (F(1, 234) = 1.41, p = 0.236) on reported die roll.

### Facial Masculinity and Digit Ratio

To test for the impact of digit ratio and facial masculinity on behavior, we performed a number of ordinary least squares regressions. Regressing die roll on treatment, digit ratio, and the interaction of digit ratio and treatment did not yield any coefficients significantly different from 0 (treatment β = −5.159(5.835), 95% CI[−16.654, 6.335], t(235) = −0.88, p = 0.377; digit ratio β = −4.278(4.352), 95% CI[−12.851, 4.294], t(235) = −0.98, p = 0.327; interaction β = 5.147(6.149), 95% CI[−6.966, 17.260], t(235) = 0.84, p = 0.403).
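The fixed-effects (inverse-variance) pooling described above can be sketched in a few lines. The per-group splits below are assumptions inferred from the reported degrees of freedom, not figures taken from either paper; with them, the sketch reproduces the reported pooled estimate and confidence interval to within rounding:

```python
from math import sqrt

def d_var(d, n1, n2):
    # Approximate sampling variance of Cohen's d for two independent groups
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# (effect size, n_group1, n_group2); the group splits are assumed, not reported
studies = [(-0.56, 46, 45),    # Wibral et al. (N = 91)
           (-0.17, 121, 121)]  # present study (N = 242)

w = [1 / d_var(d, n1, n2) for d, n1, n2 in studies]
d_pooled = sum(wi * d for wi, (d, _, _) in zip(w, studies)) / sum(w)
se = 1 / sqrt(sum(w))
print(f"d = {d_pooled:.2f}, 95% CI [{d_pooled - 1.96 * se:.2f}, "
      f"{d_pooled + 1.96 * se:.2f}], z = {abs(d_pooled) / se:.2f}")
```

This yields d ≈ −0.27 with a 95% CI of roughly [−0.49, −0.06], matching the joint estimate reported above.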
Similarly, regressing die roll on treatment, facial masculinity, and the interaction of treatment and facial masculinity did not yield any coefficients significantly different from 0 (treatment β = −0.018(0.644), 95% CI[−1.287, 1.251], t(227) = −0.03, p = 0.978; facial masculinity β = 0.160(0.282), 95% CI[−0.396, 0.715], t(227) = 0.57, p = 0.572; interaction β = −0.126(0.390), 95% CI[−0.895, 0.643], t(227) = −0.32, p = 0.746). The results remained insignificant when including measurements of other hormonal levels and demographic characteristics, as reported in the Supplementary Materials.

## Discussion

The present study found modest evidence for testosterone reducing self-serving dishonesty. Although statistically insignificant, the effect was in the same direction as in the original study, with our joint analysis indicating a significant effect (p = 0.01) of testosterone administration on the mean die roll report. This should be taken as suggestive, but not conclusive, evidence of a relationship between testosterone and reduced lying, and should encourage further exploration. In this section we discuss the limitations of our study, and then suggest avenues for future research elaborating on the association between testosterone and deception. Despite the advantages of strict experimental control, there are several limitations inherent in the methodology of the current study. First, a general limitation of laboratory studies is that the participants are aware that they are taking part in an experiment. One cannot entirely rule out the possibility that such knowledge might bias behavior (e.g., via an experimenter demand effect) in a way that could interact with the treatment. A second limitation of the particular task at hand (where we do not directly observe the behavior of the participants) is the inability to measure whether, or by how much, each individual participant lied.
This limits, to some degree, the capacity to explore which factors might moderate the behavioral effect. Third, while the use of college students in scientific experiments, particularly in the behavioral sciences, is a widely accepted practice, it comes with specific considerations when generalizing findings to other populations. The possibility of an interaction between our subject population characteristics (young males) and our experimental design is particularly relevant, as levels of testosterone decrease with age after 20 years63 and vary significantly between sexes64. Thus, our sample is not representative of the baseline physiology of the general population. Furthermore, the proposed psychological mechanism through which testosterone impacts lying is social-status concern, and it may be that different demographic groups would not pursue social status goals through honesty on this task. Therefore, we advise that any generalized interpretation of our findings to other populations should be made with caution. Going forward, elucidating the relationship between testosterone and deception requires clear hypotheses of connecting mechanisms, and methodologies that directly test them. The relatively complex chain of reasoning connecting testosterone and deception in the die-roll task proposed by Wibral et al. is that testosterone increases status seeking, and thus elevates the decision maker’s need for pride, which in turn promotes honest behavior. However, as deception is typically associated with material benefits that are also important for one’s social status, it is not a priori clear whether testosterone-induced status seeking should decrease, rather than increase, honesty in this task. Using deception tasks with more obvious social-status interpretations would provide a stronger test of this potential connection. Another potential mechanism by which testosterone impacts die roll reports may be through its influence on impulsivity31.
Greater impulsivity may reduce the propensity of an individual to engage in deliberative processes which either increase or decrease their ultimate willingness to lie. For example, reflection could either increase lying by justifying it as harming no one10, or decrease it by bringing moral considerations to mind. Further experimental work that aspires to explore this issue should make clear predictions about whether lying in the specific behavioral task used is more associated with impulsive or with deliberative decision-making. A final possibility is that testosterone may increase feelings of distrust in participants. Several studies have found a negative relationship between testosterone administration and trust, as measured by reduced offers in the trust game36 and facial trustworthiness evaluations by women65. Moreover, studies of anabolic steroid users found that they are more likely to report paranoia, even after short-term use66,67,68. Even though, both in our study and in Wibral et al., the researchers made efforts to provide privacy for the participants and to assure them of this fact, recent research suggests that lying in the die-roll task is partly driven by fears of detection5. Thus, increased feelings of distrust might lead participants to doubt that the researchers were truly unable to observe their actions, or to be concerned about a hidden or unstated punishment for being observed deceiving. Further research could attempt to address this issue by including survey measures to assess whether participants felt that their actions were truly performed in privacy, or by using methodologies where a lie is completely undetectable. An example of such a methodology is a “mind” game, in which participants think of a number and then roll a die in private, and report whether the rolled number matched the number they thought of, which was used in Kajackaite and Gneezy5.
In summation, we find a statistically insignificant negative effect of testosterone administration on mean reported die roll. When considered jointly with the results of a previous, similar study by Wibral et al., there is overall evidence of a negative association between testosterone and lying. There are a number of plausible mechanisms which might explain this association, but with data only from the die-roll task it is not currently possible to determine which mechanism(s) play a central role. In addition to designing future studies around straightforward tests of these mechanisms, researchers should use large sample sizes and facilitate replication of their findings. Evidence is growing that testosterone impacts behavior in diverse ways, and practices that help build a robust knowledge base on these impacts are paramount for progress.

## References

1. Gneezy, U. Deception: The role of consequences. Am. Econ. Rev. 95, 384–394 (2005). 2. Gächter, S. & Schulz, J. F. Intrinsic honesty and the prevalence of rule violations across societies. Nature 531, 496–499 (2016). 3. Mann, H., Garcia-Rada, X., Hornuf, L., Tafurt, J. & Ariely, D. Cut from the same cloth: Similarly dishonest individuals across cultures. J. Cross. Cult. Psychol. 47, 858–874 (2016). 4. Fischbacher, U. & Föllmi-Heusi, F. Lies in disguise—an experimental study on cheating. Journal of the European Economic Association 11, 525–547 (2013). 5. Kajackaite, A. & Gneezy, U. Incentives and cheating. Games. Econ. Behav. 102, 433–444 (2017). 6. Charness, G., Masclet, D. & Villeval, M. The dark side of competition for status. Manage. Sci. 60, 38–55 (2014). 7. Becker, G. S. Crime and punishment: An economic approach. The Economic Dimensions of Crime. 13–68 (1968). 8. Schurr, A. & Ritov, I. Winning a competition predicts dishonest behavior. Proceedings of the National Academy of Sciences 113, 1754–1759 (2016). 9. Shalvi, S., Dana, J., Handgraaf, M. J. J. & De Dreu, C. K. W.
Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organ. Behav. Hum. Decis. Process. 115, 181–190 (2011). 10. Shalvi, S., Eldar, O. & Bereby-Meyer, Y. Honesty requires time (and lack of justifications). Psychol. Sci. 23, 1264–1270 (2012). 11. Mazar, N., Amir, O. & Ariely, D. The dishonesty of honest people: A theory of self-concept maintenance. J. Mark. Res. 45, 633–644 (2008). 12. Dreber, A. & Johannesson, M. Gender differences in deception. Econ. Lett. 99, 197–199 (2008). 13. Muehlheusser, G., Roider, A. & Wallmeier, N. Gender differences in honesty: Groups versus individuals. Econ. Lett. 128, 25–29 (2015). 14. Trivers, R. The elements of a scientific theory of self-deception. Ann. N. Y. Acad. Sci. 907, 114–131 (2000). 15. Bugnyar, T. & Kotrschal, K. Observational learning and the raiding of food caches in ravens, Corvus corax: is it ‘tactical’ deception? Anim. Behav. 64, 185–195 (2002). 16. Whiten, A. & Byrne, R. W. Tactical deception in primates. Behavioral & Brain Sciences 11, 233–244 (1988). 17. Volz, K. G., Vogeley, K., Tittgemeyer, M., von Cramon, D. Y. & Sutter, M. The neural basis of deception in strategic interactions. Front. Behav. Neurosci. 9, 27 (2015). 18. Lisofsky, N., Kazzer, P., Heekeren, H. R. & Prehn, K. Investigating socio-cognitive processes in deception: a quantitative meta-analysis of neuroimaging studies. Neuropsychologia 61, 113–122 (2014). 19. Spence, S. et al. A cognitive neurobiological account of deception: evidence from functional neuroimaging. Philos. Trans. R. Soc. Lond. B Biol. Sci. 359, 1755–1762 (2004). 20. Garrett, N., Lazzaro, S. C., Ariely, D. & Sharot, T. The brain adapts to dishonesty. Nat. Neurosci. 19, 1727–1732 (2016). 21. Shalvi, S. & De Dreu, C. K. W. Oxytocin promotes group-serving dishonesty. Proceedings of the National Academy of Sciences 111, 5503–5507 (2014). 22. Pfundmair, M., Erk, W. & Reinelt, A.
Lie to me—oxytocin impairs lie detection between sexes. Psychoneuroendocrinology. 84, 135–138 (2017). 23. Lane, A., Luminet, O., Nave, G. & Mikolajczak, M. Is there a publication bias in behavioural intranasal oxytocin research on humans? Opening the file drawer of one laboratory. Journal of Neuroendocrinology 28 (2016). 24. Walum, H., Waldman, I. & Young, L. J. Statistical and methodological considerations for the interpretation of intranasal oxytocin studies. Biol. Psychiatry 79, 251–257 (2016). 25. Nave, G., Camerer, C. F. & McCullough, M. Does oxytocin increase trust in humans? A critical review of research. Perspectives on Psychological Science 10, 772–789 (2015). 26. Pope, H. G., Kouri, E. M. & Hudson, J. I. Effects of supraphysiologic doses of testosterone on mood and aggression in normal men: a randomized controlled trial. Arch. Gen. Psychiatry 57, 133–140 (2000). 27. Wang, C. et al. Long-term testosterone gel (AndroGel) treatment maintains beneficial effects on sexual function and mood, lean and fat mass, and bone mineral density in hypogonadal men. The Journal of Clinical Endocrinology & Metabolism 89, 2085–2098 (2004). 28. Gray, P. et al. Dose-dependent effects of testosterone on sexual function, mood, and visuospatial cognition in older men. The Journal of Clinical Endocrinology & Metabolism 90, 3838–3846 (2005). 29. Newman, M. L., Sellers, J. G. & Josephs, R. A. Testosterone, cognition, and social status. Horm. Behav. 47, 205–211 (2005). 30. Janowsky, J. S., Oviatt, S. K. & Orwoll, E. S. Testosterone influences spatial cognition in older men. Behav. Neurosci. 108, 325 (1994). 31. Nave, G., Nadler, A., Zava, D. & Camerer, C. Single-dose testosterone administration impairs cognitive reflection in men. Psychol. Sci. 28, 1398–1407 (2017). 32. Apicella, C. L., Carré, J. M. & Dreber, A. Testosterone and economic risk taking: A review. Adaptive Human Behavior and Physiology 1, 358–385 (2015). 33.
Nadler, A., Jiao, P., Alexander, V., Johnson, C. J. & Zak, P. J. The bull of Wall Street: Experimental analysis of testosterone and asset trading. Management Science (forthcoming). 34. Coates, J. M. & Herbert, J. Endogenous steroids and financial risk taking on a London trading floor. Proceedings of the National Academy of Sciences 105, 6167–6172 (2008). 35. Cueva, C. et al. Cortisol and testosterone increase financial risk taking and may destabilize markets. Sci. Rep. 5 (2015). 36. Boksem, M. A. et al. Testosterone inhibits trust but promotes reciprocity. Psychol. Sci. 24, 2306–2314 (2013). 37. Van Honk, J., Montoya, E. R., Bos, P. A., van Vugt, M. & Terburg, D. New evidence on testosterone and cooperation. Nature 485, 7399 (2012). 38. Eisenegger, C., Naef, M., Snozzi, R., Heinrichs, M. & Fehr, E. Prejudice and truth about the effect of testosterone on human bargaining behaviour. Nature 463, 356–361 (2010). 39. Hermans, E. J., Ramsey, N. F. & van Honk, J. Exogenous testosterone enhances responsiveness to social threat in the neural circuitry of social aggression in humans. Biol. Psychiatry 63, 263–270 (2008). 40. Carré, J. M., Ruddick, E. L., Moreau, B. J. & Bird, B. M. Testosterone and Human Aggression. (2017). 41. Eisenegger, C., Haushofer, J. & Fehr, E. The role of testosterone in social interaction. Trends. Cogn. Sci. 15, 263–271 (2011). 42. Dreher, J. et al. Testosterone causes both prosocial and antisocial status-enhancing behaviors in human males. Proceedings of the National Academy of Sciences 113, 11633–11638 (2016). 43. Nave, G. et al. Single-dose testosterone administration increases men’s preference for status goods. Nature Communications (forthcoming). 44. Van Honk, J. et al. Effects of testosterone administration on strategic gambling in poker play. Sci. Rep. 6, 18096 (2016). 45. Wibral, M. et al. Testosterone administration reduces lying in men. PLoS ONE 7, e46774 (2012). 46.
Dai, Z., Galeotti, F. & Villeval, M. C. Cheating in the lab predicts fraud in the field: An experiment in public transportation. Management Science (2017). 47. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, 6251 (2015). 48. Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351, 1433–1436 (2016). 49. Zethraeus, N. et al. A randomized trial of the effect of estrogen and testosterone on economic behavior. Proceedings of the National Academy of Sciences 106, 6535–6538 (2009). 50. Simonsohn, U. Small telescopes: Detectability and the evaluation of replication results. Psychol. Sci. 26, 559–569 (2015). 51. Lutchmaya, S., Baron-Cohen, S., Raggatt, P., Knickmeyer, R. & Manning, J. T. 2nd to 4th digit ratios, fetal testosterone and estradiol. Early Hum. Dev. 77, 23–28 (2004). 52. Penton-Voak, I. S. & Chen, J. Y. High salivary testosterone is linked to masculine male facial appearance in humans. Evol. Hum. Behav. 25, 229–241 (2004). 53. Eisenegger, C., von Eckardstein, A., Fehr, E. & von Eckardstein, S. Pharmacokinetics of testosterone and estradiol gel preparations in healthy young men. Psychoneuroendocrinology. 38, 171–178 (2013). 54. Chik, Z. et al. Pharmacokinetics of a new testosterone transdermal delivery system, TDS®-testosterone in healthy males. Br. J. Clin. Pharmacol. 61, 275–279 (2006). 55. Abeler, J., Nosenzo, D. & Raymond, C. Preferences for truth-telling. IZA Discussion Paper 10188 (2016). 56. Slovic, P. Risk-taking in children: Age and sex differences. Child Dev. 37, 169–176 (1966). 57. Montoya, E., Terburg, D., Bos, P. A. & van Honk, J. Testosterone, cortisol, and serotonin as key regulators of social aggression: A review and theoretical perspective. Motiv. Emot. 36, 65–73 (2012). 58. DiMaggio, P. Cultural capital and school success: The impact of status culture participation on the grades of US high school students.
Am. Sociol. Rev. 47, 189–201 (1982). 59. Hugh-Jones, D. Honesty and beliefs about honesty in 15 countries. University of East Anglia Discussion Paper (2015). 60. Mayo, A., Macintyre, H., Wallace, A. & Ahmed, S. Transdermal testosterone application: pharmacokinetics and effects on pubertal status, short-term growth, and bone turnover. The Journal of Clinical Endocrinology & Metabolism 89, 681–687 (2004). 61. Du, J. Y. et al. Percutaneous progesterone delivery via cream or gel application in postmenopausal women: a randomized cross-over study of progesterone levels in serum, whole blood, saliva, and capillary blood. Menopause 20, 1169–1175 (2013). 62. Erat, S. & Gneezy, U. White lies. Manage. Sci. 58, 723–733 (2012). 63. Harman, S. M. et al. Longitudinal effects of aging on serum total and free testosterone levels in healthy men. The Journal of Clinical Endocrinology & Metabolism 86, 724–731 (2001). 64. Torjesen, P. A. & Sandnes, L. Serum testosterone in women as measured by an automated immunoassay and a RIA. Clin. Chem. 50, 678–679 (2004). 65. Bos, P. A., Terburg, D. & van Honk, J. Testosterone decreases trust in socially naive humans. Proceedings of the National Academy of Sciences 107, 9991–9995 (2010). 66. Perry, P. J., Anderson, K. H. & Yates, W. R. Illicit anabolic steroid use in athletes. A case series analysis. Am. J. Sports Med. 18, 422–428 (1990). 67. Pope, H. G. & Katz, D. L. Affective and psychotic symptoms associated with anabolic steroid use. Am. J. Psychiatry 145, 487–490 (1988). 68. Wilson, I., Prange, A. & Lara, P. Methyltestosterone and imipramine in men: conversion of depression to a paranoid reaction. Am. J. Psychiatry 131, 21–24 (1974).

## Acknowledgements

Funding for this work was generously provided by Caltech, Ivey Business School, IFREE, Russell Sage Foundation, University of Southern California, INSEAD, and the Stockholm School of Economics.
Special thanks to the RAs who helped conduct the experiment, Matthias Wibral for his suggestions and help with this project, David Zava for assisting in hormonal assays, and David Kimball for LC-MS/MS assay testing.

## Author information

### Contributions

Conceived the experiment: A.N. and G.N. Performed the experiment: A.H., G.T. and J.B. Analyzed the data: A.H. and G.T. Wrote the manuscript: A.H., G.N. and A.N. All authors reviewed the manuscript.

### Corresponding author

Correspondence to Gideon Nave.

## Ethics declarations

### Competing Interests

The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Henderson, A., Thoelen, G., Nadler, A. et al. Testing the influence of testosterone administration on men’s honesty in a large laboratory experiment. Sci Rep 8, 11556 (2018). https://doi.org/10.1038/s41598-018-29928-z
b) Determine the amount of over- or under-applied overhead. Your answer must clearly state whether the calculated amount is over-applied or under-applied.
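Part (a)'s figures are not shown here, so the numbers below are purely hypothetical placeholders; the sketch only illustrates the mechanics: overhead is applied at the predetermined rate times the actual activity level, and the sign of (applied minus actual) tells you whether overhead was over- or under-applied.

```python
# All figures are hypothetical, standing in for the missing part (a) data.
predetermined_rate = 4.50   # dollars per direct labor hour (assumed)
actual_hours = 20_000       # actual direct labor hours worked (assumed)
actual_overhead = 95_000    # actual overhead cost incurred (assumed)

applied_overhead = predetermined_rate * actual_hours  # 4.50 * 20,000 = 90,000
difference = applied_overhead - actual_overhead

if difference > 0:
    print(f"Overhead is over-applied by ${difference:,.0f}")
else:
    print(f"Overhead is under-applied by ${-difference:,.0f}")
# prints: Overhead is under-applied by $5,000
```

Applied overhead (90,000) falls short of actual overhead (95,000), so with these assumed numbers the answer would be "under-applied by $5,000."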
## Section 15.1 Characteristics of Sound Waves

Like any mechanical wave, a sound wave transports energy and momentum from the source to the detector, not by any transport of material, but by a coupling of motion that causes the wave to form. Unlike a mechanical wave on a string, sound in air is longitudinally polarized, because particles of the air vibrate along the same line as that of the wave travel (see Figure 15.1.1). When sound travels through air, particles of air vibrating in the forward direction press against other particles of air, creating a higher pressure than when the wave was not there, called compression. This gives rise to regions of compression and rarefaction of air. In fact, sound waves in all fluids are longitudinal, since fluids cannot provide a restoring force against the shear stress that a transverse wave traveling in the medium would generate. Sound waves in solids can be both longitudinal and transverse. You will study various types of stress in another chapter.

Both light and sound have physically measurable characteristics and human-perceived characteristics. For instance, the frequency and intensity of sound can be measured by instruments, which provide objective measures of the wave. The perception of frequency is called pitch and the perception of intensity is called loudness; both can be subjective. Generally, a higher frequency sound is perceived to be at a higher pitch than a sound at a lower frequency. But our brain can sometimes interpret a louder sound to be at a higher pitch even when it has the same frequency. Sound of higher intensity is generally perceived as louder, but that also depends on the frequency. We will discuss the human response in another place.
If we have a sinusoidal sound wave, it will have a single frequency $f$ and a single wavelength $\lambda\text{.}$ They are related to the speed $v$ by the fundamental formula of wave motion, $$v = f \lambda.\tag{15.1.1}$$ This simply says that the wave travels a distance equal to the wavelength in time $1/f\text{.}$ By analyzing the vibration of the particles of the medium, we can show that the speed of mechanical waves, including sound waves, through a medium comes from a competition between two opposite tendencies: a restoring force, whose tendency is to bring the particle to equilibrium, and inertia, whose tendency is to maintain the motion. In a one-dimensional system such as a string, the restoring force is provided by the tension in the string $(F_T)$ and the inertia is provided by the mass per unit length of the string $(\mu )\text{.}$ The speed of a mechanical wave in a string was stated in the last chapter to be \begin{equation*} v =\sqrt{\frac{F_T}{\mu}}. \end{equation*} The speed of sound in air is similarly related to the properties of air. The restoring force is provided by the bulk modulus $B$ and the inertia is provided by the mass per unit volume $\rho\text{.}$ The speed of sound in air is therefore given in terms of properties of air by \begin{equation*} v = \sqrt{\frac{B}{\rho}}. \end{equation*} The density of air is not constant; it depends on the temperature and pressure. We quote here the experimental dependence of the speed on temperature. At 1 atm and $0^{\circ}\text{C}\text{,}$ the speed of sound in air is found to be 331 m/s, and at another temperature $t^{\circ}\text{C}\text{,}$ the speed of sound in air at 1 atm is given by the following approximate formula. \begin{equation*} v\approx 331\left(1+1.8\times 10^{-3}t\right)\text{ m/s}.
\end{equation*} Thus at room temperature of $20^{\circ}\text{C}\text{,}$ the speed of sound in air is approximately $343\text{ m/s}\text{.}$ The speed of sound is different in different materials, depending upon their elastic moduli and densities and on the polarization of the wave. While we have only longitudinally polarized waves in liquids and gases, sound waves in solids can also be transverse. Table 15.1.2 gives the speed of sound in some common materials of interest at $25^{\circ}\text{C}$ and $1\text{ atm}$ (source: Kaye and Laby, Tables of Physical and Chemical Constants, 16th edition, 1995). ### Subsection 15.1.1 Sound through Solid Media Since sound is a mechanical vibration, it can travel through any material medium. In a liquid or gas, only longitudinal sound is possible, because fluids do not have a restoring force in the tangential direction; they have a restoring force only against compression. Solids have restoring forces for compression as well as shear. Therefore you will find three polarizations of sound waves in a solid: one longitudinal, and two transverse modes, one for each perpendicular direction. The speeds of longitudinal and transverse waves are different, since the restoring forces are different for them. \begin{equation*} v_{\text{sound}} = \sqrt{\frac{\text{Restoring force per unit area}}{\text{Inertia as given by density}}} \end{equation*} For a uniform isotropic material the two transverse waves have the same speed. Let $Y$ be the Young's modulus, $B$ the bulk modulus, $G$ the shear modulus, and $\rho$ the density of the solid. The speed of longitudinal sound $v_L$ is related to the longitudinal (P-wave) elastic modulus $E\text{.}$ $$v_L = \sqrt{\frac{E}{\rho}}\tag{15.1.2}$$ where $E = B+\frac{4}{3} G\text{.}$ On the other hand, the speed of transverse waves $v_T$ is related to the shear modulus.
$$v_T = \sqrt{\frac{G}{\rho}}\tag{15.1.3}$$ For instance, steel has $Y\approx 215\text{ GPa}\text{,}$ $B \approx 166\text{ GPa}\text{,}$ $G\approx 84\text{ GPa}\text{,}$ and density $7,800\text{ kg/m}^3\text{,}$ therefore the longitudinal and transverse waves travel at different speeds in steel. \begin{equation*} \text{In steel: } v_L \approx 5970\ \text{m/s}; \ \ v_T \approx 3281\ \text{m/s}. \end{equation*} The numbers here are slightly different from those listed in the table because of the temperature dependence of the speed of sound. In general, the shear modulus $G$ is less than the Young's modulus $Y\text{.}$ Hence the speed of transverse waves is less than that of longitudinal waves. Sound waves in solids are used to find defects inside solid materials by non-destructive means. Non-destructive techniques based on the propagation of waves in material media have important applications in medical physics and other engineering fields. For instance, in aeronautics, invisible cracks in the wings of airplanes can be detected even before they become large enough to cause an accident.
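As a quick numerical check of the formulas in this section (a sketch only; the steel moduli are representative textbook values, and the longitudinal speed uses the standard bulk-solid P-wave modulus $B + \frac{4}{3}G$):

```python
from math import sqrt

def v_air(t_celsius):
    """Speed of sound in air at 1 atm, empirical formula from this section."""
    return 331 * (1 + 1.8e-3 * t_celsius)  # m/s

print(v_air(20))  # about 343 m/s at room temperature

# Steel: bulk modulus B and shear modulus G in Pa, density rho in kg/m^3
B, G, rho = 166e9, 84e9, 7800
v_T = sqrt(G / rho)                # transverse (shear) wave speed
v_L = sqrt((B + 4 * G / 3) / rho)  # longitudinal speed via P-wave modulus
print(v_T, v_L)  # roughly 3.3 km/s and 6 km/s
```

Note that $v_L > v_T$, as the text argues, since the longitudinal modulus exceeds the shear modulus for any real solid.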
Teoriya Veroyatnostei i ee Primeneniya (Teor. Veroyatnost. i Primenen.), contents of the issue:

- A. A. Zaikin, Defect of the size of nonrandomized test and randomization effect on the necessary sample size in testing the Bernoulli success probability, p. 417
- V. A. Zorich, Multidimensional geometry, functions of many variables and probability, p. 436
- A. P. Kovalevskii, E. V. Shatalin, Asymptotics of sums of residuals of a one-parameter regression on order statistics, p. 452
- S. V. Nagaev, Local renewal theorems in the absence of an expectation, p. 468
- L. A. Sakhanenko, Asymptotics of suprema of Gaussian fields with applications to kernel density estimators, p. 499
- A. Kagan, Y. Malinovsky, L. Mattner, Partially complete sufficient statistics are jointly complete, p. 542
- Y. Yilmaz, G. Moustakides, X. Wang, Sequential joint detection and estimation, p. 562

Short Communications:

- N. G. Gamkrelidze, On local limit theorem for integer random vectors, p. 579
- I. V. Pavlov, O. V. Nazarko, Optional sampling theorem for deformed submartingales, p. 585
- E. A. Savinov, Limit theorem for maximum of random variables with copulas which are IT-copulas of Student's ${t}$-distribution, p. 594
- A. P. Shashkin, Asymptotic normality of estimates with local averaging for weakly dependent random fields, p. 603
- A. S. Shvedov, Matrix ${t}$-distribution with degree vectors of degrees of freedom, p. 613

In memoriam, p. 620
# Questions about the definition of a Category This is the definition of a category that I'm using Now correct me if I'm wrong, but nothing explicitly in the definition of a category above states that for any two objects $X$, $Y \in \text{Obj}(C)$ there must exist a morphism $f \in \text{Hom}_C(X, Y)$, correct? But under the text describing composition, there must exist a morphism $f \in \text{Hom}_C(X, Y)$ for any two objects $X$, $Y \in \text{Obj}(C)$, because we need that morphism for composition. So there can't be any empty $\text{Hom}_C(X, Y)$ classes, correct? My assertion in the above paragraph would then trivially prove the following. Let $C$ be a category and suppose $f \in \text{Hom}_C(X, Y)$ for some objects $X, Y \in \text{Obj}(C)$, then there exists a morphism $g \in \text{Hom}_C(Y, X)$ Has everything I said above been correct? • Where does it say that there must exist such a morphism? The only existence axiom yields the identity morphism. – Paul K Feb 3 '18 at 14:35 • Indeed, any discrete category contains no arrows between any two different objects. – Patrick Stevens Feb 3 '18 at 14:43 Yes, it may happen that $\operatorname{Hom}_C(X,Y)$ is the empty class. No, that does not invalidate the part about composition. If $\operatorname{Hom}_C(X,Y)=\emptyset$, then it is vacuously true that for every $f\in \operatorname{Hom}_C(X,Y)$ and $g\in \operatorname{Hom}_C(Y,Z)$, we have a composite morphism $g\circ f$. An example is the category of fields and field homomorphisms: If two fields have different characteristic, there is no morphism between them. The axioms just state that there is a mapping $$Hom(X,Y) \times Hom(Y,Z) \to Hom(X,Z)$$ but this does not imply that $Hom(X,Y),Hom(Y,Z)$ and $Hom(X,Z)$ are non-empty. Indeed, if either of $Hom(X,Y),Hom(Y,Z)$ is empty, then the domain of the wanted mapping is empty, so we can take the mapping to be the empty set. Indeed, the empty set is a mapping $\emptyset \to A$ for any class $A$.
(It is also unique, since no other such mappings exist.) For a simple counterexample, take the category $\bf Set$ of sets and functions, and observe that $Hom(\{42\},\emptyset)$ is empty; otherwise we could take a morphism/function $f:\{42\}\to\emptyset$ and have $f(42)\in\emptyset$. Discrete (poset) categories also provide counterexamples. Take $\{0,1\}$ as objects, and only the two identity morphisms $id_0,id_1$. This makes a category, even though $Hom(0,1)=Hom(1,0)=\emptyset$. At most, the composition axiom implies that, if $Hom(X,Y)$ and $Hom(Y,Z)$ are both nonempty, then $Hom(X,Z)$ is also nonempty. This is because we can take two morphisms $f\in Hom(X,Y),g \in Hom(Y,Z)$ and compose them as $g\circ f \in Hom(X,Z)$. $Hom(W,W)$ is always nonempty because of the identity morphism.
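To see concretely that an empty hom-class is harmless, here is a small sketch (my own illustration, not from the thread) of the discrete category on two objects: the composition required by the axiom is a mapping whose domain is empty, so the axiom holds vacuously.

```python
# A discrete category on objects {0, 1}: only identity arrows exist.
# (Illustrative encoding of hom-sets as a dict; my own, not from the thread.)
hom = {
    (0, 0): {"id_0"},
    (0, 1): set(),    # Hom(0, 1) is legitimately empty
    (1, 0): set(),    # Hom(1, 0) is legitimately empty
    (1, 1): {"id_1"},
}

def compose(f, g, X, Y, Z):
    """Composite of f in Hom(X, Y) and g in Hom(Y, Z); here only identities exist."""
    assert f in hom[(X, Y)] and g in hom[(Y, Z)]
    return f  # g is necessarily an identity arrow in this category

# The composition axiom demands a mapping Hom(X, Y) x Hom(Y, Z) -> Hom(X, Z).
# For X = 0, Y = 1, Z = 0 the domain of that mapping is the empty set,
# so the empty mapping satisfies the axiom vacuously:
composable_pairs = [(f, g) for f in hom[(0, 1)] for g in hom[(1, 0)]]
assert composable_pairs == []
```

Identities still compose as required, e.g. `compose("id_0", "id_0", 0, 0, 0)` returns `"id_0"`; nothing in the data forces any non-identity arrow to exist.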
# Definition:Divergence Operator/Physical Interpretation

Let $\mathbf V$ be a vector field acting over a region of space $R$. The divergence of $\mathbf V$ at a point $P$ is the total flux away from $P$ per unit volume.
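This flux-per-unit-volume picture can be checked numerically. The sketch below (an illustration of the definition; the field $\mathbf V(x, y, z) = (x, y, z)$ is my own choice) approximates the outward flux through a small cube around $P$ and divides by the cube's volume; for this linear field the divergence is 3 at every point.

```python
# Divergence as net outward flux per unit volume (illustrative sketch).
# The vector field V(x, y, z) = (x, y, z) has div V = 3 everywhere.

def V(x, y, z):
    return (x, y, z)

def flux_per_volume(x, y, z, a=1e-3):
    """Outward flux of V through a small axis-aligned cube of side a
    centred at (x, y, z), divided by the cube's volume a**3.
    One-point quadrature is used on each face."""
    s = a * a  # area of one face
    flux = (
        (V(x + a/2, y, z)[0] - V(x - a/2, y, z)[0]) * s    # +x and -x faces
        + (V(x, y + a/2, z)[1] - V(x, y - a/2, z)[1]) * s  # +y and -y faces
        + (V(x, y, z + a/2)[2] - V(x, y, z - a/2)[2]) * s  # +z and -z faces
    )
    return flux / a**3

print(flux_per_volume(0.3, -1.2, 2.0))  # close to 3: uniform net outflow
```

Shrinking `a` further drives the quadrature toward the exact limit that the definition describes.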
# Basic Example

If you are not familiar with the main concepts of Bayesian Optimization, a quick introduction is available here.

In this tutorial, we will explain how to create a new experiment in which a simple function ( $$-{(5 * x - 2.5)}^2 + 5$$ ) is maximized.

Let's say we want to create an experiment called "myExp". The first thing to do is to create the folder exp/myExp under the limbo root. Then add two files:

- the main.cpp file
- a python file called wscript, which will be used by waf to register the executable for building

The file structure should look like this:

```
limbo
|-- exp
    |-- myExp
        +-- wscript
        +-- main.cpp
|-- src
...
```

Next, copy the following content to the wscript file:

```python
from waflib.Configure import conf


def options(opt):
    pass


def build(bld):
    bld(features='cxx cxxprogram',
        source='main.cpp',
        includes='. ../../src',
        target='myExp',
        uselib='BOOST EIGEN TBB LIBCMAES NLOPT',
        use='limbo')
```

For this example, we will optimize a simple function: $$-{(5 * x - 2.5)}^2 + 5$$, using all default values and settings. If you did not compile with libcmaes and/or nlopt, remove LIBCMAES and/or NLOPT from 'uselib'.
To begin, the main file has to include the necessary files:

```cpp
#include <iostream>
// you can also include <limbo/limbo.hpp> but it will slow down the compilation
#include <limbo/bayes_opt/boptimizer.hpp>

using namespace limbo;
```

We also need to declare the Parameter struct:

```cpp
struct Params {
    struct bayes_opt_boptimizer : public defaults::bayes_opt_boptimizer {
    };

    // depending on which internal optimizer we use, we need to import different parameters
#ifdef USE_NLOPT
    struct opt_nloptnograd : public defaults::opt_nloptnograd {
    };
#elif defined(USE_LIBCMAES)
    struct opt_cmaes : public defaults::opt_cmaes {
    };
#else
    struct opt_gridsearch : public defaults::opt_gridsearch {
    };
#endif

    // enable / disable the writing of the result files
    struct bayes_opt_bobase : public defaults::bayes_opt_bobase {
        BO_PARAM(int, stats_enabled, true);
    };

    // no noise
    struct kernel : public defaults::kernel {
        BO_PARAM(double, noise, 1e-10);
    };

    struct kernel_maternfivehalves : public defaults::kernel_maternfivehalves {
    };

    // we use 10 random samples to initialize the algorithm
    struct init_randomsampling {
        BO_PARAM(int, samples, 10);
    };

    // we stop after 40 iterations
    struct stop_maxiterations {
        BO_PARAM(int, iterations, 40);
    };

    // we use the default parameters for acqui_ucb
    struct acqui_ucb : public defaults::acqui_ucb {
    };
};
```

Here we are stating that the samples are observed without noise (which makes sense, because we are going to evaluate the function), that we want to output the stats (by setting stats_enabled to true), that the model has to be initialized with 10 samples (that will be selected randomly), and that the optimizer should run for 40 iterations. The rest of the values are taken from the defaults. By default limbo optimizes in $$[0,1]$$, but you can optimize without bounds by setting BO_PARAM(bool, bounded, false) in the bayes_opt_bobase parameters.
If you do so, limbo samples random numbers, wherever needed, from a Gaussian centered at zero with a standard deviation of $$10$$, instead of uniform random numbers in $$[0,1]$$ (in the bounded case). Finally, limbo always maximizes; this means that you have to adapt your objective function (for instance, negate it) if you want to minimize.

Then, we have to define the evaluation function for the optimizer to call:

```cpp
struct Eval {
    // number of input dimensions (x.size())
    BO_PARAM(size_t, dim_in, 1);
    // number of dimensions of the result (res.size())
    BO_PARAM(size_t, dim_out, 1);

    // the function to be optimized
    Eigen::VectorXd operator()(const Eigen::VectorXd& x) const
    {
        double y = -((5 * x(0) - 2.5) * (5 * x(0) - 2.5)) + 5;
        // we return a 1-dimensional vector
        return tools::make_vector(y);
    }
};
```

It is required that the evaluation struct has the static function members dim_in() and dim_out(), specifying the input and output dimensions. Also, it should have the operator() expecting a const Eigen::VectorXd& of size dim_in(), and return another one, of size dim_out().

With this, we can declare the main function:

```cpp
int main()
{
    // we use the default acquisition function / model / stat / etc.
    bayes_opt::BOptimizer<Params> boptimizer;
    // run the evaluation
    boptimizer.optimize(Eval());
    // the best sample found
    std::cout << "Best sample: " << boptimizer.best_sample()(0)
              << " - Best observation: " << boptimizer.best_observation()(0) << std::endl;
    return 0;
}
```

The full main.cpp can be found here.

Finally, from the root of limbo, run a build command with the additional switch --exp myExp:

```
./waf build --exp myExp
```

Then, an executable named myExp should be produced under the folder build/exp/myExp.
When running this executable, you should see something similar to this:

```
0 new point: 0.502378 value: 4.99986 best:4.99986
1 new point: 0.503035 value: 4.99977 best:4.99986
2 new point: 0.502521 value: 4.99984 best:4.99986
3 new point: 0.502533 value: 4.99984 best:4.99986
4 new point: 0.502556 value: 4.99984 best:4.99986
5 new point: 0.502585 value: 4.99983 best:4.99986
6 new point: 0.502618 value: 4.99983 best:4.99986
7 new point: 0.502643 value: 4.99983 best:4.99986
8 new point: 0.502646 value: 4.99983 best:4.99986
9 new point: 0.502673 value: 4.99982 best:4.99986
10 new point: 0.502383 value: 4.99986 best:4.99986
11 new point: 0.502262 value: 4.99987 best:4.99987
12 new point: 0.502111 value: 4.99989 best:4.99989
13 new point: 0.501921 value: 4.99991 best:4.99991
14 new point: 0.501679 value: 4.99993 best:4.99993
15 new point: 0.501383 value: 4.99995 best:4.99995
16 new point: 0.501055 value: 4.99997 best:4.99997
17 new point: 0.500751 value: 4.99999 best:4.99999
18 new point: 0.500517 value: 4.99999 best:4.99999
19 new point: 0.500358 value: 5 best:5
20 new point: 0.500256 value: 5 best:5
21 new point: 0.500189 value: 5 best:5
22 new point: 0.500145 value: 5 best:5
23 new point: 0.500114 value: 5 best:5
24 new point: 0.500092 value: 5 best:5
25 new point: 0.500075 value: 5 best:5
26 new point: 0.500063 value: 5 best:5
27 new point: 0.500054 value: 5 best:5
28 new point: 0.500046 value: 5 best:5
29 new point: 0.500039 value: 5 best:5
30 new point: 0.500035 value: 5 best:5
31 new point: 0.50003 value: 5 best:5
32 new point: 0.500027 value: 5 best:5
33 new point: 0.500024 value: 5 best:5
34 new point: 0.500022 value: 5 best:5
35 new point: 0.50002 value: 5 best:5
36 new point: 0.500018 value: 5 best:5
37 new point: 0.500016 value: 5 best:5
38 new point: 0.500015 value: 5 best:5
39 new point: 0.500014 value: 5 best:5
Best sample: 0.500014 - Best observation: 5
```

These lines show the result of each sample evaluation of the $$40$$ iterations (after the random
initialization). In particular, we can see that the algorithm progressively converges toward the maximum of the function ($$5$$) and that the maximum found is located at $$x = 0.500014$$.

Running the executable also created a folder with a name composed of YOUCOMPUTERHOSTNAME-DATE-HOUR-PID. This folder should contain two files:

```
limbo
|-- YOUCOMPUTERHOSTNAME-DATE-HOUR-PID
    +-- samples.dat
    +-- aggregated_observations.dat
```

The file samples.dat contains the coordinates of the samples that have been evaluated during each iteration, while the file aggregated_observations.dat contains the corresponding observed values.

If you want to display the different observations in a graph, you can use the python script print_aggregated_observations.py (located in limbo_root/src/tutorials). For instance, from the root of limbo you can run:

```
python src/tutorials/print_aggregated_observations.py YOUCOMPUTERHOSTNAME-DATE-HOUR-PID/aggregated_observations.dat
```
# CAT Quant Practice Problems

Question: Each side of a given polygon is parallel to either the X or the Y axis. A corner of such a polygon is said to be convex if the internal angle is 90° or concave if the internal angle is 270°. If the number of convex corners in such a polygon is 25, the number of concave corners must be

1. 20
2. 0
3. 21
4. 22

Correct Option: 3
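The answer follows from a turning-angle argument: walking once around a simple rectilinear polygon turns you through 360° in total, with a turn of +90° at each convex corner and -90° at each concave one, so convex minus concave equals 4; hence 25 convex corners force 25 - 4 = 21 concave corners. The sketch below (my own check, not part of the original problem) verifies the count on an L-shaped hexagon by classifying each corner with a cross product.

```python
# Count convex (90 degree) and concave (270 degree) corners of a simple
# rectilinear polygon given in counterclockwise order. Illustrative sketch.

def corner_counts(vertices):
    n = len(vertices)
    convex = concave = 0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        # z-component of the cross product of consecutive edge vectors
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross > 0:
            convex += 1    # left turn: interior angle 90 degrees
        else:
            concave += 1   # right turn: interior angle 270 degrees
    return convex, concave

# L-shaped hexagon, counterclockwise
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(corner_counts(L_shape))  # (5, 1): the difference is 4, as claimed
```

Any larger rectilinear example (a staircase, say) shows the same invariant difference of 4.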
• ### Homotopy Decompositions of Gauge Groups over Real Surfaces(1701.00430) Jan. 2, 2017 math.AT We analyse the homotopy types of gauge groups of principal U(n)-bundles associated to pseudo Real vector bundles in the sense of Atiyah. We provide satisfactory homotopy decompositions of these gauge groups into factors in which the homotopy groups are well known. Therefore, we substantially build upon the low dimensional homotopy groups as provided in a paper by I. Biswas, J. Huisman, and J. Hurtubise. • ### VLT/Magellan spectroscopy of 29 strong lensing selected galaxy clusters(1611.00769) Nov. 2, 2016 astro-ph.CO, astro-ph.GA We present an extensive spectroscopic follow-up campaign of 29 strong lensing (SL) selected galaxy clusters discovered primarily in the Second Red-Sequence Cluster Survey (RCS-2). Our spectroscopic analysis yields redshifts for 52 gravitational arcs present in the core of our galaxy clusters, which correspond to 35 distinct background sources that are clearly distorted by the gravitational potential of these clusters. These lensed galaxies span a wide redshift range of $0.8 \le z \le 2.9$, with a median redshift of $z_s = 1.8 \pm 0.1$. We also measure reliable redshifts for 1004 cluster members, allowing us to obtain robust velocity dispersion measurements for 23 of these clusters, which we then use to determine their dynamical masses by using a simulation-based $\sigma_{DM} - M_{200}$ scaling relation. The redshift and mass ranges covered by our SL sample are $0.22 \le z \le 1.01$ and $5 \times10^{13} \le M_{200}/h^{-1}_{70}M_{\odot} \le 1.9\times10^{15}$, respectively. We analyze and quantify some possible effects that might bias our mass estimates, such as the presence of substructure, the region where cluster members are selected for spectroscopic follow-up, the final number of confirmed members, and line-of-sight effects. We find that 10 clusters of our sample with $N_{mem} \gtrsim 20$ show signs of dynamical substructure. 
However, the velocity data of only one system is inconsistent with a uni-modal distribution. We therefore assume that the substructures are only marginal and not of comparable size to the clusters themselves. Consequently, our velocity dispersion and mass estimates can be used as priors for SL mass reconstruction studies and also represent an important step toward a better understanding of the properties of the SL galaxy cluster population. • ### G2C2 - IV: A novel approach to study the radial distributions of multiple populations in Galactic globular clusters(1504.06509) April 24, 2015 astro-ph.GA We use the HB morphology of 48 Galactic GCs to study the radial distributions of the different stellar populations known to exist in globular clusters. Assuming that the (extremely) blue HB stars correspond to stars enriched in Helium and light elements, we compare the radial distributions of stars selected according to colour on the HB to trace the distribution of the secondary stellar populations in globular clusters. Unlike other cases, our data show that the populations are well mixed in 80% of the cases studied. This provides some constraints on the mechanisms proposed to pollute the interstellar medium in young globular clusters. • ### The XMM Cluster Survey: Optical analysis methodology and the first data release(1106.3056) June 17, 2011 astro-ph.CO The XMM Cluster Survey (XCS) is a serendipitous search for galaxy clusters using all publicly available data in the XMM-Newton Science Archive. Its main aims are to measure cosmological parameters and trace the evolution of X-ray scaling relations. In this paper we present the first data release from the XMM Cluster Survey (XCS-DR1). This consists of 503 optically confirmed, serendipitously detected, X-ray clusters. Of these clusters, 255 are new to the literature and 356 are new X-ray discoveries. We present 464 clusters with a redshift estimate (0.06 < z < 1.46), including 261 clusters with spectroscopic redshifts. 
In addition, we have measured X-ray temperatures (Tx) for 402 clusters (0.4 < Tx < 14.7 keV). We highlight seven interesting subsamples of XCS-DR1 clusters: (i) 10 clusters at high redshift (z > 1.0, including a new spectroscopically-confirmed cluster at z = 1.01); (ii) 67 clusters with high Tx (> 5 keV); (iii) 131 clusters/groups with low Tx (< 2 keV); (iv) 27 clusters with measured Tx values in the SDSS `Stripe 82' co-add region; (v) 78 clusters with measured Tx values in the Dark Energy Survey region; (vi) 40 clusters detected with sufficient counts to permit mass measurements (under the assumption of hydrostatic equilibrium); (vii) 105 clusters that can be used for applications such as the derivation of cosmological parameters and the measurement of cluster scaling relations. The X-ray analysis methodology used to construct and analyse the XCS-DR1 cluster sample has been presented in a companion paper, Lloyd-Davies et al. (2010). • ### The ACS Virgo Cluster Survey. XIII. SBF Distance Catalog and the Three-Dimensional Structure of the Virgo Cluster(astro-ph/0702510) Feb. 20, 2007 astro-ph The ACS Virgo Cluster Survey consists of HST ACS imaging for 100 early-type galaxies in the Virgo Cluster, observed in the F475W and F850LP filters. We derive distances for 84 of these galaxies using the method of surface brightness fluctuations (SBFs), present the SBF distance catalog, and use this database to examine the three-dimensional distribution of early-type galaxies in the Virgo Cluster. The SBF distance moduli have a mean (random) measurement error of 0.07 mag (0.5 Mpc), or roughly 3 times better than previous SBF measurements for Virgo Cluster galaxies. Five galaxies lie at a distance of ~23 Mpc and are members of the W' cloud. The remaining 79 galaxies have a narrow distribution around our adopted mean distance of 16.5+/-0.1 (random mean error) +/-1.1 Mpc (systematic). 
The rms distance scatter of this sample is 0.6+/-0.1 Mpc, with little dependence on morphological type or luminosity class (i.e., 0.7+/-0.1 and 0.5+/-0.1 Mpc for the giants and dwarfs, respectively). The back-to-front depth of the cluster measured from our sample of galaxies is 2.4+/-0.4 Mpc (i.e., +/-2sigma of the intrinsic distance distribution). The M87 (cluster A) and M49 (cluster B) subclusters are found to lie at distances of 16.7+/-0.2 and 16.4+/-0.2 Mpc, respectively. There may be a third subcluster associated with M86. A weak correlation between velocity and line-of-sight distance may be a faint echo of the cluster velocity distribution not having yet completely virialized. In three dimensions, Virgo's early-type galaxies appear to define a slightly triaxial distribution, with axis ratios of (1:0.7:0.5). The principal axis of the best-fit ellipsoid is inclined ~20-40 deg. from the line of sight, while the galaxies belonging to the W' cloud lie on an axis inclined by ~10-15 deg.
# Polynomial roots in the ring extension

Let $R$ be a ring with identity (not necessarily commutative) and $R[x]$ be a ring of polynomials over $R$. We say that a ring $S$ is an extension of $R$ if there is a subring $\tilde{R}$ in $S$ isomorphic to $R$. Let $S$ be an extension of $R$, and $$\phi: R\to \tilde{R}\subset S$$ be a ring isomorphism. We say that a polynomial $f(x) = \sum\limits_{j\geq 0}f_jx^j\in R[x]$ has a root $\alpha\in S$ if $$\sum\limits_{j\geq 0}\phi(f_j)\alpha^j = 0.$$

In the case where $R$ is a commutative ring, every monic polynomial $f(x)\in R[x]$ has a root $[x]_f$ in the extension $S = R[x]/R[x]f(x)$ of $R$. In the case where $R$ is not commutative, the set $R[x]/R[x]f(x)$ is a left $R[x]$-module but not a ring, because the ideal $R[x]f(x)$ is only one-sided, not two-sided. Also, in the non-commutative case there are examples where the two-sided ideal containing $f(x)$, that is, the ideal $R[x]f(x)R[x]$, is equal to $R[x]$, and in this case $R[x]/R[x]f(x)R[x]$ is isomorphic to the zero ring.

I want to prove that for every ring with identity $R$ and every monic polynomial $f(x)$ over $R$ there exists an extension $S$ of $R$ such that $f(x)$ has a root in $S$.

• How could $R[x]f(x)R[x]$ be equal to $R[x]$ when the degree of $f$ is not 0? – Uri Bader Apr 30 '16 at 15:56
• @user89334 The example in my answer below does just that. The ideal generated by $1+ax$ contains $1=(1-ax)(1+ax)$. – Pace Nielsen Apr 30 '16 at 16:11
• In general, even in the commutative case, polynomials don't have roots in an extension. Example: $f=1 + 2x\in (\mathbb{Z}/4)[x]$. The problem is that $R[x]/(f)$ is in general no extension of $R$. – Todd Leason Apr 30 '16 at 16:12
• My comment was very stupid. I came back to erase it, but figure its too late... – Uri Bader Apr 30 '16 at 16:22
• Note: "unitary polynomial" means "monic polynomial". – YCor Apr 30 '16 at 16:29

In the noncommutative case, your condition for a "root" is called a "right root". I remember that T.Y.
Lam worked with this condition a bit (you might search through his papers, or look in his "First Course in Noncommutative Rings" book). It is easy to get commutative rings where your condition fails (because even though $R[x]f(x)$ is an ideal in $R[x]$, it still intersects $R$)! For instance, let $F$ be a non-zero ring with identity and consider the ring $$R=F[a\ :\ a^2=0].$$ The polynomial $f(x)=1+ax\in R[x]$ cannot have any root in any extension ring of $R$, because this would force $a$ to be a unit in the extension ring, but also nilpotent. Edited to add: I've been thinking about the new question and monic polynomials. Without loss of generality think of $R$ as contained in $S$. If $R$ is allowed to have a different unit than $S$, I think that the answer is positive. When $R$ is forced to be a unital subring of $S$ (i.e. containing the same unit) the answer gets a bit harder, as I'll describe below. In the latter case, take $S:=R\coprod_{\mathbb{Z}}\mathbb{Z}[t]/(f(t))$. Our goal is to show that $R$ is a unital subring of $S$, and we are done. We can do that by writing elements of $S$ in reduced form. To explain the motivation, take $f$ to be a quadratic polynomial for a moment. Say $f(x)=x^2+bx+c$ with $b,c\in R$. Ignoring the "bar" notation (since $S$ is a factor ring) for convenience, we have the relation $$t^2\mapsto -bt-c.$$ We can reduce $t^3$ in two ways, and that gives us a new relation $$tbt\mapsto -bt-tc+ct-bc.$$ We can now reduce $t^2bt$, $tbt^2$, and $tbtbt$ in two ways (each), and we get another relation beginning $tb^2t\mapsto\cdots$. As long as $b^n\notin \mathbb{Z}$ for each $n\geq 1$, I believe that this demonstrates that your question is positively answered (for quadratics, and this can be extended). When $b^n\in \mathbb{Z}$ things get more complicated, but the problem may still be tractable. When $R$ is allowed to have a different identity from $S$, there is an even easier construction. 
• I found the simpler example in the commutative case and beat you by 2 minutes. -:) – Todd Leason Apr 30 '16 at 16:15
• @Pace Nielsen, thank you very much, but I'm interested only in monic polynomials. I will clarify my question. – Mikhail Goltvanitsa Apr 30 '16 at 16:24
• @Pace Nielsen, thank you. Can you explain in more detail what the construction of $R\coprod_{\mathbb{Z}}\mathbb{Z}[t]/(f(t))$ is? Also I don't understand why $t$ doesn't commute with elements of $R$. – Mikhail Goltvanitsa May 4 '16 at 14:12
• @MikhailGoltvanitsa The elements of $R$ don't commute with $t$, because this is the coproduct of rings. For instance the ring $\mathbb{Z}[s]\coprod_{\mathbb{Z}}\mathbb{Z}[t] = \mathbb{Z}\langle s,t\rangle$ has $s,t$ non-commuting. I can't explain the intricacies of the coproduct of rings in this comment, so I recommend finding a good book on the subject. – Pace Nielsen May 4 '16 at 14:35

I have found a very simple and demonstrative construction of the ring extension, which comes from a non-commutative generalization of the Cayley–Hamilton theorem. Let $R$ be a ring and $f(x) = x^m-\sum\limits_{j=0}^{m-1}f_jx^j\in R[x]$ be a monic polynomial. We identify the ring $R$ with the subring $\tilde{R} = \{\mathrm{diag}(r,r,\ldots,r): r\in R\}\subset M_m(R)$. Then $f(x)$ has a root of the form
$$\alpha=\begin{pmatrix} 0& e& 0&\cdots& 0& 0 \\ 0& 0& e&\cdots& 0& 0 \\ \vdots & \vdots & \vdots & \ddots& \vdots & \vdots \\ 0& 0& 0&\cdots& 0& e \\ f_0& f_1& f_2&\cdots& f_{m-2}& f_{m-1} \end{pmatrix}$$
That is, $\alpha^m-\sum\limits_{j=0}^{m-1}f_j\alpha^j = 0\in M_m(R)$. But we note that in general $f(\alpha^T)\neq 0$. See the article for a proof of the non-commutative generalization of the Cayley–Hamilton theorem.

• Nice! Note that your properties $(*)$ and $(**)$ are what Caracciolo, Sportiello and Sokal (in arXiv:0809.3516v2) call "column-commutativity" and "row-commutativity".
– darij grinberg Jul 29 '17 at 20:01
• Mikhail, looking at the reference you give, the proof of the non-commutative generalization of the Cayley–Hamilton theorem is done only for Galois rings (which are finite commutative rings). Finding roots over arbitrary commutative rings is trivial, and doesn't need to pass through matrix rings. – Pace Nielsen Aug 1 '17 at 18:52
• I tried looking for a good source for the fact that a companion matrix of a monic polynomial satisfies that polynomial (even over a noncommutative ring). Apparently it is just a "classical fact" that nobody has bothered to state. The only direct reference I could find was in a paper of Andre Leroy. – Pace Nielsen Aug 2 '17 at 17:00
• @PaceNielsen, in the given reference the proof of the non-commutative Cayley–Hamilton theorem is given for an arbitrary non-commutative ring with identity. – Mikhail Goltvanitsa Aug 20 '17 at 10:38
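The companion-matrix claim is easy to check numerically for a small case. The sketch below (my own choice of ring and coefficients, not from the answer) takes $R = M_2(\mathbb{Z})$ with two non-commuting coefficients, embeds $R$ block-diagonally into $M_2(R) \cong M_4(\mathbb{Z})$, and verifies that the block companion matrix of a monic quadratic is a root in the sense defined in the question.

```python
# Numerical check of the companion-matrix root over a noncommutative ring
# (illustrative; R = M_2(Z) and the coefficients below are my own choices).
# f(x) = x^2 - f1*x - f0 with f0, f1 in R, f0 @ f1 != f1 @ f0.
import numpy as np

I2 = np.eye(2, dtype=int)
Z2 = np.zeros((2, 2), dtype=int)
f0 = np.array([[0, 1], [1, 0]])
f1 = np.array([[1, 1], [0, 1]])

def phi(r):
    # the embedding R -> M_2(R), r -> diag(r, r)
    return np.block([[r, Z2], [Z2, r]])

# companion matrix of f over R, written as a 2x2 block matrix
alpha = np.block([[Z2, I2], [f0, f1]])

# alpha is a root: alpha^2 - phi(f1) @ alpha - phi(f0) = 0 in M_4(Z)
residual = alpha @ alpha - phi(f1) @ alpha - phi(f0)
print(residual)  # the 4x4 zero matrix
```

Note the coefficients are applied on the left, matching the definition $\sum_j \phi(f_j)\alpha^j = 0$; with non-commuting $f_0, f_1$ the other order would generally fail.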
# Systematic optimization of laser cooling of dysprosium

We report on an apparatus for cooling and trapping of neutral dysprosium. We characterize and optimize the performance of our Zeeman slower and 2D molasses cooling of the atomic beam by means of Doppler spectroscopy on a 136 kHz broad transition at 626 nm. Furthermore, we demonstrate the characterization and optimization procedure for the loading phase of a magneto-optical trap (MOT) by increasing the effective laser linewidth by sideband modulation. After optimization of the MOT compression phase, we cool and trap up to $$10^9$$ atoms within 3 seconds in the MOT at temperatures of 9 $$\mu$$K and phase space densities of $$1.7 \cdot 10^{-5}$$, which constitutes an ideal starting point for loading the atoms into an optical dipole trap and for subsequent forced evaporative cooling.

Journal: Applied Physics B (Springer Journals), Volume 124 (6), published May 29, 2018, 9 pages. ISSN 0946-2171, eISSN 1432-0649, DOI 10.1007/s00340-018-6981-2.
# AFNetworking

http://afnetworking.com

AFNetworking is a third-party framework which is used to communicate with remote web services. We use "third-party" here to distinguish it from frameworks created by Apple. AFNetworking is an Objective-C framework, not a Swift framework, so some additional work is needed to bring it into our Swift projects.

## Installation

Installing AFNetworking can happen with CocoaPods, described in more detail here (https://github.com/AFNetworking/AFNetworking). We will be using CocoaPods to install AFNetworking. The Podfile should include:

```
pod "AFNetworking"
```

Then run a pod install. This is described in the CocoaPods chapter, so take a look and jump back here when you're ready.

## Example AFNetworking calls

The basic premise of any REST API is a data transfer using the standard HTTP verbs of GET, POST, PUT and DELETE. If you need a refresher, we covered REST here.

Code snippets of example GET and POST commands are found in the following post: http://swift-ios.co/using-swift-and-afnetworking-2/

Although written primarily in Objective-C, the official documentation of AFNetworking is excellent. It details standard approaches to take when using AFNetworking: http://cocoadocs.org/docsets/AFNetworking/2.5.0/

AFNetworking was recently converted to a Swift library. That library is called Alamofire. You may choose to use this as an alternative to AFNetworking, in order to find more Swift-specific examples available on the web.

## Alamofire

Since the release of Swift, the new hotness is to use Alamofire, created by the same Matt Thompson of AFNetworking. Here, we'll cover how to use Alamofire, if you prefer to use it instead of AFNetworking.

## Installation

For installation, we normally use CocoaPods (we have a chapter on CocoaPods coming up--feel free to jump ahead and come back), but CocoaPods isn't available yet for use with Alamofire.
Instead, here is a video that goes through the process of installing a third-party library:

## Why use Alamofire over NSURLSession?

The reason a developer would choose to use a third-party library instead of iOS libraries is the reduction of code and simplicity of use. Let's check out the following examples:

## GET

Notice how this GET command uses significantly less code than our previous example found in the Networking APIs chapter?

```swift
Alamofire.request(.GET, "http://httpbin.org/get", parameters: ["foo": "bar"])
    .response { (request, response, data, error) in
        println(request)
        println(response)
        println(error)
    }
```

Using this third-party library enables faster development time and a more refined focus on finding solutions.

## POST

The POST retains the same code reduction as the above GET.

```swift
let parameters = [
    "foo": "bar",
    "baz": ["a", 1],
    "qux": [
        "x": 1,
        "y": 2,
        "z": 3
    ]
]

Alamofire.request(.POST, "http://httpbin.org/post", parameters: parameters)
```

The Alamofire framework is very powerful and robust. We covered just the basics, which should cover the majority of initial API integration that you may encounter. Feel free to keep exploring their README.md file, which contains many more examples for a variety of situations.

## References

- Alamofire on GitHub: https://github.com/Alamofire/Alamofire
- AFNetworking on GitHub: https://github.com/AFNetworking/AFNetworking
- Alamofire - NSHipster: http://nshipster.com/alamofire/
# zbMATH — the first resource for mathematics Sanov property, generalized I-projection and a conditional limit theorem. (English) Zbl 0544.60011 Known results on the asymptotic behaviour of the probability that the empirical distribution $$\hat P_ n$$ of an i.i.d. sample $$X_ 1,...,X_ n$$ with common distribution $$P_ X$$ belongs to a given convex set $$\Pi$$ of probability measures, and new results on that of the joint distribution of $$X_ 1,...,X_ n$$ under the condition $$\hat P_ n\in \Pi$$ are obtained simultaneously, using an information-theoretic identity. Related results available in the literature concern the case when the condition $$\hat P_ n\in \Pi$$ represents a finite number of constraints on sample means; then, under various regularity hypotheses, the convergence of $$P_{X| \hat P_ n\in \Pi}$$ to the I-projection of $$P_ X$$ on $$\Pi$$ has been established. These results are generalized in four directions: (i) more general sets $$\Pi$$ are concerned; (ii) the I-projection of $$P_ X$$ on $$\Pi$$ need not exist (but the generalized I-projection, $$P^*$$, exists); (iii) a stronger kind of convergence $$P_{X| \hat P_ n\in \Pi}\to P^*$$ is established (convergence in information); (iv) r.v.s. $$X_ 1,...,X_ n$$ under the condition $$\hat P_ n\in \Pi$$ are shown to behave like i.i.d. r.v.s. with common distribution $$P^*$$ (it is described in an exact way by the notion of asymptotic quasi-independence introduced in the paper). When $$\hat P_ n\in \Pi$$ is the event that the sample mean of a V-valued statistic $$\psi$$ is in a given convex subset of V, a locally convex topological vector space, the limiting conditional distribution of (either) $$X_ i$$ is characterized as a member of the exponential family determined by $$\psi$$ through the unconditional distribution $$P_ X$$, while $$X_ 1,...,X_ n$$ are conditionally asymptotically quasi-independent. 
Reviewer: B. Kryžienė

##### MSC:

- 60B10 Convergence of probability measures
- 60F10 Large deviations
- 60B12 Limit theorems for vector-valued random variables (infinite-dimensional case)
- 62B10 Statistical aspects of information-theoretic topics
- 94A17 Measures of information, entropy
- 82B05 Classical equilibrium statistical mechanics (general)
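The exponential-family characterization of the limiting conditional distribution can be illustrated with a toy computation. The sketch below (my own example, not from the reviewed paper) takes a fair six-sided die and the constraint set $\Pi = \{Q : E_Q[X] \ge 4.5\}$; the (generalized) I-projection of the uniform law onto $\Pi$ is the exponentially tilted distribution $p_k \propto e^{\theta k}$ whose mean is exactly 4.5, and by the conditional limit theorem this is the limiting law of a single coordinate given $\hat P_n \in \Pi$.

```python
# I-projection of the uniform law on {1,...,6} onto {Q : E_Q[X] >= 4.5}
# via exponential tilting (illustrative sketch, my own construction).
import math

def tilted(theta):
    """Distribution p_k proportional to exp(theta * k) / 6 for k = 1..6."""
    w = [math.exp(theta * k) for k in range(1, 7)]
    s = sum(w)
    return [x / s for x in w]

def mean(p):
    return sum(k * pk for k, pk in zip(range(1, 7), p))

# The tilted mean is increasing in theta: solve mean(tilted(theta)) = 4.5
# by bisection; theta = 0 gives mean 3.5, large theta pushes the mean to 6.
lo, hi = 0.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(tilted(mid)) < 4.5:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2

p_star = tilted(theta)
print([round(x, 4) for x in p_star])  # mass increases toward the large faces
print(round(mean(p_star), 6))         # approximately 4.5, by construction
```

Simulating die rolls conditioned on a sample mean of at least 4.5 and comparing empirical frequencies with `p_star` reproduces the "asymptotic quasi-independence" picture numerically.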
# Description of the problem

Imagine a quarter of an infinite chessboard, as in a square grid, extending up and right, so that you can see the lower left corner. Place a 0 in there. Now for every other cell in position (x,y), you place the smallest non-negative integer that hasn't shown up in the column x or the row y. It can be shown that the number in position (x, y) is x ^ y, if the rows and columns are 0-indexed and ^ represents bitwise xor.

Given a position (x, y), return the sum of all elements below that position and to the left of that position, inside the square with vertices (0, 0) and (x, y).

# The input

Two non-negative integers in any sensible format. Due to the symmetry of the puzzle, you can assume the input is ordered if it helps you in any way.

# Output

The sum of all the elements in the square delimited by (0, 0) and (x, y).

# Test cases

```
5, 46  -> 6501
0, 12  -> 78
25, 46 -> 30671
6, 11  -> 510
4, 23  -> 1380
17, 39 -> 14808
5, 27  -> 2300
32, 39 -> 29580
14, 49 -> 18571
0, 15  -> 120
11, 17 -> 1956
30, 49 -> 41755
8, 9   -> 501
7, 43  -> 7632
13, 33 -> 8022
```

• vaguely related – RGS Feb 26 '20 at 22:16
• Interesting, I'm curious if there's a bitwise way to express this without summing everything. – xnor Feb 26 '20 at 22:20
• We'll see what you guys come up with :) – RGS Feb 26 '20 at 22:21
• The main diagonal is A224923. Feb 26 '20 at 22:56

# Perl 6, 19 bytes

```
{sum [X+^] 0 X..@_}
```

Try it online!

The sum of the bitwise xor between all numbers in the range 0 to all inputs.

# Python 2, 49 bytes

```python
lambda x,y:sum(k/-~x^k%-~x for k in range(~x*~y))
```

Try it online!

The usual div-mod trick for iterating over two ranges. Unfortunately, since we want inclusive ranges, we need to raise each input by 1, which we do with -~. Thanks to Bubbler for saving 2 bytes by cancelling the minuses on -~x*-~y. Python 3 would be a byte longer using //. I tried to use the 3.8 walrus operator to cut down on the -~x repetition, without success.
55 bytes

    f=lambda m,n:m|n>0and(m^n)+f(m-1,n)+f(m,n-1)-f(m-1,n-1)

Try it online!

A cute recursion, but unfortunately longer. This doesn't use anything particular about XOR -- the same method can be used to add up any two-variable function over a rectangle. The base case m|n>0 terminates with zero when either m or n becomes negative, or (harmlessly) when both are zero. This will time out on larger test cases due to the huge degree of recursive branching.

52 bytes

    f=lambda m,n,b=1:m|n>0and(m^n)+f(m,n-1)*b+f(m-1,n,0)

Try it online!

A similar, shorter recursion. The idea is that we can recursively travel left or down, but once we've gone left, we set the flag b to zero and ignore the results of any further travel down. The result is that we travel to each cell in the rectangle exactly once, like this:

    O < O < O < O
                v
    O < O < O < O
                v
    O < O < O < O

Other ideas

Here are some partial ideas that didn't lead to decent golfs, but maybe someone can make use of them. The one-dimensional summation (on a half-open range)

    w=lambda i,n:sum(i^j for j in range(n))

can be expressed recursively as

    w=lambda i,n:i+n and 4*w(i/2,n/2)+n%2*(n^i^1)+n/2

(Python 2 used for floor division.) We also have an efficient formula

$$f(m-1,n-1) = \sum_{i=0}^{\infty} \frac{m n - |m \odot 2^{i}|\,|n \odot 2^{i}|}{2}\, 2^i$$

where $\odot k$ denotes the symmetric modulo-$2k$ operator, mapping to the range from $-k$ to $k$. This summation can be truncated once $2^i$ is bigger than the inputs, since further terms will be zero.

• Feb 27 '20 at 2:51
• Also, an attempt to remove some -~s turned out to be 53 bytes. Feb 27 '20 at 2:53
• Good work here! +1 – RGS Feb 27 '20 at 19:18

# Wolfram Language (Mathematica), 28 bytes

    Array[BitXor,{##}+1,0,Plus]&

Try it online!

• Nice use of the fourth argument! Feb 27 '20 at 10:52
• Wow, this is incredibly short for a Mathematica submission!
+1 – RGS Feb 27 '20 at 19:20
• You can save three bytes by taking a list as input Feb 28 '20 at 14:47

# Ruby, 36 bytes

    ->x,y{(0..y+x*y+=1).sum{|r|r/y^r%y}}

Try it online!

• What is the |r| doing? – RGS Feb 27 '20 at 19:25
• @RGS It defines the name of the variable which is used inside the block, iterating over every value between 0 and y+x*y+=1. Kind of like for( r = 0; r <= y+x*(y+1); r = r + 1 ){ Feb 28 '20 at 11:26

# APL (Dyalog Extended), 15 bytes

    +/,{⊥≠/⊤⍵}¨⍳1+⎕

Try it online! Full program.

### How it works

    +/,{⊥≠/⊤⍵}¨⍳1+⎕
                  ⎕   ⍝ Take a pair [x y] from stdin
               ⍳1+    ⍝ Generate indices from [0 0] to [x y] inclusive
       {    }¨        ⍝ For each pair,
           ⊤⍵         ⍝   Convert the two numbers into a two-column bit matrix
                      ⍝   (each column is the binary representation of each number)
         ≠/           ⍝   Reduce each row with not-equals (XOR)
        ⊥             ⍝   Convert the resulting binary representation back to an integer
    +/,               ⍝ Sum all elements and implicitly print

• Nice APL submission +1 – RGS Feb 27 '20 at 19:19

# 05AB1E, 6 bytes

    Ýδ^˜O

Explanation:

    Ý     # Push a list in the range [0,value] for each value in the (implicit) input-pair
          # Push both these lists separated to the stack
     δ    # Apply double-vectorized:
      ^   #   XOR them together
       ˜O # And then take the flattened sum
          # (after which the result is output implicitly)

• Thanks for your submission! Was it after or before the morning meetings? ;) – RGS Feb 27 '20 at 19:19
• @RGS Uhh, dunno anymore haha. I think slightly before based on that '13 hours ago'. ;) Feb 27 '20 at 21:13
• Alright! I see you like golfing :D – RGS Feb 27 '20 at 21:55
• @RGS What makes you think that, haha. Is it my multiple answers a day? My 65k+ rep? My 7+ gold badges? Me joining 3.5 years ago, but still active today? xD But yes, I indeed like golfing while I'm waiting for things to compile/build at work. Feb 28 '20 at 7:30
• Oh man, I wish I was coding in a compiled language :P I am gonna propose that back at work, so that I can enjoy some code golf while everything builds...
– RGS Feb 28 '20 at 8:20

# J, 23 17 bytes

-6 bytes thanks to Bubbler!

    1#.1#.XOR/&(i.,])

Try it online!

# K (oK), 25 bytes

    {+/2/'~=/'(64#2)\''+!1+x}

Try it online!

Most probably can be golfed further.

• J, 17 bytes using outer product instead of catalogue. Feb 27 '20 at 9:27
• @Bubbler Of course! That's much, much better! Feb 27 '20 at 9:29
• Thanks for your two solutions +1 Why don't you have two different answers? – RGS Feb 27 '20 at 19:23
• @RGS Well, I'm not sure... They turned out to be quite different. Feb 28 '20 at 7:29

# Jelly, 7 bytes

    ^þ/;RSS

Try it online!

A monadic link taking a pair of integers and returning an integer. A couple of alternative 7-byters:

• So short! Thanks for the alternatives +1 – RGS Feb 26 '20 at 23:48

# Burlesque, 18 bytes

    psqrzMPcp{q$$r[}ms

Try it online!

    ps   # Parse to arr of ints
    qrz  # Boxed range [0,N]
    MP   # Map push
    cp   # Cartesian product
    {
    q$$  # Boxed xor
    r[   # Reduce by
    }ms  # Map sum

• Nice solution +1 Do the several letters together represent a single function name? – RGS Feb 27 '20 at 19:23
• @RGS there's no such things as a function, it's all stack. The q is shorthand for putting it in a block. e.g. qrz = {rz}. Operations are elements of the stack too, and most operations are 2 chars. Some, like maps and reduce, take blocks of operations as an argument and apply them. Feb 27 '20 at 19:33
• I understand, thanks for the explanation! – RGS Feb 27 '20 at 19:37

# C (gcc), 48 bytes

    R;f(x,y){R=++x*++y;for(y=0;--R;y+=R%x^R/x);x=y;}

Try it online!

• Thanks for your C submission +1 – RGS Feb 27 '20 at 19:20
• Nice! How does this work? Mar 1 '20 at 2:49
• @S.S.Anne xnor has used the same algorithm for his python solution and written a good explanation. – xibu Mar 2 '20 at 16:22

# Java 8, 58 bytes

    (x,y)->{int t=++x*-~y;for(y=0;--t>0;)y+=t%x^t/x;return y;}

Port of @xibu's C answer, so make sure to upvote him! Try it online.
Explanation:

    (x,y)->{          // Method with two integer parameters and integer return-type
      int t=++x       //  Increase x by 1 first with ++x
           *-~y;      //  Create a temp integer t, with value x*(y+1)
      for(y=0;        //  Reset y to 0 to reuse as result-sum
          --t>0;)     //  Loop as long as t-1 is larger than 0,
                      //  decreasing it before every iteration with --t
        y+=           //   Increase the result-sum by:
           t%x        //    t modulo-x
           ^          //    Bitwise XOR-ed with:
           t/x;       //    t integer-divided by x
      return y;}      //  And then return the result-sum y

• Straightforward! Thanks +1 – RGS Feb 27 '20 at 19:25

# Clojure, 72 bytes

    (defn s[x y](apply +(for[a(range(inc x))b(range(inc y))](bit-xor a b))))

Ungolfed:

    (defn sumxy [x y]
      (apply +
        (for [a (range (inc x))
              b (range (inc y))]
          (bit-xor a b))))

Tests:

    (println (s 5 46) " <-> " 6501)
    (println (s 0 12) " <-> " 78)
    (println (s 25 46) " <-> " 30671)
    (println (s 6 11) " <-> " 510)
    (println (s 4 23) " <-> " 1380)
    (println (s 17 39) " <-> " 14808)
    (println (s 5 27) " <-> " 2300)
    (println (s 32 39) " <-> " 29580)
    (println (s 14 49) " <-> " 18571)
    (println (s 0 15) " <-> " 120)
    (println (s 11 17) " <-> " 1956)
    (println (s 30 49) " <-> " 41755)
    (println (s 8 9) " <-> " 501)
    (println (s 7 43) " <-> " 7632)
    (println (s 13 33) " <-> " 8022)

Try it online!

• So many parenthesis :p +1 – RGS Feb 27 '20 at 19:26
• Lisp (of which Clojure is a variant) was one of the earliest high-level languages. It probably would have been more widely adopted were it not for the high costs incurred by the untimely post-war depletion of the Strategic Parenthesis Reserve. However, despite such setbacks, Lisp and its derivatives have been influential in key algorithmic techniques such as recursion and condescension. See the "...History of Programming Languages" page. :-) Feb 27 '20 at 22:52
• I don't think I even understand all the jokes and references, but that blog post is making me laugh so much :D thanks for this!
– RGS Feb 27 '20 at 22:56

# Japt-x, 6 bytes

    ô ï^Vô

Try it

    ô ï^Vô    :Implicit input of integers U & V
    ô         :Range [0,U]
      ï       :Cartesian product with
        Vô    :  Range [0,V]
       ^      :  Reduce each pair by XORing
              :Implicit output of sum of resulting array

• I like the two eyes ô at the ends of the code +1 – RGS Feb 27 '20 at 19:21

# JavaScript (ES6), 41 bytes

Takes input as (x)(y) (or the other way around). Computes the sum recursively, the obvious way.

    x=>g=(y,Y=y)=>~x&&(x^y)+g(y?y-1:x--&&Y,Y)

Try it online!

• Good answer! I thought you were going to find some weird formula with the bits of x and y! Nothing came to mind? – RGS Feb 26 '20 at 23:47
• @RGS The carries implied by a sum usually doesn't mix well with bitwise operations. So I doubt a magic formula exists. (But I'd love to see one.) Feb 26 '20 at 23:56
• If time efficiency were the goal, there are some simple formulas that come quite close (and can then be corrected by brute-forcing a much shorter list). Feb 27 '20 at 21:24

# Python 3, 59 58 bytes

Saved a byte thanks to Neil and HyperNeutrino!!!

    lambda x,y:sum(a^b for a in range(x+1)for b in range(y+1))

Try it online!

• You have an extra space that you can remove after the ) in range(x+1) Feb 26 '20 at 22:43

# C (gcc), 61 52 bytes

Saved 9 bytes thanks to Arnauld!!!

    b;s;f(x,y){for(s=b=y;~x;b--||(b=y,x--))s+=x^b;s-=y;}

Try it online!

• Do you really have to have the three variables before the f? Can't you stuff them somewhere else or something? :/ – RGS Feb 26 '20 at 23:50
• @RGS Tried several variations and they had either more or the same byte count. :( Feb 26 '20 at 23:51
• 52 bytes Feb 26 '20 at 23:51
• @Arnauld That's diabolical - thanks! :-) Feb 26 '20 at 23:57

# Wolfram Language (Mathematica), 33 bytes

    Sum[i~BitXor~j,{i,0,#},{j,0,#2}]&

Try it online!

# Zsh, 31 bytes

    eval '<<<$[' +{0..$1}^{0..$2} ]

Try it online!

Uses eval to expand the <<<$[ ] after expanding the lists. The TIO link adds set -x so you can see what the brace expansion looks like.
• Thanks for your zsh submission! – RGS Feb 27 '20 at 19:24

# Perl 5 -pa, 36 bytes

    map{//;map$\+=$_^$',0..$F[0]}0..<>}{

Try it online!

Takes in inputs on separate lines.

• Thanks for your Perl 5 submission! Why did you go for Perl 5 and not any other version? Like a more recent version? – RGS Feb 27 '20 at 21:29
• @RGS Perl 5 is very different from Perl 6, so much so that the latter has been renamed to Raku to prevent confusion. Perl 5 is what most people refer to as just Perl – Jo King Feb 28 '20 at 2:09

# Oracle SQL, 126 bytes

This isn't a golfing language and it doesn't have a bitwise XOR operator but:

    SELECT SUM(x+y-2*BITAND(x,y))FROM(SELECT LEVEL-1 x FROM T CONNECT BY LEVEL<x+2),(SELECT LEVEL-1 y FROM T CONNECT BY LEVEL<y+2)

It assumes that there is a table T with columns X and Y containing one row that has the input values. So for the inputs:

    CREATE TABLE t(x,y) AS SELECT 5,46 FROM DUAL;

this outputs:

    | SUM(X+Y-2*BITAND(X,Y)) |
    | ---------------------: |
    |                   6501 |

db<>fiddle here

As an aside, it's only 81 characters in PostgreSQL:

    SELECT sum(a#b)FROM t,generate_series(0,t.x)AS x(a),generate_series(0,t.y)AS y(b)

db<>fiddle here

But it's less fun, as that has a built-in XOR operator and series generation.

• Really cool solution! thanks for it +1 – RGS Feb 27 '20 at 21:29

# Japt-x, 8 bytes

    ò@Vò^XÃc

Try it

• 8 bytes: ò@Vò^XÃc. Feb 27 '20 at 7:30
• I also like the eyes and nose here: ò@ò – RGS Feb 27 '20 at 19:25

# Julia 1.0, 33 bytes

    (x,y)->sum(i⊻j for i=0:x,j=0:y)

Try it online!

• This Julia answer is really short +1 good job – RGS Mar 1 '20 at 9:26

# C (gcc), 51 bytes

    d,r;f(x,y){for(d=y;~d||(x--,d=y),~x;)r+=x^d--;d=r;}

Try it online!

• Nice answer! +1 keep up the good work – RGS Mar 1 '20 at 9:26
• 50 bytes Jul 4 '20 at 6:57
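For reference, here is a straightforward, ungolfed Python implementation of the task (essentially the Python 3 answer above, spelled out), which is handy for checking submissions against the test cases:

```python
# Ungolfed reference implementation of the challenge:
# sum x ^ y over the inclusive rectangle (0,0)..(x,y).
def xor_rect_sum(x, y):
    return sum(a ^ b for a in range(x + 1) for b in range(y + 1))

# A few of the test cases from the question:
assert xor_rect_sum(5, 46) == 6501
assert xor_rect_sum(0, 12) == 78
assert xor_rect_sum(25, 46) == 30671
assert xor_rect_sum(32, 39) == 29580
```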
{}
Pinterest • A catalog of ideas from around the world • 46 • 1 follower

Excellent!!! Thank you so much and come again!
{}
Supervised Q-Learning for Continuous Control

Hao Sun · Ziping Xu · Taiyi Wang · Meng Fang · Bolei Zhou

Policy gradient (PG) algorithms have been widely used in reinforcement learning (RL). However, PG algorithms rely on exploiting the value function being learned with the first-order update locally, which results in limited sample efficiency. In this work, we propose an alternative method called Zeroth-Order Supervised Policy Improvement (ZOSPI). ZOSPI exploits the estimated value function $Q$ globally while preserving the local exploitation of PG methods, based on zeroth-order policy optimization. This learning paradigm follows Q-learning but overcomes the difficulty of efficiently operating argmax in continuous action spaces: it finds the max-valued action within a small number of samples. The policy learning of ZOSPI has two steps: first, it samples actions and evaluates those actions with a learned value estimator, and then it learns to perform the action with the highest value through supervised learning. We further demonstrate that such a supervised learning framework can learn multi-modal policies. Experiments show that ZOSPI achieves competitive results on the continuous control benchmarks with remarkable sample efficiency.

#### Author Information

##### Ziping Xu (University of Michigan)

My name is Ziping Xu. I am a fifth-year Ph.D. student in Statistics at the University of Michigan. My research interests are in sample-efficient reinforcement learning, transfer learning, and multitask learning. I am looking for a research-oriented full-time job starting Fall 2023.

##### Bolei Zhou (UCLA)

Assistant professor at UCLA's computer science department
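The two-step policy update described in the abstract above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the quadratic critic, the sampling radius, and the sample count are all assumptions made for the sake of a self-contained example; only the sample-then-imitate mechanism is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, action):
    # Toy stand-in for the learned critic Q(s, a) (an assumption for
    # illustration): it prefers actions close to -state.
    return -np.sum((action + state) ** 2)

def zospi_target(state, policy_action, n_samples=64, radius=0.5):
    # Step 1: sample candidate actions (here, around the current policy
    # output, including the current output itself) and evaluate each one
    # with the value estimator.
    noise = radius * rng.standard_normal((n_samples, policy_action.size))
    candidates = np.vstack([policy_action, policy_action + noise])
    values = [q_value(state, a) for a in candidates]
    # Step 2: the highest-valued sample becomes the regression target;
    # the policy network would then be fit toward it by supervised learning.
    return candidates[int(np.argmax(values))]

state = np.array([0.3, -0.1])
policy_action = np.zeros(2)   # current (untrained) policy output
target = zospi_target(state, policy_action)
# The chosen target is at least as good as the current policy output.
assert q_value(state, target) >= q_value(state, policy_action)
```

In the full algorithm this regression target would be produced for a batch of states, with the policy trained on a squared-error loss toward the targets instead of a policy gradient.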
{}
# Tensor ordering

I've been trying to figure out how tensor indices work with Sage, and I have a really simple question - how are the indices ordered after contracting two tensors? For example, if I have two tensors S, T of type (s_1,s_2) and (t_1,t_2) and I contract them, how will the indices of the resulting tensor be ordered? E.g. if S and T are both of type (3,3), then:

$$S.\text{contract}(1,T,4) = S^{abc}_{\quad def} \, T^{ghi}_{\quad jbk}$$

how would the resulting tensor's indices be ordered?

$$R^{ac\quad ghi}_{\quad def \quad jk}$$

or

$$R^{acghi}_{\quad \quad defjk}$$

I tried looking on the page for tensor indices but I couldn't figure it out; experimentation seemed to suggest the second, but I wanted to be sure. Thanks; and sorry if this is a silly question whose explanation I missed in the docs.

Yes, this is the second form, namely

$$S^{abc}_{\quad def} T^{ghi}_{\quad jbk} = R^{acghi}_{\quad \ \ defjk}$$

This is so because in SageMath, the contravariant indices always come before the covariant ones.
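The same convention can be checked independently with a small NumPy sketch (this imitates the contraction by hand; it is not SageMath's API): contract slot 1 of S with slot 4 of T, and list the surviving contravariant slots first, then the covariant ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
# S has slots a b c | d e f, T has slots g h i | j b k (both of type (3,3)).
S = rng.standard_normal((n,) * 6)
T = rng.standard_normal((n,) * 6)

# Contract slot 1 of S (upper b) with slot 4 of T (lower b); keep the
# remaining upper indices first, then the remaining lower ones:
R = np.einsum('abcdef,ghijbk->acghidefjk', S, T)

# Result is of type (5,5): R^{acghi}_{defjk}, the second ordering above.
assert R.shape == (n,) * 10

# Spot-check one component against the explicit sum over b:
manual = sum(S[0, b, 0, 0, 0, 0] * T[0, 0, 0, 0, b, 0] for b in range(n))
assert np.isclose(R[(0,) * 10], manual)
```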
{}
# Spectrum Coordination in Energy Efficient Cognitive Radio Networks

## Abstract

Device coordination in open spectrum systems is a challenging problem, particularly since users experience varying spectrum availability over time and location. In this paper, we propose a game-theoretical approach that allows cognitive radio pairs, namely the primary user (PU) and the secondary user (SU), to update their transmission powers and frequencies simultaneously. Specifically, we address a Stackelberg game model in which individual users attempt to access the wireless spectrum hierarchically while maximizing their energy efficiency. A thorough analysis of the existence, uniqueness and characterization of the Stackelberg equilibrium is conducted. In particular, we show that spectrum coordination naturally occurs when both actors in the system decide sequentially about their powers and their transmitting carriers. As a result, spectrum sensing in such a situation turns out to be a simple detection of the presence/absence of a transmission on each sub-band. We also show that when users experience very different channel gains on their two carriers, they may choose to transmit on the same carrier at the Stackelberg equilibrium, as this contributes enough energy efficiency to outweigh the interference degradation caused by the mutual transmission. Then, we provide an algorithmic analysis of how the PU and the SU can reach such a spectrum coordination using an appropriate learning process. We validate our results through extensive simulations and compare the proposed algorithm to some typical scenarios, including the non-cooperative case in [1] and throughput-based-utility systems. Typically, it is shown that the proposed Stackelberg decision approach optimizes the energy efficiency while still maximizing the throughput at the equilibrium.
Keywords: Cognitive Radio Networks; Multi-carrier systems; Energy Efficiency; Spectrum Coordination; Game Theory; Learning; Sensing.

## 1 Introduction

Cognitive radio technology was first proposed to increase the throughput of mobiles for the next generation of wireless technologies [2]. This enhancement is possible with an efficient use of the wireless spectrum, and specifically of spectrum holes. Indeed, PUs that have a specific and licensed access to the spectrum leave part of it unused at different times and geographic locations. Much work has been done on optimizing the behavior of SUs in cognitive radio networks (CRNs); see [3] for a survey. However, most previous works focus on spectrum sharing [4] or on CRNs and interference avoidance [5]. Consequently, the energy efficiency aspect in this setting has been largely ignored.

Green communications are attracting growing attention for various economic and environmental reasons. This has led the research community to focus on reducing energy consumption by introducing enhanced networking technologies [6], [7]. Motivated by the limited battery life of mobile terminals, green networking has spurred great interest and excitement in recent years. In the literature, the energy efficient power control game was first proposed by Goodman et al. in [8] for flat fading channels, and re-used by [1] for multi-carrier code-division multiple access (CDMA) systems and by [9] for relay-assisted DS/CDMA. Most of these works do not consider the cognitive radio technology and therefore the capabilities of the secondary users. In CRNs, interference management is very important since the interference due to spectrum sharing can significantly degrade the overall performance. In the existing work, various resource allocation methods have been proposed to either improve energy efficiency or alleviate interference. However, very little research has addressed their joint interaction.
In [10], the authors considered that primary and secondary users' signals coexist in the same frequency band, and the transmit powers of the SUs are constrained so that the interference from the whole secondary network to each PU does not exceed a prescribed threshold. They formulated the problem as a non-cooperative power control game and proved the existence of a unique Nash equilibrium (NE). [11] provides an energy efficient game perspective on the problem of contention-based synchronization in orthogonal frequency-division multiple access (OFDMA) communication systems. Each user trades off its available resources so as to selfishly maximize its own revenue (in terms of probability of correct detection) while saving as much energy as possible and satisfying quality-of-service (QoS) requirements (in terms of probability of false alarm and timing estimation accuracy). In [12], the authors study the gradual removal problem in energy efficient wireless networks: any transmitting user whose required transmit power for reaching its target SIR exceeds its maximum power is temporarily removed, but resumes its transmission if its required transmit power goes below a given threshold obtained in a distributed manner. Thus all transmitting users reach their target rates while consuming the minimum aggregate transmit power.

We consider in this paper a hierarchical (Stackelberg) game model of a CRN in which the PU is the leader and the SU is the follower of the game. It is noteworthy that we consider the spectrum underlay concept, in which the PU experiences interference from the SU. Most of the current work has focused on spectrum sharing between cognitive radio pairs, where cognitive radio nodes dynamically detect spectrum holes of primary spectrum users and opportunistically utilize them in frequency and time [3].
We formally prove that the hierarchical structure of the game induces a spectrum coordination between the different components of the network, in such a way that they transmit on distinct carriers. This coordination property across the multiple interfering devices is particularly appealing not only from an implementation perspective, but also due to its low complexity, small overhead, and suitability for radio resource management (see as examples [13] for open spectrum ad-hoc networks, [14] for multi-cell MIMO systems and [15] for cellular downlink networks). There are many motivations for studying wireless networks with hierarchical structures, but the most important ones are improving network efficiency and modeling accuracy. The Stackelberg game was first proposed for economic problems, and has also been used in biology for modeling optimal behaviors against nature [16]. It is in fact a mechanism for wireless networks in which some wireless nodes have priority to access the medium whereas other nodes are equipped with cognitive sensors, as in CRNs (see [17], one of the first references to address a multi-leader and multi-follower game theoretic model for CR spectrum sharing networks). This is also a natural setting for heterogeneous wireless networks due to the absence of coordination among the small cells and between small cells and macro cells [18, 19]. At the core lies the idea that the utility of the leader obtained at the Stackelberg equilibrium is higher than his utility obtained at the NE when the two users play simultaneously. This is due to the Stackelberg mechanism in which the leader anticipates the follower's action. It has been proved in [20] that this result also holds for the follower. The goal is then to find a Stackelberg equilibrium in this two-step game [21].
The original contributions of this paper are threefold:

• Introducing the hierarchy concept in power control games for energy efficient multi-carrier systems,

• Characterizing completely and analytically the Stackelberg equilibrium, and comparing the results obtained in the proposed hierarchical game with those obtained in the non-cooperative game in [1],

• Our main result: we always obtain an equilibrium (contrary to the work addressed in [1]) where, in the most general cases, the two users transmit on distinct carriers, delivering a binary channel assignment.

The organization of the paper is the following. First, we introduce in Section 2 the CRN context and the different decision makers of the system. In Section 3, we define the energy efficiency framework which is used throughout the paper, and present the game theoretic model in Section 4. Next, in Section 5, we characterize the Stackelberg equilibrium by providing a thorough analysis of the existence and uniqueness of such an equilibrium. Having these results, we then address the important property of spectrum coordination in Section 6. Section 7 provides an analysis of the implementation issues, including a learning algorithm that ensures convergence to the Stackelberg equilibrium in Section 7.1 and the sensing issue in Section 7.2. Section 8 illustrates some numerical results and Section 9 concludes the paper.

## 2 The Cognitive Radio System Model

We consider a network composed of a PU (or leader – indexed by 1), having the priority to access the medium, and a SU (or follower – indexed by 2) that accesses the medium after observing the action of the PU, subject to mutual interference. We assume slotted transmissions (over two carriers) for both the PU and the SU.
The equivalent baseband signal received by the base station can be written as

$$y_k = h_{1k}\, x_{1k} + h_{2k}\, x_{2k} + z_k, \quad \text{for } k = 1, 2 \qquad (1)$$

where $h_{nk}$ stands for the block fading process of user $n$ on sub-band $k$, $x_{nk}$ is the signal transmitted by user $n$ on sub-band $k$, and $z_k$ is the additive Gaussian noise on the $k$-th sub-band. We denote by $g_{nk} = |h_{nk}|^2$ the fading channel gain, which is assumed to stay constant over each block fading length (i.e., coherent communication). We statistically model the channel gains as i.i.d. over the Rayleigh fading coefficients. The transmitted signal can further be written as $x_{nk} = \sqrt{p_{nk}}\, s_{nk}$, where $p_{nk}$ and $s_{nk}$ are the transmit power and data of user $n$. We thus have $\mathbb{E}|x_{nk}|^2 = p_{nk}$. The additive Gaussian noise at the receiver is i.i.d. circularly symmetric with $\mathbb{E}|z_k|^2 = \sigma^2$ for $k = 1, 2$. For any user $n$, the received signal-to-interference-plus-noise ratio (SINR) over carrier $k$ is expressed as

$$\gamma_{nk} = \frac{g_{nk}\, p_{nk}}{\sigma^2 + \sum_{m=1,\, m \neq n}^{2} g_{mk}\, p_{mk}} := p_{nk}\, \hat{h}_{nk}. \qquad (2)$$

In the remainder, we define the effective channel gain $\hat{h}_{nk}$ as the ratio between the SINR and the transmission power. It follows from the above SINR expression that the strategy chosen by a user (i.e., the power vector $\mathbf{p}_n = (p_{n1}, p_{n2})$) may affect the performance of the other user in the network through multiple-access interference, reflected by the effective channel gain.

## 3 Network Energy Efficiency Analysis

Our system model is based on the seminal paper [8] that defines the energy efficiency framework. In order to formulate the power control problem as a game, we first need to define a utility function suitable for data applications. Increasing the transmit power clearly favors the packet success rate and therefore the throughput. However, as the packet success rate tends to one, further increasing the power leads to only marginal throughput gains relative to the amount of extra power used.
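This saturation effect can be seen numerically. The sketch below takes the efficiency function f(x) = (1 − e^{−x})^M, a common choice in this literature but an assumption here (the paper only requires f to be S-shaped), and shows that the energy efficiency f(gp/σ²)/p first rises with the power p and then decays:

```python
import math

M = 100               # packet length in bits (assumed value for illustration)
g, sigma2 = 1.0, 1.0  # channel gain and noise power (assumed values)

def f(x):
    # S-shaped efficiency function (assumed form, standard in this literature).
    return (1.0 - math.exp(-x)) ** M

def energy_efficiency(p):
    # Utility of a single user on a single interference-free carrier: bits/joule.
    return f(g * p / sigma2) / p

low, mid, high = 0.5, 8.0, 200.0
# Very low power: packets almost never succeed, so efficiency is near zero.
# Very high power: the success rate saturates at 1, so efficiency decays as 1/p.
assert energy_efficiency(mid) > energy_efficiency(low)
assert energy_efficiency(mid) > energy_efficiency(high)
```

The interior maximizer of this trade-off is exactly what the utility defined next formalizes.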
The following utility function allows one to measure the corresponding trade-off between the transmission benefit (total throughput over both carriers) and cost (total power over both carriers):

$$u_n(\mathbf{p}_1, \mathbf{p}_2) = \frac{R_n \left( f(\gamma_{n1}) + f(\gamma_{n2}) \right)}{p_{n1} + p_{n2}} \qquad (3)$$

where $R_n$ is the transmission rate of user $n$ and $f$ is an increasing, continuous and S-shaped efficiency function which measures the packet success rate. A more detailed discussion of the efficiency function can be found in [22]. The utility function, which has units of bits per joule, perfectly captures the trade-off between throughput and battery life, and is particularly suitable for applications where energy efficiency is crucial, such as sensors and mobile terminals.

## 4 The Game Theoretic Framework

### 4.1 The non-cooperative game problem

An important solution concept of the game under consideration is the NE [23], which is a fundamental concept in non-cooperative strategic games. It is a vector of strategies (or actions in our case) $\mathbf{p}^* = (\mathbf{p}_1^*, \mathbf{p}_2^*)$, one for each player, such that no player has an incentive to unilaterally deviate, i.e., $u_n(\mathbf{p}_n^*, \mathbf{p}_{-n}^*) \ge u_n(\mathbf{p}_n, \mathbf{p}_{-n}^*)$ for every action $\mathbf{p}_n$, where the subscript $-n$ on vector $\mathbf{p}$ stands for "except user $n$". In [8], the authors showed that, under certain conditions, the NE of the game with utility (3) exists.

### 4.2 The hierarchical game formulation

In this work, we consider a Stackelberg game framework in which the PU first decides his power control vector $\mathbf{p}_1 = (p_{11}, p_{12})$, and based on this, the SU adapts his power control vector $\mathbf{p}_2 = (p_{21}, p_{22})$.

###### Definition 1.

(Stackelberg equilibrium): A vector of actions $(\tilde{\mathbf{p}}_1, \bar{\mathbf{p}}_2(\tilde{\mathbf{p}}_1))$ is called a Stackelberg equilibrium (SE) if and only if $\tilde{\mathbf{p}}_1 \in \arg\max_{\mathbf{p}_1} u_1(\mathbf{p}_1, \bar{\mathbf{p}}_2(\mathbf{p}_1))$, where $\bar{\mathbf{p}}_2(\mathbf{p}_1) \in \arg\max_{\mathbf{p}_2} u_2(\mathbf{p}_1, \mathbf{p}_2)$.

A SE can be determined using a bi-level approach [21]. Given the action of the PU, we compute the best-response function of the SU (the function $\bar{\mathbf{p}}_2(\cdot)$), i.e., the action of the SU which maximizes his utility given the action of the PU.
This best-response function is characterized using a result from [1], and depends on the PU's power control on carrier $k$ through the following expression:

$$\forall k \in \{1,2\}, \quad \hat{h}_{2k}(p_{1k}) = \frac{\gamma_{2k}}{p_{2k}} = \frac{g_{2k}}{\sigma^2 + g_{1k}\, p_{1k}}.$$

## 5 Characterization of the Stackelberg Equilibrium

In order to determine the SE, a standard approach is to use a backward induction technique. We first determine the best-response function of the SU depending on the action of the PU. This result comes directly from Proposition 1 of [1]. To make this paper sufficiently self-contained, we review the latter proposition here.

### 5.1 The secondary user's power control vector

###### Proposition 1.

(Given in [1]) Given the power control vector $\mathbf{p}_1$ of the PU, the best-response function of the SU is given by

$$\bar{p}_{2k}(\mathbf{p}_1) = \begin{cases} \dfrac{\gamma^*}{\hat{h}_{2k}(p_{1k})}, & \text{if } k = \arg\max_{\ell} \hat{h}_{2\ell}(p_{1\ell}), \\ 0, & \text{otherwise}, \end{cases} \qquad (4)$$

with $\gamma^*$ the unique (positive) solution of the first-order equation

$$x f'(x) = f(x). \qquad (5)$$

Equation (5) has a unique solution if the efficiency function $f$ is sigmoidal [24], and we use this assumption throughout the paper. Proposition 1 claims that there are two regions, depending on the PU's power control, which yield different best-response functions for the SU. Below, we define the two regions:

$$A = \left\{(p_{11}, p_{12}) \,\middle|\, \hat{h}_{22} \ge \hat{h}_{21}\right\} = \left\{(p_{11}, p_{12}) \,\middle|\, p_{12} \le \frac{g_{11}\, g_{22}}{g_{12}\, g_{21}}\, p_{11} + \frac{\sigma^2 (g_{22} - g_{21})}{g_{12}\, g_{21}}\right\} \qquad (6)$$

and

$$B = \left\{(p_{11}, p_{12}) \,\middle|\, \hat{h}_{22} < \hat{h}_{21}\right\} = \left\{(p_{11}, p_{12}) \,\middle|\, p_{12} > \frac{g_{11}\, g_{22}}{g_{12}\, g_{21}}\, p_{11} + \frac{\sigma^2 (g_{22} - g_{21})}{g_{12}\, g_{21}}\right\}. \qquad (7)$$

### 5.2 The primary user's power control vector

So far, we have seen that the best-response function of the SU is to use only one carrier, the one with the best effective channel gain. Let us now study the optimal power control for the PU, knowing the best-response function of the SU. The following proposition, which is our first main result, gives the existence and uniqueness of the optimal power control of the PU at the SE. Notice that uniqueness of the SE is a desirable property for a Stackelberg game.
If there exists exactly one equilibrium, we can predict the equilibrium strategy of the players and the resulting performance of the system.

###### Proposition 2.

(First main result) Existence and uniqueness of the PU's power control at the SE. There exists a unique power control vector for the PU which maximizes his energy efficiency over Region $A$. It is defined by:

$$\tilde{p}_{12} = 0, \quad \text{and} \quad \tilde{p}_{11} = \begin{cases} \dfrac{\sigma^2 \gamma^*}{g_{11}}, & \text{if } \dfrac{g_{22}}{g_{21}} \ge \dfrac{1}{1+\gamma^*}, \\[2mm] \dfrac{\sigma^2 (g_{21} - g_{22})}{g_{11}\, g_{22}}, & \text{otherwise}. \end{cases}$$

There exists a unique power control vector for the PU which maximizes his energy efficiency over Region $B$. It is defined by:

$$\tilde{p}_{11} = 0 \quad \text{and} \quad \tilde{p}_{12} = \begin{cases} \dfrac{\sigma^2 \gamma^*}{g_{12}}, & \text{if } \dfrac{g_{22}}{g_{21}} \le 1 + \gamma^*, \\[2mm] \dfrac{\sigma^2 (g_{22} - g_{21})}{g_{12}\, g_{21}}, & \text{otherwise}. \end{cases}$$

For the clarity of the exposition, this proposition is proven in Appendix .1. This result, combined with the result of Prop. 1, yields the existence of a SE.

###### Corollary 1.

At the Stackelberg equilibrium, when the channel gains of the SU satisfy

$$\frac{1}{1+\gamma^*} \le \frac{g_{21}}{g_{22}} \le 1 + \gamma^*, \qquad (8)$$

the power control vector which maximizes the PU's utility is unique and is given by

$$\tilde{p}_{1k} = \begin{cases} \dfrac{\sigma^2 \gamma^*}{g_{1k}}, & \text{for } k = \tilde{k}, \\ 0, & \text{for all } k \neq \tilde{k}, \end{cases} \qquad (9)$$

where $\tilde{k} = \arg\max_k g_{1k}$ denotes the "best" carrier of the PU.

###### Proof.

The proof makes use of results from Prop. 2 for the PU's power control in Regions $A$ and $B$. The utility of the PU within Region $A$ is maximized when $p_{12} = 0$, yielding

$$\max_{p_{11}, p_{12}} u_1^A(p_{11}, p_{12}) = \max_{p_{11}} u_1^A(p_{11}, 0) = \max_{p_{11}} \frac{R_1\, f\!\left(\frac{g_{11}\, p_{11}}{\sigma^2}\right)}{p_{11}}, \qquad (10)$$

which implies that the maximum utility over Region $A$ is given by $\tilde{u}_1^A = \frac{R_1\, f(\gamma^*)\, g_{11}}{\sigma^2 \gamma^*}$. Within Region $B$, the utility of the PU is maximized when $p_{11} = 0$, yielding

$$\max_{p_{11}, p_{12}} u_1^B(p_{11}, p_{12}) = \max_{p_{12}} u_1^B(0, p_{12}) = \max_{p_{12}} \frac{R_1\, f\!\left(\frac{g_{12}\, p_{12}}{\sigma^2}\right)}{p_{12}}, \qquad (11)$$

which implies that the maximum utility within Region $B$ is $\tilde{u}_1^B = \frac{R_1\, f(\gamma^*)\, g_{12}}{\sigma^2 \gamma^*}$. Combining the above results for Region $A$ (in Eq. (10)) and Region $B$ (in Eq. (11)), the maximization problem of the PU's utility becomes

$$\max_{p_{11}, p_{12}} u_1(p_{11}, p_{12}) = \max(\tilde{u}_1^A, \tilde{u}_1^B) = \begin{cases} \tilde{u}_1^A, & \text{if } g_{11} \ge g_{12}, \\ \tilde{u}_1^B, & \text{if } g_{11} < g_{12}, \end{cases}$$

where we use the fact that the maximum utility is a strictly increasing function of the channel gain. This completes the proof.
∎

Condition (8) means that a given user experiences approximately the same channel characteristics over his two carriers. Note that this is typically the case when the two carriers are close enough [25]. Corollary 1 says that the utility of the PU is maximized when he transmits only over his best carrier. Accordingly, we observe that the carrier which does not provide enough energy efficiency to outweigh the interference degradation caused by the SU's transmission is switched "off". Notice that this result is in contradiction with throughput-based-utility systems, which lead to a water-filling power control [26] where only a certain number of carriers are exploited depending on the channel gains. To summarize, Prop. 1 and Prop. 2 suggest that, at the SE, both the SU and the PU transmit on only one carrier, depending on their channel gains. In the next section, we show that hierarchy "pushes" users towards coordinating their actions in such a way that they transmit on distinct carriers.

## 6 Spectrum Coordination

### 6.1 General result

A necessary and sufficient condition on the SU's channel gains such that the best response of the SU is to transmit over a carrier distinct from the PU's is given in the following proposition.

###### Proposition 3.

At the Stackelberg equilibrium, if the PU transmits over only one carrier, the SU transmits over a distinct carrier if and only if Condition (8) is satisfied.

The proof of Prop. 3 is given in Appendix .2. Prop. 3 claims that Condition (8) is a necessary and sufficient condition to obtain spectrum coordination. We will see in the next proposition that spectrum coordination can occur even if Condition (8) is not satisfied. In this case, the SE is not unique, as the SU obtains the same utility by choosing to transmit either on a different carrier from the PU (coordination case) or on the same carrier as the PU (non-coordination case).

###### Proposition 4.
(Second main result) Spectrum Coordination

Introducing hierarchy between users in a two-carrier energy-efficient power control game induces a natural coordination pattern where users have an incentive to choose their transmitting carriers in such a way that they transmit on orthogonal channels.

###### Proof.

To show this important result, we determine the Stackelberg equilibria of the users depending on their channel gains. As far as the proposed hierarchical model is concerned, the SE can be computed by considering the following possibilities:

• (a) If $\frac{1}{1+\gamma^*}\le\frac{g_{21}}{g_{22}}\le 1+\gamma^*$ (i.e., the SU experiences approximately the same radio conditions over his two carriers),

• (i) if $g_{12}>g_{11}$, then the PU transmits on his second carrier (Corollary 1) and, by Prop. 3, the SU transmits on the first one. The SE is then given by:

$$(\tilde p_{11},\tilde p_{12},\tilde p_{21},\tilde p_{22})=\left(0,\frac{\gamma^*\sigma^2}{g_{12}},\frac{\gamma^*\sigma^2}{g_{21}},0\right),\qquad(12)$$

• (ii) otherwise ($g_{11}\ge g_{12}$), the PU transmits on his first carrier and the SU on the second one. The SE is then given by:

$$(\tilde p_{11},\tilde p_{12},\tilde p_{21},\tilde p_{22})=\left(\frac{\gamma^*\sigma^2}{g_{11}},0,0,\frac{\gamma^*\sigma^2}{g_{22}}\right).\qquad(13)$$

• (b) If $\frac{g_{21}}{g_{22}}>1+\gamma^*$ (i.e., the SU experiences a deep fade on his second carrier compared to his first carrier),

• (i) if $g_{12}\ge g_{11}$, then the Stackelberg equilibrium is

$$(\tilde p_{11},\tilde p_{12},\tilde p_{21},\tilde p_{22})=\left(0,\frac{\gamma^*\sigma^2}{g_{12}},\frac{\gamma^*\sigma^2}{g_{21}},0\right).\qquad(14)$$

• (ii) otherwise ($g_{11}>g_{12}$), the power control vector of the PU at the SE is

$$(\tilde p_{11},\tilde p_{12})=\begin{cases}\left(\dfrac{\sigma^2(g_{21}-g_{22})}{g_{11}g_{22}},\,0\right) & \text{if (15) holds},\\[4pt] \left(0,\,\dfrac{\sigma^2\gamma^*}{g_{12}}\right) & \text{otherwise,}\end{cases}$$

where Condition (15) is

$$\frac{g_{11}}{g_{12}}\ge\frac{f(\gamma^*)}{\gamma^*}\cdot\frac{\frac{g_{21}}{g_{22}}-1}{f\!\left(\frac{g_{21}}{g_{22}}-1\right)}.\qquad(15)$$

The SU transmits on the carrier left idle by the PU if Condition (15) is not satisfied; in this case $(\tilde p_{21},\tilde p_{22})=\left(\frac{\sigma^2\gamma^*}{g_{21}},0\right)$. If Condition (15) is satisfied, we have the following best response for the SU:

$$\bar p_2\!\left(\frac{\sigma^2(g_{21}-g_{22})}{g_{11}g_{22}},\,0\right)=\left\{\left(\frac{\sigma^2\gamma^*}{g_{22}},\,0\right)\ \text{or}\ \left(0,\,\frac{\sigma^2\gamma^*}{g_{22}}\right)\right\},$$

because the effective channel gains are equal for both carriers, i.e., $\frac{g_{21}}{\sigma^2+g_{11}\tilde p_{11}}=\frac{g_{22}}{\sigma^2}$. The best response is thus not unique in this case, and the two players can use the same carrier (the first one here). As the SU plays after observing the action of the primary user, the SU can decide, to optimize spectrum utilization, to transmit over the carrier left idle by the PU.
Moreover, the SU's power is inversely proportional to the channel gain over the second carrier, so it is also more convenient for him to transmit over this second carrier.

• (c) If $\frac{g_{21}}{g_{22}}<\frac{1}{1+\gamma^*}$ (i.e., the SU experiences a deep fade on his first carrier compared to his second carrier), we have the symmetric results:

• (i) if $g_{11}\ge g_{12}$, then the SE is

$$(\tilde p_{11},\tilde p_{12},\tilde p_{21},\tilde p_{22})=\left(\frac{\gamma^*\sigma^2}{g_{11}},0,0,\frac{\gamma^*\sigma^2}{g_{22}}\right).\qquad(16)$$

• (ii) otherwise ($g_{12}>g_{11}$), the power control vector of the PU at the SE is

$$(\tilde p_{11},\tilde p_{12})=\begin{cases}\left(0,\,\dfrac{\sigma^2(g_{22}-g_{21})}{g_{12}g_{21}}\right) & \text{if (17) holds},\\[4pt] \left(\dfrac{\sigma^2\gamma^*}{g_{11}},\,0\right) & \text{otherwise,}\end{cases}$$

where Condition (17) is

$$\frac{g_{11}}{g_{12}}<\frac{\gamma^*}{f(\gamma^*)}\cdot\frac{f\!\left(\frac{g_{22}-g_{21}}{g_{21}}\right)}{\frac{g_{22}-g_{21}}{g_{21}}}.\qquad(17)$$

The SU transmits on the carrier left idle by the PU if Condition (17) is not satisfied; in this case $(\tilde p_{21},\tilde p_{22})=\left(0,\frac{\sigma^2\gamma^*}{g_{22}}\right)$. If Condition (17) is satisfied, we have the following best response for the SU:

$$\bar p_2\!\left(0,\,\frac{\sigma^2(g_{22}-g_{21})}{g_{12}g_{21}}\right)=\left\{\left(0,\,\frac{\sigma^2\gamma^*}{g_{21}}\right)\ \text{or}\ \left(\frac{\sigma^2\gamma^*}{g_{21}},\,0\right)\right\},$$

because the effective channel gains are equal for both carriers, i.e., $\frac{g_{22}}{\sigma^2+g_{12}\tilde p_{12}}=\frac{g_{21}}{\sigma^2}$. The best response is thus not unique, and the two players can use the same carrier (the second one here). In this particular case, the SU can decide to transmit over the first carrier in order to optimize spectrum utilization. Again, as the SU's power is inversely proportional to the channel gain on the first carrier, it is more convenient for him to transmit over this first carrier.

Having treated the case of spectrum coordination, let us now present a particular case (depending on the fading channel gains) where the two players gain by transmitting on the same carrier at the SE.

### 6.2 Extreme Case

In a Stackelberg game, if the leader decides to play a Nash action, then the follower plays the Nash action too, as it is the best response to the Nash action. Then, depending on the ratio of the channel gains, it could be interesting for the PU to transmit over the same carrier as the SU.
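The coordination mechanism in case (a) of Prop. 4 can be sketched numerically. The gains below are made-up values chosen to satisfy Condition (8), and the sketch relies on the observation used earlier: the maximal single-carrier utility is proportional to the effective channel gain, so the SU's best response reduces to picking the carrier with the larger effective gain.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values).
gamma_s, sigma2 = 1.0, 1.0
g11, g12 = 2.0, 1.0            # PU gains: his best carrier is carrier 1
g21, g22 = 1.2, 1.0            # SU gains chosen to satisfy Condition (8)

# Condition (8): the SU sees comparable gains on both carriers.
assert 1 / (1 + gamma_s) <= g21 / g22 <= 1 + gamma_s

# Corollary 1: the PU transmits on his best carrier only, at power sigma2*gamma*/g.
p1 = np.array([sigma2 * gamma_s / g11, 0.0]) if g11 >= g12 \
    else np.array([0.0, sigma2 * gamma_s / g12])

# SU's effective gains, including the PU's interference on each carrier.
g_eff = np.array([g21, g22]) / (sigma2 + np.array([g11, g12]) * p1)
su_carrier = int(np.argmax(g_eff))
pu_carrier = int(np.argmax(p1 > 0))

# Spectrum coordination: the two users end up on distinct carriers.
assert su_carrier != pu_carrier
```

Under Condition (8) the idle carrier always has the larger effective gain for the SU, which is exactly the content of Prop. 3.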
We will show in the next proposition that this case can appear, essentially when the target SINR at the SE is very low, i.e., $\gamma^*<1$, and under some conditions on the channel gains.

###### Proposition 5.

At the Stackelberg equilibrium, in Region $A$ (resp. Region $B$), if $\gamma^*<1$, both the PU and the SU transmit on the first (resp. second) carrier if

$$\frac{g_{n1}}{g_{n2}}\ge\frac{1}{1-\gamma^*}\quad\left(\text{resp. }\ \frac{g_{n1}}{g_{n2}}\le 1-\gamma^*\right),\quad\text{for } n\in\{1,2\}.\qquad(18)$$

The proof of Prop. 5 is given in Appendix .3. Prop. 5 shows that the probability of the extreme case turns out to be the probability of no coordination between users. Specifically, in the extreme case of Region $B$, the PU decides to transmit on the same carrier as the SU (the second one here), as the second carrier is much better than the first one. In the extreme case of Region $A$, the channel gain is very bad on the second carrier with respect to the one on the first carrier, and then both users choose to transmit on the first carrier. Note that, in this case, the SU and the PU transmit over the same carrier using an optimal power control given by the Stackelberg model proposed in [20]. Notice that, in the case of Rayleigh fading channels, the probability of being in the extreme case is given by:

$$\begin{aligned}\psi(\gamma^*) &= \Pr\!\left\{\frac{g_{11}}{g_{12}}\ge\frac{1}{1-\gamma^*}\right\}\cdot\Pr\!\left\{\frac{g_{21}}{g_{22}}\ge\frac{1}{1-\gamma^*}\right\} + \Pr\!\left\{\frac{g_{11}}{g_{12}}\le 1-\gamma^*\right\}\cdot\Pr\!\left\{\frac{g_{21}}{g_{22}}\le 1-\gamma^*\right\}\\ &= \left[\int_0^\infty\!\!\int_{\frac{y}{1-\gamma^*}}^\infty e^{-(x+y)}\,dx\,dy\right]^2 + \left[\int_0^\infty\!\!\int_0^{(1-\gamma^*)y} e^{-(x+y)}\,dx\,dy\right]^2 = 2\cdot\left(\frac{\gamma^*-1}{\gamma^*-2}\right)^2.\end{aligned}$$

Figure 1 depicts the probability of being in the extreme case (which is the probability of no coordination) as a function of $\gamma^*$. It is shown that the probability of being in the extreme case is always lower than $1/2$. As $\gamma^*$ increases, the extreme region shrinks, resulting in a decrease of the probability of no coordination. A global overview of the occupation of the carriers at the SE, as a function of the ratios $g_{11}/g_{12}$ and $g_{21}/g_{22}$, is depicted in Figure 2. It illustrates the main contributions of the paper, namely:

• we have proved the existence and uniqueness of an equilibrium when a user can observe the action of the other user before deciding his own action, whatever the channel gains are.
This result does not hold when the two users play a NE (see for instance [1]);

• although we have formulated the problem of energy efficiency maximization by allowing a carrier to be shared by both users, we have obtained a spectrum coordination pattern in which, to refrain from mutual interference, users have an incentive to choose their carriers orthogonally (exactly as in OFDMA systems).

## 7 Implementation Issues

Although Prop. 1 and Prop. 2 guarantee SE existence, it is still not clear whether users will be able to compute this equilibrium in a decentralized environment where only partial/local information is available at the mobile terminal. Consequently, our goal in this section is to study implementation issues related to the convergence to the equilibrium and its speed, along with the sensing problem. So far, we have assumed that the channels are static. If the channels fluctuate stochastically over time, the associated game still admits an equilibrium, but the learning process is no longer deterministic; just the same, by employing the theory of stochastic approximation, it can be shown that users still converge to the equilibrium [27]. In the next section, we propose a temporal-difference learning algorithm that ensures convergence to the SE within a limited time.

### 7.1 Learning-based approach

The interaction between the PU and the SU provides a potential incentive for both agents to base their decision processes on their respective perceived payoffs. Determining the equilibrium strategy of both the primary and the secondary users requires in practice the knowledge of several pieces of information which cannot be observed in a realistic scenario [28]. We propose, in this section, an on-policy learning-based algorithm that allows the PU and the SU to determine their strategies on the fly. Machine learning is a powerful technique where learning is accomplished by real-time interactions with the environment and proper utilization of past experience.
In particular, we consider the well-known temporal-difference learning, where each user maintains state-value functions as lookup tables in order to determine the optimal action in the current time slot [29]. To cope with the hierarchical decision process between the PU and the SU, we further set an iteration scale parameter $\Phi$ which reflects how frequently the SU updates his state-value function and sets new power values with respect to the PU. The PU's state-value function is updated as

$$q(g^{t-1},p^{t-1})\leftarrow(1-\beta_t)\,q(g^{t-1},p^{t-1})+\beta_t\left(u_1+\kappa\, q(g^{t},p^{t})\right),$$

whereas the SU's state-value function is updated as

$$Q(g^{t-1},p^{t-1})\leftarrow(1-\alpha_t)\,Q(g^{t-1},p^{t-1})+\alpha_t\left(u_2+\kappa\, Q(g^{t},p^{t})\right),$$

where $\kappa$ is the discount factor, and $\alpha_t$ and $\beta_t$ are the learning rate factors satisfying $\sum_t\alpha_t=\infty$ and $\sum_t\alpha_t^2<\infty$ (and similarly for $\beta_t$). The pseudo-code for the proposed algorithm is given in Algorithm 1. Specifically, we consider an effective balancing between exploration and exploitation: with some small probability we explore new actions, while we choose the already established action otherwise. Indeed, the trade-off between exploration and exploitation remains a challenging issue in stochastic learning processes.

Algorithm 1: Learning-based Algorithm for Energy Efficient Cognitive Radio Networks.

Initialize $q$ and $Q$ for all channel gains and transmit powers; initialize $\kappa$, the exploration probability, $\alpha$, and $\beta$. While true: observe the new channel gains; the PU selects his transmit power vector greedily from $q$ with high probability, else chooses a random transmit power vector; then, for each of the $\Phi$ inner iterations, the SU observes the new channel gains and selects his transmit power vector from $Q$ in the same way. Both users then use their transmit power vectors, observe the rewards $u_1$ and $u_2$ given by Eq. (3), and update $q$ and $Q$ as above.

The following proposition proves that the learning-based algorithm for energy efficient cognitive radio networks converges to the optimal policy.

###### Proposition 6.

The learning-based algorithm converges w.p.1 to the optimal $Q$-function.

The proof of Prop. 6 is given in Appendix .4.
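A minimal self-contained sketch of the tabular update above is given below. The environment is a stand-in assumption, not the real radio setting: channel gains are quantized into 4 states, powers into 8 levels, transitions are i.i.d., and the reward is a placeholder peaked at an interior power level. Only the update rule itself mirrors the one in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 4, 8             # quantized channel gains / power levels
kappa, eps = 0.5, 0.2                  # discount factor, exploration probability
q = np.zeros((n_states, n_actions))    # lookup-table state-value function
visits = np.zeros((n_states, n_actions))

def reward(s, a):
    # placeholder energy-efficiency reward, peaked at power level a = s + 2
    return float(np.exp(-(a - (s + 2)) ** 2))

s = int(rng.integers(n_states))
for _ in range(40_000):
    # epsilon-greedy: explore with probability eps, else exploit the table
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(q[s]))
    s_next = int(rng.integers(n_states))          # i.i.d. channel-state transitions
    visits[s, a] += 1
    beta = 1.0 / visits[s, a]                     # sum beta = inf, sum beta^2 < inf
    # q <- (1 - beta) q + beta (u + kappa * q(next))
    q[s, a] = (1 - beta) * q[s, a] + beta * (reward(s, a) + kappa * np.max(q[s_next]))
    s = s_next

# after learning, the greedy power level tracks the reward peak in every state
greedy = [int(np.argmax(q[s])) for s in range(n_states)]
assert greedy == [2, 3, 4, 5]
```

The per-pair learning rate $1/\text{visits}$ satisfies the stated Robbins-Monro conditions; in the paper's setting the PU and SU would each run such an update on their own table, interleaved by the iteration scale parameter.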
The learning time is addressed in the following proposition.

###### Proposition 7.

Let $Q^T$ and $q^T$ be the values of the learning-based algorithm for the SU and the PU, respectively, after $T$ time steps. Then both are within $\epsilon$ of their optimal values with probability at least $1-\delta$, given that

$$T=\Omega\!\left(N_{iter}\cdot\left(L+\Phi\cdot L+1\right)^{\frac{1}{\beta}}\cdot\ln\!\left(\frac{V_{\max}}{\epsilon}\right)\cdot\frac{V_{\max}^2\,\ln\!\left(\frac{|S||A|V_{\max}}{\delta\beta\epsilon\Phi}\right)}{(\Phi\beta\epsilon)^2}\right)\qquad(19)$$

where $L$ is the covering time, $\Phi$ is the iteration scale parameter, $V_{\max}$ is the maximum reward obtained, and $|S|$ and $|A|$ are the numbers of possible states and strategies, respectively. For a sequence of state-action pairs, the covering time $L$ is an upper bound on the number of state-action pairs, starting from any pair, until all state-action pairs appear in the sequence. Indeed, the convergence speed of the proposed algorithm depends on the iteration scale parameter $\Phi$. The notation $T=\Omega(h)$ implies that there exist constants $c>0$ and $T_0$ such that $T\ge c\,h$ for all $T\ge T_0$. The proof of Prop. 7 is given in Appendix .5.

### 7.2 Spectrum Sensing

In the current Stackelberg model, Proposition 4 claims that the SU transmits over a certain frequency carrier in order to reach $\gamma^*$ only when the PU does not use it. This enables public access to the new spectral ranges without sacrificing the transmission quality of the actual license owners. Typically, the PU comes first in the system, estimates his channel gains over his two carriers and adapts his transmit power using Prop. 2. The SU comes later in the system, estimates his channel links over his two carriers and chooses his transmit power using Prop. 1. Such an assumption can be further justified by the fact that, in an asynchronous context, the probability that two users decide to transmit at the same moment is negligible when the number of users is limited. Thus, within this setting, the PU is assumed to be oblivious to the presence of the SU. The PU communicates with his BS while the SU listens to the wireless channel. The SU only has to reliably detect the carrier used by the PU, and not the PU's transmit power as is the case in the single-carrier context in [20].
Many well-known techniques have been developed to detect holes in the spectrum band (energy detection [30], feature detection [31], etc.).

## 8 Numerical illustration

In this section, we present a comprehensive Matlab-based simulation of the CRN described in the previous sections. We consider the energy efficiency function proposed in most papers dealing with power control games, that is $f(x)=(1-e^{-x})^M$, where $M$ is the block length in bits. This fixes the value of $\gamma^*$ (in dB) and the rates $R_k$ in Mbps for $k\in\{1,2\}$.

### 8.1 Energy Efficiency as a function of the SNR

This section is devoted to the performance comparison of the proposed Stackelberg scheme with respect to traditional schemes. As far as sum energy efficiency comparison is concerned, this can be conducted by considering the four following schemes:

• the Stackelberg model: the one proposed in this paper,

• the Nash model: each user chooses his power level according to [1],

• the best channel model: each user chooses to transmit on his "best" channel (i.e., the one with the best channel gain) without sensing,

• the best channel with sensing: the PU chooses the "best" channel to transmit on; the SU senses the spectrum and transmits on the vacant sub-band. Here we assume perfect sensing of the idle sub-band by the SU.

In Figure 4, we plot the energy efficiency at equilibrium as a function of the SNR. Interestingly, we see that the energy efficiency of the PU at the SE performs the same as in the sensing scenario up to a certain SNR, while the energy efficiency of the SU at the SE is always the same as in the scheme where sensing is done by the SU. Moreover, the Stackelberg model outperforms all the other strategies. This is due to the Stackelberg mechanism, in which the PU anticipates the SU's action. In particular, we found that the PU achieves a substantial energy efficiency gain with respect to the Nash strategy at high SNR. As expected, results in Figure 4 also show that the energy efficiency of the SU at the SE is less than the one obtained at the NE.
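For the efficiency function $f(x)=(1-e^{-x})^M$ introduced above, the energy-efficiency-optimal SINR $\gamma^*$ is the unique positive root of the first-order condition $f(\gamma)=\gamma f'(\gamma)$, and it can be recovered numerically. The block length $M=80$ below is an illustrative assumption (the excerpt does not give the simulation's value), not the paper's parameter.

```python
import math

M = 80  # illustrative block length (assumption)

def f(x):
    # efficiency function (1 - e^-x)^M
    return (1.0 - math.exp(-x)) ** M

def g(x):
    # first-order condition f(x) - x f'(x), with f'(x) = M e^-x (1-e^-x)^(M-1)
    return f(x) - x * M * math.exp(-x) * (1.0 - math.exp(-x)) ** (M - 1)

# bisection on [1, 20]: g(1) < 0 (the slope term dominates), g(20) > 0
lo, hi = 1.0, 20.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
gamma_star = 0.5 * (lo + hi)

# for M = 80 the root lies between 6 and 7 in linear scale (roughly 8 dB)
assert 6.0 < gamma_star < 7.0
assert abs(g(gamma_star)) < 1e-9
```

The same routine works for any block length; larger $M$ pushes $\gamma^*$ up, since more of the curve $f$ must be climbed before the marginal efficiency drops.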
This is due to the fact that, in the Nash model, the PU does not anticipate the SU's action. Notice that, as the SNR decreases, all configurations tend towards the same (zero) energy efficiency. This can be justified by the fact that, in the low-SNR regime, whatever power control strategy each user chooses, the signal is overwhelmed by the noise. Figure 4 depicts the throughput at the equilibrium, for which we make approximately the same observations as for the energy efficiency. Of particular interest is the fact that the PU still outperforms all the other strategies up to a certain SNR, whereas the throughput of the SU at the SE is still less than the one obtained at the Nash equilibrium. That is, the proposed Stackelberg scheme achieves a flexible and desirable trade-off between energy efficiency maximization and throughput maximization.

### 8.2 Learning the Equilibria

To proceed further with the analysis, we simulate how the PU and the SU converge to the equilibria according to Algorithm 1 presented in Section 7.1. The noise variance is set so as to correspond to a fixed SNR (in dB). We consider an iteration scale $\Phi$, which means that the SU runs $\Phi$ iterations for each iteration of the PU.

#### Static Channels

In Figures 6 and 6, we consider static channel gains $g_{11}$, $g_{12}$, $g_{21}$, and $g_{22}$. We observe from Figure 6 that the optimal power control decision of the PU is to transmit on the first carrier, whereas the SU chooses to transmit on the second carrier, as claimed by Prop. 4. Indeed, the ratio $g_{21}/g_{22}$ lies in the interval $\left[\frac{1}{1+\gamma^*},\,1+\gamma^*\right]$ and $g_{11}\ge g_{12}$, so the SE is given by Prop. 4-a-ii. In Figures 8 and 8, we change the PU's second-carrier channel gain and the SU's second-carrier channel gain, and the SE changes accordingly: we now fall in case (b-i) of Prop. 4, where the PU decides to transmit on the second carrier and the SU transmits on the first carrier. In Figures 6 and 8, we look at the energy efficiency of the PU and the SU.
In the general case, the PU outperforms the SU, since the PU anticipates the SU's action (see Fig. 6). However, it is illustrated in Fig. 8 that, although he plays first, the PU performs worse than the SU at the equilibrium when the SU's best carrier is much better than the PU's best carrier. In Figure 10, we plot the energy efficiency of the PU and the SU at the NE proposed in [1] as a function of time. It is clear that both the PU and the SU converge to the same energy efficiency, since the Nash game is a one-shot game. We also observe that both the PU and the SU converge to exactly the same energy efficiency (in Mbit/Joule) as the one obtained in Figure 4 at the corresponding SNR. Next, we plot in Figure 10 the convergence of the energy efficiency at the SE for both the PU and the SU. Again, we observe that the PU and the SU converge to the respective energy efficiencies (in Mbit/Joule) obtained in Figure 4 at the corresponding SNR. Moreover, as expected, the energy efficiency of the PU at the SE is higher than the energy efficiency of the SU. Note that the variance of the energy efficiency in Figures 10 and 10 is due to the fact that the fading channel states of the PU and the SU vary every time slot. Still, the algorithm converges to the equilibrium of an averaged game whose payoff functions correspond to the users' achievable ergodic rates.

## 9 Conclusion

In this paper, we have proposed a hierarchical concept in a power control game for energy-efficient multi-carrier cognitive radio systems. We have first completely and analytically characterized the Stackelberg equilibrium of such a game. Interestingly, we have shown that, although each user is prone to interference from the other transmitter on the same carrier, for the vast majority of cases there exists a natural coordination pattern in which the PU and the SU have an incentive to choose their transmitting carriers orthogonally (as in OFDMA systems).
The proposed system goes toward the vision of a fully coordinated cognitive-radio multi-carrier network, whereby transmit powers are coordinated across the users. We have then compared the users' energy efficiency in the proposed hierarchical game with that obtained in a standard non-cooperative setting. In addition to allowing coordination of the spectrum usage, the proposed power control game provides additional functionalities that can be used in energy-efficient CRNs. In particular, the proposed Stackelberg scheme achieves a flexible and desirable trade-off between energy efficiency and throughput maximization. For implementation purposes, the SU only has to reliably sense the spectral environment (and not the PU's transmit power, as is the case in the single-carrier context in [20]) and then decides to transmit only on the best carrier left idle by the PU. Finally, with extensive measurement-driven simulations, we have shown that the proposed game model converges to the desired equilibria in a small number of steps, and hence is amenable to practical implementation.

### .1 Proof of Prop. 2: Existence and uniqueness of the PU's power control at the SE

###### Proof.

Given Proposition 1, the power control vectors of the SU in Regions $A$ and $B$ are given, respectively, by

$$p_2^A(p_{12})=\left(0,\,\frac{\gamma^*(\sigma^2+g_{12}p_{12})}{g_{22}}\right)\quad\text{and}\quad p_2^B(p_{11})=\left(\frac{\gamma^*(\sigma^2+g_{11}p_{11})}{g_{21}},\,0\right).$$
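Substituting the SU's Region-A best response above into the PU's SINR on the second carrier makes the SU's gain $g_{22}$ cancel, which is how the closed-form $\gamma_{12}$ expression of the appendix arises. A one-line numerical check (with arbitrary positive test values, and assuming the interference the SU causes the PU on carrier 2 enters as $g_{22}\,p_{22}$, as the derivation suggests):

```python
# Arbitrary positive test values (assumptions for the check only).
gamma_s, sigma2 = 1.7, 0.3
g12, g22, p12 = 0.8, 1.9, 2.4

# SU's Region-A best response on carrier 2: p22 = gamma*(sigma2 + g12 p12)/g22
p22 = gamma_s * (sigma2 + g12 * p12) / g22

# PU's SINR on carrier 2, computed directly and via the closed form:
gamma12_direct = g12 * p12 / (sigma2 + g22 * p22)
gamma12_closed = g12 * p12 / (sigma2 * (1 + gamma_s) + gamma_s * g12 * p12)
assert abs(gamma12_direct - gamma12_closed) < 1e-12
```

The equality holds identically in the parameters, since $g_{22}p_{22}=\gamma^*(\sigma^2+g_{12}p_{12})$ by construction of the best response.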
Based on the above equations, we can compute the explicit expression of the PU's SINR on each carrier for both regions, namely

$$\gamma_{11}=\begin{cases}\dfrac{g_{11}p_{11}}{\sigma^2}, & \text{in Region } A,\\[4pt] \dfrac{g_{11}p_{11}}{\sigma^2(1+\gamma^*)+\gamma^* g_{11}p_{11}}, & \text{in Region } B,\end{cases}\qquad \gamma_{12}=\begin{cases}\dfrac{g_{12}p_{12}}{\sigma^2(1+\gamma^*)+\gamma^* g_{12}p_{12}}, & \text{in Region } A,\\[4pt] \dfrac{g_{12}p_{12}}{\sigma^2}, & \text{in Region } B.\end{cases}$$

It follows that the utility function of the PU given by Equation (3) for Region $A$ can be expressed as

$$u_1^A(p_{11},p_{12})=\frac{R_1 f(\gamma_{11})+R_1 f(\gamma_{12})}{p_{11}+p_{12}}=\frac{R_1 f\!\left(\frac{g_{11}p_{11}}{\sigma^2}\right)+R_1 f\!\left(\frac{g_{12}p_{12}}{\sigma^2(1+\gamma^*)+\gamma^* g_{12}p_{12}}\right)}{p_{11}+p_{12}}.$$

Similarly, in Region $B$, the PU's utility function is

$$u_1^B(p_{11},p_{12})=\frac{R_1 f\!\left(\frac{g_{11}p_{11}}{\sigma^2(1+\gamma^*)+\gamma^* g_{11}p_{11}}\right)+R_1 f\!\left(\frac{g_{12}p_{12}}{\sigma^2}\right)}{p_{11}+p_{12}}.$$

Without loss of generality, the analysis is given only for Region $A$; a similar approach can be adopted for Region $B$. We first derive the utility of the PU w.r.t. $p_{11}$. We obtain

$$\frac{\partial u_1^A(p_{11},p_{12})}{\partial p_{11}}=R_1\cdot\frac{f'(\gamma_{11})\cdot\frac{g_{11}}{\sigma^2}\cdot(p_{11}+p_{12})-\left[f(\gamma_{11})+f(\gamma_{12})\right]}{(p_{11}+p_{12})^2}.$$

Now, let us compute the derivative of the PU's utility in Region $A$ w.r.t. $p_{12}$. We have

$$\frac{\partial u_1^A(p_{11},p_{12})}{\partial p_{12}}=R_1\cdot\frac{f'(\gamma_{12})\cdot\frac{\partial\gamma_{12}}{\partial p_{12}}\cdot(p_{11}+p_{12})-\left[f(\gamma_{11})+f(\gamma_{12})\right]}{(p_{11}+p_{12})^2},$$

where, after some simple simplifications,

$$\frac{\partial\gamma_{12}}{\partial p_{12}}=\frac{g_{12}\,\sigma^2(1+\gamma^*)}{\left(\sigma^2(1+\gamma^*)+\gamma^* g_{12}p_{12}\right)^2}=\frac{\gamma_{12}\left(1-\gamma^*\gamma_{12}\right)}{p_{12}}.$$

We shall now look for a couple $(p_{11},p_{12})$ such that both partial derivatives vanish. It follows from the above results that such a couple is a solution of the system

$$\begin{cases}f'(\gamma_{11})\cdot\frac{g_{11}}{\sigma^2}\cdot(p_{11}+p_{12})=f(\gamma_{11})+f(\gamma_{12}),\\[2pt] f'(\gamma_{12})\cdot\frac{\partial\gamma_{12}}{\partial p_{12}}\cdot(p_{11}+p_{12})=f(\gamma_{11})+f(\gamma_{12}).\end{cases}$$

The solutions of the above system are given by

$$p_{11}=\frac{\sigma^2\gamma_{11}}{g_{11}}\qquad(20)$$

and

$$p_{12}=\frac{\sigma^2}{g_{12}}\cdot\frac{\gamma_{12}(1+\gamma^*)}{1-\gamma^*\gamma_{12}}.\qquad(21)$$

In Region $A$, Eq. (6) yields the following relation between the powers of the PU:
# Charge-density-wave order takes over antiferromagnetism in Bi2Sr2−x La x CuO6 superconductors

## Abstract

Superconductivity appears in the cuprates when a spin order is destroyed, while the role of charge is less known. Recently, charge density wave (CDW) was found below the superconducting dome in YBa2Cu3O y when a high magnetic field is applied perpendicular to the CuO2 plane, which was suggested to arise from incipient CDW in the vortex cores that becomes overlapped. Here by 63Cu-nuclear magnetic resonance, we report the discovery of CDW induced by an in-plane field, setting in above the dome in single-layered Bi2Sr2−x La x CuO6. The onset temperature T CDW takes over the antiferromagnetic order temperature T N beyond a critical doping level at which superconductivity starts to emerge, and scales with the pseudogap temperature T*. These results provide important insights into the relationship between spin order, CDW and the pseudogap, and their connections to high-temperature superconductivity.

## Introduction

High transition-temperature (T c) superconductivity is obtained by doping carriers to destroy an antiferromagnetic (AF) spin ordered Mott insulating phase. Although it is generally believed that the interaction responsible for the spin order is important for the superconductivity1, the electron pairing mechanism is still elusive. This is because the nature of the normal state is still unclear2,3. For example, in the region with low carrier concentration p (0 < p < 0.2), a pseudogap state emerges where partial density of states (DOS) is lost below a characteristic temperature T* well above T c 4 or even T N 5.
Although the nature of the strange metallic state is still under debate, it is likely connected to both spin and charge fluctuations or even orders. In fact, experimental progress suggests that the spin and charge degrees of freedom are highly entangled. For example, a striped spin/charge order was found around x ~ 1/8 in La1.6−x Nd0.4Sr x CuO4 (LSCO) two decades ago6. More recently, various forms of charge order were reported in many other systems. Scanning tunneling microscopy (STM) in Bi2Sr2CaCu2O8+δ found a modulation of local DOS in the vortex cores where superconductivity is destroyed7, which was interpreted as due to halos of incipient CDW localized within the cores8,9. Resonant elastic and inelastic x-ray spectroscopy (RXS) measurements found a short-range CDW with ordering vectors along the in-plane Cu-O bond directions, q = (~0.3, 0) and (0, ~0.3). The correlation length is $$\xi _{a,b}$$ ~ 50 Å for YBa2Cu3O7−y (YBCO)10,11 and $$\xi _{a,b}$$ ~ 20 Å for the other systems12,13,14,15,16. Quite recently, it was suggested by 17O nuclear magnetic resonance (NMR) in YBCO that such CDW is of static origin17. In Bi2Sr2−x La x CuO6+δ (Bi2201), most intriguingly, the onset temperature of the short-range CDW was found to coincide12,16 with T* that is far above T c or T N 5,18. Application of a high magnetic field is useful for diagnosing the interplay between various orders in the cuprates. When a high magnetic field is applied perpendicular to the CuO2 plane, superconductivity can be suppressed substantially. In YBCO, 63Cu NMR at H = 28.5 T revealed a long-range charge density modulation perpendicular to the CuO-chain in the sample with p = 0.10819. RXS also indicated that a high field induces a correlation along the CuO-chain direction and modifies the coupling between CuO2 bilayers, thus causing a three-dimensional CDW20,21.
These observations are consistent with the early discovery of a Fermi-surface reconstruction by quantum oscillations22 and a recent report of a thermodynamic phase transition23. These findings have attracted much interest, but the origin of the CDW and its connection to superconductivity are yet unknown. As the long-range CDW onsets below T c(H = 0) and only emerges when the field is applied perpendicular to the CuO2 plane, a widespread speculation is that it is due to incipient CDW in the vortex cores7 that becomes overlapped as the field gets stronger11,13,24. In fact, a field as large as 28.5 T applied in the CuO2 plane of YBCO did not bring about any long-range CDW19. Also, the role of the CuO chain is unclear; in Bi2Sr2CaCu2O8+δ without a CuO chain, no long-range CDW was found25. In order to clarify the relationship between the intertwined AF spin order, CDW, pseudogap and superconductivity, we apply high magnetic fields up to 42.5 T parallel to the Cu−O bond direction (H||a or b axis) in Bi2Sr2−x La x CuO6+δ, where the pseudogap spans from the parent AF insulating phase to the overdoped superconducting regime5,18. This material has no CuO chain, and the application of an in-plane field does not create vortex cores in the CuO2 plane. Surprisingly, we discover a long-range CDW that emerges far above the superconducting dome for H || > 10 T. We find that such CDW order becomes the successor of the AF order beyond p = 0.107, at which superconductivity starts to emerge. The T CDW takes over T N, but disappears well before the pseudogap closes. Our results indicate that CDW can be well disentangled from other orders.

## Results

### Evidence for a field-induced CDW in underdoped Bi2201

Figure 1a–d shows the 63Cu-NMR satellite (3/2 ↔ 1/2 transition) lines for four compounds of Bi2Sr2−x La x CuO6+δ at two representative temperatures at H || = 14.5 or 20.1 T. As seen in Fig. 1a, no change between T = 100 and 4.2 K is observed for the optimally doped compound (p = 0.162).
However, the spectrum is broadened at T = 4.2 K for p = 0.135 (Fig. 1b), and a splitting ±δf of the spectrum is observed at T = 4.2 K for p = 0.128 and 0.114, as seen in Fig. 1c, d. The spectra at T = 4.2 K for p = 0.128 and 0.114 can be reproduced by a sum of two Gaussian functions. It is noted that at low fields below 10 T, the spectrum shows no appreciable temperature dependence in the whole temperature range. The NMR line splitting indicates a long-range order, as it measures an ensemble of nuclear spins over the sample. Figure 1e shows the field evolution of δf for p = 0.114. The δf grows steeply at H || = 10.4 T and saturates above H || ~ 14.5 T. Figure 1f shows the temperature dependence of δf for p = 0.114 under various fields. The δf grows rapidly below T ~ 30, 55, and 60 K at H || = 11, 13, and above 14.5 T, respectively. These results indicate that a field-induced phase transition occurs in underdoped Bi2201. The results are qualitatively similar to those found in YBCO, where the same transition line splits into two peaks due to the spatial modulation of the NQR frequency ν Q 19. Next we show that the field-induced phase transition is due to a charge order, but not a spin order. Figure 2a, b, respectively, shows the temperature dependence of the satellite and the center lines for the sample with the lowest doping p = 0.114. A spectrum broadening is also found in the center line (1/2 ↔ −1/2 transition) at T = 4.2 K, but it is much smaller than for the satellite line. Figure 2c shows the temperature dependence of the NMR intensity obtained by integrating the spectrum at each temperature. The intensity has no anomaly above T c, indicating that there is no spin order. For the antiferromagnetic insulator p = 0.107 (x = 0.8), the intensity decreases below T N = 66 K because an internal field shifts the peak frequency far away5. Furthermore, a possibility of striped-phase formation leading to a wipe-out effect found in LSCO26 can also be ruled out.
It is noted that the possibility of a field-induced spin-density-wave order has already been ruled out previously for p = 0.16227. Therefore, the splitting of the satellite line (Fig. 2a) and the broadening of the center line (Fig. 2b) are due to a distribution of the Knight shift $$K_\parallel \pm \delta K_\parallel$$ and of the NQR frequency, ν Q ± δν, as observed in YBCO. Furthermore, the splitting δf satellite = 1.22 MHz is much larger than δf center = 0.271 MHz, indicating that the ν Q change is the main contributor to the observed line splitting. By a simple calculation (Supplementary Note 1), we find that $$\delta K_\parallel \sim 0.05 \pm 0.01\%$$ and δν = 2.5 ± 0.2 MHz can reproduce both the satellite and the center lines at the same time (shaded areas in Fig. 2a, b). The relation ν Q = 22.0 + 39.6 p (Supplementary Fig. 1) then yields a hole-concentration distribution $$\delta p\sim 0.06 \pm 0.01$$ at the Cu site. Since there is no spin order here (Fig. 2c) as mentioned already, the splitting of the satellite line δf (∝ δp) indicates a field-induced long-range charge distribution, i.e., a formation of CDW at low temperature in underdoped Bi2201. Figure 2d, e shows the temperature dependence of the nuclear spin-lattice relaxation rate divided by T, 1/T 1 T, and of the spin-spin relaxation rate 1/T 2 for p = 0.114, obtained at two different fields. At H = 9 T, both quantities decrease monotonically below T* = 230 K. At H = 13 T, however, a pronounced peak was found in 1/T 1 T at T CDW = 55 K. Such a peak in 1/T 1 T is a characteristic of a CDW order28. The 1/T 2 also shows a sharp decrease at T CDW = 55 K. These results provide further evidence for a field-induced CDW phase transition.

### H-T phase diagram for underdoped Bi2201

To obtain the CDW onset temperature (T CDW) and the threshold field (H CDW) for p = 0.114, we study the temperature dependence of the NMR spectra at various magnetic fields (Fig. 1e, f). Figure 3 shows the HT phase diagram for p = 0.114.
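The hole-concentration spread quoted above follows directly from the linear ν Q(p) relation: a fitted NQR-frequency spread δν converts to δp through the slope of ν Q = 22.0 + 39.6 p.

```python
# Convert the fitted NQR-frequency spread into a hole-concentration spread
# using the text's relation nu_Q = 22.0 + 39.6 p (both in MHz).
dnu = 2.5                      # MHz, fitted spread from the satellite splitting
slope = 39.6                   # MHz per hole, slope of nu_Q(p)
dp = dnu / slope

# matches the quoted delta-p ~ 0.06 +/- 0.01 at the Cu site
assert abs(dp - 0.06) < 0.01
```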
Remarkably, the long-range CDW state in Bi2201 emerges at a temperature far above T c, in contrast to that in YBCO, where CDW appears below T c(H = 0)19,24.

### Relationship between CDW and superconductivity

Figure 4a, b shows the H-dependence and T-dependence of the satellite splitting δf, which allow us to obtain H CDW and T CDW for various doping levels. Figure 5 shows the hole-concentration dependence of H c2, H CDW, and T CDW. The H c2 ~ 60 T for p = 0.162 (Supplementary Fig. 5) decreases with decreasing doping level but increases again at p = 0.114. Although the previous Nernst effect study on three Bi2201 samples (p = 0.12, 0.16, and 0.19) did not take a closer look into the doping range as we did here29, our result is consistent with the results of YBCO24 and La1.8−x Eu0.2Sr x CuO4 29. The H CDW is slightly lower than that in YBCO, suggesting that CDW has a similar energy scale across different classes of cuprates. However, the relationship between H CDW and H c2 is completely different from that seen in YBCO, where H CDW scaled with H c2. Namely, H CDW was the lowest at the doping concentration where H c2 was the smallest24, which led to the suggestion that CDW can only be seen when the superconducting state is suppressed as the vortex cores become overlapped. In the present case, however, no vortex cores are created in the CuO2 plane. In fact, H CDW and T CDW are more related to the doping concentration itself, as can be seen in Fig. 5, than to H c2. Namely, the long-range CDW order is induced more easily closer to the AF phase boundary.

## Discussion

In this section, we discuss the possible CDW form and the implications of the phase diagram we found. First, the result can be understood by an incommensurate 1-dimensional (1D) long-range CDW as follows, as the situation is similar to that observed at the in-plane Cu2F-site located below the oxygen-filled CuO chain in YBCO19.
For a unidirectional CDW state, the wave modulation causes a spatial distribution in the electric field gradient (EFG) and thus in the NQR frequency, so that ν = ν_Q + δν·cos(ϕ(X))28,30, where X (= a or b axis) is the modulation direction. The NMR spectral intensity I(ν) depends on the spatial variation of ϕ(X) as I(ν) = 1/(2π·|dν/dϕ|). For an incommensurate order, ϕ(X) is proportional to X, so that the NMR spectrum shows edge singularities at ν = ν_Q ± δν, as $$I(\nu ) = 1/\left( {2\pi \delta \nu \sqrt {1 - \left( {\left( {\nu - \nu _Q} \right)/\delta \nu } \right)^2} } \right)$$ 28,30. By convoluting a broadening function, the two-peak structure can be reproduced. In such a case, the quantity 2δν corresponds to the CDW order parameter28. We emphasize that the value of δp is twice as large as that observed in YBCO19, indicating that a larger CDW amplitude is realized in Bi2201. This difference may arise from the different crystal structures of the two systems. YBCO is a bilayer system while Bi2201 is single-layered. When the CDW has a different phase between the two CuO2 planes, the ordering effect would be weakened or even canceled out. It is known that a magnetic field acts to induce correlations along the Cu−O-chain direction and modifies the coupling between CuO2 bilayers in YBCO20,21. In the present case, the short-range CDW at zero magnetic field12,16 could likewise be modified by H|| > 10 T into a long-range 1D CDW along the Cu−O direction. Second, what about a 2D-CDW case? A recent resonant X-ray scattering measurement on Bi2201 with p ~ 0.11 found a perfectly 2D but local (ξ ~ 20 Å) CDW formation along the Cu−O bond direction with the wave vectors (Q*, 0) and (0, Q*) (Q* ~ 0.26)16,31. On the other hand, an STM measurement suggested a commensurate density wave with the ordering vectors q_DW = (0.25, 0) and (0, 0.25)32.
In either case, if such a local CDW becomes long-ranged with the same ordering vectors, the spatial distribution of the NQR frequency can be written as ν = ν_Q + δν_X·cos(ϕ(X)) + δν_Y·cos(ϕ(Y)), where X = a axis and Y = b axis are the modulation directions30. As suggested by the RXS and STM measurements, when the CDW amplitudes are equivalent, δν_X = δν_Y, such CDWs yield $$I(\nu )\sim - {\mathrm{ln}}[(\nu - \nu _{\mathrm{Q}})/\delta \nu ]/\delta \nu$$ 30. In this case, a logarithmic singularity appears at ν − ν_Q = 0 (ref. 30), which is different from the 1D case. It is obvious that we cannot explain our experimental results with such a 2D CDW. However, if the amplitudes for the two directions are different, $$\delta \nu _{X(Y)} \gg \delta \nu _{Y(X)}$$, two edge singularities will appear30. It is also interesting to note that if there are CDW domains with modulations ν_X = ν_Q + δν_X·cos(ϕ(X)) and ν_Y = ν_Q + δν_Y·cos(ϕ(Y)) in each domain, the NMR lineshape will be the same as in the 1D case. Third, a 3D CDW will not show two edge singularities in the ν_Q distribution in any case30. Therefore, we may exclude the possibility of a 3D CDW because, unlike in bilayer YBCO, the long distance between CuO2 planes produces no CDW correlation along the c axis in Bi220116, and in our case the magnetic field is applied parallel to the CuO2 plane. We now show in Fig. 6 how the long-range CDW emerges as the magnetic field is increased. As seen in the H-p plane at T = 4.2 K, the field-induced CDW emerges for H|| > 10 T in the underdoped regime. At such high fields, upon increasing doping, the AF state with T_N = 66 K at p = 0.107 changes to a CDW-ordered state with T_CDW ~ 60 K at p = 0.114. Upon further doping to p = 0.162, where the pseudogap persists, however, the CDW order disappears. Although a detailed analysis is difficult, a similar field-induced CDW is also found when the magnetic field is applied perpendicular to the CuO2 plane (Supplementary Fig. 6).
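As an aside, the contrast between the two edge singularities of a 1D incommensurate modulation and the single logarithmic singularity of an equal-amplitude 2D modulation can be illustrated numerically. The following is a hypothetical sketch, not the authors' analysis code; the ν_Q and δν values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
nu_q, dnu, n = 160.0, 2.5, 200_000   # MHz; illustrative values only
phi_x = rng.uniform(0.0, 2.0 * np.pi, n)
phi_y = rng.uniform(0.0, 2.0 * np.pi, n)

# 1D incommensurate CDW: nu = nu_Q + dnu*cos(phi) -> edge singularities at nu_Q +/- dnu
nu_1d = nu_q + dnu * np.cos(phi_x)
# 2D CDW with equal amplitudes: single (logarithmic) peak at nu = nu_Q instead
nu_2d = nu_q + 0.5 * dnu * (np.cos(phi_x) + np.cos(phi_y))

bins = np.linspace(nu_q - 1.5 * dnu, nu_q + 1.5 * dnu, 82)  # 81 bins; center bin index 40
hist_1d, _ = np.histogram(nu_1d, bins)
hist_2d, _ = np.histogram(nu_2d, bins)

# the 1D spectrum piles up at the edges (a two-peak structure after broadening),
# while the 2D spectrum peaks at the center bin
print(hist_1d.argmax(), hist_2d.argmax())
```

Convolving `hist_1d` with a Gaussian broadening kernel reproduces the two-peak satellite structure discussed above.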
Figure 7 compares the phase diagram with that for YBCO. In YBCO, the short-range CDW sets in far below T* and the field-induced CDW (FICDW) occurs inside the superconducting dome, forming a dome-like shape19, while in Bi2201 the short-range CDW sets in right at T* and the FICDW emerges far above the superconducting dome and coexists with superconductivity. Figure 6 reveals several important things. First and most remarkably, the T* curve, shifted down in temperature, coincides with the T_CDW curve. As can be seen more directly in Fig. 8, T_CDW scales with T*. This may suggest that the pseudogap is a fluctuating form of the long-range order found in this work, but more work is needed. Very recently, polar Kerr effect33 and optical rotational anisotropy34 measurements suggested that a possible phase transition takes place at T*. However, we note that the probes used are ultrafast in time scale. In NMR measurements, the time scale is in the 10^−8 s range, which is much slower than in the optical measurements33,34, so it is reasonable that T* is seen as a fluctuating crossover temperature. Second, T_N is succeeded by T_CDW beyond a critical doping level at which superconductivity emerges, pointing to the important role of the charge degree of freedom in high-temperature superconductivity. However, the detailed evolution from AF to CDW order under high magnetic fields is unclear at the moment. It is a future task to clarify whether or not the evolution is a first-order phase transition. The result also calls for further scrutiny of the AF insulating phase. In fact, the entanglement of the spin and charge degrees of freedom3,35 was recently found to occur already in the insulating phase itself36. In any case, our results show that CDW order is another outstanding quantum phenomenon that needs to be addressed on the same footing as the AF spin order.
Finally, this first demonstration that an in-plane field can be used to tune the electronic state should stimulate further work that will eventually help to solve the problem of high-T_c superconductivity.

## Methods

### Samples

The single crystals of Bi2Sr2−xLaxCuO6+δ (Bi2201; p = 0.114 (x = 0.75), 0.128 (0.65), 0.135 (0.60), and 0.162 (0.40)) were grown by the traveling-solvent floating-zone method37,38. The hole concentration (p) was estimated previously39. Small and thin single-crystal platelets, typically up to 2 mm × 2 mm × 0.1 mm in size, cleaved from an as-grown ingot, were used. The in-plane Cu−O bond direction (a or b axis) was determined by Laue reflection. T_c(H) is defined as the onset temperature of diamagnetism observed by ac susceptibility measured using the NMR coil. H_c2 is determined by fitting T_c(H) to the WHH formula40.

### NMR

The 63Cu-NMR spectra were taken by sweeping the rf frequency at a fixed field below H = 15 T, and by sweeping the field at a fixed frequency above H = 15 T. Measurements at H = 14.5 T were conducted at the Institute of Physics, CAS, Beijing, and those below H = 14.5 T were conducted at Okayama University. High magnetic fields above H = 15 T were generated by the hybrid magnet at the National High Magnetic Field Laboratory, Tallahassee, Florida. For 63,65Cu, the nuclear spin Hamiltonian is expressed as the sum of the Zeeman and nuclear quadrupole interaction terms, $${\cal H} = {\cal H}_{\mathrm{z}} + {\cal H}_{\mathrm{Q}} = - ^{63,65}\gamma \hbar {\mathbf{I}} \cdot {\mathbf{H}}_0(1 + K) + (h\nu _{\mathrm{Q}}/6)[3I_z^2 - I(I + 1) + \eta (I_x^2 - I_y^2)]$$, where 63γ = 11.285 MHz/T and 65γ = 12.089 MHz/T, K is the Knight shift, and I = 3/2 is the 63,65Cu nuclear spin.
The NQR frequency ν_Q and the asymmetry parameter η are defined as $$\nu_{\mathrm{Q}} = \frac{{3eQV_{{\mathrm{zz}}}}}{{2I(2I - 1)h}}$$, $$\eta = \frac{{V_{{\mathrm{xx}}} - V_{{\mathrm{yy}}}}}{{V_{{\mathrm{zz}}}}}$$, with Q and V_αβ being the nuclear quadrupole moment and the electric field gradient (EFG) tensor41. The principal axis z of the EFG is along the c axis and η = 0 (ref. 42). Due to $${\cal H}_{\mathrm{Q}}$$, one obtains the NMR center line and the two satellite transition lines between $$\left| m \right\rangle$$ and $$\left| {m - 1} \right\rangle$$ (m = 3/2, 1/2, −1/2), at $$\nu _{m \leftrightarrow m - 1} = ^{63,65}\gamma H_0(1 + K) + (\nu _{\mathrm{Q}}/2)(3\mathop {{cos}}\nolimits^2 \theta - 1)(m - 1/2)$$ + second-order correction. The second term on the right-hand side is the first-order term due to the quadrupole interaction. Here, θ is the angle between H and the principal axis of the EFG. T_1 and T_2 were measured at frequencies within the center peak (m = 1/2 ↔ −1/2 transition). The T_1 values were measured by using a single saturating pulse and were determined by standard fits of the recovery curve of the nuclear magnetization to the theoretical function for nuclear spin I = 3/2 (ref. 18). The T_2 values were obtained by fitting the spin-echo decay curve of the nuclear magnetization I(t) to I(t) = I(0)exp(−2t/T_2)5.

### Data availability

The data that support the findings of this study are available on reasonable request.

## References

1. Lee, P. A., Nagaosa, N. & Wen, X.-G. Doping a Mott insulator: physics of high-temperature superconductivity. Rev. Mod. Phys. 78, 17–85 (2006).
2. Keimer, B., Kivelson, S. A., Norman, M. R., Uchida, S. & Zaanen, J. From quantum matter to high-temperature superconductivity in copper oxides. Nature 518, 179–186 (2015).
3. Fradkin, E., Kivelson, S. A. & Tranquada, J. M. Colloquium: theory of intertwined orders in high temperature superconductors. Rev. Mod. Phys. 87, 457–482 (2015).
4. Timusk, T. & Statt, B.
The pseudogap in high-temperature superconductors: an experimental survey. Rep. Prog. Phys. 62, 61–122 (1999).
5. Kawasaki, S., Lin, C. T., Kuhns, P. L., Reyes, A. P. & Zheng, G.-q. Carrier-concentration dependence of the pseudogap ground state of superconducting Bi2Sr2−xLaxCuO6+δ revealed by 63,65Cu-nuclear magnetic resonance in very high magnetic fields. Phys. Rev. Lett. 105, 137002 (2010).
6. Tranquada, J. M., Sternlieb, B. J., Axe, J. D., Nakamura, Y. & Uchida, S. Evidence for stripe correlations of spins and holes in copper oxide superconductors. Nature 375, 561–563 (1995).
7. Hoffman, J. E. et al. A four unit cell periodic pattern of quasi-particle states surrounding vortex cores in Bi2Sr2CaCu2O8+δ. Science 295, 466–469 (2002).
8. Zhang, T., Demler, E. & Sachdev, S. Competing orders in a magnetic field: spin and charge order in the cuprate superconductors. Phys. Rev. B 66, 094501 (2002).
9. Kivelson, S. A., Lee, D.-H., Fradkin, E. & Oganesyan, V. Competing order in the mixed state of high-temperature superconductors. Phys. Rev. B 66, 144516 (2002).
10. Ghiringhelli, G. et al. Long-range incommensurate charge fluctuations in (Y,Nd)Ba2Cu3O6+x. Science 337, 821–825 (2012).
11. Chang, J. et al. Direct observation of competition between superconductivity and charge density wave order in YBa2Cu3O6.67. Nat. Phys. 8, 871–876 (2012).
12. Comin, R. et al. Charge order driven by Fermi-arc instability in Bi2Sr2−xLaxCuO6+δ. Science 343, 390–392 (2014).
13. da Silva Neto, E. H. et al. Ubiquitous interplay between charge ordering and high-temperature superconductivity in cuprates. Science 343, 393–396 (2014).
14. Hashimoto, M. et al. Direct observation of bulk charge modulations in optimally doped Bi1.5Pb0.6Sr1.54CaCu2O8+δ. Phys. Rev. B 89, 220511(R) (2014).
15. Tabis, W. et al. Charge order and its connection with Fermi-liquid charge transport in a pristine high-Tc cuprate. Nat. Commun. 5, 5875 (2014).
16. Peng, Y.
et al. Direct observation of charge order in underdoped and optimally doped Bi2(Sr,La)2CuO6 by resonant inelastic x-ray scattering. Phys. Rev. B 94, 184511 (2016).
17. Wu, T. et al. Incipient charge order observed by NMR in the normal state of YBa2Cu3Oy. Nat. Commun. 6, 6438 (2015).
18. Zheng, G.-q., Kuhns, P. L., Reyes, A. P., Liang, B. & Lin, C. T. Critical point and the nature of the pseudogap of single-layered copper-oxide Bi2Sr2−xLaxCuO6+δ superconductors. Phys. Rev. Lett. 94, 047006 (2005).
19. Wu, T. et al. Magnetic-field-induced charge-stripe order in the high-temperature superconductor YBa2Cu3Oy. Nature 477, 191–194 (2011).
20. Gerber, S. et al. Three-dimensional charge density wave order in YBa2Cu3O6.67 at high magnetic fields. Science 350, 949–952 (2015).
21. Chang, J. et al. Magnetic field controlled charge density wave coupling in underdoped YBa2Cu3O6+x. Nat. Commun. 7, 11494 (2016).
22. Doiron-Leyraud, N. et al. Quantum oscillations and the Fermi surface in an underdoped high-Tc superconductor. Nature 447, 565–568 (2007).
23. LeBoeuf, D. et al. Thermodynamic phase diagram of static charge order in underdoped YBa2Cu3Oy. Nat. Phys. 9, 79–83 (2013).
24. Wu, T. et al. Emergence of charge order from the vortex state of a high-temperature superconductor. Nat. Commun. 4, 2113 (2013).
25. Crocker, J. et al. NMR studies of pseudogap and electronic inhomogeneity in Bi2Sr2CaCu2O8+δ. Phys. Rev. B 84, 224502 (2011).
26. Hunt, A. W., Singer, P. M., Cederström, A. F. & Imai, T. Glassy slowing of stripe modulation in (La,Eu,Nd)2−x(Sr,Ba)xCuO4: a 63Cu and 139La NQR study down to 350 mK. Phys. Rev. B 64, 134525 (2001).
27. Mei, J.-W., Kawasaki, S., Zheng, G.-q., Weng, Z.-Y. & Wen, X.-G. Luttinger-volume violating Fermi liquid in the pseudogap phase of the cuprate superconductors. Phys. Rev. B 85, 134519 (2012).
28. Kawasaki, S. et al.
Coexistence of multiple charge-density waves and superconductivity in SrPt2As2 revealed by 75As-NMR/NQR and 195Pt-NMR. Phys. Rev. B 91, 060510(R) (2015).
29. Chang, J. et al. Decrease of upper critical field with underdoping in cuprate superconductors. Nat. Phys. 8, 751–756 (2012).
30. Blinc, R. & Apih, T. NMR in multidimensionally modulated incommensurate and CDW systems. Prog. Nucl. Magn. Reson. Spectrosc. 41, 49–82 (2002).
31. Comin, R. et al. Symmetry of charge order in cuprates. Nat. Mater. 14, 796–800 (2015).
32. Fujita, K. et al. Direct phase-sensitive identification of a d-form factor density wave in underdoped cuprates. PNAS 111, E3026–E3032 (2014).
33. He, R.-H. et al. From a single-band metal to a high-temperature superconductor via two thermal phase transitions. Science 331, 1579–1583 (2011).
34. Zhao, L. et al. A global inversion-symmetry-broken phase inside the pseudogap region of YBa2Cu3Oy. Nat. Phys. 13, 250–254 (2017).
35. Tu, W. L. & Lee, T. K. Genesis of charge orders in high temperature superconductors. Sci. Rep. 6, 18675 (2016).
36. Cai, P. et al. Visualizing the evolution from the Mott insulator to a charge-ordered insulator in lightly doped cuprates. Nat. Phys. 12, 1047–1051 (2016).
37. Peng, J. B. & Lin, C. T. Growth and accurate characterization of Bi2Sr2−xLaxCuO6+δ single crystals. J. Supercond. Nov. Magn. 23, 591–596 (2010).
38. Liang, B. & Lin, C. T. Floating-zone growth and characterization of high-quality Bi2Sr2−xLaxCuO6+δ single crystals. J. Cryst. Growth 267, 510–516 (2004).
39. Ono, S. et al. Metal-to-insulator crossover in the low-temperature normal state of Bi2Sr2−xLaxCuO6+δ. Phys. Rev. Lett. 85, 638–641 (2000).
40. Werthamer, N. R., Helfand, E. & Hohenberg, P. C. Temperature and purity dependence of the superconducting critical field, Hc2. III. Electron spin and spin-orbit effects. Phys. Rev. 147, 295–302 (1966).
41.
Abragam, A. The Principles of Nuclear Magnetism (Oxford University Press, London, 1961).
42. Zheng, G.-q., Kitaoka, Y., Ishida, K. & Asayama, K. Local hole distribution in the CuO2 plane of high-Tc Cu-oxides studied by Cu and oxygen NQR/NMR. J. Phys. Soc. Jpn 64, 2524–2532 (1995).

## Acknowledgements

We thank D.-H. Lee, S. Uchida, L. Taillefer, T. K. Lee, M.-H. Julien, and S. Onari for useful discussion, and S. Maeda and D. Kamijima for experimental assistance. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by NSF Cooperative Agreement No. DMR-1157490 and the State of Florida. Support by research grants from Japan MEXT (No. 25400374 and 16H04016), China NSF (No. 11634015), and MOST of China (No. 2016YFA0300502 and No. 2015CB921304) is acknowledged.

## Author information

### Contributions

G.-q.Z. planned the project. C.T.L. synthesized the single crystals. S.K., Z.L., M.K., P.L.K., A.P.R., and G.-q.Z. performed NMR measurements. G.-q.Z. wrote the manuscript with inputs from S.K. All authors discussed the results and interpretation.

### Corresponding author

Correspondence to Guo-qing Zheng.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Kawasaki, S., Li, Z., Kitahashi, M. et al. Charge-density-wave order takes over antiferromagnetism in Bi2Sr2−xLaxCuO6 superconductors. Nat. Commun. 8, 1267 (2017). https://doi.org/10.1038/s41467-017-01465-9
# Inequality on outer product

Let $$v \in \mathbb{R}^d$$ be a vector whose norm is upper bounded by $$n$$, and $$\overline{v}$$ an estimate of $$v$$ with some noise, such that $$\|v- \overline{v}\| < \delta$$. I would like a bound on $$\left\| vv^T - \overline{v}\overline{v}^T\right\|$$. Recall that $$vv^T$$ here means an outer product, so it forms a $$d \times d$$ matrix. First, note that $$\|vv^T\| \le n^2$$, and that we can express $$\overline{v} = v + e$$, where $$e$$ is a vector such that $$\|e\| < \delta$$. From this, we can write: $$\| vv^T - (v+e)(v+e)^T\| = \| vv^T - (v+e)(v^T+e^T)\| = \| vv^T - (vv^T+ve^T + ev^T+ee^T)\|$$ $$= \|- ve^T - ev^T - ee^T \|$$ From this we can conclude that $$\| vv^T - \overline{v}\overline{v}^T \| < 2n\delta + \delta^2$$. Am I right? I have no intuition for why an absolute bound of $$\delta$$ becomes relative (i.e. depends on $$n$$). I would rather expect it to depend on $$d$$! Thanks.

You just need to apply the triangle inequality to see this: \begin{align*} ||vv^T - \bar{v}\bar{v}^T|| &= ||-ve^T - ev^T - ee^T|| \\ &\color{red}{\leq} ||-ve^T|| + ||-ev^T|| + ||-ee^T|| \quad\color{red}{\text{triangle inequality}} \\ &= ||ve^T|| + ||ev^T|| + ||ee^T|| \\ &\color{blue}{\leq} n\delta + \delta n + \delta^2 = 2n\delta + \delta^2 \end{align*} The $$\color{blue}{\text{last inequality}}$$ results from directly applying the definition of a matrix norm. To recap, the norm of a real $$p \times q$$ matrix $$A$$ induced by the usual vector norms is $$||A||_{p \times q} := \sup\{||Ax||_{p} : x \in \Bbb R^q, ||x||_{q} = 1\}$$ where I put subscripts to differentiate between the $$p \times q$$ matrix norm and the usual vector norms. So, for example, to bound the quantity $$||ve^T||_{d \times d}$$, we investigate the $$\sup$$ of all $$||(ve^T)x||_d$$ where $$||x||_d = 1$$. Note that $$(ve^T)x = v(e^Tx)$$ and $$e^Tx$$ is a scalar.
So $$||(ve^T)x||_d = ||v(e^Tx)||_d = \color{red}{||v||_d}\,|e^Tx| \leq \color{red}{n}\,|e^Tx| \quad\color{red}{\text{since } \|v\|_d \leq n}$$ Also, $$e^Tx$$ is the inner product of $$e$$ and $$x$$, so by Cauchy–Schwarz $$|e^Tx| \leq ||e||_d\color{blue}{||x||_d} = ||e||_d \cdot \color{blue}{1} < \delta \quad\color{blue}{\text{since } \|x\|_d = 1}$$ Thus, combining the two results, we get $$||(ve^T)x||_d < n\delta$$, and since this upper bound $$n \delta$$ holds for all values $$||(ve^T)x||_d$$ with $$||x||_d = 1$$, taking the supremum over them gives $$||ve^T||_{d \times d} \leq n\delta$$ You can similarly find that $$||ev^T||_{d \times d} \leq \delta n$$ and $$||ee^T||_{d \times d} \leq \delta^2$$.
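The bound can also be sanity-checked numerically. Below is a small illustrative sketch (not part of the original question or answer); it uses the spectral norm via NumPy, and the dimension, norm bound, and noise level are arbitrary choices. Note that the bound indeed scales with $n$ and not with $d$:

```python
import numpy as np

rng = np.random.default_rng(42)
d, n, delta = 8, 5.0, 0.01

v = rng.standard_normal(d)
v *= n / np.linalg.norm(v)            # scale so that ||v|| = n (hence <= n)
e = rng.standard_normal(d)
e *= 0.9 * delta / np.linalg.norm(e)  # perturbation with ||e|| < delta

v_bar = v + e
diff = np.outer(v, v) - np.outer(v_bar, v_bar)

lhs = np.linalg.norm(diff, 2)         # operator (spectral) norm of the difference
rhs = 2 * n * delta + delta ** 2      # the bound derived above

assert lhs <= rhs
print(lhs, rhs)
```

Increasing `d` while keeping `n` and `delta` fixed leaves `rhs` unchanged, matching the derivation.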
# e-ILV

The e-ILV provides free access to all the terms and definitions contained in the international standard CIE S 017:2020 ILV: International Lighting Vocabulary, 2nd edition. To search for another term please use "Back to the list" to return to the main page of the e-ILV. For the complete set of terms and definitions CIE S 017:2020 can be purchased from the CIE Webshop. Members of a CIE National Committee or Associate National Committee have access to a 66.7 % discount on the purchase price of the standard.

# 17-21-080 radiant exitance

Me; M

density of exiting radiant flux with respect to area, Me = dΦe/dA, at a point on a real or imaginary surface, where Φe is radiant flux and A is the area from which the radiant flux leaves

Note 1 to entry: For Planckian radiation, Me = σT^4, where σ is the Stefan–Boltzmann constant and T is thermodynamic temperature.

Note 2 to entry: The corresponding photometric quantity is "luminous exitance". The corresponding photon quantity is "photon exitance".

Note 3 to entry: The radiant exitance is expressed in watt per square metre (W⋅m−2).

Note 4 to entry: This entry was numbered 845-01-47 in IEC 60050-845:1987.

Note 5 to entry: This entry was numbered 17-1020 in CIE S 017:2011.

Publication date: 2020-12
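Note 1 is easy to evaluate numerically. The following is an illustrative sketch (not part of the CIE standard itself); the temperature is an example value only:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_k):
    """Radiant exitance M_e = sigma * T^4 of a Planckian radiator, in W m^-2."""
    return SIGMA * temperature_k ** 4

# Example: a Planckian radiator at 5772 K (about the Sun's effective temperature)
print(radiant_exitance(5772.0))  # ~6.29e7 W m^-2
```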
# Assign multiple variables with Python list values

Depending on the needs of the program, we may have a requirement to assign the values in a list to many variables at once, so that they can be used for calculations in the rest of the program. In this article we will explore various approaches to achieve this.

## Using for in

The for loop can help us iterate through the elements of the given list while assigning them to the variables declared in a given sequence. We have to mention the index positions of the values which will get assigned to the variables.

## Example

Live Demo

listA = ['Mon', ' 2pm', 1.5, '11 miles'] # Given list
print("Given list A: " ,listA)
# using for in
vDay, vHrs, vDist = [listA[i] for i in (0, 2, 3)]
# Result
print ("The variables : " + vDay + ", " + str(vHrs) + ", " +vDist)

## Output

Running the above code gives us the following result −

Given list A: ['Mon', ' 2pm', 1.5, '11 miles']
The variables : Mon, 1.5, 11 miles

## With itemgetter

The itemgetter function from the operator module fetches the items at the specified indexes. We directly assign them to the variables.

## Example

Live Demo

from operator import itemgetter
listA = ['Mon', ' 2pm', 1.5, '11 miles'] # Given list
print("Given list A: " ,listA)
# using itemgetter
vDay, vHrs, vDist = itemgetter(0, 2, 3)(listA)
# Result
print ("The variables : " + vDay + ", " + str(vHrs) + ", " +vDist)

## Output

Running the above code gives us the following result −

Given list A: ['Mon', ' 2pm', 1.5, '11 miles']
The variables : Mon, 1.5, 11 miles

## With itertools.compress

The compress function from the itertools module picks out elements using Boolean values for the index positions. So for index positions 0, 2 and 3 we pass the value 1 to the compress function, and then assign the fetched values to the variables.
## Example

Live Demo

from itertools import compress
listA = ['Mon', ' 2pm', 1.5, '11 miles'] # Given list
print("Given list A: " ,listA)
# using compress
vDay, vHrs, vDist = compress(listA, (1, 0, 1, 1))
# Result
print ("The variables : " + vDay + ", " + str(vHrs) + ", " +vDist)

## Output

Running the above code gives us the following result −

Given list A: ['Mon', ' 2pm', 1.5, '11 miles']
The variables : Mon, 1.5, 11 miles

Published on 13-May-2020 14:07:05
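For completeness (this variant is not from the original article): since Python 3, plain sequence unpacking with a throwaway name achieves the same result without importing anything, and is often the most idiomatic choice when the list length is known:

```python
listA = ['Mon', ' 2pm', 1.5, '11 miles']  # Given list
print("Given list A: ", listA)

# plain unpacking: bind every position, discard index 1 with "_"
vDay, _, vHrs, vDist = listA

# Result
print("The variables : " + vDay + ", " + str(vHrs) + ", " + vDist)
```

Running the above code gives the same result as the earlier examples.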
# [R: New Features on mindr] Supports the new format of FreeMind. Displays mind maps directly. Supports bookdown projects.

By Peng Zhao | October 11, 2018

In recent months, I have received much kind feedback and many helpful suggestions from mindr users. I did not improve or enhance mindr until the latest week. Now the new version 1.1.5 brings more exciting features.

• Create mind maps out of a directory. I added a new function dir2() to create a mind map out of a directory on the user's computer.

• Support the new format of FreeMind mind maps. The new format of FreeMind mind maps uses /> rather than </node> as the ending of a node. I added compatibility for both the old and the new.

I added a new parameter to the function markmap(input = c('.md', '.mm')). Now it can display not only markdown files, but also FreeMind mind maps in a markmap widget. commit: display .mm files directly

I added a new parameter pattern to the function md2mm, by which users can decide what files to import. By default, the '.md' and '.Rmd' files are imported. Commit: file filter

I added a parameter 'bookdown_style' to the function md2mm. If the user chooses the bookdown style, then mindr will put 'index.Rmd' before all the other files, and lower the levels of the chapter headings if there is # (Part), # (APPENDIX) or # Reference. The identification of chapter headings was improved as well. A mind map produced from the bookdown project the Blogdown Book is as follows.

Commits:
# (In a tube) why doesn't CO2 gas sink (settle below), displacing the water to the top, if carbon dioxide (44 g/mol) has a higher molar mass than water (18 g/mol)?

Is it just because water is a liquid and CO2 is a gas, or are there other factors?

## I have read in my book that CO2 released from some reaction can be collected by displacement of water (as it is only slightly soluble in water), and it is collected above the water in the jar, thus lowering the water level. So why is the water below and the CO2 above, if CO2 has a higher molar mass than water?

Aug 21, 2017

#### Answer:

Substances stratify into layers by density, with the less dense on top of the more dense.

#### Explanation:

The density of $C {O}_{2}$ is about 1.8 g/L at ~${25}^{o} C$ and 1 atm, while liquid water at the same temperature is ~1000 g/L. Formula weights of the substances do not determine the stratification into layers; the density difference does. Hot air balloons rise into the atmosphere because the density of hot air is less than that of the surrounding cooler air, and so it goes for all immiscible substances of different densities.
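The density comparison above can be reproduced with the ideal gas law, ρ = PM/(RT). This is an illustrative back-of-the-envelope sketch (ideal-gas approximation; values rounded):

```python
R = 8.314        # gas constant, J mol^-1 K^-1
P = 101_325.0    # pressure, Pa (1 atm)
T = 298.15       # temperature, K (25 degrees C)

def gas_density_g_per_l(molar_mass_g_mol):
    """Ideal-gas density rho = P*M/(R*T); kg/m^3 is numerically equal to g/L."""
    return P * (molar_mass_g_mol / 1000.0) / (R * T)

rho_co2 = gas_density_g_per_l(44.0)    # ~1.8 g/L
rho_air = gas_density_g_per_l(28.97)   # ~1.2 g/L
rho_water_liquid = 997.0               # g/L; a liquid, not given by the gas law

# CO2 is denser than air (so it sinks in air), but liquid water is roughly
# 550x denser than CO2 gas, so the gas always ends up on top of the water
print(rho_co2, rho_air, rho_water_liquid)
```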
According to Albert Einstein, mass and energy are equivalent. That means that mass and energy can be converted into each other according to the famous equation, $E = m c^2$, where $E$ is energy, $m$ is mass, and $c$ is the speed of light. Which of the following are examples of mass-energy equivalence?

A A furnace burns a fuel and creates energy until the fuel is depleted. You start with mass and create energy from it.

B A nuclear reactor converts mass to energy in a fission process.

C An endothermic reaction requires the addition of energy in order to create the final product. This energy is converted into the mass of the product.

D When matter and antimatter meet, they annihilate each other and create energy from their mass.
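The scale of the conversion in choice D is easy to compute from the equation above. An illustrative sketch (the one-gram figures are example inputs, not from the question):

```python
C = 299_792_458.0  # speed of light, m/s

def mass_to_energy(mass_kg):
    """Energy equivalent of a rest mass: E = m * c**2, in joules."""
    return mass_kg * C ** 2

# Annihilating 1 g of matter with 1 g of antimatter converts 2 g of mass entirely
E = mass_to_energy(2e-3)
print(E)  # ~1.8e14 J, on the order of a 43-kiloton explosion
```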
# Class models in set theory and category theory

Is it a mistake ab initio to think of categories as models of category theory, just as we think of (inner) models of set theory as models of set theory, graphs as models of graph theory, groups as models of group theory, topologies as models of topology, and so on? There is one big difference, and maybe the essence lies therein: all "normal" models have to be based on sets, while categories may be based on proper classes. Why would it be a problem for "normal" models of "normal" theories to be based on proper classes, but not for categories as models of category theory? Side remark: Forgive me for talking at large: "normal" models of "normal" theories are required to be based on sets, because only then can one apply the machinery of set theory. For categories one definitely wants to apply another machinery, the machinery of category theory. But if this machinery can handle class models in principle, why not apply this machinery to class models of "normal" theories, too - and in a direct way? - @Qiaochu: I'll take your advice and will accept some answers. Thanks for the hint. – Hans Stricker Feb 21 '11 at 19:38 I love your trick of using <sup>, but may I suggest you use some other way of marking side remarks? – Mariano Suárez-Alvarez Feb 22 '11 at 3:00 I'll take the liberty of interpreting your question as follows. What are the benefits and/or disadvantages of considering categories as models for the first-order theory $T=T_{Cat}$ of categories? Considering categories as models of $T$ can be done. While it is not the commonest definition of a category, and it is certainly not emphasized that it is being done, it certainly can be done. The article "Enlargement of categories" by Brunjes and Serpe is an example where the advantage of doing just that is clear. Namely, they extend the familiar notion of enlargement of sets in logic to enlargement of categories.
Thus the evident advantage is that one can use all of the tools of logic in the study of categories. The disadvantage of this approach is that it is limited to the study of categories whose objects form a set, and thus many important categories (e.g., $Top$, $Set$, $Grp$, ...) are excluded from such a study. Another important point to be made here is that while categories can be defined in terms of sets, it is also possible to use categories instead of sets as foundations of mathematics. This is a very fruitful approach that would be somewhat limited if we demanded to base category theory on sets. The name for a category that can be used instead of $Set$ for the purposes of logic is a topos (there are plenty of introductory texts about topos theory).
### Finite type conditions on Reinhardt domains (1996)

Access Restriction: Open

Author: Fu, Siqi ♦ Isaev, Alexander V. ♦ Krantz, Steven G.

Source: CiteSeerX

Content type: Text

File Format: PDF

Subject Domain (in DDC): Computer science, information & general works ♦ Data processing & computer science

Subject Keyword: Pseudoconvex Reinhardt Domain ♦ Finite Type Condition ♦ Reinhardt Domain ♦ Boundary Point ♦ Variety Type ♦ Regular Type ♦ Finite Type ♦ Carathéodory Metric ♦ Invariant Object ♦ Forthcoming Paper ♦ Bergman Kernel

Abstract: In this paper we prove that, if p is a boundary point of a smoothly bounded pseudoconvex Reinhardt domain in C^n, then the variety type at p is identical to the regular type. In this paper we study the finite type conditions on pseudoconvex Reinhardt domains. We prove that, if p is a boundary point of a smoothly bounded pseudoconvex Reinhardt domain in C^n, then the variety type at p is identical to the regular type. In a forthcoming paper, we will study the biholomorphically invariant objects (e.g., the Bergman kernel and metric, the Kobayashi and Carathéodory metrics) on a pseudoconvex Reinhardt domain of finite type. We first recall some definitions. A domain Ω ⊆ C^n is Reinhardt if (e^{iθ_1}z_1, …, e^{iθ_n}z_n) ∈ Ω whenever (z_1, …, z_n) ∈ Ω and 0 ≤ θ_j ≤ 2π, 1 ≤ j ≤ n. Denote Z_j = {(z_1, …, z_n) ∈ C^n : z_j = 0} for j = 1, …, n. Let Z = ⋃_{j=1}^{n} Z_j. Define L : C^n \ Z → R^n by L(z_1, …, z_n) = (log|z_1| ...

Educational Role: Student ♦ Teacher

Age Range: above 22 year

Educational Use: Research

Education Level: UG and PG ♦ Career/Technical Study

Learning Resource Type: Article

Publisher Date: 1996-01-01
# Lesson 6: Represent Numbers in Different Ways

• Let’s represent numbers in different ways.

## Warm-up: Which One Doesn’t Belong: Numbers in Different Ways

Which one doesn’t belong?

1. three hundred twenty-five
2. 253
3. 3 hundreds, 2 tens, 5 ones

## Activity 1: Numbers as Words

1. Fill in the blanks to represent 248 with words: two ____ forty-____
2. Fill in the blanks to represent 562 with words: ____ hundred ____-____
3. Represent this number with words.
4. Represent 627 with words.
5. Represent with words.
6. Represent three hundred eighteen in two different ways.

## Activity 2: Represent the Numbers

Represent the number on your poster. Be sure to represent the number using:

• a three-digit number
• a base-ten diagram
• expanded form
• words

If you have time: Represent the number using only tens and ones. Represent the number composed in a different way.

## Problem 1

Represent the number 235 in these ways:

1. a base-ten diagram
2. expanded form
3. words
Dirichlet process mixture models¶ Bayesian mixture models introduced how to infer the posterior of the parameters of a mixture model with a fixed number of components $K$. We can either find $K$ using model selection, i.e. with AIC, BIC, WAIC, etc., or try to automatically infer this number. Nonparametric mixture models do exactly this. Here we implement a nonparametric Bayesian mixture model using Gibbs sampling. We use a Chinese restaurant process prior and a stick-breaking construction to sample from a Dirichlet process (see for instance Nils Hjort's Bayesian Nonparametrics, Peter Orbanz's lecture notes, Kevin Murphy's book and, last but not least, Herman Kamper's notes). We'll implement the Gibbs sampler using the CRP ourselves, since (I think) Stan doesn't allow us to do this, and then use the stick-breaking construction with Stan. Sampling from a truly infinite mixture is technically not possible in Stan though, so we use a small hack (a truncated DP). As usual I do not take warranty for the correctness or completeness of this document. I'll use R, cause it's the bestest!

In [1]:

options(repr.plot.width=4, repr.plot.height=3)

In [2]:

suppressMessages(library("e1071"))
suppressMessages(library("mvtnorm"))
suppressMessages(library("dplyr"))
suppressMessages(library("ggplot2"))
suppressMessages(library("MCMCpack"))
suppressMessages(library("bayesplot"))
suppressMessages(library("rlang"))
suppressMessages(library("tsne"))
set.seed(23)

In [3]:

suppressMessages(library(rstan))
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

In Bayesian mixture models we used the following hierarchical form to describe a mixture model: \begin{align*} \boldsymbol \theta_k & \sim \mathcal{G}_0\\ \boldsymbol \pi & \sim \text{Dirichlet}(\boldsymbol \alpha_0)\\ z_i & \sim \text{Discrete}(\boldsymbol \pi)\\ \mathbf{x}_i \mid z_i = k & \sim {P}(\boldsymbol \theta_k) \end{align*} where $\mathcal{G}_0$ is some base distribution for the model parameters.
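The finite hierarchical model above can be simulated top-down by ancestral sampling. A minimal sketch in Python/NumPy — the Gaussian choice for $P(\boldsymbol \theta_k)$ and all constants are illustrative assumptions, not part of the model above:

```python
import numpy as np

rng = np.random.default_rng(23)

K, n = 3, 100                          # fixed number of components, sample size
alpha0 = np.ones(K)                    # symmetric Dirichlet concentration
theta = rng.normal(0.0, 5.0, size=K)   # theta_k ~ G_0 (here G_0 = N(0, 5^2), means only)
pi = rng.dirichlet(alpha0)             # pi ~ Dirichlet(alpha0)
z = rng.choice(K, size=n, p=pi)        # z_i ~ Discrete(pi)
x = rng.normal(theta[z], 1.0)          # x_i | z_i = k ~ N(theta_k, 1), an illustrative P
```

The point of contrast with the DP below is that $K$ is fixed up front here.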
The DP, in contrast, as any BNP model, puts priors on structures that accommodate infinite sizes. The resulting posteriors give a distribution on structures that grow with new observations. A mixture model using a possibly infinite number of components could look like this: \begin{align*} \mathcal{G} & \sim \mathcal{DP}(\alpha, \mathcal{G}_0)\\ \boldsymbol \theta_i & \sim \mathcal{G}\\ \mathbf{x}_i & \sim {P}(\boldsymbol \theta_i) \end{align*} where $\mathcal{G}_0$ is the same base measure as above and $\mathcal{G}$ is a sample from the DP, i.e. also a random measure.

The Chinese restaurant process¶

One way, and possibly the easiest, to implement a DPMM is using a Chinese restaurant process (CRP), which is a distribution over partitions.

Data generating process¶

The hierarchical model using a CRP is: \begin{align*} \boldsymbol \theta_k & \sim \mathcal{G}_0 \\ z_i \mid \mathbf{z}_{1:i-1} & \sim \text{CRP} \\ \mathbf{x}_i & \sim P(\boldsymbol \theta_{z_i}) \end{align*} where $\text{CRP}$ is a prior on possibly infinitely many classes. Specifically the CRP is defined as: \begin{align*} P(z_i = k \mid \mathbf{z}_{-i}) = \left\{ \begin{array}{ll} \frac{N_k}{N - 1 + \alpha} & \text{if } k \text{ is an existing table}\\ \frac{\alpha}{N - 1 + \alpha} & \text{if } k \text{ is a new table}\\ \end{array} \right. \end{align*} where $N_k$ is the number of customers at table $k$, $N$ the total number of customers, and $\alpha$ some hyperparameter.
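The two cases of the CRP translate directly into a sequential sampler. A small Python sketch (the concentration $\alpha$ and the seed are arbitrary illustrative values):

```python
import numpy as np

def crp_sample(n, alpha, rng):
    """Draw table assignments z_1..z_n from a CRP with concentration alpha."""
    counts = []            # N_k: number of customers at table k
    z = []
    for i in range(n):     # for customer i (0-based), N - 1 == i customers are seated
        # existing tables get weight N_k, a new table gets weight alpha
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(0)   # open a new table
        counts[k] += 1
        z.append(k)
    return np.array(z)

z = crp_sample(100, alpha=0.5, rng=np.random.default_rng(1))
```

Note the rich-get-richer effect: tables with many customers are more likely to attract the next one, which is what keeps the number of occupied tables growing only slowly (logarithmically) in $n$.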
For the variables of interest, $\boldsymbol \theta_k$ and $\boldsymbol z$, the posterior is: \begin{align*} P(\boldsymbol \theta, \boldsymbol z \mid \mathbf{X}) \propto P(\mathbf{X} \mid \boldsymbol \theta, \boldsymbol z ) P(\boldsymbol \theta) P ( \boldsymbol z ) \end{align*} Using a Gibbs sampler, we iterate over the following two steps:

1) sample $z_i \sim P(z_i \mid \mathbf{z}_{-i}, \mathbf{X}, \boldsymbol \theta) \propto P(z_i \mid \mathbf{z}_{-i}) P(\mathbf{x}_i \mid \boldsymbol \theta_{z_i}, \mathbf{X}_{-i}, \mathbf{z})$

2) sample $\boldsymbol \theta_k \sim P(\boldsymbol \theta_k \mid \mathbf{z}, \mathbf{X})$

So we alternate sampling assignments of data to classes and sampling the parameters of the data distribution given the class assignments. The major difference compared to the finite case is the way of sampling $z_i$, which we do using the CRP in the infinite case. The CRP itself is defined by $P(z_i \mid \mathbf{z}_{-i})$, so replacing this by a usual finite sample would give us a finite mixture. Evaluation of the likelihoods in the first step is fairly straightforward, as we will see. Updating the model parameters in the second step is done conditionally on every class, and by that also not too hard to do.

Stick-breaking construction¶

With the CRP we put a prior distribution on the possibly infinite number of class assignments. An alternative approach is to use the stick-breaking construction. The advantage here is that we can use Stan with a truncated DP, so we don't need to implement the sampler ourselves.

Data generating process¶

If we, instead of putting a CRP prior on the latent labels, put a prior on the possibly infinite sequence of mixing weights $\boldsymbol \pi$, we arrive at the stick-breaking construction.
The hierarchical model now looks like this: \begin{align*} \nu_k &\sim \text{Beta}(1, \alpha) \\ \pi_k & = \nu_k \prod_{j=1}^{k-1} (1 - \nu_j) \\ \boldsymbol \theta_k & \sim G_0 \\ \mathbf{x}_i & \sim \sum_k \pi_k P(\boldsymbol \theta_k) \end{align*} where $\alpha$ is the same concentration hyperparameter as before. The distribution of the mixing weights is sometimes denoted as $$\boldsymbol \pi \sim \text{GEM}(\alpha)$$

Gaussian DPMM¶

In the following section, we derive a Gaussian Dirichlet process mixture using the CRP with a Gibbs sampler and the stick-breaking construction using Stan.

CRP¶

In the Gaussian case the hierarchical model using the CRP has the following form: \begin{align*} \boldsymbol \Sigma_k & \sim \mathcal{IW}\\ \boldsymbol \mu_k & \sim \mathcal{N}(\boldsymbol \mu_0, \boldsymbol \Sigma_0) \\ z_i \mid z_{1:i-1} & \sim \text{CRP} \\ \mathbf{x}_i & \sim \mathcal{N}(\boldsymbol \mu_{z_i}, \boldsymbol \Sigma_{z_i}) \end{align*} Let's derive the Gibbs sampler for an infinite Gaussian mixture using the CRP. First we set some constants for the data $\mathbf{X}$. We create a very simple data set to avoid problems with identifiability and label switching. For a treatment of the topic see Michael Betancourt's case study. $n$ is the number of samples, $p$ is the dimensionality of the Gaussian, $\alpha$ is the Dirichlet concentration.

In [4]:

n <- 100
p <- 2
alpha <- .5

Latent class assignments (Z), the current table index and the number of customers per table:

In [5]:

Z <- integer(n)
X <- matrix(0, n, p)
curr.tab <- 0
tables <- c()

Parameters of the Gaussians:

In [6]:

sigma <- .1
mus <- NULL

Then we create a random assignment of customers to tables with probability $P(z_i \mid Z_{-i})$, i.e. we use the CRP to put data into classes. Note that we don't know the number of classes that comes out!
In [7]:

for (i in seq(n)) {
    probs <- c(tables / (i - 1 + alpha), alpha / (i - 1 + alpha))
    table <- rdiscrete(1, probs)
    if (table > curr.tab) {
        curr.tab <- curr.tab + 1
        tables <- c(tables, 0)
        mu <- mvtnorm::rmvnorm(1, c(0, 0), 10 * diag(p))
        mus <- rbind(mus, mu)
    }
    Z[i] <- table
    X[i,] <- mvtnorm::rmvnorm(1, mus[Z[i], ], sigma * diag(p))
    tables[table] <- tables[table] + 1
}

Let's see how many clusters and how many data points per cluster we have.

In [8]:

data.frame(table(Z)) %>%
    ggplot() +
    geom_col(aes(Z, Freq), width=.5) +
    theme_minimal()

In [9]:

data.frame(X=X, Z=as.factor(Z)) %>%
    ggplot() +
    geom_point(aes(X.1, X.2, col=Z)) +
    theme_minimal()

Posterior inference using Gibbs sampling¶

Let's infer the posteriors. We randomly initialize the cluster assignments and set all customers to table 1. Hyperparameter $\alpha$ controls the probability of opening a new table.

In [10]:

# initialization of the cluster assignments
K <- 1
zs <- rep(K, n)
alpha <- 5
tables <- n

Define the priors of the model. We set the covariances to be fixed.

In [11]:

mu.prior <- matrix(c(0, 0), ncol = 2)
sigma.prior <- diag(p)
q.prior <- solve(sigma.prior)

Base distribution $\mathcal{G}_0$:

In [12]:

sigma0 <- diag(p)
prec0 <- solve(sigma0)
mu0 <- rep(0, p)

To infer the posterior we use the Gibbs sampler described above. Here, I am only interested in the most likely assignment, i.e. the MAP of $Z$.

In [13]:

for (iter in seq(100)) {
    for (i in seq(n)) {
        # look at data x_i and remove its statistics from the clustering
        zi <- zs[i]
        tables[zi] <- tables[zi] - 1
        if (tables[zi] == 0) {
            K <- K - 1
            zs[zs > zi] <- zs[zs > zi] - 1
            tables <- tables[-zi]
            mu.prior <- mu.prior[-zi, ]
        }
        # compute posterior probabilities P(z_i \mid z_-i, ...)
        no_i <- seq(n)[-i]
        probs <- sapply(seq(K), function(k) {
            crp <- sum(zs[no_i] == k) / (n + alpha - 1)
            lik <- mvtnorm::dmvnorm(X[i, ], mu.prior[k,], sigma.prior)
            crp * lik
        })
        # compute probability for opening up a new table
        crp <- alpha / (n + alpha - 1)
        lik <- mvtnorm::dmvnorm(X[i, ], mu0, sigma.prior + sigma0)
        probs <- c(probs, crp * lik)
        probs <- probs / sum(probs)
        # sample new z_i according to the conditional posterior above
        z_new <- which.max(probs)
        if (z_new > K) {
            K <- K + 1
            tables <- c(tables, 0)
            mu.prior <- rbind(mu.prior, mvtnorm::rmvnorm(1, mu0, sigma0))
        }
        zs[i] <- z_new
        tables[z_new] <- tables[z_new] + 1
        # compute conditional posterior P(mu \mid ...)
        for (k in seq(K)) {
            Xk <- X[zs == k, , drop=FALSE]
            lambda <- solve(q.prior + tables[k] * q.prior)
            numerator <- tables[k] * q.prior %*% apply(Xk, 2, mean)
            mu.prior[k, ] <- mvtnorm::rmvnorm(1, lambda %*% numerator, lambda)
        }
    }
}

Let's see if that worked out!

In [14]:

data.frame(X=X, Z=as.factor(zs)) %>%
    ggplot() +
    geom_point(aes(X.1, X.2, col=Z)) +
    theme_minimal()

Cool, except for the lone guy on top the clustering worked nicely.

Stick-breaking construction¶

In order to make the DPMM with stick-breaking work in Stan, we need to supply a maximum number of clusters $K$ from which we can choose. Setting $K=n$ would mean that we allow every data point to define its own cluster. For the sake of the exercise I'll set the maximum number of clusters to $10$. The hyperparameter $\alpha$ parameterizes the Beta distribution which we use to sample stick lengths. We use the same data we already generated above.

In [15]:

K <- 10
alpha <- 2

The model is a bit more verbose in comparison to the finite case (Bayesian mixture models). We only need to add the stick-breaking part in the transformed parameters; the rest stays the same. We again use the LKJ prior for the correlation matrix of the single components and set a fixed prior scale of $1$.
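The stick-breaking part we add to the transformed parameters can be sanity-checked in isolation. A quick Python sketch of a truncated GEM($\alpha$) draw — note this uses a slightly different (but common) truncation than the Stan code below, forcing the last break to take all remaining stick mass; the values of $K$ and $\alpha$ are only illustrative:

```python
import numpy as np

def stick_breaking(alpha, K, rng):
    """Truncated GEM(alpha): nu_k ~ Beta(1, alpha), pi_k = nu_k * prod_{j<k} (1 - nu_j)."""
    nu = rng.beta(1.0, alpha, size=K)
    nu[-1] = 1.0  # truncation: the last break takes the remaining stick mass
    rest = np.concatenate(([1.0], np.cumprod(1.0 - nu[:-1])))  # prod_{j<k} (1 - nu_j)
    return nu * rest

pi = stick_breaking(alpha=2.0, K=10, rng=np.random.default_rng(42))
```

The telescoping product guarantees the weights sum to one, which is exactly what the `simplex[K] pi` declaration demands of the Stan version.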
In order to get nice, unimodal posteriors, we also introduce an ordering of the mean values.

In [16]:

stan.file <- "_models/dirichlet_process_mixture.stan"

data {
    int<lower=0> K;
    int<lower=0> n;
    int<lower=1> p;
    row_vector[p] x[n];
    real alpha;
}

parameters {
    ordered[p] mu[K];
    cholesky_factor_corr[p] L;
    real<lower=0, upper=1> nu[K];
}

transformed parameters {
    simplex[K] pi;
    pi[1] = nu[1];
    for (j in 2:(K-1)) {
        pi[j] = nu[j] * (1 - nu[j - 1]) * pi[j - 1] / nu[j - 1];
    }
    pi[K] = 1 - sum(pi[1:(K - 1)]);
}

model {
    real mix[K];
    L ~ lkj_corr_cholesky(5);
    nu ~ beta(1, alpha);
    for (i in 1:K) {
        mu[i] ~ normal(0, 5);
    }
    for (i in 1:n) {
        for (k in 1:K) {
            mix[k] = log(pi[k]) + multi_normal_cholesky_lpdf(x[i] | mu[k], L);
        }
        target += log_sum_exp(mix);
    }
}

In [17]:

fit <- stan(stan.file,
            data = list(K=K, n=n, x=X, p=p, alpha=alpha),
            iter = 10000, warmup = 1000, chains = 1)

SAMPLING FOR MODEL 'dirichlet_process_mixture' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000845 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 8.45 seconds.
Chain 1:
Chain 1:
Chain 1: Iteration:     1 / 10000 [  0%]  (Warmup)
Chain 1: Iteration:  1000 / 10000 [ 10%]  (Warmup)
Chain 1: Iteration:  1001 / 10000 [ 10%]  (Sampling)
Chain 1: Iteration:  2000 / 10000 [ 20%]  (Sampling)
Chain 1: Iteration:  3000 / 10000 [ 30%]  (Sampling)
Chain 1: Iteration:  4000 / 10000 [ 40%]  (Sampling)
Chain 1: Iteration:  5000 / 10000 [ 50%]  (Sampling)
Chain 1: Iteration:  6000 / 10000 [ 60%]  (Sampling)
Chain 1: Iteration:  7000 / 10000 [ 70%]  (Sampling)
Chain 1: Iteration:  8000 / 10000 [ 80%]  (Sampling)
Chain 1: Iteration:  9000 / 10000 [ 90%]  (Sampling)
Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
Chain 1:
Chain 1: Elapsed Time: 129.032 seconds (Warm-up)
Chain 1:               361.426 seconds (Sampling)
Chain 1:               490.457 seconds (Total)
Chain 1:

In [18]:

fit

Inference for Stan model: dirichlet_process_mixture.
1 chains, each with iter=10000; warmup=1000; thin=1; post-warmup draws per chain=9000, total post-warmup draws=9000. mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff mu[1,1] -0.40 0.00 0.04 -0.48 -0.43 -0.40 -0.37 -0.32 4824 mu[1,2] -0.37 0.00 0.04 -0.45 -0.40 -0.37 -0.34 -0.29 6646 mu[2,1] -3.83 0.00 0.16 -4.13 -3.94 -3.83 -3.72 -3.52 3802 mu[2,2] 0.83 0.00 0.16 0.52 0.72 0.83 0.94 1.14 4326 mu[3,1] 1.31 0.00 0.06 1.19 1.27 1.31 1.35 1.43 7955 mu[3,2] 1.36 0.00 0.06 1.24 1.32 1.36 1.40 1.48 7130 mu[4,1] -2.92 0.05 3.96 -11.19 -5.21 -2.94 -0.33 4.66 6931 mu[4,2] 2.54 0.04 3.86 -4.64 0.10 2.10 5.02 10.67 8998 mu[5,1] -2.80 0.05 3.94 -10.86 -5.27 -2.72 -0.20 4.89 6896 mu[5,2] 2.66 0.05 4.06 -4.88 0.00 2.32 5.27 11.13 6487 mu[6,1] -2.75 0.06 4.01 -10.89 -5.28 -2.72 -0.04 5.05 5280 mu[6,2] 2.78 0.04 3.97 -4.72 0.16 2.52 5.34 11.02 9468 mu[7,1] -2.78 0.05 4.06 -10.99 -5.39 -2.71 -0.08 5.08 6357 mu[7,2] 2.85 0.05 4.10 -4.87 0.06 2.67 5.55 11.16 6168 mu[8,1] -2.77 0.05 4.09 -11.17 -5.41 -2.68 0.00 5.03 5898 mu[8,2] 2.76 0.05 4.11 -5.10 0.00 2.62 5.43 11.09 6646 mu[9,1] -2.87 0.06 4.16 -11.28 -5.61 -2.76 -0.05 4.94 5026 mu[9,2] 2.77 0.04 4.07 -4.85 0.00 2.64 5.46 11.13 8978 mu[10,1] -2.78 0.05 4.11 -11.17 -5.49 -2.75 0.02 4.94 5819 mu[10,2] 2.86 0.05 4.06 -4.78 0.09 2.69 5.53 11.12 8017 L[1,1] 1.00 NaN 0.00 1.00 1.00 1.00 1.00 1.00 NaN L[1,2] 0.00 NaN 0.00 0.00 0.00 0.00 0.00 0.00 NaN L[2,1] -0.88 0.00 0.02 -0.91 -0.89 -0.88 -0.87 -0.84 6392 L[2,2] 0.48 0.00 0.03 0.42 0.46 0.48 0.50 0.55 6371 nu[1] 0.43 0.00 0.05 0.33 0.39 0.43 0.46 0.52 8297 nu[2] 0.66 0.00 0.06 0.52 0.62 0.66 0.70 0.77 1675 nu[3] 0.88 0.00 0.09 0.65 0.84 0.90 0.94 0.98 684 nu[4] 0.35 0.01 0.25 0.01 0.13 0.30 0.53 0.89 1700 nu[5] 0.34 0.00 0.24 0.01 0.13 0.29 0.50 0.86 6359 nu[6] 0.34 0.00 0.24 0.01 0.14 0.29 0.51 0.83 7177 nu[7] 0.33 0.00 0.23 0.01 0.13 0.29 0.50 0.84 6252 nu[8] 0.33 0.00 0.24 0.01 0.13 0.29 0.50 0.85 7674 nu[9] 0.34 0.00 0.24 0.01 0.14 0.30 0.51 0.85 6661 nu[10] 0.34 0.00 0.24 0.01 0.13 
0.29 0.51 0.85 3142 pi[1] 0.43 0.00 0.05 0.33 0.39 0.43 0.46 0.52 8297 pi[2] 0.38 0.00 0.05 0.28 0.34 0.38 0.41 0.47 2450 pi[3] 0.17 0.00 0.04 0.11 0.15 0.17 0.20 0.25 6441 pi[4] 0.01 0.00 0.01 0.00 0.00 0.00 0.01 0.05 217 pi[5] 0.01 0.00 0.01 0.00 0.00 0.00 0.01 0.03 1981 pi[6] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 3094 pi[7] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 6097 pi[8] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 3677 pi[9] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 5116 pi[10] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 4733 lp__ -406.17 0.09 4.44 -415.70 -408.94 -405.81 -403.05 -398.50 2575 Rhat mu[1,1] 1.00 mu[1,2] 1.00 mu[2,1] 1.00 mu[2,2] 1.00 mu[3,1] 1.00 mu[3,2] 1.00 mu[4,1] 1.00 mu[4,2] 1.00 mu[5,1] 1.00 mu[5,2] 1.00 mu[6,1] 1.00 mu[6,2] 1.00 mu[7,1] 1.00 mu[7,2] 1.00 mu[8,1] 1.00 mu[8,2] 1.00 mu[9,1] 1.00 mu[9,2] 1.00 mu[10,1] 1.00 mu[10,2] 1.00 L[1,1] NaN L[1,2] NaN L[2,1] 1.00 L[2,2] 1.00 nu[1] 1.00 nu[2] 1.00 nu[3] 1.00 nu[4] 1.00 nu[5] 1.00 nu[6] 1.00 nu[7] 1.00 nu[8] 1.00 nu[9] 1.00 nu[10] 1.00 pi[1] 1.00 pi[2] 1.00 pi[3] 1.00 pi[4] 1.01 pi[5] 1.00 pi[6] 1.00 pi[7] 1.00 pi[8] 1.00 pi[9] 1.00 pi[10] 1.00 lp__ 1.00 Samples were drawn using NUTS(diag_e) at Fri Mar 1 12:00:24 2019. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1). High effective sample sizes, no divergent transitions and $\hat{R}$s of one looks good. Our model seems well specified! Let's look at some plots though. First the traces for the means and mixing weights. 
In [19]:

posterior <- extract(fit)

In [20]:

options(repr.plot.width=4, repr.plot.height=8)
data.frame(posterior$pi) %>%
    set_names(paste0("PI_", 1:10)) %>%
    tidyr::gather(key, value) %>%
    ggplot(aes(x = value, y = ..density.., fill=key), position="dodge") +
    facet_grid(key ~ ., scales="free") +
    geom_histogram(bins=50) +
    theme_minimal()

From the plot above it looks as if Stan believes it's sufficient to use three components, as the means of the mixing weights of the seven other components are fairly low or even zero. However, let's extract all means of the posterior means and assign each data point to a cluster.

In [21]:

post.mus <- do.call(
    "rbind",
    lapply(1:10, function(i) apply(posterior$mu[,i,], 2, mean)))

In [22]:

probs <- purrr::map_dfc(seq(10), function(i) {
    mvtnorm::dmvnorm(X, post.mus[i,], diag(2))}) %>%
    set_names(paste0("Z", seq(10)))

In [23]:

zs.stan <- apply(probs, 1, which.max)

And the final plot:

In [24]:

options(repr.plot.width=4, repr.plot.height=3)

In [25]:

data.frame(X=X, Z=as.factor(zs.stan)) %>%
    ggplot() +
    geom_point(aes(X.1, X.2, col=Z)) +
    theme_minimal()

Cool, our small hack using Stan and stick-breaking worked even better than our CRP implementation. Here, we managed to give every point its correct label.

Multivariate Bernoullis¶

Next, we derive a Dirichlet process mixture for a multivariate Bernoulli distribution (or whatever name is more suitable here). We again use the CRP with a Gibbs sampler and the stick-breaking construction using Stan.

CRP¶

We model every observation $\mathbf{x} \in \{0, 1 \}^p$ as: \begin{align*} \pi_{{z_i}_j} & \sim \text{Beta}(a, b)\\ z_i \mid z_{1:i-1} & \sim \text{CRP} \\ x_{i, j} \mid z_i & \sim \text{Bernoulli}(\pi_{{z_i}_j}), \end{align*} with hyperparameters $a$ and $b$. Thus the likelihood for one datum is the product over $p$ independent Bernoullis: $$P(\mathbf{x} \mid z, \boldsymbol \Pi) = \prod_{j=1}^p \pi_{{z_i}_j}^{x_j} \cdot (1 - \pi_{{z_i}_j})^{(1 - x_j)}$$ First we again generate some data.
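As an aside, the Bernoulli product above is usually evaluated in log space to avoid numerical underflow for larger $p$. A minimal Python sketch of the per-datum log-likelihood — the example $\mathbf{x}$ and $\boldsymbol \pi$ values are made up for illustration:

```python
import numpy as np

def bernoulli_loglik(x, ps):
    """log P(x | pi) = sum_j [ x_j * log(pi_j) + (1 - x_j) * log(1 - pi_j) ]."""
    x = np.asarray(x, dtype=float)
    ps = np.asarray(ps, dtype=float)
    return float(np.sum(x * np.log(ps) + (1.0 - x) * np.log1p(-ps)))

ll = bernoulli_loglik([1, 0, 1], [0.9, 0.2, 0.5])
```

Exponentiating the result recovers the product form used in the formula above.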
We sample $200$ points $\mathbf{X}$ from a $p=3$ dimensional mixture of $k=3$ multivariate Bernoullis and their latent class assignments $\mathbf{z}$ (since we already saw how one can generate data using a CRP).

In [26]:

n <- 200
p <- 3
alpha <- 0.5
k <- 3
Z <- sample(1:k, n, replace = T)
table(Z)

Z
 1  2  3
67 66 67

Then we generate the true success probabilities for every Bernoulli. These are $k \cdot p$ many. We simulate an easy scenario where every dimension has the same probability for every class.

In [27]:

probs.true <- matrix(seq(0.1, 0.9, length.out=k), k, p)
probs.true

0.1 0.1 0.1
0.5 0.5 0.5
0.9 0.9 0.9

Then we generate the data using these probabilities randomly:

In [28]:

probs.matrix <- probs.true[Z, ]
X <- (probs.matrix > matrix(runif(n * p), n, p)) * 1L

1 0 0
0 0 0
0 0 0
0 0 0
1 1 1
1 1 1

Let's have a look at it using $t$-SNE. Note that since the data are Bernoulli, axes are kinda hard to interpret and we shouldn't get a clear separation of the clusters as in the Gaussian case (the same is true for PCA, too).

In [29]:

tsne.data <- tsne(X, perplexity = 50, max_iter = 1500)

sigma summary: Min. : 2.98023223876953e-08 |1st Qu. : 2.98023223876953e-08 |Median : 2.98023223876953e-08 |Mean : 0.328735702315581 |3rd Qu. : 0.87608321405121 |Max.
: 0.881540922682334 |
Epoch: Iteration #100 error is: 10.5341680925935
Epoch: Iteration #200 error is: 0.114879642075053
Epoch: Iteration #300 error is: 0.107652843585134
Epoch: Iteration #400 error is: 0.107640277108229
Epoch: Iteration #500 error is: 0.107640268222389
Epoch: Iteration #600 error is: 0.107640267970035
Epoch: Iteration #700 error is: 0.107640267854842
Epoch: Iteration #800 error is: 0.107640267767748
Epoch: Iteration #900 error is: 0.107640267688465
Epoch: Iteration #1000 error is: 0.107640267615484
Epoch: Iteration #1100 error is: 0.107640267553738
Epoch: Iteration #1200 error is: 0.107640267497753
Epoch: Iteration #1300 error is: 0.107640267445983
Epoch: Iteration #1400 error is: 0.107640267399936
Epoch: Iteration #1500 error is: 0.107640267359697

In [30]:

plot(tsne.data, col=Z)

We use a concentration of $\alpha=1$ for the CRP and $a=b=1$ as hyperparameters for the Beta to get a somewhat uniform shape.

In [31]:

alpha <- 1
a <- b <- 1

As for the Gaussian case, we need to compute the likelihood of $\mathbf{x}_i$ evaluated at its cluster $k$. Since we factorize over $p$ independent Bernoullis, we need to write down this likelihood manually (note the $\log(1 - \pi_j)$ term for the failure probabilities):

In [32]:

# likelihood for an existing cluster
ppd <- function(x, ps) {
    exp(sum(x * log(ps) + (1 - x) * log(1 - ps)))
}

# likelihood for a random new cluster
ppde <- function(x) {
    ps <- rbeta(p, a, b)
    ppd(x, ps)
}

For the sampler we start with the following initial parameter settings: one cluster $K=1$, i.e. all latent class assignments are $\mathbf{z} = \{1 \}^n$, and $p$ random samples from a Beta for the success probabilities of the Bernoullis determining the cluster.

In [33]:

K <- 1
zs <- rep(K, n)
tables <- n
priors <- array(rbeta(p, a, b), dim = c(1, p, 1))
priors

1. 0.151305898092687
2. 0.909731121268123
3.
0.82791749550961

Then we implement the Gibbs sampler (or rather the ECM):

In [34]:

for (it in seq(100)) {
    for (i in seq(n)) {
        # look at data x_i and remove its statistics from the clustering
        zi <- zs[i]
        tables[zi] <- tables[zi] - 1
        if (tables[zi] == 0) {
            K <- K - 1
            zs[zs > zi] <- zs[zs > zi] - 1
            tables <- tables[-zi]
            priors <- priors[,,-zi,drop=FALSE]
        }
        # compute posterior probabilities P(z_i \mid z_-i, ...)
        no_i <- seq(n)[-i]
        probs <- sapply(seq(K), function(k) {
            crp <- sum(zs[no_i] == k) / (n + alpha - 1)
            lik <- ppd(X[i, ], priors[,,k])
            crp * lik
        })
        # compute probability for opening up a new table
        crp <- alpha / (n + alpha - 1)
        lik <- ppde(X[i, ])
        probs <- c(probs, crp * lik)
        probs <- probs / sum(probs)
        # sample new z_i according to the conditional posterior above
        z_new <- which.max(probs)
        if (z_new > K) {
            K <- K + 1
            tables <- c(tables, 0)
            priors <- abind::abind(priors, array(rbeta(p, a, b), dim=c(1, p, 1)))
        }
        zs[i] <- z_new
        tables[z_new] <- tables[z_new] + 1
        # compute conditional posterior P(pi \mid ...)
        for (k in seq(K)) {
            Xk <- X[zs == k, , drop=FALSE]
            priors[,,k] <- sapply(colSums(Xk), function(i) rbeta(1, i + 1, nrow(Xk) - i + 1))
        }
    }
}

Let's compare the inferred assignments with the true ones.

In [35]:

par(mfrow=c(1, 2))
plot(tsne.data, col=Z)
plot(tsne.data, col=zs)

Stick-breaking construction¶

As above, we try to implement the mixture using Stan, too. We use a truncated DP again and set the maximum number of clusters to $K=5$, because the model is quite hard to compute and we don't use a lot of data, and the concentration to $\alpha=2$.

In [36]:

K <- 5
alpha <- 2

The respective Stan file is similar to the Gaussian case. There is one more tricky part, though. In the Gaussian case, we needed to order the mean vectors such that we get identifiable posteriors. For the binary case, we in addition need to make sure that the parameters are probabilities, i.e. have a domain from $0$ to $1$.
We do that by first declaring a $K \times p$-dimensional parameter rates which we order: ordered[p] rates[K]. Then, in order to make a probability out of it, we apply the inverse logit function for every element: prob = inv_logit(rates). That should do the trick. The complete Stan file is shown below.

In [37]:

stan.file <- "_models/binary_dirichlet_process_mixture.stan"

data {
    int<lower=1> K;
    int<lower=1> n;
    int<lower=1> p;
    int<lower=0,upper=1> x[n, p];
    real<lower=1> alpha;
}

parameters {
    ordered[p] rates[K];
    real<lower=0, upper=1> nu[K];
}

transformed parameters {
    simplex[K] pi;
    vector<lower=0, upper=1>[p] prob[K];
    pi[1] = nu[1];
    for (j in 2:(K-1)) {
        pi[j] = nu[j] * (1 - nu[j - 1]) * pi[j - 1] / nu[j - 1];
    }
    pi[K] = 1 - sum(pi[1:(K - 1)]);
    for (k in 1:K) {
        for (ps in 1:p) {
            prob[k, ps] = inv_logit(rates[k, ps]);
        }
    }
}

model {
    real mix[K];
    nu ~ beta(1, alpha);
    for (i in 1:n) {
        for (k in 1:K) {
            mix[k] = log(pi[k]);
            for (ps in 1:p) {
                mix[k] += bernoulli_lpmf(x[i, ps] | prob[k, ps]);
            }
        }
        target += log_sum_exp(mix);
    }
}

In [38]:

fit <- stan(stan.file,
            data = list(K=K, n=n, x=matrix(as.integer(X), n, p), p=p, alpha=alpha),
            iter = 12000, warmup = 2000, chains = 1,
            control = list(adapt_delta = 0.99))

hash mismatch so recompiling; make sure Stan code ends with a blank line

SAMPLING FOR MODEL 'binary_dirichlet_process_mixture' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000431 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 4.31 seconds.
Chain 1:
Chain 1:
Chain 1: Iteration:     1 / 12000 [  0%]  (Warmup)
Chain 1: Iteration:  1200 / 12000 [ 10%]  (Warmup)
Chain 1: Iteration:  2001 / 12000 [ 16%]  (Sampling)
Chain 1: Iteration:  3200 / 12000 [ 26%]  (Sampling)
Chain 1: Iteration:  4400 / 12000 [ 36%]  (Sampling)
Chain 1: Iteration:  5600 / 12000 [ 46%]  (Sampling)
Chain 1: Iteration:  6800 / 12000 [ 56%]  (Sampling)
Chain 1: Iteration:  8000 / 12000 [ 66%]  (Sampling)
Chain 1: Iteration:  9200 / 12000 [ 76%]  (Sampling)
Chain 1: Iteration: 10400 / 12000 [ 86%]  (Sampling)
Chain 1: Iteration: 11600 / 12000 [ 96%]  (Sampling)
Chain 1: Iteration: 12000 / 12000 [100%]  (Sampling)
Chain 1:
Chain 1: Elapsed Time: 101.608 seconds (Warm-up)
Chain 1:               2557.83 seconds (Sampling)
Chain 1:               2659.44 seconds (Total)
Chain 1:

Warning message:
“There were 1873 divergent transitions after warmup. Increasing adapt_delta above 0.99 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup”
Warning message:
“There were 8127 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded”
Warning message:
“Examine the pairs() plot to diagnose sampling problems”

That is unpleasant! A lot of divergent transitions, and almost all of the transitions exceeded the maximum treedepth (see the Stan manual). The divergent transitions are a more severe problem than the treedepth warnings, so let's look at some diagnostic plots.

In [39]:

fit

Inference for Stan model: binary_dirichlet_process_mixture.

1 chains, each with iter=12000; warmup=2000; thin=1; post-warmup draws per chain=10000, total post-warmup draws=10000.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff rates[1,1] -0.24 0.06 0.22 -0.70 -0.39 -0.24 -0.09 0.21 15 rates[1,2] -0.04 0.08 0.27 -0.64 -0.21 -0.02 0.15 0.42 12 rates[1,3] 0.02 0.08 0.28 -0.57 -0.16 0.04 0.21 0.53 13 rates[2,1] -41.03 14.23 22.24 -78.46 -59.41 -40.12 -19.71 -10.36 2 rates[2,2] -31.83 9.33 18.03 -71.97 -43.00 -27.37 -18.92 -7.30 4 rates[2,3] -18.72 5.90 14.48 -56.57 -24.71 -14.24 -7.80 -3.36 6 rates[3,1] -10.89 11.09 19.34 -48.01 -25.62 -12.53 6.75 20.63 3 rates[3,2] 3.78 12.85 22.06 -33.19 -16.41 4.73 24.57 34.55 3 rates[3,3] 10.79 11.42 20.01 -25.96 -6.86 17.62 29.58 35.92 3 rates[4,1] -59.64 5.63 15.40 -88.10 -68.41 -60.86 -55.09 -22.39 7 rates[4,2] -29.59 13.20 29.39 -76.24 -54.26 -32.01 -2.97 24.18 5 rates[4,3] -9.62 11.32 25.54 -57.00 -28.04 -10.72 12.35 34.58 5 rates[5,1] 12.24 1.93 7.34 2.26 6.53 10.36 17.46 29.67 14 rates[5,2] 15.33 2.42 7.48 4.01 9.28 14.21 21.35 31.04 10 rates[5,3] 27.77 0.69 6.10 14.90 23.35 28.58 33.00 36.37 78 nu[1] 0.48 0.01 0.05 0.38 0.46 0.49 0.51 0.56 17 nu[2] 0.35 0.06 0.12 0.17 0.24 0.35 0.44 0.57 4 nu[3] 0.21 0.07 0.18 0.02 0.06 0.17 0.30 0.72 6 nu[4] 0.21 0.03 0.12 0.03 0.11 0.20 0.30 0.45 12 nu[5] 0.28 0.05 0.18 0.04 0.14 0.25 0.40 0.67 14 pi[1] 0.48 0.01 0.05 0.38 0.46 0.49 0.51 0.56 17 pi[2] 0.18 0.03 0.07 0.08 0.12 0.18 0.24 0.31 4 pi[3] 0.06 0.02 0.05 0.01 0.02 0.06 0.09 0.18 8 pi[4] 0.06 0.02 0.05 0.00 0.02 0.05 0.10 0.17 8 pi[5] 0.21 0.02 0.06 0.06 0.18 0.22 0.25 0.31 8 prob[1,1] 0.44 0.01 0.05 0.33 0.40 0.44 0.48 0.55 15 prob[1,2] 0.49 0.02 0.07 0.35 0.45 0.49 0.54 0.60 12 prob[1,3] 0.51 0.02 0.07 0.36 0.46 0.51 0.55 0.63 13 prob[2,1] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 24 prob[2,2] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 157 prob[2,3] 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.03 755 prob[3,1] 0.37 0.27 0.47 0.00 0.00 0.00 1.00 1.00 3 prob[3,2] 0.55 0.28 0.49 0.00 0.00 0.99 1.00 1.00 3 prob[3,3] 0.61 0.27 0.48 0.00 0.00 1.00 1.00 1.00 3 prob[4,1] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 343 prob[4,2] 
0.23 0.19 0.41 0.00 0.00 0.00 0.05 1.00 5
prob[4,3] 0.30 0.24 0.45 0.00 0.00 0.00 1.00 1.00 4
prob[5,1] 0.99 0.00 0.05 0.91 1.00 1.00 1.00 1.00 85
prob[5,2] 1.00 0.00 0.01 0.98 1.00 1.00 1.00 1.00 148
prob[5,3] 1.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 348
lp__ -358.00 1.00 2.91 -363.63 -360.14 -358.00 -355.75 -352.76 8

Rhat
rates[1,1] 1.03
rates[1,2] 1.08
rates[1,3] 1.08
rates[2,1] 3.06
rates[2,2] 1.72
rates[2,3] 1.35
rates[3,1] 2.52
rates[3,2] 2.30
rates[3,3] 2.12
rates[4,1] 1.41
rates[4,2] 1.40
rates[4,3] 1.40
rates[5,1] 1.02
rates[5,2] 1.05
rates[5,3] 1.04
nu[1] 1.15
nu[2] 1.96
nu[3] 1.28
nu[4] 1.22
nu[5] 1.06
pi[1] 1.15
pi[2] 2.02
pi[3] 1.20
pi[4] 1.34
pi[5] 1.29
prob[1,1] 1.03
prob[1,2] 1.08
prob[1,3] 1.08
prob[2,1] 1.05
prob[2,2] 1.01
prob[2,3] 1.00
prob[3,1] 2.11
prob[3,2] 2.40
prob[3,3] 2.22
prob[4,1] 1.00
prob[4,2] 1.38
prob[4,3] 1.57
prob[5,1] 1.02
prob[5,2] 1.02
prob[5,3] 1.00
lp__ 1.23

Samples were drawn using NUTS(diag_e) at Fri Mar 1 12:46:17 2019. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).

In [40]:

posterior_cp_pi <- as.array(fit, pars = c("pi"))
posterior_cp_prob <- as.array(fit, pars = c("prob"))
np_cp <- nuts_params(fit)

In [41]:

mcmc_trace(posterior_cp_pi, np = np_cp)

Overall, the transitions diverge at almost all points; consequently, the traces are not very nice either. Let's also look at the effective sample size.

In [42]:

ratios_cp <- neff_ratio(fit)
mcmc_neff(ratios_cp)

Yuk! Most of the effective sample sizes are extremely low. Before we go on, we should try to change our model, since badly set up models often cause unpleasant posterior diagnostics. In case Stan doesn't like the nonparametrics (the truncated stick-breaking), we can test the same model with a fixed number of clusters.
In [43]:

stan.file <- "_models/binary_mixture.stan"

data {
    int<lower=1> K;
    int<lower=1> n;
    int<lower=1> p;
    int<lower=0,upper=1> x[n, p];
    vector<lower=0>[K] alpha;
}

parameters {
    ordered[p] rates[K];
    simplex[K] pi;
}

transformed parameters {
    vector<lower=0, upper=1>[p] prob[K];
    for (k in 1:K) {
        for (ps in 1:p) {
            prob[k, ps] = inv_logit(rates[k, ps]);
        }
    }
}

model {
    real mix[K];
    pi ~ dirichlet(alpha);
    for (i in 1:n) {
        for (k in 1:K) {
            mix[k] = log(pi[k]);
            for (ps in 1:p) {
                mix[k] += bernoulli_lpmf(x[i, ps] | prob[k, ps]);
            }
        }
        target += log_sum_exp(mix);
    }
}

In [47]:

fit_fixed_K <- stan(stan.file,
                    data = list(K=K, n=n, x=X, p=p, alpha=rep(1.0, K)),
                    iter = 10000, warmup = 1000, chains = 1,
                    control = list(adapt_delta = 0.99))

SAMPLING FOR MODEL 'binary_mixture' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000325 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 3.25 seconds.
Chain 1:
Chain 1:
Chain 1: Iteration:     1 / 10000 [  0%]  (Warmup)
Chain 1: Iteration:  1000 / 10000 [ 10%]  (Warmup)
Chain 1: Iteration:  1001 / 10000 [ 10%]  (Sampling)
Chain 1: Iteration:  2000 / 10000 [ 20%]  (Sampling)
Chain 1: Iteration:  3000 / 10000 [ 30%]  (Sampling)
Chain 1: Iteration:  4000 / 10000 [ 40%]  (Sampling)
Chain 1: Iteration:  5000 / 10000 [ 50%]  (Sampling)
Chain 1: Iteration:  6000 / 10000 [ 60%]  (Sampling)
Chain 1: Iteration:  7000 / 10000 [ 70%]  (Sampling)
Chain 1: Iteration:  8000 / 10000 [ 80%]  (Sampling)
Chain 1: Iteration:  9000 / 10000 [ 90%]  (Sampling)
Chain 1: Iteration: 10000 / 10000 [100%]  (Sampling)
Chain 1:
Chain 1: Elapsed Time: 46.5187 seconds (Warm-up)
Chain 1:               785.536 seconds (Sampling)
Chain 1:               832.055 seconds (Total)
Chain 1:

Warning message:
“There were 8673 divergent transitions after warmup. Increasing adapt_delta above 0.99 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup”
Warning message:
“There were 235 transitions after warmup that exceeded the maximum treedepth.
Increase max_treedepth above 10. See http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded”
Warning message:
“Examine the pairs() plot to diagnose sampling problems”

In [48]:
fit_fixed_K

Inference for Stan model: binary_mixture.
1 chains, each with iter=10000; warmup=1000; thin=1; post-warmup draws per chain=9000, total post-warmup draws=9000.

mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff
rates[1,1] -290.13 106.55 183.54 -500.22 -444.41 -357.05 -73.30 18.60 3
rates[1,2] -192.72 72.04 150.88 -441.52 -320.67 -208.29 -43.91 28.82 4
rates[1,3] -90.81 38.02 106.25 -335.29 -161.16 -64.79 -7.99 35.86 8
rates[2,1] -204.38 96.89 151.38 -402.90 -343.11 -277.98 -33.23 21.32 2
rates[2,2] -129.93 65.80 121.15 -358.00 -233.50 -122.27 -18.63 31.11 3
rates[2,3] -65.22 37.07 88.97 -285.94 -113.61 -41.04 6.48 35.32 6
rates[3,1] 10.36 1.64 5.79 2.10 5.98 9.87 13.62 24.47 12
rates[3,2] 19.11 0.69 7.34 6.44 13.60 18.37 24.49 34.07 113
rates[3,3] 27.16 0.59 7.11 10.31 22.51 28.53 33.10 36.34 146
rates[4,1] -0.28 0.02 0.26 -0.82 -0.45 -0.27 -0.09 0.19 128
rates[4,2] -0.04 0.02 0.25 -0.53 -0.21 -0.04 0.12 0.44 173
rates[4,3] 0.04 0.02 0.25 -0.46 -0.13 0.04 0.21 0.52 169
rates[5,1] -317.91 142.92 229.78 -702.01 -522.26 -209.27 -99.68 -37.97 3
rates[5,2] -217.58 97.06 181.71 -607.87 -376.39 -137.15 -64.00 -17.91 4
rates[5,3] -108.69 51.76 126.31 -442.64 -166.99 -51.82 -19.75 8.30 6
pi[1] 0.09 0.00 0.06 0.00 0.04 0.08 0.13 0.23 225
pi[2] 0.08 0.01 0.06 0.00 0.03 0.07 0.12 0.23 117
pi[3] 0.22 0.02 0.07 0.03 0.19 0.23 0.27 0.32 9
pi[4] 0.49 0.00 0.05 0.40 0.46 0.49 0.52 0.59 490
pi[5] 0.12 0.03 0.08 0.00 0.04 0.10 0.18 0.29 9
prob[1,1] 0.17 0.15 0.36 0.00 0.00 0.00 0.00 1.00 6
prob[1,2] 0.19 0.17 0.39 0.00 0.00 0.00 0.00 1.00 5
prob[1,3] 0.20 0.17 0.40 0.00 0.00 0.00 0.00 1.00 5
prob[2,1] 0.21 0.19 0.40 0.00 0.00 0.00 0.00 1.00 5
prob[2,2] 0.24 0.20 0.42 0.00 0.00 0.00 0.00 1.00 4
prob[2,3] 0.26 0.20 0.43 0.00 0.00 0.00 1.00 1.00 5
prob[3,1] 0.99 0.00 0.04 0.89 1.00 1.00 1.00 1.00 63
prob[3,2] 1.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 1042
prob[3,3] 1.00 0.00 0.00 1.00 1.00 1.00 1.00 1.00 420
prob[4,1] 0.43 0.01 0.06 0.31 0.39 0.43 0.48 0.55 129
prob[4,2] 0.49 0.00 0.06 0.37 0.45 0.49 0.53 0.61 173
prob[4,3] 0.51 0.00 0.06 0.39 0.47 0.51 0.55 0.63 169
prob[5,1] 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1868
prob[5,2] 0.00 0.00 0.06 0.00 0.00 0.00 0.00 0.00 559
prob[5,3] 0.03 0.02 0.18 0.00 0.00 0.00 0.00 1.00 60
lp__ -346.05 3.57 6.83 -359.50 -351.98 -344.82 -340.44 -335.84 4
Rhat
rates[1,1] 1.93
rates[1,2] 1.50
rates[1,3] 1.29
rates[2,1] 2.94
rates[2,2] 1.84
rates[2,3] 1.45
rates[3,1] 1.07
rates[3,2] 1.00
rates[3,3] 1.01
rates[4,1] 1.02
rates[4,2] 1.02
rates[4,3] 1.02
rates[5,1] 3.57
rates[5,2] 2.15
rates[5,3] 1.52
pi[1] 1.00
pi[2] 1.03
pi[3] 1.16
pi[4] 1.00
pi[5] 1.02
prob[1,1] 1.24
prob[1,2] 1.26
prob[1,3] 1.26
prob[2,1] 1.32
prob[2,2] 1.36
prob[2,3] 1.32
prob[3,1] 1.04
prob[3,2] 1.00
prob[3,3] 1.00
prob[4,1] 1.02
prob[4,2] 1.02
prob[4,3] 1.02
prob[5,1] 1.00
prob[5,2] 1.00
prob[5,3] 1.04
lp__ 1.90

Samples were drawn using NUTS(diag_e) at Fri Mar 1 13:05:12 2019. For each parameter, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat=1).

That didn't seem to help at all. So the problem apparently stems from the multivariate Bernoulli likelihood, which makes some sense: binary data isn't very informative in the first place, so making well-grounded inferences on such data sets is difficult. For binary data, collapsing the parameters and sampling only the latent class assignments seems preferable. Thus, the next notebook will be on efficient inference of class assignments using particle MCMC.
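The collapsing idea can be sketched outside of Stan. Below is a minimal collapsed Gibbs sampler for a K-component Bernoulli mixture, written in Python for illustration (the function name, the Dirichlet concentration `alpha`, and the Beta hyperparameters `a`, `b` are assumptions, not part of the model above): with conjugate Dirichlet and Beta priors, the mixture weights and per-feature probabilities can be integrated out analytically, so only the discrete class assignments z are sampled.

```python
import numpy as np

def collapsed_gibbs_bernoulli_mixture(X, K, iters=100, alpha=1.0, a=1.0, b=1.0, seed=0):
    """Collapsed Gibbs sampling for a K-component Bernoulli mixture.

    Mixture weights (Dirichlet prior) and per-feature success probabilities
    (Beta priors) are integrated out via conjugacy, so only the class
    assignments z are sampled.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    z = rng.integers(K, size=n)
    # sufficient statistics per component: member counts and per-feature sums of ones
    counts = np.array([(z == k).sum() for k in range(K)], dtype=float)
    ones = np.array([X[z == k].sum(axis=0) for k in range(K)], dtype=float)
    for _ in range(iters):
        for i in range(n):
            k_old = z[i]
            counts[k_old] -= 1
            ones[k_old] -= X[i]
            # p(z_i = k | z_-i, X) ∝ (counts_k + alpha) * ∏_j Beta-Bernoulli predictive
            p1 = (ones + a) / (counts[:, None] + a + b)      # predictive P(x_ij = 1)
            lik = np.where(X[i] == 1, p1, 1.0 - p1).prod(axis=1)
            probs = (counts + alpha) * lik
            k_new = rng.choice(K, p=probs / probs.sum())
            z[i] = k_new
            counts[k_new] += 1
            ones[k_new] += X[i]
    return z
```

Because the continuous parameters are marginalized, there is no label-switching or funnel geometry for a gradient-based sampler to fight; the trade-off is that z is updated one observation at a time.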
# Proving that the average case complexity of binary search is O(log n)

I know that both the average and worst case complexity of binary search is O(log n), and I know how to prove the worst case complexity is O(log n) using recurrence relations. But how would I go about proving that the average case complexity of binary search is O(log n)?

• How could an upper bound for the worst case not be an upper bound for the average case? – A.Schulz Oct 20 '14 at 5:38

I think most textbooks will provide a good proof. For my part, I can show the average case complexity as follows. Assume a uniform distribution of the position of the value that one wants to find in an array of size $n$.

• For the case of 1 read, the target must be in the middle position, so this case has probability $\frac{1}{n}$.
• For the case of 2 reads, one reads the middle position and then one of the 2 middle positions of the 2 sub-arrays. This probability is $\frac{2}{n}$.
• For the case of 3 reads, there are $2 \cdot 2$ positions which result in this cost, as you go into the 4 sub-arrays of the first 2 sub-arrays. The probability of this cost is $\frac{2^2}{n}$.
...
• For the case of $x$ reads, the probability of this case is $\frac{2^{x-1}}{n}$.

For the average case, the expected number of reads is $\sum\limits_{i=1}^{\log(n)} \frac{i2^{i-1}}{n} = \frac{1}{n} \sum\limits_{i=1}^{\log(n)} i2^{i-1}$

Now you can approximate the sum by an integral: $\int\limits_{1}^{\log(n)} x 2^x dx$ can be calculated and bounded by $\log(n) \cdot 2^{\log(n)} = n\log(n)$, so the sum is $O(n\log(n))$; dividing by $n$ gives the $O(\log(n))$ average. This approximation technique applies to many cases.
Another way to see it: since $i2^{i-1} < \log(n) \cdot 2^{i-1}$ for $i \le \log(n)$, the sum above is bounded by $\frac{\log(n)}{n} \sum\limits_{i=1}^{\log(n)} 2^{i-1}$

The summation part is a geometric series, $\frac{1 - 2^{\log(n)}}{1 - 2} = 2^{\log(n)} - 1 = n - 1$, which is definitely less than $n$; multiplying by $\frac{\log(n)}{n}$ gives you what you want, $\log(n)$.

So you get the bound you want, $O(\log(n))$.

• Great answer! You can also check the decision-tree model to visually figure out the complexity of the algorithm (Reference: CLR) – user3378649 Oct 19 '14 at 22:24
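The argument above can also be checked empirically. The following Python sketch (illustrative, not part of the proof) averages the probe count over all $n$ targets in a sorted array with $n = 2^{10} - 1$ elements; for such a "full tree" size the sum evaluates exactly to $((\log n - 1) \cdot 2^{\log n} + 1)/n \approx 9.01$, safely below $\log_2(n+1) = 10$.

```python
import math

def probes(arr, target):
    """Count the probes (comparisons of arr[mid]) binary search uses to find target."""
    lo, hi, count = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if arr[mid] == target:
            return count
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count  # target absent: full depth reached

n = 2 ** 10 - 1                       # full binary search tree of height 10
arr = list(range(n))
avg = sum(probes(arr, t) for t in arr) / n
exact = ((10 - 1) * 2 ** 10 + 1) / n  # closed form of (1/n) * sum_{i=1}^{10} i*2^(i-1)
assert abs(avg - exact) < 1e-9
print(avg)  # ≈ 9.0098, i.e. Θ(log n)
```

For this array size every depth $i$ from 1 to 10 is reached by exactly $2^{i-1}$ targets, so the simulation reproduces the sum in the answer term by term.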
# Are these empirical discoveries about the Serre Swinnerton-Dyer ring of prime level modular power series actual theorems?

In this question Joel Bellaiche constructed an algebra, M, of modular forms for gamma_0(N) in finite characteristic (which he called p, but I'll call ell) and asked to know its structure. Matt Emerton gave a "comment-answer" that showed, among other things, that M is integrally closed. I've made some explicit calculations (particularly when ell=2 and 3; see my question for some of these when ell=2). The empirical "discoveries" unveiled by these seem likely to be theorems. I'll present a couple of these here, saving the remaining ones for edits. I'll assume that the level N is a prime, p.

M contains f1 and fp: the mod ell reductions of the Fourier expansions of the cusp forms delta(z) and delta(pz).

Discovery 1---M is integral over Z/ell[f1,fp]

Discovery 2---When ell=2 or 3, then M is the integral closure of Z/ell[f1,fp] in its field of fractions.

Are these really true? I'll also hazard the following guess when ell>3. The mod ell reductions of the Eisenstein series E_4 and E_6 generate an extension of Z/ell(f1,fp) of degree (ell-1)/2, and M is the integral closure of Z/ell[f1,fp] in this extension.

EDIT: To describe further observations I'll introduce some notation. If k is even and non-negative, M[k] will be the Z/ell subspace of M consisting of the mod ell reductions of modular power series in Z[[x]] corresponding to weight k forms for gamma_0(p). C will be a non-singular projective curve over Z/ell with function field the field of fractions of M, and D will be the divisor of poles of the element fp of M[12]. For example, when ell=2 and p=11, then in the notation of my question referenced above, M[k] has dimension k, while M[2] is spanned by 1 and t, and M[4] is spanned by 1, t, t^2 and r. We have the relation r^2+r=t^3+t, and C is the curve r^2+r=t^3+t with the point O at infinity adjoined.
f11 = r^3+r^4+t^3 has zeros of orders 11 and 1 at (t,r)=(0,0) and (0,1), and D=12(O). Furthermore M[12] is spanned by 1, t, t^2, t^3, t^4, t^5, t^6, r, t*r, t^2*r, t^3*r and t^4*r, and is the complete linear series attached to the divisor D.

Here's what I think is true in general. Suppose ell=2. Then:

1---fp has one zero of order p, and one of order 1 on C.
2a--When p is 11 mod 12, fp has (p+1)/12 poles of order 12.
2b--When p is 5 mod 12, fp has 1 pole of order 6 and (p-5)/12 of order 12.
2c--When p is 7 mod 12, fp has 2 poles of order 4 and (p-7)/12 of order 12.
2d--When p is 1 mod 12, fp has 1 pole of order 6, 2 of order 4 and (p-13)/12 of order 12.
3---For each k there is a divisor D(k), easily describable in terms of D and k, such that M[k] is the complete linear series attached to this divisor; in particular D(12m) = m(D).

I think that entirely similar results hold when ell=3. My belief is that all of this is known, but I'd appreciate proofs and/or references.

- @paul: I took the liberty of adding links to the MO questions you refer to in the body. I'd suggest you change the title to something more descriptive that does not include an MO question number. – Alberto García-Raboso Mar 11 '13 at 16:46

@Alberto: Thanks. I followed your suggestion. – paul Monsky Mar 11 '13 at 19:45
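As a sanity check on f1: since f1 is the mod 2 reduction of delta, its Fourier expansion is completely explicit. Jacobi's formula ∏(1−q^n)^3 = Σ(−1)^k(2k+1)q^{k(k+1)/2} together with (1+x)^8 ≡ 1+x^8 (mod 2) gives Δ ≡ Σ_{k≥0} q^{(2k+1)^2} (mod 2). The Python sketch below (illustrative; the truncation order N = 150 is arbitrary) verifies this by brute force on truncated power series.

```python
N = 150  # truncation order for the q-expansion

def mul_mod2(f, g, N):
    """Multiply two truncated power series (coefficient lists) mod 2."""
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] ^= 1
    return h

# eta-type product prod_{n>=1} (1 - q^n), truncated; mod 2 the signs don't matter
f = [0] * N
f[0] = 1
for n in range(1, N):
    g = [0] * N
    g[0], g[n] = 1, 1          # (1 + q^n) == (1 - q^n) mod 2
    f = mul_mod2(f, g, N)

# delta = q * prod(1 - q^n)^24, reduced mod 2
p24 = [1] + [0] * (N - 1)
for _ in range(24):
    p24 = mul_mod2(p24, f, N)
delta = [0] + p24[: N - 1]

odd_squares = {(2 * k + 1) ** 2 for k in range(N) if (2 * k + 1) ** 2 < N}
assert all((delta[m] == 1) == (m in odd_squares) for m in range(N))
```

The assertion checks that the nonzero coefficients of Δ mod 2 up to order N sit exactly at the odd squares 1, 9, 25, 49, ...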
Question # Find the value of x if 2^(x−3) = 1.

Solution ## The correct option is A: 3.

Given, 2^(x−3) = 1. We know that for any non-zero integer a, a^0 = 1.

⇒ 2^(x−3) = 2^0. Since the bases on the LHS and RHS are the same, the exponents must be equal.

⇒ x − 3 = 0 ⇒ x = 3.
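The same exponent comparison can be checked mechanically; a one-line verification in Python (for illustration):

```python
import math

# 2**(x - 3) = 1  =>  x - 3 = log2(1) = 0  =>  x = 3
x = 3 + math.log2(1)
assert x == 3
assert 2 ** (x - 3) == 1
```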
# [Not-a-Bug] iReboot not showing XP or Vista options in XP

#### baldwin

##### Member

I am dual booting Vista & XP. The iReboot icon menu in the notification area in Vista shows the "reboot on selection" options Vista or XP as it's supposed to. However, the iReboot icon menu in the notification area in XP ONLY shows the "reboot on selection" but NOT the Vista or XP option. I have uninstalled and reinstalled iReboot twice in XP. Any ideas why the two options don't show in XP even though they show in Vista? baldwin

#### Mak 2.0

##### Mod...WAFFLES!?!?

Staff member

Do you have EasyBCD installed?

#### mqudsi

##### Mostly Harmless

Staff member

Hi baldwin, welcome to NeoSmart Technologies. I'm currently in XP right now, and it works fine here..... Note that EasyBCD is not required. To debug: Start | Run | CMD.exe

Code:

cd \Program Files\NeoSmart Technologies\iReboot\
bcdedit.exe

Copy and paste the output here, please.

#### Mak 2.0

##### Mod...WAFFLES!?!?

Staff member

EasyBCD isn't required anymore. I thought it was?!? Well, good work now that it isn't. I am in XP and it is working fine for me as well.

#### mqudsi

##### Mostly Harmless

Staff member

Perhaps there was a bug in one of the beta releases or something, but IIRC iReboot was always intended to function fully stand-alone...

#### Mak 2.0

##### Mod...WAFFLES!?!?

Staff member

I just remember an old convo during the beta that we had about EasyBCD being needed. Now that it isn't, that is great news. Guess I should read up more. *whistles* So much reading, such little time...

#### baldwin

##### Member

Yes, I have EasyBCD installed. I can boot to either Vista or XP OK....it's just that I can't use iReboot in XP. It works OK in Vista though.
Here is my boot.ini file in Vista:

timeout=30
default=multi(0)disk(0)rdisk(2)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(2)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=OptIn

Here is my boot.ini in XP:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /noexecute=optin /fastdetect

They are both on separate physical drives. baldwin

#### mqudsi

##### Mostly Harmless

Staff member

That doesn't help me any - please follow the steps in post #3.

#### baldwin

##### Member

I'm sorry, it seems that I'm not good at DOS commands. I keep getting the error "Program files is not a valid command"..... Is there any other way that I can provide you with the information you need?.... Can I attach that bcdedit.exe file here to this reply? baldwin

#### Mak 2.0

##### Mod...WAFFLES!?!?

Staff member

I think it should have C:\ in front of the \Program Files\NeoSmart Technologies... The C:\ represents your XP drive where your files are located. If it isn't installed on C: then replace that with your drive letter.

#### mqudsi

##### Mostly Harmless

Staff member

Ah, it's the missing " around the thing.. C:\ is optional, but wouldn't hurt.

Code:

cd "C:\Program Files\NeoSmart Technologies\iReboot\"

.....

#### baldwin

##### Member

Folks, I think I just figured out why iReboot is not working on my XP system but is working correctly on my Vista system. I am using a program on my Vista system called RollbackRX. Apparently it takes over the boot process at startup and does not recognize the XP drive. For whatever reason EasyBCD will not install on the XP version. After I install EasyBCD on XP and try to run it I get the error "valid BCD registry not detected. EasyBCD has detected that your BCD boot data and MBR are either not from the latest version of windows vista or doesn't exist".....
I know you said that EasyBCD doesn't need to be installed in order for iReboot to run; however, since RollbackRX takes over the boot process, iReboot doesn't see the Vista or XP options on the iReboot menu in order to make a selection of either of the operating systems. As I said earlier, iReboot works flawlessly in the Vista operating system. Thanks to all who responded. baldwin

#### mqudsi

##### Mostly Harmless

Staff member

Thanks for getting back to us, baldwin. That makes quite a bit of sense: third-party programs that hook into the bootloader would certainly cause something like this to happen. It is unfortunately not possible to add a workaround to iReboot; your best bet would be to contact RollbackRX and ask them (nicely!) to make their program compatible with the Vista boot loader.
• Found. Phys. (IF 1.437) Pub Date : 2020-08-03 M. I. Samar, V. M. Tkachuk We propose a Lorentz-covariant deformed algebra describing a (3 + 1)-dimensional quantized spacetime, which in the nonrelativistic limit leads to the undeformed one. The deformed Poincaré transformations leaving the algebra invariant are identified. In the classical limit the Lorentz-covariant deformed algebra yields the deformed Lorentz-covariant Poisson brackets. Kepler problem with the deformed Lorentz-covariant Updated: 2020-08-04 • Found. Phys. (IF 1.437) Pub Date : 2020-07-28 P.-M. Binder A recent proposal to formulate physics in terms of finite-information variables is examined, concentrating on its consequences for classical mechanics. Both shortcomings and promising avenues are discussed. Updated: 2020-07-28 • Found. Phys. (IF 1.437) Pub Date : 2020-07-25 Ciann-Dong Yang, Shiang-Yi Han The correspondence principle states that the quantum system will approach the classical system in high quantum numbers. Indeed, the average of the quantum probability density distribution reflects a classical-like distribution. However, the probability of finding a particle at the node of the wave function is zero. This condition is recognized as the nodal issue. In this paper, we propose a solution Updated: 2020-07-25 • Found. Phys. (IF 1.437) Pub Date : 2020-07-22 Consecutive measurements performed on the same quantum system can reveal fundamental insights into quantum theory’s causal structure, and probe different aspects of the quantum measurement problem. According to the Copenhagen interpretation, measurements affect the quantum system in such a way that the quantum superposition collapses after each measurement, erasing any memory of the prior state. We Updated: 2020-07-23 • Found. Phys. (IF 1.437) Pub Date : 2020-07-22 Tomasz Bigaj, Antonio Vassallo An important part of the influential Humean doctrine in philosophy is the supervenience principle (sometimes referred to as the principle of separability).
This principle asserts that the complete state of the world supervenes on the intrinsic properties of its most fundamental components and their spatiotemporal relations (the so-called Humean mosaic). There are well-known arguments in the literature 更新日期:2020-07-22 • Found. Phys. (IF 1.437) Pub Date : 2020-07-18 Chrysovalantis Stergiou In this paper, I reconstruct an argument of Aristidis Arageorgis against empirical underdetermination of the state of a physical system in a C*-algebraic setting and explore its soundness. The argument, aiming against algebraic imperialism, the operationalist attitude which characterized the first steps of Algebraic Quantum Field Theory, is based on two topological properties of the state space: being 更新日期:2020-07-18 • Found. Phys. (IF 1.437) Pub Date : 2020-07-17 Sébastien Poinat One of the most striking features of the epistemological situation of Quantum Mechanics is the number of interpretations and the many schools of thought, with no consensus on the way to understand the theory. In this article, I introduce a distinction between orthodox interpretations and heterodox interpretations of Quantum Mechanics: the orthodox interpretations preserve all the quantum principles 更新日期:2020-07-17 • Found. Phys. (IF 1.437) Pub Date : 2020-07-12 Sergey N. Filippov, Stan Gudder, Teiko Heinosaari, Leevi Leppäjärvi The formalism of general probabilistic theories provides a universal paradigm that is suitable for describing various physical systems including classical and quantum ones as particular cases. Contrary to the usual no-restriction hypothesis, the set of accessible meters within a given theory can be limited for different reasons, and this raises a question of what restrictions on meters are operationally 更新日期:2020-07-13 • Found. Phys. 
(IF 1.437) Pub Date : 2020-07-10 Engel Roza An analysis is presented of the possible existence of the second anomalous dipole moment of Dirac’s particle next to the one associated with the angular momentum. It includes a discussion why, in spite of his own derivation, Dirac has doubted about its relevancy. It is shown why since then it has been overlooked and why it has vanished from leading textbooks. A critical survey is given on the reasons 更新日期:2020-07-10 • Found. Phys. (IF 1.437) Pub Date : 2020-07-10 Nikodem Popławski We show that in the presence of the torsion tensor $$S^k_{ij}$$, the quantum commutation relation for the four-momentum, traced over spinor indices, is given by $$[p_i,p_j]=2i\hbar S^k_{ij}p_k$$. In the Einstein–Cartan theory of gravity, in which torsion is coupled to spin of fermions, this relation in a coordinate frame reduces to a commutation relation of noncommutative momentum space, $$[p_i,p_j]=i\epsilon 更新日期:2020-07-10 • Found. Phys. (IF 1.437) Pub Date : 2020-07-08 Salvatore Capozziello, Micol Benetti, Alessandro D. A. M. Spallicci The uncertainty on measurements, given by the Heisenberg principle, is a quantum concept usually not taken into account in General Relativity. From a cosmological point of view, several authors wonder how such a principle can be reconciled with the Big Bang singularity, but, generally, not whether it may affect the reliability of cosmological measurements. In this letter, we express the Compton mass 更新日期:2020-07-08 • Found. Phys. (IF 1.437) Pub Date : 2020-07-08 Aldo F. G. Solis-Labastida, Jorge G. Hirsch Measurements are shown to be processes designed to return figures: they are effective. This effectivity allows for a formalization as Turing machines, which can be described employing computation theory. Inspired in the halting problem we draw some limitations for measurement procedures: procedures that verify if a quantity is measured cannot work in every case. 更新日期:2020-07-08 • Found. Phys. 
(IF 1.437) Pub Date : 2020-06-30 Filipe C. R. Barroso, Orfeu Bertolami We propose a Quantum Field Theory description of beams on a Mach–Zehnder interferometer and apply the method to describe Interaction Free Measurements (IFMs), concluding that there is a change of momentum of the fields in IFMs. Analysing the factors involved in the probability of emission of low-energy photons, we argue that they do not yield meaningful contributions to the probabilities of the IFMs 更新日期:2020-06-30 • Found. Phys. (IF 1.437) Pub Date : 2020-06-30 Arkady Plotnitsky This article aims to contribute to the ongoing task of clarifying the relationships between reality, probability, and nonlocality in quantum physics. It is in part stimulated by Khrennikov’s argument, in several communications, for “eliminating the issue of quantum nonlocality” from the analysis of quantum entanglement. I argue, however, that the question may not be that of eliminating but instead 更新日期:2020-06-30 • Found. Phys. (IF 1.437) Pub Date : 2020-06-25 Klaus Renziehausen, Ingo Barth Bohm developed the Bohmian mechanics (BM), in which the Schrödinger equation is transformed into two differential equations: a continuity equation and an equation of motion similar to the Newtonian equation of motion. This transformation can be executed both for single-particle systems and for many-particle systems. Later, Kuzmenkov and Maksimov used basic quantum mechanics for the derivation of many-particle 更新日期:2020-06-26 • Found. Phys. (IF 1.437) Pub Date : 2020-06-22 Marco Forgione This paper argues that the path integral formulation of quantum mechanics suggests a form of holism for which the whole (total ensemble of paths) has properties that are not strongly reducible to the properties of the parts (the single trajectories). Feynman’s sum over histories calculates the probability amplitude of a particle moving within a boundary by summing over all the possible trajectories 更新日期:2020-06-22 • Found. Phys. 
(IF 1.437) Pub Date : 2020-06-08 S. L. R. Vieira, K. Bakke By exploring the hypothesis of magnetic monopoles, we consider the existence of electric fields produced by magnetic current densities. Then, we consider a uniformly rotating frame with the purpose of searching for effects of rotation on the interaction of axial electric fields with the magnetic quadrupole moment of a neutral particle. Our analysis is made through the WKB (Wentzel, Kramers and Brillouin) 更新日期:2020-06-08 • Found. Phys. (IF 1.437) Pub Date : 2020-05-19 Ali Barzegar QBism is one of the main candidates for an epistemic interpretation of quantum mechanics. According to QBism, the quantum state or the wavefunction represents the subjective degrees of belief of the agent assigning the state. But, although the quantum state is not part of the furniture of the world, quantum mechanics grasps the real via the Born rule which is a consistency condition for the probability 更新日期:2020-05-19 • Found. Phys. (IF 1.437) Pub Date : 2020-05-18 Holger F. Hofmann The Hilbert space formalism describes causality as a statistical relation between initial experimental conditions and final measurement outcomes, expressed by the inner products of state vectors representing these conditions. This representation of causality is in fundamental conflict with the classical notion that causality should be expressed in terms of the continuity of intermediate realities. 更新日期:2020-05-18 • Found. Phys. (IF 1.437) Pub Date : 2020-05-04 Detlev Buchholz, Klaus Fredenhagen The essence of the path integral method in quantum physics can be expressed in terms of two relations between unitary propagators, describing perturbations of the underlying system. They inherit the causal structure of the theory and its invariance properties under variations of the action. These relations determine a dynamical algebra of bounded operators which encodes all properties of the corresponding 更新日期:2020-05-04 • Found. Phys. 
(IF 1.437) Pub Date : 2020-04-26 Per Arve Everett’s Relative State Interpretation has gained increasing interest due to the progress of understanding the role of decoherence. In order to fulfill its promise as a realistic description of the physical world, two postulates are formulated. In short they are (1) for a system with continuous coordinates $${\mathbf {x}}$$, discrete variable j, and state $$\psi _j({\mathbf {x}})$$, the density $$\rho Updated: 2020-04-26 • Found. Phys. (IF 1.437) Pub Date : 2020-04-15 Victoria J. Wright, Stefan Weigert The authors would like to make the corrections to the original article described below. Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-02-18 Robert C. Bishop, George F. R. Ellis Contextual emergence was originally proposed as an inter-level relation between different levels of description to describe an epistemic notion of emergence in physics. Here, we discuss the ontic extension of this relation to different domains or levels of physical reality using the properties of temperature and molecular shape (chirality) as detailed case studies. We emphasize the concepts of stability Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-03-09 R. E. Kastner The Frauchiger–Renner Paradox is an extension of paradoxes based on the “Problem of Measurement,” such as Schrödinger’s Cat and Wigner’s Friend. All these paradoxes stem from assuming that quantum theory has only unitary (linear) physical dynamics, and the attendant ambiguity about what counts as a ‘measurement’—i.e., the inability to account for the observation of determinate measurement outcomes Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-03-27 G. B. Mainland, Bernard Mulligan There are two types of fluctuations in the quantum vacuum: type 1 vacuum fluctuations are on shell and can interact with matter in specific, limited ways that have observable consequences; type 2 vacuum fluctuations are off shell and cannot interact with matter.
A photon will polarize a type 1, bound, charged lepton–antilepton vacuum fluctuation in much the same manner that it would polarize a dielectric Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-04-04 Michał Drągowski, Marta Włodarczyk Fundamental incompatibility arises at the interface of quantum mechanics and the special theory of relativity with Einstein synchronization, in which simultaneity is not absolute. It has, however, been shown that a relativistic theory preserving absolute simultaneity allows to formulate Lorentz-covariant quantum theory, at a price of introducing a preferred frame of reference manifesting itself in Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-03-14 Juliusz Doboszewski Two interesting “no hole” spacetime properties (being epistemically hole free (g), not being future nakedly singular) are unstable in the fine topology. Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-03-02 Eric Ling We show that the big bang is a coordinate singularity for a large class of $$k = -1$$ inflationary FLRW spacetimes which we have dubbed ‘Milne-like.’ By introducing a new set of coordinates, the big bang appears as a past boundary of the universe where the metric is no longer degenerate—a result which has already been investigated in the context of vacuum decay (Coleman and De Luccia in Phys Rev D Updated: 2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-04-18 Guang Ping He Recently, Frauchiger and Renner proposed a Gedankenexperiment, which was claimed to be able to prove that quantum theory cannot consistently describe the use of itself. Here we show that the conclusions of Frauchiger and Renner actually came from their incorrect description of some quantum states. With the correct description there will be no inconsistent results, no matter which quantum interpretation Updated: 2020-04-23 • Found. Phys.
(IF 1.437) Pub Date : 2020-04-09 Ingemar Bengtsson The problem of constructing maximal equiangular tight frames or SICs was raised by Zauner in 1998. Four years ago it was realized that the problem is closely connected to a major open problem in number theory. We discuss why such a connection was perhaps to be expected, and give a simplified sketch of some developments that have taken place in the past 4 years. The aim, so far unfulfilled, is to prove 更新日期:2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2020-04-08 Pierre Uzan This paper shows that the Clauser–Horne–Shimony–Holt test of locality of correlations which was originally designed to be used with binary observables can actually be used for any couples of quantum-like bounded continuous observables, and then for any experimental situation describable within the mathematical framework of quantum theory. 更新日期:2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2019-05-03 Joshua Norton The Hole Argument was originally formulated by Einstein and it haunted him as he struggled to understand the meaning of spacetime coordinates in the context of the diffeomorphism invariance of general relativity. This argument has since been put to philosophical use by Earman and Norton (Br J Philos Sci 515–525, 1987) to argue against a substantival conception of spacetime. In the present work I demonstrate 更新日期:2020-04-23 • Found. Phys. (IF 1.437) Pub Date : 2019-10-04 The Hole Argument is primarily about the meaning of general covariance in general relativity. As such it raises many deep issues about identity in mathematics and physics, the ontology of space–time, and how scientific representation works. This paper is about the application of a new foundational programme in mathematics, namely homotopy type theory (HoTT), to the Hole Argument. It is argued that 更新日期:2020-04-23 • Found. Phys. 
• Found. Phys. (IF 1.437), Pub Date: 2019-05-04, Neil Dewar. This is an essay about general covariance, and what it says (or doesn't say) about spacetime structure. After outlining a version of the dynamical approach to spacetime theories, and how it struggles to deal with generally covariant theories, I argue that we should think about the symmetry structure of spacetime rather differently in generally-covariant theories compared to non-generally-covariant […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2019-08-23, John Dougherty. I apply homotopy type theory (HoTT) to the hole argument as formulated by Earman and Norton. I argue that HoTT gives a precise sense in which diffeomorphism-related Lorentzian manifolds represent the same spacetime, undermining Earman and Norton's verificationist dilemma and common formulations of the hole argument. However, adopting this account does not alleviate worries about determinism: general […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2018-12-05, Carolyn Brighouse. I illustrate a challenge to a view that is a response to the Hole Argument. The view, sophisticated substantivalism, has been claimed to be the received view. While sophisticated substantivalism has many defenders, there is a fundamental tension in the view that has not received the attention it deserves. Anyone who defends or endorses sophisticated substantivalism should acknowledge this challenge […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2020-03-18, Bryan W. Roberts, James Owen Weatherall. This special issue of Foundations of Physics collects together articles representing some recent new perspectives on the hole argument in the history and philosophy of physics. Our task here is to introduce those new perspectives. (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2020-01-28, Bryan W. Roberts. Leibniz Equivalence is a principle of applied mathematics that is widely assumed in both general relativity textbooks and in the philosophical literature on Einstein's hole argument. In this article, I clarify an ambiguity in the statement of this Leibniz Equivalence, and argue that the relevant expression of it for the hole argument is strictly false. I then show that the hole argument still succeeds […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2020-02-10. We address a recent proposal concerning 'surplus structure' due to Nguyen et al. (Br J Phi Sci, 2018). We argue that the sense of 'surplus structure' captured by their formal criterion is importantly different from—and in a sense, opposite to—another sense of 'surplus structure' used by philosophers. We argue that minimizing structure in one sense is generally incompatible with minimizing structure […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2018-08-25, Samuel C. Fletcher. Recent work on the hole argument in general relativity by Weatherall (Br J Philos Sci 69(2):329–350, 2018) has drawn attention to the neglected concept of (mathematical) models' representational capacities. I argue for several theses about the structure of these capacities, including that they should be understood not as many-to-one relations from models to the world, but in general as many-to-many […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2020-01-28, Saeed Naif Turki Al-Rashid, Mohammed A. Z. Habeeb, Tugdual S. LeBohec. Applying the resolution–scale relativity principle to develop a mechanics of non-differentiable dynamical paths, we find that, in one dimension, stationary motion corresponds to an Itô process driven by the solutions of a Riccati equation. We verify that the corresponding Fokker–Planck equation is solved for a probability density corresponding to the squared modulus of the solution of the Schrödinger […] (Updated: 2020-04-23)
• Found. Phys. (IF 1.437), Pub Date: 2020-02-14, Márton Gömöri. Contemporary debate over laws of nature centers around Humean supervenience, the thesis that everything supervenes on the distribution of non-nomic facts. The key ingredient of this thesis is the idea that nomic-like concepts—law, chance, causation, etc.—are expressible in terms of the regularities of non-nomic facts. Inherent to this idea is the tacit conviction that regularities, "constant conjunctions" […] (Updated: 2020-02-14)
• Found. Phys. (IF 1.437), Pub Date: 2020-01-30, Tim Maudlin. The meaning and truth conditions for claims about physical modality and causation have been considered problematic since Hume's empiricist critique. But the underlying semantic commitments that follow from Hume's empiricism about ideas have long been abandoned by the philosophical community. Once the consequences of that abandonment are properly appreciated, the problems of physical modality and causal […] (Updated: 2020-01-30)
• Found. Phys. (IF 1.437), Pub Date: 2020-01-21, Balázs Gyenis. We call attention to different formulations of how physical laws relate to what is physically possible in the philosophical literature, and argue that it may be the case that determinism fails under one formulation but reigns under the other. Whether this is so depends on our view on the nature of laws, and may also depend on the inter-theoretical relationships among our best physical theories, or […] (Updated: 2020-01-21)
• Found. Phys. (IF 1.437), Pub Date: 2019-06-01. A fundamental postulate of statistical mechanics is that all microstates in an isolated system are equally probable. This postulate, which goes back to Boltzmann, has often been criticized for not having a clear physical foundation. In this note, we provide a derivation of the canonical (Boltzmann) distribution that avoids this postulate. In its place, we impose two axioms with physical interpretations […] (Updated: 2019-11-01)
• Found. Phys. (IF 1.437), Pub Date: 2018-11-06, Howard Barnum, Ciarán M Lee, John H Selby. We investigate the connection between interference and computational power within the operationally defined framework of generalised probabilistic theories. To compare the computational abilities of different theories within this framework we show that any theory satisfying four natural physical principles possesses a well-defined oracle model. Indeed, we prove a subroutine theorem for oracles in such […] (Updated: 2019-11-01)
• Found. Phys. (IF 1.437), Pub Date: 2017-03-21, Jakob Kellner. In an attempt to demonstrate that local hidden variables are mathematically possible, Pitowsky constructed "spin-[Formula: see text] functions" and later "Kolmogorovian models", which employ a nonstandard notion of probability. We describe Pitowsky's analysis and argue (with the benefit of hindsight) that his notion of hidden variables is in fact just super-determinism (and accordingly physically […] (Updated: 2019-11-01)
• Found. Phys. (IF 1.437), Pub Date: 2014-01-01, Maurice A de Gosson. The aim of the famous Born and Jordan 1925 paper was to put Heisenberg's matrix mechanics on a firm mathematical basis. Born and Jordan showed that if one wants to ensure energy conservation in Heisenberg's theory it is necessary and sufficient to quantize observables following a certain ordering rule. One apparently unnoticed consequence of this fact is that Schrödinger's wave mechanics cannot be […] (Updated: 2019-11-01)
• Found. Phys. (IF 1.437), Pub Date: 2013-01-01, Maurice A de Gosson. Quantum blobs are the smallest units of phase space compatible with the uncertainty principle of quantum mechanics and having the symplectic group as group of symmetries. Quantum blobs are in a bijective correspondence with the squeezed coherent states from standard quantum mechanics, of which they are a phase space picture. This allows us to propose a substitute for phase space in quantum […] (Updated: 2019-11-01)
• Found. Phys. (IF 1.437), Pub Date: 2019-08-28, Joanna Luc. In this paper non-Hausdorff manifolds as potential basic objects of General Relativity are investigated. One can distinguish four stages of identifying an appropriate mathematical structure to describe physical systems: kinematic, dynamical, physical reasonability, and empirical. The thesis of this paper is that in the context of General Relativity, non-Hausdorff manifolds pass the first two stages […] (Updated: 2019-08-28)
• Found. Phys. (IF 1.437), Pub Date: 2019-07-15, László E. Szabó. On the basis of what I call physico-formalist philosophy of mathematics, I will develop an amended account of the Kantian–Reichenbachian conception of constitutive a priori. It will be shown that the features (attributes, qualities, properties) attributed to a real object are not possessed by the object as a "thing-in-itself"; they require a physical theory by means of which these features are constituted […] (Updated: 2019-07-15)
• Found. Phys. (IF 1.437), Pub Date: 2019-06-29, Tomasz Placek. The possibility question concerns the status of possibilities: do they form an irreducible category of the external reality, or are they merely features of our cognitive framework? If fundamental physics is ever to shed light on this issue, it must be done by some future theory that unifies insights of general relativity and quantum mechanics. The paper investigates one programme of this kind, namely […] (Updated: 2019-06-29)
• Found. Phys. (IF 1.437), Pub Date: 2019-06-14, Thomas Müller. In this paper we describe a novel approach to defining an ontologically fundamental notion of co-presentness that does not go against the tenets of relativity theory. We survey the possible reactions to the problem of the present in relativity theory, introducing a terminological distinction between a static role of the present, which is served by the relation of simultaneity, and a dynamic role of […] (Updated: 2019-06-14)
• Found. Phys. (IF 1.437), Pub Date: 2019-05-14, Samuel C. Fletcher. Based on three common interpretive commitments in general relativity, I raise a conceptual problem for the usual identification, in that theory, of timelike curves as those that represent the possible histories of (test) particles in spacetime. This problem affords at least three different solutions, depending on different representational and ontological assumptions one makes about the nature of (test) […] (Updated: 2019-05-14)

Contents have been reproduced by permission of the publishers.
# 180 Days of Math for Fifth Grade Day 50 Answers Key By working through our 180 Days of Math for Fifth Grade Answers Key Day 50 regularly, students can build better problem-solving skills. ## 180 Days of Math for Fifth Grade Answers Key Day 50 Directions: Solve each problem. Question 1. Calculate the difference between 96 and 32. ______________ Answer: 64 Explanation: Subtracting the smaller number, 32, from the larger number, 96, gives the difference: 96 – 32 = 64 Question 2. 9 • 6 = _____ Answer: 54 Explanation: The dot (•) represents multiplication. Multiplying 9 by 6 gives the product: 9 • 6 = 9 × 6 = 54 Question 3. How many groups of 4 are in 36? _____________________ Answer: 9 Explanation: Dividing 36 into groups of 4 gives 9 groups: 36 ÷ 4 = 9 Question 4. Round 45,958 to the nearest hundred. _____________________ Answer: 46,000 Explanation: 45,958 lies between 45,900 and 46,000. Because the tens digit is 5, we round up, so 45,958 rounded to the nearest hundred is 46,000 Question 5. 50% of $60 is ____. Answer: $30 Explanation: 50% of $60 = $$\frac{50 \times 60}{100}$$ = 5 × 6 = 30 Question 6. 10 ÷ 2 + 9 = ____ Answer: 14 Explanation: The given equation is a combined operation, so divide first: 10 ÷ 2 = 5; then add: 5 + 9 = 14 Question 7. 9 × 8 = 65 + ____ Answer: 7 Explanation: Let the unknown number be x. Then 9 × 8 = 65 + x, so x = 72 – 65 = 7 Question 8. How many months are in 2 years? ________________ Answer: 24 months Explanation: There are 12 months in a calendar year, so 2 years = 2 × 12 = 24 months Question 9. Is this a regular shape? Answer: No; the shape shown is a rhombus. Explanation: A rhombus is a 2-D shape with four equal sides, hence a quadrilateral, and its two diagonals bisect each other at right angles. Since its four angles are not all equal (unless it is a square), a rhombus is not a regular shape. Question 10. You want to create a survey to find out how your classmates got to school this morning. What would be a good question to ask? _____________________ Answer: "How did you get to school this morning?" Explanation: A survey is a quantitative method for collecting information from a group, so a good survey question asks directly about what you want to know. Question 11. 
You place the following shapes in a bag: 5 circles, 3 triangles, 7 squares, and 5 rectangles. If you reach into the bag and pull out a shape, what is the probability that you will grab a square? _____________________ Answer: $$\frac{7}{20}$$ = 0.35 is the probability that you will grab a square. Explanation: The bag holds 5 + 3 + 7 + 5 = 20 shapes in total, 7 of which are squares, so the probability of grabbing a square is $$\frac{7}{20}$$ = 0.35, a 35% chance. Question 12. Find the rule to complete the pyramid. Answer: Explanation: The rule is to add two adjacent numbers in the bottom row to get the number above them: 9 + 5 = 14, 5 + 4 = 9, 6 + 7 = 13. Applying this rule completes the pyramid.
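For readers who want a quick check, the numeric answers in this key can be verified with a short Python sketch (ours, not part of the original worksheet):

```python
# Verify the arithmetic answers from the Day 50 key
assert 96 - 32 == 64                 # Question 1: difference
assert 9 * 6 == 54                   # Question 2: product
assert 36 // 4 == 9                  # Question 3: groups of 4 in 36
assert round(45958, -2) == 46000     # Question 4: nearest hundred
assert 0.50 * 60 == 30               # Question 5: 50% of $60
assert 10 / 2 + 9 == 14              # Question 6: divide before adding
assert 9 * 8 - 65 == 7               # Question 7: unknown addend
assert 2 * 12 == 24                  # Question 8: months in 2 years
assert 7 / (5 + 3 + 7 + 5) == 0.35   # Question 11: P(square)
print("All answers check out")
```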
# Fluctuating Interest Rates Deliver Fiscal Insurance¶ In addition to what’s in Anaconda, this lecture will need the following libraries: In [1]: !pip install --upgrade quantecon ## Overview¶ This lecture extends our investigations of how optimal policies for levying a flat-rate tax on labor income and issuing government debt depend on whether there are complete markets for debt. A Ramsey allocation and Ramsey policy in the AMSS [AMSSeppala02] model described in optimal taxation without state-contingent debt generally differs from a Ramsey allocation and Ramsey policy in the Lucas-Stokey [LS83] model described in optimal taxation with state-contingent debt. This is because the implementability restriction that a competitive equilibrium with a distorting tax imposes on allocations in the Lucas-Stokey model is just one among a set of implementability conditions imposed in the AMSS model. These additional constraints require that time $t$ components of a Ramsey allocation for the AMSS model be measurable with respect to time $t-1$ information. The measurability constraints imposed by the AMSS model are inherited from the restriction that only one-period risk-free bonds can be traded. Differences between the Ramsey allocations in the two models indicate that at least some of the measurability constraints of the AMSS model of optimal taxation without state-contingent debt are violated at the Ramsey allocation of a corresponding [LS83] model with state-contingent debt. Another way to say this is that differences between the Ramsey allocations of the two models indicate that some of the measurability constraints of the AMSS model are violated at the Ramsey allocation of the Lucas-Stokey model. Nonzero Lagrange multipliers on those constraints make the Ramsey allocation for the AMSS model differ from the Ramsey allocation for the Lucas-Stokey model. 
This lecture studies a special AMSS model in which • The exogenous state variable $s_t$ is governed by a finite-state Markov chain. • With an arbitrary budget-feasible initial level of government debt, the measurability constraints • bind for many periods, but $\ldots$. • eventually, they stop binding evermore, so $\ldots$. • in the tail of the Ramsey plan, the Lagrange multipliers $\gamma_t(s^t)$ on the AMSS implementability constraints (8) converge to zero. • After the implementability constraints (8) no longer bind in the tail of the AMSS Ramsey plan • history dependence of the AMSS state variable $x_t$ vanishes and $x_t$ becomes a time-invariant function of the Markov state $s_t$. • the par value of government debt becomes constant over time so that $b_{t+1}(s^t) = \bar b$ for $t \geq T$ for a sufficiently large $T$. • $\bar b <0$, so that the tail of the Ramsey plan instructs the government always to make a constant par value of risk-free one-period loans to the private sector. • the one-period gross interest rate $R_t(s^t)$ on risk-free debt converges to a time-invariant function of the Markov state $s_t$. • For a particular $b_0 < 0$ (i.e., a positive level of initial government loans to the private sector), the measurability constraints never bind. • In this special case • the par value $b_{t+1}(s_t) = \bar b$ of government debt at time $t$ and Markov state $s_t$ is constant across time and states, but $\ldots$. • the market value $\frac{\bar b}{R_t(s_t)}$ of government debt at time $t$ varies as a time-invariant function of the Markov state $s_t$. • fluctuations in the interest rate make gross earnings on government debt $\frac{\bar b}{R_t(s_t)}$ fully insure the gross-of-gross-interest-payments government budget against fluctuations in government expenditures. • the state variable $x$ in a recursive representation of a Ramsey plan is a time-invariant function of the Markov state for $t \geq 0$. 
• In this special case, the Ramsey allocation in the AMSS model agrees with that in a [LS83] model in which the same amount of state-contingent debt falls due in all states tomorrow • it is a situation in which the Ramsey planner loses nothing from not being able to purchase state-contingent debt and being restricted to exchange only risk-free debt. • This outcome emerges only when we initialize government debt at a particular $b_0 < 0$. In a nutshell, the reason for this striking outcome is that at a particular level of risk-free government assets, fluctuations in the one-period risk-free interest rate provide the government with complete insurance against stochastically varying government expenditures. In [2]:
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import fsolve, fmin
## Forces at Work¶ The forces driving asymptotic outcomes here are examples of dynamics present in a more general class of incomplete markets models analyzed in [BEGS17] (BEGS). BEGS provide conditions under which government debt under a Ramsey plan converges to an invariant distribution. BEGS construct approximations to that asymptotically invariant distribution of government debt under a Ramsey plan. BEGS also compute an approximation to a Ramsey plan’s rate of convergence to that limiting invariant distribution. We shall use the BEGS approximating limiting distribution and the approximating rate of convergence to help interpret outcomes here. For a long time, the Ramsey plan puts a nontrivial martingale-like component into the par value of government debt as part of the way that the Ramsey plan imperfectly smooths distortions from the labor tax rate across time and Markov states. But BEGS show that binding implementability constraints slowly push government debt in a direction designed to let the government use fluctuations in the equilibrium interest rate, rather than fluctuations in par values of debt, to insure against shocks to government expenditures. 
• This is a weak (but unrelenting) force that, starting from an initial debt level, for a long time is dominated by the stochastic martingale-like component of debt dynamics that the Ramsey planner uses to facilitate imperfect tax-smoothing across time and states. • This weak force slowly drives the par value of government assets to a constant level at which the government can completely insure against government expenditure shocks while shutting down the stochastic component of debt dynamics. • At that point, the tail of the par value of government debt becomes a trivial martingale: it is constant over time. ## Logical Flow of Lecture¶ We present ideas in the following order • We describe a two-state AMSS economy and generate a long simulation starting from a positive initial government debt. • We observe that in a long simulation starting from positive government debt, the par value of government debt eventually converges to a constant $\bar b$. • In fact, the par value of government debt converges to the same constant level $\bar b$ for alternative realizations of the Markov government expenditure process and for alternative settings of initial government debt $b_0$. • We reverse engineer a particular value of initial government debt $b_0$ (it turns out to be negative) for which the continuation debt moves to $\bar b$ immediately. • We note that for this particular initial debt $b_0$, the Ramsey allocations for the AMSS economy and the Lucas-Stokey model are identical • we verify that the LS Ramsey planner chooses to purchase identical claims to time $t+1$ consumption for all Markov states tomorrow for each Markov state today. • We compute the BEGS approximations to check how accurately they describe the dynamics of the long-simulation. ### Equations from Lucas-Stokey (1983) Model¶ Although we are studying an AMSS [AMSSeppala02] economy, a Lucas-Stokey [LS83] economy plays an important role in the reverse-engineering calculation to be described below. 
For that reason, it is helpful to have readily available some key equations underlying a Ramsey plan for the Lucas-Stokey economy. Recall first-order conditions for a Ramsey allocation for the Lucas-Stokey economy. For $t \geq 1$, these take the form \begin{aligned} (1+\Phi) &u_c(c,1-c-g) + \Phi \bigl[c u_{cc}(c,1-c-g) - (c+g) u_{\ell c}(c,1-c-g) \bigr] \\ &= (1+\Phi) u_{\ell}(c,1-c-g) + \Phi \bigl[c u_{c\ell}(c,1-c-g) - (c+g) u_{\ell \ell}(c,1-c-g) \bigr] \end{aligned} \tag{1} There is one such equation for each value of the Markov state $s_t$. In addition, given an initial Markov state, the time $t=0$ quantities $c_0$ and $b_0$ satisfy \begin{aligned} (1+\Phi) &u_c(c,1-c-g) + \Phi \bigl[c u_{cc}(c,1-c-g) - (c+g) u_{\ell c}(c,1-c-g) \bigr] \\ &= (1+\Phi) u_{\ell}(c,1-c-g) + \Phi \bigl[c u_{c\ell}(c,1-c-g) - (c+g) u_{\ell \ell}(c,1-c-g) \bigr] + \Phi (u_{cc} - u_{c,\ell}) b_0 \end{aligned} \tag{2} In addition, the time $t=0$ budget constraint is satisfied at $c_0$ and initial government debt $b_0$: $$b_0 + g_0 = \tau_0 (c_0 + g_0) + \frac{\bar b}{R_0} \tag{3}$$ where $R_0$ is the gross interest rate for the Markov state $s_0$ that is assumed to prevail at time $t =0$ and $\tau_0$ is the time $t=0$ tax rate. In equation (3), it is understood that \begin{aligned} \tau_0 = 1 - \frac{u_{l,0}}{u_{c,0}} \\ R_0^{-1} = \beta \sum_{s=1}^S \Pi(s | s_0) \frac{u_c(s)}{u_{c,0}} \end{aligned} It is useful to transform some of the above equations to forms that are more natural for analyzing the case of a CRRA utility specification that we shall use in our example economies. ### Specification with CRRA Utility¶ As in lectures optimal taxation without state-contingent debt and optimal taxation with state-contingent debt, we assume that the representative agent has utility function $$u(c,n) = {\frac{c^{1-\sigma}}{1-\sigma}} - {\frac{n^{1+\gamma}}{1+\gamma}}$$ and set $\sigma = 2$, $\gamma = 2$, and the discount factor $\beta = 0.9$. 
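Before turning to code, it may help to see the formulas for $\tau_0$ and $R_0$ in action. The following sketch plugs illustrative consumption and labor values into those formulas under the CRRA specification; the numbers $c_0$, $n_0$, and the time-1 consumptions are made-up placeholders, not the lecture's equilibrium quantities:

```python
import numpy as np

β, σ, γ = 0.9, 2, 2               # preference parameters from the lecture
π_row = np.array([0.5, 0.5])      # transition probabilities out of state s0

# Hypothetical allocation values, for illustration only
c0, n0 = 0.6, 0.7                 # time-0 consumption and labor
c_next = np.array([0.60, 0.58])   # time-1 consumption in the two Markov states

uc0 = c0**(-σ)                    # u_{c,0} = c0^{-σ}
ul0 = n0**γ                       # u_{l,0} = n0^γ under the replacement u_l ~ -u_n
τ0 = 1 - ul0 / uc0                # time-0 labor tax rate

uc_next = c_next**(-σ)
R0 = uc0 / (β * π_row @ uc_next)  # gross risk-free rate: R0^{-1} = β Σ_s π(s|s0) u_c(s)/u_{c,0}

print(τ0, R0)
```

With these placeholder values the tax rate lands strictly between 0 and 1, and the gross interest rate lies between 1 and $1/\beta$, as one would expect with positive marginal utilities.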
We eliminate leisure from the model and continue to assume that $$c_t + g_t = n_t$$ The analysis of Lucas and Stokey prevails once we make the following replacements \begin{aligned} u_\ell(c, \ell) &\sim - u_n(c, n) \\ u_c(c,\ell) &\sim u_c(c,n) \\ u_{\ell,\ell}(c,\ell) &\sim u_{nn}(c,n) \\ u_{c,c}(c,\ell)& \sim u_{c,c}(c,n) \\ u_{c,\ell} (c,\ell) &\sim 0 \end{aligned} With these understandings, equations (1) and (2) simplify in the case of the CRRA utility function. They become $$(1+\Phi) [u_c(c) + u_n(c+g)] + \Phi[c u_{cc}(c) + (c+g) u_{nn}(c+g)] = 0 \tag{4}$$ and $$(1+\Phi) [u_c(c_0) + u_n(c_0+g_0)] + \Phi[c_0 u_{cc}(c_0) + (c_0+g_0) u_{nn}(c_0+g_0)] - \Phi u_{cc}(c_0) b_0 = 0 \tag{5}$$ In equation (4), it is understood that $c$ and $g$ are each functions of the Markov state $s$. The CRRA utility function is represented in the following class. In [3]:
import numpy as np

class CRRAutility:

    def __init__(self,
                 β=0.9,
                 σ=2,
                 γ=2,
                 π=0.5*np.ones((2, 2)),
                 G=np.array([0.1, 0.2]),
                 Θ=np.ones(2),
                 transfers=False):

        self.β, self.σ, self.γ = β, σ, γ
        self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

    # Utility function
    def U(self, c, n):
        σ = self.σ
        if σ == 1.:
            U = np.log(c)
        else:
            U = (c**(1 - σ) - 1) / (1 - σ)
        return U - n**(1 + self.γ) / (1 + self.γ)

    # Derivatives of utility function
    def Uc(self, c, n):
        return c**(-self.σ)

    def Ucc(self, c, n):
        return -self.σ * c**(-self.σ - 1)

    def Un(self, c, n):
        return -n**self.γ

    def Unn(self, c, n):
        return -self.γ * n**(self.γ - 1)
## Example Economy¶ We set the following parameter values. The Markov state $s_t$ takes two values, namely, $0,1$. The initial Markov state is $0$. The Markov transition matrix is $.5 I$ where $I$ is a $2 \times 2$ identity matrix, so the $s_t$ process is IID. Government expenditures $g(s)$ equal $.1$ in Markov state $0$ and $.2$ in Markov state $1$. 
We set preference parameters as follows: \begin{aligned} \beta & = .9 \cr \sigma & = 2 \cr \gamma & = 2 \end{aligned} Here are several classes that do most of the work for us. The code is mostly taken or adapted from the earlier lectures optimal taxation without state-contingent debt and optimal taxation with state-contingent debt. In [4]: import numpy as np from scipy.optimize import root from quantecon import MarkovChain class SequentialAllocation: ''' Class that takes CESutility or BGPutility object as input returns planner's allocation as a function of the multiplier on the implementability constraint μ. ''' def __init__(self, model): # Initialize from model object attributes self.β, self.π, self.G = model.β, model.π, model.G self.mc, self.Θ = MarkovChain(self.π), model.Θ self.S = len(model.π) # Number of states self.model = model # Find the first best allocation self.find_first_best() def find_first_best(self): ''' Find the first best allocation ''' model = self.model S, Θ, G = self.S, self.Θ, self.G Uc, Un = model.Uc, model.Un def res(z): c = z[:S] n = z[S:] return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G]) res = root(res, 0.5 * np.ones(2 * S)) if not res.success: raise Exception('Could not find first best') self.cFB = res.x[:S] self.nFB = res.x[S:] # Multiplier on the resource constraint self.ΞFB = Uc(self.cFB, self.nFB) self.zFB = np.hstack([self.cFB, self.nFB, self.ΞFB]) def time1_allocation(self, μ): ''' Computes optimal allocation for time t >= 1 for a given μ ''' model = self.model S, Θ, G = self.S, self.Θ, self.G Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn def FOC(z): c = z[:S] n = z[S:2 * S] Ξ = z[2 * S:] # FOC of c return np.hstack([Uc(c, n) - μ * (Ucc(c, n) * c + Uc(c, n)) - Ξ, Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) \ + Θ * Ξ, # FOC of n Θ * n - c - G]) # Find the root of the first-order condition res = root(FOC, self.zFB) if not res.success: raise Exception('Could not find LS allocation.') z = res.x c, n, Ξ = z[:S], 
z[S:2 * S], z[2 * S:] # Compute x I = Uc(c, n) * c + Un(c, n) * n x = np.linalg.solve(np.eye(S) - self.β * self.π, I) return c, n, x, Ξ def time0_allocation(self, B_, s_0): ''' Finds the optimal allocation given initial government debt B_ and state s_0 ''' model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn # First order conditions of planner's problem def FOC(z): μ, c, n, Ξ = z xprime = self.time1_allocation(μ)[2] return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0] @ xprime, Uc(c, n) - μ * (Ucc(c, n) * (c - B_) + Uc(c, n)) - Ξ, Un(c, n) - μ * (Unn(c, n) * n + Un(c, n)) + Θ[s_0] * Ξ, (Θ * n - c - G)[s_0]]) # Find root res = root(FOC, np.array( [0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]])) if not res.success: raise Exception('Could not find time 0 LS allocation.') return res.x def time1_value(self, μ): ''' Find the value associated with multiplier μ ''' c, n, x, Ξ = self.time1_allocation(μ) U = self.model.U(c, n) V = np.linalg.solve(np.eye(self.S) - self.β * self.π, U) return c, n, x, V def Τ(self, c, n): ''' Computes Τ given c, n ''' model = self.model Uc, Un = model.Uc(c, n), model.Un(c, n) return 1 + Un / (self.Θ * Uc) def simulate(self, B_, s_0, T, sHist=None): ''' Simulates planners policies for T periods ''' model, π, β = self.model, self.π, self.β Uc = model.Uc if sHist is None: sHist = self.mc.simulate(T, s_0) cHist, nHist, Bhist, ΤHist, μHist = np.zeros((5, T)) RHist = np.zeros(T - 1) # Time 0 μ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0) ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0] Bhist[0] = B_ μHist[0] = μ # Time 1 onward for t in range(1, T): c, n, x, Ξ = self.time1_allocation(μ) Τ = self.Τ(c, n) u_c = Uc(c, n) s = sHist[t] Eu_c = π[sHist[t - 1]] @ u_c cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x[s] / u_c[s], \ Τ[s] RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c) μHist[t] = μ return np.array([cHist, nHist, Bhist, ΤHist, sHist, μHist, 
RHist]) In [5]: import numpy as np from scipy.optimize import fmin_slsqp from scipy.optimize import root from quantecon import MarkovChain class RecursiveAllocationAMSS: def __init__(self, model, μgrid, tol_diff=1e-4, tol=1e-4): self.β, self.π, self.G = model.β, model.π, model.G self.mc, self.S = MarkovChain(self.π), len(model.π) # Number of states self.Θ, self.model, self.μgrid = model.Θ, model, μgrid self.tol_diff, self.tol = tol_diff, tol # Find the first best allocation self.solve_time1_bellman() self.T.time_0 = True # Bellman equation now solves time 0 problem def solve_time1_bellman(self): ''' Solve the time 1 Bellman equation for calibration model and initial grid μgrid0 ''' model, μgrid0 = self.model, self.μgrid π = model.π S = len(model.π) # First get initial fit from Lucas Stokey solution. 
# Need to change things to be ex ante pp = SequentialAllocation(model) interp = interpolator_factory(2, None) def incomplete_allocation(μ_, s_): c, n, x, V = pp.time1_value(μ_) return c, n, π[s_] @ x, π[s_] @ V cf, nf, xgrid, Vf, xprimef = [], [], [], [], [] for s_ in range(S): c, n, x, V = zip(*map(lambda μ: incomplete_allocation(μ, s_), μgrid0)) c, n = np.vstack(c).T, np.vstack(n).T x, V = np.hstack(x), np.hstack(V) xprimes = np.vstack([x] * S) cf.append(interp(x, c)) nf.append(interp(x, n)) Vf.append(interp(x, V)) xgrid.append(x) xprimef.append(interp(x, xprimes)) cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef) Vf = fun_hstack(Vf) policies = [cf, nf, xprimef] # Create xgrid x = np.vstack(xgrid).T xbar = [x.min(0).max(), x.max(0).min()] xgrid = np.linspace(xbar[0], xbar[1], len(μgrid0)) self.xgrid = xgrid # Now iterate on Bellman equation T = BellmanEquation(model, xgrid, policies, tol=self.tol) diff = 1 while diff > self.tol_diff: PF = T(Vf) Vfnew, policies = self.fit_policy_function(PF) diff = np.abs((Vf(xgrid) - Vfnew(xgrid)) / Vf(xgrid)).max() print(diff) Vf = Vfnew # Store value function policies and Bellman Equations self.Vf = Vf self.policies = policies self.T = T def fit_policy_function(self, PF): ''' Fits the policy functions ''' S, xgrid = len(self.π), self.xgrid interp = interpolator_factory(3, 0) cf, nf, xprimef, Tf, Vf = [], [], [], [], [] for s_ in range(S): PFvec = np.vstack([PF(x, s_) for x in self.xgrid]).T Vf.append(interp(xgrid, PFvec[0, :])) cf.append(interp(xgrid, PFvec[1:1 + S])) nf.append(interp(xgrid, PFvec[1 + S:1 + 2 * S])) xprimef.append(interp(xgrid, PFvec[1 + 2 * S:1 + 3 * S])) Tf.append(interp(xgrid, PFvec[1 + 3 * S:])) policies = fun_vstack(cf), fun_vstack( nf), fun_vstack(xprimef), fun_vstack(Tf) Vf = fun_hstack(Vf) return Vf, policies def Τ(self, c, n): ''' Computes Τ given c and n ''' model = self.model Uc, Un = model.Uc(c, n), model.Un(c, n) return 1 + Un / (self.Θ * Uc) def time0_allocation(self, B_, s0): 
''' Finds the optimal allocation given initial government debt B_ and state s_0 ''' PF = self.T(self.Vf) z0 = PF(B_, s0) c0, n0, xprime0, T0 = z0[1:] return c0, n0, xprime0, T0 def simulate(self, B_, s_0, T, sHist=None): ''' Simulates planners policies for T periods ''' model, π = self.model, self.π Uc = model.Uc cf, nf, xprimef, Tf = self.policies if sHist is None: sHist = simulate_markov(π, s_0, T) cHist, nHist, Bhist, xHist, ΤHist, THist, μHist = np.zeros((7, T)) # Time 0 cHist[0], nHist[0], xHist[0], THist[0] = self.time0_allocation(B_, s_0) ΤHist[0] = self.Τ(cHist[0], nHist[0])[s_0] Bhist[0] = B_ μHist[0] = self.Vf[s_0](xHist[0]) # Time 1 onward for t in range(1, T): s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t] c, n, xprime, T = cf[s_, :](x), nf[s_, :]( x), xprimef[s_, :](x), Tf[s_, :](x) Τ = self.Τ(c, n)[s] u_c = Uc(c, n) Eu_c = π[s_, :] @ u_c μHist[t] = self.Vf[s](xprime[s]) cHist[t], nHist[t], Bhist[t], ΤHist[t] = c[s], n[s], x / Eu_c, Τ xHist[t], THist[t] = xprime[s], T[s] return np.array([cHist, nHist, Bhist, ΤHist, THist, μHist, sHist, xHist]) class BellmanEquation: ''' Bellman equation for the continuation of the Lucas-Stokey Problem ''' def __init__(self, model, xgrid, policies0, tol, maxiter=1000): self.β, self.π, self.G = model.β, model.π, model.G self.S = len(model.π) # Number of states self.Θ, self.model, self.tol = model.Θ, model, tol self.maxiter = maxiter self.xbar = [min(xgrid), max(xgrid)] self.time_0 = False self.z0 = {} cf, nf, xprimef = policies0 for s_ in range(self.S): for x in xgrid: self.z0[x, s_] = np.hstack([cf[s_, :](x), nf[s_, :](x), xprimef[s_, :](x), np.zeros(self.S)]) self.find_first_best() def find_first_best(self): ''' Find the first best allocation ''' model = self.model S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G def res(z): c = z[:S] n = z[S:] return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G]) res = root(res, 0.5 * np.ones(2 * S)) if not res.success: raise Exception('Could not find first best') 
self.cFB = res.x[:S] self.nFB = res.x[S:] IFB = Uc(self.cFB, self.nFB) * self.cFB + \ Un(self.cFB, self.nFB) * self.nFB self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB) self.zFB = {} for s in range(S): self.zFB[s] = np.hstack( [self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.]) def __call__(self, Vf): ''' Given continuation value function next period return value function this period return T(V) and optimal policies ''' if not self.time_0: def PF(x, s): return self.get_policies_time1(x, s, Vf) else: def PF(B_, s0): return self.get_policies_time0(B_, s0, Vf) return PF def get_policies_time1(self, x, s_, Vf): ''' Finds the optimal policies ''' model, β, Θ, G, S, π = self.model, self.β, self.Θ, self.G, self.S, self.π U, Uc, Un = model.U, model.Uc, model.Un def objf(z): c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S] Vprime = np.empty(S) for s in range(S): Vprime[s] = Vf[s](xprime[s]) return -π[s_] @ (U(c, n) + β * Vprime) def objf_prime(x): epsilon = 1e-7 x0 = np.asfarray(x) f0 = np.atleast_1d(objf(x0)) jac = np.zeros([len(x0), len(f0)]) dx = np.zeros(len(x0)) for i in range(len(x0)): dx[i] = epsilon jac[i] = (objf(x0+dx) - f0)/epsilon dx[i] = 0.0 return jac.transpose() def cons(z): c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:] u_c = Uc(c, n) Eu_c = π[s_] @ u_c return np.hstack([ x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime, Θ * n - c - G]) if model.transfers: bounds = [(0., 100)] * S + [(0., 100)] * S + \ [self.xbar] * S + [(0., 100.)] * S else: bounds = [(0., 100)] * S + [(0., 100)] * S + \ [self.xbar] * S + [(0., 0.)] * S out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_], f_eqcons=cons, bounds=bounds, fprime=objf_prime, full_output=True, iprint=0, acc=self.tol, iter=self.maxiter) if imode > 0: raise Exception(smode) self.z0[x, s_] = out return np.hstack([-fx, out]) def get_policies_time0(self, B_, s0, Vf): ''' Finds the optimal policies ''' model, β, Θ, G = self.model, self.β, self.Θ, self.G U, Uc, Un = 
model.U, model.Uc, model.Un def objf(z): c, n, xprime = z[:-1] return -(U(c, n) + β * Vf[s0](xprime)) def cons(z): c, n, xprime, T = z return np.hstack([ -Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime, (Θ * n - c - G)[s0]]) if model.transfers: bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)] else: bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)] out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons, bounds=bounds, full_output=True, iprint=0) if imode > 0: raise Exception(smode) return np.hstack([-fx, out]) In [6]: import numpy as np from scipy.interpolate import UnivariateSpline class interpolate_wrapper: def __init__(self, F): self.F = F def __getitem__(self, index): return interpolate_wrapper(np.asarray(self.F[index])) def reshape(self, *args): self.F = self.F.reshape(*args) return self def transpose(self): self.F = self.F.transpose() def __len__(self): return len(self.F) def __call__(self, xvec): x = np.atleast_1d(xvec) shape = self.F.shape if len(x) == 1: fhat = np.hstack([f(x) for f in self.F.flatten()]) return fhat.reshape(shape) else: fhat = np.vstack([f(x) for f in self.F.flatten()]) return fhat.reshape(np.hstack((shape, len(x)))) class interpolator_factory: def __init__(self, k, s): self.k, self.s = k, s def __call__(self, xgrid, Fs): shape, m = Fs.shape[:-1], Fs.shape[-1] Fs = Fs.reshape((-1, m)) F = [] xgrid = np.sort(xgrid) # Sort xgrid for Fhat in Fs: F.append(UnivariateSpline(xgrid, Fhat, k=self.k, s=self.s)) return interpolate_wrapper(np.array(F).reshape(shape)) def fun_vstack(fun_list): Fs = [IW.F for IW in fun_list] return interpolate_wrapper(np.vstack(Fs)) def fun_hstack(fun_list): Fs = [IW.F for IW in fun_list] return interpolate_wrapper(np.hstack(Fs)) def simulate_markov(π, s_0, T): sHist = np.empty(T, dtype=int) sHist[0] = s_0 S = len(π) for t in range(1, T): sHist[t] = np.random.choice(np.arange(S), p=π[sHist[t - 1]]) return sHist ## Reverse Engineering Strategy¶ We can reverse engineer a value $b_0$ of 
initial debt due that renders the AMSS measurability constraints not binding from time $t = 0$ onward.

We accomplish this by recognizing that if the AMSS measurability constraints never bind, then the AMSS allocation and Ramsey plan are equivalent to those for a Lucas-Stokey economy in which, for each period $t \geq 0$, the government promises to pay the same state-contingent amount $\bar b$ in each state tomorrow.

This insight tells us to find a $b_0$ and other fundamentals for the Lucas-Stokey [LS83] model that make the Ramsey planner want to borrow the same value $\bar b$ next period for all states and all dates.

We accomplish this by using various equations for the Lucas-Stokey [LS83] model presented in optimal taxation with state-contingent debt.

We use the following steps.

Step 1: Pick an initial $\Phi$.

Step 2: Given that $\Phi$, jointly solve two versions of equation (4) for $c(s), s = 1, 2$ associated with the two values for $g(s), s = 1, 2$.

Step 3: Solve the following equation for $\vec x$

$$\vec x = (I - \beta \Pi)^{-1} [\vec u_c (\vec n - \vec g) - \vec u_l \vec n] \tag{6}$$

Step 4: After solving for $\vec x$, we can find $b(s_t|s^{t-1})$ in Markov state $s_t = s$ from $b(s) = \frac{x(s)}{u_c(s)}$ or the matrix equation

$$\vec b = \frac{\vec x}{\vec u_c} \tag{7}$$

Step 5: Compute $J(\Phi) = (b(1) - b(2))^2$.

Step 6: Put steps 2 through 5 in a function minimizer and find a $\Phi$ that minimizes $J(\Phi)$.

Step 7: At the value of $\Phi$ and the value of $\bar b$ that emerged from step 6, solve equations (5) and (3) jointly for $c_0, b_0$.

## Code for Reverse Engineering

Here is code to do the calculations for us.
In [7]:

u = CRRAutility()

def min_Φ(Φ):

    g1, g2 = u.G  # Government spending in s=0 and s=1

    # Solve Φ(c)
    def equations(unknowns, Φ):
        c1, c2 = unknowns
        # First argument of .Uc and second argument of .Un are redundant

        # Set up simultaneous equations
        eq = lambda c, g: (1 + Φ) * (u.Uc(c, 1) - -u.Un(1, c + g)) + \
            Φ * ((c + g) * u.Unn(1, c + g) + c * u.Ucc(c, 1))

        # Return equation evaluated at s=1 and s=2
        return np.array([eq(c1, g1), eq(c2, g2)]).flatten()

    global c1  # Update c1 globally
    global c2  # Update c2 globally

    c1, c2 = fsolve(equations, np.ones(2), args=(Φ))

    uc = u.Uc(np.array([c1, c2]), 1)  # uc(n - g)
    # ul(n) = -un(c + g)
    ul = -u.Un(1, np.array([c1 + g1, c2 + g2])) * [c1 + g1, c2 + g2]
    # Solve for x
    x = np.linalg.solve(np.eye((2)) - u.β * u.π, uc * [c1, c2] - ul)

    global b  # Update b globally
    b = x / uc

    loss = (b[0] - b[1])**2
    return loss

Φ_star = fmin(min_Φ, .1, ftol=1e-14)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 24
         Function evaluations: 48

To recover and print out $\bar b$

In [8]:

b_bar = b[0]
b_bar

Out[8]:

-1.0757576567504166

To complete the reverse engineering exercise by jointly determining $c_0, b_0$, we set up a function that returns two simultaneous equations.
In [9]:

def solve_cb(unknowns, Φ, b_bar, s=1):

    c0, b0 = unknowns

    g0 = u.G[s-1]

    R_0 = u.β * u.π[s] @ [u.Uc(c1, 1) / u.Uc(c0, 1),
                          u.Uc(c2, 1) / u.Uc(c0, 1)]
    R_0 = 1 / R_0

    τ_0 = 1 + u.Un(1, c0 + g0) / u.Uc(c0, 1)

    eq1 = τ_0 * (c0 + g0) + b_bar / R_0 - b0 - g0
    eq2 = (1 + Φ) * (u.Uc(c0, 1) + u.Un(1, c0 + g0)) \
        + Φ * (c0 * u.Ucc(c0, 1) + (c0 + g0) * u.Unn(1, c0 + g0)) \
        - Φ * u.Ucc(c0, 1) * b0

    return np.array([eq1, eq2], dtype='float64')

To solve the equations for $c_0, b_0$, we use SciPy's fsolve function

In [10]:

c0, b0 = fsolve(solve_cb, np.array([1., -1.], dtype='float64'),
                args=(Φ_star, b[0], 1), xtol=1.0e-12)
c0, b0

Out[10]:

(0.9344994030900681, -1.0386984075517638)

Thus, we have reverse engineered an initial $b_0 = -1.038698407551764$ that ought to render the AMSS measurability constraints slack.

## Short Simulation for Reverse-Engineered Initial Debt

The following graph shows simulations of outcomes for both a Lucas-Stokey economy and for an AMSS economy starting from initial government debt equal to $b_0 = -1.038698407551764$.

These graphs report outcomes for both the Lucas-Stokey economy with complete markets and the AMSS economy with one-period risk-free debt only.
In [11]:

μ_grid = np.linspace(-0.09, 0.1, 100)

log_example = CRRAutility()
log_example.transfers = True                        # Government can use transfers
log_sequential = SequentialAllocation(log_example)  # Solve sequential problem
log_bellman = RecursiveAllocationAMSS(log_example, μ_grid,
                                      tol_diff=1e-10, tol=1e-10)

T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
                  0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

sim_seq = log_sequential.simulate(-1.03869841, 0, T, sHist)
sim_bel = log_bellman.simulate(-1.03869841, 0, T, sHist)

titles = ['Consumption', 'Labor Supply', 'Government Debt',
          'Tax Rate', 'Government Spending', 'Output']

# Government spending paths
sim_seq[4] = log_example.G[sHist]
sim_bel[4] = log_example.G[sHist]

# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]

fig, axes = plt.subplots(3, 2, figsize=(14, 10))

for ax, title, seq, bel in zip(axes.flatten(), titles, sim_seq, sim_bel):
    ax.plot(seq, '-ok', bel, '-^b')
    ax.set(title=title)
    ax.grid()

axes[0, 0].legend(('Complete Markets', 'Incomplete Markets'))
plt.tight_layout()
plt.show()

/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:24: RuntimeWarning: divide by zero encountered in reciprocal
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:29: RuntimeWarning: divide by zero encountered in power
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:249: RuntimeWarning: invalid value encountered in true_divide
/home/ubuntu/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:249: RuntimeWarning: invalid value encountered in multiply

0.04094445433233349
0.0016732111461332116
0.001484674847941
...
1.0616375383750133e-10
9.528098551331463e-11

The Ramsey allocations and Ramsey outcomes are identical for the Lucas-Stokey and AMSS economies.

This outcome confirms the success of our reverse-engineering exercise.

Notice how for $t \geq 1$, the tax rate is constant, as is the par value of government debt, while output and labor supply are both nontrivial time-invariant functions of the Markov state.

## Long Simulation

The following graph shows the par value of government debt and the flat-rate tax on labor income for a long simulation of our sample economy.

For the same realization of a government expenditure path, the graph reports outcomes for two economies

• the gray lines are for the Lucas-Stokey economy with complete markets
• the blue lines are for the AMSS economy with risk-free one-period debt only

For both economies, initial government debt due at time $0$ is $b_0 = .5$.
For the Lucas-Stokey complete markets economy, the government debt plotted is $b_{t+1}(s_{t+1})$.

• Notice that this is a time-invariant function of the Markov state from the beginning.

For the AMSS incomplete markets economy, the government debt plotted is $b_{t+1}(s^t)$.

• Notice that this is a martingale-like random process that eventually seems to converge to a constant $\bar b \approx -1.07$.
• Notice that the limiting value $\bar b < 0$ so that asymptotically the government makes a constant level of risk-free loans to the public.
• In the simulation displayed, as well as in other simulations we have run, the par value of government debt converges to about $1.07$ in absolute value after between 1400 and 2000 periods.

For the AMSS incomplete markets economy, the marginal tax rate on labor income $\tau_t$ converges to a constant

• labor supply and output each converge to time-invariant functions of the Markov state

In [12]:

T = 2000  # Set T to 2000 periods

sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)

titles = ['Government Debt', 'Tax Rate']

fig, axes = plt.subplots(2, 1, figsize=(14, 10))

for ax, title, id in zip(axes.flatten(), titles, [2, 3]):
    ax.plot(sim_seq_long[id], '-k', sim_bel_long[id], '-.b', alpha=0.5)
    ax.set(title=title)
    ax.grid()

axes[0].legend(('Complete Markets', 'Incomplete Markets'))
plt.tight_layout()
plt.show()

### Remarks about Long Simulation

As remarked above, after $b_{t+1}(s^t)$ has converged to a constant, the measurability constraints in the AMSS model cease to bind

• the associated Lagrange multipliers on those implementability constraints converge to zero

This leads us to seek an initial value of government debt $b_0$ that renders the measurability constraints slack from time $t = 0$ onward

• a tell-tale sign of this situation is that the Ramsey planner in a corresponding Lucas-Stokey economy would instruct the government to issue a constant level of government debt
$b_{t+1}(s_{t+1})$ across the two Markov states.

We now describe how to find such an initial level of government debt.

## BEGS Approximations of Limiting Debt and Convergence Rate

It is useful to link the outcome of our reverse engineering exercise to limiting approximations constructed by [BEGS17].

[BEGS17] used a slightly different notation to represent a generalization of the AMSS model.

We'll introduce a version of their notation so that readers can quickly relate notation that appears in their key formulas to the notation that we have used.

BEGS work with objects $B_t, {\mathcal B}_t, {\mathcal R}_t, {\mathcal X}_t$ that are related to our notation by

\begin{aligned} {\mathcal R}_t & = \frac{u_{c,t}}{u_{c,t-1}} R_{t-1} = \frac{u_{c,t}}{\beta E_{t-1} u_{c,t}} \\ B_t & = \frac{b_{t+1}(s^t)}{R_t(s^t)} \\ b_t(s^{t-1}) & = {\mathcal R}_{t-1} B_{t-1} \\ {\mathcal B}_t & = u_{c,t} B_t = (\beta E_t u_{c,t+1}) b_{t+1}(s^t) \\ {\mathcal X}_t & = u_{c,t} [g_t - \tau_t n_t] \end{aligned}

In terms of their notation, equation (44) of [BEGS17] expresses the time $t$ state $s$ government budget constraint as

$${\mathcal B}(s) = {\mathcal R}_\tau(s, s_{-}) {\mathcal B}_{-} + {\mathcal X}_{\tau(s)}(s) \tag{8}$$

where the dependence on $\tau$ is to remind us that these objects depend on the tax rate, and $s_{-}$ is last period's Markov state.

BEGS interpret random variations in the right side of (8) as a measure of fiscal risk composed of

• interest-rate-driven fluctuations in time $t$ effective payments due on the government portfolio, namely, ${\mathcal R}_\tau(s, s_{-}) {\mathcal B}_{-}$, and
• fluctuations in the effective government deficit ${\mathcal X}_t$

### Asymptotic Mean

BEGS give conditions under which the ergodic mean of ${\mathcal B}_t$ is

$${\mathcal B}^* = - \frac{\rm cov^{\infty}(\mathcal R, \mathcal X)}{\rm var^{\infty}(\mathcal R)} \tag{9}$$

where the superscript $\infty$ denotes a moment taken with respect to an ergodic distribution.
Formula (9) presents ${\mathcal B}^*$ as a regression coefficient of ${\mathcal X}_t$ on ${\mathcal R}_t$ in the ergodic distribution.

This regression coefficient emerges as the minimizer for a variance-minimization problem:

$${\mathcal B}^* = {\rm argmin}_{\mathcal B} {\rm var} ({\mathcal R} {\mathcal B} + {\mathcal X}) \tag{10}$$

The minimand in criterion (10) is the measure of fiscal risk associated with a given tax-debt policy that appears on the right side of equation (8).

Expressing formula (9) in terms of our notation tells us that $\bar b$ should approximately equal

$$\hat b = \frac{\mathcal B^*}{\beta E_t u_{c,t+1}} \tag{11}$$

### Rate of Convergence

BEGS also derive the following approximation to the rate of convergence to ${\mathcal B}^{*}$ from an arbitrary initial condition.

$$\frac{ E_t ( {\mathcal B}_{t+1} - {\mathcal B}^{*} )} { ( {\mathcal B}_{t} - {\mathcal B}^{*} )} \approx \frac{1}{1 + \beta^2 {\rm var} ({\mathcal R} )} \tag{12}$$

(See the equation above equation (47) in [BEGS17].)

### Formulas and Code Details

For our example, we describe some code that we use to compute the steady state mean and the rate of convergence to it.

The values of $\pi(s)$ are 0.5, 0.5.

We can then construct ${\mathcal X}(s), {\mathcal R}(s), u_c(s)$ for our two states using the definitions above.

We can then construct $\beta E_{t-1} u_c = \beta \sum_s u_c(s) \pi(s)$, ${\rm cov}({\mathcal R}(s), \mathcal{X}(s))$ and ${\rm var}({\mathcal R}(s))$ to be plugged into formula (11).

We also want to compute ${\rm var}({\mathcal X})$.

To compute the variances and covariance, we use the following standard formulas.

Temporarily let $x(s), s = 1, 2$ be arbitrary random variables.
Then we define

\begin{aligned} \mu_x & = \sum_s x(s) \pi(s) \\ {\rm var}(x) &= \left(\sum_s x(s)^2 \pi(s) \right) - \mu_x^2 \\ {\rm cov}(x,y) & = \left(\sum_s x(s) y(s) \pi(s) \right) - \mu_x \mu_y \end{aligned}

After we compute these moments, we compute the BEGS approximation to the asymptotic mean $\hat b$ in formula (11).

After that, we move on to compute ${\mathcal B}^*$ in formula (9).

We'll also evaluate the BEGS criterion (8) at the limiting value ${\mathcal B}^*$

$$J ( {\mathcal B}^*)= {\rm var}(\mathcal{R}) \left( {\mathcal B}^* \right)^2 + 2 {\mathcal B}^* {\rm cov}(\mathcal{R},\mathcal{X}) + {\rm var}(\mathcal X) \tag{13}$$

Here are some functions that we'll use to compute key objects that we want

In [13]:

def mean(x):
    '''Returns mean for x given initial state'''
    x = np.array(x)
    return x @ u.π[s]

def variance(x):
    x = np.array(x)
    return x**2 @ u.π[s] - mean(x)**2

def covariance(x, y):
    x, y = np.array(x), np.array(y)
    return x * y @ u.π[s] - mean(x) * mean(y)

Now let's form the two random variables ${\mathcal R}, {\mathcal X}$ appearing in the BEGS approximating formulas

In [14]:

u = CRRAutility()

s = 0
c = [0.940580824225584, 0.8943592757759343]  # Vector for c
g = u.G       # Vector for g
n = c + g     # Vector for n (labor supply)

τ = lambda s: 1 + u.Un(1, n[s]) / u.Uc(c[s], 1)

R_s = lambda s: u.Uc(c[s], n[s]) / (u.β * (u.Uc(c[0], n[0]) * u.π[0, 0]
                                           + u.Uc(c[1], n[1]) * u.π[1, 0]))
X_s = lambda s: u.Uc(c[s], n[s]) * (g[s] - τ(s) * n[s])

R = [R_s(0), R_s(1)]
X = [X_s(0), X_s(1)]

print(f"R, X = {R}, {X}")

R, X = [1.055169547122964, 1.1670526750992583], [0.06357685646224803, 0.19251010100512958]

Now let's compute the ingredients of the approximating limit and the approximating rate of convergence

In [15]:

bstar = -covariance(R, X) / variance(R)
div = u.β * (u.Uc(c[0], n[0]) * u.π[s, 0] + u.Uc(c[1], n[1]) * u.π[s, 1])
bhat = bstar / div
bhat

Out[15]:

-1.0757585378303758

Print out $\hat b$ and $\bar b$

In [16]:

bhat, b_bar

Out[16]:

(-1.0757585378303758, -1.0757576567504166)
So we have

In [17]:

bhat - b_bar

Out[17]:

-8.810799592140484e-07

These outcomes show that $\hat b$ does a remarkably good job of approximating $\bar b$.

Next, let's compute the BEGS fiscal criterion that $\hat b$ is minimizing

In [18]:

Jmin = variance(R) * bstar**2 + 2 * bstar * covariance(R, X) + variance(X)
Jmin

Out[18]:

-9.020562075079397e-17

This is machine zero, a verification that $\hat b$ succeeds in minimizing the nonnegative fiscal cost criterion $J ( {\mathcal B}^*)$ defined in BEGS and in equation (13) above.

Let's push our luck and compute the mean reversion speed in the formula above equation (47) in [BEGS17].

In [19]:

den2 = 1 + (u.β**2) * variance(R)
speedrever = 1 / den2
print(f'Mean reversion speed = {speedrever}')

Mean reversion speed = 0.9974715478249827

Now let's compute the implied mean time to get to within 0.01 of the limit

In [20]:

ttime = np.log(.01) / np.log(speedrever)
print(f"Time to get within .01 of limit = {ttime}")

Time to get within .01 of limit = 1819.0360880098472

The slow rate of convergence and the implied time of getting within one percent of the limiting value do a good job of approximating our long simulation above.
# $\pi^+ - \pi^-$ Asymmetry and the Neutron Skin in Heavy Nuclei

In heavy nuclei the spatial distribution of protons and neutrons is different. At CERN SPS energies, production of $\pi^+$ and $\pi^-$ differs for $pp$, $pn$, $np$ and $nn$ scattering. These two facts lead to an impact-parameter dependence of the $\pi^+$ to $\pi^-$ ratio in $^{208}Pb + ^{208}Pb$ collisions. A recent experimen

Author: Antoni Szczurek; Piotr Pawlowski
Source: https://archive.org/
# XOR: Is it possible to get $$a$$ and $$b$$ if I have $$a \oplus b$$ and $$a \times b$$?

Intuitively I would say yes, but I can't find a way to prove it. I tried with small values, and brute-forcing shows that there seems to be only one solution for a given tuple. For example $$(1,72)$$ has only $$(8,9)$$ as valid $$a$$ and $$b$$ values. Is there a way to do this mathematically?

• Welcome to Puzzling.SE! For the uninitiated in the room (such as me), would you mind explaining what precisely is meant by the little symbol that looks like a Phillips head screw? Thanks, and have a great day! – Brandon_J May 4 '19 at 17:28
• It's bitwise addition modulo 2, @Brandon_J – El-Guest May 4 '19 at 17:30
• Thanks for letting me know, @El-Guest. – Brandon_J May 4 '19 at 17:30
• So essentially, I convert the number to binary (for example, 3 --> 00011, 24 --> 11000), then perform the XOR operation on each pair of bits, and then convert back to a normal number (27)? – Brandon_J May 4 '19 at 17:54
• Not sure this is a puzzle, but I'm glad you got your answer. (Please don't forget to $\color{green}{\checkmark \small\text{Accept}}$ it!) – Rubio May 5 '19 at 4:34

By counterexample, the $$a, b$$ pair is clearly not unique. The pairs $$(5,9)$$ and $$(3,15)$$ both multiply to $$45$$ and XOR to $$12$$:

$$5 \oplus 9 = 12$$, $$5 \times 9 = 45$$

$$3 \oplus 15 = 12$$, $$3 \times 15 = 45$$
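The brute-force search the question alludes to can be sketched in a few lines of Python; the search bound of 64 is an arbitrary choice for illustration. It reproduces both the questioner's unique example and the counterexample above.

```python
# Brute-force sketch: group all pairs (a, b) with a <= b by the
# signature (a XOR b, a * b) and inspect collisions.
from collections import defaultdict

pairs_by_signature = defaultdict(list)
for a in range(1, 64):
    for b in range(a, 64):
        pairs_by_signature[(a ^ b, a * b)].append((a, b))

print(pairs_by_signature[(1, 72)])   # [(8, 9)], unique, as in the question
print(pairs_by_signature[(12, 45)])  # [(3, 15), (5, 9)], the counterexample
```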
# What is meant by the "mole"? How is it useful in chemical calculations?

Mar 7, 2016

If I have a mole of stuff, there are $6.022 \times 10^{23}$ individual items of that stuff.

So if you have a dozen eggs, that is $\frac{12 \cdot eggs}{6.022 \times 10^{23} \cdot eggs \cdot mol^{-1}} \approx 2 \times 10^{-23}$ moles of eggs. Now obviously this is a ridiculously small number, and of no use to any calculation.

However, if I have gram quantities of elements or compounds, this is a useful exercise, because in $12.00 \cdot g$ of $^{12}C$ there are $6.022 \times 10^{23}$ $^{12}C$ atoms, $\text{1 mole}$ of $^{12}C$ atoms.

We could repeat this with the molar mass of every element in the Periodic Table, and you will always have access to one of these in every test in Chemistry and Physics you ever sit.
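The gram-to-atoms bookkeeping described above is easy to script; here is a minimal Python sketch (the function name and the rounded Avogadro constant are my own choices):

```python
# Minimal sketch of the mole bookkeeping: grams -> moles -> particles.
AVOGADRO = 6.022e23   # particles per mole (rounded, as in the answer)

def atoms_from_grams(mass_g, molar_mass_g_per_mol):
    """Number of atoms in mass_g grams of an element."""
    moles = mass_g / molar_mass_g_per_mol
    return moles * AVOGADRO

# 12.00 g of carbon-12 (molar mass 12.00 g/mol) is exactly one mole of atoms:
print(atoms_from_grams(12.00, 12.00))   # 6.022e+23
```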
# 7.24

Calculate

a) $\Delta G^{0}$ for the formation of NO2 from NO and O2 at 298 K

$NO(g) + \frac{1}{2}O_{2}(g) \rightleftharpoons NO_{2}(g)$

where

$\Delta_{f}G^{0}\left[ NO_{2} \right] = 52.0 \, kJ/mol$

$\Delta_{f}G^{0}\left[ NO \right] = 87.0 \, kJ/mol$

$\Delta_{f}G^{0}\left[ O_{2} \right] = 0 \, kJ/mol$

For the given reaction,

$\Delta G^0 = \Delta_{f}G^0(\text{products}) - \Delta_{f}G^0(\text{reactants}) = 52.0 - \left(87.0 + \tfrac{1}{2} \times 0\right) = -35 \, kJ/mol$
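The arithmetic above is just "products minus reactants," weighted by stoichiometric coefficients, which can be sketched as:

```python
# Sketch of the ΔG° bookkeeping: sum over products minus sum over
# reactants, each weighted by its stoichiometric coefficient.
dG_f = {'NO2': 52.0, 'NO': 87.0, 'O2': 0.0}   # kJ/mol, from the problem

products = {'NO2': 1.0}
reactants = {'NO': 1.0, 'O2': 0.5}

dG_rxn = sum(n * dG_f[sp] for sp, n in products.items()) \
    - sum(n * dG_f[sp] for sp, n in reactants.items())

print(dG_rxn)   # -35.0 (kJ/mol)
```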
Publication

Title: Precision measurement of the X(3872) mass in $J/\psi\pi^{+}\pi^{-}$ decays
Author: CDF Collaboration

Abstract: We present an analysis of the mass of the X(3872) reconstructed via its decay to J/ψπ+π- using 2.4 fb-1 of integrated luminosity from pp̅ collisions at √s=1.96 TeV, collected with the CDF II detector at the Fermilab Tevatron. The possible existence of two nearby mass states is investigated. Within the limits of our experimental resolution the data are consistent with a single state, and having no evidence for two states we set upper limits on the mass difference between two hypothetical states for different assumed ratios of contributions to the observed peak. For equal contributions, the 95% confidence level upper limit on the mass difference is 3.6 MeV/c2. Under the single-state model the X(3872) mass is measured to be 3871.61±0.16(stat)±0.19(syst) MeV/c2, which is the most precise determination to date.

Language: English
Source (journal): Physical Review Letters, New York, N.Y., 2009
ISSN: 0031-9007
Volume/pages: 103:15(2009), p. 152001,1-152001,8
ISI: 000270672100011
# In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?

Under a standard gaussian distribution (mean 0 and variance 1), the kurtosis is $3$. Compared to a heavy-tailed distribution, is the kurtosis normally larger or smaller?

### I. A direct answer to the OP

Answer: It depends on what you mean by “heavy tails.” By some definitions of “heavy tails,” the answer is “no,” as pointed out here and elsewhere.

Why do we care about heavy tails? Because we care about outliers (substitute the phrase “rare, extreme observation” if you have a problem with the word “outlier”; however, I will use the term “outlier” throughout for brevity). Outliers are interesting from several points of view:

In finance, outlier returns cause much more money to change hands than typical returns (see Taleb's discussion of black swans).

In hydrology, the outlier flood will cause enormous damage and needs to be planned for.

In statistical process control, outliers indicate “out of control” conditions that warrant immediate investigation and rectification.

In regression analysis, outliers have enormous effects on the least squares fit.

In statistical inference, the degree to which distributions produce outliers has an enormous effect on standard t tests for mean values. Similarly, the degree to which a distribution produces outliers has an enormous effect on the accuracy of the usual estimate of the variance of that distribution.

So for various reasons, there is a great interest in outliers in data, and in the degree to which a distribution produces outliers. Notions of heavy-tailedness were therefore developed to characterize outlier-prone processes and data.
Unfortunately, the commonly-used definition of “heavy tails” involving exponential bounds and asymptotes is too limited in its characterization of outliers and outlier-prone data generating processes: It requires tails extending to infinity, so it rules out bounded distributions that produce outliers. Further, the standard definition does not even apply to a data set, since all empirical distributions are necessarily bounded.

Here is an alternative class of definitions of “heavy-tailedness,” which I will call “tail-leverage($m$)” to avoid confusion with existing definitions of heavy-tailedness, that addresses this concern.

Definition: Assume absolute moments up to order $m > 2$ exist for random variables $X$ and $Y$. Let $U = |(X - \mu_X)/\sigma_X|^m$ and let $V = |(Y - \mu_Y)/\sigma_Y|^m$. If $E(V) > E(U)$, then $Y$ is said to have greater tail-leverage($m$) than $X$.

The mathematical rationale for the definition is as follows: Suppose $E(V) > E(U)$, and let $\mu_U = E(U)$. Draw the pdf (or pmf, in the discrete case, or in the case of an actual data set) of $V$, which is $p_V(v)$. Place a fulcrum at $\mu_U$ on the horizontal axis. Because of the well-known fact that the distribution balances at its mean, the distribution $p_V(v)$ “falls to the right” of the fulcrum at $\mu_U$.

Now, what causes it to “fall to the right”? Is it the concentration of mass less than 1, corresponding to the observations of $Y$ that are within a standard deviation of the mean? Is it the shape of the distribution of $Y$ corresponding to observations that are within a standard deviation of the mean? No, these aspects are to the left of the fulcrum, not to the right.
It is the extremes of the distribution (or data) of $Y$, in one or both tails, that produce high positive values of $V$, which cause the “falling to the right.” BTW, the term “leverage” should now be clear, given the physical representation involving the fulcrum. But it is worth noting that, in the characterization of the distribution “falling to the right,” the “tail leverage” measures can legitimately be called measures of “tail weight.” I chose not to do that because the “leverage” term is more precise.

Much has been made of the fact that kurtosis does not correspond directly to the standard definition of “heavy tails.” Of course it doesn't. Neither does it correspond to any but one of the infinitely many definitions of “tail leverage” I just gave. If you restrict your attention to the case where $m = 4$, then an answer to the OP's question is as follows: Greater tail leverage (using $m = 4$ in the definition) does indeed imply greater kurtosis (and conversely). They are identical.

Incidentally, the “leverage” definition applies equally to data as it does to distributions: When you apply the kurtosis formula to the empirical distribution, it gives you the estimate of kurtosis without all the so-called “bias corrections.” (This estimate has been compared to others and is reasonable, often better in terms of accuracy; see “Comparing Measures of Sample Skewness and Kurtosis,” D. N. Joanes and C. A. Gill, Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 47, No. 1 (1998), pp. 183-189.)

My stated leverage definition also resolves many of the various comments and answers given in response to the OP: Some beta distributions can be more greatly tail-leveraged (even if “thin-tailed” by other measures) than the normal distribution.
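A quick numerical check of the beta claim (the parameters Beta(0.5, 10) are my own choice for illustration, not from the paper): tail-leverage(4) is just plain kurtosis, $E[((X-\mu)/\sigma)^4]$, and SciPy can compute it exactly.

```python
# A bounded, skewed Beta distribution can have greater tail-leverage(4),
# i.e. greater kurtosis, than the normal distribution.
from scipy import stats

# scipy reports *excess* kurtosis, so add 3 back to get E[Z^4].
beta_kurt = float(stats.beta.stats(0.5, 10.0, moments='k')) + 3
normal_kurt = float(stats.norm.stats(moments='k')) + 3

print(round(beta_kurt, 2))    # about 9.9, despite the bounded support
print(round(normal_kurt, 2))  # 3.0
```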
This implies a greater outlier potential of such distributions than the normal, as described above regarding leverage and the fulcrum, despite the normal distribution having infinite tails and the beta being bounded. Further, uniforms mixed with classical “heavy-tailed” distributions are still “heavy-tailed,” but can have less tail leverage than the normal distribution, provided the mixing probability on the “heavy tailed” distribution is sufficiently low so that the extremes are very uncommon, and assuming finite moments. Tail leverage is simply a measure of the extremes (or outliers). It differs from the classic definition of heavy-tailedness, even though it is arguably a viable competitor. It is not perfect; a notable flaw is that it requires finite moments, so quantile-based versions would be useful as well. Such alternative definitions are needed because the classic definition of “heavy tails” is far too limited to characterize the universe of outlier-prone data-generating processes and their resulting data. ### II. My paper in The American Statistician My purpose in writing the paper “Kurtosis as Peakedness, 1905-2014: R.I.P.” was to help people answer the question, “What does higher (or lower) kurtosis tell me about my distribution (or data)?” I suspected the common interpretations (still seen, by the way), “higher kurtosis implies more peaked, lower kurtosis implies more flat” were wrong, but could not quite put my finger on the reason. And, I even wondered that maybe they had an element of truth, given that Pearson said it, and even more compelling, that R.A. Fisher repeated it in all revisions of his famous book. However, I was not able to connect any math to the statement that higher (lower) kurtosis implied greater peakedness (flatness). All the inequalities went in the wrong direction. Then I hit on the main theorem of my paper. 
Contrary to what has been stated or implied here and elsewhere, my article was not an “opinion” piece; rather, it was a discussion of three mathematical theorems. Yes, The American Statistician (TAS) does often require mathematical proofs. I would not have been able to publish the paper without them. The following three theorems were proven in my paper, although only the second was listed formally as a “Theorem.”

Main Theorem: Let $Z_X = (X - \mu_X)/\sigma_X$ and let $\kappa(X) = E(Z_X^4)$ denote the kurtosis of $X$. Then for any distribution (discrete, continuous or mixed, which includes actual data via their discrete empirical distribution), $E\{Z_X^4 I(|Z_X| > 1)\} \le \kappa(X) \le E\{Z_X^4 I(|Z_X| > 1)\} + 1$.

This is a rather trivial theorem to prove but has major consequences: It states that the shape of the distribution within a standard deviation of the mean (which ordinarily would be where the “peak” is thought to be located) contributes very little to the kurtosis. Instead, the theorem implies that for all data and distributions, kurtosis must lie within $\pm 0.5$ of $E\{Z_X^4 I(|Z_X| > 1)\} + 0.5$. A very nice visual image of this theorem by user “kjetil b Halvorsen” is given at https://stats.stackexchange.com/a/362745/102879; see my comment that follows as well.

The bound is sharpened in the Appendix of my TAS paper:

Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on $[0,1]$. Then the “+1” of the main theorem can be sharpened to “+0.5”.

This simply amplifies the point of the main theorem that kurtosis is mostly determined by the tails.

More recently, @sextus-empiricus was able to reduce the “$+0.5$” bound to “$+1/3$”; see https://math.stackexchange.com/a/3781761 .
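The main theorem is easy to check numerically; here is a sketch using a simulated heavy-tailed sample (the t(5) choice is mine, purely for illustration):

```python
# Numerical check of the main theorem's sandwich bound on a sample.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=100_000)       # a moderately heavy-tailed sample

z = (x - x.mean()) / x.std()                 # standardize, no bias corrections
kurt = np.mean(z**4)                         # empirical kurtosis
tail_part = np.mean(z**4 * (np.abs(z) > 1))  # E{Z^4 I(|Z| > 1)}

# Main theorem: tail_part <= kurt <= tail_part + 1, for any data whatsoever
print(tail_part <= kurt <= tail_part + 1)    # True
```

The bound holds by construction: the portion of $E(Z^4)$ coming from $|Z| \le 1$ is trapped between 0 and 1 because $Z^4 \le 1$ there.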
A third theorem proven in my TAS paper states that large kurtosis is mostly determined by (potential) data that are $$b$$ standard deviations away from the mean, for arbitrary $$b$$.

Theorem 3: Consider a sequence of random variables $$X_i$$, $$i = 1,2,\dots$$, for which $$\kappa(X_i) \rightarrow \infty$$. Then $$E\{Z_i^4 I(|Z_i| > b)\}/\kappa(X_i) \rightarrow 1$$, for each $$b > 0$$.

The third theorem states that high kurtosis is mostly determined by the most extreme outliers; i.e., those observations that are $$b$$ or more standard deviations from the mean. These are mathematical theorems, so there can be no argument with them. Supposed “counterexamples” given in this thread and in other online sources are not counterexamples; after all, a theorem is a theorem, not an opinion.

So what of one suggested “counterexample,” where spiking the data with many values at the mean (which thereby increases “peakedness”) causes greater kurtosis? Actually, that example just makes the point of my theorems: When spiking the data in this way, the variance is reduced, so the observations in the tails become more extreme, in terms of number of standard deviations from the mean. And it is observations many standard deviations from the mean, according to the theorems in my TAS paper, that cause high kurtosis. It’s not the peakedness. Or to put it another way, the reason the spike increases kurtosis is not the spike itself; it is that the spike reduces the standard deviation, which makes the tails more standard deviations from the mean (i.e., more extreme), which in turn increases the kurtosis.

It simply cannot be stated that higher kurtosis implies greater peakedness, because you can have a distribution that is perfectly flat over an arbitrarily high percentage of the data (pick 99.99% for concreteness) with infinite kurtosis.
(Just mix a uniform with a Cauchy suitably; there are some minor but trivial and unimportant technical details regarding how to make the peak absolutely flat.) By the same construction, high kurtosis can be associated with any shape whatsoever for 99.99% of the central distribution – U-shaped, flat, triangular, multi-modal, etc.

There is also a suggestion in this thread that the center of the distribution is important, because throwing out the central data of the Cauchy example in my TAS paper makes the data have low kurtosis. But this is also due to outliers and extremes: In throwing out the central portion, one increases the variance so that the extremes are no longer extreme (in terms of $$Z$$ values), hence the kurtosis is low. Any supposed “counterexample” actually obeys my theorems. Theorems have no counterexamples; otherwise, they would not be theorems.

A more interesting exercise than “spiking” or “deleting the middle” is this: Take the distribution of a random variable $$X$$ (discrete or continuous, so it includes the case of actual data), and replace the mass/density within one standard deviation of the mean arbitrarily, but keep the mean and standard deviation of the resulting distribution the same as that of $$X$$. Q: How much change can you make to the kurtosis statistic over all such possible replacements? A: The difference between the maximum and minimum kurtosis values over all such replacements is $$\le 0.25$$.

The above question and its answer comprise yet another theorem. Anyone want to publish it? I have its proof written down (it’s quite elegant, as well as constructive, identifying the max and min distributions explicitly), but I lack the incentive to submit it as I am now retired. I have also calculated the actual max differences for various distributions of $$X$$; for example, if $$X$$ is normal, then the difference between the largest and smallest kurtosis over all replacements of the central portion is 0.141.
Hardly a large effect of the center on the kurtosis statistic! On the other hand, if you keep the center fixed, but replace the tails, keeping the mean and standard deviation constant, you can make the kurtosis infinitely large. Thus, the effect on kurtosis of manipulating the center while keeping the tails constant is $$\le 0.25$$. On the other hand, the effect on kurtosis of manipulating the tails, while keeping the center constant, is infinite.

So, while yes, I agree that spiking a distribution at the mean does increase the kurtosis, I do not find this helpful to answer the question, “What does higher kurtosis tell me about my distribution?” There is a difference between “A implies B” and “B implies A.” Just because all bears are mammals does not imply that all mammals are bears. Just because spiking a distribution increases kurtosis does not imply that increasing kurtosis implies a spike; see the uniform/Cauchy example alluded to above in my answer.

It is precisely this faulty logic that caused Pearson to make the peakedness/flatness interpretations in the first place. He saw a family of distributions for which the peakedness/flatness interpretations held, and wrongly generalized. In other words, he observed that a bear is a mammal, and then wrongly inferred that a mammal is a bear. Fisher followed suit forever, and here we are.

A case in point: People see this picture of “standard symmetric PDFs” (on Wikipedia at https://en.wikipedia.org/wiki/File:Standard_symmetric_pdfs.svg) and think it generalizes to the “flatness/peakedness” conclusions. Yes, in that family of distributions, the flat distribution has the lower kurtosis and the peaked one has the higher kurtosis. But it is an error to conclude from that picture that high kurtosis implies peaked and low kurtosis implies flat.
There are other examples of low-kurtosis (less than the normal distribution) distributions that are infinitely peaked, and there are examples of infinite-kurtosis distributions that are perfectly flat over an arbitrarily large proportion of the observable data.

The bear/mammal conundrum also arises in the Finucan conditions, which state (oversimplified) that if tail probability and peak probability increase (losing some mass in between to maintain the standard deviation), then kurtosis increases. This is all fine and good, but you cannot turn the logic around and say that increasing kurtosis implies increasing tail and peak mass (and reducing what is in between). That is precisely the fatal flaw with the sometimes-given interpretation that kurtosis measures the “movement of mass simultaneously to the tails and peak but away from the shoulders.” Again, not all mammals are bears. A good counterexample to that interpretation is given at https://math.stackexchange.com/a/2523606/472987 in counterexample #1, which shows a family of distributions in which the kurtosis increases to infinity while the mass inside the center stays constant. (There is also a counterexample #2 that has the mass in the center increasing to 1.0 yet the kurtosis decreasing to its minimum, so the often-made assertion that kurtosis measures “concentration of mass in the center” is wrong as well.) Many people think that higher kurtosis implies “more probability in the tails.” This is not true; counterexample #1 shows that you can have higher kurtosis with less tail probability when the tails extend.

So what does kurtosis measure? It precisely measures tail leverage (which can be called tail weight as well) as amplified through fourth powers, as I stated above with my definition of tail-leverage($$m$$). I would just like to reiterate that my TAS article was not an opinion piece. It was instead a discussion of mathematical theorems and their consequences.
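The “perfectly flat over 99.99% of the data, yet enormous kurtosis” construction is easy to simulate. The sketch below is an illustrative finite-moment variant (rare points planted at ±100 instead of Cauchy tails, so all moments exist, and the ±100 value is an arbitrary choice): the center is exactly uniform, but the sample kurtosis lands in the thousands.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z**4)

n = 1_000_000
x = rng.uniform(-1.0, 1.0, n)            # a perfectly flat "peak"
extreme = rng.random(n) < 1e-4           # ~0.01% of observations become extremes
x[extreme] = rng.choice([-100.0, 100.0], size=extreme.sum())

print(sample_kurtosis(rng.uniform(-1, 1, n)))   # pure uniform: about 1.8
print(sample_kurtosis(x))                       # flat-centered mixture: in the thousands
```

The kurtosis here is driven entirely by the roughly 0.01% of points far from the mean; the shape of the other 99.99% is irrelevant, exactly as the theorems require.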
There is much additional supportive material in the current post that has come to my attention since writing the TAS article, and I hope readers find it to be helpful for understanding kurtosis.
# Why are multiple Nav Mesh Agents jittery when the target is inaccessible? I have a simple scene. Most of the land is flat, except for three raised cylindrical platforms coming up from the ground. I have several enemies with Nav Mesh Agent components attached, with their target being the player. I used the Standard Assets' ThirdPersonCharacter script, and then called character.Move(agent.desiredVelocity, false, false); within the Update() of a character control script. The agents work great when the player runs around on the ground. When the player hops on the platform (about 2 units above the ground), an agent will walk toward the edge of the platform and then stop, just as expected. However, if I throw in a second agent, both agents work fine only until the player stands on the platform. Then they start to jitter, like they cannot make up their minds on whether to turn left or right when they should really just walk straight. If I add even more agents, the jittering gets worse. With enough, they don't even move forward and simply turn left and right rapidly. Clearly, there is a problem with interference, and I understand the agents are designed to avoid obstacles as well as each other, but here is what I found and tried so far: • I switched the obstacle avoidance quality to None. The jitter still happens just as much. • Some other posts suggested setting IsKinematic to true for moving obstacles. I cannot do this because the character script depends on the rigidbody not being kinematic. • I adjusted the height of the platform and re-baked the navmesh. It had no effect. It must be the fact that the platform is not actually accessible, so that the agents can only make a partial path. • I messed with the pathfinding settings on the agent, such as turning auto repath off, since it has to do with partial paths. There were no noticeable changes. Does anyone know what might be causing the jitters and how I could fix this? 
# Update

As requested, here is how my Nav Mesh agent is configured:

• Could you please show the settings of the navmesh and navmeshAgent with the code you are using to move units with navmeshAgent? So we could recreate this on our devices or see if the problem lies in the code. – Candid Moon _Max_ Feb 16 '17 at 16:31
• @CandidMoon, I added a screenshot of the inspector for the Nav Mesh Agent. I haven't made any significant changes to the Standard Assets' ThirdPersonCharacter and AIThirdPersonController scripts, so assume I am using those. – tyjkenn Feb 16 '17 at 17:54

## 3 Answers

I had this problem and I was messing around with the Nav Mesh Agent and found out that turning off "Auto Braking" fixed the problem.

• The Stopping Distance may need to be increased slightly as well. – Chad Mar 14 '18 at 15:24

Try setting the priority of all the Agents very low, except one, and see if this exact Agent is also facing the same issue as the others. To me it sounds like the agents were constantly trying to find a new path, as the other agents get too close.

• I tried this. It didn't change anything for any of the agents. However, strangely enough, the glitch became less frequent when I made a slope to one of the platforms, even when I would stand on the other slope-less platforms. – tyjkenn Feb 16 '17 at 17:44

With the stopping distance at 0.3 and no auto braking, a weird behaviour shows up where the agents reach the destination but then move back a bit. Setting the stopping distance to 1 works.
# Conditional Distribution

## Homework Statement

A card is picked at random from N cards labeled 1,2,...,N and the number that appears is X. A second card is picked at random from cards numbered 1,2,..., X and its number is Y. Find the conditional distribution of X given Y = y.

## Homework Equations

$$P(X = x | Y = y) = \frac{P(X = x , Y = y)}{P(Y = y)}$$

## The Attempt at a Solution

From what I understand, there are two decks. The first deck has N cards, while the second deck has X cards, which depends on the value of the card chosen from the first deck, hence X and Y are not independent. I know that $$P(Y = y | X = x) = \frac{1}{x}$$ I'm not sure how to find P(X = x , Y = y). Any help would be greatly appreciated. Thank you.
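A numerical sanity check can help here (illustrative code; the deck size N = 6 and the helper names are arbitrary choices). It builds the joint pmf from $$P(X = x) = 1/N$$ and $$P(Y = y \mid X = x) = 1/x$$, then computes the conditional of X given Y = y directly from the defining formula:

```python
from fractions import Fraction

N = 6  # small deck size, chosen only for illustration

# Joint pmf: P(X=x, Y=y) = P(X=x) * P(Y=y | X=x) = (1/N) * (1/x), for 1 <= y <= x <= N
joint = {(x, y): Fraction(1, N) * Fraction(1, x)
         for x in range(1, N + 1) for y in range(1, x + 1)}

def conditional_x_given_y(y):
    """P(X=x | Y=y) = P(X=x, Y=y) / P(Y=y), nonzero only for x >= y."""
    p_y = sum(p for (x, yy), p in joint.items() if yy == y)
    return {x: joint[(x, y)] / p_y for x in range(y, N + 1)}

dist = conditional_x_given_y(3)
assert sum(dist.values()) == 1
# The conditional is proportional to 1/x on x = y, ..., N:
assert dist[3] / dist[6] == 2
print(dist)
```

Printing dist shows weights proportional to 1/x for x = y, …, N, normalized by the sum $$\sum_{k=y}^{N} 1/k$$.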
# How do you write 4x - y-7 =0 in slope intercept form? Sep 29, 2015 The answer is $y = 4 x - 7$ . #### Explanation: The slope-intercept form of a linear equation is $y = m x + b$, where $m$ is the slope and $b$ is the y-intercept. In order to convert $4 x - y - 7 = 0$ into slope-intercept form, solve for $y$. $4 x - y - 7 = 0$ Subtract $4 x$ from both sides of the equation. $- y - 7 = - 4 x$ Add $7$ to both sides. $- y = - 4 x + 7$ Multiply both sides by $- 1$. $y = 4 x - 7$
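The rearrangement above can be double-checked by substituting the result back into the original equation (a small illustrative check, with arbitrary sample x values):

```python
# Substitute y = 4x - 7 back into 4x - y - 7 = 0 for a few sample x values
for x in [-2, 0, 1, 3.5]:
    y = 4 * x - 7
    assert 4 * x - y - 7 == 0  # the original equation holds exactly

print("slope m =", 4, ", y-intercept b =", -7)
```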
# Well ordering on the quotient of well ordered sets

Let $X\neq\emptyset$ be some set. Consider $\mathcal A$ to be the set containing all pairs $(Y,\leq)$, where $Y\subseteq X$ and $\leq$ is a well-ordering. Define $(A,\leq)\equiv (B,\leq')$ iff $(A,\leq)$ is order isomorphic to $(B,\leq')$, which is clearly an equivalence relation. Further, define, in $\mathcal A/\equiv\,= :\hat{\mathcal A}$ an ordering $[(A,\leq)]\preceq [(B,\leq')]$ if there exists an initial segment $B'\subseteq B$ such that $(A,\leq)\equiv (B',\leq')$. The ordering is strict if all $B'\subset B$ are proper initial segments. I have verified this ordering is well-defined and is a total ordering since for any two well ordered sets there is an order isomorphism from one to an initial segment of the other. Now, fix $(A,\leq)\in\mathcal A$ and consider the set $$\hat{\mathcal S}_A := \{ [(B,\leq')] : [(B,\leq')]\prec [(A,\leq)]\}$$ The mapping $f: (A,\leq)\to \hat{\mathcal S}_A, x\mapsto [(A_x,\leq)]$, where $A_x := \{z\in A: z<x\}\subset A$ is a proper initial segment, is an order isomorphism. From the above, we are to deduce the set $(\hat{\mathcal A},\preceq)$ is actually well-ordered.

To finish A. Karagila's post: If $B$ is not least in $\mathcal B$, then $B'\in\mathcal B$ and $B'\prec B$, hence $B'$ embeds into an initial segment $A_y\subset A$, but then $y\in A$ to which corresponds $B'\in\mathcal B$ such that $B'\equiv A_y$, consequently $y<x$, which is impossible.

• Huh? What are you trying to prove? – Asaf Karagila Mar 30 '18 at 12:43
• @AsafKaragila that the total ordering $\preceq$ is also a well ordering. An upper bound for a nonempty subset of $\hat{\mathcal A}$ would be sufficient, as well. I'm sort of stuck, it's probably something stupidly simple. – Alvin Lepik Mar 30 '18 at 12:43
• When you weave a labyrinth around your work, it is easy to get lost. A good exit strategy is to scrap the idea all together, and start fresh.
– Asaf Karagila Mar 30 '18 at 12:51 ## 1 Answer Your proof is overly complicated, and the reliance on the axiom of choice is itself a bad direction. The construction ultimately leads to Hartogs' theorem which implies the axiom of choice follows from the comparability of any two cardinals. If we had used the axiom of choice to prove Hartogs' theorem, then this would be circular. The proof is simpler and tantamount to just verifying by hand. Given a non-empty family $\cal B\subseteq A$, pick some $A$ such that $[(A,\leq)]$ is in $\cal B$. If this is a minimal element, we're done. So let's assume it's not. Now each $[(B,\leq_B)]\in\cal B$ such that $[(B,\leq_B)]\prec[(A,\leq)]$ embeds into a unique proper initial segment of $A$. So each such $B$ gives us a unique $x$ such that $(B,\leq_B)$ is isomorphic to $A_x=\{a\in A\mid a<x\}$. Moreover, since $A$ is not the minimal element of $\cal B$, the set of $\{x\in A\mid\exists[(B,\leq_B)]\in\mathcal B, B\equiv A_x\}$ is non-empty. Since $\leq$ is a well-ordering of $A$, there is a minimal $x$. Now show that $B$ which corresponds to this minimal $x$ is the minimal element of $\cal B$. • Your advice under main post is certainly sensible. It's more akin to struggling to get free from a spider's web. The more you struggle, the tighter the web gets around you. Thanks for the breath of fresh air :D – Alvin Lepik Mar 30 '18 at 13:01 • In hindsight, I'd have known not to use choice if I knew the ins and outs of Hartog's theorem. We're just given these things as exercises and mostly they're just routine checks. Sometimes things get weird, though. – Alvin Lepik Mar 30 '18 at 13:04 • Well, hindsight 20/20. I can't even count the number of mistakes that in retrospect I should have never made. :) – Asaf Karagila Mar 30 '18 at 13:09 • Good point, I withdraw my statement. – Alvin Lepik Mar 30 '18 at 13:11
# Q. X is heated with soda lime and gives ethane. X is

AFMC 2005 Aldehydes Ketones and Carboxylic Acids

Solution:

$$\ce{\underset{(X)}{CH3CH2COONa} + NaOH ->[CaO][\Delta] CH3-CH3 + Na2CO3}$$

X is sodium propanoate, which on heating with soda lime (NaOH/CaO) undergoes decarboxylation to give ethane.
# How to rotate a table?

I have a table and I want to rotate it. The table has 3 rows and 4 columns, and I want to rotate the text inside this table as well. How can I do this?

• please, for the love of god, don't use sidewaystables unless you intend the document to be read while lying down. There's nothing more annoying than having to crane your neck to read a table in a pdf, use lscape or pdflscape so the table is rotated properly when shown on a monitor. – Shep Sep 10 '14 at 1:45
• In relation to previous comment: see this answer for rotating a floating table with pdflscape – Olivier Sep 22 '15 at 8:19

As Jake said you can use \rotatebox from the graphicx package to rotate a table. This is perfectly fine for uncomplicated tables. However, this will read the whole table as a macro argument, which doesn't allow for verbatim or other special content and isn't that efficient. As an alternative you can use the \adjustbox macro or adjustbox environment from the adjustbox package (written by me). Both process the content as a real box, not as a macro argument, and therefore avoid the mentioned drawbacks:

\documentclass{article}
\usepackage{adjustbox}
\begin{document}
\begin{adjustbox}{angle=90}
\begin{tabular}{ll}
First First & First Second\\
Second First & Second Second
\end{tabular}
\end{adjustbox}
\end{document}

Alternatively, you can use the very new package realboxes. When loaded with the graphicx option (or without any but after graphicx) it provides \Rotatebox which works like \rotatebox but also reads the content as a real box:

\documentclass{article}
\usepackage[graphicx]{realboxes}
\begin{document}
\Rotatebox{90}{%
\begin{tabular}{ll}
First First & First Second\\
Second First & Second Second
\end{tabular}
}%
\end{document}

• I am not able to get a caption on my table. How can I do that – Agaz Hussain Jul 8 '18 at 11:42
• @AaghazHussain: You can't place floats (table, figure etc.) inside boxes.
You need to either place the table environment around the rotated box while placing the \caption inside it, OR use a non-floating alternative like \captionof (see [Label and caption without float](https://tex.stackexchange.com/q/7210/2975)) or the caption={the caption text},nofloat=table keys when you use adjustbox. – Martin Scharrer Jul 9 '18 at 14:03
• Thanks, I got the caption but not rotated with the table and also I try to center the rotated table \begin{adjustbox}{width=\textwidth,totalheight=\textheight,keepaspectratio,rotate=90,caption={Time taken in seconds}, float=table, center}, but it is not centering. – Agaz Hussain Jul 10 '18 at 11:15
• @AaghazHussain: The order of keys is important. You can't center a float, you need to center the content of it. Do you really need a floating table here? Because if not, simply change float= to nonfloat= and move the rotate key just before the center key. It looks to me that you want it over the full page size right? Then you might want to wrap the whole thing into an \afterpage{..} (afterpage package) to place it on the next page. Maybe with \clearpage added before the adjustbox. Also you should check your width and height, as they are the dimensions BEFORE the rotate! – Martin Scharrer Jul 10 '18 at 12:12
• @AaghazHussain: If you still have issues don't hesitate to add a new question for your specific problem. Please add a link to this answer and state what is still missing and what you want to achieve exactly. – Martin Scharrer Jul 10 '18 at 12:13

Another option is to use sidewaystable from the rotating package.

\documentclass{article}
\usepackage{rotating}
\begin{document}
\begin{sidewaystable}
\centering
\begin{tabular}{ll}
First First & First Second\\
Second First & Second Second
\end{tabular}
\end{sidewaystable}
\end{document}

If all you want to do is rotate the complete table, but keep everything else on the page unrotated, you can use the \rotatebox{<angle>}{ ...
} command from the graphicx package: \documentclass{article} \usepackage{graphicx} \begin{document} \rotatebox{90}{ \begin{tabular}{ll} First First & First Second\\ Second First & Second Second \end{tabular} } \end{document} However, if you have a large table that will take up the whole page, you might want to rotate the page instead of the table. You can do this using the pdflscape package if you're compiling with pdflatex, or lscape if you're using latex, which introduce a landscape environment. \documentclass{article} \usepackage{pdflscape} \begin{document} \begin{landscape} \begin{tabular}{ll} First First & First Second\\ Second First & Second Second \end{tabular} \end{landscape} \end{document} • the landscape option is the best option: there's nothing more obnoxious than inserting a sideways table and forcing your reader to crane his neck just to read the pdf. – Shep Sep 10 '14 at 1:42 Assuming you want to rotate the table because it doesn't fit the width of a portrait page. Based on @Shep's comment to this question: use the pdflscape package (CTAN, dtx, pdf) by Heiko Oberdiek. Package pdflscape adds PDF support to the environment landscape of package lscape by setting the PDF page attribute /Rotate I use Lyx, and it's in the FAQ: How can I typeset certain pages of my documents in landscape mode? Use the package lscape (or better pdflscape, which also supports pdflatex output). Add to the preamble: \usepackage{pdflscape} In the document, embrace the pages which should be in landscape mode by: \begin{landscape} ... \end{landscape} All other text will be in portrait mode pages. If you don't have the pdflscape package installed, get it from ctan The lscape package is part of the graphics bundle and should be installed by default. • Are you able to help with this question here about rotating a large table in Lyx? – hhh Mar 13 '16 at 21:10 • Even though I really like this solution, some printing services (such as bod) have issues with rotated pages. 
A check with your printing service is needed before using landscape. – koppor Jul 12 '16 at 20:06

The ctable package also has an option to rotate the table: sideways. For example:

\documentclass{article}
\usepackage{ctable}
\begin{document}
\ctable[
label={tab:mytable},
botcap, % caption below table
sideways % This rotates the table
]
{ccc}
{ % Table footnotes here, see ctable docs
}
{
Column 1 & Column 2 & Column 3 \\
Row 2, 1 & 2, 2 & 2, 3 \\
}
\end{document}
# Why does the criterion for convergence of a power series not imply every series with bounded terms converges?

I am reading Complex Made Simple by David C. Ullrich. There is a result from which I am deducing bogus conclusions, so I must be misunderstanding it somehow:

Lemma 1.0. Suppose $$(c_n)_{n = 0}^{\infty}$$ is a sequence of complex numbers, and define $$R \in [0, \infty]$$ by $$R = \sup \{r \ge 0: \text{the sequence } (c_nr^n) \text{ is bounded}\}.$$ Then the power series $$\sum_{n=0}^{\infty}c_n(z-z_0)^n$$ converges absolutely and uniformly on every compact subset of the disk $$D(z_0, R)$$ and diverges at every point $$z$$ with $$|z-z_0|>R$$.

My bogus conclusion: Let $$c_n$$ be a sequence of complex numbers and suppose that $$c_n r^n$$ is bounded. Then $$\sum_{n=0}^{\infty} c_n r^n$$ converges.

My reasoning: Let $$c_n$$ be any sequence of complex numbers. The series $$\sum_{n=0}^{\infty}c_n(z-z_0)^n$$ converges absolutely whenever $$|z - z_0| < R$$, so $$\sum_{n=0}^{\infty}c_nr^n$$ converges whenever $$r < R$$, so $$\sum_{n=0}^{\infty}c_nr^n$$ converges whenever $$c_n r^n$$ is bounded.

The problem comes in the last step. Just because $$\sum_{n=1}^\infty c_nr^n$$ converges with $$r \lt R$$ you cannot conclude that $$\sum_{n=1}^\infty c_nR^n$$ converges. As an example, let $$c_n=1$$ for all $$n$$. We note that $$R=1$$ here. $$c_nR^n=1$$, so is bounded. For any $$r \lt 1$$, $$\sum_{n=1}^\infty c_nr^n$$ converges absolutely, but $$\sum_{n=1}^\infty c_nR^n$$ does not converge.

• Hi! I $think$ I understand, but could I please run my whole reasoning by you to check? My reasoning is this: Let $c_n$ be a sequence and let's say $R=1$. Take $r=0.5$. Then $c_n (0.5)^n$ is bounded. Let $z_0=0$. The theorem tells us that $\sum_{n=0}^{\infty} c_nz^n$ converges absolutely for any $z$ with $|z|<1$; pick $z = 0.5$. Then $\sum_{n=0}^{\infty} c_n (0.5)^n$ converges.
But here I made the mistake; the conclusion we can draw from here is that $\sum_{n=0}^{\infty} c_nr^n$ converges for "almost every" $r$ for which $c_nr^n$ is bounded (for any $r$ with $|r|<1$). (continued) – Ovi Dec 29 '18 at 14:45
• (continued) But we cannot draw the conclusion that "If $c_n r^n$ is bounded, then $\sum_{n=0}^{\infty} c_n r^n$ converges", because it may be the case that $c_n R^n$ is bounded, but $\sum_{n=0}^{\infty} c_n R^n$ does not converge. – Ovi Dec 29 '18 at 14:45
• Yes, that is correct. The sum will converge for every $r$ with modulus strictly less than $R$, but not necessarily on the circle $|r|=R$. – Ross Millikan Dec 29 '18 at 15:26
• Thank you! ${}{}{}$ – Ovi Dec 29 '18 at 16:06

The fact that $$c_n r^n$$ is bounded means that $$r$$ is in the set we're taking the sup of, so $$r \le R$$. But the convergence of $$\sum_{n=0}^\infty c_n s^n$$ is only guaranteed for $$s < R$$. It could very well be that $$r=R$$; a simple example is $$c_n = (-1)^n$$ and $$r=1$$.

The condition $$r < R$$ is not equivalent to $$c_nr^n$$ being bounded. It is possible that $$c_nr^n$$ is bounded for $$r=R$$ as well, and in that case we cannot conclude that the series converges.
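The example $$c_n = 1$$ (so R = 1) can be checked numerically (illustrative code): the terms $$c_n R^n$$ are bounded at r = R = 1, the partial sums converge for r < R, and they grow without bound at r = R:

```python
def partial_sum(r, n_terms):
    """Partial sum of sum_{n=0}^{n_terms-1} c_n r^n with c_n = 1."""
    return sum(r**n for n in range(n_terms))

R = 1.0
assert all(abs(R**n) <= 1 for n in range(1000))  # terms bounded at r = R

print(partial_sum(0.5, 200))   # inside the disk: approaches 1/(1 - 0.5) = 2
print(partial_sum(1.0, 200))   # at r = R: just counts terms, grows without bound
```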
2k views ### Find all integer solutions for the equation $|5x^2 - y^2| = 4$ In a paper that I wrote as an undergraduate student, I conjectured that the only integer solutions to the equation $$|5x^2 - y^2| = 4$$ occur when $x$ is a Fibonacci number and $y$ is a Lucas number. ... 124 views 153 views ### How to prove that the roots of this equation are integers? Let there be an equation $a^2 + 4ab + b^2 - 121 = 0$ where I want to prove that a,b are integers. Then I want to find whether there are integer values of $b$ for which $a$ is also an integer. Let us ... 607 views ### Solving the equation $x^2-7y^2=-3$ over integers I'd like to solve the following Pell equation: $$x^2-7y^2=-3$$ Where $x$ and $y$ are integers. I applied the usual procedure, which avoids continued fractions: The two minimal positive integer ... 348 views ### Solutions to Diophantine Equations I am looking for integer solutions to the equation $$x^2 = 5y^2 + 14y + 1$$ I know that Pell's Equation is of the form $x^2 - ny^2=1$ and that there exist algorithms to solve this equation. I was ... 311 views ### Why can't the Alpertron solve this Pell-like equation? Dario Alpern's Alpertron is convenient for solving Pell and Pell-like equations. It can even solve the one at the heart of Archimedes' cattle problem, $$p^2-(4)(609)(7766)(4657^2)q^2=1$$ and give ... ### Finding integers of the form $3x^2 + xy - 5y^2$ where $x$ and $y$ are integers, using diagram via arithmetic progression So the diagram drawn looks like this: We begin at the edges labeled $3$ and $-5$ because we are using those as the bases for $x$ and $y$, respectively. The way we obtain the values of the 2 adjacent ...
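For the first question above, a brute-force search over a small range (illustrative code; it supports the Fibonacci/Lucas conjecture but of course proves nothing, and the search bounds are arbitrary) finds only solutions where x is a Fibonacci number and y the corresponding Lucas number:

```python
# Search |5x^2 - y^2| = 4 over a small grid
solutions = [(x, y) for x in range(200) for y in range(500)
             if abs(5 * x * x - y * y) == 4]

# Fibonacci and Lucas numbers up to the search bound
fib, luc = [0, 1], [2, 1]
while fib[-1] < 200:
    fib.append(fib[-1] + fib[-2])
    luc.append(luc[-1] + luc[-2])

print(solutions[:8])   # [(0, 2), (1, 1), (1, 3), (2, 4), (3, 7), (5, 11), (8, 18), (13, 29)]
assert all(x in fib for x, y in solutions)
assert all(y in luc for x, y in solutions)
```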
# Clustering FFT frequency bins from sensor time series data

I am trying to analyse multivariate time series data sets. I have 6 signals for each event, representing 3 linear accelerations and 3 rotational velocities for a 40 ms window. I am trying to find a way to cluster together similar events based on these 6 signals. The method I am currently looking at is using the FFT on each signal to reduce it to frequency bins, then doing some sort of clustering algorithm on the 3 highest-amplitude frequencies, or something along those lines. My question is what sort of clustering algorithm I should be looking at for my problem. If for example my problem has 100 events, 6 sensors, and 3 frequencies and amplitudes per sensor per event. I am new to this type of signal processing so this methodology might not be feasible, but I welcome any suggestions on a clustering algorithm or a completely different approach that you might think is better for my problem.

There are more complex approaches, but vector quantization (VQ) gives you a good, simple baseline. If it doesn't give you the performance you need, then you can move to more complex approaches. The next step, after simple VQ, would be to try $$K$$-means clustering. These notes give a good outline.
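To make the pipeline concrete, here is a minimal sketch (an illustrative construction: the two synthetic event types, the 1 kHz sampling rate, k = 2, and the plain NumPy k-means are all assumptions, not recommendations). Each of the 6 channels is reduced to its 3 highest-amplitude FFT bins (frequency plus amplitude), the per-event features are concatenated and standardized, and Lloyd's-algorithm k-means clusters the events:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samples = 1000.0, 40            # 40 ms window at an assumed 1 kHz rate

def top_k_features(signal, k=3):
    """Frequencies and amplitudes of the k largest FFT bins of one channel."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = np.argsort(spec)[-k:][::-1]
    return np.concatenate([freqs[idx], spec[idx]])

# Toy data: 100 events of two kinds (dominant 50 Hz vs 200 Hz), 6 channels each
t = np.arange(n_samples) / fs
X = np.array([
    np.concatenate([top_k_features(np.sin(2 * np.pi * (50.0 if i < 50 else 200.0) * t)
                                   + 0.1 * rng.standard_normal(n_samples))
                    for _ in range(6)])
    for i in range(100)
])                                     # shape: (100 events, 6 ch * (3 freq + 3 amp) = 36)

# Standardize, then run a plain k-means (Lloyd's algorithm)
X = (X - X.mean(0)) / (X.std(0) + 1e-12)
k = 2
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(25):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])

print(X.shape, np.bincount(labels, minlength=k))
```

On real data the synthetic events would be replaced by the recorded windows; if simple k-means is not enough, the same spectral features can be fed into hierarchical clustering or a Gaussian mixture instead.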
## The Annals of Statistics ### Bayesian manifold regression #### Abstract There is increasing interest in the problem of nonparametric regression with high-dimensional predictors. When the number of predictors $D$ is large, one encounters a daunting problem in attempting to estimate a $D$-dimensional surface based on limited data. Fortunately, in many applications, the support of the data is concentrated on a $d$-dimensional subspace with $d\ll D$. Manifold learning attempts to estimate this subspace. Our focus is on developing computationally tractable and theoretically supported Bayesian nonparametric regression methods in this context. When the subspace corresponds to a locally-Euclidean compact Riemannian manifold, we show that a Gaussian process regression approach can be applied that leads to the minimax optimal adaptive rate in estimating the regression function under some conditions. The proposed model bypasses the need to estimate the manifold, and can be implemented using standard algorithms for posterior computation in Gaussian processes. Finite sample performance is illustrated in a data analysis example. #### Article information Source Ann. Statist., Volume 44, Number 2 (2016), 876-905. Dates Revised: September 2015 First available in Project Euclid: 17 March 2016 https://projecteuclid.org/euclid.aos/1458245738 Digital Object Identifier doi:10.1214/15-AOS1390 Mathematical Reviews number (MathSciNet) MR3476620 Zentralblatt MATH identifier 1341.62196 #### Citation Yang, Yun; Dunson, David B. Bayesian manifold regression. Ann. Statist. 44 (2016), no. 2, 876--905. doi:10.1214/15-AOS1390. https://projecteuclid.org/euclid.aos/1458245738 #### References • [1] Aronszajn, N. (1950). Theory of reproducing kernels. Trans. Amer. Math. Soc. 68 337–404. • [2] Belkin, M. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15 1373–1396. • [3] Bhattacharya, A., Pati, D. and Dunson, D. (2014). 
Anisotropic function estimation using multi-bandwidth Gaussian processes. Ann. Statist. 42 352–381. • [4] Bickel, P. J. and Kleijn, B. J. K. (2012). The semiparametric Bernstein–von Mises theorem. Ann. Statist. 40 206–237. • [5] Bickel, P. J. and Li, B. (2007). Local polynomial regression on unknown manifolds. In Complex Datasets and Inverse Problems. Institute of Mathematical Statistics Lecture Notes—Monograph Series 54 177–186. IMS, Beachwood, OH. • [6] Binev, P., Cohen, A., Dahmen, W. and DeVore, R. (2007). Universal algorithms for learning theory. II. Piecewise polynomial functions. Constr. Approx. 26 127–152. • [7] Binev, P., Cohen, A., Dahmen, W., DeVore, R. and Temlyakov, V. (2005). Universal algorithms for learning theory. I. Piecewise constant functions. J. Mach. Learn. Res. 6 1297–1321. • [8] Camastra, F. and Vinciarelli, A. (2002). Estimating the intrinsic dimension of data with a fractal-based method. IEEE P.A.M.I. 24 1404–1407. • [9] Carter, K. M., Raich, R. and Hero, A. O. III (2010). On local intrinsic dimension estimation and its applications. IEEE Trans. Signal Process. 58 650–663. • [10] Castillo, I., Kerkyacharian, G. and Picard, D. (2013). Thomas Bayes’ walk on manifolds. Probab. Theory Related Fields 158 665–710. • [11] Chen, M., Silva, J., Paisley, J., Wang, C., Dunson, D. and Carin, L. (2010). Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds. IEEE Trans. Signal Process. 58 6140–6155. • [12] Farahmand, A. M., Szepesvári, C. and Audibert, J. (2007). Manifold-adaptive dimension estimation. In ICML 2007 265–272. ACM Press, New York. • [13] Ghosal, S., Ghosh, J. K. and van der Vaart, A. W. (2000). Convergence rates of posterior distributions. Ann. Statist. 28 500–531. • [14] Ghosal, S. and van der Vaart, A. (2007). Convergence rates of posterior distributions for non-i.i.d. observations. Ann. Statist. 35 192–223. • [15] Giné, E. and Nickl, R. (2011). 
Rates on contraction for posterior distributions in $L^{r}$-metrics, $1\leq r\leq\infty$. Ann. Statist. 39 2883–2911. • [16] Kpotufe, S. (2009). Escaping the curse of dimensionality with a tree-based regressor. In COLT 2009—The 22nd Conference on Learning Theory, June 1821. Montreal, QC. • [17] Kpotufe, S. and Dasgupta, S. (2012). A tree-based regressor that adapts to intrinsic dimension. J. Comput. System Sci. 78 1496–1515. • [18] Kundu, S. and Dunson, D. B. (2011). Latent factor models for density estimation. Available at arXiv:1108.2720v2. • [19] Lawrence, N. D. (2003). Gaussian process latent variable models for visualisation of high dimensional data. Neural Information Processing Systems 16 329–336. • [20] Levina, E. and Bickel, P. (2004). Maximun likelihood estimation of intrinsic dimension. In Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA. • [21] Lin, L. and Dunson, D. B. (2014). Bayesian monotone regression using Gaussian process projection. Biometrika 101 303–317. • [22] Little, A. V., Lee, J., Jung, Y. M. and Maggioni, M. (2009). Estimation of intrinsic dimensionality of samples from noisy low-dimensional manifolds in high dimensions with multiscale SVD. In 2009 IEEE/SP 15th Workshop on Statistical Signal Processing 85–88. IEEE, Cardiff. • [23] Nene, S. A., Nayar, S. K. and Murase, H. (1996). Columbia object image library (COIL-100). Technical report, Columbia Univ., New York. • [24] Page, G., Bhattacharya, A. and Dunson, D. (2013). Classification via Bayesian nonparametric learning of affine subspaces. J. Amer. Statist. Assoc. 108 187–201. • [25] Reich, B. J., Bondell, H. D. and Li, L. (2011). Sufficient dimension reduction via Bayesian mixture modeling. Biometrics 67 886–895. • [26] Roweis, S. T. and Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science 290 2323–2326. • [27] Savitsky, T., Vannucci, M. and Sha, N. (2011). 
Variable selection for nonparametric Gaussian process priors: Models and computational strategies. Statist. Sci. 26 130–149. • [28] Stone, C. J. (1982). Optimal global rates of convergence for nonparametric regression. Ann. Statist. 10 1040–1053. • [29] Tenenbaum, J. B., de Silva, V. and Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science 290 2319–2323. • [30] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. Stat. Methodol. 73 273–282. • [31] Tokdar, S. T., Zhu, Y. M. and Ghosh, J. K. (2010). Bayesian density regression with logistic Gaussian process and subspace projection. Bayesian Anal. 5 319–344. • [32] van de Geer, S. (2000). Empirical Processes in M-Estimation. Cambridge Univ. Press, Cambridge. • [33] van der Vaart, A. and van Zanten, H. (2011). Information rates of nonparametric Gaussian process methods. J. Mach. Learn. Res. 12 2095–2119. • [34] van der Vaart, A. W. and van Zanten, J. H. (2008). Reproducing kernel Hilbert spaces of Gaussian priors. In Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh. Inst. Math. Stat. Collect. 3 200–222. IMS, Beachwood, OH. • [35] van der Vaart, A. W. and van Zanten, J. H. (2009). Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth. Ann. Statist. 37 2655–2675. • [36] Yang, Y. and Dunson, D. B. (2015). Supplement to “Bayesian manifold regression.” DOI:10.1214/15-AOS1390SUPP. • [37] Ye, G.-B. and Zhou, D.-X. (2008). Learning and approximation by Gaussians on Riemannian manifolds. Adv. Comput. Math. 29 291–310. • [38] Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B. Stat. Methodol. 67 301–320. #### Supplemental materials • Reviews of geometric properties and proofs of Theorems 2.1, 2.2, 2.4 and 3.2. 
Concepts and results in differential and Riemannian geometry are reviewed in Section 7, where new results are included with proofs. Proofs of Theorems 2.1, 2.2, 2.4 and 3.2 are then provided in Section 8.
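The abstract's central point — that ordinary Gaussian process regression run directly on the ambient $D$-dimensional coordinates can adapt to a low-dimensional manifold without ever estimating it — can be sketched with a standard squared-exponential GP predictor. This is my own minimal illustration, not the authors' code: the function name, the toy circle data, and the fixed bandwidth are all invented for the example, and it omits the rescaled-bandwidth prior that drives the paper's adaptivity result.

```python
import numpy as np

def gp_posterior_mean(X, y, X_new, length_scale=0.5, noise_var=0.1):
    """Posterior mean of a GP with squared-exponential kernel,
    applied directly to the ambient coordinates X (n x D)."""
    def kernel(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * length_scale ** 2))

    K = kernel(X, X) + noise_var * np.eye(len(X))
    K_star = kernel(X_new, X)
    return K_star @ np.linalg.solve(K, y)

# Toy example: a 1-d manifold (a circle) embedded in D = 10 dimensions.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
B = rng.normal(size=(2, 10)) / np.sqrt(10)   # random linear embedding
X = np.c_[np.cos(t), np.sin(t)] @ B          # ambient coordinates
y = np.sin(3.0 * t) + 0.05 * rng.normal(size=t.size)
pred = gp_posterior_mean(X, y, X[:5])
```

The regression never sees the intrinsic coordinate $t$; the kernel operates on all 10 ambient dimensions, which is exactly the setting the theorem addresses.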
## Fast Mixing for Discrete Point Processes

author: Patrick Rebeschini, Yale Institute for Network Science, Yale University

published: Aug. 20, 2015, recorded: July 2015

# Description

We investigate systematic mechanisms for designing fast mixing Markov chain Monte Carlo algorithms to sample from discrete point processes. Such processes are defined as probability distributions $\mu(S)\propto \exp(f(S))$ over all subsets $S\subseteq V$ of a finite set $V$ through a bounded set function $f:2^V\rightarrow \mathbb{R}$. In particular, a subclass of discrete point processes characterized by submodular functions (which includes determinantal point processes, log-submodular distributions, and submodular point processes) has recently gained a lot of interest in machine learning and has been shown to be effective for modeling diversity and coverage. We show that if the set function (not necessarily submodular) displays a natural notion of decay of correlation, then it is possible to design fast mixing Markov chain Monte Carlo methods that provide size-free error bounds on marginal approximations. The conditions that we introduce involve a control on the second-order (discrete) derivatives of set functions. We then provide sufficient conditions for fast mixing when the set function is submodular, and specialize our results to canonical examples.
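The single-site dynamics analyzed in this line of work can be sketched as follows. This is my own minimal illustration, not code from the talk; `gibbs_sample` and its signature are invented for the example.

```python
import math
import random

def gibbs_sample(f, V, steps, seed=0):
    """Single-site Gibbs (Glauber) chain targeting mu(S) proportional to exp(f(S)).

    f : bounded set function taking a frozenset; V : list of ground-set elements.
    Each step resamples the membership of one uniformly chosen element
    conditional on all the others, which leaves mu invariant.
    """
    rng = random.Random(seed)
    S = set()
    for _ in range(steps):
        v = rng.choice(V)
        f_in = f(frozenset(S | {v}))    # value with v included
        f_out = f(frozenset(S - {v}))   # value with v excluded
        # P(v in S | rest) = exp(f_in) / (exp(f_in) + exp(f_out))
        p_in = 1.0 / (1.0 + math.exp(f_out - f_in))
        if rng.random() < p_in:
            S.add(v)
        else:
            S.discard(v)
    return S
```

The point of the talk is a condition under which chains of this kind mix in time nearly independent of $|V|$ (giving size-free marginal error bounds); the sketch itself carries no such guarantee.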
# Perturbative Renormalization in Phi 4 Theory

Diracobama2181

TL;DR Summary: I seem to have a misunderstanding as to how counterterms actually get rid of the divergences in amplitudes.

For example, after the Lagrangian is renormalized at 1-loop order, it is of the form $$\mathcal{L}=\frac{1}{2}\partial^{\mu}\Phi\partial_{\mu}\Phi-\frac{1}{2}m^2\Phi^2-\frac{\lambda\Phi^4}{4!}-\frac{1}{2}\delta_m^2\Phi^2-\frac{\delta_{\lambda}\Phi^4}{4!}.$$ So if I were to attempt to find the amplitude $$\bra{p'}\Phi(x_1)\Phi(x_2)\ket{p}$$ to order $$\lambda$$, I would get $$\bra{p'}\Phi(x_1)\Phi(x_2)\ket{p}=\bra{\Omega}a_{p'}\Phi(x_1)\Phi(x_2) a_{p}^{\dagger}e^{-\frac{i}{4!}\int d^4y\,(\lambda+\delta_{\lambda})\Phi^4(y)}\ket{\Omega}\\=e^{i(p'\cdot x_1-p\cdot x_2)}+e^{i(p'\cdot x_2-p\cdot x_1)}-i(\lambda+\delta_{\lambda})\int d^4x\, e^{i(p'-p)\cdot x}\int\frac{d^4k}{(2\pi)^4}\frac{ie^{-ik\cdot (x_1-x)}}{k^2-m^2+i\epsilon}\int\frac{d^4q}{(2\pi)^4}\frac{ie^{-iq \cdot (x_2-x)}}{q^2-m^2+i\epsilon}$$ From here, I would dimensionally regularize and use $$\delta_{\lambda}=\frac{3\lambda^2}{32\pi^2}\left(\frac{2}{\epsilon}-\gamma+\log{4\pi}\right),$$ which is of order $$\lambda^2$$, so it doesn't cancel out the divergence of this integral. What is it about renormalization that I'm misunderstanding?
I have many requests to create downloadable files, so I will be publishing downloadable files in PDF, starting with the Integer Addition Practice Sheet below. You are allowed to print and share the file below (and future PDF files) via hard copy or email. However, you are not allowed to sell or publish them in print or upload them on websites of any kind (e.g. blogs).

In each worksheet, I will recommend a time limit for answering the questions. This way, you will be conscious of the time, since the Civil Service Exam, like most other exams, is taken under time pressure. I have also placed the answer key on a different page, so you don’t have to see it when answering the practice sheet on your computer.

## How to Use the Articles A, An, and The

A, an, and the are called articles. The definite article the is used to refer to a specific noun, while the indefinite articles a and an are used to refer to non-specific nouns. Consider the following examples.

1.) Please give me the notebook.
2.) Please give me a notebook.

In the first sentence, a specific notebook is referred to. It is assumed that both the speaker and the listener know which notebook is referred to. On the other hand, in the second sentence, the speaker asks for a notebook; any notebook will do.

Rules in Using A and An

A and an are both used to refer to non-specific nouns, but there are rules that you must remember in order to use them correctly.

## Grammar and Correct Usage Quiz 4

Let us continue learning grammar by answering the quiz below. Some of the answers below have brief explanations, while the ones you can easily look up in a dictionary are not explained.

Grammar and Correct Usage Quiz 4

Fill in the blanks with the most appropriate word.

1. We just came from Mount Apo. That was an exciting _____.
a.) trip
b.) journey
c.) voyage
d.) expedition

## Grammar and Correct Usage Quiz 3

After answering Grammar and Correct Usage Quiz 1 and Quiz 2, let me continue the series with the quiz below.
Some of the answers below have brief explanations. Answers that you can easily look up in a dictionary are not explained.

Grammar and Correct Usage Quiz 3

1. Many anime nowadays are not good for children because of their violence. I think Disney movies are more _____ for them.
a.) suitable
b.) suited
c.) unsuitable

## Grammar Quiz – Is, It, Its, It’s

It is a pronoun that usually refers to something that has been previously mentioned or is easily identified. Example: A room with two televisions in it.

Its is the possessive form of it. Example: This is a weird-looking gadget. What is its purpose?

It’s is a contraction of it is or it has. Example: It’s time to go. (It is time to go.)

## How to Solve Rectangle Area Problems Series

The How to Solve Rectangle Area Problems Series is a series of posts on the basics of solving rectangle area problems. This post is a summary of the series.

1.) Calculating Areas of Geometric Figures discusses the notion of area and the intuitive derivation of the formula $A = l \times w$, where $A$ is the area of a rectangle, $l$ is its length, and $w$ is its width.

2.) How to Solve Rectangle Area Problems Part 1 discusses basic problems involving the area of rectangles.

3.) How to Solve Rectangle Area Problems Part 2 discusses intermediate problems involving the area of rectangles. This involves solving for the area given the rectangle’s perimeter.

4.) Rectangle Area Problems Quiz is a self-quiz for determining your understanding of rectangle area.

I hope you enjoyed this series. More series to come in the future.

## Rectangle Area Quiz

This is the conclusion of the Solving Problems on Rectangle Area Series. In the first part, we discussed the intuition behind the rectangle area formula and solved basic problems about it. In the second part, we solved more complicated rectangle area problems. In this post, you can test what you have learned in the previous parts of the series.
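As a quick worked illustration of the perimeter-based problems from the second part (the numbers here are my own, not taken from the series): given a perimeter $P = 20$ cm and a length $l = 6$ cm,

```latex
P = 2(l + w) \;\Rightarrow\; w = \frac{P}{2} - l = \frac{20}{2} - 6 = 4 \text{ cm},
\qquad A = l \times w = 6 \times 4 = 24 \text{ cm}^2.
```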
Ideal Time Limit: 15 minutes

Rectangle Area Quiz

1. The length of a rectangle is 8 cm and its width is 7 cm. What is its area?

## Grammar Quiz – Any, Every, Some, None

Any, every, some, and none are some of the words that can be compounded with other words to form pronouns, adjectives, and adverbs. They can easily be confused with each other. Take the quiz below and see how well you understand these words. The answer key can be seen by clicking the red + button after the choices. Good luck!

Grammar Quiz – Any, Every, Some, None

1. There’s _____ you can do. I’ve made up my mind already.
a.) anything
b.) everything
c.) something
University Physics with Modern Physics (14th Edition)

a. $\lambda_1=h\sqrt{\frac{8L^2}{2(1^2)h^2}}$, so $\lambda_1=6.0\times10^{-10}\,m$. The wavelength is twice the width of the box. The momentum is $p_1=\frac{h}{\lambda_1}=1.1\times10^{-24}\;kg \cdot m/s$.

b. $\lambda_2=h\sqrt{\frac{8L^2}{2(2^2)h^2}}$, so $\lambda_2=3.0\times10^{-10}\,m$. The wavelength is the same as the width of the box. The momentum is $p_2=\frac{h}{\lambda_2}=2.2\times10^{-24}\;kg \cdot m/s$.

c. $\lambda_3=h\sqrt{\frac{8L^2}{2(3^2)h^2}}$, so $\lambda_3=2.0\times10^{-10}\,m$. The wavelength is two-thirds the width of the box. The momentum is $p_3=\frac{h}{\lambda_3}=3.3\times10^{-24}\;kg \cdot m/s$.
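The pattern in the three parts reduces to $\lambda_n = 2L/n$ and $p_n = h/\lambda_n$. A quick numeric check (my own sketch, not part of the solution; the box width $L = 3.0\times10^{-10}$ m is inferred from the answers above):

```python
# Particle in a box: p_n = n*h/(2L), so lambda_n = h/p_n = 2L/n.
H = 6.626e-34  # Planck's constant, J*s

def box_wavelength(n, L=3.0e-10):
    """De Broglie wavelength of the n-th stationary state, in meters."""
    return 2 * L / n

def box_momentum(n, L=3.0e-10):
    """Momentum magnitude p_n = h / lambda_n = n*h/(2L), in kg*m/s."""
    return H / box_wavelength(n, L)
```

Running `box_wavelength` and `box_momentum` for n = 1, 2, 3 reproduces the six values quoted in parts a–c.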
# Electric Current, Voltage, and Resistance Overview | Three Basic Electrical Quantities

Electric current, voltage, and resistance are three of the fundamental electrical properties. Stated simply:

Current is the directed flow of charge through a conductor.
Voltage is the force that generates the current.
Resistance is an opposition to current that is provided by the material, component, or circuit.

Electric current, voltage, and resistance are the three primary properties of an electrical circuit. The relationships among them are defined by the fundamental law of circuit operation, called Ohm’s law.

## Electric Current

As you know, an outside force can break an electron free from its parent atom. In copper (and other metals), very little external force is required to generate free electrons. In fact, the thermal energy (heat) present at room temperature (22 °C) can generate free electrons. The number of electrons generated varies directly with temperature. In other words, higher temperatures generate more free electrons.

The motion of the free electrons in copper is random when no directing force is applied. That is, the free electrons move in every direction, as shown in Figure 1. Since the free electrons are moving in every direction, the net flow of electrons in any one direction is zero.

Figure 1 Random electron motion in copper

Figure 2 illustrates what happens when an external force causes all of the electrons to move in the same direction. In this case, a negative potential is applied to one end of the copper and a positive potential is applied to the other. As a result, the free electrons all move from negative to positive, and we can say that we have a directed flow of charge (electrons). This directed flow of electrons is called electric current.

Figure 2 Directed electron motion in copper.
Let’s look at what happens on a larger scale when electron motion is directed by an outside force. In Figure 3, the negative potential directs electron flow (current) toward the positive potential. The current passes through the lamp, causing it to produce light and heat. The more intense the current (meaning the greater its value), the greater the light and heat produced by the bulb.

Figure 3 Current through a basic lamp circuit.

Electric current is represented in formulas by the letter I (for intensity). The intensity of current is determined by the amount of charge flowing per second. The greater the flow of charge per second, the more intense the current.

Coulombs and Amperes

The charge on a single electron is not sufficient to provide a practical unit of measure for charge. Therefore, the coulomb (C) is used as the basic unit of charge. One coulomb equals the total charge on 6.25 × 10^18 electrons. When one coulomb of charge passes a point in one second, we have one ampere (A) of electric current. In other words,

1 ampere = 1 coulomb per second, or 1 A = 1 C/s

The total current passing a point (in amperes) can be found by dividing the total charge (in coulombs) by the time (in seconds). By formula:

$I=\frac{Q}{t}$   (1)

Where
I = the intensity of electric current, in amperes
Q = the total charge, in coulombs
t = the time it takes the charge to pass a point, in seconds

This relationship is illustrated in Example 1.

Example 1
Three coulombs of charge pass through a copper wire every second. What is the value of electric current?

Solution
Using Equation 1, the current is found as
$I=\frac{Q}{t}=\frac{3\,C}{1\,s}=3\,{}^{C}/{}_{s}=3\,A$

Example 1 is included here to help you understand the relationship between amperes, coulombs, and seconds. In practice, electric current is not calculated using Equation 1, because you cannot directly measure coulombs of charge.
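The relationship in Example 1 is simple enough to sketch in code (a toy illustration of Equation 1, not part of the original text; the function name is mine):

```python
def current_amperes(charge_coulombs, time_seconds):
    """I = Q / t: intensity of current, in amperes."""
    return charge_coulombs / time_seconds

# Example 1: three coulombs pass a point every second.
print(current_amperes(3.0, 1.0))  # 3.0 (amperes)
```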
As you will learn, there are far more practical ways to calculate current.

Two Theories: Conventional Current and Electron Flow

There are two theories that describe electric current, and you will come across both in practice. The conventional current theory defines current as the flow of charge from positive to negative. This theory is called “conventional current” because it is the older of the two approaches to current, and for many years it was the only one taught outside of military and trade schools.

Electron flow is the newer of the two current theories. Electron flow theory defines current as the flow of electrons from negative to positive.

The two electric current theories are contrasted in Figure 4. Each circuit contains a battery and a lamp. Conventional current begins at the positive battery terminal, passes through the lamp, and returns to the battery through its negative terminal. Electron flow is in the opposite direction: it begins at the negative terminal, passes through the lamp, and returns to the battery through its positive terminal.

Figure 4 Conventional current and electron flow.

It is worth noting that the two circuits in Figure 4 are identical. The only difference between the two is how we describe the electric current. In practice, how you view current does not affect any circuit calculations, measurements, or test procedures. Even so, you should get comfortable with both viewpoints, since both are used by many engineers, technicians, and technical publications. In this text, we take the electron flow approach to current. That is, we will assume current is the flow of electrons from negative to positive.

Direct Current (DC) Versus Alternating Current (AC)

Current is generally classified as being either direct current (DC) or alternating current (AC). The differences between direct current and alternating current are illustrated in Figure 5.

Figure 5 Direct current (DC) and alternating current (AC).

Direct current is unidirectional.
That is, the flow of charge is always in the same direction. The term direct current usually implies that the current has a fixed value. For example, the graph in Figure 5a shows that the current has a constant value of 1 A. While a fixed value is implied, direct current can change in value. However, the direction of current does not change.

Alternating current is bidirectional. That is, the direction of current changes continually. For example, in Figure 5b, the graph shows that the current builds to a peak value in one direction and then builds to a peak value in the other direction. Note that the alternating current represented by the graph not only changes direction but is constantly changing in value.

Electric Current Produces Heat

Whenever electric current is generated through a component or circuit, heat is produced. The amount of heat varies with the level of current: the greater the current, the more heat it produces. This is why many high-current components, like motors, get hot when they are operated. Some high-current circuits get so hot that they have to be cooled.

The heat produced by electric current is sometimes a desirable thing. Toasters, electric stoves, and heat lamps are common items that take advantage of the heat produced by current.

Figure 6 High current causes a stove heating element to glow red.

Putting It All Together

Free electrons are generated in copper at room temperature. When undirected, the motion of these free electrons is random, and the net flow of electrons in any one direction is zero. When directed by an outside force, free electrons are forced to move in a uniform direction. This directed flow of charge is referred to as electric current.

Electric current is represented by the letter I, which stands for intensity. The intensity of current depends on the amount of charge moved and the time required to move it. Electric current is measured in amperes (A).
When one coulomb of charge passes a point every second, you have one ampere of current.

There are two current theories. The electron flow theory describes current as the flow of charge (electrons) from negative to positive. The conventional current theory describes current as the flow of charge from positive to negative. Both approaches are widely followed. The way you view current does not affect the outcome of any circuit calculations, measurements, or test procedures.

Most electrical and electronic systems contain both direct current (DC) and alternating current (AC) circuits. In DC circuits, the current is always in the same direction. In AC circuits, the current continually changes direction.

Review Questions

How are free electrons generated in a conductor at room temperature?
The thermal energy (heat) present at room temperature is enough to generate free electrons.

What is electric current? What factors affect the intensity of electric current?
Current is the directed flow of electrons in a material. The intensity of current depends on the amount of charge moved and the time required to move it.

What is a coulomb?
One coulomb equals the total charge on 6.25 × 10^18 electrons.

What is the basic unit of electric current?
The ampere is the basic unit of electric current. It is defined as 1 coulomb per second, or 1 A = 1 C/s.

Contrast the electron flow and conventional current theories.
Conventional current theory defines current as the flow of charge from positive to negative. Electron flow is the flow of charge from negative to positive.

## Voltage

Voltage can be described as a force that generates the flow of electrons (current) through a circuit. In this section, we take a detailed look at voltage and how it generates current.

Generating Current with a Battery

The battery in Figure 7a has two terminals. The positive (+) terminal has an excess of positive ions and is described as having a positive potential.
The negative (-) terminal has an excess of electrons and is described as having a negative potential.

Figure 7 A difference of potential and a resulting current.

Thus, there is a difference of potential, or voltage (V), between the two terminals. If we connect the two terminals of the battery with the copper wire and lamp (Figure 7b), a current is produced as the electrons are drawn to the positive terminal of the battery. In other words, there is a directed flow of electrons from the negative (-) terminal to the positive (+) terminal of the battery.

There are several important points that need to be made:

1. Voltage is a force that moves electrons; for this reason, it is often referred to as electrical force (E) or electromotive force (EMF).
2. Current and voltage are not the same thing. Current is the directed flow of electrons from negative to positive. Voltage is the electrical force that generates current. In other words, current occurs as a result of an applied voltage (electrical force).

The volt (V) is the unit of measure for voltage. Technically defined, one volt is the amount of electrical force that uses one joule (J) of energy to move one coulomb (C) of charge. That is,

1 volt = 1 joule per coulomb, or 1 V = 1 J/C

Review Questions

What is voltage?
Voltage is the force that generates current in a circuit.

How does voltage generate a current through a wire?
A voltage source has an excess of electrons (negative charge) on one terminal and an excess of positive ions on the other. This is referred to as a potential difference. The excess electrons at the negative terminal are attracted by the positive ions on the positive terminal. This results in the flow of charge in any wire that connects the two terminals of the voltage source.

What is the unit of measure for voltage? How is it defined?
The unit of measure for voltage is the volt.
One volt is the amount of electrical force that uses one joule (J) of energy to move one coulomb (C) of charge: 1 V = 1 J/C.

How would you define a coulomb in terms of voltage and energy?
One coulomb equals one joule per volt: 1 C = 1 J/V.

How would you define a joule in terms of voltage and charge?
One joule equals one volt times one coulomb: 1 J = 1 V × 1 C.

## Resistance

All elements provide some opposition to current. This opposition to current is called resistance. The higher the resistance of an element, component, or circuit, the lower the current produced by a given voltage.

Resistance (R) is measured in ohms. Ohms are represented using the Greek letter omega (Ω). Technically defined, one ohm is the amount of resistance that limits current to one ampere when one volt of electrical force is applied. This definition is illustrated in Figure 8.

Figure 8 A basic electric circuit.

The schematic diagram in Figure 8 shows a battery that is connected to a resistor. A resistor is a component that provides a specific amount of resistance. As shown in the figure, a resistance of 1 Ω limits the current to 1 A when 1 V is applied. Note that the long end-bar on the battery schematic symbol represents the battery’s positive terminal and the short end-bar represents its negative terminal.

Putting It All Together

We have now defined charge, current, voltage, and resistance. For convenience, these electrical properties are summarized in Table 1.

Table 1: Basic Electrical Properties

Many of the properties listed in Table 1 can be defined in terms of the others. For example, in our discussion on resistance, we said that one ohm is the amount of resistance that limits current to one ampere when one volt of electrical force is applied. By the same token, we can redefine the ampere and the volt as follows:

1. One ampere is the amount of current that is generated when one volt of electrical force is applied to one ohm of resistance.
2.
One volt is the amount of electrical force required to generate one ampere of current through one ohm of resistance.

Review Questions

What is resistance?
Resistance is the opposition to current.

What is the basic unit of resistance, and how is it defined?
The unit of resistance is the ohm (Ω). One ohm is the amount of resistance that limits current to one ampere when one volt is applied: 1 Ω = 1 V/A.

Define each of the following values in terms of the other two: current, voltage, and resistance.
1 V is the force required to cause 1 ampere of current through 1 ohm of resistance. 1 A is the current that results when 1 volt is applied to 1 ohm of resistance. 1 Ω is the resistance that limits current to 1 ampere when 1 volt is applied.
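The three-way relationships above can be sketched numerically via Ohm's law, V = I × R (a toy illustration, not part of the original text; the function name is mine):

```python
def ohms_law(v=None, i=None, r=None):
    """Return the missing quantity from Ohm's law V = I * R.

    Pass exactly two of: v (volts), i (amperes), r (ohms).
    """
    if sum(x is None for x in (v, i, r)) != 1:
        raise ValueError("pass exactly two of v, i, r")
    if v is None:
        return i * r
    if i is None:
        return v / r
    return v / i

# Figure 8: 1 V applied across 1 ohm limits the current to 1 A.
print(ohms_law(v=1.0, r=1.0))  # 1.0 (amperes)
```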
# [XRM2008](http://xrm2008.web.psi.ch)

## 21 July 2008

### 10:38 - Salome: Fluorescence microscopy

* X-ray microscopy @ ESRF
* technical development • different beamlines
* GUI for control of microscope > ROI can be drawn in, direct conversion of coordinates
* multimodal nano-imaging set-up • prototype setup, operated with pink beam • multilevel-detectors
* fluorescence tomography • sinograms from fluorescence! • algorithmic solutions are preferred over mechanical solutions • different materials can be extracted • fluorescence signal and diffraction signal are obtained @ same time, crystalline phases can be reconstructed

(mostly showed an overview with lots of technical diagrams)

### 11:11 - Kaulich: TwinMic at Elettra

* spectromicroscopy • spectroscopy of human cells • 80 × 80 µm² image width • simultaneously acquire signals from different elements!

(spectroscopy, rather technical)

### 11:36 - Holzner: Fluorescence & phase contrast microscopy

* mass per area can be known, but phase contrast is needed to obtain full information on biological probes
* difference of opposing detector halves (segmented detector is used) > already obtain information from probe • correlation of soft tissue with elemental content (with directional dependence)
* phase image increases resolution
* directly obtain thickness map of sample • determine elemental concentration

### 11:57 - Bergmann: Archimedes manuscript

* XRF for document recovery of a scientifically very valuable script
* nothing (original) has survived of Archimedes' writings > recopying it on “new data formats”
* all we know about Archimedes comes from 3 documents (codex A, B and C)
* geometrical discovery by physical thought-experiment
* codex has been imaged after it has been bought by “donor”
* Archimedes' writings have been overwritten by a prayer-book, so recovery was “scientifically relevant”
* archeology with highly technical methods (spectroscopy)
* 10^6–10^7 px in 1–10 hours
* imaging of soft tissue
is the ultimate goal, fossils can be done now
* [data](http://www.archimedespalimpsest.net)
* [more info](http://www.archimedespalimpsest.org)
* publication in Physics World, 2007

### 14:01 - Otero: Dynamic STM

* STM > atomic resolution of sample surface is easy, morphology can also be extracted
* molecule movement observed (rotated molecules move, unrotated stay put) > diffusion coefficient depends on the orientation of the molecule regarding the surface
* hybrid solar cells using dye molecules • “basically convert power out of vegetables” > hard to capture bio-molecules on surfaces • knowledge about the molecule (achieved through STM) helps with its design and makes it possible to cover surfaces with nearly everything you want…

### 14:50 - Saito: SR-STM

* optimization of SR-STM @ beamline, mechanical tips, etc.

### 15:21 - Ono: Nanosheets

* oxide nanosheets, layered compound which is delaminated into single sheets (~1 nm thick)
* stacked nanosheets can be achieved > tailoring the properties
* a tiny amount of sheet material still gets us good spectra

—

## 22 July 2008

worked for Akira

—

## 23.
July 2008

### 10:49 - Vogel: stretched proteins

* protein structure > obtain information through fluoroscopy
* confocal microscopy
* protein unfolding occurs in cell culture
* protein droplets > pull out fibers and deposit those on stretchable substrates
* strained proteins can become physiologically relevant/significant • bacterial adhesion is enhanced by shear flow > high flow gives high adhesion • resistance control would become feasible • could be used as a nanoglue, since the bond gets stronger as it's pulled on

### 11:22 - Sasaki: Functional membrane proteins

* dynamical study of proteins
* single molecular detection system
* diffracted x-ray tracking (DXT)
* proteins can be imaged with the use of “x-ray radiation pressure”
* making artificial nano-crystals • commercially available crystals are often not perfectly enough crystallised • 3D and 1D nano-crystals (1D is enough for Sasaki's applications) • pH enables them to alter the state of the protein, which can then be observed with DXT

### 11:48 - Vogt: endogenous metals in cells

* metals are fundamental components of biological systems • linked to diseases, used in therapeutics and diagnostics
* is XRF the correct tool for the job? • it is at least better if you compare an analytical EM and a hard x-ray microscope

### 12:17 - Lee: Hard x-ray phase contrast microscopy

* samples are on the µm scale
* phase contrast makes staining unnecessary > easy imaging of biological samples (be it either optical or x-ray microscopy)
* sample preparation (wet/dry) still destroyed the sample through surface tension (> shear forces)
* micro air bubbles can be shown
* velocity profile with a resolution of several µs

### 14:05 - Hertz: Lab x-ray micro imaging

* compact water-window microscopy
* relatively weak source > high efficiency zoneplates
* functional imaging with size-selective coll.
Au identification (with wavelet filtering)
* no real progression on compact sources • used to be rotating anode > ~100 W/mm² • new: liquid jet with much higher output energy > higher speed of the anode (compared to the rotating anode), plus it's a regenerative target, since the anode can be damaged • not only liquid metal anodes, but also methanol (which performed much better than expected) > ~1 MW/mm² • fluid dynamics start to play a role for the liquid anode • e-beam and reliability are improved > spin-off
* 3 nm lines can be distinguished
* tumor detection should be feasible
* lab x-ray microscopy approaches synchrotron quality for soft x-rays

### 14:39 - Benk: X-rays from discharge plasma

* lab source for XRM > laser-produced plasma and discharge plasma used as a source
* driving force was the lithography application
* hollow cathode used to pinch the plasma to reach the critical conditions for emission

### 15:03 - Sandberg: Table-top diffractive imaging

* diffractive lens-less imaging
* highly coherent source > laser-like beam with gaussian profile
* 72 nm resolution with 47 nm wavelength source > possible because of big NA
* curvature correction of diffraction pattern increases resolution > mathematically match diffraction pattern on “curved” CCD
* holography/phase retrieval hybrid method increases resolution

—

## 24.
July 2008

### 08:30 - Hwa Shik Youn: Bio-fibers & hard X-ray microscopy

* microscope optics influences image contrast

### 09:03 - Nishino: Nanostructure analysis by coherent x-ray diffraction

* diffraction microscopy for biological samples
  * no need for crystallization
  * no need for thin-sectioning
  * no need for staining
* study chromosomes through diffraction imaging
  * unstained chromosomes can be imaged
* 2D to 3D > different incident angles of diffraction are measured
  * 3D Fourier transform
  * showed data consistent with the 2D reconstruction
  * first observation of a cellular organelle in 3D obtained with hard x-rays, with a spatial resolution of 120 nm!
  * but: they are working close to the feature-destroying dose line!
* method can also be used in material science

### 09:29 - Larabell: Quantitative bio imaging

* cryo-stage at the end-station with cryo optical microscope and cryo x-ray microscope
* histogram segmentation of organelles > color-coding parts of the histogram
* variance-weighted mean filtering
* automatic segmentation > ask/look at publications
* zone plates are used in the beamline (showed an extremely nice movie of the whole process! (transmission, FBP, segmentation, visualization, etc.!))

### 10:32 - Hell: STED & 4Pi microscopy

* breaking Abbe's barrier
* increase the resolution of the imaging method simply through physical methods; no assumptions about the material are made
* higher resolution than with a confocal microscope
* focal spot is so small that focal scanning inside the cell is possible > scanning mitochondria with a resolution below 50 nm
* if switching states are recorded, we can go below the diffraction limit, effectively passing Abbe's equation

### 11:27 - Feser: Commercial X-ray microscopy

* commercial applications of different Xradia products
* automatic tomography > passive measurement system to record run-off (poster p2_030)

### 12:06 - Vila Comamala: X-ray diffractive optics

* beam-shaping condenser lens; plate parameters permit the shaping of a square spot
* spatial resolution limit in x-ray microscopy
  * resolution limit comes from the outermost zone plate zone
* multi-keV-range zone plates are possible and are in use @ PSI

### 14:00 - Heim: Full field microscopy

* automated tomography @ ~400 proj/30 min
* volume zone plates should enable sub-10 nm resolution
* cryo-tomography > aligned dataset
* possibly interesting for Dimitri, since they also use some kind of tilt series, but kind of simpler and with bigger sample sizes

### 14:38 - Aoki: Zernike microscopy

* basically just showed images that were obtained with phase contrast methods

### 15:08 - Sakdinawat: Specialized diffractive optics

* DIC magnetic phase contrast
* spiral zone plates
* cubic zone plates (square deformation of the zone pattern)
* specialized zone plates can significantly extend the depth of field

### 16:03 - Stoll: Magnetic vortex dynamics

### 16:33 - Fischer: Magn. dynamics with TXM

### 17:02 - Eimüller: Magnetic TXM

---

## 25.
July 2008

### 08:50 - Cloetens: Hard X-ray Nanotomography

* scanning time is around 3 h, completely limited by the detector
* combination of projection and scanning x-ray microscopy
* detection of a platinum nanoparticle with a diameter of 6 nm
* working on thin slices, so no real tomography, but still chemical imaging on the organelle level
* zoom tomography > sample is much greater than the FOV
* setup to scan laminar structures > sample rotates off the surface-normal axis
* thermal stability of the system is crucial

### 09:06 - Ludwig: Diffraction contrast tomography

* analysis of structural material response to external stimuli
* differential aperture > sub-micrometer spatial resolution
* analysis
  * background removal
  * pair matching of projections of 180° pairs
  * indexing
  * back-projection is then possible, and then the full sample can be reconstructed (sample is 0.6 mm in diameter)
* forward simulation for proof of image acquisition
* strain in the sample can be measured and extracted

### 09:35 - Brennan: Nano-tomography of a comet

* analysis of a comet to determine the original composition of the universe
* collect comet dust with aerogel
* imaging with an Xradia XRM with 40 nm resolution @ 5-14 keV
* imaging of the sample without destroying it: nanotomography
* up to now not using diffraction, but it is still possible to study the chemical composition of the sample

### 10:32 - Suzuki: Imaging, holography & tomography

* holography with a combination of zone plate objective and prism interferometer
* phase-contrast CT by imaging holography

### 11:03 - Hitchcock: STXM tomography

* combining imaging and spectroscopy
* quantitative chemical maps from the differential image at two different energies
* radiation dose is something you have to think about > wet environment > sample moved > cryo-stage is needed

### 11:36 - Ade: STXM - from science to applications

* applications towards more efficient photovoltaic materials
* fabrication of organic solar cells
# CPU Affinity

Bind specific processes to specific processors with a new system call.

The ability in Linux to bind one or more processes to one or more processors, called CPU affinity, is a long-requested feature. The idea is to say "always run this process on processor one" or "run these processes on all processors but processor zero". The scheduler then obeys the order, and the process runs only on the allowed processors.

Other operating systems, such as Windows NT, have long provided a system call to set the CPU affinity for a process. Consequently, demand for such a system call in Linux has been high. Finally, the 2.5 kernel introduced a set of system calls for setting and retrieving the CPU affinity of a process.

In this article, I look at the reasons for introducing a CPU affinity interface to Linux. I then cover how to use the interface in your programs. If you are not a programmer, or if you have an existing program you are unable to modify, I cover a simple utility for changing the affinity of a given process using its PID. Finally, we look at the actual implementation of the system call.

## Soft vs. Hard CPU Affinity

There are two types of CPU affinity. The first, soft affinity, also called natural affinity, is the tendency of a scheduler to try to keep processes on the same CPU as long as possible. It is merely an attempt; if it is ever infeasible, the processes certainly will migrate to another processor. The new O(1) scheduler in 2.5 exhibits excellent natural affinity. On the opposite end, however, is the 2.4 scheduler, which has poor CPU affinity.
This behavior results in the ping-pong effect. The scheduler bounces processes between multiple processors each time they are scheduled and rescheduled. Table 1 is an example of poor natural affinity; Table 2 shows what good natural affinity looks like.

|           | time 1 | time 2 | time 3 | time 4 |
|-----------|--------|--------|--------|--------|
| Process A | CPU 0  | CPU 1  | CPU 0  | CPU 1  |

Table 1. The Ping-Pong Effect

|           | time 1 | time 2 | time 3 | time 4 |
|-----------|--------|--------|--------|--------|
| Process A | CPU 0  | CPU 0  | CPU 0  | CPU 0  |

Table 2. Good Affinity

Hard affinity, on the other hand, is what a CPU affinity system call provides. It is a requirement, and processes must adhere to a specified hard affinity. If a process is bound to CPU zero, for example, then it can run only on CPU zero.

## Why One Needs CPU Affinity

Before we cover the new system calls, let's discuss why anyone would need such a feature. The first benefit of CPU affinity is optimizing cache performance. I said the O(1) scheduler tries hard to keep tasks on the same processor, and it does. But in some performance-critical situations--perhaps a large database or a highly threaded Java server--it makes sense to enforce the affinity as a hard requirement. Multiprocessing computers go through a lot of trouble to keep the processor caches valid. Data can be kept in only one processor's cache at a time. Otherwise, the caches may grow out of sync, leading to the question, who has the data that is the most up-to-date copy of the main memory? Consequently, whenever a processor adds a line of data to its local cache, all the other processors in the system also caching it must invalidate that data. This invalidation is costly and unpleasant. But the real problem comes into play when processes bounce between processors: they constantly cause cache invalidations, and the data they want is never in the cache when they need it. Thus, cache miss rates grow very large. CPU affinity protects against this and improves cache performance.

A second benefit of CPU affinity is a corollary to the first.
If multiple threads are accessing the same data, it might make sense to bind them all to the same processor. Doing so guarantees that the threads do not contend over data and cause cache misses. This does diminish the performance gained from multithreading on SMP. If the threads are inherently serialized, however, the improved cache hit rate may be worth it.

The third and final benefit is found in real-time or otherwise time-sensitive applications. In this approach, all the system processes are bound to a subset of the processors on the system. The specialized application then is bound to the remaining processors. Commonly, in a dual-processor system, the specialized application is bound to one processor, and all other processes are bound to the other. This ensures that the specialized application receives the full attention of the processor.

## Getting the New System Calls

The system calls are new, so they are not available yet in all systems. You need at least kernel 2.5.8-pre3 and glibc 2.3.1; glibc 2.3.0 supports the system calls, but it has a bug. The system calls are not yet in 2.4, but patches are available at www.kernel.org/pub/linux/kernel/people/rml/cpu-affinity. Many distribution kernels also support the new system calls. In particular, Red Hat 9 is shipping with both kernel and glibc support for the new calls. Real-time solutions, such as MontaVista Linux, also fully support the new interface.

On most systems, Linux included, the interface for setting CPU affinity uses a bitmask. A bitmask is a series of n bits, where each bit individually corresponds to the status of some other object. For example, CPU affinity (on 32-bit machines) is represented by a 32-bit bitmask. Each bit represents whether the given task is bound to the corresponding processor. Count the bits from right to left, bit 0 to bit 31 and, thus, processor zero to processor 31.
For example:

```
11111111111111111111111111111111 = 4,294,967,295
```

is the default CPU affinity mask for all processes. Because all bits are set, the process can run on any processor. Conversely:

```
00000000000000000000000000000001 = 1
```

is much more restrictive. Only bit 0 is set, so the process may run only on processor zero. That is, this affinity mask binds a process to processor zero.

Get it? What do the next two masks equal in decimal? What is the result of using them as the affinity mask of a process?

```
10000000000000000000000000000000
00000000000000000000000000000011
```

The first is equal to 2,147,483,648 and, because bit 31 is set, binds the process to processor number 31. The second is equal to 3, and it binds the process in question to processor zero and processor one.

The Linux CPU affinity interface uses a bitmask like that shown above. Unfortunately, C does not support binary constants, so you always have to use the decimal or hexadecimal equivalent. You may get a compiler warning for very large decimal constants that set bit 31, but they will work.

## Using the New System Calls

With the correct kernel and glibc in hand, using the system calls is easy:

```c
#define _GNU_SOURCE
#include <sched.h>

long sched_setaffinity(pid_t pid, unsigned int len,
                       unsigned long *mask);

long sched_getaffinity(pid_t pid, unsigned int len,
                       unsigned long *mask);
```

The first system call is used to set the affinity of a process, and the second system call retrieves it. In either system call, the PID argument is the PID of the process whose mask you wish to set or retrieve. If the PID is set to zero, the PID of the current task is used. The second argument is the length in bytes of the CPU affinity bitmask, currently four bytes (32 bits). This number is included in case the kernel ever changes the size of the CPU affinity mask and allows the system calls to be forward-compatible with any changes; breaking syscalls is bad form, after all. The third argument is a pointer to the bitmask itself.
Let us look at retrieving the CPU affinity of a task:

```c
unsigned long mask;
unsigned int len = sizeof(mask);

if (sched_getaffinity(0, len, &mask) < 0) {
        perror("sched_getaffinity");
        return -1;
}
```

As a convenience, the returned mask is binary ANDed against the mask of all processors in the system. Thus, processors in your system that are not on-line have corresponding bits that are not set. For example, a uniprocessor system always returns 1 for the above call (bit 0 is set and no others).

Setting the mask is equally easy:

```c
unsigned long mask = 7; /* processors 0, 1, and 2 */
unsigned int len = sizeof(mask);

if (sched_setaffinity(0, len, &mask) < 0) {
        perror("sched_setaffinity");
}
```

This example binds the current process to the first three processors in the system. You then can call sched_getaffinity() to ensure the change took effect. What does sched_getaffinity() return for the above setup if you have only two processors? What if you have only one? The system call fails unless at least one processor in the bitmask exists. Using a mask of zero always fails. Likewise, binding to processor seven if you do not have a processor seven will fail.

It is possible to retrieve the CPU affinity mask of any process on the system. You can set the affinity of only the processes you own, however. Of course, root can set any process's affinity.

## I Want a Tool!

If you are not a programmer, or if you cannot modify the source for whatever reason, you still can bind processes. Listing 1 is the source code for a simple command-line utility to set the CPU affinity mask of any process, given its PID. As we discussed above, you must own the process or be root to do this.

Listing 1. bind

Usage is simple; once you learn the decimal equivalent of the CPU mask, you need:

```
usage: bind pid cpu_mask
```

As an example, assume we have a dual-processor computer and want to bind our Quake process (with PID 1600) to the second processor (CPU one). We would enter the following:

```
bind 1600 2
```

## Getting Really Crafty

In the previous example, we bound Quake to one of the two processors in our system.
To ensure top-notch frame rates, we need to bind all the other processes on the system to the other processor. You can do this by hand or by writing a crafty script, but neither is efficient. Instead, make use of the fact that CPU affinity is inherited across a fork(). All of a process's children receive the same CPU affinity mask as their parent.

Then, all we need to do is have init bind itself to one processor. All other processes, by nature of init being the root of the process tree and thus the superparent of all processes, are then likewise bound to the one processor.

The cleanest way to do this type of bind is to hack this feature into init itself and pass in the desired CPU affinity mask using the kernel command line. We can accomplish our goal with a simpler solution, though, without having to modify and recompile init. Instead, we can edit the system startup script. On most systems this is /etc/rc.d/rc.sysinit or /etc/rc.sysinit, the first script run by init. Place the sample bind program in /bin, and add these lines to the start of rc.sysinit:

```
/bin/bind 1 1
/bin/bind $$ 1
```

These lines bind init (whose PID is one) and the current process to processor zero. All future processes will fork from one of these two processes and thus inherit the CPU affinity mask. You then can bind your process (whether it be a real-time nuclear control system or Quake) to processor one. All processes will run on processor zero except our special process (and any children), which will run on processor one. This ensures that the entire processor is available for our special process.

## Kernel Implementation of CPU Affinity

Long before Linus merged the CPU affinity system calls, the kernel supported and respected a CPU affinity mask. There was no interface by which user space could set the mask. Each process's mask is stored in its task_struct as an unsigned long, cpus_allowed. The task_struct structure is called the process descriptor. It stores all the information about a process.
The CPU affinity interface merely reads and writes cpus_allowed. Whenever the kernel attempts to migrate a process from one processor to another, it first checks to see if the destination processor's bit is set in cpus_allowed. If the bit is not set, the kernel does not migrate the process. Further, whenever the CPU affinity mask is changed, if the process is no longer on an allowed processor, it is migrated to one that is allowed. This ensures the process begins on a legal processor and can migrate only to a legal processor. Of course, if it is bound to only a single processor, it does not migrate anywhere.

## Conclusion

The CPU affinity interface introduced in 2.5 and back-ported elsewhere provides a simple yet powerful mechanism for controlling which processes are scheduled onto which processors. Users with more than one processor may find the system calls useful in squeezing another drop of performance out of their systems or for ensuring that processor time is available for even the most demanding real-time task. Of course, users with only one processor need not feel left out. They also can use the system calls, but they aren't going to be too useful.
# On Venn Diagrams and the Counting of Regions

Generalization of the fact that $$n^2-n+2$$ is the maximum number of disjoint regions in the plane that can be formed by $$n$$ circles using the basic set operations.

Author(s): Branko Grunbaum (University of Washington)

Publication Date: Wednesday, August 3, 2005

Original Publication Source: College Mathematics Journal

Original Publication Date: November, 1984

Subject(s): Geometry and Topology, Plane Geometry

Applicable Course(s): 4.9 Geometry
Dividing and factorising polynomial expressions: test questions

1. When $$3x^3 + 2x^2 - 7x + 5$$ is divided by $$x - 2$$, what is the remainder?

2. When $$x^4 + 2x^3 - x^2 + 5$$ is divided by $$x - 3$$, what is the remainder?

3. Find the remainder when dividing the polynomial $$f(x) = 4x^3 - 2x + 7$$ by $$(x - 1)$$.

4. If $$x^4 + 2x^3 - kx^2 + 2x - 3$$ divides exactly by $$x + 3$$, what is the value of $$k$$?

5. When $$2x^3 + x^2 - 5x + 2$$ is divided by $$2x - 1$$, what are the other factors?

6. One root of the equation $$4x^3 - 8x^2 - x + 2 = 0$$ is $$\frac{1}{2}$$. What are the other roots?
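The remainder questions above can all be checked with the Remainder Theorem: the remainder of f(x) divided by (x - a) is f(a). A quick sketch (the evaluations are my own working, not part of the quiz):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients from highest degree down) via Horner's method."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# Q1: remainder of 3x^3 + 2x^2 - 7x + 5 divided by (x - 2) is f(2)
print(poly_eval([3, 2, -7, 5], 2))       # 23

# Q2: remainder of x^4 + 2x^3 - x^2 + 5 divided by (x - 3) is f(3)
print(poly_eval([1, 2, -1, 0, 5], 3))    # 131

# Q3: remainder of 4x^3 - 2x + 7 divided by (x - 1) is f(1)
print(poly_eval([4, 0, -2, 7], 1))       # 9

# Q4: exact division by (x + 3) means f(-3) = 0.
# f(-3) = 81 - 54 - 9k - 6 - 3 = 18 - 9k, so k = 2; verify:
k = 2
print(poly_eval([1, 2, -k, 2, -3], -3))  # 0
```

The same one-liner check works for question 6: a value c is a root exactly when `poly_eval([4, -8, -1, 2], c)` returns 0.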
# How to Convert Radians to Degrees

Updated: August 6, 2019 | References

Radians and degrees are both units used for measuring angles. As you may know, a circle is comprised of 2π radians, which is the equivalent of 360°; both of these values represent going "once around" a circle. Therefore, π radians represents going 180° around a circle, which makes 180/π the perfect conversion tool for moving from radians to degrees. To convert from radians to degrees, you simply have to multiply the radian value by 180/π. If you want to know how to do this, and to understand the concept in the process, read this article.

## Steps

1. Know that π radians is equal to 180 degrees. Before you begin the conversion process, you have to know that π radians = 180°, which is equivalent to going halfway around a circle. This is important because you'll be using 180/π as a conversion metric, since 1 radian is equal to 180/π degrees.[1]

2. Multiply the radians by 180/π to convert to degrees. It's that simple. Let's say you're working with π/12 radians. Then, you've got to multiply it by 180/π and simplify when necessary. Here's how you do it:[2]
   * π/12 × 180/π = 180/12 = 15°

3. Practice with a few examples. If you really want to get the hang of it, then try converting from radians to degrees with a few more examples. Here are some other problems you can do:
   * Example 1: π/3 radians = π/3 × 180/π = 180/3 = 60°
   * Example 2: 7π/4 radians = 7π/4 × 180/π = 1260/4 = 315°
   * Example 3: π/2 radians = π/2 × 180/π = 180/2 = 90°

4. Remember that there's a difference between "radians" and "π radians." If you say 2π radians or 2 radians, you are not using the same terms. As you know, 2π radians is equal to 360 degrees, but if you're working with 2 radians, then if you want to convert it to degrees, you will have to calculate 2 × 180/π. You will get 360/π, or approximately 114.6°.
This is a different answer because, if you're not working with π radians, the π does not cancel out in the equation and results in a different value.[3]

## Community Q&A

**Q: How many degrees is 1.03 radians?**
A (Donagan): We know from working with the numbers in the article above that one radian is equivalent to approximately 57.3 degrees. Therefore, you would multiply 57.3 by 1.03 to find the number of degrees you're looking for.

**Q: How do I convert degrees into radians?**
A (Donagan): The easiest way to do it is to recognize that 180° equals π radians, or 3.14 radians. Then determine what fraction (or percentage) of 180° the angle you're concerned with is, and multiply that fraction by 3.14 radians. For example, to convert 60° to radians, divide 60° by 180°. That's 1/3. Then multiply 1/3 by 3.14: that's 1.05 radians.

**Q: How do I convert 11/16 of a radian to degrees?**
A (Donagan): Since 1 radian is approximately 57.3 degrees, 11/16 of a radian is (11/16)(57.3°) = 39.39°.

**Q: How is 1/4 radian converted to degrees?**
A (Donagan): Because π radians = 3.14 radians = 180°, 1 radian = 180°/3.14 = 57.32°. Therefore, 1/4 radian = 57.32°/4 = 14.33°.

**Q: How do I find the length of an arc of a circle whose radius is intercepted by theta?**
A: The arc length of a circle can be found by multiplying the angle measurement (theta, in radians) by the radius r.

**Q: Could this be reversed? In other words, can I find the radians from the degrees by dividing the degrees by 180/pi?**
A: Yes, the degree/radian relationship can be used to convert in either direction.

**Q: How would I convert 3.14 divided by 3 from radians to degrees?**
A (Donagan): As with any other radians-to-degrees conversion, multiply the radians value by 180/π. In this case, (π/3)(180/π) = 180/3 = 60°.

**Q: How do I convert negative degrees to radians?**
A (Donagan): The process is the same as with positive degrees.
Just insert a negative sign in front of the final radians value.

## Tips

* When multiplying, leave the π in your radians as the symbol, not the decimal approximation; this way you can more easily cancel it out during your calculation.
* Many graphing calculators come with functions to convert units, or can download programs to do so. Ask your math teacher if such a function exists on your calculator.

## Things You'll Need

* Pen or pencil
* Paper
* Calculator

## Article Summary

To convert radians to degrees, the key is knowing that 180 degrees is equal to π radians. Then multiply the measurement in radians by 180 divided by π. For example, π over 3 radians would be equal to 60 degrees. If the measurement is 2 radians, remember that it does not include π, and multiply 2 by 180 divided by π to get approximately 114.6 degrees.
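The conversion rule (multiply by 180/π) is a one-liner in most languages; Python's standard library even ships it as `math.degrees`. A sketch to sanity-check the worked examples above:

```python
import math

def rad_to_deg(radians):
    """Convert an angle in radians to degrees by multiplying by 180/pi."""
    return radians * 180 / math.pi

print(rad_to_deg(math.pi / 12))  # 15.0 (up to floating-point rounding)
print(rad_to_deg(math.pi / 3))   # 60.0 (up to floating-point rounding)
print(rad_to_deg(2))             # about 114.59: 2 radians, not 2*pi radians

# the stdlib helper performs the identical conversion
assert math.isclose(rad_to_deg(1.5), math.degrees(1.5))
```

Note the last print: passing a bare `2` gives roughly 114.6°, matching the "radians vs. π radians" distinction the article warns about.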
# Dogfight

Two players play a game on the Cartesian plane which occurs in 2 phases.

In the first phase, a number $$n$$ is selected from $$\{2,3,4,\ldots, 999\}.$$ The first player then places a point at a location $$(x,y)$$ in the plane satisfying $$-n \leq x \leq n$$, $$-n \leq y \leq n$$ with $$x,y$$ integers. The players alternate placing points until a total of $$n$$ points have been placed.

In the second phase, the first player picks two distinct points in the plane that are not joined and joins them by a curve that does not intersect itself, any of the other $$n-2$$ points, or any of the already drawn curves. The players alternate turns, drawing curves in this manner. The first player who is unable to draw a curve loses.

For the 998 starting values of $$n,$$ determine how many of these the first player has a winning strategy for.

Details and assumptions:

* The players get to choose where they want to place their points.
* The curve is a continuous path which does not include the endpoints. Hence, 2 curves may seem to intersect at one of the $$n$$ points.
Lemma 32.4.16. Let $S$ be a scheme. Let $X = \mathop{\mathrm{lim}}\nolimits X_ i$ be a directed limit of schemes over $S$ with affine transition morphisms. Let $Y \to X$ be a morphism of schemes over $S$.

1. If $Y \to X$ is a closed immersion, $X_ i$ quasi-compact, and $Y$ locally of finite type over $S$, then $Y \to X_ i$ is a closed immersion for $i$ large enough.

2. If $Y \to X$ is an immersion, $X_ i$ quasi-separated, $Y \to S$ locally of finite type, and $Y$ quasi-compact, then $Y \to X_ i$ is an immersion for $i$ large enough.

3. If $Y \to X$ is an isomorphism, $X_ i$ quasi-compact, $X_ i \to S$ locally of finite type, the transition morphisms $X_{i'} \to X_ i$ are closed immersions, and $Y \to S$ is locally of finite presentation, then $Y \to X_ i$ is an isomorphism for $i$ large enough.

Proof. Proof of (1). Choose $0 \in I$ and a finite affine open covering $X_0 = U_{0, 1} \cup \ldots \cup U_{0, m}$ with the property that $U_{0, j}$ maps into an affine open $W_ j \subset S$. Let $V_ j \subset Y$, resp. $U_{i, j} \subset X_ i$, $i \geq 0$, resp. $U_ j \subset X$ be the inverse image of $U_{0, j}$. It suffices to prove that $V_ j \to U_{i, j}$ is a closed immersion for $i$ sufficiently large and we know that $V_ j \to U_ j$ is a closed immersion. Thus we reduce to the following algebra fact: if $A = \mathop{\mathrm{colim}}\nolimits A_ i$ is a directed colimit of $R$-algebras, $A \to B$ is a surjection of $R$-algebras, and $B$ is a finitely generated $R$-algebra, then $A_ i \to B$ is surjective for $i$ sufficiently large.

Proof of (2). Choose $0 \in I$. Choose a quasi-compact open $X'_0 \subset X_0$ such that $Y \to X_0$ factors through $X'_0$. After replacing $X_ i$ by the inverse image of $X'_0$ for $i \geq 0$ we may assume all $X_ i$ are quasi-compact and quasi-separated. Let $U \subset X$ be a quasi-compact open such that $Y \to X$ factors through a closed immersion $Y \to U$ ($U$ exists as $Y$ is quasi-compact).
By Lemma 32.4.11 we may assume that $U = \mathop{\mathrm{lim}}\nolimits U_ i$ with $U_ i \subset X_ i$ quasi-compact open. By part (1) we see that $Y \to U_ i$ is a closed immersion for some $i$. Thus (2) holds.

Proof of (3). Working affine locally on $X_0$ for some $0 \in I$ as in the proof of (1) we reduce to the following algebra fact: if $A = \mathop{\mathrm{colim}}\nolimits A_ i$ is a directed colimit of $R$-algebras with surjective transition maps and $A$ of finite presentation over $A_0$, then $A = A_ i$ for some $i$. Namely, write $A = A_0/(f_1, \ldots , f_ n)$. Pick $i$ such that $f_1, \ldots , f_ n$ map to zero under the surjective map $A_0 \to A_ i$. $\square$
# Homework Help: Linear algebra homework question

**1. Nov 1, 2009 (Nick)**

1. The problem statement, all variables and given/known data

I am entirely lost with this one question; I can't seem to figure out how to do it at all. The question states that ω is a complex number where ω = r·e^(iθ), with r and θ real numbers, r > 0, θ ∈ [0, 2π[, and n a positive integer. Consider the equation z^n = ω. Solve for z in terms of r, θ and n.

2. Relevant equations

I would love to list some relevant equations but like I said, I have absolutely no idea what to do, and any help at all would be greatly appreciated.

3. The attempt at a solution

I have tried looking over it multiple times, however nothing seems to click. If anyone can just simply put me in the right direction or link me to anything that might help, that would be great! -Nick

**2. Nov 1, 2009 (tiny-tim)**

Welcome to PF! (have a theta: θ and a pi: π and an omega: ω, and try using the X2 tag just above the Reply box)

You're looking for a solution in the form z = s·e^(iφ), such that z^n = r·e^(iθ).

**3. Nov 1, 2009 (Nick)**

Hey :) Thanks for the welcome as well as the help. I'm not too sure where to find all those symbols really :( New to all this! And I guess that I replace z in the equation with z = s·e^(iφ), so that z^n = [s·e^(iφ)]^n = r·e^(iθ), but then from there, how do you isolate each variable or constant so that you get some sort of answer? Sorry :( I'm just having a really hard time with this one... Thanks again!

**4. Nov 1, 2009 (tiny-tim)**

ok, and when you expand it, [s·e^(iφ)]^n = … ?

**5. Nov 1, 2009 (Nick)**

Well, it ends up being z^n = s^n·e^(iφn) = r·e^(iθ), but the path still seems blocked to me.... :(

**6. Nov 2, 2009 (tiny-tim)**

(just got up :zzz: …) ok, s^n·e^(inφ) = r·e^(iθ): so now use the fact that the polar form, r·e^(iθ), is unique in r, and unique in θ (mod 2π).

**7. Nov 4, 2009 (rickz02)**

Sir tiny-tim, is my solution to the problem right?
    z^n = r·e^(iθ)
    ln(z^n) = ln(r·e^(iθ))
    n·ln z = ln(r·e^(iθ))
    ln z = (1/n)·ln(r·e^(iθ))
    ln z = ln[(r·e^(iθ))^(1/n)]
    z = (r·e^(iθ))^(1/n)

**8. Nov 5, 2009 (tiny-tim)**

(just got up :zzz: …) erm … it's very long, and it isn't a solution. You've started with z^n = r·e^(iθ), and ended with z = (r·e^(iθ))^(1/n), which is just a restatement of the question. As I said, the solution has to be in the form z = s·e^(iφ): you have to say what s is, and what φ is, in each solution.

**9. Nov 5, 2009 (rickz02)**

But the problem is to solve for z, so I was thinking I just need to have z on one side and the other parameters on the other side (am I right???).

**10. Nov 5, 2009 (tiny-tim)**

Yes, the LHS of your z = (r·e^(iθ))^(1/n) is excellent. It's the RHS that isn't finished.

**11. Nov 7, 2009 (Nick)**

Hey! Sorry, been busy for the past couple of days :( Still stuck on this question though... I'm slowly understanding how it works though :) So, s^n·e^(inφ) = r·e^(iθ). I'm supposed to solve for z, but does that mean that I have to solve for s as well as φ individually or not?... which are the modulus and the argument, I believe? And so z = (s^n·e^(inφ))^(1/n). How do I add in the θ ∈ [0,2π[? I know it has to go in there somewhere!! Thanks again, -Nick

**12. Nov 8, 2009 (tiny-tim)**

Hey Nick! (just got up :zzz: …) Yes, that's correct: if s^n·e^(inφ) = r·e^(iθ), then that's exactly the same as saying s^n = r and e^(inφ) = e^(iθ) (have an ε). Yes! If e^(inφ) = e^(iθ), then that's exactly the same as saying that nφ = θ (mod 2π), or nφ = θ + a multiple of 2π.

**13. Nov 8, 2009 (Nick)**

ok okay thanks :) So then do I replace the φ with θ+2kπ? Or rather nφ, and that's it? Or is there more to be done? And shouldn't the answer in polar form be with φ as opposed to θ? Thanks again :)

**14. Nov 8, 2009 (tiny-tim)**

φ = … ? Not following you.

**15. Nov 8, 2009 (Nick)**

I guess it's nφ = θ+2k, so I substitute nφ for that. My question is... is that where it ends? If the question states: put the solutions in the form z = s·e^(iφ), would we keep nφ or still substitute it for θ+2k?
I'm getting myself confused :S Thanks!!
-Nick

16. Nov 8, 2009

### tiny-tim

Hi Nick! (btw, you mean 2π, not 2)
Look, it's very simple … you haven't done that yet!
φ = … ?​

17. Nov 8, 2009

Sorry, I copy-pasted it but apparently the pi didn't work :S
Okay, so then the answer for what φ equals is φ = (θ + 2kπ)/n, and s^n = r so s = r^(1/n), and that gives me the argument and the modulus. So now to solve for z I write z = r^(1/n) e^(i(θ + 2kπ)/n) = (re^(i(θ + 2kπ)))^(1/n).
Thanks again btw!! :)
-Nick

18. Nov 9, 2009

Stop here!
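The formula the thread converges on, z_k = r^(1/n) · e^(i(θ + 2kπ)/n) for k = 0, 1, …, n−1, is easy to check numerically. Here is a small Python sketch (not part of the original thread; the example values of r, θ and n are arbitrary):

```python
import cmath

def nth_roots(r, theta, n):
    """Return all n solutions of z**n = r * e^(i*theta), for r > 0."""
    s = r ** (1.0 / n)  # the modulus s, chosen so that s**n = r
    return [s * cmath.exp(1j * (theta + 2 * k * cmath.pi) / n)
            for k in range(n)]

# Example: omega = 2 e^(i*pi/3), n = 5
omega = 2 * cmath.exp(1j * cmath.pi / 3)
roots = nth_roots(2, cmath.pi / 3, 5)
for z in roots:
    print(z, abs(z**5 - omega))  # each |z^5 - omega| should be ~0
```

Each value of k picks a different branch of nφ = θ + 2kπ, which is exactly the mod-2π ambiguity tiny-tim pointed out; after k = n−1 the roots repeat.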
# Tag Info

9
The last two who had this error had a problem with a virus program kicking in (Avast in both cases). See e.g. https://github.com/MiKTeX/miktex/issues/585

8
To compile a previous version of the pgf manual as a whole you can perform the following steps: Clone the git repository: git clone https://github.com/pgf-tikz/pgf Checkout the tag you want to compile the manual for (you can list the tags using git tag). I'll do that for 3.0.1 by running git checkout 3.0.1. Configure an RCS provider script like https://...

2
In older LaTeX versions, mixing small caps (\scshape, \textsc) and italic (\itshape, \textit) didn't really work. The commands cancelled each other out; you only got the one or the other: \documentclass{article} \usepackage[T1]{fontenc} \usepackage{lmodern} \begin{document} {\scshape\itshape abc} {\itshape\scshape abc} \end{document} gives e.g. in texlive ...

2
AFAICT, there are two separate issues that need to get resolved: (a) how to inform the front-end/editor that you wish to compile your document with LuaLaTeX (or XeLaTeX) rather than with pdfLaTeX; and (b) how to modify the tex file so that you can use the system fonts called Arial and Arial Narrow. I can't help you with the first issue as I don't use ...

2
You can use the relsize package with the command \mathlarger: \documentclass{article} \usepackage{amsmath,amssymb} \usepackage{relsize} \begin{document} $d'(f(x), f(y)) \leq L d(x,y),\, \mathlarger{\forall} x,y \in X$ \end{document} But if you want it monstrously big 😁😁😁😁😁 you could use \mathlarger{\mathlarger\forall}.

1
I found the problem. Once all the MiKTeX files are downloaded, check inside the containing folder. There will be a new identical installer. Run it and this time it will work. Just look inside the folder where you downloaded your MiKTeX files; you will find the installer as setup-4.0-x64, identical to the one you downloaded from the MiKTeX page. Or in case it ...

1
This is a passing bug in MiKTeX.
It fails if a message that contains bytes which are invalid in UTF-8 has to be written to the terminal. In your case it is an overfull box message which contains umlauts. The log-file (which is 8-bit encoded) then contains this, and MiKTeX is trying to write it to the terminal too: Overfull \hbox (4.07864pt too wide) in ...

1
Here is a solution with some simplifications, using the w column type defined by recent versions of array and the makecell package. Note that the column widths are very large for the text in the cells, and I had to load geometry with option a3paper. You should see if the given dimensions are really necessary. \documentclass{article} \usepackage{romannum} \usepackage{...

1
Your TeXLive would more or less work as-is if you just add c:\texlive\2019\bin\win32 to the search path, although you would miss out on other system integration such as file associations. Or, if you run c:\texlive\bin\win32\tlaunch, it will turn your copy into a launcher-based installation; see https://www.ctan.org/pkg/tlaunch. Afterwards, you can convert ...
# College Math Teaching ## May 31, 2015 ### And a Fields Medalist makes me feel better Filed under: calculus, editorial, elementary mathematics, popular mathematics, topology — Tags: — collegemathteaching @ 10:30 pm I have subscribed to Terence Tao’s blog. $\frac{d^{k+1}}{dx^{k+1}}(1+x^2)^{\frac{k}{2}}$ for $k \in \{1, 2, 3, ... \}$. Try this yourself and surf to his post to see the “slick, 3 line proof”.
## Introduction

Widespread recurrent connections between thalamus and cortex represent a cardinal principle of brain organization, and thalamo-cortical interplay has long been recognized to contribute to whole-brain function and dysfunction1,2. As thalamo-cortical loops play a major role in the balance of cortical excitation and inhibition, a detailed assessment of perturbations in this network may enhance our understanding of mechanisms giving rise to the spectrum of common human epilepsies. A key involvement of the thalamo-cortical system is recognized in both genetic/idiopathic generalized epilepsies (GE), as well as in syndromes traditionally considered focal, with temporal lobe epilepsy (TLE) as a prototypical example. Long-standing evidence from animal models and electro-clinical observations in patients indicates a prominent role of thalamo-cortical loops in seizures that appear generalized from the outset, as in the case of GE. This is also seen in seizures propagating from a confined temporal network toward a more widespread hemispheric territory, with possible secondary generalization, as in the case of TLE3,4,5. These observations are complemented by neuroimaging work in humans showing structural and functional changes in thalamic and cortical areas in both syndromes6. Collectively, these multiple lines of evidence suggest that investigating thalamo-cortical and cortico-cortical networks may be instrumental to understanding mechanisms of focal and generalized human epilepsies at a systems level. Despite early-day neuroimaging studies based on magnetic resonance imaging (MRI) volumetry of the thalamus in both TLE and GE7, surprisingly little work has directly addressed common and distinct perturbations of the thalamo-cortical network in both syndromes. Work comparing these cohorts in isolation to healthy controls (HCs) suggests that both generally present with morphological anomalies in this circuit, with findings being more robust in TLE than GE6,8,9,10,11,12.
TLE indeed has been shown to present rather consistently with thalamic atrophy, specifically with marked effects in anterior and mediodorsal divisions, as well as widespread and multi-lobar cortical thinning6,9, findings also supported by postmortem and ex vivo histological data13,14. GE, on the other hand, presents with a more mixed pattern of structural compromise, sometimes showing subtle atrophy6,12,15 and sometimes not16,17, possibly resulting from intrinsic heterogeneity across GE syndromes and a less severe disease trajectory. However, even when restricting GE patients to those with generalized tonic–clonic seizures as their only seizure type (i.e., not studying patients with juvenile absence or myoclonic epilepsies), findings remain inconsistent. Importantly, there are no studies directly comparing focal and GE in terms of thalamo-cortical and cortico-cortical network pathology, and underlying functional dynamics. Furthermore, neuroimaging findings have thus far not been related to potential microcircuit mechanisms that may play a critical role in shaping the macroscopic expression of epilepsy-related network abnormalities18. Advances in MRI acquisition and modeling techniques offer unprecedented opportunities to probe thalamo-cortical network organization and pathology across multiple scales in vivo19. In particular, it is now possible to interrogate different MRI contrasts and to aggregate those into regional descriptors of disease load at the morphological and microstructural level. Diffusion MRI tractography and resting-state functional MRI (rs-fMRI) connectivity analysis provide novel ways to profile brain organization at the network level20,21. Moreover, the synergistic integration of these different modalities addresses structure–function coupling and can be harnessed to provide large-scale simulations of brain function22,23,24,25. One approach specifically models whole-brain dynamics via a network of anatomically connected neural masses. 
In contrast to statistical approaches that interrogate macroscale organization and structure–function coupling, the dynamical properties in these models are governed by parameters with biophysical interpretations that reflect empirical models of neuronal circuit function22. A recent study in healthy individuals has also shown that these models can make robust simulations of functional connectivity from structural connectivity, and that the model can furthermore be used to infer region-specific microcircuit parameters, such as recurrent excitation–inhibition and external subcortical drive22. Thus, applying these models to epilepsy will provide a complementary and mechanistic perspective on the role of the thalamo-cortical system in cortico-cortical dynamics. To study shared and distinct patterns of thalamo-cortical network pathology across the spectrum of focal and GE, we derived patient-specific measures of cortical and thalamic disease load using a multiscale MRI approach that targets microstructure, morphology, and macroscale connectivity. To identify microcircuit-level mechanisms dissociating both syndromes, we harnessed advanced biophysical simulation paradigms that integrate structural and functional connectome data in a unified framework, allowing the estimation of the role of recurrent excitation–inhibition as well as subcortical drive on cortical dynamics. To foreshadow our results, we observed more marked imaging anomalies in TLE relative to controls and GE, suggesting that this “focal” epilepsy is paradoxically associated with more severe whole-brain anomalies than an epilepsy syndrome defined by generalized seizures. Biophysical simulations complemented these findings and indicated marked reductions of subcortical drive together with increases in recurrent excitation–inhibition in TLE, specifically in limbic and fronto-central networks, while GE presented with increased subcortical drive but no changes in recurrent cortical excitation–inhibition.
Several sensitivity analyses confirmed the robustness of these findings.

## Results

### Data sample and overall analytical strategy

Our main analyses compared both patient cohorts directly to each other (see “Methods” section for inclusion criteria, clinical characteristics, and neuroimaging). In brief, we studied 107 TLE patients with unilateral hippocampal atrophy, 96 GE patients with generalized tonic–clonic seizures as their only seizure type, and 65 HCs. Patient cohorts had a comparable age and sex distribution, and underwent identical 3T multimodal MRI. Our image processing allowed profiling of cortical and thalamic morphology, microstructural markers as derived from diffusion MRI, and thalamo-cortical resting-state functional connectivity. Cortex-wide functional connectomes and diffusion tractography-derived structural connectomes were integrated using relaxed mean-field models. This computational technique was used to generate veridical simulations of functional connectivity from structural connectomes and to estimate cortical microcircuit parameters, specifically recurrent excitation–inhibition and the influence of external/subcortical drive on macroscale cortical dynamics. The main findings show direct comparisons between patient cohorts for cortical (and cortico-cortical) as well as thalamic (and thalamo-cortical) networks, at the level of morphology, microstructure, connectivity, and microcircuit organization. Findings relative to the HCs are presented in the Supplementary Materials. All analyses controlled for age and sex.

### Morphological profiling

Surface-based analysis of high-resolution MRI data in all participants profiled cortical and thalamic morphology (Fig. 1). These findings showed a marked divergence between GE and TLE, with the latter showing stronger atrophy in both cortical and thalamic regions.
TLE patients furthermore presented with extensive cortical thinning in fronto-central, temporo-limbic, prefrontal, and somatomotor regions (family-wise error corrected p-value, pFWE < 0.05) relative to GE. TLE patients also showed bilateral thalamic volume reduction compared to GE (left and right p < 0.005). Surface-shape analysis confirmed the strongest thalamic atrophy in ipsilateral mediodorsal divisions (pFWE < 0.05). Similar findings were observed when separately comparing TLE and GE patients to HCs (Supplementary Fig. 1a), specifically showing marked atrophy in the former group and only subtle anomalies in the latter.

### Diffusion MRI studies of tissue microstructure

Similar to the morphological findings, uni- and multivariate analyses of diffusion MRI parameters pointed toward a dissociation of GE and TLE (Fig. 2). Indeed, surface-based diffusivity analysis of the superficial white matter, located immediately below the cortical mantle, revealed an extended territory of more marked anomalies in TLE compared to GE, encompassing temporal, fronto-opercular, as well as midline regions, with stronger effects in the ipsilateral hemisphere. Post hoc univariate analysis in clusters of multivariate findings, together with unconstrained surface mapping of individual diffusion parameters, showed that findings were generally characterized by fractional anisotropy (FA) decreases and mean diffusivity (MD) increases in TLE (pFWE < 0.05). Complementing neocortical findings, multivariate diffusion parameter analysis of the thalamus identified marked differences between TLE and GE in both the left and right hemispheres. Findings were characterized by marked FA reductions (p < 0.005) and marginal MD increases in the ipsilateral thalamus in TLE relative to GE. A similar dissociation between TLE and GE, again pointing toward more marked changes in the former cohort, was seen when comparing patient groups separately to controls (Supplementary Fig. 1b).
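For reference, the two diffusion scalars compared above, fractional anisotropy (FA) and mean diffusivity (MD), have standard closed-form definitions in terms of the eigenvalues of the fitted diffusion tensor. The sketch below uses those textbook formulas on toy eigenvalues (not study data):

```python
import numpy as np

def mean_diffusivity(evals):
    """MD: the average of the three diffusion tensor eigenvalues."""
    return float(np.mean(evals))

def fractional_anisotropy(evals):
    """FA: normalized eigenvalue dispersion, 0 (isotropic) to 1 (linear)."""
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return float(np.sqrt(0.5) * num / den)

# Isotropic diffusion (e.g., free water): FA is 0, MD equals the eigenvalue
print(fractional_anisotropy([3e-3, 3e-3, 3e-3]))  # -> 0.0
# Strongly directional diffusion (e.g., coherent white matter): FA near 1
print(fractional_anisotropy([1.7e-3, 0.2e-3, 0.2e-3]))
```

Lower FA and higher MD, the pattern reported for TLE here, thus correspond to less directionally coherent and overall freer water diffusion in the tissue.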
### Connectivity and microcircuit simulations

We first built mean-field models to estimate the contributions of subcortical drive and recurrent excitation–inhibition to simulated cortico-cortical dynamics, and then targeted thalamo-cortical functional networks using seed-based resting-state functional connectivity analysis (Fig. 3). For biophysical circuit simulations, we leveraged a relaxed mean-field approach that models whole-brain functional dynamics and connectivity based on diffusion MRI connectome information (see “Methods” section for details). In brief, these models assume that neural dynamics are governed by (i) recurrent intraregional input related to recurrent excitation–inhibition, (ii) inter-regional input mediated by anatomical connections, (iii) extrinsic input mainly mediated by subcortical regions, and (iv) neuronal noise22. While the originally proposed mean-field models assume these four parameters to be constant across brain regions, the relaxed mean-field model applied here allows recurrent excitation–inhibition and subcortical input to vary, and estimates these parameters at a region-specific level22. In our controls, the relaxed mean-field models showed a similar overall performance in simulating functional connectivity from structural connectivity, and yielded similar microcircuit parameter estimates, as in a previous study in healthy adults22 (Supplementary Fig. 2). For both patient cohorts, simulations supported the divergence between TLE and GE, demonstrating that cortico-cortical dynamics were driven by distinct microcircuit perturbations. Specifically, when compared to GE, TLE presented with reduced subcortical drive to multiple networks, predominantly to limbic and somatomotor networks. Conversely, recurrent excitation–inhibition to those two networks was increased in TLE relative to GE.
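To make the model structure described above concrete, the following is a minimal numerical sketch of a reduced Wong–Wang-type mean-field network in which the recurrent weight w_i and external input I_i are region-specific, as in the relaxed variant. All constants, parameter values, and the Euler–Maruyama integration are illustrative textbook choices, not the study's actual implementation:

```python
import numpy as np

def simulate_mfm(C, w, I, G=0.5, T=10.0, dt=1e-3, sigma=0.01, seed=0):
    """Simulate regional synaptic gating variables S_i of a reduced
    Wong-Wang mean-field network.

    C : (n, n) structural connectivity matrix (zero diagonal)
    w : (n,) region-specific recurrent connection strength
    I : (n,) region-specific external (e.g., subcortical) input
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    # Standard reduced-model constants (illustrative values)
    a, b, d = 270.0, 108.0, 0.154        # input-output transfer function
    gamma, tau_s, J = 0.641, 0.1, 0.2609
    S = np.full(n, 0.1)
    steps = int(round(T / dt))
    traj = np.empty((steps, n))
    for t in range(steps):
        # (i) recurrent input, (ii) connectome-mediated input,
        # (iii) external drive; (iv) noise is added in the update below
        x = w * J * S + G * J * (C @ S) + I
        H = (a * x - b) / (1.0 - np.exp(-d * (a * x - b)))  # firing rate
        dS = -S / tau_s + (1.0 - S) * gamma * H
        S = S + dt * dS + sigma * np.sqrt(dt) * rng.standard_normal(n)
        S = np.clip(S, 0.0, 1.0)          # gating variables stay in [0, 1]
        traj[t] = S
    return traj
```

Simulated functional connectivity can then be obtained by correlating the regional trajectories (after hemodynamic convolution in the full pipeline), and w_i and I_i tuned so that simulated and empirical connectivity match.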
Similar findings were observed when comparing each patient cohort separately to controls, mainly showing reduced subcortical drive in TLE and increased thalamo-cortical drive in GE (Supplementary Fig. 1c). Considering thalamo-cortical functional connectivity, both patient groups diverged, with TLE showing widespread reductions in connectivity to nodes of the limbic, default-mode, fronto-parietal control, and somatomotor networks relative to GE (pFWE < 0.05). Similar findings were observed when independently comparing each patient cohort to controls, showing reduced thalamo-cortical connectivity in TLE and increases in GE (Supplementary Fig. 1c).

### Sensitivity analyses

Several sensitivity analyses assessed the robustness and consistency of our main findings. First, we repeated the thalamo-cortical functional connectivity analysis after additionally regressing out the global mean signal and observed virtually identical effects, i.e., a marked divergence between TLE and GE patients at the level of thalamo-cortical resting-state functional connectivity, with TLE showing markedly reduced connectivity relative to GE (Supplementary Fig. 3). Second, we split the TLE cohort based on whether patients reported secondary generalized tonic–clonic seizures within the year prior to the imaging investigation (with/without secondary generalized tonic–clonic seizures: 31/69% of the TLE group), and repeated all elements of the multiscale analysis. This approach was chosen to reflect recent clinical history; patients with no secondarily generalized seizures in their history, or whose generalized seizure events occurred more than a year prior to the investigation, were grouped into the ‘without’ category. Findings remained virtually identical when separately comparing each of these subcohorts to GE, suggesting that the dissociation between TLE and GE is not driven by the current presence of secondary generalized tonic–clonic seizures in TLE, nor by its absence (Supplementary Fig. 4).
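The global signal regression step used in the first sensitivity analysis can be sketched as follows. This is a generic numpy implementation of regressing the mean signal out of each regional time series, not the study's actual preprocessing code:

```python
import numpy as np

def regress_out_global_signal(ts):
    """Remove the global mean signal from each column of ts.

    ts : (timepoints, regions) array of regional fMRI time series.
    Returns the residual time series after regressing out an intercept
    and the global signal (the mean across regions at each timepoint).
    """
    gs = ts.mean(axis=1, keepdims=True)        # global signal, (t, 1)
    X = np.hstack([np.ones_like(gs), gs])      # design: intercept + gs
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta                       # residuals, same shape as ts
```

By construction, the residual time series are uncorrelated with the global signal, so any connectivity computed afterward cannot be driven by a shared global fluctuation.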
Given previous work suggesting an association between neuroimaging measures of brain organization and drug response16,26,27,28,29, we also repeated our analysis while additionally controlling for drug response patterns (i.e., drug resistance versus drug responsiveness). The linear model was set as follows:

$${\mathrm{Model}} = \beta_0 + \beta_1 \times {\mathrm{Sex}} + \beta_2 \times {\mathrm{Age}} + \beta_3 \times {\mathrm{Medication}}\,{\mathrm{response}} + \beta_4 \times {\mathrm{Group}}.$$

Focusing on relatively well-controlled (67 TLE, mean number of medications = 1.27; 83 GE, mean = 1.16) or refractory (40 TLE, mean = 2.90; 13 GE, mean = 2.69) patients, we still observed the above-mentioned major dissociations between patient groups, suggesting that the divergent effects were likely independent from variable degrees of drug response in TLE and GE (Supplementary Fig. 5). As age at seizure onset and disease duration can both affect brain structure and function, we repeated our analyses using statistical models that additionally controlled for these measures. These findings were highly consistent with our main results (Supplementary Fig. 6). Furthermore, findings were robust when restricting our cohorts to patients with at least 1-year epilepsy duration and at least one seizure per year (94 TLE and 74 GE) (Supplementary Fig. 7). To finally investigate whether our results would be affected by hippocampal atrophy, we repeated the multiscale analysis after adding overall hippocampus volume as a covariate of no interest into the model. The linear model was thus set as follows:

$${\mathrm{Model}} = \beta_0 + \beta_1 \times {\mathrm{Sex}} + \beta_2 \times {\mathrm{Age}} + \beta_3 \times {\mathrm{Mean}}\,{\mathrm{hippocampal}}\,{\mathrm{volume}} + \beta_4 \times {\mathrm{Group}}.$$

Although between-cohort differences were overall smaller due to dominant hippocampal lesions in TLE, the main divergences remained (Supplementary Fig. 8).
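As an illustration of the group-comparison models above, the following sketch fits the same form of linear model by ordinary least squares on synthetic data, with the group term β4 carrying the TLE-versus-GE effect of interest. All variables, sample sizes, and effect sizes here are made-up placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
sex = rng.integers(0, 2, n).astype(float)     # 0/1 coded covariate
age = rng.normal(26.0, 8.0, n)
med = rng.integers(0, 2, n).astype(float)     # drug response (0/1, toy)
group = rng.integers(0, 2, n).astype(float)   # 0 = GE, 1 = TLE (toy)

true_group_effect = -0.5                      # hypothetical group difference
y = (3.0 + 0.1 * sex - 0.01 * age + 0.05 * med
     + true_group_effect * group + rng.normal(0.0, 0.1, n))

# Design matrix: intercept, Sex, Age, Medication response, Group
X = np.column_stack([np.ones(n), sex, age, med, group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated group effect (beta_4):", beta[4])
```

Because the covariates enter the same design matrix, β4 estimates the group difference after adjusting for sex, age, and medication response, which is the logic behind each of the sensitivity models above.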
## Discussion

Converging evidence from experimental studies in animals as well as electrophysiological and neuroimaging work in patients suggests a key role of the thalamo-cortical circuitry in both focal and GE3,4,5,6,30. Yet, differential patterns of cortico-cortical and thalamo-cortical compromise across the spectrum of human epilepsies are not fully understood, due to the scarcity of work systematically and directly comparing focal and GE. To close this gap, we profiled thalamo-cortical network pathology in two large groups of TLE and GE patients, as well as matched controls, using multiscale neuroimaging and computational simulations of microcircuit function. While we gathered evidence that the structural and functional organization of the thalamo-cortical network is abnormal in both patient groups relative to controls, we obtained robust support for a dissociation of both syndromes at the level of morphology, diffusion MRI-derived indices of tissue microstructure, and macroscale functional connectivity. Findings indicated more marked anomalies in TLE relative to controls and GE, thus suggesting that the archetypal “focal” epilepsy is paradoxically associated with more severe whole-brain anomalies than an epilepsy syndrome electro-clinically defined by generalized seizures. Further support for a dissociation between both syndromes was gathered from connectome-informed biophysical simulations. These suggested marked reductions of subcortical drive, as well as increases in recurrent excitation–inhibition, on cortical microcircuit function in TLE, specifically in limbic and fronto-central networks, while GE presented with increased subcortical drive but no changes in recurrent cortical excitation–inhibition. Findings were not mediated by between-cohort differences in drug response and remained significant when separately analyzing TLE cohorts according to a history of secondary generalized tonic–clonic seizures.
Furthermore, although anomalies in TLE were in part diminished when controlling for hippocampal structural imaging findings, further confirming that mesiotemporal pathology often mirrors the degree of whole-brain anomalies in this syndrome31, differences between TLE and GE were still largely consistent. Collectively, our findings demonstrate thalamo-cortical pathological signatures in TLE and GE that convey a compelling dissociation of the two syndromes at multiple scales, refining our understanding of macroscale and microcircuit mechanisms contributing to the spectrum of common epilepsies in humans. Our work benefitted from the inclusion of a large sample of patients with TLE and GE in whom high-quality 3T multimodal neuroimaging data were available, together with a matched sample of HCs. Our neuroimaging paradigm leveraged cutting-edge morphological analysis, diffusion parameter profiling, and macrolevel connectomics of both cortical and thalamic subregions. While prior research from our group and others has shown that the integration of these features can lead to novel descriptions of healthy and diseased brains19,32, the current work represents one of the most comprehensive in vivo studies to date of thalamo-cortical and cortico-cortical networks in both epileptic conditions. Combining these different imaging modalities, our findings strengthen the notion that both GE and TLE are associated with disorganization of these different networks at multiple scales33,34,35,36. However, and even more importantly, we captured consistent evidence for diverging patterns of pathology across both syndromes.
Considering gray matter morphology, TLE patients presented with marked atrophy in dorsomedial thalamic divisions as well as prefrontal, fronto-central, and temporal neocortices relative to GE patients and relative to controls, paralleling earlier studies suggesting an association between TLE and multi-lobar gray matter reductions, demonstrated using several techniques and confirmed across multiple sites6,37,38. Our inclusion criteria ruled out the presence of visible neocortical lesions in the investigated patient cohorts and, overall, the pathological substrates of gray matter atrophy in TLE remain unclear. Yet, cortical atrophy appears relatively prevalent across TLE cohorts, with marked gray matter loss of temporal and fronto-central regions seen in ~35% of patients39. Although the presence of subtle preexisting anomalies of neurodevelopmental origin is possible40, cortical atrophy undergoes measurable progressive changes over time37,41,42,43. These findings are also in line with the conclusions of a prior postmortem study in TLE patients, suggesting acquired cortical pathology that potentially represents seizure-related damage and manifests as both gliotic and microgliotic changes14. In the thalamus, while postmortem analysis also emphasized heterogeneity across patients, findings overall confirmed reductions in neuronal density in mediodorsal divisions, which could represent a plausible pathological substrate of the local volume reductions seen in the current work13. On the other hand, comparison of GE patients to controls highlighted gray matter loss only at uncorrected levels in fronto-central regions, reflecting a generally less marked impact of GE on mesoscale morphology12,16,44. In this work, gray matter morphometric analyses were complemented by diffusion MRI parametrization of the thalamus and the superficial white matter, which lies immediately below the cortical interface.
This approach offered an analysis of diffusion MRI metrics in the same anatomical reference frame as the morphological findings. Although this approach detected diffusion anomalies in GE in both cortical and thalamic regions, findings in this cohort were more restricted than those observed in TLE. Indeed, diffusion anomalies in TLE affected the thalamus bilaterally, as well as a widespread cortical territory that radiated outward from the ipsilateral mesiotemporal disease epicenter to invade limbic and higher-order association cortices in temporal, frontal, and parietal lobes. As the diffusion parameter changes identified in this work may be sensitive to different microstructural and architectural substrates, we can only speculate on histopathological underpinnings. In TLE, prior work has shown that FA reductions of select fiber tracts may relate to changes in axonal membrane and myelination45, whereas MD changes may represent a combination of gliosis and increased extracellular space46,47. Collectively, the analysis of TLE and GE indicates that both syndromes affect thalamic and cortical networks. On the other hand, our findings strongly suggest diverging pathological signatures at the macroscale level, contributing to our increasing understanding of whole-brain disease effects across the common epilepsies. Given the maturation of MRI co-registration techniques to bring different modalities into the same reference frame48, studies have begun to investigate the coupling between brain structure and function in humans49. Our work harnessed biophysical simulations to integrate structural and functional network data, and to estimate aspects of cortical microcircuit organization, specifically the interplay of recurrent excitation–inhibition and the influence of subcortical/external drive on cortical dynamics.
The relaxed mean-field models chosen here provide plausible estimates of functional connectivity from structural connectomes with relatively few assumptions and low parametric complexity. In healthy populations, these techniques have shown promise to simulate functional interactions solely based on structural connectivity with robust accuracy, and have begun to provide insight into the hierarchical organization of cortex at macroscale and its link to underlying microcircuit configurations22,23,24,25,50. Capitalizing on one of the first applications of these relaxed mean-field models to epilepsy cohorts, we identified diverging mechanisms in focal and GE, bridging microcircuit properties and macroscale function. In fact, these large-scale models allowed for the estimation of region-specific model parameters that provided insights into the role of excitation–inhibition and excitatory subcortical drive on microcircuit function. These analyses suggested marked divergences between epilepsy syndromes with respect to the role of subcortical input on cortico-cortical dynamics, with TLE showing a reduced influence, particularly to limbic and somatomotor areas, while GE expressed an increased contribution of subcortical drive to cortical function. The increased subcortical drive in GE did not seem to be accompanied by marked changes in recurrent excitation–inhibition in cortical regions, a finding that may mirror the overall smaller degree of morphological and microstructural anomalies observed in our analyses. Despite unchanged intraregional recurrent input, cortical circuits in GE may still show greater excitability due to increased subcortical drive. Conversely, reductions in subcortical drive in limbic and somatomotor areas in TLE occurred in parallel with increased recurrent excitation–inhibition, echoing the stronger morphological and microstructural anomalies detected in fronto-central cortices in this cohort.
According to prior neuroimaging studies in patients and experimental work performed in animal models, persistent and often recurrent loops between the thalamus and distributed neocortices in GE may contribute to the initiation and maintenance of generalized seizures51. In contrast, thalamo-cortical decoupling has previously been detected in TLE via rs-fMRI4, structural MRI covariance9, and diffusion MRI tractography52. The overall disconnection of limbic cortices may possibly relate to a functional segregation of these regions from subcortical networks, which in turn may reflect the existence of recurrent cortico-cortical loops in TLE, and potentially promote epileptogenesis. In future work, it will be relevant to further cross-validate the connectome-derived biophysical parameters with empirical studies on microcircuit function and dysfunction at laminar and cellular scales18, for example, through ultra-high field MRI studies or studies in animal models. Perturbations in thalamic connectivity to cortical target regions may, for instance, alter the microstructural context and connectivity between different cortical layers, in turn perturbing the cortical microcircuit layout and hierarchical organization of cortical networks53. Beyond contributing to efforts aiming at dissociating established electro-clinically defined epilepsy syndromes, shifting toward an intracortical- and microcircuit-level scale may identify mechanisms contributing to inter-individual variability within particular syndromes, for example, by showing how seizures may generalize in a given patient and how seizure generalization could be prevented. By consolidating connectivity, structure, and microcircuit properties into a unified framework, these investigations could open new avenues for the conceptualization and dissociation of focal and generalized human epilepsies.
Our results support the notion that both GE and TLE are associated with thalamo-cortical and cortico-cortical network abnormalities at multiple levels. Moreover, our findings demonstrated a robust dissociation between TLE and GE, with the former being more markedly affected. Despite this divergence, one cortical network that appeared most susceptible to epilepsy-related pathology overall was the limbic system, showing measurable microstructural anomalies in both focal and generalized syndromes. Increased susceptibility of the limbic system has repeatedly been suggested in the context of TLE54,55, and is also plausible given the mesiotemporal epicenter of the disorder. In different GE syndromes, several studies have also detected structural anomalies in limbic cortices, including work showing mesiotemporal gray matter loss as well as malrotation56,57,58. The increased disease effect on the limbic system across both TLE and GE may relate to the role of the thalamus itself in the pathophysiological networks of both syndromes. Several divisions of the thalamus, such as the mediodorsal and anterior division, have sometimes also been closely associated with the limbic circuitry due to their anatomical proximity and dense interconnectivity with several key nodes of the “grand lobe limbique,” such as the hippocampus and amygdala. On the other hand, limbic and paralimbic cortices differ from other cortices in terms of their underlying microstructure and circuit properties59,60, and have a relatively simple laminar organization compared to eulaminate areas exhibiting a more differentiated lamination53,59,61. These differential lamination patterns also mirror gradual changes in myeloarchitecture and microcircuit properties, with limbic areas showing less myelination but higher synaptic density compared to areas involved in less integrative sensory–motor and unimodal association processing.
These properties are ultimately believed to underlie the increased plasticity and heightened susceptibility of the limbic network to multiple diseases59. Our inclusion criteria restricted the analysis to patients with an electrophysiological signature characteristic of TLE, as well as GE patients with tonic–clonic seizures as their only seizure type. Hence, despite the consistent dissociation of these two prototypical forms of focal and GE shown in this work, it remains to be evaluated whether and how our framework extends to other prevalent generalized and focal epilepsy syndromes, including extratemporal epilepsy secondary to malformations of cortical development, which is associated with an intriguing, and seemingly coupled, spectrum of lesional pathology and widespread network anomalies62. On the other hand, our study benefitted from the inclusion of patients with different disease severities, clinical history, and hippocampal imaging findings, allowing us to examine the contribution of these inter-individual factors to our results. A comprehensive battery of sensitivity analyses revealed that between-cohort divergences were only modestly related to potential variations in drug response patterns, were virtually identical when restricting the TLE cohort to those with a history of secondary generalized tonic–clonic seizures, and were robust against correction for age at seizure onset and disease duration. Furthermore, although we found a noticeable modulation of the between-cohort difference when factoring in the degree of hippocampal anomalies, a key indicator of network level compromise in TLE (refs. 31,63), between-group divergences were still consistent. We conclude by emphasizing that our work offers an integrative perspective on how different scales of brain organization interact in focal and generalized epilepsies.
While our cross-sectional findings preclude direct inferences on whether microcircuit perturbations cause macroscale reorganization (or vice versa), they motivate future longitudinal research that models these multiscale interactions in the context of disease progression. Given their scope to interrogate microcircuit and macroscale effects, we will similarly explore the utility of the developed platform in the prediction and monitoring of the efficacy of therapeutic interventions, notably anti-epileptic medication and epilepsy surgery.

## Methods

### Participants

We studied 263 epilepsy patients recruited from Jinling Hospital, Nanjing, China between July 2009 and August 2018. Patients were diagnosed as having either GE with generalized tonic–clonic seizures, or unilateral TLE with MRI evidence for hippocampal sclerosis. Diagnoses followed ILAE criteria64, and were informed by electro-clinical factors, neurological examination, and neuroimaging. Further inclusion criteria were: (i) age older than 16 years; (ii) right-handedness; (iii) no mass lesion (i.e., brain tumor, cerebral hemorrhage or ischemia, and cerebrovascular malformation); (iv) no history of brain surgery; (v) no significant physical conditions; (vi) no alcohol or substance abuse; and (vii) no MRI contraindications. Among the initial 263 patients, we selected only those with available MRI data for all the studied modalities and those who did not present with imaging artifacts. Our final patient cohort consisted of 203 patients: 96 GE (31 females, mean ± SD age = 25.65 ± 7.85 years) and 107 TLE patients (53 left and 54 right TLE; 47 females, mean ± SD age = 27.29 ± 7.81 years). Patients were compared to 65 age- and sex-matched HCs (25 females, mean ± SD age = 24.98 ± 4.96 years). Detailed sociodemographic and clinical information can be found in Table 1. This study was carried out according to the Declaration of Helsinki and approved by the ethics committee of Jinling Hospital.
Written informed consent was obtained from every participant.

### MRI protocol

Data were acquired on a 3T MRI scanner (TIM Trio, Siemens Medical Solution, Erlangen, Germany) equipped with an eight-channel head coil. We used a 3D rapid gradient echo sequence to acquire high-resolution T1-weighted MRI (T1w; 176 slices; repetition time (TR) = 2300 ms; echo time (TE) = 2.98 ms; flip angle = 9°; field of view (FOV) = 256 × 256 mm2; 0.5 × 0.5 × 1 mm3 voxels) and a 2D echo-planar imaging spin-echo sequence to acquire diffusion MRI (DWI; 45 slices; TR = 6100 ms; TE = 93 ms; 120 volumes with non-collinear directions (b = 1000 s/mm2) and four volumes without diffusion weighting (b = 0 s/mm2); FOV = 240 × 240 mm2; 0.94 × 0.94 × 3 mm3 voxels). Using 2D echo-planar BOLD imaging, we acquired rs-fMRI (30 slices; TR = 2000 ms; TE = 30 ms; flip angle = 90°; FOV = 240 × 240 mm2; 250 volumes; 3.75 × 3.75 × 4 mm3 voxels). Participants were instructed to keep their eyes closed and remain still in the scanner. Two experienced radiologists (Z.Z. and G.L.) reviewed routine T1w and FLAIR images and reached consensus in the final diagnosis.

### Structural MRI

We processed T1w data using FreeSurfer (v6.0; http://surfer.nmr.mgh.harvard.edu/)65,66,67 to generate models of the cortical surface and to index neocortical morphology. In brief, the pipeline involves skull stripping, image registration, and cortical surface reconstruction. Cortical thickness was measured as the Euclidean distance between corresponding vertices on pial–gray and gray–white matter interfaces. Surfaces were inspected for inaccuracies, and corrected if necessary, prior to registration to the hemisphere-matched Conte69 template from the Human Connectome Project initiative68 with ~64k surface points (henceforth, vertices). Thickness data were blurred using a 20 mm full-width at half-maximum kernel.
The entire thalamus was automatically segmented using FSL-FIRST (v5.0.9; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FIRST/)69, and segmentations were linearly registered to MNI152 space. Segmentations were converted to triangular surfaces and parameterized using a spherical harmonic representation model70,71,72. We generated a thalamic template with ~6k vertices across our HCs, and aligned individual surfaces to the template based on shape-intrinsic features. We measured vertex-wise displacement vectors from each participant’s thalamus to the template in the surface normal direction, indicating inward/outward deformations of an individual relative to HCs9.

### Diffusion MRI

Data were preprocessed with MRtrix (v0.3.15; http://www.mrtrix.org/)73. Processing included head motion and eddy current correction, de-noising, as well as diffusion parameter estimation (FA, MD). As in previous analyses32,74,75, we co-registered the T1w to diffusion MRI data using boundary-based registration48. To study the microstructure of the white matter immediately beneath the neocortical mantle, we generated subject-specific surfaces by propagating the gray–white matter interface along a Laplacian potential field toward the ventricular walls for approximately 2 mm32,74,75. This depth was selected to target the cortico-cortical U-fiber system as well as terminations of long-range fiber tracts76. Superficial white matter surfaces were mapped to diffusion space via the inverse of the initial co-registration, and used to sample voxel-wise FA and MD. As for the cortical thickness measures, superficial white matter data were surface-registered to Conte69. To analyze thalamic diffusion parameters, preprocessed diffusion MRI data were mapped to the MNI152 template using a combination of linear and nonlinear transformations. Left and right thalamic masks were generated by intersecting the previously generated thalamic segmentations across participants in MNI152 space.
We extracted thalamic FA and MD values for all participants prior to statistical analysis. Preprocessed diffusion MRI data in native space underwent multi-shell, multi-tissue constrained spherical-deconvolution to estimate voxel-wise fiber orientation distributions. Anatomically constrained tractography generated 40 million tracts77, and we stored structural connectomes of 200 cortical parcels, defined via subclustering of an anatomy-based atlas while guaranteeing comparable parcel size78,79. The group-averaged structural connectome was fed into the biophysical modeling framework (see below).

### Resting-state fMRI

The rs-fMRI processing was conducted via DPARSF (v2.3; http://www.rfmri.org/DPARSF)80. The first 10 images were excluded to ensure steady-state signal equilibrium. Images underwent correction for slice timing, realignment, band-pass filtering (0.01–0.1 Hz), and spatial smoothing using a 6 mm full-width at half-maximum Gaussian kernel. We statistically corrected for head motion as well as average white matter and cerebrospinal fluid signals. To extract cortical functional time series, we aligned subject-specific functional images to their cortical surfaces via boundary-based registration22 and sampled time series at each vertex at mid-thickness. As with the cortical thickness measures, functional time series were also aligned to Conte69. To obtain thalamic time series, we resampled native space rs-fMRI data to the MNI152 template. We used the above-mentioned thalamic masks to extract time series for all participants, and calculated thalamo-cortical functional connectivity via Pearson correlation coefficients between the thalamus and each cortical vertex. A cortex-wide functional connectivity matrix was also calculated based on the same cortical parcellation as above. Individual connectivity maps underwent Fisher r-to-z transformations prior to generating a group-level connectome.
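As a schematic of this last step, the sketch below computes thalamo-cortical connectivity as Pearson correlations followed by the Fisher r-to-z transform. Array sizes and names are illustrative only; the actual pipeline operates on tens of thousands of surface vertices per subject.

```python
import numpy as np

def thalamo_cortical_fc(thal_ts, cortex_ts):
    """Pearson correlation between one thalamic time series (T,) and
    vertex-wise cortical time series (T, V), then Fisher r-to-z."""
    thal = (thal_ts - thal_ts.mean()) / thal_ts.std()
    ctx = (cortex_ts - cortex_ts.mean(axis=0)) / cortex_ts.std(axis=0)
    r = thal @ ctx / len(thal)                           # Pearson r per vertex
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z

rng = np.random.default_rng(0)
ts = rng.standard_normal((240, 5))   # 240 volumes, 5 illustrative vertices
z = thalamo_cortical_fc(ts[:, 0], ts)
```

Correlating the thalamic series with itself (vertex 0 above) saturates near the clip bound; in practice the subject-level z-maps are combined across participants to form the group-level connectome.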
### Connectome-informed biophysical simulations

We used a recently proposed connectome-informed brain model to study structure–function coupling and to estimate the relative influence of microcircuit parameters on macroscale cortical dynamics in our three cohorts22. Specifically, we harnessed a relaxed mean-field neural mass model that captures the link between cortical functional dynamics and structural connectivity derived from diffusion imaging, and its modulation through region-specific microcircuit parameters. For further details on the model and its mathematical underpinnings, we refer to the original publication on the relaxed mean-field model22 and earlier work on the use of (non-relaxed) mean-field models in the context of connectomics81. In brief, mean-field neural mass models capture neural dynamics at the level of neuronal populations. This is achieved by the mathematical simplification (i.e., mean-field reduction) of detailed spiking neuronal networks. While making some simplifying approximations81, mean-field models have been shown to capture complex neural dynamics with relatively low parametric complexity82. More specifically, under the mean-field model, the neural dynamics of a given region are governed by four components: (i) recurrent intraregional input, i.e., recurrent excitation–inhibition; (ii) inter-regional input, mediated by anatomical connections from other nodes; (iii) external input, mainly from subcortical regions; and (iv) neuronal noise, modeled as an uncorrelated Gaussian process22. There are “free” parameters associated with each component. The original (non-relaxed) mean-field models81 assume these parameters are constant across brain regions, which is not biologically plausible. The relaxed mean-field model that we utilized instead allows recurrent excitation–inhibition and extrinsic (subcortical) input to vary across regions, with these parameters estimated automatically22.
The dynamics of each region i are described by the following coupled nonlinear stochastic differential equations81: $$\dot S_i = - \frac{{S_i}}{{\tau _s}} \,+ \,r\left( {1 - S_i} \right)H\left( {x_i} \right) + \sigma \nu _i\left( t \right),$$ (1) $$H\left( {x_i} \right) = \frac{{ax_i - b}}{{1 - \exp \left( { - d\left( {ax_i-b} \right)} \right)}},$$ (2) $$x_i = w_iJS_i + GJ\mathop {\sum}\limits_j {C_{ij}S_j} + I_i.$$ (3) For a given region i, Si in formula (1) represents the average synaptic gating variable, $$H\left( {x_i} \right)$$ in formula (2) denotes the population firing rate, and xi in formula (3) denotes the total input current. In (3), the input current is determined by the recurrent connection strength wi (henceforth excitation–inhibition), the excitatory external input Ii (such as from subcortical relays; henceforth subcortical input), and inter-regional signal flow. The latter is governed by Cij, which represents the structural connectivity derived from diffusion MRI tractography between regions i and j, and the global coupling G. The constant G scales the strength of information flow from other cortical regions to the i-th region relative to the recurrent connection and subcortical inputs. Following prior work22, we set synaptic coupling J = 0.2609 nA. In formula (2), parameter values for the input–output function H(xi) were set to a = 270 n/C, b = 108 Hz, and d = 0.154 s. To run the model in each of the cohorts (GE, TLE, and controls), we entered the group-average structural and functional connectivity matrices as input, yielding the global coupling constant G, the global noise amplitude $$\sigma$$, as well as recurrent connection strengths wi, and excitatory subcortical inputs Ii for each region as output. During parameter estimation, simulated neural activities Si in formula (1) were fed to a Balloon–Windkessel hemodynamic model83 to simulate fMRI signals for each region i.
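To make Eqs. (1)–(3) concrete, the following sketch integrates the model for a toy network with Euler–Maruyama. The kinetic parameters τs and r are not stated in the text and are assumed here from the standard mean-field formulation; the regional parameters w and I are set to illustrative values rather than fitted, and the hemodynamic stage is omitted.

```python
import numpy as np

# Parameters given in the text; tau_s and the kinetic rate r are
# assumptions taken from the standard mean-field formulation.
a, b, d = 270.0, 108.0, 0.154    # input-output function H(x)
J = 0.2609                        # synaptic coupling (nA)
tau_s, r_kin = 0.1, 0.641         # assumed kinetic parameters

def H(x):
    """Population firing rate, Eq. (2)."""
    u = a * x - b
    return u / (1.0 - np.exp(-d * u))

def simulate(C, w, I, G=1.0, sigma=0.01, dt=1e-3, steps=2000, seed=0):
    """Euler-Maruyama integration of Eqs. (1) and (3) for n regions."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    S = np.full(n, 0.1)                                  # gating variables
    for _ in range(steps):
        x = w * J * S + G * J * (C @ S) + I              # Eq. (3)
        drift = -S / tau_s + r_kin * (1.0 - S) * H(x)    # Eq. (1)
        S = S + dt * drift + np.sqrt(dt) * sigma * rng.standard_normal(n)
        S = np.clip(S, 0.0, 1.0)                         # keep S in [0, 1]
    return S

# Toy 3-region network with uniform, illustrative regional parameters.
C = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
S = simulate(C, w=np.full(3, 0.9), I=np.full(3, 0.3))
```

With fitted, region-specific w and I, the simulated gating variables would be passed through a hemodynamic model to produce the synthetic fMRI signals used during parameter estimation.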
Global and region-specific parameters were determined by maximizing the similarity between simulated and empirical functional connectivity, based on a previously developed algorithm for inverting neural mass models that leverages the well-established expectation–maximization algorithm. Correlations between simulated and empirical functional connectivity were r = 0.51/0.48/0.57 in HC/GE/TLE (and above baseline correlations between structural and functional connectivity r = 0.37/0.40/0.41; Supplementary Fig. 2). To measure alterations in subcortical input on cortico-cortical dynamics, we quantified between-group differences for a given region i and normalized results by the SD within the corresponding network.

### Statistics and reproducibility

(a) Surface-based analyses: Analysis was carried out using SurfStat for Matlab84, which provides a unified framework to assess between-group differences in our multiscale measures (i.e., cortical thickness, thalamic surface-shape displacements, superficial white matter diffusivity, and thalamo-cortical functional connectivity), while controlling for age and sex. Surface-based measures were z-scored relative to HCs, and left and right TLE patients were pooled into a single cohort such that all lesions were consistently located in the left hemisphere85 (findings were also similar when flipping a comparable proportion of GE patients; Supplementary Fig. 9). Surface-based results were corrected for multiple comparisons at a family-wise error rate of pFWE < 0.05. (b) Thalamic analysis: To study univariate thalamic parameters (i.e., global thalamic volume, FA, and MD), we z-scored the parameters relative to controls and measured group differences using Student’s t-test. Mahalanobis distances were calculated as multivariate dissimilarity measures for comparing diffusion parameters in patients relative to controls, and significance was assessed using SurfStat’s multivariate Hotelling’s T2 test.
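As an illustration of the thalamic analysis in (b), hypothetical feature arrays can be z-scored against controls and summarized with the Mahalanobis distance; the significance testing via Hotelling's T2 in SurfStat is not reproduced here.

```python
import numpy as np

def zscore_to_controls(patients, controls):
    """z-score each patient feature relative to the control distribution."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    return (patients - mu) / sd

def mahalanobis_to_controls(patients, controls):
    """Multivariate distance of each patient from the control centroid,
    using the control covariance (e.g., over an FA/MD feature pair)."""
    mu = controls.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(controls, rowvar=False))
    diff = patients - mu
    # Quadratic form diff^T * cov_inv * diff, evaluated row-wise.
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

rng = np.random.default_rng(0)
controls = rng.standard_normal((65, 2))        # 65 HCs, two features
patients = rng.standard_normal((10, 2)) + 3.0  # shifted "patient" group
dist = mahalanobis_to_controls(patients, controls)
```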
(c) Sensitivity analyses: Several sensitivity analyses assessed the robustness and consistency of our main findings: (i) we repeated the thalamo-cortical functional connectivity analysis after additionally regressing out the global mean signal; (ii) we split the TLE cohort based on patients who reported secondary generalized tonic–clonic seizures within a year prior to the imaging investigation (with/without secondary generalized tonic–clonic seizures: 31/69% of the TLE group) and repeated all elements of the multiscale analysis; and (iii) we repeated our analysis while additionally controlling for drug response patterns, age at onset and duration, and hippocampal volume.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
# 4.1: Determinants: Definition

##### Objectives

1. Learn the definition of the determinant.
2. Learn some ways to eyeball a matrix with zero determinant, and how to compute determinants of upper- and lower-triangular matrices.
3. Learn the basic properties of the determinant, and how to apply them.
4. Recipe: compute the determinant using row and column operations.
5. Theorems: existence theorem, invertibility property, multiplicativity property, transpose property.
6. Vocabulary words: diagonal, upper-triangular, lower-triangular, transpose.
7. Essential vocabulary word: determinant.

In this section, we define the determinant, and we present one way to compute it. Then we discuss some of the many wonderful properties the determinant enjoys.

## The Definition of the Determinant

The determinant of a square matrix $$A$$ is a real number $$\det(A)$$. It is defined via its behavior with respect to row operations; this means we can use row reduction to compute it. We will give a recursive formula for the determinant in Section 4.2. We will also show in Subsection Magical Properties of the Determinant that the determinant is related to invertibility, and in Section 4.3 that it is related to volumes.

##### Definition $$\PageIndex{1}$$: Determinant

The determinant is a function $\det\colon \bigl\{\text{square matrices}\bigr\}\to \mathbb{R} \nonumber$ satisfying the following properties:

1. Doing a row replacement on $$A$$ does not change $$\det(A)$$.
2. Scaling a row of $$A$$ by a scalar $$c$$ multiplies the determinant by $$c$$.
3. Swapping two rows of a matrix multiplies the determinant by $$-1$$.
4. The determinant of the identity matrix $$I_n$$ is equal to $$1$$.

In other words, to every square matrix $$A$$ we assign a number $$\det(A)$$ in a way that satisfies the above properties. In each of the first three cases, doing a row operation on a matrix scales the determinant by a nonzero number. (Multiplying a row by zero is not a row operation.)
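The four defining properties can be sanity-checked numerically with NumPy's determinant routine (which is computed via LU factorization, itself a form of row reduction):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
d = np.linalg.det(A)

# Property 1: row replacement leaves the determinant unchanged.
B = A.copy(); B[1] += 3 * B[0]
assert np.isclose(np.linalg.det(B), d)

# Property 2: scaling a row by c multiplies the determinant by c.
B = A.copy(); B[2] *= -2.5
assert np.isclose(np.linalg.det(B), -2.5 * d)

# Property 3: swapping two rows multiplies the determinant by -1.
B = A.copy(); B[[0, 3]] = B[[3, 0]]
assert np.isclose(np.linalg.det(B), -d)

# Property 4: det(I_n) = 1.
assert np.isclose(np.linalg.det(np.eye(4)), 1.0)
```

This is a check rather than a proof, but it is a quick way to convince yourself that the properties are consistent.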
Therefore, doing row operations on a square matrix $$A$$ does not change whether or not the determinant is zero. The main motivation behind using these particular defining properties is geometric: see Section 4.3. Another motivation for this definition is that it tells us how to compute the determinant: we row reduce and keep track of the changes.

##### Example $$\PageIndex{1}$$

Let us compute $$\det\left(\begin{array}{cc}2&1\\1&4\end{array}\right).$$ First we row reduce, then we compute the determinant in the opposite order: \begin{align*} &\left(\begin{array}{cc}2&1\\1&4\end{array}\right) & \det &= 7\\ \;\xrightarrow{R_1\leftrightarrow R_2}\;& \left(\begin{array}{cc}1&4\\2&1\end{array}\right) & \det &= -7\\ \;\xrightarrow{R_2=R_2-2R_1}\;& \left(\begin{array}{cc}1&4\\0&-7\end{array}\right) & \det &= -7\\ \;\xrightarrow{R_2=R_2\div(-7)}\;& \left(\begin{array}{cc}1&4\\0&1\end{array}\right) & \det &= 1\\ \;\xrightarrow{R_1=R_1-4R_2}\;& \left(\begin{array}{cc}1&0\\0&1\end{array}\right) & \det &= 1 \end{align*} The reduced row echelon form of the matrix is the identity matrix $$I_2\text{,}$$ so its determinant is $$1$$. The last step in the row reduction was a row replacement, so the second-to-last matrix also has determinant $$1$$. The previous step in the row reduction was a row scaling by $$-1/7\text{;}$$ since (the determinant of the second matrix times $$-1/7$$) is $$1\text{,}$$ the determinant of the second matrix must be $$-7$$. The first step in the row reduction was a row swap, so the determinant of the first matrix is negative the determinant of the second. Thus, the determinant of the original matrix is $$7$$.

Note that our answer agrees with Definition 3.5.2 of the determinant in Section 3.5.

##### Example $$\PageIndex{2}$$

Compute $$\det\left(\begin{array}{cc}1&0\\0&3\end{array}\right)$$.

Solution

Let $$A=\left(\begin{array}{cc}1&0\\0&3\end{array}\right)$$.
Since $$A$$ is obtained from $$I_2$$ by multiplying the second row by the constant $$3\text{,}$$ we have $\det(A)=3\det(I_2)=3\cdot 1=3. \nonumber$ Note that our answer agrees with Definition 3.5.2 of the determinant in Section 3.5.

##### Example $$\PageIndex{3}$$

Compute $$\det\left(\begin{array}{ccc}1&0&0\\0&0&1\\5&1&0\end{array}\right)$$.

Solution

First we row reduce, then we compute the determinant in the opposite order: \begin{align*} &\left(\begin{array}{ccc}1&0&0\\0&0&1\\5&1&0\end{array}\right) & \det &= -1\\ \;\xrightarrow{R_2\leftrightarrow R_3}\;& \left(\begin{array}{ccc}1&0&0\\5&1&0\\0&0&1\end{array}\right) & \det &= 1\\ \;\xrightarrow{R_2=R_2-5R_1}\;& \left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right) & \det &= 1 \end{align*} The reduced row echelon form is $$I_3\text{,}$$ which has determinant $$1$$. Working backwards from $$I_3$$ and using the four defining properties in Definition $$\PageIndex{1}$$, we see that the second matrix also has determinant $$1$$ (it differs from $$I_3$$ by a row replacement), and the first matrix has determinant $$-1$$ (it differs from the second by a row swap).

Here is the general method for computing determinants using row reduction.

##### Recipe: Computing Determinants by Row Reducing

Let $$A$$ be a square matrix. Suppose that you do some number of row operations on $$A$$ to obtain a matrix $$B$$ in row echelon form. Then $\det(A) = (-1)^r\cdot \frac{\text{(product of the diagonal entries of B)}} {\text{(product of scaling factors used)}}, \nonumber$ where $$r$$ is the number of row swaps performed. In other words, the determinant of $$A$$ is the product of diagonal entries of the row echelon form $$B\text{,}$$ times a factor of $$\pm1$$ coming from the number of row swaps you made, divided by the product of the scaling factors used in the row reduction.

##### Remark

This is an efficient way of computing the determinant of a large matrix, either by hand or by computer.
The computational complexity of row reduction is $$O(n^3)\text{;}$$ by contrast, the cofactor expansion algorithm we will learn in Section 4.2 has complexity $$O(n!)\approx O(n^n\sqrt n)\text{,}$$ which is much larger. (Cofactor expansion has other uses.)

##### Example $$\PageIndex{4}$$

Compute $$\det\left(\begin{array}{ccc}0&-7&-4\\2&4&6\\3&7&-1\end{array}\right).$$

Solution

We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used. \begin{aligned}\left(\begin{array}{ccc}0&-7&-4\\2&4&6\\3&7&-1\end{array}\right)\quad\xrightarrow{R_1\leftrightarrow R_2}\quad &\left(\begin{array}{ccc}2&4&6\\0&-7&-4\\3&7&-1\end{array}\right)\quad r=1 \\ {}\xrightarrow{R_1=R_1\div 2}\quad &\left(\begin{array}{ccc}1&2&3\\0&-7&-4\\3&7&-1\end{array}\right)\quad \text{scaling factors }=\frac{1}{2} \\ {}\xrightarrow{R_3=R_3-3R_1}\quad &\left(\begin{array}{ccc}1&2&3\\0&-7&-4\\0&1&-10\end{array}\right) \\ {}\xrightarrow{R_2\leftrightarrow R_3}\quad &\left(\begin{array}{ccc}1&2&3\\0&1&-10\\0&-7&-4\end{array}\right)\quad r=2 \\ {}\xrightarrow{R_3=R_3+7R_2}\quad &\left(\begin{array}{ccc}1&2&3\\0&1&-10\\0&0&-74\end{array}\right)\end{aligned} We made two row swaps and scaled once by a factor of $$1/2\text{,}$$ so the Recipe: Computing Determinants by Row Reducing says that $\det\left(\begin{array}{ccc}0&-7&-4\\2&4&6\\3&7&-1\end{array}\right) = (-1)^2\cdot\frac{1\cdot 1\cdot(-74)}{1/2} = -148. \nonumber$

##### Example $$\PageIndex{5}$$

Compute $$\det\left(\begin{array}{ccc}1&2&3\\2&-1&1\\3&0&1\end{array}\right).$$

Solution

We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.
\begin{aligned}\left(\begin{array}{ccc}1&2&3\\2&-1&1\\3&0&1\end{array}\right)\quad\xrightarrow{\begin{array}{l}{R_2=R_2-2R_1}\\{R_3=R_3-3R_1}\end{array}}\quad &\left(\begin{array}{ccc}1&2&3\\0&-5&-5 \\ 0&-6&-8\end{array}\right) \\ {}\xrightarrow{R_2=R_2\div(-5)}\quad &\left(\begin{array}{ccc}1&2&3\\0&1&1\\0&-6&-8\end{array}\right)\quad\text{scaling factors }=-\frac{1}{5} \\ {}\xrightarrow{R_3=R_3+6R_2}\quad &\left(\begin{array}{ccc}1&2&3\\0&1&1\\0&0&-2\end{array}\right)\end{aligned} We did not make any row swaps, and we scaled once by a factor of $$-1/5\text{,}$$ so the Recipe: Computing Determinants by Row Reducing says that $\det\left(\begin{array}{ccc}1&2&3\\2&-1&1\\3&0&1\end{array}\right) = \frac{1\cdot 1\cdot(-2)}{-1/5} = 10. \nonumber$

##### Example $$\PageIndex{6}$$: The Determinant of a $$2\times 2$$ Matrix

Let us use the Recipe: Computing Determinants by Row Reducing to compute the determinant of a general $$2\times 2$$ matrix $$A = \left(\begin{array}{cc}a&b\\c&d\end{array}\right)$$.

• If $$a = 0\text{,}$$ then $\det\left(\begin{array}{cc}a&b\\c&d\end{array}\right) = \det\left(\begin{array}{cc}0&b\\c&d\end{array}\right) = -\det\left(\begin{array}{cc}c&d\\0&b\end{array}\right) = -bc. \nonumber$
• If $$a\neq 0\text{,}$$ then \begin{aligned} \det\left(\begin{array}{cc}a&b\\c&d\end{array}\right)&=a\cdot\det\left(\begin{array}{cc}1&b/a\\c&d\end{array}\right)=a\cdot\det\left(\begin{array}{cc}1&b/a \\ 0&d-c\cdot b/a\end{array}\right) \\ &=a\cdot 1\cdot (d-bc/a)=ad-bc.\end{aligned}

In either case, we recover Definition 3.5.2 in Section 3.5: $\det\left(\begin{array}{cc}a&b\\c&d\end{array}\right) = ad-bc. \nonumber$

If a matrix is already in row echelon form, then you can simply read off the determinant as the product of the diagonal entries. It turns out this is true for a slightly larger class of matrices called triangular.
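The recipe and the examples above translate directly into code. The sketch below uses only row swaps and row replacements (no scalings, so the divisor in the recipe is 1) and tracks the sign contributed by the swaps:

```python
import numpy as np

def det_by_row_reduction(A):
    """Compute det(A) by row reduction to echelon form, following the
    recipe: (-1)^r times the product of the diagonal entries.
    Only swaps and replacements are used, so no scaling factors appear."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        # Partial pivoting: bring the largest entry in the column up.
        p = j + np.argmax(np.abs(U[j:, j]))
        if np.isclose(U[p, j], 0.0):
            return 0.0                      # zero pivot column: det = 0
        if p != j:
            U[[j, p]] = U[[p, j]]
            sign = -sign                    # each swap flips the sign
        # Row replacements below the pivot leave the determinant unchanged.
        U[j+1:] -= np.outer(U[j+1:, j] / U[j, j], U[j])
    return sign * np.prod(np.diag(U))

# Example 4 from the text:
A = [[0, -7, -4], [2, 4, 6], [3, 7, -1]]
print(round(det_by_row_reduction(A)))  # → -148
```

Up to floating-point rounding, the result matches the hand computation in Example 4, and the same function reproduces Example 5.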
##### Definition $$\PageIndex{2}$$: Diagonal

• The diagonal entries of a matrix $$A$$ are the entries $$a_{11},a_{22},\ldots\text{:}$$

Figure $$\PageIndex{1}$$

• A square matrix is called upper-triangular if its nonzero entries all lie on or above the diagonal, and it is called lower-triangular if its nonzero entries all lie on or below the diagonal. It is called diagonal if all of its nonzero entries lie on the diagonal, i.e., if it is both upper-triangular and lower-triangular.

Figure $$\PageIndex{2}$$

##### Proposition $$\PageIndex{1}$$

Let $$A$$ be an $$n\times n$$ matrix.

1. If $$A$$ has a zero row or column, then $$\det(A) = 0.$$
2. If $$A$$ is upper-triangular or lower-triangular, then $$\det(A)$$ is the product of its diagonal entries.

Proof

1. Suppose that $$A$$ has a zero row. Let $$B$$ be the matrix obtained by negating the zero row. Then $$\det(A) = -\det(B)$$ by the second defining property in Definition $$\PageIndex{1}$$. But $$A = B\text{,}$$ so $$\det(A) = \det(B)\text{:}$$ $\left(\begin{array}{ccc}1&2&3\\0&0&0\\7&8&9\end{array}\right)\quad\xrightarrow{R_2=-R_2}\quad\left(\begin{array}{ccc}1&2&3\\0&0&0\\7&8&9\end{array}\right).\nonumber$ Putting these together yields $$\det(A) = -\det(A)\text{,}$$ so $$\det(A)=0$$. Now suppose that $$A$$ has a zero column. Then $$A$$ is not invertible by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form has a zero row. Since row operations do not change whether the determinant is zero, we conclude $$\det(A)=0$$.
2. First suppose that $$A$$ is upper-triangular, and that one of the diagonal entries is zero, say $$a_{ii}=0$$.
We can perform row operations to clear the entries above the nonzero diagonal entries: $\left(\begin{array}{cccc}a_{11}&\star&\star&\star \\ 0&a_{22}&\star&\star \\ 0&0&0&\star \\ 0&0&0&a_{44}\end{array}\right)\xrightarrow{\qquad}\left(\begin{array}{cccc}a_{11}&0&\star &0\\0&a_{22}&\star&0\\0&0&0&0\\0&0&0&a_{44}\end{array}\right)\nonumber$ In the resulting matrix, the $$i$$th row is zero, so $$\det(A) = 0$$ by the first part. Still assuming that $$A$$ is upper-triangular, now suppose that all of the diagonal entries of $$A$$ are nonzero. Then $$A$$ can be transformed to the identity matrix by scaling the diagonal entries and then doing row replacements: $\begin{array}{ccccc}{\left(\begin{array}{ccc}a&\star&\star \\ 0&b&\star \\ 0&0&c\end{array}\right)}&{\xrightarrow{\begin{array}{c}{\text{scale by}}\\{a^{-1},\:b^{-1},\:c^{-1}}\end{array}}}&{\left(\begin{array}{ccc}1&\star&\star \\ 0&1&\star \\ 0&0&1\end{array}\right)}&{\xrightarrow{\begin{array}{c}{\text{row}}\\{\text{replacements}}\end{array}}}&{\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)}\\{\det =abc}&{\xleftarrow{\qquad}}&{\det =1}&{\xleftarrow{\qquad}}&{\det =1}\end{array}\nonumber$ Since $$\det(I_n) = 1$$ and we scaled by the reciprocals of the diagonal entries, this implies $$\det(A)$$ is the product of the diagonal entries. The same argument works for lower-triangular matrices, except that the row replacements go down instead of up.
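Proposition 1 is easy to spot-check numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Upper-triangular: the determinant is the product of the diagonal.
U = np.triu(rng.standard_normal((5, 5)))
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))

# The same holds for lower-triangular matrices.
L = np.tril(rng.standard_normal((5, 5)))
assert np.isclose(np.linalg.det(L), np.prod(np.diag(L)))

# A zero row forces a zero determinant.
A = rng.standard_normal((4, 4))
A[2] = 0.0
assert np.isclose(np.linalg.det(A), 0.0)
```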
##### Example $$\PageIndex{7}$$

Compute the determinants of these matrices: $\left(\begin{array}{ccc}1&2&3\\0&4&5\\0&0&6\end{array}\right)\qquad\left(\begin{array}{ccc}-20&0&0\\ \pi&0&0\\ 100&3&-7\end{array}\right)\qquad\left(\begin{array}{ccc}17&-3&4\\0&0&0\\11/2&1&e\end{array}\right).\nonumber$

Solution

The first matrix is upper-triangular, the second is lower-triangular, and the third has a zero row: \begin{aligned}\det\left(\begin{array}{ccc}1&2&3\\0&4&5\\0&0&6\end{array}\right)&=1\cdot 4\cdot 6=24 \\ \det\left(\begin{array}{ccc}-20&0&0\\ \pi&0&0\\100&3&-7\end{array}\right)&=(-20)\cdot 0\cdot(-7)=0 \\ \det\left(\begin{array}{ccc}17&-3&4\\0&0&0\\ 11/2&1&e\end{array}\right)&=0.\end{aligned}

A matrix can always be transformed into row echelon form by a series of row operations, and a matrix in row echelon form is upper-triangular. Therefore, we have completely justified Recipe: Computing Determinants by Row Reducing for computing the determinant. The determinant is characterized by its defining properties, Definition $$\PageIndex{1}$$, since we can compute the determinant of any matrix using row reduction, as in the above Recipe: Computing Determinants by Row Reducing. However, we have not yet proved the existence of a function satisfying the defining properties! Row reducing will compute the determinant if it exists, but we cannot use row reduction to prove existence, because we do not yet know that you compute the same number by row reducing in two different ways.

##### Theorem $$\PageIndex{1}$$: Existence of the Determinant

There exists one and only one function from the set of square matrices to the real numbers that satisfies the four defining properties, Definition $$\PageIndex{1}$$.

We will prove the existence theorem in Section 4.2, by exhibiting a recursive formula for the determinant. Again, the real content of the existence theorem is:

##### Note $$\PageIndex{1}$$

No matter which row operations you do, you will always compute the same value for the determinant.
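This uniqueness can be illustrated numerically (a demonstration, not a proof): apply a random sequence of row operations to a matrix, track how each operation scales the determinant according to the defining properties, and recover the same value regardless of the path taken.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
target = np.linalg.det(A)

# Apply a random sequence of row operations, tracking the accumulated
# factor by which the determinant has been scaled.
B, factor = A.copy(), 1.0
for _ in range(10):
    i, j = rng.choice(4, size=2, replace=False)
    op = rng.integers(3)
    if op == 0:                          # replacement: factor unchanged
        B[i] += rng.uniform(-1.0, 1.0) * B[j]
    elif op == 1:                        # scaling by nonzero c: factor *= c
        c = rng.uniform(0.5, 2.0)
        B[i] *= c
        factor *= c
    else:                                # swap: factor *= -1
        B[[i, j]] = B[[j, i]]
        factor = -factor

# Whatever sequence was applied, undoing the tracked factor
# recovers the determinant of the original matrix.
assert np.isclose(np.linalg.det(B) / factor, target)
```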
## Magical Properties of the Determinant

In this subsection, we will discuss a number of the amazing properties enjoyed by the determinant: the invertibility property, Proposition $$\PageIndex{2}$$, the multiplicativity property, Proposition $$\PageIndex{3}$$, and the transpose property, Proposition $$\PageIndex{4}$$.

##### Proposition $$\PageIndex{2}$$: Invertibility Property

A square matrix $$A$$ is invertible if and only if $$\det(A)\neq 0$$.

Proof

If $$A$$ is invertible, then it has a pivot in every row and column by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form is the identity matrix. Since row operations do not change whether the determinant is zero, and since $$\det(I_n) = 1\text{,}$$ this implies $$\det(A)\neq 0.$$ Conversely, if $$A$$ is not invertible, then it is row equivalent to a matrix with a zero row. Again, row operations do not change whether the determinant is zero, so in this case $$\det(A) = 0.$$

By the invertibility property, a matrix that does not satisfy any of the properties of Theorem 3.6.1 in Section 3.6 has zero determinant.

##### Corollary $$\PageIndex{1}$$

Let $$A$$ be a square matrix. If the rows or columns of $$A$$ are linearly dependent, then $$\det(A)=0$$.

Proof

If the columns of $$A$$ are linearly dependent, then $$A$$ is not invertible by condition 4 of Theorem 3.6.1 in Section 3.6. Suppose now that the rows of $$A$$ are linearly dependent. If $$r_1,r_2,\ldots,r_n$$ are the rows of $$A\text{,}$$ then one of the rows is in the span of the others, so we have an equation like $r_2 = 3r_1 - r_3 + 2r_4. \nonumber$ If we perform the following row operations on $$A\text{:}$$ $R_2 = R_2 - 3R_1;\quad R_2 = R_2 + R_3;\quad R_2 = R_2 - 2R_4 \nonumber$ then the second row of the resulting matrix is zero. Hence $$A$$ is not invertible in this case either.
Alternatively, if the rows of $$A$$ are linearly dependent, then one can combine condition 4 of the Theorem 3.6.1 in Section 3.6 and the transpose property, Proposition $$\PageIndex{4}$$ below, to conclude that $$\det(A)=0$$. In particular, if two rows/columns of $$A$$ are multiples of each other, then $$\det(A)=0.$$ We also recover the fact that a matrix with a row or column of zeros has determinant zero. ##### Example $$\PageIndex{8}$$ The following matrices all have zero determinant: $\left(\begin{array}{ccc}0&2&-1 \\ 0&5&10\\0&-7&3\end{array}\right),\quad \left(\begin{array}{ccc}5&-15&11\\3&-9&2\\2&-6&16\end{array}\right),\quad\left(\begin{array}{cccc}3&1&2&4\\0&0&0&0\\4&2&5&12\\-1&3&4&8\end{array}\right),\quad\left(\begin{array}{ccc}\pi&e&11 \\3\pi&3e&33\\12&-7&2\end{array}\right).\nonumber$ The proofs of the multiplicativity property, Proposition $$\PageIndex{3}$$, and the transpose property, Proposition $$\PageIndex{4}$$, below, as well as the cofactor expansion theorem, Theorem 4.2.1 in Section 4.2, and the determinants and volumes theorem, Theorem 4.3.2 in Section 4.3, use the following strategy: define another function $$d\colon\{n\times n\text{ matrices}\} \to \mathbb{R}\text{,}$$ and prove that $$d$$ satisfies the same four defining properties as the determinant. By the existence theorem, Theorem $$\PageIndex{1}$$, the function $$d$$ is equal to the determinant. This is an advantage of defining a function via its properties: in order to prove it is equal to another function, one only has to check the defining properties. Proposition $$\PageIndex{3}$$: Multiplicativity Property If $$A$$ and $$B$$ are $$n\times n$$ matrices, then $\det(AB) = \det(A)\det(B). \nonumber$ Proof In this proof, we need to use the notion of an elementary matrix. This is a matrix obtained by doing one row operation to the identity matrix.
There are three kinds of elementary matrices: those arising from row replacement, row scaling, and row swaps: $\begin{array}{ccc} {\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)}&{\xrightarrow{R_2=R_2-2R_1}} &{\left(\begin{array}{ccc}1&0&0\\-2&1&0\\0&0&1\end{array}\right)} \\ {\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)}&{\xrightarrow{R_1=3R_1}} &{\left(\begin{array}{ccc}3&0&0\\0&1&0\\0&0&1\end{array}\right)} \\ {\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)}&{\xrightarrow{R_1\leftrightarrow R_2}} &{\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)}\end{array}\nonumber$ The important property of elementary matrices is the following claim. Claim: If $$E$$ is the elementary matrix for a row operation, then $$EA$$ is the matrix obtained by performing the same row operation on $$A$$. In other words, left-multiplication by an elementary matrix applies a row operation. For example, \begin{aligned}\left(\begin{array}{ccc}1&0&0\\-2&1&0\\0&0&1\end{array}\right)\left(\begin{array}{ccc}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right)&=\left(\begin{array}{lll}a_{11}&a_{12}&a_{13} \\ a_{21}-2a_{11}&a_{22}-2a_{12}&a_{23}-2a_{13} \\ a_{31}&a_{32}&a_{33}\end{array}\right) \\ \left(\begin{array}{ccc}3&0&0\\0&1&0\\0&0&1\end{array}\right)\left(\begin{array}{ccc}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right)&=\left(\begin{array}{rrr}3a_{11}&3a_{12}&3a_{13} \\ a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right) \\ \left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)\left(\begin{array}{ccc}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{array}\right)&=\left(\begin{array}{ccc}a_{21}&a_{22}&a_{23}\\a_{11}&a_{12}&a_{13}\\a_{31}&a_{32}&a_{33}\end{array}\right).\end{aligned} The proof of the Claim is by direct calculation; we leave it to the reader to generalize the above equalities to $$n\times n$$ matrices.
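The Claim can be spot-checked numerically. The sketch below (illustrative, not from the text) multiplies each of the three 3×3 elementary matrices above into a sample matrix and compares the result against performing the row operation directly.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

E_replace = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # R2 = R2 - 2 R1
E_scale   = [[3, 0, 0], [0, 1, 0], [0, 0, 1]]    # R1 = 3 R1
E_swap    = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]    # R1 <-> R2

# Left-multiplication by E applies the corresponding row operation to A:
assert matmul(E_replace, A) == [[1, 2, 3], [2, 1, 0], [7, 8, 9]]
assert matmul(E_scale, A)   == [[3, 6, 9], [4, 5, 6], [7, 8, 9]]
assert matmul(E_swap, A)    == [[4, 5, 6], [1, 2, 3], [7, 8, 9]]
```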
As a consequence of the Claim and the four defining properties, Definition $$\PageIndex{1}$$, we have the following observation. Let $$C$$ be any square matrix. 1. If $$E$$ is the elementary matrix for a row replacement, then $$\det(EC) = \det(C).$$ In other words, left-multiplication by $$E$$ does not change the determinant. 2. If $$E$$ is the elementary matrix for a row scale by a factor of $$c\text{,}$$ then $$\det(EC) = c\det(C).$$ In other words, left-multiplication by $$E$$ scales the determinant by a factor of $$c$$. 3. If $$E$$ is the elementary matrix for a row swap, then $$\det(EC) = -\det(C).$$ In other words, left-multiplication by $$E$$ negates the determinant.

Now we turn to the proof of the multiplicativity property. Suppose to begin that $$B$$ is not invertible. Then $$AB$$ is also not invertible: otherwise, $$(AB)^{-1} AB = I_n$$ implies $$B^{-1} = (AB)^{-1} A.$$ By the invertibility property, Proposition $$\PageIndex{2}$$, both sides of the equation $$\det(AB) = \det(A)\det(B)$$ are zero. Now assume that $$B$$ is invertible, so $$\det(B)\neq 0$$. Define a function $d\colon\bigl\{n\times n\text{ matrices}\bigr\} \to \mathbb{R} \quad\text{by}\quad d(C) = \frac{\det(CB)}{\det(B)}. \nonumber$ We claim that $$d$$ satisfies the four defining properties of the determinant. 1. Let $$C'$$ be the matrix obtained by doing a row replacement on $$C\text{,}$$ and let $$E$$ be the elementary matrix for this row replacement, so $$C' = EC$$. Since left-multiplication by $$E$$ does not change the determinant, we have $$\det(ECB) = \det(CB)\text{,}$$ so $d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{\det(CB)}{\det(B)} = d(C). \nonumber$ 2. Let $$C'$$ be the matrix obtained by scaling a row of $$C$$ by a factor of $$c\text{,}$$ and let $$E$$ be the elementary matrix for this row scale, so $$C' = EC$$. Since left-multiplication by $$E$$ scales the determinant by a factor of $$c\text{,}$$ we have $$\det(ECB) = c\det(CB)\text{,}$$ so $d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{c\det(CB)}{\det(B)} = c\cdot d(C). \nonumber$ 3. Let $$C'$$ be the matrix obtained by swapping two rows of $$C\text{,}$$ and let $$E$$ be the elementary matrix for this row swap, so $$C' = EC$$. Since left-multiplication by $$E$$ negates the determinant, we have $$\det(ECB) = -\det(CB)\text{,}$$ so $d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{-\det(CB)}{\det(B)} = -d(C). \nonumber$ 4. We have $d(I_n) = \frac{\det(I_nB)}{\det(B)} = \frac{\det(B)}{\det(B)} = 1. \nonumber$ Since $$d$$ satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem $$\PageIndex{1}$$. In other words, for all matrices $$A\text{,}$$ we have $\det(A) = d(A) = \frac{\det(AB)}{\det(B)}. \nonumber$ Multiplying through by $$\det(B)$$ gives $$\det(A)\det(B)=\det(AB).$$

Recall that taking a power of a square matrix $$A$$ means taking products of $$A$$ with itself: $A^2 = AA \qquad A^3 = AAA \qquad \text{etc.} \nonumber$ If $$A$$ is invertible, then we define $A^{-2} = A^{-1} A^{-1} \qquad A^{-3} = A^{-1} A^{-1} A^{-1} \qquad \text{etc.} \nonumber$ For completeness, we set $$A^0 = I_n$$ if $$A\neq 0$$. Corollary $$\PageIndex{2}$$ If $$A$$ is a square matrix, then $\det(A^n) = \det(A)^n \nonumber$ for all $$n\geq 1$$. If $$A$$ is invertible, then the equation holds for all $$n\leq 0$$ as well; in particular, $\det(A^{-1}) = \frac 1{\det(A)}. \nonumber$ Proof Using the multiplicativity property, Proposition $$\PageIndex{3}$$, we compute $\det(A^2) = \det(AA) = \det(A)\det(A) = \det(A)^2 \nonumber$ and $\det(A^3) = \det(AAA) = \det(A)\det(AA) = \det(A)\det(A)\det(A) = \det(A)^3; \nonumber$ the pattern is clear.
We have $1 = \det(I_n) = \det(A A^{-1}) = \det(A)\det(A^{-1}) \nonumber$ by the multiplicativity property, Proposition $$\PageIndex{3}$$ and the fourth defining property, Definition $$\PageIndex{1}$$, which shows that $$\det(A^{-1}) = \det(A)^{-1}$$. Thus $\det(A^{-2}) = \det(A^{-1} A^{-1}) = \det(A^{-1})\det(A^{-1}) = \det(A^{-1})^2 = \det(A)^{-2}, \nonumber$ and so on. ##### Example $$\PageIndex{9}$$ Compute $$\det(A^{100}),$$ where $A = \left(\begin{array}{cc}4&1\\2&1\end{array}\right). \nonumber$ Solution We have $$\det(A) = 4 - 2 = 2\text{,}$$ so $\det(A^{100}) = \det(A)^{100} = 2^{100}. \nonumber$ Nowhere did we have to compute the $$100$$th power of $$A\text{!}$$ (We will learn an efficient way to do that in Section 5.4.) Here is another application of the multiplicativity property, Proposition $$\PageIndex{3}$$. Corollary $$\PageIndex{3}$$ Let $$A_1,A_2,\ldots,A_k$$ be $$n\times n$$ matrices. Then the product $$A_1A_2\cdots A_k$$ is invertible if and only if each $$A_i$$ is invertible. Proof The determinant of the product is the product of the determinants by the multiplicativity property, Proposition $$\PageIndex{3}$$: $\det(A_1A_2\cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k). \nonumber$ By the invertibility property, Proposition $$\PageIndex{2}$$, this is nonzero if and only if $$A_1A_2\cdots A_k$$ is invertible. On the other hand, $$\det(A_1)\det(A_2)\cdots\det(A_k)$$ is nonzero if and only if each $$\det(A_i)\neq0\text{,}$$ which means each $$A_i$$ is invertible. ##### Example $$\PageIndex{10}$$ For any number $$n$$ we define $A_n = \left(\begin{array}{cc}1&n\\1&2\end{array}\right). \nonumber$ Show that the product $A_1 A_2 A_3 A_4 A_5 \nonumber$ is not invertible. Solution When $$n = 2\text{,}$$ the matrix $$A_2$$ is not invertible, because its rows are identical: $A_2 = \left(\begin{array}{cc}1&2\\1&2\end{array}\right). \nonumber$ Hence any product involving $$A_2$$ is not invertible. 
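Corollary $$\PageIndex{2}$$ and Example $$\PageIndex{10}$$ above can be checked numerically for 2×2 matrices. The sketch below is illustrative (not part of the text) and uses exact integer arithmetic.

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    (a, b), (c, d) = M
    return a * d - b * c

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Example 9: det(A^n) = det(A)^n, here det(A) = 2, so det(A^100) = 2^100
# without ever computing A^100. We verify the identity for small n.
A = [[4, 1], [2, 1]]
P = [[1, 0], [0, 1]]                  # running power of A
for n in range(1, 8):
    P = matmul2(P, A)
    assert det2(P) == det2(A) ** n    # equals 2**n

# Example 10: the product A_1 A_2 ... A_5 contains the singular factor A_2,
# so the product of the factor determinants, hence the determinant, is 0.
Q = [[1, 0], [0, 1]]
for n in range(1, 6):
    Q = matmul2(Q, [[1, n], [1, 2]])
assert det2(Q) == 0
```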
In order to state the transpose property, we need to define the transpose of a matrix. ##### Definition $$\PageIndex{3}$$: Transpose The transpose of an $$m\times n$$ matrix $$A$$ is the $$n\times m$$ matrix $$A^T$$ whose rows are the columns of $$A$$. In other words, the $$ij$$ entry of $$A^T$$ is $$a_{ji}$$. Figure $$\PageIndex{3}$$ Like inversion, transposition reverses the order of matrix multiplication. Fact $$\PageIndex{1}$$ Let $$A$$ be an $$m\times n$$ matrix, and let $$B$$ be an $$n\times p$$ matrix. Then $(AB)^T = B^TA^T. \nonumber$ Proof First suppose that $$A$$ is a row vector and $$B$$ is a column vector, i.e., $$m = p = 1$$. Then \begin{aligned}AB&=\left(\begin{array}{cccc}a_1 &a_2&\cdots &a_n\end{array}\right)\left(\begin{array}{c}b_1\\b_2\\ \vdots\\b_n\end{array}\right)=a_1b_1+a_2b_2+\cdots +a_nb_n \\ &=\left(\begin{array}{cccc}b_1&b_2&\cdots &b_n\end{array}\right)\left(\begin{array}{c}a_1\\a_2\\ \vdots\\a_n\end{array}\right)=B^TA^T.\end{aligned} Now we use the row-column rule for matrix multiplication. Let $$r_1,r_2,\ldots,r_m$$ be the rows of $$A\text{,}$$ and let $$c_1,c_2,\ldots,c_p$$ be the columns of $$B\text{,}$$ so $AB=\left(\begin{array}{c}—r_1 —\\ —r_2— \\ \vdots \\ —r_m—\end{array}\right)\left(\begin{array}{cccc}|&|&\quad &| \\ c_1&c_2&\cdots &c_p \\ |&|&\quad &|\end{array}\right)=\left(\begin{array}{cccc}r_1c_1&r_1c_2&\cdots &r_1c_p \\ r_2c_1&r_2c_2&\cdots &r_2c_p \\ \vdots &\vdots &{}&\vdots \\ r_mc_1&r_mc_2&\cdots &r_mc_p\end{array}\right).\nonumber$ By the case we handled above, we have $$r_ic_j = c_j^Tr_i^T$$.
Then \begin{aligned}(AB)^T&=\left(\begin{array}{cccc}r_1c_1&r_2c_1&\cdots &r_mc_1 \\ r_1c_2&r_2c_2&\cdots &r_mc_2 \\ \vdots &\vdots &{}&\vdots \\ r_1c_p&r_2c_p&\cdots &r_mc_p\end{array}\right) \\ &=\left(\begin{array}{cccc}c_1^Tr_1^T &c_1^Tr_2^T&\cdots &c_1^Tr_m^T \\ c_2^Tr_1^T&c_2^Tr_2^T&\cdots &c_2^Tr_m^T \\ \vdots&\vdots&{}&\vdots \\ c_p^Tr_1^T&c_p^Tr_2^T&\cdots&c_p^Tr_m^T\end{array}\right) \\ &=\left(\begin{array}{c}—c_1^T— \\ —c_2^T— \\ \vdots \\ —c_p^T—\end{array}\right)\left(\begin{array}{cccc}|&|&\quad&| \\ r_1^T&r_2^T&\cdots&r_m^T \\ |&|&\quad&|\end{array}\right)=B^TA^T.\end{aligned} Proposition $$\PageIndex{4}$$: Transpose Property For any square matrix $$A\text{,}$$ we have $\det(A) = \det(A^T). \nonumber$ Proof We follow the same strategy as in the proof of the multiplicativity property, Proposition $$\PageIndex{3}$$: namely, we define $d(A) = \det(A^T), \nonumber$ and we show that $$d$$ satisfies the four defining properties of the determinant. Again we use elementary matrices, also introduced in the proof of the multiplicativity property, Proposition $$\PageIndex{3}$$. 1. Let $$C'$$ be the matrix obtained by doing a row replacement on $$C\text{,}$$ and let $$E$$ be the elementary matrix for this row replacement, so $$C' = EC$$. The elementary matrix for a row replacement is either upper-triangular or lower-triangular, with ones on the diagonal: $R_1=R_1+3R_3:\left(\begin{array}{ccc}1&0&3\\0&1&0\\0&0&1\end{array}\right)\quad R_3=R_3+3R_1:\left(\begin{array}{ccc}1&0&0\\0&1&0\\3&0&1\end{array}\right).\nonumber$ It follows that $$E^T$$ is also either upper-triangular or lower-triangular, with ones on the diagonal, so $$\det(E^T) = 1$$ by this Proposition $$\PageIndex{1}$$. By the Fact $$\PageIndex{1}$$ and the multiplicativity property, Proposition $$\PageIndex{3}$$, $\begin{split} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = \det(C^T) = d(C). \end{split} \nonumber$ 2.
Let $$C'$$ be the matrix obtained by scaling a row of $$C$$ by a factor of $$c\text{,}$$ and let $$E$$ be the elementary matrix for this row scale, so $$C' = EC$$. Then $$E$$ is a diagonal matrix: $R_2=cR_2:\: \left(\begin{array}{ccc}1&0&0\\0&c&0\\0&0&1\end{array}\right).\nonumber$ Thus $$\det(E^T) = c$$. By the Fact $$\PageIndex{1}$$ and the multiplicativity property, Proposition $$\PageIndex{3}$$, $\begin{split} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = c\det(C^T) = c\cdot d(C). \end{split} \nonumber$ 3. Let $$C'$$ be the matrix obtained by swapping two rows of $$C\text{,}$$ and let $$E$$ be the elementary matrix for this row swap, so $$C' = EC$$. Then $$E$$ is equal to its own transpose: $R_1\longleftrightarrow R_2:\:\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)=\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)^T.\nonumber$ Since $$E$$ (hence $$E^T$$) is obtained by performing one row swap on the identity matrix, we have $$\det(E^T) = -1$$. By the Fact $$\PageIndex{1}$$ and the multiplicativity property, Proposition $$\PageIndex{3}$$, $\begin{split} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = -\det(C^T) = - d(C). \end{split} \nonumber$ 4. Since $$I_n^T = I_n,$$ we have $d(I_n) = \det(I_n^T) = \det(I_n) = 1. \nonumber$ Since $$d$$ satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem $$\PageIndex{1}$$. In other words, for all matrices $$A\text{,}$$ we have $\det(A) = d(A) = \det(A^T). \nonumber$ The transpose property, Proposition $$\PageIndex{4}$$, is very useful. For concreteness, we note that $$\det(A)=\det(A^T)$$ means, for instance, that $\det\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right) = \det\left(\begin{array}{ccc}1&4&7\\2&5&8\\3&6&9\end{array}\right).
\nonumber$ This implies that the determinant has the curious feature that it also behaves well with respect to column operations. Indeed, a column operation on $$A$$ is the same as a row operation on $$A^T\text{,}$$ and $$\det(A) = \det(A^T)$$. Corollary $$\PageIndex{4}$$ The determinant satisfies the following properties with respect to column operations: 1. Doing a column replacement on $$A$$ does not change $$\det(A)$$. 2. Scaling a column of $$A$$ by a scalar $$c$$ multiplies the determinant by $$c$$. 3. Swapping two columns of a matrix multiplies the determinant by $$-1$$. The previous corollary makes it easier to compute the determinant: one is allowed to do row and column operations when simplifying the matrix. (Of course, one still has to keep track of how the row and column operations change the determinant.) ##### Example $$\PageIndex{11}$$ Compute $$\det\left(\begin{array}{ccc}2&7&4\\3&1&3\\4&0&1\end{array}\right).$$ Solution It takes fewer column operations than row operations to make this matrix upper-triangular: \begin{aligned}\left(\begin{array}{ccc}2&7&4\\3&1&3\\4&0&1\end{array}\right)\quad\xrightarrow{C_1=C_1-4C_3}\quad &\left(\begin{array}{ccc}-14&7&4\\-9&1&3\\0&0&1\end{array}\right) \\ {}\xrightarrow{C_1=C_1+9C_2}\quad&\left(\begin{array}{ccc}49&7&4\\0&1&3\\0&0&1\end{array}\right)\end{aligned} We performed two column replacements, which does not change the determinant; therefore, $\det\left(\begin{array}{ccc}2&7&4\\3&1&3\\4&0&1\end{array}\right) = \det\left(\begin{array}{ccc}49&7&4\\0&1&3\\0&0&1\end{array}\right) = 49. \nonumber$ #### Multilinearity The following observation is useful for theoretical purposes. We can think of $$\det$$ as a function of the rows of a matrix: $\det(v_1,v_2,\ldots,v_n) = \det\left(\begin{array}{c}—v_1— \\ —v_2— \\ \vdots \\ —v_n—\end{array}\right). 
\nonumber$ Proposition $$\PageIndex{5}$$: Multilinearity Property Let $$i$$ be a whole number between $$1$$ and $$n\text{,}$$ and fix $$n-1$$ vectors $$v_1,v_2,\ldots,v_{i-1},v_{i+1},\ldots,v_n$$ in $$\mathbb{R}^n$$. Then the transformation $$T\colon\mathbb{R}^n \to\mathbb{R}$$ defined by $T(x) = \det(v_1,v_2,\ldots,v_{i-1},x,v_{i+1},\ldots,v_n) \nonumber$ is linear. Proof First assume that $$i=1\text{,}$$ so $T(x) = \det(x,v_2,\ldots,v_n). \nonumber$ We have to show that $$T$$ satisfies the defining properties, Definition 3.3.1, in Section 3.3. • By the second defining property, Definition $$\PageIndex{1}$$, scaling any row of a matrix by a number $$c$$ scales the determinant by a factor of $$c$$. This implies that $$T$$ satisfies the second property, i.e., that $T(cx) = \det(cx,v_2,\ldots,v_n) = c\det(x,v_2,\ldots,v_n) = cT(x). \nonumber$ • We claim that $$T(v+w) = T(v) + T(w)$$. If $$w$$ is in $$\text{Span}\{v,v_2,\ldots,v_n\}\text{,}$$ then $w = cv + c_2v_2 + \cdots + c_nv_n \nonumber$ for some scalars $$c,c_2,\ldots,c_n$$. Let $$A$$ be the matrix with rows $$v+w,v_2,\ldots,v_n\text{,}$$ so $$T(v+w) = \det(A).$$ By performing the row operations $R_1 = R_1 - c_2R_2;\quad R_1 = R_1 - c_3R_3;\quad\ldots\quad R_1 = R_1 - c_nR_n, \nonumber$ the first row of the matrix $$A$$ becomes $v+w-(c_2v_2+\cdots+c_nv_n) = v + cv = (1+c)v. \nonumber$ Therefore, $\begin{split} T(v+w) = \det(A) &= \det((1+c)v,v_2,\ldots,v_n) \\ &= (1+c)\det(v,v_2,\ldots,v_n) \\ &= T(v) + cT(v) = T(v) + T(cv). \end{split} \nonumber$ Doing the opposite row operations $R_1 = R_1 + c_2R_2;\quad R_1 = R_1 + c_3R_3;\quad\ldots\quad R_1 = R_1 + c_nR_n \nonumber$ to the matrix with rows $$cv,v_2,\ldots,v_n$$ shows that $\begin{split} T(cv) &= \det(cv,v_2,\ldots,v_n) \\ &= \det(cv+c_2v_2+\cdots+c_nv_n,v_2,\ldots,v_n) \\ &= \det(w,v_2,\ldots,v_n) = T(w), \end{split} \nonumber$ which finishes the proof of the first property in this case.
Now suppose that $$w$$ is not in $$\text{Span}\{v,v_2,\ldots,v_n\}$$. This implies that $$\{v,v_2,\ldots,v_n\}$$ is linearly dependent (otherwise it would form a basis for $$\mathbb{R}^n$$), so $$T(v) = 0$$. If $$v$$ is not in $$\text{Span}\{v_2,\ldots,v_n\}\text{,}$$ then $$\{v_2,\ldots,v_n\}$$ is linearly dependent by the increasing span criterion, Theorem 2.5.2 in Section 2.5, so $$T(x) = 0$$ for all $$x\text{,}$$ as the matrix with rows $$x,v_2,\ldots,v_n$$ is not invertible. Hence we may assume $$v$$ is in $$\text{Span}\{v_2,\ldots,v_n\}$$. By the above argument with the roles of $$v$$ and $$w$$ reversed, we have $$T(v+w) = T(v)+T(w).$$ For $$i\neq 1\text{,}$$ we note that $\begin{split} T(x) &= \det(v_1,v_2,\ldots,v_{i-1},x,v_{i+1},\ldots,v_n) \\ &= -\det(x,v_2,\ldots,v_{i-1},v_1,v_{i+1},\ldots,v_n). \end{split} \nonumber$ By the previously handled case, we know that $$-T$$ is linear: $-T(cx) = -cT(x) \qquad -T(v+w) = -T(v) - T(w). \nonumber$ Multiplying both sides by $$-1\text{,}$$ we see that $$T$$ is linear. For example, we have $\det\left(\begin{array}{lcr}—&v_1&— \\ —&av+bw&— \\ —&v_3&—\end{array}\right)=a\det\left(\begin{array}{c}—v_1— \\ —v— \\ —v_3—\end{array}\right)+b\det\left(\begin{array}{c}—v_1— \\ —w— \\ —v_3—\end{array}\right)\nonumber$ By the transpose property, Proposition $$\PageIndex{4}$$, the determinant is also multilinear in the columns of a matrix: $\det\left(\begin{array}{ccc}|&|&|\\ v_1&av+bw&v_3 \\ |&|&|\end{array}\right) = a\det\left(\begin{array}{ccc}|&|&|\\v_1&v&v_3 \\ |&|&|\end{array}\right) + b\det\left(\begin{array}{ccc}|&|&|\\v_1&w&v_3\\|&|&|\end{array}\right). \nonumber$ ##### Remark: Alternative defining properties In more theoretical treatments of the topic, where row reduction plays a secondary role, the defining properties of the determinant are often taken to be: 1. The determinant $$\det(A)$$ is multilinear in the rows of $$A$$. 2. If $$A$$ has two identical rows, then $$\det(A) = 0$$. 3.
The determinant of the identity matrix is equal to one. We have already shown that our four defining properties, Definition $$\PageIndex{1}$$, imply these three. Conversely, we will prove that these three alternative properties imply our four, so that both sets of properties are equivalent. Defining property $$2$$ is just the second defining property, Definition 3.3.1, in Section 3.3. Suppose that the rows of $$A$$ are $$v_1,v_2,\ldots,v_n$$. If we perform the row replacement $$R_i = R_i + cR_j$$ on $$A\text{,}$$ then the rows of our new matrix are $$v_1,v_2,\ldots,v_{i-1},v_i+cv_j,v_{i+1},\ldots,v_n\text{,}$$ so by linearity in the $$i$$th row, $\begin{split} \det(& v_1,v_2,\ldots,v_{i-1},v_i+cv_j,v_{i+1},\ldots,v_n) \\ &= \det(v_1,v_2,\ldots,v_{i-1},v_i,v_{i+1},\ldots,v_n) + c\det(v_1,v_2,\ldots,v_{i-1},v_j,v_{i+1},\ldots,v_n) \\ &= \det(v_1,v_2,\ldots,v_{i-1},v_i,v_{i+1},\ldots,v_n) = \det(A), \end{split} \nonumber$ where $$\det(v_1,v_2,\ldots,v_{i-1},v_j,v_{i+1},\ldots,v_n)=0$$ because $$v_j$$ is repeated. Thus, the alternative defining properties imply our first two defining properties. For the third, suppose that we want to swap row $$i$$ with row $$j$$. Using the second alternative defining property and multilinearity in the $$i$$th and $$j$$th rows, we have $\begin{split} 0 &= \det(v_1,\ldots,v_i+v_j,\ldots,v_i+v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_i+v_j,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_i+v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_i,\ldots,v_n) + \det(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_n) \\ &\qquad+\det(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_n), \end{split} \nonumber$ as desired.
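The two facts used in this derivation, that a matrix with two identical rows has determinant zero and that swapping two rows negates the determinant, can be spot-checked numerically. The sketch below is illustrative (not from the text) and uses the closed-form 3×3 determinant.

```python
def det3(M):
    """Closed-form determinant of a 3x3 matrix (cofactor expansion on row 1)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

v1, v2, v3 = [2, 7, 4], [3, 1, 3], [4, 0, 1]

# Swapping two rows negates the determinant ...
assert det3([v1, v2, v3]) == -det3([v2, v1, v3])

# ... and two identical rows force the determinant to zero, which is what
# makes the expansion of 0 = det(..., v_i + v_j, ..., v_i + v_j, ...) collapse
# to the two cross terms in the remark above.
assert det3([v1, v1, v3]) == 0
s = [a + b for a, b in zip(v1, v2)]
assert det3([s, s, v3]) == 0
```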
##### Example $$\PageIndex{12}$$ We have $\left(\begin{array}{c}-1\\2\\3\end{array}\right) = -\left(\begin{array}{c}1\\0\\0\end{array}\right) + 2\left(\begin{array}{c}0\\1\\0\end{array}\right) + 3\left(\begin{array}{c}0\\0\\1\end{array}\right). \nonumber$ Therefore, $\begin{split} \det&\left(\begin{array}{ccc}-1&7&2\\2&-3&2\\3&1&1\end{array}\right) = -\det\left(\begin{array}{ccc}1&7&2\\0&-3&2\\0&1&1\end{array}\right) \\ &+ 2\det\left(\begin{array}{ccc}0&7&2\\1&-3&2\\0&1&1\end{array}\right) + 3\det\left(\begin{array}{ccc}0&7&2\\0&-3&2\\1&1&1\end{array}\right). \end{split} \nonumber$ This is the basic idea behind cofactor expansions in Section 4.2. Note $$\PageIndex{2}$$: Summary: Magical Properties of the Determinant 1. There is one and only one function $$\det\colon\{n\times n\text{ matrices}\}\to\mathbb{R}$$ satisfying the four defining properties, Definition $$\PageIndex{1}$$. 2. The determinant of an upper-triangular or lower-triangular matrix is the product of the diagonal entries. 3. A square matrix $$A$$ is invertible if and only if $$\det(A)\neq 0\text{;}$$ in this case, $\det(A^{-1}) = \frac 1{\det(A)}. \nonumber$ 4. If $$A$$ and $$B$$ are $$n\times n$$ matrices, then $\det(AB) = \det(A)\det(B). \nonumber$ 5. For any square matrix $$A\text{,}$$ we have $\det(A^T) = \det(A). \nonumber$ 6. The determinant can be computed by performing row and/or column operations. This page titled 4.1: Determinants- Definition is shared under a GNU Free Documentation License 1.3 license and was authored, remixed, and/or curated by Dan Margalit & Joseph Rabinoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# List decoding for binary Goppa codes

D.J. Bernstein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

13 Citations (Scopus)

### Abstract

This paper presents a Patterson-style list-decoding algorithm for classical irreducible binary Goppa codes. The algorithm corrects, in polynomial time, approximately $n - \sqrt{n(n-2t-2)}$ errors in a length-n classical irreducible degree-t binary Goppa code. Compared to the best previous polynomial-time list-decoding algorithms for the same codes, the new algorithm corrects approximately $t^2/2n$ extra errors.

### Publication details

- Original language: English
- Title of host publication: Coding and Cryptology (Third International Workshop, IWCC 2011, Qingdao, China, May 30-June 3, 2011. Proceedings)
- Editors: Y.M. Chee
- Place of publication: Berlin
- Publisher: Springer
- Pages: 62-80
- ISBN (print): 978-3-642-20900-0
- DOI: https://doi.org/10.1007/978-3-642-20901-7_4
- Publication status: Published - 2011

### Publication series

- Name: Lecture Notes in Computer Science
- Volume: 6639
- ISSN: 0302-9743

### Keywords

Binary codes, Decoding, Polynomials

### Cite this

Bernstein, D. J. (2011). List decoding for binary Goppa codes. In Y. M. Chee (Ed.), Coding and Cryptology (Third International Workshop, IWCC 2011, Qingdao, China, May 30-June 3, 2011. Proceedings) (pp. 62-80). (Lecture Notes in Computer Science; Vol. 6639). Berlin: Springer. https://doi.org/10.1007/978-3-642-20901-7_4
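To get a feel for the decoding radius in the abstract, here is a small numeric sketch. The parameters n and t below are assumed purely for illustration (they do not come from the paper); the sketch compares the classical Patterson radius t with the list-decoding radius n − √(n(n−2t−2)) stated in the abstract.

```python
import math

# Assumed illustrative parameters: code length n, Goppa polynomial degree t.
n, t = 1024, 50

patterson_radius = t                                  # classical Patterson decoding
list_radius = n - math.sqrt(n * (n - 2 * t - 2))      # radius from the abstract

# At these parameters the list decoder reaches a couple of errors beyond t.
print(patterson_radius, round(list_radius, 2))
```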
# Circular Motion and Friction of car turning corner

1. Jun 16, 2010

### haroldtreen

1. The problem statement, all variables and given/known data "A 1500kg car rounds a curve of radius 75m at a speed of 25m/s. The curve is banked at an angle of 22 degrees to the horizontal. Calculate: a) The magnitude of the frictional force. b) The Coefficient of Kinetic Friction."

2. Relevant equations $F = mv^2/r$ a) 61000N b) 0.33

3. The attempt at a solution What makes sense to me is that the centripetal force is countered by another force. I believe this other force is a combination of the horizontal frictional force and the horizontal normal force (caused by the banked curve resisting the centripetal force). The centripetal force = 12500N. So, 12500N = Force Friction x + Force Normal x = Force Friction x + Force Normal·sin(φ), where Force Normal = mg/cos(φ). This however has been unable to get me the provided answers :S. It seems like a pretty simple question and I have been playing around with it for a bit and keep ending up with 6560N for the frictional force :(. If anyone could help it would be great... I'm studying for exams! :P

2. Jun 17, 2010

### mo_0820

Now, please make the force analysis again. How many forces act on the car? And what is the effect?

3. Jun 17, 2010

### inky

Actually your answer given for frictional force is not 61000N. Could you check this? I think the answer is 6100 N. You need to use g = 9.8 m/s^2. You can get 6083.08 N, which is around 6100 N. Then if you use the coefficient of friction formula, you can get a mu value of 0.33. Could you try again?

4. Jun 18, 2010

### ehild

This is not true. The force of friction has both horizontal and vertical components. By the way, the answer for the frictional force is wrong; it is rather 6100 N. ehild

5. Jun 18, 2010

### inky

Use summation F(x) = (mv^2)/r and summation F(y) = 0. Both the normal force and the friction force have 2 components.
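The answers the replies converge on can be checked numerically. The sketch below (not from the thread) resolves the normal force N and the friction force f along the horizontal and vertical directions, assuming friction acts down the bank because the car tends to slide outward, and uses g = 9.8 m/s² as inky suggests:

```python
import math

m, v, r = 1500.0, 25.0, 75.0      # kg, m/s, m
theta = math.radians(22)
g = 9.8

Fc = m * v**2 / r                  # required centripetal force: 12500 N

# Force balance with friction f directed down the bank:
#   horizontal: N sin(theta) + f cos(theta) = m v^2 / r
#   vertical:   N cos(theta) = m g + f sin(theta)
# Solving the pair for f and N:
f = Fc * math.cos(theta) - m * g * math.sin(theta)
N = Fc * math.sin(theta) + m * g * math.cos(theta)

mu = f / N
print(round(f), round(mu, 2))      # prints 6083 0.33
```

This reproduces inky's 6083 N (about 6100 N) and the coefficient 0.33, confirming that the book's 61000 N was a misprint.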
# Why is there a deep mysterious relation between string theory and number theory, elliptic curves, $E_8$ and the Monster group? + 10 like - 0 dislike 3837 views Why is there a deep mysterious relation between string theory and number theory (Langlands program), elliptic curves, modular functions, the exceptional group $E_8$, and the Monster group as in Monstrous Moonshine? Surely it's not just a coincidence in the Platonic world of mathematics. Granted this may not be fully answerable given the current state of knowledge, but are there any hints/plausibility arguments that might illuminate the connections? This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user user1796 retagged Mar 25, 2014 I actually voted this question thumbs-up. It's a good question and I would like to know the most accurate answer, too. Clearly, the rough sketch of the answer is that string theory just knows about all important and exceptional structures in mathematics. But why does it know them? What is the logic that dictates that "other solutions" of a theory whose main physical goal is "only" to unify the interactions including gravity with quantum mechanics produces all other maths, including maths we used to think was totally abstract? This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Luboš Motl Some of the specific connections listed come from the "modular invariance" of string theory, the need for one-loop amplitudes to be invariant under "large" reparametrizations of the world-sheet. 
This means that modular forms and their properties are relevant - thus Langlands - and also establishes a link to lattices - mathoverflow.net/questions/24604/… This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Mitchell Porter

Related question on mathoverflow: mathoverflow.net/q/58990/13917 This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Qmechanic

I remember the days when the eightfold way en.wikipedia.org/wiki/Eightfold_Way_%28physics%29 was mysterious. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user anna v

+ 5 like - 0 dislike

To start with, the relation of string theory to complex elliptic curves is clear: these are just pointed, genus one closed Riemann surfaces, and hence are certain string worldsheets. The fact that in constructions such as the "refined Witten genus" it is actually arithmetic elliptic curves (not over the complex numbers but, ultimately, over the ring of integers and hence over the rationals and the p-adics, see at fracture square) that play a role is some deep fact that is vaguely reminiscent of p-adic string theory, only that what presently goes under this headline does not fully live up to what is at issue here. (There is a PO question on this point here.) The true answer to this arithmetic-geometry incarnation of stringy physics must rest in the function field analogy, which roughly says something like: if you do algebraic number theory in a single variable -- if you study arithmetic curves -- then this is analogous to studying complex curves, hence string worldsheets.
To put this in perspective: there is an old motivation from the first pages of the string theory textbooks, which says that where point particle mechanics is about the real line (the worldline), string theory is about the complex plane (the worldsheet), and hence that the passage from point particles to strings is like the step from real analysis to complex analysis. Somehow the function field analogy says that this seemingly simple-minded statement is indeed true and much deeper than it might seem. In some way stringy phenomena are visible at the very root of mathematics (number theory) because if you "work with a single algebraic variable", then that is already analogous to "working with a single complex variable", hence is analogous to studying complex curves, hence string worldsheets.

Even that statement may still seem far-out at this level. But digging deeper it turns out to work out more and more. For instance 90 per cent of number theory is about picking some such arithmetic curve and then "attaching" to it a zeta-function or theta-function or eta-function or L-function. The deep conjectures of number theory all revolve around this (notably the Langlands correspondence). But looking at this from the point of view of the function field analogy, one finds that all this is analogous to the 3dCS/2dWZW correspondence. I have tried to summarize this a bit in this table here: zeta-functions and eta-functions and theta-functions and L-functions -- table. There'd be more to say, but I am running out of battery. I gave a talk related to this four weeks back at CUNY, here.

answered Sep 1, 2014 by (5,900 points)

+ 4 like - 0 dislike

Let me first answer the relation between string theory and $E(8)$ (I don't think I can answer the rest). A common appearance of $E(8)$ in string theory is in the gauge group of Type HE string theory, i.e., in $E(8)\otimes E(8)$. Now, this appears in Type HE string theory because its lattice is even and unimodular.
But it is interesting for another reason, due to the embedding of the Standard Model subgroup: $$SU(3)\otimes SU(2)\otimes U(1)\subset SU(5)\subset SO(10)\subset E(6)\subset E(7)\subset E(8)$$ That's a lot of embeddings, but notice: the first group here is the Standard Model gauge group, and the second, third, fourth and fifth are GUT groups. And $E(8)$ happens to be the "largest" and "most complicated" of the exceptional Lie groups. So a TOE better deal with $E(8)$ somewhere!

I don't know about the relation between monstrous moonshine and string theory, but you can refer to Wikipedia. There is definitely a connection with number theory. And even more: $$1+2+3+4=10$$ Not joking! EM is the curvature of the $U(1)$ bundle. Weak is the curvature of the $SU(2)$ bundle. Strong is the curvature of the $SU(3)$ bundle. Gravity is the curvature of spacetime. I.e., 1D manifold, 2D, 3D, 4D $\implies$ 10D.

answered Jul 16, 2013 by (1,955 points) edited Jan 7, 2015

SO(10) is not a subgroup of U(5). Why would a TOE need E(8) just because it is the largest exceptional group? The 1,2,3,4 numerology is rather weak since you are just looking at groups with these numbers in them that appear in very different ways. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Philip Gibbs

@PhilipGibbs: Fixed the SO(10) U(5) problem. The $E(8)$ logic was supposed to be intuitive. The 1,2,3,4 thing isn't numerology; it isn't so different, by the way. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Dimensio1n0

@PhilipGibbs: In fact, why do you think Kaluza–Klein theory is 5-dimensional? This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Dimensio1n0

There is another point, that E(8) is E6xSU(3), and on a Calabi Yau, the SU(3) is the holonomy, so you can easily and naturally break the E8 to E6.
This idea appears in Candelas, Horowitz, Strominger and Witten in 1985, right after heterotic strings, and it is still the easiest way to get the MSSM. The biggest obstacle is to get rid of the MS part --- you need a SUSY breaking at high energy that won't wreck the CC or produce a runaway Higgs mass, since it seems right now there is no low-energy SUSY. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Ron Maimon

@RonMaimon: Thanks, I added that in too. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Dimensio1n0

@DImension10AbhimanyuPS: ok, but you shouldn't write what I said, which is technically wrong --- E8 is not E6xSU(3), it's a simple group, but it has an embedded E6xSU(3) and fills in the off-diagonal parts with extra crud that's broken when you have SU(3) gauge fluxes which follow the holonomy of the manifold. The precise decomposition is described in detail in Green, Schwarz and Witten, which has a nice description of E8. This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Ron Maimon

@RonMaimon: I know, but I think that is clear (that $E(8)$ is not $E(6)\times SU(3)$). This post imported from StackExchange Physics at 2014-03-07 16:32 (UCT), posted by SE-user Dimensio1n0
# zbMATH — the first resource for mathematics

Sensitive dependence on initial conditions between dynamical systems and their induced hyperspace dynamical systems. (English) Zbl 1172.37006

If $(E,f)$ is a dynamical system, then the hyperspace dynamical system $(\hat{E},\hat{f})$ is defined by $\hat{f}(A) := f(A)$ on the collection $\hat{E}$ of all subsets of $E$.
The relation of different concepts of chaotic behaviour on $(E,f)$ and $(\hat{E},\hat{f})$ has been investigated in several papers. A map is said to depend sensitively on initial conditions (this property is briefly called sensitivity) if there is a $\delta > 0$ such that for any $x \in E$ and any $\epsilon > 0$ there is a $y \in E$ with $d(y,x) < \epsilon$ and an $n \in \mathbb{N}$ with $d(f^n(y), f^n(x)) \ge \delta$.

In this paper the authors introduce the notion of collective sensitivity. This means that there is a $\delta > 0$ such that for finitely many $x_1, x_2, \dots, x_k \in E$ and any $\epsilon > 0$ there are $y_1, y_2, \dots, y_k \in E$ with $d(y_j, x_j) < \epsilon$ for all $j \in \{1, 2, \dots, k\}$, and there is an $n \in \mathbb{N}$ and a $u \in \{1, 2, \dots, k\}$ such that $d(f^n(y_j), f^n(x_u)) \ge \delta$ for all $j \in \{1, 2, \dots, k\}$ or $d(f^n(x_j), f^n(y_u)) \ge \delta$ for all $j \in \{1, 2, \dots, k\}$.

It is proved that $(\hat{E},\hat{f})$ is sensitive if and only if $(E,f)$ is collectively sensitive. Here $\hat{E}$ is endowed with the hit-or-miss topology. Moreover, the conditions "$(\mathcal{C},\hat{f})$ is sensitive" and "$(\mathcal{F},\hat{f})$ is sensitive" are also equivalent to "$(\hat{E},\hat{f})$ is sensitive", where $\mathcal{C}$ is the collection of all nonempty compact subsets of $E$ and $\mathcal{F}$ is the collection of all nonempty finite subsets of $E$, both endowed with the Hausdorff metric (which is equivalent to the Vietoris topology in this case). The authors also prove that weak mixing implies collective sensitivity.
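Classical sensitivity is easy to observe numerically. As a small illustration (my own sketch, not from the paper under review), take the doubling map $f(x) = 2x \bmod 1$ on the circle, a standard sensitive system: however small the initial separation $\epsilon$, nearby orbits reach distance $\delta = 1/4$ after only about $\log_2(1/\epsilon)$ steps.

```python
def f(x):
    """Doubling map on the circle [0, 1)."""
    return (2.0 * x) % 1.0

def circle_dist(x, y):
    """Distance on the circle of circumference 1."""
    d = abs(x - y)
    return min(d, 1.0 - d)

def separation_time(x, y, delta=0.25, n_max=200):
    """Smallest n with circle_dist(f^n(x), f^n(y)) >= delta, or None."""
    for n in range(n_max):
        if circle_dist(x, y) >= delta:
            return n
        x, y = f(x), f(y)
    return None

x0 = 0.123456
for eps in (1e-3, 1e-6, 1e-9):
    print(eps, separation_time(x0, x0 + eps))
# The distance doubles each step, so the separation time grows only
# logarithmically in 1/eps: 8, 18 and 28 iterations here.
```

Shrinking $\epsilon$ by a factor of 1000 buys only about 10 extra iterations before the orbits visibly diverge, which is the point of the definition: the threshold $\delta$ is uniform, independent of how close the initial points are.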
##### MSC:
37B05 Transformations and group actions with special properties
54B20 Hyperspaces (general topology)
54H20 Topological dynamics
# Approximation for the $n^\text{th}$ prime number

The Prime Number Theorem states that $\pi(x) \sim \frac{x}{\log x}$, where $\pi(x)$ is the number of primes not exceeding $x$; that is, $$\lim_{x\to\infty} \frac{\pi(x)}{x/\log x} = 1.$$

Let $p_n$ be the $n$th prime number. Prove that, as a corollary of the Prime Number Theorem, $p_n \sim n\log n$, that is, $\dfrac{p_n}{n\log n} \to 1$ as $n \to \infty$.

Note by Alexander Israel Flores Gutiérrez 2 months, 2 weeks ago
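The statement can be sanity-checked numerically; the ratio $p_n/(n\log n)$ does tend to 1, but slowly. A quick sketch (not part of the note; the sieve size uses the bound $p_n < n(\ln n + \ln\ln n)$ for $n \ge 6$, due to Rosser):

```python
import math

def nth_prime(n):
    """Return the n-th prime via a sieve of Eratosthenes.

    Sieve limit from Rosser's bound p_n < n*(ln n + ln ln n), valid for n >= 6.
    """
    limit = 15 if n < 6 else int(n * (math.log(n) + math.log(math.log(n)))) + 1
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    count = 0
    for p in range(limit + 1):
        if sieve[p]:
            count += 1
            if count == n:
                return p

for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    pn = nth_prime(n)
    print(n, pn, round(pn / (n * math.log(n)), 3))
# The ratio p_n / (n log n) creeps towards 1: 1.175, 1.146, 1.137, 1.129.
```

The slow convergence reflects the second-order term: a better approximation is $p_n \approx n(\log n + \log\log n)$, but the first-order asymptotic $p_n \sim n\log n$ is all the problem asks for.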
# What does it mean for a set to be compact in another set?

I am given the following definition:

Let $B$ be a set of continuous maps with domain a metric space $A$ and codomain a metric space $N$, and $B_x=\{f(x):f\in B\}$. $B$ is pointwise compact if for each $x\in A$, $B_x$ is compact in $N$.

I can't figure out what it means for $B_x$ to be compact in $N$. Is the author merely trying to signal that $B_x$ is a subset of $N$? If so, then is this slight rephrasing correct?

$B$ is pointwise compact if for each $x\in A$, $B_x \subset N$ is compact.

The intent of the author is probably to make explicit that the topology on $B_x$ is that induced by $N$. Your rephrasing doesn't work: $B$ itself is not a subset of $N$ (just leave out $\subset N$).

Yes, and I would say you can even leave out the other "$\in N$", although that would leave the fact that $B_x$ should get the subspace topology implicit. – Magdiragdag Sep 29 '13 at 16:17

It is automatic that $B_x$ is a subset of $N$ (by definition of $B_x$); the author means $B_x$ is compact when given the subspace topology (where open sets of $B_x$ are precisely sets of the form $B_x \cap U$, where $U$ is open in $N$).

Yes, if we understand the metric as given (and we use the metric space topology). I agree that it is better to think of compactness as an absolute notion, if the topology is assumed given. (Being "closed" is not an absolute notion.) So if you understand $B_x$ as automatically carrying the metric inherited from $N$, then you could say this is absolute in that sense. – user43208 Sep 29 '13 at 15:01
# Operator group

A group of operators, a one-parameter group of operators (cf. Operator) on a Banach space $E$, i.e. a family of bounded linear operators $U_t$, $-\infty < t < \infty$, such that $U_0 = I$, $U_{s+t} = U_s \cdot U_t$ and $U_t$ depends continuously on $t$ (in the uniform, strong or weak topology). If $E$ is a Hilbert space and $\| U_t \|$ is uniformly bounded, then the group $\{ U_t \}$ is similar to a group of unitary operators (Sz.-Nagy's theorem, cf. also Unitary operator).

#### References

[1] B. Szőkefalvi-Nagy, "On uniformly bounded linear transformations in Hilbert space" Acta Sci. Math. (Szeged), 11 (1947) pp. 152–157
[2] E. Hille, R.S. Phillips, "Functional analysis and semi-groups", Amer. Math. Soc. (1948)

V.I. Lomonosov

A group with operators, a group with domain of operators $\Sigma$, where $\Sigma$ is a set of symbols, is a group $G$ such that for every element $a \in G$ and every $\sigma \in \Sigma$ there is a corresponding element $a\sigma \in G$ such that $(ab)\sigma = a\sigma \cdot b\sigma$ for any $a, b \in G$. Let $G$ and $G'$ be groups with the same domain of operators $\Sigma$; an isomorphic (a homomorphic) mapping $\phi$ of $G$ onto $G'$ is called an operator isomorphism (operator homomorphism) if $(a\sigma)\phi = (a\phi)\sigma$ for any $a \in G$, $\sigma \in \Sigma$. A subgroup (normal subgroup) $H$ of the group $G$ with domain of operators $\Sigma$ is called an admissible subgroup (admissible normal subgroup) if $H\sigma \subseteq H$ for any $\sigma \in \Sigma$. The intersection of all admissible subgroups containing a given subset $M$ of $G$ is called the admissible subgroup generated by the set $M$. A group which does not have admissible normal subgroups apart from itself and the trivial subgroup is called a simple group (with respect to the given domain of operators).
Every quotient group of an operator group by an admissible normal subgroup is a group with the same domain of operators. A group $G$ is called a group with a semi-group of operators $\Sigma$ if $G$ is a group with domain of operators $\Sigma$, $\Sigma$ is a semi-group and $a(\sigma\tau) = (a\sigma)\tau$ for any $a \in G$, $\sigma, \tau \in \Sigma$. If $\Sigma$ is a semi-group with an identity element $\epsilon$, it is supposed that $a\epsilon = a$ for every $a \in G$. Every group with an arbitrary domain of operators $\Sigma_0$ is a group with semi-group of operators $\Sigma$, where $\Sigma$ is the free semi-group generated by the set $\Sigma_0$.

A group $F$ with semi-group of operators $\Sigma$ possessing an identity element is called $\Sigma$-free if it is generated by a system of elements $X$ such that the elements $x\alpha$, where $x \in X$, $\alpha \in \Sigma$, constitute for $F$ (as a group without operators) a system of free generators. Let $F$ be a $\Gamma$-free group ($\Gamma$ being a group of operators), let $\Delta$ be a subgroup of $\Gamma$, let $f \in F$, and let $A_{f,\Delta}$ be the admissible subgroup of $F$ generated by all elements of the form $f^{-1}(f\alpha)$, where $\alpha \in \Delta$. Then every admissible subgroup of $F$ is an operator free product of groups of type $A_{f,\Delta}$ and a $\Gamma$-free group (see [2]). If $\Sigma$ is a free semi-group of operators, then, if $a \neq 1$, the admissible subgroup of the $\Sigma$-free group $F$ generated by the element $a$ is itself a $\Sigma$-free group with free generator $a$ (see also [3a], [3b]). An Abelian group with an associative ring of operators $K$ is just a $K$-module (cf. Module).

#### References

[1] A.G. Kurosh, "The theory of groups", 1–2, Chelsea (1955–1956) (Translated from Russian)
[2] S.T. Zavalo, "$\Sigma$-free operator groups" Mat. Sb., 33 (1953) pp. 399–432 (In Russian)
[3a] S.T. Zavalo, "$\Sigma$-free operator groups I" Ukr. Mat. Zh., 16 : 5 (1964) pp.
593–602 (In Russian)
[3b] S.T. Zavalo, "$\Sigma$-free operator groups II" Ukr. Mat. Zh., 16 : 6 (1964) pp. 730–751 (In Russian)

How to Cite This Entry: Operator group. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Operator_group&oldid=48049

This article was adapted from an original article by A.P. Mishina (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
# Smoothness of tensors on reductive coset manifolds

This comes from O'Neill's Semi-Riemannian Geometry, in the proof of Proposition 11.22. Given a reductive coset manifold $M = G/H$ of a Lie group $G$ with Lie subspace $\mathfrak{m}$, if you fix the differential of the projection map $d\pi$ to be an isometry, then there is a one-to-one correspondence between $\mathrm{Ad}(H)$-invariant scalar products on $\mathfrak{m}$ and $G$-invariant metrics on $M$.

The proof outlines a construction of the metric given an $\mathrm{Ad}(H)$-invariant scalar product on $\mathfrak{m}$ by transferring the scalar product to the tangent space of $M$ at the origin and then deriving it on the rest of $M$ via the differentials of the induced translation maps.

The part I'm struggling with is how to show the smoothness of this metric. The book implies the way to see this is through local sections that exist because $\pi$ is a submersion, but I'm struggling to see how this is applied.
# Fight Finance

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have $50,000 in the bank after that (t=2). How much can you consume at each time?

Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09. Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future.

An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of return (IRRs).

Mutually Exclusive Projects

| Project | Cost now ($) | Sale price in one year ($) | IRR (% pa) |
|---|---|---|---|
| Petrol station | 9,000,000 | 11,000,000 | 22.22 |
| Car wash | 800,000 | 1,100,000 | 37.50 |
| Car park | 70,000 | 110,000 | 57.14 |

Which project should the investor accept?
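As a side note on the land question: the IRR ranking and the NPV ranking disagree, which is exactly the trap being set. A quick check (my own sketch, not part of the question set):

```python
# NPV and IRR for the three one-period land projects; required return 10% pa.
# With a single future cash flow, the IRR solves cost * (1 + irr) = sale price.
projects = {
    "petrol station": (9_000_000, 11_000_000),
    "car wash":       (800_000,   1_100_000),
    "car park":       (70_000,    110_000),
}
r = 0.10
for name, (cost, sale) in projects.items():
    npv = -cost + sale / (1 + r)
    irr = sale / cost - 1
    print(f"{name}: NPV = {npv:,.0f}, IRR = {irr:.2%}")
```

The car park has the highest IRR (57.14%) but the petrol station has the highest NPV ($1,000,000 versus $200,000 and $30,000), so the investor should build the petrol station: for mutually exclusive projects of different scale, NPV ranks correctly and IRR does not.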
An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be:

• Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year.
• Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year.
• Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year.

All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of return (IRRs).

Mutually Exclusive Projects

| Project | Cash flow now ($) | Cash flow in one year ($) | IRR (% pa) |
|---|---|---|---|
| Rent then sell as is | -900,000 | 990,000 | 10 |
| Refurbishment into modern offices | -2,000,000 | 2,400,000 | 20 |
| Conversion into residential apartments | -3,000,000 | 3,400,000 | 13.33 |

Which project should the investor accept?

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 0 |
| 2 | 121 |

What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates.

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 0 |
| 2 | 121 |

If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be:

The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year.
So the $121 at time 2 is actually earned smoothly from t=1 to t=2.

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -100 |
| 1 | 11 |
| 2 | 121 |

A project has the following cash flows:

Project Cash Flows

| Time (yrs) | Cash flow ($) |
|---|---|
| 0 | -400 |
| 1 | 0 |
| 2 | 500 |

What is the payback period of the project in years? Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2.

The below graph shows a project's net present value (NPV) against its annual discount rate. For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph.

The below graph shows a project's net present value (NPV) against its annual discount rate. Which of the following statements is NOT correct?

A firm is considering a business project which costs $11m now and is expected to pay a constant $1m at the end of every year forever. Assume that the initial $11m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

How many years will it take for an asset's price to double if the price grows by 10% pa?

How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa?

The saying "buy low, sell high" suggests that investors should make a:

Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares?

An asset's total expected return over the next year is given by:

$$r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0}$$

Where $p_0$ is the current price, $c_1$ is the expected income in one year and $p_1$ is the expected price in one year.
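As a quick numerical illustration of this formula and how it splits into capital and income parts (the numbers are the $30 share with a $6 dividend from one of the questions below):

```python
# Total return r_total = (c1 + p1 - p0) / p0, split into capital and income parts.
p0, c1, p1 = 30.0, 6.0, 27.0   # buy at $30, receive $6 dividend, price falls to $27

r_total = (c1 + p1 - p0) / p0   # 0.10 = 10% pa
r_capital = (p1 - p0) / p0      # -0.10 = -10% pa (price fell)
r_income = c1 / p0              # 0.20 = 20% pa (dividend yield)

print(r_total, r_capital, r_income)
assert abs(r_total - (r_capital + r_income)) < 1e-12  # the split is exact
```

A negative capital return can coexist with a positive total return whenever the income yield is large enough, which is the pattern these questions repeatedly test.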
The total return can be split into the income return and the capital return. Which of the following is the expected capital return?

A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{dividend}$.

One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate?

Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more, exactly the same as, or less than today with the money in this account?

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates.
What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order.

Which of the following statements about cash in the form of notes and coins is NOT correct? Assume that inflation is positive. Notes and coins:

When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation:

(I) Discount nominal cash flows by nominal discount rates.
(II) Discount nominal cash flows by real discount rates.
(III) Discount real cash flows by nominal discount rates.
(IV) Discount real cash flows by real discount rates.

Which of the above statements is or are correct?

How can a nominal cash flow be precisely converted into a real cash flow?

You expect a nominal payment of $100 in 5 years. The real discount rate is 10% pa and the inflation rate is 3% pa. Which of the following statements is NOT correct?

What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa.

On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month, so the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change. The answers are given in the same order: the amount of money under his bed in 60 years, and the real value of that money in today's prices.

You're considering making an investment in a particular company.
They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns?

Which business structure or structures have the advantage of limited liability for equity investors?

Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.

Which of the following statements about book and market equity is NOT correct?

The below screenshot of Commonwealth Bank of Australia's (CBA) details was taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity?

The investment decision primarily affects which part of a business?

The financing decision primarily affects which part of a business?

Business people make lots of important decisions. Which of the following is the most important long term decision?

The expression 'you have to spend money to make money' relates to which business decision?

Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? More than $102, exactly $102, or less than $102?

Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund."

What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree; ignore any benefits or costs of the degree after it's completed. Assume the following:

• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years.
• There are 52 weeks per year.
• The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19).
• The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38).
• The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on.
• Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week.
• The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual.

The NPV of costs from undertaking the university degree is:

A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2). After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4. The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal. What is the payback period?

You're trying to save enough money to buy your first car which costs $2,500. You can save $100 at the end of each month starting from now. You currently have no money at all. You just opened a bank account with an interest rate of 6% pa payable monthly. How many months will it take to save enough money to buy the car? Assume that the price of the car will stay the same over time.

Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly.
Interest rates are not expected to change. Assuming that you have no income, in how many months' time will you not have enough money to fully refuel your car?

You really want to go on a backpacking trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount?

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage?

You're trying to save enough money for a deposit to buy a house. You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit? Round your answer up to the nearest month.

A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month. She plans to spend $20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity. In how many months will she make her last withdrawal and donate the remainder to charity?
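Several of the questions above ask how many months an annuity of savings takes to reach a target. As a numerical check, here is a minimal Python sketch for the car-savings question (figures from the question: $100 deposited at the end of each month, 6% pa compounding monthly, $2,500 target). It inverts the ordinary annuity future value, FV = C×((1+r)^n − 1)/r, with a logarithm:

```python
import math

def months_to_save(target, deposit, r_monthly):
    # Solve deposit * ((1+r)**n - 1) / r >= target for n, where deposits are
    # made at the end of each month (ordinary annuity), then round up to a
    # whole number of months.
    n = math.log(1 + target * r_monthly / deposit) / math.log(1 + r_monthly)
    return math.ceil(n)

# Car-savings question: $2,500 target, $100/month, 6% pa compounding monthly.
print(months_to_save(2500, 100, 0.06 / 12))  # 24 months
```

The same log inversion, applied to the annuity's present value instead of its future value, handles the mortgage early-repayment and savings-run-down questions.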
When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever. Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0). In approximately how many years will the company's total dividends be as large as the country's GDP?

Which of the following is NOT a synonym of 'required return'?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order.

Which of the following equations is NOT equal to the total return of an asset? Let $p_0$ be the current price, $p_1$ the expected price in one year and $c_1$ the expected income in one year.

A stock was bought for $8 and paid a dividend of $0.50 one year later (at t=1 year). Just after the dividend was paid, the stock price was $7 (at t=1 year). What were the total, capital and dividend returns given as effective annual rates? The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{dividend}$.

A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: $r_\text{total}$, $r_\text{capital}$, $r_\text{income}$.

In the 'Austin Powers' series of movies, the character Dr.
Evil threatens to destroy the world unless the United Nations pays him a ransom. Dr. Evil makes the threat on two separate occasions:

• In 1969 he demands a ransom of $1 million (=10^6), and again;
• In 1997 he demands a ransom of $100 billion (=10^11).

If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? The answer choices below are given as effective annual rates:

A residential investment property has an expected nominal total return of 8% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back $1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now?

The below screenshot of Microsoft's (MSFT) details was taken from the Google Finance website on 28 Nov 2014. Some information has been deliberately blanked out. What was MSFT's market capitalisation of equity?

One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each. Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa. In the year since then, the firm:

• Earned net income of $29m.
• Paid dividends totalling $10m.
• Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged.

Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago.

Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance.

$$\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}$$

$$\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}$$

The required return on assets $r_V$ is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair.

$$r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}$$

Similarly for equity and debt.

If a firm makes a profit and pays no dividends, which of the following accounts will increase?

The working capital decision primarily affects which part of a business?

Payout policy is most closely related to which part of a business?

A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year?
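The ROA and ROE ratios in the hint can be traced through with a short Python sketch using the pharmaceutical firm's figures (all in $m). Two conventions here are assumptions, not stated in the question: debt is unchanged over the year, and end-of-year book values sit in the denominators (beginning-of-year values are another common convention).

```python
# Figures from the question, in $m.
book_equity_start, debt, net_income, dividends = 100, 50, 29, 10

# Retained earnings (net income less dividends) raise book equity.
book_equity = book_equity_start + net_income - dividends  # 119
book_assets = book_equity + debt                          # 169

roe = net_income / book_equity  # Net income / Book value of equity
roa = net_income / book_assets  # Net income / Book value of assets
print(f"Book equity ${book_equity}m, ROE {roe:.2%}, ROA {roa:.2%}")
```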
What are the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be $p_0$, the expected future share price be $p_1$, the expected future dividend be $d_1$ and the expected return be $r$. Define the expected return as: $r=\dfrac{p_1-p_0+d_1}{p_0}$ The answer choices are stated using inequalities. As an example, the first answer choice "(a) $0≤p<∞$ and $0≤r<1$", states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one.

Total cash flows can be broken into income and capital cash flows. What is the name given to the cash flow generated from selling shares at a higher price than they were bought?

For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate?

Which of the following statements is NOT correct? Apples and oranges currently cost $1 each. Inflation is 5% pa, and apples and oranges are equally affected by this inflation rate. Note that when payments are not specified as real, as in this question, they're conventionally assumed to be nominal.

Which of the following statements is NOT correct?

Which of the following statements about inflation is NOT correct?

What is the present value of a nominal payment of $1,000 in 4 years? The nominal discount rate is 8% pa and the inflation rate is 2% pa.

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 2.5% pa. Inflation is expected to be 2.5% pa. All of the above are effective nominal rates and investors believe that they will stay the same in perpetuity. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

A low-growth mature stock has an expected nominal total return of 6% pa and nominal capital return of 2% pa.
Inflation is expected to be 3% pa. All of the above are effective nominal rates and investors believe that they will stay the same in perpetuity. What are the stock's expected real total, capital and income returns? The answer choices below are given in the same order.

Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or refuse Katya's deal?

There are many ways to write the ordinary annuity formula. Which of the following is NOT equal to the ordinary annuity formula?

This annuity formula $\dfrac{C_1}{r}\left(1-\dfrac{1}{(1+r)^3} \right)$ is equivalent to which of the following formulas? Note the 3. In the below formulas, $C_t$ is a cash flow at time t. All of the cash flows are equal, but paid at different times.

The following cash flows are expected:

• 10 yearly payments of $60, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $400 in 5 years and 6 months (t=5.5) from now.

What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive. What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.

A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2. After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years.
So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is:

Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years? In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?

Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation?

The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula: $$P_0 = \frac{ C_1 }{ r - g }$$ What is $g$? The value $g$ is the long term expected:

For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline?

The first payment of a constant perpetual annual cash flow is received at time 5. Let this cash flow be $C_5$ and the required return be $r$.
So there will be equal annual cash flows at time 5, 6, 7 and so on forever, and all of the cash flows will be equal so $C_5 = C_6 = C_7 = ...$ When the perpetuity formula is used to value this stream of cash flows, it will give a value (V) at time:

For a price of $1,040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be $105.00 (=100×(1+0.05)^1), and the year after it will be $110.25 (=100×(1+0.05)^2) and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation. $$P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}}$$ What would you call the expression $C_1/P_0$?

The following is the Dividend Discount Model (DDM) used to price stocks: $$P_0=\dfrac{C_1}{r-g}$$ If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected:

A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate. What is the implied growth rate of the dividend per year?

The following cash flows are expected:

• 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $600 in 5 years and 6 months (t=5.5) from now.

What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

The Australian Federal Government lends money to domestic students to pay for their university education. This is known as the Higher Education Contribution Scheme (HECS). The nominal interest rate on the HECS loan is set equal to the consumer price index (CPI) inflation rate. The interest is capitalised every year, which means that the interest is added to the principal.
The interest and principal do not need to be repaid by students until they finish study and begin working. Which of the following statements about HECS loans is NOT correct?

Which of the following statements about gold is NOT correct? Assume that the gold price increases by inflation. Gold:

A stock’s current price is $1. Its expected total return is 10% pa and its long term expected capital return is 4% pa. It pays an annual dividend and the next one will be paid in one year. All rates are given as effective annual rates. The dividend discount model is thought to be a suitable model for the stock. Ignore taxes. Which of the following statements about the stock is NOT correct?

In the dividend discount model (DDM), share prices fall when dividends are paid. Let the high price before the fall be called the peak, and the low price after the fall be called the trough. $$P_0=\dfrac{C_1}{r-g}$$ Which of the following statements about the DDM is NOT correct?

A stock will pay you a dividend of $10 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately?

In the dividend discount model: $$P_0 = \dfrac{C_1}{r-g}$$ The return $r$ is supposed to be the:

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation. $$p_0 = \frac{d_1}{r - g}$$ Which expression is NOT equal to the expected dividend yield?

Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts. BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk.
Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV. All things remaining equal, which of the following statements is NOT correct?

Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only:

• The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies;
• JP Morgan Chase's historical earnings per share (EPS) is $4.37;
• Citi Group's share price is $50.05 and historical EPS is $4.26;
• Wells Fargo's share price is $48.98 and historical EPS is $3.89.

Note: Figures sourced from Google Finance on 24 March 2014.

Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only:

• Apple, Google and Microsoft are comparable companies,
• Apple's (AAPL) share price is $526.24 and historical EPS is $40.32.
• Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23.
• Microsoft's (MSFT) historical earnings per share (EPS) is $2.71.

Source: Google Finance 28 Feb 2014.

Details of two different types of light bulbs are given below:

• Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year.
• Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year.

The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order.

Carlos and Edwin are brothers and they both love Holden Commodore cars.
Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second hand car, and so on. Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals.

You own a nice suit which you wear once per week on nights out. You bought it one year ago for $600. In your experience, suits used once per week last for 6 years. So you expect yours to last for another 5 years. Your younger brother said that retro is back in style so he wants to borrow your suit once a week when he goes out. With the increased use, your suit will only last for another 4 years rather than 5.
What is the present value of the cost of letting your brother use your current suit for the next 4 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new suit when your current one wears out and your brother will not use the new one; your brother will only use your current suit so he will only use it for the next four years; and the price of a new suit never changes.

You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years. Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4. What is the present value of the cost of letting your sister use your current shoes for the next 2 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use them for the next 2 years; and the price of new shoes never changes.

An industrial chicken farmer grows chickens for their meat. Chickens:

1. Cost $0.50 each to buy as chicks. They are bought on the day they’re born, at t=0.
2. Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6).
3. Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they’re older and grow more slowly.
4. Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on.
5. Can be slaughtered (killed for their meat) and sold at no cost at the end of the week.
The price received for the chicken is their total value of meat (note that the chicken grows fast then slow, see above). The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken’s welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks.

An investor bought a bond for $100 (at t=0) and one year later it paid its annual coupon of $1 (at t=1). Just after the coupon was paid, the bond price was $100.50 (at t=1). Inflation over the past year (from t=0 to t=1) was 3% pa, given as an effective annual rate. Which of the following statements is NOT correct? The bond investment produced a:

A share’s current price is $60. It’s expected to pay a dividend of $1.50 in one year. The growth rate of the dividend is 0.5% pa and the stock’s required total return is 3% pa. The stock’s price can be modeled using the dividend discount model (DDM): $P_0=\dfrac{C_1}{r-g}$ Which of the following methods is NOT equal to the stock’s expected price in one year and six months (t=1.5 years)? Note that the symbolic formulas shown in each line below do equal the formulas with numbers. The formula is just repeated with symbols and then numbers in case it helps you to identify the incorrect statement more quickly.

An equities analyst is using the dividend discount model to price a company's shares. The company operates domestically and has no plans to expand overseas. It is part of a mature industry with stable positive growth prospects. The analyst has estimated the real required return (r) of the stock and the value of the dividend that the stock just paid a moment before $(C_\text{0 before})$. What is the highest perpetual real growth rate of dividends (g) that can be justified?
Select the most correct statement from the following choices. The highest perpetual real expected growth rate of dividends that can be justified is the country's expected:

A share currently worth $100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5). The total required return is 10% pa. What do you expect the share price to be in 5 years, just after the dividend at that time has been paid?

An Apple iPhone 6 smart phone can be bought now for $999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone.

Stocks in the United States usually pay quarterly dividends. For example, the software giant Microsoft paid a $0.23 dividend every quarter over the 2013 financial year and plans to pay a $0.28 dividend every quarter over the 2014 financial year. Using the dividend discount model and net present value techniques, calculate the stock price of Microsoft assuming that:

• The time now is the beginning of July 2014. The next dividend of $0.28 will be received in 3 months (end of September 2014), with another 3 quarterly payments of $0.28 after this (end of December 2014, March 2015 and June 2015).
• The quarterly dividend will increase by 2.5% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in the financial year beginning in September 2015 will be $0.287 (=0.28×(1+0.025)^1), with the last at the end of June 2016.
In the next financial year beginning in September 2016 each quarterly dividend will be $0.294175 (=0.28×(1+0.025)^2), with the last at the end of June 2017, and so on forever.

• The total required return on equity is 6% pa.
• The required return and growth rate are given as effective annual rates.
• Dividend payment dates and ex-dividend dates are at the same time.
• Remember that there are 4 quarters in a year and 3 months in a quarter.

What is the current stock price?

A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high quality car.

You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for $600 (at t=0). In your experience, dresses used once per month last for 6 years. Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6. What is the present value of the cost of letting your sister use your current dress for the next 3 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes.

Details of two different types of desserts or edible treats are given below:

• High-sugar treats like candy, chocolate and ice cream make a person very happy. High sugar treats are cheap at only $2 per day.
• Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low sugar treats are more expensive at $4 per day. The advantage of low-sugar treats is that a person only needs to pay the dentist $2,000 for fillings and root canal therapy once every 15 years. Whereas with high-sugar treats, that treatment needs to be done every 5 years.

The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order. Ignore the pain of dental therapy, personal preferences and other factors.

You're about to buy a car. These are the cash flows of the two different cars that you can buy:

• You can buy an old car for $5,000 now, for which you will have to buy $90 of fuel at the end of each week from the date of purchase. The old car will last for 3 years, at which point you will sell the old car for $500.
• Or you can buy a new car for $14,000 now for which you will have to buy $50 of fuel at the end of each week from the date of purchase. The new car will last for 4 years, at which point you will sell the new car for $1,000.

Bank interest rates are 10% pa, given as an effective annual rate. Assume that there are exactly 52 weeks in a year. Ignore taxes and environmental and pollution factors. Should you buy the old car or the new car?

Private equity firms are known to buy medium sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO). If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy?
Assume that:

• The medium-sized companies can be bought, merged and sold in an IPO instantaneously.
• There are no costs of finding, valuing, merging and restructuring the medium sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms.
• The large merged firm's earnings are the sum of the medium firms' earnings.
• The only reason for the difference between the medium and large firms' PE ratios is the illiquidity of the medium firms' shares.
• Return is defined as: $r_{0→1} = (p_1-p_0+c_1)/p_0$, where time zero is just before the merger and time one is just after.

An 'interest payment' is the same thing as a 'coupon payment'. True or false?

An 'interest rate' is the same thing as a 'coupon rate'. True or false?

An 'interest rate' is the same thing as a 'yield'. True or false?

Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par.

An 'interest only' loan can also be called a:

Which of the following statements is NOT correct? Borrowers:

Which of the following statements is NOT correct? Lenders:

Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct?

Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct?

A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$

A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year.
All answers are given in the same order: $$r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily}$$

Calculate the effective annual rates of the following three APR's:
• A credit card offering an interest rate of 18% pa, compounding monthly.
• A bond offering a yield of 6% pa, compounding semi-annually.
• An annual dividend-paying stock offering a return of 10% pa compounding annually.
All answers are given in the same order: $r_\text{credit card, eff yrly}$, $r_\text{bond, eff yrly}$, $r_\text{stock, eff yrly}$

In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years.
The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month).

Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal?

A 10 year Australian government bond was just issued at par with a yield of 3.9% pa. The fixed coupon payments are semi-annual. The bond has a face value of $1,000. Six months later, just after the first coupon is paid, the yield of the bond decreases to 3.65% pa. What is the bond's new price?

Which one of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct?

Many Australian home loans that are interest-only actually require payments to be made on a fully amortising basis after a number of years. You decide to borrow $600,000 from the bank at an interest rate of 4.25% pa for 25 years. The payments will be interest-only for the first 10 years (t=0 to 10 years), then they will have to be paid on a fully amortising basis for the last 15 years (t=10 to 25 years). Assuming that interest rates will remain constant, what will be your monthly payments over the first 10 years from now, and then the next 15 years after that? The answer options are given in the same order.

You just entered into a fully amortising home loan with a principal of $600,000, a variable interest rate of 4.25% pa and a term of 25 years. Immediately after settling the loan, the variable interest rate suddenly falls to 4% pa! You can't believe your luck. Despite this, you plan to continue paying the same home loan payments as you did before. How long will it now take to pay off your home loan?
Assume that the lower interest rate was granted immediately and that rates were and are now again expected to remain constant. Round your answer up to the nearest whole month.

A home loan company advertises an interest rate of 6% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places.

A credit card company advertises an interest rate of 18% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places.

A semi-annual coupon bond has a yield of 3% pa. Which of the following statements about the yield is NOT correct? All rates are given to four decimal places.

How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 6% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula: $$\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1$$

You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage?
The interest rate is still 9% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month).

You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).

You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change. You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month. At the maturity of the mortgage, what will be the principal?
That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?

You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset?

A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow ($V_\text{before}$), so: $$\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}}$$
Assume that:
• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month.

In Australia in the 1980's, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%.
In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%. If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa? Give your answer as a proportional increase over the amount you could borrow when interest rates were high $(V_\text{high rates})$, so: $$\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}}$$
Assume that:
• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APR's) compounding per month.

For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to the bond or politely ?

For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to her bond or politely ?

Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid semi-annually. So there are two coupons per year, paid in arrears every six months.

Calculate the price of a newly issued ten year bond with a face value of $100, a yield of 8% pa and a fixed coupon rate of 6% pa, paid annually. So there's only one coupon per year, paid in arrears every year.

Bonds X and Y are issued by the same US company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively.
Which of the following statements is true?

Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price?

A two year Government bond has a face value of $100, a yield of 0.5% and a fixed coupon rate of 0.5%, paid semi-annually. What is its price?

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

A two year Government bond has a face value of $100, a yield of 2.5% pa and a fixed coupon rate of 0.5% pa, paid semi-annually. What is its price?

Which of the following statements about risk free government bonds is NOT correct? Hint: Total return can be broken into income and capital returns as follows: \begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned} The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment.

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value of the bond is $100. What is its price?

Bonds A and B are issued by the same Australian company.
Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices?

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true?

A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price?

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other. Which of the following statements is true?

A four year bond has a face value of $100, a yield of 6% and a fixed coupon rate of 12%, paid semi-annually. What is its price?

Which one of the following bonds is trading at a discount?

A five year bond has a face value of $100, a yield of 12% and a fixed coupon rate of 6%, paid semi-annually. What is the bond's price?

Which one of the following bonds is trading at par?

A firm wishes to raise $8 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

For a bond that pays fixed semi-annual coupons, how is the annual coupon rate defined, and how is the bond's annual income yield from time 0 to 1 defined mathematically?
Let: $P_0$ be the bond price now, $F_T$ be the bond's face value, $T$ be the bond's maturity in years, $r_\text{total}$ be the bond's total yield, $r_\text{income}$ be the bond's income yield, $r_\text{capital}$ be the bond's capital yield, and $C_t$ be the bond's coupon at time t in years. So $C_{0.5}$ is the coupon in 6 months, $C_1$ is the coupon in 1 year, and so on.

The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that: $$r_\text{total} = r_\text{income} + r_\text{capital}$$ $$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$ Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. From its date of issue until maturity, the income return of a fixed annual coupon:

Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. or ?

Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. or ?

Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So $V=D+E$. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell.
Remember: $$r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0}$$ where $r_{0-1}$ is the return (percentage change) of an asset with price $p_0$ initially, $p_1$ one period later, and paying a cash flow of $c_1$ at time $t=1$.

Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth?
Assume that:
• No income (rent) was received from the house during the short time over which house prices fell.
• Your friend will not declare bankruptcy, he will always pay off his debts.

One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa. What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E).

Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$ $$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$ What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and $r_D$ is the cost of debt.
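The margin loan question above asks for the total return on wealth when $100,000 of shares is partly funded by a $70,000 loan at 7.84% pa, with a 4% pa dividend yield and 5% pa capital gain. A minimal Python sketch of that calculation, using the figures from the question (the function name and structure are my own, not from any textbook):

```python
def return_on_equity(assets, debt, income_yield, capital_yield, debt_rate):
    """Total return on the owner's equity (wealth) over one year.

    Equity return = (capital gain + income - interest) / initial equity.
    All rates are effective annual rates; cash flows occur at year end.
    """
    equity = assets - debt
    capital_gain = assets * capital_yield   # price appreciation of the assets
    income = assets * income_yield          # dividends received
    interest = debt * debt_rate             # interest paid on the loan
    return (capital_gain + income - interest) / equity

# Figures from the margin loan question above.
r = return_on_equity(assets=100_000, debt=70_000,
                     income_yield=0.04, capital_yield=0.05, debt_rate=0.0784)
print(f"{r:.4%}")
```

The same function applies to the house questions above, since a mortgaged house is just a levered asset with zero income while prices move.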
Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to:

Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant?

Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - ΔNWC+IntExp$$

A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to.

A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer.

The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones.
Assume the following:
• Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola.
• Motorola had a 20% after-tax WACC before it merged with Google.
• Google and Motorola have the same level of gearing.
• Both companies operate in a classical tax system.
You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's:

A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct:

A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following: \begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned} Does this annual FFCF or the annual interest tax shield?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT). \begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} \\ Does this annual FFCF or the annual interest tax shield?

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense $(IntExp)$ is zero: \begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC - 0\\ \end{aligned} Does this annual FFCF with zero interest expense or the annual interest tax shield?
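The FFCF questions above all turn on whether a formula includes the annual interest tax shield ($IntExp \times t_c$). A short Python sketch with made-up illustrative figures (my assumptions, not numbers from any question) showing that the net-income method includes the shield while the EBIT/NOPAT-style method excludes it, the difference being exactly $IntExp \times t_c$:

```python
# Illustrative figures (assumptions for demonstration only), in $m.
rev, cogs, fc, depr, int_exp, tc = 100.0, 30.0, 10.0, 20.0, 10.0, 0.30
capex, d_nwc = 20.0, 0.0

# Net-income method: FFCF = NI + Depr - CapEx - dNWC + IntExp
ni = (rev - cogs - fc - depr - int_exp) * (1 - tc)
ffcf_levered = ni + depr - capex - d_nwc + int_exp     # includes interest tax shield

# EBIT/NOPAT method: FFCF = EBIT(1 - tc) + Depr - CapEx - dNWC
ebit = rev - cogs - fc - depr
ffcf_unlevered = ebit * (1 - tc) + depr - capex - d_nwc  # excludes interest tax shield

interest_tax_shield = int_exp * tc
print(ffcf_levered, ffcf_unlevered, interest_tax_shield)
assert abs(ffcf_levered - ffcf_unlevered - interest_tax_shield) < 1e-9
```

Working through the algebra, the two methods differ only in the $IntExp \times t_c$ term, which is why the unlevered cash flow must be discounted at a rate that compensates for the missing shield.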
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT). \begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned} \\ Does this annual FFCF or the annual interest tax shield?

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not. Which of the below FFCF formulas include the interest tax shield in the cash flow?
$$(1) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$
$$(2) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp.(1-t_c)$$
$$(3) \quad FFCF=EBIT.(1-t_c )+ Depr- CapEx -ΔNWC+IntExp.t_c$$
$$(4) \quad FFCF=EBIT.(1-t_c) + Depr- CapEx -ΔNWC$$
$$(5) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC+IntExp.t_c$$
$$(6) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC$$
$$(7) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC$$
$$(8) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC-IntExp.t_c$$
$$(9) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC$$
$$(10) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC-IntExp.t_c$$
The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.
$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$
$$EBIT=Rev - COGS - FC-Depr$$
$$EBITDA=Rev - COGS - FC$$
$$Tax =(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}$$

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). One method is to use the following formulas to transform net income (NI) into FFCF including interest and depreciation tax shields:
$$FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$
$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$
Another popular method is to use EBITDA rather than net income.
EBITDA is defined as: $$EBITDA=Rev - COGS - FC$$ One of the below formulas correctly calculates FFCF from EBITDA, including interest and depreciation tax shields, giving an identical answer to that above. Which formula is correct?

Value the following business project to manufacture a new product.

Project Data:
• Project life: 2 yrs
• Initial investment in equipment: $6m
• Depreciation of equipment per year: $3m
• Expected sale price of equipment at end of project: $0.6m
• Unit sales per year: 4m
• Sale price per unit: $8
• Variable cost per unit: $5
• Fixed costs per year, paid at the end of each year: $1m
• Interest expense per year: 0
• Tax rate: 30%
• Weighted average cost of capital after tax per annum: 10%

Notes
1. The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1). At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research which was incurred one year ago.

Assumptions
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.
• The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office.

What is the expected net present value (NPV) of the project?

A young lady is trying to decide if she should attend university.
Her friends say that she should go to university because she is more likely to meet a clever young man than if she begins full time work straight away. What's the correct way to classify this item from a capital budgeting perspective when trying to find the Net Present Value of going to university rather than working? The opportunity to meet a desirable future spouse should be classified as:

A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are:

A man has taken a day off from his casual painting job to relax. It's the end of the day and he's thinking about the hours that he could have spent working (in the past) which are now:

Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

Candys Corp Income Statement for year ending 30th June 2013 ($m):
Sales 200
COGS 50
Operating expense 10
Depreciation 20
Interest expense 10
Income before tax 110
Tax at 30% 33
Net income 77

Candys Corp Balance Sheet as at 30th June (2013 $m, then 2012 $m):
Assets:
Current assets 220, 180
PPE cost 300, 340
Accumulated depreciation 60, 40
Carrying amount 240, 300
Total assets 460, 480
Liabilities:
Current liabilities 175, 190
Non-current liabilities 135, 130
Owners' equity:
Retained earnings 50, 60
Contributed equity 100, 100
Total L and OE 460, 480

Note: all figures are given in millions of dollars ($m).

Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Ching-A-Lings Corp Income Statement for year ending 30th June 2013 ($m):
Sales 100
COGS 20
Depreciation 20
Rent expense 11
Interest expense 19
Taxable income 30
Taxes at 30% 9
Net income 21

Ching-A-Lings Corp Balance Sheet as at 30th June (2013 $m, then 2012 $m):
Inventory 49, 38
Trade debtors 14, 2
Rent paid in advance 5, 5
PPE 400, 400
Total assets 468, 445
Trade creditors 4, 10
Bond liabilities 200, 190
Contributed equity 145, 145
Retained profits 119, 100
Total L and OE 468, 445

Note: All figures are given in millions of dollars ($m). The cash flow from assets was:

To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position.

Cash Flow From Assets (CFFA) can be defined as:

Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? Remember: $$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$ $$CFFA=NI+Depr-CapEx - ΔNWC+IntExp$$

Find the cash flow from assets (CFFA) of the following project.

One Year Mining Project Data:
• Project life: 1 year
• Initial investment in building mine and equipment: $9m
• Depreciation of mine and equipment over the year: $8m
• Kilograms of gold mined at end of year: 1,000
• Sale price per kilogram: $0.05m
• Variable cost per kilogram: $0.03m
• Before-tax cost of closing mine at end of year: $4m
• Tax rate: 30%

Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year.
Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed.

Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold.

Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one.

Project Data:
• Project life: 2 yrs
• Initial investment in equipment: $600k
• Depreciation of equipment per year: $250k
• Expected sale price of equipment at end of project: $200k
• Revenue per job: $12k
• Variable cost per job: $4k
• Quantity of jobs per year: 120
• Fixed costs per year, paid at the end of each year: $100k
• Interest expense in first year (at t=1): $16.091k
• Interest expense in second year (at t=2): $9.711k
• Tax rate: 30%
• Government treasury bond yield: 5%
• Bank loan debt yield: 6%
• Levered cost of equity: 12.5%
• Market portfolio return: 10%
• Beta of assets: 1.24
• Beta of levered equity: 1.5
• Firm's and project's debt-to-equity ratio: 25%

Notes
1. The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.

Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
• Thousands are represented by 'k' (kilo).
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are nominal. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress. Which of the following statements is NOT correct, all things remaining equal?

A fast-growing firm is suitable for valuation using a multi-stage growth model. Its nominal unlevered cash flow from assets ($CFFA_U$) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of:

• 12% pa for the next two years (from t=1 to 3),
• 5% over the fourth year (from t=3 to 4), and
• -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate.

Assume that:

• The nominal WACC after tax is 9.5% pa and is not expected to change.
• The nominal WACC before tax is 10% pa and is not expected to change.
• The firm has a target debt-to-equity ratio that it plans to maintain.
• The inflation rate is 3% pa.
• All rates are given as nominal effective annual rates.

What is the levered value of this fast growing firm's assets?

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct?
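The multi-stage growth firm question above can be checked numerically. This is a sketch, assuming that the levered asset value is found by discounting the unlevered cash flows at the after-tax WACC of 9.5% pa (reasonable when the firm maintains a constant target debt-to-equity ratio), with a terminal value calculated at t=4 using the perpetuity-with-growth formula:

```python
# Sketch: multi-stage valuation of the fast-growing firm, figures in $m.
# Assumption: levered asset value = unlevered CFFA discounted at the
# after-tax WACC, since the debt-to-equity ratio is held constant.

wacc = 0.095                # nominal after-tax WACC, effective annual
c = [1.0]                   # CFFA_U at t=1 is $1m
c.append(c[-1] * 1.12)      # t=2: growing at 12% pa
c.append(c[-1] * 1.12)      # t=3: growing at 12% pa
c.append(c[-1] * 1.05)      # t=4: growing at 5% over the fourth year

# Terminal value at t=4 of the -1% pa growing perpetuity from t=5 onwards
g_terminal = -0.01
v4 = c[-1] * (1 + g_terminal) / (wacc - g_terminal)

# Discount the four explicit cash flows and the terminal value back to t=0
v0 = sum(cf / (1 + wacc) ** t for t, cf in enumerate(c, start=1))
v0 += v4 / (1 + wacc) ** 4

print(round(v0, 2))   # levered asset value in $m, roughly 12.36
```

The key design choice is where the terminal value sits: the perpetuity formula values the constant-growth cash flows one period before the first one, so the -1% perpetuity starting at t=5 is valued at t=4 and then discounted four years.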
Which statement about risk, required return and capital structure is the most correct?

A firm is considering a new project of similar risk to the current risk of the firm. This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed. In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity. To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system.

Question 99  capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure

A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Assume that:

• The firm and individual investors can borrow at the same rate and have the same tax rates.
• The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium.
• There are no market frictions relating to debt such as asymmetric information or transaction costs.
• Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered.

According to Miller and Modigliani's theory, which statement is correct?

Fill in the missing words in the following sentence: All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________.

You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow?
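The mortgage question above is a standard annuity present value. A sketch, assuming the usual mortgage convention that the quoted 9% pa is an APR compounding monthly, so the effective monthly rate is 9%/12:

```python
# Sketch: principal of a 30-year fully amortising loan with monthly
# repayments of $2,500 at 9% pa (assumed to be an APR compounding monthly).
payment = 2500
r = 0.09 / 12      # effective monthly rate
n = 30 * 12        # number of monthly payments

# Present value of an ordinary annuity: C * (1 - (1+r)^-n) / r
principal = payment * (1 - (1 + r) ** -n) / r
print(round(principal, 2))   # loan principal, about $310,700
```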
After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order.

You deposit money into a bank. Which of the following statements is NOT correct? You:

You bought a house, primarily funded using a home loan from a bank. Which of the following statements is NOT correct?

Where can a publicly listed firm's book value of equity be found? It can be sourced from the company's:

Where can a private firm's market value of equity be found? It can be sourced from the company's:

There are a number of different formulas involving real and nominal returns and cash flows. Which one of the following formulas is NOT correct? All returns are effective annual rates. Note that the symbol $\approx$ means 'approximately equal to'.

Taking inflation into account when using the DDM can be hard. Which of the following formulas will NOT give a company's current stock price $(P_0)$? Assume that the annual dividend was just paid $(C_0)$, and the next dividend will be paid in one year $(C_1)$.

A home loan company advertises an interest rate of 4.5% pa, payable monthly. Which of the following statements about the interest rate is NOT correct?

For an asset's price to quintuple every 5 years, what must be its effective annual capital return? Note that a stock's price quintuples when it increases from say $1 to $5.

How many years will it take for an asset's price to triple (increase from say $1 to $3) if it grows by 5% pa?

If someone says "my shares rose by 10% last year", what do you assume that they mean?

If the nominal gold price is expected to increase at the same rate as inflation which is 3% pa, which of the following statements is NOT correct?

A stock is expected to pay a dividend of $1 in one year. Its future annual dividends are expected to grow by 10% pa.
So the first dividend of $1 is in one year, and the year after that the dividend will be $1.10 (=1*(1+0.1)^1), and a year later $1.21 (=1*(1+0.1)^2) and so on forever.

Its required total return is 30% pa. The total required return and growth rate of dividends are given as effective annual rates. The stock is fairly priced.

Calculate the pay back period of buying the stock and holding onto it forever, assuming that the dividends are received as at each time, not smoothly over each year.

A share will pay its next dividend of $C_1$ in one year, and will continue to pay a dividend every year after that forever, growing at a rate of $g$. So the next dividend will be $C_2=C_1 (1+g)^1$, then $C_3=C_2 (1+g)^1$, and so on forever. The current price of the share is $P_0$ and its required return is $r$.

Which of the following is NOT equal to the expected share price in 2 years $(P_2)$ just after the dividend at that time $(C_2)$ has been paid?

A real estate agent says that the price of a house in Sydney Australia is approximately equal to the gross weekly rent times 1000. What type of valuation method is the real estate agent using?

A stock will pay you a dividend of $2 tonight if you buy it today. Thereafter the annual dividend is expected to grow by 3% pa, so the next dividend after the $2 one tonight will be $2.06 in one year, then in two years it will be $2.1218 and so on. The stock's required return is 8% pa.

What is the stock price today and what do you expect the stock price to be tomorrow, approximately?

Itau Unibanco is a major listed bank in Brazil with a market capitalisation of equity equal to BRL 85.744 billion, EPS of BRL 3.96 and 2.97 billion shares on issue. Banco Bradesco is another major bank with total earnings of BRL 8.77 billion and 2.52 billion shares on issue. Estimate Banco Bradesco's current share price using a price-earnings multiples approach assuming that Itau Unibanco is a comparable firm. Note that BRL is the Brazilian Real, their currency.
Figures sourced from Google Finance on the market close of the BVMF on 24/7/15.

Tesla Motors advertises that its Model S electric car saves $570 per month in fuel costs. Assume that Tesla cars last for 10 years, fuel and electricity costs remain the same, and savings are made at the end of each month with the first saving of $570 in one month from now. The effective annual interest rate is 15.8%, and the effective monthly interest rate is 1.23%. What is the present value of the savings?

All other things remaining equal, a project is worse if its:

The following cash flows are expected:

• A perpetuity of yearly payments of $30, with the first payment in 5 years (first payment at t=5, which continues every year after that forever).
• One payment of $100 in 6 years and 3 months (t=6.25).

What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 4% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula:

$$\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1$$

A firm wishes to raise $50 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 6 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A firm wishes to raise $50 million now. They will issue 5% pa semi-annual coupon bonds that will mature in 3 years and have a face value of $100 each. Bond yields are 6% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A firm wishes to raise $50 million now.
They will issue 5% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 5% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A stock is expected to pay its first dividend of $20 in 3 years (t=3), which it will continue to pay for the next nine years, so there will be ten $20 payments altogether with the last payment in year 12 (t=12).

From the thirteenth year onward, the dividend is expected to be 4% more than the previous year, forever. So the dividend in the thirteenth year (t=13) will be $20.80, then $21.632 in year 14, and so on forever. The required return of the stock is 10% pa. All rates are effective annual rates. Calculate the current (t=0) stock price.

A 4.5% fixed coupon Australian Government bond was issued at par in mid-April 2009. Coupons are paid semi-annually in arrears in mid-April and mid-October each year. The face value is $1,000. The bond will mature in mid-April 2020, so the bond had an original tenor of 11 years.

Today is mid-September 2015 and similar bonds now yield 1.9% pa. What is the bond's new price? Note: there are 10 semi-annual coupon payments remaining from now (mid-September 2015) until maturity (mid-April 2020); both yields are given as APR's compounding semi-annually; assume that the yield curve was flat before the change in yields, and remained flat afterwards as well.

An investor bought a 5 year government bond with a 2% pa coupon rate at par. Coupons are paid semi-annually. The face value is $100. Calculate the bond's new price 8 months later after yields have increased to 3% pa. Note that both yields are given as APR's compounding semi-annually. Assume that the yield curve was flat before the change in yields, and remained flat afterwards as well.

Use the below information to value a levered company with constant annual perpetual cash flows from assets.
The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other).

Data on a Levered Firm with Perpetual Cash Flows
Abbreviation | Value | Item full name
$\text{CFFA}_\text{U}$ | $100m | Cash flow from assets excluding interest tax shields (unlevered)
$\text{CFFA}_\text{L}$ | $112m | Cash flow from assets including interest tax shields (levered)
$g$ | 0% pa | Growth rate of cash flow from assets, levered and unlevered
$\text{WACC}_\text{BeforeTax}$ | 7% pa | Weighted average cost of capital before tax
$\text{WACC}_\text{AfterTax}$ | 6.25% pa | Weighted average cost of capital after tax
$r_\text{D}$ | 5% pa | Cost of debt
$r_\text{EL}$ | 9% pa | Cost of levered equity
$D/V_L$ | 50% | Debt to assets ratio, where the asset value includes tax shields
$t_c$ | 30% | Corporate tax rate

What is the value of the levered firm including interest tax shields?

Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive.

Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.

Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true?

There are many different ways to value a firm's assets. Which of the following will NOT give the correct market value of a levered firm's assets $(V_L)$?
Assume that: • The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market. • The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever. • Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold. • There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero. • The firm operates in a mature industry with zero real growth. • All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation. Where: $$r_\text{WACC before tax} = r_D.\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital before tax}$$ $$r_\text{WACC after tax} = r_D.(1-t_c).\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital after tax}$$ $$NI_L=(Rev-COGS-FC-Depr-\mathbf{IntExp}).(1-t_c) = \text{Net Income Levered}$$ $$CFFA_L=NI_L+Depr-CapEx - \varDelta NWC+\mathbf{IntExp} = \text{Cash Flow From Assets Levered}$$ $$NI_U=(Rev-COGS-FC-Depr).(1-t_c) = \text{Net Income Unlevered}$$ $$CFFA_U=NI_U+Depr-CapEx - \varDelta NWC= \text{Cash Flow From Assets Unlevered}$$ A 30 year Japanese government bond was just issued at par with a yield of 1.7% pa. The fixed coupon payments are semi-annual. The bond has a face value of $100. Six months later, just after the first coupon is paid, the yield of the bond increases to 2% pa. What is the bond's new price? Bonds X and Y are issued by the same company. 
Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true? Which one of the following bonds is trading at a premium? An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price. Convert a 10% continuously compounded annual rate $(r_\text{cc annual})$ into an effective annual rate $(r_\text{eff annual})$. The equivalent effective annual rate is: Which of the following interest rate quotes is NOT equivalent to a 10% effective annual rate of return? Assume that each year has 12 months, each month has 30 days, each day has 24 hours, each hour has 60 minutes and each minute has 60 seconds. APR stands for Annualised Percentage Rate. A continuously compounded monthly return of 1% $(r_\text{cc monthly})$ is equivalent to a continuously compounded annual return $(r_\text{cc annual})$ of: An effective monthly return of 1% $(r_\text{eff monthly})$ is equivalent to an effective annual return $(r_\text{eff annual})$ of: Which of the following quantities is commonly assumed to be normally distributed? The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Which of the below statements is NOT correct? If a stock's future expected effective annual returns are log-normally distributed, what will be bigger, the stock's or effective annual return? Or would you expect them to be ? The symbol $\text{GDR}_{0\rightarrow 1}$ represents a stock's gross discrete return per annum over the first year. 
$\text{GDR}_{0\rightarrow 1} = P_1/P_0$. The subscript indicates the time period that the return is mentioned over. So for example, $\text{AAGDR}_{1 \rightarrow 3}$ is the arithmetic average GDR measured over the two year period from years 1 to 3, but it is expressed as a per annum rate.

Which of the below statements about the arithmetic and geometric average GDR is NOT correct?

Fred owns some Commonwealth Bank (CBA) shares. He has calculated CBA's monthly returns for each month in the past 20 years using this formula:

$$r_\text{t monthly}=\ln \left( \dfrac{P_t}{P_{t-1}} \right)$$

He then took the arithmetic average and found it to be 1% per month using this formula:

$$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.01=1\% \text{ per month}$$

He also found the standard deviation of these monthly returns which was 5% per month:

$$\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.05=5\%\text{ per month}$$

Which of the below statements about Fred's CBA shares is NOT correct? Assume that the past historical average return is the true population average of future expected returns.

Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?

Price and Return Population Statistics
Time | Prices | LGDR | GDR | NDR
0 | 100 | | |
1 | 50 | -0.6931 | 0.5 | -0.5
2 | 100 | 0.6931 | 2 | 1
Arithmetic average | | 0 | 1.25 | 0.25
Arithmetic standard deviation | | 0.6931 | 0.75 | 0.75

The current gold price is $700, gold storage costs are 2% pa and the risk free rate is 10% pa, both with continuous compounding. What should be the 3 year gold futures price?

A $100 stock has a continuously compounded expected total return of 10% pa. Its dividend yield is 2% pa with continuous compounding. What do you expect its price to be in one year?

A $100 stock has a continuously compounded expected total return of 10% pa.
Its dividend yield is 2% pa with continuous compounding. What do you expect its price to be in 2.5 years?

A bank quotes an interest rate of 6% pa with quarterly compounding. Note that another way of stating this rate is that it is an annual percentage rate (APR) compounding discretely every 3 months. Which of the following statements about this rate is NOT correct? All percentages are given to 6 decimal places. The equivalent:

Convert a 10% effective annual rate $(r_\text{eff annual})$ into a continuously compounded annual rate $(r_\text{cc annual})$. The equivalent continuously compounded annual rate is:

A continuously compounded semi-annual return of 5% $(r_\text{cc 6mth})$ is equivalent to a continuously compounded annual return $(r_\text{cc annual})$ of:

A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of $1. Assume that stock prices are log-normally distributed. In one year, what do you expect the mean and median prices to be? The answer options are given in the same order.

A stock has an arithmetic average continuously compounded return (AALGDR) of 10% pa, a standard deviation of continuously compounded returns (SDLGDR) of 80% pa and current stock price of $1. Assume that stock prices are log-normally distributed. In 5 years, what do you expect the mean and median prices to be? The answer options are given in the same order.

Here is a table of stock prices and returns. Which of the statements below the table is NOT correct?

Price and Return Population Statistics
Time | Prices | LGDR | GDR | NDR
0 | 100 | | |
1 | 99 | -0.010050 | 0.990000 | -0.010000
2 | 180.40 | 0.600057 | 1.822222 | 0.822222
3 | 112.73 | -0.470181 | 0.624889 | -0.375111
Arithmetic average | | 0.0399 | 1.1457 | 0.1457
Arithmetic standard deviation | | 0.4384 | 0.5011 | 0.5011

Fred owns some BHP shares.
He has calculated BHP's monthly returns for each month in the past 30 years using this formula:

$$r_\text{t monthly}=\ln \left( \dfrac{P_t}{P_{t-1}} \right)$$

He then took the arithmetic average and found it to be 0.8% per month using this formula:

$$\bar{r}_\text{monthly}= \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( r_\text{t monthly} \right)} }{T} =0.008=0.8\% \text{ per month}$$

He also found the standard deviation of these monthly returns which was 15% per month:

$$\sigma_\text{monthly} = \dfrac{ \displaystyle\sum\limits_{t=1}^T{\left( \left( r_\text{t monthly} - \bar{r}_\text{monthly} \right)^2 \right)} }{T} =0.15=15\%\text{ per month}$$

Assume that the past historical average return is the true population average of future expected returns and the stock's returns calculated above $(r_\text{t monthly})$ are normally distributed. Which of the below statements about Fred's BHP shares is NOT correct?

In 2014 the median starting salary of male and female Australian bachelor degree accounting graduates aged less than 25 years in their first full-time industry job was $50,000 before tax, according to Graduate Careers Australia. See page 9 of this report. Personal income tax rates published by the Australian Tax Office are reproduced for the 2014-2015 financial year in the table below.

Taxable income | Tax on this income
0 – $18,200 | Nil
$18,201 – $37,000 | 19c for each $1 over $18,200
$37,001 – $80,000 | $3,572 plus 32.5c for each $1 over $37,000
$80,001 – $180,000 | $17,547 plus 37c for each $1 over $80,000
$180,001 and over | $54,547 plus 45c for each $1 over $180,000

The above rates do not include the Medicare levy of 2%. Exclude the Medicare levy from your calculations.

How much personal income tax would you have to pay per year if you earned $50,000 per annum before-tax?

A firm pays a fully franked cash dividend of $100 to one of its Australian shareholders who has a personal marginal tax rate of 15%. The corporate tax rate is 30%.
What will be the shareholder's personal tax payable due to the dividend payment?

Due to floods overseas, there is a cut in the supply of the mineral iron ore and its price increases dramatically. An Australian iron ore mining company therefore expects a large but temporary increase in its profit and cash flows. The mining company does not have any positive NPV projects to begin with, so what should it do? Select the most correct answer.

A pharmaceutical firm has just discovered a valuable new drug. So far the news has been kept a secret. The net present value of making and commercialising the drug is $200 million, but $600 million of bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and bond raising to shareholders simultaneously in the same announcement. The bonds will be issued shortly after.

Once the announcement is made and the bonds are issued, what is the expected increase in the value of the firm's assets (ΔV), market capitalisation of debt (ΔD) and market cap of equity (ΔE)? The triangle symbol is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: $ΔV = ΔD+ΔE$

A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?

An old company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?
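The fully franked dividend question above can be checked with a gross-up calculation. This is a sketch, assuming the Australian dividend imputation rules the question implies: the franking credit on a fully franked dividend equals the corporate tax already paid, $t_c/(1-t_c)$ times the cash dividend:

```python
# Sketch: personal tax payable on a fully franked $100 cash dividend
# under Australian dividend imputation (figures from the question).
cash_dividend = 100
tc = 0.30          # corporate tax rate
tp = 0.15          # shareholder's personal marginal tax rate

franking_credit = cash_dividend * tc / (1 - tc)   # corporate tax already paid
grossed_up = cash_dividend + franking_credit      # shareholder's taxable income

# Personal tax on the grossed-up amount, less the credit for tax the
# company already paid. A negative result is a refund to the shareholder.
tax_payable = grossed_up * tp - franking_credit
print(round(franking_credit, 2), round(tax_payable, 2))   # 42.86 -21.43
```

Because the shareholder's marginal rate (15%) is below the corporate rate (30%), the franking credit exceeds the personal tax on the grossed-up dividend, so the net tax payable is negative, i.e. a refund.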
A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?

A mining firm has just discovered a new mine. So far the news has been kept a secret. The net present value of digging the mine and selling the minerals is $250 million, but $500 million of new equity and $300 million of new bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and equity and bond raising to shareholders simultaneously in the same announcement. The shares and bonds will be issued shortly after.

Once the announcement is made and the new shares and bonds are issued, what is the expected increase in the value of the firm's assets $(\Delta V)$, market capitalisation of debt $(\Delta D)$ and market cap of equity $(\Delta E)$? Assume that markets are semi-strong form efficient. The triangle symbol $\Delta$ is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: $\Delta V = \Delta D+ \Delta E$

A young lady is trying to decide if she should attend university or not. The young lady's parents say that she must attend university because otherwise all of her hard work studying and attending school during her childhood was a waste. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The hard work studying at school in her childhood should be classified as:

A young lady is trying to decide if she should attend university or begin working straight away in her home town.
The young lady's grandma says that she should not go to university because she is less likely to marry the local village boy whom she likes, because she will spend less time with him if she attends university. What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university? The cost of not marrying the local village boy should be classified as:

The 'time value of money' is most closely related to which of the following concepts?

Question 513  stock split, reverse stock split, stock dividend, bonus issue, rights issue

Which of the following statements is NOT correct?

A company conducts a 4 for 3 stock split. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order.

A company's share price fell by 20% and its number of shares rose by 25%. Assume that there are no taxes, no signalling effects and no transaction costs. Which one of the following corporate events may have happened?

In mid 2009 the listed mining company Rio Tinto announced a 21-for-40 renounceable rights issue. Below is the chronology of events:

• 04/06/2009. Share price opens at $69.00 and closes at $66.90.
• 05/06/2009. 21-for-40 rights issue announced at a subscription price of $28.29.
• 16/06/2009. Last day that shares trade cum-rights. Share price opens at $76.40 and closes at $75.50.
• 17/06/2009. Shares trade ex-rights. Rights trading commences.

All things remaining equal, what would you expect Rio Tinto's stock price to open at on the first day that it trades ex-rights (17/6/2009)? Ignore the time value of money since time is negligibly short. Also ignore taxes.

A fairly priced unlevered firm plans to pay a dividend of $1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa.
The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. No new equity or debt will be issued to fund the new projects, they'll all be funded by the cut in dividends.

What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead? Assume that payout policy is irrelevant to firm value (so there's no signalling effects) and that all rates are effective annual rates.

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_{0} = \frac{c_1}{r_{\text{eff}} - g_{\text{eff}}}$$

What is the discount rate '$r_\text{eff}$' in this equation?

A share was bought for $20 (at t=0) and paid its annual dividend of $3 one year later (at t=1). Just after the dividend was paid, the share price was $16 (at t=1). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: $r_\text{total},r_\text{capital},r_\text{income}$.

The following is the Dividend Discount Model (DDM) used to price stocks:

$$P_0 = \frac{d_1}{r-g}$$

Assume that the assumptions of the DDM hold and that the time period is measured in years. Which of the following is equal to the expected dividend in 3 years, $d_3$?

When using the dividend discount model to price a stock:

$$p_{0} = \frac{d_1}{r - g}$$

The growth rate of dividends (g):

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$

Which expression is NOT equal to the expected capital return?

Find the cash flow from assets (CFFA) of the following project.
Project Data
Project life: 2 years
Initial investment in equipment: $8m
Depreciation of equipment per year for tax purposes: $3m
Unit sales per year: 10m
Sale price per unit: $9
Variable cost per unit: $4
Fixed costs per year, paid at the end of each year: $2m
Tax rate: 30%

Note 1: Due to the project, the firm will have to purchase $40m of inventory initially (at t=0). Half of this inventory will be sold at t=1 and the other half at t=2.

Note 2: The equipment will have a book value of $2m at the end of the project for tax purposes. However, the equipment is expected to fetch $1m when it is sold. Assume that the full capital loss is tax-deductible and taxed at the full corporate tax rate.

Note 3: The project will be fully funded by equity which investors will expect to pay dividends totaling $10m at the end of each year.

Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m).

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Note that a fair gamble is a bet that has an expected value of zero, such as paying $0.50 to win $1 in a coin flip with heads or nothing if it lands tails. Fairly priced insurance is when the expected present value of the insurance premiums is equal to the expected loss from the disaster that the insurance protects against, such as the cost of rebuilding a home after a catastrophic fire. Which of the following statements is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the following statements is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $500 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $500. Each player can flip a coin and if they flip heads, they receive $500. If they flip tails then they will lose $500. Which of the following statements is NOT correct?
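The coin-toss questions above turn on expected utility. The three players' utility curves appear in graphs that are not reproduced here, so the sketch below uses hypothetical stand-ins: a concave (risk-averse) square-root utility, a linear (risk-neutral) utility, and a convex (risk-loving) quadratic utility. The names and functional forms are illustrative assumptions, not the question's actual curves:

```python
import math

# Sketch: expected utility of a fair +/- $500 coin toss for three
# hypothetical utility functions (illustrative stand-ins only).

def expected_utility(u, wealth, stake):
    """Expected utility of a 50/50 gamble of +/- stake around wealth."""
    return (u(wealth + stake) + u(wealth - stake)) / 2

wealth, stake = 500, 500
players = {
    "risk-averse (sqrt)": lambda w: math.sqrt(w),
    "risk-neutral (linear)": lambda w: w,
    "risk-loving (quadratic)": lambda w: w ** 2,
}

for name, u in players.items():
    eu = expected_utility(u, wealth, stake)
    print(f"{name}: E[U(play)]={eu:.2f} vs U(don't play)={u(wealth):.2f}")
# The concave player declines the fair gamble, the convex player takes it,
# and the linear player is indifferent - the usual pattern these
# questions test.
```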
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $256 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $256. Each player can flip a coin and if they flip heads, they receive $256. If they flip tails then they will lose $256. Which of the following statements is NOT correct?

Which of the below statements about utility is NOT generally accepted by economists? Most people are thought to:

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth.
A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct?

Mr Blue, Miss Red and Mrs Green are people with different utility functions. Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct?

Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area. Fred's residential property has an expected total return of 8% pa. He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $23,173.86. The future value of 12 months of rental payments one year ahead is $25,027.77. What is the expected annual growth rate of the rental payments? In other words, by what percentage increase will Fred have to raise the monthly rent by each year to sustain the expected annual total return of 8%?

The sayings "Don't cry over spilt milk", "Don't regret the things that you can't change" and "What's done is done" are most closely related to which financial concept?

Question 768  accounting terminology, book and market values, no explanation

Accountants and finance professionals have lots of names for the same things which can be quite confusing. Which of the following groups of items are NOT synonyms?

"Buy low, sell high" is a well-known saying. It suggests that investors should buy low then sell high, in that order. How would you re-phrase that saying to describe short selling?

Which of the following statements is NOT correct? Assume that all things remain equal.
So for example, don't assume that just because a company's dividends and profit rise that its required return will also rise, assume the required return stays the same.

You deposit money into a bank account. Which of the following statements about this deposit is NOT correct?

An Australian company just issued two bonds:
• A 1 year zero coupon bond at a yield of 8% pa, and
• A 2 year zero coupon bond at a yield of 10% pa.
What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

An Australian company just issued two bonds:
• A 6-month zero coupon bond at a yield of 6% pa, and
• A 12 month zero coupon bond at a yield of 7% pa.
What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

A European company just issued two bonds, a
• 1 year zero coupon bond at a yield of 8% pa, and a
• 2 year zero coupon bond at a yield of 10% pa.
What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY).
• The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies;
• ICBC's historical earnings per share (EPS) is RMB 0.74;
• CCB's backward-looking PE ratio is 4.59;
• BOC's backward-looking PE ratio is 4.78;
• ABC's backward-looking PE ratio is also 4.78.
Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange.

A firm issues debt and uses the funds to buy back equity.
Assume that there are no costs of financial distress or transactions costs. Which of the following statements about interest tax shields is NOT correct?

Below is a graph of 3 people's utility functions, Mr Blue (U=W^(1/2)), Miss Red (U=W/10) and Mrs Green (U=W^2/1000). Assume that each of them currently has $50 of wealth. Which of the following statements about them is NOT correct?

(a) Mr Blue would prefer to invest his wealth in a well diversified portfolio of stocks rather than a single stock, assuming that all stocks had the same total risk and return.

A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.

A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that:
• His forecast is true.
• Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true.
• Ignore all costs such as taxes, agent fees, maintenance and so on.
• All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property.
• The non-monetary benefits of owning real estate and renting remain constant.
Which one of the following statements is NOT correct? Over time:

Use the below information to value a levered company with constant annual perpetual cash flows from assets. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Both the cash flow from assets including and excluding interest tax shields are constant (but not equal to each other).

Data on a Levered Firm with Perpetual Cash Flows:
• $\text{CFFA}_\text{U}$ = $48.5m (cash flow from assets excluding interest tax shields, unlevered)
• $\text{CFFA}_\text{L}$ = $50m (cash flow from assets including interest tax shields, levered)
• $g$ = 0% pa (growth rate of cash flow from assets, levered and unlevered)
• $\text{WACC}_\text{BeforeTax}$ = 10% pa (weighted average cost of capital before tax)
• $\text{WACC}_\text{AfterTax}$ = 9.7% pa (weighted average cost of capital after tax)
• $r_\text{D}$ = 5% pa (cost of debt)
• $r_\text{EL}$ = 11.25% pa (cost of levered equity)
• $D/V_L$ = 20% (debt to assets ratio, where the asset value includes tax shields)
• $t_c$ = 30% (corporate tax rate)

What is the value of the levered firm including interest tax shields?

One year ago you bought a $1,000,000 house partly funded using a mortgage loan. The loan size was $800,000 and the other $200,000 was your wealth or 'equity' in the house asset. The interest rate on the home loan was 4% pa. Over the year, the house produced a net rental yield of 2% pa and a capital gain of 2.5% pa. Assuming that all cash flows (interest payments and net rental payments) were paid and received at the end of the year, and all rates are given as effective annual rates, what was the total return on your wealth over the past year?
Hint: Remember that wealth in this context is your equity (E) in the house asset (V = D+E) which is funded by the loan (D) and your deposit or equity (E).

Which of the following statements about returns is NOT correct? A stock's:

The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. A stock has a beta of 0.5. In the last 5 minutes, the federal government unexpectedly raised taxes. Over this time the share market fell by 3%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?

A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?

A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates. Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged. What do you think was the stock's historical return over the last year, given as an effective annual rate?

A firm changes its capital structure by issuing a large amount of equity and using the funds to repay debt. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct?

The CAPM can be used to find a business's expected opportunity cost of capital: $$r_i=r_f+β_i (r_m-r_f)$$ What should be used as the risk free rate $r_f$?
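The three "what was the stock's historical return" questions above hinge on the single index model. As a hedged sketch (not the question bank's official solution): over a very short horizon the risk-free rate and expected-return drift are negligible, so the realised stock return is approximately beta times the realised market return:

```python
# Sketch: realised stock return implied by a market move under the single
# index model, r_stock ≈ beta * r_market, assuming the risk-free rate is
# unchanged and the horizon is short enough that expected-return drift
# over the period is negligible.
def implied_stock_return(beta, market_return):
    return beta * market_return

print(implied_stock_return(0.5, -0.03))  # beta 0.5, market fell 3% in 5 minutes
print(implied_stock_return(1.5, -0.01))  # beta 1.5, market fell 1% in 5 minutes
```

Note that the one-year variant of the question cannot ignore the drift term: over a year the risk-free rate and risk premium contribute materially to the realised return.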
A firm's WACC before tax would decrease due to:

Which of the following statements about the weighted average cost of capital (WACC) is NOT correct?

Project Data:
• Project life: 1 year
• Initial investment in equipment: $8m
• Depreciation of equipment per year: $8m
• Expected sale price of equipment at end of project: 0
• Unit sales per year: 4m
• Sale price per unit: $10
• Variable cost per unit: $5
• Fixed costs per year, paid at the end of each year: $2m
• Interest expense in first year (at t=1): $0.562m
• Corporate tax rate: 30%
• Government treasury bond yield: 5%
• Bank loan debt yield: 9%
• Market portfolio return: 10%
• Covariance of levered equity returns with market: 0.32
• Variance of market portfolio returns: 0.16
• Firm's and project's debt-to-equity ratio: 50%

Notes:
1. Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.

Assumptions:
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
• The project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

The capital market line (CML) is shown in the graph below. The total standard deviation is denoted by σ and the expected return is μ. Assume that markets are efficient so all assets are fairly priced. Which of the below statements is NOT correct?

A company advertises an investment costing $1,000 which they say is under priced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa.
Of the 15% pa total expected return, the dividend yield is expected to be 4% pa and the capital yield 11% pa. Assume that the company's statements are correct. What is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):

Which of the following statements is NOT correct? Lenders:

A stock's required total return will decrease when its:

To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the income statement needed? Note that the income statement is sometimes also called the profit and loss, P&L, or statement of financial performance.

A home loan company advertises an interest rate of 9% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given with an accuracy of 4 decimal places.

A stock's total standard deviation of returns is 20% pa. The market portfolio's total standard deviation of returns is 15% pa. The beta of the stock is 0.8. What is the stock's diversifiable standard deviation?

Which of the following interest rate labels does NOT make sense?

A firm has a debt-to-assets ratio of 20%. What is its debt-to-equity ratio?

What is the present value of real payments of $100 every year forever, with the first payment in one year? The nominal discount rate is 7% pa and the inflation rate is 4% pa.

A company conducts a 10-for-3 stock split. What is the percentage increase in the stock price and the number of shares outstanding?
The answers are given in the same order.

A company conducts a 2-for-3 rights issue at a subscription price of $8 when the pre-announcement stock price was $9. Assume that all investors use their rights to buy those extra shares. What is the percentage increase in the stock price and the number of shares outstanding? The answers are given in the same order.

Question 668  buy and hold, market efficiency, idiom

A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell." Buffet is referring to the buy-and-hold strategy which is to buy and never sell shares. Which of the following is a disadvantage of a buy-and-hold strategy? Assume that share markets are semi-strong form efficient.

Which of the following is NOT an advantage of the strict buy-and-hold strategy? A disadvantage of the buy-and-hold strategy is that it reduces:

Which of the following is NOT a valid method for estimating the beta of a company's stock? Assume that markets are efficient, a long history of past data is available, the stock possesses idiosyncratic and market risk. The variances and standard deviations below denote total risks.

"Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices. Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to:

A stock's required total return will increase when its:

Who owns a company's shares? The:

Let the 'income return' of a bond be the coupon at the end of the period divided by the market price now at the start of the period $(C_1/P_0)$. The expected income return of a premium fixed coupon bond is:

Technical traders:

An economy has only two investable assets: stocks and cash.
Stocks had a historical nominal average total return of negative two percent per annum (-2% pa) over the last 20 years. Stocks are liquid and actively traded. Stock returns are variable, they have risk. Cash is riskless and has a nominal constant return of zero percent per annum (0% pa), which it had in the past and will have in the future. Cash can be kept safely at zero cost. Cash can be converted into shares and vice versa at zero cost. The nominal total return of the shares over the next year is expected to be:

The efficient markets hypothesis (EMH) and no-arbitrage pricing theory is most closely related to which of the following concepts?

Which of the following statements about Australian franking credits is NOT correct? Franking credits:

Question 625  dividend re-investment plan, capital raising

Which of the following statements about dividend re-investment plans (DRPs) is NOT correct?

Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Which of the below statements is NOT correct?

Assets A, B, M and $r_f$ are shown on the graphs above. Asset M is the market portfolio and $r_f$ is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct?

Which of the following statements about yield curves is NOT correct?

A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to always be 7% pa and the rest is the capital yield. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):

You deposit cash into your bank account. Does the deposit account represent a debt or an asset to you?

A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.5 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10% pa. Which of the following formulas will NOT give the correct net present value of the project?

Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart. You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate.

You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity. Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa. What is the current price of a BHP share?
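The BHP question above is a DDM with two dividend streams six months apart. As a hedged sketch (a numerical check under the question's stated figures, not the question bank's official worked answer), the price can be found by brute-force discounting and compared against the closed form of two growing perpetuities, with the interim stream shifted half a year earlier:

```python
# Figures from the question above: cost of equity r = 8% pa effective,
# dividend growth g = 4% pa, interim dividend $0.55 at t = 0.5 and final
# dividend $0.55 at t = 1.0, both growing at g each year.
r, g, d = 0.08, 0.04, 0.55

# Brute force: sum discounted dividends over a long horizon (2,000 years
# is effectively infinite here since g < r).
price = sum(
    d * (1 + g) ** (year - 1) * ((1 + r) ** -(year - 0.5) + (1 + r) ** -year)
    for year in range(1, 2001)
)

# Closed form: a growing perpetuity for the final-dividend stream, plus the
# interim stream which is the same perpetuity compounded forward half a year.
closed_form = d / (r - g) * ((1 + r) ** 0.5 + 1)

print(round(price, 2), round(closed_form, 2))  # the two should agree
```

The agreement of the two numbers is the useful check; the half-year compounding factor $(1+r)^{0.5}$ is what distinguishes this from a plain annual-dividend DDM.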
Project Data:
• Project life: 1 year
• Initial investment in equipment: $6m
• Depreciation of equipment per year: $6m
• Expected sale price of equipment at end of project: 0
• Unit sales per year: 9m
• Sale price per unit: $8
• Variable cost per unit: $6
• Fixed costs per year, paid at the end of each year: $1m
• Interest expense in first year (at t=1): $0.53m
• Tax rate: 30%
• Government treasury bond yield: 5%
• Bank loan debt yield: 6%
• Market portfolio return: 10%
• Covariance of levered equity returns with market: 0.08
• Variance of market portfolio returns: 0.16
• Firm's and project's debt-to-assets ratio: 50%

Notes:
1. Due to the project, current assets will increase by $5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected.

Assumptions:
• The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

A firm is considering a business project which costs $10m now and is expected to pay a single cash flow of $12.1m in two years. Assume that the initial $10m cost is funded using the firm's existing cash so no new equity or debt will be raised. The cost of capital is 10% pa. Which of the following statements about net present value (NPV), internal rate of return (IRR) and payback period is NOT correct?

A firm pays a fully franked cash dividend of $70 to one of its Australian shareholders who has a personal marginal tax rate of 45%. The corporate tax rate is 30%.
What will be the shareholder's personal tax payable due to the dividend payment?

In late 2003 the listed bank ANZ announced a 2-for-11 rights issue to fund the takeover of New Zealand bank NBNZ. Below is the chronology of events:
• 23/10/2003. Share price closes at $18.30.
• 24/10/2003. 2-for-11 rights issue announced at a subscription price of $13. The proceeds of the rights issue will be used to acquire New Zealand bank NBNZ. Trading halt announced in morning before market opens.
• 28/10/2003. Trading halt lifted. Last (and only) day that shares trade cum-rights. Share price opens at $18.00 and closes at $18.14.
• 29/10/2003. Shares trade ex-rights.
All things remaining equal, what would you expect ANZ's stock price to open at on the first day that it trades ex-rights (29/10/2003)? Ignore the time value of money since time is negligibly short. Also ignore taxes.

A graph of assets' expected returns $(\mu)$ versus standard deviations $(\sigma)$ is given in the graph below. The CML is the capital market line. Which of the following statements about this graph, Markowitz portfolio theory and the Capital Asset Pricing Model (CAPM) theory is NOT correct?

A risk manager has identified that their hedge fund's continuously compounded portfolio returns are normally distributed with a mean of 10% pa and a standard deviation of 30% pa. The hedge fund's portfolio is currently valued at $100 million. Assume that there is no estimation error in these figures and that the normal cumulative density function at 1.644853627 is 95%. Which of the following statements is NOT correct? All answers are rounded to the nearest dollar.

Which of the following statements about probability distributions is NOT correct?

A risk manager has identified that their pension fund's continuously compounded portfolio returns are normally distributed with a mean of 5% pa and a standard deviation of 20% pa. The fund's portfolio is currently valued at $1 million.
Assume that there is no estimation error in the above figures. To simplify your calculations, all answers below use 2.33 as an approximation for the normal inverse cumulative density function at 99%. All answers are rounded to the nearest dollar. Which of the following statements is NOT correct?

A risk manager has identified that their investment fund's continuously compounded portfolio returns are normally distributed with a mean of 10% pa and a standard deviation of 40% pa. The fund's portfolio is currently valued at $1 million. Assume that there is no estimation error in the above figures. To simplify your calculations, all answers below use 2.33 as an approximation for the normal inverse cumulative density function at 99%. All answers are rounded to the nearest dollar. Assume one month is 1/12 of a year. Which of the following statements is NOT correct?

A graph of assets' expected returns $(\mu)$ versus standard deviations $(\sigma)$ is given in the below diagram. Each letter corresponds to a separate coloured area. The portfolios at the boundary of the areas, on the black lines, are excluded from each area. Assume that all assets represented in this graph are fairly priced, and that all risky assets can be short-sold. Which of the following statements about this graph and Markowitz portfolio theory is NOT correct?

Which statement(s) are correct?
(i) All stocks that plot on the Security Market Line (SML) are fairly priced.
(ii) All stocks that plot above the Security Market Line (SML) are overpriced.
(iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk.
Select the most correct response:

Government bonds currently have a return of 5% pa. A stock has an expected return of 6% pa and the market return is 7% pa. What is the beta of the stock?

Government bonds currently have a return of 5%. A stock has a beta of 2 and the market return is 7%. What is the expected return of the stock?
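The two CAPM questions above are direct applications of the security market line. A hedged sketch of both directions of the calculation, using the figures stated in those questions:

```python
# CAPM / security market line: r_i = r_f + beta_i * (r_m - r_f).
# Figures below are taken from the two questions above.
def capm_return(rf, beta, rm):
    """Expected return of a stock given its beta."""
    return rf + beta * (rm - rf)

def capm_beta(rf, ri, rm):
    """Beta implied by a stock's expected return (SML rearranged)."""
    return (ri - rf) / (rm - rf)

# Stock with expected return 6% pa, market 7% pa, government bonds 5% pa.
print(capm_beta(0.05, 0.06, 0.07))

# Stock with beta 2, market 7%, government bonds 5%.
print(capm_return(0.05, 2.0, 0.07))
```

The second function is just the first rearranged, which is why a stock earning halfway between the risk-free rate and the market return has a beta of one half.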
The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot above the SML would have:

The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot on the SML would have:

Examine the following graph which shows stocks' betas $(\beta)$ and expected returns $(\mu)$: Assume that the CAPM holds and that future expectations of stocks' returns and betas are correctly measured. Which statement is NOT correct?

An effective semi-annual return of 5% $(r_\text{eff 6mth})$ is equivalent to an effective annual return $(r_\text{eff annual})$ of:

If a variable, say X, is normally distributed with mean $\mu$ and variance $\sigma^2$ then mathematicians write $X \sim \mathcal{N}(\mu, \sigma^2)$. If a variable, say Y, is log-normally distributed and the underlying normal distribution has mean $\mu$ and variance $\sigma^2$ then mathematicians write $Y \sim \mathbf{ln} \mathcal{N}(\mu, \sigma^2)$. The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Select the most correct statement:

The below three graphs show probability density functions (PDF) of three different random variables Red, Green and Blue. Let $P_1$ be the unknown price of a stock in one year. $P_1$ is a random variable. Let $P_0 = 1$, so the share price now is $1. This one dollar is a constant, it is not a variable. Which of the below statements is NOT correct? Financial practitioners commonly assume that the shape of the PDF represented in the colour:

If a stock's future expected continuously compounded annual returns are normally distributed, what will be bigger, the stock's or continuously compounded annual return? Or would you expect them to be ?

If a stock's expected future prices are log-normally distributed, what will be bigger, the stock's or future price? Or would you expect them to be ?

The below diagram shows a firm's cash cycle.
Which of the following statements about companies' cash cycle is NOT correct?

What is the Cash Conversion Cycle for a firm with a:
• Payables period of 1 day;
• Inventory period of 50 days; and
• Receivables period of 30 days?
All answer options are in days:

The following quotes are most closely related to which financial concept?
• "Opportunity is missed by most people because it is dressed in overalls and looks like work" -Thomas Edison
• "The only place where success comes before work is in the dictionary" -Vidal Sassoon
• "The safest way to double your money is to fold it over and put it in your pocket" -Kin Hubbard

Use the below information to value a levered company with annual perpetual cash flows from assets that grow. The next cash flow will be generated in one year from now, so a perpetuity can be used to value this firm. Note that 'k' means kilo or 1,000. So the $30k is $30,000.

Data on a Levered Firm with Perpetual Cash Flows:
• $\text{CFFA}_\text{U}$ = $30k (cash flow from assets excluding interest tax shields, unlevered)
• $g$ = 1.5% pa (growth rate of cash flow from assets, levered and unlevered)
• $r_\text{D}$ = 4% pa (cost of debt)
• $r_\text{EL}$ = 16.3% pa (cost of levered equity)
• $D/V_L$ = 80% (debt to assets ratio, where the asset value includes tax shields)
• $t_c$ = 30% (corporate tax rate)

Which of the following statements is NOT correct?

A firm conducts a two-for-one stock split. Which of the following consequences would NOT be expected?

A firm is about to conduct a 2-for-7 rights issue with a subscription price of $10 per share. They haven't announced the capital raising to the market yet and the share price is currently $13 per share. Assume that every shareholder will exercise their rights, the cash raised will simply be put in the bank, and the rights issue is completed so quickly that the time value of money can be ignored. Disregard signalling, taxes and agency-related effects.
Which of the following statements about the rights issue is NOT correct? After the rights issue is completed:

A stock, a call, a put and a bond are available to trade. The call and put options' underlying asset is the stock, and they have the same strike prices, $K_T$. You are currently long the stock. You want to hedge your long stock position without actually trading the stock. How would you do this?

You believe that the price of a share will fall significantly very soon, but the rest of the market does not. The market thinks that the share price will remain the same. Assuming that your prediction will soon be true, which of the following trades is a bad idea? In other words, which trade will NOT make money or prevent losses?

A man just sold a call option to his counterparty, a lady. The man has just now:

A trader buys one crude oil European style call option contract on the CME expiring in one year with an exercise price of $44 per barrel for a price of $6.64. The crude oil spot price is $40.33. If the trader doesn't close out her contract before maturity, then at maturity she will have the:

A trader just bought a European style put option on CBA stock. The current option premium is $2, the exercise price is $75, the option matures in one year and the spot CBA stock price is $74. Which of the following statements is NOT correct?

Gross discrete returns in different states of the world are presented in the table below. A gross discrete return is defined as $P_1/P_0$, where $P_0$ is the price now and $P_1$ is the expected price in the future. An investor can purchase only a single asset, A, B, C or D. Assume that a portfolio of assets is not possible.

Gross Discrete Returns in Different States of the World:
• Asset A: 2 in the good state (50% probability), 0.5 in the bad state (50% probability)
• Asset B: 1.1 in the good state, 0.9 in the bad state
• Asset C: 1.1 in the good state, 0.95 in the bad state
• Asset D: 1.01 in the good state, 1.01 in the bad state

Which of the following statements about the different assets is NOT correct?
Asset:

Suppose the yield curve in the USA and Germany is flat and the:

• USD federal funds rate at the Federal Reserve is 1% pa;
• EUR deposit facility at the European Central Bank is -0.4% pa (note the negative sign);
• Spot EUR exchange rate is 1 USD per EUR;
• One year forward EUR exchange rate is 1.011 USD per EUR.

You suspect that there’s an arbitrage opportunity. Which one of the following statements about the potential arbitrage opportunity is NOT correct?

Which of the following statements about Macaulay duration is NOT correct? The Macaulay duration:

A fixed coupon bond’s modified duration is 20 years, and yields are currently 10% pa compounded annually. Which of the following statements about the bond is NOT correct?

Which of the following statements is NOT correct? Fairly-priced assets should:

The Capital Asset Pricing Model (CAPM) and the Single Index Model (SIM) are single factor models whose only risk factor is the market portfolio’s return. Say a Solar electricity generator company and a Beach bathing chair renting company are influenced by two factors, the market portfolio return and cloud cover in the sky. When it's sunny and not cloudy, both the Solar and Beach companies’ stock prices do well. When there’s dense cloud cover and no sun, both do poorly. Assume that cloud coverage risk is a systematic risk that cannot be diversified and that cloud cover has zero correlation with the market portfolio’s returns. Which of the following statements about these two stocks is NOT correct? The CAPM and SIM:

Who was the first theorist to endorse the maximisation of the geometric average gross discrete return for investors (not gamblers) since it gave a "...portfolio that has a greater probability of being as valuable or more valuable than any other significantly different portfolio at the end of n years, n being large"? (a) Daniel Bernoulli.
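Two of the numerical questions above reduce to one-line calculations. A quick sketch of the arithmetic (standard textbook formulas, not taken from the quiz's answer key):

```python
# Cash conversion cycle question:
#   CCC = inventory period + receivables period - payables period
ccc = 50 + 30 - 1
print(ccc)  # 79 days

# USD/EUR forward question, via covered interest parity with flat yield curves:
#   fair one-year forward (USD per EUR) = spot * (1 + r_USD) / (1 + r_EUR)
fair_forward = 1.0 * (1 + 0.01) / (1 - 0.004)
print(round(fair_forward, 4))  # 1.0141, versus the quoted forward of 1.011 USD per EUR
```

The gap between the parity value and the quoted 1.011 forward is what signals the potential arbitrage in the question.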
# Find and replace not working

Hi

Find and replace is not working in my docx. Can you please check?

Thanks
Gaurav

test.docx (22.7 KB)

I tried replacing the text SunBurn.

@gkumar16 I have checked with the following simple code and the text is replaced properly:

Document doc = new Document("C:\\Temp\\in.docx");
doc.getRange().replace("SunBurn", "REPLACED");
doc.save("C:\\Temp\\out.docx");

out.docx (13.9 KB)

It’s not working when I am using the following find and replace option.

FindReplaceOptions findReplaceOptions = new FindReplaceOptions();
findReplaceOptions.setFindWholeWordsOnly(true);
document.getRange().replace("SunBurn.", "REPLACED", findReplaceOptions);

@gkumar16 This is expected behavior. If you search “SunBurn.” with Find whole words only in MS Word, it also does not match anything.

@alexey.noskov Thank you for clarifying.
# The Standard Model More Deeply: Lessons on the Strong Nuclear Force from Quark Electric Charges For readers who want to go a bit deeper into details (though I suggest you read last week’s posts for general readers first [post 1, post 2]): Last week, using just addition and subtraction of fractions, we saw that the ratio of production rates • R = Rate (e+ e ⟶ quark anti-quark) / Rate (e+ e ⟶ muon anti-muon) (where e stands for “electron” and e+ for “positron”) can be used to verify the electric charges of the quarks of nature. [In this post I’ll usually drop the word “electric” from “electric charge”.] Specifically, the ratio R, at different energies, is both sensitive to and consistent with the Standard Model of particle physics, not only confirming the quarks’ charges but also the fact that they come in three “colors”. (About colors, you can read recent posts here, here and here.) To keep the previous posts short, I didn’t give evidence that the data agrees only with the Standard Model; I’ll start today by doing that. But I did point out that the data doesn’t quite match the simple prediction. You can see that in the figure below, repeated from last time; it shows the data (black dots) lies close to the predictions (the solid lines) but generally lies a few percent above them. Why is this? The answer: we neglected a small but noticeable effect from the strong nuclear force. Not only does accounting for this effect fix the problem, it allows us to get a rough measure of the strength of the strong nuclear force. From these considerations we can learn several immensely important facts about nature, as we’ll see today and in the next post. 
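Before turning to those checks, the counting behind the prediction is worth making explicit. A sketch of the arithmetic (Python, with my own variable names; this is just N times the sum of squared charges over the quark types light enough to be produced in each energy range, using the Standard Model values):

```python
from fractions import Fraction

# Standard Model quark electric charges, in units of the proton charge
charge = {'u': Fraction(2, 3), 'd': Fraction(-1, 3), 's': Fraction(-1, 3),
          'c': Fraction(2, 3), 'b': Fraction(-1, 3)}
N_COLORS = 3

def R_prediction(accessible_quarks):
    """Naive R = N * sum of squared charges over producible quark types."""
    return N_COLORS * sum(charge[q] ** 2 for q in accessible_quarks)

print(R_prediction('uds'))    # 2     (2-3 GeV region: u, d, s accessible)
print(R_prediction('udsc'))   # 10/3  (5-10 GeV region: c also accessible)
print(R_prediction('udscb'))  # 11/3  (11-20 GeV region: b also accessible)
```

These are the solid-line predictions that the data in Figure 1 sits a few percent above.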
### Checking that the Data Really Verifies the Standard Model

Figure 1 shows that the data roughly agrees with the Standard Model prediction, in which quarks come in N=3 colors and have charges

• Up, Charm, Top (u,c,t): Qu = Qc = Qt = 2/3
• Down, Strange, Bottom (d,s,b): Qd = Qs = Qb = -1/3

where “Qu” means “charge of the u quark.” But maybe it agrees with lots of other possibilities too? Are there other choices of charges and/or N that would work just as well?

If we assume N=3, but allow Qu to vary (always keeping Qu = Qc = Qt and Qd = Qs = Qb = Qu - 1, as is required by the charges of protons, neutrons and other “baryons”), predictions for R are shown in Figure 2. Blue, green and red curves correspond to the three regions in Figure 1: low energy (2 – 3 GeV), medium energy (5 – 10 GeV) and high energy (11-20 GeV). The location where Qu = Qc = Qt = 2/3 is marked with a vertical black line, and the predictions of the Standard Model for the values of R in the three regions where we predicted it are shown with blue, green and red stars. Meanwhile, from Figure 1 the values of R from the three regions can be estimated from the data; these are plotted as thick horizontal dashed lines.

That the stars (the Standard Model prediction) lie nearly on top of the dashed lines (the data) means the Standard Model is consistent with nature. But you can also see that there is no other value of Qu where the predictions (the three curved lines) match the data (the thick dashed lines). So if N=3, then Qu at (or very close to) 2/3 is really the only acceptable option.

Things don’t work for other values of N, either. For N=4 and any Qu, the prediction for R is always too big. For N=2, the prediction would almost work for Qu = 1, near where the blue curve equals 2, but the red and green predictions are identical there and equal to 4, which clearly disagrees with the data.

### Understanding the Remaining Discrepancy

Despite the near-agreement in Figure 1, the remaining discrepancy is troubling.
Data is always above the prediction in each of the three regions. Did we leave something out? In the predictions we made for R, we assumed that only electromagnetism is important and that all other forces can be neglected. But this is not quite a fair assumption. What we’ve left out, conceptually, is the possibility that when an electron and positron annihilate, they become a quark, a corresponding anti-quark and a gluon, as in Figure 4. Specifically, this is a gluon which carries a substantial fraction of the available energy and whose direction makes a wide angle with the quark and anti-quark directions. (Gluons that move along the quark or anti-quark directions, or have low energy, must be treated differently; they are the ones responsible for jets and quark confinement, and are implicitly already accounted for — a very long story.) The effect of emitting an energetic, wide-angle gluon from a quark or anti-quark can be calculated, and slightly increases the rate for production of “hadrons” (particles containing quarks, anti-quarks and gluons) from electron-positron collisions. Specifically, it increases R by a factor • (1 + αs / π) where αs is the characteristic strength (or “coupling”) of the strong nuclear force, and π is the usual quantity from math class. If we simply accept this result without question, we can see from Figure 1 that data and prediction would agree much better if αs were about 0.2 to 0.3, so that (1 + αs / π) would be larger than 1 by about 5% to 10%. We can even view the discrepancy as our first measurement of αs, admittedly an imprecise one. ### The Strength of a Force What does this “strength” mean? There’s an analogous quantity in electromagnetism, often denoted “α” or “αem” , and it is measured to be about 1/137.04 = 0.00730, at least in familiar contexts. 
This small but mighty number arises when we write the law for electric forces — Coulomb’s law — which tells us the force F between two objects of charge Q₁ and Q₂ that are a distance r apart. In first-year physics textbooks you’ll see this written as

• F = k Q₁ Q₂ / r²

where k is Coulomb’s constant and the charges are in units called “Coulombs”; for instance, the electron’s charge is -e, where e is about 1.6 × 10⁻¹⁹ Coulombs. But professional physicists write this differently, using the fundamental constants ħ (Planck’s constant) and c (the cosmic speed limit):

• F = α ħc Q₁ Q₂ / r²

Here the charges are pure numbers: the electron’s charge is -1, for instance. The “e” from first-year physics, with units of Coulombs, has been absorbed into α, which is now itself a pure number, namely 1/137.04. Since the force F is proportional to α, we can say that α sets the strength of all electric forces.

Although α is often called the electromagnetic coupling constant (or, historically, the “fine structure constant”, referring to its effect on atomic energy levels), it is not in fact constant. At distances shorter than a trillionth of a meter, Coulomb’s law is slightly wrong, and we can understand the cause as distance-dependence in α itself. This change in the electromagnetic coupling arises from quantum effects which make empty space polarizable. We’ll get back to this next time.

For the strong nuclear force, there’s an analogous law to Coulomb’s law that governs the force between a quark and an anti-quark at a fixed distance r. It would read

• F = αs ħc (4/3) / r²

if αs were constant. But in many contexts it is a bad approximation to treat αs as constant, and we’ll return to this next time. For now, though, we’ll just say that the data from Figure 1 suggests that, at least in the range of energies around 1-20 GeV, αs is somewhere around 0.2 to 0.3.

### Did we miss something?

But let’s finish for today by answering an obvious question.
Muons, unlike quarks, can’t radiate gluons — to do so would require the action of the strong nuclear force, to which the muons are immune — but they can certainly radiate photons via electromagnetism. If, in calculating the ratio R, we have to include electron+positron ⟶ quark + anti-quark + gluon in the numerator to make things work, shouldn’t we, for consistency, have to account for electron+positron ⟶ muon + anti-muon + photon in the denominator? And in fact, what about electron+positron ⟶ quark + anti-quark + photon in the numerator?

The point is that effects involving photons, on either the numerator or denominator of R, are too small to worry about. For instance, the effect on the denominator of R of photon emission is of the size

• (1 + αem / π) = 1.002

a shift of two tenths of a percent, far too small to see in Figure 1. The only important effect that we left out comes from the strong nuclear force, precisely because it is strong, i.e. because αs >> αem. We’ll see more examples of this next time.

### 12 thoughts on “The Standard Model More Deeply: Lessons on the Strong Nuclear Force from Quark Electric Charges”

1. >… “effects involving photons … are too small to worry about.” Independently of electroweak unification, doesn’t this magnitudes data argue powerfully against any attempt at U(3) inclusion of the photon as the ninth oddball, infinite-range gluon?

• Yes it does. But it was harder to explain so I didn’t lead with it.

• The data is where it all starts, though, so that works great. Thanks!

• Argh. Matt Strassler, I like your data argument for about two orders of magnitude difference in alpha scales within nucleons, I really do. It helps me understand the genesis of the SU(3) model. I tried to convince myself that it gets rid of the Glashow mnemonic cube. But of course, it doesn’t, else Glashow would never have mentioned it.
In the end, the alpha-ratio argument is a scaling factor that visually flattens the Glashow cube U(3)-like relationships without altering them in any fundamental way. Let me flip the question around in case I’m missing the obvious: What is the Standard Model explanation for Glashow’s cubes? Surely the fact that all fundamental fermions form lovely T3 isospin pairs across the corners of two U(3) Glashow cubes has been noticed and explained? I’ve looked but have not had much luck. • The cube is a “charge lattice”, and the whole thing fits inside SO(10), the grand unified group. This can be partially explained within the Standard Model (but not completely) by appealing to consistency conditions called “anomalies”. The full structure of Standard Model matter is not explained within the Standard Model; that’s one of several reasons that physicists are frustrated with the Standard Model and were hoping for some clues from the Large Hadron Collider. 2. Color charges (mass?) are Conjugative localized quantization like qbits? It “appears” same for Proton and Electron (mutually repulsive), like… the Sun and the Moon “appears” in same size..!? • sorry… mutually attractive force … 3. Nice review. Two questions. (1) Does one find experimental value of R (q,qbar) by summing cross sections of all hadron production upto those quark masses? (2) Theoretical values by one photon exchange? But then there will be rhos, omegas etc. exchanges as you mentioned last time. In that case there may be double counting since these particles are also made from quarks-antiquarks. • (1) Although it’s not easy to build a detector which can identify any particular hadron in terms of whether it’s a pion or a kaon or a proton, it’s not hard to build one that can distinguish hadrons from other types of particles. So it’s a relatively (!) easy and natural experiment to study electron-positron collisions and simply count how many collisions result in some hadrons being produced. 
The denominator is also easy to measure because muons and anti-muons are easy to identify experimentally. (2) Photons, rhos and omegas “mix”. All low interactions between electrons/positrons and quarks/anti-quarks would cease, including those from rhos and omegas, if the electromagnetic interaction were shut off completely. So the photon is always essential at all energy scales… the rho is only produced through its mixing with the photon… and it’s *you* who are double-counting 🙂 . • Thanks Matt. I did some phenomenological work in 60s and 70s. Those were bubble chamber days. Then switched to NMR biophysics and quantum optics. I have not kept up much with either theoretical or experimental work for how current detectors work. So please excuse my ignorant questions! I understand mostly (1). For (2) I would still like to understand if the diagram e+-e- to one photon in s-channel and then it going to quarks is the principle contribution to R (theory). Thanks again. I find your blogs very educational. 4. dimensions of (h_bar c) is (J.m), so why didn’t you write F= (alpha*h_bar*c) (Q_1 Q_2)/r^2 instead of F= alpha/(h_bar*c) (Q_1 Q_2)/r^2 ? • Thanks — an error of transposition and inattention. Fixed now.
# Basic Numerical Integration: the Trapezoid Rule

A simple illustration of the trapezoid rule for definite integration:

$$\int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right).$$

First, we define a simple function and sample it between 0 and 10 at 200 points

In [1]:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

In [2]:

def f(x):
    return (x-3)*(x-5)*(x-7)+85

x = np.linspace(0, 10, 200)
y = f(x)

Choose a region to integrate over and take only a few points in that region

In [3]:

a, b = 1, 8  # the left and right boundaries
N = 5  # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)

Plot both the function and the area below it in the trapezoid approximation

In [4]:

plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$",
         horizontalalignment='center', fontsize=20);

Compute the integral both at high accuracy and with the trapezoid approximation

In [5]:

from __future__ import print_function
from scipy.integrate import quad

# High-accuracy reference value via adaptive quadrature
integral, error = quad(f, a, b)
# Trapezoid rule applied directly to the N sample points
integral_trapezoid = sum((xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1])) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)

The integral is: 565.2499999999999 +/- 6.275535646693696e-12
The trapezoid approximation with 5 points is: 559.890625
This function provides point samples from one- and two-dimensional inhomogeneous Poisson processes. The log intensity has to be provided via its values at the nodes of an inla.mesh.1d or inla.mesh object. In between mesh nodes the log intensity is assumed to be linear.

## Usage

sample.lgcp(
  mesh,
  loglambda,
  strategy = NULL,
  R = NULL,
  samplers = NULL,
  ignore.CRS = FALSE
)

## Arguments

mesh: An INLA::inla.mesh object

loglambda: vector or matrix; A vector of log intensities at the mesh vertices (for higher order basis functions, e.g. for inla.mesh.1d meshes, loglambda should be given as mesh$m basis function weights rather than the values at the mesh$n vertices). A single scalar is expanded to a vector of the appropriate length. If a matrix is supplied, one process sample for each column is produced.

strategy: Only relevant for 2D meshes. One of 'triangulated', 'rectangle', 'sliced-spherical', 'spherical'. The 'rectangle' method is only valid for CRS-less flat 2D meshes. If NULL or 'auto', the likely fastest method is chosen; 'rectangle' for flat 2D meshes with no CRS, 'sliced-spherical' for CRS 'longlat' meshes, and 'triangulated' for all other meshes.

R: Numerical value only applicable to spherical and geographical meshes. It is interpreted as the equivalent Earth radius, in km, used to scale the lambda intensity. For CRS enabled meshes, the default is 6371. For CRS-less spherical meshes, the default is 1.

samplers: A SpatialPolygonsDataFrame or inla.mesh object. Simulated points that fall outside these polygons are discarded.

ignore.CRS: logical; if TRUE, ignore any CRS information in the mesh. Default FALSE. This affects R and the permitted values for strategy.

## Value

A data.frame (1D case), SpatialPoints (2D flat and 3D spherical surface cases) or SpatialPointsDataFrame (2D/3D surface cases with multiple samples) object of point locations. For multiple samples, the data.frame output has a column 'sample' giving the index for each sample.
## Details

For 2D processes on a sphere the R parameter can be used to adjust the sphere's radius implied by the mesh. If the intensity is very high, the standard strategy "spherical" can cause memory issues. Using the "sliced-spherical" strategy can help in this case.

• For CRS-less meshes on R2: Lambda is interpreted in the raw coordinate system. Output has an NA CRS.
• For CRS-less meshes on S2: Lambda has raw units, after scaling the mesh to radius R, if specified. Output is given on the same domain as the mesh, with an NA CRS.
• For CRS meshes on R2: Lambda is interpreted as per km^2, after scaling the globe to the Earth radius 6371 km, or R, if specified. Output given in the same CRS as the mesh.
• For CRS meshes on S2: Lambda is interpreted as per km^2, after scaling the globe to the Earth radius 6371 km, or R, if specified. Output given in the same CRS as the mesh.

## Author

Daniel Simpson dp.simpson@gmail.com (base rectangle and spherical algorithms), Fabian E. Bachl bachlfab@gmail.com (inclusion in inlabru, sliced spherical sampling), Finn Lindgren finn.lindgren@gmail.com (extended CRS support, triangulated sampling)

## Examples

# \donttest{
# The INLA package is required
if (bru_safe_inla(quietly = TRUE)) {
  vertices <- seq(0, 3, by = 0.1)
  mesh <- INLA::inla.mesh.1d(vertices)
  loglambda <- 5 - 0.5 * vertices
  pts <- sample.lgcp(mesh, loglambda)
  pts$y <- 0
  plot(vertices, exp(loglambda), type = "l", ylim = c(0, 150))
  points(pts, pch = "|")
}
# }

# \donttest{
# The INLA package is required
if (bru_safe_inla(quietly = TRUE) && require(ggplot2, quietly = TRUE)) {
  data("gorillas", package = "inlabru")
  pts <- sample.lgcp(gorillas$mesh,
    loglambda = 1.5,
    samplers = gorillas$boundary
  )
  ggplot() +
    gg(gorillas$mesh) +
    gg(pts)
}
#> Warning: PROJ support is provided by the sf and terra packages among others
# }
Definition 4.32.2. Let $\mathcal{C}$ be a category. Let $p : \mathcal{S} \to \mathcal{C}$ be a category over $\mathcal{C}$.

1. The fibre category over an object $U\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ is the category $\mathcal{S}_ U$ with objects

$\mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ U) = \{ x\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}) : p(x) = U\}$

and morphisms

$\mathop{\mathrm{Mor}}\nolimits _{\mathcal{S}_ U}(x, y) = \{ \phi \in \mathop{\mathrm{Mor}}\nolimits _\mathcal {S}(x, y) : p(\phi ) = \text{id}_ U\} .$

2. A lift of an object $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ is an object $x\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S})$ such that $p(x) = U$, i.e., $x\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ U)$. We will also sometimes say that $x$ lies over $U$.

3. Similarly, a lift of a morphism $f : V \to U$ in $\mathcal{C}$ is a morphism $\phi : y \to x$ in $\mathcal{S}$ such that $p(\phi ) = f$. We sometimes say that $\phi$ lies over $f$.
# Simple IF statement, am I doing it right? [closed]

Although I have been coding in PHP for a while now I am trying to go back to basics with some things to make sure my understanding and methods are the most efficient. I have the following snippet...

if (!empty($_GET['location'])) {
    $location = $_GET['location'];
} else {
    $location = '';
}

Hopefully the purpose of the snippet should be clear but what I want to know is am I doing it right, is this the most effective way of expressing this simple function?

• The code's purpose, however brief it may be, should still be in the title only. – Jamal Apr 1 '15 at 12:59

You can use the ternary operator for this kind of operation, like this:

$location = empty($_GET['location']) ? '' : $_GET['location'];

It is far more readable.

Personally I prefer this short code:

<?php
$location = @$_GET['location'];

I know that purists don't like it, but I totally disagree. Most notably regarding the "@", which they often say to be avoided since it hides potential errors: yes, generally speaking, but here the "error" is only what we expect when we use empty() and cannot be anything else, so it is totally safe.

NOTE: this is suitable to replace your initial formulation because you wrote if(!empty()), but you should not infer it would be good as well if you write if(!isset()):

• In the first case you simply want to get an empty result whenever no value has been given (and with the short code we get NULL, which in PHP is pretty equivalent to "").
• At the opposite in the latter case, you'll probably want to distinguish between an empty value and no value at all (when the location query param doesn't appear in the URL): then you should keep the if() {...} else {...} formulation.
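For what it's worth, on PHP 7 and later the null coalescing operator expresses the same intent without @ error suppression (a sketch, not from the original answers):

```php
<?php
// ?? yields the left operand if it is set and not null, otherwise the right one.
// Unlike @$_GET['location'], it never raises an "undefined index" notice, and
// unlike the empty() version, it preserves a deliberately submitted '0' or ''.
$location = $_GET['location'] ?? '';
```

Note the subtle difference from the accepted answer: empty() treats '0' as empty, while ?? only falls back when the parameter is missing or null.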
499. The Maze III

Problem description:

There is a ball in a maze with empty spaces and walls. The ball can go through empty spaces by rolling up (u), down (d), left (l) or right (r), but it won't stop rolling until hitting a wall. When the ball stops, it could choose the next direction. There is also a hole in this maze. The ball will drop into the hole if it rolls on to the hole.

Given the ball position, the hole position and the maze, find out how the ball could drop into the hole by moving the shortest distance. The distance is defined by the number of empty spaces traveled by the ball from the start position (excluded) to the hole (included). Output the moving directions by using 'u', 'd', 'l' and 'r'. Since there could be several different shortest ways, you should output the lexicographically smallest way. If the ball cannot reach the hole, output "impossible".

The maze is represented by a binary 2D array. 1 means the wall and 0 means the empty space. You may assume that the borders of the maze are all walls. The ball and the hole coordinates are represented by row and column indexes.

Example 1:

Input 1: a maze represented by a 2D array

0 0 0 0 0
1 1 0 0 1
0 0 0 0 0
0 1 0 0 1
0 1 0 0 0

Input 2: ball coordinate (rowBall, colBall) = (4, 3)
Input 3: hole coordinate (rowHole, colHole) = (0, 1)

Output: "lul"

Explanation: There are two shortest ways for the ball to drop into the hole. The first way is left -> up -> left, represented by "lul". The second way is up -> left, represented by 'ul'. Both ways have shortest distance 6, but the first way is lexicographically smaller because 'l' < 'u'. So the output is "lul".

Example 2:

Input 1: a maze represented by a 2D array

0 0 0 0 0
1 1 0 0 1
0 0 0 0 0
0 1 0 0 1
0 1 0 0 0

Input 2: ball coordinate (rowBall, colBall) = (4, 3)
Input 3: hole coordinate (rowHole, colHole) = (3, 0)

Output: "impossible"
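The shortest-distance-then-lexicographic requirement fits Dijkstra's algorithm if the priority queue is ordered by the pair (distance, path): tuple comparison then resolves distance ties by the smaller path string. A sketch in Python (function and variable names are my own, not part of the problem statement):

```python
import heapq

def find_shortest_way(maze, ball, hole):
    m, n = len(maze), len(maze[0])
    hole = tuple(hole)
    # Directions listed in alphabetical order of their letters.
    dirs = [('d', 1, 0), ('l', 0, -1), ('r', 0, 1), ('u', -1, 0)]
    # Heap entries are (distance, path, row, col): shortest distance pops
    # first, and among equal distances the lexicographically smallest path.
    heap = [(0, "", ball[0], ball[1])]
    seen = set()
    while heap:
        dist, path, r, c = heapq.heappop(heap)
        if (r, c) == hole:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for ch, dr, dc in dirs:
            nr, nc, d = r, c, 0
            # Roll until hitting a wall, the border, or falling into the hole.
            while (nr, nc) != hole and 0 <= nr + dr < m and 0 <= nc + dc < n \
                    and maze[nr + dr][nc + dc] == 0:
                nr, nc, d = nr + dr, nc + dc, d + 1
            if d > 0:
                heapq.heappush(heap, (dist + d, path + ch, nr, nc))
    return "impossible"

maze = [[0, 0, 0, 0, 0],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [0, 1, 0, 0, 1],
        [0, 1, 0, 0, 0]]
print(find_shortest_way(maze, (4, 3), (0, 1)))  # lul
print(find_shortest_way(maze, (4, 3), (3, 0)))  # impossible
```

Skipping an already-popped cell is safe here because each move adds at least one unit of distance, so competing equal-distance paths to a cell are never prefixes of one another and their lexicographic order survives any shared suffix.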
# Forums > Tips & Tricks

How do you capture a brain dump of tasks?

non-profitToolbox.com Posted: Apr 02, 2010 Score: 0

Hello,

When I start my workday, I have 5-20 things swimming in my head that I want to do a fast capture with. I now use notepad because it keeps up with me as fast as I can type in 'dump' mode, like this:

-
-
etc.

This isn't organized, but it IS fast. Now it's time for druthers and hacks. If I had my druthers: 'Add a task' would be dump-friendly or have a dump mode - a compact grid format (just like the task list when I view it) and I could tab through quickly, drop things in and then adjust & submit.

Since I don't have my druthers... Any hacks for capturing a pile of loose tasks into my daily-changing GTDish system?

stephen.ferro Posted: Apr 02, 2010 Score: 1

For this purpose I use the "Add multiple tasks..." option under add tasks. I set the fields to go to my inbox for processing later and let the tasks flow.

larryg215 Posted: Aug 09, 2010 Score: 0

I have a 'csv template' setup with headings for folder, context, notes, priority, etc... I input all my items, save, and then import all at once, they drop right into place without having to add them one at a time using the website.

cherylmoss25 Posted: Aug 10, 2010 Score: 0

Posted by larryg215: I have a 'csv template' setup with headings for folder, context, notes, priority, etc... I input all my items, save, and then import all at once, they drop right into place without having to add them one at a time using the website.

Hi larryg215 - I'm brand new and can't find any place that shows what fields are importable. Would you share what you use in your csv template? Thanks!
PeterW Posted: Aug 10, 2010 Score: 0 http://www.toodledo.com/connect_csv.php springchick Posted: Aug 12, 2010 Score: 0 I'm not real familiar with the bones behind GTD, but are you using TD as your sole place to write things down as they occur to you? I love TD, but find it is more useful as a planned task list, not as grounds for brain-dumps or even gathering of {often unorganized} info at meetings. I still use my trusty little moleskin-type journal with hand-numbered pages for capturing info and making quick notes throughout the day. Not only is it more convenient to jot down notes in a meeting this way, but I find I retain info better if I hand write it rather than type it. Then once or twice a day, I pull from my journal to create projects and tasks in TD, noting the page number in my notebook where I can find the backup info. PeterW Posted: Aug 12, 2010 Score: 0 I use several methods to collect tasks / brain dump. I have a small notepad & pen in a few key locations, e.g. at home where I keep my keys & wallet, in my car, in my wife's car, on my desk at work. I almost always have my iPhone with me and use Appigo Todo. It has a 'quick add' function that works well - you just type the task name and it goes straight to the Inbox. If I don't process it later on the iPhone, I will do so in Toodledo after syncing. At my desk or on my laptop at home I use Toodledo. For single thoughts that come up I just click add, type the description and let it go to the inbox (no folder) when I'm busy and process it later. If I have a bunch of pages from my notepad or have a number of things swimming around in my head, I will use Toodledo's bulk add feature which works great for me. I also have a spiral-bound A4-size note book that I use for work, e.g. in group meetings or one-on-one conversations with the CEO. I use this so I can take detailed notes. These often lead to the generation of new tasks so I will usually photocopy the page(s) and put this in my tray for later entry. 
b_sull65 Posted: Aug 14, 2010 Score: 0

I use the "add multiple tasks" function as well. My trick is that I created a folder called "Inbox" where I dump everything. That way I can quickly get everything off my mind without having to worry about categories, etc. Then I can find everything in one place and process it later.

SRhyse Posted: Aug 14, 2010 Score: 0

I do my more focused brain dumps in digital mindmaps on my iPhone and computer, and use Evernote for more sporadic ones throughout the day. Digital mindmaps are great for things you're still editing, organizing or categorizing, and I would be loath to go back to any normal text editor for planning or brainstorming after using them. Once I've got something locked down there, I copy and paste it into Toodledo's multi add if it's task oriented. I agree that Toodledo, or any task manager / scheduler app for that matter, tends to make for a horrible 'Inbox.'

phaedrus54 Posted: Aug 27, 2010 Score: 0

I'm just switching to Toodledo but I brought a trick from another GTD system: Set the email address for submissions. Install Launchy (PC) or Quicksilver (MAC). Configure a shortcut in there for emailing that email address. Thus entries involve alt-space, type "tood tab Call Bill" and Enter. I'll start building in the syntax once I've used this awhile.

I also set a Contact shortcut on my Android which links to this submission email. This is a quick way to note that movie/book/idea that comes up on the go.

If you need help with Launchy config, check Lifehacker and look for an article that uses a VBS script that you mod w/ your specifics.

This message was edited Aug 27, 2010.
chris Posted: Sep 15, 2010 Score: 0

I use SlickRun instead of Launchy, so I can add a task by clicking WIN-Q and typing something like "td new task #tomorrow *work" with this config:

in Slickrun I have a keyword setup as:

filename: C:\path\to\slickrun-email.vbs
parameters: "$I$"

contents of slickrun-email.vbs (replace all caps email/password)

Set iMsg = CreateObject("CDO.Message")
Set iConf = CreateObject("CDO.Configuration")
Set Flds = iConf.Fields
schema = "http://schemas.microsoft.com/cdo/configuration/"
Flds.Item(schema & "sendusing") = 2
Flds.Item(schema & "smtpserver") = "smtp.gmail.com"
Flds.Item(schema & "smtpserverport") = 465
Flds.Item(schema & "smtpauthenticate") = 1
Flds.Item(schema & "smtpusessl") = 1
Flds.Update

message = ""  ' body left empty; the task text travels in the subject line

With iMsg
    .To = "MYTOODLEDOEMAIL"
    .From = "Chris Lott <MYEMAILADDRESS>"
    .Subject = wscript.arguments.item(0)
    .HTMLBody = message
    .Sender = " "
    .Organization = " "
    .ReplyTo = " "
    Set .Configuration = iConf
    .Send
End With

set iMsg = nothing
set iConf = nothing
set Flds = nothing

This message was edited Sep 15, 2010.

alexandremrj_1376332798 Posted: Sep 18, 2010 Score: 0

I'm relatively new to Toodledo but I'm using it as the "Is it actionable" part of GTD.

My inbox is a Post-It note on my cellphone, yellow for work, blue for Other (Home or supermarket). This way I only need to go to one place to process it all, and do all the 2 minute ideas before they enter the system. This way my processing is a bit further than simply sorting everything in Toodledo.
# Unwanted line breaks after equation tag when using fleqn and leqno

When you use fleqn and leqno and assign your own tags with \tag to equations in environments such as gather and align, then, once a tag reaches a certain length, there is a line break after the tag. Under the same conditions, the equation environment does not cause line breaks. How can I avoid the line breaks in the former case?

\documentclass[fleqn,leqno]{article}
\usepackage{amsmath}
\begin{document}
\begin{gather}
\tag{one} 1+1=2 \\
\tag{two} 2+2=4
\end{gather}
$$\tag{three} 1+1=2$$
\end{document}

-

AFAIK, you shouldn't use both fleqn and leqno. –  egreg Nov 21 '11 at 16:47

@egreg Could you please provide a reference for why I should avoid it? Also, is there some recommended way of getting a similar result? –  N.N. Nov 21 '11 at 16:51

I see very little point in putting the number on the left of left aligned equations. Of course, if you really want to do it, you have to set \mathindent as wide as necessary in order to accommodate all your tags and leave sufficient space to avoid ambiguities. –  egreg Nov 21 '11 at 16:57

You can increase \mathindent, but I don't know why you want to use fleqn and leqno simultaneously:

\documentclass[fleqn,leqno]{article}
\usepackage{amsmath}
\setlength\mathindent{2cm}
\begin{document}
\begin{gather}
\tag{one} 1+1=2 \\
\tag{two} 2+2=4
\end{gather}
$$\tag{three} 1+1=2$$
\end{document}

-

I use fleqn and leqno because I want to flush equations to the left and also have equation numbers to the left. Is this bad practice? –  N.N. Nov 21 '11 at 16:53

@N.N. But then, the equation number could cause some confusion, as it could be taken, if it falls too close to an equation, as part of the equation. –  Gonzalo Medina Nov 21 '11 at 16:55

Why is \setlength\mathindent needed for gather but not for equation? –  N.N. Nov 22 '11 at 7:08

@N.N. perhaps you can consider opening a new question. –  Gonzalo Medina Nov 22 '11 at 11:41
# The Number Guessing Game

Let’s play a game. I think of 5 numbers from 1 to 100. A friend, who has no idea what my 5 numbers are, then tells you that you can pick a number from 31 to 60. You win the game if the number you picked is one of the 5 numbers I thought of. Assume that I had no idea that you were going to be restricted to guessing only a number from 31 to 60 (otherwise it wouldn’t be fair!). What are the odds of you winning the game?

## Why is this interesting?

Well, apart from the fact that a mathematician never shies away from a problem, this problem is interesting because there is a seemingly complicated twist to the original problem. It turns out that it actually isn’t that complicated at all. But the problem is most interesting because of the answer to the question. So, you will have to stick around until the end to know why this is interesting and worth thinking about.

Now that we have established the conundrum of the Hermeneutic Circle, let’s dive into the solution. If you are very impatient, jump to the results to know the answer.

## The Original Problem

The question I posed is a spin-off of a very simple problem in probability. Let’s solve that before introducing the intricacies and restrictions. The problem goes like this: I think of 5 numbers from 1 to 100. What are the odds that you guess exactly one of those numbers in a single attempt?

The solution is straightforward. There are 5 right answers, and you have a pool of 100 numbers to guess from.

$Probability = \frac{5}{100} = 0.05$

It will do well to remember the number 0.05. Now, let’s look at the problem at hand.

## The Solution

For the sake of simplicity, let’s call the person who thinks of the numbers Player 1 (or P1) and the person who guesses Player 2 (or P2). And for completeness, let’s call the person who imposes restrictions, making life harder for P2, the Referee (or R).

Back to the original problem, let us study the situation before we answer the actual question.
First off, notice that there are 6 different possibilities for how the 5 numbers relate to the range:

1. None of the 5 numbers lie in the range
2. Exactly 1 of the 5 numbers lies in the range
3. Exactly 2 of the 5 numbers lie in the range
4. Exactly 3 of the 5 numbers lie in the range
5. Exactly 4 of the 5 numbers lie in the range
6. All of the 5 numbers lie in the range

For succinctness, let us call the event that exactly $$i$$ numbers lie in the range $$R_i$$. So, the above mentioned possibilities are events $$R_0$$, $$R_1$$, $$R_2$$, $$R_3$$, $$R_4$$, and $$R_5$$. Notice that these events are mutually exclusive and exhaustive.

Let us call the event that P2 wins the game, i.e., guesses a correct number, $$C$$. It is actually easier for us to calculate the probability of $$C$$ occurring conditioned on the events $$R_i$$. It is also straightforward to calculate the probability of $$R_i$$. So, with the help of the law of total probability, we can answer the question posed as follows:

$P(C) = \sum\limits_{i = 0}^5{P(C|R_i) * P(R_i)}$

$P(C|R_i) = \frac{i}{30}$

$P(R_i) = \frac{(^{30}C_i) * (^{70}C_{5-i})}{^{100}C_5}$

$P(C) = \sum\limits_{i = 0}^5{\frac{i}{30}*\frac{(^{30}C_i) * (^{70}C_{5-i})}{^{100}C_5}} = 0.05$

Surprisingly, we get the same answer as in the original problem, i.e., 0.05. Could this just be a coincidence?

## Generalization

Let us generalize our formula for arbitrary values. Let $$S$$ be the set of all elements from which P1 can pick, and let $$k$$ be the number of elements that P1 thinks of. Now, R imposes a restriction on P2. Let $$A$$ be that restriction, i.e., the set of all elements from which P2 can guess the answer. Let $$|S| = n$$, $$|A| = m$$, and $$A \subseteq S$$. Therefore $$m \leq n$$. Let the set of elements that P1 thinks of be $$X$$. Clearly $$|X| = k$$.

Our events are defined as before. $$R_i$$ is the event that exactly $$i$$ elements of $$X$$ lie in $$A$$, i.e., $$|X \cap A| = i$$, where $$0 \leq i \leq k$$. $$C$$ is the event that P2 wins the game.
Again, applying the law of total probability, we have:

$P(C) = \sum\limits_{i = 0}^k{P(C|R_i) * P(R_i)}$

Consider the event $$C$$ conditioned on $$R_i$$. P2 can guess from a total of $$m$$ elements, but only $$i$$ of them can make P2 win. Therefore, the probability of $$C$$ conditioned on $$R_i$$ can be written as:

$P(C|R_i) = \frac{i}{m}$

The number of ways event $$R_i$$ can occur is the number of ways we can choose $$i$$ elements from $$A$$ times the number of ways we can choose the rest, i.e., $$(k - i)$$ elements, from $$S-A$$. Note that because $$A \subseteq S$$, $$|S - A| = n - m$$. So, we can write the probability of $$R_i$$ as:

$P(R_i) = \frac{(^{m}C_i) * (^{n-m}C_{k-i})}{^{n}C_k}$

Now, we can answer the generalized question:

$P(C) = \sum\limits_{i = 0}^k{\frac{i}{m}*\frac{(^{m}C_i) * (^{n-m}C_{k-i})}{^{n}C_k}}$

## Results

I have written a quick Python script to evaluate this for different values of $$n$$, $$m$$, and $$k$$. It turns out that if we set $$n = 100$$ and $$k = 5$$, then for all $$m$$ such that $$0 < m \leq n$$, $$P(C) = 0.05$$. This is far too interesting to be just a coincidence…

Well, in fact, for arbitrary $$n > 0$$ and $$0 < k \leq n$$, as long as $$0 < m \leq n$$,

$P(C) = \frac{k}{n}$

This means that the restriction the referee R imposes on P2 has no effect on the odds that they will win the game.

## Discussion

Our intuition says that if R gives a smaller range for P2 to guess from, then it reduces the probability that P2 wins, thus increasing the probability of P1 winning. But we have just shown that our inherent human intuition is wrong, just like in the Monty Hall Problem and on many other occasions. But how can we understand the result we just derived intuitively? Keep in mind the way we solved the original problem. If the numbers are truly random, then the odds will be the same irrespective of the restriction imposed on P2. This is due to three facts:

1. P1, the person thinking of the numbers, doesn’t know the restriction that will be imposed.
2. R, who sets the restriction, does not know what numbers P1 has thought of.
3. P2, the person guessing, also doesn’t know the numbers P1 has thought of.

Since we have eliminated bias in all three people playing the game, we need to account for all the different possibilities the situation creates. In doing so, the net effect of the restriction becomes nil. Thus, we end up with the original problem again. We went to great lengths trying to complicate a simple problem only to go back to square one!

## Acknowledgements

Shout out to my professor Chris Ketelsen and my classmate Michael Dresser for encouraging and helping me to think about this problem. Another shout out to my friend Aravindh Shankar for proofreading my solution.
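As an appendix, the quick script mentioned in the Results section might look something like this (a minimal sketch; the function name `win_probability` is mine, and it directly translates the total-probability formula above):

```python
from math import comb

def win_probability(n, k, m):
    """P(C): P1 thinks of k of n numbers; P2 guesses one of m allowed numbers.

    Sums (i/m) * P(R_i) over i, where P(R_i) is hypergeometric:
    C(m, i) * C(n - m, k - i) / C(n, k).
    """
    total = comb(n, k)
    return sum(
        (i / m) * comb(m, i) * comb(n - m, k - i) / total
        for i in range(0, k + 1)
    )

# The original game: 5 numbers from 100, guesses restricted to a 30-number range
print(win_probability(100, 5, 30))  # 0.05, independent of the restriction
```

Analytically, the sum $$\sum_i i \cdot P(R_i)$$ is the mean of a hypergeometric distribution, which equals $$km/n$$; dividing by $$m$$ gives $$P(C) = k/n$$, with no dependence on $$m$$.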
# Number of solutions for $xy +yz + zx = N$

Is there a way to find the number of "different" solutions to the equation $xy +yz + zx = N$, given the value of $N$? Note: $x,y,z$ can have only non-negative values.

• This seems like a very interesting (and possibly very hard) question. The sequence begins $1, 3, 6, 7, 9, 9, 12, 9, 15, 12, 12, 15, 19, 9, 18, 18, 18, 15, 18, 15, 27, 18, 12, 21, 30, 12, 24, 22, 21, 21, 24, 21, 30, 18, 18, 30, 36, 9, 24, 30, 30$ for $N=0,1,\ldots,40$. – David Bevan Jun 14 '13 at 8:36

• It's tabulated at oeis.org/A067751 with the comment, "An upper bound on the number of solutions appears to be $9\sqrt n$." – Gerry Myerson Jun 14 '13 at 9:43

• If you insist on the variables being positive, you lose about $3d(N)$ solutions, but you get numbers related to class numbers of quadratic fields. See docserver.carma.newcastle.edu.au/212/2/98_119-Borwein-Choi.pdf – Gerry Myerson Jun 14 '13 at 9:51

• See also the discussion starting on page 291 of Mordell's book, Diophantine Equations, for the relation to class numbers. – Gerry Myerson Jun 14 '13 at 9:59

• A simple way to calculate this number for arbitrary $N$ is to enumerate all 3-tuples $(x, y, z)$ with $x \leq y \leq z \leq N$ and check whether each is a solution, essentially brute-forcing the answer (there are some possible optimizations; for example, one can easily see that $x = 1, y = 1, z = N - 2$ is a solution if $N > 2$, and likewise some other combinations can be skipped because they are never a solution). Of course, this is not very elegant, but it does answer your question (with a definite "yes, there is a way"). – Ruben Mar 16 '14 at 6:28

The problem is difficult, as it is related to the determination of class numbers of quadratic number fields. See the references I have given in the comments.
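For small $N$, the brute-force approach Ruben sketches in the comments is easy to try out. Here is a rough Python version (my own sketch, counting ordered non-negative triples; it assumes $N \geq 1$ so that every coordinate is bounded by $N$ and the search is finite):

```python
def count_solutions(N):
    """Count ordered non-negative triples (x, y, z) with x*y + y*z + z*x == N.

    For N >= 1, any coordinate exceeding N would force the sum past N
    (or requires the other two to be 0, giving 0 != N), so 0..N suffices.
    """
    assert N >= 1
    return sum(
        1
        for x in range(N + 1)
        for y in range(N + 1)
        for z in range(N + 1)
        if x * y + y * z + z * x == N
    )

print([count_solutions(N) for N in range(1, 9)])  # → [3, 6, 7, 9, 9, 12, 9, 15]
```

For $N = 1, \ldots, 8$ this reproduces the $3, 6, 7, 9, 9, 12, 9, 15$ from David Bevan's comment.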
The equation: $XY+XZ+YZ=N$

Solutions in integers can be written down by expanding $N$ into a factorization $N=ab$ and using solutions of the Pell equation $p^2-(4k^2+1)s^2=1$, where $k$ is some integer which we choose on our own. The solutions can be written:

$X=ap^2+2(ak+b+a)ps+(2(a-2b)k+2b+a)s^2$
$Y=2(ak-b)ps+2(2ak^2+(a+2b)k+b)s^2$
$Z=bp^2-2(2b+a)kps+(4bk^2-2ak-b)s^2$

And also:

$X=-2bp^2+2(k(4b+a)+b)ps-2((4b+2a)k^2+(2b-a)k)s^2$
$Y=-(2b+a)p^2+2(k(4b+a)-b-a)ps-(8bk^2-(4b+2a)k+a)s^2$
$Z=bp^2-2(2b+a)kps+(4bk^2-2ak-b)s^2$

• Where does all this come from?! – vonbrand Mar 11 '14 at 14:42
• I do not understand — how? It is necessary to solve the equation. – individ Mar 11 '14 at 15:18

Perhaps these formulas are too complicated for some. Then, for the equation $XY+XZ+YZ=N$: if we pick any number $p$ and factor the sum $p^2+N$ as $p^2+N=ks$, the solutions can be written:

$X=p$
$Y=s-p$
$Z=k-p$

For the equation $XY+XZ+YZ=a(X+Y+Z)$, solutions can be written using the solutions of the Pell equation $p^2-(k^2-k+1)s^2=a$, where $k$ can be any number, chosen by us. Then the solutions take the form:

$X=p^2+(k+1)ps$
$Y=p^2+(k-2)ps$
$Z=p^2+(1-2k)ps$

And also:

$X=(k+1)ps-(k^2-k+1)s^2$
$Y=(k-2)ps-(k^2-k+1)s^2$
$Z=(1-2k)ps-(k^2-k+1)s^2$

Here is a C# program that does the job in a pretty naive way. It eliminates some possibilities, then brute-forces all solutions. You can also use the Mathematica code provided on the OEIS page for this sequence (thanks to Gerry Myerson for this!).
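These parametrizations can be sanity-checked numerically. Here is a quick Python check (the function names are my own) of the first Pell family and of the simple $p^2 + N = ks$ construction; note that the Pell family produces integer solutions that may be negative, while the question asks for non-negative ones:

```python
def pell_family(a, b, k, p, s):
    """First family above: requires p^2 - (4k^2 + 1) s^2 == 1; here N = a*b."""
    assert p * p - (4 * k * k + 1) * s * s == 1
    X = a*p*p + 2*(a*k + b + a)*p*s + (2*(a - 2*b)*k + 2*b + a)*s*s
    Y = 2*(a*k - b)*p*s + 2*(2*a*k*k + (a + 2*b)*k + b)*s*s
    Z = b*p*p - 2*(2*b + a)*k*p*s + (4*b*k*k - 2*a*k - b)*s*s
    return X*Y + Y*Z + Z*X == a * b

def simple_family(p, k, s):
    """Simple construction: if p^2 + N = k*s then (p, s - p, k - p) solves it."""
    N = k * s - p * p
    X, Y, Z = p, s - p, k - p
    return X*Y + Y*Z + Z*X == N

# k = 1 gives the Pell equation p^2 - 5 s^2 = 1, solved by (p, s) = (9, 4)
print(pell_family(2, 3, 1, 9, 4))  # True: the triple satisfies XY + YZ + ZX = 6
print(simple_family(1, 7, 1))      # True: 1^2 + 6 = 7*1, so (1, 0, 6) solves N = 6
```

With $(a, b, k, p, s) = (2, 3, 1, 9, 4)$ the family gives $(X, Y, Z) = (666, 408, -253)$, which indeed satisfies $XY + YZ + ZX = 6$ but is not non-negative.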
using System;

class Program
{
    // Legendre's three-square theorem: n is a sum of three squares
    // iff it is not of the form 4^a (8b + 7).
    static bool canBeWrittenAsSumOfThreeSquares(int n)
    {
        while (n > 0 && n % 4 == 0) n /= 4;
        return n % 8 != 7;
    }

    static void Main(string[] args)
    {
        Console.Write("N = ");
        int N = int.Parse(Console.ReadLine());
        Console.WriteLine();
        int n = 0, s = 0;
        while (n * n < 2 * N) n++; // smallest n with n^2 >= 2N
        while (n <= N + 1)
        {
            if (canBeWrittenAsSumOfThreeSquares(n * n - 2 * N))
            {
                for (int x = 0; x <= n / 3; x++)
                    for (int y = x; y <= (n - x) / 2; y++)
                    {
                        int z = n - x - y;
                        if (x * y + y * z + z * x == N)
                        {
                            Console.Write(x + ", " + y + ", " + z + " ");
                            if (x != y && y != z && x != z)
                            {
                                Console.WriteLine("(6 permutations)");
                                s += 6;
                            }
                            else if (x == y && y == z)
                            {
                                Console.WriteLine("(1 permutation)");
                                s++;
                            }
                            else
                            {
                                Console.WriteLine("(3 permutations)");
                                s += 3;
                            }
                        }
                    }
            }
            n++;
        }
        Console.WriteLine(s + " solutions found.");
    }
}

The code uses the fact that $xy + yz + zx = \frac{(x + y + z)^2 - x^2 - y^2 - z^2}{2}$. When we have $xy + yz + zx = N$, we have $(x + y + z)^2 - x^2 - y^2 - z^2 = 2N$, so $(x + y + z)^2 - 2N = x^2 + y^2 + z^2$. A number can be written as a sum of three squares iff it is not of the form $4^k(8m + 7)$; see the Wikipedia article. So we check each candidate sum $n = x + y + z$ with $n^2 \geq 2N$, skip it unless $n^2 - 2N$ is a sum of three squares, and otherwise iterate through all increasing ($x \leq y \leq z$) 3-tuples $(x, y, z)$ with $x + y + z = n$, multiplying by the number of permutations possible (either 1, if all three components are the same; 3, when two components are the same and one different; or 6, when all the components are different). I haven't used any advanced theorems (class numbers?), but I just wanted to illustrate that there is an approach that works for small $N$. I was able to calculate the number of solutions for $N=4000$ in about one minute. This approach probably can be expanded to be more efficient.
# Asymmetric Zero Range Process with Site-wise Disorder: convergence to critical measure from supercritical initial configurations

Friday, June 6, 2014, 9:30 - 10:30 am

We study nearest neighbor asymmetric zero range processes on $\mathbf{Z}$ with site-wise disorder given by a random variable at each site which multiplies the rate.

Let $\alpha(x), x \in \mathbf{Z}$ be an ergodic sequence taking values in $(c,1]$ where $c >0$. We assume that $\alpha(x)$ has no atom at $c$. The generator of the process is given below.

$L^\alpha f(\eta) = \sum_{x,y\in\mathbf{Z}}\alpha(x) p(y-x)g(\eta(x))\left[ f\left(\eta^{x,y}\right)-f(\eta) \right]$

Here $g(k)$ denotes the rate at which a particle jumps from a site with occupation number $k$. We take $g$ to be increasing with $\lim_{k \to \infty} g(k)=1$, and $p$ is a nearest neighbor jump kernel.

In earlier work of Benjamini, Ferrari and Landim, it was shown that there is a critical density $\rho_c$ above which no product invariant measures exist, and a hydrodynamic limit below the critical density was established. Andjel, Ferrari, Landim and Guiol considered the totally asymmetric case with $g=1$ and showed that all invariant measures are in the convex hull of product invariant measures with density lower than or equal to the critical density. They further showed that, starting from a configuration whose lower asymptotic empirical density at $-\infty$ is greater than $\rho_c$, the process converges to the maximal invariant measure with density $\rho_c$. Their approach exploited the total asymmetry. We extend their result to the asymmetric case and general $g$. Ideas on general Euler hydrodynamics of attractive one dimensional systems, results on convergence to local equilibrium, and hydrodynamics of semi-infinite systems with sources/sinks are used to obtain our result.
We also show that the condition on the initial profile is necessary, and give a counterexample to show that the result may not hold without the nearest neighbor assumption. (This is joint work with Christophe Bahadoran, Thomas Mountford and Ellen Saada.)