# Mathematical Treasures - Euler's Analysis of the Infinite

Author(s): Frank J. Swetz and Victor J. Katz

This is the title page of Leonhard Euler's *Introductio in analysin infinitorum*, vol. I, published in 1748. This book is considered by some historians of mathematics to be one of the most influential mathematical texts of all time. It serves as an introduction to Euler's later texts on the calculus: *Differential Calculus* (1755) and *Integral Calculus*, completed in 1770. Among its many contributions, the *Analysis* defines "function"; presents methods of transforming and representing functions; establishes the categories of even and odd functions; defines the trigonometric functions in a modern manner; presents a (not complete) proof of the Fundamental Theorem of Algebra; popularizes the use of the symbol "e" for the number 2.71828… and "π" for the number 3.14159…; and establishes the relationship cos θ + i sin θ = e^{iθ}. A complete English translation of this work is available: Leonhard Euler, *Introduction to Analysis of the Infinite*, Springer-Verlag, 1988, translated by John Blanton.

The frontispiece of Euler's *Analysis* reflects the romantic era of his time and shows two women contemplating a mathematical problem while a winged muse hovers above. The engraving is entitled "Analysis of the infinitely small."

Image of page 46 of the *Analysis*, the beginning of Chapter IV, "On the development of functions in infinite series." Here Euler argues that many functions can be simplified for computation by converting them into power series: A + Bz + Cz² + Dz³ + …

This image of page 47 continues the discussion of converting a function into a series. The example of the rational function 1/(α + βz) is given. By repeated division the function is transformed into a power series, and Euler then demonstrates techniques for determining the unknown coefficients A, B, C, ….

Frank J. Swetz and Victor J. Katz, "Mathematical Treasures - Euler's Analysis of the Infinite," *Loci* (January 2011)
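The relationship cos θ + i sin θ = e^{iθ} established in the *Introductio*, and the two constants whose symbols it popularized, can be illustrated with a short numerical check (a sketch added here for illustration, not part of the original article):

```python
import cmath
import math

# Euler's relation: cos(t) + i*sin(t) = e^{it}, checked at a few sample angles
for t in (0.0, 1.0, math.pi / 3, 2.5):
    lhs = complex(math.cos(t), math.sin(t))
    rhs = cmath.exp(1j * t)
    assert abs(lhs - rhs) < 1e-12

# The two constants whose symbols the Introductio popularized
assert abs(math.e - 2.71828) < 1e-5   # e  = 2.71828...
assert abs(math.pi - 3.14159) < 1e-5  # pi = 3.14159...
```

Setting t = π recovers the celebrated special case e^{iπ} + 1 = 0.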
---
# Local Existence of MHD Contact Discontinuities

*Archive for Rational Mechanics and Analysis*, Volume 228 (2), Springer Berlin Heidelberg. Published: Nov 29, 2017. 52 pages. ISSN 0003-9527, eISSN 1432-0673. DOI: 10.1007/s00205-017-1203-3

### Abstract

We prove the local-in-time existence of solutions with a contact discontinuity of the equations of ideal compressible magnetohydrodynamics (MHD) for two-dimensional planar flows, provided that the Rayleigh–Taylor sign condition $${[\partial p/\partial N] < 0}$$ on the jump of the normal derivative of the pressure is satisfied at each point of the initial discontinuity. MHD contact discontinuities are characteristic discontinuities with no flow across the discontinuity, for which the pressure, the magnetic field and the velocity are continuous, whereas the density and the entropy may have a jump. This paper is a natural completion of our previous analysis (Morando et al. in J Differ Equ 258:2531–2571, 2015), where the well-posedness in Sobolev spaces of the linearized problem was proved under the Rayleigh–Taylor sign condition satisfied at each point of the unperturbed discontinuity. The proof of the resolution of the nonlinear problem given in the present paper follows from a suitable tame a priori estimate in Sobolev spaces for the linearized equations and a Nash–Moser iteration.

Subject: Physics; Classical Mechanics; Theoretical, Mathematical and Computational Physics; Complex Systems; Fluid- and Aerodynamics
---
# Predict future value with time period using non linear regression model [closed]

Here I have a dataset imported from a CSV file, and I want to predict the next value in the time series. Can we use a nonlinear regression model to predict the value for the next time period, or is there another regression model we can use? Here is a subset of my original dataset:

| Time    | x   | x1  | x2  | y   |
|---------|-----|-----|-----|-----|
| 0:06:00 | 63  | NaN | NaN | 63  |
| 0:07:00 | 63  | NaN | 20  | 104 |
| 0:08:00 | 104 | 11  | 0   | 93  |
| 0:09:00 | 93  | 0   | 0   | ?   |

## closed as unclear what you're asking by Stephen Rauch, oW_, Siong Thye Goh, Sean Owen♦ Sep 16 '18 at 2:30

Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
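As a minimal illustration of the idea being asked about, one can fit a polynomial trend to the observed y values and extrapolate one step ahead. This is only a sketch: with three points a quadratic fit is exact, so it demonstrates the mechanics of trend extrapolation rather than a sound forecasting model (real work would use more history and a proper time-series method):

```python
import numpy as np

# Toy sketch: fit a quadratic trend to the observed y values from the
# question's table and extrapolate to the next minute. With 3 points the
# fit passes exactly through the data, so this is illustration only.
t = np.array([6.0, 7.0, 8.0])            # minutes 0:06, 0:07, 0:08
y = np.array([63.0, 104.0, 93.0])

coeffs = np.polyfit(t, y, deg=2)          # quadratic through the 3 points
y_next = float(np.polyval(coeffs, 9.0))   # prediction for 0:09
print(round(y_next, 3))                   # -> 30.0 (the downward turn continues)
```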
---
# Definition:Set/Uniqueness of Elements/Equality of Sets

## Definition

By definition of set equality, $S$ and $T$ are equal if and only if they have the same elements:

$S = T \iff \paren {\forall x: x \in S \iff x \in T}$

So, to take the club membership analogy: if two clubs had exactly the same members, the clubs would be considered the same club, even though they may be given different names. This follows from the definition of equality given above.

Note that there are mathematical constructs which do take into account the order in which the elements appear, or the number of times they appear (or both), but these are not sets as such.
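The definition can be seen at work in any language with a set type. In Python (used here purely as an illustration), sets compare by membership alone, while the order-sensitive and multiplicity-sensitive constructs mentioned above behave differently:

```python
from collections import Counter

# Set equality depends only on membership: order and repetition are ignored,
# mirroring  S = T  iff  (forall x: x in S iff x in T).
club_a = {"alice", "bob", "carol"}
club_b = {"carol", "alice", "bob", "alice"}  # same members, listed differently
assert club_a == club_b

# Constructs that track order or multiplicity are not sets:
assert ["a", "b"] != ["b", "a"]          # lists: order matters
assert Counter("aab") != Counter("ab")   # multisets: multiplicity matters
```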
---
# Of the students in a certain class, 55% of the female and

**nitya34** (Director), 29 Mar 2009, 10:59

Of the students in a certain class, 55% of the female and 35% of the male students passed an exam. Did more than half of the students in the class pass the exam?

(1) More than half of the students in the class are female.
(2) The number of female students is 20 more than the number of male students.

---

**Manager** (joined 19 Aug 2006), 29 Mar 2009, 11:38

Good question. I'll venture a B. Statement 1 is clearly not sufficient. Statement 2: plugging in numbers, we can find the answer.

---

**GMAT Instructor** (joined 04 Jul 2006), 29 Mar 2009, 11:57

Suppose f% of the students are female: then 55f/100 + 35(100 − f)/100 = 35 + 20f/100 percent of the students passed. Thus we need to know whether f/5 > 15, i.e. whether f > 75.

(1) f > 50: not sufficient.
(2) only tells us that f > 50: not sufficient.
Together: not sufficient.

It's easier drawing a number line: (male) 35 ---15--- 50 --5-- 55 (female). We need to know whether 15m < 5f, i.e. whether f > 3m.

---

**Intern** (joined 29 Dec 2006), 17 Apr 2009, 19:43

I got B. After picking several numbers, I got the answer that more than half of the class did not pass.

---

**Manager** (joined 22 Feb 2009), 19 Apr 2009, 11:09

Let's say total students = 100, females = x, males = 100 − x. The question asks whether 0.55x + 0.35(100 − x) > 0.5 × 100; solving, this becomes: is x > 75?

St 1: not sufficient, as we need to show that the number of females > 75.
St 2: if x + y = 100 and x − y = 20, then solving gives x = 60 females. Therefore the number of females < 75, and the statement is SUFFICIENT.

Hence IMO B.

---

**Manager** (joined 02 Mar 2009), 19 Apr 2009, 23:40

OK, I got B as well, but did it in a different way. I like Bandit's way better, though. Nonetheless, here goes:

1 is not sufficient; we all agree.

2: The minimum number of males must be 20, because we need a whole number of males that splits into the proportions 35% and 65%. I arrived at this by prime-factorizing 35, 65 and 100: 35 = 5·7, 65 = 5·13 and 100 = 2·2·5·5. The 5 in 100 cancels but the 2·2·5 does not, so the number of males must be a multiple of 20.

Now I used M = 20 and F = 40 (females are simply 20 more). Then the number who passed = 29 < 30. We can stop here; we need not calculate for M = 40, 60, etc., because as the proportion of males increases, since the passing rate for males is low, we are all the more sure that fewer than half of the students passed.

Thus Ans B.

---

**seofah** (Director, joined 29 Aug 2005), 10 Jun 2009, 04:38

E for me. When a statement involves an absolute relationship (Stmt 2: F − 20 = M) rather than a relative one (e.g. F/M = 2/5), you cannot say "let's assume F + M = 100".

Look at the stem: is 0.55F + 0.35M > 0.5(F + M)? Simplify: 0.05F > 0.15M, i.e. F > 3M, i.e. F/M > 3?

Stmt 1: F > M, i.e. F/M > 1. Not sufficient.
Stmt 2: F − 20 = M. This does not give a relative value of F to M; it does not matter whether F − 1 = M or F − 100000 = M. We need the total number of students to use this statement, so it is not sufficient.
Together, we still cannot answer the question. Hence, E.

---

**CEO** (joined 17 Nov 2007), 10 Jun 2009, 09:50

> seofah wrote: Stmt 2: F − 20 = M. This statement does not give a relative value of F to M. It does not matter whether F − 1 = M or F − 100000 = M. We need the total number of students to be able to use this statement. Hence, it is not sufficient.

We cannot get 55% and 35% for just any F and M; that is the trap here. M cannot be 1, or any number up to 19, because 35% · M would not be an integer. So M must be 20 or more. But for M ≥ 20, the total number of students who passed the exam is always less than half.

---

**Manager** (joined 08 Feb 2009), 10 Jun 2009, 19:42

Let F = number of females and M = number of males, so F + M = number of people in the class. The question is whether

$$\frac{0.55F + 0.35M}{F + M} > \frac{1}{2}.$$

Simplifying, we get: $$F > 3M.$$

(1) NOT SUFFICIENT, because the statement only says that $$F > M.$$

(2) $$F = M + 20.$$ Substituting statement 2 into $$F > 3M$$ gives $$M + 20 > 3M \Rightarrow 10 > M.$$ This can be ruled out using the fact that M must be at least 20 for 35% of M to be an integer. Thus statement 2 is SUFFICIENT.

---

**Manager** (joined 28 Jan 2004, India), 10 Jun 2009, 23:12

B. Stmt 1: we agree it is insufficient.

Stmt 2: number of boys = X, number of girls = X + 20, so total students = 2X + 20. We can assume the number of students is 100 (it will not make a difference, as X changes accordingly). So boys = 40 and girls = 60. Boys passed = 35% of 40 = 14; girls passed = 55% of 60 = 33. Total passed = 47, which is less than 50. Hence sufficient.

---

**EMPOWERgmat Instructor** (Rich Cohen), 28 Feb 2015, 14:31

Hi All,

This question can be solved by TESTing VALUES. The prompt comes with an interesting "restriction" though: you CANNOT have a fraction of a person (e.g. 2/3 of a girl, 1/8 of a boy), so we have to TEST VALUES that "fit" the percentages given.

We're told that 55% of the females and 35% of the males in a class passed an exam, and we're asked whether MORE than half of the students passed. This is a YES/NO question. Since we're dealing with 55% of females and 35% of males, there must be a MULTIPLE of 20 females AND a MULTIPLE of 20 males in the class; no other numbers "fit" this data. For example, you could NOT have 2 females and 3 males, since the percentages would then give us "fractions of a person." Since 55% of 20 = 11 and 35% of 20 = 7, we also have a math shortcut (we can use those multiples to save some time).

Fact 1: More than half of the students are FEMALE. This is important; it tells us that we do NOT have an equal number of females and males.

IF we have 40 females and 20 males: 22 females and 7 males passed; 29/60 is LESS than half, so the answer is NO.
IF we have 60 females and 20 males: 33 females and 7 males passed; 40/80 is EXACTLY half, so the answer is NO.
IF we have 80 females and 20 males: 44 females and 7 males passed; 51/100 is MORE than half, so the answer is YES.
Fact 1 is INSUFFICIENT.

Fact 2: The number of females is 20 more than the number of males. The first TEST from Fact 1 also "fits" here:

IF we have 40 females and 20 males: 22 + 7 = 29 passed; 29/60 is LESS than half: NO.
IF we have 60 females and 40 males: 33 + 14 = 47 passed; 47/100 is LESS than half: NO.
IF we have 80 females and 60 males: 44 + 21 = 65 passed; 65/140 is LESS than half: NO.

From this work, you might notice how the numerator gets further and further "away" from exactly half; this is a pattern. Fact 2 is SUFFICIENT.

Answer: B

GMAT assassins aren't born, they're made,
Rich
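The integer-count argument running through the thread can be verified by brute force. The sketch below (illustrative, not part of the original discussion) checks that under statement (2) the pass fraction stays below one half for every valid class size, while statement (1) alone allows both answers:

```python
from fractions import Fraction

def pass_fraction(females, males):
    # 55% of females and 35% of males passed; counts must be whole numbers
    passed = Fraction(55, 100) * females + Fraction(35, 100) * males
    assert passed.denominator == 1, "percentages must yield whole students"
    return passed / (females + males)

# Statement (2): females = males + 20. For the pass counts to be integers,
# males must be a multiple of 20 (35% = 7/20), and then females = males + 20
# is automatically a multiple of 20 as well (55% = 11/20).
results = [pass_fraction(m + 20, m) for m in range(20, 2001, 20)]
assert all(r < Fraction(1, 2) for r in results)   # always "no" -> sufficient

# Statement (1) alone: females > males admits both answers -> insufficient
assert pass_fraction(80, 20) > Fraction(1, 2)     # 51/100: yes
assert pass_fraction(40, 20) < Fraction(1, 2)     # 29/60:  no
```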
---
# A Question on Probability - Hunter and Rabbit

Suppose there are m different hunters and n different rabbits. Each hunter independently selects a rabbit uniformly at random as a target. All hunters shoot at their chosen targets at the same time, and every hunter hits his target.

(i) Consider a particular Rabbit $$1$$. What is the probability that Rabbit $$1$$ survives?

(ii) Suppose $$m=7$$, $$n=5$$. What is the probability that no rabbit survives?

Attempt for (i): Consider the 1st hunter: the number of rabbits he can choose is $$n-1$$, since Rabbit $$1$$ survives. The same holds for the 2nd hunter, and so on. So for $$m$$ hunters, the number of ways they can choose rabbits without choosing Rabbit 1 is $$(n-1)^m$$, and the total number of ways they can choose rabbits is $$n^m$$. Hence

$$P(\text{Rabbit 1 survives}) = \frac{(n-1)^m}{n^m} = \left[ \frac{n-1}{n} \right]^m$$

---

For (ii), you can try Inclusion-Exclusion, using the fact (generalizing (i)) that the probability that a particular set of $k$ rabbits survives is $((n-k)/n)^m$.

EDIT: Here's the Inclusion-Exclusion calculation:

$$\begin{aligned} P(0\text{ survive}) &= 1 - P(\ge 1\text{ survive}) \\ &= 1 - {5 \choose 1} (4/5)^7 + {5 \choose 2} (3/5)^7 - {5 \choose 3} (2/5)^7 + {5 \choose 4} (1/5)^7 \\ &= \frac{672}{3125} \end{aligned}$$

(which is the same as $16800/78125$). In general, with $m$ hunters and $n$ rabbits, the probability that none survive is

$$1 + \sum_{k=1}^{n-1} (-1)^k {n \choose k} \left(\dfrac{n-k}{n}\right)^m$$

• Is P(1 rabbit survives) = $[(5-1)/5]^7 \cdot {5 \choose 1}$? — Dec 17, 2013 at 11:43
• This works out for all rabbits surviving: since $k=n$, the probability is $0$, which is correct (every hunter has to pick a rabbit, so the best case for the rabbits is one very unlucky rabbit getting shot by all hunters). It also works out for $k=1$, since that becomes case (i), if that is correct. But this puts the probability of no rabbit surviving at $1$, since for $k=0$ we get $$\left[ \frac{n-k}{n} \right]^m = \left[ \frac{n}{n} \right]^m = 1$$ – SQB, Dec 17, 2013 at 11:45
• When getting $((n-1)/n)^m$ we considered a specific rabbit, Rabbit 1. So I thought that to find P(1 rabbit survives), we'd take P(1 rabbit survives) = $((n-1)/n)^m \cdot {n \choose 1}$. — Dec 17, 2013 at 12:28
• Inclusion-exclusion again. The probability that at least $k$ survive is ${n \choose k} \cdot$ (probability of a given $k$-tuple surviving) $- {n \choose k+1} \cdot$ (probability of a given $(k+1)$-tuple surviving) $+ \ldots$ — Dec 17, 2013 at 13:31
• For 7 hunters and 5 rabbits, there are $5^7 = 78125$ ways for the hunters to choose a rabbit. Of those, 16800 have all rabbits killed. – SQB, Dec 17, 2013 at 14:51

---

For part (ii), I followed this approach; I think it is correct, but please comment otherwise. For all rabbits to die, each rabbit should be hit by at least one hunter. Selecting n hunters among m, maintaining the order, gives $^{m}P_{n}$ ways, and each of the remaining $m-n$ hunters can shoot any rabbit, giving $n^{m-n}$ possibilities. So the total count is $^{m}P_{n} \cdot n^{m-n}$, and the probability is $\frac{^{m}P_{n} \cdot n^{m-n}}{n^m}$.

---

I found out how to arrive at the numbers I got from my simulation, so I'll try my hand at answering. First, my simulation. As I think we've killed enough rabbits by now, I'll try to make the world a better place by giving ice cream to children. We've got 7 children: Alice, Bob, Carol, Dave, Eve, Frank, and Gabrielle. The ice cream parlor has only 5 flavours. You can come up with any flavours you like, but we'll just number them 1 through 5. The kids get one scoop each. These kids like to share, and they'd like to try each flavour: if they can make sure that they have picked each flavour at least once among the seven of them, they can all taste every flavour by sharing. The question now becomes: what is the probability of the 7 children having picked all 5 flavours between them (if they don't know what the others picked, of course)?
Now here's my simulation of that in SQL (Oracle 11g):

```sql
CREATE OR REPLACE TYPE nums AS TABLE OF NUMBER;
/
WITH icecream AS (
  SELECT LEVEL AS flavour
  FROM   dual
  CONNECT BY LEVEL <= :v_nr_of_flavours
), children AS (
  SELECT a.flavour AS alice,
         b.flavour AS bob,
         c.flavour AS carol,
         d.flavour AS dave,
         e.flavour AS eve,
         f.flavour AS frank,
         g.flavour AS gabrielle,
         CARDINALITY(
           nums(a.flavour, b.flavour, c.flavour, d.flavour,
                e.flavour, f.flavour, g.flavour)
           MULTISET UNION DISTINCT nums()
         ) AS nr_of_flavours_picked
  FROM   icecream g
  CROSS JOIN icecream f
  CROSS JOIN icecream e
  CROSS JOIN icecream d
  CROSS JOIN icecream c
  CROSS JOIN icecream b
  CROSS JOIN icecream a
)
SELECT COUNT(*) AS nr_of_combinations,
       nr_of_flavours_picked,
       CASE
         WHEN GROUPING(nr_of_flavours_picked) = 1 THEN NULL
         ELSE DECODE(nr_of_flavours_picked, :v_nr_of_flavours, 1, 0)
       END AS all_flavours_picked
FROM   children
GROUP BY ROLLUP (nr_of_flavours_picked);
```

This gives us a value of 78125 total possibilities, of which 16800 have ~~all rabbits killed~~ all flavours picked. But where do those numbers come from? The number of total possibilities is easy: that's $5^7$. The other number is a bit more involved. As it turns out, there are two ways for 7 children to pick all 5 flavours: either two flavours are picked twice, or one flavour is picked three times. The last case is the easiest, as it is just $7 \cdot 6 \cdot 5 \cdot 4$ (the first four kids pick a flavour that hasn't been picked yet; the last three kids pick the one flavour left). Of course there are ${5 \choose 1} = 5$ flavours that can be picked three times, so we get $7 \cdot 6 \cdot 5 \cdot 4 \cdot 5$ possibilities. The first case is a little harder, but not much: here we have $7 \cdot 6 \cdot 5 \cdot 6$ (the first three kids pick a flavour that hasn't been picked yet, after which there are 6 ways to distribute the remaining two pairs of flavours among the remaining four kids).
Here we have ${5 \choose 2} = 10$ ways of deciding which two flavours get picked twice, so the total number here is $7 \cdot 6 \cdot 5 \cdot 6 \cdot 10$. The total of these two cases is $7 \cdot 6 \cdot 5 \cdot 4 \cdot {5 \choose 1} + 7 \cdot 6 \cdot 5 \cdot 6 \cdot {5 \choose 2} = 7 \cdot 6 \cdot 5 \cdot (4 \cdot 5 + 6 \cdot 10) = 210 \cdot 80 = 16800$ which is indeed the number we got from our simulation. So the probability of the children having picked all flavours is $\frac{16800}{78125}$. Here are the results for 7 children with different numbers of flavours. $$\begin{array}{rrr} \begin{array}{c}\text{Nr. of flavours}\end{array} & \begin{array}{c}\text{Nr. of combinations} \\ \text{with all flavours chosen}\end{array} & \begin{array}{c}\text{Nr. of possible combinations}\end{array} \\ \hline 1 & 1 & 1 \\ 2 & 126 & 128 \\ 3 & 1806 & 2187 \\ 4 & 8400 & 16384 \\ 5 & 16800 & 78125 \\ 6 & 15120 & 279936 \\ 7 & 5040 & 823543 \\ \end{array}$$ The general question remains to find a formula for different $m$ (hunters or children) and $n$ (rabbits or icecream flavours). I can explain all the numbers in the table above, but so far I haven't been able to formulate the general formula.
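The counts in the table, and the inclusion-exclusion formula from the first answer, can be cross-checked in a few lines of Python (offered here as an alternative to the SQL simulation above):

```python
import random
from fractions import Fraction
from math import comb

def p_none_survive(m, n):
    """P(no rabbit survives) for m hunters and n rabbits, by inclusion-exclusion."""
    return sum((-1) ** k * comb(n, k) * Fraction(n - k, n) ** m
               for k in range(n + 1))

exact = p_none_survive(7, 5)
assert exact == Fraction(16800, 78125)   # matches the simulation's count

# Monte Carlo cross-check: each of 7 hunters picks one of 5 rabbits at random
random.seed(0)
trials = 200_000
hits = sum(len({random.randrange(5) for _ in range(7)}) == 5
           for _ in range(trials))
assert abs(hits / trials - float(exact)) < 0.01
```

The same function reproduces the rest of the table, e.g. `p_none_survive(7, 7)` gives $5040/7^7$.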
---
Is the Cartesian product of nonempty sets nonempty, even if the product is of an infinite family of sets?
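For a finite family the answer is yes, and it can be checked constructively: just pick one element from each factor. (For an infinite family of nonempty sets, the statement that the product is nonempty is exactly the Axiom of Choice.) A finite illustration in Python:

```python
from itertools import product

# Finite case: a choice function exists explicitly, so the product is nonempty.
family = [{1, 2}, {"a"}, {True, False}]
assert all(s for s in family)                  # every factor is nonempty
tuples = list(product(*family))
assert len(tuples) == 2 * 1 * 2                # |A x B x C| = |A|*|B|*|C| > 0
assert all(len(t) == len(family) for t in tuples)  # one coordinate per factor
```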
---
# Problems plotting a function in a 3D system

I want to plot the function x = -3 in a 3D system. I've been searching the internet for hours and found this example:

```latex
\begin{tikzpicture}[
  declare function = { X1(\y,\z) = -3; }]
\begin{axis}[grid,
    x={(-0.7071cm,-0.7071cm)}, y={(1cm,0.0cm)}, z={(0cm,1cm)},
    axis lines=center,
    font=\footnotesize,
    xmax=5.4,ymax=5.4,zmax=5.4,
    xmin=-5.4,ymin=-5.4,zmin=-5.4,
    xlabel=\normalsize$x$,ylabel=\normalsize$y$,zlabel=\normalsize$z$,
    major tick style = {black},
    minor tick num=1,
    minor tick style = {very thin},
    axis line style = {-latex}, % arrow tips
    %enlargelimits=0.1
]
% plane x=-3
\end{axis}
\end{tikzpicture}
```

The example I found was for z = 5; I adapted it for my case, but it is not working. When I put z anywhere, I get the following error:

    Package PGF Math Error: could not parse '' as a floating point number.

Is it possible that the system doesn't see z as a variable? I don't have any other ideas... Thanks for help.

---

For 3d plots you need points with three coordinates; {X1(y,z)} is just one component. Moreover, the varying parameters are called x and y.

```latex
\documentclass[border=2mm]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.14}
\begin{document}
\begin{tikzpicture}%
  [declare function = { X1(\y,\z) = -3;} ]
\begin{axis}%
  [grid,
   % x={(-0.7071cm,-0.7071cm)},
   % y={(1cm,0.0cm)},
   % z={(0cm,1cm)},
   axis lines=center,
   font=\footnotesize,
   xmax=5.4,ymax=5.4,zmax=5.4,
   xmin=-5.4,ymin=-5.4,zmin=-5.4,
   xlabel={\normalsize$x$},
   ylabel={\normalsize$y$},
   zlabel={\normalsize$z$},
   major tick style = {black},
   minor tick num=1,
   minor tick style = {very thin},
   axis line style = {-latex},
  ]
\end{axis}
\end{tikzpicture}
\end{document}
```
---
# \StrMid and \MakeUppercase problem

What's wrong with this?

```latex
\documentclass{minimal}
\usepackage{xstring}
\begin{document}
\newcommand{\cim}{c1095}
\StrMid{\MakeUppercase{\cim}}{1}{1}
\StrMid{\cim}{2}{100}
\end{document}
```

\MakeUppercase{\cim} works, but \StrMid{\MakeUppercase{\cim}}{1}{1} doesn't. Why?

- Perhaps try \MakeUppercase{\noexpand\StrMid{\cim}{1}{1}}? Not sure the \noexpand is needed, and I can't test on this computer. – Bruno Le Floch, Jan 16 '12 at 18:45

---

Since you don't need to uppercase the string beforehand, this is a way:

```latex
\StrMid{\cim}{1}{1}[\temp]
\expandafter\MakeUppercase\expandafter{\temp}
```

We store the extracted string in a temporary macro and then apply \MakeUppercase to the result. If the strings are formed of "safe" characters (printable ASCII), the simpler \uppercase can be used:

```latex
\StrMid{\cim}{1}{1}[\temp]\uppercase\expandafter{\temp}
```

- Thanks, that works. – balping, Nov 6 '11 at 14:00

---

The xstring package carries out an \edef (exhaustive expansion) on its argument. Stefan has suggested one approach, but depending on what you want to achieve, an alternative is to create an expandable version of the upper-case command. There is one built into expl3, which needs renaming for use in a document:

```latex
\documentclass{article}
\usepackage{xstring,expl3}
\ExplSyntaxOn
\let\MakeExpandableUppercase\tl_expandable_uppercase:n
\ExplSyntaxOff
\begin{document}
\newcommand{\cim}{c1095}
\StrMid{\MakeExpandableUppercase{\cim}}{1}{1}\space or
\StrMid{\expandafter\MakeExpandableUppercase\expandafter{\cim}}{1}{1} ?
\StrMid{\cim}{2}{100}
\end{document}
```

As I've indicated, I'm not sure what order you want the case change to happen in, relative to the string extraction.
---
### Discussion :: Chain Rule

1. If 7 spiders make 7 webs in 7 days, then 1 spider will make 1 web in how many days?

   A. 1
   B. $$\frac{7}{2}$$
   C. 7
   D. 49

Answer: Option C

Explanation: Let the required number of days be x.

Fewer spiders, more days (indirect proportion); fewer webs, fewer days (direct proportion).

Spiders: 1 : 7 :: 7 : x
Webs: 7 : 1

So 1 × 7 × x = 7 × 1 × 7, giving x = 7.
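The proportion above can be double-checked with a simple work-rate computation (an illustrative sketch):

```python
from fractions import Fraction

# 7 spiders make 7 webs in 7 days, so each spider's rate is
# 7 webs / (7 spiders * 7 days) = 1/7 web per spider per day.
rate = Fraction(7, 7 * 7)              # webs per spider per day
days_for_one_web = Fraction(1) / rate  # time for 1 spider to make 1 web
assert days_for_one_web == 7           # option C
```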
# Point Cloud Normal Estimation Matlab

Point-to-point distortion: given a point on coded point cloud B, the distance to reference point cloud A can be computed in two ways. P2P (point-to-point) finds the closest point y* on A and takes the Euclidean distance. P2C (point-to-cloud) computes the tangent plane at y* and takes the point-to-tangent-plane distance.

Given a point cloud presumably sampled from an unknown surface, the problem is to estimate the normals of the surface at the data points. Akin to 2D recognition, 3D recognition relies on finding good keypoints (characteristic points) in the cloud and matching them to a set of previously saved ones. The overall task for this assignment is to fuse the individual XYZ point clouds to create as much of a complete 3D model as possible.

A point cloud is a data structure used to represent a collection of multi-dimensional points and is commonly used to represent three-dimensional data. Some of the most popular and useful density estimation techniques are mixture models such as Gaussian mixtures (sklearn).
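The two distortion measures can be sketched in NumPy (a minimal illustration; the function name and demo data are assumptions, not from any particular codec):

```python
import numpy as np

def p2p_and_p2plane(point, cloud, normals):
    """Point-to-point and point-to-tangent-plane distances from `point` to
    `cloud`; normals[i] is the unit surface normal at cloud[i]."""
    diffs = cloud - point                      # vectors to every cloud sample
    d2 = np.einsum('ij,ij->i', diffs, diffs)   # squared Euclidean distances
    i = np.argmin(d2)                          # nearest neighbor y* on the cloud
    p2p = np.sqrt(d2[i])
    p2plane = abs(np.dot(point - cloud[i], normals[i]))  # distance to tangent plane at y*
    return p2p, p2plane

# demo: a flat patch in the z = 0 plane, query point 0.5 above it
cloud = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
p2p, p2plane = p2p_and_p2plane(np.array([0.0, 0.0, 0.5]), cloud, normals)
```

For a point directly above a planar patch, both measures agree; for oblique offsets P2C is smaller, which is why it is the preferred measure near smooth surfaces.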
In order to leverage the resulting point clouds, reliable estimation of the normal vector for each point in a cloud is a fundamental task for applications such as point-based rendering, 3D surface reconstruction, feature detection, and object segmentation. Many applications that process point cloud data benefit from a reliable normal estimation step. Since the interference of noise on normal estimation is well studied in general, we focus on the outliers and sharp features in the point cloud.

One descriptor obtains its feature vector by applying discrete cosine and Fourier transforms to an NxM array of real numbers representing the projection distances of the points in the input cloud to a disc around the point of interest. The input in a picking task is the scene, the corresponding point cloud, and the index of the desired part to be picked. The toolbox includes motion estimation algorithms, such as optical flow, block matching, and template matching.

Experimental results for real mobile laser scanning point cloud data consisting of planar and non-planar complex object surfaces show the proposed robust methods are more accurate and robust. Principal component analysis can be used to estimate the normal vector of each point in a point cloud. PyMesh is a geometry processing library for Python. Reference: Ming Liu, Francois Pomerleau, Francis Colas and Roland Siegwart, "Normal Estimation for Pointcloud using GPU based Sparse Tensor Voting," IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012.
The direction of each normal vector can be set based on how you acquired the points. The toolbox also provides point cloud registration, geometrical shape fitting to 3-D point clouds, and the ability to read, write, store, display, and compare point clouds. In this example, the region of interest is the annular region with the ground and ceiling removed. Robust plane-fitting methods can be used to extract a plane, or the best-fitting planes, from a 3D point cloud.

In PCL, setSearchMethodTarget(const KdTreePtr &tree, bool force_no_recompute=false) provides a pointer to the search object used to find correspondences in the target cloud. Backface culling gives only a slight speed-up, because of the added set-checking overhead, but may help more at high point counts.

Reliable normals also underpin point-based rendering [RL00, ZPVBG01, ABCO03], just to name a few applications. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards.
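The covariance/PCA normal estimate that recurs throughout this page can be sketched as follows (a minimal NumPy illustration with a brute-force neighbor search; real implementations use a k-d tree, and the data here are made up):

```python
import numpy as np

def estimate_normal(cloud, idx, k=6):
    """PCA normal at cloud[idx]: the eigenvector of the neighborhood
    covariance matrix with the smallest eigenvalue."""
    d2 = np.sum((cloud - cloud[idx])**2, axis=1)
    nbrs = cloud[np.argsort(d2)[:k]]       # k nearest neighbors (incl. the point)
    w, v = np.linalg.eigh(np.cov(nbrs.T))  # eigenvalues in ascending order
    return v[:, 0]                         # direction of least variance = normal

# demo: exact samples of the plane z = 0, so every normal is +/-(0, 0, 1)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.random(50), rng.random(50), np.zeros(50)])
n = estimate_normal(pts, 0)
```

Note the sign ambiguity: PCA only determines the normal up to +/-, so an orientation convention is needed afterwards.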
Measuring fine details produces some errors; if the resulting irregular data are used directly for surface reconstruction, the reconstructed surface will be rough or contain holes. Resampling the data addresses this: the missing parts of the surface are reconstructed by high-order polynomial interpolation over the surrounding data points. Using principal components analysis to determine the best-fitting plane from locations in a point cloud (posted by Elliot Noma, September 29, 2015): 3-D scanners determine locations of points on surfaces, thereby creating a point cloud in 3-dimensional space.

A covered parking area is modeled by the point cloud in Figure 8b, whose samples are referred to a local system of coordinates with origin at the center of the first slice of points. The theoretical computational complexity of the Point Feature Histogram (see Point Feature Histograms (PFH) descriptors) for a given point cloud with n points is O(nk^2), where k is the number of neighbors for each point.

In KDE we use a kernel function that weights each data point depending on how far it is from the evaluation point x. Numerical and graphical validations are presented, showing the efficacy of the method. Point cloud color is specified as an M-by-3 or M-by-N-by-3 array. See also: "Point-cloud analysis for semantic labelling using Tensor Voting," and "Fast and Robust Normal Estimation for Point Clouds with Sharp Features," Alexandre Boulch and Renaud Marlet, University Paris-Est, LIGM (UMR CNRS), Ecole des Ponts ParisTech, Symposium on Geometry Processing 2012.
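The kernel-weighting idea can be sketched with a one-dimensional Gaussian kernel (a toy illustration; the bandwidth h and data are arbitrary choices):

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Kernel density estimate at x: the average of Gaussian bumps of
    bandwidth h centered on each data point."""
    u = (x - data) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel values
    return k.sum() / (len(data) * h)

# a single data point at 0 with h = 1 gives the standard normal peak 1/sqrt(2*pi)
density = gaussian_kde(0.0, np.array([0.0]), 1.0)
```

Points far from x contribute almost nothing, which is exactly the "weight by distance" behavior described above.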
An exploration of the definition of patches on point cloud data in the spirit of [13], and its use in the context of patch-space Beltrami filtering. Figure 1 illustrates the geometric relationship between depth and surface normals. "On the normal vector estimation for point cloud data from smooth surfaces" (Computer-Aided Design) addresses reliable estimation of the normal vector at a discrete data point in a scanned cloud. The Estimate Normals of Point Cloud example shows how to set the direction when the normal vectors should point towards the sensor. One can estimate the normal curvature from the positions and normal vectors of two points, the object point and one of its neighbors. To use a user-defined viewpoint, use the method setViewPoint. In order to find the normals from a point cloud, you need to either 1) fit some sort of surface to your point cloud and then use surfnorm on it, or 2) estimate them locally from neighboring points. "Object Proposal Using 3D Point Cloud for DRC-HUBO+," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 11, 2016. If more than one data point falls inside the same bin, we stack the boxes on top of each other.
Overviews of existing point cloud normal vector estimation algorithms analyze their principles and key technologies, examine their ability to deal with noise, outliers, and sharp features, give comparisons, and finally provide some suggestions for future research. In WP1 the basic functionalities needed for a new Point Cloud Spatial Database Management System are identified. What is the difference between the home and professional versions of MATLAB? MATLAB Home offers you the full capabilities of MATLAB. A common question: "I have a 3D point cloud of a topography and I convert it to STL file format by means of a function in MATLAB." Surface matching algorithms work through 3D features. getInputTarget returns a pointer to the input point cloud dataset target. Extract the relevant data from each point cloud. After this method, the normal estimation method uses the sensor origin of the input cloud. Numerous algorithms rely on accurate normal estimation, such as point-based rendering, surface reconstruction, 3D piecewise-planar reconstruction, and 3D point cloud segmentation [11]. We present a fast and practical approach for estimating robust normal vectors in unorganized point clouds.
This is a class for processing point clouds of any size in Matlab. An early observation (1976) is that the normals of the points on a cylinder trace a great circle on the Gaussian sphere. Given a set of points that are noisy samples of a smooth curve in the plane, we can use the following method to estimate the normal to the curve at each of the sample points. The differences are displayed using a blending of magenta for point cloud A and green for point cloud B. For the sample point cloud file given, plot the normals. The Matlab example linked in the discussion page for this problem (the link above for local normal estimation) shows how to perform that. To improve the accuracy and efficiency of registration, consider downsampling the point clouds first. In this article, a point-wise normal estimation network for three-dimensional point cloud data called NormNet is proposed. We select these local features and compare their performance on point clouds of household objects. The input depth map is matched with a set of pre-captured motion exemplars.
Point Cloud (PCL) is a heavily templated API, and consequently mapping it into Python using Cython is challenging. The main contribution of the thesis is the use of novel hardware and software technologies such as Kinect, the Point Cloud Library, and the CImg Library. I would like to determine (estimating will also do) the surface normals of each point, and then find the tangent plane at that point. For normal estimation of 3D point cloud models, a local plane approximation based on PCA can first be used to obtain a preliminary normal estimate. The main goal of the project is the study of various reconstruction algorithms and the creation of a 3D model of an object from a point cloud. Instead, the method estimates the local geodesic neighborhood around each point in the cloud. Some methods target normal estimation of cylindrical point clouds obtained by scanning. In other words, it is a segmented point cloud of an object from a certain view. Higher-order Voronoi diagrams also subdivide space.
These registration algorithms are based on the Coherent Point Drift (CPD) algorithm, the Iterative Closest Point (ICP) algorithm, and the Normal-Distributions Transform (NDT) algorithm, respectively. Given a point cloud and a query point, estimate the surface normal by performing an eigendecomposition of the covariance matrix created from the nearest neighbors of the query point within a fixed radius. Motion estimation is the process of determining the movement of blocks between adjacent video frames. Dataset of Omnidirectional Camera with Vicon Ground Truth.
"Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data," Abdul Nurunnabi, Geoff West, David Belton, Department of Spatial Sciences, Curtin University, Perth, Western Australia, Australia.

Exercise 1: Iterative Closest Point (ICP) Algorithm. In this exercise you will use a standard ICP algorithm with the point-to-point distance metric to estimate the transform between the 2D datasets (model: red; target: green) depicted in the figure below. The aim of co-registration is to merge the overlapping point clouds by estimating the spatial transformation parameters. Dey, Li, and Sun (The Ohio State University) likewise note that many applications that process point cloud data benefit from a reliable normal estimation step. "Deep Learning for Robust Normal Estimation in Unstructured Point Clouds," Alexandre Boulch (ONERA) and Renaud Marlet (LIGM, Ecole des Ponts): normal estimation in point clouds is a crucial first step for numerous algorithms, from surface reconstruction onward.
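A minimal point-to-point ICP sketch in NumPy for the 2D exercise above (brute-force matching plus a closed-form SVD alignment; this is an illustration under synthetic data, not the exercise's reference solution):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Point-to-point ICP (2D): alternately match each source point to its
    nearest target point, then solve the best rigid transform in closed
    form (Kabsch/SVD), accumulating the composite (R, t)."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbor correspondences (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :])**2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of cur onto matched
        mc, mm = cur.mean(0), matched.mean(0)
        H = (cur - mc).T @ (matched - mm)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:   # guard against a reflection
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mm - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti
    return R, t

# demo: recover a small rigid motion applied to a grid (synthetic data)
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[i, j] for i in range(3) for j in range(3)], float)
dst = src @ R_true.T + np.array([0.1, -0.05])
R, t = icp_2d(src, dst)
```

With a motion this small the nearest-neighbor correspondences are correct from the first iteration, so the closed-form step recovers the transform exactly; with larger motions ICP needs a good initial guess.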
Fitting of a polynomial using the least squares method: approximating a dataset with a polynomial equation is useful when conducting engineering calculations, as it allows results to be quickly updated when inputs change without manual lookup of the dataset. At the beginning of the thesis, an overview of prior work in the field of head pose estimation is provided. "Robust Normal Estimation using Order-k Voronoi Covariance," Louis Cuel, Jacques-Olivier Lachaud, Quentin Mérigot, Boris Thibert: we present a robust method to estimate normals, curvature directions, and sharp features from an unorganized point cloud approximating a hypersurface in R^n. PyMesh provides a set of common mesh processing functionalities and interfaces with a number of state-of-the-art open-source packages to combine their power seamlessly under a single development environment. Choosing a small number h, h represents a small change in x, and it can be either positive or negative. In MATLAB, single(2^24) has the same value as single(2^24 + 1), because single precision carries only 24 significand bits. The pcl_filters library contains outlier and noise removal mechanisms for 3D point cloud data filtering applications.
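The least-squares polynomial fit can be sketched via the normal equations (a NumPy illustration; the quadratic demo data are made up):

```python
import numpy as np

def polyfit_normal_eqs(x, y, deg):
    """Least-squares polynomial fit via the normal equations
    (A^T A) c = A^T y, where A is the Vandermonde matrix."""
    A = np.vander(x, deg + 1)            # columns: x^deg, ..., x, 1
    return np.linalg.solve(A.T @ A, A.T @ y)

# demo: exact quadratic data y = 2x^2 - 3x + 1
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = 2 * x**2 - 3 * x + 1
c = polyfit_normal_eqs(x, y, 2)
```

For high degrees or wide x-ranges the normal equations become ill-conditioned, and a QR-based solver (e.g. numpy.polyfit) is preferable.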
(1) We first construct a graph based on the 3D coordinates of a point cloud. This is especially true if the reference cloud has a low density or has big holes. "A Quantitative Evaluation of Surface Normal Estimation in Point Clouds," Krzysztof Jordan and Philippos Mordohai: we revisit a well-studied problem in the analysis of range data, surface normal estimation for a set of unorganized points. The library also contains camera calibration: finding the rotation matrix from vanishing points (pan, tilt, roll estimation), camera position, focal length, and non-isotropic scaling. Six points may not work under all circumstances. In a convex combination, each point is assigned a weight or coefficient in such a way that the coefficients are all non-negative and sum to one, and these weights are used to compute a weighted average of the points. The binding is written in Cython and implements enough of the hard parts of the API. The input is a) point cloud data with a normal for each point, and b) a set of points at which the curvatures need to be computed.
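The convex combination described above can be computed directly (a toy sketch; the function name and demo points are hypothetical):

```python
import numpy as np

def convex_combination(points, weights):
    """Weighted average of points with non-negative weights summing to one."""
    w = np.asarray(weights, float)
    assert (w >= 0).all() and abs(w.sum() - 1.0) < 1e-12
    return w @ np.asarray(points, float)

# demo: a point inside the triangle with vertices (0,0), (2,0), (0,2)
p = convex_combination([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]], [0.5, 0.25, 0.25])
```

Because the weights are non-negative and sum to one, the result always lies inside the convex hull of the input points.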
The point cloud is then divided into clusters by grouping points (Algorithm 1, pseudocode): form a k-d tree from the point cloud; while unclustered points remain, randomly select a point p_i and check whether it has already been clustered. The following Matlab project contains the source code and Matlab examples used for computing point cloud normal vectors. I am working on downsampling point clouds and normal estimation. From a geometric modeling course (Daniele Panozzo), normal estimation: assign a normal vector n at each point cloud point x and estimate its direction by fitting a local plane. Fast Point Feature Histograms (FPFH) descriptors. In some scenarios, such as [3], the input is a point cloud representing a single object, and the goal is to decompose the object into patches. The tricky bits are the normal estimation and the scale estimation for the descriptor. Point cloud input is segmented, then patches of leaf surface are grown using the level set method. This is optional; if it is not set, only the data in the input cloud are used to estimate the features. It is time to learn the basics of one of the most interesting applications of point cloud processing: 3D object recognition. This paper presents one such technique, a new region-growing algorithm for the automated segmentation of both planar and non-planar surfaces in point clouds. This tutorial shows how to estimate the vertex normals from a set of points.
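Downsampling before normal estimation is often done with a voxel grid; here is a minimal NumPy sketch (one centroid per occupied voxel; the voxel size and demo points are arbitrary assumptions):

```python
import numpy as np

def voxel_downsample(cloud, voxel):
    """Voxel-grid downsampling: bucket points into cubic voxels of edge
    length `voxel` and replace each voxel's points by their centroid."""
    keys = np.floor(cloud / voxel).astype(np.int64)      # integer voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inv.max() + 1, cloud.shape[1]))
    cnt = np.bincount(inv).astype(float)
    for d in range(cloud.shape[1]):                       # centroid per voxel, per axis
        out[:, d] = np.bincount(inv, weights=cloud[:, d]) / cnt
    return out

# demo: two points share a voxel, the third is alone
cloud = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [1.1, 1.1, 1.1]])
ds = voxel_downsample(cloud, 1.0)
```

Smaller voxels preserve more detail; larger voxels give more aggressive reduction before the (relatively expensive) normal estimation step.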
(A point's confidence is defined by the magnitude of its normal.) The main idea is based on the observation that, compared with points around sharp features, it is relatively easier to obtain accurate normals for points within smooth regions. "Adaptive Neighborhood Selection for Real-Time Surface Normal Estimation from Organized Point Cloud Data Using Integral Images." Naturally, then, many point cloud libraries involve the calculation of local normal vectors, either over the entire point set or for a subset of it. We propose novel methods for estimating the normal of a surface patch when the affine transformation between two perspective images is known. The problem is to infer the local orientation of the unknown surface underlying a point cloud. The normal vector is one of the important properties of 3D point cloud data, and estimation methods are an active research topic in the field. However, the surface of a 3D model is usually not smooth everywhere; it is more likely to be piecewise smooth. Point cloud filename is specified as a character vector or a scalar string. Code for Nesti-Net, a normal estimation method for unstructured 3D point clouds, is now available.
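Because an estimated normal is defined only up to sign, a common convention orients each one toward the sensor viewpoint; a sketch (the helper name and demo data are hypothetical):

```python
import numpy as np

def orient_normals(points, normals, viewpoint):
    """Flip each normal so it points toward the viewpoint: keep n_i when
    dot(viewpoint - p_i, n_i) >= 0, otherwise use -n_i."""
    flip = np.einsum('ij,ij->i', viewpoint - points, normals) < 0
    out = normals.copy()
    out[flip] *= -1
    return out

# demo: points on the z = 0 plane with mixed signs, sensor above at z = 5
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])
oriented = orient_normals(pts, norms, np.array([0.0, 0.0, 5.0]))
```

After the flip, all normals face the sensor, which is the convention assumed by most rendering and registration pipelines.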
pcshowpair(ptCloudA, ptCloudB) creates a visualization depicting the differences between the two input point clouds. However, the output of this method is a new consolidated point cloud, so the normals corresponding to the original points are not computed. Previous registration methods can be divided into three categories: ICP (Iterative Closest Point), soft-assignment methods, and probabilistic methods. I don't see any way of using this function directly on a point cloud. I had written my own code back then, and I suspect a Matlab implementation of spin-images is fairly straightforward if you have several years of hacking experience. Abstract: this paper presents a novel system to estimate body pose configuration from a single depth map. Normal estimation in the plane: in this section, we consider the problem of approximating the normals to a point cloud in 2D. A related question: how to calculate the volume of a 3D point cloud with concave parts in Matlab. "Normal estimation for point clouds: a comparison study for a Voronoi-based method": many applications that process point cloud data benefit from a reliable normal estimation step.
The elevation range here is 100 degrees, but it can be adjusted to show the whole span of the cloud or just a desired part. "Normal estimation of scattered point cloud with sharp feature," Yuan Xiao-cui, Wu Lu-shen, Chen Hua-wei. "Line extraction from LIDAR point cloud using Hough transform," Prajwal Shanthakumar. Instead of adding things in the standard "stacked" manner, I would try to convert each number into a power of 10. A simple two-point estimation is to compute the slope of a nearby secant line through the points (x, f(x)) and (x + h, f(x + h)). A method for in-process surface normal estimation from point cloud data is presented.
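The secant-slope estimate is a one-liner (forward difference; the function f, the point x, and the step h below are arbitrary examples):

```python
def secant_slope(f, x, h):
    """Two-point (forward-difference) estimate of f'(x): the slope of the
    secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

# derivative of x^2 at x = 3 is 6; the forward difference gives 6 + h
est = secant_slope(lambda x: x * x, 3.0, 1e-6)
```

Shrinking h reduces the truncation error (which is proportional to h) until floating-point cancellation in the numerator starts to dominate.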
"Fast and Accurate Motion Estimation using Orientation Tensors and Parametric Motion Models," Gunnar Farnebäck, Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden. When the function fills the Normal property, it uses 6 points to fit the local plane. Using an inbuilt MATLAB function, we created a k-d tree representation of the full lidar point cloud. For point cloud models, the normal of a point depends on the points in its vicinity, usually a neighborhood centered at the point. Some methods perform normal estimation at each point independently, which can make their predictions inconsistent because the close underlying geometric relationship is not considered. A common question: "Hi, I want to estimate the normals of the point clouds using integral images."
2.5D interpolated surfaces (Kemeny et al.).

As the figure above shows, the algorithm consists of five phases: (1) for each point p_i, the K_0-nearest neighborhood N_i is computed and an initial normal vector is estimated by covariance analysis of N_i; …

The toolbox also provides point cloud registration, geometrical shape fitting to 3-D point clouds, and the ability to read, write, store, display, and compare point clouds. Any feature estimation class will attempt to estimate a feature at every point in the given input cloud that has an index in the given indices list.

The problem arises from first summing up (potentially large) values, then dividing and doing some subtraction.

Lidar and Point Cloud Processing.

• Flux (F; measured on a surface normal to the beam) per unit solid angle (ω) traveling in a particular direction.
• Typical units: watts per square meter per steradian (W m⁻² sr⁻¹).
• Conservation of intensity: intensity (radiance) does not decrease with distance from the source (within a vacuum or other transparent medium).

The classical iterative closest point algorithm (ICP) [2] estimates the motion parameters by minimizing Euclidean distances between point correspondences. The task is to be able to match partial, noisy point clouds in cluttered scenes, quickly.

And my question is: if that is the case, how do people do registration or stitching? Almost all the features need normals as the input!

A point cloud is a set of points in 3-D space.
Since the equations generated by these methods will tend to be well conditioned, the normal equations are not a bad choice of method to use.

The point clouds are aligned using multi-scale matching, and an iterative filtering method is then used for outlier detection on the resultant point cloud.

The main contribution of the thesis is the use of novel hardware and software technologies: the Kinect, the Point Cloud Library, and the CImg library.

We propose a normal estimation method for unstructured 3D point clouds. First, though, it is important to note that we are talking about local neighborhoods of the points in question.

An empty vector means that all points are candidates to sample in the RANSAC iteration to fit the plane.
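The RANSAC plane fit just mentioned (repeatedly sample three points, build a candidate plane, keep the candidate with the most inliers) can be sketched in NumPy. The function name and parameters below are illustrative, not any library's API:

```python
import numpy as np

def ransac_plane(points, n_iters=100, threshold=0.01, rng=None):
    """Fit a plane n.x + d = 0 to a point cloud with RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)          # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ a
        inliers = np.sum(np.abs(points @ n + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Sanity check: points on z = 0 should yield a plane with normal ≈ ±(0, 0, 1).
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       np.zeros(200)])
(n, d), count = ransac_plane(pts)
print(np.abs(n), count)  # ≈ [0. 0. 1.] and 200 inliers
```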
Point-based rendering [RL00, ZPVBG01, ABCO03], just to name a few.

In this paper, a novel lung motion method based on the non-rigid registration of point clouds is proposed; the tangent-plane distance is used for the distance term, which describes the difference between two point clouds.

Therefore, you can specify the same color for all points or a different color for each point. The input depth map is matched with a set of pre-captured motion exemplars to generate a…

Normal estimation: reliable estimation of normal vectors at each point in a scanned point cloud has become a fundamental step in point cloud data processing. The normal estimation in normal_3d… The main goal of the project is the study of various reconstruction algorithms and the creation of a 3D model of an object from a point cloud. Abstract: in this paper, we propose a normal estimation method for unstructured 3D point clouds.

Fitting of a polynomial using the least squares method. Summary: approximating a dataset using a polynomial equation is useful when conducting engineering calculations, as it allows results to be quickly updated when inputs change, without the need for manual lookup of the dataset.

Ming Liu, Francois Pomerleau, Francis Colas and Roland Siegwart, "Normal Estimation for Pointcloud using GPU based Sparse Tensor Voting," IEEE International Conference on Robotics and Biomimetics (ROBIO), 2012.

A fast pose estimation technique uses the normals from low-resolution depth images. There are other methods which do not belong to the two groups.
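The least-squares polynomial fitting mentioned above is a one-liner in most numerical environments. A small NumPy sketch with exact (noise-free) data, so the fit should recover the generating coefficients:

```python
import numpy as np

# Least-squares fit of a quadratic to samples of y = 1 + 2x + 3x^2.
x = np.linspace(-1.0, 1.0, 21)
y = 1.0 + 2.0 * x + 3.0 * x ** 2
coeffs = np.polyfit(x, y, deg=2)   # returns highest power first
print(np.round(coeffs, 6))         # ≈ [3. 2. 1.]
```

With noisy data the recovered coefficients are only approximate, which is exactly the "quickly updated when inputs change" use case the summary describes.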
Abstract (Navab et al.): in this paper we present two real-time methods for estimating surface normals from organized point cloud data. These algorithms create motion vectors, which relate to the whole image, blocks, arbitrary patches, or individual pixels.

CS 231A section, Computer Vision Libraries Overview (Amir Sadeghian): an open project for 2D/3D images and point clouds, with C, Python, Java, and MATLAB interfaces.

θ is the angle between the surface normal at Q and the normal of the ground-truth plane. RSD [14] describes the geometry of points in a local neighborhood by estimating their radial relationships. [11] first clustered the point cloud in the normal space and further clustered each group by its distance to the origin.

The code: first, create a file, let's say, normal_estimation_using_integral_images… However, the issue of accuracy based on rasterization may still…
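The covariance-based estimate described earlier (fit a local plane to the k nearest neighbors; the eigenvector of the smallest eigenvalue of the neighborhood covariance is the normal) can be sketched in a few lines of NumPy. Function and parameter names here are illustrative, not from any of the cited libraries:

```python
import numpy as np

def estimate_normal(points, query_idx, k=6):
    """Estimate the normal at points[query_idx] by covariance
    analysis (PCA plane fit) of its k nearest neighbors."""
    p = points[query_idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]]       # k nearest neighbors (brute force)
    cov = np.cov(nbrs.T)                   # 3x3 covariance of the neighborhood
    eigvals, eigvecs = np.linalg.eigh(cov) # eigenvalues in ascending order
    return eigvecs[:, 0]                   # least-variance direction = normal

# Noise-free sanity check: points on z = 0 should give n ≈ ±(0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       np.zeros(50)])
n = estimate_normal(pts, 0)
print(np.abs(n))  # ≈ [0. 0. 1.]
```

A real implementation would use a k-d tree for the neighbor query (as the MATLAB fragment above does) rather than a brute-force sort.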
# Regarding parity conservation in the decay $\omega \to \pi^0 \,\pi^+\, \pi^-$

I'm somewhat confused by this decay. The associated vertex seems to be related to QCD residual terms contributing to the nuclear force, so it should conserve isospin and parity. However, given the latter, the pseudoscalar and vector nature of the particles involved would imply the identity $$(-1)=(-1)^{L+1}$$ with $L$ the orbital angular momentum of the three-$\pi$ system in the final state. Since angular momentum conservation is sacred, for it arises from Lorentz invariance itself, one would expect to have $$L=1,$$ which seems to contradict parity conservation. Is something wrong in this argument? Or is this decay of a different nature than I'm assuming? And if the latter is the answer, how could one describe such a vertex from those in the Standard Model? P.S. Bear with me, this is my first question here. Thanks in advance.

Your mistake comes from your treatment of the orbital angular momentum in a three-body decay. You have to take into account the orbital angular momentum $L_1$ between two of the pions and the orbital angular momentum $L_2$ between the third pion and the barycenter of the first two. Conservation of the total angular momentum imposes $$\vec{1} = \vec{L_1} + \vec{L_2}.$$ Restricting $L_1$ and $L_2$ to the lowest values, this is possible with $L_1=0$ and $L_2=1$, or $L_1=1$ and $L_2=0$, or $L_1=1$ and $L_2=1$ (the latter being possible since $\vec{1}+\vec{1} = \vec{2}, \vec{1}$, or $\vec{0}$). The only way to conserve parity is to choose $L_1=L_2=1$, since the parity of a three-body system is $$\eta_{\omega} = (\eta_\pi)^3 \times (-1)^{L_1} \times (-1)^{L_2}.$$ This decay is very similar to that of the $\omega(1420)$, which occurs via the $\rho$ resonance: $$\omega(1420) \to \rho + \pi$$ $$\rho \to \pi + \pi$$ the $\rho$ having spin 1. The $\rho$ is equivalent to the two-pion system with $L_1=1$.
The orbital angular momentum between the $\rho$ and the $\pi$ is then $L_2=1$.
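The $(L_1, L_2)$ counting above can be checked mechanically. A small sketch (the search range and helper names are mine, not standard notation):

```python
# Which (L1, L2) pairs let omega (J^P = 1^-) decay to three pions?
# L1: orbital angular momentum inside a chosen pion pair,
# L2: orbital angular momentum of the third pion about that pair.

def total_J_reachable(L1, L2, J=1):
    # Pions are spinless, so total J must come from L1 (+) L2:
    # |L1 - L2| <= J <= L1 + L2.
    return abs(L1 - L2) <= J <= L1 + L2

def parity_conserved(L1, L2):
    # Final-state parity (eta_pi)^3 (-1)^L1 (-1)^L2 must equal eta_omega = -1.
    return (-1) ** (3 + L1 + L2) == -1

allowed = [(L1, L2) for L1 in range(3) for L2 in range(3)
           if total_J_reachable(L1, L2) and parity_conserved(L1, L2)]
print(allowed)  # [(1, 1), (2, 2)] -- the lowest option is L1 = L2 = 1
```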
## University Calculus: Early Transcendentals (3rd Edition) $\mathrm{Domain:}\ \ (-\infty,\infty)$ Given function: $\quad f(x)=5-2x$ The function is defined for any value of $\ x\$, so the domain is $\ (-\infty,\infty).$ The given function is linear and represents a line. We need at least two points to graph it: just pick 2 values of $\ x\$, find their corresponding $\ y\$ values, and draw the line through those two points. When $\ x=0,\ y=5-0=5$, so our first point is $\ (0,5)$. When $\ x=2,\ y=5-4=1$, so our second point is $\ (2,1)$. So, here is our graph.
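As a quick check, the two sample points can also be computed programmatically (purely illustrative):

```python
# Evaluate f(x) = 5 - 2x at the two chosen x values.
f = lambda x: 5 - 2 * x
points = [(x, f(x)) for x in (0, 2)]
print(points)  # [(0, 5), (2, 1)]
```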
# Question 25 (2 points)

What is the equation of the sphere that is centered at (3, 7) and has surface area 1677? Provide your answer below.
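Whatever the garbled surface-area value above was meant to be, the route from surface area to the sphere's equation is the same: $S = 4\pi r^2$, so $r = \sqrt{S/(4\pi)}$, and the equation is $(x-a)^2+(y-b)^2+(z-c)^2=r^2$ for center $(a,b,c)$. A small sketch (the $16\pi$ value below is purely illustrative, not a reading of the original):

```python
import math

def radius_from_surface_area(S):
    # S = 4*pi*r^2  =>  r = sqrt(S / (4*pi))
    return math.sqrt(S / (4 * math.pi))

# Example: surface area 16*pi gives r = 2, so the sphere's equation is
# (x - a)^2 + (y - b)^2 + (z - c)^2 = 4 for center (a, b, c).
r = radius_from_surface_area(16 * math.pi)
print(r)  # 2.0
```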
# Transforming between contra- and covariant vectors

• I

Hi. The book I am using gives the following equations for the Lorentz transformations of contravariant and covariant vectors: $$x'^\mu = \Lambda^\mu{}_\nu x^\nu \qquad (1)$$ $$x'_\mu = \Lambda_\mu{}^\nu x_\nu \qquad (2)$$ where the two Lorentz transformation matrices are the inverses of each other. I am trying to get equation 2 from equation 1, but if I lower the index on the LHS of (1) using the metric ##g_{\rho\mu}## and apply it to both sides of (1), I get ##x'_\rho = \Lambda_{\rho\nu} x^\nu##. Then I'm stuck, because how can I lower the ##\nu## on ##x^\nu##? ##\nu## is already repeated twice on the RHS, so I can't use a metric with ##\nu## in it. Thanks

Orodruin
Staff Emeritus
Homework Helper
Gold Member
The relation between the contravariant and covariant components is ##x^\nu = g^{\nu\mu}x_\mu##.

stevendaryl
Staff Emeritus
You have ##x'^\mu = \Lambda^\mu{}_\nu x^\nu \Rightarrow g^{\mu \rho} x'_\rho = \Lambda^\mu{}_\nu g^{\nu \lambda} x_\lambda##. Now, you operate on both sides with the ##g## to get: ##x'_\rho = g_{\mu \rho} \Lambda^\mu{}_\nu g^{\nu \lambda} x_\lambda##. So the transformation matrix for the lowered components is ##g_{\mu \rho} \Lambda^\mu{}_\nu g^{\nu \lambda}##. The final step is to realize that ##g_{\mu \rho} \Lambda^\mu{}_\nu g^{\nu \lambda} = \Lambda_\rho{}^\lambda##. That might seem obvious, but it's actually not, because ##\Lambda## is not a tensor; the two indices refer to different coordinate systems.
So it's not immediately obvious that you can raise and lower indices the way you could with a tensor.

dyn
Thanks for your reply. Can you explicitly explain this step for me? What do I multiply each side of the equation by, in terms of indices?

Orodruin
Staff Emeritus
Homework Helper
Gold Member
You are not to multiply each side by anything. You are to insert the relation given in #2, which @stevendaryl did for you explicitly in the first line of #3.

dyn
I don't understand that step.

Orodruin
Staff Emeritus
Homework Helper
Gold Member
It is just an insertion of a known relation.

dyn
I presume you mean the relation ##g^{\alpha\beta}g_{\alpha\gamma} = \delta^\beta{}_\gamma##? So that means I multiply each side of the equation by ##g_{\mu\alpha}##?

stevendaryl
Staff Emeritus
Let me write it without any indices. I think you will find that there is only one way to put in indices so that it makes sense.
1. ##x' = \Lambda x##
2. Let's write: ##x = g^{-1} g x##
3. Substituting this expression for ##x## into equation 1: ##x' = \Lambda g^{-1} g x##
4. Operate on 3 using ##g##: ##g x' = g \Lambda g^{-1} g x##
5. Now, let's define the combination: ##\widetilde{x} \equiv g x##
6. Also, ##\widetilde{x'} \equiv g x'##
7. And ##\widetilde{\Lambda} \equiv g \Lambda g^{-1}##
8. So the covariant transformation law is: ##\widetilde{x'} = \widetilde{\Lambda} \widetilde{x}##
There is really only one way that you can insert indices into equations 1-8 so that it makes sense.

dyn
Thanks for that, but it's the placement of indices in each step that confuses me. Was the following statement correct?
I presume you mean the relation ##g^{\alpha\beta}g_{\alpha\gamma} = \delta^\beta{}_\gamma##? So that means I multiply each side of the equation by ##g_{\mu\alpha}##?

stevendaryl
Staff Emeritus
As I said, there really is only one way to do the indices so that it makes sense. But yes: you start with ##x'^\mu = \Lambda^\mu{}_\nu x^\nu## and multiply both sides by ##g_{\mu \alpha}## (and sum over ##\mu##). Then you rewrite ##x^\nu## as ##g^{\nu \lambda} x_\lambda##.

dyn
I always see ##\Lambda## as a 4×4 matrix, and I always thought a matrix was a tensor of rank 2. I see ##\Lambda^\mu{}_\nu## as the entry on row ##\mu## and column ##\nu## of that matrix, but I'm unsure how the rows and columns relate to the inverse ##\Lambda_\mu{}^\nu##.

Nugatory
Mentor
The individual components of a rank-2 tensor can be represented as a matrix, but not all matrices are representations of rank-2 tensors. The components of a tensor transform in a particular way when you change coordinate systems; not all matrices have that property. Also, when you represent a tensor as a matrix, you lose the distinction between contravariant and covariant components.
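The point in the thread, that the lowered-component transformation matrix is ##g \Lambda g^{-1}## and, for a Lorentz transformation, equals ##(\Lambda^{-1})^T##, can be checked numerically. A minimal sketch (the boost speed β = 0.6 and the sample vector are arbitrary choices):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

beta = 0.6                             # boost along x; arbitrary test value
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
Lam = np.array([[gamma, -gamma * beta, 0, 0],
                [-gamma * beta, gamma, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

x = np.array([1.0, 2.0, 3.0, 4.0])     # contravariant components, arbitrary
x_prime = Lam @ x                      # x'^mu = Lam^mu_nu x^nu
x_low = g @ x                          # covariant components x_mu = g_mu_nu x^nu

# Transformation matrix for the lowered components: g Lam g^{-1}
Lam_cov = g @ Lam @ np.linalg.inv(g)

print(np.allclose(Lam_cov @ x_low, g @ x_prime))   # True: covariant law holds
print(np.allclose(Lam_cov, np.linalg.inv(Lam).T))  # True: equals inv(Lam)^T
```

The second check is the "inverses of each other" statement from the opening post: it follows from the defining property ##\Lambda^T g \Lambda = g##.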
# dask_ml.model_selection.HyperbandSearchCV¶ class dask_ml.model_selection.HyperbandSearchCV(estimator, parameters, max_iter=81, aggressiveness=3, patience=False, tol=0.001, test_size=None, random_state=None, scoring=None, verbose=False, prefix='', predict_meta=None, predict_proba_meta=None, transform_meta=None) Find the best parameters for a particular model with an adaptive cross-validation algorithm. Hyperband will find close to the best possible parameters with the given computational budget* by spending more time training high-performing estimators [1]. This means that Hyperband stops training estimators that perform poorly; at its core, Hyperband is an early-stopping scheme for RandomizedSearchCV. Hyperband does not require a trade-off between "evaluate many parameters for a short time" and "train a few parameters for a long time" like RandomizedSearchCV. Hyperband requires one explicit input, max_iter, which specifies how long to train the best-performing estimator. The other, implicit input (the Dask array chunk size) requires a rough estimate of how many parameters to sample. Specification details are in Notes. *After $$N$$ partial_fit calls, the estimator Hyperband produces will be close to the best possible estimator that $$N$$ partial_fit calls could ever produce, with high probability (where "close" means "within log terms of the expected best possible score"). Parameters estimator : estimator object An object of that type is instantiated for each hyperparameter combination. This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a score function, or scoring must be passed. The estimator must implement partial_fit, set_params, and work well with clone. parameters : dict Dictionary with parameter names (string) as keys and distributions or lists of parameters to try. Distributions must provide a rvs method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly.
max_iter : int The maximum number of partial_fit calls to any one model. This should be the number of partial_fit calls required for the model to converge. See Notes for details on setting this parameter. aggressiveness : int, default=3 How aggressive to be in culling off the different estimators. Higher values imply higher confidence in scoring (or that the hyperparameters influence the estimator.score more than the data). Theory suggests aggressiveness=3 is close to optimal; aggressiveness=4 has higher confidence and is likely suitable for initial exploration. patience : int, default=False If specified, training stops when the score does not increase by tol after patience calls to partial_fit. Off by default. A patience value is automatically selected if patience=True to work well with the Hyperband model selection algorithm. tol : float, default=0.001 The required level of improvement to consider stopping training on that model when patience is specified. Increasing tol will tend to reduce training time at the cost of (potentially) worse estimators. test_size : float Fraction of the dataset to hold out for computing test/validation scores. Defaults to the size of a single partition of the input training set. Note: the testing dataset should fit in memory on a single machine; adjust the test_size parameter as necessary to achieve this. random_state : int, RandomState instance or None, optional, default: None If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. scoring : string, callable, list/tuple, dict or None, default: None A single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set. If None, the estimator's default scorer (if available) is used.
verbosebool, float, int, optional, default: False If False (default), don’t print logs (or pipe them to stdout). However, standard logging will still be used. If True, print logs and use standard logging. If float, print/log approximately verbose fraction of the time. prefixstr, optional, default=”” While logging, add prefix to each message. predict_meta: pd.Series, pd.DataFrame, np.array, default: None (infer) An empty pd.Series, pd.DataFrame, or np.array that matches the output type of the estimator’s predict call. This meta is necessary for some estimators to work with dask.dataframe and dask.array. predict_proba_meta: pd.Series, pd.DataFrame, np.array, default: None (infer) An empty pd.Series, pd.DataFrame, or np.array that matches the output type of the estimator’s predict_proba call. This meta is necessary for some estimators to work with dask.dataframe and dask.array. transform_meta: pd.Series, pd.DataFrame, np.array, default: None (infer) An empty pd.Series, pd.DataFrame, or np.array that matches the output type of the estimator’s transform call. This meta is necessary for some estimators to work with dask.dataframe and dask.array. Attributes These dictionaries describe the computation performed, either before computation happens with metadata or after computation happens with metadata_. These dictionaries both have keys • n_models, an int representing how many models will be/is created. • partial_fit_calls, an int representing how many times partial_fit will be/is called. • brackets, a list of the brackets that Hyperband runs. Each bracket has different values for training time importance and hyperparameter importance. In addition to n_models and partial_fit_calls, each element in this list has keys • bracket, an int, the bracket ID. Each bracket corresponds to a different level of training time importance. For bracket 0, training time is important. For the highest bracket, training time is not important and models are killed aggressively.
• SuccessiveHalvingSearchCV params, a dictionary used to create the different brackets. It does not include the estimator or parameters parameters. • decisions, the number of partial_fit calls Hyperband makes before making decisions. These dictionaries are the same if patience is not specified. If patience is specified, it’s possible that less training is performed, and metadata_ will reflect that (though metadata won’t). cv_results_Dict[str, np.ndarray] A dictionary that describes how well each model has performed. It contains information about every model, regardless of whether it reached max_iter. It has keys • mean_partial_fit_time • mean_score_time • std_partial_fit_time • std_score_time • test_score • rank_test_score • model_id • partial_fit_calls • params • param_{key}, where {key} is every key in params. • bracket The values in the test_score key correspond to the last score a model received on the hold out dataset. The key model_id corresponds with history_. This dictionary can be imported into a Pandas DataFrame. In the model_id, the bracket ID prefix corresponds to the bracket in metadata. Bracket 0 doesn’t adapt to previous training at all; higher values correspond to more adaptation. history_list of dicts Information about each model after each partial_fit call. Each dict has the keys • partial_fit_time • score_time • score • model_id • params • partial_fit_calls • elapsed_wall_time The key model_id corresponds to the model_id in cv_results_. This list of dicts can be imported into Pandas. model_history_dict of lists of dict A dictionary of each model’s history. This is a reorganization of history_: the same information is present but organized per model. This data has the structure {model_id: [h1, h2, h3, ...]} where h1, h2 and h3 are elements of history_ and model_id is the model ID as in cv_results_. best_estimator_BaseEstimator The model with the highest validation score as selected by the Hyperband model selection algorithm.
best_score_float Score achieved by best_estimator_ on the validation set after the final call to partial_fit. best_index_int Index indicating which estimator in cv_results_ corresponds to the highest score. best_params_dict Dictionary of best parameters found on the hold-out data. scorer_ The function used to score models, which has a call signature of scorer_(estimator, X, y). Notes To set max_iter and the chunk size for X and y, it is required to estimate • the number of examples at least one model will see (n_examples). If 10 passes through the data are needed for the longest trained model, n_examples = 10 * len(X). • how many hyper-parameter combinations to sample (n_params) These can be rough guesses. To determine the chunk size and max_iter, 1. Let the chunk size be chunk_size = n_examples / n_params 2. Let max_iter = n_params Then, every estimator sees no more than max_iter * chunk_size = n_examples examples. Hyperband will actually sample some more hyper-parameter combinations than n_params (which is why rough guesses are adequate). For example, let’s say • about 200 or 300 hyper-parameters need to be tested to effectively search the possible hyper-parameters • models need more than 50 * len(X) examples but less than 100 * len(X) examples. Let’s decide to provide 81 * len(X) examples and to sample 243 parameters. Then each chunk will be 1/3rd the dataset and max_iter=243. If you use HyperbandSearchCV, please use the citation for [2] @InProceedings{sievert2019better, author = {Scott Sievert and Tom Augspurger and Matthew Rocklin}, title = {{B}etter and faster hyperparameter optimization with {D}ask}, booktitle = {{P}roceedings of the 18th {P}ython in {S}cience {C}onference}, pages = {118 - 125}, year = {2019}, editor = {Chris Calloway and David Lippa and Dillon Niederhut and David Shupe}, # noqa doi = {10.25080/Majora-7ddc1dd1-011} } References 1 “Hyperband: A novel bandit-based approach to hyperparameter optimization”, 2016 by L. Li, K. Jamieson, G.
DeSalvo, A. Rostamizadeh, and A. Talwalkar. https://arxiv.org/abs/1603.06560 2 “Better and faster hyperparameter optimization with Dask”, 2019 by S. Sievert, T. Augspurger, M. Rocklin. https://doi.org/10.25080/Majora-7ddc1dd1-011 Examples >>> import numpy as np >>> from dask_ml.datasets import make_classification >>> from dask_ml.model_selection import HyperbandSearchCV >>> from sklearn.linear_model import SGDClassifier >>> >>> X, y = make_classification(chunks=20) >>> est = SGDClassifier(tol=1e-3) >>> param_dist = {'alpha': np.logspace(-4, 0, num=1000), ... 'loss': ['hinge', 'log', 'modified_huber', 'squared_hinge'], ... 'average': [True, False]} >>> >>> search = HyperbandSearchCV(est, param_dist) >>> search.fit(X, y, classes=np.unique(y)) >>> search.best_params_ {'loss': 'log', 'average': False, 'alpha': 0.0080502} Methods decision_function(X) fit(X[, y]) Find the best parameters for a particular model. get_params([deep]) Get parameters for this estimator. inverse_transform(Xt) predict(X) Predict for X. predict_log_proba(X) Log of probability estimates. predict_proba(X) Probability estimates. score(X[, y]) Returns the score on the given data. set_params(**params) Set the parameters of this estimator. transform(X) Transform block or partition-wise for dask inputs. partial_fit __init__(estimator, parameters, max_iter=81, aggressiveness=3, patience=False, tol=0.001, test_size=None, random_state=None, scoring=None, verbose=False, prefix='', predict_meta=None, predict_proba_meta=None, transform_meta=None)
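The sizing rule described in the Notes can be sketched in a few lines of plain Python (a sketch only: `hyperband_inputs` is a hypothetical helper written for this example, not part of dask-ml):

```python
def hyperband_inputs(n_examples, n_params):
    """Turn the two rough guesses from the Notes into HyperbandSearchCV inputs.

    n_examples: examples the longest-trained model should see
                (e.g. 10 * len(X) for roughly 10 passes over the data).
    n_params:   how many hyper-parameter combinations to sample.
    """
    chunk_size = n_examples // n_params  # size of each Dask array chunk
    max_iter = n_params                  # partial_fit calls for the longest model
    return chunk_size, max_iter

# The worked example from the Notes: 81 * len(X) examples, 243 sampled parameters.
n = 729  # pretend len(X) == 729
chunk_size, max_iter = hyperband_inputs(81 * n, 243)
assert chunk_size == n // 3              # each chunk is 1/3rd of the dataset
assert max_iter == 243
# Every estimator sees at most max_iter * chunk_size == 81 * len(X) examples.
assert max_iter * chunk_size == 81 * n
```

Because Hyperband samples somewhat more combinations than `n_params`, both guesses only need to be in the right ballpark.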
International Economics: Theory and Policy, v. 1.0 by Steve Suranovic 17.5 PPP in the Long Run Learning Objective 1. Interpret the PPP theory as a projection of long-term tendencies in exchange rate values. In general, the purchasing power parity (PPP) theory works miserably when applied to real-world data. In other words, it is rare for the PPP relationship to hold true between any two countries at any particular point in time. In most scientific disciplines, the failure of a theory to be supported by the data means the theory is refuted and should be thrown out. However, economists have been reluctant to do that with the PPP theory. In part this is because the logic of the theory seems particularly sound. In part it’s because there are so many “frictions” in the real world, such as tariffs, nontariff barriers, transportation costs, measurement problems, and so on that it would actually be surprising for the theory to work when applied directly to the data. (It is much like expecting an object sitting on the ground to obey Newton’s laws of frictionless motion.) In addition, economists have conceived of an alternative way to interpret or apply the PPP theory to overcome the empirical testing problem. The trick is to think of PPP as a “long-run” theory of exchange rate determination rather than a short-run theory. Under such an interpretation, it is no longer necessary for PPP to hold at any point in time. Instead, the PPP exchange rate is thought to represent a target toward which the spot exchange rate is slowly drawn. This long-run interpretation requires an assumption that importers and exporters cannot respond quickly to deviations in the cost of market baskets between countries. Instead of immediate responses to price differences between countries by engaging in arbitrage—buying at the low price and selling high—traders respond slowly to these price signals.
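The slow pull toward a moving target described above can be pictured with a tiny partial-adjustment simulation (a sketch only: the adjustment speed, the drift, and the starting values are invented for illustration and are not from the text):

```python
# Partial-adjustment sketch: each period the spot rate closes a fraction
# lam of the gap between itself and the (moving) PPP target rate.
def simulate(spot0, ppp_path, lam=0.05):
    """Return the spot-rate path chasing a moving PPP target."""
    spot, path = spot0, []
    for ppp in ppp_path:
        spot += lam * (ppp - spot)  # move part-way toward the current target
        path.append(spot)
    return path

# The PPP target drifts upward each period; the spot rate starts well below it.
ppp_path = [1.0 + 0.01 * t for t in range(200)]
path = simulate(0.5, ppp_path)

# The gap shrinks toward a steady "chasing distance" but never fully closes,
# because the target keeps moving.
gaps = [ppp - s for ppp, s in zip(ppp_path, path)]
assert gaps[-1] < gaps[0]
assert all(g > 0 for g in gaps)
```

The simulation mirrors the text's point: the spot rate is constantly drawn toward the PPP rate yet may never reach it while the target itself keeps moving.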
Some reasons for the delay include imperfect information (traders are not aware of the price differences), long-term contracts (traders must wait until current contractual arrangements expire), and/or marketing costs (entry to new markets requires research and setup costs). In addition, we recognize that the exchange rate is not solely determined by trader behavior. Investors, who respond to different incentives, might cause persistent deviations from the PPP exchange rate even if traders continue to respond to the price differences. When there is a delayed response, PPP no longer needs to hold at a particular point in time. However, the theory does imagine that traders eventually will adjust to the price differences (buying low and selling high), causing an eventual adjustment of the spot exchange rate toward the PPP rate. Yet as adjustment occurs, it is quite possible that the PPP exchange rate also continues to change. In this case, the spot exchange rate is adjusting toward a moving target. How long will this adjustment take? In other words, how long is the long run? The term itself is generally used by economists to represent some “unspecified” long period of time; it might be several months, years, or even decades. Also, since the target, the PPP exchange rate, is constantly changing, it is quite possible that it is never reached. The adjustment process may never allow the exchange rate to catch up to the target even though it is constantly chasing it. Perhaps the best way to see what the long-run PPP theory suggests is to consider Figure 17.3 "Hypothetical Long-Term Trend". The figure presents constructed data (i.e., made up) between two countries, A and B. The dotted black line shows the ratio of the costs of market baskets between the two countries over a long period, a century between 1904 and 2004. It displays a steady increase, indicating that prices have risen faster in country A relative to country B.
The solid blue line shows a plot of the exchange rate between the two countries during the same period. If PPP were to hold at every point in time, then the exchange rate plot would lie directly on top of the market basket ratio plot. The fact that it does not means PPP did not hold all the time. In fact, PPP held only at times when the exchange rate plot crosses the market basket ratio plot; on the diagram this happened only twice during the century—not a very good record. Figure 17.3 Hypothetical Long-Term Trend Nonetheless, despite performing poorly with respect to moment-by-moment PPP, the figure displays an obvious regularity. The trend of the exchange rate between the countries is almost precisely the trend in the market basket ratio; both move upward at about the same “average” rate. Sometimes the exchange rate is below the market basket ratio, even for a long period of time, but at other times, the exchange rate rises up above the market basket ratio. The idea here is that lengthy exchange rate deviations from the market basket ratio (i.e., the PPP exchange rate) mean long periods of time in which the cost of goods is cheaper in one country than in another. Eventually, traders will respond to these price discrepancies and begin to import more from the less expensive country. This will lead to the increase in demand for that country’s currency and cause the exchange rate to move back toward the market basket ratio. However, in the long-run version of the theory, this will take time, sometimes a considerable amount of time, even years or more. To see how this relationship works in one real-world example, consider Figure 17.4 "U.S./UK Long-Term Trends". It plots the exchange rate (E$/£) between the U.S. 
dollar and the British pound between 1913 and 2004 together with an adjusted ratio of the countries’ consumer price indices (CPIs) during the same period. (A technical point: the ratio of CPIs is adjusted because it must be multiplied by the PPP exchange rate that prevailed in the base year for the two countries. However, the CPI series used has 1967 as the base year in the United Kingdom and 1974 as the base year in the United States. This would mean the CPI ratio should be multiplied by the ratio of the cost of a market basket in the United States in 1974 divided by the market basket cost in the United Kingdom in 1967. Unsurprisingly, I don’t have that information. Thus I’ll assume a number (1.75) that is somewhat greater than the actual exchange rate that prevailed at the time. The higher number may account for the fact that prices rose considerably between 1967 and 1974. In any case, it remains a guess.) The adjusted ratio represents an estimate of the ratio of the costs of market baskets between the two countries. Figure 17.4 U.S./UK Long-Term Trends In the diagram, the dotted black line represents the estimated ratio of market basket costs and the solid blue line is the exchange rate (E$/£). Note how closely the exchange rate tracks the trend in the market basket ratio. This remains true even though the exchange rate remained fixed during some lengthy periods of time, as in the 1950s and 1960s. While this depiction covers just two countries over a long period, it suggests that the long-run version of PPP may have some validity. More sophisticated empirical tests of the long-run version of PPP have shown mixed results, as some studies support the hypothesis while others tend to reject it. Regardless, there is much more support for this version of the theory than for the much more simplistic short-run version.
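The base-year adjustment described in the technical note above boils down to a single multiplication (a sketch: the CPI readings are invented, `adjusted_ratio` is a hypothetical helper, and 1.75 is the author's guessed base-year factor):

```python
# Estimate the market-basket cost ratio (a proxy for the PPP rate E$/GBP)
# by rescaling the raw CPI ratio with the guessed base-year cost factor.
BASE_YEAR_FACTOR = 1.75  # guessed: US basket cost (1974) / UK basket cost (1967)

def adjusted_ratio(cpi_us, cpi_uk, factor=BASE_YEAR_FACTOR):
    # Each CPI is an index relative to its own base year, so the raw
    # ratio must be scaled by the base-year cost ratio to estimate PPP.
    return factor * cpi_us / cpi_uk

# Invented CPI readings for a single year:
est = adjusted_ratio(cpi_us=150.0, cpi_uk=120.0)
assert abs(est - 2.1875) < 1e-9  # 1.75 * 150 / 120
```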
Key Takeaways • Under the long-run purchasing power parity (PPP) theory, the PPP exchange rate is thought to represent a target toward which the spot exchange rate is slowly drawn over time. The empirical evidence for this theory is mixed. • Long-run data showing the trend in consumer price index (CPI) ratios between the United States and the United Kingdom relative to the $/£ exchange rate suggest some validity to the theory. Exercise 1. Jeopardy Questions. As in the popular television game show, you are given an answer to a question and you must respond with the question. For example, if the answer is “a tax on imports,” then the correct question is “What is a tariff?” 1. The term used by economists to denote an unspecified point in time in the distant future. 2. The term used by economists to denote an unspecified point in time in the near future. 3. The term used to describe the general path along which a variable is changing. 4. Under this version of the PPP theory, the PPP exchange rate represents a target toward which the spot exchange rate is slowly drawn over time.
# Coefficient of friction has no units 1. Apr 14, 2004 ### KingNothing In my physics class, when we involved friction...the book kept making it a point of "why the coefficient of friction has no units"...the answer was always because it was a ratio. Well, if $$F_{friction}=umg$$ then $$u=\frac{F_{friction}}{mg}$$, right? So why can't we say that the coefficient of friction is "Newtons per kilogram meters per second per second"? edit: fix latex Integral Last edited by a moderator: Apr 14, 2004 2. Apr 14, 2004 ### garytse86 F = uR, both LHS and RHS have units N, so u (friction coefficient) must have no units (homogeneous) 3. Apr 14, 2004 ### Staff: Mentor Because 1 newton = 1 kg·m/s^2. The units cancel. 4. Apr 14, 2004 ### KingNothing I see, so in essence I was saying that the units should be $$\frac{ma}{ma}$$. I see how I was wrong. I still don't think "It's a ratio" is a good way of saying it. Almost everything is a ratio aside from distance and time. 5. Apr 14, 2004 ### Chi Meson You are correct. The better way of saying it is that "the coefficient of friction is a ratio of two forces" (the frictional force over the normal force); this implies that the units will cancel. 6. Apr 14, 2004 ### jdavel Decker said: "I still don't think "It's a ratio" is a good way of saying it. Almost everything is a ratio aside from distance and time." In some systems of units even distance or time come out as a ratio! But you're right, just saying "It's a ratio" doesn't explain the coefficient of friction very well. Try this. Many (maybe even most) of the questions physics tries to answer boil down to this: Given the location and velocity of an object at a certain time, what will its location and velocity be at any later time? Once Newton defined the force acting on objects as F=ma, the answer to these questions became: What is the force acting on the object at any location?
(The simple fact that this is usually an easier question to answer is why Newton is considered the greatest physicist who ever lived!) In most cases the force can be expressed as the product of 3 factors: F = T*M*f(x) where T is a value that's already been measured and Tabulated, because it can be applied in many similar situations, M is a value that you can Measure or determine for your particular situation, and f(x) is a universal law of physics that shows how the force on the object will depend on the object's location. Some examples: 1) The force on a charged object near another charged object F = Em*(q1*q2)*(1/x^2) Em depends on the particular material in the space between the particles, and there are tables of these values for many different materials q1 and q2 are the charge on the particular objects in question The last factor says the force decreases as the square of the distance between the objects. 2) The force that a bungee cord exerts on somebody when they jump off a bridge: F = Y*(pi*R^2/L)*(-x) Y is the Young's modulus of the material the cord is made of, again values have been tabulated for lots of materials. The second factor is calculated from the dimensions (R & L) of the cord -x shows that the force is in the opposite direction that the cord is stretched (very important for the jumper!) and that it gets greater the more the cord is stretched (also good for the jumper). 3) Friction works the same way, but it's particularly simple F = Cf*W*1 Cf is the coefficient of friction between your object and the surface it slides on (you can look it up) W is the weight of the object (you can measure it) 1 is how F depends on x, but since 1 is a constant, F doesn't depend on x. In all these cases, the units of the 3 factors have to combine to be Newtons. That's why Cf has no units! The other 2 factors are weight(Newtons) and 1(no units).
So Cf is the conversion factor from one force (weight, which is easily measured) to another force, friction, which is part of the total force used in F=ma to find where the object will be and how fast it will be going at any time in the future. Well that's a lot more than "It's a ratio"! Hope it helped a little. Last edited: Apr 14, 2004
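The unit bookkeeping discussed in this thread can be checked mechanically (a sketch: the tiny unit-tracking helpers below are written just for this example, not a real units library):

```python
# Track SI base units as exponent dicts to verify that
# mu = F_friction / (m * g) comes out dimensionless.
def mul_units(a, b):
    """Multiply two (value, units) pairs, adding unit exponents."""
    units = dict(a[1])
    for u, e in b[1].items():
        units[u] = units.get(u, 0) + e
    return a[0] * b[0], {u: e for u, e in units.items() if e != 0}

def div_units(a, b):
    """Divide two (value, units) pairs, cancelling unit exponents."""
    units = dict(a[1])
    for u, e in b[1].items():
        units[u] = units.get(u, 0) - e
    return a[0] / b[0], {u: e for u, e in units.items() if e != 0}

newton = {'kg': 1, 'm': 1, 's': -2}      # 1 N = 1 kg·m/s^2

F_friction = (9.81, dict(newton))        # a 9.81 N friction force
m = (2.0, {'kg': 1})                     # a 2 kg block
g = (9.81, {'m': 1, 's': -2})            # 9.81 m/s^2

mu = div_units(F_friction, mul_units(m, g))
assert mu[1] == {}                       # every unit cancels: mu is dimensionless
assert abs(mu[0] - 0.5) < 1e-12
```

This is exactly the point made in post 3: the newton is itself kg·m/s², so dividing a force by m·g leaves no units behind.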
# ltrb is the way to go ## The answer to this puzzle is a two-word phrase. Note: It's not super precise, but with the help of the title and spotting some pattern, it should be solvable. Good luck! < v< > ^> ^> ^> v v > > v< ^< ^< ^< v ^> ^> ^ ^ v< ^< v< > < v> ^> < < v< v> v< v> v> v> ^> ^ ^> _ _ _ _ _ _ _ |P E G D I N O H Q| |M T C A R X A L C| |D N R S A L E D G| |Q I O N I T C I Z| |__ __ __ __ __R__| Hint1: Not everything is relevant, some are just there to confuse. Count everything you see (I really mean everything), and you'll hopefully see what's relevant. Hint2: If you look at the characters (on top), there are two different types. Focus on one of them only. And like I mentioned before.. count. • While I can't quite figure the pattern out, here's a little something for others: ygeo zbfg yvxryl zrnaf yrsg gbc evtug obggbz May 27, 2022 at 14:12 • The “box” formed by the lines around the letters - are the holes in the box above the P and Q in the upper corners intentional? Can/should letters be pushed in or out through them? Is the bottom of the box supposed to be solid? Or is this all irrelevant? May 28, 2022 at 19:24 • @SQLnoob All I can say is: no, the holes are irrelevant and you're not supposed to push letters in or out through them. I'll add a hint in 1 or 2 days. May 29, 2022 at 7:07 • That is, if it's still unsolved ofc May 29, 2022 at 7:14 Here we go! We notice that the first chart consists of combined arrows and single arrows. We remove the single arrows and the corresponding letters in the second chart. v< ^> ^> ^> v< ^< ^< ^< ^> ^> v< ^< v< v> ^> v< v> v< v> v> v> ^> ^> _ _ _ _ _ _ _ | E D I N | | T C A R A L | | N R S E D | | I O N I T C I | |__ __ __ __ R| Now we see that the number of letters remaining is equal to the number of lines surrounding them. Then we follow the directions implied by the arrows.
v< bottom/left ^< top/left ^> top/right v> bottom/right Then we start from the bottom left corner to get C A R D I N A R L E D T I N R I S N O I T C E As per the title (left, top, right, bottom is the way to go), it spells INTERCARDINAL DIRECTIONS • You're almost there! Rot13(gel gb vapyhqr gur yvarf fheebhaqvat gur qverpgvbaf naq frr vs lbh pna pbzr hc jvgu gur pbeerpg nafjre. Ubj znal yvarf naq yrggref qb lbh frr?) Jul 31, 2022 at 5:35 • @Prim3numbah, thanks for the hint. Please check now. Jul 31, 2022 at 7:53 • Looks good now, well done! Jul 31, 2022 at 8:16 Partial solution/spoiler? I was moving letters around the grid in the directions of the arrows, and ended up with something that looked like it was approaching the phrase "CHARACTER INSERTION", but it was still a bit jumbled up. I'm guessing that this phrase is supposed to appear after performing a series of operations, inserting characters into the grid using the arrows, but I haven't been able to figure out the exact method for getting all the letters to appear where they should be.
# How to create a canvas with an ellipse using FabricJS? In this tutorial, we are going to learn how to create a canvas with an Ellipse object using FabricJS. Ellipse is one of the various shapes provided by FabricJS. In order to create an ellipse, we will create an instance of the fabric.Ellipse class and add it to the canvas. ## Syntax new fabric.Ellipse({ rx: Number, ry: Number }: Object) ## Parameters • options (optional) − This parameter is an Object which provides additional customizations to our ellipse. Using this parameter, the color, cursor, stroke width and many other properties of the ellipse object can be changed, of which rx and ry are two. ## Options Keys • rx − This property accepts a Number which determines the horizontal radius of the ellipse. If we don't specify a horizontal radius, our ellipse will not be displayed on our canvas. • ry − This property accepts a Number which determines the vertical radius of the ellipse. If we don't specify a vertical radius, our ellipse will not be displayed on our canvas. ## Example 1 Creating an instance of fabric.Ellipse() and adding it to our canvas Let's see an example of how we can add an ellipse to our canvas. Here we have created an object with a horizontal radius of 80px and a vertical radius of 50px. We have used sky blue color to fill in our object, whose hexadecimal value is #87ceeb.
<!DOCTYPE html>
<html>
<!-- Adding the Fabric JS Library-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/5.1.0/fabric.min.js"></script>
<body>
   <h2>How to create a canvas with an ellipse using FabricJS?</h2>
   <p>Here we have created an ellipse object and set it over a canvas.</p>
   <canvas id="canvas"></canvas>
   <script>
      // Initiate a canvas instance
      var canvas = new fabric.Canvas("canvas");

      // Initiate an Ellipse instance
      var ellipse = new fabric.Ellipse({
         left: 115,
         top: 100,
         fill: "#87ceeb",
         rx: 80,
         ry: 50,
      });

      // Adding it to the canvas
      canvas.add(ellipse);
      canvas.setWidth(document.body.scrollWidth);
      canvas.setHeight(250);
   </script>
</body>
</html>

## Example 2 Manipulating the Ellipse object by using the set method In this example, we have assigned the properties to the ellipse by using the set method, which is a setter for values. Any property related to stroke, strokeWidth, radius, scaling, rotation, etc., can be mutated by using this method.

<!DOCTYPE html>
<html>
<!-- Adding the Fabric JS Library-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/5.1.0/fabric.min.js"></script>
<body>
   <h2>How to create a canvas with an ellipse using FabricJS?</h2>
   <p>Here we have used the <b>set</b> method to create an ellipse object over the canvas.</p>
   <canvas id="canvas"></canvas>
   <script>
      // Initiate a canvas instance
      var canvas = new fabric.Canvas("canvas");

      // Initiate an Ellipse instance
      var ellipse = new fabric.Ellipse();

      // Using set to set the properties
      ellipse.set("rx", 90);
      ellipse.set("ry", 40);
      ellipse.set("fill", "#1e90ff");
      ellipse.set({ stroke: "rgba(245,199,246,0.5)", strokeWidth: 6 });
      ellipse.set("left", 150);
      ellipse.set("top", 90);

      // Adding it to the canvas
      canvas.add(ellipse);
      canvas.setWidth(document.body.scrollWidth);
      canvas.setHeight(250);
   </script>
</body>
</html>
## Commutative Algebra 2 In this installment, we will study the ideals of a ring A in more detail. Definition. If $\mathfrak a\subseteq A$ is an ideal, its radical is defined by $r(\mathfrak a) := \{ x \in A: x^n \in \mathfrak a \text{ for some } n>0\}.$ To fix ideas, consider the case $A = \mathbb Z$ again. For the ideal (m) where $m = p_1^{e_1}\ldots p_k^{e_k}$, each $e_i > 0$, its radical is simply (m’) where $m' = p_1 \ldots p_k$, with the exponents removed. Thus one thinks of the radical as “retaining only the prime factors”. Our first result is: Lemma 1. The radical of an ideal $\mathfrak a$ is also an ideal. Proof Let $\mathfrak b = r(\mathfrak a)$. Suppose $x, y \in \mathfrak b$; pick m, n > 0 such that $x^m, y^n \in \mathfrak a$. As before, $(x+y)^{m+n}$ is a sum of terms $Cx^i y^j$ with $i+j = m+n$; in each term we have $i \ge m$ or $j \ge n$, so each term is a multiple of $x^m$ or $y^n$. Thus $(x+y)^{m+n} \in \mathfrak a$ so $x+y \in \mathfrak b$. Also for any $z\in A$ we have $(xz)^m = x^m z^m \in \mathfrak a$. Hence $xz \in \mathfrak b$. Finally since $0\in \mathfrak b$, we see that $\mathfrak b$ is an ideal of A. ♦ The following properties relate the radical of an ideal to earlier constructions. Proposition 1. Let $\mathfrak a, \mathfrak b$ be ideals of A, and $(\mathfrak a_i)$ be any collection of ideals of A. • $r(r(\mathfrak a)) = r(\mathfrak a)$. • $r(\sum_i \mathfrak a_i) = r(\sum_i r(\mathfrak a_i))$. • $r(\mathfrak a \cap \mathfrak b) = r(\mathfrak {ab}) = r(\mathfrak a) \cap r(\mathfrak b)$. Proof Since $\mathfrak a \subseteq r(\mathfrak a)$ we have proven ⊇ of the first claim. Conversely, if $x \in r(r(\mathfrak a))$ then $x^n \in r(\mathfrak a)$ for some n > 0, and so $(x^n)^m = x^{mn} \in \mathfrak a$ for some m > 0. Thus $x\in r(\mathfrak a)$. For the second claim, since $\mathfrak a_i \subseteq r(\mathfrak a_i)$, ⊆ is obvious. Conversely, if x lies in the RHS then $x^n \in \sum_i r(\mathfrak a_i)$ for some n > 0, and so $x^n = y_1 + \ldots + y_k$ with $y_j \in r(\mathfrak a_{i_j})$.
Without loss of generality, there is an m > 0 such that $y_j^m \in \mathfrak a_{i_j}$ for each j = 1, …, k (take m large enough). Then $(x^n)^{mk} = (y_1 + \ldots + y_k)^{mk}$ = sum of terms of the form $y_1^{e_1} \ldots y_k^{e_k}$ with $e_1 +\ldots + e_k = mk.$ In each term, we have $e_j \ge m$ for some j, hence the term is a multiple of $y_j^m \in \mathfrak a_{i_j}$. Thus $(x^n)^{mk} \in \sum_i \mathfrak a_i$ and x lies in the LHS. Finally for the last claim. • Since $\mathfrak{ab} \subseteq \mathfrak a \cap \mathfrak b$ the second term is contained in the first. • Since $\mathfrak a \cap \mathfrak b \subseteq \mathfrak a$ we have $r(\mathfrak a \cap \mathfrak b) \subseteq r(\mathfrak a)$ and similarly $r(\mathfrak a \cap \mathfrak b)\subseteq r(\mathfrak b)$ so the first term is contained in the third. • Finally if $x\in r(\mathfrak a) \cap r(\mathfrak b)$ there exist m, n > 0 such that $x^m \in \mathfrak a, x^n \in\mathfrak b$. Then $x^{m+n} = x^m x^n \in \mathfrak{ab}$ so the third term is contained in the second. ♦ It is not true that $r(\cap_i \mathfrak a_i) = \cap_i r(\mathfrak a_i)$ for any collection of ideals $\mathfrak a_i\subseteq A$. For example, take $A = \mathbb Z$ and $\mathfrak a_n = (2^n)$ for $n = 1, 2, \ldots$. Then $r(\mathfrak a_n) = (2)$ so $r(\cap_{n\ge 1} \mathfrak a_n) = r((0)) = (0), \quad \cap_n r(\mathfrak a_n) = \cap_n (2) = (2).$ Definition. An ideal $\mathfrak a \subseteq A$ is called a radical ideal if $r(\mathfrak a) = \mathfrak a$. Note that for any ideal $\mathfrak a$, $r(\mathfrak a)$ is a radical ideal. ### Exercise. 1. Prove that a prime ideal is radical. 2. Decide which of the following is true. Find counter-examples for the false claims. • If $\mathfrak a, \mathfrak b$ are radical ideals, so is $\mathfrak a + \mathfrak b$. • If $\mathfrak a, \mathfrak b$ are radical ideals, so is $\mathfrak a \cap \mathfrak b$. • If $\mathfrak a, \mathfrak b$ are radical ideals, so is $\mathfrak a\mathfrak b$.
• If $(\mathfrak a_i)$ is a collection of radical ideals, so is $\sum_i \mathfrak a_i$. • If $(\mathfrak a_i)$ is a collection of radical ideals, so is $\cap_i \mathfrak a_i$. [For a counter-example to the first claim, take the ring A = ℤ[X], the ring of polynomials with integer coefficients.] # Division of Ideals Finally, we wish to divide the ideal $\mathfrak a$ by $\mathfrak b$. Definition. Let $\mathfrak a,\mathfrak b\subseteq A$ be ideals. Write $(\mathfrak a : \mathfrak b) := \{ x \in A: x\mathfrak b\subseteq \mathfrak a\}.$ Here, the notation $x\mathfrak b$ means $\{xy : y\in \mathfrak b\}$; note that this is an ideal of A. As a convenient mnemonic for the definition (whether it is $x\mathfrak b \subseteq \mathfrak a$ or $x \mathfrak a \subseteq \mathfrak b$), just recall that in the ring ℤ we have (mnℤ : nℤ) = mℤ. Lemma 2. The set $\mathfrak c := (\mathfrak a : \mathfrak b)$ is an ideal of A. Proof Clearly $0 \in \mathfrak c$ since $0\mathfrak b = (0)$. Next suppose $x, y \in \mathfrak c$, so $x\mathfrak b, y\mathfrak b \subseteq \mathfrak a$. Then $(x-y)\mathfrak b \subseteq x\mathfrak b + y\mathfrak b \subseteq \mathfrak a$. Finally if $x\in \mathfrak c$, so $x\mathfrak b\subseteq \mathfrak a$, then any $z\in A$ gives us $xz \mathfrak b \subseteq z\mathfrak a \subseteq \mathfrak a$ since $\mathfrak a$ is an ideal of A. ♦ Finally, we go through some basic properties of ideal division. Proposition 2. Let $\mathfrak a, \mathfrak b, \mathfrak c$ be ideals of A, and $(\mathfrak a_i), (\mathfrak b_i)$ be any collection of ideals of A. • $(\mathfrak a : \mathfrak b)\mathfrak b \subseteq \mathfrak a$. • $((\mathfrak a : \mathfrak b) : \mathfrak c) = (\mathfrak a : \mathfrak {bc})$. • $(\cap_i \mathfrak a_i : \mathfrak b) = \cap_i (\mathfrak a_i : \mathfrak b)$. • $(\mathfrak a : \sum_i \mathfrak b_i) = \cap_i (\mathfrak a : \mathfrak b_i)$. Proof First claim: if $x \in (\mathfrak a : \mathfrak b)$, $y\in\mathfrak b$ then by definition $xy\in \mathfrak a$.
Hence $\mathfrak a$ also contains any finite sum of $x_i y_i$ with $x_i \in (\mathfrak a : \mathfrak b)$, $y_i \in \mathfrak b$.

Second claim: for $x\in A$, $x \in ((\mathfrak a : \mathfrak b) : \mathfrak c) \iff x\mathfrak c \subseteq (\mathfrak a : \mathfrak b) \iff (x\mathfrak c)\mathfrak b \subseteq \mathfrak a \iff x \in (\mathfrak a : \mathfrak {cb}).$

Third claim: for $x \in A$, $x\in (\cap_i \mathfrak a_i : \mathfrak b) \iff x\mathfrak b \subseteq \cap_i \mathfrak a_i \iff (\forall i, x\mathfrak b\subseteq \mathfrak a_i) \iff (\forall i, x \in (\mathfrak a_i : \mathfrak b)).$

Fourth claim: $x \in (\mathfrak a : \sum_i \mathfrak b_i) \iff x(\sum_i \mathfrak b_i) \subseteq \mathfrak a \iff (\forall i, x\mathfrak b_i \subseteq \mathfrak a),$ where the second equivalence follows from: $x\sum_i \mathfrak b_i = \sum_i (x\mathfrak b_i)$ and for any collection of ideals $\mathfrak b_i$, $\sum_i \mathfrak b_i \subseteq \mathfrak c \iff (\forall i, \mathfrak b_i \subseteq \mathfrak c)$. ♦

### Note

In the next article, we will be looking at some basic ideas in algebraic geometry to motivate many of our subsequent concepts. The concept of a radical ideal is of paramount importance there. We will not be seeing much of ideal division for a while, until we encounter invertible ideals.
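All of the operations above can be sanity-checked in the simplest ring $A = \mathbb Z$, where every ideal is principal. The helper functions below are my own illustration (not from the post); an ideal is represented by its nonnegative generator.

```python
from math import gcd

# In Z every ideal is (n); represent an ideal by its nonnegative generator n.
def radical(n):
    # r((n)) is generated by the product of the distinct primes dividing n.
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def intersect(m, n):   # (m) ∩ (n) = (lcm(m, n))
    return m * n // gcd(m, n) if m and n else 0

def product(m, n):     # (m)(n) = (mn)
    return m * n

def quotient(m, n):    # ((m) : (n)) = (m / gcd(m, n))
    return m // gcd(m, n)

# Proposition: r(a ∩ b) = r(ab) = r(a) ∩ r(b), e.g. with a = (12), b = (18):
a, b = 12, 18
print(radical(intersect(a, b)),
      radical(product(a, b)),
      intersect(radical(a), radical(b)))   # the three generators agree

# The mnemonic for ideal division: (mnZ : nZ) = mZ, here with m = 6, n = 5.
print(quotient(6 * 5, 5))
```

Here radical(12) = 6 because 12 = 2²·3, and all three expressions in the proposition come out as the ideal (6).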
AP Physics 1 Conservation of Energy

A 2 kg block slides a distance of 3.0 m down a frictionless ramp angled at 30 degrees from the horizontal. The block's final kinetic energy is 35 J. Determine the following:

A) The initial kinetic energy
B) The initial velocity of the block

Find the Initial Potential Energy

First find the initial height:

$\sin(\theta)=\frac{H}{Distance \ traveled\ down\ the\ Ramp}$
$\sin(30^{\circ})=\frac{H}{3}$
$H=3\sin(30^{\circ})$
$H=1.5m$

Now we can find the initial potential energy:

$PE_{i}=mgH$
$PE_{i} = 2(9.81)(1.5)$
$PE_{i} = 29.4 J$

Starting with the idea of conservation of energy, we can find the initial kinetic energy. Note the final potential energy is 0 J since the object is at the bottom of the ramp at a height of 0 m.

$ME_i=ME_f$
$PE_i+KE_i=PE_f+KE_f$
$29.4+KE_i=0+35$
$KE_i=5.6J$

Now we can solve for the initial velocity of the block from $KE_i$:

$KE_i=\frac{1}{2}m(V_i^2)$
$5.6=\frac{1}{2}(2)V_i^2$
$V_i^2=\frac{2(5.6)}{2}$
$V_i=\sqrt{5.6}$
$V_i=2.4\frac{m}{s}$
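As a cross-check of the arithmetic above, this short script (not part of the original solution) reproduces each value in the same order as the worked steps.

```python
import math

m, g, d, theta = 2.0, 9.81, 3.0, math.radians(30)
KE_f = 35.0

H = d * math.sin(theta)          # height dropped along the ramp: 1.5 m
PE_i = m * g * H                 # initial potential energy, about 29.4 J
KE_i = KE_f - PE_i               # conservation: PE_i + KE_i = 0 + KE_f
v_i = math.sqrt(2 * KE_i / m)    # from KE_i = (1/2) m v_i^2

print(round(H, 2), round(PE_i, 1), round(KE_i, 1), round(v_i, 2))
```

The unrounded values are 29.43 J and 5.57 J; the solution's 5.6 J and 2.4 m/s come from rounding at each step.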
# Absolute value equations

Let $$a, b, c, d$$ and $$e$$ be real numbers such that:

$$|a-b|=2|b-c|=3|c-d|=4|d-e|=5|e-a|$$

Prove that: $$a=b=c=d=e$$.

Note by Ißra Jörg
11 months, 2 weeks ago
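One way to see this (a sketch of mine, not from the note): the five differences telescope, $(a-b)+(b-c)+(c-d)+(d-e)+(e-a)=0$. If the common value of the five expressions is $t \ge 0$, the differences are $\pm t, \pm t/2, \pm t/3, \pm t/4, \pm t/5$ for some choice of signs, so $t(\pm 1 \pm \tfrac12 \pm \tfrac13 \pm \tfrac14 \pm \tfrac15)=0$; scaling by 60 this needs $\pm 60 \pm 30 \pm 20 \pm 15 \pm 12 = 0$. The script below checks every sign pattern and finds the sum is always odd, hence never zero, forcing $t = 0$.

```python
from itertools import product

# Telescoping: (a-b)+(b-c)+(c-d)+(d-e)+(e-a) = 0. With common value t, the
# differences have magnitudes t, t/2, t/3, t/4, t/5; clearing denominators
# (times 60/t, assuming t > 0) would require some signed sum below to vanish.
weights = [60, 30, 20, 15, 12]
sums = [sum(s * w for s, w in zip(signs, weights))
        for signs in product((1, -1), repeat=5)]

# 60, 30, 20, 12 are even and 15 is odd, so every signed sum is odd.
print(all(total % 2 == 1 for total in sums))
# No sum is zero, so t = 0 and a = b = c = d = e.
```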
8-11. Where does the graph of each function below intersect the x-axis?

a. $f(x) = (x + 3)^2 - 5$
b. $f(x) = (x - 74)^2(x + 29)$

Set f(x) equal to 0 and solve for x. For part (a), the intersection points are $(-3 \pm \sqrt{5}, 0)$.

For part (b), see the hint for part (a). The Zero Product Property can be used here.
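A quick numerical check of both answers (my own, not part of the lesson):

```python
import math

# Part (a): f(x) = (x + 3)^2 - 5 = 0  =>  x = -3 +- sqrt(5)
f = lambda x: (x + 3) ** 2 - 5
roots_a = (-3 + math.sqrt(5), -3 - math.sqrt(5))
print([abs(f(x)) < 1e-9 for x in roots_a])

# Part (b): f(x) = (x - 74)^2 (x + 29) = 0; by the Zero Product Property
# each factor gives a root: x = 74 (a double root) and x = -29.
g = lambda x: (x - 74) ** 2 * (x + 29)
print(g(74) == 0 and g(-29) == 0)
```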
Open Access

# Quantum of Banach algebras

### Abstract

A variety of Banach algebras is a non-empty class of Banach algebras for which there exists a family of laws such that its elements satisfy all of the laws. Each variety has a unique core (see [3]), which is generated by it. Not every Banach algebra is a core, but in this paper we show that for each Banach algebra there exists a cardinal number (the quantum of that Banach algebra) which shows the elevation of that Banach algebra for bearing a core. The class of all cores has interesting properties. Also, in this paper, we shall show that each core of a variety is generated by essential elements, and that each algebraic law of essential elements permeates to all of the elements of all of the Banach algebras belonging to that variety, which shows the existence of considerable structure in the cores.

### Article Information

Title: Quantum of Banach algebras
Source: Methods Funct. Anal. Topology, Vol. 12 (2006), no. 1, 32-37
MathSciNet: MR2210903
Copyright: The Author(s) 2006 (CC BY-SA)

### Authors Information

M. H. Faroughi, Department of Mathematics, University of Tabriz, Tabriz, Iran

### Citation Example

M. H. Faroughi, Quantum of Banach algebras, Methods Funct. Anal. Topology 12 (2006), no. 1, 32-37.

### BibTex

@article {MFAT309,
AUTHOR = {Faroughi, M. H.},
TITLE = {Quantum of Banach algebras},
JOURNAL = {Methods Funct. Anal. Topology},
FJOURNAL = {Methods of Functional Analysis and Topology},
VOLUME = {12},
YEAR = {2006},
NUMBER = {1},
PAGES = {32-37},
ISSN = {1029-3531},
URL = {http://mfat.imath.kiev.ua/article/?id=309},
}
# Potential energy chain problem 1. May 23, 2013 ### joshmccraney 1. The problem statement, all variables and given/known data Consider a circle with radius $r$ diagrammed as the unit circle, but take only the second quadrant. On this quarter of the circle lies a chain with mass per unit length $\rho$ (the length of the chain is $\pi r/2$). If $\theta$ is the angle made with the vertical axis at any point on the circle, determine the velocity $v$ of the last piece of chain that falls at any arbitrary point in $\theta$. Ignore friction. The chain starts at rest. 3. The attempt at a solution I know using work/energy will make life easier here: $$\Delta V_g+\Delta T=0$$ looking for $\Delta V_g$ is the tough part (change in gravitational potential). my thoughts were to look at an infinitesimal piece of chain $\rho r d\theta$ and then try to figure out how the height changes as $\theta$ changes. I think $\Delta H$, where $H$ is height of a piece of chain, is $cos\theta_1-cos\theta_2$. From here, modeling went sour. Hopefully someone can help me out! Thanks! Last edited: May 23, 2013 2. May 24, 2013 ### haruspex When at theta, how much of the chain is still in contact with the quadrant? Can you determine the y-coordinate of the mass centre of that part and of the other part? 3. May 26, 2013 ### haruspex Having thought about this some more, I think the problem is extremely difficult to answer fully. I suspect you are supposed to assume that each part of the chain remains in contact with the arc until it falls below the bottom of the arc. But that would not happen. After descending through a certain angle (the solution of 4sin(θ)+2θ=π cos(θ), approx 0.48 radians) the trailing end of the chain would detach from the arc. Thereafter it becomes very complex, with an increasing length of chain describing some curve in mid air. The chain would become completely detached from the arc before the trailing end reaches the 90 degree mark. 4. 
May 26, 2013 ### CWatters As I read it that is the θ they are asking for when they say "last piece of chain that falls at any arbitrary point in θ" 5. May 26, 2013 ### haruspex I can't read it that way. They're asking for a velocity at an arbitrary theta, not a value of theta; namely, the velocity of the trailing end as it passes angle theta. 6. May 27, 2013 ### CWatters I was hoping that by finding the value of theta that would allow the velocity to be calculated? Not sure how. 7. May 27, 2013 ### haruspex It's no problem to calculate the velocity for all theta up to that critical value. Thereafter it's pretty much impossible. I did wonder if there's a way to get the eventual horizontal component of momentum, but even that looks too hard. 8. May 30, 2013 ### utkarshakash Just concentrate on the centre of mass of the chain. Try to find out velocity of CM of chain at an arbitrary theta. Since the mass is uniformly distributed it is obvious where the CM would lie. 9. May 30, 2013 ### haruspex The chain is changing shape. The velocity of the mass centre will not tell you the velocity of the trailing end of the chain, which is what the question is asking. Up until the trailing end starts to peel away from the surface, every part of the chain will be moving at the same speed (not the same velocity) and that can be determined from work conservation.
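The detachment condition quoted in post #3, 4 sin θ + 2θ = π cos θ, is easy to solve numerically. The bisection below is my own sketch (the equation is from the thread, the solver is not):

```python
import math

# Detachment condition from the thread: 4*sin(theta) + 2*theta = pi*cos(theta)
def h(theta):
    return 4 * math.sin(theta) + 2 * theta - math.pi * math.cos(theta)

# h(0) = -pi < 0 and h(pi/4) > 0, so a root lies in [0, pi/4]; bisect it.
lo, hi = 0.0, math.pi / 4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(mid) > 0:
        hi = mid
    else:
        lo = mid

print(round(0.5 * (lo + hi), 3))  # close to the 0.48 rad quoted in the thread
```

Up to this angle, every link moves at the same speed, so the speed follows from energy conservation; beyond it the trailing end leaves the arc, as haruspex notes.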
By default SVG images with non-visible MathML are generated. The equation editor in Google Docs is based on LaTeX syntax and recognizes similar shortcuts. You can type a backslash (\) followed by the name of a symbol and a space to insert that symbol; for example, when you type \alpha, the Greek letter Alpha is inserted. It's not as graphical as Equation Editor, but if you know a little TeX layout it's easy, and there are plenty of tutorials on the Web for LaTeX usage as well.

PlotSolve( ) solves a given equation for the main variable and returns a list of all solutions together with the graphical output in the Graphics View. Example: PlotSolve(x^2 = 4x) yields {(0, 0), (4, 0)} and displays the points (0, 0) and (4, 0) in the Graphics View. Let us construct a parabola as a locus: create free points A and B, and a line d through them (this will be the directrix of the parabola), then create a free point F for the focus.

You can set Pages to make numerical suffixes superscript as you type; note these keystrokes are limited to Pages by default, and they are not immediately available in TextEdit. During a Pages update which was pushed out a few months ago, Pages obtained the ability to write equations using LaTeX or MathML commands. According to an Apple Support article, the size and color of the equation can be changed: the equation appears at the insertion point in your document (or before the selected text). Pages is a powerful word processor that lets you create stunning documents, and comes included with most Apple devices. And with real-time collaboration, your team can work together from anywhere, whether they're on Mac, iPad, iPhone, or using a PC.

I am aware of the Alt+= keyboard shortcut to get to equation editor mode. I also know of the Ctrl+= and Ctrl+Shift+= shortcuts to toggle superscript/subscript mode outside of the equation editor; when in the equation editor, those shortcuts do not work. Similar to the subscript shortcut, Ms Word has a shortcut for writing superscript, which is the ^ sign (a subscript is a small letter or number placed slightly lower than the normal text). If I remember right: hold Apple and + at the same time, release, then type; for Google Docs, Command + .

Equation Maker for Mac typesets resolution-independent mathematics using LaTeX syntax. It uses LaTeX commands to generate small PDF files which you can then drag and drop into any application, including popular Mac apps such as Pages, Numbers, Keynote, and Microsoft Word; I use it for Keynote more than Pages, but it works all the same. You can also use an untitled window as a scratchpad for equations to be added to the toolbar or copied via the clipboard or drag-and-drop into other equation windows or documents. This window will be untitled until you give it a name when you save it as a file using the Save or Save As commands on this menu. New – Ctrl+N/⌘+N opens a new, empty equation window so that you can work in it.

In Microsoft Office 2003 and XP (2002), MathType adds a toolbar and menu to Microsoft Word and PowerPoint, allowing quick access to its features and powerful commands to do equation numbering and produce great-looking maths web pages. MathType and Equation Editor equation objects and files contain information that word processing, page layout, and other programs can use to automatically align the baseline of inline equation objects with their surrounding text. Users of MathType 3.1 and earlier should be sure to read the MathType User's Supplement Manual starting with the section entitled "Using the Commands" on page 14 through 23 (Windows) or pages … This combination of icons and tabs is known as the Ribbon interface, which appears in Word, PowerPoint, Excel, Outlook, and Access; Microsoft Office 2013 displays commands in a series of icons stored on different tabs.

In this article, you will learn how to write basic equations and constructs in LaTeX, about aligning equations, stretchable horizontal … You have to wrap your equation in the equation environment if you want it to be numbered, giving a = b + c (1); in case one does not want to have an equation number, the *-version is used: \begin{equation*} a = b + c \end{equation*}. All other possibilities of typesetting simple equations have disadvantages: the displaymath environment offers no equation numbering. For example, \numberwithin{equation}{section} in the preamble will prepend the section number to all equation numbers. By putting a \label command immediately after \begin{subequations} you can get a reference to the parent number; \eqref{grp} from the above example would produce (2) while \eqref{second} would produce (2a). You can also use the subequations environment to skip an equation number but record it in a label; this could be used to give an equation number to a figure or list, for example. The cases package adds the \numcases and \subnumcases commands, which produce multi-case equations with a separate equation number and a separate equation number plus a letter, respectively, for each case. There are numerous commands in LaTeX one may consider counting (sections, chapters, pages, theorems, equations, references, etc.) in order to automatically output their total number of appearances in a document; the totcount package provides a simple way to do that. When drafting a report it is occasionally useful to be able to see the labels you have given your equations, figures and sections; the LaTeX package showkeys does this. Once you build a body of custom commands that you will be using in many LaTeX documents, you should learn about creating your own package so you don't have to copy all your custom commands from document to document. Two new commands are also presented in the example: \textcolor{red}{easily} changes the colour of inline text (in the example the word easily is printed in red), and \colorbox{BurntOrange}{this text} changes the background colour of the text passed as the second parameter; it takes two parameters, the colour to use and the text whose colour is changed.

Now suppose I have half a page left where I want to place a very long equation, but it is a little bit longer; the result is that LaTeX places the formula on the next page. Is there a way so that LaTeX automatically splits the equation over the two pages? The double backslash works as a newline character, and inside the equation environment you can use the split environment to split the equation into smaller pieces, which will be aligned accordingly. Manual sizing can also be useful when an equation is too large, trails off the end of the page, and must be separated into two lines using an align command.

eqn normally sets equations at whatever the current point size is when the equation is encountered; the -pn option says that subscripts and superscripts should be n points smaller. delim xy between .EQ and .EN sets the equation delimiters to the characters x and y, and the left and right delimiters may be identical. Although the commands \left. and \right. can be used to balance the delimiters on each line, this may …

a) If [Alt=] is implemented inside a document paragraph, then the equation is known as an "in-line" equation. b) If [Alt=] is implemented on a separate line (no attached text before or after the equation), then the result is on the … The equation editor switches between "variable style" and "function style", depending on whether it interprets part of an equation as a variable or a function. The equation editor also supports the following functions: EWMAF(weight:value), an exponentially weighted moving average filter for noisy signals, where 'weight' should be between 0 (more smoothing) and 1 (less smoothing); and TAVG(seconds:value), a timed average, which averages the incoming data for the configured amount of time (in seconds). Operations, their commands and what they display: Is equal: a = b gives a=b. Is not equal: a <> b gives a≠b. Approximately: a approx b gives a≈b. Divides: a divides b. Does not divide: a ndivides b.

Type these commands in a Maple worksheet and pay careful attention to the results of entering each keystroke:

eq2 := x^3 - 9*x^2 + 26*x - 24 = 0
s2 := solve( eq2, x )

Finding the roots of a single linear algebraic equation with one unknown variable is of course trivial; such an equation has the solution x=3, say, and we know there is just a single root.

This command uses the Angular CLI to generate features such as pages, components, directives, services, etc. For a full list of available types, use npx ng g --help; for a list of options for a type, use npx ng g --help with that type. You can specify a path to nest your feature within any number of subdirectories. The following commands have object command forms (e.g., Equation::arch); in general, we recommend that you use the object forms of the commands, which are particularly suited for interactive command line use.

The Quadratics page contains 13 separate commands for dealing with the most common questions concerning quadratics. Functions can be used to create formulas that manipulate data and calculate strings and numbers; Google Sheets supports cell formulas typically found in most desktop spreadsheet packages. This add-on has many advantages when compared to other formula editors or the default Google Docs equation editor. MediaWiki renders mathematical equations using a combination of HTML and a variant of LaTeX; the version of LaTeX used is a subset of AMS-LaTeX markup, itself a superset of LaTeX markup, which is in turn a superset of TeX markup for mathematical formulas, and only a limited part of the full TeX language is supported. There are two types of numbers that appear in chemical equations. The following Bayesian postestimation commands are available after the bayesmh command ([BAYES] bayesmh) and the bayes prefix ([BAYES] bayes), for example bayesgraph: graphical summaries and convergence …
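Several of the numbering features mentioned above (equation versus equation*, \numberwithin, subequations with \eqref) fit in one small amsmath example. This is a generic illustration of mine, not taken from any of the quoted sources:

```latex
\documentclass{article}
\usepackage{amsmath}
\numberwithin{equation}{section} % numbers like (1.1), (1.2), ...

\begin{document}
\section{Examples}

\begin{equation} a = b + c \label{eq:numbered} \end{equation}

\begin{equation*} a = b + c \end{equation*} % starred form: no number

\begin{subequations} \label{eq:grp}
\begin{align}
  x &= y \label{eq:first} \\
  u &= v \label{eq:second}
\end{align}
\end{subequations}

Here \eqref{eq:grp} gives the parent number while \eqref{eq:second}
gives the lettered sub-number.
\end{document}
```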
# Notes: The Computer and The Brain

11 April 2018

In the mid-1950s, John von Neumann wrote notes for a lecture series on the state of computing and how those computers related to the human brain and nervous system. The lectures themselves never happened because his cancer was too aggressive, but his lecture notes were published after he died. They contain a lot of interesting information, and considering that they're only 82 small pages, you should read them instead of my brief notes here.

The nervous system is a computing machine which manages to do its exceedingly complicated work on a rather low level of precision

John von Neumann, The Computer and the Brain

# The Computer

I shouldn't have been surprised at how relevant the book is. The discussion of "memory-stored control" is engaging and presumably timeless - even though the original design wasn't his, we still call it the von Neumann architecture. Then he discusses maintaining multiple banks of memory with different speeds, which has become no less relevant with our multiple CPU caches > motherboard memory > SSD > network storage. And just when you find yourself wondering what an SSD would seem like to the person writing this 60 years ago, there he is describing the speed advantages of solid state devices.

The most wonderful part of this is to hear him describe working with the historical but futuristic computers. As he tells you what was used for storing and operating on data, you can almost hear the pulse of the machine.

# The Brain

Following a brief description of how the human brain might work, he attempts to quantify comparisons between brains and computers - their sizes, maximum data processing rates and memory capacities given numerical estimates.
Von Neumann's own memory and processing speed is legendary - by convention every article about him must contain at least one anecdote of his superhuman abilities - so when it came to the numbers, I couldn't help wondering if he was going to provide estimates based on his own ability or everybody else's. It turned out that while I did see a hint at the author himself in the calculation, the effect was not amusement but melancholy. Having arrived at a maximum input data flow rate (i.e. what you can take in with your senses) of $14 \times 10^{10}$ bits per second, von Neumann believes an estimate for the entirety of a normal human lifetime can be made.

Putting the latter equal to, say, 60 years […] the total required memory capacity would turn out to be $2.8 \times 10^{20}$ bits.

He knew that he was terminally ill as he came to that estimate. There is a meaning in the slight hesitation before assigning a span to a "normal" human lifetime, months before he himself died aged 53. He then hints at a statistical mechanics of the mind, based on activation energies and delays of firing neurons. And there the book came to an abrupt end.

# Turing switch

The famous 1937 paper "On Computable Numbers" by Alan Turing is cited a couple of times. Strangely, the original printing of The Computer and the Brain mentions that "English logician R. Turing showed in 1927 that it is possible to develop code instruction systems for a computing machine which cause it to behave as if it were another, specified, computing machine." Those errors are obviously not von Neumann's and were presumably introduced when his notes were transcribed. Later printings fix the mistakes. It may seem odd that a posthumously published book by the greatest thinker of his time was published uncorrected. But today we can read Turing's dates on Wikipedia, then read - or at least see - the original papers, before verifying that no one called R. Turing published anything important in 1927.
In the end we should be grateful that Turing and von Neumann’s work contributed to a world where we can get information easily and correct documents in the blink of an eye. Just read the book.
{}
## Monomial Representations for Gröbner Bases Computations

• Monomial representations and operations for Gröbner bases computations are investigated from an implementation point of view. The technique of vectorized monomial operations is introduced, and it is shown how it expedites computations of Gröbner bases. Furthermore, a rank-based monomial representation and comparison technique is examined, and it is concluded that this technique does not yield an additional speedup over vectorized comparisons. Extensive benchmark tests with the Computer Algebra System SINGULAR are used to evaluate these concepts.
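The core idea behind vectorized monomial operations - packing several small exponents (and the total degree) into one machine word so that monomial multiplication becomes a single addition and a graded comparison becomes a single integer comparison - can be illustrated roughly as follows. This is a toy sketch of the technique, not SINGULAR's actual memory layout:

```python
# Toy sketch of vectorized monomial operations: pack an exponent vector
# into one integer, 8 bits per variable, with the total degree in the
# most significant field so that plain integer comparison implements a
# graded monomial ordering. (Illustrative only; not SINGULAR's layout.)
BITS = 8
MASK = (1 << BITS) - 1

def pack(exps):
    """Pack (e1, ..., en) plus the total degree into one integer."""
    word = sum(exps)  # degree goes in the highest field
    for e in exps:
        assert 0 <= e <= MASK, "field overflow"
        word = (word << BITS) | e
    return word

def multiply(m1, m2):
    """Monomial product = fieldwise sum = one integer addition,
    valid as long as no 8-bit field overflows."""
    return m1 + m2

x2y = pack([2, 1])   # x^2 * y
xy3 = pack([1, 3])   # x * y^3
assert multiply(x2y, xy3) == pack([3, 4])   # x^3 * y^4
assert pack([1, 3]) > pack([2, 1])          # higher total degree wins
```

Because both operands carry their degree in the same field, the product's degree field is automatically the sum of the factors' degrees, which is why a single addition suffices.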
{}
# Covariance of Constrained Maximum Likelihood Estimators

I plan to numerically estimate the parameters of a GLM, but with constraints imposed on some of the parameters. In this case, does the general approach of estimating the covariance matrix of my ML estimators still apply? For example, the approach given by: https://www.statlect.com/fundamentals-of-statistics/maximum-likelihood-covariance-matrix-estimation

## 1 Answer

If the ML estimator in the restricted parameter space happens to lie on the edge of that space (that is, your restrictions are active at the optimum), then the usual approximate distribution theory for the ML estimators is invalid. In practice, there will also be problems when the estimate is close to the edge.

So the problem is not so much about estimating the covariance matrix. The covariance matrix is just a tool to be used with the normal approximation. When the parameter is on (or close to) the boundary, the normal approximation is no longer valid, so you should think about some other way of doing inference - maybe using the likelihood function directly, as with profile likelihood.

EDIT

How to do likelihood inference when the parameter is on (or close to) the boundary is a big question, and really should have its own question (needing long answers). For the moment, I will only give a few references: "Asymptotic Properties of Maximum Likelihood Estimators and Likelihood Ratio Tests Under Nonstandard Conditions", "Likelihood Ratio, Score, and Wald Tests in a Constrained Parameter Space", and "Statistical Inference Using Maximum Likelihood Estimation and the Generalized Likelihood Ratio When the True Parameter Is on the Boundary of the Parameter Space".

• Thanks @kjetil b halvorsen, your answer is concise and intuitive. As for profile likelihood, if I require all the parameters, profile likelihood wouldn't apply, would it? My understanding of profile likelihood is from this source: stats.stackexchange.com/questions/28671/… – krenova Dec 30 '18 at 7:55

• Thanks again @kjetil b halvorsen, the references would really help :) – krenova Dec 31 '18 at 6:06
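To make the boundary issue concrete, here is a minimal sketch (my own toy example, not from the answer above): ML estimation of a normal mean under the constraint μ ≥ 0, with the usual standard error from the inverse observed information - which is only meaningful when the constraint is inactive at the optimum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(loc=1.0, scale=2.0, size=n)  # true mu = 1 > 0: constraint inactive

# ML estimate of mu under the constraint mu >= 0
mu_hat = max(x.mean(), 0.0)

# Usual ML covariance: inverse observed information, I(mu) = n / sigma^2.
sigma_hat2 = x.var()          # ML estimate of the variance
se = np.sqrt(sigma_hat2 / n)  # valid only because mu_hat is interior

# If instead the true mu were negative, mu_hat would sit at the boundary 0,
# its sampling distribution would be a mixture of a point mass at 0 and a
# half-normal, and this normal-approximation standard error would be useless.
print(mu_hat, se)
```

The same caveat carries over to a constrained GLM: the inverse-Hessian covariance estimate from the link in the question is fine while all constraints are slack at the optimum, and breaks down as soon as one becomes active.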
{}
# HHL and choice of observable for calculating the expectation value thereof

The chapter about solving linear systems in the Qiskit textbook describes the last (6th) step of the HHL algorithm as follows:

> Apply an observable $$M$$ to calculate $$F(x):=\langle x |M|x\rangle$$.

How is this observable $$M$$ chosen? What are the considerations?

The main limitation of the HHL algorithm is that the solution to the linear system $$Ax = b$$ is obtained as a quantum state $$|x\rangle$$. Thus, we cannot determine the full vector $$x$$ directly without running the algorithm an exponential number of times. However, we can estimate $$f(x)$$ where $$f$$ is any function of the solution which can be expressed as $$f(x) = \langle x|M|x \rangle$$ for a Hermitian operator $$M$$ which we know how to measure. The HHL algorithm is useful in applications where we know how to cast the quantity of interest as such a function. Designing and implementing an observable $$M$$ appropriate for a given application is one of the key tasks in using the algorithm.

**Example of what can be computed**

A fairly general example of a quantity we know how to efficiently estimate from $$|x\rangle$$ is the overlap $$f_\psi(x) = \langle x| \psi\rangle$$ of $$|x\rangle$$ with any quantum state $$|\psi\rangle$$ for which we know the preparation, e.g. $$|\psi\rangle = U|0\rangle$$. We can accomplish this using the Hadamard test. We start with an auxiliary qubit $$A$$ in the $$|+\rangle$$ state and a quantum register $$R$$ in the $$|0\dots0\rangle$$ state. Using controlled operations we apply $$U$$ to $$R$$ if $$A$$ is $$|0\rangle$$, and if $$A$$ is $$|1\rangle$$ we put $$R$$ into the state $$|x\rangle$$ using the HHL algorithm.
These operations result in the entangled state

$$\frac{1}{\sqrt{2}}(|0\rangle|\psi\rangle + |1\rangle|x\rangle).\tag1$$

Now, in order to estimate the real part of $$\langle x|\psi\rangle$$ we first apply Hadamard to $$A$$, obtaining

$$\frac{1}{2}|0\rangle(|\psi\rangle + |x\rangle) + \frac{1}{2}|1\rangle(|\psi\rangle - |x\rangle),$$

and then we measure $$A$$ in the computational basis. The output probabilities are

$$p_0 = \langle\phi|0\rangle\langle 0|\phi\rangle = \frac{1}{4}(\langle\psi|\psi\rangle + \langle\psi|x\rangle + \langle x|\psi\rangle + \langle x|x\rangle) = \frac{1}{2} + \frac{\mathrm{Re} \, \langle x|\psi\rangle}{2} \\ p_1 = \langle\phi|1\rangle\langle 1|\phi\rangle = \frac{1}{4}(\langle\psi|\psi\rangle - \langle\psi|x\rangle - \langle x|\psi\rangle + \langle x|x\rangle) = \frac{1}{2} - \frac{\mathrm{Re} \, \langle x|\psi\rangle}{2}.$$

In order to estimate the imaginary part of $$\langle x|\psi\rangle$$ we first apply the $$S$$ gate and then Hadamard to $$A$$ in $$(1)$$, obtaining

$$\frac{1}{2}|0\rangle(|\psi\rangle + i|x\rangle) + \frac{1}{2}|1\rangle(|\psi\rangle - i|x\rangle),$$

and then we measure $$A$$ in the computational basis. This time, the output probabilities are

$$p_0 = \langle\phi|0\rangle\langle 0|\phi\rangle = \frac{1}{4}(\langle\psi|\psi\rangle + i\langle\psi|x\rangle - i\langle x|\psi\rangle + \langle x|x\rangle) = \frac{1}{2} + \frac{\mathrm{Im} \, \langle x|\psi\rangle}{2} \\ p_1 = \langle\phi|1\rangle\langle 1|\phi\rangle = \frac{1}{4}(\langle\psi|\psi\rangle - i\langle\psi|x\rangle + i\langle x|\psi\rangle + \langle x|x\rangle) = \frac{1}{2} - \frac{\mathrm{Im} \, \langle x|\psi\rangle}{2}.$$

A simple special case of the above is the efficient estimation of the $$k$$th component $$x_k$$ of $$x$$ (take $$|\psi\rangle = |k\rangle$$). Another is the determination of whether the solutions $$x_1$$ and $$x_2$$ to two linear systems $$A_1x_1=b_1$$ and $$A_2x_2=b_2$$ are orthogonal.
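The probability algebra above is easy to verify numerically. A small NumPy sketch of the Hadamard test, simulating the state vector directly (random states stand in for $$|x\rangle$$ and $$|\psi\rangle$$; no circuit framework is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim):
    """A Haar-ish random normalized complex vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

x = random_state(4)    # stands in for the HHL solution state |x>
psi = random_state(4)  # stands in for |psi> = U|0...0>

# Entangled state (1/sqrt(2)) (|0>|psi> + |1>|x>) on (ancilla, register)
state = np.concatenate([psi, x]) / np.sqrt(2)

def ancilla_p0(state, s_gate=False):
    """Apply (optionally S, then) Hadamard to the ancilla; return P(ancilla = 0)."""
    a0, a1 = state[:4], state[4:]      # branches for ancilla |0>, |1>
    if s_gate:
        a1 = 1j * a1                   # S puts a phase of i on the |1> branch
    b0 = (a0 + a1) / np.sqrt(2)        # the |0> branch after Hadamard
    return np.sum(np.abs(b0) ** 2)

overlap = np.vdot(x, psi)  # <x|psi>
assert np.isclose(ancilla_p0(state), 0.5 + overlap.real / 2)
assert np.isclose(ancilla_p0(state, s_gate=True), 0.5 + overlap.imag / 2)
```

On hardware, of course, $$p_0$$ is not read off the state vector but estimated from the frequency of 0 outcomes over repeated runs, so the precision of the overlap estimate scales as $$1/\sqrt{\text{shots}}$$.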
{}
# Cornell ECE MEng Independent Design Projects, ECE 6930

#### Resources for prospective and currently enrolled students

In [1]:

from scipy.io import wavfile
from IPython.display import Audio, HTML
import numpy
from scipy.fft import fft
from scipy.signal import welch
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = (20, 3)
%matplotlib inline

In [2]:

HTML('''<script>
code_show = true;
function code_toggle() {
    if (code_show) {
        $('div.input').hide();
    } else {
        $('div.input').show();
    }
    code_show = !code_show;
}
</script>''')

Out[2]:

## The goal

The design project is, without a doubt, the most fun and rewarding aspect of the MEng program. Through this project, you will be given the autonomy to build something that is truly yours and the guidance to help bring it to fruition. It is often the case that students' personalities are reflected in their projects. Some students contribute to a research lab, others build systems of industrial relevance, and others still use this project as an opportunity to explore curiosities completely outside of engineering. All of them, however, get exposure to and experience with the full engineering process and acquire new sets of engineering skills. Here are some of my students' projects from previous semesters.

Note: Different faculty members manage MEng projects differently. This webpage describes how I handle them, but you may encounter other faculty members with different philosophies or expectations.

## Deciding on a project

As a new MEng student, you may either choose a project proposed by a faculty member, or you can propose a project to a faculty member to advise. Either way, it is wise to consider the type of engineering project that best suits your interests and goals.
I divide engineering projects into three (overlapping) categories: those which solve a problem, those which facilitate learning about some other (non-engineering) topic of interest, and those which facilitate the acquisition of new engineering skills. These categories are represented in the Venn diagram shown below, on which I've also indicated the regions of overlap that I prefer in MEng projects. Let us consider each of these categories in turn.

#### Engineering as a mechanism for solving problems

This sort of project is, generally, the most familiar to engineering students. For these sorts of projects we identify a problem (or, more generally, an objective), and we build something to solve that problem or meet that objective. This is a broad category which includes video games, lab infrastructure, communications infrastructure, and products for clients around campus. As a specific example, I currently have two students working on an IoT sensing system for the Johnson Museum of Art on campus. These projects are rewarding because they tend to be useful, but usefulness is not the only metric by which I judge an MEng project.

#### Engineering as a mechanism for learning about other topics

Personally, I love using engineering the way people often use reading - as a mechanism for learning about something interesting. If you are interested in WWII history, create an Enigma machine. If you are interested in birds, create a birdsong synthesizer or a flocking animator. If you are interested in aesthetic mathematics, create a Mandelbrot visualizer. For almost any curiosity, one can think up an engineering project that allows you to explore that curiosity in a unique way. I have seen students explore interests in music, art, wildlife, and countless other topics. For me, an "interesting" engineering project is just as valuable as a "useful" engineering project.
As a specific example, I had a student in a previous semester build a synthesizer to reproduce the sound of the Cornell chimes. It ended up sounding quite good! Can you tell which of the below is a real bell, and which is a synthesized bell?

In [6]:

samplerate1, data1 = wavfile.read('./MEng_Chimes_F_cut.wav')
data1 = numpy.array([float(i[1]) for i in data1[0:500000]])  # one channel of the stereo clip
Audio(data1, rate=samplerate1)

Out[6]:
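For readers without the wav files, the same kind of spectral comparison can be reproduced on a synthetic signal. The sketch below is my own stand-in, not the course's code: it builds a crude decaying "bell-like" tone from a few made-up inharmonic partials and recovers the strongest one from the magnitude spectrum:

```python
import numpy as np

# A crude bell-like tone: a few inharmonic decaying partials.
# (The frequencies are invented for illustration, not the Cornell chimes.)
fs = 8000                              # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)          # two seconds of audio
partials = [(440.0, 1.0), (1172.0, 0.5), (1976.0, 0.25)]
signal = sum(a * np.exp(-2.0 * t) * np.sin(2 * np.pi * f * t)
             for f, a in partials)

# Locate the strongest partial from the magnitude spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # close to 440 Hz, the loudest partial
```

Matching a real bell's partial frequencies and decay rates against a synthesized version is exactly the kind of listening test the audio clips above invite.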
{}
# Revision history [back]

Does anybody else have this problem?

> "Bad argument (The SVM should be trained first) in CvSVM::predict"

Steven, I've seen your post but it doesn't seem to be a problem. I'm trying to load a saved SVM classifier:

mySVM.save("C:\classifier.xml");

I saved it as .xml, and the file seems to be correctly saved. Thanks!

Wampir, did you manage to fix it?
{}
v=\GuՌ&QwW *FP_:xK` pO\$jғ2%ccE UQ)ՙAEPEq3gZifE[\n~? tMBRaz=Z{oz_S ̹hHl~@BcGBgqprO^03Q(QR09O/ h*ȧ=pk!kz}2\$č`8 c;S"CkOhċ;!X#\|wrOL9|Ծ!jG^ql򬄜clz]9r.=#WdQ(D_@+W)*_#!y5WGK#Rd.YW;(0 ivƟ-{p}W{НSSe-N<;X0)8W:/5YGyu3%&23~-<5Ŧك,zn>8[Uv\K[KX (4l h;fx^@3϶ߥ];s+1>>`u*5cP"EI#"pI40|hdOR}ڮ5R^W ~}}Oﳕ#&b2u;񟎵]y8]TU5EcagtQRiwL8"Q/G\z]΁ A\$~7<'b~8R⛙-˽V͎=?z)Ef0?Ǿ~o"!9<\xK[8_ZW]Fo%KP32x5\$3W7vy.2vө{X&0εI+>/}w}Tk1%"9d8 þz赅Ui5QY(dqʻJ8PF| IB8n'242}ZO[1=t[x}Vٶ(])fNz|8C/e£O;8gf\$") !uN%N7/FRGB?N7Mq\ҦQ2(% S\|CŕeЀ89Gߚx>6Dmw#10X xȭޑbG|`22x+B0y+&>;E̶:n39q5m'ZEcS3(.G=N]R/%?\$[V,?.dOYErPο|b*Fc}q ;UEnɦ4ځO.FpsSn6AuOKXc;]XʨRmo 9?5F+b]~#xBhx3sZE:MܰRKfF\`Ƿ**F)&hFPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEKχ<),'2bQ\$61cӟ~=+]oVs.\$w8ӓ^?VvHT8;uď%hxMDTdܑӃj ^E9l'ߜ~NwqV0.xOM7@J1s{{ך|5RU{L8\$L yILH.mTxߠ9]{cJ=*W}Vnq5QncG.m#(f8z_-V1o}1jRrgOz.>< zZRw^t_lr=k~+6JcƱ2w^ >IF_ϊgY&-~br !ڽ]7juy3n zt+Oiŝf0z5Mh'>8ǂLk2QREr:d>fJ3߷Z? |:|aaufS6w[Sn~ km0{ObY?z{Wxqxzw/npuas\$|rz|3goȲHCdGkkKu'PՌr| =>&WEdEp?U/,x֏Y52khgx*V>ְvB:,"s^Ba\==ڡ+?c2Oy\$NR+t=1x'M* FYn9<*+)nAEHWzS\$3;洧2̢%{Yie7G^hgqSl1Cn"+|~<:ҚMىW\$uM2n; cF)'~)ߕIzc'%%u(-&]-Z&2}> .[_Ez-8P9<cy43oHUQn޴~hDԧbz' ƼXfK#&[62kre.5|,yX6';o9opuGY,0pz@?¤ĕwXѝ*(3=M|:12Zfٖ{pqg ֖bgO6!Һ.8;{jz[^8<CnR}⋙># =m*p1\$ +!4vO+m5.ko\j_!{()=ON_r`8Q?ͨBpp˰ =@ds](eSr9=qh^}&9;N6Huw+#eAPKufa3z?+3JH7݁Eb GYӴ[\SߏN_j>.I_qpA϶3ӎkX1l]M:E+.˝ˍʿ=?>x]1[LSq9'56W,+KiXFN2GS8MׇOFgUɑؖAҽ\kR mݐ;=BF}{q]Iޱ:qo%ݿnc!G__jsNjn2ANw?NXTw⬬0[ y\$'hi(ϧ5Ţ^m<. 
bISOxï[xsW:ou]pz5SƍXd)?qe?ץ\$k\$n+) "p4ũ9ILǏx"xJ!"jBLnwuMEv_<5{&G#y >5;(X|Ms唩mqdx9=s~%߄4]*lnnI`}ke\$옞UoZAKP*p~,\$?F 7O/+ipN\C :W֭*IޓH\$#'BqmXl;UPIf8zåP{뫱P˴qϵ\~,f33?QA/ផ+η8|*`Uǚ).h|:ڟ0#e#˷\CS~ K0AΟ,{u tI-}2IɯGNMHriZkhIkv>5[Sm66|#۵ͦA,%=>u/όGWF'ǧj֭1oa5#1~IE6vG;6-xF8CEq^&j)+ qTG.= & ǵߞn뽬'7=JE@/GJDq.dLn(we5STs@֖>=k!Vh22 5XܤRsƷƝ-Ƌ{(n6\$p24kO?ȁc9lmXXxLѺ*)H!j\$7MK73^0p ᤓWE+w !?wb^|TTҮRh|N{JG'4ȅMh71ap܏xMR9eArpH\$ԏt9)Ǖכ\Z䲆%m<r)?5{ܭ#)1onߵaZNO=IK'm >~:O܂clqd1@q>9Ps4|:O옚Im c^cbB'5]\$n=Nq:I:*+q|iI } 9厹=)YIQmU]ҍAK?aSNAmҕ5}ot~`vϹY:/M1cF!ҏgհ{=WQ4[aEL,r#qz굝DCAEfEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEP\OjZLۻ S~yUK/eZYYRFrx={G?O;Ķ"ֱp\$O[7|>c?P:sҝnwN"2:V* zC=F&n#{qQMَ[Jh6vgekSv;8+|/Qa3oG2 vpypjm0};\$閩tIXPHKdO~k&g#sIso󬖲g|O0TBGv)>Lo`>,cb?SvY?L5TvR )UgUD.uԴp.~p?SM+VmvTi?"aR0{z ߒ{o+/4mj0!my|+tR[HF Xgs^OJ̱ _}FO^\|b]y,O,k=dπ xSִ1F?L䅂Q\ (~7J|8hjg1/nrʕPEZE<5]oVO6;/?|Ty>BnTӥ@U'\$Lմ<ֱH"e|DZ5wR׆>no+ |s^MCrЫliڕfv uxkSOgO>6 & q#pkX.e>+xYYȻ {B.1\ƪ1ҭԟ+"TtbN%ȗ.ق|5]#Wy7i%|. 'JW@=NyDI85njR>1[Z@>uX8xyOA^I&ʲɖ%k8u75|DV1۔ {kGZyb8CoB[mhyF[GՖAjF>k6*guP+4t`AEsNO<tY.;.:ּ{ږ᯴۱BK`95a݁wN׎ҭnxkHյ]VOז pp&T+drd9:hMa\$ eGL:]itS)o;\NOU#I[6"1ȂTۂ}+} \$@DTck)ߕ\]M qZgGVu?܁lLQ@A珯l u#ҍҚMXd|ǚ֤Ka+q_j&3y?6s\ONƱƪU•I&:aX(c&>@x15O2?%\wj _.^zvڤՑ)ksd+ C6y;Wx;hcgg.S9mICEbQ4=WR֬[HNyҽW xw62<Ƿ蜗ZGūy{5s^֟[Ķq#{Jlg_bƖMZ!̰37(*=\ !Tw'88q^5}ŗ<˙FZ@2sǯtˑ[]~ռ#)g6Anx8=:MԼ9ԣy׸sf/t i < SSUhŲ5/a\$J< ׏*ko;F9 Wg4՟7l;q+E+c'L^ch?r5ڑQc۾k&]@+ JyӮsYF=g|-MKH[R~t>F:z`U|+cڇ;Sr`r/ i6`3,28kzO%ԴXf^;?Zڕ6Ms/-i`Ѱ@'==zy];:χ+`fy?bЏ>6!'{p=Z4-NKѠs͕̃\$cv䞣94[v䝹y=riu jvV# (3"{KHE'+,d^Sӵ֍gX`6')atk2hWGk\$wdb4surXtwQOo]5xͼ(2~lRQJL.{mSU|oy c~suRvGYkѮMŌ.yϧ+mZx^uy.ar ^Nwg<+j*6 εǧ]JGz7¿O[:ǸM0Q?h]_,EKaO#A=g/=jF>ҲK@yxl;w7SEl׈|UWP#t\$yc|=]C:Ɵ]JO`9=^? 
ү>N9xšQdqǼ+=WGψ@-Mчa2F8`vEEo }?PѠaau`szrkuKh@bA 7Zծ6^T)\ #Y>\$Etkr`em?vaMW+Ezƻ:NK`aEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPE[xkGgNһMG׾ xieav%v\$k'MHҭ ˁ d8퓓ZմW*>dV|FSϫ˳ZZRGk及xT:nL=ٻYHg9W]J(dGoJcKc\$_|IuO}"sǁ#]Le*ֆAݏ=Ӻl4M.mlXaS޾f4moey'v[_qx`׍xGF5¢ eF㧡?w>'p>NIR*Sd4ؙKTe8`Ҩ52f¢Τ_ߏ.<qn`2f( 01+OLi&+q7ٷF9|OOJk^0[K["72('5J1u.4?Kn!n0\$ As1\Njto]CX00O;ݒ;ν}Aw|('ӊfjJF\gAZI|!%q CUN;gq]-a)s; ç\,G?'LO+6@`sp@Ƿ7z 5#M:{}Ef^=ook0gi2_HZIHQKu?q"V얊ģhh4Th>ۍZ#Zmu;r.@+ⷌ#t.wχ''hn;1w4-SubJd9J^M6N|i)XH 0OOíxh~%_JEUǖ׿\$tMhPEPEP7V^Ims 857iqN~R[tdCK \$x}oA3{,B; (O9shQרu9~i"`7:V>m;1X7ώ*]@>{Y@&'IRh~ nCqqBk*8);ֳJR(_c{Ҵ`}2O{(VQW^l2k^!bI <Įxz>=Entn'w'yYT+\$.Gjm7?QmuPUX=q^yRmk|E- 8@=~׻h:&+x8^wc{:{w%hq%<%-k<`'l`~3Un,5 wP #r=ԨܟzJ<^-Ie>RI7!\$q^FGZ9wgŕ\$1O 3N稭w[Wv]䑌Ww "X'1 NzM Ք=刑3mI#x)B h/4}K0dgtdlԫ,X`qJ RŠ(((B+"Β'\$EPscM++x0iz>+lq}䇮q gfڕlK%Ա;oN䞕xK]Yuk"7 Eps= ɋt}"0huxtǙNuT{-]R5kϱIe/ϧzꍢ<7H_1;ߪ\zZdxkkǣB Ba>I-K (xT>3ǜ?j*iך*yr3ƙwݾ4.Y|I/|] F>xU'#צkUaד'+fZ]Wc )=e Ymxe^P=X,!/H3ႀ@珧Zh^6GPd_?&*ux.RBKp94瀴K: ˽]`}x_Ҵc>5#M"k!ЬiVF >P8x PY~+Õ ~=+;Z Iy={Iggaב8*iС3kg Oe_3~f_ִ[Vq/ 2L`,կ|R|D\$foʵҾjWVFT27Z,~h{˩.x+TZ(V:};g4v` @W#_h<cKҬnqB 8c&3 (ڤڄN4+0#kiTwQP0 ( (_+xc±xn)<`gu?9Yd-h4\$b_HK}8g}}EhZioneǞr}oM4V VS=pnǾ{zyGB5v0yF4N{G/.N@olZS=݌~7˕ןj>*mzN^[=+EM&Z> Abf=vWKU{Mf5wR 2;~UJ1OC< ԎAa~g:O>. 
.84:-^e֌LG;g^Ϧ[k[Au;OWq \q9U¦)O:6.}OC(ۼI5GQY(((Y[ s_JQ!]u3Ȧ%[g(>?(cIWET֬Q`PÀ+Z\$ 8_E@.F0}U5Y.xQLs&լ[\ƲC +ws?[6gs֑>XD6nj7-sǾAҢ3TLI;|ZHK‚A֨ȶL/c:#G6u^u]Y,9IA&TU_(<:Qzw +1oO^xGHeՀ w<|k63B3y| O%ƉcH>šȨ>`\$/AViMz' pO3>Tl|+z}r݇9{Bm*;nUyf7698ӥMW4 96pAs;abU\g'G>1ʔېN8"Vzs}B"+6wrx\`v^J<IaeLbp#8'ZnV _i,Vk]o1A9]k[xVK< ˜UJ0[0zg0Ly<i4KǺ`Lcxˀ2HҦZX:XQ@Q@Q@Q@#t/\pSjM%@>q+s[Sؖ,cm0;#澋Դ}[NKC2`de-m Y\Ђ}AOesͧ([FOݮ1Qj]FuS[}v6lA.܊GV.Ko j~ ՗}H1 ׎01Ҩ|\xx¨\ǾM\ݶ꺗C6IHxǚ>4x2]Ru[M@`n8ފs\~O 74POǮ{g Ysҗ=RȠ/nᰲv(]v36xkebG8{t+D\)yfDΗ'ũ'\$x.tst?w&R5iBOxj;dx_"x[kӼiwڀ"GJ[\$c-sҥKԭ2Pb`w.=V&+xʑFY?Y5ga\$xF J c쑂sׁk[բo5mW{J'n3V]]C^궖ԑ ,;>C]nWӮUIV==\$wN9DZP#9#Z>bBIR <7_|!žz9\$Bz3qKm.e]NO#n͸qڻZW+jFMXM LvqzAg/ @':ddszgiIE'&#bOeMI].:dA<+xLRZc̀7 1\$g񪊓Ohve2(8o/–NR݂2<کAߔ,v?b \Eoլ3tJbF\$Ӎ8T%6_튇[EbgD!дkm6ܖ\$c99<}MgVZZMs %!F >zbG?j YvX=׽u"jx\$1k7sƫtkx5]+{:)|at0Kx''\/QWՅ?L"@TB)/^\$>|S!ϣR((+φr|guUA.]ÙiDF=x@}K..szF8<ׇX~(Ud@\$^AҴc2Ҭuˆ9 *d(tKy|HI6q|Wʾ(cxȮ.\$FTǦ=z j2}McLk8G:ս{.{vaANpx=8uy,PSF% h\$Nb+7ѡ:A&dXvqא*&; Suj}'K1Ѯ(}FI3W`kۢۿn};.PjQX(+ѮhVZZ]r>0`8R}~>0Okm,Z"A餬=U*Oǣ!@V*ߊ.aoDpryLOT)Ú>k]>TzqӌqڶVQREPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPTMNG@ Xz[վ+xHa`?_8׷wÖ>12]4zzfBmXh*؋>c뎸?=ѸU,X<W#`q?|9%b47HQF>aWcgZWLɚPXϾꧬ [^&\]Wl162@g_(Kf&`Zg`Ig(+4{8d.3ǿZ_64"A\$nvq(mZ%3޿Ω)\$<翷oOFwXѝ*18QR|ҸA]Ov6t}  =+(I! 
,@Kœl DQw7Z_r?[QHu ڭغiq\$gCU8mJŦ`1M\k LFP'WUdee  +1#|eT2[N"g4T (jZVq}7܅ c=Oa M㯉WZ;E`IMkSKDH o7=9\$3!_ Gz/>BʣHT9u ܃>y/!5"#]e/8Ohx@\$Lj%q[| ]կ]OLuS.7e`7l+A tmJyk'[^k\Kva(b0ARQw>ڵ9PbYYW?2+誩&!EfQ멮t-8Jws^/&iFd%3<- &>o_oQb4đU;W{ԧhY_^-]&vב>R>m]8;H'xg֞ⲶCWo9wueBqEUq5:ӵ"-sD ȇa?ݹǡ&q-= ߇~+ j7w_c^x@U|FvR%`O֕ ި7!+ȒFz26Ai+7uyu ;9`'jwxOah.~=E} kcsikj]\$Q QQU4c.bizZ_@@%OU  h:qͶpId'5?y.+Ubipa򎇞2{ Vayx.!PH9U yy5VS4\$T+ V @>Zxc/uXHj 'y5oxKnKwxFUD\$0=9[rpd^0<)XC.1SҺ⸬a6=t:k m`RE„{ 4V۔yGzɥ[Zv^}.:뎙֮W;A(((~{]]8`=OoʚWv>)xtHEz(׷昶vh|r=^x 8\b]wKڳz }V⯃n>؜NZ<L3>H9}&]X}0Oa@z`dW=dEt`*rq5^CҧuTIU'۲^S>O7 rGAa[Sb':i/#KIUFQ\J blzC2 z~Y[l.ukbz1UXM+}s\ŏCmSeart ϮpxEW(i]ơ;JbF9r=xq8O8>ʛ~30E+䰌¶^uK,/V)2rCz1UIc4Tm=I:ǹ>(i =%#ʮ~VǿZӊHE ~=*rDӻ_3HDP89:WZcB)Q}OT_ݎx_-&GİcOFJIޗ,`{`,SjϔdZ#&ԕ\8o-\$ßhޑlnW4bM&yxOx?:hGNW^|K\5*A:9\~y_ki@7Ѧ~b8Oo[k@6>3x;G&ICdt: E-nt4u uB:J)Mh7.gn ul e^}1ֽzƓ˩i0HvȨa9ҦtUޢAuu w}3ָz'0qA9\$~;:r4Ff o# ӦkC|l0f }ڹ@7^[˛Hə0p{פ./ӭ SmI[_&|E us#mArOא?*|:Ef0 ( ( Yn#L䍬G#x½>AY٥ʲpҴщBV|pvܐ_[K}Bxϔ|<ӯ5_xHDlq=? ޜ},(+'ZDv X鞜M >8ޞploʛrǃӧ_QAku(a q(=A"={?hVr)S *\?\$o."5lp8R=;Ԟ}3N6[Z+bF:`d+HoMd^!ݩҦM;D_|g\$kIt1?(c?\$udc# sӽSPMτio|_0FVs3+k*KBƙ\$HWּwx]DG"\$= jHœh7,#P"Cգ4kmJ(4sROszТQ@c*(%ym*w?ɫE (wWˀ%_1v\$c:Ҝnv=p u(>FFp? 
dҺZ;m#P4k:tX(88{|"DžlˈΥqǩI8ݮ{߀At& `g|_QO vvHMzוxDxlsVqۏҷVLMnr#p (%V81xCqrTU~5|[,K#fm^U''>G\$'}[~!"J܀9tx#[qmjucdie-[^][Zڕ㔵X qa) dEoK4k՝5-h.ֶ*w\I;=1w\ ݏ7GHA0ڝ :–[aU{ f较X˕l(FP^yJ«a޺=HU _Pgzj0"p83xB=tx3i=Zz 3t <՚m͵o ?_NtYfua% >\w{Q6 |1Z{_?t9ï?\Wx;^/Ҿj̏ *c.S\WGE`0 ( ( (G^:3_<1āLJ'W (4tznkggH~,}OԜǥ^:FX>`|[RrVxxW[:]+^+:@G f 3׶9=wxT1T*hz}G|Gq,GCz{w^P/Y݄q{U9\$Be'ѤhV!J#{tjA;uZh>9uhxe¼[kw4&FXt^}0W lΗue a5Sn5kCVħ9SӮ;ۊt-Au ||9xn]P=O1yOim.[9XNkHϚbz-OE]E`Wo4oזqypsKInJ&]YI(I9#%G׶U^G`Sm RGT IdlS vICrkXMkfɗ<|ktwewa@#YFks[z,㞄sQA<_00wD~'/-@&c;V5"SڎIpF<}sma\$숝ۀ'8r;?]WV&e= y=}u3r;(k'I>LҼ/, !x-U`23t .])I]X~!|F_xtV 1'?7cZ> j62&m*Fv8ړPbzXQ@Q@Q@UM:=FMA-[#e~CSM-W[ hn>mju0xz*Ml3j@cS:<X)0\$_7'|_ 73` n{M A&M#Ne{]>ݦnRT9G'ӷJOC[FLC#< dr3J˾+|By~WdR38!}OO۟&dȕP<{5I*ta:^Ou"/,d<3׎3\ootOMl|}Mf(Sۭ/ Rzt&3\$^c {8~gzA]K#ۧUHbHEHTQtzQvuW .`.vzIdL2@үmkIG" 2#1Cf? 6@?J40tӭ?ϸ[Z6_ t*zop*MX\Jem\$՞%\$\$S?t߅ |di+w&@PEUԭLB慣RzA1}|F>ʯݷ펣6b=aEq?ѭ]Y\$bUF8OZo6&V[m'v{܎LmuiSż7vk\$N0x>^\iVgt࿕ HWQz)\$ɒZ%vefScE \猷뚿YYAE #nahg%:5l\$1/}ϩwkb@PvQm2w}xۻ|K&SOlń@O\שIJ< 6ohVa0Nku+ksQ@Q@Q@CwiQ,H60ieh-l-A'jw>tm݀EC# 2#ҼsTW-A!'zu=N*l'|'Ѽ"9#y*y]*` *x[%xLwu\${0r0=/z[|;X\kܰf%==k|9'Vʖ+J?(oM\]rV# 1k{a\$9)[#N1W rpkՂաg?Ÿ /~Fj0 dw'jVȕ ;s3յ6t纸U偎=s~Y&,<ȳ7_es!@'8]/Q_H׳H\$䓃ӧJQ,Ec!+h("H_@I\ ( kJ"0!Ar;׵?閱hbPH `cqqk3hWBͼ=y>-F\CG}B Q ` 8*[[ ( ( ( (ݔږwgo7,WCǡ[^eOFG%pHny9y^aikg 2y5si.T\$YK Sc4UucqF:*(~( ^X/q8CWr 3QZ MtI#hUd`C+ cY ? .'ft 'r9-9w_tkVhcr^Hʕ*zcٴMuAX^5\$.4Dq䎇>iN;E cׯy[ pU공Pr^}@HN}d´-~hp2 .DNI'zv;zgZ:&\$ oG\6=+μ?ѭMqypPKgGOj2LF[Q垡km-;u'<M)j&'mFD2a{cj_ \ٴ#p='VMF}HEp'ɭz&Wxt֡\42J̛HctSQ 0|ԝbkUTF{ oB 98]h`z/c]c/I|ypSb{snYc`C @ I?ʮEWVM'ZAdih <9Vpn<¤?vU\$\߈x+e2:y(]΂clqE|?tncKs.2r9?ڈeC66դ0ǔx)IcY#`2B(PEԁ@ GJlr\$0YNA@Q@VR3ld}ǭZ` )Q@! 
u P@Fqh(*QwwcdO M4)Q@Q@2K\$'l C}:ʧQ@Q@Yٕ]XƝ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@>|MkI-"xINiM\$^ k;8%fŇ[G/o#؆':)5{0nC>,RhV(2EG%W`NEHʗ:;x\$aʪHMPź958w85J-4M&usȬTzϮr][kMg &gbGjSq0?unKyZ'a& zq>Ƶ~^xw#Fzm2O>W^FP^oc[fh8oSj'goK%.֌z:z03SuvC8<zS喀q5\a,[glqxǽQS{(Q@uXYX6dxdL>⽃w:E䜐qOSm;r!u6hFP^gxvEUec㚨;]W_kZ-wtv̫ NxϠsV- &{aVwNF}xjn^KB4)#( rSjaEPscKc\$;(פ'vc?ҵKc}ؖWg\g<ϾjfaE (<_ xN,x~388)Iى#KMcOQ\$2CVf Imƥ&; z,`i.gɟNO5iS +!xMp?&mF,fΑD@;2qY>y|k53dsNW}+'9mJѱjj\$Q@D6GTEfcA&mehIRFv~Nj-r~*j I{PYr|jo.P:N~5JN 6ER>S۟d`AUPzlbz4bclkxHK Gx?Ȋج\$]@PEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEPEx7Ty>"lq׻q""S]FOsokq]j+mj;XS 13gWk142.Wb;* gx1V<7@ލ[5ja('lQ)Fž>6V,\$|89ƮzKc;cO|ksNaR{y:ׇf3cg#%NO^63<OOԭkkF5 \IŠo?#_Ca.K`37tV7 Ì`~@ϯ?m[\tDO[vnyι_]-p尃F~cCB=\$18,9;Vg%gb1S;o JZ8Fy?sɨZao?ۂ@=:zti4.M&q"vXa؞?s\$Z 5l~(FMl,__xi1HcݿƯ;z'TsXVx(jG ((((((((((((((((((((((((((((((((((((((((((((((((((((((((( zki_޹H"NIĐ+⷇R6(gwh>W9+=|Za  g\$`cާ> xDq[ p09>ֶ5k.ojVK,1\G朻`(؎]\iPI]\;IIJOaBz i6>ܻtK+{=p}=ƃ%bh\=ws^KQ=N ;N-CMd<,0ry&ssw`STi/]\$rcLp-N^/E`c>ڈN.|l[lS|<+{#m\d6cs‡C޽9\չ% 4~xu2-6셼68rOTMZ@h%v3ATo^1֮obı:gY?9#z(G>ZM%-CUğ!*l/ ^YST:hold;7<~' ^A4ϊ֚FksύV2;mE7>Tw e_'yB<0 =1+,{('I#Wh2AtR)E0DV# #jFB\zez?ZR^^𵵠lΡq;ךמbt !y\g={\qǣv.ҼtQ|쓌9TJ<ŠH/\$C<^c~!LhpRۀ#lj؎{^5cMи^0܃=m?5KBA#)u3 vq39bA3?>Nwr"#P89j?Og14׹Sp8G^JRog&iz_\gbBpzֽ* *+(|F,Hq%#y^;:F9T+Q ~`o\$882O<~L.>k*v9O7)[_`JaԌZs๼Su6!nqz?Z.f ;hXaUz Q@8_j׃~G;I-t˖Fwqӟ[RMu65o:7Q۴C/,rp3'S _[H7͓bA& 2K(=JԎq)S{g4wG}.Mu*|Xͤ̿nHfrTzǷ8Ӗ[:_'_nUCyw9LE04>+x;TtL?zNc x^CҧyGgR}09SӺEtVC-j为oSsj`l@Ǐ4{;XFH6ըk/_ir%!zpqJq/|@\$A2|l1תtv Ƹӵ j7fHnHW+'ߞ+/TkVu߷Cvjy9=CK6 B+`_?OYس `Mv]c 'BZo2~i \$%qJaa =H)O+ծ4!9'qު7h[h{|1YdX-,: sҼCgHmѠo43P `ϨQU`(zSaEy/qZC&ܯ\$ddN;o^Wƥxiu]-L>VܒI4#ʛk{`NM"T'l0F=MQ0^Y<)y9 6^?>'(b-}}:6 CWeDci|ߎ:W|EѴ=Ė]Ya8xޯaX'l<,L6 *+d;OlvV4 W_G 8L-mX/OGZV/29"C)3{3ƿ]e*F|d9-sD55TDUT`:)Հ>#6nԥYc+Mj^#nh@F=;']&g9sq)E3%Ez~Zjb_NOʧgsӕ)Kut_C岨99ǩ=ϰ}iMvvԞM=E+!|@8.qC_Wt[s 
m^NֶnVg\M;6i7gJN{MUi:&.&FZU߸`\$3S+ qbpOs֥-;+xO{o>c<jҜo-v/>mr\(2oP OǩcP4حfٸSr)4?~zCѴF_Kd =q(xoomsZ<ԣfWWEm"eʪe9Ns4&~otmW X>M(˕ \$i/e:jF ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (YLv"{Uӏ4wږapvт::Guo "\$P0FO+hV9u1=L!El#X宥ű &あj(6où5 kQ[ŏ&>\}Sӊ)ǐʭ  EO~V@=kt}BlZg(31#'[Kо V6 a,{F\:*5 %t)e7WyʳSz;w\$ .;Ǘf,VP I nHp+ƭojwLDzsfkf35+ķp ,nO< {צ>qDBđyUM!-u6!"#@UFY\$[_j30\$ w\R0?SYD)j,L"n|#6O9p֖ |c{eu܋98kg }ڈ6q6:14wzk][sA0[cn~xGns~^}οqJ ;Mm\$c+܆'=H{W7 b6G:]6eۤm2IJߎ^xѳv}(wJCWM*kFGg//~BA߷Z\$|7kDBYDlHtwWdv쀌1+rXt3]>.O`n{h4mҼJ~\$u9r3Ns^\D*Itt`AEcRMΟ/ɬIf\$A~OUJn[\Xbˁdu3ܞ9o:ŴʖQs/=}r#IlM>;p?>ޭ]ctTDVi(Oe{U|6 {"HXVZxfn0\$z{ǗG_WÐZhͫd(Ͼ^ۖ[x2˸*6G8R݆|jax=ğ'<*ڥ{vI;v4\$t? <#@#9v?{z i0kzTmpA{5۸ϓ56MUC0n8<5t4_eb9`0? soa%QYIi6Q\5L|޹o 5Ӵ9˼q>ԝ>jVVŖ{=N8C?Lߛ1_5[T ˆBX6rp8tg~ |nqL]G@񭥃\\2ŸPi77^."fGd<{dWuK\QY߉zԾ0݊/dž-1㎽=g 4m\$8Gv\$#wO~GYځ~X) ?JuK羳щݞ}93,hU \$9 }['jkk{̦?1Psש*]&`pZ75nYd\F+kYR6Q:w#ӡlDY.F-noe1̒c#p8jW,hek~"zcU@ ȟ:%̖1,'ӁP3;ݍj;[}zXz}ϑ^٤Um~\$2nCb5|>uOx0oN455UDUT`;S9&G[Űo-V{<Ϸ<7VGmD2Gby##M+B5t:I-nS| 3H3_j.ྴUWr52q>&F⹠QYw»TdzzzL_ <- K6 \vj8\$.Tv:~iXec,@zPnffh:lN*#,sݞO%YTDWZܼԼ_ll0|gp+Hr΃ o ڞ4sI\$-gܞ[r>?y.d(bq34&죄(< sq4]bU3?h~Oko,_ !OM޹SlǍ Pc) NUs5?A-C<5Ė \$Bn9X-e`1E*9'K7BXl-R<. 
,:8S񊼿w >0)8Öpgֳ@ͯF{H #]T >Ϋ%Z4^EP[,z3NqW3%J_PAUH.e8óէZ/8<;t%U5χmV]2XxN]Az 3t~ÍlѴM_ Qeinെ&#0TlOcϰsRb'[(}8 ^iiV,PĻQfIx0x@U !׌/k?uy3q܎0GG#E&{:ȊUNKTm4{ //eğ愮m|SA\$K qd'ڽ~X<'涛I(vב-"h0ϧZ=F8l}\  FF~!2ڪϧ|p38s^š|K%ǹRK6I+y-ͺ+Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@<|BjiL98Ozߎ?ED\$`};c89[-ԼUۭg2}kҼ#I\$Oz8=NOS8s|BlYp}\sY"x\$ܢF;~L{`R{ '0 G9J"9H&]ȥs_:xF~dgbc;=:=jq/û;Ax ̎78׵tl#WkH;F=9.V3-xᄖ:nAι3%0kVJmwh~/-&Ym1oΩҕ\Ff;OjJp:0E|xCTśȷ,A!W>i}dդG>4QEI&C|If+g珡Q+HQP0.yiNܻ6C}??AxВ!=~\$JhcBx|ٹ7dRwlHw`}Mc\$g.RAR0OZQ}TKkyVF7ۦ=z5aQ%- [ (^,/- ܜ}+eĉqlVR\|[\G|݈C|fȝ|1卹ujzyuSڽ4Vܽs#>VVE,~J6 usY#=vο5ZXE@Š+ľ0x]ԛGH1zu<88ִ'bdZgXcss] '}#EsAugk}wovP0ϮJFƊUSu(((((((((((((((((((((((((((((((((((((((((((((((((((((((((?7ut;ǝn`ϦyWw^ \N7#Skoh|cml`X-aH£>=b YxN{{#Ԝ מzdNMh鰿όn?N`~D+'^&%X.P9ChN x:CZE',G=\yhkma0=@"͹6m Q:t,2XT`*mE (xf/hRXD|;.@>gpsE9xZjpyܤpTZ-t3"IY#Hw@J4-/Fc y~cӌqJRMY*/l>\Ѵm#NϫQn>ٸ\$ƍӭ ?40OSE (/xn/hhYc~hd p})5%g%: c=zwx /KnFFt9jiOVAc2 o[c@Fǹֵ0=CmZ)Q@zfz3mo6 @ihߕ SV[X#Tm1ӚT]R (ω<5=R\$-qTn~=v!qx9%먬PYk~g~xZ M JXžnYʶXy op 
״xuN&J7b?LН>74g8O^WQNn򸒲QR0)]a%BHە=5xS_R{dɅ0I랧֑IYQ@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@Q@7cG<7+ y=k~y[NB}5\&"8\$[qe \$f=9\ETFHYCrhlwx[cc+?\xץ(0NU M-dP@\$f+z!~4K+DzcvW#m\$&|\1@~ZBvY&iR8׫;Y^',tڕ0O] -;JWӡ,aew "RjR+^XOyra g+1ǎ(ό/늫YفK7Z-u`BYp0JgxD} KWnyyuаSJ ~.5X:29'=[4|W622\1uǵ82K*U \$Ф\$jF;i]!}dH(hZdڅ8">}k3~5ѼS\$鲻p>[K:Ե-&KX"KVQW%O)6hQHEƗRǹ23j_RjѬrJeSAU-Ң(cHIY͵ՐWٰJ0HSGѢ曮MGC,AeI=ӡ[Vڅo2H>TGcTf YwВ,q NzP4  K3/|CiӨ@[iPۘ"9YbuxnVSJn-+P^^Aal72AcڄȒƲF2B)Ԁ]N7~ax@I' ҝjfEVXm9zmnErMREPEP^YsL7ץo@3ǯQj^'Ӵteijnn8qמjj*P*u4xT[`;N7'P9󮪪I+QRE\pEԨ=3֪ Hk:ZCpʲpg(u.fe|gSBWӮq6R4I,NF]NCЃTdTQ@V1 mN.7 #ѱ5cZ兵[Ŷ#ܤA*%{ ?Rmu8 Z՝R ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (3'k?i?U3Pե\I&+v=;mK5x`wVĂZR[ 0Ya'l\$>b~ +Ķ 
MHO]9d5=٘yuoS,=q[ V#@`J>V?9i(`cI:Zl!;|n}Xd:Ԭ )Q@/nյ YȯtȻzX7SS^{D|BxS\$hyP^ZֲT Sc~k4ͮ`o,ku rkpNY_SU|SU\$'
{}
Equations on partial words

RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 43 (2009) no. 1, pp. 23-39.

It is well-known that some of the most basic properties of words, like commutativity ($xy=yx$) and conjugacy ($xz=zy$), can be expressed as solutions of word equations. An important problem is to decide whether or not a given equation on words has a solution. For instance, the equation ${x}^{m}{y}^{n}={z}^{p}$ has only periodic solutions in a free monoid, that is, if ${x}^{m}{y}^{n}={z}^{p}$ holds with integers $m,n,p\ge 2$, then there exists a word $w$ such that $x,y,z$ are powers of $w$. This result, which received a lot of attention, was first proved by Lyndon and Schützenberger for free groups. In this paper, we investigate equations on partial words. Partial words are sequences over a finite alphabet that may contain a number of “do not know” symbols. When we speak about equations on partial words, we replace the notion of equality ($=$) with compatibility ($↑$). Among other equations, we solve $xy↑yx$, $xz↑zy$, and special cases of ${x}^{m}{y}^{n}↑{z}^{p}$ for integers $m,n,p\ge 2$.

DOI: https://doi.org/10.1051/ita:2007041
Classification: 68R15
Keywords: equations on words, equations on partial words, commutativity, conjugacy, free monoid
Blanchet-Sadri, Francine; Blair, D. Dakota; Lewis, Rebeca V. Equations on partial words. RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 43 (2009) no. 1, pp. 23-39. doi: 10.1051/ita:2007041. http://www.numdam.org/articles/10.1051/ita:2007041/

[1] J. Berstel and L. Boasson, Partial words and a theorem of Fine and Wilf. Theoret. Comput. Sci. 218 (1999) 135-141. | MR 1687780 | Zbl 0916.68120
[2] F. Blanchet-Sadri, Periodicity on partial words. Comput. Math. Appl. 47 (2004) 71-82. | MR 2062726 | Zbl 1068.68110
[3] F. Blanchet-Sadri, Codes, orderings, and partial words. Theoret. Comput. Sci. 329 (2004) 177-202. | MR 2103647 | Zbl 1086.68108
[4] F. Blanchet-Sadri, Primitive partial words. Discrete Appl. Math. 48 (2005) 195-213. | MR 2147791 | Zbl 1101.68643
[5] F. Blanchet-Sadri and Arundhati R. Anavekar, Testing primitivity on partial words. Discrete Appl. Math. 155 (2007) 179-287. | MR 2303152 | Zbl 1108.68093
[6] F. Blanchet-Sadri, D. Dakota Blair, and R.V. Lewis, Equations on partial words, MFCS 2006 31st International Symposium on Mathematical Foundations of Computer Science. Lect. Notes Comput. Sci. 3053 (2006) 611-622. | MR 2298175 | Zbl 1132.68513
[7] F. Blanchet-Sadri and Ajay Chriscoe, Local periods and binary partial words: an algorithm. Theoret. Comput. Sci. 314 (2004) 189-216. http://www.uncg.edu/mat/AlgBin/ | MR 2033749 | Zbl 1070.68061
[8] F. Blanchet-Sadri and S. Duncan, Partial words and the critical factorization theorem. J. Comb. Theory A 109 (2005) 221-245. http://www.uncg.edu/mat/cft/ | MR 2121025 | Zbl 1073.68067
[9] F. Blanchet-Sadri and R.A. Hegstrom, Partial words and a theorem of Fine and Wilf revisited. Theoret. Comput. Sci. 270 (2002) 401-419. | MR 1871078 | Zbl 0988.68142
[10] F. Blanchet-Sadri and D.K. Luhmann, Conjugacy on partial words. Theoret. Comput. Sci. 289 (2002) 297-312. | MR 1932900 | Zbl 1061.68123
[11] F. Blanchet-Sadri and N.D. Wetzler, Partial words and the critical factorization theorem revisited. Theoret. Comput. Sci. 385 (2007) 179-192. http://www.uncg.edu/mat/research/cft2/ | MR 2356251 | Zbl 1124.68086
[12] Y. Césari and M. Vincent, Une caractérisation des mots périodiques. C.R. Acad. Sci. Paris 268 (1978) 1175-1177. | Zbl 0392.20039
[13] C. Choffrut and J. Karhumäki, Combinatorics of Words, in Handbook of Formal Languages, Vol. 1, Ch. 6, edited by G. Rozenberg and A. Salomaa, Springer-Verlag, Berlin (1997) 329-438. | MR 1469998
[14] D.D. Chu and H.S. Town, Another proof on a theorem of Lyndon and Schützenberger in a free monoid. Soochow J. Math. 4 (1978) 143-146. | MR 530548 | Zbl 0412.20053
[15] M. Crochemore and W. Rytter, Text Algorithms. Oxford University Press, New York, NY (1994). | MR 1307378 | Zbl 0844.68101
[16] M. Crochemore and W. Rytter, Jewels of Stringology. World Scientific, NJ (2003). | MR 2012571 | Zbl 1078.68151
[17] E. Czeizler, The non-parametrizability of the word equation $xyz=zvx$: A short proof. Theoret. Comput. Sci. 345 (2005) 296-303. | MR 2171615 | Zbl 1079.68081
[18] N.J. Fine and H.S. Wilf, Uniqueness theorems for periodic functions. Proc. Amer. Math. Soc. 16 (1965) 109-114. | MR 174934 | Zbl 0131.30203
[19] L.J. Guibas and A.M. Odlyzko, Periods in strings. J. Comb. Theory A 30 (1981) 19-42. | MR 607037 | Zbl 0464.68070
[20] V. Halava, T. Harju and L. Ilie, Periods and binary words. J. Comb. Theory A 89 (2000) 298-303. | MR 1741010 | Zbl 0943.68128
[21] T. Harju and D. Nowotka, The equation ${x}^{i}={y}^{j}{z}^{k}$ in a free semigroup. Semigroup Forum 68 (2004) 488-490. | MR 2050904 | Zbl 1052.20044
[22] J.I. Hmelevskii, Equations in free semigroups. Proceedings of the Steklov Institute of Mathematics 107 (1971) 1-270 (American Mathematical Society, Providence, RI (1976)). | MR 393284 | Zbl 0326.02032
[23] P. Leupold, Partial words: results and perspectives. GRLMC, Tarragona (2003).
[24] M. Lothaire, Combinatorics on Words. Addison-Wesley, Reading, MA (1983). Cambridge University Press, Cambridge (1997). | MR 1475463 | Zbl 0514.20045
[25] M. Lothaire, Algebraic Combinatorics on Words. Cambridge University Press, Cambridge (2002). | MR 1905123 | Zbl 1001.68093
[26] M. Lothaire, Applied Combinatorics on Words. Cambridge University Press, Cambridge (2005). | MR 2165687 | Zbl 1133.68067
[27] R.C. Lyndon and M.P. Schützenberger, The equation ${a}^{m}={b}^{n}{c}^{p}$ in a free group. Michigan Math. J. 9 (1962) 289-298. | MR 162838 | Zbl 0106.02204
[28] G.S. Makanin, The problem of solvability of equations in a free semigroup. Math. USSR Sbornik 32 (1977) 129-198. | MR 486227 | Zbl 0396.20037
[29] A.A. Markov, The theory of algorithms. Trudy Mat. Inst. Steklov 42 (1954). | MR 77473 | Zbl 0058.00501
[30] G. Păun, N. Santean, G. Thierrin and S. Yu, On the robustness of primitive words. Discrete Appl. Math. 117 (2002) 239-252. | MR 1881279 | Zbl 1004.68127
[31] W. Plandowski, Satisfiability of word equations with constants is in NEXPTIME. Proceedings of the Annual ACM Symposium on Theory of Computing (1999) 721-725. | MR 1798096
[32] W. Plandowski, Satisfiability of word equations with constants is in PSPACE. Proceedings of the 40th Annual Symposium on Foundations of Computer Science (1999) 495-500. | MR 1917589
[33] E. Rivals and S. Rahmann, Combinatorics of periods in strings. J. Comb. Theory A 104 (2003) 95-113. | MR 2018422 | Zbl 1073.68706
[34] H.J. Shyr, Free Monoids and Languages. Hon Min Book Company, Taichung, Taiwan (1991). | MR 1090325 | Zbl 0746.20050
[35] H.J. Shyr and G. Thierrin, Disjunctive languages and codes. Lect. Notes Comput. Sci. 56 (1977) 171-176. | MR 478794 | Zbl 0366.68049
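The compatibility relation $↑$ used in the abstract above can be made concrete: two partial words of equal length are compatible when, at every position, the letters agree or at least one of them is a “do not know” symbol. A minimal sketch, assuming `?` as the hole symbol (the symbol choice and function name are mine, not the paper's):

```python
HOLE = "?"  # stand-in for the "do not know" symbol of a partial word

def compatible(u, v):
    # u ↑ v: same length, and at each position the letters agree
    # or at least one of them is a hole.
    return len(u) == len(v) and all(
        a == b or a == HOLE or b == HOLE for a, b in zip(u, v)
    )

# A partial-word instance of the commutativity equation xy ↑ yx:
x, y = "a?", "ab"
print(compatible(x + y, y + x))  # compares "a?ab" with "aba?"; prints True
```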
### ChitreshApte's blog

By ChitreshApte, history, 5 weeks ago,

We are given N words and have to divide them into groups. Each word goes into exactly one group, and any two words in a group must be related. Say we have two words, and let the sets of unique characters in these words be A and B respectively. The words are related if the difference between sets A and B is at most one. A difference of at most 1 means one of:

- 1 unique character of A missing in B. Example: A = ['a', 'b', 'c'] and B = ['a', 'b']
- 1 unique character of B missing in A. Example: A = ['a', 'b'] and B = ['a', 'b', 'c']
- 1 unique character of A replaced by another unique character in B. Example: A = ['a', 'b', 'c'] and B = ['a', 'b', 'd']

Find the minimum number of groups required to group all the words.

Constraints:

- 10 testcases
- 1 <= N <= 10^4
- 1 <= len(word) <= 30
- the words contain only lowercase alphabets

Sample Input:
4
aabcd
abc
efg
eert

Sample Output:
3

We need 3 groups: [aabcd, abc], [efg], [eert].

A related question: what can be the maximum-sized group that we can form?

» 5 weeks ago, # |   +4
This can be solved using a map and encryption: you have to encrypt the words into numbers (binary powers for each char that exists). While traversing the array, check whether a related encryption is present in the map before this operation; if not, increment the answer by 1. Push the current encryption before moving to the next string. The final answer will be stored in answer (initialize it with 0 in the beginning). Hope this question is not from any ongoing contest.

• » » 5 weeks ago, # ^ |   0
Can you please elaborate your approach with an example? This question is not from any ongoing contest; it was asked in a hiring challenge a week back.
• » » » 5 weeks ago, # ^ |   +5
Suppose the distinct characters are {a,b,c}; you can encrypt them as 2^0 + 2^1 + 2^2 = 7. Another example: {b,d} can be encrypted as 2^1 + 2^3 = 10. Now, a word it can relate to either lacks one of its characters or has one extra character (try to generate those encryptions in the same way as described above), so check whether any such encryption exists already. If it exists, this element will make a group with a previous element, so we need not define a new group for it.

• » » » » 5 weeks ago, # ^ | ← Rev. 2 →   0
OK, but in this way, which encryption is optimal to choose for the first word? It can have many. And if no matching encryption is found for a word, which one do we choose for it? A word with the set {a,b,c} can also be in the group denoted by {a,b}.

• » » » » » 5 weeks ago, # ^ |   0
We are asked to find the number of groups, not the groups themselves, so if a word is going to combine, it will combine with some word. On the other hand, note that we can generate all the possible encryptions for a word in constant time. For the first word you don't need to check any group, because it will not merge with any.

• » » » » » » 5 weeks ago, # ^ |   0
How are we going to check that, for the current encrypted word, we already have a group whose encrypted word differs from the current one in at most one place?

• » » » » » » » 5 weeks ago, # ^ |   0
You have to generate at most 26 words for the current word; that can be done for every word without exceeding the time limit.

• » » » » » 5 weeks ago, # ^ | ← Rev. 2 →   0
I got the same question in my coding round. I came up with an n^2 approach and it only got partially accepted (I made a graph with all related strings and counted the number of components in that graph).
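The graph/components idea from the last comment can be sketched as an O(N²) pairwise check with union-find over bitmask encodings. This counts connected components, which is the interpretation used in the thread; the function names and the exact `related` predicate (my reading of the three cases in the post) are mine:

```python
def min_groups(words):
    # Encode each word as a 26-bit mask of its distinct letters.
    masks = []
    for w in words:
        m = 0
        for ch in set(w):
            m |= 1 << (ord(ch) - ord('a'))
        masks.append(m)

    parent = list(range(len(words)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    def related(a, b):
        # At most one letter of each set is missing from the other:
        # covers "one missing", "one extra", and "one replaced".
        only_a = a & ~b
        only_b = b & ~a
        return bin(only_a).count('1') <= 1 and bin(only_b).count('1') <= 1

    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if related(masks[i], masks[j]):
                union(i, j)

    return len({find(i) for i in range(len(words))})

print(min_groups(["aabcd", "abc", "efg", "eert"]))  # sample answer: 3
```

For N = 10^4 the quadratic loop is about 5 * 10^7 predicate calls, which is the borderline cost the thread's map-of-encodings trick avoids.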
Deform projective Kahler to projective Kahler

Asked by SergY, 2011-07-04

<p>Let $X$ be a compact Kahler manifold with first Chern class $c_1(X)>0$ (i.e. positive). Consider a family $\pi\colon \mathcal{X} \to \mathcal{D}$ over the unit disc $\mathcal{D}$, with $X_0=X$. Do we know that $c_1(X_t)>0$ for $t \neq 0$? </p> <p>Easy example: let $Y$ be a compact Kahler manifold with $H^{2,0}(Y)=0$, then "Deform projective Kahler to projective Kahler"!</p>

Answer by Francesco Polizzi, 2011-07-04

<p>By the Kodaira embedding theorem, if $X$ is a compact Kahler manifold and $c_1(X)$ is positive, then $X$ is projective and $-K_X$ is ample, i.e. some power $(-K_X)^{\otimes n}$ gives an embedding $$\phi \colon X \to \mathbb{P}^{N}.$$ Now, given the family $\pi \colon \mathcal{X} \to \mathcal{D}$ we can consider the relative canonical line bundle $\mathcal{K}$ on $\mathcal{X}$. </p> <p>The restriction of $\mathcal{K}^{-1}$ to the central fiber $X_0=X$ is precisely $-K_X$ and, since ampleness is an open condition in families ([Lazarsfeld, Positivity in Algebraic Geometry I, Proposition 1.2.17 pag. 29]), we can conclude that $-K_{X_t}$ is also ample if $t$ is small enough. </p> <p>In other words, $c_1(X_t)$ remains positive for $t$ close enough to $0$. </p> <p>Notice that this is <strong>not</strong> necessarily true for large $t$.
For instance, take a smooth cubic surface $X \subset \mathbb{P}^3$, which is a Del Pezzo surface, and consider a $1$-parameter degeneration to a cubic surface with a node. Then take the simultaneous resolution of singularities, which exists for Rational Double Points. </p> <p>In this way we obtain a family $\pi \colon \mathcal{X} \to \mathcal{D}$ whose central fiber $X_0$ is isomorphic to $X$ and such that the fibre $X_{\tilde{t}}$ contains a $(-2)$ curve for some $\tilde{t} \in \mathcal{D}$. Therefore the first Chern class of $X_{\tilde{t}}$ is zero when restricted to this curve, in particular it is <strong>not</strong> positive.</p> <p>Of course, by the previous considerations the surface $X_t$ does not contain any $(-2)$-curve if $t$ is small enough.</p>
Time limit: 1 s | Memory limit: 256 MB | Submissions: 33 | Accepted: 16 | Solvers: 16 | Ratio: 66.667%

Problem

Your roommate recently gave you a somewhat passive-aggressive post-it note containing the question "how many empty shampoo bottles do we really need to keep in the shower?". Passive-aggressive post-it notes generally start with a single number P, the number of passive-aggressive statements and/or questions. P lines then follow, each line containing a passive-aggressive statement or question. Whenever P is equal to 1, the line containing it is typically omitted, and the entire note will just be one passive-aggressive statement or question. This is by far the most common.

Somewhat frustrated over your roommate's inability to count, you decide to write a program that any of your roommates can run whenever they wonder how many empty shampoo bottles are required for a happy existence here in Sector 001. Of course, different roommates can (and probably will) have different definitions of "empty", and therefore it is only fair that they should input this definition before getting an answer.

Input

The first line of input is T, the number of cases. T cases follow. The first line of each case contains two numbers separated by a space: E, the number of attempts that will be made at extracting visible amounts of shampoo before considering the bottle empty, and N, the number of candidate bottles. Then follow N lines, each line with a single number describing how many attempts were needed for a particular bottle. If the number of attempts E is exceeded, we consider the bottle empty.

• 1 ≤ T ≤ 100
• 1 ≤ E ≤ 1000
• 1 ≤ N ≤ 10

Output

Output a single line containing the number of empty shampoo bottles.

Sample Input 1
1
5 3
1
5
6

Sample Output 1
1
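A short reference solution for the statement above, assuming one answer line is printed per test case (the helper name `count_empty` is mine):

```python
import sys

def count_empty(e, attempts):
    # A bottle is empty once more than E attempts were needed for it.
    return sum(1 for a in attempts if a > e)

def main():
    data = sys.stdin.read().split()
    if not data:  # no input piped in
        return
    pos = 0
    t = int(data[pos]); pos += 1
    for _ in range(t):
        e, n = int(data[pos]), int(data[pos + 1]); pos += 2
        attempts = [int(x) for x in data[pos:pos + n]]; pos += n
        print(count_empty(e, attempts))

if __name__ == "__main__":
    main()
```

On the sample (E = 5, attempts 1, 5, 6), only the bottle that needed 6 attempts exceeds E, so the program prints 1.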
# 7.6: Improper Integrals

Created by: CK-12

## Learning Objectives

A student will be able to:

• Compute by hand the integrals of a wide variety of functions by using the technique of improper integration.
• Combine this technique with other integration techniques to integrate.
• Distinguish between proper and improper integrals.

The concept of improper integrals is an extension of the concept of definite integrals. Such integrals are called improper because either

• the interval of integration is infinite, or
• the integrand becomes infinite within the interval of integration.

We will take each case separately. Recall that in the definition of the definite integral $\int_a^b f(x)\, dx$ we assume that the interval of integration $[a, b]$ is finite and the function $f$ is continuous on this interval.

## Integration Over Infinite Limits

If the integrand $f$ is continuous over the interval $[a, \infty),$ then the improper integral in this case is defined as

$\int_a^{\infty} f(x)\, dx = \lim_{l \to \infty} \int_a^l f(x)\, dx.$

If this limit exists as a finite number, we say that the improper integral converges. If the limit fails to exist, the improper integral is said to diverge. The integral above has an important geometric interpretation that you need to keep in mind. Recall that, geometrically, the definite integral $\int_a^b f(x)\, dx$ represents the area under the curve. Similarly, the integral $\int_a^l f(x)\, dx$ is a definite integral that represents the area under the curve $f(x)$ over the interval $[a, l],$ as the figure below shows.
However, as $l$ approaches $\infty$, this area expands toward the area under the curve of $f(x)$ over the entire interval $[a, \infty).$ Therefore, the improper integral $\int_a^{\infty} f(x)\, dx$ can be thought of as the area under the function $f(x)$ over the interval $[a, \infty).$

Example 1: Evaluate $\int_1^{\infty} \frac{dx}{x}$.

Solution: We notice immediately that the integral is an improper integral because the upper limit of integration is infinite. First, replace the infinite upper limit by the finite limit $l$ and take the limit as $l$ approaches infinity:

$$\begin{aligned}\int_1^{\infty} \frac{dx}{x} &= \lim_{l \to \infty} \int_1^l \frac{dx}{x}\\ &= \lim_{l \to \infty} [\ln x]_1^l\\ &= \lim_{l \to \infty} (\ln l - \ln 1)\\ &= \lim_{l \to \infty} \ln l\\ &= \infty.\end{aligned}$$

Thus the integral diverges.

Example 2: Evaluate $\int_2^{\infty} \frac{dx}{x^2}$.

Solution:

$$\begin{aligned}\int_2^{\infty} \frac{dx}{x^2} &= \lim_{l \to \infty} \int_2^l \frac{dx}{x^2}\\ &= \lim_{l \to \infty} \left[\frac{-1}{x}\right]_2^l\\ &= \lim_{l \to \infty} \left(\frac{-1}{l} + \frac{1}{2}\right)\\ &= \frac{1}{2}.\end{aligned}$$

Thus the integral converges to $\frac{1}{2}.$

Example 3: Evaluate $\int_{-\infty}^{+\infty} \frac{dx}{1 + x^2}$.

Solution: What we need to do first is to split the integral into the two intervals $(-\infty, 0]$ and $[0, +\infty).$ So the integral becomes

$\int_{-\infty}^{+\infty} \frac{dx}{1 + x^2} = \int_{-\infty}^0 \frac{dx}{1 + x^2} + \int_0^{+\infty} \frac{dx}{1 + x^2}.$

Next, evaluate each improper integral separately.
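The limit definition in Example 2 can be checked numerically: integrate $1/x^2$ from $2$ up to ever larger finite limits $l$ and watch the values approach $\frac{1}{2}$. This is an illustrative sketch using a composite midpoint rule, not part of the original lesson.

```python
# Numerical sanity check of Example 2: the finite integrals from 2 to l
# approach 1/2 as l grows, matching the limit definition above.
import math

def midpoint_integral(f, a, b, n=100_000):
    """Composite midpoint rule for the definite integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

for l in (10, 100, 1000):
    approx = midpoint_integral(lambda x: 1.0 / (x * x), 2.0, l)
    print(l, approx)  # exact value is 1/2 - 1/l, tending to 0.5
```

The printed values track $\frac{1}{2} - \frac{1}{l}$, so the sequence visibly converges to the improper integral's value.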
Evaluating the first integral on the right,

$$\begin{aligned}\int_{-\infty}^0 \frac{dx}{1 + x^2} &= \lim_{l \to -\infty} \int_l^0 \frac{dx}{1 + x^2}\\ &= \lim_{l \to -\infty} \left[\tan^{-1} x\right]_l^0\\ &= \lim_{l \to -\infty} \left[\tan^{-1} 0 - \tan^{-1} l\right]\\ &= 0 - \left(-\frac{\pi}{2}\right) = \frac{\pi}{2}.\end{aligned}$$

Evaluating the second integral on the right,

$$\begin{aligned}\int_0^{\infty} \frac{dx}{1 + x^2} &= \lim_{l \to \infty} \int_0^l \frac{dx}{1 + x^2}\\ &= \lim_{l \to \infty} \left[\tan^{-1} x\right]_0^l\\ &= \lim_{l \to \infty} \left[\tan^{-1} l - \tan^{-1} 0\right]\\ &= \frac{\pi}{2} - 0 = \frac{\pi}{2}.\end{aligned}$$

Adding the two results,

$\int_{-\infty}^{+\infty} \frac{dx}{1 + x^2} = \frac{\pi}{2} + \frac{\pi}{2} = \pi.$

Remark: In the previous example, we split the integral at $x = 0.$ However, we could have split the integral at any value $x = c$ without affecting the convergence or divergence of the integral; the choice is completely arbitrary. This is a famous theorem that we will not prove here. That is,

$\int_{-\infty}^{+\infty} f(x)\, dx = \int_{-\infty}^c f(x)\, dx + \int_c^{+\infty} f(x)\, dx.$

## Integrands with Infinite Discontinuities

This is another type of integral that arises when the integrand has a vertical asymptote (an infinite discontinuity) at a limit of integration or at some point in the interval of integration. Recall from Chapter 5 in the Lesson on Definite Integrals that in order for the function $f$ to be integrable, it must be bounded on the interval $[a, b].$ Otherwise, the (proper) definite integral does not exist. For example, the integral

$\int_0^4 \frac{dx}{x - 1}$

has an infinite discontinuity at $x = 1$ because the integrand approaches infinity at this point. However, it is continuous on the two intervals $[0, 1)$ and $(1, 4].$ Looking at the integral more carefully, we may split the interval $[0,4] \rightarrow [0,1) \cup (1,4]$ and integrate over those two intervals to see if the integral converges.
$\int_0^4 \frac{dx}{x - 1} = \int_0^1 \frac{dx}{x - 1} + \int_1^4 \frac{dx}{x - 1}.$

We next evaluate each improper integral. Integrating the first integral on the right-hand side,

$$\begin{aligned}\int_0^1 \frac{dx}{x - 1} &= \lim_{l \to 1^{-}} \int_0^l \frac{dx}{x - 1}\\ &= \lim_{l \to 1^{-}} [\ln |x - 1|]_0^l\\ &= \lim_{l \to 1^{-}} [\ln |l - 1| - \ln |-1|]\\ &= -\infty.\end{aligned}$$

The first integral diverges, since $\ln|l - 1| \to -\infty$ as $l \to 1^{-}$, and thus there is no reason to evaluate the second integral. We conclude that the original integral diverges and has no finite value.

Example 4: Evaluate $\int_1^3 \frac{dx}{\sqrt{x - 1}}$.

Solution:

$$\begin{aligned}\int_1^3 \frac{dx}{\sqrt{x - 1}} &= \lim_{l \to 1^{+}} \int_l^3 \frac{dx}{\sqrt{x - 1}}\\ &= \lim_{l \to 1^{+}} \left[2\sqrt{x - 1}\right]_l^3\\ &= \lim_{l \to 1^{+}} \left[2\sqrt{2} - 2\sqrt{l - 1}\right]\\ &= 2\sqrt{2}.\end{aligned}$$

So the integral converges to $2\sqrt{2}$.

Example 5: In Chapter 5 you learned to find the volume of a solid by revolving a curve. Let the curve be $y = xe^{-x},$ $0 \le x < \infty,$ revolved about the $x$-axis. What is the volume of revolution?

Solution: From the figure above, the area of the circular cross-section is given by $A = \pi y^2 = \pi x^2 e^{-2x}$. Thus the volume of the solid is

$V = \pi \int_0^{\infty} x^2 e^{-2x}\, dx = \pi \lim_{l \to \infty} \int_0^l x^2 e^{-2x}\, dx.$

As you can see, we need to integrate by parts twice:

$$\begin{aligned}\int x^2 e^{-2x}\, dx &= -\frac{x^2}{2} e^{-2x} + \int x e^{-2x}\, dx\\ &= -\frac{x^2}{2} e^{-2x} - \frac{x}{2} e^{-2x} - \frac{1}{4} e^{-2x} + C.\end{aligned}$$

Thus

$$\begin{aligned}V &= \pi \lim_{l \to \infty} \left[-\frac{x^2}{2} e^{-2x} - \frac{x}{2} e^{-2x} - \frac{1}{4} e^{-2x}\right]_0^l\\ &= \pi \lim_{l \to \infty} \left[\frac{2x^2 + 2x + 1}{-4e^{2x}}\right]_0^l\\ &= \pi \lim_{l \to \infty} \left[\frac{2l^2 + 2l + 1}{-4e^{2l}} - \frac{1}{-4e^0}\right]\\ &= \pi \lim_{l \to \infty} \left[-\frac{2l^2 + 2l + 1}{4e^{2l}} + \frac{1}{4}\right].\end{aligned}$$

At this stage, we take the limit as $l$ approaches infinity.
Notice that when you substitute infinity into the function, the denominator of the expression $\frac{2l^2 + 2l + 1}{-4e^{2l}},$ being an exponential function, approaches infinity at a much faster rate than the numerator does. Thus this expression approaches zero at infinity. Hence

$V = \pi \left[0 + \frac{1}{4}\right] = \frac{\pi}{4},$

so the volume of the solid is $\pi/4.$

Example 6: Evaluate $\int_{-\infty}^{+\infty} \frac{dx}{e^x + e^{-x}}$.

Solution: This can be a tough integral! To simplify, rewrite the integrand as

$\frac{1}{e^x + e^{-x}} = \frac{1}{e^{-x}(e^{2x} + 1)} = \frac{e^x}{e^{2x} + 1} = \frac{e^x}{1 + (e^x)^2}.$

Substitute into the integral:

$\int \frac{dx}{e^x + e^{-x}} = \int \frac{e^x}{1 + (e^x)^2}\, dx.$

Using $u$-substitution, let $u = e^x,$ $du = e^x dx.$

$$\begin{aligned}\int \frac{dx}{e^x + e^{-x}} &= \int \frac{du}{1 + u^2}\\ &= \tan^{-1} u + C\\ &= \tan^{-1} e^x + C.\end{aligned}$$

Returning to our integral with infinite limits, we split it into two regions. Choose as the split point the convenient $x = 0.$

$\int_{-\infty}^{+\infty} \frac{dx}{e^x + e^{-x}} = \int_{-\infty}^{0} \frac{dx}{e^x + e^{-x}} + \int_{0}^{+\infty} \frac{dx}{e^x + e^{-x}}.$

Taking each integral separately,

$$\begin{aligned}\int_{-\infty}^{0} \frac{dx}{e^x + e^{-x}} &= \lim_{l \to -\infty} \int_{l}^{0} \frac{dx}{e^x + e^{-x}}\\ &= \lim_{l \to -\infty} \left[\tan^{-1} e^x\right]_{l}^{0}\\ &= \lim_{l \to -\infty} \left[\tan^{-1} e^0 - \tan^{-1} e^l\right]\\ &= \frac{\pi}{4} - 0 = \frac{\pi}{4}.\end{aligned}$$

Similarly,

$$\begin{aligned}\int_{0}^{+\infty} \frac{dx}{e^x + e^{-x}} &= \lim_{l \to \infty} \int_{0}^{l} \frac{dx}{e^x + e^{-x}}\\ &= \lim_{l \to \infty} \left[\tan^{-1} e^x\right]_{0}^{l}\\ &= \lim_{l \to \infty} \left[\tan^{-1} e^l - \tan^{-1} 1\right]\\ &= \frac{\pi}{2} - \frac{\pi}{4} = \frac{\pi}{4}.\end{aligned}$$

Thus the integral converges:

$\int_{-\infty}^{+\infty} \frac{dx}{e^x + e^{-x}} = \frac{\pi}{4} + \frac{\pi}{4} = \frac{\pi}{2}.$

For a video presentation of Improper Integrals (22.0),
see Improper Integrals, www.justmathtutoring.com (6:23). For a video presentation of Improper Integrals with Infinity in the Upper and Lower Limits (22.0), see Improper Integrals, www.justmathtutoring.com (7:55).

## Review Questions

1. Determine whether the following integrals are improper. If so, explain why.
   1. $\int_{1}^{7} \frac{x+2}{x-3}\, dx$
   2. $\int_{1}^{7} \frac{x+2}{x+3}\, dx$
   3. $\int_{0}^{1} \ln x\, dx$
   4. $\int_{0}^{\infty} \frac{1}{\sqrt{x-2}}\, dx$
   5. $\int_{0}^{\pi/4} \tan x\, dx$

Evaluate the integral or state that it diverges.

2. $\int_{1}^{\infty} \frac{1}{x^{2.001}}\, dx$
3. $\int_{-\infty}^{-2} \left[\frac{1}{x-1} - \frac{1}{x+1}\right] dx$
4. $\int_{-\infty}^{0} e^{5x}\, dx$
5. $\int_{3}^{5} \frac{1}{(x-3)^4}\, dx$
6. $\int_{-\pi/2}^{\pi/2} \tan x\, dx$
7. $\int_{0}^{1} \frac{1}{\sqrt{1-x^2}}\, dx$
8. The region between the $x$-axis and the curve $y = e^{-x}$ for $x \ge 0$ is revolved about the $x$-axis.
   1. Find the volume of revolution, $V.$
   2. Find the surface area of the volume generated, $S.$

## Review Answers

1. 
   1. Improper; infinite discontinuity at $x = 3.$
   2. Not improper.
   3. Improper; infinite discontinuity at $x = 0.$
   4. Improper; infinite interval of integration.
   5. Not improper.
2. $\frac{1}{1.001}$
3. $\ln 3$
4. $\frac{1}{5}$
5. divergent
6. divergent
7. $\frac{\pi}{2}$
8. 
   1. $V = \pi/2$
   2. $S = \pi\left[\sqrt{2} + \ln(1 + \sqrt{2})\right]$

## Homework

Evaluate the following integrals.

1. $\int \sqrt{\sin x}\, \cos x\, dx$
2. $\int x \tan^2(x^2) \sec^2(x^2)\, dx$
3. $\int_{0}^{\ln 3} \sqrt{e^{2x} - 1}\, dx$
4. $\int_{0}^{\infty} \frac{1}{x^2}\, dx$
5. $\int_{-1}^{8} \frac{1}{\sqrt[3]{x}}\, dx$
6. $\int \frac{x^2+x-16}{(x+1)(x-3)^2}\, dx$
7. Graph and find the volume of the region enclosed by the $x$-axis, the $y$-axis, $x = 2$ and $y = x^2/(9-x^2)$ when revolved about the $x$-axis.
8. The Gamma Function, $\Gamma(x)$, is an improper integral that appears frequently in quantum physics.
It is defined as
   $\Gamma(x) = \int_{0}^{\infty} t^{x-1} e^{-t}\, dt.$
   The integral converges for all $x > 0.$
   1. Find $\Gamma(1).$
   2. Prove that $\Gamma(x + 1) = x\Gamma(x)$ for all $x > 0$.
   3. Prove that $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}.$
9. Refer to the Gamma Function defined in the previous exercise to prove that
   1. $\int_{0}^{\infty} e^{-x^n}\, dx = \Gamma\left(\frac{n+1}{n}\right),$ $n > 0$ [Hint: Let $t = x^n$]
   2. $\int_{0}^{1} (\ln x)^n\, dx = (-1)^n \Gamma(n+1),$ $n \ge 0$ [Hint: Let $t = -\ln x$]
10. In wave mechanics, a sawtooth wave is described by the integral
   $\int_{-\pi/\omega}^{+\pi/\omega} t \sin(k\omega t)\, dt,$
   where $k$ is called the wave number, $\omega$ is the frequency, and $t$ is the time variable. Evaluate the integral.

Answers:

1. $\frac{2}{3}(\sin x)^{3/2} + C$
2. $\frac{1}{6}\tan^3(x^2) + C$
3. $\sqrt{8} - \sec^{-1} 3$
4. divergent
5. $\frac{9}{2}$
6. $\ln\frac{(x-3)^2}{|x+1|} + \frac{1}{x-3} + C$
7. $\pi\left(\frac{19}{5} - \frac{9}{4}\ln 5\right)$
8. $\Gamma(1) = 1$ (first part)
9. Hints: let $t = x^n$ for part 1 and $t = -\ln x$ for part 2.
10. $\frac{2\sin(k\pi)}{k^2\omega^2} - \frac{2\pi\cos(k\pi)}{k\omega^2}$
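The Gamma-function facts in the exercises can be verified numerically, since the Python standard library's `math.gamma` implements the same $\Gamma$. This is a quick check added for illustration, not part of the original exercise set.

```python
# Numerical checks of Γ(1) = 1, Γ(1/2) = √π, and Γ(x+1) = xΓ(x).
import math

assert abs(math.gamma(1.0) - 1.0) < 1e-12                      # Γ(1) = 1
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12       # Γ(1/2) = √π
for x in (0.5, 1.5, 3.2):
    assert abs(math.gamma(x + 1) - x * math.gamma(x)) < 1e-9   # recurrence
print("all Gamma identities check out")
```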
# Why are electrons shared equally when calculating formal charge, but unequally when calculating oxidation state?

• When calculating formal charge, electrons are shared equally between the atoms in the bond.
• When calculating oxidation state, both electrons are assigned to the more electronegative atom.

Why is this different? I am assuming it has something to do with the intended uses of formal charge and oxidation state, which I believe are:

• Formal charge can be used to find the most stable structure.
• Oxidation numbers can be used to determine what will become oxidised in a redox reaction.

However, even based on their uses, I am unsure why they are calculated differently.

*Images from Wikipedia

• Are you talking about a complex, for instance like $\ce{FeCl4^−}$? – MaxW Feb 19 '17 at 22:47
• I am unsure what the implications of something being a complex are (I just googled what a complex itself refers to). But my question was for a compound such as $\ce{CO2}$. I have added some images to clarify. Thank you. – K-Feldspar Feb 19 '17 at 22:53
• @K-Feldspar Think of it as there being two types of bonds, the ionic bond and the covalent bond. In covalent bonding, two atoms share one electron pair per bond. In ionic bonding, the electronegative atom takes the electron from the electropositive one. Formal charge accounts only for 100% covalent bonding, i.e. electronegativity is fully neglected. Oxidation states account only for the asymmetrical electron distribution associated with a 100% ionic bond, for example in Na$^{+}$Cl$^{-}$. By definition, formal charge and oxidation state account for different types of bonds. – Verktaj Feb 20 '17 at 14:20
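The two bookkeeping rules can be made concrete with a small calculation for the $\ce{CO2}$ example from the comments. This is an illustrative sketch (function names and the Pauling electronegativity values are my own inputs, not from the question): formal charge splits each bond pair evenly, while oxidation state hands both bond electrons to the more electronegative partner.

```python
# Formal charge: valence - nonbonding - bonding/2 (bond electrons split evenly).
# Oxidation state: valence - nonbonding - (all bond electrons if this atom is
# the more electronegative partner, else none).

EN = {"C": 2.55, "O": 3.44}  # approximate Pauling electronegativities

def formal_charge(valence, nonbonding, bonding_electrons):
    return valence - nonbonding - bonding_electrons // 2

def oxidation_state(element, partner, valence, nonbonding, bonding_electrons):
    wins_pair = EN[element] > EN[partner]  # more electronegative atom takes all
    return valence - nonbonding - (bonding_electrons if wins_pair else 0)

# Carbon in O=C=O: 4 valence electrons, 0 lone-pair electrons, 8 bonding electrons.
print(formal_charge(4, 0, 8))              # 0
print(oxidation_state("C", "O", 4, 0, 8))  # 4  (i.e. +4)
# Each oxygen in O=C=O: 6 valence, 4 nonbonding, 4 bonding electrons.
print(formal_charge(6, 4, 4))              # 0
print(oxidation_state("O", "C", 6, 4, 4))  # -2
```

Same molecule, same electron count, two different partitioning conventions: all formal charges are 0, while the oxidation states are +4 and -2.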
# Revision history [back]

If what you are interested in is computing values of this function, you can use numerical integration instead of symbolic (note that `u` must be declared as a symbolic variable first):

sage: u = var('u')
sage: el = lambda theta: -numerical_integral(log(abs(2*sin(u))), 0, theta)[0]
sage: plot(el, 0, pi)
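For readers without Sage, here is a plain-Python sketch of the same computation (my own addition). A composite midpoint rule never samples the endpoints, which sidesteps the integrable logarithmic singularity of the integrand at $u = 0$.

```python
# el(theta) = -∫_0^theta log|2 sin u| du, evaluated by a midpoint rule.
import math

def el(theta, n=200_000):
    h = theta / n
    total = sum(math.log(abs(2.0 * math.sin((i + 0.5) * h))) for i in range(n))
    return -h * total

# Known check: ∫_0^{π/2} log(2 sin u) du = 0, so el(π/2) should be ~0.
print(el(math.pi / 2))
```

A second known value: $\mathrm{el}(\pi/4) = \tfrac{1}{2}\,\mathrm{Cl}_2(\pi/2)$, half of Catalan's constant, roughly $0.45798$.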
# True Colors

I have picked two colors. They shall shine in the dark. ~~~ Another version of my series Reality and Imagination. I used artistic license when tweaking the color values.

# Boosted

I have been playing with the geometry of special relativity again! The light cone signifies the invariance of the speed of light. There is a notion of length in four-dimensional spacetime, defined as $c^2t^2 - x^2 - y^2 - z^2$. Surfaces of constant length are hyperboloids. Light rays are null rays, as light travels …

# Complex Alien Eclipse

My colorful complex function lived in a universe of white light. I turned off the light. Turned it into its negative. Expected it to look bleak. Like thin white bones on black canvas, cartoon skeletons of imaginary alien creatures. But it is more like the total solar eclipse I watched in 1999. There is interference, …

# Spins, Rotations, and the Beauty of Complex Numbers

This is a simple quantum state ... |➚> = α|↑> + β|↓> ... built from an up state |↑> and a down state |↓>. α and β are complex numbers. The result |➚> is in the middle, oblique. The oblique state is a superposition of the up and down base states. Making a measurement, you …

# Lines and Circles

I poked at the complex function 1/z, and its real and imaginary parts look like magical towers. When you look at these towers from above or below, you see sections of perfect circles. This is hinting at some underlying simplicity. Using the map 1/z, another complex number - w=1/z - is mapped to z. Four dimensions …

# Reality and Imagination

Grey and colorful. Cutting through each other. Chasing each other. Meeting in the center, leaning on each other, forming an infinite line.
~ Reality and Imagination: Real and imaginary part of the complex function 1/z ~ The real part of 1/z is painted in shades of grey, the imaginary part in rainbow colors. Plots are created …

# Super Motivational Function

I've presented a Motivational Function a while back.

$f(z) = e^{-\frac{1}{z^{2}}}$

It is infinitely flat at the zero point: all its derivatives are zero there. Yet, it manages to lift its head - as it is not analytic at zero! If you think of it as a function of a complex argument, its …

# Motivational Function

Deadly mutants are after us. What can give us hope? This innocuous-looking function is a sublime light in the dark. It proves you can always recover. If your perseverance is infinite.

$e^{-\frac{1}{x^{2}}}$

As x tends to zero, the exponent tends to minus infinity. The function's value at zero tends to zero. It is …
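A quick numerical illustration (my own addition) of just how flat $e^{-1/x^2}$ is at the origin: it vanishes faster than any power of $x$, which is the reason every derivative at $0$ is zero even though the function is not identically zero.

```python
# e^(-1/x^2) beats every power of x near the origin.
import math

def f(x):
    return math.exp(-1.0 / (x * x)) if x != 0 else 0.0

for k in (2, 5, 10):
    x = 0.1
    print(k, f(x) / x**k)  # stays tiny no matter how large the power k
```

At $x = 0.1$ the function's value is about $e^{-100} \approx 3.7 \times 10^{-44}$, so even divided by $x^{10} = 10^{-10}$ the ratio is still vanishingly small.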
# NAG Toolbox: nag_specfun_cdf_normal (s15ab)

## Purpose

nag_specfun_cdf_normal (s15ab) returns the value of the cumulative Normal distribution function, $P(x)$, via the function name.

## Syntax

[result, ifail] = s15ab(x)
[result, ifail] = nag_specfun_cdf_normal(x)

## Description

nag_specfun_cdf_normal (s15ab) evaluates an approximate value for the cumulative Normal distribution function

$P(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-u^2/2}\, du.$

The function is based on the fact that

$P(x) = \frac{1}{2}\,\mathrm{erfc}\left(\frac{-x}{\sqrt{2}}\right)$

and it calls nag_specfun_erfc_real (s15ad) to obtain a value of $\mathrm{erfc}$ for the appropriate argument.

## References

Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications

## Parameters

### Compulsory Input Parameters

1: x – double scalar
The argument $x$ of the function.

### Optional Input Parameters

None.

### Output Parameters

1: result – double scalar
The result of the function.

2: ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

There are no failure exits from this function. The parameter ifail is included for consistency with other functions in this chapter.

## Accuracy

Because of its close relationship with $\mathrm{erfc}$, the accuracy of this function is very similar to that in nag_specfun_erfc_real (s15ad).
If $\epsilon$ and $\delta$ are the relative errors in the result and argument, respectively, they are in principle related by

$|\epsilon| \simeq \left|\frac{x e^{-\frac{1}{2}x^2}}{\sqrt{2\pi}\, P(x)}\, \delta\right|$

so that the relative error in the argument, $x$, is amplified by a factor $\frac{x e^{-\frac{1}{2}x^2}}{\sqrt{2\pi}\, P(x)}$ in the result. For $x$ small and for $x$ positive this factor is always less than one and accuracy is mainly limited by machine precision. For large negative $x$ the factor behaves like $\sim x^2$ and hence to a certain extent relative accuracy is unavoidably lost. However, the absolute error in the result, $E$, is given by

$|E| \simeq \left|\frac{x e^{-\frac{1}{2}x^2}}{\sqrt{2\pi}}\, \delta\right|$

so absolute accuracy can be guaranteed for all $x$.

## Example

```function nag_specfun_cdf_normal_example
x = -20;
[result, ifail] = nag_specfun_cdf_normal(x)
```
```
result = 2.7536e-89
ifail = 0
```
```function s15ab_example
x = -20;
[result, ifail] = s15ab(x)
```
```
result = 2.7536e-89
ifail = 0
```
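The identity the routine is built on, $P(x) = \frac{1}{2}\,\mathrm{erfc}(-x/\sqrt{2})$, can be reproduced outside the NAG Toolbox with the Python standard library; this sketch is for comparison only and is not part of the NAG documentation.

```python
# Cumulative standard normal via the erfc identity used by s15ab.
import math

def cdf_normal(x):
    """P(x) = (1/2) * erfc(-x / sqrt(2))."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

print(cdf_normal(-20.0))  # ~2.7536e-89, matching the s15ab example output
```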
# Gambling probability problem

#### zstudds

##### New Member

A slot machine works by inserting a $1 coin. If the player wins, the coin is returned with an additional $1 coin; otherwise the original coin is lost. The probability of winning is 1/2 unless the previous play resulted in a win, in which case the probability is p < 1/2. If the cost of maintaining the machine averages $c per play (with c < 1/3), give conditions on the value of p that the owner of the machine must arrange in order to make a profit in the long run.

#### vinux

##### Dark Knight

What have you done on this problem? If it looks difficult, start with the case c = 0.

#### Ace864

##### New Member

Do you think that it is possible to calculate the probabilities of gambling games? Of course, there are some theories and even tasks about calculating dice or other gambling probabilities. But, in my opinion, it is pointless to try to figure out the chances of winning in gambling games. In real life, most of the theories do not work, and you lose despite all predictions. That's why I prefer betting on soccer games rather than playing cards or gambling in casinos. When you bet on soccer games, you are more likely to win something if you know something about the teams you are betting on.

Last edited:

#### hlsmith

##### Less is more. Stay pure. Stay poor.

If the inputs are known, yup. Six sides to a fair die, number of cards in a deck or slots on a Roulette wheel, etc.
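The thread leaves the derivation to the reader. As an unofficial sketch (my own working, not from any post above): the game is a two-state Markov chain over "previous play won" (W) and "previous play lost" (L), with P(win | L) = 1/2 and P(win | W) = p. Solving the stationary equation gives the long-run win frequency, and the profit condition follows.

```python
# Stationary analysis of the slot machine as a two-state Markov chain.
# pi_W = p*pi_W + (1/2)*(1 - pi_W)  =>  pi_W = 1 / (3 - 2p).
# The owner nets +$1 on a loss, -$1 on a win, and pays $c per play, so
# long-run profit needs 1 - 2*pi_W - c > 0, i.e. p < (1 - 3c) / (2*(1 - c)).

def stationary_win_prob(p):
    return 1.0 / (3.0 - 2.0 * p)

def owner_profit_per_play(p, c):
    return 1.0 - 2.0 * stationary_win_prob(p) - c

def max_profitable_p(c):
    # threshold on p below which the owner profits in the long run
    return (1.0 - 3.0 * c) / (2.0 * (1.0 - c))

# Confirm the closed form by iterating the chain to its fixed point.
p = 0.3
pi_w = 0.5
for _ in range(200):
    pi_w = p * pi_w + 0.5 * (1.0 - pi_w)
print(pi_w, stationary_win_prob(p))  # both ~0.416667
```

As a sanity check, setting c = 0 gives the threshold p < 1/2, consistent with the problem statement's restriction p < 1/2 being exactly the break-even boundary of the free-maintenance machine.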
Density, distribution function, hazards, quantile function and random generation for the Weibull distribution when parameterized for network meta-analysis.

## Usage

dweibullNMA(x, a0, a1 = FALSE, log = FALSE)
pweibullNMA(q, a0, a1, lower.tail = TRUE, log.p = FALSE)
qweibullNMA(p, a0, a1, lower.tail = TRUE, log.p = FALSE)
rweibullNMA(n, a0, a1)
hweibullNMA(n, a0, a1, log = FALSE)
HweibullNMA(n, a0, a1, log = FALSE)
rmst_weibullNMA(t, a0, a1, start = 0)
mean_weibullNMA(a0, a1)

## Arguments

x, q — Vector of quantiles.
a0 — Intercept of the reparameterization of the Weibull distribution.
a1 — Slope of the reparameterization of the Weibull distribution.
log, log.p — logical; if TRUE, probabilities p are given as log(p).
lower.tail — logical; if TRUE (default), probabilities are $$P(X \le x)$$; otherwise, $$P(X > x)$$.
p — Vector of probabilities.
n — Number of observations. If length(n) > 1, the length is taken to be the number required.
t — Vector of times for which the restricted mean survival time is evaluated.
start — Optional left-truncation time or times. The returned restricted mean survival time will be conditional on survival up to this time.

## Value

dweibullNMA gives the density, pweibullNMA gives the distribution function, qweibullNMA gives the quantile function, rweibullNMA generates random deviates, HweibullNMA returns the cumulative hazard and hweibullNMA the hazard.

## See Also

dweibull
# “Work” when biking up a hill

So, when biking, I noticed that going up hills was less tiring if I went up them more quickly. This can't be explained by the total work done, which is force times distance and should be the same either way. But the longer you are going uphill, the longer gravity is pulling you backwards. And if you only provide enough force to counteract gravity (from a stop), you will never make it up the hill, yet you will feel quite tired afterwards. Whereas if you push really hard, you will hardly slow down at all.

I know that if you are coasting, then conservation of energy applies, and $v_i^2 = v_f^2 + C$, where C is proportional to the gravitational potential energy gained at the top of the hill. But this doesn't explain why it is more taxing to go up a hill slowly than quickly. It's the same amount of energy transformed into gravitational potential energy either way.

-

It's complicated to mix physics objects like force and work with concepts like "tiring" and biomechanics. – Diego Mar 13 '11 at 15:01

Dear @Tyr, I think that you're boasting that you can go up the hill quickly. If the slope is significant and my speed is more than 10-15 kilometers per hour, I may end up totally exhausted! ;-) Obviously, your rule that higher speed means less exhaustion isn't universal. :-) Otherwise I agree that some energy may be uselessly wasted just by preserving the position - except that I don't think it's the case of biking. – Luboš Motl Mar 13 '11 at 17:36

There is also the issue as to whether this has anything to do with the gearing on the bike. – Roy Simpson Mar 13 '11 at 18:27
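One way to make the puzzle quantitative is a toy model (entirely my own, not from the question or the comments): the mechanical work $mgh$ to climb is speed-independent, but a rider also pays a roughly constant metabolic overhead per unit time just to keep pedaling and balancing, and a slower climb means a longer exposure to that overhead. The numbers below (overhead wattage, muscle efficiency) are made-up assumptions for illustration.

```python
# Toy model: total metabolic cost = mechanical work / efficiency
#                                   + constant overhead power * climb duration.
G = 9.81  # m/s^2

def metabolic_cost_joules(mass_kg, height_m, slope_length_m, speed_ms,
                          overhead_watts=100.0, efficiency=0.25):
    work = mass_kg * G * height_m          # useful mechanical work, fixed
    duration = slope_length_m / speed_ms   # slower climb -> longer exposure
    return work / efficiency + overhead_watts * duration

slow = metabolic_cost_joules(80, 20, 200, speed_ms=2.0)
fast = metabolic_cost_joules(80, 20, 200, speed_ms=6.0)
print(slow > fast)  # True: the same hill costs more in this model when climbed slowly
```

The model reproduces the observation qualitatively: the $mgh$ term is identical for both climbs, so the entire difference comes from the time-proportional overhead term.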
## Computer Aided Geometric Design Short Title: Comput. Aided Geom. Des. Publisher: Elsevier (North-Holland), Amsterdam ISSN: 0167-8396 Online: http://www.sciencedirect.com/science/journal/01678396 Comments: Indexed cover-to-cover Documents Indexed: 1,913 Publications (since 1984) References Indexed: 1,755 Publications with 38,777 References. all top 5 ### Latest Issues 97 (2022) 96 (2022) 95 (2022) 94 (2022) 93 (2022) 92 (2022) 91 (2021) 90 (2021) 89 (2021) 88 (2021) 87 (2021) 86 (2021) 85 (2021) 84 (2021) 83 (2020) 82 (2020) 81 (2020) 80 (2020) 79 (2020) 78 (2020) 77 (2020) 76 (2020) 75 (2019) 74 (2019) 73 (2019) 72 (2019) 71 (2019) 70 (2019) 69 (2019) 68 (2019) 67 (2018) 66 (2018) 65 (2018) 64 (2018) 63 (2018) 62 (2018) 61 (2018) 60 (2018) 59 (2018) 58 (2017) 57 (2017) 56 (2017) 55 (2017) 54 (2017) 52-53 (2017) 51 (2017) 50 (2017) 49 (2016) 48 (2016) 47 (2016) 46 (2016) 45 (2016) 44 (2016) 43 (2016) 42 (2016) 41 (2016) 40 (2015) 39 (2015) 38 (2015) 37 (2015) 35-36 (2015) 34 (2015) 33 (2015) 32 (2015) 31, No. 9 (2014) 31, No. 7-8 (2014) 31, No. 6 (2014) 31, No. 5 (2014) 31, No. 3-4 (2014) 31, No. 2 (2014) 31, No. 1 (2014) 30, No. 9 (2013) 30, No. 8 (2013) 30, No. 7 (2013) 30, No. 6 (2013) 30, No. 5 (2013) 30, No. 4 (2013) 30, No. 3 (2013) 30, No. 2 (2013) 30, No. 1 (2013) 29, No. 9 (2012) 29, No. 8 (2012) 29, No. 7 (2012) 29, No. 6 (2012) 29, No. 2 (2012) 29, No. 1 (2012) 28, No. 9 (2011) 28, No. 8 (2011) 28, No. 7 (2011) 28, No. 6 (2011) 28, No. 5 (2011) 28, No. 4 (2011) 28, No. 3 (2011) 28, No. 2 (2011) 28, No. 1 (2011) 27, No. 9 (2010) 27, No. 8 (2010) 27, No. 7 (2010) 27, No. 6 (2010) 27, No. 5 (2010) ...and 169 more Volumes all top 5 ### Authors 60 Farouki, Rida T. 44 Goldman, Ronald N. 39 Jüttler, Bert 37 Peters, Jorg 32 Sederberg, Thomas W. 28 Pottmann, Helmut 25 Wang, Wenping 24 Chen, Falai 22 Farin, Gerald E. 
22 Prautzsch, Hartmut 20 Hormann, Kai 20 Lávička, Miroslav 19 Elber, Gershon 19 Manni, Carla 18 Kosinka, Jiří 18 Sánchez-Reyes, Javier 17 Deng, Chongyang 17 Floater, Michael S. 17 Krajnc, Marjeta 17 Peña, Juan Manuel 17 Schumaker, Larry L. 16 Dyn, Nira 16 Reif, Ulrich 16 Sabin, Malcolm A. 15 Karčiauskas, Kęstutis 15 Pyo Moon, Hwan 15 Sestini, Alessandra 15 Wang, Guozhao 15 Xu, Guoliang 15 Zheng, Jianmin 14 Böhm, Wolfgang 14 Šír, Zbyněk 14 Speleers, Hendrik 13 Giannelli, Carlotta 13 Romani, Lucia 12 Alcazar, Juan Gerardo 12 Beccari, Carolina Vittoria 12 Carnicer, Jésus Miguel 12 Deng, Jiansong 12 Dodgson, Neil A. 12 Hamann, Bernd 12 Han, Chang Yong 12 Kim, Myung-Soo 12 Kwon, Song-Hwa 12 Ma, Weiyin 12 Meek, Dereck S. 12 Pérez-Díaz, Sonia 12 Sampoli, Maria Lucia 12 Žagar, Emil 11 Goodman, Timothy N. T. 11 Qin, Hong 11 Vršek, Jan 11 Wang, Guojin 10 Bajaj, Chandrajit L. 10 Barnhill, Robert E. 10 Bizzarri, Michal 10 Gregory, John A. 10 Hagen, Hans-Juergen 10 Hermann, Thomas 10 Jia, Xiaohong 10 Lai, Mingjun 10 Lyche, Tom 10 Mourrain, Bernard 10 Peternell, Martin 10 Tu, Changhe 10 Walton, Desmond J. 9 Ait-Haddou, Rachid 9 Hoschek, Josef 9 Hu, Shimin 9 Lavery, John E. 9 Lee, E. T. Y. 9 Liu, Ligang 9 Maekawa, Takashi 9 Manocha, Dinesh 9 Mazure, Marie-Laurence 9 Polthier, Konrad 9 Seidel, Hans-Peter 9 Sendra, Juan Rafael 9 Shen, Liyong 9 Várady, Tamás 9 Wang, Xuhui 8 Barton, Michael H. 8 Casciola, Giulio 8 Choi, Hyeong In 8 Costantini, Paolo Giuseppe 8 Degen, Wendelin L. F. 8 Guo, Xiaohu 8 Hartmann, Erich 8 Kaklis, Panagiotis D. 8 Monterde, Juan 8 Patrikalakis, Nicholas M. 8 Sapidis, Nickolas S. 8 Wallner, Johannes 8 Worsey, Andrew J. 
8 Zhang, Renjiang 7 Albrecht, Gudrun 7 Alfeld, Peter 7 Bastl, Bohumír 7 Conti, Costanza 7 Dierckx, Paul ...and 1,840 more Authors all top 5 ### Fields 1,512 Numerical analysis (65-XX) 465 Computer science (68-XX) 366 Approximations and expansions (41-XX) 125 Differential geometry (53-XX) 63 Algebraic geometry (14-XX) 61 Geometry (51-XX) 25 General and overarching topics; collections (00-XX) 19 Mechanics of particles and systems (70-XX) 17 Commutative algebra (13-XX) 17 Convex and discrete geometry (52-XX) 12 History and biography (01-XX) 9 Combinatorics (05-XX) 9 Biology and other natural sciences (92-XX) 8 Partial differential equations (35-XX) 8 Information and communication theory, circuits (94-XX) 7 Special functions (33-XX) 7 Calculus of variations and optimal control; optimization (49-XX) 7 Mechanics of deformable solids (74-XX) 7 Fluid mechanics (76-XX) 6 Harmonic analysis on Euclidean spaces (42-XX) 6 Operations research, mathematical programming (90-XX) 5 Linear and multilinear algebra; matrix theory (15-XX) 5 Global analysis, analysis on manifolds (58-XX) 4 Functions of a complex variable (30-XX) 4 Manifolds and cell complexes (57-XX) 3 Field theory and polynomials (12-XX) 3 Associative rings and algebras (16-XX) 3 Real functions (26-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Algebraic topology (55-XX) 1 Number theory (11-XX) 1 Sequences, series, summability (40-XX) 1 Integral transforms, operational calculus (44-XX) 1 Probability theory and stochastic processes (60-XX) 1 Quantum theory (81-XX) 1 Geophysics (86-XX) 1 Systems theory; control (93-XX) 1 Mathematics education (97-XX) ### Citations contained in zbMATH Open 1,506 Publications have been cited 14,727 times in 5,414 Documents Cited by Year A 4-point interpolatory subdivision scheme for curve design. Zbl 0638.65009 Dyn, Nira; Levin, David; Gregory, John A. 1987 A survey of curve and surface methods in CAGD. 
# Local summability - Lebesgue Integration

Hi everyone, I have been studying Lebesgue integration for a few weeks now and I still cannot understand its philosophy. I started with simple exercises, such as the following one:

Let $$f(x) = \frac{a}{|x|^b}$$. The question is: "for what values of b is f locally summable (or integrable)?"

I have plotted lots of different possibilities and I really cannot see why it would not be integrable over the whole set of real numbers. What am I missing?

Peter

Guest

### Re: Local summability - Lebesgue Integration

Edit: the function is not $$\frac{a}{|x|^b}$$ but simply $$\frac{1}{|x|^b}$$.

Guest

### Re: Local summability - Lebesgue Integration

What definition of "locally summable" are you using?

Guest

### Re: Local summability - Lebesgue Integration

$$\frac{1}{|x|^b}$$ is continuous, hence locally summable, everywhere except at x = 0. Do you see why, for some values of b, there would be a problem at x = 0?

Guest
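The hint in the thread can be checked by hand: for $$0 < \varepsilon < 1$$ and $$b \neq 1$$, $$\int_\varepsilon^1 x^{-b}\,dx = \frac{1 - \varepsilon^{1-b}}{1-b}$$, which stays bounded as $$\varepsilon \to 0$$ exactly when $$b < 1$$ (for $$b = 1$$ the integral is $$-\ln\varepsilon$$, which also diverges). So $$1/|x|^b$$ is locally summable on all of $$\mathbb{R}$$ precisely when $$b < 1$$. The sketch below (plain Python, standard library only; the `midpoint_sum` helper is mine, not from the thread) illustrates this numerically — the sums stabilize for $$b = 0.5$$ but blow up for $$b = 1.5$$:

```python
def midpoint_sum(b, n):
    """Midpoint Riemann sum approximating the improper integral
    of x**(-b) over (0, 1] using n equal subintervals."""
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** (-b) * h for i in range(n))

# b < 1: the sums settle near the exact value 1 / (1 - b) = 2.
for n in (10**3, 10**4, 10**5):
    print(f"b = 0.5, n = {n}: {midpoint_sum(0.5, n):.4f}")

# b > 1: the sums keep growing as the grid refines -- divergence at x = 0.
for n in (10**3, 10**4, 10**5):
    print(f"b = 1.5, n = {n}: {midpoint_sum(1.5, n):.1f}")
```

Plotting the function cannot reveal this, which is why the graphs look harmless: the trouble is not the shape of the curve away from 0 but the rate at which the area accumulates as you approach the singularity.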
# Health and Medical Physics Commons™

Open Access. Powered by Scholars. Published by Universities.®

55 Full-Text Articles · 48 Authors · 7,737 Downloads · 14 Institutions

## All Articles in Health and Medical Physics

55 full-text articles. Page 1 of 2.

#### Classification Of Intensity-Modulated Proton Therapy Plans, Louise Gabrielle Lima '19, Alice Liu '19

2018, Illinois Mathematics and Science Academy

##### Student Publications & Research

Proton radiotherapy is a form of radiation treatment that uses energized protons to break DNA, leading to cell death and killing cancers.

#### Endorectal Digital Prostate Tomosynthesis, Joseph Robert Steiner

2018, Louisiana State University and Agricultural and Mechanical College

##### LSU Doctoral Dissertations

Several areas of prostate cancer (PCa) management, such as imaging permanent brachytherapy implants or small, aggressive lesions, benefit from high image resolution. Current PCa imaging methods can have inadequate resolution for imaging these areas. Endorectal digital prostate tomosynthesis (endoDPT), an imaging method that combines an external x-ray source and an endorectal x-ray sensor, can produce three-dimensional images of the prostate region that have high image resolution compared to typical methods. This high resolution may improve PCa management and increase positive outcomes in affected men. This dissertation presents the initial development of endoDPT, including system design, image quality assessment, and examples ...
2018 University of Arkansas, Fayetteville #### Behavior Coding Strategies: Population Coupling And The Functional Role Of Excitatory/Inhibitory Balance In Primary Motor Cortex, Patrick Aaron Kells ##### Theses and Dissertations The complexities of an organism’s experience of- and interaction with the world are emergent phenomena produced by large populations of neurons within the cerebral cortex and other brain regions. The network dynamics of these populations have been shown to be sometimes synchronous, with many neurons firing together, and sometimes asynchronous, with neurons firing more independently, leading to a decades-old debate within the neuroscience community. This discrepancy comes from viewing the system at two different scales; at the single cell level, the spiking activity of two neurons within cortex tend to be rather independent, but when the average activity of ... 2017 Louisiana State University and Agricultural and Mechanical College #### Development And Applications Of A Real-Time Magnetic Electron Energy Spectrometer For Use With Medical Linear Accelerators, Paul Ethan Maggi ##### LSU Doctoral Dissertations Purpose – This work presents a design for a real-time electron energy spectrometer, and provides data analysis methods and characterization of the real-time system. This system is intended for use with medical linear accelerators (linacs). The goal is 1 Hz acquisition of the energy range 4-25 MeV, reconstructed in 0.1 MeV increments. Methods – Our spectrometer uses a nominal 0.54 T permanent magnet block as the dispersive element and scintillating fibers coupled to a CCD camera as the position sensitive detector. A broad electron beam produced by a linac is collimated by a 6.35 mm dimeter aperture at the ... 2017 University of New Mexico - Main Campus #### Proposed Method For Measuring The Let Of Radiotherapeutic Particle Beams, Stephen D. 
Bello ##### Physics & Astronomy ETDs The Bragg peak geometry of the depth dose distributions for hadrons allows for precise and effective dose delivery to tumors while sparing neighboring healthy tissue. Further, compared against other forms of radiotherapeutic treatments, such as electron beam therapy (EBT) or photons (x and $$\gamma$$-rays), hadrons create denser ionization events along the particle track, which induces irreparable damage to DNA, and thus are more effective at inactivating cancerous cells. The measurement of radiation's ability to inactivate cellular reproduction is the relative biological effectiveness (RBE). A quality related to the RBE that is a measurable physical property is the linear ... 2017 The University of Western Ontario #### Determining The Detective Quantum Efficiency (Dqe) Of X-Ray Detectors In Clinical Environments, Terenz R. Escartin ##### Electronic Thesis and Dissertation Repository According to Health Canada, dental and medical radiography accounts for more than 90% of total man-made radiation dose to the general population. Ensuring patients receive the health benefits of diagnostic x-ray imaging without use of higher radiation exposures requires knowledge and understanding of the detective quantum efficiency (DQE). Currently, the DQE is not measured in clinics because it requires specialized instrumentation and specific DQE-expertise to perform an accurate analysis. In this regard, the goals of this thesis were to: 1) address the limitations of measuring the DQE in clinical environments that affects the accuracy of the measurement; 2) develop and ... 2017 Virginia Commonwealth University #### An Algorithm To Improve Deformable Image Registration Accuracy In Challenging Cases Of Locally-Advanced Non-Small Cell Lung Cancer, Christopher L. Guy ##### Theses and Dissertations A common co-pathology of large lung tumors located near the central airways is collapse of portions of lung due to blockage of airflow by the tumor. 
Not only does the lung volume decrease as collapse occurs, but fluid from capillaries also fills the space no longer occupied by air, greatly altering tissue appearance. During radiotherapy, typically administered to the patient over multiple weeks, the tumor can dramatically shrink in response to the treatment, restoring airflow to the lung sections which were collapsed when therapy began. While return of normal lung function is a positive development, the change in anatomy presents ... 2015 University of Nebraska Medical Center #### Postural Responses To Perturbations Of The Vestibular System During Walking In Healthy Young And Older Adults, Jung Hung Chien ##### Theses & Dissertations It has been shown that approximate one-third of US adults aged 40 years and older (69 million US citizens) have some type of vestibular problems. These declining abilities of the vestibular system affect quality of life. Difficulties in performing daily activities (dressing, bathing, getting in and out of the bed and etc.) have been highly correlated to loss of balance due to vestibular disorders. The exact number of people affected by vestibular disorders is still difficult to quantify. This might be because symptoms are difficult to describe and differences exist in the qualifying criteria within and across studies. Thus, it ... 2014 University of Iowa #### Soup Consumption Is Associated With A Lower Dietary Energy Density And A Better Diet Quality In Us Adults, Yong Zhu, James Hollis ##### Food Science and Human Nutrition Publications Epidemiological studies have revealed that soup consumption is associated with a lower risk of obesity. Moreover, intervention studies have reported that soup consumption aids in body-weight management. However, little is known about mechanisms that can explain these findings. 
The objective of the present study was to investigate associations between soup consumption and daily energy intake, dietary energy density (ED), nutrient intake and diet quality. Adults aged 19–64 years who participated in the National Health and Nutrition Examination Surveys during 2003–8 were included in the study. Soup consumers were identified from the first dietary recall using the United States ... 2014 Virginia Commonwealth University #### Multi – Modality Molecular Imaging Of Adoptive Immune Cell Therapy In Breast Cancer, Fatma Youniss ##### Theses and Dissertations Cancer treatment by adoptive immune cell therapy (AIT) is a form of immunotherapy that relies on the in vitro activation and/or expansion of immune cells. In this approach, immune cells, particularly CD8+ T lymphocytes, can potentially be harvested from a tumor-bearing patient, then activated and/or expanded in vitro in the presence of cytokines and other growth factors, and then transferred back into the same patient to induce tumor regression. AIT allows the in vitro generation and activation of T-lymphocytes away from the immunosuppressive tumor microenvironment, thereby providing optimum conditions for potent anti-tumor activity. The overall objective of this ... 2014 Virginia Commonwealth University #### Multimodality Molecular Imaging Of [18f]-Fluorinated Carboplatin Derivative Encapsulated In [111in]-Labeled Liposome, Narottam Lamichhane ##### Theses and Dissertations Platinum based chemotherapy is amongst the mainstream DNA-damaging agents used in clinical cancer therapy today. Agents such as cisplatin, carboplatin are clinically prescribed for the treatment of solid tumors either as single agents, in combination, or as part of multi-modality treatment strategy. 
Despite the potent anti-tumor activity of these drugs, overall effectiveness is still hampered by inadequate delivery and retention of drug in tumor and unwanted normal tissue toxicity, induced by non-selective accumulation of drug in normal cells and tissues. Utilizing molecular imaging and nanoparticle technologies, this thesis aims to contribute to better understanding of how to improve the profile ... 2014 Virginia Commonwealth University #### Statistical Modeling Of Interfractional Tissue Deformation And Its Application In Radiation Therapy Planning, Douglas J. Vile ##### Theses and Dissertations In radiation therapy, interfraction organ motion introduces a level of geometric uncertainty into the planning process. Plans, which are typically based upon a single instance of anatomy, must be robust against daily anatomical variations. For this problem, a model of the magnitude, direction, and likelihood of deformation is useful. In this thesis, principal component analysis (PCA) is used to statistically model the 3D organ motion for 19 prostate cancer patients, each with 8-13 fractional computed tomography (CT) images. Deformable image registration and the resultant displacement vector fields (DVFs) are used to quantify the interfraction systematic and random motion. By applying ... 2013 Virginia Commonwealth University #### Hybrid Pet/Mri Nanoparticle Development And Multi-Modal Imaging, David Hoffman ##### Theses and Dissertations The development of hybrid PET/MRI imaging systems needs to be paralleled with the development of a hybrid intrinsic PET/MRI probes. The aim of this work was to develop and validate a novel radio-superparamagnetic nanoparticle (r-SPNP) for hybrid PET/MRI imaging. This was achieved with the synthesis of superparamagnetic iron oxide nanoparticles (SPIONs) that intrinsically incorporated 59Fe and manganese iron oxide nanoparticles (MIONs) that intrinsically incorporated 52Mn. 
Both [59Fe]-SPIONs and [52Mn]-MIONs were produced through thermal decomposition synthesis. The physiochemical characteristics of the r-SPNPs were assessed with TEM, DLS, and zeta-potential measurements, as well as in imaging phantom ... 2013 Selected Works #### Characterization Of A Small Animal Spect Platform For Use In Preclinical Translational Research, Dustin Ryan Osborne ##### Dustin Ryan Osborne Imaging Iodine-125 requires an increased focus on developing an understanding of how fundamental processes used by imaging systems work to provide quantitative output for the imaging system. Isotopes like I-125 pose specific imaging problems that are a result of low energy emissions as well as how closely spaced those emissions are in the spectrum. This work seeks to characterize the performance of a small animal SPECT-CT imaging system with respect to imaging I-125 for use in a preclinical translational research environment and to understand how the performance of this system relates to critical applications such as attenuation and scatter correction ... 2013 Virginia Commonwealth University #### Positron Emission Tomography For Pre-Clinical Sub-Volume Dose Escalation, Christopher Bass ##### Theses and Dissertations Purpose: This dissertation focuses on establishment of pre-clinical methods facilitating the use of PET imaging for selective sub-volume dose escalation. Specifically the problems addressed are 1.) The difficulties associated with comparing multiple PET images, 2.) The need for further validation of novel PET tracers before their implementation in dose escalation schema and 3.) The lack of concrete pre-clinical data supporting the use of PET images for guidance of selective sub-volume dose escalations. Methods and materials: In order to compare multiple PET images the confounding effects of mispositioning and anatomical change between imaging sessions needed to be alleviated. To mitigate the ... 
2013 Virginia Commonwealth University #### Principled Variance Reduction Techniques For Real Time Patient-Specific Monte Carlo Applications Within Brachytherapy And Cone-Beam Computed Tomography, Andrew Sampson ##### Theses and Dissertations This dissertation describes the application of two principled variance reduction strategies to increase the efficiency for two applications within medical physics. The first, called correlated Monte Carlo (CMC) applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied for two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between the highly correlated dose computing homogeneous and heterogeneous geometries. The particle transport in the heterogeneous geometry assumed a purely homogeneous environment ... 2013 Virginia Commonwealth University #### Time Dependent Cone-Beam Ct Reconstruction Via A Motion Model Optimized With Forward Iterative Projection Matching, David Staub ##### Theses and Dissertations The purpose of this work is to present the development and validation of a novel method for reconstructing time-dependent, or 4D, cone-beam CT (4DCBCT) images. 4DCBCT can have a variety of applications in the radiotherapy of moving targets, such as lung tumors, including treatment planning, dose verification, and real time treatment adaptation. However, in its current incarnation it suffers from poor reconstruction quality and limited temporal resolution that may restrict its efficacy. Our algorithm remedies these issues by deforming a previously acquired high quality reference fan-beam CT (FBCT) to match the projection data in the 4DCBCT data-set, essentially creating a ... 
2013 Virginia Commonwealth University #### Automatic Block-Matching Registration To Improve Lung Tumor Localization During Image-Guided Radiotherapy, Scott Robertson ##### Theses and Dissertations To improve relatively poor outcomes for locally-advanced lung cancer patients, many current efforts are dedicated to minimizing uncertainties in radiotherapy. This enables the isotoxic delivery of escalated tumor doses, leading to better local tumor control. The current dissertation specifically addresses inter-fractional uncertainties resulting from patient setup variability. An automatic block-matching registration (BMR) algorithm is implemented and evaluated for the purpose of directly localizing advanced-stage lung tumors during image-guided radiation therapy. In this algorithm, small image sub-volumes, termed “blocks”, are automatically identified on the tumor surface in an initial planning computed tomography (CT) image. Each block is independently and automatically registered ... 2013 Virginia Commonwealth University #### A Study Of Coverage Optimized Planning Incorporating Models Of Geometric Uncertainties For Prostate Cancer, Huijun Xu ##### Theses and Dissertations A fundamental challenge in the treatment planning process of multi-fractional external-beam radiation therapy (EBRT) is the tradeoff between tumor control and normal tissue sparing in the presence of geometric uncertainties (GUs). To accommodate GUs, the conventional way is to use an empirical planning treatment volume (PTV) margin on the treatment target. However, it is difficult to determine a near-optimal PTV margin to ensure specified target coverage with as much normal tissue protection as achievable. Coverage optimized planning (COP) avoids this problem by optimizing dose in possible virtual treatment courses with GU models directly incorporated. A near-optimal dosimetric margin generated by ... 
Optimization Of Radiation Therapy In Time-Dependent Anatomy, 2013 Virginia Commonwealth University #### Optimization Of Radiation Therapy In Time-Dependent Anatomy, W. Tyler Watkins ##### Theses and Dissertations The objective of this dissertation is to develop treatment planning techniques that have the potential to improve radiation therapy of time-dependent (4D) anatomy. Specifically, this study examines dose estimation, dose evaluation, and decision making in the context of optimizing lung cancer radiation therapy. Two methods of dose estimation are compared in patients with locally advanced and early stage lung cancer: dose computed on a single image (3D-dose) and deformably registered, accumulated dose (or 4D-dose). The results indicate that differences between 3D- and 4D- dose are not significant in organs at risk (OARs), however, 4D-dose to a moving lung cancer target ...
## Unit 9 Section 4 : Surface Area and Volume of 3-D Shapes

Find the surface area of the box. Step 1: Identify a base, and find its area and perimeter. Any pair of opposite faces can be the bases. For example, we can choose the bottom and top of the box.

## 12.4 Real-World Problems: Surface Area and Volume

Volume and surface area of a cylinder: $V=\pi {r}^{2}h$ and $S=2\pi {r}^{2}+2\pi rh$. Volume of a cone: for a cone with radius $r$ and height $h$, $V=\frac{1}{3}\pi {r}^{2}h$.

## Volume and surface area word problems

Again, surface area measures the area of the total outside surfaces of an object, while volume measures the internal space that the object takes up. You’ll find many real-life ...

## Unit: Module 5: Area, surface area, and volume problems

If the radius of a sphere is 3r, what is its volume? Solution: Given that the radius of the sphere is 3r, $V=\frac{4}{3}\pi (3r)^{3}=36\pi {r}^{3}$.
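The cylinder and sphere formulas above are easy to check numerically; here is a minimal sketch (the function names are just for illustration):

```python
import math

def cylinder(r, h):
    """Volume V = pi r^2 h and total surface area S = 2 pi r^2 + 2 pi r h."""
    volume = math.pi * r ** 2 * h
    surface = 2 * math.pi * r ** 2 + 2 * math.pi * r * h
    return volume, surface

def sphere_volume(r):
    """Volume of a sphere, V = (4/3) pi r^3."""
    return 4.0 / 3.0 * math.pi * r ** 3

V, S = cylinder(2, 5)
print(V, S)  # 20*pi and 28*pi, about 62.83 and 87.96

# Tripling the radius (r -> 3r) multiplies a sphere's volume by 3^3 = 27,
# which is exactly the 36*pi*r^3 result in the worked problem above:
print(sphere_volume(3) / sphere_volume(1))  # ~27 (up to floating-point error)
```

Note how the ratio check makes the scaling law visible without any algebra: every length scales by 3, so the volume scales by 3³.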
# International

International mostly means something (a company, language, or organization) involving more than a single country. The term international as a word means involvement of, interaction between or encompassing more than one nation, or generally beyond national boundaries. For example, international law, which is applied by more than one country and usually everywhere on Earth, and international language, which is a language spoken by residents of more than one country.

## Origin of the word

The term international was coined by the utilitarian philosopher Jeremy Bentham in his Introduction to Principles of Morals and Legislation, which was printed for publication in 1780 and published in 1789. Bentham wrote: "The word international, it must be acknowledged, is a new one; though, it is hoped, sufficiently analogous and intelligible. It is calculated to express, in a more significant way, the branch of law which goes commonly under the name of the law of nations."[1] The word was adopted in French in 1801.[2] Thomas Erskine Holland noted in his article on Bentham in the 11th edition of the Encyclopædia Britannica that "Many of Bentham's phrases, such as 'international,' 'utilitarian,' 'codification,' are valuable additions to our language; but the majority of them, especially those of Greek derivation, have taken no root in it."

## Meaning in particular fields

"International" is also sometimes used as a synonym for "global".

## References

1. Le Nouveau Petit Robert 2010.
2. Language Map
3. Gode, Alexander, Interlingua: A Grammar of the International Language. New York: Frederick Ungar, 1951.

## Sources

• Ankerl, Guy (2000). Global communication without universal civilization. INU societal research. Vol.1: Coexisting contemporary civilizations: Arabo-Muslim, Bharati, Chinese, and Western. Geneva: INU Press.
# Kabir Singh

Released on 21st June 2019

Kabir Singh is a Hindi drama film directed by Sandeep Vanga. It is a remake of his own Telugu film Arjun Reddy (2017). The film stars Shahid Kapoor and Kiara Advani.

Cast :

Director : Sandeep Vanga

Show Timings :

GLITZ CINEMAS 09:00 AM, 09:45 AM, 01:00 PM, 02:45 PM, 04:20 PM, 06:50 PM, 07:45 PM, 11:00 PM

SILVER CITY 10:00 AM, 10:45 AM, 12:30 PM, 01:30 PM, 02:45 PM, 03:45 PM, 04:45 PM, 07:00 PM, 09:30 PM, 10:15 PM

BIG CINEMAS 09:30 AM, 12:50 PM, 02:10 PM, 04:10 PM, 07:30 PM, 09:55 PM, 10:50 PM

PVR CINEMAS 09:00 AM, 10:30 AM, 11:25 AM, 12:30 PM, 02:55 PM, 04:00 PM, 05:30 PM, 06:25 PM, 07:30 PM, 09:55 PM, 11:00 PM

MOVIE LOUNGE 10:10 AM, 01:15 PM, 04:20 PM, 07:25 PM, 09:40 PM

MUKTA A2 09:45 AM, 01:00 PM, 01:40 PM, 04:15 PM, 07:35 PM, 09:15 PM, 10:45 PM

NEW EMPIRE & PRABHAT CINEMA 11:00 AM, 02:00 PM, 05:00 PM, 08:00 PM

Language : Hindi

Duration : 172 Minutes
# Lipschitz regularity for viscous Hamilton-Jacobi equations with $L^p$ terms created by goffi on 01 Jan 2019 [BibTeX] Submitted Paper Inserted: 1 jan 2019 Last Updated: 1 jan 2019 Year: 2018 ArXiv: 1812.03706 PDF Abstract: We provide Lipschitz regularity for solutions to viscous time-dependent Hamilton-Jacobi equations with right-hand side belonging to Lebesgue spaces. Our approach is based on a duality method, and relies on the analysis of the regularity of the gradient of solutions to a dual (Fokker-Planck) equation. Here, the regularizing effect is due to the non-degenerate diffusion and coercivity of the Hamiltonian in the gradient variable.
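For orientation, the class of equations described in the abstract has the following model form. This is an assumption based on the standard setting for viscous Hamilton-Jacobi equations with a Hamiltonian that is coercive in the gradient; the paper itself should be consulted for the precise hypotheses:

```latex
\partial_t u - \Delta u + H(x, Du) = f(x,t), \qquad f \in L^p,
\qquad H(x, q) \ \ge\ C_1 |q|^{\gamma} - C_2, \quad \gamma > 1.
```

In the duality method sketched in the abstract, this equation is paired with a dual Fokker-Planck equation, and the Lipschitz bound on $u$ is deduced from gradient regularity estimates for the solution of that dual problem, using the non-degenerate diffusion and the coercivity of $H$.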
# Viewpoint: Ultracold controlled chemistry Physics 3, 10 New experiments extend chemical dynamics research to temperatures below 1 microkelvin. Bound by the Coulomb interaction forces alone, atomic nuclei and electrons combine to form an incredible variety of molecular systems. When molecules react, they undergo chemical transformations leading to rearrangement of atoms within molecules or transfer of atoms between molecules. This process may absorb or release energy; however, the energy change in a chemical reaction is much smaller than the total energy of the Coulomb interaction in a molecule (Fig. 1). This makes chemistry a game of small numbers and a chemical reaction a very complex process to study. Much of our current understanding of chemical reaction dynamics is due to the development of the technology for producing and colliding molecular beams [1]. A molecular beam is a gaseous ensemble of molecules moving as a whole in the laboratory frame. The molecules are often prepared with a narrow distribution of internal energies (up to a few kelvin) and a low density. When two molecular beams collide, molecules react under single-collision conditions and the reaction products scatter in a particular direction, where they can be detected by a variety of techniques. By varying the angle between the crossed beams, it is possible to tune the collision energy of the molecules. Molecular beam experiments, however, have two significant limitations: they only probe the outcome of a chemical reaction, providing no direct information about the actual process of bond breaking and bond making [2], and, because the density of molecules is usually undetermined, they cannot measure the absolute rate of a chemical reaction. The latter is crucial for calibrating theories of elementary chemical reactions. 
Two groups have now taken a completely new approach to study chemical transformations of molecules: Steven Knoop and colleagues at the University of Innsbruck in Austria and the Austrian Academy of Sciences, in collaboration with Jose D’Incao at JILA and Brett Esry at Kansas State University, both in the US, reporting in Physical Review Letters [3], and Silke Ospelkaus and co-workers at JILA and NIST, also in the US [4]. These researchers start with an ensemble of atoms cooled to an ultralow temperature ($<10\,\mu$K) and confined in an optical potential of a focused laser beam. The atoms are then linked together by a time-varying magnetic field in the work of Knoop et al., or by irradiating the sample of atoms with lasers of two different frequencies in the work of Ospelkaus et al. This produces diatomic molecules with the same temperature as that of the precursor atoms. The atoms and the molecules are trapped in the laboratory frame by the confining potential of the focused laser beam. The trapped molecules are finally allowed to collide with the trapped atoms or with each other and undergo chemical reactions. The collision energy is determined by the temperature of the atom-molecule mixture so these experiments probe chemical reactions at unprecedentedly low temperatures. The molecules produced by Knoop et al. have unusual properties. They are very extended, with interatomic distances exceeding $100$ times the size of the hydrogen atom (Bohr radius) and barely bound, having a binding energy $10^{10}$ times smaller than the binding energy of the hydrogen molecule in the ground rovibrational state. Their structure is very sensitive to an external magnetic field and their unusual properties lead to unusual chemistry. For example, the authors show that an atom exchange process $B+A_2 \rightarrow AB+A$ can occur while all three atoms are separated by large distances of $>100$ Bohr radii. (In these experiments, $A$ and $B$ are cesium atoms in different hyperfine levels.)
Due to the small binding energy of the molecules, the reaction $B+A_2 \rightarrow AB+A$ can be tuned by an external magnetic field from endothermic to exothermic. At ultralow temperatures, this tunes the reaction from allowed to forbidden. In the experiment of Ospelkaus et al., $K$ and $Rb$ atoms are photoassociated to produce $KRb$ molecules in the absolute ground state (i.e., the state of the lowest vibrational, rotational, hyperfine, and Zeeman energy). The molecules are stable in the absence of chemical reaction processes. Chemical reactions release energy and expunge the reaction products from the trap, and so Ospelkaus et al. can measure the reaction rates by monitoring the trap loss in collisions of the $KRb$ molecules with each other or with $K$ atoms also prepared in the quantum state of the lowest energy. Because the number of molecules in the trap is known, these measurements yield the absolute rates of the elementary reaction processes. The extremely low temperature of the molecular gas allows for an extremely high resolution of the experiment. For example, the authors measure the reactions of molecules prepared in different magnetic sublevels of different hyperfine energy states. The measurement shows that the reactions can be tuned by transferring molecules from one hyperfine state to another. Collision dynamics of molecules at ultralow temperatures is determined by quantum statistics, and the work of Ospelkaus et al. shows that chemical encounters of fermionic molecules are suppressed. When molecules are prepared in different states, the Fermi suppression is absent and the chemical reactions are dramatically enhanced. The experiments of Ospelkaus et al. and Knoop et al. mark the advent of ultracold chemistry. The experiments demonstrate that both the internal and external degrees of freedom of molecules at ultralow temperatures can be controlled with high precision, which makes ultracold chemistry controlled chemistry.
Ultracold controlled chemistry is unlikely to become a competitive method to synthesize new molecular materials anytime soon; however, the experiments with ultracold molecules offer a new intimate look into the microscopic dynamics of molecules [5,6]. For example, ultracold controlled chemistry offers an opportunity to study a novel class of chemical reactions induced by very weak (fine and hyperfine) interactions. The effect of fine-structure interactions on chemical reactivity has been a long-standing question in physical chemistry [7]. Measurements of chemical reactions at ultralow temperatures can be used to explore the effects of weak long-range intermolecular interactions on chemical dynamics of molecules—another long-standing question in physical chemistry [8]. Ultracold controlled chemistry will provide an ultimate probe of the role of quantum effects in chemistry [9]. Experiments with ultracold molecules can be used to explore chemical dynamics stimulated by many-body quantum effects such as Bose-enhanced chemistry [10] or chemistry in reduced dimensions [11]. At the same time, ultracold controlled chemistry may provide useful information for elucidating chemical dynamics at elevated temperatures. Idziaszek and Julienne [12] have recently shown that the measurements of Ospelkaus et al. [4] can be modeled by a multichannel quantum defect theory with one free parameter that describes the interactions of molecules at short intermolecular separations. This parameter—which can be inferred from an ultracold chemistry measurement—is independent of temperature and encapsulates the dynamics of a chemical reaction. A proper extension of the theory of Idziaszek and Julienne may provide a unique method for mapping intermolecular interactions that govern the dynamics of a chemical reaction at short intermolecular distances. 
Ultracold controlled chemistry is thus a new regime of chemistry research that will be instrumental for unraveling the complexity of chemical dynamics of molecules. ## References 1. R. Levine, Molecular reaction dynamics, (Cambridge University Press, London, 2005)[Amazon][WorldCat] 2. D. Herschbach, Faraday Discuss. 142, 9 (2009) 3. S. Knoop, F. Ferlaino, M. Berninger, M. Mark, H-C. Nägerl, R. Grimm, J. P. D’Incao, and B. D. Esry, Phys. Rev. Lett. 104, 053201 (2010) 4. S. Ospelkaus, K.-K. Ni, D. Wang, M. H. G. de Miranda, B. Neyenhuis, G. Quemener, P. S. Julienne, J. L. Bohn, D. S. Jin, J. Ye, arXiv:0912.3854 5. R. V. Krems, Phys. Chem. Chem. Phys. 10, 4079 (2008) 6. Cold molecules: theory, experiment, applications, edited by R. V. Krems, W. C. Stwalley, and B. Friedrich (CRC Press, Boca Raton, Florida, 2009)[Amazon][WorldCat] 7. E. Garand, J. Zhou, D. E. Manolopoulos, M. H. Alexander, and D. M. Neumark, Science 319, 72 (2008) 8. D. Skouteris, D. E. Manolopoulos, W. S. Bian, H. J. Werner, L. H. Lai, and K. P. Liu, Science 286, 1713 (1999) 9. L. Carr, D. DeMille, R. V. Krems, and J. Ye, New J. Phys. 11, 055049 (2009) 10. M. G. Moore and A. Vardi, Phys. Rev. Lett. 88, 160402 (2002) 11. Z. Li and R. V. Krems, Phys. Rev. A 79, 050701 (2009) 12. Z. Idziaszek and P. S. Julienne, arXiv:0912.0370 Roman Krems is Associate Professor of Theoretical Chemistry at the University of British Columbia in Vancouver, Canada. He graduated from Moscow State University, Russia, in 1999 and obtained a Ph.D. in physical chemistry from Göteborg University, Sweden, in 2002. He was Smithsonian Predoctoral Fellow at the Harvard-Smithsonian Center for Astrophysics in 2001–2002 and Postdoctoral Fellow at the Harvard-MIT Center for Ultracold Atoms in 2003–2005. His current research focuses on understanding the effects of external electromagnetic fields on dynamics of molecules at low temperatures, the interaction properties of cold and ultracold molecules, and ultracold chemistry. 
# Math Help - Exponential of square of Normal Distribution 1. ## Exponential of square of Normal Distribution Dear All, I come across an equation when reading a paper. The author has skipped the steps to derive the equation. I am new to distributions. Could you please help me look into it? Thanks in advance! y~N(0,1+e/e) 2. I cannot read the exponent on the right in front of the integral. But from what I can see, this does not make sense. On the RHS, y is a dummy variable of integration, but it's a variable on the LHS. 3. Thank matheagle for your reply. My question can be simplified as following: Given x~N(0,1) , a standard normal variable. What is the expectation of exp(-x^2)? 4. Just combine the exponentials and create a new normal rv. There's no reason to integrate. we know that ${1\over b\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(x-a)^2/(2b^2)}dx=1$ you want $E(e^{-X^2}) = {1\over \sqrt{2\pi}} \int_{-\infty}^{\infty}e^{-x^2} e^{-x^2/2}dx ={1\over \sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-3x^2/2}dx$ $= {1\over \sqrt{3}} {\sqrt{3}\over \sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-3x^2/2}dx= {1\over \sqrt{3}}$ 5. Thank matheagle. I got it!
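The same completing-the-square trick gives the general formula for $X\sim N(0,1)$ and any $t>-\tfrac{1}{2}$:

```latex
E\left(e^{-tX^2}\right)
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-tx^2}\,e^{-x^2/2}\,dx
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(1+2t)x^2/2}\,dx
= \frac{1}{\sqrt{1+2t}},
```

since the last integrand is an unnormalized normal density with $a=0$ and $b = 1/\sqrt{1+2t}$ in matheagle's identity. Setting $t=1$ recovers $E(e^{-X^2}) = 1/\sqrt{3}$.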
{}
## C Specification typedef VkFlags VkShaderStageFlags; ## Description VkShaderStageFlags is a mask of zero or more VkShaderStageFlagBits. It is used as a member and/or parameter of the structures and commands in the See Also section below.
{}
# Converting a text file to a CSV file

I'm attempting to learn more about Java and have created a method that takes a text file with stdout (a space-separated file) and converts it to a CSV file. I was attempting to use standard Java SE version 8. Is there a better, more efficient, way of doing this?

The logic is:

1. Open the file
2. Read the file line by line into a string so it can be split
3. Split the string, removing spaces, back into an array
4. Join with StringJoiner using ,
5. Convert back to a string to remove a leading ,
6. Update the final array to be returned

Method to open the file:

    public void OpenFile(String fileName) {
        try {
            subFile = new Scanner(new File(fileName));
        } catch (Exception e) {
            System.out.println("File doesn't exist.");
        }
    }

Method to convert:

    public String[] TextToCsvArray(String[] fileArray) {
        int i = 0;
        while (subFile.hasNext()) {
            String line = subFile.nextLine();
            String[] split = line.split("\\s+");
            StringJoiner joiner = new StringJoiner(",");
            for (String strVal : split)
                joiner.add(strVal);
            line = joiner.toString();
            line = line.startsWith(",") ? line.substring(1) : line;
            fileArray[i++] = line;
        }
        return fileArray;
    }

• I would do it the same way that you do, except there will be bugs, since you are mutating the fileArray parameter and you don't know the size of the array :) – Hendrik T Nov 13 '16 at 3:59

Since you are using Java 8 it is good to use the great streaming methods it gives you. You can write your code simply like the following:

    public static void main(String[] args) {
        String fileName = "./a.txt";
        try (Stream<String> stream = Files.lines(Paths.get(fileName))) {
            String result = stream.map(s -> s.split("\\s+"))
                                  .map(s -> Arrays.stream(s).collect(Collectors.joining(",")) + "\n")
                                  .collect(Collectors.joining());
            System.out.println(result);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

The final result is stored in result and you just need to write it into the file!
• Thanks guys, all these answers have helped me gain more understanding; I wasn't aware of a number of the features that have been suggested. Thanks for the advice. :) Nov 13 '16 at 9:09
• Pooya, if I needed to remove the first "," off each line, much like line = line.startsWith(",") ? line.substring(1) : line;, how would you suggest putting it into the stream.map command? Nov 13 '16 at 23:33
• @Graham: in the second map statement you can put your logic, although it may look a bit more complicated and probably cannot fit in a single lambda expression – Pooya Nov 14 '16 at 1:36

If your input is already separated correctly by spaces, it seems all you need to do is to convert those into commas and you're good. I'm not sure you need to go to/from arrays. I would replace the code inside the loop:

    {
        String line = subFile.nextLine();
        String[] split = line.split("\\s+");
        StringJoiner joiner = new StringJoiner(",");
        for (String strVal : split)
            joiner.add(strVal);
        line = joiner.toString();
        line = line.startsWith(",") ? line.substring(1) : line;
        fileArray[i++] = line;
    }

with this:

    {
        String line = subFile.nextLine();
        line = line.trim().replaceAll(" +", " "); // collapse double spaces; the result must be reassigned, since strings are immutable
        line = line.replace(' ', ','); // replace each space with a comma
        fileArray[i++] = line;
    }

Oh, and check for the array size like someone else mentioned.

• Come to think of it, you could probably consolidate the 2 replace lines also: – TDWebDev Nov 13 '16 at 6:07
• I like the combination of trimming and replacing here, thanks for the tips :) Nov 13 '16 at 9:10

Question: is there a better, more efficient, way of doing this...?

At what point do you call Scanner.close()? When you're dealing with input streams, you're supposed to close them when you're done. For this reason, I'm not crazy about breaking OpenFile into a separate method... at least not in that way. It's good to break big chunks of logic into smaller, more manageable bits, but try something like this approach instead...
    public Scanner openFile(String fileName) {
        try {
            return new Scanner(new File(fileName));
        } catch (Exception e) {
            throw new IllegalArgumentException("Specified file doesn't exist: " + fileName);
        }
    }

But as far as closing your streams -- I like try-with-resources:

    try (Scanner scanner = openFile(filename)) {
        ...
    }
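Putting the review's suggestions together (streaming, try-with-resources, and not mutating a caller's array), one possible end-to-end sketch; the class and method names here are my own, not from the original answers:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TextToCsv {

    /** Collapse runs of whitespace in one line into single commas. */
    static String toCsvLine(String line) {
        return String.join(",", line.trim().split("\\s+"));
    }

    /** Read a space-separated file and return its lines rewritten as CSV. */
    static List<String> convert(Path input) throws IOException {
        // try-with-resources guarantees the underlying file handle is closed
        try (Stream<String> lines = Files.lines(input)) {
            return lines.map(TextToCsv::toCsvLine).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            convert(Paths.get(args[0])).forEach(System.out::println);
        }
    }
}
```

Returning a List sized by the input sidesteps the fixed-size-array problem Hendrik T pointed out.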
{}
asked a question: How to solve Fourier transform problem using Sage?

Find the Fourier transform of the function

    f(x) = 1,   -1 < x < 1
           0,   |x| > 1

i. Use Fourier series
ii. Use the Fourier integral

How can I use Sage to solve this problem?

marked best answer: Why Sage cannot pass a value of variable from one function to another nested function?

    sage: n, k = var('n, k')
    sage: f(x,k) = sum((2/n)*(sin(n*x)*(-1)^(n+1)), n, 1, k)  # where n = 1,2,3 ... k
    sage: f
    (x, k) |--> -2*sum((-1)^n*sin(n*x)/n, n, 1, k)

I'm not sure what you think is wrong here. The 2 and a factor of -1 were both factored out, that's all. However, I do agree that this doesn't expand. What is happening is that we are sending the sum to Maxima:

    if algorithm == 'maxima':
        return maxima.sr_sum(expression, v, a, b)

and then ordinarily, when it returns, it is still a Maxima object (which may be a bug?). But when we put it in the function, it becomes a Sage object, and we don't have a Sage "sum" object. So I think that is what would have to be fixed.
That this is possible is shown by the following Maxima example (which I put on the ticket):

    (%i1) f: -2*'sum((-1)^n*sin(n*x)/n, n, 1, 2);
    (%o1) -2*'sum((-1)^n*sin(n*x)/n, n, 1, 2)
    (%i8) f, nouns;
    (%o8) -2*(sin(2*x)/2 - sin(x))

commented answer (Why Sage cannot pass a value of variable from one function to another nested function?): That means I cannot define the function like that, right? It would be very nice if I could define it similarly to the second example; that would be consistent with the real equation in my paper.

asked a question: Why Sage cannot pass a value of variable from one function to another nested function?

First I ran this:

    sage: f(x)=(2/n)*(sin(n*x)*(-1)^(n+1))
    sage: sum(f, n, 1, 2)  # using summation function
    -sin(2*x) + 2*sin(x)

So in this case the result was evaluated correctly. But if I tried to combine the first line and the second line together:

    sage: f(x,k) = sum((2/n)*(sin(n*x)*(-1)^(n+1)), n, 1, k)  # where n = 1,2,3 ... k
    sage: f(x,2)
    -2*sum((-1)^n*sin(n*x)/n, n, 1, 2)

The result wasn't fully evaluated! Why can't Sage evaluate the mathematical expression in this case? Another try, to show that Sage can pass its variable from the left function to the right function even though the right function is nested:

    sage: f(x) = sin(arcsin(x))
    sage: f(0.5)
    0.500000000000000

Edit: (See the same question on SO.)

marked best answer: Sage showed "TypeError: need a summation variable" when I used sum function with for loop

Your last line is equivalent to

    [sum(f,1,1,20), sum(f,2,1,20)]

For each of these, Sage complains that it doesn't know which variable to sum on (x or n?).
asked a question: Sage showed "TypeError: need a summation variable" when I used sum function with for loop

I am trying to make a list of summations; the commands are below:

    sage: var('n')
    sage: var('x')
    sage: f = (2/n)*(sin(n*x)*(-1)^(n+1))
    sage: funclist = [sum(f,n,1,20) for n in range(1,3)]

but I get an error message:

    TypeError: need a summation variable

How to solve this problem?
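For what it's worth, the sum being built in these questions is a truncation of the Fourier series of f(x) = x on (-π, π), which can be checked numerically in plain Python (no Sage needed):

```python
import math

def partial_sum(x, k):
    """k-term partial sum of 2*sum((-1)^(n+1) * sin(n*x)/n, n = 1..k)."""
    return 2 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                   for n in range(1, k + 1))

# k = 2 reproduces Sage's expanded answer, -sin(2*x) + 2*sin(x)
print(partial_sum(1.0, 2), 2 * math.sin(1.0) - math.sin(2.0))

# On (-pi, pi) the series converges to f(x) = x
print(partial_sum(1.0, 20000))  # close to 1.0
```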
{}
# Monochrome (Black & white) plots in matplotlib

Posted on Wed 10 August 2016 in Notebooks

While writing my thesis, I was annoyed that there weren't any good default options for outputting monochrome plots, as I couldn't count on all prints being in color. I therefore wanted plots that could work without any greyscales. Right now this notebook describes how to set up and use line plots and bar plots. If you need other types of plots, do not hesitate to contact me, and I'll see what I can do.

In [1]:

    from sklearn import datasets
    from matplotlib import pyplot as plt
    %pylab
    %matplotlib inline
    import numpy as np

    Using matplotlib backend: MacOSX
    Populating the interactive namespace from numpy and matplotlib

## Line and marker styles¶

To get an idea of which line styles and markers are available, we can inspect the lines and markers objects:

In [2]:

    from matplotlib import lines, markers

In [3]:

    lines.lineStyles.keys()

Out[3]:

    dict_keys(['', ' ', '--', ':', 'None', '-', '-.'])

In [4]:

    markers.MarkerStyle.markers.keys()

Out[4]:

    dict_keys([0, 1, '*', 3, 4, 5, 6, 7, '8', 'None', 'd', 'h', 'D', 'v', None, '^', ',', '>', 'x', '<', 's', 'p', '', '2', '4', ' ', '_', 'o', '+', 'H', '|', 2, '1', '3', '.'])

## Cycle through line and marker styles¶

First we are going to create a cycler object that we will use to cycle through different styles. Using this object we can have a new line style every time we plot a new line, and don't have to manually ensure that our lines are monochrome and different. Cycler objects can be composed of several cycler objects and will iterate over all permutations of their components forever. Let us create a cycler object that cycles through several line and marker styles, all with the color black.

In [5]:

    from cycler import cycler
    # Create cycler object.
    # Use any styling from above you please
    monochrome = (cycler('color', ['k']) *
                  cycler('linestyle', ['-', '--', ':', '-.']) *
                  cycler('marker', ['^', ',', '.']))

    # Print examples of output from the cycler object.
    # A cycler object, when called, returns an itertools.cycle object that iterates over its items indefinitely
    print("number of items in monochrome:", len(monochrome))
    for i, item in zip(range(15), monochrome()):
        print(i, item)

    number of items in monochrome: 12
    0 {'color': 'k', 'linestyle': '-', 'marker': '^'}
    1 {'color': 'k', 'linestyle': '-', 'marker': ','}
    2 {'color': 'k', 'linestyle': '-', 'marker': '.'}
    3 {'color': 'k', 'linestyle': '--', 'marker': '^'}
    4 {'color': 'k', 'linestyle': '--', 'marker': ','}
    5 {'color': 'k', 'linestyle': '--', 'marker': '.'}
    6 {'color': 'k', 'linestyle': ':', 'marker': '^'}
    7 {'color': 'k', 'linestyle': ':', 'marker': ','}
    8 {'color': 'k', 'linestyle': ':', 'marker': '.'}
    9 {'color': 'k', 'linestyle': '-.', 'marker': '^'}
    10 {'color': 'k', 'linestyle': '-.', 'marker': ','}
    11 {'color': 'k', 'linestyle': '-.', 'marker': '.'}
    12 {'color': 'k', 'linestyle': '-', 'marker': '^'}
    13 {'color': 'k', 'linestyle': '-', 'marker': ','}
    14 {'color': 'k', 'linestyle': '-', 'marker': '.'}

In [6]:

    # ipython can also pretty print our cycler object
    monochrome

Out[6]:

| 'color' | 'linestyle' | 'marker' |
|---------|-------------|----------|
| 'k' | '-' | '^' |
| 'k' | '-' | ',' |
| 'k' | '-' | '.' |
| 'k' | '--' | '^' |
| 'k' | '--' | ',' |
| 'k' | '--' | '.' |
| 'k' | ':' | '^' |
| 'k' | ':' | ',' |
| 'k' | ':' | '.' |
| 'k' | '-.' | '^' |
| 'k' | '-.' | ',' |
| 'k' | '-.' | '.' |

## Create monochrome figure and axes object¶

Most people learn matplotlib through pyplot, the command-style functions that make matplotlib work like MATLAB. Meanwhile there is also the more direct approach: manipulating matplotlib objects directly. It is my experience that this is more powerful, and as far as I can tell we can't make monochrome plots without using the object-oriented interface, so in this tutorial I will try and use it as much as possible.
It is, however, a cumbersome interface, so it appears that the people behind matplotlib recommend mixing both, and so will I. First, let us take a look at an empty plot:

In [7]:

    plt.plot();

If we draw a number of lines, we can see that the default behavior of matplotlib is to give them different colors:

In [8]:

    for i in range(1,5):
        plt.plot(np.arange(10), np.arange(10)*i)

Let us add the monochrome cycler as the default prop_cycle for the next plot. This plot we will generate using the object approach. The subplots function (notice the s) returns a figure object and any number of axes objects we ask it to. I find this the easiest way to get both of these objects, even for plots with only 1 ax.

In [9]:

    fig, ax = plt.subplots(1,1)
    ax.set_prop_cycle(monochrome)
    for i in range(1,5):
        ax.plot(np.arange(10), np.arange(10)*i)

## Set a grid and clear the axis for a prettier plot¶

In [10]:

    fig, ax = plt.subplots(1,1)
    ax.set_prop_cycle(monochrome)
    ax.grid()
    ax.spines['top'].set_visible(False)
    ax.spines['right'].set_visible(False)
    ax.spines['bottom'].set_visible(False)
    ax.spines['left'].set_visible(False)
    for i in range(1,5):
        ax.plot(np.arange(10), np.arange(10)*i)

# Override styles for current script¶

Writing all the ax.set_grid ... code for every figure is tedious. We can tell matplotlib to set up all new figures with a particular style. All styles are saved in a dictionary in plt.rcParams. We can override its values manually for a single script, and will do this now. You can also save your styles manually to a .mplstyle file and load them at will. See Customizing plots with stylesheets. You can load custom and builtin styles at will using the plt.style.use() function. You can even load and combine several styles. Below we will just override entries in the rcParams dictionary manually, so that this notebook is not dependent on external files.
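For completeness, here is a sketch of the stylesheet route mentioned above: the same rcParams overrides written to a .mplstyle file and loaded with plt.style.use() (the file name and temporary location are arbitrary choices):

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# The same keys we override in rcParams, written as a style sheet.
STYLE = """\
axes.grid : True
axes.spines.top : False
axes.spines.right : False
axes.spines.bottom : False
axes.spines.left : False
"""

path = os.path.join(tempfile.mkdtemp(), "monochrome.mplstyle")
with open(path, "w") as fh:
    fh.write(STYLE)

plt.style.use(path)  # accepts a style name, a file path, or a URL
print(plt.rcParams["axes.grid"], plt.rcParams["axes.spines.top"])
```

This keeps the styling reusable across scripts instead of repeating the overrides in each one.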
In [11]:

    # Overriding styles for current script
    plt.rcParams['axes.grid'] = True
    plt.rcParams['axes.prop_cycle'] = monochrome
    plt.rcParams['axes.spines.top'] = False
    plt.rcParams['axes.spines.right'] = False
    plt.rcParams['axes.spines.bottom'] = False
    plt.rcParams['axes.spines.left'] = False

# Bar plots¶

In [12]:

    fig, ax = plt.subplots(1,1)
    for x in range(1,5):
        ax.bar(x, np.random.randint(2,10))

Now there are 3 problems with this bar plot:

1. The bars are colored
2. The bars cannot be distinguished
3. The grid is above the bars (this will become a big problem when 1 is solved)

We will color all the bars white and leave the black border. To distinguish the bars using only monochrome colors, we will paint them with hatches - repeating patterns. To place the bars in front of the grid, we will set their zorder to something high. More on hatches:

In [13]:

    fig, ax = plt.subplots(1,1)
    bar_cycle = (cycler('hatch', ['///', '--', '...','\///', 'xxx', '\\\\']) *
                 cycler('color', 'w') * cycler('zorder', [10]))
    styles = bar_cycle()
    for x in range(1,5):
        ax.bar(x, np.random.randint(2,10), **next(styles))

# Algorithms sensitivity to single salient dimension

Posted on Fri 23 January 2015 in Notebooks

# Sensitivity to 1 salient dimension¶

## How different classifiers manage to sort through noise in multidimensional data¶

In this experiment I will test different machine learning algorithms' sensitivity to data where only 1 dimension is salient and the rest are pure noise. The experiment tests variations of saliency against a number of dimensions of random noise to see which algorithms are good at sorting out noise. For the experiments performed here, there will be a 1-1 mapping between the target class in $y$ and the value of the first dimension of a datapoint in $x$. For example, for all datapoints belonging to class 1, the first dimension will have the value 1, while if the datapoint belongs to class 0, the first dimension will have the value 0.
In [10]:

    #Configure matplotlib
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D
    %pylab  #comment out this line to produce plots in a separate window
    %matplotlib inline
    figsize(14,5)

    Using matplotlib backend: MacOSX
    Populating the interactive namespace from numpy and matplotlib

# Data¶

First the target vectors $y$ and $y_4$ are randomly populated, with $y\in[0,1]$ and $y_4\in[0,1,2,3]$. Then, for each value in $y$ and $y_4$, a datapoint is generated consisting of the value of the target class followed by 100 random values. This way the first column in the data matrix is equal to the target vector. Later this column will be manipulated linearly.

In [11]:

    #initialize data
    def generate_data():
        '''
        Populates data matrices and target vectors with data and releases them into the global namespace.
        Running this function will reset the values of x, y, x4, y4 and r
        '''
        global x, y, x4, y4, r
        y = np.random.randint(2, size=300)
        y4 = np.random.randint(4, size=300)
        r = np.random.rand(100, 300)
        x = np.vstack((y,r)).T
        x4 = np.vstack((y4,r*4)).T

    generate_data()
    # note that x and y are global variables. If you manipulate them, the latest manipulation of x and y will be
    # used to generate plots.
    split = 200
    max_dim = x.shape[1]
    m = 1
    y_cor = range(m, max_dim)
    print 'y is equal to 1st column of x: \t', list(y) == list(x[:,0])
    print 'y4 is equal to 1st column of x4:\t', list(y4) == list(x4[:,0])
    print '\nChecking that none of the randomized data match the class values'
    print 'min:\t', r.min(), 'max:\t', r.max(), '\tThese should never be [0,1], if so please rerun.'

    y is equal to 1st column of x: True
    y4 is equal to 1st column of x4: True
    min: 5.75157395193e-05 max: 0.999952558364 These should never be [0,1]

## Visualizing the data¶

This section will plot parts of the data to give the reader a better understanding of its shape.
In [12]:

    plt.subplot(121)
    plt.title('First 2 dimensions of dataset')
    plt.plot(x[np.where(y==0)][:,1], x[np.where(y==0)][:,0], 'o', label='Class 0')
    plt.plot(x[np.where(y==1)][:,1], x[np.where(y==1)][:,0], 'o', label='Class 1')
    plt.ylim(-0.1, 1.1) #expand y-axis for better viewing
    legend(loc=5)
    plt.subplot(122, projection='3d')
    plt.title('First 3 dimensions of dataset')
    plt.plot(x[np.where(y==0)][:,2], x[np.where(y==0)][:,1], x[np.where(y==0)][:,0], 'o', label='Class 0')
    plt.plot(x[np.where(y==1)][:,2], x[np.where(y==1)][:,1], x[np.where(y==1)][:,0], 'o', label='Class 1')

A clear separation between classes is revealed when visualized. This clear separation between the 2 classes remains no matter how many noisy dimensions we add to the dataset, so in theory it is reasonable to expect any linear classifier to find a line that separates the 2 datasets.

In [13]:

    #Initialize classifiers
    from sklearn.neighbors import KNeighborsClassifier as NN
    from sklearn.svm import SVC as SVM
    from sklearn.naive_bayes import MultinomialNB as NB
    from sklearn.lda import LDA
    from sklearn.tree import DecisionTreeClassifier as DT
    from sklearn.ensemble import RandomForestClassifier as RF
    from sklearn.linear_model import Perceptron as PT

    classifiers = [NN(), NN(n_neighbors=2), SVM(), NB(), DT(), RF(), PT()]
    titles = ['NN, k=4', 'NN, k=2', 'SVM', 'Naive B', 'D-Tree', 'R-forest', 'Perceptron']

    # uncomment the following to add LDA
    #classifiers = [NN(), NN(n_neighbors=2), SVM(), NB(), DT(), RF(), PT(), LDA()]
    #titles = ['NN, k=4', 'NN, k=2', 'SVM', 'Naive B', 'Perceptron', 'LDA']
    #m, y_cor = 2, range(m, max_dim)

In [14]:

    # define functions
    def run(x, y):
        '''Runs the main experiment.
        Tests each classifier against varying dimensional sizes of a given dataset'''
        global score
        score = []
        for i, clf in enumerate(classifiers):
            score.append([])
            for j in range(m, max_dim):
                clf.fit(x[:split,:j], y[:split])
                score[i].append(clf.score(x[split:,:j], y[split:]))

    def do_plot():
        '''
        Generates the basic plot of results.
        Note: score is a global variable. The latest score calculated from run() will always be used to draw a plot
        '''
        for i, label in enumerate(titles):
            plt.plot(y_cor, score[i], label=label)
        plt.ylim(0, 1.1)
        plt.ylabel('Accuracy')
        plt.xlabel('Number of dimensions')

    def double_plot():
        '''
        Runs the experiment for 2 classes and 4 classes and draws the appropriate plots.
        Note: x and y are global variables. The latest manipulation of these is always used to run
        the experiment. If you need the 'original' x and y you need to rerun generate_data() and
        use new randomized data
        '''
        plt.subplot(121)
        plt.title('Two classes')
        run(x, y)
        do_plot()
        plt.legend(loc=3)
        plt.subplot(122)
        plt.title('Four classes')
        run(x4, y4)
        do_plot()

# Experiment 1¶

Test all classifiers against 2-class and 4-class datasets for 1 through 100 dimensions. Notice that Naive Bayes fails when the dataset is literally equal to the targets, but adding just a little bit of noise makes it start to work much better. For 4 classes, Naive Bayes again starts off poorly, but while the other algorithms quickly succumb to the noisy dimensions, Naive Bayes seems to improve up until ~20 dimensions, and though its performance starts to decline, it is still the best performer from there on out. If you are running the experiment with LDA, then the test will not be done for 1 dimension, and Naive Bayes's weakest point won't show. However, the Decision Tree is quick to find that one dimension explains everything, and has no trouble throughout either experiment. The random forest has some trees where the salient dimension has been cut off, so more noise and randomness is added to its results.
In [15]:

    double_plot()

# Experiment 2¶

In this experiment the first column of the data matrix is linearly manipulated in order to "hide" the values that map to the classes better amongst the noise. For the 2-class experiment, the value 0.25 now maps to class 0 and 0.75 maps to class 1. For the 4-class experiment, the value:class mapping is now 1:0, 1.5:1, 2:2, 2.5:3. This does not change the fact that there is a clear boundary between the classes. It just means the distance between the 2 planes seen in the visualization section is getting narrower.

In [16]:

    plt.figure()
    x[:,0] = (x[:,0]/2)+0.25
    x4[:,0] = (x4[:,0]/2)+1
    double_plot()

This experiment is quite sensitive to the randomness in the data. For two classes, in general the SVM is the strongest until around the 40-dimension mark, where Naive Bayes takes over. In the higher-dimensional area, NN often manages to overtake SVM, though this is somewhat dependent on the random data. It's still surprising, given that NN is usually the poster child for the curse of dimensionality. It is not that easy to hide the linear explanation from the Decision Tree, which clearly outperforms everything here. Its tendency to overfit is really helping.

# Baseline test (random only)¶

In this final experiment the only salient datapoint in the observation data is removed, to show the reader that this will attain baseline results. Also notice that, depending on the data, the baseline for some of the algorithms can be as high as 60% accuracy. Keep this in mind when reviewing the results from above.

In [17]:

    x = x[:,1:]
    x4 = x4[:,1:]
    double_plot()
    plt.legend(loc=1)
    plt.subplot(121)
    plt.legend().remove()
{}
# How to improve scientific source code development

Research in the life sciences is increasingly computational (or so says [Markowetz2017] in a somewhat controversial paper), which, because all research is about expanding what is known, means that the development and application of new computational methods is part of the field. Even if you are primarily a bench scientist or a field worker, you should have some awareness of scientific computing. How is software code written, and how can you do this collaboratively? How does one use the code of others? How do you share your own? How do you improve your code, and make it verifiable and testable? Here we will address these questions and some of the approaches and community standards that are in current usage.

## Picking the right tools

All source code, and a lot of research data (molecular sequence data, tabular data, analysis logs, etc.), consists of plain text files. Programs that are intended for composing prose and tables for human readers (such as Microsoft Word and Excel) are wholly unsuitable for operating on such text files: they might do things such as automatically convert simple quotes to inverted ones, which might invalidate your data; convert between different, local conventions for decimal points, which are commas in some countries; attempt to run spell checks on data, which clutters the screen with useless information; or attempt to export to proprietary file formats.

### Text editors

Because of the aforementioned problems with using word processors to edit plain text, the first right tool to either locate on your computer, or install if it isn't there, is a text editor. There are good, free, lightweight editors for every operating system, for example:

• On OSX there is BBEdit
• On Linux there are numerous options, gedit being a common one on GNOME, for example

Aside from plain text data files, text editors are also useful for working on source code.
In many cases, a text editor will recognise the programming language (for example by the file extension of the source code file, e.g. *.py for python and *.R for R) and will colorise the syntax accordingly and allow blocks of code to be collapsed or expanded. However, for any project that comprises multiple files - and this is nearly always the case, if we consider input and output data files, configuration files, as well as source code - a text editor will not suffice. Hence, the next, right tool for the job will be an integrated development environment or IDE. ### IDEs An IDE allows you to organise sets of files into projects such that the dependencies between the files are managed. An IDE will typically have a deeper understanding of the programming language you are using, so that it may spot problematic syntax and logic errors, and may suggest functions and variables for code completion. Also, an IDE will allow you to execute your code line by line, which helps in localising problems and in stepping through an analysis workflow. Lastly, an IDE will be able to visualise different things, such as complex data structures. Example 1 (ss1) - the graphical user interface of the most popular integrated development environment for R, RStudio. The top left pane organises files, the bottom left pane evaluates R statements line by line (e.g. to test out commands), the top right pane visualises complex data structures, and the bottom right pane allows for viewing various things such as help documentation or statistical plots. Just like text editors, numerous IDEs exist. For most programming languages there are very good, free options. For example, for the R language for statistics there is RStudio (shown in example 1), for Python there is pycharm, for Java there is eclipse, and so on. 
### Literate programming A slightly different take on source code development that is more geared towards analysis workflows than to application development is provided by the literate programming paradigm. In this way of working, source code is primarily a prose document, interspersed with bits of executable code and dynamic visualisations. This is found in R programming in the guise of RMarkdown (an example of this is the working document that formed the basis of the publication for the RNeXML library). The Jupyter system facilitates the same way of working but accommodates more programming languages, while ActivePapers has a facility for (recursive) inclusion of data from other ActivePapers, i.e. a form of citation. Example 2 (ss2) - example output of the modified "Welcome to Python" notebook. As an exercise of literate programming, try the Welcome to Python notebook. Modify the code to draw five (instead of four) curves, labeling the additional one E. An example of what the expected output might look like is shown in example 2, but keep in mind that these curves are randomly generated so they will look different every time. Numerous learning materials for this method of programming exist on the web. As applied specifically to data science, we found the following potentially worthwhile: ## Working with others Like most aspects of scientific research, scientific software development is becoming increasingly collaborative, which means that developers of software code and analytical workflows are increasingly participating in open source development. There are numerous idealistic reasons for why this ought to be done (for example, because computational analysis is a research method and so should be transparent in order to be reproducible; or, because scientific software is typically funded publicly, it should be freely available) but there are also very practical, self-interested reasons for adopting open source. 
The main ones are that it allows you to build on the shoulders of others, e.g. by re-using software components developed and published by others, and that it allows you, in turn, to have greater impact with your work, because others will use it (and cite it) in turn. To participate in open source development, here we discuss some of the main aspects to consider.

### Community conventions

Every community of open source developers, whether it's a community centred around a programming language or a problem domain (like bioinformatics), has its own conventions. Some of these may be well-considered and useful, such as documentation standards, while others may be somewhat arbitrary, such as debates about what is or is not "pythonic", and some community conventions might even be actively harmful, like the perverse pleasure in writing deliberately cryptic, obfuscated code. If you want to start contributing to a community, learn about the conventions that have been adopted, especially insofar as they affect collaboration. For example, learn what is expected of a software package that you plan to contribute: how do the files need to be organised? How does the code need to be structured? Are there specific design patterns that ought to be followed? Are there conventions for package names and their meanings, or for the keywords to use in package descriptions?

### Licenses

Open source software is also referred to as "free software". This does not just mean free in the sense of "free beer", i.e. at no cost, but also, more importantly, in the sense of "free speech". In other words, open source has to do with the rights of people to intellectual property. These rights are defined in software licenses, and they are relevant to developers because they concern both what you can do with the software developed by others (e.g., under what conditions, commercial or otherwise, you can re-use somebody else's source code) and what others can do with the source code that you write.
Whereas the Creative Commons licenses are typically used for works such as images, text (including scholarly publications) and data sets, open source software is usually released under one of the licenses recognised by the Open Source Initiative.

### Responsive communication

Collaborative development and participation in a community also mean responding to feedback from others at every stage of the development cycle. When you are first planning a software tool or a computational workflow, you will need to learn what your collaborators think the requirements are; when you have a prototype or an early version, you may need to adjust your approach in response to early user testing; once you have released something, you may need to manage and address issues reported by users. Some of the challenges of working collaboratively can at least partly be addressed (or facilitated) by technology. Specifically, collaborating on anything that changes over time, whether a manuscript, data, or source code, can be facilitated by technologies that track version changes, a topic that is dealt with in more detail in another section.

## Developing robust, verifiable software

Most scientific software is not developed by professional software engineers but by researchers. In general, such software is highly innovative in terms of the application of new analytical techniques, but also very fragile and difficult to use. Numerous specific recommendations to address these issues can be made (below, we link to two documents, each with ten simple rules regarding this), but one key principle underlying all of this is the need for a structured approach to software testing using (valid and invalid) data. In every programming language in common usage in scientific computing there are helpful tools to automate testing. What these do, in general, is run small programs or commands that you develop in addition to the main software, to test its functioning.
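As a minimal sketch of what such an automated test looks like in practice, here is an example using Python's built-in unittest module; the gc_content function is a hypothetical stand-in for the "main software", not something from this text:

```python
import unittest

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (hypothetical function under test)."""
    if not seq:
        raise ValueError("empty sequence")
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

class TestGcContent(unittest.TestCase):
    # valid data: a known answer the function must reproduce
    def test_valid_input(self):
        self.assertAlmostEqual(gc_content("ATGC"), 0.5)

    # invalid data: the function should fail loudly, not return nonsense
    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            gc_content("")
```

Running `python -m unittest` discovers and executes these checks, and a continuous integration system can invoke the same command automatically on every recorded version change.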
By adopting one of these conventional testing tools, you will gain numerous advantages:

• Users can see the software in live action. This helps them verify that the installation succeeded, and shows them what the inputs and outputs (in terms of data, commands, and parameters) should look like, i.e. it makes the software self-documenting.
• Sets of tests can be run automatically and periodically (for example, every time a version change is recorded) to verify that the system still functions as intended.
• If you make major changes to the architecture of the software, you can verify automatically that the change did not break anything.

For further reading on these and related topics, you may be interested in the guidelines provided by [Taschuk2017] and [List2017].

## Expected outcomes

You have now had an encounter with some of the principles, tools and techniques that play a role in scientific software development. You should now be able to:

• Understand why to use a text editor for plain text files
• Understand the purpose of an IDE
• Modify and execute a simple workflow
• Know some of the principles of open source development
• Know the purpose of software testing in scientific computing
# Picking Random Items From a File

By Alex Beal
January 11, 2012

Here's a deceptively simple programming puzzle: develop an algorithm for randomly selecting n words from a dictionary file. This is essentially the puzzle I had to solve in order to write my xkcd-style password generator, which implements the xkcd password spec.1

The simplest solution is to parse the dictionary into individual words (easy to do in Python) and put those words into a list. Selecting four random words is then as easy as selecting four random items from the list. This is fast, easy to implement, and simple to understand, but it is also very memory inefficient. I have to load 50,000+ words into memory in order to select four of them.2 Can we do better? Yes.

## A Memory Efficient Algorithm

The key insight for developing a better algorithm is realizing that it should be possible to select the words as the dictionary file is being parsed, rather than loading the entire thing into memory. The difficulty is making sure that each word has an equal chance of being chosen, and that at least n words are chosen. If, for example, we simply give each word a 1 in 10 chance of being chosen, we'll end up with way more words than we need (assuming n is small). If we give each a 1 in 50,000 chance, there's the possibility that we won't choose enough words. Bryce Boe has a clever solution to this problem where he chooses exactly n words, but the proof that it works is non-trivial, and he doesn't provide it. This is why I came up with my algorithm.

In order to explain my algorithm, it's best to think of it in terms of rolling dice. Consider the following procedure for randomly selecting 4 dice from 10:

1. Roll all 10 dice.
2. Select the 4 with the highest values.
   1. If, suppose, 5 of the dice all end up with a value of 6, randomly choose 4 from those 5 (perhaps by repeating the procedure with those 5).
   2. If, suppose, 2 dice get a value of 6, and 3 get a value of 5, select the 2 with the value of 6, and then randomly select 2 of the 3 with a value of 5.

How can we adapt this procedure to select random words from a file, rather than dice? Here's how: as we're parsing the dictionary file, we give each word a random value, and then select the n words with the highest values. The issue is, the naive implementation of this procedure doesn't really solve our memory problem. If every word gets a random value, don't we now have to store every word in memory, along with its value? The key here is to observe that only the words with the n highest values need to be kept in memory; all the others can be immediately discarded. Think about this in terms of the dice example, where I want to select 1 die from 10:

1. I roll the first die. I get a value of 1. I keep this die.
2. I roll the second. I get a value of 3. I keep this die, and discard the other.
3. I roll the third. I get a value of 3. I keep both dice.
4. I roll the fourth. I get a value of 6. I keep this die and discard the other 2.

By the end of the procedure, I might end up with 3 dice that all got a value of 6. I would then randomly select 1 from those 3.

How can we adapt this procedure for selecting random words? We use a priority queue:

1. Read a word from the dictionary.
2. Give it a random value.
3. Insert the value-word pair (as a tuple) into the priority queue.
4. If the queue has more than n items, pop an item.
5. Repeat until every word has been read.

Remember that popping from a priority queue removes the item with the lowest value. So, we insert a word, and if we have too many words, we pop the one with the lowest value. At the end of this procedure there will be exactly n words in the queue. These are our n random words. Neat.

There is one issue, though. What if two words have the same random value?
Well, one solution is to keep both words and then break the tie at the end, like we did in the dice example, but that breaks the elegance of the priority queue implementation. Another is to break ties randomly as soon as they occur and discard the losing word, but I'm not sure how to do this in a statistically safe way. The easiest solution is to just pray that collisions don't occur. In Python, each call to random() produces 53 bits of precision, so it's very unlikely that two values will collide. If 53 bits isn't enough (yeah right), you can use multiple random numbers. So, rather than a tuple of (value, word), you can use (value_1, value_2, value_3, word).3 Python's priority queue implementation will automatically know how to sort that.

Without further ado, here's the proof of concept:

```python
#!/usr/bin/python -O
import random
import heapq

DICT_PATH = "/usr/share/dict/words"
WORD_COUNT = 4

dict_file = open(DICT_PATH)
wordq = []
for line in dict_file:
    word = line.strip()
    rand_val = random.random()
    heapq.heappush(wordq, (rand_val, word))
    if len(wordq) > WORD_COUNT:
        heapq.heappop(wordq)

print wordq
```

## Endnotes

1. Summary: a good password is composed of four random words.
2. A slight improvement on this would be to store only the word's position in the file, rather than the word itself. Then the word could be retrieved by seeking to that position. http://www.bryceboe.com/2009/03/23/random-lines-from-a-file
3. If you're only using one random value, and your dictionary file has 50,000 words, the chance of a collision is 50,000/2^53, which is roughly 1 in 180 billion. I'll take those odds. Whoops! This is actually a version of the birthday problem. The actual probability of a collision is 1.39e-7. Still quite good.
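The same "keep the n highest random keys" selection can also be written compactly in Python 3 with `heapq.nlargest`, which maintains the bounded heap internally while consuming the iterable in a streaming fashion. This is a sketch, not part of the original post; the inlined word list stands in for the dictionary file so the example stays self-contained:

```python
import heapq
import random

def pick_random_words(words, n):
    """Select n items uniformly at random by keeping the n highest random keys.

    Memory use is O(n), regardless of how many words the iterable yields.
    """
    keyed = ((random.random(), w) for w in words)
    return [w for _, w in heapq.nlargest(n, keyed)]

# Stand-in for iterating over the lines of a dictionary file.
words = ["correct", "horse", "battery", "staple", "orange", "apple"]
print(pick_random_words(words, 4))
```

Ties between equal random keys fall through to comparing the words themselves, which is fine for a sketch but not statistically neutral; the multi-key tuple trick above addresses that.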
# Winding Vector: How to Annihilate Two Dirac Points with the Same Charge. @article{Montambaux2018WindingVH, title={Winding Vector: How to Annihilate Two Dirac Points with the Same Charge.}, author={G. Montambaux and Lih-King Lim and J N Fuchs and Fr{\'e}d{\'e}ric Pi{\'e}chon}, journal={Physical review letters}, year={2018}, volume={121 25}, pages={ 256402 } } The merging or emergence of a pair of Dirac points may be classified according to whether the winding numbers which characterize them are opposite (+- scenario) or identical (++ scenario). From the touching point between two parabolic bands (one of them can be flat), two Dirac points with the same winding number emerge under appropriate distortion (interaction, etc.), following the ++ scenario. Under further distortion, these Dirac points merge following the +- scenario, that is corresponding… ## Figures and Topics from this paper Dirac points emerging from flat bands in Lieb-kagome lattices • Physics • 2020 The energy spectra for the tight-binding models on the Lieb and kagome lattices both exhibit a flat band. We study a model which continuously interpolates between these two limits. The flat band Quantized Berry winding from an emergent $\mathcal{PT}$ symmetry • Physics • 2021 Linear crossing of energy bands occur in a wide variety of materials. 
In this paper we address the question of the quantization of the Berry winding characterizing the topology of these crossings in Failure of Nielsen-Ninomiya Theorem and Fragile Topology in Two-Dimensional Systems with Space-Time Inversion Symmetry: Application to Twisted Bilayer Graphene at Magic Angle • Physics Physical Review X • 2019 We show that the Wannier obstruction and the fragile topology of the nearly flat bands in twisted bilayer graphene at magic angle are manifestations of the nontrivial topology of two-dimensional real Flat bands and nontrivial topological properties in an extended Lieb lattice • Physics • 2019 We report the appearance of multiple numbers of completely flat band states in an extended Lieb lattice model in two dimensions with five atomic sites per unit cell. We also show that this Type-III and Tilted Dirac Cones Emerging from Flat Bands in Photonic Orbital Graphene The extraordinary electronic properties of Dirac materials, the two-dimensional partners of Weyl semimetals, arise from the linear crossings in their band structure. When the dispersion around the Topological insulators and geometry of vector bundles For a long time, band theory of solids has been focused on the energy spectrum, or Hamiltonian eigenvalues. Recently, it was realized that the collection of eigenvectors also contains important Topology of contact points in Lieb–kagomé model We analyse Lieb-kagomé model, a three-band model with contact points showing particular examples of the merging of Dirac contact points. 
We prove that eigenstates can be parametrized in a Origin of flat-band superfluidity on the Mielke checkerboard lattice The Mielke checkerboard is known to be one of the simplest two-band lattice models exhibiting an energetically flat band that is in touch with a quadratically dispersive band in the reciprocal space, Moving Dirac nodes by chemical substitution It is shown that Dirac states can be effectively tuned by doping a transition metal sulfide, [Formula: see text], through Co/Ni substitution, a model system to functionalize Dirac materials by varying the strength of electron correlations. Interaction-induced lattices for bound states: Designing flat bands, quantized pumps, and higher-order topological insulators for doublons • Physics • 2019 Bound states of two interacting particles moving on a lattice can exhibit remarkable features that are not captured by the underlying single-particle picture. Inspired by this phenomenon, we ## References SHOWING 1-10 OF 32 REFERENCES A universal Hamiltonian for motion and merging of Dirac points in a two-dimensional crystal • Physics • 2009 AbstractWe propose a simple Hamiltonian to describe the motion and the merging of Dirac points in the electronic spectrum of two-dimensional electrons. This merging is a topological transition which Manipulation of Dirac points in graphene-like crystals • Physics • 2012 We review different scenarios for the motion and merging of Dirac points in 2D crystals. These different types of merging can be classified according to a winding number (a topological Berry phase) Magnetic spectrum of trigonally warped bilayer graphene: Semiclassical analysis, zero modes, and topological winding numbers • Physics • 2012 We investigate the fine structure in the energy spectrum of bilayer graphene in the presence of various stacking defaults, such as a translational or rotational mismatch. 
This fine structure consists Creating, moving and merging Dirac points with a Fermi gas in a tunable honeycomb lattice • Physics, Medicine Nature • 2012 The creation of Dirac points with adjustable properties in a tunable honeycomb optical lattice is reported and the unique tunability of the lattice potential is exploited to adjust the effective mass of the Dirac fermions by breaking inversion symmetry. Occurrence of nematic, topological, and Berry phases when a flat and a parabolic band touch • Physics • 2014 A (single flavor) quadratic band crossing in two dimensions is known to have a generic instability towards a quantum anomalous Hall (QAH) ground state for infinitesimal repulsive interactions. Here Exotic Lifshitz transitions in topological materials Topological Lifshitz transitions involve many types of topological structures in momentum and frequency–momentum spaces, such as Fermi surfaces, Dirac lines, Dirac and Weyl points, etc., each of Topological Phases for Fermionic Cold Atoms on the Lieb Lattice • Physics • 2011 We investigate the properties of the Lieb lattice, that is, a face-centered square lattice, subjected to external gauge fields. We show that an Abelian gauge field leads to a peculiar quantum Hall Black-hole horizon in the Dirac semimetal Zn2In2S5 • Physics Physical Review B • 2018 Recently, realizing new fermions, such as type-I and type-II Dirac/Weyl fermions in condensed matter systems, has attracted considerable attention. Here we show that the transition state from type-I Landau levels in the case of two degenerate coupled bands: Kagomé lattice tight-binding spectrum • Physics • 2003 The spectrum of charged particles hopping on a kagom\'e lattice in a uniform transverse magnetic field shows an unusual set of Landau levels at low field. They are unusual in two respects: the lowest Bloch-Zener oscillations across a merging transition of Dirac points. 
• Physics, Medicine Physical review letters • 2012 The agreement with a recent experiment on cold atoms in an optical lattice is very good and the tunneling probability is computed from the low-energy universal Hamiltonian describing the vicinity of the merging.
Measurement of differential $J/\psi$ production cross sections and forward-backward ratios in p + Pb collisions with the ATLAS detector

Research output: Contribution to journal › Article, Open Access
Original language: English
Journal: Physical Review C 92 (3), 034904
DOI: 10.1103/PhysRevC.92.034904
Published: 14 Sep 2015

Abstract: Measurements of differential cross-sections for $J/\psi$ production in p+Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV at the LHC with the ATLAS detector are presented. The data set used corresponds to an integrated luminosity of 28.1 nb$^{-1}$. The $J/\psi$ mesons are reconstructed in the dimuon decay channel over the transverse momentum range $8
# Java looking in wrong directory for XML [closed]

I found this suggestion on a Stack Exchange site to print out what the current directory is:

File file = new File(".");
for(String fileNames : file.list()) System.out.println(fileNames);

Basically, when I run the game in Eclipse, I get one listing, but when I compile the game and run it from the JAR, it prints out the contents of the directory the JAR file is in. Images and sound are working fine; it's just the XML file that is causing this. How do I make Java read XML files from the res folder like it does for images and sound in Eclipse? My particular code that loads the XML is at http://pastebin.com/59Nt7Dtd.

The reason why it works in Eclipse is because you are in the project root when you do this:

particleImage = new Image("res/particles/particle.png", false);

For consistency, put your XML file there as well, or, if you don't want to run it from Eclipse but from a JAR, integrate an absolute data path in your code (do not hardcode it in your source files).

• How do I get the absolute data path from code? I want to load everything from the project root /res – Matthew May 1 '13 at 20:53
• @Matthew read the API documentation for the Java File class. – jwenting May 2 '13 at 6:01
• Ok so I made it an absolute path but now this happens: shrani.si/f/2H/13W/131vAM3/error3.png , so my question is shouldn't the absolute path have the filename included? The name of the file was 5.jar but it was not shown in the absolute path – Matthew May 2 '13 at 10:16
• The absolute path is composed of: DirPath + RelativePath + Filename. If you are trying to get the jar, your dirPath should be something like C:\Project\Particles, your relativePath \res (\bin) and the filename a.xml (b.jar). Hope this clarifies it a bit.
– Cristina May 2 '13 at 10:19
• Shouldn't the path be something like this for this case: D:\PathToJar\game.jar\res\particles\particleXmlFile.xml, because the particle XML file is inside the game.jar – Matthew May 2 '13 at 10:33

path="/res/particles/file.xml";
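As an aside (not from the answers above): if the XML file is packaged inside the JAR, no file-system path can reach it, absolute or not; loading it through the classpath works the same way in Eclipse and from the JAR. A hedged sketch, where the resource path comes from the question and the class name is made up:

```java
import java.io.InputStream;

public class ResourceLoading {
    /**
     * Opens a resource relative to the classpath root,
     * e.g. "res/particles/particle.xml" when the res folder is on the
     * classpath (as it is when Eclipse exports it into the JAR).
     * Returns null if the resource cannot be found.
     */
    static InputStream openResource(String path) {
        return ResourceLoading.class.getClassLoader().getResourceAsStream(path);
    }

    public static void main(String[] args) {
        InputStream in = openResource("res/particles/particle.xml");
        System.out.println(in == null ? "resource not found on classpath"
                                      : "resource found");
    }
}
```

The returned stream can then be handed straight to an XML parser, so the same code works unmodified inside and outside the IDE.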
## anonymous 5 years ago DELMA IS MAKING A PICTURE FRAME. SHE WANTS TO GLUE GOLD BRAID AROUND THE EDGE OF THE FRAME. THE FRAME IS 9 IN. x 14 IN. WHAT IS THE LEAST AMOUNT OF BRAID DELMA WILL NEED? IF BRAID IS SOLD BY THE YARD AND FRACTIONS OF A YARD, HOW MUCH SHOULD DELMA BUY?

1. anonymous: I guess the frame is rectangular
2. anonymous: and since she's gluing them on the edge, I guess you should take the perimeter, which is the sum of the sides: P = 2L + 2W = 2(14) + 2(9) = 46 in. She'll need 46 inches. Correct me if I'm wrong ^_^
3. anonymous: alina, this is correct also. You just need to finish it off. You know you need 46 inches of braiding. There are 36 inches in every yard, so you'll need $\frac{46}{36}=\frac{36+10}{36}=\frac{36}{36}+\frac{10}{36}=1+\frac{5}{18}=1\frac{5}{18}\text{ yards}$
4. anonymous: (46 inches of braid)/(36 inches for every yard) - you need to figure out how many yards fit into 46 inches, which is what I calculated for you above. Does this help?
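A quick sanity check of the arithmetic above (a sketch; the variable names are mine, and exact fractions avoid any rounding):

```python
# Verify the perimeter and yardage computed in the answers above.
from fractions import Fraction

length_in, width_in = 14, 9
perimeter_in = 2 * length_in + 2 * width_in   # braid needed, in inches
yards = Fraction(perimeter_in, 36)            # 36 inches per yard

print(perimeter_in)  # 46
print(yards)         # 23/18, i.e. 1 5/18 yards
```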
Tying knot theory with traveling salesman problem (TSP) If you draw a knot and place lots of evenly-spaced points on it, with straight segments between adjacent points, clearly the knot you started with is the shortest solution to the TSP in 3 dimensions. Question: for a stick knot (straight lines between points) with $v=6$ vertices (the least number of points or segments for a stick knot), is there any shortest TSP path that forms a knot? If not, what is the least such value of $v$? My question needs clarification. Suppose we have a smooth knot (everywhere a finite curvature). Suppose this knot is "reasonable" such that two separate parts of its length are not closer than $C$. (I don't know how to state that sensibly). If that condition does not hold, just expand the whole knot until it does. Now evenly space points along its length with spacing $\ll C$. This is supposed to assure that the closest points to a given one are its 2 neighbors along the length. Then create a stick knot by connecting each point with its 2 neighbors. It seems clear that the shortest path around the whole stick knot will be the path that just connects each point with its neighbors. Any path that jumps off this ordered set of points will have to travel more than $C$ in both directions. This may be as clear as mud - I can't tell. - "Tying knot theory"... :D – Zev Chonoles Jun 5 '12 at 20:21 I'm afraid your assertion in the first sentence is not at all clear to me. I'm capable of drawing some pretty zigzaggedy knots. – Rahul Jun 5 '12 at 20:26 Not bad, Rahul ;-), but I also don't get it. – draks ... Jun 5 '12 at 22:45 The least number of segments for a stick knot is 3, not 6, as the unknot is a knot. – Gerry Myerson Jun 6 '12 at 4:38 Now included in a question asked at MO, mathoverflow.net/questions/99213/… – Gerry Myerson Jun 10 '12 at 6:19
Notes On Climate of India - CBSE Class 9 Geography

Weather describes the day-to-day meteorological conditions, such as wind, temperature, cloudiness, moisture and rainfall, affecting a place. Climate is the average weather, usually taken over a 30-year period, for a particular region and time. The basic elements of weather are wind, temperature, air pressure, precipitation and moisture. ‘Monsoon’ refers to the seasonal reversal in the wind direction during the year. The two important elements of climate are temperature and precipitation. In some parts of the Rajasthan desert, the temperature in summer is 50 degrees Celsius, whereas the summer temperature in Jammu and Kashmir is 20 degrees Celsius. During winter, the temperature in Jammu and Kashmir may be -45 degrees Celsius. Drass in Jammu and Kashmir is the second coldest inhabited place in the world. In India, the Tropic of Cancer passes through the central part of the country, from the Rann of Kutch in the west to Mizoram in the east. India has both tropical and subtropical climates. Altitude refers to the height of a place above sea level. Contrasts in temperature are experienced more in the interior of the country. The rainfall in India varies in its form, type, amount and seasonal distribution. In the upper parts of the Himalayas, precipitation is mostly in the form of snowfall, whereas the remaining parts of the country receive rain. Rainfall generally decreases from east to west in the Northern Plains. Climatic variations also affect the way people live, i.e. the food they eat, the clothes they wear and the kind of houses they live in. In India, the elevation of land ranges from 30 metres to 6,000 metres. The Himalayan mountains to the north of India have an average height of about 6,000 metres. The average summer temperature on the Himalayas can vary from zero degrees Celsius to 14 degrees Celsius, while winters can see the temperature dipping below freezing point with heavy snowfall.
The Himalayas prevent the cold winds from Central Asia from entering the subcontinent. The rainfall in India is governed mainly by pressure and surface winds, upper air circulation, and western cyclonic disturbances and tropical cyclones. Due to the Coriolis force, these winds move towards the equatorial low-pressure area. The Coriolis force, also known as ‘Ferrel’s Law’, is an apparent force caused by the earth’s rotation. This force deflects winds towards the right in the northern hemisphere and towards the left in the southern hemisphere. The north-easterly winds are land-bearing winds; hence they carry very little moisture and bring little or no rain to India. During winter, a high-pressure area is created to the north of the Himalayas. In summer, a low-pressure area develops over interior Asia as well as over north-western India. This causes a complete reversal of the direction of winds during summer. Winds move from the high-pressure area over the southern Indian Ocean, cross the equator and turn right towards the low-pressure areas over the Indian subcontinent. These winds are known as the south-west monsoon winds. An important component of the flow is the jet stream. Jet streams are narrow belts of high-altitude westerly winds that blow in the troposphere. Their speed varies from about 110 kilometres per hour in summer to about 184 kilometres per hour in winter. A number of separate jet streams have been identified. The most constant are the mid-latitude and the subtropical jet streams. They originate from the Mediterranean region and are known as subtropical westerly jet streams. An easterly jet stream, called the tropical easterly jet stream, blows over peninsular India, approximately over 14°N, during the summer months. The movement of water in the oceans is called currents.
Click on this link for a file that has Mapping Diagram blanks (2 and 3 axes) for use in the following exercises.

1. For each positive power function, complete the function table for $x = 2, 1, 0, -1, -2$, then create the corresponding mapping diagram and locate the "extreme point" if one exists.
a. $f(x)=2x^3$
b. $f(x)=-2x^3$
c. $f(x)=-(x+1)^4 + 1$
d. $f(x)=(x-1)^4-1$

2. For each negative power function, complete the function table for $x = 2, 1, 0, -1, -2$, then create the corresponding mapping diagram and locate the "pole".
a. $f(x)=x^{-2}$
b. $f(x)=-x^{-3}$
c. $f(x)=(x+1)^{-2}-1$
d. $f(x)=(x-1)^{-3}+1$

3. For the functions in problems 1 c and 1 d, find any values of $a$ where $f(a) = 0$.

4. For the functions in problems 2 c and 2 d, find any values of $a$ where $f(a) = 0$.

5. Suppose $f$ is a polynomial function of degree 4 with $f(x)=A(x-2)^2(x-1)(x+1)$ and $f(0) = 6$.
a. Determine $A$ and the roots of $f$.
b. Find $f(-2)$ and $f(3)$.
c. Give the standard polynomial function form for $f$.
d. Exhibit the results of your work on a graph and a mapping diagram.

6. Suppose $f$ is a linear fractional rational function with $f(x)= \frac{2x-a}{x-b}$, with its pole at $x=1$ and $f(0)= 3$.
a. Determine $a$ and $b$.
b. Use the values of $a$ and $b$ to find $f(-1)$ and $f(2)$.
c. Give the polynomial - proper rational function form for $f$ based on the given information.
d. Solve the equation $f(x) = 0$ and display the solutions on your mapping diagram.

7. For each of the following cubic polynomial functions, create a mapping diagram for the function treated as a composition of core linear functions and the core power function $P_3(x)=x^3$.
a. $f(x)=2x^3 + 1$
b. $f(x)=-2(x-1)^3 - 3$
c. $f(x)=\frac 1 2 x^3 + \frac 1 2$
d. $f(x)=-\frac 1 2 (x+1)^3 + \frac 1 2$

8. For each of the following rational functions, create a mapping diagram for the function treated as a composition of core linear functions and the core power function $P_3^{-1}(x)=\frac 1 {x^3} = x^{-3}$.
a. $f(x)=2x^{-3} + 1$
b. $f(x)=-2(x-1)^{-3} - 3$
c. $f(x)=\frac 1 {2 x^3} + \frac 1 2$
d. $f(x)=-\frac 1 {2 (x+1)^3} + \frac 1 2$

9. For each of the following functions, use "socks and shoes" to find and create a mapping diagram for its inverse function for the domain $(-\infty,\infty)$.
a. $f(x)=2x^3 + 1$
b. $f(x)=-2(x-1)^3 - 3$
c. $f(x)=\frac 1 {2 x^3} + \frac 1 2$
d. $f(x)=-\frac 1 {2 (x+1)^3} + \frac 1 2$
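Function tables like the ones in these exercises can be filled in quickly by machine as a check on hand work. A small helper (hypothetical, not part of the exercise set), shown on a cubic of the same shape as problem 7 a:

```python
# Tabulate any function f at the x-values used throughout the exercises.
def table(f, xs=(2, 1, 0, -1, -2)):
    """Return (x, f(x)) pairs, one per row of the function table."""
    return [(x, f(x)) for x in xs]

# Example with f(x) = 2x^3 + 1 (the function from problem 7a):
print(table(lambda x: 2 * x**3 + 1))
# [(2, 17), (1, 3), (0, 1), (-1, -1), (-2, -15)]
```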
The high intensity of therapy and prolonged immune suppression after hematopoietic cell transplantation (HCT) increase the risk of long-term complications and health care needs among survivors. The aim of this study was to evaluate the current status of health care utilization by long-term HCT survivors and to identify factors associated with lack of utilization. A total of 845 individuals who had undergone HCT between 1974 and 1998 at age 21 years or older and survived 2 or more years after HCT participated in the study. Health care utilization was assessed through a mailed questionnaire in three domains: general contact with health care system, general physical examination, and cancer/HCT–related visit. The median age at HCT was 38.2 years, and the median length of follow-up was 6.4 years. Overall, 98% of allogeneic and 94% of autologous HCT survivors reported medical contact 11+ years after HCT. Cancer/HCT–related visits decreased with increasing time from HCT (allogeneic HCT, 98-57%; autologous HCT, 94-63%). The prevalence of general physical examination increased with time (allogeneic HCT, 56-74%; autologous HCT, 72-81%). Primary care physicians provide health care for an increasing number of adult long-term survivors of HCT, emphasizing the need for increased awareness of the long-term follow-up needs of the HCT survivors by the health care providers. (Cancer Epidemiol Biomarkers Prev 2007;16(4):834–9) Hematopoietic cell transplantation (HCT) is used to treat a variety of malignant and nonmalignant disorders. Improved transplantation strategies and supportive care combined with a wider variety of stem cell sources have led to increased utilization of this therapeutic modality (1). Multiple factors, such as exposure to high-dose chemotherapy, prophylaxis/treatment of graft versus host disease (GVHD), sequelae of GVHD, and prolonged immune suppression place the survivors at increased risk of long-term adverse sequelae. 
Several disease-specific and therapy-related late effects have been described in HCT survivors (2-8), resulting in increased mortality, morbidity, and compromised health status (1, 2, 9, 10). This population is thus likely to have a higher need for utilization of the health care system for many years after their treatment. Although several studies have evaluated the health care utilization by cancer survivors (11-14), there are no reports describing the pattern of health care utilization by transplant survivors. The goal of this study was to evaluate self-reported health care utilization by long-term HCT survivors and to identify risk factors associated with lack of utilization. Patients The Bone Marrow Transplant Survivor Study, a collaborative effort between the City of Hope Cancer Center and the University of Minnesota, examines the long-term outcomes of individuals who have survived 2 or more years after undergoing HCT and compares them with nearest-age siblings. The current report is restricted to individuals who met the following eligibility criteria: (a) HCT between 1974 and 1998 at City of Hope Cancer Center/University of Minnesota; (b) 21 years or older at time of HCT; and (c) survival of 2 or more years after HCT irrespective of current disease status. The Human Subjects committee at the participating institutions approved the Bone Marrow Transplant Survivor Study protocol. Informed consent was provided according to the Declaration of Helsinki. A 255-item mailed questionnaire was used to collect information from all participants. The questionnaire was designed to capture a wide range of information, including demographic characteristics, marital status, insurance coverage, education, income, employment, access and utilization of medical care, current health status, and concerns for future health. Detailed clinical information was obtained from the institutional medical records. 
Outcome Measures

Health care utilization in the 2 years preceding the survey was assessed in three domains: (a) general contact with the health care system (medical contact); (b) general physical examination (GPE); and (c) cancer/HCT–related visit, defined as a cancer/transplant–related medical visit with the transplant team or a medical visit at a cancer center. These outcomes were not mutually exclusive. General or nonspecific medical contact was ascertained by asking respondents whether they had contact with a physician, nurse, or other health care provider in the 2 years before the survey; the contact could include a visit to the physician's office or a phone contact. GPE was defined as a self-report of a GPE within the 2 years before the survey. To ascertain cancer/HCT–related visit, respondents were asked how many of their visits were related to their previous cancer or HCT and whether any of these visits were at the cancer center. The actual language used in the questionnaire to construct these outcome variables is shown in Table 1. The content or additional details of the medical visits were not ascertained.

Table 1. Self-reported health care utilization by HCT survivors—definition of outcome measures

Outcome measure: General contact with the health care system
  Question: "During the last 2 y, which of the following health care providers (excluding dentists) did you see or talk to for medical contact?"
  Response options: Physician; Nurse; Chiropractor; Physical therapist
  Absence of utilization: No medical contact, if no to all responses

Outcome measure: GPE
  Question: "Some people get a physical examination from a doctor once in a while although they are feeling well and have not been sick. When was the last time you had a GPE when you were not sick?"
  Response options: Never; Less than 1 y ago; 1-2 y ago; 3-4 y ago; ≥5 y ago
  Absence of utilization: No GPE, if never or had a GPE >3 y ago

Outcome measure: Cancer/HCT–related visit
  Questions: "As you know, you were asked to participate in this study because you were once diagnosed with a cancer, leukemia, tumor, or similar illness and underwent bone marrow transplantation (BMT). How many of the visits to the physician were related to this previous illness or BMT?" and "Where did you receive your health care?"
  Response options: 0 time; 1-2 times; 3-4 times; 5-6 times; 7-10 times; 11-20 times; >20 times; Oncology (cancer) center or clinic
  Absence of utilization: No cancer/HCT–related visit, if none of the visits was related to the previous illness or HCT, or if health care was not received at a cancer center

Analysis

Potential risk factors for absence of health care utilization within any one of the three domains were analyzed using unconditional logistic regression. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated to assess the strength of association. Univariate analyses for all pertinent variables were first done to estimate relative risk individually. Stepwise regression was used to select important variables from those that approached statistical significance in the univariate analysis, and a P value of <0.10 was used as the selection criterion. Variables examined included sociodemographic variables [age at time of HCT, age at survey, gender, race and ethnicity (White, Hispanic, other), educational status, current insurance, and household income]; clinical variables [length of time since HCT, primary diagnosis, conditioning regimen (total body irradiation (TBI) versus non-TBI based), presence of chronic GVHD (for allogeneic transplantation only), drugs used for prophylaxis and treatment of GVHD (exposure to cyclosporin A versus no exposure; for allogeneic transplantation only), and risk of relapse at HCT (standard versus high risk)]; current health status; and concerns for future health. Patients were considered to be at standard risk for relapse if HCT was done in first or second complete remission; all others were considered high risk. The final multivariate model included only those variables that reached statistical significance. P values <0.05 were considered statistically significant, and all P values quoted are two-sided. Statistical analysis of the data was done using Epilog Plus (Epicenter Software, Pasadena, CA).

Of the 1,258 patients eligible for participation in this study, 1,176 were successfully contacted, and 845 (71.9%) agreed to participate. Four hundred and twenty-eight study participants had received an allogeneic HCT, whereas 417 had received an autologous transplant.
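The odds-ratio arithmetic underlying results like those reported below can be made concrete with a minimal sketch. The study fit multivariate unconditional logistic regression models; the snippet shows only the simpler single-factor Wald calculation from a 2×2 table, with hypothetical counts (not data from this paper):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI from a 2x2 table:
                exposed  unexposed
    outcome        a         b
    no outcome     c         d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) uses the reciprocal cell counts
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: 20/100 exposed vs 10/100 unexposed report absence of utilization.
or_, lo, hi = odds_ratio_ci(20, 10, 80, 90)   # OR = 2.25, 95% CI 0.99-5.09
```

A multivariate model adjusts each OR for the other covariates, so the single-table figure is illustrative only.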
The demographic and clinical characteristics of the cohort are described in Table 2. Because of the differences in disease characteristics, therapeutic agents used for conditioning, and risk for GVHD, analyses were done and results are presented for the entire cohort and also stratified by type of transplantation.

Table 2. Demographic characteristics of the study cohort by type of transplant
[Columns: Entire cohort (n = 845) | Autologous (n = 417) | Allogeneic (n = 428)]

Age (y), median (range)
  Age at transplantation: 38.2 (21.0-68.6) | 42.9 (21.0-68.6) | 35.2 (21.1-62.0)
  Age at study participation: 46.3 (23.3-73.0) | 49.1 (23.3-73.0) | 44.5 (24.2-65.5)
Gender, n (%)
  Male: 460 (54.4) | 222 (53.2) | 238 (55.6)
Race, n (%)
  White: 681 (80.6) | 356 (85.4) | 325 (75.9)
  Hispanic: 96 (11.4) | 36 (8.6) | 60 (14.0)
  Other: 68 (8.0) | 25 (6.0) | 43 (10.1)
Education, n (%)
  High school or less: 146 (17.3) | 65 (15.6) | 81 (19.0)
  High school and some college: 314 (37.3) | 142 (34.1) | 172 (40.4)
  College degree: 382 (45.4) | 209 (50.2) | 173 (40.6)
Household income, n (%)
  ≥$60,000/y: 387 (48.4) | 205 (52.0) | 182 (44.9)
  $20,000-59,999/y: 307 (38.4) | 148 (37.6) | 159 (39.3)
  <$20,000/y: 105 (13.1) | 41 (10.4) | 64 (15.8)
Current health insurance, n (%)
  Uninsured: 54 (6.5) | 19 (4.6) | 35 (8.3)
Duration of follow-up (y), n (%)
  2-5: 387 (45.8) | 219 (52.5) | 168 (39.3)
  6-10: 285 (33.7) | 150 (36.0) | 135 (31.5)
  ≥11: 173 (20.5) | 48 (11.5) | 125 (29.2)
Primary diagnosis, n (%)
  Hodgkin's lymphoma: 82 (9.7) | 79 (18.9) | 3 (0.7)
  Non–Hodgkin's lymphoma: 196 (23.2) | 172 (41.2) | 24 (5.6)
  Acute lymphoid leukemia: 52 (6.2) | 8 (1.9) | 44 (10.3)
  Acute myeloid leukemia: 188 (22.2) | 75 (18.0) | 113 (26.4)
  Chronic myeloid leukemia: 223 (26.4) | 22 (5.3) | 201 (47.0)
  Aplastic anemia: 27 (3.2) | — | 27 (6.3)
  Other: 77 (9.1) | 61 (14.7) | 16 (3.8)
Relapse risk at HCT, n (%)
  High risk: 322 (38.2) | 194 (46.5) | 128 (30.1)
Conditioning regimen, n (%)
  Total body irradiation: 661 (78.5) | 281 (67.9) | 380 (88.8)
Chronic graft vs host disease, n (%)
  Yes: — | — | 260 (60.9)
Graft vs host disease prophylaxis/treatment, n (%)
  Cyclosporin A: — | — | 321 (75.0)
Current health, n (%)
  Fair/poor: 173 (20.6) | 73 (17.6) | 100 (23.4)
Concerns for future health, n (%)
  Not concerned: 40 (4.9) | 28 (6.8) | 12 (2.9)

Compared with the 413 nonparticipants, the 845 HCT survivors who participated in this study were significantly
more likely to be White (80.6% versus 73.1%, P < 0.01), were older at time of HCT (39.0 versus 36.6, P < 0.001) and at time of survey (46.6 versus 45.0, P = 0.006), and had a shorter length of follow-up from HCT (median length 7.6 versus 8.4, P = 0.003). Furthermore, participants were more likely to have received TBI (78.5% versus 71.4%, P = 0.007) as part of conditioning when compared with the nonparticipants. Participants and nonparticipants did not differ in terms of sex, primary diagnosis, and risk of relapse at HCT. Fifty-four percent of the participants were males, 81% were White, and 54% had been followed for >5 years since HCT. Forty-five percent of the participants were college graduates, 48% reported an annual household income greater than $60,000, and 93% had health insurance coverage. Primary diagnoses included Hodgkin's lymphoma (9.7%), non–Hodgkin's lymphoma (23.2%), acute myeloid leukemia (22.2%), acute lymphoid leukemia (6.2%), chronic myeloid leukemia (26.4%), multiple myeloma (5.0%), and other diagnoses (4.1%). Seventy-nine percent of the participants had received TBI-based conditioning regimen, and 38% were at high risk of relapse at time of HCT. Twenty-one percent rated their current health status as fair or poor, and 5% were not concerned about their future health. Compared with autologous HCT survivors, allogeneic HCT survivors were younger at HCT (mean 35.8 years old versus 42.2 years old, P < 0.001) and at survey (mean 44.4 years old versus 48.8 years old, P < 0.001). Furthermore, allogeneic HCT survivors were less likely to be White (75.9% versus 85.4%, P = 0.002), have a college degree (40.6% versus 50.2%, P = 0.02), report an income more than$60,000 (44.9% versus 52.0%, P = 0.04), be at high risk of relapse at HCT (30.1% versus 46.5%, P < 0.001), and report concern about their future health (2.9% versus 6.8%, P = 0.01). 
However, they were more likely to be uninsured (8.3% versus 4.6%, P = 0.04), had a longer duration of follow-up (mean, 8.6 versus 6.7 years, P < 0.001), were more likely to have received a TBI-based conditioning regimen (88.8% versus 67.9%, P < 0.001), and were more likely to rate their current health as fair/poor (23.4% versus 17.6%, P = 0.05; Table 2).

Prevalence of Health Care Utilization

Entire Cohort. Health care utilization as a function of years since transplantation for the entire cohort is shown in Fig. 1A. Ninety-seven percent of the survivors followed beyond 10 years reported medical contact. Although the prevalence of GPE increased from 65% at 2 to 5 years to 76% at 11+ years after HCT (Ptrend = 0.003), the prevalence of cancer/HCT–related visit decreased from 96% at 2 to 5 years to 59% at 11+ years after HCT (Ptrend < 0.001).

Figure 1. Trend over time for general contact (medical contact), GPE, and cancer/HCT–related visit after HCT: (A) for the entire cohort; (B) for allogeneic HCT only; (C) for autologous HCT only.

Allogeneic HCT Survivors. Figure 1B illustrates the prevalence of health care utilization with time since allogeneic transplantation. Ninety-eight percent of allogeneic transplant survivors followed beyond 10 years reported medical contact. Although the prevalence of GPE increased from 56% at 2 to 5 years to 74% at 11+ years after HCT (Ptrend = 0.006), the prevalence of cancer/HCT–related visit decreased from 98% at 2 to 5 years to 57% at 11+ years after HCT (Ptrend < 0.001).

Autologous HCT Survivors. Health care utilization as a function of years since autologous transplantation is shown in Fig. 1C. Ninety-four percent of the survivors followed beyond 10 years reported medical contact.
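The declining trends in cancer/HCT–related visits quantified by the Ptrend values above can be illustrated with a Cochran-Armitage–style test for a linear trend in proportions across ordered follow-up groups. This is a sketch only: the paper does not specify which trend test it used, and the counts below are hypothetical, chosen to mimic the reported prevalences.

```python
import math

def trend_test_z(ns, rs, scores=None):
    """Cochran-Armitage-style Z statistic for a linear trend in proportions.
    ns: group sizes; rs: counts with the outcome; scores: ordered group scores."""
    if scores is None:
        scores = list(range(len(ns)))
    n_tot, r_tot = sum(ns), sum(rs)
    p_bar = r_tot / n_tot
    # T: score-weighted excess of observed over expected successes per group
    t_stat = sum(s * (r - n * r_tot / n_tot) for s, n, r in zip(scores, ns, rs))
    var_t = p_bar * (1 - p_bar) * (
        sum(s * s * n for s, n in zip(scores, ns))
        - sum(s * n for s, n in zip(scores, ns)) ** 2 / n_tot)
    return t_stat / math.sqrt(var_t)

# Hypothetical counts mimicking declining cancer/HCT-related visits at
# 2-5, 6-10, and 11+ years of follow-up (not the study's raw data):
z = trend_test_z([387, 285, 173], [372, 231, 102])   # z < 0: decreasing trend
```

A large negative Z (compared against the standard normal) corresponds to a small two-sided Ptrend for a decreasing trend.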
Although the prevalence of GPE increased from 72% at 2 to 5 years to 81% at 11+ years after HCT (Ptrend = 0.30), the prevalence of cancer/HCT–related visit decreased from 94% at 2 to 5 years to 63% at 11+ years after HCT (Ptrend < 0.001).

Multivariate Analysis. Because of the high prevalence of reported medical contact, multivariate analyses to identify factors associated with not reporting health care utilization were done only for GPE and cancer/HCT–related visits; results are shown in Table 3.

Table 3. Risk factors for absence of health care utilization reported by HCT survivors [OR (95% CI)]

Type of transplant, entire cohort (reference: autologous)
  Allogeneic — GPE: 1.72 (1.1-2.6); cancer/HCT–related visit: 0.49 (0.3-0.9)
Race, cancer/HCT–related visit (reference: non-Hispanic White)
  Hispanic White — entire cohort: 0.23 (0.1-0.6); allogeneic HCT: 0.16 (0.04-0.6)
  Others — entire cohort: 0.67 (0.3-1.7); allogeneic HCT: 0.66 (0.2-2.1)
Follow-up years (reference: 2-5 y)
  6-10 y — entire cohort GPE: 0.65 (0.5-0.9); entire cohort cancer/HCT visit: 5.15 (2.7-9.7); allogeneic GPE: 0.44 (0.3-0.8); allogeneic cancer/HCT visit: 6.82 (1.9-24.5); autologous cancer/HCT visit: 4.75 (2.3-10.1)
  11+ y — entire cohort GPE: 0.57 (0.4-0.9); entire cohort cancer/HCT visit: 19.06 (9.8-37.1); allogeneic GPE: 0.54 (0.3-0.9); allogeneic cancer/HCT visit: 23.82 (6.6-86.5); autologous cancer/HCT visit: 9.16 (3.7-22.7)
  Ptrend — entire cohort GPE: 0.004; entire cohort cancer/HCT visit: <0.0001; allogeneic GPE: 0.01; allogeneic cancer/HCT visit: <0.001; autologous cancer/HCT visit: <0.001
Concerns for future health, cancer/HCT–related visit (reference: concerned)
  Not concerned — entire cohort: 4.05 (1.8-9.3); allogeneic HCT: 9.69 (2.1-45.3); autologous HCT: 2.92 (1.1-7.5)
Current health status (reference: good)
  Fair/poor — entire cohort GPE: 1.89 (1.3-2.7); entire cohort cancer/HCT visit: 0.51 (0.3-0.97); allogeneic GPE: 2.06 (1.3-3.4)
Exposure to cyclosporin A, allogeneic cancer/HCT–related visit (reference: no exposure)
  Exposed: 0.36 (0.2-0.7)
Exposure to TBI, autologous GPE (reference: no exposure)
  Exposed: 1.64 (1.0-2.7)

Entire Cohort: GPE. Allogeneic transplant survivors were more likely to report absence of GPE in the 2 years before participation when compared with autologous HCT survivors (OR, 1.72; 95% CI, 1.1-2.6). Patients followed longer than 5 years from HCT were less likely to report absence of GPE (Ptrend = 0.004). Patients transplanted for acute lymphoid leukemia were less likely to report absence of GPE when compared with those transplanted for chronic myeloid leukemia (OR, 0.39; 95% CI, 0.2-0.9). Finally, survivors rating their current health status as fair or poor were more likely to report absence of GPE (OR, 1.89; 95% CI, 1.3-2.7) when compared with those who rated their health status as good.

Entire Cohort: Cancer/HCT–Related Visit. Allogeneic transplant survivors were less likely to report absence of cancer/HCT–related visit when compared with autologous HCT survivors (OR, 0.49; 95% CI, 0.3-0.9). Compared with survivors of non-Hispanic White background, Hispanic survivors were less likely to report absence of cancer/HCT–related visit. Patients followed longer than 5 years from HCT (Ptrend < 0.001) and those expressing a lack of concern about their future health (OR, 4.05; 95% CI, 1.8-9.3) were more likely to report absence of cancer/HCT–related visits. Survivors rating their own health as fair/poor were less likely to report absence of cancer/HCT–related visit (OR, 0.51; 95% CI, 0.3-0.97).
Compared with patients transplanted for chronic myeloid leukemia, patients with a primary diagnosis of acute myeloid leukemia were more likely to report absence of cancer/HCT–related visits (OR, 2.07; 95% CI, 1.1-3.8). Allogeneic HCT Survivors: GPE. Allogeneic HCT survivors rating their current health status as fair or poor were more likely to report absence of a GPE in the 2 years before the study (OR, 2.06; 95% CI, 1.3-3.4) when compared with those who rated their health status as good. Compared with patients with a primary diagnosis of chronic myeloid leukemia, patients with a primary diagnosis of acute lymphoid leukemia were less likely to report absence of GPE (OR, 0.32; 95% CI, 0.1-0.8), whereas patients with non–Hodgkin's lymphoma/Hodgkin's lymphoma were more likely to report absence of GPE (OR, 2.50; 95% CI, 1.1-5.95). Additionally, survivors followed longer than 5 years after HCT were less likely to report absence of GPE (6-10 years from HCT: OR, 0.44; 95% CI, 0.3-0.8; 11+ years: OR, 0.54; 95% CI, 0.3-0.9). Allogeneic HCT Survivors: Cancer/HCT–Related Visit. Time since HCT was associated with an increased likelihood of reporting absence of cancer/HCT–related visit (6-10 years since HCT: OR, 6.82; 95% CI, 1.9-24.5; 11+ years: OR, 23.82; 95% CI, 6.6-86.5; Ptrend < 0.001). Survivors who reported lack of concern about future health were more likely to report absence of cancer/HCT–related visits (OR, 9.69; 95% CI, 2.1-45.3), and those who used cyclosporin A for GVHD prophylaxis or treatment were less likely to report absence of cancer/HCT–related visit (OR, 0.36; 95% CI, 0.2-0.7). Compared with non-Hispanic Whites, Hispanics were less likely to report absence of cancer/HCT–related visit (OR, 0.16; 95% CI, 0.04-0.59). Autologous HCT Survivors: GPE. Autologous HCT survivors exposed to a TBI-based conditioning regimen were more likely to report absence of GPE in the 2 years before the study (OR, 1.64; 95% CI, 1.0-2.7).
Autologous HCT Survivors: Cancer/HCT–Related Visit. Autologous HCT survivors followed longer than 5 years from HCT (Ptrend < 0.001) and those expressing a lack of concern about their future health (OR, 2.92; 95% CI, 1.1-7.5) were more likely to report absence of a cancer/HCT–related visit. However, patients transplanted for Hodgkin's lymphoma or non–Hodgkin's lymphoma were less likely to report absence of cancer/HCT–related visit (Hodgkin's lymphoma: OR, 0.22; 95% CI, 0.1-0.6; non–Hodgkin's lymphoma: OR, 0.26; 95% CI, 0.1-0.6) when compared with those transplanted for acute myeloid leukemia. This study of 845 long-term HCT survivors shows that 98% reported general medical contact, 71% had a GPE, and 84% reported a cancer/HCT–related visit in the 2 years preceding the study. Thus, almost all HCT survivors report medical contact within the 2 years preceding study participation, and although the prevalence of GPE increases with time since transplantation, that of cancer/HCT–related visits declines. Allogeneic transplant survivors are more likely to report cancer/HCT–related visits when compared with autologous transplant survivors. Furthermore, Hispanics and other non-White minority populations are more likely to report cancer/HCT–related visits when compared with Whites. Additionally, patients who report no concerns about their future health are less likely to report cancer/HCT visits when compared with those who are concerned about their future health. Finally, patients who rate their health as fair or poor are more likely to report cancer/HCT–related visits and less likely to report GPEs when compared with those who rate their health as good. Although there is a paucity of data regarding health care utilization by transplant survivors, several studies have reported the health care utilization patterns observed in cancer survivors treated with conventional therapy (11-15).
The health care utilization reported by adult survivors of childhood cancer (11) shows that the likelihood of reporting a cancer-related visit or GPE decreases significantly with time from cancer diagnosis. Lack of health insurance, male gender, age more than 30 years at time of study, and lack of concern for future health have been identified as significant risk factors for not reporting a GPE, cancer-related visit, or cancer center visit. Nord et al. (16) described significantly higher self-reported use of health care services in long-term survivors of adult-onset cancer when compared with normal controls. Hewitt et al. (17) used National Health Interview Survey (1998-2000) data to compare the health status and health care utilization of adult cancer survivors and individuals without cancer. Cancer survivors were significantly more likely to report being in fair or poor health; to experience a psychological disability, limitations of activities of daily living, or functional limitations; and to report being unable to work because of a health condition. Twice as many cancer survivors as individuals without a history of cancer had visited a physician in the preceding year, and patients with psychological problems and functional limitations reported more visits. Successful treatment of patients undergoing HCT should not be limited to the immediate post-HCT period but must include plans for risk-based follow-up and screening for potential late effects to enable survivors to reintegrate into society and reach their maximum potential. This study shows that although the large majority of HCT survivors report medical contact within the 2 years preceding study participation, the prevalence of a transplant-related visit decreases with time from HCT, whereas the prevalence of general physical examination increases. These findings suggest that primary care physicians provide health care for a majority of this growing high-risk population.
Although the risks of late effects of therapy increase with time, the number of survivors receiving care at a cancer center or from an oncologist or a transplant physician decreases with increasing time from HCT. Zebrack et al. (13) described barriers to optimum care of survivors: on the one hand, survivors' lack of knowledge about the late effects of therapy and their lack of insurance; on the other, the lack of education of primary care providers about the health problems of survivors and the limited number of health care providers equipped to deal with survivors. Thus far, there has been little coordinated effort to enhance communication to provide risk-based care for HCT survivors. Little education concerning this population of patients is incorporated into primary care curricula. Furthermore, there is a paucity of reports regarding HCT survivors in primary care journals (11, 18). Finally, transplant survivors represent a small fraction of the patient population cared for by primary care physicians, which further compounds the lack of awareness regarding the specialized care needed by these patients. Health care provided to transplant survivors could be optimized by introducing interventions to educate survivors, ensuring a smooth transition to primary care physicians, and enhancing communication between transplant physicians and primary care physicians. Models of care for long-term HCT survivors must be flexible to meet the specialized needs of this high-risk population, accommodating patients with a wide range of treatment exposures and risks for adverse long-term sequelae. Regardless of the model of care used, partnership with health care providers across a wide range of specialties is required to deliver optimal care. Internists, family medicine physicians, physician assistants, and nurse practitioners require ongoing education regarding the potential long-term effects for which HCT survivors are at risk.
There is therefore a need for standardized guidelines for follow-up and effective communication between transplant centers and primary care physicians. The Center for International Blood and Marrow Transplant Research, the European Group for Blood and Marrow Transplantation, and the American Society for Blood and Marrow Transplantation have developed recommendations that offer care providers suggested screening and prevention practices for autologous and allogeneic HCT survivors (19). Although this is the first study of its kind, there are several limitations. Approximately one third of the eligible patients did not participate in the study. Although the participants and nonparticipants were similar in most respects, we could not assess differences in late medical effects that could have limited their participation. Health care utilization was determined by self-report and was not externally verified. Furthermore, the determination of a transplant-related visit was based on the patient's perception of the reason for the visit; thus, there is a likelihood of misclassification of visits. Finally, the number of ethnic minorities and uninsured patients in this cohort is relatively small; hence, these results may not effectively represent outcomes or associations in those populations. In conclusion, although most survivors of HCT reported some contact with the health care system, the likelihood of a transplant-related visit or cancer center visit decreases with time. Primary care physicians provide long-term health care for most of this high-risk population. Acceptance and utilization of standardized guidelines for long-term follow-up, together with ongoing and effective communication between transplant centers and primary care physicians, are essential to optimize care for long-term transplant survivors. Grant support: National Cancer Institute grant R01 CA078938 and The Lymphoma-Leukemia Society of America Clinical Scholar Award 2191-02.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

References

1. Socie G, Stone JV, Wingard JR, et al.; Late Effects Working Committee of the International Bone Marrow Transplant Registry. Long-term survival and late deaths after allogeneic bone marrow transplantation. N Engl J Med 1999;341:14–21.
2. Duell T, van Lint MT, Ljungman P, et al.; EBMT Working Party on Late Effects and EULEP Study Group on Late Effects, European Group for Blood and Marrow Transplantation. Health and functional status of long-term survivors of bone marrow transplantation. Ann Intern Med 1997;126:184–92.
3. Cool VA. Long-term neuropsychological risks in pediatric bone marrow transplant: what do we know? Bone Marrow Transplant 1996;18 Suppl 3:S45–9.
4. Kramer JH, Crittenden MR, DeSantes K, Cowan MJ. Cognitive and adaptive behavior 1 and 3 years following bone marrow transplantation. Bone Marrow Transplant 1997;19:607–13.
5. Phipps S, Dunavant M, Srivastava DK, Bowman L, Mulhern RK. Cognitive and academic functioning in survivors of pediatric bone marrow transplantation. J Clin Oncol 2000;18:1004–11.
6. Andrykowski MA, Altmaier EM, Barnett RL, et al. Cognitive dysfunction in adult survivors of allogeneic marrow transplantation: relationship to dose of total body irradiation. Bone Marrow Transplant 1990;6:269–76.
7. Andrykowski MA, Bishop MM, Hahn EA, et al. Long-term health-related quality of life, growth, and spiritual well-being after hematopoietic stem-cell transplantation. J Clin Oncol 2005;23:599–608.
8. Deeg HJ. Delayed complications after hematopoietic cell transplantation. In: Blume KG, Forman SJ, Applebaum FR, editors. Thomas' hematopoietic cell transplantation. 3rd ed. Malden (MA): Blackwell Publishing; 2004. p. 944–61.
9. Bhatia S, Robison LL, Francisco L, et al. Late mortality in survivors of autologous hematopoietic-cell transplantation: report from the Bone Marrow Transplant Survivor Study. Blood 2005;105:4215–22.
10. Wingard JR, Curbow B, Baker F, Piantadosi S. Health, functional status, and employment of adult survivors of bone marrow transplantation. Ann Intern Med 1991;114:113–8.
11. Oeffinger KC, Mertens AC, Hudson MM, et al. Health care of young adult survivors of childhood cancer: a report from the Childhood Cancer Survivor Study. Ann Fam Med 2004;2:61–70.
12. Johnson R, Horne B, Feltbower RG, Butler GE, Glaser AW. Hospital attendance patterns in long term survivors of cancer. Arch Dis Child 2004;89:374–7.
13. Zebrack BJ, Eshelman DA, Hudson MM, et al. Health care for childhood cancer survivors: insights and perspectives from a Delphi panel of young adult survivors of childhood cancer. Cancer 2004;100:843–50.
14. Seo PH, Pieper CF, Cohen HJ. Effects of cancer history and comorbid conditions on mortality and healthcare use among older cancer survivors. Cancer 2004;101:2276–84.
15. Shaw AK, Pogany L, Speechley KN, et al. Use of health care services by survivors of childhood and adolescent cancer in Canada. Cancer 2006;106:1829–37.
16. Nord C, Mykletun A, Thorsen L, Bjoro T, Fossa SD. Self-reported health and use of health care services in long-term cancer survivors. Int J Cancer 2005;114:307–16.
17. Hewitt M, Rowland JH, Yancik R. Cancer survivors in the United States: age, health, and disability. J Gerontol A Biol Sci Med Sci 2003;58:82–91.
18. Oeffinger KC. Childhood cancer survivors and primary care physicians. J Fam Pract 2000;49:689–90.
19. Rizzo JD, Wingard JR, Tichelli A, et al. Recommended screening and preventive practices for long-term survivors after hematopoietic cell transplantation: joint recommendations of the European Group for Blood and Marrow Transplantation, the Center for International Blood and Marrow Transplant Research, and the American Society of Blood and Marrow Transplantation. Biol Blood Marrow Transplant 2006;12:138–51.
# AP Statistics Curriculum 2007 GLM MultLin

(Current revision of 1 May 2013, by IvoDinov.)
## General Advance-Placement (AP) Statistics Curriculum - Multiple Linear Regression

In the previous sections, we saw how to study relations in bivariate designs. Now we extend that to any finite number of variables (the multivariate case).

### Multiple Linear Regression

We are interested in determining the linear regression, as a model, of the relationship between one dependent variable $Y$ and many independent variables $X_i$, $i = 1, \dots, p$. The multilinear regression model can be written as

$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots +\beta_p X_p + \varepsilon$,

where $\varepsilon$ is the error term. The coefficient $\beta_0$ is the intercept (the "constant" term) and the $\beta_i$ are the respective parameters of the $p$ independent variables, so there are $p+1$ parameters to be estimated in the multilinear regression.

• Multilinear vs. non-linear regression: This regression method is "linear" because the relation of the response (the dependent variable $Y$) to the independent variables is assumed to be a linear function of the parameters $\beta_i$. Note that multilinear regression is a linear modeling technique not because the graph of $Y = \beta_0 + \beta_1 X$ is a straight line, nor because $Y$ is a linear function of the $X$ variables. Rather, "linear" refers to the fact that $Y$ can be considered a linear function of the parameters $\beta_i$, even though it need not be a linear function of $X$.
Thus, any model like

$Y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon$

is still a linear regression model, that is, linear in $x$ and $x^2$, even though its graph against $x$ is not a straight line.

### Parameter Estimation in Multilinear Regression

A multilinear regression with $p$ coefficients plus the regression intercept $\beta_0$, fitted to $n$ data points (the sample size) with $n\geq (p+1)$, allows construction of the following vectors and matrix:

$\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = \begin{bmatrix} 1 & x_{11} & x_{12} & \dots & x_{1p} \\ 1 & x_{21} & x_{22} & \dots & x_{2p} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n1} & x_{n2} & \dots & x_{np} \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix} + \begin{bmatrix} \varepsilon_1\\ \varepsilon_2\\ \vdots\\ \varepsilon_n \end{bmatrix}$

or, in vector-matrix notation, $\vec y = \mathbf{X}\cdot\beta + \varepsilon$. Each data point is $(\vec x_i, y_i)$, $i=1,2,\dots,n$. For $n = p+1$ the model fits the data exactly, so standard errors of the parameter estimates cannot be calculated; for $n < p+1$ the parameters themselves cannot be estimated.
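To make the "linear in the parameters" point concrete, the quadratic model $Y = \beta_0 + \beta_1 x + \beta_2 x^2 + \varepsilon$ can be fit by ordinary linear least squares simply by putting $1$, $x$, and $x^2$ in the columns of the design matrix. A minimal NumPy sketch on simulated data (the coefficients and noise level are invented for illustration, not taken from SOCR):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a quadratic trend plus noise (assumed values)
n = 50
x = np.linspace(-2, 2, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.5, size=n)

# Design matrix X: one column per parameter (intercept, x, x^2).
# The model is linear in beta even though it is quadratic in x.
X = np.column_stack([np.ones(n), x, x**2])

# Ordinary least squares solves the linear system for beta
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # estimates should be close to (1, 2, -3)
```

The same pattern handles any model that is linear in its parameters: only the columns of $\mathbf{X}$ change.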
• Point Estimates: The estimated values of the parameters $\beta_i$ are given by

$\widehat{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T {\vec y}$

• Residuals: The residuals, representing the differences between the observations and the model's predictions, are required to analyze the regression and are given by

$\hat\vec\varepsilon = \vec y - \mathbf{X} \hat\beta$

The standard deviation $\hat \sigma$ of the model is determined from

${\hat \sigma = \sqrt{ \frac {\hat\vec\varepsilon^T \hat\vec\varepsilon} {n-p-1}} = \sqrt {\frac{{ \vec y^T \vec y - \hat\vec\beta^T \mathbf{X}^T \vec y}}{{n - p - 1}}} }$

The variance of the errors is Chi-square distributed:

$\frac{(n-p-1)\hat\sigma^2}{\sigma^2} \sim \chi_{n-p-1}^2$

• Interval Estimates: The $100(1-\alpha)\%$ confidence interval for the parameter $\beta_i$ is computed as

${\widehat \beta_i \pm t_{\frac{\alpha }{2},n - p - 1} \hat \sigma \sqrt {(\mathbf{X}^T \mathbf{X})_{ii}^{ - 1} } }$,

where $t$ follows the Student's t-distribution with $n - p - 1$ degrees of freedom and $(\mathbf{X}^T \mathbf{X})_{ii}^{ - 1}$ denotes the entry in the $i$th row and $i$th column of the inverse matrix.

The regression (explained) sum of squares SSR is given by:

${\mathit{SSR} = \sum {\left( {\hat y_i - \bar y} \right)^2 } = \hat\beta^T \mathbf{X}^T \vec y - \frac{1}{n}\left( { \vec y^T \vec u \vec u^T \vec y} \right)}$,

where $\bar y = \frac{1}{n} \sum y_i$ and $\vec u$ is an $n \times 1$ vector of ones. Note that the terms $\vec y^T \vec u$ and $\vec u^T \vec y$ both equal $\sum y_i$, so the term $\frac{1}{n} \vec y^T \vec u \vec u^T \vec y$ equals $\frac{1}{n}\left(\sum y_i\right)^2$.
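The point-estimate, residual, $\hat\sigma$, and interval-estimate formulas above translate directly into NumPy. This is an illustrative simulation, not SOCR applet output; the true coefficients are invented, and the t critical value for $df = 97$ is hardcoded from standard tables rather than computed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data (assumed): Y = 1 + 2*X1 - 1*X2 + noise
n, p = 100, 2
Xdata = rng.normal(size=(n, p))
y = 1 + 2 * Xdata[:, 0] - 1 * Xdata[:, 1] + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), Xdata])          # design matrix with intercept
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                      # point estimates

resid = y - X @ beta_hat                          # residuals
sigma_hat = np.sqrt(resid @ resid / (n - p - 1))  # model standard deviation

# 95% confidence interval for each beta_i
t_crit = 1.985                                    # t_{0.025, df=97}, from t tables
se = sigma_hat * np.sqrt(np.diag(XtX_inv))
ci = np.column_stack([beta_hat - t_crit * se, beta_hat + t_crit * se])
print(beta_hat, sigma_hat)
print(ci)
```

Each row of `ci` is the interval for one parameter, in the order (intercept, $\beta_1$, $\beta_2$).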
The error (or residual) sum of squares (ESS) is given by:

${\mathit{ESS} = \sum {\left( {y_i - \hat y_i } \right)^2 } = \vec y^T \vec y - \hat\beta^T \mathbf{X}^T \vec y}.$

The total sum of squares (TSS) is given by

${\mathit{TSS} = \sum {\left( {y_i - \bar y} \right)^2 } = \vec y^T \vec y - \frac{1}{n}\left( { \vec y^T \vec u \vec u^T \vec y} \right) = \mathit{SSR}+ \mathit{ESS}}.$

### Partial Correlations

For a given linear model

$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots +\beta_p X_p + \varepsilon$,

the partial correlation between $X_1$ and $Y$ given a set of $p-1$ controlling variables $Z = \{X_2, X_3, \cdots, X_p\}$, denoted by $\rho_{YX_1|Z}$, is the correlation between the residuals $R_X$ and $R_Y$ resulting from the linear regression of $X_1$ on $\mathbf{Z}$ and of $Y$ on $\mathbf{Z}$, respectively. The first-order partial correlation is simply the difference between a correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations.

• Partial correlation coefficients for three variables are calculated from the pairwise simple correlations. If $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$, then the partial correlation between $Y$ and $X_2$, adjusting for $X_1$, is:

$\rho_{YX_2|X_1} = \frac{\rho_{YX_2} - \rho_{YX_1}\times \rho_{X_2X_1}}{\sqrt{1- \rho_{YX_1}^2}\sqrt{1-\rho_{X_2X_1}^2}}$

• In general, the sample partial correlation is

$\hat{\rho}_{XY\cdot\mathbf{Z}}=\frac{N\sum_{i=1}^N r_{X,i}r_{Y,i}-\sum_{i=1}^N r_{X,i}\sum r_{Y,i}} {\sqrt{N\sum_{i=1}^N r_{X,i}^2-\left(\sum_{i=1}^N r_{X,i}\right)^2}~\sqrt{N\sum_{i=1}^N r_{Y,i}^2-\left(\sum_{i=1}^N r_{Y,i}\right)^2}},$

where the residuals $r_{X,i}$ and $r_{Y,i}$ are given by:

$r_{X,i} = x_i - \langle\mathbf{w}_X^*,\mathbf{z}_i \rangle$
$r_{Y,i} = y_i - \langle\mathbf{w}_Y^*,\mathbf{z}_i \rangle$,

with $x_i$, $y_i$ and $\mathbf{z}_i$ denoting random (IID) samples from some joint probability distribution over $X$, $Y$ and $\mathbf{Z}$.
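The two routes to a first-order partial correlation described above (correlating the residuals, or applying the closed-form expression to the pairwise correlations) give the same answer, which can be checked numerically. A sketch on simulated data, with all generating coefficients invented for illustration: $X_1$ drives both $Y$ and $X_2$, so $Y$ and $X_2$ are correlated, yet their partial correlation given $X_1$ is near zero.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated (assumed) data: X1 drives both Y and X2; X2 has no direct effect on Y
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)

def residuals(v, z):
    """Residuals of regressing v on z (with an intercept)."""
    Z = np.column_stack([np.ones(len(z)), z])
    coef, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ coef

# Definition: correlate the residuals after removing X1 from both Y and X2
r_partial = np.corrcoef(residuals(y, x1), residuals(x2, x1))[0, 1]

# Closed form from the pairwise simple correlations
r_yx2 = np.corrcoef(y, x2)[0, 1]
r_yx1 = np.corrcoef(y, x1)[0, 1]
r_x2x1 = np.corrcoef(x2, x1)[0, 1]
r_formula = (r_yx2 - r_yx1 * r_x2x1) / np.sqrt((1 - r_yx1**2) * (1 - r_x2x1**2))

print(r_yx2, r_partial, r_formula)
```

The simple correlation `r_yx2` is large (the confounding through $X_1$), while both partial-correlation computations agree and are near zero.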
#### Computing the partial correlations

The $n$th-order partial correlation ($|\mathbf{Z}| = n$) can be computed from three $(n-1)$th-order partial correlations. The $0$th-order partial correlation $\rho_{YX|\emptyset}$ is defined to be the regular correlation coefficient $\rho_{YX}$.

For any $Z_0 \in \mathbf{Z}$:

$\rho_{XY| \mathbf{Z} } = \frac{\rho_{XY| \mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0| \mathbf{Z}\setminus\{Z_0\}}\rho_{YZ_0 | \mathbf{Z}\setminus\{Z_0\}}} {\sqrt{1-\rho_{XZ_0 |\mathbf{Z}\setminus\{Z_0\}}^2} \sqrt{1-\rho_{YZ_0 | \mathbf{Z}\setminus\{Z_0\}}^2}}.$

Implementing this computation recursively yields an exponential time complexity.

Note that in the case where $\mathbf{Z}$ is a single variable, this reduces to:

$\rho_{XY | Z } = \frac{\rho_{XY} - \rho_{XZ}\rho_{YZ}} {\sqrt{1-\rho_{XZ}^2} \sqrt{1-\rho_{YZ}^2}}.$

### Categorical Variables in Multiple Regression

When using categorical variables with more than two levels in a multiple regression model, we need to make sure the results are correctly interpreted. For a categorical variable with more than two levels, a number of separate dichotomous variables must be created; this is called "dummy coding", and the resulting variables are "dummy variables".

Dichotomous categorical predictor variables (variables with two levels) may be entered directly as predictor or predicted variables in a multiple regression model. Their use in multiple regression is a direct extension of their use in simple linear regression. The interpretation of their regression weights depends upon how the variables are coded. If a dichotomous variable is coded as 0 and 1, its regression weight is added to or subtracted from the predicted value of the response variable $Y$, depending upon whether the weight is positive or negative.

If a dichotomous variable is coded as -1 and 1, then a positive regression weight is subtracted for the group coded as -1 and added for the group coded as 1; if the regression weight is negative, the addition and subtraction are reversed.
Dichotomous variables are also included in hypothesis tests for $$R^2$$ change like any other quantitative variable.

Adding variables to a linear regression model will always increase the (unadjusted) $$R^2$$ value. If the additional predictor variables are correlated with the predictor variables already in the model, then the combined results are difficult to predict: in some cases, the combined result will only slightly improve the prediction, whereas in other cases, a much better prediction may be obtained by combining two correlated variables.

If the additional predictor variables are uncorrelated (their correlation is zero) with the predictor variables already in the model, then the result of adding them to the regression model is easy to predict: the $$R^2$$ change will equal the squared correlation coefficient between the added variable and the predicted variable. In this case it makes no difference in what order the predictor variables are entered into the prediction model. For example, if $$X_1$$ and $$X_2$$ were uncorrelated, $$\rho_{12} = 0$$, with $$\rho^2_{1y} = 0.4$$ and $$\rho^2_{2y} = 0.5$$, then $$R^2$$ for $$X_1$$ and $$X_2$$ together would equal $$0.4 + 0.5 = 0.9$$. The value of the $$R^2$$ change for $$X_2$$, given that $$X_1$$ is already in the model, would be 0.5; the value of the $$R^2$$ change for $$X_2$$ with no variable in the model would also be 0.5. It would make no difference at what stage $$X_2$$ was entered into the model: its $$R^2$$ change would always be 0.5. Similarly, the $$R^2$$ change value for $$X_1$$ would always be 0.4. Because of this relationship, uncorrelated predictor variables are preferred, when possible.

Look at the Modeling and Analysis of Clinical, Genetic and Imaging Data of Alzheimer’s Disease dataset. It is fairly clear that DX_Conversion could be entered directly into a regression model predicting MMSE, because it is dichotomous.
The problem is how to deal with the categorical predictor variables with more than two levels (e.g., GDTOTAL).

Dummy Coding refers to making several dichotomous variables out of one multilevel categorical variable. Because categorical predictor variables cannot be entered directly into a regression model and be meaningfully interpreted, some other method of handling information of this type is needed. In general, a categorical variable with k levels is transformed into k-1 variables, each with two levels. For example, if a categorical variable had 4 levels, then 3 dichotomous (dummy) variables could be constructed that would contain the same information as the single categorical variable. Dichotomous variables have the advantage that they can be entered directly into the regression model.

Depending upon how the dichotomous variables are constructed, additional information can be gleaned from the analysis. In addition, careful construction will result in uncorrelated dichotomous variables, which have the advantage of simplicity of interpretation and are preferred to correlated predictor variables.

The simplest case of dummy coding is when the categorical variable has three levels and is converted into two dichotomous variables. School in the example data has three levels, 1=Math, 2=Biology, and 3=Engineering. This variable could be dummy coded into two variables, one called Math and one called Biology. If School = 1, then Math is coded 1 and Biology 0. If School = 2, then Math is coded 0 and Biology 1. If School = 3, then both Math and Biology are coded 0. The dummy coding is represented below.

| School | Code | Math | Biology |
|--------|------|------|---------|
| Math | 1 | 1 | 0 |
| Biology | 2 | 0 | 1 |
| Engineering | 3 | 0 | 0 |

See the SOCR Regression analysis and compare it against SOCR ANOVA.
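The School dummy coding above can be sketched numerically. With Engineering as the reference level (both dummies zero), the regression intercept estimates the Engineering group mean and each dummy's coefficient estimates that school's mean difference from Engineering. A minimal NumPy sketch on simulated salaries (the group means and sample size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed example: salaries by School (1=Math, 2=Biology, 3=Engineering)
school = rng.integers(1, 4, size=300)
means = {1: 60.0, 2: 65.0, 3: 80.0}          # hypothetical group means
salary = np.array([means[s] for s in school]) + rng.normal(scale=5.0, size=300)

# Dummy coding: k=3 levels -> k-1 = 2 indicator variables
math = (school == 1).astype(float)
bio = (school == 2).astype(float)            # Engineering is the reference level

X = np.column_stack([np.ones(len(school)), math, bio])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)

# Intercept estimates the reference (Engineering) mean; each coefficient
# estimates that school's mean difference from Engineering.
print(beta)
```

With the hypothetical means above, the intercept lands near 80 and the Math and Biology coefficients near -20 and -15, respectively.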
Dummy coding into independent variables: selecting an appropriate set of dummy codes will result in new variables that are uncorrelated, or independent of each other. In the case when the categorical variable has three levels, this can be accomplished by creating a new variable where one level of the categorical variable is assigned the value of -2 and the other levels are assigned the value of 1. The signs are arbitrary and may be reversed; that is, values of 2 and -1 would work equally well. The second dummy-coded variable assigns 0 to the level previously coded -2 and recodes the other two levels as 1 and -1. In all cases the dummy-coded variable sums to zero. Each of the new dummy-coded variables, called a contrast, can be interpreted directly by comparing levels coded with a positive number to levels coded with a negative number; levels coded with a zero are not included in the interpretation. For example, School in the example data has three levels: 1=Math, 2=Biology, and 3=Engineering. This variable could be dummy coded into two variables, one called Engineering (comparing the Engineering School with the other two Schools) and one called Math_vs_Bio (comparing the Math versus Biology Schools). The Engineering contrast would create a variable where all members of the Engineering School are given a value of -2 and all members of the other two Schools are given a value of 1. The Math_vs_Bio contrast would assign a value of 0 to members of the Engineering School, 1 divided by the number of members of the Math Department to members of the Math Department, and -1 divided by the number of members of the Biology Department to members of the Biology Department. The Math_vs_Bio variable could instead be coded as 1 and -1 for Math and Biology respectively, but the recoded variable would no longer be uncorrelated with the first dummy-coded variable (Engineering). 
In most practical applications, it makes little difference whether the variables are correlated or not, so the simpler 1 and -1 coding is generally preferred. The contrasts are summarized in the following table.

| School | Code | Engineering | Math_vs_Bio |
|---|---|---|---|
| Math | 1 | 1 | 1/N1 = 1/12 = 0.0833 |
| Biology | 2 | 1 | -1/N2 = -1/7 = -0.1429 |
| Engineering | 3 | -2 | 0 |

Note that the correlation coefficient between the two contrasts is zero. The correlation between the Engineering contrast and Salary is -0.585, with a squared correlation coefficient of 0.342; this correlation coefficient has a significance level of 0.001. The correlation coefficient between the Math_vs_Bio contrast and Salary is -0.150, with a squared value of 0.023. Generate the corresponding SOCR ANOVA table and show that the significance level is identical to the value obtained when each contrast is entered last into the regression model. In this case the Engineering contrast was significant and the Math_vs_Bio contrast was not. The interpretation of these results is that the Engineering School was paid significantly more than the Math and Biology Schools, but that no significant difference in salary was found between the Math and Biology Schools. If a categorical variable had four levels, three dummy-coded contrasts would be necessary to use the categorical variable in a regression analysis. For example, suppose that a researcher at a pain center did a study with 4 groups of four patients each (N is deliberately kept small). The dependent measure is the subjective experience of pain, and the 4 groups consisted of 4 different treatment conditions:

| Group | Treatment |
|---|---|
| 1 | None |
| 2 | Placebo |
| 3 | Psychotherapy |
| 4 | Acupuncture |

An independent contrast is a contrast that is not a linear combination of any other set of contrasts. Any set of independent contrasts would work equally well if the end result is the simultaneous test of all three contrasts, as in an ANOVA. One of the many possible examples is presented below. 
| Group | Code | C1 | C2 | C3 |
|---|---|---|---|---|
| None | 1 | 0 | 0 | 0 |
| Placebo | 2 | 1 | 0 | 0 |
| Psychotherapy | 3 | 0 | 1 | 0 |
| Acupuncture | 4 | 0 | 0 | 1 |

Application of this dummy coding in a regression model, entering all contrasts in a single block, would result in an ANOVA table similar to the one obtained using Means, ANOVA, or General Linear Model. This solution would not be ideal, however, because considerable additional information is available by setting the contrasts to test specific hypotheses. The levels of the categorical variable generally dictate the structure of the contrasts. In the example study, it makes sense to contrast the two control groups (1 and 2) with the two treatment groups (3 and 4). Any two numbers would work, one assigned to groups 1 and 2 and the other assigned to groups 3 and 4, but it is conventional to have the contrast sum to zero. One contrast that meets this criterion would be (-1, -1, 1, 1). Generally it is easiest to set up further contrasts within subgroups of the first contrast. For example, a second contrast might test whether there are differences between the two control groups; this contrast would appear as (1, -1, 0, 0), a contrast within the control groups. Within the treatment groups, a contrast comparing Group 3 with Group 4 might be appropriate: (0, 0, 1, -1). Combined, the contrasts are given in the following table.

| Group | Code | C1 | C2 | C3 |
|---|---|---|---|---|
| None | 1 | -1 | 1 | 0 |
| Placebo | 2 | -1 | -1 | 0 |
| Psychotherapy | 3 | 1 | 0 | 1 |
| Acupuncture | 4 | 1 | 0 | -1 |

The results are much cleaner when the sample sizes are equal. However, equal sample sizes are not common in real applications, even in well-designed experiments. Unequal sample sizes make the effects no longer independent, which means it makes a difference in hypothesis testing whether an effect is entered into the model first, in the middle, or last. 
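The combined contrast table, and the effect of unequal sample sizes, can be checked numerically. A numpy sketch; the unequal group counts in the second half are assumed values chosen for illustration:

```python
import numpy as np

# Contrasts for the four treatment groups (rows: None, Placebo,
# Psychotherapy, Acupuncture), as in the combined table above.
C = np.array([
    [-1,  1,  0],   # None
    [-1, -1,  0],   # Placebo
    [ 1,  0,  1],   # Psychotherapy
    [ 1,  0, -1],   # Acupuncture
], dtype=float)

# Each contrast sums to zero, and with equal group sizes the
# contrasts are mutually orthogonal: C^T C is diagonal.
gram = C.T @ C
print(C.sum(axis=0), np.allclose(gram, np.diag(np.diag(gram))))

# With unequal group sizes the subject-level contrast columns are
# no longer all uncorrelated.
counts = [4, 4, 3, 5]
subject_codes = np.repeat(C, counts, axis=0)
corr = np.corrcoef(subject_codes, rowvar=False)
print(np.round(corr, 3))
```

Rerunning with `counts = [4, 4, 4, 4]` restores an identity correlation matrix, which is why the equal-n case is so much cleaner.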
The same dummy coding that was applied to equal sample sizes can now be applied to the original data with unequal sample sizes.

### Examples

We now demonstrate the use of the SOCR Multilinear Regression applet to analyze multivariate data.

#### Earthquake Modeling

This is an example where the relations between the variables may not be linear or strongly explanatory. In the simple linear regression case, we were able to compute some (simple) examples by hand. Such calculations are much more involved in the multilinear regression situation, so we demonstrate multilinear regression only using the SOCR Multiple Regression Analysis Applet. Use the SOCR California Earthquake dataset to investigate whether earthquake magnitude (the dependent variable) can be predicted by knowing the longitude, latitude, distance and depth of the quake. Clearly, we do not expect these predictors to have a strong effect on the earthquake magnitude, so we expect the coefficient parameters not to be significantly distinct from zero (null hypothesis). The SOCR Multilinear regression applet reports this model: $Magnitude = \beta_0 + \beta_1\times Close+ \beta_2\times Depth+ \beta_3\times Longitude+ \beta_4\times Latitude + \varepsilon.$ $Magnitude = 2.320 + 0.001\times Close -0.003\times Depth -0.035\times Longitude -0.028\times Latitude + \varepsilon.$

#### Multilinear Regression on Consumer Price Index

Using the SOCR Consumer Price Index Dataset we can explore the relationship between the prices of various products and commodities. For example, regressing Gasoline on three predictor prices (Orange Juice, Fuel and Electricity) shows that all three are significant explanatory prices (at α = 0.05) for the cost of Gasoline between 1981 and 2006. $Gasoline = 0.083 -0.190\times Orange +0.793\times Fuel +0.013\times Electricity$

#### 2011 Best Jobs in the US

Repeat the multilinear regression analysis using the Ranking Dataset of the Best and Worst USA Jobs for 2011.
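Since the applets are interactive, here is an offline sketch of the same kind of multiple-regression fit using numpy's least-squares solver. The data are synthetic and the generating coefficients are invented for illustration; this is not the applet's output:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic predictors standing in for Close, Depth, Longitude, Latitude.
close, depth, lon, lat = rng.standard_normal((4, n))
magnitude = (2.3 + 0.001 * close - 0.003 * depth
             - 0.035 * lon - 0.028 * lat
             + 0.1 * rng.standard_normal(n))

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones(n), close, depth, lon, lat])
beta, *_ = np.linalg.lstsq(X, magnitude, rcond=None)
print(beta)  # [intercept, b_close, b_depth, b_longitude, b_latitude]
```

The estimated coefficients recover the generating values up to noise, which is the same computation the applet performs on the real dataset.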
# Vector Spaces

## Vectors and Scalars

A scalar is a single number value, such as 3, 5, or 10. A vector is an ordered set of scalars. A vector is typically described as a matrix with a row or column size of 1. A vector with a single row is a row vector, and a vector with a single column is a column vector. [Column Vector] ${\displaystyle {\begin{bmatrix}a\\b\\c\\\vdots \end{bmatrix}}}$ [Row Vector] ${\displaystyle {\begin{bmatrix}a&b&c&\cdots \end{bmatrix}}}$ A "common vector" is another name for a column vector, and this book will simply use the word "vector" to refer to a common vector.

## Vector Spaces

A vector space is a set of vectors together with two operations (addition and scalar multiplication, typically) that follow a number of specific rules. We will typically denote vector spaces with a capital-italic letter: V, for instance. A space V is a vector space if all the following requirements are met, where x and y are arbitrary vectors in V and c and d are arbitrary scalar values. There are 10 requirements in all: Given: ${\displaystyle x,y\in V}$ 1. There is an operation called "Addition" (signified with a "+" sign) between two vectors, x + y, such that if both the operands are in V, then the result is also in V. 2. The addition operation is commutative for all elements in V. 3. The addition operation is associative for all elements in V. 4. There is a unique neutral element, φ, in V, such that x + φ = x. This is also called a zero element. 5. For every x in V, there is a negative element -x in V such that -x + x = φ. 6. ${\displaystyle cx\in V}$ 7. ${\displaystyle c(x+y)=cx+cy}$ 8. ${\displaystyle (c+d)x=cx+dx}$ 9. ${\displaystyle c(dx)=(cd)x}$ 10. 1 × x = x Some of these rules may seem obvious, but that's only because they have been generally accepted, and have been taught to people since they were children.

## Scalar Product

A scalar product is a special type of operation that acts on two vectors and returns a scalar result. 
Scalar products are denoted as an ordered pair between angle-brackets: <x,y>. A scalar product between vectors must satisfy the following four rules: 1. ${\displaystyle \langle x,x\rangle \geq 0,\quad \forall x\in V}$ 2. ${\displaystyle \langle x,x\rangle =0}$ , if and only if x = 0 3. ${\displaystyle \langle x,y\rangle =\langle y,x\rangle }$ 4. ${\displaystyle \langle x,cy_{1}+dy_{2}\rangle =c\langle x,y_{1}\rangle +d\langle x,y_{2}\rangle }$ If an operation satisfies all these requirements, then it is a scalar product.

### Examples

One of the most common scalar products is the dot product, which is discussed commonly in Linear Algebra.

## Norm

The norm is an important scalar quantity that indicates the magnitude of a vector. The norm of a vector is typically denoted ${\displaystyle \|x\|}$ . To be a norm, an operation must satisfy the following four conditions: 1. ${\displaystyle \|x\|\geq 0}$ 2. ${\displaystyle \|x\|=0}$  if and only if x = 0. 3. ${\displaystyle \|cx\|=|c|\|x\|}$ 4. ${\displaystyle \|x+y\|\leq \|x\|+\|y\|}$ A vector is called normal if its norm is 1. A normal vector is sometimes also referred to as a unit vector; both terms will be used in this book. To make a vector normal, but keep it pointing in the same direction, we can divide the vector by its norm: ${\displaystyle {\bar {x}}={\frac {x}{\|x\|}}}$

### Examples

One of the most common norms is the Cartesian (Euclidean) norm, defined as the square root of the sum of the squares: ${\displaystyle \|x\|={\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}}$

### Unit Vector

A vector is said to be a unit vector if the norm of that vector is 1.

## Orthogonality

Two vectors x and y are said to be orthogonal if the scalar product of the two is equal to zero: ${\displaystyle \langle x,y\rangle =0}$ Two vectors are said to be orthonormal if their scalar product is zero and both vectors are unit vectors. 
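These definitions are easy to verify numerically. A minimal numpy sketch, using the dot product as the scalar product and the Cartesian norm:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 2.0])

# The dot product is a scalar product: <x, y> = 3*1 + 4*2 = 11.
sp = np.dot(x, y)

# The Cartesian norm is the square root of <x, x>: sqrt(9 + 16) = 5.
norm_x = np.sqrt(np.dot(x, x))

# Dividing by the norm keeps the direction but makes the vector normal.
x_unit = x / norm_x
print(sp, norm_x, np.linalg.norm(x_unit))
```

The same check with `np.dot(x_unit, y) == 0` would test orthogonality; here the two vectors are deliberately not orthogonal.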
## Cauchy-Schwarz Inequality

The Cauchy-Schwarz inequality is an important result that relates the norm of a vector to the scalar product: ${\displaystyle |\langle x,y\rangle |\leq \|x\|\|y\|}$

## Metric (Distance)

The distance between two vectors in the vector space V, called the metric of the two vectors, is denoted by d(x, y). A metric operation must satisfy the following four conditions: 1. ${\displaystyle d(x,y)\geq 0}$ 2. ${\displaystyle d(x,y)=0}$  if and only if x = y 3. ${\displaystyle d(x,y)=d(y,x)}$ 4. ${\displaystyle d(x,y)\leq d(x,z)+d(z,y)}$

### Examples

A common form of metric is the distance between points a and b in the Cartesian plane: ${\displaystyle d(a,b)_{cartesian}={\sqrt {(x_{a}-x_{b})^{2}+(y_{a}-y_{b})^{2}}}}$

## Linear Independence

A set of vectors ${\displaystyle V=\{v_{1},v_{2},\cdots ,v_{n}\}}$  is said to be linearly dependent if any vector v from the set can be constructed from a linear combination of the other vectors in the set. Given the following linear equation: ${\displaystyle a_{1}v_{1}+a_{2}v_{2}+\cdots +a_{n}v_{n}=0}$ The set of vectors V is linearly independent only if all the a coefficients are zero. If we combine the v vectors together as the columns of a single matrix: ${\displaystyle {\hat {V}}=[v_{1}\,v_{2}\,\cdots \,v_{n}]}$ And we combine all the a coefficients into a single column vector: ${\displaystyle {\hat {a}}=[a_{1}\,a_{2}\,\cdots \,a_{n}]^{T}}$ We have the following linear equation: ${\displaystyle {\hat {V}}{\hat {a}}=0}$ For this equation to be satisfied only by ${\displaystyle {\hat {a}}=0}$ , the matrix ${\displaystyle {\hat {V}}}$  must be invertible: ${\displaystyle {\hat {V}}^{-1}{\hat {V}}{\hat {a}}={\hat {V}}^{-1}0}$ ${\displaystyle {\hat {a}}=0}$ Remember that for the matrix to be invertible, the determinant must be non-zero.

### Non-Square Matrix V

If the matrix ${\displaystyle {\hat {V}}}$  is not square, then the determinant cannot be taken, and therefore the matrix is not invertible. 
To solve this problem, we can premultiply by the transpose matrix: ${\displaystyle {\hat {V}}^{T}{\hat {V}}{\hat {a}}=0}$ And then the square matrix ${\displaystyle {\hat {V}}^{T}{\hat {V}}}$  must be invertible: ${\displaystyle ({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}{\hat {V}}{\hat {a}}=0}$ ${\displaystyle {\hat {a}}=0}$

### Rank

The rank of a matrix is the largest number of linearly independent rows or columns in the matrix. To determine the rank, typically the matrix is reduced to row-echelon form; from the reduced form, the number of non-zero rows is the rank of the matrix. If we multiply two matrices A and B, and the result is C: ${\displaystyle AB=C}$ Then the rank of C is at most the minimum of the ranks of A and B: ${\displaystyle \operatorname {Rank} (C)\leq \operatorname {min} [\operatorname {Rank} (A),\operatorname {Rank} (B)]}$

## Span

The span of a set of vectors V is the set of all vectors that can be created by a linear combination of the vectors.

## Basis

A basis is a set of linearly-independent vectors that span the entire vector space.

### Basis Expansion

If we have a vector ${\displaystyle y\in V}$ , and V has basis vectors ${\displaystyle v_{1},v_{2},\cdots ,v_{n}}$ , by definition we can write y in terms of a linear combination of the basis vectors: ${\displaystyle a_{1}v_{1}+a_{2}v_{2}+\cdots +a_{n}v_{n}=y}$ or ${\displaystyle {\hat {V}}{\hat {a}}=y}$ If ${\displaystyle {\hat {V}}}$  is invertible, the answer is apparent, but if ${\displaystyle {\hat {V}}}$  is not invertible, then we can perform the following technique: ${\displaystyle {\hat {V}}^{T}{\hat {V}}{\hat {a}}={\hat {V}}^{T}y}$ ${\displaystyle {\hat {a}}=({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}y}$ And we call the quantity ${\displaystyle ({\hat {V}}^{T}{\hat {V}})^{-1}{\hat {V}}^{T}}$  the left-pseudoinverse of ${\displaystyle {\hat {V}}}$ . 
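The rank test for independence and the left-pseudoinverse basis expansion can be sketched together in numpy (the matrix and coefficients are invented example values):

```python
import numpy as np

# Stack the basis candidates as columns of V-hat (3x2, not square).
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Full column rank, so V a = 0 only for a = 0: the columns are
# linearly independent.
print(np.linalg.matrix_rank(V))  # 2

# Basis expansion: recover the coefficients of y = V a using the
# left pseudoinverse (V^T V)^{-1} V^T.
a_true = np.array([2.0, -1.0])
y = V @ a_true

left_pinv = np.linalg.inv(V.T @ V) @ V.T
a_hat = left_pinv @ y
print(a_hat)
```

Because `V` has full column rank, `V.T @ V` is invertible even though `V` itself is not square, which is exactly the trick used above.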
### Change of Basis

Frequently, it is useful to change the basis vectors to a different set of vectors that span the space but have different properties. If we have a space V, with basis vectors ${\displaystyle {\hat {V}}}$  and a vector in V called x, we can use the new basis vectors ${\displaystyle {\hat {W}}}$  to represent x: ${\displaystyle x=\sum _{i=1}^{n}a_{i}v_{i}=\sum _{j=1}^{n}b_{j}w_{j}}$ or, ${\displaystyle x={\hat {V}}{\hat {a}}={\hat {W}}{\hat {b}}}$ If ${\displaystyle {\hat {W}}}$  is invertible, then the solution to this problem is simple: ${\displaystyle {\hat {b}}={\hat {W}}^{-1}{\hat {V}}{\hat {a}}}$ .

## Gram-Schmidt Orthogonalization

If we have a set of basis vectors that are not orthogonal, we can use a process known as orthogonalization to produce a new set of basis vectors for the same space that are orthogonal: Given: ${\displaystyle {\hat {V}}=\{v_{1},v_{2},\cdots ,v_{n}\}}$ Find the new basis ${\displaystyle {\hat {W}}=\{w_{1},w_{2},\cdots ,w_{n}\}}$ Such that ${\displaystyle \langle w_{i},w_{j}\rangle =0\quad \forall i\neq j}$ We can define the vectors as follows: 1. ${\displaystyle w_{1}=v_{1}}$ 2. ${\displaystyle w_{m}=v_{m}-\sum _{i=1}^{m-1}{\frac {\langle v_{m},w_{i}\rangle }{\langle w_{i},w_{i}\rangle }}w_{i}}$ Notice that the vectors produced by this technique are orthogonal to each other, but they are not necessarily orthonormal. To make the w vectors orthonormal, you must divide each one by its norm: ${\displaystyle {\bar {w}}={\frac {w}{\|w\|}}}$

## Reciprocal Basis

A reciprocal basis is a special type of basis that is related to the original basis. The reciprocal basis ${\displaystyle {\hat {W}}}$  can be defined as: ${\displaystyle {\hat {W}}=[{\hat {V}}^{T}]^{-1}}$

# Linear Transformations

## Linear Transformations

A linear transformation is a matrix M that operates on a vector in space V and results in a vector in a (possibly different) space W. We can define a transformation as such: ${\displaystyle T:V\to W}$ In the above equation, we say that V is the domain space of the transformation and W is the range space of the transformation. 
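The Gram-Schmidt formula above can be sketched directly in numpy (classical Gram-Schmidt; the input vectors are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors.

    w_1 = v_1, and each later w_m is v_m minus its projections onto
    the previously built w_i, as in the formula above.
    """
    ws = []
    for v in vectors:
        w = v.astype(float)
        for u in ws:
            w = w - (np.dot(v, u) / np.dot(u, u)) * u
        ws.append(w)
    return ws

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ws = gram_schmidt(vs)

# Pairwise scalar products vanish; divide each w by its norm for an
# orthonormal set.
print(np.dot(ws[0], ws[1]), np.dot(ws[0], ws[2]), np.dot(ws[1], ws[2]))
```

For numerical work on ill-conditioned inputs, the "modified" variant (projecting the running residual instead of the original vector) is usually preferred; the classical form above matches the formula in the text.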
Also, we can use a "function notation" for the transformation, and write it as: ${\displaystyle M(x)=Mx=y}$ Where x is a vector in V, and y is a vector in W. To be a linear transformation, the principle of superposition must hold for the transformation: ${\displaystyle M(av_{1}+bv_{2})=aM(v_{1})+bM(v_{2})}$ Where a and b are arbitrary scalars.

## Null Space

The nullspace of an equation is the set of all vectors x for which the following relationship holds: ${\displaystyle Mx=0}$ Where M is a linear transformation matrix. Depending on the size and rank of M, the nullspace may contain only the zero vector, or it may have non-zero dimension. Here are a few rules to remember: 1. If the matrix M is invertible, then the nullspace contains only the zero vector. 2. The dimension of the nullspace (N), called the nullity, is the difference between the number of columns (C) of the matrix and the rank (R) of the matrix: ${\displaystyle N=C-R}$ If the matrix is in row-echelon form, the dimension of the nullspace is given by the number of columns without a leading 1. For every column without a leading one, a nullspace basis vector can be obtained by placing a negative one in that column's position and solving for the remaining entries. We denote the nullspace of a matrix A as: ${\displaystyle {\mathcal {N}}\{A\}}$

## Linear Equations

If we have a set of linear equations in terms of variables x, scalar coefficients a, and a scalar result b, we can write the system in matrix notation as such: ${\displaystyle Ax=b}$ Where x is an m × 1 vector, b is an n × 1 vector, and A is an n × m matrix. Therefore, this is a system of n equations with m unknown variables. There are 3 possibilities: 1. If Rank(A) is not equal to Rank([A b]), there is no solution 2. If Rank(A) = Rank([A b]) = m, there is exactly one solution 3. If Rank(A) = Rank([A b]) < m, there are infinitely many solutions.

### Complete Solution

The complete solution of a linear equation is given by the sum of the homogeneous solution and the particular solution. 
The homogeneous solution is an element of the nullspace of the transformation, and the particular solution is a value of x that satisfies the equation: ${\displaystyle A(x)=b}$ ${\displaystyle A(x_{h}+x_{p})=b}$ Where ${\displaystyle x_{h}}$  is the homogeneous solution, an element of the nullspace of A satisfying ${\displaystyle A(x_{h})=0}$ , and ${\displaystyle x_{p}}$  is the particular solution satisfying ${\displaystyle A(x_{p})=b}$

### Minimum Norm Solution

If Rank(A) = Rank([A b]) < m, then there are infinitely many solutions to the linear equation. In this situation, the solution called the minimum norm solution can be found. This solution represents the "best" solution to the problem. To find the minimum norm solution, we must minimize the norm of x subject to the constraint: ${\displaystyle Ax-b=0}$ There are a number of methods to minimize a value according to a given constraint, and we will talk about them later.

## Least-Squares Curve Fit

If Rank(A) does not equal Rank([A b]), then the linear equation has no solution. However, we can find the solution which comes closest. This "best fit" solution is known as the least-squares curve fit. We define an error quantity E, such that: ${\displaystyle E=Ax-b\neq 0}$ Our job then is to find the minimum value for the norm of E: ${\displaystyle \|E\|^{2}=\|Ax-b\|^{2}}$ We do this by differentiating with respect to x, and setting the result to zero: ${\displaystyle {\frac {\partial \|E\|^{2}}{\partial x}}=2A^{T}(Ax-b)=0}$ Solving, we get our result: ${\displaystyle x=(A^{T}A)^{-1}A^{T}b}$

# Minimization

## Kuhn-Tucker Theorem

The Kuhn-Tucker theorem is a method for minimizing a function f(x) under the constraint g(x) = 0. We can define the Lagrangian as follows: ${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$ Where Λ is the vector of Lagrange multipliers, and < , > denotes the scalar product operation discussed earlier. 
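Stepping back to the least-squares result derived above, a quick numerical check of x = (AᵀA)⁻¹Aᵀb on a small overdetermined system (a numpy sketch with invented data):

```python
import numpy as np

# Three equations, two unknowns, no exact solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 1.0])

# Normal-equation formula from the text.
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# numpy's built-in least-squares solver should agree.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_normal, x_lstsq)
```

In practice `lstsq` (which uses an orthogonal factorization internally) is numerically safer than explicitly inverting AᵀA, but both compute the same minimizer here.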
For the minimum norm problem we take ${\displaystyle f(x)={\tfrac {1}{2}}\langle x,x\rangle }$  and ${\displaystyle g(x)=Ax-b}$ . If we differentiate the Lagrangian with respect to x first, and then with respect to Λ, and set both derivatives to zero, we get the following two equations: ${\displaystyle {\frac {\partial L(x)}{\partial x}}=x+A^{T}\Lambda =0}$ ${\displaystyle {\frac {\partial L(x)}{\partial \Lambda }}=Ax-b=0}$ Solving these simultaneously, we have the final result: ${\displaystyle x=A^{T}[AA^{T}]^{-1}b}$

# Projections

## Projection

The projection of a vector ${\displaystyle v\in V}$  onto a subspace ${\displaystyle W\subseteq V}$  is the vector in W closest to v; its distance from v is the minimum distance between v and the space W. In other words, we need to minimize the distance between the vector v and an arbitrary vector ${\displaystyle w\in W}$ : ${\displaystyle \|w-v\|^{2}=\|{\hat {W}}{\hat {a}}-v\|^{2}}$ ${\displaystyle {\frac {\partial \|{\hat {W}}{\hat {a}}-v\|^{2}}{\partial {\hat {a}}}}={\frac {\partial \langle {\hat {W}}{\hat {a}}-v,{\hat {W}}{\hat {a}}-v\rangle }{\partial {\hat {a}}}}=0}$ [Projection onto space W] ${\displaystyle {\hat {a}}=({\hat {W}}^{T}{\hat {W}})^{-1}{\hat {W}}^{T}v}$ For every vector ${\displaystyle v\in V}$  there exists a vector ${\displaystyle w\in W}$  called the projection of v onto W such that <v-w, p> = 0, where p is an arbitrary element of W.

### Orthogonal Complement

${\displaystyle W^{\perp }=\{x\in V:\langle x,y\rangle =0,\forall y\in W\}}$

## Distance between v and W

The distance between ${\displaystyle v\in V}$  and the space W is given as the minimum distance between v and an arbitrary ${\displaystyle w\in W}$ : ${\displaystyle {\frac {\partial d(v,w)}{\partial {\hat {a}}}}={\frac {\partial \|v-{\hat {W}}{\hat {a}}\|}{\partial {\hat {a}}}}=0}$

## Intersections

Given two vector spaces V and W, what is the overlap (intersection) between the two? 
We define an arbitrary vector z that is an element of both V and W: ${\displaystyle z={\hat {V}}{\hat {a}}={\hat {W}}{\hat {b}}}$ ${\displaystyle {\hat {V}}{\hat {a}}-{\hat {W}}{\hat {b}}=0}$ ${\displaystyle {\begin{bmatrix}{\hat {a}}\\{\hat {b}}\end{bmatrix}}\in {\mathcal {N}}([{\hat {V}}\ \ -{\hat {W}}])}$ Where ${\displaystyle {\mathcal {N}}}$  is the nullspace.

# Matrices

## Derivatives

Consider the following set of linear equations: ${\displaystyle a=bx_{1}+cx_{2}}$ ${\displaystyle d=ex_{1}+fx_{2}}$ We can define the matrix A to represent the coefficients, the vector B as the results, and the vector x as the variables: ${\displaystyle A={\begin{bmatrix}b&c\\e&f\end{bmatrix}}}$ ${\displaystyle B={\begin{bmatrix}a\\d\end{bmatrix}}}$ ${\displaystyle x={\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}}$ And rewriting the equation in terms of the matrices, we get: ${\displaystyle B=Ax}$ Now, let's say we want the derivative of the right-hand side with respect to the vector x. Since the coefficients in A are constants, the result is simply the coefficient matrix: ${\displaystyle {\frac {d}{dx}}Ax=A}$

## Pseudo-Inverses

There are special matrices known as pseudo-inverses that satisfy some of the properties of an inverse, but not others. 
To recap, if we have two square matrices A and B, that are both n × n, then if the following equation is true, we say that A is the inverse of B, and B is the inverse of A: ${\displaystyle AB=BA=I}$

### Right Pseudo-Inverse

Consider the following matrix: ${\displaystyle R=A^{T}[AA^{T}]^{-1}}$ We call this matrix R the right pseudo-inverse of A, because: ${\displaystyle AR=I}$ but ${\displaystyle RA\neq I}$ We will denote the right pseudo-inverse of A as ${\displaystyle A^{\dagger }}$

### Left Pseudo-Inverse

Consider the following matrix: ${\displaystyle L=[A^{T}A]^{-1}A^{T}}$ We call L the left pseudo-inverse of A because ${\displaystyle LA=I}$ but ${\displaystyle AL\neq I}$ We will denote the left pseudo-inverse of A as ${\displaystyle A^{\ddagger }}$

# Special Matrices

Matrices that follow certain predefined formats are useful in a number of computations. We will discuss some of the common matrix formats here. Later chapters will show how these formats are used in calculations and analysis.

## Diagonal Matrix

A diagonal matrix is a matrix such that: ${\displaystyle a_{ij}=0,\quad i\neq j}$ In other words, all the elements off the main diagonal are zero, and the diagonal elements may be (but need not be) non-zero. 
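The one-sided behavior of the pseudo-inverses is easy to confirm numerically. A numpy sketch, using a wide full-row-rank matrix and its transpose:

```python
import numpy as np

# A wide, full-row-rank matrix has a right pseudo-inverse
# R = A^T (A A^T)^{-1}.
A_wide = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]])
R = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)

# Its transpose is tall with full column rank and has a left
# pseudo-inverse L = (A^T A)^{-1} A^T.
A_tall = A_wide.T
L = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T

# A R = I and L A = I hold, but the opposite-order products do not.
print(np.allclose(A_wide @ R, np.eye(2)),
      np.allclose(L @ A_tall, np.eye(2)),
      np.allclose(R @ A_wide, np.eye(3)))
```

The product `R @ A_wide` is instead a rank-2 projection matrix, which is exactly why the pseudo-inverse only satisfies some of the inverse properties.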
## Companion Form Matrix

If we have the following characteristic polynomial for a matrix: ${\displaystyle |A-\lambda I|=\lambda ^{n}+a_{n-1}\lambda ^{n-1}+\cdots +a_{1}\lambda ^{1}+a_{0}}$ We can create a companion form matrix in one of two ways: ${\displaystyle {\begin{bmatrix}0&0&0&\cdots &0&-a_{0}\\1&0&0&\cdots &0&-a_{1}\\0&1&0&\cdots &0&-a_{2}\\0&0&1&\cdots &0&-a_{3}\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&-a_{n-1}\end{bmatrix}}}$ Or, we can also write it as: ${\displaystyle {\begin{bmatrix}-a_{n-1}&-a_{n-2}&-a_{n-3}&\cdots &-a_{1}&-a_{0}\\1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\0&0&1&\cdots &0&0\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&0&\cdots &1&0\end{bmatrix}}}$

## Jordan Canonical Form

To discuss the Jordan canonical form, we first need to introduce the idea of the Jordan block:

### Jordan Blocks

A Jordan block is a square matrix such that all the diagonal elements are equal, and all the super-diagonal elements (the elements directly above the diagonal elements) are 1. To illustrate this, here is an example of an n-dimensional Jordan block: ${\displaystyle {\begin{bmatrix}a&1&0&\cdots &0\\0&a&1&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &a&1\\0&0&\cdots &0&a\end{bmatrix}}}$

### Canonical Form

A square matrix is in Jordan canonical form if it is a diagonal matrix, or if it has one of the following two block-diagonal forms: ${\displaystyle {\begin{bmatrix}D&0&\cdots &0\\0&J_{1}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &J_{n}\end{bmatrix}}}$ Or: ${\displaystyle {\begin{bmatrix}J_{1}&0&\cdots &0\\0&J_{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &J_{n}\end{bmatrix}}}$ Where the D element is a diagonal block matrix, and the J blocks are in Jordan block form.

# Quadratic Forms

If we have an n × 1 vector x, and an n × n symmetric matrix M, we can write: ${\displaystyle x^{T}Mx=a}$ Where a is a scalar value. 
Equations of this form are called quadratic forms.

## Matrix Definiteness

Based on the quadratic forms of a matrix, we can create a certain number of categories for special types of matrices: 1. if ${\displaystyle x^{T}Mx>0}$  for all x ≠ 0, then the matrix is positive definite. 2. if ${\displaystyle x^{T}Mx\geq 0}$  for all x, then the matrix is positive semi-definite. 3. if ${\displaystyle x^{T}Mx<0}$  for all x ≠ 0, then the matrix is negative definite. 4. if ${\displaystyle x^{T}Mx\leq 0}$  for all x, then the matrix is negative semi-definite. These classifications are used commonly in control engineering.

# Eigenvalues and Eigenvectors

## The Eigen Problem

This page is going to talk about the concept of eigenvectors and eigenvalues, which are important tools in linear algebra, and which play an important role in state-space control systems. The "eigen problem", stated simply, is that given a square matrix A which is n × n, there exists a set of n scalar values λ and n corresponding non-trivial vectors v such that: ${\displaystyle Av=\lambda v}$ We call λ the eigenvalues of A, and we call v the corresponding eigenvectors of A. We can rearrange this equation as: ${\displaystyle (A-\lambda I)v=0}$ For this equation to have a non-trivial solution v, the matrix (A - λI) must be singular. That is: ${\displaystyle |A-\lambda I|=0}$

## Characteristic Equation

The characteristic equation of a square matrix A is given by: [Characteristic Equation] ${\displaystyle |A-\lambda I|=0}$ Where I is the identity matrix, and λ is the set of eigenvalues of matrix A. From this equation we can solve for the eigenvalues of A, and then, using the equations discussed above, we can calculate the corresponding eigenvectors. In general, we can expand the characteristic equation as: [Characteristic Polynomial] ${\displaystyle |A-\lambda I|=(-1)^{n}(\lambda ^{n}+c_{n-1}\lambda ^{n-1}+\cdots +c_{1}\lambda ^{1}+c_{0})}$ This equation satisfies the following properties: 1. 
${\displaystyle |A|=(-1)^{n}c_{0}}$ 2. A is nonsingular if and only if c0 is non-zero.

### Example: 2 × 2 Matrix

Let's say that X is a square matrix of order 2, as such: ${\displaystyle X={\begin{bmatrix}a&b\\c&d\end{bmatrix}}}$ Then we can use this value in our characteristic equation: ${\displaystyle {\begin{vmatrix}a-\lambda &b\\c&d-\lambda \end{vmatrix}}=0}$ ${\displaystyle (a-\lambda )(d-\lambda )-(b)(c)=0}$ The roots of the above equation (the values of λ that satisfy the equality) are the eigenvalues of X.

## Eigenvalues

The solutions, λ, of the characteristic equation for matrix X are known as the eigenvalues of the matrix X. Eigenvalues satisfy the following properties: 1. If λ is an eigenvalue of A, then ${\displaystyle \lambda ^{n}}$  is an eigenvalue of ${\displaystyle A^{n}}$ . 2. If λ is a complex eigenvalue of A, then λ* (the complex conjugate) is also an eigenvalue of A. 3. If any of the eigenvalues of A are zero, then A is singular. If A is non-singular, all the eigenvalues of A are nonzero.

## Eigenvectors

The eigenvalue equation can be written as such: ${\displaystyle Xv=\lambda v}$ Where X is the matrix under consideration, and λ are the eigenvalues of matrix X. For every eigenvalue, there is a solution vector v to the above equation, known as an eigenvector. The above equation can also be rewritten as: ${\displaystyle (X-\lambda I)v=0}$ Where the resulting values of v for each eigenvalue λ are the eigenvectors of X. An eigenvector is determined only up to a scalar multiple. From this equation, we can see that the eigenvectors corresponding to λ span the nullspace: ${\displaystyle v\in {\mathcal {N}}\{A-\lambda I\}}$ And therefore, we can find the eigenvectors through row-reduction of that matrix. Eigenvectors satisfy the following properties: 1. If v is a complex eigenvector of A, then v* (the complex conjugate) is also an eigenvector of A. 2. Eigenvectors corresponding to distinct eigenvalues of A are linearly independent. 3. 
If A is n × n, and if there are n distinct eigenvectors, then the eigenvectors of A form a complete basis set for ${\displaystyle {\mathcal {R}}^{n}}$

## Generalized Eigenvectors

Let's say that matrix A has the following characteristic polynomial: ${\displaystyle |A-\lambda I|=(-1)^{n}(\lambda -\lambda _{1})^{d_{1}}(\lambda -\lambda _{2})^{d_{2}}\cdots (\lambda -\lambda _{s})^{d_{s}}}$ Where d1, d2, ... , ds are known as the algebraic multiplicities of the eigenvalues λi. Also note that d1 + d2 + ... + ds = n, and s < n when the eigenvalues of A are repeated. In that case, the matrix may not have n distinct eigenvectors. However, we can create vectors known as generalized eigenvectors to make up the missing eigenvectors by satisfying the following equations: ${\displaystyle (A-\lambda I)^{k}v_{k}=0}$ ${\displaystyle (A-\lambda I)^{k-1}v_{k}\neq 0}$

## Right and Left Eigenvectors

The equation for determining eigenvectors is: ${\displaystyle (A-\lambda I)v=0}$ And because the eigenvector v is on the right, these are more appropriately called "right eigenvectors". However, if we rewrite the equation with a row vector u on the left: ${\displaystyle u^{T}(A-\lambda I)=0}$ The vectors u are called the "left eigenvectors" of matrix A.

## Similarity

Matrices A and B are said to be similar to one another if there exists an invertible matrix T such that: ${\displaystyle T^{-1}AT=B}$ Similar matrices have the same eigenvalues. If A has eigenvectors v1, v2 ..., then B has eigenvectors u given by: ${\displaystyle u_{i}=T^{-1}v_{i}}$

## Matrix Diagonalization

Some matrices are similar to diagonal matrices using a transition matrix, T. We will say that matrix A is diagonalizable if the following equation can be satisfied: ${\displaystyle T^{-1}AT=D}$ Where D is a diagonal matrix. An n × n square matrix is diagonalizable if and only if it has n linearly independent eigenvectors. 
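These eigenvalue and eigenvector properties can be verified numerically. A numpy sketch on a small invented matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of V are the eigenvectors returned alongside the eigenvalues.
lams, V = np.linalg.eig(A)

# Each pair satisfies A v = lambda v.
for lam, v in zip(lams, V.T):
    print(lam, np.allclose(A @ v, lam * v))

# Two distinct eigenvalues, so the eigenvectors are linearly
# independent and the eigenvector matrix is invertible.
print(abs(np.linalg.det(V)) > 1e-12)
```

For this matrix the characteristic equation (2-λ)² - 1 = 0 gives λ = 1 and λ = 3, which is what `eig` returns (in some order).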
## Transition Matrix

If an n × n square matrix A has n distinct eigenvalues λ, and therefore n linearly independent eigenvectors v, we can create a transition matrix T as:

${\displaystyle T=[v_{1}v_{2}...v_{n}]}$

And transforming matrix A gives us:

${\displaystyle T^{-1}AT={\begin{bmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\lambda _{n}\end{bmatrix}}}$

Therefore, if the matrix has n distinct eigenvalues, the matrix is diagonalizable, and the diagonal entries of the diagonal matrix are the corresponding eigenvalues of the matrix.

## Complex Eigenvalues

Consider the situation where a matrix A has one or more complex conjugate eigenvalue pairs. The eigenvectors of A will also be complex. The resulting diagonal matrix D will have the complex eigenvalues as the diagonal entries. In engineering situations, it is often not a good idea to deal with complex matrices, so other matrix transformations can be used to create matrices that are "nearly diagonal".

## Jordan Canonical Form

If the matrix A does not have a complete set of eigenvectors, that is, if it has d eigenvectors and n - d generalized eigenvectors, then the matrix A is not diagonalizable. However, the next best thing is achieved, and matrix A can be transformed into a Jordan canonical matrix. Each chain of generalized eigenvectors formed from a single eigenvector will create a Jordan block. All the distinct eigenvectors that do not spawn any generalized eigenvectors will form a diagonal block in the Jordan matrix.

## Spectral Decomposition

If λi are the n distinct eigenvalues of matrix A, vi are the corresponding n eigenvectors, and wi are the n left eigenvectors (normalized so that wiTvi = 1), then the matrix A can be represented as a sum:

${\displaystyle A=\sum _{i=1}^{n}\lambda _{i}v_{i}w_{i}^{T}}$

This is known as the spectral decomposition of A.
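The spectral decomposition can be checked numerically. The sketch below assumes a symmetric matrix, where left and right eigenvectors coincide, and uses the hypothetical example A = [[2, 1], [1, 2]] with eigenvalues 3 and 1 and normalized eigenvectors (1, 1)/√2 and (1, −1)/√2; summing λᵢ vᵢ vᵢᵀ rebuilds A:

```python
import math

# Spectral decomposition A = sum_i lambda_i * v_i * w_i^T, sketched for a
# symmetric matrix (so w_i = v_i). Hypothetical example: A = [[2, 1], [1, 2]].
lam = [3.0, 1.0]
s = 1 / math.sqrt(2)
vecs = [(s, s), (s, -s)]          # normalized eigenvectors

A = [[0.0, 0.0], [0.0, 0.0]]
for l, v in zip(lam, vecs):
    for i in range(2):
        for j in range(2):
            A[i][j] += l * v[i] * v[j]   # accumulate lambda_i * v_i v_i^T

expected = [[2.0, 1.0], [1.0, 2.0]]
assert all(abs(A[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```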
Consider a scenario where the matrix representation of a system A differs from the actual implementation of the system by a factor of ΔA. In other words, our system uses the matrix:

${\displaystyle A+\Delta A}$

From the study of Control Systems, we know that the values of the eigenvalues can affect the stability of the system. For that reason, we would like to know how a small error in A will affect the eigenvalues.

First off, we assume that ΔA is a small shift. The definition of "small" in this sense is arbitrary, and will remain open. Keep in mind that the techniques discussed here are more accurate the smaller ΔA is. If ΔA is the error in the matrix A, then Δλ is the error in the eigenvalues and Δv is the error in the eigenvectors. The eigenvalue equation becomes:

${\displaystyle (A+\Delta A)(v+\Delta v)=(\lambda +\Delta \lambda )(v+\Delta v)}$

We have an equation now with two unknowns: Δλ and Δv. In other words, we don't know how a small change in A will affect the eigenvalues and eigenvectors. If we multiply out both sides, we get:

${\displaystyle Av+\Delta Av+A\Delta v+O(\Delta ^{2})=\lambda v+\lambda \Delta v+\Delta \lambda v+O(\Delta ^{2})}$

This situation seems hopeless, until we multiply both sides by the corresponding left eigenvector wT from the left:

${\displaystyle w^{T}Av+w^{T}\Delta Av+w^{T}A\Delta v=\lambda w^{T}v+\lambda w^{T}\Delta v+\Delta \lambda w^{T}v+O(\Delta ^{2})}$

Terms where two Δs (which are known to be small, by definition) are multiplied together, we can say are negligible, and ignore them.
Also, we know from the left-eigenvector equation that:

${\displaystyle w^{T}A=\lambda w^{T}}$

so that ${\displaystyle w^{T}Av=\lambda w^{T}v}$ and ${\displaystyle w^{T}A\Delta v=\lambda w^{T}\Delta v}$, and these terms cancel from both sides. Another fact worth noting is that left and right eigenvectors corresponding to distinct eigenvalues are orthogonal, ${\displaystyle w_{i}^{T}v_{j}=0}$ for i ≠ j, so for a matching pair the product ${\displaystyle w^{T}v}$ is nonzero. Substituting these results into our long equation above, we get the following simplification:

${\displaystyle w^{T}\Delta Av=\Delta \lambda w^{T}v}$

And solving for the change in the eigenvalue gives us:

${\displaystyle \Delta \lambda ={\frac {w^{T}\Delta Av}{w^{T}v}}}$

This approximate result is only good for small values of ΔA, and the result is less precise as the error increases.

# Functions of Matrices

If we have functions, and we use a matrix as the input to those functions, the output values are not always intuitive. For instance, if we have a function f(x), and as the input argument we use matrix A, the output matrix is not necessarily the function f applied to the individual elements of A.

## Diagonal Matrix

In the special case of diagonal matrices, the result of f(A) is the function applied to each element of the diagonal matrix:

${\displaystyle A={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}}}$

Then the function f(A) is given by:

${\displaystyle f(A)={\begin{bmatrix}f(a_{11})&0&\cdots &0\\0&f(a_{22})&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &f(a_{nn})\end{bmatrix}}}$

## Jordan Canonical Form

Matrices in Jordan canonical form also have an easy way to compute the functions of the matrix. However, this method is not nearly as easy as the diagonal matrices described above.
If we have a matrix in Jordan block form, A, the function f(A) is given by:

${\displaystyle f(A)={\begin{bmatrix}{\frac {f(a)}{0!}}&{\frac {f'(a)}{1!}}&\cdots &{\frac {f^{(r-1)}(a)}{(r-1)!}}\\0&{\frac {f(a)}{0!}}&\cdots &{\frac {f^{(r-2)}(a)}{(r-2)!}}\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &{\frac {f(a)}{0!}}\end{bmatrix}}}$

The matrix indices have been removed, because in a Jordan block all the diagonal elements must be equal. If the matrix is in Jordan canonical form, the value of the function is given as the function applied to the individual diagonal blocks.

## Cayley-Hamilton Theorem

If the characteristic equation of matrix A is given by:

${\displaystyle \Delta (\lambda )=|A-\lambda I|=(-1)^{n}(\lambda ^{n}+a_{n-1}\lambda ^{n-1}+\cdots +a_{0})=0}$

Then the Cayley-Hamilton theorem states that the matrix A itself also satisfies that equation:

${\displaystyle \Delta (A)=(-1)^{n}(A^{n}+a_{n-1}A^{n-1}+\cdots +a_{0}I)=0}$

Another theorem worth mentioning here (and by "worth mentioning", we really mean "fundamental for some later topics") is stated as: If λ are the eigenvalues of matrix A, and if there is a function f that is defined as a power series in λ:

${\displaystyle f(\lambda )=\sum _{i=0}^{\infty }b_{i}\lambda ^{i}}$

If this series has a radius of convergence S, and if all the eigenvalues of A have magnitudes less than S, then the matrix A may be substituted into the series:

${\displaystyle f(A)=\sum _{i=0}^{\infty }b_{i}A^{i}}$

## Matrix Exponentials

If we have a matrix A, we can define the matrix exponential:

${\displaystyle e^{A}}$

It is important to note that this is not necessarily (not usually) equal to e raised to each individual element of A. Using the Taylor series expansion of the exponential, we can show that:

${\displaystyle e^{A}=I+A+{\frac {1}{2}}A^{2}+{\frac {1}{6}}A^{3}+...=\sum _{k=0}^{\infty }{1 \over k!}A^{k}}$
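The series definition can be checked directly with a truncated Taylor sum, sketched here for 2 × 2 matrices in plain Python. For the hypothetical nilpotent matrix A = [[0, 1], [0, 0]] we have A² = 0, so the series terminates and e^A = [[1, 1], [0, 1]] exactly:

```python
# Truncated Taylor series e^A ~ sum_{k=0}^{K} A^k / k! for 2x2 matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_taylor(A, terms=20):
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]    # running value of A^k
    fact = 1.0                          # running value of k!
    for k in range(1, terms):
        power = matmul(power, A)
        fact *= k
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

E = expm_taylor([[0.0, 1.0], [0.0, 0.0]])
assert abs(E[0][0] - 1.0) < 1e-12 and abs(E[0][1] - 1.0) < 1e-12
assert abs(E[1][0]) < 1e-12 and abs(E[1][1] - 1.0) < 1e-12
```

For general matrices the truncation error depends on the size of the entries, which is exactly the practical difficulty discussed next.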
In other words, the matrix exponential can be reduced to a sum of powers of the matrix. This follows from both the Taylor series expansion of the exponential function, and the Cayley-Hamilton theorem discussed previously. However, this infinite sum is expensive to compute, and because the sequence is infinite, there is no good cut-off point where we can stop computing terms and call the answer a "good approximation". To alleviate this point, we can turn to the Cayley-Hamilton theorem. Solving the theorem for An, we get:

${\displaystyle A^{n}=-c_{n-1}A^{n-1}-c_{n-2}A^{n-2}-\cdots -c_{1}A-c_{0}I}$

Multiplying both sides of the equation by A, we get:

${\displaystyle A^{n+1}=-c_{n-1}A^{n}-c_{n-2}A^{n-1}-\cdots -c_{1}A^{2}-c_{0}A}$

We can substitute the first equation into the second equation, and the result will be An+1 in terms of the powers I, A, ..., An-1. In fact, we can repeat that process so that Am, for any arbitrarily high power m, can be expressed as a linear combination of I, A, ..., An-1. Applying this result to our exponential problem:

${\displaystyle e^{A}=\alpha _{0}I+\alpha _{1}A+\cdots +\alpha _{n-1}A^{n-1}}$

Where we can solve for the α terms, and have a finite polynomial that expresses the exponential.

## Inverse

The inverse of a matrix exponential is given by:

${\displaystyle (e^{A})^{-1}=e^{-A}}$

## Derivative

The derivative of a matrix exponential is:

${\displaystyle {\frac {d}{dx}}e^{Ax}=Ae^{Ax}=e^{Ax}A}$

Notice that the matrix exponential commutes with the matrix A. This is not necessarily the case for other functions.
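The Cayley-Hamilton power reduction can be sketched concretely in the 2 × 2 case, where A² = tr(A)·A − det(A)·I, so every power Am is a combination c1·A + c0·I whose scalar coefficients follow a simple recurrence. The matrix [[1, 2], [3, 4]] below is a hypothetical example:

```python
# Cayley-Hamilton reduction for a 2x2 matrix: A^2 = tr(A)*A - det(A)*I,
# so A^m = c1*A + c0*I, with (c1, c0) following a scalar recurrence:
# A^{m+1} = A * (c1*A + c0*I) = (tr*c1 + c0)*A - (det*c1)*I.
a, b, c, d = 1.0, 2.0, 3.0, 4.0
tr, det = a + d, a * d - b * c

c1, c0 = 1.0, 0.0                 # A^1 = 1*A + 0*I
for _ in range(4):                # step up to A^5
    c1, c0 = tr * c1 + c0, -det * c1

# Direct computation of A^5 for comparison.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[a, b], [c, d]]
A5 = A
for _ in range(4):
    A5 = matmul(A5, A)

for i in range(2):
    for j in range(2):
        expected = c1 * A[i][j] + c0 * (1.0 if i == j else 0.0)
        assert abs(A5[i][j] - expected) < 1e-6
```

The same idea, applied term-by-term to the exponential series, is what collapses e^A into the finite α-polynomial above.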
## Sum of Matrices

If we have a sum of matrices in the exponent, we cannot in general separate them; unless A and B commute,

${\displaystyle e^{(A+B)x}\neq e^{Ax}e^{Bx}}$

## Differential Equations

If we have a first-order differential equation of the following form:

${\displaystyle x'(t)=Ax(t)+f(t)}$

With initial conditions

${\displaystyle x(t_{0})=c}$

Then the solution to that equation is given in terms of the matrix exponential:

${\displaystyle x(t)=e^{A(t-t_{0})}c+\int _{t_{0}}^{t}e^{A(t-\tau )}f(\tau )d\tau }$

This equation shows up frequently in control engineering.

## Laplace Transform

As a matter of some interest, we will show the Laplace transform of a matrix exponential function:

${\displaystyle {\mathcal {L}}[e^{At}]=(sI-A)^{-1}}$

We will not use this result any further in this book, although other books on engineering might make use of it.

# Function Spaces

## Function Space

A function space is a linear space where all the elements of the space are functions. A function space that has a norm operation is known as a normed function space. The spaces we consider will all be normed.

## Continuity

f(x) is continuous at x0 if, for every ε > 0 there exists a δ(ε) > 0 such that |f(x) - f(x0)| < ε when |x - x0| < δ(ε).

## Common Function Spaces

Here is a listing of some common function spaces. This is not an exhaustive list.

### C Space

The C function space is the set of all functions that are continuous. The metric for C space is defined as:

${\displaystyle \rho (f,g)_{C}=\max |f(x)-g(x)|}$

Consider the metric of sin(x) and cos(x), where the maximum difference occurs at x = 3π/4:

${\displaystyle \rho (\sin(x),\cos(x))_{C}={\sqrt {2}}}$

### Cp Space

The Cp space is the set of all continuous functions for which the first p derivatives are also continuous. If ${\displaystyle p=\infty }$ the function is called "infinitely differentiable". The set ${\displaystyle C^{\infty }}$ is the set of all such functions. Some examples of functions that are infinitely differentiable are exponentials, sinusoids, and polynomials.
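The C-space metric example above (sin versus cos) can be verified numerically by taking the maximum absolute difference over a fine grid on [0, 2π]; the true maximum is √2, attained at x = 3π/4:

```python
import math

# C-space metric rho(f, g) = max |f(x) - g(x)|, approximated on a grid.
xs = [2 * math.pi * k / 100000 for k in range(100001)]
rho = max(abs(math.sin(x) - math.cos(x)) for x in xs)

# |sin(x) - cos(x)| = sqrt(2)*|sin(x - pi/4)|, so the maximum is sqrt(2).
assert abs(rho - math.sqrt(2)) < 1e-6
```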
### L Space

The L space is the set of all functions that are absolutely integrable over a given interval [a, b]. f(x) is in L(a, b) if:

${\displaystyle \int _{a}^{b}|f(x)|dx<\infty }$

### Lp Space

The Lp space is the set of all functions f for which |f| raised to the power p is integrable over a given interval [a, b]:

${\displaystyle \int _{a}^{b}|f(x)|^{p}dx<\infty }$

Most important for engineering is the L2 space, the set of functions that are "square integrable". The L2 space is very important to engineers, because functions in this space do not need to be continuous. Many discontinuous engineering functions, such as the delta (impulse) function, the unit step function, and other discontinuous functions are part of this space.

## L2 Functions

A large number of functions qualify as L2 functions, including uncommon, discontinuous, piece-wise, and other functions. A function which, over a finite range, has a finite number of discontinuities is an L2 function. For example, a unit step and an impulse function are both L2 functions. Also, other functions useful in signal analysis, such as square waves, triangle waves, wavelets, and other functions are L2 functions. In practice, most physical systems have a finite amount of noise associated with them. Noisy signals and random signals, if finite, are also L2 functions: this makes analysis of those functions using the techniques listed below easy.

## Null Function

The null functions of L2 are the set of all functions φ in L2 that satisfy the equation:

${\displaystyle \int _{a}^{b}|\phi (x)|^{2}dx=0}$

for all a and b.

## Norm

The L2 norm is defined as follows:

[L2 Norm]

${\displaystyle \|f(x)\|_{L_{2}}={\sqrt {\int _{a}^{b}|f(x)|^{2}dx}}}$

If the norm of the function is 1, the function is normal (normalized).
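The L2 norm can be computed numerically with a simple quadrature rule. The sketch below uses a midpoint rule and the hypothetical test function f(x) = x on [0, 1], whose exact norm is √(1/3):

```python
import math

# Numerical L2 norm over [a, b] via the midpoint rule.
def l2_norm(f, a, b, n=100000):
    h = (b - a) / n
    total = sum(f(a + (k + 0.5) * h) ** 2 for k in range(n))
    return math.sqrt(total * h)

# ||x||_{L2} on [0, 1] is sqrt(integral of x^2) = sqrt(1/3).
norm = l2_norm(lambda x: x, 0.0, 1.0)
assert abs(norm - math.sqrt(1.0 / 3.0)) < 1e-6
```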
We can show that the derivative of the norm squared of a vector x is:

${\displaystyle {\frac {\partial \|x\|^{2}}{\partial x}}=2x}$

## Scalar Product

The scalar product in L2 space is defined as follows:

[L2 Scalar Product]

${\displaystyle \langle f(x),g(x)\rangle _{L_{2}}=\int _{a}^{b}f(x)g(x)dx}$

If the scalar product of two functions is zero, the functions are orthogonal. We can show that given coefficient matrices A and B, and a variable vector x, the derivative of the scalar product can be given as:

${\displaystyle {\frac {\partial }{\partial x}}\langle Ax,Bx\rangle =A^{T}Bx+B^{T}Ax}$

We can recognize this as the product rule of differentiation. Generalizing, we can say that:

${\displaystyle {\frac {\partial }{\partial x}}\langle f(x),g(x)\rangle =f'(x)g(x)+f(x)g'(x)}$

We can also say that, in the gradient (denominator-layout) convention, the derivative of a matrix A times a vector x is:

${\displaystyle {\frac {d}{dx}}Ax=A^{T}}$

## Metric

The metric of two functions (we will not call it the "distance" here, because that word has little meaning in a function space) will be denoted with ρ(f,g). We can define the metric of two L2 functions as follows:

[L2 Metric]

${\displaystyle \rho (f,g)_{L_{2}}={\sqrt {\int _{a}^{b}|f(x)-g(x)|^{2}dx}}}$

## Cauchy-Schwarz Inequality

The Cauchy-Schwarz inequality still holds for L2 functions, and is restated here:

${\displaystyle |\langle f(x),g(x)\rangle |\leq \|f\|\|g\|}$

## Linear Independence

A set of functions in L2 is linearly independent if:

${\displaystyle a_{1}f_{1}(x)+a_{2}f_{2}(x)+\cdots +a_{n}f_{n}(x)=0}$

if and only if all the a coefficients are 0.

## Gram-Schmidt Orthogonalization

The Gram-Schmidt technique that we discussed earlier still works with functions, and we can use it to form a set of linearly independent, orthogonal functions in L2.
For a set of functions φ, we can make a set of orthogonal functions ψ that span the same space but are orthogonal to one another:

[Gram-Schmidt Orthogonalization]

${\displaystyle \psi _{1}=\phi _{1}}$

${\displaystyle \psi _{i}=\phi _{i}-\sum _{n=1}^{i-1}{\frac {\langle \psi _{n},\phi _{i}\rangle }{\langle \psi _{n},\psi _{n}\rangle }}\psi _{n}}$

## Basis

The L2 space is infinite-dimensional, which means that any basis for the L2 space will require an infinite number of basis functions. To prove that an infinite set of orthogonal functions is a basis for the L2 space, we need to show that the null function is the only function in L2 that is orthogonal to all the basis functions. If the null function is the only function that satisfies this relationship, then the set is a basis set for L2. By definition, we can express any function in L2 as a linear sum of the basis elements. If we have basis elements φ, we can define any other function ψ as a linear sum:

${\displaystyle \psi (x)=\sum _{n=1}^{\infty }a_{n}\phi _{n}(x)}$

We will explore this important result in the section on Fourier series. There are some special spaces known as Banach spaces and Hilbert spaces, discussed below.

## Convergent Functions

Let's define the piece-wise function φn(x) as:

${\displaystyle \phi _{n}(x)={\begin{cases}0&x\leq 0\\nx&0<x\leq 1/n\\1&x>1/n\end{cases}}}$

We can see that as we let ${\displaystyle n\to \infty }$, this function becomes the unit step function. We can say that as n approaches infinity, this function converges to the unit step function. Notice that this sequence only converges in the L2 space, because the unit step function does not exist in the C space (it is not continuous).

### Convergence

We can say that a sequence of functions φn converges to a function φ* if:

${\displaystyle \lim _{n\to \infty }\|\phi _{n}-\phi ^{*}\|=0}$

A sequence whose terms eventually become arbitrarily close to one another, ${\displaystyle \|\phi _{n}-\phi _{m}\|\to 0}$ as n and m grow, is called a Cauchy sequence; every convergent sequence is a Cauchy sequence.
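The ramp sequence above can be checked numerically. Assuming the ramp φn(x) equals 0 for x ≤ 0, nx on (0, 1/n], and 1 beyond, the squared L2 distance to the unit step is ∫₀^{1/n}(nx − 1)² dx = 1/(3n), which goes to zero as n grows:

```python
# Squared L2 distance between the ramp phi_n and the unit step; the two
# functions differ only on (0, 1/n), where the step is 1 and the ramp is n*x.
def sq_error(n, samples=100000):
    h = 1.0 / (n * samples)                      # midpoint rule on (0, 1/n)
    return sum((n * (k + 0.5) * h - 1.0) ** 2 for k in range(samples)) * h

# Exact value is 1/(3n): the error shrinks as n increases.
for n in (1, 10, 100):
    assert abs(sq_error(n) - 1.0 / (3 * n)) < 1e-9
```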
### Complete Function Spaces

A function space is called complete if every Cauchy sequence in that space converges to a function in that space.

## Banach Space

A Banach space is a complete normed function space.

## Hilbert Space

A Hilbert space is a Banach space whose norm is induced by the scalar product. That is, if there is a scalar product in the space X, then we can say the norm is induced by the scalar product if we can write:

${\displaystyle \|f\|=g(\langle f,f\rangle )}$

That is, the norm can be written as a function of the scalar product. In the L2 space, we can define the norm as:

${\displaystyle \|f\|={\sqrt {\langle f,f\rangle }}}$

If a space with a scalar product is complete under this induced norm, it is a Hilbert space. In a Hilbert space, the parallelogram rule holds for all members f and g in the function space:

${\displaystyle \|f+g\|^{2}+\|f-g\|^{2}=2\|f\|^{2}+2\|g\|^{2}}$

The L2 space is a Hilbert space. The C space, however, is not.

# Fourier Series

The L2 space is an infinite-dimensional function space, and a linear combination of the members of a complete orthogonal set of functions can be used to represent any single member of the L2 space. The decomposition of an L2 function in terms of such an infinite basis set is a technique known as the Fourier decomposition of the function, and produces a result called the Fourier series.
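The parallelogram rule above can be checked numerically for two concrete L2 functions. The sketch uses f = sin and g = cos on [0, 2π], where each side of the identity works out to 4π:

```python
import math

# Numerical check of ||f+g||^2 + ||f-g||^2 = 2||f||^2 + 2||g||^2 in L2.
def norm_sq(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) ** 2 for k in range(n)) * h

a, b = 0.0, 2 * math.pi
f, g = math.sin, math.cos
lhs = norm_sq(lambda x: f(x) + g(x), a, b) + norm_sq(lambda x: f(x) - g(x), a, b)
rhs = 2 * norm_sq(f, a, b) + 2 * norm_sq(g, a, b)

assert abs(lhs - rhs) < 1e-6
assert abs(lhs - 4 * math.pi) < 1e-6   # both sides equal 4*pi here
```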
## Fourier Basis

Let's consider a set of L2 functions, ${\displaystyle \phi }$, as follows:

${\displaystyle \phi =\{1,\sin(\pi x),\cos(\pi x),\sin(2\pi x),\cos(2\pi x),\sin(3\pi x),\cos(3\pi x)...\}.\ }$

We can prove that over the range ${\displaystyle [0,2]}$ (one full period of each of these functions), all of these functions are orthogonal:

${\displaystyle \int _{0}^{2}1\cdot \cos(n\pi x)dx=0}$

${\displaystyle \int _{0}^{2}1\cdot \sin(n\pi x)dx=0}$

${\displaystyle \int _{0}^{2}\sin(n\pi x)\sin(m\pi x)dx=0,n\neq m}$

${\displaystyle \int _{0}^{2}\sin(n\pi x)\cos(m\pi x)dx=0}$

${\displaystyle \int _{0}^{2}\cos(n\pi x)\cos(m\pi x)dx=0,n\neq m}$

Because ${\displaystyle \phi }$ is an infinite orthogonal set in L2, ${\displaystyle \phi }$ is also a valid basis set in the L2 space. Therefore, we can decompose any function in L2 as the following sum:

[Classical Fourier Series]

${\displaystyle \psi (x)=a_{0}(1)+\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)+\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)}$

However, the difficulty occurs when we need to calculate the a and b coefficients. We will show the method to do this below.

## a0: The Constant Term

Calculation of a0 is the easiest, and therefore we will show how to calculate it first. We use the value of a0 which minimizes the error in approximating ${\displaystyle f(x)}$ by the Fourier series.
First, define an error function, E, that is equal to one half the squared norm of the difference between the function f(x) and the infinite sum above:

${\displaystyle E={\frac {1}{2}}\int _{0}^{2}|f(x)-a_{0}(1)-\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)-\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)|^{2}dx}$

For ease, we will write all the basis functions as the set φ, described above:

${\displaystyle \sum _{i=0}^{\infty }a_{i}\phi _{i}=a_{0}+\sum _{n=1}^{\infty }a_{n}\sin(n\pi x)+\sum _{m=1}^{\infty }b_{m}\cos(m\pi x)}$

Combining the last two equations, and writing the norm as an integral, we can say:

${\displaystyle E={\frac {1}{2}}\int _{0}^{2}|f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i}|^{2}dx}$

We attempt to minimize this error function with respect to the constant term. To do this, we differentiate both sides with respect to a0, and set the result to zero:

${\displaystyle 0={\frac {\partial E}{\partial a_{0}}}=\int _{0}^{2}(f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i}(x))(-\phi _{0}(x))dx}$

The φ0 term comes out of the sum because of the chain rule: it is the only term in the entire sum dependent on a0. We can separate out the integral above as follows:

${\displaystyle \int _{0}^{2}(f(x)-\sum _{i=0}^{\infty }a_{i}\phi _{i})(-\phi _{0})dx=-\int _{0}^{2}f(x)\phi _{0}(x)dx+a_{0}\int _{0}^{2}\phi _{0}(x)\phi _{0}(x)dx}$

All the other terms drop out of the infinite sum because they are all orthogonal to φ0.
Again, we can rewrite the above equation in terms of the scalar product:

${\displaystyle 0=-\langle f(x),\phi _{0}(x)\rangle +a_{0}\langle \phi _{0}(x),\phi _{0}(x)\rangle }$

And solving for a0, we get our final result:

${\displaystyle a_{0}={\frac {\langle f(x),\phi _{0}(x)\rangle }{\langle \phi _{0}(x),\phi _{0}(x)\rangle }}}$

## Sin Coefficients

Using the above method, we can solve for the an coefficients of the sin terms:

${\displaystyle a_{n}={\frac {\langle f(x),\sin(n\pi x)\rangle }{\langle \sin(n\pi x),\sin(n\pi x)\rangle }}}$

## Cos Coefficients

Also using the above method, we can solve for the bn coefficients of the cos terms:

${\displaystyle b_{n}={\frac {\langle f(x),\cos(n\pi x)\rangle }{\langle \cos(n\pi x),\cos(n\pi x)\rangle }}}$

## Generalized Fourier Series

The classical Fourier series uses the following basis:

${\displaystyle \phi (x)=\{1,\sin(n\pi x),\cos(n\pi x)\},n=1,2,...}$

However, we can generalize this concept to extend to any orthogonal basis set from the L2 space. We can say that if we have an orthogonal basis set that is composed of an infinite set of arbitrary, orthogonal L2 functions:

${\displaystyle \phi =\{\phi _{1},\phi _{2},\cdots \}}$

We can define any L2 function f(x) in terms of this basis set:

[Generalized Fourier Series]

${\displaystyle f(x)=\sum _{n=1}^{\infty }a_{n}\phi _{n}(x)}$

Using the method from the previous chapter, we can solve for the coefficients as follows:

[Generalized Fourier Coefficient]

${\displaystyle a_{n}={\frac {\langle f(x),\phi _{n}(x)\rangle }{\langle \phi _{n}(x),\phi _{n}(x)\rangle }}}$

Bessel's inequality relates the original function to the Fourier coefficients an (for an orthonormal basis set):

[Bessel's Inequality]

${\displaystyle \sum _{n=1}^{\infty }a_{n}^{2}\leq \|f(x)\|^{2}}$

If the basis set is complete, so that an infinite sum of the basis functions perfectly reproduces the function f(x), then the above relation becomes an equality, known as Parseval's theorem:

[Parseval's Theorem]

${\displaystyle \sum _{n=1}^{\infty }a_{n}^{2}=\|f(x)\|^{2}}$

Engineers
may recognize this as a relationship between the energy of the signal, as represented in the time and frequency domains. Parseval's theorem applies not only to the classical Fourier series coefficients, but also to the generalized series coefficients as well.

The concept of the Fourier series can be expanded to include 2-dimensional and n-dimensional function decomposition as well. Let's say that we have a function in terms of independent variables x and y. We can decompose that function as a double summation as follows:

${\displaystyle f(x,y)=\sum _{i=1}^{\infty }\sum _{j=1}^{\infty }a_{ij}\phi _{ij}(x,y)}$

Where φij is a 2-dimensional set of orthogonal basis functions. We can define the coefficients as:

${\displaystyle a_{ij}={\frac {\langle f(x,y),\phi _{ij}(x,y)\rangle }{\langle \phi _{ij}(x,y),\phi _{ij}(x,y)\rangle }}}$

This same concept can be expanded to include series with n dimensions. (See also The Feynman Lectures on Physics, Chapter 50, "Harmonics".)

# Miscellany

[Lyapunov's Equation]

${\displaystyle AM+MB=C}$

Where A, B and C are constant square matrices, and M is the solution that we are trying to find. If A, B, and C are of the same order, if A and B have no eigenvalues in common, and (for the integral below to converge) all eigenvalues of A and B have negative real parts, then the solution can be given in terms of matrix exponentials:

${\displaystyle M=-\int _{0}^{\infty }e^{Ax}Ce^{Bx}dx}$

Leibniz' rule allows us to take the derivative of an integral whose limits and integrand both depend on the differentiation variable:

${\displaystyle {\frac {d}{dt}}\int _{a(t)}^{b(t)}f(x,t)dx=f(b(t),t)\,b'(t)-f(a(t),t)\,a'(t)+\int _{a(t)}^{b(t)}{\frac {\partial f(x,t)}{\partial t}}dx}$

## Wavelets

Wavelets are orthogonal basis functions that only exist for certain windows in time. This is in contrast to sinusoidal waves, which exist for all times t. A wavelet, because it is dependent on time, can be used as a basis function, and a wavelet basis set gives rise to wavelet decomposition, which is a 2-variable decomposition of a 1-variable function. Wavelet analysis allows us to decompose a function in terms of time and frequency, while Fourier decomposition only allows us to decompose a function in terms of frequency.
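Before moving on, the generalized coefficient formula and Parseval's theorem can be illustrated numerically. The sketch below assumes the period-2 basis {1/√2, sin(nπx), cos(nπx)} on [0, 2], where each basis function has unit L2 norm, and the hypothetical test function f(x) = x, whose energy is ‖f‖² = ∫₀² x² dx = 8/3; the partial sum of squared coefficients approaches that value from below:

```python
import math

# Midpoint-rule approximation of the L2 scalar product <f, g> on [a, b].
def inner(f, g, a=0.0, b=2.0, n=4000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: x                                        # hypothetical test function

# Coefficients a_n = <f, phi_n> for an orthonormal basis (denominator = 1).
coeffs = [inner(f, lambda x: 1.0 / math.sqrt(2.0))]    # normalized constant term
for m in range(1, 80):
    coeffs.append(inner(f, lambda x, m=m: math.sin(m * math.pi * x)))
    coeffs.append(inner(f, lambda x, m=m: math.cos(m * math.pi * x)))

energy = sum(c * c for c in coeffs)                    # partial sum of a_n^2
# ||f||^2 = 8/3; truncation and quadrature leave a small residual.
assert abs(energy - 8.0 / 3.0) < 0.02
```

The slow approach to 8/3 reflects the 1/n decay of the sine coefficients of f(x) = x.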
## Mother Wavelet

If we have a basic wavelet function ψ(t), known as the mother wavelet, we can generate from it a 2-parameter family of scaled and shifted wavelets as such:

${\displaystyle \psi _{jk}=2^{j/2}\psi (2^{j}t-k)}$

## Wavelet Series

If we have our mother wavelet function, we can write out a Fourier-style series as a double sum of all the wavelets:

${\displaystyle f(t)=\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }a_{jk}\psi _{jk}(t)}$

## Scaling Function

Sometimes, we can add in an additional function, known as a scaling function:

${\displaystyle f(t)=\sum _{i=0}^{\infty }c_{i}\phi _{i}+\sum _{j=0}^{\infty }\sum _{k=0}^{\infty }a_{jk}\psi _{jk}(t)}$

The idea is that the scaling function is larger than the wavelet functions, and occupies more time. In this case, the scaling function will show long-term changes in the signal, and the wavelet functions will show short-term changes in the signal.

## Optimization

Optimization is an important concept in engineering. Finding any solution to a problem is not nearly as good as finding the one "optimal" solution to the problem. Optimization problems are typically reformulated so they become minimization problems, which are well-studied problems in the field of mathematics. Typically, when optimizing a system, the costs and benefits of that system are arranged into a cost function. It is the engineer's job then to minimize this cost function (and thereby minimize the cost of the system). It is worth noting at this point that the word "cost" can have multiple meanings, depending on the particular problem.
For instance, cost can refer to the actual monetary cost of a system (number of computer units to host a website, amount of cable needed to connect Philadelphia and New York), the delay of the system (loading time for a website, transmission delay for a communication network), the reliability of the system (number of dropped calls in a cellphone network, average lifetime of a car transmission), or any other types of factors that reduce the effectiveness and efficiency of the system. Because optimization typically becomes a mathematical minimization problem, we are going to discuss minimization here.

### Minimization

Minimization is the act of finding the numerically lowest point in a given function, or in a particular range of a given function. Students of mathematics and calculus may remember using the derivative of a function to find the maxima and minima of a function. If we have a function f(x), we can find the maxima, minima, or saddle points (points where the function has zero slope, but is not a maximum or minimum) by solving for x in the following equation:

${\displaystyle {\frac {df(x)}{dx}}=0}$

In other words, we are looking for the roots of the derivative of the function f, plus those points where f has a corner. Once we have the so-called critical points of the function (if any), we can test them to see if they are relatively high (maxima) or relatively low (minima). Some words to remember in this context are:

Global Minimum: A global minimum of a function is the lowest value of that function anywhere. If the domain of the function is restricted, say A < x < B, then the minimum can also occur at the boundary, here A or B.

Local Minimum: A local minimum of a function is the lowest value of that function within a small range. A value can thus be a local minimum even though smaller function values exist elsewhere, just not in a small neighborhood around it.
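The derivative test above can be sketched on the hypothetical function f(x) = x³ − 3x: the roots of f′(x) = 3x² − 3 are x = −1 and x = +1, and the sign of f″ classifies each as a maximum or minimum:

```python
# Critical points of f(x) = x^3 - 3x, classified by the second derivative.
def f(x): return x ** 3 - 3 * x
def fprime(x): return 3 * x ** 2 - 3
def fsecond(x): return 6 * x

critical = [-1.0, 1.0]            # roots of f'(x) = 0
for x in critical:
    assert abs(fprime(x)) < 1e-12

# f''(-1) = -6 < 0 (local max); f''(1) = 6 > 0 (local min).
labels = ["local max" if fsecond(x) < 0 else "local min" for x in critical]
assert labels == ["local max", "local min"]
```

Note that both extrema here are local, not global: f is unbounded in both directions.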
## Unconstrained Minimization

Unconstrained minimization refers to the minimization of the given function without having to worry about any other rules or caveats. Constrained minimization, on the other hand, refers to minimization problems where other relations, called constraints, must be satisfied at the same time. Besides the method above (where we take the derivative of the function and set it equal to zero), there are several numerical methods that we can use to find the minima of a function. For these methods there are useful computational tools such as Matlab.

### Hessian Matrix

The function has a local minimum at a critical point x if the Hessian matrix H(x) is positive definite there:

${\displaystyle H(x)={\frac {\partial ^{2}f(x)}{\partial x^{2}}}}$

Where x is a vector of all the independent variables of the function. If x is a scalar variable, the Hessian matrix reduces to the second derivative of the function f.

### Newton-Raphson Method

The Newton-Raphson method of computing the minima of a function f uses an iterative computation. We can define the sequence:

${\displaystyle x^{n+1}=x^{n}-{\frac {f'(x^{n})}{f''(x^{n})}}}$

Where

${\displaystyle f'(x)={\frac {df(x)}{dx}}}$

${\displaystyle f''(x)={\frac {d^{2}f(x)}{dx^{2}}}}$

As we repeat the above computation, plugging in consecutive values for n, our solution will converge on the true solution. In general this process requires infinitely many iterations to converge exactly, but if an approximation of the true solution suffices, you can stop after only a few iterations, because the sequence converges rather quickly (quadratically).

### Steepest Descent Method

The Newton-Raphson method can be tricky because it relies on the second derivative of the function f, and this can oftentimes be difficult (if not impossible) to accurately calculate.
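A minimal sketch of the Newton-Raphson iteration, assuming the hypothetical cost f(x) = x⁴ − 2x², which has local minima at x = −1 and x = +1; starting from x = 2, the iteration converges rapidly to x = 1:

```python
# Newton-Raphson iteration x_{n+1} = x_n - f'(x_n)/f''(x_n)
# for the cost f(x) = x^4 - 2x^2 (minima at x = -1 and x = +1).
def fprime(x): return 4 * x ** 3 - 4 * x
def fsecond(x): return 12 * x ** 2 - 4

x = 2.0                        # starting guess, inside the basin of x = 1
for _ in range(20):
    x = x - fprime(x) / fsecond(x)

assert abs(x - 1.0) < 1e-9     # converged to the nearby minimum
assert abs(fprime(x)) < 1e-8   # slope is (numerically) zero there
```

Which minimum the iteration finds depends on the starting guess, and a guess near a point where f″ changes sign can misbehave, which is the caveat the next method addresses.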
The steepest descent method, however, does not require the second derivative, but it does require the selection of an appropriate scalar quantity ε, which cannot be chosen arbitrarily (but which also cannot be calculated using a set formula). The steepest descent method is defined by the following iterative computation:

${\displaystyle x^{n+1}=x^{n}-\epsilon {\frac {df(x)}{dx}}}$

Where epsilon needs to be sufficiently small. If epsilon is too large, the iteration may diverge. If this happens, a new epsilon value needs to be chosen, and the process needs to be repeated.

## Constrained Minimization

Constrained minimization is the process of finding the minimum value of a function under a certain number of additional rules called constraints. For instance, we could say "Find the minimum value of f(x), but g(x) must equal 10". These kinds of problems are more difficult, but the Kuhn-Tucker theorem, and also the Karush-Kuhn-Tucker theorem, help to solve them. There are two different types of constraints: equality constraints and inequality constraints. We will consider them individually, and then mixed constraints.

### Equality Constraints

The Kuhn-Tucker theorem is a method for minimizing a function f(x) under the equality constraint g(x). The theorem reads as follows: Given the cost function f, and an equality constraint g in the following form:

${\displaystyle g(x)=0}$,

Then we can convert this problem into an unconstrained minimization problem by constructing the Lagrangian function of f and g:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$

Where Λ is the Lagrange multiplier, and < , > denotes the scalar product of the vector space Rn (where n is the number of equality constraints). If we differentiate this equation with respect to x, we can find the minimum of the whole function L(x,Λ), and that will be the minimum of our function f.
${\displaystyle {\frac {df(x)}{dx}}+\left\langle \Lambda ,{\frac {dg(x)}{dx}}\right\rangle =0}$

${\displaystyle g(x)=0}$

This is a set of n+k equations with n+k unknown variables (n Λs and k xs).

### Inequality Constraints

Similar to the method above, let us say that we have a cost function f, and an inequality constraint in the following form:

${\displaystyle g(x)\leq 0}$

Then we can take the Lagrangian of this again:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle }$

But we now must use the following three equations/inequalities in determining our solution:

${\displaystyle {\frac {df(x)}{dx}}+\left\langle \Lambda ,{\frac {dg(x)}{dx}}\right\rangle =0}$

${\displaystyle \langle \Lambda ,g(x)\rangle =0}$

${\displaystyle \Lambda \geq 0}$

The second condition (complementary slackness) can be interpreted in the following way: if ${\displaystyle g(x)<0}$, then ${\displaystyle \Lambda =0}$; if ${\displaystyle g(x)=0}$, then ${\displaystyle \Lambda \geq 0}$. Using these two additional conditions, we can solve in a similar manner as above.

### Mixed Constraints

If we have a set of equality and inequality constraints

${\displaystyle g(x)=0}$

${\displaystyle h(x)\leq 0}$

we can combine them into a single Lagrangian with two additional conditions:

${\displaystyle L(x)=f(x)+\langle \Lambda ,g(x)\rangle +\langle \mu ,h(x)\rangle }$

${\displaystyle g(x)=0}$

${\displaystyle \langle \mu ,h(x)\rangle =0}$

${\displaystyle \mu \geq 0}$

## Infinite Dimensional Minimization

The above methods work well if the variables involved in the analysis are finite-dimensional vectors, like those in RN. However, when we are trying to minimize something that is more complex than a vector, i.e. a function, we need the following concept. We consider functions that live in a subspace of L2(RN), which is an infinite-dimensional vector space. We will define the term functional as follows:

Functional: A functional is a map that takes one or more functions as arguments, and which returns a scalar value.
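The definition above can be made concrete with a tiny numerical sketch: a functional takes a whole function and returns a single number. Here the hypothetical functional is J[x] = ∫₀¹ x(t)² dt, evaluated by quadrature; for x(t) = t the exact value is 1/3:

```python
# A functional: maps a function x(t) to the scalar J[x] = integral_0^1 x(t)^2 dt.
def J(x, n=100000):
    h = 1.0 / n
    return sum(x((k + 0.5) * h) ** 2 for k in range(n)) * h

# For x(t) = t, the integral of t^2 over [0, 1] is exactly 1/3.
value = J(lambda t: t)
assert abs(value - 1.0 / 3.0) < 1e-9
```

Note that J takes a function object, not a number, as its argument: that is the essential difference from an ordinary function.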
Let us say that we consider functions x of time t (N = 1). Suppose further that we have a fixed function f of two variables. With that function, we can associate a cost functional J:

${\displaystyle J[x]=\int _{a}^{b}f(x,t)dt}$

where we are explicitly taking account of t in the definition of f. To minimize this functional, as in all minimization problems, we need to take the derivative and set it to zero. However, we need a slightly more sophisticated notion of derivative, because x is itself a function. This is where the Gateaux derivative enters the field.

### Gateaux Derivative

We can define the Gateaux derivative in terms of the following limit:

${\displaystyle \delta F(x,h)=\lim _{\epsilon \to 0}{\frac {1}{\epsilon }}[F(x+\epsilon h)-F(x)]}$

which is similar to the classical definition of the derivative in the direction h. In plain words, we take the derivative of F with respect to x in the direction of h, where h is an arbitrary function of time in the same space as x (here, the space L2). Analogously to the one-dimensional case, a functional is differentiable at x iff the above limit exists. We can use the Gateaux derivative to find the minimizer of our functional above.

### Euler-Lagrange Equation

We will now use the Gateaux derivative, discussed above, to find the minimizer of functionals of the form:

${\displaystyle J(x(t))=\int _{a}^{b}f(x(t),x'(t),t)dt}$

We thus have to find the solutions of the equation:

${\displaystyle \delta J(x)=0}$

The solution is the Euler-Lagrange equation:

${\displaystyle {\frac {\partial f}{\partial x}}-{\frac {d}{dt}}{\frac {\partial f}{\partial x'}}=0}$

The partial derivatives are taken in the ordinary way, ignoring the fact that x is a function of t. Solutions of this equation are maxima, minima, or saddle points of the cost functional J.

### Example: Shortest Distance

We've heard colloquially that the shortest distance between two points is a straight line.
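Before working the example, the defining limit of the Gateaux derivative can be checked numerically. The sketch below (an assumed illustration; `J`, `gateaux`, and the choice F[x] = ∫₀¹ x(t)² dt are not from the text) compares the finite-ε quotient against the analytic value δJ(x, h) = ∫₀¹ 2 x(t) h(t) dt:

```python
# Assumed example functional: J[x] = ∫_0^1 x(t)^2 dt (midpoint Riemann sum)
def J(x, n=1000):
    dt = 1.0 / n
    return sum(x((i + 0.5) * dt) ** 2 * dt for i in range(n))

def gateaux(F, x, h, eps=1e-6):
    # finite-epsilon version of the defining limit (1/ε)[F(x + εh) - F(x)]
    return (F(lambda t: x(t) + eps * h(t)) - F(x)) / eps

x = lambda t: t        # the point (a function) at which we differentiate
h = lambda t: 1.0      # the direction (also a function)
print(gateaux(J, x, h))  # analytic value: ∫_0^1 2 t · 1 dt = 1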
We can use the Euler-Lagrange equation to prove this rule. If we have two points in R2, a, and b, we would like to find the minimum curve (x,y(x)) that joins these two points. Line element ds reads: ${\displaystyle ds={\sqrt {dx^{2}+dy^{2}}}}$ Our function that we are trying to minimize then is defined as: ${\displaystyle J[y]=\int _{a}^{b}ds}$ or: ${\displaystyle J[y]=\int _{a}^{b}{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}dx}$ We can take the Gateaux derivative of the function J and set it equal to zero to find the minimum function between these two points. Denoting the square root as f, we get ${\displaystyle 0={\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)=y''{\frac {1}{\left(1+y^{\prime 2}\right)^{3/2}}}\;.}$ Knowing that the line element will be finite this boils down to the equation ${\displaystyle {\frac {d^{2}y}{dx^{2}}}=0}$ with the well known solution ${\displaystyle y(x)=mx+n={\frac {b_{y}-a_{y}}{b_{x}-a_{x}}}(x-a_{x})+a_{y}\;.}$ Version 1.3, 3 November 2008 Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. ## 0. PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. 
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. ## 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. 
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. 
Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. The "publisher" means any person or entity that distributes copies of the Document to the public. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. ## 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. 
You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. ## 3. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. 
If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. ## 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: 1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. 2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. 3. State on the Title page the name of the publisher of the Modified Version, as the publisher. 4. Preserve all the copyright notices of the Document. 6. 
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. 7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. 8. Include an unaltered copy of this License. 9. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. 10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. 11. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. 12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. 13. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified version. 14. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. 15. Preserve any Warranty Disclaimers. 
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. ## 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. 
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements". ## 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. ## 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. 
When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. ## 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. ## 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License. 
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it. ## 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. 
If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document. ## 11. RELICENSING "Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site. "Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document. An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008. The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing. To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3
{}
Simon Danisch / Sep 05 2019 # Julia by Example ## Introductory Examples ### Overview We’re now ready to start learning the Julia language itself #### Level Our approach is aimed at those who already have at least some knowledge of programming — perhaps experience with Python, MATLAB, Fortran, C or similar In particular, we assume you have some familiarity with fundamental programming concepts such as • variables • arrays or vectors • loops • conditionals (if/else) #### Approach In this lecture we will write and then pick apart small Julia programs At this stage the objective is to introduce you to basic syntax and data structures Deeper concepts—how things work—will be covered in later lectures Since we are looking for simplicity the examples are a little contrived In this lecture, we will often start with a direct MATLAB/FORTRAN approach which often is poor coding style in Julia, but then move towards more elegant code which is tightly connected to the mathematics #### Set Up We assume that you’ve worked your way through our getting started lecture already In particular, the easiest way to install and precompile all the Julia packages used in QuantEcon notes is to type ] add InstantiateFromURL and then work in a Jupyter notebook, as described here #### Other References The definitive reference is Julia’s own documentation The manual is thoughtfully written but is also quite dense (and somewhat evangelical) The presentation in this and our remaining lectures is more of a tutorial style based around examples ### Example: Plotting a White Noise Process To begin, let’s suppose that we want to simulate and plot the white noise process $\epsilon_0, \epsilon_1, \ldots, \epsilon_T$, where each draw $\epsilon_t$ is independent standard normal #### Introduction to Packages The first step is to activate a project environment, which is encapsulated by Project.toml and Manifest.toml files There are three ways to install packages and versions (where the first two methods are 
discouraged, since they may lead to package versions out-of-sync with the notes) 1. add the packages directly into your global installation (e.g. Pkg.add("MyPackage") or ] add MyPackage) 2. download an Project.toml and Manifest.toml file in the same directory as the notebook (i.e. from the @__DIR__ argument), and then call using Pkg; Pkg.activate(@__DIR__); 3. use the InstantiateFromURL package #using InstantiateFromURL #github_project("QuantEcon/quantecon-notebooks-julia", version = "0.2.0") If you have never run this code on a particular computer, it is likely to take a long time as it downloads, installs, and compiles all dependent packages We will discuss it more in Tools and Editors, but these files provide a listing of packages and versions used by the code This ensures that an environment for running code is reproducible, so that anyone can replicate the precise set of package and versions used in construction The careful selection of package versions is crucial for reproducibility, as otherwise your code can be broken by changes to packages out of your control After the installation and activation, using provides a way to say that a particular code or notebook will use the package using LinearAlgebra, Statistics <a id='import'></a> #### Using Functions from a Package Some functions are built into the base Julia, such as randn, which returns a single draw from a normal distibution with mean 0 and variance 1 if given no parameters randn() -1.13557 Other functions require importing all of the names from an external library using Plots gr(fmt=:png); # setting for easier display in jupyter notebooks n = 100 ϵ = randn(n) plot(1:n, ϵ) Let’s break this down and see how it works The effect of the statement using Plots is to make all the names exported by the Plots module available Because we used Pkg.activate previously, it will use whatever version of Plots.jl that was specified in the Project.toml and Manifest.toml files The other packages LinearAlgebra and 
Statistics are base Julia libraries, but require an explicit using The arguments to plot are the numbers 1,2, ..., n for the x-axis, a vector ϵ for the y-axis, and (optional) settings The function randn(n) returns a column vector n random draws from a normal distribution with mean 0 and variance 1 #### Arrays As a language intended for mathematical and scientific computing, Julia has strong support for using unicode characters In the above case, the ϵ and many other symbols can be typed in most Julia editor by providing the LaTeX and <TAB>, i.e. \epsilon<TAB> The return type is one of the most fundamental Julia data types: an array typeof(ϵ) Array{Float64,1} ϵ[1:5] 5-element Array{Float64,1}: 0.718197 -0.671958 0.867648 -1.44841 0.961882 The information from typeof() tells us that ϵ is an array of 64 bit floating point values, of dimension 1 In Julia, one-dimensional arrays are interpreted as column vectors for purposes of linear algebra The ϵ[1:5] returns an array of the first 5 elements of ϵ Notice from the above that • array indices start at 1 (like MATLAB and Fortran, but unlike Python and C) • array elements are referenced using square brackets (unlike MATLAB and Fortran) To get help and examples in Jupyter or other julia editor, use the ? before a function name or syntax ?typeof search: typeof typejoin TypeError Get the concrete type of x. 
Examples julia> a = 1//2; julia> typeof(a) Rational{Int64} julia> M = [1 2; 3.5 4]; julia> typeof(M) Array{Float64,2} #### For Loops Although there’s no need in terms of what we wanted to achieve with our program, for the sake of learning syntax let’s rewrite our program to use a for loop for generating the data Note Starting with the most direct version, and pretending we are in a world where randn can only return a single value # poor style n = 100 ϵ = zeros(n) for i in 1:n ϵ[i] = randn() end Here we first declared ϵ to be a vector of n numbers, initialized by the floating point 0.0 The for loop then populates this array by successive calls to randn() Like all code blocks in Julia, the end of the for loop code block (which is just one line here) is indicated by the keyword end The word in from the for loop can be replaced by either ∈ or = The index variable is looped over for all integers from 1:n – but this does not actually create a vector of those indices Instead, it creates an iterator that is looped over – in this case the range of integers from 1 to n While this example successfully fills in ϵ with the correct values, it is very indirect as the connection between the index i and the ϵ vector is unclear To fix this, use eachindex # better style n = 100 ϵ = zeros(n) for i in eachindex(ϵ) ϵ[i] = randn() end Here, eachindex(ϵ) returns an iterator of indices which can be used to access ϵ While iterators are memory efficient because the elements are generated on the fly rather than stored in memory, the main benefit is (1) it can lead to code which is clearer and less prone to typos; and (2) it allows the compiler flexibility to creatively generate fast code In Julia you can also loop directly over arrays themselves, like so ϵ_sum = 0.0 # careful to use 0.0 here, instead of 0 m = 5 for ϵ_val in ϵ[1:m] ϵ_sum = ϵ_sum + ϵ_val end ϵ_mean = ϵ_sum / m where ϵ[1:m] returns the elements of the vector at indices 1 to m Of course, in Julia there are built in functions to 
perform this calculation which we can compare against ϵ_mean ≈ mean(ϵ[1:m]) ϵ_mean ≈ sum(ϵ[1:m]) / m true In these examples, note the use of ≈ to test equality, rather than ==, which is appropriate for integers and other types Approximately equal, typed with \approx<TAB>, is the appropriate way to compare any floating point numbers due to the standard issues of floating point math <a id='user-defined-functions'></a> #### User-Defined Functions For the sake of the exercise, let’s go back to the for loop but restructure our program so that generation of random variables takes place within a user-defined function To make things more interesting, instead of directly plotting the draws from the distribution, let’s plot the squares of these draws # poor style function generatedata(n) ϵ = zeros(n) for i in eachindex(ϵ) ϵ[i] = (randn())^2 # squaring the result end return ϵ end data = generatedata(10) plot(data) Here • function is a Julia keyword that indicates the start of a function definition • generatedata is an arbitrary name for the function • return is a keyword indicating the return value, as is often unnecessary Let us make this example slightly better by “remembering” that randn can return a vectors # still poor style function generatedata(n) ϵ = randn(n) # use built in function for i in eachindex(ϵ) ϵ[i] = ϵ[i]^2 # squaring the result end return ϵ end data = generatedata(5) 5-element Array{Float64,1}: 0.3186968339672584 1.096885365034298 0.5150540764077658 3.4943263421532738 0.0033495849554665857 While better, the looping over the i index to square the results is difficult to read Instead of looping, we can broadcast the ^2 square function over a vector using a . 
To be clear, unlike in Python, R, and (to a lesser extent) MATLAB, the reason to drop the for loop is not performance, but rather code clarity Loops of this sort are at least as efficient as the vectorized approach in compiled languages like Julia, so use a for loop if you think it makes the code more clear # better style function generatedata(n) ϵ = randn(n) # use built in function return ϵ.^2 end data = generatedata(5) 5-element Array{Float64,1}: 0.008040679683391627 2.7193272818756418 2.503006792875296 0.25899422951021456 0.6412482847152445 We can even drop the function wrapper if we define it on a single line # good style generatedata(n) = randn(n).^2 data = generatedata(5) 5-element Array{Float64,1}: 2.134636363911204 1.6701457046959898 0.24706230272781574 0.8315376998419491 0.6070171828121048 Finally, we can broadcast any function, of which squaring is only a special case # good style f(x) = x^2 # simple square function generatedata(n) = f.(randn(n)) # uses broadcast for some function f data = generatedata(5) 5-element Array{Float64,1}: 0.07219218176682414 2.7843096367519196 0.015823943380171502 0.7512527900793983 0.33614099458158286 As a final – abstract – approach, we can make the generatedata function generic in the function it applies generatedata(n, gen) = gen.(randn(n)) # uses broadcast for some function gen f(x) = x^2 # simple square function data = generatedata(5, f) # applies f 5-element Array{Float64,1}: 0.1556888086720689 0.6179770346098962 0.1908246493615354 0.03120230831361247 1.292340133589027 Whether this example is better or worse than the previous version depends on how it is used High degrees of abstraction and generality, e.g.
passing in a function f in this case, can make code either clearer or more confusing, but Julia enables you to use these techniques with no performance overhead For this particular case, the clearest and most general solution is probably the simplest # direct solution with broadcasting, and small user-defined function n = 100 f(x) = x^2 x = randn(n) plot(f.(x), label="x^2") plot!(x, label="x") # layer on the same plot While the broadcasting above superficially looks like vectorized functions in MATLAB or NumPy ufuncs in Python, it is much richer and is built on core foundations of the language The other new function, plot!, adds a graph to the existing plot This follows a general convention in Julia, where a function that modifies the arguments or a global state has a ! at the end of its name ##### A Slightly More Useful Function Let’s make a slightly more useful function This function will be passed a probability distribution and will respond by plotting a histogram of n draws from it In doing so we’ll make use of the Distributions package, which we assume was instantiated above with the project Here’s the code using Distributions function plothistogram(distribution, n) ϵ = rand(distribution, n) # n draws from distribution histogram(ϵ) end lp = Laplace() plothistogram(lp, 500) Let’s have a casual discussion of how all this works while leaving technical details for later in the lectures First, lp = Laplace() creates an instance of a data type defined in the Distributions module that represents the Laplace distribution The name lp is bound to this value When we make the function call plothistogram(lp, 500) the code in the body of the function plothistogram is run with • the name distribution bound to the same value as lp • the name n bound to the integer 500 ##### A Mystery Now consider the function call rand(distribution, n) This looks like something of a mystery The function rand() is defined in the base library such that rand(n) returns n uniform random variables on
$[0, 1)$ rand(3) On the other hand, distribution points to a data type representing the Laplace distribution that has been defined in a third party package So how can it be that rand() is able to take this kind of value as an argument and return the output that we want? The answer in a nutshell is multiple dispatch, which Julia uses to implement generic programming This refers to the idea that functions in Julia can have different behavior depending on the particular arguments that they’re passed Hence in Julia we can take an existing function and give it a new behavior by defining how it acts on a new type of value The compiler knows which function definition to apply in a given setting by looking at the types of the values the function is called on In Julia these alternative versions of a function are called methods ### Example: Variations on Fixed Points Take a mapping $f : X \to X$ for some set $X$ If there exists an $x^* \in X$ such that $f(x^*) = x^*$, then $x^*$ is called a “fixed point” of $f$ For our second example, we will start with a simple example of determining fixed points of a function The goal is to start with code in a MATLAB style, and move towards a more Julian style with high mathematical clarity #### Fixed Point Maps Consider the simple equation $v = p + \beta v$, where the scalars $p, \beta$ are given, and $v$ is the scalar we wish to solve for Of course, in this simple example, with parameter restrictions this can be solved as $v = p/(1 - \beta)$ Rearrange the equation in terms of a map $f : \mathbb R \to \mathbb R$ <a id='equation-fixed-point-map'></a> $$v = f(v) \tag{1}$$ where $$f(v) := p + \beta v$$ Therefore, a fixed point $v^*$ of $f(\cdot)$ is a solution to the above problem #### While Loops One approach to finding a fixed point of (1) is to start with an initial value, and iterate the map <a id='equation-fixed-point-naive'></a> $$v^{n+1} = f(v^n) \tag{2}$$ For this exact f function, we can see the convergence to $v = p/(1-\beta)$ when $|\beta| < 1$ by iterating backwards and taking $n\to\infty$ To implement the
iteration in (2), we start by solving this problem with a while loop The syntax for the while loop contains no surprises, and looks nearly identical to a MATLAB implementation # poor style p = 1.0 # note 1.0 rather than 1 β = 0.9 maxiter = 1000 tolerance = 1.0E-7 v_iv = 0.8 # initial condition # setup the algorithm v_old = v_iv normdiff = Inf iter = 1 while normdiff > tolerance && iter <= maxiter v_new = p + β * v_old # the f(v) map normdiff = norm(v_new - v_old) # replace and continue v_old = v_new iter = iter + 1 end println("Fixed point = $v_old, and |f(x) - x| = $normdiff in $iter iterations") The while loop, like the for loop, should only be used directly in Jupyter or inside a function Here, we have used the norm function (from the LinearAlgebra base library) to compare the values The other new function is println with string interpolation, which splices the value of an expression or variable prefixed by $ into a string An alternative approach is to use a for loop, and check for convergence in each iteration # setup the algorithm v_old = v_iv normdiff = Inf iter = 1 for i in 1:maxiter v_new = p + β * v_old # the f(v) map normdiff = norm(v_new - v_old) if normdiff < tolerance # check convergence iter = i break # converged, exit loop end # replace and continue v_old = v_new end println("Fixed point = $v_old, and |f(x) - x| = $normdiff in $iter iterations") The new feature here is break, which exits a for or while loop #### Using a Function The first problem with this setup is that it depends on being sequentially run – which can be easily remedied with a function # better, but still poor style function v_fp(β, p, v_iv, tolerance, maxiter) # setup the algorithm v_old = v_iv normdiff = Inf iter = 1 while normdiff > tolerance && iter <= maxiter v_new = p + β * v_old # the f(v) map normdiff = norm(v_new - v_old) # replace and continue v_old = v_new iter = iter + 1 end return (v_old, normdiff, iter) # returns a tuple end # some values p = 1.0 # note
1.0 rather than 1 β = 0.9 maxiter = 1000 tolerance = 1.0E-7 v_initial = 0.8 # initial condition v_star, normdiff, iter = v_fp(β, p, v_initial, tolerance, maxiter) println("Fixed point = $v_star, and |f(x) - x| = $normdiff in $iter iterations") While better, there could still be improvements #### Passing a Function The chief issue is that the algorithm (finding a fixed point) is reusable and generic, while the function we calculate, p + β * v, is specific to our problem A key feature of languages like Julia is the ability to efficiently handle functions passed to other functions # better style function fixedpointmap(f, iv, tolerance, maxiter) # setup the algorithm x_old = iv normdiff = Inf iter = 1 while normdiff > tolerance && iter <= maxiter x_new = f(x_old) # use the passed in map normdiff = norm(x_new - x_old) x_old = x_new iter = iter + 1 end return (x_old, normdiff, iter) end # define a map and parameters p = 1.0 β = 0.9 f(v) = p + β * v # note that p and β are used in the function! maxiter = 1000 tolerance = 1.0E-7 v_initial = 0.8 # initial condition v_star, normdiff, iter = fixedpointmap(f, v_initial, tolerance, maxiter) println("Fixed point = $v_star, and |f(x) - x| = $normdiff in $iter iterations") Much closer, but there are still hidden bugs if the user passes the arguments in the wrong order or mishandles the returned tuple #### Named Arguments and Return Values To avoid this, Julia has two features: named function parameters, and named tuples # good style function fixedpointmap(f; iv, tolerance=1E-7, maxiter=1000) # setup the algorithm x_old = iv normdiff = Inf iter = 1 while normdiff > tolerance && iter <= maxiter x_new = f(x_old) # use the passed in map normdiff = norm(x_new - x_old) x_old = x_new iter = iter + 1 end return (value = x_old, normdiff=normdiff, iter=iter) # A named tuple end # define a map and parameters p = 1.0 β = 0.9 f(v) = p + β * v # note that p and β are used in the function!
sol = fixedpointmap(f, iv=0.8, tolerance=1.0E-8) # don't need to pass maxiter println("Fixed point = $(sol.value), and |f(x) - x| = $(sol.normdiff) in $(sol.iter)" * " iterations") In this example, all function parameters after the ; in the argument list must be called by name Furthermore, a default value may be provided – so the named parameter iv is required while tolerance and maxiter have default values The return value of the function also has named fields, value, normdiff, and iter – all accessed intuitively using . To show the flexibility of this code, we can use it to find a fixed point of the non-linear logistic equation, $x = f(x)$ where $f(x) := r x (1-x)$ r = 2.0 f(x) = r * x * (1 - x) sol = fixedpointmap(f, iv=0.8) println("Fixed point = $(sol.value), and |f(x) - x| = $(sol.normdiff) in $(sol.iter) iterations") #### Using a Package But best of all is to avoid writing code altogether # best style using NLsolve p = 1.0 β = 0.9 f(v) = p .+ β * v # broadcast the + sol = fixedpoint(f, [0.8]) println("Fixed point = $(sol.zero), and |f(x) - x| = $(norm(f(sol.zero) - sol.zero)) in " * "$(sol.iterations) iterations") The fixedpoint function from the NLsolve.jl library implements the simple fixed point iteration scheme above Since the NLsolve library only accepts vector based inputs, we needed to make the f(v) function broadcast on the + sign, and pass in the initial condition as a vector of length 1 with [0.8] While a key benefit of using a package is that the code is clearer, and the implementation is tested, by using an orthogonal library we also enable performance improvements # best style p = 1.0 β = 0.9 iv = [0.8] sol = fixedpoint(v -> p .+ β * v, iv) println("Fixed point = $(sol.zero), and |f(x) - x| = $(norm(f(sol.zero) - sol.zero)) in " * "$(sol.iterations) iterations") Note that this completes in 3 iterations vs 177 for the naive fixed point iteration algorithm Since Anderson iteration is doing more calculations in an iteration, whether it is faster or not would depend on the
complexity of the f function But this demonstrates the value of keeping the math separate from the algorithm, since by decoupling the mathematical definition of the fixed point from the implementation in (2), we were able to exploit new algorithms for finding a fixed point The only other change in this function is the move from directly defining f(v) to using an anonymous function Similar to anonymous functions in MATLAB, and lambda functions in Python, Julia enables the creation of small functions without any names The code v -> p .+ β * v defines a function of a dummy argument, v, with the same body as our f(x) #### Composing Packages A key benefit of using Julia is that you can compose various packages, types, and techniques, without making changes to your underlying source As an example, consider if we want to solve the model with higher precision, as floating points cannot be distinguished beyond the machine epsilon for that type (recall that computers approximate real numbers by binary floating point numbers of a given precision; the machine epsilon is the gap between 1.0 and the next representable floating point number) In Julia, this number can be calculated as eps() For many cases, this is sufficient precision – but consider that in iterative algorithms applied millions of times, those small differences can add up The only change we will need to our model in order to use a different floating point type is to call the function with an arbitrary precision floating point, BigFloat, for the initial value # use arbitrary precision floating points p = 1.0 β = 0.9 iv = [BigFloat(0.8)] # higher precision # otherwise identical sol = fixedpoint(v -> p .+ β * v, iv) println("Fixed point = $(sol.zero), and |f(x) - x| = $(norm(f(sol.zero) - sol.zero)) in " * "$(sol.iterations) iterations") Here, the literal BigFloat(0.8) takes the number 0.8 and changes it to an arbitrary precision number The result is that the residual is now exactly 0.0 since it is able to use arbitrary precision in the calculations, and the
solution has a finite-precision representation with those parameters #### Multivariate Fixed Point Maps The above example can be extended to multivariate maps without any modifications to the fixed point iteration code Using our own, homegrown iteration and simply passing in a bivariate map: p = [1.0, 2.0] β = 0.9 iv = [0.8, 2.0] f(v) = p .+ β * v # note that p and β are used in the function! sol = fixedpointmap(f, iv = iv, tolerance = 1.0E-8) println("Fixed point = $(sol.value), and |f(x) - x| = $(sol.normdiff) in $(sol.iter)" * " iterations") This also works without any modifications with the fixedpoint library function using NLsolve p = [1.0, 2.0, 0.1] β = 0.9 iv = [0.8, 2.0, 51.0] f(v) = p .+ β * v sol = fixedpoint(v -> p .+ β * v, iv) println("Fixed point = $(sol.zero), and |f(x) - x| = $(norm(f(sol.zero) - sol.zero)) in " * "$(sol.iterations) iterations") Finally, to demonstrate the importance of composing different libraries, use a StaticArrays.jl type, which provides an efficient implementation for small arrays and matrices using NLsolve, StaticArrays p = @SVector [1.0, 2.0, 0.1] β = 0.9 iv = @SVector [0.8, 2.0, 51.0] f(v) = p .+ β * v sol = fixedpoint(v -> p .+ β * v, iv) println("Fixed point = $(sol.zero), and |f(x) - x| = $(norm(f(sol.zero) - sol.zero)) in " * "$(sol.iterations) iterations") #### Exercise 3 Consider a circle with diameter 1 embedded in a unit square Let $A$ be its area and let $r = 1/2$ be its radius If we know $\pi$ then we can compute $A$ via $A = \pi r^2$ But the point here is to compute $\pi$, which we can do by n = 1000000 count = 0 for i in 1:n u, v = rand(2) d = sqrt((u - 0.5)^2 + (v - 0.5)^2) # distance from middle of square if d < 0.5 count += 1 end end area_estimate = count / n print(area_estimate * 4) # dividing by radius^2 #### Exercise 4 payoff = 0 count = 0 print("Count = ") for i in 1:10 U = rand() if U < 0.5 count += 1 else count = 0 end print(count) if count == 3 payoff = 1 end end println("\npayoff = $payoff") We can simplify this somewhat using the ternary
operator. Here are some examples a = 1 < 2 ? "foo" : "bar" a = 1 > 2 ? "foo" : "bar" Using this construction: payoff = 0.0 count = 0.0 print("Count = ") for i in 1:10 U = rand() count = U < 0.5 ? count + 1 : 0 print(count) if count == 3 payoff = 1 end end println("\npayoff = $payoff") #### Exercise 5 Here’s one solution using Plots gr(fmt=:png); # setting for easier display in jupyter notebooks α = 0.9 n = 200 x = zeros(n + 1) for t in 1:n x[t+1] = α * x[t] + randn() end plot(x) #### Exercise 6 αs = [0.0, 0.8, 0.98] n = 200 p = plot() # naming a plot to add to for α in αs x = zeros(n + 1) x[1] = 0.0 for t in 1:n x[t+1] = α * x[t] + randn() end plot!(p, x, label = "alpha = $α") end p
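As a cross-check on the logic of Exercises 5 and 6, the same AR(1) recursion can be sketched in plain Python (an illustrative aside; the function name ar1_path is made up here, and no plotting is attempted):

```python
import random

random.seed(1234)  # fixed seed for reproducibility

def ar1_path(alpha, n, x0=0.0):
    """Simulate x_{t+1} = alpha * x_t + eps_{t+1} with standard normal shocks."""
    x = [x0]
    for _ in range(n):
        x.append(alpha * x[-1] + random.gauss(0, 1))
    return x

# one path per value of alpha, mirroring the loop over αs in Exercise 6
paths = {alpha: ar1_path(alpha, 200) for alpha in (0.0, 0.8, 0.98)}
assert all(len(p) == 201 for p in paths.values())
```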
cumplyr: Extending the plyr Package to Handle Cross-Dependencies Introduction For me, Hadley Wickham's reshape and plyr packages are invaluable because they encapsulate omnipresent design patterns in statistical computing: reshape handles switching between the different possible representations of the same underlying data, while plyr automates what Hadley calls the Split-Apply-Combine strategy, in which you split up your data into several subsets, perform some computation on each of these subsets and then combine the results into a new data set. Many of the computations implicit in traditional statistical theory are easily described in this fashion: for example, comparing the means of two groups is computationally equivalent to splitting a data set of individual observations up into subsets based on the group assignments, applying mean to those subsets and then pooling the results back together again. The Split-Apply-Combine Strategy is Broader than plyr The only weakness of plyr, which automates so many of the computations that instantiate the Split-Apply-Combine strategy, is that it implements one very specific version of that strategy: plyr always splits your data into disjoint subsets. By disjoint, I mean that any row of the original data set can occur in only one of the subsets created by the splitting function. For computations that involve cross-dependencies between observations, this makes plyr inapplicable: cumulative quantities like running means and broadly local quantities like kernelized means cannot be computed using plyr. To highlight that concern, let’s consider three very simple data analysis problems.
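Before turning to those problems, the disjoint version of the strategy that plyr automates can be made concrete in a few lines of Python (a hypothetical illustration of the strategy itself, not of plyr's interface):

```python
from collections import defaultdict

# Split: partition (group, value) rows into disjoint subsets by group
rows = [("a", 1), ("a", 3), ("b", 5), ("b", 7)]
groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

# Apply a summary to each subset, then Combine into a single result
group_means = {key: sum(vals) / len(vals) for key, vals in groups.items()}
assert group_means == {"a": 2.0, "b": 6.0}
```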
Computing Forward-Running Means Suppose that you have the following data set:

Time Value
1    1
2    3
3    5

To compute a forward-running mean, you need to split this data into three subsets:

Time Value
1    1

Time Value
1    1
2    3

Time Value
1    1
2    3
3    5

In each of these clearly non-disjoint subsets, you would then compute the mean of Value and combine the results to give:

Time Value
1    1
2    2
3    3

This sort of computation occurs often enough in a simpler form that R provides tools like cumsum and cumprod to deal with cumulative quantities. But the splitting problem in our example is not addressed by those tools, nor by plyr, because the cumulative quantities have to be computed on subsets that are not disjoint. Computing Backward-Running Means Consider performing the same sort of calculation as described above, but moving in the opposite direction. In that case, the three non-disjoint subsets are:

Time Value
3    5

Time Value
2    3
3    5

Time Value
1    1
2    3
3    5

And the final result is:

Time Value
1    3
2    4
3    5

Computing Local Means (AKA Kernelized Means) Imagine that, instead of looking forward or backward, we only want to know something about data that is close to the current observation being examined. For example, we might want to know the mean value of each row when pooled with its immediately preceding and succeeding neighbors. This computation must create the following subsets of data:

Time Value
1    1
2    3

Time Value
1    1
2    3
3    5

Time Value
2    3
3    5

Within these non-disjoint subsets, means are computed and the result is:

Time Value
1    2
2    3
3    4

A Strategy for Handling Non-Disjoint Subsets How can we build a general purpose tool to handle these sorts of computations? One way is to rethink how plyr works and then extend it with some trivial variations on its core principles.
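The three computations above are easy to express directly once overlapping subsets are allowed; the following Python sketch (a plain illustration, not cumplyr's interface) reproduces the three result tables:

```python
values = [1, 3, 5]  # the Value column, indexed by Time = 1, 2, 3

def mean(xs):
    return sum(xs) / len(xs)

# forward-running: mean over rows with Time <= current Time
forward = [mean(values[: i + 1]) for i in range(len(values))]
# backward-running: mean over rows with Time >= current Time
backward = [mean(values[i:]) for i in range(len(values))]
# local: mean over the row and its immediate neighbors
local = [mean(values[max(0, i - 1): i + 2]) for i in range(len(values))]

assert forward == [1.0, 2.0, 3.0]
assert backward == [3.0, 4.0, 5.0]
assert local == [2.0, 3.0, 4.0]
```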
We can envision plyr as a system that uses a splitting operation that partitions our data into subsets in which each subset satisfies a group of equality constraints: you split the data into groups in which Variable 1 = Value 1 AND Variable 2 = Value 2, etc. Because you consider the conjunction of several equality constraints, the resulting subsets are disjoint. Seen in this fashion, there is a simple relaxation of the equality constraints that allows us to solve the three problems described a moment ago: instead of looking at the conjunction of equality constraints, we use a conjunction of inequality constraints. For the time being, I’ll describe just three instantiations of this broader strategy. Using Upper Bounds Here, we divide data into groups in which Variable 1 <= Value 1 AND Variable 2 <= Value 2, etc. We will also allow equality constraints, so that the operations of plyr are a strict subset of the computations in this new model. For example, we might use the constraint Variable 1 = Value 1 AND Variable 2 <= Value 2. If the upper bound is the Time variable, these constraints will allow us to compute the forward-moving mean we described earlier. Using Lower Bounds Instead of using upper bounds, we can use lower bounds to divide data into groups in which Variable 1 >= Value 1 AND Variable 2 >= Value 2, etc. This allows us to implement the backward-moving mean described earlier. Using Norm Balls Finally, we can consider a combination of upper and lower bounds. For simplicity, we'll assume that these bounds have a fixed tightness around the "center" of each subset of our split data. To articulate this tightness formally, we look at a specific hypothetical equality constraint like Variable 1 = Value 1 and then loosen it so that norm(Variable 1 - Value 1) <= r. When r = 0, this system gives the original equality constraint. But when r > 0, we produce a "ball" of data around the constraint whose tightness is r.
This lets us estimate the local means from our third example. Implementation To demo these ideas in a usable fashion, I've created a draft package for R called cumplyr. Here is an extended example of its usage in solving simple variants of the problems described in this post:

library('cumplyr')

data <- data.frame(Time = 1:5, Value = seq(1, 9, by = 2))

iddply(data,
       equality.variables = c('Time'),
       lower.bound.variables = c(),
       upper.bound.variables = c(),
       norm.ball.variables = list(),
       func = function (df) {with(df, mean(Value))})

iddply(data,
       equality.variables = c(),
       lower.bound.variables = c('Time'),
       upper.bound.variables = c(),
       norm.ball.variables = list(),
       func = function (df) {with(df, mean(Value))})

iddply(data,
       equality.variables = c(),
       lower.bound.variables = c(),
       upper.bound.variables = c('Time'),
       norm.ball.variables = list(),
       func = function (df) {with(df, mean(Value))})

iddply(data,
       equality.variables = c(),
       lower.bound.variables = c(),
       upper.bound.variables = c(),
       norm.ball.variables = list('Time' = 1),
       func = function (df) {with(df, mean(Value))})

iddply(data,
       equality.variables = c(),
       lower.bound.variables = c(),
       upper.bound.variables = c(),
       norm.ball.variables = list('Time' = 2),
       func = function (df) {with(df, mean(Value))})

iddply(data,
       equality.variables = c(),
       lower.bound.variables = c(),
       upper.bound.variables = c(),
       norm.ball.variables = list('Time' = 5),
       func = function (df) {with(df, mean(Value))})

You can download this package from GitHub and play with it to see whether it helps you. Please submit feedback using GitHub if you have any comments, complaints or patches. Comparing plyr with cumplyr In the long run, I'm hoping to make the functions in cumplyr robust enough to submit a patch to plyr.
I see these tools as one logical extension of plyr to encompass more of the framework described in Hadley's paper on the Split-Apply-Combine strategy. For the time being, I would advise any users of cumplyr to make sure that you do not use cumplyr for anything that plyr could already do. cumplyr is very much demo software and I am certain that both its API and implementation will change. In contrast, plyr is fast and stable software that can be trusted to perform its job. But, if you have a problem that cumplyr will solve and plyr will not, I hope you'll try cumplyr out and submit patches when it breaks. Happy hacking! 12 responses to “cumplyr: Extending the plyr Package to Handle Cross-Dependencies” 1. Very cool. I think you can use vectors instead of lists in the norm.ball: c('Time' = 1, 'Cats' = 7) should work fine. What are the advantages of using four parameters instead of some sort of DSL? iddply(data, ~ eq(Time, 0) + gt(Space, 1) + lt(Cats, 5) + norm(Dogs, 3), func…) -Harlan 2. Oh, right. iddply(data, ~ eq(Time) + gt(Space) + lt(Cats) + norm(Dogs, 3), func…) This said, I’m not sure what the above would mean! Intersection? What does your existing code do if you give it: iddply(data, equality.variables = c(), lower.bound.variables = c('Time'), upper.bound.variables = c('Space'), norm.ball.variables = list(), func = function (df) {with(df, mean(Value))}) 3. Nice. I’ve wanted this on several occasions. Would love to have a similar UDAF for Hive. 4. Thanks, Dean! When I settle upon the best algorithm for performing this task in R, I’d be happy to advise someone who wanted to create this sort of UDAF for Hive. 5. This is really interesting. I love plyr but have run across this issue quite a few times and usually just mangle it using ts() objects and for loops. As I’m sure you are aware, there is some built in functionality for this type of thing if you are working with a time series (ts) object.
However, I don’t like ts() objects and prefer to keep things in nice neat data.frames with proper col and row names that are preserved after I split-apply-combine. I think you are really tapping into a need here. 6. Thanks, Frank! I did not realize that ts() objects provided this sort of functionality. Thankfully, I also agree with you that it would be better in the long-term to provide a mechanism for handling this problems without using ts() objects, so I hope that I make progress getting code for this approach that runs acceptably fast and produces accurate results. Glad that you think there’s a real need for this. 7. Very nice. Perhaps data.table could be extended in a similar way. It already has a feature where the groups are determined by ‘i’ rather than by ‘by’. These groups may be overlapping. As you may know, data.table doesn’t split-apply-combine because that can be slow for larger datasets. So a side effect is that it already handles overlapping groups, efficiently. Upper and lower bound queries may be suited to data.table’s sorted keys too; e.g. it has ‘mult’ and ‘roll’ arguments. It reminds me of feature request #203 (from 2008) but we didn’t have a really good application for it, until now perhaps. That one was to allow each i column to be a set of (lower,upper) ranges. Let me know if you’d like to collaborate on something. (I should state I’m a co-author of data.table). 8. Hi Matthew, I’d love to talk with you about this more. Earlier this morning I was trying to use data.table to handle the subsetting operations because they’re very slow on even small datasets, but I’m not yet familiar enough with data.table to get it to work. It would be great to have input from you about those details, as well as someone to discuss theoretical questions with about how a system like cumplyr should work. 
For example, if an item in the theoretical Cartesian product between the columns satisfies the inequality constraints, but does not occur in the empirical Cartesian product (defined by equalities), should that row exist in the output or not? I hadn’t even considered that question until this morning, but it will come up again and again in practice, and both approaches seem to have defensible use cases. It would be nice to agree upon a proper set of functionality and a core algorithm for providing that functionality efficiently. With that in place, it should be easy to add this idea as a feature to data.table. I’ll e-mail you to follow up.

9. Thanks, Harlan! I’ll try out vectors and see if that’s cleaner. I didn’t know how to use R’s formula notation well enough to use it, but I think a DSL would be cleaner in the long run. Right now I’m stumbling upon basic bugs (like my code failing when the variables aren’t numeric), but I’ll put the use of formulas on my TODO list.
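For readers less familiar with R, the cross-dependent grouping that cumplyr aims at can be illustrated in a few lines of Python. This is a conceptual sketch only, not cumplyr's actual semantics or API: for a lower-bound style variable, the group for each value t of Time contains every row with Time <= t, and the summary function is applied to that growing window.

```python
# Illustrative sketch (NOT cumplyr's API): cumulative split-apply-combine.
# For each distinct Time t, the group contains every row with Time <= t,
# so groups overlap instead of partitioning the data as in plain plyr.

def cumulative_apply(rows, key, func):
    """rows: list of dicts; key: column defining cumulative groups;
    func: callable mapping a list of rows to a summary value."""
    levels = sorted({r[key] for r in rows})
    out = []
    for t in levels:
        window = [r for r in rows if r[key] <= t]  # overlapping group
        out.append({key: t, "value": func(window)})
    return out

data = [
    {"Time": 1, "Value": 2.0},
    {"Time": 2, "Value": 4.0},
    {"Time": 3, "Value": 6.0},
]

# Cumulative mean of Value up to each Time
result = cumulative_apply(
    data, "Time", lambda w: sum(r["Value"] for r in w) / len(w)
)
# result: [{'Time': 1, 'value': 2.0}, {'Time': 2, 'value': 3.0},
#          {'Time': 3, 'value': 4.0}]
```

The open question from comment 8 shows up here too: the sketch only emits groups for Time values that occur empirically, whereas a theoretical-Cartesian-product variant would emit one group per candidate level regardless.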
## Introduction

The deployment of photonic quantum networks over large distances introduces losses that eventually hamper the network's usefulness for quantum computation [1] or secure quantum communication [2]. Implementing quantum memory in the network allows for synchronization between operations with low success probabilities (such as single-photon generation, entanglement generation and swapping), drastically improving the overall success rate of the network operation. For short-distance networks, the crucial figure of merit is the memory time-bandwidth product [3]. While this remains important for longer distances, the main limitation for the range of the network is the memory lifetime, since the coherence needs to be maintained as the photons propagate between the nodes of the network [4]. The realization of quantum memories for quantum repeater (QR) schemes has been studied extensively [4]. QRs can alleviate the losses in the optical fibres used to distribute quantum information over long distances, thereby increasing the distance over which entanglement can be efficiently distributed by means of entanglement swapping [5]. Many attempts to realize such schemes are based on the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for atomic ensembles [6], where quantum information is stored in collective degrees of freedom of the ensembles. Since the first experimental realizations of the DLCZ protocol [7,8] more than a decade ago, frequent improvements in cold atomic ensembles have been reported [9-20], with memory times reaching 0.22 s [19] and retrieval efficiencies up to 84% [18]. Progress has recently also been shown in solid-state systems, particularly in rare-earth-doped crystals [21-24]. However, cryogenic cooling is required for these platforms. Room-temperature systems offer reliability and scalability, as they do not need a cooling apparatus.
Spin coherence with timescales of seconds in nitrogen-vacancy (NV) centres in diamond [25], and of minutes with atomic vapour in anti-relaxation-coated glass containers [26], has been demonstrated at room temperature. Still, coherent optical interaction with NV centres at room temperature remains a challenge [27] due to severely broadened optical transitions. These memories can therefore not directly be employed for quantum communication. Broadband, short-lived quantum memories have been demonstrated in warm vapours [28,29], but thermal atomic motion impedes long life spans of the generated collective excitations or stored light [30-32], since atoms rapidly leave the interaction region. The utilization of buffer gas to slow down atomic diffusion has allowed the light storage duration at the few-photon level to be extended to 20 μs [33]. At the single-photon level, non-classical DLCZ-type correlations have been reported with buffer gas [34-37], but with a lifetime limited to a few microseconds. Anti-relaxation coating of the container walls has enabled continuous-variable quantum memory of a few milliseconds [38] and classical light storage up to 0.43 s [39], but non-classical correlations for single excitations on such time scales remain to be observed. To extend the storage time, the principle of motional averaging was introduced in ref. [40]. As opposed to the previous studies, which operated in the regime where atoms remain in the interaction region throughout the experiment, the motional-averaging scheme operates in the complete opposite regime, where atoms rapidly leave. By extending the interaction time so that atoms traverse the interaction region multiple times, however, the average interaction of each atom with the light becomes the same, enabling coherent interaction with the symmetric collective atomic mode used for storage.
In this work, we use motional averaging to demonstrate a lifetime of 0.27 ± 0.04 ms by observing a slowly decaying retrieval efficiency as the readout delay is increased. We confirm the non-classicality by observing a violation of the Cauchy–Schwarz inequality for field intensities [41]. The readout fidelity in our system is limited by excess noise in the readout process, leading to a high probability of detection events that do not originate from conversion of the collective excitation. We identify part of this noise as four-wave mixing (FWM) [32,42-44]. The motional-averaging approach could serve as a solution toward the implementation of scalable quantum memories for applications such as spatially multiplexed quantum networks, or deterministic single-photon sources for quantum information processing [45-47]. To the best of our knowledge, it constitutes the only viable route to a room-temperature QR without any need for cooling.

## Results

### Experimental setup

A vapour cell filled with caesium atoms, placed in a homogeneous magnetic field, is the basis for our experiment (Fig. 1a). Paraffin coating of the cell walls preserves the atomic spin coherence over hundreds of wall collisions. The cell is aligned within a low-finesse ($${\cal F} \approx 18$$) asymmetric cell cavity, enhancing the light-atom interaction. The light leaving the cell cavity passes through polarization and spectral filtering stages before detection by a single-photon counter (Methods section). We initialize the caesium atoms via optical pumping into $$\left| g \right\rangle \equiv \left| {F = 4,m_{\rm{F}} = 4} \right\rangle$$. A far-detuned, weak excitation pulse, linearly polarized perpendicular to the magnetic field, randomly scatters a photon via spontaneous Raman scattering (Fig. 1b).
We herald the creation of a long-lived symmetric Dicke state [48] in $$\left| s \right\rangle \equiv \left| {F = 4,m_{\rm{F}} = 3} \right\rangle$$ upon detection of such a photon scattered into the cell cavity mode. Since the transverse Gaussian profile of the cavity mode is narrower than the cell width, such detection events tend to be associated with asymmetric collective excitations distributed only over the atoms inside the beam at the time of detection. These collective excitations have a very limited lifetime, as atoms move and leave the beam. We overcome this by using motional averaging [40], extending the duration of the single-photon wave packet and thus allowing the atoms to cross the excitation beam several times. This is achieved by a narrow-band spectral filter consisting of two optical cavities. The spectral filter adds a random delay to the heralding photon, thus erasing “which path” information and ensuring that a detection event is equally likely to originate from any of the atoms, resulting in a long-lived symmetric collective excitation. The other purpose of the filtering cavities is to separate the excitation light from the scattered photon. The size of the vapour cell is thus subject to a trade-off between lifetime and motional-averaging time. For a larger cross-section, the atomic coherence time T2 is longer due to the lower rate of wall collisions. On the other hand, to achieve motional averaging, atoms must return to the beam many times. Thus, the time needed for motional averaging increases with the cross-section, introducing technical difficulty, as the spectral filter must be even narrower. After a controllable delay τD, the collective excitation is converted into a readout photon by a second, far-detuned pulse (Fig. 1c). Creating the collective excitation between Zeeman levels allows us to profit from their long coherence times.
However, the small Zeeman splitting of νZ = 2.4 MHz presents a challenge for filtering out the excitation light. With our setup we achieve a suppression of the excitation light, relative to the desired photon transmission, of nine orders of magnitude. The read excitation light is chosen such that the readout photon is similar to the heralding photon in frequency and polarization. Thus only a single filtering and detection setup is required for both heralding and readout. We start our experimental sequence by locking all cavities and initially optically pumping the atoms (Fig. 1d). The following cycle, comprising optical pumping for state re-initialization followed by write and read excitation pulses with a controllable delay, is repeated up to 55 times before the sequence restarts, resulting in an average experimental repetition rate of up to 1 kHz.

### Spectrum of scattered photons

First, we analyse the spectrum of the scattered photons by varying the resonance frequency of the spectral filter. A weak write excitation pulse with a duration of about 33 μs is sent, and the photons transmitted through the filtering stages are detected (Fig. 2a). The frequency of the scattered photons is blue-detuned by νZ with respect to the write excitation. We observe a narrow-band component associated with the symmetric Dicke state, above a broad background which is due to scattering associated with short-lived asymmetric excitations of the atoms. The width of the narrow peak is determined by the width of the spectral filter. We define the write efficiency as the ratio of these contributions, which is ηW = (63 ± 1)%. It corresponds to the probability of having created a symmetric Dicke state upon detection of a scattered photon during the write process. The mean number of counts per pulse at zero detuning of 0.014 leads to 0.23 scattered photons per pulse in the cell cavity mode after correction for the detection efficiency and the escape efficiency out of the cell cavity.
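As a consistency check, the quoted 0.23 photons per pulse follows directly from the detected counts and the efficiencies given in the Methods section (detection efficiency ~9.6%, cell cavity escape efficiency ~62%):

```python
# Back out the scattered photons per pulse in the cell cavity mode from
# the detected counts, using the efficiencies quoted in the Methods.
counts_per_pulse = 0.014  # detected counts per pulse at zero detuning
eta_det = 0.096           # cell cavity output -> detector, incl. quantum efficiency
eta_esc = 0.62            # escape efficiency out of the cell cavity

photons_per_pulse = counts_per_pulse / (eta_det * eta_esc)
print(f"{photons_per_pulse:.3f}")  # ~0.235, consistent with the quoted 0.23
```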
Counts from leakage of the excitation pulse are completely suppressed by polarization and spectral filtering. Further background counts are negligible during the write pulse. A read pulse with a similar energy is sent after the end of the write pulse, with τD = 30 μs. The frequency of the read pulse is blue-detuned by 2 × νZ from the write pulse, such that the desired readout photons have the same frequency as the heralding photons. Scanning the filter resonance, we observe a narrow peak above a broad background and an extra noise component (Fig. 2b). The narrow peak contains the retrieved photons, while the extra noise is due to the read excitation light leaking through the spectral filter. The leakage rate depends on the detuning of the filter from the excitation-light frequency at ΔFC = +νZ, which leads to an asymmetry in the spectrum. Due to linear birefringence caused by the detuning-dependent atom–light interaction, and different phase shifts for the write and read excitation pulses arising from the temporal decay of the initial atomic state, the polarization filtering cannot be optimized for the write and read excitation light simultaneously. We chose to optimize it for the write process, leading to stronger leakage noise for the readout. When repeating the same experiment without a write excitation pulse, we only detect background counts during the write detection window, while we still observe a significant contribution in the number of read detection events. This is partly because the splitting of the two ground states is small compared to the detuning from the excited states, such that the read excitation field couples $$\left| g \right\rangle$$ and $$\left| s \right\rangle$$ via both the $$\left| {m_{\rm{F}}^\prime = 3} \right\rangle$$ (dashed transition in Fig. 1c) and $$\left| {m_{\rm{F}}^\prime = 4} \right\rangle$$ excited manifolds with comparable strength.
The read excitation thus creates atomic excitations through transitions from $$\left| {m_{\rm{F}} = 4} \right\rangle$$ to $$\left| {m_{\rm{F}} = 3} \right\rangle$$ and simultaneously reads them out by driving them back. This FWM process leads to short-lived non-classical correlations, which, however, cannot be resolved with our setup and are mixed with the long-lived correlations generated by the write pulse. Hence, those otherwise interesting correlations [49,50] have to be considered as noise here. When we include the write excitation pulse, we observe an increase in detection events from the readout of the excitation generated in the write step. We observe that this desired readout and the FWM noise are spectrally indistinguishable. Due to the large background, only approximately 1 in 5 counts is due to the desired readout of write excitations. When conditioning on the detection of a write photon, however, the ratio increases to approximately one half for the first few tens of microseconds (see below for a detailed analysis), indicating a strong correlation between the read and write processes. As we will now show, these correlations are non-classical.

### Long-lived non-classical correlations

In order to verify the quantum nature of the scheme we test for a violation of the Cauchy–Schwarz inequality $$R = ( {g_{{\mathrm{wr}}}^{(2)}} )^2{\mathrm{/}}( {g_{{\mathrm{ww}}}^{(2)}g_{{\mathrm{rr}}}^{(2)}} ) \le 1$$, where the subscripts ww and rr refer to the normalized second-order auto-correlation functions for the write and read fields, and the subscript wr to the cross-correlation between the write field and the following read field [7]. A nice feature of our system is that the single-photon wave packets have a long duration, set by the inverse bandwidth of the filter cavity, which is much longer than the detector dead time. This makes it possible to distinguish photon number states with a single detector.
The correlation functions are then calculated from the average number of counts according to $$g_{ij}^{(2)} = \langle {n_i( {n_j - \delta _{ij}} )} \rangle {\mathrm{/}}( {\langle {n_i} \rangle \langle {n_j} \rangle } )$$ with i, j ∈ {w, r}, where nw (nr) is the number of detector clicks during the write (read) process and δij is the Kronecker delta accounting for the non-commuting annihilation operators appearing in the auto-correlation functions. In the experiment we send read pulses with a 200 μs duration, and vary the integration time τR for the read detection window. We define the retrieval efficiency as ηR = 〈nr|w〉 − 〈nr〉, the heralded readout probability minus the unconditional readout probability. Figure 3a shows how the trade-off between R and ηR varies with τR. In the following, we set τR to only 40 μs in order to increase the signal-to-noise ratio. At ΔFC = 0 and for τD = 30 μs, we observe R = 1.4 ± 0.1, confirming the non-classicality of the scheme within four standard deviations. For the same parameters, we measure ηR = (1.55 ± 0.08)%, leading to an intrinsic retrieval efficiency $$\eta _{\mathrm{R}}^{\mathrm{i}}$$ = (16.1 ± 0.9)% at the cell cavity output when correcting for the transmission loss and detector quantum efficiency. For a pure two-mode squeezed state, expected in this type of protocol in the absence of noise, theory predicts thermal auto-correlation functions $$( {g_{ii}^{(2)} = 2})$$, hence $$g_{{\mathrm{wr}}}^{(2)} > 2$$ is required to violate the Cauchy–Schwarz inequality [7]. We find significantly lower auto-correlation values, $$g_{{\mathrm{ww}}}^{(2)}$$ = 1.86 ± 0.07 and $$g_{{\mathrm{rr}}}^{(2)}$$ = 1.45 ± 0.05, allowing us to achieve non-classicality with our measured value $$g_{{\mathrm{wr}}}^{(2)}$$ = 1.97 ± 0.05. We attribute the reduced auto-correlations to leakage of the read drive pulse and to the mixing of two independent thermal processes in the write step.
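The correlation estimators above map directly onto code. The following sketch uses illustrative, independently generated counts (so R comes out near 1 here rather than the measured 1.4); it shows how the auto- and cross-correlations and the Cauchy–Schwarz parameter are computed from per-trial click numbers, and how the measured retrieval efficiency is corrected by the detection efficiency from the Methods:

```python
import numpy as np

def g2(ni, nj, same_mode):
    """Second-order correlation from per-trial counts:
    <n_i (n_j - delta_ij)> / (<n_i><n_j>); the Kronecker delta (same_mode)
    accounts for the non-commuting annihilation operators in the
    auto-correlations."""
    ni = np.asarray(ni, dtype=float)
    nj = np.asarray(nj, dtype=float)
    delta = 1.0 if same_mode else 0.0
    return np.mean(ni * (nj - delta)) / (np.mean(ni) * np.mean(nj))

# Illustrative per-trial click numbers (independent, hence uncorrelated)
rng = np.random.default_rng(0)
n_w = rng.poisson(0.05, size=100_000)  # clicks in write windows
n_r = rng.poisson(0.05, size=100_000)  # clicks in read windows

g_ww = g2(n_w, n_w, same_mode=True)
g_rr = g2(n_r, n_r, same_mode=True)
g_wr = g2(n_w, n_r, same_mode=False)
R = g_wr**2 / (g_ww * g_rr)  # Cauchy-Schwarz parameter; R > 1 is non-classical

# Intrinsic retrieval efficiency: correct the measured eta_R by the
# detection efficiency (~9.6%, Methods section)
eta_R = 0.0155
eta_R_intrinsic = eta_R / 0.096  # ~0.161, matching the quoted (16.1 +/- 0.9)%
```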
We observe an increased value of the cross-correlation function, $$\tilde g_{{\mathrm{wr}}}^{(2)}$$ = 2.08 ± 0.07, when using only the last 20 μs of the write pulse. We attribute this to the shorter effective delay between the write and readout photons. However, the reduced photon statistics in this case do not allow us to extend this analysis to all of our data. In Fig. 3b we show the decay of the retrieval efficiency as the write–read delay τD increases. From an exponential fit we extract a memory lifetime of τ = 0.27 ± 0.04 ms, which by far exceeds previously reported memory times at the single-photon level for room-temperature atomic vapour memories [36,37]. The collective excitation lifetime is expected to be half of the transverse macroscopic spin amplitude decay time, separately measured to be T2 = 0.8 ms (Methods section) and governed by spin relaxation due to wall collisions. It should be noted that our implementation does not represent a single-photon source, due to the excessive photon counts from FWM during the read pulse. When conditioning on a detected heralding event we observe a readout auto-correlation $$g_{{\mathrm{rr}}|{\mathrm{w}}}^{(2)}$$ = 1.3 ± 0.2.

### Temporal shape of the readout photons

To determine the nature and weight of the undesirable components limiting the fidelity of the readout photons, we fit their temporal shape using a model adapted from ref. [51]. According to the model, the detected readout photons have two contributions: a desired part from the readout of the atomic excitations created during the write process, and the unwanted result of the FWM process depicted in Fig. 1c, present even in the absence of the write step. The photons scattered from $$\left| {F^\prime ,m_{\rm{F}}^\prime = 3} \right\rangle$$ to $$\left| s \right\rangle$$ are not resonant with the filtering cavities and are thus not detected.
The photons scattered on the $$\left| {F^\prime ,m_{\rm{F}}^\prime = 4} \right\rangle$$ to $$\left| g \right\rangle$$ transition, however, are indistinguishable from the desired read photons and lead to spurious detection events spoiling the fidelity of the readout. The model includes a noise offset comprising a constant term accounting for background and dark counts, and a power- and time-dependent term accounting for contamination from the drive leaking through the polarization and spectral filtering stages (see Supplementary Methods). The temporal shape of the detected readout photons is shown in Fig. 4, together with the model. Figure 4a represents the unconditional detection events, while Fig. 4b represents the heralded detection events conditioned on one or more write detection events, for the same data set. The values are normalized by the duration of the time bins, and by the total number of pulses in Fig. 4a and the number of trials with one or more write detection events in Fig. 4b. The total number of trials is 3,248,135, and the total number of heralding events is 45,774. In both graphs, we use common parameters except for the mean number of collective excitations created during the write step, which we estimate from the number of write detection events and the total detection efficiency. We note that the model agrees well with the data if we add a constant term (blue line). The origin of this spectrally narrow contribution C is not fully understood. Its relative fraction compared to the broadband noise contribution B is similar to the corresponding fraction during the write process, with ηW ≈ C/(B + C). This suggests a common origin of the narrow-band and broadband noise during the read process, and could be explained by scattering from atoms residing in Zeeman states other than mF = 4.
However, for the state initialization we achieve, we would expect a contribution that is about a quarter of what is observed, suggesting that modifications to the FWM model are needed. See Supplementary Methods for a detailed description of the model.

## Discussion

We have realized an efficient heralded light source based on an atomic ensemble at room temperature, demonstrating a long single collective excitation lifetime of 0.27 ± 0.04 ms and a generation efficiency of (63 ± 1)%. This lifetime could be extended significantly by employing a cell with a longer T2 time. We have demonstrated the non-classicality of the light–matter correlations by observing a violation of the Cauchy–Schwarz inequality by four standard deviations. Even though the utility of these results is so far limited by excess noise from the leakage of the excitation light, FWM and other noise sources, we highlight that there are possible routes to suppress FWM by modifying the excitation scheme, as suggested in the following. The FWM contribution can be greatly reduced by using hyperfine instead of Zeeman storage, or suppressed by elaborate cavity design [29,52]. Alternatively, the FWM can be eliminated in our setup by exciting the ensemble with circularly polarized light propagating along the magnetic field and storing the collective excitation in $$\left| {F = 4,m_{\rm{F}} = 2} \right\rangle$$. The suppression of this two-photon transition for large detunings [43] can be mitigated by using the caesium D1 line and an appropriate choice of detuning on the order of the excited-state hyperfine splitting. For the unexplained noise source, further investigation is required to determine to what extent it can be reduced. Further reduction of the remaining leakage will be possible by adding another filter cavity or by narrowing the filtering bandwidth. This narrowing will at the same time further improve the write efficiency.
Finally, active control of the polarization of the light at the cavity output could allow us to maximize the extinction of the polarization filter at all times. Even in the absence of the noise suppression, our work demonstrates that long-lived collective excitations can be efficiently heralded and retrieved. With such improvements, our system could form the basis for scalable room-temperature quantum repeaters.

## Methods

### Light

We use a home-built external-cavity diode laser at 852 nm that is locked to, and narrowed in linewidth (≤10 kHz) by, optical feedback from a triangular locking cavity. A slow feedback (<1 Hz) from a beat-note measurement of this excitation laser with a reference laser stabilized by atomic spectroscopy keeps the locking cavity resonance at a fixed detuning of 925 MHz from the 4–5′ transition of the D2 line of caesium. We send the excitation laser light through an acousto-optic modulator to pulse the light and choose the individual frequencies of the write and read pulses.

### Vapour cell

The caesium vapour cell has a square cross-section of 300 × 300 μm and a length of 10 mm. It is coated with a spin-preserving anti-relaxation layer of paraffin (alkane). It is aligned along the optical axis of a low-finesse ($${\cal F} \approx 18$$) cavity to enhance the light interaction. The losses of this “cell cavity” are dominated by the output coupler transmission. The vapour cell is inserted in a magnetic shield with internal coils that produce a homogeneous magnetic field perpendicular to the optical axis. We work at a Zeeman splitting frequency of νZ ≈ 2.4 MHz, where the dissipated power in the coils heats the vapour cell to around 43 °C. Under these conditions we identify a coherence time of the ground-state Zeeman levels of T2 ≈ 0.8 ms by performing a magneto-optical resonance spectroscopy measurement [53].
### Cavity stabilization

To stabilize the cell cavity length, we input frequency-modulated light from the reference laser and derive an error signal from the transmitted signal. The error signals for the filter cavities are acquired from the transmission of frequency-modulated light from the excitation laser in the counter-propagating direction. The length of each cavity is then stabilized by a feedback loop acting on the respective piezo-actuated mirror mount. The lock light for the filter cavities is blocked by a chopper during the optical pumping and experiment periods.

### Pumping

The atoms are initialized by circularly polarized, elliptically shaped pump and repump beams aligned along the magnetic field direction. The repump laser is locked on the F = 3 to F′ = 2, 3 crossover of the D2 line; the pump laser is locked on the F = 4 to F′ = 4 transition of the D1 line. We typically observe an atomic orientation of >98.5%.

### Filtering

We compensate for birefringence with a quarter-wave plate and a half-wave plate after the cell cavity, and achieve a polarization filtering on the order of 10−4 with a Glan–Thompson polarizer. Spectral filtering is achieved by two concatenated triangular cavities. The first filter cavity is a narrow-bandwidth cavity with a full width at half maximum (FWHM) of 66 kHz and an on-resonance transmission of 66%. The second filter cavity has an on-resonance transmission of 90% and a FWHM of 900 kHz. Together, the two cavities yield a spectral filtering of 7 × 10−6 at a detuning of 2.4 MHz. The cavities not only provide filtering but also enable the motional averaging [40]. They erase the 'which-atom' information by introducing a random delay due to the cavity photon lifetime.
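The combined suppression quoted above can be roughly reproduced from the two cavity parameters under the simplifying assumption of Lorentzian line shapes (an approximation for triangular cavities; the estimate lands at the same order of magnitude as the measured 7 × 10−6):

```python
# Rough estimate of the two-cavity spectral suppression at the 2.4 MHz
# Zeeman detuning, assuming a Lorentzian transmission profile
# T(d) = T0 / (1 + (2 d / FWHM)^2) for each cavity.
def lorentzian_transmission(detuning_khz, t_peak, fwhm_khz):
    return t_peak / (1.0 + (2.0 * detuning_khz / fwhm_khz) ** 2)

detuning = 2400.0  # kHz, the Zeeman splitting nu_Z
t1 = lorentzian_transmission(detuning, 0.66, 66.0)   # 66% peak, 66 kHz FWHM
t2 = lorentzian_transmission(detuning, 0.90, 900.0)  # 90% peak, 900 kHz FWHM
total = t1 * t2
print(f"{total:.1e}")  # ~4e-6, same order as the measured 7e-6
```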
### Detection efficiency

We measure the detection efficiency of the setup by sending a well-calibrated attenuated light pulse, with the same polarization and frequency as the scattered photons, through the system and calculating the ratio of the count rate to the input photon rate obtained from the known power. We obtain a mean value of about 9.6% from the output of the cell cavity onto the single-photon detector (model COUNT-10C from LASER COMPONENTS), including the detector's quantum efficiency.

### Cell cavity escape efficiency

We estimate the escape efficiency through the output coupler of the cell cavity from the transmission of this coupler (~20%) and the losses obtained from the finesse measurement. The obtained value is ~62%.

### Uncertainty estimation

To estimate the uncertainty of the correlation functions we implement a bootstrapping technique. For a set of write and read pulses we obtain a distribution of the number of write and read counts in each write–read sequence. We then draw samples of the same size as the data set from a probability distribution given by the data set. For each sample we calculate the value of the correlation functions, and as the number of samples increases, the variances of these bootstrap correlations converge. We find that the bootstrap correlations are close to normally distributed, and the uncertainty estimates are given by the square roots of the converged values of the variances.
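The bootstrapping procedure described above can be sketched in a few lines; the statistic and data here are illustrative stand-ins for the actual correlation functions and count records:

```python
import numpy as np

def bootstrap_uncertainty(data, statistic, n_resamples=1000, seed=0):
    """Draw resamples of the same size as the data set (with replacement),
    recompute the statistic for each, and report the standard deviation of
    the bootstrap distribution as the uncertainty estimate."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    values = [
        statistic(data[rng.integers(0, len(data), size=len(data))])
        for _ in range(n_resamples)
    ]
    return float(np.std(values))

# Illustrative: uncertainty of an auto-correlation-like statistic computed
# from simulated per-trial click numbers
counts = np.random.default_rng(1).poisson(0.05, size=50_000)
auto_corr = lambda n: np.mean(n * (n - 1.0)) / np.mean(n) ** 2
uncertainty = bootstrap_uncertainty(counts, auto_corr)
```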
m • D F Nous contacter 0 # Documents  14C20 | enregistrements trouvés : 6 O P Q Déposez votre fichier ici pour le déplacer vers cet enregistrement. ## Post-edited  Topics on $K3$ surfaces - Lecture 1: $K3$ surfaces in the Enriques Kodaira classification and examples Sarti, Alessandra (Auteur de la Conférence) | CIRM (Editeur ) Aim of the lecture is to give an introduction to $K3$ surfaces, that are special algebraic surfaces with an extremely rich geometry. The most easy example of such a surface is the Fermat quartic in complex three-dimensional space. The name $K3$ was given by André Weil in 1958 in honour of the three remarkable mathematicians: Kummer, Kähler and Kodaira and of the beautiful K2 mountain at Cachemire. The topics of the lecture are the following: * $K3$ surfaces in the Enriques-Kodaira classification. * Examples; Kummer surfaces. * Basic properties of $K3$ surfaces; Torelli theorem and surjectivity of the period map. * The study of automorphisms on $K3$ surfaces: basic facts, examples. * Symplectic automorphisms of $K3$ surfaces, classification, moduli spaces. Aim of the lecture is to give an introduction to $K3$ surfaces, that are special algebraic surfaces with an extremely rich geometry. The most easy example of such a surface is the Fermat quartic in complex three-dimensional space. The name $K3$ was given by André Weil in 1958 in honour of the three remarkable mathematicians: Kummer, Kähler and Kodaira and of the beautiful K2 mountain at Cachemire. The topics of the lecture are the following: * ... Déposez votre fichier ici pour le déplacer vers cet enregistrement. ## Multi angle  Topics on $K3$ surfaces - Lecture 4: Nèron-Severi group and automorphisms Sarti, Alessandra (Auteur de la Conférence) | CIRM (Editeur ) Aim of the lecture is to give an introduction to $K3$ surfaces, that are special algebraic surfaces with an extremely rich geometry. The most easy example of such a surface is the Fermat quartic in complex three-dimensional space. 
Aim of the lectures is to give an introduction to $K3$ surfaces, which are special algebraic surfaces with an extremely rich geometry. The easiest example of such a surface is the Fermat quartic in complex three-dimensional space. The name $K3$ was given by André Weil in 1958 in honour of three remarkable mathematicians, Kummer, Kähler and Kodaira, and of the beautiful K2 mountain in Kashmir. The topics of the lectures are the following:

* $K3$ surfaces in the Enriques-Kodaira classification.
* Examples; Kummer surfaces.
* Basic properties of $K3$ surfaces; Torelli theorem and surjectivity of the period map.
* The study of automorphisms on $K3$ surfaces: basic facts, examples.
* Symplectic automorphisms of $K3$ surfaces, classification, moduli spaces.

## Multi angle  Topics on $K3$ surfaces - Lecture 2: Kummer surfaces. Sarti, Alessandra (Speaker) | CIRM (Publisher)

## Multi angle  Topics on $K3$ surfaces - Lecture 3: Basic properties of $K3$ surfaces. Sarti, Alessandra (Speaker) | CIRM (Publisher)

## Multi angle  Topics on $K3$ surfaces - Lecture 5: Finite automorphism groups. Sarti, Alessandra (Speaker) | CIRM (Publisher)

## Multi angle  Topics on $K3$ surfaces - Lecture 6: Classification. Sarti, Alessandra (Speaker) | CIRM (Publisher)
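For concreteness, the Fermat quartic mentioned in the abstract can be written down explicitly (the equation below is standard and not part of the catalogue text): it is the surface in complex projective three-space cut out by

$$x_0^4 + x_1^4 + x_2^4 + x_3^4 = 0,$$

and, like every smooth quartic surface in $\mathbb{P}^3$, it is a $K3$ surface.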
## Multi-Armed Bandits 2: ε-Greedy and Non-Stationary Problems

Today we're going to address some of the problems with an ε-first exploration approach for multi-armed bandit problems. In the last post we saw how ε-first can perform very well on stationary problems, where the true value Q_* of each bandit arm (slot machine in our example) never changes. But in the real world we are often faced with problems where the true value of a choice changes over time. In these situations the ε-first exploration approach will not adapt to the changing environment, and will ignorantly keep selecting the same suboptimal action over and over. As with my previous post, you can follow along and run the code in this colab notebook. 📝

## Non-stationary problems:

A non-stationary problem is one where the underlying true value (q_*) of each bandit arm can gradually change over the course of an episode. Using our slot machines analogy, we can imagine that the slot machines start the day with a random average payout value, and this average payout value can slowly increase or decrease throughout the day. We model this by adapting our BanditProblem class to allow the arm values to change gradually. This is done by simulating 'random walks' for each arm: we draw a small random number from a normal distribution for each arm and add it to that arm's true value, so the true payouts can gradually go up or down for each arm.

```python
import numpy as np

class BanditProblem():
    def __init__(self, arms, seed, stationary=True):
        np.random.seed(seed)  # seed the RNG so experiments are repeatable
        self.stationary = stationary
        self.bandit_arms_values = np.random.normal(loc=0.0, scale=1.0, size=arms)
        self.optimal_action_index = np.argmax(self.bandit_arms_values)

    def draw_from_arm(self, arm_index):
        chose_optimal = 1 if arm_index == self.optimal_action_index else 0
        reward = np.random.normal(loc=self.bandit_arms_values[arm_index], scale=1.0)
        return reward, chose_optimal

    def step_arm_values(self):
        '''Step to be called manually in episode loop.'''
        q_star_value_shift = np.random.normal(loc=0.0, scale=0.01,
                                              size=len(self.bandit_arms_values))
        self.bandit_arms_values += q_star_value_shift
        self.optimal_action_index = np.argmax(self.bandit_arms_values)
```

This new logic is handled in the step_arm_values function above, which makes small changes to the true arm values and is called after the bandits are done drawing from the arms.

## Introducing ε-Greedy:

Our first attempt at tackling the non-stationary bandit problem uses the well-known ε-greedy approach. It is fairly simple and similar to ε-first, but with a small difference: instead of exploring randomly for some fixed number of steps, ε-greedy explores randomly some percentage of the time throughout the entire episode. This means ε-greedy never stops exploring some small portion of the time, determined by the value ε. This could be a small number, like 0.01, meaning that the agent explores randomly approximately 1% of the time. It also means that ε-greedy can start exploiting the best found option right away; there is no long period of exploration needed before exploitation. The incremental average update rule for the q-value estimate stays exactly the same as it was for ε-first; here it is as a reminder:

\begin{aligned} Q_{n+1} &= Q_n + \frac{1}{n}[R_n - Q_n] \end{aligned}

Adding this one simple change, introducing some small chance of exploring randomly on every step, is enough to allow the ε-greedy bandit to adapt to non-stationary problems, because it constantly updates its belief about the best choice by some small amount. So is ε-greedy 'better' than ε-first? Well, it depends. On a stationary problem where the values of the slot machines never change, ε-first is probably better (if you can afford the upfront exploration). On a non-stationary problem, ε-greedy will be better.
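Before moving on, it helps to see how far the arm values actually drift. The following standalone sketch (my own, not from the original notebook) random-walks ten arm values with the same step size (scale=0.01) used in step_arm_values:

```python
import numpy as np

rng = np.random.default_rng(0)
q_star = rng.normal(0.0, 1.0, size=10)   # initial true arm values
start = q_star.copy()
for _ in range(10_000):                  # one long episode
    # the same random walk as step_arm_values applies
    q_star += rng.normal(0.0, 0.01, size=10)

drift = np.abs(q_star - start)
# each arm will typically have drifted by O(1) after 10,000 steps,
# since the walk's standard deviation is 0.01 * sqrt(10000) = 1.0
print(drift.round(2))
```

A drift on the order of 1.0 is the same magnitude as the initial spread of the arm values, so over a long episode the identity of the best arm can change completely.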
Here's the code for an ε-greedy bandit:

```python
class EpsilonGreedyAgent():
    def __init__(self, epsilon, bandit_problem, alpha=0.1, update_type="incremental"):
        self.epsilon = epsilon
        self.alpha = alpha
        self.problem = bandit_problem
        self.update_type = update_type
        self.arm_qs = np.zeros(len(bandit_problem.bandit_arms_values))
        self.arm_ns = np.zeros(len(bandit_problem.bandit_arms_values))

    def choose_action(self):
        if np.random.rand() > self.epsilon:
            # greedily pull best arm
            choice = np.argmax(self.arm_qs)
        else:
            # explore: pull any random arm (still a chance to pull best arm too)
            choice = np.random.randint(0, len(self.arm_qs))
        self.arm_ns[choice] += 1
        reward, optimal = self.problem.draw_from_arm(choice)
        if self.update_type == "incremental":
            self.update_estimate_incremental(choice, reward)
        elif self.update_type == "weighted":
            self.update_estimate_weighted(choice, reward)
        return reward, optimal

    def update_estimate_incremental(self, choice_index, reward):
        self.arm_qs[choice_index] += (1 / self.arm_ns[choice_index]) * (reward - self.arm_qs[choice_index])

    def update_estimate_weighted(self, choice_index, reward):
        self.arm_qs[choice_index] += self.alpha * (reward - self.arm_qs[choice_index])
```

Once again most of the logic is handled in the choose_action function. Notice how at each step (choice) we draw a random real number between 0.0 and 1.0, and we exploit the best found arm if that random number is bigger than ε (epsilon). But if the random number is less than or equal to ε, then we explore (choose any random arm). Then we take note of the reward/payout received and update our estimate. There are two update types here, but the weighted one can be ignored for now; I'll explain it later.

## Results: ε-First vs ε-Greedy

So how does ε-greedy fare against ε-first on a stationary problem? See the graphs below: As expected, on our stationary 10-armed bandit problem the ε-first agent fares better than the ε-greedy agent.
This is because the values of the bandit arms never change, so once the ε-first bandit has locked on to the best choice, it exploits that choice continually. In contrast, ε-greedy takes a while longer to find the optimal choice, and even when it does find it, it still explores the other sub-optimal options 10% of the time. The only upside to the ε-greedy approach here is that it starts gathering good rewards almost right away, whereas ε-first takes 1000 exploration steps before collecting high rewards. But what about a non-stationary problem, where the values of the bandit arms change? Can you predict what will happen in this case? Which approach will fare better? Aha! Now the tables have turned. While ε-first often finds, for a short moment right after its 1000 exploration steps are done, a better choice than ε-greedy, that choice quickly becomes stale as the values of the arms change. ε-greedy fares much better overall, and continues to increase its average score over the course of the episode! However, ε-greedy seems to exhibit the same staleness problem (though not as badly) as ε-first: as the episode goes on it chooses the optimal action less and less. Can you think of a reason for this? Hint: think about the q-value update rule shown above (we discussed this in my last post too). The reason is that the update rule puts n in the denominator of the fraction that behaves as the step size, so this step size gets smaller and smaller as the episode goes on. Eventually, the step size will be so small that the q-value estimates barely change on each update, and their rank (the order of each bandit arm according to our estimates) will almost never change late in the episode. So, eventually, ε-greedy becomes almost exactly like ε-first and gets stuck pulling the same arm when exploiting (but still pulls other arms randomly some % of the time).
This explains why ε-greedy slowly makes fewer and fewer optimal choices: its arm value estimates are not keeping up with the changes to each arm's true value q_* the longer the episode goes on!

## Recency Weighted Q-Update:

To truly solve the stale choice issue for non-stationary bandit problems we need to be a bit more intelligent with our q-value estimate update rule. We saw previously how ε-greedy with a sample average update rule has a problem: the step size gradually gets smaller and smaller. The fix is simple: keep the step size constant! The reason why this works is subtle. Theory warning! The following section will be a bit math heavy. But if you find maths a little dense (I can relate) then I also include a plain English description and accompanying code below. Hopefully that helps! 😊 So we want to change our update rule to include a fixed step size \alpha. That's easy enough; see line (1) below. Line (1) is all we need to implement this in code; the remaining lines are just there for understanding. The reason this works as a recency-weighted update (newer updates are given more weight) is the recursive way \alpha is applied to the existing estimate Q_n. Realise that Q_n at any time step is the result of previous updates being applied one after another. Lines (2) to (6) demonstrate this below, where we essentially unroll Q_n as a sum of updates from all previous time steps.
\begin{align} Q_{n+1} &= Q_n + \alpha[R_n - Q_n] \\ &= \alpha R_n + (1-\alpha)Q_n \\ &= \alpha R_n + (1-\alpha)[\alpha R_{n-1} + (1-\alpha)Q_{n-1}] \\ &= \alpha R_n + (1-\alpha)\alpha R_{n-1} + (1-\alpha)^2 Q_{n-1} \\ &= \alpha R_n + (1-\alpha)\alpha R_{n-1} + (1-\alpha)^2\alpha R_{n-2} + \cdots \\ &= (1-\alpha)^n Q_1 + \sum_{i=1}^{n} \alpha(1-\alpha)^{n-i} R_i \end{align}

On line (6) we capture this idea in a single formula, expressing Q_{n+1} as the (exponentially discounted) initial estimate Q_1 plus the sum of all previous rewards, each weighted by \alpha(1-\alpha)^{n-i}, where n-i is the number of steps ago that reward was received. So as we update Q, the contribution of old updates to our current estimate gets exponentially smaller and smaller. It's this attribute of our new recency-weighted average that allows our estimates to stay fresh as ε-greedy continually explores and the true values of the bandit arms change over time. This update rule reacts much faster to changes in the values of the bandit arms and can do so for as long as the episode continues. Although, if it experiences an unlucky streak of exploration it may be temporarily misled into believing that the optimal action is not actually the best, it will usually fix this mistake quite quickly when the unlucky streak breaks. You can find the code for this update rule in the update_estimate_weighted function in the code snippet above.

## Results: ε-First vs ε-Greedy vs Recency-Weighted ε-Greedy

Naturally, we wouldn't expect the recency-weighted ε-greedy to be any better when it comes to stationary problems, and this is confirmed by our results in the graphs below. It's clear that recency-weighted averages hurt performance in the stationary setting. Whereas ε-greedy eventually approaches choosing the best action 90% of the time (the remaining steps are exploration), the recency-weighted ε-greedy chooses the optimal action 80% of the time and does not seem to be improving.
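The closed-form sum at the end of the derivation above is easy to check numerically. This short standalone sketch (my own check, not from the original post) applies the recursive update to a random reward sequence and compares the result with the unrolled closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1
rewards = rng.normal(size=50)
Q1 = 0.5                                   # arbitrary initial estimate

# recursive form: Q_{n+1} = Q_n + alpha * (R_n - Q_n)
Q = Q1
for R in rewards:
    Q += alpha * (R - Q)

# closed form: (1-alpha)^n * Q_1 + sum_i alpha * (1-alpha)^(n-i) * R_i
n = len(rewards)
closed = (1 - alpha) ** n * Q1 + sum(
    alpha * (1 - alpha) ** (n - i) * rewards[i - 1] for i in range(1, n + 1)
)
print(abs(Q - closed))  # agrees up to floating-point rounding
```

The two computations agree to machine precision, which is a quick way to convince yourself that the recursion and the exponentially weighted sum really are the same thing.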
In this situation the recency-weighted ε-greedy is limited by the \alpha value, which determines the 'forgetfulness' of the update rule, so this could be tuned a bit to improve performance in the stationary problem setting. But where recency-weighted ε-greedy really shines is in the non-stationary setting: Much better! Adding the recency-weighted update rule allows the ε-greedy agent to outperform both prior approaches on non-stationary problems. The recency-weighted update gradually 'forgets' old updates so that it can quickly switch to the new optimal choice as the episode progresses and the values of the bandit arms gradually change.

## Discussion and future work:

So that's it, right? We've solved the exploration/exploitation problem? Uhm... actually no. In the last two posts we've seen some really simple (and quite effective) methods to balance exploration and exploitation, but there's one more method I want to cover. This method addresses a problem with ε-greedy: when it explores, it does so totally at random, but couldn't there be a way to focus exploration on the choices that seem most promising? This is the topic of the next post, where I'll cover Upper Confidence Bound (UCB) action selection, and also some neat tricks to make ε-greedy exploration more effective too! 🤖

## Multi-Armed Bandits 1: ε-first

In this post we're going to discuss the age-old problem of making decisions under uncertainty. To illustrate what I mean, we're going to dive right into multi-armed bandit problems and what exactly they are. You can follow along and run the code yourself using this google colab notebook. I'll be updating the notebook to include code and experiments for future MAB posts. 🔄

## Intro to Bandit Problems:

A bandit problem is a situation where we have a series of choices of actions, some of which are better than others, but we don't know which ones.
The choices are totally independent of one another, meaning one choice does not affect the next, and we can choose from all the options on each turn. For these problems we want to figure out what the best option is as quickly as possible and then continually exploit that option until we run out of turns. The classic example is given using a set of slot machines that have different average payouts. We use the term "average payouts" here because there is some randomness (noise) to the payouts, meaning we get a slightly different payout each time. If payouts were certain, we would just need to sample from each machine once to figure out which is best. 🎰 So, ideally we want to figure out which slot machine has the highest average payout as quickly as possible, so we can then focus on playing only that best slot machine and win as much money as possible. In other words, we don't want to waste turns playing on slot machines whose average payout is lower than the best machine's, but first we have to figure out which machine is best. The example we'll work with from here on is the 10-armed bandit problem, where we have 10 slot machines to choose from, each with a different average payout, and our job is to figure out which is the highest paying machine. We initialise the true value Q_* of each slot machine (bandit arm) by sampling from a normal distribution with a mean of 0 and a standard deviation of 1.
Code for the problem is supplied below 🙂

```python
import numpy as np

class BanditProblem():
    def __init__(self, arms, seed, stationary=True):
        np.random.seed(seed)  # seed the RNG so experiments are repeatable
        self.stationary = stationary
        self.bandit_arms_values = np.random.normal(loc=0.0, scale=1.0, size=arms)
        self.optimal_action_index = np.argmax(self.bandit_arms_values)

    def draw_from_arm(self, arm_index):
        chose_optimal = 1 if arm_index == self.optimal_action_index else 0
        reward = np.random.normal(loc=self.bandit_arms_values[arm_index], scale=1.0)
        return reward, chose_optimal
```

## Fixed exploration period (ε-first):

Our first and most intuitive solution is to choose some arbitrary fixed period of exploration, during which we play the machines at random and try to build up a useful estimate of each machine's average payout. This strategy is sometimes referred to as fixed-exploration or ε-first (epsilon-first). With this approach we might aim to play each machine, say, 100 times, average the payouts, and use that as our estimate; equivalently, we could pick a machine at random 1000 times and base our estimates on the payouts received. We would then continually play the machine with the best estimated average payout. Our estimate for a machine is just the sample average of the payouts from the first ~100 turns on that slot machine.

Theory warning! The following section will be a bit math heavy. But if you find maths a little dense (I can relate) then I also include a plain English description and accompanying code below. Hopefully that helps! 😊

The estimate for each bandit arm (slot) is called a q-value estimate, and the true value for each arm is referred to as q_*; this is the thing we're trying to estimate.
We can keep track of this q-value estimate for each arm using a simple update rule derived from the sample average formula:

\begin{align} Q_{n+1} &= \frac{1}{n}\sum_{i=1}^{n}R_i \\ &= \frac{1}{n} \left( R_n + \sum_{i=1}^{n-1}R_i \right) \\ &= \frac{1}{n} \left( R_n + (n-1)\frac{1}{n-1} \sum_{i=1}^{n-1}R_i\right) \\ &= \frac{1}{n} \left( R_n + (n - 1)Q_n \right) \\ &= \frac{1}{n} \left( R_n + nQ_n - Q_n\right) \\ &= Q_n + \frac{1}{n}[R_n - Q_n] \end{align}

where R_i is the reward received at step i from a given slot machine and n is the number of times this slot machine has been played. Line (3) follows because (n-1)\frac{1}{n-1} = 1, and multiplying the sum by 1 does not change its value. Line (4) follows because:

\begin{aligned} Q_n &= \frac{R_1 + R_2 + \cdots +R_{n-1}}{n-1} = \frac{1}{n-1}\sum_{i=1}^{n-1}R_i \end{aligned}

In plain English, line (6) in the first equation above says that we can update the estimate Q_{n+1} by finding the difference between the received reward and the previous estimate, (R_n - Q_n); let's call this \delta. We then use this difference to update our estimate Q_n, moving the old estimate a small step towards the most recent reward. This is done by weighting \delta by a step size, here 1 divided by the number of times this arm has been pulled. We can think of this as taking the form:

NewEstimate \leftarrow OldEstimate + StepSize[Reward - OldEstimate]

Think of this as gradually reducing the step size the more times we pull a given arm, because we can gradually become more certain of the true value. Notice in equation 1 at line (6) that when the denominator n (the number of arm pulls) increases, the step size decreases.

## ε-first code:

Okay, enough theory for now, let's try putting what we've learned into code!
```python
class FixedGreedyAgent():
    def __init__(self, exploration_steps, bandit_problem):
        self.problem = bandit_problem
        self.current_step = 0
        self.exploration_steps = exploration_steps
        self.arm_qs = np.zeros(len(bandit_problem.bandit_arms_values))
        self.arm_ns = np.zeros(len(bandit_problem.bandit_arms_values))

    def choose_action(self):
        if self.current_step > self.exploration_steps:
            # greedily pull best arm
            choice = np.argmax(self.arm_qs)
        else:
            # explore: pull any random arm (still a chance to pull best arm too)
            choice = np.random.randint(0, len(self.arm_qs))
        reward, optimal = self.problem.draw_from_arm(choice)
        self.arm_ns[choice] += 1
        # update the estimate on every step, so the exploration phase
        # actually builds up the q-value estimates we later exploit
        self.update_estimate_incremental(choice, reward)
        self.current_step = self.current_step + 1
        return reward, optimal

    def update_estimate_incremental(self, choice_index, reward):
        self.arm_qs[choice_index] += (1 / self.arm_ns[choice_index]) * (reward - self.arm_qs[choice_index])
```

You can see line (6) from equation 1 implemented in code in the update_estimate_incremental function. Most of the logic is implemented in the choose_action function, where our agent will randomly explore for some fixed number of exploration_steps and then exploit the best estimated "arm" (slot machine in our example) from then onwards. The FixedGreedyAgent keeps track of the current q-value and number of pulls for each arm, as well as a parameter for the number of exploration steps to take before exploiting. All of the code for this blog is available in this google colab notebook so you can run it yourself and play around with the parameters 🙂 I chose to run the experiment for 50000 steps per episode and repeated this 2000 times to get an average score for each step. Our fixed-exploration greedy agent explores for 1000 steps before exploiting.
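To show how the pieces fit together, here is a minimal, self-contained episode loop (a condensed sketch of my own; the inlined variables mirror, but are not identical to, the classes in this post):

```python
import numpy as np

rng = np.random.default_rng(0)

arms = 10
q_star = rng.normal(0.0, 1.0, size=arms)       # true arm values
arm_qs = np.zeros(arms)                        # q-value estimates
arm_ns = np.zeros(arms)                        # pull counts
exploration_steps, total_steps = 1000, 5000
rewards = []

for t in range(total_steps):
    if t < exploration_steps:
        a = int(rng.integers(arms))            # explore: random arm
    else:
        a = int(np.argmax(arm_qs))             # exploit: best estimated arm
    r = rng.normal(q_star[a], 1.0)             # noisy payout
    arm_ns[a] += 1
    arm_qs[a] += (r - arm_qs[a]) / arm_ns[a]   # incremental mean update
    rewards.append(r)

pre = float(np.mean(rewards[:exploration_steps]))
post = float(np.mean(rewards[exploration_steps:]))
print(pre, post)  # the exploitation-phase average should be clearly higher
```

Running this reproduces the qualitative picture from the plots: the exploration phase earns roughly the average arm value, and the exploitation phase earns close to the best arm's value.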
I recommend using a much lower value for steps and episodes when running on google colab or the experiment will take a long time to complete, but feel free to play around with the exploration steps parameter. We can see the results for the average reward per step for the fixed exploration bandit below:

## Discussion and future work:

Okay, so it looks like our fixed exploration strategy worked pretty well, right? Well yes, but there are some problems with this strategy:

1. We have to spend X number of turns exploring randomly before we can exploit and get a decent score. This could be very costly in some scenarios. For example, if it costs $1 to play each slot machine, and our average return for the first 1000 steps is $0, then it is going to cost us $1000 out of pocket just to explore, before we see any returns! So we may need deep pockets for this exploration strategy.
2. The more arms we have, the longer we have to spend randomly exploring before we can be confident we have found the optimal one and can exploit it. If we had 150 arms or more, then 1000 exploration steps may not be adequate. If there are thousands of bandit arms, then this exploration strategy may not be viable at all.
3. If we don't confidently find the optimal choice (slot machine) within those first X exploration steps, then we will lock on to a suboptimal choice for the rest of the episode! We could be stuck making bad decisions forever.
4. If the value of the bandit arms (slots) changes over the course of an episode, our agent will never update its belief about the best arm to choose. Its fixed decision could get worse and worse as an episode progresses.

This plot makes the problems visible. During the first 1000 steps we pick the optimal choice roughly 10% of the time (which makes sense: 10 choices picked at random), and after the exploration phase is done the fixed exploration bandit manages to 'find' the best option ~95% of the time.
But once the agent is done exploring, it never changes its mind about the best option. It also isn't guaranteed to have found the best slot machine in the first place, especially if the number of arms is considerably more than 10. This begs the question: can we explore more effectively? And that is a very, very good question. Actually, that question is so good that people are still trying to figure it out! In the next post we'll learn about some more simple and effective ways to balance exploration and exploitation, even for non-stationary problems, using an ε-greedy approach (and others) and different q-value estimate update rules 🤓

## References:

1. Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (Global Edition) (4th ed.). Pearson.
2. Sutton, R., & Barto, A. (2018). Reinforcement Learning: An Introduction (2nd ed.). The MIT Press.
3. Fragkiadaki, K. (2019). Exploration/Exploitation in Multi-armed Bandits. https://www.andrew.cmu.edu/course/10-403/slides/S19_lecture6_exploreexploitinbandits.pdf
# Thread Subject: Calculating distance to multiple points

From: Kate
Date: 12 Jun, 2013 18:04:19
Message: 1 of 3

Hello! I am very new to programming. I have a few questions on how to set up some calculations I need to implement. Here is all I have. The integers below are just an example. My 3-D vectors are in the form (x,y,z); however, I do not know if I have set things up correctly to reflect that. There should be 5 anchors and 9 seats, all in the form of a location (x,y,z).

>> Anchors = [1 1 1; 2 2 2; 3 3 3; 4 4 4; 5 5 5]

Anchors =

     1     1     1
     2     2     2
     3     3     3
     4     4     4
     5     5     5

>> Seats = [1 2 3; 2 3 4; 3 4 5; 4 5 6; 5 6 7; 6 7 8; 7 8 9; 8 9 10; 9 10 11]

Seats =

     1     2     3
     2     3     4
     3     4     5
     4     5     6
     5     6     7
     6     7     8
     7     8     9
     8     9    10
     9    10    11

If that represents what I am trying to state, I have a few more questions. How do I write code to calculate the distance between, let's say, Seat [1 2 3] and all 5 anchors? Then, I need to keep that distance and calculate the same thing with all of the Seats. Any help would be greatly appreciated!! -Kate

From: Josh Meyer
Date: 12 Jun, 2013 18:47:50
Message: 2 of 3

> How do I write code to calculate the distance between, let's say, Seat
> [1 2 3] and all 5 anchors?

The Euclidean distance between two points is equal to the 2-norm of the difference. So the distance from the point [1 2 3] to the point [1 1 1] is equal to sqrt(0^2 + 1^2 + 2^2), or sqrt(5). In MATLAB this is norm([1 2 3] - [1 1 1]).

> Then, I need to keep that distance and calculate the same thing with all
> of the Seats.
There is probably a way to do this without so many loops, but one way to do it is:

---------------------------------------------------
D = zeros(length(Seats),length(Anchors));
for i = 1:length(Seats)
    for j = 1:length(Anchors)
        D(i,j) = norm(Seats(i,:) - Anchors(j,:));
    end
end
---------------------------------------------------

D =

    2.2361    1.4142    2.2361    3.7417    5.3852
    3.7417    2.2361    1.4142    2.2361    3.7417
    5.3852    3.7417    2.2361    1.4142    2.2361
    7.0711    5.3852    3.7417    2.2361    1.4142
    8.7750    7.0711    5.3852    3.7417    2.2361
   10.4881    8.7750    7.0711    5.3852    3.7417
   12.2066   10.4881    8.7750    7.0711    5.3852
   13.9284   12.2066   10.4881    8.7750    7.0711
   15.6525   13.9284   12.2066   10.4881    8.7750

The first row of D is the distance from the first Seat, [1 2 3], to each Anchor, the second row is the distance from the second Seat, [2 3 4], to each Anchor, etc...

From: Gene
Date: 12 Jun, 2013 19:19:09
Message: 3 of 3

Hi Josh:

Let A be a 3 x na array with the coordinates of anchor 'j' in the jth column, and similarly let S be 3 x ns. Then:

D = repmat(S(:, i), 1, na) - A;  % each column is a three-vector, giving the
                                 % Euclidean representation of the vector
                                 % from anchor 'j' to seat 'i'
d = (sum(D.^2)).^(.5);           % D.^2 is the component-wise square of D,
                                 % sum gives the column sum(s), and .^(.5)
                                 % takes the square root of the elements

d is a row vector; d(j) is the Euclidean distance from seat 'i' to anchor 'j'. Use a for loop on 'i' for the seats of interest.

gene
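For comparison, the same all-pairs distance table can be computed without any explicit loops. Here is a NumPy translation (my own sketch, not part of the original thread) using broadcasting:

```python
import numpy as np

anchors = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]], dtype=float)
seats = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7],
                  [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11]], dtype=float)

# seats[:, None, :] has shape (9, 1, 3); anchors[None, :, :] has shape (1, 5, 3).
# Broadcasting the subtraction yields all seat/anchor displacement vectors at once,
# and the norm along the last axis gives the full 9-by-5 distance table.
D = np.linalg.norm(seats[:, None, :] - anchors[None, :, :], axis=2)

print(D.shape)            # (9, 5)
print(round(D[0, 0], 4))  # 2.2361, i.e. sqrt(5), matching the loop version
```

The entries agree with the D matrix produced by the double loop above.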
## JMI2010B-9 A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming (pp. 199-212)

Author(s): Lu Li and Kim-Chuan Toh

J. Math-for-Ind. 2B (2010) 199-212.

Abstract. In this paper, we design an inexact primal-dual infeasible path-following algorithm for convex quadratic programming over symmetric cones. Our algorithm and its polynomial iteration complexity analysis give a unified treatment for a number of previous algorithms and their complexity analysis. In particular, our algorithm and analysis include the one designed for linear semidefinite programming in "Math. Prog. 99 (2004), pp. 261--282". Under a mild condition on the inexactness of the search direction at each interior-point iteration, we show that the algorithm can find an $\epsilon$-approximate solution in $O(n^2\log(1/\epsilon))$ iterations, where $n$ is the rank of the underlying Euclidean Jordan algebra.

Keyword(s): semidefinite programming, symmetric cone programming, infeasible interior point method, inexact search direction, polynomial complexity
# Fundamentals of Discrete Event Simulation (DES)

## Introduction

In the last blog, we looked at SIS/SIR epidemic modeling using discrete event simulation. This post will cover some fundamental concepts of Discrete Event Simulation, look at a few basic examples to develop an understanding, and end with a simulation of the M/M/1 queuing model. To get started, let's look at an elementary example. Assume that we want to estimate the probability of observing a head in an experiment of tossing a coin. We know that if the coin is not biased, then the likelihood of getting a head is 1/2. We also know that if we toss the coin two or three times, we may not get exactly 1/2. We expect that if we keep tossing the coin for a very long time, the average probability of getting heads will converge to 1/2.

#### Experiment

So let's run the experiment of tossing the coin 1000 times; we get 0.49 as the probability of getting a head, which is very close to our expectation of 1/2.

```python
import random
import numpy as np

n = 1000
observed = []
for i in range(n):
    if random.random() < 0.5:  # heads
        observed.append(1)
    else:
        observed.append(0)
print("Prob = ", round(np.mean(observed), 2))
```

Prob = 0.49 #Output

Let's take a deeper look and see how the running mean converges as the number of tosses increases.

```python
import random
import numpy as np
import matplotlib.pyplot as plt

n = 1000
observed = []
for i in range(n):
    if random.random() < 0.5:
        observed.append(1)
    else:
        observed.append(0)

cum_observed = np.cumsum(observed)
moving_avg = []
for i in range(n):
    moving_avg.append(cum_observed[i] / (i+1))

x = np.arange(0, len(moving_avg), 1)
plt.plot(x, moving_avg)
plt.axhline(0.5, linewidth=2, color='black')
plt.title("Running Mean", fontsize=16)
plt.xlabel("Number of Tosses", fontsize=14)
```

First, we can observe how the mean fluctuated widely around 0.5 but converged as the number of iterations increased. So what we have done so far is run a single experiment of tossing a coin 1000 times and observed that the running mean converges around 0.5. Now let's repeat the experiment 1000 times with 10000 tosses in each experiment.
```python
import random
import numpy as np
import matplotlib.pyplot as plt

for j in range(1000):
    n = 10000
    observed = []
    for i in range(n):
        if random.random() < 0.5:
            observed.append(1)
        else:
            observed.append(0)
    cum_observed = np.cumsum(observed)
    moving_avg = []
    for i in range(n):
        moving_avg.append(cum_observed[i] / (i + 1))
    x = np.arange(0, len(moving_avg), 1)
    plt.plot(x, moving_avg)

plt.axhline(0.5, linewidth=2, color='black')
plt.title("Running Mean", fontsize=16)
plt.xlabel("Number of Tosses", fontsize=14)
```

We can observe how each experiment fluctuates at the start and then converges, similar to what we observed in the single experiment.

### Law of Large Numbers

What we observed is an illustration of the Law of Large Numbers, which says that the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to the expected value as more trials are performed.

Based on the above results, we can observe that each experiment goes through two phases: transient and steady state. In the transient phase the output varies wildly; later, it converges in the steady-state phase. So while running a simulation, it's essential to run the experiments long enough to capture the steady-state phase, and not end them while still in the transient phase. The transient phase is also sometimes referred to as the warm-up period.

This raises the question: how do we know how long the transient phase is, so that we can discard the results observed in that state and keep only the steady-state results? There are several methods to detect it, which you can find in the paper "Evaluation of Methods Used to Detect Warm-up Period in Steady State Simulation". The simplest, yet effective, method is Welch's method.

#### Welch's Method

In simple words, Welch's method says the following:

- Do several runs of the experiment.
- Calculate the average of each observation across the runs.
- To smooth it further, calculate the moving average.
- Look at the graph visually and estimate the steady-state point from the graph.
```python
import random
import numpy as np
import matplotlib.pyplot as plt

n = 1000
Y = np.zeros(shape=(5, n))

def return_observations():
    observed = []
    for i in range(n):
        if random.random() < 0.5:
            observed.append(1)
        else:
            observed.append(0)
    cum_observed = np.cumsum(observed)
    moving_avg = []
    for i in range(n):
        moving_avg.append(cum_observed[i] / (i + 1))
    return moving_avg

for i in range(5):
    Y[i] = return_observations()

Z = []
for i in range(n):
    Z.append(np.sum(Y[:, i]) / 5)

x = np.arange(0, n, 1)
plt.plot(x, Y[0], 'k--', linewidth=0.5, label="Y0")
plt.plot(x, Y[1], 'k--', linewidth=0.5, label="Y1")
plt.plot(x, Y[2], 'k--', linewidth=0.5, label="Y2")
plt.plot(x, Y[3], 'k--', linewidth=0.5, label="Y3")
plt.plot(x, Y[4], 'k--', linewidth=0.5, label="Y4")
plt.plot(x, Z, linewidth=2, color='tab:red', label="Z")
plt.title("Running Mean", fontsize=16)
plt.xlabel("Number of Tosses", fontsize=14)
plt.legend()
```

In the above plot, the red line is the average of the five runs at each iteration, and we can see that it becomes steady somewhere between 300-400 iterations.

### Stochastic Process

Stochastic is a fancy word for randomness. We observe randomness every day in our lives. The question is, is it truly random, or do we simply lack a complete understanding of the process? Without getting philosophical, and to keep the math simple, we attribute anything we don't understand to randomness. Randomness goes by different names; in time series analysis, for example, we call it noise.

Coming back to the topic: a stochastic process is a collection of random variables indexed by time. At every instant of time, the state of the process is random. Since the time instant is fixed, we can think of the state of the process as a random variable at that specific instant. At time $t_{i}$, the state of the process is determined by performing a random experiment whose outcomes come from the sample space $\Omega$. At time $t_{j}$, the same experiment is performed to determine the next state of the process.
This is the essence of the stochastic process. In the above diagram, the sample space on the left is a set of outcomes, each of which is mapped to a time function; the time functions are combined to produce a sample realization. If that sounds like a mouthful, hopefully the following example will help.

The Marvel multiverse is the best analogy that comes to mind when thinking of the stochastic process. Remember the scene from Infinity War where Doctor Strange looks at 14 million different futures, and in only one of them do the Avengers win? Tying this back to stochastic processes: here the set of outcomes has two elements, "Avengers win" and "Thanos wins". This is the sample space of outcomes. Each outcome is related to some sequence of actions performed, and those actions combine to form a sample realization.

Another key concept is that a stochastic process has two kinds of mean. The vertical mean, also known as the ensemble mean, is calculated vertically across all realizations at a fixed time. The horizontal mean is the time average along a single sample realization. As you may notice, getting an ensemble mean requires running all possible realizations, which may not be feasible. The good news is that the horizontal mean can often be used to approximate the vertical mean.

A dynamic system can be viewed as a stochastic process. When the system is simulated, each simulation run represents one evolution of the system along a path in the state space. The data generated and collected along that trajectory in the state space is used to estimate the measurements of interest. A dynamic system is called an ergodic system if the horizontal mean converges to the vertical mean.

## Simulating M/M/1

Now let's look at the same concepts visited above with example simulations of the M/M/1 queuing system to figure out the average delay. Assume that we have 100K packets to serve; Poisson processes are used to model both packet arrivals and service completions.
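The two means can be made concrete with the coin-flip experiment from earlier, which is ergodic, so both means land near the same value. A minimal sketch (the variable names and sequence lengths are my choices):

```python
import random

random.seed(42)

n_realizations = 500   # number of sample paths (for the ensemble mean)
n_steps = 500          # length of each path (for the horizontal mean)

# Each realization is one sequence of fair coin flips (1 = heads).
paths = [[1 if random.random() < 0.5 else 0 for _ in range(n_steps)]
         for _ in range(n_realizations)]

# Vertical (ensemble) mean: average across realizations at a fixed time.
t = 100
ensemble_mean = sum(path[t] for path in paths) / n_realizations

# Horizontal (time) mean: average along a single realization.
time_mean = sum(paths[0]) / n_steps

print(ensemble_mean, time_mean)  # both should be close to 0.5
```

For a non-ergodic process (say, one where each realization flips a biased coin whose bias is itself drawn at random once per path), the time mean of one path would not match the ensemble mean.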
Both the arrival and service rates range between 1 and 3 packets per unit of time. We know that in a queuing system, the delay will increase if the service rate is lower than the arrival rate (more packets are coming in than going out).

```python
import numpy as np
import pandas as pd
from random import expovariate

def average_delay(lamda, mu):
    NUM_PKTS = 100000          # Number of packets to simulate
    count = 0                  # Count of departed packets
    clock = 0                  # Simulation clock
    N = 0                      # Number of packets in the system

    ARR_TIME = expovariate(lamda)   # Next arrival time
    DEP_TIME = np.inf               # Next departure time

    ARR_TIME_DATA = []
    DEP_TIME_DATA = []
    DELAY_DATA = []

    while count < NUM_PKTS:
        if ARR_TIME < DEP_TIME:            # Next event is an arrival
            clock = ARR_TIME
            ARR_TIME_DATA.append(clock)
            N = N + 1
            ARR_TIME = clock + expovariate(lamda)
            if N == 1:                     # Server was idle; start service
                DEP_TIME = clock + expovariate(mu)
        else:                              # Next event is a departure
            clock = DEP_TIME
            DEP_TIME_DATA.append(clock)
            N = N - 1
            count = count + 1
            if N > 0:
                DEP_TIME = clock + expovariate(mu)
            else:
                DEP_TIME = np.inf

    for i in range(NUM_PKTS):
        d = DEP_TIME_DATA[i] - ARR_TIME_DATA[i]
        DELAY_DATA.append(d)
    return round(np.mean(DELAY_DATA), 4)

df = pd.DataFrame(columns=['mu', 'lamda', 'delay'])
lamda_range = np.arange(1, 3, 0.1)
mu_range = np.arange(1, 3, 0.1)
i = 0
for mu in mu_range:
    for lamda in lamda_range:
        df.loc[i] = [mu, lamda, average_delay(lamda, mu)]
        i += 1
```

If we look at the plot, we can see that the average delay forms a flat surface (low delay) wherever the packet arrival rate (lambda) is lower than the service rate (mu). The delay shoots up wherever the service rate is lower than the arrival rate, as indicated by the rising surface.

## Conclusion

In this blog, we covered some fundamental concepts of discrete event simulation and applied them to simulate the M/M/1 queuing model. We will try to cover event graphs and model a simple Automatic Repeat Request (ARQ) protocol in a future post.

## References

Written on January 17, 2022
{}
# Incorrect proof for lim sin(1/x) at x=0

1. Aug 1, 2011

### Harrisonized

1. The problem statement, all variables and given/known data

I need to know what's wrong with the following proof:

Assume that $\lim_{x\to 0}\sin(1/x)$ exists. In other words:

$$\lim_{x\to 0^-}\sin(1/x) = \lim_{x\to 0^+}\sin(1/x) \qquad (1)$$

But:

$$\lim_{x\to 0^-}\sin(1/x) = \lim_{x\to 0^+}\sin(-1/x) \qquad (2)$$

And because sin(1/x) is an odd function:

$$\lim_{x\to 0^+}\sin(-1/x) = -\lim_{x\to 0^+}\sin(1/x) \qquad (3)$$

Therefore, by (1), if $\lim_{x\to 0}\sin(1/x)$ exists, then:

$$\lim_{x\to 0^+}\sin(1/x) = -\lim_{x\to 0^+}\sin(1/x)$$
$$\lim_{x\to 0^+}\sin(1/x) = 0$$

Similarly,

$$\lim_{x\to 0^-}\sin(1/x) = -\lim_{x\to 0^-}\sin(1/x)$$
$$\lim_{x\to 0^-}\sin(1/x) = 0$$

If $\lim_{x\to 0}\sin(1/x)$ exists, the only value at which the limit can exist is 0. Since the limit converges to a single value, the limit exists and is equal to 0.

2. Relevant equations

(broken image link)

3. The attempt at a solution

The Laurent series disagrees. >:(

I know there's something wrong with the proof, since it's well accepted that the limit doesn't exist. I'm just not sure what. Any help is appreciated.

Last edited by a moderator: May 5, 2017

2. Aug 1, 2011

### Dick

The problem is that you assume that the limit exists. It doesn't. If you assume something false to begin with, isn't it hard to trust the conclusion you would draw from that? It is true that if an odd function has a limit at 0 then the limit must be 0, as you've shown. But sin(1/x) doesn't have a limit at 0.

3. Aug 1, 2011

### Harrisonized

The assumption was only necessary at the beginning to show that the limit, if it exists, converges to a single point.
If the limit doesn't exist, shouldn't it converge to different values from different sides?

4. Aug 1, 2011

### Dick

The limit doesn't converge to a single value on either side. Look at a graph.

5. Aug 6, 2011

### Harrisonized

Thank you, Dick, for your help on the previous problem. The answer seemed obvious after I switched sin(1/x) into f(x) and reconstructed the proof for f(x). I didn't want to make a new thread for such a related question, so here goes...

The limit of x·sin(1/x) at x=0 is 0 by the squeeze theorem. The series expansion, however, is:

$$x\left(x^{-1} - \frac{x^{-3}}{3!} + \frac{x^{-5}}{5!} - \frac{x^{-7}}{7!} + \cdots\right) = 1 - \frac{x^{-2}}{3!} + \frac{x^{-4}}{5!} - \frac{x^{-6}}{7!} + \cdots$$

Is there a contradiction? The series seems to diverge as x shrinks to 0. If the limit of x·sin(1/x) is truly 0, then the limit of

$$\frac{x^{-2}}{3!} - \frac{x^{-4}}{5!} + \frac{x^{-6}}{7!} - \cdots$$

must equal 1 as x goes to 0. Is there a way to show this?

Last edited: Aug 6, 2011
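For the original question, a quick numeric check makes the point concrete (a small sketch in Python; the two sequences are chosen for illustration): along different sequences tending to 0 from the right, sin(1/x) takes different constant values, so even the one-sided limit fails to exist, while x·sin(1/x) is squeezed to 0.

```python
import math

# Two sequences tending to 0 from the right.
a = [1 / (2 * math.pi * n) for n in range(1, 6)]                 # sin(1/x) = 0 here
b = [1 / (2 * math.pi * n + math.pi / 2) for n in range(1, 6)]   # sin(1/x) = 1 here

print([round(math.sin(1 / x), 6) for x in a])   # all ~0
print([round(math.sin(1 / x), 6) for x in b])   # all ~1

# By contrast, |x*sin(1/x)| <= |x| -> 0, so these shrink toward 0:
print([round(x * math.sin(1 / x), 6) for x in b])
```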
{}
# combinatorics – Upper bound for the number of $3$-flags in a $(2k, k^2)$-graph $G$

Let $G$ be an arbitrary simple graph on $2k$ vertices with $k^2$ edges, where $k \geq 2$. Let $F$ be a $3$-flag, that is, three triangles sharing a single common edge (this graph has 5 vertices and 7 edges). I want to find an upper bound for the number of $3$-flags $F$ in $G$. A trivial upper bound is $(2k)^5$, but this is too crude and I want a smaller one. Any ideas? Thanks in advance!
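For small cases one can count 3-flags directly: a 3-flag on edge $uv$ consists of $uv$ together with any 3 common neighbors of $u$ and $v$, so the count is $\sum_{uv \in E} \binom{\operatorname{codeg}(u,v)}{3}$. A brute-force sketch (the helper function is mine, not from the question):

```python
from itertools import combinations
from math import comb

def count_3flags(vertices, edges):
    """Count 3-flags: for each edge uv, choose 3 common neighbors of u and v."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for u, v in edges:
        codeg = len(adj[u] & adj[v])   # number of common neighbors of u and v
        total += comb(codeg, 3)
    return total

# Example: in K5 every edge has exactly 3 common neighbors, so each of the
# 10 edges contributes C(3,3) = 1 flag.
K5_edges = list(combinations(range(5), 2))
print(count_3flags(range(5), K5_edges))  # 10
```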
{}
# Age of Information Minimization for an Energy Harvesting Source with Updating Erasures: Without and With Feedback

Songtao Feng and Jing Yang

Songtao Feng and Jing Yang are with the School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA, 16802, USA. Email: {sxf302, yangjing}@psu.edu. This work is presented in part in the 2018 IEEE International Conference on Computer Communications (INFOCOM) Workshop on Age of Information [1] and the 2018 IEEE International Symposium on Information Theory [2].

###### Abstract

Consider an energy harvesting (EH) sensor that continuously monitors a system and sends time-stamped status updates to a destination. The sensor harvests energy from nature and uses it to power its updating operations. The destination keeps track of the system status through the successfully received updates. With the recently introduced information freshness metric "Age of Information" (AoI), our objective is to design an optimal online status updating policy to minimize the long-term average AoI at the destination, subject to the energy causality constraint at the sensor. Due to the noisy channel between the sensor and the destination, each transmitted update may be erased with a fixed probability, and the AoI at the destination is reset to zero only when an update is successfully received. We first consider status updating without feedback available to the sensor and show that the Best-effort Uniform updating (BU) policy is optimal. We then investigate status updating with perfect feedback to the sensor and prove the optimality of the Best-effort Uniform updating with Retransmission (BUR) policy.
In order to prove the optimality of the proposed policies, for each case, we first identify a lower bound on the long-term average AoI among a broad class of online policies, and then construct a sequence of virtual policies to approach the lower bound asymptotically. Since those virtual policies are sub-optimal to the original policy, the original policy is thus optimal.

Index Terms: Age of information, energy harvesting, online status updating, noisy channel, feedback

## I Introduction

Recently, a metric called "Age of Information" (AoI) has been introduced to measure the freshness of the information in a status monitoring system from the destination's perspective [3]. Specifically, at time $t$, the AoI in the system is defined as $\Delta(t) = t - U(t)$, where $U(t)$ is the time stamp of the latest received update at the destination. AoI has been shown to be fundamentally different from standard network performance metrics, such as throughput or delay. It has attracted growing attention from different research communities, due to its simple form and its potential in unifying sampling and transmission for timely information delivery. Generally speaking, there are two main approaches in the study of AoI. The first approach is to characterize the AoI under given status updating policies [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. The second approach is to design status updating policies that actively optimize the AoI [18, 19, 20]. Modeling the status monitoring system as a queueing system, where updates are generated at the source according to a random process, the time-average AoI has been analyzed under different queue management settings.
For systems with a single server, the corresponding AoI has been studied for single-source single-server queues [3], the Last-Come First-Served (LCFS) queue with preemption in service [4], the First-Come First-Served (FCFS) queue with multiple sources [5, 6], the queue with multiple sources which keeps only the latest status packet of each source in the queue [7], and the LCFS queue with gamma-distributed service time and Poisson update packet arrivals [8]. Moreover, packet deadlines are found to improve AoI performance in [9], and AoI in the presence of packet delivery errors is evaluated in [10]. The AoI in systems with multiple servers has been evaluated in [11, 12, 13]. A related metric, Peak Age of Information (PAoI), is introduced and studied in [14, 15, 16, 19]. For more complicated multi-hop networks, reference [17] introduces a novel stochastic hybrid system (SHS) approach to derive explicit age distributions. The optimality of a preemptive Last Generated First Served (LGFS) service discipline in multi-hop networks is identified in [18]. AoI optimization with knowledge of the server state has been studied in [19]. The relationship between AoI and the MMSE in remote estimation of a Wiener process is investigated in [20]. Due to the magnified tension between keeping information fresh and the stringent energy constraint, AoI in energy harvesting (EH) wireless networks has attracted increasing interest recently [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. An EH sensor harvests energy from the environment and uses it to power its sensing and communication operations. Due to the stochastic energy arrival process, all of the operations are subject to the so-called energy causality constraint. Under such constraints, various policies have been proposed to optimize different communication and sensing performance metrics [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42].
Such a sample path-wise constraint also makes the design and analysis of status updating policies in EH systems extremely challenging. Under the assumption that the battery size is sufficiently large, [21] shows that updates should be scheduled only when the server is free to avoid queueing delay, and a lazy update policy that introduces inter-update delays outperforms the greedy policy. Reference [22] investigates AoI-optimal offline and online status updating policies, where the online problem is modeled as a Markov decision process and solved through dynamic programming. In [23, 24, 25, 26], optimal online status updating policies under different assumptions on the battery size have been identified. Specifically, for the infinite battery case, [23] shows that the best-effort uniform updating policy, which updates at a constant rate when the source has sufficient energy, is optimal when the channel between source and destination is perfect. When the battery size is finite, the optimal policies are shown to have certain threshold structures [24, 25, 26]. Offline policies to minimize AoI in EH channels have been studied in [27, 28]. Reference [29] analyzes the AoI performance of two channel coding schemes when channel erasures are present. Using the SHS tools proposed in [17], reference [31] and reference [43] study the average AoI for a finite-battery EH system, with and without preemption of packets in service allowed, respectively. An interesting setting is considered in [30], where extra information is carried by the timing of the update packets; a tradeoff between the average AoI and the average message rate is studied for several achievable schemes. In this paper, we take the imperfect updating channel into consideration and investigate the optimal updating policies of an EH system where updating erasures can happen.
Assuming each update can be erased with a constant probability, the AoI at the destination will be reset only when an update is successfully received. Our objective is to design online status updating policies to minimize the average AoI at the destination. Depending on whether there exists updating feedback to the source, we consider two possible scenarios: 1) No updating feedback. In this case, the source has no knowledge of whether an update is successful. It can only use the up-to-date energy arrival profile and updating decisions, as well as statistical information such as the energy arrival rate and the erasure probability of the channel, to decide the upcoming updating time points. We show that the Best-effort Uniform updating (BU) policy, which was shown to be optimal under the perfect channel setting in [23], is still optimal. 2) Perfect updating feedback. In this case, the source receives instantaneous feedback when an update is transmitted. Therefore, it can decide when to update next based on the feedback information, along with the information it uses in the no feedback case. For this case, we propose a Best-effort Uniform updating with Retransmission (BUR) policy and prove its optimality. Although the proposed policies are quite intuitive, their optimality is challenging to establish, compared with [23]. This is because both battery outage and updating erasure affect the AoI under the proposed policies. While the impact of either of those two events can be analyzed relatively easily in isolation, the analysis becomes extremely challenging when both of them are involved. Besides, when there exists perfect updating feedback to the source, an updating erasure under BUR will lead to subsequent retransmission and energy consumption, thus affecting the battery outage probability in the future. Such a complicated interplay between those two events makes the problem even more involved.
In order to overcome such difficulties, we propose a novel virtual-policy-based approach. Specifically, for both the BU and BUR updating policies, we construct a sequence of virtual policies, which are strictly suboptimal to their original counterparts, and eventually converge to them. By designing the virtual policies in a sophisticated manner, we are able to decouple the effects of battery outage and updating errors in the performance analysis. We show that the long-term average AoI under the virtual policies converges to the corresponding lower bound, which implies the optimality of the original policy. The remainder of the paper is structured as follows: In Sec. II, we describe the system model and problem formulation. In Sec. III and Sec. IV, we consider the no updating feedback case and the perfect updating feedback case, respectively. In Sec. V, we evaluate the proposed policies through extensive simulation results. We conclude in Sec. VI. For the sake of readability, we defer some proofs to the appendix.

## II System Model and Problem Formulation

Consider a scenario where an energy harvesting sensor continuously monitors a system and sends time-stamped status updates to a destination. The destination keeps track of the system status through received updates. We use the metric Age of Information (AoI) to measure the "freshness" of the status information available at the destination. We assume that the energy unit is normalized so that each status update requires one unit of energy. This energy unit represents the energy cost of both measuring and transmitting a status update. Assume energy arrives at the sensor according to a Poisson process with parameter $\lambda$. Hence, energy arrivals occur at discrete time instants. We assume $\lambda = 1$ for ease of exposition, since we can always scale the time axis proportionally to make the arrival rate one unit of energy per unit time. The sensor is equipped with a battery to store the harvested energy. In this paper, we focus on the case where the battery size is infinite.
We assume the time used to collect and transmit a status update is negligible compared with the time scale of the long-term average AoI in the system. Therefore, a status update can be generated and transmitted at any time as long as the energy level is greater than or equal to one. We assume the channel between the source and the destination is noisy, so a transmitted update may be corrupted and unrecognizable at the destination. Specifically, we assume that with probability $p$, $0 < p \leq 1$, an update will be successfully delivered to the destination, independently of any other factors in the system. As shown in Fig. 1, the AoI at the destination is reset to zero only when an update is successfully received. We consider two possible cases. For the no updating feedback case, the source has no information about the updating result. For the perfect updating feedback case, we assume there is a perfect feedback channel between the destination and the source, so that the source is notified about an updating failure once it happens. A status update policy is denoted as $\pi = \{l_1, l_2, \ldots\}$, where $l_n$ is the $n$th update time at the source. However, due to channel fading, only a subset of the update packets will be successfully delivered. Thus, the actual status update times at the destination are different from $\{l_n\}$ in general. Therefore, we use $S_j$ to denote the $j$th actual update time at the destination. We assume $S_0 = 0$, i.e., an update is successfully delivered right before time zero, and the system starts with an initial energy of $E_0$, $E_0 \geq 1$. Define $A_n$ as the total amount of energy harvested in $(l_{n-1}, l_n]$, and $E(l_n^-)$ as the energy level of the sensor right before the update time $l_n$. Then, under any feasible status update policy, the energy queue evolves as follows:

$$E(l_1^-) = E_0 + A_1, \qquad (1)$$
$$E(l_n^-) = E(l_{n-1}^-) - 1 + A_n, \quad n = 2, 3, \ldots \qquad (2)$$

Based on the Poisson arrival process assumption, $A_n$ is an independent Poisson random variable with parameter $l_n - l_{n-1}$.
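The energy-queue recursion above can be simulated by drawing Poisson energy-arrival counts between consecutive update times. A small illustrative sketch (the helper functions are my own, not from the paper, and feasibility of each update is not enforced here):

```python
import random

random.seed(0)

def poisson(mean):
    """Sample a Poisson(mean) count by counting unit-rate exponential gaps."""
    count, t = 0, random.expovariate(1.0)
    while t <= mean:
        count += 1
        t += random.expovariate(1.0)
    return count

def battery_before_updates(update_times, E0=1):
    """Energy level right before each update time, following the recursion:
    level(1) = E0 + A_1, level(n) = level(n-1) - 1 + A_n."""
    levels, E, prev = [], E0, 0.0
    for t in update_times:
        E = E + poisson(t - prev)   # harvest A_n ~ Poisson(t - prev)
        levels.append(E)
        E -= 1                      # spend one unit of energy on the update
        prev = t
    return levels

print(battery_before_updates([1, 2, 3, 4, 5]))
```

With unit-rate arrivals and unit-spaced updates, harvested energy and consumption balance on average, which is exactly why the battery process behaves like the zero-drift random walk analyzed later.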
In order to ensure every update time is feasible, the energy causality constraint must be satisfied at all times, i.e.,

$$E(l_n^-) \geq 1, \quad n = 1, 2, \ldots, \qquad (3)$$

which indicates that the source will generate and transmit an update only when it has sufficient energy. We use $M(T)$ and $N(T)$ to denote the number of status updates sent by the source and the number of status updates successfully received at the destination over $[0, T]$, respectively. Define $R(T)$ as the cumulative AoI at the destination over $[0, T]$. Denote the delay between two successful updates as $X_i := S_i - S_{i-1}$, for $i = 1, 2, \ldots$. Then,

$$R(T) = \frac{\sum_{i=1}^{N(T)} X_i^2 + (T - S_{N(T)})^2}{2}, \qquad (4)$$

and the time-average AoI over the duration $[0, T]$ can be expressed as $R(T)/T$. Our objective is to determine the sequence of update times at the source, so that the time-average AoI at the destination is minimized, subject to the energy causality constraint. We focus on a set of online policies. Specifically, for the no updating feedback case, the information available for determining an updating point includes the updating history, the energy arrival profile so far, as well as the energy harvesting statistics (i.e., $\lambda$ in this scenario) and the probability of updating success $p$. Denote the set of such online policies as $\Pi_1$. For the perfect updating feedback case, the source also utilizes up-to-date updating feedback to make its decisions. We denote the set of such online policies as $\Pi_2$. Then, the optimization problem can be formulated as

$$\min_{\pi \in \Pi} \; \limsup_{T \to +\infty} \mathbb{E}\left[\frac{R(T)}{T}\right] \quad \text{s.t. } (1)\text{--}(3), \qquad (5)$$

where $\Pi$ equals $\Pi_1$ or $\Pi_2$, depending on the setting, and the expectation in the objective function is taken over all possible energy harvesting sample paths and channel fading realizations.

## III Status Updating Without Feedback

In this section, we study the optimal status updating policy for the case where there is no update feedback available to the sensor. We show that the expected long-term average AoI has a lower bound for a broad class of online policies, which can be achieved by the BU updating policy.
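To make the cumulative-AoI expression (4) concrete, here is a small sketch (the function name is mine) that computes the time-average AoI from a list of successful update times: the AoI sawtooth contributes a triangle of area X²/2 for each inter-success gap X, plus a partial triangle after the last update.

```python
def time_average_aoi(S, T):
    """Time-average AoI R(T)/T per (4).

    S: sorted successful update times in (0, T]; an update is assumed
    successfully delivered right before time 0 (S_0 = 0).
    """
    R = 0.0
    prev = 0.0
    for s in S:
        X = s - prev            # delay between consecutive successful updates
        R += X * X / 2.0        # area of one triangle of the AoI sawtooth
        prev = s
    R += (T - prev) ** 2 / 2.0  # partial triangle after the last update
    return R / T

# Updates received at t = 1, 2, 3, 4 over [0, 4] give average AoI 1/2.
print(time_average_aoi([1, 2, 3, 4], 4.0))  # 0.5
```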
### III-A A Lower Bound

Note that when the battery size is infinite, no energy overflow will happen, and the long-term average status updating rate is subject to the energy harvesting rate constraint. Specifically, we have the following lemma.

###### Lemma 1 (Lemma 1 in [41])

Under any policy in $\Pi_1$, it must have $\limsup_{T\to\infty} M(T)/T \leq 1$ almost surely.

We point out that Lemma 1 is also valid for all policies in $\Pi_2$, which will be discussed in Sec. IV. Besides, we also have the following intuitive yet important observation.

###### Lemma 2

For any policy in $\Pi_1$ that achieves a finite expected long-term average AoI, it must have $\lim_{T\to\infty} M(T) = \infty$ almost surely.

###### Proof.

We prove it by contradiction. Assume

$$P\left[\lim_{T\to\infty} M(T) = \infty\right] < 1,$$

i.e., there exists $\epsilon > 0$ and $M_0 < \infty$ such that $P[\lim_{T\to\infty} M(T) \leq M_0] \geq \epsilon$. Define

$$p_n := (1-p)^{n-1} p, \qquad (6)$$

i.e., the probability that the $n$th update is the first successful one. Then, with probability at least $\epsilon (1-p)^{M_0} > 0$, none of the (at most $M_0$) updates sent by the source is ever successfully delivered, in which case the instantaneous AoI grows linearly in $T$, which implies that the expected long-term average AoI cannot be finite. ∎

In order to obtain a valid lower bound, in the following, we only need to focus on policies that achieve a finite expected long-term average AoI. To facilitate the following analysis, we introduce a broad class of online policies defined as follows.

###### Definition 1 (Bounded Updating Policy)

If under a policy $\pi$, the $n$th updating point at the source (i.e., $l_n$) satisfies $\mathbb{E}[l_n] < \infty$ for any fixed $n$, $\pi$ is called a bounded updating policy.

Denote the set of bounded updating policies as $\Pi_b$. Then, $\Pi_b \subseteq \Pi_1$. Intuitively, any practical status updating policy should be in $\Pi_b$, as it is undesirable to have any $n$th updating point (and the inter-update delay between any consecutive updating points before it) become unbounded in expectation. We have the following lower bound for bounded updating policies.

###### Theorem 1 (Lower Bound for Channel without Feedback)

For any policy in $\Pi_b$, the expected long-term average AoI is lower bounded by $\frac{2-p}{2p}$.

The proof of Theorem 1 is provided in Appendix -A.

### III-B Optimal Online Status Updating

In this section, we propose online status updating policies to achieve the lower bound derived in Section III-A.
We will start with the BU updating policy introduced in [23]. Although we assume a noisy channel in this work, when there is no CSI or feedback available to the source, intuitively, it is still desirable for the source to update in a uniform fashion, so that the successfully received updates at the destination will be as uniformly distributed in time as possible.

###### Definition 2 (BU Updating)

The sensor is scheduled to update the status at times $\tau_n = n$, $n = 1, 2, \ldots$. The sensor performs the task at $\tau_n$ if $E(\tau_n^-) \geq 1$; otherwise, the sensor keeps silent until the next scheduled status updating time point.

Here we use $\tau_n$ to denote the $n$th scheduled updating time point. It is in general different from the $n$th actual updating time $l_n$, since some scheduled updates may be infeasible due to battery outage. BU updating ensures that the energy causality constraint is always satisfied. We expect that BU updating achieves the lower bound in Theorem 1. However, analyzing its AoI performance is very challenging. Although we are able to identify a renewal structure in the system status evolution under the BU updating policy (i.e., a renewal interval can begin right after the sensor successfully delivers an update and the battery state returns to its initial level), the analysis of the expected average AoI over one renewal interval is still very complicated, mainly for two reasons. First, different from the perfect channel case [23], the actual update time at the destination may deviate from the scheduled update time due to two possible events: battery outage and update erasure. Although the average AoI can be characterized in systems where only one of these events can happen, it is hard to analyze the AoI when the effects of both events are involved. Second, the expected length of such a renewal interval is unbounded.
This is because the battery evolution under BU updating can be modeled as a martingale process, and as we will show in the proof of Lemma 4, the expected time until it becomes empty for the first time (i.e., the hitting time of zero) is infinite. Since with a non-zero probability the renewal interval contains such an interval, the expected length of each renewal interval is thus unbounded, and the corresponding expected average AoI becomes intractable. To overcome such challenges, we will construct a sequence of virtual policies, and show that the expected time-average AoI under those virtual policies approaches the lower bound in Theorem 1. Since such virtual policies are sub-optimal to the BU updating policy, the optimality of BU updating can thus be proved. In order to simplify the definition and analysis of the virtual policy, we assume the initial energy $E_0$ equals one. The proof can be slightly modified to show that the optimality of the proposed policy is valid for any $E_0 \geq 1$.

###### Definition 3 (BU-ER$_{T_0}$)

The sensor performs BU updating until the battery level after sending an update becomes zero for the first time, or until time $T_0$, at which point the sensor depletes its battery; after that, when the battery level becomes higher than or equal to one after a successful update for the first time, the sensor reduces the battery level to one, and then repeats the process.

###### Lemma 3

For any $T_0 > 0$, the BU-ER$_{T_0}$ updating policy is sub-optimal to the BU updating policy.

###### Proof.

We note that BU-ER updating is identical to BU updating except for the energy removal at time $T_0$ and whenever the battery level after a successful update becomes higher than one. Given the same energy harvesting sample path, the battery level under BU is always higher than that under BU-ER. Thus, BU-ER incurs more infeasible status updates. With the same channel fading profile, the instantaneous AoI under BU-ER updating is always greater than or equal to that under BU updating sample path-wise.
Thus, the expected time-average AoI under BU-ER is greater than or equal to that under BU, which proves the lemma. ∎

We note that the BU-ER updating policy is a renewal-type policy, i.e., the states of the system evolve according to a renewal process. To analyze the expected long-term average AoI, it suffices to analyze the expected average AoI over one renewal interval. In the following, we will focus on the first renewal interval, and show that the corresponding expected average AoI converges to the lower bound in Theorem 1 as $T_0$ increases. As illustrated in Fig. 2, the renewal interval consists of two stages. The first stage starts at time zero and ends when the battery becomes empty for the first time, or at time $T_0$, whichever comes first. We denote the end of the first stage as $T_1$. We note that all scheduled status updating epochs over $[0, T_1]$ are feasible. The second stage starts at $T_1$ and ends when the battery level becomes higher than or equal to one after a successful update for the first time after $T_1$. We denote the duration of the second stage as $T_2$, so the renewal interval ends at $T_1 + T_2$.

###### Lemma 4

Under BU-ER updating, $\lim_{T_0\to\infty} \mathbb{E}[T_1] = \infty$.

###### Proof.

Consider a "random walk" $\{\Omega_n\}$, which starts with $\Omega_0 = 1$ and evolves as $\Omega_n = \Omega_{n-1} - 1 + A_n$, where $A_n$ is an i.i.d. Poisson random variable with parameter one. Denote the first 0-hitting time of $\{\Omega_n\}$ as $\kappa$, so that $\Omega_\kappa = 0$. Note that as $T_0 \to \infty$, $\{\Omega_n\}$ is identical to the battery level evolution process under the BU-ER updating policy almost surely, and the corresponding $T_1$ equals $\kappa$. Define a martingale process associated with $\{\Omega_n\}$ as $\exp(-\alpha\Omega_n - n\gamma(\alpha))$ with $\alpha > 0$ and $\gamma(\alpha) = \alpha + e^{-\alpha} - 1$. Following the optional stopping argument in [41],

$$\exp(-\alpha\Omega_0) = \mathbb{E}\left[\exp(-\alpha\Omega_\kappa - \kappa\gamma(\alpha))\right]. \qquad (9)$$

Taking the derivative of both sides of (9) with respect to $\alpha$, we have

$$\Omega_0 \exp(-\alpha\Omega_0) = \mathbb{E}\left[(\Omega_\kappa + \kappa\gamma'(\alpha))\exp(-\alpha\Omega_\kappa - \kappa\gamma(\alpha))\right]. \qquad (10)$$

Since $\Omega_0 = 1$ and $\Omega_\kappa = 0$, (10) can be reduced to

$$\exp(-\alpha) = \mathbb{E}\left[\kappa\gamma'(\alpha)\exp(-\kappa\gamma(\alpha))\right] \leq \mathbb{E}[\kappa\gamma'(\alpha)], \qquad (11)$$

where the inequality follows from the fact that $\gamma(\alpha) \geq 0$. Dividing both sides of (11) by $\gamma'(\alpha)$, we have

$$\mathbb{E}[\kappa] \geq \exp(-\alpha)/\gamma'(\alpha). \qquad (12)$$

Note that

$$\lim_{\alpha\to 0}\gamma'(\alpha) = \lim_{\alpha\to 0}(-e^{-\alpha} + 1) = 0^+. \qquad (13)$$

Thus, we have

$$\lim_{T_0\to\infty}\mathbb{E}[T_1] \geq \lim_{\alpha\to 0}\exp(-\alpha)/\gamma'(\alpha) = \infty. \qquad (14)$$

∎
(14) ###### Lemma 5 Under BU-ER updating, , , , are bounded. ###### Proof. We consider another genie-aided virtual process starting at time as follows. The source performs BU-ER after , and keeps tracking the battery level and genie-informed update result. If a status update is erased and the battery level is above zero, the sensor depletes its battery and repeats the process. The process stops when the battery level after a successful update becomes one for the first time. Denote the duration of the second stage as . For each sample path, we can see that the battery level under the new virtual process is always less than or equal to that under BU-ER, due to the extra energy depletion after and before . Since the update erasure patterns are the same under both policies, we must have . We note that at each updating time point between and , the battery level is above zero with probability ; and if the previous event happens, the update is successfully delivered with probability . Therefore, under the new virtual policy is a geometric random variable with parameter . Thus, its first and second moments are bounded. Therefore, and are bounded. Next, we note that under BU-ER updating, the AoI over is a renewal reward process, which resets to zero at . According to Proposition 3.4.6 in [44], is bounded. Therefore, is uniformly bounded for any . Similarly, we can show that is uniformly bounded. ∎ ###### Lemma 6 As , the expected long-term average AoI under BU-ER is upper bounded by . ###### Proof. First, we note that limT0→∞E[(T1+T2−SN(T1))2]2E[T1+T2]=limT0→∞E[(T1−SN(T1))2]+E[T22]+2E[T1−SN(T1)]E[T2]2E[T1]=0, (15) where the first equality follows from the fact that the two events and are independent, and the second equality follows from Lemma 4 and Lemma 5. Then, we note that under BU-ER, limT→∞E[R(T)T] ≤∑N(T1)i=1X2i+(T1+T2−SN(T1))22E[T1+T2]. Consider the channel state realization at the scheduled status updating epochs under BU (and BU-ER) updating. 
Let be the duration between the th and st epochs when the channel states are good and the corresponding update would be successful if it were sent. Then, is identical to . This is because there is no battery outage over , and whether an update is successful or not only depends on the channel state. Combining with (15), we have limT0→∞limT→∞E[R(T)T]≤limT0→∞E[∑N(T1)i=1X2i]2E[T1+T2] (16) ≤limT0→∞E[∑N(T1)+1i=1Y2i]2E[∑N(T1)+1i=1Yi−(∑N(T1)+1i=1Yi−T1)] (17) (18) where (18) follows from Wald’s equality and the fact that is a stopping time for for any given . Since , according to Lemma 4, limT0→∞E[N(T1)+1]E[Y1]≥limT0→∞E[T1]=∞. (19) Meanwhile, we have uniformly bounded for any based on Proposition 3.4.6 in [44]. Therefore, (18) is equal to , i.e., . ∎ Theorem 1, Lemma 3 and Lemma 6 imply the optimality of the BU updating, as summarized in the following theorem. ###### Theorem 2 (Optimality of BU Updating) Among all policies in , the BU updating policy is optimal when updating feedback is unavailable, i.e., limsupT→∞E[R(T)T] =2−p2p. ## Iv Status Updating With Perfect Feedback In this section, we consider the case where there exists perfect updating feedback to the sensor. With perfect updating feedback, the sensor has the choice to retransmit the update immediately or wait and update later, thus leading to optimal solutions different from the no feedback case. In order to facilitate the analysis, in the following, we focus on another class of online policies, termed as uniformly bounded policies. ### Iv-a A Lower Bound Define as the number of attempted updates (including the last successful one) between two successful updates at time and under any online policy in . Then, could be any integer number greater than or equal to one. ###### Definition 4 (Uniformly bounded policy) Under a policy , if: 1) there exists a function such that when , , , and , and 2) for any , then, is called a uniformly bounded policy. 
Roughly speaking, the first condition ensures that the source updates frequently so that the AoI at the destination does not grow unbounded in expectation; The second condition requires that the source does not update too frequently in any period of time. Such conditions are consistent with our intuition that the optimal policies should try to maintain a constant as much as possible. We note that uniformly bounded policies do not have to be renewal or Markovian in general. Denote the set of uniformly bounded policies as , then . We have the following lemma. ###### Lemma 7 For any , it must have and . The proof of this lemma is adapted from the proof of Theorem 3 in [23], and provided in Appendix -B. Besides, we also have the following observation. ###### Lemma 8 Under any policy , it must have ###### Proof. First, we observe that limT→∞E[∑N(T)+1i=1Ki]T≤limT→∞E0+E[∑N(T)+1i=1Ai]T (20) due to the energy causality constraint. We note that is a continuous-time martingale, where is a Poisson process with parameter one. Therefore, according to the optimal stopping time theorem [44], for any stopping time , we have , i.e., . Since is a stopping time associated with the past energy arrivals and channel fading realizations under any , we have . Plugging it into (20), we have limT→∞E[∑N(T)+1i=1Ki]T≤limT→∞E[SN(T)+1]T=1+limT→∞E[XN(T)+1]T=1, (21) where the last equality follows from Lemma 7. Besides, we note that under any online policy , is an i.i.d. geometric random variable with parameter . Therefore, applying Wald’s equality, we have limT→∞E[∑N(T)+1i=1Ki]T=limT→∞E[N(T)+1]E[Ki]T=limT→∞E[N(T)+1]Tp. (22) Combining with (21), we have . ∎ In order to obtain a lower bound on the AoI for all , we will first drop the energy causality constraint, and focus on those online policies that satisfy Lemma 8 and are also uniformly bounded. Denote the set of such policies as . Then, we have . 
Since not all policies in would be feasible if the energy causality constraint is imposed, the minimum expected long-term AoI achieved by policies in serves as a lower bound for policies in . ###### Theorem 3 Any policy is suboptimal to a renewal policy, i.e., a policy under which the successful updating points form a renewal process. Besides, under the renewal policy, only depends on . A sketch of the proof is as follows: For any given policy , we construct a renewal policy based on all possible sample paths under . Specifically, our approach is to first average over sample paths with the same , so that all factors other than that may affect can be averaged out. Then, we form a sophisticated linear combination of , and use it as the inter-update delay under the new policy. Such a policy is a renewal policy, and each renewal interval only depends on . Through rigorous stochastic analysis, we prove that the constructed renewal policy always outperforms the original policy. The detailed proof of Theorem 3 is provided in Appendix -C. In the following, we will focus on renewal policies in , and identify the AoI-optimal renewal policy. ###### Theorem 4 Under the optimal renewal policy in , equals a constant irrespective of , and the corresponding long-term average AoI equals . Proof:  Define as the length of the renewal interval if the number of attempts over that interval is , and . Then, is the probability that the th attempt is successful. Therefore, to minimize the expected long-term average AoI, it suffices to solve the following optimization problem: min{xk}∑∞k=1x2kpk2∑∞k=1xkpks.t.∞∑k=1xkpk≥1/p. (23) This is a non-linear fractional programming problem and can be solved using the parametric approach in [45]. Below we provide an alternative yet simpler approach. We note that ∑∞k=1x2kpk2∑∞k=1xkpk≥(∑∞k=1xkpk)22∑∞k=1xkpk=12∞∑k=1xkpk≥12p, (24) where the first inequality in (24) follows from Jensen’s inequality and the second one follows from the constraint in (23). 
The equalities hold if and only if for all and . Combining with , we have for all . Thus, the solution of (23) is , for all and the corresponding minimum is . Combining Theorem 3 and Theorem 4, we obtain a lower bound for all as follows. ###### Theorem 5 (Lower Bound for Channel with Perfect Feedback) For any policy , the expected long-term average AoI is lower bounded by . ### Iv-B Optimal Online Status Updating Motivated by the uniform structure of under the optimal renewal policy in Theorem 4, we define the Best-effort Uniform updating with Retransmission (BUR) policy as follows. ###### Definition 5 (BUR Updating) The sensor is scheduled to update the status at , . The sensor keeps sending updates at until an update is successful or until it runs out of battery; Otherwise, the sensor keeps silent until the next scheduled status update time. In order to prove that the BUR updating policy is optimal, we will first construct a sequence of policies which are sub-optimal to the BUR updating policy, and show that the limit of those suboptimal policies achieves the lower bound in Theorem 5. ###### Definition 6 (BUR with Energy Removal (BUR-ERT0)) The sensor performs BUR updating policy until the battery level after sending an update becomes zero for the first time, or until time , in which case the sensor depletes its battery after a successful update at ; After that, when the battery level becomes higher than or equal to one after a successful update for the first time, the sensor reduces the battery level to one, and then repeats the process. ###### Lemma 9 The BUR-ER updating policy is suboptimal to the BUR updating policy. ###### Proof. We note that the BUR-ER updating policy is identical to the BUR updating policy up to the energy removal step. Given the same energy harvesting sample path, the battery level under BUR is always higher than that under BUR-ER. Thus, BUR-ER incurs more infeasible status updating points. 
With the same channel fading profile, the instantaneous AoI under BUR-ER is always greater than or equal to that under BUR sample-path-wise. Thus, the expected time-average AoI under BUR-ER is greater than or equal to that under BUR. ∎ Note that BUR-ER updating is a renewal policy and Fig. 3 is an illustration of one renewal interval. In order to analyze the expected long-term average AoI, it suffices to analyze the expected average AoI over one renewal interval. Thus, we will focus on the first renewal interval, and show that the expected average AoI converges to the lower bound in Theorem 5. The renewal interval consists of two stages. The first stage starts at time zero and ends when the battery becomes empty for the first time, or until time , denoted as . We note that all scheduled updating points over are feasible. The second stage starts at and ends when the battery level after a successful update becomes higher than or equal to one for the first time after , denoted as . ###### Lemma 10 Under BUR-ER updating, ###### Proof. Consider a “random walk” . It starts with and evolves as , where is an i.i.d. Poisson random variable with parameter and is an i.i.d. geometric random variable with parameter . Denote the first zero-hitting time for as . Then and . We note that when , is identical to the battery level evolution process under the BUR-ER updating policy. For ease of exposition, define , and for . Then, we have E[e−αCn−γ(α)]=1. (25) Based on the definition of , and , we have E[e−αCn]=e1p(e−α−1)peα1−(1−p)eα. (26) Therefore, γ(α) =logE[e−αCn]=1p(e−α−1)+logpeα1−(1−p)eα. (27) Taking the derivative of (27), we get γ′(α)=−1pe−α+11−(1−p)eα. (28) Next, we define a process associated with as . We note that E[e−αΩk−γ(α)k|Ω1,…,Ωk−1]=E[e−α(Ωk−1+Ck)+−γ(α)k|Ω1,…,Ωk−1] ≤E[e−α(Ωk−1+Ck)−γ(α)k|Ω1,…,Ωk−1]=e−αΩk−1−γ(α)(k−1)E[e−αCk−γ(α)] =e−αΩk−1−γ(α)(k−1), (29) where (29) follows from (25). 
Therefore, is a super-martingale process, i.e., e−αΩ0 ≥E[e−αΩT1−γ(α)T1]≥E[1−(αΩT1+T1γ(α))]. Since and , combining with (28), we have E[T1] ≥limα→0+1−e−αΩ0γ(α)=limα→0+Ω0e−αΩ0γ′(α)=∞. (30) ###### Lemma 11 Under the BUR-ER updating policy, , are uniformly bounded. ###### Proof. Under the BUR-ER updating policy, the number of energy arrivals over (denoted as ) is a Poisson random variable with parameter . If the source has sufficient energy, the total number of attempts at time (denoted as ) is an i.i.d. geometric random variable with parameter . Therefore, if the battery is empty at time , it will increase to one or above after a successful update at time only when , which will happen with a constant probability. Thus, is a geometric random variable whose first and second moments are finite. ∎ ###### Lemma 12 As , the expected long-term average AoI under BUR-ER updating is upper bounded by . ###### Proof. First, we note that limT0→∞E[(T1+T2−SN(T1))2]2E[T1+T2]≤limT0→∞E[(T2+1p)2]2E[T1]=0, (31) where (31) follows from the fact that is upper bounded by under the BUR-ER policy, Lemma 10 and Lemma 11. Next, we note that the BUR-ER updating policy is a renewal policy and the expected long-term average AoI is equal to the expected average AoI over one renewal interval. Therefore, limT0→∞limT→∞E[R(T)T]≤limT0→∞E[∑N(T1)i=1X2i+(T1+T2−SN(T1))2]2E[T1+T2] (32) ≤limT0→∞E[∑N(T1)i=1X2i]2E[SN(T1)]=limT0→∞E[N(T1)]1p22E[N(T1)]1p=12p, (33) where (33) follows from (31) and the fact that for and . ∎ Lemma 12 indicates that the expected time-average AoI under the BUR-ER updating policy converges to the lower bound in Theorem 5 as goes to infinity. According to Lemma 9, BUR-ER is suboptimal to BUR. Therefore, the BUR updating policy also achieves the lower bound and is thus optimal. We summarize the optimality result in the next theorem. 
###### Theorem 6 (Optimality of BUR Updating) Among all policies in , the BUR updating policy is optimal when transmission feedback is available, i.e., limsupT→∞E[R(T)T] =12p. ## V Simulation Results In this section, we evaluate the performance of the proposed status updating policies through simulations. For each case, we generate sample paths for the Poisson energy harvesting process with and compute the sample average of the time average AoI over sample paths. ### V-a Status Updating Without Feedback First, we evaluate the BU updating policy in Fig. 4. We vary , and plot both the time average AoI as a function of and the corresponding lower bound in the figure. We observe that all time average AoI curves gradually approach the corresponding lower bound as . The results are consistent with the optimality of the proposed BU updating policy. Note that the time average AoI is monotonically decreasing as increases. This is intuitive since a channel with better quality, i.e., larger , yields a smaller time average AoI. Next, we evaluate the performance of the virtual BU-ER policies for different values of in Fig. 5. We fix and plot the time average AoI under BU-ER with . We also compare with a greedy updating policy and the BU updating policy. Under the greedy updating policy, the sensor updates instantly when one unit of energy arrives. As we observe in Fig. 5, the greedy policy results in the highest average AoI, and never approaches the lower bound. The time average AoI under the BU-ER updating policy is monotonically decreasing as increases, and gradually approaches that under the BU updating policy. This is consistent with Lemma 3 and Lemma 6: BU-ER updating is sub-optimal to BU updating, and eventually converges to it as increases. ### V-B Status Updating With Perfect Feedback Next, we evaluate the performance of the proposed online policies when perfect feedback is available to the sensor. In Fig. 
6, under the BUR updating policy, we plot the time average AoI with and the corresponding lower bound . We note that as , the time average AoI approaches the lower bound. Thus, BUR updating is optimal. We then evaluate the performance of the BUR-ER updating policy in Fig. 7. We fix , choose and plot the time average AoI as a function of . As a comparison, we also plot the time average AoI under the BU updating policy and the BUR updating policy in the figure. We note that the AoI under BUR-ER gradually decreases and approaches that under the BUR updating policy as increases, which is consistent with Lemma 9 and Lemma 12. The performance gap between the BU updating and the BUR updating indicates that exploiting updating feedback can significantly reduce the time average AoI in the system. ## Vi Conclusions In this paper, we considered the optimal online status update policies for an energy harvesting source in the presence of update erasures. We investigated both the case where no update feedback is available and the case where perfect feedback is available to the source. For each case, we first obtained a lower bound and then proved that the proposed status updating policy achieves the lower bound among a broadly defined class of policies. The optimality of the proposed status update policies was proved by constructing a sequence of virtual status updating policies which are sub-optimal to the original policy and asymptotically achieve the lower bound. The performance of the proposed policies was evaluated through simulations. We point out that although we only showed the optimality of the proposed policies within a subset of online policies, we conjecture that their optimality extends to all online policies. Generalizing these results is one direction for future work. Another direction we would like to pursue is to investigate the impact of update erasures on the optimal updating policy for an EH source with a finite battery. ### -a Proof of Theorem 1 Define , , and . 
Then, under any , the expected average AoI over can be expressed as E[R(T)T]=1TE⎡⎣N(T)∑i=0(STi−Si)22⎤⎦ (34) =12TE⎡⎣M(T)∑n=1pnl2n+⎛⎝1−M(T)∑n=1pn⎞⎠T2+M(T)∑n=1∞∑j=1(lTn+j−ln)2ppj⎤⎦, (35) where the first two terms inside the expectation in (35) correspond to the AoI contribution over , and the last term corresponds to the AoI contribution over any other . This can be explained as follows. With fixed updating epochs , depending on the realization of the channel state, the interval can be decomposed into segments, separated by successful updates. The probability of having , , as one of these segments equals , which corresponds to the event that the update at succeeds, and the next successful update is at . The corresponding AoI contribution over thus needs to be weighted by when the expected AoI is calculated. Since the AoI contribution over is always positive, in the following, we will drop it to obtain a lower bound, i.e., limT→∞E[R(T)T]≥limT→∞12TE⎡⎣p∞∑j=1pjM(T)∑n=1(lTn+j−ln)2⎤⎦ (36) ≥limT→∞12TE⎡⎢⎣p∞∑j=1pj1M(T)⎛⎝M(T)∑n=1(lTn+j−ln)⎞⎠2⎤⎥⎦ (37) =limT→∞12TE⎡⎣p∞∑j=1pj1M(T)(jT−j∑n=1lTn)2⎤⎦ (38) =limT→∞12p∞∑j=1pjj2E⎡⎣(T−¯lTj)2M(T)T⎤⎦, (39) where (38) is based on Jensen’s inequality, (39) is derived by considering the cases and separately, and . Since each term in the summation in (39) is positive, we can switch the order of limit and summation. We note that for any given , according to the definition of a bounded policy. Besides, for any policy that renders a finite expected average AoI, we must have almost surely according to Lemma 2. Therefore, according to the bounded convergence theorem [46], we have limT→∞E⎡⎣¯lTjM(T)⎤⎦=0,limT→∞E⎡⎣(¯lTj)2M(T)T⎤⎦=0. (40) Combining with (39), we have limT→∞E[R(T)T]≥12p∞∑j=1pjj2limT→∞E[TM(T)]≥12p∞∑j=1j2(1−p)j−1p=2−p2p, (41) where the first inequality follows from Lemma 1.
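As a rough numerical illustration of the lower bound (2−p)/(2p) derived above, here is a small Monte Carlo sketch. It is a simplified slotted reading of best-effort uniform updating, not the paper's exact continuous-time model: energy arrives as Poisson(1) units per unit slot, an update is attempted at every integer time whenever the battery is non-empty, and each attempt succeeds with probability p. The function name and slot conventions are my own.

```python
import math
import random

def poisson1(rng):
    """Sample a Poisson(1) count via Knuth's product-of-uniforms method."""
    threshold = math.exp(-1.0)
    k, prod = 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

def simulate_bu(p, horizon=100_000, seed=1):
    """Time-average AoI of a slotted best-effort uniform (BU) sketch.

    The age grows linearly within each unit slot; a successful update at
    a slot boundary resets it to zero. Returns the average of the AoI
    sawtooth over the horizon.
    """
    rng = random.Random(seed)
    battery, age, area = 0, 0.0, 0.0
    for _ in range(horizon):
        battery += poisson1(rng)   # energy harvested during the slot
        area += age + 0.5          # trapezoid: AoI grows from age to age + 1
        age += 1.0
        if battery >= 1:           # best-effort: attempt whenever feasible
            battery -= 1
            if rng.random() < p:   # erasure channel with success prob. p
                age = 0.0          # successful update resets the age
    return area / horizon

p = 0.6
print(round(simulate_bu(p), 3), "vs lower bound", round((2 - p) / (2 * p), 3))
```

With a long horizon the simulated average hovers near the bound, though convergence is slow precisely because the battery process is null recurrent, which is the difficulty the virtual BU-ER policies were introduced to handle.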
# When to use Compile I am new to Mathematica and have a general question: Working with matrices I have the impression that Compile can only be used if the elements of a matrix are ONLY Integers or ONLY Reals. No mixed types are allowed. But my matrices generally include some missing elements like m = {{1., 2.}, {Null, 3.}} Is there a way to compile m*m? Please note that matrix operations are in general very fast in Mathematica without using Compile. In fact, it is easily possible that you slow something down. Try to use built-in functions first. Your observation about the types is correct: Compile can only handle tensors of one type, either Real, Integer, etc.; mixed types are not possible. The m you use is not even compilable! –  halirutan May 9 '14 at 22:51 @halirutan - thanks, your answer spared me a lot of trial and error –  eldo May 10 '14 at 11:22
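For readers coming from the Python side, NumPy has the same packed-versus-mixed distinction, which may make halirutan's point concrete. This is a hedged analogy, not Mathematica advice: an array containing a `Null`-like entry falls back to `object` dtype (the analogue of an uncompilable mixed tensor), while encoding the missing entry as a floating-point `nan` keeps one homogeneous Real array on which fast elementwise arithmetic works.

```python
import numpy as np

# Mixed types: None forces an object-dtype array, the analogue of a
# mixed Integer/Real/Null tensor that Compile cannot handle.
mixed = np.array([[1.0, 2.0], [None, 3.0]])
assert mixed.dtype == object

# Homogeneous encoding: use nan as the "missing" sentinel so the array
# stays packed float64, the analogue of a compilable Real tensor.
m = np.array([[1.0, 2.0], [np.nan, 3.0]])
sq = m * m                      # elementwise product; nan propagates
print(sq.dtype, sq[1, 0])       # float64 nan
```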
# Differentials of order 2 or bigger that are equal to 0 ## Main Question or Discussion Point So I've seen in several lectures and explanations the idea that when you have an equation containing a relation between certain expressions ##x## and ##y##, if the expression ##x## approaches 0 (and ##y## is scaled down accordingly) then any power of that expression of order 2 or higher (##x^n## where ##n>1##) is treated as equal to 0, leaving only the relation between the 1st order term ##x## and ##y##. For example in a Poisson process the chance of an arrival in a time interval ##δ## where ##δ→0## is ##λδ## (where ##λ## is the arrival frequency). The chance of no arrivals during said interval is ##1-λδ## and the chance of 2 arrivals or more is 0, because the chance of getting ##n## arrivals in the interval ##δ## is ##(λδ)^n=λ^nδ^n## and ##δ^n=0## for ##n>1## when ##δ→0##. Now in the basic intuitive sense I can understand why this is the case: if a variable ##x## approaches 0 then the variable ##x^2## (or ##x^n## where ##n>1##) becomes negligibly small, and it becomes more and more negligible as ##x## becomes smaller and smaller. The thing is we are already dealing with infinitesimals in cases like the Poisson process, so why do we decide that ##x## is not negligible and ##x^2## is when both are arbitrarily small? I guess I'm asking for a mathematical basis for this claim, I'm sure there is one since it is so confidently used in many fields in math and physics. Thanks in advance to all the helpers. 
PeroK: If you take a Taylor series: ##f(x) = \sum_{n= 0}^{\infty}\frac{f^{(n)}(x_0)(x-x_0)^n}{n!} = f(x_0) + (x-x_0)f'(x_0) + \frac{(x-x_0)^2 f''(x_0)}{2} + \dots## Then, if we assume the function ##f## is well-behaved - in the sense that all its derivatives are bounded - we have: ##f(x) \approx f(x_0) + (x-x_0)f'(x_0)## (when ##x-x_0 << 1##) You can see that there will be exceptions to this for functions where the ##n^{th}## derivatives are unbounded, but for the sort of functions normally considered in physics this is not an issue. StoneTemplePython: In general, having multiple representations of the same phenomenon can be quite helpful, so with that in mind, I wrote the below. - - - - - note: if you have a function that is twice differentiable, you can write it as a Taylor polynomial with a quadratic remainder (I'd suggest using Lagrange form). Being quadratic, the remainder is ##O(\delta^2)##. The linear approximation gives you the probability of one arrival. (In my view, the probability of zero arrivals is just an afterthought -- it is the complement of total probability of at least one arrival). Your question really is: why is a linear approximation the best over a small enough neighborhood for a function that is (at least) twice differentiable. - - - - In Poisson process language: why is the linear approximation of the probability of positive arrivals (i.e. approximating it by looking at probability of only 1 arrival) arbitrarily close to the actual total probability of positive arrivals, in some small enough time neighborhood? - - - - Frequently fully worked examples help people a lot. 
So a more granular view is: specific to the exponential function (whose re-scaled power series gives you the Poisson PMF), you may recall that one way to prove the series for the exponential function is absolutely convergent involves invoking a geometric series after a finite number of terms (to upper bound the remaining infinite series). In the nice case of ##0 \lt \delta \lt 1##, you have ## \delta \leq \exp\big(\delta\big) -1 = \delta + \big(\frac{\delta^2}{2!} +\frac{\delta^3}{3!} + \frac{\delta^4}{4!}+ ... \big) \leq \delta +\delta^2 + \delta^3 + \delta^4 + ... = g(\delta) = \frac{\delta}{1 - \delta} ## now consider that for small enough ##\delta##, we have ##\frac{\delta}{1 - \delta} \approx \delta##. Play around with some numbers and confirm this for yourself. E.g. what about ##\delta = \frac{1}{100,000}##? This is a small number, but hardly "infinitesimal". If you want to have some fun with it, consider what portion of the geometric series is represented by ##\delta##. I.e. look at ## \frac{\delta}{\big(\delta + \delta^2 + \delta^3 + \delta^4 + ... \big)} = \frac{\delta }{\big(\frac{\delta}{1-\delta}\big)} = 1 - \delta## This is why, when you look at ##\delta = \frac{1}{100,000}##, 99.999% of the value of ##g(\delta)## is in the very first term of the series. Taking advantage of non-negativity, we can see that since the upper bound is well approximated by ##\delta## when ##\delta## is small enough, and since ##\Big(\exp\big(\delta\big) -1\Big)## contains that term, it must be approximated by it as well. Put differently, we see that the value of ##\big(\frac{\delta^2}{2!} +\frac{\delta^3}{3!} + \frac{\delta^4}{4!}+ ...\big)## is irrelevant, for small enough ##\delta##. Put one more way: you tell me what your cutoff / level of precision is, and I can come up with a small enough real-valued ##\delta## such that you can ignore all those higher, ##O(\delta^2)##, terms. 
If you keep asking for ever finer levels of precision, this back and forth eventually results in a limit, but the idea of getting an extremely good approximation is the main idea here. The thing is we are already dealing with infinitesimals in cases like the Poisson process, so why do we decide that ##x## is not negligible and ##x^2## is when both are arbitrarily small? I guess I'm asking for a mathematical basis for this claim This isn't really true. A Poisson process can be thought of as a limiting case of a shrinking Bernoulli process. It can also be thought of as a counting process with exponentially distributed inter-arrival times. Infinitesimals aren't needed. While the limit of a Bernoulli is a good interpretation, don't overthink it... the counting process interpretation can be quite enlightening. FactChecker: Linear approximations are so much easier to deal with than the higher order approximations that it is worth considering using them as an estimate. They will give you the value of a function and tell you how the function changes locally in each direction. That is often enough. And the theory of linear functions (simple, simultaneous, or multivariable) is reasonably deep and informative. Going one step higher to quadratic approximations opens a real can of worms. In general, having multiple representations of the same phenomenon can be quite helpful, so with that in mind, I wrote the below. Put in simple and general terms, given an expression containing both ##\delta## and higher orders of ##\delta##, as ##\delta## becomes arbitrarily small, the ##\delta## portion takes up more and more "weight" of the overall expression and the contribution of the higher order terms to the value of the expression becomes negligible, and thus can be ignored. Also I guess I should be careful with throwing the expression "infinitesimal" around and start thinking about calculus as a whole more in terms of arbitrarily small quantities instead of infinitesimal ones. 
Thanks for the help!
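The point of the thread can be checked numerically. This small sketch (my own, not from a poster) confirms that for a small but ordinary real ##\delta## the first-order term carries essentially all of ##e^\delta - 1## and of the geometric bound ##\delta/(1-\delta)##, and that in a Poisson process the probability of two or more arrivals in a short interval is negligible relative to the probability of exactly one:

```python
import math

delta = 1e-5   # small, but hardly "infinitesimal"
lam = 1.0      # arrival rate of the Poisson process

# e^delta - 1 = delta + delta^2/2! + ... : the first-order term carries
# essentially all of the value once delta is small.
ratio = (math.exp(delta) - 1) / delta
assert abs(ratio - 1) < 1e-4          # relative error is about delta/2

# Geometric upper bound from the thread: delta/(1-delta) vs delta.
g = delta / (1 - delta)
assert abs(g / delta - 1) < 1e-4      # again within about delta

# Poisson interval probabilities: P(>= 2 arrivals) is negligible
# compared with P(exactly 1) as the interval shrinks.
p1 = lam * delta * math.exp(-lam * delta)
p2_plus = 1 - math.exp(-lam * delta) * (1 + lam * delta)
print(p2_plus / p1)                   # about lam*delta/2, vanishing with delta
assert p2_plus / p1 < 1e-4
```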
# Math Help - Diagonalizable tensor product 1. ## Diagonalizable tensor product Hi everyone! I have to prove that if T : V $\rightarrow$ V and S : V $\rightarrow$ V are diagonalizable, then T $\otimes$ S is diagonalizable. 2. ## Re: Diagonalizable tensor product Choose an eigenbasis for each of $T,S$ and then form the standard tensor basis--I'm sure you'll find this is an eigenbasis for $T\otimes S$.
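The hint in the reply can be made concrete in coordinates: if $v$ is a $\lambda$-eigenvector of $T$ and $w$ a $\mu$-eigenvector of $S$, then $v \otimes w$ is a $\lambda\mu$-eigenvector of $T \otimes S$, so tensor products of the two eigenbases form an eigenbasis of $T \otimes S$. A small numeric sketch with 2x2 matrices (illustrative choices of my own, not from the thread):

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def kron_mat(A, B):
    """Kronecker product of square matrices, the coordinate form of T (x) S."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def kron_vec(v, w):
    """Coordinate form of the tensor product of vectors v (x) w."""
    return [vi * wj for vi in v for wj in w]

# T with eigenpairs (1, [1,-1]) and (3, [1,1]); S with (-1, [1,-1]) and (1, [1,1])
T = [[2, 1], [1, 2]]
S = [[0, 1], [1, 0]]
eig_T = [(1, [1, -1]), (3, [1, 1])]
eig_S = [(-1, [1, -1]), (1, [1, 1])]

TS = kron_mat(T, S)
for lam, v in eig_T:
    for mu, w in eig_S:
        vw = kron_vec(v, w)
        # (T (x) S)(v (x) w) = (lam*mu) (v (x) w): the four vectors v (x) w
        # form an eigenbasis, so T (x) S is diagonalizable.
        assert matvec(TS, vw) == [lam * mu * c for c in vw]
print("tensor products of eigenvectors diagonalize T (x) S")
```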
Bourgain, Gamburd, Sarnak on Markoff triples Such a great colloquium last week by Peter Sarnak, this year’s Hilldale Lecturer, on his paper with Bourgain and Gamburd.  My only complaint is that he promised to talk about the mapping class group and then barely did!  So I thought I’d jot down what their work has to do with mapping class groups and spaces of branched covers. Let E be a genus 1 Riemann surface — that is, a torus — and O a point of E.  Then pi_1(E-O) is just a free group on two generators, whose commutator is (the conjugacy class of) a little loop around the puncture.  If G is a group, a G-cover of E branched only at O is thus a map from pi_1(E-O) to G, which is to say a pair (a,b) of elements of G.  Well, such a pair considered up to conjugacy, since we didn’t specify a basepoint for our pi_1.  And actually, we might as well just think about the surjective maps, which is to say the connected G-covers. Let’s focus on the case G = SL_2(Z).  And more specifically on those maps where the puncture class is sent to a matrix of trace -2.  Here’s an example:  we can take $a_0 = \left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right]$ $b_0 = \left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right]$ You can check that in this case the puncture class has trace -2; that is, it is the negative of a unipotent matrix.  Actually, I gotta be honest, these matrices don’t generate SL_2(Z); they generate a finite-index subgroup H of SL_2(Z), its commutator. Write S for the set of all conjugacy classes of pairs (a,b) of matrices which generate H and have commutator with trace -2.  It turns out that this set is the set of integral points of an affine surface called the Markoff surface:  namely, if we take x = Tr(a)/3, y = Tr(b)/3, and z = Tr(ab)/3, then the three traces obey the relation $x^2 + y^2 + z^2 = 3xyz$ and indeed every solution to this equation corresponds to an element of S. 
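The example can be verified numerically. A small pure-Python sketch (only 2x2 integer matrix arithmetic, nothing assumed beyond the post):

```python
def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def inv(A):
    """Inverse of a 2x2 integer matrix with determinant 1 (i.e. in SL_2)."""
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def tr(A):
    return A[0][0] + A[1][1]

a0 = [[1, 1], [1, 2]]
b0 = [[2, 1], [1, 1]]

# puncture class = commutator a b a^{-1} b^{-1}; its trace should be -2
comm = mul(mul(a0, b0), mul(inv(a0), inv(b0)))
assert tr(comm) == -2

# trace coordinates x, y, z land on the Markoff surface x^2+y^2+z^2 = 3xyz
x, y, z = tr(a0) / 3, tr(b0) / 3, tr(mul(a0, b0)) / 3
assert x*x + y*y + z*z == 3*x*y*z        # here (x, y, z) = (1, 1, 2)

# the mapping class group move (a, b) -> (ab, b) literally preserves the
# commutator (ab b (ab)^{-1} b^{-1} = a b a^{-1} b^{-1}), hence its trace
a1, b1 = mul(a0, b0), b0
comm1 = mul(mul(a1, b1), mul(inv(a1), inv(b1)))
assert tr(comm1) == -2
print("checks pass: trace -2 commutator, Markoff point (1, 1, 2)")
```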
So the integral points on the Markoff surface are acted on by an infinite discrete group.  Which if you just look at the equation seems like kind of a ridiculous miracle.  But in the setting of H-covers is very natural.  Because there’s a natural group acting on S: namely, the mapping class group Γ of type (1,1).  This group’s whole purpose in life is to act on the fundamental group of a once-punctured torus!  (For readers unfamiliar with mapping class groups, I highly recommend Benson Farb and Dan Margalit’s wonderful textbook.)   So you start with a surjection from pi_1(E-O) to H, you compose with the action of  Γ, and you get a new homomorphism.  The action of  Γ on pi_1(E-O) is only outer, but that’s OK, because we’re only keeping track of conjugacy classes of homomorphisms from pi_1(E-O) to H. So Γ acts on S; and now the lovely theorem is that this action is transitive. I don’t want to make this mapping class group business sound more abstract than it is.  Γ isn’t a mystery group; it acts on H_1(E-O), a free abelian group of rank 2, which gives a map from Γ to SL_2(Z), which turns out to be an isomorphism.  What’s more, the action of Γ on pairs (a,b) is completely explicit; the standard unipotent generators of SL_2(Z) map to the moves (a,b) -> (ab,b) (a,b) -> (a,ab) (Sanity check:  each of these transformations preserves the conjugacy class of the commutator of a and b.) Sarnak, being a number theorist, is interested in strong approximation: are the integral solutions of the Markoff equation dense in the adelic solutions?   In particular, if I have a solution to the Markoff equation over F_p — which is to say, a pair (a,b) in SL_2(F_p) with the right commutator — can I lift it to a solution over Z? Suppose I have a pair (a,b) which lifts to a pair (a,b).  We know (a,b) = g(a_0,b_0) for some g in Γ.  Thus (a,b) = g(a_0,b_0).  In other words, if strong approximation is true, Γ acts transitively on the set S_p of Markoff solutions mod p.  
And this is precisely what Bourgain, Gamburd, and Sarnak conjecture.  (In fact, they conjecture more:  that the Cayley-Schreier graph of this action is an expander, which is kind of a quantitative version of an action being transitive.)  One reason to believe this:  if we replace F_p with C, we replace S with the SL_2(C) character variety of pi_1(E-O), and Goldman showed long ago that the action of mapping class groups on complex character varieties of fundamental groups was ergodic; it mixes everything around very nicely. Again, I emphasize that this is on its face a question of pure combinatorial group theory.  You want to know if you can get from any pair of elements in SL_2(F_p) with negative-unipotent commutator to any other via the two moves above.  You can set this up on your computer and check that it holds for lots and lots of p (they did.)  But it’s not clear how to prove this transitivity for all p! They’re not quite there yet.  But what they can prove is that the action of Γ on S_p has a very big orbit, and has no very small orbits. Now that G is the finite group SL_2(F_p), we’re in my favorite situation, that of Hurwitz spaces.  The mapping class group Γ is best seen as the fundamental group of the moduli stack M_{1,1} of elliptic curves.  So an action of Γ on the finite set S_p is just a cover H_p of M_{1,1}.  It is nothing but the Hurwitz space parametrizing maps (f: X -> E) where E is an elliptic curve and f an SL_2(F_p)-cover branched only at the origin.  What Bourgain, Gamburd, and Sarnak conjecture is that H_p is connected. If you like, this is a moduli space of curves with nonabelian level structure as in deJong and Pikaart.  Or, if you prefer (and if H_p is actually connected) it is a noncongruence modular curve corresponding to the stabilizer of an element of S_p in Γ = SL_2(Z).  This stabilizer is in general going to be a noncongruence subgroup, except it is a congruence subgroup in the more general sense of Thurston. 
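Returning to the "set this up on your computer" form of the question: on the trace coordinates, the moves act by the Vieta involutions x ↦ 3yz − x (and similarly in y and z), so one can BFS through the nonzero solutions of x² + y² + z² = 3xyz mod p and see whether a single orbit covers everything. A toy sketch of my own, not the Bourgain–Gamburd–Sarnak computation:

```python
from collections import deque
from itertools import product

def solutions(p):
    """Nonzero points of the Markoff surface x^2+y^2+z^2 = 3xyz over F_p."""
    return {t for t in product(range(p), repeat=3)
            if (t[0]**2 + t[1]**2 + t[2]**2 - 3*t[0]*t[1]*t[2]) % p == 0
            and t != (0, 0, 0)}

def vieta_orbit(p, start=(1, 1, 1)):
    """BFS orbit of `start` under the three Vieta involutions mod p.

    Each move replaces one coordinate by (3 * product of the other two)
    minus itself, which preserves the Markoff equation.
    """
    seen, queue = {start}, deque([start])
    while queue:
        x, y, z = queue.popleft()
        for nxt in (((3*y*z - x) % p, y, z),
                    (x, (3*x*z - y) % p, z),
                    (x, y, (3*x*y - z) % p)):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for p in (3, 5, 7, 11):
    sols, orbit = solutions(p), vieta_orbit(p)
    assert orbit <= sols                  # the moves preserve the surface
    print(p, len(orbit), len(sols), orbit == sols)
```

Note (1, 1, 1) always lies on the surface since 3 = 3·1·1·1, so it is a convenient starting point; the printed last column is the transitivity check for each p.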
This seems like an interesting family of algebraic curves! What, if anything, can we say about it?

Is there a noncommutative Siegel's Lemma?

Let f be the smallest function satisfying the following: suppose we are given two matrices A and B in SL_3(Z), with all entries at most N. If there is a word w(A,B,A^{-1},B^{-1}) which vanishes in SL_3(Z), then there is a word w'(A,B,A^{-1},B^{-1}) of length at most f(N) which vanishes in SL_3(Z). What are the asymptotics of f(N)?

The reason for the title is that, if SL_3(Z) is replaced by Z^n, this is Siegel's lemma: if two (or, for that matter, k) vectors in [-N..N]^n are linearly dependent, then there is a linear dependency whose height is polynomial in N. (Here k and n are constants and N is growing.)

I don't have any particular need to know this; the question came up in conversation at the very stimulating MSRI Thin Groups workshop just concluded. Sarnak's notes are an excellent guide to the topics discussed there.
Q3. The following real numbers have decimal expansions as given below. In each case, decide whether they are rational or not. If they are rational, and of the form p/q, what can you say about the prime factors of q?

(3) $43.\overline{123456789}$
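One way to check the answer (a sketch; the helper is mine) is to let Python's fractions module do the reduction. A purely repeating block of 9 digits sits over $10^9 - 1$, so $q$ must divide $999999999$ and in particular has no factor 2 or 5.

```python
from fractions import Fraction

# 43.123456789123456789... = 43 + 123456789/(10^9 - 1)
x = 43 + Fraction(123456789, 10**9 - 1)

def prime_factors(n):
    # naive trial division, fine for numbers this small
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

print(x)                              # the reduced fraction p/q
print(prime_factors(x.denominator))   # prime factors of q
```

The reduced denominator is 111111111 = 3^2 × 37 × 333667, all coprime to 10, as expected for a purely periodic expansion.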
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/2437
Title: A Comparative Analysis of Predictive Learning Algorithms on High-Dimensional Microarray Cancer Data
Authors: Bill, Jo; Fokoue, Ernest
Keywords: HDLSS; Machine Learning Algorithm; Pattern Recognition; Classification; Prediction; Regularization; Discriminant Analysis; Support Vector Machine; Kernels; Cross Validation; Microarray Cancer Data
Issue Date: 2014
Publisher: Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
Citation: Serdica Journal of Computing, Vol. 8, No 2, (2014), pp. 137-168
Abstract: This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ of observations to dimensions is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating several extensions of cross-validation to be investigated to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
URI: http://hdl.handle.net/10525/2437
ISSN: 1312-6555
Appears in Collections: Volume 8 Number 2
# (thesis 1/4) Accelerated sensor fusion algorithm for POSE estimation of drones

## An Accelerated and Asynchronous Rao-Blackwellized Particle filter

Posted on August 16, 2017 by Ruben Fiszel

This post is Part I out of IV of my master thesis at the DAWN lab, Stanford, under the supervision of Prof. Kunle Olukotun and Prof. Odersky. The central themes of this thesis are sensor fusion and Spatial, a hardware description language (Verilog is also one, but tedious). This part is about an application of hardware acceleration: sensor fusion for drones. Part II will be about scala-flow, a library made during my thesis as a development tool for Spatial, inspired by Simulink. This library eased the development of the filter but is also intended to be general purpose. Part III is about the development of an interpreter for Spatial. Finally, Part IV is about the Spatial implementation of the asynchronous Rao-Blackwellized Particle filter presented in Part I. If you are only interested in the filter, you can skip the introduction.

# Introduction

## The decline of Moore's law

Moore's law1 has prevailed in the computing world for the last four decades: with each new generation of processors comes the promise of exponentially faster execution. However, transistors are reaching the scale of 10 nm, only about one hundred times bigger than an atom. At that scale, the quantum rules of physics governing the infinitesimally small start to manifest themselves. In particular, quantum tunneling moves electrons across classically insurmountable barriers, making computations approximate and resulting in a non-negligible fraction of errors.

## The rise of Hardware

Hardware and Software here respectively describe programs that are a hardware description, synthesized as circuits, and programs that are executed as code on a general-purpose processing unit. The dichotomy is not very well defined, and we can think of it as a spectrum.
General-purpose computing on graphics processing units (GPGPU) is in between, in the sense that it is general purpose but relevant only for embarrassingly parallel tasks2, and very efficient when used well. GPUs have benefited from heavy investment and many generations of iteration, and hence, for some tasks, they can match or even surpass hardware such as field-programmable gate arrays (FPGAs). The option of custom hardware implementations has always been there, but application-specific integrated circuits (ASICs) have prohibitive upfront costs (near \$100M for a tapeout). Reprogrammable hardware like FPGAs has only been used marginally, and in some specific industries like high-frequency trading. But now Hardware is the next natural step to increase performance, at least until a computing revolution happens (e.g. quantum computing), though that sounds unrealistic in the near future. Nevertheless, hardware does not enjoy the same quality of tooling, languages and integrated development environments (IDEs) as software. This is one of the motivations behind Spatial: bridging the gap between software and hardware by abstracting control flow through language constructs.

## Hardware as companion accelerators

In most cases, hardware would be inappropriate: running an OS as hardware would be impracticable. Nevertheless, as a companion to a central processing unit (CPU, also called "the host"), it is possible to get the best of both worlds: the flexibility of software on a CPU with the speed of hardware. In this setup, the hardware is considered an "accelerator" (hence the term "hardware accelerator"): it accelerates the most demanding subroutines of the CPU. This companionship is already present in modern desktop computers in the form of GPUs for shader operations and sound cards for complex sound transformation/output.

## The right metric: Perf/Watt

The right evaluation metric for accelerators is performance per unit of energy, as measured in FLOPS per Watt.
This is a fair metric for comparing different hardware and architectures because it reveals their intrinsic properties as computing elements. If the metric were solely performance, it would be enough to stack the same hardware and eventually reach the scale of a supercomputer. Performance per dollar is not a good metric either, because it does not account for the cost of energy at runtime. Hence, Perf/Watt is a fair metric to compare architectures.

## Spatial

At the DAWN lab, under the lead of Prof. Olukotun and his grad students, a Hardware Description Language (HDL) called Spatial is being developed, implemented as an embedded Scala DSL together with its compiler, to program hardware in a higher-level, more user-friendly, more productive language than Verilog. In particular, control flows are automatically generated when possible. This should enable software engineers to unlock the potential of hardware. A custom CGRA, Plasticine, has been developed in parallel to Spatial. It leverages some recurrent patterns, such as the parallel patterns, and it aims at being the most efficient reprogrammable architecture for Spatial. The upfront cost is large, but once at a big enough scale, Plasticine could be deployed as an accelerator in a wide range of use cases, from the most demanding server applications to embedded systems with heavy computing requirements.

## Embedded systems and drones

Embedded systems are limited by the amount of power at their disposal in the battery and may also have size constraints. At the same time, especially for autonomous vehicles, there is a great need for computing power. Thus, developing drone applications with Spatial demonstrates the advantages of the platform. As a matter of fact, the filter implementation presented here was only made possible because it runs on a hardware accelerator; it would be unfeasible on a more conventional microcontroller.
Particle filters, the family of filters which encompasses the types developed here, are very computationally expensive and hence very seldom used on drones.

# Sensor fusion algorithm for POSE estimation of drones: Asynchronous Rao-Blackwellized Particle filter

POSE is the combination of the position and orientation of an object. POSE estimation is important for drones: it is a subroutine of SLAM (Simultaneous Localization And Mapping), and it is a central part of motion planning and motion control. More accurate and more reliable POSE estimation results in more agile, more reactive and safer drones. Drones are an intellectually stimulating topic, and in the near future they might also see their usage increase exponentially. In this context, developing and implementing new filters for POSE estimation is important both for the field of robotics and to demonstrate the importance of hardware acceleration. Indeed, the best and last filter presented here is only made possible because it can be hardware accelerated with Spatial. Furthermore, particle filters are embarrassingly parallel algorithms, so they can leverage the potential of a dedicated hardware design. The Spatial implementation will be presented in Part IV.

Before expanding on the Rao-Blackwellized Particle Filter (RBPF), we introduce several other filters for POSE estimation of highly dynamic objects: the Complementary filter, the Kalman Filter, the Extended Kalman Filter, the Particle Filter and finally the Rao-Blackwellized Particle filter. They range from the most conceptually simple to the most complex. This order is justified because complex filters aim to alleviate some of the flaws of their simpler counterparts; it is important to understand which flaws, and how. The core of the problem we are trying to solve is to track the current position of the drone given the noisy measurements of the sensors.
It is a challenging problem because a good algorithm must take into account that the measurements are noisy and that the transformations applied to the state are non-linear, because of the orientation components of the state. Particle filters are efficient at handling non-linear state transformations, and that is the intuition behind the development of the RBPF. All the following filters are developed and tested in scala-flow. scala-flow will be expanded on in Part II of this thesis. For now, we will focus on the model and the results, and leave the implementation details for later.

## Drones and collision avoidance

The original motivation for the development of accelerated POSE estimation is the task of collision avoidance by quadcopters, in particular a collision avoidance algorithm developed at the ASL lab and demonstrated here (https://youtu.be/kdlhfMiWVV0), where the drone avoids a sword attack from its creator. At first, the plan was to accelerate the whole algorithm, but it was found that one of the most demanding subroutines was POSE estimation. Moreover, it was desirable to increase the processing rate of the filter so that it could match the input with the fastest sampling rate: the inertial measurement unit (IMU), containing an accelerometer, a gyroscope and a magnetometer. The Flamewheel F450 is the typical drone in this category. It is surprisingly fast and agile: with the proper commands, it can generate enough thrust to avoid any incoming object in a very short lapse of time.

## Sensor fusion

Sensor fusion is the combination of sensory data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. In the context of drones, it is very useful because it enables us to combine many imprecise sensor measurements into a more precise one, for instance obtaining precise positioning from two less precise GPS units (a dual-GPS setting).
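The variance reduction behind this can be checked exactly on a toy distribution (a sketch unrelated to any particular sensor): averaging n independent copies of a random variable divides the variance by n.

```python
from itertools import product
from fractions import Fraction

# X is +1 or -1 with probability 1/2 each: E[X] = 0, Var(X) = 1.
# Enumerate all outcomes of n independent copies and compute Var(mean) exactly.
def var_of_mean(n):
    outcomes = list(product([-1, 1], repeat=n))
    p = Fraction(1, len(outcomes))          # each outcome equally likely
    means = [Fraction(sum(o), n) for o in outcomes]
    mu = sum(means) * p                     # E[mean] = 0
    return sum((m - mu) ** 2 for m in means) * p

for n in [1, 2, 4, 8]:
    print(n, var_of_mean(n))                # 1, 1/2, 1/4, 1/8
```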
It also allows us to combine sensors with different sampling rates: typically, precise sensors with a low sampling rate and less precise sensors with a high sampling rate. Both cases will be relevant here. A fundamental explanation of why this is possible at all comes from the behavior of the sample mean (the computation underlying the central limit theorem): one sample from a distribution with a low variance is as good as $$n$$ samples from a distribution with a variance $$n$$ times higher.

$\mathbb{V}(X_i)=\sigma^2 ~~~~~ \mathbb{E}(X_i) = \mu$

$\bar{X} = \frac{1}{n}\sum X_i$

$\mathbb{V}(\bar{X}) = \frac{\sigma^2}{n} ~~~~~ \mathbb{E}(\bar{X}) = \mu$

## Notes on notation and conventions

The reference frame by default is the fixed world frame.

• $$\mathbf{x}$$ designates a vector
• $$x_t$$ is the random variable x at time t
• $$x_{t1:t2}$$ is the sequence of the random variable x between t1 and t2, both included
• $$x^{(i)}$$ designates the random variable x of the arbitrary particle i
• $$\hat{x}$$ designates an estimated variable

## POSE

POSE estimation is the task of estimating the position and orientation of an object through time. It is a subroutine of Simultaneous Localization And Mapping (SLAM). We can formalize the problem as: at each timestep, find the best expectation of a function of the hidden state variables (position and orientation), from their initial distribution and the history of observable random variables (such as sensor measurements).

• The state $$\mathbf{x}$$
• The function $$g(\mathbf{x})$$ such that $$g(\mathbf{x}_t) = (\mathbf{p}_t, \mathbf{q}_t)$$ where $$\mathbf{p}$$ is the position and $$\mathbf{q}$$ is the attitude as a quaternion.
• The observable variable $$\mathbf{y}$$, composed of the sensor measurements $$\mathbf{z}$$ and the control input $$\mathbf{u}$$

The algorithm inputs are:

• control inputs $$\mathbf{u}_t$$ (the commands sent to the flight controller)
• sensor measurements $$\mathbf{z}_t$$ coming from different sensors with different sampling rates
• information about the sensors (sensor measurement biases and covariance matrices)

## Trajectory data generation

The difficulty with using real flight data is that you need to know the true trajectory, and you need enough data to check the efficiency of the filters. To avoid these issues, the flight data is simulated through a model of trajectory generation from [1]. Data generated this way is called synthetic data. The algorithm inputs are the motion primitives defined by the quadcopter's initial state, the desired motion duration, and any combination of components of the quadcopter's position, velocity and acceleration at the motion's end. The algorithm is essentially a closed-form solution for the given primitives; the closed-form solution minimizes a cost function related to the input aggressiveness. The bulk of the method is that a differential equation representing the difference of position, velocity and acceleration between the starting and ending states is solved with Pontryagin's minimum principle, using the appropriate Hamiltonian. Then, from that closed-form solution, a per-axis cost can be calculated to pick the "least aggressive" trajectory out of different candidates. Finally, the feasibility of the trajectory is computed using the constraints of maximum thrust and body rate (angular velocity) limits. For the purpose of this work, a Scala implementation of the model was written. Then, some keypoints containing Gaussian components for the position, velocity, acceleration, and duration were tried until a feasible set of keypoints was found.
This method of data generation is both fast and a good enough approximation of the actual trajectories that a drone would perform in the real world.

## Quaternion

Quaternions are extensions of complex numbers with 3 imaginary parts. Unit quaternions can be used to represent orientation, also referred to as attitude. Quaternion algebra makes rotation composition simple, and quaternions avoid the issue of gimbal lock.3 In all the filters presented, quaternions represent the attitude.

$\mathbf{q} = (q.r, q.i, q.j, q.k)^T = (q.r, \boldsymbol{\varrho})^T$

Quaternion rotation composition is $$q_2 q_1$$, which results in $$q_1$$ being rotated by the rotation represented by $$q_2$$. From this, we can deduce that an angular velocity integrated over time is simply $$q^t$$ if $$q$$ is the local quaternion rotation per unit of time. The product of two quaternions (also called the Hamilton product) is computable by regrouping the same types of imaginary and real components together, according to the identity:

$i^2=j^2=k^2=ijk=-1$

Rotation of a vector by a quaternion is done by $$q v q^*$$, where $$q$$ is the quaternion representing the rotation, $$q^*$$ its conjugate and $$v$$ the vector to be rotated. The conjugate of a quaternion is:

$q^* = - \frac{1}{2} (q + iqi + jqj + kqk)$

The distance between two quaternions, useful as an error metric, is defined by the squared Frobenius norm of the difference of the attitude matrices [2]:

$\| A(\mathbf{q}_1) - A(\mathbf{q}_2) \|^2_F = 6 - 2 Tr [ A(\mathbf{q}_1)A^T(\mathbf{q}_2) ]$

where

$A(\mathbf{q}) = (q.r^2 - \| \boldsymbol{\varrho} \|^2) I_{3 \times 3} + 2\boldsymbol{\varrho} \boldsymbol{\varrho}^T - 2q.r[\boldsymbol{\varrho} \times]$

$[\boldsymbol{\varrho} \times] = \left( \begin{array}{ccc} 0 & -q.k & q.j \\ q.k & 0 & -q.i \\ -q.j & q.i & 0 \\ \end{array} \right)$

## Helper functions and matrices

We introduce some helper matrices.

• $$\mathbf{R}_{b2f}\{\mathbf{q}\}$$ is the body-to-fixed vector rotation matrix.
It transforms vectors in the body frame to the fixed world frame. It takes as parameter the attitude $$\mathbf{q}$$.

• $$\mathbf{R}_{f2b}\{\mathbf{q}\}$$ is its inverse matrix (from fixed to body).
• $$\mathbf{T}_{2a} = (0, 0, 1/m)^T$$ is the scaling from thrust to acceleration (dividing by the mass of the drone, $$\mathbf{F} = m\mathbf{a} \Rightarrow \mathbf{a} = \mathbf{F}/m$$, and multiplying by the unit vector $$(0, 0, 1)$$)
• $R2Q(\boldsymbol{\theta}) = (\cos(\| \boldsymbol{\theta} \| / 2), \sin(\| \boldsymbol{\theta} \| / 2) \frac{\boldsymbol{\theta}}{\| \boldsymbol{\theta} \|} )$ is a function that converts a local rotation vector $$\boldsymbol{\theta}$$ to a local quaternion rotation. The definition of this function comes from converting $$\boldsymbol{\theta}$$ to an axis-angle representation, and then to a quaternion.
• $Q2R(\mathbf{q}) = (q.i*s, q.j*s, q.k*s)$ is its inverse function, where $$n = \arccos(q.r)*2$$ and $$s = n/\sin(n/2)$$
• $$\Delta t$$ is the lapse of time between t and the next tick (t+1)

## Model

The drone is assumed to have rigid-body physics. It is subject to gravity and its own inertia. A rigid body is a solid body in which deformation is zero or so small that it can be neglected: the distance between any two given points on a rigid body remains constant in time regardless of the external forces exerted on it. This enables us to summarize the forces from the rotors as a thrust oriented in the direction normal to the plane formed by the 4 rotors, and an angular velocity.
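As a side note, the quaternion operations above (Hamilton product, conjugation, vector rotation, and the R2Q conversion from a rotation vector) are compact enough to sketch in a few lines of plain Python; all names are mine:

```python
from math import cos, sin, sqrt, pi

def qmul(q1, q2):
    # Hamilton product, components ordered (r, i, j, k)
    r1, i1, j1, k1 = q1
    r2, i2, j2, k2 = q2
    return (r1*r2 - i1*i2 - j1*j2 - k1*k2,
            r1*i2 + i1*r2 + j1*k2 - k1*j2,
            r1*j2 - i1*k2 + j1*r2 + k1*i2,
            r1*k2 + i1*j2 - j1*i2 + k1*r2)

def qconj(q):
    r, i, j, k = q
    return (r, -i, -j, -k)

def rotate(q, v):
    # rotate vector v by unit quaternion q: q v q*
    _, i, j, k = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return (i, j, k)

def r2q(theta):
    # rotation vector (axis scaled by angle) -> unit quaternion
    n = sqrt(sum(t * t for t in theta))
    if n == 0.0:
        return (1.0, 0.0, 0.0, 0.0)
    s = sin(n / 2) / n
    return (cos(n / 2), theta[0] * s, theta[1] * s, theta[2] * s)

# a quarter turn about the z axis sends x to y
q = r2q((0.0, 0.0, pi / 2))
v = rotate(q, (1.0, 0.0, 0.0))   # close to (0, 1, 0)
```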
Those variables are sufficient to describe the evolution of our drone with rigid-body physics:

• $$\mathbf{a}$$ the total acceleration in the fixed world frame
• $$\mathbf{v}$$ the velocity in the fixed world frame
• $$\mathbf{p}$$ the position in the fixed world frame
• $$\boldsymbol{\omega}$$ the angular velocity
• $$\mathbf{q}$$ the attitude in the fixed world frame

## Sensors

The sensors at the drone's disposal are:

• Accelerometer: It generates $$\mathbf{a_A}$$, a measurement in the body frame of the total acceleration the drone is subject to, at a high sampling rate. If the object is subject to no other acceleration, then the accelerometer measures the earth's gravity field, and from that information it could be possible to retrieve the attitude. Unfortunately, we are in a highly dynamic setting: this is only possible if we can subtract the drone's acceleration due to the thrust from the total acceleration, which would require knowing exactly the force exerted by the rotors at each instant. In this work, we assume that doing that separation, while theoretically possible, is too impractical. The measurement model is:

$\mathbf{a_A}(t) = \mathbf{R}_{f2b}\{\mathbf{q}(t)\}\mathbf{a}(t) + \mathbf{a_A}^\epsilon$

where the covariance matrix of the noise of the accelerometer is $${\mathbf{R}_{\mathbf{a_A}}}_{3 \times 3}$$ and $\mathbf{a_A}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{a_A}})$.

• Gyroscope: It generates $$\mathbf{\boldsymbol{\omega}_G}$$, a measurement of the angular velocity of the drone in the body frame at the last timestep, at a high sampling rate. The measurement model is:

$\mathbf{\boldsymbol{\omega}_G}(t) = \boldsymbol{\omega} + \mathbf{\boldsymbol{\omega}_G}^\epsilon$

where the covariance matrix of the noise of the gyroscope is $${\mathbf{R}_{\mathbf{\boldsymbol{\omega}_G}}}_{3 \times 3}$$ and $\mathbf{\boldsymbol{\omega}_G}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{\boldsymbol{\omega}_G}})$.
• Position: It generates $$\mathbf{p_V}$$, a measurement of the current position, at a low sampling rate. This is usually provided by a Vicon (for indoors), a GPS, a Tango or any other position sensor. The measurement model is:

$\mathbf{p_V}(t) = \mathbf{p}(t) + \mathbf{p_V}^\epsilon$

where the covariance matrix of the noise of the position is $${\mathbf{R}_{\mathbf{p_V}}}_{3 \times 3}$$ and $\mathbf{p_V}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{p_V}})$.

• Attitude: It generates $$\mathbf{q_V}$$, a measurement of the current attitude. This is usually provided, in addition to the position, by a Vicon or a Tango at a low sampling rate, or by the magnetometer at a high sampling rate if the environment permits it (no strong magnetic interference nearby, such as iron contamination). The magnetometer retrieves the attitude by assuming that the sensed magnetic field corresponds to the earth's magnetic field. The measurement model is:

$\mathbf{q_V}(t) = \mathbf{q}(t)*R2Q(\mathbf{q_V}^\epsilon)$

where the $$3 \times 3$$ covariance matrix of the noise of the attitude, in radians, before being converted by $$R2Q$$, is $${\mathbf{R}_{\mathbf{q_V}}}_{3 \times 3}$$ and $\mathbf{q_V}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{q_V}})$.

• Optical Flow: A camera that keeps track of movement by comparing the positions of some reference points between frames. By using a companion distance sensor, it is able to retrieve the difference between the two perspectives, and thus the change in angle and position.
$\mathbf{dq_O}(t) = (\mathbf{q}(t-k)\mathbf{q}(t))*R2Q(\mathbf{dq_O}^\epsilon)$

$\mathbf{dp_O}(t) = (\mathbf{p}(t) - \mathbf{p}(t-k)) + \mathbf{dp_O}^\epsilon$

where the $$3 \times 3$$ covariance matrix of the noise of the attitude variation, in radians, before being converted by $$R2Q$$, is $${\mathbf{R}_{\mathbf{dq_O}}}_{3 \times 3}$$ with $\mathbf{dq_O}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{dq_O}})$, and the position variation covariance matrix is $${\mathbf{R}_{\mathbf{dp_O}}}_{3 \times 3}$$ with $\mathbf{dp_O}^\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{dp_O}})$. The notable difference with the position or attitude sensor is that the optical flow sensor, like the IMU, only captures time variations, not absolute values.

• Altimeter: An altimeter is a sensor that measures the altitude of the drone. For instance, a LIDAR measures the time for the laser pulse to reflect off a surface that is assumed to be the ground. A smart strategy is to use the altimeter only when it is oriented at a low angle to the earth; otherwise, you also have to account for that angle in the estimation of the altitude.

$z_A(t) = \sin(\text{pitch}(\mathbf{q}(t)))(\mathbf{p}(t).z + z_A^\epsilon)$

where the variance of the altimeter noise is $$R_{z_A}$$ and $z_A^\epsilon \sim \mathcal{N}(0, R_{z_A})$.

Some sensors are more relevant indoors and some others outdoors:

• Indoor: The sensors available indoors are the accelerometer, the gyroscope and the Vicon. The Vicon is a system composed of many sensors around a room that is able to track very accurately the position and orientation of a mobile object. One issue with relying solely on the Vicon is that its sampling rate is low.
• Outdoor: The sensors available outdoors are the accelerometer, the gyroscope, the magnetometer, two GPS units, an optical flow sensor and an altimeter.

We assume that, since the biases of the sensors can be determined prior to the flight, the sensor output measurements have been calibrated and are bias-free.
Some filters, like the EKF2 of the PX4 flight stack, keep track of the sensor biases, but this is a state augmentation that was not deemed worthwhile here.

## Control inputs

Observations from the control input are not strictly speaking measurements, but inputs of the state-transition model. The IMU is a sensor, so strictly speaking its measurements are not control inputs. However, in the literature, it is standard to use its measurements as control inputs. One of the advantages is that the IMU measures exactly the data we need for a prediction through the model dynamics. If we used instead a transformation of the thrust commands sent to the rotors, we would have to account for the rotors' imprecision, the wind and other disturbances. Another advantage is that, since the IMU has a very high sampling rate, we can update the state with new transitions very frequently. The drawback is that the accelerometer is noisy. Fortunately, we can take that noise into account as process model noise. The control inputs at our disposal are:

• Acceleration: $$\mathbf{a_A}_t$$ from the accelerometer
• Angular velocity: $$\mathbf{\boldsymbol{\omega}_G}_t$$ from the gyroscope.
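To make the role of the IMU as a control input concrete, here is a toy Euler-integration prediction step in plain Python rather than scala-flow (all names are mine; rotation and noise are ignored):

```python
# one dead-reckoning step: the accelerometer reading a drives v and p forward
def predict(p, v, a, dt):
    # use the previous velocity for the position update,
    # matching p(t+1) = p(t) + dt * v(t)
    p_new = tuple(pi + dt * vi for pi, vi in zip(p, v))
    v_new = tuple(vi + dt * ai for vi, ai in zip(v, a))
    return p_new, v_new

# constant 1 m/s^2 along x from rest, 100 steps of 10 ms
p, v = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for _ in range(100):
    p, v = predict(p, v, (1.0, 0.0, 0.0), 0.01)
# after 1 s: v close to 1 m/s, p close to 0.5 m (Euler underestimates slightly)
```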
## Model dynamic

• $$\mathbf{a}(t+1) = \mathbf{R}_{b2f}\{\mathbf{q}(t+1)\}(\mathbf{a_A}_t + \mathbf{a_A}^\epsilon_t)$$ where $$\mathbf{a_A}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_{\mathbf{a}_t })$$
• $$\mathbf{v}(t+1) = \mathbf{v}(t) + \Delta t \mathbf{a}(t) + \mathbf{v}^\epsilon_t$$ where $$\mathbf{v}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_{\mathbf{v}_t })$$
• $$\mathbf{p}(t+1) = \mathbf{p}(t) + \Delta t \mathbf{v}(t) + \mathbf{p}^\epsilon_t$$ where $$\mathbf{p}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_{\mathbf{p}_t })$$
• $$\boldsymbol{\omega}(t+1) = \mathbf{\boldsymbol{\omega}_G}_t + \mathbf{\boldsymbol{\omega}_G}^\epsilon_t$$ where $$\mathbf{\boldsymbol{\omega}_G}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{Q}_{\mathbf{\boldsymbol{\omega}_G}_t })$$
• $$\mathbf{q}(t+1) = \mathbf{q}(t)*R2Q(\Delta t \boldsymbol{\omega}(t))$$

Note that in our model, $$\mathbf{q}(t+1)$$ must be known. Fortunately, as we will see later, our Rao-Blackwellized Particle Filter is conditioned on the attitude, so it is known.

## State

The time series of the variables of our dynamic model constitutes a hidden Markov chain. Indeed, the model is "memoryless": it depends only on the current state and a sampled transition. States contain variables that enable us to keep track of some of those hidden variables, which is our ultimate goal (for POSE, $$\mathbf{p}$$ and $$\mathbf{q}$$). States at time $$t$$ are denoted by $$\mathbf{x}_t$$. Different filters require different state variables depending on their structure and assumptions.

## Observation

Observations are revealed variables conditioned on the variables of our dynamic model. Our ultimate goal is to deduce the states from the observations. Observations contain the control input $$\mathbf{u}$$ and the measurements $$\mathbf{z}$$.
$\mathbf{y}_t = (\mathbf{z}_t, \mathbf{u}_t)^T = ((\mathbf{p_V}_t, \mathbf{q_V}_t), ({t_C}_t, \mathbf{\boldsymbol{\omega}_C}_t))^T$

## Filtering and smoothing

Smoothing is the statistical task of finding the expectation of the state variable from the past history of observations and multiple observation variables ahead:

$\mathbb{E}[g(\mathbf{x}_{0:t}) | \mathbf{y}_{1:t+k}]$

which expands to

$\mathbb{E}[(\mathbf{p}_{0:t}, \mathbf{q}_{0:t}) | (\mathbf{z}_{1:t+k}, \mathbf{u}_{1:t+k})]$

$$k$$ is a constant and the first observation is $$y_1$$. Filtering is the special case of smoothing where you only have the current observation variable at your disposal ($$k=0$$).

## Complementary Filter

The complementary filter is the simplest of all the filters and is commonly used to retrieve the attitude because of its low computational complexity. The gyroscope and the accelerometer both provide a measurement that can help us estimate the attitude. Indeed, the gyroscope reads noisy measurements of the angular velocity, from which we can retrieve the new attitude from the past one by time integration: $$\mathbf{q}_t = \mathbf{q}_{t-1}*R2Q(\Delta t \boldsymbol{\omega})$$. This is commonly called "dead reckoning"4 and is prone to error accumulation, referred to as drift: like a Brownian motion, even if the process is unbiased, its variance grows with time. Reducing the noise cannot solve the issue entirely; even with extremely precise instruments, you are still subject to floating-point errors. Fortunately, even though the accelerometer gives us a highly noisy (vibrations, wind, etc.) measurement of the orientation, it is not affected by drift because it does not rely on accumulation. Indeed, if not subject to other accelerations, the accelerometer measures the gravity field orientation. Since this field is oriented toward the earth, it is possible to retrieve the current rotation from that field, and by extension the attitude.
However, a drone is under the influence of continuous and significant acceleration and vibration, so the assumption that we retrieve the gravity field directly is wrong. We could solve this by subtracting the acceleration deduced from the thrust control input, but this is impractical, so the approach is not pursued in this work; understanding this filter is still useful, though. The idea of the filter itself is to combine the precise "short-term" measurements of the gyroscope, which are subject to drift, with the "long-term" measurements of the accelerometer.

### State

This filter is very simple. The only requirement is that the last estimated attitude must be stored along with its timestamp, in order to compute $$\Delta t$$.

$\mathbf{x}_t = \mathbf{q}_t$

$\hat{\mathbf{q}}_{t+1} = \alpha (\hat{\mathbf{q}}_t + \Delta t \boldsymbol{\omega}_t) + (1 - \alpha) {\mathbf{q_A}}_{t+1}$

$$\alpha \in [0, 1]$$. Usually, $$\alpha$$ is set to a high value like $$0.98$$. It is intuitive to see why this should approximately "work": the data from the accelerometer continuously corrects the drift from the gyroscope.

[Block diagram of the complementary filter in scala-flow, with blocks: Map IMU, Map CI Thrust, Buffer, Integ, BR2Quat, Rotation, ACC2Quat, Combining and Block out.]

Figure 9 plots the distance from the true quaternion over 15 s of an arbitrary trajectory with $$\alpha = 1.0$$, meaning that the accelerometer does not correct the drift. Figure 10 shows that same trajectory with $$\alpha = 0.98$$.
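A one-dimensional toy version of this update rule (plain Python, with invented numbers: a biased rate sensor standing in for the gyroscope and a noise-free absolute reading standing in for the accelerometer) shows the drift correction at work:

```python
# toy 1-D complementary filter: angle from a biased rate sensor,
# corrected by an unbiased absolute measurement
dt = 0.01
true_rate, gyro_bias = 1.0, 0.1           # rad/s; the bias makes pure integration drift

def run(alpha, steps=1000):
    angle_true, angle_est = 0.0, 0.0
    for _ in range(steps):
        angle_true += dt * true_rate
        gyro = true_rate + gyro_bias      # biased short-term measurement
        absolute = angle_true             # unbiased long-term measurement (noise-free here)
        angle_est = alpha * (angle_est + dt * gyro) + (1 - alpha) * absolute
    return abs(angle_est - angle_true)

drift_only = run(1.0)    # alpha = 1: the bias accumulates without bound
blended = run(0.98)      # alpha = 0.98: the absolute reading bleeds the drift away
```

With alpha = 1 the error grows linearly with time, while with alpha = 0.98 it settles near a small fixed point, mirroring what Figures 9 and 10 show.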
We can observe here the long-term importance of being able to correct the drift, even if only slightly at each timestep.

## Asynchronous Augmented Complementary Filter

As explained previously, in this highly dynamic setting, combining the gyroscope and the accelerometer to retrieve the attitude is not satisfactory. However, we can reuse the intuition behind the complementary filter, which is to combine precise but drifting short-term measurements with other measurements that do not suffer from drift. This yields a simple and computationally inexpensive novel filter that we will later use as a baseline. In this context, the short-term measurements are the acceleration and angular velocity from the IMU, and the non-drifting measurements are the position and attitude from the Vicon. We also add the property that the data from the sensors are asynchronous: as in all the following filters, we handle asynchronicity by updating the state to the most likely state so far whenever a new sensor measurement arrives. This is a consequence of the sensors having different sampling rates.

• IMU update

$\mathbf{v}_t = \mathbf{v}_{t-1} + \Delta t_v \mathbf{a_A}_t$

$\boldsymbol{\omega}_t = {\boldsymbol{\omega}_G}_t$

$\mathbf{p}_t = \mathbf{p}_{t-1} + \Delta t \mathbf{v}_{t-1}$

$\mathbf{q}_t = \mathbf{q}_{t-1}*R2Q(\Delta t \boldsymbol{\omega}_{t-1})$

• Vicon update

$\mathbf{p}_t = \alpha \mathbf{p_V} + (1 - \alpha) (\mathbf{p}_{t-1} + \Delta t \mathbf{v}_{t-1})$

$\mathbf{q}_t = \alpha \mathbf{q_V} + (1 - \alpha) (\mathbf{q}_{t-1}*R2Q(\Delta t \boldsymbol{\omega}_{t-1}))$

### State

The state has to be more complex because the filter now estimates both the position and the attitude. Furthermore, because of asynchronicity, we have to store the last angular velocity, the last linear velocity, and the last time the linear velocity was updated (to retrieve $$\Delta t_v = t - t_a$$, where $$t_a$$ is the last time we had an update from the accelerometer).
$\mathbf{x}_t = (\mathbf{p}_t, \mathbf{q}_t, \boldsymbol{\omega}_t, \mathbf{a}_t, t_a)$

The structure of this filter, and of all the filters presented thereafter, is as follows:

[Block diagram: the Map IMU and Map Vicon streams are merged (Merge), zipped with the buffered last state (ZipLast), and passed to the Update step, which emits the new position and attitude estimate (P & Q → Block out) and feeds the state back through the Buffer.]

## Kalman Filter

### Bayesian inference

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability of a hypothesis as more evidence or information becomes available. In this Bayesian setting, the prior is the estimated distribution of the previous state at time $$t-1$$, the likelihood corresponds to the likelihood of getting the new sensor data given the prior, and the posterior is the updated estimated distribution.

### Model

The Kalman filter requires that both the model process and the measurement process be linear Gaussian. Linear Gaussian processes are of the form:

$\mathbf{x}_t = f(\mathbf{x}_{t-1}) + \mathbf{w}_t$

where $$f$$ is a linear function and $$\mathbf{w}_t$$ a Gaussian process: it is sampled from an arbitrary Gaussian distribution. The Kalman filter is a direct application of Bayesian inference. It combines the prediction of the distribution given the estimated prior state and the state-transition model,

$\mathbf{x}_t = \mathbf{F}_t \mathbf{x}_{t-1} + \mathbf{B}_t \mathbf{u}_t + \mathbf{w}_t$

• $$\mathbf{x}_t$$ the state
• $$\mathbf{F}_t$$ the state-transition model
• $$\mathbf{B}_t$$ the control-input model
• $$\mathbf{u}_t$$ the control vector
• $$\mathbf{w}_t$$ the process noise, drawn from $$\mathbf{w}_t \sim N(0, \mathbf{Q}_t)$$

and the estimated distribution given the data coming from the sensors.
$\mathbf{y}_t = \mathbf{H}_t \mathbf{x}_{t} + \mathbf{v}_t$

• $$\mathbf{y}_t$$ the measurements
• $$\mathbf{H}_t$$ the state-to-measurement matrix
• $$\mathbf{v}_t$$ the measurement noise, drawn from $$\mathbf{v}_t \sim N(0, \mathbf{R}_t)$$

Because both the model process and the sensor process are assumed to be linear Gaussian, we can combine them into a Gaussian distribution. Indeed, the product of two Gaussian densities is proportional to a new Gaussian density.

$P(\mathbf{x}_{t}) \propto P(\mathbf{x}^{-}_{t}|\mathbf{x}_{t-1}) \cdot P(\mathbf{x}_t | \mathbf{y}_t )$

$\mathcal{N}(\mathbf{x}_{t}) \propto \mathcal{N}(\mathbf{x}^{-}_{t}|\mathbf{x}_{t-1}) \cdot \mathcal{N}(\mathbf{x}_t | \mathbf{y}_t )$

where $$\mathbf{x}^{-}_{t}$$ is the state predicted from the previous state and the state-transition model. The Kalman filter keeps track of the parameters of that Gaussian: the mean state and the covariance of the state, which represents the uncertainty about our last prediction. The mean of that distribution is also the filter's current best state estimate. By keeping track of the uncertainty, we can optimally combine the normals, knowing what importance to give to the difference between the expected sensor data and the actual sensor data. That factor is the Kalman gain.
• predict:
  • predicted state: $$\hat{\mathbf{x}}^{-}_t = \mathbf{F}_t \hat{\mathbf{x}}_{t-1} + \mathbf{B}_t \mathbf{u}_t$$
  • predicted covariance: $$\mathbf{\Sigma}^{-}_t = \mathbf{F}_{t} \mathbf{\Sigma}_{t-1} \mathbf{F}_{t}^T + \mathbf{Q}_t$$
• update:
  • predicted measurements: $$\hat{\mathbf{z}} = \mathbf{H}_t \hat{\mathbf{x}}^{-}_t$$
  • innovation: $$(\mathbf{z}_t - \hat{\mathbf{z}})$$
  • innovation covariance: $$\mathbf{S} = \mathbf{H}_t \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T + \mathbf{R}_t$$
  • optimal Kalman gain: $$\mathbf{K} = \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T \mathbf{S}^{-1}$$
  • updated state: $$\hat{\mathbf{x}}_t = \hat{\mathbf{x}}^{-}_t + \mathbf{K}(\mathbf{z}_t - \hat{\mathbf{z}})$$
  • updated covariance: $$\mathbf{\Sigma}_t = \mathbf{\Sigma}^-_t - \mathbf{K} \mathbf{S} \mathbf{K}^T$$

## Asynchronous Kalman Filter

It is not necessary to apply the full Kalman update at each measurement. Indeed, $$\mathbf{H}$$ can be sliced to correspond to the measurements currently available. To be truly asynchronous, you also have to account for the different sampling rates. There are two cases:

• The data required for the prediction step (the control inputs) can arrive multiple times before any data for the update step (the measurements) arrives.
• Inversely, the measurements may occur at a higher sampling rate than the control inputs.

The strategy chosen here is as follows:

1. Multiple prediction steps may happen without any update step without making the algorithm inconsistent.
2. An update is always immediately preceded by a prediction step. This is a consequence of the requirement that the innovation must measure the difference between the measurements and the measurement predicted from the state at the exact current time. Thus, if the measurements are not synchronized with the control inputs, the most likely control input is used for the prediction step.
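The predict/update equations translate almost line for line into code. A minimal NumPy sketch (the toy constant-velocity model and all numbers are illustrative assumptions; the covariance update is written in the equivalent $\Sigma^{-} - \mathbf{K}\mathbf{S}\mathbf{K}^T$ form):

```python
import numpy as np

def kf_predict(x, P, F, B, u, Q):
    """Predict step: propagate mean and covariance through the linear model."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step: correct the prediction with a measurement z."""
    z_hat = H @ x_pred                          # predicted measurement
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # optimal Kalman gain
    x = x_pred + K @ (z - z_hat)                # updated state
    P = P_pred - K @ S @ K.T                    # updated covariance
    return x, P

# 1-D constant-velocity toy model: state (position, velocity).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1)); u = np.zeros(1)           # no control input here
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])                      # only position is measured
R = np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, B, u, Q)
x, P = kf_update(x, P, np.array([1.0]), H, R)   # position measurement z = 1
```

After one cycle, the estimated position has moved most of the way toward the measurement and the position variance has shrunk accordingly.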
Repeating the last control input is the method used when the accelerometer and gyroscope data serve as control input.

## Extended Kalman Filters

In the previous section, we have shown that the Kalman filter is only applicable when both the process model and the measurement model are linear Gaussian processes:

• The noise of the measurements and of the state transition must be Gaussian.
• The state-transition function and the measurement-to-state function must be linear.

Furthermore, it is provable that Kalman filters are optimal linear filters. However, in our context, one component of the state, the attitude, is intrinsically non-linear. Indeed, rotations and attitudes belong to $$SO(3)$$, which is not a vector space. Therefore, we cannot use vanilla Kalman filters. The filters that we present hereafter relax those requirements. One example of such an extension is the extended Kalman filter (EKF), which we present here. The EKF relaxes the linearity requirement by using differentiation to obtain first-order approximations of the functions that are required to be linear. Our state-transition function and measurement function can now be expressed in the free forms $$f(\mathbf{x}_t)$$ and $$h(\mathbf{x}_t)$$, and we define the matrices $$\mathbf{F}_t$$ and $$\mathbf{H}_t$$ as their Jacobians.

${\mathbf{F}_t}_{10 \times 10} = \left . \frac{\partial f}{\partial \mathbf{x} } \right \vert _{\hat{\mathbf{x}}_{t-1},\mathbf{u}_{t-1}}$

${\mathbf{H}_t}_{7 \times 10} = \left .
\frac{\partial h}{\partial \mathbf{x} } \right \vert _{\hat{\mathbf{x}}_{t}}$

• predict:
  • predicted state: $$\hat{\mathbf{x}}^{-}_t = f(\hat{\mathbf{x}}_{t-1}, \mathbf{u}_t)$$
  • predicted covariance: $$\mathbf{\Sigma}^{-}_t = \mathbf{F}_{t} \mathbf{\Sigma}_{t-1} \mathbf{F}_{t}^T + \mathbf{Q}_t$$
• update:
  • predicted measurements: $$\hat{\mathbf{z}} = h(\hat{\mathbf{x}}^{-}_t)$$
  • innovation: $$(\mathbf{z}_t - \hat{\mathbf{z}})$$
  • innovation covariance: $$\mathbf{S} = \mathbf{H}_t \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T + \mathbf{R}_t$$
  • optimal Kalman gain: $$\mathbf{K} = \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T \mathbf{S}^{-1}$$
  • updated state: $$\hat{\mathbf{x}}_t = \hat{\mathbf{x}}^{-}_t + \mathbf{K}(\mathbf{z}_t - \hat{\mathbf{z}})$$
  • updated covariance: $$\mathbf{\Sigma}_t = \mathbf{\Sigma}^-_t - \mathbf{K} \mathbf{S} \mathbf{K}^T$$

### State

For the EKF, we will use the following state:

$\mathbf{x}_t = (\mathbf{v}_t, \mathbf{p}_t, \mathbf{q}_t)^T$

with initial state $$\mathbf{x}_0 = (\mathbf{0}, \mathbf{0}, (1, 0, 0, 0))$$.

### Indoor measurement model

1. Position: $\mathbf{p_V}(t) = \mathbf{p}(t) + \mathbf{p_V}^\epsilon_t$ where $$\mathbf{p_V}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{p_V}_t })$$
2.
Attitude: $\mathbf{q_V}(t) = \mathbf{q}(t)*R2Q(\mathbf{q_V}^\epsilon_t)$ where $$\mathbf{q_V}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{q_V}_t })$$

### Kalman prediction

The model dynamics define the following state-transition function $$f(\mathbf{x}, \mathbf{u})$$ and process noise $$\mathbf{w}$$ with covariance matrix $$\mathbf{Q}$$:

$\mathbf{x}_t = f(\mathbf{x}_{t-1}, \mathbf{u}_t) + \mathbf{w}_t$

$f((\mathbf{v}, \mathbf{p}, \mathbf{q}), (\mathbf{a_A}, \boldsymbol{\omega}_G)) = \left( \begin{array}{c} \mathbf{v} + \Delta t \mathbf{R}_{b2f}\{\mathbf{q}\} \mathbf{a_A} \\ \mathbf{p} + \Delta t \mathbf{v} \\ \mathbf{q}*R2Q({\Delta t} \boldsymbol{\omega}_G) \end{array} \right)$

Now, we need to derive the Jacobian of $$f$$. We use SageMath to compute the 28 relevant partial derivatives of the quaternion update.

${\mathbf{F}_t}_{10 \times 10} = \left . \frac{\partial f}{\partial \mathbf{x} } \right \vert _{\hat{\mathbf{x}}_{t-1},\mathbf{u}_{t-1}}$

$\hat{\mathbf{x}}^{-}_t = f(\hat{\mathbf{x}}_{t-1}, \mathbf{u}_t)$

$\mathbf{\Sigma}^{-}_t = \mathbf{F}_{t} \mathbf{\Sigma}_{t-1} \mathbf{F}_{t}^T + \mathbf{Q}_t$

### Kalman measurement update

$\mathbf{z}_t = h(\mathbf{x}_t) + \mathbf{v}_t$

The measurement model defines $$h(\mathbf{x})$$:

$\left( \begin{array}{c} \mathbf{p_V}\\ \mathbf{q_V}\\ \end{array} \right) = h((\mathbf{v}, \mathbf{p}, \mathbf{q})) = \left( \begin{array}{c} \mathbf{p}\\ \mathbf{q}\\ \end{array} \right)$

The only partial derivatives that are complex to calculate are those of the acceleration, because it has to be rotated first. Once again, we use SageMath: $$\mathbf{H_a}$$ is defined by the script in appendix B.

${\mathbf{H}_t}_{7 \times 10} = \left .
\frac{\partial h}{\partial \mathbf{x} } \right \vert _{\hat{\mathbf{x}}_{t}} = \left( \begin{array}{ccc} \mathbf{0}_{3 \times 3} & \mathbf{I}_{3 \times 3} & \\ & & \mathbf{I}_{4 \times 4}\\ \end{array} \right)$

${\mathbf{R}_t}_{7 \times 7} = \left( \begin{array}{cc} \mathbf{R}_{\mathbf{p_V}} & \\ & {\mathbf{R}'_{\mathbf{q_V}}}_{4 \times 4}\\ \end{array} \right)$

$$\mathbf{R}'_{\mathbf{q_V}}$$ has to be $$4 \times 4$$ and has to represent the covariance of the quaternion. However, the actual covariance matrix $$\mathbf{R}_{\mathbf{q_V}}$$ is $$3 \times 3$$ and represents the noise as a rotation vector around the x, y, z axes. We transform this rotation vector into a quaternion using our function $$R2Q$$, and we can compute the new covariance matrix $$\mathbf{R}'_{\mathbf{q_V}}$$ using the unscented transform.

### Unscented Transform

The unscented transform (UT) is a mathematical function used to estimate the statistics of a probability distribution after a given nonlinear transformation has been applied to it. The idea is to use points that are representative of the original distribution, the sigma points. We apply the transformation to those sigma points and compute the new statistics from the transformed sigma points. The sigma points must have the same mean and covariance as the original distribution. A minimal set of symmetric sigma points can be found using the covariance of the initial distribution: the $$2N + 1$$ sigma points are the mean itself, plus the points obtained by adding to and subtracting from the mean each of the directions encoded in the covariance matrix. In one dimension, the square root of the variance suffices. In $$N$$ dimensions, one uses the Cholesky decomposition of the covariance matrix, which finds the matrix $$L$$ such that $$\Sigma = LL^T$$.
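The sigma-point construction above can be sketched in a few lines of NumPy. The simple $\kappa = 0$ weighting used here is an assumption for illustration (practical UKFs often use scaled variants with extra tuning parameters):

```python
import numpy as np

def unscented_transform(mean, cov, f):
    """Estimate the mean/covariance of f(x) for x ~ N(mean, cov) using the
    minimal symmetric 2N+1 sigma-point set (kappa = 0 variant)."""
    n = len(mean)
    L = np.linalg.cholesky(n * cov)       # columns encode the covariance
    points = [mean]
    for i in range(n):
        points.append(mean + L[:, i])     # mean plus each direction
        points.append(mean - L[:, i])     # mean minus each direction
    w = np.full(2 * n + 1, 1.0 / (2 * n))
    w[0] = 0.0                            # kappa = 0 -> zero central weight
    ys = np.array([f(p) for p in points]) # transform the sigma points
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov

# Sanity check: a linear map A x must be recovered exactly (A Sigma A^T).
A = np.array([[2.0, 0.0], [1.0, 1.0]])
m, S = unscented_transform(np.array([1.0, -1.0]), np.eye(2), lambda x: A @ x)
```

For a linear transformation the UT is exact, which is a convenient way to test an implementation before applying it to $R2Q$.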
### Kalman update

$\mathbf{S} = \mathbf{H}_t \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T + \mathbf{R}_t$

$\hat{\mathbf{z}} = h(\hat{\mathbf{x}}^{-}_t)$

$\mathbf{K} = \mathbf{\Sigma}^{-}_t \mathbf{H}_t^T \mathbf{S}^{-1}$

$\mathbf{\Sigma}_t = \mathbf{\Sigma}^-_t - \mathbf{K} \mathbf{S} \mathbf{K}^T$

$\hat{\mathbf{x}}_t = \hat{\mathbf{x}}^{-}_t + \mathbf{K}(\mathbf{z}_t - \hat{\mathbf{z}})$

### F partial derivatives

```sage
Q.<i,j,k> = QuaternionAlgebra(SR, -1, -1)
var('q0, q1, q2, q3')
var('dt')
var('wx, wy, wz')

q = q0 + q1*i + q2*j + q3*k
w = vector([wx, wy, wz]) * dt
w_norm = sqrt(w[0]^2 + w[1]^2 + w[2]^2)
ang = w_norm / 2
w_normalized = w / w_norm
sin2 = sin(ang)
qd = cos(ang) + w_normalized[0]*sin2*i + w_normalized[1]*sin2*j + w_normalized[2]*sin2*k
nq = q * qd
v = vector(nq.coefficient_tuple())

for sym in [wx, wy, wz, q0, q1, q2, q3]:
    d = diff(v, sym)
    exps = map(lambda x: x.canonicalize_radical().full_simplify(), d)
    # loop variable renamed to avoid shadowing the quaternion unit i
    for idx, e in enumerate(exps):
        print(sym, idx, e)
```

## Unscented Kalman Filters

The EKF has three flaws in our case:

• The linearization gives an approximate form, which results in approximation errors.
• The prediction step of the EKF assumes that the linearized form of the transformation captures all the information needed to apply the transformation to the pre-transformation Gaussian distribution. Unfortunately, this is only true near the mean; the transformation of the tails of the Gaussian distribution may need to be very different.
• It attempts to define a Gaussian covariance matrix for the attitude quaternion. This does not make sense, because it does not account for the requirement that the quaternion lie on the unit sphere in four dimensions.

The Unscented Kalman Filter (UKF) does not suffer from the first two flaws, but it is more computationally expensive, as it requires a Cholesky factorisation whose cost grows cubically with the number of dimensions.
Indeed, the UKF applies an unscented transformation to the sigma points of the current approximated distribution, and the statistics of the new approximated Gaussian are found through this unscented transform. Where the EKF linearizes the transformation, the UKF approximates the Gaussian resulting from the transformation. Hence, the UKF can take into account the effects of the transformation away from the mean, which might be drastically different. The implementation of a UKF still suffers greatly from quaternions not belonging to a vector space. The approach taken by [3] is to use the error quaternion defined by $$\mathbf{e}_i = \mathbf{q}_i\bar{\mathbf{q}}$$. This approach has the advantage that similar quaternion differences result in similar errors, but apart from that, it does not have any profound justification. We must compute a sound weighted average quaternion of all sigma points. An algorithm is described in the following section.

### Average quaternion

Unfortunately, the component-wise average of quaternions $$\frac{1}{N} \sum q_i$$, or barycentric mean, is unsound. Indeed, attitudes do not belong to a vector space but to a homogeneous Riemannian manifold (the unit sphere in four dimensions). To convince yourself of the unsoundness of the barycentric mean, observe that the addition and the barycentric mean of two unit quaternions is not necessarily a unit quaternion ($$(1, 0, 0, 0)$$ and $$(-1, 0, 0, 0)$$, for instance). Furthermore, angles being periodic, the barycentric mean of a quaternion with angle $$-178^\circ$$ and another with the same body axis and angle $$180^\circ$$ gives $$1^\circ$$ instead of the expected $$-179^\circ$$. To calculate the average quaternion, we use an algorithm which minimizes a metric corresponding to the weighted attitude difference to the average, namely the weighted sum of the squared Frobenius norms of attitude-matrix differences.
$\bar{\mathbf{q}} = arg \min_{q \in \mathbb{S}^3} \sum w_i \| A(\mathbf{q}) - A(\mathbf{q}_i) \|^2_F$

where $$\mathbb{S}^3$$ denotes the unit sphere. The attitude matrix $$A(\mathbf{q})$$ and its corresponding Frobenius norm have been described in the quaternion section.

### Intuition

The intuition of keeping track of multiple representatives of the distribution is exactly the approach taken by the particle filter. The particle filter has the advantage that the distribution is never transformed back to a Gaussian, so fewer assumptions are made about the noise and the transformation. It is only required that the expectation be computable from a weighted set of particles.

## Particle Filter

Particle filters are computationally expensive, which is why they are currently not very popular for low-powered embedded systems like drones. However, they are used in avionics for planes, where computational resources are less scarce but precision is crucial. Hardware acceleration could widen the usage of particle filters to embedded systems. Particle filters are sequential Monte Carlo methods. Like all Monte Carlo methods, they rely on repeated sampling to estimate a distribution. The particle filter itself maintains a weighted particle representation of the posterior:

$p(\mathbf{x}) = \sum w^{(i)}\delta(\mathbf{x} - \mathbf{x}^{(i)})$

where $$\delta$$ is the Dirac delta function. The Dirac delta function is zero everywhere except at zero, with an integral of one over the entire real line. It represents here the ideal probability density of a particle.

### Importance sampling

The weights are computed through importance sampling. With importance sampling, the particles do not all represent the distribution equally. Importance sampling enables us to estimate properties of a target distribution of interest by sampling from another distribution. In most cases, it is used to focus sampling on a specific region of the distribution.
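As a toy numerical illustration (the densities and numbers are invented for the example, not from the source): estimate $\mathbb{E}[x^2]$ under a standard normal target while sampling from a shifted, wider proposal, re-weighting each sample by the density ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: N(0, 1); proposal pi: N(1, 2^2). True E[x^2] under p is 1.
def log_p(x):   return -0.5 * x**2                    # up to a constant
def log_pi(x):  return -0.5 * ((x - 1.0) / 2.0)**2    # up to a constant

x = rng.normal(1.0, 2.0, size=200_000)    # draw from the proposal pi
w = np.exp(log_p(x) - log_pi(x))          # unnormalized weights p / pi
w /= w.sum()                              # self-normalize: sum(w) == 1
estimate = np.sum(w * x**2)               # approximates E[x^2] under p
# The normalization constants of p and pi cancel in the self-normalized
# weights, which is why unnormalized densities suffice.
```

The estimate lands close to 1 even though no sample was ever drawn from the target itself, which is exactly the mechanism the particle filter relies on.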
In our case, by choosing the right importance distribution (the dynamics of the model, as we will see later), we can re-weight particles based on the likelihood of the measurements ($$p(\mathbf{y} | \mathbf{x})$$). Importance sampling is based on the identity:

\begin{aligned} \mathbb{E}[\mathbf{g}(\mathbf{x}) | \mathbf{y}_{1:T}] &= \int \mathbf{g}(\mathbf{x})p(\mathbf{x}|\mathbf{y}_{1:T})d\mathbf{x} \\ &= \int \left [\mathbf{g}(\mathbf{x})\frac{p(\mathbf{x}|\mathbf{y}_{1:T})}{\pi(\mathbf{x}|\mathbf{y}_{1:T})} \right ] \pi(\mathbf{x}|\mathbf{y}_{1:T}) d\mathbf{x} \end{aligned}

Thus, it can be approximated as

\begin{aligned} \mathbb{E}[\mathbf{g}(\mathbf{x}) | \mathbf{y}_{1:T}] &\approx \frac{1}{N} \sum_i^N \frac{p(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})}{\pi(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})}\mathbf{g}(\mathbf{x}^{(i)}) \approx \sum^N_i w^{(i)} \mathbf{g}(\mathbf{x}^{(i)}) \end{aligned}

where the $$N$$ samples of $$\mathbf{x}$$ are drawn from the importance distribution $$\pi(\mathbf{x}|\mathbf{y}_{1:T})$$ and the weights are defined as:

$w^{(i)} = \frac{1}{N} \frac{p(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})}{\pi(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})}$

Computing $$p(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})$$ is hard (if not impossible), but fortunately we can compute the unnormalized weight instead:

$w^{*(i)} = \frac{p(\mathbf{y}_{1:T}|\mathbf{x}^{(i)})p(\mathbf{x}^{(i)})}{\pi(\mathbf{x}^{(i)}|\mathbf{y}_{1:T})}$

and normalize it afterwards:

$w^{(i)} = \frac{w^{*(i)}}{\sum^N_j w^{*(j)}} \Rightarrow \sum^N_i w^{(i)} = 1$

### Sequential Importance Sampling

The last equation becomes more and more computationally expensive as $$T$$ grows larger (the joint variable of the time series grows larger).
Fortunately, Sequential Importance Sampling is an alternative recursive algorithm with a fixed amount of computation at each iteration:

\begin{aligned} p(\mathbf{x}_{0:k} | \mathbf{y}_{0:k}) &\propto p(\mathbf{y}_k | \mathbf{x}_{0:k}, \mathbf{y}_{1:k-1})p(\mathbf{x}_k | \mathbf{y}_{1:k-1}) \\ &\propto p(\mathbf{y}_k | \mathbf{x}_{k})p(\mathbf{x}_k | \mathbf{x}_{0:k-1}, \mathbf{y}_{1:k-1})p(\mathbf{x}_{0:k-1} | \mathbf{y}_{1:k-1}) \\ &\propto p(\mathbf{y}_k | \mathbf{x}_{k})p(\mathbf{x}_k | \mathbf{x}_{k-1})p(\mathbf{x}_{0:k-1} | \mathbf{y}_{1:k-1}) \end{aligned}

The importance distribution is such that $$\mathbf{x}^{(i)}_{0:k} \sim \pi(\mathbf{x}_{0:k} | \mathbf{y}_{1:k})$$, with the corresponding importance weight:

$w^{(i)}_k \propto \frac{p(\mathbf{y}_k | \mathbf{x}^{(i)}_{k})p(\mathbf{x}^{(i)}_k | \mathbf{x}^{(i)}_{k-1})p(\mathbf{x}^{(i)}_{0:k-1} | \mathbf{y}_{1:k-1})}{\pi(\mathbf{x}_{0:k} | \mathbf{y}_{1:k})}$

We can express the importance distribution recursively:

$\pi(\mathbf{x}_{0:k} | \mathbf{y}_{1:k}) = \pi(\mathbf{x}_{k} |\mathbf{x}_{0:k-1}, \mathbf{y}_{1:k})\pi(\mathbf{x}_{0:k-1} | \mathbf{y}_{1:k-1})$

The recursive structure propagates to the weight itself:

\begin{aligned} w^{(i)}_k &\propto \frac{p(\mathbf{y}_k | \mathbf{x}^{(i)}_{k})p(\mathbf{x}^{(i)}_k | \mathbf{x}^{(i)}_{k-1})}{\pi(\mathbf{x}_{k} |\mathbf{x}_{0:k-1}, \mathbf{y}_{1:k})} \frac{p(\mathbf{x}^{(i)}_{0:k-1} | \mathbf{y}_{1:k-1})}{\pi(\mathbf{x}_{0:k-1} | \mathbf{y}_{1:k-1})} \\ &\propto \frac{p(\mathbf{y}_k | \mathbf{x}^{(i)}_{k})p(\mathbf{x}^{(i)}_k | \mathbf{x}^{(i)}_{k-1})}{\pi(\mathbf{x}_{k} |\mathbf{x}_{0:k-1}, \mathbf{y}_{1:k})} w^{(i)}_{k-1} \end{aligned}

We can further simplify the formula by choosing the importance distribution to be the dynamics of the model:

$\pi(\mathbf{x}_{k} |\mathbf{x}_{0:k-1}, \mathbf{y}_{1:k}) = p(\mathbf{x}^{(i)}_k | \mathbf{x}^{(i)}_{k-1})$

$w^{*(i)}_k = p(\mathbf{y}_k | \mathbf{x}^{(i)}_{k}) w^{(i)}_{k-1}$

As previously, it then only remains to normalize
the resulting weight:

$w^{(i)}_k = \frac{w^{*(i)}_k}{\sum^N_j w^{*(j)}_k}$

### Resampling

When the number of effective particles is too low (fewer than $$N/10$$ particles carrying significant weight), we apply systematic resampling. The idea behind resampling is simple. The distribution is represented by a number of particles with different weights. As time goes on, the repartition of the weights degenerates: a large subset of particles ends up with negligible weight, which makes them irrelevant, and only a few particles represent most of the distribution. In the most extreme case, a single particle represents the whole distribution. To avoid that degeneration, when the weights become too unbalanced, we resample from the weight distribution: pick $$N$$ times among the particles, each pick selecting particle $$p_i$$ with probability $$w_i$$, and assign each picked particle a weight of $$1/N$$. Thus, particles with large weights are split into several clones of smaller weight, while particles with small weights tend never to be picked. This process is remotely similar to evolution: at each generation, the most promising branches survive and replicate while the less promising die off. A popular resampling method is systematic resampling, as described by [4]: sample $$U_1 \sim \mathcal{U} [0, \frac{1}{N} ]$$ and define $$U_i = U_1 + \frac{i-1 }{N}$$ for $$i = 2, \ldots, N$$.

## Rao-Blackwellized Particle Filter

### Introduction

Compared to a plain particle filter, the RBPF leverages the linearity of some components of the state by assuming that our model is Gaussian conditioned on a latent variable: given the attitude $$q_t$$, our model is linear. This is where the RBPF shines: we use particle filtering to estimate the latent variable, the attitude, and the optimal Kalman filter to estimate the state variable. If a plain particle filter can be seen as a simple average of particle states, then the RBPF can be seen as the "average" of many Gaussians.
Each particle is an optimal Kalman filter conditioned on the particle's latent variable, the attitude. Indeed, the advantage of particle filters is that they assume no particular form for the posterior distribution or for the transformation of the state. But as the state grows in dimension, the number of particles needed to maintain a good estimation grows exponentially. This is a consequence of the ["curse of dimensionality"](https://en.wikipedia.org/wiki/Curse_of_dimensionality): for each added dimension, we would have to consider all the additional combinations of state components. In our context, we have 10 dimensions ($$\mathbf{v}$$, $$\mathbf{p}$$, $$\mathbf{q}$$), which is already large, and it would be computationally expensive to simulate a sufficiently large number of particles. Kalman filters, on the other hand, do not suffer from such exponential growth, but as explained previously, they are inadequate for non-linear transformations. The RBPF gets the best of both worlds by combining a particle filter for the non-linear components of the state (the attitude), treated as a latent variable, with Kalman filters for the linear components of the state (velocity and position). For ease of notation, the linear component of the state will be referred to as the state and denoted by $$\mathbf{x}$$, even though the actual state we are concerned with also includes the latent variable $$\boldsymbol{\theta}$$. Related work on this approach is [5]. However, ours differs by:

• adapting the filter to drones by taking into account that the system is too dynamic to assume that the accelerometer simply outputs the gravity vector. This is solved by augmenting the state with the acceleration, as shown later.
### Latent variable

We introduce the latent variable $$\boldsymbol{\theta}$$, whose sole component is the attitude:

$\boldsymbol{\theta} = (\mathbf{q})$

$$q_t$$ is estimated from the attitudes of all particles $$\boldsymbol{\theta}^{(i)} = \mathbf{q}^{(i)}_t$$ as the "average" quaternion $$\mathbf{q}_t = avgQuat(\mathbf{q}^n_t)$$, where $$x^n$$ designates the product of all $$n$$ particles. As stated in the previous section, the weight definition is:

$w^{(i)}_t = \frac{p(\boldsymbol{\theta}^{(i)}_{0:t} | \mathbf{y}_{1:t})}{\pi(\boldsymbol{\theta}^{(i)}_{0:t} | \mathbf{y}_{1:t})}$

From this definition and the previous section, it is provable that:

$w^{(i)}_t \propto \frac{p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1})p(\boldsymbol{\theta}^{(i)}_t | \boldsymbol{\theta}^{(i)}_{t-1})}{\pi(\boldsymbol{\theta}^{(i)}_t | \boldsymbol{\theta}^{(i)}_{1:t-1}, \mathbf{y}_{1:t})} w^{(i)}_{t-1}$

We choose the dynamics of the model as the importance distribution:

$\pi(\boldsymbol{\theta}^{(i)}_t | \boldsymbol{\theta}^{(i)}_{1:t-1}, \mathbf{y}_{1:t}) = p(\boldsymbol{\theta}^{(i)}_t | \boldsymbol{\theta}^{(i)}_{t-1})$

Hence,

$w^{*(i)}_t \propto p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) w^{(i)}_{t-1}$

We then sum all $$w^{*(i)}_t$$ to find the normalization constant and retrieve the actual $$w^{(i)}_t$$.

### State

$\mathbf{x}_t = (\mathbf{v}_t, \mathbf{p}_t)^T$

Initial state $$\mathbf{x}_0 = (\mathbf{0}, \mathbf{0})$$; initial covariance matrix $$\mathbf{\Sigma}_{6 \times 6} = \epsilon \mathbf{I}_{6 \times 6}$$.

### Latent variable dynamics

$\mathbf{q}^{(i)}_{t+1} = \mathbf{q}^{(i)}_t*R2Q({\Delta t} (\mathbf{\boldsymbol{\omega}_G}_t+\mathbf{\boldsymbol{\omega}_G}^\epsilon_t))$

$$\mathbf{\boldsymbol{\omega}_G}^\epsilon_t$$ represents the error on the control input and is sampled from $$\mathbf{\boldsymbol{\omega}_G}^\epsilon_t \sim \mathcal{N}(\mathbf{0},
\mathbf{R}_{\mathbf{\boldsymbol{\omega}_G}_t })$$.

The initial attitude $$\mathbf{q_0}$$ is sampled such that the drone's pitch and roll are zero (parallel to the ground) but the yaw is unknown and uniformly distributed. Note that $$\mathbf{q}(t+1)$$ is known in the model dynamics because the model is conditioned on $$\boldsymbol{\theta}^{(i)}_{t+1}$$.

### Indoor measurement model

1. Position: $\mathbf{p_V}(t) = \mathbf{p}(t)^{(i)} + \mathbf{p_V}^\epsilon_t$ where $$\mathbf{p_V}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{p_V}_t })$$
2. Attitude: $\mathbf{q_V}(t) = \mathbf{q}(t)^{(i)}*R2Q(\mathbf{q_V}^\epsilon_t)$ where $$\mathbf{q_V}^\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{R}_{\mathbf{q_V}_t })$$

### Kalman prediction

The model dynamics define the following state-transition matrix $$\mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\}$$, control-input matrix $$\mathbf{B}_t\{\boldsymbol{\theta}^{(i)}_t\}$$, and process noise $$\mathbf{w}_t\{\boldsymbol{\theta}^{(i)}_t\}$$ with covariance $$\mathbf{Q}_t\{\boldsymbol{\theta}^{(i)}_t\}$$ for the Kalman filter:

$\mathbf{x}_t = \mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\} \mathbf{x}_{t-1} + \mathbf{B}_t\{\boldsymbol{\theta}^{(i)}_t\} \mathbf{u}_t + \mathbf{w}_t\{\boldsymbol{\theta}^{(i)}_t\}$

$\mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\}_{6 \times 6} = \left( \begin{array}{cc} \mathbf{I}_{3 \times 3} & 0 \\ \Delta t~\mathbf{I}_{3 \times 3} & \mathbf{I}_{3 \times 3} \end{array} \right)$

$\mathbf{B}_t\{\boldsymbol{\theta}^{(i)}_t\}_{6 \times 3} = \left( \begin{array}{c} \Delta t~\mathbf{R}_{b2f}\{\mathbf{q}^{(i)}_{t}\} \\ \mathbf{0}_{3 \times 3} \\ \end{array} \right)$

$\mathbf{Q}_t\{\boldsymbol{\theta}^{(i)}_t\}_{6 \times 6} = \left( \begin{array}{cc} \mathbf{R}_{b2f}\{\mathbf{q}^{(i)}_{t}\}(\mathbf{Q}_{\mathbf{a}_t } \Delta t^2)\mathbf{R}^T_{b2f}\{\mathbf{q}^{(i)}_{t}\} & \\ & \mathbf{Q}_{\mathbf{v}_t }\\ \end{array} \right)$

$\hat{\mathbf{x}}^{-(i)}_t = \mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\}
\mathbf{x}^{(i)}_{t-1} + \mathbf{B}_t\{\boldsymbol{\theta}^{(i)}_t\} \mathbf{u}_t$

$\mathbf{\Sigma}^{-(i)}_t = \mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\} \mathbf{\Sigma}^{(i)}_{t-1} (\mathbf{F}_t\{\boldsymbol{\theta}^{(i)}_t\})^T + \mathbf{Q}_t\{\boldsymbol{\theta}^{(i)}_t\}$

### Kalman measurement update

The measurement model defines how to compute $$p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1})$$. Indeed, the measurement model defines the observation matrix $$\mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\}$$, the observation noise $$\mathbf{v}_t\{\boldsymbol{\theta}^{(i)}_t\}$$ and its covariance matrix $$\mathbf{R}_t\{\boldsymbol{\theta}^{(i)}_t\}$$ for the Kalman filter.

$(\mathbf{a_A}_t, \mathbf{p_V}_t)^T = \mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\} (\mathbf{v}_t, \mathbf{p}_t)^T + \mathbf{v}_t\{\boldsymbol{\theta}^{(i)}_t\}$

$\mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\}_{6 \times 6} = \left( \begin{array}{cc} \mathbf{0}_{3 \times 3} & \\ & \mathbf{I}_{3 \times 3} \\ \end{array} \right)$

$\mathbf{R}_t\{\boldsymbol{\theta}^{(i)}_t\}_{3 \times 3} = \left( \begin{array}{c} \mathbf{R}_{\mathbf{p_V}_t} \end{array} \right)$

### Kalman update

$\mathbf{S} = \mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\} \mathbf{\Sigma}^{-(i)}_t (\mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\})^T + \mathbf{R}_t\{\boldsymbol{\theta}^{(i)}_t\}$

$\hat{\mathbf{z}} = \mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\} \hat{\mathbf{x}}^{-(i)}_t$

$\mathbf{K} = \mathbf{\Sigma}^{-(i)}_t (\mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\})^T \mathbf{S}^{-1}$

$\mathbf{\Sigma}^{(i)}_t = \mathbf{\Sigma}^{-(i)}_t - \mathbf{K} \mathbf{S} \mathbf{K}^T$

$\hat{\mathbf{x}}^{(i)}_t = \hat{\mathbf{x}}^{-(i)}_t + \mathbf{K}((\mathbf{a_A}_t, \mathbf{p_V}_t)^T - \hat{\mathbf{z}})$

$p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) = \mathcal{N}((\mathbf{a_A}_t, \mathbf{p_V}_t)^T; \hat{\mathbf{z}}_t, \mathbf{S})$

### Asynchronous measurements

Our measurements might have different
sampling rates, so instead of doing a full Kalman update, we apply only the partial Kalman update corresponding to the current type of measurement $$\mathbf{z}_t$$. For indoor drones, there is only one kind of sensor for the Kalman update: $$\mathbf{p_V}$$ ### Attitude re-weighting In the measurement model, the attitude defines an additional re-weighting for importance sampling. $p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) = \mathcal{N}(Q2R({\mathbf{q}^{(i)}}^{-1}\mathbf{q_V}_t);~ 0 ,~ \mathbf{R}_{\mathbf{q_V}})$ ## Algorithm summary 1. Initialize $$N$$ particles with $$\mathbf{x}_0$$, $$\mathbf{q}_0 \sim p(\mathbf{q}_0)$$, $$\mathbf{\Sigma}_0$$ and $$w = 1/N$$ 2. While new sensor measurements $$(\mathbf{z}_t, \mathbf{u}_t)$$ arrive: • for each of the $$N$$ particles $$(i)$$: 1. Depending on the type of observation: - IMU: 1. store $$\boldsymbol{\mathbf{\omega_G}}_t$$ and $$\mathbf{a_A}_t$$ as the last control inputs 2. sample a new latent variable $$\boldsymbol{\theta_t}$$ from $$\boldsymbol{\mathbf{\omega_G}}_t$$ (the last control input) 3. apply the Kalman prediction from $$\mathbf{a_A}_t$$ (the last control input) - Vicon: 1. sample a new latent variable $$\boldsymbol{\theta_t}$$ from $$\boldsymbol{\mathbf{\omega_G}}_t$$ (the last control input) 2. apply the Kalman prediction from $$\mathbf{a_A}_t$$ (the last control input) 3.
Partial Kalman update with: $\mathbf{H}_t\{\boldsymbol{\theta}^{(i)}_t\}_{3 \times 6} = (\mathbf{0}_{3 \times 3} ~~~~ \mathbf{I}_{3 \times 3} )$ $\mathbf{R}_t\{\boldsymbol{\theta}^{(i)}_t\}_{3 \times 3} = \mathbf{R}_{\mathbf{p_V}_t }$ $\hat{\mathbf{x}}^{(i)}_t = \hat{\mathbf{x}}^{-(i)}_t + \mathbf{K}(\mathbf{p_V}_t - \hat{\mathbf{z}})$ $p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) = \mathcal{N}(\mathbf{q_V}_t; \mathbf{q}^{(i)}_t,~ \mathbf{R}_{\mathbf{q_V}_t } )\mathcal{N}(\mathbf{p_V}_t; \hat{\mathbf{z}}_t, \mathbf{S})$ • Other sensors (outdoor): as for Vicon, but use the corresponding partial Kalman update 2. Update $$w^{(i)}_t$$: $$w^{(i)}_t = p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) w^{(i)}_{t-1}$$ • Normalize all $$w^{(i)}$$ by scaling by $$1/(\sum w^{(i)})$$ so that $$\sum w^{(i)}= 1$$ • Compute $$\mathbf{p}_t$$ and $$\mathbf{q}_t$$ as the expectation of the distribution approximated by the $$N$$ particles. • Resample if the number of effective particles is too low ### Extension to outdoors As highlighted in the algorithm summary, the RBPF is easily extensible to other sensors. Indeed, measurements either: • give information about position or velocity, in which case their update is similar to the Vicon position update (a partial Kalman update), or • give information about the orientation, in which case their update is similar to the Vicon attitude update (a pure importance-sampling re-weighting). A proof-of-concept alternative Rao-Blackwellized particle filter specialized for outdoor use has been developed that integrates the following sensors: • IMU with accelerometer, gyroscope and magnetometer • Altimeter • Dual GPS (two GPS receivers) • Optical flow The optical-flow measurements are assumed to be of the form $$(\Delta \mathbf{p}, \Delta \mathbf{q})$$ for a $$\Delta t$$ corresponding to its sampling rate.
It is input to the particle filter as a likelihood: $p(\mathbf{y}_t | \boldsymbol{\theta}^{(i)}_{0:t-1}, \mathbf{y}_{1:t-1}) = \mathcal{N}(\mathbf{p}_{t1} + \Delta \mathbf{p}; \mathbf{p}_{t2}, \mathbf{R}_{\mathbf{dp_O}_t})\mathcal{N}(\Delta \mathbf{q}; \mathbf{q}_{t1}^{-1}\mathbf{q}_{t2}, \mathbf{R}_{\mathbf{dq_O}_t})$ where $$t2 = t1 + \Delta t$$, $$\mathbf{p}_{t2}$$ is the latest Kalman prediction and $$\mathbf{q}_{t2}$$ is the latest latent variable obtained by sampling the attitude updates. ## Results We present a comparison of the 4 filters in 6 settings. The metric is the RMSE of the l2-norm of the position and of the Frobenius norm of the attitude, as described previously. All the filters share a sampling frequency of 200Hz for the IMU and 4Hz for the Vicon. The RBPF is set to 1000 particles. In all scenarios, the covariance matrices of the sensors' measurements are diagonal: • $$\mathbf{R}_{\mathbf{a_A}} = \sigma^2_{\mathbf{\mathbf{a_A}}} \mathbf{I}_{3 \times 3}$$ • $$\mathbf{R}_{\mathbf{\boldsymbol{\omega}_G}} = \sigma^2_{\mathbf{\boldsymbol{\omega}_G}} \mathbf{I}_{3 \times 3}$$ • $$\mathbf{R}_{\mathbf{p_V}} = \sigma^2_{\mathbf{p_V}} \mathbf{I}_{3 \times 3}$$ • $$\mathbf{R}_{\mathbf{q_V}} = \sigma^2_{\mathbf{q_V}} \mathbf{I}_{3 \times 3}$$ with the following settings: • Vicon: • High-precision: $$\sigma^2_{\mathbf{p_V}} = \sigma^2_{\mathbf{q_V}} = 0.01$$ • Low-precision: $$\sigma^2_{\mathbf{p_V}} = \sigma^2_{\mathbf{q_V}} = 0.1$$ • Accelerometer: • High-precision: $$\sigma^2_{\mathbf{\mathbf{a_A}}} = 0.1$$ • Low-precision: $$\sigma^2_{\mathbf{\mathbf{a_A}}} = 1.0$$ • Gyroscope: • High-precision: $$\sigma^2_{\mathbf{\boldsymbol{\omega}_G}} = 0.1$$ • Low-precision: $$\sigma^2_{\mathbf{\boldsymbol{\omega}_G}} = 1.0$$

Position RMSE over 5 random trajectories of 20 seconds:

| Vicon precision | Accel. precision | Gyro. precision | Augmented Complementary Filter | Extended Kalman Filter | Unscented Kalman Filter | Rao-Blackwellized Particle Filter |
| --- | --- | --- | --- | --- | --- | --- |
| High | High | High | 6.88e-02 | 3.26e-02 | 3.45e-02 | 1.45e-02 |
| High | High | Low | 6.10e-02 | 1.13e-01 | 9.20e-02 | 2.17e-02 |
| High | Low | Low | 4.05e-02 | 5.24e-02 | 3.29e-02 | 1.61e-02 |
| Low | High | High | 5.05e-01 | 5.05e-01 | 2.90e-01 | 1.27e-01 |
| Low | High | Low | 6.16e-01 | 1.09e+00 | 9.30e-01 | 1.22e-01 |
| Low | Low | Low | 3.57e-01 | 2.66e-01 | 3.27e-01 | 1.19e-01 |

Attitude RMSE over 5 random trajectories of 20 seconds:

| Vicon precision | Accel. precision | Gyro. precision | Augmented Complementary Filter | Extended Kalman Filter | Unscented Kalman Filter | Rao-Blackwellized Particle Filter |
| --- | --- | --- | --- | --- | --- | --- |
| High | High | High | 7.36e-03 | 5.86e-03 | 5.17e-03 | 1.01e-04 |
| High | High | Low | 6.37e-03 | 1.37e-02 | 9.17e-03 | 6.50e-04 |
| High | Low | Low | 6.25e-03 | 1.69e-02 | 1.02e-02 | 8.34e-04 |
| Low | High | High | 5.30e-01 | 3.28e-01 | 3.26e-01 | 5.82e-03 |
| Low | High | Low | 5.18e-01 | 2.99e-01 | 2.95e-01 | 5.78e-03 |
| Low | Low | Low | 5.90e-01 | 3.28e-01 | 3.24e-01 | 3.97e-03 |

Figure 1.13 is a bar plot of the first line of each table. Figure 1.14 plots the tracking of the position (x, y, z) and attitude (r, i, j, k) in the low-Vicon-precision, low-accelerometer-precision, low-gyroscope-precision setting for one of the random trajectories. ## Conclusion The Rao-Blackwellized Particle Filter developed here is more accurate than the alternatives, mathematically sound and computationally feasible. When implemented on hardware, this filter can be executed in real time with sensors of high and asynchronous sampling rates. It could improve pose estimation for existing drones and other robots. These improvements could unlock new abilities and increase the safety of drones. ## References [1] M. W. Mueller, M. Hehn, and R. D'Andrea, "A computationally efficient motion primitive for quadrocopter trajectory generation," IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294–1310, 2015. [2] F. L. Markley, Y. Cheng, J. L. Crassidis, and Y. Oshman, "Averaging quaternions," Journal of Guidance, Control, and Dynamics, vol. 30, no.
4, pp. 1193–1197, 2007. [3] E. Kraft, "A Quaternion-based Unscented Kalman Filter for Orientation Tracking." [4] A. Doucet and A. M. Johansen, "A tutorial on particle filtering and smoothing: Fifteen years later," Handbook of Nonlinear Filtering, vol. 12, pp. 656–704, 2009. [5] P. Vernaza and D. D. Lee, "Rao-Blackwellized particle filtering for 6-DOF estimation of attitude and position via GPS and inertial sensors," in Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), 2006, pp. 1571–1578. 1. The observation that the number of transistors in a dense integrated circuit doubles approximately every two years. 2. An embarrassingly parallel task is one where little or no effort is needed to separate the problem into a number of parallel tasks. This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them. 3. Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space. 4. The term "dead reckoning" comes from the mariners of the 17th century, who calculated the position of the vessel using a log book. The interpretation of "dead" is subject to debate: some argue that it is a spelling of "ded", as in "deduced"; others argue that it should be read in its old meaning, absolute.
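The per-particle Kalman prediction and update used throughout, together with the effective-sample-size criterion for resampling, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch with our own function names, not the actual implementation; the real filter builds $$\mathbf{F}$$, $$\mathbf{B}$$ and $$\mathbf{Q}$$ from the sampled attitude $$\boldsymbol{\theta}^{(i)}_t$$ as described in the Kalman prediction section.

```python
import numpy as np

def kalman_predict(x, P, F, B, u, Q):
    """Time update: x- = F x + B u,  P- = F P F^T + Q."""
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Measurement update; also returns the likelihood N(z; z_hat, S)
    used to re-weight the particle."""
    z_hat = H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - z_hat)
    P = P_pred - K @ S @ K.T  # note the minus sign in the covariance update
    resid = z - z_hat
    k = len(z)
    lik = np.exp(-0.5 * resid @ np.linalg.solve(S, resid)) / \
        np.sqrt((2 * np.pi) ** k * np.linalg.det(S))
    return x, P, lik

def effective_sample_size(w):
    """N_eff = 1 / sum(w_i^2); resample when it drops below, e.g., N/2."""
    w = np.asarray(w)
    return 1.0 / np.sum(w ** 2)
```

For the indoor Vicon update, $$\mathbf{H} = (\mathbf{0}_{3 \times 3} ~~ \mathbf{I}_{3 \times 3})$$ and $$\mathbf{R} = \mathbf{R}_{\mathbf{p_V}}$$, and the returned likelihood multiplies the particle weight before normalization.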
# 2007 IMO Problems/Problem 6 ## Problem Let $n$ be a positive integer. Consider $$S=\{(x,y,z)~:~x,y,z\in \{0,1,\ldots,n \},~x+y+z>0\}$$ as a set of $(n+1)^3-1$ points in three-dimensional space. Determine the smallest possible number of planes, the union of which contains $S$ but does not include $(0,0,0)$.
# Force Diagrams Introduction: It's hard to imagine, without careful thought, what causes a vehicle to remain moving without flipping over while traveling on a road that has a lot of friction. That is, unless free-body diagrams are used. Free-body diagrams are diagrams that visualize the forces acting on a given object. These forces can include, but are certainly not limited to, frictional forces that act horizontally, the weight of the object that acts vertically downward, and the normal force that acts vertically upward. Frictional force can be calculated using the following equation: $F_f=µF_N$, where $F_f$ = frictional force, $F_N$ = normal force, and $µ$ = coefficient of friction. In problems that involve friction, a car slowing down can be represented by a free-body diagram in which the normal and gravitational forces balance vertically, while the arrow representing friction is longer than any arrow representing a forward force on the car. In this case, the net force on the car is directed opposite to the direction of motion, so the acceleration is negative and the car is decelerating, or slowing down. This example shows one of many applications of free-body diagrams to understanding the forces acting on an object.
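The friction equation and the decelerating-car example above can be worked through numerically. The numbers below (coefficient of friction, mass) are made up for illustration:

```python
# Free-body bookkeeping for a decelerating car (illustrative numbers).
MU = 0.7       # coefficient of friction (assumed)
MASS = 1200.0  # mass of the car in kg (assumed)
G = 9.81       # gravitational field strength, m/s^2

def friction_force(mu: float, normal_force: float) -> float:
    """F_f = mu * F_N, the equation given above."""
    return mu * normal_force

# On level ground the normal force balances the weight.
weight = MASS * G
normal = weight
f_friction = friction_force(MU, normal)

# With no forward driving force, friction is the net horizontal force,
# so Newton's second law gives a negative (decelerating) acceleration.
net_force = -f_friction
acceleration = net_force / MASS  # equals -MU * G, independent of mass
```

Note how the mass cancels: the deceleration due to friction alone is just $-µg$.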
anonymous 5 years ago Differentiate the given function: lnx^3 should the next step be 3lnx^2 1. apples Well, note that here you would use the chain rule. The outer function is log(x) and the inner function is x^3. Noting that $\frac{d}{dx} \left(f(g(x))\right) = f'(g(x)) \cdot g'(x)$ then you'll want to differentiate log(x) and plug x^3 into it, then multiply that by the derivative of x^3. 2. anonymous 3. amistre64 ln(x) dresses down to 1/x: $\frac{3\ln(x)^2}{x}$ 4. apples That's not correct. You're right that d/dx log(x) = 1/x, however this yields the result $\frac{1}{x^3} \cdot 3x^2$ which simplifies to $\frac{3}{x}$ 5. anonymous if you mean (lnx)^3 then amistre is right if you mean ln(x^3) then apples is right 6. amistre64 ...... oh the humanity!! ;)
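The two readings of the ambiguous question can be checked numerically with a central difference (this check is not part of the original thread):

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0

# Reading 1: ln(x^3) -- chain rule gives (1/x^3) * 3x^2 = 3/x
assert abs(deriv(lambda t: math.log(t ** 3), x) - 3 / x) < 1e-6

# Reading 2: (ln x)^3 -- chain rule gives 3 (ln x)^2 * (1/x)
assert abs(deriv(lambda t: math.log(t) ** 3, x) - 3 * math.log(x) ** 2 / x) < 1e-6
```

Both asserts pass, confirming that each reply in the thread is correct for its own reading of the problem.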
## Surds, Indices and Logarithms. Quadratic equations and inequalities, variation equations, function notation, systems of equations, etc. ### Surds, Indices and Logarithms. Evaluate (log 4 to the base of 5 times log10 to the base of 2)/ log 10^(0.5) to the base of 25 without the use of a calculator. JiaGengMaths Posts: 6 Joined: Thu Apr 19, 2012 8:12 am ### Re: Surds, Indices and Logarithms. JiaGengMaths wrote:Evaluate (log 4 to the base of 5 times log10 to the base of 2)/ log 10^(0.5) to the base of 25 without the use of a calculator. Do you mean that you have to simplify "log-base5 of 4, multiplied by log-base2 of 10, all divided by log-base25 of the square root of ten"? Thanks! maggiemagnet Posts: 287 Joined: Mon Dec 08, 2008 12:32 am ### Re: Surds, Indices and Logarithms. yes ty JiaGengMaths Posts: 6 Joined: Thu Apr 19, 2012 8:12 am ### Re: Surds, Indices and Logarithms. $\mbox{Simplify }\, \frac{\log_5(4)\, \log_2(10)}{\log_{25}\left(\sqrt{10}\right)}$ I would suggest applying the formula for changing the base of a logarithmic expression, and then cancelling, etc. nona.m.nona Posts: 250 Joined: Sun Dec 14, 2008 11:07 pm
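For reference, the change-of-base approach suggested above does collapse the expression to an integer; a quick numerical check (not posted in the thread):

```python
import math

def log_base(b, x):
    """log_b(x) via change of base: ln(x) / ln(b)."""
    return math.log(x) / math.log(b)

# log_5(4) * log_2(10) / log_25(sqrt(10))
value = log_base(5, 4) * log_base(2, 10) / log_base(25, math.sqrt(10))
# After change of base everything cancels down to 4*ln(4)/ln(2) = 8.
```

The cancellation works because $\log_{25}\sqrt{10} = \frac{\ln 10}{4\ln 5}$, so the $\ln 5$ and $\ln 10$ factors cancel against the numerator.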
# Std Error Vs Std Deviation

The standard deviation (SD) describes the variability of individual observations, while the standard error of the mean (SEM) describes how far the sample mean is likely to be from the true population mean. More generally, the standard error is the standard deviation of the sampling distribution of a statistic, most commonly of the mean.

Because of random variation in sampling, a proportion or mean calculated from a sample will rarely be equal to the population value. For example, 400 patients in a trial are a sample of all patients who may be treated with the drug; 2000 voters asked whether they will vote for candidate A or candidate B (of whom 1040, or 52%, state a preference for candidate A) are a sample of the whole electorate. The standard error describes bounds on this random sampling process.

The standard error of the mean is estimated from a sample of n observations as SE = s/√n, where s is the sample standard deviation. As the sample size grows, the sampling distribution of the mean becomes more narrow and the standard error decreases; with a huge sample, you know the value of the mean with a lot of precision. The standard deviation, in contrast, does not shrink with sample size: as samples get larger, s simply becomes a more and more accurate estimate of the population standard deviation σ. The sample standard deviation is itself a biased estimate for small samples: with n = 2 the underestimate is about 25%, but for n = 6 it is only 5%, and a correction factor exists for small samples of n < 20 (see also unbiased estimation of standard deviation).

The relative standard error is the standard error divided by the mean, expressed as a percentage. If one survey has a standard error of $10,000 and the other has a standard error of $5,000 for the same estimated mean, then the relative standard errors are 20% and 10% respectively, and the second survey has a proportionately more precise measurement, since it has less sampling variation around the mean.

In short: use the standard deviation to describe the spread of the individual data, and the standard error to describe the precision of the sample mean, for example when reporting confidence intervals or testing hypotheses. When distributions are approximately normal, SD is a better measure of spread than quantile-based measures such as the semi-interquartile range. Many journals set standards on which of the two must be reported, and misuse of the standard error of the mean (reporting it merely because it is smaller than the SD) is discussed in more detail by Altman and Bland in their Statistics Notes.
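The relation SE = σ/√n can be seen directly by simulation: draw many samples of the same size, compute each sample's mean, and compare the spread of those means with the formula. The population parameters below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(0)

POP_MEAN, POP_SD = 0.0, 10.0
N, TRIALS = 25, 2000

# Draw many samples of size N and record each sample's mean.
sample_means = []
for _ in range(TRIALS):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    sample_means.append(statistics.fmean(sample))

# The spread of the sample means is the (empirical) standard error...
observed_se = statistics.stdev(sample_means)
# ...and it matches sigma / sqrt(n) = 10 / 5 = 2.
predicted_se = POP_SD / N ** 0.5
```

Increasing `N` shrinks both quantities together, while the standard deviation of any single sample stays near 10.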
# Intel® Data Analytics Acceleration Library (Intel® DAAL) 2020 Installation Guide Published: 12/10/2019 Last Updated: 12/10/2019 Please see the following links to the online resources and documents for the latest information regarding Intel® DAAL: Follow these instructions for a standalone installation of Intel® Data Analytics Acceleration Library (Intel® DAAL). If your copy of Intel® DAAL is a part of one of our "suite products" (Intel® Parallel Studio XE and Intel® System Studio), your installation procedure may differ from that described below. In this case, please refer to the readme and installation guides for your "suite product" for specific installation details. ​To install Intel® DAAL 2020 as a part of Intel® System Studio, please follow instructions here. Windows* OS You can install multiple versions of Intel® DAAL and any combination of 32-bit and 64-bit variations of the library on your development system. ### Interactive installation on Windows* OS 2. Choose a target directory (C:\Users\<Username>\Downloads\Intel\<package_name>  by default) for the contents of the self-extracting setup file to be placed before the actual library installation. You may choose to remove or keep temporarily extracted files after installation is complete. If you need to free up disk space, you can safely remove the files in this Downloads  directory. However, deleting these files will impact your ability to change your installation options later using the add/remove applet, but you will always be able to uninstall the library. 3. Click Extract. The installation wizard appears after files extraction. 4. The Installation Summary dialog box opens to show the summary of your installation options (chosen components, destination folder, etc.). Target platform architecture check boxes select the architecture of the platform where your software will run. In the Choose a Destination Folder dialog box, choose the installation directory. 
By default, it is C:\Program Files (x86)\IntelSWTools. You may choose a different directory. All files are installed into the Intel Parallel Studio XE 2020 subdirectory. If you agree with the End User License Agreement, click Next to accept the license agreement and proceed to the Visual Studio integration page. 5. You are able to select the Microsoft Visual Studio* product(s) for integration, if any are installed on the system. Click Install to start the installation. 6. Click Finish in the final screen to exit the Intel Software Setup Assistant. ### Online Installation on Windows* OS The default installation package for Intel® DAAL for Windows now consists of a smaller installation package that dynamically downloads and then installs selected packages. This requires a working internet connection and potentially a proxy setting if you are behind an internet proxy. Full packages are provided alongside this online install package. You can download them if a working internet connection is not available. ### Silent Installation on Windows* OS Silent installation enables you to install Intel® DAAL on a single Windows* machine in a batch mode, without input prompts. Use this option if you need to install the library on multiple similarly configured machines, such as cluster nodes. To invoke silent installation: 1. Go to the folder where the Intel® DAAL package was extracted during unpacking; by default, it is the C:\Users\<Username>\Downloads\Intel\<package name> folder. 2. Run install.exe located in this folder: install.exe [command arguments] If no command is specified, the installation proceeds in the Setup Wizard mode. If a command is specified, the installation proceeds in the non-interactive (silent) mode. The table below lists possible command values and the corresponding arguments.
| Command | Required Arguments | Optional Arguments | Action |
| --- | --- | --- | --- |
| install | output=<file>, eula={accept\|reject} | installdir=<installdir>, license=<license>, sn=<s/n>, log=<log file> | Install the product as specified by the arguments. |
| remove | output=<file> | log=<log file> | Remove the product. |
| repair | output=<file> | log=<log file> | Repair the existing product installation. |

Use the output argument to define the file where the output will be redirected. This file contains all the installer's messages that you may need: general communication, warning, and error messages. Explicitly indicate by eula=accept that you accept the End-user License Agreement. Use the log argument to specify the location for a log file. This file is used only for debugging; Support Engineers may request it if your installation fails. For example, the command line install.exe install -output=C:\log.txt -eula=accept launches a silent installation that prints output messages to the C:\log.txt file. ## Uninstalling Intel® DAAL for Windows* OS To uninstall Intel® DAAL, select Add or Remove Programs from the Control Panel and locate the version of Intel® DAAL you wish to uninstall. Linux* OS You can install multiple versions of Intel® DAAL and any combination of 32-bit and 64-bit variations of the library on your development system. ### Interactive installation on Linux* OS 1. Unpack the downloaded file: tar -zxvf name_of_downloaded_file 2. Change the directory (cd) to the folder containing the unpacked files. 3. Run the installation script and follow the instructions in the dialog screens that are presented: > ./install.sh 4. The install script checks your system and displays any optional and critical prerequisites necessary for a successful install. You should resolve all critical issues before continuing the installation.
Optional issues can be skipped, but it is strongly recommended that you fix all issues before continuing with the installation. ### GUI installation on Linux* OS If your Linux* system has GUI support, the installer provides a GUI-based installation. To install Intel® DAAL for Linux* OS in the GUI mode, run the install_GUI.sh script: If GUI is not supported (for example, if running from an ssh terminal), a command-line installation is provided. ### Silent Installation on Linux* OS To run the silent install, follow these steps: 1. Unpack the downloaded file: >tar -zxvf name_of_downloaded_file 2. Change the directory (cd) to the folder containing the unpacked files. 3. Accept the End User License Agreement by editing the configuration file silent.cfg. To do this, specify ACCEPT_EULA=accept instead of the default decline value. 4. Run the silent install: >./install.sh --silent ./silent.cfg Tip: You can run the install interactively and record all the options into a custom configuration file using the following command: >./install.sh  --duplicate "./my_silent_config.cfg" After this you can install the package on other machines with the same installation options using >./install.sh --silent "./my_silent_config.cfg" ### Online Installation on Linux* OS The default installation package for Intel® DAAL for Linux consists of a smaller installation package that dynamically downloads and then installs the packages selected to be installed. This requires a working internet connection and potentially a proxy setting if you are behind an internet proxy. Full packages are provided alongside this online install package for download if a working internet connection is not available. ### Offline Installation on Linux* OS If the system where Intel® DAAL will be installed is disconnected from the internet, the product may be installed in offline mode. ## Uninstalling Intel® DAAL for Linux* OS If you installed as root, you will need to log in as root.
To uninstall Intel® DAAL, run the uninstall script: <DAAL-install-dir>/parallel_studio_xe_2020/uninstall.sh. Alternatively, you may use the GUI mode to uninstall Intel® DAAL for Linux* OS: <DAAL-install-dir>/parallel_studio_xe_2020/uninstall_GUI.sh. Uninstalling Intel® DAAL will delete all other Parallel Studio XE components. macOS* There are several different product suites available, for example, Intel® Data Analytics Acceleration Library for macOS* and Intel® Parallel Studio XE Composer Edition for C++ macOS*, each including Intel DAAL as one of its components. Please read the download web page carefully to determine which product is appropriate for you. ### Interactive installation on macOS* 2. You will be asked to select the installation mode. The option Install as Administrator is recommended. Click Next and enter the password. The install wizard will proceed automatically. 3. If you agree with the End User License Agreement, select the I accept the terms of the license agreement radio button, and click Next. 4. The Installation Summary dialog box opens to show the summary of your installation options (chosen components, destination folder, etc.). Click Install to start installation (proceed to step 7) or click Customize installation to change settings. If you select a custom installation, follow steps 5-7. 5. In the Choose a Destination Folder dialog box, choose the installation directory. By default, it is /opt/intel, but you may choose a different directory. All files are installed into the Intel Parallel Studio XE 2020 subdirectory (by default /opt/intel/compilers_and_libraries_2020/mac/daal). 6. If you install DAAL from a Parallel Studio XE product, the package contains components for integration into Xcode*. You are able to select the integration to Xcode* in the Choose Integration Target dialog box. 7. The Installation Summary dialog box opens to show the summary of your installation options (chosen components, destination folder, etc.).
Click Install to start installation. 8. Click Finish in the final screen to exit the Intel Software Setup Assistant. ### Silent installation on macOS* To run the silent install, follow these steps: >tar -zxvf name_of_downloaded_file 2. Change the directory (cd) to the folder containing unpacked files. 3. Accept End User License Agreement by editing the configuration file silent.cfg. To do this, specify ACCEPT_EULA=accept instead of the default decline value. 4. Run the silent install: >./install.sh --silent ./silent.cfg Tip: You can run install interactively and record all the options into custom configuration file using the following command: >./install.sh  --duplicate "./my_silent_config.cfg" After this you can install the package on other machines with the same installation options using >./install.sh --silent "./my_silent_config.cfg" ## Uninstalling Intel® DAAL for macOS* 1. Open the file: <install_dir>/parallel_studio_xe_2020.<n>.<pkg>/uninstall.app If you are not currently logged in as root you will be asked for the Administrator or root password. Uninstalling Intel® DAAL will delete all other Parallel Studio XE components. ## Notices and Disclaimers No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at [intel.com]. 
The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Microsoft, Windows, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission of The Khronos Group. *Other names and brands may be claimed as the property of others. Optimization Notice Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804 #### Product and Performance Information 1 Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
# Potential Difference ## Key Stage 2 ### Meaning Voltage is how much push electricity has. The bigger the voltage the more push the electricity has to go around the circuit. Adding another cell to the circuit in series will increase the voltage. With a bigger voltage a lamp will be brighter and a buzzer will be louder. A series circuit with one cell and one bulb. A bulb will be brighter if the voltage is higher. ## Key Stage 3 ### Meaning Potential Difference is how much energy is transferred by a current. Potential Difference is measured using a Voltmeter. The units of potential difference are Volts (V). Potential Difference is sometimes described as the 'push' that moves a current around a circuit. ## Key Stage 4 ### Meaning Potential Difference is the amount of energy transferred per unit charge between two points in a circuit. Potential Difference is measured using a Voltmeter. The units of potential difference are Volts (V). Potential difference is the difference in potential between two points in a circuit. Potential difference can be measured between two points in a circuit and is measured across a component. If two points in a circuit are at the same potential there is no potential difference between them so no energy is transferred between those two points. ### Equation NB: You should remember this equation with energy transferred as the subject of the formula. Potential Difference = (Energy Transferred)/(Charge) $$V=\frac{E}{Q}$$ Where $$V$$ = The potential difference between two points. $$Q$$ = The amount of charge that moves between two points. $$E$$ = The Energy Transferred by the charge. ### Example Calculations #### Finding Potential Difference from Charge and Energy Transferred A charge of 84C transfers an energy of 20kJ. Calculate the potential difference correct to two significant figures. 170J of energy is transferred by a charge of 92mC.
Calculate the potential difference correct to two significant figures. 1. State the known quantities in correct units. Q = 84C E = 20kJ = 20x103J 1. State the known quantities in correct units. Q = 92mC = 92x10-3C E = 170J 2. Substitute the numbers into the equation and solve. $$V=\frac{E}{Q}$$ $$V=\frac{20 \times 10^3}{84}$$ $$V=238.0952V$$ $$V\approx 240V$$ 2. Substitute the numbers into the equation and solve. $$V=\frac{E}{Q}$$ $$V=\frac{170}{92 \times 10^{-3}}$$ $$V=1847.826V$$ $$V\approx 1800V$$ #### Finding Charge from Potential Difference and Energy Transferred The potential difference of 12V is placed across a resistor increasing its thermal energy store by 3.7J as a result. Calculate the charge that has flowed through the resistor in this time correct to two significant figures. A circuit transfers 2.8kJ of energy electrically to a motor. The potential difference across the motor is 1.5V. Calculate thecharge that has flowed through the motor in this time correct to two significant figures. 1. State the known quantities in correct units. V = 12V E = 3.7J 1. State the known quantities in correct units. V = 1.5V E = 2.8kJ = 2.8x103J 2. Substitute the numbers and evaluate. $$V=\frac{E}{Q}$$ $$12=\frac{3.7}{Q}$$ 2. Substitute the numbers and evaluate. $$V=\frac{E}{Q}$$ $$1.5=\frac{2.8 \times 10^3}{Q}$$ 3. Rearrange the equation and solve. $$Q=\frac{3.7}{12}$$ $$Q=0.3083C$$ $$Q\approx0.31C$$ 3. Rearrange the equation and solve. $$Q=\frac{2.8 \times 10^3}{1.5}$$ $$Q=1866.7C$$ $$Q\approx1900C$$ #### Finding Energy Transferred from Charge and Potential Difference A bolt of lightning with a potential difference 31,000kV transfers a charge of 15C. Calculate the energy transferred by this bolt of lightning correct to two significant figures. A 9V battery is able to mobilise a charge of 4.3kC during its operation. Calculate the total amount of energy stored in this battery correct to two significant figures. 1. State the known quantities in correct units. 
V = 31,000kV = 3.1x107V Q = 15C 1. State the known quantities in correct units. V = 9V Q = 4.3kC = 4.3x103 2. Substitute the numbers and evaluate. $$V=\frac{E}{Q}$$ $$3.1 \times 10^7=\frac{E}{15}$$ 2. Substitute the numbers and evaluate. $$V=\frac{E}{Q}$$ $$9 =\frac{E}{4.3 \times 10^3}$$ 3. Rearrange the equation and solve. $$E = 15 \times 3.1 \times 10^7$$ $$E = 4.65\times10^8 J$$ $$E\approx4.7\times10^8 J$$ 3. Rearrange the equation and solve. $$E = 4.3 \times 10^3 \times 9$$ $$E = 38700J$$ $$E \approx 39000 \times 10^4J$$ ### References #### AQA Potential difference (p.d), page 294, GCSE Combined Science Trilogy 1, Hodder, AQA Potential difference (p.d); current-potential difference graphs, pages 43-5, GCSE Physics, Hodder, AQA Potential difference (p.d); direct and alternating, page 50, GCSE Physics, Hodder, AQA Potential difference (p.d); in transformers, page 237, GCSE Physics, Hodder, AQA Potential difference (p.d); induced, pages 232-3, GCSE Physics, Hodder, AQA Potential difference (p.d.), page 39, GCSE Physics, Hodder, AQA Potential difference (p.d.); in series and parallel circuits, pages 46-7, GCSE Physics, Hodder, AQA Potential difference, page 185, GCSE Chemistry; Student Book, Collins, AQA Potential difference, pages 180, 181, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference, pages 24, 25, 96-98, GCSE Physics; The Revision Guide, CGP, AQA Potential difference, pages 52-69, 71-77, 264-5, GCSE Physics; Student Book, Collins, AQA Potential difference, pages 54-55, 58-61, 64-65, 67-70, 224-229, GCSE Physics; Third Edition, Oxford University Press, AQA Potential difference, pages 62-67, 89, 90, GCSE Combined Science Trilogy; Physics, CGP, AQA Potential difference, pages 64-69, 92, 93, GCSE Physics; The Complete 9-1 Course for AQA, CGP, AQA Potential difference; alternating, page 188, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; direct, page 188, GCSE Combined Science; The Revision Guide, CGP, AQA 
Potential difference; energy transferred, page 190, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; energy transferred, page 33, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; in parallel circuits, page 186, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; in parallel circuits, page 29, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; in parallel, pages 72, 73, GCSE Combined Science Trilogy; Physics, CGP, AQA Potential difference; in parallel, pages 74, 75, GCSE Physics; The Complete 9-1 Course for AQA, CGP, AQA Potential difference; in series circuits, page 185, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; in series circuits, page 28, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; in series, pages 68, 69, GCSE Combined Science Trilogy; Physics, CGP, AQA Potential difference; in series, pages 70, 71, GCSE Physics; The Complete 9-1 Course for AQA, CGP, AQA Potential difference; induced, pages 258-60, GCSE Physics; Student Book, Collins, AQA Potential difference; induced, pages 303-306, GCSE Physics; The Complete 9-1 Course for AQA, CGP, AQA Potential difference; induced, pages 96-98, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; I-V characteristics, page 183, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; I-V characteristics, page 26, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; measuring, page 106, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; measuring, page 239, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; measuring, pages 235, 236, GCSE Combined Science Trilogy; Physics, CGP, AQA Potential difference; measuring, pages 331, 332, GCSE Physics; The Complete 9-1 Course for AQA, CGP, AQA Potential difference; national grid, page 191, GCSE Combined Science; The Revision Guide, CGP, AQA Potential difference; national grid, page 
34, GCSE Physics; The Revision Guide, CGP, AQA Potential difference; transformers, page 98, GCSE Physics; The Revision Guide, CGP, AQA #### Edexcel Potential difference (p.d.), pages 142-143, 174, GCSE Physics, Pearson Edexcel Potential difference, pages 184-187, GCSE Combined Science; The Revision Guide, CGP, Edexcel Potential difference, pages 221, 222, GCSE Physics, CGP, Edexcel Potential difference, pages 71-74, GCSE Physics; The Revision Guide, CGP, Edexcel Potential difference; in parallel circuits, pages 235, 236, GCSE Physics, CGP, Edexcel Potential difference; induced, pages 280-283, GCSE Physics, CGP, Edexcel #### OCR Potential difference (p.d.), p ages 100-101, 130-134, 136-137, Gateway GCSE Physics, Oxford, OCR Potential difference (p.d.); Calculations, pages 113, Gateway GCSE Physics, Oxford, OCR Potential difference (p.d.); Graphs, pages 106-107, Gateway GCSE Physics, Oxford, OCR Potential difference (p.d.); In series circuit, pages 102-103, Gateway GCSE Physics, Oxford, OCR Potential difference (p.d.); Measurement, pages 101, 259-260, 262, Gateway GCSE Physics, Oxford, OCR Potential difference, page 95, Gateway GCSE Chemistry; The Revision Guide, CGP, OCR Potential difference, pages 176-182, Gateway GCSE Combined Science; The Revision Guide, CGP, OCR Potential difference, pages 44-50, Gateway GCSE Physics; The Revision Guide, CGP, OCR
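The worked examples for $$V=\frac{E}{Q}$$ and its rearranged forms can be checked with a few lines of Python. This is a quick sketch; the function name is just for illustration.

```python
# Check the Key Stage 4 worked examples with V = E / Q.
# All quantities are in SI units: volts (V), joules (J), coulombs (C).

def potential_difference(energy, charge):
    """Potential difference (V) from energy transferred (J) and charge (C)."""
    return energy / charge

# A charge of 84 C transfers 20 kJ:
print(round(potential_difference(20e3, 84)))    # 238  (240 V to 2 s.f.)

# 170 J transferred by a charge of 92 mC:
print(round(potential_difference(170, 92e-3)))  # 1848  (1800 V to 2 s.f.)

# Rearranged forms: Q = E / V and E = V * Q.
print(round(3.7 / 12, 4))   # 0.3083  (0.31 C to 2 s.f.)
print(15 * 3.1e7)           # 465000000.0, i.e. 4.65e8 J (4.7e8 J to 2 s.f.)
```

Note that the rounding to two significant figures in the examples is done by hand; Python's `round` rounds to a number of decimal places, not significant figures.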
# Existence of Positive Solutions to Constrained Linear Elliptic Second-Order PDE

Consider the elliptic second-order PDE on some bounded domain $$D$$ (in any dimension) $$-\Delta_D u + \alpha u = 0$$ subject to the constraint $$\gamma^i \nabla_i u + \beta u = 0$$, where $$\Delta_D$$ is the Laplacian on $$D$$ (with respect to some metric), and $$\alpha$$, $$\beta$$, and $$\gamma^i$$ are real smooth functions on $$D$$.

A zeroth question, to address user254433's comment below, is: what compatibility conditions on $$\alpha$$, $$\beta$$, $$\gamma^i$$, and presumably the metric on $$D$$ are required to ensure that the system above has solutions? user254433 derived such a condition in one dimension.

Now my main question. Assume the system satisfies the above compatibility conditions so that it admits solutions, and suppose that any (real) solution to the above system which is positive on the boundary $$\partial D$$ must be positive everywhere in $$D$$. What does this imply about the coefficients $$\alpha$$, $$\beta$$, and $$\gamma^i$$?

Note that here I'm asking for necessary conditions on the coefficients. Indeed, I do know of a sufficient condition: if $$\alpha \geq 0$$ everywhere, then standard minimum principles ensure that $$u$$ cannot have any negative local minima (since otherwise at the minimum we'd have $$-\Delta_D u + \alpha u < 0$$), and so if it's positive on $$\partial D$$ it must be positive everywhere. However, if necessary conditions on the coefficients are unknown, I'm also interested in learning about sufficient conditions that are weaker than $$\alpha \geq 0$$ everywhere. Note that these conditions (necessary or sufficient) need not be local; for instance, I'm happy to have conditions on integrals of these coefficients over $$D$$, etc.

This is a long comment, not an answer. I just want to mention a technicality that suggests your problem might need to be rephrased. You will want to assume that the system itself is compatible.
Let me illustrate in $$n=1$$ dimension. If $$n=1$$, we can write your system as \begin{align} u''+\Gamma(x)u'+\alpha(x)u=0,\\ u'+\beta(x)u=0, \end{align} where, using the low-dimensional structure, we have normalized the leading coefficients to 1. The problem then becomes: if $$u|_{\partial D}>0\to u|_D>0$$, does this imply anything about $$\Gamma,\alpha,\beta$$?

Here's my observation: if no solutions exist at all, then the positivity property holds vacuously and implies nothing about the coefficients. So the problem is only interesting in the case when the system is compatible. In higher dimensions the precise conditions are unclear to me, but for $$n=1$$ it is clear. To test for compatibility, we differentiate the second equation and substitute $$u'=-\beta u$$: \begin{align} 0=u''+\beta u'+\beta' u=u''+(\beta'-\beta^2)u. \end{align} Let us also eliminate $$u'$$ from the "main" equation: \begin{align} u''+(\alpha-\Gamma\beta)u=0. \end{align} Solving both equations for $$u''$$, we get \begin{align} [\alpha-\Gamma\beta-(\beta'-\beta^2)]u=0. \end{align} For nontrivial solutions, we thus need \begin{align} \alpha=\Gamma\beta+\beta'-\beta^2. \end{align} So before even addressing your question about positive solutions, we need this compatibility condition on $$\alpha,\beta,\Gamma$$.
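The $$n=1$$ compatibility condition can be verified symbolically. Below is a minimal sketch using SymPy, with $$\beta(x)=\sin x$$ and $$\Gamma(x)=x^2$$ chosen arbitrarily for illustration; any smooth choices work, with $$\alpha$$ then forced by the condition.

```python
import sympy as sp

x = sp.symbols('x')

# Arbitrary illustrative choices; alpha is then forced by compatibility:
beta = sp.sin(x)
Gamma = x**2
alpha = Gamma*beta + sp.diff(beta, x) - beta**2  # alpha = Gamma*beta + beta' - beta^2

# The constraint u' + beta*u = 0 has general solution u = C*exp(-integral of beta);
# take C = 1.
u = sp.exp(-sp.integrate(beta, x))

# With alpha chosen as above, u automatically solves the second-order equation too:
residual = sp.simplify(sp.diff(u, x, 2) + Gamma*sp.diff(u, x) + alpha*u)
assert residual == 0
```

Note that in one dimension the constraint alone already answers the positivity question: every solution of $$u'+\beta u=0$$ is a constant multiple of $$e^{-\int\beta\,dx}$$, so it never changes sign on $$D$$.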
# Topological restrictions on Anosov representations

Richard Canary, Konstantinos Tsouvalas

Journal of Topology, Pub Date: 2020-08-26, DOI: 10.1112/topo.12166

We characterize groups admitting Anosov representations into $SL(3,\mathbb{R})$, projective Anosov representations into $SL(4,\mathbb{R})$, and Borel Anosov representations into $SL(4,\mathbb{R})$. More generally, we obtain bounds on the cohomological dimension of groups admitting $P_k$-Anosov representations into $SL(d,\mathbb{R})$ and offer several characterizations of Benoist representations.