# Math Help - divide by 3?

1. ## divide by 3?

hi folks,

I hope this is the right place to post this! I have no idea how to proceed with the following question:

Show that for all positive integral values of n, $7^n + 2^{2n+1}$ is divisible by 3.

I tried a few terms as follows:

n = 1: 7^1 + 2^3 = 7 + 8 = 15
n = 2: 7^2 + 2^5 = 49 + 32 = 81
n = 3: 7^3 + 2^7 = 343 + 128 = 471

and these are all divisible by 3, but how do I handle the general case? I thought of expanding the expression, i.e. $2\cdot 2^n\cdot 2^n - (1-8)^n$, and using a binomial on the second term, but it doesn't get me anywhere. I guess I am trying to calculate the sum of a series and show that it has a factor of 3, but I can't see how to do it. Any ideas?

regards and thanks
Simon

p.s. sorry about the formatting. The HTML stuff doesn't seem to come out, so I resorted to the ^ symbol, which is pretty unpretty!

2. Hi,

Originally Posted by s_ingram
I thought of expanding the expression, i.e. $2\cdot 2^n\cdot 2^n - (1-8)^n$, and using a binomial on the second term, but it doesn't get me anywhere. [...]

I suggest writing $7^n+2^{2n+1}=(2^2+3)^n+2^{2n+1}$ and then using the binomial theorem to expand $(2^2+3)^n$.

Originally Posted by s_ingram
p.s. sorry about the formatting. [...]

To enter math equations you can use LaTeX (see http://www.mathhelpforum.com/math-help/latex-help/).

3. $(2^{2}+3)^{n} = 2^{2n} + \binom{n}{1}2^{2(n-1)}3^{1} + \dots + \binom{n}{k}2^{2(n-k)}3^{k} + \dots + 3^{n}$

$= 2^{2n} + 3\left[\binom{n}{1}2^{2(n-1)} + \dots + \binom{n}{k}2^{2(n-k)}3^{k-1} + \dots + 3^{n-1}\right]$

So we can write:

$7^n + 2^{2n+1} = 3\left[\binom{n}{1}2^{2(n-1)} + \dots + \binom{n}{k}2^{2(n-k)}3^{k-1} + \dots + 3^{n-1}\right] + 2^{2n}(1+2)$

$= 3\left[\binom{n}{1}2^{2(n-1)} + \dots + \binom{n}{k}2^{2(n-k)}3^{k-1} + \dots + 3^{n-1}\right] + 2^{2n}(3)$

Now factor out a 3 from both terms:

$3\left[\left(\binom{n}{1}2^{2(n-1)} + \dots + \binom{n}{k}2^{2(n-k)}3^{k-1} + \dots + 3^{n-1}\right) + 2^{2n}\right]$

And because we have a number times 3, the result is divisible by 3.

Note: if there isn't a careless mistake somewhere in this mess, I am surprised.

4. thanks to Twig and flyingsquirrel. You guys make it look so easy!

5. If you do it by induction, here is the last step.

$\begin{array}{rcl} 7^{n + 1} + 2^{2n + 3} & = & 7^{n + 1} + 7 \cdot 2^{2n + 1} - 7 \cdot 2^{2n + 1} + 2^{2n + 3} \\ & = & 7\left( 7^n + 2^{2n + 1} \right) - 2^{2n + 1} \left( 7 - 2^2 \right) \\ & = & 7\left( 7^n + 2^{2n + 1} \right) - 2^{2n + 1} \left( 3 \right) \end{array}$

6. Hello, s_ingram!

How about an inductive proof?

Show that for any positive integer $n$: $7^n + 2^{2n+1}$ is divisible by 3.

Verify $S(1)$: $7^1 + 2^3 = 7+8 = 15$ ... divisible by 3.

Assume $S(k)$: $7^k + 2^{2k+1} = 3a$ for some integer $a$.

Add $6\cdot 7^k + 3\cdot 2^{2k+1}$ to both sides:

$7^k + 6\cdot 7^k + 2^{2k+1} + 3\cdot 2^{2k+1} \;=\; 3a + 6\cdot 7^k + 3\cdot 2^{2k+1}$

$(1 + 6)\cdot 7^k + (1 + 3)\cdot 2^{2k+1} \;=\; 3\left(a + 2\cdot 7^k + 2^{2k+1}\right)$ ... a multiple of 3

$7\cdot 7^k + 2^2\cdot 2^{2k+1} \;=\; 3b$ for some integer $b$

Therefore: $7^{k+1} + 2^{2k+3} = 3b$.

We have proved $S(k+1)$; the inductive proof is complete.

7. Hi Soroban, now that is a smart proof. Too smart! How did you think of adding those two terms? Once you see them it all fits, but finding them is the real trick! I have never really been impressed with proofs by induction, but I am now! Usually I just try to add the next term in the series and ensure that it has a corresponding impact on the expression for the sum, but with $7^{k+1} + 2^{2k+3}$ I couldn't even imagine what the next term would be!

8. Or you can just note that $7^n\equiv 1^n \equiv 1 \pmod 3$ and $2^{2n+1} \equiv (-1)^{2n+1} \equiv -1 \pmod 3$, so that $7^n+2^{2n+1} \equiv 0 \pmod 3$.

9. Originally Posted by s_ingram
Hi Soroban, now that is a smart proof. Too smart! How did you think of adding those two terms? [...]

Hi,

$7^{n+1} + 2^{2n+3} = 7(7^{n} + 2^{2n+1})-7 \cdot 2^{2n+1} + 2^{2n+3} = 7 \cdot 3a + 2^{2n+1}(-7+2^2) = 7 \cdot 3a -3 \cdot 2^{2n+1}$

Therefore $7^{n+1} + 2^{2n+3} = 3(7a-2^{2n+1})$.
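The claim in the thread can also be spot-checked numerically; the few lines below are a sketch added here, not part of the original discussion:

```python
# Spot-check that 7**n + 2**(2*n + 1) is divisible by 3 for many n.
for n in range(1, 500):
    assert (7**n + 2**(2*n + 1)) % 3 == 0
print("7^n + 2^(2n+1) is divisible by 3 for n = 1..499")
```

This is not a proof, of course, but it is a quick way to gain confidence before attempting the induction.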
# The area of a quadrant is 154 sq cm; find its perimeter

A quadrant is exactly one fourth of a circle. It has a 90° angle at the centre, and its boundary has three "sides": two radii and a quarter of the circumference. For a circle of radius r:

Area of a quadrant = (1/4)πr²
Perimeter of a quadrant = 2r + (1/4)(2πr) = (π/2 + 2)r

**Problem:** The area of a quadrant is 154 cm². Find its perimeter. (Take π = 22/7.)

Solution:
(1/4)(22/7)r² = 154
r² = 154 × 7 × 4 / 22 = 196
r = 14 cm
Perimeter = 2r + (1/4)(2πr) = 2 × 14 + (1/4)(2 × 22 × 14/7) = 28 + 22 = 50 cm

**Related problems:**

- Find the area of a quadrant of a circle whose circumference is 22 cm: 2πr = 22 ⇒ r = 7/2 cm, so area = (1/4)(22/7)(7/2)² = 9.625 cm².
- If the area of a quadrant is 154 cm², the circumference of the full circle is 2πr = 2 × (22/7) × 14 = 88 cm.
- The perimeter of a quadrant of a circle is 25 cm; find its area: r(π/2 + 2) = r(25/7) = 25 ⇒ r = 7 cm, so area = (1/4)(22/7)(49) = 38.5 cm².
- The perimeter of a sheet of paper in the shape of a quadrant of a circle is 75 cm; find its area: r(25/7) = 75 ⇒ r = 21 cm, so area = (1/4)(22/7)(441) = 346.5 cm².
- The area of a sector of a circle of radius 14 cm is 154 cm²; find the length of the corresponding arc: (1/2)lr = 154 ⇒ l = 22 cm.
- The perimeter (circumference) of a circle is 176 cm; find its radius: 2πr = 176 ⇒ r = 28 cm.
# Using MATLAB for a numerical error analysis problem in ODEs

(a) Consider the following differential equation:

$$Y'(t)=\frac{1}{1+t^{2}}-2[Y(t)]^2, \qquad Y(0)=0$$

The exact solution is

$$Y(t)=\frac{t}{1+t^2}$$

Use the Euler method to solve the differential equation with the help of MATLAB (code given below). Run the program to $t=10$ with $h=0.1$ and compute the error. Then run it with $h=0.05$, compute the error, and note how it changes. Then use Richardson extrapolation to improve the accuracy and compute the error.

    function y=Euler(t,h,y0,t0)
    n=floor((t-t0)/h);
    y=y0;
    for i=1:n
        y=y+h*f(t0+h*(i-1),y);
    end

    function f=f(t,y)
    f=1/(1+t^2)-2*y^2;

(b) Now use the Runge-Kutta method to solve the differential equation in part (a). Run the program for $t=10$ and $h=0.1$. The error should be reduced as compared to part (a); explain the difference. Run it again for $h=0.05$ and notice how the error changes. Use Richardson extrapolation to improve the accuracy and compute the error.

    function y=Runge_Kutta(t,h,y0,t0)
    n=floor((t-t0)/h);
    y=y0;
    for i=1:n
        t=t0+h*(i-1);
        y=y+(h/2)*(f(t,y)+f(t+h,y+h*f(t,y)));
    end

    function f=f(t,y)
    f=1/(1+t^2)-2*y^2;

What I tried:

(a) When I ran the program with a step size of $0.1$ I got an error of $0.0986$, but when I ran it with a step size of $0.05$ I got an error of $0.0988$, which doesn't make sense to me because a smaller step size should give a smaller error, but it does not appear so in this case. (There is nothing wrong with the code because it was given by my professor.) Could anyone explain this to me? Thanks.

(b) Again, when I ran the program for this part I got an error of $0.0990$ for both step sizes $0.1$ and $0.05$, which is larger than that of part (a). That doesn't make sense to me because the Runge-Kutta method is supposed to be more accurate than the Euler method and hence have a smaller error, but it doesn't appear so in this case. Could anyone explain what is going wrong here?

---

You forgot, in your error calculation, to actually subtract the exact value $Y(10)=10/101=0.09900990099009901$. You should get something like this for the values and errors of the Euler method:

    0.1  -> val = 0.09861080536664182, err = -0.00039909562345719074
    0.05 -> val = 0.09881012084504534, err = -0.00019978014505367403
    Rich -> val = 0.09900943632344886, err = -4.6466665015731934e-07

which nicely demonstrates the O(h) global error. The values and errors for the trapezoidal method are, in my re-implementation:

    0.1  -> val = 0.09901920896465118, err = 9.307974552161258e-06
    0.05 -> val = 0.09901220700730341, err = 2.3060172043981586e-06
    Rich -> val = 0.09900987302152083, err = -2.7968578189541127e-08

which is in accord with an error O(h^2), as the second error is about 1/4 of the first.

• Thanks for your explanation. Yup, I got a slightly different result: for step size 0.1 my error is $0.0004099$, and for step size 0.05 I got $0.0002099$. Is that considered acceptable? Also, what did you get for the error in part (b)? I got an error of $0.0000099099$ for both step sizes 0.1 and 0.05. How could the error be the same for two different step sizes? And my Richardson error is $0.0000099099$ as well. Am I correct? – ys wong Sep 13 '16 at 8:54
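The numbers in the answer can be reproduced with a short re-implementation. The sketch below is in Python rather than MATLAB, and the function names (`euler`, `trapezoid`) are chosen here; it mirrors the loops in the original code, including the Richardson combinations for a first-order and a second-order method:

```python
def f(t, y):
    return 1.0 / (1.0 + t * t) - 2.0 * y * y

def euler(t_end, h, y0=0.0, t0=0.0):
    # Forward Euler: y_{i+1} = y_i + h * f(t_i, y_i), global error O(h)
    n = int((t_end - t0) / h)
    y = y0
    for i in range(n):
        y += h * f(t0 + h * i, y)
    return y

def trapezoid(t_end, h, y0=0.0, t0=0.0):
    # Explicit trapezoidal (Heun / RK2) method, global error O(h^2)
    n = int((t_end - t0) / h)
    y = y0
    for i in range(n):
        t = t0 + h * i
        k = f(t, y)
        y += (h / 2.0) * (k + f(t + h, y + h * k))
    return y

exact = 10.0 / 101.0                 # Y(10) = t / (1 + t^2) at t = 10

e1, e2 = euler(10, 0.1), euler(10, 0.05)
rich_euler = 2 * e2 - e1             # Richardson for an O(h) method
print(e1 - exact, e2 - exact, rich_euler - exact)

t1, t2 = trapezoid(10, 0.1), trapezoid(10, 0.05)
rich_trap = (4 * t2 - t1) / 3        # Richardson for an O(h^2) method
print(t1 - exact, t2 - exact, rich_trap - exact)
```

Note the two Richardson formulas differ: halving h removes roughly half the error of a first-order method (so the combination is 2y(h/2) - y(h)), but three quarters of the error of a second-order one (so it is (4y(h/2) - y(h))/3).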
\title{Logic I \\ Lecture 15}
\maketitle

Readings refer to sections of the course textbook, \emph{Language, Proof and Logic}.

\section{Soundness and Completeness: Statement of the Theorems}

‘A $\vdash$ B’ means there is a proof of B using premises A.

‘$\vdash$ B’ means there is a proof of B using no premises.

‘A ⊨ B’ means B is a logical consequence of A.

‘⊨ B’ means B is a tautology.

‘A ⊨$_{TT}$ B’ means B is a logical consequence of A just in virtue of the meanings of truth-functions (the textbook LPL calls this ‘tautological consequence’).

\emph{Soundness}: If A $\vdash$ B then A ⊨ B, i.e. if you can prove it in Fitch, it’s valid.

\emph{Completeness}: If A ⊨$_{TT}$ B then A $\vdash$ B, i.e. if it’s valid just in virtue of the meanings of the truth-functional connectives, then you can prove it in Fitch.

\section{The Soundness Property and the Fubar Rules (fast)}

Reading: 7.32

\section{Proof of the Soundness Theorem}

\begin{minipage}{\columnwidth}
\textbf{Illustration of soundness proof: ∨Intro}
\end{minipage}

\emph{Useful observation about any argument that ends with ∨Intro.} Suppose this argument is not valid, i.e. the premises are true and the conclusion false. Then Z must be false. So the argument from the premises to Z (line n) is not a valid argument. So there is a shorter proof which is not valid.
\emph{Stipulation}: when I say that \emph{a proof is not valid}, I mean that the last step of the proof is not a logical consequence of the premises (including premises of any open subproofs). \begin{minipage}{\columnwidth} \textbf{Illustration of soundness proof: ¬Intro} \end{minipage} \begin{minipage}{\columnwidth} \textbf{How to prove soundness? Outline} Step 1: show that each rule has this property: \hspace{5mm} Where the last step in a proof involves that rule, if the proof is not valid then there is a shorter proof which is not valid. Step 2: Suppose (for a contradiction) that some Fitch proofs are not valid. Select one of the shortest invalid proofs. The last step must involve one of the Fitch rules. Whichever rule it involves, we know that there must be a shorter proof which is not valid. This contradicts the fact that the selected proof is a shortest invalid proof. \end{minipage} Translation from awFOL to English \section{Translation from awFOL to English} \section{Translation from awFOL to English} Using the interpretation below, provide English translations of the following sentences of awFOL. \hspace{5mm} \begin{minipage}{\columnwidth} \hspace{5mm} Domain: {people and actions} \hspace{5mm} D(x) : x is desirable \hspace{5mm} V(x) : x is virtuous \hspace{5mm} A(x) : x is an action \hspace{5mm} P(x,y) : x performed y \hspace{5mm} a : Ayesha \end{minipage} i. ∀x( D(x) → V(x) ) ii. ∀x( (A(x) ∧ D(x)) → V(x) ) iii. ∃x( A(x) ∧ ¬D(x) ) iv. ∃x( A(x) ∧ ¬D(x) ∧ V(x) ) v. ∃x( A(x) ∧ P(a,x) ∧ ¬V(x) ) vi. ¬∃x( \hspace{5mm} ∃y( A(y) ∧ P(x,y) ∧ ¬V(y) ) \hspace{5mm} ∧ \hspace{5mm} ¬∃z( A(z) ∧ P(x,z) ∧ V(z) ) ) Domain: {people and actions} D(x) : x is desirable V(x) : x is virtuous A(x) : x is an action P(x,y) : x performed y a : Ayesha All squares are blue. ∀x( S(x) → B(x) ) Some squares are blue. ∃x( S(x) ∧ B(x) ) i. ∀x( D(x) → V(x) ) ii. ∀x( (A(x) ∧ D(x)) → V(x) ) iii. ∃x( A(x) ∧ ¬D(x) ) iv. ∃x( A(x) ∧ ¬D(x) ∧ V(x) ) v. ∃x( A(x) ∧ P(a,x) ∧ ¬V(x) ) vi.
¬∃x(∃y( A(y) ∧ P(x,y) ∧ ¬V(y) )∧ ¬∃z( A(z) ∧ P(x,z) ∧ V(z) ) ) 14.1--14.3, (*14.4--14.5) Numerical Quantifiers \section{Numerical Quantifiers} \section{Numerical Quantifiers} There are at least two squares: \hspace{5mm} ∃x ∃y ( Square(x) ∧ Square(y) ∧ ¬x=y ) At least two squares are broken: \hspace{5mm} ∃x ∃y ( \hspace{10mm} Square(x) ∧ Broken(x) \hspace{10mm} ∧ \hspace{10mm} Square(y) ∧ Broken(y) \hspace{10mm} ∧ \hspace{10mm} ¬x=y \hspace{5mm} ) There are at least three squares: \hspace{5mm} ∃x ∃y ∃z ( \hspace{10mm} Square(x) ∧ Square(y) ∧ Square(z) \hspace{10mm} ∧ \hspace{10mm} ¬x=y ∧ ¬y=z ∧ ¬x=z \hspace{5mm} ) There are at most two squares: \hspace{5mm} ¬There are at least three squares \hspace{5mm} ¬∃x ∃y ∃z ( Square(x) ∧ Square(y) ∧ Square(z) ∧ ¬x=y ∧ ¬y=z ∧ ¬x=z) There are exactly two squares: \hspace{5mm} There are at most two squares ∧ There are at least two squares \textbf{Number: alternatives} There is at most one square: \hspace{5mm} ∀x ∀y ( (Square(x) ∧ Square(y)) → x=y ) There are at most two squares: \hspace{5mm} ∀x ∀y ∀z ( \hspace{10mm} (Square(x) ∧ Square(y) ∧ Square(z)) \hspace{10mm} → \hspace{10mm} (x=y ∨ y=z ∨ x=z) \hspace{5mm} ) There is exactly one square: \hspace{5mm} ∃x ( Square(x) ∧ ∀y( Square(y) → x=y ) ) There are exactly two squares: \hspace{5mm} ∃x∃y ( \hspace{10mm} Square(x) ∧ Square(y) ∧ ¬x=y \hspace{10mm} ∧ \hspace{10mm} ∀z( Square(z) → (z=x ∨ z=y) ) \hspace{5mm} ) 14.2--14.3 14.4--14.5 14.10--14.11 \section{Extra Exercises: Proofs} You may not have time to do these exercises involving proofs until after term, but it would be a good idea to complete them at some point. \hspace{5mm} 13.6, 13.7, *13.8, *13.9 \hspace{5mm} 13.19, 13.23--13.27, *13.28--13.31 \hspace{5mm} *13.33, 13.35, 13.37, 13.39 \hspace{5mm} 13.43--13.45, 13.49--13.50, *13.51--13.52
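The numerical quantifiers above can be sanity-checked by brute force: the two formalisations of "there are exactly two squares" should agree on every interpretation over a small domain. A quick check (my own sketch, not part of the lecture materials):

```python
from itertools import product

def at_least_two(dom, sq):
    # ∃x ∃y ( Square(x) ∧ Square(y) ∧ ¬x=y )
    return any(sq[x] and sq[y] and x != y for x in dom for y in dom)

def at_least_three(dom, sq):
    # ∃x ∃y ∃z ( Square(x) ∧ Square(y) ∧ Square(z) ∧ ¬x=y ∧ ¬y=z ∧ ¬x=z )
    return any(sq[x] and sq[y] and sq[z] and x != y and y != z and x != z
               for x in dom for y in dom for z in dom)

def exactly_two_v1(dom, sq):
    # "at least two" ∧ ¬"at least three"
    return at_least_two(dom, sq) and not at_least_three(dom, sq)

def exactly_two_v2(dom, sq):
    # ∃x ∃y ( Sq(x) ∧ Sq(y) ∧ ¬x=y ∧ ∀z( Sq(z) → (z=x ∨ z=y) ) )
    return any(sq[x] and sq[y] and x != y and
               all(not sq[z] or z == x or z == y for z in dom)
               for x in dom for y in dom)

# check all 16 interpretations of Square over a 4-element domain
dom = range(4)
for bits in product([False, True], repeat=len(dom)):
    sq = dict(zip(dom, bits))
    assert exactly_two_v1(dom, sq) == exactly_two_v2(dom, sq) == (sum(bits) == 2)
```

Both formalisations agree with simply counting the squares, on every interpretation.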
{}
# Which of the following fractions has a decimal equivalent

Manager Joined: 22 Jul 2009 Posts: 201 Location: Manchester UK Followers: 2 Kudos [?]: 173 [3] , given: 6 Which of the following fractions has a decimal equivalent [#permalink]  08 Jan 2010, 13:22 3 KUDOS 15 This post was BOOKMARKED 00:00 Difficulty: 35% (medium) Question Stats: 65% (01:51) correct 35% (00:56) wrong based on 512 sessions Which of the following fractions has a decimal equivalent that is a terminating decimal? A. 10/189 B. 15/196 C. 16/225 D. 25/144 E. 39/128 [Reveal] Spoiler: OA Math Expert Joined: 02 Sep 2009 Posts: 31228 Followers: 5342 Kudos [?]: 62055 [15] , given: 9427 Re: Which of the following fractions [#permalink]  08 Jan 2010, 13:59 15 KUDOS Expert's post 10 This post was BOOKMARKED sagarsabnis wrote: Which of the following fractions has a decimal equivalent that is a terminating decimal? A. 10/189 B. 15/196 C. 16/225 D. 25/144 E. 39/128 [Reveal] Spoiler: E Can someone tell how to solve it in a faster way? Reduced fraction $$\frac{a}{b}$$ (meaning that the fraction is already reduced to its lowest terms) can be expressed as a terminating decimal if and only if $$b$$ (the denominator) is of the form $$2^n5^m$$, where $$m$$ and $$n$$ are non-negative integers.
For example: $$\frac{7}{250}$$ is a terminating decimal $$0.028$$, as $$250$$ (the denominator) equals $$2*5^3$$. Fraction $$\frac{3}{30}$$ is also a terminating decimal, as $$\frac{3}{30}=\frac{1}{10}$$ and the denominator $$10=2*5$$. $$\frac{39}{128}=\frac{39}{2^7}$$: the denominator has only the prime factor 2 in its prime factorization, hence this fraction will be a terminating decimal. All other fractions (after reducing, if possible) have primes other than 2 and 5 in their prime factorizations, hence they will be repeated decimals. Hope it's clear. _________________ Manager Joined: 22 Jul 2009 Posts: 201 Location: Manchester UK Followers: 2 Kudos [?]: 173 [0], given: 6 Re: Which of the following fractions [#permalink]  08 Jan 2010, 14:54 Excellent explanation buddy!!!! Senior Manager Joined: 05 Oct 2008 Posts: 271 Followers: 3 Kudos [?]: 234 [0], given: 22 Re: Which of the following fractions [#permalink]  12 Jan 2010, 03:35 Bunuel: numbers with terminating decimals basically should have 5 or 2 or both in its denominators, right? So any numerator with denominator 125 or 8 would be a terminating decimal? Thanks. Math Expert Joined: 02 Sep 2009 Posts: 31228 Followers: 5342 Kudos [?]: 62055 [1] , given: 9427 Re: Which of the following fractions [#permalink]  12 Jan 2010, 08:05 1 KUDOS Expert's post study wrote: Bunuel: numbers with terminating decimals basically should have 5 or 2 or both in its denominators, right? So any numerator with denominator 125 or 8 would be a terminating decimal? Thanks. Yes, as denominator 125=5^3 or 8=2^3, the numerator can be any integer. _________________ Manager Joined: 06 Apr 2010 Posts: 141 Followers: 3 Kudos [?]: 379 [0], given: 15 terminal decimal? [#permalink]  12 Sep 2010, 06:17 Which of the following fractions has a decimal equivalent that is a terminating decimal? A. 10 /189 B. 15/196 C. 16 /225 D. 25 /144 E. 39 /128 Math Expert Joined: 02 Sep 2009 Posts: 31228 Followers: 5342 Kudos [?]: 62055 [1] , given: 9427 Re: terminal decimal?
[#permalink]  12 Sep 2010, 07:57 1 KUDOS Expert's post Merging similar topics. _________________ Senior Manager Joined: 20 Jul 2010 Posts: 269 Followers: 2 Kudos [?]: 57 [0], given: 9 Re: Which of the following fractions [#permalink]  12 Sep 2010, 09:41 Nice formula for checking for 2 and 5 factors in the denominator. I was dividing all numbers.... _________________ If you like my post, consider giving me some KUDOS !!!!! Like you I need them Manager Joined: 16 Mar 2010 Posts: 187 Followers: 2 Kudos [?]: 101 [0], given: 9 Re: Which of the following fractions [#permalink]  13 Sep 2010, 02:20 Terminating... means it has to have 2s or 5s exclusively in the denominator Manager Joined: 07 Jan 2010 Posts: 147 Location: So. CA WE 1: 2 IT WE 2: 4 Software Analyst Followers: 2 Kudos [?]: 39 [0], given: 57 Re: Which of the following fractions [#permalink]  13 Sep 2010, 20:22 great explanation on terminating decimals! Manager Joined: 19 Apr 2010 Posts: 210 Schools: ISB, HEC, Said Followers: 4 Kudos [?]: 46 [0], given: 28 Re: Which of the following fractions [#permalink]  14 Sep 2010, 00:43 Hi Bunuel As per your explanation, if the denominator is not in the form of 2^n 5^m then the fraction will not be a terminating decimal. If you look at the denominators of the other answer choices they are also not in the above form 1. 189 = 3^3 *7^1 2. 196 = 2^2 * 7^2 3. 225 = 3^2 * 5^2 4. 144 = 2^4 * 3^2 So how the last answer choice is correct is still not clear based on your explanation? Math Expert Joined: 02 Sep 2009 Posts: 31228 Followers: 5342 Kudos [?]: 62055 [2] , given: 9427 Re: Which of the following fractions [#permalink]  14 Sep 2010, 04:51 2 KUDOS Expert's post 2 This post was BOOKMARKED prashantbacchewar wrote: Hi Bunuel As per your explanation, if the denominator is not in the form of 2^n 5^m then the fraction will not be a terminating decimal. If you look at the denominators of the other answer choices they are also not in the above form 1. 189 = 3^3 *7^1 2. 196 = 2^2 * 7^2 3. 225 = 3^2 * 5^2 4.
144 = 2^4 * 3^2 So how the last answer choice is correct is still not clear based on your explanation? As per the solution: Reduced fraction $$\frac{a}{b}$$ (meaning that the fraction is already reduced to its lowest terms) CAN BE expressed as a terminating decimal if and only if $$b$$ (the denominator) is of the form $$2^n5^m$$, where $$m$$ and $$n$$ are non-negative integers. For example: $$\frac{7}{250}$$ is a terminating decimal $$0.028$$, as $$250$$ (the denominator) equals $$2*5^3$$. Fraction $$\frac{3}{30}$$ is also a terminating decimal, as $$\frac{3}{30}=\frac{1}{10}$$ and the denominator $$10=2*5$$. A. $$\frac{10}{189}=\frac{10}{3^3*7}$$ --> the denominator has primes other than 2 and 5 in its prime factorization, hence it's a repeated decimal; B. $$\frac{15}{196}=\frac{15}{2^2*7^2}$$ --> the denominator has primes other than 2 and 5 in its prime factorization, hence it's a repeated decimal; C. $$\frac{16}{225}=\frac{16}{3^2*5^2}$$ --> the denominator has primes other than 2 and 5 in its prime factorization, hence it's a repeated decimal; D. $$\frac{25}{144}=\frac{25}{2^4*3^2}$$ --> the denominator has primes other than 2 and 5 in its prime factorization, hence it's a repeated decimal. E. $$\frac{39}{128}=\frac{39}{2^7}$$: the denominator has only the prime factor 2 in its prime factorization, hence this fraction will be a terminating decimal. All the other fractions' denominators have primes other than 2 and 5 in their prime factorizations, hence they WILL BE repeated decimals. Hope it's clear. _________________ GMAT Club Legend Joined: 09 Sep 2013 Posts: 8153 Followers: 416 Kudos [?]: 110 [0], given: 0 Re: Which of the following fractions has a decimal equivalent [#permalink]  05 Dec 2013, 10:59 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out?
Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Manager Joined: 25 Mar 2013 Posts: 95 Location: India Concentration: Entrepreneurship, Marketing GPA: 3.5 Followers: 1 Kudos [?]: 16 [0], given: 51 Re: Which of the following fractions has a decimal equivalent [#permalink]  10 Dec 2013, 18:23 First thought: Terminating and non terminating My concept after reading the question: Terminating – non repeating, Non terminating – Repeating numbers after decimal My Strategy: 1. All numbers are squares or cubes 2. Simplify these 3. Then divide I have learned – Denominators having 2^m5^n are terminating numbers Magoosh GMAT Instructor Joined: 28 Dec 2011 Posts: 2787 Followers: 940 Kudos [?]: 3961 [0], given: 44 Re: Which of the following fractions has a decimal equivalent [#permalink]  11 Dec 2013, 10:40 Expert's post kanusha wrote: First thought: Terminating and non terminating My concept after reading the question: Terminating – non repeating, Non terminating – Repeating numbers after decimal My Strategy: 1. All numbers are squares or cubes 2. Simplify these 3. Then divide I have learned – Denominators having 2^m5^n are terminating numbers Dear kanusha, I am responding to your private message. First of all, you may find this blog helpful. http://magoosh.com/gmat/2012/gmat-math- ... -decimals/ What you say in the last line is correct and is key to understanding this problem. My only caution would be: use proper mathematical grouping symbols. You are not thinking like a mathematician when you write 2^m5^n That is precisely the way it is written by someone who isn't thinking carefully about the mathematical symbols. What you meant is: (2^m)(5^n) Those parentheses are not garnish, not extra decorative elements --- they are absolutely essential pieces of mathematical equipment, and you are setting yourself up for mistakes if you casually ignore their tremendous importance.
See: http://magoosh.com/gmat/2013/gmat-quant ... g-symbols/ In this problem, most of the denominators happen to be squares or other powers. 189 = 3^3 * 7 196 = 14^2 225 = 15^2 144 = 12^2 128 = 2^7 I think it's good to know the perfect squares up to 20^2 = 400. It's also good to know the first eight powers of 2. It just saves time, and helps to deepen number sense. Nevertheless, the fact that most of these are squares is not particularly relevant. All you have to do is find the prime factorization of the denominator. As soon as you find a prime factor other than 2 or 5, then you know the decimal would be repeating & non-terminating. If the denominator is an odd number not ending in 5, then it can't be divisible by 2 or 5: it must have other prime factors and must lead to a repeating & non-terminating decimal. If the denominator is divisible by 3, a very easy check, then it leads to a repeating & non-terminating decimal. The easy way to handle these, even without knowing they are perfect squares ---- (A) 189 --- an odd number, so not divisible by 2, and clearly not divisible by 5, so it must have other prime factors. No good. (B) 196 --- divide by 2 = 98 --- divide by 2 = 49 --- other odd factors. No good. (C) 225 --- 2 + 2 + 5 = 9, which is divisible by 3, so that means 225 is divisible by 3. No good. (D) 144 --- 1 + 4 + 4 = 9, which is divisible by 3, so that means 144 is divisible by 3. No good. (E) only one left Does all this make sense? Mike _________________ Mike McGarry Magoosh Test Prep Magoosh GMAT Instructor Joined: 28 Dec 2011 Posts: 2787 Followers: 940 Kudos [?]: 3961 [0], given: 44 Re: Terminating decimal [#permalink]  27 Feb 2014, 11:44 Expert's post uwengdori wrote: Which of the following has a decimal equivalent that is a terminating decimal? 10/189 15/196 16/225 25/144 39/128 To be honest, I don't even understand what the question is asking for. Help is appreciated. Dear uwengdori, I'm happy to respond.
You may find some help in the other posts in this merged thread, but I will be happy to explain it as well. First of all, I would highly suggest reading this post, which will clarify a great deal: http://magoosh.com/gmat/2012/gmat-math- ... -decimals/ So, as that blog explains, if the denominator of a fraction has no prime factors other than 2 and 5, the fraction will terminate instead of repeat. The next step is to recognize that 128 is a power of 2. It's highly worthwhile to have the first ten powers of 2 memorized: 2^1 = 2 2^2 = 4 2^3 = 8 2^4 = 16 2^5 = 32 2^6 = 64 2^7 = 128 2^8 = 256 2^9 = 512 2^10 = 1024 Since 128 is a power of 2, it has only factors of 2, no other prime factors. This means, any fraction with 128 in the denominator will be a terminating decimal. If you have any questions after you read that blog post, please let me know. Mike _________________ Mike McGarry Magoosh Test Prep Senior Manager Joined: 17 Sep 2013 Posts: 389 Concentration: Strategy, General Management GMAT 1: 690 Q48 V37 GMAT 2: 730 Q51 V38 WE: Analyst (Consulting) Followers: 16 Kudos [?]: 186 [0], given: 136 Re: Which of the following fractions has a decimal equivalent [#permalink]  06 Jul 2014, 21:12 I look for terminating decimals..I look for even numbers... Anything halved is always terminating ...E is all 2's...So E wins _________________ Appreciate the efforts...KUDOS for all Don't let an extra chromosome get you down..
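The 2-and-5 rule used throughout this thread is mechanical enough to script. A small sketch (mine, not from the thread) that reduces each fraction and then strips factors of 2 and 5 from the denominator:

```python
from fractions import Fraction

def is_terminating(num, den):
    # a reduced fraction terminates iff its denominator is of the form 2^n * 5^m
    d = Fraction(num, den).denominator   # reduce to lowest terms first
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

choices = {"A": (10, 189), "B": (15, 196), "C": (16, 225),
           "D": (25, 144), "E": (39, 128)}
print([k for k, (n, d) in choices.items() if is_terminating(n, d)])  # ['E']
```

Only E survives, since 128 = 2^7 while every other denominator keeps a factor of 3 or 7 after reduction.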
{}
# Under what conditions is maintaining two accounts or generally faking two different users by a single person acceptable on PhysicsOverflow?

+ 1 like - 0 dislike 474 views

Under what circumstances is it acceptable to maintain two different accounts or, more generally, to play two different personalities as a single user on PhysicsOverflow? There might be legitimate good reasons for using two accounts sometimes. But what do we think about "sockpuppeting" in its different variants? What forms of use of two accounts, or of playing two different personalities by a single user, should be acceptable and legitimate on PhysicsOverflow? A criterion often used by other online communities to decide what is legitimate and what is a deal breaker is whether and how the two different accounts/personalities interact ...

+ 2 like - 0 dislike

The below is my opinion on the matter, not an official policy: • If the two accounts interact (this includes voting on the same posts), then all votes between the users should be reversed, and the accounts merged. The user should be warned against doing so in future. This includes if one of the accounts is an anonymous (ip-only) account. • If the account was created to circumvent a ban (for whatever reason, including spam, gibberish, etc.), the account should be treated as a separate user. • If the accounts are completely non-interacting, the user should just be asked if he wants the accounts merged (they may have forgotten the password etc.). If not, they should be treated as separate users. answered Jan 18, 2015 by (1,985 points) edited Feb 9, 2015

The penalty for sockpuppeting should be harsher, but the standard of proof must be very strong. A warning is not enough of a deterrent, in my opinion.
{}
# zbMATH — the first resource for mathematics

Sets of solution-set-invariant coefficient matrices of simple fuzzy relation equations. (English) Zbl 0649.04003

Let x, b be two fuzzy sets and A, A’ be two fuzzy relations such that the finite fuzzy equations $$x\circ A=b$$ and $$x\circ A'=b$$ hold, where “$$\circ$$” denotes the max-min composition, A, A’ are assigned and x is unknown. If the membership values of b are ordered in strictly decreasing sense, then the above equations are called simple. Let X and X’, respectively, be the sets of the solutions of the above equations. Using results of Wang Peizhuang, S. Sessa, the reviewer, and W. Pedrycz [Busefal 18, 67-74 (1984; Zbl 0581.04001)], the authors give an iff condition in order to have $$X=X'$$. Further results on the number of the distinct elements of X are established. Reviewer: A. Di Nola

##### MSC: 03E99 Set theory 03B52 Fuzzy logic; logic of vagueness 94D05 Fuzzy sets and logic (in connection with information, communication, or circuits theory) Full Text: ##### References: [1] Cheng-zhong, Lo, Reachable equation set of a fuzzy relation equation, J. math. anal.
appl., 103, 524-532, (1984) · Zbl 0588.04005 [2] Czogala, E.; Drewniak, J.; Pedrycz, W., Fuzzy relation equations on a finite set, Fuzzy sets and systems, 7, 89-101, (1982) · Zbl 0483.04001 [3] di Nola, A.; Sessa, S., On measures of fuzziness of solutions of composite fuzzy relation equations, (), 277-281 · Zbl 0566.04003 [4] Gottwald, S., On the existence of solutions of systems of fuzzy equations, Fuzzy sets and systems, 12, 301-302, (1984) · Zbl 0556.04002 [5] Higashi, M.; Klir, G.J., Resolution of finite fuzzy relation equations, Fuzzy sets and systems, 13, 65-82, (1984) · Zbl 0553.04006 [6] Lettieri, A.; Liguori, F., Characterization of some fuzzy relation equations provided with one solution on a finite set, Fuzzy sets and systems, 13, 83-94, (1984) · Zbl 0553.04004 [7] Pappis, C.P.; Sugeno, M., Fuzzy relational equations and the inverse problem, Fuzzy sets and systems, 15, 79-90, (1985) · Zbl 0561.04003 [8] Pei-zhuang, Wang; Yuan, Meng, Relation equations and relation inequalities, (), 20-31 [9] Pei-zhuang, Wang; Sessa, S.; di Nola, A.; Pedrycz, W., How many lower solutions does a fuzzy relation equation have?, Busefal, 18, 67-74, (1984) · Zbl 0581.04001 [10] Sanchez, E., Resolution of composite fuzzy relation equations, Inform. and control, 30, 38-48, (1976) · Zbl 0326.02048 [11] Sanchez, E., Solutions in composite fuzzy relation equations: applications to medical diagnosis in Brouwerian logic, (), 221-234 [12] Sanchez, E., Solution of fuzzy equations with extended operations, Fuzzy sets and systems, 12, 237-248, (1984) · Zbl 0556.04001 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
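For readers unfamiliar with the max-min composition $$x\circ A=b$$ used in the review, here is a minimal illustration (my own sketch, with made-up membership values chosen so that b is strictly decreasing, i.e. the equation is "simple" in the sense above):

```python
def max_min_compose(x, A):
    # b_j = max_i min(x_i, A[i][j]) -- the max-min composition x ∘ A
    n, m = len(A), len(A[0])
    return [max(min(x[i], A[i][j]) for i in range(n)) for j in range(m)]

x = [0.7, 0.4]                  # fuzzy set: membership vector
A = [[0.9, 0.3],                # fuzzy relation: membership matrix
     [0.8, 0.2]]
print(max_min_compose(x, A))    # [0.7, 0.3]
```

Solving the equation means going the other way: given A and b, recover the set X of all x with x ∘ A = b; the review concerns when two relations A, A' yield the same solution set.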
{}
# What polynomials biject from $\mathbb{N}^{2}$ to $\mathbb{N}$? Perhaps there are none with integral coefficients; so let us admit rational coefficients. The map $(x, y) \mapsto x + \frac{1}{2}(x + y)(x + y + 1)$ is well known, and swapping $x$ and $y$ in the formula yields another, so we have two for starters. - There have been a number of questions related to this, including one of the highest-voted ones by Bjorn Poonen. You might search through existing questions. – Will Jagy Nov 7 '10 at 15:59 – Will Jagy Nov 7 '10 at 16:03 In particular, the first link tells us that this question is an open problem. – Martin Brandenburg Nov 7 '10 at 16:15 @Martin actually that is only asking about surjectivity when the domain is $\mathbb{Z}\times\mathbb{Z}$, but I agree that it has some bearing here. – David Roberts Nov 8 '10 at 2:49
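The well-known map in the question is the Cantor pairing function. Its bijectivity from $\mathbb{N}^2$ to $\mathbb{N}$ is easy to check empirically on an initial segment (a quick numerical sanity check, not a proof):

```python
def cantor_pair(x, y):
    # (x, y) -> x + (x + y)(x + y + 1)/2 ; the product is even, so // is exact
    return x + (x + y) * (x + y + 1) // 2

N = 50
values = {cantor_pair(x, y) for x in range(N) for y in range(N)}
assert len(values) == N * N                    # injective on the N x N grid
# pairs with x + y <= N - 1 hit every value in 0 .. N(N+1)/2 - 1 exactly once
assert set(range(N * (N + 1) // 2)) <= values  # surjective onto an initial segment
```

The map walks the anti-diagonals x + y = 0, 1, 2, ... in order, which is why each diagonal fills the next contiguous block of integers.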
{}
# Gerstenhaber - Serezhkin theorem

Let $\mathbb{F}$ be an arbitrary field. Consider $\mathcal{M}_{n}(\mathbb{F}),$ the vector space of all $n\times n$ matrices over $\mathbb{F}.$ Define

• $\mathcal{N}=\{A\in\mathcal{M}_{n}(\mathbb{F}):\,A\,\,\mbox{is nilpotent}\},$
• $\mathcal{GL}_{n}(\mathbb{F})=\{A\in\mathcal{M}_{n}(\mathbb{F}):\det(A)\neq 0\},$
• $\mathcal{T}=\{A\in\mathcal{M}_{n}(\mathbb{F}):\,A\,\,\mbox{is strictly upper triangular}\}.$

Notice that $\mathcal{T}$ is a linear subspace of $\mathcal{M}_{n}(\mathbb{F}).$ Moreover, $\mathcal{T}\subseteq\mathcal{N}$ and $\dim\mathcal{T}=n(n-1)/2.$ The Gerstenhaber – Serezhkin theorem on linear subspaces contained in the nilpotent cone [G, S] reads as follows.

###### Theorem 1 Let $\mathcal{L}$ be a linear subspace of $\mathcal{M}_{n}(\mathbb{F}).$ Assume that $\mathcal{L}\subseteq\mathcal{N}.$ Then (i) $\dim\mathcal{L}\leq n(n-1)/2,$ (ii) $\dim\mathcal{L}=n(n-1)/2$ if and only if there exists $U\in\mathcal{GL}_{n}(\mathbb{F})$ such that $\{UAU^{-1}:\,A\in\mathcal{L}\}=\mathcal{T}.$

An alternative simple proof of inequality (i) can be found in [M].

## References

• G M. Gerstenhaber, On nilalgebras and linear varieties of nilpotent matrices, I, Amer. J. Math. 80: 614–622 (1958).
• M B. Mathes, M. Omladič, H. Radjavi, Linear Spaces of Nilpotent Matrices, Linear Algebra Appl. 149: 215–225 (1991).
• S V. N. Serezhkin, On linear transformations preserving nilpotency, Vests$\bar{\iota}$ Akad. Navuk BSSR Ser. F$\bar{\iota}$z.-Mat. Navuk 1985, no. 6: 46–50 (Russian).
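Both facts about $\mathcal{T}$ quoted above — that strictly upper triangular matrices are nilpotent and that $\dim\mathcal{T}=n(n-1)/2$ — can be illustrated numerically for small n. A plain-Python sketch (not part of the entry):

```python
def matmul(X, Y):
    # naive n x n matrix product
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 4
A = [[0, 1, 2, 3],          # a strictly upper triangular matrix
     [0, 0, 4, 5],          # (arbitrary entries above the diagonal)
     [0, 0, 0, 6],
     [0, 0, 0, 0]]

P = A
for _ in range(n - 1):
    P = matmul(P, A)        # P = A^n

# every strictly upper triangular n x n matrix satisfies A^n = 0
assert all(v == 0 for row in P for v in row)

# dim T = number of free entries above the diagonal = n(n-1)/2
assert sum(1 for i in range(n) for j in range(n) if j > i) == n * (n - 1) // 2
```

Each multiplication by A pushes the nonzero band one step further above the diagonal, so after n products nothing is left; the theorem says that, up to conjugation, $\mathcal{T}$ is the unique nilpotent subspace of this maximal dimension.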
{}
# Boolean Functions

Mathematicians could not stop pondering George's new Boolean world! They kept coming up with interesting puzzles.

Suppose you have two Boolean variables: A and B. Since each one can take on two possible values, there are four combinations of those variables:

    A B
    0 0
    0 1
    1 0
    1 1

We used this arrangement to show how to build truth tables from George's Algebra. The AND, OR, and XOR tables were shown earlier.

These mathematician folks wondered if there were any other interesting tables they could form. To find out, they noted that a truth table produces four output values, one per row. That makes sense if we define a function as an operation that maps two input variables into one output value. Each row in the truth table tells us how this particular function works.

These functions are not like others you are used to, like sqrt. These functions are digital in nature: they take in discrete digital values (0 or 1) for each input, and return a single digital value (again 0 or 1).

Since each of the four rows can independently output 0 or 1, there must be a total of 2^4 = 16 different functions we could define using this truth table scheme. Let's see what they are:

    A B | f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15
    0 0 |  0  0  0  0  0  0  0  0  1  1   1   1   1   1   1   1
    0 1 |  0  0  0  0  1  1  1  1  0  0   0   0   1   1   1   1
    1 0 |  0  0  1  1  0  0  1  1  0  0   1   1   0   0   1   1
    1 1 |  0  1  0  1  0  1  0  1  0  1   0   1   0   1   0   1

Warning

Each column in this table is a unique truth table for one function.

## What Are These Functions

Here they are:

- f0 = ZERO
- f1 = AND
- f2 =
- f3 = A
- f4 =
- f5 = B
- f6 = XOR
- f7 = OR
- f8 =
- f9 =
- f10 =
- f11 =
- f12 = NOT A
- f13 =
- f14 =
- f15 = ONE

You should fill in the missing entries as an exercise.

## Why is This Interesting?

We are going to model a real computer. We will build this machine out of simple components. Those components take in a certain number of input signals, each a Boolean variable. They will output one or more output values, each with a value of 0 or 1!
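Before moving on: the 16-column table above can be generated and spot-checked with a few lines of code (a sketch of mine, not part of the original text). Reading the table, the row (0,0) output of f_k is the most significant of the four bits of k:

```python
def f(k, a, b):
    # output of f_k on inputs (a, b); rows (0,0), (0,1), (1,0), (1,1)
    # read bits 3, 2, 1, 0 of k, matching the table above
    row = 2 * a + b
    return (k >> (3 - row)) & 1

# spot-check the named columns against ordinary Python operators
for a in (0, 1):
    for b in (0, 1):
        assert f(0, a, b) == 0           # f0  = ZERO
        assert f(1, a, b) == (a & b)     # f1  = AND
        assert f(3, a, b) == a           # f3  = A
        assert f(5, a, b) == b           # f5  = B
        assert f(6, a, b) == (a ^ b)     # f6  = XOR
        assert f(7, a, b) == (a | b)     # f7  = OR
        assert f(15, a, b) == 1          # f15 = ONE
```

The same loop is a handy way to identify the unnamed columns when doing the exercise.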
We can model what they do inside using a simple table that lists all possible output values. We look at the inputs, then simply look up the desired output values and return them. The table is just a tiny array of numbers indexed by those input variables! Cool!

This is exactly how new gadgets called Field Programmable Gate Arrays are programmed. Basically, these little machines have a slew of Look Up Tables that you can “program” to create some digital thing. You can then stitch together all of your small widgets to form a bigger one. Some of these FPGAs have millions of these basic building blocks, and you can program them to become a real computer we can fire up and run.

Chip designers use these things to test out a design. When it is working properly, they send the code they wrote to program the FPGA to a simple conversion program called a “synthesizer” that churns out a real computer chip that they can fabricate. Hardware design has morphed into a software design problem! Wow!

Here is one of my FPGA boards:
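The "tiny array indexed by the inputs" idea maps directly to code. Here is a hypothetical 2-input LUT, programmed the way an FPGA tool would fill one in (a sketch, not tied to any vendor's toolchain):

```python
def make_lut2(truth):
    # "program" a 2-input lookup table:
    # truth[row] holds the output for rows (0,0), (0,1), (1,0), (1,1)
    return lambda a, b: truth[2 * a + b]

xor_gate  = make_lut2([0, 1, 1, 0])   # the XOR column from the table
and_gate  = make_lut2([0, 0, 0, 1])   # the AND column
nand_gate = make_lut2([1, 1, 1, 0])

# stitch small widgets into a bigger one: a half adder from two LUTs
half_adder = lambda a, b: (xor_gate(a, b), and_gate(a, b))
print(half_adder(1, 1))   # (0, 1): sum = 0, carry = 1
```

No logic is "computed" at all: each gate is just an indexed read from its 4-entry table, which is exactly what the FPGA fabric does in hardware.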
{}
# GlucoBerry: Review the Supplement Facts Full Disclosure GlucoBerry is a dietary supplement that's designed to support healthy blood sugar regulation. This applies to those that are pre-diabetic or suffering from diabetes. MD/Process GlucoBerry is an all-natural supplement, which means that it was formulated with entirely organic ingredients. So that means that this product is basically devoid of all the negative side effects that you can expect with other supplements on the market. ## Secret Behind GlucoBerry's Formula What makes the MD/Process GlucoBerry (BloodSugarBerry.com) so effective in regulating blood sugar is that it's infused with a formula that targets another potential source of blood sugar dysregulation apart from insulin production. ### The Commonly Understood Links Between Insulin and Blood Sugar For quite some time, the medical community has thoroughly studied and understood the link between insulin production and response in the body and its impact on blood sugar. #### Understanding Insulin's Role and Impact on Blood Sugar The body uses insulin to move glucose from the blood into the cells, where it is used for energy. Insulin also helps the body store glucose as glycogen (a type of carbohydrate) for later use. Insulin affects blood sugar levels through mechanisms that work via blood, tissue, or hormone. But if your blood sugar levels are high, it causes the most harm to your blood vessels, which are like your plumbing. If the body does not produce enough insulin, blood sugar will remain too high, leading to increased insulin production and resulting in a condition called hyperinsulinemia. High blood sugar, also called hyperglycemia, and high insulin can lead to serious, life-threatening complications. 
#### Johns Hopkins University's Recent Discovery Concerning Blood Sugar

As explained above, the common understanding in modern science is that the primary and singular means of addressing blood sugar issues in affected individuals is related to insulin in some manner. Thus, almost all treatment solutions for diabetic and pre-diabetic at-risk individuals involve some therapy that impacts insulin in some way.

According to the manufacturers, a study from Johns Hopkins University found something surprising about how your body regulates blood sugar: focusing on insulin is not the one-and-only, magical solution to supporting healthy blood sugar. According to the manufacturers, the study also noted that sometimes obstructions prevent the kidneys' so-called ‘Blood Sugar Drain' from performing its role effectively. As a result, affected individuals are left with symptoms as though they were suffering from insufficient insulin production when the true issue concerns the functionality of their kidneys, an entirely different organ.

The manufacturers go on to state, “This study from Johns University discovered that your Blood Sugar Drain must be running smoothly to maintain balanced blood sugar. But 50% of Americans have too much of a sticky gray protein that clogs up their Blood Sugar Drain.”

From here, it is explained on the main product site for GlucoBerry that while the main role of insulin is to remove sugar from the blood, this does not make it the only means of treating excess sugar in the blood. In fact, in the process of effectively performing its role of removing excess blood sugar, insulin can ironically create conditions in the body that ultimately impair its ability to entirely remove said excess sugar. This is because insulin removes excess sugar by depositing it in the kidneys; from there, it is the kidneys' responsibility to completely excrete excess sugar from the bloodstream and body.
The manufacturers go on to describe the findings of the referenced Johns Hopkins University study by explaining that the kidneys possess a mechanism they call a 'Blood Sugar Drain,' designed specifically to assist with the removal or drainage of the excess sugar deposited in the kidneys by insulin.

### What is a 'Blood Sugar Drain'?

The term is a simplification for readers who are not familiar with complex medical terminology. However, for the sake of comprehensiveness, we will take a direct look at the study the manufacturers reference on their site to better understand what this drain is and its role in the kidneys.

#### Diabetic Nephropathy

According to Johns Hopkins, "Nephropathy is the deterioration of kidney function. The final stage of nephropathy is called kidney failure, end-stage renal disease, or ESRD." They go on to state that, "According to the CDC, diabetes is the most common cause of end-stage renal disease" and that, "In 2011, about 26 million people in the U.S. were reported to have diabetes, and more than 200,000 people with ESRD due to diabetes were either on chronic renal dialysis or had a kidney transplant. Both type 1 and type 2 diabetes can lead to diabetic nephropathy, although type 1 is more likely to lead to ESRD."

The most relevant parts of the study are, of course, the observations made about increased blood sugar, the presence of diabetic conditions, and impaired kidney function. They note that high levels of sugar in the blood damage the kidneys and that "Most of this damage is directed toward the blood vessels that filter the blood to make urine." Those blood vessels give the kidneys their ability to purge excess sugar from the blood.
So, while it is true that the chief mechanism for removing excess sugar from the blood is insulin, the effectiveness of that insulin is limited, or negated entirely, by impaired kidney function. Thus, if someone with type 1 or type 2 diabetes also suffers from any of the kidney impairments discussed so far, blood sugar treatments that focus exclusively on insulin levels will have reduced effectiveness.

#### National Kidney Foundation Has Made Similar Observations

It appears that the entire medical community is coming to the realization that excess sugar in the body can have a deteriorative effect on kidney function, which itself leads to further medical problems in affected individuals. The National Kidney Foundation states on its main site that, "Sugar is not a problem for the kidneys unless the blood sugar level gets too high. This commonly occurs in both Type 1 and Type 2 diabetes. Once the blood sugar level gets higher than 180 mg/dl, the kidneys start to spill sugar into the urine." They also go on to state that, "The higher the blood sugar, the more sugar comes out in the urine" and that, "If your kidneys are normal, this usually isn't a problem, but if you have diabetes, too much sugar can cause kidney damage."

## How MD Process GlucoBerry Saves the Day

After these findings were published, the makers of GlucoBerry opted to formulate a supplement designed to target the removal of excess blood sugar by improving kidney functionality. This is what makes GlucoBerry a uniquely different supplement compared to the alternatives currently on the market.
As noted prior, MD Process was able to do so by formulating their supplement with all-natural, organic ingredients that are rich in vitamins and nutrients and whose claimed efficacy in improving kidney health and function is backed by clinical research. In the next section, we identify these specific ingredients and explore how each plays a critical role in facilitating improved kidney function.

## GlucoBerry's Ingredient Make-Up

As noted before, MD Process GlucoBerry is formulated with all-natural ingredients said to facilitate improved kidney function and health. The primary natural ingredients in GlucoBerry are:

• Maqui Berry Extract
• Chromium
• Biotin
• Gymnema Leaf

In the following sections, we take an in-depth look at each ingredient individually to see how GlucoBerry's claimed efficacy is grounded in published research, unlike many alternatives on the market claiming to fulfill the same purpose.

### Maqui Berry Extract

Per MD Process, "Maqui Berry grows naturally in the rainforests of Chile and Argentina. Very little is professionally grown." From that detail alone, we know this berry is relatively rare and likely difficult to source.

#### Manufacturer's Explanation for Including Maqui Berry Extract in Their Supplement

Fortunately, this did not prevent the manufacturers from obtaining the berry. Better yet, according to the manufacturers, "After meals high in carbs or sugar, people who take Maqui Berry extract have lower blood sugar spikes" and, "Taking Maqui Berry extract every day improved long-term blood sugar markers by 23%." Beyond these claims, Maqui Berry brings a number of additional benefits as well.
#### General Health Benefits of Maqui Berry

Maqui Berry provides significant health benefits. It has been shown to be highly effective in helping regulate and maintain healthy levels of blood sugar. Additionally, studies indicate that it has strong antibacterial, antifungal, and antimicrobial effects, especially when applied externally. The berries contain compounds known as phenylethanol glucosides, which are associated with a high antioxidant level. These effects can be used to fight inflammation, arthritis, diabetes, and even acne.

#### Nutrients and Ingredients Maqui Berry Possesses

1. Flavonoids – flavonoids are known to have antioxidant and anti-inflammatory properties. Flavonoids found in maqui berries have been shown to improve cardiovascular health, reduce inflammation, and reduce the risk of heart disease.
2. Beta-carotene – beta-carotene is one of the many carotenoids found in maqui berries. Beta-carotene is a red-orange pigment with anti-cancer properties and is known to lower blood pressure, lower cholesterol, and reduce cancer risk.
3. Anthocyanins – anthocyanins are pigments that give a blue-violet color to maqui berries. The anthocyanins present in maqui berries have been shown to improve cognition and brain health.
4. Sterols – sterols have powerful antioxidant properties and have been shown to help reduce inflammation. Sterols present in maqui berries are not only anti-inflammatory but also anti-carcinogenic, anti-allergenic, and anti-viral.
5. Essential Amino Acids – maqui berry contains all nine essential amino acids.
6. Vitamins – maqui berry contains high levels of Vitamin A and the B vitamins (including folate), as well as Vitamin C and Vitamin K.

Maqui Berry is also known for having a high polyphenol content, which is believed to be anti-carcinogenic.

### Chromium

Since its discovery, Chromium has been used mostly as a dietary supplement and as a component in pharmaceuticals.
In recent years, however, the medical community has begun to recognize significant health benefits of Chromium. Chromium is an essential mineral involved in the formation of the connective tissue matrix, and it also plays a part in insulin metabolism and glucose tolerance. The trace mineral participates in energy metabolism, which results in a reduced craving for carbohydrates.

#### Chromium's Role in Regulating Blood Sugar and Treating Diabetes and Obesity

In addition, chromium is important in the absorption of iron from the intestines and helps maintain the normal activity of other minerals such as zinc, magnesium, and selenium. Chromium also helps the body regulate blood sugar levels and is involved in the action of insulin and other hormones that lower blood sugar. Chromium is needed for normal blood sugar and insulin metabolism.

One of the main reasons Chromium has become so popular in the health community is its reported role in battling obesity and type 2 diabetes. According to recent research, Chromium inhibits the enzyme lipase, which breaks down dietary fats. This suggests that Chromium may help regulate blood sugar and promote weight loss. In addition to its role in blood sugar regulation, Chromium has also been shown to improve insulin sensitivity, so supplementation may help diabetics better manage their disease.

#### Additional Health Benefits Conferred by Chromium

One of the most promising claimed benefits of Chromium is its role in serotonin production. Serotonin is a naturally occurring neurotransmitter that helps regulate mood, appetite, sleep, and bowel movements. The enzyme tryptophan hydroxylase catalyzes the formation of serotonin from tryptophan. Proponents therefore suggest that taking Chromium alongside a diet rich in B vitamins and minerals can significantly boost serotonin levels in the brain.
Chromium has also been reported to have a significant effect on the body's nervous system. This is particularly interesting given how prevalent neurological disorders such as Alzheimer's disease, Parkinson's disease, and Huntington's disease have become in our society. Chromium is claimed to help reduce the symptoms of these neurological disorders; in Parkinson's disease, for example, chromium is said to help relieve tremors and muscle rigidity. In addition, animal studies have suggested that high doses of chromium can increase dopamine levels in the brain. Dopamine is a neurotransmitter that naturally encourages people to seek pleasure and engage in reward-based activities. Higher levels of dopamine can make one more susceptible to addictive behaviors, and too much dopamine can cause serious problems of its own. Some believe that Chromium's effect on dopamine makes it a potentially powerful tool for reducing dependence on pharmaceutical interventions. The manufacturers of GlucoBerry have evidently come to the same conclusion, which explains their inclusion of this ingredient in their product.

### Biotin

Studies suggest that biotin aids in weight loss, helps lower blood pressure and high cholesterol, and assists the body in using protein. In addition, biotin supports the body's energy output and helps to prevent hair loss, and it has been studied as an aid in the treatment of various eye diseases.

Biotin is one of the B group of water-soluble vitamins, which are essential to the body's energy production, protein synthesis, cell growth, and tissue maintenance. Biotin is a necessary coenzyme for several enzymes involved in fatty acid synthesis, glucose oxidation, and the metabolism of proteins, carbohydrates, and fats.
In addition, biotin is required for the utilization of fats, and it facilitates the absorption of the fat-soluble vitamins A, D, E, and K. Biotin itself is water-soluble and is absorbed through the intestines. It is a member of the B-complex vitamins (vitamin B7) and is necessary for the human body to function normally, including the healthy development of certain cells and tissues. Without enough biotin, the body cannot properly use vitamin A, which means you will not be able to function at your fullest potential. Biotin is also essential for the synthesis and maintenance of healthy skin. Due to its many important functions in the human body, biotin is sometimes described as a "functional vitamin."

#### Significant Health Benefits Associated with Biotin

Aside from being necessary for basic bodily function, biotin is claimed to have significant health benefits. First, it has been reported to play a role in the immune system's response to infections: in one study, rats given biotin before being infected with a virus showed significantly fewer signs of infection, and their immune systems were strengthened. Biotin also has antioxidant properties and can help protect the body from damage caused by free radicals. In addition, some research suggests biotin may help prevent certain types of cancer, though such uses have not been established in clinical practice. Although biotin is commonly found in animal products like eggs, milk, and cheese, you can also obtain it from a well-balanced diet or from supplements.

#### How Biotin Helps Regulate and Maintain Healthy Levels of Blood Sugar

Another important area of research on biotin is its ability to help regulate and maintain healthy levels of blood sugar.
It has been shown to play a crucial role in the body's glucose metabolism, specifically in helping to metabolize carbohydrates. Since diabetes mellitus is a major health issue affecting a large portion of the population, this research is important. However, it is important to note that biotin by itself will not cure diabetes; it can only help to maintain healthy blood sugar levels.

In one study, rats fed a carbohydrate-rich diet (the type of diet that many people with diabetes eat) had poorer glucose tolerance than rats given a carbohydrate-free diet. The rats fed the carbohydrate-rich diet also had lower insulin levels. In this study, biotin appeared to restore the rats' insulin sensitivity, helping their bodies use insulin more effectively to convert glucose into energy.

### Gymnema Leaf

Gymnema Leaf (Gymnema sylvestre), a climbing plant of the Apocynaceae family, is native to India. It has been used traditionally to treat numerous ailments, ranging from fever to diabetes, and modern scientific studies suggest that this natural remedy has significant potential in the treatment of diabetes and hyperglycemia (abnormal elevation of blood sugar levels). It has been known to lower blood sugar, which can be helpful for those suffering from diabetes, and it is also commonly used to reduce fevers, which may be caused by a number of factors, including infection or malignancy.

Some sources compare Gymnema Leaf's action to that of prescription drugs used for type 2 diabetes and obesity, which can be highly effective but may carry severe side effects of their own, including neurotoxicity and potential for abuse.
Traditional forms of treatment that include Gymnema Leaf may provide significant benefits without the negative side effects of conventional medication. Although the scientific evidence is promising, the exact mechanism by which Gymnema Leaf lowers blood sugar is not yet fully known. It is thought to work in a fashion broadly similar to the prescription drug acarbose, which treats type 2 diabetes by slowing the digestion and absorption of carbohydrates in the intestine; Gymnema Leaf may likewise reduce intestinal absorption of carbohydrates, by about 50%, thereby lowering blood sugar levels. Additionally, Gymnema Leaf may help increase insulin sensitivity, which again would lower blood sugar levels, and it may decrease the activity of glucose-producing enzymes in the liver.

### How to Buy MD/Process GlucoBerry

Currently, prospective customers who visit the official site and purchase the 1-bottle, 3-bottle, or 6-bottle offering receive a hefty discount:

1. 30-day supply (1 bottle) – $70 discount
2. 90-day supply (3 bottles) – $240 discount
3. 180-day supply (6 bottles) – $540 discount

Yes, you read those discount totals correctly: customers are offered $540 off retail on any purchase of the 180-day supply bundle from the manufacturers. If you want to learn more about MD/Process GlucoBerry, you can visit the official website at BloodSugarBerry.com.
# HOW TO CALCULATE THE FUTURE VALUE OF A LUMP SUM WITH COMPOUND INTEREST

Compound interest is the interest earned on the initial investment plus all the interest that has accumulated over time. It can also be defined as interest earned on the sum of the principal and the accumulated interest. The idea behind compound interest is that in the second year you earn interest on the interest you earned the previous year. To put it another way, the interest you earn in the first year is combined with the principal, and you earn interest on the combined sum in the second year.

To calculate compound interest, you must understand two concepts: future value and present value.

The future value of a lump sum is the value of the original sum of money at a future point in time at a given rate of interest. In other words, it is the cash value of an investment at some time in the future: the amount an investment will grow to over some period given a particular interest rate. When calculating a future value (FV), you are calculating how much a given amount of money today will be worth at some time in the future; this is called compounding.

The present value is the present worth of a sum of money, that is, the initial investment or deposit. Present value is sometimes known as the principal. When calculating a present value, you are calculating how much a given amount of money in the future is worth now; this is called discounting.

For our topic today, we will be focusing on the future value of compound interest. The present value of compound interest will be discussed in a subsequent post. Forthwith, let's get started.

It's easy to calculate the future value of a sum of money that compounds annually via:

$$FV=PV(1+i)^n$$

Where
FV is the future value
PV is the present value or principal
n denotes the number of years
i is the interest rate expressed as a decimal.
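The annual-compounding formula translates directly into code. Here is a minimal sketch in Python; the function name and the rounding are my own choices, not part of the formula:

```python
def future_value(pv, i, n):
    """Future value of a lump sum compounded annually: FV = PV * (1 + i)**n.

    pv: present value (principal); i: annual rate as a decimal; n: years.
    """
    return pv * (1 + i) ** n

# N7000 at 9% per year for 5 years:
print(round(future_value(7000, 0.09, 5), 2))  # → 10770.37
```

The same function reproduces the worked examples that follow.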
#### Example 1

How much money do you have after investing N7000 for 5 years in a savings account that earns 9% compound interest per year?

Solution: This question asks for the future value under compound interest, so we employ the formula:

$$FV=PV(1+i)^n$$

Here PV=7000, i=9%, which is the same as 0.09, and n=5

$FV=7000(1+0.09)^5$
$FV=7000(1.09)^5$
$FV=7000(1.538624)$
$FV=N10,770.368$

Hence, the future value of the N7000 at 9% per annum is N10,770.368.

#### Example 2

If we place N5000 in a savings account that yields 4.5% compounded annually, what would be the future value of the investment after 7 years?

Solution: Here, PV=N5000, i=4.5% or 0.045, and n=7

$FV=5000(1+0.045)^7$
$FV=5000(1.045)^7$
$FV=5000(1.36086)$
$FV=N6804.309$

Until now, we have assumed that interest compounds annually. However, in real-life situations, interest may compound monthly, quarterly, weekly, or even daily. This takes us to the next heading.

## How To Calculate The Future Value Of A Compound Interest Payable Intra-Yearly

Generally, the formula for the future value of compound interest is:

$$FV=PV\left(1+\frac{i}{m}\right)^{n ×m}$$

Where
PV is the present value
i is the interest rate expressed as a decimal
m is the number of times compounding occurs in a year
n is the number of years.

1. Therefore, for an interest that compounds annually, the formula is computed as:

$$FV=PV\left(1+\frac{i}{1}\right)^{n × 1}$$

$$FV=PV(1+i)^n$$

This is where the formula we used earlier to solve yearly compound interest came from.

2. For an interest that compounds quarterly, the formula is

$$FV=PV\left(1+\frac{i}{4}\right)^{n×4}$$

$$FV=PV\left(1+\frac{i}{4}\right)^{4n}$$

Note: m is four because there are four quarters in a year.

3. For an interest that compounds monthly, we apply the formula

$$FV=PV\left(1+\frac{i}{12}\right)^{n ×12}$$

$$FV=PV\left(1+\frac{i}{12}\right)^{12n}$$

Note: m is 12 because there are 12 months in a year.

4.
For an interest that compounds weekly, we apply the formula

$$FV=PV\left(1+\frac{i}{52}\right)^{n × 52}$$

$$FV=PV\left(1+\frac{i}{52}\right)^{52n}$$

Note: m is 52 because there are 52 weeks in a year, so compounding occurs 52 times a year.

5. For an interest that compounds daily, we apply the formula

$$FV=PV\left(1+\frac{i}{365}\right)^{n × 365}$$

$$FV=PV\left(1+\frac{i}{365}\right)^{365n}$$

Note: m is 365 because a year is equivalent to 365 days.

6. For an interest that compounds semi-annually, it is obtained via

$$FV=PV\left(1+\frac{i}{2}\right)^{n × 2}$$

$$FV=PV\left(1+\frac{i}{2}\right)^{2n}$$

Note: semi-annually means interest compounds every six months.

To better appreciate these formulas, let's take some examples.

#### Example 3

A principal of N8000 is invested for three years at a rate of 12%. If the interest is compounded monthly, calculate the future value.

Solution: Recall that the future value of monthly compound interest is:

$$FV=PV\left(1+\frac{i}{12}\right)^{12n}$$

Here, PV=N8000, i=0.12, and n=3

$FV=8000(1+\frac{0.12}{12})^{12(3)}$
$FV=8000(1+0.01)^{36}$
$FV=8000(1.01)^{36}$
$FV=N11,446.15$

#### Example 4

A principal of N8000 is invested at 12% interest for 3 years. Determine the future value if the interest is compounded semi-annually.

Solution: Recall that

$FV=PV\left(1+\frac{i}{2}\right)^{2n}$
$FV=8000(1+\frac{0.12}{2})^{2(3)}$
$FV=8000(1.06)^6$
$FV=N11,348.15$

#### Example 5

A principal of N8000 is invested at 12% interest for 3 years. Determine the future value if the interest is compounded daily.

Solution:

$$FV=PV(1+\frac{i}{365})^{365n}$$

$FV=8000(1+\frac{0.12}{365})^{365(3)}$
$FV=8000(1+0.000328767)^{1095}$
$FV=8000(1.000328767)^{1095}$
$FV=8000(1.4332444)$
$FV=N11,465.9552$

So far, we've looked at the future value of compound interest with definite compounding periods such as yearly, semi-annually, quarterly, and monthly. Sometimes, however, interest compounds continuously. This is called continuous compound interest.
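Since the intra-year formulas differ only in m, one function covers them all. A minimal sketch (the names are my own), checked against the N8000-at-12%-for-3-years examples:

```python
def future_value(pv, i, n, m=1):
    """FV = PV * (1 + i/m)**(n*m): future value with m compounding
    periods per year (m=12 monthly, m=2 semi-annually, m=365 daily)."""
    return pv * (1 + i / m) ** (n * m)

# N8000 at 12% for 3 years under different compounding frequencies:
print(round(future_value(8000, 0.12, 3, m=12), 2))   # monthly → 11446.15
print(round(future_value(8000, 0.12, 3, m=2), 2))    # semi-annually → 11348.15
print(round(future_value(8000, 0.12, 3, m=365), 2))  # daily
```

Note that for a fixed nominal rate, the future value grows as m increases, which motivates the continuous case below.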
More precisely, continuous compound interest is the limiting case in which the number of compounding periods per year (m) grows without bound. The formula for calculating the future value of continuous compound interest is as follows:

$$FV=PV(e^{in})$$

Where
FV is the future value
PV is the present value
e is Euler's number, which is approximately 2.71828
n is the number of years
i is the interest rate expressed as a decimal.

#### Example 6

Suppose that N10,000 is deposited at 4% compounded continuously. Find the compound amount after 7 years.

Solution: Recall that

$$FV=PV(e^{in})$$

$FV=10,000(e^{0.04 ×7})$
$FV=10,000(e^{0.28})$
$FV=10,000(1.32313)$
$FV=N13,231.3$

If you have difficult questions on the future value of lump sums, you can use our calculator to solve them.
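Continuous compounding can be sketched the same way, using math.exp for e^(in); the function name is my own:

```python
import math

def future_value_continuous(pv, i, n):
    """FV = PV * e**(i*n): future value under continuous compounding."""
    return pv * math.exp(i * n)

# N10,000 at 4% compounded continuously for 7 years:
print(round(future_value_continuous(10000, 0.04, 7), 2))  # → 13231.3
```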
# Calibration to well tops Series Investigations in Geophysics Öz Yilmaz http://dx.doi.org/10.1190/1.9781560801580 ISBN 978-1-56080-094-1 SEG Online Store The depth structure maps derived from time-to-depth conversion or layer-by-layer inversion invariably will not match the well tops. The sources of discrepancy between the estimated reflector depths and the well tops include limitations in the methods for interval velocity estimation, mispicking of time horizons input to depth conversion, and limitations in the actual depth conversion itself within the context of ray tracing through an earth model that includes complex layer boundaries. For the depth structure maps to be usable in subsequent reservoir modeling and simulation, it is imperative to calibrate them to well tops. Consider a seismically derived depth structure map zs(x, y) based on time-to-depth conversion, say, for the layer boundary associated with the top-reservoir. Also consider Nw well tops zw (xi, yi) for this horizon at locations (xi, yi), i = 1, 2, …, Nw. Since the velocity-depth model derived from time-to-depth conversion is supposed to be consistent with the input data — the time structure map τ(x, y) created from the interpretation of the time-migrated volume of data, we have ${\displaystyle \tau (x,y)=2{\frac {z_{s}(x,y)}{V_{s}(x,y)}},}$ (7a) where, for the purpose of calibration, Vs(x, y) can be either the average or rms velocity map associated with the horizon zs(x, y). Also, for simplicity in calibration, we consider vertical rays rather than image rays as in equation (7a). 
There exists a calibration velocity Vc(x, y) such that, at a well location (xi, yi), it satisfies the relation ${\displaystyle \tau (x_{i},y_{i})=2{\frac {z_{w}(x_{i},y_{i})}{V_{c}(x_{i},y_{i})}}.}$ (7b) Combine equations (7a) and (7b) to get a relation that is satisfied at the well locations ${\displaystyle {\frac {V_{c}(x_{i},y_{i})}{V_{s}(x_{i},y_{i})}}={\frac {z_{w}(x_{i},y_{i})}{z_{s}(x_{i},y_{i})}}.}$ (8) From the knowledge of the well tops zw(xi, yi) and the seismically derived reflector depths zs(xi, yi) at the well locations, equation (8) gives a calibration factor c(xi, yi) = Vc(xi, yi)/Vs(xi, yi) computed at each of the well locations. Next, apply kriging or some other interpolation technique to the sparsely defined calibration factors c(xi, yi), i = 1, 2, …, Nw to derive a calibration factor map c(x, y) specified at all grid locations (x, y). Kriging is a statistical method of determining the best estimate for an unknown quantity such as c(x, y) at some location (x, y) using a sparse set of values such as c(xi, yi) specified at locations (xi, yi) [1] [2]. The final step in calibration is to scale the depth structure map zs(x, y) by the calibration factor map c(x, y) ${\displaystyle z_{c}(x,y)=c(x,y)\ z_{s}(x,y),}$ (9) where zc(x, y) is the calibrated depth structure map. Note that, by way of equations (8) and (9), the calibrated depth zc coincides with the well top zw at well location (xi, yi). Calibration to well tops is done only after the completion of model building, and just before well planning and reservoir modeling. When estimating an earth model by following a layer-by-layer inversion procedure (next subsection), depth horizon associated with the (n − 1)st layer should not be calibrated before estimating the model for the next layer n. This is because seismically derived layer velocities almost never match with well velocities. 
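The calibration workflow of equations (8) and (9) can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: inverse-distance weighting stands in for the kriging step described in the text, and every name here (`calibration_factors`, `interpolate_factor`, `calibrate`) is my own, not from the source:

```python
def calibration_factors(z_well, z_seis_at_wells):
    """Equation (8): c_i = z_w(x_i, y_i) / z_s(x_i, y_i) at each well."""
    return [zw / zs for zw, zs in zip(z_well, z_seis_at_wells)]

def interpolate_factor(x, y, wells, c_at_wells, power=2.0):
    """Spread the sparse factors c_i to an arbitrary grid point (x, y).
    Inverse-distance weighting is used as a simple stand-in for kriging."""
    num = den = 0.0
    for (xi, yi), ci in zip(wells, c_at_wells):
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return ci  # exactly at a well: use that well's factor
        w = d2 ** (-power / 2.0)
        num += w * ci
        den += w
    return num / den

def calibrate(grid, z_s, wells, z_well, z_seis_at_wells):
    """Equation (9): z_c(x, y) = c(x, y) * z_s(x, y) over the whole grid."""
    c_w = calibration_factors(z_well, z_seis_at_wells)
    return [interpolate_factor(x, y, wells, c_w) * zs
            for (x, y), zs in zip(grid, z_s)]

# Two wells; the calibrated depth honors the well tops at the well locations:
wells = [(0.0, 0.0), (10.0, 0.0)]
z_well = [1000.0, 1100.0]          # well tops
z_seis = [950.0, 1150.0]           # seismically derived depths at the wells
grid = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
z_c = calibrate(grid, [950.0, 1050.0, 1150.0], wells, z_well, z_seis)
print([round(z, 1) for z in z_c])
```

As in the text, the calibrated depth coincides with the well top at each well location, while points between wells receive a smoothly interpolated correction.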
The discrepancy between the two is attributable to several factors, including the limited resolution in velocities estimated from seismic data (see models with horizontal layers, low-relief structure, and complex overburden structure) and seismic anisotropy. Additionally, the high-frequency variations in the well velocities are absent from the seismically derived velocities. The calibrated depth maps can be used to create a solid model of the earth as illustrated in Figure 9.4-13. Each layer is represented by a solid (Figure 9.4-14) with its interior populated by specific layer parameters. These may include compressional- and shear-wave velocities, densities, and rock physics parameters such as porosity, permeability, pore pressure, and fluid saturation. When populated by the petrophysical parameters, the solid associated with the reservoir layer represents a reservoir model. For the purpose of reservoir modeling, the solid for the reservoir layer usually is downscaled in the vertical direction by dividing it into thin slices with a thickness as small as 1 m, much less than the threshold for vertical seismic resolution. Additionally, the solid for the reservoir layer is upscaled in the lateral direction by dividing each thin slice into finite elements with a varying size of up to 250 m on one side. The reservoir model is eventually fed into a reservoir simulation scheme to predict the geometry of the fluid flow from the given reservoir parameters.

## References

1. Sheriff, R. E., 1991, Encyclopedic dictionary of exploration geophysics: Soc. Expl. Geophys.
2. David, M., 1987, Geostatistics: in Encyclopedia of Science and Technology, 6, 141–144, Academic Press.
{}
### Effects of Branched-Chain Alcohols on Surface Activity and Micellization of Gemini Surfactants

1. Laboratory of Colloid, Interface and Chemical Thermodynamics, Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190

• Received: 2014-02-19; Published: 2014-04-29
• Corresponding author: Wang Yilin, E-mail: yilinwang@iccas.ac.cn
• Funding: Project supported by the National Natural Science Foundation of China (Nos. 21025313, 21021003).

### Effects of Branched-Chain Alcohols on Surface Activity and Micellization of Gemini Surfactants

Tang Yongqiang, Zhu Linyi, Han Yuchun, Wang Yilin

1. Key Laboratory of Colloid and Interface Science, Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190

• Received: 2014-02-19; Published: 2014-04-29
• Supported by: Project supported by the National Natural Science Foundation of China (Nos. 21025313, 21021003).

Gemini surfactants are a class of highly efficient novel surfactants, and alcohols are among the surfactant additives most commonly used in industry and in daily-chemical products, so understanding how alcohols of different structures affect the surface activity and micellization of Gemini surfactants is of real significance for advancing their development and practical application. Surface tension, conductivity, isothermal titration microcalorimetry, cryogenic transmission electron microscopy, and NMR were used to study the effects of the linear alcohol 1-pentanol and of the branched alcohols 2-hexanol and 3-heptanol, which share the same main chain, on the surface activity and micellization of the cationic quaternary-ammonium Gemini surfactants C12CSC12Br2 (S=2, 4, 6, 8, 10, 12), whose spacer groups differ in length. The branched alcohols were found to markedly affect the packing of the surfactant at the air/liquid interface, so that C20 (the surfactant concentration required to lower the solvent surface tension by 20 mN/m) and γCMC (the surface tension at the CMC) decrease significantly as the degree of branching of the alcohol increases, whereas the branched alcohols have no obvious effect on the critical micelle concentration or on the size and morphology of the micelles; the influence of these alcohols on the Gemini surfactants also depends on the length of the spacer. The mechanisms behind these results are discussed, which should help guide the selection of alcohol additives of suitable structure for tuning the surface and solution properties of Gemini surfactants.
Gemini surfactants are a class of novel, efficient surfactants, and alcohols are among the most widely used additives in surfactant products in industry and daily life. Understanding the effects of alcohols on the surface activity and micellization of gemini surfactants will promote the applications of gemini surfactants. The effects of a linear-chain alcohol (1-pentanol) and of branched-chain alcohols (2-hexanol and 3-heptanol) with the same main chain as 1-pentanol on the surface activity and micellization of the cationic ammonium gemini surfactants C12CSC12Br2 (S=2, 4, 6, 8, 10, 12) in aqueous solution have been investigated by surface tension, electrical conductivity, isothermal titration microcalorimetry (ITC), cryogenic transmission electron microscopy (Cryo-TEM), and NMR techniques. The branched-chain alcohols show no obvious effect on the critical micelle concentration (CMC) of the surfactants or on the size and morphology of the micelles, owing to the loose microstructure of, and weak interactions between, the alcohols and the gemini surfactant molecules. However, the addition of branched-chain alcohols greatly decreases the surface tension of the surfactant solutions, and the surface tension decreases further as the branching factor of the alcohols increases. In addition, the surfactant concentration required to reduce the surface tension of the solvent by 20 mN/m (C20) and the surface tension of the surfactants at the CMC (γCMC) decrease markedly with increasing branching factor of the alcohols at a fixed alcohol concentration, owing to the increased density of hydrophobic chains at the air/solution interface. These results suggest that the branched-chain alcohols influence the self-assembly of the surfactants more strongly at the air/solution interface than in micelles. Moreover, there is a second critical concentration in the surface tension curves of C12C10C12Br2 and C12C12C12Br2, which have longer spacers, indicating that the hydrophobic chains of these gemini surfactants are not packed tightly even above the CMC, because of the flexibility of their spacers and their low CMC, and become more tightly packed as the surfactant concentration increases. The addition of alcohols lowers this second critical concentration remarkably, the more so the higher their branching factor, which means that branched alcohols have a more pronounced effect on the surface activity of gemini surfactants with longer spacers. This work helps clarify the effects of branched-chain alcohols on the self-assembly of gemini surfactants at the air/solution interface and in bulk solution, and may provide some guidance on how to choose alcohols to adjust the surface activity and micellization of gemini surfactants.
{}
# A reduced polytopic LPV synthesis for a sampling varying controller: experimentation with a T inverted pendulum

1 NECS - Networked Controlled Systems (Inria Grenoble - Rhône-Alpes, GIPSA-DA - Département Automatique)
2 GIPSA-SLR (GIPSA-DA - Département Automatique)

Abstract: This paper deals with the adaptation of a real-time controller's sampling period to account for variations in the available computing resources. The design of such controllers requires a parameter-dependent discrete-time model of the plant, where the parameter is the sampling period. A polytopic approach for LPV (Linear Parameter Varying) systems is then developed to obtain an $H_{\infty}$ sampling-period-dependent controller. A reduction of the polytope size is performed, which drastically reduces the conservatism of the approach and simplifies the controller implementation. Experimental results on a T inverted pendulum show the efficiency of the approach.

Document type: Conference papers

https://hal.inria.fr/inria-00193862

Contributor: Daniel Simon
Submitted on: Tuesday, December 4, 2007 - 5:39:46 PM
Last modification on: Saturday, October 6, 2018 - 1:15:38 AM
Archived on: Monday, April 12, 2010 - 6:07:13 AM

### File

drosds518.pdf (files produced by the author(s))

### Identifiers

• HAL Id: inria-00193862, version 1

### Citation

David Robert, Olivier Sename, Daniel Simon. A reduced polytopic LPV synthesis for a sampling varying controller: experimentation with a T inverted pendulum. European Control Conference, ECC'07, Jul 2007, Kos, Greece. ⟨inria-00193862⟩
{}
# Apache basic authentication

Apache basic authentication is a general mechanism to password-protect certain webpages without installing anything extra on top of the Apache web server. Apache comes already installed on OSX computers and can easily be installed on Linux computers. Windows users can probably use this tutorial as well, but that has not been tested by the author(s). As password protection of a server is not a problem specific to bioinformatics, there are numerous websites detailing how to set it up. Here, a protocol specific to setting up a wwwblast server is provided, assuming there may be multiple wwwblast installations on the one server.

# Tell Apache to use password-protection

As an administrator, add the following lines to the Apache config of the directory you want to password-protect. The Apache config file might be, for instance, /etc/httpd/httpd.conf or /etc/apache2/conf.d/blast.conf

AuthUserFile /etc/apache_users
AuthName "myblastname welcome message"
AuthGroupFile /etc/apache_groups
AuthType Basic
Require group myblastname

So the whole directory entry might look like this, for example:

<Directory "/Users/ben/Sites/blast">
AuthUserFile /etc/apache_users
AuthName "myblastname welcome message"
AuthGroupFile /etc/apache_groups
AuthType Basic
Require group myblastname
</Directory>

Apache needs to be restarted for this to take effect. The easiest way to do this is to restart the computer. If that is not possible, it may be possible to use apache2ctl. As an administrator,

$ apache2ctl graceful

After restarting the webserver, going to your webpage, e.g. http://localhost/~ben/blast/blast.html, should now require a password. However, you won't be able to log in just yet.

# Specify the passwords themselves

The first time a password is specified, the file that stores the passwords needs to be created. The passwords are encrypted in this file. Use the -c flag to create the file.
As an administrator,

$ htpasswd -c /etc/apache_users <myfirstusername>

replacing <myfirstusername> with the login name of the first user. It is normal that nothing appears to happen as you type or paste the password (unlike when you log in to your computer and stars or dots appear). As usual with passwords, it is best to specify a strong one. There are many websites that will generate strong passwords randomly, for instance the first Google hit for "password generator".

After this users file has been created, the -c flag can be omitted:

$ htpasswd /etc/apache_users <mysecondusername>

After this step is complete there should be a new file /etc/apache_users with usernames and encrypted passwords in it, for instance

myfirstusername:X/ZYo/PJfXMIw

Above, in the Apache configuration file, these lines were specified:

AuthGroupFile /etc/apache_groups
Require group myblastname

This means that only people in the group "myblastname" will be able to get through the password protection. To specify who is in which group, create a new file in a text editor, use the template below, and save it as "/etc/apache_groups":

myblastname: mysecondusername myfirstusername

After this step is complete, you should be able to log in to your blast webpage.

# Checking

When configuring Apache, it is easy to lose track of whether you are logged into particular servers. Therefore, it is best to start a new browser session and go from start to finish. Open up a browser you don't usually use (e.g. if you usually use Safari, open up Firefox). Go to your server's webpage and make sure that:
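The AuthGroupFile format used above (group name, colon, then space-separated usernames) is simple enough to sanity-check with a short script. The following Python sketch is a hypothetical helper, not part of Apache or wwwblast; it parses a group file and reports whether a user would pass the `Require group` test:

```python
def parse_group_file(text):
    """Parse AuthGroupFile content: one 'groupname: user1 user2 ...' per line."""
    groups = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        name, _, members = line.partition(":")
        groups[name.strip()] = set(members.split())
    return groups

# Mirrors the example group file created above.
sample = "myblastname: mysecondusername myfirstusername\n"
groups = parse_group_file(sample)
allowed = "myfirstusername" in groups.get("myblastname", set())
```

Here `allowed` comes out True, matching what Apache itself would decide for that group file (the authoritative check is of course Apache's own).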
{}
# Chapter 10 - Review: 155

$(4,16)$

#### Work Step by Step

Let $P=(x_{1},y_{1})=(-3,8)$ and $Q=(x_{2},y_{2})=(11,24)$.

The midpoint formula is $(\frac{x_{1}+x_{2}}{2},\frac{y_{1}+y_{2}}{2})$. Substituting the values:

Midpoint $=(\frac{-3+11}{2},\frac{8+24}{2})=(\frac{8}{2},\frac{32}{2})=(4,16)$

Therefore, the coordinates of the midpoint are $(4,16)$.
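The same substitution can be confirmed with a short script (a generic sketch, not part of the textbook solution):

```python
def midpoint(p, q):
    """Midpoint of segment PQ: ((x1 + x2)/2, (y1 + y2)/2)."""
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2)

result = midpoint((-3, 8), (11, 24))  # returns (4.0, 16.0)
```

This agrees with the worked answer $(4,16)$.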
{}
# Why can't an improper transfer function be realized?

A major result in control system theory is that a transfer function, $$G\left( s \right) = \frac{{Y\left( s \right)}}{{U\left( s \right)}}$$ has a state space realization if and only if the degree of its numerator is less than or equal to the degree of its denominator. I cannot find a proof of this fact in most major (undergraduate and introductory graduate) textbooks. If someone knows the proof, could they sketch it out for me or point me to references where the proof exists? There is a related question here but it still does not answer the "why" of state-space realizations being non-existent for improper transfer functions.

• Do you see the intuitive problem in the special case $G(s)=1$ (in which case $g(t)=\delta(t)$)? – Ian Aug 26 '16 at 16:26

Suppose we have a state-space model \begin{align} \dot{\mathrm x} &= \mathrm A \mathrm x + \mathrm B \mathrm u\\ \mathrm y &= \mathrm C \mathrm x + \mathrm D \mathrm u \end{align} where $\mathrm A \in \mathbb R^{n \times n}$. Laplace-transforming both the state equation and the output equation, we conclude that the transfer function is the following matrix-valued function $$\mathrm G (s) = \mathrm C (s \mathrm I_n - \mathrm A)^{-1} \mathrm B + \mathrm D$$ Note that $$(s \mathrm I_n - \mathrm A)^{-1} = \frac{\mbox{adj} (s \mathrm I_n - \mathrm A)}{\det (s \mathrm I_n - \mathrm A)}$$ and that

• each entry of the adjugate is a polynomial in $s$ of degree at most $n-1$.
• the determinant of $s \mathrm I_n - \mathrm A$ is a polynomial in $s$ of degree $n$.

Thus, we can conclude that each of the scalar transfer functions that make up $\mathrm G (s)$ has the property that the degree of the numerator is less than or equal to the degree of the denominator. How, then, could an improper transfer function have a state-space realization?

• This only shows that an improper transfer function cannot be realized in the particular form you wrote. It could be realized using derivatives of the input.
– Pait Aug 27 '16 at 10:22
• @Pait The problem is that pure differentiators do not have state-space realizations. Hence, what you're proposing sounds like a distortion of the problem. You're relabeling the input vector so that the differentiators stay out of the system. – Rodrigo de Azevedo Aug 27 '16 at 12:15
• Yes they do! $y = \dot{u}$ is a realization. Just not in the usual form that you mentioned and that we all know and love. The crucial question is why we chose that form without differentiators. It is not written in stone that realizations must follow it; rather, the usual form was chosen because of very practical considerations. – Pait Aug 27 '16 at 17:56

To realize an improper transfer function, derivatives of the input would be needed. The answer above by Rodrigo de Azevedo helps make clear why. The problem is that it is not possible to realize perfect derivatives. A number of arguments help explain this. The modulus of the frequency response of a differentiator increases with frequency, yet it is not possible to construct an apparatus whose gain becomes arbitrarily large at high frequencies. On the contrary, any known device has a cutoff frequency after which its response falls. Or, suppose you feed a discontinuous signal into a perfect differentiator: it would have to compute the derivative of the signal before noticing that the derivative doesn't exist! So any "differentiator" will be at best an approximation.
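The degree argument above can also be checked numerically. The pure-Python sketch below (a hand-picked single-input, single-output example, not from the thread) evaluates G(s) = C(sI − A)^(-1)B + D along s = jω: a proper transfer function stays bounded as ω grows, with high-frequency limit |D|, whereas the improper G(s) = s grows without bound.

```python
def G(s, A, B, C, D):
    """Scalar transfer function C (sI - A)^(-1) B + D for a 2x2 matrix A."""
    a, b = A[0]
    c, d = A[1]
    # sI - A, inverted via the 2x2 adjugate/determinant formula
    m11, m12 = s - a, -b
    m21, m22 = -c, s - d
    det = m11 * m22 - m12 * m21
    inv = [[m22 / det, -m12 / det], [-m21 / det, m11 / det]]
    # C is 1x2, B is 2x1, so G(s) is scalar
    v0 = inv[0][0] * B[0] + inv[0][1] * B[1]
    v1 = inv[1][0] * B[0] + inv[1][1] * B[1]
    return C[0] * v0 + C[1] * v1 + D

A = [[0.0, 1.0], [-2.0, -3.0]]   # stable: eigenvalues -1 and -2
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.5                          # here G(s) = 1/(s^2 + 3s + 2) + 0.5

proper = [abs(G(1j * w, A, B, C, D)) for w in (1e2, 1e4, 1e6)]
improper = [abs(1j * w) for w in (1e2, 1e4, 1e6)]   # G(s) = s: unbounded gain
```

`proper` approaches |D| = 0.5 as ω increases, while `improper` keeps growing, which is exactly the high-frequency behavior no physical device can provide.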
{}
# Add Caption below algorithm environment I want to add a caption below algorithm environment that spans over two column page in \documentclass[10pt, conference, compsocconf]{IEEEtran}. I use the following: \usepackage[Algorithm,ruled]{algorithm} \usepackage{float} \begin{figure*} \begin{algorithm*}[H] This is algorithm \end{algorithm*} \caption{Caption here} \end{figure*} But I get error ./main.tex:164: LaTeX Error: Float(s) lost. How can I fix that? Thank you very much. You can have a caption below an algorithm using the algorithm2e package \documentclass{article} \usepackage[]{algorithm2e} \begin{document} \begin{algorithm} This is algorithm \caption{Algorithm caption} \end{algorithm} \end{document} The issue here is that you're placing a starred float environment (algorithm*) inside another starred float environment (figure*). You can get away with it by using figure* and the algorithm with the [H]ere float specifier: \documentclass[conference]{IEEEtran} \usepackage[ruled]{algorithm} \begin{document} \begin{figure*} \begin{algorithm}[H] This is algorithm \end{algorithm} \caption{Caption here} \end{figure*} \end{document} Note that your algorithm captions will be printed as a figure caption. However, that might be what you're after.
{}
Over the next week we will cover the basics of how to create your own histograms in R. t=(0:length(y)-1)/fs; % time axis. How is 16QAM implemented using IQ modulation? Note: in the PSK modulations discussed earlier (QPSK, 8PSK), the constellation points all lie on the unit circle, with identical modulus (all 1) and only their phases differing; in QAM, the constellation points no longer lie on the unit circle but are distributed over a region of the complex plane, so that points with the same modulus must differ in phase, and points with the same phase must differ in modulus. In short, the FFT is a computationally fast way to generate a power spectrum based on a 2-to-the-nth-power data point section of waveform. mat file and then import tha. I'm trying to create a MATLAB script that finds the maximum point of a given 3D function with gradient descent. Users can discuss tutorials, model downloads, and algorithm questions related to matlab, svpwm, emd, fft, photovoltaics, eemd, Kalman filtering, ofdm, wavelet transforms, VMD, pwm, pca, prediction, image processing, Fourier transforms, median filtering, and so on; this is the Signal Processing and Communications board of the MATLAB Chinese forum. I'm experiencing some high spikes on the IQ data plot and on FFT I'm getting a 0 magnitude for about 2500 samples in a row out of 5000 samples in total. The configuration used for the (2. The first one, singlechannel, just contains one continuous Trace and the other one, threechannel, contains three channels of a seismograph. Additional SCPI commands are used. Please help me to plot it in the right way using Matlab code. There are various ways of applying the model with Gaussian fit in Matlab, like given below: Gaussian fit by using the "fit" function in Matlab. 35), which seems a fairly precise estimate for these data. The Imaris Start Package is an interactive visualization and analysis software for 3D and time-lapse microscopic images with advanced solutions for big datasets. The MATLAB load Command. There is more than one way to read data into MATLAB from a file. The table within includes the within-subject variables w1 and w2. Fault Number to Analyze - Sets the number which corresponds to the time stamp of the fault the user wishes to plot.
(m) help plot (n) help length (o) help size 14. Plot a decision tree. The IQR tells how spread out the "middle" values are; it can also be. Using this feature of the app requires the WLAN Toolbox. The sampling rate was set to 250 Msps. 04 shifts the constellation 0. Starting in R2019b, you can display a tiling of plots using the tiledlayout and nexttile functions. png image file for the work you submit on the following problems. Normal Distribution Plot. IQ and matlab U. Matlab simulation code for the power spectrum of a digital baseband signal. Note about the Matlab Code. If there is a single numeric within-subjects factor, plot uses the values of that factor as the time values. wv file, so I can load it in a Spectrum Analyzer. The measured signal in Matlab is -19. plot(x,z); end But that only gives me the function for the last angle, i.e. when i0 equals 0. Is anyone using Git with Simulink Projects and slx files? Looking for tips and tricks from someone that has some experience. We will use two different ObsPy Stream objects throughout this tutorial. Here is a brief example to demonstrate how to create a pressure-enthalpy ($$\log p,h$$) plot for propane (R-290) with automatic isoline spacing:. MATLAB is not a cheap tool, but there is a home user licence available for a more reasonable price. Processing RAW Images in MATLAB Rob Sumner Department of Electrical Engineering, UC Santa Cruz May 19, 2014 Abstract This is an instructional document concerning the steps required to read and display the unprocessed sensor data stored in RAW photo formats. • Assisted graduate and undergraduate students for several projects related to vehicle (DQ, IQ, OQ and PQ).
If you aren't afraid of programming, I'd recommend R, specifically the ggplot2 package. plot(rm) plots the measurements in the repeated measures model rm for each subject as a function of time. An example of generating an IQ signal (real) in MATLAB is as follows. 04 up, as shown below:. where the signals are centered around the DC frequency. pyplot as plt We will draw a bar plot where each bar will represent one of the top 10 movies. Can range from 1 k to 512 k samples. All of the measurements and results that can be displayed, from simple spectrum measurements to complex modulation analysis, are computed from these IQ samples. 1, can be substituted for freqz (better for Octave). Part A and Part C of the matlab code are the same as mentioned on the AWGN page. The Visual Display of Quantitative Information is a classic book filled with plenty of graphical examples that everyone who wants to create beautiful data visualizations should read. To generate the waveform, click Generate. 5-kHz DC-DC boost converter increasing voltage from PV natural voltage (273 V DC at maximum power) to 500 V DC. For example, if you have two between-subject factors, drug and sex, with each having two groups, you can specify red as the color for the groups of drug and blue as the color for the groups of sex as follows. 0 programming language of a simulator of dynamics. making nice graphs with matplotlib) you can export a. I want to plot a Power Spectral Density graph for my signal. Course Overview Hi everyone, my name is Mike Cohen, and welcome to my course, Building Your First Data Analysis Workflow with MATLAB. Otherwise, plot uses the discrete values 1 through r as the time values, where r is the number of repeated measurements. • Electronic Design Automation (EDA) tools, such as MATLAB and Microwave Office, can save IQ simulation data to CSV text files.
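One fragment above promises "an example of generating an IQ signal (real) in MATLAB", but the code itself did not survive. As a stand-in, here is a minimal Python sketch (not the original MATLAB): it builds I and Q components of a baseband tone and mixes them up to a real passband signal I·cos(2πf_c·t) − Q·sin(2πf_c·t); all sample rates and frequencies are illustrative.

```python
import math

fs = 1000.0    # sample rate (Hz), illustrative
f_bb = 10.0    # baseband tone frequency (Hz)
f_c = 100.0    # carrier frequency (Hz)
n = 256

t = [k / fs for k in range(n)]                             # time axis
i_sig = [math.cos(2 * math.pi * f_bb * tk) for tk in t]    # I (in-phase)
q_sig = [math.sin(2 * math.pi * f_bb * tk) for tk in t]    # Q (quadrature)

# Real passband signal from the IQ pair
passband = [ik * math.cos(2 * math.pi * f_c * tk)
            - qk * math.sin(2 * math.pi * f_c * tk)
            for ik, qk, tk in zip(i_sig, q_sig, t)]
```

With I = cos and Q = sin, the identity cos(a)cos(b) − sin(a)sin(b) = cos(a + b) makes the passband samples a single tone at f_c + f_bb = 110 Hz, which is the usual single-sideband picture of IQ upconversion.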
One modulation technique that lends itself well to digital processes is called "IQ Modulation", where "I" is the "in-phase" component of the waveform, and "Q" represents the quadrature component. Intro to pyplot: matplotlib. ; c is given as the width of the peak. IQ imbalance impairment in MATLAB. Plot your white_noise object using ts. Sorry if my English is not good. Since MATLAB has a built-in function "ifft()" which performs Inverse Fast Fourier Transform, IFFT is opted for the development of this simulation. m that plots the spectrum of a small segment of data, where the frequency axis is centered at the centered frequency, and only the principal alias frequency band is displayed. A constellation diagram is a representation of a signal modulated by a digital modulation scheme such as quadrature amplitude modulation or phase-shift keying. Control Structures Some Dummy Examples For loop syntax for i=1:100 for i=start: Last index Some Matlab Commands; end for j=1:3:200 Some Matlab Commands; end for m=13:-0. X3 + 4 x2 -10. Intelligence Quotient (IQ) test scores and another variable can measure head circumference. If X and Y are both matrices, they must be the same size. usr); we used the SDR-IQ for this. MENTAL REPRESENTATION A letter-matching task (Posner) - Chronometric (reaction time task): reaction time (RT) of the subject is the dependent variable. Where a is defined as the amplitude, b is the centroid location.
This section of MATLAB source code covers IQ imbalance impairment and the effect of IQ amplitude and phase imbalance on the constellation diagram, using matlab code. In order to power the LNA. If you set IQ phase imbalance (deg) to 30 and all other parameters to 0, the scatter plot is skewed clockwise by 30 degrees, as shown below: Setting the I dc offset to 0. A vertical line goes through the box at the median. It is reasonably intuitive to see that the received signal has frequency components at and also at. radar matlab mahafza To learn how to write a simple program in Matlab to analyze Doppler radar data. This week we'll get a little closer to the hardware, and learn how to control the SDR's more directly. 3) hold off The misorientation axes in specimen coordinates Analyzing the misorientation axis in specimen coordinates is a bit more involved as it requires to extract the two neighbouring orientations to each boundary segment. If not, this indicates an issue with the model such as non-linearity. The QPSKModulator Matlab comm object has various modulation functions, and QPSKModulator is one of them. Baseband signal upconversion and IQ Modulation and Demodulation: In this article, we will go through the basic steps of the up- and downconversion of a baseband signal to the passband signal. Extremely cheap $5 or less active GPS antennas with SMA connectors can be found on eBay, Amazon or Aliexpress. A simple plot of data from a file. For a Butterworth filter, this is the point at which the gain drops to 1/sqrt (2) that of the passband (the "-3 dB point"). Matplotlib is a Python 2-d and 3-d plotting library which produces publication quality figures in a variety of formats and interactive environments across platforms. plot(F,10*log10(Pxx)) If you do not have the Signal Processing Toolbox, the PSD is proportional to the absolute value squared of the DFT (calculated by fft()). For more information on that scaling, please see:
The spectral plot uses dB (I find that more convenient) on the Y-axis, so consider it a logarithmic scale. I have 1700 plots of data in the graph. B How to calculate jump height from the force and a person's weight. The output signal is,. I guess it might be necessary to add an FIR after changing the sampling rate, but I am not good at this. The PDXprecip. Q plots in Figure 9 to the green trace in Figure 9. 02 to the right and 0. This software also converts the. Graphs (in 2-D) are drawn with the plot statements:. To do some of the exercises in the book you'll probably at least require the core MATLAB plus the Communications System Toolkit, which is an extra add-on. Topics to cover. Well, this is basic code for a matched filter to detect similarity between any two simple signals of the same frequency. The smallest and largest values that remain are the bootstrapped estimate of low and high 95% confidence limits for the sample statistic. hilbert returns a complex helical sequence, sometimes called the analytic signal, from a real data sequence. Remote Control with MATLAB (Rohde & Schwarz application note 1EF46): For more information about the RSIB interface as well as for documentation of the remote control commands, refer to the operating manual of the instrument.
It shows how to perform the same functions described in those tutorials using gnuplot, a command-line-driven plotting program commonly available on Unix machines (though available for other platforms). audiowrite(filename,y,Fs) writes a matrix of audio data, y, with sample rate Fs to a file called filename. Analysis displays include IQ Time Domain, Frequency Domain, I Spectrum, IQ Power Spectrum, Constellation Plot, Spectrogram Plot, Persistence Plot, and Histogram Plot. Source code-PART B. pptx), PDF File (. The following is an example of how to use the FFT to analyze an audio file in Matlab. dat file contains two columns of numbers. RAW photo files contain the raw sensor data from a digital. Positive and Negative Correlation Coefficient - Graph and Examples: scatter plot, correlation and Pearson's r are related topics and are explained here with the help of simple examples. % Set up signal analyzer mode to Basic/IQ mode fprintf Plot the Acquired IQ Data. The first quartile, or lower quartile, is the value that cuts off the first 25% of the data when it is sorted in ascending order. The example scatter plot above shows the diameters and. Also includes: helpful tips, 'asides' that attempt to demonstrate the nature of complex (IQ or quadrature) signals, explain in detail some important concepts when it comes to using SDRs in real apps, explain how GRC works and what all of the parameters for various blocks control.
How to Make 3D Plots Using MATLAB. Still, they're an essential element and means for identifying potential problems of any statistical model. The table between includes the between-subject variables age, IQ, group, gender, and eight repeated measures y1 to y8 as responses. In order to power the LNA. Regression goes beyond correlation by adding prediction capabilities. Graphs (in 2-D) are drawn with the plot statements:. Set Up Instrument for an IQ Waveform Measurement. Download the example ad9361_matlab. ANTENNA ARRAYS: PERFORMANCE LIMITS AND GEOMETRY OPTIMIZATION by Peter Joseph Bevelacqua has been approved March 2008 Graduate Supervisory Committee: Constantine A. Python is a general-purpose language with statistics modules. The frequency-response display utility myfreqz, listed in Fig. Generic function for plotting of R objects. Open Script. A transmitter is the same idea, in reverse order. z 1+4 Command Window 10:42 19/02/2013 z=I+4. Note: For information about the different types of. My research focuses on wireless communications. So even though you may not use MATLAB, it has a pseudocode flavor that should be easy to translate into your favorite programming language.
A partial list of these functions is: zeros: matrix filled with 0. You might object that your signal isn't a pure cosine function as the one we have shown here, and that might be very true. The helper function hCaptureIQUsingN9010A. The box-and-whisker plot doesn't show frequency, and it doesn't display each individual statistic, but it clearly shows where the middle of the data lies. Save this data as white_noise_2. The result h is 1 if the test rejects the null hypothesis. The Tukey box plot shows the first (bottom of box) and third (top of box) quartiles (equivalently the 25th and 75th percentiles), the median (the horizontal line in the box), the range excluding outliers and extreme scores (the "whiskers" or lines that extend from the box), and outliers (a circle represents each outlier; the number next to the outlier is the observation number). As seen from the box plot, the scatter plot also shows that people who took the exam in the control condition had a better score on the IQ test than the other two groups. The alternative hypothesis is that the data in x and y come from populations with unequal means. load relatedsig ax(1) = subplot(3,1,1); plot(s1) ylabel Add 1 to the lag differences to account for the one-based indexing used by MATLAB®. No column titles are permitted. 1 Model of automatic control electrical automobile. Normal quantile plots show how well a set of values fits a normal distribution. 7 Rule, approximately 68% of the individuals in the population have an IQ between 85 and 115. I think this has to do with the fact that the noise I add is white noise. Box-and-whisker plots are a handy way to display data broken into four quartiles, each with an equal number of data values.
17 and it is a. To streamline the process of plotting the spectrum, I present below a Matlab function plot_FFT_IQ. IQ and matlab U. Agilent M8190A: Using IQTools and MATLAB Steve Crain, Agilent Technologies Generate waveforms and perform amplitude correction for Agilent M8190a AWGs using MATLAB® and Instrument Control Toolbox™. The PDXprecip. ones: matrix filled with 1. Mechanical Engineering Dep. This tutorial shows how to create box plots in Excel. MATLAB Programming for Numerical Computation. Amplitude Modulation - Matlab Tutorial (Amplitude modulation in Matlab with Code) 2016. Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Normally ON. Box plots can be created from a list of numbers by ordering the numbers and finding the median and lower and upper quartiles. 02:10*pi; y = sin(th); % Create the Figure object containing the plots fig = figure; % First Axes object, where the curve is plotted directly subplot(2,1,1) % Plot the curve plot(th,y,'r-') % Adjust the limits of the Axes object xlim([min(th) max(th)]) ylim([min(y) max(y)]) % Second Axes object, where the. Set Up Instrument for an IQ Waveform Measurement. For the discussion henceforth, we use the WR8G RF front-end along with PicoDigitier250, i. mat", "audio1") The file audio2. I use the built-in fft function in MATLAB (i. 0 times the IQ above (or below) the 75% point (or the 25% point) are drawn as circles and points that are more than 3. , are broadly classified as continuous-time (CT) or discrete-time (DT), depending on whether the times for which the signal is defined are continuous or discrete.
Other resolutions: 320 × 226 pixels | 640 × 453 pixels | 1,024 × 724 pixels | 1,280 × 905 pixels | 1,052 × 744 pixels. Correlation can have a value: 1 is a perfect positive correlation. NOTE: In 2015 we created a new Online Help system that users can. Click “Set Path” and search the pop-up file browser for the folder to set as your MATLAB path variable. Let us know here. Ignoring the common term and writing the base band equivalent form,. Otherwise, plot uses the discrete values 1 through r as the time values, where r is the number of repeated measurements. 2 Transmit Diversity 8. (An SDR is necessary for labs 4 & 5) Creating and plotting a sine. mat file and then import tha. pptx), PDF File (. plot(audio1) Plotting the Data with MATLAB If you would like to create a data file that is readable by MATLAB you need to save the audio1 data in a MATLAB readable file. This time, I’m going to focus on how you can make beautiful data visualizations in Python with matplotlib. 01 seconds and 0. At the moment my I/Q signal looks like this:. sin(x + φ)=sin(x) cos(φ) + sin(x + π/2) sin(φ). The general format is plot3(x,y,z,s) where: x, y and z are vectors or matrices, and s are strings specifying color, marker symbol, or line style. Si X e Y son ambas matrices, deben tener el mismo tamaño. usr); we used the SDR-IQ for this. MENTAL REPRESENTATION A letter-matching task (Posner) –Chronometric (reaction time task) Reaction time (RT) of subject is the dependent variable. Where a is defined as the amplitude, b is the centroid location. For example, if you have two between-subject factors, drug and sex, with each having two groups, you can specify red as the color for the groups of drug and blue as the color for the groups of sex as follows. for iq=1:N fprintf(fid,'3%6. Basic Plotting MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. 
The " interquartile range", abbreviated " IQR ", is just the width of the box in the box-and-whisker plot. NaN est pour Matlab une représentation arithmétique pour Not-a-Number. The FFT is a complicated algorithm, and its details are usually left to those that specialize in such things. The syntax is: hkAnalyzer. Python Plotting¶. The configuration used for the (2. 2f\n',xa(iq),ya(iq),za(iq)); end fclose(fid); The file, mpatch. The function breaks the figure into matrix specified by user and selects the corresponding axes for the current plot SYNTAX : subplot (m,n,p) – Divides the figure window into m x n matrix of small axes and selects the p th. If you provide a single list or array to the plot() command, matplotlib assumes it is a sequence of y values, and automatically generates the x values for you. 3 Frequency Offset. Intelligence Quotient (IQ) test scoresand another variable can measure head circumference. Si X e Y son ambos vectores, deben tener la misma longitud. X3 + 4 x2 -10. Imaris for Tracking. It allows you to generate high quality line plots, scatter plots, histograms, bar charts, and much more. How do I plot the line of best fit? I stored the x and y data in table and the plot them. , creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc. Align Signals with Different Start Times. For this tutorial we will use the sample census data set ACS. How to generate IQ component for transmitting 2FSK. The output data type depends on the output file format and the data type of the audio data, y. By default the arguments are evaluated with feval (@plot, x, y). Basic Plotting MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. Extremely cheap$5 or less active GPS antennas with SMA connectors can be found on eBay, Amazon or Aliexpress. 
The IQ demodulation preserves the information content in the Band-pass signal, and the original RF-signal can be reconstructed from the IQ-signal. A friend of mine just asked me for some tips with this. How to plot a normal distribution with matplotlib in python ? Daidalos February 09, 2019 Example of python code to plot a normal distribution with matplotlib:. txt ' Icdata utn. Then we will use quantization, QPSK modulation, QPSK demodulation and dequantization. Matlab, Pyzo can be considered a free alternative. This section describes the general operation of the FFT, but skirts a key issue: the use of complex numbers. Let us plot the simple function y = x for the range of values for x from 0 to 100, with an increment of 5. The data is made up of two columns, one the time in milliseconds and the other contains the volts (mV) and is imported into MATLAB from a CSV file. m is an example of a function that can be used with an Agilent Technologies N9010A signal analyzer. The desired name of this variable can be configured with the “Vector Name”-setting. Get the MATLAB code Published with MATLAB® R2013a visualize the 3-D cube in that Tickle IQ puzzle. z 1+4 Command Window 10:42 19/02/2013 z=I+4. IQ and matlab U. The plot required is a polar plot to plot the targets a radar sensor is identifying. The phase offset/imbalance can be ignored in this case. 11n/ac (OFDM). This type of plot is commonly known as a Nyquist plot as shown in Figure 9 below. Hunter and since then has become a very active open-source development community project. It's a nice plot to use when analyzing how your data is skewed. NFFT=1024; %NFFT-point DFT X=fft (x,NFFT); %compute DFT. Rician Channel model PLOTS. com has ranked 86677th in India and 356,643 on the world. The FMCW radar have many applications, from the conventional radar altimeter and traffic radar to the very innovative people detectors in dark environments, used in the military field. 
Specifically, in the first step, we use some software, such as MATLAB or Simulink, to calculate the parameters of the correction matrix. It is reasonably intuitive to see that the received signal has frequency components at and also at. Matplotlib is a Python 2-d and 3-d plotting library which produces publication quality figures in a variety of formats and interactive environments across platforms. The table between includes the between-subject variables age, IQ, group, gender, and eight repeated measures y1 to y8 as responses. ANTENNA ARRAYS: PERFORMANCE LIMITS AND GEOMETRY OPTIMIZATION by Peter Joseph Bevelacqua has been approved March 2008 Graduate Supervisory Committee: Constantine A. Bisection Method in MATLAB 1. The following N lines contain the coordinates of the points. THe source to the XY graph is a cluster of 2 elements whereas the connection to the polar plot needs an 1D array of cluster. - to support your explanation. Predicts two or more dependent variables based off of a linear combination of two or more independent variables. Plot the following func ons over the interval 0 < x < 4 :. Here is a brief example to demonstrate how to create a pressure-enthalpy ($$\log p,h$$) plot for propane (R-290) with automatic isoline spacing:. You can use the function 'snr' which is part of Signal Processing Toolbox, to calculate the signal to noise ratio of a signal. a) Plot iq against scoreA using blue circles b) On the same plot, plot iq against scoreB using red squares c) Label your axes d) If you have the statistics toolbox, type lsline to fit a least squares line to this data e) In a new plot, compare the mean of scoreA and scoreB using a bar graph. Image Processing Using MATLAB: Basic Operations (Part 1 of 4) By Dr Anil Kumar Maini. * This is a user-written add-on. The simplest, though least flexible, procedure is to use the load command to read the entire contents of the file in a single step. 
how can i separate I and Q from that mat file for further analysis. If a distinction exists in the two variables being studied, plot the explanatory variable (X) on the horizontal scale, and plot the response variable (Y) on the vertical scale. The sample counts that are shown are weighted with any sample_weights that might be present. Generate a sequence composed of three sinusoids with frequencies 203, 721, and 1001 Hz. In this example, you acquire the time domain IQ data, visualize it in MATLAB, and perform signal analysis on the acquired data. figure to control the size of the rendering. 2D Capon and APES MATLAB examples from JMR:152 57-69 (2001) Petre Stoica and Tomas Sundin, "Nonparametric NMR Spectroscopy", Journal of Magnetic Resonance", vol 152. 1-1 所示,频谱仪解调结果如图 3. mat file and then import tha. The component at was introduced due to I-Q imbalance. Use MathJax to format. The PDXprecip. In order to generate/plot a smooth sine wave, the sampling rate must be far higher than the prescribed minimum required sampling rate which is at least twice the frequency – as per Nyquist Shannon Theorem. Description. In this example, you acquire the time domain IQ data, visualize it in MATLAB, and perform signal analysis on the acquired data. How to Process I/Q Signals in a Software-Defined RF Receiver October 04, 2018 by Robert Keim In this article, we'll discuss how to complete the development of an algorithm (begun in previous articles), including how to find DC offsets for data visualization and how to combine I and Q offsets into a single value. The position of each dot on the horizontal and vertical axis indicates values for an individual data point. For example, points(P, Q, pch = ". MCS320 IntroductiontoSymbolicComputation Spring2007 MATLAB Lecture 7. One of the really nice aspects of Matlab is that most builtin functions are built to handle vectorization. The plot extraction and plot processing elements are the final stage in the primary radar sensor chain. 
(An SDR is necessary for labs 4 & 5) Creating and plotting a sine. hilbert returns a complex helical sequence, sometimes called the analytic signal, from a real data sequence. MATLAB Lecturer : Dr. wv file, so I can load it in a Spectrum Analyzer. The helper function hCaptureIQUsingN9010A. Note 1: If you only need to view an Origin project file rather than trying Origin, a free Origin Viewer is also available. The box-and-whisker plot doesn't show frequency, and it doesn't display each individual statistic, but it clearly shows where the middle of the data lies. MathWorks develops, sells, and supports MATLAB and Simulink products. Refer to the Guided Host-Radio Hardware Setup documentation for details on configuring your host computer. Television (TV) viewing is known to affect children's verbal abilities and other physical, cognitive, and emotional development in psychological. The primary advantage of SpectraScopeRT is the ability to conduct real-time streaming signal recordings to drive storage with provided monitoring capability to ensure the. Because the I/Q data waveforms are Cartesian translations of the polar amplitude and phase waveforms, you may have trouble determining the nature of the message signal. A scatter plot (aka scatter chart, scatter graph) uses dots to represent values for two different numeric variables. The main components are: the plot extractor or hit processor (translates hits from the signal processor to plots),. MATLAB SEMINAR REPORT APPLIED MATHEMATICS. Hello, Im trying to simulate a simple baseband GMSK system in matlab(without any AWGN), I have been following the model presented in "GMSK in. ISAE/AESS-1. 02 and the Q dc offset to 0. IQ imbalance impairment in MATLAB This section of MATLAB source code covers IQ imbalance impairment and IQ amplitude and phase imbalance effect on constellation diagram using matlab code. 
So even though you may not use MATLAB, it has a pseudocode flavor that should be easy to translate into your favorite pro-gramming language. Wednesday, 12:29 AM. Otherwise, plot uses the discrete values 1 through r as the time values, where r is the number of repeated measurements. How to Make 3D Plots Using MATLAB. [MATLAB] Titre de graphique [Résolu/Fermé] Signaler. The first one, singlechannel, just contains one continuous Trace and the other one, threechannel, contains three channels of a seismograph. A partial list of these functions is: zeros: matrix filled with 0. Then we will use quantization, QPSK modulation, QPSK demodulation and dequantization. Build matrices (or two-dimensional arrays) from vectors (one-dimensional arrays). Data points' locations change when clicking into a scatter plot with the Data CUrsor. 02 DRAFT Lecture Notes Last update: April 11, 2012 Comments, questions or bug reports? Please contact {hari, verghese} at mit. However, when it comes to building complex analysis pipelines that mix statistics with e. Since MATLAB has a built-in function "ifft()" which performs Inverse Fast Fourier Transform, IFFT is opted for the development of this simulation. Plot (x,y) for example if you want to plot a point in with x value equal to 3 and y value equal to 6 we must write the following command >> plot(3,6) This commend is produce the following figure We can use variable for x and y values >> x=3;. xml xsd ' Icdata. The plot extraction and plot processing elements are the final stage in the primary radar sensor chain. The main components are: the plot extractor or hit processor (translates hits from the signal processor to plots),. The result h is 1 if the test rejects the null hypothesis. Make cash with image galleries. To build it, simply type make. Plotting BER versus decision threshold shows the noise properties of the signal. Select a Web Site. 
plot(rm) plots the measurements in the repeated measures model rm for each subject as a function of time. 5 percent of the sorted set of numbers. 04 up, as shown below:. Team members: Vikram Krishnaswamy and Hao-Chih,LIN. m, change:2010-05-07,size:3966b. 02 and the Q dc offset to 0. Image Rejection Ratio (IMRR) with transmit IQ imbalance. Waveforms in Matlab 1 Sampled Waveforms Signals like speech, music, sensor outputs, etc. Still, every single sample of your signal can be described as such, i. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. MATLAB Lecturer : Dr. And, of course, when you're drawing a stem-and-leaf plot, you should always use a ruler to construct a neat table, and. It's a nice plot to use when analyzing how your data is skewed. Surface plots are useful for visualizing matrices that are too large to display in numerical form and for graphing functions of two variables. The following Matlab project contains the source code and Matlab examples used for arrows generalized 2 d arrows plot. No column titles are permitted. Create a new Python script called normal_curve. When you start MATLAB, the desktop appears in its default layout. K18D MATLAB Modeling Toolkit Application Note Products: ı R&S®FSW-K18D ı R&S®FSV3-K18D ı R&S®FPS-K18D Digital pre-distortion (DPD) is a common method to linearize the output signal of a power amplifier (PA), which is being operated in its non-linear operating range. Size of this PNG preview of this SVG file: 800 × 566 pixels. The individual data wires are 1D. Q plots in Figure 9 to the green trace in Figure 9. xml ini deplo bat Win32 registry IS m3iregistry worker. For the discussion henceforth, we use the WR8G RF front-end along with PicoDigitier250, i. Lab 2: Capturing Signals, and Displaying Signals in Matlab Overview. The third quartile, or upper quartile, is the value that cuts off the first 75%. where the signals are centered around the DC frequency. 
• Electronic Design Automation (EDA) tools, such as MATLAB and Microwave Office, can save IQ simulation data to CSV text files. Plot two sets of data with independent y-axes and a common x-axis. The load command requires that the data in the file be organized into a rectangular array. 5-kHz DC-DC boost converter increasing voltage from PV natural voltage (273 V DC at maximum power) to 500 V DC. At the top of the script, import NumPy, Matplotlib, and SciPy's norm() function. Thanks for contributing an answer to Electrical Engineering Stack Exchange! Please be sure to answer the question. For example, compare the red I and Q traces on the 3D I vs. (m) help plot (n) help length (o) help size 14. Now that I have the transformed data, I don't know how to plot it. The box-and-whisker plot doesn't show frequency, and it doesn't display each individual statistic, but it clearly shows where the middle of the data lies. 15 gives a matlab listing for a peaking equalizer section. NaN est pour Matlab une représentation arithmétique pour Not-a-Number. Read more in the User Guide. The LabVIEW Full and Professional Development Systems include a basic FFT Power Spectrum VI which can be used to create simple frequency domain plots from time domain data. #170: Basics of IQ Signals and IQ modulation & demodulation - A tutorial - Duration: 19:00. If Y is a vector, then the x -axis scale ranges from 1 to length (Y). The Octave syntax is largely compatible with Matlab. The IQR tells how spread out the "middle" values are; it can also be used to tell when some of the other. MATLAB is a very powerful programming language and toolset used by scientists and engineers. As seen from the box plot, the scatter plot also shows that people who took the exam in the control condition had a better score on the IQ test than the other two groups. If you wish to learn about MATLAB or reference all the manuals on line, go to www. 
Click here to download Matlab/Octave script for plotting receive spectrum with transmit IQ imbalance. Minitab is the leading provider of software and services for quality improvement and statistics education. In this example, a chirp signal is generated, its phase is put in IQ, then phase is sent and received, then the chirp signal is reconstructed. Specifically, in the first step, we use some software, such as MATLAB or Simulink, to calculate the parameters of the correction matrix. where the signals are centered around the DC frequency. ANTENNA ARRAYS: PERFORMANCE LIMITS AND GEOMETRY OPTIMIZATION by Peter Joseph Bevelacqua has been approved March 2008 Graduate Supervisory Committee: Constantine A. If Y is complex, then the plot function plots. However the type of plot can be modified with the fun argument, in which case the plots are generated by feval (fun, x, y). You have to use step() function to generate I/Q data. However, when it comes to building complex analysis pipelines that mix statistics with e. The X-Series signal and spectrum analyzers perform IQ measurements as well as spectrum measurements. figure ; plot(1:length(in_i),in_i) ; hold on ; plot(1:length(in_q), in_q); hold off; which gives me the following : However I need to look at the frequency domain of these values to see if it is displaying the correct frequencies that I belvie it should be. Python Plotting¶. Getting MIDAS data into MATLAB Convert MIDAS files into binary format Folder Tools/midas-to-matlab contains a RootAna-based program for decoding MIDAS files and saving results in binary files. In its various forms, IQ modulation is an efficient way to transfer information, and it also works well with digital formats. Use MathJax to format. This is simulated data. as in various forms of spectroscopy). If Y is a vector, then the x -axis scale ranges from 1 to length (Y). An alternative is to construct the plot directly from raw data. 
» Easy comparison between simulation and measured data • The IQproducer application software (MG3700A standard accessory) can import user I/Q sample data from CSV files into MG3700A,. The first one, singlechannel, just contains one continuous Trace and the other one, threechannel, contains three channels of a seismograph. Get the MATLAB code Published with MATLAB® R2013a visualize the 3-D cube in that Tickle IQ puzzle. 1 Model of automatic control electrical automobile. Introduction to MATLAB 1. Otherwise, plot uses the discrete values 1 through r as the time values, where r is the number of repeated measurements. CoefficientSource. • Electronic Design Automation (EDA) tools, such as MATLAB and Microwave Office, can save IQ simulation data to CSV text files. The individual data wires are 1D. Extremely cheap $5 or less active GPS antennas with SMA connectors can be found on eBay, Amazon or Aliexpress. To streamline the process of plotting the spectrum, I present below a Matlab function plot_FFT_IQ. Moreover, matplotlib plots work well inside Jupyter Notebooks since you can displace the plots right under the code. 1-2 频谱仪解调结果 图 3. The prevalence of manual engineering methods in wiring harness manufacturing compounds these challenges, especially as harness complexity increases. Team members: Vikram Krishnaswamy and Hao-Chih,LIN. 16 shows the resulting plot for the example boost(2,0. Download the example ad9361_matlab. It should start at z=0 and extend to z=2. If you provide a single list or array to the plot() command, matplotlib assumes it is a sequence of y values, and automatically generates the x values for you. 1-3 所示: ASK 显 示 -10 -20 -30 -40 10dBm -50 -60 -70 -80 -90 -100 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 图 3. When you run the file, MATLAB displays the following plot − Let us take one more example to plot the function y = x 2. The visualization is fit automatically to the size of the axis. 
The optimization workflow begins in MATLAB, where BuildingIQ engineers import and visualize 3 to 12 months of temperature, pressure, and power data comprising billions of data points. Fault File Size (kb) - Sets the length of the recorded fault files. When I was a college professor teaching statistics, I used to have to draw normal distributions by hand. pyplot is a collection of command style functions that make matplotlib work like MATLAB. Other resolutions: 320 × 226 pixels | 640 × 453 pixels | 1,024 × 724 pixels | 1,280 × 905 pixels | 1,052 × 744 pixels. edu CHAPTER14 Modulation and Demodulation This chapter describes the essential principles behind modulation and demodulation, which. This type of plot is commonly known as a Nyquist plot as shown in Figure 9 below. MATLAB is a very powerful programming language and toolset used by scientists and engineers. Making statements based on opinion; back them up with references or personal experience. First is through saving trace in. Sort the data in ascending order (look under the Data menu). 数字基带信号功率谱的matlab仿真程序代码_信息与通信_工程科技_专业资料 2007人阅读|40次下载. I understand we are converting the 3 phase stator currents into 2 time-invariant stator currents: (direct and quadrature currents or Id and Iq, respectively). For example, compare the red I and Q traces on the 3D I vs. Greg did a good job. 1-3 所示: ASK 显 示 -10 -20 -30 -40 10dBm -50 -60 -70 -80 -90 -100 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 图 3. This section of MATLAB source code covers IQ imbalance impairment and IQ amplitude and phase imbalance effect on constellation diagram using matlab code. 77dBm, it relate to the 10. Then, I did that Matlab code that I showed you and also saved the IQ samples in an. For example, the residuals from a linear regression model should be homoscedastic. Plotting more rows is not necessarily better, depending on the plot results desired. We first import Matplotlib’s pyplot with the alias “plt”. 
Radar Performance Analysis System: A Software Simulation. Its simple to pick up and really versatile in usage. At the moment my I/Q signal looks like this:. First is through saving trace in. I am analyzing ECG data using MATLAB. Use SCPI commands to configure the instrument to make the measurement and define the format of the data transfer once the measurement it made. The most useful graph to show the relationship between two quantitative variables is the scatter diagram. Plotting function on Matlab Define Mathmatically and Graphically Conclusion Reference Apendix Slide 3 OVERVIEW 4. , are broadly classified as continuous-time (CT) or discrete-time (DT), depending on whether the times for which the signal is defined are continuous or discrete. A File Sink block was created with header information. Data points' locations change when clicking into a scatter plot with the Data CUrsor. with a peak amplitude times. In this scenario of increasing competition, most parents, as well as children, want to analyze the Intelligent Quotient level. 04 shifts the constellation 0. Agilent M8190A: Using IQTools and MATLAB Steve Crain, Agilent Technologies Generate waveforms and perform amplitude correction for Agilent M8190a AWGs using MATLAB ® and Instrument Control Toolbox™. This tutorial is intended as a supplement to the information contained on the Physics' Department website: Plotting and Fitting Data and Plotting Data with Kaleidagraph. The optimization workflow begins in MATLAB, where BuildingIQ engineers import and visualize 3 to 12 months of temperature, pressure, and power data comprising billions of data points. Lab 2: Capturing Signals, and Displaying Signals in Matlab Overview. This normalizes the x-axis with respect to the sampling rate. Plot the first 1000 points of acquired time. If you collect data with Matlab but want to work on it using Python (e. 
- - The rise of project-based learning and low-cost electronics kits have made it possible for an increasing number of primary and secondary school students to gain classroom experience with robots and participate in competitions involving robot programming and. VIOLIN PLOT Name: VIOLIN PLOT Type: Graphics Command Purpose: Generates a violin plot. Still, they’re an essential element and means for identifying potential problems of any statistical model. txt", whose spectrum is ideal. Equipment: pc and MATLAB software. Layered Film Optical Simulation Routine - includes both film reflectance calculator and photodetector efficiency calculator. La función plot traza Y frente a X. The IQ data is written to EchoPAC files with 16 bit signed integer representation of the I and. Alternatively, the fread function of Matlab can be used to load the files to the Matlab workspace for offline post-processing or plotting. Plot a sine wave in the first subplot. domain in MATLAB?. Puntos a tratar. A File Sink block was created with header information. xml xsd ' Icdata. Use SCPI commands to configure the instrument to make the measurement and define the format of the data transfer once the measurement it made. usr); we used the SDR-IQ for this. Quadrature signals, also called IQ signals, IQ data or IQ samples, are often used in RF applications. Note that the model menu allows you to plot the I-V and P-V characteristics of the selected module or of the whole array. Matplotlib is a Python 2-d and 3-d plotting library which produces publication quality figures in a variety of formats and interactive environments across platforms. The following is an example of how to use the FFT to analyze an audio file in Matlab. plot(rm) plots the measurements in the repeated measures model rm for each subject as a function of time. A friend of mine just asked me for some tips with this. New in version 0. 
Plotting Data with gnuplot This tutorial is intended as a supplement to the information contained on the Physics' Department website: Plotting and Fitting Data and Plotting Data with Kaleidagraph. The arguments x1 and y1 define the arguments for the first plot and x1 and y2 for the second. Well, the IQ of a particular population is a normal distribution curve; where IQ of a majority of the people in the population lies in the normal range whereas the IQ of the rest of the population lies in the deviated range. If not, this indicates an issue with the model such as non-linearity. The phase offset/imbalance can be ignored in this case. The second quartile, or median, is the value that cuts off the first 50%. If you aren’t afraid of programming, I’d recommend R, specifically the ggplot2 package. Statistics assumes that your values are clustered around some central value. B How to calculate jump height from the force and a person's weight. You do this by sorting your thousands of values of the sample statistic into numerical order, and then chopping off the lowest 2. Then, I did that Matlab code that I showed you and also saved the IQ samples in an. By default the arguments are evaluated with feval (@plot, x, y). But the QPSKModulator itself does not directly generate I/Q data. Procedure: Create a script file and type the following code Write a program to find the roots of the following equations using bisection method: a. I am facing a problem with extracting demodulated IQ samples from VSA 89600 to Matlab. The plot is formed by joining adjacent points with straight lines. Extremely cheap$5 or less active GPS antennas with SMA connectors can be found on eBay, Amazon or Aliexpress. The sampling rate was set to 250 Msps. We plot the IQ data and view the spectrum of the signal using a MATLAB script that acquires IQ data. CurTiPot is widely disseminated in universities, companies, etc. Based on your location, we recommend that you select:. 04 up, as shown below:. 
Set Up Instrument for an IQ Waveform Measurement. Next, MATLAB draws a picture of the antenna design and labels the points. z 1+4 Command Window 10:42 19/02/2013 z=I+4. Imatest Matlab library (MCR) locations; Imatest Version: Compiler/Library: Typical location (Environment variable and file name for 64-bit English installations) [Substitute C:\Program Files for C:\Program files (x86) in 32-bit computers. 1 Receive Diversity 8. Predefined Matrix Sometimes, it is often useful to start with a predefined matrix providing only the dimension. Want to learn more? Discover the R tutorials at DataCamp. You can colorize and/or resize the points according to a generic frequency field named "N", or you can use a more typical field, such as altitude, population, or category. figure (2) plot (colorKey) hold on plot (axes, 'MarkerFaceAlpha', 0. xml xsd ' Icdata. Similarly in trigonometry, the angle sum identity expresses:. I didn't proceed to plot a proper graph of the spectrum,because I don't know if the data I'm receiving are good. Read 18 answers by scientists with 7 recommendations from their colleagues to the question asked by Bishanka Brata Bhowmik on Sep 27, 2012. #170: Basics of IQ Signals and IQ modulation & demodulation - A tutorial - Duration: 19:00. The IQ demodulation preserves the information content in the Band-pass signal, and the original RF-signal can be reconstructed from the IQ-signal. Three options will be explored: basic R commands, ggplot2 and ggvis. Visualize data with high-level plot commands in 2D and 3D. Probability Plots This section describes creating probability plots in R for both didactic purposes and for data analyses. To investigate the quality of the received signal, we measure the EVM and plot the constellation diagram for each of the first 100 decoded packets. The " interquartile range", abbreviated " IQR ", is just the width of the box in the box-and-whisker plot. This normalizes the x-axis with respect to the sampling rate. 
23dB gain of the Rx path. raw download clone embed report print MatLab 2. Extremely cheap \$5 or less active GPS antennas with SMA connectors can be found on eBay, Amazon or Aliexpress. The optimization workflow begins in MATLAB, where BuildingIQ engineers import and visualize 3 to 12 months of temperature, pressure, and power data comprising billions of data points. A quick peek at some of our 100 scores on our first IQ test shows a minimum of 1 and a maximum of 6. These discrepancies can distort the proximity calculations. audiowrite(filename,y,Fs) writes a matrix of audio data, y, with sample rate Fs to a file called filename. plot(audio1) Plotting the Data with MATLAB If you would like to create a data file that is readable by MATLAB you need to save the audio1 data in a MATLAB readable file. Pyzo is a free and open-source computing environment based on Python. I didn't proceed to plot a proper graph of the spectrum,because I don't know if the data I'm receiving are good. Group Members MAHMUDULHASAN Slide 2 3. MATLAB code: [y,fs]=wavread(‘sealion’); % import data. - to support your explanation. Color for each group, specified as the comma-separated pair consisting of 'Color' and a character vector, string array, cell array of character vectors, or rows of a three-column RGB matrix. Otherwise, plot uses the discrete values 1 through r as the time values, where r is the number of repeated measurements. Presentacin del programa. PLOTTING UNITE STEP AND RAMP FUNCTION IN MATLAB Wellcome to our presentation. Six m-files are written to develop this MATLAB program of OFDM simulation. m to demonstrate an application of differentiation to the quantitative analysis of a peak buried in an unstable background (e. Plot it between 0. Instructions: This Percentile to Z-score Calculator will compute the z-score associated to a given percentile that is provided by you, and a graph will be shown to represent this percentile. 
I am using Octave and I'm new to MATLAB/Octave; I have so far played around and managed to make a 3D scatter plot of data. How can I separate I and Q from that .mat file for further analysis? A box plot is constructed by drawing a box between the upper and lower quartiles with a solid line drawn across the box to locate the median. Figure 1 illustrates how to apply the 68-95-99.7 rule. The FFT also uses a window to minimize power spectrum distortion due to end-point discontinuities. The subplot function breaks the figure into a matrix specified by the user and selects the corresponding axes for the current plot. Syntax: subplot(m,n,p) divides the figure window into an m x n matrix of small axes and selects the p-th axes.
{}
Question: A disadvantage of secondary data is that the current researcher has no control over the accuracy of the data.
{}
#### Chapter 5 Geometry Section 5.6 Trigonometric Functions: Sine, et cetera # 5.6.2 Trigonometry in Triangles If one drives downhill on a road with a slope of five percent, then the height falls five metres for every 100 metres travelled horizontally. Here, the difference in height is considered in comparison to the horizontal line. Accordingly, the slope is $100 \%$ if the difference in height between two positions with a horizontal distance of $100 \mathrm{m}$ is $100 \mathrm{m}$. Geometrically, the connecting line segment between the two points is a diagonal of a square. Hence, the angle between the horizontal line and the diagonal, i.e. the road on which one moves, has a degree measure of $45{}^{\circ }$. In other words: An angle of $45{}^{\circ }$ corresponds to a slope of $\frac{100 \mathrm{m}}{100 \mathrm{m}}=1$, i.e. the ratio of the horizontal line segment to the vertical line segment is $1$. According to the intercept theorem, this ratio does not depend on the lengths of the individual segments. It only depends on the position of the two rays with respect to each other, i.e. the measure of the angle they enclose. If this assignment of a ratio of the line segments to an angle is also known for other angles, many constructive problems can be solved. For example, for a given angle the height can be determined. Even the question of the ratio that corresponds to an angle of $30{}^{\circ }$ shows, however, that in general it is not that simple to determine the assignment of a ratio of line segments to an angle. Therefore, the laboriously determined values were listed in mathematical tables so that they could easily be looked up again later. Now, these values are available practically everywhere, provided by calculators and computers. The most common assignments of an angle to a ratio of line segments are presented below. They are called circular functions or trigonometric functions.
The branch of mathematics dealing with their properties is called trigonometry. ##### Trigonometric Functions in the Right Triangle 5.6.1 Here, the most common circular functions are described as assignments of ratios of the sides in a right triangle to an angle. Here, $x$ denotes an angle in a right triangle that is not a right angle. The opposite (side) is the side opposite the angle $x$, and the other leg is called the adjacent (side). • The ratio of the opposite side $a$ to the adjacent side $b$ for an angle is called the tangent function: $\mathrm{tan}\left(x\right):=\frac{\text{opposite side}}{\text{adjacent side}}=\frac{a}{b}$ • The ratio of the adjacent side $b$ to the hypotenuse $c$ for an angle is called the cosine function: $\mathrm{cos}\left(x\right):=\frac{\text{adjacent side}}{\text{hypotenuse}}=\frac{b}{c}$ • The ratio of the opposite side $a$ to the hypotenuse $c$ for an angle is called the sine function: $\mathrm{sin}\left(x\right):=\frac{\text{opposite side}}{\text{hypotenuse}}=\frac{a}{c}$ The tangent function describes the assignment of the ratio of height to width to the angle of inclination, i.e. the slope. In Chapter 8 this is also relevant in the context of the geometrical interpretation of the derivative. According to the definition, the tangent function of the angle $\alpha$ is $\mathrm{tan}\left(\alpha \right)=\frac{a}{b}=\frac{a}{b}·\frac{c}{c}=\frac{a}{c}·\frac{c}{b}=\frac{\mathrm{sin}\left(\alpha \right)}{\mathrm{cos}\left(\alpha \right)} .$ Thus, it suffices to know the values of sine and cosine to be able to calculate the tangent function. ##### Example 5.6.2 Let a triangle with a right angle $\gamma =\frac{\pi }{2}=90{}^{\circ }$ be given. The side $c$ is of length $5 \mathrm{cm}$, and the side $a$ is of length $2.5 \mathrm{cm}$. Calculate the sine, cosine and tangent function of the angle $\alpha$.
The sine can be calculated immediately from the given values: $\mathrm{sin}\left(\alpha \right)=\frac{a}{c}=\frac{2.5 \mathrm{cm}}{5 \mathrm{cm}}=0.5 .$ To calculate the cosine, the length of the side $b$ is required - it can be obtained by means of Pythagoras' theorem: ${b}^{2}={c}^{2}-{a}^{2}$ Hence, $\mathrm{cos}\left(\alpha \right)=\frac{b}{c}=\frac{\sqrt{{c}^{2}-{a}^{2}}}{c}=\frac{\sqrt{{\left(5 \mathrm{cm}\right)}^{2}-{\left(2.5 \mathrm{cm}\right)}^{2}}}{5 \mathrm{cm}}\approx 0.866 .$ Thus, the tangent of the angle $\alpha$ is $\mathrm{tan}\left(\alpha \right)=\frac{\mathrm{sin}\left(\alpha \right)}{\mathrm{cos}\left(\alpha \right)}=\frac{0.5}{0.866}\approx 0.5773 .$ ##### Exercise 5.6.3 Determine some approximate values of the trigonometric functions sine, cosine and tangent graphically. Let a right triangle with the hypotenuse $c=5$ be given. Use Thales' circle to draw right triangles for the angles $\alpha \in \left\{10{}^{\circ };20{}^{\circ };30{}^{\circ };40{}^{\circ };45{}^{\circ };50{}^{\circ };60{}^{\circ };70{}^{\circ };80{}^{\circ }\right\} .$ Use a drawing scale of $1$ unit length $\stackrel{^}{=}2 \mathrm{cm}$, and fill in the measured values for the sides $a$ and $b$ in a table. From the measured values, calculate the sine, cosine, and tangent of each angle and decide for which functions values for $\alpha =0{}^{\circ }$ and $\alpha =90{}^{\circ }$ also exist. After that, plot the calculated values of sine and cosine against the angle $\alpha$. If we once again look closer at the results obtained in the last exercise, we can find different ways to interpret them, and then identify some relations. • With increasing angle $\alpha$ the opposite side $a$ increases and the adjacent side $b$ decreases. Likewise, $\mathrm{sin}\left(\alpha \right)$ behaves like $a$, and $\mathrm{cos}\left(\alpha \right)$ behaves like $b$. • As the angle $\alpha$ increases, the opposite side $a$ takes the same values as the adjacent side $b$ does when the angle decreases from $90{}^{\circ }$.
In the Thales circle, the two triangles with the values of $a$ and $b$ interchanged are two solutions for the construction of a right triangle with a given hypotenuse and a given altitude (see also Example 5.3.7). • In the right triangle the adjacent side of the angle $\beta =90{}^{\circ }-\alpha$ is the same side as the opposite side of the angle $\alpha$ (and vice versa). Thus, $\mathrm{sin}\left(\alpha \right)=\mathrm{cos}\left(90{}^{\circ }-\alpha \right)=\mathrm{cos}\left(\frac{\pi }{2}-\alpha \right)$ and $\mathrm{cos}\left(\alpha \right)=\mathrm{sin}\left(90{}^{\circ }-\alpha \right)=\mathrm{sin}\left(\frac{\pi }{2}-\alpha \right) .$ • For $\alpha =45{}^{\circ }$ the opposite side and adjacent side are equal, and thus sine and cosine are equal as well. This observation was used at the beginning of this section for the determination of the slope. • The tangent function, i.e. the ratio of $a$ to $b$, increases with increasing angle $\alpha$ from zero to "infinity". In the following example we will continue our considerations from the beginning of this section and use a triangle with an angle of $45{}^{\circ }$ to calculate the corresponding sine value exactly. ##### Example 5.6.4 Calculate the sine of the angle $\alpha =45{}^{\circ }$ now exactly, i.e. unlike in Exercise 5.6.3, where the sine was calculated from measured (and hence error-prone) values. If in a right triangle with $\gamma =90{}^{\circ }$ the angle $\alpha$ is equal to $45{}^{\circ }$, then, because of the formula for the sum of interior angles in a triangle, $\alpha +\beta +\gamma =\pi =180{}^{\circ }$, the angle $\beta$ also needs to be equal to $45{}^{\circ }=\pi /4$, and the two legs $a$ and $b$ are of equal length. A triangle with two sides of equal length is called an isosceles triangle.
We have: $\mathrm{sin}\left(\alpha \right)=\mathrm{sin}\left(45{}^{\circ }\right)=\frac{a}{c} .$ Moreover, by Pythagoras' theorem and $a=b$, we have ${c}^{2}={a}^{2}+{b}^{2}=2{a}^{2}$, i.e. $c=a·\sqrt{2}$, and hence $\mathrm{sin}\left(45{}^{\circ }\right)=\frac{a}{a·\sqrt{2}}=\frac{1}{\sqrt{2}}=\frac{1}{2}·\sqrt{2} .$ In Exercise 5.6.3 the value of the sine of $45{}^{\circ }$ was approximated by a value of $0.7$, which is quite close to the actual value of $\frac{1}{2}·\sqrt{2}$. In the next example we will calculate the sine of the angle $\alpha =60{}^{\circ }$. For this purpose, we first do not consider a right triangle but an equilateral triangle. By a clever decomposition of the triangle and by using another "auxiliary quantity" we will obtain the required result. ##### Example 5.6.5 Consider an equilateral triangle to calculate $\mathrm{sin}\left(60{}^{\circ }\right)$. As the name implies, the sides of this triangle are all of equal length, and the angles are also all of the same magnitude, namely $\alpha =\beta =\gamma =\frac{180{}^{\circ }}{3}=60{}^{\circ }=\frac{\pi }{3}$. According to the theorem for congruent triangles "sss", the triangle is defined uniquely by the specification of a side $a$. This triangle is constructed by drawing the side $a$ and then drawing a circle of radius $a$ around each endpoint of the side. Now, the intersection point of the two circles is the third vertex. This triangle is not right-angled. If an altitude $h$ is drawn on one of the sides $a$, the triangle can be divided into two congruent right triangles. We have: $\mathrm{sin}\left(\alpha \right)=\mathrm{sin}\left(60{}^{\circ }\right)=\frac{h}{a} .$ According to Pythagoras' theorem we have ${\left(\frac{a}{2}\right)}^{2}+{h}^{2}={a}^{2} .$ Therefore, ${h}^{2}={a}^{2}-\frac{{a}^{2}}{4}=\frac{3}{4}·{a}^{2}$, i.e. $h=\frac{1}{2}·\sqrt{3}·a$. As a result we obtain the required value $\mathrm{sin}\left(60{}^{\circ }\right)=\mathrm{sin}\left(\frac{\pi }{3}\right)=\frac{h}{a}=\frac{1}{2}·\sqrt{3} .$ From this triangle the sine of another angle can also be calculated: the altitude $h$ bisects the angle at the apex, such that in the two congruent smaller triangles this angle is $30{}^{\circ }=\frac{\pi }{6}$.
Now we have $\mathrm{sin}\left(30{}^{\circ }\right)=\mathrm{sin}\left(\frac{\pi }{6}\right)=\frac{a/2}{a}=\frac{1}{2} .$ ##### Exercise 5.6.6 Calculate the exact value of the cosine of the angles ${\alpha }_{1}=30{}^{\circ }$, ${\alpha }_{2}=45{}^{\circ }$, and ${\alpha }_{3}=60{}^{\circ }$. To do this, use the results obtained in the example above and in Exercise 5.6.3. The following small table lists the values for frequently used angles: In the first row denoted by $x$ the angle is given in radian measure, and in the last row denoted by $\alpha$ the angle is given in degree measure. $\begin{array}[t]{l|*{5}{c}} x & 0 & \tfrac{\pi}{6} & \tfrac{\pi}{4} & \tfrac{\pi}{3} & \tfrac{\pi}{2} \\[1mm] \hline \sin & 0 = \frac{1}{2} \cdot \sqrt{0} & \frac{1}{2} = \frac{1}{2} \cdot \sqrt{1} & \frac{1}{2} \cdot \sqrt{2} & \frac{1}{2} \cdot \sqrt{3} & \frac{1}{2} \cdot \sqrt{4} = 1 \\[1mm] \cos & 1 = \frac{1}{2} \cdot \sqrt{4} & \frac{1}{2} \cdot \sqrt{3} & \frac{1}{2} \cdot \sqrt{2} & \frac{1}{2} \cdot \sqrt{1} = \frac{1}{2} & \frac{1}{2} \cdot \sqrt{0} = 0 \\[1mm] \tan & 0 & \frac{\sqrt{3}}{3} & 1 & \sqrt{3} & - \\[1mm] \hline \alpha & 0^{\circ} & 30^{\circ} & 45^{\circ} & 60^{\circ} & 90^{\circ} \end{array}$ You should learn these values by heart. The values of the trigonometric functions for other angles are listed in tables or saved in your calculator. Hence, a height can be calculated very easily from an angle and a distance. Namely, if $s$ is the distance to a building with a flat roof, which is observed at an angle of $x$, then from $\mathrm{tan}\left(x\right)=\frac{h}{s}$ we have $h=s·\mathrm{tan}\left(x\right)$. Likewise, sine and cosine can be used to calculate lengths. This relation between angles and lengths is often used. For example, an area can be calculated in this way even if the required length is not given directly. In the following example, the altitude of a triangle is to be calculated.
Since the altitude $h$ starting at the vertex $C$ is perpendicular to the line containing the opposite side $c=\stackrel{‾}{AB}$, the foot of $h$ together with $C$ and $A$ or $B$, respectively, forms a right triangle. If an angle and the adjacent side are given, then the altitude can be calculated from $\mathrm{sin}\left(\alpha \right)=\frac{h}{b}$ or from $\mathrm{sin}\left(\beta \right)=\frac{h}{a}$, using standard notation. ##### Exercise 5.6.7 Calculate the area $F$ of a triangle with the sides $c=7$, $b=3$, and the angle $\alpha =30{}^{\circ }$ between the two sides $c$ and $b$. Result: $F=$
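The table of special values and the altitude-based area computation can be cross-checked numerically. The short Python script below is an added illustration, not part of the original exercise; it uses the relation derived above that the altitude on side $c$ is $h=b·\mathrm{sin}\left(\alpha \right)$, so the area is $F=\frac{1}{2}·c·h$:

```python
import math

# Check the table of special values: for the angles 0, 30, 45, 60, 90 degrees
# the sine is sqrt(k)/2 with k = 0, 1, 2, 3, 4, and the cosine runs backwards.
for k, deg in enumerate([0, 30, 45, 60, 90]):
    assert math.isclose(math.sin(math.radians(deg)), math.sqrt(k) / 2, abs_tol=1e-12)
    assert math.isclose(math.cos(math.radians(deg)), math.sqrt(4 - k) / 2, abs_tol=1e-12)

# Area from two sides and the included angle: the altitude on side c is
# h = b * sin(alpha), and the area is F = c * h / 2.
def triangle_area(b, c, alpha_deg):
    return 0.5 * b * c * math.sin(math.radians(alpha_deg))

print(triangle_area(3, 7, 30))  # Exercise 5.6.7: b = 3, c = 7, alpha = 30 degrees
```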
{}
# zbMATH — the first resource for mathematics Averaging of the Cauchy kernels and integral realization of the local residue. (English) Zbl 1186.14053 It is known that the Bochner-Martinelli integral formula can be obtained by averaging the Cauchy formula against certain positive measures. In this paper, similar formulas are obtained for a family of kernels of integral representations associated with a toric variety. These formulas were studied by the first author in some of his previous papers. Here, the kernels in question generalize the integral forms considered there. Applications are given to the integral realization of the local residue in algebraic geometry. The paper is of interest to a broad circle of specialists. ##### MSC: 14M25 Toric varieties, Newton polyhedra, Okounkov bodies 32A26 Integral representations, constructed kernels (e.g., Cauchy, Fantappiè-type kernels) 32A27 Residues for several complex variables Full Text: ##### References: [1] Griffiths P., Harris J.: Principles of Algebraic Geometry, pp. 813. Wiley, New York (1978) · Zbl 0408.14001 [2] Kytmanov A.A.: An analog of the Fubini-Study form for two-dimensional toric varieties. Sib. Math. J. 44(2), 286–297 (2003) · Zbl 1081.32004 · doi:10.1023/A:1022936921419 [3] Kytmanov A.A.: An analog of the Bochner-Martinelli representation in d-circular polyhedra in the space $${\mathbb{C}^d}$$ . Russ. Math. 49(3), 49–55 (2005) · Zbl 1117.32004 [4] Kytmanov A.A.: Integral representations and volume forms on Hirzebruch surfaces. J. Sib. Federal Univ. 2, 3–9 (2008) [5] Shaimkulov B.A., Tsikh A.K.: Integral realizations of Grothendieck residue and its transformation under compositions (Russian). Vestnik KrasGU Fiz. Mat. Nauki Krasnoyarsk 1, 151–155 (2005) [6] Shchuplev A.V., Tsikh A.K., Yger A.: Residual kernels with singularities on coordinate planes. Proc. Steklov Inst. Math. 253, 256–274 (2006) · Zbl 1351.32010 · doi:10.1134/S0081543806020210 [7] Shchuplev, A.V.: Toric Varieties and Residues.
Doctoral Thesis, Department of Math., Stockholm Univ., p. 70 (2007) [8] Tong T.L.: Integral representation formulae and Grothendieck residue symbol. Am. J. Math. 4, 904–917 (1973) · Zbl 0291.32008 · doi:10.2307/2373701 [9] Tsikh A., Yger A.: Residue currents. J. Math. Sci. New York 120(6), 1916–1971 (2004) · Zbl 1070.32003 · doi:10.1023/B:JOTH.0000020710.57247.b7
{}
# Problem: Helium is collected over water at 25°C and 1.00 atm total pressure. What total volume of gas must be collected to obtain 0.586 g helium? (At 25°C the vapor pressure of water is 23.8 torr.)
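A sketch of the standard solution route (Dalton's law for the helium partial pressure, then the ideal gas law), written as a small Python script; the gas constant and helium molar mass used are the usual textbook values:

```python
# Dalton's law: P_total = P_He + P_H2O, so P_He = 1.00 atm - 23.8 torr.
P_total_atm = 1.00
P_water_atm = 23.8 / 760.0          # convert torr to atm
P_he_atm = P_total_atm - P_water_atm

n_he = 0.586 / 4.003                # moles of He (molar mass ~4.003 g/mol)

R = 0.08206                         # L*atm/(mol*K)
T = 298.15                          # 25 degrees C in kelvin

# The ideal gas law applied to the helium partial pressure gives the total
# volume of the wet gas mixture (both gases occupy the same volume).
V = n_he * R * T / P_he_atm
print(round(V, 2))                  # about 3.7 L
```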
{}
The unit of e.m.f. of a cell is: A. dyne B. volt C. ampere D. joule Verified Hint: e.m.f. (electromotive force) is defined as a battery's energy per coulomb of charge passing through it, so the unit follows from the basic expression. The e.m.f. of a cell can be measured with a voltmeter; it is the voltage across the cell. Electromotive force is given by the expression $E=\dfrac{W}{Q}$. Since work $W$ is measured in joules and charge $Q$ in coulombs, the unit of e.m.f. is the joule per coulomb, i.e. the volt. The correct option is B.
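As a minimal numerical illustration of $E=\dfrac{W}{Q}$ (the figures below are invented for the example):

```python
# emf = work done per unit charge: E = W / Q
W = 18.0   # joules of work done by the cell (hypothetical)
Q = 6.0    # coulombs of charge moved (hypothetical)

E = W / Q  # joules per coulomb = volts
print(E)   # 3.0 (volts)
```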
{}
## Abstract Numerical simulations are used to compare a pin-fin cooling channel and a multi-impingement cooling channel in terms of heat transfer and flow, and to design the multi-impingement channel through the parameters of impinging distance and impingement-jet-plate thickness. The Reynolds number ranges from 1e4 to 6e4. The dimensionless impinging distance takes the values 0.60, 1.68 and 2.76, and the dimensionless impinging-jet-plate thickness takes the values 0.5, 1.0 and 1.5. The endwall surface, the pin-fin surface and the impinging-jet-plate surface are the three surfaces investigated for channel heat transfer performance. The heat transfer coefficient $h$ and the augmentation factor $Nu/Nu_{0}$ are selected to measure the surface heat transfer, and the friction coefficient $f$ is chosen to evaluate the channel flow characteristics. The impinging-jet-plate surface has a higher heat transfer coefficient and a larger area than the pin-fin surface, which are the main reasons for the improved heat transfer performance of the multi-impingement cooling channel. Reducing the impinging distance clearly improves the endwall surface heat transfer and enhances the impingement-plate surface heat transfer to some extent, and decreasing the thickness of the impinging-jet-plate significantly increases its own heat transfer coefficient; both measures, however, increase the cooling air flow loss. ## Abstract Ceramic Matrix Composites (CMCs) are primary candidates for advanced gas turbine engine applications, which require intensive high-temperature tests and validation. Before CMCs are used in engine hot sections, many tests need to be done, especially thermal tests. A thermal test rig has been set up to simulate the engine turbine thermal environment. Propane gas is used to simulate the practical aviation fuel, and compressed air with a flow regulator is used as the cooling medium.
The capabilities and limitations of the test facility have been calibrated and discussed in this paper. A CMC turbine vane with an internal cooling path was tested on this burning rig. The results showed that the CMC vane could withstand the 1200 ℃ thermal cycling test, but the coating had disappeared. It has been proved that such a test rig and method can simulate the thermal boundary conditions of turbine vanes and blades. ## Abstract Control of a Mach 1.5 elliptic jet with ventilated triangular tabs is studied experimentally, in the presence of different levels of pressure gradient at the nozzle exit. Three different sets of ventilated tabs, with circular, triangular and trapezoidal ventilations, were studied. Two tabs were placed, at the ends of the major and minor axes, at the exit of the elliptic nozzle of aspect ratio 3.37. The mixing enhancement caused by these tabs was studied in the presence of adverse and favorable pressure gradients, corresponding to nozzle pressure ratios (NPR) from 3 to 8. For a Mach 1.5 jet, NPR 3 corresponds to an 18 % adverse pressure gradient and NPR 8 to a 118 % favorable pressure gradient. The results for the ventilated tabs are compared with unventilated truncated triangular tabs of identical geometry. The difference between the mixing-promoting efficiency of the unventilated and ventilated tabs is only marginal (around 5–6 %). All tabs cause jet bifurcation and weaken the waves in the jet core. The tab with trapezoidal ventilation, at NPR 3, promotes mixing to the extent of reducing the core by about 92 %. At higher NPRs the mixing caused by the unventilated tab is slightly better than that of the ventilated tabs. ## Abstract In order to improve compressor performance using a new design method, which originates from the fins of the humpback whale, experimental tests and numerical simulations were undertaken to investigate the influence of the tubercle leading edge on the aerodynamic performance of a linear compressor cascade with a NACA 65–010 airfoil.
The results demonstrate that the tubercle leading edge can improve the aerodynamic performance of the cascade in the post-stall region by reducing the total pressure loss, with a slight increase in total pressure loss in the pre-stall region. The tubercles on the leading edge of the blades cause the flow to migrate from the peak to the valley on the blade surface around the tubercle leading edge by the butterfly flow. The tubercle leading edge generates vortices similar to those created by vortex generators, splitting the large-scale separation region into multiple smaller regions. ## Abstract The following paper presents the dynamic leakage rate and coupled interaction for a variable-speed rotor-labyrinth (LABY) seal, with rotating speeds from 18 krpm to 30 krpm. Variable-speed rotor vibration characteristics are incorporated into transient computational fluid dynamics (CFD) calculations as boundary conditions of the seal flow field, to show the real-time effect of rotor dynamics on the seal flow field. The leakage rate across a variable-speed rotor seal increases with rotor vibration, but this effect is more prominent at lower speeds than at higher speeds. The leakage characteristic is determined by differences in rotor vibration amplitude rather than by rotating speed. The results also reveal that the aerodynamic forces of the labyrinth seal flow field can improve rotor stability, and that this interaction between rotor and seal decreases with increasing rotating speed. ## Abstract The present numerical investigation considers the leading edge (LE) of a nozzle guide vane (NGV) with five rows of impingement holes combined with five rows of film-cooling holes for the secondary coolant flow path analysis. The coolant mass flow rate variations in all the LE rows of film holes externally subjected to the hot main stream were obtained by a three-dimensional computational analysis of the NGV with a staggered array of film-cooled rows.
The experiments were carried out for the same NGV using the Particle Image Velocimetry technique to determine the effused coolant jet exit velocity at the stagnation row of film holes, as mentioned in reference [Kukutla PR, Prasad BVSSS. Secondary flow visualization on stagnation row of a combined impingement and film cooled high pressure gas turbine nozzle guide vane using PIV technique, J Visualization, 2017; DOI: 10.1007/s12650-017-0434-6]. In this paper, results are presented for three different mass flow rates ranging from 0.0037 kg/s to 0.0075 kg/s supplied at the Front Impingement Tube (FIT) plenum. A mainstream velocity of 6 m/s was maintained for all three coolant mass flow rates. The secondary coolant flow distribution was evaluated from the SH1 to the SH5 row of film holes. The exit coolant mass flow rate of each showerhead film-hole row varied in proportion to the coolant mass flow rate supplied at the FIT cooling channel. The corresponding minimum and maximum values and their film-hole locations changed accordingly. The same behaviour held for the coolant pressure drop and temperature rise from the SH1 to the SH5 row of film holes. Owing to the interaction between the hot main stream and the coolant that effuses out of the film holes, occasional hot gas ingestion was noticed at certain flow rates. This caused nonlinear distributions of mass flow, pressure drop and temperature rise. The minimum-flow-rate results indicate oxidation of the NGV material near the film-cooled holes, and the effect of hot gas ingestion on the ejected film-cooling jet recommends an effective oxidation-resistant material, which in turn leads to better durability of the NGV surface. ## Abstract Thermal choke is commonly employed in a fixed-geometry RBCC combustor to eliminate the need for physically variable exit geometry.
This paper presents detailed numerical studies based on a two-dimensional integration model to characterize thermal choke behaviour driven by various embedded-rocket operations in an RBCC engine at Mach 4 in ramjet mode. The influences of different embedded-rocket operations, as well as the corresponding secondary fuel injection adjustment, on the thermal choke generation process, the related thermal throat feature, and the engine performance are analyzed. Operation of the embedded rocket brings significant effects on the thermal choke behaviour: (1) the thermal throat feature becomes much more irregular under the influence of the rocket plume; (2) the occupancy range in the combustor is significantly lengthened; (3) the asynchrony of the flow in different regions accelerating to sonic speed becomes much more significant; (4) as the rocket throttling ratio decreases, the thermal choke position moves steadily upstream as a whole, and the heated flow in the top region that is directly affected by the rocket plume reaches sonic speed more rapidly. Finally, we can conclude that an appropriate secondary fuel injection adjustment can provide a higher integration thrust for the RBCC engine with the embedded rocket operating, while the thermal choke is stably controlled and the increased heat release and combustion pressure are well balanced by the variations of the pre-combustion shocks in the inlet isolator.
{}
#### Bri MT • VIC MVP - 2018 • ATAR Notes Legend • Posts: 3844 • invest in wellbeing so it can invest in you • Respect: +2857 ##### Re: VCE Biology Question Thread « Reply #12720 on: July 02, 2020, 07:55:33 pm » +9 Does non-random mating introduce variation into a population? I think this is a very broad question which is hard to give a quick answer to, but my short answer is: probably not in general, since you said "introduce". Heads up that I'm going well and truly outside the study design here: One aspect of genetic variation is heterozygosity, which is influenced by non-random mating, but in this case you'd already have heterozygosity present (just at a different level) under random mating (e.g. under HW equilibrium) - it wouldn't be introduced by it. There is the argument that outbreeding or disassortative mating would increase allelic diversity (which is positively correlated with heterozygosity) by increasing the probability of reproduction with organisms from other populations. Imo, this would only introduce variation if you had an organism leaving, mating in another population with unique alleles, then producing offspring back in the original population. Others have alluded above that assortative mating or inbreeding (these are different but similar terms) would decrease genetic diversity whereas the opposites would increase it. This is true but may not be reflected in all measures of genetic diversity and is also influenced by metapopulation dynamics. For example, Wright's $F_{IS}$ reflects how random mating is, whereas $F_{ST}$ reflects variation between subpopulations; and an isolated population with no dispersal will behave differently from a stepping-stone-type metapopulation structure.
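The heterozygosity and $F_{IS}$ ideas in the reply above can be sketched numerically. Under Hardy-Weinberg equilibrium with two alleles at frequencies $p$ and $q=1-p$, the expected heterozygosity is $2pq$, and Wright's $F_{IS}=1-H_{obs}/H_{exp}$ measures the departure from random mating; the allele frequency and observed heterozygosity below are invented for illustration:

```python
# Expected heterozygosity under Hardy-Weinberg equilibrium (two alleles).
def expected_heterozygosity(p):
    q = 1.0 - p
    return 2.0 * p * q

# Wright's inbreeding coefficient F_IS = 1 - H_obs / H_exp:
# positive under inbreeding/assortative mating (heterozygote deficit),
# negative under outbreeding/disassortative mating (heterozygote excess).
def f_is(h_obs, h_exp):
    return 1.0 - h_obs / h_exp

h_exp = expected_heterozygosity(0.3)   # p = 0.3 -> H_exp = 2*0.3*0.7 = 0.42
print(h_exp)
print(f_is(0.30, h_exp))               # heterozygote deficit -> F_IS > 0
```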
#### Chocolatepistachio • Trendsetter • Posts: 119 • Respect: +2 ##### Re: VCE Biology Question Thread « Reply #12721 on: July 02, 2020, 10:47:00 pm » +1 How are homologous chromosomes randomly assorted into the two daughter cells resulting from meiosis 1? #### 1729 • MOTM: July 20 • Forum Regular • Posts: 83 • The best way to predict the future is to create it • Respect: +93 ##### Re: VCE Biology Question Thread « Reply #12722 on: July 03, 2020, 09:45:20 am » +6 How are homologous chromosomes randomly assorted into the two daughter cells resulting from meiosis 1? That's what recombination is: ---A---B---C--- ---a---b---c--- Imagine this arrangement of alleles; recombination would do this: ---a---B---C--- ---A---b---c--- This could happen to any allele; switching the position of alleles leads to random assortment, so alleles are 'independent' in this sense. Does that make sense? Please correct me if I'm wrong because I'm rusty on this stuff. What my avatar is If you are wondering about my avatar, It was inspired by a problem I did which asked me to prove that the graphs of xy = 1 and y^2 = x^2 + 2 intersected at a 90 degree angle. The resulting figure in the middle kinda like a 8-sided square in hyperbolic space. It also resembles the conformal map of the complex square root. Subjects: EngLang, Lit, Methods, Spesh, Chemistry, Biology, Physics.
Goals: Melbuni | Doctor #### Chocolatepistachio • Trendsetter • Posts: 119 • Respect: +2 ##### Re: VCE Biology Question Thread « Reply #12723 on: July 03, 2020, 11:28:19 am » +1 No I don't understand it #### 1729 • MOTM: July 20 • Forum Regular • Posts: 83 • The best way to predict the future is to create it • Respect: +93 ##### Re: VCE Biology Question Thread « Reply #12724 on: July 03, 2020, 12:18:00 pm » +7 No I don't understand it In metaphase, the homologous chromosomes (maternal/paternal) line up on the equator in either: - Maternal copy left/Paternal right - Paternal left/Maternal right This process is random, hence the name random assortment. #### Chocolatepistachio • Trendsetter • Posts: 119 • Respect: +2 ##### Re: VCE Biology Question Thread « Reply #12725 on: July 03, 2020, 05:42:39 pm » -1 Chromosome replication sometimes results in abnormal formation of chromosomes. What possible problems would arise if a chromosome was a) missing a centromere and b) dicentric - had an extra centromere? #### Owlbird83 • Forum Obsessive • Posts: 310 • Respect: +336 ##### Re: VCE Biology Question Thread « Reply #12726 on: July 03, 2020, 06:42:45 pm » +5 Chromosome replication sometimes results in abnormal formation of chromosomes.
What possible problems would arise if a chromosome was a) missing a centromere and b) dicentric- had an extra centromere If there's no centromere, the spindle fibres won't be able to attach to the kinetochores (protein complex on centromere), if the spindle fibres don't attach, the chromosomes can't be pulled to the cell poles, and mitosis cannot happen. (I googled this part, so someone else answer if they have a better understanding) The spindle fibres can attach to both centromeres, and pull them in different directions, this leads to the chromosomes breaking, and reattaching in different places. I found it a bit hard to understand from the things I read, but this short youtube clip I found makes it really clear. https://www.youtube.com/watch?v=VuzeD_VyBO4 Hope that helps 2018: Biology 2019: Chemistry, Physics, Math Methods, English, Japanese 2020: Bachelor of Psychology (Monash) #### 1729 • MOTM: July 20 • Forum Regular • Posts: 83 • The best way to predict the future is to create it • Respect: +93 ##### Re: VCE Biology Question Thread « Reply #12727 on: July 03, 2020, 09:57:11 pm » +7 Chromosome replication sometimes results in abnormal formation of chromosomes. What possible problems would arise if a chromosome was a) missing a centromere and b) dicentric- had an extra centromere If a centromere is not present, the split can't occur anymore... or well.. the cell cycle actually wouldn't proceed at all. It's the most important part of a chromosome. These sorts of issues are often found in cancerous cells. If a chromosome is dicentric, it's also unstable. This is because the microtubules will pull either centromere to different ends of the cell during mitosis, which basically creates something called a chromosome bridge. This breaks the chromosome open, letting DNA leak out. 
Looks like this: [image of a chromosome bridge]

#### Chocolatepistachio
• Trendsetter • Posts: 119 • Respect: +2
##### Re: VCE Biology Question Thread « Reply #12728 on: July 03, 2020, 11:01:49 pm » 0
Why do fish living in freshwater produce urine that contains ammonia? How does the structure of glycoproteins and glycolipids relate to their function?

#### Sine
• National Moderator • ATAR Notes Legend • Posts: 4375 • Respect: +1458
##### Re: VCE Biology Question Thread « Reply #12729 on: July 03, 2020, 11:09:44 pm » +6
What have you thought about so far? It is important that when anyone asks a content-related question they include their current level of understanding, so that others can target the user's weaknesses and misconceptions and address those directly.

#### Chocolatepistachio
• Trendsetter • Posts: 119 • Respect: +2
##### Re: VCE Biology Question Thread « Reply #12730 on: July 03, 2020, 11:44:25 pm » +2
In a freshwater fish, water diffuses into the fish via the gills, and there is diffusion of salt from the gills.

#### 1729
• MOTM: July 20 • Forum Regular • Posts: 83 • The best way to predict the future is to create it • Respect: +93
##### Re: VCE Biology Question Thread « Reply #12731 on: July 04, 2020, 10:47:10 am » +8
Why do fish living in freshwater produce urine that contains ammonia? How does the structure of glycoproteins and glycolipids relate to their function?

Hey there!
I thought I'd share some of my notes relating to this topic. For the first part of the question: [image of notes]. And for the second one (brief overview): [image of notes]. (Flexing how good my handwriting was two years ago; now it's not so good.)

#### Chocolatepistachio
• Trendsetter • Posts: 119 • Respect: +2
##### Re: VCE Biology Question Thread « Reply #12732 on: July 04, 2020, 06:37:24 pm » 0
Thanks! What is the difference between discontinuous and continuous inheritance? Why do your fingers get wrinkled when in water for a long time? Is it because of water going in due to osmosis?

#### Owlbird83
• Forum Obsessive • Posts: 310 • Respect: +336
##### Re: VCE Biology Question Thread « Reply #12733 on: July 04, 2020, 07:13:25 pm » +5
Thanks! What is the difference between discontinuous and continuous inheritance? Why do your fingers get wrinkled when in water for a long time? Is it because of water going in due to osmosis?

Discontinuous inheritance is when a trait is controlled by one or only a small number of genes, for example eye colour: there's blue/(green) or brown. Continuous variation is when a trait is controlled by a number of genes (polygenic); for example, height is continuous, and there is not just a couple of set heights that people are: there is variation between short and tall and everywhere in between.

I thought this too, actually, but I looked it up to make sure and found:

Quote: Pruney fingers occur when the nervous system sends a message to the blood vessels to become narrower.
The narrowed blood vessels reduce the volume of the fingertips slightly, causing loose folds of skin that form wrinkles.

So vasoconstriction occurs in the finger blood vessels, and osmosis isn't the cause of the wrinkling.

Edit: Also, I love your notes 1729!! So pretty and satisfying to look at! (Is the black pen a Pentel Touch pen? I have one and I love it, but I have a problem with needing to buy too many brush pens.)
« Last Edit: July 04, 2020, 07:19:40 pm by Owlbird83 »

#### 1729
• MOTM: July 20 • Forum Regular • Posts: 83 • The best way to predict the future is to create it • Respect: +93
##### Re: VCE Biology Question Thread « Reply #12734 on: July 04, 2020, 07:29:20 pm » +5
Why do your fingers get wrinkled when in water for a long time? Is it because of water going in due to osmosis?

I don't think you should be asking googlable questions on the forum (like your second one). The people who answer your questions are more here to provide clarification on concepts you don't understand. So if you google the answer and it doesn't make sense, then ask, and include what you don't understand. (This just makes it easier for me to help.)

What is the difference between discontinuous and continuous inheritance?

In regards to your first question: discontinuous inheritance is affected by the alleles of a single gene, and continuous inheritance is a combination of several genes and their alleles. To elaborate, think of it this way: discrete inheritance means you either have the trait or you don't, whilst with continuous inheritance you can have varying levels of it.

Edit: Also, I love your notes 1729!! So pretty and satisfying to look at! (Is the black pen a Pentel Touch pen? I have one and I love it, but I have a problem with needing to buy too many brush pens.)

Thank you so much! Most of it was written with Pentel Touch. I prefer the metal tip, or fountain pens or gel pens. Muji pens are the best though.
However, the pen that I used for those notes was the Pentel EnerGel 0.5.
« Last Edit: July 04, 2020, 09:29:01 pm by 1729 »
# Infinite dimensional subspaces of $L^1$

Suppose that $X$ is an infinite dimensional subspace of $L^{1}$. In some cases it is true that $X$ contains an isomorphic copy of an infinite dimensional Hilbert space. However, this is not the case when $X$ is a subspace of $\ell_1$ (an orthonormal basis converges weakly to zero, but not strongly). I am curious whether this is the only obstacle. More precisely: is it true that if an infinite dimensional subspace $X$ of $L^1$ does not contain an infinite dimensional Hilbertian subspace then it embeds in $\ell_1$? If not, do we know anything about such subspaces? I know that the question is a bit vague, but I hope that satisfying answers can be given.

• Are you trying to understand which subspaces of $L_p$ embed into $\ell_p$? They are characterized for $1<p<\infty$ and much is known for $p=1$. – Bill Johnson Oct 10 '14 at 14:50
• @BillJohnson, not really, I am trying to investigate non-maximal subspaces of maximal operator spaces, using methods from Banach space theory in this case. – Mateusz Wasilewski Oct 10 '14 at 17:35

## 1 Answer

$L^1$ contains a copy of $\ell_q$ for every $q\in[1,2]$; I will come back and provide an original reference shortly, but to read about it you probably can't do better than the book Topics in Banach space theory by Albiac and Kalton. More information in the direction of your question was provided by David Aldous, who showed that every infinite dimensional subspace of $L^1$ contains a subspace isomorphic to $\ell_q$ for some $q\in [1,2]$. Aldous' paper is Subspaces of $L^1$, via random measures, in volume 267 of Transactions of the AMS. Soon after Aldous' result, Krivine and Maurey proved a more general result, namely that every stable Banach space contains a copy of some $\ell_p$. Also, I think that David Garling published (in Lecture Notes in Mathematics?)
an account of the work of Aldous and of Krivine and Maurey; when I get a spare moment I will come back and update the references with additional information.

• Ok, thank you very much; I forgot about stable random variables. Of course, $\ell_q$ for $q < 2$ cannot contain a subspace isomorphic to $\ell_2$, by Pitt's theorem. – Mateusz Wasilewski Oct 10 '14 at 12:31
C.5 Exercises

For the exercises in this section, first start by specifying the appropriate queueing model needed to solve the exercise using Kendall's notation. Then, specify the parameters of the model, e.g. $$\lambda_{e}$$, $$\mu$$, $$c$$, size of the population, size of the system, etc. Specify how and what you would compute to solve the problem. Be as specific as possible by specifying the equations needed. Then, compute the quantities if requested. You might also try to solve the problems via simulation.

Exercise C.1 True or False: In a queueing system with random arrivals and random service times, the performance will be best if the arrival rate is equal to the service rate, because then there will not be any queueing.

Exercise C.2 The Burger Joint in the UA food court uses an average of 10,000 pounds of potatoes per week. The average number of pounds of potatoes on hand is 5,000. On average, how long do potatoes stay in the restaurant before being used? What queueing concept is used to solve this problem?

Exercise C.3 Consider a single-pump gas station where the arrival process is Poisson with a mean time between arrivals of 10 minutes. The service time is exponentially distributed with a mean of 6 minutes. Specify the appropriate queueing model needed to solve the problem using Kendall's notation. Specify the parameters of the model and what you would compute to solve the problem. Be as specific as possible by specifying the equations needed. Then, compute the desired quantities.

1. What is the probability that you have to wait for service?
2. What is the mean number of customers at the station?
3. What is the expected time waiting in the line to get a pump?

Exercise C.4 Suppose an operator has been assigned the responsibility of maintaining 3 machines. For each machine, the probability distribution of the running time before a breakdown is exponentially distributed with a mean of 9 hours.
The repair time also has an exponential distribution, with a mean of 2 hours. Specify the appropriate queueing model needed to solve the problem using Kendall's notation. Specify the parameters of the model and what you would compute to solve the problem. Be as specific as possible by specifying the equations needed. Then, compute the desired quantities.

1. What is the probability that the operator is idle?
2. What is the expected number of machines that are running?
3. What is the expected number of machines that are not running?

Exercise C.5 SuperFastCopy wants to install self-service copiers, but cannot decide whether to put in one or two machines. They predict that arrivals will be Poisson with a rate of 30 per hour, and the time spent copying is exponentially distributed with a mean of 1.75 minutes. Because the shop is small, they want the probability of 5 or more customers in the shop to be small, say less than 7%. Make a recommendation based on queueing theory to SuperFastCopy.

Exercise C.6 Each airline passenger and his or her carry-on baggage must be checked at the security checkpoint. Suppose XNA averages 10 passengers per minute with exponential inter-arrival times. To screen passengers, the airport must have a metal detector and baggage X-ray machines. Whenever a checkpoint is in operation, two employees are required (one operates the metal detector, one operates the X-ray machine). The passenger goes through the metal detector and simultaneously their bag goes through the X-ray machine. A checkpoint can check an average of 12 passengers per minute according to an exponential distribution.

1. What is the probability that a passenger will have to wait before being screened?
2. On average, how many passengers are waiting in line to enter the checkpoint?
3. On average, how long will a passenger spend at the checkpoint?

Exercise C.7 Two machines are being considered for processing a job within a factory.
The first machine has an exponentially distributed processing time with a mean of 10 minutes. For the second machine, the vendor has indicated that the mean processing time is 10 minutes, but with a standard deviation of 6 minutes. Using queueing theory, which machine is better in terms of the average waiting time of the jobs?

Exercise C.8 Customers arrive at a one-window drive-in bank according to a Poisson distribution with a mean of 10 per hour. The service time for each customer is exponentially distributed with a mean of 5 minutes. There are 3 spaces in front of the window, including that for the car being served. Other arriving cars can wait outside these 3 spaces. Specify the appropriate queueing model needed to solve the problem using Kendall's notation. Specify the parameters of the model and what you would compute to solve the problem. Be as specific as possible by specifying the equations needed. Then, compute the desired quantities.

1. What is the probability that an arriving customer can enter one of the 3 spaces in front of the window?
2. What is the probability that an arriving customer will have to wait outside the 3 spaces?
3. How long is an arriving customer expected to wait before starting service?
4. How many spaces should be provided in front of the window so that an arriving customer can wait in front of the window at least 20% of the time? In other words, the probability of at least one open space must be greater than 20%.

Exercise C.9 Joe Rose is a student at Big State U. He does odd jobs to supplement his income. Job requests come every 5 days on average, but the time between requests is exponentially distributed. The time for completing a job is also exponentially distributed, with a mean of 4 days.

1. What would you compute to find the chance that Joe will not have any jobs to work on?
2. What would you compute to find the average value of the waiting jobs if Joe gets about $25 per job?
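Several of the exercises above (C.3, C.8, C.9) boil down to the standard M/M/1 formulas. As an illustrative sketch (not part of the exercise text; the helper name is made up here, and the rates used are the Exercise C.3 values, arrivals every 10 minutes and service averaging 6 minutes):

```python
# Hypothetical helper for basic M/M/1 quantities.
# Formulas: rho = lam/mu, P(wait) = rho, L = rho/(1 - rho), Wq = lam/(mu*(mu - lam)).
def mm1(lam, mu):
    if lam >= mu:
        raise ValueError("M/M/1 requires lam < mu for stability")
    rho = lam / mu
    return {
        "rho": rho,                     # utilization = P(server busy) = P(arrival waits)
        "L": rho / (1 - rho),           # mean number in system
        "Wq": lam / (mu * (mu - lam)),  # mean time waiting in queue
    }

# Exercise C.3: mean time between arrivals 10 min -> lam = 6/hr;
# mean service time 6 min -> mu = 10/hr.
m = mm1(6.0, 10.0)
print(m["rho"], m["L"], m["Wq"])  # approximately 0.6, 1.5, 0.15 hr (9 minutes)
```

The same helper answers C.3's three questions directly: P(wait) = 0.6, mean number at the station = 1.5, and mean wait for a pump = 0.15 hours.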
Exercise C.10 The manager of a bank must determine how many tellers should be available. For every minute a customer stands in line, the manager believes that a delay cost of 5 cents is incurred. An average of 15 customers per hour arrive at the bank. On average, it takes a teller 6 minutes to complete a customer's transaction. It costs the bank $9 per hour to have a teller available. Inter-arrival and service times can be assumed to be exponentially distributed.

1. What is the minimum number of tellers that should be available in order for the system to be stable (i.e. not have an infinite queue)?
2. If the system has 3 tellers, what is the probability that there will be no one in the bank?
3. What is the expected total cost of the system per hour when there are 2 tellers?

Exercise C.11 You have been hired to analyze the need for loading dock facilities at a trucking terminal. The present terminal has 4 docks on the main building. Any trucks that arrive when all docks are full are assigned to a secondary terminal, which is a short distance away from the main terminal. Assume that the arrival process is Poisson with a rate of 5 trucks each hour. There is no available space at the main terminal for trucks to wait for a dock. At the present time, nearly 50% of the arriving trucks are diverted to the secondary terminal. The average service time per truck is two hours at the main terminal and 3 hours at the secondary terminal, both exponentially distributed.

Two proposals are being considered. The first proposal is to expand the main terminal by adding docks so that at least 80% of the arriving trucks can be served there, with the remainder being diverted to the secondary terminal. The second proposal is to add a holding area that can accommodate up to 8 trucks. Then, only when the holding area is full will trucks be diverted to the secondary terminal.

What queueing model should you use to analyze the first proposal? State the model and its parameters.
State what you would do to determine the required number of docks so that at least 80% of the arriving trucks can be served under the first proposal. Note: you do not have to compute anything. What model should you use to analyze the second proposal? State the model and its parameters.

Exercise C.12 Sly's convenience store operates a two-pump gas station. The lane leading to the pumps can house at most five cars, including those being serviced. Arriving cars go elsewhere if the lane is full. The distribution of arriving cars is Poisson with a mean of 20 per hour. The time to fill up and pay for the purchase is exponentially distributed with a mean of 6 minutes.

1. Specify, using queueing notation, exactly what you would compute to find the percentage of cars that will seek business elsewhere.
2. Specify, using queueing notation, exactly what you would compute to find the utilization of the pumps.

Exercise C.13 An airline ticket office has two ticket agents answering incoming phone calls for flight reservations. In addition, two callers can be put on hold until one of the agents is available to take the call. If all four phone lines (both agent lines and the hold lines) are busy, a potential customer gets a busy signal, and it is assumed that the call goes to another ticket office and that the business is lost. The calls and attempted calls occur randomly (i.e. according to a Poisson process) at a mean rate of 15 per hour. The length of a telephone conversation has an exponential distribution with a mean of 4 minutes.

1. Specify, using queueing notation, exactly what you would compute to find the probability of losing a potential customer.
2. What would you compute to find the probability that an arriving phone call will not start service immediately but will be able to wait on a hold line?

Exercise C.14 SuperFastCopy has three identical copying machines. When a machine is being used, the time until it breaks down has an exponential distribution with a mean of 2 weeks.
A repair person is kept on call to repair the machines. The repair time for a machine has an exponential distribution with a mean of 0.5 weeks. The downtime cost for each copying machine is $100 per week.

1. Let the state of the system be the number of machines not working. Construct a state transition diagram for this queueing system.
2. Write an expression using queueing performance measures to compute the expected downtime cost per week.

Exercise C.15 NWH Cardiac Care Unit (CCU) has 5 beds, which are virtually always occupied by patients who have just undergone major heart surgery. Two registered nurses (RNs) are on duty in the CCU in each of the three 8-hour shifts. About every two hours, following an exponential distribution, one of the patients requires a nurse's attention. The RN will then spend an average of 30 minutes (exponentially distributed) assisting the patient and updating medical records regarding the problem and care provided.

1. What would you compute to find the average number of patients being attended by the nurses?
2. What would you compute to find the average time that a patient spends waiting for one of the nurses to arrive?

Exercise C.16 HJ Bunt Transport Company maintains a large fleet of refrigerated trailers. For the purposes of this problem, assume that the number of refrigerated trailers is conceptually infinite. The trailers require service on an irregular basis in the company-owned and operated service shop. Assume that the arrival of trailers to the shop is approximated by a Poisson distribution with a mean rate of 3 per week. The length of time needed for servicing a trailer varies according to an exponential distribution with a mean service time of one-half week per trailer. The current policy is to utilize a centralized, contracted, outsourced service center whenever more than two trailers are in the company shop, so that at most one trailer is allowed to wait. Assume that there is currently 1 mechanic in the company shop.
Specify, using Kendall's notation, the correct queueing model for this situation, including the appropriate parameters. What would you compute to determine the expected number of repairs that are outsourced per week?

Exercise C.17 Rick is the manager of a small barber shop at Big State U. He hires one barber. Rick is also a barber, and he works only when he has more than one customer in the shop. Customers arrive randomly at a rate of 3 per hour. Rick takes 15 minutes on average for a haircut, but his employee takes 10 minutes. Assume that the cutting time distributions are exponentially distributed and that there are only 2 chairs available, with no waiting room in the shop.

1. Let the state of the system be the number of customers in the shop. Construct a state transition diagram for this queueing system.
2. What is the probability that a customer is turned away?
3. What is the probability that the barber shop is idle?
4. What is the steady-state mean number of customers in the shop?

Exercise C.18 Using the supplied data set, draw the sample path for the state variable $$N(t)$$. Assume that the value of $$N(t)$$ is the value of the state variable just after time $$t$$.

$$t$$: 0, 2, 4, 5, 7, 10, 12, 15, 20
$$N(t)$$: 0, 1, 0, 1, 2, 3, 2, 1, 0

1. Give a formula for estimating the time-average number in the system, $$N(t)$$, and then use the data to compute the time-average number in the system over the range from 0 to 25.
2. Give a formula for estimating the mean rate of arrivals over the interval from 0 to 25, and then use the data to estimate the mean arrival rate.
3. Estimate the average time in the system (waiting and in service) for the customers indicated in the diagram.
4. What queueing formula relationship is used in this problem?
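The time-average computation in Exercise C.18 can be sketched directly: since $$N(t)$$ is piecewise constant, the time average over $$[0, 25]$$ is the area under the sample path divided by the length of the interval. This snippet is illustrative only; it uses the exercise's data and assumes $$N(t) = 0$$ from $$t = 20$$ out to the horizon $$t = 25$$:

```python
# Time-average number in system over [0, 25] for a piecewise-constant sample path.
times  = [0, 2, 4, 5, 7, 10, 12, 15, 20, 25]  # event times, extended to the horizon 25
values = [0, 1, 0, 1, 2, 3, 2, 1, 0]          # N(t) just after each event time

# Area under the sample path: each value holds until the next event time.
area = sum(v * (times[i + 1] - times[i]) for i, v in enumerate(values))
avg = area / 25
print(avg)  # 1.08
```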
# Rabinizer 4: From LTL to Your Favourite Deterministic Automaton

@inproceedings{Ketnsk2018Rabinizer4F,
  title={Rabinizer 4: From LTL to Your Favourite Deterministic Automaton},
  author={Jan Křet{\'i}nsk{\'y} and Tobias Meggendorfer and Salomon Sickert and Christopher Ziegler},
  booktitle={CAV},
  year={2018}
}

• Published in CAV, 14 July 2018 • Computer Science

We present Rabinizer 4, a tool set for translating formulae of linear temporal logic to different types of deterministic $$\omega$$-automata. The tool set implements and optimizes several recent constructions, including the first implementation translating the frequency extension of LTL. Further, we provide a distribution of PRISM that links Rabinizer and offers model checking procedures for probabilistic systems that are not in the official PRISM distribution. Finally, we evaluate the…

ltl3tela: LTL to Small Deterministic or Nondeterministic Emerson-Lei Automata • Computer Science • ATVA • 2019
Experimental evaluation shows that ltl3tela can produce deterministic automata that are, on average, noticeably smaller than the deterministic TELA produced by the state-of-the-art translators Delag, Rabinizer 4, and Spot.

Efficient Translation of Safety LTL to DFA Using Symbolic Automata Learning and Inductive Inference • Computer Science • SAFECOMP • 2020
A symbolic adaptation of the $$L^*$$ active learning algorithm, tailored to efficiently translate safety LTL properties into symbolic DFA, and a demonstration of how an inductive inference procedure can be used to provide additional input to the algorithm that greatly improves performance for certain important families of properties.
Generic Emptiness Check for Fun and Profit • Computer Science • ATVA • 2019
We present a new algorithm for checking the emptiness of $$\omega$$-automata with an Emerson-Lei acceptance condition (i.e., a positive Boolean formula over sets of states or transitions that must…

A Unified Translation of Linear Temporal Logic to ω-Automata
Evidence is given that this theoretically clean and compositional approach does not lead to large automata per se, and in fact in the case of DRAs yields significantly smaller automata compared to the previously known approach using determinisation of NBAs.

A Unified Translation of Linear Temporal Logic to ω-Automata • Computer Science • 2020
A unified translation of LTL formulas into nondeterministic Büchi automata, limit-deterministic Büchi automata (LDBA), and deterministic Rabin automata (DRA) is presented.

LTL to Smaller Self-Loop Alternating Automata and Back • Computer Science • ICTAC • 2019
This paper considers SLAA with generic transition-based Emerson-Lei acceptance and presents translations of LTL to these automata and back, which produce considerably smaller automata than previous translations of LTL to Büchi or co-Büchi SLAA.

Eventually Safe Languages • Computer Science • DLT • 2019
It is shown that GFG automata still enjoy exponential succinctness for LTL-definable languages, and a class of properties called "eventually safe" is introduced, together with a specification language $$E \nu \mathrm{TL}$$ for this class.

Model checking with generalized Rabin and Fin-less automata • Computer Science • International Journal on Software Tools for Technology Transfer • 2019
This paper investigates whether using a more general form of acceptance, namely a transition-based generalized Rabin automaton (TGRA), improves the model checking procedure, and introduces a Fin-less acceptance condition, which is a disjunction of TGBAs.
New Optimizations and Heuristics for Determinization of Büchi Automata • Computer Science • ATVA • 2019
In this work, we present multiple new optimizations and heuristics for the determinization of Büchi automata that exploit a number of semantic and structural properties, most of which may be applied…

## References

SHOWING 1-10 OF 59 REFERENCES

Rabinizer 3: Safraless Translation of LTL to Small Deterministic Automata • Computer Science • ATVA • 2014
This paper presents a tool for translating LTL formulae into deterministic ω-automata, the first tool covering the whole of LTL that does not use Safra's determinization or any of its variants, and shows that this leads to a significant speed-up of probabilistic LTL model checking, especially with generalized Rabin automata.

Rabinizer: Small Deterministic Automata for LTL(F, G) • Computer Science • ATVA • 2012
We present Rabinizer, a tool for translating formulae of the fragment of linear temporal logic with the operators F (eventually) and G (globally) into deterministic Rabin automata. Contrary to tools…

Rabinizer 2: Small Deterministic Automata for LTL ∖ GU • Computer Science • ATVA • 2013
A tool that generates automata for LTL(X, F, G, U) where U does not occur in any G-formula (but F still can). DGRA have recently been shown to be as useful in probabilistic model checking as DRA.

MoChiBA: Probabilistic LTL Model Checking Using Limit-Deterministic Büchi Automata • Computer Science • ATVA • 2016
This work presents an extension of PRISM for LTL model checking of MDPs using LDBA, a special subclass of limit-deterministic Büchi automata that can replace deterministic Rabin automata in quantitative probabilistic model checking algorithms.

From LTL to Deterministic Automata: A Safraless Compositional Approach • Computer Science • CAV • 2014
We present a new algorithm to construct a (generalized) deterministic Rabin automaton for an LTL formula φ.
The automaton is the product of a master automaton and an array of slave automata, one…

LTL to Deterministic Emerson-Lei Automata • Computer Science • GandALF • 2017
A new translation from linear temporal logic to deterministic Emerson-Lei automata, with a Muller acceptance condition symbolically expressed as a Boolean formula, is introduced; it uses an enhanced product construction that exploits knowledge of its components to reduce the number of states.

Automata with Generalized Rabin Pairs for Probabilistic Model Checking and LTL Synthesis • Computer Science • CAV • 2013
This work considers deterministic automata with an acceptance condition given as a disjunction of generalized Rabin pairs (DGRW) as an alternative to DRW, and presents algorithms for probabilistic model checking as well as game solving for DGRW conditions.

Deterministic Automata for the (F,G)-fragment of LTL • Computer Science • CAV • 2012
This work presents a direct translation of the (F, G)-fragment of LTL into deterministic ω-automata with no determinization procedure involved, investigates the complexity of this translation, and provides experimental results compared to the traditional method.

Limit Deterministic and Probabilistic Automata for LTL ∖ GU • Computer Science, Mathematics • TACAS • 2015
LTL ∖ GU is a fragment of linear temporal logic LTL, where negations appear only on propositions, and formulas are built using the temporal operators X (next), F (eventually), G (always), and U (until)…

Efficient Büchi Automata from LTL Formulae • Computer Science • CAV • 2000
We present an algorithm to generate small Büchi automata for LTL formulae. We describe a heuristic approach consisting of three phases: rewriting of the formula, an optimized translation procedure,…
# Roulette Conditional Probability

A roulette wheel has 38 slots (18 red, 18 black, 2 green). A customer bets on red until she has won 5 times. What is the probability that she made a total of 12 bets?

This is what I've done, and I'm getting the wrong answer:

Wrong = ((20/38)^7)((18/38)^5)

Above, in my head, is the probability of her missing red 7 times and then hitting red 5 times. However, I'm guessing I'm supposed to put it in a conditional like this:

P(12 total bets | won 5 times) = P(12 total bets and won 5 times) / P(won 5 times) = ((20/38)^7)((18/38)^5) / ((18/38)^5)

Any help would be greatly appreciated.

## 1 Answer

In order for the fifth win to come on the $12$th trial, she has to have won exactly $4$ times in the first $11$ trials, and then won on the $12$th. The probability of $4$ wins in $11$ trials is $\dbinom{11}{4}p^4(1-p)^7$, where $p=\frac{18}{38}$. Multiply by $p$ for the win on the $12$th, and we get that the probability is $$\binom{11}{4}p^5(1-p)^7.$$

• You are welcome. You were only counting one of the orders in which the fifth win on the $12$th can happen. Definitely we need a W on the $12$th, but the other four Ws could have been anywhere. – André Nicolas Aug 19 '13 at 1:33
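As a quick numerical sanity check of the formula in the answer (this snippet is illustrative and not part of the original thread):

```python
# P(fifth win occurs exactly on bet 12): 4 wins in the first 11 bets, then a win.
from math import comb

p = 18 / 38  # probability of red on a single spin
prob = comb(11, 4) * p**5 * (1 - p)**7
print(round(prob, 4))  # roughly 0.088
```

Note that the asker's original expression is one specific ordering of the 12 outcomes; multiplying by the $\binom{11}{4} = 330$ possible placements of the first four wins gives the answer above.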
# Logic Check: Building a SKLearn Pipeline

I am new to the concept of building a pipeline in SKLearn and would appreciate some sense-checking to ensure that I am not leaking info from my training sets into my test set.

Background: I have a sparse, high-dimensional data-set (370x1000) with a continuous variable as the target. At present, I have been running a random forest regression on all the features with a 90/10 split, followed by parameter tuning via grid search on the training set, followed by 5-fold cross validation on the optimized model (with the entire dataset).

Problems with this approach: As I understand the situation, there are a number of things I am doing that might be harming the model and introducing undesired bias. Specifically, my concerns are:

1. As I am tuning parameters only on the initial train/test split, I am not accounting for other split combinations that arise during the K-fold CV. Might the optimal settings for fold 1 be different from those for fold 2? Intuitively I would assume so.
2. I am not doing any feature selection that might remove otherwise redundant features and shorten my feature-space (I know RF is generally quite good with high-dimensional spaces, but I would still like to try). Some suggestions I have read have included removing features with very low variance. But I find myself in the same conundrum as above: if I remove low-variance features from the original training set, I am not accounting for other combinations during K-fold. Conversely, if I remove all low-variance features prior to splitting the data, I am surely leaking info between the train/test splits.
3. An alternative approach I have seen is recursive feature elimination (with CV, as per SKLearn). This looks promising, as I think it means that it will partition the data-set into folds and conduct RFE on each fold, presumably giving me an averaged score of the best number of features to keep.
My possible solution: I have been doing some reading around Pipelines in SKLearn and think that might be the way to go. My understanding is that an advantage of a pipeline is that i can stack transforms together, preserving the individual folds and allowing me to address the problems I have detailed above. What I am considering, and what I would appreciate sense-checking from anyone with more experience, is the following: 1. As the data-set is small, I would not split the data-set in the conventional train/test split manner, but would use K-Fold across the whole data-set. 2. Run a RF (using default params) with K-Fold to get a baseline level of performance. 3. Create a pipeline whereby I (3.1) create folds, (3.2) and then within each fold find the optimal number of features to keep, (3.3) tune hyper-parameters for that fold, and finally (3.4) predict the values in the test fold. As you might be able to tell I am struggling to get to grips with what order things should go in, and whether step 3 is actually what a Pipeline does. If someone can provide pointers/recommendations/corrections it would be appreciated.

You are on the right path. It appears you might have analysis paralysis. You should start building, then see what works and what does not work. Here is code to get you started:

from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

regressor = Pipeline([('vt', VarianceThreshold(threshold=0.0)),
                      ('rf', RandomForestRegressor())])
gsvc = GridSearchCV(regressor,
                    {'vt__threshold': [0, .1, .5, .7, 1],
                     'rf__n_estimators': [10, 100, 1_000],
                     'rf__max_depth': [2, 10, 100]})
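The nested-CV arrangement discussed above can be sketched as follows; this is an illustrative layout rather than the poster's actual setup, and it uses random stand-in data (rather than the real 370x1000 matrix) so the snippet is self-contained. The grid search runs inside each outer fold, so feature selection and tuning never see the held-out fold:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X, y = rng.rand(80, 20), rng.rand(80)  # stand-in for the real data

pipe = Pipeline([('vt', VarianceThreshold()),
                 ('rf', RandomForestRegressor(random_state=0))])

# Inner CV: tunes the threshold and the forest on the training folds only.
inner = GridSearchCV(pipe,
                     {'vt__threshold': [0.0, 0.05],
                      'rf__n_estimators': [10, 30]},
                     cv=3)

# Outer CV: scores the whole tuned procedure on folds it never trained on,
# so the selection/tuning steps cannot leak into the test folds.
scores = cross_val_score(inner, X, y, cv=KFold(5, shuffle=True, random_state=0))
print(scores.mean())
```

The scores here will be poor (the toy targets are pure noise); the point is the structure, not the numbers.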
# What is the average atomic mass of gold if half of the gold found in nature has a mass of 197 amu and half has a mass of 198 amu?

Nov 25, 2015

The average atomic mass is $197.5$.

#### Explanation:

Multiply the mass of each isotope by its percentage as a decimal number and add the results. $50\%/100 = 0.5$

$\text{average atomic mass} = \left(0.5 \times 197\right) + \left(0.5 \times 198\right) = 197.5$

Jan 13, 2016

The average atomic mass is 197.5 u.

#### Explanation:

Assume that you have 2 atoms of gold. Then you have 1 atom of each isotope.

$\text{Mass of 1 Au-197 atom} = \text{197 u}$
$\text{Mass of 1 Au-198 atom} = \text{198 u}$
$\text{Mass of 2 Au atoms} = \text{395 u}$

$\text{Average mass} = \frac{\text{395 u}}{2} = \text{197.5 u}$
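The weighted-average arithmetic generalizes to any set of isotope abundances; a small Python sketch using the 50/50 figures from the question:

```python
# Weighted average of isotope masses; abundances must sum to 1.
masses = [197.0, 198.0]       # amu, from the question
abundances = [0.50, 0.50]     # "half and half" as decimals

average = sum(m * a for m, a in zip(masses, abundances))
print(average)  # 197.5
```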
Time Limit : sec, Memory Limit : KB

English

Problem D: Find the Outlier

Professor Abacus has just built a new computing engine for making numerical tables. It was designed to calculate the values of a polynomial function in one variable at several points at a time. With the polynomial function f(x) = x^2 + 2x + 1, for instance, a possible expected calculation result is 1 (= f(0)), 4 (= f(1)), 9 (= f(2)), 16 (= f(3)), and 25 (= f(4)). It is a pity, however, that the engine seemingly has faulty components and exactly one value among those calculated simultaneously is always wrong. With the same polynomial function as above, it can, for instance, output 1, 4, 12, 16, and 25 instead of 1, 4, 9, 16, and 25. You are requested to help the professor identify the faulty components. As the first step, you should write a program that scans calculation results of the engine and finds the wrong values.

Input

The input is a sequence of datasets, each representing a calculation result in the following format.

d
v0
v1
...
vd+2

Here, d in the first line is a positive integer that represents the degree of the polynomial, namely, the highest exponent of the variable. For instance, the degree of 4x^5 + 3x + 0.5 is five and that of 2.4x + 3.8 is one. d is at most five. The following d + 3 lines contain the calculation results of f(0), f(1), ..., and f(d + 2) in this order, where f is the polynomial function. Each of the lines contains a decimal fraction between -100.0 and 100.0, exclusive. You can assume that the wrong value, which is exactly one of f(0), f(1), ..., and f(d+2), has an error greater than 1.0. Since rounding errors are inevitable, the other values may also have errors but they are small and never exceed 10^-6. The end of the input is indicated by a line containing a zero.

Output

For each dataset, output i in a line when vi is wrong.
Sample Input

2
1.0
4.0
12.0
16.0
25.0
1
-30.5893962764
5.76397083962
39.3853798058
74.3727663177
4
42.4715310246
79.5420238202
28.0282396675
-30.3627807522
-49.8363481393
-25.5101480106
7.58575761381
5
-21.9161699038
-48.469304271
-24.3188578417
-2.35085940324
-9.70239202086
-47.2709510623
-93.5066246072
-82.5073836498
0

Output for the Sample Input

2
1
1
6
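One way to attack the problem (a sketch, not the judge's reference solution): since exactly one of the d + 3 values is badly wrong, try each index as the suspect, interpolate a degree-d polynomial through d + 1 of the remaining points, and pick the candidate whose leftover point is predicted best.

```python
def lagrange_eval(pts, x):
    """Evaluate the Lagrange interpolating polynomial through pts at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def find_outlier(d, values):
    """Return the index of the single wrong value among f(0), ..., f(d+2)."""
    best, best_err = -1, float("inf")
    for cand in range(d + 3):
        # Points excluding the candidate outlier: d + 2 of them.
        rest = [(x, v) for x, v in enumerate(values) if x != cand]
        # Interpolate through the first d + 1, check the remaining point.
        poly_pts, (xc, vc) = rest[:d + 1], rest[-1]
        err = abs(lagrange_eval(poly_pts, xc) - vc)
        # The true outlier leaves only rounding-level error behind.
        if err < best_err:
            best, best_err = cand, err
    return best
```

On the first sample dataset, `find_outlier(2, [1.0, 4.0, 12.0, 16.0, 25.0])` returns 2, matching the expected output.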
# How to calculate the energy required for an electrochemical reaction with over potential? I am particularly interested in the reaction described in High‐Selectivity Electrochemical Conversion of CO2 to Ethanol using a Copper Nanoparticle/N‐Doped Graphene Electrode. The reaction is $$\ce{2 CO + 9 H2O + 12 e- -> C2H5OH + 12 OH- E0 = 0.084 V vs SHE}$$ and has a recommended voltage of 1.2V. My naive attempt is: • 1kWh @ 1.2V yields 30 moles electrons • 30 moles electrons has theoretical yield 2.5 moles of $$\ce{C2H5OH}$$ Do I also need to factor in the 63% Faradaic efficiency mentioned, so it'd be 1.575 moles of $$\ce{C2H5OH}$$ from 1 kWh input?
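The naive attempt can be made precise with the Faraday constant (96485 C/mol); the 1.2 V, 12-electron, and 63 % figures below are taken from the question, so this refines rather than replaces the round numbers there:

```python
F = 96485.0           # Faraday constant, C/mol

energy_J = 3.6e6      # 1 kWh in joules
voltage = 1.2         # applied cell voltage, V

charge_C = energy_J / voltage        # total charge passed
mol_e = charge_C / F                 # ~31.1 mol of electrons (not exactly 30)
mol_etoh = mol_e / 12                # 12 e- per ethanol molecule
mol_actual = mol_etoh * 0.63         # 63 % Faradaic efficiency

print(mol_e, mol_etoh, mol_actual)
```

So the theoretical yield is about 2.59 mol of ethanol per kWh, and about 1.63 mol after applying the 63 % Faradaic efficiency; and yes, the Faradaic efficiency does need to be factored in, since it is the fraction of the charge that actually goes into the ethanol-producing reaction.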
# Group Epimorphism Induces Bijection between Subgroups

## Theorem

Let $G_1$ and $G_2$ be groups whose identities are $e_{G_1}$ and $e_{G_2}$ respectively. Let $\phi: G_1 \to G_2$ be a group epimorphism. Let $K := \ker \left({\phi}\right)$ be the kernel of $\phi$. Let $\mathbb H_1 = \left\{{H \subseteq G_1: H \le G_1, K \subseteq H}\right\}$ be the set of subgroups of $G_1$ which contain $K$. Let $\mathbb H_2 = \left\{{H \subseteq G_2: H \le G_2}\right\}$ be the set of subgroups of $G_2$.

Then there exists a bijection $Q: \mathbb H_1 \leftrightarrow \mathbb H_2$ such that:

$\forall N \lhd G_1: Q \left({N}\right) \lhd G_2$
$\forall N \lhd G_2: Q^{-1} \left({N}\right) \lhd G_1$

where $N \lhd G_1$ denotes that $N$ is a normal subgroup of $G_1$. That is, normal subgroups map bijectively to normal subgroups under $Q$.

### Corollary

Let $H \le G$ denote that $H$ is a subgroup of $G$. Then:

$\forall H \le G, K \subseteq H: \phi \sqbrk H \cong H / K$

where $H / K$ denotes the quotient group of $H$ by $K$.

## Proof

Let $Q$ be the mapping defined as:

$\forall H \in \mathbb H_1: Q \left({H}\right) = \left\{{\phi \left({h}\right): h \in H}\right\}$

Let $H$ be a subgroup of $G_1$ such that $K \subseteq H$. From Group Homomorphism Preserves Subgroups, $\phi \left({H}\right)$ is a subgroup of $G_2$. This establishes that $Q$ is actually a mapping.

Let $N \lhd G_1$. From Group Epimorphism Preserves Normal Subgroups, $\phi \left({N}\right)$ is a normal subgroup of $G_2$. This establishes that:

$\forall N \lhd G_1: Q \left({N}\right) \lhd G_2$

Next it is shown that $Q$ is a bijection.

### Injective Nature of $Q$

Let $H, J \in \mathbb H_1$. Let $Q \left({H}\right) = Q \left({J}\right)$. Let $h \in H$.
$\phi \left({h}\right) \in Q \left({H}\right)$
$\implies \phi \left({h}\right) \in Q \left({J}\right)$
$\implies \exists j \in J: \phi \left({j}\right) = \phi \left({h}\right)$
$\implies e_{G_2} = \left({\phi \left({j}\right)}\right)^{-1} \phi \left({h}\right)$ (definition of inverse element)
$\phantom{\implies e_{G_2}} = \phi \left({j^{-1} }\right) \phi \left({h}\right)$ (Group Homomorphism Preserves Inverses)
$\phantom{\implies e_{G_2}} = \phi \left({j^{-1} h}\right)$ (morphism property of $\phi$)
$\implies j^{-1} h \in K$ (definition of kernel)
$\implies \exists k \in K: j^{-1} h = k$
$\implies h = j k \in J$ (as $K \subseteq J$ and so $k \in J$)
$\implies H \subseteq J$

A similar argument shows that $J \subseteq H$. So by definition of set equality: $H = J$. Thus:

$Q \left({H}\right) = Q \left({J}\right) \implies H = J$

So by definition, $Q$ is injective. $\Box$

### Surjective Nature of $Q$

Now let $N' \in \mathbb H_2$. By definition of $\mathbb H_2$, $N'$ is a subgroup of $G_2$. Let $N = \left\{{x \in G_1: \phi \left({x}\right) \in N'}\right\}$. We have from Identity of Subgroup that $e_{G_2} \in N'$. Thus by definition of kernel, $K \subseteq N$. Now suppose $\phi \left({x}\right), \phi \left({y}\right) \in N'$.
Then:

$\phi \left({x y^{-1} }\right) = \phi \left({x}\right) \phi \left({y^{-1} }\right)$
$\phantom{\phi \left({x y^{-1} }\right)} = \phi \left({x}\right) \phi \left({y}\right)^{-1}$
$\phantom{\phi \left({x y^{-1} }\right)} \in N'$ (One-Step Subgroup Test)
$\implies x y^{-1} \in N$

So by the One-Step Subgroup Test, $N$ is a subgroup of $G_1$. It has been established that $K \subseteq N$, and so $N \in \mathbb H_1$. Since $\phi$ is an epimorphism, hence surjective, $\phi \left({N}\right) = N'$, that is, $Q \left({N}\right) = N'$. Thus it follows that for all $N' \in \mathbb H_2$ there exists $N \in \mathbb H_1$ such that $Q \left({N}\right) = N'$. So $Q$ is a surjection. $\Box$

So $Q$ has been shown to be both an injection and a surjection, and so by definition is a bijection.

Finally, it can then be shown that if $N'$ is normal in $G_2$, it follows that $N = Q^{-1} \left({N'}\right)$ is normal in $G_1$. This establishes that:

$\forall N \lhd G_2: Q^{-1} \left({N}\right) \lhd G_1$

$\blacksquare$
# multiplicative digital root

Given an integer $m$ consisting of $k$ digits $d_{1},\dots,d_{k}$ in base $b$, let $j=\prod_{i=1}^{k}d_{i},$ then repeat this operation on the digits of $j$ until $j < b$. This stores in $j$ the multiplicative digital root of $m$. The number of iterations of the multiplication operation is called the multiplicative persistence of $m$.

Title: multiplicative digital root
Canonical name: MultiplicativeDigitalRoot
Date of creation: 2013-03-22 16:00:42
Last modified on: 2013-03-22 16:00:42
Owner: CompositeFan (12809)
Last modified by: CompositeFan (12809)
Numerical id: 4
Author: CompositeFan (12809)
Entry type: Definition
Classification: msc 11A63
Synonym: repeated digit product, repeated digital product
Related topic: multiplicative persistence
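The definition above translates directly into a few lines of Python (the function name is my own; it returns both the root and the persistence):

```python
def multiplicative_digital_root(m, b=10):
    """Repeatedly replace m by the product of its base-b digits.

    Returns (root, persistence): the final single-digit value and the
    number of multiplication steps taken to reach it.
    """
    steps = 0
    while m >= b:
        product = 1
        n = m
        while n:
            product *= n % b  # multiply in the lowest remaining digit
            n //= b
        m = product
        steps += 1
    return m, steps
```

For example, 39 → 3·9 = 27 → 2·7 = 14 → 1·4 = 4, so its multiplicative digital root is 4 and its multiplicative persistence is 3.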
# How to calculate width of remaining part of line

I wish to create a section heading using titlesec like the image. But I have some difficulty determining the width of the remaining part of the line, in which to place a blue box containing the section label. My method consists of subtracting the length of the gray box from \textwidth, but it did not work.

\documentclass[a4paper]{book}
\usepackage{lipsum}
\usepackage[explicit]{titlesec}
\usepackage{xcolor}
\newlength{\myl}
\colorlet{mygray}{gray!90}
\colorlet{myblue}{blue!80}
\newcommand{\graybox}{\colorbox{mygray}{\strut \color{white}Section~\thesection}}
\settowidth{\myl}{\graybox}
\titleformat{\section}[hang]{\large\bfseries}%
{\graybox}{.5ex}{\colorbox{myblue}{\makebox[\dimexpr \linewidth-\myl-2\fboxsep-.5ex][l]%
{\strut \color{white}\large\bfseries #1}}}
\pagestyle{empty}
\begin{document}

• Right now you are measuring \graybox in the preamble. I think you should move the line \settowidth{\myl}{\graybox} in the last argument of \titleformat, before \colorbox{myblue}.... – campa Jun 1 '16 at 16:12
• @campa Perhaps make your suggestion an answer, as it fixes the problem nicely, even after a \setcounter{section}{100} changes \thesection width. – Steven B. Segletes Jun 1 '16 at 16:18
• Thanks @campa your comment fixes the problem, can you post it as an answer. – Salim Bou Jun 1 '16 at 16:24

The computations are somewhat similar to those in the other answers, but this solution also copes with unnumbered sections and long titles.
\documentclass[a4paper]{book}
\usepackage{lipsum}
\usepackage{titlesec}
\usepackage{xcolor}

\newsavebox{\sectionlabelbox}
\newlength{\sectionlabelwidth}
\colorlet{mygray}{gray!90}
\colorlet{myblue}{blue!80}

\newcommand{\sectionlabel}{%
  \sbox{\sectionlabelbox}{\colorbox{mygray}{\strut\color{white}Section~\thesection}}%
  \global\sectionlabelwidth=\wd\sectionlabelbox
  \usebox{\sectionlabelbox}%
}
\newcommand{\sectiontitle}[1]{%
  \colorbox{myblue}{%
    \parbox[t]{\dimexpr\columnwidth-\sectionlabelwidth-2\fboxsep-0.5ex}{
      \raggedright\strut\color{white}#1
    }%
  }%
}

\titleformat{\section}[hang]
  {\large\bfseries\global\sectionlabelwidth=-0.5ex }%
  {\sectionlabel}
  {.5ex}
  {\sectiontitle}

\begin{document}

\section{A test}

\setcounter{section}{9}
\section{Another test}

\section{Another test, but with a title that is so long it has to be split across lines}

\section*{A further test}

\end{document}

Here's a modification where the grey box has the same vertical size as the blue box.

\documentclass[a4paper]{book}
\usepackage{lipsum}
\usepackage{titlesec}
\usepackage{xcolor}

\newsavebox{\sectiontitlebox}
\newlength{\sectionlabelwidth}
\colorlet{mygray}{gray!90}
\colorlet{myblue}{blue!80}

\newcommand{\sectiontitle}[1]{%
  \settowidth{\sectionlabelwidth}{%
    \colorbox{mygray}{\strut\color{white}Section~\thesection}%
  }%
  \sbox{\sectiontitlebox}{%
    \colorbox{myblue}{%
      \parbox[t]{\dimexpr\columnwidth-\sectionlabelwidth-2\fboxsep-0.5ex}{
        \raggedright\strut\color{white}#1
      }%
    }%
  }%
  \colorbox{mygray}{%
    \vrule height \dimexpr\ht\sectiontitlebox-\fboxsep\relax
           depth \dimexpr\dp\sectiontitlebox-\fboxsep\relax
           width 0pt
    \color{white}Section~\thesection
  }%
  \hspace{0.5ex}%
  \usebox{\sectiontitlebox}%
}
\newcommand{\sectionstartitle}[1]{%
  \colorbox{myblue}{%
    \parbox[t]{\dimexpr\columnwidth-2\fboxsep}{
      \raggedright\strut\color{white}#1
    }%
  }%
}

\titleformat{name=\section}[hang]
  {\large\bfseries}
  {}
  {0pt}
  {\sectiontitle}
\titleformat{name=\section,numberless}[hang]
  {\large\bfseries}
  {}
  {0pt}
  {\sectionstartitle}
\begin{document}

\section{A test}

\setcounter{section}{9}
\section{Another test}

\section{Another test, but with a title that is so long it has to be split across lines}

\section*{A further test}

\end{document}

• I wonder if it's possible to have the grey box always the same height as the blue box? It would look nicer, in my opinion. – Bernard Jun 1 '16 at 18:42
• @Bernard Your wish is my command. ;-) – egreg Jun 1 '16 at 19:59
• @egreg:Thanks, my Lord! That is quite simple and elegant indeed. Actually, the really nicest (but it's my personal taste) would be a simple coloured \fcolorbox with no coloured background around the label. – Bernard Jun 1 '16 at 20:15

\documentclass[a4paper]{book}
\usepackage{lipsum}
\usepackage[explicit]{titlesec}
\usepackage{xcolor}
\newlength{\myl}
\colorlet{mygray}{gray!90}
\colorlet{myblue}{blue!80}
\newcommand{\graybox}{\colorbox{mygray}{\strut \color{white}Section~\thesection}}
\settowidth{\myl}{\graybox}
\usepackage{tikz}
\usetikzlibrary{calc}
\newcommand{\currentsidemargin}{%
  \ifodd\value{page}%
    \oddsidemargin%
  \else%
    \evensidemargin%
  \fi%
}
\newlength{\whatsleft}
\newcommand{\measureremainder}[1]{%
  \begin{tikzpicture}[overlay,remember picture]
    % Helper nodes
    \path (current page.north west) ++(\hoffset, -\voffset)
      node[anchor=north west, shape=rectangle, inner sep=0,
           minimum width=\paperwidth, minimum height=\paperheight] (pagearea) {};
      node[anchor=north west, shape=rectangle, inner sep=0,
           minimum width=\textwidth, minimum height=\textheight] (textarea) {};
    % Measure distance to right text border
    \path let \p0 = (0,0), \p1 = (textarea.east) in
      [/utils/exec={\pgfmathsetlength#1{\x1-\x0}\global#1=#1}];
  \end{tikzpicture}%
}
\titleformat{\section}[hang]{\large\bfseries}%
{\graybox}{.5ex}{\measureremainder{\whatsleft}\colorbox{myblue}{\makebox[\dimexpr\whatsleft-2\fboxsep][l]%
{\strut\color{white}\large\bfseries #1}}}
\pagestyle{empty}
\begin{document}

\section{blub}

\setcounter{section}{10}
\section{blub}

\lipsum

\end{document}
# Find all values such that the inequality is true

Mod note: Moved from a technical math section, so missing the homework template. This is for an Intro to Analysis course. It's been a very long time since I've taken a math course, so I do not remember much of anything.

=============

Here is the problem: For the inequality below, find all values n ∈ N such that the inequality is true: (n2 + 2n +3) / (2n3 + 5n2 + 8n + 3) < 0.025.

============================

Here is my attempt at the problem: Looking at the following set {n ∈ N: (n2 + 2n +3) / (2n3 + 5n2 + 8n + 3) < 0.025} We want to find the lower bound of this set. Suppose A denotes the above set, then we have A= {n ∈ N: (n2 + 2n +3) / (2n3 + 5n2 + 8n + 3) < 0.025} Since the above rational function can be reduced to 1/(2n+1) we have 1/(2n+1) <0.025 Where we get n>19.5. Since the lower bounds of the set are 19.5, 19.4, 19.3.... And so on, this set is bound below by the natural number n=19. Therefore, the above inequality holds true for all n ∈ N greater than or equal to 20.

==========

If you can only just tell me what topics I need to review to answer this correctly, I would appreciate it.

Last edited by a moderator:

ChrisVer Gold Member: what is n2,n3? for example if n=1, n2=1 and n3=170000 you can make the inequality true.... $\frac{1+2+3}{2 \times 170000 + 5 + 8 +3} = \frac{6}{340016}= 0.00001764622 < 0.025$ and so n=1 is keeping the inequality

what is n2,n3?

Sorry. I didn't check to see if the format changed once I copied and pasted my problem:

==============================================================

Here is the problem: For the inequality below, find all values n ∈ N such that the inequality is true: (n^2 + 2n + 3) / (2n^3 + 5n^2 + 8n + 3) < 0.025.

============================

Here is my attempt at the problem: Looking at the following set {n ∈ N: (n^2 + 2n + 3) / (2n^3 + 5n^2 + 8n + 3) < 0.025} We want to find the lower bound of this set.
Suppose A denotes the above set, then we have A = {n ∈ N: (n^2 + 2n + 3) / (2n^3 + 5n^2 + 8n + 3) < 0.025} Since 2n^3 + 5n^2 + 8n + 3 = (2n + 1)(n^2 + 2n + 3), the above rational function can be reduced to 1/(2n+1), so we have 1/(2n+1) < 0.025, where we get n > 19.5. Since the lower bounds of the set are 19.5, 19.4, 19.3.... And so on, this set is bound below by the natural number n=19. Therefore, the above inequality holds true for all n ∈ N greater than or equal to 20. Thus A = {20, 21, 22, 23,...}

mfb Mentor: I don't see the purpose of all those "steps" before reducing the rational function (which you should probably show in more detail).

Where we get n>19.5

Okay (showing the steps wouldn't hurt).

Since the lower bounds of the set are 19.5, 19.4, 19.3.... And so on

What is the relevance of those numbers? Once you have n>19.5 you can directly conclude that n>=20, and write down the set of solutions.

RUber Homework Helper: One method that might help with the simplification is to flip the inequality. ##\frac{ n^2 +2n+3}{2n^3 + 5n^2 + 8n+3} < .025 \equiv \frac{2n^3 + 5n^2 + 8n+3}{ n^2 +2n+3}>40## Which as you pointed out can be written as ##2n+1 > 40.## And you already have the solution. PeroK

mfb Mentor: Be careful: to do that you have to check that the two sides cannot be negative. This is easy to do here, but it is a necessary step.
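The conclusion n ≥ 20 (and the factorisation behind the reduction to 1/(2n+1)) is easy to sanity-check numerically; a small Python sketch:

```python
def f(n):
    """The left-hand side of the inequality."""
    return (n**2 + 2*n + 3) / (2*n**3 + 5*n**2 + 8*n + 3)

# Check the reduction: the denominator factors as (2n+1)(n^2+2n+3).
for n in range(1, 100):
    assert abs(f(n) - 1 / (2*n + 1)) < 1e-12

# The inequality first holds at n = 20 (1/41 < 0.025 but 1/39 > 0.025).
solutions = [n for n in range(1, 100) if f(n) < 0.025]
print(solutions[0])  # 20
```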
# Text tool for designing gameplay elements [closed]

I'm still in the design phase of my game and I'm realizing that there are too many elements to keep track of and it just looks like a mess in the plain text that I created it in. My notes are basically going to translate to the classes and subclasses that'll make up the objects within my game, but I'd like to find a tool that will make it simpler to keep track of. Here's an example of the structure of my notes:

NPCs:
- Humans:
  - Human A: description
    - Abilities:
      - Ability A: description
      [...]
    - Attributes:
      - Attribute A: description
      [...]
  - Human B: description
    - Abilities:
      - Ability A: description
      [...]
    - Attributes:
      - Attribute A: description
      [...]
- Elves:
  - Elf A: description
    - Abilities:
      - Ability A: description
      [...]
    - Attributes:
      - Attribute A: description
      [...]
  - Elf B: description
    - Abilities:
      - Ability A: description
      [...]
    - Attributes:
      - Attribute A: description
      [...]
[...]

World:
- Continent A: overall description
  - City A: city description
    - Building A: description
    [...]
  [...]
[...]

Items:
- Melee: overall description
  - Melee Weapon A: description
    - Damage: X
    - Damage Type: some type
    [...]
  [...]
- Ranged: overall description
  - Ranged Weapon A: description
    - Damage: X
    - Damage Type: some type
    [...]
[...]

And so on... Even this simple list is starting to look messy, and once you have pages of classes and multiple subclasses for each class along with lengthy descriptions, it gets really jarring when you view it. What I would really like is a way to expand and collapse these elements so I can either view the big picture or go down to details, and maybe be able to set some groups a different color or make them bold, etc. for better viewing and categorization. It's basically a class diagram but with more game-world-related descriptions rather than function/method descriptions. I'm currently using Sublime for this, so either a plugin or a different tool that accomplishes this would really help. Thanks!
## closed as too broad by Josh♦ Mar 2 '17 at 5:32

Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.

• Microsoft Excel or its analogs (e.g. Google Drive Sheets) can do exactly that - collapse-expand cells and groups of cells. – Kromster Mar 2 '17 at 4:22
• You may want to ask on the software recommendations SE; we don't consider recommendations on topic here. – Josh Mar 2 '17 at 5:33
• Too broad? I thought I was being as specific as possible as to what I'm trying to achieve. It's not like I was asking "what's the best language to make games in". I have a problem which I detailed and a very clear description of what I'm looking to do. If it's because I should be in Software Recommendations, honestly I don't like how suggestions for every possible subject is in one place. My problem is specific to game design, so I have to go there and hope a game designer will show up there with an answer. You have to understand why this is frustrating. – user3625087 Mar 2 '17 at 9:26
• Your problem really isn't a game design problem, it's a word processing/text formatting problem, and I say this as a game designer. The topic you want to use this tool for has more in common with software architecture than game design, but even there, your description of a deep & rigid class hierarchy with subclasses for everything actually goes against prevailing advice in game development, which has moved strongly toward composition over inheritance in recent years. If your class structure is too unwieldy to write out simply, imagine trying to develop & debug it! – DMGregory Mar 2 '17 at 14:08
• @DMGregory I disagree, how to organize and track your information/process is absolutely a design problem.
As with anything else in game design there are a ton of different ways to do things and they are not created equally. The OP has asked a straight-forward clear question about a game design related issue and it's not at all too broad. – Aithos Mar 2 '17 at 21:34
Warning: This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.

# negative_edge_cycle

negative_edge_cycle(G, weight='weight')[source]

Return True if there exists a negative edge cycle anywhere in G.

Parameters:
- G (NetworkX graph)
- weight (string, optional (default='weight')) – Edge data key corresponding to the edge weight

Returns: negative_cycle – True if a negative edge cycle exists, otherwise False.

Return type: bool

Examples

>>> import networkx as nx
>>> G = nx.cycle_graph(5, create_using = nx.DiGraph())
>>> print(nx.negative_edge_cycle(G))
False
>>> G[1][2]['weight'] = -7
>>> print(nx.negative_edge_cycle(G))
True

Notes

Edge weight attributes must be numerical. Distances are calculated as sums of weighted edges traversed. This algorithm uses bellman_ford() but finds negative cycles on any component by first adding a new node connected to every node, and starting bellman_ford on that node. It then removes that extra node.
## Archive for the ‘Best of Dethorning STEM’ Category ### The Fallacy of the Right Answer The Fallacy of the Right Answer is everywhere. With regards to education technology, it dates back at least to BF Skinner. Skinner saw education as a series of definite, discrete, linear steps along a fixed, straight road; today this is called a curriculum. He referred to a child who guesses the password as “being right”. Khan Academy uses similar gatekeeping techniques in its exercises, limiting the context. Students must meet one criterion before proceeding to the next, being spoon-fed knowledge and seeing through a peephole not unlike Skinner’s machines. Furthermore, these steps are claimed to be objective, universal and emotionless. Paul Lockhart calls this the “ladder myth”, the conception of mathematics as a clear hierarchy of dependencies. But the learning hierarchy is tangled, replete with strange loops. It is fallacious yet popular to think that a concept, once learned, is never forgotten. But most educated adults I know (including myself) find value in rereading old material, and make connections back to what they already have learned. What was once understood narrowly or mechanically can, when revisited, be understood in a larger or more abstract context, or with new cognitive tools. There are two words for “to know” in French. Savoir means to know a fact, while connaitre means to be familiar with, comfortable with, to know a person. The Right Answer loses sight of the importance, even the possibility, of knowing a piece of information like an old friend, to find pleasure in knowing, to know for knowing’s sake, because you want to. Linear teaching is workable for teaching competencies but not for teaching insights, things like why those mechanical methods work, how they can be extended, and how they can fail. Symbol manipulation according to fixed rules is not cognition but computation. 
The learners take on the properties of the machines, and those who programmed them. As Papert observed, the computer programs the child, not the other way around (as he prefers). Much of this mechanical emphasis is driven by the SAT and other unreasonable standardized tests which are nothing more than timed high-stakes guessing games. They are gatekeepers to the promised land of College.

Proponents of education reform frequently cite distinct age-based grades as a legacy of the "factory line model" dating back to the industrial revolution. This model permeates not only how we raise children, but more importantly, what we raise them to do, what we consider necessary of an educated adult. Raising children to work machinery is the same as, or has given way to, raising them to work like machinery. Tests like the SAT emphasize that we should do reproducible de-individualized work, compared against a clear, ideal, unachievable standard. Putting this methodology online does not constitute a revolution or disruption.

Futurists have gone so far as to see the brain itself as programmable, in some mysteriously objective sense. At some point, Nicholas Negroponte veered off his illustrious decades-long path. Despite collaborating with Seymour Papert at the Media Lab, his recent work has been dropping tablets into rural villages. Instant education, just add internet! It's great that the kids are teaching themselves, and have some autonomy, but who designed the apps they play with? What sort of biases and fallacies do they harbor? Do African children learning the ABCs qualify as cultural imperialism? His prediction for the next thirty years is even more troublesome: that we'll acquire knowledge by ingesting it. Shakespeare will be encoded into some nano-molecular device that works its way through the blood-brain barrier, and suddenly: "I know King Lear!".
Even if we could isolate the exact neurobiological processes that constitute reading the Bard, we all understand Shakespeare in different ways. All minds are unique, and therefore all brains are unique. Meanwhile, our eyes have spent a few hundred million years of evolutionary time adapting to carry information from the outside world into our mind at the speed of an ethernet connection. Knowledge intake is limited not by perception but by cognition.

Tufte says, to simplify, add context. Confusion is not a property of information but of how it is displayed. He said these things in the context of information graphics but they apply to education as well. We are so concerned with information overload that we forget information underload, where our brain is starved for detail and context. It is not the particular facts, but the connections between them, that constitute knowledge. The fallacy of reductionism is to insist that every detail matters: learn these things and then you are educated! The fallacy of holism is to say that no details matter: let's just export amorphous nebulous college-ness and call it universal education!

Bret Victor imagines how we could use technology to move from a contrived, narrow problem into a deeper understanding about generalized, abstract notions, much as real mathematicians do. He also presents a mental model for working on a difficult problem:

I'm trying to build a jigsaw puzzle. I wish I could show you what it will be, but the picture isn't on the box. But I can show you some of the pieces… If you are building a different puzzle, it's possible these pieces won't mean much to you. You might not have a spot for them to fit, or you might not yet. On the other hand, maybe some of these are just the pieces you've been looking for.

One concern with Skinner's teaching machines and their modern-day counterparts is that they isolate each student and cut off human interaction.
We learn from each other, and many of the things that we learn fall outside of the curriculum ladder. Learning to share becomes working on a team; show-and-tell becomes leadership. Years later, in college, many of the most valuable lessons are unplanned, a result of meeting a person with very different ideas, or hearing exactly what you needed to at that moment. I found that college exposed me to brilliant people, and I could watch them analyze and discuss a problem. The methodology was much more valuable than the answer it happened to yield.

The hallmark of an intellectual is to create daily what has never existed before. This can be an engineer's workpiece, a programmer's software, a writer's novel, a researcher's paper, or an artist's sculpture. None of these can be evaluated by comparing them to a correct answer, because the correct answer is not known, or can't even exist. The creative intellectual must have something to say and know how to say it; ideas and execution must both be present. The bits and pieces of a curriculum can make for a good technician (a term I've heard applied to a poet capable of choosing the exact word). It's not so much that "schools kill creativity" as that they replace the desire to create with the ability to create. Ideally schools would nurture and refine the former (assuming something-to-say is mostly innate) while instructing the latter (assuming saying-it-well is mostly taught).

What would a society look like in which everyone was this kind of intellectual? If everyone is writing and drawing, who will take out the trash, harvest food, etc? Huxley says all Alphas and no Epsilons doesn't work. Like the American South adjusting to an economy without slaves, elevating human dignity leaves us with the question of who will do the undignified work.
As much as we say that every child deserves an education, I think that the creative intellectual will remain in an elite minority for years to come, with society continuing to run on the physical labor of the uneducated. If civilization ever truly extends education to all, then either we will need to find some equitable way of sharing the dirty work (akin to utopian socialist communes), or we'll invent highly advanced robots. Otherwise, we may need to ask ourselves a very unsettling question: can we really afford to extend education to all, given the importance of unskilled labor to keep society running?

If you liked this post, you should go read everything Audrey Watters has written. She has my thanks.

### Infographics and Data Graphics

I'd like to set the record straight about two types of graphical documents floating around the internet. Most people don't make a distinction between infographics and data graphics. Here are some of each – open them in new tabs and see if you can tell them apart. No peeking! No, really, stop reading and do it. I can wait.

Okay, had a look and made your categorizations? As I see it, dog food, energy, and job titles are infographics, and Chicago buildings, movie earnings, and gay rights are data graphics. Why? Here are some distinctions to look for, which will make much more sense now that you've seen some examples. Naturally these are generalizations and some documents will be hard to classify, but not as often as you might think.

Infographics emphasize typography, aesthetic color choice, and gratuitous illustration. Data graphics are pictorially muted and focused; color is used to convey data.

Infographics have many small paragraphs of text that communicate the information. Data graphics are largely wordless except for labels and an explanation of the visual encoding.

In infographics, numeric data is scant, sparse, and piecemeal. In data graphics, numeric data is plentiful, dense, and multivariate.
Infographics have many components that relate different datasets; sectioning is used. Data graphics have a single detailed image, or less commonly multiple windows into the same data. An infographic is meant to be read through sequentially. A data graphic is meant to be scrutinized for several minutes. In infographics, the visual encoding of numeric information is either concrete (e.g. world map, human body), common (e.g. bar or pie charts), or nonexistent (e.g. tables). In data graphics, the visual encoding is abstract, bespoke, and must be learned. Infographics tell a story and have a message. Data graphics show patterns and anomalies; readers form their own conclusions.

You may have heard the related term visualization – a data graphic is a visualization on steroids. (An infographic is a visualization on coffee and artificial sweetener.) A single bar, line, or pie chart is most likely a visualization but not a data graphic, unless it takes several minutes to absorb. However, visualizations and data graphics are both generated automatically, usually by code. It should be fairly easy to add new data to a visualization or data graphic; not so for infographics.

If you look at sites like visual.ly, which collect visualizations of all stripes, you’ll see that infographics far outnumber data graphics. Selection bias is partially at fault. Data graphics require large amounts of data that companies likely want to keep private. Infographics are far better suited to marketing and social campaigns, so they tend to be more visible. Some datasets are better suited to infographics than data graphics. However, even accounting for those facts, I think we have too many infographics and too few data graphics. This is a shame, because the two have fundamentally different worldviews.

An infographic is meant to persuade or inspire action.
Infographics drive an argument or relate a story in a way that happens to use data, rather than allowing the user to infer more subtle and multifaceted meanings. A well-designed data graphic can be an encounter with the sublime. It is visceral, non-verbal, profound; a harmony of knowledge and wonder. Infographics already have all the answers, and serve only to communicate them to the reader. A data graphic has no obvious answers, and in fact no obvious questions. It may seem that infographics convey knowledge, and data graphics convey only the scale of our ignorance, but in fact the opposite is true. An infographic offers shallow justifications and phony authority; it presents the facts as they are. (“Facts” as they “are”.) A data graphic does not foist any conclusion upon its reader, but at one level of remove, provides its readers with tools to draw conclusions.

Pedagogically, infographics embrace the fundamentally flawed idea that learning is simply copying knowledge from one mind to another. Data graphics accept that learning is a process, which moves from mystery to complexity to familiarity to intuition. Epistemologically, infographics ask that knowledge be accepted on little to no evidence, while data graphics encourage using evidence to synthesize knowledge, with no prior conception of what this knowledge will be. It is the difference between memorizing a fact about the world and accepting the validity of the scientific method.

However, many of the design features that impart data graphics with these superior qualities can be exported back to infographics, with compelling results. Let’s take this example about ivory poaching. First off, it takes itself seriously: there’s no ostentatious typography, and the colors are muted and harmonious. Second, its subject matter is not a single unified dataset but multiple datasets that describe a unified subject. They are supplemented with non-numeric diagrams and illustrations, embracing their eclectic nature.
Unlike most infographics, this specimen makes excellent use of layout to achieve density of information. Related pieces are placed in close proximity rather than relying on sections; the reader is free to explore in any order. This is what an infographic should be, or perhaps it’s worthy of a different and more dignified name, information graphic. It may even approach what Tufte calls “beautiful evidence”.

It’s also possible to implement a data graphic poorly. Usually this comes down to a poor choice of visual encoding, although criticism is somewhat subjective. Take this example of hurricanes since 1960. The circular arrangement is best used for months or other cyclical data. Time proceeds unintuitively counterclockwise. The strength of hurricanes is not depicted, only the number of them (presumably – the radial axis is not labeled!). The stacked bars make it difficult to compare hurricanes from particular regions. If one wants to compare the total number of hurricanes, one is again stymied by the polar layout. Finally, the legend is placed at the bottom, where it will be read last. Data graphics need to explain their encoding first; even better is to explain the encoding on the diagram itself rather than in a separate legend. For example, if the data were rendered as a line chart (in Cartesian coordinates), labels could be placed alongside the lines themselves. (Here is a proper data graphic on hurricane history.)

An infographic typically starts with a message to tell, but designers intent on honesty must allow the data to support their message. This is a leap of faith, that their message will survive first contact with the data. The ivory poaching information graphic never says, in so many words, that poaching is bad and should be stopped. Rather it guides us to that conclusion without us even realizing it. Detecting bias in such a document becomes much more difficult, but it also becomes much more persuasive (for sufficiently educated and skeptical readers).
Similarly, poor data graphics obscure the data, either intentionally because the data don’t support the predetermined message, or unintentionally because of poor visual encoding. In information visualization, as in any field, we must be open to the hard process of understanding the truth, rather than blithely accepting what someone else wants us to believe. I know which type of document I want to spend my life making.

### Critical Complexity

Here’s a task for you: draw a circle of radius three around the origin. What system do you use?

Well, you could use an intuitive system like Papert’s turtle. Walk out three, turn ninety degrees, and then walk forward while turning inward. By identifying as a specific agent, you take advantage of having a brain that evolved to control a body. If it doesn’t seem intuitive, that’s because you’ve been trained to use other systems. Your familiarity is trumping what comes naturally, at least to children.

You’re probably thinking in Cartesian coordinates. You may even recall that $x^2 + y^2 = 3^2$ will give you the circle I asked for. But that’s only because you memorized it. Why this formula? It’s not obvious that it should be a circle. It doesn’t feel very circular, unless you fully understand the abstraction beneath it (in this case, the Pythagorean theorem) and how it applies to the situation.

Turtle geometry intuitively fits the human, but it’s limited and naive. Cartesian geometry accurately fits your monitor or graph paper, the technology, but it’s an awkward way to express circles. So let’s do something different. In polar coordinates, all we have to say is $r=3$ and we’re done. It’s not a compromise between the human and the technology, it’s an abstraction – doing something more elegant and concise than either native form. Human and technology alike stretch to accommodate the new representation. Abstractions aren’t fuzzy and amorphous. Abstractions are crisp, and stacked on top of each other, like new shirts in a store.
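The three systems can be put side by side in a few lines of code. This is a minimal sketch in Python (my choice of language, not the essay’s), with illustrative step sizes; it generates the circle from the polar form, checks each point against the Cartesian invariant, and walks the turtle’s “forward a little, turn inward a little” path:

```python
import math

R = 3  # the circle of radius three around the origin

# Polar: the entire curve is "r = 3"; the angle is the only free parameter.
polar = [(R * math.cos(t), R * math.sin(t))
         for t in (2 * math.pi * k / 360 for k in range(360))]

# Cartesian: membership is the invariant x^2 + y^2 = 3^2.
assert all(math.isclose(x * x + y * y, R * R) for x, y in polar)

# Turtle: start on the circle facing along it, then repeat
# "walk forward a small step, turn inward by a small angle".
x, y, heading = R, 0.0, math.pi / 2   # at (3, 0), facing counterclockwise
step = 2 * math.pi * R / 360          # arc length covered per step
for _ in range(360):
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    heading += 2 * math.pi / 360      # turn inward one degree
assert math.hypot(x - R, y) < 1e-6    # the walk closes back on itself
```

The polar version is one expression, the Cartesian version is a test rather than a recipe, and the turtle version is a loop of body-relative commands – which is the whole point of the comparison.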
We’ve invented notation that, for this problem, compresses the task as much as possible. The radius is specified; the facts that it’s a circle centered on the origin are implicit in the conventional meaning of $r$ and the lack of other information. It’s been maximally compressed (related technical term: Kolmogorov complexity).

Compression is one of the best tools we have for fighting complexity. By definition, compression hides the meaningless while showing the meaningful. It’s a continuous spectrum, on which sits a point I’ll call critical complexity. Critical complexity is the threshold above which a significant abstraction infrastructure is necessary. But that definition doesn’t mean much to you — yet.

Think of knowledge as terrain. To get somewhere, we build roads, which in our metaphor are abstractions. Roads connect to each other, and take us to new places. It was trivial to abstract Cartesian coordinates into polar by means of conversions. This is like building a road, with one end connecting to the existing street grid and another ending somewhere new. It’s trivial to represent a circle in polar coordinates. This is what we do at the newly accessible location. We’ve broken a non-trivial problem into two trivial pieces – although it wasn’t a particularly hard problem, as otherwise we wouldn’t have been able to do that.

Delivering these words to your machine is a hard problem. You’re probably using a web browser, which is written in software code, which is running on digital electronics, which are derived from analog electronics obeying Maxwell’s equations, and so on. But the great thing about abstractions is that you only need to understand the topmost one. You can work in polar coordinates without converting back to Cartesian, and you can use a computer without obtaining multiple engineering degrees first. You can build your own network of roads about how to operate a computer, disconnected from your road network about physics.
Or perhaps not disconnected, but connected by a tunnel through the mountain of what you don’t understand. A tunnel is a way to bypass ignorance, letting you learn about some things by relying on knowledge you don’t have, but don’t need. Of course, someone knows those things – they’ve laboriously built roads over the mountain so that you can cruise under it. These people, known as scientists and engineers, slice hard problems into many layers of smaller ones. A hard problem may have so many layers that, even if each is trivial on its own, they are non-trivial collectively. That said, some problems are easier than they look because our own sensemaking abstractions blind us.

If you want to write an analog clock in JavaScript, your best bet is to configure someone else’s framework. That is, you say you want a gray clockface and a red second hand, and the framework magically does it. The user, hardly a designer, is reduced to muttering incantations at a black box, hoping the spell will work as expected. Inside the box is some 200 lines or more, most of it spent on things not at all related to the high-level description of an analog clock. The resulting clock is a cul-de-sac at the end of a tunnel, overlooking a precipice.

By contrast, the nascent Elm language provides a demo of the analog clock. Its eight lines of code effectively define the Kolmogorov complexity: each operation is significant. Almost every word or number defines part of the dynamic drawing in some way. To the programmer, the result is liberating. If you want to change the color of the clockface, you don’t have to ask the permission of a framework designer, you just do it. The abstractions implicit in Elm have pushed analog clocks under the critical complexity, which is the point above which you need to build a tunnel. There’s still a tunnel involved, though: the compiler written in Haskell that converts Elm to JavaScript. But this tunnel is already behind us when we set out to make an analog clock.
Moreover, this tunnel leads to open terrain where we can build many roads and reach many places, rather than the single destination offered by the framework. What’s important isn’t the avoidance of tunnels, but of tunnels to nowhere. Each abstraction should have a purpose, which is to open up new terrain where abstractions are not needed, because getting around is trivial.

However, the notion of what’s trivial is subjective. It’s not always clear what’s a road and what’s a tunnel. Familiarity certainly makes any abstraction seem simpler. Though we gain a better grasp on an abstraction by becoming familiar with it, we also lose sight of the underlying objective nature of abstractions: some are more intuitive or more powerful than others. Familiarity can be gained both by understanding where an idea comes from and how it relates to others, and by practicing using the idea on its own. I suspect that both together are better than either alone. With familiarity comes automaticity, where we can quickly answer questions by relying on intuition, because we’ve seen them or something similar before. But depending on the abstraction, familiarity can mean never discarding naïveté (turtle), contorting into awkward mental poses (Cartesian) – or achieving something truly elegant and powerful.

It’s tempting to decry weak or crippling abstractions, but they too serve a purpose. Like the fancy algorithms that are slow when n is small, fancy abstractions are unnecessary for simple problems. Yes, one should practice using them on simple problems so as to have familiarity when moving into hard ones. But before that, one needs to see for oneself the morass that weak or inappropriately-chosen abstractions create. Powerful abstractions, I am increasingly convinced, cannot be constructed on virgin mental terrain. For each individual, they must emerge from the ashes of an inferior system that provides both experience and motivation to build something stronger.
### Abstraction and Standardization

What is the future of art? What media will it use? Computers, obviously. Information technology is very good at imitating old media: drawing programs, music programs, word processors designed for playwrights or authors. But none of these tap into the intrinsic strengths of the computer, which is able to do something no other medium can: simulate.

Bret Victor, the man so demanding of user interfaces he left Apple, is dissatisfied with the tools available to artists that allow them to simulate. So he made his own, and gave a one-hour talk on it. Those interested should definitely take the time to watch it, but to summarize, he demonstrates the power of simulation in creating art that is part animation and part performance, with the human and computer reacting to one another. He then lifts the curtain and shows us the tools he used to simulate the characters in the scene, and it’s not code. Instead, it’s a drawing program, with lines and shapes, that he uses to define behavior. Code, he points out, is based on algebra, but his system is based on geometry. Finally, he concludes with a short performance that he built with these tools. Higher is the story of earth, from the stars to cells to civilization to space travel back to the stars.

What blew my mind about Higher is that a few years ago, I had independently created a short film on exactly that topic, with exactly the same background music (Kyle Gabler’s Best Of Times from World of Goo). Victor’s piece was far more polished, but we had both been inspired by the same music to express the same idea, the journey of life to the stars. Remember when I complained about not finding people who shared my narrative? So this is what that feels like.

What drove Victor to create his tools was the belief that art is an attempt to communicate that which cannot be put into words. By binding simulation to lingual code, we make it inaccessible and unsuitable for art and artists.
Direct manipulation of the art, which is how art has been created going back to cave paintings, allows the artist to interact with and lend emotion to the art in ways not possible through code’s layer of indirection, of abstraction. The reason artists’ needs have been neglected by developers is that, for the rest of the world, code works just fine. As I’ve previously blogged, language is one of humankind’s most powerful inventions. The direct manipulation that is liberating to the artist is confining to the engineer. Language is how we manage many layers of abstraction at once; without it we are reduced to pointing and grunting. It’s harder to communicate with a computer in code than through a well-designed direct manipulation interface, but code is more powerful.

In the sciences, a good result is consistent with what is already known; in art, a good piece is unexpected and shakes our established worldview. More fundamentally, the sciences observe and record some objective outside truth; art looks inward to offer one of many interpretations of the subjective human experience.

This tension that we see between science and art also shows up in schools. In a recent TED talk, Sir Ken Robinson extols diversity as a fundamental human trait, which schools attempt to erase and replace with standardization. We agree that standardization has its place, but I personally think he downplays its importance. Standardization is writing, is language; those things can’t happen without common ways of thinking. At first, children need to explore concepts and use their own terms, without a top-down lesson plan imposed by school administrators. Nevertheless, the capstone is always learning what the rest of the world calls it. That isn’t smashing creativity, but rather empowering the child to learn more about the topic from others and from reference sources. It’s creating a minimum level of knowledge common to every adult member of society, which is assumed by all media.
Being able to communicate facts with others isn’t just the result of education, it’s what makes education possible in the first place. With language, groups of people can unambiguously refer to things not present, a shared imagination. Verbalization is a form of abstraction.

Let’s get back to the role of diversity in school. Students should be able to explore what interests them, but the converse is not true: some topics must be taught to everyone, even if some people do not find them interesting. This is especially true before high school. I know you’re not passionate about fractions, Little Johnny, but you need to learn them. Society expects everyone to have a minimum level of competence in every subject. Additionally, passion for a field isn’t always “love at first sight”. The future mathematician isn’t always the first in the class to get basic arithmetic.

Although the curriculum needs to be largely standardized, the pedagogy does not. The neglect of diversity in schools is most heavily felt not in what kids are or are not learning, but in how they are learning it. The inflexibility imposed on lesson plans is degrading to teachers and failing our kids. Teachers should be trusted to adapt lessons to their class, and empowered with testing results they find useful, early enough to use them. Standardized testing as it exists today does not fit the bill. Every student needs to achieve the same core competencies, but the paths to doing so will be as diverse as the children themselves. A broad exposure to both methods and topics promotes the development not just of knowledge, but of personality and identity.

The reason to have art in school isn’t to improve test scores but because it’s part of being human. To be more precise, we should distinguish between “the arts” and “art”. The arts are how to create with the media classically used for art: paint, music, poetry, drama, dance, and so on.
Like any other discipline, the arts require a standardized language to record and transfer this knowledge. Sometimes it’s plain English, sometimes it’s jargon, sometimes it’s symbols, but it’s still an agreed-upon abstraction. Diversity of ideas expressed in the language is inventive and healthy; diversity of the language itself is nonstandard and chaotic. With this in mind, the arts take their place at one end of a spectrum of knowledge: mathematics, natural science, social science, and history. And the arts.

But art is something entirely different. It is the personal and emotional perception of an experience that communicates without words. Art is direct and concrete; it is subjective and sublime. Much of the arts is an attempt to create art. Victor’s tools advance the arts; what he creates with them is art. It’s a defensible position to say that art, because it does not rely on language as all the other fields of knowledge do, is not knowledge at all. But I’ll indulge Victor and say that not all knowledge can be verbalized. That doesn’t mean that art is beyond classification; Victor and I saw the same artistic ideas in the same piece of lyricless music.

Conversely, just because something is written down doesn’t mean it’s standardized or useful knowledge. Recently, the mathematics community has been bewildered by an inscrutable set of papers which claim to prove a fundamental piece of number theory. No one can decipher them to tell if the proof is valid, and their author has not been forthcoming with an oral explanation. So in extreme cases, the analogy between language and standardization breaks down. The wordless expression is more coherent than the words.

For all the knowledge that abstract language has brought us, ineffable art remains part of the human experience. It is important for our children to learn about art to become mature and thoughtful adults.
It is equally important for us to provide tools that support the nonverbal side of thought, to engage the visual and auditory parts of our brains in ways words never can. These are the same failure: the refuge in abstraction, the desire to have everything neat and orderly and predictable. Art exists to explore ambiguity and paradox; it does not demand simple answers but asks complex questions.

A lot of futurists imagine a time when technology makes everything easy. There is a faith in technological convergence, where everything speaks the same language and interacts intelligently and flawlessly. But historically we see technologies become incompatible. If there’s an open standard underneath, such as email, you still get dozens of providers and clients; and if there’s not, you get the walled gardens of social media, loosely tied together by third-party “integration”. What’s important to realize is that the path of technology is not fixed. Our gadgets don’t have to make us more productive and connected; they can make us more artistic and provide privacy, if we design them so. We should stop aspiring to a monoculture of technology because, not only will it not happen for technical and economic reasons, it shouldn’t happen. Standardized technology leads to standardized thinking, especially when coupled with standardized social institutions. Creativity is not only what drives technology further, but what drives art and humanity as well.

### This Is You: Agency in Education

This is the opening of the ambient puzzle game Osmos, by Hemisphere Games. “This is you,” is all it says, as if you’ve always been a glowing blue orb. Most games start by introducing the player to their avatar, but it’s usually a human character with a backstory. Puzzle games are an exception: they rarely give the player an avatar whatsoever. Normally you play an unacknowledged manipulator of abstract blocks according to fixed rules and patterns. Osmos is an exception to the exception.
Osmos also makes masterful use of player training and learning curve. It begins in the simplest possible setting with the simplest possible task: “Move to the blue circle and come to a stop.” You accelerate by ejecting mass, which propels the rest of you in the opposite direction. The game tells you these things in the same order I relayed them to you: first the objective, then the means. Osmos could have said, “Hey, look how you can move by ejecting mass! Now use this ability to move to this outlined circle.” But it didn’t. The progression is guided, focused, and objective-based, especially at first. The levels build on each other, reinforcing knowledge from previous levels as the player gains experience.

Impasse 1: In a rare moment of explanation, Osmos introduces players to the idea of using ejected mass to move the red motes out of the way so they can get to the blue ones.

Impasse 2: Immediately afterwards, players are asked to apply that principle in a puzzle that looks harder than it is.

Osmos presents players with the Odyssey, a sequence of levels that introduce gameplay concepts in a logical order. The Odyssey runs from the tutorial described above up through medium difficulty levels. After that, players gain access to a Sandbox mode where they can explore different game types at different difficulties. That is, a level of Osmos is distinguished not only quantitatively, by difficulty, but qualitatively, by the kinds of game mechanics and obstacles found. More fundamentally, Osmos is played in discrete levels that can be won, lost, restarted, and randomized, rather than as an endless arcade of increasing difficulty like Bejeweled. Players can skip to and play any level they have unlocked at will; a session of Osmos can last three minutes or three hours. Players are incentivized to complete the Odyssey and get as far as they can in the sandbox, but there’s no climactic end of the game.
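The movement rule described above – eject a bit of mass, recoil the other way – is just conservation of momentum. Here is a one-dimensional sketch in Python; the masses and speeds are invented for illustration and come from physics, not from Osmos itself:

```python
# A motionless blob ejects a small speck and recoils the other way.
m, v = 10.0, 0.0    # blob mass and velocity (illustrative units)
dm, u = 0.1, -5.0   # ejected mass, fired to the "left" at speed 5

# Momentum is conserved:  m*v = dm*u + (m - dm)*v_new
v_new = (m * v - dm * u) / (m - dm)

# v_new is positive: the blob drifts "right", but is now slightly lighter.
```

The interesting gameplay consequence is right there in the algebra: every push costs you mass, so movement itself is a resource to be spent carefully.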
No explanation is given why some levels seem to take place in a petri dish while others in orbit around a star; it’s wonderfully abstract in that regard. It’s impossible to “win” and there’s no victory cutscene. It’s neither so boring and open-ended you don’t want to play nor so scripted you only want to play once. There are achievements (badges) awarded, but they seem extraneous to me.

And now to the point of all this: what can we learn from Osmos when designing software for education? By the structure of its gameplay and incentives, Osmos lends itself to the sporadic and time-limited technology access found in many schools. Instead of leaving behind students who didn’t win the game, or trying to pry a child who’s “gotten far” away from the computer or tablet, it’s easier to take a break from Osmos. Meanwhile the nature of gameplay means that it’s very much a solitary experience, a personal journey of discovery. For all the hype given to social gaming over the last few years, it’s not conducive to deep thinking.

And yes: agency, in the sense of being a specific agent. In Osmos, the player is someone, or at least something: this is you. As I’ve said, most puzzle games don’t give the player anything to latch on to. Neither does formal arithmetic, nor algebra. Symbol manipulation provides no agency. It forces mathematics into the unnatural third person perspective (unnatural from a human’s point of view). When I played with blocks as a child I would often imagine an ant climbing and exploring my structures. Pen and paper mathematics allows the mathematician to move blocks around but not to be an ant inside his or her own creation.

Seymour Papert developed LOGO to provide children with agency. LOGO is a cross between a game and a programming language. Players manipulate the “Turtle” by telling it to turn left or right or move forward, where forward is relative to how it is turned. When children first encounter difficulties, they are told to “play turtle”.
By moving their own bodies through space, they are able to debug and refine their program. And by thinking about how they move their own bodies through space, they are given a tangible grip on both computation and thinking.

Scratch was developed at the MIT Media Lab, which was co-founded by Papert. Scratch, though very much a descendant of LOGO, adds more commands to increase the user’s power and control. Many of the commands were discussed by Papert in his book Mindstorms or seem to be reasonable extensions of it. Others (thought balloons, sounds, arithmetic, color effects) are superfluous. Still others, like if-else statements, while loops, and Boolean operations, are taken from the nuts and bolts of programming. This comes at the cost of downplaying the two high-level skills which Papert thought were so vital to learning any subject: subprocedures and debugging. With LOGO, children learned to compartmentalize knowledge into reusable pieces, and to make incremental improvements to see the results they wanted.

One of LOGO’s defining characteristics was its limited set of commands, which are relative to the current position and heading of the Turtle. Osmos players can eject mass in any direction, but nothing more. In both cases, artificial scarcity of control forces users to think in a particular way. On the other hand, Scratch freely mixes LOGO-style “move forward” with Cartesian commands, both relative (“move up by”) and absolute (“move to”). It’s impossible to have agency with something that can be teleported across the map. Rather than force the user out of lazy and weak ways of thinking, Scratch offers multiple paths, and users take the one of least resistance. Often this will be a hodgepodge of many different styles and systems of commands, reflecting incomplete and imprecise thinking. The large number of commands creates a cluttered and unintuitive interface. 78% of Scratch’s default interface is control while only 22% of it is the canvas.
The results, the thing the user cares about, are squished in a corner. Osmos has minimal controls that disappear when not in use, leaving the entire screen as the portal into the game world. Moreover, Osmos has just enough visual detail to be eye candy and not clutter. Games, in general, have excellent usability because bad usability is by definition not fun.

Scratch’s default user interface, with overlaid percentages. A similar image for Osmos would be 100% purple.

The differences in the command set and user interfaces reflect the different purposes of the software. Scratch is meant to provide a canvas for a play or an animation, and so gives the user plenty of options for control. Osmos and LOGO are both puzzles in the sense that the controls are extremely few, yet powerful. A tool is designed to give a competent user maximum power to create; a puzzle is designed to teach new ways of thinking under constraints. By this metric, Scratch has more in common with the CAD software used by engineers to design mechanical parts than it has with Osmos and LOGO.

But there is another feature that groups the three differently. Both LOGO and Scratch are sandboxes; they enforce no requirements or value judgements on the player’s actions. Papert envisioned a teacher guiding the student and keeping her on task. Osmos takes a different route. As a game, it has clear objectives to complete and obstacles to avoid. There are good moves and bad moves. There are levels, with definite beginnings and ends. The Odyssey is just a long tutorial: it presents each feature and some advanced ideas before handing the player full control. Scratch and LOGO do just that as soon as they’re opened. In particular, Scratch provides no guidance on its cockpit’s worth of controls.

There is a misconception, common among edtech types but not among traditional teachers, that the answer to all problems is better distribution. People are ignorant because they don’t have access to knowledge.
People can’t code because they don’t have access to the software and documentation. But this is simply not true. Give people tools and they won’t know what to do with them or how to use them. Instead, we need to give students of all ages training, knowledge, and understanding. We need to force students to think about wrong ideas and make them right, and to see why they are right. We need to show students the metacognitive tools to solve problems. An educational game isn’t about what to think, but how to think.

Now read the follow-up post: Beyond Agency: Why Good Ideas Only Take Us So Far.

### Internet Idea Books: Roundup, Review, and Response

What Technology Wants (Kevin Kelly, 2010) is a sweeping history of technology as a unified force which he calls “the technium”. Kelly starts slowly, drawing ever larger circles of human history, biological evolution, and the formation of planet earth from starstuff. His scope, from the Big Bang to the Singularity, is unmatchable. But the purpose of this incredible breadth is not readily apparent, and isn’t for the first half of the book, as Kelly talks about everything but technology. I advise the reader to sit back and enjoy the ride, even if it covers a lot of familiar ground.

In one of several chapters on evolution, Kelly argues that the tree of life is not random, but instead is constrained by chemistry, physics, geometry, and so on. The author points to many examples of convergent evolution, where the same “unlikely” anatomical feature was evolved multiple times independently. For example, both bats and dolphins use echolocation but their common ancestor did not. Kelly is careful to attribute this phenomenon to the constraints implicit in the system and not to supernatural intelligence. He argues that, in the broadest strokes, evolution is “preordained” even as the details are not.
Kelly begins the next chapter by noting that evolution itself was discovered by Alfred Russel Wallace independently and concurrently with Charles Darwin. This becomes the segue into convergent invention and discovery; Kelly insists that the technium should be regarded as an extension of life, obeying most of its rules, although human decision replaces natural selection. Technology becomes an overpowering force that loses adaptations as willingly as animals devolve (which is to say, not very). The premise that technology extends life becomes central to Kelly’s predictions. He paints a grandiose picture of technologies that are as varied and awe-inspiring as the forms of life, encouraging ever more opportunities in an accelerating dance of evolution. “Extrapolated, technology wants what life wants,” he claims, and lists the attributes technology aspires to. Generally speaking, Kelly predicts technological divergence, where your walls are screens and your furniture thinks, and the death of inert matter. Like the forms of life, technology will specialize into countless species and then become unnoticed, or even unnoticeable. Much of what Kelly predicts has already happened for passive technologies. We don’t notice paper, government, roads, or agriculture. But I don’t think that information technology will achieve the same saturation. No matter how cheap an intelligent door becomes, a non-intelligent version will be cheaper still, and has inertia behind it. Kelly claims that such resistance can only delay the adoption of technology, not prevent it. Nevertheless, something about Kelly’s book disturbed me. It was wrong, I felt, but I couldn’t articulate why. So I read a trio of books that take a more cautious view of information and communication technologies. As I read, I asked of them: what has the internet taken from us, and how do we take it back? ### How to save the world The end of World War I was a bad time to be an optimist.
It wasn’t that millions of young men had died or that western Europe had been transfigured into a hellish bombed-out landscape, although that was certainly true. It was the inescapable philosophical consideration that civilization had done this to itself. The “progress” of the industrial revolution and German unification led inexorably to total war. Civilization itself was fundamentally flawed and unsustainable; the only alternative was to admit Rousseau was right and go back to the trees. Of course, that’s not what happened, and twenty years later they were at it again. The technology changed dramatically, but it didn’t change the fact that people were still killing each other, only how they did it. The changes that mattered were the social institutions built afterwards. Instead of the outrageous reparations in the Treaty of Versailles, there was the conciliatory Marshall Plan. Instead of the League of Nations, there was the United Nations. It wasn’t technological improvements that saved lives and improved the quality of living after the war. It was the people, with their resiliency, their forgiveness, and their intent not to make the same mistake twice. We now find ourselves, once again, on the brink of destruction. It is not destruction by military means, but rather by economic and environmental means. Natural resources are being depleted faster than they can be renewed, if they can be renewed at all. Industrialization has spread concrete, steel, and chemicals across previously untouched land. The established political institutions are being challenged by forces as diverse as the Arab Spring and the Occupy movement. The economy is still largely in shambles. And then there’s the small matter of climate change. And so on. We’ve heard it all before. At TED 2012, this grim view was presented by Paul Gilding (talk, follow-up blog post). He’s pretty blunt about it: the earth is full. Around a third of the world lives on less than two dollars a day.
They have dramatically different cultures, education, living conditions, and access to technology from the typical American or European. Do you honestly think that they’re the ones that are going to fix the problems? The people who are illiterate, innumerate, and don’t know where their next meal is coming from are going to fix climate change? Depending on your answer, I have two different responses. I’ll give both of them, but you might want to think about it first.
# Are scientists really earning this much? 1. Sep 24, 2010 ### Tom83B http://wiki.answers.com/Q/What_are_the_highest_paying_jobs In this article, they say that the average astronomer's salary is more than $90,000 and that physicist is the 25th best-paying job... I somehow always thought that physicists would earn more by begging on the street. Are the salaries really that good, or is the average so high just because there must be some physicists who earn a lot and raise the average? 2. Sep 24, 2010 ### ZapperZ Staff Emeritus 3. Sep 24, 2010 ### D H Staff Emeritus First, a bit of a caveat: An astronomer typically has a PhD. Comparing astronomers to the typical engineer is not quite a fair comparison. A better comparison results when you compare astronomers to engineers with PhDs. Astronomy doesn't look quite so lucrative in that light. It's still pretty dang good, however. That said, salaries in technical fields (medical doctors excluded) are fairly flat. Lawyers and doctors are a different story, particularly lawyers. The salary range for lawyers is huge. A small number make incredibly vast amounts of money. Most do not. Here is the salary distribution for starting salaries for lawyers in 2006 (source: http://blogs.payscale.com/ask_dr_salary/2007/09/median-vs-mean-.html): http://blogs.payscale.com/.a/6a00d8341bf85853ef0134821750c1970c-pi Those are starting salaries, mind you; the disparity grows with time. The mode in that curve is $42,000 per year (2nd mode: $135,000). You just won't see something like that for science, technology, engineering, and mathematics (STEM). Salaries are more or less unimodal and the distribution is fairly tight. A very influential scientist might make 3 times what a freshout makes. The only way to make vast sums of money is to move beyond being a scientist, engineer, or mathematician.
Presidents of prestigious universities and CEOs of large engineering firms can make large amounts of money -- but they aren't really scientists or engineers anymore. 4. Sep 24, 2010 ### Academic I think this statistic is kind of a misrepresentation. Most people who get PhDs in an astronomy related field do not become astronomers. So most people with a PhD in astronomy are not counted in that stat. Similarly, I would expect AIP's survey to be skewed towards the high wage earners because they are the ones who actually join and stay active with AIP. 5. Sep 24, 2010 ### D H Staff Emeritus And you know this because? Those stats are quite in line with surveys done by schools, professional organizations, and government organizations contacting people who have degrees in various fields. They are not limited to people who are active with professional organizations such as the AIP. My experience: Most people who get a PhD in some field work in that field. Some don't, but that is often because they have found even better opportunities elsewhere (e.g. PhD physicists and mathematicians who become quants). The number of underemployed PhD astronomers (or physicists) is quite small. Last edited: Sep 24, 2010 6. Sep 24, 2010 ### Phyisab**** Where do people get this idea that physicists make 30k a year? 80-90k a year seems pretty standard for professors, as well as experienced engineers and scientists in industry. For accomplished senior employees in industry, it could easily be as high as 120k. 7. Sep 24, 2010 ### Tom83B Well, I started to think that after reading this thread: http://translate.google.com/transla...debaran.cz/forum/viewtopic.php?t=414&act=url" Check the third from the bottom. By the way, 1 USD is about 22 CZK, and 26,000 is about the average salary here, which I think is very little for a professor. Maybe scientists are more valued in the US. That's why I'm really surprised that they are paid so well... And also the PhD comics...
:) I know that these are graduate students, but it kind of makes me feel that there's very little money in the field. Thanks for your answers. Last edited by a moderator: Apr 25, 2017 8. Sep 25, 2010 ### rhombusjr Probably for a newly hired non-tenured professor, the salary is a bit lower ($30,000 - $50,000 perhaps?). Like most professions, you don't start right off the bat with the higher salaries in your profession. A new Assistant Professor who has only had their PhD for <5 years and has only a handful of authored papers will obviously make less than a full Professor who's been teaching and doing research at a school for over 20 years. 9. Sep 25, 2010 ### jtbell ### Staff: Mentor It also makes a difference (in the USA) whether you're starting out as an entry-level assistant professor at a generic small liberal-arts college or at a major research university. 10. Sep 25, 2010 ### Pengwuino I think when the public is ignorant of what a certain group of people actually do, they assume they must not be paid well either. We all know exactly what engineers, doctors, and lawyers do, and there is no question about how much they can make. An anthropologist, however, is someone that I really don't know what they do, and my immediate assumption is they probably aren't paid well for whatever they actually do. I think it's a basic human thing to think "If I don't know what they do, they must be fairly useless to society, so who would want to pay them anything?" 11. Sep 25, 2010 ### twofish-quant In academia, professors make decent amounts of money, but the trouble is that there aren't enough jobs for all the people that want them, so the salaries for entry level positions (i.e. post-docs) tend to be low. However, there are lots of jobs in industry, and $90K is a reasonable salary level for someone with an astrophysics Ph.D. that ends up working in industry. 12. Sep 25, 2010 ### twofish-quant Assistant professors make about $60-$70K. $30K-$50K is at post-doc level.
I should point out that professor salaries are public records for state schools, so you can go to the library and get a list of what each professor in a state school makes. That's not obviously true in physics. In physics and academia, your salary is pretty dependent on what your field is, more so than seniority. Entry level finance professors make a *LOT* more than professors in medieval French lit, and if you want to pull in the megabucks, look at the football coach. Also, academia is one of the few areas in which professors are encouraged to have outside income. I know a lot of professors that make money (sometimes large amounts of money) from a side-business. 13. Sep 25, 2010 ### twofish-quant And people that go into Wall Street end up making $250-300K, and there are people with physics Ph.D.'s that make $1M+. Last edited: Sep 25, 2010 14. Sep 26, 2010 ### rhombusjr I was comparing professors in the same field, e.g. an assistant physics professor vs a full physics professor, but thanks for the clarification. What sort of side-jobs do professors typically have? 15. Sep 26, 2010 ### twofish-quant Lots of different ones. Some of them become billionaires by making speakers. http://en.wikipedia.org/wiki/Amar_G._Bose Pretty much every MIT physics professor that I've ever met either was trying to start their own small company or was doing consulting work for large ones. One big difference between academia and industry is that in industry you generally get fired for moonlighting, whereas in academia the university will actively encourage you to start your own company or do outside consulting work. 16. Sep 26, 2010 ### twofish-quant What's really shocking, I think, is not that the general public has no clue what physics Ph.D.'s do, but that people in academia have no real idea what physics Ph.D.'s do, and how much they make. I think that part of "the myth of the starving physics Ph.D."
is that there is this idea that if you don't go into academia, then you'll be spending the rest of your life begging for spare change on street corners. The problem with this idea is that it doesn't really apply to investment bankers or high school teachers. Most people can explain what a high school teacher does and how it is beneficial to society. Most people can't explain what an investment banker does and how they are beneficial to society. Yet investment bankers make more money than high school teachers. 17. Sep 26, 2010 ### D H Staff Emeritus That's not all that shocking. People in academia tend to have misconceptions of what people outside of academia do, period. Ivory tower syndrome. Overcoming this problem is one reason why schools encourage their professors to moonlight, particularly during the summer break. 18. Sep 26, 2010 ### RufusDawes It always confused me how a starting lawyer can make so much money. 19. Sep 26, 2010 ### RufusDawes I've noticed a few people saying they want to go into a field of science to make enormous salaries. Struck me as rather wrong, and fanciful. But we'll see. You can always go work in finance with any sort of strong mathematical background.
# Can a nowhere continuous function have a connected graph? After noticing that the function $$f: \mathbb R\rightarrow \mathbb R$$ $$f(x) = \left\{\begin{array}{ll} \sin\frac{1}{x} & \text{for }x\neq 0 \\ 0 & \text{for }x=0 \end{array}\right.$$ has a graph that is a connected set, despite the function not being continuous at $$x=0$$, I started wondering: does there exist a function $$f: X\rightarrow Y$$ that is nowhere continuous, but still has a connected graph? I would like to consider three cases • $$X$$ and $$Y$$ being general topological spaces • $$X$$ and $$Y$$ being Hausdorff spaces • ADDED: $$X=Y=\mathbb R$$ But if you have answers for other, more specific cases, they may be interesting too. • As Henning points out via example, this is most interesting when $X = \Bbb{R}$ (and possibly where $Y = \Bbb{R}$ too). – Theo Bendit Jun 25 at 14:56 • I wonder whether the Conway base 13 function has a connected graph. – Nate Eldredge Jun 25 at 18:03 • By transfinite induction one can construct a function $f:\mathbb R\to\mathbb R$ whose graph meets every Borel set in the plane whose projection onto the horizontal axis is uncountable. Can such a graph be disconnected? – bof Jun 25 at 19:21 • @TheoBendit Indeed now I see that the case $X=Y=\mathbb R$ is significantly more interesting. I'll add it as another point. – Adam Latosiński Jun 25 at 22:22 • @NateEldredge: It turns out that the graph of the base-13 function is totally disconnected. – Henning Makholm Jun 27 at 13:09 Check out this paper: F. B. Jones, Totally discontinuous linear functions whose graphs are connected, November 23, 1940. Abstract: Cauchy discovered before 1821 that a function satisfying the equation $$f(x)+f(y)=f(x+y)$$ is either continuous or totally discontinuous. After Hamel showed the existence of a discontinuous function, many mathematicians have concerned themselves with problems arising from the study of such functions.
However, the following question seems to have gone unanswered: Since the plane image of such a function (the graph of $$y =f(x)$$) must either be connected or be totally disconnected, must the function be continuous if its image is connected? The answer is no. In particular, Theorem 5 presents a nowhere continuous function $$f:\Bbb R \rightarrow \Bbb R$$ whose graph is connected. Whether Conway base 13 function is such an example remains unknown. (at least on MSE; see Is the graph of the Conway base 13 function connected?) It turns out the graph of Conway base 13 function is totally disconnected. See this brilliant answer. Here is an example for $$\mathbb R^2 \to \mathbb R$$: $$f(x,y) = \begin{cases} y & \text{when }x=0\text{ or }x=1 \\ x & \text{when }x\in(0,1)\text{ and }y=0 \\ 1-x &\text{when }x\in(0,1)\text{ and } y=x(1-x) \\ x(1-x) & \text{when }x\notin\{0,1\}\text{ and } y/x(1-x) \notin\mathbb Q \\ 0 & \text{otherwise} \end{cases}$$ This is easily seen to be everywhere discontinuous. But its graph is path-connected. A similar but simpler construction, also $$\mathbb R^2\to\mathbb R$$: \begin{align} g(1 + r\cos\theta, r\sin\theta) = r & \quad\text{for }r>0,\; \theta\in\mathbb Q\cap[0,\pi] \\ g(r\cos\theta, r\sin\theta) =r & \quad \text{for }r>0,\; \theta\in\mathbb Q\cap[\pi,2\pi] \\ g(x,y) =0 & \quad\text{everywhere else} \end{align} • Very nice examples. They show how easy is to break some continuity with additional dimensions while retaining enough of it to maintain connectedness of the graph. So I've added another case to the question, $f: \mathbb R \rightarrow \mathbb R$, in which such methods won't work. Do you think a function in this case is possible, like the one that John Hughes is constructing? – Adam Latosiński Jun 25 at 22:38 • @AdamLatosiński: I am unsure, and getting nowhere myself. I've been trying to figure out whether the graph of the Conway base-13 function is connected, but without success either way. 
– Henning Makholm Jun 25 at 22:54 • Does $y/x(1 - x)$ mean $\frac{y}{x(1 - x)}$ or $\frac{y}{x}(1 - x)$? – Bladewood Jun 26 at 16:30 • @Bladewood: The former. – Henning Makholm Jun 26 at 17:06 • I would rewrite it, the standard interpretation would be the opposite. – Apollys Jun 26 at 20:31 There is a simple general strategy for many questions of this type, which is to just try to build a counterexample by transfinite induction. Let's first think about what it means for the graph $$G$$ of a function $$f:\mathbb{R}\to\mathbb{R}$$ to be disconnected. It means there are open sets $$U,V\subset\mathbb{R}^2$$ such that $$U\cap G$$ and $$V\cap G$$ are both nonempty and together they form a partition of $$G$$ (we will say $$(U,V)$$ separates $$G$$ in that case). So, to make $$G$$ connected, we just have to one-by-one rule out every such pair $$(U,V)$$ from separating it. So, then, here is the construction. Fix an enumeration $$(U_\alpha,V_\alpha)_{\alpha<\mathfrak{c}}$$ of all pairs of open subsets of $$\mathbb{R}^2$$. By a transfinite recursion of length $$\mathfrak{c}$$ we define values of a function $$f:\mathbb{R}\to\mathbb{R}$$. At the $$\alpha$$th step, we add a new value of $$f$$ to prevent $$(U_\alpha,V_\alpha)$$ from separating the graph of $$f$$, if necessary. How do we do that? Well, if possible, we define a new value of $$f$$ such that the corresponding point in the graph $$G$$ will either be in $$U_\alpha\cap V_\alpha$$ or not be in $$U_\alpha\cup V_\alpha$$, so $$U_\alpha\cap G$$ and $$V_\alpha\cap G$$ will not partition $$G$$. If this is not possible, then $$U_\alpha$$ and $$V_\alpha$$ must partition $$A\times\mathbb{R}$$ where $$A\subseteq\mathbb{R}$$ is the set of points where we have not yet defined $$f$$. Since $$\mathbb{R}$$ is connected, this means we can partition $$A$$ into sets $$B$$ and $$C$$ (both open in $$A$$) such that $$U_\alpha\cap (A\times\mathbb{R})=B\times\mathbb{R}$$ and $$V_\alpha\cap (A\times\mathbb{R})=C\times\mathbb{R}$$. 
Now since we have defined fewer than $$\mathfrak{c}$$ values of $$f$$ so far in this construction, $$|\mathbb{R}\setminus A|<\mathfrak{c}$$ and in particular $$A$$ is dense in $$\mathbb{R}$$. If $$B$$ were empty, then $$U_\alpha$$ would have empty interior and thus would be empty, and so $$(U_\alpha,V_\alpha)$$ can never separate the graph of $$f$$. A similar conclusion holds if $$C$$ is empty, so let us assume both $$B$$ and $$C$$ are nonempty. It follows that $$\overline{B}$$ and $$\overline{C}$$ cannot be disjoint (otherwise they would be a nontrivial partition of $$\mathbb{R}$$ into closed subsets), so there is a point $$x\in\mathbb{R}\setminus A$$ that is an accumulation point of both $$B$$ and $$C$$. Since $$x\not\in A$$, we have already defined $$f(x)$$. Note now that $$(x,f(x))\not\in U_\alpha$$, since $$U_\alpha$$ would then contain an open ball around $$(x,f(x))$$ and thus would intersect $$C\times\mathbb{R}$$. Similarly, $$(x,f(x))\not\in V_\alpha$$. Thus $$U_\alpha$$ and $$V_\alpha$$ already do not contain the entire graph of $$f$$, and so we do not need to do anything to prevent them from separating it. At the end of this construction we will have a partial function $$\mathbb{R}\to\mathbb{R}$$ such that by construction, its graph is not separated by any pair of open subsets of $$\mathbb{R}^2$$, and the same is guaranteed to hold for any extension of our function. Extending to a total function, we get a total function $$f:\mathbb{R}\to\mathbb{R}$$ whose graph is connected. But we can of course arrange in this construction for $$f$$ to be nowhere continuous; for instance, we could start out by defining $$f$$ on all the rationals so that the image of every open interval is dense in $$\mathbb{R}$$. In fact, the construction shows that any partial function $$\mathbb{R}\to\mathbb{R}$$ defined on a set of cardinality less than $$\mathfrak{c}$$ can be extended to a total function whose graph is connected. 
(Or even stronger, you could start with any partial function whose domain omits $$\mathfrak{c}$$ points from every interval, since that is all you need to guarantee that the set $$A$$ is dense at each step.) • Without the last sentence, you might have ended up with $f(x)=0$ ;) – Hagen von Eitzen Jun 26 at 6:49 • @HagenvonEitzen: There's not actually a need to do anything additional to make the function nowhere continuous. The construction directly implies that every Jordan curve in the plane will intersect the graph, which includes even very small circles anywhere in the plane. So the graph naturally becomes dense in $\mathbb R^2$. – Henning Makholm Jun 26 at 7:55 • @EricWofsey: But whenever you have a Jordan curve, then the set of points inside and outside the curve form a possible pair of open $U_\alpha$ and $V_\alpha$ for your construction. There's no point in $U_\alpha\cap V_\alpha$ one can add to the graph, so your construction will add a point outside $U_\alpha\cup V_\alpha$ to the graph instead. But $\mathbb R^2\setminus(U_\alpha\cup V_\alpha)$ is exactly the set of points on the Jordan curve. – Henning Makholm Jun 26 at 16:57 • Even simpler, for every nonempty open $S\subseteq \mathbb R^2$ and $s\in S$, there will be an $\alpha$ such that $(U_\alpha,V_\alpha)=(\mathbb R^2\setminus\{s\},S)$. Then the construction must add a point from $U_\alpha\cap V_\alpha \subseteq S$ to $G$. Since $S$ was arbitrary open, this means that $G$ becomes dense. – Henning Makholm Jun 26 at 20:35 • Stronger yet, the construction ensures that $f$ is "strongly Darboux" -- i.e., $f([a,b])=\mathbb R$ for every $a<b$. For every $y$ we can show that $G$ intersects $[a,b]\times\{y\}$: WLOG $f(a)\ne y$ and $f(b)\ne y$; now extend $[a,b]\times\{y\}$ with a ray that goes straight upwards or downwards from $(a,y)$ such as to avoid $(a,f(a))$, and a ray that goes straight up or down from $(b,y)$ and avoids $(b,f(b))$.
The resulting curve divides $\mathbb R^2$ into two nontrivial open regions, so just like the Jordan curve it intersects $G$. But the vertical parts don't, by construction. – Henning Makholm Jun 26 at 21:00 Great question, and I don't have an answer for you, but I've got some small thoughts: By summing up weighted and displaced copies of $$f$$, you can get discontinuities at many places. For instance, you could write $$F(x) = \sum_{n \in \Bbb Z} \frac{f(x-n)}{1+n^2}$$ That'll have an $$f$$-like discontinuity at every integer. Digression A comment asks whether the graph is still connected. Let me show that it is at $$x = 1$$ as an example, which should be reasonably compelling for other integer points. (For non-integer points, $$F$$ is continuous, so we're fine). Write \begin{align} F(x) &= \frac{1}{2} f(x-1) + \sum_{n\ne 1 \in \Bbb Z} \frac{f(x-n)}{1+n^2}\\ &= \frac{1}{2} f(x-1) + G_1(x) \end{align} where $$G_1$$ is a function that's continuous at $$x = 1$$. Let's look at the graph of $$F$$ near $$1$$, say on the interval $$(3/4, 5/4)$$. It's exactly $$K = \{ (x, \frac{1}{2} f(x-1) + G_1(x)) \mid 3/4 < x < 5/4 \}$$ Contrast this with the graph of $$f$$ near $$0$$, which is $$H = \{ (x, f(x)) \mid -1/4 < x < 1/4 \}$$ and which we know (from standard calculus books like Spivak) to be connected. Now look at the function $$S : K \to H : (x, y) \mapsto (x-1, y - G_1(x))$$ This is clearly continuous and a bijection (and even extends to a bijection from a (vertical) neighborhood of $$K$$ to a neighborhood of $$H$$), so $$K$$ is also connected. End of digression And then for numbers with finite base-2-expansions, you can do the same sort of thing: let $$G(x) = \sum_{k \in \Bbb Z, k > 0} \frac{1}{2^k} F(2^k x)$$ and that'll have $$f$$-like discontinuities at all the points with finite base-2 representations, which is a dense set in $$\Bbb R$$. But I have a feeling that sliding over to the uncountable-set territory is going to be a lot harder. 
• This is a good way to get functions which are discontinuous at many points, but are the graph of $F$ and the graph of $G$ still connected? – Adam Chalumeau Jun 25 at 14:46 • Well...they could only be disconnected at their points of discontinuity. And (for $F$ at least) at those points the graph is (roughly) the sum of something linear (the derivative approximation) and the graph of $f$; applying a shearing operation gets rid of the linear part, and you've got something a lot like the graph of $f$. I'll add details. – John Hughes Jun 25 at 15:16 • @AdamChalumeau: See "Digression" in which I prove that the graph of $F$ is nice. For $G$, it's presumably tougher. – John Hughes Jun 25 at 15:24
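The partial sums in this answer are easy to experiment with numerically. Here is a small sketch (my own illustration, not code from the answer) that truncates $F(x) = \sum_{n\in\Bbb Z} f(x-n)/(1+n^2)$ with $f(x)=\sin(1/x)$ for $x\neq 0$ and $f(0)=0$; the $1/(1+n^2)$ weights make the truncation error easy to control, and sampling ever closer to $x=1$ exhibits the $f$-like oscillation at the integers:

```python
import math

def f(x):
    # the topologist's sine curve: sin(1/x) for x != 0, and 0 at x = 0
    return math.sin(1.0 / x) if x != 0 else 0.0

def F(x, N=200):
    # truncate the sum over n in Z to |n| <= N; the tail is bounded by
    # sum over |n| > N of 1/(1+n^2), which is negligible for N = 200
    return sum(f(x - n) / (1 + n * n) for n in range(-N, N + 1))

# Approaching the integer 1 from the right, F keeps oscillating
# (the 1/2 * f(x-1) term dominates the variation), so no limit exists:
samples = [F(1 + 10**-k) for k in range(3, 8)]
```

The samples stay bounded (the weights sum to about 3.15) but do not settle down, mirroring the discontinuity of $f$ at $0$.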
# Do prime numbers have prime factors? (This is a somewhat trivial question.) Do prime numbers have prime factors, i.e. themselves? For example, is 7 a prime factor of 7? The reason I ask is that there is a statement in my lecture notes: If a number $n>1$ is not prime, then it has a prime factor. I was hoping to restate this more generally as: all integers $n>1$ have prime factors. Also, does it make more semantic sense to say that a prime $p$ is an integer, a natural number (i.e. $p\in \{0,1,2,\dots\}$), or a positive integer? This statement from your lecture notes If a number $n>1$ is not prime, then it has a prime factor. is true - but it's not a very good way to say what it's trying to say. Here's an expanded version. Every integer $n > 1$ has a prime factor. If $n$ happens to be prime then that prime factor is $n$ itself. If $n$ is not prime then it has a prime factor less than itself. For the last part of your question: every prime is an integer, a natural number and a positive integer, since every positive integer is also a natural number and an integer. It's probably best to use the most restrictive description - a prime is a positive integer - in fact, an integer greater than 1. • Except primes exist in all Euclidean domains – Zelos Malum Sep 23 '16 at 2:47 • @ZelosMalum Yes, and there are primes in other rings too. But that level of abstraction is probably not useful for the OP at his level of learning. – Ethan Bolker Sep 23 '16 at 12:42 • I think it is good to mention others at the end, just so they see there is much more, without necessarily going into detail – Zelos Malum Sep 23 '16 at 13:12 Yes, all integers $n>1$ have prime factors. Composite numbers have prime factors less than themselves. Prime numbers have no prime factors less than themselves.
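The expanded statement (every integer $n>1$ has a prime factor, which is $n$ itself exactly when $n$ is prime) can be checked directly by machine: the smallest divisor $d>1$ of $n$ is always prime, since any proper factor of $d$ would be a smaller divisor of $n$. A short illustrative sketch, not from the lecture notes:

```python
def least_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n, for n > 1.

    The smallest divisor d > 1 of n is necessarily prime: any proper
    factor of d greater than 1 would be a smaller divisor of n.
    """
    if n <= 1:
        raise ValueError("n must be greater than 1")
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    # no divisor up to sqrt(n): n is prime and is its own prime factor
    return n

# A prime is its own prime factor:
assert least_prime_factor(7) == 7
# A composite has a prime factor strictly less than itself:
assert least_prime_factor(91) == 7
```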
Oxidation States and Ionization Energies (CHEM-CROVZY) Which of the following statements about oxidation states and ionization energies are true? Select ALL that apply. A Vanadium exists as ${ V }^{ 5+ }$ as its highest oxidation state. B Iron exists as ${ Fe }^{ 4+ }$ within hemoglobin. C Chromium is orange in color due to its highest oxidation state, ${ Cr }^{ 6+ }$. D Manganese exists as ${Mn}^{ 7+}$ as its highest oxidation state, and this is the state present in living organisms.
# Probability Seminar presents "Fast and memory-optimal dimension reduction using Kac's walk" Topic: Fast and memory-optimal dimension reduction using Kac's walk Monday, October 26, 2020 - 4:00pm Venue: Zoom Speaker: Vishesh Jain (MIT) Abstract / Description: Introduced in the 1950s by Mark Kac as a toy model for a one-dimensional Boltzmann gas, the Kac walk is the following simple and well-studied Markov chain on the special orthogonal group: at every time step, sample two distinct uniform coordinates $i,j$ and a uniform angle $\theta$, and rotate in the $(i,j)$-plane by $\theta$. In this talk, I will discuss how the Kac walk can be used for the purpose of dimensionality reduction, specifically, for the design of linear transformations with optimal Johnson–Lindenstrauss and Restricted Isometry properties, and which support memory-optimal fast matrix-vector multiplication. I will also discuss the performance of a variant of the Kac walk, for which $\theta = \pi/4$ at every time step. This is joint work with Natesh S. Pillai (Harvard), Ashwin Sah (MIT), Mehtaab Sawhney (MIT), and Aaron Smith (U Ottawa).
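The chain described in the abstract is simple enough to sketch in a few lines. The following is my own minimal illustration (not the speaker's code): each step picks two distinct coordinates and applies a Givens rotation in that coordinate plane, with the angle either uniform or fixed at $\pi/4$ as in the variant mentioned at the end. Since every step is a rotation, the walk preserves the Euclidean norm exactly; projecting the rotated vector onto its first few coordinates then gives a cheap random linear map, which is the basic idea behind using the walk for dimension reduction.

```python
import math
import random

def kac_step(v, theta=None, rng=random):
    # rotate v in a uniformly chosen (i, j)-coordinate plane by angle
    # theta (uniform in [0, 2*pi) if not given; the variant fixes pi/4)
    n = len(v)
    i, j = rng.sample(range(n), 2)
    if theta is None:
        theta = rng.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    v[i], v[j] = c * v[i] - s * v[j], s * v[i] + c * v[j]

def kac_walk(v, steps, theta=None, seed=0):
    # run the chain for the given number of steps from a fixed seed
    rng = random.Random(seed)
    v = list(v)
    for _ in range(steps):
        kac_step(v, theta, rng)
    return v

# The walk scrambles the coordinates while preserving the norm exactly:
x = [1.0, -2.0, 3.0, 0.5]
y = kac_walk(x, steps=100, theta=math.pi / 4)
```

The guarantees discussed in the talk (how many steps suffice for Johnson–Lindenstrauss-quality projections) are the substance of the result; this sketch only shows the mechanics of the chain itself.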
ODTÜ-BİLKENT Algebraic Geometry Seminar (See all past talks ordered according to speaker and date) **** 2021 Spring Talks **** (The New Yorker, Dec 7, 2020 Cover) This semester we plan to have all our seminars online 1. Zoom, 5 February 2021, Friday, 15:40 Caner Koca-[City University of New York] - Kähler Geometry and Einstein-Maxwell Metrics Abstract: A classical problem in Kähler Geometry is to determine a canonical representative in each Kähler class of a complex manifold. In this talk, I will introduce this problem in several well-known settings (Calabi-Yau, Kähler-Einstein, constant-scalar-curvature-Kähler, extremal Kähler). In light of recent examples and developments, I will elucidate a possible role of Einstein-Maxwell metrics in this problem. 2. Zoom, 12 February 2021, Friday, 15:40 Yıldıray Ozan-[ODTÜ] - Liftable homeomorphisms of finite abelian p-group regular branched covers over the 2-sphere and the projective plane Abstract: This talk is mainly based on joint work with F. Atalan and E. Medetoğulları. In 2017 Ghaswala and Winarski classified finite cyclic regular branched coverings of the 2-sphere, where every homeomorphism of the base (preserving the branch locus) lifts to a homeomorphism of the covering surface, answering a question of Birman and Hilden. In this talk, we will present generalizations of this result in two directions. First, we will replace finite cyclic groups with finite abelian p-groups. Second, we will replace the base surface with the real projective plane. The main tool is the algebraic characterization of such coverings in terms of the automorphism groups of these finite abelian p-groups. Due to computational limitations we have complete results only for groups of rank 1 and 2.
In particular, we prove that for a regular branched $A$-covering $\pi:\Sigma\rightarrow S^2$, where $A={\mathbb Z}_{p^r}\times{\mathbb Z}_{p^t}, \ 1\leq r\leq t$, all homeomorphisms $f:S^2 \to S^2$ lift to those of $\Sigma$, if and only if $t=r$ or $t=r+1$ and $p=3$. If time permits we will also present some applications to automorphisms of Riemann surfaces. 3. Zoom, 19 February 2021, Friday, 15:40 Meral Tosun-[Galatasaray] - A new root system and free divisors Abstract: In this talk, we will  construct a root system for the minimal resolution graph of some surface singularities and we will show that the new roots give linear free divisors. 4. Zoom, 26 February 2021, Friday, 15:40 Tony Scholl-[Cambridge] - Plectic structures on locally symmetric varieties Abstract: In this talk I will discuss a class of locally symmetric complex varieties whose cohomology seems to behave as if they are products, even though they are not. This has geometric and number-theoretic consequences which I will describe. This is joint work with Jan Nekovář (Paris). 5. Zoom, 5 March 2021, Friday, 15:40 Alexander Degtyarev-[Bilkent] - 800 conics in a smooth quartic surface Abstract: Generalizing Bauer, define $N_{2n}(d)$ as the maximal number of smooth rational curves of degree $d$ that can lie in a smooth degree-$2n$ K3-surface in $\mathbb{P}^{n+1}$. (All varieties are over $\mathbb{C}$.) The bounds $N_{2n}(1)$ have a long history and currently are well known, whereas for $d=2$ the only known value is $N_6(2)=285$ (my recent result reported in this seminar). In the most classical case $2n=4$ (spatial quartics), the best known examples have 352 or 432 conics (Barth and Bauer), whereas the best known upper bound is 5016 (Bauer with a reference to Strømme). For $d=1$, the extremal configurations (for various values of $n$) tend to exhibit similar behavior. 
Hence, contemplating the findings concerning sextic surfaces, one may speculate that:
- it is easier to count *all* conics, both irreducible and reducible, but
- nevertheless, in extremal configurations all conics are irreducible.

On the other hand, the famous Schur quartic (the one on which the maximum $N_4(1)$ is attained) has 720 conics (mostly reducible), suggesting that 432 should be far from the maximum $N_4(2)$. Therefore, in this talk I suggest a very simple (although also implicit) construction of a smooth quartic with 800 irreducible conics. The quartic found is Kummer in the sense of Barth and Bauer: it contains 16 disjoint conics. I conjecture that $N_4(2)=800$ and, moreover, that 800 is the sharp upper bound on the total number of conics (irreducible or reducible) in a smooth spatial quartic.

6. Zoom, 12 March 2021, Friday, 15:40
Anar Dosi-[ODTU-Northern Cyprus] - Algebraic spectral theory and index of a variety
Abstract: The present talk is devoted to an algebraic treatment of the joint spectral theory within the framework of Noetherian modules over an algebra finite extension of an algebraically closed field. We discuss the spectral mapping theorem and analyse the index of tuples in the purely algebraic case. The index function over tuples from the coordinate ring of a variety is naturally extended to a numerical Tor-polynomial which behaves like the Hilbert polynomial and provides a link between the index and the dimension of a variety.

7. Zoom, 19 March 2021, Friday, 15:40
Remziye Arzu Zabun-[Gaziantep] - Topology of Real Schläfli Six-Line Configurations on Cubic Surfaces and in $\mathbb{RP}^3$
Abstract: A famous configuration of 27 lines on a non-singular cubic surface in $\mathbb{CP}^3$ contains remarkable subconfigurations, and in particular the ones formed by six pairwise disjoint lines.
We will discuss such six-line configurations in the case of real cubic surfaces from a topological viewpoint, as configurations of six disjoint lines in the real projective 3-space, and show that the condition that they lie on a cubic surface implies a very special property which distinguishes them in the Mazurovskii list of 11 deformation types of configurations formed by six disjoint lines in $\mathbb{RP}^3$. This is joint work with Sergey Finashin.

8. Zoom, 26 March 2021, Friday, 16:00
Türkü Özlüm Çelik-[Simon Fraser University] - Integrable Systems in Symbolic, Numerical and Combinatorial Algebraic Geometry
Abstract: The Kadomtsev-Petviashvili (KP) equation is a universal integrable system that describes nonlinear waves. It is known that algebro-geometric approaches to the KP equation provide solutions coming from a complex algebraic curve, in terms of the Riemann theta function associated with the curve. Reviewing this relation, I will introduce an algebraic object and discuss its algebraic and geometric features: the so-called Dubrovin threefold of an algebraic curve, which parametrizes the solutions. Mentioning the relation of this threefold to the classical algebraic geometry problem known as the Schottky problem, I will report on a procedure via the threefold, based on numerical algebraic geometry tools, which can be used to approach the Schottky problem through the lens of computations. I will finally focus on the geometric behaviour of the threefold when the underlying curve degenerates.

9. Zoom, 2 April 2021, Friday, 15:40
Özhan Genç-[Jagiellonian] - Instanton Bundles on $\mathbb{P}^1 \times \mathbb{F}_1$
Abstract: A $\mu$-stable vector bundle $\mathcal{E}$ of rank 2 with $c_1 (\mathcal{E})=0$ on $\mathbb{P}_{\mathbb{C}}^{3}$ is called a mathematical instanton bundle if $\mathrm{H}^1 (\mathbb{P}^{3}, \mathcal{E}(-2))=0$.
In this talk, we will study the definition of mathematical instanton bundles on Fano 3-folds and their construction on $\mathbb{P}^1 \times \mathbb{F}_1$, where $\mathbb{F}_1$ is the del Pezzo surface of degree 8. This talk is based on joint work with Vincenzo Antonelli and Gianfranco Casnati.

10. Zoom, 9 April 2021, Friday, 15:40
Berrin Şentürk-[TEDU] - Free Group Action on Product of 3 Spheres
Abstract: A long-standing Rank Conjecture states that if an elementary abelian $p$-group acts freely on a product of spheres, then the rank of the group is at most the number of spheres in the product. We will discuss the algebraic version of the Rank Conjecture given by Carlsson for a differential graded module $M$ over a polynomial ring. We will state a stronger conjecture concerning varieties of square-zero upper triangular matrices corresponding to the differentials of certain modules. Using the work on free flags in $M$ introduced by Avramov, Buchweitz, and Iyengar, we will obtain some restrictions on the rank of submodules of these matrices. By this argument we will show that $(\mathbb{Z}/2\mathbb{Z})^4$ cannot act freely on a product of $3$ spheres of any dimensions.

11. Big Blue Button, 16 April 2021, Friday, 15:40
Yankı Lekili-[Imperial College London] - A panorama of Mirror Symmetry
Abstract: Mirror symmetry is one of the most striking developments in modern mathematics, whose scope extends to very different fields of pure mathematics. It predicts a broad correspondence between two subfields of geometry: symplectic geometry and algebraic geometry. Homological mirror symmetry uses the language of triangulated categories to give a mathematically precise meaning to this correspondence. Since its announcement by Kontsevich at the ICM (1994), it has attracted huge attention, and over the years several important cases of it have been established. Despite significant progress, many central problems in the field remain open.
After reviewing the general features, I will survey some of my recent results on mirror symmetry (with thanks to collaborators T. Perutz, A. Polishchuk, K. Ueda, D. Treumann).

12. Zoom, 30 April 2021, Friday, 15:40
Çisem Güneş Aktaş-[Abdullah Gül] - Real representatives of equisingular strata of projective models of K3-surfaces
Abstract: It is a wide open problem what kind of singularities a projective surface or a curve of a given degree can have. In general, this problem seems hopeless. However, in the case of K3-surfaces, the equisingular deformation classification of surfaces with any given polarization becomes a mere computation. In this talk, we will discuss projective models of K3-surfaces of different polarizations together with the deformation classification problems. Although it is quite common that a real variety may have no real points, very few examples of equisingular deformation classes with this property are known. We will study an algorithm detecting real representatives in equisingular strata of projective models of K3-surfaces. Then, we will apply this algorithm to spatial quartics and find two new examples of real strata without real representatives, where the only previously known example of this kind is in the space of plane sextics.

ODTÜ talks are either at the Hüseyin Demir Seminar room or at the Gündüz İkeda seminar room at the Mathematics building of ODTÜ. Bilkent talks are at room 141 of the Faculty of Science A-building at Bilkent. Zoom talks are online.
Year | Fall | Spring
1 | 2000 Fall Talks (1-15) | 2001 Spring Talks (16-28)
2 | 2001 Fall Talks (29-42) | 2002 Spring Talks (43-54)
3 | 2002 Fall Talks (55-66) | 2003 Spring Talks (67-79)
4 | 2003 Fall Talks (80-90) | 2004 Spring Talks (91-99)
5 | 2004 Fall Talks (100-111) | 2005 Spring Talks (112-121)
6 | 2005 Fall Talks (122-133) | 2006 Spring Talks (134-145)
7 | 2006 Fall Talks (146-157) | 2007 Spring Talks (158-168)
8 | 2007 Fall Talks (169-178) | 2008 Spring Talks (179-189)
9 | 2008 Fall Talks (190-204) | 2009 Spring Talks (205-217)
10 | 2009 Fall Talks (218-226) | 2010 Spring Talks (227-238)
11 | 2010 Fall Talks (239-248) | 2011 Spring Talks (249-260)
12 | 2011 Fall Talks (261-272) | 2012 Spring Talks (273-283)
13 | 2012 Fall Talks (284-296) | 2013 Spring Talks (297-308)
14 | 2013 Fall Talks (309-319) | 2014 Spring Talks (320-334)
15 | 2014 Fall Talks (335-348) | 2015 Spring Talks (349-360)
16 | 2015 Fall Talks (361-371) | 2016 Spring Talks (372-379)
17 | 2016 Fall Talks (380-389) | 2017 Spring Talks (390-401)
18 | 2017 Fall Talks (402-413) | 2018 Spring Talks (414-425)
19 | 2018 Fall Talks (426-434) | 2019 Spring Talks (435-445)
20 | 2019 Fall Talks (446-456) | 2020 Spring Talks (457-465)
21 | 2020 Fall Talks (467-476) | 2021 Spring Talks (477-488)
# One-loop corrections to multiscale effective vertices in the EFT for Multi-Regge processes in QCD

Maxim Nefedov (Speaker)

Samara National Research University; II Institute for Theoretical Physics, Hamburg University

E-mail: nefedovma@gmail.com

Work supported in part by the Foundation for the Advancement of Theoretical Physics and Mathematics BASIS, grant No. 18-1-1-30-1.

###### Abstract

The computation of one-loop corrections to two multiscale Reggeon-Particle-Particle effective vertices in the framework of the gauge-invariant effective theory for Multi-Regge processes in QCD is reviewed. Due to the consistent implementation of the "tilted Wilson line" regularization for rapidity divergences, gauge invariance is preserved at all stages of the calculation independently of the rapidity regulator, and the cancellation of the power-like dependence on the regularization variable is traced. Only a single-logarithmic rapidity divergence is left in the final result.

XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Torino, Italy, 8-12 April 2019

## 1 Introduction

In the Multi-Regge Kinematics (MRK) for partonic scattering in QCD, the final-state partons can be grouped into clusters w.r.t. their rapidity. Different clusters are highly separated in rapidity from each other, so that the typical $t$-channel momentum transfer is much smaller than the invariant mass of any pair of final-state clusters.
At leading power, higher-order QCD corrections to such amplitudes are enhanced by high-energy logarithms. The Gauge-Invariant Effective Field Theory (EFT) for Multi-Regge processes in QCD [1, 2] has been introduced as a systematic tool for the computation of asymptotics of QCD scattering amplitudes in the Multi-Regge limit, in the Leading Logarithmic Approximation and beyond. The Hermitian version of this EFT [3, 4] contains the corrections restoring the unitarity of high-energy scattering and therefore provides a framework for studies of high-energy QCD and gluon-saturation phenomena alternative to the Balitsky-JIMWLK or Color-Glass-Condensate pictures; see Refs. [5, 6] for recent work in this direction. In the High-Energy EFT [1, 2], different rapidity clusters of final-state particles are produced by different gauge-invariant subamplitudes, the effective vertices. These effective vertices are connected by $t$-channel exchanges of Reggeized gluons and Reggeized quarks, collectively named Reggeons, the gauge-invariant degrees of freedom of high-energy QCD. Eventually, it should be possible to integrate out physical quarks and gluons, order by order in perturbation theory, and formulate the high-energy limit of QCD entirely in terms of Reggeons (Reggeon Field Theory); see e.g. [6, 7, 8]. The calculation of one-loop corrections to different effective vertices is a major task in the development of this formalism. The main technical difficulty in higher-order calculations in the High-Energy EFT is the appearance of rapidity divergences in loop and phase-space integrals. These divergences arise due to the presence of "Eikonal" denominators in the induced vertices of interaction of Reggeons with ordinary (Yang-Mills) partons, taken together with the kinematical constraints following from MRK. See Sec. 2 of Ref. [9] for an analysis of the conditions for the appearance of rapidity divergences at one loop.
At present, many calculations [9, 10, 11, 12] in the High-Energy EFT have been done with the use of a variant of the "tilted Wilson line" regularization, where the direction vectors $n_\pm^\mu$ of the Wilson lines in the definition of Reggeon-parton interactions are slightly shifted from the light cone:

$$n_\pm^\mu \to \tilde{n}_\pm^\mu = n_\pm^\mu + r\cdot n_\mp^\mu, \qquad \frac{1}{l^\pm} \to \frac{1}{\tilde{l}^\pm} = \frac{1}{l^\pm + r\cdot l^\mp}, \tag{1.1}$$

where $r$ is the regularization variable. In Ref. [9] we have observed that, to keep the interaction gauge-invariant at nonzero $r$, one also has to modify the usual MRK kinematic constraint, stating that the four-momentum of a Reggeon has only one nonzero light-cone component besides its transverse momentum. The kinematic constraint for the Reggeon momentum $q_1$, consistent with gauge invariance at nonzero $r$, is

$$\tilde{q}_1^- = q_1^- + r\cdot q_1^+ = 0. \tag{1.2}$$

For Reggeized quarks, such a modification is not strictly necessary, but it turns out that many scalar integrals actually simplify in the kinematics (1.2), so we prefer to keep it both for Reggeized gluons and quarks. In the present contribution we discuss two examples of one-loop corrections to Reggeon-Particle-Particle effective vertices. The first one involves an off-shell photon ($\gamma^*$), so that the vertex has two scales of virtuality: the virtuality of the photon and that of the Reggeized quark. More details concerning this example can be found in Ref. [9]. The second example has already been considered in Ref. [10]; however, in that reference part of the diagrams had been put to zero by the gauge choice for the external gluons, and therefore the gauge invariance of the amplitude and the cancellation of the power-like dependence on the rapidity regulator had not been verified. We fill this gap in the present contribution. Our paper has the following structure: in Sec. 2 the integrals appearing in our calculation are listed and we comment on their rapidity divergences; explicit expressions for these integrals are provided in Ref. [9]. In Sec. 3 we review the calculations for the above-mentioned examples, and in Sec. 4 we summarize our conclusions.

## 2 One-loop rapidity-divergent integrals

It is convenient to categorize the one-loop integrals appearing in our calculations according to the type of their dependence on the rapidity-regulator variable $r$. The simplest integrals, containing only one quadratic and one or two linear propagators, turn out to be the most singular ones. The integrals

$$A^{[-]}(p) = \int \frac{[d^d l]}{(p+l)^2\, [\tilde{l}^-]}, \qquad A^{[--]}(p) = \int \frac{[d^d l]}{l^2\, [\tilde{l}^-]\, [\tilde{l}^- - \tilde{p}^-]},$$

where $[d^d l]$ denotes the loop-integration measure and $[\tilde{l}^\pm]$ denotes the PV-prescription for the light-cone denominator, are related with each other as

$$A^{[--]}(p) = \frac{1}{\tilde{p}^-}\, A^{[-]}(p), \tag{2.1}$$

and both carry a power-like dependence on the regularization variable $r$. The integrals

$$B^{[-]}(p) = \int \frac{[d^d l]}{l^2 (p+l)^2\, [\tilde{l}^-]}, \qquad B^{[--]}(p) = \int \frac{[d^d l]}{l^2 (p+l)^2\, [\tilde{l}^-]\, [\tilde{l}^- + \tilde{p}^-]},$$

are related as

$$B^{[--]}(p) = \frac{2}{\tilde{p}^-}\, B^{[-]}(p), \tag{2.2}$$

and also contain power-like dependence on the rapidity regulator. These power-like terms come together with $1/\epsilon$ factors and could lead to residual regulator dependence after expansion in $\epsilon$, which would contradict the Reggeization of the gluon and quark. The cancellation of these terms happens between different diagrams and hence is a nontrivial dynamical property of QCD. The integral

$$B^{[+-]}(p) = \int \frac{[d^d l]}{l^2 (p+l)^2\, [\tilde{l}^+]\, [\tilde{l}^-]},$$

contributes to the one-loop corrections to the propagators of the Reggeized gluon and quark, and it contains only a logarithmic rapidity divergence related with Reggeization. A similar single-logarithmic divergence is present in the "triangle" integral

$$C^{[-]}(-q_1^2, -q^2, q^-) = \int \frac{[d^d l]}{l^2 (q_1+l)^2 (q_1+q+l)^2\, [\tilde{l}^-]},$$

which has been computed for the single-scale case in Ref. [10] and for the two-scale case in Ref. [9]. These are all the scalar integrals necessary for the calculation of one-loop corrections to Particle-Particle-Reggeon effective vertices.

## 3 One-loop effective vertices

The set of EFT Feynman diagrams contributing to the one-loop correction to the effective vertex with an off-shell photon is shown in Fig. 1. To compute them, we perform a tensor reduction procedure similar to the standard one. However, since some integrals now contain Eikonal denominators depending on the direction vector, one should include this vector in the ansatz for the tensor structure. The result [9],

$$\Gamma^{(1)+\mu}(q_1,q) = i e e_q\, \bar{u}(q+q_1)\left[ C^{[\Gamma]}\, \Gamma^{(0)+\mu}(q_1,q) + C^{[\Delta^{(1)}]}\, \Delta^{(1)+\mu}(q_1,q) + C^{[\Delta^{(2)}]}\, \Delta^{(2)+\mu}(q_1,q) \right],$$

can be expressed in terms of three gauge-invariant Lorentz structures:

$$\Gamma^{(0)+\mu}(q_1,q) = \gamma^\mu + \frac{\hat{q}_1\, n_-^\mu}{2 q^-}, \qquad \Delta^{(1)+\mu}(q_1,q) = \frac{\hat{q}}{q^-}\left( n_-^\mu - \frac{2 q_1^\mu}{q_1^+} \right), \qquad \Delta^{(2)+\mu}(q_1,q) = \frac{\hat{q}}{q^-}\left( n_-^\mu - \frac{q^\mu}{q^+} \right),$$

where $\Gamma^{(0)+\mu}$ is the Fadin-Sherman scattering vertex, and the coefficients are the following:

$$C^{[\Gamma]} = -\frac{\bar{\alpha}_s C_F}{4\pi}\, \frac{1}{2} \left\{ \frac{\left[(d-8)Q^2 + (d-6)t_1\right] B(t_1) - 2(d-7) Q^2 B(Q^2)}{Q^2 - t_1} - 2\left[ (Q^2 - t_1)\, C(t_1, Q^2) - q^- \left( t_1\, C^{[-]}(t_1, Q^2, q^-) + \big( B^{[-]}(q) - B^{[-]}(q+q_1) \big) \right) \right] \right\}, \tag{3.1}$$

$$C^{[\Delta^{(1)}]} = -\frac{\bar{\alpha}_s C_F}{4\pi}\, \frac{Q^2 + t_1}{2\, (Q^2 - t_1)^2} \left[ \left( (d-2) Q^2 - (d-4) t_1 \right) B(t_1) - 2 Q^2 B(Q^2) \right], \tag{3.2}$$

$$C^{[\Delta^{(2)}]} = -\frac{\bar{\alpha}_s C_F}{4\pi}\, \frac{Q^2}{(Q^2 - t_1)^2} \left[ \left( (d-6) t_1 - (d-8) Q^2 \right) B(Q^2) + 2 (t_1 - 2 Q^2) B(t_1) \right], \tag{3.3}$$

where $\bar{\alpha}_s$ is the dimensionless strong-coupling constant, and $B$ and $C$ are the usual one-loop scalar "bubble" and "triangle" integrals [13]. We observe that the power-like divergent integrals appearing in the expansion of the second and third diagrams in Fig. 1 cancel away. Also the power-like terms cancel between the integrals $C^{[-]}$, $B^{[-]}(q)$ and $B^{[-]}(q+q_1)$ in Eq. (3.1), so that only a single-logarithmic rapidity divergence is left. In Ref. [9] we have checked that this rapidity divergence cancels in the single-Reggeon-exchange contribution to the amplitude at one loop, and the EFT result agrees with the MRK limit of the one-loop QCD amplitude.

The diagrams contributing to the one-loop correction to the vertex with on-shell external Yang-Mills gluons with helicities $\lambda_1$ and $\lambda_2$ and momenta $q$ and $q+q_1$ are shown in Fig. 2. This one-loop correction can be decomposed as

$$\gamma^{abc,(1)}_{\lambda_1 + \lambda_2} = i g_s f^{abc}\, \epsilon_\mu(q, \lambda_1)\, \big(\epsilon^*(q+q_1, \lambda_2)\big)_\nu \left[ C^{[\gamma^{(0)}_+]}\, \gamma^{(0)\mu,+,\nu} + C^{[\delta_+]}\, \delta^{\mu,+,\nu} \right],$$

where the helicity-conserving (Lipatov's) and helicity-flip Lorentz structures are

$$\gamma^{(0)\mu,+,\nu} = 2 q^- g^{\mu\nu} + 2 n_-^\mu q_1^\nu - 2 n_-^\nu q_1^\mu + t_1\, \frac{n_-^\mu n_-^\nu}{q^-}, \qquad \delta^{\mu,+,\nu} = 2 q^- \left[ g^{\mu\nu} + \frac{2 q_1^\mu q_1^\nu}{t_1} \right],$$

while the coefficients in front of them read

$$C^{[\gamma^{(0)}_+]} = -\frac{\bar{\alpha}_s C_A}{4\pi} \left[ q^- t_1\, C^{[-]}(t_1, 0, q^-) + B(t_1) \right], \tag{3.4}$$

$$C^{[\delta_+]} = \frac{\bar{\alpha}_s}{4\pi}\, \frac{(d-4)\, B(t_1)}{2 (d-1)(d-2)} \left( 2 n_F - (d-2) C_A \right). \tag{3.5}$$

Eqns. (3.4) and (3.5) coincide with the results of Ref. [10]; however, in the calculations of that paper the diagrams framed in Fig. 2 were nullified by the gauge choice for the external gluons. We take them into account, and hence we can check the Slavnov-Taylor identities and trace the cancellation of the power-like dependence on the regulator $r$. The modified kinematical constraint (1.2) guarantees the gauge invariance of the amplitude in all orders in $r$, and we observe that the contributions of the power-like divergent integrals cancel in the coefficients at leading power in $r$, while in higher orders in $r$ (which we eventually drop) the coefficients in front of these integrals are gauge-invariant, which serves as a useful cross-check of the calculation. The cancellation of the contributions of these integrals happens between different diagrams and essentially relies on the relations (2.1) and (2.2) together with the constraint (1.2). Therefore all power-like dependence on the rapidity regulator cancels at leading power, and we are again left with a single-logarithmic rapidity divergence related with gluon Reggeization.

## 4 Conclusions and discussion

In the present contribution we have reviewed the structure of rapidity divergences in the one-loop integrals contributing to the one-loop corrections to Particle-Particle-Reggeon effective vertices in the gauge-invariant EFT for Multi-Regge processes in QCD [1, 2] and illustrated their application on two examples of such vertices. The first one contains two scales of virtuality, the squared transverse momentum of the Reggeized quark and the virtuality of the photon, and a new Lorentz structure appears in the answer. The cancellation of the power-like dependence on the rapidity-regularization parameter is observed in both cases, so that only a single-logarithmic rapidity divergence is left in the end.

## References

• [1] L. N. Lipatov, Nucl. Phys. B 452, 369 (1995) [hep-ph/9502308].
• [2] L. N. Lipatov and M. I. Vyazovsky, Nucl. Phys. B 597, 399 (2001) [hep-ph/0009340].
• [3] L. N. Lipatov, Phys.
Rept. 286, 131 (1997) [hep-ph/9610276].
• [4] S. Bondarenko and M. A. Zubkov, Eur. Phys. J. C 78, no. 8, 617 (2018) [arXiv:1801.08066].
• [5] M. Hentschinski, Phys. Rev. D 97, no. 11, 114027 (2018) [arXiv:1802.06755].
• [6] S. Bondarenko and S. Pozdnyakov, Int. J. Mod. Phys. A 33, no. 35, 1850204 (2018) [arXiv:1806.02563].
• [7] S. Bondarenko, L. Lipatov, S. Pozdnyakov and A. Prygarin, Eur. Phys. J. C 77, no. 9, 630 (2017) [arXiv:1708.05183].
• [8] S. Bondarenko and S. Pozdnyakov, arXiv:1903.11288 (2019).
• [9] M. A. Nefedov, arXiv:1902.11030 (2019).
• [10] M. Hentschinski and A. Sabio Vera, Phys. Rev. D 85, 056006 (2012) [arXiv:1110.6741]; G. Chachamis, M. Hentschinski, J. D. Madrigal Martinez and A. Sabio Vera, Phys. Rev. D 87, 076009 (2013) [arXiv:1212.4992].
• [11] M. Nefedov and V. Saleev, Mod. Phys. Lett. A 32, 1750207 (2017) [arXiv:1709.06246].
• [12] G. Chachamis, M. Hentschinski, J. D. Madrigal Martinez and A. Sabio Vera, Nucl. Phys. B 876, 453-472 (2013) [arXiv:1307.2591].
• [13] R. K. Ellis and G. Zanderighi, JHEP 0802, 002 (2008) [arXiv:0712.1851].
# How do you evaluate 3/5 \div 4/7? Apr 14, 2018 $\frac{21}{20}$ #### Explanation: $\frac{3}{5} \div \frac{4}{7}$ Dividing by a number is the same thing as multiplying it by its reciprocal, or $1$ over the number. Therefore, this expression can be rewritten as $\frac{3}{5} \cdot \frac{7}{4}$. Now, we can simplify this: $\frac{21}{20}$ Hope this helps! Apr 14, 2018 $\frac{21}{20}$ #### Explanation: $\text{to evaluate division of fractions}$ • " leave the first fraction" • " change division to multiplication" • " invert the second fraction, that is turn it upside down" $\Rightarrow \frac{3}{5} \div \frac{4}{7}$ $= \frac{3}{5} \times \frac{7}{4}$ $\text{there are no common factors between values on }$ $\text{numerator/denominator}$ $\text{Thus multiply numbers on numerators together and}$ $\text{numbers on denominators together}$ $\Rightarrow \frac{3}{5} \times \frac{7}{4} = \frac{3 \times 7}{5 \times 4} = \frac{21}{20} \leftarrow \textcolor{red}{\text{in simplest form}}$
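The invert-and-multiply rule explained above can be checked mechanically with exact rational arithmetic; a small sketch using Python's standard `fractions` module:

```python
from fractions import Fraction

# Dividing by a fraction is the same as multiplying by its reciprocal:
# 3/5 ÷ 4/7 = 3/5 × 7/4 = 21/20
a = Fraction(3, 5)
b = Fraction(4, 7)

quotient = a / b               # Fraction division handles the reciprocal for us
by_hand = a * Fraction(7, 4)   # explicit "invert and multiply"

print(quotient)            # 21/20
print(quotient == by_hand) # True
```

`Fraction` also reduces results to lowest terms automatically, which matches the "in simplest form" remark in the answer.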
prioritizr / prioritizr # Compare 73e98cf ... +0 ... 8dd4040 Showing 15 of 79 files from the diff. R/zones.R changed. R/solve.R changed. Other files ignored by Codecov docs/pkgdown.yml has changed. man/solve.Rd has changed. man/zones.Rd has changed. @@ -4,7 +4,7 @@ 4 4 #' Category vector 5 5 #' 6 6 #' Convert an object containing binary (integer) fields (columns) into a 7 - #' integer vector indicating the column index where each row is 7 + #' integer vector indicating the column index where each row is 8 8 #' 1. 9 9 #' 10 10 #' @param x matrix, data.frame, [Spatial-class], @@ -15,7 +15,7 @@ 15 15 #' value of zero. Also, note that in the argument to x, each row must 16 16 #' contain only a single value equal to 1. 17 17 #' 18 - #' @return integer vector 18 + #' @return integer vector. 19 19 #' 20 20 #' @seealso [base::max.col()] 21 21 #' @@ -14,7 +14,7 @@ 14 14 #' @param budget numeric value specifying the maximum expenditure of 15 15 #' the prioritization. For problems with multiple zones, the argument 16 16 #' to budget can be a single numeric value to specify a budget 17 - #' for the entire solution or a numeric vector to specify 17 + #' for the entire solution or a numeric vector to specify 18 18 #' a budget for each each management zone. 19 19 #' 20 20 #' @details @@ -35,11 +35,11 @@ 35 35 #' 36 36 #' \describe{ 37 37 #' 38 - #' \item{data as an integer vector}{containing indices that indicate which 38 + #' \item{data as an integer vector}{containing indices that indicate which 39 39 #' planning units should be locked for the solution. This argument is only 40 40 #' compatible with problems that contain a single zone.} 41 41 #' 42 - #' \item{data as a logical vector}{containing TRUE and/or 42 + #' \item{data as a logical vector}{containing TRUE and/or 43 43 #' FALSE values that indicate which planning units should be locked 44 44 #' in the solution. 
This argument is only compatible with problems that 45 45 #' contain a single zone.} @@ -52,7 +52,7 @@ 52 52 #' zone. Thus each row should only contain at most a single TRUE 53 53 #' value.} 54 54 #' 55 - #' \item{data as a character vector}{containing field (column) name(s) 55 + #' \item{data as a character vector}{containing field (column) name(s) 56 56 #' that indicate if planning units should be locked for the solution. 57 57 #' This format is only 58 58 #' compatible if the planning units in the argument to x are a @@ -41,7 +41,7 @@ 41 41 #' 42 42 #' \describe{ 43 43 #' 44 - #' \item{data as character vector}{containing field (column) name(s) that 44 + #' \item{data as character vector}{containing field (column) name(s) that 45 45 #' contain penalty values for planning units. This format is only 46 46 #' compatible if the planning units in the argument to x are a 47 47 #' [Spatial-class], [sf::sf()], or @@ -52,7 +52,7 @@ 52 52 #' contain multiple zones, the argument to data must 53 53 #' contain a field name for each zone.} 54 54 #' 55 - #' \item{data as a numeric vector}{containing values for 55 + #' \item{data as a numeric vector}{containing values for 56 56 #' planning units. These values must not contain any missing 57 57 #' (NA) values. Note that this format is only available 58 58 #' for planning units that contain a single zone.} @@ -37,7 +37,7 @@ 37 37 #' 38 38 #' \describe{ 39 39 #' 40 - #' \item{weights as a numeric vector}{containing weights for each feature. 40 + #' \item{weights as a numeric vector}{containing weights for each feature. 41 41 #' Note that this format cannot be used to specify weights for problems with 42 42 #' multiple zones.} 43 43 #' ### Everything is accounted for! No changes detected that need to be reviewed. ##### What changes does Codecov check for? Lines, not adjusted in diff, that have changed coverage data. Files that introduced coverage data that had none before. 
Files that have missing coverage data that once were tracked.

### 2 Commits

Files | Coverage
--- | ---
R | 94.92%
src | 98.81%
Project Totals (124 files) | 96.11%
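The category-vector docstring in the diff above describes converting a binary indicator matrix into an integer vector of column indices, where each row contains exactly one 1 (cf. the `seealso` pointing at R's `base::max.col()`). A minimal sketch of the same idea in Python/NumPy; the package itself is R, so this translation and its variable names are illustrative only:

```python
import numpy as np

# Binary membership matrix: each row has exactly one 1,
# marking which zone (column) the planning unit belongs to.
x = np.array([
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
])

# 1-based column index of the single 1 in each row,
# mirroring the max.col()-style convention described in the docstring.
category = x.argmax(axis=1) + 1
print(category)  # [1 3 2]
```

As the docstring notes, rows of all zeros would need a sentinel value of zero, which this sketch does not handle.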
# Solved

The problem has been solved

## Rosetta Antibody: Unable to open file and terminates

Category: Compilation

Hello, please note I'm revising my post with some major updates; apologies to the 9 viewers who came here before, as this will read completely differently. I'm encountering a read error with antibody_H3. Below is the command I ran (I've also reproduced the error in .static mode):

mpiexec -np 4 $ROSETTA3/bin/antibody_H3.cxx11threadmpiserialization.linuxgccrelease \ @abH3.flags

abH3.flags looks just like:

Post Situation:

## PyRosetta on Ubuntu 16.04 build error

Category: Compilation

- OS type/version/arch: Linux Ubuntu 16.04 (on an Azure virtual machine)
- Python version: 3.5
- Version of PyRosetta including SVN revision number: PyRosetta4.Release.python35.ubuntu.release-236.tar.bz2
- Version of Rosetta: Rosetta 3.11 for Linux

Hello, I've installed PyRosetta and upon importing it in Python 3.5 I receive the following error:

Post Situation:

## Confusion for Input Tutorial

Category: Compilation

Hi All, I have downloaded Rosetta 3.11 and built it with scons.py using $ ./scons.py -j 4 mode=release bin. The building of the binaries seems to work correctly, providing me with: scons: done building targets. I am now attempting to familiarize myself with Rosetta by traversing the tutorials, starting with the Input & Output Tutorial, so I am within this directory:

Post Situation:

## compiling with intel c++ compiler

Category: Compilation

The Intel compiler has a problem dealing with a defaulted and virtualized copy assignment operator, reported here. Because of this bug, compilation with the Intel compiler fails at rosetta_bin_linux_2019.35.60890_bundle/main/source/src/protocols/genetic_algorithm/Entity.hh line 55. Implementing it explicitly, or removing "virtual", seems to be a solution.

Post Situation:

## running in MPI mode and multiple scores per output PDB file?
Category: Design Scoring

Hi Forum, I recently did a Rosetta fixbb run with MPI and found that the score file had many more lines of output than there were actual PDB files. Specifically, I've got 353 scores in score.sc but only 12 PDB files. Is it possible that the parallel processes are simply overwriting the PDBs? Is there a flag I should be including to avoid this? Thanks!

Post Situation:

## GrowLigand

Category: Small Molecules

Hi all, does anyone have experience with the GrowLigand mover? I have as input a holo.pdb structure with a protein target of interest and a small-molecule inhibitor which binds with low affinity (parametrized to be used with genpot). I was hoping that this mover would return possible modifications of the ligand so I could rationalize my next steps of synthesis. Here is my RosettaScript:

Post Situation:

## cryptic error "Got some signal... It is:15" -- an issue with 'fixbb', or something else?

Category: Design

Hello Forum, I'm trying to run fixbb on my cluster here, and everything seemed to be going well for a while, but it suddenly stopped and spat out the following in the log file:

Post Situation:

## Speed problem when running RosettaLigand ligand docking

Category: Docking

Hello, I am trying to do protein design using a protocol derived from RosettaLigand ligand docking. When I run Rosetta Design with the following command line, everything seems correct and there are output structures (pdb files). The problem is that it takes ages to get the results. On average, it takes 15~20 mins to get one model. For 1000 models, it took 9 days. I think there is something wrong, but I don't know how to solve it.
Post Situation:

## Make error during installation of updated ncbi-blast-2.9.0+-src plus error with nr-database

Category: Fragment Generation

Dear fellows, my recent situation with setting up fragment generation for ab initio structure prediction is as follows:

- It seems as if I have solved my problems with version compatibility of the gcc compiler and the environment variable FRAGMENT_PICKER.
- But a problem with the installation of the updated make_fragments.pl and install_dependencies.pl still remains unsolved.

I carried out the following actions with the ncbi-blast-2.9.0+-src package:

Post Situation:

## Ligand question - aromatic bonds not being enforced?

Category: Constraints Small Molecules Nucleic Acids

Hi, I am trying to run Local Relax on an ADP-bound protein structure. I cleared the first hurdle of making a ligand params file, and the program runs fine and includes the ligand. I made the params file by taking ADP from a related structure, converting it to mol2 using iBabel, and making the params file using molfile_to_params.py

Post Situation:
For $\mathbb{R}$: suppose f is our compactly supported function and g(x) is its Fourier transform. Since f is compactly supported, $\hat{f} = g$ is the restriction to $\mathbb{R}$ of an entire function g(z) by the Paley-Wiener theorems. Since g is entire and vanishes on an open set, $g \equiv 0$. The proof of this last fact (weakening the assumption to vanishing on a set with an accumulation point) uses that $\mathbb{C}$ is connected, which is of course directly related to $\mathbb{R}$ being connected. I expect that you knew this proof, but maybe you accidentally overlooked where connectedness was used. Or more likely, this proof didn't explain what you had in mind and you want a more general proof for $\mathbb{R}^n$. I can't currently do that. Instead, I have another idea which focuses on a different aspect than connectedness, but seems to be related, in connection with the analogous statement for polynomials. That a polynomial can have only finitely many zeroes over a field is proved via a complexity argument using that infinity > finite. Analytic functions, i.e. the completion of polynomials over $\mathbb{C}$, can have infinitely many zeroes, but uncountably many zeroes implies the analytic function is identically 0. So it seems that a set that has a limit point is more complex (in terms of complexity) than a countable set. I'm thinking the complexity argument should be interpreted in terms of density in topology: no finite subset of $\mathbb{N}$ is dense in the discrete topology or in any open subset of the co-finite topology on $\mathbb{N}$. Similarly for $\mathbb{R}$ and $\mathbb{C}$. I hope this is helpful. This is an interesting question and I'll think more about it.
Since g is entire and vanishes on an open set, $g \equiv 0$. The proof of this last fact (weakening the assumption to vanishing on a set with an accumulation point) uses that $\mathbb{C}$ is connected which is of course directly related to $\mathbb{R}$ being connected. I expect that you knew this proof, but maybe you accidentally overlooked where connectedness was used. Or more likely, this proof didn't explain what you had in mind and you want a more general proof for $\mathbb{R}^n$. I can't currently do that. Instead, I have another idea which focuses on a different aspect than connectedness, but seems to be related. In connection with the analogous statement for polynomials. A polynomial can only have finitely many zeroes over a field is proved via a complexity argument using that infinity > finite. Analytic functions, i.e. the completion of polynomials over $\mathbb{C}$ can have infinitely many zeroes, but uncountably many zeroes implies the analytic function is identically 0. So it seems that a set that has a limit point is more complex (in terms of complexity) than a countable set. I'm thinking the complexity argument should be interpreted in terms of density in topology - no finite subset of a $\mathbb{N}$ is dense in the discrete topology or any open subset of the co-finite topology on $\mathbb{N}$. Similarly for $\mathbb{R}$ and $\mathbb{C}$. I hope this is helpful. This is an interesting question and I'll think more about it.
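For reference, one standard form of the Paley-Wiener theorem that makes the first step precise (this particular statement is my addition; the answer does not say which variant it has in mind) is: if $f$ is integrable and supported in $[-A, A]$, then its Fourier transform extends to an entire function of exponential type $A$,

```latex
g(z) = \int_{-A}^{A} f(t)\, e^{-i z t}\, dt ,
\qquad
|g(z)| \;\le\; \|f\|_{L^{1}}\, e^{A\,|\operatorname{Im} z|}
\quad \text{for all } z \in \mathbb{C}.
```

Entirety follows by differentiating under the integral sign, and the growth bound from $|e^{-izt}| = e^{t \operatorname{Im} z} \le e^{A |\operatorname{Im} z|}$ for $|t| \le A$.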
{}
# Trigonometric function facts for kids

All of the trigonometric functions of any angle can be constructed using a circle centered at O with radius 1. Trigonometric functions: sine, cosine, tangent, cosecant, secant, cotangent.

In mathematics, the trigonometric functions are a set of functions which relate angles to the sides of a right triangle. There are many trigonometric functions, the three most common being sine, cosine and tangent, followed by cotangent, secant and cosecant. The last three are called reciprocal trigonometric functions, because they act as the reciprocals of the other functions. Secant and cosecant are rarely used.

| Function | Abbreviation | Relation |
| --- | --- | --- |
| Sine | sin | $\sin \theta = \cos \left(\frac{\pi}{2} - \theta \right)$ |
| Cosine | cos | $\cos \theta = \sin \left(\frac{\pi}{2} - \theta \right)$ |
| Tangent | tan (or tg) | $\tan \theta = \frac{\sin \theta}{\cos \theta} = \cot \left(\frac{\pi}{2} - \theta \right) = \frac{1}{\cot \theta}$ |
| Cotangent | cot (or ctg) | $\cot \theta = \frac{\cos \theta}{\sin \theta} = \tan \left(\frac{\pi}{2} - \theta \right) = \frac{1}{\tan \theta}$ |
| Secant | sec | $\sec \theta = \frac{1}{\cos \theta} = \csc \left(\frac{\pi}{2} - \theta \right)$ |
| Cosecant | csc (or cosec) | $\csc \theta = \frac{1}{\sin \theta} = \sec \left(\frac{\pi}{2} - \theta \right)$ |

## Definition

The trigonometric functions are sometimes also called circular functions. They are functions of an angle; they are important when studying triangles, among many other applications. Trigonometric functions are commonly defined as ratios of two sides of a right triangle containing the angle, and can equivalently be defined as the lengths of various line segments constructed from a unit circle (a circle with radius one).

### Right triangle definitions

A right triangle always includes a 90° (π/2 radians) angle, here labeled C. Angles A and B may vary.
Trigonometric functions specify the relationships between side lengths and interior angles of a right triangle. In order to define the trigonometric functions for the angle A, start with a right triangle that contains the angle A. We use the following names for the sides of the triangle:

• The hypotenuse is the side opposite the right angle; it is also the longest side of a right-angled triangle, in this case h.
• The opposite side is the side opposite to the angle we are interested in, in this case a.
• The adjacent side is the side that is in contact with both the angle we are interested in and the right angle, hence its name. In this case, the adjacent side is b.

All triangles are taken to exist in Euclidean geometry, so that the inside angles of each triangle sum to π radians (or 180°); therefore, for a right triangle, the two non-right angles are between zero and π/2 radians. Notice that, strictly speaking, the following definitions only define the trigonometric functions for angles in this range. We extend them to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions.

1) The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. In our case

$\sin A = \frac {\textrm{opposite}} {\textrm{hypotenuse}} = \frac {a} {h}.$

Note that since all those triangles are similar, this ratio does not depend on the particular right triangle that is chosen, as long as it contains the angle A. The set of zeroes of sine (that is, the values of $x$ for which $\sin x = 0$) is $\left\{n\pi \,\big|\, n \in \mathbb{Z}\right\}.$

2) The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse.
In our case

$\cos A = \frac {\textrm{adjacent}} {\textrm{hypotenuse}} = \frac {b} {h}.$

The set of zeroes of cosine is $\left\{\frac{\pi}{2}+n\pi \,\big|\, n \in \mathbb{Z}\right\}.$

3) The tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side. In our case

$\tan A = \frac {\textrm{opposite}} {\textrm{adjacent}} = \frac {a} {b}.$

The set of zeroes of tangent is $\left\{n\pi \,\big|\, n \in \mathbb{Z}\right\}.$ This is the same set as that of the sine function, since $\tan A = \frac {\sin A}{\cos A}.$

The remaining three functions are best defined using the above three functions.

4) The cosecant csc(A) is the multiplicative inverse of sin(A); it is the ratio of the length of the hypotenuse to the length of the opposite side: $\csc A = \frac {\textrm{hypotenuse}} {\textrm{opposite}} = \frac {h} {a}$.

5) The secant sec(A) is the multiplicative inverse of cos(A); it is the ratio of the length of the hypotenuse to the length of the adjacent side: $\sec A = \frac {\textrm{hypotenuse}} {\textrm{adjacent}} = \frac {h} {b}$.

6) The cotangent cot(A) is the multiplicative inverse of tan(A); it is the ratio of the length of the adjacent side to the length of the opposite side: $\cot A = \frac {\textrm{adjacent}} {\textrm{opposite}} = \frac {b} {a}$.

### Definitions by power series

One can also define the trigonometric functions using power series:

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}$

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}$

and define tangent, cotangent, secant and cosecant using identities; see below.
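Both the ratio definitions and the power-series definitions can be checked numerically. This short Python sketch is my own illustration (the 3-4-5 triangle is just a convenient example, not from the article):

```python
import math

# Ratio definitions on a 3-4-5 right triangle:
# opposite a = 3, adjacent b = 4, hypotenuse h = 5.
a, b, h = 3.0, 4.0, 5.0
A = math.atan2(a, b)          # the angle A determined by the two legs

sin_A = a / h                 # opposite / hypotenuse = 0.6
cos_A = b / h                 # adjacent / hypotenuse = 0.8
tan_A = a / b                 # opposite / adjacent   = 0.75
print(math.isclose(math.sin(A), sin_A),
      math.isclose(math.cos(A), cos_A),
      math.isclose(math.tan(A), tan_A))        # True True True

# Power-series definitions, truncated after a handful of terms:
def sin_series(x, terms=20):
    # sum of (-1)^n x^(2n+1) / (2n+1)!  for n = 0 .. terms-1
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x, terms=20):
    # sum of (-1)^n x^(2n) / (2n)!  for n = 0 .. terms-1
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

print(math.isclose(sin_series(A), math.sin(A)),
      math.isclose(cos_series(A), math.cos(A)))  # True True
```

Because the triangles containing A are all similar, the ratios agree with the values the library functions return for the recovered angle.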
## Identities

Some important identities:

$\tan x = \frac{\sin x}{\cos x}$
$\cot x = \frac{\cos x}{\sin x}$
$\sec x = \frac{1}{\cos x}$
$\csc x = \frac{1}{\sin x}$
$\sin^2 x + \cos^2 x = 1$
$\sin 2x = 2 \sin x \cos x$
$\cos 2x = \cos x \cos x - \sin x \sin x = \cos^2 x - \sin^2 x = 2 \cos^2 x - 1 = 1 - 2 \sin^2 x$
$\tan 2x = \frac{2 \tan x}{1 - \tan^2 x}$
$\sin \left(x \pm y \right)=\sin x \cos y \pm \cos x \sin y$
$\cos \left(x \pm y \right)=\cos x \cos y \mp \sin x \sin y$
$\tan \left ( x \pm y \right ) = \frac{\tan x \pm \tan y}{1 \mp \tan x \tan y}$

## Hyperbolic functions

The hyperbolic functions are like the trigonometric functions, in that they have very similar properties. Each of the six trigonometric functions has a corresponding hyperbolic form. They are defined in terms of the exponential function, which is based on the constant e.

• Hyperbolic sine: $\sinh x = \frac {e^x - e^{-x}} {2} = \frac {e^{2x} - 1} {2e^x} = \frac {1 - e^{-2x}} {2e^{-x}}.$
• Hyperbolic cosine: $\cosh x = \frac {e^x + e^{-x}} {2} = \frac {e^{2x} + 1} {2e^x} = \frac {1 + e^{-2x}} {2e^{-x}}.$
• Hyperbolic tangent: $\tanh x = \frac{\sinh x}{\cosh x} = \frac {e^x - e^{-x}} {e^x + e^{-x}} = \frac{e^{2x} - 1} {e^{2x} + 1} = \frac{1 - e^{-2x}} {1 + e^{-2x}}.$
• Hyperbolic cotangent: $\coth x = \frac{\cosh x}{\sinh x} = \frac {e^x + e^{-x}} {e^x - e^{-x}} = \frac{e^{2x} + 1} {e^{2x} - 1} = \frac{1 + e^{-2x}} {1 - e^{-2x}}, \qquad x \neq 0.$
• Hyperbolic secant: $\operatorname{sech}\,x = \frac{1}{\cosh x} = \frac {2} {e^x + e^{-x}} = \frac{2e^x} {e^{2x} + 1} = \frac{2e^{-x}} {1 + e^{-2x}}.$
• Hyperbolic cosecant: $\operatorname{csch}\,x = \frac{1}{\sinh x} = \frac {2} {e^x - e^{-x}} = \frac{2e^x} {e^{2x} - 1} = \frac{2e^{-x}} {1 - e^{-2x}}, \qquad x \neq 0.$
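The exponential-function definitions of the hyperbolic functions can likewise be verified numerically (a sketch of my own; Python's math module provides sinh, cosh and tanh for comparison):

```python
import math

x = 0.7  # any real value works here

# Definitions in terms of the exponential function:
sinh_x = (math.exp(x) - math.exp(-x)) / 2
cosh_x = (math.exp(x) + math.exp(-x)) / 2
tanh_x = sinh_x / cosh_x

print(math.isclose(sinh_x, math.sinh(x)))        # True
print(math.isclose(cosh_x, math.cosh(x)))        # True
print(math.isclose(tanh_x, math.tanh(x)))        # True

# The hyperbolic analogue of sin^2 x + cos^2 x = 1:
print(math.isclose(cosh_x ** 2 - sinh_x ** 2, 1.0))  # True
```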
{}
# Tag Info

33

Yes. Yes. A thousand times yes. Markdown provides enough methods to draw attention to different parts of text, through bold and italic and even bold italic, not to forget block quotes when one wishes to quote a large section of text, to make using colors via MathJax completely unnecessary. Not only that, but a post written in a rainbow of colours is ...

33

Please note the sarcasm. I don't see any $\color{purple}{\text{problem}}$ with this, adding $\color{red}{\text{c}}\color{orange}{\text{o}}\color{green}{\text{l}}\color{blue}{\text{o}}\color{purple}{\text{r}}$ to text shouldn't reduce $\color{#12dd13}{\text{readability}}$, nor should any other $\LaTeX$, such as $\boxed{\text{boxes}}$ $\require{cancel}\cancel{...

30

This is supposed to be an all-purpose math site, and obviously not everyone in the world who has a math question knows how to use LaTeX. Although learning LaTeX is not so very hard, it is not trivial either, and assuming that people must have this skill in order to get continued service seems like a clear violation of the intended scope of the site. Also,...

26

Testing an alternate way of implementing spoilers: $$\require{action} \require{enclose} \toggle{ x\cdot 0 = 0\quad\enclose{roundedbox}{\text{ Click this for derivation }} }{ \begin{array}{rll} x\cdot 0 &= \mathtip{x\cdot 0 + 0}{0 \text{ is additive identity}} \\ &= \mathtip{x\cdot 0 + (x\cdot 0 + -(x\cdot 0))}{ -(x\cdot 0) \text{ is additive inverse ...

24

Disclaimer: I'm part of the MathJax team. Also, this got a bit long. tl;dr: Try out NVDA with MathPlayer 4 on Firefox here on math.SE. JAWS 13 is a bit old (2011) and the situation of screen readers with respect to math and the web has changed drastically since then. As already mentioned, JAWS 16 was the first version to introduce direct MathML support, but as ...

23

I am delighted that you are considering this issue. I am slightly red/green color blind. It is nearly impossible for me to see the block quotes on math.stackexchange.
A light grey background would solve the problem completely for me. A solid or dotted black line along, say, the left border could serve to distinguish a block quote from a piece of computer ...

21

(I'm a new user so I don't have anything to contribute as far as norms here go, but I'm really interested in how to effectively convey information in an accessible way!) Like others have said, the most obvious downside is colorblind people and people using screen readers. Colorblind people will miss information, so you should make sure the color doesn't ...

19

There are three approaches for line breaks. As pointed out in the comments, Approaches 2 and 3 produce the same HTML output when rendered on the page. Approach 1: Press "Enter" twice. Output: Hi Bye Code: Hi Bye Approach 2: Two spaces at end of line. Output: Hi Bye Code: Hi Bye There are two spaces after "Hi." That is, the text looks like: Hi&...

19

Colored text can be useful. I can only think of one example at the moment, which is for highlighting correspondences between different parts of a piece of text, like so: $\frac{d}{dx} (\color{red}{x^3} + \color{blue}{x^2}) = \color{red}{3x^2} + \color{blue}{2x}$ Colored text is not useful for merely emphasizing or highlighting a bit of text; italics or bold ...

17

Although there are some great comments above, one thing no one has mentioned is people who use assistive technology like screen readers. Having the mathematics clearly marked as mathematics (as opposed to "fake" mathematics using other HTML tricks) makes it much easier to properly voice the mathematics for screen readers. Although few screen readers ...

15

"Some decide instead of typing the actual text of the question, typesetting formulae with LaTeX and using plotting software, it would be better for them to avoid the effort." Okay, hold on. You seem to be under the impression that computer typesetting and plotting is easy. Something anyone who uses this site should have in their skillset.
Do you really expect ...

13

Simple solution: don't abuse MathJax to colour your text.

12

Since my comment under the other answer got about three times more votes than the answer, maybe it's worth converting it into a full answer: The correct way would be, as far as I can tell, to convert the brackets into opening and closing delimiters through the use of \mathopen and \mathclose: $$\forall x \in \mathopen{]} -1, 1 \mathclose{[}, f(x) > 0$$ \...

11

(I've never used Meta before -- I noted there's no answer yet, but a lot of comments, so I hope I'm not doing something wrong.) To first give an example of what I would consider good coloring: I like the explanation of the Fourier transform on betterexplained (technically copied from altdevblogaday): To be fair, he could have made it a little easier for ...

11

You can do it, but you need to use large squares and hide their height and depth using \smash. It helps to use a definition or two. Here is one approach: \def\smallstrut{\Space{0em}{.6em}{.2em}} \def\cbox#1{\textstyle\smash{\color{#1}{\Rule{1em}{.8em}{.2em}}}\smallstrut} \begin{smallmatrix} \cbox{red}\cbox{teal}\cbox{green}\cbox{blue}\\ \cbox{red}\cbox{...

11

Use MathJax for mathematics, not for text formatting. The MathJax italics happen because MathJax thinks "abcde" is just a string of variables.

10

You can use \limits to get that, as in $\sum\limits_{i=n}^\mathbb{N}\frac1{i^2}$. However, this severely messes with the interline spacing. Code: \sum\limits_{i=n}^\mathbb{N}\frac1{i^2} You can also use \displaystyle, as in $\displaystyle\sum_{i=n}^\mathbb{N}\frac1{i^2}$. However, this really messes with interline spacing. Code: \displaystyle\sum_{i=n}^\...

10

As suggested in other posts about this issue, you can use <sup> or <sub>. It is probably not optimal, but at least doable within the limitations of the software. See: How do I use a small font size in questions and answers?
You can also support related feature requests: Allow the <small> tag and Markdown extension for really small tiny ...

10

You are not supposed to write a greeting at the start of the question. There is a script in place that removes certain common forms of greetings (or at least what appears to be one; there are rare false positives). Of course there are ways to fool the script, but you should not.

9

Also very helpful is Carol Fisher's Alphabetical List of TeX Commands available in MathJax, which gives examples of all MathJax commands, and has a little MathJax sandbox for experiments.

9

For F.Zer: \begin{align} f(x:xs) &= \sum_{i:\ 0 \leq i < \#(x:xs)} (x:xs).i * (i + 1)\\ &=\sum_{i:\ i =0} (x:xs).i * (i + 1)+\sum_{i:\ 1\le i<\#xs+1}(x:xs).i * (i + 1)\\ &=x + \sum_{i:\ 1 \leq i \le \#xs} xs.(i-1) * (i + 1)\\ &=x + \sum_{j:\ 0 \leq j < \#xs} xs.j * (j + 2)\quad\quad(\text{Here }j=i-1\text{ and }i\text{ is replaced ...

9

I don't think there's been any recent change; this is how things have always worked. To get a single line break, put two spaces at the end of the line - for example, **Theorem** adsfdf *proof*: asdfdsaf produces Theorem adsfdf proof: asdfdsaf

9

Plain $\mathrm{\TeX}$ defines \mathcode`\[="405B \mathcode`\]="505D \delcode`\[="05B302 \delcode`\]="05D303 and $\mathrm{\LaTeX}$ does essentially the same. One could get extensible French brackets by something like \def\lfb{\delimiter"405D303 } \def\rfb{\delimiter"505B302 } Compiling the following file \def\lfb{\delimiter"405D303 } \def\rfb{\delimiter"...

9

To reiterate what was said in a comment, that's not really possible, except in a way on mobile. It is possible, though, to hide text as a spoiler. Like this (hover over it to show, click to fix it; on mobile one has to click, but this is even indicated). The syntax is >! Text That is sometimes used, but it is not really convenient as it takes up the full ...
9

As far as I know, math.SE does not have a formal style guide, so (as the comments to your question indicate) there is no one correct answer to where the punctuation should go. If correctness is your main concern, then I would suggest consulting a style guide and, should anyone then question your decision, you can point them to the style guide as your ...

8

Line breaks and many other things are not supported in comments. The basic design rationale (as described by SE) is that comments should be short and sweet: if you need to start a new paragraph, either you are really writing an answer (and so should be posting as such) or you are being too verbose in the comments.

8

Another possibility is to use \smash to stop MathJax from adjusting the line spacing to accommodate mathematical expressions: $$\begin{align} \smash{\sum_{i=1}^n}F_{2i-1}&=F_1+F_3+F_5+\cdots+F_{2n-1}\\ &=1+2+5+\cdots+F_{2n-1}\\ &=F_{2n}\\ \end{align}$$

7

I think someone has already mentioned WP:MATH somewhere. You can just type WP:MATH in Firefox's English Wikipedia search.
{}
Search results Search: All articles in the CJM digital archive with keyword representations Expand all        Collapse all Results 1 - 9 of 9 1. CJM Online first Rotger, Victor; de Vera-Piquero, Carlos Galois Representations Over Fields of Moduli and Rational Points on Shimura Curves The purpose of this note is introducing a method for proving the existence of no rational points on a coarse moduli space $X$ of abelian varieties over a given number field $K$, in cases where the moduli problem is not fine and points in $X(K)$ may not be represented by an abelian variety (with additional structure) admitting a model over the field $K$. This is typically the case when the abelian varieties that are being classified have even dimension. The main idea, inspired on the work of Ellenberg and Skinner on the modularity of $\mathbb{Q}$-curves, is that to a point $P=[A]\in X(K)$ represented by an abelian variety $A/\bar K$ one may still attach a Galois representation of $\operatorname{Gal}(\bar K/K)$ with values in the quotient group $\operatorname{GL}(T_\ell(A))/\operatorname{Aut}(A)$, provided $\operatorname{Aut}(A)$ lies in the centre of $\operatorname{GL}(T_\ell(A))$. We exemplify our method in the cases where $X$ is a Shimura curve over an imaginary quadratic field or an Atkin-Lehner quotient over $\mathbb{Q}$. Keywords:Shimura curves, rational points, Galois representations, Hasse principle, Brauer-Manin obstructionCategories:11G18, 14G35, 14G05 2. CJM Online first Abdesselam, Abdelmalek; Chipalkatti, Jaydeep On Hilbert Covariants Let $F$ denote a binary form of order $d$ over the complex numbers. If $r$ is a divisor of $d$, then the Hilbert covariant $\mathcal{H}_{r,d}(F)$ vanishes exactly when $F$ is the perfect power of an order $r$ form. In geometric terms, the coefficients of $\mathcal{H}$ give defining equations for the image variety $X$ of an embedding $\mathbf{P}^r \hookrightarrow \mathbf{P}^d$. 
In this paper we describe a new construction of the Hilbert covariant; and simultaneously situate it into a wider class of covariants called the Göttingen covariants, all of which vanish on $X$. We prove that the ideal generated by the coefficients of $\mathcal{H}$ defines $X$ as a scheme. Finally, we exhibit a generalisation of the Göttingen covariants to $n$-ary forms using the classical Clebsch transfer principle. Keywords:binary forms, covariants, $SL_2$-representationsCategories:14L30, 13A50 3. CJM 2011 (vol 63 pp. 1107) Liu, Baiying Genericity of Representations of p-Adic $Sp_{2n}$ and Local Langlands Parameters Let $G$ be the $F$-rational points of the symplectic group $Sp_{2n}$, where $F$ is a non-Archimedean local field of characteristic $0$. Cogdell, Kim, Piatetski-Shapiro, and Shahidi constructed local Langlands functorial lifting from irreducible generic representations of $G$ to irreducible representations of $GL_{2n+1}(F)$. Jiang and Soudry constructed the descent map from irreducible supercuspidal representations of $GL_{2n+1}(F)$ to those of $G$, showing that the local Langlands functorial lifting from the irreducible supercuspidal generic representations is surjective. In this paper, based on above results, using the same descent method of studying $SO_{2n+1}$ as Jiang and Soudry, we will show the rest of local Langlands functorial lifting is also surjective, and for any local Langlands parameter $\phi \in \Phi(G)$, we construct a representation $\sigma$ such that $\phi$ and $\sigma$ have the same twisted local factors. As one application, we prove the $G$-case of a conjecture of Gross-Prasad and Rallis, that is, a local Langlands parameter $\phi \in \Phi(G)$ is generic, i.e., the representation attached to $\phi$ is generic, if and only if the adjoint $L$-function of $\phi$ is holomorphic at $s=1$. 
As another application, we prove for each Arthur parameter $\psi$, and the corresponding local Langlands parameter $\phi_{\psi}$, the representation attached to $\phi_{\psi}$ is generic if and only if $\phi_{\psi}$ is tempered. Keywords:generic representations, local Langlands parametersCategories:22E50, 11S37 4. CJM 2009 (vol 62 pp. 34) Campbell, Peter S.; Nevins, Monica Branching Rules for Ramified Principal Series Representations of $\mathrm{GL}(3)$ over a $p$-adic Field We decompose the restriction of ramified principal series representations of the $p$-adic group $\mathrm{GL}(3,\mathrm{k})$ to its maximal compact subgroup $K=\mathrm{GL}(3,R)$. Its decomposition is dependent on the degree of ramification of the inducing characters and can be characterized in terms of filtrations of the Iwahori subgroup in $K$. We establish several irreducibility results and illustrate the decomposition with some examples. Keywords:principal series representations, branching rules, maximal compact subgroups, representations of $p$-adic groupsCategories:20G25, 20G05 5. CJM 2009 (vol 62 pp. 439) Sundhäll, Marcus; Tchoundja, Edgar On Hankel Forms of Higher Weights: The Case of Hardy Spaces In this paper we study bilinear Hankel forms of higher weights on Hardy spaces in several dimensions. (The Schatten class Hankel forms of higher weights on weighted Bergman spaces have already been studied by Janson and Peetre for one dimension and by Sundhäll for several dimensions). We get a full characterization of Schatten class Hankel forms in terms of conditions for the symbols to be in certain Besov spaces. Also, the Hankel forms are bounded and compact if and only if the symbols satisfy certain Carleson measure criteria and vanishing Carleson measure criteria, respectively. Keywords:Hankel forms, Schatten—von Neumann classes, Bergman spaces, Hardy spaces, Besov spaces, transvectant, unitary representations, Möbius groupCategories:32A25, 32A35, 32A37, 47B35 6. CJM 2006 (vol 58 pp. 
23) Dabbaghian-Abdoly, Vahid Constructing Representations of Finite Simple Groups and Covers Let $G$ be a finite group and $\chi$ be an irreducible character of $G$. An efficient and simple method to construct representations of finite groups is applicable whenever $G$ has a subgroup $H$ such that $\chi_H$ has a linear constituent with multiplicity $1$. In this paper we show (with a few exceptions) that if $G$ is a simple group or a covering group of a simple group and $\chi$ is an irreducible character of $G$ of degree less than 32, then there exists a subgroup $H$ (often a Sylow subgroup) of $G$ such that $\chi_H$ has a linear constituent with multiplicity $1$. Keywords:group representations, simple groups, central covers, irreducible representationsCategories:20C40, 20C15 7. CJM 2005 (vol 57 pp. 648) Nevins, Monica Branching Rules for Principal Series Representations of $SL(2)$ over a $p$-adic Field We explicitly describe the decomposition into irreducibles of the restriction of the principal series representations of $SL(2,k)$, for $k$ a $p$-adic field, to each of its two maximal compact subgroups (up to conjugacy). We identify these irreducible subrepresentations in the Kirillov-type classification of Shalika. We go on to explicitly describe the decomposition of the reducible principal series of $SL(2,k)$ in terms of the restrictions of its irreducible constituents to a maximal compact subgroup. Keywords:representations of $p$-adic groups, $p$-adic integers, orbit method, $K$-typesCategories:20G25, 22E35, 20H25 8. CJM 2000 (vol 52 pp. 1121) Ballantine, Cristina M. Ramanujan Type Buildings We will construct a finite union of finite quotients of the affine building of the group $\GL_3$ over the field of $p$-adic numbers $\mathbb{Q}_p$. We will view this object as a hypergraph and estimate the spectrum of its underlying graph. Keywords:automorphic representations, buildingsCategory:11F70 9. CJM 1997 (vol 49 pp. 543) Ismail, Mourad E. 
H.; Rahman, Mizan; Suslov, Sergei K. Some summation theorems and transformations for $q$-series We introduce a double sum extension of a very well-poised series and extend to this the transformations of Bailey and Sears as well as the ${}_6\phi_5$ summation formula of F. H. Jackson and the $q$-Dixon sum. We also give $q$-integral representations of the double sum. Generalizations of the Nassrallah-Rahman integral are also found. Keywords: basic hypergeometric series, balanced series, very well-poised series, integral representations, Al-Salam-Chihara polynomials. Categories: 33D20, 33D60
{}
# Algebra | MA20217/MA20219 Algebra 2B assignment help

Elementary axiomatic theory of rings. Integral domains, fields, characteristic. Subrings and product of rings. Homomorphisms, ideals and quotient rings. Isomorphism theorems. Fields of fractions.

Use Cramer's rule to solve the system
$$\begin{aligned} 2 x_{1}-x_{2} &=1 \\ 4 x_{1}+4 x_{2} &=20 . \end{aligned}$$

Solution. The coefficient matrix and right-hand-side vector are
$$A=\left[\begin{array}{rr} 2 & -1 \\ 4 & 4 \end{array}\right] \quad \text { and } \quad \mathbf{b}=\left[\begin{array}{r} 1 \\ 20 \end{array}\right]$$
so that
$$\operatorname{det} A=8-(-4)=12$$
and therefore
$$x_{1}=\frac{\left|\begin{array}{rr} 1 & -1 \\ 20 & 4 \end{array}\right|}{\left|\begin{array}{rr} 2 & -1 \\ 4 & 4 \end{array}\right|}=\frac{24}{12}=2 \quad \text { and } \quad x_{2}=\frac{\left|\begin{array}{rr} 2 & 1 \\ 4 & 20 \end{array}\right|}{\left|\begin{array}{rr} 2 & -1 \\ 4 & 4 \end{array}\right|}=\frac{36}{12}=3$$

## MA20217/MA20219 COURSE NOTES:

Since this holds for all $x$, we conclude that $(f+g)+h=f+(g+h)$, which is the associative law for addition of vectors. Next, if $0$ denotes the constant function with value 0, then for any $f \in V$ we have that for all $0 \leq x \leq 1$,
$$(f+0)(x)=f(x)+0=f(x) .$$
(We don't write the zero element of this vector space in boldface because it's customary not to write functions in bold.) Since this is true for all $x$ we have that $f+0=f$, which establishes the additive identity law. Also, we define $(-f)(x)=-(f(x))$ so that for all $0 \leq x \leq 1$,
$$(f+(-f))(x)=f(x)-f(x)=0 .$$
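Cramer's rule for a 2×2 system is easy to check in code. The following standalone Python sketch (my own illustration, not part of the course notes) solves the same system by forming the numerator determinants explicitly:

```python
# Cramer's rule for the 2x2 system  2*x1 - x2 = 1,  4*x1 + 4*x2 = 20.
def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] is a*d - b*c.
    (a, b), (c, d) = m
    return a * d - b * c

A = [[2.0, -1.0],
     [4.0,  4.0]]
b = [1.0, 20.0]

d = det2(A)                               # 8 - (-4) = 12

# Replace column i of A by b to form the numerator determinant for x_i.
A1 = [[b[0], A[0][1]], [b[1], A[1][1]]]   # |1 -1; 20 4| = 24
A2 = [[A[0][0], b[0]], [A[1][0], b[1]]]   # |2  1;  4 20| = 36

x1 = det2(A1) / d   # 24 / 12 = 2.0
x2 = det2(A2) / d   # 36 / 12 = 3.0
print(x1, x2)       # 2.0 3.0
```

Substituting back confirms the solution: 2·2 − 3 = 1 and 4·2 + 4·3 = 20.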
{}
## Can I expect good results having low-correlation attributes?

4

This was a question I saw in an interview for a data scientist position: "Here is the correlation heatmap that I got from my attributes. Regarding the correlation of each feature with the dependent variable (target/class), it is noticeable that the correlations are not very strong. Still, I would like to know whether I can expect good results from a classification model using this dataset. Also, what further investigation can I do (if I shouldn't look at correlation only)?"

Did you try to train a classifier? – Sahar Milis – 2020-09-16T03:45:33.027

This was a question from an interview for a data scientist position in a company in my town. – joann2555 – 2020-09-16T20:59:52.713

0

It's a general question, so there are more than a few things you can do. Although, what is stopping you from training a basic classifier and investigating the results? Some ideas:

• Use the Predictive Power Score to keep investigating your data
• Check for non-linear correlation between the features
• Investigate feature importance
• Use dimension reduction
• Check for class imbalance

I should've explained that this was a question from an interview for a data scientist position in a company. I will edit the question. – joann2555 – 2020-09-16T21:00:53.240

3

Correlation does not affect your model when you use decision trees in a classification problem. In the theory of decision tree models, you don't need to check correlation or multicollinearity, because the splits in decision trees are made using entropy/information gain. Correlation only checks for linear dependencies. The same holds when the dataset is highly correlated: you can still get very good results with decision trees, because you don't need to delete correlated features or do dimension reduction (unless you have to for other reasons). It can happen that you don't get very good results when you use linearly structured models like a multiclass neural network or multiclass logistic regression.
There you will see that dimension reduction etc. can have a large influence on the accuracy of these models. I had a similar question but with highly correlated features: decision-tree regression to avoid multicollinearity for regression model? In your case I would say that, if we use decision trees, the low correlation is not a problem. However, we should check this with the permutation importance of the features, and also check for polynomial dependencies. Of course you should ask the interviewer more questions about his question and its goal, to get more background information. This is very important in interviews.
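The point that low linear correlation does not rule out a good classifier can be demonstrated with a tiny synthetic dataset (my own construction, plain Python): a feature whose Pearson correlation with the target is essentially zero, yet a single threshold split — the building block of a decision tree — classifies perfectly.

```python
import random

random.seed(0)
n = 10_000
x = [random.uniform(-1.0, 1.0) for _ in range(n)]
y = [1 if abs(v) > 0.5 else 0 for v in x]   # target depends on x, but not linearly

# Pearson correlation between x and y, computed by hand.
mx = sum(x) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
r = cov / (sx * sy)

# A single tree-style split on |x| recovers the target exactly.
acc = sum((1 if abs(a) > 0.5 else 0) == b for a, b in zip(x, y)) / n

print(round(r, 3), acc)   # correlation near 0, accuracy 1.0
```

By symmetry the linear correlation vanishes, which is exactly why a heatmap of Pearson correlations can look discouraging while a split-based model still succeeds.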
{}
# Why I hate philosophy Discussion in 'General Philosophy' started by rpenner, Apr 26, 2012.

1. ### Tero Registered Member Messages: 76 I always had trouble with philosophers. To honor them, I posted a story a few weeks back. Oops. It won't let me post it. So here is the story; you can see it on my home page.

3. ### rpenner Fully Wired Valued Senior Member Messages: 4,833 Presumably, NH is talking about "mik" quoted in the OP, where I used the phrase "misuse of philosophy." I disagree. I believe him to be an ignorant troll. He asserted that his claimed 170+ IQ gave him the right to "question authority", which is obviously special pleading and an authoritarian claim on its face. Embracing the statement that clocks tick "more slowly at higher velocities" makes no physical sense, because this would allow us to progressively refine an absolute standard of rest as the state of motion in which clocks tick fastest. This absolute standard of rest was what the Michelson–Morley experiment and all of its successors were designed to uncover, and they famously failed. Neither can Galilean relativity save the statement (Galilean relativity has absolute time, but no universal standard of rest), because if two clocks are in motion relative to each other and each clock must tick "more slowly at [its relatively higher velocity]", then by the transitive property of ordering one arrives at the contradiction that a clock must tick slower than itself. Time standards used to be solar-based and closely tied to one's position on Earth. Astronomers developed corrections to raw sundial readings to better approximate a time standard by which celestial motions moved more uniformly. Eventually mechanical (later electronic) clocks became reliable time-keepers that needed only to be synchronized with local events like the sun reaching the zenith (noon).
As mechanized transport and fast communication networks (railroad and telegraph) developed, for the purposes of trade, local solar standards of time keeping were replaced with time zone so that larger regions could be synchronized and actions coordinated. This is a man-made convention and has no connection with the physical concept of absolute time. In satellite systems like GPS and television program delivery, a global convention of synchronization has been established (misleadingly called Universal Time) but this is just part of a man-made synchronization procedure built upon a convenient, if imaginary, coordinate system centered on the Earth. In UT, 2012-05-06 05:00:00 has a well defined meaning, but its meaning is different from the absolute time of Galileo and Newton. It is merely a man-made label for events even if an equally valid system would label those same events with non-synchronous times. I'm thinking of: $E \vec{u} = c^2 \vec{p} \\ E^2 = \left( mc^2 \right)^2 + \left( p c \right)^2 \\ ( c^2 - u^2 ) \left( \Delta t \right)^2 = ( c^2 - u'^2 ) \left( \Delta t' \right)^2 = c^2 \left( \Delta \tau \right)^2$ Which relate the coordinate energy (E), coordinate velocity (u), coordinate momentum (p), mass (m), elapsed coordinate time ( $\Delta t$ ) and elapsed proper time ( $\Delta \tau$ ) for a free particle in an inertial coordinate system. These were derived from the physics of the universe and are incompatible with absolute coordinate time or absolute proper time. 5. ### ughaibuRegistered Senior Member Messages: 224 I'd say it's bollocks. Can you quote the philosophers involved in this "tension"? As I recall, the high profile pests who have wanked on about the demise, actual or impending, of philosophy, are exactly two in number: Hawking and Krauss. And I seriously doubt that the fact that both of these have had flaws in their thinking pointed out by professional philosophers, is coincidental. 7. 
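The free-particle relations quoted above are easy to sanity-check numerically. A minimal sketch, in units where c = 1 and with an arbitrary velocity u = 0.6c (all numbers illustrative):

```python
import math

c = 1.0          # work in units where c = 1
m = 1.0          # mass (illustrative)
u = 0.6 * c      # coordinate velocity

gamma = 1.0 / math.sqrt(1.0 - (u / c) ** 2)
E = gamma * m * c ** 2        # coordinate energy
p = gamma * m * u             # coordinate momentum

# E u = c^2 p
assert math.isclose(E * u, c ** 2 * p)
# E^2 = (m c^2)^2 + (p c)^2
assert math.isclose(E ** 2, (m * c ** 2) ** 2 + (p * c) ** 2)

# (c^2 - u^2)(dt)^2 = c^2 (dtau)^2, with dt = gamma * dtau
dtau = 2.0
dt = gamma * dtau
assert math.isclose((c ** 2 - u ** 2) * dt ** 2, c ** 2 * dtau ** 2)
```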
### EmilValued Senior Member Messages: 2,801 As has been previously asserted, philosophy, including logic, has some very useful shortcuts. For example: if an assumption is taken as true and leads to a paradox, this means that assumption is false. 8. ### hansdaValued Senior Member Messages: 2,424 Science as we know it today started with Newton's Laws of Motion. Newton published his science in a book titled 'Mathematical Principles of Natural Philosophy'. So, as per Newton, Science is that part of Philosophy which follows mathematical principles. There may be some part of Philosophy which does not follow mathematical principles, or for which the mathematical principles are not yet discovered. So, we can say that Science is a subset of Philosophy.
# Goldman-Turaev formality implies Kashiwara-Vergne Tuesday, 4 December, 2018 ## Published in: arXiv:1812.01159 Let Σ be a compact connected oriented 2-dimensional manifold with non-empty boundary. In our previous work, we have shown that the solution of generalized (higher genus) Kashiwara-Vergne equations for an automorphism F \in {\rm Aut}(L) of a free Lie algebra implies an isomorphism between the Goldman-Turaev Lie bialgebra \mathfrak{g}(Σ) and its associated graded {\rm gr}\, \mathfrak{g}(Σ). In this paper, we prove the converse: if F induces an isomorphism \mathfrak{g}(Σ) \cong {\rm gr} \, \mathfrak{g}(Σ), then it satisfies the Kashiwara-Vergne equations up to conjugation. As an application of our results, we compute the degree one non-commutative Poisson cohomology of the Kirillov-Kostant-Souriau double bracket. The main technical tool used in the paper is a novel characterization of conjugacy classes in the free Lie algebra in terms of cyclic words. Anton Alekseev Nariya Kawazumi Yusuke Kuno Florian Naef
# Tag Info 37 The Elements of Statistical Learning by Hastie et al. define ridge regression as follows (Section 3.4.1, equation 3.41): $$\hat \beta{}^\mathrm{ridge} = \underset{\beta}{\mathrm{argmin}}\left\{\sum_{i=1}^N(y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j)^2 + \lambda \sum_{j=1}^p \beta_j^2\right\},$$ i.e. they explicitly exclude the intercept term $\beta_0$ from the ... 27 $\beta_0$ is not the odds of the event when $x_1 = x_2 = 0$, it is the log of the odds. In addition, it is the log odds only when $x_1 = x_2 = 0$, not when they are at their lowest non-zero values. 27 It will almost never be meaningful to use the no intercept model in logistic regression. The intercept parameter $\beta_0$ is modelling the marginal distribution of the response $Y$, so using $\beta_0=0$ is tantamount to assuming that $P(Y=1)=0.5$, marginally. Do you really know that? If that is untrue, you cannot trust any inference from the no intercept ... 21 It's unusual to not fit an intercept and generally inadvisable - one should only do so if you know it's 0, but I think that (and the fact that you can't compare the $R^2$ for fits with and without intercept) is well and truly covered already (if possibly a little overstated in the case of the 0 intercept); I want to focus on your main issue which is that you ... 18 Adding +0 (or -1) to a model formula (e.g., in lm()) in R suppresses the intercept. This is generally considered a bad thing to do; see: When is it OK to remove the intercept in lm()? When forcing intercept of 0 in linear regression is acceptable/advisable The estimated slope is calculated differently depending on whether the intercept is estimated ... 16 It is logical, once you consider the matrix notation that your formula will be translated into internally. In the matrix, the non-constant predictors will be translated into (one or more) columns, and the intercept will be translated into a column consisting entirely of ones.
For instance, in R you would write a very simple OLS as: lm(z~1+x+y) In matrix ... 14 Here is an illustration that simulates $y$ and $x$ independently of each other so that the true slope is zero. The mean of $y$ is nonzero, such that the true intercept is also nonzero. The LS line without intercept must start at $(0,0)$ without intercept, and will try to "catch up" with the data points as quickly as possible if $y$ has nonzero mean, which ... 13 The Ordinary Least Squares estimate of the slope when the intercept is suppressed is: $$\hat{\beta}=\frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}$$ 13 The coefficients of each predictor are almost always going to change when you add more predictors. This is an example of the answer changing when you ask a different question. Your software should let you fit a regression with no predictor at all. For example, if I try to predict people's weights with a regression with no predictors, then I will get the mean ... 12 Short answer to question in title: (almost) NEVER. In the linear regression model $$y = \alpha + \beta x + \epsilon$$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is zero. You almost never know that. $R^2$ becomes higher without intercept, not because the model is better, but because the definition of $... 11 If you write out the fitted model for the log odds of smoking $$\log \frac{\Pr(Y=1)}{\Pr(Y=0)} = -4.380\,1 + -0.324\,56\ I_\mathrm{teen} + 1.451\,19 \ I_\mathrm{mature} + -0.989\,1\ I_\mathrm{old}$$ where the dummies are $$I_\mathrm{teen}=\left\{ \begin{array}{l l} 0 & X\neq\mathrm{teenager}\\ 1& X=\mathrm{teenager}\\ \end{array}\right.$$ &c., ... 11 This an example of linear regression fit. The intercept of this fit is negative and it fits well. 10 You need to drop the intercept (the vector of 1's) from the mm matrix since the glmnet package automatically demeans the data and reports the intercept term by default. 
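The closed-form estimate quoted above, $\hat{\beta}=\sum_i x_iy_i/\sum_i x_i^2$, can be verified against a generic least-squares solver; a small numpy sketch with simulated data (the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2.5 * x + 0.1 * rng.normal(size=100)

# Closed form for regression through the origin.
beta_hat = np.sum(x * y) / np.sum(x ** 2)

# Same fit via lstsq on a single-column design matrix (no column of ones).
beta_lstsq = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)[0][0]

assert np.isclose(beta_hat, beta_lstsq)
```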
Alternatively, you can use the intercept parameter to glmnet (TRUE by default). 10 @gung has given the OLS estimate. That's what you were seeking. However, when dealing with physical quantities where the line must go through the origin, it's common for the scale of the error to vary with the x-values (to have, roughly, constant relative error). In that situation, ordinary unweighted least squares would be inappropriate. In that situation,... 10 Start with a simple logistic regression:$\,\,\text{logit}(\mu) \,= \beta_0 + \beta_1 x\quad\quad$(original)$\quad\quad\quad\quad= \beta_0^* + \beta_1^* (x-\bar{x})/s_x\quad\quad$(standardized x)$\quad\quad\quad\quad= (\beta_0^* -\beta_1^*\bar{x}/s_x)+ (\beta_1^*/s_x) x$So$\beta_1=\beta_1^*/s_x$and$\beta_0=\beta_0^* -\beta_1^*\bar{x}/s_x$More ... 10 The formula lm(formula = y ~ x1 + x2) will include an intercept by default. The formula lm(formula = y ~ x1 + x2 -1) or lm(formula = y ~ x1 + x2 +0) is how R estimates an OLS model without an intercept. The formula lm(formula = y-1 ~ x1 + x2) estimates a model against a dependent variable y with 1 subtracted from it. Centering all terms at their ... 10 In addition to @DaveT's helpful answer, here are a few more clarifications regarding the estimated intercepts in your models. Model 1 The (true) intercept in your first model lm(mpg ~ 1, data=mtcars) represents the mean value of mpg for all cars represented by the ones included in this data set, regardless of their displacement (disp) or horse power (hp)... 9 The intercept in a linear regression model may represent two totally different things: A) Your theoretical model may lead you to a specification with a constant term. A basic example from Economics is when one wants to estimate the parameters of a production function (a statistical relationship that links output produced with production factors used)$$... 
9 The intercept should generally only be omitted if all the predictors and the response have mean=0 (in which case the intercept must necessarily be 0). Setting standardize=TRUE, which is the default option for glmnet::glmnet, only standardizes the predictors. The function has another parameter to standardize the response, but by default this is set to ... 9 When dealing with categorical variables in LASSO regression, it is usual to use a grouped LASSO that keeps the dummy variables corresponding to a particular categorical variable together (i.e., you cannot exclude only some of the dummy variables from the model). A useful method is the Modified Group LASSO (MGL) described in Choi, Park and Seo (2012). In ... 8 The intercept has a meaning here, as in any regression. But the meaning is neither interesting nor useful. As calendar year is the predictor, the intercept here is the value predicted for year 0. Set aside the fact that there was no year 0 and years are reckoned in retrospect, in the calendar you are using, to have run ..., 1 BC (or BCE), 1 (AD), etc. ... 8 Something like this should do it: fit <- lm( I(y-9.81) ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6 , data=data[i:(i+k),]) Something similar should be possible in many packages. An alternative: interc <- rep(9.81,k+1) fit <- lm(y ~ 0 + x1 + x2 + I(x3^2) + x4 + x5 + x6 + offset(interc),data=data[i:(i+k),]) While the coefficients and standard errors ... 8 The formulas are the same as always, so let's focus on understanding what's going on. Here is a small cloud of points. Its slope is uncertain. (Indeed, the coordinates of these points were drawn independently from a standard Normal distribution and then moved a little to the side, as shown in subsequent plots.) Here is the OLS fit. The intercept is ... 8 Nick Cox provided an excellent response and I wanted to add a more intuitive answer. 
Model 1 Model 1 investigates the relationship between IQ and Brain size among subjects represented by the ones in the study, regardless of those subjects' Gender, Height and Weight. In other words, if you imagine the target population of subjects from which the subjects in ... 7 Most multiple regression models include a constant term (i.e., the intercept), since this ensures that the model will be unbiased--i.e., the mean of the residuals will be exactly zero. (The coefficients in a regression model are estimated by least squares--i.e., minimizing the mean squared error. Now, the mean squared error is equal to the variance of the ... 7 As mentioned before it is sort of hard to justify not using an intercept unless there is strong knowledge that the linear regression line passes through the origin. However, how fitting the model with and without the intercept affects the residuals is kind of case by case. For example, if the true model that generated the data did have an intercept far ... 7 This question touches on a number of existing posts but I didn't find one that related to all of it. You should not include the constraint as if it were an ordinary data point, with the same uncertainty (but see point 4.) Ordinary regression through the origin is entirely straightforward. Consider $y_i = \beta x_i + \epsilon_i$, $S = \sum_i (y_i - \beta ...$ 7 It does not penalize the intercept. But it does penalize the covariates, which are correlated with the intercept. Thus, changing the estimates of the coefficients for the non-constant variables changes the estimates of the intercept. To help see that, note that in your dataset, all your covariates are in the interval $[-100, -99]$, making the estimate of ... 7 It's not true that adding predictors should generally cause the estimate of the intercept $\alpha$ to decrease. The intercept is the predicted $y$ value when all the $x$ predictors are equal to 0.
So adding new predictors can cause the intercept to increase or decrease, by pretty much any amount, based on the mean of the $x$ predictor you're adding and the ... 6 It does not make sense to have such a design matrix because the columns are linearly dependent (specifically, column 1 = column 2 + column 3) so you cannot compute the OLS estimator, which requires inversion of $X'X$, where $X$ is said design matrix. Your proposed design matrix falls under what is sometimes called the "dummy variable trap". What you can do, ... Only top voted, non community-wiki answers of a minimum length are eligible
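The "dummy variable trap" mentioned in the last answer can be demonstrated in a few lines (a numpy sketch with a made-up two-level factor):

```python
import numpy as np

# Intercept column plus BOTH dummies of a two-level factor.
intercept = np.ones(6)
d_male = np.array([1, 1, 1, 0, 0, 0], dtype=float)
d_female = 1.0 - d_male
X = np.column_stack([intercept, d_male, d_female])

# Column 1 = column 2 + column 3, so X is rank-deficient ...
assert np.linalg.matrix_rank(X) == 2
# ... and X'X is singular (determinant 0), so the OLS inverse does not exist.
assert np.isclose(np.linalg.det(X.T @ X), 0.0)

# Dropping one dummy restores full column rank.
X_ok = np.column_stack([intercept, d_male])
assert np.linalg.matrix_rank(X_ok) == 2
```

This is why regression software typically drops one level per factor when an intercept is included.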
# Commutativity of a sheaf of groups from an epimorphism Let $F$ and $G$ be sheaves of groups $\mathcal{S}^{op}\to Groups$ and $f:F\to G$ an epimorphism (of sheaves of sets). If $F$ is a sheaf of commutative groups, is $G$ also a sheaf of commutative groups? If $f$ were a surjection on every section, then for $x,y\in G(V)$ there would be $x', y'\in F(V)$ with $f(V)(x')=x$ and $f(V)(y')=y$, and $$xy=f(V)(x')f(V)(y')=f(V)(x'y')=f(V)(y'x')=f(V)(y')f(V)(x')=yx,$$ but $f$ is only surjective on stalks.
# In what topology DM stacks are stacks Background/motivation One of the main reasons to introduce (algebraic) stacks is to build "fine moduli spaces" for functors which, strictly speaking, are not representable. The yoga is more or less as follows. One notices that a representable functor on the category of schemes is a sheaf in the fpqc topology. In particular it is a sheaf in coarser topologies, like the fppf or étale topologies. Now some naturally defined functors (for instance the functor $\mathcal{M}_{1,1}$ of elliptic curves) are not sheaves in the fpqc topology (actually $\mathcal{M}_{1,1}$ is not even an étale sheaf) so there is no hope of representing them. Enter the $2$-categorical world: we introduce fibered categories and stacks. Many functors which are not sheaves arise by collapsing fibered categories which ARE stacks, so not all hope is lost. But, as not every fpqc sheaf is representable, we should not expect that every fpqc stack is in some sense "represented by a generalized space", so we make a definition of what we mean by an algebraic stack. Let me stick with the Deligne-Mumford case. Then a DM stack is a fibered category (in groupoids) over the category of schemes, which 1) is a stack in the étale topology 2) has a "nice" diagonal 3) is in some sense étale locally similar to a scheme. I don't need to make precise what 2) and 3) mean. By the preceding philosophy we should expect that DM stacks generalize schemes in the same way that stacks generalize sheaves. In particular I would expect that DM stacks turn out to be stacks in finer topologies, just as schemes are sheaves not only in the Zariski topology (which is trivial) but also in the fpqc topology (which is a theorem of Grothendieck). Question Is it true that DM stacks are actually stacks in the fpqc topology? And if not, did someone propose a notion of "generalized space" in the context of stacks, so that this result holds?
Peirce’s 1870 “Logic Of Relatives” • Comment 10.12 Potential ambiguities in Peirce’s two versions of the “rich black man” example can be resolved by providing them with explicit graphical markups, as shown in Figures 28 and 29. (28) (29) On the other hand, as the forms of relational composition become more complex, the corresponding algebraic products of elementary relatives, for example, $\mathrm{(x\!:\!y\!:\!z)(y\!:\!z)(z)},$ will not always determine unique results without the addition of more information about the intended linkings of terms.
## College Algebra 7th Edition $2x^3-6x^2+4x$ Square the binomial using the formula $(a-b)^2=a^2-2ab+b^2$ with $a=x$ and $b=2$ to obtain: $=x^2(x-2)+x[x^2-2(x)(2) + 2^2] \\=x^2(x-2)+x(x^2-4x+4)$ Distribute $x^2$ and $x$ to obtain: $=x^2(x) -x^2(2) + x(x^2) -x(4x)+x(4) \\=x^3-2x^2+x^3-4x^2+4x$ Combine like terms to obtain: $=(x^3+x^3) + (-2x^2-4x^2) + 4x \\=2x^3-6x^2+4x$
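The whole simplification can be double-checked symbolically (a sympy sketch; the expression being expanded is taken to be $x^2(x-2)+x(x-2)^2$, as the working above implies):

```python
from sympy import symbols, expand

x = symbols('x')
original = x**2 * (x - 2) + x * (x - 2)**2

# Expanding and combining like terms reproduces the book's answer.
assert expand(original) == 2*x**3 - 6*x**2 + 4*x
```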
SEARCH HOME Math Central Quandaries & Queries Question from Monica, a student: Find dy/dx in terms of x and y, if sin(xy)=(x^2)-y. I tried to solve the problem by doing... =cos(xy)*[x*y'+y*x'] = 2x - y* y' =cos(xy)*[x*y'y+y*1] =2x-y*y' =cos(xy)*[(dy/dx)*x+y] = 2x-y*y' y'*(x+y)cos(xy) = y'(y-2x) I am not sure if I simplified the problem correctly, or distributed correctly. Could you please assist me with this practice problem? Thank you. Hi Monica, When you differentiate y in this expression you should get y' not y*y'. Hence the first line should be $\cos(xy) \times [x \times y' + y \times x'] = 2x - y'$ which becomes $\cos(xy) \times [x \times y' + y \times 1] = 2x - y'$ Can you see how to complete it now? Write back if you need more help. Penny Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
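Completing Penny's steps and solving for $y'$ can be checked symbolically (a sympy sketch; the closed form in the final comment is one algebraically equivalent way to write the answer):

```python
from sympy import symbols, sin, cos, Function, Eq, diff, solve, simplify

x = symbols('x')
y = Function('y')(x)

# Differentiate both sides of sin(x*y) = x**2 - y with respect to x.
lhs = diff(sin(x * y), x)   # cos(x*y) * (x*y' + y)
rhs = diff(x**2 - y, x)     # 2*x - y'
yprime = solve(Eq(lhs, rhs), diff(y, x))[0]

# One equivalent closed form: y' = (2x - y*cos(xy)) / (x*cos(xy) + 1)
assert simplify(yprime - (2*x - y*cos(x*y)) / (x*cos(x*y) + 1)) == 0
```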
# phase difference The magnitude is thus: and the phase is given by tan ϕ = −3 as ϕ = −72°. Phase difference, $\Delta \phi$, between 2 particles is just the difference in phase between them. In conjunction with the phase difference are two other terms: when the waveform A is ahead of B (i.e., when it reaches its maximum value before B reaches its maximum value), it is said to be leading; at the same time, B is behind (following) A, and it is said to be lagging. Note that the phase angle must always be less than 180°. The results from this experiment, displayed in Fig. More commonly used in AC circuits to indicate the timing relationship of current with respect to voltage. Observations made in the Fresnel mode with a large defocus distance showed that the intensity of the interference fringe system present in the region of the geometrical shadow of the wire, although weak, was great enough to be directly visible on the fluorescent screen. Electron holography of long-range electrostatic fields. Phase is expressed in angle or radian. The length of this vector denotes the peak value of the waveform (employing a convenient scale), and its angle is decided by selecting an arbitrary direction (usually horizontal) for a reference waveform. • For a resistive load, there is no phase difference between current and voltage. How Do We Describe Phase Difference? When two waveforms are out of phase, the way to express the time difference between the two is by stating the angle difference for one cycle, i.e., the angle value of the first waveform when the other one has a zero value. Often we will have two sinusoidal or other periodic waveforms having the same frequency, but phase shifted. A phase difference of 61 seconds is the same as a phase difference of 1 second.
In other words, the two alternating quantities have phase difference when they have the same frequency, but they attain their zero value at the different instant. The variation of skitter, however, decreases with fn. When the two quantities have the same frequency, and their maximum and minimum point achieve at the same point, then the quantities are said to have in the same phase. We took advantage of this additional biprism effect to give further evidence of the phase-difference effect. Microdensitometer traces below the photographs are drawn at higher magnification. $\Delta \phi$ between A and B: $\Delta \phi = 2 \pi \frac{\Delta t}{T}$ or $\Delta \phi = 2 \pi \frac{\Delta x}{\lambda}$, $y = y_{o} \, sin \left( x \frac{2 \pi}{\lambda} \right)$, $y = – y_{o} \, cos \left( t \frac{2 \pi}{T} \right)$. Set Δf to the sum of the maximum frequency deviation and worst-case drift: Get maximum normalized deviation from fIF: Choose the highest Q with a reasonably straight curve in Fig. All the rotor inertias are the same and equal to 1, i.e., Mn = 1. A schematic drawing of the whole setup of our first experiment (Matteucci, Missiroli, & Pozzi, 1982) is shown in Fig. } Thus, if a vector, Vector representation of cyclic values is both very convenient and easy. "item": Here the phase angle theta is the phase difference between the voltage applied to the impedance and the current flow through the impedance. There are several types, many of which produce zero output, after filtering, only when their inputs have a 90° phase difference. The platinum wire W (Matteucci, 1978) was coated laterally for half of its length with a thin layer of gold (black region), thus becoming a bimetallic biprism. The electrostatic Aharonov–Bohm experiment proposed by Boyer can therefore be regarded as a nonlocal type-2 phenomenon. There are 360° in one cycle. Notify me of follow-up comments by email. Please try again. across the capacitor when there is a sinusoidal voltage input. Fig. 
When dissimilar components (say resistor and capacitor together) are used in a circuit the phase angle is not 90° and can be any angle between −90° and +90°. The modulation index is not necessarily unity. For example, a horizontal (on the paper) vector of 6 in (15 cm) length can be drawn for a 120 V sinusoidal voltage. AC power supply usually has 50/60 Hz frequency depending on the region. Sample the signals x(t) and r(t) using a sampling frequency Fs = 10 KHz. First of all, we present what is the phase of a signal and in which unit it is measured. First we represent the power network using a graph, in which each vertex is a generator while each edge is a transmission line linking two generators. Then when Δfi and Δδi, i = 1, …, N, are both sufficiently small, it is easy to verify that the dynamics can be linearized to the following form: where ΔPmi is the difference between the mechanical power and the stable one, and. The standard deviation, however, is significantly affected by ϕ. We denote the standard frequency by f0 and the frequency deviation of generator i by Δfi. Phase difference is the difference, between two waves is having the same frequency and referenced to the same point in time. Fig. This phenomenon is known as interference and takes place when the signals are of the same frequency. Leading p… To calculate phase angle between two sine waves we need to measure the time difference between the peak points (or zero crossing) of the waveform. First, consider the situation when the FM input signal vi is unmodulated, so its frequency is fI. Let’s call y3(t) the superposition y1(t)+y2(t) and A3 its amplitude. Vasilis F. Pavlidis, ... Eby G. Friedman, in Three-Dimensional Integrated Circuit Design (Second Edition), 2017. Consider the two alternating currents Im1 and Im2 shown in the figure below. The phase difference represented by the Greek letter Phi (Φ). The other eigenvalues may be positive or negative since the weights can be positive or negative. 
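Following the sampling remark above (Fs = 10 kHz), the phase difference between two clean sinusoids of the same frequency can be read off from the DFT phase at the signal bin; a numpy sketch with an imposed 60° lag (valid for noise-free signals over whole cycles):

```python
import numpy as np

Fs = 10_000          # sampling frequency, Hz
f0 = 50              # signal frequency, Hz
t = np.arange(0, 0.2, 1 / Fs)          # 0.2 s -> an integer number of cycles

phase_true = np.deg2rad(60)            # r(t) lags x(t) by 60 degrees
x = np.sin(2 * np.pi * f0 * t)
r = np.sin(2 * np.pi * f0 * t - phase_true)

# Phase of each signal at the f0 bin of the DFT; their difference is the
# phase difference between the two waveforms.
k = int(f0 * len(t) / Fs)              # DFT bin index for f0
phi_x = np.angle(np.fft.rfft(x)[k])
phi_r = np.angle(np.fft.rfft(r)[k])
delta_phi = phi_x - phi_r

print(np.rad2deg(delta_phi))           # prints approximately 60.0
```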
Worst-case frequency drift = ± 15 kHz. In this case, since ϕ2 and ϕ1 are not simultaneously equal to 270°, the worst case μJ1,2 also decreases. On receiver and IF circuit ICs, a common FM or FSK demodulator is a quadrature detector. A negative phase difference, such as ΔΦ31 and ΔΦ32, indicates that the signal y3(t) follows the signals y1(t) and y2(t); we also say that y3(t) lags y1(t) and y2(t). Given: Maximum frequency deviation = 60 kHz. This is shown in. The images were recorded on a photographic plate with an exposure time of 10 s. Some results obtained for three different angles (− 24 degree, 0 degree, and 24 degree), together with the corresponding microdensitometer traces (magnified for the sake of clarity), are shown in Fig. 18B, and coated, Fig. Here in the diagram, both the waves reach different values at the same time. The interference fringe systems of the wire recorded in correspondence of uncoated, Fig.
## Commutators of angular momentum and a central force Hamiltonian September 30, 2015 phy1520 , , , , In problem 1.17 of [1] we are to show that non-commuting operators that both commute with the Hamiltonian, have, in general, degenerate energy eigenvalues. It suggests considering $$L_x, L_z$$ and a central force Hamiltonian $$H = \Bp^2/2m + V(r)$$ as examples. Let’s just demonstrate these commutators act as expected in these cases. With $$\BL = \Bx \cross \Bp$$, we have \label{eqn:angularMomentumAndCentralForceCommutators:20} \begin{aligned} L_x &= y p_z – z p_y \\ L_y &= z p_x – x p_z \\ L_z &= x p_y – y p_x. \end{aligned} The $$L_x, L_z$$ commutator is \label{eqn:angularMomentumAndCentralForceCommutators:40} \begin{aligned} \antisymmetric{L_x}{L_z} &= \antisymmetric{y p_z – z p_y }{x p_y – y p_x} \\ &= \antisymmetric{y p_z}{x p_y} -\antisymmetric{y p_z}{y p_x} -\antisymmetric{z p_y }{x p_y} +\antisymmetric{z p_y }{y p_x} \\ &= x p_z \antisymmetric{y}{p_y} + z p_x \antisymmetric{p_y }{y} \\ &= i \Hbar \lr{ x p_z – z p_x } \\ &= – i \Hbar L_y \end{aligned} cyclicly permuting the indexes shows that no pairs of different $$\BL$$ components commute. 
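The commutation relations just derived can also be checked numerically in a finite matrix representation (a sketch using the standard l = 1 matrices, in units with $\hbar = 1$; not part of the original derivation):

```python
import numpy as np

# l = 1 matrix representation of the angular momentum operators (hbar = 1).
s = 1 / np.sqrt(2)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# [Lx, Lz] = -i Ly, and its cyclic permutations, as derived above.
assert np.allclose(comm(Lx, Lz), -1j * Ly)
assert np.allclose(comm(Ly, Lx), -1j * Lz)
assert np.allclose(comm(Lz, Ly), -1j * Lx)
```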
For $$L_y, L_x$$ that is \label{eqn:angularMomentumAndCentralForceCommutators:60} \begin{aligned} \antisymmetric{L_y}{L_x} &= \antisymmetric{z p_x - x p_z }{y p_z - z p_y} \\ &= \antisymmetric{z p_x}{y p_z} -\antisymmetric{z p_x}{z p_y} -\antisymmetric{x p_z }{y p_z} +\antisymmetric{x p_z }{z p_y} \\ &= y p_x \antisymmetric{z}{p_z} + x p_y \antisymmetric{p_z }{z} \\ &= i \Hbar \lr{ y p_x - x p_y } \\ &= - i \Hbar L_z, \end{aligned} and for $$L_z, L_y$$ \label{eqn:angularMomentumAndCentralForceCommutators:80} \begin{aligned} \antisymmetric{L_z}{L_y} &= \antisymmetric{x p_y - y p_x }{z p_x - x p_z} \\ &= \antisymmetric{x p_y}{z p_x} -\antisymmetric{x p_y}{x p_z} -\antisymmetric{y p_x }{z p_x} +\antisymmetric{y p_x }{x p_z} \\ &= z p_y \antisymmetric{x}{p_x} + y p_z \antisymmetric{p_x }{x} \\ &= i \Hbar \lr{ z p_y - y p_z } \\ &= - i \Hbar L_x. \end{aligned} If these angular momentum components are also shown to commute with themselves (which they do), the commutator relations above can be summarized as \label{eqn:angularMomentumAndCentralForceCommutators:100} \antisymmetric{L_a}{L_b} = i \Hbar \epsilon_{a b c} L_c. In the example to consider, we'll have to consider the commutators with $$\Bp^2$$ and $$V(r)$$. Picking any one component of $$\BL$$ is sufficient due to the symmetries of the problem. For example \label{eqn:angularMomentumAndCentralForceCommutators:120} \begin{aligned} \antisymmetric{L_x}{\Bp^2} &= \antisymmetric{y p_z - z p_y}{p_x^2 + p_y^2 + p_z^2} \\ &= \antisymmetric{y p_z}{{p_x^2} + p_y^2 + {p_z^2}} -\antisymmetric{z p_y}{{p_x^2} + {p_y^2} + p_z^2} \\ &= p_z \antisymmetric{y}{p_y^2} -p_y \antisymmetric{z}{p_z^2} \\ &= p_z \lr{ 2 i \Hbar p_y } - p_y \lr{ 2 i \Hbar p_z } \\ &= 0. \end{aligned} How about the commutator of $$\BL$$ with the potential?
It is sufficient to consider one component again, for example \label{eqn:angularMomentumAndCentralForceCommutators:140} \begin{aligned} \antisymmetric{L_x}{V} &= \antisymmetric{y p_z – z p_y}{V} \\ &= y \antisymmetric{p_z}{V} – z \antisymmetric{p_y}{V} \\ &= -i \Hbar y \PD{z}{V(r)} + i \Hbar z \PD{y}{V(r)} \\ &= -i \Hbar y \PD{r}{V}\PD{z}{r} + i \Hbar z \PD{r}{V}\PD{y}{r} \\ &= -i \Hbar y \PD{r}{V} \frac{z}{r} + i \Hbar z \PD{r}{V}\frac{y}{r} \\ &= 0. \end{aligned} We’ve shown that all the components of $$\BL$$ commute with a central force Hamiltonian, and each different component of $$\BL$$ do not commute. The next step will be figuring out how to use this to show that there are energy degeneracies. # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## PHY1520H Graduate Quantum Mechanics. Lecture 4: Quantum Harmonic oscillator and coherent states. Taught by Prof. Arun Paramekanti September 29, 2015 phy1520 , , , , ### Disclaimer Peeter’s lecture notes from class. These may be incoherent and rough. This lecture reviewed a lot of quantum harmonic oscillator theory, and wouldn’t make sense without having seen raising and lowering operators (ladder operators), number operators, and the like. These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 2 content. ### Classical Harmonic Oscillator Recall the classical Harmonic oscillator equations in their Hamiltonian form \label{eqn:qmLecture4:40} \ddt{x} = \frac{p}{m} \label{eqn:qmLecture4:60} \ddt{p} = -k x. With \label{eqn:qmLecture4:140} \begin{aligned} x(t = 0) &= x_0 \\ p(t = 0) &= p_0 \\ k &= m \omega^2, \end{aligned} the solutions are ellipses in phase space \label{eqn:qmLecture4:100} x(t) = x_0 \cos(\omega t) + \frac{p_0}{m \omega} \sin(\omega t) \label{eqn:qmLecture4:120} p(t) = p_0 \cos(\omega t) – m \omega x_0 \sin(\omega t). 
After a suitable scaling of the variables, these elliptical orbits can be transformed into circular trajectories. ### Quantum Harmonic Oscillator \label{eqn:qmLecture4:160} \hat{H} = \frac{\hat{p}^2}{2 m} + \inv{2} k \hat{x}^2 Set \label{eqn:qmLecture4:200} \hat{X} = \sqrt{\frac{m \omega}{\Hbar}} \hat{x} \label{eqn:qmLecture4:220} \hat{P} = \sqrt{\inv{m \omega \Hbar}} \hat{p} The commutators after this change of variables go from \label{eqn:qmLecture4:240} \antisymmetric{ \hat{x}}{\hat{p}} = i \Hbar, to \label{eqn:qmLecture4:260} \antisymmetric{ \hat{X}}{\hat{P}} = i. The Hamiltonian takes the form \label{eqn:qmLecture4:280} \begin{aligned} \hat{H} &= \frac{\Hbar \omega}{2} \lr{ \hat{X}^2 + \hat{P}^2 } \\ &= \Hbar \omega \lr{ \lr{ \frac{\hat{X} -i \hat{P}}{\sqrt{2}} } \lr{ \frac{\hat{X} +i \hat{P}}{\sqrt{2}}} + \inv{2} }. \end{aligned} Define ladder operators (raising and lowering operators respectively) \label{eqn:qmLecture4:320} \hat{a}^\dagger = \frac{\hat{X} -i \hat{P}}{\sqrt{2}} \label{eqn:qmLecture4:340} \hat{a} = \frac{\hat{X} +i \hat{P}}{\sqrt{2}} so \label{eqn:qmLecture4:360} \hat{H} = \Hbar \omega \lr{ \hat{a}^\dagger \hat{a} + \inv{2} }. We can show \label{eqn:qmLecture4:380} \antisymmetric{\hat{a}}{\hat{a}^\dagger} = 1, and \label{eqn:qmLecture4:400} N \equiv \hat{a}^\dagger \hat{a}, \qquad N \ket{n} = n \ket{n}, where $$n \ge 0$$ is an integer. Recall that \label{eqn:qmLecture4:420} \hat{a} \ket{0} = 0, and \label{eqn:qmLecture4:440} \bra{X} X + i P \ket{0} = 0. With \label{eqn:qmLecture4:460} \braket{x}{0} = \Psi_0(x), we can show \label{eqn:qmLecture4:480} \inv{\sqrt{2}} \lr{ X + \PD{X}{} } \Psi_0(X) = 0. Also recall that \label{eqn:qmLecture4:520} \hat{a} \ket{n} = \sqrt{n} \ket{n-1} \label{eqn:qmLecture4:540} \hat{a}^\dagger \ket{n} = \sqrt{n + 1} \ket{n+1}
Also recall that \label{eqn:qmLecture4:520} \hat{a} \ket{n} = \sqrt{n} \ket{n-1} \label{eqn:qmLecture4:540} \hat{a}^\dagger \ket{n} = \sqrt{n + 1} \ket{n+1} ### Coherent states Coherent states for the quantum harmonic oscillator are the eigenkets of the annihilation and creation operators respectively \label{eqn:qmLecture4:580} \hat{a} \ket{z} = z \ket{z} \label{eqn:qmLecture4:600} \hat{a}^\dagger \ket{\tilde{z}} = \tilde{z} \ket{\tilde{z}} , where \label{eqn:qmLecture4:620} \ket{z} = \sum_{n = 0}^\infty c_n \ket{n}, and $$z$$ is allowed to be a complex number. Looking for such a state, we compute \label{eqn:qmLecture4:640} \begin{aligned} \hat{a} \ket{z} &= \sum_{n=1}^\infty c_n \hat{a} \ket{n} \\ &= \sum_{n=1}^\infty c_n \sqrt{n} \ket{n-1} \\ &= \sum_{n=0}^\infty c_{n+1} \sqrt{n+1} \ket{n}, \end{aligned} and compare this to \label{eqn:qmLecture4:660} z \ket{z} = \sum_{n=0}^\infty z c_n \ket{n}, so matching coefficients of $$\ket{n}$$ gives \label{eqn:qmLecture4:680} c_{n+1} \sqrt{n+1} = z c_n This gives \label{eqn:qmLecture4:700} c_{n+1} = \frac{z c_n}{\sqrt{n+1}} \label{eqn:qmLecture4:720} \begin{aligned} c_1 &= c_0 z \\ c_2 &= \frac{z c_1}{\sqrt{2}} = \frac{z^2 c_0}{\sqrt{2}} \\ \vdots & \end{aligned} or \label{eqn:qmLecture4:740} c_n = \frac{z^n c_0}{\sqrt{n!}}. So the desired state is \label{eqn:qmLecture4:760} \ket{z} = c_0 \sum_{n=0}^\infty \frac{z^n}{\sqrt{n!}} \ket{n}. Also recall that \label{eqn:qmLecture4:780} \ket{n} = \frac{\lr{ \hat{a}^\dagger }^n}{\sqrt{n!}} \ket{0}, which gives \label{eqn:qmLecture4:800} \begin{aligned} \ket{z} &= c_0 \sum_{n=0}^\infty \frac{\lr{z \hat{a}^\dagger}^n }{n!} \ket{0} \\ &= c_0 e^{z \hat{a}^\dagger} \ket{0}. \end{aligned} The normalization is \label{eqn:qmLecture4:820} c_0 = e^{-\Abs{z}^2/2}. While we have $$\braket{n_1}{n_2} = \delta_{n_1, n_2}$$, these $$\ket{z}$$ states are not orthonormal. Showing that the overlap \label{eqn:qmLecture4:840} \braket{z_1}{z_2} \ne 0, will be left for homework.
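The coefficient recursion can be verified numerically in a truncated Fock space. A sketch (the truncation size and the value of $$z$$ are arbitrary) checking both the normalization and that $$\hat{a} \ket{z} = z \ket{z}$$ component by component:

```python
import math

z = 0.4 + 0.3j   # arbitrary sample eigenvalue
N = 30           # Fock-space truncation (large enough that the tail is tiny)

# Normalized coherent-state coefficients c_n = e^{-|z|^2/2} z^n / sqrt(n!)
c = [math.exp(-abs(z) ** 2 / 2) * z ** n / math.sqrt(math.factorial(n))
     for n in range(N)]

# The norm should be ~1 once the truncation captures the tail.
norm = sum(abs(cn) ** 2 for cn in c)
assert abs(norm - 1.0) < 1e-9

# The n-th component of a|z> is sqrt(n+1) c_{n+1}, which must equal z c_n.
for n in range(N - 1):
    assert abs(math.sqrt(n + 1) * c[n + 1] - z * c[n]) < 1e-12
```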
### Dynamics We don’t know much about these coherent states. For example, does a coherent state at time zero evolve to a coherent state? \label{eqn:qmLecture4:860} \ket{z} \stackrel{?}{\rightarrow} \ket{z(t)} It turns out that these questions are best tackled in the Heisenberg picture, considering \label{eqn:qmLecture4:880} e^{-i \hat{H} t/\Hbar } \ket{z}. For example, what is the average of the position operator \label{eqn:qmLecture4:900} \bra{z} e^{i \hat{H} t/\Hbar } \hat{x} e^{-i \hat{H} t/\Hbar } \ket{z} = \sum_{n, n’ = 0}^\infty c_n^\conj c_{n’} e^{i \lr{E_n - E_{n’}} t/\Hbar} \sqrt{ \frac{\Hbar}{2 m \omega} } \bra{n} \lr{ a + a^\dagger} \ket{n’}. This is very messy to attempt. Instead, if we know how the operator evolves we can calculate \label{eqn:qmLecture4:920} \bra{z} \hat{x}_{\textrm{H}}(t) \ket{z}, that is \label{eqn:qmLecture4:940} \expectation{\hat{x}}(t) = \bra{z} \hat{x}_{\textrm{H}}(t) \ket{z}, and for momentum \label{eqn:qmLecture4:960} \expectation{\hat{p}}(t) = \bra{z} \hat{p}_{\textrm{H}}(t) \ket{z}. The question to ask is what are the expansions of \label{eqn:qmLecture4:1000} \hat{a}_{\textrm{H}}(t) = e^{i \hat{H} t/\Hbar} \hat{a} e^{-i \hat{H} t/\Hbar}. \label{eqn:qmLecture4:1020} \hat{a}^\dagger_{\textrm{H}}(t) = e^{i \hat{H} t/\Hbar} \hat{a}^\dagger e^{-i \hat{H} t/\Hbar}. Equivalently, how do these operators act on the basis states \label{eqn:qmLecture4:1040} \begin{aligned} \hat{a}_{\textrm{H}}(t) \ket{n} &= e^{i \hat{H} t/\Hbar} \hat{a} e^{-i \hat{H} t/\Hbar} \ket{n} \\ &= e^{i \hat{H} t/\Hbar} \hat{a} e^{-i t \omega (n + 1/2)} \ket{n} \\ &= e^{-i t \omega (n + 1/2)} e^{i \hat{H} t/\Hbar} \sqrt{n} \ket{n-1} \\ &= \sqrt{n} e^{-i t \omega (n + 1/2)} e^{i t \omega (n - 1/2)} \ket{n-1} \\ &= \sqrt{n} e^{-i \omega t} \ket{n-1} \\ &= e^{-i \omega t} \hat{a} \ket{n}.
\end{aligned} So we have found \label{eqn:qmLecture4:1060} \begin{aligned} \hat{a}_{\textrm{H}}(t) &= a e^{-i\omega t} \\ \hat{a}^\dagger_{\textrm{H}}(t) &= a^\dagger e^{i\omega t} \end{aligned} # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. ## Can anticommuting operators have a simultaneous eigenket? September 28, 2015 phy1520 ## Question: Can anticommuting operators have a simultaneous eigenket? ([1] pr. 1.16) Two Hermitian operators anticommute \label{eqn:anticommutingOperatorWithSimulaneousEigenket:20} \symmetric{A}{B} = A B + B A = 0. Is it possible to have a simultaneous eigenket of $$A$$ and $$B$$? Prove or illustrate your assertion. Suppose that such a simultaneous non-zero eigenket $$\ket{\alpha}$$ exists, then \label{eqn:anticommutingOperatorWithSimulaneousEigenket:40} A \ket{\alpha} = a \ket{\alpha}, and \label{eqn:anticommutingOperatorWithSimulaneousEigenket:60} B \ket{\alpha} = b \ket{\alpha}. This gives \label{eqn:anticommutingOperatorWithSimulaneousEigenket:80} \lr{ A B + B A } \ket{\alpha} = \lr{A b + B a} \ket{\alpha} = 2 a b \ket{\alpha}. Since the anticommutator is zero, we require $$2 a b = 0$$, so at least one of the operators must have a zero eigenvalue. Knowing that, we can construct an example of such operators. In matrix form, let \label{eqn:anticommutingOperatorWithSimulaneousEigenket:120} A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & a \\ \end{bmatrix} \label{eqn:anticommutingOperatorWithSimulaneousEigenket:140} B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & b \\ \end{bmatrix}. These are both Hermitian, and anticommute provided at least one of $$a, b$$ is zero. These have a common eigenket \label{eqn:anticommutingOperatorWithSimulaneousEigenket:160} \ket{\alpha} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. A zero eigenvalue of one of the anticommuting operators is a necessary, but perhaps not sufficient, condition for such a simultaneous eigenket to exist. # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics.
Pearson Higher Ed, 2014. ## Grade 11 physics handout: “The Big Five … in Physics”? September 26, 2015 math and physics play ## Motivation Check out fig. 1, a handout given to my daughter in her grade 11 physics class, titled “The Big Five”, covering some dynamics equations. fig. 1. The Big Five… in Physics I found this handout disorienting. Part of that disorientation is because of the weird African animal theme which I couldn’t see the rationale for. Aurora showed me that the second equation can be outlined with an elephant, apparently justifying the animal theme. The equations themselves are not in a form that I would have expected, and have a lot of redundancy built in. The assumptions required for these equations to be valid are also not stated. Those equations are \label{eqn:theBigFivePhysics:40} v_2 = v_1 + a \Delta t \label{eqn:theBigFivePhysics:60} \Delta d = \inv{2} \lr{ v_1 + v_2 } \Delta t \label{eqn:theBigFivePhysics:80} \Delta d = v_1 \Delta t + \inv{2} a \lr{\Delta t}^2 \label{eqn:theBigFivePhysics:100} \Delta d = v_2 \Delta t - \inv{2} a \lr{\Delta t}^2 \label{eqn:theBigFivePhysics:120} v_2^2 = v_1^2 + 2 a \Delta d. ## Reverse engineering “the big five”. ### Difference of velocity The first equation \ref{eqn:theBigFivePhysics:40} is just a discrete version of the definition of scalar acceleration \label{eqn:theBigFivePhysics:140} a = \frac{dv}{dt}. The approximation of that is \label{eqn:theBigFivePhysics:160} a = \frac{\Delta v}{\Delta t}, or \label{eqn:theBigFivePhysics:180} \Delta v = v_2 - v_1 = a \Delta t. ### Constant acceleration Next in the list, it’s clear that the equations for $$\Delta d$$ are really based on an assumption of constant acceleration. In fact, all four of the next equations are nothing more than variations of \label{eqn:theBigFivePhysics:200} v = a t. To see how, consider fig. 2. fig. 2. Displacement as area under the curve.
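The five handout equations can also be spot-checked numerically under the constant acceleration assumption. A sketch with arbitrary sample values (not taken from the handout):

```python
# Arbitrary constant-acceleration sample: v(t) = v1 + a*t.
a, v1, dt = 2.5, 3.0, 4.0

v2 = v1 + a * dt              # equation 1
dd = 0.5 * (v1 + v2) * dt     # equation 2 (average velocity times time)

# The remaining three equations must give the same displacement dd.
assert abs(dd - (v1 * dt + 0.5 * a * dt**2)) < 1e-9   # equation 3
assert abs(dd - (v2 * dt - 0.5 * a * dt**2)) < 1e-9   # equation 4
assert abs(v2**2 - (v1**2 + 2 * a * dd)) < 1e-9       # equation 5
```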
Should we wish to integrate, it’s the second simplest integral we could possibly do \label{eqn:theBigFivePhysics:220} \begin{aligned} \Delta x &= \int_{t_1}^{t_2} v(t) dt \\ &= \int_{t_1}^{t_2} a t dt \\ &= \inv{2} a \lr{ t_2^2 - t_1^2 } \\ &= \inv{2} a \Delta t \lr{ t_2 + t_1 }. \end{aligned} This looks a little different than what’s on the formula sheet, but since (for constant acceleration) we have \label{eqn:theBigFivePhysics:240} t = \frac{v}{a}, this can be written as \label{eqn:theBigFivePhysics:260} \begin{aligned} \Delta x &= \inv{2} a \Delta t \lr{ \frac{v_2}{a} + \frac{v_1}{a} } \\ &= \inv{2} \Delta t \lr{ v_1 + v_2 }, \end{aligned} as found on the formula sheet (except for them using $$\Delta d$$ for the difference in position). Each of the next equations follows from straight algebra \label{eqn:theBigFivePhysics:280} \begin{aligned} \Delta x - v_1 \Delta t &= \inv{2} \Delta t \lr{ v_1 + v_2 } - v_1 \Delta t \\ &= \inv{2} \Delta t \lr{ -v_1 + v_2 } \\ &= \inv{2} \lr{\Delta t}^2 \frac{\Delta v}{\Delta t} \\ &= \inv{2} a \lr{\Delta t}^2, \end{aligned} and \label{eqn:theBigFivePhysics:300} \begin{aligned} \Delta x - v_2 \Delta t &= \inv{2} \Delta t \lr{ v_1 + v_2 } - v_2 \Delta t \\ &= \inv{2} \Delta t \lr{ v_1 - v_2 } \\ &= -\inv{2} \lr{\Delta t}^2 \frac{\Delta v}{\Delta t} \\ &= -\inv{2} a \lr{\Delta t}^2, \end{aligned} and finally \label{eqn:theBigFivePhysics:320} \begin{aligned} \Delta x &= \inv{2} a \lr{ t_2^2 - t_1^2 } \\ &= \inv{2 a } \lr{ v_2^2 - v_1^2 }. \end{aligned} ### A better set of equations. If I had to write these “big five” equations, I’d be more inclined to write them as \label{eqn:theBigFivePhysics:360} a = \frac{\Delta v}{\Delta t} \label{eqn:theBigFivePhysics:380} v = a t = \frac{\Delta x}{\Delta t} \label{eqn:theBigFivePhysics:400} \Delta x = \int_{t_1}^{t_2} v dt = \inv{2} a \lr{ t_2^2 - t_1^2 } \label{eqn:theBigFivePhysics:420} t_1 = \frac{v_1}{a} \label{eqn:theBigFivePhysics:440} t_2 = \frac{v_2}{a}.
Anything more than that is just algebra. The last two could be omitted since they really follow from \ref{eqn:theBigFivePhysics:380}. For high school where calculus isn’t known, I’d swap out \ref{eqn:theBigFivePhysics:400} for \ref{eqn:theBigFivePhysics:60} which can be derived graphically by understanding that the distance is the area under the velocity curve. I’d also leave out all mentions of big African animals, which is just plain weird! ## Lagrangian for magnetic portion of Lorentz force September 26, 2015 phy1520 In [1] it is claimed in an Aharonov-Bohm discussion that a Lagrangian modification to include electromagnetism is \label{eqn:magneticLorentzForceLagrangian:20} \LL \rightarrow \LL + \frac{e}{c} \Bv \cdot \BA. That can’t be the full Lagrangian since there is no $$\phi$$ term, so what exactly do we get? If you have somehow, like I did, forgotten the exact form of the Euler-Lagrange equations (i.e. where do the dots go), then the derivation of those equations can come to your rescue. The starting point is the action \label{eqn:magneticLorentzForceLagrangian:40} S = \int \LL(x, \xdot, t) dt, where the end points of the integral are fixed, and we assume we have no variation at the end points. The variational calculation is \label{eqn:magneticLorentzForceLagrangian:60} \begin{aligned} \delta S &= \int \delta \LL(x, \xdot, t) dt \\ &= \int \lr{ \PD{x}{\LL} \delta x + \PD{\xdot}{\LL} \delta \xdot } dt \\ &= \int \lr{ \PD{x}{\LL} \delta x + \PD{\xdot}{\LL} \ddt{} \lr{ \delta x } } dt \\ &= \int \lr{ \PD{x}{\LL} - \ddt{}\lr{\PD{\xdot}{\LL}} } \delta x dt + \delta x \PD{\xdot}{\LL}. \end{aligned} The boundary term is killed after evaluation at the end points where the variation is zero. For the result to hold for all variations $$\delta x$$, we must have \label{eqn:magneticLorentzForceLagrangian:80} \boxed{ \PD{x}{\LL} = \ddt{}\lr{\PD{\xdot}{\LL}}. } Now let’s apply this to the Lagrangian at hand.
For the position derivative we have \label{eqn:magneticLorentzForceLagrangian:100} \PD{x_i}{\LL} = \frac{e}{c} v_j \PD{x_i}{A_j}. For the canonical momentum term, assuming $$\BA = \BA(\Bx)$$ we have \label{eqn:magneticLorentzForceLagrangian:120} \begin{aligned} \ddt{} \PD{\xdot_i}{\LL} &= \ddt{} \lr{ m \xdot_i + \frac{e}{c} A_i } \\ &= m \ddot{x}_i + \frac{e}{c} \ddt{A_i} \\ &= m \ddot{x}_i + \frac{e}{c} \PD{x_j}{A_i} \frac{dx_j}{dt}. \end{aligned} Assembling the results, we’ve got \label{eqn:magneticLorentzForceLagrangian:140} \begin{aligned} 0 &= \ddt{} \PD{\xdot_i}{\LL} - \PD{x_i}{\LL} \\ &= m \ddot{x}_i + \frac{e}{c} \PD{x_j}{A_i} \frac{dx_j}{dt} - \frac{e}{c} v_j \PD{x_i}{A_j}, \end{aligned} or \label{eqn:magneticLorentzForceLagrangian:160} \begin{aligned} m \ddot{x}_i &= \frac{e}{c} v_j \PD{x_i}{A_j} - \frac{e}{c} \PD{x_j}{A_i} v_j \\ &= \frac{e}{c} v_j \lr{ \PD{x_i}{A_j} - \PD{x_j}{A_i} } \\ &= \frac{e}{c} v_j B_k \epsilon_{i j k}. \end{aligned} In vector form that is \label{eqn:magneticLorentzForceLagrangian:180} m \ddot{\Bx} = \frac{e}{c} \Bv \cross \BB. So, we get the magnetic term of the Lorentz force. Also note that this shows the Lagrangian (and the end result) was not in SI units. The $$1/c$$ term would have to be dropped for SI. # References [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.
# What is the expectation of an empirical model in model based RL? Artificial Intelligence Asked by ijuneja on November 4, 2021 In the paper – "Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems", on page 1083, on the 6th line from the bottom, the authors define the expectation of the empirical model as $$\hat{\mathbb{E}}_{s,s',a}[V(s')] = \sum_{s' \in S} \hat{P}^{a}_{s, s'}V(s').$$ I didn't understand the significance of this quantity since it puts $$V(s')$$ inside an expectation while assuming the knowledge of $$V(s')$$ in the definition on the right. A clarification in this regard would be appreciated. EDIT: The paper defines $$\hat{P}^{a}_{s, s'}$$ as $$\hat{P}^{a}_{s, s'} = \frac{|(s, a, s', t)|}{|(s, a, t)|},$$ where $$|(s, a, t)|$$ is the number of times state $$s$$ was visited and action $$a$$ was taken, and $$|(s, a, s', t)|$$ is the number of times, among the $$|(s, a, t)|$$ times $$(s, a)$$ was visited, that the next state landed in was $$s'$$ during model learning. No explicit definition for $$V$$ is provided; however, $$V^{\pi}$$ is defined as the usual expected discounted return, using the same definition as Sutton and Barto or other sources. If I understand your question correctly, the significance of this is due to the fact that $$s'$$ is random. In the RHS of the equation it is assumed that $$V(\cdot)$$ is known for each state, but the quantity is measuring the expected value of the next state given the current state and action. Answered by harwiltz on November 4, 2021
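To make the empirical model concrete, here is a small sketch that builds $$\hat{P}^{a}_{s,s'}$$ from transition counts and evaluates $$\sum_{s'} \hat{P}^{a}_{s,s'} V(s')$$. The counts and value estimates are hypothetical, not taken from the paper:

```python
from collections import Counter

# Hypothetical observed transitions (s, a, s') gathered during model learning.
transitions = [(0, "left", 0), (0, "left", 1), (0, "left", 1), (0, "left", 1)]

sa_counts = Counter((s, a) for s, a, _ in transitions)   # |(s, a, t)|
sas_counts = Counter(transitions)                        # |(s, a, s', t)|

def p_hat(s, a, s_next):
    """Empirical transition probability P^a_{s, s'}."""
    return sas_counts[(s, a, s_next)] / sa_counts[(s, a)]

def empirical_expectation(s, a, V):
    """Sum over s' of P^a_{s, s'} V(s'); V is assumed known, as in the question."""
    next_states = {sp for (s0, a0, sp) in transitions if (s0, a0) == (s, a)}
    return sum(p_hat(s, a, sp) * V[sp] for sp in next_states)

V = {0: 2.0, 1: 6.0}  # hypothetical value estimates
print(empirical_expectation(0, "left", V))  # 0.25*2 + 0.75*6 = 5.0
```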
# Artin groups of type $D_n$ as mapping class groups? According to Allcock (Braid Pictures for Artin groups, https://arxiv.org/abs/math/9907194), the Artin group $$A(D_n)$$ of type $$D_n$$ may be realized as an index 2 subgroup of the orbifold fundamental group of $$\{x \in L^n | \forall i \neq j, x_i \neq x_j\}$$, where $$L$$ is the orbifold consisting of a disc with one cone point of order $$2$$. My question is the following: is there a natural representation of the Artin group $$A(D_n)$$ as some mapping class group of $$L$$, similar to the Birman theorem realizing the classical braid group as a mapping class group of a punctured disc? Or is there a natural action of the Artin group $$A(D_n)$$ on some complex of curves/arcs on the orbifold $$L$$? Perron-Vannier prove that the Artin group $$A(D_n)$$ "geometrically embeds" into the mapping class group of a surface, i.e., that there is a surface $$\Sigma$$ with boundary (and no punctures) and a faithful (albeit not surjective) representation $$A(D_n) \to \rm{Mod}(\Sigma)$$ mapping the standard generators to Dehn twists. The surface $$\Sigma$$ can be chosen as follows. Take a disc $$\Delta$$ and embedded arcs in $$\Delta$$ that intersect in the pattern of $$D_n$$ (edges in $$D_n$$ correspond to intersection points and vertices to arcs). Close up each arc with a band to form $$\Sigma$$. The Dehn twists along the closed arcs then generate a subgroup of $$\rm{Mod}(\Sigma)$$ isomorphic to $$A(D_n)$$. In fact, the standard generators of $$A(D_n)$$ map to the mentioned Dehn twists. Labruère later studied the same construction of $$\Sigma$$, but taking the intersection pattern of a cycle $$\widetilde A_{n-1}$$ rather than $$D_n$$.
The kernel of the resulting (still not surjective) representation $$A(\widetilde A_{n-1}) \to \rm{Mod}(\Sigma)$$ is generated by the so-called "cycle relation", and one can show that the quotient of $$A(\widetilde A_{n-1})$$ by that relation is again isomorphic to $$A(D_n)$$ (for example, using Baader-Lönne's results).
## Elementary Algebra $80=2\times2\times2\times2\times5$ 80 divided by the prime number 5 is 16. 16 can be broken down further into prime factors: $2\times2\times2\times2$. Therefore, $80=2\times2\times2\times2\times5$
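The factorization can be reproduced with a short trial-division sketch:

```python
def prime_factors(n):
    """Return the prime factorization of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(80))  # [2, 2, 2, 2, 5]
```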
## FILE FORMATS Creating add-ons for Microsoft Flight Simulator requires a great number of files, even for small additions to the sim. Most of the files will be created for you by the editors in the sim, or will have been created beforehand using external tools (like 3DS Max). However, some files are used time and again within an add-on package and can be edited (or even created) by hand as part of an add-on, so it's useful to know what these files are and how they should be structured. In this section, you can find basic information on some of these file formats. ### Editing Files When creating or editing any XML or CFG file, you must ensure that it is saved using a UTF-8 encoding (without BOM) if your package is destined to be used on the Xbox. Any other encoding may cause issues when published. Also keep in mind that XML files with an <?xml> header should also specify "utf-8" for the encoding attribute, for example: <?xml version="1.0" encoding="UTF-8"?> This does not encode the file itself, but does provide a hint to the text parser that the file is using this encoding. To encode the file correctly, you can use the encoding options within the text editor that you use, for example the Encoding menu in Notepad++. ### .CAB Files (Deprecated) Previously, in FSX, you could use CAB files to store files - primarily used for XML gauges. However, with Microsoft Flight Simulator this is not possible and has been deprecated. CAB files are not supported on Xbox and, as such, they cannot be used for any aircraft packages for that platform. If you have a legacy aircraft that you have updated for Microsoft Flight Simulator and it still uses CAB files, then you will need to resolve this before these aircraft will be available for the Xbox. The solution is simply to extract the files from the CAB file into a folder with the same name as the CAB file (minus the extension).
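As a quick way to verify that a file meets the UTF-8-without-BOM requirement described above, a small script (a sketch, not an official SDK tool) can inspect the first bytes of each file:

```python
import sys
from pathlib import Path

BOM = b"\xef\xbb\xbf"  # the UTF-8 byte-order mark that must NOT be present

def has_utf8_bom(path):
    """True if the file starts with a UTF-8 byte-order mark."""
    with open(path, "rb") as f:
        return f.read(3) == BOM

def is_valid_utf8(path):
    """True if the file decodes cleanly as UTF-8."""
    try:
        Path(path).read_bytes().decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

if __name__ == "__main__":
    for name in sys.argv[1:]:
        ok = is_valid_utf8(name) and not has_utf8_bom(name)
        print(f"{name}: {'OK' if ok else 'needs re-encoding'}")
```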
# Exchange rate peg and level irfs Dear dynare-users, In my model I've got the change of the nominal exchange rate DS_t as an endogenous variable. I define an exchange rate peg as a policy which sets DS_t=1. My question is: how can I retrieve the level of the exchange rate without getting a unit root? When I define exp(DS_t)=exp(S_t-S_t-1) I get the unit root problem… Thanks for the support I don’t understand your question. If an exchange rate peg is defined as a policy which sets DS_t=1, then the nominal exchange rate S_t will always be at its initial (undefined) value. There cannot be any movements. Okay, admittedly the question was ill-posed. Let me try again: My small open economy model works fine; I compare a money supply rule with an exchange rate peg. So far, the equation on foreign bonds was defined as //5 Foreign bonds exp(lb)*(1+phib*bf)= beta*exp(Rs+ds(+1)+lb(+1)-dps(+1)); with ds(+1) being the change in the nominal exchange rate. As a reader of the paper it is hard to understand what a positive deviation from steady state of ds means, i.e. does the nominal exchange rate appreciate or depreciate? To this end, I would like to report the IRF not of the change of the nominal exchange rate but of the level, S_level. Hence I define ds(+1)=s_level(+1)-s_level. Of course, when the policy sets a peg, then s_level is at its initial value. It would be nice to have that the steady state deviation of s_level in this case is zero, not NaN. Suppose I replaced ds in all the equations with s_level-s_level(-1); then my system works fine for a pegged regime and I get what I want, but when I use the money supply rule, then I get a unit root, and I am not quite sure where it comes from. I set the steady state of s_level=1; the model is in logs. Are you talking about simulations or just IRFs at first order? In many models it is common that the price level is indeterminate due to a unit root. The only thing that is determinate is the change in the price level, i.e.
inflation or in your case the change in the exchange rate. Due to the unit root, the unconditional variance does not exist and is not displayed. That being said, the IRFs at first order should still be valid and be displayed. The presence of a unit root is generally not a problem. The only issue is that the steady state for the variable with the unit root cannot be computed endogenously. Rather, there are infinitely many potential steady states. Thus, Dynare will take any initval or steady_state_model value you assign to s_level as the steady state. Hence, setting s_level=1 in initval and adding the definition should be sufficient to generate IRFs for ds. They should show a permanent shift in the level of the exchange rate. Note the different timing here compared to what you wrote. My version is correct, yours is not. However at higher order you will run into problems, because there simulations are used and s_level will move over time and not have a tendency to return to the initial level. Dear jpfeifer, thanks for your reply. After some digging in the Dynare forum I also found that the presence of a unit root per se is not a problem when computing first order approximations. Yet, I decided to back out the series for the nominal exchange rate in a separate m-file. The way I proceed, for completeness, is as follows: We have that \hat{\Delta S}_t=\log(\Delta S_t)-\log(\bar{\Delta S})=\log(\Delta S_t)-0=\log(S_t/S_{t-1})=\log(S_t)-\log(S_{t-1}).
The IRF is given by: \frac{\partial \hat{\Delta S}_t}{\partial \epsilon_t}=\frac{\partial[\log(S_t)-\log(S_{t-1})]}{\partial \epsilon_t}. With \frac{\partial \log(S_{t-1})}{\partial \epsilon_t}=0, we then have \frac{\partial \log(S_{t+j})}{\partial \epsilon_t}=\frac{\partial \hat{\Delta S}_{t+j}}{\partial \epsilon_t}+\frac{\partial \hat{\Delta S}_{t+j-1}}{\partial \epsilon_t}+…+\frac{\partial \hat{\Delta S}_{t}}{\partial \epsilon_t}, which is equal to the cumulative sum of the IRFs of \hat{\Delta S}. I hope this helps for future reference. Philipp
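The cumulative-sum construction described above can be sketched in a few lines. The IRF values for the exchange rate change below are hypothetical, not taken from the model:

```python
# IRF of the change ds_t (log deviation) at horizons 0..4 -- hypothetical numbers.
irf_ds = [0.010, 0.004, 0.002, 0.001, 0.000]

# The level IRF is the running sum of the change IRFs:
# d log(S_{t+j}) / d eps_t = sum over k <= j of the ds IRF at horizon k.
irf_level = []
total = 0.0
for v in irf_ds:
    total += v
    irf_level.append(total)

# The level shows a permanent shift (it does not return to zero).
print([round(x, 3) for x in irf_level])  # [0.01, 0.014, 0.016, 0.017, 0.017]
```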
# Second try is the charm 710 (48 q, 38 v) GMAT Forum Moderator Joined: 05 Feb 2008 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) 21 Jun 2009, 14:13 Hi all, First try at the GMAT was back in Dec 2007. Finished the online Kaplan program and the Kaplan CD set (GMAT/GRE/LSAT pack). I also used the PowerPrep DOS program from GMAC. Highest practice back then was a 620. Scored a 610 on the actual. No dice with the 2008 school year app. Signed up for the Manhattan GMAT online class in the Summer of 2008, finished in the fall. Work kicked my butt for a bit with travel, but I was able to buckle down and finish the MGMAT material from April-June 2009. Practice exams went from a start of 620ish to a finish of 680 (between GMAC prep software and MGMAT practice exams). Took the exam yesterday; actual unofficial results were a 710 (48 q, 38 v). I can say with some assurance the MGMAT class knocked the socks off of the Kaplan programs.
A friend of mine took the Kaplan classroom course and gave me all his materials, which I found identical to the online program and the self study. I asked him about the instructor, and his response was that the instructor simply went through the book chapter by chapter. I found the strategies and the material covered in MGMAT supplemented each other nicely, and I believe that helped push me over the 700 mark. Best of luck to the rest. Last edited by mohater on 21 Jun 2009, 18:46, edited 1 time in total. Founder Joined: 04 Dec 2002 Location: United States (WA) GMAT 1: 750 Q49 V42 GPA: 3.5 Re: Second try is the charm 710 (48 q, 38 v) 21 Jun 2009, 16:59 Expert's post Congratulations! That's a great improvement from 600. Also - thanks for your feedback on the Manhattan and Kaplan courses - I added your thread to the reviews here: gmat-prep-courses-classes-reviews-ratings-and-comparison-78451.html#p590321. If there is anything else that you think would be helpful to know for future prep class takers - would appreciate your thoughts. Good luck with essays!
GMAT Forum Moderator Re: Second try is the charm 710 (48 q, 38 v) 21 Jun 2009, 18:30 mxb908 wrote: Mohater - First, congrats on an awesome score! I feel I have been in a similar boat with regards to work/study and breaks in schedule. Can you pass on some tips on how to prepare for the verbal and quant? I have been preparing for a while now and have not been able to get consistently high points on Verbal or Math. Was wondering if you could share your study tips. Thanks. Hi mxb908, Thanks for the well wishes. To prep for verbal: When speaking, always try to be aware of what you are saying. You will often catch yourself saying things that are unidiomatic or using plural pronouns for singular nouns (among other things). Also, make a habit of reading a publication that you know is scrutinized from an editing standpoint. My major struggle on verbal was sentence correction (SC). Main points I used as a reference base: Figure out what the sentence is saying. Make sure singular/plural agree. Make sure pronoun antecedents are clear. Make sure you understand the order of events. Quant: read, read, read. You need to memorize certain things (area/volume/surface area equations), and for data sufficiency (DS) problems, you need to reduce the question quickly to eliminate the obvious incorrect answers. SC and DS have similar structure - in one minute or less, you can usually eliminate two/three obvious incorrect answers. This will help your guessing on the more difficult (700-800) problems.
What I did for SC: I used the official GMAT verbal guide (provided with my MGMAT materials), went through the problems in blocks of 10, and did as many blocks as I could in any given day. For the ones I got wrong, I marked them on the paper. I would also mark the ones I guessed on (using a different marking). After finishing all the SC problems, I would go through the ones I got wrong and the ones I guessed on. By waiting until I finished all of the problems, I would not recall the problem nor what I marked the first time around. I feel this helped me the most. Let me know if I can provide any other details. Last edited by mohater on 22 Jun 2009, 19:40, edited 1 time in total. Manager Joined: 08 Apr 2009 Re: Second try is the charm 710 (48 q, 38 v) 22 Jun 2009, 08:06 Congratulations on the 700+ score. Manager Joined: 16 Apr 2009 Schools: Ross Re: Second try is the charm 710 (48 q, 38 v) 22 Jun 2009, 09:10 Congrats! Quote: SC and DS have similar structure - in one minute or less, you can usually eliminate two/three obvious incorrect answers. This will help your guessing on the more difficult (700-800) problems. What I did for SC: I used the official GMAT verbal guide (provided with my MGMAT materials), went through the problems in blocks of 10, and did as many blocks as I could in any given day. For the ones I got wrong, I marked them on the paper. I would also mark the ones I guessed on (using a different marking). After finishing all the SC problems, I would go through the ones I got wrong and the ones I guessed on.
By waiting until I finished all of the problems, I would not recall the problem nor what I marked the first time around. I feel this helped me the most. I liked this tip. I'll implement it. _________________ Keep trying no matter how hard it seems, it will get easier. GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 59 Kudos [?]: 816 [0], given: 234 Re: Second try is the charm 710 (48 q, 38 v) [#permalink] ### Show Tags 23 Jun 2009, 06:50 I decided to post a full write-up based on reading others' posts here. My main struggle was sentence correction. When you reach the 600-700 and 700-800 sentence correction problems on the GMAT, and can narrow it down to two choices and get stuck, it's almost always an idiom problem. If you're well versed in proper English literature (something I've been out of contact with for some time), it should be fairly easy to pick the right one. I also had a bit of trouble on quant, mainly because I would solve for the unknowns of the problem and not actually answer the question asked (the test writers always have an unknown as one of the answer choices). Key points for those with similar issues: SLOW DOWN when reading the problem. On the exam, it's a good idea to allocate ~2 minutes per problem on both sections. Some problems will take a bit longer, and some a bit shorter, but DO NOT let yourself get stuck for too long. Missing easy questions and not answering questions at all hurts your score a lot more than missing hard questions. Practice setting up quant problems (data sufficiency or otherwise) and always make sure you're answering the question asked. Honestly, the best advice I can offer is make sure you give prepping for the exam its due time. There is no secret recipe here.
Focus on the weaknesses. My experience on the GMAT: Try 1 (Dec 2007): Practice exams were maxing at 640, and based on what I read on forums and based on what people told me, you usually score higher than your practice. I slept well that night, woke up, and made my way to the testing center (nearest one was 50 miles away). Finished both sections and I felt "well" overall. Clicked to receive my score, 610. Not terrible, but not good enough for top tier programs. Never mind that, I applied anyway, but as expected no dice. After speaking to people on list serves I'm a member of on the topic and reading online reviews about the various prep programs, I decided to sign up for the Manhattan GMAT (MGMAT) online course. It's less expensive than the Kaplan or Princeton online programs and one particular person I was speaking to started in the same boat as me (560 on first practice) and broke 700 after the MGMAT program. The class itself is like any other class room (albeit online). The instructors engage students to chime in and answer questions, and also answer questions during and after class. The out of class work (homework if you will) is VERY demanding. You will need to allocate ~10 hours/week outside of the class to finish all the material. Given my work schedule at that time demanded quite a bit of travel, the out of classwork was put on hold for ~5 months. My online account was set to expire, but I emailed the MGMAT group to see if my access could be extended for a bit, as I could not give the program its due time when I was enrolled and there is a lot of information online (problem sets, labs, practice exams, etc.). MGMAT was kind enough to extend my access for six months (MGMAT offers that for a fee, but it was given to me for free). Finally, after my last trip for work (March 09, but I'll probably start traveling again soon), I buckled down and studied. 
From March to May I finished the online content while taking the last couple of practice exams (MGMAT gives you six practice exams, and you can reset them by request). The problem is that problems might repeat after the reset (MGMAT assures no question repeats between the six exams). Then from May to June I buckled down on where I was still struggling the most. I then took the two GMAC practice exams. Practice exams began in the 550-640 range (inconsistent scores); when I finished the MGMAT program I was in the 670-680 range (consistent scores). Try 2 (June 2009): Felt good the day before the exam. Went over maybe 15-20 problems that day (problems I had already done and wanted to ensure I understood how to approach them/why I got them wrong). Other than that, did no studying and just relaxed. Tried to sleep that night, and did not sleep at all. Ended up rolling out of bed at 4:30. Did the normal routine an hour later (morning prayer, breakfast, etc.) and then made my way to the testing center. Felt terrible as the food in my stomach was not sitting too well. Exam started at 8am. Went through both essay sections without too much trouble. Quant: Got stuck on some easy problems I REFUSED to guess on. Thankfully, I made up time on other problems and finished with more than 30 seconds to spare. Got stuck on a couple sentence correction problems on the verbal section, but still finished with more than 30 seconds to spare. Note: I had a much WORSE feeling the second time about my score. Thankfully, on try 2, I broke the 700 mark. I think I was more stressed this time as I was answering harder questions overall (due to the adaptive nature of the exam). Not sleeping probably didn't help either. I scored both the highest I've ever scored on the individual parts and the highest overall score (between both the real and the practice exams). _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread.
Master thread will be updated accordingly. | GC Member Write Ups GMAT Club Premium Membership - big benefits and savings GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 59 Kudos [?]: 816 [1] , given: 234 What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 23 Jun 2009, 19:51 1 KUDOS I wrote a review comparing the two prep programs I tried (it's posted in that section). Here I'll post how I got there (excluding the essay sections). Quant: I've always been pretty strong in quant-type problems/classes. My biggest issue on standardized tests is the question doesn't always ask you to solve for the unknown. It wants you to take the unknown and extrapolate out some other value. Knowing people will short read the problem, the answer to the unknown is often one of the answers (this is only true for concrete answers, not variables in solutions). To resolve this - SLOW DOWN. Try to budget an average of 2 mins per question (using the time provided here). Some questions you will use more time and some you will use less. The goal is to make sure you don't get *stuck* for too long and MAKE sure you answer what the question is asking. You can solve all the math problems with Algebra, if you want to spend a lot of time on the problems. Rehashing number properties (probability, exponents, etc.), geometry, and simple trig will help you get through tough problems in a reasonable amount of time. I made sure I memorized the following: General rule for other polygons Rules for right triangles (especially the special ones) Squares through 15 (and maybe a few other things I'm not recalling right now. I'll edit this post if anything changes). Data Sufficiency: The problem can usually be reduced to something simpler. i.e.
If the question is asking "if X and Y are both integers, is x-y-5 > x+y+3?", you might look at the problem and say "ok, I have two variables, so I need two equations." But you can reduce the inequality as follows: Subtract x from both sides -------> (-x)+x-y-5 > (-x)+x+y+3 = -y-5 > y+3 Add y to both sides -------> (+y)-y-5 > (+y)+y+3 = -5 > 2y+3 Add 5 to both sides -------> (+5)-5 > (+5)+2y+3 = 0 > 2y+8 You now have "if X and Y are both integers, is 0 > 2y+8?" X is irrelevant now. All you need to know is y and you can solve the inequality. Reducing the equation lets you know "what do I really need to solve this?" Also, data sufficiency problems can always be set up in this fashion: AD | BCE. If the answer is not A, it cannot be D (or if you start with B, and it is not B, it cannot be D). Then, go through the elimination process to eliminate the remaining choices among ACE or BCE. This will improve your chances of guessing on the more difficult questions you get stuck on. Verbal: I was pretty strong on reading comprehension and critical reasoning to begin with. One of my captain obvious problems on verbal was I could not be captain obvious. If I found a problem where the solution was essentially verbatim what was in the passage/argument, I would rule it out as being "too obvious". Often, I was wrong on those. Please note: the solution MUST be verbatim for this to be the case. Some solutions change slight wording (sometimes/always/never/etc.) and are NOT a verbatim rehash of what was in the passage or argument. My main weakness on verbal was sentence correction. I am Ralph Wiggum from the Simpsons (cartoon here in the US: "Me fail English, that's unpossible"). The following strategy worked for me: - Ensure pronouns have clear antecedents (must BE very clear, and some pronouns, like "which", have very specific rules). - Ensure the numbers agree (i.e.
singular subject, singular verb) - Make sure the sentence makes sense (order of operation, things listed, present perfect, past participle, etc.) Also, a similar strategy exists between both Data Sufficiency and Sentence Correction: You can usually knock off 2-3 wrong answers very quickly (within one minute). This, again, will help your chances with the high level (700-800) problems when you need to guess. For critical reasoning: I would list out the options (A,B,C,D,E) and use a mark to note each one's relationship to the statement/argument (i.e. "a +" if it strengthens the argument, or "a -" if it weakens the argument). Again, on the very difficult problems, you could quickly eliminate two or three options and increase your chances on the upper level problems. Hope this helps, please post any questions. _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. | GC Member Write Ups GMAT Club Premium Membership - big benefits and savings Manager Joined: 28 May 2009 Posts: 155 Location: United States Concentration: Strategy, General Management GMAT Date: 03-22-2013 GPA: 3.57 WE: Information Technology (Consulting) Followers: 8 Kudos [?]: 216 [0], given: 91 Re: What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 24 Jun 2009, 12:17 Thanks for sharing your experience Muhamad, and it looks like you have some experience with standardized tests. The problem that I have, and my guess is that most students have, is timing and performing under pressure. I can probably figure out the solution if I have a lot of time and no pressure, but again that's not the case. Anyway, can we know which material you used and if possible can you rate them?
_________________ Founder Affiliations: AS - Gold, HH-Diamond Joined: 04 Dec 2002 Posts: 14652 Location: United States (WA) GMAT 1: 750 Q49 V42 GPA: 3.5 Followers: 3838 Kudos [?]: 24139 [0], given: 4615 Re: What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 24 Jun 2009, 18:03 Congratulations! Thank you for the debrief. Where to next? What programs are you planning to apply to? _________________ Founder of GMAT Club US News Rankings progression - last 10 years in a snapshot - New! Just starting out with GMAT? Start here... Need GMAT Book Recommendations? Best GMAT Books Co-author of the GMAT Club tests GMAT Club Premium Membership - big benefits and savings GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 59 Kudos [?]: 816 [0], given: 234 Re: What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 24 Jun 2009, 20:00 megafan wrote: Thanks for sharing your experience Muhamad, and it looks like you have some experience with standardized tests. The problem that I have, and my guess is that most students have, is timing and performing under pressure. I can probably figure out the solution if I have a lot of time and no pressure, but again that's not the case. Anyway, can we know which material you used and if possible can you rate them? Hi megafan, On GMAT try #1, I used the Kaplan online program, the Kaplan self-study book, the Kaplan CD set (GRE/GMAT/LSAT pack) and the older DOS-based GMAC Powerprep GMAT application (predecessor of GMAT Prep). My scores were all over the place, ranging from 550 to 640 (no consistency). After consulting some friends, GMATclub and a couple listservs I'm on, I decided to sign up for the MGMAT program.
I also used the GMAC's GMAT Prep application. The real value in MGMAT is in the books. The classroom (whether in person or online) is useful, but if you can focus, the books are all you really need (along with the Official Guide and Official Quant/Verbal books). If you need help with JUST the exam itself and getting through it, Kaplan should be sufficient (not the boat either of us is/was in). For this reason, I give the Kaplan program 6/10. I also found some of the Kaplan strategies counterproductive (i.e. 3/2 split, go with the 3, spend more up front to get to hard problems sooner, use your ear on sentence correction, etc.). Also, the Kaplan self-study book, the CD pack, the online course and the in-person course are all EXACTLY the same. I confirmed this when a friend who took the in-person class gave me all his materials when he finished. His "classroom student edition" book was virtually the same as my $20 book from Barnes and Noble. He also said the instructor simply went through the book chapter by chapter, never diverging from the book/canned material. That being said, another friend took the Kaplan classroom version and scored a 750. He was already a high performer as it was (4.0 in school, all around "smart" guy), so the Kaplan class just solidified his review. I found the strategies for MGMAT much better (i.e. try to budget ~2 mins per problem on both sections, SC split/re-split, DS AD/BCE listing, CR listing options with how they address the argument, etc.). I also really like how MGMAT focuses on the actual material, as well as the strategies. For this I give the MGMAT books an overall score of 9/10. When solving 700-800 questions, I could *usually* get it down to two and need to pick one or the other. For these, fully solving the problem was rarely an option (unless I found some shortcut for the math or some obtuse easy way to knock out the other option on verbal).
To beat the nerves just make sure to fully read the problem, and keep track of time (but not to a point where you're looking at the clock every 10 seconds). MGMAT focuses on timing throughout the entire program (both in the book and in the classroom). If you can dedicate to the time constraints, you will finish the sections. My nerves were a wreck on the GMAT try #2 (as I posted in that section), but I managed to hold it together, read each problem and focus on solving what the problem asked for. Let me know if I can provide anything else. _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. | GC Member Write Ups GMAT Club Premium Membership - big benefits and savings Last edited by mohater on 24 Jun 2009, 20:08, edited 2 times in total. GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 59 Kudos [?]: 816 [0], given: 234 Re: What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 24 Jun 2009, 20:02 bb wrote: Congratulations! Thank you for the debrief. Where to next? What programs are you planing to apply to? I'm hopefully sticking to top 10 programs for both MBA and PhD. With this list, MSU is the only exception (not top 10), but I did my undergrad there and really like the department. Prelim list: MBA: U Michigan Wharton HBS Chicago PhD: UT Austin U Michigan U Chicago Wharton U Washington MSU _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. 
| GC Member Write Ups GMAT Club Premium Membership - big benefits and savings Manager Joined: 28 May 2009 Posts: 155 Location: United States Concentration: Strategy, General Management GMAT Date: 03-22-2013 GPA: 3.57 WE: Information Technology (Consulting) Followers: 8 Kudos [?]: 216 [0], given: 91 Re: What worked for me (550 on first practice to 710 exam #2) [#permalink] ### Show Tags 24 Jun 2009, 22:01 Thanks for the info Muhamad, I completely agree with you regarding MGMAT and Kaplan. I purchased the entire MGMAT guides which came w/ a timer and other exam related stuff. I just finished the first guide and it turned out to be really helpful . I took GRE classroom coaching ($1,250) with Kaplan before and I must confess that they are really bad at classroom teaching, so I decided not to go with kaplan again. But my prep for GRE does make the quant and RC easy, but SC and CR are new to me and I guess the MGMAT SC guide will help with that. I also purchased Powerscore's CR Bible. Well that's about my experience with Kaplan and Mgmat. Again, thanks for taking the time to put a debrief and Good Luck on the application! _________________ Intern Joined: 16 Nov 2008 Posts: 34 Followers: 0 Kudos [?]: 3 [0], given: 9 Re: Second try is the charm 710 (48 q, 38 v) [#permalink] ### Show Tags 26 Jun 2009, 01:12 Awesome debrief! Congratulations on a great score! Your effort is really admired. All the best in your journey to B-School... _________________ Life is a Test Retired Moderator Status: The last round Joined: 18 Jun 2009 Posts: 1309 Concentration: Strategy, General Management GMAT 1: 680 Q48 V34 Followers: 79 Kudos [?]: 1042 [0], given: 157 Re: Second try is the charm 710 (48 q, 38 v) [#permalink] ### Show Tags 26 Jun 2009, 01:34 Congrats Boss!!! The feeling to be a part of 700 club will be great, Is it??? What area, in your opinion, need more attention?? Q or V??? 
_________________ GMAT Forum Moderator Status: Accepting donations for the mohater MBA debt repayment fund Joined: 05 Feb 2008 Posts: 1884 Location: United States Concentration: Operations, Finance Schools: Ross '14 (M) GMAT 1: 610 Q0 V0 GMAT 2: 710 Q48 V38 GPA: 3.54 WE: Accounting (Manufacturing) Followers: 59 Kudos [?]: 816 [2] , given: 234 Re: Second try is the charm 710 (48 q, 38 v) [#permalink] ### Show Tags 26 Jun 2009, 02:46 2 KUDOS Hussain15 wrote: Congrats Boss!!! The feeling to be a part of 700 club will be great, Is it??? What area, in your opinion, need more attention?? Q or V??? Again, this is from a personal standpoint: Hands down verbal. I scored in the 80th percentile in both individual sections, but had I got a few more points on the verbal section, I might have hit the 750 range. I am teh sux0rz at sentence correction. One of my bigger gripes with Kaplan was the "use your ear to solve the problem" strategy. If you consistently violate the rules of proper written English (speaking, emails, chatting online, etc.), your ear will provide you with more wrong answers than right answers. My reading comprehension and critical reasoning have always been much better than my sentence correction. _________________ Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. | GC Member Write Ups GMAT Club Premium Membership - big benefits and savings Manager Joined: 15 May 2010 Posts: 139 Followers: 3 Kudos [?]: 25 [0], given: 40 Re: Second try is the charm 710 (48 q, 38 v) [#permalink] ### Show Tags 13 Nov 2010, 12:45 Not entirely sure how you only got one kudos for this debrief! +1 I just had a disappointing 610 and also maxed out at 640 in prep tests, so this debrief really helped me out! Congrats and hope all went well with the apps etc. _________________ Rule #76: No excuses. Play like a champion!
# How can I change the color of an object at runtime? I wish to have the shader effect as in the game 'The Stack' by Ketchapp. As you can see, the color of the objects as well as the skybox keeps changing. I'm a complete C# noob and trying my best to find shaders to replicate the effect. But I can't find anything. How can I change the color of an object at runtime? • We do not allow questions which ask for off-site resources, but we do answer how to solve specific problems. I rewrote your question to fit this website. – Philipp Aug 3 '16 at 8:12 • Thank you so much for going the extra mile rather than just answering the question, Philipp. – Bhoopalan Thaati Aug 4 '16 at 3:32 You can change the color of an object with a very simple C# script on the object. First the quick and dirty solution to make an object red: GetComponent<Renderer>().material.color = Color.red; This changes the first color property of the first material of the object to RGB #ff0000. When you would like to set a specific color property of a specific material, check out Renderer.materials and Material.SetColor. Why is this solution dirty? Because what it actually does internally is to create a complete copy of the material, then change the color value on it and then assign the new material to the object. This is not just slow, it also means that the object now has its own material and no longer shares its material with all other objects which use the same material asset. A far more elegant solution is to use a material property block, which overrides values for a single renderer without copying the material (the property name, "_Color" here, must match the one used by the shader): MaterialPropertyBlock props = new MaterialPropertyBlock(); props.SetColor("_Color", Color.red); GetComponent<Renderer>().SetPropertyBlock(props);
# $\limsup$ and $\liminf$ of a sequence Consider a sequence $$a_{n}$$ with $$a_{n}=(-1)^{n} (\frac{1}{2}-\frac{1}{n})$$. Let $$b_{n}=\sum_{k=1}^{n} a_{k}$$ for all $$n\in\mathbb{N}$$. Then find $$\limsup\limits_{n\to\infty} b_{n}\ \ \text{and}\ \ \liminf\limits_{n\to\infty} b_{n}$$ • Compute $b_1,b_2,b_3,b_4$ and see a pattern. Look at the even and odd indices to make things clearer. – Teresa Lisbon Nov 12 '19 at 8:54 • Not getting any pattern....b_{1}=1/2, b_{2}=1/2, b_{3}=1/3,.... – Sachin Nov 12 '19 at 8:57 • Note that $a_n$ is very close to $\frac 12$ when $n$ is large and even, and close to $-\frac 12$ when it is large and odd. Now, think about the sequences $b_{2n}$ and $b_{2n+1}$, can you see why they may be convergent? – Teresa Lisbon Nov 12 '19 at 9:00 Hint: since $a_k = \frac{(-1)^{k+1}}{k} + \frac{(-1)^{k}}{2}$, the partial sums $b_n$ split into the partial sums of the two series $$\tag 1 \displaystyle{\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}}$$ $$\tag 2 \displaystyle{\sum_{k=1}^{\infty} \frac{(-1)^{k}}{2}}$$ The second series doesn't converge, but you can still compute the $$\text{lim sup}$$ and $$\text{lim inf}$$
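The two limits suggested by the hints can also be seen numerically. The Python sketch below (added for illustration, not part of the original thread) computes the partial sums and shows the even-indexed ones approaching $\ln 2$ and the odd-indexed ones approaching $\ln 2 - \frac12$, consistent with $\limsup b_n = \ln 2$ and $\liminf b_n = \ln 2 - \frac12$.

```python
import math

def a(k):
    # a_k = (-1)^k * (1/2 - 1/k)
    return (-1) ** k * (0.5 - 1 / k)

# Partial sums b_1 .. b_N
N = 200000
b, s = [], 0.0
for k in range(1, N + 1):
    s += a(k)
    b.append(s)

b_even = b[1::2]  # b_2, b_4, ...  -> should approach ln 2
b_odd = b[0::2]   # b_1, b_3, ...  -> should approach ln 2 - 1/2

print(b_even[-1], math.log(2))
print(b_odd[-1], math.log(2) - 0.5)
```

Pairing consecutive terms explains the even limit: $a_{2m-1} + a_{2m} = \frac{1}{2m-1} - \frac{1}{2m}$, whose sum over $m$ is the alternating harmonic series $\ln 2$; each odd partial sum then trails the even ones by roughly $\frac12$.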
# Tag Info ## Hot answers tagged librarylink 67 I wrote a package to automatically generate all the boilerplate needed for LibraryLink and for managed library expressions based on a template that describes a class interface: LTemplate Here's how it works Write a template (a Mathematica) expression that describes a C++ class interface This template is used to generate LibraryLink-compatible C functions ... 36 Here are three very simple examples to show how to call a Fortran subroutine using LibraryLink. First the subroutine is compiled into object file. Then a wrapper is used to call the Fortran subroutine and compiled into dynamic library. At the end, the library is loaded into Mathematica and run. In the examples Mathematica Version 8 is used. FIRST EXAMPLE ... 34 Let me give you a very basic example, how you can employ an asynchronously running LibraryLink function for this specific task. I will not do any real packet listen, but only explain the general setup. Usually, a LibraryFunction in Mathematica is written in C and wrapped with a very small boiler plate code that is needed to attach the function directly to ... 33 This is a community wiki answer. Feel free to improve it. Introduction RawArray is an atomic array type that can hold data in any of the following formats: "Integer8", "UnsignedInteger8", "Integer16", "UnsignedInteger16", "Integer32", "UnsignedInteger32", "Integer64", "UnsignedInteger64", "Real32", "Real64", "Complex64", "Complex128" (Some aliases can ... 25 Taking user5601's suggestion to do a little demo, I quickly whipped this up as an example of ProcessLink being used to do non-trivial communication between Mathematica and an external program, but with much less ceremony than using ProcessLink or MathLink. Let's take this little Go program: package main import "net/http" import "bufio" import "os" import "... 21 Yes, I did something like that and it runs very very nicely. 
What I implemented is a dynamic Newton fractal visualizer where you can manipulate the number and position of the complex roots, the colours and the gamma correction settings from the Mathematica side. These values are sent to a parallel C++ implementation which calculates the fractal into a ... 21 Note that ListContourPlot3D takes the coordinates to be the position indices by default. If you want to keep the coordinates used in generating the data, then you have to include them. data = Flatten[ Table[{x, y, z, x^2 + y^2 + z^2 + RandomReal[0.1]}, {x, -2, 2, 0.2}, {y, -2, 2, 0.2}, {z, -2, 2, 0.2}], 2]; plot = ListContourPlot3D[data, Contours -> ... 20 First of all, you must be relatively comfortable with the C language. That is absolutely a prerequisite. If you are not comfortable with C, brush up your C skills first. Next, look at concrete examples while reading the LibraryLink user guide. The examples are described in the last section. Start with the simplest ones. The user guide is meant more as ... 18 Here is a more general approach. It is based on the 2D method from here. It assumes the polyhedron is not self-intersecting but imposes no requirement of convexity or even connectedness, other than that it be closed and bounded. Strictly speaking, I think this will work for an unbounded polyhedron provided it contains no vertical ray. For ease of exposition ...
Even for cases where that overhead is minimal or non-existent automatic code generation will always produce ... 16 In terms of loading C++ functions, I would strongly suggest LibraryLink. It is a great tool except that it requires you to write sometimes intimidating C code. To make LibraryLink easier to use, I have developed a package called wll-interface, available in this github repository. It is a header-only library written in C++, doing only one thing—... 16 At the end of this post you'll find the code for a small benchmark to compare LibraryLink with standard passing vs LibraryLink with MathLink based passing on two counts: function call overhead; use a function that does nothing, has no arguments, returns nothing (llNone and mlNone for LibraryLink and MathLink, respectively) passing real arrays; use a ... 16 Update: The very likely reason for the garbled error messages is that you have a Chinese version of Visual Studio printing errors in Chinese, and there is a mismatch in the character encoding of these messages and how Mathematica tried to interpret them. CreateLibrary has two very useful options: "ShellCommandFunction" and "ShellOutputFunction". Set them ... 14 My advice is to not rely on the code samples in the documentation. It has proven to be unreliable. For instance In the documentation to MTensor_getComplexData the example doesn't even use the function. The documentation to MTensor_free can barely be called documentation since it does not explain about memory allocation etc The documentation to AbortQ states ... 14 There is a package called LTemplate that automates writing some of the boilerplate code for LibraryLink: How to simplify writing LibraryLink code? I consider this less effort than writing standard LibraryLink code. In this sense it is a fitting answer for this question. However, I do recommend familiarizing yourself with the standard way of using ... 14 I can partially answer to my own questions. 
Amazingly, but it is easier to use the internal MKL. Let us consider my question about multiplication a band matrix by a dense matrix. The corresponding function is mkl_zdiamm. I wrote the following code (diamm.c) #include <stdio.h> #include <stdlib.h> #include <WolframLibrary.h> #include <... 14 Simple solution There is an easier solution than the one I gave almost 2 years ago. In principle, you wrap your library function inside another CompiledFunction that is listable. Let the code speak: fun = LibraryFunctionLoad["demo", "demo_I_I", {Integer}, Integer]; With[{fc = fun}, funListable = Compile[{{i, _Integer, 0}}, fc[i], RuntimeAttributes -&... 14 This cannot be answered very well without knowing your C++ library much better. As you said, you have a choice between MathLink and LibraryLink. Generally, I recommend LibraryLink because: It runs in the same process, and data transfer is much faster than with MathLink It provides features that MathLink does not have, such as direct manipulation of packed ... 13 Let me describe a way which works through all systems and simplifies the distribution of library code within a package a lot. First I want to point out that there are two major scenarios here: You are currently developing a package containing LibraryLink functions. When you are actively working on such a package, it is most likely not installed in your \$... 12 Here is a fast method that will "often" work. Roughly, it requires that the convex polygon have no sharp angles between faces. Preprocessing goes as follows. Create triangles from the polygons. So a 5-gon with vertices {a,b,c,d,e} would become the set of triangles {{a,b,c},{a,c,d},{a,d,e}}. For each vertex we average it's star (set of points connected by ... 12 I'm not sure whether I will get everything right here, but to my knowledge the key-point is indeed MTensor_disown. When you call loadFun you basically move the write-priviliges for the array to your library functions. 
This means, changing values inside the library will be transparent on the Mathematica side. Let's load the library functions: loadFun = ... 12 You can look here for more examples and details: http://community.wolfram.com/groups/-/m/t/189092 http://community.wolfram.com/groups/-/m/t/189735 In short, do the following (call the file DoubleIt.c): #include "WolframLibrary.h" DLLEXPORT int DoubleIt(WolframLibraryData libData, mint Argc, MArgument *Args, MArgument Res) { mint x; ... 11 Mathematicas invocation of the compiler doesn't know about where to find the Fortran library. With a little help, however, we can point the way. Mind you this was done on a Mac but the Linux variant of Unix will behave similar. Needs["CCompilerDriver"]; CreateLibrary[{"MMA.cc", "fadd.o"}, "myadd", "Debug" -> True, "TargetDirectory" -> ".", ... 10 Preface I will give a complete solution that shows how a dynamic file watcher can be implemented in Mathematica. The file watcher will track the size of the file and when it changes automatically reload the contents. It will work as an asynchronous library function that does not block the kernel from other evaluations. The example here will additionally ... 9 Does the library call back functions are slower then the macro since they need to "call back" to Mathematica? I'm pretty sure this is not the case. I guess that the difference is really that the WolframLibrary.h is a public interface which hides all implementation, while the WolframCompileLibrary.h is used by the CCodeGenerator and gives you at ... 9 You have not defined any message text in Mathematica. The text you supply in the C code is the message tag, e.g libData->Message("myerror"); Then you need to define the actual message content in Mathematica: LibraryFunction::myerror = "Here's my message" The relevant documentation page is here. 
9 When I need to compile it, I use the following method: copy the full code into a Mathematica notebook with src = all_code_of_C_file, and compile it with CreateLibrary[src, "lib_link"] This is not how CreateLibrary is meant to be used. You would only type the C code in a Mathematica string if it is so short and so simple that you can't be bothered ...
Week4TeamBWProblems - Problem 1

Problem 1: Two boats, the Prada (Italy) and the Oracle (USA), are competing. The sample times in minutes for the Prada were: 12.9, 12.5, 11.0, 13.3, 11.2, 11.4, 11.6, 12.3, 14.2, 11.3. The sample times in minutes for the Oracle were: 14.1, 14.1, 14.2, 17.4, 15.8, 16.7, 16.1, 13.3, 13.4, 13.6, 10.8, 19.0. At the .05 significance level, can you conclude that there is a difference in their mean times?

Data:
Prada (Italy): n1 = 10, x1-bar = 12.17, s1 = 1.0562512327
Oracle (USA): n2 = 12, x2-bar = 14.875, s2 = 2.2078887488
(The Oracle mean of 14.875 follows from the listed times; s2 is the sample standard deviation about that mean.)

(1) Formulate the hypotheses: for data analysis, the appropriate test is the two-sample t test. Explain these results to a person who knows about the t test for a single sample.
(2) Decide the test statistic and the level of significance.
(3) State the decision rule.
(4) Calculate the value of the test statistic.
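The summary statistics and the step (4) test statistic can be reproduced with a short script. This is a minimal sketch; it uses Welch's unequal-variances form of the two-sample t statistic, which is an assumption here, since the worksheet's variance assumption is cut off in the preview.

```python
from statistics import mean, stdev
from math import sqrt

# Sample times in minutes, read column-wise from the worksheet
prada = [12.9, 12.5, 11.0, 13.3, 11.2, 11.4, 11.6, 12.3, 14.2, 11.3]
oracle = [14.1, 14.1, 14.2, 17.4, 15.8, 16.7, 16.1, 13.3, 13.4, 13.6, 10.8, 19.0]

def welch_t(x, y):
    """Two-sample t statistic assuming unequal variances (Welch)."""
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    return (mean(x) - mean(y)) / se

print(mean(prada), stdev(prada))    # ~12.17, ~1.0563 (matches the sheet)
print(mean(oracle), stdev(oracle))  # ~14.875, ~2.2079
print(welch_t(prada, oracle))       # ~-3.76
```

Since |t| ≈ 3.76 comfortably exceeds the two-tailed critical value near 2.1 for roughly 16 degrees of freedom, the null hypothesis of equal mean times is rejected at the .05 level.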
Found 547 results 2018 F. Sottile, TDDFT, Linear Response and the DP code. EUSPEC Training School on Spectroscopy Codes, Sofia (Bulgaria), 2018. 2017 L. Prussel, Ab-initio description of optical nonlinear properties of semiconductors in the presence of an electrostatic field, Ecole Polytechnique, Palaiseau, 2017. F. Sottile, Bethe-Salpeter equation approach in solids. CECAM Theoretical Chemistry for Extended Systems, Toulouse (France), 2017. , Collective charge excitations of the two-dimensional electride $\mathrm{Ca_2N}$, Phys. Rev. B, vol. 96. 2017. , Direct observation of the band structure in bulk hexagonal boron nitride, Phys. Rev. B, vol. 95. American Physical Society, p. 085410, 2017. , Excitons in van der Waals materials: From monolayer to bulk hexagonal boron nitride, Phys. Rev. B, vol. 95. American Physical Society, p. 035125, 2017. F. Sottile, Introduction to Green functions methods for valence spectroscopies. Workshop on Common problems and solutions in core and valence theoretical spectroscopies, Paris (France), 2017. , Low-energy electronic excitations and band-gap renormalization in CuO, Phys. Rev. B, vol. 95. American Physical Society, p. 195142, 2017. , Model dielectric function for 2D semiconductors including substrate screening, Scientific Reports, vol. 7. p. 39844, 2017. , Second Harmonic Generation in Silicon Based Heterostructures: The Role of Strain and Symmetry, Nanoscience and Nanotechnology Letters, vol. 9. 2017. , Self-consistent Dyson equation and self-energy functionals: An analysis and illustration on the example of the Hubbard atom, Phys. Rev. B, vol. 96. American Physical Society, p. 045124, 2017. F. Sottile, Spectroscopy beyond GW. EUSpec meeting: Ab-initio correlated methods in spectroscopy, 2017. F. Sottile, Time Dependent Density Functional Theory. NSF/CECAM school on Computational Material Science, Lausanne (Switzerland), 2017. 2016 F. Sottile, Ab initio approaches to spectroscopies.
SOLEIL, Theory Days, Gif-sur-Yvette (France), 2016. , Ab initio description of second-harmonic generation from crystal surfaces, Phys. Rev. B, vol. 94. 2016. , Ab initio electronic stopping power of protons in bulk materials, Phys. Rev. B, vol. 93. American Physical Society, p. 035128, 2016. , Exciton Band Structure in Two-Dimensional Materials, Phys. Rev. Lett., vol. 116. American Physical Society, p. 066803, 2016. F. Sottile, Exciton dispersion and beyond. 26 Condensed Matter Division of the EPS, Groningen (Netherlands), 2016. F. Da Pieve, Fingerprints of entangled spin and orbital physics in itinerant ferromagnets via angle-resolved resonant photoemission, Phys. Rev. B, vol. 93. American Physical Society, p. 035106, 2016. , Improved ab initio calculation of surface second-harmonic generation from Si(111)(1$\times$1):H, Phys. Rev. B, vol. 93. p. 235304, 2016. , Interpretation of monoclinic hafnia valence electron energy-loss spectra by time-dependent density functional theory, Phys. Rev. B, vol. 93. American Physical Society, p. 165105, 2016. L. Reining, Linear Response and More: the Bethe-Salpeter Equation, in Quantum Materials: Experiments and Theory: Modeling and Simulation Vol. 6, E. Pavarini, Koch, E., van den Brink, J., and Sawatzky, G., Eds. Forschungszentrum Jülich, 2016.
# What is the value of $$\frac{a^{2}~+~ac}{a^{2}c~-~c^{3}}-\frac{a^{2}~-~c^{2}}{a^{2}c~+~2ac^2~+~c^3}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$? This question was previously asked in CDS Maths Previous Paper 10 (Held On: 8 Nov 2020) - 10 View all CDS Papers > 1. 0 2. $$\frac{ac}{a^2 + c^2}$$ 3. $$\frac{6}{a+c}$$ Option 4 : $$\frac{6}{a+c}$$ ## Detailed Solution Given: The given expression is $$\frac{a^{2}~+~ac}{a^{2}c~-~c^{3}}-\frac{a^{2}~-~c^{2}}{a^{2}c~+~2ac^2~+~c^3}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$ Calculation: $$\frac{a^{2}~+~ac}{a^{2}c~-~c^{3}}-\frac{a^{2}~-~c^{2}}{a^{2}c~+~2ac^2~+~c^3}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$ ⇒ $$\frac{a(a~+~c)}{c(a^2~-~c^2)}-\frac{a^2~-~c^2}{c(a^2~+~2ac~+~c^2)}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$ ⇒ $$\frac{a(a~+~c)}{c(a~-~c)(a~+~c)}-\frac{(a~-~c)(a~+~c)}{c(a~+~c)^2}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$ ⇒ $$\frac{a}{c(a~-~c)}-\frac{(a~-~c)}{c(a~+~c)}-\frac{2c}{(a~-~c)(a~+~c)}+\frac{3}{a~+~c}$$ ⇒ $$\frac{a(a~+~c)~-~(a~-~c)^2~-~2c^2~+~3c(a~-~c)}{c(a~-~c)(a~+~c)}$$ ⇒ $$\frac{a^2~+~ac~-~(a^2~-~2ac~+~c^2)~-~2c^2~+~3ca~-~3c^2}{c(a~-~c)(a~+~c)}$$ ⇒ $$\frac{a^2~+~ac~-~a^2~+~2ac~-~c^2~-~2c^2~+~3ca~-~3c^2}{c(a~-~c)(a~+~c)}$$ ⇒ $$\frac{6ac~-~6c^2}{c(a~-~c)(a~+~c)}$$ ⇒ $$\frac{6c(a~-~c)}{c(a~-~c)(a~+~c)}$$ ⇒ $$\frac{6}{a~+~c}$$ ∴ The value of $$\frac{a^{2}~+~ac}{a^{2}c~-~c^{3}}-\frac{a^{2}~-~c^{2}}{a^{2}c~+~2ac^2~+~c^3}-\frac{2c}{a^2~-~c^2}+\frac{3}{a~+~c}$$ is $$\frac{6}{a~+~c}$$.
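The chain of simplifications above can be spot-checked numerically. A small sketch using exact rational arithmetic, with a few arbitrary values of a and c chosen so that no denominator vanishes:

```python
from fractions import Fraction

def lhs(a, c):
    """The original four-term expression, evaluated exactly."""
    a, c = Fraction(a), Fraction(c)
    return ((a**2 + a*c) / (a**2*c - c**3)
            - (a**2 - c**2) / (a**2*c + 2*a*c**2 + c**3)
            - 2*c / (a**2 - c**2)
            + 3 / (a + c))

def rhs(a, c):
    """The claimed simplification 6/(a + c)."""
    return Fraction(6, a + c)

# The two sides agree at every test point
for a, c in [(3, 1), (5, 2), (7, 3)]:
    assert lhs(a, c) == rhs(a, c)
print(lhs(3, 1))  # 3/2, i.e. 6/(3+1)
```

Exact Fractions avoid any floating-point tolerance question, so agreement at several points is strong evidence the algebra above is right.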
Math Help - trig prove 1. trig prove $cosecA+cosec2A+cosec4A=cot\frac{A}{4}-cot4A$ 2. Re: trig prove Originally Posted by srirahulan $cosecA+cosec2A+cosec4A=cot\frac{A}{4}-cot4A$ This is not true in general. Try, for example, $A=\frac{\pi}{3}$.
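A quick numerical check confirms the counterexample, and also suggests what the intended identity likely was: by the telescoping relation csc 2θ = cot θ − cot 2θ, the left-hand side sums to cot(A/2) − cot 4A, so the right-hand side should read cot(A/2), not cot(A/4).

```python
from math import pi, sin, tan

csc = lambda t: 1 / sin(t)
cot = lambda t: 1 / tan(t)

A = pi / 3
lhs = csc(A) + csc(2*A) + csc(4*A)
print(lhs)                  # ~1.1547
print(cot(A/4) - cot(4*A))  # ~3.1547 -> the stated identity fails at A = pi/3
print(cot(A/2) - cot(4*A))  # ~1.1547 -> matches with A/2 in place of A/4
```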
# What WD-40 Is REALLY For! 67,620 views • Nov 18, 2020 67K views - 3 days ago Fran Blanche 142K subscribers 5 Likes I love spending time on my PC : 9 Likes Another day, another time to go to bed. Nighties lovelies! Also: ain’t easy being crow… …but it’s a little easier than being an attorney / lawyer / barrister. 6 Likes GERMANY # Warp Drive News. Seriously! 80,619 views • Nov 21, 2020 80K views - 1 day ago ovelyal 3.39K subscribers # I am Not dead, I am 56 Today 139,064 views • Nov 22, 2020 139K views - 14 hours ago apetor 944K subscribers 3 Likes Bittersweet moment. The only thing that would have made this moment better would be if Paul Bearer were still alive. 6 Likes # SCTV - Election Central 108,066 views • Nov 3, 2016 108K views - 4 years ago SCTV 45.7K subscribers 10 minutes later: #HurrianHymn #OldestSong # The Oldest (Known) Song of All Time 255,918 views • Aug 14, 2020 255K views - 3 months ago hochelaga 90.9K subscribers At 1:58 to 2:18+, The Hohle Fels Flute, 35,000 - 40,000 years ago. # CANSOFCOM - Canadian Special Operations Forces Command 37,208 views • Nov 5, 2020 934 subscribers 6 minutes and 52 seconds audiovisual. At 2:45: Precision of force, not precision of violence. Violence is criminal and illegal use of force. Aggression is legal use of force. 1. A domineering, forceful, or assaultive verbal or physical action intended to hurt another animal or person; the verbal or motor behavioral expression of the effects of anger, hostility, or rage. -When anger, hostility, or rage creates enough illegal force to require aggressive intervention, a forceful verbal or physical action may be required to stop it. Illegal aggression is violence. #VictoryStartsHere 1 hour ago U.S. Army, Lisa Miller, Ph.D. Professor of Psychology and Education Founder of the Spirituality, Mind, Body Institute (SMBI) 2 Likes # Like the idea of burning through this thread faster than the last one.
5 Likes Tomorrow I have a blood test to check whether hormones are involved in my hypertension; they’re looking for secondary reasons why my blood pressure won’t go down, after a disastrous 24 hours ABPM where my systolic pressure wouldn’t go below 190 mmHg (but then, after wearing it for a few hours the ABPM device hurt me each time it took a measure, which might have skewed the results). Since the test is at 07:00, today I’m off to bed earlier than usual to try and have some more sleep. So, nighties lovelies! Also: Russian cyberfarm, best cyberfarm! 7 Likes 7 Likes New EvE Online advertising campaign starts off in Utah, USA and is about to go viral. The metal monolith, a trigonal prism, is reminiscent of, well, Triglavians of course. This time it feels like Hilmar could have chosen a better spot, like Times Square, or Place de la Concorde… 7 Likes It looks more like an EDENCOM beacon. It’s square, not a pyramid. 5 Likes Erm no, it’s a trigonal prism (triangular base and top, parallel ribs). I found some camera footage (via a link in the article above) circling the structure. 6 Likes Perspective can do that for you. Depends where you watch. It’s square or triangular. 6 Likes Gravity generator malfunction 7 Likes 4 Likes Now I’m off to bed, it’s been a loong day and I can’t believe it’s been just Tuesday instead of how it feels… like Thursday. Nighties lovelies! Also: if you use some of these passwords, it’s like not having a password at all… Seriously? 2.5 million people were still using “123456” as their password in 2020?? 6 Likes Even if passwords are a security system used for computer programs and hardware systems, it doesn’t mean that there is not any security in how those systems are designed to work, and that they can’t switch an on switch to be off, and an off switch to be interpreted as being on.
Computers work with zeros and ones, true or false, combinations, and, the people that register patents for those logical statements applied to science also work security to prove it, and to support it. When entities bring that in court and misrepresent it, because they like to hate and try to get away with previous hate, it’s important to bring it back to them and refresh their memory how they did attack it psychologically. 4 Likes # LIVE: SpaceX launches a Falcon 9 booster for a record seventh time during a Starlink mission 4,766 watching now • Started streaming 30 minutes ago NASASpaceflight 161K subscribers … in a world filled with horrible things at times … Like those times that were written that you people wouldn’t believe like before. 1 minute later (not minutes): # ver·i·fy /ˈverəˌfī/ verb 1. make sure or demonstrate that (something) is true, accurate, or justified. “his conclusions have been verified by later experiments” • LAW swear to or support (a statement) by affidavit. 2 minutes later (not minute): An affidavit is a written statement from an individual which is sworn to be true. It is an oath that what the individual is saying is the truth. An affidavit is used along with witness statements to prove the truthfulness of a certain statement in court. An affidavit is a written statement of fact voluntarily made by an affiant or deponent under an oath or affirmation which is administered by a person who is authorized to do so by law. Wikipedia 13 hours later: The problem is not you being uneducated. The problem is that you are educated just enough to believe what you have been taught, and not educated enough to question anything from what you have been taught. 
14 hours later: ### Calculation The index value I of the CAC 40 index is calculated using the following formula:[5] $$I_t = 1000 \times \frac{\sum_{i=1}^{N} Q_{i,t}\, F_{i,t}\, f_{i,t}\, C_{i,t}}{K_t \sum_{i=1}^{N} Q_{i,0}\, C_{i,0}}$$ with t the day of calculation; N the number of constituent shares in the index (usually 40); Qi,t the number of shares of company i on day t; Fi,t the free float factor of share i; fi,t the capping factor of share i (exactly 1 for all companies not subject to the 15% cap); Ci,t the price of share i on day t; Qi,0 the number of shares of company i on the index base date; Ci,0 the price of equity i on the index base date; and Kt the “adjustment coefficient for base capitalization” on day t (reflecting the switch from the French franc to the Euro in 1999). 15 hours later: FRANCE # 1906 - Fragment of forgotten silent film (Remastered with AI 4K 60 FPS) 166,630 views • Apr 26, 2020 166K views - 6 months ago TimeMachine 32 minutes later: # Diego Maradona has died at the age of 60 • 23 minutes ago 9 minutes later: #F35 #FatAmy #MondaysWithMover # F-35A Crash at Eglin AFB (5-19-20) Accident Investigation Board Report Review and Analysis 26,140 views • Nov 23, 2020 26K views - 1 day ago C.W. Lemoine 286K subscribers 17 hours later, 1 hour after the last update: #climatechange #environment #naturaldisasters # Scientists Can Now Prove That Climate Change Is Causing Natural Disasters 25,069 views • Nov 25, 2020 25K views - 6 hours ago Seeker 4.58M subscribers 22 minutes later: # THE SENECA TRAP: Why You’ll NEVER Succeed 50,054 views • Nov 20, 2020 50K views - 5 days ago Andrew Kirby 523K subscribers 3 Likes Guess what? I’m going to bed! (How unusual, heh?) Nighties lovelies! Also: this is a special one… Atkinson Hyperlegible is a free font typeface focused on being easy to read, by using different techniques so each character is unique and easily identifiable to people with vision issues.
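The CAC 40 formula quoted above can be sanity-checked in a few lines (all share counts and prices below are made up): with free-float and capping factors of 1, unchanged share counts, and K_t = 1, the index sits at its base value of 1000 and scales linearly with prices.

```python
def index_value(q_t, f_t, cap_t, c_t, q_0, c_0, k_t=1.0):
    """CAC 40-style index: 1000 * (capped free-float capitalization on day t)
    divided by (adjusted base-date capitalization)."""
    num = sum(q * f * cap * c for q, f, cap, c in zip(q_t, f_t, cap_t, c_t))
    den = k_t * sum(q * c for q, c in zip(q_0, c_0))
    return 1000 * num / den

# Hypothetical three-share index, unchanged since the base date:
q = [100, 200, 50]
c = [10.0, 5.0, 20.0]
ones = [1.0] * 3
print(index_value(q, ones, ones, c, q, c))  # 1000.0 (the base value)

# Prices rise 10% across the board -> the index rises 10%:
c_up = [p * 1.1 for p in c]
print(index_value(q, ones, ones, c_up, q, c))  # ~1100.0
```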
I wish they had a sample text, but from what they show, it looks very well conceived. Personally, I can still read my computer screen from 40 cm away, at least most websites and documents… 4 Likes However, of all the listed phobias, including phobophobia, the fear of fears, the fear of corruption is not listed among those phobias as a medical condition. https://www.mayoclinicproceedings.org/article/S0025-6196(11)63736-1/fulltext Tomophobia is considered a specific phobia, which is a unique phobia related to a specific situation or thing. In this case, a medical procedure. While tomophobia isn’t common, specific phobias in general are quite common. May 15, 2020 ### Tomophobia: Understanding the Fear of Medical Procedures Cleithrophobia, the fear of being trapped, is often confused with claustrophobia, the fear of enclosed spaces. Cleithrophobia is related to winter phobias due to the potential risk of being trapped underneath a snowdrift or thin ice. ### Cleithrophobia: The Fear of Being Trapped - Verywell Mind 2 minutes later: ### List of Phobias: How Many Are There? - Healthline www.healthline.com › health › list-of-phobias It’s impossible to name all of the possible fears that people can have, but here’s a list of the most common and unique ones, including a fear of phobias, as well … 8 minutes later: 9 minutes later: 24 minutes later: # XB-46 – The Needle 163,016 views • Nov 23, 2020 158K views - 2 days ago Dark Skies 180K subscribers # Vietnam War - Project Controlled Weather Popeye 87,709 views • Nov 21, 2020 87K views - 4 days ago - Dark Docs 574K subscribers 2 Likes
/ hep-ex CERN-EP-2017-138 Search for new phenomena with large jet multiplicities and missing transverse momentum using large-radius jets and flavour-tagging at ATLAS in 13 TeV $pp$ collisions Pages: 53 Abstract: A search is presented for particles that decay producing a large jet multiplicity and invisible particles. The event selection applies a veto on the presence of isolated electrons or muons and additional requirements on the number of b-tagged jets and the scalar sum of masses of large-radius jets. Having explored the full ATLAS 2015-2016 dataset of LHC proton-proton collisions at $\sqrt{s}=13~\mathrm{TeV}$, which corresponds to 36.1 fb$^{-1}$ of integrated luminosity, no evidence is found for physics beyond the Standard Model. The results are interpreted in the context of simplified models inspired by R-parity-conserving and R-parity-violating supersymmetry, where gluinos are pair-produced. More generic models within the phenomenological minimal supersymmetric Standard Model are also considered. Note: *Temporary entry*; Comments: 53 pages in total, author list starting page 37, 7 figures, 5 tables, submitted to JHEP, All figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/SUSY-2016-13/ Total numbers of views: 3966 Numbers of unique views: 2045
Expanding over-the-counter derivatives books at Citi and Goldman Sachs inflated their systemic scores more than any other indicator used by the Basel Committee to assess banks’ systemic risk, Risk Quantum analysis shows. Citi reported notional amounts of OTC derivatives of $41.4 trillion at end-March, up 14.7% from three months earlier. The increase added eight basis points to the bank’s systemic score, against an 11bp increase from all other indicators combined. The OTC derivatives component accounted for 64bp of the bank’s 684bp systemic score in Q1, compared with 56bp out of 665bp in Q4 2020. At Goldman Sachs, OTC derivatives notionals ballooned 18.9% to $42.4 trillion over the period. The associated risk score increased by 10bp, out of a total 52bp rise across the indicator gamut. Derivatives accounted for 65bp of the 607bp total score, up from 55bp out of 555bp three months prior. Though the OTC derivatives indicator blinked higher at five other US global systemically important banks, other indicators were more responsible for their higher systemic risk scores. At Bank of America and JP Morgan, higher volumes of trading and available-for-sale securities bit the hardest, making up 23bp and 19bp of the respective 72bp and 80bp quarterly surges in systemic scores. Morgan Stanley, BNY Mellon and State Street paid the price of a higher reliance on short-term wholesale funding – an indicator specific to the Fed’s implementation of the Basel-set formulas – which added 10bp, 9bp and 2bp to their respective scores. At Wells Fargo, the component that lurched up the most was intra-financial system assets, which added 4bp to the all-round score. ### What is it? US G-Sibs are designated using the Basel Committee’s assessment methodology to gauge systemic risk. The total score is found by averaging the scores of five systemic indicator categories: size; interconnectedness; complexity; cross-jurisdictional activity; and substitutability.
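As a sketch of the Method 1 aggregation just described (the total score is the average of the five category scores), with purely hypothetical numbers:

```python
# Hypothetical category scores in basis points; real scores come from the
# Basel Committee's indicator data, not from these illustrative values.
categories = {
    "size": 250,
    "interconnectedness": 180,
    "complexity": 310,
    "cross-jurisdictional activity": 220,
    "substitutability": 140,
}

# Method 1: simple average of the five category scores
gsib_score = sum(categories.values()) / len(categories)
print(gsib_score)  # 220.0
```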
The Federal Reserve uses its own measure, known as Method 2, which uses a different calculation formula, deriving a G-Sib score from the sum of the first four indicator categories above, plus a short-term wholesale funding factor. Individual indicator values are multiplied by fixed coefficients to produce a final G-Sib score. These Method 2 scores are calculated quarterly, but only the year-end score is used to set each bank’s capital surcharge for the following year but one. The G-Sib surcharge applied to designated firms is the higher of that determined by the Basel Committee’s methodology and by the Fed. Under both methods, the higher the score, the higher the G-Sib surcharge, which currently ranges from 1–3.5% under Method 1, and from 1–4% under Method 2. ### Why it matters The annual first-quarter rally in OTC notionals is a well-known feature of the US too-big-to-fail regime. Banks tend to compress their derivatives books towards the year-end, in order to show up in lean form when regulators review their G-Sib scores in November. After the assessment, they lift the brakes, allowing business to return to more regular levels. This year, there have been signs of that tried-and-tested strategy losing some steam. The eight US G-Sibs cut notionals by 9% between the third and the fourth quarter of 2020, compared with a 16% drop the year before, although it should be noted they started trimming their books earlier last year, likely spurred by pandemic-driven uncertainty. Still, between ‘meme stock’ gyrations and policy rate guessing games, the first few months of 2021 have shown how quickly derivatives books can build up. Either broker-dealers follow 2020’s cue and take precautionary early action on book size, or notionals may prove too bloated not to feed into their G-Sib scores come November. ### Get in touch If you have any thoughts on our latest analysis or want to suggest other ways to present and analyse the data, you can email us.
### Tell me more JP Morgan, BofA face higher G-Sib surcharges
# How to fit a gamma distribution to events that have not happened? [duplicate] I am trying to fit a gamma distribution to the failure time of a kind of bulb. I have 40 data points. However, only half of them are actual failure times. The remaining 20 are the times those bulbs have been in use (they haven't failed yet). How can I fit a gamma distribution to all the data I have? • Hi, welcome to SE. It sounds like the best approach is to use Survival Analysis techniques because some of your bulbs have censored lifetimes. Essentially, if you want to fit a Gamma distribution, the likelihood function needs to be adjusted for those censored observations. – StatsPlease Feb 11 '18 at 21:36 • @StatsPlease Could you please provide more about the second way? It's coursework so I cannot choose other models to fit. – Harold Feb 11 '18 at 21:54 • Many questions on site relate to estimation via maximum likelihood under censoring. A search should turn some of them up. While it's easy enough (as you can see at the 2nd wikipedia link) to write the log-likelihood for the censored and uncensored observations (and to use a good optimization routine to maximize it), I'd use a survival analysis routine (like survreg in R) to fit a gamma to censored data myself - it takes care of a lot of the effort automatically – Glen_b -Reinstate Monica Feb 11 '18 at 22:08 • Look at stats.stackexchange.com/questions/133347/… for instance – kjetil b halvorsen May 12 '18 at 20:15 For instance, suppose we have observations of failure times $\boldsymbol x = (x_1, \ldots, x_n)$, and observations of censoring times $\boldsymbol c = (c_1, \ldots, c_m)$, for a total sample of $m+n$ bulbs, where observations are IID gamma with shape $a$ and rate $b$. Then the likelihood is simply $$\mathcal L(a, b \mid \boldsymbol x, \boldsymbol c) = \prod_{i=1}^n \frac{b^a x_i^{a-1} e^{-b x_i}}{\Gamma(a)} \prod_{j=1}^m S_X(c_j),$$ where $S$ is the survival function of the lifetime; i.e.
$$S_X(c_j) = \Pr[X > c_j] = \int_{x = c_j}^\infty \frac{b^a x^{a-1} e^{-b x}}{\Gamma(a)} \, dx = Q(a, b c_j),$$ where $Q$ denotes the regularized upper incomplete gamma function. A closed-form solution in the general case is not possible, but standard statistical software will maximize this likelihood numerically when the data are provided.
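As a concrete special case of the censored likelihood above, fix the shape at $a = 1$, so the lifetime is exponential with rate $b$ and $S(c) = e^{-bc}$. The log-likelihood $n \log b - b(\sum x_i + \sum c_j)$ then has the closed-form maximizer $\hat b = n / (\text{total time on test})$. The bulb data below are made up for illustration:

```python
from math import log

def exp_censored_mle(failures, censored):
    """MLE of the rate b for exponential lifetimes with right-censoring:
    L(b) = prod_i b*exp(-b*x_i) * prod_j exp(-b*c_j)
         => b_hat = (number of failures) / (total observed time)."""
    return len(failures) / (sum(failures) + sum(censored))

# Hypothetical data: 4 observed failure times, 2 still-running (censored) times
failures = [120.0, 340.0, 95.0, 410.0]
censored = [500.0, 500.0]
b_hat = exp_censored_mle(failures, censored)
print(b_hat)  # 4 / 1965 ~ 0.002036

# Sanity check: b_hat beats nearby rates on the censored log-likelihood
def loglik(b):
    return len(failures) * log(b) - b * (sum(failures) + sum(censored))
assert all(loglik(b_hat) > loglik(b) for b in [0.001, 0.0015, 0.003, 0.005])
```

For the full gamma case, the same construction applies with the survival term $Q(a, b c_j)$ in the likelihood; there a numerical optimizer (or a survival routine such as R's survreg) takes the place of the closed form.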
How do you evaluate the expression (c-a)-b given that a=-10, b=4, and c=1?

Apr 12, 2018

$7$

Explanation:

Substitute the given values into the expression:

$(c - a) - b = (\textcolor{blue}{1} - (\textcolor{red}{-10})) - \textcolor{magenta}{4}$

[note that subtracting $-10$ is equivalent to adding $10$]

$= (1 + 10) - 4 = 11 - 4 = 7$
# The feasible region for an LPP is shown in Fig. 12.12. Let $Z=3x-4y$ be the objective function. (Maximum value of $Z$ + Minimum value of $Z$) is equal to $(A)\;13 \quad (B)\;1 \quad (C)\;-13 \quad (D)\;-17$
# Would quantum entanglement be increased by anti-Unruh effect? (arXiv:1802.07886v1 [gr-qc]) on 2018-2-24 1:22pm GMT Authors: Taotao Li, Baocheng Zhang, Li You We study the "anti-Unruh effect" for an entangled quantum state in reference to the counterintuitive cooling previously pointed out for an accelerated detector coupled to the vacuum. We show that quantum entanglement for an initially entangled (spacelike separated) bipartite state can be increased when either a detector attached to one particle is accelerated or both detectors attached to the two particles are in simultaneous accelerations. However, if the two particles (e.g., detectors for the bipartite system) are not initially entangled, entanglement cannot be created by the anti-Unruh effect. Thus, within a certain parameter regime, this work shows that the anti-Unruh effect can be viewed as an amplification mechanism for quantum entanglement. # Eternal Inflation: When Probabilities Fail ## Philsci-Archive: No conditions. Results ordered -Date Deposited. on 2018-2-24 1:23am GMT Norton, John D. (2018) Eternal Inflation: When Probabilities Fail. [Preprint] # "Click!" Bait for Causalists ## Philsci-Archive on 2018-2-24 1:22am GMT Price, Huw and Liu, Yang (2017) "Click!" Bait for Causalists.
[Preprint] # Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction ## Latest Results for Foundations of Physics on 2018-2-24 12:00am GMT ### Abstract This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum–classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit $\hbar \rightarrow 0$, and, on the other, a certain generalization of Ehrenfest’s Theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act—specifically, via restriction to narrow wave packet states. Here, we describe a certain geometrical re-formulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization, but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.
# In defence of Everettian decision theory ## ScienceDirect Publication: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics on 2018-2-22 5:13pm GMT Publication date: Available online 18 February 2018 Source:Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics I consider the interrelations between two decision-theoretic approaches to probability which have been developed in the context of Everettian quantum mechanics: that due to Deutsch and Wallace on the one hand, and that due to Greaves and Myrvold on the other. Having made precise these interrelations, I defend Everettian decision theory against recent objections raised by Dawid and Thébault. Finally, I discuss the import of these results from decision theory for the rationality of an Everettian agent's betting in accordance with the Born rule. # What is Quantum Mechanics? A Minimal Formulation ## Latest Results for Foundations of Physics on 2018-2-21 12:00am GMT ### Abstract This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called “microscopic theory”, applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. 
Two fundamental theorems of Hilbert space, the Kochen–Specker–Bell theorem and Gleason’s theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory. # Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup ## Latest Results for Foundations of Physics on 2018-2-20 12:00am GMT ### Abstract The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver to whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.
Biogeosciences, 16, 3977–3996, 2019 https://doi.org/10.5194/bg-16-3977-2019 Research article | 17 Oct 2019

# Modelling long-term blanket peatland development in eastern Scotland

Ward Swinnen1,2, Nils Broothaerts1, and Gert Verstraeten1 • 1Division of Geography and Tourism, Department of Earth and Environmental Sciences, KU Leuven, Leuven, 3000, Belgium • 2Research Foundation – Flanders (FWO), Brussels, 1000, Belgium Correspondence: Ward Swinnen (ward.swinnen@kuleuven.be)

Abstract Blanket peatlands constitute a rare ecosystem on a global scale, but blanket peatland is the most important peatland type on the British Isles. Most long-term peatland development models have focussed on peat bogs and high-latitude regions. Here, we present a process-based 2-D hillslope model to simulate long-term blanket peatland development along complex hillslope topographies. To calibrate the model, the peatland architecture was assessed along 56 hillslope transects in the headwaters of the river Dee (633 km²) in eastern Scotland, resulting in a dataset of 866 soil profile descriptions. The application of the calibrated model using local pollen-based land cover and regional climate reconstructions (mean annual temperature and mean monthly precipitation) over the last 12 000 years shows that the Early Holocene peatland development was largely driven by a temperature increase. An increase in woodland cover only has a slight positive effect on the peat growth potential, contradicting the hypothesis that blanket peatland developed as a response to deforestation.
Both the hillslope measurements and the model simulations demonstrate that the blanket peatland cover in the study area is highly variable in both extent and peat thickness, stressing the need for spatially distributed peatland modelling. At the landscape scale, blanket peatlands were an important atmospheric carbon sink during the period 9.5–6 kyr BP. However, during the last 6000 years, the blanket peatlands were in a state of dynamic equilibrium with minor changes in the carbon balance.

1 Introduction

Peatlands occur across the globe and contain up to one third of the global soil carbon stock, despite covering less than 3 % of the Earth's surface (Gorham, 1991; Xu et al., 2018). Especially at higher latitudes, peatlands are an important ecosystem type, and their dynamics have profoundly influenced the terrestrial carbon cycle throughout the Holocene (Yu et al., 2011). Unfortunately, little is known about long-term peatland dynamics and their response to climatic and land cover changes (Wu, 2012). Blanket peatlands are spreads of peat of varying thickness, covering the underlying topography, thus “blanketing” the landscape (Lindsay, 1995). This peatland type occurs in hyperoceanic climates with cool and moist conditions throughout the year, and it is mostly confined to the maritime edges of the continents (Gallego-Sala and Prentice, 2013). Due to its location in the landscape, blanket peatland formation is more strongly controlled by topography than other peatland types (Parry et al., 2012). Although rare on a global scale, up to 6 % of the area of the United Kingdom is covered by blanket peatland (Jones et al., 2003). Given the international rarity of these environments, the large area of the Scottish blanket peatlands, covering 23 % of the country, makes them a high-value target for conservation efforts (Fyfe et al., 2013; Tipping, 2008).
During the Holocene, large areas of blanket peatland have developed throughout the Scottish Highlands, and this shift from mineral to waterlogged and nutrient-poor organic soils is one of the most important Holocene landscape changes in Scotland. Different hypotheses have been raised regarding the cause of this peatland development (Tipping, 2008). The original hypothesis, as proposed by Moore, linked blanket peatland initiation to human impact, where anthropogenic land use change and increased grazing during the Neolithic period led to a shift in the hillslope hydrology, resulting in the paludification of the upland soils (Moore, 1973). While this hypothesis has been supported by local studies throughout the British Isles, other authors have suggested that, at least for Scotland, the initiation of blanket peatlands resulted from climatic changes during the Atlantic period (Ellis and Tallis, 2000; Huang, 2002; Simmons and Innes, 1988; Tipping, 2008). A recent study based on a database of basal radiocarbon dates shows regional differences in the timing of the blanket peatland development, with an earlier timing for central and southern Scotland compared to the other regions of the British Isles (Gallego-Sala et al., 2016). Most case studies of blanket peatland initiation are based on field data such as pollen cores and radiocarbon dating, but studying causalities based on timing alone is difficult (Gallego-Sala et al., 2016). Process-based modelling of this landscape transformation could prove to be a useful technique, complementary to the field data, to provide insight into the underlying processes and mechanisms. In recent decades, several peatland models have been developed, varying in spatial and temporal scale and in model complexity (Frolking et al., 2010). A good overview of the models developed for simulating long-term peatland behaviour is given by Baird et al. (2012).
Currently, several long-term peatland models such as DigiBog and the Holocene Peatland Model (HPM) allow for simulations of peatland processes and the feedbacks between ecology, hydrology and peat properties over Holocene timescales (Baird et al., 2012; Frolking et al., 2010). These models have been applied successfully within the context of peat bogs, but they are difficult to transfer to blanket peatlands for two reasons. Firstly, these models are developed as cohort models, where each year a new peat layer is added to the soil profile and included in the calculations for the remaining part of the simulations. As a result, these models allow for the simulation of temporal changes in peat properties such as hydraulic conductivity within the peat profile, but as the simulated time period increases, they become computationally expensive, especially when a spatial dimension is added. Secondly, these models have been developed for peat bogs, which have a different peatland architecture compared to blanket peatlands, and they are therefore not always adapted to simulating peatland processes along complex hillslope topographies. These issues have been partially resolved by the MILLENNIA model, which has been designed specifically for blanket peatlands (Heinemeyer et al., 2010). This model is also a cohort model but incorporates additional processes which are specific to blanket peat, such as runoff-driven peat erosion. However, the high degree of detail in the model domain and the representation of the processes make it difficult to apply cohort models at the landscape scale. In this study, a process-based peatland model is presented which is able to simulate the hillslope hydrology and peatland dynamics along topographically complex hillslopes on Holocene timescales.
Additionally, the representation of the model domain is relatively simple using a diplotelmic peat profile, making it computationally feasible to study peatland development on a landscape scale by simulating a large number of hillslope cross sections. The model is applied to the Upper Dee area in the Cairngorms National Park in eastern Scotland. The goal of this study is twofold: firstly, to apply a relatively simple process-based peatland model to study the long-term blanket peatland development in the Scottish Highlands on a landscape scale and secondly, to identify the relative importance of climatic and land cover changes for long-term blanket peatland development. 2 Materials and methods ## 2.1 Study area The study area consists of the headwaters of the river Dee in eastern Scotland, with an elevation ranging from 322 to 1309 m a.s.l. The area lies in the centre of the Cairngorms National Park and is managed by the Mar Lodge, Invercauld and Mar estates (Fig. 1). The geology of the Dee catchment is characterized by metamorphic and igneous rocks, with schists and granulites in the southern part of the study area and granite batholith intrusions in the north (Maizels, 1985). The entire area was glaciated by the Scottish ice sheet during the last ice age, which retreated between approximately 16 kyr and 13.6 kyr BP. In contrast to the western Highlands, the Cairngorms massif was not subjected to widespread glacial expansion during the Younger Dryas (Loch Lomond Stadial). During this period, the glacial activity remained largely restricted to the cirques (Everest and Kubik, 2006). The development of the current landscape and soils in the Upper Dee area has been influenced by the deglaciation, forming a wide variety of glacial and fluvioglacial landforms (Ballantyne, 2008). In many parts of the study area, the bedrock is covered by glacial till of varying thicknesses (Maizels, 1985). 
The summits and ridges mostly carry skeletal soils and bedrock outcrops, while the slopes are covered by blanket peat and alpine podzols (Smith, 1985). The peat deposits are found both lying directly on bedrock and overlying a layer of mineral sediment. This mineral substrate consists of gravel-rich silt loam and sandy loam in the southern part of the study area and sandy loam to loamy sand in the northern part. Currently, the area is dominated by semi-natural land cover, including alpine and montane heath vegetation on the highest summits, heather moorland, and small pockets of natural forest (Tetzlaff and Soulsby, 2008). The total annual precipitation ranges from 800 mm in the eastern part of the study area to almost 2000 mm on the mountain tops, with a significant proportion of the precipitation falling as snow during the winter months (Dunn et al., 2001). The temperature regimes can vary considerably within the study area. The town of Braemar (339 m a.s.l.) has a mean annual temperature of 6.8 °C, ranging from 1.6 °C as a mean winter temperature (December–January–February; DJF) to 12.8 °C as a mean summer temperature (June–July–August; JJA). In contrast, the summit of Cairn Gorm (1245 m a.s.l.) has a mean annual temperature of 0.6 °C and ranges from a mean winter temperature (DJF) of −2.6 °C to a mean summer temperature (JJA) of 5.3 °C. Early Holocene traces of human presence have been found within the study area, with archaeological evidence indicating the presence of Mesolithic hunter–gatherer structures as early as 8.2 cal kyr BP in the western part of the study area (Warren et al., 2018). The first traces of permanent settlement in the Upper Dee are from the village of Braemar around 1000 CE (Paterson, 2011). In contrast to the western part of Scotland, the study area shows no traces of large-scale peat extraction, which is probably due to the relatively thin peat profiles and difficult access to the area (Maurer, 2015).
Figure 1 Location of the Upper Dee area, with an indication of the hillslope transects and the pollen sites used for the land cover reconstruction.

## 2.2 Field data

For the study area, the blanket peatland architecture was assessed along 56 hillslope transects across the study area during field campaigns in 2015 and 2017 using soil corings, laboratory analysis of peat samples and pollen cores (Fig. 1). The soil corings were taken along the hillslope transects with a spacing of approximately 50 m using a gauge auger. The hillslope topography was measured using post-processing real-time kinematic (RTK) GPS measurements. The transect locations were selected in order to include a wide variety of lithologies, elevation zones and topographic parameters such as aspect, slope and curvature. Additionally, the carbon content, dry bulk density and water content of the peat deposits were derived from 35 field samples, collected as core sections with a length of 5 cm. These samples were collected at random coring locations and at depths ranging from 5 to 165 cm below the surface. The regional vegetation evolution over the past 12 000 years was reconstructed based on seven pollen cores located within the study area (Fig. 1). These cores provide vegetation information from different elevation zones with varying distances to the low-lying valleys of the Dee and the Spey (Birks, 1969; Hunter, 2016; Huntley, 1994; Paterson, 2011). Using the REVEALS (Regional Estimates of Vegetation Abundance from Large Sites) model (Sugita, 2007), the pollen percentages were converted to regional vegetation fractions. REVEALS was developed to reconstruct regional vegetation composition using pollen data from large lakes, but previous studies have shown that a group of sites can also be used to estimate regional vegetation cover (Fyfe et al., 2013; Mazier et al., 2012; Trondman et al., 2016).
Pollen type parameters (pollen productivity and fall speed) were based on the standardized set of Mazier et al. (2012). The regional vegetation fractions were grouped into five classes (coniferous trees, deciduous trees, shrubs, heather, grasses and herbs) and used as land cover input in the hillslope model. As the land cover reconstruction for Scotland based on REVEALS by Fyfe et al. does not include pollen data from high-elevation sites, a new land cover reconstruction was made by Hunter for this study using local pollen data (Fyfe et al., 2013; Hunter, 2016). At all coring locations, the soil profiles were described based on visual inspection, analysing the colour, texture and possible presence of macroscopic remains (charcoal, wood, etc.). Based on the coring descriptions, the peat thickness could be derived. In this study, peat is defined as a dark organic-rich layer of at least 10 cm thick without or with a minimal presence of mineral material based on visual inspection. Organic-rich horizons with a clear presence of mineral material were not classified as peat. In total, 34 peat samples were radiocarbon dated at 17 locations throughout the study area, encompassing a range of topographic situations and peat thicknesses (see Appendix A2 for dating details). All radiocarbon measurements were performed by the Belgian Royal Institute for Cultural Heritage and calibrated using the software OxCal 4.3 and the IntCal13 calibration curve (Bronk Ramsey, 2009; Reimer et al., 2013).

## 2.3 Model outline

The model presented here is based on the concept of impeded drainage, where environmental conditions such as bedrock topography can cause waterlogging and peat formation (Alexandrov et al., 2016; Clymo, 1984; Ingram, 1982). The basic structure of the hillslope peatland model consists of a hydrology module simulating the water table behaviour along the hillslope, which is coupled to a peat growth module simulating biomass production and decomposition (Fig. 2).
The model domain consists of a two-dimensional hillslope cross section, which is discretized in a series of model grid points. The stratigraphy consists of impermeable bedrock, overlain by a layer of glacial till, which is assumed to be porous. Over time, a peat profile can develop on top of this till substrate when the right environmental conditions are met. The hillslope topography is based on the detailed GPS measurements for each coring location.

Figure 2 General model workflow. For a more detailed description of the model structure, the reader is referred to the text.

### 2.3.1 Hillslope hydrology module

The water table dynamics are modelled using a variant of the Boussinesq equation for a non-constant slope (Hilberts et al., 2004):

$$\epsilon \frac{\partial S}{\partial t}=\frac{k}{\epsilon }\cos i\left(x\right)\left[B\frac{\partial S}{\partial x}+S\frac{\partial B}{\partial x}+\epsilon S\frac{\partial i\left(x\right)}{\partial x}\right]+\frac{k}{\epsilon }\sin i\left(x\right)\left[\epsilon \frac{\partial S}{\partial x}-SB\frac{\partial i\left(x\right)}{\partial x}\right]+\epsilon N,\tag{1}$$

with $B=\frac{\partial S}{\partial x}$, x as the distance to the hillslope bottom (m), ε as the soil porosity (m m−1), S as the actual water storage (m), k as the hydraulic conductivity (m s−1), i as the bedrock slope (m m−1), and N as the rainfall recharge or infiltration (m) (Hilberts et al., 2004). The Boussinesq equation is a simplified form of the full Richards equation for unsaturated soils by excluding a representation of the unsaturated zone.
This simplification can be justified for peat soils given the often shallow position of the water table in peatlands (Ballard et al., 2011; Paniconi et al., 2003). As a result, the Boussinesq equation assumes an instantaneous exchange of water (e.g. infiltration and evapotranspiration) between the surface and the saturated zone of the soil profile. This simplified representation leads to a significant reduction of the computational time (Ballard et al., 2011). To enable the use of the Boussinesq equation for the simulation of the hillslope hydrology, local topographic depressions are filtered out. In this study, the Boussinesq equation is discretized using a forward-time central-space finite-difference scheme for the diffusion component and a first-order upwind finite-difference scheme for the advection component (Campforts and Govers, 2015). The diplotelmic nature of the model is represented by the depth-integrated saturated hydraulic conductivity. Each stratigraphic unit (mineral substrate, catotelm and acrotelm) has a specific saturated hydraulic conductivity value. The bedrock is excluded from the water table depth calculations as it is assumed to be impermeable. A simple snow module is included in the hydrological model, with precipitation falling as snow during periods with sub-zero temperatures. The amount of melt is based on a degree-day factor model. Snow sublimation is not explicitly represented in the model. For each time step, infiltration and saturation excess overland flow are calculated. The produced runoff is assumed to leave the hillslope before the next time step. For open peatland vegetation types, all rainfall is assumed to be able to infiltrate for intensities below 2 mm h−1. For higher intensities, the infiltration rate increases with higher precipitation rates.
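The discretization named above can be illustrated with a generic 1-D advection–diffusion update. This is a minimal sketch of a forward-time central-space (FTCS) step for the diffusive term combined with a first-order upwind step for the advective term, not the authors' code; the grid size, coefficient values and fixed-value boundary treatment are all assumptions for illustration.

```python
import numpy as np

def step_advection_diffusion(S, D, v, dx, dt):
    """One explicit time step of dS/dt = D * d2S/dx2 - v * dS/dx.

    Diffusion term: forward-time central-space (FTCS).
    Advection term: first-order upwind (assuming flow in +x, v >= 0).
    Boundary values are held fixed for simplicity.
    """
    S_new = S.copy()
    # FTCS for the diffusion component
    S_new[1:-1] += dt * D * (S[2:] - 2.0 * S[1:-1] + S[:-2]) / dx**2
    # First-order upwind for the advection component
    S_new[1:-1] -= dt * v * (S[1:-1] - S[:-2]) / dx
    return S_new

# Illustrative run: a unit pulse spreads (diffusion) and drifts downslope
# (advection). Explicit stability requires D*dt/dx**2 <= 0.5 and v*dt/dx <= 1.
S = np.zeros(101)
S[50] = 1.0
for _ in range(100):
    S = step_advection_diffusion(S, D=0.1, v=0.05, dx=1.0, dt=1.0)
```

The explicit scheme is what makes the short 400 s hydrological time step mentioned later necessary for stability.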
$$\mathrm{ir}=0.626\cdot p+0.0002,\tag{2}$$

with “ir” as the infiltration rate (mm h−1) and p as the precipitation rate (mm h−1) (Holden and Burt, 2002). For woodland peatlands, infiltration rates of up to 30 mm h−1 are reported (Cairns et al., 1978). In the model, this maximal infiltration rate of 30 mm h−1 is used for a fully forested peatland. The final infiltration rate at a certain location is determined by linear interpolation between the infiltration rates of open and forested peatland, based on the percentage of woodland cover at each model grid point. The potential plant transpiration and soil evaporation (mm d−1) are calculated separately based on the leaf area index (LAI), which enables differentiation based on the vegetation cover (Eq. 3) (Williams et al., 1983).

$$\begin{aligned}E_{\mathrm{soil}}&=E_{\mathrm{pot}}\,e^{-0.4\,\mathrm{LAI}}\\ E_{\mathrm{plant}}&=\frac{E_{\mathrm{pot}}\,\mathrm{LAI}}{3},\quad 0\le \mathrm{LAI}\le 3\\ E_{\mathrm{plant}}&=E_{\mathrm{pot}}-E_{\mathrm{soil}},\quad \mathrm{LAI}>3,\end{aligned}\tag{3}$$

with Esoil as the soil evaporation rate (mm d−1), Eplant as the plant transpiration rate (mm d−1) and Epot as the potential evapotranspiration rate (mm d−1), which is calculated using the Thornthwaite equation based on the mean monthly temperature. The actual evapotranspiration rate (AET) (mm d−1) is calculated as a function of the water table depth (Eq. 4). If a grid point consists of glacial till without a peat cover, the AET is at the potential rate when the water table is at the surface (z1=0) and decreases linearly until depth z2.
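The infiltration treatment can be sketched as follows. Only Eq. (2), the 2 mm h−1 open-peatland threshold, the 30 mm h−1 forest cap and the linear interpolation are taken from the text; the function name and the assumption that a fully forested peatland infiltrates all rain up to the cap are illustrative.

```python
def infiltration_rate(p, woodland_frac):
    """Infiltration rate (mm/h) for precipitation rate p (mm/h).

    Open peatland: all rain infiltrates below 2 mm/h; above that,
    Eq. (2): ir = 0.626 * p + 0.0002 (Holden and Burt, 2002).
    Fully forested peatland: all rain infiltrates up to 30 mm/h
    (assumption based on the reported maximum; Cairns et al., 1978).
    The final rate interpolates linearly with the woodland fraction.
    """
    ir_open = p if p < 2.0 else 0.626 * p + 0.0002
    ir_forest = min(p, 30.0)
    return (1.0 - woodland_frac) * ir_open + woodland_frac * ir_forest
```

For example, a 10 mm h−1 storm on open peatland infiltrates at about 6.26 mm h−1, while under full woodland cover it infiltrates entirely.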
If peat is present, the actual evapotranspiration is assumed to be at the potential rate if the water table is located in the upper horizon (z1 is the acrotelm thickness) and decreases linearly until depth z2 (m) (Frolking et al., 2010; Lafleur et al., 2005). In this study, z2 is set to 1 m for both the glacial till and peat soils. In contrast to more detailed peatland models such as the MILLENNIA model, the relationship of the AET to the water table depth is not influenced by changes in vegetation groups and their root properties, resulting in constant values for z1 and z2 throughout the simulations (Carroll et al., 2015).

$$\mathrm{AET}_{t}=\left(E_{\mathrm{soil}}+E_{\mathrm{plant}}\right)\frac{z_{2}-wt}{z_{2}-z_{1}},\quad \mathrm{for}\ z_{1}\le wt\le z_{2}.\tag{4}$$

Since detailed local climate reconstructions are scarce, several peatland modelling studies have used continental or at best regional climate reconstructions (mostly pollen-based) which were fine-tuned using local climate information (Frolking et al., 2010; Heinemeyer et al., 2010; Morris et al., 2015). Here, a similar approach is used. Input data for temperature and precipitation values are based on a European gridded dataset of mean annual temperature and mean monthly precipitation anomalies for the last 12 000 years derived from pollen data with a spatial resolution of 1° × 1° and a temporal resolution of 500 years (Fig. 3) (Mauri et al., 2015). As total annual precipitation amounts and mean annual temperatures vary considerably throughout the study area, the precipitation and temperature data were corrected for orographic effects.
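Combining Eqs. (3) and (4), the AET calculation might look like the following sketch. The default z2 = 1 m and the role of z1 follow the text, while the function name and the handling of the edge cases (water table above z1, below z2) are assumptions.

```python
import math

def actual_evapotranspiration(e_pot, lai, wt, z1, z2=1.0):
    """Actual evapotranspiration (mm/d) from the potential rate e_pot,
    leaf area index lai and water table depth wt (m below the surface).

    z1: depth above which AET is at the potential rate (0 for bare till,
        the acrotelm thickness for peat); z2: depth at which AET is zero.
    """
    # Eq. (3): partition potential ET between soil and plants using LAI
    e_soil = e_pot * math.exp(-0.4 * lai)
    if lai <= 3.0:
        e_plant = e_pot * lai / 3.0
    else:
        e_plant = e_pot - e_soil
    total = e_soil + e_plant
    # Eq. (4): linear reduction with water table depth between z1 and z2
    if wt <= z1:
        return total
    if wt >= z2:
        return 0.0
    return total * (z2 - wt) / (z2 - z1)
```

Note that for intermediate LAI values Eq. (3) allows the soil and plant terms together to exceed the nominal potential rate; the sketch reproduces the published formulation as given.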
Data from eight meteorological stations in the vicinity of the study area were used to construct linear regression models, correcting the mean daily precipitation amount and mean annual temperature for each location based on the elevation (Eqs. 5–6) (see Appendix A1 for the weather station details).

$$P=\left(0.003776\cdot E\right)+1.669\tag{5}$$

and

$$\mathit{MAT}=\left(-0.0083\cdot E\right)+13.2839,\tag{6}$$

with P as the mean daily precipitation amount (mm), E as the elevation (m a.s.l.) and MAT as the mean annual temperature (°C). Additional local climate information was added to the climate reconstruction data by incorporating random variability into the reconstructed temperature and precipitation data, based on the variability as observed in the weather station of Braemar for the period 1853–2010. The relatively low spatial and temporal resolution of the continental-scale climate reconstruction will probably lead to the underrepresentation of short-lived events and local climate variability. However, by incorporating the orographic corrections and random variability, local climate information is used to fine-tune the records and increase both the spatial and temporal variability. The time series used as model inputs are based on daily temperature and hourly precipitation data from the weather station of Braemar, which are rescaled using both the regression equations for elevation effects and the long-term anomalies for temperature and precipitation (Fig. 3). As a result, precipitation and temperature series with a high temporal resolution are used throughout the entire studied period. The model is run with a spatial resolution of 50 m, similar to the average coring distance.
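The orographic corrections of Eqs. (5)–(6) translate directly into code; the function name and return convention are illustrative.

```python
def local_climate(elevation_m):
    """Orographic correction of the climate inputs (Eqs. 5-6), fitted to
    eight weather stations near the study area.

    Returns (mean daily precipitation, mm; mean annual temperature, deg C).
    """
    p = 0.003776 * elevation_m + 1.669      # Eq. (5)
    mat = -0.0083 * elevation_m + 13.2839   # Eq. (6)
    return p, mat
```

Precipitation thus increases and temperature decreases linearly with elevation, which is how a single Braemar-based input series is rescaled for every grid point.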
The time resolution is set to 400 s for the hillslope hydrology module and 1 year for the peat growth module to ensure model stability.

Figure 3 Reconstructed mean annual temperature (°C) and mean monthly precipitation (mm month−1) anomalies for the period 12 kyr–100 BP with a 500-year interval for the location of Braemar. Values extracted from a gridded European dataset with a spatial resolution of 1° × 1° (Mauri et al., 2015).

### 2.3.2 Peat growth module

The peat accumulation at each grid point is calculated as the balance between biomass production and decomposition. In the literature, several relatively simple equations can be found to calculate the net primary production (NPP) based on climatic data (mean annual temperature, total annual rainfall or potential evapotranspiration), such as the Miami model or the Thornthwaite memorial model, some of which have been implemented in peatland models (Heinemeyer et al., 2010; Lieth, 1973; Lieth and Box, 1972). In some cases, such as the Miami model, the NPP is calculated based on multiple climatic variables (rainfall and temperature), using the minimum value of both equations as the final NPP value. However, given the climatic conditions of the Scottish Highlands, precipitation is not a limiting factor for biomass production. As a consequence, the biomass production is simulated as a function of the mean annual temperature using a power function regression equation based on field data for the Moor House–Upper Teesdale National Nature Reserve in northern England (Garnett, 1998), corrected for the woodland cover. A possible disadvantage of this simple approach is the dependence of the biomass production calculations on the quality of the climate and land cover reconstructions.
$$\mathrm{NPP}=60.06\left(\mathrm{MAT}^{1.134}\right)\cdot \left(1+\left(\frac{wc}{wc_{\mathrm{max}}}\cdot wi\right)\right),\tag{7}$$

with NPP as the net primary production (g m−2 a−1), MAT as the mean annual temperature at the grid point location (°C), wc as the woodland fraction, wcmax as the woodland percentage of a fully forested peatland, and wi as the percentage increase in NPP between an open and wooded peatland. In general, the NPP is higher for wooded peatland compared to open peatland vegetation, with reported values of a 12 % increase for bogs and 17 % for fens (Beilman and Yu, 2001; Szumigalski and Bayley, 1997). In this study, wi is set to 15 % and wcmax to 40 %. The peat column at each grid point is divided into an oxic and anoxic zone based on the calculated mean annual water table height. The total decomposition can thus be written as

$$D=k_{1}\cdot wt+k_{2}\cdot \left(h-wt\right),\tag{8}$$

with D as the total decomposition (m a−1), k1 and k2 as the rates of decomposition under anoxic and oxic conditions (yr−1), h as the thickness of the soil profile above the bedrock (m), and wt as the height of the water table above the bedrock (m) (Hilbert et al., 2000). The decomposition rates are dependent on the mean annual air temperature using a Q10 temperature multiplier, which is the ratio by which the biomass respiration rate increases under a 10 °C temperature increase. The range in Q10 values mentioned in the literature is large, but Chapman and Thurlow demonstrated that Q10 values are generally higher for temperatures between 0 and 5 °C (Chapman and Thurlow, 1998). This effect can be attributed to the fact that as temperatures rise above the freezing point, more microbial groups will become active, leading to relatively large changes in respiration rates for small changes in temperature.
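Eq. (7) with the stated parameter values (wi = 15 %, wcmax = 40 %) can be sketched as follows; the guard for non-positive mean annual temperatures is an assumption added for numerical safety (the power law is not defined for negative MAT), not something stated in the text.

```python
def net_primary_production(mat, wc, wc_max=0.40, wi=0.15):
    """Net primary production (g m^-2 a^-1), Eq. (7): a power-law fit of
    NPP to mean annual temperature MAT (deg C) after Garnett (1998),
    scaled up with the woodland fraction wc (wi = 15 % increase at the
    full woodland cover wc_max = 40 %).
    """
    if mat <= 0.0:
        return 0.0  # assumption: no production at sub-zero mean temperature
    return 60.06 * mat**1.134 * (1.0 + (wc / wc_max) * wi)
```

At full woodland cover the result is exactly 15 % above the open-peatland value, matching the wi parameter.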
As a result, two Q10 values are used in this study. A Q10 value of 2.2 is used for temperatures above 5 °C, and 3.7 is used for temperatures between 4 and 5 °C. Below 4 °C, the decomposition is assumed to cease completely (Chapman and Thurlow, 1998; Rosswall, 1973 as cited by Clymo, 1984; Wieder and Yavitt, 1994; Wu, 2012). The biological module runs on an annual timescale. Based on the calculated peat accumulation rate, the hillslope topography is updated annually.

### 2.3.3 Peatland initiation

Simulations start with a hillslope consisting of an impermeable bedrock covered by glacial till. As the thickness of the till is not known at each location, it is assumed to have a constant thickness of 50 cm. Over time, the organic matter accumulates within the upper 30 cm of the mineral soil, forming an organic-rich horizon based on the balance between biomass production and decomposition. When a threshold is exceeded, additional organic matter which is produced starts to accumulate as peat at that location, with the properties of an acrotelm. In this study, the threshold is set at an amount of organic matter equivalent to a peat layer with a thickness of 10 cm, using the median dry bulk density and organic-carbon percentage of the 35 peat samples collected in the field. This ensures that a similar definition for peat is used both for the hillslope corings and for the model simulations. Once the peat thickness exceeds the thickness of the acrotelm layer, the peat layer becomes diplotelmic, with the peat below the acrotelm having the properties of the catotelm. Once the biomass within the simulated peat profile decreases below the biomass threshold, the grid point is no longer considered to be covered by a peat layer and only mineral soil properties are taken into account.

### 2.3.4 Boundary conditions

The impermeable bedrock below the glacial till is used as a zero-flux boundary condition at the bottom of the model domain.
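The decomposition balance of Eq. (8) with the two-range Q10 scheme might look like the sketch below. The reference temperature at which the base rates k1 and k2 apply is an assumption (here 10 °C), as the text does not state one; the Q10 values and temperature thresholds are taken from the text.

```python
def q10_multiplier(t, t_ref=10.0):
    """Temperature multiplier on the decomposition rates:
    Q10 = 2.2 above 5 deg C, 3.7 between 4 and 5 deg C, and
    decomposition ceases below 4 deg C. t_ref is an assumed
    reference temperature at which the base rates apply.
    """
    if t < 4.0:
        return 0.0
    q10 = 3.7 if t <= 5.0 else 2.2
    return q10 ** ((t - t_ref) / 10.0)

def total_decomposition(k1, k2, h, wt, t):
    """Eq. (8): D = k1*wt + k2*(h - wt), scaled by the Q10 multiplier.
    wt is the water table height above bedrock (anoxic zone, rate k1
    below the water table) and h the profile thickness (oxic zone,
    rate k2, above it).
    """
    return q10_multiplier(t) * (k1 * wt + k2 * (h - wt))
```

With the water table near the surface, the slow anoxic rate dominates, which is the mechanism that lets peat accumulate on poorly drained slopes.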
At several locations throughout the study area, rivers have eroded the stream bank, exposing the peat. At the lower end of the hillslope, the water storage is thus set to a fixed value, representing the depth of the river. For the grid point at the top of the hillslope transect, a lateral zero-flux boundary is assumed.

## 2.4 Model calibration and validation

Model calibration is based on the measured mean peat thickness per topographic class. In total, nine topographic classes were defined by dividing both the measured slope and curvature at each coring location into three classes, resulting in nine possible combinations. The calibration procedure resulted in topographic class limits of 0.098 and 0.135 m m−1 for the slope and −0.184 × 10−3 m−1 and 0.184 × 10−3 m−1 for the curvature. For all 56 hillslope transects, the modelled mean peat thickness per topographic class after 12 000 years of simulation is compared to the mean peat thickness measured in the field. In total, three model parameters were calibrated: the decomposition rates under oxic and anoxic conditions and the acrotelm thickness. The goodness of fit of each parameter combination was evaluated based on the minimization of the root mean square error (RMSE) between the mean modelled and measured peat thickness per topographic class. Out of the 866 hillslope corings, 433 were selected randomly to be used as calibration points and the others as validation points. Since the spacing between the soil corings is slightly variable, the model results were resampled to the locations of the soil corings using linear interpolation. As an additional validation of the model behaviour, the simulated peat growth initiation dates for all model grid points can be evaluated against a dataset of basal radiocarbon dates for blanket peat deposits in the upland regions of Scotland with an elevation above 300 m a.s.l. (n=30) (Gallego-Sala et al., 2016).
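The topographic classification and the RMSE objective used for calibration can be sketched as follows, using the calibrated class limits quoted above; the function names and the exact integer encoding of the nine classes are assumptions.

```python
import numpy as np

# Calibrated class limits from the text (slope in m/m, curvature in 1/m)
SLOPE_LIMITS = (0.098, 0.135)
CURV_LIMITS = (-0.184e-3, 0.184e-3)

def topographic_class(slope, curvature):
    """Assign one of nine classes (3 slope bins x 3 curvature bins)."""
    s = sum(slope > lim for lim in SLOPE_LIMITS)      # 0, 1 or 2
    c = sum(curvature > lim for lim in CURV_LIMITS)   # 0, 1 or 2
    return 3 * s + c

def rmse_per_class(modelled, measured, classes):
    """RMSE between mean modelled and mean measured peat thickness,
    computed per topographic class as in the calibration procedure."""
    modelled, measured, classes = map(np.asarray, (modelled, measured, classes))
    diffs = []
    for cl in np.unique(classes):
        mask = classes == cl
        diffs.append(modelled[mask].mean() - measured[mask].mean())
    return float(np.sqrt(np.mean(np.square(diffs))))
```

Because the error is computed on class means rather than point by point, the objective rewards reproducing the thickness distribution across terrain types rather than individual corings.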
The dataset was expanded by incorporating 17 additional basal radiocarbon dates on peat deposits from within the study area (see Appendix A2 for dating details). For each of the 17 locations within the study area for which radiocarbon dates were available, the basal age was estimated using the clam (Classical Age–Depth Modelling of Cores from Deposits) 2.2 software package to construct age–depth models and extrapolate to the bottom of the peat layer (Blaauw, 2010). As the available initiation dates based on radiocarbon dating were not necessarily taken at the modelled transect locations, the comparison between modelled and observed peat growth initiation is based on the probability density curves using a bin width of 500 years. Depending on the number of available radiocarbon dates at each location, some estimates of the peat growth initiation date were based on the extrapolation of the age–depth model over large sections of the peat profile. To analyse the effect of the age–depth model extrapolation on the resultant probability density curve, an additional probability density curve was constructed containing only those radiocarbon dated samples which were directly measured at the bottom of a peat layer (n=20).

3 Results

## 3.1 Field measurements

In total, soil coring descriptions were made at 866 locations throughout the study area (detailed descriptions and location data can be found in the data availability section, Swinnen, 2019a, b). Based on the definition of peat used in this study, 57 % of the coring locations contained a surface peat layer, with a mean measured peat thickness over all coring locations of 36 cm and a maximum value of 3 m. The mean measured peat thickness per hillslope transect varies between 0 and 96 cm (Fig. 4). Overall, the transects with a high mean peat thickness can be found in the upstream parts of tributaries of the river Dee.
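The binned probability density comparison can be sketched with a simple histogram; the bin origin at 0 BP and the 12 000-year range are assumptions consistent with the study period, and the function name is illustrative.

```python
import numpy as np

def initiation_density(ages_bp, bin_width=500, t_max=12000):
    """Probability density of peat initiation ages (cal yr BP) in
    fixed-width bins, for comparing modelled initiation dates with
    the basal radiocarbon dates."""
    edges = np.arange(0, t_max + bin_width, bin_width)
    density, _ = np.histogram(ages_bp, bins=edges, density=True)
    return edges, density
```

The same binning applied to both the modelled grid-point initiation dates and the dated basal samples makes the two curves directly comparable despite the differing sample locations.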
Strong spatial variability occurs, even at small distances, making the peat cover throughout the area highly variable in both occurrence and mean thickness. The mean measured peat thickness per topographic class ranges from 23.6±31.9 cm for the class with a moderate slope and a convex curvature to 54.1±65.3 cm for the topographic class with a low slope and a straight curvature. Based on 35 randomly selected soil samples, which were identified in the field as peat, the median organic-carbon percentage was calculated to be 51.9±7.3 % and the dry bulk density was calculated to be 0.128±0.063 g cm−3. Figure 4Mean measured peat thickness at the hillslope transects (n=56). As the model is based on the principle of impeded drainage, there is an assumed relationship between the bedrock slope and the peat thickness at a certain location. The peat thickness data indicate that this relationship is present, showing a clear decrease in the maximum thickness with an increasing bedrock slope. However, the variability, especially at lower slope angles, indicates that shallow peat layers or even the absence of peat are observed for every slope value (Fig. 5a). In the Boussinesq equation, the bedrock slope is used instead of the surface slope. One could argue that the bedrock slope might not relate directly to the surface slope in, for example, local depressions filled with peat. However, the comparison of the bedrock slope and surface slope for all coring locations indicates a clear and strong relationship between the two, with the observed range in slope values strongly exceeding the differences between the bedrock and surface slope at a single location (Fig. 5b). The use of the bedrock slope is thus unlikely to introduce a bias in the modelled peat thickness values.
Figure 5(a) Scatterplot of the measured peat thickness (m) as a function of the bedrock slope (%) and (b) scatterplot of the surface slope (%) as a function of the bedrock slope (%) for all coring locations (n=866). The slope calculations are based on the measured coordinates of the coring locations. The pollen-based reconstructed land cover shows an Early Holocene woodland increase until the period 8.4–7.2 cal kyr BP (Fig. 6). This period is followed by a general woodland decline, with the woodland cover dropping below 5 % from 3.6 cal kyr BP onwards. The reconstructions for the individual pollen cores show an important east–west gradient in terms of maximal forest cover, with higher woodland percentages for the eastern and lower-lying part of the study area (Paterson, 2011). The woodland is a mixed type containing both coniferous species (Scots pine) and deciduous species (birch, rowan and aspen). A study by Fyfe et al. reconstructed the Holocene vegetation over Scotland using the REVEALS model for seven sites across the Scottish mainland, resulting in a maximal forest extent by 6.7 cal kyr BP (Fyfe et al., 2013). The data presented here show an earlier woodland cover decline around 7.2 cal kyr BP and a larger proportion of coniferous species in the forest composition. In comparison to the sites of Fyfe et al., the woodland cover appears to be relatively low, with an open heather landscape prevailing during the period under study, which can be attributed to the relatively high elevation of the study area. Figure 6Reconstructed vegetation proportions for the study area using the REVEALS model, based on seven pollen cores. ## 3.2 Model calibration Point-by-point calibration resulted in poor correspondence between the modelled and observed peat thickness. As a consequence, the model parameters were calibrated based on the mean peat thickness per topographic class. 
In total, nine topographic classes were constructed by classifying all coring locations based on the slope and the hillslope curvature. The best-fitting parameter combination results in an acrotelm thickness of 10 cm, an oxic decomposition rate at 10 °C of 2.15 % yr−1 and an anoxic decomposition rate at 10 °C of 0.24 % yr−1, which corresponds to an oxic/anoxic decomposition ratio of 9. These values correspond largely to those reported in the literature (Ballard et al., 2011; Clymo, 1984; Wu, 2012; Yu et al., 2001). The RMSE on the mean peat thickness for the best-fitting parameter combination is 9.53 cm (Fig. 7). Figure 7Modelled and measured mean peat thickness per topographic class for both the calibration and validation transects. ## 3.3 Blanket peatland development The calibrated model was run to simulate the long-term blanket peatland development since 12 kyr BP for the 56 hillslope transects. The reconstructed land cover history (Fig. 6) prescribes the vegetation evolution throughout the simulations. Overall, the model simulations indicate that mean peat accumulation rates were low until 9.5 kyr BP, with small variations between the different grid points (Fig. 8). Later, the accumulation rates increased and were high during two phases in the Early Holocene: 9.5–8.5 and 8–6.5 kyr BP. From 6 to 2 kyr BP, the rates were relatively stable and slightly positive on average. A long-term decrease in accumulation rates occurred between 2 and 1 kyr BP, after which the rates increased again to positive values around 0.5 kyr BP. The mean peat and carbon accumulation rates over all grid points and for the entire studied period are $\mathrm{0.03}×{\mathrm{10}}^{-\mathrm{3}}$ m yr−1 and 1.79 g C m−2 yr−1, respectively. The maximal mean peat and carbon accumulation rates over the entire studied period are $\mathrm{0.18}×{\mathrm{10}}^{-\mathrm{3}}$ m yr−1 and 11.95 g C m−2 yr−1 and occur at 7050 BP.
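The thickness-to-carbon conversion implicit in these paired rates can be approximated from the measured bulk density (0.128 g cm−3) and organic-carbon content (51.9 %). The sketch below is an illustration of the unit conversion, not the model's internal carbon bookkeeping, so it reproduces the reported carbon rates only approximately.

```python
# Sketch: converting a vertical peat accumulation rate (m yr-1) into a
# carbon accumulation rate (g C m-2 yr-1) using the field-measured bulk
# density and carbon fraction. Illustrative only; the model presumably
# tracks carbon mass directly, so small discrepancies remain.

BULK_DENSITY = 0.128 * 1e6   # g m-3 (from 0.128 g cm-3)
CARBON_FRACTION = 0.519      # g C per g dry peat

def carbon_rate(thickness_rate_m_per_yr):
    """Carbon accumulation rate in g C m-2 yr-1."""
    return thickness_rate_m_per_yr * BULK_DENSITY * CARBON_FRACTION

# The maximal rate of 0.18e-3 m yr-1 maps to ~11.96 g C m-2 yr-1,
# matching the reported 11.95 g C m-2 yr-1 closely.
print(round(carbon_rate(0.18e-3), 2))
```

The mean Holocene rate of 0.03×10−3 m yr−1 maps to roughly 2 g C m−2 yr−1 under the same factors, the same order as the reported 1.79 g C m−2 yr−1.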
Figure 8Simulated mean peat accumulation rate and standard deviation for all grid points. Figure 9 indicates the evolution of the mean peat thickness and corresponding organic-carbon mass over all hillslope transects. The mean peat thickness reaches a maximal value of 0.36 m around 2 kyr BP and declines slightly afterwards to the current value of 0.33 m or 22.04 kg C m−2. Overall, the peatland development occurs mostly before 6 kyr BP and shows limited variations afterwards, with a slight decline in mean peat thickness between 2 and 1 kyr BP. When only the model grid points with peat cover are considered, the maximal value for the modelled mean thickness is 0.66 m, declining to 0.61 m at present, or 40.25 kg C m−2. In total, the model simulates a peat cover of 61 % at the coring locations, comparable to the 57 % peat cover observed in the coring data. Figure 9Simulated mean peat thickness/carbon mass and standard deviation for all grid points (a) and for the grid points with a peat cover (b). Although the model calibration is solely based on the current peat thickness, the timing of peatland development can be evaluated based on the radiocarbon dating of peat profiles. Overall, the model simulations indicate that peat growth initiated at most locations between 9.75 and 6.75 kyr BP. The basal radiocarbon dates (n=47) show a more diffuse pattern for the Scottish upland areas (Fig. 10a). When only considering those dates for which the radiocarbon sample was taken at the bottom of the peat column (n=20), excluding the sites for which the initiation date was estimated by the extrapolation of an age–depth model to the bottom of the peat core, the probability density function shifts to older initiation ages and corresponds much better with the simulated dates (Fig. 10b).
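The probability density comparison above can be sketched as a simple binned density over the simulated period. The example dates below are invented for illustration; only the 500-year bin width comes from the text.

```python
# Sketch: turning a set of peat growth initiation dates (yr BP) into a
# probability density curve with a 500-year bin width, as used for the
# model-vs-radiocarbon comparison. Example dates are hypothetical.
import numpy as np

def initiation_density(dates_bp, bin_width=500, t_max=12_000):
    """Probability density (per year) of initiation dates over 0..t_max BP."""
    bins = np.arange(0, t_max + bin_width, bin_width)
    density, _ = np.histogram(dates_bp, bins=bins, density=True)
    return bins, density

modelled = [9500, 9100, 8700, 7400, 7000]   # hypothetical dates, yr BP
bins, dens = initiation_density(modelled)
assert np.isclose(dens.sum() * 500, 1.0)    # densities integrate to 1
```

Computing one such curve for the simulated grid points and one for the basal radiocarbon dates puts both datasets on a common, sample-size-independent footing.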
Figure 10Probability density function of the peat growth initiation dates based on the model simulation and the radiocarbon dating database for Scottish upland areas (above 300 m a.s.l., see Appendix A2) using a bin width of 500 years. (a) All dates. (b) All dates, excluding those for which a date was obtained by extrapolating an age–depth model to the bottom of the peat column. ## 3.4 Sensitivity analysis To study the model sensitivity to variations in parameter values, a sensitivity analysis is carried out. In total, seven parameters are varied over the ranges reported in the literature. The parameters broadly cluster into two groups: peat properties and environmental parameters (Table 1). Each parameter is varied stepwise, while all other parameters are kept at the standard value. The sensitivity is evaluated as the current mean peat thickness over all grid points after a simulation over a 12 000-year period using the pollen-based climate and land cover variations as environmental boundary conditions (Fig. 11). Overall, the model appears to be most sensitive to the peat decomposition rate, mean annual temperature and woodland cover. The peat thickness shows no sensitivity towards changes in catotelm conductivity. This is probably because the low conductivity values of the catotelm result in slow drainage compared to other components of the hillslope hydrology and thus in quasi-permanent water saturation of the catotelm. The acrotelm conductivity shows the same behaviour except for the lowest value. The acrotelm is under oxic conditions for most of the simulated values. Only for the lowest conductivity value does the water table rise above the catotelm–acrotelm boundary, resulting in lower decomposition rates and a higher mean peat thickness. Table 1Overview of the parameters used in the parameter sensitivity test, listing the standard value and the range over which the parameter is changed.
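The one-at-a-time procedure described above can be sketched as follows. `run_model` stands in for the full 12 000-year hillslope simulation; the toy response inside it exists only to make the sketch runnable, and all names are illustrative.

```python
# Sketch of the one-at-a-time sensitivity procedure: each parameter is
# varied stepwise over its literature range while all other parameters
# are kept at their standard value.

def run_model(params):
    # Placeholder for the full simulation: a toy response in which the
    # mean peat thickness (m) falls as the decomposition rate rises.
    return 0.33 * 2.15 / params["decomposition_rate"]

STANDARD = {"decomposition_rate": 2.15, "acrotelm_thickness": 0.10}
RANGES = {"decomposition_rate": [1.0, 2.15, 4.0]}

def one_at_a_time(standard, ranges):
    results = {}
    for name, values in ranges.items():
        results[name] = []
        for value in values:
            params = dict(standard)   # all other parameters stay standard
            params[name] = value
            results[name].append((value, run_model(params)))
    return results

print(one_at_a_time(STANDARD, RANGES))
```

Repeating this loop for all seven parameters and plotting the resulting mean thickness against each parameter value yields curves of the kind shown in Fig. 11.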
Figure 11Mean simulated peat thickness for all variables used in the parameter sensitivity test. The climate sensitivity of a model grid point appears to be dependent on the presence of a peat layer. Overall, the percentage of model grid points covered by peat appears to be more sensitive to precipitation changes than the mean peat thickness (Figs. 12, 13). This might be a result of the diplotelmic representation of the peat profile. The strong difference in saturated hydraulic conductivity for the two peat horizons results in minimal water table changes when the precipitation amount is varied. In other words, the use of a diplotelmic model results in a water table which fluctuates only slightly around the acrotelm–catotelm boundary. As a result, the peat accumulation rates and resulting peat thickness are not very sensitive to precipitation changes. However, this does not mean that the water table is located at the same depth for all grid points. Depending on the local hillslope topography, some locations will be fully saturated for most of the time, while at other locations, the water table will be located much lower in the peat profile. For the substrate, only a single saturated hydraulic conductivity value is used, which results in a more sensitive response to precipitation changes for the grid points which do not have a peat cover. Overall, the highest peat thickness and percentage peat cover can be found for the scenarios with a high temperature and precipitation amount. Figure 12Mean peat thickness for all combinations of temperature and precipitation changes. Figure 13Percentage of all model grid points with a peat cover of at least 10 cm for all combinations of precipitation and temperature variations. 4 Discussion Point-by-point calibration of the hillslope model resulted in a poor correspondence between modelled and observed peat thickness. 
Using the mean peat thickness per topographic class, however, allowed for a calibration of the model with an RMSE below 10 cm (Fig. 7). This indicates that a spatial peatland model with a simplified representation of the peat profile is unable to capture the local variability, but it can replicate the general peatland evolution on the landscape scale. Similar model behaviour has been found in sediment erosion modelling, where point-by-point comparison yields poor correspondence but where the mean value per topographic class performs sufficiently well (Peeters et al., 2006). The calibrated acrotelm thickness of 10 cm fits well within the range of 5–50 cm mentioned in the literature (Ballard et al., 2011; Belyea and Clymo, 2001; Clymo, 1984). The same holds true for the calibrated oxic decomposition rate of 2.15 % yr−1 at 10 °C, where values between 0.25 % and 7 % yr−1 can be found in the literature (Kleinen et al., 2012; Lucchese et al., 2010; Malmer and Wallen, 2004; Wu, 2012; Yu et al., 2001). In contrast, the calibrated anoxic decomposition rate of 0.239 % yr−1 at 10 °C is relatively high and exceeds values from other studies which find rates between $\mathrm{1.6}×{\mathrm{10}}^{-\mathrm{3}}$ and $\mathrm{2.6}×{\mathrm{10}}^{-\mathrm{2}}$ % yr−1. The high calibrated decomposition rates can be attributed to the fact that these rates within the model not only encompass peat decomposition within the soil profile but also other processes which decrease the peat thickness in the field, such as particulate organic-carbon export through gully development and shallow mass movements. These processes are not represented in the model, but they affect the peat thickness as it is measured in the field, leading to higher decomposition rates in the model calibration.
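The decomposition rates above are quoted at a reference temperature of 10 °C, which implies a temperature scaling inside the model. A common choice for such scaling is a Q10 relationship; the sketch below assumes Q10 = 2 purely for illustration, as the model's actual temperature function is not restated here.

```python
# Illustrative Q10 temperature scaling of a decomposition rate quoted
# at a 10 degC reference temperature. The Q10 value of 2.0 is an
# assumption for this sketch, not a calibrated model parameter.

def decomposition_rate(temp_c, rate_at_10c, q10=2.0):
    """Decomposition rate (% yr-1) at temperature temp_c (degC)."""
    return rate_at_10c * q10 ** ((temp_c - 10.0) / 10.0)

# The oxic rate of 2.15 % yr-1 at 10 degC doubles at 20 degC under Q10 = 2:
print(decomposition_rate(20.0, 2.15))  # -> 4.3
```

Under such a scaling, the calibrated oxic/anoxic ratio of about 9 is preserved at all temperatures as long as both rates share the same Q10.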
The MILLENNIA peatland model contains a peat erosion module, which calculates the total organic-carbon (TOC) export based on the local water table depth and hillslope runoff depth (Heinemeyer et al., 2010). However, this approach is not able to discriminate between the different erosion processes observed in the field and would require a large amount of additional data. As studies on soil erosion have demonstrated that a model complexity reduction is necessary to reduce the model uncertainty when applied at the landscape scale, peat erosion is not included in the model (Jetten et al., 2003; Van Rompaey and Govers, 2002). As a consequence, the calibrated decomposition rates must be regarded as model parameters encompassing a range of processes removing peat mass from a certain location rather than simply the decomposition of organic matter within the peat profile itself. Overall, the peatland model is not able to simulate the high peat thickness values (larger than 1.5 m) observed at some locations in the landscape. This can be partially attributed to the relatively high calibrated decomposition rates (Fig. 14). A second reason might be the fact that local depressions within the hillslope topography were filtered out to enable the use of the Boussinesq equation for the simulation of the subsurface flow. Since local depressions were found to contain thick peat deposits in the field, the filtering procedure on the hillslope topography reduces the potential of modelling high peat thickness values at these locations. Figure 14Frequency distribution of the measured and modelled peat thickness for all grid points. The range of simulated peat accumulation rates appears to be realistic, with periods of high mean accumulation rates coinciding with periods of temperature increase (Figs. 3, 8). More specifically, the mean peat accumulation rates were high during the period 10–8.5 kyr BP, when the mean annual temperature increased by 3.74 °C, resulting in a mean peat accumulation rate over all modelled locations of $\mathrm{0.064}×{\mathrm{10}}^{-\mathrm{3}}$ m yr−1 or 4.24 g C m−2 yr−1, and during the period 8–6.5 kyr BP, with a temperature increase of 2.34 °C and a mean peat accumulation rate of $\mathrm{0.104}×{\mathrm{10}}^{-\mathrm{3}}$ m yr−1 or 6.88 g C m−2 yr−1. It appears to be the temperature increase, rather than the temperature itself, which drives peat growth. The increased biomass production due to the temperature increase outweighs the lowering of the water table caused by higher evapotranspiration rates and creates an imbalance between production and decomposition, leading to positive accumulation rates and a peat thickness increase. In contrast to existing cohort models, which have been shown to be capable of capturing local variations in dynamics within the peat profile, the relatively simple diplotelmic model presented here cannot reproduce the local dynamics with the same degree of detail (Frolking et al., 2010; Heinemeyer et al., 2010; Morris et al., 2012). However, the simple representation of the model domain leads to a decrease in computation time which allows for the application of the model over large spatial and temporal domains. In combination with the pollen-based climate and land cover reconstructions, it allows for studying peatland development on the landscape scale, rather than at the scale of a single peat bog or peat profile as is often the case for the cohort models, making it possible to answer different research questions. ## Peat growth initiation Figure 15Reconstructed peat accumulation rates based on all available radiocarbon dates within the study area and the mean peat accumulation rate.
A study for the British Isles based on an envelope climate model for blanket peatlands finds a contraction in the area suitable for blanket peatland development in eastern Scotland since 6 kyr BP, and other studies find accumulation rates after 6 kyr BP to be relatively low (Gallego-Sala et al., 2016; Simmons and Innes, 1988). In this study, accumulation rates decrease from 8 to 6 kyr BP onwards (Fig. 8). Overall, the mean accumulation rate remains positive until approximately 2 kyr BP, but it never reaches the high values which occurred during the Early Holocene. This results in a slowdown of peatland development and carbon storage after 6 kyr BP (Fig. 9). The asymptotic behaviour of the peat thickness evolution at the landscape scale, stabilizing after 6 kyr BP, is also found in other studies modelling long-term peatland development at the local scale. While the modelled peat thickness trajectory is dependent on the specific conditions (climate, land cover, topography, etc.), it is clear that the carbon sequestering potential of a peatland has its limits at millennial timescales, as the balance between biomass production and decomposition comes into equilibrium with the environmental conditions (Frolking et al., 2010; Heinemeyer et al., 2010). The conclusion that the blanket peatland development in the Upper Dee area can be attributed to climate warming, independent of an increase in precipitation, as demonstrated by the sensitivity analysis, is in line with a study by Morris et al. (2018), who compared a large dataset of peatland initiation dates across the globe with GCM (general circulation model) paleoclimate simulations, concluding that peatland initiation in formerly glaciated areas can be attributed to rising growing season temperatures.
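The equilibrium argument above, production balancing decomposition at millennial timescales, can be made concrete with a minimal Clymo-style mass balance, dh/dt = p - k * h, where new peat is added at a constant rate p and the existing column loses thickness at rate k. The parameter values below are illustrative, not the calibrated ones.

```python
# Minimal sketch of asymptotic peat growth: thickness h approaches the
# equilibrium p / k and then stalls, mirroring the stabilization after
# 6 kyr BP seen in the simulations. Parameter values are illustrative.

def simulate_thickness(p=1.0e-3, k=2.4e-3, years=12_000, dt=1.0):
    """Peat thickness (m) after `years` of growth from bare ground."""
    h = 0.0
    for _ in range(int(years / dt)):
        h += (p - k * h) * dt   # forward Euler step of dh/dt = p - k*h
    return h

# Thickness converges on p / k (~0.42 m here), however long the run.
print(round(simulate_thickness(), 2))
```

Even this toy model shows why the carbon sink weakens over time: as h grows, total decomposition k * h grows with it until it cancels the constant input p.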
Additionally, a recent study on buried peat layers indicates that in northern latitudes (> 40° N) peat growth is extensive during warm periods such as the last interglacial and the marine isotope stage (MIS) 3 interstadial periods (57–29 kyr BP) (Treat et al., 2019). It is clear that anaerobic conditions are required for the development of peat soils. However, regional climatic changes towards wetter conditions do not seem to be necessary for blanket peatland initiation. Instead, local factors driving the hydrology, such as hillslope topography and soil properties, determine where anoxic conditions are established and blanket peatlands can develop (Morris et al., 2018). The model simulations do not support the original hypothesis on the origin of the blanket peatlands, linking the peatland development to a deforestation-driven change in hillslope hydrology (Moore, 1973). Firstly, both the available basal radiocarbon dates and the simulated initiation dates indicate a shift towards peat soils during a period of increasing or stable woodland cover (Figs. 6, 10). Secondly, the parameter sensitivity analysis indicates that a decrease in tree cover, either by natural or anthropogenic causes, decreases the peat growth potential because the decrease in evapotranspiration due to a loss of tree cover is outweighed by the reduction in biomass production under the environmental conditions present in the study area. Tipping (2008) studied the Holocene blanket peatland development in five upland and northern sites in Scotland using a combination of geomorphic, archaeological and radiocarbon data, resulting in the hypothesis that blanket peatlands were common over large parts of the Scottish Highlands within the first few millennia of the Holocene either due to rapid soil development or climatic changes.
This study supports Tipping's hypothesis, as shown by the model simulations and peatland initiation dates, and it provides evidence that a changing climate (increasing mean annual temperature) was the main driver of blanket peatland development. Although the simulated mean peat accumulation rates remain at low levels after 6 kyr BP, this does not mean peat profiles are unable to develop but rather that the peatlands at a landscape scale are in dynamic equilibrium with the stabilizing Holocene climate. This can be demonstrated using a forced model simulation where all peat soils are removed at 6 kyr BP. The resultant simulated peat thickness evolution indicates that peat starts to develop immediately after the peat removal (Fig. 16). After approximately 1500 years, the mean peat thickness over the study area again reaches the values of the standard model run. This indicates that the model can simulate peatland regeneration at locations which are impacted by the removal of peat cover either by natural processes (e.g. by shallow mass movements or gullies) or following anthropogenic peat cutting. Figure 16Simulated mean peat thickness/carbon mass and standard deviation for all grid points (a) and for the grid points with peat cover (b), with a removal of all peat cover within the study area at 6 kyr BP. 5 Conclusions A new process-based model was presented to study long-term blanket peatland development along hillslopes. The simulations for the past 12 000 years indicate that a relatively simple diplotelmic model is able to capture long-term peatland dynamics on the landscape scale. However, point-by-point comparison still shows poor results, which can be attributed to the use of a single set of calibrated parameters and the idealized representation of the model domain. 
Overall, both the field data and model simulations indicate that the blanket peatlands in the Upper Dee area developed mostly during the Atlantic period, with a peak in peat growth initiation dates around 9 kyr BP. The timing of peatland initiation together with the results of the sensitivity analysis support the hypothesis of a climate-driven origin of the blanket peatlands in the Scottish Highlands, with the peatland development driven by a long-term regional warming trend during the Early Holocene. A higher woodland cover leads to an increase in peat growth potential, contradicting the original hypothesis of Moore (1973), which identified deforestation as a potential driver of blanket peatland development. In more recent periods, the relatively stable climate and land cover within the study area since 6 kyr BP result in a stabilization of peatland development, indicating that the study area served as a terrestrial carbon sink mainly during the Atlantic period and has stabilized during the Late Holocene. Data availability. The supplementary data to this article consist of two datasets. A list of all soil corings, including the coring location, elevation and measured peat depth, is available online at https://doi.org/10.17632/pxszz2wzny.1 (Swinnen, 2019a). The detailed stratigraphic descriptions for each soil coring are available online at https://doi.org/10.17632/ms484mrjj5.1 (Swinnen, 2019b). Appendix A Table A1MIDAS (Met Office Integrated Data Archive System) weather stations used for the construction of the regression equations for orographic temperature and precipitation corrections. Table A2Radiocarbon dating results. Calibrated ages were calculated using the software OxCal 4.3 and the IntCal13 calibration curve (Bronk Ramsey, 2009; Reimer et al., 2013). Author contributions. The conceptualization and methodology development of this project was carried out by WS, NB and GV.
The field work was performed by WS, NB and GV. WS carried out the lab work, developed the model code and performed the model simulations. GV and NB supervised the research. The writing of the paper was carried out by WS, NB and GV. Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. The authors thank the Mar Estate, Mar Lodge Estate and Invercauld Estate for permission to access the area. The authors thank Teun Daniëls, Sofie De Geeter, Yasmine Hunter, Ellen Jennen, Vincent Lenaerts and Remi Swinnen for their assistance during the field campaigns. Danny Paterson and Richard Tipping are thanked for sharing their pollen data from the Upper Dee area. The pollen data of Birks (1969) and Huntley (1994) were extracted from the European Pollen Database (EPD; http://www.europeanpollendatabase.net/, last access: 1 June 2018), and the work of the data contributors and the EPD community is gratefully appreciated. Financial support. This research has been supported by the Fonds Wetenschappelijk Onderzoek (grant nos. 1167019N and G0A6317N). Review statement. This paper was edited by Alexey V. Eliseev and reviewed by Andreas Heinemeyer and one anonymous referee. References Alexandrov, G. A., Brovkin, V. A., and Kleinen, T.: The influence of climate on peatland extent in Western Siberia since the Last Glacial Maximum, Sci. Rep., 6, 6–11, https://doi.org/10.1038/srep24784, 2016. Baird, A. J., Morris, P. J., and Belyea, L. R.: The DigiBog peatland development model 1: rationale, conceptual model, and hydrological basis, Ecohydrology, 5, 242–255, https://doi.org/10.1002/eco.2, 2012. Ballantyne, C. K.: After the ice: Holocene geomorphic activity in the Scottish Highlands, Scottish Geogr. J., 124, 8–52, https://doi.org/10.1080/14702540802300167, 2008. Ballard, C. E., McIntyre, N., Wheater, H. S., Holden, J., and Wallage, Z.
E.: Hydrological modelling of drained blanket peatland, J. Hydrol., 407, 81–93, https://doi.org/10.1016/j.jhydrol.2011.07.005, 2011. Beilman, D. W. and Yu, Z.: Differential Response of Peatland Types to Climate: Modeling Peat Accumulation in Continental Western Canada, 38–86, 2001. Belyea, L. R. and Clymo, R. S.: Feedback control of the rate of peat formation, P. R. Soc. B, 268, 1315–1321, https://doi.org/10.1098/rspb.2001.1665, 2001. Belyea, L. R. and Malmer, N.: Carbon sequestration in peatland: patterns and mechanisms of response to climate change, Glob. Change Biol., 10, 1043–1052, 2004. Birks, H. H.: Studies in the vegetational history of Scotland, University of Cambridge, 1969. Blaauw, M.: Methods and code for "classical" age-modelling of radiocarbon sequences, Quat. Geochronol., 5, 512–518, https://doi.org/10.1016/j.quageo.2010.01.002, 2010. Bronk Ramsey, C.: Bayesian Analysis of Radiocarbon Dates, Radiocarbon, 51, 337–360, https://doi.org/10.1017/S0033822200033865, 2009. Cairns, A., Dutch, M. E., Guy, E. M., and Stout, J. D.: Effect of irrigation with municipal water or sewage effluent on the biology of soil cores: I. Introduction, total microbial populations, and respiratory activity, New Zeal. J. Agr. Res., 21, 1–9, https://doi.org/10.1080/00288233.1978.10427377, 1978. Campforts, B. and Govers, G.: Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law, J. Geophys. Res.-Earth, 120, 1189–1205, https://doi.org/10.1002/2014JF003376, 2015. Carroll, M. J., Heinemeyer, A., Pearce-Higgins, J. W., Dennis, P., West, C., Holden, J., Wallage, Z. E., and Thomas, C. D.: Hydrologically driven ecosystem processes determine the distribution and persistence of ecosystem-specialist predators under climate change, Nat. Commun., 6, 1–10, https://doi.org/10.1038/ncomms8851, 2015. Chapman, S. J. and Thurlow, M.: Peat respiration at low temperatures, Soil Biol.
Biochem., 30, 1013–1021, https://doi.org/10.1016/S0038-0717(98)00009-1, 1998. Clymo, R. S.: The Limits to Peat Bog Growth, Philos. T. R. Soc. B, 303, 605–654, https://doi.org/10.1098/rstb.1984.0002, 1984. Cunliffe, A. M., Baird, A. J., and Holden, J.: Hydrological hotspots in blanket peatlands: Spatial variation in peat permeability around a natural soil pipe, Water Resour. Res., 49, 5342–5354, https://doi.org/10.1002/wrcr.20435, 2013. Dai, T. S. and Sparling, J. H.: Measurement of hydraulic conductivity of peats, Can. J. Soil Sci., 53, 21–26, 1973. Dunn, S. M., Langan, S. J., and Colohan, R. J. E.: The impact of variable snow pack accumulation on a major Scottish water resource, Sci. Total Environ., 265, 181–194, https://doi.org/10.1016/S0048-9697(00)00658-6, 2001. Ellis, C. J. and Tallis, J. H.: Climatic control of blanket mire development at Kentra Moss, north-west Scotland, J. Ecol., 88, 869–889, https://doi.org/10.1046/j.1365-2745.2000.00495.x, 2000. Everest, J. and Kubik, P.: The deglaciation of eastern Scotland: Cosmogenic 10Be evidence for a Lateglacial stillstand, J. Quat. Sci., 21, 95–104, https://doi.org/10.1002/jqs.961, 2006. Frolking, S., Roulet, N. T., Tuittila, E., Bubier, J. L., Quillet, A., Talbot, J., and Richard, P. J. H.: A new model of Holocene peatland net primary production, decomposition, water balance, and peat accumulation, Earth Syst. Dynam., 1, 1–21, https://doi.org/10.5194/esd-1-1-2010, 2010. Fyfe, R. M., Twiddle, C., Sugita, S., Gaillard, M. J., Barratt, P., Caseldine, C. J., Dodson, J., Edwards, K. J., Farrell, M., Froyd, C., Grant, M. J., Huckerby, E., Innes, J. B., Shaw, H., and Waller, M.: The Holocene vegetation cover of Britain and Ireland: Overcoming problems of scale and discerning patterns of openness, Quaternary Sci. Rev., 73, 132–148, https://doi.org/10.1016/j.quascirev.2013.05.014, 2013. Gallego-Sala, A. V., Charman, D. J., Harrison, S. P., Li, G., and Prentice, I. 
C.: Climate-driven expansion of blanket bogs in Britain during the Holocene, Clim. Past, 12, 129–136, https://doi.org/10.5194/cp-12-129-2016, 2016. Gallego-Sala, A. V. and Prentice, I. C.: Blanket peat biome endangered by climate change, Nat. Clim. Change, 3, 152–155, https://doi.org/10.1038/nclimate1672, 2013. Garnett, M. H.: Carbon storage in Pennine moorland and response to change, University of Newcastle upon Tyne, 302 pp., 1998. Gorham, E.: Northern Peatlands: Role in the Carbon Cycle and Probable Responses to Climatic Warming, Ecol. Appl., 1, 182–195, 1991. Heinemeyer, A., Croft, S., Garnett, M. H., Gloor, E., Holden, J., Lomas, M. R., and Ineson, P.: The MILLENNIA peat cohort model: Predicting past, present and future soil carbon budgets and fluxes under changing climates in peatlands, Clim. Res., 45, 207–226, https://doi.org/10.3354/cr00928, 2010. Hilbert, D. W., Roulet, N., and Moore, T.: Modelling and analysis of peatlands as dynamical systems, J. Ecol., 88, 230–242, 2000. Hilberts, A. G. J., van Loon, E. E., Troch, P. A., and Paniconi, C.: The hillslope-storage Boussinesq model for non-constant bedrock slope, J. Hydrol., 291, 160–173, https://doi.org/10.1016/J.JHYDROL.2003.12.043, 2004. Holden, J. and Burt, T. P.: Infiltration, runoff and sediment production in blanket peat catchments: Implications of field rainfall simulation experiments, Hydrol. Process., 16, 2537–2557, 2002. Holden, J. and Burt, T. P.: Hydraulic conductivity in upland blanket peat: measurement and variability, Hydrol. Process., 17, 1227–1237, https://doi.org/10.1002/hyp.1182, 2003. Holden, J., Wallage, Z. E., Lane, S. N., and McDonald, A. T.: Water table dynamics in undisturbed, drained and restored blanket peat, J. Hydrol., 402, 103–114, https://doi.org/10.1016/j.jhydrol.2011.03.010, 2011. Huang, C. C.: Holocene landscape development and human impact in the Connemara Uplands, Western Ireland, J. Biogeogr., 29, 153–165, 2002.
Hunter, Y.: A Holocene paleo-ecological analysis of peat stratigraphy in the Upper-Dee valley (Scotland) and the Dijle catchment (Belgium), KU Leuven, 86 pp., 2016. Huntley, B.: Late Devensian and Holocene palaeoecology and palaeoenvironments of the Morrone Birkwoods, Aberdeenshire, Scotland, J. Quaternary Sci., 9, 311–336, 1994. Ingram, H. A. P.: Size and shape in raised mire ecosystems: a geophysical model, Nature, 297, 300–303, https://doi.org/10.1038/297300a0, 1982. Ingram, H. A. P.: Hydrology, in: Mires: Swamp, Bog, Fen and Moor, A. General Studies, edited by: Gore, A. J. P., Elsevier Scientific Publishing Company, Amsterdam, 67–158, 1983. Jetten, V., Govers, G., and Hessel, R.: Erosion models: Quality of spatial predictions, Hydrol. Process., 17, 887–900, https://doi.org/10.1002/hyp.1168, 2003. Jones, P. S., Stevens, D. P., Blackstock, T. H., Burrows, C. R., and Howe, E. A. (Eds.): Priority Habitats of Wales: A Technical Guide, Bangor, 140 pp., 2003. Kleinen, T., Brovkin, V., and Schuldt, R. J.: A dynamic model of wetland extent and peat accumulation: Results for the Holocene, Biogeosciences, 9, 235–248, https://doi.org/10.5194/bg-9-235-2012, 2012. Lafleur, P. M., Hember, R. A., Admiral, S. W., and Roulet, N. T.: Annual and seasonal variability in evapotranspiration and water table at a shrub-covered bog in southern Ontario, Canada, Hydrol. Process., 19, 3533–3550, https://doi.org/10.1002/hyp.5842, 2005. Lieth, H.: Primary production: Terrestrial ecosystems, Hum. Ecol., 1, 303–332, 1973. Lieth, H. and Box, E. O.: Evapotranspiration and primary productivity, Publ. Climatol., 25, 37–46, 1972. Lindsay, R.: Bogs: The Ecology, Classification and Conservation of Ombrotrophic Mires, 124 pp., 1995. Lucchese, M., Waddington, J. M., Poulin, M., Pouliot, R., Rochefort, L., and Strack, M.: Organic matter accumulation in a restored peatland: Evaluating restoration success, Ecol. Eng., 36, 482–488, https://doi.org/10.1016/j.ecoleng.2009.11.017, 2010. 
Maizels, J.: The physical background of the River Dee, in: The biology and management of the river Dee, edited by: Jenkins, D., Huntingdon, 7–22, 1985. Malmer, N. and Wallen, B.: Input rates, decay losses and accumulation rates of carbon in bogs during the last millennium: internal processes and environmental changes, The Holocene, 14, 111–117, 2004. Maurer, E.: Spatial variation in organic carbon storage in Holocene floodplain soils, Msc thesis, 132 pp., 2015. Mauri, A., Davis, B. A. S., Collins, P. M., and Kaplan, J. O.: The climate of Europe during the Holocene: A gridded pollen-based reconstruction and its multi-proxy evaluation, Quaternary Sci. Rev., 112, 109–127, https://doi.org/10.1016/j.quascirev.2015.01.013, 2015. Mazier, F., Gaillard, M. J., Kuneš, P., Sugita, S., Trondman, A. K., and Broström, A.: Testing the effect of site selection and parameter setting on REVEALS-model estimates of plant abundance using the Czech Quaternary Palynological Database, Rev. Palaeobot. Palynol., 187, 38–49, https://doi.org/10.1016/j.revpalbo.2012.07.017, 2012. Met Office: Met Office Integrated Data Archive System (MIDAS) Land and Marine Surface Stations Data (1853-current), available at: http://catalogue.ceda.ac.uk/uuid/220a65615218d5c9cc9e4785a3234bd0 (last access: 16 January 2019), 2012. Moore, P. D.: The Influence of Prehistoric Cultures upon the Initiation and Spread of Blanket Bog in Upland Wales, Nature, 241, 350–353, 1973. Morris, P. J., Baird, A. J., and Belyea, L. R.: The DigiBog peatland development model 2: ecohydrological simulations in 2D, Ecohydrology, 5, 256–268, https://doi.org/10.1002/eco.2, 2012. Morris, P. J., Baird, A. J., Young, D. M., and Swindles, G. T.: Untangling climate signals from autogenic changes in long-term peatland development, Geophys. Res. Lett., 42, 10788–10797, https://doi.org/10.1002/2015GL066824, 2015. Morris, P. J., Swindles, G. T., Valdes, P. J., Ivanovic, R. F., Gregoire, L. J., Smith, M. W., Tarasov, L., Haywood, A. 
M., and Bacon, K. L.: Global peatland initiation driven by regionally asynchronous warming, P. Natl. Acad. Sci. USA, 115, 4851–4856, https://doi.org/10.1073/pnas.1717838115, 2018. Paniconi, C., Troch, P. A., Van Loon, E. E., and Hilberts, A. G. J.: Hillslope-storage Boussinesq model for subsurface flow and variable source areas along complex hillslopes: 2. Intercomparison with a three-dimensional Richards equation model, Water Resour. Res., 39, 1317, https://doi.org/10.1029/2002WR001730, 2003. Parry, L. E., Charman, D. J., and Noades, J. P. W.: A method for modelling peat depth in blanket peatlands, Soil Use Manag., 28, 614–624, https://doi.org/10.1111/j.1475-2743.2012.00447.x, 2012. Paterson, D.: The Holocene history of Pinus sylvestris woodland in the Mar Lodge Estate, Cairngorms, Eastern Scotland, University of Stirling, 363 pp., 2011. Peeters, I., Rommens, T., Verstraeten, G., Govers, G., Van Rompaey, A., Poesen, J., and Van Oost, K.: Reconstructing ancient topography through erosion modelling, Geomorphology, 78, 250–264, https://doi.org/10.1016/j.geomorph.2006.01.033, 2006. Reimer, P. J., Bard, E., Bayliss, A., Beck, J. W., Blackwell, P. G., Ramsey, C. B., Buck, C. E., Cheng, H., Edwards, R. L., Friedrich, M., Grootes, P. M., Guilderson, T. P., Haflidason, H., Hajdas, I., Hatté, C., Heaton, T. J., Hoffmann, D. L., Hogg, A. G., Hughen, K. A., Kaiser, K. F., Kromer, B., Manning, S. W., Niu, M., Reimer, R. W., Richards, D. A., Scott, E. M., Southon, J. R., Staff, R. A., Turney, C. S. M., and van der Plicht, J.: IntCal13 and Marine13 Radiocarbon Age Calibration Curves 0–50,000 Years cal BP, Radiocarbon, 55, 1869–1887, 2013. Rosa, E. and Larocque, M.: Investigating peat hydrological properties using field and laboratory methods: application to the Lanoraie peatland complex (southern Quebec, Canada), Hydrol. Process., 22, 1866–1875, https://doi.org/10.1002/hyp, 2008. 
Rosswall, T.: Decomposition of plant litter at Stordalen – a summary, in: International Biological Programme: Swedish tundra biome progress report 14, 124–133, 1973. Simmons, I. G. and Innes, J. B.: Late Quaternary vegetational history of the North York Moors. VIII. Correlation of Flandrian II litho- and pollen stratigraphy at North Gill, Glaisdale Moor, J. Biogeogr., 15, 249–272, 1988. Smith, J. S.: Land use within the catchment of the river Dee, in: The biology and management of the river Dee, edited by: D. Jenkins, Huntingdon, 29–33, 1985. Sugita, S.: Theory of quantitative reconstruction of vegetation I: Pollen from large sites REVEALS regional vegetation composition, Holocene, 17, 229–241, https://doi.org/10.1177/0959683607075837, 2007. Swinnen, W.: Soil coring list, https://doi.org/10.17632/pxszz2wzny.1, 2019a. Swinnen, W.: Soil coring description, https://doi.org/10.17632/ms484mrjj5.1, 2019b. Szumigalski, A. R. and Bayley, S. E.: Net aboveground primary production along a peatland gradient in central Alberta in relation to environmental factors, Ecoscience, 4, 385–393, https://doi.org/10.1080/11956860.1997.11682417, 1997. Tetzlaff, D. and Soulsby, C.: Sources of baseflow in larger catchments – Using tracers to develop a holistic understanding of runoff generation, J. Hydrol., 359, 287–302, https://doi.org/10.1016/j.jhydrol.2008.07.008, 2008. Tipping, R.: Blanket peat in the Scottish Highlands: Timing, cause, spread and the myth of environmental determinism, Biodivers. Conserv., 17, 2097–2113, https://doi.org/10.1007/s10531-007-9220-4, 2008. Treat, C. C., Kleinen, T., Broothaerts, N., Dalton, A. S., Dommain, R., Douglas, T. A., Drexler, J. Z., Finkelstein, S. A., Grosse, G., Hope, G., Hutchings, J., Jones, M. C., Kuhry, P., Lacourse, T., Lähteenoja, O., Loisel, J., Notebaert, B., Payne, R. J., Peteet, D. M., Sannel, A. B. K., Stelling, J. M., Strauss, J., Swindles, G. T., Talbot, J., Tarnocai, C., Verstraeten, G., Williams, C. 
J., Xia, Z., Yu, Z., Väliranta, M., Hättestrand, M., Alexanderson, H., and Brovkin, V.: Widespread global peatland establishment and persistence over the last 130,000 y, P. Natl. Acad. Sci. USA, 116, 4822–4827, https://doi.org/10.1073/pnas.1813305116, 2019. Trondman, A. K., Gaillard, M. J., Sugita, S., Björkman, L., Greisman, A., Hultberg, T., Lagerås, P., Lindbladh, M., and Mazier, F.: Are pollen records from small sites appropriate for REVEALS model-based quantitative reconstructions of past regional vegetation? An empirical test in southern Sweden, Veg. Hist. Archaeobot., 25, 131–151, https://doi.org/10.1007/s00334-015-0536-9, 2016. Van Rompaey, A. J. J. and Govers, G.: Data quality and model complexity for regional scale soil erosion prediction, Int. J. Geogr. Inf. Sci., 16, 663–680, https://doi.org/10.1080/13658810210148561, 2002. Warren, G., Fraser, S., Clarke, A., Driscoll, K., Mitchell, W., Noble, G., Paterson, D., Schulting, R., Tipping, R., Verbaas, A., Wilson, C., and Wickham-Jones, C.: Little House in the Mountains? A small Mesolithic structure from the Cairngorm Mountains, Scotland, J. Archaeol. Sci. Rep., 18, 936–945, https://doi.org/10.1016/j.jasrep.2017.11.021, 2018. Wieder, R. K. and Yavitt, J. B.: Peatlands and global climate change: Insights from comparative studies of sites situated along a latitudinal gradient, Wetlands, 14, 229–238, https://doi.org/10.1007/BF03160660, 1994. Williams, J. R., Dyke, P. T., and Jones, C. A.: EPIC – A model for assessing the effects of erosion on soil productivity, in: Analysis of Ecological Systems: State-of-the-Art in Ecological Modelling, edited by: Lauenroth, W. K., Skogerboe, G. V., and Flug, M., Elsevier, p. 971, 1983. Wu, J.: Response of peatland development and carbon cycling to climate change: A dynamic system modeling approach, Environ. Earth Sci., 65, 141–151, https://doi.org/10.1007/s12665-011-1073-1, 2012. Xu, J., Morris, P. 
J., Liu, J., and Holden, J.: PEATMAP: Refining estimates of global peatland distribution based on a meta-analysis, Catena, 160, 134–140, https://doi.org/10.1016/j.catena.2017.09.010, 2018. Yu, Z., Turetsky, M. R., Campbell, I. D., and Vitt, D. H.: Modelling long-term peatland dynamics. II. Processes and rates as inferred from litter and peat-core data, Ecol. Modell., 145, 159–173, https://doi.org/10.1016/S0304-3800(01)00387-8, 2001. Yu, Z., Beilman, D. W., Frolking, S., MacDonald, G. M., Roulet, N. T., Camill, P., and Charman, D. J.: Peatlands and Their Role in the Global Carbon Cycle, Eos, Trans. Am. Geophys. Union, 92, 97, https://doi.org/10.1029/2011EO120001, 2011.
{}
Question Fri February 25, 2011

# what are phasors? can u pls explain what is phasor diagram

Fri February 25, 2011

Dear student,

Phasors are rotating vectors used to represent an alternating current or some other sinusoidally varying quantity. A sine wave with amplitude $A$, angular frequency $\omega$ and phase $\theta$ can be represented by a single complex number whose amplitude and phase are time-invariant; this analytic representation is called a phasor. Because a phasor captures all three parameters in one object, it simplifies many calculations: linear differential equations in the time domain reduce to ordinary algebra in phasor form. A phasor diagram may show one wave or two waves.

## The Phasor Diagram Explained

A phasor diagram shows the relationship between rotating AC voltages and currents. Each quantity is represented by a rotating arrow; the projection of the arrow onto the vertical axis is proportional to the sine of the angle $\omega t$ and gives the instantaneous value. The signal seen on an oscilloscope corresponds to this projection traced out in time: a running sinusoid.

## Phasor Diagrams Explained in One Wave

A power source can supply a voltage as a sine wave of a particular frequency (a general signal can be considered the sum of sine waves of different frequencies). The amplitude is directly related to the vector length: a revolving vector of length $A$ rotates with angular velocity $\omega$, and the phase constant at $t = 0$ is $\delta$:

$V(t) = A\sin(\omega t + \delta)$

## Phasor Diagrams Explained in Two Waves

One wave can lead or lag the other: if the blue revolving vector is ahead of the red revolving vector, the blue wave is leading; likewise, the red vector is lagging the blue vector. Adding and subtracting two sine waves by algebraic and trigonometric manipulation then reduces to adding and subtracting their phasors.
A Simple Problem of Vector Addition: find $A$ and $\phi$ such that $A \sin(\omega t + \phi) = 6 \sin(\omega t + 40^{\circ}) + 3 \sin(\omega t + 120^{\circ})$
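The vector-addition problem above can be checked numerically: represent each sinusoid $A\sin(\omega t + \phi)$ by the complex phasor $A e^{j\phi}$, add them, and read off the magnitude and angle of the sum. A quick sketch in Python (the function name is mine, not from the original answer):

```python
import cmath
import math

def add_phasors(terms):
    """Sum sinusoids A*sin(wt + phi), given as (amplitude, phase_in_degrees)
    pairs, by adding their complex phasors A*exp(j*phi)."""
    total = sum(a * cmath.exp(1j * math.radians(deg)) for a, deg in terms)
    return abs(total), math.degrees(cmath.phase(total))

# 6 sin(wt + 40 deg) + 3 sin(wt + 120 deg)
amp, phase = add_phasors([(6, 40), (3, 120)])
print(amp, phase)  # roughly 7.16 and 64.4 degrees
```

This reproduces the textbook procedure of resolving each phasor into horizontal and vertical components and recombining them.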
pvlib.iotools.read_crn(filename)

Read a NOAA USCRN fixed-width file into a pandas DataFrame. The CRN is described in [1] and [2].

Parameters: filename (str, path object, or file-like) – filepath or URL of the fixed-width file to read.

Returns: data (DataFrame) – A DataFrame with a DatetimeIndex and all of the variables in the file.

Notes

CRN files contain 5 minute averages labeled by the interval ending time. Here, missing data is flagged as NaN, rather than the lowest possible integer for a field (e.g. -999 or -99). Air temperature is in deg C. Wind speed is in m/s at a height of 1.5 m above ground level. Variables corresponding to standard pvlib variables are renamed, e.g. SOLAR_RADIATION becomes ghi. See the pvlib.iotools.crn.VARIABLE_MAP dict for the complete mapping.

CRN files occasionally have a set of null characters on a line instead of valid data. This function drops those lines. Sometimes these null characters appear on a line of their own and sometimes they occur on the same line as valid data. In the latter case, the valid data will not be returned. Users may manually remove the null characters and reparse the file if they need that line.

References

[1] U.S. Climate Reference Network https://www.ncdc.noaa.gov/crn/qcdatasets.html

[2] Diamond, H. J. et al., 2013: U.S. Climate Reference Network after one decade of operations: status and assessment. Bull. Amer. Meteor. Soc., 94, 489-498. DOI: 10.1175/BAMS-D-12-00170.1
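The sentinel-to-NaN behaviour described in the notes can be illustrated without pvlib or pandas: each field has a fill value (e.g. -9999 or -99) that should be replaced by NaN before analysis. The field names and fill values below are hypothetical, chosen only to mirror the idea:

```python
import math

# Hypothetical per-field fill values mimicking CRN's "lowest possible
# integer" missing-data flags; the real files define these per column.
FILL_VALUES = {"t_air": -9999.0, "wind_1_5": -99.0}

def flag_missing(row):
    """Return a copy of `row` with sentinel fill values replaced by NaN."""
    return {key: (float("nan") if value == FILL_VALUES.get(key) else value)
            for key, value in row.items()}

row = flag_missing({"t_air": -9999.0, "wind_1_5": 2.4})
print(row)  # t_air becomes nan, wind_1_5 stays 2.4
```

pvlib applies the same idea column-wise on the parsed DataFrame.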
# Korea postpones currency swap decision

The Bank of Korea (BOK) postponed a decision on Wednesday 15 June on a proposal to lend some of its foreign exchange reserves to domestic banks through currency swaps, the central bank said. The central bank's policymakers had been expected to approve the currency swaps at a meeting that day as a way of managing its reserves more effectively and boosting South Korea's investment activity overseas. The BOK said it had delayed a decision on the plan because it needs to be deliberated further. Under the
# SEABOX - Editorial

#1

Author: Sergey Nagin
Tester: Istvan Nagy
Editorialist: Misha Chorniy

Medium-hard DP (Knapsack)

# Problem Statement

You are given a fragment of code and a binary 3-dimensional array $A[0..N-1, 0..N-1, 0..N-1]$, where $N$ can only be a power of $2$, $1 <= N <= 32$. Consider all 3-dimensional arrays which can be created by changing no more than $K$ elements of $A$, and apply this code to each of them. We are then interested in finding the minimal and maximal value of the resulting function over all such arrays. Below is the fragment of code given in the statement:

```
int F(box A, int dx = 0, int dy = 0, int dz = 0, int size = N) {
    vector<int> B;
    for (int i = dx; i < dx + size; i++) {
        for (int j = dy; j < dy + size; j++) {
            for (int k = dz; k < dz + size; k++) {
                B.push_back(A[i][j][k]);
            }
        }
    }
    sort(B.begin(), B.end());
    if (B[0] == B[size * size * size - 1]) {
        return 1;
    }
    int result = 0;
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            for (int k = 0; k < 2; k++) {
                result += F(A, dx + i * size / 2, dy + j * size / 2, dz + k * size / 2, size / 2);
            }
        }
    }
    return result;
}
```

# Explanation

For the easiest subtask, observe that the array $A$ has $N^3$ binary cells. Compress it into a one-dimensional array $B$ of size $N^3$ with $B_{i \cdot N^2 + j \cdot N + k} = A_{i,j,k}$, and iterate over all bitmasks: treating the $i$-th element as the $i$-th bit of a mask, iterate over the range from $0$ to $2^{N^3}-1$, and for every mask that differs from $B$ in no more than $K$ bits, run the code above and relax the minimal and maximal values.

Let's try to understand what the function $F$ in the statement does. Several observations:

• The function $F$ always deals with the cube (3-dimensional subarray) $A[dx..dx+size-1, dy..dy+size-1, dz..dz+size-1]$.
• If all elements in the cube $A[dx..dx+size-1, dy..dy+size-1, dz..dz+size-1]$ are equal, then the result of the function is $1$.
• Otherwise, we divide the box into 8 equal sub-boxes and sum the resulting values for them.
• The variable "size" is always a power of 2.

What can we do with this? Let's define a couple of functions:

• $minValue[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1][i]$ - the minimal value of the function $F$ which we can get if we change no more than $i$ values inside the cube $A[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$, $0 <= i <= K$
• $maxValue[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1][i]$ - the maximal value of the function $F$ which we can get if we change no more than $i$ values inside the cube $A[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$, $0 <= i <= K$
• $zeroes[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$ - the number of zeroes inside the cube $A[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$
• $ones[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$ - the number of ones inside the cube $A[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1]$

In which cases is $maxValue[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1][i]$ equal to 1? Only when $i = 0$ and all values inside the cube are zeroes or all are ones. This is because if $i > 0$ and all the values are equal, we can flip exactly $1$ of them, after which the value of $F$ is recalculated from the smaller sub-boxes.

In which cases is $minValue[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1][i]$ equal to 1? Only when we can make all values inside the cube equal while changing no more than $i$ of them; in other words, when $max(zeroes[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1],$ $ones[dx..dx+size-1,dy..dy+size-1,dz..dz+size-1])+i>=size^3$.

Otherwise, how do we recalculate $minValue$ and $maxValue$ from the smaller sub-cubes and their values of $minValue$ and $maxValue$? It is very similar to the knapsack problem: $minValue[cube][i]$ is computed by distributing a budget of at most $i$ changed cells over the 8 smaller subcubes. Analogously for $maxValue$.
$minValue[cube][i] = \min_{i_{1}+i_{2}+i_{3}+i_{4}+i_{5}+i_{6}+i_{7}+i_{8}=i} \left( minValue[subcube_{1}][i_{1}]+..+minValue[subcube_{8}][i_{8}] \right)$

$maxValue[cube][i] = \max_{i_{1}+i_{2}+i_{3}+i_{4}+i_{5}+i_{6}+i_{7}+i_{8}=i} \left( maxValue[subcube_{1}][i_{1}]+..+maxValue[subcube_{8}][i_{8}] \right)$

This gives us the idea for the next solution:

```
go(dx, dy, dz, N) //returns the pair of arrays minValue and maxValue for the cube
    if N = 1 //If we have only one cell
        return {1, 1}, //minValue[0], minValue[1]
               {1, 1}  //maxValue[0], maxValue[1]
    minValue = array of size N * N * N + 1, filled with 0 //empty set of subcubes contributes 0
    maxValue = array of size N * N * N + 1, filled with 0
    //N * N * N is the number of cells which can be changed inside the cube
    was = 0
    for i = 0..1 //Iterate over 8 subcubes
        for j = 0..1
            for k = 0..1
                was += 1 //number of subcubes processed so far
                tMinValue, tMaxValue = go(dx + i * (N / 2), dy + j * (N / 2), dz + k * (N / 2), N / 2)
                //Observe that tMinValue and tMaxValue have size N * N * N / 8 + 1
                nMinValue = array(N * N * N / 8 * was + 1, +1000000)
                nMaxValue = array(N * N * N / 8 * was + 1, -1000000)
                //new values of minValue and maxValue; in optimized code we can get rid of them
                for v = 0..(N * N * N / 8) * was //combine values like a knapsack problem
                    for u = 0..min(N * N * N / 8, v) //budget given to the new subcube
                        nMinValue[v] = min(nMinValue[v], tMinValue[u] + minValue[v - u])
                        nMaxValue[v] = max(nMaxValue[v], tMaxValue[u] + maxValue[v - u])
                for v = 0..(N * N * N / 8) * was
                    minValue[v] = nMinValue[v]
                    maxValue[v] = nMaxValue[v]
    //Corner cases, not going into subcubes
    if max(zeroes[dx..dx+N-1,dy..dy+N-1,dz..dz+N-1], ones[dx..dx+N-1,dy..dy+N-1,dz..dz+N-1]) == N * N * N
        maxValue[0] = 1
    for i = 0..N * N * N
        if max(zeroes[dx..dx+N-1,dy..dy+N-1,dz..dz+N-1], ones[dx..dx+N-1,dy..dy+N-1,dz..dz+N-1]) + i >= N * N * N
            minValue[i] = 1
    return minValue, maxValue
```

The total complexity of this algorithm is $O(N^3 \log(N) + (N^3/8) \cdot (N^3/8) \cdot \log(N)) = O(N^6 \log(N))$, but with a very small constant.
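The merge step of the DP above, combining a new subcube's array with the accumulated one under a shared budget of changed cells, is an ordinary knapsack convolution. A standalone Python sketch of the min variant (the max variant is symmetric; the arrays are toy values, not from a real test case):

```python
def merge_min(acc, sub):
    """Knapsack-combine two DP arrays.

    acc[i] / sub[i] hold the minimal F-value achievable with at most i
    changed cells in the cubes merged so far / in the new subcube.
    Entry v of the result considers every split v = u + (v - u).
    """
    INF = float("inf")
    out = [INF] * (len(acc) + len(sub) - 1)
    for v in range(len(out)):
        lo = max(0, v - len(acc) + 1)
        hi = min(v, len(sub) - 1)
        for u in range(lo, hi + 1):
            out[v] = min(out[v], sub[u] + acc[v - u])
    return out

# two subcubes whose F-value drops as more cells may be changed
print(merge_min([8, 5, 3], [8, 6, 4]))  # [16, 13, 11, 9, 7]
```

Doing this once per subcube, eight times per level, yields the nMinValue update in the pseudocode.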
There are many optimizations for it; the most crucial one is using a one-dimensional array instead of a three-dimensional one.

# Solution:

Setter's solution can be found here
Tester's solution can be found here

Please feel free to post comments if anything is not clear to you.

#2

Another optimization is printing 1, N^3 if K = N^3, which gives you AC instead of TLE on a subtask.
# Solving for X in a Kronecker Product Matrix Equation

I'm looking to solve this equation:

$$I = (A \otimes (XB)) + E$$

for $X$ with matrices $A,B$, error matrix $E$, and identity $I$, such that $\lVert E \rVert_F$, the Frobenius norm of $E$, is minimized (if needed, another matrix norm is fine). But I'm not sure how to handle the Kronecker product.

• Is it $A \otimes (X\cdot B ) + E$ or $(A \otimes X)\cdot B + E$ ? – Elias Costa Nov 28 '17 at 20:49
• So sorry that was ambiguous -- I've clarified it now – Robert Nov 28 '17 at 20:50

The solution is $$X=\frac{{\rm tr}(A)}{\|A\|_F^2}\,B^+$$ and here's how I derived it. The objective function is

\eqalign{ \phi &= \frac{1}{2}\|E\|_F^2 = \frac{1}{2}(A\otimes XB-I):(A\otimes XB-I) \cr d\phi &= (A\otimes XB-I):(A\otimes dX\,B) \cr &= (A:A)(XB:dX\,B) - (I:A)(I:dX\,B) \cr &= \Big((A:A)XBB^T - (I:A)B^T\Big):dX \cr &= \Big(\|A\|_F^2XBB^T - {\rm tr}(A)B^T\Big):dX \cr \frac{\partial\phi}{\partial X} &= \|A\|_F^2XBB^T - {\rm tr}(A)B^T \cr }

Set the gradient to zero and solve for $X$

\eqalign{ \|A\|_F^2XBB^T &= {\rm tr}(A)B^T \cr X &= \frac{{\rm tr}(A)}{\|A\|_F^2} B^T(BB^T)^{-1} \cr }

In the above steps, a colon (:) is simply a product notation for the trace, i.e. $$A:B={\rm tr}(A^TB)$$ and $M^+$ is the pseudoinverse of $M$.

Lynn's solution assumes that $A$ is square. When it is rectangular, the term $$I:(A\otimes dX\,B)$$ must be handled more carefully. In particular, you must find a Kronecker decomposition of the identity matrix $$I = \sum_{k=1}^r Y_k\otimes Z_k$$ where the $(Y_k, Z_k)$ matrices are shaped like $(A, XB)$ respectively. Note that in the case that $A$ is square, $r=1$ and the decomposition is simply $I = I_A\otimes I_{XB}$.
Using this decomposition yields \eqalign{ I:(A\otimes dX\,B) &= \sum_{k=1}^r Y_k\otimes Z_k :(A\otimes dX\,B) \cr &= \sum_{k=1}^r (Y_k:A)\,(Z_k:dX\,B) \cr &= \bigg(\sum_{k=1}^r {\rm tr}(Y_k^TA)\,Z_kB^T\bigg):dX \cr } Substituting into Lynn's differential \eqalign{ d\phi &= \Big(\|A\|_F^2XBB^T - \sum_k {\rm tr}(AY_k^T)\,Z_kB^T\Big):dX \cr \frac{\partial\phi}{\partial X} &= \|A\|_F^2XBB^T - \sum_k {\rm tr}(AY_k^T)\,Z_kB^T = 0 \cr \|A\|_F^2XBB^T &= \sum_k {\rm tr}(AY_k^T)\,Z_kB^T \cr X &= \sum_k \frac{{\rm tr}(AY_k^T)Z_k}{\|A\|_F^2}\,\,B^+ \cr }
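The closed form can be sanity-checked in a simple special case. Take $A$ to be the 1x1 matrix $[a]$; then ${\rm tr}(A)/\|A\|_F^2 = 1/a$, so $X = B^+/a$, and for an invertible $B$ the residual $E$ vanishes since $A \otimes (XB) = a(XB) = I$. A pure-Python sketch with a 2x2 $B$ (no linear-algebra library assumed):

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(p, q):
    """Product of two 2x2 matrices."""
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = 3.0                          # A = [a], so tr(A)/||A||_F^2 = 1/a
B = [[2.0, 1.0], [1.0, 4.0]]
X = [[v / a for v in row] for row in inv2(B)]   # X = B^{-1}/a

XB = matmul2(X, B)               # = I/a
AkronXB = [[a * v for v in row] for row in XB]  # A kron (XB) = a * XB
print(AkronXB)  # numerically the 2x2 identity, so E = 0
```

For a rectangular or rank-deficient $B$ the pseudoinverse replaces the inverse and $E$ is merely minimized, not zero.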
# 3D printing overhangs that are over .200 in I am new to 3D printing but have been in CNC Machining for a few years. I have a part I am trying to print that is a cylinder 1.000 in. in diameter and has a .200 in overhang starting at 1.300 in. In other words I am printing a 1.300 in. cylinder that is 1.500 in. tall that at 1.300 in. its diameter increases by .200 in. When I first printed the part the overhang had sunk or fallen out. Not by much and is still usable but made a crappy finish. What would I need to do in order to have the overhang not drop as the base layer extended outward .200 in. at 1.300 in.? I tried slowing the feed rate but that was worse. I also lowered the temp to 195 °C. I am using a Monoprice Select Mini running at 200 °C and a 1.0 Speed (Not really sure what that feed rate is in terms of mm/s). Based on what I've seen so far I would increase the speed and keep the temp at 200 °C. Any suggestions, I hope I have explained my problem well enough. • Less printing temperature may help, but in the end, you're still trying to print in thin air. Consider if it's possible to flip the cylinder, or use support structures. – towe Oct 11 '19 at 6:45 • Hi and welcome to 3DPrinting.SE! I guess you mean 200 °C not Fahrenheit (200 °F is about 93 ­°C). Furthermore, a picture says more than a thousand words. :) – 0scar Oct 11 '19 at 11:20 • Can you print the part upside down, so you have a ledge instead of an overhang? Also, what are your slicer settings for support material? Oct 11 '19 at 13:33 • @apesa: Raft? Generally rafts are considered antiquated. 
If you can't print without them, you should try to figure out what the underlying problem behind that is. Oct 11 '19 at 15:45

• I suspect this kind of overhang (90° as opposed to more than 90°) could be printed without support if slicers were smarter - it would involve printing perimeters outward starting from the self-supported part, with some overlap in the nozzle positioning to improve bonding to the previously-laid-out perimeter. But as far as I'm aware, none of them support doing this. Oct 11 '19 at 17:09

The world of 3D printers usually uses the metric system, especially in nozzle sizes. 0.2 in is therefore better referred to as about 5 mm, which is a considerable amount: that's 11 to 13 perimeters from a 0.4 mm nozzle, depending on extrusion width (0.46 and 0.4 mm respectively). Furthermore, the bore of the item isn't supported either; it is bridging. To print overhangs and bridging without sagging, one should activate the generation of support material in the slicer.

Generally speaking, PLA (judging from the print temperature) doesn't need to be printed with a raft and would be better served with a brim for bed adhesion, unless you have a perforated bed. If you have to print in the shown orientation, then you should activate support generation in your slicer.

For this part, however, there is a better solution: it is of very simple geometry and it doesn't have to be printed as shown but could equally be printed "upside-down" by being rotated around the X-axis by 180° in the slicer. This has two benefits: it removes all unsupported overhangs and avoids support structure, making the wasted material pretty much nonexistent.

I strongly recommend taking a look at my 3D Design Primer and the excellent question on How to decide print orientation? and then delve into further reading:

• Thanks much. The .200 refers to the part and not the nozzle size. I do all my CAD in inches.
As for the raft, I am using that method simply because I'm still learning the intricacies of print setup, and I much appreciate your links above. I am sure they will shed a lot of light on how to go from CAD to printer. The other challenge for me is to stop thinking in terms of CNC milling and instead think in terms of printing, optimizing the design for the printer and not for CNC. Question: is there a preferred slicer? Oct 11 '19 at 17:46

• @apesa As far as popularity goes, Slic3r and Ultimaker Cura (and programs using the Cura engine) are among the popular free ones; Simplify3D is very popular among the paid ones. Oct 11 '19 at 22:10

It appears that your part could be printable upside down. If possible, I'd highly recommend this, as it mostly avoids supports altogether.

• Thanks, I printed it upside down initially and found that it created a rough top surface area because I was using a raft for bed adhesion. I am looking at using a brim and flipping the part back upside down. Oct 11 '19 at 17:51
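The "11 to 13 perimeters" figure quoted in the answer above is simple division of the overhang width by the extrusion width; a quick check in Python:

```python
INCH_TO_MM = 25.4
overhang_mm = 0.200 * INCH_TO_MM        # the 0.200 in overhang, = 5.08 mm

for extrusion_width in (0.46, 0.40):    # typical widths for a 0.4 mm nozzle
    perimeters = overhang_mm / extrusion_width
    print(f"{extrusion_width} mm extrusion -> {perimeters:.1f} perimeters")
```

With a 0.46 mm extrusion width the overhang is about 11 perimeters wide; with 0.40 mm, about 12.7.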
logarithmic trickery

Consider that since $\log(a)\log(b)=\log(a^{\log b})=\log(b^{\log a})$, and therefore, somewhat unexpectedly:

$a^{\log b} = b^{\log a}$

we can use it to construct new identities thusly.

Let $A = \prod_{n=1}^{\infty} \sqrt[2]{1+\frac{1}{n^{2}}}$

Let $B = \prod_{m=1}^{\infty} \sqrt[e^{m}]{1+\frac{1}{m^{3}}}$

Therefore, since we know that $A^{\log(B)}=B^{\log(A)}$, we can use that to explicitly calculate a new identity thusly:

1. Compute the logarithm of $A$: $\log(A) = \frac{1}{2} \sum_{v=1}^{\infty} \frac{(-1)^{v+1}\zeta(2v)}{v}$

2. Compute the logarithm of $B$: $\log(B) = \sum_{s=1}^{\infty} \frac{(-1)^{s+1}\mathrm{Li}_{(3s)}(1/e)}{s}$

3. Combine them: $\prod_{n=1}^{\infty} \prod_{s=1}^{\infty} \sqrt[s]{\left(1+\frac{1}{n^{2}}\right)^{(-1)^{s+1}\mathrm{Li}_{(3s)}(1/e)}} = \prod_{m=1}^{\infty} \prod_{v=1}^{\infty} \sqrt[ve^{m}]{\left(1+\frac{1}{m^{3}}\right)^{(-1)^{v+1}\zeta(2v)}}$

>>> A = fp.nprod(lambda n: sqrt(1+1/(n*n)), [1,inf])
>>> A
mpf('1.9109509100512501')
>>> B = fp.nprod(lambda m: (1+1/(m**3))**(1/exp(m)), [1,inf])
>>> B
mpf('1.3140291251423164')
>>> A**log(B)
mpf('1.1934623097049237')
>>> B**log(A)
mpf('1.1934623097049237')

one: It is a thick, yellow, Springer-Verlag dealie, illustrated and about fractals. There is a color plate on nearly every page. I dreamt of being in a bookshop and considering buying it.

two: has lots of equations at the beginning, and they change as I read them. last page has a distorting/metamorphosing picture of einstein/feynman in green and blue hues.

three: also a text that changes, I remember a diagram of a helix against a point scatter. Also changed.

last night: I remember something about the Euler-Mascheroni constant $\gamma$. Impenetrable references section. Also a picture of fractals.

zeta theta hybrideque

Define $g(z) = \sum_{n=1}^{\infty} \frac{e^{-n^{2}}}{n^{z}}$.
Note that while it resembles the Riemann zeta function $\zeta(s)$, the exponential term suppresses the pole at $z=1$.

the banshee

I had this thought in the shower, and it descends from my "symmetry may be a long term red herring" idea, and it goes like this: Imagine that you have something quite like the Monster group $M$, except that there is a fixed pair of elements in this thing like $M$, let's call it $\mathcal{B}$ for banshee. Every time you take two elements from $\mathcal{B}$ and calculate their product, there is a $\frac{1}{|M|^{2}}$ chance that you'll be given something which is not an element of $M$, like a real number or some other object.

Let's say it transpires that the banshee actually has some utility: if you happen to have a hangup on symmetry, your apprehension of the Monster group is going to obscure the existence of the banshee to you. Interestingly, because of the sheer size of the order of the Monster, it is easier to confuse this banshee with a group than if you were to take Klein's viergruppe or a cyclic group and replace one of its elements. The banshee $\mathcal{B}$ is not a group.

interpolating the primorials

Here's a puzzle: consider that the Gamma function $\Gamma(z)$ is the interpolation of the factorials. What about interpolating the primorials? If $p_{n}$ is the $n$-th prime, then what is the correct analogue of the Gamma function if we define the primorial as:

$p_{n}\# = \prod_{k=1}^{n} p_{k}$
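For experimenting with candidate interpolations, the primorial itself is easy to tabulate for small $n$ (a naive Python sketch; trial division is plenty at this scale):

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def primorial(n):
    """p_n# = product of the first n primes."""
    result, count, m = 1, 0, 1
    while count < n:
        m += 1
        if is_prime(m):
            result *= m
            count += 1
    return result

print([primorial(n) for n in range(1, 6)])  # [2, 6, 30, 210, 2310]
```

Any interpolating function worth the name should pass through these values at integer arguments.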
# Chapter 4 - Exponential Functions - Exercises to Skills for Chapter 4 - Page 185: 59

Cannot be found.

#### Work Step by Step

The denominator of the fractional exponent (that is, the index of the radical) is even. Thus, we cannot raise the given negative number to the given power, since we cannot take the $n$th root of a negative number when $n$ is even.
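The same reasoning can be demonstrated in code: an even root of a negative number has no real value, while an odd root does. A small Python sketch (the helper function is mine; the exercise's actual numbers are not given in the excerpt):

```python
import math

def real_root(x, n):
    """n-th real root of x, or None when none exists (x < 0 with n even)."""
    if x >= 0:
        return math.pow(x, 1.0 / n)
    if n % 2 == 1:
        return -math.pow(-x, 1.0 / n)   # odd roots of negatives are real
    return None                          # even root of a negative: undefined in R

print(real_root(16, 2), real_root(-8, 3), real_root(-16, 2))
```

Here real_root(-16, 2) returns None, matching the "cannot be found" conclusion.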
# _______ lubrication technique is used for lubrication of the cylinder of a scooter engine.

1. Petroil
2. Splash
3. Gravity feed
4. Forced feed

Option 1 : Petroil

## Detailed Solution

Explanation:

Petroil (Petro-oil lubrication system): In this method, the lubricating oil is mixed with the petrol and fed into the engine cylinder during the suction stroke. The oil droplets provide the lubricating effect in the engine cylinder. This method of lubrication is used in small engines such as motorcycles and scooters, particularly in two-stroke engines: about 3 to 6% lubricating oil is added to the petrol in the petrol tank. The petrol evaporates when the engine is working, and the lubricating oil is left behind in the form of a mist. Engine parts such as the piston, cylinder walls and connecting rod are lubricated by being wetted with this oil mist.

Splash lubrication system: The splashing action of oil maintains a fog or mist of oil that drenches the inner parts of the engine such as bearings, cylinder walls, pistons, piston pins, timing gears etc. The splashed oil then drips back into the sump. This system is commonly used in single-cylinder engines with a closed crankcase.

Forced feed or pressure lubrication system: This system is commonly used on high-speed multi-cylinder engines in tractors, trucks and automobiles.

# The purpose of a thermostat in an engine cooling system is to

1. Prevent the coolant from boiling
2. Allows the engine to warm up quick
3. Pressurize the system to raise the boiling point
4. Indicate to the driver, the coolant temperature

Option 2 : Allows the engine to warm up quick

## Detailed Solution

Concept:

• Whenever the engine is started from cold, the coolant temperature has to be brought to the desired level in order to minimize the warm-up time.
• This purpose is achieved by a thermostat fitted in the system, which initially prevents the circulation of water through the radiator below a certain temperature so that the water heats up quickly.
• When the preset temperature is reached, the thermostat allows the water to flow through the radiator.

# The common lubrication system used in IC Engines of an automobile is called the _______ system.

1. petrol
2. pressure
3. splash
4. gravity

Option 2 : pressure

## Detailed Solution

Explanation:

Forced feed or pressure lubrication system: This system is commonly used on high-speed multi-cylinder engines in tractors, trucks and automobiles.

# The lubricating oil is circulated in an IC engine by:

1. Positive displacement pump
2. Roots blower
3. Natural circulation thermosiphon
4.
Centrifugal pump Option 1 : Positive displacement pump ## Detailed Solution The primary purpose of the lubrication system is to lubricate sliding surfaces and reduce friction losses in the engine, whilst secondary issues are involved with heat transfer. These are mainly classified into two categories: A. Non-positive displacement pumps: • Centrifugal pump is a non-positive displacement pump. In this there is a relative motion between the fluid and motor. It imparts velocity energy to the fluid, which is converted to pressure energy upon exiting the pump casing. • These pumps are also known as hydro-dynamic pumps. In these pumps the fluid is pressurized by the rotation of the propeller and the fluid pressure is proportional to the rotor speed. • These pumps cannot withstand high pressures and generally used for low-pressure and high-volume flow applications. • These pumps are primarily used for transporting fluids and find little use in the hydraulic or fluid power industries. B. Positive displacement pumps: • Positive displacement pump is a pump in which there is a physical displacement of boundary of fluid mass. • These pumps deliver a constant volume of fluid in a cycle. The output fluid flow is constant and is independent of the system pressure (load). • These pumps are used in most of the industrial fluid power applications. • Important positive displacement pumps are gears pumps, vane pumps and piston pumps. The lubricating oil is circulated in an IC engine by Positive displacement pump. # Which of the following engine cooling systems is commonly employed in heavy trucks? 1. Evaporative cooling system 2. Air cooling system with fins 3. Forced-circulation system 4. Thermosyphon system Option 3 : Forced-circulation system ## Detailed Solution Explanation: There are two types of cooling systems used for cooling the IC engines. 1. Liquid or indirect cooling system 2. 
Air or direct cooling system Air or direct cooling system: • In an air-cooled system, a current of air is made to flow past the outside of the cylinder barrel, the outer surface area of which has been considerably increased by providing cooling fins. • This method is mainly applicable to engines in motorcycles, small cars, airplanes, and combat tanks where the motion of the vehicle gives a good velocity to cool the engine. • In bigger units, a circulating fan is also used. • The value of the heat transfer coefficient between metal and air is appreciably low. • As a result of this, the cylinder wall temperatures of the air-cooled cylinders are considerably higher than those of the water-cooled type. Liquid or indirect cooling system: • In this system, mainly water is used and made to circulate through the jackets provided around the cylinder, cylinder head, valve ports and seats where it extracts most of the heat. It is used for heavy vehicles. Water cooling is carried out by the following five methods: (a) Director non-return system (b) Thermosyphon system (c) Forced circulation cooling system (d) Evaporative cooling system (e) Pressure cooling system Evaporative cooling system: • This is predominately used in stationary engines and in many types of industrial engines. • In this system, the engine will be cooled because of the evaporation of the water in the cylinder jackets into steams. Thermosyphon system: • In this system, the circulation of water is due to the difference in temperature of the water. So, in this system pump is not required but water is circulated because of density difference only. Forced circulation cooling system: • This system is used in many vehicles like cars, buses, trucks and other heavy vehicles. Here, the circulation of water takes place with convection currents helped by a pump. # Piston compression rings are made of which one of the following? 1. Cast iron 2. Bronze 3. Aluminium 4. 
White metal Option 1 : Cast iron ## Detailed Solution Explanation: Piston rings: There are two types of piston rings 1. Compression rings: These rings effectively seal the compression pressure and the leakage of the combustion gases. These are fitted in the top grooves. They also transfer heat from the piston to the cylinder walls. These rings vary in their cross-section. The following types of compression rings are used • Rectangular Rings • Taper-faced rings • Barrel-faced rings • Inside bevel rings • Keystone rings 2. Oil control rings: The main purpose of an oil ring is to scrape the excess oil from the liner and drain it back to the oil sump during the downward movement of the piston. It prevents the oil from reaching the combustion chamber. • Since grey cast iron has properties of self-lubrication and damping of small vibrations. • They are widely used for machine base, engine frames, drainage pipes, elevator and industrial furnace counterweights, pump housings cylinder and piston rings of IC engines, flywheel etc. # Which of the following is an advantage of the liquid cooling system as compared to air cooling system for an IC Engine? 1. Low cost 2. Uniform cooling 3. Light in weight 4. Power absorbed by the pump is considerable Option 2 : Uniform cooling ## Detailed Solution Explanation: Cooling system – It is a system that is used to transfer the excess heat generated so that system does not get too hot. Types of cooling system. • Air cooled system • Water cooled system Air cooled system – In this system air flow across the engine cylinders to remove the excess heat. The amount of heat dissipated depends upon the amount of area from which air flows, the conductivity of the material, and the mass flow rate of air. Advantages Disadvantages No radiator is required Cooling is not uniform It can be operated even in cold temperatures Cooling fins under certain conditions may provide vibrations and high noise level. Low cost and light weight. 
Engine power output is less Water cooled system – In this system a coolant is circulated around the cylinder which absorbs heat from the cylinder and cylinder heads. Advantages Disadvantages Cooling is uniform Radiator is required Power output is more Cannot be operated in cold temperatures # In two-stroke engines, the type of lubrication system employed in the crankcase is the: 1. mist lubrication system 2. wet sump lubrication system 3. dry sump lubrication system 4. splash lubrication system Option 1 : mist lubrication system ## Detailed Solution Concept: The function of a lubrication system is to provide a sufficient quantity of cool, filtered oil to give positive and adequate lubrication to all the moving parts of an engine. The various lubrication systems used for internal combustion engines may be classified as: • Mist lubrication system • Wet sump lubrication system • Dry sump lubrication system Mist lubrication system: In two-stroke engines, mist lubrication is used where crankcase lubrication is not suitable. In a two-stroke engine, as the charge is compressed in the crankcase, it is not possible to have the lubricating oil in the sump. Hence, mist lubrication is adopted in practice. In such engines, the lubricating oil is mixed with the fuel, the usual ratio being 3% to 6%. The oil and fuel mixture is inducted through the carburetor. Wet sump lubrication system: In the wet sump system, the bottom of the crankcase contains an oil pan or sump from which the lubricating oil is pumped to various engine components by a pump. After lubricating these parts, the oil flows back to the sump by gravity. There are three varieties in the wet-sump lubrication system. • the splash system • the splash and pressure system • the pressure feed system Dry Sump Lubrication System: In this, the supply of oil is carried in an external tank. An oil pump draws oil from the supply tank and circulates it under pressure to the various bearings of the engine. 
# Piston rings are usually made of

1. Steel
2. Cast iron
3. Aluminium
4. Babbit

Option 2 : Cast iron

## Detailed Solution

Explanation:

Piston rings are expandable split rings fitted in grooves around the perimeter of a piston and mainly perform the following functions:

1. Seal the combustion chamber from the crankcase
2. Provide a uniform oil film between the piston and cylinder wall, thereby controlling oil consumption
3. Transfer heat from the piston to the cooler cylinder walls

• In most cases, piston rings are made of cast iron.
• Cast iron conforms well to the cylinder wall.
• In addition, cast iron can easily be coated with other materials to enhance its durability.

# The lowest temperature at which the oil ceases to flow when cooled is known as ________.

1. Flash point
2. Fire point
3. Cloud point
4. Pour point

Option 4 : Pour point

## Detailed Solution

Concept:

Flash point: The flash point of a volatile material is the lowest temperature at which vapors of the material will ignite when given an ignition source.

Fire point: The fire point of a fuel is the lowest temperature at which the vapor of the fuel will continue to burn for at least 5 seconds after ignition by an open flame. The main difference between the fire point and the flash point is that at the flash point a substance ignites only briefly, because vapor may not be produced at a rate sufficient to sustain the fire. The flash point and fire point relate to the high-temperature characteristics of the fuel and describe its behavior at high temperatures.

Cloud point: The cloud point is the temperature at which oil becomes cloudy or hazy when cooled at a specified rate.

Pour point: It is the temperature at which oil just ceases to flow. The pour point of a liquid is the lowest temperature at which it becomes semi-solid and loses its flow characteristics. The cloud point and pour point relate to the low-temperature characteristics of the fuel and describe its behavior at low temperatures.
# Select the incorrect statement from below about good quality lubricating oils.

1. They do not affect the mechanical efficiency of the engine
2. They reduce frictional resistance in bearings
3. They should have low viscosity at low temperature for ease of starting
4. They assist in sealing of piston during operation

Option 1 : They do not affect the mechanical efficiency of the engine

## Detailed Solution

Good qualities of lubricating oil:

• Should be available in a wide range of viscosities
• Reduces frictional resistance
• There should be little change in the viscosity of the oil with change in temperature
• Chemically stable with the bearing material and the atmosphere at all temperatures encountered in the application
• Should have sufficient specific heat to carry away frictional heat without an abnormal rise in temperature
• Should be available at reasonable cost
• Helps to seal the piston during operation
• Should have low viscosity at low temperatures so that the initial frictional resistance is reduced, which eases starting
• Lower frictional resistance means less heat rejection, which in turn improves the mechanical efficiency

Mechanical efficiency (η) = 1 – (QR/Qin). If the heat rejected is reduced, the mechanical efficiency increases.

# Select the incorrect statement from following about an air-cooled IC engine.

1. The heat is dissipated to the atmosphere by convection from fins placed on cylinder walls
2. Radiation plays a significant role in the dissipation of heat
3. The air is blown over the fins
4. The excess heat of combustion is conducted through the cylinder wall to the exterior of the wall

Option 2 : Radiation plays a significant role in the dissipation of heat

## Detailed Solution

Air-cooled IC engine: In an air-cooled system, a current of air is made to flow past the outside of the cylinder barrel, the outer surface area of which has been considerably increased by providing fins.
This method increases the rate of cooling. The temperature of a fin decreases from its root to its tip. The heat generated by combustion in the engine cylinder is conducted to the fins through the wall of the cylinder. The air is blown over the fins and carries the heat away by convection. Convection, not radiation, plays the major role in this heat transfer.

# In an IC engine, boundary lubrication is likely to occur between surfaces with relative velocity during:

1. starting and stopping
2. maximum power condition
3. constant speed operation
4. idling

Option 1 : starting and stopping

## Detailed Solution

Explanation:

Lubrication principles: If one surface is moving and inclined to the other, the viscous drag of the oil tends to draw the lubricant into the space between the surfaces and build up a wedge. This develops an oil-film pressure that can support a load. If the two surfaces were parallel, or if they had no relative motion, the oil-film pressure would not develop and the lubricant would not support a load. There are three lubrication regimes: hydrodynamic lubrication, boundary lubrication, and mixed-film lubrication.

Boundary lubrication: At a low relative velocity of the moving surfaces, adequate pressure cannot be developed in the oil film to support the load. At this point, boundary lubrication exists. This occurs especially during the starting and stopping of an engine. As the speed increases, sufficient film pressure is developed, the load is supported by the oil film, and the regime shifts to hydrodynamic lubrication.

Hydrodynamic lubrication: If the sliding surfaces are completely separated by a film of oil, there is no metal-to-metal contact and wear of the surfaces is at a minimum. This type of lubrication develops when there is relative motion between two inclined surfaces separated by an oil film.

# In a multi-cylinder heavy-duty engine, mainly _______ lubrication system is used.

1. pressure feed
2. splash and wet sump
3. splash
4. scoop feed

Option 1 : pressure feed

## Detailed Solution

Explanation:

Splash lubrication system: The splashing action of oil maintains a fog or mist of oil that drenches the inner parts of the engine such as bearings, cylinder walls, pistons, piston pins, timing gears etc. The splashed oil then drips back into the sump. This system is commonly used in single-cylinder engines with a closed crankcase.

Forced feed or pressure lubrication system: This system is commonly used on high-speed multi-cylinder engines in tractors, trucks and automobiles.

Wet sump lubrication system: In the wet sump system, the bottom of the crankcase contains an oil pan or sump from which the lubricating oil is pumped to the various engine components by a pump. After lubricating these parts, the oil flows back to the sump by gravity. There are three varieties of the wet-sump lubrication system:

• the splash system
• the splash and pressure system
• the pressure feed system

# Admittance of oil between two surfaces having relative motion is called:

1. lubrication
2. viscosity
3. coalescence
4. turbidity

Option 1 : lubrication

## Detailed Solution

Explanation:

Lubrication: It is the admittance of oil between two surfaces having relative motion.

Viscosity: It is defined as the measure of the resistance of a fluid to gradual deformation by shear or tensile stress. In other words, viscosity describes a fluid's resistance to flow; it is a measure of the internal friction of the fluid. There are two ways to express a fluid's viscosity: dynamic viscosity (absolute viscosity) and kinematic viscosity.

Coalescence: It is the process by which two or more separate masses of miscible substances "pull" each other together until they make contact and merge.

Turbidity: Turbidity is the cloudiness or haziness of a fluid caused by large numbers of individual particles that are generally invisible to the naked eye, similar to smoke in air.
• The measurement of turbidity is a key test of water quality.

# Select the incorrect statement from below.

1. Oil rings are present to seal the combustion space from leakage of oil
2. A suitable thickness of the top of the piston is needed to provide sufficient bearing area for side load
3. Piston pin is used to connect piston and the connecting rod
4. Piston rings are present to prevent gases of combustion from leakage out

Option 2 : A suitable thickness of the top of the piston is needed to provide sufficient bearing area for side load

## Detailed Solution

Piston rings: There are two types of piston rings:

1. Compression rings: These rings effectively seal the compression pressure and prevent leakage of the combustion gases. They are fitted in the top grooves. They also transfer heat from the piston to the cylinder walls.

2. Oil control rings: The main purpose of an oil ring is to scrape the excess oil from the liner and drain it back to the oil sump during the downward movement of the piston. It prevents the oil from reaching the combustion chamber.

Functions of the oil rings:

• Wipe the oil from the cylinder walls as the piston moves down
• Separate the combustion chamber and oil chamber and prevent the leakage of pressure from the combustion chamber

Piston pin: The piston pin (also known as the gudgeon pin) is used to connect the piston and the connecting rod.

Suitable thickness of the top of the piston: Two criteria are used for calculating the thickness of the piston head.

• Strength criterion: The piston head is treated as a flat circular plate of uniform thickness subjected to a uniformly distributed gas pressure (pm) over the entire area. The thickness is given by Grashoff's formula, $${t_h} = D\sqrt {\frac{{3{p_m}}}{{16{\sigma _b}}}}$$ where $D$ is the cylinder bore and $\sigma_b$ the allowable bending stress.
• Heat dissipation criterion: The piston should have sufficient thickness to quickly transfer the heat to the cylinder walls.
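The strength criterion can be illustrated with a quick calculation. This is only a sketch: the gas pressure, bore, and allowable stress below are hypothetical values chosen to show the order of magnitude, and the formula is taken in its commonly quoted form with the cylinder bore D as the length scale.

```python
import math

def piston_head_thickness(p_gas, bore, sigma_b):
    """Grashoff's strength criterion (commonly quoted form):
    t_h = D * sqrt(3 * p / (16 * sigma_b)).
    p_gas and sigma_b must share units (e.g. MPa); bore in mm."""
    return bore * math.sqrt(3 * p_gas / (16 * sigma_b))

# Hypothetical values: 5 MPa peak gas pressure, 100 mm bore,
# 40 MPa allowable bending stress for cast iron.
t = piston_head_thickness(5.0, 100.0, 40.0)
print(f"piston head thickness ≈ {t:.1f} mm")  # ≈ 15.3 mm
```

In practice the larger of the two criteria (strength and heat dissipation) governs the chosen thickness.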
A suitable thickness of the top of the piston is needed to transfer heat quickly to the cylinder walls, not to provide a bearing area for side load.

# Question 17: In a radiator, the tubes are generally made of:

1. Rubber
2. Plastic
3. Brass
4. Copper

Option 4 : Copper

## Detailed Solution

Explanation:

• In the radiator, hot coolant enters from the engine and is cooled by the atmospheric air.
• This requires a material with high thermal conductivity and good corrosion resistance with the coolant.
• Copper tubes are therefore used, owing to their high thermal conductivity.

# If the oil viscosity increases, then the wear loss _______

1. remains the same
2. becomes zero
3. decreases
4. increases

Option 3 : decreases

## Detailed Solution

Explanation:

Viscosity and its effect: Viscosity is the most important property of an oil when considering engine protection. It determines how the engine's lubricant will react to changes in speed, pressure, and temperature, and it governs heat generation in cylinders, bearings, and gear sets through the oil's internal friction.

• It governs the sealing effect of oils and the rate of oil consumption, and determines the ease with which machines may be started or operated under varying temperature conditions, particularly in cold climates.
• Viscosity is a measure of an oil's resistance to flow. It decreases with increasing temperature and increases with decreasing temperature.
• An oil's viscosity is most commonly measured as kinematic viscosity, in centistokes.
• In general, higher viscosity results in lower oil consumption and less wear.
• During cold winter months, it may be difficult to start a car first thing in the morning. This is because colder temperatures cause lubricants to thicken, so they require more energy to circulate due to reduced flow. As a result, the vehicle's crankshaft has to push through thick oil in order to spin fast enough for the car to start.
This can cause components in the engine to experience wear and tear. When the weather is warmer, however, the oil becomes thinner and easier to circulate.

• A reduced viscosity at lower temperatures improves starting and lowers fuel consumption.

# Thermostat comes into operation at temperatures of about _____ in automobile water cooling systems.

1. 40°C
2. 60°C
3. 20°C
4. 80°C

Option 4 : 80°C

## Detailed Solution

Explanation:

Thermostat and its function:

• The engine of an automobile is designed to operate at a stable temperature.
• The outside environment temperature keeps changing, as between summer and winter, which leads to different operating conditions for the engine.
• The engine has to be warmed up quickly to attain maximum efficiency.
• Once the engine has reached a stable operating temperature, it should not fall below it; this is ensured with the help of a thermostat.
• The thermostat is a valve-like arrangement which opens and closes according to the temperature.
• It is fitted between the water outlet of the cylinder head and the inlet of the radiator in the water-cooling system.
• When the engine is cold, the thermostat is closed. It does not permit water to enter the radiator, i.e. the thermostat disconnects the engine from the radiator until it reaches a sufficient operating temperature, generally about 80°C in automobile water cooling systems.
• Afterwards, the thermostat adjusts the flow of coolant to the radiator.

# Engine overheating may be due to:

1. open thermostat
2. excess coolant
4. broken fan belt

Option 4 : broken fan belt

## Detailed Solution

Explanation:

• Excess coolant does not cause engine overheating.

Thermostat:

• Whenever the engine is started from cold, the coolant temperature has to be brought to the desired level in order to minimize the warm-up time.
• This purpose is achieved by a thermostat fitted in the system, which initially prevents the circulation of water through the radiator below a certain temperature so that the water heats up quickly.
• When the preset temperature is reached, the thermostat allows the water to flow through the radiator.

Broken fan belt:

• With a broken belt, the cooling fan does not work properly, causing the engine to overheat.
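The thermostat behaviour described above can be sketched as a highly simplified on/off model. This is only an illustration: real thermostats open gradually (typically via a wax pellet), and only the ~80 °C set point comes from the text; everything else is hypothetical.

```python
def thermostat_open(coolant_temp_c, set_point_c=80.0):
    """Simplified on/off thermostat: the valve stays closed (coolant
    bypasses the radiator) until the coolant reaches the set point,
    about 80 °C in a typical automobile cooling system."""
    return coolant_temp_c >= set_point_c

# Warm-up behaviour: no radiator flow while cold, flow once warmed up.
for temp in (20, 60, 80, 95):
    state = "open (radiator flow)" if thermostat_open(temp) else "closed (bypass)"
    print(f"{temp:>3} °C -> {state}")
```

The closed-below-set-point behaviour is exactly what shortens the warm-up time: no heat is dumped to the radiator until the engine is at operating temperature.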
# Problem #2010

All the numbers $2, 3, 4, 5, 6, 7$ are assigned to the six faces of a cube, one number to each face. For each of the eight vertices of the cube, a product of three numbers is computed, where the three numbers are the numbers assigned to the three faces that include that vertex. What is the greatest possible value of the sum of these eight products?

$\textbf{(A)}\ 312 \qquad \textbf{(B)}\ 343 \qquad \textbf{(C)}\ 625 \qquad \textbf{(D)}\ 729 \qquad \textbf{(E)}\ 1680$

This problem is copyrighted by the American Mathematics Competitions.
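The answer can be sanity-checked by brute force (this check is not part of the original problem page). Each vertex takes one face from each of the three opposite-face pairs $(a,b)$, $(c,d)$, $(e,f)$, so summing the eight vertex products factors as $(a+b)(c+d)(e+f)$; the sketch below simply tries all orderings of the six numbers.

```python
from itertools import permutations

def vertex_product_sum(faces):
    # faces = (a, b, c, d, e, f) with opposite-face pairs (a,b), (c,d), (e,f);
    # the sum of the eight vertex products factors as (a+b)(c+d)(e+f).
    a, b, c, d, e, f = faces
    return (a + b) * (c + d) * (e + f)

best = max(vertex_product_sum(p) for p in permutations([2, 3, 4, 5, 6, 7]))
print(best)  # 729, attained when each opposite pair sums to 9
```

The maximum occurs for the pairing $(2,7)$, $(3,6)$, $(4,5)$, giving $9 \cdot 9 \cdot 9 = 729$.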
# A generalized version of inclusion exclusion principle using a binomial identity

I'm trying to find a way to derive a generalized inclusion-exclusion principle for the number of elements which are in the intersection of at least $s$ sets from $A_1,A_2,...,A_n$ using this identity: let $k$ and $s$ be positive integers with $k\ge s\ge 1$; then $$\sum_{i=0}^{k-s} (-1)^i{s-1+i \choose s-1}{k \choose s+i} = 1$$ I'm coming from this question: proof that the binomial sum is equal to 1. It appears from the sum that what determines whether we add or subtract a product of binomial coefficients is whether it has an even or odd number of elements, and that it can be applied to every subset of $s$ sets. But I don't quite understand the concept of the generalized form of the inclusion-exclusion principle.

For a generalized inclusion-exclusion theorem, see this answer. In that answer, the Theorem says that the number of items in exactly $$k$$ of the sets is $$\sum_{j=0}^m(-1)^{j-k}\binom{j}{k}N(j)$$ Corollary 2 says that the number of items in at least $$k$$ of the sets is $$\bbox[5px,border:2px solid #C0A000]{\sum_{j=k}^m(-1)^{j-k}\binom{j-1}{j-k}N(j)}$$ where $$\binom{-1}{n}=(-1)^n\binom{n}{n}=(-1)^n[n\ge0]$$. If $$k\ge s$$, then \begin{align} \sum_{i=0}^{k-s}(-1)^i\binom{s-1+i}{s-1}\binom{k}{s+i} &=\sum_{i=0}^{k-s}(-1)^i\binom{s-1+i}{i}\binom{k}{k-s-i}\tag{1}\\ &=\sum_{i=0}^{k-s}\binom{-s}{i}\binom{k}{k-s-i}\tag{2}\\ &=\binom{k-s}{k-s}\tag{3}\\[8pt] &=1\tag{4} \end{align} Explanation: $$(1)$$: $$\binom{n}{k}=\binom{n}{n-k}$$ for $$n\ge0$$ and $$n\in\mathbb{Z}$$ $$(2)$$: $$\binom{-n}{k}=(-1)^k\binom{n+k-1}{k}$$ $$(3)$$: Vandermonde's Identity $$(4)$$: $$\binom{n}{n}=1$$ for $$n\ge0$$ and $$n\in\mathbb{Z}$$
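As a quick numerical sanity check of the identity (independent of the combinatorial proof above), the sketch below evaluates the sum directly with Python's `math.comb`:

```python
from math import comb

def identity_sum(k, s):
    # sum_{i=0}^{k-s} (-1)^i * C(s-1+i, s-1) * C(k, s+i)
    return sum((-1)**i * comb(s - 1 + i, s - 1) * comb(k, s + i)
               for i in range(k - s + 1))

# The identity claims the sum equals 1 for every k >= s >= 1.
assert all(identity_sum(k, s) == 1
           for k in range(1, 12) for s in range(1, k + 1))
print("identity holds for all 1 <= s <= k <= 11")
```

The $k=s$ case collapses to the single term $\binom{s-1}{s-1}\binom{k}{s}=1$, which the loop above also covers.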