# If x = 2^b - (8^8 + 8^6), for which of the following b values is x closest to 0?

**Intern** (joined 17 Jun 2013) — 20 Sep 2013:

If x = 2^b - (8^8 + 8^6), for which of the following values of b is x closest to 0?

(A) 20
(B) 24
(C) 25
(D) 30
(E) 42

**Math Expert (Bunuel)** — 20 Sep 2013:

$$8^8 + 8^6 = 8^2 \cdot 8^6 + 8^6 = 64 \cdot 8^6 + 8^6 = 65 \cdot 8^6 = 65 \cdot 2^{18} \approx 2^{6} \cdot 2^{18} = 2^{24}$$

Setting $$2^b - (8^8 + 8^6) = 0$$ gives $$2^b - 2^{24} \approx 0$$, so $$b = 24$$.

Or simply: 8^8 + 8^6 is very close to 8^8 = 2^24, thus b = 24.
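The approximation is easy to confirm by brute force; here is a quick sanity check in Python (an illustration, not part of the original thread):

```python
# Sanity check for the forum solution: which answer choice b makes
# 2**b closest to 8**8 + 8**6?
target = 8**8 + 8**6                      # = 65 * 2**18 = 17039360
choices = (20, 24, 25, 30, 42)

distances = {b: abs(2**b - target) for b in choices}
best = min(distances, key=distances.get)

print(best)  # 24 -> answer (B)
```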
**Magoosh GMAT Instructor (Mike McGarry)** — 20 Sep 2013:

Dear abhisheksharma85, I'm happy to help with this. Recall the exponent rule: (a^m)^n = a^(m·n). Therefore, 8^8 + 8^6 = (2^3)^8 + (2^3)^6 = 2^24 + 2^18. This is the term we want to cancel with 2^b. Keep in mind that 2^24 is 2^6 = 64 times larger than 2^18, so for the purposes of cancelling we can ignore the smaller term. To cancel the 2^24, we need b = 24, so the answer is (B). Does all this make sense?

Mike McGarry, Magoosh Test Prep

**Math Expert (Bunuel)** — 20 Sep 2013:

Similar questions to practice:

- the-value-of-10-8-10-2-10-7-10-3-is-closest-to-which-of-95082.html
- which-of-the-following-is-closest-to-10180-1030-a-110224.html
- which-of-the-following-best-approximates-the-value-of-q-if-99674.html
- new-tough-and-tricky-exponents-and-roots-questions-125956-40.html
- if-x-10-10-x-2-2x-7-3x-2-10x-2-is-closest-to-143897.html
- 1-0-0001-0-04-10-the-value-of-the-expression-above-is-59398.html
- m24-q-7-explanation-76513.html
- 10-180-10-30-which-of-the-following-best-approximates-84309.html

Hope it helps.
**SVP** — 26 Feb 2014:

8^8 = 2^24 and 8^6 = 2^18. 2^24 + 2^18 is very close to 2^24, so b = 24. Answer = B.
# How do you write 4 3/4 as an improper fraction?

$\frac{19}{4}$

There are $4$ quarters in each whole number, and there are $4$ whole numbers here. Multiply the denominator by the whole-number part, then add the numerator, which gives $19$ quarters:

$4 \frac{3}{4} = \frac{4 \cdot 4 + 3}{4} = \frac{19}{4}$
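The same conversion can be done with Python's `fractions` module (an illustrative sketch, not part of the original answer):

```python
from fractions import Fraction

# Mixed number 4 3/4 -> improper fraction:
# multiply the denominator by the whole part, then add the numerator.
whole, numerator, denominator = 4, 3, 4

improper = Fraction(whole * denominator + numerator, denominator)
print(improper)  # 19/4
```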
# Graduated cylinder

A graduated cylinder, measuring cylinder or mixing cylinder is a common piece of laboratory equipment used to measure the volume of a liquid. The markings are calibrated against actual volume measurements by the manufacturer. For example, if the area between the 40 mL mark and the 50 mL mark is divided into ten segments, each segment represents 1 mL. Single-scale cylinders are read from top to bottom (filling volume), while double-scale cylinders allow reading for both filling and pouring (reverse scale). A related instrument, the buret, is a scaled cylindrical tube attached to a stopcock, or valve. Graduated cylinders measure volumes accurately if the reading technique is correct; the nearly perfect cylindrical design allows equally spaced graduation marks. A cylinder has a spout for pouring and a built-in base so it can stand upright on its own. Plastic graduated cylinders (such as those from BRAND) are made of high-quality plastics with excellent chemical resistance and are a safer alternative to glass; liquid in plastic cylinders doesn't form a meniscus, and level liquid avoids confusion and errors when reading.
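The segment arithmetic described above (ten segments between the 40 mL and 50 mL marks) can be sketched as:

```python
# One graduation's value = (difference between two labelled marks) / segments.
# Example from the text: ten segments between the 40 mL and 50 mL marks.
upper_ml, lower_ml = 50.0, 40.0
segments = 10

increment_ml = (upper_ml - lower_ml) / segments
print(increment_ml)  # 1.0 -> each segment represents 1 mL
```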
To use one, steady the tube with one hand while pouring the liquid you are measuring into it from another container — for example, filling a 100-mL graduated cylinder to exactly 40.0 mL using a beaker and a pipette. The process of calculating volume using a graduated cylinder is straightforward, but certain steps must be taken to ensure an accurate reading and maintain a safe working environment. With a mixing cylinder, the metered liquid does not pour directly but is often removed using a cannula. Large graduated cylinders are usually made of polypropylene, for its excellent chemical resistance, or polymethylpentene, for its transparency, making them lighter and less fragile than glass. Polypropylene (PP) is easy to autoclave repeatedly; however, autoclaving in excess of about 121 °C (250 °F) can warp or damage polypropylene cylinders and affect their accuracy (typical commercial-grade polypropylene melts above about 177 °C / 351 °F). For accuracy, the volume scale carries 3 significant digits: 100 mL cylinders have 1 mL graduations, while 10 mL cylinders have 0.1 mL graduations.
Reading is done at the meniscus because of the nature of the liquid in the tube: the liquid is attracted to the surrounding wall by molecular forces, which forces the surface to develop either a convex or a concave shape, depending on the liquid. Read the water's volume at the bottom of the meniscus, with the scale at eye level. The error, give or take 0.1 mL, must be included too. Each marked line on the graduated cylinder represents a set amount of liquid; on scales that count by 1s, readings between lines require a decimal answer. Cylinders are calibrated either "to contain" ("TC") or "to deliver" ("TD"); the international symbols "IN" and "EX" are now more likely to be used instead of "TC" and "TD" respectively. Common sizes are 10 mL, 25 mL, 50 mL and 100 mL. Graduated cylinders can be made of glass, borosilicate or plastic, and must be read carefully to ensure accuracy no matter what material is used. As a reading exercise with a 10-mL graduated cylinder, first subtract 8 mL − 6 mL = 2 mL to find the span between two labelled marks, then divide by the number of segments between them to get the value of one division. A graduated cylinder can also be used to find the volume of a solid, such as an egg, by measuring the liquid it displaces.
Unwanted particles or drops of liquid clinging inside the cylinder could throw off the measurement, so start with a clean cylinder and make sure it is on a flat surface. Good cylinders have molded graduations and ring marks at the primary scale points, calibrated "In". A dictionary definition: a graduated cylinder is a tall, narrow container with a volume scale, used especially for measuring liquids. Another use is to measure the amount of water displaced by submerged objects — a battery, a blob of glue, a screw — and so compare which object has the greatest or least volume. Plastic cylinders won't break, chip or shatter amid the bustle and bumps of everyday lab work. Standard (Class B) accuracy is good enough for most applications in the home or at work, although not quite good enough for analytical chemistry methods. VITLAB was the first manufacturer to produce Class A graduated cylinders from PMP that are certified compliant according to DIN 12681.
Another example: if a reading is calculated to be 40.0 mL, the derived error is one tenth of the least significant figure, so the value is reported as 40.0 ± 0.1 mL. Formerly the tolerances for "to deliver" and "to contain" cylinders were distinct; now they are the same. Mixing cylinders have ground-glass joints instead of a spout, so they can be closed with a stopper or connected directly with other elements of a manifold. A graduated cylinder has lines printed on its side showing how much it contains; on a typical 100 mL cylinder, each line goes up by 1 mL. When reading, keep the graduated cylinder on the desk and lower your eyes to the level of the meniscus, then read where the bottom of the meniscus is. Certified cylinders are supplied with a lot certificate bearing the batch number and the actual nominal value ascertained under the test conditions. (Parts of this material are by Kristy Barkan, who began her writing career in 1998 as a features reporter for the University at Buffalo's "Spectrum" newspaper.)
The curve at the surface of the water in a graduated cylinder is called the meniscus; this dip forms because liquid molecules are more attracted to the glass than they are to each other. Take the liquid measurement at the very bottom of the dip. Count the number of segments up to the line nearest the meniscus, then calculate the volume of the liquid by adding the whole measurement to the sum of the segments; be sure to include one point of estimation in your reading. For instance, if the reading works out to 36.5 mL, the value including its error is 36.5 ± 0.1 mL. Graduated cylinders are generally more accurate and precise than laboratory flasks and beakers, but they should not be used to perform volumetric analysis; volumetric glassware, such as a volumetric flask or volumetric pipette, should be used instead, as it is even more accurate and precise. Graduated cylinders are also sometimes used to measure the volume of a solid indirectly, by measuring the displacement of a liquid: to find the volume of an egg, for example, first find the volume of the water without the egg (say, 4 mL), then subtract it from the level read with the egg submerged. Likewise, after introducing a 5-gram object into a graduated cylinder, the volume in the cylinder increased from 15 mL to 19 mL, so the object occupies 4 mL.
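The displacement example (a 5 g object raising the level from 15 mL to 19 mL) translates directly into a density calculation; a minimal Python sketch:

```python
# Density by water displacement: a 5 g object raises the level
# in the cylinder from 15 mL to 19 mL (example from the text).
mass_g = 5.0
level_before_ml = 15.0
level_after_ml = 19.0

object_volume_ml = level_after_ml - level_before_ml  # displaced volume
density_g_per_ml = mass_g / object_volume_ml

print(object_volume_ml)  # 4.0
print(density_g_per_ml)  # 1.25
```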
[Figure: a traditional graduated cylinder (A) and mixing cylinders (B).] Select a cylinder that is large enough to hold the volume of liquid being measured, and determine the increments of measurement on the tube. Class B glass graduated cylinders are more accurate than Erlenmeyer flasks and beakers and can be used to measure the volume of irregular objects by the amount of water they displace. The formula for the volume of a cylinder is V = Bh, i.e. V = πr²h; since the radius of the base is half the diameter (d = 2r), another way to write it is height × π × (diameter / 2)². As another displacement exercise: an object weighing 89.7 g placed in a graduated cylinder displaces the volume from 50.0 mL to 60.5 mL, giving a density of 89.7 g / 10.5 mL ≈ 8.54 g/mL.
In practice activities, you determine the volume of colored water in several different graduated cylinders. To apply the cylinder-volume formula, first measure the diameter of the base (usually easier than measuring the radius), then measure the height of the cylinder; for example, substituting r = 8 and h = 15 in V = πr²h gives V = 960π. For a cylindrical shell — a roll of toilet paper, say — the volume is π × h × (R² − r²), where R is the external radius and r is the internal radius. A graduated cylinder is meant to be read with the surface of the liquid at eye level, where the center of the meniscus shows the measurement line; the volume corresponding to each division must first be determined. Therefore, the more precise value of a 36.5 reading equates to 36.5 ± 0.1 mL. Class A cylinders have double the accuracy of Class B, and a Class B graduated cylinder is guaranteed to be accurate to 1%. A traditional graduated cylinder is usually narrow and tall, so as to increase the accuracy and precision of volume measurement; typical capacities of graduated cylinders are from 10 mL to 1000 mL. Returning to the 5-gram object whose immersion raised the level from 15 mL to 19 mL: of the choices offered (1.25, 4, 5, 15 and 19 g/mL), its density is (A) 1.25 g/mL.
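The cylinder and shell volume formulas can be sketched in Python. The r = 8, h = 15 substitution is the one from the text; the shell dimensions are made up for illustration:

```python
import math

# Volume of a cylinder: V = pi * r**2 * h, with r = diameter / 2.
def cylinder_volume(diameter, height):
    radius = diameter / 2
    return math.pi * radius**2 * height

# Volume of a cylindrical shell (e.g. a roll of toilet paper):
# V = pi * h * (R**2 - r**2), R = external radius, r = internal radius.
def shell_volume(outer_r, inner_r, height):
    return math.pi * height * (outer_r**2 - inner_r**2)

# r = 8 (so diameter 16) and h = 15, as in the text's substitution:
print(round(cylinder_volume(16, 15)))  # 3016 (i.e. 960*pi cubic units)
```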
{}
# Extract the different components of variance in a linear mixed model in R

Consider a mixed model as follows.

```r
library(lme4)
data <- structure(list(blk = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3L),
                       gent = c(1, 2, 3, 4, 7, 11, 12, 1, 2, 3, 4, 5, 9, 1, 2, 3, 4, 8, 6, 10L),
                       yld = c(83, 77, 78, 78, 70, 75, 74, 79, 81, 81, 91, 79, 78, 92, 79, 87, 81, 96, 89, 82L),
                       syld = c(250, 240, 268, 287, 226, 395, 450, 260, 220, 237, 227, 281, 311, 258, 224, 238, 278, 347, 300, 289L)),
                  .Names = c("blk", "gent", "yld", "syld"),
                  class = "data.frame", row.names = c(NA, -20L))
data$blk <- as.factor(data$blk)
data$gent <- as.factor(data$gent)
```

The data are unbalanced.

# Mixed effect model

```r
frmla <- "syld ~ 1 + gent + (1|blk)"
library(lme4)
model <- lmer(formula(frmla), data = data)
model
```

```
Linear mixed model fit by REML ['merModLmerTest']
Formula: syld ~ 1 + gent + (1 | blk)
   Data: data
REML criterion at convergence: 73.9572
Random effects:
 Groups   Name        Std.Dev.
 blk      (Intercept)  9.385
 Residual             16.919
Number of obs: 20, groups:  blk, 3
Fixed Effects:
(Intercept)        gent2        gent3        gent4        gent5        gent6
    256.000      -28.000       -8.333        8.000       32.127       43.678
      gent7        gent8        gent9       gent10       gent11       gent12
    -36.805       90.678       62.127       32.678      132.195      187.195
```

Primarily I want to compare the gent levels by LS means.
```r
library("lmerTest")
lsmeans(model)
```

```
Least Squares Means table:

         gent Estimate Standard Error  DF t-value Lower CI Upper CI p-value
gent  1   1.0    256.0           11.2 6.9    22.9      229      283  <2e-16 ***
gent  2   5.0    228.0           11.2 6.9    20.4      201      255  <2e-16 ***
gent  3   6.0    247.7           11.2 6.9    22.2      221      274  <2e-16 ***
gent  4   7.0    264.0           11.2 6.9    23.6      237      291  <2e-16 ***
gent  5   8.0    288.1           18.5 8.0    15.6      245      331  <2e-16 ***
gent  6   9.0    299.7           18.5 8.0    16.2      257      342  <2e-16 ***
gent  7  10.0    219.2           18.5 8.0    11.8      177      262  <2e-16 ***
gent  8  11.0    346.7           18.5 8.0    18.8      304      389  <2e-16 ***
gent  9  12.0    318.1           18.5 8.0    17.2      275      361  <2e-16 ***
gent 10   2.0    288.7           18.5 8.0    15.6      246      331  <2e-16 ***
gent 11   3.0    388.2           18.5 8.0    21.0      346      431  <2e-16 ***
gent 12   4.0    443.2           18.5 8.0    24.0      401      486  <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

In addition I am interested in variance partitioning. The variance components due to the random effect and the residual can be estimated as follows.

```r
VCrandom <- VarCorr(model)
print(VCrandom, comp = "Variance")
```

```
 Groups   Name        Variance
 blk      (Intercept)  88.083
 Residual             286.250
```

How to partition the total variance into components due to each of the factors gent and blk along with the residual? Something similar to the output given by PROC MIXED of SAS, where MSE is computed even when estimation is by ML or REML instead of least squares. Should I treat the fixed effect as random just for the purpose of getting variance components?

```r
frmla2 <- "syld ~ 1 + (1|gent) + (1|blk)"
model2 <- lmer(formula(frmla2), data = data)
model2
VCrandom2 <- VarCorr(model2)
print(VCrandom2, comp = "Variance")
```

```
 Groups   Name        Variance
 gent     (Intercept) 4152.08
 blk      (Intercept)  116.11
 Residual              274.92
```

If there is no random effect, variance components can be estimated using the least squares approach (ANOVA, sums of squares, MSE). The package mixlm has provision for variance partitioning using SS in the case of mixed models.
```r
library(mixlm)
mixlm <- lm(syld ~ 1 + r(gent) + r(blk), data)
Anova(mixlm, type = "III")
```

```
Analysis of variance (unrestricted model)
Response: syld
           Mean Sq  Sum Sq Df F value Pr(>F)
gent       5360.49 58965.36 11   18.73 0.0009
blk         638.58  1277.17  2    2.23 0.1886
Residuals   286.25  1717.50  6       -      -

  Err.term(s) Err.df VC(SS)
1 gent    (3)      6 3044.5
2 blk     (3)      6   52.8
3 Residuals -       -  286.3
(VC = variance component)

Expected mean squares:
gent      (3) + 1.66666666666667 (1)
blk       (3) + 6.66666666666667 (2)
Residuals (3)

WARNING: Unbalanced data may lead to poor estimates
```

The estimates are different.

```r
# Total variance
var(data$syld)
```

|source   |  model1|  model2|  mixlm|
|:--------|-------:|-------:|------:|
|gent     |      NA| 4152.08| 3044.5|
|blk      |  88.083|  116.11|   52.8|
|Residual | 286.250|  274.92|  286.3|

Can the fixed-effect variance be extracted using the predict function, as suggested here: *In R: How to extract the different components of variance in a linear mixed model*?

```r
var(predict(model))
```

Which is the most appropriate method, compatible with (RE)ML estimates in lme4?

This isn't exactly an answer, but whenever I see a question about "explained variance" in mixed models, I always think of this email from Douglas Bates, the original author of lme4 and co-author of nlme, on the R-Sig-ME mailing list on Feb 26, 2010, in response to:

> I have written some code to implement Gelman & Pardoe's Rsq for an lmer object. It gives some believable results, but it's difficult to be confident because of the translation from Bayesian into frequentist paradigms. If anyone is interested then I'd be really happy to discuss this off-list and share/develop the code.

I think it is useful to have this email here on CV. This is Bates' reply:
Or one could use just the residual sum of squares without the penalty or the minimum residual sum of squares obtainable from a given set of terms, which corresponds to an infinite precision matrix. I don't know, really. It depends on what you are trying to characterize. In other words, what's the purpose? What aspect of the R^2 for a linear model are you trying to generalize? I'm sorry if I sound argumentative but discussions like this sometimes frustrate me. A linear mixed model does not behave exactly like a linear model without random effects so a measure that may be appropriate for the linear model does not necessarily generalize. I'm not saying that this is the case but if the request is "I don't care what the number means or if indeed it means anything at all, just give me a number I can report", that's not the style of statistics I practice. I regard Bill Venables' wonderful unpublished paper "Exegeses on Linear Models" (just put the name in a search engine to find a copy - there is only one paper with "Exegeses" and "Linear Models" in the title) as required reading for statisticians. As Bill emphasizes in that paper, statistics is not just a collection of formulas (many of which are based on approximations). It's about models and comparing how well different models fit the observed data. If we start with a formula and only ask ourselves "How do we generalize this formula?" we're missing the point. We should start at the model. In a linear model the R^2 statistic is a dimensionless comparison of the quality of the current model fit, as measured by the residual sum of squares, to the fit one would obtain from a trivial model. When the current model can be shown to contain a model with an intercept term only (and whose coefficient will be estimated by the mean response) then that model fit is the trivial model. Otherwise the trivial model is a prediction of zero for each response. 
We know that the trivial model will produce a greater residual sum of squares than the current model fit because the models are nested. The R^2 is the proportion of variability not accounted for by the trivial model but accounted for by the current model (my apologies to my grammar teachers for having juxtaposed prepositions). The interesting point there is that when you think of the relationships between models you can determine how you handle the case of a model that does not have an intercept term. If you start from the formula instead you can end up calculating a negative R^2 because you compare models that are not nested. Such nonsensical results are often reported. (I think it was the Mathematica documentation that gave a careful explanation of why you get a negative R^2 instead of recognizing that the formula they were using did not apply in certain cases.) It may be that there is a sensible measure of the quality of fit from a linear mixed model that generalizes the R^2 from a linear model. I don't see an obvious candidate but I will freely admit that I haven't thought much about the problem. I would ask others who are thinking about this to consider both the "what" and the "why". George Mallory's justification of "because it's there" for attempting to climb Everest is perhaps a good justification for such endeavors (Mallory may have questioned his rationale as he lay freezing to death on the mountain). I don't think it is a good justification for manipulating formulas.
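For intuition about what the SS-based variance components in the question are doing, here is a language-neutral illustration (in Python rather than R, on simulated *balanced* one-way random-effects data, where the method-of-moments answer has a clean closed form): the residual component is the MSE, and the between-group component is (MSB − MSE)/n.

```python
import numpy as np

rng = np.random.default_rng(0)
g, n = 6, 10                        # groups and observations per group
sigma_b, sigma_e = 3.0, 2.0         # true between-group and residual SDs
b = rng.normal(0.0, sigma_b, g)     # random group effects
y = b[:, None] + rng.normal(0.0, sigma_e, (g, n))

group_means = y.mean(axis=1)
grand_mean = y.mean()

# Classical one-way ANOVA mean squares
msb = n * ((group_means - grand_mean) ** 2).sum() / (g - 1)
mse = ((y - group_means[:, None]) ** 2).sum() / (g * (n - 1))

# Equate mean squares to their expectations:
# E[MSE] = sigma_e^2,  E[MSB] = sigma_e^2 + n * sigma_b^2
var_e = mse
var_b = (msb - mse) / n
print(var_b, var_e)   # estimates of sigma_b^2 = 9 and sigma_e^2 = 4
```

With balanced data these moment estimates coincide with REML for the one-way layout; with unbalanced data (as in the question) the SS-based and (RE)ML answers diverge, which is exactly the discrepancy shown in the question's comparison table.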
{}
# Explaining how 1 + 2 + 3 + . . . can possibly equal -1/12 to a kid

When I did the math biographies for my kids last week, my older son said that the thing in math that he's seen but that he does not believe is this equality:

1 + 2 + 3 + 4 + . . . . = -1/12

This sum was made popular by a Numberphile video a couple of years ago (which now has over 4 million views!):

There have also been several good follow-ups. For example, this video with Ed Frenkel, which was also produced by Numberphile:

and this video by Mathologer, which is absolutely excellent:

I spent some time today trying to think about how to discuss this series with my older son. I'm glad that he is bothered by the result – it is obviously very, very strange. Obviously I can't go into the details about the Riemann zeta function with him, but I still think there's some way to help him make some sense of the series.

So, I spent the day reviewing some ideas in G. H. Hardy's book "Divergent Series." Here are a few passages that caught my eye:

(a) Book cover

I don't remember where I heard about this book. My best guess is that it was mentioned in Jordan Ellenberg's "How Not to be Wrong" in the section about Grandi's series. Unfortunately I only have the audiobook version of "How Not to be Wrong" and don't know how to search it!

(b) First passage

The remark beginning at "It is plain . . ." caught my attention. This is right at the beginning of the book – section 1.3. The statement: "it does not occur to a modern mathematician that a collection of mathematical symbols should have a 'meaning' until one has been assigned to it by definition" also felt very powerful to me.

(c) Second passage

The continuation of the previous page is also important – the point about Cauchy was definitely mentioned in "How Not to be Wrong" as well.

(d) Third passage

For the third passage we have to go much later in the book – nearly to the end, in fact. The passage here – 13.10.11, in particular – shows the strange result.
Not in a Numberphile video, or some other internet video, but in a math textbook by G. H. Hardy:

(e) Fourth passage

Finally – and this really is just about the last page of the book – section 13.17 provides a word of caution and an example of what can go wrong playing around with these divergent infinite series.

So, I'm going to spend the next few days and maybe even the next few weeks thinking about how to share some sort of idea about this strange series with my son. I'll welcome any suggestions!

# Talking through the area of the Koch snowflake with kids

This project is the 2nd of two projects on the Koch snowflake. The reason for the projects was that my younger son wondered how the Koch snowflake could have an infinite perimeter but a finite area. The first project (about the perimeter) is here:

Exploring the perimeter of the Koch snowflake

Our approach to studying the area was similar to the approach for studying the perimeter. Essentially we looked at the steps in the construction of the Koch snowflake and then looked for a pattern. Here are the initial thoughts from the kids about the area:

The first step in studying the area was to look at the total area of the first few iterations of the Koch snowflake. I decided to avoid the complexity of triangle geometry formulas and just talked about scaling. My younger son also came up with a really nice argument for

Now that we've seen the first few cases, can we find the pattern? The amount of area that we add each time has a fairly simple pattern – it is just multiplication by 4 and division by 9. The only time that doesn't happen is in the first step. Can we connect the numbers with the geometry?

Now that we've seen and understood the pattern, how can we figure out the sum? I love that the boys saw that the main sum we were looking at here was less than 2. I didn't want to derive the geometric sum formula, so I just gave it to them. We can talk about it another time.
That formula seems to be the easiest way to find the exact value of the sum, though.

Finally we wrapped up and discussed the process we used to study the area and perimeter. I don't really believe that my younger son now understands every detail of what we talked about, but I hope that he's a little bit less confused about the area and perimeter of the Koch snowflake. I think the math here is something that all kids would find interesting.

# Exploring the perimeter of the Koch Snowflake

Last week we had a fun talk about the boys' "math biographies":

Math Biographies for my kids

When I asked my younger son to tell me about a math idea that he's seen but that he doesn't believe to be true, he brought up the area and perimeter of the Koch snowflake. The perimeter is infinite while the area is finite, and he does not believe that these two facts can go together. Today I thought it would be fun to talk about the perimeter of the Koch snowflake – no need to tackle both ideas at once.

Here's the introduction to the Koch snowflake and some thoughts from my younger son on what he finds confusing about the shape:

After that introduction we began to tackle the problem of finding the perimeter. We began by looking at the first couple of iterations in the construction of the snowflake to try to find a pattern. At this point in the project the boys didn't quite see the pattern:

As a way to help the boys see the pattern in the perimeter, I asked my younger son to calculate the perimeter of the 4th iteration. My older son had been doing most of the calculating up to this point, and I hoped that my younger son working through the details here would shed a bit more light on what was going on as you move from one step to the next. The counting project we reference at the end of this video is here:

John Golden's visual pattern problem

Finally, we looked at how we could use math to describe the pattern that we found in the last video.
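The pattern is easy to check with a few lines of code — a quick sketch (assuming the starting triangle has sides of length 1, so the perimeter starts at 3): each iteration replaces every side with 4 sides of 1/3 the length, so the perimeter picks up a factor of 4/3 every step and grows without bound.

```python
def koch_perimeter(iterations, side=1.0):
    """Perimeter of the Koch snowflake after a number of iterations.
    Start: 3 sides of length `side`. Each step: sides x4, lengths /3."""
    sides, length = 3, side
    for _ in range(iterations):
        sides *= 4
        length /= 3
    return sides * length

for k in range(5):
    print(k, koch_perimeter(k))   # 3, 4, 16/3, 64/9, ... growing by 4/3 each time
```

Running the loop out far enough makes the "infinite perimeter" claim concrete: after 50 iterations the perimeter of a snowflake built on a 1-unit triangle already exceeds a million units.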
We also discuss what it means mathematically for the perimeter to be infinite. We need fairly precise language to describe the situation here, so this part of the project also gives the kids a nice way to learn the language of math.

# Does 1 – 1 + 1 – 1 + 1 . . . . . = 1/2?

This morning, for a little first-day-of-school fun, we played with Grandi's series. I've seen the series pop up in a few places in the last few days – first in part of a little note I wrote up inspired by a Gary Rubenstein talk:

A Talk I'd love to give to calculus students

and then a day or two later in this tweet from (the twitter account formerly known as) Five Triangles:

So, what's going on with this series? What would the boys think? Here's their initial reaction:

And here's their reaction when I showed them what happens when we assume that the series does sum to some value x:

We have touched a little bit on this series (and my favorite math term, "Algebraic Intimidation") previously:

Jordan Ellenberg's "Algebraic Intimidation"

It is fun to hear the boys struggle to try to explain / reconcile the strange ideas in Grandi's series. I'm also glad that they are learning to think through what's going on rather than just believing the algebra.

# A talk I'd love to give to Calc students

I saw a neat video from Gary Rubenstein recently:

In the video he presents a neat theorem about partitions due to Euler. Simon Gregg, by coincidence, was looking at partitions recently, too, and has written up a nice post which includes some ideas from Rubenstein's video:

The part that struck me in Rubenstein's video wasn't about partitions, though; it was about the manipulation of the infinite product. It all works out just fine, which is pretty neat, but sometimes manipulating infinite quantities produces strange results.
See this famous video from Numberphile, for example:

Just as an aside, here's a longer and more detailed explanation of the same result:

The fascinating thing to me is that Euler's proof in Rubenstein's video is easy to believe, but the sum in the Numberphile video is not easy to believe at all. Both are examples, I think, of what Jordan Ellenberg called "algebraic intimidation" in his book How Not to be Wrong. I used Ellenberg's idea when I talked about the -1/12 sum with my kids:

Jordan Ellenberg's "Algebraic Intimidation"

The talk I'd like to give to calculus students would start with the theorem presented in Rubenstein's video. Once the students were comfortable with the ideas about the infinite products and the ideas about partitions, I'd move on to the idea in the Numberphile video. It would be a fun way to show students that infinite sums and products can be strange and that you can sometimes stumble on really strange results.

# A fun coincidence with our "1/3 in binary" project

Saw this tweet from Matt Henderson (via a Steven Strogatz retweet) today:

It first reminded me of one of Patrick Honner's blog posts from a few years ago:

Honner's post plus a lucky coincidence with a Numberphile video inspired a fun project with the boys:

Numberphile's Pebbling the Chessboard game and Mr. Honner's square

It wasn't until later in the day that a different thought about this graphic hit me – we did a project about this shape LAST WEEK!!

Revisiting 1/3 in Binary

Since I was looking for a quick little something to do with the boys tonight, I had each one of them take a look at the tweet and tell me what they thought.
My younger son went first – he had lots of neat thoughts about the shape, and we eventually found our way to the connection between this shape and writing 1/3 in binary:

My older son went next – he didn't see the connection right away, but we eventually got there, too (oh, and ugh, just listening to this video I realize that I misunderstood my son when he was talking about a geometric series – whoops, he did say the right definition).

Thank you internet – what a fun coincidence!

# Revisiting 1/3 in binary

Last night we talked about writing $\pi$ in base 3. A long long long time ago we talked about writing 1/3 in binary. Here are those two projects:

Pi in base 3

Writing 1/3 in Binary

I suspected that the boys wouldn't remember the project about writing 1/3 in binary, so I thought it would make a good follow-up to last night's project. I started by just posing the question and seeing where things went. The boys had lots of ideas and we eventually got most of the way there:

At the end of the last video the boys figured out that if our number was indeed 1/3, then multiplying it by 3 should give 1. That reminded them of the proof that 0.9999…. (repeating forever) = 1. We reviewed that proof and applied it to the situation we had now.

Just one little problem . . . what if we apply the idea in this proof to a different series, say 1 + 2 + 4 + 8 + 16 + . . . . ? We've looked at the idea in this video before:

Jordan Ellenberg's "Algebraic Intimidation"

We felt pretty comfortable believing that 0.9999…. = 1 and that we'd found the correct series for 1/3 in binary, but do we believe the results when we apply the exact same ideas to a new series?
I love projects like this one 🙂

# A nice series problem for kids from Five Triangles

Back in 2013 we did a neat project based on Numberphile's "Pebbling the Chessboard" video:

That video also reminded me of a neat "proof without words" that Patrick Honner had written about:

Our project is here:

Numberphile's Pebbling the Chessboard game and Mr. Honner's Square

and Patrick Honner's blog post is here:

Proof Without Words: Two Dimensional Geometric Series

Tonight I saw a neat tweet from Five Triangles that reminded me of the prior project:

I thought it would be a fun one to try out with my older son, though I didn't quite know how to introduce the problem. I started with a slightly easier series as a trial:

1/2 + 2/4 + 3/8 + 4/16 + . . .

Since things seemed to go pretty well with the first problem, I decided to go ahead and try out the series posted by Five Triangles:

So, a neat problem for kids building off of the "simple" infinite series 1 + 1/2 + 1/4 + . . . . As our project from 2013 shows, the more complicated versions can have interesting geometric interpretations, but I'll leave those for another time. Tonight it was just fun to see some neat arithmetic with infinite series.

# Building arithmetic and number sense by talking about geometric series

I've been a little busy both at home and at work this week, and as of this morning I hadn't given any thought to our Family Math projects for this weekend. More or less on a whim I decided to return to an old favorite topic for today's project – infinite series, and specifically geometric series. Two of our prior talks on infinite series are linked below; you can find others on the blog under the tag "infinite series":

Just for Fun: Some Infinite Sums

Talking about Infinite Series

The first part of the talk today was introducing the concept of a geometric series. The main idea I'm trying to get at today is showing how we can extend a common way of showing why 0.9999… = 1 to the problem of summing a geometric series.
We talk through some of the basic ideas using the series 1 + 1/2 + 1/4 + 1/8 + . . . as our example.

The next thing that we looked at was the series 1 + 1/3 + 1/9 + 1/27 + . . . . My older son initially believes that this series will also sum to 2 because it goes on forever. My younger son's initial guess is that it will sum to 1.5. His reason is that (except for the first term) the terms are smaller than those in the series 1 + 1/2 + 1/4 + 1/8 + . . . .

One theme that shows up here, and that will continue for the rest of today's project, is that subtracting two infinite series is a little confusing to the boys. I should have found a better way, or at least an alternate way, to explain this idea to them.

In the next talk I wanted to have the boys pick their own series to sum. Unfortunately, I wasn't clear with them that I wanted to look only at series where the terms went to zero. That lack of clarity caused a small problem at the start of this part of the project. Once we got on the right path, we worked through the series 1 + 1/5 + 1/25 + 1/125 + . . . without too much difficulty. But the next series caused a little bit of trouble: 1 – 1/3 + 1/9 – 1/27 + . . . . The subtraction and the negative signs were big stumbling blocks here. I really needed to provide a better way to help them see what was going on when we subtract one series from another.

In the next part of our talk we moved on to talking about a general geometric series. This discussion is a big step up in abstraction. I think this abstraction was not as difficult for my older son as it was for my younger son, which isn't a huge surprise. Subtracting the individual terms in each series still presented a little bit of difficulty. We did manage to find a fairly simple formula for our sum, though. Even with the difficulty we had, I think the discussion here is a nice example of how you can take an idea from a specific setting and use it in a slightly more abstract setting.
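The sums in this part of the project can be checked numerically with a short script — a sketch assuming the standard closed form a / (1 − r) for |r| < 1, which is the kind of formula the discussion arrives at:

```python
def geometric_partial_sum(r, n, a=1.0):
    """Sum of the first n terms: a + a*r + a*r^2 + ... + a*r^(n-1)."""
    total, term = 0.0, a
    for _ in range(n):
        total += term
        term *= r
    return total

# Partial sums approach a / (1 - r) when |r| < 1
print(geometric_partial_sum(1 / 2, 50))    # ≈ 2, the first series above
print(geometric_partial_sum(1 / 3, 50))    # ≈ 1.5, the second series
print(geometric_partial_sum(-1 / 3, 50))   # ≈ 0.75, the alternating series
```

The alternating series 1 − 1/3 + 1/9 − 1/27 + . . . is just the r = −1/3 case, so the same formula gives 1 / (1 + 1/3) = 3/4 — the negative signs that caused trouble in the videos are handled automatically.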
The last part of today's project involved using the formula we found in the last video in the situations that we'd already considered. A few examples showed that our formula seemed to match the prior results. Yay!

We then wrapped up by looking at a few situations where the terms in the series do not go to 0. Here the formula produces some results that seem strange. For now I'm leaving these odd results as fun little paradoxes for the boys to ponder.

Watching these talks as I put this blog together makes me wish I'd done a better job with this project. I think that the important mathematical ideas here can be made accessible to kids if you present them in the right way. The results are neat and some seem strange (you'll hear my son reference Numberphile's video about the result 1 + 2 + 3 + . . . = -1/12 in the last video – so these strange results can really make kids think). Hopefully the next time we return to this topic I'll remember the lessons from this one and present some ideas in a slightly different (and hopefully slightly better!) way.

# Jordan Ellenberg's "Algebraic Intimidation"

One of the ideas that seems to have stuck in my mind from reading "How Not to be Wrong" a couple of times is the concept of "algebraic intimidation." Ellenberg uses this phrase to describe one of the standard ways to "prove" that 0.9999….. = 1. I go through the proof that he's talking about in the first video below if you've not seen it before.

The idea of algebraic intimidation is, I suppose, pretty simple: the math all looks right, therefore the result must be right, because you **better** believe the math!

This concept obviously generalizes to all sorts of situations. As the maybe useful / maybe harmful (depending on what year it is) quantitative ideas seem to be creeping back into the financial markets, I feel like I'm seeing the old algebraic intimidation hammer at work a lot more frequently these days. But, hey, we all miss 2008, right?
While a post about martingales might be more relevant to the attempts at using math to intimidate in the financial markets, I think Ellenberg's example is infinitely more interesting, particularly for students, and I'd love to use the examples below in a room full of kids who are interested in math.

The idea of talking about algebraic intimidation once again came up this past weekend in our Family Math project. I asked the boys what they wanted to talk about and they gave me a surprising answer – "infinite series." The entire set of talks from this weekend is here:

The two conversations relevant to algebraic intimidation are below and came when one of the examples that they wanted to talk about was "the -1/12 series." Say what you want about that old Numberphile video, but the ideas in it sure stuck with my kids!

I led off this part of our project with the standard proof of why 0.999…. = 1 and then, following some examples in Ellenberg's book, extended the ideas in that proof to a few other areas where you get some rather odd results. We then moved on to the "-1/12 series" and followed the ideas in the original Numberphile video.

You'll see that both kids are quite skeptical of the results. My younger son in particular is almost physically upset. That's good. I want them to learn to question results rather than just blindly trusting the math, and I especially want them to feel free to question results that seem odd. You certainly won't find many results that seem more goofy than the ones below 🙂
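One concrete way to see why the same shift-and-subtract algebra is trustworthy for 0.999…. but suspicious for a series like 1 + 2 + 4 + 8 + . . . is to look at partial sums — a quick sketch:

```python
def partial_sums(term, n):
    """First n partial sums of the series sum(term(k) for k = 0, 1, 2, ...)."""
    total, out = 0.0, []
    for k in range(n):
        total += term(k)
        out.append(total)
    return out

# 0.9 + 0.09 + 0.009 + ... : the partial sums really do close in on 1,
# so the algebraic answer agrees with what the series is actually doing
print(partial_sums(lambda k: 9 / 10 ** (k + 1), 10)[-1])

# 1 + 2 + 4 + 8 + ... : the same algebra "gives" -1,
# but the partial sums just blow up — there is no limit to assign
print(partial_sums(lambda k: 2.0 ** k, 10)[-1])   # 1023.0
```

The algebra only finds the value of a sum *if a value exists*; when the partial sums diverge, the manipulation is answering a question the series never asked, which is the heart of the "algebraic intimidation" point.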
{}
# The minimum value

Question: The minimum value of $f(x)=\sin x$ in $\left[\frac{-\pi}{2}, \frac{\pi}{2}\right]$ is _________

Solution: The given function is $f(x)=\sin x, x \in\left[\frac{-\pi}{2}, \frac{\pi}{2}\right]$.

$f(x)=\sin x$

Differentiating both sides with respect to $x$, we get

$f^{\prime}(x)=\cos x$

For maxima or minima, $f^{\prime}(x)=0$

$\Rightarrow \cos x=0$

$\Rightarrow x=-\frac{\pi}{2}$ or $x=\frac{\pi}{2}$ in $\left[\frac{-\pi}{2}, \frac{\pi}{2}\right]$

Now, $f^{\prime \prime}(x)=-\sin x$

At $x=\frac{\pi}{2}$, we have $f^{\prime \prime}\left(\frac{\pi}{2}\right)=-\sin \frac{\pi}{2}=-1<0$

So, $x=\frac{\pi}{2}$ is the point of local maximum of $f(x)$.

At $x=-\frac{\pi}{2}$, we have $f^{\prime \prime}\left(-\frac{\pi}{2}\right)=-\sin \left(-\frac{\pi}{2}\right)=\sin \frac{\pi}{2}=1>0$         $[\sin (-\theta)=-\sin \theta]$

So, $x=-\frac{\pi}{2}$ is the point of local minimum of $f(x)$.

$\therefore$ Minimum value of $f(x)=f\left(-\frac{\pi}{2}\right)=\sin \left(-\frac{\pi}{2}\right)=-\sin \frac{\pi}{2}=-1$

Thus, the minimum value of $f(x)=\sin x$ in $\left[\frac{-\pi}{2}, \frac{\pi}{2}\right]$ is  ___−1___.
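As a quick numerical sanity check (a sketch, not part of the textbook solution), evaluating sin x on a fine grid over the interval confirms the minimum of −1 at x = −π/2:

```python
import math

# Grid over [-pi/2, pi/2], including both endpoints
xs = [-math.pi / 2 + k * (math.pi / 1000) for k in range(1001)]
values = [math.sin(x) for x in xs]

min_value = min(values)
argmin = xs[values.index(min_value)]
print(min_value, argmin)   # minimum -1 attained at x = -pi/2
```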
{}
A solid metal sphere of radius a = 1.30 cm is surrounded by a concentric spherical metal shell of inner radius b = 1.80 cm and outer radius c = 2.30 cm. The inner sphere has a net charge of Q1 = 4.00 µC, and the outer spherical shell has a net charge of Q2=-7.20 µC. What is Er at a point located at radius r = 2.70 cm, i.e. outside the outer shell? What is the surface charge density, sb, on the inner surface of the outer spherical conductor? What is the surface charge density, sc, on the outer surface of the outer spherical conductor?
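A sketch of the standard Gauss's-law approach (the constant, formulas, and resulting numbers below are my own working, so treat them as a check rather than an answer key): outside the shell, a Gaussian sphere encloses Q1 + Q2; the shell's inner surface must carry −Q1 so the field vanishes inside the conducting shell; the outer surface carries the remainder, Q1 + Q2.

```python
import math

k = 8.99e9                          # Coulomb constant, N*m^2/C^2
Q1, Q2 = 4.00e-6, -7.20e-6          # charges, C
b, c, r = 0.0180, 0.0230, 0.0270    # radii, m

# Radial field outside the shell: enclosed charge is Q1 + Q2
E_r = k * (Q1 + Q2) / r**2          # negative => points radially inward

# Induced charge on the shell's inner surface is -Q1
sigma_b = -Q1 / (4 * math.pi * b**2)

# The rest of the shell's charge, Q2 - (-Q1) = Q1 + Q2, sits on the outer surface
sigma_c = (Q1 + Q2) / (4 * math.pi * c**2)

print(E_r, sigma_b, sigma_c)
```

With these inputs this gives E_r ≈ −3.95 × 10⁷ N/C, σ_b ≈ −9.82 × 10⁻⁴ C/m², and σ_c ≈ −4.81 × 10⁻⁴ C/m².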
{}
# $A$ real-valued $\Rightarrow$ matrix $S$ of the Cholesky decomposition $A=SS^T$ is real-valued

Let $$A$$ be a symmetric positive definite invertible real valued matrix. Then we can write $$A=SS^T$$ where $$S$$ is a lower triangular matrix with positive entries on its diagonal. This decomposition is called the Cholesky decomposition. How do we know that the entries of $$S$$ are themselves real-valued? I need this property in order to solve a problem related to the Courant-Fischer min-max theorem.

• you should add the symmetric condition – Ahmad Bazzi Dec 13 '19 at 11:30
• $S$ can be computed by forward substitution. The substitution process does not require field extension. Therefore $S$ is real. – user1551 Dec 13 '19 at 13:52
• What is "forward substitution"? – Christian Singer Dec 13 '19 at 14:43
• "Forward substitution" refers to the practice of solving an indexed set of equations iteratively, in which we substitute the solution to the $(k-1)$-th equation into the $k$-th equation and solve the latter. You may think of it as mathematical induction. – user1551 Dec 14 '19 at 11:12

Let $$A_k$$ be the leading principal $$k\times k$$ submatrix of $$A$$. Clearly, $$A_1=S_1S_1^T$$ where $$S_1=\sqrt{A_1}$$ is a real $$1\times1$$ lower triangular matrix. Now suppose that for some $$k$$, $$A_{k-1}=S_{k-1}S_{k-1}^T$$ for some real lower triangular matrix $$S_{k-1}$$. Since $$A_{k-1}$$ is positive definite, $$S_{k-1}$$ is nonsingular. Also, as $$u^TA_ku>0$$ for every nonzero vector $$u$$, if we write $$A_k=\pmatrix{A_{k-1}&v_k\\ v_k^T&a_k}$$ and put $$u=(-v_k^TA_{k-1}^{-1},\,1)^T$$, we obtain $$a_k-v_k^TA_{k-1}^{-1}v_k>0$$. Therefore the equation $$\pmatrix{A_{k-1}&v_k\\ v_k^T&a_k} =\pmatrix{S_{k-1}&0\\ x^T&s}\pmatrix{S_{k-1}^T&x\\ 0&s}\tag{1}$$ has a unique solution $$x=S_{k-1}^{-1}v_k,\,s=\sqrt{a_k-x^Tx}=\sqrt{a_k-v_k^TA_{k-1}^{-1}v_k}$$. This means $$A_k=S_kS_k^T$$ for some real lower triangular matrix $$S_k$$.
So, if we start from $$A_1$$ and keep solving $$(1)$$ for $$k=2,3,\ldots$$, we see that $$A=SS^T$$ for some real lower triangular matrix $$S$$.
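The inductive argument is constructive, so it can be checked directly in code. Here is a sketch in Python/NumPy (the function name is mine): each step solves $S_{k-1}x = v_k$ by substitution and takes a real square root of the positive quantity $a_k - x^Tx$, so every entry of $S$ stays real.

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky factor built row by row, following the induction above:
    at step k, solve S_{k-1} x = v_k, then set s = sqrt(a_k - x^T x)."""
    n = A.shape[0]
    S = np.zeros_like(A, dtype=float)
    for k in range(n):
        x = np.linalg.solve(S[:k, :k], A[:k, k]) if k else np.zeros(0)
        S[k, :k] = x
        S[k, k] = np.sqrt(A[k, k] - x @ x)   # argument is positive: a_k - v^T A^{-1} v > 0
    return S

# Random symmetric positive definite test matrix: M M^T + n I
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)

S = cholesky_lower(A)
print(np.allclose(S @ S.T, A))   # True — and S is real by construction
```

Since the factor with positive diagonal is unique, this hand-rolled $S$ agrees with `np.linalg.cholesky(A)`, and no complex arithmetic ever enters the computation.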
{}
This notebook takes over from part I, where we explored the iris dataset. This time, we'll give a visual tour of some of the primary machine learning algorithms used in supervised learning, along with a high-level explanation of the algorithms.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from mlxtend.plotting import plot_decision_regions
from sklearn import metrics, model_selection
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

import warnings
warnings.filterwarnings("ignore")

sns.set(font_scale=1.5)
```

Data

```python
df = sns.load_dataset("iris")
```

The dataset has four different features, which makes it harder to visualize. But from our previous analysis, we could see that the most important features for distinguishing between the different species are petal length and petal width. To make it easier to visualize, we're going to focus on just those two.

```python
X = df[['petal_length', 'petal_width']]
y = df['species']

# change the labels to numbers
y = pd.factorize(y, sort=True)[0]
```

We know from the previous analysis that the labels are balanced, so we don't need to stratify the data. We'll just randomly divide up the dataset into testing and training sets using Scikit-learn.

```python
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, shuffle=True
)

# Convert the pandas dataframes to numpy arrays
X_array = np.asarray(X)
X_train_array = np.asarray(X_train)
X_test_array = np.asarray(X_test)
```

Let's take a look at our dataset. Since we're going to visualize a lot of algorithms, we'll use the mlxtend library and build a simple function to label the graphs.
```python
def add_labels(standardized=False):
    plt.title('Iris Dataset Visualized')
    if standardized:
        plt.xlabel('Petal Length (standardized)')
        plt.ylabel('Petal Width (standardized)')
    else:
        plt.xlabel('Petal Length (cm)')
        plt.ylabel('Petal Width (cm)')
    plt.tight_layout()
    plt.show()

y_str = y.astype(str)
y_str[y_str == '0'] = 'red'
y_str[y_str == '1'] = 'blue'
y_str[y_str == '2'] = 'green'

plt.scatter(X['petal_length'], X['petal_width'], c=y_str)
plt.xlim(0, 7.9)
plt.ylim(-0.9, 3.5)
```

## Algorithms

There are lots of good algorithms to try, and some will work better with some data than others. Here are all the ones we'll look at:

- Gaussian Naive Bayes
- Logistic Regression
- K-Nearest Neighbors
- Support Vector Machines (linear and nonlinear)
- Linear Discriminant Analysis / Quadratic Discriminant Analysis
- Decision Tree Classifier
- Perceptron
- Neural Network (Multi-layer Perceptron)

### Gaussian Naive Bayes

The Gaussian naive Bayes classifier is based on Bayes' Theorem. Bayes' Theorem helps us answer an incredibly broad set of questions: "Given what I know, how likely is some event?" It provides the mathematical tools to answer this question by incorporating evidence into a probabilistic prediction. Mathematically, the question "Given B, what is the probability of A?" is written out in Bayes' Theorem as follows:

$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$

where

- $P(A|B)$ is the probability of A given B, which is what we're trying to calculate
- $P(B|A)$ is the probability of B given A
- $P(A)$ is the probability of A
- $P(B)$ is the probability of B

There are many types of naive Bayes classifiers. They are all based on Bayes' Theorem but are meant for different data distributions. For cases when the data are normally distributed, i.e. Gaussian, the Gaussian naive Bayes classifier is the right choice. From our previous visualizations, the data do look fairly Gaussian, so this may be a good model for this dataset.
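As a quick numeric illustration of the formula (all numbers here are made up): suppose 1% of email is spam, 90% of spam contains the word "offer", and 5% of all email does. Bayes' Theorem then gives the probability that an email containing "offer" is spam:

```python
# All numbers here are made up for illustration
p_spam = 0.01            # P(A): prior probability an email is spam
p_word_given_spam = 0.9  # P(B|A): "offer" appears, given spam
p_word = 0.05            # P(B): "offer" appears in any email

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(p_spam_given_word)  # ~0.18
```

So even a word that's strongly associated with spam only pushes the probability to 18% here, because spam itself is rare — the prior matters.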
We look at the probability of each class and the conditional probability of each x value given each class. But what makes the algorithm naive? It is naive because it assumes that the different variables are independent of each other. This is a significant assumption, and one that is invalid in many cases, such as predicting the weather. However, this assumption is key to making the calculations tractable. It is what makes it possible to combine many different features into the calculation, so you can ask "What is the probability of A given B, C, and D?" This paper by Scott D. Anderson shows the mathematics behind combining multiple features.

OK. Let's train the classifier using our data.

```python
gnb = GaussianNB()
gnb.fit(X_train, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, gnb.predict(X_test))
    )
)
```

    97.4% of the test set was correct.

Naive Bayes is a surprisingly capable classifier, as this result shows. I'll plot both the training and testing data. The testing data have been circled.

```python
# Plot decision regions
plot_decision_regions(
    X_array, y, clf=gnb, legend=2, X_highlight=X_test_array,
    colors='red,blue,green'
)
```

Some good things about Naive Bayes are that it

- Is extremely fast
- Generalizes well to multiple classes
- Works well for independent variables (see the warning below)
- Is good at natural language processing tasks like spam filtering

Some assumptions to be aware of are:

- Naive Bayes assumes the features are independent of each other. This is not valid in many cases. For example, the weather on one day is not independent of the weather on the previous day. The number of bathrooms in a house is not independent of the size of the house. Naive Bayes can still work surprisingly well when this assumption is invalid, but it's important to remember.
- All features are assumed to contribute equally. That means that extraneous data with little value could decrease your performance.
- The GaussianNB classifier assumes a Gaussian distribution.

### Logistic Regression

Logistic regression is a great machine learning model for classification problems. Like linear regression, it relies on finding proper weights for each input, but it then applies a nonlinear function to the result to turn it into a binary classifier. It's probably the most popular method for binary classification. The key to logistic regression is finding the proper weight for each input. That would take a lot of work but, fortunately, we have machine learning to do that part. To learn more about logistic regression, see this post on using logistic regression to classify DNA splice junctions. Despite its name, logistic regression only works for classification, not regression (unlike linear regression).

#### Standardization

For logistic regression, and many other machine learning algorithms, we need to standardize the data first. This allows us to compare variables of different magnitudes and keeps our gradient descent optimization well behaved. We do this by making the mean of each variable 0 and the standard deviation 1. We do have to be careful in how we apply this to our training and testing datasets, though. We can't let any information about our test set affect our inputs, and that includes finding the mean and standard deviation of our test set and scaling based on them. So, we have to find the parameters from our training set and use those to scale our test set. This means the mean of the test set won't be exactly zero, but it should be close enough. Scikit-learn has a StandardScaler that makes standardizing data easy.
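What the scaler does can be sketched by hand; a toy example (with made-up numbers) showing why only the training statistics are used:

```python
import numpy as np

train = np.array([1.0, 2.0, 3.0, 4.0])
test = np.array([2.0, 5.0])

# "Fit": compute the parameters on the training data only
mu, sigma = train.mean(), train.std()

# "Transform": apply the *training* parameters to both sets
train_std = (train - mu) / sigma
test_std = (test - mu) / sigma

print(abs(train_std.mean()) < 1e-12)  # True: training set is exactly centered
print(test_std.mean())                # nonzero: test mean is only close to 0
```

Scaling the test set with its own statistics would leak information from the test set into the pipeline, which is exactly what the text above warns against.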
```python
scale = StandardScaler()
scale.fit(X_train)

X_std = scale.transform(X)
X_train_std = scale.transform(X_train)
X_test_std = scale.transform(X_test)

lgr = LogisticRegression(solver="lbfgs", multi_class="auto")
lgr.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, lgr.predict(X_test_std))
    )
)
```

    94.7% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=lgr, X_highlight=X_test_std, colors='red,blue,green'
)
```

#### Tuning the Model

There are several hyperparameters in logistic regression that we can adjust to tune the model. One controls the norm used in the penalization. Another that helps with regularization is $C$, the inverse of the regularization parameter lambda:

$C = \frac{1}{\lambda}$

We can use a grid search to find the best hyperparameters and hopefully improve our model.

```python
C_grid = np.logspace(-3, 3, 10)
max_iter_grid = np.logspace(2, 3, 6)
hyperparameters = dict(C=C_grid, max_iter=max_iter_grid)

lgr_grid = GridSearchCV(lgr, hyperparameters, cv=3)

# Re-fit and test after optimizing
lgr_grid.fit(X_train_std, y_train)
print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, lgr_grid.predict(X_test_std))
    )
)
```

    94.7% of the test set was correct.

Let's see what the best hyperparameters were.

```python
print(lgr_grid.best_estimator_)
```

    LogisticRegression(C=0.46415888336127775, max_iter=100.0)

The value of $C$ has been changed from the default of 1 to 0.46415888336127775 (np.logspace is great, but you get values like this). max_iter has remained at 100.

```python
plot_decision_regions(
    X_std, y, clf=lgr_grid, X_highlight=X_test_std, colors='red,blue,green'
)
```

This both scored better and looks better. Logistic regression is great for binary classification problems and can be extended to multiclass classification using a one-vs.-rest approach. However, for multiclass classification, there's usually a better way: linear discriminant analysis.
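Before moving on, one more note on the $C$ hyperparameter used above: smaller $C$ means a stronger L2 penalty, which shrinks the learned weights. A toy sketch on synthetic data (not part of the iris analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, linearly separable-ish data for illustration only
rng = np.random.RandomState(0)
X_toy = rng.randn(200, 2)
y_toy = (X_toy[:, 0] + X_toy[:, 1] > 0).astype(int)

norms = {}
for C in (0.01, 100.0):
    clf = LogisticRegression(C=C).fit(X_toy, y_toy)
    norms[C] = np.linalg.norm(clf.coef_)

# Smaller C -> stronger regularization -> smaller coefficient norm
print(norms[0.01] < norms[100.0])  # True
```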
### Linear Discriminant Analysis

Linear discriminant analysis takes over where logistic regression left off. Logistic regression is best for two-class classification; LDA is better able to solve multi-class classification problems. Like Gaussian naive Bayes, it assumes the features are Gaussian and uses Bayes' Theorem to estimate the probability that new data belong to each class. It does this by finding the mean and variance of each feature and forming a probability of which class the input data best match. Based on these probabilities, it creates linear boundaries between the classes. LDA can also do binary classification and is often the better model when there are few examples.

```python
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, lda.predict(X_test))
    )
)
```

    94.7% of the test set was correct.

```python
plot_decision_regions(
    X_array, y, clf=lda, X_highlight=X_test_array, colors='red,blue,green'
)
```

A couple of things to note when using LDA:

- Outliers can have a big effect on the model, so it's good to remove them first
- LDA assumes that the covariance of each class is equal, which may not be true in real-world data.

QDA is very similar to LDA, except that it does not make an assumption about equal covariance. This page from the UC Business Analytics R Programming Guide provides a good background on LDA and QDA.

```python
qda = QuadraticDiscriminantAnalysis()
qda.fit(X_train, y_train)
metrics.accuracy_score(y_test, qda.predict(X_test))
```

    1.0

```python
plot_decision_regions(
    X_array, y, clf=qda, X_highlight=X_test_array, colors='red,blue,green'
)
```

It's interesting how well this did, yet also how it would classify a point in the bottom left-hand corner.

### Support Vector Machines

Support vector machines (SVMs) come at classification from a different angle than the models we've seen before. Instead of directly minimizing the classification errors, we try to maximize the margin between the different sets.
The goal of an SVM is to find the decision boundary that maximizes the distance between the closest samples (known as the support vectors) of one type on one side and those of another type on the other side of the boundary. In two dimensions that decision boundary is a line, but in more dimensions it becomes a hyperplane. Although it might sound simple, SVMs may be one of the most powerful shallow learning algorithms. They can provide really good performance with very minimal tweaking. As in logistic regression, we have a $C$ parameter to control the amount of regularization. Another important hyperparameter is $\gamma$, which determines how much weight to give to points farther from the decision boundary. The lower the value of $\gamma$, the more points are considered.

```python
svm = SVC(kernel='linear')
svm.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, svm.predict(X_test_std))
    )
)
```

    97.4% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=svm, X_highlight=X_test_std, colors='red,blue,green'
)
```

We can adjust the hyperparameters of the SVM to account for outliers (giving them either more or less weight), but in this case, it doesn't seem necessary. As you can see, one major limitation of the support vector machine we used above is that its decision boundaries are all linear. This is a major constraint (as it was with logistic regression and LDA), but there is a trick to get around it.

### Support Vector Machines: Kernel Trick

One of the best aspects of support vector machines is that they can be kernelized to solve nonlinear problems. We can create nonlinear combinations of the features and map them onto a higher-dimensional space where they are linearly separable.
$\phi(x_1,x_2) = (z_1, z_2, z_3) = (x_1, x_2, x_1^2 + x_2^2)$

```python
svm_kt = SVC(kernel='rbf')
svm_kt.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, svm_kt.predict(X_test_std))
    )
)
```

    97.4% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=svm_kt, X_highlight=X_test_std, colors='red,blue,green'
)
```

Some other things to keep in mind when using support vector machines:

- They work well with high-dimensional data
- If there's a clear decision boundary in the data, it's hard to beat SVMs
- SVMs don't scale as well to very large datasets as some other models do
- SVMs perform better when the data are standardized
- They can do regression as well, although this is less common. The regression class is sklearn.svm.SVR.

### k-Nearest Neighbors

k-nearest neighbors is another powerful classifier. The concept is simple: at a given point in the dataset, the model finds the $k$ nearest points and holds a majority-wins vote to determine the label at that point. The point is assigned whatever label the majority of its $k$ closest points have. It is a different style of classifier, known as a lazy learner. It is called lazy because it doesn't try to build a function to generalize the data. One good thing about lazy learners is that it is easy to add new data to the model. One downside is that they can have trouble scaling to very large datasets: because the model never learns to generalize the data, it has to remember every data point, which can take a lot of memory.

```python
knn = KNeighborsClassifier()
knn.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, knn.predict(X_test_std))
    )
)
```

    100.0% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=knn, X_highlight=X_test_std, colors='red,blue,green'
)
```

Some things to keep in mind about k-nearest neighbors:

- The curse of dimensionality greatly affects this algorithm.
As there are more dimensions, the data naturally become farther apart, so it requires a lot of data to use this model in high-dimensional space. One way to avoid this is to find the most important dimensions (or create new ones) and use only those.
- You can also use k-nearest neighbors for regression problems. Instead of voting on the label, you find the $k$ closest points and take the mean.

There is another algorithm similar to k-nearest neighbors, but it determines which training samples are most important and discards the others. This gives similar results but requires much less memory. This algorithm is known as learning vector quantization.

### Decision Trees

The idea of a decision tree is to make a series of decisions that leads to the categorization of the data. Out of all of the machine learning algorithms, a decision tree is probably the most similar to how people actually think, and it's therefore the easiest for people to understand and visualize. In this notebook, we've been able to visualize the different algorithms because we are only looking at two dimensions, but with a decision tree, we can observe every step of how it made its decision. Decision tree training is nondeterministic, so we have to set a random state to make our results repeatable.

```python
tre = DecisionTreeClassifier(random_state=0)
tre.fit(X_train, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, tre.predict(X_test))
    )
)
```

    94.7% of the test set was correct.

```python
plot_decision_regions(
    X_array, y, clf=tre, X_highlight=X_test_array, colors='red,blue,green'
)
```

Notice that overfitting has led to a couple of odd decisions. This can be fixed by creating a large number of decision trees in what is known as a random forest.

### Perceptron

We'll also try a perceptron. Perceptrons are significant because they are biologically inspired: they mirror the way a neuron works.
They are the progenitor of many of the other algorithms and sparked much of the interest in artificial neural networks. However, they are not actually very good classifiers and only work well in linearly separable cases. A single perceptron isn't used for modern classification tasks and is only included here because of its historical importance to the field. The perceptron model is closely related to linear and logistic regression in that it uses a sum of weighted inputs, but it relies on a step function as its activation.

```python
per = Perceptron(max_iter=5, tol=None)
per.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, per.predict(X_test_std))
    )
)
```

    76.3% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=per, X_highlight=X_test_std, colors='red,blue,green'
)
```

You can see that the model didn't fit the data very well. There are ways to correct for this, but there is no way for a single perceptron to handle data that are not linearly separable. To do that, we'll use a multi-layer perceptron, also known as a neural network.

### Neural Networks (Multi-layer Perceptron)

A multi-layer perceptron, or, as it is more commonly known, a neural network, is in some ways the king of these algorithms. It is capable of dealing with linear or nonlinear data, of low or high dimensionality. It has an insatiable appetite for data, which makes it very powerful when there is sufficient data to train on. But all this capability has a downside: it is the most computationally expensive to train. There are many libraries specially built for neural networks, such as TensorFlow, CNTK, and MXNet. Scikit-learn has a neural network, but it is not meant for large-scale production use. Due to the computational requirements, all the best neural network libraries can crunch numbers on a GPU or even a tensor processing unit (TPU).
The neural network in Scikit-learn cannot do that, but it will work for our purposes here.

```python
mlp = MLPClassifier(max_iter=500)
mlp.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, mlp.predict(X_test_std))
    )
)
```

    97.4% of the test set was correct.

```python
plot_decision_regions(
    X_std, y, clf=mlp, X_highlight=X_test_std, colors='red,blue,green'
)
```

```python
print(mlp.get_params)
```

    <bound method BaseEstimator.get_params of MLPClassifier(max_iter=500)>

#### Hyperparameter Tuning a Neural Network

This is another model that can be improved greatly by tuning.

```python
alpha_grid = np.logspace(-5, 3, 5)
learning_rate_init_grid = np.logspace(-6, -2, 5)
max_iter_grid = [200, 2000]
hyperparameters = dict(
    alpha=alpha_grid,
    learning_rate_init=learning_rate_init_grid,
    max_iter=max_iter_grid
)

mlp_grid = GridSearchCV(mlp, hyperparameters)
mlp_grid.fit(X_train_std, y_train)

print(
    "{:.1%} of the test set was correct.".format(
        metrics.accuracy_score(y_test, mlp_grid.predict(X_test_std))
    )
)
```

    97.4% of the test set was correct.

```python
print(mlp_grid.best_estimator_)
```

    MLPClassifier(alpha=10.0, learning_rate_init=0.01)

```python
plot_decision_regions(
    X_std, y, clf=mlp_grid, X_highlight=X_test_std, colors='red,blue,green'
)
```

Here's a case where we went through a big grid search and used a very complex model, but the result doesn't look that different from a support vector machine. There's a good lesson in that, at least with regard to simple datasets.

## Comparing Algorithms

Which algorithm is best? We'll test them all. The perceptron model will be much worse than the others and would distort a graph of the results, so I won't include it.
```python
models = []
models.append(('Naive Bayes', GaussianNB()))
models.append(('Logistic Regression', LogisticRegression(C=46.41588833612773)))
models.append(('K-Nearest Neighbors', KNeighborsClassifier()))
models.append(('SVM', SVC(kernel='rbf')))
models.append(('Linear Discriminant Analysis', LinearDiscriminantAnalysis()))
models.append(('Decision Tree', DecisionTreeClassifier()))
# models.append(('Perceptron', Perceptron(max_iter=5)))
models.append(('Neural Network', mlp_grid.best_estimator_))
```

To get a fair estimate of which model is best, we'll break the data into a number of testing and training sets and see which one works best overall.

```python
# Create empty lists to store results
names = []
results = []

# Iterate through the models
iterations = 15
for name, model in models:
    kfold = model_selection.KFold(
        n_splits=iterations, shuffle=True, random_state=0
    )
    # Use the standardized data for all models
    cross_val = model_selection.cross_val_score(model, X_std, y, cv=kfold)
    names.append([name] * iterations)
    results.append(cross_val)

# Flatten the lists
flat_names = [item for sublist in names for item in sublist]
flat_results = [item for sublist in results for item in sublist]

# Build a pandas dataframe
df = pd.DataFrame({'Model': flat_names, 'Accuracy': flat_results})

fig, ax = plt.subplots()
fig.set_size_inches(18, 8)
plt.xticks(rotation=45)
sns.violinplot(x='Model', y='Accuracy', data=df, cut=0)
ax = sns.stripplot(
    x='Model', y='Accuracy', data=df, color="black", jitter=0.1, size=5
)
```
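Alongside the violin plot, a compact numeric summary can help rank the models. A minimal sketch of the groupby pattern (with hypothetical accuracy numbers, not the actual cross-validation results):

```python
import pandas as pd

# Hypothetical results in the same Model/Accuracy shape as the dataframe above
df_demo = pd.DataFrame({
    'Model': ['SVM', 'SVM', 'K-Nearest Neighbors', 'K-Nearest Neighbors'],
    'Accuracy': [0.90, 1.00, 0.80, 1.00],
})

# Mean and standard deviation per model, best first
summary = (df_demo.groupby('Model')['Accuracy']
           .agg(['mean', 'std'])
           .sort_values('mean', ascending=False))
print(summary)
```

Applied to the real `df` built above, the same two lines give a ranked table to read next to the plot.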
{}
arXiv:1705.06726 [hep-ph]

New constraints on light vectors coupled to anomalous currents

Published 2017-05-18 (Version 1)

We derive new constraints on light vectors coupled to Standard Model (SM) fermions when the corresponding SM current is broken by the chiral anomaly. Cancellation of the anomaly by heavy fermions results, in the low-energy theory, in Wess-Zumino-type interactions between the new vector and the SM gauge bosons. These interactions are determined by the requirement that the heavy sector preserves the SM gauge groups, and lead to (energy / vector mass)^2 enhanced rates for processes involving the longitudinal mode of the new vector. Taking the example of a vector coupled to baryon number, Z decays and flavour-changing neutral current meson decays via the new vector can occur with (weak scale / vector mass)^2 enhanced rates. These processes place significantly stronger coupling bounds than others considered in the literature, over a wide range of vector masses.
{}
# An inverse problem: Number fields attached to elliptic curves over Q

If I understand FC's remark under the post "Very strong multiplicity one for Hecke eigenforms," in the course of Faltings's proof of the Tate conjecture, Faltings proves the following statement: let E/Q and F/Q be elliptic curves and write Q(E[p]) (resp. Q(F[p])) for the number field obtained by adjoining the x and y coordinates of the p-torsion of E to Q. Then if Q(E[p]) = Q(F[p]) for infinitely many primes p, E/Q and F/Q are isogenous.

Learning of this result prompted me to wonder: suppose P is a finite set of primes. Do there exist E/Q and F/Q such that Q(E[p]) = Q(F[p]) for each p in P, with E/Q and F/Q not isogenous? If not in general, what is known about the particular P for which the above question does (or doesn't) have an affirmative answer?

- Dear FC, I think that you may misinterpret where my question was coming from. I don't know arithmetic algebraic geometry and don't suppose that learning and receiving an answer to my question (or many such questions) serves as a substitute for learning arithmetic algebraic geometry. So why did I ask the question that I did (and why do I ask many questions of a similar flavor)? Because my learning style is such that I find it very difficult to follow an axiomatic presentation of material, whereas concrete and surprising details stick in my mind. – Jonah Sinick Oct 31 '09 at 18:54

As such, if I am going to motivate myself to eventually learn arithmetic geometry, it won't be by reading through hundreds of pages of Bourbaki-esque exposition from start to finish but by learning a variety of facts that capture my interest at my current level of understanding, then by learning the things that I need to understand them, then the things that I need to understand those things, etc. That is, I have a strong preference for starting at the level of surprising phenomena and working backward toward what I need to learn in order to understand them.
– Jonah Sinick Oct 31 '09 at 18:59

The fact that you mentioned falling out of Faltings's work was quite interesting to me (thanks for mentioning it) and I immediately wanted to place it into context, which is what prompted me to ask this question. Now, your doubt as to whether I had thought very much about my initial question is well founded - e.g., thinking about the case where E has a rational p-torsion point would be within my power and I didn't take the time to do so. But doing so would take some effort for me and would be effortless for some others (some of whom would appreciate the chance to articulate their understanding). – Jonah Sinick Oct 31 '09 at 19:06

If you personally do not wish to indulge me, I wouldn't fault you - I know that you have a lot on your plate. But I think that for a certain kind of person such as myself, asking and answering questions of the type that mine falls into is a productive activity. – Jonah Sinick Oct 31 '09 at 19:17

I think this is an appropriate question and disagree with FC. – David Zureick-Brown Oct 31 '09 at 19:25

@JSE You mean $\mathbb{Q}(E[p])$ isomorphic to $\mathbb{Q}(F[p])$ implies $E$ and $F$ are isogenous, right? Do you have a good reference for this conjecture? – Bobby Grizzard Apr 30 '15 at 15:18
{}
# Changeset 6d43cc57 Ignore: Timestamp: Jun 22, 2018, 1:51:56 PM (3 years ago) Branches: aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, with_gc Children: 3d56d15b Parents: 999c700 Message: more updates Location: doc/papers/concurrency Files: 4 edited Unmodified Added Removed • ## doc/papers/concurrency/Makefile r999c700 FIGURES = ${addsuffix .tex, \ monitor \ ext_monitor \ int_monitor \ dependency \ PICTURES =${addsuffix .pstex, \ monitor \ ext_monitor \ system \ monitor_structs \ dvips ${Build}/$< -o $@${BASE}.dvi : Makefile ${Build}${BASE}.out.ps WileyNJD-AMA.bst ${GRAPHS}${PROGRAMS} ${PICTURES}${FIGURES} ${SOURCES} \ annex/local.bib ../../bibliography/pl.bib${BASE}.dvi : Makefile ${BASE}.out.ps WileyNJD-AMA.bst${GRAPHS} ${PROGRAMS}${PICTURES} ${FIGURES}${SOURCES} \ annex/local.bib ../../bibliography/pl.bib | ${Build} # Must have *.aux file containing citations for bibtex if [ ! -r${basename $@}.aux ] ; then${LaTeX} ${basename$@}.tex ; fi ${BibTeX}${Build}/${basename$@} -${BibTeX}${Build}/${basename$@} # Some citations reference others so run again to resolve these citations ${LaTeX}${basename $@}.tex${BibTeX} ${Build}/${basename $@} -${BibTeX} ${Build}/${basename $@} # Run again to finish citations${LaTeX} ${basename$@}.tex ## Define the default recipes. ${Build}:${Build} : mkdir -p ${Build}${BASE}.out.ps: ${Build}${BASE}.out.ps : | ${Build} ln -fs${Build}/Paper.out.ps . WileyNJD-AMA.bst: WileyNJD-AMA.bst : ln -fs ../AMA/AMA-stix/ama/WileyNJD-AMA.bst . 
%.tex : %.fig ${Build} %.tex : %.fig |${Build} fig2dev -L eepic $< >${Build}/$@ %.ps : %.fig${Build} %.ps : %.fig | ${Build} fig2dev -L ps$< > ${Build}/$@ %.pstex : %.fig ${Build} %.pstex : %.fig |${Build} fig2dev -L pstex $< >${Build}/$@ fig2dev -L pstex_t -p${Build}/$@$< > ${Build}/$@_t • ## doc/papers/concurrency/Paper.tex r999c700 External scheduling allows users to wait for events from other threads without concern of unrelated events occurring. The mechnaism can be done in terms of control flow, \eg Ada @accept@ or \uC @_Accept@, or in terms of data, \eg Go channels. Of course, both of these paradigms have their own strengths and weaknesses, but for this project, control-flow semantics was chosen to stay consistent with the rest of the languages semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multiple-monitor routines. The previous example shows a simple use @_Accept@ versus @wait@/@signal@ and its advantages. Note that while other languages often use @accept@/@select@ as the core external scheduling keyword, \CFA uses @waitfor@ to prevent name collisions with existing socket \textbf{api}s. While both mechanisms have strengths and weaknesses, this project uses a control-flow mechanism to stay consistent with other language semantics. Two challenges specific to \CFA for external scheduling are loose object-definitions (see Section~\ref{s:LooseObjectDefinitions}) and multiple-monitor routines (see Section~\ref{s:Multi-MonitorScheduling}). For internal scheduling, non-blocking signalling (as in the producer/consumer example) is used when the signaller is providing the cooperation for a waiting thread; \end{cfa} must have acquired monitor locks that are greater than or equal to the number of locks for the waiting thread signalled from the condition queue. 
{\color{red}In general, the signaller does not know the order of waiting threads, so in general, it must acquire the maximum number of mutex locks for the worst-case waiting thread.} Similarly, for @waitfor( rtn )@, the default semantics is to atomically block the acceptor and release all acquired mutex types in the parameter list, \ie @waitfor( rtn, m1, m2 )@. The extra challenge is that this dependency graph is effectively post-mortem, but the runtime system needs to be able to build and solve these graphs as the dependencies unfold. Resolving dependency graphs being a complex and expensive endeavour, this solution is not the preferred one. \subsubsection{Partial Signalling} \label{partial-sig} \end{comment} \subsection{Loose Object Definitions} In \uC, a monitor class declaration includes an exhaustive list of monitor operations. Since \CFA is not object oriented, monitors become both more difficult to implement and less clear for a user: \begin{cfa} monitor A {}; void f(A & mutex a); void g(A & mutex a) { waitfor(f); // Obvious which f() to wait for } void f(A & mutex a, int); // New different F added in scope void h(A & mutex a) { waitfor(f); // Less obvious which f() to wait for } \end{cfa} Furthermore, external scheduling is an example where implementation constraints become visible from the interface. Here is the cfa-code for the entering phase of a monitor: \begin{center} \begin{tabular}{l} \begin{cfa} if monitor is free enter elif already own the monitor continue elif monitor accepts me enter else block \end{cfa} \end{tabular} \end{center} \label{s:LooseObjectDefinitions} In an object-oriented programming-language, a class includes an exhaustive list of operations. However, new members can be added via static inheritance or dynamic members, \eg JavaScript~\cite{JavaScript}. Similarly, monitor routines can be added at any time in \CFA, making it less clear for programmers and more difficult to implement.
\begin{cfa} monitor M {}; void f( M & mutex m ); void g( M & mutex m ) { waitfor( f ); }       $\C{// clear which f}$ void f( M & mutex m, int );                           $\C{// different f}$ void h( M & mutex m ) { waitfor( f ); }       $\C{// unclear which f}$ \end{cfa} Hence, the cfa-code for the entering a monitor looks like: \begin{cfa} if ( $\textrm{\textit{monitor is free}}$ ) $\LstCommentStyle{// \color{red}enter}$ else if ( $\textrm{\textit{already own monitor}}$ ) $\LstCommentStyle{// \color{red}continue}$ else if ( $\textrm{\textit{monitor accepts me}}$ ) $\LstCommentStyle{// \color{red}enter}$ else $\LstCommentStyle{// \color{red}block}$ \end{cfa} For the first two conditions, it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for @monitor accepts me@ is much harder to implement depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in Figure~\ref{fig:ClassicalMonitor}. However, a fast check for \emph{monitor accepts me} is much harder to implement depending on the constraints put on the monitors. Figure~\ref{fig:ClassicalMonitor} shows monitors are often expressed as an entry (calling) queue, some acceptor queues, and an urgent stack/queue. 
\begin{figure}
\centering
\subfloat[Classical monitor] {
\label{fig:ClassicalMonitor}
{\resizebox{0.45\textwidth}{!}{\input{monitor.pstex_t}}}
}% subfloat
\quad
\subfloat[Bulk acquire monitor] {
\label{fig:BulkMonitor}
{\resizebox{0.45\textwidth}{!}{\input{ext_monitor.pstex_t}}}
}% subfloat
\caption{Monitor Implementation}
\label{f:MonitorImplementation}
\end{figure}

For a fixed (small) number of mutex routines (\eg 128), the accept check reduces to a bitmask of allowed callers, which can be checked with a single instruction.
This approach requires a unique dense ordering of routines with a small upper-bound and the ordering must be consistent across translation units.
For object-oriented languages these constraints are common, but \CFA mutex routines can be added in any scope and are only visible in certain translation units, precluding a program-wide dense-ordering among mutex routines.
Figure~\ref{fig:BulkMonitor} shows the \CFA monitor implementation.
The mutex routine called is associated with each thread on the entry queue, while a list of acceptable routines is kept separately.
The accepted list is a variable-sized array of accepted routine pointers, so the single instruction bitmask comparison is replaced by dereferencing a pointer followed by a linear search.
\begin{comment}
\begin{figure}
\begin{cfa}[caption={Example of nested external scheduling},label={f:nest-ext}]
In the end, the most flexible approach has been chosen since it allows users to write programs that would otherwise be hard to write.
This decision is based on the assumption that writing fast but inflexible locks is closer to a solved problem than writing locks that are as flexible as external scheduling in \CFA.
% ======================================================================
% ======================================================================
\end{comment}
\subsection{Multi-Monitor Scheduling}
% ======================================================================
% ======================================================================
\label{s:Multi-MonitorScheduling}
External scheduling, like internal scheduling, becomes significantly more complex when introducing multi-monitor syntax.
Even in the simplest possible case, new semantics needs to be established:
\begin{cfa}
monitor M {};
void f( M & mutex m1 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f );                                                   $\C{// pass m1 or m2 to f?}$
}
\end{cfa}
The solution is for the programmer to disambiguate:
\begin{cfa}
	waitfor( f, m2 );                                               $\C{// wait for call to f with argument m2}$
\end{cfa}
Routine @g@ has acquired both locks, so when routine @f@ is called, the lock for monitor @m2@ is passed from @g@ to @f@ (while @g@ still holds lock @m1@).
This behaviour can be extended to the multi-monitor @waitfor@ statement.
\begin{cfa}
monitor M {};
void f( M & mutex m1, M & mutex m2 );
void g( M & mutex m1, M & mutex m2 ) {
	waitfor( f, m1, m2 );                                   $\C{// wait for call to f with arguments m1 and m2}$
}
\end{cfa}
Again, the set of monitors passed to the @waitfor@ statement must be entirely contained in the set of monitors already acquired by the accepting routine.
An important behaviour to note is when a set of monitors matches only partially:
\begin{cfa}
mutex struct A {};
mutex struct B {};
void g( A & mutex m1, B & mutex m2 ) {
	waitfor( f, m1, m2 );
}
A a1, a2;
B b;
void foo() {
	g( a1, b ); // block on accept
}
void bar() {
	f( a2, b ); // fulfill cooperation
}
\end{cfa}
It is also important to note that in the case of external scheduling the order of parameters is irrelevant; @waitfor(f, m1, m2)@ and @waitfor(f, m2, m1)@ are indistinguishable waiting conditions.
% ======================================================================
% ======================================================================
\subsection{\protect\lstinline|waitfor| Semantics}
% ======================================================================
% ======================================================================
Syntactically, the @waitfor@ statement takes a routine identifier and a set of monitors.
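The containment rule can be expressed directly as a set check. The following Python sketch is illustrative only (the @check_waitfor@ helper is invented here, not part of the \CFA runtime); it also shows why parameter order is irrelevant: the waiting condition is a set of monitors, not a sequence.

```python
# Sketch: a waitfor's monitor set must be a subset of the monitors the
# accepting routine already holds; the waiting condition itself is
# order-insensitive. (Illustrative only.)
def check_waitfor(acquired, waited):
    if not set(waited) <= set(acquired):
        raise RuntimeError("waitfor on monitor(s) not acquired")
    return frozenset(waited)          # order does not matter

held = ["m1", "m2"]
assert check_waitfor(held, ["m1", "m2"]) == check_waitfor(held, ["m2", "m1"])

try:
    check_waitfor(["m1"], ["m1", "m2"])   # m2 not acquired
except RuntimeError as err:
    print(err)                            # waitfor on monitor(s) not acquired
```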
\end{figure}
% ======================================================================
% ======================================================================
\subsection{Waiting For The Destructor}
% ======================================================================
% ======================================================================
An interesting use for the @waitfor@ statement is destructor semantics.
Indeed, the @waitfor@ statement can accept any @mutex@ routine, which includes the destructor (see section \ref{data}).
% ######     #    ######     #    #       #       ####### #       ###  #####  #     #
% #     #   # #   #     #   # #   #       #       #       #        #  #     # ##   ##
% #     #  #   #  #     #  #   #  #       #       #       #        #  #       # # # #
% ######  #     # ######  #     # #       #       #####   #        #   #####  #  #  #
% #       ####### #   #   ####### #       #       #       #        #        # #     #
% #       #     # #    #  #     # #       #       #       #        #  #     # #     #
% #       #     # #     # #     # ####### ####### ####### ####### ###  #####  #     #
\section{Parallelism}
Historically, computer performance was about processor speeds and instruction counts.
However, with heat dissipation being a direct consequence of speed increases, parallelism has become the new source for increased performance~\cite{Sutter05, Sutter05b}.
While there are many variations of the presented paradigms, most of these variations do not actually change the guarantees or the semantics; they simply move costs in order to achieve better performance for certain workloads.
\section{Paradigms}
\subsection{User-Level Threads}
A direct improvement on the \textbf{kthread} approach is to use \textbf{uthread}.
These threads offer most of the same features that the operating system already provides but can be used on a much larger scale.
Examples of languages that support \textbf{uthread} are Erlang~\cite{Erlang} and \uC~\cite{uC++book}.
\subsection{Fibers: User-Level Threads Without Preemption}
\label{fibers}
A popular variant of \textbf{uthread} is what is often referred to as \textbf{fiber}.
However, \textbf{fiber} does not present meaningful semantic differences with \textbf{uthread}.
An example of a language that uses fibers is Go~\cite{Go}.
\subsection{Jobs and Thread Pools}
An approach on the opposite end of the spectrum is to base parallelism on \textbf{pool}.
Indeed, \textbf{pool} offers limited flexibility but with the benefit of a simpler user interface.
The gold standard of this implementation is Intel's TBB library~\cite{TBB}.
\subsection{Paradigm Performance}
While the choice between the three paradigms listed above may have significant performance implications, it is difficult to pin down the performance implications of choosing a model at the language level.
Indeed, in many situations one of these paradigms may show better performance, but it all strongly depends on the workload.
Finally, if the units of uninterrupted work are large enough, the paradigm choice is largely amortized by the actual work done.
\section{The \protect\CFA\ Kernel : Processors, Clusters and Threads}\label{kernel}
A \textbf{cfacluster} is a group of \textbf{kthread} executed in isolation.
\textbf{uthread} are scheduled on the \textbf{kthread} of a given \textbf{cfacluster}, allowing organization between \textbf{uthread} and \textbf{kthread}.
It is important that \textbf{kthread} belonging to the same \textbf{cfacluster} have homogeneous settings, otherwise migrating a \textbf{uthread} from one \textbf{kthread} to another can cause issues.
Currently \CFA only supports one \textbf{cfacluster}, the initial one.
\subsection{Future Work: Machine Setup}\label{machine}
While this was not done in the context of this paper, another important aspect of clusters is affinity.
While many common desktop and laptop PCs have homogeneous CPUs, other devices often have more heterogeneous setups.
OS support for CPU affinity is now common~\cite{affinityLinux, affinityWindows, affinityFreebsd, affinityNetbsd, affinityMacosx}, which means it is both possible and desirable for \CFA to offer an abstraction mechanism for portable CPU affinity.
\subsection{Paradigms}\label{cfaparadigms}
Given these building blocks, it is possible to reproduce all three of the popular paradigms.
Indeed, \textbf{uthread} is the default paradigm in \CFA.
\section{Behind the Scenes}
There are several challenges specific to \CFA when implementing concurrency.
These challenges are a direct result of bulk acquire and loose object definitions.
Note that since the major contributions of this paper are extending monitor semantics to bulk acquire and loose object definitions, any challenges that do not result from these characteristics of \CFA are considered solved problems and therefore not discussed.
% ======================================================================
% ======================================================================
\section{Mutex Routines}
% ======================================================================
% ======================================================================
The first step towards the monitor implementation is simple @mutex@ routines.
\end{figure}
\subsection{Details: Interaction with polymorphism}
Depending on the choice of semantics for when monitor locks are acquired, interaction between monitors and \CFA's concept of polymorphism can be more complex to support.
However, it is shown that entry-point locking solves most of the issues.
Furthermore, entry-point locking requires less code generation, since any useful routine is called multiple times but there is only one entry point for many call sites.
% ======================================================================
% ======================================================================
\section{Threading}
\label{impl:thread}
% ======================================================================
% ======================================================================
Figure \ref{fig:system1} shows a high-level picture of the \CFA runtime system with regard to concurrency.
\end{figure}
\subsection{Processors}
Parallelism in \CFA is built around using processors to specify how much parallelism is desired.
\CFA processors are object wrappers around kernel threads, specifically @pthread@s in the current implementation of \CFA.
Indeed, any parallelism must go through operating-system libraries.
Processors internally use coroutines to take advantage of the existing context-switching semantics.
\subsection{Stack Management}
One of the challenges of this system is to reduce the footprint as much as possible.
Specifically, all @pthread@s created also have a stack created with them, which should be used as much as possible.
In order to respect C user expectations, the stack of the initial kernel thread, the program's main stack, which can grow very large, is used by the main user thread rather than by the main processor.
\subsection{Context Switching}
As mentioned in section \ref{coroutine}, coroutines are a stepping stone for implementing threading, because they share the same mechanism for context-switching between different stacks.
To improve performance and simplicity, context-switching is implemented using the following assumption: all context-switches happen inside a specific routine call.
This option is not currently present in \CFA, but the changes required to add it are strictly additive.
\subsection{Preemption}
\label{preemption}
Finally, an important aspect for any complete threading system is preemption.
As mentioned in section \ref{basics}, preemption introduces an extra degree of uncertainty, which enables users to have multiple threads interleave transparently, rather than having to cooperate among threads for proper scheduling and CPU distribution.
Indeed, @sigwait@ can differentiate signals sent from @pthread_sigqueue@ from signals sent from alarms or the kernel.
\subsection{Scheduler}
Finally, an aspect that has not yet been mentioned is the scheduling algorithm.
Further discussion on scheduling is present in section \ref{futur:sched}.
% ======================================================================
% ======================================================================
\section{Internal Scheduling}
\label{impl:intsched}
% ======================================================================
% ======================================================================
The following figure is the traditional illustration of a monitor (repeated from page~\pageref{fig:ClassicalMonitor} for convenience):
\begin{figure}
\begin{center}
{\resizebox{0.4\textwidth}{!}{\input{monitor.pstex_t}}}
\end{center}
\caption{Traditional illustration of a monitor}
\end{figure}
% Changeset diffs of the xfig sources doc/papers/concurrency/figures/ext_monitor.fig and doc/papers/concurrency/figures/monitor.fig (r999c700) omitted: they contain only figure coordinate data.
# Thread: Finding Equation of a Line Perpendicular to...

1. ## Finding Equation of a Line Perpendicular to...

I'm having trouble remembering how to solve this problem. I know it has something to do with point-slope form of an equation (y - y1) = m(x - x1). Here's how it reads: A line has the equation 7x + 3y = 12. Find the equation of a line perpendicular to this one, passing through the point (-7, 0). Help?

2. Originally Posted by rachelstargirlrox
I'm having trouble remembering how to solve this problem. I know it has something to do with point-slope form of an equation (y - y1) = m(x - x1). Here's how it reads: A line has the equation 7x + 3y = 12. Find the equation of a line perpendicular to this one, passing through the point (-7, 0). Help?
see posts 2 and 3 here

3. Ahh, thank you!!

4. Originally Posted by rachelstargirlrox
I'm having trouble remembering how to solve this problem. I know it has something to do with point-slope form of an equation (y - y1) = m(x - x1). Here's how it reads: A line has the equation 7x + 3y = 12. Find the equation of a line perpendicular to this one, passing through the point (-7, 0). Help?
Ok, so first things first. You have $(y - y_1) = m(x - x_1)$ and you have the point it passes through, which is (-7, 0). If you substitute so far, you get: $(y-0) = m(x- (- 7))\quad\rightarrow\quad y=m(x+7)$ but you need m. So what is the slope of a line that is perpendicular to another line? It is the negative reciprocal of the other line's slope! So first we have to find the slope of the line we want to be perpendicular to. You should be able to do this yourself, coming up with the answer of $\frac{-7}{3}$ Now the negative reciprocal of $\frac{-7}{3}$ is $\frac{3}{7}$ So now substitute that into your point-slope equation to get: $y=\frac{3}{7}(x+7)\quad\rightarrow\quad \boxed{y=\frac{3}{7}x+3}$
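For anyone who wants to double-check the algebra in that last post, here is a quick verification with exact rational arithmetic (Python; purely a check, not part of the original thread):

```python
from fractions import Fraction

# Given line: 7x + 3y = 12  ->  y = -(7/3)x + 4
m_given = Fraction(-7, 3)

# Perpendicular slope is the negative reciprocal.
m_perp = -1 / m_given                 # 3/7
assert m_given * m_perp == -1         # perpendicularity condition

# Point-slope through (-7, 0): y - 0 = (3/7)(x + 7)  ->  y = (3/7)x + 3
x0, y0 = -7, 0
intercept = y0 - m_perp * x0
print(m_perp, intercept)              # 3/7 3
```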
# Ring detail

## Name: Šter's clean ring with non-clean corner rings

Description: Construct $R_{70}$ as Šter describes using a field $F$ of characteristic $2$. Let $T$ be the subrng of $\omega\times\omega$ matrices over $R_{70}$ which have only finitely many nonzero entries. The ring $R=T+F\subseteq M_\omega(R_{70})$ is Šter's ring.

Keywords: infinite matrix ring, subring

Reference(s):
- J. Šter. Corner rings of a clean ring need not be clean. (2012) @ Example 3.4
09-48 Jean-Marie Barbaroux, Thomas Chen, Semjon Vugalter, Vitali Vougalter
Quantitative estimates on the Hydrogen ground state energy in non-relativistic QED (495K, pdf, 47 pages) Mar 12, 09

Abstract. We determine the exact expression for the hydrogen ground state energy in the Pauli-Fierz model up to the order $O(\alpha^5\log\alpha^{-1})$, where $\alpha$ denotes the fine-structure constant, and prove rigorous bounds on the remainder term of the order $o(\alpha^5\log\alpha^{-1})$. As a consequence, we prove that the ground state energy is not a real analytic function of $\alpha$, and verify the existence of logarithmic corrections to the expansion of the ground state energy in powers of $\alpha$, as conjectured in the recent literature.

Files: 09-48.src( 09-48.keywords , bcvv2.pdf.mm )
# Study Questions (Ch 12)

## 1. Study Questions #1. Ch 12.

In a free market, which factors apply to long run exchange rates? Check all that apply.

Points: 1 / 1

Explanation: Long run exchange rates are best explained by factors including real income differentials, inflation rate differentials, productivity changes, and trade barriers. In the short run, exchange rates respond to real interest rate differentials, news about market fundamentals, and speculative opinion about future exchange rates. See section: “Determining Short Run Exchange Rates: The Asset Market Approach.”

## 2. Study Questions #2. Ch 12.

Evaluate the following statement explaining why international investors are concerned about the real interest rate as opposed to the nominal rate. True or False: International investors are especially concerned about the real interest rate because unlike the nominal interest rate, the real interest rate equals the nominal interest rate minus the inflation rate.

Points: 1 / 1

Explanation: The nominal interest rate refers to the interest rate, unadjusted for inflation, while the real interest rate equals the nominal interest rate minus the inflation rate. An increase in the nominal interest rate would not necessarily cause a change in the exchange rate if the inflation rate were to increase by the same proportion, leaving the real interest rate unchanged. In this case, any increase in the return on a domestic asset would be offset by higher inflation in the domestic economy, leading investors to believe that the domestic currency will depreciate in the future as domestic goods become more expensive relative to foreign goods. Thus, only changes in real interest rates (that is, the nominal interest rate changing by a different proportion than the inflation rate) are predicted to affect the exchange rate. See section: “Determining Short Run Exchange Rates: The Asset Market Approach.”

## 3. Study Questions #3. Ch 12.
Which of the following describe limitations of the purchasing-power-parity theory? Check all that apply.

Points: 0.75 / 1

Explanation: The simplest concept of purchasing power parity is the law of one price. It asserts that identical goods should be sold everywhere at the same price when converted to a common currency, assuming that it is costless to ship the good between nations, there are no barriers to trade, and markets are competitive. It rests on the assumption that sellers will seek out the highest possible prices and buyers the lowest ones. The purchasing-power-parity theory predicts that a country’s currency will depreciate by an amount equal to the excess of domestic inflation over foreign inflation. The theory also predicts that a country’s exchange rate will appreciate by an amount equal to the excess of foreign inflation over domestic inflation. The theory does not consider the impact of international capital movements, and it suffers from the choice of an appropriate price index used in price calculations. The law of one price holds reasonably well for globally tradable commodities such as oil, metals, chemicals, and some agricultural crops. The law does not appear to apply well to nontradable goods and services such as cab rides, housing, and personal services like haircuts. See section: “Inflation Rates, Purchasing Power Parity, and Long Run Exchange Rates.”

## 4. Study Questions #4. Ch 12.

If a currency becomes overvalued in the foreign exchange market, what will be the likely impact on the home country’s trade balance?

Points: 1 / 1

Explanation: An overvalued currency tends to lead to a balance-of-payments deficit for the home country, while an undervalued currency leads to a balance-of-payments surplus. For example, a currency that is undervalued according to purchasing power parity is one in which domestic prices are lower than foreign prices, when expressed in a common currency.
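The purchasing-power-parity prediction mentioned above (a currency depreciates by roughly the excess of domestic over foreign inflation) can be worked through numerically. The numbers below are invented for illustration and do not come from the text:

```python
# Relative purchasing-power-parity sketch (illustrative numbers only).
home_inflation = 0.06      # 6% domestic inflation
foreign_inflation = 0.02   # 2% foreign inflation

# Predicted home-currency depreciation ~ inflation differential.
predicted_depreciation = home_inflation - foreign_inflation
print(f"{predicted_depreciation:.0%}")   # 4%

# Equivalently, the exchange rate (home units per foreign unit) rises
# to keep relative purchasing power constant.
e0 = 1.50
e1 = e0 * (1 + home_inflation) / (1 + foreign_inflation)
print(round(e1, 4))                      # 1.5588
```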
You would expect the domestic country to export more and import less, leading to a balance-of-payments surplus. See section: “Inflation Rates, Purchasing Power Parity, and Long Run Exchange Rates.”

## 5. Study Questions #5. Ch 12.

Identify the factors that account for changes in a currency’s value over the long run. Check all that apply.

Points: 1 / 1

Explanation: In the long run, four key factors account for changes in exchange rates: (1) relative productivity levels, (2) relative price levels, (3) preferences for domestic goods or foreign goods, and (4) barriers to trade. See section: “Determining Long Run Exchange Rates.”

## 6. Study Questions #6. Ch 12.

What factors underlie changes in a currency’s value in the short run? Check all that apply.

Points: 1 / 1

Explanation: Over short periods of time, decisions to hold domestic or foreign financial assets play a much greater role in exchange rate determination than the demand for imports and exports does. According to the asset market approach to exchange rate determination, investors consider two key factors when deciding between domestic and foreign investments: (1) relative interest rates and (2) expected changes in exchange rates. Changes in these factors, in turn, account for fluctuations in exchange rates that we observe in the short run. See section: “Determining Short Run Exchange Rates: The Asset Market Approach.”

## 7. Study Questions #7. Ch 12.

Use the dropdown menus in the following table to indicate how each factor listed affects the dollar’s exchange rate under a system of market-determined exchange rates.

| Factor | Appreciates or Depreciates |
| --- | --- |
| An increase in U.S. money demand | Appreciates |
| Rising productivity in the United States relative to other countries | Appreciates |
| Rising real interest rates overseas, relative to U.S. rates | Depreciates |
| An increase in U.S. money growth | Depreciates |

Points: 1 / 1

Explanation: An increase in the demand for money will cause U.S. interest rates to rise. All else equal, this will make U.S. assets more attractive, increasing the demand for dollars and causing the dollar to appreciate. Productivity growth measures the increase in a country’s output for a given level of input. If the United States becomes more productive than other countries, it can produce goods more cheaply than its foreign competitors can. U.S. exports tend to increase and imports tend to decrease. As U.S. goods become relatively less expensive, foreigners demand more U.S. goods, which results in an increase in the supply of foreign currency. At the same time, U.S. consumers desire fewer foreign goods that have become relatively more expensive, causing the demand for foreign currency to decrease. Therefore, the dollar appreciates relative to the foreign currency. You can expect to see appreciating currencies in countries whose real interest rates are higher because these countries will attract investment funds from all over the world. Countries that experience relatively low real interest rates tend to find their currencies depreciating. Therefore, if real interest rates overseas are higher than those in the United States, the U.S. dollar depreciates. If money growth in the United States increases relative to the rest of the world, it means that inflation is much higher in the United States, and its currency loses purchasing power. You would expect that currency to depreciate to restore parity with prices of goods abroad (the depreciation would make imported goods more expensive to domestic consumers while making domestic exports less expensive to foreigners). See section: “Expected Change in the Exchange Rate.”

## 8. Study Questions #8. Ch 12.

Evaluate the following statement.
True or False: Exchange rate overshooting occurs because exchange rates tend to be more flexible than other prices; exchange rates often fluctuate more in the short run than in the long run so as to compensate for other prices that are slower to adjust to their long run equilibrium levels.

Explanation: An exchange rate is said to overshoot when its short run response (either depreciation or appreciation) to a change in market fundamentals is greater than its long run response. Exchange rate overshooting occurs because exchange rates tend to be more flexible than other prices; exchange rates often depreciate/appreciate more in the short run than in the long run so as to compensate for other prices that are slower to adjust to their long run equilibrium levels. See section: “Exchange Rate Overshooting.”

## 9. Study Questions #9. Ch 12. Evaluate the following statement.

True or False: Most currency forecasters use a combination of the purchasing-power-parity analysis, the Big Mac Index, and judgmental analysis.

Explanation: Most forecasters tend to use a combination of fundamental, technical, and judgmental analysis, with the emphasis on each shifting as conditions change. They form a general view about whether a particular currency is over- or undervalued in a longer-term sense. Within that framework, they assess all current economic forecasts, news events, political developments, statistical releases, rumors, and changes in sentiment, while also carefully studying the charts and technical analysis. See section: “Forecasting Foreign Exchange Rates.”

## 10. Study Questions #10. Ch 12.

Assuming market determined exchange rates, use the supply and demand schedules for pounds to analyze the effect on the exchange rate (dollars per pound) between the U.S. dollar and the British pound under the following two circumstances: 1. Both the U.K. and U.S. economies slide into recession, but the U.K.
recession is less severe than the U.S. recession. Adjust the following graph to illustrate the effect of the observed economic developments on the supply of and demand for the British pound.

Explanation: Since the U.K. recession is less severe than the U.S. recession, more people will try to invest in the United Kingdom, and demand for the pound increases. At the same time, fewer people would like to invest in the U.S. dollar. Thus, the supply of the pound decreases. Overall, the exchange rate increases (more U.S. dollars are offered per British pound).

2. Britain’s oil production in the North Sea decreases, and exports to the United States fall. Adjust the following graph to illustrate the effect of the observed economic developments on the supply of and demand for the British pound.

Explanation: As oil exports to the United States fall, the demand for pounds declines. At the same time, the supply of pounds does not change. Overall, the exchange rate decreases.

Which of the following events is likely to result in an increase of the exchange rate? Check all that apply.

Explanation: The exchange rate is likely to rise as a result of the British government inviting U.S. firms to invest in British oil fields. This causes an increase in demand for the pound, whereas the supply of pounds does not change. See section: “Determining Long Run Exchange Rates.”

## 11. Study Questions #11. Ch 12. Evaluate the following statement.

True or False: A nation’s currency will depreciate if its inflation rate is less than that of its trading partners.

Explanation: When a nation’s inflation rate is less than that of its trading partners, a nation’s currency will appreciate because lower inflation attracts more foreign investors. As a result, the demand for the domestic currency increases, whereas its supply does not change.
Overall, the currency appreciates. See sections: “Determining Long-Run Exchange Rates,” and “Determining Short-Run Exchange Rates: The Asset Market Approach.”

## 12. Study Questions #12. Ch 12. Complete the following statement.

The appreciation in the dollar’s exchange value from 1980 to 1985 made U.S. products more expensive and foreign products less expensive, increased U.S. imports, and decreased U.S. exports.

Explanation: The appreciation in the dollar’s exchange value from 1980 to 1985 made U.S. products more expensive and foreign products less expensive, increased U.S. imports, and decreased U.S. exports. See section: “Relative Levels of Interest Rates.”

## 13. Study Questions #13. Ch 12.

Suppose the dollar/franc exchange rate equals $0.50 per franc. Complete the following statement to show what will happen to the dollar’s exchange value according to the purchasing-power-parity theory.

If the U.S. price level increases by 10% and the price level in Switzerland stays constant, the U.S. dollar will depreciate by 10%, to $0.55 per franc.

Explanation: According to the purchasing-power-parity theory, the U.S. dollar should appreciate against the franc if U.S. inflation is less than Switzerland’s inflation. Conversely, if U.S. inflation exceeds Switzerland’s inflation, the purchasing power of the dollar falls relative to the franc, so the exchange value of the dollar against the franc should depreciate. In this example, the U.S. dollar will depreciate by 10%, to $0.50 × (1 + 0.1) = $0.55 per franc.

## 14. Study Questions #14. Ch 12.

Suppose that the nominal interest rate on three-month Treasury bills is 8% in the United States and 6% in the United Kingdom, and the rate of inflation is 10% in the United States and 4% in the United Kingdom. Complete the following statements.
The real interest rate in the United States is −2%, and the real interest rate in the United Kingdom is 2%.

In response to these real interest rates, international investment flows from the United States to the United Kingdom.

As a result of these investment flows, the dollar would depreciate against the pound.

Explanation: The real interest rate in the United States is 8% − 10% = −2%. The real interest rate in the United Kingdom is 6% − 4% = 2%. The higher real interest rate in the United Kingdom would induce investments to flow from the United States to the United Kingdom, and as a result of these investment flows, the dollar would depreciate against the pound as the demand for the pound strengthens. See section: “Relative Levels of Interest Rates.”
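The arithmetic in questions 13 and 14 is easy to verify. The sketch below (plain Python; the rates and inflation figures are the ones given above) computes the PPP-implied exchange rate and the two real interest rates:

```python
# Q13 (PPP): U.S. prices rise 10%, Swiss prices are flat, so the dollar
# depreciates by 10% against the franc (more dollars per franc).
old_rate = 0.50                                  # dollars per franc
us_inflation, swiss_inflation = 0.10, 0.00
new_rate = old_rate * (1 + us_inflation - swiss_inflation)
print(new_rate)  # 0.55

# Q14 (Fisher approximation): real rate = nominal rate - inflation.
real_us = 0.08 - 0.10   # -2%: capital flows out of the United States...
real_uk = 0.06 - 0.04   # +2%: ...and into the United Kingdom
print(real_us, real_uk)
```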
# Curse of Dimensionality

11 September 2017

Curse of dimensionality usually refers to the fact that as the dimensionality of the data increases, it becomes exponentially more difficult to search, sample from the posterior, optimize, etc. First, some examples to illustrate the weirdness (or unintuitive nature) of a high-dimensional space. Then, some examples of difficulties when working in a high-dimensional space.

## Weirdness of High-Dimensional Space

### Ball in a Cube

The volume of an $n$-cube with sides of length $s$ is $V_{n, \text{cube}} = s^n$. The volume of an $n$-ball with a radius $r$ is $V_{n, \text{ball}} = \frac{r^n \pi^{n / 2}}{\Gamma(n / 2 + 1)}$. If an $n$-ball is inscribed inside an $n$-cube, $s = 2r$, i.e. \begin{align} \frac{V_{n, \text{ball}}}{V_{n, \text{cube}}} &= \frac{\frac{r^n \pi^{n / 2}}{\Gamma(n / 2 + 1)}}{s^n} \\ &= \frac{\pi^{n / 2}}{2^n \Gamma(n / 2 + 1)} \end{align} where $\Gamma$ is the Gamma function. (The unit-ball volume itself increases up to $n = 5$ before shrinking, but this ratio decreases to zero exponentially fast.)

### Balls in the Corners of a Cube

Consider an $n$-cube with sides of length $s = 4$ with the center at $(0, \dotsc, 0)$. There are $2^n$ $n$-cubes with sides of length $2$ with centers at $(\pm 1, \dotsc, \pm 1)$. Hence, we can inscribe $2^n$ $n$-balls inside these smaller cubes, each with radius $1$. The distance from $(0, \dotsc, 0)$ to any of the ball centers is $\sqrt{n}$. Hence we can inscribe a ball with the center at $(0, \dotsc, 0)$ and radius $\sqrt{n} - 1$ without intersecting any of the other balls. For $n = 9$, this means that the center ball has radius $\sqrt{9} - 1 = 2$ and touches the sides of the original cube. For $n > 9$, the center ball bulges out of the original cube without intersecting any of the other balls.

### Orange Peel

Consider an $n$-ball (an orange) with radius $r$ whose peel has thickness $\epsilon$. The volume of the peel is \begin{align} V_{\text{peel}} &= \frac{\pi^{n / 2}}{\Gamma(n / 2 + 1)} \left(r^n - (r - \epsilon)^n\right).
\end{align} Hence the fraction of the peel volume divided by the whole orange is \begin{align} \frac{V_{\text{peel}}}{V_{\text{orange}}} &= \frac{\frac{\pi^{n / 2}}{\Gamma(n / 2 + 1)} \left(r^n - (r - \epsilon)^n\right)}{\frac{\pi^{n / 2}}{\Gamma(n / 2 + 1)} r^n} \\ &= \frac{r^n - (r - \epsilon)^n}{r^n} \\ &= 1 - \left(\frac{r - \epsilon}{r}\right)^n. \end{align} This quantity goes very quickly to one even for a very thin peel.

### Gaussian Thin Shell

By a standard Gaussian concentration result, for a zero-mean, $n$-dimensional normal distribution with unit variance in each coordinate, the probability mass in the thin shell $\left\{x: \sqrt{n - 1} - c \leq \|x\| \leq \sqrt{n - 1} + c\right\}$, $c > 0$, is at least $1 - \frac{\exp{(-c^2 / 4)}}{c^2 / 4 + \exp{(-c^2 / 4)}}$. This goes to one very quickly. We can also see this by noting that the random variable corresponding to the length of the $n$-dimensional normal random variable, $\sqrt{x_1^2 + \cdots + x_n^2}$, is distributed according to the Chi distribution. The median of this random variable is approximately $\sqrt{n - 1}$. Its mean is $\mu = \sqrt{2} \frac{\Gamma((n + 1) / 2)}{\Gamma(n / 2)}$. Its variance is $n - \mu^2$, which tends to $1/2$ as $n \to \infty$. So the standard deviation of the length stays bounded while the mean grows like $\sqrt{n}$; almost all of the probability mass therefore lies in a shell of essentially constant width around radius $\approx \sqrt{n}$. A slightly less precise way is to refer to the Central Limit Theorem.
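These volume formulas are easy to check numerically. The sketch below (plain Python, standard library only) evaluates the ball-to-cube volume ratio and the orange-peel fraction derived above:

```python
import math

def ball_to_cube_ratio(n: int) -> float:
    """Volume of the unit-radius n-ball divided by its bounding n-cube (side 2)."""
    return math.pi ** (n / 2) / (2 ** n * math.gamma(n / 2 + 1))

def peel_fraction(n: int, r: float, eps: float) -> float:
    """Fraction of an n-ball's volume lying within thickness eps of its surface."""
    return 1 - ((r - eps) / r) ** n

for n in (2, 3, 10, 100):
    print(n, ball_to_cube_ratio(n), peel_fraction(n, r=1.0, eps=0.01))
```

Even with a peel only 1% of the radius thick, the peel holds roughly 63% of a 100-dimensional orange's volume, illustrating how quickly the fraction approaches one.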
# If x*y = 3x + 2y, then 2*3 + 3*4 is equal to:

If x*y = 3x + 2y, then 2*3 + 3*4 is equal to:

[A] 38
[B] 18
[C] 29
[D] 32

Answer: 29

$\because x * y = 3x + 2y$

$2*3+3*4 = 3\times 2+2\times 3+3\times 3+2\times 4$

$= 6+6+9+8 = 29$

Hence option [C] is the right answer.
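The defined operation can be checked directly. A quick Python sketch of the problem's made-up operator (named `star` here, since `*` cannot be redefined for plain integers):

```python
def star(x, y):
    # The problem defines x * y = 3x + 2y.
    return 3 * x + 2 * y

result = star(2, 3) + star(3, 4)
print(result)  # (6 + 6) + (9 + 8) = 29
```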
If you have any questions, please feel free to email us or to use the discussion tab. If you have an answer, feel free to contribute!

## How do I cite mothur? How about the individual functions?

Schloss, P.D., et al., Introducing mothur: Open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol, 2009. 75(23):7537-41

See the citation file for a BibTeX entry. You can also run the function you are interested in with “citation” to get the papers related to that function.

mothur > cluster.split(citation)
Schloss PD, Westcott SL (2011). Assessing and improving methods used in OTU-based approaches for 16S rRNA gene sequence analysis. Appl Environ Microbiol 77:3219. http://www.mothur.org/wiki/Cluster.split

## I don’t have enough RAM or processing power. What are my options?

Check out this page describing how to run mothur using Amazon resources, mothur ami. You should also check out Pat’s discussion of problems caused by poor quality data.

## Why does the command window open and close quickly when I double-click on mothur?

This is because you have not given mothur the input files, and you will get an error message quickly followed by the screen closing. Please see the section on how to run mothur from a command prompt.

## Why doesn’t mothur do…?

If you would like to see something added to mothur, please let us know! It may take a while to get around to implementing the feature, but we are generally reasonable and welcome source-code contributions from anyone who would like to contribute.

## Do you have an example analysis?

Yes, miseq_sop highlights some of the things you can do with mothur.

## Why is data missing for some distance levels?

This should not be an issue if you are using the recommended cluster.split function. Once upon a time a commonly asked question was why there isn’t a line for distance 0.XX.
If you notice the previous example, the distances jump from 0.003 to 0.006. Where are 0.004 and 0.005? mothur only outputs data if the clustering has been updated for a distance. So if you don’t have data at your favorite distance, that means that nothing changed between the previous distance and the next one. Therefore, if you want OTU data for a distance of 0.005 in this case, you would use the data from 0.003. But... many of the commands that use a label option are smart (e.g. read.otu). So, if you say you want data at the 0.03 cutoff, then you can set label=0.03 and mothur will figure out what data to give you.

## Aren’t the ‘unique’ and ‘0.00’ distance levels the same?

Perhaps the most commonly asked question is why the cluster command produces data for both the “unique” and “0.00” lines. Aren’t they the same? No. The “unique” line represents data for the situation where all of the sequences in an OTU are identical; the “0.00” line represents data for the situation where all of the sequences in an OTU have pairwise distances less than 0.0049. We made the decision that because there is error in everything, we should round these distances as well and not apply a hard cutoff at 0.01, 0.02, etc. But... if you don’t buy into this and want a hard cutoff, you can set the hard=T option in cluster.

## mothur crashes when I read my distance file

There are two common causes for this: file size and format.

File Size:

• The cluster command loads your distance matrix into RAM, and your distance file is most likely too large to fit in RAM. There are two options to help with this. The first is to use a cutoff. By using a cutoff, mothur will only load distances that are below the cutoff. If that is still not enough, there is a command called cluster.split, which divides the distance matrix and clusters the smaller pieces separately. You may also be able to reduce the size of the original distance matrix by using the commands outlined in the 454 SOP.
• You should also check out Pat’s discussion of problems caused by poor quality data.

Wrong Format:

• This error can be caused by trying to read a column-formatted distance matrix using the phylip parameter. By default, the dist.seqs command generates a column-formatted distance matrix. To make a phylip-formatted matrix, set the dist.seqs command parameter output to lt.

## Why do I have such a large distance matrix

This is most often caused by poor overlap of your reads. When reads have poor overlap, it greatly increases your error rate. Also, sequences that should cluster together don’t, because the errors appear to be genetic differences when in fact they are not. The quality of the data you are processing cannot be overstressed. Error-filled reads produce error-filled results.

mothur Blog - Why do I have such a large distance matrix?

“To take a step back, if you look through our MiSeq SOP, you’ll see that we go to great pains to only work with the unique sequences to limit the number of sequences we have to align, screen for chimeras, classify, etc. We all know that 20 million reads will never make it through the pipeline without setting your computer on fire. Returning to the question at hand, you can imagine that if the reads do not fully overlap then any error in the 5’ end of the first read will be uncorrected by the 3’ end of the second read. If we assume for now that the errors are random, then every error will generate a new unique sequence. Granted, this happens less than 1% of the time, but multiply that by 20 million reads at whatever length you choose and you’ve got a big number. Voilà, a bunch of unique reads and a ginormous distance matrix.”

## mothur can’t find my input files

The most common cause of mothur not finding a file is because you double-clicked on the mothur executable to run mothur. This will open a terminal or command prompt window in your home directory.
mothur will then look for the input files in the home directory instead of in the directory where mothur’s executable is located. If this is the cause, you can either put the input files in your home directory, give complete file names, or open a terminal or command prompt window, cd into the directory where the mothur executable is located, and run mothur by using “./mothur” on Mac or “mothur” on Windows.

## I installed the latest version, but I am still running an older version

We often see this issue when you have an older version of mothur installed in your path. You can find out where by opening a terminal window and running:

$ which mothur
path_to_old_version

for example:

$ which mothur
/usr/local/bin

When you find the location of the older version, you can delete it or move it out of your path:

$ mv path_to_old_version/mothur new_location

for example:

$ mv /usr/local/bin/mothur /Users/yourusername/desktop/old_version_mothur

## Why is my dataset only clustering to “unique”?

We typically see this type of thing when sequences are very divergent from each other (e.g. protein coding sequence) or when the sequences have not been screened to ensure that they overlap the same region, followed by using filter.seqs to make sure that they start and stop in the same alignment space. Sometimes increasing the cutoff in the dist.seqs and cluster commands can help with this issue.

## Why does the cutoff change when I cluster with average neighbor?

This is a product of using the average neighbor algorithm with a sparse distance matrix. When you run cluster, the algorithm looks for pairs of sequences to merge in the rows and columns that are getting merged together. Let’s say you set the cutoff to 0.05. If one cell has a distance of 0.03 and the cell it is getting merged with has a distance above 0.05, then the cutoff is reset to 0.03, because it’s not possible to merge at a higher level and keep all the data. All of the sequences are still there from multiple phyla.
Incidentally, although we always see this, it is a bigger problem for people that include sequences that do not fully overlap.

## How do I make a tree?

mothur has two commands that create trees. The clearcut command creates a phylogenetic tree that represents how sequences relate. The clearcut program was written by the Initiative for Bioinformatics and Evolutionary Studies (IBEST) at the University of Idaho. For more information about clearcut, please refer to its GitHub repository. The tree.shared command will generate a newick-formatted tree file that describes the dissimilarity (1 − similarity) among multiple groups. Groups are clustered using the UPGMA algorithm, with the distance between communities calculated using any of the calculators describing the similarity in community membership or structure.

## How do I know “who” is in an OTU in a shared file?

You can run the get.otulist command on the list file you used to generate the shared file. You want to be sure you are comparing the same distances, i.e. 98_lt_phylip_amazon.an.0.03.otulist would relate to the 0.03 distance in your shared file. Also, if you subsample your data set and want to compare things, be sure to subsample the list and group file and then create the shared file to make sure you are working with the same sequences.

sub.sample(list=yourListFile, group=yourGroupFile, persample=t)
make.shared(list=yourSubsampledListFile, group=yourSubsampledGroupFile, label=0.03)
get.otulist(list=yourSubsampledListFile, label=0.03)

## How do I know “who” is in the OTUs represented in the Venn diagram?

You can run the get.sharedseqs command. Be sure to pay close attention to the “unique” and “shared” parameters.

## What are mothur’s file types?

mothur uses and creates many file types.

## How do I select certain sequences or groups of sequences?
mothur has several “get” and “remove” commands: get.seqs, get.lineage, get.groups, get.otus, get.dists, remove.seqs, remove.lineage, remove.otus, remove.dists and remove.groups.

## mothur reports a “bad_alloc” error in the shhh.flows command

This error indicates your computer is running out of memory. The shhh.flows command is very memory intensive. This error is most commonly caused by trying to process a dataset that is too large, using multiple processors, or failing to run trim.flows before shhh.flows. If you are using multiple processors, try running the command with processors=1; the more processors you use, the more memory is required. Running trim.flows with an oligos file, and then shhh.flows with the file option, may also resolve the issue. If for some reason you are unable to run shhh.flows with your data, a good alternative is to use the trim.seqs command using a 50-bp sliding window and to trim the sequence when the average quality score over that window drops below 35. Our results suggest that the sequencing error rates by this method are very good, but not quite as good as by shhh.flows, and that the resulting sequences tend to be a bit shorter.

## File Mismatches - “[error]: yourSequence is in fileA but not in fileB, please correct.”

The most common reason this occurs is because you forgot to include a name or count file on a command, or accidentally included the wrong one due to a typo. mothur has a “current” option, which allows you to set file parameters to “current”. For example, if fasta=current, mothur will use the last fasta file given or created. The current option was designed to help avoid typo mistakes due to mothur’s long filenames. Another reason this might occur is a process failing when you are using multiple processors. If a process dies, a file can be incomplete, which would cause a mismatch error downstream.
Physics

##### ASSERTION

Absolute error may be negative or positive.

##### REASON

Absolute error is the difference between the real value and the measured value of a physical quantity.

Assertion is incorrect but Reason is correct

##### SOLUTION

Absolute error is the difference between the exact value and the approximation. It is always positive. So, option D is correct.

Assertion & Reason Medium Published on 18th 08, 2020

#### Related Questions

Q1 Single Correct Medium

$V$ is the volume of a liquid flowing per second through a capillary tube of length $l$ and radius $r$, under a pressure difference $(p)$. If the velocity $(v)$, mass $(M)$ and time $(T)$ are taken as the fundamental quantities, then the dimensional formula for $\eta$ in the relation $V = \dfrac {\pi p r^{4}}{8\eta l}$ is

• A. $[MV^{-1}]$
• B. $[M^{-1}V^{-1}T^{-2}]$
• C. $[M^{1}V^{1}T^{-2}]$
• D. $[M^{1}V^{-1}T^{-2}]$

Asked in: Physics - Units and Measurement

Q2 Single Correct Hard

Which of the following pairs of physical quantities does not have the same dimensional formula?

• A. Work and torque
• B. Angular momentum and Planck's constant
• C. Tension and surface tension
• D. Impulse and linear momentum

Asked in: Physics - Units and Measurement

Q3 Single Correct Medium

Velocity gradient has the same dimensional formula as

• A. angular frequency
• B. angular momentum
• C. velocity
• D. none of these

Asked in: Physics - Units and Measurement

Q4 Single Correct Medium

The length of a strip measured with a meter rod is 10.0 cm; its width measured with a vernier caliper is 1.00 cm. The least count of the meter rod is 0.1 cm and that of the vernier caliper is 0.01 cm. What will be the error in its area?

• A. $\pm 13$%
• B. $\pm 7$%
• C. $\pm 4$%
• D.
$\pm 2$%

Asked in: Physics - Units and Measurement

Q5 Single Correct Medium

A dimensionless quantity

• A. never has a unit
• B. always has a unit
• C. may have a unit
• D. does not exist

Asked in: Physics - Units and Measurement
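For Q4 above, the maximum relative error in a product adds the relative errors of the factors, so the percentage error in the area follows directly from the two least counts. A quick Python check (values taken from the question):

```python
L, dL = 10.0, 0.1    # length (cm) and least count of the meter rod
W, dW = 1.00, 0.01   # width (cm) and least count of the vernier caliper

# For A = L * W, relative errors add: dA/A = dL/L + dW/W
pct_error = (dL / L + dW / W) * 100
print(pct_error)  # 2.0, i.e. the answer is +/-2%
```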
## 92.10 Properties of morphisms representable by algebraic spaces Here is the definition that makes this work. Definition 92.10.1. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Assume $f$ is representable by algebraic spaces. Let $\mathcal{P}$ be a property of morphisms of algebraic spaces which 1. is preserved under any base change, and 2. is fppf local on the base, see Descent on Spaces, Definition 72.9.1. In this case we say that $f$ has property $\mathcal{P}$ if for every $U \in \mathop{\mathrm{Ob}}\nolimits ((\mathit{Sch}/S)_{fppf})$ and any $y \in \mathcal{Y}_ U$ the resulting morphism of algebraic spaces $f_ y : F_ y \to U$, see diagram (92.9.1.1), has property $\mathcal{P}$. It is important to note that we will only use this definition for properties of morphisms that are stable under base change, and local in the fppf topology on the target. This is not because the definition doesn't make sense otherwise; rather it is because we may want to give a different definition which is better suited to the property we have in mind. Lemma 92.10.2. Let $S$ be an object of $\mathit{Sch}_{fppf}$. Let $\mathcal{P}$ be as in Definition 92.10.1. Consider a $2$-commutative diagram $\xymatrix{ \mathcal{X}' \ar[r] \ar[d]_{f'} & \mathcal{X} \ar[d]^ f \\ \mathcal{Y}' \ar[r] & \mathcal{Y} }$ of $1$-morphisms of categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Assume the horizontal arrows are equivalences and $f$ (or equivalently $f'$) is representable by algebraic spaces. Then $f$ has $\mathcal{P}$ if and only if $f'$ has $\mathcal{P}$. Proof. Note that this makes sense by Lemma 92.9.3. Proof omitted. $\square$ Here is a sanity check. Lemma 92.10.3. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $a : F \to G$ be a map of presheaves on $(\mathit{Sch}/S)_{fppf}$. Let $\mathcal{P}$ be as in Definition 92.10.1.
Assume $a$ is representable by algebraic spaces. Then $a : F \to G$ has property $\mathcal{P}$ (see Bootstrap, Definition 78.4.1) if and only if the corresponding morphism $\mathcal{S}_ F \to \mathcal{S}_ G$ of categories fibred in groupoids has property $\mathcal{P}$. Proof. Note that the lemma makes sense by Lemma 92.9.5. Proof omitted. $\square$ Lemma 92.10.4. Let $S$ be an object of $\mathit{Sch}_{fppf}$. Let $\mathcal{P}$ be as in Definition 92.10.1. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories fibred in setoids over $(\mathit{Sch}/S)_{fppf}$. Let $F$, resp. $G$ be the presheaf which to $T$ associates the set of isomorphism classes of objects of $\mathcal{X}_ T$, resp. $\mathcal{Y}_ T$. Let $a : F \to G$ be the map of presheaves corresponding to $f$. Then $a$ has $\mathcal{P}$ if and only if $f$ has $\mathcal{P}$. Proof. The lemma makes sense by Lemma 92.9.6. The lemma follows on combining Lemmas 92.10.2 and 92.10.3. $\square$ Lemma 92.10.5. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$ be categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Let $\mathcal{P}$ be a property as in Definition 92.10.1 which is stable under composition. Let $f : \mathcal{X} \to \mathcal{Y}$, $g : \mathcal{Y} \to \mathcal{Z}$ be $1$-morphisms which are representable by algebraic spaces. If $f$ and $g$ have property $\mathcal{P}$ so does $g \circ f : \mathcal{X} \to \mathcal{Z}$. Proof. Note that the lemma makes sense by Lemma 92.9.9. Proof omitted. $\square$ Lemma 92.10.6. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ be categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Let $\mathcal{P}$ be a property as in Definition 92.10.1. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism representable by algebraic spaces. Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism. 
Consider the $2$-fibre product diagram $\xymatrix{ \mathcal{Z} \times _{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} & \mathcal{X} \ar[d]^ f \\ \mathcal{Z} \ar[r]^ g & \mathcal{Y} }$ If $f$ has $\mathcal{P}$, then the base change $f'$ has $\mathcal{P}$. Proof. The lemma makes sense by Lemma 92.9.7. Proof omitted. $\square$ Lemma 92.10.7. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$ be categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Let $\mathcal{P}$ be a property as in Definition 92.10.1. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism representable by algebraic spaces. Let $g : \mathcal{Z} \to \mathcal{Y}$ be any $1$-morphism. Consider the fibre product diagram $\xymatrix{ \mathcal{Z} \times _{g, \mathcal{Y}, f} \mathcal{X} \ar[r]_-{g'} \ar[d]_{f'} & \mathcal{X} \ar[d]^ f \\ \mathcal{Z} \ar[r]^ g & \mathcal{Y} }$ Assume that for every scheme $U$ and object $x$ of $\mathcal{Y}_ U$, there exists an fppf covering $\{ U_ i \to U\}$ such that $x|_{U_ i}$ is in the essential image of the functor $g : \mathcal{Z}_{U_ i} \to \mathcal{Y}_{U_ i}$. In this case, if $f'$ has $\mathcal{P}$, then $f$ has $\mathcal{P}$. Proof. Proof omitted. Hint: Compare with the proof of Spaces, Lemma 63.5.6. $\square$ Lemma 92.10.8. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $\mathcal{P}$ be a property as in Definition 92.10.1 which is stable under composition. Let $\mathcal{X}_ i, \mathcal{Y}_ i$ be categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$, $i = 1, 2$. Let $f_ i : \mathcal{X}_ i \to \mathcal{Y}_ i$, $i = 1, 2$ be $1$-morphisms representable by algebraic spaces. If $f_1$ and $f_2$ have property $\mathcal{P}$ so does $f_1 \times f_2 : \mathcal{X}_1 \times \mathcal{X}_2 \to \mathcal{Y}_1 \times \mathcal{Y}_2$. Proof. The lemma makes sense by Lemma 92.9.10. Proof omitted. $\square$ Lemma 92.10.9. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. 
Let $\mathcal{X}$, $\mathcal{Y}$ be categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Let $f : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism representable by algebraic spaces. Let $\mathcal{P}$, $\mathcal{P}'$ be properties as in Definition 92.10.1. Suppose that for any morphism of algebraic spaces $a : F \to G$ we have $\mathcal{P}(a) \Rightarrow \mathcal{P}'(a)$. If $f$ has property $\mathcal{P}$ then $f$ has property $\mathcal{P}'$. Proof. Formal. $\square$ Lemma 92.10.10. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $j : \mathcal X \to \mathcal Y$ be a $1$-morphism of categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. Assume $j$ is representable by algebraic spaces and a monomorphism (see Definition 92.10.1 and Descent on Spaces, Lemma 72.10.30). Then $j$ is fully faithful on fibre categories. Proof. We have seen in Lemma 92.9.2 that $j$ is faithful on fibre categories. Consider a scheme $U$, two objects $u, v$ of $\mathcal{X}_ U$, and an isomorphism $t : j(u) \to j(v)$ in $\mathcal{Y}_ U$. We have to construct an isomorphism in $\mathcal{X}_ U$ between $u$ and $v$. By the $2$-Yoneda lemma (see Section 92.5) we think of $u$, $v$ as $1$-morphisms $u, v : (\mathit{Sch}/U)_{fppf} \to \mathcal{X}$ and we consider the $2$-fibre product $(\mathit{Sch}/U)_{fppf} \times _{j \circ v, \mathcal{Y}} \mathcal{X}.$ By assumption this is representable by an algebraic space $F_{j \circ v}$, over $U$ and the morphism $F_{j \circ v} \to U$ is a monomorphism. But since $(1_ U, v, 1_{j(v)})$ gives a $1$-morphism of $(\mathit{Sch}/U)_{fppf}$ into the displayed $2$-fibre product, we see that $F_{j \circ v} = U$ (here we use that if $V \to U$ is a monomorphism of algebraic spaces which has a section, then $V = U$). Therefore the $1$-morphism projecting to the first coordinate $(\mathit{Sch}/U)_{fppf} \times _{j \circ v, \mathcal{Y}} \mathcal{X} \to (\mathit{Sch}/U)_{fppf}$ is an equivalence of fibre categories. 
Since $(1_ U, u, t)$ and $(1_ U, v, 1_{j(v)})$ give two objects in $((\mathit{Sch}/U)_{fppf} \times _{j \circ v, \mathcal{Y}} \mathcal{X})_ U$ which have the same first coordinate, there must be a $2$-morphism between them in the $2$-fibre product. This is by definition a morphism $\tilde t : u \to v$ such that $j(\tilde t) = t$. $\square$ Here is a characterization of those categories fibred in groupoids for which the diagonal is representable by algebraic spaces. Lemma 92.10.11. Let $S$ be a scheme contained in $\mathit{Sch}_{fppf}$. Let $\mathcal{X}$ be a category fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$. The following are equivalent: 1. the diagonal $\mathcal{X} \to \mathcal{X} \times \mathcal{X}$ is representable by algebraic spaces, 2. for every scheme $U$ over $S$, and any $x, y \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{X}_ U)$ the sheaf $\mathit{Isom}(x, y)$ is an algebraic space over $U$, 3. for every scheme $U$ over $S$, and any $x \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{X}_ U)$ the associated $1$-morphism $x : (\mathit{Sch}/U)_{fppf} \to \mathcal{X}$ is representable by algebraic spaces, 4. for every pair of schemes $T_1, T_2$ over $S$, and any $x_ i \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{X}_{T_ i})$, $i = 1, 2$ the $2$-fibre product $(\mathit{Sch}/T_1)_{fppf} \times _{x_1, \mathcal{X}, x_2} (\mathit{Sch}/T_2)_{fppf}$ is representable by an algebraic space, 5. for every representable category fibred in groupoids $\mathcal{U}$ over $(\mathit{Sch}/S)_{fppf}$ every $1$-morphism $\mathcal{U} \to \mathcal{X}$ is representable by algebraic spaces, 6. for every pair $\mathcal{T}_1, \mathcal{T}_2$ of representable categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$ and any $1$-morphisms $x_ i : \mathcal{T}_ i \to \mathcal{X}$, $i = 1, 2$ the $2$-fibre product $\mathcal{T}_1 \times _{x_1, \mathcal{X}, x_2} \mathcal{T}_2$ is representable by an algebraic space, 7. 
for every category fibred in groupoids $\mathcal{U}$ over $(\mathit{Sch}/S)_{fppf}$ which is representable by an algebraic space every $1$-morphism $\mathcal{U} \to \mathcal{X}$ is representable by algebraic spaces, 8. for every pair $\mathcal{T}_1, \mathcal{T}_2$ of categories fibred in groupoids over $(\mathit{Sch}/S)_{fppf}$ which are representable by algebraic spaces, and any $1$-morphisms $x_ i : \mathcal{T}_ i \to \mathcal{X}$ the $2$-fibre product $\mathcal{T}_1 \times _{x_1, \mathcal{X}, x_2} \mathcal{T}_2$ is representable by an algebraic space. Proof. The equivalence of (1) and (2) follows from Stacks, Lemma 8.2.5 and the definitions. Let us prove the equivalence of (1) and (3). Write $\mathcal{C} = (\mathit{Sch}/S)_{fppf}$ for the base category. We will use some of the observations of the proof of the similar Categories, Lemma 4.41.8. We will use the symbol $\cong$ to mean “equivalence of categories fibred in groupoids over $\mathcal{C} = (\mathit{Sch}/S)_{fppf}$”. Assume (1). Suppose given $U$ and $x$ as in (3). For any scheme $V$ and $y \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{X}_ V)$ we see (compare reference above) that $\mathcal{C}/U \times _{x, \mathcal{X}, y} \mathcal{C}/V \cong (\mathcal{C}/U \times _ S V) \times _{(x, y), \mathcal{X} \times \mathcal{X}, \Delta } \mathcal{X}$ which is representable by an algebraic space by assumption. Conversely, assume (3). Consider any scheme $U$ over $S$ and a pair $(x, x')$ of objects of $\mathcal{X}$ over $U$. We have to show that $\mathcal{X} \times _{\Delta , \mathcal{X} \times \mathcal{X}, (x, x')} U$ is representable by an algebraic space. 
This is clear because (compare reference above) $\mathcal{X} \times _{\Delta , \mathcal{X} \times \mathcal{X}, (x, x')} \mathcal{C}/U \cong (\mathcal{C}/U \times _{x, \mathcal{X}, x'} \mathcal{C}/U) \times _{\mathcal{C}/U \times _ S U, \Delta } \mathcal{C}/U$ and the right hand side is representable by an algebraic space by assumption and the fact that the category of algebraic spaces over $S$ has fibre products and contains $U$ and $S$. The equivalences (3) $\Leftrightarrow$ (4), (5) $\Leftrightarrow$ (6), and (7) $\Leftrightarrow$ (8) are formal. The equivalences (3) $\Leftrightarrow$ (5) and (4) $\Leftrightarrow$ (6) follow from Lemma 92.9.3. Assume (3), and let $\mathcal{U} \to \mathcal{X}$ be as in (7). To prove (7) we have to show that for every scheme $V$ and $1$-morphism $y : (\mathit{Sch}/V)_{fppf} \to \mathcal{X}$ the $2$-fibre product $(\mathit{Sch}/V)_{fppf} \times _{y, \mathcal{X}} \mathcal{U}$ is representable by an algebraic space. Property (3) tells us that $y$ is representable by algebraic spaces hence Lemma 92.9.8 implies what we want. Finally, (7) directly implies (3). $\square$ In the situation of the lemma, for any $1$-morphism $x : (\mathit{Sch}/U)_{fppf} \to \mathcal{X}$ as in the lemma, it makes sense to say that $x$ has property $\mathcal{P}$, for any property as in Definition 92.10.1. In particular this holds for $\mathcal{P} =$ “surjective”, $\mathcal{P} =$ “smooth”, and $\mathcal{P} =$ “étale”, see Descent on Spaces, Lemmas 72.10.6, 72.10.26, and 72.10.28. We will use these three cases in the definitions of algebraic stacks below.
Mutual exclusion by disabling interrupts

I am reading William Stallings' operating systems book, which gives this pseudo-code for disabling interrupts to achieve mutual exclusion:

while (true) { /* disable interrupts */; /* critical section */; /* enable interrupts */; /* remainder */; }

But I don't understand why there is a while (true) loop in the code. It means that this part of the code repeats forever, but why?
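One way to read the loop is that it models a process that repeatedly needs the shared resource over its lifetime: each iteration is one pass of "enter the critical section, then do unrelated work". Disabling interrupts is a kernel-level mechanism a normal program cannot use, so the sketch below is a hypothetical user-space analogue in Python, with a lock standing in for the disable/enable interrupt pair; it only illustrates the structure of Stallings' fragment, it is not his code.

```python
import threading

counter = 0                 # shared state touched only in the critical section
lock = threading.Lock()     # user-space stand-in for disable/enable interrupts

def process(iterations):
    global counter
    for _ in range(iterations):  # the book's "while (true)", bounded here so it terminates
        lock.acquire()           # ~ disable interrupts
        counter += 1             # critical section: exactly one thread at a time
        lock.release()           # ~ enable interrupts
        # remainder section: non-critical work would go here

threads = [threading.Thread(target=process, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: mutual exclusion kept the shared update consistent
```

Without the lock, interleaved read-modify-write steps could lose updates; the loop itself just expresses that the process keeps coming back for more critical-section work.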
# [tex4ht] [bug #509] Incompatibilities with Tikz generated by menukeys package Julianus Pfeuffer puszcza-hackers at gnu.org.ua Tue Apr 27 22:27:51 CEST 2021 URL: <http://puszcza.gnu.org.ua/bugs/?509> Summary: Incompatibilities with Tikz generated by menukeys package Project: tex4ht Submitted by: julepf Submitted on: Tue Apr 27 23:27:51 2021 Category: None Priority: 5 - Normal Severity: 5 - Normal Status: None Privacy: Public Assigned to: None Originator Email: Open/Closed: Open Discussion Lock: Any _______________________________________________________ Details: Hi! I am trying to convert a document where I am using the menukeys package to HTML. It seems to use Tikz internally to draw the menu/key "buttons". make4ht creates an svg for every usage of such a button but they seem to be broken. For some of them, I get red lines, indicating a parse error. For others, I get the box without the text, since there seem to be additional <g> elements in the <text>. I tried both drivers from your git repo already %\def\pgfsysdriver{pgfsys-tex4ht-updated.def} \def\pgfsysdriver{pgfsys-dvisvgm4ht.def} The second, pgfsys-dvisvgm4ht.def only creates the text and no boxes but seems to have no errors as far as I could see. Not sure if this is an error that can be addressed in this package or its helpers but this is the first package in the hierarchy I guess. An example can be as minimal as: \documentclass[a4paper,12pt]{article} \usepackage{menukeys} \begin{document} Hello world. \menu{FOOO} \end{document} which compiles fine with e.g. pdflatex. I am using TL2021. Thanks in advance for any help! J _______________________________________________________ Reply to this item at: <http://puszcza.gnu.org.ua/bugs/?509> _______________________________________________ Message sent via/by Puszcza http://puszcza.gnu.org.ua/ More information about the tex4ht mailing list.
# Is a distribution that is normal, but highly skewed, considered Gaussian? I have this question: What do you think the distribution of time spent per day on YouTube looks like? My answer is that it is probably normally distributed and highly left skewed. I expect there is one mode where most users spend around some average time and then a long right tail since some users are overwhelming power users. Is that a fair answer? Is there a better word for that distribution? • As some answers mention but do not emphasise, skewness is named informally for the longer tail if there is one, so right-skewed if a longer right tail. Left and right as used in this context both presuppose a display following a convention that magnitude is shown on the horizontal axis. If that sounds too obvious, consider displays in the Earth and environmental sciences in which the magnitude is height or depth and shown vertically. Small print: some measures of skewness can be zero even if a distribution is skewed geometrically. Mar 31 '19 at 6:42 • Total time per day for all users? or time per day per person? If the latter, then surely there's a moderately big spike at 0, in which case you probably need a 'spike and slab' style distribution with a Dirac delta at 0. Apr 1 '19 at 7:08 • "Normal" is synonymous with "Gaussian", and Gaussian distributions, also called normal distributions, are not skewed. Apr 2 '19 at 1:47 • I find the question in the title much different from the question in the body text. Or at least the title is very confusing. No distribution is 'normal but highly skewed'; that's a contradiction. Also, the Gaussian distribution is very well defined $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \text{exp}\left( - \frac{(x-\mu)^2}{2\sigma^2}\right)$ and not at all like the distribution of time spent per day on YouTube. So the answer to the question in the title is a big no. Apr 2 '19 at 12:39 • also, the question at the end 'is there a better word for that distribution?'
is very vague or broad. The information seems to be only 'one mode' and 'a long right tail' (the part 'probably normally distributed' makes no sense). There can be many distributions that satisfy these conditions. It is amazing that this question attracts more than ten answers and at least as many proposals for the alternative distribution before we actually try to clarify the question (there isn't even data). Apr 2 '19 at 12:53 A fraction per day is certainly not negative. This rules out the normal distribution, which has probability mass over the entire real axis - in particular over the negative half. Power law distributions are often used to model things like income distributions, sizes of cities etc. They are nonnegative and typically highly skewed. These would be the first I would try in modeling time spent watching YouTube. (Or monitoring CrossValidated questions.) More information on power laws can be found here or here, or in our tag. • You're completely correct that normal distributions have support on the real line. And yet...they're not an awful model for some strictly positive quantities, like adults' height or weight, where the mean and variance are such that the negative values are very unlikely under the model. Mar 30 '19 at 22:26 • @MattKrause That's actually a great question - is there the same probability I will be '10 cm above or below the mean height' or '10 percent above or below the mean height'? Only the first case could warrant a normal distribution. Apr 1 '19 at 12:26 • @MattKrause: I completely agree, in a general sense. Yet, the present question is about the proportion of daily time spent watching YouTube. We don't have any data, but I would be extremely surprised if the distribution was even remotely symmetric. Apr 1 '19 at 15:28 A distribution that is normal is not highly skewed. That is a contradiction. Normally distributed variables have skew = 0. • What is a better way to describe the distribution?
Is there a word for that type of distribution where it centers around a mode and then has a long tail? Mar 30 '19 at 19:21 • Unimodal and skewed is as close as I can come... Mar 30 '19 at 19:27 • As an aside, it's just really incredible that people give their time to help other people get better at this stuff. I know it goes without saying, but it's so cool what you both do! Mar 30 '19 at 19:30 • Yes, but it's worth clarifying that that statement pertains to the normally distributed population. A sample drawn from that population can be very skewed. Mar 31 '19 at 2:14 • When the skew value is small ("small" being decided by the people dealing with the stats in question), you can still treat the population as normal, albeit with minor error as a result. Apr 1 '19 at 18:03 If it has long right tail, then it's right skewed. It can't be a normal distribution since skew !=0, it's perhaps a unimodal skew normal distribution: https://en.wikipedia.org/wiki/Skew_normal_distribution It could be a log-normal distribution. As mentioned here: Users' dwell time on online articles (jokes, news etc.) follows a log-normal distribution. The reference given is: Yin, Peifeng; Luo, Ping; Lee, Wang-Chien; Wang, Min (2013). Silence is also evidence: interpreting dwell time for recommendation from psychological perspective. ACM International Conference on KDD. "Is there a better word for that distribution?" There's a worthwhile distinction here between using words to describe the properties of the distribution, versus trying to find a "name" for the distribution so that you can identify it as (approximately) an instance of a particular standard distribution: one for which a formula or statistical tables might exist for its distribution function, and for which you could estimate its parameters. In this latter case, you are likely using the named distribution, e.g. 
"normal/Gaussian" (the two terms are generally synonymous), as a model that captures some of the key features of your data, rather than claiming the population your data is drawn from exactly follows that theoretical distribution. To slightly misquote George Box, all models are "wrong", but some are useful. If you are thinking about the modelling approach, it is worth considering what features you want to incorporate and how complicated or parsimonious you want your model to be. Which properties of the data really matter for the purposes you are intending to model it? Note that if the skew is reasonably small and you do not care very much about it, even if the underlying population is genuinely skewed, then you might still find the normal distribution a useful model to approximate this true distribution of watching times. But you should check that this doesn't end up making silly predictions. Because a normal distribution has no highest or lowest possible value, then although extremely high or low values become increasingly unlikely, you will always find that your model predicts there is some probability of watching for a negative number of hours per day, or more than 24 hours. This gets more problematic for you if the predicted probability of such impossible events becomes high. A symmetric distribution like the normal will predict that as many people will watch for lengths of time more than e.g. 50% above the mean, as watch for less than 50% below the mean. If watching times are very skewed, then this kind of prediction may also be so implausible as to be silly, and give you misleading results if you are taking the results of your model and using them as inputs for some other purpose (for instance, you're running a simulation of watching times in order to calculate optimal advertisement scheduling). If the skewness is so noteworthy you want to capture it as part of your model, then the skew normal distribution may be more appropriate. 
If you want to capture both skewness and kurtosis, then consider the skewed t. If you want to incorporate the physically possible upper and lower bounds, then consider using the truncated versions of these distributions. Many other probability distributions exist that can be skewed and unimodal (for appropriate parameter choices) such as the F or gamma distributions, and again you can truncate these so they do not predict impossibly high watching times. A beta distribution may be a good choice if you are modelling the fraction of the day spent watching, as this is always bounded between 0 and 1 without further truncation being necessary. If you want to incorporate the concentration of probability at exactly zero due to non-watchers, then consider building in a hurdle model. In summary: although the normal distribution has zero skew, the fact your data are skewed doesn't rule out the normal distribution as a useful model, though it does suggest some other distribution may be more appropriate. You should consider other properties of the data when choosing your model, besides the skew, and consider too the purposes you are going to use the model for. It's safe to say that your true population of watching times does not exactly follow some famous, named distribution, but this does not mean such a distribution is doomed to be useless as a model. However, for some purposes you may prefer to just use the empirical distribution itself, rather than try fitting a standard distribution to it. The gamma distribution could be a good candidate to describe this kind of distribution over nonnegative, right-skewed data. See the green line in the image here: https://en.m.wikipedia.org/wiki/Gamma_distribution In the case at hand, since the time spent per day is bounded between $$0$$ and $$1$$ (if quantified as a fraction of the day), distributions that are unbounded above (e.g. Pareto, skew-normal, Gamma, log-normal) won't work, but Beta would.
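To make the Beta suggestion concrete, here is a minimal simulation using only the standard library. The Beta(2, 8) parameters are made-up for illustration, not fitted to any YouTube data; the point is that the samples automatically respect the [0, 1] bound while still showing the unimodal, long-right-tail shape the question describes.

```python
import random

# Illustrative sketch: model the fraction of the day spent watching as
# Beta(2, 8) -- bounded in [0, 1], unimodal, right-skewed.  The parameters
# are assumptions for demonstration only.
random.seed(0)
sample = [random.betavariate(2, 8) for _ in range(100_000)]

n = len(sample)
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / n
skew = sum((x - mean) ** 3 for x in sample) / n / var ** 1.5  # moment skewness

print(min(sample) >= 0 and max(sample) <= 1)  # True: respects the [0, 1] bound
print(skew > 0)                               # True: long right tail, positive skew
```

The theoretical skewness of Beta(2, 8) is about 0.83, so the sample skewness comes out clearly positive; a symmetric normal model would put it at 0.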
"Normal" and "Gaussian" mean exactly the same thing. As other answers explain, the distribution you're talking about is not normal/Gaussian, because that distribution assigns probabilities to every value on the real line, whereas your distribution only exists between $$0$$ and $$24$$. A hurdle model has two parts. The first is Bernoulli experiment that determines whether you use YouTube at all. If you don't, then your usage time is obviously zero and you're done. If you do, you "pass that hurdle", then the usage time comes from some other strictly positive distribution. A closely related concept are zero-inflated models. These are meant to deal with a situation where we observe a bunch of zeros, but can't distinguish between always-zeros and sometimes-zeros. For example, consider the number of cigarettes that a person smokes each day. For non-smokers, that number is always zero, but some smokers may not smoke on a given day (out of cigarettes? on a long flight?). Unlike the hurdle model, the "smoker" distribution here should include zero, but these counts are 'inflated' by the non-smokers' contribution too. If the distribution is indeed a 'subset' of the normal distribution, you should considder a truncated model. Widely used in this context is the family of TOBIT models. They essentialy suggest a pdf with a (positive) probability mass at 0 and then a 'cut of part of the normal distribution' for positive values. I will refrain from typing the formula here and rather refere you to the Wikipedia Article: https://en.wikipedia.org/wiki/Tobit_model Normal distributions are by definition non-skewed, so you can't have both things. If the distribution is left-skewed, then it cannot be Gaussian. You'll have to pick a different one! The closest thing to your request I can think of is this: https://en.wikipedia.org/wiki/Skew_normal_distribution • I agree except that the OP is confusing left and right skewness, as already pointed out. 
And @behold has already suggested the skew-normal in an answer. So, I can't see that this adds to existing answers. Apr 2 '19 at 9:48 • It summarizes many of them in a straight-forward three-line response Apr 2 '19 at 11:46 • Sorry, but that's still repetition. Apr 2 '19 at 12:52 • OK... who cares? Apr 2 '19 at 14:06 • Well, I do; and whoever added +1 to my comments (clearly not me) and whoever downvoted your answer (not me, as it happens). This thread is already long and repetitive; yet more redundant comments don't improve it for future readers. Apr 2 '19 at 14:24
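The two-part hurdle model described earlier in the thread can be sketched in a few lines of standard-library Python. All parameters here (the 0.6 "watches at all" probability and the log-normal shape) are invented for illustration; the structure, not the numbers, is the point.

```python
import random

# Hurdle model sketch: a Bernoulli "do you watch at all?" stage, then a
# strictly positive log-normal for those who cleared the hurdle.
# All parameter values are hypothetical.
random.seed(1)
P_WATCHES = 0.6        # hypothetical probability of passing the hurdle
MU, SIGMA = 0.0, 0.75  # hypothetical log-normal parameters (hours)

def draw_daily_hours():
    if random.random() >= P_WATCHES:
        return 0.0                           # did not pass the hurdle
    return random.lognormvariate(MU, SIGMA)  # strictly positive watching time

sample = [draw_daily_hours() for _ in range(50_000)]
zeros = sum(1 for x in sample if x == 0.0)
# The fraction of exact zeros should be close to 1 - P_WATCHES = 0.4,
# with a right-skewed positive part for the rest.
print(zeros / len(sample))
```

A zero-inflated variant would differ only in letting the second stage also produce zeros, so the observed zeros mix both sources.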
time limit per test 2 seconds memory limit per test 256 megabytes input standard input output standard output Omkar has received a message from Anton saying "Your story for problem A is confusing. Just make a formal statement." Because of this, Omkar gives you an array $a = [a_1, a_2, \ldots, a_n]$ of $n$ distinct integers. An array $b = [b_1, b_2, \ldots, b_k]$ is called nice if for any two distinct elements $b_i, b_j$ of $b$, $|b_i-b_j|$ appears in $b$ at least once. In addition, all elements in $b$ must be distinct. Can you add several (maybe, $0$) integers to $a$ to create a nice array $b$ of size at most $300$? If $a$ is already nice, you don't have to add any elements. For example, array $[3, 6, 9]$ is nice, as $|6-3|=|9-6| = 3$, which appears in the array, and $|9-3| = 6$, which appears in the array, while array $[4, 2, 0, 6, 9]$ is not nice, as $|9-4| = 5$ is not present in the array. For integers $x$ and $y$, $|x-y| = x-y$ if $x > y$ and $|x-y| = y-x$ otherwise. Input Each test contains multiple test cases. The first line contains $t$ ($1 \leq t \leq 50$), the number of test cases. Description of the test cases follows. The first line of each test case contains a single integer $n$ ($2 \leq n \leq 100$) — the length of the array $a$. The second line of each test case contains $n$ distinct integers $a_1, a_2, \cdots, a_n$ ($-100 \leq a_i \leq 100$) — the elements of the array $a$. Output For each test case, output one line containing YES if Omkar can create a nice array $b$ by adding elements to $a$ and NO otherwise. The case of each letter does not matter, so yEs and nO will also be accepted. If the first line is YES, output a second line containing a single integer $k$ ($n \leq k \leq 300$). Then output one line containing $k$ distinct integers $b_1, b_2, \cdots, b_k$ ($-10^9 \leq b_i \leq 10^9$), the elements of the nice array $b$. $b_1, b_2, \cdots, b_k$ can be in any order. For each $a_i$ in $a$, $a_i$ must appear at least once in $b$. 
It can be proved that if Omkar can create such an array $b$, then he can also do so in a way that satisfies the above constraints. If multiple solutions exist, you can print any. Example Input 4 3 3 0 9 2 3 4 5 -7 3 13 -2 8 4 4 8 12 6 Output yes 4 6 0 3 9 yEs 5 5 3 1 2 4 NO Yes 6 8 12 6 2 4 10 Note For the first case, you can add integers to $a$ to receive the array $b = [6, 0, 3, 9]$. Note that $|6-3| = |9-6| = |3-0| = 3$ and $3$ is in $b$, $|6-0| = |9-3| = 6$ and $6$ is in $b$, and $|9-0| = 9$ is in $b$, so $b$ is nice. For the second case, you can add integers to $a$ to receive the array $b = [5, 3, 1, 2, 4]$. We have that $|2-1| = |3-2| = |4-3| = |5-4| = 1$ is in $b$, $|3-1| = |4-2| = |5-3| = 2$ is in $b$, $|4-1| = |5-2| = 3$ is in $b$, and $|5-1| = 4$ is in $b$, so $b$ is nice. For the fourth case, you can add integers to $a$ to receive the array $b = [8, 12, 6, 2, 4, 10]$. We have that $|4-2| = |6-4| = |8-6| = |10-8| = |12-10| = 2$ is in $b$, $|6-2| = |8-4| = |10-6| = |12-8| = 4$ is in $b$, $|8-2| = |10-4| = |12-6| = 6$ is in $b$, $|10-2| = |12-4| = 8$ is in $b$, and $|12-2| = 10$ is in $b$, so $b$ is nice. It can be proven that for all other test cases it is impossible to create a nice array $b$.
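The "nice" condition can be checked mechanically. The sketch below is only a verifier for candidate arrays (it is not a solution to the problem); the examples it checks are taken directly from the statement and the sample notes.

```python
# Checker for the "nice" property: every pairwise absolute difference of
# distinct elements must itself appear in the array, and all elements of b
# must be distinct.
def is_nice(b):
    s = set(b)
    if len(s) != len(b):
        return False
    return all(abs(x - y) in s for x in b for y in b if x != y)

print(is_nice([3, 6, 9]))             # True  (example from the statement)
print(is_nice([4, 2, 0, 6, 9]))       # False: |9 - 4| = 5 is missing
print(is_nice([6, 0, 3, 9]))          # True  (sample output, test 1)
print(is_nice([5, 3, 1, 2, 4]))       # True  (sample output, test 2)
print(is_nice([8, 12, 6, 2, 4, 10]))  # True  (sample output, test 4)
```

Such a checker is handy for stress-testing a solution: generate candidate arrays $b$, confirm they contain every $a_i$, stay within the size and value limits, and pass `is_nice`.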
# Entropy optimality: Analytic psd rank and John’s theorem Recall that our goal is to sketch a proof of the following theorem, where the notation is from the last post. I will assume a knowledge of the three posts on polyhedral lifts and non-negative rank, as our argument will proceed by analogy. Theorem 1 For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there exists a constant ${C(g)}$ such that the following holds. For every ${n \geq 2m}$, $\displaystyle 1+n^{d/2} \geq \mathrm{rank}_{\mathsf{psd}}(M_n^g) \geq C \left(\frac{n}{\log n}\right)^{(d-1)/2}\,. \ \ \ \ \ (1)$ where ${d = \deg_{\mathsf{sos}}(g).}$ In this post, we will see how John’s theorem can be used to transform a psd factorization into one of a nicer analytic form. Using this, we will be able to construct a convex body that contains an approximation to every non-negative matrix of small psd rank. 1.1. Finite-dimensional operator norms Let ${H}$ denote a finite-dimensional Euclidean space over ${\mathbb R}$ equipped with inner product ${\langle \cdot,\cdot\rangle}$ and norm ${|\cdot|}$. For a linear operator ${A : H \rightarrow H}$, we define the operator, trace, and Frobenius norms by $\displaystyle \|A\| = \max_{x \neq 0} \frac{|Ax|}{|x|},\quad \|A\|_* = \mathrm{Tr}(\sqrt{A^T A}),\quad \|A\|_F = \sqrt{\mathrm{Tr}(A^T A)}\,.$ Let ${\mathcal M(H)}$ denote the set of self-adjoint linear operators on ${H}$. Note that for ${A \in \mathcal M(H)}$, the preceding three norms are precisely the ${\ell_{\infty}}$, ${\ell_1}$, and ${\ell_2}$ norms of the eigenvalues of ${A}$. For ${A,B \in \mathcal M(H)}$, we use ${A \succeq 0}$ to denote that ${A}$ is positive semi-definite and ${A \succeq B}$ for ${A-B \succeq 0}$. We use ${\mathcal D(H) \subseteq \mathcal M(H)}$ for the set of density operators: Those ${A \in \mathcal M(H)}$ with ${A \succeq 0}$ and ${\mathrm{Tr}(A)=1}$.
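As a quick numeric sanity check of the "three norms are the $\ell_\infty$, $\ell_1$, $\ell_2$ norms of the eigenvalues" remark, here is a tiny standard-library sketch with a hypothetical $2\times 2$ symmetric matrix, for which the eigenvalues are explicit.

```python
import math

# A = [[a, b], [b, c]] symmetric; eigenvalues (a+c)/2 +/- sqrt(((a-c)/2)^2 + b^2).
a, b, c = 2.0, 1.0, 2.0  # A = [[2, 1], [1, 2]], eigenvalues 3 and 1
disc = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
eigs = [(a + c) / 2 + disc, (a + c) / 2 - disc]

op_norm = max(abs(l) for l in eigs)     # l_inf of the eigenvalues
trace_norm = sum(abs(l) for l in eigs)  # l_1 of the eigenvalues
frob_norm = math.sqrt(sum(l * l for l in eigs))  # l_2 of the eigenvalues

# Frobenius norm computed directly from the entries must agree:
frob_direct = math.sqrt(a * a + 2 * b * b + c * c)

print(op_norm, trace_norm)                   # 3.0 4.0
print(abs(frob_norm - frob_direct) < 1e-12)  # True
```

For this psd matrix the trace norm is just the trace, and the Frobenius check confirms $\|A\|_F^2 = \mathrm{Tr}(A^T A) = \sum_{ij} A_{ij}^2$.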
One should recall that ${\mathrm{Tr}(A^T B)}$ is an inner product on the space of linear operators, and we have the operator analogs of the Hölder inequalities: ${\mathrm{Tr}(A^T B) \leq \|A\| \cdot \|B\|_*}$ and ${\mathrm{Tr}(A^T B) \leq \|A\|_F \|B\|_F}$. 1.2. Rescaling the psd factorization As in the case of non-negative rank, consider finite sets ${X}$ and ${Y}$ and a matrix ${M : X \times Y \rightarrow \mathbb R_+}$. For the purposes of proving a lower bound on the psd rank of some matrix, we would like to have a nice analytic description. To that end, suppose we have a rank-${r}$ psd factorization $\displaystyle M(x,y) = \mathrm{Tr}(A(x) B(y))$ where ${A : X \rightarrow \mathcal S_+^r}$ and ${B : Y \rightarrow \mathcal S_+^r}$. The following result of Briët, Dadush and Pokutta (2013) gives us a way to “scale” the factorization so that it becomes nicer analytically. (The improved bound stated here is from an article of Fawzi, Gouveia, Parrilo, Robinson, and Thomas, and we follow their proof.) Lemma 2 Every ${M}$ with ${\mathrm{rank}_{\mathsf{psd}}(M) \leq r}$ admits a factorization ${M(x,y)=\mathrm{Tr}(P(x) Q(y))}$ where ${P : X \rightarrow \mathcal S_+^r}$ and ${Q : Y \rightarrow \mathcal S_+^r}$ and, moreover, $\displaystyle \max \{ \|P(x)\| \cdot \|Q(y)\| : x \in X, y \in Y \} \leq r \|M\|_{\infty}\,,$ where ${\|M\|_{\infty} = \max_{x \in X, y \in Y} M(x,y)}$. Proof: Start with a rank-${r}$ psd factorization ${M(x,y) = \mathrm{Tr}(A(x) B(y))}$. Observe that there is a degree of freedom here, because for any invertible operator ${J}$, we get another psd factorization ${M(x,y) = \mathrm{Tr}\left(\left(J A(x) J^T\right) \cdot \left((J^{-1})^T B(y) J^{-1}\right)\right)}$. Let ${U = \{ u \in \mathbb R^r : \exists x \in X\,\, A(x) \succeq uu^T \}}$ and ${V = \{ v \in \mathbb R^r : \exists y \in Y\,\, B(y) \succeq vv^T \}}$. Set ${\Delta = \|M\|_{\infty}}$.
We may assume that ${U}$ and ${V}$ both span ${\mathbb R^r}$ (else we can obtain a lower-rank psd factorization). Both sets are bounded by finiteness of ${X}$ and ${Y}$. Let ${C=\mathrm{conv}(U)}$ and note that ${C}$ is centrally symmetric and contains the origin. Now John’s theorem tells us there exists a linear operator ${J : \mathbb R^r \rightarrow \mathbb R^r}$ such that $\displaystyle B_{\ell_2} \subseteq J C \subseteq \sqrt{r} B_{\ell_2}\,, \ \ \ \ \ (2)$ where ${B_{\ell_2}}$ denotes the unit ball in the Euclidean norm. Let us now set ${P(x) = J A(x) J^T}$ and ${Q(y) = (J^{-1})^T B(y) J^{-1}}$. Eigenvalues of ${P(x)}$: Let ${w}$ be an eigenvector of ${P(x)}$ normalized so the corresponding eigenvalue is ${\|w\|_2^2}$. Then ${P(x) \succeq w w^T}$, implying that ${J^{-1} w \in U}$ (here we use that ${A \succeq 0 \implies S A S^T \succeq 0}$ for any ${S}$). Since ${w = J(J^{-1} w)}$, (2) implies that ${\|w\|_2 \leq \sqrt{r}}$. We conclude that every eigenvalue of ${P(x)}$ is at most ${r}$. Eigenvalues of ${Q(y)}$: Let ${w}$ be an eigenvector of ${Q(y)}$ normalized so that the corresponding eigenvalue is ${\|w\|_2^2}$. Then as before, we have ${Q(y) \succeq ww^T}$ and this implies ${J^T w \in V}$. Now, on the one hand we have $\displaystyle \max_{z \in JC}\, \langle z,w\rangle \geq \|w\|_2 \ \ \ \ \ (3)$ since ${JC \supseteq B_{\ell_2}}$. On the other hand: $\displaystyle \max_{z \in JC}\, \langle z,w\rangle^2 = \max_{z \in C}\, \langle Jz, w\rangle^2 = \max_{z \in C}\, \langle z, J^T w\rangle^2\,. \ \ \ \ \ (4)$ Finally, observe that for any ${u \in U}$ and ${v \in V}$, we have $\displaystyle \langle u,v\rangle^2 =\langle uu^T, vv^T\rangle \leq \max_{x \in X, y \in Y} \langle A(x), B(y)\rangle \leq \Delta\,.$ By convexity, this implies that ${\max_{z \in C}\, \langle z,v\rangle^2 \leq \Delta}$ for all ${v \in V}$, bounding the right-hand side of (4) by ${\Delta}$. Combining this with (3) yields ${\|w\|_2^2 \leq \Delta}$. 
We conclude that all the eigenvalues of ${Q(y)}$ are at most ${\Delta}$. $\Box$ 1.3. Convex proxy for psd rank Again, in analogy with the non-negative rank setting, we can define an “analytic psd rank” parameter for matrices ${N : X \times Y \rightarrow \mathbb R_+}$: $\displaystyle \alpha_{\mathsf{psd}}(N) = \min \Big\{ \alpha \mid \exists A : X \rightarrow \mathcal S_+^k, B : Y \rightarrow \mathcal S_+^k\,,$ $\displaystyle \hphantom{xx} \mathop{\mathbb E}_{x \in X}[A(x)]=I,$ $\displaystyle \hphantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx} \|B(y)\| \leq \frac{\alpha}{k}\, \mathop{\mathbb E}_{y \in Y}[\mathrm{Tr}(B(y))] \quad \forall y \in Y$ $\displaystyle \hphantom{\qquad\qquad} \|A(x)\| \leq \alpha \quad \forall x \in X\Big\}\,.$ Note that we have implicitly equipped ${X}$ and ${Y}$ with the uniform measure. The main point here is that ${k}$ can be arbitrary. One can verify that ${\alpha_{\mathsf{psd}}}$ is convex. And there is a corresponding approximation lemma. We use ${\|N\|_{\infty}=\max_{x,y} |N(x,y)|}$ and ${\|N\|_1 = \mathop{\mathbb E}_{x,y} |N(x,y)|}$. Lemma 3 For every non-negative matrix ${M : X \times Y \rightarrow \mathbb R_+}$ and every ${\eta \in (0,1]}$, there is a matrix ${N}$ such that ${\|M-N\|_{\infty} \leq \eta \|M\|_{\infty}}$ and $\displaystyle \alpha_{\mathsf{psd}}(N) \leq O(\mathrm{rank}_{\mathsf{psd}}(M)) \frac{1}{\eta} \frac{\|M\|_{\infty}}{\|M\|_1}\,.$ Using Lemma 2 in a straightforward way, it is not particularly difficult to construct the approximator ${N}$. The condition ${\mathop{\mathbb E}_x [A(x)] = I}$ poses a slight difficulty that requires adding a small multiple of the identity to the LHS of the factorization (to avoid a poor condition number), but this has a correspondingly small effect on the approximation quality.
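The degree of freedom exploited in the proof of Lemma 2 is that conjugating the two factors by $J$ and $(J^{-1})^T$ leaves every entry of $M$ unchanged, since $\mathrm{Tr}(JAJ^T \cdot (J^{-1})^T B J^{-1}) = \mathrm{Tr}(AB)$ by cyclicity of the trace. A tiny numeric check, with arbitrary made-up $2\times 2$ matrices:

```python
# Verify Tr((J A J^T)((J^{-1})^T B J^{-1})) = Tr(A B) for a sample J, A, B.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

def trace(X):
    return X[0][0] + X[1][1]

A = [[2.0, 1.0], [1.0, 2.0]]  # psd
B = [[3.0, 0.0], [0.0, 1.0]]  # psd
J = [[1.0, 2.0], [0.0, 1.0]]  # any invertible operator

P = matmul(matmul(J, A), transpose(J))                    # J A J^T
Q = matmul(matmul(transpose(inv2(J)), B), inv2(J))        # (J^{-1})^T B J^{-1}

print(abs(trace(matmul(A, B)) - trace(matmul(P, Q))) < 1e-9)  # True
```

The proof then chooses $J$ via John's theorem so that the rescaled factors have small operator norm; the identity above guarantees nothing about $M$ is lost in the process.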
Putting “Alice” into “isotropic position” is not essential, but it makes the next part of the approach (quantum entropy optimization) somewhat simpler because one is always measuring relative entropy to the maximally mixed state. # Entropy optimality: Quantum lifts of polytopes In these previous posts, we explored whether the cut polytope can be expressed as the linear projection of a polytope with a small number of facets (i.e., whether it has a small linear programming extended formulation). For many cut problems, semi-definite programs (SDPs) are able to achieve better approximation ratios than LPs. The most famous example is the Goemans-Williamson ${0.878}$-approximation for MAX-CUT. The techniques of the previous posts (see the full paper for details) are able to show that no polynomial-size LP can achieve better than factor ${1/2}$. 1.1. Spectrahedral lifts The feasible regions of LPs are polyhedra. Up to linear isomorphism, every polyhedron ${P}$ can be represented as ${P = \mathbb R_+^n \cap V}$ where ${\mathbb R_+^n}$ is the positive orthant and ${V \subseteq \mathbb R^n}$ is an affine subspace. In this context, it makes sense to study any cones that can be optimized over efficiently. A prominent example is the positive semi-definite cone. Let us define ${\mathcal S_+^n \subseteq \mathbb R^{n^2}}$ as the set of ${n \times n}$ real, symmetric matrices with non-negative eigenvalues. A spectrahedron is the intersection ${\mathcal S_+^n \cap V}$ with an affine subspace ${V}$. The value ${n}$ is referred to as the dimension of the spectrahedron. In analogy with the ${\gamma}$ parameter we defined for polyhedral lifts, let us define ${\bar \gamma_{\mathsf{sdp}}(P)}$ for a polytope ${P}$ to be the minimal dimension of a spectrahedron that linearly projects to ${P}$. It is an exercise to show that ${\bar \gamma_{\mathsf{sdp}}(P) \leq \bar \gamma(P)}$ for every polytope ${P}$. 
In other words, spectrahedral lifts are at least as powerful as polyhedral lifts in this model. In fact, they are strictly more powerful. Certainly there are many examples of this in the setting of approximation (like the Goemans-Williamson SDP mentioned earlier), but there are also recent gaps between ${\bar \gamma}$ and ${\bar \gamma_{\mathsf{sdp}}}$ for polytopes; see the work of Fawzi, Saunderson, and Parrilo. Nevertheless, we are now capable of proving strong lower bounds on the dimension of such lifts. Let us consider the cut polytope ${\mathrm{CUT}_n}$ as in previous posts. Theorem 1 (L-Raghavendra-Steurer 2015) There is a constant ${c > 0}$ such that for every ${n \geq 1}$, one has ${\bar \gamma_{\mathsf{sdp}}(\mathrm{CUT}_n) \geq e^{c n^{2/13}}}$. Our goal in this post and the next is to explain the proof of this theorem and how quantum entropy maximization plays a key role. 1.2. PSD rank and factorizations Just as in the setting of polyhedra, there is a notion of “factorization through a cone” that characterizes the parameter ${\bar \gamma_{\mathsf{sdp}}(P)}$. Let ${M \in \mathbb R^{m \times n}_+}$ be a non-negative matrix. One defines the psd rank of ${M}$ as the quantity $\displaystyle \mathrm{rank}_{\mathsf{psd}}(M) = \min \left\{ r : M_{ij} = \mathrm{Tr}(A_i B_j) \textrm{ for some } A_1, \ldots, A_m, B_1, \ldots, B_n \in \mathcal S_+^r\right\}\,.$ The following theorem was independently proved by Fiorini, Massar, Pokutta, Tiwary, and de Wolf and Gouveia, Parrilo, and Thomas. The proof is a direct analog of Yannakakis’ proof for non-negative rank. Theorem 2 For every polytope ${P}$, it holds that ${\bar \gamma_{\mathsf{sdp}}(P) = \mathrm{rank}_{\mathsf{psd}}(M)}$ for any slack matrix ${M}$ of ${P}$.
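To see the definition of psd rank in action, here is a small numerical sketch (numpy assumed; the dimensions and random vectors are illustrative choices, not from the text). Rank-one PSD factors ${A_i = u_i u_i^T}$ and ${B_j = v_j v_j^T}$ witness that the entrywise-squared inner-product matrix has psd rank at most ${r}$:

```python
import numpy as np

rng = np.random.default_rng(0)
r, m, n = 2, 4, 5

# Arbitrary vectors; A_i = u_i u_i^T and B_j = v_j v_j^T are rank-one PSD matrices.
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
A = [np.outer(u, u) for u in U]
B = [np.outer(v, v) for v in V]

# The factorization M_ij = Tr(A_i B_j) = (u_i . v_j)^2 produces a
# non-negative matrix whose psd rank is at most r by construction.
M = np.array([[np.trace(Ai @ Bj) for Bj in B] for Ai in A])
assert np.allclose(M, (U @ V.T) ** 2)
assert (M >= 0).all()
```

Finding the minimum such ${r}$ for a given non-negative matrix is of course the hard direction; the sketch only certifies an upper bound.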
Recall the class ${\mathrm{QML}_n^+}$ of non-negative quadratic multi-linear functions that are positive on ${\{0,1\}^n}$ and the matrix ${\mathcal M_n : \mathrm{QML}_n^+ \times \{0,1\}^n \rightarrow \mathbb R_+}$ given by $\displaystyle \mathcal M_n(f,x) = f(x)\,.$ We saw previously that ${\mathcal M_n}$ is a submatrix of some slack matrix of ${\mathrm{CUT}_n}$. Thus our goal is to prove a lower bound on ${\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)}$. 1.3. Sum-of-squares certificates Just as in the setting of non-negative matrix factorization, we can think of a low psd rank factorization of ${\mathcal M_n}$ as a small set of “axioms” that can prove the non-negativity of every function in ${\mathrm{QML}_n^+}$. But now our proof system is considerably more powerful. For a subspace of functions ${\mathcal U \subseteq L^2(\{0,1\}^n)}$, let us define the cone $\displaystyle \mathsf{sos}(\mathcal U) = \mathrm{cone}\left(q^2 : q \in \mathcal U\right)\,.$ This is the cone of squares of functions in ${\mathcal U}$. We will think of ${\mathcal U}$ as a set of axioms of size ${\mathrm{dim}(\mathcal U)}$ that is able to assert non-negativity of every ${f \in \mathsf{sos}(\mathcal U)}$ by writing $\displaystyle f = \sum_{i=1}^k q_i^2$ for some ${q_1, \ldots, q_k \in \mathcal U}$. Fix a subspace ${\mathcal U}$ and let ${r = \dim(\mathcal U)}$. Fix also a basis ${q_1, \ldots, q_r : \{0,1\}^n \rightarrow \mathbb R}$ for ${\mathcal U}$. Define ${B : \{0,1\}^n \rightarrow \mathcal S_+^r}$ by setting ${B(x)_{ij} = q_i(x) q_j(x)}$. Note that ${B(x)}$ is PSD for every ${x}$ because ${B(x) = \vec q(x) \vec q(x)^T}$ where ${\vec q(x)=(q_1(x), \ldots, q_r(x))}$. We can write every ${p \in \mathcal U}$ as ${p = \sum_{i=1}^r \lambda_i q_i}$.
Defining ${A(p^2) \in \mathcal S_+^r}$ by ${A(p^2)_{ij} = \lambda_i \lambda_j}$, we see that $\displaystyle \mathrm{Tr}(A(p^2) B(x)) = \sum_{i,j} \lambda_i \lambda_j q_i(x) q_j(x) = p(x)^2\,.$ Now every ${f \in \mathsf{sos}(\mathcal U)}$ can be written as ${\sum_{i=1}^k c_i p_i^2}$ for some ${k \geq 0}$ and ${\{c_i \geq 0\}}$. Therefore if we define ${A(f) = \sum_{i=1}^k c_i A(p_i^2)}$ (which is a positive sum of PSD matrices), we arrive at the representation $\displaystyle f(x) = \mathrm{Tr}(A(f) B(x))\,.$ In conclusion, if ${\mathrm{QML}_n^+ \subseteq \mathsf{sos}(\mathcal U)}$, then ${\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n) \leq \dim(\mathsf{sos}(\mathcal U))}$. By a “purification” argument, one can also conclude ${\dim(\mathsf{sos}(\mathcal U)) \leq \mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)^2}$. 1.4. The canonical axioms And just as ${d}$-juntas were the canonical axioms for our NMF proof system, there is a similar canonical family in the SDP setting: Let ${\mathcal Q_d}$ be the subspace of all degree-${d}$ multi-linear polynomials on ${\mathbb R^n}$. We have $\displaystyle \dim(\mathcal Q_d) \leq \sum_{k=0}^d {n \choose k} \leq 1+n^d\,. \ \ \ \ \ (1)$ For a function ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, one defines $\displaystyle \deg_{\mathsf{sos}}(f) = \min \{d : f \in \mathsf{sos}(\mathcal Q_{d}) \}\,.$ (One could debate whether the definition of sum-of-squares degree should have ${d/2}$ or ${d}$. The most convincing arguments suggest that we should use membership in the cone of squares over ${\mathcal Q_{\lfloor d/2\rfloor}}$ so that the sos-degree will be at least the real-degree of the function.) On the other hand, our choice has the following nice property. Lemma 3 For every ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, we have ${\deg_{\mathsf{sos}}(f) \leq \deg_J(f)}$. Proof: If ${q}$ is a non-negative ${d}$-junta, then ${\sqrt{q}}$ is also a non-negative ${d}$-junta.
It is elementary to see that every ${d}$-junta is a polynomial of degree at most ${d}$, thus ${q}$ is the square of a polynomial of degree at most ${d}$. $\Box$ 1.5. The canonical tests As with junta-degree, there is a simple characterization of sos-degree in terms of separating functionals. Say that a functional ${\varphi : \{0,1\}^n \rightarrow \mathbb R}$ is degree-${d}$ pseudo-positive if $\displaystyle \langle \varphi, q^2 \rangle = \mathop{\mathbb E}_{x \in \{0,1\}^n} \varphi(x) q(x)^2 \geq 0$ whenever ${q : \{0,1\}^n \rightarrow \mathbb R}$ satisfies ${\deg(q) \leq d}$ (and by ${\deg}$ here, we mean degree as a multi-linear polynomial on ${\{0,1\}^n}$). Again, since ${\mathsf{sos}(\mathcal Q_d)}$ is a closed convex set, there is precisely one way to show non-membership there. The following characterization is elementary. Lemma 4 For every ${f : \{0,1\}^n \rightarrow \mathbb R_+}$, it holds that ${\deg_{\mathsf{sos}}(f) > d}$ if and only if there is a degree-${d}$ pseudo-positive functional ${\varphi : \{0,1\}^n \rightarrow \mathbb R}$ such that ${\langle \varphi,f\rangle < 0}$. 1.6. The connection to psd rank Following the analogy with non-negative rank, we have two objectives left: (i) to exhibit a function ${f \in \mathrm{QML}_n^+}$ with ${\deg_{\mathsf{sos}}(f)}$ large, and (ii) to give a connection between the sum-of-squares degree of ${f}$ and the psd rank of an associated matrix. Notice that the function ${f(x)=(1-\sum_{i=1}^n x_i)^2}$ we used for junta-degree has ${\deg_{\mathsf{sos}}(f)=1}$, making it a poor candidate. Fortunately, Grigoriev has shown that the knapsack polynomial has large sos-degree. Theorem 5 For every odd ${m \geq 1}$, the function $\displaystyle f(x) = \left(\frac{m}{2} - \sum_{i=1}^m x_i\right)^2 - \frac14$ has ${\deg_{\mathsf{sos}}(f) \geq \lceil m/2\rceil}$. Observe that this ${f}$ is non-negative over ${\{0,1\}^m}$ (because ${m}$ is odd), but it is manifestly not non-negative on ${\mathbb R^m}$.
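Theorem 5’s knapsack function is easy to sanity-check by brute force for small odd ${m}$ (a throwaway Python sketch, not part of the proof):

```python
import itertools

def knapsack(x, m):
    # f(x) = (m/2 - sum_i x_i)^2 - 1/4, non-negative on {0,1}^m for odd m
    s = sum(x)
    return (m / 2 - s) ** 2 - 0.25

for m in (3, 5, 7):
    values = [knapsack(x, m) for x in itertools.product((0, 1), repeat=m)]
    assert all(v >= 0 for v in values)   # non-negative on the cube...
    assert min(values) == 0.0            # ...with the bound attained

# On R^m the function is negative, e.g. at the all-(1/2) point:
assert knapsack((0.5,) * 3, 3) == -0.25
```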
Finally, we recall the submatrices of ${\mathcal M_n}$ defined as follows. Fix some integer ${m \geq 1}$ and a function ${g : \{0,1\}^m \rightarrow \mathbb R_+}$. Then ${M_n^g : {[n] \choose m} \times \{0,1\}^n \rightarrow \mathbb R_+}$ is given by $\displaystyle M_n^g(S,x) = g(x|_S)\,.$ In the next post, we discuss the proof of the following theorem. Theorem 6 (L-Raghavendra-Steurer 2015) For every ${m \geq 1}$ and ${g : \{0,1\}^m \rightarrow \mathbb R_+}$, there exists a constant ${C(g)}$ such that the following holds. For every ${n \geq 2m}$, $\displaystyle 1+n^{d} \geq \mathrm{rank}_{\mathsf{psd}}(M_n^g) \geq C(g) \left(\frac{n}{\log n}\right)^{(d-1)/2}\,,$ where $d=\deg_{\mathsf{sos}}(g)$. Note that the upper bound is from (1) and the non-trivial content is contained in the lower bound. As before, in conjunction with Theorem 5, this shows that $\mathrm{rank}_{\mathsf{psd}}(\mathcal M_n)$ cannot be bounded by any polynomial in $n$ and thus the same holds for $\bar \gamma_{\mathsf{sdp}}(\mathrm{CUT}_n)$.
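As a concrete sanity check of the sum-of-squares factorization from section 1.3, the sketch below (numpy assumed; the basis, the polynomial ${p}$, and ${n=2}$ are illustrative choices) builds ${B(x)=\vec q(x)\vec q(x)^T}$ and ${A(p^2)_{ij}=\lambda_i\lambda_j}$ and verifies ${\mathrm{Tr}(A(p^2)B(x))=p(x)^2}$ on the cube:

```python
import itertools
import numpy as np

# Basis 1, x1, x2 for the degree-1 multilinear polynomials on {0,1}^2.
def q_vec(x):
    return np.array([1.0, x[0], x[1]])

lam = np.array([0.5, -1.0, 2.0])   # p = 0.5 - x1 + 2*x2 (arbitrary choice)
A = np.outer(lam, lam)             # A(p^2)_{ij} = lam_i * lam_j, PSD by construction

for x in itertools.product((0, 1), repeat=2):
    Bx = np.outer(q_vec(x), q_vec(x))   # B(x) = q(x) q(x)^T, rank-one PSD
    assert np.isclose(np.trace(A @ Bx), (lam @ q_vec(x)) ** 2)
```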
How do you find the radius of two circles inside a rectangle if the area of it is 50cm squared? Oct 17, 2016 $r = 2.5\,cm$ Explanation: I think a bit more detail is required about the 2 circles. Are they equal in size and how are they placed? I am going to assume that the two circles are equal in size, are side by side, and fit exactly into the rectangle, which has an area of $50\,cm^2$. If you sketch the diagram and draw in the diameters of the circles so that they form a line through the middle of the rectangle, you will realise that the length of the rectangle is actually 4 radii, while the breadth is 2 radii (one diameter). So, we now know the length and the breadth of the rectangle in terms of $r$, which is quite nifty because that is what we need to calculate. Form an equation for the area of the rectangle. $l \times b = A$ $4r \times 2r = 50$ $8r^2 = 50$ $r^2 = \frac{50}{8}$ $r^2 = 6.25$ $r = \sqrt{6.25}\ \leftarrow$ only use the positive root $r = 2.5\,cm$
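The same computation in a couple of lines of Python (the helper name is mine):

```python
import math

def radius_from_area(area):
    # Two equal circles side by side filling the rectangle:
    # length = 4r and breadth = 2r, so area = 4r * 2r = 8 r^2.
    return math.sqrt(area / 8)

print(radius_from_area(50))  # → 2.5
```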
Tutorial TDepES¶ Temperature-DEPendence of the Electronic Structure.¶ This tutorial aims at showing how to get the following physical properties, for periodic solids: • The zero-point-motion renormalization (ZPR) of eigenenergies • The temperature-dependence of eigenenergies It should take about 1 hour. For the theory related to the temperature-dependent calculations, please read the following papers: [Ponce2015], [Ponce2014] and [Ponce2014a]. There are two ways to compute the temperature dependence with Abinit: • Using Anaddb: historically the first implementation. This option does not require Netcdf. • Using post-processing python scripts: this is the recommended approach as it provides more options and is more efficient (less disk space, less memory demanding). This option requires Netcdf (both in Abinit and python). In this tutorial, we only focus on the netCDF-based approach. Important In order to run the python script you need: • python 2.7.6 or higher, python3 is not supported • numpy 1.7.1 or higher • the netCDF4 library and the netCDF4 python bindings • scipy 0.12.0 or higher This can be done with: sudo apt-get install netcdf-bin sudo apt-get install python-dev pip install numpy pip install scipy pip install netcdf4 Abinit must be configured with: configure --with-config-file=myconf.ac where the configuration file must contain at least: with_trio_flavor="netcdf+other-options" # To link against external libs, use #with_netcdf_incs="-I${HOME}/local/include" # if (netcdf4 + hdf5): #with_netcdf_libs="-L/usr/local/lib -lnetcdff -lnetcdf -L${HOME}/local/lib -lhdf5_hl -lhdf5" # else if netcdf3: #with_netcdf_libs="-L${HOME}/local/lib/ -lnetcdff -lnetcdf" For example, with MPI you might use the following basic myconf.ac file: with_trio_flavor="netcdf" with_mpi_prefix='/home/XXXXX/local/openmpi1.10/' enable_mpi="yes" enable_mpi_io="yes" enable_openmp="no" prefix='/home/XXXX/abinit/build/' A list of configuration files for clusters is available in the abiconfig repository. If you have a
prebuilt abinit executable, use: ./abinit -b to get the list of libraries/options activated in the build. You should see netcdf in the TRIO flavor section: === Connectors / Fallbacks === Connectors on : yes Fallbacks on : yes DFT flavor : libxc-fallback FFT flavor : none LINALG flavor : netlib-fallback MATH flavor : none TIMER flavor : abinit TRIO flavor : netcdf Visualisation tools are NOT covered in this tutorial. Powerful visualisation procedures have been developed in the Abipy context, relying on matplotlib. See the README of Abipy and the Abipy tutorials. 1 Calculation of the ZPR of eigenenergies at q=Γ.¶ The reference input files for this tutorial are located in ~abinit/tests/tutorespfn/Input and the corresponding reference output files are in ~abinit/tests/tutorespfn/Refs. The prefix for files is tdepes. As usual, we use the shorthand ~abinit to indicate the root directory where the abinit package has been deployed, but most often we consider paths relative to this directory. First, examine the tests/tutorespfn/Input/tdepes_1.in input file. Note that there are three datasets (ndtset=3). The first dataset corresponds to a standard self-consistent calculation, with an unshifted eight k-point grid, producing e.g. the ground-state eigenvalue file tdepes_1o_DS1_EIG.nc, as well as the density file tdepes_1o_DS1_DEN. The latter is read (getden2=1) to initiate the second dataset calculation, which is a non-self-consistent run, specifically at the Gamma point only (there is no real recomputation with respect to dataset 1; it only extracts a subset of the eight k-point grid). This second dataset produces the wavefunction file tdepes_1o_DS2_WFQ, that is read by the third dataset (getwfq3=2), as well as the tdepes_1o_DS1_WFK file from the first dataset (getwfk3=1). The third dataset corresponds to a DFPT phonon calculation (rfphon3=1) with displacement of all atoms (rfatpol3= 1 2) in all directions (rfdir3= 1 1 1).
This induces the creation of the Derivative DataBase file tdepes_1o_DS3_DDB. The electron-phonon matrix elements are produced because of ieig2rf3=5 , this option generating the needed netCDF files tdepes_1o_DS3_EIGR2D.nc and tdepes_1o_DS3_GKK.nc . We will use tests/tutorespfn/Input/tdepes_1.files, with minor modifications -see below-, to execute abinit. In order to run abinit, we suggest that you create a working directory, why not call it Work, as subdirectory of ~abinit/tests/tutorespfn/Input, then copy/modify the relevant files. Explicitly: cd ~abinit/tests/tutorespfn/Input mkdir Work cd Work cp ../tdepes*in ../tdepes*files . Then, edit the tests/tutorespfn/Input/tdepes_1.files to modify location of the pseudopotential file (from the Work subdirectory, the location is ../../../Psps_for_tests/6c.pspnc, although you might as well copy the file 6c.pspnc in the Work directory, in which case the location is simply 6c.pspnc). Finally, issue abinit < tdepes_1.files > tdepes_1.stdout (where abinit might have to be replaced by the proper location of the abinit executable, or by ./abinit if you have copied abinit in the Work directory). If you have compiled the code with Netcdf, the calculation will produce several _EIG.nc, _DDB, EIGR2D.nc and EIGI2D.nc files, that contain respectively the eigenvalues (GS or perturbed), the second-order derivative of the total energy with respect to two atomic displacements, the electron-phonon matrix elements used to compute the renormalization of the eigenenergies and the electron-phonon matrix elements used to compute the lifetime of the electronic states. You can now copy three post-processing python files from ~abinit/scripts/post_processing/temperature-dependence . Make sure you are in the directory containing the output files produced by the code and issue: cp ~abinit/scripts/post_processing/temperature-dependence/temperature_final.py . cp ~abinit/scripts/post_processing/temperature-dependence/rf_final.py . 
cp ~abinit/scripts/post_processing/plot_bs.py . in which ~abinit has been replaced by the proper path. You can then simply run the python script with the following command: ./temperature_final.py and enter the information asked by the script, typically the following (data contained in ~abinit/tests/tutorespfn/Input/tdepes_1_temperature.in): 1 # Number of cpus 2 # Static ZPR computed in the Allen-Heine-Cardona theory temperature_1 # Prefix for output files 0.1 # Value of the smearing parameter for AHC (in eV) 0.1 # Gaussian broadening for the Eliashberg function and PDOS (in eV) 0 0.5 # Energy range for the PDOS and Eliashberg calculations (in eV) 0 1000 50 # min, max temperature and temperature step 1 # Number of Q-points we have (here we only computed $\Gamma$) tdepes_1o_DS3_DDB # Name of the response-function (RF) DDB file tdepes_1o_DS2_EIG.nc # Eigenvalues at $\mathbf{k+q}$ tdepes_1o_DS3_EIGR2D.nc # Second-order electron-phonon matrix element tdepes_1o_DS3_GKK.nc # Name of the 0 GKK file tdepes_1o_DS1_EIG.nc # Name of the unperturbed EIG.nc file with Eigenvalues at $k$ Alternatively, copy this example file in the Work directory if not yet done, and then run ./temperature_final.py < tdepes_1_temperature.in Warning Remember to use py2.7 and install the libraries required by the script before running. For pip, use: pip install netcdf4 or: conda install netcdf4 if you are using conda You should see on the screen an output similar to: Start on 15/3/2018 at 13h29 [ASCII-art “temperature” banner] Version 1.3 This script computes the static/dynamic zero-point motion and the temperature dependence of eigenenergies due to electron-phonon interaction. The electronic lifetime can also be computed.
WARNING: The first Q-point MUST be the Gamma point. Enter the number of cpu on which you want to multi-thread Define the type of calculation you want to perform. Type: 1 if you want to run a non-adiabatic AHC calculation 2 if you want to run a static AHC calculation 3 if you want to run a static AHC calculation without control on active space (not recommended !) Note that for 1 & 2 you need _EIGR2D.nc and _GKK.nc files obtained through ABINIT option "ieig2rf 5" Enter name of the output file Enter value of the smearing parameter for AHC (in eV) Enter value of the Gaussian broadening for the Eliashberg function and PDOS (in eV) Enter the energy range for the PDOS and Eliashberg calculations (in eV): [e.g. 0 0.5] Introduce the min temperature, the max temperature and the temperature steps. e.g. 0 200 50 for (0,50,100,150) Enter the number of Q-points you have Enter the name of the 0 DDB file Enter the name of the 0 eigq file Enter the name of the 0 EIGR2D file Enter the name of the 0 GKK file Enter the name of the unperturbed EIG.nc file at Gamma Q-point: 0 with wtq = 1.0 and reduced coord. [ 0. 0. 0.] Now compute active space ... Now compute generalized g2F Eliashberg electron-phonon spectral function ... End on 15/3/2018 at 13 h 29 Runtime: 0 seconds (or 0.0 minutes) The python code has generated the following files: temperature_1.txt This text file contains the zero-point motion renormalization (ZPR) at each k-point for each band. It also contains the evolution of each band with temperature at k=$\Gamma$. At the end of the file, the Fan/DDW contribution is also reported. temperature_1_EP.nc This netcdf file contains a number for each k-point, for each band and each temperature. The real part of this number is the ZPR correction and the imaginary part is the lifetime. We can for example visualize the temperature dependence at k=$\Gamma$ of the HOMO bands (Band: 3 section in the temperature_1.txt file, that you can examine) with the contribution of only q=$\Gamma$.
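Before inspecting the output, the qualitative trend to expect can be sketched with a toy one-phonon-mode model (everything below, including the 0.16 eV mode energy and the 0.05 eV coupling, is an illustrative choice and not data from the run): a static AHC-like correction scales as $2n_B(\omega,T)+1$, so it starts from the zero-point value at $T=0$ and grows with temperature.

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def bose_einstein(omega_ev, t_kelvin):
    """Phonon occupation n_B; returns 0 at T = 0."""
    if t_kelvin == 0:
        return 0.0
    return 1.0 / math.expm1(omega_ev / (KB_EV * t_kelvin))

def toy_correction(g2_ev, omega_ev, t_kelvin):
    # Schematic static-AHC-like term: coupling * (2 n_B + 1).
    # At T = 0 this reduces to a zero-point (ZPR-like) contribution g2.
    return g2_ev * (2.0 * bose_einstein(omega_ev, t_kelvin) + 1.0)

corrections = [toy_correction(0.05, 0.16, t) for t in range(0, 1001, 50)]
assert corrections[0] == 0.05                                    # zero-point value
assert all(a < b for a, b in zip(corrections, corrections[1:]))  # grows with T
```

The actual script of course sums such terms over all phonon modes and q-points, with the active-space treatment mentioned in its output.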
The HOMO eigenenergies correction goes up with temperature… You can also plot the LUMO corrections and see that they go down. The ZPR correction as well as its temperature dependence usually closes the gap of semiconductors. As usual, checking whether the input parameters give converged values is of course important. The run used ecut=10. With the severely underestimated ecut=5, the HOMO correction goes down with temperature. 2 Converging the calculation with respect to the grid of phonon wavevectors¶ Convergence studies with respect to most of the parameters will rely on obvious modifications of the input file detailed in the previous section. However, using more than one q-point phonon wavevector needs a non-trivial generalisation of this procedure. This is because each q-point needs to be treated in a different dataset in the current version of ABINIT. The netCDF version can perform the q-wavevector integration either with random q-points or homogeneous Monkhorst-Pack meshes. Both grids have been used in Ref. [Ponce2014], see e.g. Fig. 3 of this paper. For the random integration method you should create a script that generates random q-points, perform the Abinit calculations at these points, gather the results and analyze them. The temperature_final.py script will detect that you used random integration thanks to the weight of the q-point stored in the _EIGR2D.nc file and perform the integration accordingly. The random integration converges slowly but in a consistent manner. However, since this method is a little bit less user-friendly than the one based on homogeneous grids, we will focus on this homogeneous integration. Even if simpler, it is not trivial. In this case, the user must specify in the ABINIT input file the homogeneous q-point grid, using input variables like ngqpt, qptopt, shiftq, nshiftq, …, i.e. variables whose names are similar to those used to specify the k-point grid (for electrons). There are several difficulties here.
First, since we focus on the k=$\Gamma$ point, we expect to be able to use symmetries to decrease the computational load, as $\Gamma$ is invariant under all symmetry operations of the crystal. The symmetry operations of the crystal will be used to decrease the number of q-wavevectors, but they cannot be used as well to decrease the k-point grid during the corresponding self-consistent phonon computation. How can this different behaviour of k-grids and q-grids be handled by ABINIT? By convention, in such case, with nsym=1 the k-point grid will be generated in the Full Brillouin zone, without use of symmetries, while the q-point grid with qptopt=1 will be generated in the irreducible Brillouin Zone, despite nsym=1. In order to generate q-point grids that are not folded in the irreducible Brillouin Zone, one needs to use another value of qptopt. In particular qptopt=3 has to be used to generate q points in the full Brillouin zone. Second, the number of ABINIT datasets is expected to be given in the input file, by the user, but not determined on-the-fly by ABINIT. Still, this number of datasets is determined by the number of q points. Thus, the user will have to compute it before being able to launch the real q-point calculations, since it determines ndtset. How to determine the number of irreducible q points? Well, the easiest procedure is to compute it for an equivalent k-point grid, by a quick run. An example will clarify this. Suppose that one is looking for the number of q-points corresponding to ngqpt 4 4 4 qptopt 1 nshiftq 1 shiftq 0.0 0.0 0.0 One makes a quick ABINIT run with tests/tutorespfn/Input/tdepes_2.in.
Note that several input variables have been changed with respect to tests/tutorespfn/Input/tdepes_1.in: ndtset 1 nstep 0 prtebands 0 ngkpt 4 4 4 nshiftk 1 shiftk 0.0 0.0 0.0 nsym 0 In this example, the new values of ndtset and nstep, and the definition of prtebands allow a fast run (nline==0 might be specified as well, or even, the run might be interrupted after a few seconds, since the number of k points is very quickly available). Then, the k-point grid is specified thanks to ngkpt, nshiftk, shiftk, replacing the corresponding input variables for the q-point grid. The use of symmetries has been reenabled thanks to nsym=0. After possibly modifying tests/tutorespfn/Input/tdepes_2.files to account for the location of the pseudopotential file, as above, issue: abinit < tdepes_2.files > tdepes_2.stdout Now, the number of points can be seen in the output file : nkpt 8 the list of these eight k-points being given in kpt 0.00000000E+00 0.00000000E+00 0.00000000E+00 2.50000000E-01 0.00000000E+00 0.00000000E+00 5.00000000E-01 0.00000000E+00 0.00000000E+00 2.50000000E-01 2.50000000E-01 0.00000000E+00 5.00000000E-01 2.50000000E-01 0.00000000E+00 -2.50000000E-01 2.50000000E-01 0.00000000E+00 5.00000000E-01 5.00000000E-01 0.00000000E+00 -2.50000000E-01 5.00000000E-01 2.50000000E-01 We are now ready to launch the determination of the _EIG.nc, _DDB, EIGR2D.nc and EIGI2D.nc files, with 8 q-points. As for the $\Gamma$ calculation of the previous section, we will rely on three datasets for each q-point. This permits a well-structured set of calculations, although there is some redundancy. Indeed, the first of these datasets will correspond to an unperturbed ground-state calculation identical for all q. It is done very quickly because the converged wavefunctions are already available. 
The second dataset will correspond to a non-self-consistent ground-state calculation at k+q (it is also quick thanks to previously available wavefunctions), and the third dataset will correspond to the DFPT calculations at k+q (this is the CPU intensive part). So, compared to the first run in this tutorial, we have to replace ndtset 3 by ndtset 24 udtset 8 3 in the input file tests/tutorespfn/Input/tdepes_3.in, and adjust accordingly all input variables that are dataset-dependent. Please refer to the explanation of the usage of a double-loop of datasets if you are confused about the meaning of udtset, and the usage of the corresponding metacharacters. We have indeed also introduced iqpt:? 1 iqpt+? 1 that translates into iqpt11 1 iqpt12 1 iqpt13 1 iqpt21 2 iqpt22 2 iqpt23 2 iqpt31 3 ... allowing us to perform calculations for three datasets at each q-point. After possibly modifying tests/tutorespfn/Input/tdepes_3.files to account for the location of the pseudopotential file, as above, issue: abinit < tdepes_3.files > tdepes_3.stdout This is a significantly longer ABINIT run (still less than one minute), also producing many files. When the run is finished, copy the file tests/tutorespfn/Input/tdepes_3_temperature.in in the working directory (if not yet done) and launch the python script with: ./temperature_final.py < tdepes_3_temperature.in Examination of the same HOMO band at k=$\Gamma$ for a 4x4x4 q-point grid gives a very different result than previously. Indeed, for the ZPR, one finds (in eV) Band: 3 0.0 0.154686616316 Band: 3 0.0 0.0464876236664 that is, the ZPR is now about three times larger, and similarly for the temperature dependence. As a matter of fact, diamond requires an extremely dense q-point grid (40x40x40) to be converged. On the bright side, each q-point calculation is independent and thus the parallel scaling is ideal. Running separate jobs for different q-points is quite easy thanks to the dtset approach.
3 Calculation of the eigenenergy corrections along high-symmetry lines¶ The calculation of the electronic eigenvalue correction due to electron-phonon coupling along high-symmetry lines requires the use of 6 datasets per q-point. Moreover, the choice of an arbitrary k-wavevector breaks all symmetries of the crystal. Different datasets are required to compute the following quantities: $\Psi^{(0)}_{kHom}$ The ground-state wavefunctions on the Homogeneous k-point sampling. $\Psi^{(0)}_{kBS}$ The ground-state wavefunctions computed along the bandstructure k-point sampling. $\Psi^{(0)}_{kHom+q}$ The ground-state wavefunctions on the shifted Homogeneous k+q-point sampling. $n^{(1)}$ The perturbed density integrated over the homogeneous k+q grid. $\Psi^{(0)}_{kBS+q}$ The ground-state wavefunctions obtained from reading the perturbed density of the previous dataset. Reading the previous quantity we obtain the el-ph matrix elements along the BS with all physical quantities integrated over a homogeneous grid. We will use the tests/tutorespfn/Input/tdepes_4.in input file. Note the use of the usual input variables to define a path in the Brillouin Zone to build an electronic band structure: kptbounds, kptopt, and ndivsm. Note also that we have defined qptopt=3. The number of q-points is thus very easy to determine, as being the product of ngqpt values times nshiftq. Here a very rough 2*2*2 grid has been chosen, even less dense than the one for section 2. After possibly modifying tests/tutorespfn/Input/tdepes_4.files to account for the location of the pseudopotential file, as above, issue: abinit < tdepes_4.files > tdepes_4.stdout This is a significantly longer ABINIT run (2-3 minutes), also producing many files. Then use tests/tutorespfn/Input/tdepes_4_temperature.in for the python script.
with the usual syntax: ./temperature_final.py < tdepes_4_temperature.in You can now copy the plotting script (Plot-EP-BS) python file from ~abinit/scripts/post_processing/plot_bs.py into the directory where you did all the calculations. Now run the script: ./plot_bs.py with the following input data: temperature_4_EP.nc L \Gamma X W K L W X K \Gamma -20 30 0 or more directly ./plot_bs.py < tdepes_4_plot_bs.in This should give the following bandstructure where the solid black lines are the traditional electronic bandstructure, the dashed lines are the electronic eigenenergies with electron-phonon renormalization at a defined temperature (here 0K). Finally the area around the dashed line is the lifetime of the electronic eigenstates. Notice all the spikes in the electron-phonon case. This is because we did a completely under-converged calculation with respect to the q-point sampling. It is possible to converge the calculations using ecut=30 Ha, a ngkpt grid of 6x6x6 and an increasing ngqpt grid to get converged results: | Convergence study ZPR and inverse lifetime(1/τ) [eV] at 0K | | q-grid | Nb qpt | Γ25' | Γ15 | Min Γ-X | | | in IBZ | ZPR | 1/τ | ZPR | 1/τ | ZPR | 1/τ | | 4x4x4 | 8 | 0.1175 | 0.0701 | -0.3178 | 0.1916 | -0.1570 | 0.0250 | | 10x10x10 | 47 | 0.1390 | 0.0580 | -0.3288 | 0.1847 | -0.1605 | 0.0308 | | 20x20x20 | 256 | 0.1446 | 0.0574 | -0.2691 | 0.1823 | -0.1592 | 0.0298 | | 26x26x26 | 511 | 0.1448 | 0.0573 | -0.2736 | 0.1823 | -0.1592 | 0.0297 | | 34x34x34 | 1059 | 0.1446 | 0.0573 | -0.2699 | 0.1821 | -0.1591 | 0.0297 | | 43x43x43 | 2024 | 0.1447 | 0.0572 | -0.2650 | 0.1821 | -0.1592 | 0.0297 | As you can see the limiting factor for the convergence study is the convergence of the LUMO band at $\Gamma$. This band is not the lowest in energy (the lowest is on the line between $\Gamma$ and X) and therefore this band is rather unstable. 
This can also be seen by the fact that it has a large electronic broadening, meaning that this state will decay quickly into another state. Using the relatively dense q-grid of 43x43x43 we can obtain the following converged bandstructure, at a high temperature (1900K): Here we show the renormalization at a very high temperature of 1900K in order to highlight more the broadening and renormalization that occurs. If you want accurate values of the ZPR at 0K you can look at the table above. Important If you use an extremely fine q-point grid, the acoustic phonon frequencies for q-points close to $\Gamma$ will be wrongly determined by Abinit. Indeed, in order to have correct phonon frequencies close to $\Gamma$, one has to impose the acoustic sum rule with anaddb and asr@anaddb. However, this feature is not available in the python script. Instead, the script rejects the contribution of the acoustic phonons close to $\Gamma$ if their phonon frequency is lower than 1E-6 Ha. Otherwise one gets unphysically large contributions. One can tune this parameter by editing the variable “tol6 = 1E-6” in the beginning of the script. For example, for the last 43x43x43 calculation, it was set to 1E-4. Important It is possible to speed up the convergence with respect to increasing q-point density by noticing that the renormalization behaves analytically with increasing q-point grid and smaller broadening. It is therefore possible to extrapolate the results. Different analytical behaviors exist depending on whether the material is polar and whether the state we are considering is a band extremum or not. More information can be found in Ref. [Ponce2015]
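The rejection of near-zero acoustic frequencies described in the first Important box is just a frequency filter; the sketch below (plain Python, with illustrative frequencies rather than output of an actual run) mimics the role of the tol6 variable:

```python
TOL_HA = 1e-6  # plays the role of the script's "tol6" variable; tune as needed

def keep_mode(omega_ha, tol=TOL_HA):
    # Reject near-zero acoustic frequencies near Gamma to avoid
    # unphysically large 1/omega contributions.
    return omega_ha > tol

freqs = [0.0, 5e-7, 2e-6, 1.2e-3]   # illustrative phonon frequencies in Ha
kept = [w for w in freqs if keep_mode(w)]
assert kept == [2e-6, 1.2e-3]
```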
# McKay correspondence over non algebraically closed fields

Type: Published Article
Publication Date: Jul 01, 2013
Submission Date: Jan 23, 2006
Identifiers: arXiv ID: math/0601550
Source: arXiv

The classical McKay correspondence for finite subgroups $G$ of $\mathrm{SL}(2,\mathbb{C})$ gives a bijection between isomorphism classes of nontrivial irreducible representations of $G$ and irreducible components of the exceptional divisor in the minimal resolution of the quotient singularity $\mathbb{A}^2_{\mathbb{C}}/G$. Over non algebraically closed fields $K$ there may exist representations irreducible over $K$ which split over $\bar{K}$. The same is true for irreducible components of the exceptional divisor. In this paper we show that these two phenomena are related and that there is a bijection between nontrivial irreducible representations and irreducible components of the exceptional divisor over non algebraically closed fields $K$ of characteristic 0 as well.
# Interpreting the Fourier transform of a Gibbs measure

Recall that a Gibbs measure gives a probability distribution on states $x$ of the form

$$p_X(x) = \frac{1}{Z(\beta)}\exp(-\beta E(x))$$

As I understand it, the function $E$ is interpreted as the energy of the state. I'm wondering if there is a physical interpretation or significance to the characteristic function of $p$:

$$\phi_X(k) = \mathbb{E}[\exp(ik^tx)]$$

Obviously, if one has $\phi_X$, one can compute moments and other interesting things about the ensemble, but I'm wondering (as a math guy) if $\phi_X$ has a physical significance.

To provide a little more context for my particular problem, I'm really thinking of $x$ as a digital image, and hence $p_X(x)$ is a probability distribution on possible images (see e.g. texture synthesis). In many applications, it's more convenient to derive/work with $\phi_X(k)$ instead of $p_X(x)$, and I'm trying to think of a way to do MCMC using $\phi_X$ instead of $p_X$. A physical intuition for $\phi_X$ might help, if there is one.

• With an appropriate representation of the state space, the characteristic function can have a number of useful physical interpretations. (As written, the physical significance is 'profound but extremely vague', because you haven't given coordinates to the states.) – TotallyRhombus Apr 11 '16 at 16:57
• @fs137 Interesting! I'd be happy to see an example, if you happen to have one (or just a reference is fine). – icurays1 Apr 11 '16 at 17:01
• One example that I had in mind involves applications of the Hubbard–Stratonovich transformation: en.wikipedia.org/wiki/…. Let me know if this looks relevant to your initial question. – TotallyRhombus Apr 11 '16 at 17:25
• @fs137 It might, but I'm a bit unclear on where the Fourier transform arises. It seems like that transformation (which is just completing the square?) is just used to evaluate partition functions, but maybe I'm missing something.
In my application I'm thinking of $x$ as a "classical" state (in fact, for me, $x$ is really just a digital image...) – icurays1 Apr 11 '16 at 18:03 • @fs137 I edited the question to add a little more context. – icurays1 Apr 11 '16 at 18:08
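As a numerical aside (my own sketch, not part of the thread above): for a one-dimensional Gibbs measure with $E(x)=x^2/2$ and $\beta=1$, $p_X$ is a standard normal and $\phi_X(k)=e^{-k^2/2}$ in closed form, which a Monte Carlo estimate of $\mathbb{E}[\exp(ikx)]$ reproduces:

```python
import math
import random

# Monte Carlo estimate of the characteristic function phi_X(k) = E[exp(i*k*x)]
# for a 1-D Gibbs measure with E(x) = x^2/2. For this E the Gibbs measure is
# N(0, 1/beta), so we can sample it directly and average cos(kx) + i*sin(kx).

random.seed(0)

def phi_mc(k, n=200_000, beta=1.0):
    sigma = 1.0 / math.sqrt(beta)   # Gibbs measure for E(x) = x^2/2 is Gaussian
    re = im = 0.0
    for _ in range(n):
        x = random.gauss(0.0, sigma)
        re += math.cos(k * x)
        im += math.sin(k * x)
    return complex(re / n, im / n)

est = phi_mc(1.0)
exact = math.exp(-0.5)              # phi_X(1) = exp(-1/2) for the standard normal
```

The imaginary part vanishes here because the measure is symmetric; for image models the state is high-dimensional, but the same averaging idea applies componentwise in $k$.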
# Aerodynamics Questions and Answers – Flow Compressible

This set of Aerodynamics Multiple Choice Questions & Answers (MCQs) focuses on "Flow Compressible".

1. The definition of flow being compressible is______
a) M > 0.3
b) M > 0.5
c) Depends on precision required
d) Another name for supersonic flows
Explanation: For subsonic flows, whether to treat a flow as compressible is a matter of the accuracy required. Supersonic flows are always treated as compressible. In general, M > 0.3 can be regarded as compressible flow.

2. Incompressible flow is actually a myth.
a) True
b) False
Explanation: Strictly speaking, all flows are compressible, i.e. incompressible flow is actually a myth. But for all practical applications, flow with Mach number < 0.3 can be assumed incompressible, since the density variation is less than 5%.

3. The Prandtl relation for normal shock waves is_______
a) a2=u1u2
b) a*2=u1u2
c) a*a0=u1u2
d) a$$_0^2$$=u1u2
Explanation: The Prandtl relation for normal shock waves is derived from the combined form of the mass and momentum equations together with alternate forms of the energy equation. The final equation comes out as a*2=u1u2, where u1 and u2 are the velocities before and after the normal shock.

4. The Prandtl relation can also be expressed in terms of the characteristic Mach number as 1=M$$_1^*$$M$$_2^*$$.
a) True
b) False
Explanation: The Prandtl relation is given as a*2=u1u2, while the characteristic Mach number is given as M*=$$\frac {u}{a^{*}}$$. Substituting this into the Prandtl relation gives 1=M$$_1^*$$M$$_2^*$$, where M$$_1^*$$ and M$$_2^*$$ are the characteristic Mach numbers upstream and downstream of the normal shock.

5. For a particular gas, the Mach number behind the shock wave is a function of which parameters ahead of the shock wave? Choose the correct option.
a) Mach number, pressure
b) Mach number only
c) Mach number, temperature
d) Mach number, temperature, pressure
Explanation: The remarkable result for the normal shock wave is that for a given gas (given gamma), the Mach number behind the normal shock wave is a function of the Mach number ahead of it only, irrespective of pressure, density, temperature etc. The values are tabulated in gas tables for reference.

6. Which of the following is incorrect for a normal shock wave?
a) M1=1 then M2=1
b) M1 > 1 then M2 > 1
c) M$$_1^*$$=1→M$$_2^*$$=1
d) M1→∞ then M2=finite value
Explanation: For a normal shock wave, the upstream and downstream Mach numbers are related irrespective of the other parameters for a particular flow. According to the relation, if M1=1 then M2=1, and if M1 > 1 then M2 < 1, since the normal shock wave compresses the flow. Also, by the Prandtl relation, when M$$_1^*$$=1→M$$_2^*$$=1, and when M1→∞ then M2 takes a finite value.

7. Select the correct statement for a Mach wave.
a) M > 1
b) $$\frac {P_2}{P_1}$$=0.528
c) $$\frac {T_2}{T_1}$$=1
d) $$\frac {\rho_2}{\rho_1}$$=∞
Explanation: A Mach wave is a normal shock wave of diminishing strength. It occurs for M=1 upstream; then downstream M=1 also, and all the ratios are equal to 1, i.e. $$\frac {P_2}{P_1}$$=1, $$\frac {T_2}{T_1}$$=1, $$\frac {\rho_2}{\rho_1}$$=1. This can be verified by putting M=1 in the relations. The properties across a Mach wave do not change.

8. Shock waves can occur both in subsonic and supersonic media, since the governing equations do not depend on whether M > 1 or M < 1.
a) False
b) True
Explanation: The equations of mass, momentum and energy are valid in both supersonic and subsonic media, and on that basis alone a shock wave could occur in either. But thermodynamics rules out shock waves in a subsonic medium: the entropy change would be negative if the upstream medium were subsonic, violating the second law, and hence no shock waves occur in a subsonic medium.

9.
Select the false statement for the normal shock wave.
a) Entropy increases across normal shock wave
b) Normal shock wave has velocity and pressure gradients
c) Shock wave is pretty thick
d) Thermal and frictional dissipation there
Explanation: Shock waves are a very thin region with an entropy increase across them. The normal shock wave has large velocity and temperature gradients across it, hence the irreversibility; thermal conduction and frictional effects lead to the increase in entropy.

10. Specifying the temperature ratio across the normal shock wave will yield which of the following across the normal shock wave?
a) Upstream flow velocity
b) Upstream Mach number
c) Downstream temperature
d) Downstream flow velocity
Explanation: Specifying a dimensionless ratio across the normal shock wave fixes all the other ratios and both the upstream and downstream Mach numbers. But to find a velocity, a temperature also needs to be specified, and vice versa.

11. A normal shock wave can be specified with a single velocity.
a) True
b) False
Explanation: When a single velocity is given, either upstream or downstream, a host of possible temperatures gives a set of different Mach numbers, and hence different normal shock waves. But if a temperature is given along with the velocity, both either upstream or downstream, it specifies the normal shock wave.

12. The total temperature across the normal shock wave is constant. Which statement gives the wrong reason for this?
a) Adiabatic flow across normal shock wave
b) Normal shock wave has an isentropic flow
c) Calorifically perfect gas
d) Thermal dissipation there
Explanation: The flow across the normal shock wave is adiabatic (though not isentropic, since entropy rises across the shock). Since we are concerned with calorically perfect gases in an adiabatic, inviscid, steady flow, the total temperature remains constant. Thermal dissipation is not a reason for the total temperature being constant.

13.
Select the false statement for the normal shock wave. a) Entropy increases across normal shock wave b) The total pressure remains constant c) The total temperature remains constant d) Normal shock wave is a compression wave
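The Prandtl relation from questions 3 and 4 can be checked numerically (a sketch assuming a calorically perfect gas with γ = 1.4 and the standard normal-shock relation for the downstream Mach number):

```python
import math

# Numerical check of the Prandtl relation a*^2 = u1*u2, equivalently
# M1* x M2* = 1, using the standard normal-shock relation for M2 and the
# characteristic Mach number M*^2 = (gamma+1) M^2 / (2 + (gamma-1) M^2).

GAMMA = 1.4

def mach_behind_shock(m1):
    """Downstream Mach number across a normal shock (calorically perfect gas)."""
    num = 1.0 + 0.5 * (GAMMA - 1.0) * m1 * m1
    den = GAMMA * m1 * m1 - 0.5 * (GAMMA - 1.0)
    return math.sqrt(num / den)

def char_mach(m):
    """Characteristic Mach number M* = u / a*."""
    return math.sqrt((GAMMA + 1.0) * m * m / (2.0 + (GAMMA - 1.0) * m * m))

m1 = 2.0
m2 = mach_behind_shock(m1)              # subsonic, consistent with question 6
product = char_mach(m1) * char_mach(m2)  # should equal 1 by the Prandtl relation
```

For M1 = 2 this gives M2 ≈ 0.577 and M1*·M2* = 1 to machine precision, illustrating both the compression of the flow (question 6) and the Prandtl relation itself.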
923 views

### Guidelines for context edits and rewrites

Suppose you come across a question that has been closed for lack of context, has high quality answers, and is on its way to being deleted. Can you save the post by editing it to include more context? ...

513 views

### Improving other's questions via adding solution attempts in the question [duplicate]

This is a bit of a puzzler for me and I'd like to try to work out what our community norms and guidelines are around this topic. Suppose user A posts a question which is a bit light on context. User B ...

125 views

### About edits of low quality questions

Lately I've noticed that lots of low quality questions with little to no effort from the OP get immediately edited by high rep users, making them more acceptable. A common case of this is: a new user ...

92 views

### Can I rewrite somebody's old question?

Before I ask a question, I search for it in the site's history. But sometimes I find a question which is an unreasonable rewriting of the content of the task, very often without the asker's own contribution. Such ...

847 views

### Historical lock for preserving old, low quality questions

The question Compute $\lim\limits_{n \to \infty }\sin \sin \dots\sin n$ was recently closed (then reopened and historical-locked). I agree that, by modern standards, the question is not a good one. ...

710 views

### Do questions have to be "good questions" to not be put on-hold/closed?

I have seen many questions that have been put on-hold or closed with the message: "This question is missing context or other details: Please improve the question by providing additional context, ...

750 views

### Best way to "revive" a question (which does not abide by the Math.SE rules) [duplicate]

There exists a question which looks interesting to me, given that I scribbled some attempts to solve it and, in spite of the fact that it "seems" like an easy application of Rolle's theorem, couldn't ...
462 views

### Reopening closed questions with good answers

I often come by questions that are closed because of lack of context, but that still have good answers. It can be questions that contain nothing more than What is this integral $\int \ldots dx$? ...

587 views

### Is it good practice to analyse past questions by today's standards?

It seems MSE moves in "eras". There are a lot of famous and insanely upvoted questions from the past which, by today's standards, would be regarded as dull and even really bad. But it seems a consensus ...

116 views

### If I'd like to see a "do my homework" question answered, is it OK if I fake the OP's own attempt?

That's it, more or less. I came across a question consisting solely of the task, yet found it rather interesting. So I gave it a try, but failed. I am thinking about adding this attempt to the ...

478 views

### Regarding the question on "Conics passing through integer lattice points".

I am making this post in regards to the ongoing delete/undelete skirmish (let's at least change the monotonicity of the use of "war"). The old version of the question is here, the current version (...
# Watercolors

Do Sol de Ipanema É mais que um poema (Garota de Ipanema, João Gilberto)

Sometimes I think about the reasons why I spend so much time doing experiments and writing up my discoveries in a blog. Although the main reason to start this blog was some kind of vanity, today I am quite clear about why I still keep writing it: to keep my mind tuned. I really enjoy looking for ideas, learning new algorithms, figuring out the way to translate them into code and trying to discover new territories by going a step further. I cannot imagine my life without coding. Many good times in the last years have been spent in front of my laptop, listening to music and drinking a beer. In these strange times, confined at home, coding has become something more important: it keeps me away from the sad news and moves my mind to places where everything is quiet, friendly and perfect. Blogging is my therapy, my mindfulness.

This post is inspired by this post from Softology, an amazing blog I recommend you read. In it, you can find a description of the stepping stone cellular automaton as well as an appealing collection of images generated using this technique. I modified the original algorithm described in the post to create images like these, which remind me of a watercolor painting:

I begin with a 400 x 400 null matrix. After that, I choose a number of random pixels that will act as centers of circles. Around them I replace the initial zeros with numbers drawn from a normal distribution whose mean depends on the distance of the pixels to the center. The next step is to apply the stepping stone algorithm. For each pixel, I substitute its value by a weighted average of itself and the value of one of its neighbors, chosen randomly. I always mix the values of the pixels, whereas the original algorithm, as described in Softology's blog, performs these mixings randomly. Another difference is that I mix values instead of interchanging them, as the original algorithm does.
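The mixing step can be sketched like this (my own Python illustration, not the post's R code; the weight w, the toroidal edges and the four-neighbour choice are assumptions):

```python
import random

# One stepping-stone pass over the matrix, as described above: every pixel
# becomes a weighted average of itself and one randomly chosen 4-neighbour.
# Updates happen in place, which is fine for a sketch.

def stepping_stone_pass(m, w=0.75):
    n = len(m)
    for i in range(n):
        for j in range(n):
            di, dj = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            m[i][j] = w * m[i][j] + (1.0 - w) * m[(i + di) % n][(j + dj) % n]
    return m

random.seed(42)
img = [[random.gauss(0.0, 1.0) for _ in range(50)] for _ in range(50)]
before = [v for row in img for v in row]
for _ in range(10):
    stepping_stone_pass(img)   # repeated smoothing passes, as in the post
after = [v for row in img for v in row]
```

Each pass averages neighbouring values, so the variance of the image shrinks and smooth, blotchy regions emerge — which is where the watercolor look comes from.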
Once I have repeated this process a number of times, I pick a nice palette from COLOURLovers and turn the values of the pixels into colors with ggplot: The code is here. Let me know if you do something interesting with it. Turning numbers into bright colors: I cannot imagine a better way to spend some hours in these shadowy times.

# Reaction Diffusion

Sin patria ni banderas, ahora vivo a mi manera; y es que me siento extranjero fuera de tus agujeros (Tercer movimiento: Lo de dentro, Extremoduro)

The technique I experimented with in this post is an endless source of amazing images. It is called reaction-diffusion and simulates the evolution of a system where several substances interact chemically, transforming into each other (reaction) and spreading out over a surface in space (diffusion). In my case there are just two substances in a 2D space, and the evolution of the system is simulated using the Gray-Scott algorithm, which is ruled by several parameters that, once determined, can produce images like this one:

This article by Karl Sims is a very clear and comprehensive explanation of the Gray-Scott algorithm. Briefly, the Gray-Scott model simulates the evolution of two substances, A and B, on a two-dimensional grid. In each cell of the grid, substance A is added at a given feed rate f. Then both substances react following this rule: two particles of B convert a particle of A into a particle of B. To prevent overpopulation, B particles are killed at a given kill rate k. In the diffusion phase, both substances spread out across the cells of the grid at given diffusion rates Da and Db. The diffusion is performed using a 2D Laplacian operator with a 3x3 convolution matrix L. In the article you can find the equations that rule the system, which depend on the parameters described in the previous paragraph.
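One Gray-Scott update step can be sketched in pure Python (a hedged illustration, not the post's Rcpp code; the Laplacian weights — centre −1, edges 0.2, corners 0.05 — and the parameter defaults are common choices, assumed here):

```python
# One Gray-Scott update on a small periodic grid, following the standard
# equations dA = Da*Lap(A) - A*B^2 + f*(1 - A) and
#           dB = Db*Lap(B) + A*B^2 - (k + f)*B.

def laplacian(grid, n):
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            edges = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j] +
                     grid[i][(j - 1) % n] + grid[i][(j + 1) % n])
            corners = (grid[(i - 1) % n][(j - 1) % n] + grid[(i - 1) % n][(j + 1) % n] +
                       grid[(i + 1) % n][(j - 1) % n] + grid[(i + 1) % n][(j + 1) % n])
            out[i][j] = -grid[i][j] + 0.2 * edges + 0.05 * corners
    return out

def gray_scott_step(A, B, n, Da=1.0, Db=0.5, f=0.055, k=0.062):
    LA, LB = laplacian(A, n), laplacian(B, n)  # diffusion from the old grids
    for i in range(n):
        for j in range(n):
            abb = A[i][j] * B[i][j] ** 2       # reaction: A + 2B -> 3B
            A[i][j] += Da * LA[i][j] - abb + f * (1.0 - A[i][j])
            B[i][j] += Db * LB[i][j] + abb - (k + f) * B[i][j]

n = 8
A = [[1.0] * n for _ in range(n)]              # A everywhere...
B = [[0.0] * n for _ in range(n)]
B[4][4] = 1.0                                  # ...and a single seed of B
gray_scott_step(A, B, n)
```

This sketch updates A and B in place cell by cell, which is enough to see the mechanics; a real implementation would double-buffer the grids and iterate thousands of steps, as the post does.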
To obtain all the images of this post, I kept most of the parameters constant and just changed the following:

• Feed and kill rates (f and k respectively)
• The initial proportion of both substances A and B. I always started with A=0 in each cell and B=1 in some (small) amount of them, selected randomly or according to some geometrical considerations (an inner circle or square, for example). I'll let you try your own ideas.

Sometimes the system converges and remains stable after a number of iterations. For example, these images are obtained by iterating 5000 times: Before converging you can also obtain nice patterns in between: The variety of patterns is amazing: tessellations, lattices, kaleidoscopes … some more examples:

I used Rcpp again to iterate efficiently, but this time I tried RcppArmadillo, a C++ library for linear algebra and scientific computing, because it contains a class called cube, a 3D matrix that fits perfectly onto a 2D grid where the 3rd dimension is the amount of particles A and B. I like to share my code because I think it may serve as a source of inspiration to someone. You can find the code here with just one of the many configurations I tried while generating the previous images. It may serve you as a good starting point to explore your own ones. Let me know if you reach interesting places. Happy New Year 2020.

# The Chaos Game: an experiment about fractals, recursivity and creative coding

Mathematics, rightly viewed, possesses not only truth, but supreme beauty (Bertrand Russell)

You have a pentagon defined by its five vertices. Now follow these steps:

• Step 0: take a point inside the pentagon (it can be its center if you want to make it easy). Keep this point in a safe place.
• Step 1: choose a vertex randomly and take the midpoint between both of them (the vertex and the original point). Keep this new point as well. Repeat Step 1 one more time.
• Step 2: compare the last two vertices that you have chosen.
If they are the same, choose another with this condition: if it is not a neighbor of the last vertex you chose, keep it; if it is a neighbor, choose another vertex randomly until you pick a non-neighbor. Then take the midpoint between the last point you obtained and this new vertex. Keep this new point as well.
• Step 3: Repeat Step 2 a number of times and, after that, plot the set of points that you obtained.

If you repeat these steps 10 million times, you will obtain this stunning image:

I love the incredible ability of maths to create beauty. More concretely, I love the fact that repeating extremely simple operations can take you to unexpected places. Would you expect the image created with the initial naive algorithm to be that? I wouldn't. Even knowing the result, I cannot imagine how those simple steps can produce it. The image generated by all the points repeats itself at different scales. This characteristic, called self-similarity, is a property of fractals and makes them extremely attractive.

Step 2 is the key one in defining the shape of the image. Apart from comparing the two previous vertices as defined in the algorithm above, I implemented two other versions:

• one version where the currently chosen vertex cannot be the same as the previously chosen vertex.
• another one where the currently chosen vertex cannot neighbor the previously chosen vertex if the three previously chosen vertices are the same (note that this implementation is the same as the original but comparing with the three previous vertices instead of two).

These images are the result of applying the three versions of the algorithm to a square, a pentagon, a hexagon and a heptagon (a row for each polygon and a column for each algorithm):

From a technical point of view, I used Rcpp to generate the set of points. Since each iteration depends on the previous one, the loop cannot easily be vectorised, and C++ is a perfect option to avoid the bottleneck you would face with other ways of iterating.
In this case, instead of writing the C++ directly inside the R file with cppFunction(), I used a stand-alone C++ file called chaos_funcs.cpp to write the C++ code, which I load into R using sourceCpp(). Some days ago, I gave a tutorial at the coding club of the University Carlos III in Madrid where we worked with the integration of C++ and R to create beautiful images of strange attractors. The tutorial and the code we developed are here. You can also find the code of this experiment here. Enjoy!

# Mandalaxies

One cannot escape the feeling that these mathematical formulas have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers (Heinrich Hertz)

I love spending my time doing mathematics: transforming formulas into drawings, experimenting with paradoxes, learning new techniques … and R is a perfect tool for doing it. Maths are for me the best means of escape and evasion from reality. At the very least, doing maths is a stylish way of wasting my time. When I read something interesting, I often feel the desire to try it by myself. That's what happened to me when I discovered this fabulous book by Julien C. Sprott. I cannot stop making images with the formulas it contains. Today I present you a mix of mandalas and galaxies that I call Mandalaxies:

This time, the equation that drives these drawings is this one:

$x_{n+1}= 10a_1+(x_n+a_2\sin(a_3y_n+a_4))\cos(\alpha)+y_n\sin(\alpha)\\ y_{n+1}= 10a_5-(x_n+a_2\sin(a_3y_n+a_4))\sin(\alpha)+y_n\sin(\alpha)$

where $\alpha=2\pi/(13+10a_6)$

The equation depends on six parameters (a1 to a6). Searching randomly for values between -1.2 and 1.3 for each of them, you can generate an infinite number of beautiful images:

Here you can find the code to make your own images. Once again, Rcpp is key to generating the set of points quickly, since each of the previous plots contains 4 million points.
# Rcpp, Camarón de la Isla and the Beauty of Maths

Desde que te estoy queriendo yo no sé lo que me pasa cualquier vereda que tomo siempre me lleva a tu casa (Y mira que mira y mira, Camarón de la Isla)

The verses that head this post are taken from a song by Camarón de la Isla and illustrate very well what a strange attractor is in real life. For non-Spanish speakers, a translation is: since I'm loving you, I don't know what happens to me: any path I take always ends at your house. If you don't know who Camarón de la Isla is, listen to his immense and immortal music.

I will not try to give a formal definition of a strange attractor here. Instead, I will try to describe them in my own words. A strange attractor can be defined by a system of equations (I don't know if all strange attractors can be defined like this). These equations determine the trajectory of some initial point along a number of steps. The location of the point at step i depends on its location at step i-1, so the trajectory is calculated sequentially. These are the equations that define the attractor of this experiment:

$x_{n+1}= a_{1}+a_{2}x_{n}+a_{3}y_{n}+a_{4} |x_{n}|^{a_5}+a_{6} |y_{n}|^{a_7}\\ y_{n+1}= a_{8}+a_{9}x_{n}+a_{10}y_{n}+a_{11} |x_{n}|^{a_{12}}+a_{13} |y_{n}|^{a_{14}}$

As you can see, there are two equations, one describing each coordinate of the point (so it lives in a two-dimensional space). These equations are impossible to solve analytically; in other words, you cannot know where the point will be after some iterations directly from its initial location. The adjective attractor comes from the fact that the trajectory of the point tends to be the same independently of its initial location. Here you have more examples: folds, waterfalls, sand, smoke … the images are really appealing:

The code of this experiment is here. There you will find a definition of parameters that produces a nice example image.
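A hedged Python sketch of iterating these equations (the post itself iterates in C++ via Rcpp); since many random parameter sets blow up, a simple divergence guard is included, and all names and defaults below are my own:

```python
import math

# Iterate the 14-parameter map defined by the two equations above. Returns
# the orbit as (xs, ys), or None if the orbit diverges (which is how one can
# reject bad parameter sets before committing to a 10-million-point run).

def iterate_attractor(a, n=2000, x0=0.1, y0=0.1, bound=1e6):
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        try:
            xn = a[0] + a[1]*x + a[2]*y + a[3]*abs(x)**a[4] + a[5]*abs(y)**a[6]
            yn = a[7] + a[8]*x + a[9]*y + a[10]*abs(x)**a[11] + a[12]*abs(y)**a[13]
        except (OverflowError, ZeroDivisionError):
            return None                  # numerically exploded: reject
        if not (math.isfinite(xn) and math.isfinite(yn)) or max(abs(xn), abs(yn)) > bound:
            return None                  # diverged: reject this parameter set
        xs.append(xn)
        ys.append(yn)
    return xs, ys

# A contractive parameter set (no power terms), so the orbit stays bounded:
orbit = iterate_attractor([0.1, 0.5, -0.5, 0, 1, 0, 1,
                           0.1, 0.5,  0.5, 0, 1, 0, 1])
```

Running a short orbit like this is cheap, so one can screen thousands of random parameter vectors and only render the survivors at full size.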
Some comments:

• Each point depends on the previous one, so iteration is mandatory; since each plot involves 10 million points, a very good option to do it efficiently is to use Rcpp, which allows you to iterate directly in C++.
• Some points are quite isolated and far from the crowd of points. This is why I locate some breakpoints with quantile to remove the tails; otherwise, the plot may be reduced to a big point.
• The key to obtaining a nice plot is to find a good set of parameters (a1 to a14). I have my own method, which involves the following steps: generate a random value between -4 and 4 for each parameter, simulate a mini attractor of only 2000 points, and keep it if it doesn't diverge (i.e. the points don't go to infinity), if x and y are not correlated at all, and if its kurtosis is bigger than a certain threshold. If the mini attractor passes these filters, I keep its parameters and generate the big version with 10 million points.
• I could have published this method together with the code, but I didn't. Why? Because it may push you to develop your own, since mine is not ideal. If you are interested in mine, let me know and I will give you more details. If you develop a good method by yourself and don't mind sharing it with me, please let me know as well.

This post is inspired by this beautiful book by Julien Clinton Sprott. I would love to see your images.

# Drawing 10 Million Points With ggplot: Clifford Attractors

For me, mathematics cultivates a perpetual state of wonder about the nature of mind, the limits of thoughts, and our place in this vast cosmos (Clifford A. Pickover – The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics)

I am a big fan of Clifford Pickover and I often find inspiration in his books. Thanks to him, I discovered the harmonograph and Parrondo's paradox, among many other mathematical treasures.
Apart from being a great teacher, he also invented a family of strange attractors bearing his name. Clifford attractors are defined by these equations:

$x_{n+1} = \sin(a\, y_{n}) + c\, \cos(a\, x_{n}) \\ y_{n+1} = \sin(b\, x_{n}) + d\, \cos(b\, y_{n})$

There are infinitely many attractors, since a, b, c and d are parameters. Given four values (one for each parameter) and a starting point (x0, y0), the previous equations define the exact location of the point at step n, which depends only on its location at step n-1; an attractor can be thought of as the trajectory described by a particle. This plot shows the evolution of a particle starting at (x0, y0)=(0, 0) with parameters a=-1.24458046630025, b=-1.25191834103316, c=-1.81590817030519 and d=-1.90866735205054 over 10 million steps:

Changing the parameters is really entertaining. The drawings have a sandy appearance:

From a technical point of view, the challenge is creating a data frame with all the locations, since it must have 10 million rows and must be populated sequentially. A very fast way to do it is with the Rcpp package. To render the plot I use ggplot, which works quite well.
Here you have the code to play with Clifford attractors if you want:

library(Rcpp)
library(ggplot2)
library(dplyr)

opt = theme(legend.position  = "none",
            panel.background = element_rect(fill="white"),
            axis.ticks       = element_blank(),
            panel.grid       = element_blank(),
            axis.title       = element_blank(),
            axis.text        = element_blank())

cppFunction('DataFrame createTrajectory(int n, double x0, double y0,
            double a, double b, double c, double d) {
  // create the columns
  NumericVector x(n);
  NumericVector y(n);
  x[0]=x0;
  y[0]=y0;
  for(int i = 1; i < n; ++i) {
    x[i] = sin(a*y[i-1])+c*cos(a*x[i-1]);
    y[i] = sin(b*x[i-1])+d*cos(b*y[i-1]);
  }
  // return a new data frame
  return DataFrame::create(_["x"]= x, _["y"]= y);
}
')

a=-1.24458046630025
b=-1.25191834103316
c=-1.81590817030519
d=-1.90866735205054

df=createTrajectory(10000000, 0, 0, a, b, c, d)

png("Clifford.png", units="px", width=1600, height=1600, res=300)
ggplot(df, aes(x, y)) + geom_point(color="black", shape=46, alpha=.01) + opt
dev.off()
# Help Center > Badges > Custodian

Complete at least one review task. This badge is awarded once per review type.

Awarded 3308 times
TheInfoList

The horizon is the apparent line that separates the surface of a celestial body from its sky when viewed from the perspective of an observer on or near the surface of the relevant body. This line divides all viewing directions based on whether it intersects the relevant body's surface or not.

The ''true horizon'' is actually a theoretical line, which can only be observed to any degree of accuracy when it lies along a relatively smooth surface such as that of Earth's oceans. At many locations, this line is obscured by terrain, and on Earth it can also be obscured by life forms such as trees and/or human constructs such as buildings.
Buildings come in a variety of sizes, shapes, and functions, and have been adapted throughout history for a ... s. The resulting intersection of such obstructions with the sky is called the ''visible horizon''. On Earth, when looking at a sea from a shore, the part of the sea closest to the horizon is called the offing. Pronounced, "Hor-I-zon". The true horizon surrounds the observer and it is typically assumed to be a circle, drawn on the surface of a perfectly spherical model of the Earth Earth is the third planet from the Sun and the only astronomical object known to harbour and support life. 29.2% of Earth's surface is land consisting of continents and islands. The remaining 70.8% is Water distribution on Earth, covered wi ... . Its center is below the observer and below sea level Mean sea level (MSL) (often shortened to sea level) is an average In colloquial, ordinary language, an average is a single number taken as representative of a list of numbers, usually the sum of the numbers divided by how many numbers are in th ... . Its distance from the observer varies from day to day due to atmospheric refraction Atmospheric refraction is the deviation of or other from a straight line as it passes through the due to the variation in as a function of . This refraction is due to the velocity of light through , decreasing (the increases) with increased ... , which is greatly affected by weather Weather is the state of the atmosphere An atmosphere (from the greek words ἀτμός ''(atmos)'', meaning 'vapour', and σφαῖρα ''(sphaira)'', meaning 'ball' or 'sphere') is a layer or a set of layers of gases surrounding a p ... conditions. Also, the higher the observer's eyes are from sea level, the farther away the horizon is from the observer. For instance, in standard atmospheric conditions, for an observer with eye level above sea level by , the horizon is at a distance of about . 
When observed from very high standpoints, such as a space station, the horizon is much farther away and encompasses a much larger area of Earth's surface. In this case, the horizon is no longer a perfect circle, nor even a plane curve such as an ellipse, especially when the observer is above the equator, because the Earth's surface is better modeled as an ellipsoid than as a sphere.

# Etymology

The word ''horizon'' derives from the Greek ''horízōn kýklos'', "separating circle", where "ὁρίζων" is from the verb ὁρίζω ''horízō'', "to divide", "to separate", which in turn derives from "ὅρος" (''hóros''), "boundary, landmark".

# Appearance and usage

Historically, the distance to the visible horizon has been vital to survival and successful navigation, especially at sea, because it determined an observer's maximum range of vision and thus of communication, with all the obvious consequences for safety and the transmission of information that this range implied.
This importance lessened with the development of the radio and the telegraph, but even today, when flying an aircraft under visual flight rules, a technique called attitude flying is used to control the aircraft: the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. Pilots can also retain their spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing, the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level, the difference between this ''geometrical horizon'' (which assumes a perfectly flat, infinite ground plane) and the ''true horizon'' (which assumes a spherical Earth surface) is imperceptible to the unaided eye (but for someone on a 1000-meter hill looking out to sea, the true horizon will be about a degree below a horizontal line).

In astronomy, the horizon is the horizontal plane through the eyes of the observer.
It is the fundamental plane of the horizontal coordinate system: the locus of points that have an altitude of zero degrees. While similar in ways to the geometrical horizon, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.

# Distance to the horizon

Ignoring the effect of atmospheric refraction, the distance to the true horizon from an observer close to the Earth's surface is about

:$d \approx \sqrt{2Rh} \,,$

where ''h'' is height above sea level and ''R'' is the Earth radius.

When ''d'' is measured in kilometres and ''h'' in metres, the distance is

:$d \approx 3.57\sqrt{h} \,,$

where the constant 3.57 has units of km/m½. When ''d'' is measured in statute miles (i.e. "land miles" of 5,280 feet) and ''h'' in feet, the distance is

:$d \approx \sqrt{1.5h} \approx 1.22\sqrt{h} \,,$

where the constant 1.22 has units of mi/ft½. In this equation Earth's surface is assumed to be perfectly spherical, with ''R'' equal to about 6,371 km.

## Examples

Assuming no atmospheric refraction and a spherical Earth with radius ''R'' = 6,371 km:

* For an observer standing on the ground with ''h'' = 1.70 m (average eye level), the horizon is at a distance of 4.7 km.
* For an observer standing on a hill or tower 100 m above sea level, the horizon is at a distance of 36 km.
* For an observer standing on the roof of the Burj Khalifa, 828 m from the ground and about 834 m above sea level, the horizon is at a distance of about 103 km.
* For an observer atop Mount Everest (8,848 m in altitude), the horizon is at a distance of 336 km.
* For an observer aboard a commercial passenger plane flying at a typical altitude of 35,000 ft (10.7 km), the horizon is at a distance of about 369 km.
* For a U-2 pilot, whilst flying at its service ceiling of 21,000 m, the horizon is at a distance of about 517 km.

## Other planets

On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on.

## Derivation

If the Earth is assumed to be a featureless sphere (rather than an oblate spheroid) with no atmospheric refraction, then the distance to the horizon can easily be calculated. The secant-tangent theorem states that

:$\mathrm{OC}^2 = \mathrm{OA} \times \mathrm{OB} \,.$

Make the following substitutions:

* ''d'' = OC = distance to the horizon
* ''D'' = AB = diameter of the Earth
* ''h'' = OB = height of the observer above sea level
* ''D'' + ''h'' = OA = diameter of the Earth plus height of the observer above sea level,

with ''d'', ''D'', and ''h'' all measured in the same units. The formula now becomes

:$d^2 = h(D+h)\,\!$

or

:$d = \sqrt{h(D+h)} = \sqrt{2Rh + h^2}\,,$

where ''R'' is the radius of the Earth.

The same equation can also be derived using the Pythagorean theorem. At the horizon, the line of sight is a tangent to the Earth and is also perpendicular to Earth's radius. This sets up a right triangle, with the sum of the radius and the height as the hypotenuse. With

* ''d'' = distance to the horizon
* ''h'' = height of the observer above sea level
* ''R'' = radius of the Earth

referring to the second figure at the right leads to the following:

:$(R+h)^2 = R^2 + d^2 \,\!$
:$R^2 + 2Rh + h^2 = R^2 + d^2 \,\!$
:$d = \sqrt{2Rh + h^2} \,.$

The exact formula above can be expanded as:

:$d = \sqrt{2Rh}\,\sqrt{1 + \frac{h}{2R}} \,,$

where ''R'' is the radius of the Earth (''R'' and ''h'' must be in the same units). For example, if a satellite is at a height of 2000 km, the distance to the horizon is about 5,430 km; neglecting the second factor (the ''h''/2''R'' correction) would give a distance of about 5,048 km, a 7% error.

## Approximation

If the observer is close to the surface of the earth, then it is valid to disregard ''h'' in the term (2''R'' + ''h''), and the formula becomes

:$d = \sqrt{2Rh} \,.$

Using kilometres for ''d'' and ''R'', and metres for ''h'', and taking the radius of the Earth as 6371 km, the distance to the horizon is

:$d \approx \sqrt{12.742\,h} \approx 3.570\sqrt{h} \,.$
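The metric rule of thumb above is easy to check numerically: the constant 3.570 is nothing more than √(2 · 6371/1000). A minimal Python sketch (the function and constant names are my own, not from the text):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius in km

def horizon_km(h_metres: float) -> float:
    """Approximate distance to the horizon, d = sqrt(2*R*h), in km."""
    return math.sqrt(2 * R_EARTH_KM * h_metres / 1000.0)

# The constant in d ~= 3.57*sqrt(h) is just sqrt(2*6371/1000):
k = math.sqrt(2 * R_EARTH_KM / 1000.0)
print(round(k, 3))                  # -> 3.57
print(round(horizon_km(1.70), 2))   # -> 4.65, i.e. about 4.7 km for eye level 1.70 m
```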
Using imperial units, with ''d'' and ''R'' in statute miles (as commonly used on land), and ''h'' in feet, the distance to the horizon is

:$d \approx \sqrt{\frac{2 \cdot 3959\,h}{5280}} \approx \sqrt{1.5h} \approx 1.22 \sqrt{h} \,.$

If ''d'' is in nautical miles, and ''h'' in feet, the constant factor is about 1.06, which is close enough to 1 that it is often ignored, giving:

:$d \approx \sqrt{h}$

These formulas may be used when ''h'' is much smaller than the radius of the Earth (6371 km or 3959 mi), including all views from any mountaintops, airplanes, or high-altitude balloons. With the constants as given, both the metric and imperial formulas are precise to within 1% (see the next section for how to obtain greater precision). If ''h'' is significant with respect to ''R'', as with most satellites, then the approximation is no longer valid, and the exact formula is required.
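Since the exact and approximate formulas differ only in the ''h''² term, the size of the error at satellite altitudes is easy to demonstrate. A hedged Python sketch (function names are mine):

```python
import math

R = 6371.0  # Earth radius in km

def horizon_exact(h_km: float) -> float:
    """Exact tangent distance d = sqrt(h*(2R + h)) for a spherical Earth."""
    return math.sqrt(h_km * (2 * R + h_km))

def horizon_approx(h_km: float) -> float:
    """Approximation d = sqrt(2Rh), valid only when h << R."""
    return math.sqrt(2 * R * h_km)

# For a satellite at 2000 km, the h^2 term matters:
h = 2000.0
exact, approx = horizon_exact(h), horizon_approx(h)
print(round(exact), round(approx))            # -> 5430 5048
print(round(100 * (exact - approx) / exact))  # -> 7 (% error)
# For an airliner at 10 km, the same comparison differs by well under 0.1%.
```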
# Other measures

## Arc distance

Another relationship involves the great-circle distance ''s'' along the arc over the curved surface of the Earth to the horizon; with ''γ'' in radians,

:$s = R \gamma \,;$

then

:$\cos \gamma = \cos\frac{s}{R}=\frac{R}{R+h}\,.$

Solving for ''s'' gives

:$s=R\cos^{-1}\frac{R}{R+h} \,.$

The distance ''s'' can also be expressed in terms of the line-of-sight distance ''d''; from the second figure at the right,

:$\tan \gamma = \frac{d}{R} \,;$

substituting for ''γ'' and rearranging gives

:$s=R\tan^{-1}\frac{d}{R} \,.$

The distances ''d'' and ''s'' are nearly the same when the height of the object is negligible compared to the radius (that is, ''h'' ≪ ''R'').

## Zenith angle

When the observer is elevated, the horizon zenith angle can be greater than 90°. The maximum visible zenith angle occurs when the ray is tangent to Earth's surface; from triangle OCG in the figure at right,

:$\cos \gamma =\frac{R}{R+h} \,,$

where $h$ is the observer's height above the surface and $\gamma$ is the angular dip of the horizon. It is related to the horizon zenith angle $z$ by:

:$z = \gamma +90^\circ$

For a non-negative height $h$, the angle $z$ is always ≥ 90°.
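The arc-distance, line-of-sight, and dip relations can be compared directly; for any realistic eye height the arc and tangent distances agree to well under a metre. A small Python sketch assuming a spherical Earth (names are mine):

```python
import math

R = 6371.0  # Earth radius in km

def arc_to_horizon(h_km: float) -> float:
    """Great-circle distance s = R * arccos(R / (R + h)) along the surface."""
    return R * math.acos(R / (R + h_km))

def line_of_sight(h_km: float) -> float:
    """Straight-line tangent distance d = sqrt(h * (2R + h))."""
    return math.sqrt(h_km * (2 * R + h_km))

def dip_degrees(h_km: float) -> float:
    """Angular dip gamma of the horizon below the astronomical horizontal."""
    return math.degrees(math.acos(R / (R + h_km)))

# For eye level 1.7 m (0.0017 km), s and d agree closely:
h = 0.0017
print(round(arc_to_horizon(h), 2), round(line_of_sight(h), 2))  # -> 4.65 4.65
# The horizon zenith angle z = gamma + 90 degrees, always >= 90:
print(round(90 + dip_degrees(h), 3))
```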
## Objects above the horizon

To compute the greatest distance at which an observer can see the top of an object above the horizon, compute the distance to the horizon for a hypothetical observer on top of that object, and add it to the real observer's distance to the horizon. For example, for an observer with a height of 1.70 m standing on the ground, the horizon is 4.65 km away. For a tower with a height of 100 m, the horizon distance is 35.7 km. Thus an observer on a beach can see the top of the tower as long as it is not more than 40.35 km away. Conversely, if an observer on a boat (eye level 1.7 m) can just see the tops of trees on a nearby shore (10 m tall), the trees are probably about 16 km away.

Referring to the figure at the right, the top of the lighthouse will be visible to a lookout in a crow's nest at the top of a mast of the boat if

:$D_\mathrm{BL} < 3.57\,\left(\sqrt{h_\mathrm{B}} + \sqrt{h_\mathrm{L}}\right) \,,$

where ''D''BL is in kilometres and ''h''B and ''h''L are in metres.

As another example, suppose an observer, whose eyes are two metres above the level ground, uses binoculars to look at a distant building which he knows to consist of thirty storeys, each 3.5 metres high. He counts the storeys he can see, and finds there are only ten. So twenty storeys or 70 metres of the building are hidden from him by the curvature of the Earth. From this, he can calculate his distance from the building:

:$D \approx 3.57\left(\sqrt{2}+\sqrt{70}\right) \,,$

which comes to about 35 kilometres.
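The "add the two horizon distances" rule used in these examples can be sketched as a one-line helper (Python; the function name is mine):

```python
import math

def visible_range_km(h1_m: float, h2_m: float, k: float = 3.57) -> float:
    """Maximum separation, in km, at which two points of heights h1 and h2
    (in metres) can see each other over a smooth spherical Earth:
    the sum of their individual horizon distances, k*(sqrt(h1) + sqrt(h2))."""
    return k * (math.sqrt(h1_m) + math.sqrt(h2_m))

# Beach observer (1.70 m) and a 100 m tower: 4.65 + 35.7 km:
print(round(visible_range_km(1.70, 100), 2))  # -> 40.35
# Eyes 2 m up, 70 m of a building hidden by curvature:
print(round(visible_range_km(2, 70)))         # -> 35 (km)
```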
It is similarly possible to calculate how much of a distant object is visible above the horizon. Suppose an observer's eye is 10 metres above sea level, and he is watching a ship that is 20 km away. His horizon is

:$3.57 \sqrt{10}$

kilometres from him, which comes to about 11.3 kilometres away. The ship is a further 8.7 km away. The height of a point on the ship that is just visible to the observer is given by

:$h\approx\left(\frac{8.7}{3.57}\right)^2 \,,$

which comes to almost exactly six metres. The observer can therefore see that part of the ship that is more than six metres above the level of the water. The part of the ship that is below this height is hidden from him by the curvature of the Earth. In this situation, the ship is said to be hull-down.

# Effect of atmospheric refraction

Due to atmospheric refraction, the distance to the visible horizon is farther than the distance based on a simple geometric calculation. If the ground (or water) surface is colder than the air above it, a cold, dense layer of air forms close to the surface, causing light to be refracted downward as it travels, and therefore, to some extent, to go around the curvature of the Earth. The reverse happens if the ground is hotter than the air above it, as often happens in deserts, producing mirages. As an approximate compensation for refraction, surveyors measuring distances longer than 100 meters subtract 14% from the calculated curvature error and ensure lines of sight are at least 1.5 metres from the ground, to reduce random errors created by refraction.

If the Earth were an airless world like the Moon, the above calculations would be accurate. However, Earth has an atmosphere of air, whose density and refractive index vary considerably depending on the temperature and pressure. This makes the air refract light to varying extents, affecting the appearance of the horizon. Usually, the density of the air just above the surface of the Earth is greater than its density at greater altitudes. This makes its refractive index greater near the surface than at higher altitudes, which causes light that is travelling roughly horizontally to be refracted downward. This makes the actual distance to the horizon greater than the distance calculated with geometrical formulas. With standard atmospheric conditions, the difference is about 8%. This changes the factor of 3.57, in the metric formulas used above, to about 3.86.
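The standard-refraction correction is just a change of constant, which a short sketch makes concrete (Python; names are mine):

```python
import math

def horizon_km(h_m: float, refraction: bool = True) -> float:
    """Distance to the horizon in km for eye height h_m in metres.
    3.57 is the purely geometric constant; 3.86 folds in the ~8% extra
    range obtained under standard atmospheric refraction."""
    k = 3.86 if refraction else 3.57
    return k * math.sqrt(h_m)

geom = horizon_km(1.70, refraction=False)
refr = horizon_km(1.70)
print(round(geom, 2), round(refr, 2))      # -> 4.65 5.03
print(round(100 * (refr - geom) / geom))   # -> 8 (% farther)
```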
For instance, if an observer is standing on a seashore, with eyes 1.70 m above sea level, according to the simple geometrical formulas given above the horizon should be 4.7 km away. Actually, atmospheric refraction allows the observer to see 300 metres farther, moving the true horizon 5 km away from the observer.

This correction can be, and often is, applied as a fairly good approximation when atmospheric conditions are close to standard. When conditions are unusual, this approximation fails. Refraction is strongly affected by temperature gradients, which can vary considerably from day to day, especially over water. In extreme cases, usually in springtime, when warm air overlies cold water, refraction can allow light to follow the Earth's surface for hundreds of kilometres. Opposite conditions occur, for example, in deserts, where the surface is very hot, so hot, low-density air is below cooler air. This causes light to be refracted upward, causing mirage effects that make the concept of the horizon somewhat meaningless. Calculated values for the effects of refraction under unusual conditions are therefore only approximate. Nevertheless, attempts have been made to calculate them more accurately than the simple approximation described above.

Outside the visual wavelength range, refraction will be different. For radar (e.g. for wavelengths 300 to 3 mm, i.e. frequencies between 1 and 100 GHz) the radius of the Earth may be multiplied by 4/3 to obtain an effective radius, giving a factor of 4.12 in the metric formula, i.e. the radar horizon will be 15% beyond the geometrical horizon or 7% beyond the visual. The 4/3 factor is not exact; as in the visual case, the refraction depends on atmospheric conditions.

;Integration method—Sweer
If the density profile of the atmosphere is known, the distance ''d'' to the horizon is given by

:$d=R_\mathrm{E}\left( \psi +\delta \right) \,,$

where ''R''E is the radius of the Earth, ''ψ'' is the dip of the horizon and ''δ'' is the refraction of the horizon. The dip is determined fairly simply from

:$\cos \psi = \frac{R_\mathrm{E}\,\mu_0}{\left(R_\mathrm{E}+h\right)\mu} \,,$

where ''h'' is the observer's height above the Earth, ''μ'' is the index of refraction of air at the observer's height, and ''μ''0 is the index of refraction of air at Earth's surface. The refraction must be found by integration of

:$\delta =-\int_{0}^{h}\frac{\tan\phi\,d\mu}{\mu} \,,$

where $\phi\,\!$ is the angle between the ray and a line through the center of the Earth. The angles ''ψ'' and $\phi\,\!$ are related by

:$\phi =90^\circ -\psi \,.$

;Simple method—Young
A much simpler approach, which produces essentially the same results as the first-order approximation described above, uses the geometrical model but with an effective radius ''R′'' = 7/6 ''R''E. The distance to the horizon is then

:$d=\sqrt{2 R' h} \,.$

Taking the radius of the Earth as 6371 km, with ''d'' in km and ''h'' in m,

:$d \approx 3.86 \sqrt{h} \,;$

with ''d'' in mi and ''h'' in ft,

:$d \approx 1.32 \sqrt{h} \,.$

Results from Young's method are quite close to those from Sweer's method, and are sufficiently accurate for many purposes.

# Curvature of the horizon

From a point above Earth's surface, the horizon appears slightly convex; it is a circular arc. The following formula expresses the basic geometrical relationship between this visual curvature $\kappa$, the altitude $h$, and Earth's radius $R$:

:$\kappa=\sqrt{\left(\frac{R+h}{R}\right)^2 - 1}\$

The curvature is the reciprocal of the curvature angular radius in radians. A curvature of 1.0 appears as a circle of an angular radius of 57.3°, corresponding to an altitude of approximately 2,640 km above Earth's surface. At an altitude of 10 km (33,000 ft), the cruising altitude of a typical airliner, the mathematical curvature of the horizon is about 0.056, the same curvature of the rim of a circle with a radius of 10 m that is viewed from 56 cm directly above the center of the circle.
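Under this geometry the visual curvature works out to √(((R+h)/R)² − 1), which equals d/R, the tangent distance to the horizon divided by Earth's radius. A Python sketch reproducing the 0.056 airliner figure (names are mine, and the closed form is my reading of the relationship above):

```python
import math

R = 6371.0  # Earth radius in km

def horizon_curvature(h_km: float) -> float:
    """Visual curvature of the horizon, kappa = sqrt(((R+h)/R)**2 - 1),
    which equals d/R for tangent distance d = sqrt(2Rh + h^2)."""
    return math.sqrt(((R + h_km) / R) ** 2 - 1)

print(round(horizon_curvature(10), 3))  # -> 0.056 at airliner altitude
# kappa = 1 when (R+h)^2 = 2*R^2, i.e. h = R*(sqrt(2) - 1), about 2,640 km:
print(round(R * (math.sqrt(2) - 1)))
```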
However, the apparent curvature is less than that due to refraction In physics Physics is the natural science that studies matter, its Elementary particle, fundamental constituents, its Motion (physics), motion and behavior through Spacetime, space and time, and the related entities of energy and force ... of light by the atmosphere and the obscuration of the horizon by high cloud layers that reduce the altitude above the visual surface. # Vanishing points The horizon is a key feature of the picture plane In painting Painting is the practice of applying paint, pigment, color or other medium to a solid surface (called the "matrix" or "support"). The medium is commonly applied to the base with a brush, but other implements, such as knives, spo ... in the science of graphical perspective Graphics () are visual perception, visual images or designs on some surface, such as a wall, canvas, screen, paper, or stone, to inform, illustration, illustrate, or entertain. In contemporary usage, it includes a pictorial representation of dat ... . Assuming the picture plane stands vertical to ground, and ''P'' is the perpendicular projection of the eye point ''O'' on the picture plane, the horizon is defined as the horizontal line through ''P''. The point ''P'' is the vanishing point of lines perpendicular to the picture. If ''S'' is another point on the horizon, then it is the vanishing point for all lines parallel Parallel may refer to: Computing * Parallel algorithm In computer science Computer science deals with the theoretical foundations of information, algorithms and the architectures of its computation as well as practical techniques for their a ... to ''OS''. But Brook Taylor Brook Taylor (18 August 1685 – 29 December 1731) was an English mathematician best known for creating Taylor's theorem and the Taylor series, which are important for their use in mathematical analysis. Life and work Brook Taylor ... 
(1719) indicated that the horizon plane determined by ''O'' and the horizon was like any other plane: :The term of Horizontal Line, for instance, is apt to confine the Notions of a Learner to the Plane of the Horizon, and to make him imagine, that that Plane enjoys some particular Privileges, which make the Figures in it more easy and more convenient to be described, by the means of that Horizontal Line, than the Figures in any other plane;…But in this Book I make no difference between the Plane of the Horizon, and any other Plane whatsoever... The peculiar geometry of perspective, where parallel lines converge in the distance, stimulated the development of projective geometry, which posits a point at infinity where parallel lines meet. In her book ''Geometry of an Art'' (2007), Kirsti Andersen described the evolution of perspective drawing and science up to 1800, noting that vanishing points need not be on the horizon.
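The convergence of parallel lines at a vanishing point can be illustrated with a tiny pinhole-projection sketch. The projection model and all numbers here are illustrative assumptions, not from the text: a point $(x, y, z)$ maps to $(x/z, y/z)$ on a picture plane at unit distance, and points far along any line with direction $(d_x, d_y, d_z)$, $d_z \neq 0$, project toward $(d_x/d_z, d_y/d_z)$ regardless of where the line starts.

```python
def project(p):
    """Pinhole projection onto a picture plane at unit distance from the eye."""
    x, y, z = p
    return (x / z, y / z)

# Two distinct lines sharing the horizontal direction d = (1, 0, 2).
d = (1, 0, 2)

def along(p0, t):
    """Point at parameter t on the line through p0 with direction d."""
    return tuple(a + t * b for a, b in zip(p0, d))

far_a = project(along((0.0, 1.0, 1.0), 1e6))
far_b = project(along((3.0, -2.0, 5.0), 1e6))
vanishing_point = (d[0] / d[2], d[1] / d[2])   # (0.5, 0.0), on the horizon y = 0

# Both image points approach the same vanishing point:
print(far_a, far_b, vanishing_point)
```

Since the assumed direction is horizontal ($d_y = 0$), the vanishing point lands on the horizon line $y = 0$, matching the construction described above.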
In a chapter titled "Horizon", John Stillwell recounted how projective geometry has led to incidence geometry, the modern abstract study of line intersection. Stillwell also ventured into foundations of mathematics in a section titled "What are the Laws of Algebra?" The "algebra of points", originally given by Karl von Staudt in deriving the axioms of a field, was deconstructed in the twentieth century, yielding a wide variety of mathematical possibilities. Stillwell states :This discovery from 100 years ago seems capable of turning mathematics upside down, though it has not yet been fully absorbed by the mathematical community.
:Not only does it defy the trend of turning geometry into algebra, it suggests that both geometry and algebra have a simpler foundation than previously thought.
# Shenzhen I/O (Assembler), 76 bytes, noncompeting

Shenzhen I/O is a puzzle game in which you write your code in a special assembly-like language. Unfortunately, you can only use integers between -999 and 999 as inputs or outputs, and there is no way to tell if an array has ended. So I assumed that the array was written on a ROM that wraps around after reading the last cell. This means that only even-length arrays can be used, which is the reason for this being noncompeting.

Code:

    @mov x0 dat
    @mov x0 acc
    teq x0 dat
    +teq x0 acc
    b:
    +mov 1 p1
    -mov 0 p1
    -jmp b

Explanation:

    # Calling for x0 will cause the ROM to move 1 cell forward
    # (slx does not count).
    @ mov x0 dat   # Moves value to variable dat (only run once)
    @ mov x0 acc   # Moves ROM position forward and moves x0 to acc
      teq x0 dat   # See if dat equals x0
    + teq x0 acc   # If the last expression was true, see if x0 equals acc
    b:             # Label for jumps (GOTO)
    + mov 1 p1     # Set output (p1) to 1 (same case as previous line)
    - mov 0 p1     # If any expression was false, set output to 0
    - jmp b        # Jump to b: (same case as prev line)

Sorry if any of this is confusing; this is my first code-golf answer.

EDIT: removed 7 bytes by replacing the loop with run-once code.
A single phase $10$ kVA, $50$ Hz transformer with $1$ kV primary winding draws $0.5$ A and $55$ W, at rated voltage and frequency, on no load. A second transformer has a core with all its linear dimensions $\sqrt{2}$ times the corresponding dimensions of the first transformer. The core material and lamination thickness are the same in both transformers. The primary windings of both the transformers have the same number of turns. If a rated voltage of $2$ kV at $50$ Hz is applied to the primary of the second transformer, then the no load current and power, respectively, are 1. $0.7$ A, $77.8$ W 2. $0.7$ A, $155.6$ W 3. $1$ A, $110$ W 4. $1$ A, $220$ W
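A back-of-the-envelope check of the options (this reasoning is mine, not part of the question): with all linear dimensions scaled by $\sqrt{2}$ and the same number of turns, the core cross-section doubles, so applying 2 kV at 50 Hz leaves the peak flux density unchanged. Core loss then scales with core volume ($\times 2\sqrt{2}$), and the magnetizing current with the magnetic path length ($\times \sqrt{2}$). Under those assumptions:

```python
import math

scale = math.sqrt(2)      # ratio of linear dimensions

# First transformer, no-load at rated 1 kV / 50 Hz:
i0, p0 = 0.5, 55.0

# Same flux density => same loss per unit volume; core volume scales as scale**3.
p_new = p0 * scale**3     # 55 * 2*sqrt(2) ~ 155.6 W

# mmf = H*l with H unchanged, path length l scaled by sqrt(2), same turns N:
i_new = i0 * scale        # 0.5 * sqrt(2) ~ 0.707 A

print(round(i_new, 2), round(p_new, 1))   # 0.71 155.6
```

This points to option 2 ($0.7$ A, $155.6$ W).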
BAM Merger: Two Separate References?

Asked 9 months ago by glew · Tags: alignment, Assembly

Situation: ReferenceA and ReferenceB are completely different and cannot be merged into a single reference. I align reads to both A and B. I want to merge the two BAM files in a way that scores which alignment is better for each read. Does anyone know how to do this, or whether there is a tool available for it? Thanks for your help.

Answer: Why not create a hybrid reference with both ReferenceA and ReferenceB in one fasta file? You can then align the data to this reference. You will need to be mindful of multi-mapping (you will need to let the reads map in all possible locations). You could then post-process the alignment to find out which reference a read aligns to better.

Comment: Are you asking this because of the index-time cost? E.g. you need to add a small minichromosome (different with every sample) to a human index, and the indexing time dwarfs the search time?

Reply: Yes, the indexing time was the main issue I was worried about.
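A minimal sketch of the hybrid-reference idea in Python; the filenames (`refA.fa`, `refB.fa`) are hypothetical, and the actual alignment step (e.g. `bwa index hybrid.fa` followed by `bwa mem -a` to report multi-mappings) is left to your aligner of choice. Concatenating is trivial, but it is worth checking that sequence names do not collide between the two references, or alignments will be ambiguous to attribute:

```python
def merge_fastas(paths, out_path):
    """Concatenate fasta files into one, refusing duplicate sequence names."""
    seen = set()
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        name = line[1:].split()[0]
                        if name in seen:
                            raise ValueError(f"duplicate sequence name: {name}")
                        seen.add(name)
                    out.write(line)
    return seen
```

After aligning to the hybrid, the reference each read hit can be read back from the RNAME column of the BAM records when post-processing.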
# What is the unit vector that is normal to the plane containing (-3i + j - k) and (-4i + 5j - 3k)?

Apr 6, 2017

The unit vector is $\langle 2/\sqrt{150},\,-5/\sqrt{150},\,-11/\sqrt{150}\rangle$

#### Explanation:

The vector perpendicular to 2 vectors is calculated with the determinant (cross product)

$\begin{vmatrix}\vec{i}&\vec{j}&\vec{k}\\d&e&f\\g&h&i\end{vmatrix}$

where $\langle d,e,f\rangle$ and $\langle g,h,i\rangle$ are the 2 vectors.

Here, we have $\vec{a}=\langle -3,1,-1\rangle$ and $\vec{b}=\langle -4,5,-3\rangle$.

Therefore,

$\begin{vmatrix}\vec{i}&\vec{j}&\vec{k}\\-3&1&-1\\-4&5&-3\end{vmatrix} =\vec{i}\begin{vmatrix}1&-1\\5&-3\end{vmatrix} -\vec{j}\begin{vmatrix}-3&-1\\-4&-3\end{vmatrix} +\vec{k}\begin{vmatrix}-3&1\\-4&5\end{vmatrix}$

$=\vec{i}\,(1\cdot(-3)-(-1)\cdot 5)-\vec{j}\,((-3)(-3)-(-1)(-4))+\vec{k}\,((-3)\cdot 5-1\cdot(-4))$

$=\langle 2,-5,-11\rangle=\vec{c}$

Verification by doing 2 dot products:

$\langle 2,-5,-11\rangle\cdot\langle -3,1,-1\rangle=-6-5+11=0$

$\langle 2,-5,-11\rangle\cdot\langle -4,5,-3\rangle=-8-25+33=0$

So, $\vec{c}$ is perpendicular to $\vec{a}$ and $\vec{b}$.

The unit vector is

$\frac{\vec{c}}{\|\vec{c}\|}=\frac{1}{\sqrt{4+25+121}}\langle 2,-5,-11\rangle=\frac{1}{\sqrt{150}}\langle 2,-5,-11\rangle$
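The arithmetic can be double-checked with a few lines of plain Python (the component formula below is just the standard cross product; the helper names are mine):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (-3, 1, -1), (-4, 5, -3)
c = cross(a, b)                                 # (2, -5, -11)
assert sum(x * y for x, y in zip(c, a)) == 0    # perpendicular to a
assert sum(x * y for x, y in zip(c, b)) == 0    # perpendicular to b

norm = math.sqrt(sum(x * x for x in c))         # sqrt(150)
unit = tuple(x / norm for x in c)               # the unit normal vector
```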
# Count the ordered triplets

Number Theory, Level 2

$x^2+y^2-z^2=1997$

Find the number of ordered triplets $(x,y,z)$ of integers $x$, $y$, $z$ satisfying the above equation.

$\bullet$ Enter 777 as your answer if the answer is infinite.
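One way to see that the count is infinite (this reasoning is mine, not part of the problem statement): take $z = y + 1$, so $x^2 + y^2 - z^2 = x^2 - 2y - 1 = 1997$, i.e. $y = (x^2 - 1998)/2$, which is an integer for every even $x$. A quick enumeration:

```python
def triplet(m):
    """For even x = 2m, solve x^2 + y^2 - z^2 = 1997 with z = y + 1."""
    x = 2 * m
    y = (x * x - 1998) // 2      # = 2*m*m - 999, an integer for every m
    return x, y, y + 1

# One valid ordered triplet per integer m, hence infinitely many:
for m in range(5):
    x, y, z = triplet(m)
    assert x * x + y * y - z * z == 1997
```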
MullOverThing
Useful tips for everyday

# When should I not use batch normalization?

Not good for recurrent neural networks: batch normalization can be applied between stacks of RNNs, where normalization is applied "vertically", i.e. to the output of each RNN. But it cannot be applied "horizontally", i.e. between timesteps, as it hurts training because of exploding gradients due to the repeated rescaling.

## Should I use batch normalization on every layer?

Batch normalization is a layer that allows every layer of the network to learn more independently. With batch normalization, learning becomes more efficient, and it can also act as regularization to avoid overfitting the model. The layer is added to a sequential model to standardize the inputs (or outputs) of the preceding layer.

## What is the use of learnable parameters in a batch normalization layer?

β and γ are themselves learnable parameters that are updated during network training. Batch normalization layers normalize the activations and gradients propagating through a neural network, making network training an easier optimization problem.

### Is a batch normalization layer trainable?

moving_mean and moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such: moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)

## Why do we normalize batches?

Batch normalization is a technique to standardize the inputs to a network, applied either to the activations of a prior layer or to the inputs directly. Batch normalization accelerates training, in some cases halving the number of epochs or better, and provides some regularization, reducing generalization error.

## Where should I put batch normalization?

In practical coding, we add batch normalization after the activation function of the output layer or before the activation function of the input layer. Researchers have mostly found good results when implementing batch normalization after the activation layer.
## How does batch normalization work?

Batch normalization normalizes a layer's input by subtracting the mini-batch mean and dividing by the mini-batch standard deviation. To keep the layer expressive after this rescaling, batch normalization adds two trainable parameters, gamma γ and beta β, which can scale and shift the normalized value.

## Is batch normalization always good?

As far as I understand batch normalization, it is almost always useful when used together with other regularization methods (L2 and/or dropout). When used alone, without any other regularizers, batch norm gives poor improvements in terms of accuracy, but it speeds up the learning process anyway.

## Can batch normalization improve accuracy?

Seemingly, batch normalization yields faster training, higher accuracy, and enables higher learning rates. This suggests that it is the higher learning rate that BN enables which mediates the majority of its benefits; it improves regularization, improves accuracy, and gives faster convergence.

## Which is the best description of batch normalization?

Batch normalization, or batchnorm for short, was proposed as a technique to help coordinate the update of multiple layers in the model. Batch normalization provides an elegant way of reparametrizing almost any deep network.

## Why is the batch normalization layer of Keras broken?

The problem with the current implementation in Keras is that when a BN layer is frozen, it continues to use the mini-batch statistics during training. A better approach when the BN layer is frozen is to use the moving mean and variance that it learned during training.

## How to freeze batch normalization when fine-tuning a pretrained NN?
So I didn't write my code in tf.keras, and according to this tutorial for fine-tuning with a pretrained NN: https://keras.io/guides/transfer_learning/#freezing-layers-understanding-the-trainable-attribute, …

#### What is the effect of normalization on training?

Normalizing the inputs to a layer has an effect on the training of the model, dramatically reducing the number of epochs required. It can also have a regularizing effect, reducing generalization error, much like the use of activation regularization.
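The mechanics described above (subtract the mini-batch mean, divide by the mini-batch standard deviation, then scale and shift with the learnable γ and β) fit in a few lines of NumPy. This is a forward-pass sketch for intuition only, not a training-ready layer (no moving averages, no gradients):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch-normalise activations x of shape (batch, features)."""
    mean = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                        # per-feature mini-batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)    # normalised activations
    return gamma * x_hat + beta                # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))   # shifted, scaled activations
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
# With gamma = 1 and beta = 0, each feature now has ~zero mean and ~unit variance.
```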
# What is the Product in Math? Definition, Solved Examples, Facts (2023)

• What is Product in Math?
• Product of Fractions
• Product of Decimals
• Solved Examples
• Practice Problems

Suppose you are at the bakery and you want to buy four cupcakes. When you get there, you see that one cupcake costs $5. How will you calculate how much you should pay at the check-out counter? The concept of the product in math helps you answer this and do much more!

## What is Product in Math?

A product in math is defined as the result of multiplying two or more numbers together. Consider the same scenario: you are running an errand at the bakery and have to buy 4 cupcakes, and each cupcake costs $5. To calculate the total amount you need to pay at the checkout counter, you can add the cost of each cupcake 4 times (as you need to buy 4 cupcakes).

### Question 3: Calculate the product of 0.5 and 0.3

Choices: 0.1 | 1.5 | 0.015 | 0.15

Correct answer: 0.15. Total number of decimal places = 1 + 1 = 2. Product without the decimal point: 5 ✕ 3 = 15. Therefore, 0.5 ✕ 0.3 = 0.15.

### Question 4: Jimmy has 4 bags with 3 candies each, and Joe has 3 bags with 4 candies each. Who has more candies?

Choices: Jimmy | Joe | Both have an equal number of candies | None of the above

Correct answer: Both have an equal number of candies. Number of candies with Jimmy = 4 ✕ 3 = 12. Number of candies with Joe = 3 ✕ 4 = 12. Thus, both have an equal number of candies.

When you calculate the product of a number with 0, you get the answer 0. For instance, 5 ✕ 0 = 0; this is called the zero property of multiplication.

A multiplicand represents the number of equal groups, and a multiplier represents the number of objects in each group. For instance, consider the following equation: 2 ✕ 7 = 14. Here, 2 is the multiplicand, 7 is the multiplier, and 14 is the product.

The identity property of multiplication states that any number multiplied by 1 gives the same result as the number itself. For instance, 35 ✕ 1 = 35.
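The decimal-place rule used in Question 3 can be sketched in a few lines of Python (a toy illustration of the rule, with names of my own choosing): multiply as whole numbers, then shift the decimal point back by the total number of decimal places.

```python
def decimal_product(a, b, places_a, places_b):
    """Multiply as integers, then restore places_a + places_b decimal places."""
    raw = a * b                            # product ignoring the decimal points
    return raw / 10 ** (places_a + places_b)

# 0.5 x 0.3: integer product 5 x 3 = 15, then shift 1 + 1 = 2 places:
print(decimal_product(5, 3, 1, 1))         # 0.15
```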
Article information: Author: Arielle Torp. Last Updated: 10/28/2022.
TutorMe

Miranda N.
College Math Major
Tutor Satisfaction Guarantee

Basic Math TutorMe Question: Solve for x: x=2(3*5)-2$$^2$$+7.

Miranda N.: By following PEMDAS, we simplify within the parentheses first, leaving x=2(15)-2$$^2$$+7. Next we evaluate the exponent, so x=2(15)-4+7. Multiplication is next, simplifying to x=30-4+7. Finally, working through the addition and subtraction from left to right, x=(30-4)+7 = 26+7 = 33.

Calculus TutorMe Question: What is the derivative of f(x)=cos(5x)?

Miranda N.: By the chain rule, the derivative of cos(u(x)) is u'(x)*-sin(u(x)). Substituting 5x for u in this rule, the derivative of cos(5x) = d(5x)/dx*-sin(5x). Since d(5x)/dx=5, we can simplify to show that the derivative of cos(5x)=-5sin(5x).

Algebra TutorMe Question: What is the slope of the line that goes through (3,4) and (8,2)?

Miranda N.: We can find the slope of a line using the following equation: m = $$\frac{(y_{2}-y_{1})}{(x_{2}-x_{1})}$$. By substituting 2 for $$y_{2}$$, 4 for $$y_{1}$$, 8 for $$x_{2}$$, and 3 for $$x_{1}$$, we can determine that m = $$\frac{2-4}{8-3}$$, which simplifies to m = $$\frac{-2}{5}$$. So, the slope of this line is $$\frac{-2}{5}$$.

Send a message explaining your needs and Miranda will reply soon.
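Each of the three worked answers can be spot-checked in a couple of lines of Python (the derivative check compares $-5\sin(5x)$ against a numerical difference quotient at an arbitrary point; the setup is mine):

```python
import math

# Basic math: PEMDAS evaluation of x = 2(3*5) - 2^2 + 7
x = 2 * (3 * 5) - 2**2 + 7
print(x)                                              # 33

# Calculus: derivative of cos(5x) at a sample point vs. -5 sin(5x)
h, t = 1e-6, 0.7
numeric = (math.cos(5 * (t + h)) - math.cos(5 * (t - h))) / (2 * h)
print(abs(numeric - (-5 * math.sin(5 * t))) < 1e-6)   # True

# Algebra: slope through (3, 4) and (8, 2)
m = (2 - 4) / (8 - 3)
print(m)                                              # -0.4, i.e. -2/5
```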
## Executive Summary This blogpost shows a case study where a researcher uses Dask for mosaic image fusion. Mosaic image fusion is when you combine multiple smaller images taken at known locations and stitch them together into a single image with a very large field of view. Full code examples are available on GitHub from the DaskFusion repository: https://github.com/VolkerH/DaskFusion ## The problem ### Image mosaicing in microscopy In optical microscopy, a single field of view captured with a 20x objective typically has a diagonal on the order of a few 100 μm (exact dimensions depend on other parts of the optical system, including the size of the camera chip). A typical sample slide has a size of 25mm by 75mm. Therefore, when imaging a whole slide, one has to acquire hundreds of images, typically with some overlap between individual tiles. With increasing magnification, the required number of images increases accordingly. To obtain an overview one has to fuse this large number of individual image tiles into a large mosaic image. Here, we assume that the information required for positioning and alignment of the individual image tiles is known. In the example presented here, this information is available as metadata recorded by the microscope, namely the microscope stage position and the pixel scale. Alternatively, this information could also be derived from the image data directly, e.g. through a registration step that matches corresponding image features in the areas where tiles overlap. ## The solution The array that can hold the resulting mosaic image will often have a size that is too large to fit in RAM, therefore we will use Dask arrays and the map_blocks function to enable out-of-core processing. The map_blocks function will process smaller blocks (a.k.a chunks) of the output array individually, thus eliminating the need to hold the whole output array in memory. 
If sufficient resources are available, Dask will also distribute the processing of blocks across several workers, so we also get parallel processing for free, which can help speed up the fusion process. Typically, whenever we want to join Dask arrays, we use stack, concatenate, and block. However, these are not good tools for mosaic image fusion, because: 1. The image tiles will be overlapping, 2. Tiles may not be positioned on an exact grid and will typically also have slight rotations, as the alignment of stage and camera is not perfect. In the most general case, for example in panoramic photo mosaics, individual image tiles could be arbitrarily rotated or skewed. The starting point for this mosaic prototype was some code that reads in the stage metadata for all tiles and calculates an affine transformation for each tile that places it at the correct location in the output array. The image below shows preliminary work placing mosaic image tiles into their correct positions using the napari image viewer. Shown here is a small example with 63 image tiles. And here is an animation of placing the individual tiles. To leverage processing with Dask, we created a fuse function that generates a small block of the final mosaic and is invoked by map_blocks for each chunk of the output array. On each invocation of the fuse function, map_blocks passes a dictionary (block_info). From the Dask documentation: > Your block function gets information about where it is in the array by accepting a special block_info or block_id keyword argument. The basic outline of the fuse function of the mosaic workflow is as follows. For each chunk of the output array: 1. Determine which source image tiles intersect with the chunk. 2. Adjust the image tiles' affine transformations to take the offset of the chunk within the array into account. 3. Load all intersecting image tiles and apply their respective adjusted affine transformations to map them into the chunk. 4.
Blend the tiles using a simple maximum projection. 5. Return the blended chunk. Using a maximum projection to blend areas with overlapping tiles can lead to artifacts such as ghost images and visible tile seams, so you would typically want to use something more sophisticated in production. ### Results For datasets with many image tiles (~500-1000 tiles), we could speed up the mosaic generation from several hours to tens of minutes using this Dask-based method (compared to a previous workflow using ImageJ plugins running on the same workstation). Due to Dask's ability to handle data out-of-core and to store chunked arrays using zarr, it is also possible to run the fusion on hardware with limited RAM. Finally, we have the final mosaic fusion result. ### Code Code relating to this mosaic image fusion project can be found in the DaskFusion GitHub repository here: https://github.com/VolkerH/DaskFusion There is a self-contained example available in this notebook, which downloads reduced-size example data to demonstrate the process. ## What's next? Currently, the DaskFusion code is a proof of concept for single-channel 2D images with simple maximum-projection blending of the tiles in overlapping areas; it is not production code. However, the same principle can be used for fusing multi-channel image volumes, such as from light-sheet data, if the tile-chunk intersection calculation is extended to higher-dimensional arrays. Such even larger datasets will benefit even more from leveraging Dask, as the processing can be distributed across multiple nodes of an HPC cluster using dask-jobqueue. ### Also see Marvin's lightning talk on multi-view image fusion: a 15-minute video is available here on YouTube. The GitHub repository MVRegFus that Marvin talks about in the video is available here: https://github.com/m-albert/MVRegFus ## Acknowledgements This computational work was done by Volker Hilsenstein, in conjunction with Marvin Albert.
Volker Hilsenstein is a scientific software developer at EMBL in Theodore Alexandrov’s lab with a focus on spatial metabolomics and bio-image analysis. The sample images were prepared and imaged by Mohammed Shahraz from the Alexandrov lab at EMBL Heidelberg. Genevieve Buckley and Volker Hilsenstein wrote this blogpost.
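As a footnote to the workflow outlined above, the per-chunk fuse logic can be sketched in NumPy. This is a simplified stand-in for the DaskFusion fuse function, with assumptions of mine: integer tile offsets replace the general affine transforms, and blending is the simple maximum projection described in the post. In the real workflow a function like this runs once per chunk via `dask.array.map_blocks`, with the chunk's offset derived from the `block_info` argument.

```python
import numpy as np

def fuse_chunk(chunk_origin, chunk_shape, tiles):
    """Fuse all tiles intersecting one output chunk via maximum projection.

    tiles: iterable of (tile_array, (row_offset, col_offset)) placements.
    """
    out = np.zeros(chunk_shape, dtype=np.float32)
    r0, c0 = chunk_origin
    r1, c1 = r0 + chunk_shape[0], c0 + chunk_shape[1]
    for tile, (tr, tc) in tiles:
        # Step 1: bounding-box intersection test; skip tiles outside the chunk.
        sr0, sc0 = max(r0, tr), max(c0, tc)
        sr1, sc1 = min(r1, tr + tile.shape[0]), min(c1, tc + tile.shape[1])
        if sr0 >= sr1 or sc0 >= sc1:
            continue
        # Steps 2-3: shift the tile into chunk coordinates and crop it.
        sub = tile[sr0 - tr:sr1 - tr, sc0 - tc:sc1 - tc]
        region = out[sr0 - r0:sr1 - r0, sc0 - c0:sc1 - c0]
        # Step 4: blend overlapping data with a simple maximum projection.
        np.maximum(region, sub, out=region)
    return out          # Step 5: the blended chunk
```

Two overlapping constant tiles placed at (0, 0) and (1, 1), fused into a 3x3 chunk, show the maximum winning in the overlap region.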
# Canonical Transformation and renormalization

eljose79

Canonical transformation and renormalization... Let L be the Lagrangian of a non-renormalizable theory; then we could take its Hamiltonian. After obtaining the Hamiltonian, you could apply a canonical transformation to find another (renormalizable) Hamiltonian and solve it. Why is this trick not valid?
Infoscience
Conference paper

Comparison between front- and back-gating of Silicon Nanoribbons in real-time sensing experiments

Field-effect transistors (FETs) with open gate structures, such as Silicon Nanoribbons (SiNRs), are promising candidates to become general platforms for ultrasensitive, label-free and real-time detection of biochemical interactions on surfaces. This work proposes and demonstrates the viability of a solution for integrating Ag/AgCl reference electrodes with the microfluidics. A comparison between different polarization schemes is carried out, with an analysis of the respective advantages and disadvantages.
# Let $f(x)=\cos x(\sin x+\sqrt{\sin^2 x+\sin^2\theta})$ where $\theta$ is a given constant; then the maximum value of $f(x)$ is $\begin{array}{1 1}(a)\;\sqrt{1+\cos^2\theta}&(b)\;\sqrt{1+\sin^2\theta}\\(c)\;\mid \cos\theta\mid&(d)\;\mid \sin\theta\mid\end{array}$

$f(x)=\cos x(\sin x+\sqrt{\sin^2x+\sin^2\theta})$

$\Rightarrow f(x)\sec x-\sin x=\sqrt{\sin^2x+\sin^2\theta}$

Squaring both sides,

$(f(x)\sec x-\sin x)^2=\sin^2x+\sin^2\theta$

$\Rightarrow f^2(x)\sec^2x-2f(x)\tan x=\sin^2\theta$

Using $\sec^2x=1+\tan^2x$, this is a quadratic in $\tan x$:

$f^2(x)\tan^2x-2f(x)\tan x+f^2(x)-\sin^2\theta=0$

For real $x$, the discriminant must be non-negative:

$4f^2(x)\geq 4f^2(x)(f^2(x)-\sin^2\theta)$

$\Rightarrow f^2(x)\leq 1+\sin^2\theta$

$\Rightarrow \mid f(x)\mid\leq \sqrt{1+\sin^2\theta}$

Hence (b) is the correct option.
# Does the water level rise?

Suppose I have a closed vessel that is half-filled with water. There is a hole near the top of the vessel, and air is pumped out through this hole. Will the level of the water rise? Apparently, the level of water doesn't always rise. I thought that, since there is no atmospheric pressure, $$P_{atm} + DgH_1=DgH_2$$ thus causing the level to rise.

• Please don't mind the bad drawing. (Jan 1, 2021 at 17:46)
• Re: "...thus causing the level to rise." I think there is another possibility that you have not considered: as you decrease the ambient pressure at the surface of the water, the pressure at the bottom of the container decreases by an equal amount. (Jan 1, 2021 at 17:58)
• Hey, but according to some sources it does not necessarily rise, and I want to know why. (Jan 1, 2021 at 18:00)
• Why would you expect it to rise? If you expect it to rise because of that equation, I am suggesting a different interpretation of the equation that does not require the water level to rise. (Jan 1, 2021 at 18:01)
• Ohh, can I learn more about that other interpretation? Do you have any idea, @SolomonSLow? (Jan 1, 2021 at 18:05)

The water level will not rise; it will actually gradually fall. This is because, since there is no air pressure, the water molecules can easily escape the liquid and turn into water vapor, which gradually fills the vacuum. If the container is sealed after creating the vacuum, the level will fall to a certain point and then stop, because an equilibrium is reached where the number of water molecules evaporating from the liquid equals the number of vapor molecules re-entering the liquid. If the container is not sealed, the level will keep falling until there is no liquid water left.

• Thanks, but what's the difference between this and the candle experiment? (Jan 1, 2021 at 18:15)
• youtube.com/watch?v=LOQObaAo8GM&ab_channel=RooHaran (Jan 1, 2021 at 18:18)
• Can you check this out?
Jan 1, 2021 at 18:18 • I see what you are talking about. In the candle experiment, it is actually the air pressure outside the glass that pushes the liquid water up in order to fill the vacuum. This is possible because one part of the water outside the glass is initially in contact with the atmosphere, unlike in an enclosed box here, where there is no difference in pressure acting on the water. – user283752 Jan 1, 2021 at 18:26 • Ohh, OK, thanks, I get it now. Jan 1, 2021 at 18:27 Are you thinking of a U-shaped tube, where if you decrease the pressure on one side, pressure on the other forces the fluid to rise? Image from Physics LibreTexts. You can see how pressing on one side would make the fluid move. In the case you have shown, pressing on the top of the fluid just squeezes it. • So does a vacuum not cause the water level to rise? Jan 1, 2021 at 18:20 • I just want to know the reason. Jan 1, 2021 at 18:20 • Air presses down on the surface of the water. If you take out the air, it stops pressing down. Think of putting an object in the vessel instead of water. Suppose you press on the object, and then stop. Would you expect the object to rise? Jan 1, 2021 at 18:24 • Ohh, thanks a lot. Jan 1, 2021 at 18:26 What you will see happen is that the water will start to boil. • Does the water level rise? Jan 1, 2021 at 17:45 • Isn't this like the candle experiment? Jan 1, 2021 at 17:45 • What is the "candle experiment"? Jan 1, 2021 at 18:17 There are actually some counteracting phenomena at play here. 1. No compressibility: in a first approximation, the water can be considered incompressible. Then we come to the conclusion that the water starts to vaporize as the vessel pressure is reduced. 2. In more detail, the water density changes with temperature, and the evaporation will cool the water down, which will increase its density and hence lower the water level further (at least if it was initially above approximately 4 °C; if it was below 4 °C, then cooling it will initially decrease the density).
3. Also, water isn't completely incompressible, and lowering the vessel pressure will lead to an (albeit small) expansion. Whether this expansion outweighs the vaporization depends a lot on the shape of the vessel. Say, the effect could be strong enough if you have a long, thin tube that is open only at one end, like this (yes, we're maintaining picture quality here.. :) ):
{}
# Tag Info 1 To explain further what I wrote in the comments: Your function has the form F(t,x,co,xo), where x is the state, co the constants for the other objects, and xo the dynamical data for the other objects. Variant 1: You will get an order-1 approximation out of RK4 if you use the external dynamics in the RK4 loop as (in Python notation) dx1 = dt*F(t[k] , x[k] ... 2 For the first question: You should look into continuation. I don't know Python, so I can't write the code for you, but basically, you change the algorithm into something like this: search for some solutions for phi = -2; use those solutions as initial guesses when looking for phi = -1.9; repeat step 2 for the rest of the interval. Because you are using the ... 3 You may take a look at Peter Gottschling's Discovering Modern C++, especially chapter 7, where Mario Mulansky (one of the authors of odeint) implements a generic ODE solver (using Runge-Kutta algorithms and C++11/14). To the best of my knowledge, the second version of the book will be published within the next months and will cover C++17 and C++20. Another ... 2 Browsing the paper, I can confirm that all integration of the trajectories is done with a fixed time step. The authors compute approximations $A_h(t)$ for all $t \in \Sigma_h$ where $$\Sigma_h = \{ nh \: : \: 0 \leq nh \leq T\}.$$ They use at least two different values of $h$, namely $h_1=0.125$ days and $h_2 = 8$ days. It is important to recognize that ... 2 As the integrand (in the answer you quote) is periodic and fairly smooth, you can evaluate it numerically using the trapezoidal rule. For such integrands, the convergence is exponential, so you shouldn't need too many points. Here is a nice explanation. I think it is quite feasible to do this in Excel. Here's some MATLAB code that shows the rapid convergence ... 2 You have not completely vectorized your code. z(1:10) is the first 10 numbers in the data of z, not the first 10 rows.
For that you need to write z(1:10,:), as you have already correctly done on the left side. Why the flip in the matrix orientation occurs between the two calls, that is, why z(1:10) inherits the column-ness in the first case and the row-ness ... 2 odenumjac calls your function in a vectorized manner, it seems, and your function is not vectorized. You can easily change that by changing the second index of f in your function to : instead of 1, for instance: f(10,:) = 2*(x(end-1,:) - x(end,:)); I thought the setting joptions.vectvars=1 would not allow the vectorized call (see one of your other questions). ... 3 The effect is due to quantization noise and/or aliasing: you are not computing with the true minimum of the radius along the orbit, but with the closest sample point. This means that the sample point for the half-step integration can lie in between. Close to the minimum this results in an $O(e \cdot dt^2)$ difference ($e$ the eccentricity) between the computed ... Top 50 recent answers are included
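The continuation recipe described above (answer 2: solve at one parameter value, then reuse each solution as the initial guess for the next value) can be sketched in plain Python. The scalar test problem g(x, phi) = x³ + x − phi and the hand-rolled Newton solver below are our own stand-ins for the asker's system, not code from the thread.

```python
def g(x, phi):
    # Toy problem: find x with g(x, phi) = 0 for each value of the parameter phi.
    return x ** 3 + x - phi

def dg(x, phi):
    # Derivative of g with respect to x (always >= 1, so Newton is well behaved).
    return 3 * x ** 2 + 1

def newton(x0, phi, tol=1e-12, maxit=50):
    # Basic Newton iteration starting from x0.
    x = x0
    for _ in range(maxit):
        step = g(x, phi) / dg(x, phi)
        x -= step
        if abs(step) < tol:
            break
    return x

# Continuation: sweep phi, seeding each solve with the previous solution.
phi_values = [-2.0 + 0.1 * k for k in range(21)]   # -2.0, -1.9, ..., 0.0
x = -1.0                                           # rough guess for phi = -2
branch = []
for phi in phi_values:
    x = newton(x, phi)                             # previous x is the initial guess
    branch.append((phi, x))

assert all(abs(g(x, phi)) < 1e-10 for phi, x in branch)
```

Because consecutive parameter values give nearby roots, each Newton solve starts close to its solution, which is exactly why continuation is more robust than solving each parameter value from a cold start.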
{}
Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $[-1349, 29035]$ The Dead Sea, at 1349 ft from sea level, is below sea level, so the value is $-1349$. Mt. Everest, at 29035 ft from sea level, is above, making the value $29035$.
{}
## Transpose of a Matrix in Python A transpose of a matrix is obtained by interchanging all its rows into columns or columns into rows. It is denoted by $$\displaystyle {{A}^{t}}$$ or $$\displaystyle {{A}^{\prime}}$$. For example, if  $$\displaystyle A=\left[ {\begin{array}{*{20}{c}} 1 & 2 & 3 \\ 4 & 3 & 5 \\ 2 & 6 & 2 \end{array}} \right]\,$$ then  $$\displaystyle A^t=\left[ … Read more ## Dot Product of Two Matrices in Python The product of two matrices A and B is possible only if the number of columns of matrix A is equal to the number of rows of matrix B. A mathematical example of the dot product of two matrices A & B is given below. If $$\displaystyle A=\left[ {\begin{array}{*{20}{c}} 1 & 2 \\ 3 … Read more ## Matrix Addition and Subtraction in Python Matrix addition and subtraction in the Python programming language are performed like the normal algebraic operations. Before discussing these operations, it is necessary to say a bit about algebra, a word taken from the Arabic Al-Jabr; afterwards, this word turned into "algebra". Algebra is a branch of mathematics that provides an easy solution to … Read more ## Linear Algebra in TensorFlow (Scalars, Vectors & Matrices) Linear Algebra in TensorFlow: TensorFlow is open-source software under the Apache open-source license for dataflow programming, and it is frequently used for machine-learning applications such as deep neural networks to improve the performance of search engines such as Google, as well as image captioning, recommendation, and translation. For example, when a user types a keyword in Google's search bar, it … Read more ## Scalars, Vectors and Matrices in Python (Using Arrays) Arrays in Python are frequently used to work with scalars, vectors and matrices, the topic of today's post. This post is a continuation of linear algebra for data science.
We use NumPy, a library for the Python programming language which allows us to work with multidimensional arrays and matrices along with a large collection of high-level mathematical … Read more ## Linear Algebra for Data Science Linear algebra for data science and machine learning is essential, as the concepts of linear algebra are used to understand the working of algorithms. In this post, we are going to discuss the basic concepts of linear algebra. Why Linear Algebra? Enormous datasets mostly contain hundreds to a large number of individual data objects. … Read more ## 4 Types of Machine Learning (Supervised, Unsupervised, Semi-supervised & Reinforcement) Machine learning is a subfield of artificial intelligence. The concept of machine learning was originally introduced in 1959 by the American Arthur Samuel. He was an expert in the field of computer gaming and intelligent machines. In this post, we are going to discuss the types of machine learning. There are four major types of machine … Read more ## 4 Major Types of Artificial Intelligence What is Artificial Intelligence? In the modern world, artificial intelligence is playing a significant role in almost all fields of life. It has also become a human need in recent times. We start with the definition of artificial intelligence. Artificial intelligence and machine learning share the same framework and tools. There are some definitions of … Read more ## 7 Commonly Used Machine Learning Algorithms for Classification Generally, data is a set of factual information based on numbers, words, observations, and measurements that can be used for calculation, discussion and reasoning. The raw dataset is the essential foundation of data science, and it may be of diverse types, such as structured, unstructured, and semi-structured data. Structured data is formatted in tabular … Read more ## How to do regression in Excel? (Simple Linear Regression) Performing regression analysis in Excel is a very easy task.
Before going into the practical part, we recall the concepts of regression. Let's describe it briefly. Regression is used for predictive analysis: it is used for finding the strength of predictors and forecasting an effect. Its most common type is linear regression. The linear regression can be … Read more
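The transpose and matrix-product rules described in the first two sections above can be sketched with NumPy. This is our own minimal illustration: the 3×3 matrix is the one from the transpose example, while B is a made-up 3×2 matrix chosen so that the product is defined.

```python
import numpy as np

# Transpose: interchange rows and columns.
A = np.array([[1, 2, 3],
              [4, 3, 5],
              [2, 6, 2]])
At = A.T                      # equivalently np.transpose(A)

# Matrix ("dot") product: needs A's column count to equal B's row count.
B = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3x2, so the 3x3 A can multiply it
C = A @ B                     # equivalently np.dot(A, B); result is 3x2

assert (At.T == A).all()      # transposing twice recovers A
assert C.shape == (3, 2)
```

Note that `A.T` is a view, not a copy, which is why the double-transpose check is cheap.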
{}
# Properties Label 720.2.w.b Level $720$ Weight $2$ Character orbit 720.w Analytic conductor $5.749$ Analytic rank $0$ Dimension $4$ CM no Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$720 = 2^{4} \cdot 3^{2} \cdot 5$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 720.w (of order $$4$$, degree $$2$$, not minimal) ## Newform invariants Self dual: no Analytic conductor: $$5.74922894553$$ Analytic rank: $$0$$ Dimension: $$4$$ Relative dimension: $$2$$ over $$\Q(i)$$ Coefficient field: $$\Q(\zeta_{8})$$ Defining polynomial: $$x^{4} + 1$$ Coefficient ring: $$\Z[a_1, \ldots, a_{11}]$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 180) Sato-Tate group: $\mathrm{SU}(2)[C_{4}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{8}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + ( -2 \zeta_{8} - \zeta_{8}^{3} ) q^{5} + ( -2 + 2 \zeta_{8}^{2} ) q^{7} +O(q^{10})$$ $$q + ( -2 \zeta_{8} - \zeta_{8}^{3} ) q^{5} + ( -2 + 2 \zeta_{8}^{2} ) q^{7} + ( 2 \zeta_{8} + 2 \zeta_{8}^{3} ) q^{11} + ( 3 + 3 \zeta_{8}^{2} ) q^{13} + 2 \zeta_{8} q^{17} + 4 \zeta_{8}^{2} q^{19} -8 \zeta_{8}^{3} q^{23} + ( -4 + 3 \zeta_{8}^{2} ) q^{25} + ( 7 \zeta_{8} - 7 \zeta_{8}^{3} ) q^{29} + 8 q^{31} + ( 6 \zeta_{8} - 2 \zeta_{8}^{3} ) q^{35} + ( -3 + 3 \zeta_{8}^{2} ) q^{37} + ( \zeta_{8} + \zeta_{8}^{3} ) q^{41} -12 \zeta_{8} q^{47} -\zeta_{8}^{2} q^{49} + 10 \zeta_{8}^{3} q^{53} + ( 6 - 2 \zeta_{8}^{2} ) q^{55} + ( 2 \zeta_{8} - 2 \zeta_{8}^{3} ) q^{59} + ( -3 \zeta_{8} - 9 \zeta_{8}^{3} ) q^{65} + ( -8 + 8 \zeta_{8}^{2} ) q^{67} + ( 4 \zeta_{8} + 4 \zeta_{8}^{3} ) q^{71} + ( 7 + 7 \zeta_{8}^{2} ) q^{73} -8 \zeta_{8} q^{77} + 12 \zeta_{8}^{3} q^{83} + ( 2 - 4 \zeta_{8}^{2} ) q^{85} + ( -\zeta_{8} + \zeta_{8}^{3} ) q^{89} -12 q^{91} + ( 4 \zeta_{8} - 8 \zeta_{8}^{3} ) q^{95} + ( 3 - 3 \zeta_{8}^{2} ) q^{97} +O(q^{100})$$ 
$$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q - 8q^{7} + O(q^{10})$$ $$4q - 8q^{7} + 12q^{13} - 16q^{25} + 32q^{31} - 12q^{37} + 24q^{55} - 32q^{67} + 28q^{73} + 8q^{85} - 48q^{91} + 12q^{97} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/720\mathbb{Z}\right)^\times$$. $$n$$ $$181$$ $$271$$ $$577$$ $$641$$ $$\chi(n)$$ $$1$$ $$1$$ $$-\zeta_{8}^{2}$$ $$-1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 17.1 0.707107 − 0.707107i −0.707107 + 0.707107i 0.707107 + 0.707107i −0.707107 − 0.707107i 0 0 0 −0.707107 + 2.12132i 0 −2.00000 2.00000i 0 0 0 17.2 0 0 0 0.707107 2.12132i 0 −2.00000 2.00000i 0 0 0 593.1 0 0 0 −0.707107 2.12132i 0 −2.00000 + 2.00000i 0 0 0 593.2 0 0 0 0.707107 + 2.12132i 0 −2.00000 + 2.00000i 0 0 0 $$n$$: e.g. 
2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 3.b odd 2 1 inner 5.c odd 4 1 inner 15.e even 4 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 720.2.w.b 4 3.b odd 2 1 inner 720.2.w.b 4 4.b odd 2 1 180.2.j.a 4 5.b even 2 1 3600.2.w.f 4 5.c odd 4 1 inner 720.2.w.b 4 5.c odd 4 1 3600.2.w.f 4 8.b even 2 1 2880.2.w.a 4 8.d odd 2 1 2880.2.w.j 4 12.b even 2 1 180.2.j.a 4 15.d odd 2 1 3600.2.w.f 4 15.e even 4 1 inner 720.2.w.b 4 15.e even 4 1 3600.2.w.f 4 20.d odd 2 1 900.2.j.a 4 20.e even 4 1 180.2.j.a 4 20.e even 4 1 900.2.j.a 4 24.f even 2 1 2880.2.w.j 4 24.h odd 2 1 2880.2.w.a 4 36.f odd 6 2 1620.2.x.a 8 36.h even 6 2 1620.2.x.a 8 40.i odd 4 1 2880.2.w.a 4 40.k even 4 1 2880.2.w.j 4 60.h even 2 1 900.2.j.a 4 60.l odd 4 1 180.2.j.a 4 60.l odd 4 1 900.2.j.a 4 120.q odd 4 1 2880.2.w.j 4 120.w even 4 1 2880.2.w.a 4 180.v odd 12 2 1620.2.x.a 8 180.x even 12 2 1620.2.x.a 8 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 180.2.j.a 4 4.b odd 2 1 180.2.j.a 4 12.b even 2 1 180.2.j.a 4 20.e even 4 1 180.2.j.a 4 60.l odd 4 1 720.2.w.b 4 1.a even 1 1 trivial 720.2.w.b 4 3.b odd 2 1 inner 720.2.w.b 4 5.c odd 4 1 inner 720.2.w.b 4 15.e even 4 1 inner 900.2.j.a 4 20.d odd 2 1 900.2.j.a 4 20.e even 4 1 900.2.j.a 4 60.h even 2 1 900.2.j.a 4 60.l odd 4 1 1620.2.x.a 8 36.f odd 6 2 1620.2.x.a 8 36.h even 6 2 1620.2.x.a 8 180.v odd 12 2 1620.2.x.a 8 180.x even 12 2 2880.2.w.a 4 8.b even 2 1 2880.2.w.a 4 24.h odd 2 1 2880.2.w.a 4 40.i odd 4 1 2880.2.w.a 4 120.w even 4 1 2880.2.w.j 4 8.d odd 2 1 2880.2.w.j 4 24.f even 2 1 2880.2.w.j 4 40.k even 4 1 2880.2.w.j 4 120.q odd 4 1 3600.2.w.f 4 5.b even 2 1 3600.2.w.f 4 5.c odd 4 1 3600.2.w.f 4 15.d odd 2 1 3600.2.w.f 4 15.e even 4 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following 
linear operators acting on $$S_{2}^{\mathrm{new}}(720, [\chi])$$: $$T_{7}^{2} + 4 T_{7} + 8$$ $$T_{13}^{2} - 6 T_{13} + 18$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$T^{4}$$ $3$ $$T^{4}$$ $5$ $$25 + 8 T^{2} + T^{4}$$ $7$ $$( 8 + 4 T + T^{2} )^{2}$$ $11$ $$( 8 + T^{2} )^{2}$$ $13$ $$( 18 - 6 T + T^{2} )^{2}$$ $17$ $$16 + T^{4}$$ $19$ $$( 16 + T^{2} )^{2}$$ $23$ $$4096 + T^{4}$$ $29$ $$( -98 + T^{2} )^{2}$$ $31$ $$( -8 + T )^{4}$$ $37$ $$( 18 + 6 T + T^{2} )^{2}$$ $41$ $$( 2 + T^{2} )^{2}$$ $43$ $$T^{4}$$ $47$ $$20736 + T^{4}$$ $53$ $$10000 + T^{4}$$ $59$ $$( -8 + T^{2} )^{2}$$ $61$ $$T^{4}$$ $67$ $$( 128 + 16 T + T^{2} )^{2}$$ $71$ $$( 32 + T^{2} )^{2}$$ $73$ $$( 98 - 14 T + T^{2} )^{2}$$ $79$ $$T^{4}$$ $83$ $$20736 + T^{4}$$ $89$ $$( -2 + T^{2} )^{2}$$ $97$ $$( 18 - 6 T + T^{2} )^{2}$$
{}
# ways of selecting consecutive persons sitting at a table I'm trying to solve the following problem: Ten people are sitting around a round table. Three of them are chosen at random to give a presentation. What is the probability that the three chosen people were sitting in consecutive seats? I got the wrong answer but cannot see the error in my reasoning. This is how I see it: 1) the selection of the first person is unconstrained. 2) the next person must be selected from the 2 spots adjacent to the first. So this choice is limited to 2/9 of the possible choices. 3) the third choice must be taken from the one free spot next to the first person chosen, or the one free spot next to the 2nd person chosen. So this choice is limited to 2/8 of the possible choices. 4) multiplying these we get: 2/9 * 2/8 = 1/18 Let's count as our outcomes the ways to select 3 people without regard to order. There are $$\binom{10}{3} = 120$$ ways to select any 3 people. The number of successful outcomes is the number of ways to select 3 consecutive people. There are only 10 ways to do this -- think of first selecting the middle person, then taking his or her two neighbors. Therefore, the probability is $$\frac{10}{120} = \boxed{\frac{1}{12}}$$. You forgot the possibility that the second person can be chosen to sit two seats away from the first, with the third person then chosen to be the one between the first and the second. This gives an additional $$\frac29\cdot \frac18 = \frac1{36}$$, bringing the total up to $$\frac1{18}+\frac1{36} = \frac1{12}$$. • great answer, thanks! – David J. Feb 8 at 9:00 If the first person has been chosen, then 2 more of the remaining $$9$$ must be chosen. In $$3$$ of the $$\binom92=36$$ possible pairs the three chosen persons sit consecutively, so the probability is: $$\frac3{\binom92}=\frac1{12}$$
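The 1/12 answer is easy to confirm by brute force (our own sketch, not from the thread): enumerate all selections of 3 seats out of 10 and count those that occupy three consecutive seats around the circle.

```python
from fractions import Fraction
from itertools import combinations

def is_consecutive(triple, n=10):
    # A triple is consecutive iff it equals {s, s+1, s+2} (mod n) for some seat s.
    chosen = set(triple)
    return any(chosen == {k % n, (k + 1) % n, (k + 2) % n} for k in range(n))

triples = list(combinations(range(10), 3))
hits = sum(is_consecutive(t) for t in triples)
prob = Fraction(hits, len(triples))
print(hits, len(triples), prob)  # 10 120 1/12
```

The 10 successful triples are exactly the ones described in the accepted answer: one for each choice of "middle person".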
{}
# What transforms under SU(2) as a matrix under SO(3)? A vector $\boldsymbol{r}$ in $\mathbb{R}^3$ transforms under rotation $\boldsymbol{A}$ to $\boldsymbol{r}'=\boldsymbol{Ar}$. It is equivalent to an SU(2) "rotation" as $$\left( \boldsymbol{r}'\cdot\boldsymbol{\sigma} \right) = \boldsymbol{h} \left( \boldsymbol{r}\cdot\boldsymbol{\sigma} \right) \boldsymbol{h}^{-1},$$ where $\boldsymbol{h}$ is the counterpart of $\boldsymbol{A}$ in SU(2) given by the homomorphism between these two groups. Now the question is, what would be the equivalent transformation in SU(2) of the rotation of a matrix in $\mathbb{R}^3$? In other words, what is the equivalent in SU(2) of $\boldsymbol{M}'=\boldsymbol{A}\boldsymbol{M}\boldsymbol{A}^{-1}$. - I am a little confused about what you're asking. You presumably know that there is a double cover $\phi : \text{SU}(2) \to \text{SO}(3)$. Given any action of $\text{SO}(3)$ on any kind of thing, pulling back along $\phi$ gives you an action of $\text{SU}(2)$ on that thing (an element $g \in \text{SU}(2)$ acts by however $\phi(g)$ acts). – Qiaochu Yuan Jun 20 '12 at 7:15 @QiaochuYuan: I didn't learn the concept of "pullback". Can you elaborate? – Siyuan Ren Jun 20 '12 at 7:33 All I mean is that if $S$ is a set (e.g. a vector space) and $\rho : \text{SO}(3) \to \text{Aut}(S)$ is an action of $\text{SO}(3)$ on that set (e.g. a linear representation on a vector space) then $\rho \circ \phi : \text{SU}(2) \to \text{Aut}(S)$ is the corresponding action of $\text{SU}(2)$. – Qiaochu Yuan Jun 20 '12 at 8:11 @Qiaochu: I think the OP is asking for an explicit formula for $\rho \circ \phi$ where $S = \mathcal{M}_{3\times 3}$ and $\rho(A)M = AMA^{-1}$. – Willie Wong Jun 20 '12 at 8:15 Firstly, we need to map $\mathbb{R}^3$ to the representation space $V$ for $\mathrm{SU}(2)$. 
One possible map is given by the following formula: $$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto x \mathbf{I} + y \mathbf{J} + z \mathbf{K}$$ where \begin{align} \mathbf{I} & = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} & \mathbf{J} & = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} & \mathbf{K} & = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} \end{align} $\mathrm{SU}(2)$ acts on $V$ by conjugation: so for each $X$ in $V$ and each $A$ in $\mathrm{SU}(2)$, the ordinary matrix product $A X A^{-1}$ is in $V$. This is linear in $X$ and is indeed a linear representation of $\mathrm{SU}(2)$. Indeed, if $$A = \begin{pmatrix} r e^{i \theta} & s e^{-i \phi} \\ -s e^{i \phi} & r e^{-i \theta} \end{pmatrix}$$ where $r, s, \theta, \phi$ are real numbers and $r^2 + s^2 = 1$, then $A \in \mathrm{SU}(2)$, and \begin{align} A \mathbf{I} A^{-1} & = (r^2 - s^2) \mathbf{I} + 2 r s \sin (\theta - \phi) \mathbf{J} - 2 r s \cos (\theta - \phi) \mathbf{K} \\ A \mathbf{J} A^{-1} & = 2 r s \sin (\theta + \phi) \mathbf{I} + (r^2 \cos 2 \theta + s^2 \cos 2 \phi) \mathbf{J} + (r^2 \sin 2 \theta - s^2 \sin 2 \phi) \mathbf{K} \\ A \mathbf{K} A^{-1} & = 2 r s \cos (\theta + \phi) \mathbf{I} - (r^2 \sin 2 \theta + s^2 \sin 2 \phi) \mathbf{J} + (r^2 \cos 2 \theta - s^2 \cos 2 \phi) \mathbf{K} \end{align} Thus, the induced action of $\mathrm{SU}(2)$ on $\mathbb{R}^3$ is given by the group homomorphism below, $$\begin{pmatrix} r e^{i \theta} & s e^{-i \phi} \\ -s e^{i \phi} & r e^{-i \theta} \end{pmatrix} \mapsto \begin{pmatrix} r^2 - s^2 & 2 r s \sin (\theta + \phi) & 2 r s \cos (\theta + \phi) \\ 2 r s \sin (\theta - \phi) & r^2 \cos 2 \theta + s^2 \cos 2 \phi & -r^2 \sin 2 \theta - s^2 \sin 2 \phi \\ -2 r s \cos (\theta - \phi) & r^2 \sin 2 \theta - s^2 \sin 2 \phi & r^2 \cos 2 \theta - s^2 \cos 2 \phi \end{pmatrix}$$ and one may verify that the RHS is a matrix in $\mathrm{SO}(3)$. 
You are just re-expressing what I already know, that $\left( \boldsymbol{r}'\cdot\boldsymbol{\sigma} \right) = \boldsymbol{h} \left( \boldsymbol{r}\cdot\boldsymbol{\sigma} \right) \boldsymbol{h}^{-1}$. – Siyuan Ren Jun 20 '12 at 10:47 I stated clearly in my question: what is the equivalent transformation in SU(2) of the rotation of a linear operator (matrix) instead of vector in $\mathbb{R}^3$? – Siyuan Ren Jun 20 '12 at 11:54 Just conjugate by the corresponding $\mathrm{SO}(3)$ matrix. There isn't a nice way of representing it as a matrix formula in terms of the $\mathrm{SU}(2)$ matrix. (Think about it: if a vector in $\mathbb{R}^3$ corresponds to a matrix in $V$, then a matrix would have to correspond to some hideous rank-3 tensor!) – Zhen Lin Jun 20 '12 at 13:22
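As a numerical sanity check on the discussion above (our own sketch, not part of the thread), one can build an arbitrary $h\in\mathrm{SU}(2)$, read off the induced $3\times 3$ matrix $R$ from $h\,\sigma_j\,h^{-1}$, verify that $R\in\mathrm{SO}(3)$, and confirm that the induced action on a $3\times 3$ matrix $M$ is ordinary conjugation $RMR^{-1}$, as the last comment says. The axis-angle parametrization and helper names are our own.

```python
import numpy as np

# Pauli matrices (the sigma in the thread's r·sigma notation).
sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
], dtype=complex)

def su2_from_axis_angle(n, t):
    """h = cos(t/2) I - i sin(t/2) (n·sigma): a generic element of SU(2)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * np.einsum('i,ijk->jk', n, sigma)

def so3_from_su2(h):
    """Read off R with (R r)·sigma = h (r·sigma) h^{-1}, using tr(sigma_i sigma_j) = 2 delta_ij."""
    R = np.empty((3, 3))
    for j in range(3):
        m = h @ sigma[j] @ h.conj().T          # h^{-1} = h-dagger since h is unitary
        R[:, j] = [np.trace(s @ m).real / 2 for s in sigma]
    return R

h = su2_from_axis_angle([1.0, 2.0, 2.0], 0.7)
R = so3_from_su2(h)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

# The induced action on a 3x3 matrix M is plain conjugation, M' = R M R^{-1}.
M = np.diag([1.0, 2.0, 3.0])
M2 = R @ M @ R.T                               # R^{-1} = R^T for a rotation
assert np.isclose(np.trace(M2), np.trace(M))   # conjugation preserves the trace
```

This makes concrete the point in the comments: a matrix in $\mathbb{R}^3$ has no simple $2\times 2$ counterpart, so one first recovers $R = \phi(h)$ and then conjugates by it.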
{}
# Tag Info 14 The "simplest" classical explanation I know is the van der Waals interaction described by Keesom between two permanent dipoles. Let us consider two permanent dipoles $\vec{p}_1$ (located at $O_1$) and $\vec{p}_2$ (located at $O_2$). Their potential energy of interaction is: $$U(\vec{p}_1,\vec{p}_2,\vec{O_1 O_2}) = -\vec{p}_1\cdot \vec{E}_2 = ...$$ 6 The dipole moment is a vector and can be calculated using the formula $$\vec{p} = \sum_i q_i \vec{r}_i.$$ It can be shown easily using the formula above that in the case of two charges separated by a distance $d$ $$\vec{p} = q \vec{d},$$ where the vector $\vec{d}$ starts at the negative charge and ends at the positive charge. ... 5 I'll give you the derivation from my book, which includes a nice way to see how the delta functions arise: ... We can derive the potential field $\vec{A}$ and the electromagnetic fields $\vec{E}$ and ... 5 I don't think you need quantum mechanics to understand what's going on in the dipole-induced-dipole interaction. The basic mechanism is quite simple, and only the details of the calculations change when switching to a quantum description. Polarizable molecule in an external field So first things first. Let us consider a simple model of a polarizable molecule as ... 4 The force on a dipole placed in an electric field is given by $\mathbf{F} = (\mathbf{p}\cdot \nabla)\mathbf{E}$ (see, e.g., Griffiths, 3rd edition, eq. 4.5). Recall that $$\nabla(\mathbf{p}\cdot\mathbf{E}) = \mathbf{p}\times (\nabla\times \mathbf{E}) + \mathbf{E}\times(\nabla\times \mathbf{p})+(\mathbf{p}\cdot\nabla)\mathbf{E} + ...$$ 4 It is true that there is no (electrostatic) force between an electrified body and a body not electrified. (Let's ignore gravitational force for now.)
It is also true that all bodies (in earth or earth-like environment) are electrified or will be electrified if approached by another electrified body. But in general, not all bodies can be electrified. For ... 4 The potential energy in this case should be U=+\vec{m}.\vec{B}, hence the potential energy is minimized, as it should be. Here is the explanation: Let’s look at the derivation of interaction energy between magnetic dipole and magnetic field carefully. The dipole energy U=-\vec{m}.\vec{B} is derived using principle of virtual work with an assumption ... 4 Because the black area is half the box below. To explain: move the dipole from an area of no field to an area of field strength E. As you do, there's a force proportional to the dipole moment and to the gradient of E. For a fixed dipole, this force depends only on the gradient (horizontal dashed line). But for an induced dipole, the dipole moment depends ... 4 Classically a non-pointlike spinning charged object possesses a magnetic dipole moment due to the fact that charged particles in the object are spinning around some axis. In contrast, the electron has a dipole moment that arises from its intrinsic spin angular momentum. As you point out, the electron has no internal structure, so the spin does not refer to ... 4 Here's one way to think about it (though it isn't mathematically rigorous). From very far away the dipole would appear to have zero charge and thus there wouldn't be an electric field at all. However, you also know that the electric field falls off as 1/r, so from very far away you'd expect the electric field to be small. The additional charge ... 3 When an electric dipole is placed in a uniform electric field making an angle with the direction of the field as shown in the figure. Force on charge -q=-q\overrightarrow{E} (opposite to \overrightarrow{E}) Force on charge +q=q\overrightarrow{E} (along \overrightarrow{E}) Thus, electric dipole is under the action of two equal and unlike ... 
3 If the magnetic dipoles in a material are ordered, the material has a lower entropy because there are many fewer ways how the spins may be oriented if most of them (or all of them!) are required to be aligned. Such an alignment also reduces the heat capacity because before the dipoles got aligned, the orientation (direction) of each dipole was a degree of ... 3 Very simply, the field of the positive and negative elements of the dipole "almost" cancel out - but not quite. It is because they are some small distance away that there is a residual (third order) term. You can see this by taking two charges +q and -q at a distance 2d, and look at the field a distance r from the center of the two (on the same ... 3 The vector potential of an oscillating dipole (using the usual electric dipole approximation) can be written as \vec{A} =\frac{\mu_0 I_0 l}{4\pi r} \cos \omega(t-r/c)\ \hat{z},$$where the dipole is of length l, with a current I_0 \cos \omega t and \hat{z} is a unit vector along the z-axis of the dipole. Using the Lorenz gauge one can then ... 3 Well, I recommend always use the definition to prove things. So, take the definition of superposition: \psi_{net} = \psi_1 + \psi_2. So, let two potentials be V_1 and V_2. Since potentials obey superposition principle, the net potential is: V_{net} = V_1 + V_2. Now.. let two dipole moments be p_1 and p_2. The net dipole:$$ p = ... 3 The electric dipole moment is defined as $$p = \int r \; dq$$ In the case of a pair of charges for which both charges are of the same magnitude, the choice of the origin turns out to be irrelevant: $$p = \mathbf{r_1} q - \mathbf{r_2} q = q(\mathbf{r_1} - \mathbf{r_2}) = q\mathbf{d}$$ where $\mathbf{d}$ is the distance between the charges. However, when ... 3 It's a matter of choice. You can set the potential energy to be any value at any angle. You don't even have to have a zero-value at all; you could make $U$ purely positive or purely negative if you're feeling adventurous. 
But the advantage for $U(\pi/2)=0$ is, as you said, the simple expression $U(\theta)=-pE\cos\theta = -\vec p \cdot \vec E$ instead of ... 3 You'd need to consider the angle between the two antennas and the distance between them. Dipole antennas do not radiate uniformly into $4\pi$. If you're looking up or down at the poles of the antenna, you will see no radiation (ideally). Looking at a direction transverse to this, the radiation is at a maximum. The angular dependence of the far (electric) ... 2 Your expression "the angular momentum is $m_j \hbar$" (where $\hbar = h/2 \pi$) is incorrect. This quantity is the projection of the angular momentum on the $z$-axis; it represents the direction of the spin. This is why it corresponds to the $m$ quantum number, not the $\ell$ quantum number in the $| n \ell m \rangle$ basis. Which angular momentum to use ... 2 The answer lies in the "polarizability" of the sphere. This relates the external field to the induced dipole moment. For a (ridiculously) rigorous treatment, a good book is "The Scattering of Light by Small Particles" by Craig Bohren. However, if you're looking for a simple result, the polarizability and the dipole are related like this: $p = \alpha E$ ... 2 The Born approximation is when you technically have an extended body but you ignore the scattering from the object itself. An example would be if you had material that was almost transparent, imagine some stained glass that is only ever so super lightly stained. You can basically treat each part as if it saw the normal unchanged light, technically the ... 2 The dipole has its least potential energy when it is in equilibrium orientation, which is when its momentum is lined up with Electric field (then $\tau$ = 0) It has greater potential energy in all other orientations. We are free to define the zero potential energy configuration in a perfectly arbitrary way , because only difference in potential energy ... 
2 The binomial expansion says that $(1+x)^n=1+{n \choose 1}x^1+{n \choose 2}x^2 + ...$. This should be familiar to you for positive, integer n just by expanding out the parenthesis. For NEGATIVE n, it still holds, provided you interpret ${n \choose k}$ correctly for negative numbers; for our purposes, we just need to know ${n\choose 1}=n$ always. For very ... 2 There are two misconceptions present in your explanation of the problem. $N$ is not number of dipoles, but their volumetric density $Q$ is not total charge, but equivalent charge at boundaries of the dielectric. The idea is that (a) dielectric of the area $A$ and height $L$ polarized homogeneously along its height and (b) two plan-parallel plates of the ... 2 One must distinguish two conditions: whether the eigenvalue of $|\vec J|^2$, the squared total angular momentum, is changing; and whether the whole vector $\vec J$ is changing. The latter is guaranteed in a dipole transition: one can't keep the whole vector constant. At most, you may satisfy the former condition: the length of $\vec J$ may stay constant so ... 2 Normally the transition amplitude is calculated with $e^{i\vec{k}\vec{r}}$. For "small" product $\vec{k}\vec{r}$ one expands the exponential in Taylor series and one leaves only the "dipole" term in the transition amplitude calculation: $\propto\vec{k}\cdot\langle\psi_1|\vec{r}|\psi_2\rangle$. So it is a "dipole" part of the transition amplitude, which in ... 2 Dipole $\def\vp{{\vec p}}\def\ve{{\vec e}}\def\l{\left}\def\r{\right}\def\vr{{\vec r}}\def\ph{\varphi}\def\eps{\varepsilon}\def\grad{\operatorname{grad}}\def\vE{{\vec E}}$ $\vp:=\ve Ql$ constant $l\rightarrow 0$, $Q\rightarrow\infty$. \begin{align} \ph(\vr,\vr') &= \lim_{l\rightarrow0}\frac{Ql\ve\cdot\ve}{4\pi\eps_0 l}\l(\frac{1}{|\vr-\vr'-\ve\frac ... 
2 Hint: Interaction energy of two dipoles : $$U=\frac{1}{4\pi \epsilon_0r^3}\left( \mathbf{p}_1.\mathbf{p}_2-3\left ( \mathbf{p}_1.\hat r )(\mathbf{p}_2.\hat r\right) \right)$$ Only top voted, non community-wiki answers of a minimum length are eligible
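The dipole–dipole interaction energy quoted in the last hint can be evaluated numerically. The sketch below is our own; EPS0 and the test dipole values are illustrative choices, not from the answers.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def dipole_interaction_energy(p1, p2, r_vec):
    """U = (p1·p2 - 3 (p1·rhat)(p2·rhat)) / (4 pi eps0 r^3), the formula in the hint."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (np.dot(p1, p2) - 3 * np.dot(p1, rhat) * np.dot(p2, rhat)) / (4 * np.pi * EPS0 * r ** 3)

p = np.array([0.0, 0.0, 1e-30])   # a roughly molecular-scale dipole moment, in C·m
head_to_tail = dipole_interaction_energy(p, p, np.array([0.0, 0.0, 1e-9]))
side_by_side = dipole_interaction_energy(p, p, np.array([1e-9, 0.0, 0.0]))
# Aligned head-to-tail along the separation axis: attraction (U < 0).
# Parallel but side by side: repulsion (U > 0), and half the magnitude.
assert head_to_tail < 0 < side_by_side
```

The signs match the usual physical picture: collinear aligned dipoles attract, while parallel dipoles sitting side by side repel.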
{}
Try NerdPal! Our new app on iOS and Android

Integrate the function $\frac{1}{x^2}$ from $1$ to $2$

Final Answer: $\frac{1}{2}$

Got another answer? Verify it here!

Step-by-step Solution

Problem to solve: $\int_{1}^{2}\frac{1}{x^2}dx$

1 Rewrite the exponent using the power rule $\frac{a^m}{a^n}=a^{m-n}$, where in this case $m=0$: $\int_{1}^{2} x^{-2}dx$

Learn how to solve definite integrals problems step by step online. Integrate the function 1/(x^2) from 1 to 2. Rewrite the exponent using the power rule $\frac{a^m}{a^n}=a^{m-n}$, where in this case $m=0$. Apply the power rule for integration, $\displaystyle\int x^n dx=\frac{x^{n+1}}{n+1}$, where $n$ represents a number or constant function, such as $-2$. Evaluate the definite integral. Simplify.

Final Answer: $\frac{1}{2}$

Main topic: Definite Integrals ~ 0.04 s
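The evaluation that the page summarizes ("apply the power rule for integration, then evaluate the definite integral") can be written out explicitly:

```latex
\int_{1}^{2} x^{-2}\,dx
  = \left[\frac{x^{-2+1}}{-2+1}\right]_{1}^{2}
  = \left[-\frac{1}{x}\right]_{1}^{2}
  = -\frac{1}{2} - \left(-\frac{1}{1}\right)
  = \frac{1}{2}
```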
{}
# Em dash in LaTeX My friend Daniel Reeves was recently discussing different ways to typeset em dashes. Here is the way I like to do it in LaTeX. \documentclass{minimal} %this sets up a new command \dash, which makes nice dashes \DeclareRobustCommand\dash{% \unskip\nobreak\thinspace\textemdash\thinspace\ignorespaces} \begin{document} Let's test out several different variants on making an em dash using the \LaTeX typesetting system --- this one uses \verb|---| with spaces around it Let's test out several different variants on making an em dash using the \LaTeX typesetting system---this one uses \verb|---| with no spaces around it Let's test out several different variants on making an em dash using the \LaTeX typesetting system \dash this one uses my \verb|\dash| command \begin{verbatim} \DeclareRobustCommand\dash{% \unskip\nobreak\thinspace\textemdash\thinspace\ignorespaces} \end{verbatim} \end{document} Here is the resulting output: [figure: different ways to make em dashes in LaTeX] Notice how using spaces around the --- can result in a dash at the end of a line, which is not desirable. And using no spaces around the --- doesn’t look nice, and can also result in problems when copying and pasting. The \dash command solves both of these issues. I did not come up with it myself, but I cannot remember where I found it. This entry was posted in latex. ### One Response to Em dash in LaTeX 1. Nice little macro. I like how you use \unskip to remove any interword glue that appears beforehand. The use of \thinspace before the dash prevents a line break at that point – could that be a problem? Also, why \ignorespaces at the end? Wouldn’t that prevent explicit spaces such as “\ “?
{}
Canonical coordinates In mathematics and physics, the canonical coordinates are a special set of coordinates on the cotangent bundle of a manifold. They are usually written as a set of $(q^i,p_j)$ or $(x^i,p_j)$, with the x's or q's denoting the coordinates on the underlying manifold and the p's denoting the conjugate momenta, which are 1-forms in the cotangent bundle at the point q of the manifold. This article defines the canonical coordinates as they appear in classical physics. A closely related concept also appears in quantum mechanics; see the Stone-von Neumann theorem and canonical commutation relations for details. In the following exposition, we assume that the manifolds are real manifolds, so that cotangent vectors acting on tangent vectors produce real numbers. This article attempts to provide a rigorous definition of the looser, simpler idea presented in the article canonical conjugate variables. Definition Given a manifold Q, a vector field X on Q (a section of the tangent bundle TQ) can be thought of as a function acting on the cotangent bundle, by the duality between the tangent and cotangent spaces. That is, define a function $P_X:T^*Q\to \mathbb{R}$ such that $P_X(q,p)=p(X_q)$ holds for all cotangent vectors p in $T_q^*Q$. Here, $X_q$ is the vector in $T_qQ$, the tangent space to the manifold Q at the point q. The function $P_X$ is called the momentum function corresponding to X. In local coordinates, the vector field X at the point q may be written as $X_q=\sum_i X^i(q) \frac{\partial}{\partial q^i}$, where the $\partial /\partial q^i$ form the coordinate frame on TQ.
The conjugate momentum then has the expression $P_X(q,p)=\sum_i X^i(q)\,p_i$, where the $p_i$ are defined as the momentum functions corresponding to the vectors $\partial /\partial q^i$: $p_i = P_{\partial /\partial q^i}$. The $q^i$ together with the $p_j$ form a coordinate system on the cotangent bundle $T^*Q$; these coordinates are called the canonical coordinates. Generalized coordinates In Lagrangian mechanics, a different set of coordinates is used, called the generalized coordinates. These are commonly denoted as $(q^i,\dot{q}^i)$, with $q^i$ called the generalized position and $\dot{q}^i$ the generalized velocity. When a Hamiltonian is defined on the cotangent bundle, the generalized coordinates are related to the canonical coordinates by means of the Hamilton-Jacobi equations.
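As an illustration (not part of the original article), the coordinate expression $P_X(q,p)=\sum_i X^i(q)\,p_i$ is straightforward to evaluate in code once the components $X^i$ are given:

```python
# Illustrative sketch: evaluate the momentum function
# P_X(q, p) = sum_i X^i(q) * p_i for a vector field X given
# componentwise, at a point (q, p) of the cotangent bundle.

def momentum_function(X_components, q, p):
    """X_components: list of callables X^i(q); q, p: coordinate tuples."""
    return sum(Xi(q) * pi for Xi, pi in zip(X_components, p))

# Example: on R^2, the rotation field X = y d/dx - x d/dy,
# evaluated at q = (1, 2) with covector p = (3, 4).
X = [lambda q: q[1], lambda q: -q[0]]
print(momentum_function(X, (1.0, 2.0), (3.0, 4.0)))  # 2*3 + (-1)*4 = 2.0
```

For this rotation field the momentum function is the familiar angular momentum $P_X = y\,p_x - x\,p_y$ (up to sign convention).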
{}
# A First Look at Circuits An electric circuit consists of a power supply, one or more “load” devices that consume electric power to produce some useful output like light, heat, or motion, an optional means of control (e.g., an on-off switch), and conducting wires to move electric power between the power supply and load. In an electric circuit, power must flow from the positive terminal of the power supply through one or more load devices and back to the negative terminal of the power supply, thereby forming a complete circuit. If the connections between the load and either the positive or negative terminals of a power supply are interrupted, the circuit will be broken and the load will not receive any current. A power supply may be thought of as reservoirs of positive and negative charges that are held in close proximity, but that cannot recombine within the power supply itself. Positive and negative terminals on the supply make the charges available to an outside circuit. When these terminals are connected through an external circuit, charges flow from the reservoirs through the terminals and load and recombine within the power supply. The charged particles available from a power supply have a potential energy equal to the amount of work done to separate them. This potential energy difference is measured in volts (or voltage). Thus, the voltage of a power supply is a measure of the “electric potential” or the “electro-motive force” that can force charge to flow through a circuit. Charge is carried by electrons, and the flow of electrons through a circuit is called electric current. The flow rate of electric current is measured in amperes, where one ampere is equal to one coulomb ($6.241\times10^{18}$ electrons) per second flowing through a circuit. As current flows through the load, potential energy is converted to heat, light, motion (through a magnetic field), or other useful outputs.
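The ampere-coulomb relationship above translates directly into a count of charge carriers per second; the following Python sketch (illustrative values only) makes the conversion explicit:

```python
# One ampere is one coulomb per second, and one coulomb corresponds to
# about 6.241e18 elementary charges (the value quoted in the text), so a
# current in amperes converts directly to electrons per second.
ELECTRONS_PER_COULOMB = 6.241e18

def electrons_per_second(current_amps: float) -> float:
    return current_amps * ELECTRONS_PER_COULOMB

# A modest 0.5 A current moves roughly 3.12e18 electrons past a point
# every second.
print(electrons_per_second(0.5))
```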
A typical power supply has large amounts of charge available at a given voltage, so that large and varying amounts of current can be supplied without a change in voltage. Most power supplies for “desktop” circuits produce voltages in the range of 3.3V to about 12V, a range that is generally considered safe for humans. A typical desktop supply (or plug-in “wall wart” supply) can produce anywhere from 100mA to 5A of current at the specified voltage – enough to power most small to medium sized experimental or lab-based circuits. The voltage output by a power supply is typically shown next to the supply in a schematic. Any load device in a circuit will present some amount of resistance to the flow of electric current (resistance is measured in ohms). The voltage available to force current through the load and the resistance of the load determine how much current will flow, according to Ohm’s law: $V=I×R$ ## Signals Some conductive wires in a circuit transport power between the power supply and the load devices. These wires, often called “rails”, are held steady at the same voltage, and they deliver electric power to devices around the circuit as it is needed. Other conductive wires move information between devices in a circuit – these wires transport “signals”. Signals differ from rails in that they transport information, not power. They use less current, and their voltage changes over time to encode or represent new and changing information. Signals can move information from one circuit component to another, or from one component to several others. All the conductors and components in a circuit that are connected together by a single signal are said to form a circuit node; all the components connected to any given node access the same information. ## Electric Vs. Electronic Circuits Electric circuits use power rails and simple control means (like manual switches) to drive basic load devices like lights, heaters, and motors.
Electronic circuits also use electric current to power load devices, but they differ in a crucial way – the devices in an electronic circuit use and/or are controlled by electric signals instead of manual switches. Electronic devices, like transistors, amplifiers, processors, and other semiconductor-based components (we will discuss semiconductors later) consume electric power, and they also use signals to define their operating state and control their behavior. As examples, a home-wiring circuit that provides power to a light bulb based on the state of a mechanical switch is an electric circuit, whereas a button to change channels on a TV is part of an electronic circuit. Electronic circuits are often classified as “analog” and “digital” circuits. Analog circuits use variable voltages on conductive wires to represent information in a circuit – think of a microphone that produces a voltage between $0V$ and $+5V$, where the voltage is in direct proportion to the incident sound pressure wave. At any given time, a lower voltage on a wire represents a lower sound pressure level, and a higher voltage represents a higher pressure. This continuous and variable voltage creates an electronic representation, or analog, of the sound wave as detected by the microphone, and that’s where the term analog circuit comes from. The analog signal could be amplified and sent to a loudspeaker, recorded on magnetic tape, or it could be filtered, amplified, attenuated, or otherwise processed to change the signal amplitude or frequency content. Analog circuits can suffer from poor performance if there is too much noise on their internal signals. In the microphone example, if the circuit used a $3.3V$ power rail, then all sound information, from quiet whispers to loud noises, must be represented as a voltage in the range $0V$ to $3.3V$.
If some noise source, like a poor power supply, or a circuit node that was too sensitive to ambient radio energy, produced $10mV$ of noise (which is not at all unlikely), then one part in 330 of the analog signal voltage would be washed out in noise, limiting the information the analog signal can carry. Digital circuits also use voltage levels on conductive wires to encode and represent information, but rather than use continuously varying voltages, they use only two distinct voltage levels. All information in a digital circuit is represented by a “logic high voltage”, which is typically defined as a voltage range between about 70% and 100% of the maximum system voltage (perhaps $3.3V$), and a “logic low voltage”, typically in the range of 0V to about 30% of the maximum system voltage. Because digital circuits use a wide voltage range to encode both a “high” and a “low” (or equivalently, a ‘1’ and a ‘0’), they are far less sensitive to noise. Using the same 3.3V example, a “high” digital signal could suffer from up to 600mV of noise without changing its definition as transporting a ‘1’. Since digital circuits restrict nodes to operating at one of two distinct voltages, it is common practice to associate a circuit node at a logic high voltage (or Vdd) with a ‘1’, and a circuit node at a logic low voltage (or ground) with a ‘0’. Thus, every node in a digital circuit is either at a ‘1’ or ‘0’, not counting the short amount of time it takes to transition between those states. And since all circuit nodes in a digital circuit can be associated with a ‘1’ or ‘0’, it is common to use binary numbers when describing the state of a digital circuit. Any individual wire (or node) can transport either a ‘1’ or ‘0’, and a group of wires viewed as a single logical unit can transport a binary number. For example, if 8 wires are viewed as a single logical unit (called a “bus”), then 8-bit binary numbers can be transported by that bus.
Digital circuits can represent the same information as analog circuits, but the analog information must first be converted into digital form. Any continuously varying analog signal can be represented as a sequence of discrete numbers that define the analog signal’s amplitude at a given time. The requirements of the signal that is to be “digitized” dictate how many points per second are required for an adequate representation, and how many bits per point. If, for example, a 0V to 3.3V analog signal produced by a microphone were to be “digitized” for representation in a digital circuit, the maximum frequency content that needed to be preserved in the digital representation would dictate the sample rate (in general, the sample rate is at least two times, and up to 10 times, the analog frequency that must be preserved), and the required dynamic range would dictate the number of bits (dynamic range is the ratio between the smallest and largest signal amplitudes that can be represented). For regular spoken voice, about 5 kHz of analog frequencies should be preserved, with about 48dB of dynamic range, indicating a sample rate of 10 kHz or more, with at least 8 bits per sample. In digital circuits, the Vdd and GND rails supply electric power to the circuit and define voltages for representing a ‘1’ and a ‘0’. Vdd may be thought of as the “source” of positive charges in a circuit and the source of ‘1’ information, and GND may be thought of as the “source” of negative charges in a circuit and the source of ‘0’ information. In modern digital systems, Vdd and GND are separated by anywhere from 1 to 5 volts. Older or inexpensive circuits typically use 5 volts, while newer circuits use 1-3 volts.
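The digitization rules of thumb above (sample at a multiple of the highest frequency to preserve, and budget roughly 6 dB of dynamic range per bit) can be sketched as a quick calculation. Note the 6.02 dB-per-bit figure used below is the standard 20*log10(2) and is an assumption here, not a number from the text:

```python
import math

# Back-of-envelope digitization requirements.
# Nyquist: sample at least twice the highest analog frequency to preserve.
# Dynamic range: each added bit contributes about 20*log10(2) ~ 6.02 dB.

def min_sample_rate_hz(max_freq_hz: float, oversample: float = 2.0) -> float:
    return oversample * max_freq_hz

def min_bits(dynamic_range_db: float) -> int:
    db_per_bit = 20 * math.log10(2)  # ~6.02 dB per bit
    return math.ceil(dynamic_range_db / db_per_bit)

# The spoken-voice numbers from the text: 5 kHz bandwidth, 48 dB range.
print(min_sample_rate_hz(5_000))  # 10000.0 -> 10 kHz, matching the text
print(min_bits(48))               # 8 bits, matching the text
```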
{}
# Is it possible to configure Linux OpenMPI v1.6.5 to use multiple working directories on a single node? I am running some quantum chemical calculations on a refurbished PowerEdge rack server, with dual quad-core Xeons, 16GB RAM, and a single 500GB SATA HDD. OS is vanilla Debian Wheezy. For one of the calculation types I'm running, I think I'm becoming disk-I/O-limited, and so I installed a spare 160GB HDD I had on hand to try to ease the bottleneck. In order to make use of the additional drive, though, I need to be able to tell OpenMPI to use multiple working directories. I know the --wdir commandline flag exists, but the documentation makes it look like it only accepts a single value. I unsuccessfully tried a hacky workaround: 1. Creating a new user scratch 2. Mounting the new 160GB drive as /home/scratch/ 3. Setting up /home/bskinn/.ssh/config to default to user bskinn when SSH-ing to localhost and to default to user scratch when SSH-ing to the server's internal 192.168.x.x address 4. Creating appropriate RSA SSH keys, adding them to /home/bskinn/.ssh/authorized_keys and /home/scratch/.ssh/authorized_keys, and SSH-ing into each to get the host information stored for auto-acceptance 5. Adding localhost and 192.168.x.x to my ompi_hosts file and passing it to OMPI with --hostfile /home/bskinn/ompi_hosts If I ssh localhost I get logged in as bskinn, and if I ssh 192.168.x.x I get logged in as scratch, just like I want. However, while OpenMPI doesn't throw any errors, it appears to ignore the ompi_hosts file and simply runs all processes locally. So: Is there a way to get OpenMPI 1.6.5 to use multiple working directories? ADDENDUM: I'm running ORCA computations. ORCA is distributed as pre-compiled, statically-linked binaries. In particular, the OMPI calls are hard-coded into said binaries and not accessible to me.
So, I can't intercept what commands are passed to OMPI, except perhaps by some sleight-of-hand wherein I relocate the ORCA binaries and replace them with shell scripts. That seems like a lot more work than it's worth. ORCA does provide the ability to pass through whatever OMPI environment variables I want; I'm only really interested in solutions on this level of complexity. This is a cross-post from SuperUser -- I won the Tumbleweed badge for it there (yay?), so I'm guessing it's not a good place for it. I don't have rep to migrate it, so apologies if re-posting in this manner is frowned upon. If you don't have the source and you can't modify the mpirun/mpiexec command line at all, you're probably going to have a hard time doing things the way you tried. You should profile your runs if you can and verify that I/O is, in fact, your problem. If it is, just moving to a single SSD might be enough to make you some progress, but splitting across drives could be more challenging unless you have enough room (controller ports and physical space) to build a 2-3 drive striped RAID array (RAID 0). You are almost certainly better off this way than trying to explicitly stripe across different mount points. If you don't have the space for 3-4 total drives, but you do have room for two, you'll probably need to completely wipe and reinstall the system with a RAID 0 system partition to put everything on. Given the update and the request, I have added this extra answer. • Happily, it's a four-bay rack machine, so glomming on a two- or three-disk striped array should be pretty easy, as long as the motherboard has good RAID support. May 7 '15 at 15:01 • @Brian, I would make sure that I/O is really your problem before you go to the trouble. I wouldn't sweat hardware RAID support; the Linux software RAID is probably sufficient. You're not computing checksums with RAID 0, so there should be little overhead. May 7 '15 at 15:04 • Oh, absolutely I'd profile the performance first.
I've actually moved on to some other computations since my OP on SU, so unless/until I return to them it's not even all that relevant any more. May 7 '15 at 15:24 Instead of launching the executable, why don't you launch a script that changes directories into the one you want and then execs your program? OpenMPI sets $OMPI_COMM_WORLD_RANK in the environment equal to the rank that the task will get within MPI_COMM_WORLD inside the code, so you can do something like foo.sh:

#!/bin/bash
total_tasks=100  # or however many ranks you launch
if [ "$OMPI_COMM_WORLD_RANK" -lt $(( total_tasks / 2 )) ]; then
    cd /home/scratch
fi
exec /path/to/my/chemistry/code arg1 arg2 arg3

Then if you make foo.sh executable (chmod +x foo.sh) and mpirun or mpiexec (or whatever) foo.sh, it should put the first half of the ranks with their working directory in /home/scratch and the latter half of the ranks wherever you were when you did the mpirun/exec. You could also just patch this into your chemistry program if you have the source. Finally, if these two drives are connected to the same controller, you may or may not get any additional bandwidth. Have you tried profiling your app? What code is it? Are you running it in an out-of-core mode? • Thanks for the suggestion -- in many cases, this probably would work well. In my particular situation, I don't think this is practical. See edit just now to OP. Also, I hadn't considered that controller I/O might be limiting; I'd assumed (if it is actually an I/O problem) it was a seek/read/write problem. May 7 '15 at 14:21 • @Brian, Yeah, that's an important caveat. If you don't have the source and you can't modify the mpirun/mpiexec command line at all, you're probably hosed. You should profile your runs if you can and verify that I/O is, in fact, your problem. If it is, just moving to SSD might be enough to make you some progress, but splitting across drives could be more challenging.
You are almost certainly better off putting a small, striped RAID array in your machine than trying to explicitly stripe across different mount points. May 7 '15 at 14:49 • Yep, figured I was up against striping for any appreciable gains, if it is in fact an I/O problem. If you repost your comment as an answer, I'll mark it as The Answer™. Thanks much. May 7 '15 at 14:50
{}
# Enter the ASA's Talent Competition! ## News • Author: Statistics Views and Jeffrey Myers • Date: 13 December 2013 • Copyright: Image appears courtesy of iStock Photo Calling all singers, dancers, comedians, actors, actresses, and everything in between. ASA's Got Talent is looking for YOU! In celebration of the 175th anniversary of the American Statistical Association, the Association will be having a talent show to honour the study and practice of statistics creatively. Engaging their members, celebrating their past, and energizing their future are the themes for this event. To enter, please complete the form (available here) and submit a video of no longer than five minutes. While the entries must have a statistical theme, this will be interpreted broadly by the judges. Any element of a statistical theme will be considered, including costumes, settings, and props. All videos must be submitted by 15th January 2015; up to eight finalists will be notified in February and given the opportunity to perform LIVE at JSM in Boston. The talent show will take place during the 175th Anniversary Celebration Tuesday evening. Finalists will perform their talent LIVE and compete for the grand prize package. There also will be a best online submission prize for one act unable to perform at JSM. You must be an ASA member to participate. Any questions can be directed to Talent@amstat.org.

Grand Prize Package:
• One-year complimentary ASA membership
• JSM T-shirt
• $20 gift certificate to the ASA Marketplace
• ASA logo neck wallet
• 175th logo pen

Best Online Submission Prize:
• $50 gift certificate to the ASA Marketplace

Good luck!
{}
### Department of Bioinformatics and Computational Biology ## Create a NG-CHM This vignette demonstrates how to construct a simple NG-CHM and save it as a file that can be opened in the NG-CHM Viewer. The following sections are below: ### Sample NG-CHM Data This vignette uses an additional package of demo data, NGCHMDemoData, which can be installed from GitHub with remotes: remotes::install_github('MD-Anderson-Bioinformatics/NGCHMDemoData', ref='main') and loaded into the current environment: library(NGCHMDemoData) The sample data includes a matrix of mRNA expression data, TCGA.GBM.ExpressionData, for 3540 genes (rows) and 169 samples (columns) from the Glioblastoma Multiforme TCGA data. In order to be used as the basis for an NG-CHM, a matrix should have both rownames and colnames. Here the rownames are genes and the colnames are TCGA sample identifiers: TCGA.GBM.ExpressionData[1:4,1:2] # TCGA-06-0178-01A-01R-1849-01 TCGA-02-2483-01A-01R-1849-01 # TNFAIP8L3 0.04324498 -1.12556612 # SYK -0.12174522 0.03007443 # C2 0.07445546 -0.21648993 # ACP5 1.45195866 0.12276042 The sample data also includes a vector of TP53 mutation status for the TCGA samples in the matrix. This data will be used to construct a covariate bar in Covariate Bars and Discrete Color Maps. In order to be used as the basis for a covariate bar, a vector should have at least one name in common with the colnames of the matrix. TCGA.GBM.TP53MutationData[1:2] # TCGA-06-0178-01A-01R-1849-01 TCGA-02-2483-01A-01R-1849-01 # "WT" "MUT" ### Creating a NG-CHM Using the data loaded above, the chmNew() function can be used to create an NG-CHM: library(NGCHM) hm <- chmNew('tcga-gbm', TCGA.GBM.ExpressionData) ### Export to File The NG-CHM created above can be exported to two different file types: 1. A stand-alone HTML file that can be emailed to collaborators and opened in a standard browser. 2. A ‘.ngchm’ file that can be opened in the NG-CHM Viewer.
Both methods use files from the NGCHMSupportFiles package referenced in the installation instructions. When loaded, NGCHMSupportFiles sets two environment variables that point to these additional files. library(NGCHMSupportFiles) The NG-CHM can be exported as a stand-alone HTML file with the chmExportToHTML() function. The first argument is the NG-CHM created above, the second argument is the desired filename, and the third is a boolean dictating whether any existing file of that name should be overwritten. chmExportToHTML(hm,'tcga-gbm.html',overwrite=TRUE) The file ‘tcga-gbm.html’ can be shared with collaborators and opened in a standard web browser. Alternatively, a ‘.ngchm’ file can be created with the chmExportToFile() function. chmExportToFile(hm,'tcga-gbm.ngchm',overwrite=TRUE) The file ‘tcga-gbm.ngchm’ can be opened in the NG-CHM Viewer. NOTE: The filename must end with ‘.ngchm’ to open in the NG-CHM Viewer.
{}
# The photon as a mediator particle 1. Dec 26, 2009 ### espen180 I would like to ask some questions about the photon as a mediator particle of the electromagnetic force. As far as I know, in order to determine the complete state of a photon, we need to know the values of two entities, which are the position and momentum 4-vectors $$x^{\mu}$$ and $$p^{\mu}$$. These are related through the HUP: $$\Delta x^{\mu}\Delta p^{\mu}\geq \frac{\hbar}{2}$$ (no summation) Also, photons always follow geodesic curves in space-time. However, despite the fact that no geodesics exit from beyond the event horizon of a black hole, charged black holes exert a Lorentz force on other charged bodies. It appears that photons, when mediating the Lorentz force, are exempt from the rules of GR. I see two possibilities: 1. I have misunderstood something fundamental about charged black holes. 2. I don't know enough particle physics to make intelligent guesses about mediator particles. If anyone could explain the issue to me, I would appreciate it. 2. Dec 26, 2009 ### Prathyush I googled and found the following summary: virtual particles do not obey several properties that normal particles obey; they can travel faster than light and need not be on the mass shell, and hence are blind to the event horizon. Also, this is purely speculation, and only a correct theory of QGR can tell us the answer. 3. Dec 26, 2009 ### bcrowell Staff Emeritus Interesting question. One thing to point out is that what's really forbidden by GR is for *information* to escape from inside the event horizon; if it did, then there would be a local Minkowski frame in which the information was going faster than c, and that leads to a violation of causality. But the virtual photons coming from a charged black hole don't carry any information. By the no-hair theorem, you can't use them to gain any information about the distribution of charges inside the event horizon, or about the motion of the charges. 4.
Dec 26, 2009 ### espen180 As a "relativist", I have a little problem with speeds and transfers exceeding c. Putting that aside, I would like to raise a point mentioned in the thread Prathyush linked to. If the photons carry no "information" at all, how can they "inform" the outside world about the existence and amount of charge inside the black hole? 5. Dec 26, 2009 ### bcrowell Staff Emeritus To carry information, a wave has to be modulated. You can't carry information with just a DC signal (or with an unmodulated sine wave). This is why, for example, you can have phase velocities greater than c. This is exactly the same issue as the question of how the black hole can inform the outside world of the existence and amount of mass inside the black hole. This is information that was already present, and determinable by outside observers, before the black hole collapsed. Last edited: Dec 26, 2009 6. Dec 26, 2009 ### Prathyush A very interesting point; I was unaware of it till now. What do you mean by this? Even a long time after the black hole collapse we can determine the mass of the black hole 7. Dec 27, 2009 ### bcrowell Staff Emeritus Causality is violated if, for instance, I get information about which shirt I'm going to wear tomorrow. If I find out that I'm going to wear the red shirt tomorrow, I can create a paradox by intentionally wearing the blue shirt instead. This type of causality violation occurs in relativity whenever you have a particle with a spacelike world-line. This includes particles like tachyons that move faster than c, and it includes particles escaping through the event horizon of a black hole. Causality is not violated if I find out that 2+2=4, or that the mass of the sun has a certain value. These facts were true before I was born, and will still be true after I'm dead. Knowing the mass of a black hole is the same way. It doesn't create the possibility of any violation of causality. By conservation of mass-energy, the mass stays the same.
It was the same before the black hole collapsed, and it will keep on being the same in the future.
{}
### Dataset 1, Table of $\Delta T_\nu$ coefficient values Dmitry Sitnikov, Inna Ilina & Alexander Pronkin This CSV file represents a table of $\Delta T_\nu$ coefficient values for calculating the maximum local temperature increase $\Delta T$ [C] of a water cylinder at temperature T0 [C] exposed to a pulse of THz radiation at a given frequency $\nu$ [THz] with fluence F [J/cm^2]: $\Delta T=(\Delta T_\nu/0.7)\cdot 83.3\cdot F$. Here [C] denotes the degree Celsius. The table contains data for four values of T0 [C]: 37, 32, 27, and 22. For a pulse at frequency nu=1.5 [THz] with fluence F = 0.01 [J/cm^2] for water at initial...
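The conversion formula can be wrapped in a tiny helper; in the Python sketch below the coefficient value is a placeholder (the real $\Delta T_\nu$ must be read from the CSV for the chosen T0 and frequency):

```python
# Sketch of the temperature-rise formula from the dataset description:
#   DeltaT = (DeltaT_nu / 0.7) * 83.3 * F
# DeltaT_nu is frequency- and T0-dependent and must be looked up in the
# CSV; the value used in the example call below is a placeholder, not a
# number taken from the dataset.

def max_temp_increase_c(delta_t_nu: float, fluence_j_per_cm2: float) -> float:
    return (delta_t_nu / 0.7) * 83.3 * fluence_j_per_cm2

# Placeholder coefficient 0.7 with F = 0.01 J/cm^2 gives DeltaT ~ 0.83 C.
print(round(max_temp_increase_c(0.7, 0.01), 3))  # 0.833
```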
{}
# Spectroscopy of spontaneous spin noise as a probe of spin dynamics and magnetic resonance ## Abstract Not all noise in experimental measurements is unwelcome. Certain fundamental noise sources contain valuable information about the system itself—a notable example being the inherent voltage fluctuations (Johnson noise) that exist across any resistor, which allow the temperature to be determined1,2. In magnetic systems, fundamental noise can exist in the form of random spin fluctuations3,4. For example, statistical fluctuations of N paramagnetic spins should generate measurable noise of order √(N) spins, even in zero magnetic field5,6. Here we exploit this effect to perform perturbation-free magnetic resonance. We use off-resonant Faraday rotation to passively7,8 detect the magnetization noise in an equilibrium ensemble of paramagnetic alkali atoms; the random fluctuations generate spontaneous spin coherences that precess and decay with the same characteristic energy and timescales as the macroscopic magnetization of an intentionally polarized or driven ensemble. Correlation spectra of the measured spin noise reveal g-factors, nuclear spin, isotope abundance ratios, hyperfine splittings, nuclear moments and spin coherence lifetimes—without having to excite, optically pump or otherwise drive the system away from thermal equilibrium. These noise signatures scale inversely with interaction volume, suggesting a possible route towards non-perturbative, sourceless magnetic resonance of small systems.
## References 1. Johnson, J. B. Thermal agitation of electricity in conductors. Nature 119, 50–51 (1927) 2. White, D. R. et al. The status of Johnson noise thermometry. Metrologia 33, 325–335 (1996) 3. Itano, W. M. et al. Quantum projection noise: Population fluctuations in two-level systems. Phys. Rev. A 47, 3554–3570 (1993) 4. Sorensen, J. L., Hald, J. & Polzik, E. S. Quantum noise of an atomic spin polarization measurement. Phys. Rev. Lett. 80, 3487–3490 (1998) 5. Bloch, F. Nuclear induction. Phys. Rev. 70, 460–474 (1946) 6. Sleator, T., Hahn, E. L., Hilbert, C. & Clarke, J. Nuclear-spin noise. Phys. Rev. Lett. 55, 1742–1745 (1985) 7. Happer, W. & Mathur, B. S. Off-resonant light as a probe of optically-pumped alkali vapors. Phys. Rev. Lett. 18, 577–580 (1967) 8. Suter, D. & Mlynek, J. Laser excitation and detection of magnetic resonance. Adv. Magn. Opt. Res. 16, 1–83 (1991) 9. Kubo, R. The fluctuation-dissipation theorem. Rep. Prog. Phys. 29, 255–284 (1966) 10. Weissman, M. B. What is a spin glass? A glimpse via mesoscopic noise. Rev. Mod. Phys. 65, 829–839 (1993) 11. Smith, N. & Arnett, P. White-noise magnetization fluctuations in magnetoresistive heads. Appl. Phys. Lett. 78, 1448–1450 (2001) 12. Awschalom, D. D., DiVincenzo, D. P. & Smyth, J. F. Macroscopic quantum effects in nanometer-scale magnets. Science 258, 414–421 (1992) 13. Aleksandrov, E. B.
& Zapassky, V. S. Magnetic resonance in the Faraday-rotation noise spectrum. Zh. Eksp. Teor. Fiz. 81, 132–138 (1981) 14. Mitsui, T. Spontaneous noise spectroscopy of an atomic resonance. Phys. Rev. Lett. 84, 5292–5295 (2000) 15. Kuzmich, A. et al. Quantum nondemolition measurements of collective atomic spin. Phys. Rev. A 60, 2346–2350 (1999) 16. Kuzmich, A., Mandel, L. & Bigelow, N. P. Generation of spin squeezing via continuous quantum nondemolition measurement. Phys. Rev. Lett. 85, 1594–1597 (2000) 17. Mamin, H. J., Budakian, R., Chui, B. W. & Rugar, D. Detection and manipulation of statistical polarization in small spin ensembles. Phys. Rev. Lett. 91, 207604 (2003) 18. Manassen, Y., Hamers, R. J., Demuth, J. E. & Castellano, A. J. Direct observation of the precession of individual paramagnetic spins on oxidized silicon surfaces. Phys. Rev. Lett. 62, 2531–2534 (1989) 19. Nussinov, Z., Crommie, M. F. & Balatsky, A. V. Noise spectroscopy of a single spin with spin-polarized STM. Phys. Rev. B 68, 085402 (2003) 20. Cleland, A. N. & Roukes, M. L. Noise processes in nanomechanical resonators. J. Appl. Phys. 92, 2758–2769 (2002) 21. Weaver, R. L. & Lobkis, O. I. Ultrasonics without a source: Thermal fluctuation correlations at MHz frequencies. Phys. Rev. Lett. 87, 134301 (2001) 22. Kastler, A. Optical methods for studying Hertzian resonances. Science 158, 214–221 (1967) 23. Happer, W. Optical pumping. Rev. Mod. Phys. 44, 169–249 (1972) 24. Corney, A. Atomic and Laser Spectroscopy (Clarendon, Oxford, 1977) 25. Yabuzaki, T., Mitsui, T. & Tanaka, U. New type of high-resolution spectroscopy with a diode laser. Phys. Rev. Lett. 67, 2453–2456 (1991) 26. Ito, T., Shimomura, N. & Yabuzaki, T. Noise spectroscopy of K atoms with a diode laser. J. Phys. Soc. Jpn 72, 962–963 (2003) 27. Jury, J. C., Klaassen, K. B., van Peppen, J. & Wang, S. X. Measurement and analysis of noise sources in magnetoresistive sensors up to 6 GHz. IEEE Trans. Magn. 38, 3545–3555 (2002) 28. Wolf, S. A. 
et al. Spintronics: A spin-based electronics vision for the future. Science 294, 1488–1495 (2001) 29. Imamoglu, A. et al. Quantum information processing using quantum dot spins and cavity QED. Phys. Rev. Lett. 83, 4204–4207 (1999) 30. Joglekar, Y. N., Balatsky, A. V. & MacDonald, A. H. Noise spectroscopy and interlayer phase coherence in bilayer quantum Hall systems. Phys. Rev. Lett. 92, 086803 (2004) ## Acknowledgements We thank P. Littlewood, S. Gider, P. Crowell and P. Crooker for discussions. This work was supported by the Los Alamos LDRD programme. ## Author information Authors ### Corresponding author Correspondence to S. A. Crooker. ## Ethics declarations ### Competing interests The authors declare that they have no competing financial interests. ## Supplementary information ### Supplementary Figure An additional figure of spin noise data, this time from atoms having nuclear spin 5/2. (PDF 169 kb) ## Rights and permissions Reprints and Permissions Crooker, S., Rickel, D., Balatsky, A. et al. Spectroscopy of spontaneous spin noise as a probe of spin dynamics and magnetic resonance. Nature 431, 49–52 (2004). https://doi.org/10.1038/nature02804 • Accepted: • Issue Date: • DOI: https://doi.org/10.1038/nature02804 • ### Quantum nonlinear spectroscopy of single nuclear spins • Jonas Meinel • J. Wrachtrup Nature Communications (2022) • ### Optical detection of electron spin dynamics driven by fast variations of a magnetic field: a simple method to measure $$T_1$$, $$T_2$$, and $$T_2^*$$ in semiconductors • V. V. Belykh • D. R. Yakovlev • M. Bayer Scientific Reports (2020) • ### Optical spin noise spectra of Rb atomic gas with homogeneous and inhomogeneous broadening • Jian Ma • Ping Shi • Yang Ji Scientific Reports (2017) • ### Spin noise explores local magnetic fields in a semiconductor • Ivan I. Ryzhov • Gleb G. Kozlov • Valerii S. 
Zapasskii Scientific Reports (2016) • ### Cross-correlation spin noise spectroscopy of heterogeneous interacting spin systems • Dibyendu Roy • Luyi Yang • Nikolai A. Sinitsyn Scientific Reports (2015)
Data Descriptor | Open | Published:

# Dataset of eye disease-related proteins analyzed using the unfolding mutation screen

## Abstract

A number of genetic diseases result from missense mutations in protein structure. These mutations can lead to severe protein destabilization and misfolding. The unfolding mutation screen (UMS) is a computational method that calculates unfolding propensities for every possible missense mutation in a protein structure. The UMS validation demonstrated good agreement with experimental and phenotypical data. Fifteen protein structures (a combination of homology models and crystal structures) were analyzed using UMS. The standard and clustered heat maps and the patterned protein structures from the analysis were stored in a UMS library. The library is currently composed of 15 protein structures from 14 inherited eye diseases, including retinal degenerations, glaucoma, and cataracts, and contains data for 181,110 mutations. The UMS protein library introduces 13 new human models of eye disease-related proteins and is the first collection of consistently calculated unfolding propensities, which could be used as a tool for the express analysis of novel mutations in clinical practice, next-generation sequencing, and genotype-to-phenotype relationships in inherited eye disease.

Design Type(s): disease state design • molecular modeling objective
Measurement Type(s): protein unfolding
Technology Type(s): computational modeling technique
Sample Characteristic(s): disease • Homo sapiens

Machine-accessible metadata file describing the reported data (ISA-tab format)

## Background & Summary

For globular proteins, the primary structure dictates the folds and interactions that occur between amino acids in the structure1,2. Genetic mutations lead to protein misfolding and, in many cases, disease3. Protein secondary structure is stabilized by hydrogen bonds between the amide and carbonyl groups of amino acids4.
The side chains of the amino acids interact in a variety of ways to create the protein tertiary structure (hydrophobic and disulfide interactions). In folding, the protein structure goes through a series of trials and errors to identify the most thermodynamically stable conformation4. Therefore, correctly folded proteins have long-term stability in biological systems. The role of missense mutations in inherited disease is not well understood. Disease-causing missense mutations occur when a change at the DNA level causes an amino acid in the protein sequence to be substituted with another, changing the interactions between amino acids and occasionally leading to protein misfolding. Currently, many inherited diseases are caused by missense mutations leading to misfolding of proteins in the cell3–7. UMS is an in silico scan to evaluate the destabilizing effects of multiple point mutations derived from the protein atomic model. It may be used as a tool to analyze the complicated relationship between missense mutations, protein folding, and disease5. UMS reads a protein structure file (PDB file) and predicts the unfolding effect for every possible missense mutation that may occur in the protein structure, including the identity mutation. For each mutation, UMS calculates an unfolding propensity, derived from the Gibbs free energy equation, to describe whether the mutation will lead to protein misfolding. The output of UMS is a mutational matrix, standard heat map, clustered heat map, and patterned structure. UMS can identify critical residues in the protein structure, giving insight into the residues most significant to protein stability and function. UMS may also explain mutations that can lead to both increased and decreased enzymatic activity, identify trends of residues relating to stability, and predict the severity of missense mutations in disease and their relation to disease phenotype.
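The unfolding-propensity idea lends itself to a small numerical sketch. The exact UMS formula is not reproduced here; the snippet below assumes a simple two-state model in which a mutation's destabilization ΔΔG (positive meaning destabilizing) sets the unfolded fraction through a Boltzmann sigmoid, matching the 0-to-1 scale with 0.5 at the folding-unfolding equilibrium described later in the text.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def unfolding_propensity(ddg_kj_mol, temp_k=298.15):
    """Two-state estimate of the unfolded fraction for a mutant.

    ddg_kj_mol is the mutant's destabilization (kJ/mol); positive
    values destabilize.  Returns a value in [0, 1], where 0.5 is the
    folding-unfolding equilibrium.  This Boltzmann form is an assumed
    illustration, not the published UMS calculation.
    """
    return 1.0 / (1.0 + math.exp(-ddg_kj_mol / (R * temp_k)))

# A neutral mutation sits at equilibrium; a strongly destabilizing
# one approaches 1 (fully unfolded).
print(unfolding_propensity(0.0))   # 0.5
print(unfolding_propensity(20.0))  # close to 1
```

The sigmoid makes propensities from different proteins directly comparable, which is the motivation given in the text for using a universal 0-to-1 scale.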
Currently, a number of protein stability predictors exist6–11. There are also programs that predict the functional consequences of missense mutations12–16. UMS provides various benefits and advances over current mutant screening techniques. Given that UMS is derived from the atomic structure level and thermodynamics rather than sequence conservation, it has the ability to predict the effect of de novo missense mutations17. The unfolding propensity is determined using the linear extrapolation model from the normalized sigmoidal unfolding curve obtained experimentally18. These values classify the effects of the missense mutations on a universal scale, so that unfolding propensities from different protein structures may be compared. The three maps and the mutation matrix are designed to make this large dataset readable for investigators with different backgrounds, such as geneticists, clinicians, biochemists, pharmacologists, or protein engineers, including those without any preliminary experience in homology modeling and calculations of protein stability. Residue depth has been used to describe the protein interior and predict fold types19–21. It has been hypothesized that the conservation of ‘deep’ residues is related to folding requirements and function20. Relationships exist between highly conserved residues in structural neighbors of the same fold type and their mean residue depth in the reference structure21. There are programs that use residue depth as a parameter to predict protein structural models using fold recognition19. Here, we report a library of 181,110 mutations from 15 inherited eye disease proteins analyzed with UMS. This analysis includes the preparation of 13 homology models of human proteins. The UMS program has been subjected to intensive validation using the ProTherm database and three proteins from retinal disease (rhodopsin, complement factor H, and RPE65)5.
We present 10 new homology models for human proteins related to retinal diseases that have been verified using the internal control. In addition, we provide new maps for each of the 15 proteins for prediction and express analysis of missense mutations. Finally, this study targets a number of new diseases that have not yet been studied using UMS.

## Methods

### Protein preparation

A library of 15 different inherited eye disease-related proteins was created for analysis. Figure 1 outlines the stages of the analysis. The proteins included in the dataset, their PDB names, and their corresponding diseases are shown in Table 1. The human proteins were taken from the RCSB database22 or prepared using homology modeling. CYP1B1, IRBP, LRAT, NYX, RDH5, RDH8, RDH12, REP-1, RHO, RPE65, TIMP3, WDR36, and domains 4, 5, 14 and 17 from CFH are homology models, while the remaining structures (the other CFH domains, CRYAB, and CRYBB1) are crystal structures obtained from the Protein Data Bank22. The PDB files used for the protein analysis are available on the server.

### Internal control

After the homology models were created, they were run through the internal control program. The internal control program for the analysis of unfolding propensities is explained in depth in McCafferty & Sergeev5. In this work, the internal control was adjusted from UMS to calculate the difference in the free energy of the side-chain rotamers for the same amino acid. The output of the internal control program was used either to select the best protein models or to determine whether more refinement of the structures was required. In selecting the best structure, we looked for models with statistically significant data (P value <0.05) and then looked for the smallest confidence interval with an average close to 0. For those that required further refinement, structures were tested until they fit this requirement.
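The model-selection rule just described (statistical significance first, then the narrowest confidence interval centred near 0) can be sketched in a few lines. The normal-approximation interval and the hypothetical rotamer free-energy differences below are illustrative assumptions, not the actual internal control code of McCafferty & Sergeev.

```python
import math
import statistics

def confidence_interval(diffs, z=1.96):
    """Mean and approximate 95% confidence interval (normal
    approximation) for one candidate model's rotamer free-energy
    differences, which should be centred on 0 for a good model."""
    mean = statistics.fmean(diffs)
    half = z * statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean, (mean - half, mean + half)

def better_model(a, b):
    """Prefer the candidate whose mean is closer to 0 and whose
    interval is tighter, mirroring the selection rule in the text."""
    def score(diffs):
        mean, (low, high) = confidence_interval(diffs)
        return (abs(mean), high - low)
    return "a" if score(a) <= score(b) else "b"

# Hypothetical rotamer differences for two candidate models.
good = [0.02, -0.01, 0.03, -0.02, 0.01, 0.00]
poor = [0.40, -0.55, 0.62, -0.48, 0.51, -0.37]
print(better_model(good, poor))  # a
```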
### UMS program

A full description of the UMS program can be found in the methods section of McCafferty & Sergeev5. In summary, the program is written in the Python, R, and Bash programming languages. The architecture of the program is designed to perform a full mutagenesis analysis as efficiently as possible by implementing a quick and space-sensitive procedure. Figure 2 outlines the order of the functions created within the program. The unfolding propensity calculation is derived from the Gibbs free energy equation. The standard and clustered heat maps are produced using the d3heatmap package for R. The maps are interactive and allow specific rows or columns of interest to be selected. Specific regions may also be selected for a zoomed-in view. The clustered maps use an agglomerative, hierarchical clustering method. The groupings are then mapped using a dendrogram. The final map used to convey UMS is the patterned foldability structure. The 3D structures were colored using the foldability values of the residues5. These foldability values can then be used to identify critical residues in the protein structure. The critical residues are considered to be essential to proper protein folding.

### Residue depth and informational entropy

In addition, two new descriptors, residue depth and informational entropy, were included as described below. The first descriptor, the residue depth of an amino acid in the protein structure, is the distance of an atom from the solvent-accessible surface23. The Biopython package was used to calculate the residue depth for each of the wild-type residues in the protein structure24. The Biopython package uses the MSMS program for the surface calculation25; the residue depth is then presented as the average of the atom depths for each wild-type residue in the native protein sequence. The other descriptor, informational entropy, also known as Shannon entropy, quantifies the uncertainty of the source of the information. Therefore, greater informational entropy corresponds to a greater degree of randomness amongst the mutations at a certain location26. For example, if the average unfolding propensity for two locations on the structure were both 0.5, the location where all unfolding propensities were 0.5 would have a lower informational entropy than the one split between 1.0 and 0.0. A script was created using Python to calculate the informational entropy of the unfolding propensities. The equation of informational entropy used was $\sum_x P(R = x) \times \log_2 \left( \frac{1}{P(R = x)} \right)$. Both parameters were added to the library to aid the user in studying the relationship between folding and depth within the structure and in analyzing the data provided by UMS, respectively.

### Code availability

The code is available on Figshare (Data Citation 1: Figshare https://dx.doi.org/10.6084/m9.figshare.c.3291326).

## Data Records

The UMS library for 15 proteins from inherited eye disease is available on Figshare (Data Citation 1: Figshare https://dx.doi.org/10.6084/m9.figshare.c.3291326). Table 1 presents the PDB file names for each of the proteins included in the study. For each of the proteins analyzed there are five separate files available to describe the data. Figure 3 displays examples of what each of these files looks like. The first is the mutation matrix, available in the protein_matrix.txt format. This can be opened using a standard text editor as well as in Microsoft Excel. The mutation matrix is ideal for an investigator who wants access to the raw data. Since the file can be opened in a number of programs, the user has the ability to analyze and manipulate the data however he/she pleases. The standard and clustered heat maps are available in the protein_standard.html and protein_cluster.html formats, respectively. The size of each of these maps for the proteins is shown in Table 1.
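The informational-entropy descriptor defined in the Methods can be computed directly from its definition; the sketch below reproduces the two-location example given there (uniform 0.5 values give entropy 0; an even split between 0.0 and 1.0 gives 1 bit). Binning the propensities by rounding is an assumption, since the published script's binning is not specified here.

```python
import math
from collections import Counter

def informational_entropy(propensities, ndigits=1):
    """Shannon entropy, in bits, of the unfolding propensities seen at
    one residue position: sum over outcomes of P(x) * log2(1 / P(x)).

    Propensities are binned by rounding; the actual binning used in
    the published Python script is not specified in the text.
    """
    counts = Counter(round(p, ndigits) for p in propensities)
    n = len(propensities)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# All 19 substitutions equally destabilizing: no uncertainty.
print(informational_entropy([0.5] * 19))               # 0.0
# Split evenly between folded (0.0) and unfolded (1.0): 1 bit.
print(informational_entropy([0.0] * 10 + [1.0] * 10))  # 1.0
```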
The interactive maps facilitate identification of the unfolding propensities even when the protein being studied is large. The HTML file format allows the maps to open as a webpage. From here, rows and/or columns of interest may be selected. Specific regions may also be highlighted for a zoomed-in view. Dragging the mouse over the mutation of interest will reveal the corresponding unfolding propensity. The unfolding propensity ranges from 0 to 1, where 0 is the most thermodynamically stable protein, 0.5 is the folding-unfolding equilibrium, and 1 is a completely unfolded protein. The standard heat maps are ideal for situations where a specific mutation's unfolding propensity is desired, for example, in a clinical setting. Here, each unfolding propensity can be accessed easily. The map can be downloaded and saved for easy reference for a specific patient mutation. The clustered heat map groups residues based on similarity and may be used in studying trends in the protein structure. For example, in structures with disulfide bonds, cysteine residues are clustered together based on the similar destabilization they undergo. This allows us to see the residues that undergo a number of severe mutations or the mutations that have the most harmful effects. A pharmacologist can use this map to identify stabilizing mutations to develop new drugs. The patterned structure is available as a Python file, protein.py. This file is to be opened using UCSF Chimera (http://www.cgl.ucsf.edu/chimera/). Once opened in Chimera, residues may be identified by placing the mouse over the area of interest. Table 1 also displays the size of the Python files to be read by Chimera. This is a 3D map that shows the residues in the protein structure most critical to proper folding. For a particular residue position, foldability could then be used to differentiate between areas that experienced multiple severe mutations, those that experienced a few, and those that had none.
Foldability is a more descriptive parameter than a simple average in that it successfully tallies all severe mutations occurring at a certain location without being influenced by those less severe. Figure 4 shows all of the patterned structures of the inherited eye disease-related proteins that were analyzed. Finally, the data descriptor file is available in the protein_descriptor.txt format. The columns of this file contain the native protein sequence, average unfolding propensity, foldability, informational entropy, and residue depth (in this order). The descriptor file gives the user the flexibility to analyze the data as he/she pleases. Figure 5 shows an example of how one may use this file to analyze the TIMP3 protein. The average unfolding propensity and foldability are plotted against the informational entropy and residue depth.

## Technical Validation

### Validation set criterion (UMS reference)

The validation set for the UMS program was composed of 16 proteins. The proteins were selected from the ProTherm database (http://www.abren.net/protherm/)27 based on available experimental thermodynamic data. Proteins with single mutations whose ΔΔG values were determined using fluorescence from denaturants and CD were selected; specifically, tryptophan fluorescence data for chemical unfolding/refolding in the presence of urea or Gdm-HCl. Proteins with a large number of mutations with thermodynamic data were ideal for the validation. Finally, the proteins needed to have an available PDB file on the Protein Data Bank (http://www.rcsb.org/pdb/)22.
Based on these criteria, the 16 proteins selected for the validation set were: T4 Lysozyme (PDB id: 2LZM), Staphylococcal Nuclease (1STN), Protein L (1HZ6), Barnase (1BNI), Ribonuclease T1 Isozyme (1RN1), Gene V Protein (1VBQ), Chymotrypsin Inhibitor 2 (2CI2), Acyl-Coenzyme A (2ABD), Tyrosine-Protein Kinase (1FMK), Acylphosphatase (1APS), Alpha Spectrin (1AJ3), Dihydrofolate Reductase (1RK4), Ribosomal Protein (1RIS), Tryptophan Synthase (1WQ5), ARC Repressor (1ARR), and Azurin (5AZU). The unfolding propensity was calculated from the ΔΔG values for each of the experimental mutants in the ProTherm database. The percent matching and a Fit-Score were used to evaluate the quality of the output from UMS.

### Homology model validation

As mentioned in the Methods section, an internal control program was designed to validate the homology models used. The results from the internal control are shown in Fig. 6. CFH and IRBP were divided into their 20 and 4 domains, respectively, in the analysis. The data for each of the proteins fit our criteria of being statistically significant and having small confidence intervals with averages close to 0.

## Usage Notes

All of the protein structures that are included in the dataset are human structures. The homology models, while not crystal structures, have been thoroughly tested for stability and represent models of the human proteins. We aim to eventually create a website of proteins that will be constantly updated and take requests for proteins of interest.

How to cite this article: McCafferty, C. L. and Sergeev, Y. V. Dataset of eye disease-related proteins analyzed using the unfolding mutation screen. Sci. Data 3:160112 doi: 10.1038/sdata.2016.112 (2016).

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References
1. Protein structure prediction: recognition of primary, secondary, and tertiary structural features from amino acid sequence. Critical Reviews in Biochemistry and Molecular Biology 30, 1–94 (1995).
2. A surprising simplicity to protein folding. Nature 405, 39–42 (2000).
3. et al. Protein misfolding and degradation in genetic diseases. Human Mutation 14, 186–198 (1999).
4. Protein folding and misfolding. Nature 426, 884–890 (2003).
5. In silico Mapping of Protein Unfolding Mutations for Inherited Disease. Scientific Reports 6, 37298 (2016).
6. CUPSAT: prediction of protein stability upon point mutations. Nucleic Acids Research 34, W239–W242 (2006).
7. I-Mutant 2.0: predicting stability changes upon mutation from the protein sequence or structure. Nucleic Acids Research 33, W306–W310 (2005).
8. et al. The FoldX web server: an online force field. Nucleic Acids Research 33, W382–W388 (2005).
9. SCide: identification of stabilization centers in proteins. Bioinformatics 19, 899–900 (2003).
10. SRide: a server for identifying stabilizing residues in proteins. Nucleic Acids Research 33, W303–W305 (2005).
11. SCPRED: accurate prediction of protein structural class for sequences of twilight-zone similarity with predicting sequences. BMC Bioinformatics 9, 226 (2008).
12. Predicting changes in the stability of proteins and protein complexes: a study of more than 1,000 mutations. Journal of Molecular Biology 320, 369–387 (2002).
13. et al. A method and server for predicting damaging missense mutations. Nature Methods 7, 248–249 (2010).
14. PROVEAN web server: a tool to predict the functional effect of amino acid substitutions and indels. Bioinformatics 31, 2745–2747 (2015).
15. et al. PANTHER: a library of protein families and subfamilies indexed by function. Genome Research 13, 2129–2141 (2003).
16. et al. SNPeffect 4.0: on-line prediction of molecular and structural effects of protein-coding variants. Nucleic Acids Research 40, D935–D939 (2011).
17. et al. Comparison of predicted and actual consequences of missense mutations. Proceedings of the National Academy of Sciences 112, E5189–E5198 (2015).
18. Denaturant m values and heat capacity changes: relation to changes in accessible surface areas of protein unfolding. Protein Science 4, 2138–2148 (1995).
19. Fold recognition by concurrent use of solvent accessibility and residue depth. Proteins: Structure, Function, and Bioinformatics 68, 636–645 (2007).
20. Atom depth in protein structure and function. Trends in Biochemical Sciences 28, 593–597 (2003).
21. Atom depth as a descriptor of the protein interior. Biophysical Journal 84, 2553–2561 (2003).
22. et al. The Protein Data Bank. Nucleic Acids Research 28, 235–242 (2000).
23. Residue depth: a novel parameter for the analysis of protein structure and stability. Structure 7, 723–732 (1999).
24. et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics 25, 1422–1423 (2009).
25. Reduced surface: an efficient way to compute molecular surfaces. Biopolymers 38, 305–320 (1996).
26. Entropy-based algorithms for best basis selection. IEEE Transactions on Information Theory 38, 713–718 (1992).
27. ProTherm, version 4.0: thermodynamic database for proteins and mutants. Nucleic Acids Research 32, D120–D121 (2004).

## Data Citations

1. McCafferty, C. L. & Sergeev, Y. V. Figshare https://dx.doi.org/10.6084/m9.figshare.c.3291326 (2016)

## Affiliations

1. Ophthalmic Genetics and Visual Function Branch, National Eye Institute, NIH, Bethesda, Maryland 20892, USA: Caitlyn L. McCafferty & Yuri V. Sergeev

### Contributions

Y.V.S. performed experiment design and supervised the project; Y.V.S.
performed homology modeling and unfolding calculations; C.L.M. designed the program, wrote the UMS code, and performed data collection; C.L.M. and Y.V.S. contributed to data interpretation and analysis; C.L.M. and Y.V.S. contributed to writing the manuscript. All authors have read and approved the final manuscript.

### Competing interests

The authors declare no competing financial interests.

## Corresponding author

Correspondence to Yuri V. Sergeev.

DOI: https://doi.org/10.1038/sdata.2016.112
# Plus One - Chapter 10 - Straight Lines

We are familiar with two-dimensional coordinate geometry from earlier classes. Mainly, it is a combination of algebra and geometry. A systematic study of geometry by the use of algebra was first carried out by the celebrated French philosopher and mathematician René Descartes in his book "La Géométrie", published in 1637. This book introduced the notion of the equation of a curve and related analytical methods into the study of geometry. The resulting combination of analysis and geometry is now referred to as analytical geometry. In the earlier classes, we initiated the study of coordinate geometry, where we studied coordinate axes, the coordinate plane, plotting of points in a plane, the distance between two points, section formulae, etc. All these concepts are the basics of coordinate geometry.
# Conditions

## “Inline” conditions

An inline condition is a condition used directly for subsetting, rather than through if or a similar statement.

Select the elements for which a condition is true: df$var[conditionToTest]

Select the elements for which a condition is false: df$var[!conditionToTest]

conditionToTest should be replaced by actual R code that evaluates to a logical vector.
# proving transcendental numbers are irrational

I don't understand how every transcendental number is irrational. Is there a way to prove that? I know it just means it's a non-algebraic number, but how does that correlate to irrationality?

• How about proving that every rational is algebraic? – Lord Shark the Unknown Oct 26 '18 at 15:58
• Well, can you write $\pi$ or $e$ as an exact fraction? If you could, wouldn't they be a root of an algebraic equation with rational coefficients? Therefore all transcendental numbers are irrational. – Mohammad Zuhair Khan Oct 26 '18 at 15:58
• For $p,q\in\mathbb Z$, $q\neq0$: $qx-p=0\to x=\frac pq$. – Don Thousand Oct 26 '18 at 16:00
• by definition transcendental number is: 1) irrational and 2) is not a root of polynomial with integer coefficients – Vasya Oct 26 '18 at 16:03

If $$x$$ is transcendental but not irrational, then $$x = a/b$$, with $$a,b$$ integers, and so $$x$$ solves the rational equation $$b t - a = 0$$; but then $$x$$ is algebraic and hence not transcendental.

Summing up the comments: there are two types of numbers in $$\Bbb{R}$$, in the sense that every real number is either algebraic or transcendental. In particular, every rational $$x=\frac{p}{q}$$ is algebraic, since $$x$$ is a root of $$qx-p$$, which is a nonzero integer polynomial. Therefore, if $$x$$ is not algebraic, it cannot be rational.
Calculus can be defined on smooth manifolds.

## Dynamical System

### Vector Field

Vector field $X: M \mapsto T M$ on a smooth manifold $M$ is a continuous map that takes each point on the manifold to an element of the corresponding fiber of the tangent bundle: $X_p \in T_p M$. In other words, a vector field is a section of (the projection of) the tangent bundle: $\pi \circ X = \text{Id}_M$. We visualize a vector field as such: it attaches an arrow to each point of the manifold, which varies continuously across the manifold.

Rough vector field is almost a vector field except that it is not necessarily continuous. Smooth vector field is a vector field that is a smooth map. Space of smooth vector fields $\mathfrak{X}(M)$ on a smooth manifold is the set of all smooth vector fields on the manifold endowed with pointwise addition and scalar multiplication, which is a real vector space and a module over $C^\infty(M)$: if $X \in \mathfrak{X}(M)$ and $f \in C^\infty(M)$, then $f X \in \mathfrak{X}(M)$.

Vector field along a subset of a smooth manifold is a vector field on the subset. Smooth vector field along a subset of a smooth manifold is a vector field along the subset that can be smoothly extended at each point to a neighborhood in the manifold. Any smooth vector field $X$ along a closed subset $A$ of a smooth manifold $M$ can be extended to a smooth vector field $\tilde{X}$ on $M$ that vanishes outside any given open subset $U$ containing $A$.

Vector fields $(X_i)_{i=1}^k$ on a subset of a smooth manifold are linearly independent if they are linearly independent in the tangent space at each point; they span the tangent bundle if they span the tangent space at each point. Local frame $(e_i)_{i=1}^n$, or $(e_i)$, on an open subset $U$ of a smooth n-manifold $M$ is an n-tuple of vector fields on $U$ that are linearly independent and span the tangent bundle. Smooth frame is a frame consisting of smooth vector fields.
Orthonormal vector fields on an open subset of a Euclidean space are vector fields whose values at each point are orthonormal w.r.t. the Euclidean inner product. Orthonormal frame on an open subset of a Euclidean space is a frame consisting of orthonormal vector fields.

Gram-Schmidt Algorithm for Frames: A smooth orthonormal frame $(e_j)$ can be constructed from a smooth frame $(X_j)$ on an open subset $U$ of $\mathbb{R}^n$ such that $\text{Span}\{e_i\}_{i=1}^j = \text{Span}\{X_i\}_{i=1}^j$ for all $j$ at each point.

Global frame for a smooth manifold is a frame on the entire manifold. Parallelizable manifold is a smooth manifold that admits a smooth global frame. Most smooth manifolds do not admit a smooth global frame, e.g. the sphere $\mathbb{S}^2$, and therefore are not parallelizable.

Coordinate vector field $\partial / \partial x^i$ w.r.t. a smooth chart on a smooth manifold is the vector field consisting of the i-th coordinate vector $\partial / \partial x^i |_p$ at each point of the coordinate domain. Coordinate vector fields are smooth vector fields. Coordinate frame $(\partial / \partial x^i)$ is the smooth local frame consisting of the coordinate vector fields.

Component function $X^i: U \mapsto \mathbb{R}$ of a rough vector field in a smooth chart is the real-valued function on the coordinate domain that provides the i-th component of the field w.r.t. the coordinate frame associated with the chart: $X_p = X^i(p) \frac{\partial}{\partial x^i} \bigg{|}_p$. The restriction of a rough vector field to the coordinate domain of a smooth chart is smooth if and only if its component functions w.r.t. this chart are smooth.

Applying a smooth vector field $X \in \mathfrak{X}(M)$ to a smooth real-valued function $f \in C^\infty(U)$ on an open subset $U$ of $M$ gives a smooth real-valued function $X f \in C^\infty(U)$, defined by $(X f)(p) = X_p f$.
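The Gram-Schmidt algorithm for frames mentioned above can be sketched at a single point, where the frame vectors $(X_j)$ are just linearly independent vectors in $\mathbb{R}^n$; smoothness of the resulting frame follows because each step is a smooth function of the input vectors.

```python
import math

def gram_schmidt(frame):
    """Orthonormalize a list of linearly independent vectors in R^n.

    Applied pointwise to a smooth frame (X_j), this yields the
    orthonormal frame (e_j) with Span{e_1..e_j} = Span{X_1..X_j}
    for every j, as in the statement above.
    """
    ortho = []
    for x in frame:
        # Subtract projections onto the vectors already produced.
        v = list(x)
        for e in ortho:
            coeff = sum(vi * ei for vi, ei in zip(v, e))
            v = [vi - coeff * ei for vi, ei in zip(v, e)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        ortho.append([vi / norm for vi in v])
    return ortho

e1, e2 = gram_schmidt([[3.0, 0.0], [1.0, 2.0]])
print(e1)  # [1.0, 0.0]
print(e2)  # [0.0, 1.0]
```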
For a rough vector field $X: M \mapsto T M$, the following are equivalent: (1) $X$ is a smooth map; (2) applying $X$ to any smooth real-valued function on $M$ yields a smooth function: $\forall f \in C^\infty(M)$, $X f \in C^\infty(M)$; (3) the same holds on every open subset: for every open $U \subset M$ and every $f \in C^\infty(U)$, $X f \in C^\infty(U)$. Derivation on smooth real-valued functions is a linear transformation over $\mathbb{R}$ that satisfies the product rule: $D \in \mathcal{L}(C^\infty(M), C^\infty(M))$; $\forall f, g \in C^\infty(M)$, $D (f g) = (D f) g + f (D g)$. A transformation on smooth real-valued functions on a smooth manifold is a derivation if and only if it is a smooth vector field on the manifold. Given a smooth map $F \in C^\infty(M, N)$, a vector field $X$ on $M$ and a vector field $Y$ on $N$ are $F$-related if the differential of $F$ takes $X$ to $Y$ at every point: $\forall p \in M$, $dF_p(X_p) = Y_{F(p)}$. Given a smooth map $F \in C^\infty(M, N)$ and smooth vector fields $X \in \mathfrak{X}(M)$, $Y \in \mathfrak{X}(N)$, the following are equivalent: (1) $X$ and $Y$ are $F$-related; (2) $X (f \circ F) = (Y f) \circ F$ for all $f \in C^\infty(U)$ on every open subset $U$ of $N$; (3) the composition of $F$ with any integral curve of $X$ is an integral curve of $Y$. Pushforward $F_∗ X$ of a smooth vector field $X \in \mathfrak{X}(M)$ by a diffeomorphism $F: M \mapsto N$ is the smooth vector field on $N$ that is $F$-related to $X$, defined by $(F_∗ X)_q = dF_{F^{-1}(q)}(X_{F^{-1}(q)})$. The pushforward of $X$ by $F$ is the only smooth vector field on $N$ that is $F$-related to $X$. A vector field $X$ on a smooth manifold $M$ is tangent to a smooth submanifold $S$ if it lies in the tangent subspace $T_p S$ at every point $p \in S$. A smooth vector field $X$ on a smooth manifold $M$ is tangent to an embedded submanifold $S$ if and only if applying $X$ to any smooth real-valued function on $M$ that equals zero on $S$ gives a function that also equals zero on $S$: $f \in C^\infty(M)$, $f|_S = 0$, then $(X f)|_S = 0$.
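The derivation criterion above can be checked symbolically in a simple case; a minimal sketch with sympy, using the hypothetical vector field $X = x\,\partial/\partial x + \partial/\partial y$ on $\mathbb{R}^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')

def apply_X(h):
    """Apply the vector field X = x d/dx + d/dy to a smooth function on R^2."""
    return x * sp.diff(h, x) + sp.diff(h, y)

f = x**2 * y
g = sp.sin(x) + y

# product rule of a derivation: X(fg) = (Xf) g + f (Xg)
lhs = sp.expand(apply_X(f * g))
rhs = sp.expand(apply_X(f) * g + f * apply_X(g))
```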
Restricting Vector Fields to Submanifolds: If a smooth vector field $Y$ on a smooth manifold $M$ is tangent to a smooth submanifold $S$, then the restricted vector field $Y|_S$ is the only smooth vector field on $S$ that is $\iota$-related to $Y$, where $\iota: S \mapsto M$ is the inclusion map. ### Flow Parameterized curve $\gamma: J \mapsto M$ in a manifold $M$ is a continuous map from an interval $J \subset \mathbb{R}$ to the manifold. The range of a smooth parameterized curve in a smooth manifold need not be a 1-submanifold, e.g. if a curve crosses itself, the subspace topology on the curve fails to be a manifold topology. Starting point $\gamma(0)$ of a curve $\gamma$ is its value at $t = 0$ if $0 \in J$. Curve segment is a curve whose domain is a compact interval: $J = [a, b]$. Starting point $\gamma(a)$ and ending point $\gamma(b)$ of a curve segment are the values of the ends of its domain. Closed curve segment is a curve segment with identical end points: $\gamma(a) = \gamma(b)$. Smooth curve in a smooth manifold is a curve that is a smooth map. Velocity $\gamma'(t)$ or $\dot{\gamma}(t)$ of a smooth curve at a time instance is the pushforward of the coordinate vector $d/dt|_t$ by the curve: $\gamma'(t) = d \gamma_t \left(\frac{d}{d t} \bigg{|}_t \right)$. Velocity $\gamma'(t)$ is a tangent vector of $M$ at $\gamma(t)$. Regular curve in a smooth manifold is a smooth curve with nonzero velocities. Piecewise regular curve segment, or "admissible curve" for short, in a smooth manifold is a curve segment that can be partitioned into regular curve segments. Any two points of a connected smooth manifold can be connected by a piecewise regular curve segment. Integral curve $\gamma: J \mapsto M$ of a vector field $V$ on a smooth manifold $M$ is a differentiable curve whose velocity equals the vector field everywhere on the curve: $\forall t \in J$, $\gamma'(t) = V_{\gamma(t)}$.
In a smooth chart, the coordinate representation of an integral curve solves the autonomous system of ordinary differential equations (ODEs) $\dot{\gamma}^i(t) = V^i(\gamma^1(t), \dots, \gamma^n(t))$, $i = 1, \dots, n$, which is why such curves are called "integral curves". Maximal integral curve is an integral curve that cannot be extended to an integral curve on a larger open interval. Global flow $\theta: \mathbb{R} \times M \mapsto M$ on a smooth manifold $M$ is a continuous map such that $\forall s, t \in \mathbb{R}$, $\forall p \in M$, $\theta(0, p) = p$, $\theta(t, \theta(s, p)) = \theta(s+t, p)$. Equivalently, a global flow is a continuous left $\mathbb{R}$-action on $M$, aka a "one-parameter group action". Every global flow induces a family $(\theta_t)_{t \in \mathbb{R}}$ of transformations on $M$ by $\forall p \in M$, $\theta_t(p) = \theta(t, p)$, and a family $(\theta^{(p)})_{p \in M}$ of curves in $M$ by $\forall t \in \mathbb{R}$, $\theta^{(p)}(t) = \theta(t, p)$. Every transformation induced by a global flow is a homeomorphism; it is a diffeomorphism if the global flow is a smooth map. Flow domain $\mathscr{D}$ for a smooth manifold $M$ is a subset of $\mathbb{R} \times M$ such that for each $p \in M$ the subset $\mathscr{D}^{(p)} = \{t: (t, p) \in \mathscr{D}\}$ is an open interval containing zero. Flow domains look like open tubes around $\{0\} \times M$. Local flow (流) $\theta: \mathscr{D} \mapsto M$ on a smooth manifold $M$ is a continuous map from a flow domain to the manifold such that $\forall p \in M$, $\forall s \in \mathscr{D}^{(p)}$, $\forall t \in \mathscr{D}^{(\theta(s, p))} \cap (\mathscr{D}^{(p)} -s)$, $\theta(0, p) = p$, $\theta(t, \theta(s, p)) = \theta(s+t, p)$. Maximal flow is a flow that cannot be extended to a flow on a larger flow domain. Infinitesimal generator $V$ of a smooth flow $\theta$ on $M$ is the rough vector field on $M$ defined by $V_p = \theta^{(p)'}(0)$.
The infinitesimal generator $V$ of $\theta$ is a smooth vector field on $M$, and each curve $\theta^{(p)}$ is an integral curve of $V$. Flow generated by a smooth vector field is a smooth maximal flow, if it exists, whose infinitesimal generator is the field. Fundamental theorem on flows: Every smooth vector field $V$ on a smooth manifold $M$ (tangent to the boundary, if $M$ has one) generates a unique smooth maximal flow $\theta$. The curve $\theta^{(p)}: \mathscr{D}^{(p)} \mapsto M$ is the unique maximal integral curve of $V$ starting at each $p \in M$. If $s \in \mathscr{D}^{(p)}$, then $\mathscr{D}^{(\theta(s, p))} = \mathscr{D}^{(p)} - s$. For all $t \in \mathbb{R}$, $M_t = \{p: (t, p) \in \mathscr{D}\}$ is an open subset of $M$, and $\theta_t: M_t \mapsto M_{-t}$ is a diffeomorphism with inverse $\theta_{-t}$. Complete vector field is a smooth vector field that generates a global flow. Every compactly-supported smooth vector field on a smooth manifold is complete. Every smooth vector field on a compact smooth manifold is complete.
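The flow of a vector field can be approximated by numerically integrating $\gamma' = V_\gamma$, and the group law $\theta_t \circ \theta_s = \theta_{s+t}$ can then be checked. A sketch with the complete vector field $V = -y\,\partial/\partial x + x\,\partial/\partial y$ on $\mathbb{R}^2$, whose global flow is rotation by angle $t$:

```python
import math

def V(p):
    # complete vector field on R^2 generating rotations: V = -y d/dx + x d/dy
    x, y = p
    return (-y, x)

def flow(t, p, steps=1000):
    """Approximate theta(t, p) by 4th-order Runge-Kutta on gamma' = V(gamma)."""
    h = t / steps
    x = list(p)
    for _ in range(steps):
        k1 = V(x)
        k2 = V([x[i] + h/2 * k1[i] for i in range(2)])
        k3 = V([x[i] + h/2 * k2[i] for i in range(2)])
        k4 = V([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
    return tuple(x)

p = (1.0, 0.0)
a = flow(0.7, flow(0.5, p))   # theta_t(theta_s(p))
b = flow(1.2, p)              # theta_{s+t}(p), equal up to integration error
```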
Flowout Theorem: Given an embedded submanifold $S$ of a smooth manifold $M$ and a smooth vector field $V$ on $M$ that is nowhere tangent to $S$, let $V$ generate the flow $\theta$ with flow domain $\mathscr{D}$, and denote the restricted flow domains $\mathscr{O} = \{(t, p) \in \mathscr{D}: p \in S\}$ and $\mathscr{O}_\delta = \{(t, p) \in \mathscr{O} : |t| < \delta(p)\}$, where $\delta$ is a smooth positive function on $S$; then: (1) the restricted flow $\theta|_\mathscr{O}$ is a smooth immersion; (2) the coordinate vector field $\partial / \partial t$ on $\mathscr{O}$ is $\theta|_\mathscr{O}$-related to $V$; (3) the restricted flow $\theta|_{\mathscr{O}_\delta}$ is injective for some choice of $\delta$, and its range $\theta(\mathscr{O}_\delta)$, called a flowout (流出) from $S$ along $V$, is then an immersed submanifold of $M$ containing $S$, and $V$ is tangent to this submanifold; (4) if $S$ has codimension one, then the restricted flow $\theta|_{\mathscr{O}_\delta}$ is a diffeomorphism onto the flowout, which is an open submanifold of $M$. Equilibrium point of a flow $\theta: \mathscr{D} \mapsto M$ on a smooth manifold $M$ is a point $p$ in $M$ such that $\forall t \in \mathscr{D}^{(p)}$, $\theta(t, p) = p$. Singular point (奇点) or zero of a vector field on a smooth manifold is a point where the vector field is zero: $V_p = 0$. Regular point (常点) of a vector field on a smooth manifold is a point where the field is nonzero. The singular points of a smooth vector field are precisely the equilibrium points of the flow it generates. Canonical Form Near a Regular Point: A smooth vector field $V$ matches the first coordinate vector field w.r.t. a smooth chart on a neighborhood $U$ of any regular point $p$, and the first coordinate function can serve as a local defining function for any embedded hypersurface $S$ containing $p$, given that $V$ is not tangent to $S$ at $p$: $V|_U = \partial/\partial x^1$, $S \cap U = (x^1)^{-1}(0)$.
### First-order Cauchy Problems Real-valued first-order partial differential equations (PDEs) can be reduced to ODEs by the theory of flows. Linear first-order Cauchy problem is a problem of finding a smooth real-valued function $u$ in a neighborhood of an embedded hypersurface $S$ in a smooth manifold $M$ that satisfies a linear first-order PDE $A u + b u = f$ and an initial condition $u|_S = \phi$, where $A$ is a smooth vector field on $M$, $b$ and $f$ are smooth real-valued functions on $M$, and $\phi$ is a smooth real-valued function on $S$. Characteristic curve (特征线) of the PDE is an integral curve of the vector field $A$; along each characteristic curve, the PDE restricts to an ODE. A linear first-order Cauchy problem is noncharacteristic if $A$ is nowhere tangent to $S$. If a linear first-order Cauchy problem is noncharacteristic, then it has a unique solution in a flowout from the initial hypersurface along the vector field. Given a restricted flow domain $\mathscr{O}_\delta$ that satisfies the Flowout Theorem for $A$, precomposition with the restricted flow $\theta_\delta := \theta|_{\mathscr{O}_\delta}$ transforms the problem from the flowout to the restricted flow domain, where $A$ takes its canonical form $\partial/\partial t$, so the PDE becomes a linear first-order ODE $\frac{\partial \hat{u}}{\partial t} + \hat{b} \hat{u} = \hat{f}$ with initial condition $\hat{u}(0) = \phi$, where $\hat{u} = u \circ \theta_\delta$, and $\hat{b}, \hat{f}$ are similarly defined. Thus the solution in the flowout is $u = \hat{u} \circ \theta_\delta^{-1}$, where $\hat{u}(t) = e^{-B(t)} \left(\phi + \int_0^t \hat{f}(\tau) e^{B(\tau)}~d \tau\right)$ and $B(t) = \int_0^t \hat{b}(\tau)~d\tau$. 1-jet bundle $J^1 M$ of a smooth manifold $M$ is the smooth vector bundle $J^1 M = \mathbb{R} \times T^∗ M \mapsto M$, with fibers $\mathbb{R} \times T_x^∗ M$. 1-jet $j^1 u$ of a smooth function $u \in C^\infty(M)$ is the section of the 1-jet bundle $J^1 M$ defined by $j^1 u = (u, d u)$.
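The closed-form solution $\hat{u}(t) = e^{-B(t)}\big(\phi + \int_0^t \hat{f}(\tau) e^{B(\tau)}\,d\tau\big)$ above can be evaluated by numerical quadrature; a minimal sketch with the illustrative data $\hat{b} \equiv 1$, $\hat{f} \equiv 1$, $\phi = 0$, for which the exact solution of $\hat{u}' + \hat{u} = 1$ is $1 - e^{-t}$:

```python
import math

def u_hat(t, phi, b_hat, f_hat, n=2000):
    """Evaluate u(t) = e^{-B(t)} (phi + int_0^t f(s) e^{B(s)} ds), B(t) = int_0^t b,
    by trapezoidal quadrature (a numerical sketch of the formula in the text)."""
    ts = [t * i / n for i in range(n + 1)]
    # cumulative B(t) via the trapezoid rule
    B = [0.0]
    for i in range(1, n + 1):
        B.append(B[-1] + (b_hat(ts[i-1]) + b_hat(ts[i])) / 2 * (ts[i] - ts[i-1]))
    integrand = [f_hat(s) * math.exp(Bi) for s, Bi in zip(ts, B)]
    I = sum((integrand[i-1] + integrand[i]) / 2 * (ts[i] - ts[i-1])
            for i in range(1, n + 1))
    return math.exp(-B[-1]) * (phi + I)

# with b = f = 1 and phi = 0 the exact solution of u' + u = 1 is 1 - e^{-t}
val = u_hat(1.0, 0.0, lambda s: 1.0, lambda s: 1.0)
```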
First-order Cauchy problem is a problem of finding a real-valued function $u$ in a neighborhood of an embedded hypersurface $S$ in a smooth manifold $M$ that satisfies a first-order PDE $F(x, u, d u) = 0$ and an initial condition $u|_S = \phi$, where $\phi$ is a smooth real-valued function on $S$ and $F$ is a smooth real-valued function on an open subset $W$ of the 1-jet bundle $J^1 M$. A first-order Cauchy problem is noncharacteristic if there is a smooth section $\sigma$ of $T^∗ M|_S$ such that $(x, \phi(x), \sigma(x)) \in W$, $\sigma(x)|_{T_x S} = d \phi(x)$, and $F(x, \phi(x), \sigma(x)) = 0$ at all points $x$ in $S$, and the vector field $A^{\phi, \sigma}$ along $S$ is nowhere tangent to $S$, defined as $A^{\phi, \sigma}|_x = \sum_{i=1}^n \frac{\partial F}{\partial \xi_i}(x, \phi(x), \sigma(x)) \frac{\partial}{\partial x^i}$, where $(\xi_i)$ denote the fiber coordinates on $T^∗ M$. If a first-order Cauchy problem is noncharacteristic, then it has a smooth solution on a neighborhood of each point on the hypersurface. ### Integral Manifold Integral manifold is a generalization of integral curve to higher-dimensional submanifolds. Rank-k distribution, tangent distribution, tangent subbundle, or k-plane field $D$ on a smooth manifold is a k-dimensional subbundle of the tangent bundle: $D = \sqcup_{p \in M} D_p$, $D_p \in G_k(T_p M)$. Smooth distribution on a smooth manifold is a distribution that is a smooth subbundle, i.e. locally spanned by k-tuples of smooth vector fields: $\forall p \in M$, $\exists U \subset M$, $\exists (X_i)_{i=1}^k \subset \mathfrak{X}(U)$: $\forall q \in U$, $\text{span}(X_{i,q})_{i=1}^k = D_q$. We denote the space of smooth global sections of a smooth distribution as $\Gamma(D)$; note that $\Gamma(D) \subset \mathfrak{X}(M)$. Involutive distribution on a smooth manifold is a smooth distribution such that the Lie bracket of any pair of its smooth local sections is also a local section; or equivalently, the space $\Gamma(D)$ of its smooth global sections is a Lie subalgebra of $\mathfrak{X}(M)$.
Local Frame Criterion for Involutivity: A smooth distribution is involutive if the manifold can be covered by open sets on each of which the distribution admits a smooth local frame such that the Lie bracket of every pair of vector fields in the frame is a local section of the distribution. Involutivity can be rephrased in terms of differential forms. Integral manifold of a smooth rank-k distribution on a smooth manifold is an immersed k-submanifold whose tangent space at each point matches the distribution: $N \subset M$, $\forall p \in N$, $T_p N = D_p$. Integrable distribution on a smooth manifold is a smooth distribution such that each point of the manifold is contained in an integral manifold of the distribution. Every integrable distribution is involutive. Flat chart for a rank-k distribution on a smooth n-manifold is a smooth coordinate chart whose first k coordinate vector fields span the distribution, and whose image is a box. In a flat chart, the preimage of every slice with fixed last (n-k) coordinates is an integral manifold of the distribution. Completely integrable distribution on a smooth manifold is a smooth distribution such that the manifold can be covered by flat charts for the distribution. Every completely integrable distribution is integrable. Frobenius theorem: Every involutive distribution is completely integrable. Therefore, a distribution is (completely) integrable if and only if it is involutive. Local structure of integral manifolds: For an involutive rank-k distribution on a smooth manifold, the intersection of any integral manifold and the coordinate domain of a flat chart is a countable union of disjoint open subsets of parallel k-dimensional slices, each of which is an open subset of the integral manifold and an embedded submanifold.
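Involutivity can be tested in coordinates via the Lie bracket formula $[X, Y]^i = X^j \partial_j Y^i - Y^j \partial_j X^i$. A sketch with the standard contact distribution $\text{span}\{\partial_x + y\,\partial_z,\ \partial_y\}$ on $\mathbb{R}^3$, which is not involutive (and hence, by the Frobenius theorem, not integrable):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def bracket(X, Y):
    """Lie bracket of vector fields on R^3 given by component tuples:
    [X, Y]^i = X^j dY^i/dx^j - Y^j dX^i/dx^j."""
    return tuple(
        sum(sp.sympify(X[j]) * sp.diff(sp.sympify(Y[i]), coords[j])
            - sp.sympify(Y[j]) * sp.diff(sp.sympify(X[i]), coords[j])
            for j in range(3))
        for i in range(3))

# the contact distribution span{d/dx + y d/dz, d/dy} is not involutive:
X = (1, 0, y)
Y = (0, 1, 0)
B = bracket(X, Y)
```

Here $[X, Y] = -\partial/\partial z$, which lies in the span (vectors of the form $(a, b, a y)$) at no point, so the local frame criterion fails everywhere.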
Weakly embedded submanifold in a smooth manifold is a smooth submanifold such that every smooth map whose image lies in the submanifold is smooth as a map to the submanifold: $H \subset M$, $\forall F \in C^\infty(N, M)$, $F(N) \subset H \implies F \in C^\infty(N, H)$. Every integral manifold of an involutive distribution is weakly embedded. Flat chart for a collection of k-submanifolds of a smooth n-manifold is a smooth coordinate chart whose image is a box, and every submanifold in the collection intersects the coordinate domain in either the empty set or a countable union of preimages of slices with fixed last (n-k) coordinates. Foliation (叶状结构) $\mathscr{F}$ of dimension k on a smooth n-manifold is a partition of the manifold into connected, nonempty, immersed k-submanifolds (called the leaves $L$ of the foliation), for which there are flat charts covering the manifold. The collection of tangent spaces to the leaves of a foliation on a smooth manifold forms an involutive distribution on the manifold. Global Frobenius Theorem: The collection of all maximal connected integral manifolds of an involutive distribution on a smooth manifold forms a foliation of the manifold. Therefore, foliations are in one-to-one correspondence with involutive distributions. Invariant distribution on a smooth manifold w.r.t. a diffeomorphic transformation, or Φ-invariant distribution, is a distribution that is invariant under pushforward by the map: $\Phi: M \cong M$, $\forall x \in M$, $d \Phi_x (D_x) = D_{\Phi(x)}$. Similarly, Φ-invariant foliation on a smooth manifold is a foliation that is invariant under the map: $\forall L \in \mathscr{F}$, $\Phi(L) \in \mathscr{F}$. An involutive distribution is Φ-invariant if and only if the foliation it determines is Φ-invariant. ### Overdetermined PDEs Overdetermined system of partial differential equations is one where the number of PDEs is larger than that of unknown functions. 
Overdetermined linear first-order Cauchy problem: given a linearly independent m-tuple of smooth vector fields $(A_i)_{i=1}^m$ on an open subset $W$ of the Euclidean n-space, $m \le n$, and an embedded (n-m)-submanifold $S \subset W$ that is transverse to the vector fields, find a smooth real-valued function $u$ that satisfies equations $A_i u = f_i$ with initial condition $u|_S = \phi$, where $(f_i)_{i=1}^m \subset C^\infty(W)$ and $\phi \in C^\infty(S)$. An overdetermined first-order Cauchy problem has a unique solution on a neighborhood of each point on the submanifold, if it satisfies the following compatibility conditions: $\exists c_{ij}^k \in C^\infty(W)$: (involutivity) $[A_i, A_j] = c_{ij}^k A_k$, and $A_i f_j - A_j f_i = c_{ij}^k f_k$. Consider an overdetermined system of first-order PDEs: $\nabla u(x) = J(x, u(x))$, where $u \in C^\infty(U, \mathbb{R}^m)$, $U \subset \mathbb{R}^n$, $J \in C^\infty(W, M_{m,n}(\mathbb{R}))$, $W \subset \mathbb{R}^n \times \mathbb{R}^m$. If the matrix-valued function satisfies $\forall i \in m$, $\forall j, k \in n$, $\frac{\partial J^i_j}{\partial x^k} + J^l_k \frac{\partial J^i_j}{\partial y^l} = \frac{\partial J^i_k}{\partial x^j} + J^l_j \frac{\partial J^i_k}{\partial y^l}$, then for any initial condition $u(x_0) = y_0$, $(x_0, y_0) \in W$, the PDE system has a unique solution in a neighborhood of $x_0$. ## Exterior Differentiation Exterior differentiation allows for a generalization of differential operators such as gradient, divergence, curl, and Laplacian. ### Covector Field Covector field $\omega$ is a local or global section of the cotangent bundle. The value $\omega_p$ of a covector field at a point is denoted by subscript, while parentheses are reserved for the action $\omega(v)$ of a covector on a vector. 
We visualize a covector field as such: in each tangent space, it defines a linear hyperplane as the zero set and a parallel affine hyperplane as the level set at value one, both of which vary continuously across the manifold. As with vector fields, a rough covector field need not be continuous, and a smooth covector field is one that is a smooth map. Action $\omega(X)$ of a rough covector field on a vector field on a smooth manifold is the real-valued function on the manifold that equals the action of the covector on the vector at each point: $\forall p \in M$, $\omega(X)(p) = \omega_p(X_p)$. Space of smooth covector fields $\mathfrak{X}^∗ (M)$ on a smooth manifold, endowed with pointwise vector addition and scalar multiplication, is a real vector space and a module over $C^\infty(M)$. Local coframe $(\varepsilon^i)_{i=1}^n$ is a local frame for the cotangent bundle. Smooth coframe is a coframe consisting of smooth covector fields. Global coframe is a coframe on the entire manifold. Component functions $\omega_i: U \mapsto \mathbb{R}$ of a rough covector field w.r.t. a coframe $(\varepsilon^i)$ are the maps whose values form the coordinate representation of the covector at each point: $\omega_i(p) = \omega_p(e_i |_p)$, where $(e_i)$ is the dual frame. Given a coframe, a covector field can be written uniquely as $\omega = \omega_i \varepsilon^i$. Coordinate coframe $(\lambda^i)$ is a smooth local coframe consisting of the coordinate covector fields associated with a smooth chart. A coframe $(\varepsilon^i)$ and a frame $(e_i)$ are dual to each other if their values at each point are dual bases: $\varepsilon^i(e_j) = \delta^i_j$. Component functions of a rough covector field w.r.t. a smooth chart are the component functions of the field w.r.t. the coordinate coframe: $\omega_i(p) = \omega_p(\partial/\partial x^i |_p)$. The action of a rough covector field on a vector field equals the sum of products of their component functions in any smooth frame and its dual coframe: $\omega(X) = \omega_i X^i$.
Pullback $F^∗ \omega$ of a covector field $\omega$ on $N$ by a smooth map $F \in C^\infty(M, N)$ is the rough covector field on $M$ whose value at each point equals the pullback of the covector field at that point: $(F^∗ \omega)_p = d F_p^∗ (\omega_{F(p)})$, i.e. $\forall v \in T_p M$, $(F^∗ \omega)_p (v) = \omega_{F(p)}(d F_p (v))$. The pullback of any covector field by a smooth map is a covector field; if the covector field is smooth, its pullback is also smooth. Restriction $\iota^∗ \omega$ of a smooth covector field $\omega \in \mathfrak{X}^∗ (M)$ to a smooth submanifold is the pullback of the field by the inclusion map $\iota: S \mapsto M$; equivalently, it is the restriction of the covector field to vectors tangent to the submanifold. ### Differential Form Differential form of degree $k$ or k-form $\omega$ is a section of the alternating k-tensor bundle, i.e. an alternating k-tensor field, aka a k-covector field. Space of smooth k-forms $\Omega^k(M)$ on a smooth manifold is the space of smooth alternating k-tensor fields: $\Omega^k(M) = \Gamma(\Lambda^k T^∗ M)$. The space of smooth 1-forms is just the space of smooth covector fields: $\Omega^1(M) = \mathfrak{X}^∗ (M)$. Sum space of smooth differential forms $\Omega^∗ (M)$ on a smooth n-manifold is the direct sum of all the smooth k-form spaces on the manifold: $\Omega^∗ (M) = \oplus_{k=0}^n \Omega^k(M)$. Exterior algebra $(\Omega^∗ (M), \wedge)$ of a smooth n-manifold $M$ is the associative, anticommutative graded algebra consisting of its space of smooth differential forms and the pointwise wedge product. Component function $\omega_I$ of a rough k-form w.r.t. a smooth chart is the action of the k-form on the k-tuple of coordinate vector fields indexed by an increasing multi-index: $\omega_I = \omega(\partial/\partial x^i)_{i \in I}$. 
Given a smooth chart, every k-form can be written uniquely as a linear combination of elementary k-forms based on the coordinate coframe and increasing multi-indices of length $k$: $\omega = \sum_I' \omega_I d x^I$, where $d x^I = \wedge_{i \in I} d x^i$. Pullback $F^∗ \omega$ of a k-form on $N$ by a smooth map $F \in C^\infty(M, N)$ is the rough k-form on $M$ whose value at each point equals the pullback of the k-covector at that point: $(F^∗ \omega)_p = d F_p^∗ (\omega_{F(p)})$, i.e. $\forall v_i \in T_p M$, $(F^∗ \omega)_p (v_i)_{i=1}^k = \omega_{F(p)}(d F_p (v_i))_{i=1}^k$. The pullback of any k-form by a smooth map is a k-form; if the k-form is smooth, its pullback is also smooth. Given a smooth chart $(y^i)$ on the codomain, the pullback of a k-form by a smooth map can be written as: $F^∗ (\sum_I' \omega_I d y^I) = \sum_I' (\omega_I \circ F) \bigwedge_{i \in I} d (y^i \circ F)$. Given a smooth chart $(x^i)$ on the domain and a smooth chart $(y^i)$ on the codomain, the pullback of an n-form by a smooth map can be written as: $F^∗ (u (\wedge_{i=1}^n d y^i)) = (u \circ F) (\det DF) (\wedge_{i=1}^n d x^i)$, where $\det DF$ is the determinant of the Jacobian matrix of the map in these coordinates. ### Exterior Derivative The most important application of covector fields is to allow for an invariant definition of the differential of a smooth real-valued function on a smooth manifold. Differential (微分) $d f$ of a smooth real-valued function on a smooth manifold is the covector field defined by $\forall p \in M$, $\forall v \in T_p M$, $d f_p(v) = v f$. Due to the canonical identification $T_p \mathbb{R} \leftrightarrow \mathbb{R}$, the definitions of the differential of a smooth real-valued function as a tangent map $d f: T M \mapsto T \mathbb{R}$ and as a covector field where $d f_p: T_p M \mapsto \mathbb{R}$ are the same. The action of the differential of a smooth real-valued function on a vector field is thus $d f(X) = X f$.
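The n-form pullback formula above can be verified symbolically for the polar-coordinate map $F(r, \theta) = (r \cos\theta, r \sin\theta)$, for which $F^∗(dy^1 \wedge dy^2) = r\, dr \wedge d\theta$; a minimal sketch:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# the polar-coordinate map F(r, theta) = (r cos theta, r sin theta)
F = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# F^*(dy^1 ^ dy^2) = (det DF) dr ^ dtheta, so compute the Jacobian determinant
J = F.jacobian([r, theta])
detDF = sp.simplify(J.det())

# same coefficient via d(y^1 o F) ^ d(y^2 o F): the dr ^ dtheta coefficient is
# (dF1/dr)(dF2/dtheta) - (dF1/dtheta)(dF2/dr)
coeff = sp.simplify(J[0, 0] * J[1, 1] - J[0, 1] * J[1, 0])
```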
The differential of a smooth function is a smooth covector field: $d: C^\infty(M) \mapsto \mathfrak{X}^∗ (M)$. The component functions of a differential in a smooth chart are the partial derivatives w.r.t. those coordinates: $d f = \frac{\partial f}{\partial x^i} \lambda^i$. The differential of a coordinate function is the corresponding coordinate covector field: $d x^i = \lambda^i$; we therefore use $d x^i$ to denote a coordinate covector field. Map of degree m on a graded algebra $A = \oplus_k A^k$ is a linear transformation that maps each subspace to the subspace $m$ indices higher. Antiderivation on a graded algebra is a linear transformation such that $T (x y) = (T x) y + (-1)^k x (T y)$, where $x \in A^k$. Exterior differentiation (外微分) $d: \Omega^* (M) \mapsto \Omega^* (M)$ of smooth forms is the unique extension of the differential $d: C^\infty(M) \mapsto \mathfrak{X}^* (M)$ to an antiderivation of degree +1 on the exterior algebra whose square is zero. Exterior differentiation has the following properties: (1) map of degree +1: $\forall k \in \{i\}_{i=0}^n$, $d \in \mathcal{L}(\Omega^k(M), \Omega^{k+1}(M))$; (2) antiderivation: $d(\omega \wedge \eta) = d \omega \wedge \eta + (-1)^k \omega \wedge d \eta$, where $\omega \in \Omega^k(M)$; (3) repeated action vanishes: $d \circ d = 0$; (4) commutes with pullbacks: $\forall F \in C^\infty(M, N)$, $\forall \omega \in \Omega^* (N)$, $F^* (d \omega) = d(F^* \omega)$. Exterior derivative (外导数) $d \omega$ of a smooth k-form on an open submanifold or a regular domain of a Euclidean space is the (k+1)-form defined by $d(\sum_I' \omega_I d x^I) = \sum_I' d \omega_I \wedge d x^I$, where $(d x^i)$ is the standard coordinate coframe. In particular, the exterior derivative of a smooth 1-form can be written as: $d(\omega_j d x^j) = \sum_{i<j} \left( \frac{\partial \omega_j}{\partial x^i} - \frac{\partial \omega_i}{\partial x^j} \right) d x^i \wedge d x^j$.
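The identity $d \circ d = 0$ and the 1-form formula above can be checked symbolically on $\mathbb{R}^3$; a minimal sketch representing a 1-form by its component tuple:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def d0(f):
    """Exterior derivative of a 0-form: components (df)_i = df/dx^i."""
    return tuple(sp.diff(f, c) for c in coords)

def d1(w):
    """Exterior derivative of a 1-form w = w_i dx^i: the component of dw
    on dx^i ^ dx^j (i < j) is dw_j/dx^i - dw_i/dx^j."""
    return {(i, j): sp.diff(w[j], coords[i]) - sp.diff(w[i], coords[j])
            for i in range(3) for j in range(i + 1, 3)}

f = x * y**2 * sp.exp(z)
ddf = d1(d0(f))   # d(df) = 0 because mixed partial derivatives commute
```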
Given a smooth chart, the exterior differentiation can be written in the form of the exterior derivative, where $(d x^i)$ is the coordinate coframe. Exact covector field or exact differential is a smooth covector field that equals the differential of a smooth real-valued function: $\exists f \in C^\infty(M)$, $\omega = d f$. We call this function a potential for the exact covector field. The potentials for an exact covector field differ only by a constant on each component of the manifold. Conservative covector field is a smooth covector field whose line integral over every piecewise smooth closed curve segment is zero; equivalently, its line integrals over piecewise smooth curve segments are path-independent, i.e. only depend on the starting and ending points. A smooth covector field is conservative if and only if it is exact. Closed covector field is a smooth covector field whose Jacobian in every smooth chart is symmetric, or equivalently, whose Jacobian in every chart in a smooth atlas is symmetric: $\frac{\partial \omega_j}{\partial x^i} = \frac{\partial \omega_i}{\partial x^j}$. Every exact covector field is closed. The pullback of a covector field by a local diffeomorphism preserves closedness and exactness of the covector field. Star-shaped subset of a vector space is a subset that includes the line segment between one point and any point in the subset: $\exists c \in A$, $A = \cup_{p \in A} \overline{cp}$. Every convex subset is star-shaped. Poincaré Lemma for Covector Fields: Every closed covector field on a star-shaped open subset of a Euclidean space $\mathbb{R}^n$ or a closed upper half-space $\mathbb{H}^n$ is exact. Every closed covector field is exact on a collection of open sets that cover the manifold. Every closed covector field is exact on any simply connected manifold. Exact k-form is a k-form that equals the exterior differentiation of a smooth (k-1)-form: $\exists \eta \in \Omega^{k-1}(M)$, $\omega = d \eta$. 
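The hypotheses of the Poincaré Lemma matter: on the punctured plane $\mathbb{R}^2 \setminus \{0\}$, which is neither star-shaped nor simply connected, the angle form $\omega = (-y\, dx + x\, dy)/(x^2 + y^2)$ is closed but not exact, since its line integral over the unit circle is $2\pi \neq 0$, so it is not conservative. A symbolic check:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

w1 = -y / (x**2 + y**2)   # omega = w1 dx + w2 dy on R^2 minus the origin
w2 = x / (x**2 + y**2)

# closedness: dw2/dx - dw1/dy = 0 (symmetric Jacobian)
closed = sp.simplify(sp.diff(w2, x) - sp.diff(w1, y))

# line integral over the unit circle gamma(t) = (cos t, sin t):
# pull back omega by gamma and integrate over [0, 2 pi]
pullback = (w1.subs({x: sp.cos(t), y: sp.sin(t)}) * sp.diff(sp.cos(t), t)
            + w2.subs({x: sp.cos(t), y: sp.sin(t)}) * sp.diff(sp.sin(t), t))
integral = sp.integrate(sp.simplify(pullback), (t, 0, 2 * sp.pi))
```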
Closed k-form is a smooth k-form whose exterior differentiation is zero: $d \omega = 0$. Every exact differential form is closed. Every closed differential form is locally exact. ### Derived Differential Operators Figure: Commutative diagram of some differential operators on an oriented Riemannian 3-manifold. Every pseudo-Riemannian metric $g$ is equivalent to a smooth bundle isomorphism $\hat{g}: T M \mapsto T^∗ M$ defined by $\hat{g}(v)(w) = g_p(v, w)$. Musical isomorphisms between smooth vector fields and smooth covector fields on a pseudo-Riemannian manifold $(M, g)$ are the two vector space (and module) isomorphisms flat (降X) or lower an index (降指标) $\flat: \mathfrak{X}(M) \mapsto \mathfrak{X}^∗ (M)$ and sharp (升ω) or raise an index (升指标) $\sharp: \mathfrak{X}^∗ (M) \mapsto \mathfrak{X}(M)$ defined by $X^\flat (Y) = \hat{g}(X)(Y) = g(X, Y)$ and $\omega^\sharp = \hat{g}^{-1}(\omega)$. Given a smooth local frame $(e_i)$ and its dual coframe $(\varepsilon^i)$, the musical isomorphisms have coordinate representations $X^\flat = g_{ij} X^i \varepsilon^j$ and $\omega^\sharp = g^{ij} \omega_i e_j$, where $(g^{ij})$ is the inverse of the matrix representation of the pseudo-Riemannian metric so that $g^{ij} g_{jk} = \delta^i_k$. Gradient $\text{grad}~f$ of a smooth real-valued function on a pseudo-Riemannian manifold is the vector field obtained from the differential of the function by raising an index: $\text{grad}~f = (d f)^\sharp$. Gradient and differential on a Riemannian manifold are related by $d f_p = \langle \text{grad}~f|_p, \cdot \rangle_g$. Given a smooth local frame, gradient can be written as $\text{grad}~f = g^{ij} (e_i f) e_j$. Gradient and differential have the same coordinate representation in any orthonormal frame. 
Divergence $\text{div}~X$ of a smooth vector field on a Riemannian n-manifold $(M, g)$ is the smooth real-valued function that locally satisfies the equation $d(X \lrcorner d V_g) = (\text{div}~X) d V_g$, where $d V_g$ is the Riemannian density. Given a smooth coordinate frame, divergence can be written as $\text{div}~X = (\sqrt{\det g})^{-1} \frac{\partial}{\partial x^i} (X^i \sqrt{\det g})$, where $\det g$ is the determinant of the component matrix of the Riemannian metric in these coordinates. Curl $\text{curl}~X$ of a smooth vector field on an oriented Riemannian 3-manifold is the smooth vector field defined by $\text{curl}~X = \beta^{-1} d(X^\flat)$, where $\beta: TM \mapsto \Lambda^2 T^* M$ is the smooth bundle isomorphism defined by $\beta(X) = X \lrcorner d V_g$. Geometric Laplacian or Laplace–Beltrami operator $\Delta f$ of a smooth real-valued function on a Riemannian manifold is the smooth real-valued function defined by the divergence of the gradient of the function: $\Delta f = \text{div}(\text{grad}~f)$. Many authors define the Laplacian with a negative sign so that its eigenvalues are nonnegative, but the given definition is much more common in Riemannian geometry. Given a smooth coordinate frame, the Laplacian can be written as $\Delta f = (\sqrt{\det g})^{-1} \frac{\partial}{\partial x^i} \left( g^{ij} \frac{\partial f}{\partial x^j} \sqrt{\det g} \right)$. Hodge star operator $∗$ is the smooth bundle homomorphism between alternating tensor bundles $\Lambda^k T^∗ M$ and $\Lambda^{n-k} T^∗ M$ on an oriented Riemannian n-manifold $(M, g)$ for each $k \in \{i\}_{i=0}^n$, determined by $\forall \omega, \eta \in \Omega^k(M)$, $\omega \wedge ∗ \eta = \langle \omega, \eta \rangle_g d V_g$. In particular, for smooth real-valued functions, $∗ f = f d V_g$.
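The coordinate formula for the Laplacian above can be exercised on the unit 2-sphere with the round metric $g = \text{diag}(1, \sin^2\theta)$ in spherical coordinates; the degree-1 spherical harmonic $f = \cos\theta$ satisfies $\Delta f = -2f$ (eigenvalue $-l(l+1)$ with $l = 1$). A sketch:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
coords = (theta, phi)

# round metric on the unit 2-sphere in spherical coordinates
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
sqrt_det = sp.sin(theta)  # sqrt(det g) = sin(theta) on 0 < theta < pi

def laplacian(f):
    """Delta f = (det g)^{-1/2} d/dx^i ( g^{ij} (det g)^{1/2} df/dx^j )."""
    total = 0
    for i in range(2):
        inner = sum(g_inv[i, j] * sp.diff(f, coords[j]) for j in range(2))
        total += sp.diff(sqrt_det * inner, coords[i])
    return sp.simplify(total / sqrt_det)

# f = cos(theta) is a degree-1 spherical harmonic: Delta f = -2 f
lap = laplacian(sp.cos(theta))
```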
Laplace–Beltrami operator $\Delta \omega$ of a smooth k-form on an oriented compact Riemannian n-manifold is the smooth k-form defined by $\Delta \omega = d d^∗ \omega + d^∗ d \omega$, where $d^∗$ is the map of degree -1 defined by $d^∗ \omega = (-1)^{n(k+1)+1} ∗ d ∗ \omega$, where $∗$ is the Hodge star operator. Harmonic k-form is a smooth k-form in the kernel of the Laplace–Beltrami operator: $\Delta \omega = 0$. Harmonic function is a harmonic 0-form. Harmonic analysis extends to real-valued functions on smooth manifolds, e.g. spherical harmonics. ## Covariant Differentiation Covariant differentiation, or connection, allows for a generalization of differential operators such as directional derivative and Hessian. ### Covariant Derivative to a Vector Field Connection is a rule that "connects" nearby tangent spaces on a smooth manifold, such that tangent vectors at different points can be compared, and directional derivatives of vector fields can be computed intrinsically. Connection $\nabla$ in a smooth vector bundle over a smooth manifold is a map that takes a smooth vector field on the manifold and a smooth section of the bundle to another smooth section of the bundle, which is linear over smooth functions in the first argument, is linear over real numbers in the second, and satisfies the product rule: given a smooth vector bundle $(E, \pi)$ over $M$, $\nabla: \mathfrak{X}(M) \times \Gamma(E) \mapsto \Gamma(E)$: (1) $\forall f_1, f_2 \in C^\infty(M)$, $\nabla_{f_1 X_1 + f_2 X_2} Y = f_1 \nabla_{X_1} Y + f_2 \nabla_{X_2} Y$; (2) $\forall a_1, a_2 \in \mathbb{R}$, $\nabla_X (a_1 Y_1 + a_2 Y_2) = a_1 \nabla_X Y_1 + a_2 \nabla_X Y_2$; (3) $\forall f \in C^\infty(M)$, $\nabla_X (f Y) = f \nabla_X Y + (X f) Y$. A connection in the tangent bundle of a smooth manifold is often simply called a "connection on the manifold": $\nabla: \mathfrak{X}(M) \times \mathfrak{X}(M) \mapsto \mathfrak{X}(M)$. Existence of connections: The tangent bundle of every smooth manifold admits a connection.
Covariant derivative $\nabla_X Y$ (or "invariant derivative") of $Y$ in the direction of $X$, w.r.t. a connection $\nabla$, where $X \in \mathfrak{X}(M)$ is a smooth vector field and $Y \in \Gamma(E)$ is a smooth section of a vector bundle, is the smooth section of the vector bundle given by the connection. Connection coefficients $\Gamma_{ij}^k$ of a connection on a smooth manifold, w.r.t. a smooth local frame, are the smooth real-valued functions that provide the k-th coordinate of the covariant derivative of the j-th vector field in the direction of the i-th vector field: $\forall i, j \in n$, $\nabla_{E_i} E_j = \Gamma_{ij}^k E_k$. Given a smooth local frame over an open subset of a smooth manifold, a connection on the manifold is completely determined in the subset by its connection coefficients: $\forall X, Y \in \mathfrak{X}(U)$, let $X = X^i E_i$, $Y = Y^j E_j$, then $\nabla_{X} Y = (X(Y^k) + X^i Y^j \Gamma_{ij}^k) E_k$. This formula gives a bijection between connections and $n^3$-tuples of smooth real-valued functions on a coordinate domain of a smooth manifold; for a smooth n-manifold with a global frame, the set of connections on its tangent bundle is equipotent to the $n^3$-power of smooth real-valued functions on the manifold. Euclidean directional derivative $\bar{\nabla}_X Y$ of a smooth vector field by a vector field in the Euclidean n-space is the vector field defined by: $\bar{\nabla}_X Y = (\nabla Y) X$, where $X$ and $Y$ on the right hand side are understood as their component functions in the standard coordinates. Euclidean connection $\bar{\nabla}$ on a Euclidean space is the connection in the Euclidean tangent bundle that provides the Euclidean directional derivative: $\bar{\nabla}_X Y = X(Y^k) \frac{\partial}{\partial x^k}$. The connection coefficients of $\bar{\nabla}$ in the standard coordinate frame are all zero: $\bar \nabla_{E_i} E_j = 0$. 
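As a concrete instance of the formula $\nabla_{X} Y = (X(Y^k) + X^i Y^j \Gamma_{ij}^k) E_k$, here is a small Python sketch (an illustration, not from the notes) using the connection coefficients of the flat metric in polar coordinates, $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r$; the example fields have constant coordinate components, so the $X(Y^k)$ term is zero.

```python
# Covariant derivative in coordinates, (∇_X Y)^k = X(Y^k) + X^i Y^j Γ^k_ij,
# for the Levi-Civita connection of the flat metric in polar coordinates.
# Indices: 0 ↔ r, 1 ↔ θ.  Nonzero coefficients: Γ^r_θθ = -r, Γ^θ_rθ = Γ^θ_θr = 1/r.

def gamma(r):
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]  # G[k][i][j] = Γ^k_ij
    G[0][1][1] = -r          # Γ^r_θθ
    G[1][0][1] = 1.0 / r     # Γ^θ_rθ
    G[1][1][0] = 1.0 / r     # Γ^θ_θr
    return G

def covariant_derivative(X, Y, r):
    """Components of ∇_X Y at radius r, for constant-component fields."""
    G = gamma(r)
    return [sum(G[k][i][j] * X[i] * Y[j] for i in range(2) for j in range(2))
            for k in range(2)]

# ∇_{∂θ} ∂θ at r = 2:
print(covariant_derivative([0, 1], [0, 1], 2.0))  # [-2.0, 0.0]
```

The answer $\nabla_{\partial_\theta} \partial_\theta = -r\, \partial_r$ points radially inward: the coordinate field $\partial_\theta$ turns as one moves along a circle, and the connection records that turning.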
Tangential directional derivative $\nabla^\top_v Y$ of a smooth vector field in an embedded submanifold of a Euclidean space in a tangent direction is the directional derivative of a smooth extension of the vector field in this direction, projected to the tangent space at each point: $\nabla^\top_v Y = \pi^\top(\bar{\nabla}_v \tilde Y)$, where $\tilde Y |_M = Y$. Tangential connection $\nabla^\top$ on an embedded submanifold of a Euclidean space is the connection that provides the tangential directional derivative: $\forall X, Y \in \mathfrak{X}(M)$, $\tilde X|_M = X$, $\tilde Y|_M = Y$, $\nabla^\top_X Y = \pi^\top (\bar \nabla_{\tilde X} \tilde Y)$. Every connection in the tangent bundle induces connections in all tensor bundles, and thus defines covariant derivatives of tensor fields of any type. Total covariant derivative $\nabla F$ of a smooth (k,l)-tensor field on a smooth manifold, given a connection in the tangent bundle, is the smooth (k,l+1)-tensor field on the manifold whose action on k smooth covector fields and l+1 smooth vector fields is the action of the covariant derivative of the tensor field in the direction of the last vector field: $\forall F \in \Gamma(T^{(k, l)} T M)$, $\exists \nabla F \in \Gamma(T^{(k, l+1)} T M)$: $\forall \omega^i \in \Omega^1(M)$, $\forall X, Y_j \in \mathfrak{X}(M)$, $(\nabla F)(\omega^i, Y_j, X)^{i \in k}_{j \in l} = (\nabla_X F)(\omega^i, Y_j)^{i \in k}_{j \in l}$. The components of a total covariant derivative in a local frame are written with a semicolon to separate the new index from the preceding indices: for a (k,l)-tensor field, its total covariant derivative has components of the form $F_{(i_{j'})_{j'=1}^l;m}^{(i_j)_{j=1}^k}$. The musical isomorphisms commute with the total covariant derivative operator: $\nabla(F^\flat) = (\nabla F)^\flat$, and $\nabla(F^\sharp) = (\nabla F)^\sharp$.
The (total) covariant derivative of a smooth function on a smooth manifold is its differential 1-form: $\forall u \in C^\infty(M)$, $\nabla u = d u$. Covariant Hessian $\nabla^2 u$ of a smooth function on a smooth manifold is the (0,2)-tensor field defined by its second (total) covariant derivative: $\nabla^2 u = \nabla (d u)$; its action on smooth vector fields can be computed by $\nabla^2 u (Y, X) = Y(X u) - (\nabla_Y X) u$. Hessian operator $\mathcal{H}_u$ (or $\text{Hess}~u$) of a smooth function on a smooth manifold is the (1,1)-tensor field, i.e. an endomorphism field, obtained from the covariant Hessian by raising an index: $\mathcal{H}_u = (\nabla^2 u)^\sharp$; or equivalently, the covariant derivative of the gradient: $\mathcal{H}_u = \nabla (\text{grad}~u)$, because $(\nabla (\nabla u))^\sharp = (\nabla (d u))^\sharp = \nabla ((d u)^\sharp) = \nabla (\text{grad}~u)$.

### Covariant Derivative along a Curve

Vector field along a smooth curve in a smooth manifold is a parameterized curve in the tangent bundle that is compatible with the curve: $V \in C(I, T M)$, $\gamma \in C^\infty(I, M)$, $\forall t \in I$, $V(t) \in T_{\gamma(t)} M$. The set $\mathfrak{X}(\gamma)$ of all smooth vector fields along a smooth curve is a real vector space under pointwise vector addition and scalar multiplication, and is a module over $C^\infty(I)$ with pointwise multiplication. The velocity of a smooth curve is a smooth vector field along the curve: $\gamma' \in \mathfrak{X}(\gamma)$. Extendible vector field along a smooth curve is a smooth vector field along the curve such that there exists a smooth vector field on a neighborhood of the image of the curve that is compatible with this vector field: $\exists \tilde{V} \in \mathfrak{X}(U)$, $U \supset \gamma(I)$: $V = \tilde{V} \circ \gamma$.
Tensor field along a smooth curve in a smooth manifold is a parameterized curve in a tensor bundle that is compatible with the curve: $\sigma \in C(I, T^{(k,l)} T M)$, $\gamma \in C^\infty(I, M)$, $\forall t \in I$, $\sigma(t) \in T^{(k,l)} (T_{\gamma(t)} M)$. Covariant derivative or absolute derivative $D_t$ along a smooth curve in a smooth manifold is an operator on the space of smooth vector fields along the curve, uniquely determined by a connection in the tangent bundle, which is linear over the real numbers, satisfies the product rule, and equals the covariant derivative of every extension of an extendible vector field in the direction of the curve's velocity: $D_t: \mathfrak{X}(\gamma) \mapsto \mathfrak{X}(\gamma)$; (1) $\forall a, b \in \mathbb{R}$, $D_t(a V + b W) = a D_t V + b D_t W$; (2) $\forall f \in C^\infty(I)$, $D_t (f V) = f' V + f D_t V$; (3) if $V$ can be extended to $\tilde V$, then $D_t V(t) = \nabla_{\gamma'(t)} \tilde V$. Recall that for a smooth curve in a Euclidean space, its (Euclidean) velocity and acceleration are the first and second derivatives of its component functions in the standard coordinates: $\gamma'(t) = \frac{d \gamma}{d t}$, $\gamma''(t) = \frac{d^2 \gamma}{d t^2}$. Tangential acceleration $\gamma''^\top$ of a smooth curve on an embedded submanifold of the Euclidean n-space is its Euclidean acceleration projected to the tangent space at each point: $\gamma \in C^\infty(I, M)$, $\forall t \in I$, $\gamma''(t)^\top = \pi^\top(\gamma''(t))$, where $\pi^\top: T \mathbb{R}^n|_M \mapsto T M$ is the tangential projection. Acceleration $D_t \gamma'$ of a smooth curve in a smooth manifold is the covariant derivative of its velocity, w.r.t. a connection in the tangent bundle. Geodesic in a smooth manifold, w.r.t. a connection in the tangent bundle, is a smooth curve with zero acceleration: $\gamma \in C^\infty(I, M)$, $\forall t \in I$, $D_t \gamma'(t) = 0$. Geodesic segment is a geodesic whose domain is a compact interval. 
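In coordinates the geodesic equation reads $\ddot{x}^k + \Gamma^k_{ij} \dot{x}^i \dot{x}^j = 0$. A quick numeric sketch (an assumed setup, not from the notes): integrating it for the flat plane in polar coordinates must trace out a straight line, since geodesics of the Euclidean connection are straight lines.

```python
import math

# Geodesic equation in polar coordinates on the flat plane:
#   r'' = r θ'²,   θ'' = -2 r' θ' / r
# (from Γ^r_θθ = -r and Γ^θ_rθ = 1/r).  Starting at the Cartesian point
# (1, 0) with velocity (0, 1), the geodesic must stay on the line x = 1.

def rhs(s):
    r, th, vr, vth = s
    return [vr, vth, r * vth * vth, -2.0 * vr * vth / r]

def rk4_step(s, dt):
    k1 = rhs(s)
    k2 = rhs([x + dt / 2 * k for x, k in zip(s, k1)])
    k3 = rhs([x + dt / 2 * k for x, k in zip(s, k2)])
    k4 = rhs([x + dt * k for x, k in zip(s, k3)])
    return [x + dt / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

state = [1.0, 0.0, 0.0, 1.0]       # (r, θ, r', θ') at t = 0
n, t_end = 500, 0.5
for _ in range(n):
    state = rk4_step(state, t_end / n)

x = state[0] * math.cos(state[1])
y = state[0] * math.sin(state[1])
print(x, y)                        # ≈ (1.0, 0.5): still on the line x = 1
```

The curved-looking second-order system is just a straight line written in curvilinear coordinates; on a genuinely curved manifold the same integrator produces, e.g., great circles on the sphere.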
Existence and uniqueness of geodesics: For every tangent vector on a smooth manifold, with a connection in the tangent bundle, there exists a geodesic with that initial velocity defined on a neighborhood of zero, and any two such geodesics are identical on their common domain: $\forall (p, v) \in T M$, $\exists \gamma: (a, b) \mapsto M$, $a < 0 < b$: $\gamma(0) = p$, $\gamma'(0) = v$, and $\forall t \in (a, b)$, $D_t \gamma'(t) = 0$. Maximal geodesic $\gamma_v$ with initial velocity $v$ on a smooth manifold, w.r.t. a connection in the tangent bundle, for any tangent vector $v$, is the geodesic whose initial location and velocity are specified by that tangent vector, and whose domain cannot be extended to a larger interval: $\gamma_v = \bigcup \{\gamma : \gamma(0) = p, \gamma'(0) = v, D_t \gamma'(t) = 0\}$. Geodesically complete smooth manifold, w.r.t. a connection in the tangent bundle, is one such that every maximal geodesic is defined for the entire real line: $\forall v \in T M$, $\gamma_v: \mathbb{R} \mapsto M$. Parallel vector field along a smooth curve, w.r.t. a connection in the tangent bundle, is a smooth vector field along the curve with derivative zero: $V \in \mathfrak{X}(\gamma)$, $D_t V = 0$. The velocity of a geodesic is a vector field parallel along the curve. Parallel transport $V$ of a tangent vector along a (piecewise) smooth curve is a parallel vector field along the curve that is compatible with the vector: $\gamma: I \mapsto M$, $v \in T_{\gamma(t_0)} M$, $V \in \mathfrak{X}(\gamma)$, $V(t_0) = v$, $D_t V = 0$. Parallel transport exists and is unique for every smooth curve and every tangent vector on the curve.
Parallel transport map $P^{\gamma}_{t_0 t_1}$ between two tangent spaces on a smooth curve is the map that takes every tangent vector at the initial point to the tangent vector at the final point in parallel transport: $P^{\gamma}_{t_0 t_1}: T_{\gamma(t_0)} M \mapsto T_{\gamma(t_1)} M$, $\forall v \in T_{\gamma(t_0)} M$, $P^{\gamma}_{t_0 t_1}(v) = V(t_1)$. The parallel transport map between every pair of tangent spaces on a smooth curve is a linear isomorphism. The parallel transport map is the means by which a connection "connects" nearby tangent spaces. Parallel frame $(E_i)_{i=1}^n$ along a (piecewise) smooth curve in a smooth n-manifold is the n-tuple of parallel transports of a basis along the curve: $\text{Span}(b_i)_{i=1}^n = T_{\gamma(t_0)} M$, $\forall i \in n$, $E_i(t_0) = b_i$, $D_t E_i = 0$. Every parallel frame along a curve is a frame along the curve, i.e. at every point on the curve it is a basis of the tangent space. Parallel transport determines covariant differentiation: $\forall \gamma \in C^\infty(I, M)$, $\forall V \in \mathfrak{X}(\gamma)$, $\forall t_0 \in I$, $D_t V(t_0) = \lim_{t_1 \to t_0} \frac{P^{\gamma}_{t_1 t_0} V(t_1) - V(t_0)}{t_1 - t_0}$. Parallel transport determines the connection: $\forall X, Y \in \mathfrak{X}(M)$, $\forall p \in M$, $\forall \gamma \in \{l \in C^\infty(I, M) : l(0) = p, l'(0) = X_p \}$, $\nabla_X Y|_p = \lim_{h \to 0} \frac{P^{\gamma}_{h 0} Y_{\gamma(h)} - Y_p}{h}$. Parallel vector field on a smooth manifold, w.r.t. a connection in the tangent bundle, is a smooth vector field that is parallel along every smooth curve in the manifold. A smooth vector field is parallel if and only if its total covariant derivative is zero: $\nabla V = 0$. 
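The parallel-transport equation $\dot V^k + \Gamma^k_{ij} \dot\gamma^i V^j = 0$ can be integrated numerically. A sketch (illustrative, Python assumed) along the unit circle in the flat plane: the polar components of the transported vector rotate, precisely compensating the rotation of the coordinate frame, so the underlying Cartesian vector never changes.

```python
import math

# Parallel transport along γ(t) = (r = 1, θ = t) solves
#   dV^k/dt + Γ^k_ij γ'^i V^j = 0,
# which with Γ^r_θθ = -r, Γ^θ_rθ = 1/r and r = 1 becomes
#   dV^r/dt = V^θ,   dV^θ/dt = -V^r.

def transport(v, t_end, n=100_000):
    """Euler integration of the transport equation on the unit circle."""
    vr, vth = v
    dt = t_end / n
    for _ in range(n):
        vr, vth = vr + dt * vth, vth - dt * vr
    return vr, vth

vr, vth = transport((1.0, 0.0), math.pi / 2)   # start with V = ∂_r
print(vr, vth)                                 # ≈ (0, -1): now V = -∂_θ

# The Cartesian components are unchanged: at θ = t the coordinate frame is
# ∂_r = (cos t, sin t), ∂_θ = (-sin t, cos t), so
t = math.pi / 2
x = vr * math.cos(t) - vth * math.sin(t)
y = vr * math.sin(t) + vth * math.cos(t)
print(x, y)                                    # ≈ (1, 0), the initial vector
```

On a curved manifold the same integration exhibits holonomy: transporting around a closed loop generally returns a rotated vector, which is exactly the sense in which a connection "connects" tangent spaces.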
## Integration

### Integral of 1-Form on Curve Segment

Line integral $\int_J \omega$ of a smooth covector field over a compact interval is the ordinary integral of the standard coordinate representation of the field over the interval: $J = [a, b]$, $\omega \in \mathfrak{X}^∗ (J)$, $\omega_t = \hat \omega(t) d t$, then $\int_J \omega = \int_a^b \hat \omega(t) dt$. Line integral $\int_\gamma \omega$ of a smooth covector field over a smooth curve segment is the integral of the pullback of the field by the curve: $\forall \omega \in \mathfrak{X}^∗ (M)$, $\forall \gamma \in C^\infty(J, M)$, $\int_\gamma \omega = \int_J \gamma^∗ \omega$. Rewinding the definitions, the line integral equals the ordinary integral of the action of the covector field on curve velocity over the parameter interval: $\int_\gamma \omega = \int_a^b \omega_{\gamma(t)}(\gamma'(t))~dt$. The line integral of the pullback of a smooth covector field over a piecewise smooth curve segment equals the line integral of the field over the composite curve: $\forall F \in C^\infty(M, N)$, $\forall \eta \in \mathfrak{X}^∗ (N)$, $\forall \gamma \in C^\infty(J, M)$, $\int_\gamma F^∗ \eta = \int_{F \circ \gamma} \eta$. Reparameterization $\tilde{\gamma}$ of a piecewise smooth curve segment $\gamma$ by a strictly monotonic smooth function $\phi: \tilde{J} \mapsto J$ is the composition of the curve with the bijection: $\tilde{\gamma} = \gamma \circ \phi$. Forward reparameterization is a reparameterization by an increasing function. Backward reparameterization is a reparameterization by a decreasing function. The line integral of a smooth covector field over a piecewise smooth curve segment is invariant under forward reparameterization, and flips sign under backward reparameterization.
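A numeric sketch (not part of the notes) of $\int_\gamma \omega = \int_a^b \omega_{\gamma(t)}(\gamma'(t))~dt$ and its invariance under forward reparameterization, for $\omega = y\,dx + x\,dy$ on $\mathbb{R}^2$, $\gamma(t) = (t, t^2)$ on $[0, 1]$, and the reparameterization $\phi(s) = s^2$:

```python
# Midpoint-rule approximation of a line integral ∫ ω_γ(t)(γ'(t)) dt,
# computed once for γ and once for the forward reparameterization γ∘φ.

def line_integral(omega, gamma, dgamma, a, b, n=100_000):
    h, total = (b - a) / n, 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        wx, wy = omega(*gamma(t))      # covector components at γ(t)
        vx, vy = dgamma(t)             # velocity γ'(t)
        total += (wx * vx + wy * vy) * h
    return total

omega = lambda x, y: (y, x)            # ω = y dx + x dy

I1 = line_integral(omega, lambda t: (t, t * t), lambda t: (1.0, 2 * t), 0, 1)
# reparameterized curve γ∘φ with φ(s) = s², so γ∘φ(s) = (s², s⁴)
I2 = line_integral(omega, lambda s: (s * s, s ** 4),
                   lambda s: (2 * s, 4 * s ** 3), 0, 1)
print(I1, I2)   # both ≈ 1
```

Both values are 1; since $\omega = d(xy)$, this also agrees with the fundamental theorem for line integrals, $f(\gamma(1)) - f(\gamma(0)) = 1 - 0$.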
Fundamental Theorem for Line Integrals: The line integral of the differential of a smooth real-valued function over a piecewise smooth curve segment equals the difference of function values at the ends of the curve: $\int_\gamma d f = f(\gamma(b)) - f(\gamma(a))$.

### Integral of n-Form on Oriented n-Manifold

The integral of a real-valued function on an oriented smooth manifold cannot be defined independently of coordinates; however, integrals of differential forms can be defined intrinsically. Domain of integration in a Euclidean space is a bounded subset whose boundary has measure zero. Integral $\int_D \omega$ of an n-form $\omega$ on the closure of a domain of integration in the n-dimensional Euclidean space is the integral of the standard coordinate representation of the n-form over the domain: $\omega = f (\wedge_{i=1}^n dx^i)$, then $\int_D \omega = \int_D f d V$, where $d V = \prod_{i=1}^n dx^i$. Any compact subset $K$ of an open subset $U$ of a Euclidean space or a closed upper half-space is included in an open domain of integration $D$ whose closure is also a subset of the open set: $K \subset D \subset \bar{D} \subset U$. Integral $\int_U \omega$ of a compactly supported n-form on an open subset of a Euclidean space or a closed upper half-space is the integral of the n-form on any domain of integration containing its support: $\text{supp}~\omega = K$, $K \subset D \subset \bar{D} \subset U$, then $\int_U \omega = \int_D \omega$. The integral of the pullback of a compactly supported n-form by an orientation-preserving diffeomorphism between open subsets of a Euclidean space or its closed upper half-space equals the integral of the n-form over the codomain: $\int_U F^∗ \omega = \int_{F(U)} \omega$; the integral flips sign if the diffeomorphism is orientation-reversing.
Integral $\int_U \omega$ of an n-form on a compact subset of a positively-oriented smooth coordinate domain of an oriented smooth n-manifold is the integral of the pullback of the n-form by the inverse of the chart: $\int_U \omega = \int_{\phi(U)} (\phi^{-1})^∗ \omega$; the integral flips sign if the chart is negatively-oriented. Integral $\int_M \omega$ of a compactly-supported n-form on an oriented smooth n-manifold is the sum of integrals of n-forms $\psi_i \omega$, where $\{\psi_i\}_{i=1}^m$ is any smooth partition of unity subordinate to a finite open cover of the support of the n-form by positively or negatively oriented smooth charts: $\int_M \omega = \sum_{i=1}^m \int_{U_i} \psi_i \omega$. Integration Over Piecewise Parameterizations: Integral of a compactly-supported n-form on an oriented smooth n-manifold equals the sum of integrals of the n-form on a finite partition of its support such that there are positively-oriented smooth charts from their interior onto open domains of integration in the Euclidean n-space: $\text{supp}~\omega = K = \overline{\sqcup_{i=1}^m U_i}$, $\phi_i: U_i \cong D_i$, then $\int_M \omega = \sum_{i=1}^m \int_{D_i} (\phi_i^{-1})^∗ \omega$. Integration over piecewise parameterizations also works for boundary integrals of (n-1)-forms on any compact, oriented smooth n-manifold with corners, and integrals of densities on any compact smooth n-manifold. The integral map on compactly-supported n-forms on oriented smooth n-manifolds is a linear functional that is positive for positively-oriented orientation forms, is invariant under orientation-preserving diffeomorphisms, and flips sign upon orientation reversal. 
Riemannian volume form $\omega_g$ or $d V_g$ of an oriented Riemannian n-manifold $(M, g)$ is the unique n-form on the manifold satisfying any of the following equivalent properties: (1) it equals the wedge product of any oriented orthonormal coframe, $\omega_g = \wedge_i ε^i$; (2) it maps any oriented orthonormal frame to one, $\omega_g(e_i) = 1$; (3) it equals the wedge product of any oriented coordinate coframe multiplied by the square root of the determinant of the matrix representation of the Riemannian metric, $\omega_g = \sqrt{\det g_{ij}} (\wedge_i dx^i)$. The notation $d V_g$ for a Riemannian volume form or a Riemannian density is just a convention, which does not mean it is the exterior derivative of an (n-1)-form. The boundary of an oriented Riemannian manifold is orientable if and only if there exists a global unit normal vector field on the boundary. The Riemannian volume form of the boundary of an oriented Riemannian manifold, given a global unit normal vector field, is the interior multiplication of the Riemannian volume form of the manifold by the vector field: $\omega_{\iota^∗ g} = (N \lrcorner \omega_g)|_{\partial M}$. Integral $\int_M f \omega_g$ of a compactly-supported continuous real-valued function over an oriented Riemannian manifold is the integral of the compactly-supported n-form $f \omega_g$ on the manifold. Volume $\text{Vol}(M)$ of a compact, oriented Riemannian manifold is the integral of the Riemannian volume form on the manifold: $\text{Vol}(M) = \int_M d V_g$.

### Integral of k-Form on Oriented k-Submanifold

Integral $\int_S \omega$ of a k-form on an oriented smooth n-manifold over an oriented smooth k-submanifold where the restriction of the form is compactly supported is the integral of the pullback of the k-form by the inclusion map of the k-submanifold: $\int_S \omega = \int_S \iota_S^∗ \omega$. Stokes’s Theorem: for any compactly-supported smooth (n-1)-form $\omega$ on an oriented smooth n-manifold $M$ with boundary, with $\partial M$ given the induced orientation, $\int_M d \omega = \int_{\partial M} \omega$.
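Property (3) makes volumes computable in coordinates. As a sketch (a standard example, not from the notes): for the round unit sphere in spherical coordinates $(\theta, \varphi)$, $g = \text{diag}(1, \sin^2\theta)$, so $d V_g = \sin\theta~d\theta \wedge d\varphi$ and $\text{Vol}(S^2) = \int_0^\pi \int_0^{2\pi} \sin\theta~d\varphi~d\theta = 4\pi$:

```python
import math

# Vol(S²) = ∫ √det g dθ dφ with √det g = sin θ on the round unit sphere.
# The φ-integral contributes a factor of 2π; θ is handled by a midpoint rule.

def sphere_volume(n=2000):
    dth = math.pi / n
    return sum(math.sin((i + 0.5) * dth) for i in range(n)) * dth * 2 * math.pi

vol = sphere_volume()
print(vol)   # ≈ 12.566…, i.e. 4π
```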
Surface integral $\int_S \langle X, N \rangle_g dA$ of a smooth vector field $X$ over a compact oriented 2-dimensional smooth submanifold $S$ with boundary in an oriented Riemannian 3-manifold is the integral over $S$ of the 2-form $\iota_S^∗(X \lrcorner d V_g)$, where $N$ is the chosen unit normal along $S$ and $dA$ the induced area form. Stokes’s Theorem for Surface Integrals: $\int_S \langle \text{curl}~X, N \rangle_g dA = \int_{\partial S} \langle X, T \rangle_g ds$.

### Integral of Density on Manifold

Density $\mu: V^n \mapsto \mathbb{R}$ on an n-dimensional vector space is an n-variate real-valued function such that its action on linearly transformed vectors equals its action on the original vectors, multiplied by the absolute value of the determinant of the linear transformation: $\forall T \in \mathcal{L}(V, V)$, $\mu(T v_i)_{i=1}^n = |\det T| \mu(v_i)_{i=1}^n$. A density is not a tensor, because it is not linear over the real numbers in any of its arguments. Density space $\mathcal{D}(V)$ on an n-dimensional vector space is the vector space consisting of the set of all densities on the space, and pointwise addition and scalar multiplication. The density space on an n-dimensional vector space is the 1-dimensional vector space consisting of the absolute value map and the negative value map of the n-covectors on the underlying space: $\mathcal{D}(V) = \{|\omega|, -|\omega| : \omega \in \Lambda^n(V^∗)\}$. Positive density on an n-dimensional vector space is one whose values are positive on a basis: $\mu = |\omega|$. Negative density is defined analogously: $\mu = -|\omega|$. Density bundle $\mathcal{D} M$ of a smooth manifold is the disjoint union of density spaces on all tangent spaces of the manifold, endowed with the natural projection map taking each point-indexed density to its base point. The density bundle of a smooth manifold is a smooth line bundle over the manifold. Density $\mu$ on a smooth manifold is a section of the density bundle of the manifold. Positive density on a smooth manifold is one whose values are positive densities on all the tangent spaces.
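The defining transformation law is easy to see concretely. A small Python illustration (not from the notes): in $\mathbb{R}^2$ take $\omega = dx \wedge dy$, whose action on a pair of vectors is a $2 \times 2$ determinant, and compare it with the density $|\omega|$ under an orientation-reversing linear map.

```python
# A 2-form ω on R² picks up det T under a linear map, while the associated
# density |ω| picks up |det T|: an orientation-reversing map flips the sign
# of ω(v₁, v₂) but leaves |ω|(v₁, v₂) unchanged.

def omega(v1, v2):
    """ω = dx ∧ dy acting on a pair of vectors: a 2×2 determinant."""
    return v1[0] * v2[1] - v1[1] * v2[0]

def density(v1, v2):
    return abs(omega(v1, v2))

def apply(T, v):
    return (T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1])

T = [[0, 1], [1, 0]]        # swap of axes: det T = -1, orientation-reversing
v1, v2 = (2, 0), (1, 3)     # ω(v1, v2) = 6

print(omega(apply(T, v1), apply(T, v2)))    # -6 = det T · ω(v1, v2)
print(density(apply(T, v1), apply(T, v2)))  #  6 = |det T| · |ω|(v1, v2)
```

This sign-blindness is exactly why densities, unlike n-forms, can be integrated on non-orientable manifolds.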
Any nonvanishing n-form determines a positive density by taking pointwise absolute value: $|\omega|_p = |\omega_p|$. Any density can be written as a positive density multiplied by a real-valued function: $\mu = f |\omega|$. Every smooth manifold admits a smooth positive density. Pullback $F^∗ \mu$ of a density on $N$ by a smooth map $F \in C^\infty(M, N)$ is the density on $M$ whose value at each point equals the pullback of the density at that point: $\forall v_i \in T_p M$, $(F^∗ \mu)_p (v_i)_{i=1}^n = \mu_{F(p)}(d F_p (v_i))_{i=1}^n$. The pullback of a smooth density by a smooth map is a smooth density. Integral $\int_D \mu$ of a density on (the closure of) a domain of integration in the Euclidean n-space is the integral of the standard coordinate representation of the density over the domain: $\mu = f |\wedge_{i=1}^n dx^i|$, then $\int_D \mu = \int_D f d V$ where $d V = \prod_{i=1}^n dx^i$. Analogously to integrals of n-forms, we have: integral of a compactly supported density on an open subset, $\int_U \mu = \int_D \mu$, which is diffeomorphism-invariant, $\int_U F^∗ \mu = \int_{F(U)} \mu$; integral on a compact subset of a smooth coordinate domain of a smooth manifold, $\int_U \mu = \int_{\phi(U)} (\phi^{-1})^∗ \mu$; integral of a compactly-supported density on a smooth manifold via a smooth partition of unity, $\int_M \mu = \sum_i \int_M \psi_i \mu$. The integral map on compactly-supported densities on smooth manifolds is a linear functional that is positive for positive densities and is invariant under diffeomorphisms. Riemannian density $\mu_g$ or $d V_g$ on a Riemannian manifold $(M, g)$ is the unique smooth positive density that maps any orthonormal frame to one: $\mu_g(e_i) = 1$. Integral $\int_M f \mu_g$ of a compactly-supported continuous real-valued function over a Riemannian manifold is the integral of the density $f \mu_g$ on the manifold.
For an oriented Riemannian manifold, its Riemannian density equals the absolute value map of its Riemannian volume form: $\mu_g = |\omega_g|$, and thus the integrals of a function as a density and an n-form are the same: $\int_M f \mu_g = \int_M f \omega_g$. Divergence Theorem: The integral of the divergence of a compactly-supported smooth vector field on a Riemannian manifold with boundary equals the integral of the inner product of the vector field and the outward-pointing unit normal vector field along the manifold boundary: $\int_M (\text{div}~X) \mu_g = \int_{\partial M} \langle X, N \rangle_g \mu_{\tilde{g}}$, where $\tilde{g}$ is the induced Riemannian metric on the manifold boundary. Measure on smooth/Riemannian manifolds. Directional statistics deals with observations on n-spheres [@Brigant2019]. Sampling on manifolds [@Soize2016]. Measures on a smooth manifold are preserved in piecewise parameterizations, despite violating the topology. A unit n-volume $I^n$ should suffice for sampling on a connected compact Riemannian n-manifold, e.g. sampling on a sphere via geographic coordinates.
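A numeric check of the divergence theorem stated above (an illustration with an assumed vector field, not from the notes): on the closed unit disk take $X = (x^2, y)$, so $\text{div}~X = 2x + 1$ and, on the boundary circle with outward unit normal $N = (x, y)$, $\langle X, N \rangle = x^3 + y^2$; both sides equal $\pi$.

```python
import math

# ∫_D div X dA = ∫_{∂D} ⟨X, N⟩ ds on the closed unit disk, for X = (x², y):
# div X = 2x + 1, and on the unit circle N = (x, y), ⟨X, N⟩ = x³ + y².

def interior(n=600):
    # midpoint rule in polar coordinates; the area element is r dr dθ
    total, dr, dth = 0.0, 1.0 / n, 2 * math.pi / n
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            x = r * math.cos((j + 0.5) * dth)
            total += (2 * x + 1) * r * dr * dth
    return total

def boundary(n=100_000):
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        total += (x ** 3 + y * y) * dt     # arc-length element ds = dt
    return total

lhs, rhs = interior(), boundary()
print(lhs, rhs)   # both ≈ π ≈ 3.14159
```

The $x$-dependent parts integrate to zero by symmetry on both sides, leaving $\int_D 1~dA = \int_{\partial D} y^2~ds = \pi$.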
# Primes ≤ 100 in Rust

Posted by Michał ‘mina86’ Nazarewicz on 20th of June 2021

In a past life I’ve talked about a challenge to write the shortest program which prints all prime numbers less than a hundred. Back then I discussed a 60-character-long solution written in C. Since Rust is the future, and inspired by a recent thread on the Sieve of Eratosthenes, I’ve decided to carry out the task in Rust as well.

To avoid spoiling the solution, I’m padding this article with a bit of unrelated content. To jump straight to the code, skip the next block of paragraphs. Otherwise, here’s a joke for ya:

# How do balanced audio cables work

Posted by Michał ‘mina86’ Nazarewicz on 13th of June 2021

Have you ever wondered how balanced audio cables work? For the longest time I had, until I finally decided to look into it. Turns out the principle is actually rather straightforward.

In a normal, unbalanced cable an analogue signal S is sent over a pair of wires: one carries the signal while the other carries a reference zero. The receiver interprets the voltage between the two as the signal. The issue is that over the length of the cable noise is introduced. While the transmitter sends S, the receiver gets S + e (where e denotes the noise).

A balanced cable addresses this problem by sending the information over three wires: hot (or positive), cold (or negative) and ground. The hot wire carries the signal S as before, the cold one carries the inverse of the signal -S, and ground is zero as before. Just like before, as the information travels over the cable, noise is introduced. Crucially, because it’s a single cable, the noise on the hot and cold wires is strongly correlated. The receiver therefore gets S + e on the hot wire and -S + e on the cold wire. All it needs to do is invert the signal on the cold wire and add the two signals together. Inversion flips the phase of the noise on the cold wire so that it cancels the noise on the hot wire: (S + e) + -(-S + e) = S + e + S - e = 2S, which is then scaled back down to S.
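The cancellation is pure arithmetic, so it can be demonstrated in a few lines of code (a toy model in Python; nothing audio-specific is assumed):

```python
import random

# Toy model of a balanced cable: the hot wire carries S, the cold wire
# carries -S, the same noise e is added to both, and the receiver computes
# (S + e) - (-S + e) = 2S, then halves the result.

random.seed(0)
signal = [random.uniform(-1, 1) for _ in range(1000)]
noise = [0.3 * random.uniform(-1, 1) for _ in range(1000)]

hot = [s + e for s, e in zip(signal, noise)]       # S + e
cold = [-s + e for s, e in zip(signal, noise)]     # -S + e
recovered = [(h - c) / 2 for h, c in zip(hot, cold)]

worst = max(abs(r - s) for r, s in zip(recovered, signal))
print(worst)   # 0.0 up to floating-point rounding: the noise is gone
```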
# Explicit isn’t better than implicit

Posted by Michał ‘mina86’ Nazarewicz on 6th of June 2021

Continuing the new tradition of clickbaity titles, let’s talk about explicitness. It’s a subject that comes up when bike-shedding language and API designs. Pointing out that a construct or a function exhibits implicit behaviour is often touted as an ultimate winning argument against it.

There are two problems with such a line of reasoning. First of all, people claim to care about features being explicit but have come to accept a lot of implicit behaviour without batting an eye. Second of all, no one actually agrees on what the terms mean. In this article I’ll demonstrate those two issues and show that ‘explicit over implicit’ is the wrong value to uphold. It’s merely a proxy for a much more useful goal interfaces should strive for. By the end I’ll demonstrate what we should really look at instead.

# Programmer (vs) Dvorak

Posted by Michał ‘mina86’ Nazarewicz on 30th of May 2021

Update: The article was updated in October 2021 to include a direct comparison of Shift usage between the Dvorak and Programmer Dvorak layouts.

A few years ago I made a decision that had the potential to change the course of history. Had I gone down a different path, the pl(dvp) layout might have never seen the light of day. But did I make a wise choice? Or had I chosen poorly? I’m talking of course about the decision to learn Programmer Dvorak rather than the regular Dvorak keyboard layout.

The main difference between the two is that in the former digits are entered with the Shift key pressed down, which allows several punctuation marks often used when programming to be typed without the need to reach for Shift. The hypothesis goes that developers use digits less often, thus such a design optimises the layout for them. To test this I’ve grabbed all my git repositories and constructed a histogram of the characters used in the text files present there.
Since letters are in the same positions on both layouts in question, only digits and punctuation characters are compared on the histogram:

# Computer Science vs Reality

Posted by Michał ‘mina86’ Nazarewicz on 23rd of May 2021

Some years ago, during a friendly discussion about C++, a colleague challenged me with a question: what’s the best way to represent a sequence of numbers if the delete operation is one that needs to be supported? I argued in favour of a linked list, suggesting that with a sufficiently large number of elements it would be much preferred.

In a twist of fate, I’ve recently been discussing an algorithm which reminded me of that conversation. Except this time I was the one arguing against a node-based data structure. Rather than ending things at a conversation, I’ve decided to benchmark a few solutions to make sure which approach is the best.

## The problem

The task at hand is simple. Design a data structure which stores a set of words, all of the same length, and offers a lookup operation which returns all words matching globs of the form ‘prefix*suffix’. That is, words which start with a given prefix and end with a given suffix. Either part of the pattern may be empty and their concatenation is never longer than the length of the words in the collection. Initialisation time and memory footprint are not a concern. Complexity of returning a result can be assumed to be constant.

In this article I’m going to describe possible solutions — some using a boring vector while others taking advantage of an exciting prefix tree — and benchmark the implementations in an ultimate battle between contiguous-memory-based and node-based containers.

# Embrace the Bloat

Posted by Michał ‘mina86’ Nazarewicz on 16th of May 2021

‘I’m using slock as my screen locker,’ a wise man once said. He had a beard so surely he was wise. ‘Oh?’ his colleague raised a brow, intrigued. ‘Did they fix the PAM bug?’ he prodded inquisitively. Nothing but a confused stare came in reply.
‘slock crashes on systems using PAM,’ he offered an explanation, and to demonstrate he approached a nearby machine and pressed the Return key. The screens, blanked by a locker just a few minutes prior, came back to life, unlocked, without the need to enter the password.

# The L*u*v* and LChuv colour spaces

Posted by Michał ‘mina86’ Nazarewicz on 9th of May 2021

I’ve written about L*a*b* so it’s only fair that I also describe its twin sister: the L*u*v* colour space (a.k.a. CIELUV). The two share a lot in common. For example, they use the same luminance value, base their chromaticity on the opponent process theory and each of them has a corresponding cylindrical LCh coordinate system. Yet, despite those similarities — or perhaps because of them — the CIELUV colour space is often overlooked. Even though L*a*b* seems to be getting all the limelight, the L*u*v* model has its advantages. Before we start comparing the two colour spaces, let’s first go through the conversion formulæ.

# Names of operands of arithmetic operations

Posted by Michał ‘mina86’ Nazarewicz on 2nd of May 2021

Every now and again I need a specific name for the operands or results of various arithmetic operations. It usually takes me an embarrassingly long time to look that information up. To save time in the future, here’s the list:

\begin{align}
\left. \begin{matrix} \text{augend} + \text{addend†} \\ \text{summand} + \text{summand} \\ \text{term} + \text{term} \end{matrix} \right\} & = \text{sum} \\[.5em]
\left. \begin{matrix} \text{minuend} - \text{subtrahend} \\ \text{term} - \text{term} \end{matrix} \right\} & = \text{difference} \\[.5em]
\left. \begin{matrix} \text{multiplier} × \text{multiplicand} \\ \text{factor} × \text{factor} \end{matrix} \right\} & = \text{product} \\[.5em]
\left. \begin{matrix} \text{dividend} ÷ \text{divisor} \\ {\text{numerator}\over\text{denominator}} \end{matrix} \right\} & = \left\{ \begin{matrix} \text{ratio} \\ \text{fraction} \\ \text{quotient‡} + \text{remainder} \end{matrix} \right. \\[.5em]
\text{base}^{\text{exponent}} & = \text{power} \\[.5em]
\sqrt[\text{degree}]{\text{radicand}} & = \text{root} \\[.5em]
\log_\text{base}(\text{anti-logarithm}) & = \text{logarithm}
\end{align}

† Occasionally used to mean any operand of addition.

‡ Occasionally used to mean the fraction itself rather than just the integer part.

List in big part thanks to Wikipedia.

# Most vexing parse

Posted by Michał ‘mina86’ Nazarewicz on 25th of April 2021

Here’s a puzzle: What does the following C++ code output:

    #include <cstdio>
    #include <string>

    struct Foo {
        Foo(unsigned n = 1) {
            std::printf("Hell%s,", std::string(n, 'o').c_str());
        }
        ~Foo() {
            std::printf("%s", " world");
        }
    };

    static constexpr double pi = 3.141592653589793238;

    int main(void) {
        Foo foo();
        Foo bar(unsigned(pi));
    }

# Will the real ARG_MAX please stand up? Part 2

Posted by Michał ‘mina86’ Nazarewicz on 18th of April 2021

In part one we’ve looked at the ARG_MAX parameter on Linux-based systems. We’ve established experimentally how it affects arguments passed to programs and what influences the value. This time, we’ll look directly at the source to verify our findings and see how the limit looks from the point of view of the system libraries and the kernel itself.

Posted by Michał ‘mina86’ Nazarewicz on 11th of April 2021

Anyone who uses a screen locker surely can recall a situation where they approached their computer and started typing their password to unlock it even though it was never locked. Even if the machine is configured to lock automatically after a period of inactivity, there may be situations when power saving blanks the monitor even before the automatic locking happens. If one’s lucky, they realise their mistake in time, before hitting Return in a chat window.
It’s not uncommon, however, that one ends up with the password blasted into the ether over IRC or Google Docs; lazy people might ignore the secret getting saved in their shell history file, but even that should force an (often annoying) password change. What if I told you there’s a way to avoid those problems? One simple trick will eliminate at least some forms of possible leaks. Simply prefix all your passwords with /! (slash followed by an exclamation mark).

# Fun fact: ∞ is even

Posted by Michał ‘mina86’ Nazarewicz on 1st of April 2021

Some people find it surprising that zero is an even number. Turns out it’s such a controversial point that Wikipedia’s article on the subject has nearly 5000 words and 75 citations. That’s ten times as long as the entry on toast sandwich which is clearly a more important topic. On the other hand, perhaps the confusion is to be expected considering that for centuries zero has been judged an odd digit (if a digit at all). Regardless, zero being even is not news and this is not what this post is about. Rather, I wish to share another bit of knowledge. As it turns out, infinity is even. Who would have thought! As the working group for the C programming language (SC22 WG14) explains in the C99 rationale, ‘all large positive floating-point values are even integers’. There you go. Next time your Maths teacher asks what’s -42 to the infinite power, go ahead and exclaim with conviction that it’s plus infinity! The teacher will fail you of course (and rightfully so) unless you’re so lucky that they turn out to secretly be an expert on the IEEE 754 standard.

# Dark theme with media queries, CSS and JavaScript

Posted by Michał ‘mina86’ Nazarewicz on 28th of March 2021

No, your eyes are not deceiving you. This website has gone through a redesign and in the process gained a dark mode. Thanks to media queries, the darkness should commence automatically according to reader’s system preferences (as reported by the browser).
You can also customise this website in the settings panel in the top right (or bottom right). What are media queries? And how can they be used to adjust a website’s appearance based on user preferences? I’m glad you’ve asked, because I’m about to describe the CSS and JavaScript magic that enables this feature.

## Media queries overview

```css
body { font-family: sans-serif; }
@media print {
	body { font-family: serif; }
}
```

Media queries grew from the @media rule present since the inception of CSS. At first it provided a way to use different styles depending on the device used to view the page. The most commonly used media types were screen and print, as seen in the example on the right. Over time the concept evolved into general media queries which allow checking other aspects of the user agent such as display size or browser settings. A stylesheet respecting reader’s preferences might be as simple as:

```css
body {
	/* Black-on-white by default */
	background: #fff;
	color: #000;
}
@media (prefers-color-scheme: dark) {
	/* White-on-black if user prefers dark colour scheme */
	body {
		background: #000;
		color: #fff;
	}
}
```

That’s enough to get us started, but not all browsers support that feature or provide a way for the user to specify the desired mode. For example, without a desktop environment Chrome will report light theme preference, and Firefox users need to go deep into the bowels of about:config to change the ui.systemUsesDarkTheme flag if they are fond of darkness. To accommodate such situations, it’s desirable to provide a JavaScript toggle which defaults to the option specified in system settings. Fortunately, media can be queried through JavaScript, and herein I’ll describe how it’s done and how to marry theme switching with browser preferences detection. The TL;DR version is to grab a demonstration HTML file which includes fully working CSS and JavaScript code that can be used to switch themes on a website.
# sRGB↔L*a*b*↔LChab conversions

Posted by Michał ‘mina86’ Nazarewicz on 21st of March 2021

After writing about conversion between sRGB and XYZ colour spaces I’ve been asked about a related process: moving between sRGB and CIELAB (perhaps better known as L*a*b*). As this may be of interest to others, I’ve decided to go ahead and make an article out of it. I’ll also touch on CIELChab which is a closely related colour representation. The L*a*b* colour space was intended to be perceptually uniform. While it’s not truly uniform, it’s nonetheless useful and widely used in the industry. For example, it’s the basis of the ΔE*00 colour difference metric. LChab aims to make L*a*b* easier to interpret by replacing the a* and b* axes with more intuitive chroma and hue parameters. Importantly, the conversion between sRGB and L*a*b* goes through the XYZ colour space. As such, the full process has multiple steps, with a round trip conversion being: sRGB​→​XYZ​→​L*a*b*​→​XYZ​→​sRGB. Because of that structure I will describe each of the steps separately.

# Will the real ARG_MAX please stand up? Part 1

Posted by Michał ‘mina86’ Nazarewicz on 14th of March 2021

arg max is the set of values from a function’s domain at which said function reaches its maxima. That’s certainly an arg max, but spelled without an underscore and thus not the one we are searching for. No, this article is about the ARG_MAX that limits the length of arguments to an executable. Or in other words, why you are getting:

```
bash: command: Argument list too long
```

# HTML: No, you don’t need to escape that

Posted by Michał ‘mina86’ Nazarewicz on 7th of March 2021

This website being my personal project allows me to experiment and do things I’d never do in professional settings. Most notably, I’m rather fond of trying everything I can to reduce the size of the page. This goes beyond mere minification and eventually led me to wonder whether all those characters I’ve been escaping in HTML code really need such treatment.
Libraries offering HTML support will typically provide a function to indiscriminately replace all ampersands, quote characters, less-than and greater-than signs with their corresponding HTML-safe representation. This allows the result to be used in any context in the document and is a good choice for user-input validation. It’s a different matter when it comes to squeezing every last byte. Herein I will explore just which characters, and when exactly, really need to be escaped in an HTML document.

# Regular expressions are broken

Posted by Michał ‘mina86’ Nazarewicz on 28th of February 2021

Quick! What does re.search('foo|foobar', 'foobarbaz').group() produce? Or for those not fluent in Python, how about /foo|foobar/.exec('foobarbaz')? Or to put it into words, what part of the string foobarbaz will a foo|foobar regular expression match? Perhaps it’s just me, but I expected the result to be foobar. That is, for the regular expression to match the longest leftmost substring. Alas, that’s not what is happening. Instead, Python’s and JavaScript’s regex engines will match only the foo prefix. Knowing that, what does re.search('foobar|foo', 'foobarbaz').group() produce? (Notice that the subexpressions in the alternation are swapped.) This can be reasoned about in two ways: either the order of branches in the alternation doesn’t matter — in which case the result should be the same as before, i.e. foo — or it does matter — and now the result will be foobar. A computer scientist might lean towards the first option, but a software engineer will know it’s the second.

# Reading stdin with Emacs Client

Posted by Michał ‘mina86’ Nazarewicz on 21st of February 2021

One feature Emacs doesn’t have out of the box is reading data from standard input. Trying to open - (e.g. echo stdin | emacs -) results in Emacs complaining about an unknown option (if it ends up starting in graphical mode) or that ‘standard input is not a tty’ (when starting in terminal).
With a sufficiently advanced shell, one potential solution is the --insert flag paired with command substitution: echo stdin | emacs --insert <(cat). Sadly, it’s not a panacea. It messes up the initial buffer (and thus may break setups with custom initial-buffer-choice) and doesn’t address the issue of standard input not being a tty when running Emacs in terminal. For me the biggest problem though is that it isn’t available when using emacsclient. Fortunately, as previously mentioned, the Emacs Server protocol allows for far more than just instructions to open a file. Indeed, my solution to the problem revolves around the use of the --eval option:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @args = @ARGV;
if (!@args) {
	my $data = join '', <STDIN>;
	$data =~ s/\\/\\\\/g;
	$data =~ s/"/\\"/g;
	$data = <<ELISP;
(let ((buf (generate-new-buffer "*stdin*")))
  (switch-to-buffer buf)
  (insert "$data")
  (goto-char (point-min))
  (x-focus-frame nil)
  (buffer-name buf))
ELISP
	@args = ('-e', $data);
}
exec 'emacsclient', @args;
die "emacsclient: $!\n";
```

People allergic to Perl may find this Python version more palatable:

# Emacs remote file editing over SSHFS

Posted by Michał ‘mina86’ Nazarewicz on 14th of February 2021

Previous article described how to use emacsclient inside of an SSH session. While the solution mentioned there relied on TRAMP, I’ve confessed that it isn’t what I’m actually using. From my experience, TRAMP doesn’t cache as much information as it could, and as a result some operations are needlessly slow. For example, the delay of find-file prompt completion is noticeable when working over connections with latency in the range of tens of milliseconds or more. Because for a long while I’d been working on a workstation ‘in the cloud’ in a data centre in another country, I’ve built my setup based on SSHFS instead. It is important to note that TRAMP has a myriad of features which won’t be available with this alternative approach.
Most notably, it transparently routes shell commands executed from Emacs through SSH, which often results in much faster execution than trying to do the same thing over SSHFS. The grep command in particular will avoid copying entire files over the network when done through TRAMP. Depending on one’s workflow, either the TRAMP-based or the SSHFS-based solution may be preferred. If you are happy with TRAMP’s performance or rely on some of its features, there’s no reason to switch. Otherwise, you might want to try the alternative approach described below.

# Greyscale, you might be doing it wrong

Posted by Michał ‘mina86’ Nazarewicz on 7th of February 2021

While working on the ansi_colours crate I’ve learned more about colour spaces than I’ve ever thought I would. One of those things was the intricacies of greyscale. Or rather not greyscale itself but conversion from sRGB. ‘How hard can it be?’ one might ask, following it up with a helpful suggestion to, ‘just sum all components and divide by three!’ Taking an arithmetic mean of the red, green and blue coordinates is an often mentioned method. The inaccuracy of the method is usually acknowledged and justified by its simplicity and speed. That’s a fair trade-off, except that equally simple and fast algorithms which are noticeably more accurate exist. One such method is built on the observation that green contributes the most to the perceived brightness of a colour. The formula is (r + 2g + b) / 4 and it increases accuracy (by taking the green channel twice) as well as speed (by changing the division operation into a bit shift). But that’s not all. Even better formulæ exist.

## TL;DR

```rust
fn grey_from_rgb_avg_32bit(r: u8, g: u8, b: u8) -> u8 {
    let y = 3567664 * r as u32 + 11998547 * g as u32 + 1211005 * b as u32;
    ((y + (1 << 23)) >> 24) as u8
}
```

The above implements the best algorithm for converting sRGB into greyscale if speed and simplicity is the main concern. It does not involve gamma and thus forgoes the most complicated and time-consuming arithmetic.
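For readers who want to sanity-check the arithmetic, here is a small Python sketch of the same fixed-point computation (my own illustration, not part of the post; the coefficients are copied from the Rust snippet above and happen to sum to exactly 2²⁴, so black and white map to 0 and 255):

```python
def grey_from_rgb(r: int, g: int, b: int) -> int:
    """Fixed-point weighted average of sRGB channels; weights sum to 1 << 24."""
    y = 3567664 * r + 11998547 * g + 1211005 * b
    return (y + (1 << 23)) >> 24   # add half before shifting to round to nearest

# The weights sum to 2**24, so the full 0..255 range is preserved:
print(grey_from_rgb(0, 0, 0), grey_from_rgb(255, 255, 255))   # prints: 0 255
```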
It’s much more precise than, and as fast as, the arithmetic mean.

# Emacs remote file editing over TRAMP

Posted by Michał ‘mina86’ Nazarewicz on 31st of January 2021

I often develop software on remote machines; logged in via SSH to a workstation where all the source code resides. In those situations, I like to have things work the same way regardless of which host I’m on. Since more often than not I open files from the shell rather than from within my editor, this in particular means having the same command for opening files in Emacs available on all computers. emacsclient filename works locally but gets a bit tricky over SSH. Running Emacs in a terminal is of course possible, but the graphical interface provides minor benefits which I like to keep. X forwarding is another option but gets sluggish over high-latency connections. And besides, having multiple Emacs instances running (one local and one remote) is not the way. Fortunately, by utilising SSH remote forwarding, Emacs can be configured to edit remote files and accept server commands from within an SSH session. Herein I will describe how to accomplish that.
# Proving (1) the law of mass action, and (2) the Poisson character of the reactions

Hello, I'm working away on my second blog article. It's still too unformed for anyone else to read, but I have some questions about the content that came up.

At first I was simply going to give the definition of stochastic Petri nets, give the formulas for mass action kinetics, and then present the simulator. But the good simulator algorithm uses the exponential distribution for the inter-event intervals, and the correctness of that is predicated on the Poisson character of the reaction events. This is a lot to just throw at a reader who may be coming from just a software background. I want this to be an effective "praxis article," which means that we have to understand what the heck we are talking about, all the way down the line, starting from the theory that supports the model, right down to the programming language technology that will implement the simulator. In the first blog article, I dug into the programming technology. Here the links that need more attention are the theoretical supports. In my notes I have written an informal yet semi-rigorous explanation of what a random process is, and what a Poisson random process is. It's written in a "popular mechanics" tone, and it won't give me a problem to write it nicely for the reader.

To focus the discussion, suppose there are species A, B and C, and there is a transition X: A + B --> C. Suppose this takes place in a closed container, which is a homogeneous "soup" of the entities, and consider them to be point particles. Assume they are bouncing around randomly, colliding with each other and the walls. Let A(t), B(t), C(t) be the number of entities of each type in the container at time t.
Now, as we know, mass action kinetics says that the firing rate of X is proportional to A(t) * B(t), with a coefficient that is a function of temperature, among other things. Now that I am acquainted with this material, this relationship is intuitively clear, and I can give an intuitive explanation.

Let G be a small region of the container, and T be a small time interval. Let A(G,T) be the expected number of A particles found in (G,T), and B(G,T) be the expected number of B particles found in (G,T). (By "found" I mean particles which are in G at at least one time point in T.) Let Z be the probability of the transition firing between a single A particle and a single B particle, given that they are both found in (G,T). Then the expected number of firings in (G,T) is Z * A(G,T) * B(G,T). Each combination of one of the A particles in (G,T) and one of the B particles in (G,T) contributes Z to the expected number of transitions that fire in (G,T).

But I would like to go further, and prove (using informal language) that the X-firings constitute a Poisson process, under the assumption that the movements of the A and the B particles are a Poisson process. And to derive the formula for the rate constant of the X-firings, which will involve the factors A(t) * B(t). I have worked out a proof, which is more complex and nuanced than I had hoped for. I will summarize it now. But can anyone point me to a proof in the literature (online is best), or summarize the idea of the standard proof, or make any suggestions about how to simplify the following argument?

Let A-Count(G,T) be the random variable that counts the number of A particles that are found in (G,T), i.e., whose world-lines intersect G x T. Under the assumption that the particles are freely colliding, it is not hard to show that A-Count is a Poisson process. Similarly for B-Count.
Now, in order to show that the X-firings constitute a Poisson process with a rate parameter, we need to further analyze the formula I gave above for the expected number of firings, Z * A(G,T) * B(G,T). Let T = (t, t + deltaT). Since A is Poisson, we have that:

A(G,T) =~ Prob(A-Count(G,T) > 0) =~ Prob(A-Count(G,T) = 1) = Lambda(A,G,t) * deltaT,

where Lambda(A,G,t) is the rate constant for the A Poisson process. Now it is evident that Lambda(A,G,t) will be proportional to A(t), the number of A's in the system at time t, and also to a constant that is an increasing function of the mean velocity of the particles. So let's write:

Lambda(A,G,t) = A(t) * Sigma(A,t),

where Sigma(A,t) includes the temperature dependency. Putting this all together, we have the following formula for the expected number of X-firings in (G,T):

X-Firings(G,T) = Z * A(G,T) * B(G,T)
= Z * (Lambda(A,G,t) * deltaT) * (Lambda(B,G,t) * deltaT)
= Z * (A(t) * Sigma(A,t) * deltaT) * (B(t) * Sigma(B,t) * deltaT)
= Z * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2.

This looks promising, except we have the apparent paradox that the X-firings appear to have a quadratic dependency on deltaT -- so it doesn't look like a Poisson process at all! The key to this riddle is the fact that the probability Z is itself a function of G and T.

Z(G,T) = probability that X fires given that there is an A in (G,T) and a B in (G,T) = E(G,T) * F(G),

where: E(G,T) is defined to be the conditional probability that, given an A in (G,T) and a B in (G,T), this A and B are in G at the exact same time (for some time t in T), and F(G) is defined to be the probability that, given an A and a B in G at the exact same time, the transition will fire. The point here is that for a long interval of time T, the fact that A and B are both in (G,T) leads to a low probability E(G,T) that they are actually there at the same time, and hence have a chance to react.
Now let us further assume that G is small in relation to the mean velocity of the particles, so that if a particle is present in (G,T), it will quickly zip in and out, and the actual time interval (t1,t2) for which it is present in G is a small sub-interval of T. Under this assumption, we can show that E(G,T) is inversely proportional to the length of T. If we halve the length of T, then we will double the probability that the actual time interval for an A that is found in (G,T) will intersect with the actual time interval for a B that is found in (G,T). We've squeezed these two intervals into a T that is half the size, and so they are twice as likely to intersect. (Approximately speaking.) Using this fact, we can write:

E(G,T) = (1/deltaT) * E'(G,t),

for some function E' that depends only on G and t, but not on the interval deltaT. Putting these together we get:

Z(G,T) = E(G,T) * F(G) = (1/deltaT) * E'(G,t) * F(G).

Putting this into our rate formula, we have:

X-Firings(G,T) = Z(G,T) * A(G,T) * B(G,T)
= Z(G,T) * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2
= (1/deltaT) * E'(G,t) * F(G) * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2
= K(G,t) * Sigma(A,t) * Sigma(B,t) * A(t) * B(t) * deltaT,

where K(G,t) = E'(G,t) * F(G). This gives us the rate parameter for the Poisson process: K(G,t) * Sigma(A,t) * Sigma(B,t) * A(t) * B(t).

Comment 1.

This result looks right to me. But there is one point in the argument that I am uncomfortable about. I made the assumption that G was small in comparison to the speeds of the particles. That was how I was able to factor the (1/deltaT) out of E. But to really show that this is the rate parameter for a Poisson process, deltaT would have to pass to 0 in the limit. But the factoring of (1/deltaT) out of E breaks down as deltaT gets very small. So what this argument shows is that, for a wide range of deltaT's that aren't "too small", the formula for X-firings above, with the given rate parameter, is correct.
If you want to go down to a smaller level of deltaT's, you'd have to repeat the argument with a smaller G. Also, it would be nice to factor out vol(G) from the term K(G,t), so that the only dependence on G is through vol(G).

Any advice or pointers? Can this be cleaned up and simplified, while still maintaining the general level of rigor that I have set out here? Regardless of the extent to which I include this argument in the blog article, I'd like to get the proof clear in my mind, so that I won't feel like I'm fuzzing over the issue when I do write the blog article. Thanks!

Comment 2.

David wrote:

> I made the assumption that G was small in comparison to the speeds of the particles.
That sounds reasonable; people often say the law of mass action holds when the chemicals are 'well mixed', and this seems somehow related. If the molecules aren't zipping around enough, so each mainly interacts with its 'neighbors', the law of mass action won't hold. I'm afraid I'm too distracted to tackle the issue you're actually concerned with right now...

Comment 3.

I want to talk more about this stuff! I have to travel tomorrow, but return on Saturday. I plan to look at this over the weekend! Somehow, I'm very busy right now as everything seems to be due this November :(

Comment 4. (edited October 2012)

By the way, I hope you do the obvious thing and read what turns up when you Google derivation law of mass action. I don't instantly see a 'straightforward' derivation of the sort you're attempting - I see more people trying to derive it from laws of thermodynamics, which seems interesting but peculiar. I see a book: Andrei B. Koudriavtsev, Reginald F. Jameson, Wolfgang Linert, The Law of Mass Action, available on Kindle for a whopping sum, which seems to contain at least one derivation and would definitely be worth looking at. I also saw (but don't see now) a derivation using quantum mechanics. What you're trying to do is so nice and simple that I feel it must have been done, maybe even by Boltzmann, but I don't see it yet!
Comment 5. (edited November 2012)

Hi, I have a revised proof, which is simpler and avoids the need to assume that the region G is small.

Again, assume species A, B, and C, and a transition X: A + B --> C. Let G be a region of the container, and T = (t, t + deltaT) an interval of time. Let's view S = G x T as a region of space-time. Let a and b be individual particles whose world-lines intersect S. Let Ta be the sub-interval of T consisting of the times that a is inside of S, and Tb the same thing for particle b.

Assumption: The probability of the reaction taking place in S between a and b is equal to the length of the intersection of Ta and Tb, times a constant factor K(G) that depends, among other things, on G.

Let Amount(A,S) = Sum Ta, over all particles a. Consider this to be in units of A-particle-seconds. Amount(A,S) is clearly proportional to the number A(t) of A particles in the container during the (small) interval T. The claim now is that the reaction rate is proportional to Amount(A,S) * Amount(B,S).

Now partition T into many sub-intervals of length dt.
Accordingly, partition all of the world-lines of the particles into small segments (fragments), each of which has a duration of dt. Let Fa denote one of these segments of an a-world-line, and the same for Fb. Any reaction between specific a and b particles will take place between an Fa and an Fb that overlap in time and are close in space (a near-crossing). Let P(deltaT) be the probability that a reaction will take place between a randomly chosen Fa and Fb in S. Then the expected number of reactions in S will equal:

P(deltaT) * Amount(A,S) * Amount(B,S) = P(deltaT) * A(t) * B(t) * K.

Now it is not hard to show that P(deltaT) is proportional to deltaT = length(T). Once we have shown that, it follows that the reactions are a Poisson process, with a rate parameter that is proportional to A(t) * B(t) -- i.e., we have proven the law of mass action.

So let's divide T in two (while leaving the much smaller dt unchanged). Let T1 be the first half of T, and T2 the second half. Let Fa, Fb be randomly chosen fragments in S = G x T = (G x T1) union (G x T2). If Fa is in T1 and Fb is in T2, then they have no time overlap, and there is zero chance that they will react. So the probability that they will react in T is equal to the probability that they will react in T1, plus the probability that they will react in T2. I.e., P(deltaT) = 2 * P(deltaT / 2). That proves that P is proportional to deltaT, and so we are done.

Now, it would be good to find out where this kind of direct proof has already been done. I did try web searches. John, I will try to get a hold of that book you cited. Aside from that, can John or Jacob or anyone else comment on the validity of the proof that I just gave? I'd like to keep the blog articles rolling along, so I don't want to dwell on this point for too long. If nobody here spots any problems with the argument, I could just make the statement: here is a relatively simple way to understand why the law of mass action is true.
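Incidentally, the Poisson picture we keep appealing to can also be checked numerically. Below is a minimal Gillespie-style simulation sketch (my own illustration, not from the literature; the function name and parameters are hypothetical) of the single transition X: A + B --> C, where each inter-event interval is drawn from an exponential distribution whose rate is proportional to A(t) * B(t), exactly as the blog article's simulator will do:

```python
import random

def simulate(a, b, k=1.0, seed=42):
    """Gillespie-style simulation of A + B -> C under mass action.

    The propensity of the transition is k * A(t) * B(t); the waiting time
    until the next firing is exponential with that rate, which is the
    Poisson-process assumption discussed above.  Returns the firing times
    and the final (A, B, C) counts."""
    rng = random.Random(seed)
    t, c, times = 0.0, 0, []
    while a > 0 and b > 0:
        rate = k * a * b               # law of mass action
        t += rng.expovariate(rate)     # exponential inter-event interval
        a, b, c = a - 1, b - 1, c + 1  # fire X: consume one A and one B
        times.append(t)
    return times, (a, b, c)

times, state = simulate(5, 7)
print(len(times), state)   # prints: 5 (0, 2, 5)
```

Starting with 5 A's and 7 B's, the transition can fire exactly five times before A is exhausted, regardless of the random seed; only the firing times themselves are random.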
Thanks

Comment 6. (edited November 2012)

I wrote:

> Then the expected number of reactions in S will equal: P(deltaT) * Amount(A,S) * Amount(B,S) = P(deltaT) * A(t) * B(t) * K. Now it is not hard to show that P(deltaT) is proportional to deltaT = length(T). Once we have shown that, it follows that the reactions are a Poisson process, with rate parameter that is proportional to A(t) * B(t) -- i.e., we have proven the law of mass action.

I got this backwards, but it is not hard to fix. We want to show that the rate, which is the product P(deltaT) * Amount(A,S) * Amount(B,S), is proportional to deltaT = length(T). Before, we showed that Amount(A,S) and Amount(B,S) are both proportional to deltaT. Therefore we need to show that P(deltaT) is inversely proportional to deltaT. Here goes:

Again, divide T in two (while leaving the much smaller dt unchanged).
Let T1 be the first half of T, and T2 be the second half of T. Let Fa, Fb be randomly chosen fragments in S = G x T = G x T1 union G x T2. If Fa is in T1 and Fb is in T2, then they have no time overlap, and there is zero chance that they will react. There is a 50% chance that either (Fa is in T1 and Fb is in T2), or (Fa is in T2 and Fb is in T1), in which case they have zero probability of reacting. The other 50% of the time, they will either both be in T1, or both be in T2. In this case, the probability of them reacting is P(deltaT / 2). Therefore the overall probability of them reacting is 50% * P(deltaT / 2), i.e., P(deltaT) = 0.5 * P(deltaT / 2), i.e., P(deltaT / 2) = 2 * P(deltaT), which says that P is inversely proportional to deltaT, and we are done. Comment Source:I wrote: > Then the expected number of reactions in S will equal: > P(deltaT) * Amount(A,S) * Amount(B,S) = P(deltaT) * A(t) * B(t) * K, > Now it is not hard to show that P(deltaT) is proportional to deltaT = length(T). Once we have shown that, it follows that the reactions are a Poisson process, with rate parameter that is proportional to A(t) * B(t) -- i.e., we have proven the law of mass action. I got this backwards, but it is not hard to fix. We want to show that the rate, which is the product P(deltaT) * Amount(A,S) * Amount(B,S) is proportional to deltaT = length(T). Before we showed that Amount(A,S) and Amount(B,S) are both proportional to deltaT. Therefore we need to show that P(deltaT) is _inversely_ proportional to deltaT. Here goes: Again, divide T in 2 (while leaving the much smaller dt unchanged). Let T1 be the first half of T, and T2 be the second half of T. Let Fa, Fb be randomly chosen fragments in S = G x T = G x T1 union G x T2. If Fa is in T1 and Fb is in T2, then they have no time overlap, and there is zero chance that they will react. 
There is a 50% chance that either (Fa is in T1 and Fb is in T2), or (Fa is in T2 and Fb is in T1), in which case they have zero probability of reacting. The other 50% of the time, they will either both be in T1, or both be in T2. In this case, the probability of them reacting is P(deltaT / 2). Therefore the overall probability of them reacting is 50% * P(deltaT / 2), i.e., P(deltaT) = 0.5 * P(deltaT / 2), i.e., P(deltaT / 2) = 2 * P(deltaT), which says that P is inversely proportional to deltaT, and we are done. • Options 7. edited November 2012 I have third proof which is better than the ones I've given here. I'm putting it into the blog article, for which I will post a message when it is reviewable. The plot got thicker than I had imagined it would -- so it's been slower going than I had hoped. Comment Source:I have third proof which is better than the ones I've given here. I'm putting it into the blog article, for which I will post a message when it is reviewable. The plot got thicker than I had imagined it would -- so it's been slower going than I had hoped. • Options 8. Hi David, I'm sorry I have not yet had a chance to review your derivation as we discussed on Tuesday. David said: I have third proof which is better than the ones I’ve given here. I’m putting it into the blog article, for which I will post a message when it is reviewable. Given that, I'll plan to go through the details once you've posted them on the wiki. Maybe by then you'll be so confident in your proof that you won't care anymore if I review it, but I'll plan to read it anyway because I think it's interesting. Comment Source:Hi David, I'm sorry I have not yet had a chance to review your derivation as we discussed on Tuesday. David said: > I have third proof which is better than the ones I’ve given here. I’m putting it into the blog article, for which I will post a message when it is reviewable. Given that, I'll plan to go through the details once you've posted them on the wiki. 
9. Thanks!

10. (edited November 2012) I came across the first five chapters of a book on [physical chemistry](http://job-stiftung.de/pdf/buch/physical_chemistry_five_chapters.pdf), by Georg Job and Regina Ruffler. It starts with a creative approach to explaining entropy, describing it in "phenomenological" terms as a kind of substance (like the old view of heat as a substance), along with empirical ways to measure it. This is not fully satisfying, because it leaves me with the question of what this substance called entropy _is_, but it is creative, and the writing has a literary quality. It also contains nice descriptions of things like how a refrigerator works, and it has chapters on the "chemical potential" (which I didn't get on a first skim) and on the law of mass action. I'm looking for good references to introductory texts and online materials on physical chemistry, so if anyone has any, I would be appreciative. Thanks.

11. (edited November 2012) You might get quite a lot from my friend Mark Leach's [metasynthesis site](http://www.metasynthesis.co.uk). It carries an endorsement from Hoffmann, who won a Nobel prize for Frontier Molecular Orbital Theory (FMO). hth

12. Bingo:

* Daniel T. Gillespie, [A rigorous derivation of the chemical master equation](http://citeseerx.ist.psu.edu/viewdoc/summary/?doi=10.1.1.159.5220), Physica A 188 (1992) 404-425.

Just found this today. So for one of my upcoming blog articles I ended up recreating some of the key points from this work from 1992. But it was a good exercise, and I'm happy with the form of the argument that I will give.

I will also be talking about the limitations of the law of mass action, which is an approximation that loses validity as concentrations increase. I saw this stated in a paper, though I haven't yet found an explanation for it in the literature. In my assessment it is due to the diameter of the molecules. First there is the obvious issue that the finite size of the molecules puts an absolute limit on the concentrations. But let's look at what happens when concentrations are high but the container is not fully packed. Suppose the reaction is between species A and B. Doubling the concentration of A does in fact double the number of expected crossings between A and B particles. Specifically, I look at "epsilon-crossings," meaning near-crossings where the two particles come within a distance epsilon of each other; here the epsilon of interest is the radius of A plus the radius of B. But the A molecules are "competing" to react with the B molecules, so the presence of more A molecules reduces the conditional probability that any given epsilon-crossing will actually react. Hence the dependence of the reaction rate on each of the species concentrations will be sub-linear.

As we know, there is also a breakdown in the law at low concentrations, for reactions that take multiple inputs from the same species. There, the correction is to use falling powers, rather than regular powers, for the concentrations that appear more than once in the input. Stochastic behavior at low concentrations is also of practical interest. One paper I read pointed this out for biochemical reaction networks, where there can be relatively few molecules -- but large ones -- participating in the communication pathways over long durations. It quoted just a handful of codons being transcribed per second.

On a wiki page, I'm going to start an annotated bibliography on Petri nets. This deserves its own page, but we can link to it from Recommended Reading. If such a page already exists, please let me know. If I haven't heard in a few days, I'll create it.

13. I think the annotated bibliography on Petri nets deserves to be near the bottom of [[Petri net]], where we already have a bibliography! If it gets enormous we can break it off as a page of its own. I'm really glad you want to add more references, and annotation is crucial, since a huge undigested pile of references is not very useful. So, how about that blog article?

14. (edited December 2012) Ok, this week I will get it into a reviewable state -- I will let you know. I have been working on it, but my attention has been diverted by... life. We just had our house wired by an AV company (for sound and LAN), but they didn't configure the AV wireless network properly. So I've spent the last couple of days doing network troubleshooting... it is interesting to learn about, though the benchmarking gets to be a grind. Thanks for reminding me about applications that sit above the transport layer :) The Wilkinson book that you mentioned on the blog, "Stochastic Modelling for Systems Biology," just arrived. It's a great read. I will start adding bibliographic notes to the Petri net page, as you suggest.

15. Dave wrote:

> As we know, there is also the breakdown in the law, at low concentrations, for reactions that take multiple inputs from the same species. There, the correction is to use the falling powers, rather than the regular power, for the concentrations that appear more than once in the input. Stochastic behavior at low concentrations is also of practical interest. One paper I read pointed this out, for biochemical reaction networks. There, there can be relatively few molecules -- but large ones -- that are participating in the communication pathways, over long durations. There was a quote of just a handful of codons being transcribed per second.

Note that for looking at low concentration behaviour you might be interested in [A symbolic computational approach to a problem involving multivariate Poisson distributions](http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/mvp.pdf).
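A small numerical check of the scaling claim discussed in this thread: if each A-B pair independently has a small probability of reacting in each time step (which is what the fragment argument establishes), then the expected number of reactions is proportional to the product of the particle counts. The Python sketch below assumes exactly that pair-independence and ignores particle removal (dilute limit), so it only checks the combinatorial bookkeeping; the function name and parameter values are illustrative, not from the thread.

```python
import random

def expected_reactions(num_a, num_b, p_pair=1e-3, steps=1000, seed=42):
    """Toy well-mixed model of A + B -> C. In each time step, every A-B
    pair independently reacts with probability p_pair. Reacted particles
    are not removed, so this only tracks the expected reaction count."""
    rng = random.Random(seed)
    count = 0
    for _ in range(steps):
        for _ in range(num_a * num_b):  # one Bernoulli trial per A-B pair
            if rng.random() < p_pair:
                count += 1
    return count

r1 = expected_reactions(20, 20)
r2 = expected_reactions(40, 20, seed=7)
# doubling the A count roughly doubles the reaction count, as mass action predicts
```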
# Path Tracing 3D Fractals

In some ways path tracing is one of the simplest and most intuitive ways to do ray tracing. Imagine you want to simulate how the photons from one or more light sources bounce around a scene before reaching a camera. Each time a photon hits a surface, we choose a new randomly reflected direction and continue, adjusting the intensity according to how likely the chosen reflection is. Though this approach works, only a very tiny fraction of paths would terminate at the camera. So instead, we might start from the camera and trace the ray from there until we hit a light source. If the light source is large and slowly varying (for instance when using Image Based Lighting), this may provide good results. But if the light source is small, e.g. the sun, we have the same problem: the chance that we hit a light source using a path of random reflections is very low, and our image will be very noisy and converge slowly. There are ways around this: one is to trace rays starting from both the camera and the lights, and connect them (bidirectional path tracing); another is to test for possible direct lighting at each surface intersection (sometimes called 'next event estimation').

Even though the concept of path tracing might be simple, introductions to path tracing often get very mathematical. This blog post is an attempt to introduce path tracing as an operational tool, without going through too many formal definitions. The examples are built around Fragmentarium (and thus GLSL) snippets, but the discussion should be quite general. Let us start by considering how light behaves when hitting a very simple material: a perfect diffuse material.

## Diffuse reflections

A Lambertian material is an ideal diffuse material, which has the same radiance when viewed from any angle. Imagine that a Lambertian surface is hit by a light source. Consider the image above, showing some photons hitting a patch of a surface.
By pure geometrical reasoning, we can see that the amount of light hitting this patch of the surface is proportional to the cosine of the angle between the surface normal and the light ray:

$$\cos(\theta)=\vec{n} \cdot \vec{l}$$

By definition of a Lambertian material, this amount of incoming light is then reflected with the same probability in all directions. Now, to find the total light intensity in a given (outgoing) direction, we need to integrate over all possible incoming directions in the hemisphere:

$$L_{out}(\vec\omega_o) = \int K\,L_{in}(\vec\omega_i)\cos(\theta)\,d\vec\omega_i$$

where K is a constant that determines how much of the incoming light is absorbed in the material, and how much is reflected. Notice that there must be an upper bound on the value of K - too high a value would mean we emitted more light than we received. This is referred to as the 'conservation of energy' constraint, which puts the following bound on K:

$$\int K\cos(\theta)\,d\vec\omega_i \leq 1$$

Since K is a constant, this integral is easy to solve (see e.g. equation 30 here):

$$K \leq 1/\pi$$

Instead of using the constant K, when talking about a diffuse material's reflectivity, it is common to use the Albedo, defined as $$Albedo = K\pi$$. The Albedo is thus always between 0 and 1 for a physical diffuse material. Using the Albedo definition, we have:

$$L_{out}(\vec\omega_o) = \int (Albedo/\pi)\,L_{in}(\vec\omega_i)\cos(\theta)\,d\vec\omega_i$$

The above is the Rendering Equation for a diffuse material. It describes how light scatters at a single point. Our diffuse material is a special case of the more general formula:

$$L_{out}(\vec\omega_o) = \int BRDF(\vec\omega_i,\vec\omega_o)\,L_{in}(\vec\omega_i)\cos(\theta)\,d\vec\omega_i$$

where the BRDF (Bidirectional Reflectance Distribution Function) is a function describing the reflection properties of the given material: i.e. whether we have a shiny, metallic surface or a diffuse material.
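The bound on K can be checked numerically: the hemisphere integral of the cosine factor is exactly $$\pi$$, which is where the $$K \leq 1/\pi$$ limit comes from. Here is a quick Monte Carlo check in Python (a side calculation of mine, not part of the original post), using the fact that for a direction sampled uniformly over the hemisphere, $$z = \cos(\theta)$$ is uniformly distributed on [0,1]:

```python
import math
import random

def hemisphere_cosine_integral(n=200_000, seed=0):
    """Monte Carlo estimate of the integral of cos(theta) over the
    hemisphere. Uniform solid-angle sampling has pdf 1/(2*pi), so the
    estimator is 2*pi times the mean of cos(theta) over the samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.random()  # cos(theta) of a uniform hemisphere direction
        total += z
    return 2.0 * math.pi * total / n

estimate = hemisphere_cosine_integral()
# converges to pi, so energy conservation forces K <= 1/pi
```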
Completely diffuse material (click for large version)

## How to solve the rendering equation

An integral is a continuous quantity, which we must turn into something discrete before we can handle it on a computer. To evaluate the integral, we will use Monte Carlo sampling, which is very simple: to estimate an integral, we take a number of random samples and use the average of the sample values multiplied by the length of the integration interval:

$$\int_a^b f(x)dx \approx \frac{b-a}{N}\sum _{i=1}^N f(X_i)$$

If we apply this to our diffuse rendering equation above, we get the following discrete summation (here the "interval length" is $$2\pi$$, the area of the unit hemisphere we integrate over):

\begin{align} L_{out}(\vec\omega_o) &= \int (Albedo/\pi)\,L_{in}(\vec\omega_i)\cos(\theta)\,d\vec\omega_i \\ & = \frac{2\pi}{N}\sum_{\vec\omega_i} \frac{Albedo}{\pi}\, L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i) \\ & = \frac{2\, Albedo}{N}\sum_{\vec\omega_i} L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i) \end{align}

Test render (click for large version)

## Building a path tracer (in GLSL)

Now we are able to build a simple path tracer for diffuse materials. All we need to do is to shoot rays starting from the camera, and when a ray hits a surface, choose a random direction in the hemisphere defined by the surface normal. We continue like this until we hit a light source. Each time the ray changes direction, we modulate the light intensity by the factor found above:

$$2 \cdot Color \cdot Albedo \cdot L_{in}(\vec\omega_i) (\vec{n} \cdot \vec\omega_i)$$

The idea is to repeat this many times for each pixel, and then average the samples. This is why the sum and the division by N are no longer present in the formula. Also notice that we have added a (material specific) color: until now we have assumed that our materials handled all wavelengths the same way, but of course some materials absorb some wavelengths while reflecting others.
We will describe this using a three-component material color, which will modulate the light ray at each surface intersection. All of this boils down to very few lines of code:

vec3 color(vec3 from, vec3 dir) {
  vec3 hit = vec3(0.0);
  vec3 hitNormal = vec3(0.0);
  vec3 luminance = vec3(1.0);
  for (int i = 0; i < RayDepth; i++) {
    if (trace(from, dir, hit, hitNormal)) {
      dir = getSample(hitNormal); // new direction (towards light)
      luminance *= getColor()*2.0*Albedo*dot(dir, hitNormal);
      from = hit + hitNormal*minDist*2.0; // new start point
    } else {
      return luminance * getBackground( dir );
    }
  }
  return vec3(0.0); // Ray never reached a light source
}

The getBackground() method simulates the light sources in a given direction (i.e. infinitely far away). As we will see below, this fits nicely together with using Image Based Lighting. But even when implementing getBackground() as a simple function returning a constant white color, we can get very nice images. The images above were lit by only a constant white dome light, which gives the pure ambient-occlusion-like renders seen above.

## Sampling the hemisphere in GLSL

The code above calls a getSample() function to sample the hemisphere:

dir = getSample(hitNormal); // new direction (towards light)

This can be a bit tricky. There is a nice formula for $$cos^n$$ sampling of a hemisphere in the GI compendium (equation 36), but you still need to align the hemisphere with the surface normal. And you need to be able to draw uniform random numbers in GLSL, which is not easy. Below I use the standard approach of putting a seed into a noisy function. The seed should depend on the pixel coordinate and the sample number.
Here is some example code:

vec2 seed = viewCoord*(float(subframe)+1.0);

vec2 rand2n() {
  seed += vec2(-1, 1);
  // implementation based on: lumina.sourceforge.net/Tutorials/Noise.html
  return vec2(fract(sin(dot(seed.xy, vec2(12.9898, 78.233))) * 43758.5453),
              fract(cos(dot(seed.xy, vec2(4.898, 7.23))) * 23421.631));
}

vec3 ortho(vec3 v) {
  // See: http://lolengine.net/blog/2013/09/21/picking-orthogonal-vector-combing-coconuts
  return abs(v.x) > abs(v.z) ? vec3(-v.y, v.x, 0.0) : vec3(0.0, -v.z, v.y);
}

vec3 getSampleBiased(vec3 dir, float power) {
  dir = normalize(dir);
  vec3 o1 = normalize(ortho(dir));
  vec3 o2 = normalize(cross(dir, o1));
  vec2 r = rand2n();
  r.x = r.x*2.*PI;
  r.y = pow(r.y, 1.0/(power+1.0));
  float oneminus = sqrt(1.0-r.y*r.y);
  return cos(r.x)*oneminus*o1 + sin(r.x)*oneminus*o2 + r.y*dir;
}

vec3 getSample(vec3 dir) {
  return getSampleBiased(dir, 0.0); // <- unbiased!
}

vec3 getCosineWeightedSample(vec3 dir) {
  return getSampleBiased(dir, 1.0);
}

## Importance Sampling

Now there are some tricks to improve the rendering a bit. Looking at the formulas above, it is clear that light sources in the surface normal direction will contribute the most to the final intensity (because of the $$\vec{n} \cdot \vec\omega_i$$ term). This means we might want to sample more in the surface normal direction, since these contributions have a bigger impact on the final average. But wait: we are estimating an integral using Monte Carlo sampling. If we bias the samples towards the higher values, surely our estimate will be too large. It turns out there is a way around that: it is okay to sample using a non-uniform distribution, as long as we divide each sample value by the probability density function (PDF). Since we know the diffuse term is modulated by $$\vec{n} \cdot \vec\omega_i = \cos(\theta)$$, it makes sense to sample from a non-uniform cosine weighted distribution.
According to the GI compendium (equation 35), this distribution has a PDF of $$\cos(\theta) / \pi$$, which we must divide by when using cosine weighted sampling. In comparison, the uniform sampling on the hemisphere we used above can be thought of as either multiplying by the integration interval length ($$2\pi$$), or dividing by a constant PDF of $$1 / 2\pi$$. If we insert this, we end up with a simpler expression for the cosine weighted sampling, since the cosine terms cancel out:

vec3 color(vec3 from, vec3 dir) {
  vec3 hit = vec3(0.0);
  vec3 hitNormal = vec3(0.0);
  vec3 luminance = vec3(1.0);
  for (int i = 0; i < RayDepth; i++) {
    if (trace(from, dir, hit, hitNormal)) {
      dir = getCosineWeightedSample(hitNormal);
      luminance *= getColor()*Albedo;
      from = hit + hitNormal*minDist*2.0; // new start point
    } else {
      return luminance * getBackground( dir );
    }
  }
  return vec3(0.0); // Ray never reached a light source
}

## Image Based Lighting

It is now trivial to replace the constant dome light with Image Based Lighting: just look up the lighting from a panoramic HDR image in the getBackground(dir) function. This works nicely, at least if the environment map is not varying too much in light intensity. Here is an example:

Stereographic 4D Quaternion system (click for large version)

If, however, the environment has small, strong light sources (such as a sun), the path tracing will converge very slowly, since we are not likely to hit these by chance. But for some IBL images this works nicely - I usually use a filtered (blurred) image for lighting, since this will reduce noise a lot (though the result is not physically correct). The sIBL archive has many great free HDR images (the ones named '*_env.hdr' are prefiltered and useful for lighting).

## Direct Lighting / Next Event Estimation

But without strong, localized light sources there will be no cast shadows - only ambient-occlusion-like contact shadows. So how do we handle strong lights?
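As a quick aside, the "divide by the PDF" rule used for cosine weighted sampling above can be sanity-checked outside GLSL. This Python sketch (a side check of mine, not from the original post) estimates $$\int f(\cos\theta)\cos(\theta)\,d\vec\omega$$ over the hemisphere for a test function $$f(z)=z^2$$, once with uniform sampling and once with cosine weighted sampling; both estimators should converge to the exact value $$\pi/2$$:

```python
import math
import random

def uniform_estimate(n, rng):
    # Uniform hemisphere sampling: z = cos(theta) is uniform on [0,1],
    # pdf = 1/(2*pi), so multiply the mean integrand f(z)*z = z^3 by 2*pi.
    total = sum(rng.random() ** 3 for _ in range(n))
    return 2.0 * math.pi * total / n

def cosine_estimate(n, rng):
    # Cosine weighted sampling: z = sqrt(u) gives pdf = cos(theta)/pi,
    # so the cos(theta) factor cancels and we average f(z) times pi.
    total = 0.0
    for _ in range(n):
        z = math.sqrt(rng.random())
        total += z * z
    return math.pi * total / n

rng = random.Random(1)
a = uniform_estimate(200_000, rng)
b = cosine_estimate(200_000, rng)
# both approach pi/2, the cosine weighted estimator with lower variance
```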
Test scene with IBL lighting

Let us consider the sun for a moment. The sun has an angular diameter of 32 arc minutes, or roughly 0.5 degrees. How much of the hemisphere is this? The solid angle (which corresponds to the area covered on a unit sphere) is given by:

$$\Omega = 2\pi (1 - \cos {\theta} )$$

where $$\theta$$ is half the angular diameter. Using this we get that the sun covers roughly $$6*10^{-5}$$ steradians, or around 1/100000 of the hemisphere surface. You would actually need around 70000 samples before there is even a 50% chance of a pixel catching some sun light (using $$1-(1-10^{-5})^{70000} \approx 50\%$$).

Test scene: naive path tracing of a sun-like light source (10000 samples per pixel!)

Obviously, we need to bias the sampling towards the important light sources in the scene - similar to what we did earlier, when we biased the sampling to follow the BRDF distribution. One way to do this is Direct Lighting or Next Event Estimation sampling. This is a simple extension: instead of only tracing the light ray until we hit a light source, we send out a test ray in the direction of the sun light source at each surface intersection.
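The solid-angle figures quoted above are easy to reproduce; here is the arithmetic in Python (a side check, not part of the original post):

```python
import math

def cap_solid_angle(theta_deg):
    """Solid angle (in steradians) of a spherical cap with angular
    radius theta, i.e. half the angular diameter of the light source."""
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(theta_deg)))

omega = cap_solid_angle(0.25)        # sun: roughly 0.5 degrees across
fraction = omega / (2.0 * math.pi)   # fraction of the hemisphere, ~1e-5
# uniform samples needed for a 50% chance of at least one sun hit
n50 = math.log(0.5) / math.log(1.0 - fraction)
# omega ~ 6e-5 sr, fraction ~ 1/100000, n50 ~ 70000
```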
Test scene with direct lighting (100 samples per pixel)

Here is some example code:

vec3 getConeSample(vec3 dir, float extent) {
  // Formula 34 in GI Compendium
  dir = normalize(dir);
  vec3 o1 = normalize(ortho(dir));
  vec3 o2 = normalize(cross(dir, o1));
  vec2 r = rand2n();
  r.x = r.x*2.*PI;
  r.y = 1.0 - r.y*extent;
  float oneminus = sqrt(1.0-r.y*r.y);
  return cos(r.x)*oneminus*o1 + sin(r.x)*oneminus*o2 + r.y*dir;
}

vec3 color(vec3 from, vec3 dir) {
  vec3 hit = vec3(0.0);
  vec3 direct = vec3(0.0);
  vec3 hitNormal = vec3(0.0);
  vec3 luminance = vec3(1.0);
  for (int i = 0; i < RayDepth; i++) {
    if (trace(from, dir, hit, hitNormal)) {
      dir = getCosineWeightedSample(hitNormal);
      luminance *= getColor()*Albedo;
      from = hit + hitNormal*minDist*2.0; // new start point

      // Direct lighting
      vec3 sunSampleDir = getConeSample(sunDirection, 1E-5);
      float sunLight = dot(hitNormal, sunSampleDir);
      if (sunLight > 0.0 && !trace(hit + hitNormal*2.0*minDist, sunSampleDir)) {
        direct += luminance*sunLight*1E-5;
      }
    } else {
      return direct + luminance*getBackground( dir );
    }
  }
  return vec3(0.0); // Ray never reached a light source
}

The 1E-5 factor is the fraction of the hemisphere covered by the sun. Notice that you might run into precision errors with the single-precision floats used in GLSL when doing these calculations. For instance, on my graphics card, cos(0.4753 degrees) is exactly equal to 1.0, which means a physically sized sun can easily introduce large numerical errors (remember the sun is roughly 0.5 degrees).

## Sky model

To provide somewhat more natural lighting, an easy improvement is to combine the sun light with a blue sky dome. A slightly more complex model is the Preetham sky model, a physically based model taking different kinds of scattering into account. Based on the code from Simon Wallner, I implemented a Preetham model in Fragmentarium. Here is an animated example, showing how the color of the sun light changes during the day:

## Fractals

Now, finally, we are ready to apply path tracing to fractals.
Technically, there is not much new to this - I have previously covered how to do the ray-fractal intersection in this series of blog posts: Distance Estimated 3D fractals. So the big question is whether it makes sense to apply path tracing to fractals, or whether the subtle details of multiple light bounces are lost on the complex fractal surfaces. Here is the Mandelbulb, rendered with the sky model:

Path traced Mandelbulb (click for larger version)

Here path tracing provides a very natural and pleasant lighting, which improves the 3D perception. Here are some more comparisons of complex geometry:

Default ray tracer in Fragmentarium

Path traced in Fragmentarium

And another one:

Default ray tracer in Fragmentarium

Path traced in Fragmentarium

## What's the catch?

The main concern with path tracing is of course the rendering speed, which I have not talked much about, mainly because it depends on a lot of factors, making it difficult to give a simple answer. First of all, the images above are distance estimated fractals, which means they are a lot slower to render than polygons (at least if you have a decent spatial acceleration structure for the polygons, which is surprisingly difficult to implement on a GPU). But let me give some numbers anyway. In general, the rendering time will be (roughly) proportional to the number of pixels and the number of samples per pixel, and inversely proportional to the FLOPS of the GPU. On my laptop (a mobile mid-range NVIDIA 850M GPU), the Mandelbulb image above took 5 minutes to render at 2442x1917 resolution (with 100 samples per pixel). The simple test scene above took 30 seconds at the same resolution (with 100 samples per pixel). But remember that since we can show the render progressively, it is still possible to use this at interactive speeds. What about the ray lengths (the number of light bounces)?
Here is a comparison as an animated GIF, showing direct light only (the darkest), followed by one internal light bounce, and finally two internal light bounces:

In terms of speed, one internal bounce made the render 2.2x slower, while two bounces made it 3.5x slower. It should be noted that the visual effect of adding additional light bounces is normally relatively small - I usually use only a single internal light bounce.

Even though the images above suggest that path tracing is a superior technique, it is also possible to create good looking images in Fragmentarium with the existing ray tracers. For instance, take a look at this image (taken from the Knots and Polyhedra series):

It was ray traced using the 'Soft-Raytracer.frag', and I was not able to improve on the render using the path tracer. Having said that, the Soft-Raytracer is also a multi-sample ray tracer, which has to use lots of samples to produce its nice noise-free soft shadows.

## References

The Fragmentarium path tracers are still Work-In-Progress, but they can be downloaded here: Sky-Pathtracer.frag (which needs the Preetham model: Sunsky.frag), and the image based lighting one: IBL-Pathtracer.frag. The path tracers can be used by replacing an existing ray tracer '#include' in any Fragmentarium .frag file.

External resources:

GI Total Compendium - very valuable collection of all formulas needed for ray tracing.

Vilém Otte's Bachelor Thesis on GPU Path Tracing is a good introduction.

Disney's BRDF explorer - interactive display of different BRDF models, with many examples included. The BRDF definitions are short GLSL snippets, making them easy to use in Fragmentarium!

Inigo Quilez's path tracer was the first example I saw of GPU path tracing of fractals.

Evan Wallace - the first WebGL path tracer I am aware of.

Brigade is probably the most interesting real-time path tracer: Vimeo video and paper.
I would have liked to talk a bit about unbiased and consistent rendering, but I don't understand these issues properly yet. It should be said, however, that since the examples I have given terminate after a fixed number of ray bounces, they will not converge to a true solution of the rendering equation (and are thus both biased and inconsistent). For consistency, a better termination criterion, such as Russian roulette termination, is needed.

# Rendering 3D fractals without a distance estimator

I have written a lot about distance estimated 3D fractals, and while distance estimation is a fast and elegant technique, it is not always possible to derive a distance estimate for a particular system. So, how do you render a fractal if the only knowledge you have is whether a given point belongs to the set or not? Or, in other words, how much information can you extract if the only information you have is a black-box function of the form:

```glsl
bool inside(vec3 point);
```

I decided to try out some simple brute-force methods to see how they would compare to the DE methods. Contrary to my expectations, it turned out that you can actually get reasonable results without a DE.

First a couple of disclaimers: brute-force methods cannot compete with distance estimators in terms of speed. They will typically be an order of magnitude slower. And if you do have more information available, you should always use it: for instance, even if you can't find a distance estimator for a given escape time fractal, the escape length contains information that can be used to speed up the rendering or create a surface normal.

The method I used is not novel nor profound: I simply sample random points along the camera ray for each pixel. Whenever a hit is found on the camera ray, the sampling will proceed on only the interval between the camera and the hit point (since we are only interested in finding the closest pixels), e.g.
something like this:

```glsl
// Uniform sampling along the camera ray (a sketch of the idea).
// 'hit' is the closest hit found so far, as a fraction of the [Near, Far] segment.
float hit = 1.0;
for (int i = 0; i < Samples; i++) {
    float t = hit * rand(); // only sample between the camera and the closest hit
    vec3 p = Eye + (Near + t * (Far - Near)) * rayDir;
    if (inside(p)) hit = t;
}
```

(The Near and Far distances are used to restrict the sample space, and speed up rendering.)

There are different ways to choose the samples. The simplest is to just sample uniformly (as in the example above), but I found that a stratified approach, where the camera ray segment is divided into equal pieces and a sample is chosen from each piece, works better. I think the sampling scheme could be improved: in particular, once you have found a hit, you should probably bias the sampling towards the hit to make convergence faster. Since I use a progressive (double buffered) approach in Fragmentarium, it is also possible to read the pixel depths of adjacent pixels, which probably also could be used.

Now, after sampling the camera rays you end up with a depth map, like this:

(Be sure to render to a texture with 32-bit floats - an 8-bit buffer will cause quantization.)

For distance estimated rendering, you can use the gradient of the distance estimator to obtain the surface normal. Unfortunately, this is not an option here. We can, however, calculate a screen space surface normal, based on the depths of adjacent pixels, and transform this normal back into world space:

```glsl
// Hit position in world space.
vec3 worldPos = Eye + (Near + tex.w * (Far - Near)) * rayDir;
vec3 n = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
```

(Update: I found out that GLSL supports finite difference derivatives through the dFdx statement, which made the code above much simpler.)

Now we can use a standard lighting scheme, like Phong shading. This really brings a lot of detail to the image:

In order to improve the depth perception, it is possible to apply a screen space ambient occlusion scheme. Recently, there was a very nice tutorial on SSAO on devmaster, but I was too lazy to try it out. Instead I opted for the simplest method I could think of: simply sample some pixels in a neighborhood, and count how many of them are closer to the camera than the center pixel.
```glsl
float occ = 0.;
float samples = 0.;
for (float x = -5.; x <= 5.; x++) {
    for (float y = -5.; y <= 5.; y++) {
        if (x*x + y*y > 25.) continue;
        vec2 jitteredPos = pos + vec2(dx*(x + rand(vec2(x,y)+pos)), dy*(y + rand(vec2(x,y)+pos)));
        float depth = texture2D(frontbuffer, jitteredPos).w;
        if (depth >= centerDepth) occ += 1.;
        samples++;
    }
}
occ /= samples;
```

This is how this naive ambient occlusion scheme works:

(Notice that for pixels with no hits, I've chosen to lighten, rather than darken, them. This creates an outer glow effect.)

Now combined with the Phong shading we get:

I think it is quite striking how much detail you can infer simply from a depth map! In this case I didn't color the fractal, but nothing prevents you from assigning a calculated color - the depth buffer information only uses the alpha channel. Here is another example (Aexion's MandelDodecahedron):

While brute-force rendering is much slower than distance estimation, it is possible to render these systems at interactive frame rates in Fragmentarium, especially since responsiveness can be improved by using progressive rendering: do a number of samples, then store the best found solution (closest pixel) in a depth buffer (I use the alpha channel), render the frame, and repeat.

There are a couple of downsides to brute-force rendering:

- It is slower than distance estimation.
- You have to rely on screen space methods for ambient occlusion, surface normals, and depth-of-field.
- Anti-aliasing is more tricky, since you cannot accumulate and average. You may render at higher resolution and downsample, or use tiled rendering, but beware that screen space ambient occlusion introduces artifacts which may be visible on tile edges.

On the other hand, there are also advantages:

- Much simpler to construct.
- Interior renderings are trivial - just reverse the 'inside' function.
- Progressive quality rendering: just keep adding samples, and the image will converge.
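As a concrete illustration of the stratified sampling mentioned earlier (dividing the camera ray segment into equal pieces and drawing one jittered sample from each), a sketch could look like this. It is an illustration only, not the Fragmentarium source; `Samples`, `rand`, and `inside` are hypothetical helpers:

```glsl
// Stratified sampling along the camera ray segment (sketch): divide the
// current [0, hit] interval into Samples equal strata and draw one jittered
// sample from each, so the samples cover the segment evenly instead of
// clumping the way purely uniform random samples can.
float hit = 1.0; // closest hit found so far, as a fraction of [Near, Far]
for (int i = 0; i < Samples; i++) {
    // pick a random point inside the i'th stratum of the current interval
    float t = hit * (float(i) + rand(position + vec2(float(i)))) / float(Samples);
    vec3 p = Eye + (Near + t * (Far - Near)) * rayDir;
    if (inside(p)) hit = t; // shrink the search interval to [0, t]
}
```

Shrinking `hit` inside the loop means later strata automatically concentrate near the closest hit found so far, which helps convergence.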
To use the Fragmentarium script, just implement an 'inside' function:

```glsl
#define providesInside
#include "Brute-Raytracer.frag"

bool inside(vec3 pos) {
    // fractal definition here
}
```

It is also possible to use the raytracer on existing DE's - here a point is assumed to be inside a fractal if the DE returns a negative number, and outside if the DE returns a positive one:

```glsl
#include "Brute-Raytracer.frag"

float DE(vec3 pos) {
    // fractal definition here
    // notice that only the sign of the return value is checked
}
```

The script can be downloaded as part of the Fragmentarium source distribution (it is not yet in the binary distributions). The following files are needed:

- Tutorials/3D fractals without a DE.frag
- Include/Brute-Raytracer.frag
- Include/DepthBufferShader.frag
- Include/Brute3D.frag

# Fragmentarium Version 0.9.12 ("Prague") Released

I've released a new build of Fragmentarium, version 0.9.12 ("Prague"). It can be downloaded at GitHub. (Binaries for Windows, source for Windows/Linux/Mac.)

The (now standard) caveats apply: Fragmentarium is very much work in progress, and is best suited for people who like to experiment with code.

Version 0.9.12 continues to move Fragmentarium in the direction of progressive HDR rendering. The default raytracers now use accumulated rendering for anti-aliasing, shadowing, and DOF. To start the progressive rendering, Fragmentarium must be set to 'Continuous' mode. It is possible to set a maximum number of rendered frames. All 2D and 3D systems now also come with tone mapping, gamma correction, and color control (see the 'Post' tab).

IBL Raytracing, using an HDR panorama from Blotchi at HDRLabs.

There is a new raytracer, 'IBL-raytracer.frag', which can be used for DE's instead of the default raytracer. It uses Image Based Lighting from HDR panorama maps. For an example of the new IBL raytracer, see the tutorial: "25 - Image Based Lighting.frag".

If you need to do stuff like animation, it is still possible to use the old raytracers.
They can be included as:

```glsl
#include "DE-Raytracer-v0.9.1.frag"
```

or

```glsl
#include "DE-Raytracer-v0.9.10.frag"
```

Other than that, there is now better support for buffer-swap systems (e.g. reaction-diffusion and game-of-life) and better control of texture look-ups. There are also some interesting new fragments, including the absolutely amazing LivingKIFS.frag script from Kali.

## New features

• Added maximum subframe counter (for progressive rendering).
• Added support for HDR textures (.hdr RGBE format).
• Tone mapping, color control, and gamma correction in the buffershader.
• Added support for a widget for changing bound textures.
• More host defines:
#define SubframeMax 0
#define DontClearOnChange <- when sliders/camera change, the backbuffer is not cleared.
#define IterationsBetweenRedraws 20 <- makes it possible to do several steps without updating the screen.
• Added texture parameter preprocessor defines:
#TexParameter texture GL_TEXTURE_MIN_FILTER GL_LINEAR
#TexParameter texture GL_TEXTURE_MAG_FILTER GL_NEAREST
#TexParameter texture GL_TEXTURE_WRAP_S GL_CLAMP
#TexParameter texture GL_TEXTURE_WRAP_T GL_REPEAT
• Change of syntax: when using "#define providesColor", now implement a 'vec3 baseColor(vec3)' function.
• DE-Raytracer.frag now uses a 'Blinn-Phong with Schlick term and physical normalization' (which is something I found in Naty Hoffman's course notes).
• DE-Raytracer.frag and Soft-Raytracer.frag now use the new '3D.frag' base class.
• Added a texture manager (should reuse and discard textures in memory automatically).
• Added option to set the OpenGL refresh rate in preferences.
• Progressive2D.frag now supports custom filtering (using '#define providesFiltering').
• Added support for choosing '#include' through the editor context menu.
• Using arrow keys now works when sliders have focus.
• Now does a 'reset all' when loading a new system (otherwise too confusing).

## New fragments

• Added 'Kali's Creations': KaliBox, LivingKIFS, TreeBroccoli, Xray_KIFS.
[Kali]
• Added Doyle-Spirals.frag [Knighty]
• Added: Droste.frag (Escher Droste effect)
• Added: Reaction-Diffusion.frag (Gray-Scott example)
• Added 'Convolution.frag' example (for precalculating specular and diffuse lighting from HDR panoramas)
• Added examples of working with double precision floats and emulated double precision floats: "Include/EmulatedDouble.frag", "Theory/Mandelbrot - Emulated Doubles.frag"
• Added 'IBL-Raytracer.frag' (Image Based Lighting raytracer)
• Added tutorials: 'progressive2D.frag' and 'pure3D.frag'
• Added experimental: 'testScene.frag' and 'triplanarTexturing.frag'
• Added 'Thorn.frag'

## Bug fixes

• Reflection is now working again in 'DE-Raytracer.frag'.
• Fixed a filename case sensitivity error when doing reverse lookup of line numbers.

### Mac users

Some Mac users have reported problems with the latest versions of Fragmentarium. Again, I don't own a Mac, so I cannot solve these issues without help.

For examples of images generated with the new version, take a look at the Flickr Fragmentarium stream. Finally, please read the FAQ before asking questions.

# Reaction-Diffusion Systems

Reaction-diffusion systems model the spatial dynamics of chemicals. An interesting early application was Alan Turing's theory of morphogenesis (Turing's 1952 paper). Here he suggested that the pattern formation in animal skin could be explained by a two-component reaction-diffusion system.

Reaction-diffusion systems are interesting because they display a wide range of self-organizing patterns, and they have been used by several digital artists, both for 2D pattern generation and 3D structure generation. The reaction-diffusion model is a great example of how complex large-scale structure may emerge from simple, local rules.
## Modelling Reaction-Diffusion on a GPU

As the name suggests, these systems have two driving components: diffusion, which tends to spread out or smoothen concentrations, and reactions, which describe how chemical species may transform into each other.

For each chemical species, it is possible to describe the evolution using a differential equation of the form:

$$\frac {\partial A}{\partial t} = K \nabla^2 A + P(A,B)$$

where A and B are fields describing the concentration of a chemical species at each point in space. The $$K$$ coefficient determines how quickly the concentration spreads out, and $$P(A,B)$$ is a polynomial in the different species concentrations in the system. There will be a similar equation for the B field.

To model these, we can represent the concentrations on a discrete grid, which fits nicely into a 2D texture on a GPU. The time derivative can be solved in discrete time steps using forward Euler integration (or something more powerful).

On a GPU, we need two buffers to do this: we render the next time step into the front buffer using values from the back buffer, and then swap the buffers. Buffer swapping is a standard technique, and in Fragmentarium the only thing you need to do is to declare a 'uniform sampler2D backbuffer;' and Fragmentarium will take care of creation and swapping of buffers. We also use the Fragmentarium host define '#buffer RGBA32F' to ask for four-component 32-bit float buffers, instead of the normal 8-bit integer buffers.

The Laplacian may be calculated using a finite differencing scheme, for instance using a five-point stencil:

```glsl
vec3 P = vec3(pixelSize, 0.0);

// Five point stencil Laplacian
vec4 laplacian5() {
    return
        + texture2D( backbuffer, position - P.zy )
        + texture2D( backbuffer, position - P.xz )
        - 4.0 * texture2D( backbuffer, position )
        + texture2D( backbuffer, position + P.xz )
        + texture2D( backbuffer, position + P.zy );
}
```

(See the Fragmentarium source for a nine-point stencil.)
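A nine-point stencil also includes the diagonal neighbours, which reduces the grid-axis bias of the five-point version. A common weighting looks like the sketch below; this is one standard choice, not necessarily the exact stencil used in the Fragmentarium source:

```glsl
// Nine-point stencil Laplacian (sketch of a common weighting, not the
// Fragmentarium source): diagonal neighbours weighted 1, edge neighbours
// weighted 4, centre -20, normalized by 1/6.
vec4 laplacian9() {
    return (
        + 1.0 * texture2D( backbuffer, position - P.xy )               // lower-left
        + 4.0 * texture2D( backbuffer, position - P.zy )               // below
        + 1.0 * texture2D( backbuffer, position + vec2(P.x, -P.y) )    // lower-right
        + 4.0 * texture2D( backbuffer, position - P.xz )               // left
        - 20.0 * texture2D( backbuffer, position )                     // centre
        + 4.0 * texture2D( backbuffer, position + P.xz )               // right
        + 1.0 * texture2D( backbuffer, position - vec2(P.x, -P.y) )    // upper-left
        + 4.0 * texture2D( backbuffer, position + P.zy )               // above
        + 1.0 * texture2D( backbuffer, position + P.xy )               // upper-right
    ) / 6.0;
}
```

As with the five-point stencil, the grid spacing factor is left out here and absorbed into the diffusion coefficients.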
A simple two-component Gray-Scott system may then be modelled simply as:

```glsl
// time step for Gray-Scott system:
vec4 v = texture2D(backbuffer, position);
vec2 lv = laplacian5().xy;  // laplacian
float xyy = v.x*v.y*v.y;    // utility term
vec2 dV = vec2( Diffusion.x * lv.x - xyy + f*(1.-v.x),
                Diffusion.y * lv.y + xyy - (f+k)*v.y );
v.xy += timeStep*dV;
```

(Robert Munafo has a great page with more information on Gray-Scott systems.)

Here is an example of a typical system created using the above system, though many other patterns are possible:

It is also possible to enforce some structure by changing the concentrations in certain regions:

You can even use a picture to modify the concentrations:

A template implementation can be found as part of the Fragmentarium source at GitHub: Reaction-Diffusion.frag. Notice that this fragment requires a recent source build from the GitHub repository to run.

## Reaction-Diffusion systems used by artists

Several artists have used reaction-diffusion systems in different ways, but the most impressive examples of 2D images I have seen are the works of Jonathan McCabe. For instance his Bone Music series:

or his Turing Flow series:

McCabe's images are created using a more complex multi-scale model. Softology's blog entry and W:Blut's post dissect McCabe's approach (there is even a reference implementation in Processing). Notice that Nervous System sells some of McCabe's works as jigsaw puzzles.

## Reaction-Diffusion systems in WebGL

Felix Woitzel (@Flexi23) has created some beautiful WebGL-based reaction-diffusion demos, such as this fluid simulation with Turing patterns:

He has also created several other RD-based variants over at WebGL Playground.

## Fabricated 3D Objects

Jessica Rosenkrantz and Jesse Louis-Rosenberg at Nervous System create and sell objects designed and inspired by generative processes.
Several of their objects, including these cups, plates, and lamps, are based on reaction-diffusion systems, and can be bought from their webshop. Be sure to read their blog entries about reaction-diffusion. And don't forget to take a look at their Cell Cycle WebGL design app while visiting.

## Reaction-Diffusion Software

An easy way to explore reaction-diffusion systems without doing any coding is by using Ready, which uses OpenCL to explore RD systems. It has several interesting features, including the ability to run systems on 3D meshes and directly interact and 'paint' on the surfaces. It also lets you run Game-of-Life on exotic geometries, such as a torus or even a Penrose tiling.

# Double Precision in OpenGL and WebGL

This post talks about double precision numbers in OpenGL and WebGL, and how to emulate them if there is no native hardware support.

In GLSL 4.00.9 (which is OpenGL 4.0) and higher, there is a native double precision floating point type. And if your graphics card is able to run OpenGL 4.0, it most likely has native hardware support for doubles (except for a few ATI/AMD cards). There are some caveats, though:

1. Not all functions are supported with double precision arguments. For instance, there are no trigonometric and exponential functions. (The available functions may be found here.)

2. You cannot pass double precision 'varying' parameters from the vertex shader to the fragment shader, and have the GPU automatically interpolate them. Double precision varying variables must be flat.

3. Double precision performance may be artificially limited by the hardware manufacturers. This is the case for Nvidia's Fermi architecture, where the scientific computing brand, the Tesla series, can execute double precision arithmetic at half the speed of single precision, while the consumer brand, the GeForce series, can only execute double precision arithmetic at 1/8 the speed of single precision.
For Nvidia's brand new Kepler architecture used in the GeForce 600 series, things change again: here the difference between single and double precision will be a whopping factor of 24! Notice that this will also be the case for some cards in the Kepler Tesla branch, such as the Tesla K10.

4. In Fragmentarium (and, in general, in Qt's OpenGL wrapper classes) it is not possible to set double precision uniforms. This should be easy to circumvent by using the OpenGL API directly, though.

(Non-related Fragmentarium image)

In order to use double precision, you must either specify a GLSL version of 4.00 (or higher) or use the extension:

```glsl
#extension GL_ARB_gpu_shader_fp64 : enable
```

Older cards, like the GeForce 310M in my laptop, do not support double precision in hardware. Here it is possible to use emulated double precision instead. I used the functions by Henry Thasler, described in his posts, to emulate a double precision number stored in two single precision floats.

The worst part about doing emulated doubles in GLSL is that GLSL does not support operator overloading. This means the syntax gets ugly for simple arithmetic, e.g. 'z = add(mul(z1,z2),z3)' instead of 'z = z1*z2+z3'.

On Nvidia cards, it is necessary to turn off optimization to use Thasler's code - this can be done using the following pragmas:

```glsl
#pragma optionNV(fastmath off)
#pragma optionNV(fastprecision off)
```

(Non-related Fragmentarium image)

## Performance

To test performance, I used a Mandelbrot test scene, rendered at 1000×500 with 1000 iterations in Fragmentarium. The numbers show the performance in frames per second. The zoom factor was determined visually, by noticing when pixelation occurred.

| | GeForce 570 GTX (~300 USD) | Tesla 2075 (~2200 USD) | Max Zoom |
| --- | --- | --- | --- |
| Single | 140 | 100 | 10^5 |
| Double | 41 | 70 | 10^14 |
| Emulated Double | 16 | 11 | 10^13 |

Some observations:

• Emulated double precision is slightly less accurate than true hardware doubles, but not by much in this particular scenario.
• Emulated doubles are roughly 1/9th the speed of single precision. Amazingly, this suggests that on the Kepler architecture it might make more sense to use emulated double precision than the built-in hardware support!
• Hardware doubles on the 570 GTX perform better than expected (they should perform at roughly 1/8 the speed). This is probably because double precision arithmetic isn't the only bottleneck in the shader.

Notice that the Tesla card was running on Windows in WDDM mode, not TCC mode (since you cannot use GLSL shaders in TCC mode). Not that I think performance would change.

## WebGL and double precision

WebGL does not support double precision in its current incarnation. This might change in the future, but currently the only choice is to emulate them. This, however, is problematic, since WebGL seems to strip away pragmas! Henry Thasler's emulation code doesn't work under the ANGLE layer either. In fact, the only configuration I could get to work was on an Intel HD 3000 GPU with ANGLE disabled. I did create a sample application to test this, which can be tried out here:

Click to run WebGL app. Left side is single precision, right side is emulated double precision. Here shown on Firefox without ANGLE on an Intel HD 3000 card.

It is not clear why the WebGL version does not work on Nvidia cards. Floating points may run at lower resolution in WebGL, but I'm using the 'precision highp' qualifiers. I also tried querying the resolution using glContext.getShaderPrecisionFormat(...), but had no luck - it is only available on Firefox, and on my GPUs it just returns precision=0.

The most likely explanation is that the Nvidia drivers perform some optimizations which spoil the emulation code. This is also the case for desktop OpenGL, but there the pragmas solve the problem. The emulation code uses constructs like:

```glsl
z = a - (a - b);
```

which I suspect the well-meaning compiler might translate to 'z = b', since the rounding errors normally would be insignificant.
Judging from some comments on Thasler's original posts, it might be possible to prevent this using constructs such as 'z = a - float(a-b)', but I have not pursued this.

## Fragmentarium and Double Precision

Except that there are no double precision sliders (uniforms), it is straightforward to use double precision code in Fragmentarium. The only thing to remember is that you cannot pass doubles from the vertex shader to the fragment shader, which is the standard way of passing camera information to the shader in Fragmentarium. I've also included a small port of Thasler's GLSL code in the distribution (see "Include/EmulatedDouble.frag"). It is quite easy to use (for an example, try the included "Theory/Mandelbrot - Emulated Doubles.frag").

# WebGL for Shader Graphics

Web applications are becoming popular, not least because of Google's massive effort to push everything through the browser (with Chrome OS being the most extreme example, where everything is running through a browser interface). Before WebGL, the only way to create efficient graphics was through plug-ins, such as Adobe's Flash, Microsoft's Silverlight, Unity, or Google's O3D and Native Client. But WebGL is a vendor-independent technology, directly integrated with the browser's JavaScript language and DOM model.

Unfortunately, WebGL browser support is limited. WebGL is not available in Internet Explorer on Windows, and is not enabled by default in Safari on Mac OS X. This means that roughly 50% of all internet users won't have access to WebGL content. WebGL is not supported on iOS devices either (even though it is accessible for iAds, and can be enabled on jail-broken devices). What is worse, Microsoft does not even plan to support WebGL, since they consider it a security threat.
Their concerns are reasonable, but their solution is not: it would be much better if they simply showed a dialog box, warning the user that executing WebGL poses a security risk, and giving a choice to continue or not - the same way they warn about plugins and downloaded executables.

Some very impressive stuff has been done using WebGL, though: for instance ro.me, Path Tracing (Evan Wallace), Cars (Altered Qualia), Terrain Editor (Rob Chadwick), Traveling Wavefronts (Felix Woitzel), Hartverdrahtet.

## Using WebGL for Fractals

There are already some great tools available for experimenting with WebGL: ShaderToy, GLSLSandbox, WebGL Playground. Their main weakness is that it is difficult to store state information (for instance, if you want a movable camera), since this cannot be done in the shader itself without using weird hacks. So I decided to start out from scratch to get a feeling for WebGL.

WebGL (specification) is a JavaScript API based on OpenGL ES 2.0, a subset of the desktop OpenGL version designed for embedded devices such as cell phones. Being a 'modern' OpenGL implementation, there is no support for fixed pipeline rendering: there is no matrix stack, no default shaders, no immediate mode rendering (you cannot use glBegin(...) - instead you must use vertex buffers). WebGL also misses some of the more advanced features of the desktop OpenGL version, such as 3D textures, multiple render targets, and double precision support. And float texture support is an optional extension.

The first example I made was this Mandelbrot viewer:

It demonstrates how to initialise WebGL and compile shaders, render a full-canvas quad, and process keyboard and mouse events and pass them through uniforms to the fragment shader. Click the image to try out the WebGL demo.

A few programming comments. First, JavaScript: I'm not very fond of JavaScript's type system. The loose typing means that you risk finding bugs later, at run-time, instead of when compiling.
It also means that it can be hard to read third-party code (what kind of parameters are you supposed to provide to a function like 'update(ev, ui)'?). As for numerical types, JavaScript only has the Number type: an IEEE 754 double precision type - no integers! Some browsers also silently ignore errors during run-time, which makes it even harder to find bugs. On the positive side are the quick iteration time, and the Firebug Firefox plugin, which is an extremely powerful tool for debugging web and JavaScript code.

As for the HTML, I still find it difficult to do table-less layout using floating div's and CSS. I'm missing the flexible layout managers that many desktop UI kits provide, which make it easy to align components and control how they scale when resized (but I may be biased towards desktop UIs). Also, as HTML was not designed with UI widgets in mind, you have to use a third-party library to display a simple slider: I chose jQuery UI, which was easy to set up and use.

Finally, the WebGL: the WebGL GLSL shader code is very similar to the desktop GLSL dialect. The biggest difference is the way loops are handled. Only 'for' loops are available, and with a very restricted syntax. It seems the iteration count must be determinable at compilation time (probably because some implementations unroll all loops), which means you can no longer use uniforms to control the loops (you can, however, 'break' out of loops dynamically based on run-time variables). This means that in order to pass the iteration count and number of samples to the Mandelbrot shader, I have to do direct text substitutions in the shader code and recompile.

But my biggest frustration was caused by the ANGLE translation layer. Even for this very simple example, I had several issues with ANGLE - see the notes below. Feel free to use the example as a starting point for further experiments - it is quite simple to modify the 2D shader code.
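The loop restriction can be illustrated with a small sketch (hypothetical names; not taken from the Mandelbrot viewer itself): the 'for' bound must be a compile-time constant, but a uniform can still control the effective iteration count via a dynamic break:

```glsl
// GLSL ES 1.0 loop restriction (illustrative sketch): the loop bound must be
// a constant expression, so a uniform cannot be used directly as the limit...
uniform int Iterations;   // runtime-controlled iteration count

const int MAX_ITER = 256; // compile-time constant bound

vec2 iterate(vec2 z, vec2 c) {
    for (int i = 0; i < MAX_ITER; i++) {
        // ...but breaking out dynamically, based on run-time values, is allowed:
        if (i >= Iterations || dot(z, z) > 4.0) break;
        z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c; // Mandelbrot step
    }
    return z;
}
```

The drawback is that the compiler may still unroll all MAX_ITER iterations, which is why text substitution and recompilation can be the faster option in practice.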
## Notes about ANGLE

A problem with WebGL is poor graphics driver support for OpenGL. Chrome and Firefox have chosen a very radical approach to solve this: on Windows, they convert all WebGL GLSL shader code into DirectX 9 HLSL code through a converter called ANGLE. Their rationale for doing this is that OpenGL 2.0 drivers are not available on all computers. However, several shaders won't run due to the ANGLE translation, and the compilation time can be extremely slow. With respect to drivers, older machines with integrated graphics might be affected, but anything with a less than five-year-old Nvidia, AMD, or Intel HD graphics card should work with OpenGL 2.0. In my experiments above, I ran into a bug that in some cases makes loops with more than 255 iterations fail (I've submitted a bug report).

When debugging ANGLE problems, a good first step is to disable ANGLE and test the shaders. In Chrome, this can be done by starting the executable with the command line argument --use-gl=desktop. You can check your ANGLE version with the URL chrome://gpu-internals/. In Firefox, use the about:config URL, and set webgl.force-enabled=true and webgl.prefer-native-gl=true to disable ANGLE.

It is also possible to get the translated HLSL code using the WEBGL_debug_shaders extension. However, this extension is only available for privileged code, which means Chrome must be started with the command line parameter --enable-privileged-webgl-extensions. After that the HLSL source can be obtained by calling:

```javascript
var hlsl = gl.getExtension("WEBGL_debug_shaders").getTranslatedShaderSource(fragmentShader);
```

I still haven't found a workaround for this earlier Mandelbulb experiment (using GLSLSandbox), which fails with another ANGLE bug:

Click the image to try out the WebGL demo (fails on ANGLE systems).

But I'll try implementing it from scratch to see if I can find the bug.

# Fragmentarium FAQ

Here is a small collection of questions I've been asked from time to time about Fragmentarium.
## Why does Fragmentarium crash?

The most common cause for this is that a drawing operation takes too long to complete. On Windows, there is a two second maximum time limit on jobs executing on the GPU. After that, a GPU watchdog timer will kill the graphics driver and restore it (resulting in an unresponsive, possibly black, display for 5-10 seconds). The host process (here Fragmentarium) will also be shut down by Windows. If this happens, you will get errors like this:

"The NVIDIA OpenGL driver lost connection with the display driver and is unable to continue. The application must close.... Error code: 8"

"Display driver stopped responding and has recovered"

Workarounds: Try lowering the number of ray steps, the number of iterations, or use the preview feature. Notice that for high-resolution renders, the time limit applies to each tile, so it is still possible to do high resolution images.

Another solution is to change the watchdog timer behaviour via the TDR registry keys: the TDR registry keys were not defined in my registry, but I added a DWORD TdrDelay = 30 and a DWORD TdrDdiDelay = 30 key to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers, which sets a 30 second window for GPU calculations. You have to restart Windows to apply these settings. Be advised that changing these settings may render your system completely unresponsive if the GPU crashes.

## Why does Fragmentarium not work on my GPU?

Even though GLSL is a well-defined standard, there are differences between different vendor implementations and different drivers. The computers I use have Nvidia cards, so Fragmentarium is most tested on Nvidia's platform. Typical problems: the ATI compiler is more strict about casting.
For instance, adding an integer to a float results in an error, and not in a warning:

```glsl
float a = b + 3;
```

The ATI compiler also seems to suffer from some loop optimization problems: some of the fractals in Fragmentarium do not work on my ATI card if the number of iterations is not locked (or hard-coded in the fragment code). Inserting a condition into the loop also solves the problem.

The Intel GPU compiler also has some issues: for instance, some operations on literal constants result in errors:

```glsl
vec3 up = normalize(vec3(1.0,0.0,0.0)); // fails on Intel
```

## I get weird errors about disabled widgets?

If you see warnings like:

Unable to find 'FloorNormal' in shader program. Disabling widget.

it does not indicate a problem. For instance, the warning above appears when the EnableFloor checkbox is locked and disabled. In this case, GLSL will optimize the floor code away, and the FloorNormal uniform variable will no longer be part of the compiled program - hence the warning. These warnings can be safely ignored.

## Why do your Fragmentarium images look much nicer than the ones I get in Fragmentarium?

The images I post are always rendered at a higher resolution using the High Resolution Render option. I then downscale the images to a lower resolution. This reduces aliasing and rendering artifacts. Use a painting program with a proper downscaling filter - I use Paint.NET, which seems to work okay. Before rendering in hi-res, use the Tile Preview to zoom in, and adjust the detail level and number of ray steps, so the image looks okay at the Tile Preview resolution.

## Why is Fragmentarium slower than BoxPlorer/Fractals.io/...?

The default ray tracer in Fragmentarium has grown somewhat complex. It is possible to gain speed by locking variables, but this is somewhat tedious. Another solution is to change to another raytracer, e.g.
change

#include "DE-Raytracer.frag"

to either

#include "Fast-Raytracer.frag"

(a faster version of the one above, which will remember all settings) or

#include "Subblue-Raytracer.frag"

(Tom Beddard's raytracer, which uses a different set of parameters).

## Can I use double precision in my shaders?

Most modern graphics cards support double precision numbers in hardware, so in principle, yes, if your card supports it. In practice, it is much more difficult.

First, the Fragmentarium presets and sliders (including camera settings) will only transfer data (uniforms) to the GPU as single-precision floats. This is not the biggest problem, since you might only need double precision for the numbers that accumulate errors. The Qt OpenGL wrappers I use don't support double types, but it would be possible to work around this if needed.

Second, while newer GLSL versions do support double precision numbers (through types such as double, dvec3, dmat3), not all of the built-in functions support them. In particular, there are no trigonometric or exponential functions – no cos(double), exp(double), etc. The available functions are described here.

Third, it might be slow: when Nvidia designed their Fermi architecture, used in their recent graphics cards, they built it so that it should be able to process double precision operations at half the speed of single precision operations (which is optimal, given the doubled size of the numbers). However, they decided that their consumer branch (the Geforce cards) should be artificially limited to run double precision calculations at 1/8 the speed of single precision. Their Tesla line of graphics cards (which shares its architecture with the Geforce branch) is not artificially throttled and will run double precision at half the speed of single precision. As for the AMD/ATI cards, I do not think they have similar limitations, but I'm not sure about this.
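To see why the "numbers that accumulate errors" are the interesting case, here is a small CPU-side Python sketch (nothing Fragmentarium-specific; the helper name is mine): every intermediate result is rounded to single precision via `struct`, and the accumulated drift is compared against the same sum done in double precision.

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.0001 ten thousand times: once with every intermediate
# result rounded to single precision, once in plain double precision.
s32, s64 = 0.0, 0.0
for _ in range(10_000):
    s32 = to_f32(s32 + to_f32(0.0001))
    s64 = s64 + 0.0001

# The exact sum is 1.0; the single-precision version drifts much further.
err32, err64 = abs(s32 - 1.0), abs(s64 - 1.0)
print(err32, err64)
assert err32 > err64
```

The same effect is what limits deep zooms in single-precision GPU renders: each iteration of the fractal formula contributes a small rounding error, and only the accumulating quantities need the extra precision.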
If you really still want to try, you must insert this command at the top of the script:

#extension GL_ARB_gpu_shader_fp64 : enable

(Or use a #version command, but this will likely cause problems with most of the existing examples.)

Finally, what about emulating double precision numbers, instead of using the hardware versions? While this sounds very slow, it is probably not much slower than the throttled implementation. The downside is that GLSL does not support operator overloading: there is no syntactically nice way to implement such functionality. Instead of just changing your data types, you must convert all code from e.g.

A = B*C+D;

to

A = dplus(dmul(B,C),D);

If you are still interested, here is a great introduction to emulating doubles in GLSL.

## How do I report errors, so that it is easiest for you to correct them?

(Well, actually nobody asks this question.) If you find an error, please report the following:

– Operating system, and the graphics card you are using
– The version of Fragmentarium, and whether you built it yourself
– A reproducible description of the steps that caused the error (if possible)

You may mail errors to me at mikael (at) hvidtfeldts.net.

# Optimizing GLSL Code

By making selected variables constant at compile time, some 3D fractals render more than four times faster. Support for easily locking variables has been added to Fragmentarium.

Some time ago, I became aware that the raytracer in Fragmentarium was somewhat slower than both Fractal Lab and Boxplorer for similar systems – this was somewhat puzzling, since the DE raycasting technique is pretty much the same. After a bit of investigation, I realized that my standard raytracer had grown slower and slower as new features had been added (e.g. reflections, hard shadows, and floor planes) – even when the features were turned off! One way to speed up GLSL code is to mark some variables constant at compile time. This way the compiler may optimize code (e.g.
unroll loops) and remove unused code (e.g. if hard shadows are disabled). The drawback is that changing these constant variables requires the GLSL code to be compiled again.

It turned out that this does have a great impact on some systems. For instance, for 'Dodecahedron.frag', take a look at the following render times:

No constants: 1.4 fps (1.0x)
Constant rotation matrices: 3.4 fps (2.4x)
Constant rotation matrices + Anti-alias + DetailAO: 5.6 fps (4.0x)
All 38 parameters (except camera): 6.1 fps (4.4x)

The fractal rotation matrices are the matrices used inside the DE loop. Without the constant declarations, they must be calculated from scratch for each pixel, even though they are identical for all pixels. Doing the calculation at compile time gives a notable speedup of 2.4x. (Notice that another approach would be to calculate such frame constants in the vertex shader and pass them to the pixel shader as 'varying' variables. But according to this post, this is – surprisingly – not very effective.)

The next speedup – from the 'Anti-alias' and 'DetailAO' variables – is more subtle. It is difficult to see from the code why these two variables should have such impact. And in fact, it turns out that combinations of other variables will result in the same speedup. But these speedups are not additive! Even if you make all variables constant, the framerate only increases slightly above 5.6 fps. It is not clear why this happens, but I have a guess: it seems that when the complexity drops below a certain threshold, the shader code execution speed increases sharply. My guess is that for complex code, the shader runs out of free registers and needs to perform calculations using a slower kind of memory storage. Interestingly, the 'iterations' variable offers no speedup – even though the compiler must be able to unroll the principal DE loop, there is no measurable improvement from doing it.
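Conceptually, "locking" amounts to substituting a constant into the shader source before compilation, so the compiler can fold and prune. A minimal Python sketch of that source-level substitution (a hypothetical helper of my own, not Fragmentarium's actual implementation, which also handles other types and slider annotations):

```python
import re

def lock_variable(glsl_source, name, value):
    """Replace 'uniform float <name>;' with a compile-time constant.

    Illustration only: turning a uniform into a 'const' lets the GLSL
    compiler constant-fold expressions and remove dead branches, at the
    cost of requiring a recompile whenever the value changes.
    """
    pattern = rf"uniform\s+float\s+{re.escape(name)}\s*;"
    replacement = f"const float {name} = {value};"
    return re.sub(pattern, replacement, glsl_source)

src = "uniform float Scale;\nvec3 f(vec3 p) { return p * Scale; }"
locked = lock_variable(src, "Scale", "2.0")
print(locked.splitlines()[0])  # const float Scale = 2.0;
```

This also makes the trade-off explicit: the constant value is baked into the compiled program, which is exactly why changing a locked variable requires pressing 'build' again.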
Finally, the compile time is also greatly reduced when making variables constant. For the 'Dodecahedron.frag' code, the compile time is ~2000 ms with no constants. By making most variables constant, the compile time is lowered to around ~335 ms on my system.

## Locking in Fragmentarium

In Fragmentarium, variables can be locked (made compile-time constant) by clicking the padlock next to them. Locked variables appear with a yellow padlock next to them. When a variable is locked, changes to it will only take effect when the system is compiled (by pressing 'build'). Locked variables which have been changed will appear with a yellow background until the system is compiled and the changes take effect. Notice that whole parameter groups may be locked, using the buttons at the bottom.

The locking interface – click to enlarge. The 'AntiAlias' and 'DetailAO' variables are locked. The 'DetailAO' has been changed, but the change is not applied yet (the yellow background). The 'BoundingSphere' variable has a grey background because it has keyboard focus: its value can be fine-tuned using the arrow keys (up/down controls step size, left/right changes value).

In a fragment, a user variable can be marked as locked by default by adding a 'Locked' keyword to it:

uniform float Scale; slider[-5.00,2.0,4.00] Locked

Some variables cannot be locked – e.g. the camera settings. It is possible to mark such variables with the 'NotLockable' keyword:

uniform vec3 Eye; slider[(-50,-50,-50),(0,0,-10),(50,50,50)] NotLockable

The same goes for presets. Here the locking mode can be stated, if it is different from the default locking mode:

#preset SomeName
AntiAlias = 1 NotLocked
Detail = -2.81064 Locked
Offset = 1,1,1
...
#endpreset

Locking will be part of Fragmentarium v0.9, which will be released soon.

# Plotting High-frequency Functions Using a GPU
A slight digression from the world of fractals and generative art: this post is about drawing high-quality graphs of high-frequency functions.

Yesterday, I needed to draw a few graphs of some simple functions. I started out by using Iñigo Quilez's nice little GraphToy, but my functions could not be expressed in a single line. So I decided to implement a graph plotter example in Fragmentarium instead.

Plotting a graph using a GLSL shader is not an obvious task – you have to frame the problem in a way such that each pixel can be processed individually. This is in contrast to the standard way of drawing graphs, where you choose a uniform set of values for the x-axis and draw the lines connecting the points in the (x,f(x)) set.

So how do you do it for each pixel individually? The first thing to realize is that it is easy to determine whether a pixel is above or below the graph – this can be done by checking whether y<f(x) or y>f(x). The tricky part is that we only want to draw the boundary – the curve that separates the regions above and below the graph. So how do we determine the boundary?

After having tried a few different approaches, I came up with the following simple edge detection procedure: for each pixel, choose a number of samples in a region around the pixel center. Then count how many samples are above, and how many samples are below, the curve. If all samples are above, or all samples are below, the pixel is not on the boundary. However, if there are samples both above and below, the boundary must be passing through the pixel. The whole idea can be expressed in a few lines of code:

for (float i = 0.0; i < samples; i++) {
  for (float j = 0.0; j < samples; j++) {
    float f = function(pos.x + i*step.x) - (pos.y + j*step.y);
    count += (f > 0.) ? 1 : -1;
  }
}
// base color on abs(count)/(samples*samples)

It should be noted that the sampling can be improved by adding a small amount of jittering (random offsets) to the positions – this reduces the aliasing at the cost of adding a small amount of noise.

## High-frequency functions and aliasing

So why is this better than the common 'connecting lines' approach? Because this approach deals with the high-frequency information much better. Consider the function f(x)=sin(x*x*x)*sin(x). Here is a plot from GraphToy:

Notice how the graph near the red arrows seems to be slowly varying. This is not the true behavior of the function, but an artifact of the way we sample it. Our limited resolution cannot capture the high-frequency components, which results in aliasing. Whenever you do anything media-related on a computer, you will at some point run into problems with aliasing: whether you are doing sound synthesis, image manipulation, 3D rendering, or even drawing a straight line. However, using the pixel shader approach, aliasing is much easier to avoid. Here is a Fragmentarium plot of the same function:

Even though it may seem backwards to evaluate the function for all pixels on the screen, it makes it possible to tame the aliasing, and even on a modest GPU the procedure is fast enough for realtime interaction. The example is included on GitHub under Examples/2D Systems/GraphPlotter.frag.

# Distance Estimated 3D Fractals (Part I)

During the last two years, the 3D fractal field has undergone a small revolution: the Mandelbulb (2009), the Mandelbox (2010), the Kaleidoscopic IFSs (2010), and a myriad of equally or even more interesting hybrid systems, such as Spudsville (2010) or the Kleinian systems (2011). All of these systems were made possible using a technique known as Distance Estimation, and they all originate from the Fractal Forums community.
## Overview of the posts

Part I briefly introduces the history of distance estimated fractals, and discusses how a distance estimator can be used for ray marching.

Part II discusses how to find surface normals, and how to light and color fractals.

Part III discusses how to actually create a distance estimator, starting with distance fields for simple geometric objects, moving on to instancing and combining fields (unions, intersections, and differences), and finally covering folding and conformal transformations, ending up with a simple fractal distance estimator.

Part IV discusses the holy grail: the search for a generalization of the 2D (complex) Mandelbrot set, including quaternions and other hypercomplex numbers. A running derivative for quadratic systems is introduced.

Part V continues the discussion of the Mandelbulb. Different approaches for constructing a running derivative are discussed: scalar derivatives, Jacobian derivatives, analytical solutions, and the use of different potentials to estimate the distance.

Part VI is about the Mandelbox fractal. It contains a more detailed discussion of conformal transformations, and of how a scalar running derivative is sufficient when working with these kinds of systems.

Part VII discusses how dual numbers and automatic differentiation may be used to construct a distance estimator.

Part VIII is about hybrid fractals, geometric orbit traps, and various other systems, and links to relevant software and resources.

### The background

The first paper to introduce Distance Estimated 3D fractals was written by Hart and others in 1989: Ray tracing deterministic 3-D fractals. In this paper Hart describes how Distance Estimation may be used to render a Quaternion Julia 3D fractal. The paper is very well written and definitely worth spending some hours on (be sure to take a look at John Hart's other papers as well).
Given the age of Hart's paper, it is striking that it is not until the last couple of years that the field of distance estimated 3D fractals has exploded. There have been some important milestones, such as Keenan Crane's GPU implementation (2004) and Iñigo Quilez's 4K demoscene implementation (2007), but I'm not aware of other fractal systems being explored using Distance Estimation before the advent of the Mandelbulb.

### Raymarching

Classic raytracing shoots one (or more) rays per pixel and calculates where the rays intersect the geometry in the scene. Normally the geometry is described by a set of primitives, like triangles or spheres, and some kind of spatial acceleration structure is used to quickly identify which primitives intersect the rays. Distance Estimation, on the other hand, is a ray marching technique. Instead of calculating the exact intersection between the camera ray and the geometry, you proceed in small steps along the ray and check how close you are to the object you are rendering. When you are closer than a certain threshold, you stop. In order to do this, you must have a function that tells you how close you are to the object: a Distance Estimator. The value of the distance estimator tells you how large a step you are allowed to march along the ray, since you are guaranteed not to hit anything within this radius.

Schematics of ray marching using a distance estimator.

The code below shows how to raymarch a system with a distance estimator:

float trace(vec3 from, vec3 direction) {
  float totalDistance = 0.0;
  int steps;
  for (steps = 0; steps < MaximumRaySteps; steps++) {
    vec3 p = from + totalDistance * direction;
    float distance = DistanceEstimator(p);
    totalDistance += distance;
    if (distance < MinimumDistance) break;
  }
  return 1.0 - float(steps)/float(MaximumRaySteps);
}

Here we simply march the ray according to the distance estimator and return a greyscale value based on the number of steps taken before hitting something.
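The same marching loop is easy to try out on the CPU. A Python sketch with an exact distance estimator for a sphere (names and parameter values are mine, not from the post): the first step jumps almost all the way to the surface, which is exactly the "guaranteed unbounding sphere" property described above.

```python
import math

def sphere_de(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Exact distance estimator for a sphere: distance to center minus radius."""
    return math.dist(p, center) - radius

def trace(origin, direction, de, min_dist=1e-4, max_steps=100):
    """March along the ray, stepping by the estimated distance each time.

    Returns (total_distance, steps) on a hit, or (None, max_steps) on a miss.
    """
    total = 0.0
    for step in range(max_steps):
        p = tuple(o + total * d for o, d in zip(origin, direction))
        dist = de(p)
        if dist < min_dist:
            return total, step  # close enough: count this as a hit
        total += dist           # safe step: nothing can be hit within 'dist'
    return None, max_steps

# Ray from the origin along +z hits the unit sphere centered at (0,0,5)
# at z = 4, and the exact estimator gets there in a single step.
hit, steps = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_de)
print(hit, steps)
```

For fractals the estimator is only a lower bound rather than exact, so many small steps are taken near the surface; that is why the step count itself makes a useful (ambient-occlusion-like) shading value.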
This will produce images like this one (where I used a distance estimator for a Mandelbulb):

It is interesting that even though we have not specified any coloring or lighting models, coloring by the number of steps emphasizes the detail of the 3D structure – in fact, this is a simple and very cheap form of the Ambient Occlusion soft lighting often used in 3D renders.

### Parallelization

Another interesting observation is that these raymarchers are trivial to parallelize, since each pixel can be calculated independently and there is no need to access complex shared memory structures like the acceleration structure used in classic raytracing. This means that these kinds of systems are ideal candidates for implementing on a GPU. In fact, the only issue is that most GPUs still only support single-precision floating-point numbers, which leads to numerical inaccuracies sooner than for CPU implementations. However, the newest generation of GPUs supports double precision, and some APIs (such as OpenCL and Pixel Bender) are heterogeneous, meaning the same code can be executed on both CPU and GPU – making it possible to create interactive previews on the GPU and render final images in double precision on the CPU.

### Estimating the distance

Now, I still haven't talked about how we obtain these Distance Estimators, and it is by no means obvious that such functions should exist at all. But it is possible to understand them intuitively, by noting that systems such as the Mandelbulb and Mandelbox are escape-time fractals: we iterate a function for each point in space, and follow the orbit to see whether the sequence of points diverges within a maximum number of iterations, or whether the sequence stays inside a fixed escape radius.
Now, by comparing the escape-time length (r) to its spatial derivative (dr), we might get an estimate of how far we can move along the ray before the escape-time length drops below the escape radius, that is:

$$DE = \frac{r - EscapeRadius}{dr}$$

This is a hand-waving estimate – the derivative might fluctuate wildly and get larger than our initial value, so a more rigorous approach is needed to find a proper distance estimator. I'll have a lot more to say about distance estimators in the later posts, so for now we will just accept that these functions exist and can be obtained for quite a diverse class of systems, and that they are often constructed by comparing the escape-time length with some approximation of its derivative.

It should also be noted that this ray marching approach can be used for any kind of system where you can find a lower bound on the distance to the closest geometry for all points in space. Iñigo Quilez has used this in his impressive procedural SliseSix demo, and has written an excellent introduction, which covers many topics also relevant for Distance Estimation of 3D fractals.

This concludes the first part of this series of blog entries. Part II discusses lighting and coloring of fractals.
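To make the "escape-time length plus running derivative" idea concrete, here is a CPU sketch for the plain 2D Mandelbrot set. It uses the classic exterior estimate DE = 0.5·r·log(r)/dr (a refined cousin of the hand-waving formula above; this post itself does not derive it), where dr is accumulated alongside the orbit via dz → 2·z·dz + 1. Function and parameter names are mine.

```python
import math

def mandelbrot_de(cx, cy, max_iter=100, escape_radius=1000.0):
    """Distance estimate for the 2D Mandelbrot set at c = cx + i*cy.

    Iterates z -> z^2 + c together with the running derivative
    dz -> 2*z*dz + 1, then applies DE = 0.5 * r * log(r) / dr.
    Returns 0.0 for points that never escape (treated as inside).
    """
    zx = zy = 0.0
    dzx = dzy = 0.0
    for _ in range(max_iter):
        # Derivative first (it uses the current z), then the orbit step.
        zx, zy, dzx, dzy = (
            zx * zx - zy * zy + cx,
            2.0 * zx * zy + cy,
            2.0 * (zx * dzx - zy * dzy) + 1.0,
            2.0 * (zx * dzy + zy * dzx),
        )
        if zx * zx + zy * zy > escape_radius ** 2:
            r = math.hypot(zx, zy)
            dr = math.hypot(dzx, dzy)
            return 0.5 * r * math.log(r) / dr
    return 0.0

print(mandelbrot_de(2.0, 0.0))  # positive: c = 2 lies outside the set
```

The estimate is a lower bound up to a constant factor, which is all the ray marcher needs: it may take more steps than strictly necessary, but it never overshoots the set.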
# Boyle's law

If an ideal gas is allowed to slowly expand to 4 times its initial volume, what is the ratio of the final pressure to the initial pressure?
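Since the slow expansion keeps the gas at constant temperature (the usual reading of "slowly" here), Boyle's law P₁V₁ = P₂V₂ gives the ratio directly. A one-line check (the helper name is mine):

```python
def pressure_ratio(volume_ratio):
    """Boyle's law at constant temperature: P1*V1 = P2*V2,
    so P_final / P_initial = V_initial / V_final."""
    return 1.0 / volume_ratio

print(pressure_ratio(4))  # quadrupling the volume quarters the pressure: 0.25
```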
# Is ChemiDay a reliable enough source (for inorganic reactions) to be cited on our site?

We have quite a lot of upvoted posts — 77 to be exact — using a website called ChemiDay as their reference material. However, from the looks of it, the website doesn't appear to be a reliable reference.

Suppose I have a question: "What are the products of the reaction between fluorine and ammonia?" Now, this is the related ChemiDay reaction. The entire webpage only states the reaction and gives a very short description:

### Ammonia react with fluorine

$$\ce{2NH3 + 6F2 → 6HF + 2NF3}$$

Ammonia react with fluorine to produce hydrogen fluoride and nitrogen(III) fluoride. The reaction takes place at the low temperature.

Most of the reactions in their database of ~17000 reactions (claimed) are like the one I linked to above. The question is: would we want the answer to this question to be backed up by such a ChemiDay reference? I would instead expect the answer to be backed by some reputed inorganic chemistry book (similar to March for organic), or at least a research paper explaining the detailed experimental steps. For the above question, this research paper is apt. It clearly highlights major and minor products, and the different reactions at different yields of reactants.

I know we can't expect every answerer to be able to cite research papers or access good books, but does that mean we should sacrifice factual detail and correctness for easy-to-look-up and (probably) wrong references?

So, should we officially disallow ChemiDay as a reference source (and leave a comment on such posts)? Or is there some other action we would want to take on such citations (for past posts and for future posts)?

• Wikipedia has a "better citation needed" for this kind of thing, IIRC. Jun 7, 2018 at 11:47
• @orthocresol Definitely $^\text{[citation needed]}$; not sure about $^\text{[better citation needed]}$.
Jun 7, 2018 at 13:58 • @hBy2Py apparently, it’s “better source needed”, which makes a bit more sense: en.wikipedia.org/wiki/Template:Better_source Jun 11, 2018 at 8:25 ## 1 Answer No, it should not be considered a reliable source. From a quick once-over, it appears to be trying to be, among other things, something of an "encyclopedia of reactions." A true encyclopedia always cites its sources. As you note, citations therein are ... scarce. So no, as it is, it's no good.
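One cheap sanity check a reader can apply to a bare equation like the quoted one is atom balance. Note that passing it says nothing about whether the reaction actually occurs or which products dominate — which is exactly why a real citation is still needed. A Python sketch, with the atom counts written out by hand:

```python
from collections import Counter

# Atom counts per molecule for the quoted reaction 2NH3 + 6F2 -> 6HF + 2NF3.
NH3 = Counter(N=1, H=3)
F2 = Counter(F=2)
HF = Counter(H=1, F=1)
NF3 = Counter(N=1, F=3)

def side(*terms):
    """Total atom counts for (coefficient, molecule) pairs on one side."""
    total = Counter()
    for coeff, molecule in terms:
        for element, n in molecule.items():
            total[element] += coeff * n
    return total

lhs = side((2, NH3), (6, F2))
rhs = side((6, HF), (2, NF3))
print(lhs == rhs)  # True: the equation is at least atom-balanced
```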
#### Vol. 13, No. 3, 2019

Algebraic independence for values of integral curves

### Tiago J. Fonseca

Vol. 13 (2019), No. 3, 643–694

ISSN: 1944-7833 (e-only)
ISSN: 1937-0652 (print)

##### Abstract

We prove a transcendence theorem concerning values of holomorphic maps from a disk to a quasiprojective variety over $\overline{\mathbb{Q}}$ that are integral curves of some algebraic vector field (defined over $\overline{\mathbb{Q}}$). These maps are required to satisfy some integrality property, besides a growth condition and a strong form of Zariski-density that are natural for integral curves of algebraic vector fields. This result generalizes a theorem of Nesterenko concerning algebraic independence of values of the Eisenstein series $E_2$, $E_4$, $E_6$. The main technical improvement in our approach is the replacement of a rather restrictive hypothesis of polynomial growth on Taylor coefficients by a geometric notion of moderate growth formulated in terms of value distribution theory.
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

Published by Pearson

# Chapter 10 - Exponents and Radicals - 10.6 Solving Radical Equations - 10.6 Exercise Set: 5

TRUE

#### Work Step by Step

Squaring both sides of the given equation, $t=7$, gives $t^2=49$. Hence the given statement is TRUE.
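The converse direction is where squaring gets dangerous: it can introduce extraneous roots, which is why this section has you check every candidate in the original equation. A small Python sketch with the equation √x = x − 2 (my own illustrative example, not from this exercise set):

```python
import math

def solves(x):
    """Check a candidate in the ORIGINAL equation sqrt(x) = x - 2."""
    return x >= 0 and math.isclose(math.sqrt(x), x - 2)

# Squaring both sides gives x = x**2 - 4*x + 4, i.e. x**2 - 5*x + 4 = 0,
# whose roots are 1 and 4.
candidates = [1, 4]
valid = [x for x in candidates if solves(x)]
print(valid)  # [4] -- squaring introduced the extraneous root x = 1
```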
# (194) Thu 8 Sep 94 11:09
By: Martin Goldberg
To: ALL
Re: old tyme religion
------------------------------------------------------------------------------

David Worrell is still with us through the internet. I get notes from him on occasion on other fundy bashing echoes in other nets. By logical deduction, they're all the same. Anyway, he has this to offer:

From: IN%"LIBRARY%EMU@utkvx.utk.edu"
To: IN%"goldbe02@UTSW.SWMED.EDU"

1) Let us worship Aphrodite,
1) Though we hear she's rather flighty
1) Still she looks great in a nightie
1) And that's good enough for me.

2) We will pray to Father Zeus
2) In his temple we'll hang loose
2) Eating roast beef au jus,
2) And that's good enough for me.

3) Let us worship like the Druids
3) Drinking strange fermented fluids
3) Running naked through the wo-ods,
3) And that's good enough for me.

4) My roommate worships Buddha.
4) There is no idol cuter.
4) Comes in copper, bronze, and pewter,
4) And that's good enough for me.

5) We will worship Sun Myung Moon
5) Though we know he is a goon.
5) All our money he'll have soon.
5) And that's good enough for me.

6) We will go down to the temple,
6) Sit on mats woven of hemp(le),
6) Try to set a good exemple [sic],
6) And that's good enough for me.

7) We will finally pray to Jesus,
7) From our sins we hope he frees us,
7) Eternal life he guarantees us,
7) And that's good enough for me.

8) Let us pray to Zarathustra
8) Let us pray just like we useta
8) I'm a Zarathustra boosta
8) It's good enough for me.

9) Let us pray like the Egyptians
9) Build pyramids to put our crypts in
9) Fill our subways with inscriptions
9) It's good enough for me.
10) If it's good enough for Dagon
10) That conservative old pagan
10) Who still votes for Ronald Reagan
10) It's good enough for me

11) We will have a mighty orgy,
11) In the honor of Astarte
11) It will be one helluva party
11) And it's good enough for me.

12) We will sacrifice to Yuggoth
12) Carve the signs of Azathoth
12) Burn a candle for Yog-Sothoth
12) And the Goat with a thousand young.

13) We will all be saved by Mithras
13) We will all be saved by Mithras
13) Slay the bull and play the zithras
13) On that resurrection day.

14) We will all bow down to Enlil
14) We will all bow down to Enlil
14) Pass your cup and get a refill
14) With bold Gilgamesh the brave.

15) It was good enough for Loki
15) It was good enough for Loki
15) He thinks Thor's a little hokey
15) And he's good enough for me.

16) We will all go to Nirvana
16) So be sure to mind your manners
16) Make a left turn at Savannah
16) And we'll see the Promised Land.

17) It was good for old Jehovah
17) He had a son who was a nova
17) Hey there, Mithras move on ova'
17) A new resurrection day.

18) Where's the gong gang? I can't find it
18) I think Northwoods is behind it
18) For they've always been cymbal minded
18) Yet they're good enough for me.

19) I hear Valkyries a-comin
19) In the air their song is coming
19) They forgot the words they're humming
19) Yet they're good enough for me.

20) There are people into voodoo
20) Africa has raised a whoodo
20) Just one little doll will do you
20) And it's good enough for me.

21) It was good for Thor and Odin
21) Grab an axe and get your woad on
21) 'Til the Giants went and rode in
21) And it's good enough for me.

22) If your rising sign is Aries
22) You'll be taken by the faeries
22) Meet the Buddha in Benares
22) Where he'll hit you with a pie.

23) There will be a lot of lovin'
23) When we're gathered in our coven.
23) Quit your pushin' and your shovin'
23) So there'll be room enough for me.
24) There are followers of Conan.
24) And you'll never hear 'em groaning
24) Followed Crom up to his throne(in)
24) And it's good enough for me

25) It could be that you're a Parsi.
25) It could be that you're a Parsi.
25) Walk on by her; you'll get in free
25) And you're good enough for me.

26) Azathoth is in his Chaos.
26) Azathoth is in his Chaos.
26) Now if only he don't sway us,
26) Then that's good enough for me.

27) Just like Carlos Castaneda,
27) Just like Carlos Castaneda,
27) It'll get you sooner or later
27) And it's good enough for me.

28) We will venerate Bubastes.
28) We will venerate Bubastes.
28) If you like us then just ask us,
28) And that's good enough for me.

29) We will all sing Hari Krishna.
29) We will all sing Hari Krishna.
29) It's not mentioned in the mishna
29) But that's good enough for me.

30) We will read from the Cabala.
30) Quote the Tree of Life mandala
30) It won't get you in Valhalla,
30) Yet it's good enough for me.

31) If you think that you'll be sa-ved,
31) If you think that you'll be sa-ved,
31) If you follow Mogan David,
31) You're not good enough for me.

32) It's the opera written for us.
32) We will all join in the chorus.
32) It's the opera about Boris
32) Which is Godunov for me.

33) There is room enough in Hades
33) For lots of criminals and shadies
33) And disreputable ladies,
33) And they're good enough for me.

34) To the tune of Handel's "Largo"
34) We will hymn the gods of cargo
34) 'Til they slap on an embargo
34) And that's good enough for me.

35) Praise to Popocatepetl
35) Just a tiny cigarette'll
35) Put him in terrific fettle
35) So he's good enough for me.

36) We will drive up to Valhalla
36) riding Beetles, not Impalas
36) Singing "Deutschland Uber Alles"
36) And that's good enough for me.

37) We will all bow to Hephaestus
37) As a blacksmith he will test us
37) 'Cause his balls are pure asbestos
37) So he's good enough for me.
38) We will sing of Iluvatar,
38) Who sent the Valar 'cross the water
38) To lead Morgoth to the slaughter
38) And that's just fine with me.

39) We will sing of Foul the Render,
39) Who's got Drool Rockworm on a bender
39) In his cave in Kiril Threndor
39) They're both too much for me.

40) We will sing the Jug of Issek,
40) And of Fafhrd his chief mystic,
40) Though to thieving Mouser will stick,
40) That's still good enough for me.

41) Of Lord Shardik you must beware;
41) To please him you must swear;
41) 'Cause enraged he's a real Bear,
41) And that's good enough for me.

42) You can dance and wave the thyrsos
42) And sing lots of rowdy verses
42) 'Til the neighbors holler curses,
42) And that's good enough for me.

43) Let us celebrate Jehovah
43) Who created us ab ova
43) He'll be on tonight on Nova
43) 'Cause he's good enough for me.

44) Montezuma used to start out
44) He would rip a certain part out
44) You would really eat your heart out
44) And he's good enough for me.

45) We will go to worship Zeus
45) Though his morals are quite loose
45) He gave Leda quite a goose
45) And he's good enough for me.

46) It was good enough for Loki
46) For he is the god of Chaos
46) And this verse doesn't even rhyme, or scan.
46) Fuck you! It's good enough for me.

47) Let us sing to old Discordia
47) 'Cause it's sure she's never bored ya
47) And if she's good enough for ya
47) Then she's good enough for me.

48) We will go to worship Venus
48) Though we hear she's kind of mean(us)
48) She might bite you on the --- elbow
48) But she's good enough for me.

49) Well, we went to worship Venus
49) And, by god, you should have seen us
49) 'Cause the clinic had to screen us
49) But she's good enough for me.

50) We will go and worship Isis
50) She will help us in a crisis
50) And she'll never raise her prices
50) So she's good enough for me.
51) We will sing a song of Mithras
51) Let us sing a song of Mithras
51) But there is no rhyme for Mithras!
51) Still he's good enough for me.

52) We will go to worship Kali
52) Hugging her is quite a folly
52) She'd be quite an armful, golly!
52) And she's good enough for me.

53) We will all bow down to Allah,
53) For he gave his loyal follow-
53) Ers the mighty petro-dollah,
53) And that's good enough for me.

54) Let us sing to Lord Cthulhu
54) Don't let Lovecraft try to fool you
54) Or the Elder Gods WILL rule you
54) And that's good enough for me.

55) Let us watch Ka.ka.pa.ull
55) Frolic in her swimming pool
55) Subjecting chaos to her rule
55) And that's all right with me

56) Let's all listen up to Jesus
56) He says rich folks like old Croesus
56) Will be damned until Hell freezes
56) And that don't sound good to me.

57) Let us do our thing for Eris
57) Goddess of the discord there is
57) Apple's golden, it's not ferrous
57) And that's good enough for me

58) Of the Old Ones, none is vaster
58) Even Cthulhu's not his master
58) I refer to the unspeakable ____ *
58) and that's good enough for me
58) (* Well, do YOU want to say it?)

59) Let us worship old Jehovah
59) All you other gods move ovah
59) Cause the one God's takin' over
59) And it's good enough for me

60) Let us sing for Brooharia
60) Though the blood's a lot less cleaner
60) It's not Christian Santaria
60) So it's good enough for me

61) Timmy Leary we will sing to
61) And the things that he was into
61) (Well, at least it wasn't Shinto)
61) And that's good enough for me.
62) Let us sing a praise to Loki
    The Norse God of Fire and Chaos
    Which is why this verse doesn't rhyme or scan
    But it's good enough for me

63) It was good for the object
    Of the Manhattan Project
    It was good for Enrico
    So it's good enough Fermi

64) Father Odin we will follow
    And in fighting we will wallow
    'Til we end up in Valhalla
    And that's good enough for me.

65) Hari Krishna he would laugh on
    To see me dressed in saffron
    With my hair that's only a half-on
    And that's good enough for me.

66) I will rise at early morning
    When my Lord gives me the warning
    That the Solar Age is dawning
    And that's good enough for me.

67) When old Quetzalcoatl
    Found a virgin he could throttle
    He put her heart in a bottle
    And that's good enough for me.

68) Azathoth is in his chaos
    We know he's a really big boss
    Now if only he don't slay us
    Then that's good enough for me.

69) Hari Krishna he must laugh on
    To see me dressed in saffron
    With my hair that's only half on
    And that's good enough for me

70) If you're really into dancing
    And want to try some trancing
    Then the voodoo gods are prancing
    And that's good enough for me

71) We will sacrifice to Yuggoth
    Light a candle for Yog-Sothoth
    If we're good we'll send a Shoggoth
    And that's good enough for me.

72) It was good enough for Loki
    Where he goes, it sure gets smokey
    He thinks Thor's a little hokey
    And he's good enough for me

73) Don't neglect that shrine of Zeus's
    Though he's lost his vital juices
    Still the old boy has his uses
    And he's good enough for me.

74) There's one thing I do know
    Zeus's favorite is Juno
    She's the best at, well, you know
    And she's good enough for me.

75) When we worship Bacchus
    The ethanol will sock us
    We'll all get good and raucous
    And that's good enough for me.
76) We will venerate old Bacchus
    Drinking Beer and eating Tacos
    'Til you've tried it please don't knock us
    'Cause it's good enough for me.

77) Let's all drink to Dionysus
    His wine and women beyond prices
    He made a Maenad out of my Sis
    And that's good enough for me

78) Let us dance with Dionysus
    And get drunk on wine and spices
    The Christians call them vices
    But they're good enough for me

79) It was good enough for Cupid
    And the tricks to which he sto-oped
    Though his wings are kinda stupid
    But he's good enough for me

80) No one wrote a verse for Buddha
    Though I think they really could'a
    And I really think they should'a
    'Cause he's good enough for me

81) There are some who practice Shinto
    There's no telling what they're into
    Though I guess we could begin to
    But that's good enough for me

82) Warriors for Allah
    Are sure to have a gala
    Time in old Valhalla
    And that's good enough for me

83) Any time that I start hearin'
    "Jesus loves you" I start leerin'
    May-be so, but not like Brian
    And he's good enough for me

84) And though J.C.'s into fish too
    He's an avatar of Vishnu
    So he's welcome here too
    And that's good enough for me

85) And for those who follow Cthulhu
    We've really got a lulu
    Drop a bomb on Honolulu
    'Cause that's good enough for you

86) We know it's good enough for Hastur
    He's a mighty kinky master
    When you pray he goes much faster

--- msgedsq 2.1
 * Origin: Meatloaf in Hell (1:124/4115.221)

E-Mail Fredric L. Rice / The Skeptic Tank
# Fibre of a local homeomorphism can be covered by disjoint open sets. Let $f\colon X\rightarrow Y$ be an open local homeomorphism and $y\in Y$. Do there exist pairwise disjoint open neighborhoods $U_x$ for $x\in f^{-1}(y)$? If not, what would be mild topological restrictions to $X,Y,f$ to make this true? It's easy if $X$ is compact Hausdorff since the fiber will always be discrete. • Note that $X$ is $T_1$ iff $Y$ is $T_1$: If $X$ is $T_1$ and $f(x),f(y)$ are points of $Y$, then by local injectivity, $x$ has a neighborhood $U$ containing at most one point from the fiber of $y$, and we can shrink $U$ such that it does not contain that point any more. Now $f(U)$ is a neighborhood of $f(x)$ disjoint from $f(y)$. For the other direction, if $x$ and $y$ are points in $X$, and if $f(x)\ne f(y)$, then a neighborhood of $f(x)$ disjoint from $f(y)$ pulls back to such a neighborhood of $x$. And if $f(x)=f(y)$, then local injectivity gives a neighborhood of $x$ not containing $y$. – Stefan Hamcke Jun 30 '15 at 16:33 • One sufficient condition for the existence of disjoint nbh of the points in $F=f^{-1}(y)$ would be $X$ being $T_3$ and fibers being countable. In that case, $Y$ is $T_1$, so $y$ is closed, implying that any subset of $F$ is closed in $X$. Assume by induction we have found pairwise disjoint neighborhoods $U_1,\dots, U_n, U^{n+1}$ of the first $n$ points in $F=\{x_1,x_2,\dots\}$ and of $\{x_{n+1},\dots\}$, respectively. Now choose a nbh $U_{n+1}$ of $x_{n+1}$ within $U^{n+1}$ and disjoint from a neighborhood $U^{n+2}$ of $\{x_{n+2},\dots\}$. This way, we get pairwise disjoint neighborhoods. – Stefan Hamcke Jun 30 '15 at 16:35 • I don't know if this suits you since the conditions are not particularly weak. – Stefan Hamcke Jun 30 '15 at 16:36 • Do you have a counterexample where it fails? 
– Heiko Jun 30 '15 at 21:05 • Only one which is not Hausdorff: Let $X=\{a,b,c\}$ with the particular point topology of $b$, and let $Y=\{d,e\}$ with the particular point topology of $e$ (this is also called the Sierpinski space). The map $f:X\to Y$ sending $a$ and $c$ to $d$, and $b\mapsto e$, is a local homeomorphism, but $a$ and $c$ cannot be separated by neighborhoods. – Stefan Hamcke Jun 30 '15 at 23:41
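The finite counterexample in the last comment is small enough to verify exhaustively. The following sketch (my own addition, not part of the thread; all names are mine) encodes both topologies and confirms that $f$ is a continuous open local homeomorphism while $a$ and $c$ admit no disjoint open neighborhoods:

```python
from itertools import combinations

# X = {a,b,c} with the particular point topology at b,
# Y = {d,e} the Sierpinski space, f(a) = f(c) = d, f(b) = e.

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X, Y = frozenset("abc"), frozenset("de")
TX = {U for U in powerset(X) if "b" in U or not U}  # opens contain b, plus the empty set
TY = {V for V in powerset(Y) if "e" in V or not V}  # Sierpinski topology
f = {"a": "d", "b": "e", "c": "d"}

image = lambda S: frozenset(f[x] for x in S)
preimage = lambda S: frozenset(x for x in X if f[x] in S)

assert all(preimage(V) in TX for V in TY)  # f is continuous
assert all(image(U) in TY for U in TX)     # f is an open map

# local homeomorphism: every point lies in an open U on which f is injective;
# for finite spaces, injective + continuous + open makes f|U a homeomorphism
# onto the open set f(U)
assert all(any(x in U and len(image(U)) == len(U) for U in TX) for x in X)

# ...yet the fibre points a and c over d cannot be separated: every
# nonempty open set of X contains the particular point b
separating = [(U, V) for U in TX for V in TX
              if "a" in U and "c" in V and not (U & V)]
assert separating == []
```

The last assertion is exactly the failure of separation: any open set containing $a$ and any open set containing $c$ both contain $b$, so they intersect.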
My Math Forum Differential Equation Help Calculus Calculus Math Forum April 11th, 2016, 03:08 AM #1 Newbie   Joined: Apr 2016 From: UK Posts: 10 Thanks: 0 Differential Equation Help Hi, I'd really appreciate any help with this. Obtain the solution of: $2\cot x\frac{\text{d}y}{\text{d}x} = (4-y^2)$ for which y=0 at x=pi/3 giving your answer in the form $\sec^2x=g(y)$ Here's my attempt at a solution: $\int_{}^{}\frac{2}{4-y^2}\text{d}y=\int_{}^{}\tan(x)\text{d}x$ $\int_{}^{}\tan x\text{d}x = \ln|\sec x| +c$ $\int_{}^{}\frac{2}{4-y^2}\text{d}y = \frac{1}{2}\int_{}^{}\frac{1}{y+2}-\frac{1}{y-2} dy$ $\frac{1}{2}(\ln|y+2|-\ln|y-2|) = \ln|\sec x| + C$ $\ln|y+2|-\ln|y-2| = \ln|\sec^2x| +C$ Not sure where to go from there or even whether what I've done so far is correct. It'd be nice to know what the answer actually is as well so that I can see if I can get it right. Thanks. Last edited by skipjack; April 11th, 2016 at 09:16 AM. April 11th, 2016, 04:09 AM   #2 Math Team Joined: Jan 2015 From: Alabama Posts: 3,264 Thanks: 902 Quote: Originally Posted by kkg Hi, I'd really appreciate any help with this. Obtain the solution of: $2\cot x\frac{\text{d}y}{\text{d}x} = (4-y^2)$ for which y=0 at x=pi/3 giving your answer in the form $\sec^2x=g(y)$ Here's my attempt at a solution: $\int_{}^{}\frac{2}{4-y^2}\text{d}y=\int_{}^{}\tan(x)\text{d}x$ $\int_{}^{}\tan x\text{d}x = \ln|\sec x| +c$ $\int_{}^{}\frac{2}{4-y^2}\text{d}y = \frac{1}{2}\int_{}^{}\frac{1}{y+2}-\frac{1}{y-2} dy$ $\frac{1}{2}(\ln|y+2|-\ln|y-2|) = \ln|\sec x| + C$ $\ln|y+2|-\ln|y-2| = \ln|\sec^2x| +C$ Not sure where to go from there or even whether what I've done so far is correct. It'd be nice to know what the answer actually is as well so that I can see if I can get it right. Thanks. You've done very well. Now I presume you know the "laws of logarithms", in particular, log(a)+ log(b)= log(ab) and log(a/b)= log(a)- log(b). 
So $\displaystyle \ln|y+ 2|- \ln|y- 2|= \ln\left|\frac{y+2}{y- 2}\right|= \ln C'|\sec^2(x)|$ where C= ln(C'). Now, recall that ln(x) is the "inverse" to $\displaystyle e^x$. That is [math]e^{\ln(a)}= a[math] so taking the exponential of both sides $\displaystyle \frac{y+ 2}{y- 2}= C' \sec^2(x)$. We can drop the absolute value signs because C' itself can be taken to be positive or negative. Last edited by skipjack; April 11th, 2016 at 09:17 AM. April 11th, 2016, 06:27 AM   #3 Newbie Joined: Apr 2016 From: UK Posts: 10 Thanks: 0 Quote: Originally Posted by Country Boy You've done very well. Now I presume you know the "laws of logarithms", in particular, log(a)+ log(b)= log(ab) and log(a/b)= log(a)- log(b). So $\displaystyle \ln|y+ 2|- \ln|y- 2|= \ln\left|\frac{y+2}{y- 2}\right|= \ln C'|\sec^2(x)|$ where C= ln(C'). Now, recall that ln(x) is the "inverse" to $\displaystyle e^x$. That is $\displaystyle e^{\ln(a)}= a$ so taking the exponential of both sides $\displaystyle \frac{y+ 2}{y- 2}= C' \sec^2(x)$. We can drop the absolute value signs because C' itself can be taken to be positive or negative. Hi, thanks for the help! Can I just ask why is it that C=ln(C') and it isn't just + C? Last edited by skipjack; April 11th, 2016 at 10:17 AM. April 11th, 2016, 10:36 AM #4 Global Moderator   Joined: Dec 2006 Posts: 20,975 Thanks: 2224 As y = 0 at x = $\pi$/3, C' = -1/4. Tags differential, equation Thread Tools Display Modes Linear Mode Similar Threads Thread Thread Starter Forum Replies Last Post max233 Calculus 4 March 26th, 2016 03:21 AM Sonprelis Calculus 6 August 6th, 2014 10:07 AM PhizKid Differential Equations 0 February 24th, 2013 10:30 AM pepeatienza Differential Equations 1 May 13th, 2008 01:14 PM jwade456 Differential Equations 1 May 8th, 2008 12:14 PM Contact - Home - Forums - Cryptocurrency Forum - Top
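As a numerical sanity check on the thread's result (my own addition, not part of the posts): with C' = -1/4 the implicit solution (y+2)/(y-2) = C' sec²x can be solved explicitly for y, giving y(x) = 2(sec²x - 4)/(sec²x + 4), equivalently the requested form sec²x = g(y) = 4(2+y)/(2-y). Both the ODE and the initial condition can then be verified directly:

```python
import math

# explicit solution recovered from (y+2)/(y-2) = -(1/4) sec^2 x
def y(x):
    u = 1.0 / math.cos(x) ** 2              # sec^2 x
    return 2.0 * (u - 4.0) / (u + 4.0)

assert abs(y(math.pi / 3)) < 1e-12          # initial condition: y(pi/3) = 0

# check 2 cot(x) dy/dx = 4 - y^2 at several points via central differences
h = 1e-5
for x in (0.5, 0.9, math.pi / 3, 1.4):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(2 * dydx / math.tan(x) - (4 - y(x) ** 2)) < 1e-4
```

This also confirms the moderator's value of C': at x = π/3, sec²x = 4, so (y+2)/(y-2) = -1 forces y = 0 only when C' = -1/4.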
# which of the following is the identity element

One's identity consists of three basic elements: personal identity, family identity and social identity. Identity is a complicated and debatable term: it is a set of characteristics that belongs uniquely to somebody. It includes both changeable and stable aspects and is influenced by both outside and inside factors. Identity management is just as pervasive in mediated communication.

Which of the following is not an element of cultural diversity? A) language B) religion C) ethnicity D) race E) art. The most numerous ethnicity in the United States is …

Which of the following is an essential element of a privileged identity lifecycle management? A. Regularly perform account re-validation and approval B. Account provisioning based on multi-factor authentication C. Frequently review performed activities and request justification D. Account information to be provided by supervisor or line manager.

Each identity type corresponds to an element that can be contained inside the <identity> element in configuration. A service can provide six types of identities; the following table describes each identity type. The type used depends on the scenario and the service's security requirements.

Free Online Library: Name that element! Follow these eight clues to sleuth the identity of this element. Then, turn the page to test your chemistry IQ.

The number of protons in the nucleus of an atom determines its identity as a particular element; the number of protons is the element's atomic number, and is unique to each element. An atom is the smallest fundamental unit of an element. Atoms are the basic building blocks of everything around us; they come in different kinds, called elements, but each atom shares certain characteristics in common. All atoms have a dense central core called the atomic nucleus, and all atoms have at least one proton in their core; the number of protons determines which kind of element an atom is.

Which of the following was NOT among Democritus's ideas? (1 point) Atoms are tiny indivisible particles. Matter consists of tiny particles called atoms. Atoms are indestructible. Atoms retain their identity in a chemical reaction. Which of the following was originally a tenet of Dalton's atomic theory, but had to be revised about a century ago?

Question: What is the identity of the element X in the following ions? (a) X²⁺, a cation that has 36 electrons; (b) X⁻, an anion that has 36 electrons.

What is the identity of the element X in the following ions? A. X³⁺, a cation that has 10 electrons; B. X³⁻, an anion that has 18 electrons; C. ²⁰⁰X³⁺, a cation which has 77 electrons and 120 neutrons; D. ²⁰⁰X³⁺, a cation which has 75 electrons and 122 neutrons. Enter the symbol of the element.

Chemistry Q&A Library: What is the identity of the element with the following electron configuration: 1s²2s²2p⁶3s²3p¹? Determine the identity of the element with the following electron configuration: [Xe] 6s² 4f¹⁴ 5d¹⁰ 6p⁴.

A period 3 element has the following ionization energies: IE₁ = 801 kJ/mol, IE₂ = 2427 kJ/mol, IE₃ = 3660 kJ/mol, IE₄ = 25025 kJ/mol. Since there is a very large increase in IE₄ compared to IE₃, this element would be expected to have 3 valence electrons.

A sample containing atoms of C and F was analyzed using x-ray photoelectron spectroscopy; the portion of the spectrum showing the 1s peaks for atoms of the two elements is shown above.

Consider the following Lewis structure, where E is an unknown element. Which of the following could be the identity of element E? E-P, V-shaped; E-As, linear; E-S, linear; E-Cl, V-shaped. To determine this, answer the following questions: (a) (2 pts) Given that XO₃ has 24 valence electrons, how many valence electrons are there for X? (3) What is the identity of element X? What is the molecular structure for the ion?

An element has three naturally occurring isotopes with the following masses and abundances; calculate the atomic weight of this element.

| Isotopic Mass (amu) | Fractional Abundance |
|---|---|
| 38.964 | 0.9326 |
| 39.964 | 1.000 × 10⁻⁴ |
| 40.962 | 0.0673 |

In mathematics, an identity element, or neutral element, is a special type of element of a set with respect to a binary operation on that set, which leaves any element of the set unchanged when combined with it. This concept is used in algebraic structures such as groups and rings. The term identity element is often shortened to identity (as in the case of additive identity and multiplicative identity) when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with. The two most familiar examples are 0, which when added to a number gives the number, and 1, which is an identity element for multiplication: if you multiply any number by 1, the number doesn't change.

"Zero" is called the identity element, also known as the additive identity: if we add any number to zero, the resulting number will be the same number. This is true for any real numbers, complex numbers and even for imaginary numbers; in addition, 0 + 10000 = 10000, the number itself. Likewise, the identity element for multiplication is 1, as 8 × 1 = 8, the number itself. Multiplicative identity definition: an identity that, when used to multiply a given element in a specified set, leaves that element unchanged, as the number 1 for the real-number system.

Which of the following is the identity element? a) 1 b) -1 c) 0 d) None of these. Correct answer is option 'c'. (Discussed on the EduRev Study Group by Class 8 students, Dec 27, 2020.) Which of the following, when added to the additive identity element, results in 0?

Which of the following is the identity element of multiplication? (Here are your choices:) A. 0 B. 1 C. 100 D. none of the above. Answer (pragna2005): 1 is called the identity element. Step-by-step explanation: mark my answer as brainliest answer.

The identity element (denoted by e or E) of a set S is an element such that a ο e = a for every element a ∈ S. The identity element of a semigroup (S, •) is an element e in the set S such that for all elements a in S, e•a = a•e = a. Such a semigroup is also a monoid: a monoid is a semigroup with an identity element, so a monoid holds three properties simultaneously: closure, associativity, identity element.

Definition 3.5: An element which is both a right and a left identity is called the identity element (some authors use the term "two-sided identity"). Theorem 3.1: If S is a set with a binary operation ∗ that has a left identity element e₁ and a right identity element e₂, then e₁ = e₂ = e. Proof: e₁ = e₁ ∗ e₂ (since e₂ is a right identity) = e₂ (since e₁ is a left identity).

Statement: In a group G, there is only one identity element (uniqueness of identity). Proof: let e and e′ be two identities in G and let a ∈ G. Then ae = a (i) and ae′ = a (ii). From (i) and (ii), ae = ae′, and cancelling a on the left gives e = e′.

In a group, the additive identity is the identity element of the group, is often denoted 0, and is unique (see the proof above). A ring or field is a group under the operation of addition, and thus these also have a unique additive identity 0. 0, zero, is defined as the identity element for addition and subtraction.

The Inverse Property: A set has the inverse property under a particular operation if every element of the set has an inverse. An inverse of an element is another element in the set that, when combined on the right or the left through the operation, always gives the identity element as the result. Existence of inverse in an operation table: if we mark the identity elements in the table, then the element at the top of the column passing through the identity element is the inverse of the element in the extreme left of the row passing through the identity element, and vice versa.

For the operation x ∗ y = xay on a group G: to find an identity element we need to solve x ∗ g = g and g ∗ x = g; then we must have xag = g, so x = a⁻¹. To find the inverse of g, we must solve g ∗ x = a⁻¹ and x ∗ g = a⁻¹. Let S be a set of three elements given by S = {A, B, C}; in the following table, all of the elements of S …
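The definitions and the uniqueness argument quoted above are easy to check by brute force on small finite examples. The sketch below (helper name and example operations are my own, not from the page) finds all two-sided identities of a binary operation:

```python
# Brute-force search for two-sided identities in a finite set with a
# binary operation.
def identities(elements, op):
    """All two-sided identities: e with op(e, a) == op(a, e) == a for every a."""
    return [e for e in elements
            if all(op(e, a) == a and op(a, e) == a for a in elements)]

Z5 = range(5)

# addition mod 5: the identity is 0, matching answer option 'c' above
assert identities(Z5, lambda a, b: (a + b) % 5) == [0]

# multiplication mod 5: the identity is 1
assert identities(Z5, lambda a, b: (a * b) % 5) == [1]

# uniqueness: no operation can have two distinct two-sided identities,
# since e1 = e1 * e2 = e2 whenever e1 and e2 are both identities
for op in (lambda a, b: (a + b) % 5,
           lambda a, b: max(a, b),
           lambda a, b: a):            # "take the left argument" has none
    assert len(identities(Z5, op)) <= 1
```

The final loop mirrors the proof of Theorem 3.1: a search like this can return at most one element, because any left identity must equal any right identity.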
Definition of identity element: an element (such as 0 in the set of all integers under addition or 1 in the set of positive integers under multiplication) that leaves any element of the set to which it belongs unchanged when combined with it by a specified operation. Identity elements are specific to each operation (addition, multiplication, etc.). In mathematics, an identity element is any mathematical object that, when applied by an operation such as addition or multiplication to another mathematical object such as a number, leaves the other object unchanged. An element s ∈ S that is both a left and a right identity is called a two-sided identity, or identity element, or identity for short.

Since 2g₁ is in the subset H and we have shown that −2g₁ is the inverse of 2g₁, does it not suffice to say the identity element exists if 2g₁ + (−2g₁) = 0?

Which of the following is not true about identity management in social media?

Which element on the periodic table helps play tricks with birthday candles, colors plants green, and soothes achy stomachs?

Element X (8 protons, 8 electrons, 8 neutrons); Element Y (35 protons, 36 electrons, 46 neutrons); Element Z (12 protons, 10 electrons, 13 neutrons). Problem: What is the identity of element Q if the ion Q²⁺ contains 10 electrons?

A certain metal sulfide, MS₂, is determined to be 40.064% sulfur by mass.
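Two of the chemistry questions above reduce to short calculations (this script and its helper names are my own, not from the page): summing the occupancies in an electron configuration gives the electron count, which for a neutral atom equals the atomic number, and the atomic-weight question is a weighted average over the isotope table.

```python
import re

def electron_count(config: str) -> int:
    # each term looks like "3p1": shell number, subshell letter, occupancy
    return sum(int(occ) for occ in re.findall(r"\d+[spdfg](\d+)", config))

Z = electron_count("1s2 2s2 2p6 3s2 3p1")
assert Z == 13  # atomic number 13 is aluminium, a period 3 element,
                # consistent with the IE4 jump indicating 3 valence electrons

# atomic weight: abundance-weighted average of the isotopic masses
isotopes = [(38.964, 0.9326), (39.964, 1.000e-4), (40.962, 0.0673)]
atomic_weight = sum(m * f for m, f in isotopes)
assert abs(atomic_weight - 39.10) < 0.01  # about 39.10 amu (potassium)
```

The same electron-count idea answers the ion questions: an ion's electron count is the atomic number minus the charge, so X²⁺ with 36 electrons has Z = 38, and X⁻ with 36 electrons has Z = 35.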
Come in different kinds, called elements, but each atom shares certain characteristics in common a metal. With birthday candles, colors plants green, and the number of protons is the identity of element?. Table helps play tricks with birthday candles, colors plants green, and soothes achy stomachs ) atoms tiny... Family identity and social identity change See answer raqueljuliet987 is waiting for your help and the service 's security.... Sender to say difficult things without forcing the receiver to respond immediately ∗ g = a −1and X g! Study Group by 106 Class 8 particles called atoms among Democritus 's ideas etc. ) which kind of E! Features of Groups: Theorem1: -1 called as identity element, which elements have?...
{}
## Generalized eigenproblem without fermion doubling for Dirac fermions on a lattice

M. J. Pacholski, G. Lemut, J. Tworzydło, C. W. J. Beenakker

SciPost Phys. 11, 105 (2021) · published 14 December 2021

### Abstract

The spatial discretization of the single-cone Dirac Hamiltonian on the surface of a topological insulator or superconductor needs a special "staggered" grid, to avoid the appearance of a spurious second cone in the Brillouin zone. We adapt the Stacey discretization from lattice gauge theory to produce a generalized eigenvalue problem, of the form ${\mathcal H}\psi=E {\mathcal P}\psi$, with Hermitian tight-binding operators ${\mathcal H}$, ${\mathcal P}$, a locally conserved particle current, and preserved chiral and symplectic symmetries. This permits the study of the spectral statistics of Dirac fermions in each of the four symmetry classes A, AII, AIII, and D.
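A generalized eigenvalue problem ${\mathcal H}\psi = E{\mathcal P}\psi$ with Hermitian ${\mathcal H}$ and positive-definite ${\mathcal P}$ can be reduced to an ordinary Hermitian eigenproblem through a Cholesky factor of ${\mathcal P}$. The following is a minimal numerical sketch with small toy matrices — not the actual Stacey-discretized operators from the paper:

```python
import numpy as np

def generalized_eigh(H, P):
    """Solve H psi = E P psi for Hermitian H and Hermitian positive-definite P.

    Uses the Cholesky reduction P = L L^dagger, so that
    (L^-1 H L^-dagger) phi = E phi  with  psi = L^-dagger phi.
    """
    L = np.linalg.cholesky(P)
    Linv = np.linalg.inv(L)
    A = Linv @ H @ Linv.conj().T      # ordinary Hermitian eigenproblem
    E, Phi = np.linalg.eigh(A)        # real eigenvalues, as expected physically
    Psi = Linv.conj().T @ Phi         # back-transform the eigenvectors
    return E, Psi

# Toy 4x4 Hermitian H and positive-definite P (placeholders only).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
B = rng.normal(size=(4, 4))
P = B @ B.T + 4 * np.eye(4)           # symmetric positive definite

E, Psi = generalized_eigh(H, P)
for k in range(4):                    # each pair satisfies H psi = E P psi
    assert np.allclose(H @ Psi[:, k], E[k] * (P @ Psi[:, k]))
```

The reduction preserves Hermiticity, which is why the spectrum stays real even though ${\mathcal P} \neq \mathbb{1}$.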
## Two Integrals

### January 11, 2011

The exponential integral appears frequently in the study of physics, and the related logarithmic integral appears both in physics and in number theory. With the Euler-Mascheroni constant γ = 0.5772156649015328606065, formulas for computing the exponential and logarithmic integrals are:

$\mathrm{Ei}\ (x) = -\int_{-x}^{\infty}\frac{e^{-t}\ d\ t}{t} = \gamma + \mathrm{ln}\ x + \sum_{k=1}^{\infty}\frac{x^k}{k \cdot k!}$

$\mathrm{Li}\ (x) = \mathrm{Ei}\ (\mathrm{ln}\ x) = \int_{0}^{x}\frac{d\ t}{\mathrm{ln}\ t} = \gamma + \mathrm{ln}\ \mathrm{ln}\ x + \sum_{k=1}^{\infty}\frac{(\mathrm{ln}\ x)^k}{k \cdot k!}$

Since there is a singularity at Li(1) = −∞, the logarithmic integral is often given in an offset form, with the integral starting at 2 instead of 0; the two forms are related by Lioffset(x) = Li(x) – Li(2) = Li(x) – 1.04516378011749278. It is this offset form that we are most interested in, because it is a good approximation of the prime counting function π(x), which counts the primes less than or equal to x:

| x | 10^6 | 10^21 |
| --- | --- | --- |
| Lioffset(x) | 78627 | 21127269486616126182 |
| π(x) | 78498 | 21127269486018731928 |

If you read the mathematical literature, you should be aware that there is some notational confusion about the two forms of the logarithmic integral: some authors use Li for the logarithmic integral and li for its offset variant, other authors turn that convention around, and still other authors use either notation in either (or both!) contexts. The good news is that in most cases it doesn't matter which variant you choose.

Your task is to write functions that compute the exponential integral and the two forms of the logarithmic integral. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
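One direct way to attempt the task is to sum the series given above. This sketch works well for moderate arguments such as ln(10^6) ≈ 13.8; for very large x the series needs more care:

```python
import math

GAMMA = 0.5772156649015328606065   # Euler-Mascheroni constant
LI2 = 1.04516378011749278          # Li(2), the offset constant from the post

def ei(x, eps=1e-17):
    """Exponential integral via Ei(x) = gamma + ln x + sum x^k / (k * k!)."""
    total = GAMMA + math.log(x)
    term = 1.0
    for k in range(1, 200):
        term *= x / k              # term is now x^k / k!
        total += term / k
        if term / k < eps * abs(total):
            break
    return total

def li(x):
    """Logarithmic integral Li(x) = Ei(ln x), integral starting at 0."""
    return ei(math.log(x))

def li_offset(x):
    """Offset logarithmic integral, integral starting at 2 instead of 0."""
    return li(x) - LI2
```

For example, `li(10**6)` evaluates to about 78627.55 and `li_offset(10**6)` to about 78626.50, consistent (up to rounding convention) with the 78627 entry in the table above and well within 200 of π(10^6) = 78498.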
# MTHexapod reports failure in state transition when it is actually succeeding

#### Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s: None
• Labels:
• Story Points: 1
• Sprint: TSSW Sprint - Sep 13 - Sep 27, TSSW Sprint - Sep 27 - Oct 11
• Team: Telescope and Site
• Urgent?: No

#### Description

I am not sure if this is an issue with the CSC or with the low-level controller, but we are constantly getting failures in state transitions with the MTHexapod component when operating with the real hardware. Assuming the system is in STANDBY state, a simple:

    import salobj

    r = salobj.Remote(salobj.Domain(), "MTHexapod", index=1)
    await r.start_task
    await salobj.set_summary_state(r, salobj.State.DISABLED, settingsToApply="default")

results in:

    RuntimeError: Error on cmd=cmd_start, initial_state=5: msg='Command failed', ackcmd=(ackcmd private_seqNum=1948430428, ack=, error=1, result='Failed: Failed: final state is instead of ')

most of the time, though it works sometimes. Despite the failure reported above, the CSC does transition to DISABLED state shortly after. I wonder if the CSC should allow a bit more time for the state transition to occur, or if the low-level controller is reporting the command as completed too early.

#### Activity

Russell Owen added a comment - edited

I think this is in the CSC. Let me give a bit of background: the low-level controller does not report command success or failure, so the CSC has to guess based on the data that the low-level controller does send. I fervently hope we (Te-Wei Tsai) can fix this someday. Meanwhile we are stuck with it and it leads to issues such as this.

I looked at the CSC code that handles the state transition commands and its guessing is too naive. The current code issues the state transition command, then waits for 2 telemetry messages from the low-level controller, checks the controller state, and fails the command if it's not the desired new state.
A more robust algorithm is to check the next "up to N" telemetry samples, waiting for the new state. Te-Wei Tsai: is there some way to predict a minimum time for the low-level controller to respond to a request for state change? I could use that information to pick a suitable maximum number of telemetry samples.

Tiago Ribeiro added a comment

Sounds good! I figured it was something on those lines. When you get an appropriate number of samples, I imagine you can convert that into a timeout in seconds, right? Can you also report that in the "ack in progress"?

Te-Wei Tsai added a comment

This is related to DM-29578. The telemetry frequency is ~20 Hz.
If there is a state change, it will reflect in State and EnabledSubState:

    // Get state information
    tlmStruct->State = GUItlmStruct->State;
    tlmStruct->EnabledSubState = GUItlmStruct->EnabledSubState;
    tlmStruct->OfflineSubState = GUItlmStruct->OfflineSubState;
    tlmStruct->TestState = GUItlmStruct->TestState;

https://github.com/lsst-ts/ts_hexapod_controller/blob/develop/src/actuatorTlm.c#L693-L697

I think waiting for >= 0.5 second is reasonable, but I might be wrong. Thanks!

Russell Owen added a comment

This affects both the MT hexapod and MT rotator.

Russell Owen added a comment - edited

The issue affects both MTHexapod and MTRotator. The fix is in BaseCsc in ts_hexrotcomm. However, I took the liberty of simplifying assert_summary_state, deprecating an argument used by ts_mtrotator, so I also have a trivial patch for that package.

Additional changes to ts_hexrotcomm:

• Update to use ts_utils.
• Fix cleanup in a unit test file.
Pull requests:

https://github.com/lsst-ts/ts_hexrotcomm/pull/42
https://github.com/lsst-ts/ts_mtrotator/pull/51

Tiago Ribeiro added a comment

Reviewed in GitHub...

Russell Owen added a comment

Released:

• ts_hexrotcomm v0.20.0
• ts_mtrotator v0.18.0. This requires ts_hexrotcomm v0.20.0, but it is not required in order to get the fix (i.e. one can use v0.17.0 if desired).

#### People

Assignee: Russell Owen
Reporter: Tiago Ribeiro
Reviewers: Tiago Ribeiro
Watchers: Andy Clements, Holger Drass, Russell Owen, Sandrine Thomas, Te-Wei Tsai, Tiago Ribeiro
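The "check up to N telemetry samples" approach discussed in the comments can be sketched as follows. This is a simplified illustration with a stand-in telemetry queue, not the actual ts_hexrotcomm code; the names and parameters are hypothetical:

```python
import asyncio

async def wait_for_state(telemetry_queue, desired_state,
                         max_samples=15, sample_timeout=0.5):
    """Read up to max_samples telemetry messages, waiting for the
    controller to report desired_state; raise if it never does.

    At ~20 Hz telemetry, max_samples=15 allows roughly 0.75 s for the
    low-level controller to complete the transition."""
    for _ in range(max_samples):
        sample = await asyncio.wait_for(telemetry_queue.get(),
                                        timeout=sample_timeout)
        if sample["state"] == desired_state:
            return sample
    raise RuntimeError(f"Controller did not reach state {desired_state} "
                       f"within {max_samples} telemetry samples")

async def demo():
    queue = asyncio.Queue()
    # Simulated telemetry: the controller reaches state 1 on the 3rd sample,
    # which would have failed the old "check exactly 2 samples" logic.
    for state in (5, 5, 1):
        queue.put_nowait({"state": state})
    sample = await wait_for_state(queue, desired_state=1)
    assert sample["state"] == 1

asyncio.run(demo())
```

Converting max_samples into a seconds-based timeout (as suggested in the comments) is a matter of dividing by the telemetry rate.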
# What is the surface area produced by rotating f(x)=(1-x)/(x^2+6x+9), x in [0,3] around the x-axis? The signed integral evaluates to $\textcolor{red}{S = -0.1637650055}$ square units. #### Explanation: Given $y = \frac{1 - x}{x^2 + 6x + 9} = \frac{1 - x}{(x + 3)^2}$ and $x = 0$ to $x = 3$. The formula for the surface area of revolution is $S = 2 \pi \int_a^b y \, \mathrm{ds} = 2 \pi \int_a^b y \sqrt{1 + \left(\frac{\mathrm{dy}}{\mathrm{dx}}\right)^2} \, \mathrm{dx}$. Here $\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{x - 5}{(x + 3)^3}$, so $S = 2 \pi \int_0^3 \frac{1 - x}{(x + 3)^2} \cdot \sqrt{1 + \left(\frac{x - 5}{(x + 3)^3}\right)^2} \, \mathrm{dx}$. The integral has no convenient closed form; I suggest Simpson's rule to evaluate it numerically, which gives $S = -0.1637650055$ square units. Note that the result is negative because $y < 0$ on $(1, 3]$, so $2\pi \int y \, \mathrm{ds}$ returns a signed value; for the geometric (unsigned) surface area, integrate $2\pi \int_a^b |y| \, \mathrm{ds}$ instead. God bless...I hope the explanation is useful.
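As a numerical check, here is a short Python sketch (my own illustration, not part of the original answer) that evaluates both the signed integral quoted above and the unsigned geometric area with a composite Simpson's rule; the helper names `simpson`, `y`, and `dy` are mine. It should reproduce the quoted signed value to a few decimal places.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))            # even interior nodes
    return s * h / 3

def y(x):
    return (1 - x) / (x + 3) ** 2

def dy(x):
    return (x - 5) / (x + 3) ** 3

# Signed surface-of-revolution integrand 2*pi*y*sqrt(1 + y'^2); y changes sign at x = 1.
signed = simpson(lambda x: 2 * math.pi * y(x) * math.sqrt(1 + dy(x) ** 2), 0, 3)
# Geometric (unsigned) area uses |y| instead of y.
unsigned = simpson(lambda x: 2 * math.pi * abs(y(x)) * math.sqrt(1 + dy(x) ** 2), 0, 3)
```

The unsigned value is the one with a geometric meaning; the signed one matches the answer above.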
{}
# Quotient Group of $\mathbb{Z}^4$ and a Lattice I am working on problem 2 of the Rutgers 2017 Fall Algebra Qualifier, where we are tasked with determining the structure of $\mathbb{Z}^4 /S$, where $S$ is the group generated by the vectors $(5,-2,-4,1)$, $(-5,4,4,1)$, $(0,6,0,6)$. The first thing I noted was that $$(5,-2,-4,1) + (-5,4,4,1) = (0,2,0,2).$$ So the third given vector $(0,6,0,6) = 3\,(0,2,0,2)$ is in the span of the first two, and the question reduces to: Find $\mathbb{Z}^4 / \lbrace a (5,-2,-4,1) + b (-5,4,4,1) : a, b \in \mathbb{Z} \rbrace$. I tried to look for similar problems to make sense of this and came across the following: Is $\mathbb{Z}\times\mathbb{Z}/((6,5),(3,4))$ is finitely generated? But I'm not sure how to use the matrix techniques there correctly and rigorously. So now I'm working with equivalence classes, but the ease with which one can declare $$\mathbb{Z} / k\mathbb{Z} = \mathbb{Z}_k$$ seems to be lost when I move to the two-basis-vector situation. • The keyword to look up is Smith normal form: en.wikipedia.org/wiki/Smith_normal_form – Qiaochu Yuan Jan 7 '18 at 6:26 • There are a bunch of posts on this topic: 1, 2, 3, for example. – André 3000 Jan 7 '18 at 6:43 • @Quasicoherent how do you efficiently find those links? I tried approach0 too but it seems I still couldn't dig them up – frogeyedpeas Jan 7 '18 at 6:51 • @frogeyedpeas I'm not sure how to find them. I just had answered these sorts of questions several times, so I bookmarked them. Googling site:math.stackexchange.com smith normal form brought up this relevant post, but you'd have to know the Smith normal form is the right thing to search. – André 3000 Jan 7 '18 at 7:39 The point is to determine the structure of $G$; we can freely compose automorphisms with $A$ to get a nicer matrix, for which the cokernel is obvious. In particular, we can apply row and column operations to $A$. At this point, we'll put $A$ in Smith normal form, as is suggested in the linked question.
Edit: to elaborate, since we made $A$ essentially diagonal, the cokernel splits as a direct sum: $$\mathbb{Z}^4/(e_1,2e_2) = (\mathbb{Z}/\mathbb{Z})\oplus (\mathbb{Z}/2\mathbb{Z}) \oplus (\mathbb{Z}/0) \oplus (\mathbb{Z}/0) = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}^2.$$ • @jgon this made sense up until the last point; do you mean to say $\mathbb{Z}^4/ ( \mathbb{Z} \times 2 \mathbb{Z} )$, which is $\mathbb{Z}^3 \times \mathbb{Z}/ 2 \mathbb{Z}$? – frogeyedpeas Jan 7 '18 at 6:56 • I was inspired by math.stackexchange.com/questions/1243829/… where they seem to take $k\mathbb{Z}$ into the denominator of the quotient for each $k$ along the diagonal of the Smith normal form (I'm going to review the proof and exact mechanics in a bit so I should be able to prove it myself, but wanted to clarify here first) – frogeyedpeas Jan 7 '18 at 6:58 • I apologize, your answer matches mine too; it seems I forgot that $3-1=2$ – frogeyedpeas Jan 7 '18 at 6:59 • Ah cool, glad it got worked out. – jgon Jan 7 '18 at 7:00 A slightly different approach that amounts to the same thing (row reduction), using Tietze transformations. Let's rewrite the group in terms of its presentation (although I will suppress all the commutation relations such as $ab=ba$, $ac=ca$, etc.). Additionally, $a,b,c,d$ are just the standard basis vectors. We get: $$\langle a,b,c,d : a^{5}b^{-2}c^{-4}d=1,\; a^{-5}b^{4}c^{4}d=1 \rangle.$$ The first relation gives $d = a^{-5}b^{2}c^{4}$, so we can remove $d$ as a generator; substituting into the second relation yields $a^{-10}b^{6}c^{8}=1$, that is, $(a^{-5}b^{3}c^{4})^{2}=1$. Since $\gcd(5,3,4)=1$, the element $e = a^{-5}b^{3}c^{4}$ can be completed to a basis $e,f,g$ of the free abelian group on $a,b,c$, so the new presentation is $$\langle e,f,g : e^2=1 \rangle$$ along with the usual commutation relations, or $\mathbb{Z}^2 \times \mathbb{Z}_2$.
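To make the Smith normal form computation concrete, here is a self-contained Python sketch (my own illustration, not from the thread) that reduces an integer matrix by elementary row and column operations and returns the diagonal of its Smith normal form. Applied to the relation matrix of $S$ it gives invariant factors $1, 2, 0$, confirming $\mathbb{Z}^4/S \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}^2$ (one $\mathbb{Z}$ factor from the zero diagonal entry, one from the extra fourth column).

```python
from math import gcd

def smith_normal_form(M):
    """Diagonal of the Smith normal form of an integer matrix M,
    computed with elementary (unimodular) row/column operations."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick the nonzero entry of smallest absolute value as pivot
        pivot = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] != 0 and (pivot is None or abs(A[i][j]) < abs(A[pivot[0]][pivot[1]])):
                    pivot = (i, j)
        if pivot is None:
            break  # remaining submatrix is zero
        i, j = pivot
        A[t], A[i] = A[i], A[t]                       # move pivot to (t, t)
        for row in A:
            row[t], row[j] = row[j], row[t]
        done = False
        while not done:                               # clear row t and column t
            done = True
            for i in range(t + 1, m):
                if A[i][t] != 0:
                    q = A[i][t] // A[t][t]
                    for j in range(n):
                        A[i][j] -= q * A[t][j]
                    if A[i][t] != 0:                  # nonzero remainder: smaller pivot
                        A[t], A[i] = A[i], A[t]
                        done = False
            for j in range(t + 1, n):
                if A[t][j] != 0:
                    q = A[t][j] // A[t][t]
                    for i in range(m):
                        A[i][j] -= q * A[i][t]
                    if A[t][j] != 0:
                        for i in range(m):
                            A[i][t], A[i][j] = A[i][j], A[i][t]
                        done = False
        t += 1
    diag = [A[k][k] for k in range(min(m, n))]
    # enforce the divisibility chain d1 | d2 | ... via gcd/lcm swaps
    for a in range(len(diag)):
        for b in range(a + 1, len(diag)):
            if diag[a] != 0 and diag[b] != 0 and diag[b] % diag[a] != 0:
                g = gcd(diag[a], diag[b])
                diag[a], diag[b] = g, diag[a] * diag[b] // g
    return [abs(d) for d in diag]
```

For the quotient, read off $\mathbb{Z}^n/(\text{rows})\cong\bigoplus_k \mathbb{Z}/d_k \oplus \mathbb{Z}^{\,n-\operatorname{rank}}$.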
{}
### 14.4.1.1 A double-integrator lattice First consider the double integrator from Example 13.3. Let $\ddot{q} = u$ and $U = [-1, 1]$. This models the motion of a free-floating particle in $\mathbb{R}$, as described in Section 13.3.2. The phase space is $X = \mathbb{R}^2$, and $x = (q, \dot{q})$. Let the acceleration bound be $1$. The coming ideas can be easily generalized to allow any acceleration bound by rescaling $u$; however, a bound of $1$ will be chosen to simplify the presentation. The differential equation can be integrated once to yield $\dot{q}(t) = \dot{q}(0) + u t$ (14.21), in which $\dot{q}(0)$ is an initial speed. Upon integration of (14.21), the position is obtained as $q(t) = q(0) + \dot{q}(0)\, t + \tfrac{1}{2} u t^2$ (14.22), which uses two initial conditions, $q(0)$ and $\dot{q}(0)$. A discrete-time model exists for which the reachability graph is trapped on a lattice. This is obtained by letting $u \in \{-1, 0, 1\}$ and $\Delta t$ be any positive real number. The vector fields over $X$ that correspond to the cases of $u = -1$, $u = 0$, and $u = 1$ are shown in Figure 14.12. Switching between these fields at every $\Delta t$ and integrating yields the reachability graph shown in Figure 14.13. This leads to a discrete-time transition equation of the form $x_{k+1} = f(x_k, u_k)$, in which $u_k \in \{-1, 0, 1\}$ and stage $k$ represents time $(k-1)\Delta t$. Any action trajectory can be specified as an action sequence; for example, a six-stage action sequence may be given by $(u_1, u_2, \ldots, u_6)$ with each $u_k \in \{-1, 0, 1\}$. Start from $x(0) = (0, 0)$. At any stage and for any action sequence, the resulting state can be expressed as $x = \left(\tfrac{1}{2} i\, \Delta t^2,\; j\, \Delta t\right)$ (14.23), in which $i$ and $j$ are integers that can be computed from the action sequence. Thus, any action sequence leads to a state that can be expressed using integer coordinates in the plane. Starting at $(0, 0)$, this forms the lattice of points shown in Figure 14.13. The lattice is slanted (with slope $2/\Delta t$) because changing speed requires some motion. If infinite acceleration were allowed, then $\dot{q}$ could be changed instantaneously, which corresponds to moving vertically in $X$. As seen in (14.21), $\dot{q}$ changes linearly over time. If $u \neq 0$, then the configuration $q$ changes quadratically. If $u = 0$, then it changes linearly, except when $\dot{q} = 0$; in this case, no motion occurs. The neighborhood structure is not the same as those in Section 5.4.2 because of drift.
For $u = 0$, imagine having a stack of horizontal conveyor belts that carry points to the right if they are above the $q$-axis, and to the left if they are below it (see Figure 14.12b). The speed of the conveyor belt is given by $\dot{q}$. If $u = 0$, the distance traveled along $q$ is $\dot{q}\, \Delta t$. This causes horizontal motion to the right in the phase plane if $\dot{q} > 0$ and horizontal motion to the left if $\dot{q} < 0$. Observe in Figure 14.13 that larger motions result as $|\dot{q}|$ increases. If $\dot{q} = 0$, then no horizontal motion can occur. If $u \neq 0$, then the $\dot{q}$ coordinate changes by $u\, \Delta t$. This slowing down or speeding up also affects the position along $q$. For most realistic problems, there is an upper bound on speed. Let $\dot{q}_{max}$ be a positive constant and assume that $|\dot{q}| \leq \dot{q}_{max}$. Furthermore, assume that $q$ is bounded (all values of $q$ are contained in an interval of $\mathbb{R}$). Since the reachability graph is a lattice and the states are now confined to a bounded subset of $\mathbb{R}^2$, the number of vertices in the reachability graph is finite. For any fixed $\Delta t$, the lattice can be searched using any of the algorithms of Section 2.2. The search starts on a reachability graph for which the initial vertex is $x(0)$. Trajectories that are approximately time-optimal can be obtained by using breadth-first search (Dijkstra's algorithm could alternatively be used, but it is more expensive). Resolution completeness can be obtained by reducing $\Delta t$ by a constant factor each time the search fails to find a solution. As mentioned in Section 5.4.2, it is not required to construct an entire grid resolution at once. Samples can be gradually added, and the connectivity can be updated efficiently using the union-find algorithm [243,823]. A rigorous approximation algorithm framework will be presented shortly, which indicates how close the solution is to optimal, expressed in terms of input parameters to the algorithm. Recall the problem of connecting to grid points, which was illustrated in Figure 5.14b. If the goal region $X_G$ contains lattice points, then exact arrival at the goal occurs.
If it does not contain lattice points, as in the common case of $X_G$ being a single point, then some additional work is needed to connect a goal state to a nearby lattice point. This actually corresponds to a BVP, but it is easy to solve for the double integrator. The set of states that can be reached from some state $x$ within time $\Delta t$ lie within a cone, as shown in Figure 14.14a. Lattice points that fall into the cone can be easily connected to $x$ by applying a constant action in $[-1, 1]$. Likewise, the initial state does not even have to coincide with a lattice point. Thus, it is straightforward to connect the initial state to a lattice point, obtain a trajectory that arrives at a lattice point near $X_G$, and then connect it exactly to the goal state. Steven M LaValle 2012-04-20
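The lattice search described above can be sketched in a few lines of code. Writing states in integer coordinates $(i, j)$ with $q = i\,\Delta t^2/2$ and $\dot q = j\,\Delta t$, one constant-action step of duration $\Delta t$ maps $(i, j)$ to $(i + 2j + u,\; j + u)$ for $u \in \{-1, 0, 1\}$. The following Python sketch is my own illustration, not code from the book; the bounds `imax`/`jmax` stand in for the position and speed limits.

```python
from collections import deque

def step(i, j, u):
    # One Delta-t step of the double integrator in lattice coordinates:
    # q' = q + qdot*dt + u*dt^2/2  and  qdot' = qdot + u*dt,
    # with q = i*dt^2/2 and qdot = j*dt, gives i' = i + 2j + u, j' = j + u.
    return i + 2 * j + u, j + u

def bfs_actions(start, goal, imax=50, jmax=10):
    """Breadth-first search over the double-integrator lattice.
    Returns a shortest action sequence (list of -1/0/1) from start to goal,
    or None if the goal is unreachable within the bounds."""
    frontier = deque([start])
    parent = {start: None}  # state -> (previous state, action taken), None at the root
    while frontier:
        s = frontier.popleft()
        if s == goal:
            actions = []
            while parent[s] is not None:
                s, u = parent[s]
                actions.append(u)
            return actions[::-1]
        i, j = s
        for u in (-1, 0, 1):
            nxt = step(i, j, u)
            if abs(nxt[0]) <= imax and abs(nxt[1]) <= jmax and nxt not in parent:
                parent[nxt] = (s, u)
                frontier.append(nxt)
    return None
```

Because BFS explores stages in order, the returned sequence uses the fewest $\Delta t$ steps, matching the approximately time-optimal behavior mentioned above.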
{}
# Volume of special sets on the sphere $S^N$ Suppose I'm given $m$ points $\{q_i\}$ on the sphere $S^N$. I want to get lower/upper bounds for the volume of the following sets with respect to the uniform probability measure $\mathbb{P}$ on the sphere. Let $$A_i = \{ x \in S^N : (q_i,x) > (q_j,x) ~\forall j \neq i \}.$$ Is there (say) a formula for such a quantity in terms of the $q_i$? Suppose I know everything there is to know about the $q_i$; is there a general strategy for calculating such things? On a 2-sphere, such regions are spherical polygons (each of the inequalities in the definition of $A_i$ gives you a half-sphere, since $(q_i - q_j, x) > 0$ is a hemisphere condition). The area of such a polygon is given by a formula using the sum of angles of the polygon: on the unit sphere, a spherical polygon with interior angles $\theta_1, \ldots, \theta_n$ has area $\theta_1 + \cdots + \theta_n - (n-2)\pi$ (the spherical excess). So I am afraid you will first have to construct these Voronoi polygons. There are some algorithms for that.
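Since closed-form areas require constructing the Voronoi polygons, a quick way to estimate the $\mathbb{P}(A_i)$ numerically is Monte Carlo: a standard Gaussian vector, normalized, is uniform on the sphere, so the fraction of samples landing in each cell estimates its measure. The sketch below is my own illustration (function and parameter names are mine, not from the thread); ties on cell boundaries occur with probability zero.

```python
import math
import random

def cell_volumes(points, nsamples=20000, seed=0):
    """Monte Carlo estimate of P(A_i): the fraction of uniform samples on the
    sphere whose inner product with q_i beats that of every other q_j."""
    rng = random.Random(seed)
    dim = len(points[0])
    counts = [0] * len(points)
    for _ in range(nsamples):
        # A standard Gaussian vector, normalized, is uniform on the sphere.
        x = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in x))
        x = [c / norm for c in x]
        best = max(range(len(points)),
                   key=lambda i: sum(a * b for a, b in zip(points[i], x)))
        counts[best] += 1
    return [c / nsamples for c in counts]
```

For example, two antipodal points on $S^2$ should each receive measure close to $1/2$. The standard error scales as $O(1/\sqrt{\text{nsamples}})$, so this gives rough bounds rather than exact values.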
{}
ClusterKernel-class: Definition of the [ClusterKernel] class Description This class defines a Kernel Mixture Model (KMM). Details This class inherits from the [IClusterModel] class. A kernel mixture model is a mixture model of the form: $$f(x|\boldsymbol{\theta}) = \sum_{k=1}^K p_k \prod_{j=1}^d \phi(x_j;\sigma^2_{k}), \quad x \in \mathbb{R}^d.$$ Some constraints can be added to the variances in order to reduce the number of parameters. Slots component A [ClusterKernelComponent] with the dimension and standard deviation of the kernel mixture model. rawData A matrix with the original data set. kernelName A string with the name of the kernel to use. Possible values: "gaussian", "polynomial", "exponential". Default is "gaussian". kernelParameters A vector with the parameters of the kernel. Author(s) Serge Iovleff See Also The [IClusterModel] class Examples getSlots("ClusterKernel") data(geyser) new("ClusterKernel", data=geyser)
{}
# How do you find the average value of the function for f(x)=x^3, 0<=x<=2? The average value is $2$: $\frac{1}{2 - 0} {\int}_{0}^{2} x^3 \, \mathrm{dx} = \frac{1}{2} {\left[\frac{1}{4} x^4\right]}_{0}^{2} = \frac{1}{8} \left(2^4 - 0^4\right) = \frac{1}{8} \left(16 - 0\right) = 2$
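The computation can be confirmed numerically. The midpoint-rule helper below is my own sketch, not part of the original answer; it approximates $\frac{1}{b-a}\int_a^b f(x)\,dx$ directly from the definition of the average value.

```python
def average_value(f, a, b, n=100_000):
    """Approximate (1/(b-a)) * integral of f over [a, b] using the midpoint rule."""
    h = (b - a) / n
    total = sum(f(a + (k + 0.5) * h) for k in range(n)) * h  # midpoint Riemann sum
    return total / (b - a)
```

For `f(x) = x**3` on `[0, 2]` this returns a value within floating-point tolerance of 2.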
{}
# Sigma Level Versus Process Capability Estimates Article Revised: March 27, 2019 From the Lean Six Sigma Green Belt Forum on 6-sigma-training.com: ## Question I am a little bit confused about the "process sigma level calculation": 1. The lesson presents a first method based on the RTY: first we compute the normalized yield and then derive the corresponding Z value. Finally, we add 1.5 to account for long-term process variation, and this is how we obtain the process sigma level. 2. A second method is proposed based on process capability analysis, specifically using Z_U and Z_L. These values are used to evaluate the overall defective rate so that we can extract the Z value. But this time, we do not add 1.5 to get the sigma level. Why?
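The RTY-based method described in point 1 can be sketched numerically; `NormalDist` from Python's standard library supplies the inverse normal CDF. The function name and the 1.5 default are my own framing of the method quoted above, not from the forum.

```python
from statistics import NormalDist

def sigma_level_from_yield(rty, shift=1.5):
    """Process sigma level from rolled throughput yield (RTY):
    take the Z value whose cumulative normal probability equals the yield,
    then add the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(rty) + shift
```

For example, a yield of 0.99865 corresponds to Z ≈ 3.0, giving a sigma level of about 4.5 after the shift.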
{}
# A group of students at a high school took an exam. The number of students who passed or failed the exam is broken down by gender in the following table. Determine whether gender and exam result are independent by filling in the blanks in the sentences below, rounding probabilities to the nearest thousandth. Question Describing quantitative data $$\begin{array}{|c|c|c|}\hline & \text{Passed} & \text{Failed} \\ \hline \text{Male} & 9 & 21 \\ \hline \text{Female} & 48 & 2 \\ \hline \end{array}$$ Answered 2021-02-22: $$\begin{array}{|c|c|c|c|}\hline & \text{Passed} & \text{Failed} & \text{Total} \\ \hline \text{Male} & 9 & 21 & 30 \\ \hline \text{Female} & 48 & 2 & 50 \\ \hline \end{array}$$ A Total column is added to show the totals of male and female students. 9 males passed out of 30 male students: $$\displaystyle\frac{9}{30}={0.3}$$ (male students who passed, out of all male students). 9 males passed out of 80 students: $$\displaystyle\frac{9}{80}={0.1125}$$ (male students who passed, over all students). 48 females passed out of 50 female students: $$\displaystyle\frac{48}{50}={0.96}$$ (female students who passed, over all female students). ### Relevant Questions Is there a relationship between gender and relative finger length? To find out, we randomly selected 452 U.S. high school students who completed a survey. The two-way table summarizes the relationship between gender and which finger was longer on the left hand (index finger or ring finger). $$\begin{array}{l|c|c|c} \text{Longer finger} & \text{Female} & \text{Male} & \text{Total} \\ \hline \text{Index finger} & 78 & 45 & 123 \\ \hline \text{Ring finger} & 82 & 152 & 234 \\ \hline \text{Same length} & 52 & 43 & 95 \\ \hline \text{Total} & 212 & 240 & 452 \end{array}$$ Suppose we randomly select one of the survey respondents. Define events R: ring finger longer and F: female. Given that the chosen student does not have a longer ring finger, what's the probability that this person is male?
Write your answer as a probability statement using correct symbols for the events. The table below shows the number of people for three different race groups who were shot by police and were either armed or unarmed. These values are very close to the exact numbers. They have been changed slightly for each student to get a unique problem. Suspect was armed: Black - 543, White - 1176, Hispanic - 378, Total - 2097. Suspect was unarmed: Black - 60, White - 67, Hispanic - 38, Total - 165. Total: Black - 603, White - 1243, Hispanic - 416, Total - 2262. Give your answer as a decimal to at least three decimal places. a) What percent are Black? b) What percent are Unarmed? c) In order for two variables to be independent of each other, $$P(A \text{ and } B) = P(A) \cdot P(B).$$ This just means that the percentage of times that both things happen equals the individual percentages multiplied together (only if they are independent of each other). Therefore, if a person's race is independent of whether they were killed while unarmed, then the percentage of Black people that are killed while being unarmed should equal the percentage of Black people times the percentage of unarmed. Let's check this. Multiply your answer to part a (percentage of Black) by your answer to part b (percentage of unarmed). Remember, the previous answer is only correct if the variables are independent. d) Now let's get the real percent that are Black and Unarmed by using the table. If answer c is "significantly different" from answer d, then that means that there could be a different percentage of unarmed people being shot based on race. We will check this out later in the course. Let's compare the percentage of unarmed shot for each race. e) What percent are White and Unarmed? f) What percent are Hispanic and Unarmed? If you compare answers d, e and f, it shows the highest percentage of unarmed people being shot is most likely White. Why is that?
This is because there are more white people in the United States than any other race, and therefore there are likely to be more white people in the table. Since there are more white people in the table, there most likely would be more white and unarmed people shot by police than any other race. This pulls the percentage of white and unarmed up. In addition, there most likely would be more white and armed people shot by police. All the percentages for white people would be higher, because there are more white people. For example, the table contains very few Hispanic people, and the percentage of people in the table that were Hispanic and unarmed is the lowest percentage. Think of it this way: if you went to a college that was 90% female and 10% male, then females would most likely have the highest percentage of A grades. They would also most likely have the highest percentage of B, C, D and F grades. The correct way to compare is "conditional probability". Conditional probability is the probability of something happening, given we are dealing with just the people in a particular group. g) What percent of blacks shot and killed by police were unarmed? h) What percent of whites shot and killed by police were unarmed? i) What percent of Hispanics shot and killed by police were unarmed? You can see by the answers to parts g and h that the percentage of blacks that were unarmed and killed by police is approximately twice that of whites that were unarmed and killed by police. j) Why do you believe this is happening? Do a search on the internet for reasons why blacks are more likely to be killed by police. Read a few articles on the topic. Write your response using the articles as references. Give the websites used in your response. Your answer should be several sentences long with at least one website listed. This part of this problem will be graded after the due date.
The following table gives a two-way classification of all basketball players at a state university who began their college careers between 2004 and 2008, based on gender and whether or not they graduated. $$\begin{array}{|c|c|c|}\hline &\text{Graduated}&\text{Did not Graduate}\\\hline \text{Male} &129&51\\ \hline \text{Female}&134&36 \\ \hline \end{array}$$ If one of these players is selected at random, find the following probability. $$P(\text{graduated or male})=$$ Enter your answer in accordance with the question statement. Twenty-six students in a college algebra class took a final exam on which the passing score was 70. The mean score of those who passed was 78, and the mean score of those who failed was 26. The mean of all scores was 72. How many students failed the exam? Show whether the following is quantitative or qualitative data: i) Gender of students at a college ii) Weight of babies at a hospital iii) Colour of sweets in a box iv) Number of students in a class v) Colour of sweets in a box The following is a two-way table showing preferences for an award (A, B, C) by gender for the students sampled in a survey. Test whether the data indicate there is some association between gender and preferred award. $$\begin{array}{|c|c|c|c|c|}\hline &\text{A}&\text{B}&\text{C}&\text{Total}\\\hline \text{Female} &20&76&73&169\\ \hline \text{Male}&11&73&109&193 \\ \hline \text{Total}&31&149&182&360 \\ \hline \end{array}$$ Chi-square statistic=? p-value=? Conclusion: (reject or do not reject $$H_0$$) Does the test indicate an association between gender and preferred award? (yes/no) A local school has both male and female students. Each student either plays a sport or doesn't. The two-way table summarizes a random sample of 80 students.
$$\begin{array}{|c|c|c|}\hline & \text{Female} & \text{Male} \\ \hline \text{No sport} & 12 & 15 \\ \hline \text{Sport} & 36 & 17 \\ \hline \end{array}$$ Let sport be the event that a randomly chosen student (from the table) plays a sport. Let female be the event that a randomly chosen student (from the table) is female. $$\begin{array}{c|cc|c} &\text{Physics}&\text{Chemistry}&\text{Total}\\ \hline \text{Males}&100&68&168\\ \text{Females}&71&61&132\\ \hline \text{Total}&171&129&300 \end{array}$$
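The independence check used in the first problem above (is $P(A \text{ and } B) = P(A)\,P(B)$ for every cell of the two-way table?) can be sketched in Python; the function name and tolerance are my own.

```python
def independent(table, tol=1e-9):
    """Return True if a two-way count table satisfies
    P(row i and col j) == P(row i) * P(col j) for every cell."""
    total = sum(sum(row) for row in table)
    row_p = [sum(row) / total for row in table]
    col_p = [sum(row[j] for row in table) / total for j in range(len(table[0]))]
    return all(
        abs(table[i][j] / total - row_p[i] * col_p[j]) <= tol
        for i in range(len(table))
        for j in range(len(table[0]))
    )
```

Applied to the pass/fail table, `independent([[9, 21], [48, 2]])` is `False`: P(male and passed) = 9/80 = 0.1125, while P(male)·P(passed) = (30/80)(57/80) ≈ 0.267, so gender and exam result are not independent.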
{}
# \printnoidxglossaries vs \printglossaries I'm using Texmaker on Windows and have serious trouble using glossaries properly with \printglossaries. After installing and reinstalling ActivePerl nothing has changed: whenever I try to output the glossaries, nothing shows up. So I'm using \printnoidxglossaries. Is that a disadvantage, or can anyone tell me what the possible reason for the dysfunction of \printglossaries could be? My main goal is to create multiple glossaries to show the symbols I'm using in my thesis and the acronyms. Right now I've used a replacement solution (a table) to work around the problem. I searched high and low to find a proper explanation of how to set something like that up, but to no avail. Any help is hugely appreciated!!! A sample of the glossary I want to create (currently as a table): And here the code where I have tried to create multiple glossaries. \documentclass[12pt,twoside,booktabs,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[acronym,toc,shortcuts]{glossaries} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \makenoidxglossaries \setacronymstyle{long-short} % %------Acronym--------- \renewcommand*{\acronymname}{Abkürzungsverzeichnis} \newacronym[shortplural={BLKs},longplural={Belastungskollektive}]{BLK}{BLK}{Belastungskollektiv} \newacronym{DL}{DL}{Dauerlauf} \newacronym[shortplural={Fzg-DL},longplural={Fahrzeugdauerläufen}]{Fzg-DL}{Fzg-DL}{Fahrzeug Dauerlauf} %-----Formel--- \newglossaryentry{re} {% name={$R_e$}, description={Streckgrenze}, symbol={Pa}, sort=streckgrenze, type=formel } \newglossaryentry{rm} {% name={$R_m$}, description={Zugfestigkeit}, symbol={Pa}, sort=Zugfestigkeit, type=formel } \begin{document} \printnoidxglossary[type=acronym] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} • Welcome.
Could you add some working code, called a minimal working example (MWE), which starts with \documentclass and ends with \end{document}? – Bobyandbob Apr 11 '18 at 16:57 • See the glossaries performance page for a comparison, but you don't need Perl. You can use the makeglossaries-lite Lua script instead. See also What can interfere with glossaries to prevent printing? – Nicola Talbot Apr 11 '18 at 17:26 • Hi, thank you for the quick reply. @Nicola: can I simply replace the \makeglossaries command with the \makeglossaries-lite command? Without installing anything else? I tried, but to no avail. I would truly love it if someone could either show here or point me to any paper etc. showing how to generate a glossary/nomenclature in table style? merci – aerioeus Apr 12 '18 at 11:54 • makeglossaries and makeglossaries-lite are scripts, not commands. (It's a bit confusing since there's also a command called \makeglossaries that you use in the document.) – Nicola Talbot Apr 12 '18 at 17:34 • Since you have some responses below that seem to answer your question, please consider marking one of them as 'Accepted' by clicking on the tickmark below their vote count (see How do you accept an answer?). This shows which answer helped you most, and it assigns reputation points to the author of the answer (and to you!). It's part of this site's idea to identify good questions and answers through upvotes and acceptance of answers. – samcarter_is_at_topanswers.xyz Apr 13 '18 at 10:08 There are actually five methods of generating glossary lists (summarised in section 1.1 Indexing Options of the glossaries user manual). The first uses \printnoidxglossaries, the second two use \printglossaries and the last two (which require glossaries-extra) use \printunsrtglossaries. Table 1.1: Glossary Options: Pros and Cons summarizes the advantages and disadvantages of each method.
Using a slightly trimmed version of your MWE, here are all the methods: ## 1.\printnoidxglossaries This method doesn't require any external tools, but it's designed for ASCII sort values and it can significantly slow the document build. If the sort value contains fragile commands, you need to use the sanitizesort setting. MWE: \documentclass[12pt,twoside,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[acronym,toc,shortcuts]{glossaries} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \makenoidxglossaries \setacronymstyle{long-short} %------Acronym--------- \renewcommand*{\acronymname}{Abkürzungsverzeichnis} \newacronym[shortplural={BLKs},longplural={Belastungskollektive}]{BLK}{BLK}{Belastungskollektiv} \newacronym{DL}{DL}{Dauerlauf} \newacronym[shortplural={Fzg-DL},longplural={Fahrzeugdauerläufen}]{Fzg-DL}{Fzg-DL}{Fahrzeug Dauerlauf} %-----Formel--- \newglossaryentry{re} {% name={$R_e$}, description={Streckgrenze}, symbol={Pa}, sort=streckgrenze, type=formel } \newglossaryentry{rm} {% name={$R_m$}, description={Zugfestigkeit}, symbol={Pa}, sort=Zugfestigkeit, type=formel } \begin{document} \printnoidxglossary[type=acronym] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} If the document is called myDoc.tex, then the complete document build process requires: pdflatex myDoc pdflatex myDoc UTF-8 characters, such as ü and ä, won't be correctly sorted. Page 1: Page 3: Since rm hasn't been referenced in the document (with, e.g., \gls{rm}) it doesn't appear in the list. Symbols are often problematic with this method, but since you've used the sort key to assign an alphabetic value (sort=streckgrenze) this shouldn't cause a problem in this case. This method also can't form ranges in the location (page) lists. ## 2. makeindex (\printglossaries) This method uses the helper application makeindex to generate the sorted lists. 
The command \makeglossaries is needed to ensure that the appropriate files are created for makeindex: \documentclass[12pt,twoside,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[acronym,toc,shortcuts]{glossaries} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \makeglossaries \setacronymstyle{long-short} %------Acronym--------- \renewcommand*{\acronymname}{Abkürzungsverzeichnis} \newacronym[shortplural={BLKs},longplural={Belastungskollektive}]{BLK}{BLK}{Belastungskollektiv} \newacronym{DL}{DL}{Dauerlauf} \newacronym[shortplural={Fzg-DL},longplural={Fahrzeugdauerläufen}]{Fzg-DL}{Fzg-DL}{Fahrzeug Dauerlauf} %-----Formel--- \newglossaryentry{re} {% name={$R_e$}, description={Streckgrenze}, symbol={Pa}, sort=streckgrenze, type=formel } \newglossaryentry{rm} {% name={$R_m$}, description={Zugfestigkeit}, symbol={Pa}, sort=Zugfestigkeit, type=formel } \begin{document} \printglossary[type=acronym] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} If the document is called myDoc.tex then the complete document build process is: pdflatex myDoc makeindex -s myDoc.ist -t myDoc.alg -o myDoc.acr myDoc.acn makeindex -s myDoc.ist -t myDoc.ch1 -o myDoc.ch2 myDoc.ch3 pdflatex myDoc Note that there must be a separate makeindex call for each glossary. Since this document has two lists, there must be two makeindex calls. This is quite cumbersome, so the glossaries package provides two scripts to run makeindex the required number of times with the required settings. In both cases the script reads the .aux file to find out what systems calls need to be made. The first script is the makeglossaries Perl script, which needs Perl installed. The build process is now simplified to: pdflatex myDoc makeglossaries myDoc pdflatex myDoc The second script is the makeglossaries-lite Lua script. This is on CTAN as makeglossaries-lite.lua but the TeX distributions may change the extension. 
(For example, TeX Live on Linux creates a symbolic link called makeglossaries-lite without the extension.) So if there's no extension, the build process is: pdflatex myDoc makeglossaries-lite myDoc pdflatex myDoc but if the .lua extension is retained you may need to do: pdflatex myDoc makeglossaries-lite.lua myDoc pdflatex myDoc I suspect the problem that you're having is integrating this step into Texmaker. See Using Texmaker with glossaries on Windows for further help. Another possibility is to use the automake package option. This will try to use TeX's shell escape to run makeindex: \usepackage[acronym,toc,shortcuts,automake]{glossaries} The resulting document is the same as in the previous example. Again, this method isn't designed for UTF-8, as makeindex doesn't have UTF-8 support. Since makeindex isn't aware of LaTeX commands, sort values that contain markup can result in odd ordering. For example, the sort value \emph{word} will be sorted according to the characters \ e m p h { w o r d } which will put it in the symbols group (rather than in the more intuitive W letter group). ## 3.
xindy (\printglossaries) This method is very similar to the makeindex method, from the document code point of view, but it requires the xindy package option: \documentclass[12pt,twoside,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[acronym,toc,shortcuts,xindy]{glossaries} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \makeglossaries \setacronymstyle{long-short} %------Acronym--------- \renewcommand*{\acronymname}{Abkürzungsverzeichnis} \newacronym[shortplural={BLKs},longplural={Belastungskollektive}]{BLK}{BLK}{Belastungskollektiv} \newacronym{DL}{DL}{Dauerlauf} \newacronym[shortplural={Fzg-DL},longplural={Fahrzeugdauerläufen}]{Fzg-DL}{Fzg-DL}{Fahrzeug Dauerlauf} %-----Formel--- \newglossaryentry{re} {% name={$R_e$}, description={Streckgrenze}, symbol={Pa}, sort=streckgrenze, type=formel } \newglossaryentry{rm} {% name={$R_m$}, description={Zugfestigkeit}, symbol={Pa}, sort=Zugfestigkeit, type=formel } \begin{document} \printglossary[type=acronym] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} In this case, \makeglossaries is still needed to create the associated files needed by xindy (an alternative to makeindex), but the xindy package option ensures that the information is written in xindy's format. The build process is now: pdflatex myDoc xindy -L german -C din5007-utf8 -I xindy -M myDoc -t myDoc.ch1 -o myDoc.ch2 myDoc.ch3 xindy -L german -C din5007-utf8 -I xindy -M myDoc -t myDoc.alg -o myDoc.acr myDoc.acn pdflatex myDoc Again, this is quite cumbersome, so you can use the makeglossaries or makeglossaries-lite scripts. In this case, makeglossaries-lite doesn't work so well as it's not as intelligent as makeglossaries, but since xindy is a Perl script, there's no advantage to using makeglossaries-lite in this case. So the best document build is: pdflatex myDoc makeglossaries myDoc pdflatex myDoc In other words, the document build process is effectively the same as for the previous example. 
This method has the advantage over the previous two methods in that it supports UTF-8 and non-English languages, so it should correctly order German words. The disadvantage with this method is its lack of support for symbols. Xindy strips all LaTeX commands and braces from the sort value, which is usually desirable (for example, if you have a sort value of \emph{word} it's good that xindy treats this as just word), but it causes a problem when the entire sort value consists solely of commands. For example, \ensuremath{\alpha} devolves into an empty string, which xindy doesn't like. The other problem is that xindy merges entries with identical sort values, so if stripping commands causes the sort value of one entry to become identical to another, then the entries will be merged. You've used the sort key in your symbols (such as sort=streckgrenze) so this isn't a problem. ## 4. bib2gls (\printunsrtglossaries) This method requires the glossaries-extra extension package and the bib2gls helper application (which requires Java). This uses a different approach to the other methods. All entries are defined in .bib files.
So you might have the file abbreviations.bib that contains: % Encoding: UTF-8 @abbreviation{BLK, short = {BLK}, long = {Belastungskollektiv}, longplural = {Belastungskollektive} } @abbreviation{DL, short = {DL}, long = {Dauerlauf} } @abbreviation{Fzg-DL, short = {Fzg-DL}, shortplural = {Fzg-DL}, long = {Fahrzeug Dauerlauf}, longplural = {Fahrzeugdauerläufen} } and symbols.bib that contains: % Encoding: UTF-8 @symbol{re, name={$R_e$}, description={Streckgrenze}, symbol={Pa} } @symbol{rm, name={$R_m$}, description={Zugfestigkeit}, symbol={Pa} } The document code is much simpler now: \documentclass[12pt,twoside,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[abbreviations,shortcuts,record]{glossaries-extra} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \setabbreviationstyle{long-short} src={abbreviations}% entries defined in abbreviations.bib ] src={symbols},% entries defined in abbreviations.bib type = formel, % put these entries in the 'formel' list sort-field=description % sort according to the 'description' field ] \begin{document} \printunsrtglossary[type=abbreviations,title=Abkürzungsverzeichnis] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} The document build process is now: pdflatex myDoc bib2gls myDoc pdflatex myDoc So again you need to find a way to integrate a helper application into your build process. This method has the advantage over the first two in that it supports UTF-8 and non-English sorting. (The document language setting is picked up from the .aux file.) It also has the advantage over xindy in that bib2gls allows empty and identical sort values, but it also has a limited understanding of some basic kernel symbol commands. For example, it will convert \ensuremath{\alpha} into the mathematical Greek lower case alpha 𝛼. As illustrated in the above (sort-field=description), you can also sort according to a different field, if that provides a more appropriate order. ## 5. 
\printunsrtglossaries (no sorting) This final method doesn't do any sorting or indexing. All defined entries are listed, regardless of whether they've been used in the document. Entries are listed in order of definition: \documentclass[12pt,twoside,a4paper]{book} \usepackage[ngerman]{babel} \usepackage[utf8]{inputenc} \usepackage[abbreviations,shortcuts,sort=none]{glossaries-extra} \newglossary[ch1]{formel}{ch2}{ch3}{Formelverzeichnis} \setabbreviationstyle{long-short} \newacronym[shortplural={BLKs},longplural={Belastungskollektive}]{BLK}{BLK}{Belastungskollektiv} \newacronym{DL}{DL}{Dauerlauf} \newacronym[shortplural={Fzg-DL},longplural={Fahrzeugdauerläufen}]{Fzg-DL}{Fzg-DL}{Fahrzeug Dauerlauf} \newglossaryentry{re} {% name={$R_e$}, description={Streckgrenze}, symbol={Pa}, type=formel } \newglossaryentry{rm} {% name={$R_m$}, description={Zugfestigkeit}, symbol={Pa}, type=formel } \begin{document} \printunsrtglossary[type=abbreviations,title=Abkürzungsverzeichnis] \newpage \gls{BLK} \gls{DL} \gls{Fzg-DL} \gls{re} \end{document} The document build process is simply: pdflatex myDoc Page 1 now looks like: Page 3 now looks like: Note that there are no location (page) lists. ## Summary • Options 1 and 5 are the simplest as they don't require any external tools. • Options 2, 3 and 4 all require an external tool which needs to be incorporated into your build process. How you do this depends on your text editor. • Option 2 doesn't require any additional software (makeindex is precompiled and available with all modern TeX distributions). • Option 3 requires Perl. • Option 4 requires Java. • Options 3 and 4 work best for UTF-8 non-English sort values. For help integrating the external tools into your document build, see Incorporating makeglossaries or makeglossaries-lite or bib2gls into the document build. • Wow Nicola, what a detailed answer. I truly thank you, although I need to digest it first all tomorrow and might get back to you if I dont get something. However. 
thanks a lot! A – aerioeus Apr 12 '18 at 20:00 • Ah, one question directly: how can I let the Glossary (Formelverzeichnis) look like a table header with a line above and below the word "Formelverzeichnis"? merci in advance A – aerioeus Apr 12 '18 at 20:01 • @aerioeus That might be better asked as a follow-up question (e.g. "How to change the glossary section header style?") Basically, the header is added using \glossarysection[\glossarytoctitle]{\glossarytitle} where \glossarysection by default either uses \section* or \chapter*. (In this example, it uses \chapter*.) So it's either a case of changing the style of unnumbered chapters or redefining \glossarysection to use a custom format. – Nicola Talbot Apr 12 '18 at 20:11 • O.k., then I will start a follow up question - merci again. 👍🏻 – aerioeus Apr 12 '18 at 20:17
{}
How can I get the list of files in a directory using C or C++? How can I determine the list of files in a directory from inside my C or C++ code? I'm not allowed to execute the ls command and parse the results from within my program. • This is a duplicate of 609236 Mar 4, 2009 at 20:35 • Dec 25, 2014 at 1:54 • @chrish - Yea but this one has the classic "I'm not allowed to execute the 'ls'"! It's exactly how I'd feel 1st year of Computer Science. ;D <3 x Oct 22, 2016 at 11:50 • C and C++ are not the same language. Therefore, the procedure to accomplish this task will be different in both languages. Please chose one and re-tag accordingly. Mar 2, 2017 at 21:58 • And neither of those languages (other than C++ since C++17) even has a concept of a directory - so any answer is likely to be dependent on your OS, or on any abstraction libraries you might be using. Feb 20, 2018 at 11:38 UPDATE 2017: In C++17 there is now an official way to list files of your file system: std::filesystem. There is an excellent answer from Shreevardhan below with this source code: #include <string> #include <iostream> #include <filesystem> namespace fs = std::filesystem; int main() { std::string path = "/path/to/directory"; for (const auto & entry : fs::directory_iterator(path)) std::cout << entry.path() << std::endl; } In small and simple tasks I do not use boost, I use dirent.h. It is available as a standard header in UNIX, and also available for Windows via a compatibility layer created by Toni Ronkko. DIR *dir; struct dirent *ent; if ((dir = opendir ("c:\\src\\")) != NULL) { /* print all the files and directories within directory */ while ((ent = readdir (dir)) != NULL) { printf ("%s\n", ent->d_name); } closedir (dir); } else { /* could not open directory */ perror (""); return EXIT_FAILURE; } It is just a small header file and does most of the simple stuff you need without using a big template-based approach like boost (no offence, I like boost!). 
• @ArtOfWarfare: tinydir was not even created when this question was answered. Also it is a wrapper around dirent (POSIX) and FindFirstFile (Windows) , while dirent.h just wraps dirent for windows. I think it is a personal taste, but dirent.h feels more as a standard May 20, 2013 at 19:13 • @JoshC: because *ent is just a returned pointer of the internal representation. by closing the directory you will eliminate the *ent as well. As the *ent is only for reading, this is a sane design, i think. Sep 25, 2014 at 11:41 • people get real!! this is a question from 2009 and it has not even mentioned VS. So do not criticize that your full proprietary (although quite nice) IDE is not supporting centuries old OS standards. Also my answer said it is "available" for windows, not "included" in any IDE from now and for all times ... I am pretty sure you can download dirent and put it in some include dir and voila there it is. Apr 15, 2016 at 9:43 • The answer is misleading. It should begin with: "...I use dirent.h, for which a Windows open-source compatibility layer also exists". Jul 3, 2016 at 19:43 • With C++14 there is std::experimental::filesystem, with C++17 there is std::filesystem. See answer of Shreevardhan below. So no need for 3rd party libraries. Apr 13, 2017 at 7:49 C++17 now has a std::filesystem::directory_iterator, which can be used as #include <string> #include <iostream> #include <filesystem> namespace fs = std::filesystem; int main() { std::string path = "/path/to/directory"; for (const auto & entry : fs::directory_iterator(path)) std::cout << entry.path() << std::endl; } Also, std::filesystem::recursive_directory_iterator can iterate the subdirectories as well. • AFAIK can also be used in C++14, but there it is still experimental: namespace fs = std::experimental::filesystem; . It seems to work ok though. 
Jul 11, 2016 at 8:03 • This should be the preferred answer for current use (starting with C++17) Jan 3, 2017 at 18:28 • Heed when passing std::filesystem::path to std::cout, the quotation marks are included in the output. To avoid that, append .string() to the path to do an explicit instead of an implicit conversion (here std::cout << p.string() << std::endl;). Example: coliru.stacked-crooked.com/view?id=a55ea60bbd36a8a3 Apr 13, 2017 at 9:04 • What about NON-ASCII characters in file names? Shouldn't std::wstring be used or what's the type from the iterator? Jan 19, 2018 at 13:46 • I'm not sure if I'm alone in this, but without linking to -lstdc++fs, I'd get a SIGSEGV (Address boundary error). I couldn't find anywhere in the documentation that this was required, and linker didn't give any clue either. This worked for both g++ 8.3.0 and clang 8.0.0-3. Does anyone have any insight is to where things like this is specified in the docs/specs? Jul 2, 2019 at 10:16 Unfortunately the C++ standard does not define a standard way of working with files and folders in this way. Since there is no cross platform way, the best cross platform way is to use a library such as the boost filesystem module. Cross platform boost method: The following function, given a directory path and a file name, recursively searches the directory and its sub-directories for the file name, returning a bool, and if successful, the path to the file that was found. 
bool find_file(const path & dir_path, // in this directory, const std::string & file_name, // search for this name, path & path_found) // placing path here if found { if (!exists(dir_path)) return false; directory_iterator end_itr; // default construction yields past-the-end for (directory_iterator itr(dir_path); itr != end_itr; ++itr) { if (is_directory(itr->status())) { if (find_file(itr->path(), file_name, path_found)) return true; } else if (itr->leaf() == file_name) // see below { path_found = itr->path(); return true; } } return false; } Source from the boost page mentioned above. For Unix/Linux based systems: You can use opendir / readdir / closedir. Sample code which searches a directory for entry name'' is: len = strlen(name); dirp = opendir("."); while ((dp = readdir(dirp)) != NULL) if (dp->d_namlen == len && !strcmp(dp->d_name, name)) { (void)closedir(dirp); return FOUND; } (void)closedir(dirp); return NOT_FOUND; Source code from the above man pages. For a windows based systems: You can use the Win32 API FindFirstFile / FindNextFile / FindClose functions. The following C++ example shows you a minimal use of FindFirstFile. #include <windows.h> #include <tchar.h> #include <stdio.h> void _tmain(int argc, TCHAR *argv[]) { WIN32_FIND_DATA FindFileData; HANDLE hFind; if( argc != 2 ) { _tprintf(TEXT("Usage: %s [target_file]\n"), argv[0]); return; } _tprintf (TEXT("Target file is %s\n"), argv[1]); hFind = FindFirstFile(argv[1], &FindFileData); if (hFind == INVALID_HANDLE_VALUE) { printf ("FindFirstFile failed (%d)\n", GetLastError()); return; } else { _tprintf (TEXT("The first file found is %s\n"), FindFileData.cFileName); FindClose(hFind); } } Source code from the above msdn pages. • Usage: FindFirstFile(TEXT("D:\\IMAGE\\MYDIRECTORY\\*"), &findFileData); Aug 11, 2016 at 0:47 • With C++14 there is std::experimental::filesystem, with C++17 there is std::filesystem, which have similar functionality as boost (the libs are derived from boost). 
See answer of Shreevardhan below. Apr 13, 2017 at 7:51 • For windows, refer to learn.microsoft.com/en-us/windows/desktop/FileIO/… for details Nov 12, 2018 at 13:43 One function is enough, you don't need to use any 3rd-party library (for Windows). #include <Windows.h> vector<string> get_all_files_names_within_folder(string folder) { vector<string> names; string search_path = folder + "/*.*"; WIN32_FIND_DATA fd; HANDLE hFind = ::FindFirstFile(search_path.c_str(), &fd); if(hFind != INVALID_HANDLE_VALUE) { do { // read all (real) files in current folder // , delete '!' read other 2 default folder . and .. if(! (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) ) { names.push_back(fd.cFileName); } }while(::FindNextFile(hFind, &fd)); ::FindClose(hFind); } return names; } PS: as mentioned by @Sebastian, you could change *.* to *.ext in order to get only the EXT-files (i.e. of a specific type) in that directory. • This solution if platform-specific. That is the reason you need 3rd-party libraries. May 29, 2014 at 13:14 • @kraxor Yes, it only works in Windows, but OP never asks to have a cross-platform solution. BTW, I always prefer to choose something without using 3rd-libraries (if possible). May 29, 2014 at 14:13 • @herohuyongtao OP never specified a platform, and giving a heavily platform-dependent solution to a generic question can be misleading. (What if there is a one-line solution that works only on PlayStation 3? Is that a good answer here?) I see you edited your answer to state that it only works on Windows, I guess it's fine this way. May 29, 2014 at 17:20 • @herohuyongtao OP mentioned he can't parse ls, meaning he is probably on unix.. anyway, good answer for Windows. Sep 18, 2014 at 21:20 • I ended up using a std::vector<std::wstring> and then fileName.c_str() instead of a vector of strings, which wouldn't compile. Jul 12, 2016 at 15:38 For a C only solution, please check this out. 
It only requires an extra header: https://github.com/cxong/tinydir tinydir_dir dir; tinydir_open(&dir, "/path/to/dir"); while (dir.has_next) { tinydir_file file; printf("%s", file.name); if (file.is_dir) { printf("/"); } printf("\n"); tinydir_next(&dir); } tinydir_close(&dir); • It's portable - wraps POSIX dirent and Windows FindFirstFile • It uses readdir_r where available, which means it's (usually) threadsafe • Supports Windows UTF-16 via the same UNICODE macros • It is C90 so even very ancient compilers can use it • Very nice suggestion. I haven't tested it on a windows computer yet but it works brilliantly on OS X. May 18, 2013 at 23:15 • The library doesn't support std::string, so you can't pass file.c_str() to the tinydir_open. It givers error C2664 during compilation on msvc 2015 in this case. Oct 18, 2017 at 9:20 • @StepanYakovenko the author stated clearly that "For a C only solution" – user5125586 Sep 3, 2021 at 11:12 I recommend using glob with this reusable wrapper. It generates a vector<string> corresponding to file paths that fit the glob pattern: #include <glob.h> #include <vector> using std::vector; vector<string> globVector(const string& pattern){ glob_t glob_result; glob(pattern.c_str(),GLOB_TILDE,NULL,&glob_result); vector<string> files; for(unsigned int i=0;i<glob_result.gl_pathc;++i){ files.push_back(string(glob_result.gl_pathv[i])); } globfree(&glob_result); return files; } Which can then be called with a normal system wildcard pattern such as: vector<string> files = globVector("./*"); • Test that glob() returns zero. May 21, 2015 at 16:51 • I would like to use glob.h as you recommended. But still, I can't include the .h file : It says No such file or directory. Can you tell me how to solve this issue please ? Feb 23, 2016 at 10:36 • Note that this routine goes only one level deep (no recursion). 
It also doesn't do a quick check to determine whether it's a file or directory, which you can do easily by switching GLOB_TILDE with GLOB_TILDE | GLOB_MARK and then checking for paths ending in a slash. You'll have to make either modification to it if you need that. May 15, 2016 at 17:16 • Is this cross-platform compatible? Jul 1, 2018 at 7:11 • Unfortunately you cannot find uniformly hidden files via glob. Oct 5, 2018 at 11:30 I think, below snippet can be used to list all the files. #include <stdio.h> #include <dirent.h> #include <sys/types.h> int main(int argc, char** argv) { list_dir("myFolderName"); return EXIT_SUCCESS; } static void list_dir(const char *path) { struct dirent *entry; DIR *dir = opendir(path); if (dir == NULL) { return; } while ((entry = readdir(dir)) != NULL) { printf("%s\n",entry->d_name); } closedir(dir); } This is the structure used (present in dirent.h): struct dirent { ino_t d_ino; /* inode number */ off_t d_off; /* offset to the next dirent */ unsigned short d_reclen; /* length of this record */ unsigned char d_type; /* type of file */ char d_name[256]; /* filename */ }; • I'd like this one. Apr 28, 2019 at 9:25 • This did the job for me in C++11 without having to use Boost etc. Good solution! – Nav Dec 12, 2020 at 16:56 • This was nice! In what order am I supposed to get the files? May 27, 2021 at 0:56 Here is a very simple code in C++11 using boost::filesystem library to get file names in a directory (excluding folder names): #include <string> #include <iostream> #include <boost/filesystem.hpp> using namespace std; using namespace boost::filesystem; int main() { path p("D:/AnyFolder"); for (auto i = directory_iterator(p); i != directory_iterator(); i++) { if (!is_directory(i->path())) //we eliminate directories { cout << i->path().filename().string() << endl; } else continue; } } Output is like: file1.txt file2.dat • Hi, and where can I get this library? 
Jul 8, 2015 at 19:30 • @Alexander De Leon: You can get this library at their site boost.org, read getting started guide first, then use their boost::filesystem library boost.org/doc/libs/1_58_0/libs/filesystem/doc/index.htm Jul 8, 2015 at 20:02 • @Bad how would I change this to output the complete directory for each file. like I want D:/AnyFolder/file1.txt and so on? Apr 19, 2021 at 22:40 Why not use glob()? #include <glob.h> glob_t glob_result; glob("/your_directory/*",GLOB_TILDE,NULL,&glob_result); for(unsigned int i=0; i<glob_result.gl_pathc; ++i){ cout << glob_result.gl_pathv[i] << endl; } • This could be a better answer if you explain the required includes. May 15, 2016 at 0:46 • Test that glob() returns zero! May 20, 2016 at 18:58 • This is good when you know the file you are looking for such as *.txt Jul 21, 2016 at 17:35 Try boost for x-platform method http://www.boost.org/doc/libs/1_38_0/libs/filesystem/doc/index.htm or just use your OS specific file stuff. • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - From Review Jun 28, 2018 at 12:26 • @ice1000 Seriously? This Q&A is from 2009 – Tim Jun 28, 2018 at 13:30 Check out this class which uses the win32 api. Just construct an instance by providing the foldername from which you want the listing then call the getNextFile method to get the next filename from the directory. I think it needs windows.h and stdio.h. class FileGetter{ WIN32_FIND_DATAA found; HANDLE hfind; char folderstar[255]; int chk; public: FileGetter(char* folder){ sprintf(folderstar,"%s\\*.*",folder); hfind = FindFirstFileA(folderstar,&found); //skip . FindNextFileA(hfind,&found); } int getNextFile(char* fname){ //skips .. when called for the first time chk=FindNextFileA(hfind,&found); if (chk) strcpy(fname, found.cFileName); return chk; } }; • Where will you close handle? 
Oct 22, 2020 at 23:03 GNU Manual FTW http://www.gnu.org/software/libc/manual/html_node/Simple-Directory-Lister.html#Simple-Directory-Lister Also, sometimes it's good to go right to the source (pun intended). You can learn a lot by looking at the innards of some of the most common commands in Linux. I've set up a simple mirror of GNU's coreutils on github (for reading). https://github.com/homer6/gnu_coreutils/blob/master/src/ls.c Maybe this doesn't address Windows, but a number of cases of using Unix variants can be had by using these methods. Hope that helps... Shreevardhan answer works great. But if you want to use it in c++14 just make a change namespace fs = experimental::filesystem; i.e., #include <string> #include <iostream> #include <filesystem> using namespace std; namespace fs = experimental::filesystem; int main() { string path = "C:\\splits\\"; for (auto & p : fs::directory_iterator(path)) cout << p << endl; int n; cin >> n; } #include <string> #include <iostream> #include <filesystem> namespace fs = std::filesystem; int main() { std::string path = "/path/to/directory"; for (const auto & entry : fs::directory_iterator(path)) std::cout << entry.path() << std::endl; } #include <windows.h> #include <iostream> #include <string> #include <vector> using namespace std; string wchar_t2string(const wchar_t *wchar) { string str = ""; int index = 0; while(wchar[index] != 0) { str += (char)wchar[index]; ++index; } return str; } wchar_t *string2wchar_t(const string &str) { wchar_t wchar[260]; int index = 0; while(index < str.size()) { wchar[index] = (wchar_t)str[index]; ++index; } wchar[index] = 0; return wchar; } vector<string> listFilesInDirectory(string directoryName) { WIN32_FIND_DATA FindFileData; wchar_t * FileName = string2wchar_t(directoryName); HANDLE hFind = FindFirstFile(FileName, &FindFileData); vector<string> listFileNames; listFileNames.push_back(wchar_t2string(FindFileData.cFileName)); while (FindNextFile(hFind, &FindFileData)) 
listFileNames.push_back(wchar_t2string(FindFileData.cFileName)); return listFileNames; } void main() { vector<string> listFiles; listFiles = listFilesInDirectory("C:\\*.txt"); for each (string str in listFiles) cout << str << endl; } • -1. string2wchar_t returns the address of a local variable. Also, you should probably use the conversion methods available in WinAPI instead of writing your own ones. Oct 23, 2013 at 7:30 char **getKeys(char *data_dir, char* tablename, int *num_keys) { char** arr = malloc(MAX_RECORDS_PER_TABLE*sizeof(char*)); int i = 0; for (;i < MAX_RECORDS_PER_TABLE; i++) arr[i] = malloc( (MAX_KEY_LEN+1) * sizeof(char) ); char *buf = (char *)malloc( (MAX_KEY_LEN+1)*sizeof(char) ); snprintf(buf, MAX_KEY_LEN+1, "%s/%s", data_dir, tablename); DIR* tableDir = opendir(buf); struct dirent* getInfo; i = 0; while(1) { if (getInfo == 0) break; strcpy(arr[i++], getInfo->d_name); } *(num_keys) = i; return arr; } This implementation realizes your purpose, dynamically filling an array of strings with the content of the specified directory. int exploreDirectory(const char *dirpath, char ***list, int *numItems) { struct dirent **direntList; int i; errno = 0; if ((*numItems = scandir(dirpath, &direntList, NULL, alphasort)) == -1) return errno; if (!((*list) = malloc(sizeof(char *) * (*numItems)))) { fprintf(stderr, "Error in list allocation for file list: dirpath=%s.\n", dirpath); exit(EXIT_FAILURE); } for (i = 0; i < *numItems; i++) { (*list)[i] = stringDuplication(direntList[i]->d_name); } for (i = 0; i < *numItems; i++) { free(direntList[i]); } free(direntList); return 0; } • How would I call this? I'm getting segfaults when I try to run this function on the first if block. I'm calling it with char **list; int numItems; exploreDirectory("/folder",list, numItems); Feb 21, 2017 at 15:17 This works for me. I'm sorry if I cannot remember the source. It is probably from a man page. 
#include <ftw.h> int AnalizeDirectoryElement (const char *fpath, const struct stat *sb, int tflag, struct FTW *ftwbuf) { if (tflag == FTW_F) { std::string strFileName(fpath); DoSomethingWith(strFileName); } return 0; } void WalkDirectoryTree (const char * pchFileName) { int nFlags = 0; if (nftw(pchFileName, AnalizeDirectoryElement, 20, nFlags) == -1) { perror("nftw"); } } int main() { WalkDirectoryTree("some_dir/"); } you can get all direct of files in your root directory by using std::experimental:: filesystem::directory_iterator(). Then, read the name of these pathfiles. #include <iostream> #include <filesystem> #include <string> #include <direct.h> using namespace std; namespace fs = std::experimental::filesystem; void ShowListFile(string path) { for(auto &p: fs::directory_iterator(path)) /*get directory */ cout<<p.path().filename()<<endl; // get file name } int main() { ShowListFile("C:/Users/dell/Pictures/Camera Roll/"); getchar(); return 0; } This answer should work for Windows users that have had trouble getting this working with Visual Studio with any of the other answers. 1. Download the dirent.h file from the github page. But is better to just use the Raw dirent.h file and follow my steps below (it is how I got it to work). Github page for dirent.h for Windows: Github page for dirent.h Raw Dirent File: Raw dirent.h File 4. Include "dirent.h" in your code. 5. Put the below void filefinder() method in your code and call it from your main function or edit the function how you want to use it. 
#include <stdio.h> #include <string.h> #include "dirent.h" string path = "C:/folder"; //Put a valid path here for folder void filefinder() { DIR *directory = opendir(path.c_str()); struct dirent *direntStruct; if (directory != NULL) { printf("File Name: %s\n", direntStruct->d_name); //If you are using <stdio.h> //std::cout << direntStruct->d_name << std::endl; //If you are using <iostream> } } closedir(directory); } I tried to follow the example given in both answers and it might be worth noting that it appears as though std::filesystem::directory_entry has been changed to not have an overload of the << operator. Instead of std::cout << p << std::endl; I had to use the following to be able to compile and get it working: #include <iostream> #include <filesystem> #include <string> namespace fs = std::filesystem; int main() { std::string path = "/path/to/directory"; for(const auto& p : fs::directory_iterator(path)) std::cout << p.path() << std::endl; } trying to pass p on its own to std::cout << resulted in a missing overload error. Peter Parker's solution, but without using for: #include <algorithm> #include <filesystem> #include <ranges> #include <vector> using namespace std; int main() { vector<filesystem::path> filePaths; ranges::transform(filesystem::directory_iterator("."), back_inserter(filePaths), [](const auto& dirFile){return dirFile.path();} ); } System call it! system( "dir /b /s /a-d * > file_names.txt" ); EDIT: This answer should be considered a hack, but it really does work (albeit in a platform specific way) if you don't have access to more elegant solutions. • I'm not allowed to execute the 'ls' command and parse the results from within my program. I knew there would be someone that would send something like this... – yyny Apr 26, 2015 at 15:37 • For Windows, this is by far the most pragmatic way. Pay special attention to the /A switch. Whichever is the way you choose, security can seriously get-in-a-way here. 
If one is not "coding it in" from the start. Windows impersonations, authentications and other "deserts" are never easy to get right. Dec 2, 2020 at 11:36 Since files and sub directories of a directory are generally stored in a tree structure, an intuitive way is to use DFS algorithm to recursively traverse each of them. Here is an example in windows operating system by using basic file functions in io.h. You can replace these functions in other platform. What I want to express is that the basic idea of DFS perfectly meets this problem. #include<io.h> #include<iostream.h> #include<string> using namespace std; void TraverseFilesUsingDFS(const string& folder_path){ _finddata_t file_info; string any_file_pattern = folder_path + "\\*"; intptr_t handle = _findfirst(any_file_pattern.c_str(),&file_info); //If folder_path exsist, using any_file_pattern will find at least two files "." and "..", //of which "." means current dir and ".." means parent dir if (handle == -1){ cerr << "folder path not exist: " << folder_path << endl; exit(-1); } //iteratively check each file or sub_directory in current folder do{ string file_name=file_info.name; //from char array to string //check whtether it is a sub direcotry or a file if (file_info.attrib & _A_SUBDIR){ if (file_name != "." && file_name != ".."){ string sub_folder_path = folder_path + "\\" + file_name; TraverseFilesUsingDFS(sub_folder_path); cout << "a sub_folder path: " << sub_folder_path << endl; } } else cout << "file name: " << file_name << endl; } while (_findnext(handle, &file_info) == 0); // _findclose(handle); } Building on what herohuyongtao posted and a few other posts: http://www.cplusplus.com/forum/general/39766/ What is the expected input type of FindFirstFile? How to convert wstring into string? This is a Windows solution. Since I wanted to pass in std::string and return a vector of strings I had to make a couple conversions. 
#include <string> #include <Windows.h> #include <vector> #include <locale> #include <codecvt> std::vector<std::string> listFilesInDir(std::string path) { std::vector<std::string> names; //Convert string to wstring std::wstring search_path = std::wstring_convert<std::codecvt_utf8<wchar_t>>().from_bytes(path); WIN32_FIND_DATA fd; HANDLE hFind = FindFirstFile(search_path.c_str(), &fd); if (hFind != INVALID_HANDLE_VALUE) { do { // read all (real) files in current folder // , delete '!' read other 2 default folder . and .. if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) { //convert from wide char to narrow char array char ch[260]; char DefChar = ' '; WideCharToMultiByte(CP_ACP, 0, fd.cFileName, -1, ch, 260, &DefChar, NULL); names.push_back(ch); } } while (::FindNextFile(hFind, &fd)); ::FindClose(hFind); } return names; } • If you know that you will be only using multibyte you could use WIN32_FIND_DATAA, FindFirstFileA and FindNextFileA. Then There will be no need to convert result to multibyte or Input to unicode. Dec 6, 2019 at 6:49 • Just advice: std::wstring_convert is deprecated (a few years ago now). if you are using OS in some variety of English, perhaps this might be a good enough replacement, .. other than that vector of strings, and I assume with c++ exceptions in use, is the sure way to the largest and slowest solution. unless you use some of the few very good, std lib replacements ... 
Dec 2, 2020 at 11:49 #include <vector> #include <string> #include <algorithm> #ifdef _WIN32 #include <windows.h> std::vector<std::string> files_in_directory(std::string path) { std::vector<std::string> files; // check directory exists char fullpath[MAX_PATH]; GetFullPathName(path.c_str(), MAX_PATH, fullpath, 0); std::string fp(fullpath); if (GetFileAttributes(fp.c_str()) != FILE_ATTRIBUTE_DIRECTORY) return files; // get file names WIN32_FIND_DATA findfiledata; HANDLE hFind = FindFirstFile((LPCSTR)(fp + "\\*").c_str(), &findfiledata); if (hFind != INVALID_HANDLE_VALUE) { do { files.push_back(findfiledata.cFileName); } while (FindNextFile(hFind, &findfiledata)); FindClose(hFind); } // delete current and parent directories files.erase(std::find(files.begin(), files.end(), ".")); files.erase(std::find(files.begin(), files.end(), "..")); // sort in alphabetical order std::sort(files.begin(), files.end()); return files; } #else #include <dirent.h> std::vector<std::string> files_in_directory(std::string directory) { std::vector<std::string> files; // open directory DIR *dir; dir = opendir(directory.c_str()); if (dir == NULL) return files; // get file names struct dirent *ent; while ((ent = readdir(dir)) != NULL) files.push_back(ent->d_name); closedir(dir); // delete current and parent directories files.erase(std::find(files.begin(), files.end(), ".")); files.erase(std::find(files.begin(), files.end(), "..")); // sort in alphabetical order std::sort(files.begin(), files.end()); return files; } #endif // _WIN32 • With C++17 we should use std::filesystem::directory_iterator and similar. Sep 22, 2021 at 10:27 • @0xC0000022L Sure. This is a cross-platform solution for those who do not have c++17 support. Sep 22, 2021 at 11:14 • This is hardly cross-platform. Alone the Windows implementation doesn't account for _UNICODE being defined. And besides this is going blow up in the face of of a user in really big directories. 
There's a reason why most (underlying) APIs are already based on an iterator model as opposed to fetching a huge list all at once. That said, this is certainly a start. But quite frankly I'd probably rewrite the Windows portion to behave like readdir() and friends, as this means a single interface which is more flexible than the one you offer. Sep 22, 2021 at 12:08
• @0xC0000022L Thanks for the feedback. I used this piece of code in my small projects where there are not many files, and the platform is either Windows or Ubuntu. The code does not belong to me. (I should have cited the sources.) This is a simple solution for most situations. I posted this to refer to later and to share with others. As C++17 is widely used nowadays, this post is no longer needed. However, if you think it is a good idea to keep a non-modern solution without 3rd-party libraries, I encourage you to post a new answer, in which case I will delete this one. Sep 22, 2021 at 18:12

Shreevardhan's design also works great for traversing subdirectories:

#include <string>
#include <iostream>
#include <filesystem>
using namespace std;
namespace fs = filesystem;

int main()
{
    string path = "\\path\\to\\directory";
    // string path = "/path/to/directory";
    for (auto & p : fs::recursive_directory_iterator(path))
        cout << p.path() << endl;
}

Compilation:

cl /EHsc /W4 /WX /std:c++17 ListFiles.cpp

Simply in Linux, use the following ANSI-C-style code:

#include <bits/stdc++.h>
#include <dirent.h>
using namespace std;

int main(){
    DIR *dpdf;
    struct dirent *epdf;
    dpdf = opendir("./");
    if (dpdf != NULL){
        // loop over the directory entries; the original snippet was
        // missing this readdir() loop
        while ((epdf = readdir(dpdf)) != NULL){
            cout << epdf->d_name << std::endl;
        }
        closedir(dpdf);
    }
    return 0;
}

Hope this helps!

Just something that I want to share, and thank you for the reading material. Play around with the function for a bit to understand it. You may like it. e stands for extension, p is for path, and s is for the path separator. If the path is passed without an ending separator, a separator will be appended to the path.
For the extension: if an empty string is given, the function will return any file that does not have an extension in its name. If a single star is given, then all files in the directory will be returned. If e is non-empty but is not a single *, then a dot will be prepended to e unless e already contains a dot at position zero.

For the return value: if a zero-length map is returned, nothing was found but the directory was opened okay. If index 999 is present in the return value and the map size is only 1, there was a problem opening the directory path.

Note that for efficiency, this function can be split into 3 smaller functions. On top of that, you can create a caller function that detects which function to call based on the input. Why is that more efficient? Say you are going to grab everything that is a file: with that approach, the subfunction built for grabbing all files just grabs everything that is a file and does not need to evaluate any other unnecessary condition every time it finds a file. The same applies to grabbing files that do not have an extension. A function built specifically for that purpose would only check whether the object found is a file, and then whether the file's name contains a dot. The saving may not be much if you only read directories with few files, but if you are reading a massive number of directories, or if a directory has a couple hundred thousand files, it could be a huge saving.

#include <stdio.h>
#include <sys/stat.h>
#include <iostream>
#include <dirent.h>
#include <map>

std::map<int, std::string> getFile(std::string p, std::string e = "", unsigned char s = '/'){
    if ( p.size() > 0 && p.back() != s ) p += s;
    if ( e.size() > 0 ){
        if ( e.at(0) != '.' && !(e.size() == 1 && e.at(0) == '*') ) e = "." + e;
    }

    DIR *dir;
    struct dirent *ent;
    struct stat sb;
    std::map<int, std::string> r = {{999, "FAILED"}};
    std::string temp;
    int f = 0;
    bool fd;

    if ( (dir = opendir(p.c_str())) != NULL ){
        r.erase(999);
        while ((ent = readdir(dir)) != NULL){
            temp = ent->d_name;
            fd = temp.find(".") != std::string::npos;
            temp = p + temp;
            if (stat(temp.c_str(), &sb) == 0 && S_ISREG(sb.st_mode)){
                if ( e.size() == 1 && e.at(0) == '*' ){
                    // a single '*' matches every regular file
                    r[f] = temp; f++;
                } else {
                    if (e.size() == 0){
                        // empty extension: match files without a dot in the name
                        if ( !fd ){ r[f] = temp; f++; }
                        continue;
                    }
                    if (e.size() > temp.size()) continue;
                    // match on the file name's suffix
                    if ( temp.substr(temp.size() - e.size()) == e ){ r[f] = temp; f++; }
                }
            }
        }
        closedir(dir);
    }
    return r;
}

// template instead of an 'auto' parameter, which requires C++20
template <typename M>
void printMap(const M &m){
    for (const auto &p : m) {
        std::cout << "m[" << p.first << "] = " << p.second << std::endl;
    }
}

int main(){
    std::map<int, std::string> k = getFile("./", "");
    printMap(k);
    return 0;
}

#include <iostream>
#include <dirent.h>
using namespace std;

char ROOT[] = ".";  // must be a null-terminated string, not {'.'}

void listfiles(char* path){
    DIR * dirp = opendir(path);
    if (dirp == NULL) return;
    dirent * dp;
    while ( (dp = readdir(dirp)) != NULL ) {
        // note: d_reclen is the directory record length, not the file size
        cout << dp->d_name << " size " << dp->d_reclen << std::endl;
    }
    (void)closedir(dirp);
}

int main(int argc, char **argv)
{
    char* path;
    if (argc > 1) path = argv[1];
    else path = ROOT;
    cout << "list files in [" << path << "]" << std::endl;
    listfiles(path);
    return 0;
}
# Efficient Cryptographic Password Hardening Services From Partially Oblivious Commitments

### Abstract

Password authentication still constitutes the most widespread authentication concept on the Internet today, but the human incapability to memorize safe passwords has left this concept vulnerable to various attacks ever since. Affected enterprises such as Facebook now strive to mitigate such attacks by involving external cryptographic services that harden passwords. Everspaugh et al. provided the first comprehensive formal treatment of such a service, and proposed the $\mathrm{P{\small YTHIA}}$ PRF-Service as a cryptographically secure solution (Usenix Security’15). $\mathrm{P{\small YTHIA}}$ relies on a novel cryptographic primitive called partially oblivious pseudorandom functions, and its security is proven under a strong new interactive assumption in the random oracle model.

In this work, we first prove that this strong assumption is inherently necessary for the $\mathrm{P{\small YTHIA}}$ construction, i.e., it cannot be weakened without invalidating the security of $\mathrm{P{\small YTHIA}}$. More generally, it is impossible to reduce the security of $\mathrm{P{\small YTHIA}}$ to any non-interactive assumptions. Hence any efficient, scalable password hardening service that is secure under weaker assumptions necessarily requires a conceptually different construction. To this end, we propose a construction for password hardening services based on a novel cryptographic primitive called partially oblivious commitments, along with an efficient secure instantiation based on simple assumptions. The performance and storage evaluation of our prototype implementation shows that our protocol runs almost twice as fast as $\mathrm{P{\small YTHIA}}$, while achieving a slightly relaxed security notion but relying on weaker assumptions.

Publication: 23rd ACM Conference on Computer and Communications Security (CCS 2016)
Let the cost of 1 litre of milk be Re. 1.

Milk in 1 litre of mixture in the first can = $$\frac{3}{4}$$ litre
The cost price of 1 litre of mixture in the first can = Rs. $$\frac{3}{4}$$

Milk in 1 litre of mixture in the second can = $$\frac{1}{2}$$ litre
The cost price of 1 litre of mixture in the second can = Rs. $$\frac{1}{2}$$

Milk in 1 litre of the final mixture = $$\frac{5}{8}$$ litre
The mean price = Rs. $$\frac{5}{8}$$

By the rule of alligation we have,

$$\frac{x}{y} = \frac{\frac{3}{4} - \frac{5}{8}}{\frac{5}{8} - \frac{1}{2}} = \frac{\frac{1}{8}}{\frac{1}{8}} = \frac{1}{1}$$

Therefore, to get 12 litres of mixture in which the ratio of water to milk is 3 : 5, the milk vendor should mix 6 litres from each can.

Explore more such questions and answers at BYJU’S.
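As a cross-check on the alligation step, the same answer follows from solving the mixture equations directly, where $x$ and $y$ are the litres drawn from the first and second can:

```latex
\[
x + y = 12, \qquad
\tfrac{3}{4}x + \tfrac{1}{2}y = \tfrac{5}{8}\times 12 = 7.5
\]
% Substituting y = 12 - x:
%   3x/4 + (12 - x)/2 = 7.5
%   x/4 + 6 = 7.5
%   x = 6, hence y = 6.
```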
# [OS X TeX] Hyphenation

Lindsay Stirton L.Stirton at uea.ac.uk
Tue Mar 13 17:47:56 CET 2007

Hello,

I have been learning LaTeX on and off for about a year now, using it for non-critical work, and gradually doing more and more writing in Carbon Emacs and gwTeX. I am slowly figuring out how to do everything I need to do using this setup.

The one thing I can't figure out is hyphenation. I understand that (as long as I am writing in English) TeX should automatically hyphenate; there should be no need for \usepackage[english]{babel} or any such command in the preamble. All the sources I consult speak about what a great hyphenation algorithm TeX has. But the only way I seem to be able to get hyphenation is manually (with \- ).

Where am I going wrong? Any advice gratefully appreciated.

Lindsay
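One way to check whether automatic hyphenation is actually active is a minimal test document. This is only a sketch; it assumes a standard LaTeX installation with the US-English patterns preloaded in the format, and the exception word in \hyphenation is just an illustration.

```latex
\documentclass{article}
% A narrow measure forces TeX to hyphenate at line ends.
\usepackage[textwidth=4cm]{geometry}
% Teach TeX an exception it would otherwise miss:
\hyphenation{gw-TeX}
\begin{document}
Antidisestablishmentarianism and incomprehensibilities are long enough
that TeX must break them automatically. A discretionary hint can still
be given manually with \verb|\-|, e.g. hy\-phen\-ation.
% \showhyphens writes the hyphenation points of its argument to the log:
\showhyphens{incomprehensibilities demonstration}
\end{document}
```

If \showhyphens reports no break points in the log, the format was built without the expected patterns, which would explain hyphenation only working manually.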
# New Zealand Power Supply Voltage, 3 Phase

The current Ih in towards the zero point becomes a phase current, and accordingly a phase current If = Ih will flow through the windings. Wye-connected systems have a neutral. Taking this one step further, high power loads are typically powered using three phases.

New Zealand power outlets accept power plugs with 3 flat pins (1 of which is an earth pin, which is a simple safety measure). There are good reasons for this arrangement. Poor quality or fluctuating power supply can often cause power surges, spikes and voltage fluctuations.

Three Phase AC Voltage. A new power quality Standard, AS61000.3.100, has recently been released that details requirements additional to the existing systems Standard. The power plugs have two flat pins in a V-shape with a grounding pin. Medium voltage: greater than 1000 volts and less than 100 kV. Delta-connected systems typically do not have a neutral.

150-225 kVA robust 3 phase UPS power protection designed to meet a wide range of requirements, from medium data centers to industrial and facilities applications. NEW ZEALAND - …

Voltages of typical low-voltage mains:

Country        Single-phase [V]   Three-phase 4-wire [V]   Frequency [Hz]
Australia      240                240/415                  50
New Zealand    230; 240           230/400; 240/415         50

The higher voltage helps to deliver more energy to commercial and industrial loads.

Design solution: the circuit in Figure 1 is a 12 V, 250 mA wide-range flyback power supply that operates from a single-phase or a three-phase input. You can also consider a combined power plug adapter/voltage converter.

Machines with 3 phase motors (like lathes or milling machines) can usually be run from a single phase supply by using a "VSD" (Variable Speed Drive) to convert 1 phase power into 3 phase power. Power is generated at the utility in three phases that are 120 degrees out of phase with one another at 60 Hz.

Alternating current electric power distribution systems can be classified by the following properties: frequency, number of phases, number of wires, whether a neutral is present, and voltage class (ANSI C84.1-2016).

Electricity is supplied throughout New Zealand at 230/240 volts (50 hertz), although most hotels and motels provide 110 volt AC sockets (rated at 20 watts) for electric razors only. Power outages or surges are extremely rare, regardless of how remote you may be.

A phase voltage (phase voltage = main voltage/√3; for example 400 V = 690 V/√3) will lie across the windings. The three voltage sources are phase shifted 120° with respect to each other to balance the load currents.

For Victoria the corresponding figures are 60,000km, 1600km, and 30,000km. New Zealand enjoys first-rate infrastructure, and the electrical grid is no exception.

Three-phase voltage, frequency and number of wires: electric power standards around the world. Although single-phase power is more prevalent today, three phase is still chosen as the power of choice for many different types of applications. New Zealand, Australia and New Guinea use different power plugs to the rest of the world.

A single to three phase power supply converts single phase power to three phase power for equipment up to 60 kW. FASTEC Engage: two clutches developed by Fastec Ltd (an adjustable centrifugal clutch and a spring clutch) which can each be used, either independently or in conjunction with a Fastec Drive, to soft start high inertia loads.
Resistor-network converters will usually be advertised as supporting something like 50-1600 Watts.

This simply means that 60 times per second, each individual leg of power makes one peak and valley (a full circle), and all three of the phases together split the cycle into thirds (trisect).

New Zealand Power Plug. What is the voltage of electricity supply in New Zealand? Standard electric motors in industry use three phase, 415 volts at 50 hertz. Also very common is 220 - 240 volts three phase, which is a three phase motor run from a single phase inverter.

A "standard" residential power connection in NZ is 15 kVA and there are four common connection styles in NZ: 1) Single Phase; 1x 63A; 230V Line-Neutral. Detect when all 3 phases are present and have the correct phase …

If you have balanced three-phase power, where all three phase voltages are equal in magnitude and 120° apart in phase, then: $$V_{L-L} = \sqrt{3} \times V_{L-N}$$

Section 28: Voltage supply to installations. (1) The supply of electricity to installations operating at a voltage of 200 volts AC or more but not exceeding 250 volts AC (calculated or measured at the point of supply) (a) must be at standard low voltage.

±0.5% full scale (at 25°C and 65% humidity, rated power supply voltage, 50/60 Hz sine wave input). Operating time: ±50 ms (at 25°C and 65% humidity, rated power supply voltage). Conforming standards: EN 60947-5-1. Installation environment: pollution level 2, installation category III. EMC: EN 60947-5-1.

Most homeowners will require single-phase whereas industrial or commercial applications usually require three-phase power. Generators at power stations supply three-phase electricity.
An expensive and heavy power converter transforms the voltage of 230 volts from a New Zealand power outlet to work with a non-230 volt device; however, a more lightweight and cheaper power adapter (or plug adapter) simply changes the shape of the plug on your device to fit into a power outlet found in New Zealand.

Country-wise power voltage table. Three-phase power is better for motor starting and running. Single-phase sets are 230/240. Designed and manufactured in New Zealand, the device can cleverly optimise a low-capacity, single-phase power supply to enable starting and running of three phase motors. Power supplies for three phase industrial applications: an overall voltage rating equal to the sum of the individual MOSFET voltages. If you travel to New Zealand with a device that does not accept 230 Volts at 50 Hertz, you will need a voltage converter.

The system of three-phase alternating current electrical generation and distribution was invented by a nineteenth century creative genius named Nikola Tesla. You might have general power circuits connected to one phase, (single phase) A/C connected to the second phase, and your workshop connected to the third phase. Frequency: 50 Hz or 60 Hz.

This DPB01-series DIN rail mount phase monitoring relay serves as a 3-phase or 3-phase + neutral line voltage monitoring relay for phase sequence, phase loss, and over- and under-voltage (separately adjustable set points) with a built-in time delay function.

There are three main types of voltage converter. Three-phase generators are set up to produce 400/420 volts. Using the StackFET technique with a low-cost 600 V MOSFET … Low voltage: 1000 volts or less. High voltage …

Examples of this can include flickering lights, lights glowing brighter or dimmer, incandescent bulbs blowing prematurely, failure of electronic equipment (especially computers), interference of radio or … Electricity in New Zealand is 230 Volts, alternating at 50 cycles per second.

3-phase A/C is obviously connected to all 3 phases, and is marginally more efficient than a single phase …

The new Standard. If the standard voltage in your country is in the range of 100 V - 127 V (as is in the US, Canada and most South American countries), you need a voltage converter in New Zealand. Helios Power Solutions New Zealand offers single phase output uninterruptible power supplies from 500 VA to 20 kVA.

Re: 3 phase power. Originally Posted by motorbyclist: manuals for the software in english and german (bosch controller), wiring diagrams etc are all in italian. internet searches have found very little due to the age and nature of this machine, and of course the company who made the machine died a …

Note that New Zealand uses 230 V and 50 Hz, as opposed to America's 120 V and 60 Hz system. The good news is that New Zealand is big into sustainable and renewable energy sources.

Galaxy VS is a highly efficient, modular, easy-to-deploy 20 to 150 kW (480 V), 10 to 150 kW (400 V), and 10 to 75 kW (208 V), three-phase uninterruptible power supply that delivers top performance for edge, small, and medium data centers, as well as critical infrastructure in commercial and industrial facilities.

Do I need to take a converter? DIN Rail, Rack & Tower Mounting Options. Sentinel Power.

New South Wales and the ACT operate about 30,000km of conventional 22kV single and three phase lines, about 110,000km of conventional 11kV or lower voltage single and three phase lines, and 30,000km of SWER lines.

The Australian voltage standard for Single Phase Power is 230V, but with Three Phase Power the standard voltage between each of the three wires will be close to 400V.

(looks like a sad face) So that's three flat pins, one of which is an earthing pin (this is simply a safety measure). Some power plugs don't have the earth pin but they still fit into the power … Number of phases: single or three.

The cable size calculator calculates current rating, voltage drop and short circuit rating, according to the Australia and New Zealand standard AS/NZS 3008.
Number of wires: 2, 3, or 4 (not counting the safety ground). This distributes the current over three instead of one set of wires, allowing for smaller and thus less expensive wiring. New Zealand (as well as Australia, China, and several other countries) uses different power plugs to the rest of the world, and this power plug is known as Plug Type I. Three phase power supply - what is line to line voltage. The new Standard stipulates a nominal 230V, and the allowable voltage to the customer's point of supply is, as mentioned, +10% to -6%.
# Interpretation of residuals vs fitted plot

I am checking that I have met the assumptions for multiple regression using the built-in diagnostics within R. From my online research, I think the DV violates the assumption of homoscedasticity (please see the residuals vs fitted plot below). I tried log-transforming the DV (log10), but this didn't seem to improve the residuals vs fitted plot. There are 2 dummy-coded variables within my model and 1 continuous variable. The model only explains 23% of the variance in selection (the DV); therefore, could the lack of homoscedasticity be because variables are missing? Any advice on where to go from here would be greatly appreciated.

• Seen better, seen much worse. Judging these plots is a dark and subjective art. I am a fan of residual diagnostics but, consistently with that, I believe, I stress that getting the functional form right is more important than matching error assumptions exactly, which you will never manage. The main messages I pick up from the plot are that the overall shape looks about right, but I see two big clumps and one smaller one, so does that match anything we should worry about? I like to look at observed vs fitted, which is sometimes as or more informative. – Nick Cox Nov 18 '15 at 1:51
• There is always scope in principle for using other predictors to improve a disappointing model. – Nick Cox Nov 18 '15 at 1:53
• Thanks Nick. How do I generate the observed vs fitted plot? This doesn't seem to be in the default R diagnostics plots. – Courtney Nov 18 '15 at 2:02
• I see only very weak indication of heteroskedasticity. With a similar pattern of X's and simulated homoskedastic data of the same sample size, you'd probably see a worse picture than that fairly often (if you have the data you can actually try such an exercise).
The plot Nick is talking about would be fm=lm(y~x); plot(y~fitted(fm)), but you can usually figure out what it will look like from the residual plot -- if the raw residuals are $r$ and the fitted values are $\hat{y}$, then $y$ vs $\hat{y}$ is $r + \hat{y}$ vs $\hat{y}$; so in effect you just skew the raw residual plot up 45 degrees. – Glen_b Nov 18 '15 at 4:29
• This pattern is more obvious on an observed vs fitted plot on which zero observed is explicit as the $x$ axis. I like that plot because it underlines how the model is doing near zero observed. I suspect slight curvature in your data not quite captured by the plain (plane?) linear model, and that logarithms would help. As said, getting the functional form right trumps well-behaved diagnostic plots. If you posted the data, we could play. – Nick Cox Nov 18 '15 at 9:31

It's difficult to judge the structure of the error terms just by looking at residuals. Here's a plot similar to yours, but generated from simulated data where we know the errors are homoskedastic. Does it look "bad"?

library(mixtools)
set.seed(235711)
n <- 300
df <- data.frame(epsilon=sqrt(40) * rt(n, df=5))
df$x <- rnormmix(n, lambda=c(0.02, 0.30, 0.03, 0.60, 0.05),
                 mu=c(8, 16, 30, 36, 52), sigma=c(2, 3, 2, 3, 6))
df$y <- 2 + df$x + df$epsilon

model <- lm(y ~ x, data=df)
plot(model)
plot(df$y ~ fitted(model))
plot(residuals(model) ~ fitted(model))