An Intuitive Guide To Exponential Functions & e

e has always bothered me — not the letter, but the mathematical constant. What does it really mean? The mathematical constant e is the base of the natural logarithm. And when you look up natural logarithm you get: The natural logarithm, formerly known as the hyperbolic logarithm, is the logarithm to the base e, where e is an irrational constant approximately equal to 2.718281828459. Nice circular reference there.

Introduction - 10,000 Year Clock

The full scale 10,000 Year Clock is now under construction. While there is no completion date scheduled, we do plan to open it to the public once it is ready. The essay below by Long Now board member Kevin Kelly discusses what we hope the Clock will be once complete.

1. FAULTY CAUSE: (post hoc ergo propter hoc) mistakes correlation or association for causation, by assuming that because one thing follows another it was caused by the other. example: A black cat crossed Babbs' path yesterday and, sure enough, she was involved in an automobile accident later that same afternoon. example: The introduction of sex education courses at the high school level has resulted in increased promiscuity among teens. A recent study revealed that the number of reported cases of STDs (sexually transmitted diseases) was significantly higher for high schools that offered courses in sex education than for high schools that did not.

Three Lectures by Hans Bethe

In 1999, legendary theoretical physicist Hans Bethe delivered three lectures on quantum theory to his neighbors at the Kendal of Ithaca retirement community (near Cornell University). Given by Professor Bethe at age 93, the lectures are presented here as streaming videos synchronized with slides of his talking points and archival material. Intended for an audience of Professor Bethe's neighbors at Kendal, the lectures hold appeal for experts and non-experts alike. The presentation makes use of limited mathematics while focusing on the personal and historical perspectives of one of the principal architects of quantum theory, whose career in physics spans 75 years.
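The circularity can be broken by defining e directly as the limit of continuous compounding, e = lim (1 + 1/n)^n as n grows. A minimal Python sketch (my own illustration, not part of the original page) shows the limit converging:

```python
# e as the limit of compound interest: (1 + 1/n)^n approaches e as n grows.
import math

for n in [1, 10, 100, 10_000, 1_000_000]:
    print(f"n = {n:>9}: (1 + 1/n)^n = {(1 + 1/n) ** n:.9f}")

print(f"math.e        = {math.e:.9f}")  # 2.718281828...
```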
{"url":"http://www.pearltrees.com/akrebs/science/id4642148","timestamp":"2014-04-20T03:18:43Z","content_type":null,"content_length":"20020","record_id":"<urn:uuid:bdcfc7e5-be45-46b4-9cdb-0ba46ec8c3ba>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by TRAY (Total # Posts: 20)

- what are the answers to these questions: Based on what you have read so far, what is the one thing that always changes when a scene changes in The Diary of Anne Frank? (1 point) the Franks' reason for hiding / the passage of time / the use of voiceover for Anne's diary 2. Wh...
- What is the sum of the first 70 consecutive odd numbers?
- Find the cost of a home in 20 years, assuming an annual inflation rate of 2%, if the present value of the house is $280,000. (Round your answer to the nearest cent.)
- Develop a tree diagram for tossing two eight-sided gaming dice to figure out how many possibilities there are. Discuss the purpose of using such a visual in working out probability.
- college algebra: Find the cost of a home in 20 years, assuming an annual inflation rate of 2%, if the present value of the house is $280,000. (Round your answer to the nearest cent.)
- A history teacher gives a 27-question T-F exam. In how many different ways can the test be answered if the possible answers are T or F, or possibly to leave the answer blank?
- Choose a natural number between 1 and 25, inclusive. What is the probability that the number is a multiple of 3?
- What is the probability of obtaining a sum of at least 7 when rolling a pair of dice?
- a die with eight sides, marked one, two, three, and so on. Assuming equally likely outcomes,
- a bead maker wants to separate 10,000,000 beads into boxes. she will put 1,000,000 beads into each box. into which box would she put the 4,358,216th bead?
- a bead maker wants to separate 10,000,000 beads into boxes. she will put 1,000,000 beads into each box. into which box would she put the 4,358,216th bead?
- Five years ago, you bought a house for $151,000, with a down payment of $30,000, which meant you took out a loan for $121,000. Your interest rate was 5.75% fixed. You would like to pay more on your loan. You check your bank statement and find the following information: Escrow ...
- Five years ago, you bought a house for $151,000, with a down payment of $30,000, which meant you took out a loan for $121,000. Your interest rate was 5.75% fixed. You would like to pay more on your loan. You check your bank statement and find the following information: Escrow ...
- Five years ago, you bought a house for $151,000, with a down payment of $30,000, which meant you took out a loan for $121,000. Your interest rate was 5.75% fixed. You would like to pay more on your loan. You check your bank statement and find the following information: Escrow ...
- Algebra: Statistics: Expected Value to assess the fairness of the risk. Provide one example to show how you can use the Expected Value computation to assess the fairness of a situation (probability experiment). Provide the detailed steps and calculations.
- Suppose that you want to purchase a home for $450,000 with a 30 year mortgage at 6% interest. Suppose that you can put 40% down. Assume that the monthly cost to finance $1,000 is $6.00. What is the total amount of interest paid on the 30 year loan?
- medical terminology: define midsagittal
- Suppose you roll a bowling ball at 2.5 m/s across the roof of a flat building. It leaves the edge and strikes the ground 2.0 s later. How high is the roof? How far from the edge of the roof does the ball hit the ground?
- USE THE LCD TO DETERMINE WHICH FRACTION IS GREATER. 1/4, 2/7
- Estimate the number of pints in 49 3/4 cups?
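Several of these posts have quick closed-form answers; a short Python sketch (my own check, not from the original posts) verifies three of them:

```python
# Quick checks for three of the posts above.

# Sum of the first 70 consecutive odd numbers: 1 + 3 + ... + 139 = 70**2.
print(sum(range(1, 2 * 70, 2)))  # 4900

# Cost of a $280,000 home after 20 years of 2% annual inflation.
print(round(280_000 * 1.02 ** 20, 2))  # about 416065.27

# With 1,000,000 beads per box, bead number 4,358,216 lands in box 5.
print((4_358_216 - 1) // 1_000_000 + 1)  # 5
```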
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=TRAY","timestamp":"2014-04-19T02:22:33Z","content_type":null,"content_length":"10339","record_id":"<urn:uuid:9b02e897-10e8-4921-8626-ecdb8e5b7d81>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Joe goes to the gym to run and swim. When running he burns 35 calories per minute, and when he swims he burns 30... - Homework Help - eNotes.com

Joe goes to the gym to run and swim. When running he burns 35 calories per minute, and when he swims he burns 30 calories per minute. He has burned 730 calories after exercising for a total of 23 min. How long did he spend on each activity?

Let Joe run for x minutes, and swim for (23-x) minutes. The number of calories burnt in running is 35*x, and the number of calories burnt in swimming is 30*(23-x). Joe burns 730 calories in total (running plus swimming), so:

35x + 30(23-x) = 730
⇒ 35x + 690 - 30x = 730
⇒ 5x = 730 - 690 = 40
⇒ x = 40/5 = 8.

So Joe runs for 8 minutes and swims for (23-8) = 15 minutes.
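The same linear equation can be solved mechanically; a small sketch (my own, using sympy):

```python
# Solve 35x + 30(23 - x) = 730 for the running time x.
from sympy import symbols, solve

x = symbols("x")
run = solve(35 * x + 30 * (23 - x) - 730, x)[0]
print(run, 23 - run)  # 8 and 15 minutes
```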
{"url":"http://www.enotes.com/homework-help/joe-goes-gym-run-swim-when-running-he-burns-35-438017","timestamp":"2014-04-21T01:15:52Z","content_type":null,"content_length":"25608","record_id":"<urn:uuid:28e226a1-61a8-4c64-a90a-65cf218f1228>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Bit Shift Operator Signed Left Shift

[Sandy, Joined: Jan 18, 2001, Posts: 18]
Hi all! Can anyone explain how -5 << 29 is giving a positive number of 1610612736? Is << not a signed left shift, and should it therefore not give a negative number? I am not able to manually determine the above result. Can anyone please explain?

[Udayan Naik, Ranch Hand, Joined: Oct 18, 2000, Posts: 135]
Hi Sandy. The number -5 will be represented as 1111 1111 ... 1011. After shifting to the left 29 times, a 0 will be in the MSB position. Now the number will be 0110 0000 ... 0000; a positive number. The value will be (2 raised to 30 + 2 raised to 29), i.e. 1610612736. Try doing the mathematics with paper and pencil, and you will get it.
[This message has been edited by Udayan Naik (edited January 19, 2001).]
Udayan Naik, Sun Certified Programmer for the Java 2 Platform

[Sandy]
Thanks Udayan. I get it now.
Sandy.
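In Java, << shifts the 32-bit two's-complement bit pattern left and discards the high bits; the sign of the result depends only on the bit that lands in position 31, so a negative operand can easily produce a positive result. A sketch reproducing the 32-bit behavior (written in Python, which needs an explicit mask because its integers are unbounded):

```python
# Emulate Java's 32-bit signed left shift: -5 << 29 == 1610612736.
def java_shl(value: int, shift: int) -> int:
    raw = (value << (shift & 31)) & 0xFFFFFFFF            # keep the low 32 bits
    return raw - (1 << 32) if raw & 0x80000000 else raw   # reinterpret as signed

print(java_shl(-5, 29))       # 1610612736 == 2**30 + 2**29
print(bin(java_shl(-5, 29)))  # 0b1100000000000000000000000000000
```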
{"url":"http://www.coderanch.com/t/197035/java-programmer-SCJP/certification/Bit-Shift-Operator-Signed-Left","timestamp":"2014-04-17T07:22:23Z","content_type":null,"content_length":"21535","record_id":"<urn:uuid:59f76e5f-a193-4557-ad6e-79387f858d4e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining a null sequence

March 12th 2013, 09:13 AM
I'm trying to see if (5n^5) / (n^3) is null. I know I need to use the combination rules to work it down to a basic null sequence, like {1/n} etc. I don't know which rule to apply and at what stage (Crying)

March 12th 2013, 09:30 AM
Re: Determining a null sequence
That is impossible to do, because it is not true: (5n^5)/(n^3) = 5n^2, which grows without bound rather than tending to 0.

March 12th 2013, 09:51 AM
Re: Determining a null sequence
That'll explain why I couldn't do it. Thank you :)
{"url":"http://mathhelpforum.com/differential-geometry/214640-determining-null-sequence-print.html","timestamp":"2014-04-17T22:33:57Z","content_type":null,"content_length":"5051","record_id":"<urn:uuid:9637736c-ca93-4da8-ad73-458db53dc16b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
hello can someone please show me the steps to solve this algebra equation 1.4=x-104/10 and we are looking for x
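The question went unanswered on the page, and the equation is ambiguous as typed; a sketch (my own) solving both plausible readings:

```python
# Two readings of "1.4 = x - 104/10":
from sympy import Rational, solve, symbols

x = symbols("x")
# Literal reading: x - 104/10 = 1.4, i.e. x - 10.4 = 1.4
print(solve(x - Rational(104, 10) - Rational(14, 10), x))  # [59/5], i.e. x = 11.8
# Grouped reading: (x - 104)/10 = 1.4
print(solve((x - 104) / 10 - Rational(14, 10), x))         # [118]
```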
{"url":"http://openstudy.com/updates/507c8708e4b07c5f7c1f97fa","timestamp":"2014-04-19T17:29:08Z","content_type":null,"content_length":"34702","record_id":"<urn:uuid:3a74383b-977f-42d2-8c0c-a31c3ce43123>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH CIRCLES

A kingdom consists of 12 cities located on a one-way circular road. A magician comes on the 13th of every month to cast spells. She starts at the city which was the 5th down the road from the one that she started at during the last month (for example, if the cities are numbered 1-12 clockwise, and the direction of travel is clockwise, and she started at city #9 last month, she will start at city #2 this month). At each city that she visits, the magician casts a spell if the city is not already under the spell, and then she moves on to the next city. If she arrives at a city which is already under the spell, then she removes the spell from the city, and leaves the kingdom until the next month. Last Thanksgiving the capital city was free of the spell. Prove that it will be free of the spell this Thanksgiving as well.*

* For the solution and other Olympiad exam questions, click here

This question from the 2001 Bay Area Math Olympiad (BAMO) is the sort of math problem tackled by middle and high school students in the Berkeley Math Circle. Virtually unknown in America until recently, mathematical circles for talented school children originated in Hungary more than a century ago and spread over Eastern Europe and Asia. They eventually led to the start of many national and international math contests, including the first International Mathematical Olympiad (IMO) held in Romania in 1959. The United States, which will host the 42nd competition in July, joined the IMO in 1974, and since then its team has performed among the very top of the approximately 80 participating countries.

Math circles are led by mathematicians and teachers trained in solving problems for Olympiads. IMO problems come from various areas of mathematics, many of which are covered in math curricula at secondary schools. The tantalizing problems, however, require not only computational skill but also deep thinking, creativity, and the ability to explain each step of the reasoning that leads to a solution.

The middle school years are ideal for nurturing this kind of mathematical talent, and it is then that interested students should begin to study algebra and geometry and to construct proofs. High school is often too late, and even then a good high school mathematics course in the United States generally brings a student no farther along than the 18th century.

Former International Math Olympian for Bulgaria, Zvezdelina ("Zvezda") Stankova-Frenkel, A.B., M.A. '92, is one of the pioneers establishing math circles in the United States. An assistant professor of mathematics and computer science at Mills College, she is a founder of the Berkeley Math Circle and BAMO. "I always wanted to be close to the most talented young kids in the United States and give them the same chance of early encounter and joy with mathematics as I was given in Bulgaria through the math circles," says Stankova-Frenkel, who coached the U.S. team in preparation for the IMO at the Math Olympiad Summer Program (MOSP) in 1998-2000.

When she joined her Bulgarian middle school math circle in fifth grade, mathematics was just one of her interests, along with piano, ballet and poetry. Only three months after joining the circle, she won first prize in a regional competition that included problems in geometry and elementary algebra. "It was a very creative and competitive atmosphere," she remembers.
She entered an elite high school in which many courses were taught in English and was selected for the Bulgarian national mathematics team, winning silver medals in the Mathematics Olympiads in Cuba in 1987 and in Australia a year later. In Australia, her unique solution to a complex problem - "If (a^2 + b^2)/(1 + ab) is an integer, then it is the square of an integer" - made her well known in Bulgaria and in the mathematical world.

In 1989, already attending Sofia University, she was among 15 Bulgarian students selected to study in the United States. (She was one of two chosen by the American Embassy for Bryn Mawr, which in turn chose her.) One of her advisors at Bryn Mawr, Paul Melvin, Rachel C. Hale Professor in the Science and Mathematics, comments, "Zvez enriched our program in many ways, both as a role model for other math majors and, because of her advanced training in Bulgaria and her remarkable mathematical talent, as an active participant in our graduate seminars. In her junior year, she ran an immensely successful course on "Olympiad Problems with Some Theory," which was attended by a mixture of advanced undergraduates, graduate students and faculty."

View from the other side

In order to learn about pre-college education in the United States, she worked for certification as a school teacher in Massachusetts while completing her Ph.D. in algebraic geometry at Harvard. "I realized that there is very little, if any, connection between professional mathematicians and secondary school teachers here (not so in Eastern Europe!)," she says. "So, I needed to get on 'the other side of the fence' and see what kind of problems teachers faced every day in school."

After receiving her Ph.D. in 1997, she also became certified in California, where she was a post-doctoral fellow of the Mathematical Sciences Research Institute at Berkeley and Morrey Assistant Professor of Mathematics at Berkeley. Stankova-Frenkel did not know Harvard from Bryn Mawr before coming to the United States and says that it took her some time to realize the necessity of women's colleges. "I grew up in a culture that nurtured scientific talent equally in boys and girls from an early age. I never felt slighted or given less of a mathematical chance in Bulgaria based on my gender," she explains. "In fact, for the two years when I was on the Bulgarian IMO team, there were two girls out of six students. Compare this with the U.S. IMO team, which finally had a girl on it (Melanie Wood from Indiana, currently at Duke University) only three years ago! Since starting to teach at Mills two years ago, I have seen many cases of exceptionally bright young women who were indoctrinated with the idea that they would never be good in math. I am proud to say that I have turned a number of cases around, and I am sure the same thing happens all the time at Bryn Mawr."
{"url":"http://www.brynmawr.edu/alumnae/bulletin/mathcirc.htm","timestamp":"2014-04-18T23:18:48Z","content_type":null,"content_length":"7515","record_id":"<urn:uuid:fb6d4ea2-04df-42ed-98ae-d0cb89600e33>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: On the binary solitaire cone

David Avis a,1; Antoine Deza b,c,2
a McGill University, School of Computer Science, Montréal, Canada
b Tokyo Institute of Technology, Department of Mathematical and Computing Sciences, Tokyo, Japan
c École des Hautes Études en Sciences Sociales, Centre d'Analyse et de Mathématique Sociales, Paris, France

The solitaire cone SB is the cone of all feasible fractional Solitaire Peg games. Valid inequalities over this cone, known as pagoda functions, were used to show the infeasibility of various peg games. The link with the well studied dual metric cone and the similarities between their combinatorial structures (see (3)) lead to the study of a dual cut cone analogue; that is, the cone generated by the {0, 1}-valued facets of the solitaire cone. This cone is called the binary solitaire cone and denoted BSB. We give some results and conjectures on the combinatorial and geometric properties of the binary solitaire cone. In particular we prove that the extreme rays of SB are extreme rays of BSB, strengthening the analogy with the dual metric cone, whose extreme rays are extreme rays of the dual cut cone. Other related cones are also considered.

1 Introduction and Basic Properties
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/896/3940265.html","timestamp":"2014-04-21T02:28:52Z","content_type":null,"content_length":"8330","record_id":"<urn:uuid:97c1cab1-f5c2-4f44-9b2d-f8dc7cab6218>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: panel data models.

From: nicola.baldini2@unibo.it
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: panel data models.
Date: Tue, 13 Feb 2007 12:27:26 +0100

Such generic/theoretical (i.e. not related to Stata) questions are rarely answered on Statalist. According to some statisticians, your questions can be re-phrased as: given that I have panel data, which is the better way to estimate standard errors? You can see some answers at:

- using cross-sectional commands to manage panel data is not uncommon (at least in some disciplines: the one reviewed by Petersen, and mine also - innovation management)
- independently of what your beliefs are, Stata tells you if individual effects really exist (e.g. -xttest0- or the test at the bottom of the results of a fixed effect model). Obviously, not correcting the standard errors accordingly may give biased results (the magnitude of the bias depends on the type of the applied correction -- not all corrections lead to unbiased results)
- if you have panel data, the use of GLS lets you take advantage of the maximum amount of information available (coefficients may be biased, too)

Hope this helps,

At 02.33 13/02/2007 -0500, "Vladimir V. Dashkeyev" wrote:
>4. Does existence of individual effects mean that once one believes in
>their presence and estimates a panel model, it automatically prohibits
>him/her from estimating a cross section model, since the latter will
>result in biased estimates because of omitted variable (individual
>effect) bias? Or can a researcher suppose that in the long run
>individual effects are insignificant and estimate a cross section
>model, and at the same time suppose that in the short run the effects
>are significant and estimate a panel model?

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-02/msg00477.html","timestamp":"2014-04-18T20:50:48Z","content_type":null,"content_length":"7187","record_id":"<urn:uuid:11b737f5-f5f3-4f3e-b7be-8646111b5b88>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Oblique Triangles

An oblique triangle is any triangle that is not a right triangle. It could be an acute triangle (all three angles of the triangle are less than right angles) or it could be an obtuse triangle (one of the three angles is greater than a right angle). Actually, for the purposes of trigonometry, the class of "oblique triangles" might just as well include right triangles, too. Then the study of oblique triangles is really the study of all triangles.

Let's agree to a convention for labelling the parts of oblique triangles, generalizing the convention for right triangles. Let the angles be labelled A, B, and C, and let the sides opposite them be labelled a, b, and c, respectively.

Solving oblique triangles

The trigonometry of oblique triangles is not as simple as that of right triangles, but there are two theorems of geometry that give useful laws of trigonometry. These are called the "law of cosines" and the "law of sines." There are other "laws" that used to be used, but since the common use of calculators, these two laws are enough.

The law of cosines

This is a simply stated equation:

c^2 = a^2 + b^2 - 2ab cos C

It looks like the Pythagorean theorem except for the last term, and if C happens to be a right angle, that last term disappears (since the cosine of 90° is 0), so the law of cosines is actually a generalization of the Pythagorean theorem. Note that each triangle gives three equations for the law of cosines since you can permute the letters as you like. The other two versions are then a^2 = b^2 + c^2 - 2bc cos A, and b^2 = c^2 + a^2 - 2ca cos B.

The law of cosines relates the three sides of the triangle to one of the angles. You can use it in a couple of ways. First, if you know one angle and the two adjacent sides, then you can determine the opposite side. For instance, if angle C = 60°, side a = 5, and side b = 8, then the law of cosines says c^2 = 25 + 64 - 80 cos 60°. Since the cosine of 60° is 1/2, that equation simplifies to c^2 = 49, so c = 7.

Second, if you know all three sides of a triangle, then you can use it to find any angle. For instance, if the three sides are a = 5, b = 6, and c = 7, then the law of cosines says 49 = 25 + 36 - 60 cos C, so cos C = 12/60 = 0.2, and, with the use of a calculator, C = 1.3694 radians = 78.46°.

Note: when the triangle is obtuse, cos C is negative. Suppose the three sides are a = 5, b = 6, and c = 10. Then the law of cosines says 100 = 25 + 36 - 60 cos C, so cos C = -39/60 = -0.65. As you can see in the graphs on the previous page, the cosine of an obtuse angle is negative. This is fine, and your calculator will compute the arccosine properly. You'll get C = 2.2784 radians = 130.54°.

The law of sines

The law of sines is also a simply stated equation:

sin A / a = sin B / b = sin C / c

Note that the law of sines says that three ratios are equal. Like the law of cosines, you can use the law of sines in two ways. First, if you know two angles and the side opposite one of them, then you can determine the side opposite the other one of them. For instance, if angle A = 30°, angle B = 45°, and side a = 16, then the law of sines says (sin 30°)/16 = (sin 45°)/b. Solving for b gives b = 16 (sin 45°)/(sin 30°) = 22.6274.

Second, if you know two sides and the angle opposite one of them, then you can almost determine the angle opposite the other one of them. For instance, if side a = 25, side b = 15, and angle A = 40°, then the law of sines says (sin 40°)/25 = (sin B)/15. Solving for sin B gives sin B = 15 (sin 40°)/25 = 0.38567. Now, the arcsin of 0.38567 = 22.686°.
Warning: you may not have the correct answer. There are two angles between 0° and 180° with a given sine; the second one is the supplement of the first. So in this case, the second one is the obtuse angle 180° - 22.686° = 157.314°. This situation is indeterminate. Knowing two sides and the angle opposite one of them is not always enough to determine the triangle. There is no "side-side-angle" congruence theorem in geometry.

Exercises

553. AB is a line 652 feet long on one bank of a stream, and C is a point on the opposite bank. A = 53° 18', and B = 48° 36'. Find the width of the stream from C to AB.

557. In a triangle ABC, a = 700 feet, B = 73° 48', and C = 37° 21'. If M is the middle point of BC, find the length of AM, and the angles BAM and MAC.

561. Three circles of radii 3, 4, and 5 touch each other externally. Find the angles of the triangle formed by joining their centers.

563. A and B are points on opposite sides of a river. On one bank the line AC, 650 feet, is measured. The angle A = 73° 40', and C = 52° 38'. Find AB.

570. P and Q are two inaccessible points. To find the distance between them, a point A is taken in QP produced, and a line AB 1200 feet long is measured making the angle PAB = 26° 35'. The angle ABP = 48° 12' and ABQ = 106° 42'. How long is PQ?

579. The sides of a parallelogram are AB = 209.16 and AD = 347.25, and the diagonal AC = 351.47. Find the angles and the other diagonal.

580. In a parallelogram ABCD, the diagonal AC = 521.16, the angle ABC = 110° 48' 12", and BAC = 27° 19' 36". Find the lengths of the sides and the other diagonal.

586. The diagonals of a parallelogram are 374.14 and 427.21, and the included angle is 70° 12' 38". Find the sides.

590. The sides of a quadrilateral in order are 763.83, 721.75, 547.12, and 593.21, and the angle between the first two sides is 53° 13' 12". Find the other three angles.

593. A and B are two points on opposite sides of a body of water, and soundings are to be taken in the line AB at points one quarter, one half, and three quarters of the distance from A to B. On the shore a line AC 1200 feet long is measured, and angles BAC = 63° 19' and ACB = 78° 43'. What angles must be turned off from CA at C in order to line up the boat from which the soundings are made at the proper points on AB?

608. On one side of a stream lines PA = 586.3 feet, PB = 751.6 feet are measured, angle APB being 167° 36'. Q is a point on the opposite side of the stream. Angle PAQ = 63° 18' and PBQ = 49° 24'. Find PQ.

612. To find the distance between two inaccessible points P and Q, a line AB 763.4 feet long is laid off so that AB produced intersects PQ externally [that is, the two line segments AB and PQ don't intersect]. The angles PAB = 98° 47', QAB = 41° 36', PBA = 37° 16', and QBA = 94° 12'. Find the length of PQ.

Hints

553. You can use the law of sines to determine either of the lengths AC or BC. The question is to find the distance from C to AB. That means you drop a perpendicular from C to that line and determine its length. You could use the angle A and the line AC to find it, or you could use the angle B and the line BC to find it.

557. Same hint as 553.

561. The circles are tangent, so a line from one center to another is the sum of the radius of one circle and that of the other. You've got a triangle with sides 7, 8, and 9. You can use the law of cosines to find the angles.

563. The law of sines works well here.

570. Draw the figure. To find PQ, first find AP and AQ.
You can find AP using the law of sines on triangle ABP, and you can find AQ using the law of sines on triangle ABQ.

579. You know the sides of triangles ABC and ADC, so you can determine their angles. In triangle ABD you then know an angle and the two adjacent sides, so you can find the opposite side BD.

580. First solve the triangle ABC. Next, in triangle ABD you know two sides and you can easily determine the angle BAD.

586. The "included angle" is one of the two angles between the two diagonals. The other included angle is its supplement, 180° - 70° 12' 38". Let P be the point where the two diagonals meet. It is the midpoint of each diagonal, so you know the distance between P and any vertex. Use the law of cosines on two triangles with vertices P and two of the vertices of the parallelogram.

590. You know the sides of the quadrilateral ABCD and the angle at B. You can solve triangle ABC. Then you know all the sides of triangle ACD, so you can find its angles.

593. First determine the distance AB using the law of sines. Then for each of the proper positions of the boat P, you know two sides and the included angle of the triangle PAC, so you can determine the needed angle using the law of cosines.

608. First solve the triangle APB. Then you'll have enough information to solve the triangle AQB.

612. There are several ways to solve this one. Here's one way. Determine PA using the law of sines for triangle PAB, and determine QA using the law of sines for triangle QAB. Then use the law of cosines for triangle PAQ.

Answers

553. 345.43 feet.
557. 490.83 feet.
561. 48° 11' 24", 58° 24' 42", 73° 23' 54".
563. 640 feet 10 inches.
570. 651.9 feet.
579. 106° 18' 46", 73° 41' 14", 452.92.
580. 255.93, 372.11, 369.22.
586. 231.94, 328.93.
590. 125° 6' 12", 70° 57' 54", 110° 42' 42".
593. 23° 27', 47° 58', 66° 34'.
608. 854.6 feet.
612. 920.76 feet.
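The worked examples in the text are easy to check numerically; a small sketch (function name is mine) applying the two laws, including the ambiguous SSA case:

```python
# Check the law-of-cosines and law-of-sines examples from the text.
from math import acos, asin, degrees, radians, sin

def angle_from_sides(a, b, c):
    """Angle opposite side c, by the law of cosines."""
    return degrees(acos((a**2 + b**2 - c**2) / (2 * a * b)))

print(angle_from_sides(5, 6, 7))   # ~78.46 degrees (acute case)
print(angle_from_sides(5, 6, 10))  # ~130.54 degrees (obtuse case)

# SSA (ambiguous) case: a = 25, b = 15, A = 40 degrees.
sinB = 15 * sin(radians(40)) / 25  # law of sines: sin B / b = sin A / a
B = degrees(asin(sinB))            # ~22.69 degrees
print(B, 180 - B)                  # the supplement ~157.31 has the same sine
```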
{"url":"http://www.clarku.edu/~djoyce/trig/oblique.html","timestamp":"2014-04-18T16:02:49Z","content_type":null,"content_length":"13577","record_id":"<urn:uuid:8de49c14-09ed-412a-919d-b0fa46e35ee5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematical Principles of Natural Philosophy (1729)/Book 1/Section 13

From Wikisource

←Section XII | The Mathematical Principles of Natural Philosophy by Isaac Newton, translated by Andrew Motte | Section XIV→

Section XIII. Of the attractive forces of bodies which are not of a sphærical figure.

Proposition LXXXV. Theorem XLII.

If a body be attracted by another, and its attraction be vastly stronger when it is contiguous to the attracting body than when they are separated from one another by a very small interval; the forces of the particles of the attracting body decrease, in the recess of the body attracted, in more than a duplicate ratio of the distance of the particles.

For if the forces decrease in a duplicate ratio of the distances from the particles, the attraction towards a sphærical body, being (by prop. 74.) reciprocally as the square of the distance of the attracted body from the centre of the sphere, will not be sensibly increased by the contact; and it will be still less increased by it, if the attraction, in the recess of the body attracted, decreases in a still less proportion. The proposition therefore is evident concerning attractive spheres. And the case is the same of concave sphærical orbs attracting external bodies. And much more does it appear in orbs that attract bodies placed within them, because there the attractions diffused through the cavities of those orbs are (by prop. 70.) destroyed by contrary attractions, and therefore have no effect even in the place of contact. Now if from these spheres and sphærical orbs we take away any parts remote from the place of contact, and add new parts any where at pleasure, we may change the figures of the attractive bodies at pleasure; but the parts added or taken away, being remote from the place of contact, will cause no remarkable excess of the attraction arising from the contact of the two bodies. Therefore the proposition holds good in bodies of all figures. Q. E. D.

Proposition LXXXVI. Theorem XLIII.

If the forces of the particles of which an attractive body is composed, decrease, in the recess of the attracted body, in a triplicate or more than triplicate ratio of the distance from the particles; the attraction will be vastly stronger in the point of contact, than when the attracting and attracted bodies are separated from each other, though by never so small an interval.

For that the attraction is infinitely increased when the attracted corpuscle comes to touch an attracting sphere of this kind, appears by the solution of problem 41. exhibited in the second and third examples. The same will also appear (by comparing those examples and theorem 41. together) of attractions of bodies towards concavo-convex orbs, whether the attracted bodies be placed without the orbs, or in the cavities within them. And by adding to or taking from those spheres and orbs any attractive matter any where without the place of contact, so that the attractive bodies may receive any assigned figure, the proposition will hold good of all bodies universally. Q. E. D.

Proposition LXXXVII. Theorem XLIV.
If two bodies similar to each other, and consisting of matter equally attractive, attract separately two corpuscles proportional to those bodies, and in a like situation to them; the accelerative attractions of the corpuscles towards the entire bodies will be as the accelerative attractions of the corpuscles towards particles of the bodies proportional to the wholes, and alike situated in them.

For if the bodies are divided into particles proportional to the wholes, and alike situated in them, it will be, as the attraction towards any particle of one of the bodies to the attraction towards the correspondent particle in the other body, so are the attractions towards the several particles of the first body to the attractions towards the several correspondent particles of the other body; and, by composition, so is the attraction towards the first whole body to the attraction towards the second whole body. Q. E. D.

Cor. 1. Therefore, if as the distances of the corpuscles attracted increase, the attractive forces of the particles decrease in the ratio of any power of the distances; the accelerative attractions towards the whole bodies will be as the bodies directly, and those powers of the distances inversely. As if the forces of the particles decrease in a duplicate ratio of the distances from the corpuscles attracted, and the bodies are as $\scriptstyle A^3$ and $\scriptstyle B^3$, and therefore both the cubic sides of the bodies, and the distances of the attracted corpuscles from the bodies, are as A and B; the accelerative attractions towards the bodies will be as $\scriptstyle \frac {A^3}{A^2}$ and $\scriptstyle \frac {B^3}{B^2}$, that is, as A and B, the cubic sides of those bodies. If the forces of the particles decrease in a triplicate ratio of the distances from the attracted corpuscles; the accelerative attractions towards the whole bodies will be as $\scriptstyle \frac {A^3}{A^3}$ and $\scriptstyle \frac {B^3}{B^3}$, that is, equal. If the forces decrease in a quadruplicate ratio; the attractions towards the bodies will be as $\scriptstyle \frac {A^3}{A^4}$ and $\scriptstyle \frac {B^3}{B^4}$, that is, reciprocally as the cubic sides A and B. And so in other cases.

Cor. 2. Hence, on the other hand, from the forces with which like bodies attract corpuscles similarly situated, may be collected the ratio of the decrease of the attractive forces of the particles as the attracted corpuscle recedes from them; if so be that decrease is directly or inversely in any ratio of the distances.

Proposition LXXXVIII. Theorem XLV.

If the attractive forces of the equal particles of any body be as the distance of the places from the particles, the force of the whole body will tend to its centre of gravity; and will be the same with the force of a globe, consisting of similar and equal matter, and having its centre in the centre of gravity.

Let the particles A, B, (Pl. 23. Fig. 7.) of the body RSTV attract any corpuscle Z with forces which, supposing the particles to be equal between themselves, are as the distances AZ, BZ; but if they are supposed unequal, are as those particles and their distances AZ, BZ conjunctly, or (if I may so speak) as those particles drawn into their distances AZ, BZ respectively. And let those forces be expressed by the contents under A x AZ, and B x BZ. Join AB, and let it be cut in G, so that AG may be to BG as the particle B to the particle A; and G will be the common centre of gravity of the particles A and B.
The force A x AZ will (by cor. 2. of the laws) be resolved into the forces A x GZ and A x AG; and the force B x BZ into the forces B x GZ and B x BG. Now the forces A x AG and B x BG, because A is proportional to B, and BG to AG, are equal; and therefore, having contrary directions, destroy one another. There remain then the forces A x GZ and B x GZ. These tend from Z towards the centre G, and compose the force $\scriptstyle \overline {A + B} \times GZ$; that is, the same force as if the attractive particles A and B were placed in their common centre of gravity G, composing there a little globe. By the same reasoning, if there be added a third particle C, and the force of it be compounded with the force $\scriptstyle \overline {A + B} \times GZ$ tending to the centre G; the force thence arising will tend to the common centre of gravity of that globe in G and of the particle C; that is, to the common centre of gravity of the three particles A, B, C; and will be the same as if that globe and the particle C were placed in that common centre, composing a greater globe there. And so we may go on in infinitum. Therefore the whole force of all the particles of any body whatever RSTV, is the same as if the body, without removing its centre of gravity, were to put on the form of a globe. Q. E. D.

Cor. Hence the motion of the attracted body Z will be the same as if the attracting body RSTV were sphærical; and therefore if that attracting body be either at rest, or proceed uniformly in a right line, the body attracted will move in an ellipsis having its centre in the centre of gravity of the attracting body.

Proposition LXXXIX. Theorem XLVI.

If there be several bodies consisting of equal particles whose forces are as the distance of the places from each; the force compounded of all the forces by which any corpuscle is attracted, will tend to the common centre of gravity of the attracting bodies; and will be the same as if those attracting bodies, preserving their common centre of gravity, should unite there, and be formed into a globe.

This is demonstrated after the same manner as the foregoing proposition.

Cor. Therefore the motion of the attracted body will be the same as if the attracting bodies, preserving their common centre of gravity, should unite there, and be formed into a globe. And therefore if the common centre of gravity of the attracting bodies be either at rest, or proceeds uniformly in a right line, the attracted body will move in an ellipsis having its centre in the common centre of gravity of the attracting bodies.

Proposition XC. Problem XLIV.

If to the several points of any circle there tend equal centripetal forces, increasing or decreasing in any ratio of the distances; it is required to find the force with which a corpuscle is attracted, that is situate any where in a right line which stands at right angles to the plane of the circle at its centre.

Suppose a circle to be described about the centre A (Pl. 24. Fig. 1.) with any interval AD, in a plane to which the right line AP is perpendicular; and let it be required to find the force with which a corpuscle P is attracted towards the same. From any point E of the circle, to the attracted corpuscle P, let there be drawn the right line PE. In the right line PA take PF equal to PE, and make a perpendicular FK, erected at F, to be as the force with which the point E attracts the corpuscle P. And let the curve line IKL be the locus of the point K. Let that curve meet the plane of the circle in L.
In PA take PH equal to PD, and erect the perpendicular HI meeting that curve in I; and the attraction of the corpuscle P towards the circle will be as the area AHIL drawn into the altitude AP. Q. E. I.

For let there be taken in AE a very small line Ee. Join Pe, and in PE, PA take PC equal to Pe. And because the force, with which any point E of the annulus described about the centre A with the interval AE in the aforesaid plane attracts to itself the body P, is supposed to be as FK; and therefore the force with which that point attracts the body P towards A is as $\scriptstyle \frac {AP \times FK}{PE}$; and the force with which the whole annulus attracts the body P towards A, is as the annulus and $\scriptstyle \frac {AP \times FK}{PE}$ conjunctly; and that annulus also is as the rectangle under the radius AE and the breadth Ee, and this rectangle (because PE and AE, Ee and CE are proportional) is equal to the rectangle PE x CE or PE x Ff; the force with which that annulus attracts the body P towards A will be as PE x Ff and $\scriptstyle \frac {AP \times FK}{PE}$ conjunctly; that is, as the content under Ff x FK x AP, or as the area FKkf drawn into AP. And therefore the sum of the forces with which all the annuli, in the circle described about the centre A with the interval AD, attract the body P towards A, is as the whole area AHIKL drawn into AP. Q. E. D.

Cor. 1. Hence if the forces of the points decrease in the duplicate ratio of the distances, that is, if FK be as $\scriptstyle \frac {1}{PF^2}$, and therefore the area AHIKL as $\scriptstyle \frac {1}{PA} - \frac {1}{PH}$; the attraction of the corpuscle P towards the circle will be as $\scriptstyle 1 - \frac {PA}{PH}$; that is, as $\scriptstyle \frac {AH}{PH}$.

Cor. 2. And universally, if the forces of the points at the distances D be reciprocally as any power $\scriptstyle D^n$ of the distances; that is, if FK be as $\scriptstyle \frac {1}{D^n}$, and therefore the area AHIKL as $\scriptstyle \frac {1}{PA^{n - 1}} - \frac {1}{PH^{n - 1}}$; the attraction of the corpuscle P towards the circle will be as $\scriptstyle \frac {1}{PA^{n - 2}} - \frac {PA}{PH^{n - 1}}$.
revolved about the axis AB, and the centripetal forces tending to the several points be reciprocally as the squares of the distances from the points; the attraction of the corpuscle P towards this cylinder will be as AB - PE + PD. For the ordinate FK (by cor. 1. prop. 90.) will be as $\scriptstyle I - \frac {PF}{PR}$. The part I of this quantity, drawn into the length AB, describes the area I x AB; and the other part $\scriptstyle \frac {PF}{PR}$ drawn into the length PB, describes the area I into $\scriptstyle \overline {PE - AD}$ (as may be easily shewn from the quadrature of the curve LKI); and in like manner, the same part drawn into the length PA describes the area I into $\scriptstyle \ overline {PD - AD}$, and drawn into AB, the difference of PB and PA describes I into $\scriptstyle \overline {PE - PD}$, the difference of the areas. From the first content I x AB take away the last content I into $\scriptstyle \overline {PE - PD}$, and there will remain the area LABI equal to I into $\scriptstyle \overline {AB - PE + PD}$. Therefore the force being proportional to this area, is as $\scriptstyle {AB - PB + PD}$. Cor. 2. Hence also is known the force by which a spheroid AGBC (Pl. 24. Fig. 4.) attracts any body P situate externally in its axis AB. Let NKPM be a conic section whose ordinate ER perpendicular to PE, may be always equal to the length of the line PD, continually drawn to the point D in which that ordinate cuts the spheroid. From the vertices A, B, of the spheroid, let there be erected to its axis AB the perpendiculars AK, BM, respectively equal to AP, BP, and therefore meeting the conic section in K and M; and join KM cutting off from it the segment KMRK Let S be the centre of the s pheroid, and SC its greatest semi-diameter; and the force with which the spheroid attracts the body P, will be to the force with which a sphere described with the diameter AB attracts the same body, as $\scriptstyle \frac {AS \ times CS^2 - PS \times KMRK}{Ps^2 + CS^2 - AS^2}$ is to $\scriptstyle \frac {AS^3}{3PS^2}$. And by a calculation founded on the same principles may be found the forces of the segments of the spheroid. Cor. 3. If the corpuscle be placed within the spheroid and in its axis, the attraction will be as its distance from the centre. This may be easily collected from the following reasoning, whether the particle be in the axis or in any other given diameter. Let AGOF (Pl. 2.4. FQ. 5.) be an attracting spheroid, S its centre, and P the body attracted. Through the body P let there be drawn the s emi-diameter SPA, and two right lines DE, FG meeting the spheroid in D and E, F and G; and let PCM, HLN be the superficies of two interior spheroids similar and concentrical to the exterior, the firs t of which passes through the body P, and cuts the right lines DE, FG in B and C; and the latter cuts the same right lines in H and I, K and L. Let the spheroids have all one common axis, and the parts of the right lines intercepted on both sides DP and BE, FP and CG, DH and IE, FK and LG will be mutually equal; because the right lines DE, PB, and HI are bissected in the same points as are al so the right lines FG, PC and KL. Conceive now DPF, EPG to represent opposite cones described with the infinitely small vertical angles DPF, EPG, and the lines DH, EI to be infinitely small also. 
Then the particles of the cones DHKF, GLIE, cut off by the spheroidical superficies, by reason of the equality of the lines DH and EI, will be to one another as the squares of the distances from the body P, and will therefore attract that corpuscle equally. And by a like reasoning, if the spaces DPF, EGCB be divided into particles by the superficies of innumerable similar spheroids concentric to the former and having one common axis, all these particles will equally attract on both sides the body P towards contrary parts. Therefore the forces of the cone DPF, and of the conic segment EGCB, are equal, and by their contrariety destroy each other. And the case is the same of the forces of all the matter that lies without the interior spheroid PCBM. Therefore the body P is attracted by the interior spheroid PCBM alone, and therefore (by cor. 3. prop. 72.) its attraction is to the force with which the body A is attracted by the whole spheroid AGOF, as the distance PS to the distance AS. Q. E. D.

Proposition XCII. Problem XLVI.

An attracting body being given, it is required to find the ratio of the decrease of the centripetal forces tending to its several points.

The body given must be formed into a sphere, a cylinder, or some regular figure, whose law of attraction answering to any ratio of decrease may be found by prop. 80, 81, and 91. Then, by experiments, the force of the attractions must be found at several distances, and the law of attraction towards the whole, made known by that means, will give the ratio of the decrease of the forces of the several parts; which was to be found.

Proposition XCIII. Theorem XLVII.

If a solid be plane on one side, and infinitely extended on all other sides, and consist of equal particles equally attractive, whose forces decrease, in the recess from the solid, in the ratio of any power greater than the square of the distances; and a corpuscle placed towards either part of the plane is attracted by the force of the whole solid; I say, that the attractive force of the whole solid, in the recess from its plane superficies, will decrease in the ratio of a power whose side is the distance of the corpuscle from the plane, and its index less by 3 than the index of the power of the distance.

Case 1. Let LGI (Pl. 24. Fig. 6.) be the plane by which the solid is terminated. Let the solid lie on that hand of the plane that is towards I, and let it be resolved into innumerable planes mHM, nIN, oKO, &c. parallel to GL. And first let the attracted body C be placed without the solid. Let there be drawn CGHI perpendicular to those innumerable planes, and let the attractive forces of the points of the solid decrease in the ratio of a power of the distances whose index is the number n, not less than 3. Therefore (by cor. 3. prop. 90.) the force with which any plane mHM attracts the point C is reciprocally as $\scriptstyle CH^{n - 2}$. In the plane mHM take the length HM reciprocally proportional to $\scriptstyle CH^{n - 2}$, and that force will be as HM. In like manner in the several planes IGL, nIN, oKO, &c. take the lengths GL, IN, KO, &c. reciprocally proportional to $\scriptstyle CG^{n - 2}$, $\scriptstyle CI^{n - 2}$, $\scriptstyle CK^{n - 2}$, &c. and the forces of those planes will be as the lengths so taken, and therefore the sum of the forces as the sum of the lengths, that is, the force of the whole solid as the area GLOK produced infinitely towards OK.
But that area (by the known methods of quadratures) is reciprocally as $\scriptstyle CG^{n - 3}$, and therefore the force of the whole solid is reciprocally as $\scriptstyle CG^{n - 3}$. Q. E. D.

Case 2. Let the corpuscle C (Fig. 7.) be now placed on that hand of the plane IGL that is within the solid, and take the distance CK equal to the distance CG. And the part of the solid LGIoKO terminated by the parallel planes IGL, oKO, will attract the corpuscle, situate in the middle, neither one way nor another, the contrary actions of the opposite points destroying one another by reason of their equality. Therefore the corpuscle C is attracted by the force only of the solid situate beyond the plane OK. But this force (by case 1.) is reciprocally as $\scriptstyle CK^{n - 3}$, that is (because CG, CK are equal) reciprocally as $\scriptstyle CG^{n - 3}$. Q. E. D.

Cor. 1. Hence if the solid LGIN be terminated on each side by two infinite parallel planes LG, IN; its attractive force is known, subducting from the attractive force of the whole infinite solid LGKO the attractive force of the more distant part NIKO infinitely produced towards KO.

Cor. 2. If the more distant part of this solid be rejected, because its attraction compared with the attraction of the nearer part is inconsiderable; the attraction of that nearer part will, as the distance increases, decrease nearly in the ratio of the power $\scriptstyle CG^{n - 3}$.

Cor. 3. And hence if any finite body, plane on one side, attract a corpuscle situate over-against the middle of that plane, and the distance between the corpuscle and the plane compared with the dimensions of the attracting body be extremely small; and the attracting body consist of homogeneous particles, whose attractive forces decrease in the ratio of any power of the distances greater than the quadruplicate; the attractive force of the whole body will decrease very nearly in the ratio of a power whose side is that very small distance, and the index less by 3 than the index of the former power. This assertion does not hold good, however, of a body consisting of particles whose attractive forces decrease in the ratio of the triplicate power of the distances; because, in that case, the attraction of the remoter part of the infinite body in the second corollary is always infinitely greater than the attraction of the nearer part.

Scholium.

If a body is attracted perpendicularly towards a given plane, and from the law of attraction given the motion of the body be required; the problem will be solved by seeking (by prop. 39.) the motion of the body descending in a right line towards that plane, and (by cor. 2. of the laws) compounding that motion with an uniform motion, performed in the direction of lines parallel to that plane. And on the contrary, if there be required the law of the attraction tending towards the plane in perpendicular directions, by which the body may be caused to move in any given curve line, the problem will be solved by working after the manner of the third problem. But the operations may be contracted by resolving the ordinates into converging series.
As if to a base A the length B be ordinately applied in any given angle, and that length be as any power of the base, $\scriptstyle A^{\frac mn}$; and there be sought the force with which a body, either attracted towards the base or driven from it in the direction of that ordinate, may be caused to move in the curve line which that ordinate always describes with its superior extremity; I suppose the base to be increased by a very small part O, and I resolve the ordinate $\scriptstyle \overline {A+O} \vert^{\frac mn}$ into an infinite series $\scriptstyle A^{\frac mn} + \frac mn OA^{\frac {m - n}{n}} + \frac {mm - mn}{2nn} OOA^{\frac {m - 2n}{n}}$ &c. and I suppose the force proportional to the term of this series in which O is of two dimensions, that is, to the term $\scriptstyle \frac {mm - mn}{2nn} OOA^{\frac {m - 2n}{n}}$. Therefore the force sought is as $\scriptstyle \frac {mm - mn}{2nn} A^{\frac {m - 2n}{n}}$, or, which is the same thing, as $\scriptstyle \frac {mm - mn}{2nn} B^{\frac {m - 2n}{m}}$. As if the ordinate describe a parabola, m being = 2, and n = 1, the force will be as the given quantity $\scriptstyle 2B^0$, and therefore is given. Therefore with a given force the body will move in a parabola, as Galileo has demonstrated. If the ordinate describe an hyperbola, m being = -1, and n = 1, the force will be as $\scriptstyle 2A^{-3}$ or $\scriptstyle 2B^3$; and therefore a force which is as the cube of the ordinate will cause the body to move in an hyperbola. But leaving this kind of propositions, I shall go on to some others relating to motion which I have not yet touched upon.
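Proposition XC, Cor. 1 admits a quick numerical test: under an inverse-square force, the axial attraction of a uniform disc of radius AD on a point at height PA is proportional to 1 - PA/PH, with PH = sqrt(PA^2 + AD^2). A modern check by direct integration (my own sketch, obviously not part of the 1729 text):

```python
# Verify Principia Book I, Prop. XC, Cor. 1 by direct integration.
from math import pi, sqrt
from scipy.integrate import quad

PA, AD = 1.0, 2.0  # height of the corpuscle P above the disc, and disc radius

# Axial force from annuli of radius r (unit surface density and force constant):
# dF = 2*pi*r * PA / (PA^2 + r^2)^(3/2) dr
force, _ = quad(lambda r: 2 * pi * r * PA / (PA**2 + r**2) ** 1.5, 0, AD)

PH = sqrt(PA**2 + AD**2)
print(force, 2 * pi * (1 - PA / PH))  # both ~3.4733; the corollary checks out
```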
{"url":"http://en.wikisource.org/wiki/The_Mathematical_Principles_of_Natural_Philosophy_(1729)/Book_1/Section_13","timestamp":"2014-04-16T18:08:09Z","content_type":null,"content_length":"89918","record_id":"<urn:uuid:f34d6781-db7c-4490-8ec6-b082d147b6da>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic addition/simplification prob.

May 13th 2011, 05:05 AM
$\frac{a}{ab+b^{2}}+\frac{b}{a^{2}+ab}$

I get the principle, i.e. multiply denominators to find a common denominator, add, factor, simplify. I've done loads of examples, but for some reason I just can't get this one. I know it's relatively straightforward; I think my problem is somewhere in the simplification/indices area. I get as far as:

$=\frac{(a)(a^{2}+ab) + (b)(ab+b^{2})}{(ab+b^{2})(a^{2}+ab)}$

$=\frac{(a^{3}+a^{2}b)+(ab^{2}+b^{3})}{(ab+b^{2})(a^{2}+ab)}$

At this point it all seems to get overly complicated. Anybody break it down a bit for me?

May 13th 2011, 05:09 AM
Always look to see if you can simplify before plowing ahead with fractions. You didn't do anything wrong, but you're making life a bit harder for yourself than you need to. Here's what I would do:

$\frac{a}{ab+b^{2}}+\frac{b}{a^{2}+ab}=\frac{a}{b(a+b)}+\frac{b}{a(a+b)}.$

Does that make things a bit easier?

May 13th 2011, 05:56 AM
Thanks, this was one of the many roads I started down, but never quite got to the end of. I can at least get to the answer now, but remain a bit unconvinced by my methodology. Am I right in thinking that $b(a+b)$ and $a(a+b)$ give the common denominator $ab(a+b)$, in which case I only need to multiply each numerator by the 'missing' bit of the denominator, i.e.

$\frac{a}{b(a+b)} \times \frac{a}{a} = \frac{a^{2}}{ab(a+b)}$

and the same again for the other half?

May 13th 2011, 06:01 AM
Exactly right. Be convinced!
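The whole computation can be confirmed symbolically; a short sketch (mine, not from the thread):

```python
# Confirm a/(ab + b^2) + b/(a^2 + ab) == (a^2 + b^2) / (a*b*(a + b)).
from sympy import symbols, together, factor

a, b = symbols("a b", positive=True)
expr = a / (a*b + b**2) + b / (a**2 + a*b)
print(factor(together(expr)))  # expected: (a**2 + b**2)/(a*b*(a + b))
```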
{"url":"http://mathhelpforum.com/algebra/180410-algebraic-addition-simplification-prob-print.html","timestamp":"2014-04-17T23:08:03Z","content_type":null,"content_length":"7157","record_id":"<urn:uuid:bf97f91c-0c7a-4930-866a-566787fd4f59>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the volume of the object using method of shells.
November 23rd 2009, 12:53 PM
Find the volume of the object using method of shells.
Hello all. My problem is this: Rotate the region bounded by the given curves about the line indicated. Obtain the volume of the solid by the method of shells. (Give your answer in terms of π.)
$x=\sqrt{4-y^2}$, y-axis, x-axis. About the y-axis.
I've been working on this for hours and cannot get it. Help would be appreciated.
November 23rd 2009, 01:18 PM
When using shells, the cross sections are parallel to the axis we are revolving about. In this case, we are revolving about y, so the cross sections will be parallel to the y-axis and 'stacked up' along the x-axis. So, integrate in terms of x. What we have here is a hemisphere of radius 2. Solving the given equation for y in terms of x gives:
$y=\sqrt{4-x^2}$
Since we are bounded by the axes, we are in the positive region:
$V=2\pi\int_0^2 x\sqrt{4-x^2}\,dx$
Check your result using the volume of a hemisphere formula, $V=\frac{2}{3}\pi r^{3}$.
November 23rd 2009, 01:30 PM
Hello all. My problem is this: Rotate the region bounded by the given curves about the line indicated. Obtain the volume of the solid by the method of shells. (Give your answer in terms of π.)
$x=\sqrt{4-y^2}$, y-axis, x-axis. About the y-axis.
I've been working on this for hours and cannot get it. Help would be appreciated.
note that the graph of this relation is a semicircle in quads I and IV. the solid formed by rotating the graphed region about the y-axis will yield a sphere. what confuses me is the statement that the x-axis is a boundary for the region ... if so, then I suppose the region could either be the quarter circle in quad I or in quad IV. if that is the case, the volume of the solid hemisphere formed by using cylindrical shells is ...
$V = 2\pi \int_0^2 x \sqrt{4 - x^2} \, dx$
if the whole sphere is required, double the result.
November 23rd 2009, 01:59 PM
First of all, thanks for the quick replies; that was really helpful and I was able to complete the problem! Is it possible I could get help with 1 more problem? It is the same type of problem again but slightly different. It is the following:
Use the method of shells to find the volume of the solid obtained by revolving the region bounded by the given curves about the y-axis. (Give your answer in terms of π.)
$y= -x^2+13x-40$, x=0, x=8, x-axis.
I know that I have to break it into a left half and right half and then add the two integrals together, but I'm not sure if I have them set up correctly. This problem seems extremely long the way I've been going about it, and I feel like I keep making mistakes somewhere. If I knew how to enter in all my work that I've done I would, but I can't figure it out atm. Any help would be appreciated.
November 23rd 2009, 02:31 PM
Note that when we factor, we get $x^{2}-13x+40=(x-5)(x-8)$.
This tells us where to break up the integration limits.
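To finish the second problem under that reading (revolution about the y-axis, which I am inferring, since the axis itself was an image lost from the page): the shell height is $|y|$, and $y=-x^2+13x-40$ is negative on $(0,5)$ and positive on $(5,8)$, so

$$V = 2\pi\int_0^5 x\left(x^2-13x+40\right)dx \;+\; 2\pi\int_5^8 x\left(-x^2+13x-40\right)dx = 2\pi\left(\frac{1375}{12}+\frac{351}{12}\right) = \frac{863\pi}{3}.$$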
{"url":"http://mathhelpforum.com/calculus/116340-find-volume-object-using-method-shells-print.html","timestamp":"2014-04-20T20:17:16Z","content_type":null,"content_length":"10247","record_id":"<urn:uuid:63f62899-0e9f-4064-90b9-d0d6df619f1d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Beam bending under a uniformly distributed load

1. The problem statement, all variables and given/known data
I have a bumper and I am trying to determine whether or not the rectangular tubing I am using to build it is strong enough to withstand a given load (rear-end collision). The horizontal member is 4x3x.1875 tubing (4" base, 3" high, 95" long). Two 3x3x.1875 tubes are used as supports, with the center of these supports being 32" from each side. If it weren't for the horizontal edge pieces hanging past the supports, it would fit the model for a beam that is fixed at both ends. This bumper must be designed to withstand 66,000 lb of force using a safety factor of 4 based on the tensile strength of the material used (SA 36: 58,000 psi).

2. Relevant equations
σ = 58,000 psi / 4 = 14,500 psi
M = wl^2/12 (maximum bending moment for a fixed-fixed beam under a uniformly distributed load)
w = distributed load per longitudinal unit
l = 28 inches (distance between supports)
S = M/σ (elastic section modulus)

3. The attempt at a solution
Because the horizontal member stretches past the supports, can I distribute the total load across the entire member? Example:
w = 66,000 lb / 95 in = 694.74 lb/in (load distributed across the entire length)
M = 694.74 lb/in * 28^2 in^2 / 12 = 45,389.7 in-lb (bending moment, concerned only with the length between the two supports)
S = 45,389.7 in-lb / 14,500 psi = 3.13 in^3
Now calculate the elastic section modulus based upon the shape of the tubing, where:
b = 3 in
d = 4 in
b1 = 2.625 in
d1 = 3.625 in
S = (bd^3 - b1d1^3)/(6d) = 2.79 in^3
Since the actual S is less than the required S, it seems that this member is not strong enough. Is this the correct procedure? Thanks for your help!
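A quick way to re-run the poster's arithmetic (same formulas and inputs as the post; the class and variable names are mine, and this only checks the numbers, not the modelling assumptions) is a few lines of Java:

public class BumperCheck {
    public static void main(String[] args) {
        double sigmaAllow = 58000.0 / 4.0;      // psi: tensile strength over safety factor
        double w = 66000.0 / 95.0;              // lb/in: load spread over the full 95 in member
        double span = 28.0;                     // in: span between supports used by the poster
        double M = w * span * span / 12.0;      // in-lb: max moment, fixed-fixed, uniform load
        double sReq = M / sigmaAllow;           // in^3: required section modulus
        // Rectangular tube, 4x3x0.1875: outer b x d, inner b1 x d1
        double b = 3.0, d = 4.0, b1 = 2.625, d1 = 3.625;
        double sAct = (b * d * d * d - b1 * d1 * d1 * d1) / (6.0 * d);
        System.out.printf("required S = %.2f in^3, actual S = %.2f in^3%n", sReq, sAct);
        System.out.println(sAct >= sReq ? "section is adequate" : "section is NOT adequate");
    }
}

This prints a required modulus of about 3.13 in^3 against an actual 2.79 in^3, confirming the poster's conclusion.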
{"url":"http://www.physicsforums.com/showthread.php?t=589450","timestamp":"2014-04-18T23:18:44Z","content_type":null,"content_length":"24937","record_id":"<urn:uuid:d106a74b-5d34-46ce-ab12-16459080d559>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Drawing plane and coordinate system - maths online Gallery

The applet "Coordinate system" illustrates the connection between the position of a point and its coordinates in slightly more complex situations than the previous applet. The user can mark points, draw straight lines and read off the coordinates of the cursor position. Thus the geometric aspects of certain problems (finding the intersection point of two straight lines) become linked with algebraic structures (the solution being a pair of numbers). The applet is started from the red button in its own window.
{"url":"http://www.univie.ac.at/future.media/moe/galerie/zeich/zeich.html","timestamp":"2014-04-16T13:25:33Z","content_type":null,"content_length":"5263","record_id":"<urn:uuid:72365333-d5d1-4d83-9160-82cc924151eb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Trial Division Loop Problem
October 6th, 2012, 04:41 PM #1
Hello everybody! I've been working on a trial division loop problem and I've hit a dead end. Before I write about how I attempted the problem before coming here, I have to show you guys the problem.
THE PROBLEM-------------------------------
Find and print out the eight prime integers between 99,900 and 99,999 using trial division (testing divisibility by possible factors). Write two nested loops: The outer loop will iterate over the 100 numbers being tested for primeness; the inner loop will check potential factors, from 2 up to the square root of the large dividend number (don't worry about which ones are or aren't prime). Use the modulo operator % to test for divisibility, and stop testing factors — cut the inner loop short — after finding one. Example: the first dividend to test is 99,900; its potential factors run from 2 to 316. Since ( 99900 % 2 == 0 ), it is not prime, so do not check factors 3 and higher, but exit the inner loop and go directly to processing 99,901.
SO TO TACKLE THIS PROBLEM. Thanks to jps for the advice on writing out the steps.
1. I set the min value to 99900 and the max value to 99999, and my outer loop will iterate 100 times.
2. The outer loop also has to test for primeness, so I wrote an "if statement" to check for divisibility and, if it is divisible by 2, a continue statement to end that loop and continue to the next one.
3. I wrote the inner loop to test for factors by trial division.
4. The loop goes wrong and won't print the prime numbers.. anybody have any ideas on what's wrong?
Here's my loop:
/* This program is designed to find and print out the eight prime integers between 99,900 and 99,999 using trial division */
// instructions = write two nested loops
// the outer loop will iterate over the 100 numbers being tested for primeness
// the inner loop will check potential factors from 2 up to the square root of the large dividend numbers.
// This program will use % to test for divisibility
public class assignment7
{
    public static void main(String args[])
    {
        // The two numbers
        int divider = 2;
        int min;
        int max = 99999;
        // Loop iteration of the 100 numbers
        for (min = 99900; min < max; min++)
        {
            for (divider = 2; divider <= Math.sqrt(min); divider++)
            {
                if (min % divider == 0)
                {
                    System.out.println("Prime: " + min);
                }
            }
        }
    }
}
Any ideas or advice?
Last edited by Wwong3333; October 7th, 2012 at 02:44 AM.
Sort the problem out into steps.
from m=Min to Max
test m for prime
if m is found to not be prime during tests, stop testing m and move to m+1
otherwise all tests failed, m must be prime, PRINT m OR SAVE m FOR PRINTING LATER ON
The caps part marks the place in your loops where you would remember or print a prime number.
Thanks for the reply! I understand the steps you gave me.. but is there a way in Java where you can assign a variable a certain range? You put m = Min to Max.
The direct answer to the question is no. What I wrote up is pseudo code. That line would refer to the portion of your code where you set a for loop to run from 99900 to 99999, where Min=99900 and Max=99999.
Using m as the "current number up for evaluation" as the loop progresses. So first cycle m=99900. Second cycle m=99901. ...and so on.
Ahh! Pseudocode. Thanks for the steps you wrote down for me. I wrote an outer loop that runs from 99900 to 99999, and wrote an inner loop to test for primeness. However it goes wrong, and debugging is giving me major headaches.. any tips on how to fix it?
Nice. Does it work? This looks like a good place to do a println(indexOfTheLoop) just to make sure it prints 99900, 99901, 99902, ..., 99999 before moving on.
Nice. Does this part work? Yet another stop-to-test point the way I see it. You could println(allNumbersFoundToBePrime) just to have a list to compare and see that you are getting only primes, and all primes. There are many lists of prime numbers available through your favorite search engine to compare to.
That does not sound good. But other than that, it does not tell us much. Nothing specific. My old eyes cannot see your screen from here. Please post your code and error messages when posting a question. You can use comments in code to draw attention to areas you think need attention.
Ah sorry, I'm new to using forums.
// Loop iteration of the 100 numbers
for(min = 99900; min < max; min++)
The outer loop works fine. It iterates 100 times from 99900 to 99999. However, the problem resides in the inner loop.
for(divider = 2; divider <= Math.sqrt(min); divider++)
if(min % divider == 0)
System.out.println("Prime number: "min);
I've tried using while & do-while loops but no output comes out. I am confused because my program outputs a whole bunch of numbers, including even ones. As you are an expert, do you see any errors or flaws within my inner loop? Sorry.. I would try to work it out by myself, but I spent 4 hours trying new things and debugging and nothing works.. also not to mention feeling like a complete failure after all that hard work over 30 lines of code =\
Last edited by Wwong3333; October 7th, 2012 at 11:01 PM.
Seems to be a syntax error on this line:
System.out.println("Prime number: "min);
I assume that was related to transferring the code to the forum, since you said the program prints, and obviously that line will not compile. Try to break the pseudo code down into smaller steps. When every detail is listed, translating to code will be much easier. What is the output exactly? Compare the output you are getting to the code.
What line of code says even numbers are not prime, and should be ignored? If you can step through the code, inserting values in place of the variables, line by line, and see what the code is doing, you will eventually see the point where the code does something that does not match what you thought it would do. When posting code and questions, it is also a great idea to include the output from your sample run.
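The thread never shows a finished version. For reference, here is one way the assignment's nested-loop approach could look once the inner-loop logic is inverted (record that a divisor was found, and print only when none was); this is a sketch of the intended technique, not the original poster's final code:

public class TrialDivision {
    public static void main(String[] args) {
        // Test every candidate from 99,900 through 99,999 inclusive.
        for (int n = 99900; n <= 99999; n++) {
            boolean prime = true;
            // Trial division by 2 .. sqrt(n); one hit proves n composite.
            for (int d = 2; d <= Math.sqrt(n); d++) {
                if (n % d == 0) {
                    prime = false;
                    break; // cut the inner loop short after the first factor
                }
            }
            if (prime) {
                System.out.println("Prime: " + n);
            }
        }
    }
}

Note the two fixes relative to the posted attempt: the outer loop uses <= so 99,999 itself is tested, and the println moves outside the inner loop behind a flag, so a number is printed only after every trial divisor has failed.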
{"url":"http://www.javaprogrammingforums.com/whats-wrong-my-code/18159-trial-division-loop-problem.html","timestamp":"2014-04-16T11:30:08Z","content_type":null,"content_length":"97852","record_id":"<urn:uuid:031f1f37-be78-467a-85ce-6eabd3eef194>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Liquid-propellant rocket PDF

Pdf Size: 2.67 MB | Book Pages: 174
S AND REQUIREMENTS – Vol. II - Liquid Propellant Rocket Engines - V.M. Polyaev and V
Pdf Size: 4.86 MB | Book Pages: 167
Memorial Meeting of the Northern Section, Japan Society for Aeronautical and Space Sciences, Sendai, Japan, March 7-9, 2007. Scaling of Performance in Liquid
Pdf Size: 6.68 MB | Book Pages: 118
2nd Meeting of Users, UNED, Madrid 24-25 February 2003 C219-1 PRELIMINARY DESIGN AND SIMULATION OF A LIQUID PROPELLANT ROCKET ENGINE Germán García Gómez
Pdf Size: 6.96 MB | Book Pages: 240
Approval of the thesis: ANALYSIS OF REGENERATIVE COOLING IN LIQUID PROPELLANT ROCKETS submitted by MUSTAFA EMRE BOYSAN in partial fulfillment of the
Pdf Size: 2.86 MB | Book Pages: 112
Overview of Instabilities in Liquid-Propellant Rocket Engines, Fred E. C. Culick, Chapter 1, California Institute of Technology, Pasadena, California 91125
Pdf Size: 6.87 MB | Book Pages: 194
Liquid Propellant Rocket - 1926 (8) ... 1944 (9) ... launch of Explorer 1 - 1958 (9, 10, 27) ... Redstone - 1961 (10, 29) Delta, Scout - 1960 (27)
Pdf Size: 5.82 MB | Book Pages: 219
his liquid-propellant rockets grew bigger and flew higher. He also developed a gyroscope system for flight control, a payload compartment, and a parachute recovery
Pdf Size: 4.29 MB | Book Pages: 142
16.512, Rocket Propulsion, Prof. Manuel Martinez-Sanchez. Lecture 9: Liquid Cooling. Cooling of Liquid Propellant Rockets. We consider only bi-propellant liquid rockets
Pdf Size: 1.62 MB | Book Pages: 122
in Liquid Propellant Rockets," Proceedings of the Eighth Symposium (International) on Combustion, Combustion Inst., Pittsburgh, PA, 1962, pp. 1140-1151.
Pdf Size: 6.87 MB | Book Pages: 113
Experimental Investigation of Liquid Helium Pressurization Method for Liquid Propellant Rocket, Sehwan IN, Sangkwon JEONG, Youngkwon KIM
Pdf Size: 5.15 MB | Book Pages: 186
Anatomy of a Liquid Propellant Rocket worksheet either alone, in groups, or as an entire class. 2. Divide students into groups of six.
Pdf Size: 4.96 MB | Book Pages: 115
majority of liquid-propellant rockets use the bipropellant arrangement. In some cases, the liquid oxidizer and fuel can exist together in the same storage tank.
Pdf Size: 5.05 MB | Book Pages: 245
LPRE = Liquid Propellant Rocket Engine; mol = Mole; MnO2 = Manganese (IV) dioxide; MOSFET = Metal oxide semiconductor field effect transistor; MSDS
Pdf Size: 5.34 MB | Book Pages: 139
Design of Liquid Propellant Rocket Engine, Huzel, D. K. & Huang, D. H., NASA SP-125. 2. Heterogeneous: Wolfhard, H. G.,
Pdf Size: 2.96 MB | Book Pages: 119
advocated liquid propellant rocket engines, orbital space stations, solar energy, and colonization of the Solar system. His most famous
Pdf Size: 6.01 MB | Book Pages: 171
Liquid Propellant Rockets. Liquid propellant rockets are an invention of the twentieth century. They are far more complex than solid rockets. Generally, a liquid
Pdf Size: 6.96 MB | Book Pages: 177
Design of Liquid-Propellant Rocket Engines, AIAA, Washington, DC, 1992, pp. 84–104. [116] Sellers, J. P., "Effect of Carbon on Heat Transfer in a
Pdf Size: 6.96 MB | Book Pages: 92
Nature and Purpose: Individual components of liquid propellant produc- ... sary to produce solid rocket propellant are complex and specialized.
Pdf Size: 6.58 MB | Book Pages: 158
Introduction to Solid Rocket Propulsion, P. Kuentzmann. ... been put into orbit by a liquid propellant launcher (R7 Semiorka, October 1957); the first successful US
Pdf Size: 6.39 MB | Book Pages: 150
competitive in performance to liquid propellants, but at much reduced cost, and to REPLACE the use of liquid propellant rocket engines for many uses.
Pdf Size: 4.01 MB | Book Pages: 99
behaviour of the liquid propellant that circulates ... Liquid Propellant Rocket Instability, NASA, Washington. [2] Huzel, D. K., (1971) Design of Liquid
Pdf Size: 5.34 MB | Book Pages: 210
Liquid Propellant Rockets: Separate Liquid Fuel and Liquid Oxidizer; Typical Fuel/Oxidizer Combinations; Kerosene/Liquid Oxygen (LOX); Liquid Hydrogen/LOX
Pdf Size: 2.1 MB | Book Pages: 129
Introduction to Liquid Propellant Performance, Utility and Applications. Solid rocket motor materials, propellant grains and construction are described.
Pdf Size: 7.15 MB | Book Pages: 119
Liquid propellant rockets are an invention of the twentieth century. They are far more complex than solid rockets. Generally, a liquid rocket
Pdf Size: 7.06 MB | Book Pages: 132
Goddard's experiments in liquid-propellant rockets continued for many years. His rockets grew bigger and flew higher. He developed a gyroscope
Pdf Size: 4.96 MB | Book Pages: 176
liquid propellant rocket engine development, instability has been a major issue. High frequency combustion instability (HFCI) is the interaction be-
Pdf Size: 2.19 MB | Book Pages: 204
how to design, build and test small liquid-fuel rockets. Rocketlab / China Lake, California
Pdf Size: 2.77 MB | Book Pages: 140
Liquid Propellant - Rocket propellants in liquid form. Mass - The amount of matter contained in an object. Mass Fraction - The mass of propellants in a
Pdf Size: 5.34 MB | Book Pages: 172
Engines for liquid propellant rockets. We present results for an alcohol/LOX propellant mixture, some discussion on its applications on rocket engines.
Pdf Size: 3.62 MB | Book Pages: 154
would not cover some smaller liquid propellant rockets with limited thrust, but longer burn times.
Pdf Size: 6.68 MB | Book Pages: 83
By contrast, in a hybrid solid/liquid-propellant rocket, a liquid oxidizer is sprayed onto a solid-fuel grain during the
Pdf Size: 4.29 MB | Book Pages: 117
Liquid rockets (or liquid-propellant rockets), which use one or more types of liquid propellants that are held in tanks prior to burning. 3.
Pdf Size: 5.72 MB | Book Pages: 200
Abstract: Liquid propellant rockets are very complex machinery and require immense research time to obtain the best performance possible.
Pdf Size: 3.43 MB | Book Pages: 246
Liquid-Propellant Rockets in order to optimize performance. ... is unreasonably low for the envelope design of the CCA. In order to choose the Lf fraction
Pdf Size: 5.72 MB | Book Pages: 232
4.2.5 Future Developments in Liquid Propellant Technology. LAUNCH PROPULSION ... for launching rockets with the following characteristics:
Pdf Size: 3.34 MB | Book Pages: 71
Liquid propellant rockets usable in "missiles", not controlled in 9A005, having a total impulse capacity equal to or greater than 1.1 MNs; b.
Pdf Size: 5.53 MB | Book Pages: 181
AE 6410 - Dynamics Course, II. Historical Overview. From Liquid Propellant Rocket Combustion Instability, Ed. Harrje and Reardon, NASA Publication SP-194
Pdf Size: 2.19 MB | Book Pages: 214
Progress on his work was described in "Liquid Propellant Rocket Development", published by the Smithsonian in 1936. During the late 1930's,
Pdf Size: 7.06 MB | Book Pages: 85
Keywords: supercritical, liquid propellant rockets, jet phenomena. INTRODUCTION: The demand for performance enhancement and design optimization for
Pdf Size: 4.29 MB | Book Pages: 229
solid and all-liquid propellant rockets. However, problems with accurate calibration and reliable operation, safety
{"url":"http://handbook2.com/l/liquid-propellant-rocket","timestamp":"2014-04-17T12:35:57Z","content_type":null,"content_length":"48021","record_id":"<urn:uuid:87ea76d5-7941-4918-90f4-6917786aae93>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Dear colleagues in the mathematics and statistics community,

This year we held the second annual Discovery Grant survey, collecting basic information as to the outcomes of the EG 1508 (Math & Stats) DG competition. While we do not have absolutely full data, we have had a very good response rate to our requests for information, and have been able to come up with a basic picture of the outcome of this year's competition, data which is given below. Our impression is that overall it was a much more orderly and fair outcome than that of 2011, with evenly distributed bin levels and success rates within reason. Part of this is that the total budget coming into the EG 1508 for the competition was not unreasonably and unexpectedly cut, while another part might be that there were a lower number of applicants to EG 1508 than last year. A further point is that bin values for mathematics grants were not different from statistics grants for the same bin. Another noteworthy feature is the number of members of mathematics and statistics departments who are applying to EGs other than EG 1508, with good success rates and good award amounts. This testifies to the continuing underfunding of math & stats grants as compared with our fellow scientists. Because of the sensitivity of our community to the value of research performed at smaller institutions, we have also subdivided the results of this competition by institution size. Since our data is not absolutely complete, the numbers below are underestimates of the total applicants, and since the missing data is principally from smaller institutions, the success rates and grant award averages are probably overestimates.

Best regards,
Walter Craig for the MNLC

Bin levels ($ values) for EG 1508
• A = $60K
• B = $52-51K
• C = $48-45K
• D = $40K
• E = $35K
• F = $30K
• G = $26-24K
• H = $22-20K
• I = $17-15K
• J = $13-12K
• below this received $0K

Note: The variation in some bins by one or two $K is probably partially due to small supplements to early career researchers, as in an NSERC-Contact announcement of October 2011. Some senior researchers have also received small additional increments, which we understand less well, although at the top of the scale this is probably due to 'grandfathering' of prior awards.
Basic data
Total applications from Math and Stats departments = 235 (this is a lower bound, because of underreporting in our survey)
Total applications to EG 1508 = 211
Total grants awarded in EG 1508 = 150
Total grants awarded from other EGs = 20
Total budget (annual) allocated by EG 1508 in this competition = $3,238K
Bin populations: A=1, B=3, C=5, D=1, E=8, F=16, G=21, H=25, I=32, J=38

Statistical data
EG 1508 Math: #applicants = 151 (again a lower bound, because of underreporting), average grant $22.7K, #grants = 111, success rate 73.5%
EG 1508 Statistics: #applicants = 60 (ditto), average grant $18.4K, #grants = 39, success rate 65%
Other EGs: #applicants = 24 (even harder to verify completeness; this is a lower bound), average grant $28.2K, #grants = 20, success rate 83.3%
Percent of applications to other EGs: 10%+

Data subdivided by institution size

EG 1508 Mathematics:
Size     Av Grant ($K)   Success Rate (%)   # Apps
3        28.31           80.00              60
2        19.22           70.59              69
1        15.71           63.64              22
Overall  22.71           73.51              151

EG 1508 Statistics:
Size     Av Grant ($K)   Success Rate (%)   # Apps
3        19.65           85.00              20
2        17.65           62.50              32
1        15.00           25.00              8
Overall  18.38           65.00              60

Size categories: Institutions are considered here to be of size 3 when total NSERC (annualized) grants exceed $3M, of size 2 when total NSERC grants are between this and $225K, and below this, size 1.

Note: In putting together this statistical data subdivided by institution size, we have (somewhat arbitrarily) classified institution size in terms of its total NSERC grant profile. Maybe it would have been better to do this with the tri-council profile instead, but this information is less available. In any case, a list of total NSERC grant awards per institution is available either from us, or else from NSERC.
{"url":"https://nmlc.math.ca/blog/","timestamp":"2014-04-18T20:43:52Z","content_type":null,"content_length":"116221","record_id":"<urn:uuid:938bafe4-00c1-4fa0-8788-635648a1cbc4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Erdos Conjecture
I know that the Erdos Conjecture on arithmetic progressions has been covered before on this forum, but I have a new idea for the problem. First, so everyone knows what conjecture I'm talking about: the conjecture states that if the sum of the reciprocals of a sequence diverges, then it must contain arithmetic progressions of arbitrary length. For now I'll work on the case of an arithmetic progression of length 3, because it is a little easier, and this sub-problem also hasn't been solved yet. So my idea is: If the conjecture is false, then there exists a sequence with no arithmetic progressions of length 3 such that the sum of the reciprocals of the series diverges. However, if the conjecture is in fact true, then no matter what sequence you have that has no arithmetic progression, it will converge, and it will converge to a number at or below a specific constant. I will call this (hypothetical) constant c. I had another idea: if we use the greedy sequence, in which each term is the smallest integer that creates no arithmetic progression of length 3 with the terms so far, then that will provide a good lower bound for this constant, and might be the sequence that converges (by this I mean that the sum of the reciprocals converges, but it is easier to just say converges) to c. This is where you guys come in. I am hopeless at figuring out the summation, so I need one of you to compute it for me. Thanks in advance!
Re: Erdos Conjecture
Finding this sequence might be very tricky on its own. But even if it is possible: It is not trivial that this sequence gives you an upper bound for the numbers. In fact, it is not clear whether such an upper bound even exists. The conjecture could be true, even if there are sets without arithmetic progressions where the sum of inverses can reach arbitrarily large numbers (although I doubt that). After getting some numbers by hand, OEIS found the sequence you are looking for: A003278
Re: Erdos Conjecture
After calculating the first 2^17 terms of the sequence, I have found that my sum (let's call it c') is >3.00545. I have found this by using the recurrence a[n] = 3^m + a[n - 2^m], where m is selected such that 2^m < n <= 2^(m+1). I have no idea how to find the complete sum; I have just been using Microsoft Excel the whole time.
Re: Erdos Conjecture
I'm not sure there's any basis for the following statement...
If the conjecture is false, then there exists a sequence with no arithmetic progressions of length 3 such that the sum of the reciprocals of the series diverges.
It could be that in every sum of the reciprocals of a series that diverges there is always an arithmetic progression of length 3 but not always one of length 4. Correct me if I'm mistaken.
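The sequence A003278 has a closed form that makes the poster's Excel computation a few lines of code: its nth term is obtained by writing n-1 in binary, reading those digits in base 3, and adding 1. A sketch (class and method names mine), which should reproduce the >3.005 partial sum reported above:

public class StanleySum {
    // a(n) for A003278: write n-1 in binary, reinterpret it in base 3, add 1.
    static long term(long n) {
        long m = n - 1, value = 0, pow = 1;
        while (m > 0) {
            value += (m & 1) * pow; // copy each binary digit of n-1 ...
            pow *= 3;               // ... into the corresponding base-3 place
            m >>= 1;
        }
        return value + 1;           // a(1)=1, a(2)=2, a(3)=4, a(4)=5, a(5)=10, ...
    }

    public static void main(String[] args) {
        double sum = 0;
        for (long n = 1; n <= (1L << 17); n++) {
            sum += 1.0 / term(n);
        }
        System.out.printf("partial sum of reciprocals after 2^17 terms: %.5f%n", sum);
    }
}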
I'm sorry, I am working on an easier version that is a subproblem of the original, so my new subconjecture is this: If the sum of the reciprocals of a sequence diverges, then there is an arithmetic progression of length 3. I will work on the full problem once I know that I can solve this. The subconjecture is also unproved, so I will test to see if my method works for length 3, then extend to further results. Sorry for the misunderstanding.
Re: Erdos Conjecture
Alright. Never mind.
Re: Erdos Conjecture
tomtom2357 wrote: No matter what sequence you have that has no arithmetic progression, it will converge, and it will converge to a number at or below a specific constant. I will call this (hypothetical) constant c.
Do you have any reason why that should be true? As an analogy, the sum of 1/r^n converges for any r less than 1, but there is no upper bound on what this can converge to. Finding such a c would indeed prove the conjecture, but the existence of such a c is a stronger statement than the conjecture itself.
Re: Erdos Conjecture
mike-l wrote: Do you have any reason why that should be true? As an analogy, the sum of 1/r^n converges for any r less than 1, but there is no upper bound on what this can converge to. Finding such a c would indeed prove the conjecture, but the existence of such a c is a stronger statement than the conjecture itself.
You are right if I extend this to arithmetic progressions of length more than three, as you can take infinitely many sequences with the limit sequence being {1,2,3,...}, and since this goes to infinity (as in, the sum of the reciprocals diverges), so does the limit of the sequences. However, I am trying to solve the problem for arithmetic progressions of length three, so this problem doesn't arise. I have found (using a Shanks transformation of the series) that the sum converges to about 3.0079.
Re: Erdos Conjecture
You didn't answer my question. I gave an example where a set of sequences all converge, but there was no upper bound on what they converged to. How do you know that this can't be the case here?
Re: Erdos Conjecture
mike-l wrote: You didn't answer my question. I gave an example where a set of sequences all converge, but there was no upper bound on what they converged to. How do you know that this can't be the case here?
If you take the limit of a set of sequences that has no arithmetic progression of length three, then you get a sequence that has no arithmetic progression of length three. Now if c doesn't exist, then there is a sequence with no arithmetic progression of length 3 whose reciprocals diverge, and that means that the (sub)conjecture is false. So c only exists if the conjecture is true. So if I find c, then I have effectively proved the (sub)conjecture.
Suppose {a_1, a_2, a_3, ...} is a sequence like you suggest (no arithmetic progression of size 3), and the sum of 1/a_i is less than r. Then {a_1/R, a_2/R, a_3/R, ...} is a sequence with no arithmetic progression of size 3, and the sum of its reciprocals is r*R. Now choose R to be arbitrarily big; then r*R > c.
EDIT (LATER): I know this is wrong. The terms have to be integers.
Re: Erdos Conjecture
tomtom2357 wrote: Now if c doesn't exist, then there is a sequence with no arithmetic progression of length 3 whose reciprocals diverge
Why do you think this? Yes, finding c would prove the conjecture, but the conjecture could also be true even if there is no such c, just as 1/r^n converges for all r<1 but can have an arbitrarily high limit by choosing r arbitrarily close to 1.
Re: Erdos Conjecture
mike-l wrote: Why are you taking limits? You are talking about the set { {1/a_n} | a_n is an integer, there is no arithmetic progression of length 3 among the a_n }. You are claiming that if it's true that all members of this set have convergent sums, then the sums are bounded above. I am talking about the set { {1/r^k} | r > 1 }. Now this set happens to have no arithmetic progressions of length 3 as well for all but countably many r, so we can ignore those. So aside from the fact that the r^k need not be integers, this is similar to your set. Every member of this set converges, but the sums are not bounded. So it's not true in general that if a collection of sequences all converge, then their sums have an upper bound. The existence of such an upper bound is a stronger condition than the convergence. So you think the set {1,2,4,5,10,...} has a reciprocal sum a little over 3. Why does this tell you anything about the sequence {1,3,4,6,10,...}?
Okay then, I have another idea. Let a[n] be the smallest integer such that you can construct a sequence of n integers (starting at 1) that has no arithmetic progression of length 3, with the nth integer being a[n]. Now let b[n] be a sequence with no arithmetic progressions of length 3. By definition, b[n] >= a[n], so the sum of the reciprocals of b[n] converges if the sum of the reciprocals of a[n] converges. Now I only have to prove the convergence of the reciprocals of a[n]. Note: a[n] may contain arithmetic progressions of length 3; I am using it because of the nice property stated above.
Re: Erdos Conjecture
Again, a_n converging is a stronger condition than your conjecture. In fact, that simply finds the c you were talking about before, if it does. Both methods you've suggested would prove your subconjecture, but you haven't offered any idea on how to actually prove either of them.
Re: Erdos Conjecture
mike-l wrote: Again, a_n converging is a stronger condition than your conjecture. In fact, that simply finds the c you were talking about before, if it does. Both methods you've suggested would prove your subconjecture, but you haven't offered any idea on how to actually prove either of them.
All I need to do is prove a lower bound on the series, such as a[n] >= n^1.5; that would prove its convergence.
Re: Erdos Conjecture
All I need to do to prove the Riemann hypothesis is prove a lower bound for zeta(z) in the critical strip but away from the middle, such as |zeta(z)| > 1.
Re: Erdos Conjecture
tomtom2357 wrote: All I need to do is prove a lower bound on the series, such as a[n] >= n^1.5; that would prove its convergence.
Yes, but you have no idea if that's even true, nor any idea on how to prove such a bound.
{"url":"http://forums.xkcd.com/viewtopic.php?p=2892641","timestamp":"2014-04-20T23:31:13Z","content_type":null,"content_length":"54254","record_id":"<urn:uuid:91031005-be01-4929-b472-b1dab5ad080f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
April 13th 2008, 10:51 PM #1
Hey guys, sorry I'm in a hurry, so if I make mistakes while typing, my bad.
1) A SWAT team member is attempting to exit a warehouse via a ventilation shaft that is at the top of a wall, level with the roof. The roof is 30m high and she is currently on the floor, a distance of 40m from the wall. She can use a crossbow to attach a rope to the roof, climb the rope and then work her way along a roof beam to the shaft. She can climb the rope at 1m/s and can move along the roof beam at 2m/s. What is the quickest she can reach the shaft?
Answer: For my answer I got x = 10(sqrt(3)) m. Total time taken = 20 + 15(sqrt(3)), which approximately equals 45.98 seconds. Am I right?
2) A sheet of cardboard 30cm by 40cm is to be cut into a box with a lid, as shown below. Find the dimensions of the box that maximise the volume it contains.
Answer: For my answer I got x = 5.657cm, H = 5.657cm, W = 14.343cm, L = 18.685cm. All of the dimensions are approximate, of course. So am I right?
Sorry, I'm just checking that I've done these questions right as I've got a mid-semester exam coming!!!
3) Last question. It's to do with maxima and minima and critical points. Can a point on a graph be an absolute AND relative maximum/minimum at the same time? Or is a point an absolute maximum/minimum OR a relative maximum/minimum?
Thanks in advance for your help guys.
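The first answer checks out, assuming the intended model is that she climbs the full length of the rope at 1 m/s and then walks the remaining horizontal distance along the beam at 2 m/s. Let x be the horizontal distance spanned by the rope; then

$$T(x)=\sqrt{30^2+x^2}+\frac{40-x}{2},\qquad T'(x)=\frac{x}{\sqrt{900+x^2}}-\frac12=0\;\Longrightarrow\;3x^2=900\;\Longrightarrow\;x=10\sqrt3,$$

$$T(10\sqrt3)=\sqrt{1200}+\frac{40-10\sqrt3}{2}=20\sqrt3+20-5\sqrt3=20+15\sqrt3\approx45.98\text{ s}.$$

The box dimensions likewise satisfy the first-order condition for $V=x(30-2x)(40-2x)/2$ (which gives $3x^2-70x+300=0$, so $x\approx5.657$), under the usual with-lid cutting pattern that the missing figure presumably showed. On question 3: with the standard textbook definitions, an absolute maximum attained at a point is automatically also a relative maximum there (and likewise for minima), so a point can be both; conventions only differ over whether endpoints of a closed interval count as relative extrema.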
{"url":"http://mathhelpforum.com/calculus/34407-optimisation.html","timestamp":"2014-04-19T15:12:37Z","content_type":null,"content_length":"29992","record_id":"<urn:uuid:4c2864e2-3ea4-48fc-b710-a037134d1a4d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Central angles
This lesson offers a concise but thorough explanation of central angles, as well as of arcs and sectors of a circle.

Definition: An angle is a central angle if it meets the following two conditions:
1) The vertex of the angle is located at the center of a circle.
2) The rays that make up its sides are radii of the circle.
Below, find an illustration of the definition above:
You can name it angle ABC. It is important to notice that such an angle is always less than 180 degrees. Therefore, such angles can only be acute, right, or obtuse.

Related definitions:
Arc: A portion of the circumference of the circle. This is illustrated below in red:
Sector: A sector is the area enclosed within a central angle and an arc. Again, this is illustrated below, but in green:
As you can see, the area in green is included between the arc in red and the angle.

When computing the area of a sector, use the following ratio to find out what part of the circle's area is covered by the sector:
area of sector / area of circle = measure of central angle / 360°

A computation!
A circle has a radius of 10 centimeters. This radius and the center of the circle are used to make an angle of 45 degrees. Find the area of the resulting sector.
Divide 45 degrees by 360 degrees to determine the fraction of the circle covered by this sector:
45/360 = 1/8
The area of the circle is A = pi × r^2
A = 3.14 × 10^2
A = 3.14 × 100
A = 314 square centimeters
Now just multiply 314 by 1/8:
314 × 1/8 = 314/8 = 39.25
The area of the sector is 39.25 square centimeters.
Here we go. If you have any questions about this lesson, do not hesitate to contact me.
Now try to do this problem on your own:
A circle has a radius of 5 centimeters. This radius and the center of the circle are used to make an angle of 60 degrees. Find the area of the resulting sector.
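In symbols, the worked example is an instance of the general sector-area formula, and applying it to the practice problem gives a check on your answer (spoiler ahead if you want to try it first):

$$A_{\text{sector}}=\frac{\theta}{360^{\circ}}\,\pi r^{2},\qquad \frac{60}{360}\times 3.14\times 5^{2}=\frac{1}{6}\times 78.5\approx 13.08\ \text{square centimeters}.$$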
{"url":"http://www.basic-mathematics.com/central-angles.html","timestamp":"2014-04-19T22:07:30Z","content_type":null,"content_length":"34025","record_id":"<urn:uuid:d751ef14-442d-4636-9fc0-62ab774e153d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Coefficients in cohomology
(Sorry if this is too elementary for this site) I'm having some trouble understanding sheaf cohomology. It's supposed to provide a theory of cohomology "with local coefficients", and allow easy comparison between different theories like singular, Cech, de Rham and Alexander-Spanier. What I don't understand is: what's all the fuss with coefficients that vary with each open set? Indeed, what's all the fuss with changing coefficients in an ordinary cohomology theory as in Eilenberg-Steenrod? Homology is trying to measure the "holes" of a space; wouldn't integer coefficients suffice already? I'm not really sure what cohomology is trying to measure; at least I think the first singular group is trying to measure some kind of "potential difference", as explained in Hatcher's book. It gets worse for me when the coefficient group isn't the integers. But when I get to sheaf cohomology I'm totally dumbstruck as to what it's trying to measure, and what useful information about the space can be extracted from it. Now if it's just about comparisons of different theories I can live with that… Can someone please give me an intuitive explanation of the fuss with all these different coefficients? Please start off with why we even use different coefficients in Eilenberg-Steenrod. Sorry if this is too elementary.
I think this might be more appropriate for math.stackexchange.com. So singular theory is cohomology with coefficients in the constant sheaf. In topology this is all we really need, but not in algebraic geometry or number theory. Group cohomology/homology with constant coefficients is boring: you are just looking at the trivial module that the group acts on. This should be boring; the trivial representation does not tell you a whole lot about the group. – Sean Tilson Apr 30 '11 at 14:16
I agree with Sean. Group homology with twisted coefficients arises in interesting contexts all the time. – Jim Conant Apr 30 '11 at 14:19
George Whitehead's textbook "Elements of homotopy theory" has a long and well-motivated chapter on local coefficients in topology, and also about obstruction theory and Postnikov systems. It is hard to imagine doing these things systematically without changing coefficients explicitly. – Zoran Skoda May 1 '11 at 8:59
5 Answers
Therefore your patching consists of a continuous map $$U_{12}\rightarrow {\mathbb R}^*$$ which is to say, a Cech 1-cocycle for the sheaf of continuous ${\bf R}^{*}$-valued functions. Now of course you could build a line bundle in some other way, say by starting with two different contractible sets $U_1$ and $U_2$. When do two sets of patching data give isomorphic up vote 17 line bundles? A little thought reveals that the answer is: When and only when the corresponding cocycles give the same class in down vote accepted $$H^1(S^1,G^{*})$$ with $ G^{*} $ being the sheaf of continuous ${\bf R}^*$-valued functions. Therefore line bundles are classified by $H^1(S^1,G^{*})$. Now consider the exact sequence of sheaves $$0 \rightarrow G \rightarrow G^*\rightarrow {\bf Z}/2{\bf Z}\rightarrow 0$$ where $G$ is the sheaf of continuous ${\bf R}$ valued functions, and the map on the left is exponentiation. Follow the long exact sequence of cohomology, use the fact that $G$ is acyclic, and conclude that $H^1(S^1,G^*)=H^1(S^1,{\bf Z}/2{\bf Z})={\bf Z}/2{\bf Z}$. In other words, there are exactly two real line bundles over $S^1$ --- and indeed there are: the cylinder and the Mobius strip. Exercise: Do a similar calculation for ${\bf CP}^1$ (the Riemann sphere). Conclude that the set of (complex) line bundles is in one-one correspondence with $H^2({\bf CP}^1,{\bf Z})={\bf add comment As soon as you proceed from the first ideas of ''counting holes'' in a space to more advanced problems in algebraic topology, you will begin to appreciate local coefficient systems. Even the passage from $Z$ to rings like $Z/2$ does not merely simplify computations, but allows you to detect more phenomena. For example, the map $RP^2 \to S^2$ that collapses $RP^1$ to a point is null in integral homology, but not in $Z/2$-homology. Think a few minutes about why this is not a contradiction to the universal coefficient theorem. But local coefficient systems are useful in a variety of situations. Poincare duality for nonoriented manifolds has been mentioned (and in fact, it sheds light on the oriented case as well). Then there is obstruction theory: If $f:X \to Y$ is a fibration with fibre $F$. Let $g:Z \to Y$ be a map. A basic problem of homotopy theory is to decide whether there can be a lift $h: Y \ to X$ of $g$ through $f$. There is a sequence of obstructions to the existence of such a thing; and these obstructions live in $H^n (Z; \pi_{n-1}(F))$, but with twisted coefficients if $Y$ up vote is not simply-connected. Then the Leray-Serre spectral sequence comes to my mind: it relates the (co)homology of the base, the fibre and the total space of a fibration; and if the base isn't 10 down simply-connected, then local coefficients are inevitable. Especially in the last two situations, the introduction of local coefficient systems makes the proofs more transparent even in the simply-connected case. I admit that for most purposes of algebraic topology, the introduction of sheaves (more general than local coefficient systems) is overkill. The classical areas where sheaves are most important are complex analysis and algebraic geometry. add comment As Johannes Ebert says, the classical areas where sheaves are most important are complex analysis and algebraic geometry. There are two completely different kinds of sheaves one might consider on a complex manifold: constructible sheaves (basically, locally constant along a stratification) and quasicoherent sheaves (modules over the ring of functions). 
It's kind of an amazing accident, and I think rather misleading, that "sheaf theory" is useful for studying both kinds of sheaves. Certainly there are theorems that apply to both kinds, but most interesting theorems require you to assume one or the other. up vote This is very much a matter of opinion, and I expect to get comments disagreeing with me! Let me give just one example of what I mean. For any sheaf at all, we can consider Cech cohomology 8 down using an open cover. In the constructible-sheaf world (like you were getting in topology), one likes to assume that the intersections of sets in the cover are contractible, so all cohomology vote comes from gluing. In the quasicoherent-sheaf world, one likes to assume that the sets in the cover are affine (and that the scheme is separated, so the intersections are likewise affine), again so all cohomology comes from gluing. Obviously one could state a general theorem about acyclic covers or somesuch, but it's crucial to bear in mind how different those are for the two kinds of sheaves. (N.B. Of course there are sheaves that are neither constructible nor quasicoherent, and one occasionally does use, but not as often as these two.) Illuminating answer! But I have to disagree a bit, as there exist whole (and very classical) theories which rely on \emph{mixing} the two types of sheaves. A large portion of the 8 classical theory of compact Riemann surfaces is organized around the exponential sequence $ \mathbb{Z} \to \mathbb{O} \to \mathcal{O}^{\times}$. The first sheaf is constructible, the middle sheaf coherent and the third is neither. However, the exp sequence does not exist in ''algebraic algebraic geometry''. – Johannes Ebert Apr 30 '11 at 19:23 add comment I had the impression that Hatcher's book claims as motivation that local coefficients allow Poincaré Duality to work properly for non-orientable spaces. up vote 7 down vote add comment A practical motivation: ordinary homology with, say, mod2 or rational coefficients are often easier to compute (and hence - to apply) than integral homology. up vote 3 down vote add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/63519/coefficients-in-cohomology/63520","timestamp":"2014-04-18T18:27:33Z","content_type":null,"content_length":"74261","record_id":"<urn:uuid:79bd4a06-2f98-4eb0-a252-dfee7c2eff26>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding a cheapest cycle in a quasi-transitive digraph with real-valued vertex costs
J. Bang-Jensen, G. Gutin and A. Yeo
Discrete Optimization, Volume 3, 2004.
We consider the problem of finding a minimum cost cycle in a digraph with real-valued costs on the vertices. This problem generalizes the problem of finding a longest cycle and hence is NP-hard for general digraphs. We prove that the problem is solvable in polynomial time for extended semicomplete digraphs and for quasi-transitive digraphs, thereby generalizing a number of previous results on these classes. As a byproduct of our method we develop polynomial algorithms for the following problem: Given a quasi-transitive digraph $D$ with real-valued vertex costs, find, for each $j=1,2,\ldots,|V(D)|$, $j$ disjoint paths $P_1,P_2,\ldots,P_j$ such that the total cost of these paths is minimum among all collections of $j$ disjoint paths in $D$.
{"url":"http://eprints.pascal-network.org/archive/00002358/","timestamp":"2014-04-17T01:08:09Z","content_type":null,"content_length":"7032","record_id":"<urn:uuid:e86c9711-e1ff-4780-ae97-a5844586b181>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: no names allowed, we serve types only
From: David BL <davidbl_at_iinet.net.au>
Date: Sun, 21 Feb 2010 21:20:12 -0800 (PST)
Message-ID: <20a5144d-6896-4a43-9507-33ea1255a260_at_b36g2000pri.googlegroups.com>

On Feb 22, 9:10 am, paul c <toledobythe..._at_oohay.ac> wrote:
> While I sympathize with Keith D's wish to eliminate the 'name,type'
> double, I think that practical issues require not only so-called
> 'possreps' but also what I think of as 'name,type,use' triples.
> What a mess!

Regarding possreps, I think they can be understood very clearly in a typeless setting. Consider the following from Darwen (see http://www.dcs.warwick.ac.uk/~hugh/TTM/taamoti.slides.pdf):

TYPE ELLIPSE
  POSSREP { A LENGTH, B LENGTH, CTR POINT
            CONSTRAINT A >= B AND B > LENGTH ( 0.0 ) } ;

TYPE CIRCLE IS ELLIPSE
  CONSTRAINT THE_A ( ELLIPSE ) = THE_B ( ELLIPSE )
  POSSREP { R = THE_A ( ELLIPSE ) , CTR = THE_CTR ( ELLIPSE ) } ;

In the following R means the reals. Let Tellipse be a set of tuples as follows:

  Tellipse = { (a,b,ctr) in RxRx(RxR) | a >= b and b > 0 }

Let Pellipse be a function that maps each tuple t in Tellipse to a real living, breathing, ellipse value (i.e. an infinite set of points in RxR):

  for all t in Tellipse,
    Pellipse(t) = { (x,y) in RxR | ((x-t.ctr.x)/t.a)^2 + ((y-t.ctr.y)/t.b)^2 = 1 }

Similarly let

  Tcircle = { (r,ctr) in Rx(RxR) | r > 0 }

  for all t in Tcircle,
    Pcircle(t) = { (x,y) in RxR | (x-t.ctr.x)^2 + (y-t.ctr.y)^2 = t.r^2 }

Note that range(Pcircle) is a subset of range(Pellipse). This embodies D&D's assertion that type CIRCLE is a subtype of type ELLIPSE.

Database systems don't tend to formalise ellipses and circles. Instead they simply parameterise their values using elements of Tellipse and Tcircle and leave the mappings Pellipse, Pcircle unspecified. In my opinion, we should regard a possrep as formally this (unspecified) mapping such as Pcircle on a given domain Tcircle. Usually we would require such a mapping to be onto (so that all possible circles can be represented) and 1-1 (so that when two tuples are distinct the DBMS can deduce that the circles that they represent are distinct as well).

Note that Tcircle is uncountably infinite, so we cannot even define an encoding that can represent each element of Tcircle on a computer. In practice it means we simply deal with some countable subset of Tcircle (and therefore limit ourselves to a corresponding subset of all circles).

The relationship between the two possreps is embodied in the following:

  CtoE = { (t1,t2) in Tcircle x Tellipse | Pcircle(t1) = Pellipse(t2) }

Since every circle is an ellipse, CtoE is the graph of a function that maps every tuple in Tcircle to a corresponding tuple in Tellipse that represents the same circle/ellipse value. Since Pcircle, Pellipse aren't specified to the DBMS, we instead need to express CtoE directly. Of course that is very easy:

  (ctr,r) |---> (ctr,r,r)

Expressing this using D&D type selectors we could express this as:

  CIRCLE(ctr,r) is-a ELLIPSE(ctr,r,r)

IMO this is cleaner (and indeed more practical) than the D&D syntax which seems to get it arse about. D&D syntax specifies how to map a subset of the ellipse values to matching circle values, by specifying a constraint and by expressing the possrep of CIRCLE in terms of the possrep of ELLIPSE.

Received on Sun Feb 21 2010 - 23:20:12 CST
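To make the construction concrete, here is a small Python sketch of the two possreps and the CtoE mapping discussed above. The class and function names are my own illustrative choices (not part of D&D's Tutorial D), and floats stand in for the countable subset of R noted above:

from typing import NamedTuple

class EllipseRep(NamedTuple):
    """Element of Tellipse: (a, b, ctr) with a >= b > 0."""
    a: float
    b: float
    ctr: tuple  # (x, y)

class CircleRep(NamedTuple):
    """Element of Tcircle: (r, ctr) with r > 0."""
    r: float
    ctr: tuple  # (x, y)

def c_to_e(c: CircleRep) -> EllipseRep:
    """The CtoE mapping: (ctr, r) |---> (ctr, r, r).
    Every circle value is an ellipse value with a = b = r."""
    return EllipseRep(a=c.r, b=c.r, ctr=c.ctr)

def ellipse_contains(e: EllipseRep, p: tuple) -> bool:
    """Membership test for the (unspecified) value Pellipse(e), i.e.
    ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1, up to floating-point tolerance."""
    x, y = p
    cx, cy = e.ctr
    return abs(((x - cx) / e.a) ** 2 + ((y - cy) / e.b) ** 2 - 1.0) < 1e-9

# A circle and its image under CtoE denote the same set of points:
c = CircleRep(r=2.0, ctr=(0.0, 0.0))
e = c_to_e(c)
assert ellipse_contains(e, (2.0, 0.0))  # a point on the circle of radius 2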
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2010/02/21/0062.htm","timestamp":"2014-04-17T17:01:53Z","content_type":null,"content_length":"10219","record_id":"<urn:uuid:b209197b-e491-4be9-8983-3c4c46831fa5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate a size factor used for Endurance Limit [Archive] - Mechanical Design Forum
17th May '11, 00:08
I am trying to use a kb factor in the Marin factor calculation to determine an endurance limit for a hollow rectangular tube in bending only. The kb is based on a diameter, but there are equivalent-diameter equations. The equivalent diameter for a rectangular cross section is 0.808(bh)^0.5, but I am uncertain whether this is only for a solid rectangular section or whether it also applies to a hollow one.
I'm not sure if I should take the above equation 0.808(bh)^0.5 and subtract the 95% stress area by using the equation 0.05bh. Thereby the equivalent diameter would be 0.808(bh)^0.5 - 0.05bh.
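For what it's worth, here is a small Python sketch of how the equivalent-diameter route is typically applied. The kb correlation below is the common Shigley-style power law for diameters in inches; whether the solid-section formula 0.808*sqrt(b*h) carries over to a hollow tube is exactly the open question above, so treat both as assumptions rather than settled practice:

import math

def de_rect(b, h):
    """Equivalent diameter for a rectangular section in non-rotating
    bending, de = 0.808*sqrt(b*h) (solid-section correlation)."""
    return 0.808 * math.sqrt(b * h)

def kb(d_in):
    """Size factor, Shigley-style correlation kb = (d/0.3)^-0.107,
    valid for 0.11 <= d <= 2 in (other ranges use other constants)."""
    return (d_in / 0.3) ** -0.107

# Example: a 2 in x 1 in envelope
d_eq = de_rect(2.0, 1.0)
print(d_eq, kb(d_eq))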
{"url":"http://www.mechanicaldesignforum.com/archive/index.php/t-934.html","timestamp":"2014-04-17T00:50:22Z","content_type":null,"content_length":"5275","record_id":"<urn:uuid:2be87395-db26-4e75-9ac7-b3731fa62402>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Topics in combinatorics and algorithms

Files in this item: 9011017.pdf (6MB)

Title: Topics in combinatorics and algorithms
Author(s): Shen, Xiaojun
Committee Chair(s): Liu, C. L.
Department / Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Mathematics; Computer Science

Abstract: This thesis studies several topics in theoretical computer science. First, the author shows that $5n-4$ is a tight lower bound on the number of edges in the visibility graph of $n$ non-intersecting line segments in the plane.

Second, the author studies a new class of combinatorial structures called generalized Latin squares, which is a generalization of the classical definition of Latin squares. A perfect $\langle k,l\rangle$-Latin square is an $N \times N$ array in which any row or column contains every distinct symbol, and the symbol $a_{ij}$ appears exactly $k$ times in the $i$th row and $l$ times in the $j$th column, or vice versa. Let $A = (a_{ij})$ and $B = (b_{ij})$ be two perfect $\langle k,l\rangle$-Latin squares of order $N$ with the symbol set $\{1, 2, \ldots, D\}$. They are said to be orthogonal if $D$ divides $N$ and each of the $D^2$ ordered pairs of symbols $(s,t)$ $(1 \leq s,t \leq D)$ appears exactly $N^2/D^2$ times in the array $C = ((a_{ij}, b_{ij}))$. The author shows some general existence and orthogonality results by presenting constructive algorithms.

Third, the author shows some new results for the problem of unbounded searching. Given a function $F: \mathbb{N}^+ \to \{X,Y\}$ with the property that if $F(n_0) = Y$ then $F(n) = Y$ for all $n > n_0$, the unbounded search problem is to use tests of the form "is $F(i) = X$?" to determine the smallest $n$ such that $F(n) = Y$. The "cost" of a search algorithm is a function $c(n)$, the number of such tests used when the location of the first $Y$ is $n$. He shows that the "ultimate algorithm" of Bentley and Yao (Info. Proc. Let. 5 (1976), 82-87) is "far" from optimal in the sense that it is only the second one in an infinite sequence of search algorithms, each of which is much closer to optimality than its predecessor.

Finally, consider this problem: how should $n$ records with $k$ keys be ordered so that a search can be performed as quickly as possible under any key? Fiat et al present an $O(\lg n)$ time algorithm. This is asymptotically optimal. However, it uses quite complicated encoding and decoding procedures and requires tables of enormous sizes for the encoding method to work. The author presents a simpler $O(\lg n)$ algorithm for this problem, which works for any table of reasonable size. He also introduces an $O(\lg^2 n)$ algorithm which has better performance than any known $O(\lg n)$ algorithm when $n$ is not extremely large.

Issue Date: 1989
Type: Text
Language: English
URI: http://hdl.handle.net/2142/21655
Rights: Copyright 1989 Shen, Xiaojun
Date Available: 2011-05-07
Identifiers: AAI9011017; OCLC (UMI) AAI9011017
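For context on the third topic, the baseline such results build on is the classic doubling ("ultimate"-style) unbounded search of Bentley and Yao. The sketch below is that textbook baseline in Python — not the thesis's improved algorithms, which refine this idea further:

def unbounded_search(F):
    """Find the smallest n >= 1 with F(n) == 'Y', using tests of the
    form 'is F(i) == X?'. Phase 1 doubles to bracket the first Y;
    phase 2 binary-searches inside the bracket. Cost is O(lg n) tests."""
    hi = 1
    while F(hi) == 'X':          # doubling phase
        hi *= 2
    lo = hi // 2 + 1 if hi > 1 else 1
    while lo < hi:               # binary search phase
        mid = (lo + hi) // 2
        if F(mid) == 'X':
            lo = mid + 1
        else:
            hi = mid
    return lo

print(unbounded_search(lambda n: 'Y' if n >= 37 else 'X'))  # prints 37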
{"url":"https://www.ideals.illinois.edu/handle/2142/21655","timestamp":"2014-04-20T08:34:29Z","content_type":null,"content_length":"22954","record_id":"<urn:uuid:79572a3f-3a69-4331-9320-3b779388bddf>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi shirt not nerdy enough for you? Count on us for some fresh, genuinely nerdy shirts to show off your love for science, math and technology. We feature only the nerdiest shirts - designed by nerds, for nerds! Not sure which shirt to give your favorite nerd? We offer gift certificates - give the gift of nerdy shirts! These are designs from our sister store, Kawaii Shirt Shop. Kawaii Shirt Shop specializes in cute and quirky shirts, some of which are nerdy.
{"url":"http://www.thenerdiestshirts.com/","timestamp":"2014-04-21T00:11:47Z","content_type":null,"content_length":"24360","record_id":"<urn:uuid:206f8a21-d919-4fe6-9fb5-83ae667c982c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
The Level Game - Need help!!

July 25th 2013, 12:09 AM #1
Jul 2013

The Level Game - Need help!!
I need help figuring out the chance of success for completing this "level" game, starting with level 1 and making it to the end (level 7). Given the % chance of "success" at each level, is it possible to determine the overall % chance to win?
Level 1: 77%
Level 2: 75%
Level 3: 71%
Level 4: 67%
Level 5: 60%
Level 6: 50%
Level 7: 33%
So there is a 77% chance to make it to level 2. Once you are at level 7, there is a 33% chance to win the game.
I want to know if it is possible to figure out the overall % chance to win the game (and how).
Thank you for any & all help

July 25th 2013, 12:42 AM #2
Senior Member
Oct 2009

Re: The Level Game - Need help!!
To win the entire game you have to win level 1, and level 2, and level 3, and level 4, and level 5, and level 6, and level 7. Just multiply all of the probabilities together: 0.77 × 0.75 × 0.71 × 0.67 × 0.60 × 0.50 × 0.33 ≈ 0.0272. So, you have about a 2.72% chance of winning the entire game.
Last edited by downthesun01; July 25th 2013 at 12:45 AM.
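For anyone wanting to check the arithmetic: the reply's rule — multiply the per-level success chances, assuming each level attempt is independent — is a one-line running product in Python:

import math

levels = [0.77, 0.75, 0.71, 0.67, 0.60, 0.50, 0.33]
p_win = math.prod(levels)   # P(win) = product of the stage probabilities
print(f"{p_win:.4%}")       # ~2.7197%, i.e. about a 2.72% chance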
{"url":"http://mathhelpforum.com/statistics/220801-level-game-need-help.html","timestamp":"2014-04-19T02:29:11Z","content_type":null,"content_length":"33547","record_id":"<urn:uuid:2f68cbfc-1ea8-423b-8c9c-010f89471c89>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
[Tutor] Binary chop function - this works, but I'm not sure why
Alan Gauld alan.gauld at btinternet.com
Thu Feb 14 09:21:46 CET 2008

"Arun Srinivasan" <soulguidedtelescope at gmail.com> wrote
> I wrote an implementation that works, but I'm confused as to why.

> def chop(search_int, sorted_list):
>     if len(sorted_list) == 1 or 2:

This is not doing what you think it is. Python sees this as:

if (len(sorted_list) == 1) or 2:

So it evaluates whether the list is one element; if it isn't, it then checks if 2 is true, which it always is. So the if condition is always true and your code always executes the if block.

Now the if block as written will always find the element in the list if it exists. If you rewrote the if test like so you would get a more effective test:

if len(sorted_list) in [1,2]:

>         for x in sorted_list:
>             if x == search_int:
>                 return sorted_list.index(x)
>         return -1

You could also rewrite the block as:

if sorted_list[0] == search_int:
    return 0
elif len(sorted_list) == 2 and sorted_list[1] == search_int:
    return 1
else:
    return -1

Which should be marginally faster and only works for lists of length 1 or 2.

>     midpoint = (len(sorted_list) - 1) / 2
>     mp_value = sorted_list[midpoint]
>     if mp_value == search_int:
>         return midpoint
>     elif mp_value > search_int:
>         return chop(search_int, sorted_list[:midpoint])
>     else:
>         return chop(search_int, sorted_list[midpoint + 1:])

> Basically, it only returns the index if it matches in the if
> statement at the beginning of the function, but since that is
> limited to lists of length 2 or 1, why doesn't it return only 0 or 1
> as the index? I think there is something about recursion here that
> I'm not fully comprehending.

The rest of the code is broken because, as you say, it only returns 0 or 1. You need to add the lower bound to the return value, but you don't have the lower bound!

Alan G.

More information about the Tutor mailing list
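Putting Alan's two points together — a sensible base case and carrying the lower bound that slicing throws away — a corrected version might look like the sketch below (my own code, not from the original thread):

def chop(search_int, sorted_list, offset=0):
    """Binary chop returning the index of search_int in sorted_list,
    or -1. The offset parameter carries the lower bound that the
    original recursive version lost when it sliced the list."""
    if not sorted_list:
        return -1
    midpoint = (len(sorted_list) - 1) // 2
    mp_value = sorted_list[midpoint]
    if mp_value == search_int:
        return offset + midpoint
    elif mp_value > search_int:
        return chop(search_int, sorted_list[:midpoint], offset)
    else:
        return chop(search_int, sorted_list[midpoint + 1:],
                    offset + midpoint + 1)

assert chop(7, [1, 3, 5, 7, 9, 11]) == 3   # index in the full list
assert chop(2, [1, 3, 5, 7, 9, 11]) == -1  # absent element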
{"url":"https://mail.python.org/pipermail/tutor/2008-February/060172.html","timestamp":"2014-04-17T14:43:59Z","content_type":null,"content_length":"4853","record_id":"<urn:uuid:058fa9de-415d-4da4-ad18-9ac6a595cce9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
A Two-Scale Approach for Lubricated Soft-Contact Modeling: An Application to Lip-Seal Geometry

Advances in Tribology, Volume 2012 (2012), Article ID 412190, 12 pages. Research Article.

Michele Scaraggi^1 and Giuseppe Carbone^2
^1DII, Universitá del Salento, 73100 Monteroni di Lecce, Italy
^2DMMM, Politecnico di Bari, 70126 Bari, Italy

Received 31 May 2012; Revised 15 September 2012; Accepted 19 September 2012

Academic Editor: Michel Fillon

Copyright © 2012 Michele Scaraggi and Giuseppe Carbone. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider the case of soft contacts in mixed lubrication conditions. We develop a novel, two-scale contact algorithm in which the fluid-asperity and asperity-asperity interactions are modeled within a deterministic or statistical scheme depending on the length scale at which those interactions are observed. In particular, the effects of large-scale roughness are deterministically calculated, whereas those of small-scale roughness are included by solving the corresponding homogenized problem. The contact scheme is then applied to the modeling of dynamic seals. The main advantage of the approach is the tunable compromise between the high computational demands of deterministic calculations and the much lower computing requirements of the homogenized solutions.

1. Introduction

Compliant contacts, most commonly known as soft contacts, are very common in nature (e.g., cartilage lubrication, eye-eyelid contact) and technology (e.g., tires, rubber sealings, adhesives). It has long been stated that the friction and fluid leakage characteristics of wet soft contacts are strongly related, among the other factors, to the local interactions occurring at the contact interface [1–5]. In the case of randomly rough surfaces, the basic understanding of the role played by the asperity-asperity and fluid-asperity interactions, occurring over a wide range of roughness length scales, has been largely investigated and debated in the very recent scientific literature [6–12]. Given the (usual) fractal nature of random roughness, a number of interesting phenomena have been highlighted, as for example, the viscous hydroplaning [6], the viscous flattening [9–15], the fluid-induced roughness anisotropic deformation [10, 11], the local [10, 11] and global [8, 16] fluid entrapment, and many others. The way to deal with random-roughness contact mechanics, despite being nontrivial and suffering from a certain fragmentation of description, is however well described in the current scientific literature. On the other side, nowadays bio-inspired research [17, 18], together with the increasingly widespread practice of surface engineering [19], is showing the many (mainly unexplored) opportunities offered by the physical-chemical ordered modification of surfaces in order to tailor targeted macroscopic contact characteristics, such as adhesion and friction. Bio-inspired adhesive research [20] is probably the best state-of-the-art example of such a research trend. However, investigating the combined effect of, let us say, quantized roughness and fluid action has not attracted comparable attention from the scientific community, apart from a few experimental investigations [12, 21, 22] and basic theoretical investigations [23, 24].
This may be justified by the complexity of the numerical formulation of the problem, which is not expected to admit an analytical treatment. As a result, and to the best of the authors' knowledge, the combined effect of lubricant action and single-scale (contact splitter) roughness has not been practically investigated by the tribology community. In this work we give our personal contribution to this research field and, in particular, we discuss a novel numerical scheme of soft mixed lubrication, which can be adopted to perform such investigations. In our model the roughness is split into two contributions: a threshold roughness length scale, small compared with the representative macroscopic size of the contact, is identified. At length scales larger than the threshold the system is investigated by using a deterministic approach, whereas at length scales smaller than the threshold the problem is treated by homogenizing the equations. This is of utmost help in performing numerical simulations, as for example in the case of microstructured or bio-inspired surfaces where the surface geometry roughness is characterized by a single-scale texture (e.g., a pillar array) combined with random roughness at sub-micrometer scales. The main advantage of the proposed approach is, therefore, the strong reduction of numerical complexity. The paper is organized as follows. In Section 2 we describe the mixed lubrication model scheme (we refer to the Appendix for the details of the numerics), whereas in Section 3 we report an example of application of the proposed model to the case of a lip sealing geometry operating in steady-sliding contact. For a detailed description of dynamic sealings modeling, the reader is referred to [25–29]. 2. Problem Formulation Here we consider a generic rough compliant solid in steady sliding contact with a rigid smooth counter surface, as shown in Figure 1(a). Due to the coupled action of asperity-fluid and asperity-asperity interactions, the compliant surface attains the actual (or deformed) configuration (Figure 1(b)). In soft contacts, the shape difference between the deformed and initial configurations cannot usually be neglected, due to the occurrence of large deformations. Nevertheless, the small displacement assumption is usually adopted in the literature of soft elastohydrodynamics [6, 30, 31] (soft-EHL), supported by a relevant experimental validation for ball-on-flat contact geometries. However, if one analyses the contact at much shorter length scales, that is, at the asperity length scales, then one finds out that the asperities may be completely flattened because of the high contact and fluid pressures. Hence, neglecting the influence of large deformation would lead to strong inaccuracy in describing the evolution of the system at the micro-scales. For this reason, in this work, the small displacement assumption has been relaxed. Moreover, in some cases, for example, for dynamic sealings modeling, the precise calculation of the large tangential displacement of the rubber at the interface of the sliding contact is a must in order to capture key phenomena such as the well-known reverse pumping effect [26]. Consider now the schematic drawing of Figure 1(a). We assume that the whole texture belongs to the class of Reynolds roughness, that is, with $|\nabla h| \ll 1$ (where $h$ is the surface height distribution). In such a case, the thin film lubrication formulation is sufficiently accurate in describing the fluid dynamics at the sliding interface at all roughness lengths.
The (ensemble) average local separation is calculated from the difference between the two surface configurations, where the coordinates (the subscript taking the values 1, 2, or 3, see Figure 1(b)) are the three coordinates of the lip surface points at the initial and at the deformed state, respectively. The remaining quantities represent the macroscopic contact shape and the undeformed, low-pass filtered surface roughness, while the generic component of the displacement vector describes the average local surface deformation of the compliant body. In the classical EHL approach the displacement is calculated within a linear elasticity framework [31], by adopting a boundary element approach whose operation count grows with the number of discretization points. In our case, however, we adopt a more general nonlinear rheology, and the displacement components are calculated by employing a classical finite-element solver. Observe that the normal (locally averaged) stress is the sum of two contributions (see also [6]): the locally averaged fluid pressure and the locally averaged solid contact pressure, the latter coming from the asperity-asperity interactions occurring at length scales below the threshold. We observe that [6] these involve the local normalized area of solid contact and the average fluid shear stress (see later for more details). We assume that the solid friction shear stress is constant (and directed along the sliding direction), as happens in the case of a rubber-inert substrate contact [6]. Clearly, the model can be easily extended to include different boundary friction conditions. The relation between the average solid contact pressure and the local average interfacial separation is obtained from Persson's theory of contact mechanics [32, 33]. In particular, given the local elastic properties of the material and the power spectral density (PSD, see, e.g., Figure 2) of the surface roughness field (of zero average value, with a low-frequency cut-off wavelength), the theory allows one to calculate the local areal fraction of solid-solid contact as a function of the locally averaged solid-solid contact pressure; the quantity entering this relation is the root mean square gradient of the rough surface, which is related to the PSD of the rough surface. The relation between the local interfacial separation and the solid-solid contact pressure is instead determined, in the adhesionless case, by requiring that the change of elastic energy per unit nominal contact area at the interface equals the work done by the contact pressure to deform the elastic body. In (6) the elastic energy per unit contact area is calculated as a function of the contact pressure (see [32, 33] for more details). The final formula linking the local separation and the solid-solid contact pressure is relatively complicated, but at large separation it simplifies [33], and the parameters of the asymptotic law can be easily calculated once the PSD of the rough surface is known [33]. We consider now the case where a Newtonian fluid is sandwiched at the interface between the solids. We assume constant viscosity, constant density, and isothermal conditions. The adoption of a Reynolds roughness, and the assumption of a representative average interfacial separation value, suggests that the fluid velocity varies slowly with the in-plane coordinates compared to the variation in the orthogonal direction.
In such a case, combining the equilibrium with the continuity equation, the homogenized problem formulation follows [9, 34], where, as before, the interfacial separation and fluid pressure are locally averaged (over a length scale given by the longest wavelength surface roughness component). The fluid flow conductivity (tensor) can be related to the pressure flow factor (tensor) [7, 9, 35, 36] and to the shear flow factor (tensor), which we assume to be a function of the average interfacial separation only. Here it has been calculated on the basis of Bruggeman's effective medium theory and on Persson's contact mechanics [34]. Within this approach, the effect of solid contact percolation, as well as of local fluid trapping, on the fluid flow can be taken approximately into account, as recently discussed in [16]. For simplicity, we have assumed isotropic flow factor tensors, proportional to the identity tensor. Moreover, by considering the steady sliding condition, the homogenized lubricant equation simplifies, with the sliding velocity entering the Couette term. However, in order to include the effect of micro-cavitations occurring at large-scale roughness lengths, a JFO cavitation model (constant mixture pressure in the cavitation zone) is adopted. The Reynolds equation is then reformulated as a mass-conservative equation valid throughout the cavitating/non-cavitating domain, where we have adopted a cavitation index and a dummy variable defined in terms of the lubricant density and of a representative solid shear elastic modulus. Note that (10) must be solved on the deformed configuration, which is an unknown of the problem: therefore, reformulating the fluid equation in the initial configuration may prove numerically convenient. Thus (10) has been rephrased by using a mapping rule between the two configurations. The Jacobian of the transformation can be calculated as the inverse of the Jacobian of the inverse transformation. Observe that the determinant of the Jacobian tensors must necessarily be larger than zero. The above transformation enables the generation of an adaptive mesh grid which follows the change of the surface shape during the deformation process, thus without losing spatial resolution. The detailed derivation of the numerics is reported in the Appendix. The boundary conditions to be applied in the resolution of (10) depend, clearly, on the particular soft-contact problem under investigation. The dimensionless formulation of (10) employs two reference scales; the in-plane length scale corresponds in this case to the largest deterministic roughness wavelength.

3. Results

In this work we use the proposed model to investigate the lubricated contact of the lip seal schematically shown in Figure 3. Moreover, since the contact region is much smaller than the shaft diameter, it is possible to reduce the computational domain to only a small angular fraction of the lip surface. Therefore, (10) is solved with constant pressure boundary conditions at the low pressure side and at the high pressure side, and with periodicity conditions in the circumferential direction, with a prescribed spatial period. The two-scale mixed lubrication model has been numerically solved, as described in the Appendix. We have opted for a fractal self-affine isotropic geometry.
For any self-affine fractal surface the statistical properties are invariant under an isotropic rescaling of the in-plane coordinates combined with the corresponding rescaling of the heights. In such a case it can be shown that for an isotropic surface the PSD follows a power law of the wavevector modulus, whose exponent involves the Hurst exponent of the randomly rough profile, which is in turn related to the fractal dimension. To apply our method we need to numerically generate the surface in the low-frequency part of the spectrum, see, for example, Figure 2. To this end we have utilized the spectral method described in [37], where the roughness is described by a periodic surface in the form of a Fourier series. Since the height field is real, the Fourier coefficients must satisfy the Hermitian symmetry condition (each coefficient equals the complex conjugate of the coefficient at the opposite wavevector). Moreover, for randomly rough surfaces an ensemble-average decorrelation relation among the coefficients must be satisfied. The PSD of the surface so generated follows from these relations, and assuming a self-affine fractal surface (see (16)) one obtains the moduli of the coefficients once the roughness amplitude and the Hurst exponent of the fractal surface are known. However, to completely characterize the rough profile we still need the probability distribution of the coefficients. We first observe that the decorrelation condition is satisfied if the phases of the complex coefficients are random numbers uniformly distributed between $0$ and $2\pi$. So what we need now is only the probability distribution of the moduli. Of course there are several choices, and the simplest one is to assume that the probability density function of the modulus is just a Dirac delta function centered at the prescribed value. It can be shown that this choice also guarantees that the random profile has a Gaussian random distribution (a numerical sketch of this generation procedure is given below).

3.1. Sealing at Nearly Zero Sliding Velocity

Here we report on the model application to the case where the sliding velocity between the two mating surfaces is vanishing, that is, the case of static seals. The self-affine roughness of the lip surface presents a prescribed long-distance cut-off frequency, fractal dimension, and small-scale cut-off frequency; a threshold frequency separating the deterministic from the homogenized content is adopted in the calculation. In the following, the seal is assumed to be linear elastic. The calculation is performed at a constant rigid penetration of the lip. In Figure 4 we show the locally averaged interfacial separation field (the two in-plane directions being the circumferential and the axial direction). Interestingly, the corrugation which can be observed in the figure is just a consequence of the deterministically included roughness; at larger frequencies the surface appears smooth, since the high-frequency content of the roughness has been included through Persson's statistical model, that is, by means of homogenization. The corresponding average solid contact pressure is shown in Figure 5(a). Observe first that, due to the presence of roughness, the contact is split into many contact patches in the whole lip apparent contact area and, correspondingly, normal stresses are concentrated into contact spots. Observe also that the average solid contact pressure is smooth at the contact borders, differently from what is expected in the case of a smooth elastic contact (e.g., in the case of the Hertzian contact). This is due to the homogenized roughness contribution, which distributes, over a wider local contact area, the pressure acting on the single deterministic asperity. The average fluid pressure is shown in Figure 5(b).
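Returning to the surface generation described at the beginning of this section, the spectral method can be sketched numerically as follows (NumPy; the grid size, Hurst exponent, and cut-off wavenumbers are hypothetical values of our own, not the values used in the paper). Power-law amplitudes and independent phases uniform in [0, 2π) are assigned in Fourier space, and taking the real part of the inverse FFT enforces the Hermitian symmetry noted above:

import numpy as np

def self_affine_surface(n=256, H=0.8, q0=4, q1=64, seed=0):
    """Periodic, isotropic, self-affine rough surface on an n x n grid.
    Amplitudes follow |B(q)| ~ q^(-(1+H)), so the PSD ~ q^(-2(H+1));
    random uniform phases in [0, 2*pi) give ~Gaussian heights."""
    rng = np.random.default_rng(seed)
    qx = np.fft.fftfreq(n) * n                # integer wavenumbers
    qy = np.fft.fftfreq(n) * n
    q = np.hypot(*np.meshgrid(qx, qy, indexing="ij"))
    amp = np.zeros_like(q)
    band = (q >= q0) & (q <= q1)              # roll-off and cut-off band
    amp[band] = q[band] ** (-(1.0 + H))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=q.shape)
    B = amp * np.exp(1j * phase)
    # .real keeps the Hermitian part of B, i.e. B(-q) = conj(B(q)),
    # so the height field is real as required
    h = np.fft.ifft2(B).real * n * n          # heights, arbitrary units
    return h - h.mean()

h = self_affine_surface()
print(h.std())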
Observe that the fluid pressure presents steep variations (almost step-like) in those locations where the separation between the surfaces takes the smallest values. This is in perfect agreement with the critical junction theory of leakage in seals [8], which predicts that the hydraulic resistivities are concentrated only in a few points where the fluid flow encounters strong restrictions. At these restrictions the pressure must present high gradients, as clearly shown in Figures 5(b) and 6. In particular, in Figure 6 we report the average interfacial separation (black curve), the average solid contact pressure (green curve), the average fluid pressure (red curve), the total average normal stress (blue curve, barely distinguishable from the average solid contact pressure), and the solid contact pressure in the case of perfectly smooth contact (black dashed curve, useful for comparison), along the axial direction. Note that the largest pressure gradients occur in the very proximity of the points where the local average separation takes its minimum value. It is also interesting to observe the normalized solid contact area for this static contact case, see Figure 7(a). Note that, due to the high local squeezing pressure, the low-frequency asperity features (i.e., the roughness asperities described by the deterministic model) have been squeezed so much that they coalesce into larger contact patches. Figure 7(b) shows a sample of the leakage flux-line calculations (red lines) superposed on the locally averaged fluid velocity intensity (in gray scale). As expected [8], the presence of asperity-asperity contacts increases the hydraulic resistivity at the interface and, in particular, the largest values of fluid velocity occur at the local minima of the average interfacial separation. We have also carried out calculations assuming the seal obeys a Mooney-Rivlin model. The results are then compared with the elastic case. In Figure 8 we show the normalized area of real contact for the elastic (Figure 8(a)) and hyperelastic (Figure 8(b)) bulk rheology. Observe that for the linear case the areas of contact are slightly larger, as expected, than for the other case. This, however, does not strongly affect the leakage flow for the current geometry, as shown in Figure 9. However, interestingly, the nonlinear rheology is characterized by a larger value of surface area change (see Figure 10), where we show in red that on part of the interaction domain there is a shrinking. Observe that in-plane surface displacements are responsible for a frequency shift of the roughness PSD; for example, in the case of Figure 10(b) we expect a relevant modification of the homogenized power spectral density shape, which can therefore affect the local contact mechanics and flow factors. In the literature, surface displacements are not usually taken into account; however, the present study shows that such a surface effect can be relevant for contact mechanics.

3.2. Sealing at Non-Zero Sliding Velocity

In this section we show the case of non-zero sliding velocity. The macroscopic lip geometry is the same as before, as well as the roughness PSD. However, this time the part of the roughness that is described deterministically is characterized by only one length scale, that is, just one frequency is included. The remaining roughness has therefore been included within the homogenized approach.
In Figure 11 we show the cavitation areas (black areas) occurring over a circumferential portion of the lip-shaft macroscopic contact domain. Results are shown at different values of the dimensionless sliding velocity. Interestingly, cavitation originates, as expected, on the low pressure side of the region (i.e., on the left side of the domain, the sliding velocity being directed from top to bottom) and at the trailing edge of asperities. By increasing the sliding velocity, the cavitation extends from the low to the high pressure side. The cavitated areas may even coalesce at the largest sliding velocity, with the formation of the cavitation fingers shown in Figure 11(e). The adoption of our two-scale approach thus allows us to capture the complex solid contact and fluid-dynamics characteristics of a real contact geometry, which can give engineers useful insights into the mixed-EHL lubrication conditions occurring at the interface of soft contacts, especially in terms of friction and leakage, with much less computational effort. Figure 12 shows the axial component of the fluid flow at the interface. Red areas correspond to counter-gradient flow (i.e., flow directed in the opposite direction compared to the externally applied fluid pressure gradient). Observe that, on the high-pressure side, due to the presence of the asperities, a certain fluid recirculation is observed which tends to hamper the observed net leakage (blue fingers on the low-pressure side). The recirculation is due to the fluid depressurization which occurs at the divergent part of the single asperity/substrate interfaces. This depressurization induces a significant fluid suction from the high pressure side, which is partially balanced by the flow induced by the counter-pressure gradient.

4. Conclusions

We have presented a novel two-scale approach for the description of the mixed lubrication regime for real soft contacts. We modeled the asperity-asperity and asperity-fluid interactions with a deterministic or a statistical approach (DSA) depending on the length scale at which the contact region is observed. The roughness at large length scales, which mainly determines the fluid flow at the interface, is deterministically included in the model, while the roughness at short wavelengths, which strongly contributes only to the friction, is included by means of a homogenization process (recently developed in [9]). We have applied the DSA to lip-seal contact mechanics modeling, and we have analyzed the mixed lubrication characteristics at nearly-zero and non-zero shaft sliding velocity. In the case of nearly-zero sliding velocity the lip seal behaves as a classical static seal, and we showed that the fluid flow at the interface is determined only at the smallest constrictions along the leakage path, in agreement with recent developments of static seal theory. In the case of non-zero sliding velocity, we showed the occurrence of micro-cavitations and cavitation fingers, whereas leakage has been shown to be associated with asperity-induced fluid suction at the high pressure side (for the given geometry). Finally, we note that DSA-based mixed lubrication models, which belong to the class of multiscale contact mechanics models, provide a high-resolution description of very complex contact problems with a reduced or, at least, tunable computational effort, opening the perspective for their application in general-purpose engineering software.

Appendix

Equation (10) has been discretized with the control volume approach.
In particular, (10) can be integrated over a portion of the contact area. The forward difference for the Couette term has been used in order to get a stable scheme, in conjunction with the adoption of the Gauss-Seidel technique. The average fluid shear stresses are calculated with different expressions in the fluid zones and in the cavitation zones. The resolution scheme is summarized in Figure 13. The contact model is split into two coupled problems, respectively, the deterministic and the homogenized problem. Given the macroscopic gap relation, the average surface stress is determined by solving the homogenized part, which consists of Persson's contact mechanics and of the homogenized fluid problem (previously discussed). The deterministic part follows, where the macroscopic deformation problem is solved as a function of the previously calculated average surface stress field. To do so, in this work we have used the Ansys finite element code; in particular, the lip geometry, similar to that described in [27], has been meshed with tetrahedral structural elements of type 92. The macroscopic gap relation is then finally updated, determining the loop restart. The solver iterates until certain convergence criteria are satisfied. In our case, convergence is checked on the average interfacial separation field. Under-relaxation is usually adopted to numerically damp the interfacial separation solution.

Nomenclature

Fluid viscosity; Poisson's ratio; reduced root-mean-square roughness; local normalized area of solid contact; locally averaged interfacial separation; parameters of the asymptotic interfacial separation law; threshold roughness wavelength; full film density; shear flow conductivity; normal (locally averaged) stress; pressure flow conductivity; normal (locally averaged) fluid stress; normal (locally averaged) solid contact stress; tangential (locally averaged) fluid stress component; tangential (locally averaged) stress component; tangential (locally averaged) solid contact stress component; reduced sliding velocity; roughness magnification or wavenumber; threshold roughness wavenumber; representative area of interaction; area of solid contact; total roughness power spectral density; power spectral density of the statistically calculated roughness; Young's modulus; reduced elastic modulus; cavitation index; shear elastic modulus; mapping rule Jacobian; deterministic surface roughness height function; inverse of the mapping rule Jacobian; representative macroscopic size of the contact; shaft sliding velocity; mean velocity; macroscopic contact shape function; locally stored elastic energy; generic component of the average surface displacement vector; initial state reference; actual or deformed state reference.

Acknowledgment

The authors acknowledge Regione Puglia for having supported part of this research activity through the constitution of the TRASFORMA Laboratory Network cod. 28.

References

1. D. B. Hamilton, J. A. Walowit, and C. M. Allen, "A theory of lubrication by micro-irregularities," Journal of Basic Engineering, vol. 88, p. 177, 1966.
2. K. Tønder, "Mathematical verification of the applicability of modified Reynolds equations to striated rough surfaces," Wear, vol. 44, no. 2, pp. 329–343, 1977.
3. D. Dowson, "Modelling of elastohydrodynamic lubrication of real solids by real lubricants," Meccanica, vol. 33, no. 1, pp.
47–58, 1998.
4. B. N. J. Persson, Sliding Friction: Physical Principles and Applications, Springer, 2000.
5. K. L. Johnson, Contact Mechanics, Cambridge University Press, 1985.
6. B. N. J. Persson and M. Scaraggi, "On the transition from boundary lubrication to hydrodynamic lubrication in soft contacts," Journal of Physics Condensed Matter, vol. 21, no. 18, Article ID 185002, 2009.
7. B. N. J. Persson, "Fluid dynamics at the interface between contacting elastic solids with randomly rough surfaces," Journal of Physics Condensed Matter, vol. 22, no. 26, Article ID 265004, 2010.
8. B. Lorenz and B. N. J. Persson, "Leak rate of seals: effective-medium theory and comparison with experiment," European Physical Journal E, vol. 31, no. 2, pp. 159–167, 2010.
9. B. N. J. Persson and M. Scaraggi, "Lubricated sliding dynamics: flow factors and Stribeck curve," European Physical Journal E, vol. 34, p. 113, 2011.
10. M. Scaraggi, G. Carbone, B. N. J. Persson, and D. Dini, "Lubrication in soft rough contacts: a novel homogenized approach—part I," Soft Matter, vol. 7, pp. 10395–10406, 2011.
11. M. Scaraggi, G. Carbone, B. N. J. Persson, and D. Dini, "Lubrication in soft rough contacts: a novel homogenized approach—part II—discussion," Soft Matter, vol. 7, pp. 10407–10416, 2011.
12. M. Scaraggi, G. Carbone, and D. Dini, "Experimental evidence of micro-EHL lubrication in rough soft contacts," Tribology Letters, vol. 43, no. 2, pp. 169–174, 2011.
13. C. J. Hooke and C. H. Venner, "Surface roughness attenuation in line and point contacts," Proceedings of the Institution of Mechanical Engineers, Part J, vol. 214, no. 5, pp. 439–444, 2000.
14. C. J. Hooke and K. Y. Li, "Rapid calculation of the pressures and clearances in rough, elastohydrodynamically lubricated contacts under pure rolling—part 1: low amplitude, sinusoidal roughness," Proceedings of the Institution of Mechanical Engineers Part C, vol. 220, no. 6, pp. 901–913, 2006.
15. C. H. Venner and A. A. Lubrecht, "An engineering tool for the quantitative prediction of general roughness deformation in EHL contacts based on harmonic waviness attenuation," Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology, vol. 219, no. 5, Article ID J03804, pp. 303–312, 2005.
16. M. Scaraggi and B. N. J. Persson, "Time-dependent fluid squeeze-out between soft elastic solids with randomly rough surfaces," Tribology Letters, vol. 47, no. 3, pp. 409–416, 2012.
17. H. Gao, X. Wang, H. Yao, S. Gorb, and E. Arzt, "Mechanics of hierarchical adhesion structures of geckos," Mechanics of Materials, vol. 37, no. 2-3, pp. 275–285, 2005.
18. B. Bhushan, "Bioinspired structured surfaces," Langmuir, vol. 28, no. 3, pp. 1698–1714, 2012.
19. E. Stratakis, A. Ranella, and C. Fotakis, "Biomimetic micro/nanostructured functional surfaces for microfluidic and tissue engineering applications," Biomicrofluidics, vol. 5, no. 1, Article ID 013411, 2011.
20. G. Carbone, E. Pierro, and S. N. Gorb, "Origin of the superior adhesive performance of mushroom-shaped microstructured surfaces," Soft Matter, vol. 7, no. 12, pp. 5545–5552, 2011.
21. M. Varenberg and S. N. Gorb, "Hexagonal surface micropattern for dry and wet friction," Advanced Materials, vol. 21, no. 4, pp. 483–486, 2009.
22. E. Buselli, V. Pensabene, P. Castrataro, P. Valdastri, A. Menciassi, and P. Dario, "Evaluation of friction enhancement through soft polymer micro-patterns in active capsule endoscopy," Measurement Science and Technology, vol. 21, no. 10, Article ID 105802, 2010.
23. M. Scaraggi, "Lubrication of textured surfaces: a general theory for flow and shear stress factors," Physical Review E, vol. 86, Article ID 026314, 2012.
24. M. Scaraggi, "Textured surface hydrodynamic lubrication: discussion," Tribology Letters, vol. 48, no. 3, pp. 375–391, 2012.
25. R. F. Salant, "Modelling rotary lip seals," Wear, vol. 207, no. 1-2, pp. 92–99, 1997.
26. R. F. Salant, "Theory of lubrication of elastomeric rotary shaft seals," Proceedings of the Institution of Mechanical Engineers, Part J, vol. 213, no. 3, pp. 189–201, 1999.
27. M. Hajjam and D. Bonneau, "Elastohydrodynamic analysis of lip seals with microundulations," Proceedings of the Institution of Mechanical Engineers, Part J, vol. 218, no. 1, pp. 13–21, 2004.
28. M. Hajjam and D. Bonneau, "Influence of the roughness model on the thermoelastohydrodynamic performances of lip seals," Tribology International, vol. 39, no. 3, pp. 198–205, 2006.
29. S. R. Harp and R. F. Salant, "An average flow model of rough surface lubrication with inter-asperity cavitation," Journal of Tribology, vol. 123, no. 1, pp. 134–143, 2001.
30. J. de Vicente, J. R. Stokes, and H. A. Spikes, "The frictional properties of Newtonian fluids in rolling—sliding soft-EHL contact," Tribology Letters, vol. 20, no. 3-4, pp. 273–286, 2005.
31. B. J. Hamrock, Fundamentals of Fluid Film Lubrication, McGraw-Hill, 1994.
32. B. N. J. Persson, "Theory of rubber friction and contact mechanics," Journal of Chemical Physics, vol. 115, no. 8, pp. 3840–3861, 2001.
33. B. N. J. Persson, "Relation between interfacial separation and load: a general theory of contact mechanics," Physical Review Letters, vol. 99, no. 12, Article ID 125502, 2007.
34. B. N. J. Persson, N. Prodanov, B. A. Krick, et al., "Elastic contact mechanics: percolation of the contact area and fluid squeezeout," European Physical Journal E, vol. 35, p. 5, 2012.
35. N. Patir and H. S. Cheng, "An average flow model for determining effects of three-dimensional roughness on partial hydrodynamic lubrication," Journal of Lubrication Technology, Transactions of ASME, vol. 100, no. 1, pp. 12–17, 1978.
36. N. Patir and H. S. Cheng, "Application of average flow model to lubrication between rough sliding surfaces," Journal of Lubrication Technology, Transactions of ASME, vol. 101, no. 2, pp. 220–230, 1979.
37. C. Putignano, L. Afferrante, G. Carbone, and G. P. Demelio, "A new efficient numerical method for contact mechanics of rough surfaces," International Journal of Solids and Structures, vol. 49, no. 2, pp. 338–343, 2012.
{"url":"http://www.hindawi.com/journals/at/2012/412190/","timestamp":"2014-04-17T16:48:43Z","content_type":null,"content_length":"359386","record_id":"<urn:uuid:b26f8dc9-d42d-4a1e-8d79-d49a9514d6f0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

A club decides to sell T-shirts for $15.00 as a fundraiser. It costs $20.00 plus $9.00 per T-shirt to make the T-shirts. Write and solve an equation to find how many T-shirts the club needs to make and sell in order to profit at least $150.00.

Best Response:
-20 - 9x + 15x = 150
6x = 170
x = 170/6 ≈ 28.3, and since the profit must be at least $150, the club needs to make and sell at least 29 T-shirts.
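A quick script check of the answer above (noting that "at least $150" makes the equation an inequality, so the count rounds up):

import math

cost_fixed, cost_per, price, target = 20, 9, 15, 150
# profit(x) = 15x - (20 + 9x) >= 150  =>  6x >= 170
x_min = math.ceil((target + cost_fixed) / (price - cost_per))
print(x_min)  # 29 T-shirts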
{"url":"http://openstudy.com/updates/50411526e4b0af601a257730","timestamp":"2014-04-16T23:01:39Z","content_type":null,"content_length":"30112","record_id":"<urn:uuid:1609d320-0860-442e-8bbf-5ffceb7de0e7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Determinants (using elementary row operations)

July 17th 2012, 05:14 AM #1
Jul 2012
United Kingdom

Determinants (using elementary row operations)
Hi, if anyone could explain how you get from

(x+y+z+t) det
| 1 1 1 1 |
| y z t x |
| z t x y |
| t x y z |

(x+y+z+t) det
| 0         1 1 1 |
| (y-z+t-x) z t x |
| (z-t+x-y) t x y |
| (t-x+y-z) x y z |

that would be brilliant! Totally confused
Last edited by ebb; July 17th 2012 at 05:18 AM.

July 17th 2012, 05:24 AM #2
Jul 2012
United Kingdom

Re: Determinants (using elementary row operations)
I figured that step out but I am stuck again :/
The original question was: find

det | x y z t |
    | y z t x |
    | z t x y |
    | t x y z |

as a product of linear factors.

We end up with

(x+y+z+t)(x-y+z-t) det | (t-y-z+x)  2(x-z)    |
                       | (y-t)      (y+z-t-x) |

which is easy enough to solve, I thought... BUT, they give the final answer as

-(x+y+z+t)(x+iy-z-it)(x-y+z-t)(x-iy-z+it)

WHERE DID THE iS COME FROM? Madness. Help please!
Last edited by ebb; July 17th 2012 at 05:37 AM.
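Not part of the original thread, but the book's answer is easy to verify symbolically; the following sympy check expands the claimed complex factorization and compares it with the determinant (it prints 0 when they agree):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
M = sp.Matrix([[x, y, z, t],
               [y, z, t, x],
               [z, t, x, y],
               [t, x, y, z]])
I = sp.I
claimed = -(x+y+z+t)*(x+I*y-z-I*t)*(x-y+z-t)*(x-I*y-z+I*t)
print(sp.simplify(M.det() - sp.expand(claimed)))  # 0 confirms the factorization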
{"url":"http://mathhelpforum.com/advanced-algebra/201066-determinants-using-elementary-row-operations.html","timestamp":"2014-04-17T14:19:38Z","content_type":null,"content_length":"32868","record_id":"<urn:uuid:ec107fc6-d2a8-44c1-9b7b-dd6ce6e348ab>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
definite integral estimation
January 25th 2013, 11:50 AM #1
Sep 2010
definite integral estimation
I'm practicing for my final exam and I'm having difficulty solving a particular question:
Given the function f(x) = x^9 / sqrt(1+x) defined on the closed interval [0,1], I need to prove, without calculating the integral, that the definite integral of f(x) between 0 and 1 is at least 1/(10*sqrt(2)) and at most 1/10.
Our course material covered only Darboux sums (not Riemann).
My idea was to find a certain partition of the interval [0,1] and show that the upper and lower Darboux sums satisfy
1/(10*sqrt(2)) <= s(P) <= S(P) <= 1/10.
I also noticed that the desired estimate is quite good, so I probably need a very fine partition to be able to prove it.
Is this a correct way of thinking to solve the problem? If yes, how can I think of a way of constructing the desired partition?
Thank you.
{"url":"http://mathhelpforum.com/calculus/212032-definite-integral-estimation.html","timestamp":"2014-04-16T05:36:46Z","content_type":null,"content_length":"29789","record_id":"<urn:uuid:daa94bc5-34bc-4e07-b4b2-c905caaa9074>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Adams Morgan, Washington, DC
Washington, DC 20003
Tufts Grad For Math, Japanese, English and Technology Tutoring
...I have tips and tricks to make learning math simple and fun, and my hope is that the student comes to look forward to his or her tutoring sessions. I have had quite a few prior students in Algebra 2, with one currently, and have helped build their confidence with...
Offering 10+ subjects including algebra 2
{"url":"http://www.wyzant.com/Adams_Morgan_Washington_DC_algebra_2_tutors.aspx","timestamp":"2014-04-20T01:04:19Z","content_type":null,"content_length":"62351","record_id":"<urn:uuid:027e6be4-67ec-4aaf-aca4-07978766f47d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Subtract Fractions

Subtracting fractions with like denominators is easy, but when the fractions have unlike denominators, it can take a few steps to make them have the same denominator so that you can subtract them. These few steps can take a little while, but once you get the hang of it, you'll be able to subtract fractions in no time at all. If you want to know how to do it, just follow these steps.

1. Find the denominators of the fractions. If you want to subtract fractions, then the first thing you have to do is to make sure that they have like denominators. The numerator is the number on the top of the fraction and the denominator is the number on the bottom. In the example, 3/4 - 1/3, the two denominators of the fractions are 4 and 3. Circle them.
   □ If the denominators of the fractions are the same, then you can just subtract the numerators and keep the denominator the same. For example, 4/5 - 3/5 = 1/5. If the fraction is in simplified form, as this one is, then you're done.
2. Find the least common multiple (LCM) of the denominators. The LCM of two numbers is the smallest number that is evenly divisible by both numbers. You'll need to find the LCM of 4 and 3, which will give you the lowest common denominator (LCD) of the fractions. Here's the best method to use for small numbers:
   □ List the first few multiples of 4: 4 x 1 = 4, 4 x 2 = 8, 4 x 3 = 12, 4 x 4 = 16
   □ List the first few multiples of 3: 3 x 1 = 3, 3 x 2 = 6, 3 x 3 = 9, 3 x 4 = 12
   □ Stop when you have found a common multiple. You can see that 12 is a multiple of both 4 and 3. Since it's the smallest, you can stop there.
   ☆ Note that you can do this for all numbers, including whole numbers and mixed numbers. For whole numbers, think of the denominator as 1. (Thus, 2 = 2/1.) For mixed numbers, rewrite the mixed number as an improper fraction. (Thus, 2 1/2 = 5/2.)
3. Make the numerators of the fractions match their new denominators. Now that you know that the LCM of 4 and 3 is 12, you can think of 12 as the new denominator of the fractions. But to make the fractions equivalent, you'll have to multiply their numerators by a number that would give them the same value with their new denominators. Here's how you do it:
   □ With the fraction 3/4, you know the new denominator is 12, so you'll have to find the number that 4 multiplies with to get 12. 4 x 3 = 12, so you'll essentially be multiplying 3/4 x 3/3 for the numerator and denominator to retain the original value of the fraction. You know that 4 x 3 is 12, and that's the denominator, and 3 x 3 is 9, so the new numerator of the fraction is 9. 3/4 can be rewritten as 9/12.
   □ With the fraction 1/3, you know the new denominator is 12, so you'll have to find the number that 3 multiplies with to get 12. 3 x 4 = 12, so you'll essentially be multiplying 1/3 x 4/4 for the numerator and denominator to retain the original value of the fraction. You know that 3 x 4 is 12, and that's the denominator, and 1 x 4 is 4, so the new numerator of the fraction is 4. 1/3 can be rewritten as 4/12.
4. Write the new numerators over the lowest common denominator. Now that you know that the lowest common multiple of 4 and 3 is 12, you can say that the lowest common denominator of the fractions 1/3 and 3/4 is 12. Now that you know the new numerators, you can just write them over the same denominator as one fraction with subtracted numerators.
Make sure to write the new numerators in the appropriate order, since changing the order in a subtraction problem will give you the wrong answer. Here's what you can write:
   □ 3/4 - 1/3 = 9/12 - 4/12
   □ 9/12 - 4/12 = (9-4)/12
5. Subtract the numerators. Once you have written the new numerators over the LCD, you are ready to subtract. Simply subtract the numerators in the appropriate order; do not do anything to the denominators.
   □ 9 - 4 = 5, so 9/12 - 4/12 = 5/12
6. Simplify your answer. Once you have your answer, check to see if you can simplify it. If the numerator and denominator can be divided by the same number, divide them by that number. Remember that fractions are proportions, so whatever you do to the numerator, you must also do to the denominator. Do not divide one without dividing the other by the same number. 5/12 remains as it is because it cannot be simplified further.
   □ For example, the fraction 6/8 can be simplified because both 6 and 8 are divisible by 2. Just divide 6 and 8 by 2 for a new simplified answer: 6/2 = 3, 8/2 = 4, so 6/8 = 3/4.
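If you like to check your work on a computer, Python's built-in fractions module does exact fraction arithmetic and reduces answers to lowest terms automatically — handy for verifying the examples above:

from fractions import Fraction

print(Fraction(3, 4) - Fraction(1, 3))  # 5/12, matching the worked example
print(Fraction(4, 5) - Fraction(3, 5))  # 1/5
print(Fraction(6, 8))                   # 3/4 -- simplification is automatic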
{"url":"http://www.wikihow.com/Subtract-Fractions","timestamp":"2014-04-20T21:07:50Z","content_type":null,"content_length":"67700","record_id":"<urn:uuid:cadf5974-9e5b-4ad8-a92a-852ad1abdd70>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Variance and Standard Deviation Worksheet
Topic: Variance and Standard Deviation Worksheet
This spreadsheet is a product of the conversation between DD, Xlledx, and others under the Magic Formula post linked here.
Average Projections, Variance and Standard Deviation Calc (normal size leagues)
Average Projections, Variance and Standard Deviation Calc (larger leagues)
The spreadsheet is my attempt to build on the idea that the Magic Formula is based on. For a complete explanation of what the Magic Formula is, please read the entirety of Joosbawx's thread, which I referenced and linked above. Cliff notes version: The Magic Formula calculates the difference between the top player and the "last starter" at each position. Those differences (also called ranges) are used to determine which position is deemed most valuable and therefore which position you should focus on early in the draft.
Rather than compare the ranges, DD made the argument that comparing the players' totals to the overall mean (or average) score of their position would be a more accurate representation of each player's value in relation to the rest of their position. Example: The mean of the RB position last year in my league was 101.625. The RB1 scored 188 points, so his "replacement value", or the value of RB1 when compared to the mean, is 188 - 101.625 or 86.375. Conversely, RB20 scored 70 points, so his replacement value would be 70 - 101.625 or -31.625. Once these replacement values are calculated, we can use them to determine a position's variance and standard deviation.
What the heck is variance and standard deviation? OK, quick statistics recap: Variance measures how the data are distributed within a range, or how much the data differ. If every score in the set of data were the same, then the variance is 0; otherwise there will be some level of variance, and the larger the variance, the larger the dispersion. Technically speaking, variance is the average of the squared differences between each score and the mean of all scores, or Var = sum of (score - mean)^2 / n. Standard deviation similarly measures the variation of the data from the average. A low standard deviation indicates that the scores tend to be very close to the mean for the position, whereas a high standard deviation indicates that the scores are spread farther out over a large range. Technically speaking, standard deviation is the square root of the variance.
So what does that have to do with Fantasy Football? Glad you asked. The variance and standard deviation help us determine the disparity among the scores of a given position. The higher the variance and standard deviation, the more spread out the scores are along the range and the farther the scores are from the mean of the position. Therefore, positions with a high variance and high standard deviation will generally be more valuable than positions with a low variance and low standard deviation. Example: Using 2010 data from my league, the RB position had a variance of approx. 1177 and a standard deviation of 34. The WR position had a variance of approx. 501 and a standard deviation of 22. Using those stats, RBs overall were more valuable than WRs.
I already knew RBs were more valuable than WRs. That seems like a lot of work to figure that out. True. But these stats can tell you so much more. By using the standard deviation and the replacement value, you have a baseline to compare individual players at different positions using positional value.
Not sure when QB or WR should begin to enter the Round 1 discussion? This can help. Going back to the above example, using the 2010 stats, here are the replacement values for the top 5 RBs and WRs (actual stats from my league):

Player   Total Pts   X-Mean      |  Player   Total Pts   X-Mean
RB #1    219         114.6667    |  WR #1    148         53.27778
RB #2    144         39.66667    |  WR #2    142         47.27778
RB #3    138         33.66667    |  WR #3    137         42.27778
RB #4    138         33.66667    |  WR #4    125         30.27778
RB #5    130         25.66667    |  WR #5    125         30.27778

Obviously, RB #1 is the most valuable of the players, but surprisingly, WR #1 is the next most valuable, scoring 53 more points than the average WR. Here are the 10 players listed in order, based on replacement value. (Disclaimer: my league scoring is much more balanced among the positions than most. I do not consider my league to have "standard scoring".)

Player   Total Pts   X-Mean
RB #1    219         114.6667
WR #1    148         53.27778
WR #2    142         47.27778
WR #3    137         42.27778
RB #2    144         39.66667
RB #3    138         33.66667
RB #4    138         33.66667
WR #4    125         30.27778
WR #5    125         30.27778
RB #5    130         25.66667

But you're using old data; how does that help me for the 2012 draft? Another good question! By averaging the stats of the last 4 years, we can generate projections for the 2012 season without worrying about guessing yards or TDs. Considering that last year's stats are more meaningful than stats from 3 or 4 years ago, we use the following formula to generate the 2012 projections: 2012 projection = 2011 score (40%) + 2010 score (30%) + 2009 score (20%) + 2008 score (10%). So in 2012, I can predict that the #1 RB will score approx. 210 points (188 x 40% + 219 x 30% + 252 x 20% + 185 x 10%).

OK, OK, I'll bite. How do I use the spreadsheet? There are two spreadsheets. One has the capability to do 12 QBs and TEs along with 36 RBs and WRs. The other is for deeper leagues, with the availability of 24 QBs and TEs along with 48 RBs and WRs. To decide the number of each position to use, simply multiply the number of teams by the number of players you are able to start at each position. Example: a 12 team league with a starting lineup of 1 QB, 2 RB, 3 WR, and 1 TE would use 12 QBs, 24 RBs, 36 WRs, and 12 TEs. Flex example: If your league uses a flex, I would count the flex as an extra spot for every allowed position. This will allow you to determine which position is best to use as your flex. So a 10 team league with a lineup of 1 QB, 2 RB, 2 WR, 1 TE and 1 flex (RB/WR/TE) would use 10 QB, 30 RB, 30 WR, and 20 TE.

Once you have the number of positions, simply enter those players' totals for the years 2008-2011. Who actually scored the points does not matter: 2011 RB#1 is the RB who scored the most points, whether it was Rice, Foster or whoever. As you enter the values, the spreadsheet will automatically begin to calculate the mean, the replacement value (X-Mean), the variance and the standard deviation. Important note: Only enter data for the number of positions that you calculated above. If you are only using data for 10 QBs, leave QB 11 and QB 12 BLANK. If you enter any values in those rows (even 0), those values will be used in the calculations and ultimately skew the results. At this point, the spreadsheet is designed for 4 years' worth of data. If you do not possess that much data, you would need to manually change the formula that recalculates the 2012 projections to use less than 4 years of data. Both spreadsheets already have my league's data included as an example. I hope you find the spreadsheet useful. Please feel free to offer feedback.
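To make the arithmetic above concrete, here is a short Python sketch of the same pipeline: mean, replacement value (X-Mean), population variance, standard deviation, and the 40/30/20/10 weighted projection. The function names are illustrative and do not come from the spreadsheet itself.

```python
import math

def position_stats(scores):
    """Mean, replacement values, variance and std dev for one position."""
    mean = sum(scores) / len(scores)
    x_minus_mean = [s - mean for s in scores]              # replacement values
    variance = sum(d * d for d in x_minus_mean) / len(scores)
    return mean, x_minus_mean, variance, math.sqrt(variance)

def project_2012(p2011, p2010, p2009, p2008):
    """Weighted projection: more recent seasons count more."""
    return 0.4 * p2011 + 0.3 * p2010 + 0.2 * p2009 + 0.1 * p2008

# The post's example for the #1 RB slot:
print(project_2012(188, 219, 252, 185))   # 209.8, i.e. approx. 210 points
```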
Re: Variance and Standard Deviation Worksheet
I can't download it here, but I liked your presentation a lot.

Re: Variance and Standard Deviation Worksheet
Read through twice already... nice work. I think we should join forces to rule the world.

Re: Variance and Standard Deviation Worksheet
I'd love to see a totally comprehensive effort on all of your guys' parts.

Re: Variance and Standard Deviation Worksheet
DD wrote: I'd love to see a totally comprehensive effort on all of your guys' parts.
This could be included in the TMF Worksheet without any difficulty whatsoever.

Re: Variance and Standard Deviation Worksheet
I can't wait to see it; can't get at it now. You guys are all very talented with numbers and putting them places and stuff.

Re: Variance and Standard Deviation Worksheet
Really nice presentation and thorough walk-through... You present your method more cleanly and efficiently than Zarzycki does. I like your method much more!

Re: Variance and Standard Deviation Worksheet
Awesome job, look forward to digging more into this over the weekend. Thanks for all your efforts!

Re: Variance and Standard Deviation Worksheet
Excellent work.

Re: Variance and Standard Deviation Worksheet
Still think you guys are over thinking all of this.

Re: Variance and Standard Deviation Worksheet
JamesRustler wrote: Still think you guys are over thinking all of this.
Dead weight Minimart.

Re: Variance and Standard Deviation Worksheet
I love the variance, mean and standard deviation discussion; it works quite well for my league, especially when I integrate it into the MF. As a point of clarification, Devo: by X-Mean you just mean taking the total points of RB1 and subtracting the position mean from them, correct? Just want to make sure the Var or StDev aren't being integrated there as well.

Re: Variance and Standard Deviation Worksheet
I used the X-Mean and MF to map out when certain positions should be taken in the first 9 rounds of the draft for my particular league's scoring. I then took my league's projections and noted where each player could be drafted by others (yes, I know, and agree, that these are NOT a good basis for evaluation; however, I know that the others in my league will not be doing the same sort of analysis I will be, and I'm sure it's the same in many other leagues, so the others will be 'relying' on the projections to an extent). For example, RB1 - Foster; WR1 - Calvin; QB1 - Rodgers, and so on....
If you have the time and inclination I'd recommend you do the same; the reason being that while we as FFG regulars may understand where you 'should' draft someone, you should be aware that that will NOT happen in your draft. You should keep track of where others are drafting. For instance, if your analysis shows that you can wait for your 2nd RB until the 8th round (mine has RB18, in a 9-team league, going as an 8th round pick) and there's a run on RB2s, you'll have to move up your plans for when you draft one. Know your league and your opponents - I know my league will have a run of QBs early, and while my analysis says I can wait for Ryan as my 8th ranked QB in the mid-4th round, chances are he'll be gone by the early to mid-3rd, and because of positional league importance (thank you, MF) you may need to move him up. I say this for those of you who look at MF and X-Mean as a 'Rule.' Remember it's a guideline, and if you're able to use it to map out where people should be taken, and then be able to use YOUR draft board as well, you'll be able to get value players in some areas but also have to reach for some as well, especially if there's a run on a certain position. Don't be afraid to reach for a player or position because you know the positional importance, I guess is my tip for the day..... trust the numbers and know thy opponent.

Re: Variance and Standard Deviation Worksheet
WinterIsComing wrote: I love the variance, mean and standard deviation discussion; it works quite well for my league, especially when I integrate it into the MF. As a point of clarification, Devo: by X-Mean you just mean taking the total points of RB1 and subtracting the position mean from them, correct? Just want to make sure the Var or StDev aren't being integrated there as well.
Yes, X-Mean simply means that player's points minus the mean of the position. That is why those on the bottom half of the list have negative X-Mean values.

Re: Variance and Standard Deviation Worksheet
JamesRustler wrote: Still think you guys are over thinking all of this.
You. Are. Troll.
{"url":"http://www.thefantasyfootballguys.com/forum/viewtopic.php?id=61264","timestamp":"2014-04-21T04:38:56Z","content_type":null,"content_length":"66719","record_id":"<urn:uuid:a23e8e39-4208-49a9-a33a-ea9a8ad9b39b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
EconPort - Online Resource Summary
Title: Game Theory Course: 3.2 Computing Mixed-Strategy Nash Equilibria of 2 x 2 Strategic-Form Games
Author: Jim Ratliff
Category: Game Theory
Subject: Nash Equilibrium
Type: Article
Description: We'll now see explicitly how to find the set of (mixed-strategy) Nash equilibria for general two-player games where each player has a strategy space containing two actions (i.e., a "2x2 matrix game"). We first compute the best-response correspondence for a player. We partition the possibilities into three cases: the player is completely indifferent; she has a dominant strategy; or, most interestingly, she plays strategically (i.e., based upon her beliefs about her opponent's play).
URL: http://www.virtualperfection.com/gametheory/Section3.2.html
Home URL: http://www.virtualperfection.com/gametheory/index.html
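The "plays strategically" case in the summary comes down to an indifference condition: each player mixes so that the other player's two actions give equal expected payoff. Here is a minimal Python sketch of that computation for an interior equilibrium; the function name and the handling of the degenerate cases are my own illustration, not taken from Ratliff's notes.

```python
def mixed_nash_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 game, if it exists.
    A[i][j]: row player's payoff, B[i][j]: column player's payoff.
    Returns (p, q) with p = P(row plays action 0), q = P(col plays action 0)."""
    dA = A[0][0] - A[0][1] - A[1][0] + A[1][1]   # q solves row's indifference
    dB = B[0][0] - B[1][0] - B[0][1] + B[1][1]   # p solves column's indifference
    if dA == 0 or dB == 0:
        return None   # complete indifference or a dominant strategy instead
    q = (A[1][1] - A[0][1]) / dA
    p = (B[1][1] - B[1][0]) / dB
    return (p, q) if 0 <= p <= 1 and 0 <= q <= 1 else None

# Matching pennies: the unique equilibrium mixes 50/50 on both sides.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(mixed_nash_2x2(A, B))   # (0.5, 0.5)
```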
{"url":"http://www.econport.org/econport/request?page=web_or_summary&contentMetadataID=875","timestamp":"2014-04-18T10:34:48Z","content_type":null,"content_length":"22203","record_id":"<urn:uuid:905baac4-84b5-48df-93b4-85305fffd55f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Closest point on a curve

June 18th 2010, 01:07 AM #1
Hi, I'm trying to solve the following problem:
Find the point on the parabola $y = x^2$ that is closest to the point (3, 0).
Thanks for your help.

June 18th 2010, 01:49 AM #2
Hello Stroodle
We still seem to be having problems with LaTeX at present, so I'll write it without.
The distance, D, of the point (x, y) from (3, 0) is given by D^2 = (x - 3)^2 + y^2.
Now replace y by x^2, and expand to get: D^2 = x^4 + x^2 - 6x + 9.
Differentiate and put the result equal to zero: 4x^3 + 2x - 6 = 0, which has a solution x = 1. The second derivative is positive at this point, giving a minimum value of D^2 (and hence D) at the point (1, 1).

June 18th 2010, 11:51 AM #3
I figured I'd take the time to write this out in LaTeX to assist with clarity, and because I'm bored. I found a site to generate LaTeX images, then I paste the URL here to get the LaTeX equations. It's a pain but it works.
So, as Grandad pointed out above, from the Pythagorean theorem and intuition (or the distance formula), the distance from the points given is:
$D = \sqrt{(x-3)^2 + y^2}$
Since our equation is defined by $f(x) = x^2$ and the point can be represented as $(x, f(x))$, we can replace the $y$ with $x^2$ as in the following:
$D = \sqrt{(x-3)^2 + x^4}$
Then we differentiate and set the derivative equal to zero to find possible solutions (i.e., the critical points). First note that:
$D = \sqrt{x^4 + x^2 - 6x + 9}$
And the derivative is:
$D' = \dfrac{4x^3 + 2x - 6}{2\sqrt{x^4 + x^2 - 6x + 9}}$
I think this derivative is different from what the previous poster put before me, but this is because he took a different approach (differentiating $D$ rather than $D^2$); the answer will be the same either way. Anyway, the zero of this function is 1. Thus, the distance is shortest at $x = 1$, and this distance is the following:
$D = \sqrt{1 + 1 - 6 + 9} = \sqrt{5}$
And that's the answer, I do believe.

June 18th 2010, 12:00 PM #4
FYI, LaTeX is back! Demo: $\displaystyle{\left(\beta mc^2 + \sum_{k = 1}^3 \alpha_k p_k \, c\right) \psi (\mathbf{x},t) = i \hbar \frac{\partial\psi(\mathbf{x},t) }{\partial t}}$

June 18th 2010, 01:20 PM #5
YESSSSSSS! Amen to LaTeX.
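As a quick sanity check of the calculus in this thread, here is a small sympy sketch (variable names are mine) that reproduces the critical point and the minimum distance:

```python
import sympy as sp

x = sp.symbols('x', real=True)
D2 = (x - 3)**2 + (x**2)**2          # squared distance from (x, x^2) to (3, 0)
crit = sp.solve(sp.diff(D2, x), x)   # solves 4x^3 + 2x - 6 = 0
print(crit)                          # [1] -- the only real critical point
print(sp.sqrt(D2.subs(x, 1)))        # sqrt(5), the minimum distance
```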
{"url":"http://mathhelpforum.com/calculus/148769-closest-point-curve.html","timestamp":"2014-04-17T23:23:42Z","content_type":null,"content_length":"47372","record_id":"<urn:uuid:f99a6f40-cd60-404b-9510-c57c30a15366>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Contents
1. Introduction
2. Proposed Low Power Imaging System with Smart Image Capture and Adaptive Complexity 2D-DCT
2.1. Traditional CMOS Imaging System with 2D-DCT Calculation
2.2. Proposed Low Power Imaging System with Smart Image Capture and Adaptive Complexity 2D-DCT Calculation
2.2.1. Architecture of the Proposed Low Power Imaging System
2.2.2. Adaptive Complexity Compression
3. Implementation of the Low Power Imaging System and Results
3.1. Image Sensor for Smart Image Capture with Block Type Decision Block
3.2. Adaptive Complexity 2D-DCT Calculation
3.2.1. Adaptive Data Format
3.2.2. Adaptive Spatial Resolution
3.3. Performance
4. Conclusions
Acknowledgments
Conflict of Interest
References

Wide utilization of portable battery-operated devices in multimedia applications, such as cell phones, portable digital assistants (PDAs) and smart toys, has triggered a demand for ultra low-power imaging systems. CMOS imaging technology has recently become a very attractive solution for these applications, as such sensors consume less power and operate at higher speeds compared with CCD imaging technology [1]. Many low power designs for image capture were reported during the last decade [2,3,4,5,6,7,8]. A review of low power designs in CMOS image sensors at different levels is given in [2]. Some works aim at compensating the reduced signal to noise ratio and dynamic range caused by a low operating voltage [3]. In [4], SOI (Silicon-On-Insulator) technology is used instead of the traditional CMOS technology, since it has smaller parasitic capacitance and reduced leakage current. In 2006, the first self-powered image sensor was proposed by Fish et al. [5]. Then an optimized energy harvesting CMOS image sensor was proposed [6], where the photodetector itself can be used for power generation besides the PGPd. In [7,8] a block based dual-VDD image sensor was proposed. It has dual supply voltages during the image capture stage, and the supply voltage is decided according to the block types.

After image capture, energy-aware data compression is usually performed for efficient transmission. The Discrete Cosine Transform (DCT), and in particular the DCT-II, is often used in image/video processing, such as JPEG still image compression and MJPEG/MPEG video compression, due to its good energy compaction. However, the DCT itself contains very computationally intensive matrix multiplications and is therefore power consuming. Numerous algorithms have been proposed attempting to minimize the number of additions and multiplications, such as the Loeffler DCT [9,10,11], or even to replace multiplications with only add and shift operations, i.e., Distributed Arithmetic (DA), Coordinate Rotation Digital Computer (CORDIC) and binDCT [12,13,14]. Also, data-dependent DCT algorithms have been introduced for low power purposes [15].

In all the existing CMOS imaging systems, the image capture stage and the compression stage are simply concatenated together. The multiplierless DCT architectures are usually time consuming and hardware-expensive. Some DCT designs subsample the areas where pixels change less, but all the block type predictions are made during the digital image processing and require extra processing time and also extra memory space to store the image data during prediction. In our imaging system, however, these two stages are intertwined.
The block type prediction is moved to the front end; that is, the block type is estimated during image capture in an analog way (at the same time as readout). This is more efficient in terms of power. In addition, no extra time is needed during image processing or image capture for block type prediction. The paper is organized as follows. In Section 2, the general algorithm and architecture of the proposed low power imaging system is described. Section 3 discusses the circuit implementation of the imager with the block decision circuit and the adaptive-complexity 2D-DCT. Performance regarding the image quality and the power consumption will also be given in Section 3. Finally, conclusions are presented in Section 4.

The proposed imager for smart image capture is mainly composed of a pixel array, row and column decoders, a block decision block, and readout circuits, as shown in Figure 4. It is similar to the imager proposed in [8], but not exactly the same. It works in a rolling shutter mode; the signals do not need to be sampled to in-pixel capacitors as required in [8], but to a column sample-and-hold circuit. The readout and the decision operations share the same row select signals and row select transistors. In [8], however, the imager works in a global shutter way and the decisions are made at the middle of the integration time; therefore the readout and the decision have separate row select signals and row select transistors. The pixel circuitry here is much simpler than that in [8]. The conventional 3T CMOS APS pixel is used in the pixel array [1]. A p-channel source follower is used to compensate for threshold voltage level-shifting from the n-channel, pixel-level source follower.

Figure 4. Block diagram of the proposed imager for smart image capture.

The block type decision unit is shared by each group of 8 columns and computes the type for each block according to the estimated voltage variance values V[Max] - V[Min], similar to what was done in [8]. The enable signal activates the computations only when needed, to save power. The block type decision unit is shown in Figure 4. A detailed description of the Winner-Take-All (WTA), Loser-Take-All (LTA) and Update Max/Min circuits can be found in [8]. During readout, the decision signals for the 8 × 8 blocks are output one by one through a multiplexer to the compression module for adaptive complexity control. A chip for image capture and block type prediction has been implemented in a TSMC 0.18 µm process, as shown in Figure 5. Its attributes are given in Table 1. The simulations are done with Cadence.

Figure 5. Layout of the image sensor with block type decision circuitry.

Table 1. Chip attributes.
Technology: TSMC 0.18 µm
Voltage supply: 1.8 V
Pixel array size: 128 × 128
Pitch width: 5 µm
Chip size: 2 mm × 2 mm
Fill factor: 26%
Estimated power (whole chip): 0.5 mW @ 30 FPS
Estimated power (decision logic): 7 µW @ 30 FPS

2D-DCT can be done by running a 1D DCT over every row and then over every column. Vector processing using parallel multipliers is one method for implementing the DCT. The advantages of the vector processing method are a regular structure, simple control and interconnect, and a good balance between performance and implementation complexity. The complexity of the 2D-DCT depends on the block type decided by the image sensor during image capture. Two optimizations are performed for the small variance blocks to save power.
For small variance blocks, only the differential part of the pixel values, V[pixel] - V[DC], is used for computing the AC coefficients. Here, V[DC] is the minimum value in the corresponding 8 × 8 block. In order to simplify the implementation and reduce the hardware requirement, the first pixel is used as the DC part instead of the minimum pixel in the block. Figure 6 shows the circuit implementing the adaptive input format according to block types. For large variance blocks, the pixel values are fed to the DCT calculation directly. For small variance blocks, the inputs are calculated by subtracting the first pixel value of a block, V[first_b], from the pixel values, and are then input to the DCT block. The DC values must afterwards be added back into the DC coefficients to generate the final DC coefficients. Another benefit brought by this optimization is a small increase in the image quality of the reconstructed picture. The reason is that, because fewer bits take part in the DCT coefficient calculation, less information is lost during the truncation stage. There are also fewer values toggling during the DCT coefficient calculation, and therefore less power is consumed.

Figure 6. Schematic of adaptive input format according to block types.

For small variance blocks, the spatial resolution can also be reduced without much effect on the image quality. Consequently, part of the calculation can be skipped to save power during the DCT. For small variance blocks, the calculation for the second row is just the same as for the first row, so every other row DCT can be skipped. In addition, since half of the inputs of the non-skipped row DCTs and the column DCTs are the same, half of the calculation groups can be skipped for small variance blocks, as shown in Figure 7. For now, the adaptive complexity 2D-DCT is implemented on an FPGA (Cyclone EP1C20F400C8); later we plan to integrate the capture imager and the compression on the same chip. According to the synthesis report given by Quartus II, there is about a 10% hardware increase compared with a conventional 2D-DCT of fixed complexity. The maximum frequency is 100 MHz.

Figure 7. Schematic of adaptive spatial resolution according to block types. (a) Normal case—large variance blocks; (b) Low-complexity case (dashed paths are disabled)—small variance blocks.
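To illustrate the block-level logic in software (the paper's implementation is an FPGA datapath, so this is only a behavioral sketch), the following Python fragment decides the block type from its max-min spread and, for small-variance blocks, removes the DC part and subsamples 2 × 2 before the DCT. The threshold value and function name are illustrative; `scipy` is assumed to be available.

```python
import numpy as np
from scipy.fft import dctn

def compress_block(block, threshold=30):
    """Adaptive-complexity 2D-DCT for one 8x8 pixel block (behavioral model)."""
    spread = int(block.max()) - int(block.min())   # V_max - V_min, per block
    if spread < threshold:                         # small-variance block
        dc = float(block[0, 0])                    # first pixel plays the DC role
        sub = (block.astype(float) - dc)[::2, ::2] # differential + 2x2 subsample
        coeffs = dctn(sub, norm='ortho')
        coeffs[0, 0] += dc * np.sqrt(sub.size)     # compensate the DC coefficient
        return 'small', coeffs
    return 'large', dctn(block.astype(float), norm='ortho')

blk = np.full((8, 8), 120, dtype=np.uint8)         # a flat background block
kind, c = compress_block(blk)
print(kind, round(float(c[0, 0]), 1))              # small 480.0 (= 120 * sqrt(16))
```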
The change of the compression ratio is not big since the compression is mainly done by DCT and quantization, the sub sampling by 2 × 2 on background blocks does not add much to the compression ratio. How PSNR changes with the variance threshold also depends on image types. Figure 10 shows the reconstructed image quality and power consumption for different types of images at different variance thresholds compared with those of a traditional compression in worst case (no quantization is performed). (a) PSNR of reconstructed images vs. Variance threshold at different quantization levels; (b) Compression ratio and PSNR vs. Variance threshold. (a) PSNR and Power vs. Variance threshold for normal background images—Cameraman; (b) PSNR and Power vs. Variance threshold for flat background images—Plane; (c) PSNR and Power vs. Variance threshold for busy background images—Garden. The power is estimated by Quartus II based on the Voltage Change Dump (VCD) files from post layout netlist simulation and the PSNR analysis is done by Matlab. The clock frequency used here is 0.5 MHz corresponding to a 128 × 128 array working at 30 FPS. It can be increased up to 100 MHz for imagers with larger array size and higher frame rate. The power savings varies from image to image. It is more efficient for predominant background images with up to about 46% power saving while no extra image quality degradation is observed compared with traditional compression. Extra image quality degradation is small because optimizations are performed only for background blocks. Power saving and the image quality of the reconstructed picture depend on the variance threshold. It is a tradeoff between image quality and power, and can be easily controlled by the threshold according to different applications. As shown in Table 2, by choosing the appropriate variance threshold, i.e., 30 out of 256 for 8-bit resolution images, significant amounts of power can be saved with no extra image quality degradation. jlpea-03-00267-t002_Table 2 Power saved During 2D-DCT Calculation. Images Background ratio (th = 30) Percentage of power saving Extra image quality degradation Garden 0.8% 0.5% None Cameraman 50% 24% None Plane 90% 46% None The power of the whole imaging system is the sum of the image sensor, ADC and the compression. For the proposed image sensor, the power estimation of the block type prediction is 7.0 µW. This adds only about 1.4% to the total imager power, and 0.7% to the system. Therefore, we conclude that the expected power savings outperform the extra power caused by the block type decision circuitry and result in significant power savings for the system. A novel low power CMOS imaging system with smart image capture and adaptive complexity 2D-DCT calculation is proposed, simulated and implemented. The complexity of the 2D-DCT calculation is controlled by the block types which are estimated during image capture stage. It does not add additional processing time or memory space for block type prediction. The imager is more efficient when the picture has predominant background. By choosing appropriate threshold, up to 46% of the power consumption can be saved during 2D-DCT calculation for images having predominant background, while no extra image quality degradation occurs for the reconstructed pictures compared with traditional compressions. For typical scenarios, up to about 23% of power can be saved for the whole imaging system. 
The idea of smart image capture and adaptive complexity according to block types can be extended to other 2D DCT architectures.
{"url":"http://www.mdpi.com/2079-9268/3/3/267/xml","timestamp":"2014-04-16T04:17:34Z","content_type":null,"content_length":"50387","record_id":"<urn:uuid:a8ed3a35-9dcc-41f1-b24d-e238264cab73>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Ali Enayat e-mail : enayat AT american DOT edu AS A RESULT, THIS WEBPAGE WILL TRANSMIGRATE TO ITS NEXT LIFE SOME TIME THIS SUMMER. THE ABOVE E-MAIL ADDRESS IS VALID UNTIL FURTHER NOTICE. Telegraphic Bio I started working as a faculty member in the Department of Mathematics and Statistics of American University in 1987, and have been here except for my periods of sabbatical/research leave: I spent Fall 1993 in Tehran, Iran, as a faculty member at Sharif University of Technology, and a researcher at IPM ; during Spring and Summer of 2007, I was in the Netherlands at Utrecht University ; I spent September and October 2009 in Sweden at the Mittag-Leffler Institute, and during November 2009 I was in the Netherlands for collaborative work with Professor Albert Visser at Utrecht University. I grew up in Tehran, Iran during the ancien régime, where I attended Ario Elementary School, and graduated from Alborz High School in 1976, three years before the Iranian revolution. I received my B.S. degree from Iowa State University (1979), and my Ph.D. in Mathematics (1984) from the University of Wisconsin (Madison), under the direction of Ken Kunen. I was a faculty member at Western Illinois University during 1984-1985, and at San Jose State University (California) during 1985-1987. I am a mathematical logician, with a strong interest in the metamathematics of foundational axiomatic systems such as Zermelo-Fraenkel set theory (ZF) and Peano arithmetic (PA). My approach is dominantly model-theoretic and has focused on fragments of ZF, ZF with large cardinals, Quine-Jensen set theory NFU, and arithmetical systems of various flavors, ranging from fragments of PA, all the way to second order arithmetic and its subsystems. One of my research projects deals with the comparative study of automorphisms of models of a variety of theories, ranging in strength from fragments of Peano arithmetic, all the way up to systems of set theory with large cardinals. The picture that has emerged from this work reveals that many foundational theories T (such as Peano arithmetic, second order arithmetic, and certain extensions of set theory with large cardinals) are characterized by the behavior of the automorphisms of models of T. This work also sheds light on the model theory of the Quine-Jensen system NFU of set theory with a universal set. I have completed four papers concerning automorphisms of models of foundational theories, available below, and I am working on-and-off on several others. More recently, I have also started working on calibrating the interpretability strength of various foundational systems (including finite set theory, and theories with built-in satisfaction classes). My joint paper with Albert Visser and James Schmerl (available below) is the first paper in a projected series of papers dealing with this topic. My latest work involves a new look at self-embeddings of models of arithmetic and set theory. Some of this work is in collaboration with Volodya Shavrukov. I serve as an associate editor of the Bulletin of the Iranian Mathematical Society, dealing with papers in the areas of mathematical logic and set theory. You can find the latest issue of the Bulletin of the Iranian Mathematical Society online at http://bims.ims.ir . Recent Papers : New Constructions of full satisfaction classes (with Albert Visser). 
Also, the paper below is ready; send me an e-mail if you are interested in obtaining a copy: A New Proof of Tanaka’s Theorem (every nonstandard countable model of WKL[0] has a nontrivial self-embedding onto an initial segment of itself). · Models of Arithmetic Papers Automorphisms of models of bounded arithmetic, Fundamenta Mathematicae, vol.192 (2006), pp. 37-65. From bounded arithmetic to second order arithmetic via automorphisms, in Logic in Tehran, Lecture Notes in Logic, vol. 26, Association for Symbolic Logic, 2006. · Models of Set Theory AND Arithmetic Papers Omega-models of finite set theory [with James Schmerl and Albert Visser], in Set theory, Arithmetic, and Foundations of Mathematics: Theorems, Philosophies (edited by J. Kennedy and R. Kossak), Cambridge University Press, 2011. An improper arithmetically closed Borel subalgebra of P(omega) mod FIN [with Saharon Shelah], Topology and Its Applications, December 2011, pp. 2495-2502, journal copy available here. Minimal elementary extensions of models of set theory and arithmetic, Archive for Mathematical Logic, (1990), no. 3, 181-192. Undefinable classes and definable elements in models of set theory and arithmetic. Proceedings of American Mathematical Society, 103 (1988), no. 4, 1216-1220 · Models of Set Theory Papers Counting models of set theory. Fundamenta Mathematicae vol. 174 (2002), no. 1, pp. 23-47. Power-like models of set theory, Journal of Symbolic Logic (2001), no. 4, pp. 1766-1782. Analogues of the MacDowell-Specker theorem for set theory. Models, algebras, and proofs (Bogotá, 1995), pp. 25-50, Lecture Notes in Pure and Appl. Math., 203, Dekker, New York, 1999. Conservative extensions of models of set theory and generalizations.,J. Symbolic Logic, vol. 51 (1986), no. 4, pp. 1005-1021. Weakly compact cardinals in models of set theory, Journal of Symbolic Logic, vol. 50 (1985), no. 2, pp. 476-486. On certain elementary extensions of models of set theory. Transactions of American Mathematical Society, vol. 283 (1984), no. 2, pp. 705-715. · General Model Theory Papers · Topology Papers Delta as a continuous function of x and epsilon. American Mathematical Monthly, 107 (2000), no. 2, pp. 151-155. Nonmetrizability of uncountable well-ordered spaces [with A. Abian], Simon Stevin (published in Belgium) 55 (1981), no. 1-2, pp. 3--6. · Edited Proceedings Volumes Proceedings of the IPM Logic Conference 2007, Special Issue of the Annals of Pure and Applied Logic, (guest) edited by A. Enayat and I. Kalantari, March 2010. Logic in Tehran, Proceedings of the Logic, Algebra, and Arithmetic conference held in Tehran during October 2003, edited by A. Enayat, I. Kalantari, and M. Moniri, Lecture Notes in Logic Series, vol. 26, Association for Symbolic Logic, La Jolla, CA; A K Peters, Ltd., Wellesley, MA, 2006. Nonstandard Models of Arithmetic and Set Theory, Contemporary Mathematics, vol. 361, American Mathematical Society (2004), edited by A. Enayat and R. Kossak, American Mathematical Society, Providence, RI, 2004. · Forthcoming 2013 Meetings 32nd Weak Arithmetics Days, June 24-26, Athens, Greece. · Recent Talks, Visits, and Meetings University of Oslo Logic Seminar (Norway, Sep 13, 2012) University of Gothenburg Logic Seminar (Sweden, Sep 14, 2012) Oxford, UK ( March 26-28, 2012, visited Prof. Volker Halbach and his research group) Cambridge, UK (March 28-30, 2012, visited Prof. 
Thomas Forster and his research group) · Somewhat Recent Talks Young Set Theory Workshop, (March 2011, Königswinter, Germany), slides 1, slides 2 · Less Recent Talks Pure Mathematics Seminar, University of East Anglia (May, 2010, Norwich, England) Model Theory Seminar, University of Leeds (May 2010, Leeds, England) Logic Seminar, University of Manchester (May 2010, Manchester, England) Kunen Fest: Topology and Set Theory Conference (April 2009, University of Wisconsin, Madison), slides IPM Logic Conference 2007 (June 2007, Tehran, Iran), slides(1), slides(2) University of Paris Logic Seminar (May 2007, Paris, France), slides 70^th Anniversary of NF (May 2007, Cambridge, England), slides University of Manchester Logic Seminar (April 2007, Manchester, England), slides Oxford Logic Seminar (April 2007, Oxford, England) Amsterdam-Utrecht Logic Colloquium (April 2007, Utrecht, The Netherlands), slides NYC Logic Conference (May 2005, New York City) Logic, Algebra, and Arithmetic (October 2003, Tehran, Iran) ASL Winter Meeting (with Joint Mathematics Meetings) of the Association for Symbolic Logic (January 2009, Washington, DC) Logic Colloquium 2008 (July 2008, Bern, Switzerland) Ph.D. Students Amir Togha, PhD George Washington University 2004, On Automorphisms of Structures in Logic and Orderability of Groups in Topology [jointly advised with Professor Valentina Harizanov]. Dr. Togha is currently an assistant professor at CUNY-Bronx. Shahram Mohsenipour, PhD Institute for Theoretical Physics and Mathematics (IPM) 2005, Elementary End Extensions in Model Theory and Set Theory. Dr. Mohsenipour is currently holding a research position at IPM. MA Students Betsy Andersen (project topic: Coding Theory), completed Spring 1994. Omar Mirza (project topic: Gödel’s Theorem), completed Summer 1994. Blair Jones, (thesis topic: Ramsey Theory), completed Spring 1995. Michelle Perschbacher (project topic: Music and Number Theory), completed Summer 1997. Valbona Bejleri (project topic: The Probablistic Method), completed Summer 2001. Adeniran Adeboye (project topic: Combinatorial Number Theory), completed Spring 2002. Anna Rose Haralampus (project topic: Fractals and Topology), completed Spring 2003. Caleb Rossiter (thesis topic: Relativity Theory), completed Spring 2004. Stephen Wheatley (project topic: Nonstandard Analysis), completed Spring 2006. Mahmoud Momenipour IASBS (thesis topic: Logical Foundations of Nonstandard Analysis), completed Fall 2006. Kun Zhao, (project topic: Laws of Large Numbers), completed Summer 2010. Adam Moskey (thesis topic: Continuous Selections), completed Fall 2010. Michael Headley (project topic: Transcendental Numbers), completed Fall 2010. Michael Cassel (project topic: Automorphisms of Models of PA), completed, Spring 2013. Amanda Purcell (thesis topic: Ultraproducts and Applications), in progress.
{"url":"http://academic2.american.edu/~enayat/","timestamp":"2014-04-16T15:58:36Z","content_type":null,"content_length":"109134","record_id":"<urn:uuid:0fe59971-359a-4de2-b8b0-94983bf59ecb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Raising Chickens for Meat (and Money)

Raising chickens for meat is a great sideline business for a small farm. You can fill your freezer with great, naturally-grown food, and put a few dollars in your wallet, too. And, if you raise chickens seasonally (i.e., in the snow-free months only), you can get started with very little equipment and resources. But what kind of money can you expect to make raising meat chickens? Here's a breakdown of the expenses and income possible:

VARIABLE COSTS - these are the costs that apply to each bird you raise. Note I'm using my expense figures as an example; substitute your own costs into the formula given to figure your costs.
• Day-old chicks @ $1.00 each
• Organic chick starter feed @ $0.30/lb
• Organic chick grower feed @ $0.26/lb
• Processing at abattoir @ $3.00 each

FIXED COSTS - These are the costs for equipment that can be reused and amortized over a number of years. Here's what you need:
• Broody boxes and heat lamps to shelter the young meat chicks
• Feeders and waterers
• A movable coop
• Electromesh fencing and fence charger

Let's use the information above to make a sample budget for a small meat chicken operation, raising 300 meat birds in 'batches' of 100. We are using small batches so that equipment can be re-used as each batch is finished.

Raising Chickens for Meat - Sample Budget

Let's assume we want to raise three batches of 100 meat birds, aiming for a 'market weight' of 5 lbs. per bird. Here's how to calculate costs and profits:

Figure a feed conversion of about 5:1; that is, it takes 5 lbs. of feed to grow 1 lb. of chicken. This will further break down into 1/3 organic chick starter feed and 2/3 grower. Here's the feed cost using the feed prices given: 300 meat birds x 5 lbs. each x feed conversion of 5:1 = a total of 7,500 lbs. of feed. 1/3 of this is chick starter = 2,500 lbs. @ $0.30/lb = $750. 2/3 is chick grower = 5,000 lbs. x $0.26/lb = $1,300. So the total feed cost to raise 300 meat chickens is $2,050. Add in the cost of processing at the abattoir - $3.00 per bird - and the total variable costs = $2,050 + (300 x $3) = $2,950.

Calculating the Fixed Costs. Your fixed costs will vary quite a bit depending on what equipment you use, and whether you buy or build your coop, feeders, and broody boxes. I'll use numbers from my records, and you can use the same formula, substituting your own figures, to calculate your fixed costs.
• Movable coop $150
• Broody boxes $75
• Lamps, feeders, waterers, etc. $175
• Electric poultry netting $150
• Fence charger, wire, ground posts $250
Total fixed costs: $800

So, let's run those numbers to create our budget. Assume we can amortize all the equipment over 5 years, and that we will continue to do 3 batches of meat chickens each year. Here are the results:
• Variable costs for 300 birds (feed plus processing) = $2,950
• Fixed costs for 300 birds (amortized over 5 years) = $800/5 = $160
Total costs per batch of 300 meat chickens = $3,110.

Now, what to charge? The cost per pound of your chicken is $3,110 / (300 x 5 lbs.) = about $2.07/lb. I recommend pricing your chicken to net at least 60% or 70%, to allow for losses. So in this example, your price should be around $3.25 to $3.35/lb. This means you will net around $1,800 on the first 3 batches, which pays for all your equipment and gives you about $1,000 free and clear. Makes raising chickens for meat a pretty good start-up business.

See also: Tips for Raising Chickens for Meat | The New Terra Farm Movable Coop
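To check the arithmetic, or to adapt the budget to your own prices, here is the same calculation as a small Python sketch. All figures are the article's example numbers, and the function name is mine.

```python
def batch_budget(birds=300, weight_lbs=5.0, feed_conversion=5.0,
                 starter_price=0.30, grower_price=0.26,
                 processing=3.00, fixed=800.0, years=5):
    feed_lbs = birds * weight_lbs * feed_conversion          # 7,500 lbs total
    feed = feed_lbs / 3 * starter_price + feed_lbs * 2 / 3 * grower_price
    variable = feed + birds * processing                     # $2,950
    total = variable + fixed / years                         # $3,110 per year
    return total, total / (birds * weight_lbs)               # total, cost per lb

total, per_lb = batch_budget()
print(f"total ${total:,.0f}; cost ${per_lb:.2f}/lb; "
      f"price at 60% markup ${per_lb * 1.6:.2f}/lb")
# total $3,110; cost $2.07/lb; price at 60% markup $3.32/lb
```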
{"url":"http://www.new-terra-natural-food.com/raising-chickens-for-meat.html","timestamp":"2014-04-21T10:20:40Z","content_type":null,"content_length":"10274","record_id":"<urn:uuid:36b38233-bd35-4524-852a-26fe29f56cd4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
The introduction of the decimal system to Europeans in the 13th century was the most significant achievement in the development of a number system, one in which calculation with large numbers became feasible. Without the notion of zero, the descriptive and prescriptive modeling processes in commerce, astronomy, physics, chemistry, and industry would have been unthinkable. The lack of such a symbol is one of the serious drawbacks of the Roman numeral system. In addition, the Roman numeral system is difficult to use in any arithmetic operation, such as multiplication. The purpose of this site is to raise students' and teachers' awareness of issues in working with zero and other numbers. Imprecise mathematical thinking is by no means unknown; however, we need to think more clearly if we are to keep out of confusions.

It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position as well as an absolute value; a profound and important idea which appears so simple to us now that we ignore its true merit. But its very simplicity and the great ease which it has lent to computations put our arithmetic in the first rank of useful inventions. We should appreciate the grandeur of the achievement the more when we remember that it escaped the genius of Archimedes and Apollonius, two of the greatest men produced by Greek antiquity.

Culturally, our discomfort with the concepts of zero (and the infinite) is reflected in such humor as "2 plus 0 still equals 2, even for large values," and popular retorts of similar tone. A like uneasiness occurs in confronting infinity, whose proper use first rests on a careful definition of what is finite. Are we mortals hesitant to admit to our finite nature? Such lighthearted commentary reflects an underlying awkwardness in the manipulation of mathematical expressions where the notions of zero and infinity present themselves. Another fallacy is that the square root of a positive number yields two distinct results. It is not simply a problem of ignorance by young novices who have often been mangled. The same errors are commonly committed by seasoned practitioners. Nay, even educators! These errors can frequently be found as well in prestigious texts published by mainstream publishers.

A Common Fallacy

Reading the 10th edition of a book on Management Science (Taylor, 2010), I found the author dividing 2 by zero in a Linear Programming Simplex tableau while performing a column ratio test, with the stated conclusion 2 ÷ 0 = infinity (∞). A typographical error? Confusion? Willful sin? A telephone call bringing the obvious error to the attention of the publisher for correction in future editions was met with an astonishing return call from the editor of the text, still insisting that 2 ÷ 0 = ∞. Although both the author and the editor insist on this computational outcome, they nonetheless somehow decline to continue the Simplex calculation based on this result, contrary to the logic of their conclusion. The questions I had were: How can you divide two by zero? Which number, when multiplied by zero, gives you 2?

Dividing by Zero Can Get You into Trouble!
If we persist in retaining such errata in our educational texts, an unwitting or unscrupulous person could utilize the result to show that 1 = 2, as follows. Let a = b. Then:
a^2 = ab
a^2 - b^2 = ab - b^2
(a - b)(a + b) = b(a - b)
a + b = b (dividing both sides by a - b, which is zero)
2b = b, and therefore 2 = 1.
This result follows directly from the assumption that it is a legal operation to divide by zero, because a - a = 0. If one divides 2 by zero even on a simple, inexpensive calculator, the display will indicate an error condition. Again, I do emphasize: the question in this section goes beyond the fallacy of whether or not 2/0 is infinity. It demonstrates that one should never divide by zero [here, by (a - a)]. If one does allow oneself to divide by zero, then one ends up in a hell. That is all.

A Sample of the Grown-ups' Arguments Insisting on Dividing by Zero

For your thinking pleasure while reading, below is a sample of the grown-ups' arguments that insist on their own justifications for dividing by zero, among others.

□ A visitor of this site wrote to me: "My argument is that if you can divide zero into a number, and come out with nothing, then you should be able to divide by zero and get nothing. I learned to divide by groups, like 20/5 is 20 put into 5 groups, which equals 4 in each group. Well, if you can put nothing into groups, you should be able to put a number into no groups and come out with the same answer."

□ Here is yet another persistent argument, built on a wrong idea we were taught at an early age. However, instead of re-thinking for ourselves with a new-mind eye, unfortunately, some of us still try to justify what is unjustifiable: "I'm one of those people who doesn't see the difference between saying that "two divided by zero is something that simply can't be done" and "two divided by zero is infinity" because, to me, neither one results in a tangible answer." This visitor seems to suggest that since 2/0 is meaningless to use and ∞ is undefined, both are therefore the same thing. This is bad logic, isn't it?

□ Another visitor wrote: "If 2/0 is infinity and undefined, I can accept that because infinity is not defined as anything in particular, but just a number that is too large to compute. I believe this is the same reason that the calculator gives "error" as the result of that division. I am not saying that infinity and undefined are the same concept, but I think they are related." If the result of dividing by zero (which is a meaningless operation in the first place) on a calculator were infinity, then the calculator would give you "infinity", not the word "error" or the letter "E". This is so because only a limited number of letters, such as E or the word Error, can be formed on the display; any illegal operation defaults to the error indicator. The Error message means that "the operation you have performed is illegal, such as dividing by zero." Right? The calculator does the mathematical operations, but it does not have a mind of its own; your mind does the interpretation of the results. Notice also that, although no calculator has the characteristics of a human mind, every calculator has its own "infinity": infinity for a particular calculator is any number greater than the largest number you can display on that particular calculator. "Infinity" is a notion, not a number, and it has meaning only in relation to what is "finite" in a given situation. Do you understand me?
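The calculator's error display has a direct analogue in programming languages, which may help make the point concrete. The sketch below shows Python's behavior; note that IEEE-754 floating point provides a special `inf` sentinel for overflow and error propagation, but, as the surrounding discussion argues, that is an engineering convention, not a claim that the quotient 2/0 exists.

```python
try:
    print(2 / 0)                  # exact integer arithmetic: an illegal operation
except ZeroDivisionError as e:
    print("error:", e)            # Python's analogue of the calculator's "E"

inf = float("inf")                # the IEEE-754 sentinel: a notion, not a number
print(inf + 100 == inf)           # True, but only because inf absorbs everything
print(inf - inf)                  # nan: arithmetic "with infinity" stays undefined
```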
□ One of my favorite visitors of this site kindly wrote to me: "I agree with you that one should never divide by zero. I'm just not convinced that your argument explains why we shouldn't. I feel the 1 = 2 argument is circular in that particular context. It shows that if we assume that division by zero IS infinity then there is no contradiction. There is only a contradiction if division by zero is NOT infinity--the exact opposite of the point you wish to make." The contradiction comes from an inverse argument. Suppose we allow division by zero; then, e.g., one can "show" 1 = 2, because dividing both sides by zero we get ∞ = ∞. As we notice, the first ∞ is not necessarily equal to the second one; otherwise one could do the backward operation and then conclude that 1 = 2.

The same visitor patiently wrote back to me: "The two infinities are exactly the same "quantity" just as 6/2 and 3 are the same quantity. How do you mean they are not the same? Infinity plus 100 IS infinity. This can be proven by doing a simple one-to-one mapping of 100+infinity onto itself. Hilbert used his infamous hotel example to show this. The question he was asked is to imagine an infinitely large hotel with one guest in each and every room. Now imagine a new guest who desires a room. He does not want to share a room. How do you make room for him? The answer is, as in all good puzzles, obvious." Unfortunately, David Hilbert got into this trap. "This is not mathematics. It is theology," a remark made by Paul Albert Gordan, as reported in Olver's book. Remember that the sign ∞ does not stand for any number; it stands only for a concept, and an "infinite quantity" is unmeasurable by any numerical scale. Therefore, one should never perform any kind of arithmetic operation with it, such as ∞ + 100 = ∞, which gives the silly result 100 = 0. Remember also that good Logic (including dialogue logic, interrogative logic, informal logic, probability logic and artificial intelligence) is a strong container in which we put our ideas in order to deliver them to someone else. Therefore, empty logic is useless; likewise, having useful ideas but not using strong logic to share them leaves them dead. One must look for both the container and what it contains. Both are needed: good ideas communicated by good logic.

□ One of my British colleagues kindly wrote to me: "… Personally I like the numeral zero as it provokes people to think about their preconceptions. It is possible that people find the notion that you can NOT divide a number by zero unnerving because they like their life organized. Again this could be that when they "do" maths they only consider a "tick" for a correct answer their own reward! Math is far more fun and interesting, don't you think? As a non-maths specialist lecturer I am pleased to point students to consider your web page. Please keep such ideas in the public domain."

Mathematical modeling (i.e., mathematical thinking) is the process of contemplating the decision problem. In mathematical modeling, mathematics is used as a language to describe, and as a tool to prescribe and control, the decision making process. Therefore the mathematical modeling process aims at describing, prescribing, and controlling our decision making in all areas of human activity. The cardinal aim of the mathematical modeling process is to make our world measurable, calculable, predictable, and thus more manageable. The decision making process is contemplating the elements of the decision. By the definition of esthetics, the longer you contemplate anything, the more beautiful that thing is. With respect to the beauty of the mathematical modeling process, what distinguishes it from other mental manifestations is that this process is the result of the perfect apprehension of relations formed by a complexity of elements of the model. Our high school curriculum should put more emphasis on mathematical modeling rather than on maths that in most cases is merely "puzzle solving" with nothing to do with students' lives. This would bring excitement to learning the math language and its applications. Mathematics may be difficult for some students' minds to grasp because of its hierarchical structure: one thing builds on another and depends on it. Roadmapping is the duty of our teachers. Much of the weakness in our current Math Education system is historical in nature and can be discerned by carefully thinking about the following diagram, a simplified 4-step model of using mathematics to solve a problem:

[Figure: A Mathematics Education System -- a simplified 4-step model of using mathematics to solve a problem]

Standard estimates are that about 80 percent of Math Education at the K-12 level is focused on part 2 of the above diagram. Historically, Math Education systems focused on helping students learn to carry out a number of different types of "step 2" using some combination of mental and written knowledge and skills. It takes a typical student hundreds of hours of study and practice to develop a reasonable level of speed and accuracy in performing addition, subtraction, multiplication, and division on integers, decimal fractions, and fractions. Even this amount of instructional time and practice -- spread out over years of schooling -- tends to produce modest results. Speed and accuracy decline relatively rapidly without continued practice of the skills.

During the past 5,000 years there has been a steadily increasing body of knowledge in mathematics, science, and engineering. The industrial age and our more recent information age have led to a steady increase in the use of "higher" math in many different disciplines and on the job. Our Education System has moved steadily toward the idea that the basic computational aspects described above are insufficient. Students also need to know basic algebra, geometry, statistics, probability, and other higher math topics. As these topics began to be introduced into the general curriculum, a gap developed between the math that students were learning in school and the math that most people used in their everyday lives. More and more, Math Education focused on learning math topics in a self-contained environment, where what was being learned had little immediate use in the lives of the students and little use in the lives of their parents. A pattern of Math Education curriculum developed in which one of the main reasons for learning the material in a particular course was to be prepared to take the next course. Students developed little skill at transferring their math knowledge and skills into non-math disciplines or into problems that they encountered outside of school. Only a modest number of adults maintain the math knowledge and skills that they initially developed while studying algebra, geometry, and other topics beyond basic arithmetic. That brings us up to current times. Many high schools require students to take three years of math (during their four years of high school) in order to graduate. There is considerable pressure to have all students take an algebra course.
With respect to beauty of the mathematical modeling process, we distinguish it from other mental manifestations; this process is the results of the perfect apprehension of relations formed by a complexity of elements of the model. Our high school curriculum should put more emphasis on mathematical modeling rather than maths which in most cases are merely "puzzle solving" which has nothing to do with students lives. This will bring excitement in learning the math language and its applications. Mathematics may be difficult for some students' minds to grasp because of its hierarchical structure: one thing builds on another and depends on it. Roadmapping is the duty of our teachers. Much of the weakness in our current Math Education system is historical in nature and can be discerned by carefully thinking about the following diagram. It is a simplified 4-step model of using mathematics to solve a problem: Click on the image to enlarge. A Mathematics Education System Standard estimates are that about 80-percent of Math Education at the K-12 level is focused on part 2 of the above diagram. Historically, Math Education systems focused on helping students to learn to carry out a number of different types of "step 2" using some combination of mental and written knowledge and skills. It takes a typical students hundreds of hours of study and practice to develop a reasonable level of speed and accuracy in performing addition, subtraction, multiplication, and division on integers, decimal fractions, and fractions. Even this amount of instructional time and practice -- spread out over years of schooling -- tends to produce modest results. Speed and accuracy decline relatively rapidly without continued practice of the skills. During the past 5,000 years there has been a steady increasing body of knowledge in mathematics, science, and engineering. The industrial age and our more recent information age have lead to a steady increase in the use of "higher" math in many different disciplines and on the job. Our Education System has moved steadily toward the idea that the basic computational aspects described above are insufficient. Students also need to know basic algebra, geometry, statistics, probability, and other higher math topics. As these topics began to be introduced into the general curriculum, a gap developed between the math that students were learning in school and the math that most people used in their everyday lives. More and more, Math Education focused on learning math topics in a self-contained environment where what was being learned had little immediate use in the lives of the students and little use in the lives of their parents. A pattern of Math Education curriculum developed in which one of the main reasons for learning the material in a particular course was to be prepared to take the next course. Students developed little skill at transferring their math knowledge and skills into non-math disciplines or into problems that they encountered outside of school. Only a modest number of adults maintain the math knowledge and skills that they initially developed while studying algebra, geometry, and other topics beyond basic arithmetic. That brings us up to current times. Many high schools require students to take three years of math (during their four years of high school) in order to graduate. There is considerable pressure to have all students take an algebra course. 
The nature of the instruction and the learning in many of these math courses follows the "80 percent on step 2" pattern noted above. Students are not learning the underlying concepts, nor how to make use of the math in other courses or outside of a formal school setting.

The language of mathematics does not consist of formulas alone. The definitions and terms are verbalized, often acquiring a meaning different from the customary one. Many students are inclined to hold this against mathematics. For example, one may wonder whether 0 is a number. As the argument goes, it is not, because when one says, "I watched a number of movies," one does not mean 0 as a possibility; nor is 1 a likely candidate. But do not forget that ambiguities exist in plain English (the word "number" is itself one of them) and in other sciences as well. As a matter of fact, mathematical language is far more accurate than any other one may think of. Do not forget also that every science and field of human activity has its own lingo and word usage, in many instances quite different from the one a person may be more comfortable with. As a final note in this subsection, the 4-step diagram represents only part of the field of math. For example, it does not include math as a human endeavor with its long and rich history. □

A careful reader wrote: ".. I did understand your "Zero Saga" and I agree wholeheartedly that dividing by zero is completely meaningless. I looked at it this way: When one divides, 13/2 for example, one is basically saying: "what number will you get if you break 13 into two equal parts?" That, of course, will be 6.5. This works for every number except for zero, which isn't exactly a number. If one divides by zero, 4/0 for example, one is saying: "what number will you get if you break 4 into zero equal parts?" That does not make sense, although someone can get a little creative as follows. If the question is asking what number you will get if you break 4 into zero equal parts, it can be said that 4/0 implies that the answer is any number of unequal numbers you wish that will add up to 4. That is because zero equal parts = any number of unequal parts."

Zero is a number, and a concept for "nothing." "What number will you get if you break 4 into zero equal parts?" The answer to this question is that it is impossible to break anything, including numbers, into zero equal parts. Right? Try to break an apple into zero equal parts! □

On this, another reader wrote: "..I thought the question that you asked, "Try to break an apple into zero equal parts!", was a perfect example of this. If you have 1 whole apple and you attempt to divide it into 0 equal parts, do you not still have 1 whole apple?"

Have you really attempted doing so? I am sure you failed, right? So do not conclude anything. □

Another careful reader wrote: "Thanks for a great page... I always looked at division by zero using probability/gambling. If you have a 1 in 10 chance of something happening then you have a 1/10, 0.1, 10% chance of it happening. I always saw this as the number of times something can happen out of the number of possible variations of what can happen. It's inconsistent and can't be defined what odds of 1/0 (1:0) are. Out of 0 events an event happening 1 time is just erroneous and invalid." □

Another reader of this page wrote to me that: "..It seems apparent that the zero paradox should be broken into two areas: mathematical and physical. Not only is there a need to define zero, but infinity as well.
For some it is not a question of whether it exists, but merely what the definite result is."

I do agree with you that one must make a clear distinction between abstract concepts and concrete concepts, as well as their useful implications in the modeling process of reality. Therefore, one must engage in investigating mathematical knowledge, especially the relation between conceptual and applied (procedural) knowledge. The distinction between these knowledge types is possible at a theoretical, epistemological and terminological level. One may classify them according to their different approaches to a given problem:

Applied knowledge: How to get from where one is to where one wants to go in a finite number of steps.
Conceptual knowledge: How to get from where one is to where one wants to go in a finite or an infinite number of steps, or a leap without any steps at all.

An example of conceptual knowledge would be:
Where one is: the natural numbers.
Where one wants to go: the end of them.
How: in an infinite number of steps.

For applied knowledge it would be:
Where one is: the natural numbers.
Where one wants to go: the end of them.
How: in a finite number of steps, depending on what calculator you are using.

As you see, conceptuality is subjective while realization is objective. Most conceptuality is metaphysical, while reality is mostly physical. Now, with respect to the last part of your comment, "what the definite result is," one must recall that being definite requires being definable. □

Another reader kindly sent me an email with the subject heading "To Infinity and Beyond..": "...... I've been brainwashed since high school to learn that 1/infinity is 0. ..If infinity really is just an abstract concept, not a physical one, then would infinity emerge when we travel at 100% the speed of light?"

What one can reach is finite, thus one can never reach infinity; therefore forget its beyond! Now, according to Einstein's theory of relativity, nothing in our universe can accelerate up to the speed of light. Only under this critical hypothesis (i.e., condition) may you apply the relativity model. Like any mathematical model of reality, this model has its own restriction(s). For example, suppose you travel at speed v; then for each second that passes for you, the clock of a stationary observer registers 1/(1 - v^2/c^2)^(1/2) seconds, where the constant c is the speed of light. Since the condition for using this model requires that v < c, you cannot choose v = c in the above function. By violating this condition, i.e., substituting v = c, you are making the denominator equal to zero, and then claiming 1/0 is infinity. Thus, my dear reader, infinity does not emerge when you travel at 100% the speed of light. According to this relativity model, you cannot travel at 100% the speed of light, unless you become a photon! It is unfortunate that you violate certain rules and then, as a result of your own action, sit and wonder why and how! You are not alone; this phenomenon has many manifestations in our lives. □

Here is a good argument from real-life practical observation: " ...Truly fascinating argument. I am also one of those students always taught that any number divided by 0 = infinity. For those that argue that infinity is a correct answer, how can they explain that calculators, typewriters, and computer keyboards do not reflect the infinity symbol?" □

Another reader wrote: "I enjoyed your discussion very much. I have comments on two issues.
I am waiting for your valuable comments. (1) But I think "infinity" is not a concept only. We can SEE "infinity" with our own eyes in broad daylight."

But infinity as a number does not exist.

"(2) Possibly, in certain cases 2/0 can be infinity. But the result is not unique -- i.e., it is not always the same. Therefore, the problem lies in the "non-uniqueness" of the result, so that it is not "consistent" (which assumes a unique mathematical value) amongst all the cases. If you interpret 20/5 = 4 to mean that if you take 5 oranges from a total of 20 oranges in your fridge you can do it 4 times, then if you take 0 oranges from 2 oranges (2/0) you can do it an infinite number of times (that is, it does not end but surely exists/continues)."

If you "take 0 oranges from 2 oranges", it means 2 - 0 = 2. Repeating this operation again and again is nonsense. Once is enough, right? Otherwise you eventually get tired of counting the repetitions; beyond that lies the infinity you have never reached. □

An engineer kindly wrote, with the heading "Taking zero as the amount of error": "...The problem is in the measurement calculation, not the outcome. Consider 2/(error), which is meaningful no matter how small the measurement error is, unless the error is zero -- that is, there is no error. Your Web site defines such a calculation, showing that some people forget that they are not dividing by zero, but dividing by an error."

You are right: if there is no error, then the act of dividing is meaningless. □

A careful reader wrote: "After reading your Zero Saga site, I must say I thoroughly enjoyed reading the part where outside individuals commented on your theories and examples. I could understand exactly why they were so adamantly arguing with you, because we have been brought up to believe you can indeed divide by zero. However, your examples disproving the division were very persuading and I found myself smiling as individuals got more and more upset as you proved their childhood lessons were indeed fallacy...I guess it shows how unwilling we can be to change our old ways of thinking...Thanks!"

You are right. Correcting our habits is much harder than learning something correctly in the first place. Habits are indeed the "gravity" of the mind. One must be aware of this very formidable and attractive force. What we see is how much more difficult it is for many of us to correct habitually wrong conceptions than it would have been to learn them correctly. What we should do, as with anything else, is rethink the concept for ourselves. We must be willing to change our way of thinking, have the courage to see things differently, and not be stuck in a gravity-of-the-mind situation. □

A frustrated reader wrote: "I did not find what I was looking for, however, and am wondering if you could help. Just as you cannot compute the square root of -1, you cannot compute 1/0, but... a number system including the square root of -1 has been developed, and that number system has made the modeling of all sorts of phenomena possible. How come the scientific community is mum on the development of a number system including 1/0? I am very curious about this development because I foresee it to be the precursor to the next big breakthrough in applied and theoretical mathematics. I cannot seem to find an inkling of research out there. I have come across it in two instances in my life. The first was my first-year calculus teacher who told the class his research was just that, and the second the infamous Tycheon.
Any information or help you could provide would be greatly appreciated."

I understand your feeling of frustration. There have been many attempts; however, unlike the established applied numerical systems such as the complex numbers, these new systems are merely "abstract" with no application. As you expressed well, within the current concrete numerical systems the act of dividing by zero is meaningless, and thus forbidden. □

The following comments are from Sri Lanka: "I came across your article on the subject of "ZERO" on the internet while trying to find out whether it is correct to place a naught in front of a single number, for example 05 or 09 instead of 5 or 9. I am from the UK, living in Sri Lanka, and feel somewhat irritated to see noughts being used in front of single numbers. It is seen here on sign boards over shops, i.e. Colombo 03, and in newspaper adverts, i.e. 03 years warrantee. I asked a student for a reason, and his reply was that it looks neater! Another said that is the way they are taught, so there is no mistake in what they are writing. But a Sri Lankan who was educated in the 50s/60s said that he was never taught to add a naught in front of a number. I do have my theory though: could it be because (some) people are looking for ways of cheating the system, and adding a naught stops a 1 being turned into a 10? Another interesting observation is the use of the word zero in Sri Lanka, which is probably a cultural thing: using the word zero rather than naught. For example, in the UK we rarely use the word zero. If we were giving out a phone number, we would usually say, my number is oh seven..., whereas here in Sri Lanka they would say, my number is zero seven... The British telephone directory enquiries service never uses the word zero, always oh." □

Another careful reader wrote: "I was just reading your article on the zero, and although I am not a mathematician, to say that anything/0 = infinity seems daft. The way I see numbers is rather simplistic. The smaller the number you divide by, the larger the number you get. An infinitesimally small number on the bottom of a fraction will result in an infinitely large result. However, that infinitesimally small number is still a number. Zero denotes nothing. If you divide something by nothing you have NOT divided it."

You are right in stating that "if you divide something by nothing you have NOT divided it." Thank you for your time and for sharing your thoughtfulness. □

Another visitor wrote: "I do appreciate your line of explanation to the extent that "division by zero is not to be attempted". Albeit, stating that any defined number divided by zero is infinity is not incorrect. You are questioning the "conventional way...". Again, how does one define convention? In real analysis, between any two consecutive points there is another point -- leading to the statement that there can be an infinite number of points... So do we say there are "undefined" points??? In my humble opinion, anything divided by zero can be said to be infinity, and zero divided by zero is indeterminate."

The main problem I have with this line of argument is the very "act of dividing by zero," which is meaningless. Therefore, it does not make sense to ask further what its result is, whether it is indeterminate or not. Mathematical conventions are created, in most cases, for the unification of our usual arithmetic operational rules. You are right in stating that "....there can be an infinite number of points...", which means there are innumerably, even uncountably, many points within this non-empty dense set.
It does not say exactly how many. Infinity is a concept, not any specific number; therefore one cannot perform any kind of arithmetic operation on a concept, nor include it in any arithmetic expression. You are certainly entitled to keep "...anything divided by zero can be said to be infinity" as your opinion (and I respect it), but not to present it as a fact, forcing others to agree with your opinion or belief. You may like to visit the Web site How to distinguish among Rumor, Belief, Opinion, and Fact. □

A cheerful colleague from Australia wrote: "I think from now on, the answer should be: x/0 = http://home.ubalt.edu/ntsbarsh/zero/ZERO.HTM Really enjoyed your article and comments."

Thank you for your time and so much for your kindness to me.

What About Taking the Limit?

Viewing this issue from the perspective of limits, consider f(x) = 2/x. For lim (2/x) as x approaches zero (x not equal to zero), neither the left nor the right limit exists. In other words, if one divides 2 by a very small positive number close to zero, the result is a very large positive number, while dividing 2 by a very small negative number close to zero produces a negative number that is very large in absolute value. Since the two results are not equal, the limit does not exist; nor does the limit from either side exist, as shown in the following graphical representation of f(x) = 2/x.

Figure: A Graphical Representation of f(x) = 2/x

One of my readers kindly wrote to me that: "...you wrote, correctly, that the limit of 2/x when x approaches zero does not exist, since it approaches different results from the negative and positive sides. Well, how about lim (2/x^2) when x approaches zero? .."

As I pointed out earlier, lim (2/x) does not exist, partly because the right (and the left) limits do not exist. Similarly, lim (2/x^2) does not exist, because it grows beyond any specified positive number without ever reaching a limiting value.

The same reader wrote back to me that: "Isn't that the definition of infinity? The same idea that you attack in your paper?"

Infinity is a concept, not a number. Do you understand the difference? As I stated before, conceptuality is subjective while realization is objective. Most conceptuality is metaphysical, while reality is mostly physical. It seems you believe infinity is a number, and also that you can reach the limit, and are thus missing an important distinction. Approaching is different from reaching. Whatever you reach is not a limit. Therefore, you can take a limit; however, you cannot "take it to the limit," as the lover wishes in a popular love song. If you could, then it would not be a limit; it would merely be a numerical function evaluation.

Chris, a high school student, sent me the first-quadrant graph of the function 1/x, with the following comments: "..we have just finished my GCSE exams, and we are taught that when x=0, the y value "jumps" off to infinity and the line is very nearly vertical. What do you think about this?"

Congratulations, Chris, for having such an analytical mind and a strong desire to learn more. The paraphrases from your teacher, such as "jumps off to infinity" and "the line (curve?) is very nearly vertical," are not exact mathematical statements; they cannot be made precise. Now let us see where the problem is. The function y = 1/x is defined everywhere except at zero. Notice that your graph misses the other part of the function, which is in the third quadrant, symmetric (to the part you already have) around the origin, for negative x values. Right, Chris? As you see, this function is not continuous.
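To make the two-sided behavior concrete, here is a minimal numerical sketch in plain Python (the sample points are an arbitrary choice of mine). It only samples near zero; it never evaluates at x = 0:

```python
# Sample f(x) = 2/x on both sides of zero, without ever touching x = 0.
def f(x):
    return 2 / x

for x in (0.1, 0.001, 0.00001):
    print(f"f({x}) = {f(x):,.1f}    f({-x}) = {f(-x):,.1f}")

# Output:
# f(0.1) = 20.0          f(-0.1) = -20.0
# f(0.001) = 2,000.0     f(-0.001) = -2,000.0
# f(1e-05) = 200,000.0   f(-1e-05) = -200,000.0
```

The one-sided values disagree ever more violently, so no limit exists; and Python itself raises a ZeroDivisionError if you ask for f(0), which is the programming world's way of saying the operation is not to be attempted.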
In other words, to graph y = 1/x you have to lift your hand from the graph paper to draw the other part. This is a good reason why this function is not defined at the point x = 0, not even in the sense of limits.

Educating the Educators

Unfortunately, I find that the act of dividing by zero is not at all an uncommon practice. Many references in applied mathematics can be found committing this and other errors. And if educators profess division by zero as an appropriate mathematical practice, they should not be surprised to see this error persist among their students, just as the teachers themselves learned this practice from their own teachers. You might think, as one of my readers from Eastern Europe believed, that "... the Anglo-Saxon culture does not have a way with numbers." While respecting this opinion, I have unfortunately found that this error is not limited to any particular culture. In fact, it is a problem often initiated by our educators worldwide. For example, in the textbook for Educacion Mathematica by Gracia, et al. [1989, page 138], which is widely used in Spanish-speaking Schools of Education, you will find the function y = 1/(X^2 - 1), evaluated at X = -1, given as 952380952. Where did this number come from? The right question one might ask is: who educates our educators?

Ball [1990] interviewed 10 elementary and 9 secondary teachers, asking, "Suppose that a student asks you what 7 divided by 0 is. How would you respond? Why is that what you'd say?" What she found was that 1 of the 10 elementary teacher candidates could explain using the meaning of the terms, 2 gave the correct rule, 5 gave an incorrect rule, and 2 didn't know. 4 of the secondary candidates could explain using the meaning of the terms, and 5 only gave the correct rule, e.g., "You can't divide by zero . . . It's just something to remember," but gave no further justification when probed. Some of the teachers who only gave the correct rule were math majors.

Klinger, the author of a book titled "Mathematics for Everyone", which has been translated into almost all the European languages and read by most old-timer educators, wrote: "Division by zero is a more delicate matter, even though it is openly stated that the result is "infinity". We do not wish to conduct a philosophical discussion on "infinity" and shall confine ourselves to saying that, if we make the divisor a smaller and smaller decimal value, the quotient will become infinitely great. Thus, if we divide 1 by 1/100 we get 100, and if we divide 1 by one millionth, we get one million." pp. 3-4.

In the same book, we read the following about zero as a power: "The power zero has a quite distinctive property of its own. (Yes, we mean zero. Why not?) Let us apply the division of 2^2 by 2^2: 2^2 / 2^2 = 2^(2-2) = 2^0. Now, the division of any quantity whatsoever by itself (for example, the division of 4 by 4) always yields 1, so we may state the following important rule: Any value (arithmetic or algebraic) taken to the power zero is equal to 1: 2^0 = 1, 100^0 = 1, a^0 = 1, e^0 = 1, and so on." pp. 50-51.

Clearly this must not be taken as a proof that "any value ... taken to the power zero is equal to 1". Moreover, what about 0^0? Is it one too? When did 0^0 come about? What is its value? One may ask when and how 0^0 became equal to 1. Since the right-side limit of X^X as X approaches zero from the positive side is one, but its left-side limit does not exist, one concludes that 0^0 is undefined. It seems that Euler was the first to argue for 0^0 = 1.
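A quick numerical look at that right-side limit, as a minimal sketch in plain Python (the sample points are arbitrary):

```python
# Sample x**x as x approaches zero from the positive side.
for x in (0.1, 0.01, 0.001, 0.0001):
    print(f"{x} ** {x} = {x ** x:.6f}")

# Output:
# 0.1 ** 0.1 = 0.794328
# 0.01 ** 0.01 = 0.954993
# 0.001 ** 0.001 = 0.993116
# 0.0001 ** 0.0001 = 0.999079
```

The values creep toward 1, which is the sense in which some contexts adopt 0^0 = 1; for negative x, however, x^x is generally not even a real number (Python 3 returns a complex value), so the left-side limit fails, just as stated above.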
Newton was the first to use positive, negative, integer, and fractional exponents. However, there are other people who think only in terms of integers, and some of them think 0^0 = 1 is a good idea. Nevertheless, since X^(-1) = 1/X, it follows that X^(-1) · X = X^(-1+1) = X^0 = 1; however, since X^(-1) = 1/X is correct for all X except zero, we get X^0 = 1, except for X = 0. Clearly, 0^0 is one of those expressions that do not have a definitive meaning but can be given a "contextual" one. Therefore, the value of 0^0 depends on the context where it occurs; you might wish to substitute it with 1, indeterminate, or undefined. The context in which 0^0 is taken to be 1 is, e.g., the binomial expansion of (X + Y)^n, which is to remain valid even for X = -Y and for all non-negative integers n, including n = 0. This in turn maintains the beauty of the Pascal triangle numbers. It is too important to be arbitrarily restricted! By contrast, the function 0^n is quite unimportant.

In most Elementary Education programs for prospective teachers, such as the one at Towson State University in Maryland, students are required to take four math courses -- Concepts of Mathematics I and II, plus Teaching Mathematics in the Elementary School -- together with a supervised math-teaching experience session. While the standard is high, the main question remains: who educates our educators? Adding to this, and doubling the existing difficulties for the teachers, the school systems hiring a teacher seem to be more concerned about "how would he/she handle violence in the classroom?" Unfortunately, it is a miserable story to tell.

There must be a conviction that mathematics teachers and researchers in mathematics education have much to learn from each other, especially at a time when the school and adult curricula are converging. Based on my experience, I offer the following three distinct headings:

□ Recruitment: What can be done to encourage reluctant would-be mathematics teachers to take the plunge?
□ Retention: What support do they need to enable them to become sufficiently competent, confident and comfortable with mathematics so that they can teach it to others?
□ Re-training: What is it like teaching mathematics without a strong background in mathematics?

Unfortunately, mathematics has been fundamentally depersonalized into "something machines do," and the meaningful response is that we need always to emphasize that mathematics has little value divorced from imagination. Machines will always do 'imaginationless' mathematics better than humans. But the "mathematics-imagination meld" is needed by society, and it can become a fascinating subject for most children in the classroom. Too many pupils now think that mathematics is boring. Mathematics can and must be made more fun, more relevant, and more challenging, for pupils and for teachers. The use of Internet interactive technology in the classroom can add a new and precious variety. This variety can help to engage and hold pupils' attention, and can raise the chances that the lesson will be judged a success. The new interactive technology can also help to attract and retain teachers by making the whole process more business-like, more efficient and more effective. However, the provision of appropriate hardware, software and training remains an expensive and intractable hindrance to progress.

There is a "math" video series [Harlan Meyer, Diamond Entertainment, 1996]. One segment is called Addition; then come Subtraction, Multiplication and, of course, Division.
The division segment of the series starts by misspelling the word quotient. Then the "star" of the video shows how to divide by using repeated subtraction; however, she asks, "If I have 12 doggy bones and I take away 4 groups of 3 bones, how many will I have left?" She answers herself, "Right, four." But it was the "trick" she claimed for dividing by zero -- "Zero is nothing. So just remember: nothing INTO something is nothing." -- that was worst of all. Unfortunately, there are many instances like this which send one's blood pressure through the roof. Teaching kids to count is fine, but teaching them what counts is best.

Figure: Students' Exposure to Zero

One may view "division" as a subtraction operation. When you write 20/5 = 4, what you are really asking is: how many times can you subtract 5 from 20? The answer is 4 times. That is why division is the "inverse" operation of multiplication, which is repeated addition. That is, 5 x 4 = 20 means: add 5 to itself 4 times and you will get 20. So dividing by 0 has no meaning, because the question "how many times can you subtract nothing from something?" itself makes no sense. The act of dividing by zero is meaningless; therefore, it does not make sense to ask further what its result is, whether it is indeterminate or not. (A short computational sketch of this repeated-subtraction view appears near the end of this page.)

Zero is an important concept, so time should be spent establishing that from an early age one has some understanding of zero; zero, nought, nothing -- as ever, the language should be varied. In the absence of a concept of zero there could have been only positive numerals in computation; the inclusion of zero in mathematics opened up a new dimension of negative numerals. Zero, when used as a counting number (such as "zero defects"), means that no such objects are present. A concept and symbol that connotes nullity represents a qualitative advancement of the human capacity for abstraction. As always, concepts are only real in their correct context.

Act of Dividing by Zero Is a Meaningless Operation: Forget Its Result

Another author wrote that perhaps a / 0 = a, because if he should divide a units of apple pies among 0 people, he would be left with the entire pie. Unfortunately, his analysis also leaves him with pie in his face, since his analysis assumes the result of a division is a distribution transaction; but as he is not making any transaction, the result is meaningless. For example, if one distributes a pie to 2 people, each would get 1/2. In other words, the distribution takes place across an equal sign, among the number specified in the denominator on the left-hand side. Since he specifies zero people in the distribution, the transaction does not take place. The result of interest is on the right-hand side. It is certainly true that if you do not distribute the pie, you retain it; but when you use an equal sign, you do imply the result of some real transaction on the other side. Similarly, a / 0 is also meaningless as being the amount of pie the zero people received. This is certainly true for apple pies. The results may differ with other varieties of pies, not all of which have been reviewed!

Once you understand what division is, then there should be little difficulty in understanding why division by zero is not allowed. A visitor of this site wrote that: "... but I think it's simply because the division operation is defined in that way. There could be many interpretations for this definition, but there is no reason.
As you demonstrated many times, allowing division by zero causes contradiction, thereby making that mathematical system useless. On the other hand, prohibiting division by zero has not yet been known to cause any contradiction. If anything, that's the only reason why we define division as it currently is."

Division by zero does cause contradiction. That is why we cannot divide 2 apples among zero people. It is meaningless, and has no other "interpretations". The act of "distributing" apples cannot be performed. However, adding zero apples to 2 apples, we get 2 apples. Notice that in addition and subtraction operations we must have the same "dimensions", i.e., we cannot add apples to oranges. In the division operation, however, this is not a necessary condition, as in speed, expressed, e.g., as kilometers/hour. The result might even have a hybrid dimensionality, like momentum in physics.

A software engineer kindly wrote to me that: "... for 2/0 = ∞, I think you get all fired up about not much. Of course, if there is any implied assertion that ∞ is a number, then one gets into all sorts of contradictions that you describe. But I don't think anyone in his right mind considers this. ....the textbook author simply uses the infinity symbol ∞ as a conventional way to denote something that doesn't exist -- something that's impossible (impossible for exactly the reasons you state)."

My concern here is not whether 2/0 = ∞ is true or not. It could have been a hundred times worse than this and I would not lift a finger against it. What I am combating is the act of dividing by zero in the first place, the carelessness of our educators, and their unwillingness to recognize that they are misleading students. Adding to these, by way of doubling our difficulties, it is now claimed that ∞ is "a conventional way to denote something that doesn't exist." There is no such conventional usage for ∞. Do you know of any? Whenever mathematics is distorted and sensationalized, or even pseudo-mathematics is used uncritically, a disservice is done to the public understanding of mathematical fact. What I am attempting to signify here is nothing more than this: in applied mathematics, dividing by zero is a meaningless operation.

Origin of the Common Fallacy: Dividing by Zero

The Babylonians and Chinese did not have a symbol for zero. The word zero comes from the Arabic "al-sifer". It was introduced to Europe during the Italian renaissance in the 12th century by Leonardo Fibonacci (and by Nemorarius, a less well-known mathematician) as "cifra", from which we have obtained our present word "cipher", meaning empty space. "Sifer" in turn is a translation of the Hindi word "sunya", meaning void or empty. In Hindi, "shunya" means zero. The terms aught, naught, and cipher are older names in English for the zero symbol. It may also make you wonder that the word "cifra" in Russian means "written number." Similarly, "Ziffer" in German means one single written number; it is used in contrast to a single letter. Zero in German is called "Null". The French use "zero" as in English; however, "chiffre" is the equivalent of the German "Ziffer", a single-digit number (an integer from 0 to 9). Number(s) are called "nombre(s)". The equivalent of "null" is also "nul" in French: "le résultat est nul" translates to "the result is null".

The ancient Egyptians never used a zero symbol in writing their numerals; there was no function for one in their way of writing numerals.
The two applications of the zero concept used by ancient Egyptian scribes were: 1) as a zero reference point for a system of integers used on construction guidelines, and 2) as a value that resulted from subtracting a number from an equal number. It is quite extraordinary that neither the Egyptians nor the Greeks were able to create a symbol to represent zero, or nothingness. The conceptual difficulty may have been that zero is something that must be there in order to say that nothing is there. The Hindu-Arabic numerals were not used for written calculations in the West before the twelfth century, when Arabic texts were translated into Latin. The Babylonians also used a zero, at approximately the same time as the Egyptians, before 1500 BC. Certainly, zero's application in our base-10 decimal system was a step forward, as the logarithms of Napier and others brought into use.

The origin of the fallacy that any number divided by zero equals infinity goes back to the work of Bhāskara, a Hindu mathematician, who wrote in the 12th century that "3/0 = ∞, this fraction, of which the denominator is cipher, is termed an infinite quantity". He made this false claim in connection with an attempt to correct the wrong assertion made earlier by Brahmagupta of India that A / 0 = 0. Notice that by this fallacy one tries to define "infinity" in terms of zero. Unfortunately, similar practices seem to prevail to the present day. A similar fallacy exists for the logarithm of zero, which is believed by many to be (negative) infinity.

An author who still advocates that 1/0 = ∞ also writes: "... (b^n - 1)/(b - 1), which is the formula for the sum of a geometric sequence, has the equivalent form b^(n-1) + ... + b^2 + b + 1. If you substitute b = 1 in the latter expression, the sum is n. Now consider what happens with the first form of the expression when b = 1 is substituted: (b^n - 1)/(b - 1) = (1^n - 1)/(1 - 1) = n. This means that 0/0 = n. Similar proofs exist which show, for example, that 0 = 1."

Remember not to divide by zero. In fact, (b^n - 1)/(b - 1) = b^(n-1) + ... + b^2 + b + 1 is correct if and only if b is not 1 and n is a positive integer.

A reader from Canada kindly sent me an email with the subject heading "Two-dollar bills don't exist": "Saw your website about division by zero! Excellent! Here's a little bit of tongue-in-cheek math! Enjoy!

How many dimes are in a quarter? 25/10 = 2.5. Ans: 2 dimes and a nickel remaining.
How many two-dollar bills are in a five-dollar bill? 5/2 = 2.5. Ans: 2 two-dollar bills and a one-dollar bill remaining. But wait: there is no such thing as a 2-dollar bill.
How many two-dollar bills are in a five-dollar bill? 5/2 = 0, because two-dollar bills don't exist!"

In Canada, you are right on the money! But not on the 2-dollar US bills: in the US we do not have such a dilemma!

The Need for Numbers

Counting is as old as prehistoric man; after he learned to count, man invented words for numbers and, later still, symbolic numerals. The numeral system we use today originated with the Hindus. It was devised to go with the 10-based, or "decimal," method of counting, so named after the Latin word decima, meaning tenth, or tithe. The first popularizer of this notation was a Muslim mathematician, Al-Khwarizmi, in the 9th century; however, it took the new numbers about two centuries to reach Spain and then England, in a book called Craft of Nombrynge.
Mathematics is a human endeavor which has spanned over four thousand years; it is part of our cultural heritage; it is a very useful, beautiful and prosperous subject. Mathematics is one of the oldest of sciences; it is also one of the most active, for its strength is the vigor of perpetual youth. Mathematics is also our native language. Numbers are a cultural phenomenon: humans invented them to quantify the external world around them. The external world is qualitative in its nature; however, humans can understand, compare and manipulate only numbers. Therefore, we use measurable and numerical scales to quantify the world. This enables us to understand the world by, for example, finding relationships, manipulating, comparing, calculating, etc.; that is, to make an analytical structured model of the external world. Then we use the same scale to qualify it back to the world. If you cannot measure it, you cannot manage it. This is the essence of human understanding and of the decision-making process.

Figure: The Need for a Measurable and Numerical Scaling System

The Origin of Algebra: The Vedic mathematics that is enjoying a vogue in mathematics pedagogy these days is the pre-modern Indian mathematics preserved in Sanskrit; in this category there are a number of mathematical methods, many of which were adopted by Muslim recipients and became the basis of our algebra in the 13th century. The early use of the Arabic word "al-jabr" was in the sense of "the resetting of bones," and this term was adopted by the early Muslim algebraists as an analogy for what we today call "combining like terms." Again, the word "algebra" comes from a phrase ("al gabr") in the title of an Arabic book, "Kitab al muhtasar fi hisab al gabr w'al muqubalah." This has been translated as "A compact introduction (book) to calculation using rules of completion and reduction," but some have suggested "al gabr" comes from the Babylonian "gabru", meaning solution of an equation, and that "muqubalah" (q reads like k) was its equivalent in Arabic. The book covered simple equations like the one in the preceding section, also quadratic ones involving x^2, as well as other areas such as geometry and the division of inheritances. Its author, Mukhammad ibn Musa Al-Khorezmi (who lived about 780-850), was the chief mathematician in the "House of Wisdom", an academy of sciences established in Baghdad by the Caliph Al Ma'mun, son of Harun Al Rashid of "Arabian Nights" fame. The "House of Wisdom" was involved in Al Ma'mun's expedition to measure the size of the Earth, whose circumference Al-Khorezmi afterwards estimated at 21,000 Arab miles. We are not sure how big the Arab mile was; the actual figure is about 25,000 of our miles. Al-Khorezmi came from the oasis of Khorazem, to the northeast of Persia. He is also credited with helping establish among the Arabs the Indian numbering system, using decimal notation and the zero. Previous systems of writing numbers used letters, like the Roman numeral system or the cruder ones of the Greeks and Hebrews. When Al-Khorezmi's book on the new system reached Europe, the Europeans called its use "algorism" or "algorithm," a corruption of the author's name.
Today "algorithm" means method of calculation, and the rise of computers has led to extensive work on developing efficient computer algorithms The word algebra originated from the title of the book "ilm al-jabr w'almuqabala", written during the 9^th century by a Persian Muslim mathematician named al-Khworizimi who wrote in Arabic which was the language of scientific era of the Islamic world. Therefore, the word Algebra derived from the Arabic word Al-jabr meaning combining the like terms. Finally I would like to quote Johann Carl Friedrich Gauss saying "Mathematics is the queen of the sciences and number theory is the queen of mathematics." Visit also the following Web sites: Arabic Numeral System Babylonian Numerals Egyptian Numerals Greek Number Systems Incas Numerals Indian Numerals Mayan Numerals Roman Numerals The Two Notions of Zero The notion of zero was introduced to Europe in the Middle Ages by Leonardo Fibonacci who translated from Arabic the work of the Persian (from Usbekestan province) scholar Abu Ja'far Muhammad ibn (al)-Khwarizmi (the word "algorithm," Medieval Latin 'algorismus', is a contamination of his name and the Greek word arithmos, meaning "number,: has come to represent any iterative, step-by-step procedure) who in turn documented (in Arabic, in the 7^th century) the original work of the Hindu mathematician Ma-hà ¡và ­ral as a superior mathematical construction compared with the then prevalent Roman numerals which do not contain the concept of zero. When these scholarly treatises were being translated by European accountants, they translated 1, 2, 3, ....; upon reaching zero, they pronounced, "empty", Nothing! The scribe asked what to write and was instructed to draw an empty hole, thus introducing the present notation for zero. Hindu and early Muslim mathematicians were using a heavy dot to mark zero's place in calculations. Perhaps we would not be tempted to divide by zero if we also express the zero as a dot rather that the 0 character. You might ask then how did the Romans do calculations with their numerals notations? Romans typically relied on the Chinese abacus, their version of our modern calculator, visit, e.g., Ancient Civilizations Web site. By using pebbles as counters, there was no need to use Roman numerals. People known as "calculatores" (after "calcule", Latin for "pebbles"), did the math used to tally totals in addition, subtraction, division and multiplication. For us using Roman numerals system to perform arithmetic operations such as division, or multiplication are very difficult if not impossible. In modern days they are used for decorative purposes only. Zero as a concept, was derived, perhaps from the concept of a void. The concept of void existed in Hindu philosophy and the Buddhist concept of Nirvana, that is: attaining salvation by merging into the void of eternity. Ma-hà ¡và ­ral (born, around 850 BC) was a Hindu mathematician, unfortunately, not much is known about him. As pointed out by George Wilhelm Friedrich Hegel, "India, such a vast country, has no documented history." In the West, the concept of void and nothingness appeared first in the works of Arthur Schopenhauer during the 19^th century, although zero as a number has been adapted much earlier. The Arabic writing mathematicians not only developed decimal notation, they also gave irrational numbers, such as square root of 2, equal rights in the realm of Number. And they developed the language, though not yet the notation, of algebra. 
One of the influential persons in both areas was Omar Khayyam, known in the West more as a poet. I consider that an important point; too many people still believe that mathematicians have to be dry and uninteresting. Initially, there was some resistance to accepting this significant modification to the time-honored Roman numerical notation, in particular from the privileged, job-secured Roman numerical calculation experts: the tax gatherers.

Figure: Roman Numerical Calculation Experts: The Tax Gatherers

Among the trite objections to leaving Roman numerals for the new notation was the difficulty in distinguishing between the numerals 1 and 7. The solution, still employed in Europe, was to use a cross-hatch to distinguish the numeral 7. The introduction of the new system indisputably marked the democratization of mathematical computation by its simplicity and lack of mystery. Up to then the "abacus" was the champion. The abacus was a favorite tool of the few, and was praised by Socrates. The Greeks' emphasis on geometry (i.e., measuring the land for agricultural purposes, the earth, thus the world's geography) kept them from perfecting a number notation system. They simply had no use for zero. The Greeks were not much interested in arithmetic; believing in the inherited nobility of a few, they had the adage "that arithmetic should be taught in democracies, for it teaches relations of equality, but that geometry alone should be reserved for oligarchies, as it demonstrates the proportions within inequality." To ask: is the sum of the parts greater than, less than, or equal to the sum of the whole?

One thought would be to eliminate zero! As in the "Reverse Polish Notation", which eliminates the need for parentheses. Sacrilegious as it may sound on first impression, the notation of zero is at heart nothing more than a directional separator, as in the case of a thermometer. It is, in actuality, "not there." For example, in order to express the number 206, a symbol is needed to show that there are no tens. The digit 0 serves this purpose. Zero became a part of the natural number system in the last century, when Giuseppe Peano put it in the first of his five axioms for his number theory.

One may think of an analogy: zero is similar to the "color" black, which is not a color at all; it is the absence of color, while sunlight contains all the colors. Zero is the only digit which cannot stand alone. It is a lonely number, lonelier than one. It requires some sort of companionship to give meaning to its life. It can go on the left. On the right. Or both ways! Or in the middle as part of a threesome. Witness "01", "10", or "102". Even "1000". A relationship with other numbers gives it meaning (i.e., it is a dependent number). By itself it is nothing! When we write 10, we mean 1 ten and 0 ones. In some number systems, it would be redundant to mention the 0 ones, because zero means there are no objects there. Place value uses relative positions. So an understanding of the role of 0 as marking that a particular 'place' is empty is essential, as is its role of maintaining the 'place' of the other digits. The usage of zero here is more qualitative than quantitative; therefore, it is called an operational zero.

Recently, a visitor of this site kindly wrote to me that: "I have a basic question about the importance of zero 0 in mathematics. In mathematics zero has 2 different roles to play: 1. to provide a symbol for the empty set; 2. to serve as a place-holder symbol in the positional number system ..
in the articles that I read it was said that zero is important due to both of its aspects. Whereas I can say by common sense that zero is only useful in the first aspect, i.e., to act as a symbol for empty. For the second aspect there is no need of zero. To illustrate this, let us suppose that in our decimal system we don't have zero; so it becomes a number system of base 9 (but without zero), because there are only 9 unique symbols to represent any number. They are 1, 2, 3, 4, 5, 6, 7, 8, and 9. Please note that this is a base-9 system but without zero. This is important for my discussion. Here by base 9 I mean from 1 to 9 and not from 0 to 8. Here we don't have zero, so we can't represent the empty set (or emptiness), but that is OK, as I am only concerned with showing that the 2nd aspect of zero is not important. For example, if we want to represent 104 (base 10) in this base-9 number system, then it is 125 (base 9). Thus any number of any base can be converted to its equivalent symbol (number) in some different base which doesn't provide zero. So what have we lost here? As far as positional number system representation is concerned, we can also do without zero."

You are right in that one can express any number in a different base system that excludes zero. However, performing any arithmetic operations becomes very tedious, as is evident, for example, with the Roman numerical system. That is the main historical reason for the success of the decimal numerical system: it initiated scientific discoveries, and provided faster, easier, and more accurate commercial transactions for everyone. □

Another reader wrote: "..I am writing a paper for symbolic logic on zero. It seems to me it is 'nothing' in addition/subtraction, but if it is nothing then how can it affect numbers in multiplication? Also, as to your comment on 2/0 being meaningless: I am wondering what the answer should be, if it can be more clearly defined, and why."

Here, my dear reader has mixed up the two distinct notions of zero: zero as a number being used in our numerical systems, AND zero as a concept for "nothing." As a result of this mix-up, he is "wondering" at his own mental creature. We used to think that if we know one, we know the other. We are finding out that we must learn a great deal more about "AND."

The introduction of zero into the decimal system was the most significant achievement in the development of a number system, in which calculation with large numbers became feasible. Without the notion of zero, modeling in astronomy, physics, and chemistry would have been unthinkable. The lack of such a symbol is one of the serious drawbacks of the Roman numeral system, besides its being difficult to use in any arithmetic operation, such as multiplication. Visit also the Web site: A History of Zero.

Is Zero Either Positive or Negative?

In many languages you come across expressions which refer to "red numbers" and "black numbers" to denote negative and positive ones. For example, in ancient China the two colors were used in the arithmetic meaning, but in the opposite way, on their counting rods. They were associated with Yin and Yang, the principal forces of Tao cosmology. The use of colors elsewhere was simply a convention by accountants: red ink to indicate losses, black ink for profits.

Figure: Adding Zero "as-if" Adding One More Zero

Natural numbers are the positive integer numbers: one horse, two trees, etc. However, the arrival of zero caused the inevitable rise of the even more nefarious numbers: the negative numbers.
What about negative numbers? The negative sign is an extension of the number system used to indicate directionality. Zero must be distinguished from nothing. Zero belongs to the set of integers. Zero is neither positive nor negative, but psychologically it is negative. The concept of zero represents "something" that is "not there," while zero as a number represents the lowest of all non-negative numbers. For example, if a person has no account in a bank, his/her account is nothing (not there). If he/she has an account, he/she may have an account balance of zero.

One of my readers kindly wrote to me that: "...In high school algebra books they like to teach about numbers. You know, whole numbers, natural numbers, rational numbers, irrational numbers, and integers, to name a few. The problem that I often run across is where the zero fits in. For instance, 'a positive integer': does this include zero? We know that whole numbers include 0, but is it a positive whole number? Can you clarify some of this for me? Why or why not are they included or excluded? I really wish they would put a little more information into these books; as your web site shows, we need it. So I want to thank you for bringing things into the light.."

You are right: unfortunately, some algebra books are confusing in categorizing zero within our numerical systems. However, the accepted and widely used category that includes zero along with the positive integers is the "non-negative integers," while the terminology "positive integers" excludes zero. Similarly, for the real numbers involving zero, the following four categories are used: "positive", "negative", "non-negative" and "non-positive". The last two categories include zero, while the first two exclude it. Therefore, as you see, the first two sets are subsets of the last two, respectively. □

Another visitor kindly wrote to me that: "I hope you can enlighten me on this one. I've been teaching math for four years, but it is only this year that I encountered this problem. I always thought and believed that zero is neither positive nor negative. It's only when we used the book International Student (7th Ed., by Lial, Hornsby, and Miller, Addison Wesley), where they presented the inverse property of addition, a + (-a) = 0, that they wrote these:

Number    Additive Inverse
6         -6
-4        -(-4) or 4
2/3       -2/3
0         -0 or 0

(Note: found on page 6.) This is rather confusing to me and to my students, because I told them that zero is neither positive nor negative; then why did these authors attach a negative sign to zero? I looked at other books and I found another one, Modern Algebra and Trigonometry (3rd Ed., by Elbridge Vance), where, when he presented the Existence of Additive Inverses (axiom 6A), in one of his statements he wrote: 0 = -0. Can you please help me on this one? What could probably be their reason for writing these? These actually confused my students, and even me myself, in the process. Thank you very much. I hope you will reply to these questions of mine."

I agree with you that it is confusing. It is also a difficult and uncomfortable situation when you, as a knowledgeable teacher, want to correct the textbook while your students take the textbook as the ultimate authority, as if it were a Bible. You might like to remind them that the purpose of education is critical thinking for oneself. You are right also in that the additive inverse of any number is a unique number. Therefore, the additive inverse of 0 cannot be "-0 or 0".
(Thank goodness they did not include double zeroes, -00 and 00, etc.) Moreover, the additive inverse of zero is zero itself. This property also characterizes zero (i.e., no other number has such a nice property). Furthermore, zero is the null element for addition. Any operation has a unique null element, and the inverse of the null element for any operation is itself. For example, the null element for both the multiplication and division operations is 1.

Is Zero an Even or Odd Number?

If one defines evenness or oddness on the integers (either positive or all), then zero seems to be taken to be even; if one defines evenness and oddness only on the natural numbers, then zero seems to be neither. This dilemma is caused by the fact that the concepts of evenness and oddness predate zero and the negative integers. The problem posed by this question is whether zero really is a number, not whether it is even or odd. Most modern textbooks apply concepts such as "even" only to "natural numbers," in connection with primes and factoring. By "natural numbers" they mean positive integers, not including zero. Those who work in the foundations of mathematics, though, consider zero a natural number, and for them the integers are the whole numbers. From that point of view, the question whether zero is even just does not arise, except by extension. One may say that zero is neither even nor odd, because you can pick an even number and divide it into groups: take, e.g., 2, which can be divided into two groups of "1", and 4, which can be divided into two groups of "2". But can you so divide zero? That is why there are so many "questions."

If you feel that the question whether zero is an even number is of no practical value at all, let me quote the following news item from the German television news program (ZDF) "Heute" on Oct. 1: Smog alarm in Paris: only cars with an odd terminating number on the license plate are admitted for driving; cars with an even terminating digit are not allowed to be driven. There were problems: is the terminating number 0 an even number? Drivers with such numbers were not fined, because the police did not know the answer. A similar phenomenon occurred recently (November 2012) in New York City, when the governor decided to allow cars with an even number or zero at the end of their number plates to fill up at gas stations on even days after the strong hurricane Sandy. It came to the attention of a BBC reporter.

A visitor of this site kindly wrote to me that: "Is zero odd or even? I suggest a convention, i.e. a useful unproved mechanism which makes me feel better, that zero is indeed Even! I offer two arguments:

A1: "Odd" numbers are spaced two apart. So are "even" numbers. Proceeding downward, 8, 6, 4, 2, 0, -2, -4 .. should all be considered Even, while the odd numbers 9, 7, 5, 3, 1, -1, -3 ... skip over zero in a most stubborn manner.

A2: Let two softball teams play a game, with each player betting one dollar a run to the opposing team. Further presume that no runs are scored (due to beer consumption) and no extra innings are allowed because it got dark. The final score is zero to zero. If a player is asked by his wife whether he won or lost, he would probably indicate that he "broke even"."

As the old math teacher said: "Proof? Why, any fool can see that." These issues make themselves strongly felt in the classroom and in the textbook, in the frequent mishandling of the notion of zero by novice and professional alike, and therefore recommend themselves to our attention. These are among the many issues of how to teach these concepts, say, to kids.
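For what it is worth, the standard textbook definition (an integer n is even when n = 2k for some integer k) settles the license-plate dilemma at once, and zero qualifies with k = 0. A short check, shown here in plain Python purely as an illustration of the convention:

```python
# An integer n is even when it can be written as n = 2*k for some integer k;
# zero qualifies with k = 0, and the remainder test agrees.
for n in (-4, -2, 0, 2, 4, 5):
    print(n, "is even" if n % 2 == 0 else "is odd")

# Output: each of -4, -2, 0, 2, 4 "is even"; 5 "is odd".
```

This matches the visitor's argument A1: the even numbers ..., -4, -2, 0, 2, 4, ... are spaced two apart, and zero sits squarely among them.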
Zero is "not there" Judging from the treatment accorded to the concept of zero, we do practice a variety of avoidance mechanisms rather than confront the imagery associated with this seemingly difficult In reciting one's telephone number, social security number, postal zip code or post office box, room number, street number or any of a variety of other numeric nominals, we carefully avoid pronouncing the digit "zero" and instead substitute "oh." One may say "it is caused by our desire to communicate quickly, if we can say the same thing in one syllable, why not?" What about number seven, should we find a substitute for this too? In some parts of the world, the phrasing "naught" and "aught" are used but it is quite uncommon to hear "zero." All the other digits are correctly enunciated with this one curious exception. However, in the US Army there is an additional curious habit of saying "duece" instead of "two". For example, the M102 105mm Artillery Cannon is called a "One oh Duece" (notice the "oh" therein). Is the presence of nothing (reflecting non-existence) different from the absence of something (reflecting non-availability) or the absence of anything (reflecting non-existence)? Zero is a symbol for "not there" which is different from "nothing" "Not there" reflects that the number or item(s) exists but they are not just available. "Nothing" reflects nonexistence. There is also "the Zero Factor about the US Presidents" known as the Zero Factor and Tecumseh's Curse which is the curse of Indian chief Tecumseh which has Killed every U.S. President before the end of their term in office, if they were elected in a year that ended with 0. The first victim of the curse was William Henry Harrison, whose troops killed the Indian chief in 1813 (the zero factor has one exception, i.e., Ronald Reagan who was elected in 1980). Zero not only has the quality of being nothing, it is also a noun, verb, adverb, and an adjective as in "zero possibility". "We zeroed in on the cause," means we had isolated all the possibilities, and have discovered the one remaining. In this use as a verb, zero equals one. However, "The result was a big, fat, zero," uses the noun to express the idea of results of "nothing". Here, zero has the quality of not being there. Zero as an action appears in the Conservative Laws of physics. The term "zeroing in on (whatever)" might have originated also with the military. The "zero" in this term might refer to the distance from the last bomb dropped or the last shell fired to some target. The aim is always to try in reducing this distance to zero. On a roulette wheel, there is the number Zero which is neither Red nor Black. Zero is the GREEN number, for all the cash the house rakes in when it comes up. It is considered neither Even nor Odd. Is zero a number? Consider the following scene: Ernie: I've put a number of cookies in that Jar. You can have them if you give me your teddy. Bert: Great While Ernie hands over the teddy and looks eagerly in the jar, said: Bert "Wait a Minute There's No Cookies Here. You Said You Put a Number of Cookies in There" Ernie: That's right, zero is a number. It is not uncommon these days to hear, on dating scene, the phrase "get rid of the 'zero' and get yourself a hero". Zero is often used in the description of an undesirable individual as illustrated in following cartoon: Click on the image to enlarge. A Sociological Application of Zero Clearly some sort of an avoidance mechanism is in operation. 
It is as though the name itself invokes a kind of anxiety, perhaps associated with "nothingness", a kind of emptiness which humankind finds uncomfortable and prefers to avoid confronting. As with all such anxiety-provoking ideas, some other imagery is substituted which provides a veneer to mask the disquieting emotional undertones of the discomforting idea.

Zero represents the amount of nothing. Today zero has a meaning not just as a number, but as the bottom, or failure. He made no baskets, or, he made zero baskets -- meaning he failed to score. Or he gave zero assistance. If you are familiar with numerology, you notice that there is no zero to work with in the numbers that correlate with the alphabet. Strange? Not at all. The absence of zero may suggest that the Pythagoreans, who first developed the duality between numbers and letters, were not aware of the notion of zero. The notion of zero is much younger.

In tennis scores, zero is called "love": because zero looks like an egg, the French called it "l'oeuf," which is French for "egg." You may have also noticed the weird numbering in tennis scores, which goes back to medieval numerology, in which 60 was considered a "complete" number (much like 100 is considered a nice round number today). Back in medieval times, tennis's four points were 15, 30, 45 (later abbreviated to 40), and 60, or game.

On the telephone keypad, zero has the honor of representing the operator. There is no zero in most games, such as playing cards (after all, who wants to win zero!). Zero is placed at the end of the keypad on the computer and at the bottom of the keypad on the telephone. Is zero the beginning or the end? Notice that on a calculator's keypad the numbers start with the largest on top and work their way down to zero. What about the letter o and the digit 0 being right next to each other on the PC keyboard? Numbers are located in three places. First, on the keyboard keys in the range 1, 2, ..., 0; this is the same order as the phone keypad. Second, on the right of the keyboard is a calculator-like pad where zero is the last listed number. Finally, there is a row of function keys; however, there is no F0, because that could translate into "no function," and what would be the point of having a key without a function?

There will always be questions about the true meaning and function of zero. Is it the end or the beginning? What does ground zero mean? Some use it as a starting point; the military uses it as an ending point. The resistance against zero can be noted even at the architectural level, in buildings where the ground level is rarely denoted as the zeroth level, as it should be. However, for mathematicians it comes easily to label the floors of a building to include zero; for example, the Department of Mathematics' building at the University of Zagreb in Croatia has floors numbered -1, 0, 1, 2, and 3. In fact, this is not a particularity of one building but a common practice in modern buildings in large cities such as Buenos Aires. In most European countries the floors are always numbered starting from 0. We do have a special word to say "ground floor" in conversation, not using 0, but the elevators will always offer you a "0 button" for the ground floor.

Now is the time to test yourself. Consider 0/0 (zero divided by zero): which of the following takes precedence, and why? A: Any number divided by zero is meaningless; B: Zero divided by any number is zero; C: Any number divided by itself is 1. By now you should know the answer and the why.
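Before reading the answer, you may enjoy seeing how a computing system treats the question. The following minimal Python sketch (my own illustration, not part of the original quiz) shows that exact arithmetic simply refuses the operation, whether the numerator is 5 apples or "nothing" at all:

    # Exact arithmetic refuses division by zero outright.
    for numerator in (5, 0):
        try:
            print(numerator / 0)
        except ZeroDivisionError as err:
            print(numerator, "/ 0 raises ZeroDivisionError:", err)

Both attempts raise the same error; the language does not pick rule A, B, or C but declares the operation undefined.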
Regarding Part C, "any number divided by itself is 1" is not a true statement for zero; 0/0 is also meaningless. One may still argue that 0/0 = 1. Well, if we allow this, we end up with some inconsistent results. For example, we end up showing 5 = 1:

5 = 5 . (1) = 5 . (0/0) = [5 . (0)] / 0 = 0/0 = 1

One may say: "I understand why it's considered meaningless to divide a number by 0. But why is the answer considered meaningless when dividing 0 by 0? I think of it as 0/0 = x. Zero times x = 0. This is possible because anything multiplied by zero equals zero." The problem with this argument is: what is the value of x? It could be any number, and one number cannot be equal to so many different numbers. Thus, 0/0 is indeed meaningless. Therefore, teaching our young students that "0/0 = Any Number (AN); this is equivalent to AN x 0 = 0" is wrong. One should never divide by zero. Division by zero is a meaningless operation. How could you divide 3 apples among zero people? How could you divide "nothing" among nobody? You may like to visit and find out what is wrong in the following Web site: Paul, a 3rd grader, divides by ZERO.

One of my readers wrote to me: "... what is 0/0? This is equal to any number, because when you multiply any number by zero, you get zero. This is why 0/0 is an indeterminate quantity. Is it correct to say x/y = z/y implies x = z unless y = 0?" The main problem I have with this line of argument is "the act of dividing by zero," which is meaningless. Therefore, it does not make sense to ask further what its result is, whether it is indeterminate or not.

Visit also Numbers and Numerical Prefixes, Question Corner.

Is 1 a Number or Just a "Unit" for Counting?

This question is a historical one, because in Euclid "numbers" do not include one. An arithmos is a multitude and hence the opposite of one. The distinction then involves the problem of the one and the many. To the ancients, 1 was never a number. A number was a multitude of units, and 1 is a unit, not a multitude. The ancients seem to have identified the multitude with the things themselves in a manner which is difficult to understand. The ancient Greeks had no applied mathematics. The practical and useful mathematics they had was developed prior to, independent of, and unsupported by any theoretical structure. Moreover, far from supporting useful mathematics, the theoretical systems tended to inhibit its development, mainly because they denounced the treatment of one as a number. This situation persisted through the Middle Ages and beyond.

You may say that any number, like 1 for example, is simply a "unit" for counting. For example, to the question "what do you think of when you think of the number 26?" one may relate it to something, like 26 dollars or 26 shirts, and so on. However, for some people the number 26 is 20 + 6, 13(2), 5^2 + 1, and so on. They think of 26 as a number, while others think of 26 as a "unit" for counting.

The Newtonian concept of number defines number as the ratio of a magnitude to a unit magnitude of the same kind. The main objection is that the notion of magnitude itself has never been properly defined, and this definition fails in the particular case of complex numbers, presumably because there are no complex magnitudes to provide the necessary ratio. We are interested in a more objective answer to the question "is 1 a number or just a 'unit' for counting?" (or, as you may phrase the question, "do numbers exist?").
Interestingly enough, some teachers make the statement "I do not think that numbers exist, because a number does not refer to anything physical." They say that "a number always refers to a quantity." One must be aware of the fact that when we use a number it may not have any "dimension" at all. For example, probability is dimensionless. It is a number between zero and one expressing the degree of your belief in the occurrence of an event, not any quantity. While we may express, for example, the height of a person as (5 feet, 10 inches) with an absolute error of our measurement of, say, 1/5 of an inch (expressed in the length dimension), the relative error of the measurement is dimensionless, that is, a pure number without any units.

Fermat's Last Theorem (FLT) is another example that illustrates the ability to separate numbers and quantities. The statement x^1 + y^1 = z^1 can be interpreted as follows: there exists a rope of integer length z that can be cut into ropes x and y such that x and y both have integer lengths, and the length of z does not equal the length of x, which does not equal the length of y, which is non-zero. The statement x^2 + y^2 = z^2 is commonly called the Pythagorean theorem, and it can be interpreted as follows: there exists a square of integer side length z such that two squares, x and y, can be formed from the area of z, where a right triangle's side lengths x and y are integers and side length z does not equal side length x, which does not equal side length y, which is non-zero. FLT shows that there is no solution to the integer equation x^3 + y^3 = z^3. This can be applied to a quantity as follows: there are no whole-number pairs of cubes that add up to a cube. But what about the fifth dimension? And the sixth? FLT shows that the equation x^n + y^n = z^n has no solutions for any integer n greater than 2, so what is the physical relevance of the statement that x^5 + y^5 = z^5 has no integer solutions? The only logical explanation is that numbers do not need to refer to physical objects; numbers exist perfectly well on their own. The illustrative examples given in this section are particularly vivid because they allow you to demonstrate the separation of numbers and quantities to your students at an early age.

One of the visitors of this site wrote to me: "Not on your web page, but a question regardless. Why is 1 (the number one) not considered prime? I don't consider it a prime for my own reasons, but would like to hear somebody else's. My personal prejudice is (in two words) rather shaky. If I define a prime as a positive integer divisible only by itself and one, then the number 1 is 'special'. Thus, a prime has two constraints: A: indivisibility by any number other than itself, and B: the trivial exception of one. Since 1 is itself, it is the trivial exception." A prime number has exactly and only two factors: one and the number itself. However, a composite number has more than two factors. Finally, one is neither prime nor composite because it has only one factor. The question whether 1 is prime or not goes back to Socrates; the difficulty the ancients had was in not considering 1 as a number but as a unit measuring other numbers.

Origin of Infinity and its Symbol (∞)

For the ancient Chinese, "10,000" meant "infinity," and the Emperor was called "10,000 years" as a way of wishing him infinitely long life. In India, when writing about large numbers, they use the word lakh. Lakh is from the Sanskrit laksha, meaning 100,000. The word appeared first in non-mathematical works, in epic and dharmasastra literature.
The word "lacquer" (Lack in German) stems from this word, since huge numbers of "lacquer" lice sit on the "lacquer" tree producing what is now called "lacquer". The word came into scientific use through the early Muslim mathematicians into Medieval Latin and from there into all the European languages. This sign Â¥, or the Western graph for the number 8 positioned horizontally, that is, "the lazy eight", is a sign to denote the idea of infinitely great or infinity, referring to, for example time. The concept of infinity in mathematical systems is expressed by the sign . As far as history is concerned, the most common similar medieval symbol is the snake biting its own tail. It is as if represents a double endlessness or eternity. On a Roman abacus kept at Bibliotheque Nationale in Paris stands the symbol Â¥ on top of its column for 1000. For 1000 the Romans used M an Etruscan letter whose sides were curved, the curious form Â¥ , possibly, to convey the concept of a very large number, which ever since the English mathematician John Wallis proposed it in 1655 has been accepted as the mathematical symbol for infinity. He also wrote in the Philosophical Transactions (1671) that "Infinitely means more than any Finite number assignable". Later, the infinite numbers have been discussed in a formal way by George Cantor in 1883. Cantor went to some effort to make connections with ancient and medieval ideas about infinity in his famous series of papers on set theory. While zero is a concept and a number, Infinity is not a number; it is the name for a concept. Infinity cannot be considered as a number since since it does not follow numbers' properties. For example, (infinity + 2) is not more than infinity. Since infinite is the opposite of finite, therefore whoever uses "infinite" must first give an indication for what is finite. For example, in the use of statistical tables, such as t-table, almost all textbooks denote symbol of infinity (Â¥) for the parameter of any t-distribution with values greater than 120. I share Cantor's view that "....in principle only finite numbers ought to be admitted as actual." Many writers have given much attention to clarifying the nature of the "infinite": what is it, how can we know anything about it, etc. Many constructively-minded mathematicians such as David Hilbert choose to emphasize that we can restrict ourselves to the finite and thereby avoid many of these problems: this is the so-called "finitary standpoint". Aristotle considered the infinite as something for which there is no exit in an attempt to pass through it. In his Physics: Book III, he wrote "It is plain, too, that the infinite cannot be an actual thing and a substance and principle." To facilitate a visual understanding of the infinity concept, you may wish to use the following demonstration for your students: Draw to straight line segments parallel to each other on the board. Make one line segments clearly longer (say, twice) length than the other line segment, as shown in the following graph: Now, pose the following question: Which line segment have the most points? Many students may give you wrong answer and even give give you arguments similar to this one. "Since one line segment is clearly shorter, it can be a multiplication of the other. Therefore it is a subset of the larger one, although both line segments have a large number of points. Two times infinity equals infinity in theory but one is clearly twice the size of the other" The correct answer is neither of them. 
Now, you have to convince your students that for every point on one line segment there is exactly one point on the other line segment. To demonstrate this fact, first create a vertex by drawing two other straight lines, each passing through the left and right ends of the two line segments. Select a point on either of the line segments, and then draw a straight line passing through that point and the vertex. As shown in the following graph, you see that for every point on one line segment there is exactly one point on the other. Both have infinitely many points.

You may also like to pose questions such as: What do you get when you add up infinitely many positive numbers? What do you think the following adds up to?

1/2 + 1/4 + 1/8 + 1/16 + ...

Let X = 1/2 + 1/4 + 1/8 + 1/16 + ...

Then, multiplying both sides by 2, we get

2X = 1 + (1/2 + 1/4 + 1/8 + 1/16 + ...) = 1 + X

Therefore, X = 1. The above series adds up to 1. You may also give a verbal explanation. Suppose you are given a piece of wood of length 1 meter. Cut it into two halves, put aside one half, and then cut the other half into two halves again. Repeat this cutting process for eternity. Now, put all the pieces together; what do you get?

Unfortunately, there are teachers who continue misleading students, as the following argument illustrates: "When we multiply 4 times 3, what we're really doing is adding 3 plus 3 plus 3 plus 3. So, in a sense, multiplication is just really fast addition, right? Well, as it turns out, division is just really fast subtraction. So, if you're dividing 12 by 3, the answer is the number of times you can subtract 3 from 12 before you get to zero (i.e., 12 - 3 - 3 - 3 - 3 = 0). So, the answer is 4. Now that you know that, imagine what happens if you try to divide 12 by 0. You start subtracting zeroes, and you realize that you are doing it infinitely many times. So, division by zero is infinity." But when you start subtracting zeroes, even infinitely many times, you never get down to zero! One should never divide by zero. For other current common misconceptions and fallacies on infinity, visit, for example, the Web site: INFINITY: You Can't Get There From Here.

Other Apparent Difficulties with Zero

It may be considered frivolous hyperbole to suggest that the demise of the Roman Empire was due to the absence of zero in its number system, but one can only ponder the fate of our civilization given the difficulty our culture seems to have with the presence of zero in our number system. The notion of zero brings other wearying and yet intriguing questions: Is our current century the 20th century or the 21st century? According to the Holy Scriptures (see Matthew, chapter 2), King Herod was alive when Jesus was born, and Herod died in 4 BC. Does that mean the millennium actually started in 1996? As you may know, our calendar was improperly set up by a monk in about 525 AD. Since the concept of zero was not available yet, he began the calendar with year 1. Anno Domini means "in the year of our Lord"; but by starting at 1, the calendar does not correctly reflect the verbal statement. As year one begins, Christ is just born. As year two begins, Christ is one year old. The second century begins with 101, and the third millennium begins with 2001. Still, there is confusion when referring to any particular year in any century. For example, American independence was achieved in the 18th century, but we refer to it as 1776, "as if" it occurred during the 17th century.
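These calendar puzzles are easy to make concrete in code. Here is a minimal Python sketch, my own illustration and valid for A.D. years only, computing the century a given year falls in under the year-1 convention:

    def century(year):
        # The calendar starts at year 1 (there is no year 0), so century n
        # runs from year 100*(n - 1) + 1 through year 100*n.
        return (year - 1) // 100 + 1

    for y in (1776, 1900, 2000, 2001):
        print(y, "falls in century", century(y))

It confirms that 1776 falls in the 18th century, that the year 2000 still belongs to the 20th century, and that the 21st century begins only with 2001.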
Ordinal numbers, which the Gregorian calendar uses, indicate sequence. Thus "A.D. 1" (or the first year A.D.) refers to the year that begins at the zero point and ends one year later. Think of a carpenter's ruler, if you will; the first inch is the interval between the edge and the one-inch mark. Thus, e.g., the millennium ended with the passing of the two-thousandth year, not with its inception. Cardinal numbers, which astronomers use in their calculations, indicate quantity. Zero is a cardinal number and indicates a value; it does not name an interval. Thus "zero" indicates the division between B.C. and A.D., not the interval of the first year before or after this point. Continuing with our example, put two rulers end to end: although there is a zero point, there is no "zeroth" inch.

As it stands now, we refer to years with ordinal numbers and to ages with cardinal numbers. Thus a child less than a year old is usually said to be so many weeks or months old, rather than "zero years old." If we changed over to this system for our calendar (referring to the age of our era, rather than to the order of the year), then there would be "zero years" for both A.D. and B.C.! That is to say, the last twelve months before the birth of Christ and the first twelve months after the birth of Christ would be the years 0 B.C. and A.D. 0, respectively. For more on this, you may also like to visit the Web site Zero.

The main confusion is between the notions of "time window length" and a "point in time." There is an interval between 0 and 1. Whether the millennium starts in 2000 or 2001 depends on whether you look at a number as a point in time or as a time interval. Years are intervals; numbers are points. Therefore, it is always a mistake to treat years as points. For example, consider the old arithmetic question: John was born in 1985 and Jane in 1986. How much older is John than Jane? The answer, of course, can be anywhere from a few seconds to two years, depending on when in those intervals the two people were born.

This is quite revealing of the cultural predilections of the time when the calendar was reorganized, first under the Julian scheme undertaken under the auspices of the Roman ruler Julius Caesar, after whom the month of July was named, and subsequently under the Gregorian calendar currently in use, which was devised during the reign of Pope Gregory. What is quietly yet magnificently revealed by this now-curious omission is the absence of the notion of zero in the numbering systems then in use. When the notion of zero was subsequently introduced in the West in the Middle Ages, it could hardly have been regarded as feasible to rewrite the entire calendar, if the debate occurred in the first place. Clearly, then, our ideas about numbers permeate our culture. How does zero relate to the time of day? For instance, what would the time be at 12:30 if not for zero?

Continuous data come in the form of interval or ratio measurements. The zero point in an interval scale is arbitrary. The different scales for measuring temperature all have a zero, yet each has a different value! For example, on a Celsius thermometer, zero is set at the temperature at which pure water freezes at sea-level altitude, while zero degrees Fahrenheit is 32 degrees below freezing; finally, absolute zero is the theoretical point at which molecular movement ceases. Therefore, since absolute zero cannot be created in the laboratory, it is only a concept.
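To see concretely that each scale's zero lands at a different physical point, here is a minimal Python sketch of the standard conversion formulas (my own illustration):

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    print(celsius_to_fahrenheit(0))        # 32.0: the Celsius zero, in Fahrenheit
    print((0 - 32) * 5 / 9)                # -17.77...: the Fahrenheit zero, in Celsius
    print(celsius_to_fahrenheit(-273.15))  # -459.67: absolute zero, in Fahrenheit

Three different "zeros," three different readings: each scale's zero is a convention, not a fact of nature.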
So, here one must accept that the meaning of zero is relative to its context. Now the question is: does an 80-degree Fahrenheit temperature imply that it is twice as hot as when it is 40 degrees? The answer is no. Why not?

A visitor of this site asked me: "I want to know what the opposite of zero is." Well, not everything has an opposite. The concept of opposite is a human invention in order to make the world manageable; there is no real opposite in nature. Is day the opposite of night? Is male the opposite of female, or are they complementary to each other? What is the opposite of the color blue? Here we must be cautious when we ask about the opposite of zero. The difference is between quality (which is a concept) and quantity (which is a number). For example, what is "minus red," or what is the opposite of red? However, in the context of the real line, you can say that the opposite of zero is itself, while the opposite of +2 is -2 with respect to the origin point 0, as both have the same distance from the origin while one is on its right side and the other on its left side. This definition is acceptable if you accept that the opposite of left is right. What is the opposite of 1/2? If you say it's 2, then 0 has no opposite.

Visit also the Web site: Calendars through the Ages.

Descartes' Representation of Numbers: Arithmetic Operations on Negative Numbers

During the Renaissance era, trigonometric and logarithmic functions, the idea of approximation, etc., were developed. In René Descartes' system there is a one-to-one correspondence between real numbers and points. This correspondence was later postulated by an axiom of continuity by Hilbert. From the analytical-geometry concepts introduced by Descartes in the 17th century, a real number is a point, while an interval is the length between two points, which is also the absolute value of the difference between the two numbers.

Before the introduction of zero to Europe during the Renaissance era, there was no concept of negative numbers. Even today, for many people, in particular for young pupils, the concept of a negative number is hard to understand, for strong ontological reasons. René Descartes was the one who extended the real numbers to include negative numbers. The way he accomplished this was by representing numbers on a real number axis, as in the above figure. While the number zero had been accepted as a natural number long ago, it became official much later. For example, in Sweden school children were to learn that zero is a natural number in their 1960 textbooks. The positive numbers are on the right of zero (the origin), and the negative numbers are on the left of zero, which is arbitrary. The reason he chose the right side for positive numbers is that most people are right-handed, not because positive is better. The word "right" might have to do with the use of the English word "right", the Spanish "derecho", etc., to mean certain "positive" things.

Vectors and Numbers: Two Distinct Representations of Numbers

The word "vector" (literally "carrier", from "vectus", past passive participle of "veho" meaning "I carry", and related to the English word "vehicle") was invented by William Rowan Hamilton. However, the useful mixing of mathematics and physics in Isaac Newton's work describing his three laws of motion is the first powerful tool of what we now call analytical modeling. Any number represents a point on this real number axis, called the O-X axis. For example, point B is +2.
It could also be represented as a vector with its origin at zero and end point B, with length 2 units, as depicted in the following figure. Now a question for you: What does -3 represent in the following figure? Is it a point? A number? A vector (and if so, what vector?)?

The four arithmetic operations are well defined by the vector representation of real numbers and appropriate kinds of movements on this real-number axis:

Addition and subtraction operations on numbers can be viewed as the results of movements in certain directions, either to the left or to the right. For example, 2 - 3 means starting from the origin O, going 2 steps to the right and then 3 steps to the left; you will end up at point -1. Therefore, by vector representation, addition and subtraction can easily be executed on this axis.

Multiplication of a number by a positive number can be considered as a vector multiplied by a scalar. For example, 1.5 times -3 means moving from the origin O in the same direction as the vector -3 (because the scalar 1.5 is positive) and then continuing one half more of its length in the same direction, as shown in the following figure: 1.5B = C.

Multiplication of a negative number by a positive number, such as -2 times 3, means moving from the origin O in the opposite direction of the vector +3 (because the scalar -2 is negative) and continuing the same number of steps in that direction. That is, moving from the origin O in the direction of the vector -3 and continuing the same number of steps in the same direction.

Division of real numbers may be defined in terms of multiplication. That is, dividing a number A by B gives the number C such that A = B.C.

Multiplication of Two Fractions:

A visitor of this site asked me: "... When we say that we multiply M by N, we say M X N. In this case we can either say 1. N times M, or 2. M times N (N X M). Here both statements are true, because the answer is the same though the imagination differs. From amongst the two variables (M and N), one is a 'quantity' and the other is the 'number of times' that quantity is repeated. So the 'number of times' is always a whole number, obviously. This is quite intuitive. BUT the quantity can be either a whole number or even a fraction; this is also obvious and intuitive, because a real quantity can be anything, a complete number or a fraction. So when we multiply a fraction by a whole number in the following way, (1/2) X 7, this is simple and intuitive. I can say that the fraction 1/2 is being added seven (7) times. Similarly, when we multiply a whole number by a fraction in the following way, 7 X (1/2), this poses a problem! Because the 'number of times' is always a whole number. But since we know that even if we reverse the above thing it is going to mean the same, i.e., (1/2) X 7, to make it intuitive we can again say that the fraction 1/2 is added seven (7) times. This is just a trick to make the problematic statement intuitive, since we know that this way we also yield the right result. BUT THE REAL PROBLEM ARISES when both variables are fractions. For example, (1/2) X (3/7). Here, how will I imagine what is happening? If I say that the fraction 1/2 is added 3/7 times, then this statement itself is erroneous. The 'number of times' is always a whole number. How to imagine this? I am not able to justify the above process to myself ..."

Notice that (1/2) X (3/7) can be read as one-half of (3 divided into 7 pieces); that is, half of one piece of 3 divided into 7 pieces. Half of such a piece is the same as one piece of 3 divided into 14 pieces, i.e., 3/14. Therefore, (1/2) X (3/7) = [(1)(3)] / [(2)(7)] = 3/14, as expected.
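Exact rational arithmetic confirms the picture. Here is a one-line check using Python's standard fractions module (a minimal sketch, my own illustration):

    from fractions import Fraction

    half = Fraction(1, 2)
    three_sevenths = Fraction(3, 7)
    print(half * three_sevenths)   # prints 3/14

The product is computed exactly, with no rounding, and agrees with the "half of a seventh-piece" reading above.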
Multiplication of Two Negative Numbers:

Another visitor asked me: "... Why do two negatives multiplied together give a plus, why not a negative?" The oldest source for the rule "a negative number times a negative number gives a positive number" is Diophantus of Alexandria, who applied it in his Arithmetica without any proof. Numbers with signs (plus or minus) were not developed until the invention of zero. Brahmagupta developed some colorful rules:

○ [The sum] of two positives is positive, of two negatives negative; of a positive and a negative [the sum] is their difference; if they are equal it is zero. The sum of a negative and zero is negative, [that] of a positive and zero positive, [and that] of two zeros zero.

○ [If] a smaller [positive] is to be subtracted from a larger positive, [the result] is positive; [if] a smaller negative from a larger negative, [the result] is negative; [if] a larger [negative or positive is to be subtracted] from a smaller [positive or negative, the algebraic sign of] their difference is reversed: negative [becomes] positive and positive [becomes] negative.

○ A negative minus zero is negative, a positive [minus zero] positive; zero [minus zero] is zero. When a positive is to be subtracted from a negative or a negative from a positive, then it is to be added.

○ The product of a negative and a positive is negative, of two negatives positive, and of positives positive; the product of zero and a negative, of zero and a positive, or of two zeros is zero.

○ A positive divided by a positive or a negative divided by a negative is positive; a zero divided by a zero is zero; a positive divided by a negative is negative; a negative divided by a positive is [also] negative.

○ A negative or a positive divided by zero has that [zero] as its divisor, or zero divided by a negative or a positive [has that negative or positive as its divisor]. The square of a negative or of a positive is positive; [the square] of zero is zero. That of which [the square] is the square is [its] square-root.

Multiplication of a negative number by another negative number, such as -2 times -3, means moving from the origin O in the direction opposite to the vector -3, and continuing the same number of steps in that direction. The movement from the origin O in the direction of +3, continued the same number of steps in the same direction, will locate us at point +6. Therefore, the multiplication of two negative numbers is a positive number.

One may even consider the multiplication of two vectors in higher dimensions: the multiplication of two vectors is equal to the product of the lengths of the two vectors times the cosine of the angle between them. This is known as the "scalar multiplication of two vectors," or the "dot multiplication of two vectors." I will elaborate on this at the end of the following section.

Trigonometric Functions and Measuring Angles:

First, let us understand the definition of the cosine of an angle by considering a right triangle (the term comes from the Greek "orthe gonia," the right angle). Consider the following right triangle, having a right angle, with sides of length A and B and a hypotenuse of length C. The cosine of angle a equals the length of side B divided by the length of the hypotenuse C, that is, Cos(a) = B/C. Similarly, the sine of angle a equals the length of side A divided by the length of the hypotenuse C, that is, Sin(a) = A/C.
However, the tangent of angle a equals the length of side A divided by the length of side B, that is, Tan(a) = A/B. Clearly, Tan(a) = Sin(a)/Cos(a), provided Cos(a) is not zero. The inverses of these functions are the "arc" functions, which return angles. For example, Arctan, or simply ATan, denotes the inverse of Tan(a). I have always equated for my students the word "right" for the angle in a rectangle with the English word "rectify," which means to make right. The prefix "rect" is Latin and means "right, straight, or erect." For example, what is the cosine of 0? Cos(0) is 1, because if you collapse the angle of the right triangle until the angle a = 0, then side C (the hypotenuse) equals side B. Therefore, Cos(0) = 1.

One may extend the one-dimensional analytical-geometry concepts to the visualization of other algebraic elements, such as equations. For example, the above figure is a graph (i.e., a picture) of the equation Y = X + 1, which is a straight line. The slope of the line is Tan(a), where (a) is the angle the line makes with the horizontal axis, measured counterclockwise. For example, the slope of the above line is m = Tan(45) = 1. As another example, if a line has a slope of m = -1, then the angle (a) is obtained by the inverse function ATan(-1) = 135 degrees. The convention of measuring angles counterclockwise removed any existing ambiguities in communicating and in computations when angles are involved, such as functional evaluations in trigonometry. For example, as shown clearly in the following figure: by the above convention, the angle OB makes with OA is 45 degrees, while the angle OA makes with OB is 135 degrees (not -45 degrees).

Therefore, algebra and geometry were finally unified by means of analytical-geometry concepts in the last couple of hundred years. This helps overcome our human visual limitation, letting us see with the mind's eye and work within spaces of higher dimension than the 3-dimensional space we live in.

The 0-kilometer stone monument is near the Chain Bridge in Budapest, Hungary. All the road distances in the country are measured from this monument.

Now back to our earlier question: Why is -2 times -3 = +6? This time, let us consider both numbers as vectors. According to the definition, "the multiplication of two vectors is equal to the product of the lengths of the two vectors times the cosine of the angle between them," we have (-2)(-3) = 2(3)Cos(0), because the angle between the vector -2 and the vector -3 is 0. Since Cos(0) = 1 and both lengths are positive, the multiplication of two negative numbers is always positive.

Another visitor of this site kindly wrote to me: "I would like to share with you 'my explanation' of why (-1)(-1) = +1. Consider (-1)(-1) = X. Since the 17th century it has been a rule that we may add the same thing to both sides: (-1)(-1) + (-1) = X + (-1). There is the rule a(b + c) = ab + ac. Using it, we get: (-1)[(-1) + 1] = X + (-1). From Descartes: +a + (-a) = 0. Using this rule we get: (-1)[0] = X + (-1). There is the rule (a)(0) = 0; hence (-1)[0] = 0. Then X + (-1) = 0, which implies X + (-1) + (+1) = 0 + (+1), so X = +1; therefore, (-1)(-1) = +1. The result (-1)(-1) = +1 is derived and imposed by the existing rules when the real numbers are extended to include negative numbers. My best regards, and thank you for sharing your expertise with us." This proof can be generalized for any two equal negative numbers; however, it will be only a special-case proof.
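The vector-based rule is also easy to test numerically. Below is a minimal Python sketch of the dot-product view on the real line; the function name signed_product and the sign-based angle rule are my own illustration, not a standard library facility:

    import math

    def signed_product(a, b):
        # Dot-product view on the real line: |a| * |b| * cos(angle),
        # where the angle is 0 for same directions and pi for opposite ones.
        angle = 0.0 if (a >= 0) == (b >= 0) else math.pi
        return abs(a) * abs(b) * math.cos(angle)

    print(signed_product(-2, -3))  # 6.0: angle 0, so the product is positive
    print(signed_product(-2, 3))   # -6.0: angle pi, so the product is negative

Two vectors pointing the same way (both left, or both right) give a positive product; opposite directions give a negative one, exactly matching the sign rules above.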
For the multiplication of any two negative numbers, we have used the vector representation of numbers in this section. Thanks for your nice algebraic proof.

Unification and Extensions: Computing Lengths, Areas, Volumes, and Hyper-volumes

As pointed out earlier, the main achievement in using analytical-geometry concepts is overcoming the limitation of human vision when working with dimensions higher than 3. For example, a manufacturer who produces more than 3 products has to work in a higher dimension when it comes to the production decision of determining the optimal production level for each product. The main aim of mathematics is the unification and extension of its concepts and methodologies. As an interesting application, consider the following unification and generalization of length, area, volume, and hyper-volume for a simplex by using the beautiful concept of determinants. A simplex in an n-dimensional space is the simplest shape having n + 1 vertices. For example, a line segment is a simplex in 1-dimensional space, a triangle is a simplex in 2-dimensional space, while a pyramid is a simplex in 3-dimensional space.

Length of a line segment with two vertices (x1) and (x2): the length is the absolute value of the determinant

    1  x1
    1  x2

divided by 1! (1 factorial = 1). Similarly, the area of a triangle is the absolute value of the determinant

    1  x1  y1
    1  x2  y2
    1  x3  y3

divided by 2! (2 factorial = 2). Similarly, the volume of a pyramid is the absolute value of the determinant

    1  x1  y1  z1
    1  x2  y2  z2
    1  x3  y3  z3
    1  x4  y4  z4

divided by 3! (3 factorial = 1 x 2 x 3 = 6). This nice approach to computing the lengths, areas, and volumes of a simplex is now unified and generalized for computing hyper-volumes in k dimensions, with the denominator equal to k factorial. You may like to perform your computation using the Linear Algebra Calculator.

Genealogy of Rational and Irrational Numbers

The Pythagoreans held that number, that is to say, the set of positive whole numbers, the positive integers, rules supreme in nature and in thought. If that is so, then any two numbers are commensurable; that means that each contains so-and-so many units of the same type, so that the ratio of the two can be expressed as the ratio of two integers. This new number, the fraction, is called a rational number. The Pythagorean claim, then, was that all ratios occurring in the natural world are commensurable and can be expressed as rational numbers. Geometrically speaking, this means the lines can be divided into exact numbers of (perhaps very small) equal segments; the ratio of the numbers of segments in each line is the ratio of their lengths.

It was known to the early Greek mathematicians that certain "abstract" quantities, such as the square root of 2, are not rational and cannot be expressed as p/q where p and q are integers. The Pythagoreans wanted to hold that quantities of this sort are not part of the real world of natural objects and events. It turned out that a simple application of their famous theorem, i.e., the Pythagorean theorem, showed such a quantity to be very real: something shocking to them. One of Socrates' questions was "How do you double a square?" Certainly the answer is not doubling the side of the square. There are quantities that are real in the world but, in a genuine sense, are not commensurable. If we can measure the side of a square exactly in terms of some measuring rod, we cannot measure its diagonal precisely with the same rod, no matter how fine the divisions on it.
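Readers can experience this incommensurability numerically. The following minimal Python sketch (my own illustration, using exact rational arithmetic and the classical Newton recurrence) produces better and better fractions for the square root of 2; note that the square of each is never exactly 2:

    from fractions import Fraction

    # Newton's recurrence x -> (x + 2/x)/2 produces ever-better rational
    # approximations to the square root of 2; the square never equals 2.
    x = Fraction(1)
    for _ in range(5):
        x = (x + 2 / x) / 2
        print(x, "  squared:", x * x)

The fractions 3/2, 17/12, 577/408, ... close in rapidly, yet every one of their squares misses 2 by a tiny nonzero amount.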
No proper fraction, the ratio of two integers, nor any finite decimal number can express the ratio between the two lengths perfectly. They call such beasts irrational numbers. Consider the following right triangle with two sides having lengths a and b respectively, and its hypotenuse having length c. Pythagoras' theorem states that

a^2 + b^2 = c^2

The following is a simple, illustrative, and beautiful ancient Indian proof. Consider the following square having side length (a + b). The area of this square is (a + b)^2 = a^2 + b^2 + 2ab. However, the whole area contains four right triangles, each having area a.b/2, and the middle square, having area c^2. Therefore, a^2 + b^2 + 2ab = 4(a.b/2) + c^2. This gives a^2 + b^2 = c^2, as expected.

Together, the rational and irrational numbers constitute the real numbers. A real number such as the square root of 3 is a point on the "real line" axis. Geometric representations of some real numbers are depicted in the following figure, using Pythagoras' theorem. One can picture real numbers on the "real" number line as "points" that fill up all the available positions on the line. But not quite all positions, as we shall see with the transcendental numbers in the next section of this site.

When one comes, many centuries later, to the question of how to handle irrationals properly, or rationally, one might say, within the general concept of what a number is, the matter takes on a rather different complexion. It turns out, for example, that between any pair of rationals, no matter how close together they may be, there is an infinite set of irrationals. One may ask: what is the solution to the equation X^3 - 5 = 0? Perhaps the easiest way to answer is to point out that, for example, 1.70998 is not a solution. Indeed, we know this without computation, since 1.70998 is a rational number, and no rational number is a solution of this equation. Then you may say a solution is X = 5^(1/3). If so, it is a vicious circle. Therefore, for practical purposes, X = 1.70998 is indeed a solution.

Irrational numbers fascinated the Greeks, who were also interested in the geometric interpretation of numbers, such as the square root of 2. Nowadays one application of the square root of 2 is in estimating the real distance traveled across a city going from a location point A to another location B which has a distance of AB = d measured on the city map. The above figure illustrates an application of the square root of 2 in estimating the real distance between two locations in modern cities from the distance (d) measured on the map.

As another application of the square root of 2, consider the International Metric Standard Paper Sizes (IMSPS), such as A4, which are widely used. In the IMSPS paper size system, all pages have a height-to-width ratio of the square root of two. That is, the length of the longer side of the paper divided by the length of its shorter side is always equal to the square root of two. This characteristic is especially convenient. For example, when we put two pages of A4 next to each other, the resulting page will be A3, having again the same height-to-width ratio, as shown in the above figure.

One of my visitors kindly wrote to me: "I have recently read your Internet article on the number zero, which I found very interesting. There are a number of anomalous elements which perhaps you could explain. The number zero, when delimited on a number line, is designated real, e.g., ... 2 ... 1 ... 0 ... -1 ... -2 ..., because the values between the integers are included on the number line.
However, when 0 is delimited in a collection of integers, i.e., 2, 1, 0, -1, -2 (according to the same publication), it is now a member of the counting numbers, Z. Where does 0 stand in relation to Bertrand Russell's theory of types? Perhaps zero is in the intersection of the number sets: the reals and Z. Does this set that putatively contains zero contain itself, which is a fundamental precept of sets in Russell's paradox? Perhaps this is what statisticians really mean by the empty or null set. As you have probably now guessed, I am not a professional mathematician; anything beyond A-level mathematics is beyond my scope as yet. Notwithstanding, your comments would be appreciated, as well as any corrections of misunderstood concepts. Please do not reply with a full explanation of the theory of types (I will not understand it; my level of lucidity in mathematical proofs is about the level that root 2 is not real). Thank you for reading this communication; a reply in layman's terms would be appreciated."

Well, Bertrand Russell was a literary man but not an academic mathematician. Unfortunately, he thought in many wrong directions, creating many paradoxes useful to him alone. He was engaged in his own daydreamed mathematical logic, for example to prove for himself "Why I Am Not a Christian." He was mixing up the domain of human beliefs with the domain of rational thoughts. This seems odd, but that is not my fault; this is Russell's logic in a nutshell. It seems he had been too good a mathematician not to know exactly when centuries ended. Six hours before midnight on the last day of 1900, he wrote to his American friend Helen Thomas a letter he would later call "boastful," announcing what he thought was the completion of his Principles of Mathematics: "Thank goodness a new age will begin in six hours. [. . .] In October I invented a new subject, which turned out to be all mathematics for the first time treated in its essence. Since then I have written 200,000 words, and I think they are all better than any I had written before."

The Two Numbers Nature Cares Most About: Inventions or Discoveries

The two numbers that Nature loves most are denoted by π and e. The first is relevant to the movements of the planets around the Sun, while the second is related to the growth of populations of different species.

What is π? Planets move around the Sun in ellipse-shaped paths with major diameter and minor diameter denoted by 2a and 2b respectively; the area enclosed is π.a.b. For a circle, a = b = r, the radius of the circle; therefore the area is π.r^2, and its circumference length is 2π.r. Therefore, π is the ratio of the circumference length of ANY circle divided by the length of its diameter. That is, to get a notion of the numerical value of π, take a rope of any size and make a circle; then circumference/diameter is π. Using such a geometric argument, Al-Biruni in the 11th century suggested that π must be an irrational number. It is nice to notice that the derivative of the area of a circle, A = π r^2, is the circumference C = 2π r. Similarly, for a sphere the surface is S = 4π r^2, which is the derivative of the volume V = (4/3)π r^3.

Besides the fact that π is a number, it is also a measurement of an angle in terms of radians. A radian is the angle subtended at the center of a circle by an arc whose length is equal to the radius. Therefore: 180 degrees = π radians. In both cases, π is dimensionless; it is just a number, with two related applications.

What is e?
The growth of population for every species follows an exponential law. The size of a population after a length of time t years is P.e^(rt), where P is the initial population size and r is the rate of growth of the particular species. The growth rate of the human population has been about r = 0.019 since World War II.

What is the difference between the accumulations of $1000 invested at a given rate (r) if the interest is compounded daily versus annually? Suppose you invest $1000 over a period of t years with an annual (fixed) interest rate of r. If the interest is added n times per year, at the end of each period, then your compounded investment is $1000(1 + r/n)^(nt). Now suppose the banker adds the interest at the end of each day; then your investment grows faster, to 1000(1 + r/365)^(365t), which is very close to 1000e^(rt), the continuously compounded investment. In fact, by increasing the number of time intervals, e.g., splitting days into half-days, this approximation gets even better, as shown by the following limiting result when the length of each period gets smaller and smaller: 1000(1 + r/n)^(nt) tends to 1000e^(rt) as n goes to infinity.

The number e was discovered by John Napier, and it is the base for the so-called natural logarithm, because this number appears frequently in nature. Notice that the explicit function y = Ln(x), x > 0, is equivalent to the implicit function x = e^y, by definition. The first and the second functions are generally called the logarithmic (Ln) and the exponential (Exp) functions, respectively.

The exact numerical values of these constants cannot be expressed in our numerical systems; however, they are already available up to 2 million digits after the decimal point: Pi = π = 3.141592654 ..., and e = 2.718281828 .... For example, e can be approximated by the following series:

e = 1 + 1/1! + 1/2! + 1/3! + 1/4! + 1/5! + ...

Instead of the above series, one may use the series for e^(1/2), which converges faster; you need only then square its sum. And π can be approximated, within one-sixth of one percent, by adding the square root of 2 and the square root of 3; or by using

π^2 = 6 {1/1^2 + 1/2^2 + 1/3^2 + 1/4^2 + ...}

or by approximating π directly with the Wallis-type product

π = 2 x (2 x 2 x 4 x 4 x 6 x 6 ...) / (3 x 3 x 5 x 5 x 7 x 7 ...)

or by using the Andrew John Wiles formula directly.

A beautiful formula known as Stirling's formula involves the two transcendental numbers. Stirling's formula is the following approximation for n factorial:

n! is approximately sqrt(2 π n) (n/e)^n

and this approximation gets better as n gets larger. Stirling's formula has a good number of applications in combinatorial counting and probability theory.

Since human beings invented the numerical systems, these numbers, which are both beautiful and necessary for nature, become so complicated when expressed in the numerical systems of our own making. It is unfortunate that many people attach to these numbers various mysteries and religious beliefs. In fact, even mathematicians call them transcendental numbers! Heavenly created! The above explanation, that it is because of human numerical systems, demystifies all this.

A real number which satisfies a polynomial equation with integer coefficients is called algebraic. A real number which is not algebraic is called transcendental. For example, consider the Golden Section, which is the way of dividing a line such that the ratio of the larger part (b) to the total is the same as the ratio of the smaller part (a) to the larger, i.e., a/b = b/(a+b). The golden ratio is algebraic: it is the positive root of the equation X^2 - X - 1 = 0, namely 1.6180339887 .... It is also interesting to know that π can be approximated as 4 divided by the square root of the golden ratio.
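Both of these claims are easy to check numerically; here is a minimal Python sketch (my own illustration):

    import math

    phi = (1 + math.sqrt(5)) / 2      # the golden ratio, 1.6180339887...
    print(phi ** 2 - phi - 1)         # about 0: phi is a root of X^2 - X - 1 = 0
    print(4 / math.sqrt(phi))         # 3.1446...: close to pi, but not equal
    print(math.pi)                    # 3.1415926535...

The first line confirms the defining polynomial, and the last two show that 4 divided by the square root of the golden ratio is a rough, not an exact, value for π.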
In general, it is also true that the ratio of two consecutive Fibonacci numbers N[i]/N[i-1] tends to the golden ratio, beginning with any couple of numbers. In other words, it is the limit of the ratio of any similar sequence of numbers. That is, let N[3] = N[1] + N[2], and N[4] = N[2] + N[3], and so on; then N[i]/N[i-1] tends to the golden ratio, no matter what the original values of N[1] and N[2] are. The golden section ratio has application in the Fibonacci search optimization algorithm, and it is often used by artists because of its aesthetic quality, which is meant to be pleasing to the eye.

Imaginary Numbers:

We recall that a real number which satisfies a polynomial equation with integer coefficients is called algebraic, as in the equation X^2 = 1. Now, we need not stop at this simple equation but may be curious enough to take matters further by changing the sign on the right-hand side. The new equation, X^2 = -1, turned out to be the source of many discoveries. The solutions to this equation are +i and -i, where i is defined by the property that i squared equals -1. The number i is known as the pure imaginary number. One of the most beautiful formulas in the history of mathematics is Euler's formula, involving the five most beautiful numbers 1, 0, e, i, and π:

e^(iπ) + 1 = 0

That is, the result of raising an irrational number to a power that is an imaginary number can turn out to be a natural number. This is obtained from Euler's general and amazing identity:

e^(iX) = Cos(X) + i Sin(X)

This close connection between the trigonometric functions, the natural constant e, and the square root of -1 cannot be a mere accident; rather, we must be catching a glimpse of a rich, interesting mathematical pattern that for the most part lies hidden from our senses. It is a fact that without the application of the imaginary numerical system, none of the engineering sciences on which our everyday lives depend would exist.

The invention, development, and naming of the numerical systems must be viewed from a historical perspective, as a necessity for humans to make the world predictable and manageable. Notice that if a number is not real, it does not mean it is unreal. One may wonder, if transcendental numbers are so important for nature, why do they look so strange? One must notice that while numerical systems are human inventions, numbers such as π and e are human discoveries. Expressing the exact values of these numbers in our systems is an impossible task. Another beautiful irrational number is e^(π sqrt(163)), which differs from an integer by less than 10^-12. Try it on your calculator.

One of the visitors of this site wrote to me: "I think the terms Negative, Irrational, Imaginary, and Transcendental numbers are confusing to the layman. I would prefer the following terminology to help make these concepts meaningful to the general public. Consider substituting:

Losing numbers for Negative numbers
Illogical or "goofy" numbers for Irrational numbers
Pretend or "phony" numbers for Imaginary numbers
Spiritual or religious numbers for Transcendental numbers

The words in quote marks might be more appropriate for grade-school application." I agree with you. The original names are standard simply because of their historical contexts. These original keywords express the attitude of mathematicians toward these sets of numbers when they tried to make sense of their meanings and/or their applications.
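Returning to the two curiosities above, both can be checked in a few lines. The sketch below (my own illustration) verifies Euler's identity with ordinary complex arithmetic; for e^(π sqrt(163)) it uses the third-party mpmath library (an assumption on my part that it is installed), since ordinary double precision cannot resolve so tiny a gap in so large a number:

    import cmath, math

    # Euler's identity: e^(i*pi) + 1 is zero, up to rounding error.
    print(cmath.exp(1j * math.pi) + 1)       # about 1.2e-16j

    # e^(pi*sqrt(163)) is about 2.6 * 10^17, so we need extra precision
    # to see how close it comes to a whole number.
    from mpmath import mp, exp, pi, sqrt
    mp.dps = 40                              # work with 40 significant digits
    print(exp(pi * sqrt(163)))               # 262537412640768743.99999999999925...

The first print is zero to machine precision; the second shows a run of twelve nines after the decimal point, the "almost integer" the text invites you to try on a calculator.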
What is Benford's Law?

Benford's Law states that if we randomly select a number from a table of physical constants or statistical data, the probability that the first digit will be a "1" is about 0.301, rather than 0.1 as we might expect if all digits were equally likely. In general, the "law" says that the probability of the first digit being a "d" is

P(d) = log10(1 + 1/d)

This implies that a number in a table of physical constants is more likely to begin with a smaller digit than a larger digit. This can be observed, for instance, by examining tables of logarithms and noting that the first pages are much more worn and smudged than later pages. Frank Benford's law is also called the First Digit Law, the First Digit Phenomenon, or the Leading Digit Phenomenon; it shows up in listings, tables of statistics, etc., and is closely related to Zipf's Law.

Gosh Numbers:

The phrase "gosh numbers" is used as a curious form, or speculation, for unknown or not-well-understood numbers. For example:

○ The transcendental numbers that pop up in surprising places: for instance, π, e, or the Fibonacci sequence, and their notable occurrences in nature are examples of gosh numbers.
○ "The nearest star is trillions of miles away? Gosh!" A number that makes someone say "Gosh!"
○ The car mechanic tells you that your car repairs will be, "Gosh, I'd say $500." That is, a usage of the phrase as a guessed number.
○ "Minus 40 degrees is the temperature which is the same on both the Fahrenheit and Celsius scales. Gosh." What is "gosh" here is the crossover point between the two scales.

Visit also the following Web sites: Fascinating Flat Facts about Phi; Fibonacci Numbers and the Golden Section; Golden Section in Art, Architecture and Music.

Equation: Its Structure, Roots, and Solutions

The word "Moaadeleh," meaning "equation," appeared in the writings of al-Khwarizmi together with the common algebraic operations connected to the idea of an equation:

○ Jabr: completion, restoration, enforcement, and compulsion
○ Muqabalah: posing, opposite, comparison
○ Radd: reducing, returning, canceling, and removing
○ Takmil: completion.

Later, the word "equation" was introduced to Europe through Fibonacci's work. Like the Latin "aequatio," it is an abstract noun referring to any process of equalizing. However, as a technical term it means the mathematical expression that we now call an equation: namely, two separate combinations of various quantities, known and/or unknown, that are equal to each other. Equations are any symbolic expressions with an equality sign, such as X^2 = 4. It is good to know that the word "root" is derived from the Sanskrit word "Bija," for "seed" or "root," the usual term in algebra, where the "root" is the unknown quantity (often called X) that then produces a definite result via the structure of the equation. The definite result (i.e., the fruit) of an equation is called its solution, because it satisfies the equation. The above figure depicts the historical analogy, developed during the second century, between an equation's elements and a tree structure. By plugging the numerical value of the root (X) into the equation, the equation resolves, i.e., it disappears; hence the numerical value is called a solution, just as when sugar is mixed with water in making lemonade, the sugar disappears, making a solution.

Misplacement of the ± Sign

Another common error is found in some textbooks (see, e.g., Mathematical Methods for Economists, by Glaister) which announce that the square root of 4 has two answers, namely +2 and -2.
When this writer confronted an author guilty of this practice, observing that one number cannot be equal to two different numbers, the reply received was "check it for yourself by squaring both sides," followed with self-satisfaction by "you see!" This writer advised that, following this argument, one could also demonstrate that one is equal to minus one. An observer witnessing this exchange jumped in, volunteering the result of the computation performed with a calculator as producing the single result of plus 2, declaring "he is right."

Solving the equation X^2 = 4 gives two solutions: X = 2 and X = -2, written compactly as X = ± Sqrt(4) = ± 2. The symbol ± means plus OR minus (it could be either, but not both at the same time). This correct result is distorted when one goes on to write Sqrt(4) = ± 2. Unfortunately, this distinction is still not recognized by many instructors. For example, this error is committed by many authors when taking the square root of 9. The authors profess to the students that "there are two possible numbers that square to 9, namely 3 and -3. So, when we take the square root of 9, we put a + and - in front of it." While the first part of this statement is correct, the conclusion is wrong. When we take the square root of 9, we always get 3, NOT 3 and -3.

1 = 2, right? Confusions between Continuous and Discrete Variables

Another author wrote: "Here's a problem for you. If you add x, x times, then you get x + x + x + ... (x times). When you take the derivative, you get 1 + 1 + 1 + ... (x times), which equals x. But if you simplify x + x + x + ... (x times) to get x^2 and take the derivative, then you get 2x. Now, evaluating both results at the point x = 1, you get 1 = 2."

At first glance, it would certainly appear that we have finally found our proof that 1 = 2 with the magic of differential calculus, a feat no doubt worthy of a Nobel prize! The difficulty arises, however, when we take into account the requirement that any derivative is always taken with respect to only one variable at a time. So when we talk about taking the derivative of the summation of x, taken x times, we are really describing a function of two variables, one continuous, x, and one discrete, the number of times. The first case satisfies this requirement. Let us clarify by using x as the continuous variable and n as the discrete variable indicating the number of times the operation of addition is performed. Thus the summation is nx, and the derivative with respect to x is n. If, as in this case, we evaluate at x = n, then the derivative is in fact x. In the second case, the derivative is not defined, because, applying the same notation, you cannot take 'nx', relabel it as 'x.x', and take the derivative of x^2 with respect to x to get 2x, because this would imply taking the derivative of a discrete variable. What this case erroneously purports is taking the derivative of the product (the first term times the derivative of the second term, plus the derivative of the first term times the second term) to get 2x. Applied correctly, the derivative with respect to x would be n; evaluating at x = n, the derivative is in fact x, not 2x.

1 = 5, right? Confusions between Numbers and Operations

The following interesting example appeared in The International Journal of Ephemera, Issue No. 3, responding to this question: "Now try to explain the following steps ...

1. -5 = -5 (obviously)
2. 25 - 30 = 1 - 6 (just the same)
3. 25 - 30 + 9 = 1 - 6 + 9 (just added 9)
4. (5 - 3)^2 = (1 - 3)^2 (using the binomial rules)
5. 5 - 3 = 1 - 3 (taking the square root of both sides)
6. 5 = 1 !!!!"

The writer unfortunately confuses the value of a number with the result of the square-root operation. Taking the square root (an operation) on both sides of (5 - 3)^2 = (1 - 3)^2 gives ±(5 - 3) = ±(1 - 3) or, more simply, (5 - 3) = ±(1 - 3). These give 2 = ±2, and obviously only one of the two is correct. For example, to find the side of a square-shaped region with area 4 (length units)^2, equivalently expressed as x^2 = 4, taking the square root of both sides gives x = ±2; but the length, in the context of the problem, is only correctly expressed as x = 2 (length units). Looking at this issue from a different perspective: if a = b, then a^2 = b^2. The reverse, however, may not hold. For example, from (-2)^2 = (2)^2 one cannot conclude -2 = 2. If a^2 = b^2, what can we say? We can say that |a| = |b|, or if you wish, a = ±b. Whether either or both are correct depends on the context of the problem.

Similarly, one of the visitors of this site wrote to me: "This would take some faith to believe: ±(5 - 3) = ±(1 - 3) or, more simply, (5 - 3) = ±(1 - 3). These give 2 = ±2. I imagine the 'truth' is: (5 - 3)^2 = (2)^2 = 4 = (-2)^2 = (1 - 3)^2."

It seems to me that in the first line of your comment you have taken a part of my argument out of its context; then there is no need for any "faith to believe". Moreover, you should read 2 = ±2 as: 2 is equal to +2, OR (not AND) to -2. This understanding is similar to the reading of 5 ≥ 2. Right? As for the last line of your comment, your imagination is correct; however, you have not yet taken the square-root operation, which is the main topic of this section of the site. If you do so, you will see what you get.

The same visitor replied: "Dear Arsham, I am impressed with you taking the time to respond to me, and thank you for that. The mathematics I learnt 45 years ago indicated: Let X^2 = 4, that is, X^2 - 4 = 0. This can be written as (X - 2)(X + 2) = 0, which gives X = 2 or X = -2. Therefore, X = ±2."

By this argument, 1 = -1 when you raise both sides to the power of 2. Notice that if a = b then a^2 = b^2; however, a^2 = b^2 does not imply a = b. Right. So one must be very careful. You are not alone, and I do understand blaming the teacher who robbed your young mind long ago. Your major task in life is to reevaluate, for yourself, all you have been taught. My visitor then did his numerical experiment and found out that starting with X = sqrt(4) does not give us two different numbers: it gives X = 2 only, not both X = +2 and X = -2.

In some statistical texts dealing with regression analysis one reads assertions such as "R^2 is the variance of the estimates divided by the variance of Y. R is the square root of R^2". We do not argue with the first assertion; however, the statement that "R is the square root of R^2" is not only misleading, it is also mathematically incorrect and may indeed lead to the wrong answer when the correlation between the variables is negative! A correct statement would be that "R is ± the square root of R^2, depending on the sign of the slope."

Recurring Fractions Are Rational Numbers! Confusions between Series and Their Limits
We have already mentioned rational numbers in this site. Rational numbers are those real numbers which can be written as p/q, where both p and q are integers. The question posed by an author is whether or not a recurring fraction such as 0.4444444...
is a rational number, and what about 0.9999...? Many instructors argue as follows. Let X = 0.444444...; multiplying both sides by ten, we get 10X = 4 + 0.44444... = 4 + X. This gives X = 4/9; therefore 0.444444... is a rational number! These instructors also claim that in fact all recurring fractions are rational numbers except 0.99999..., because, arguing similarly, if X = 0.999... then 10X = 9 + 0.9999... = 9 + X, which gives X = 1; therefore it is an integer, not a rational number! Some teachers use a quick trick that seems to have the desired effect, at least for students willing to admit that 1/3 = 0.333333...: "Just multiply both sides by 3, and see what you get." 1 = 0.999999...

There are a few misconceptions embedded here. First of all, 1 is a rational number too, since it can be written as 1 = 1/1. Moreover, these teachers ignore the vagueness of the notorious dots used in symbolizing the infinite sequences. The arithmetical operation "multiply by 3" is defined for the multiplication of finite representations of real numbers; the operation simply is not defined, and is not applicable, for infinite ones. Indeed, multiplication begins from the least significant digit, but in an infinite representation this digit is simply absent.

Now let us go back to the main question: is 0.4444... a rational number or not? The answer is no, because 0.444444... is not a well-defined number; it is a series. The correct question should have been: what is the limit of 0.444444...? Clearly, one can write this recurring fraction as 4/10 + 4/100 + 4/1000 + ..., which is a geometric series with limit (4/10) / [1 - (1/10)] = 4/9. Therefore 4/9 is its limit, which is never attained no matter how many digits we include in the fractional portion. Again, 4/9 is not equal to 0.44444..., because 0.44444... is not a number, while 4/9 is a well-defined number. If you are old enough, you may remember the once very popular song "...Take it to the limit one more time...": well, we can never take it to the limit. If we could, then it would not be a limit.

Now let's get back to the recurring fraction 0.999... There is nothing special about this one. Again, we must think of it as a series, not a number. This series can be written as 9/10 + 9/100 + 9/1000 + ..., which is again a geometric series, with limit (9/10) / [1 - (1/10)] = 1, which is a rational number. However, this is only a limit, and therefore unattainable; so is 0.99999... Just ask yourself: how much is 1/3 of a dollar bill? Is it 33 cents? Or is it 33.333... cents? Similarly, the main question concerning the topology of the continuum is: what is the meaning of 0.9999...?

You may ask, "Why were stock prices quoted in eighths of a dollar?" In the 18th century, the American dollar was officially equated in value to the Spanish silver dollar, and the Spanish silver dollar was so large it was literally divided up into eight parts. Because of this, for a long time fractions of an American dollar were also expressed in eighths, especially by America's European trading partners. When the US stock market was established, stock prices were quoted in dollars and eighths of a dollar. This practice changed only recently.

Note: A geometric series/sequence/progression is so called because any term is the geometric mean of its adjacent terms. (Archytas called it the geometric mean possibly because the tangent OT to a circle from an external point O is the geometric mean of OA and OB, where A and B are the intersection points of the circle with the line passing through O and the center of the circle, so that OT^2 = OA.OB.)
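The limit claim is easy to watch numerically. Here is a small Python sketch (ours, not from the original page), using exact fractions so no round-off intrudes:

from fractions import Fraction

# Partial sums of 0.444... = 4/10 + 4/100 + ... approach 4/9 but never reach it.
limit = Fraction(4, 9)
s = Fraction(0)
for n in range(1, 8):
    s += Fraction(4, 10**n)     # append one more digit "4"
    print(n, s, limit - s)      # the gap is 4/(9*10**n), always positive

Replacing the 4s with 9s shows the partial sums of 0.999... closing in on 1 in exactly the same way: the gap shrinks by a factor of ten at each step but never vanishes.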
IEEE Special Floating Point
There is a story about a "bug" in one of the early computers (probably the ENIAC) where a division by zero occurred and the machine began to loop endlessly, continuously subtracting zero from the dividend, until the technicians on duty manually stopped it. This was, of course, before error checking was incorporated into the hardware or software of the machine.

The usual implementation of real numbers in today's computers as floating point numbers has the well-known deficiency that most numbers can only be represented up to some fixed accuracy. This means that even the basic arithmetic operations cannot be performed exactly, leading to the ubiquitous round-off errors. This is a serious problem in all disciplines where high-accuracy calculations are required. The IEEE standard has special floating point bit patterns to represent error values. For example, 0/0 is NaN (not a number), and the standard has many NaNs for "indeterminate" quantities. In the IEEE standard, 1 divided by 0 is some kind of infinity; it is an indeterminate quantity in the sense that it oscillates between +∞ and -∞ (depending on whether you are dividing by positive or negative 0). According to the IEEE standard, 1/0 = ∞ (with signed zeros arising as results of floating point operations); this is valid in the real numbers completed by a projective infinity, and is quite useful in many respects. One only needs to realize that the extended real numbers no longer form a "field", so that care is needed with the operations. This has to do with computer bits and bytes, which are not enough to represent arbitrarily small or large positive or negative numbers. The IEEE standard on floating point operations requires 1/0 = ∞, so one finds this nowadays on almost all computers. The inventors of the IEEE standard spent a lot of time coming to a consensus about what to require; it seems to me uncomfortably like the arguments made during the IEEE 754 development. See also: The IEEE Standard for Floating Point Arithmetic, and Some Disasters Attributable to Bad Numerical Computing.

0 = 1 right?
Some authors continue to claim that there are different ways in which it can be proved mathematically that one equals zero. Any of the following three erroneous proofs is offered as evidence.

Taking the Square Root of Both Sides of an Equality
Let x = -0.5; then 2x = -1, thus 2x + 1 = 0. Adding x^2 to both sides, we have x^2 + 2x + 1 = x^2, which can be written as (x + 1)^2 = x^2. Since taking the square root of both sides gives x + 1 = x, the author concludes that 1 = 0. The error arises when taking the square root of both sides. In that operation, one must write x + 1 = ±x, one branch of which gives x = -0.5, as expected; the other equation has no solution. The author of this fallacy misleads the reader by camouflaging a simple first-order equation to make it appear quadratic by adding x^2 to both sides. However, when the x^2 terms are brought to one side, the coefficient of x^2 becomes zero, demonstrating that the equation is indeed not a quadratic equation, which requires, as a necessary and sufficient condition, that the x^2 coefficient be non-zero. When the author concludes that 1 = 0, what he (or perhaps she) is really doing is taking the demonstration of a contradiction and proffering it as a solution. Apparent solutions which demonstrate contradictions are in fact proofs that the candidate solution is not a solution after all!
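A computer algebra system makes the camouflage visible at once. A quick Python check (ours, assuming the sympy library):

from sympy import Eq, expand, solve, symbols

x = symbols('x')
lhs, rhs = (x + 1)**2, x**2
print(expand(lhs - rhs))        # 2*x + 1 : the x**2 terms cancel, so the
                                # equation is linear, not quadratic
print(solve(Eq(lhs, rhs), x))   # [-1/2] : a single solution, no 1 = 0

The equation has exactly one solution, x = -1/2, and no contradiction ever appears.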
0 = 1 right? Manipulation of Divergent Series
Friedrich Gottlob Frege, in his "On Sense and Reference" (1892, p. 41), says: "...combinations of symbols can occur that seem to stand for something but have (at least so far) no reference, e.g., divergent infinite series." We use the same symbol to represent both the infinite series, as a formal object or limiting process, and the value of that limit when it exists. The following manipulation is a case related to the concept of infinity and divergent series. Consider the series 1 - 1 + 1 - 1 + 1 - 1 + 1 - 1 ..., and the following two ways of grouping its terms:

1 + (-1 + 1) + (-1 + 1) + (-1 + 1) + ... = 1 + 0 + 0 + 0 + ... = 1
(1 - 1) + (1 - 1) + (1 - 1) + (1 - 1) + ... = 0 + 0 + 0 + 0 + ... = 0

Thus 1 = 0. Here the error comes from the fact that this is not a convergent series. It is meaningless to manipulate the terms of a divergent series since, by the definition of divergence, the result of such a series is indeterminate. The presenter himself demonstrates why the proffered series is divergent: the result of the series is not a fixed, finite value.

Another Example Involving Manipulations of Divergent Series:
One of my readers wrote to me: "Take x/(x+1) for x being any rational number with the exception of -1 (we wouldn't like to divide by zero). This develops as x/(x+1) = 1 - (1/x) + (1/x^2) - (1/x^3) + (1/x^4) - ... Now take x = 1; this becomes 1/2 = 1 - 1 + 1 - 1 + 1 - ..., without dividing by zero. What did I do wrong?"

I am glad that this reader knows something is wrong. By the ratio test, a real series with general term U(n) converges if the ratio |U(n+1)/U(n)| is (eventually) less than one; and, in any case, a necessary condition for convergence is that the terms tend to zero. The general term of the above series is U(n) = (-1)^(n-1)/x^(n-1), for n = 1, 2, 3, ..., so |U(n+1)/U(n)| = |1/x|, which is less than one only when |x| > 1. This means the series does not converge for any value of x with |x| ≤ 1; in particular, we are not allowed to take x = 1, as you did, and that is how you got a strange result.
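One can see the divergence at x = 1 directly by computing partial sums. A tiny Python sketch (ours):

# The partial sums of 1 - 1 + 1 - 1 + ... oscillate forever between 1 and 0,
# so the series has no sum, and regrouping its terms proves nothing.
partial = 0
sums = []
for n in range(10):
    partial += (-1)**n
    sums.append(partial)
print(sums)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

Since the partial sums never settle on a value, assigning the series the value 1/2 (or 1, or 0) is a matter of regrouping, not of arithmetic.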
0 = 1 right? Taking Conventions for Proofs
What is zero to the power of zero? Consider anything to the power of 0: 1^0 = 1, 2^0 = 1, ... In fact, anything (non-zero) to the power of zero equals 1; thus 0^0 = 1. Now consider zero to any power: 0^1 = 0, 0^2 = 0, ... In this example the writer goes on to state: "In fact, zero to the power of anything equals 0. Thus 0^0 = 0. Thus 1 = 0."

When a new entity in mathematics is defined, one must also define its properties, and it is a good idea to make them consistent with the properties of related entities if possible. To make things consistent, we sometimes have to agree on "conventions". Raising a quantity to a specific (positive integer) power means multiplying that quantity by itself a specified number of times (i.e., involution). The above error comes from the fact that any number (except zero) raised to the power of zero is, by convention (not as a mathematical fact), equal to one; it is not the result of any proof. This convention is adopted because raising a number to the power of zero is otherwise a meaningless operation: 3^2 means 3 times itself (once), while 3^0 would mean 3 multiplied by itself zero times, which is a meaningless statement, just as one cannot say "I've met a certain person zero times." In order to be able to generalize the algebraic rule a^(b+c) = a^b . a^c even when b = 0, we have to set a^0 = 1. Therefore, it is by convention that a^0 = 1. This convention is needed in order to have consistency in this algebraic rule; however, if a = 0, b = 0, c = 0, we no longer have any need for a^0 = 1. A similar argument holds for the algebraic rule a^(b-c) = a^b / a^c in the case b = c. In both operations the problem arises only when a = 0; in all other cases a^0 is defined as 1.

Another way of explaining (not proving!) the convention that any number except 0 raised to the power of zero is 1 is to look at a pattern:
2^5 = 32
2^4 = 16
2^3 = 8
2^2 = 4
2^1 = 2
If you follow the pattern, dividing by two each time, the next entry is 2^0 = 1. Obviously, this works with other positive numbers; for negative numbers, the signs alternate in the sequence. Therefore it works for any positive or negative number, except 0. Why not zero? Because, as we demonstrated earlier, one cannot divide by zero. As the pattern continues, you can describe negative exponents: 2^-1 = 1/2, 2^-2 = 1/4.

Conventions are agreed among mathematicians to facilitate generalizations and mathematical operations. It is only by convention that a number raised to the zero power is equal to one, and the same convention excludes zero to the zero power, since there is no need for such an operation. A reader asked me: "What is the meaning of raising a number to the power of a negative number?" The answer relies on the convention that a^0 = 1 (for all values of a except zero). Based on this convention, the operation a^-n means 1/a^n, except for a = 0. Remember that, similar to dividing by zero, zero to the power of any non-positive exponent (which includes zero) is a meaningless operation. You may remember this by the following argument, which is not a proof: 0^0 = 0^(1-1) = 0^1 . 0^-1 = 0/0. The flaw is that 0^-1 = 1/0, which is a meaningless operation. Moreover, the above derivation is not a proof because, to be able to write a^-n = 1/a^n, we have to use the convention that a^0 = 1 for all a except 0; similarly, a^n . a^-n = a^n / a^n given that, by convention, a^0 = 1 for all a except 0. With this convention, everything looks fine. Here is a question for you: what is zero to the power of a negative number?

There is another convention for n factorial (n!), for n any non-negative integer. It states that zero factorial equals 1, i.e., 0! = 1. Note that 1! = 1 as well; however, one cannot conclude from this that 0 = 1, because one result is arrived at by convention and the other is a mathematical fact. As with all conventions, certain protocols and rules must be followed in arithmetic operations involving zero.

One of the visitors of this site wrote to me with a question: "Do the rules and principles you so artfully applied to zero in your fine web page apply equally and similarly to the DOUBLE zero? For example, what is single-zero raised to the double-zero power, and vice versa? Is division by double zero also meaningless and/or forbidden?" Double zero is still zero; zero is the only number having such a characteristic.
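As a side note on how programming languages handle these conventions, a quick check in Python (ours, not from the original page):

import math

print(2**0)            # 1
print(0**0)            # 1    (Python adopts the combinatorial convention)
print(math.pow(0, 0))  # 1.0  (so does the C math library behind math.pow)
try:
    0**-1              # zero to a negative power is a division by zero
except ZeroDivisionError as e:
    print("ZeroDivisionError:", e)

So Python answers "1" for 0^0 purely by convention, while zero to a negative power is rejected outright, exactly the 0^-1 = 1/0 flaw described above.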
One of my French colleagues kindly wrote to me: "A little remark about the paragraph on 0^0. I believe that in your essay you could stress less the 'conventional' aspect of taking 0^0 = a particular value, and insist more on the analytic limit context, since 0^0, as well as 0/0 or 0^∞, are encountered when searching for limits of expressions; and for every expression there is a definite answer (in a given case 0, 1, an imaginary number, ...), even several (depending on the path used to approach the limit value), but not a universal one, as if there were a common value for all functions of any kind. Similarly, about 0! = 1: the origin of this value is an analytic continuation of n!, it is not possible to make it monotonic on [0,1], and it cannot be used as an argument to deduce 0 = 1. One often refers to the 0^0 = 1 convention as a 'combinatorial' one, whereas other values are more common in other settings. Also, while it is rare that an extension to 0 of an iterative process such as multiplication or exponentiation has a sound and meaningful interpretation, as you often note in your text, it is often very rewarding in mathematics to search for interpretations and settings where an extension of such processes to rational, real and complex numbers can be developed. Among the important and ancient domains where it has been done are differentiation, functional composition and solids of revolution. ... Thanks for your popularizing efforts and the extensive bibliography you have included."

Clearly, a functional evaluation may not give the same result as taking the limit. For example, the function f(X) = Sin(X)/X is not defined at the point X = 0, since f(0) = 0/0; however, Lim f(X) = 1 as X approaches 0 (while never being equal to 0). Moreover, you are right in pointing out that the mathematical conventions involving zero are there to make our arithmetic operations consistent whenever needed. For example, the convention that 0! = 1 is needed to make combinatorial calculations, such as C(n, n) = 1, meaningful; however, this cannot be taken as a proof that 0! = 1. As another example, we may ask: what is the limit of the function f(X) = X^X, with domain X > 0, as X approaches zero? By taking logarithms, it can be shown that Lim f(X) = 1 as X approaches zero. This limit can be verified on your calculator by evaluating f(X) at an infinitesimal value of X, say X = 0.0000001. Notice that this limiting value cannot be taken as a result for 0^0 by concluding that 0^0 = 1.

Visit also the following Web site: Complex Numbers.
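Both limits are easy to verify numerically. A short Python sketch (ours), in the spirit of the calculator experiment suggested above:

import math

for x in (0.1, 0.01, 0.001, 0.000001):
    print(x, math.sin(x) / x, x**x)
# Both columns tend to 1 as x -> 0 from the right, even though neither
# sin(0)/0 nor 0**0 is defined by these formulas at x = 0 itself.

The values approach 1 from both columns, which illustrates the limit; it says nothing about the (undefined) value at x = 0.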
From Finger Numbers to Computers: The Most Fascinating Journey
The concept of number is the obvious distinction between beast and man. Thanks to number, the cry becomes a song, noise acquires rhythm, the spring is transformed into a dance, force becomes dynamic, and outlines become figures. Before the advent of a fairly general writing ability, finger numbers were widely used as a universal numerical language: numbers were indicated by means of different positions of the fingers and hands. In a rudimentary way we still occasionally express numbers with our fingers. Neither spoken numbers nor finger numbers have any permanency; to preserve numbers for the purpose of records it is necessary to have other representations, and without some memory aids the performance of calculation is extremely difficult. The ancient Greeks were mostly interested in geometry, since their numerical system was based on denoting numbers by letters of the alphabet. The word geometry is derived from geo (land, earth) and metry (measuring).

Before 400 B.C., Thales and Pythagoras, among others, proved theorems that are still useful and still taught in schools. These ancient mathematicians had their own laptops: a laptop was usually a wooden tray containing smooth sand, on which they used a finger or a stick to draw and present their convincing arguments about geometric figures. To restart the laptop, they smoothed the surface and started another session. This was the first laptop. This was the beginning of science; then it went to sleep.

The use of a positional system with a zero seems to have made its appearance in India in the period A.D. 600-800. Around A.D. 800 the system was known among the Muslims in Baghdad, and it gradually superseded the older type of Arabic numerals. One of the greatest Muslim mathematicians of this time was Mohammed ibn Musa al-Khwarizmi, whose work Al-Jabr wal-Muqabalah contributed much to the spread of calculation with the new system, first in the Muslim world and later in Europe. This treatise is of interest also because it is believed that its title, Al-Jabr, gave rise to the term algebra of modern mathematics. The works of al-Khwarizmi were translated into Latin, and through a perversion of his name the art of computing with Hindu-Arabic numerals became known as algorism. This term took on various other forms; in Geoffrey Chaucer's work it appears as augrime. The word is still preserved in mathematics, where a repeated calculation process is called an algorism. The numerals took a great variety of shapes, some quite different from those now in use, but with the introduction of printing the forms became standardized and have since remained almost unchanged. The transition to the new numerals was a long-drawn-out process. For several centuries there was considerable ill feeling between the algorismists, the users of the new numerals, and the abacists, who adhered to the abacus and the Roman numerals. Tradition long preserved Roman numerals in bookkeeping, coinage and inscriptions. Not until the sixteenth century had the new numerals won a complete victory in schools and trade.

It was during the Enlightenment era that educational systems, such as the universities we know today, flourished. The separation of Church and State allowed individual freedom, and after over 2000 years (from 400 B.C. to the 16th century) we at last went back to the pre-Socratics to re-invent science. While the pre-Socratics put more emphasis on geometry than arithmetic, the combination of geometry and arithmetic created by Descartes, as the Cartesian coordinate system, is what we now refer to as analytic geometry. Analytic geometry was the intellectual foundation for Isaac Newton's models describing his three laws of motion; his work is the first powerful tool of what we now call analytical modeling. Analytical modeling now dominates all fields of human knowledge, including art, science, and social science.

You may rightly ask why we read numbers "backwards". As opposed to Latin writing, Arabic is written from right to left. Therefore, when writing a number in Arabic, the smallest positional digit is written first, then the next smallest, and so on. For example, 92 means 2 ones and 9 tens, while in English writing it means 9 tens and 2 ones. The positions begin at the writer's right in Arabic, while in the Western world the opposite is the case. This is a relic of the stage at which the numerals were introduced into the West, and this direction has been difficult for children to understand easily.
Arithmetick (a Greek word) is "a Science teaching the Manner and Use of Numbering." This science may be wrought diversely, with pen or with counters, and, finally, in the computer age, by solving large-scale problems by means of distributed, multithreaded, and parallel algorithms. There are many amusing things in the universe, but the human mind is the most amusing of all.

One of my readers from England wrote: "When I was 11 our class was asked to pick a small integer and present a project on that number. My classmates picked rather obvious small (and positive) values and observed that four seems stable because dogs have as many legs as cars have wheels (and so on). This didn't seem very interesting to me, so I picked zero and focused on the way it doesn't behave well mathematically. I got a poor mark, and it was explained to me that this was because I hadn't picked 'a proper number'. It is difficult to explain zero to people, even teachers." You are right, I must say, unfortunately.

". . . if one were to place two points on a map, students would find it easy to state the direction and length of the line connecting them (5 inches running SE-NW). But what if the two points were on top of one another? The length is easier to discern than the direction. The correct answer is that the solution is undefined (and the question silly). The 2/0 = INF crowd are essentially saying that the vector exists in the solution domain of valid compass points, and might be inclined to nominally place it in the region of the northern magnetic pole and answer N-S. That answer is measurably more inferior to the one given by the students who suggest that perhaps the vector in question points in ALL compass directions (for at least that answer matches the question in meaninglessness)." A vector has three components: an origin, a direction, and a (non-zero) length. Therefore "two points on top of one another" do not define a vector.

"...I have a question for you. If percentages are hundredths-of, what is the name of the scale that measures in thousandths? It is the scale used for chemical purity, such as with ammonia solution (strong at 990) and silver (pure grade at 995)." By analogy with percent, you may call this scale "per mille", thousandths-of, as in millimeter, which is one thousandth of a meter.

"The practical problems with zero cropped up in the early days of flight simulators. The problem was that if the plane was plummeting perfectly straight down (or climbing away from the center of the Earth, like a rocket), this vector no longer denoted which way it faced, because the lateral vector of acceleration had zero length. The consequence was that computational failure propagated through the calculations and the program would crash..." A vertical line has no slope; it seems the program did not account for the occurrence of such an event.

"...it's particularly weird that 0^0 = 1 (when all other zero-to-the-something powers are zero). Additionally, as 10^2 is 100, it follows that the 2nd root of 100 is ten (which it is). Analogously, with 10^0 = 1, the zeroth root of 1 is 10 (or 6, or 5, or -3). But are we not allowed to take zeroth roots of things, the same way we must refrain from dividing by zero?" First of all, 0^0 is NOT 1; it is undefined. And 10^0 is 1 by convention only, not by any proof; the reverse operation is not permitted. Even for a non-zero power, as in your other example, 10^2 is 100, which can equally be written as 10^2 = 100 = (-10)^2; clearly one cannot conclude 10 = -10 from it. Right? Notice that there is not even any convention for dividing by zero.
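The flight-simulator anecdote above has a direct modern echo in how standard math libraries compute a direction. A small Python sketch (ours):

import math

# Bearing of the vector between two map points, computed with atan2:
print(math.degrees(math.atan2(1.0, 1.0)))   # 45.0, a well-defined direction
print(math.degrees(math.atan2(0.0, 0.0)))   # 0.0, an arbitrary conventional
# value: IEEE defines atan2(+0, +0) = +0, but the direction of a zero-length
# vector is really undefined, so robust code (a flight simulator included)
# must test for zero length before computing a direction.

The library returns a number rather than crashing, but the number is a convention, not an answer; the direction of a zero vector remains undefined.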
"Thanks for your page on these controversial values; I experienced a non-zero measure of enjoyment while reading it."

Errant Views and Calculator-assisted Experiments
Our conclusion is that these two errant views (treating division by zero as meaningful, and treating the square-root operation as two-valued) are widely held among authors of applied mathematics texts and, unsurprisingly, among their students. Sadly, these persistent errors do not exist in isolation in a classroom or academic text: important conclusions are inappropriately drawn after a witting or unwitting division by zero, leading the person calculating to conclude, "therefore...," as he or she goes on to some consequent insight, as in the 1 = 2 examples above.

A basic calculator is provided here to assist you in performing numerical experimentation, for at least a few hours (as students do in, e.g., physics labs). It serves as a learning tool for the fundamental mathematical concepts and functions covered and highlighted in this site, including the trigonometric functions (all entries for angles must be between 0 and 360 degrees); the materials presented in this site may therefore also serve as a manual for this calculator. The main purpose of performing the experimentation is to enhance your learning: if you get any surprising result in the display, you must think carefully about "why?" The answers can be found in this site. Right? Unfortunately, most of the Java-based calculators available on the Web produce wrong results; to exercise your knowledge, find out what is wrong with them. For a collection of useful Java applet links, visit the A Basic Scientific Calculator Web site.

Notes, Further Readings, and References
{"url":"http://home.ubalt.edu/ntsbarsh/zero/zero.htm","timestamp":"2014-04-21T09:35:54Z","content_type":null,"content_length":"289519","record_id":"<urn:uuid:fcd26b6b-293d-4c49-8fb2-0670e0d9ee63>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairchilds, TX Statistics Tutor

Find a Fairchilds, TX Statistics Tutor

...If you need guidance with your college- and graduate-level reports and theses, please contact me, as I can help! I have had years of biology courses, both at the high school and college freshman level. I am adept at SPSS because I have used it for science research and for teaching other graduate students...
23 Subjects: including statistics, chemistry, physics, biology

...Although I have more experience tutoring in math and science, I enjoy tutoring language arts and other subjects. I understand that in addition to struggling with understanding a subject, test anxiety can be a difficult obstacle to overcome. Because I have personally dealt with test anxiety myse...
47 Subjects: including statistics, English, chemistry, reading

...My reputation at the Air Force Academy was as the top calculus instructor. I have taught precalculus during the past two years and have enjoyed success with the accomplishments of my students. Several have received excellent scholarships from various universities around the country, especially in Texas.
11 Subjects: including statistics, calculus, geometry, algebra 1

...I have a bachelor's degree in computer science and worked for a software company for three years. I have used C, C++, vb.net, Java, and C# extensively. I have a strong background in computers, including a bachelor's degree in computer science.
30 Subjects: including statistics, reading, Spanish, ASVAB

I am currently CRLA certified at level 3. I have been tutoring for close to 5 years now on most math subjects, from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including statistics, calculus, algebra 2, algebra 1
{"url":"http://www.purplemath.com/fairchilds_tx_statistics_tutors.php","timestamp":"2014-04-16T10:26:45Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:44ec5fa0-a241-40a9-87da-0ad6d069ae6c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
A development of the equations of electromagnetism in material continua
Harry F. Tiersten

The book derives the equations of electrostatics, magnetostatics and electromagnetics in material continua from the fundamental statements of Coulomb, Ampère, Faraday and Maxwell in a careful, systematic way. The book is intended to provide graduate students in mechanics and applied mathematics, as well as faculty and appropriate research workers, with an understanding of electromagnetism and to prepare them for studies on the interaction of the electric and magnetic fields with deformable solid continua.

Contents (opening sections): Introduction; Electric Field Equations in Charged Regions; Electric Field Equations in Charged and Polarized Regions.
{"url":"http://books.google.com/books?id=xt3vAAAAMAAJ&q=electric+field&source=gbs_word_cloud_r&cad=5","timestamp":"2014-04-19T10:14:07Z","content_type":null,"content_length":"107667","record_id":"<urn:uuid:020050cb-2134-44e5-bfb0-befaf978c03d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Flow rate through a square tube

Hi Fred. Nice award, I think you earned it! Hi Blaster,

Equations for fluid flow, such as the Darcy-Weisbach, Hagen-Poiseuille and Colebrook equations, the Moody chart and similar flow relations, all assume flow in circular pipes. These equations can be used for other cross-sectional shapes, such as square cross sections, but those cross sections have to be equated to a circular cross section that produces the same restriction. That cross section is called the "hydraulic diameter". Note that the hydraulic diameter is not the same as an "equivalent diameter" that results in the areas being equal. The hydraulic diameter can be calculated from:

Dh = 4A/U

where A = cross-sectional area and U = wetted perimeter. This is the equation Fred provided above. For example, a 1 mm square tube (inner dimensions) has a hydraulic diameter of 1 mm. Once you've found the hydraulic diameter, you can attack this just like any other circular pipe flow problem.

For incompressible flow, I'd suggest applying Darcy-Weisbach directly. The pressure drop is called head loss, or frictional head loss, and a few other names too. That equation is:

h = f L V^2 / (2 D g)

where f = friction factor, L = pipe length, V = fluid velocity, D = hydraulic diameter, and g = acceleration due to gravity (32.174 ft/s^2 = 9.806 m/s^2).

LMNO Engineering - I'm using this reference because they have a calculator at this site. You may not want to use it, but it would serve as a check of your own numbers. I'd suggest creating your own program using a spreadsheet.

The only variable above you won't have is the friction factor, which is a function of the Reynolds number. Wikipedia has a good article on the Reynolds number; it's the same equation provided by Clausius: Wikipedia Reynolds Number.

Of course, you still need the kinematic or absolute viscosity. Values for viscosity can be found in any textbook or on the web, for example here: Viscosity for Water.

Once you do that, you still need to find the friction factor. There are many ways to calculate it, including taking it directly off the Moody diagram. I'd recommend you create a spreadsheet that does all this for you, so I'd suggest using an equation as described at the Engineering Tips Forum here: Friction Factor at Eng Tips.

Note that for the above friction factor you'll need the pipe roughness, which depends on your actual hardware. If you don't know what it is for your square tube, I'd suggest using 0.00015 ft, which is commonly used for clean pipe. Note also that D in these equations is the hydraulic diameter.

The last thing to do is determine flow as a function of pressure drop per the Darcy-Weisbach equation. Note that head (h) is the pressure created by a given column of the fluid in question. Velocity is a function of flow rate. You'll need to separate out those portions and treat them separately, or possibly use the calculator given on the internet.

If I do these calculations, I come up with a Reynolds number of 2577, which is somewhere in the transition zone. It may be turbulent, and it may be laminar.
- If I assume it is laminar, the flow rate for your case will be .0324 gallons per minute. Convert that how you need to.
- If I assume the flow is turbulent, I get .0178 GPM.

A couple of things to note here. I've neglected exit losses, which in this case are fairly small, but explaining how to calculate exit losses would take another post this size. I've also assumed there are no elbows or other restrictions in this line. Finally, I've assumed the line is horizontal.
If there is any elevation change in the line, you can correct for that using Bernoulli's equation. This is a lot to cover in a single post, so I may have been overly brief. Feel free to ask questions; I'm sure I or others here can help out. Hope that helps.
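Here is a rough Python sketch of the iteration described above (ours, not from the thread). The geometry, tube length and driving head are illustrative assumptions, since the thread does not state them; exit losses, fittings and elevation are neglected, as the poster notes:

import math

# Assumed inputs: 1 mm square duct, water at about 20 C
side = 1.0e-3            # m, inner side of the square tube
L = 0.5                  # m, tube length (assumed)
head = 2.0               # m of water, driving head (assumed)
nu = 1.0e-6              # m^2/s, kinematic viscosity of water
g = 9.806                # m/s^2
eps = 0.00015 * 0.3048   # m, roughness of 0.00015 ft converted to metres

A = side * side          # cross-sectional area
U = 4.0 * side           # wetted perimeter
Dh = 4.0 * A / U         # hydraulic diameter (equals the side for a square)

def friction_factor(Re):
    if Re < 2300.0:
        return 64.0 / Re  # laminar flow
    # Swamee-Jain explicit approximation of the Colebrook equation
    return 0.25 / math.log10(eps / (3.7 * Dh) + 5.74 / Re**0.9) ** 2

# Darcy-Weisbach: head = f * (L/Dh) * V^2 / (2g). Iterate, since f depends on V.
V = 1.0                  # m/s, initial guess
for _ in range(50):
    Re = V * Dh / nu
    f = friction_factor(Re)
    V = math.sqrt(2.0 * g * head * Dh / (f * L))

Q = V * A                # volumetric flow rate, m^3/s
print(f"Re = {Re:.0f}, f = {f:.4f}, V = {V:.3f} m/s, Q = {Q*1e6:.2f} mL/s")

With these assumed numbers the flow settles in the laminar regime; changing the head, length or fluid moves the answer, which is why building your own spreadsheet or script, as suggested above, is worthwhile.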
{"url":"http://www.physicsforums.com/showthread.php?t=152479","timestamp":"2014-04-16T07:41:16Z","content_type":null,"content_length":"113652","record_id":"<urn:uuid:d20c338c-d09f-4eb8-8e4f-9012b3fbf874>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Stuck on Subspace proof question

Let A and B be m×n matrices and let S be the set of all x in R^n such that Ax = Bx. Show that S is a subspace of R^n.

My approach: I know that there are 3 properties for H to be a subspace of R^n:
1) the zero vector is in H
2) for each u and v in H, the sum u + v is in H
3) for each u in H and each scalar c, the vector cu is in H
I'm confused about how I would go about proving this. Would I take property 3 and multiply Ax = Bx by a scalar?

Reply: Sounds good to me!
1) What are A0 and B0? Are they equal? (0 is the zero vector.)
2) Suppose x and y are such that Ax = Bx and Ay = By. What are A(x + y) and B(x + y)? Are they equal?
3) Suppose x is such that Ax = Bx and c is a scalar. What are A(cx) and B(cx)? Are they equal?
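For the record, a worked version of the three checks the reply hints at, using only linearity of matrix multiplication (standard facts, not spelled out in the thread):

\[
\begin{aligned}
\text{(1) } & A\mathbf{0} = \mathbf{0} = B\mathbf{0}, \text{ so } \mathbf{0} \in S.\\
\text{(2) } & \text{If } Ax = Bx \text{ and } Ay = By, \text{ then } A(x+y) = Ax + Ay = Bx + By = B(x+y), \text{ so } x+y \in S.\\
\text{(3) } & \text{If } Ax = Bx \text{ and } c \text{ is a scalar, then } A(cx) = c\,Ax = c\,Bx = B(cx), \text{ so } cx \in S.
\end{aligned}
\]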
{"url":"http://mathhelpforum.com/advanced-algebra/162578-stuck-subspace-proof-question.html","timestamp":"2014-04-17T23:31:59Z","content_type":null,"content_length":"36116","record_id":"<urn:uuid:6f425f1a-11f9-46a0-a5a7-fe930a8e042e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
de Prisco, Lampson, and Lynch - Revisiting the Paxos Algorithm

Theoretical Computer Science 243 (2000) 35-91

Fundamental Study

Revisiting the PAXOS algorithm *

Roberto De Prisco (a,*), Butler Lampson (b), Nancy Lynch (a)

(a) MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA
(b) Microsoft Corporation, 180 Lake View Ave, Cambridge, MA 02138, USA

Received October 1998; revised July 1999
Communicated by M. Mavronicolas

Abstract

The PAXOS algorithm is an efficient and highly fault-tolerant algorithm, devised by Lamport, for reaching consensus in a distributed system. Although it appears to be practical, it seems to be not widely known or understood. This paper contains a new presentation of the PAXOS algorithm, based on a formal decomposition into several interacting components. It also contains a correctness proof and a time performance and fault-tolerance analysis. The formal framework used for the presentation of the algorithm is provided by the Clock General Timed Automaton (Clock GTA) model. The Clock GTA provides a systematic way of describing timing-based systems in which there is a notion of "normal" timing behavior, but that do not necessarily always exhibit this "normal" timing behavior. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: I/O automata models; Formal verification; Distributed consensus; Partially synchronous systems; Fault-tolerance

* A preliminary version of this paper appeared in Proceedings of the 11th International Workshop on Distributed Algorithms, Saarbrücken, Germany, September 1997, Lecture Notes in Computer Science, Vol. 1320, 1997, pp. 111-125. The first author is on leave from the Dipartimento di Informatica ed Applicazioni, Università di Salerno, 84081 Baronissi (SA), Italy.
* Corresponding author. E-mail addresses: robdep@theory.lcs.mit.edu (R. De Prisco), blampson@microsoft.com (B. Lampson), lynch@theory.lcs.mit.edu (N. Lynch).

0304-3975/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0304-3975(00)00042-6

Contents

1. Introduction
 1.1. Related work
 1.2. Road map
2. Models
 2.1. I/O automata and the GTA
 2.2. The Clock GTA
 2.3. Composition of automata
3. The distributed setting
 3.1. Processes
 3.2. Channels
 3.3. Distributed systems
4. The consensus problem
5. Failure detector and leader elector
 5.1. A failure detector
 5.2. A leader elector
6. The PAXOS algorithm
 6.1. Overview
 6.2. Automaton BASICPAXOS
 6.3. Automaton SPAX
 6.4. Correctness and analysis of SPAX
 6.5. Messages
 6.6. Concluding remarks
7. The MULTIPAXOS algorithm
8. Application to data replication
9. Conclusion
Acknowledgement
References

1. Introduction

Reaching consensus is a fundamental problem in distributed systems. Given a distributed system in which each process starts with an initial value, to solve a consensus problem means to give a distributed algorithm that enables each process to eventually output a value of the same type as the input values, in such a way that three conditions, called agreement, validity and termination, hold. There are different definitions of the problem depending on what these conditions require. Distributed consensus has been extensively studied. A good survey of early results is provided in [13]. We refer the reader to [24] for a more recent treatment of consensus problems.

Real distributed systems are often partially synchronous systems subject to process, channel and timing failures and process recoveries. In a partially synchronous distributed system, processes take actions within ℓ time and messages are delivered within d time, for given constants ℓ and d. However, these time bounds hold when the system exhibits a "normal" timing behavior; hence the above-mentioned bounds ℓ and d can be occasionally violated (timing failures). Processes may stop and recover; it is possible to keep the state of a process, or part of it, in stable storage so that the state, or part of it, survives the failure. Messages can be lost, duplicated or reordered.

Any practical consensus algorithm needs to consider the above practical setting. Moreover, the basic safety properties must not be affected by the occurrence of failures. Also, the performance of the algorithm must be good when there are no failures, while when failures occur it is reasonable not to expect efficiency. Lamport's PAXOS algorithm [19] meets these requirements. The model considered is a partially synchronous distributed system where each process has a direct communication channel with each other process. The failures allowed are timing failures; loss, duplication and reordering of messages; and process stopping failures. Process recoveries are allowed; some stable storage is needed.
PAXOS is guaranteed to work safely, that is, to satisfy agreement and validity, regardless of process, channel and timing failures and process recoveries. When the distributed system stabilizes, meaning that there are no failures, nor process recoveries, and a majority of the processes are not stopped, for a sufficiently long time, termination is also achieved and the performance of the algorithm is good. Hence PAXOS has good fault-tolerance properties, and when the system is stable it combines those fault-tolerance properties with the performance of an efficient algorithm, so that it can be useful in practice.

In the original paper [19], the PAXOS algorithm is described as the result of discoveries of archaeological studies of an ancient Greek civilization. That paper also contains a proof of correctness and a discussion of the performance analysis. The style used for the description of the algorithm often diverts the reader's attention. Because of this, we found the paper hard to understand and we suspect that others did as well. Indeed the PAXOS algorithm, even though it appears to be a practical and elegant algorithm, seems not widely known or understood. In [19] a variation of PAXOS that considers multiple concurrent runs of PAXOS for reaching consensus on a sequence of values is also presented. We call this variation the MULTIPAXOS algorithm.(1)

(1) PAXOS is the name of the ancient civilization studied in [19]. The actual algorithm is called the "single-decree synod" protocol and its variation for multiple consensus is called the "multi-decree parliament" protocol. We use the name PAXOS for the single-decree protocol and the name MULTIPAXOS for the multi-decree parliament protocol.

This paper contains a new, detailed presentation of the PAXOS algorithm, based on a formal decomposition into several interacting components. It also contains a correctness proof and a time performance and fault-tolerance analysis. The MULTIPAXOS algorithm is also described, together with an application to data replication. The formal framework used for the presentation is provided by the Clock General Timed Automaton (Clock GTA), which has been developed in [5]. The Clock GTA is a special type of Lynch and Vaandrager's General Timed Automaton (GTA) model [26-28]. The Clock GTA uses the timing mechanisms of the GTA to provide a systematic way of describing both the normal and the abnormal timing behaviors of a partially synchronous distributed system subject to timing failures. The model is intended to be used for performance and fault-tolerance analysis of practical distributed systems based upon the stabilization of the system.

The correctness proof uses automata composition and invariant assertion methods. Automata composition is useful for representing a system using separate components. We provide a modular presentation of the PAXOS algorithm, obtained by decomposing it into several components. Each one of these components copes with a specific aspect of the problem. In particular there is a "failure detector" module that detects process failures and recoveries. There is a "leader elector" module that copes with the problem of electing a leader; processes elected leaders by this module are used as leaders in PAXOS. The PAXOS algorithm is then split into a basic part that ensures agreement and validity and an additional part that ensures termination when the system stabilizes; the basic part of the algorithm, for the sake of clarity of presentation, is further subdivided into three components.
The correctness of each piece is proved by means of invariants, i.e., properties of system states which are always true in any execution. The time performance and fault-tolerance analysis is conditional on the stabilization of the system behavior starting from some point in an execution. Using the Clock GTA we prove that when the system stabilizes PAXOS reaches consensus in O(1) time and uses O(n) messages, where n is the number of processes. We also briefly discuss the MULTIPAXOS protocol and a data replication algorithm which uses MULTIPAXOS. With MULTIPAXOS the high availability of the replicated data is combined with high fault tolerance.

1.1. Related work

The consensus algorithms of Dwork et al. [9] and of Chandra and Toueg [2] bear some similarities with PAXOS. The algorithm of [9] also uses "rounds" conducted by a leader, but the strategy used in each round is different from the one used by PAXOS. Also, [9] does not consider process restarts. The time analysis provided in [9] is conditional on a "global stabilization time" after which process response times and message delivery times satisfy the time assumptions. This is similar to our stabilized analysis. A similar time analysis, applied to the problem of reliable group communication, can be found in [12].

The algorithm of Chandra and Toueg is based on the idea of an abstract failure detector [2]. It turns out that failure detectors provide an abstract and modular way of incorporating partial synchrony assumptions in the model of computation. A ◊P failure detector incorporates the partial synchrony considered in this paper. One of the algorithms in [2] uses a ◊S failure detector, which is weaker than a ◊P failure detector. This algorithm is based on the rotating coordinator paradigm and, like PAXOS, uses majorities to achieve consistency. However it takes, in the worst case, longer time than PAXOS to achieve termination. Chandra et al. [1] identified the "weakest" failure detector that can be used to solve the consensus problem. This weakest failure detector is ◊W and it is equivalent to ◊S. The Chandra and Toueg algorithm does not consider channel failures (however, it can be modified to work with loss of messages, but the resulting algorithm is less efficient than PAXOS with respect to the number of messages sent). The failure detector provided in this paper differs from those classified by Chandra and Toueg in that it provides reliability conditional on the system stabilization. If the system eventually stabilizes, then our failure detector can be classified in the class of the eventually perfect failure detectors. However, it should be noted that in order for PAXOS to achieve termination it is not needed that the system become stable forever, but only for a sufficiently long time. Dolev et al. [8] have adapted Chandra and Toueg's definition of failure detector to consider also omission failures and have given a distributed consensus protocol that allows majorities to achieve consensus.

MULTIPAXOS can be easily used to implement a data replication algorithm. The data replication algorithms in [23, 30, 17, 22] are based on ideas similar to the ones used in PAXOS.
Also, PAXOS is more efficient than the three-phase commit protocol when the system is stable and consensus has to be reached on a sequence of values (a three-phase protocol is needed only for the first consensus problem, while all the subsequent ones can be solved with a two-phase exchange of messages). Cristian’s timed asynchronous model [4] is similar to the distributed setting consid­ered in this paper. It assumes, however, a bounded clock drift even when the system is unstable. Our model is weaker in the sense that makes no assumption on clock drift when the system is unstable. The Clock GTA provides a formal way of modelling the stability property of the timed asynchronous model. In [31] Patt-Shamir introduces a special type of GTA used for the clock synchronization problem. The Clock GTA considers only the local time; our goal is to model good timing behavior starting from some point on and thus we are not concerned with synchronization of the local clocks. In [20] Lampson provides a brief overview of the PAXOS algorithm together with key ideas for proving the correctness of the algorithm. We used these ideas in the correctness proof provided in this 1.2. Road map Section 2 describes the I/O automaton models used and Section 3 describes the distributed system considered. Section 4 gives a formal definition of the consensus problem. In Section 5 a failure detector and a leader elector are presented; they are used by the PAXOS algorithm. The PAXOS algorithm itself is described and analyzed in Section 6. Section 7 describes MULTIPAXOS and Section 8 discusses how to use MULTIPAXOS to implement a data replication algorithm. 2. Models Our formal framework is provided by I/O automaton models, specifically by the Clock GTA model developed in [5]. In this section we briefly describe essential notions about I/O automata needed to read the rest of the paper. We refer the interested reader to [24, Chapters 8 and 23] for more information and references about I/O automaton models, and to [5] for a more detailed presentation of the Clock GTA model. 2.1. I/O automata and the GTA The I/O automata models are formal models suitable for describing asynchronous and partially synchronous distributed systems. An I/O automaton is a simple type of state machine in which transitions are associated with named actions. These actions are classified into categories, namely input, output, internal and, for the timed models, time-passage. Input and output actions are used for communication with the external environment, while internal actions are local to the automaton. The time-passage actions are intended to model the passage of time. The input actions are assumed not to be under the control of the automaton, that is, they are controlled by the external environment, which can force the automaton to execute the input actions. Internal and output actions are controlled by the automaton. The time-passage actions are also controlled by the automaton (though this may at first seem somewhat strange, it is just a formal way of modelling the fact that the automaton must perform some action before some amount of time elapses). The General Timed Automaton (GTA) uses time-passage actions called v(t), t E + to model the passage of time. The time-passage action v(t) represents the passage of time by the amount t. 
A GTA consists of four components: (i) the signature, consisting of four disjoint sets of actions, namely the input, output, internal and time-passage actions; (ii) the set of states; (iii) the set of initial states, which is a nonempty subset of the set of states; (iv) the state-transition relation, which specifies all the possible state-to-state transitions. A state-to-state transition, usually called a step, is a triple (s, π, s'), where s and s' are states of the automaton and π is an action that takes the automaton from s to s'. If for a particular state s and action π there is some transition of the form (s, π, s'), then we say that π is enabled in s. Input actions are enabled in every state.

A timed execution fragment of a GTA is defined to be either a finite sequence α = s0, π1, s1, π2, ..., πr, sr or an infinite sequence α = s0, π1, s1, π2, ..., πr, sr, ..., where the s's are states, the π's are actions (either input, output, internal, or time-passage), and (s_k, π_{k+1}, s_{k+1}) is a step for every k. Note that if the sequence is finite, it must end with a state. The length of a finite execution fragment α = s0, π1, s1, π2, ..., πr, sr is r. A timed execution fragment beginning with a start state is called a timed execution. If α is any timed execution fragment and πr is any action in α, then we say that the time of occurrence of πr is the sum of all the reals in the time-passage actions preceding πr in α. A timed execution fragment is said to be admissible if the sum of all the reals in the time-passage actions in α is ∞. A state is said to be reachable if it is the final state of a finite timed execution of the GTA. In the rest of the paper we will often refer to timed executions (resp. timed execution fragments) simply as executions (resp. execution fragments).

2.2. The Clock GTA

A Clock GTA is a GTA with a special component included in the state; this special variable is called Clock and it can assume values in ℝ. The purpose of Clock is to model the local clock of the process. The only actions that are allowed to modify Clock are the time-passage actions ν(t). When a time-passage action ν(t) is executed by the automaton, Clock is incremented by an amount of time t' > 0, independent of the amount t of time specified by the time-passage action.(2) Since the occurrence of the time-passage action ν(t) represents the passage of (real) time by the amount t, by incrementing the local variable Clock by an amount t' different from t we are able to model the passage of (local) time by the amount t'. As a special case, we have some time-passage actions in which t' = t; in these cases the local clock of the process is running at the speed of real time. In the following and in the rest of the paper, we use the notation s.x to denote the value of state component x in state s.

Definition 2.1. A step (s_{k-1}, ν(t), s_k) of a Clock GTA is called regular if s_k.Clock - s_{k-1}.Clock = t; it is called irregular if it is not regular. That is, a time-passage step executing action ν(t) is regular if it increases Clock by t' = t. In a regular time-passage step, the local clock is increased by the same amount as the real time, whereas in an irregular time-passage step ν(t) that represents the passage of real time by the amount t, the local clock is increased either by t' < t (the local clock is slower than the real time) or by t' > t (the local clock is faster than the real time).
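As an informal illustration (ours, not from the paper), the following Python sketch models the Clock state component and the time-passage action of a Clock GTA; the class and method names are invented for the example:

# A minimal sketch of a Clock GTA's time-passage steps. nu(t, t_local)
# represents nu(t): real time advances by t while Clock advances by t_local,
# which need not equal t.

class ClockGTA:
    def __init__(self):
        self.clock = 0.0          # the Clock state component (local time)
        self.real_time = 0.0      # kept outside the state: the automaton
                                  # itself cannot observe real time

    def nu(self, t, t_local):
        """Execute time-passage nu(t); return whether the step is regular."""
        regular = (t_local == t)  # Definition 2.1: regular iff Clock gains t
        self.real_time += t
        self.clock += t_local
        return regular

a = ClockGTA()
print(a.nu(1.0, 1.0))          # True  -- regular: local clock tracks real time
print(a.nu(1.0, 2.5))          # False -- irregular: models a timing failure
print(a.clock, a.real_time)    # 3.5 2.0 -- the local clock has drifted ahead

An execution containing only regular steps is regular in the sense of Definition 2.2 below; a single irregular step, as in the second call, makes the whole fragment irregular.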
Definition 2.2. A timed execution fragment α of a Clock GTA is called regular if all the time-passage steps of α are regular. It is called irregular if it is not regular, i.e., if at least one of its time-passage steps is irregular.

In a partially synchronous distributed system processes are expected to respond and messages are expected to be delivered within given time bounds. A timing failure is a violation of these time bounds. An irregular time-passage step can model the occurrence of a timing failure. We remark that a timing failure can actually be either an upper bound violation (a process or a channel is slower than expected) or a lower bound violation (a process or a channel is faster than expected). Obviously, in a regular execution fragment there are no timing failures. Though we have defined a regular execution fragment so that it does not contain any timing failures, we remark that for the scope of this paper we actually need only that the former type of timing failure (upper bound violation) does not happen. That is, for the scope of this paper, we could have defined a regular step ν(t) as one that increases the clock time by an amount t', t' ≤ t.

2.2.1. Using MMTAs to describe Clock GTAs

GTAs encode timing restrictions explicitly into the code of the automata. This provides a lot of flexibility but requires more complicated code to explicitly handle the time and the time bounds. In many situations, however, we do not need such flexibility and we only need to specify simple time bounds (e.g., an enabled action is executed within ℓ time). The MMTA^3 model is a subclass of the GTA model suitable for describing such simple time bounds. The MMTA does not have time-passage actions, but each action is coupled with its time of execution, so that the execution of an MMTA is a (not necessarily finite) sequence α = s0, (π1, t1), s1, (π2, t2), ..., (πr, tr), sr, ..., where the s's are states, the π's are actions, and the t's are times in ℝ≥0. To specify the time bounds, an MMTA has a fifth component (with respect to the four components of a GTA) called the task partition, which is an equivalence relation on the locally controlled actions (i.e., internal and output actions). Each equivalence class is called a task of the automaton. A task C having at least one enabled action is said to be enabled. Each task C has a lower bound, lower(C), and an upper bound, upper(C), on the time that can elapse before an enabled action belonging to the task C is executed. If the task is not enabled then there is no restriction. There is a standard technique that transforms any MMTA into a GTA (see [24, Section 23.1]). This technique can be extended to transform any MMTA into a Clock GTA (see [5]). In the rest of the paper we will sometimes use MMTAs to describe Clock GTAs, and when using MMTAs we will always use lower(C) = 0 and upper(C) = ℓ. The following lemma [5] holds.

^3 The name MMT derives from Merritt, Modugno, and Tuttle, who introduced this automaton [29].

Lemma 1. Consider a regular execution fragment α of a Clock GTA described with the MMTA model, starting from a reachable state s0 and lasting for more than ℓ time. Then (i) any task C enabled in s0 either has a step or is disabled within ℓ time, and (ii) any new enabling of C has a subsequent step or disabling within ℓ time, provided that α lasts for more than ℓ time from the enabling of C.
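To make the distinction of Definition 2.1 concrete, the following Python fragment is a minimal sketch of a Clock GTA time-passage step; the class name, the explicit bookkeeping of real time and the default drift behavior are our own illustrative assumptions, not part of the formal model.

    # Minimal sketch of a Clock GTA time-passage step (illustrative names).
    class ClockGTA:
        def __init__(self, clock=0.0):
            self.clock = clock          # local clock; may drift from real time
            self.real_time = 0.0        # real time, kept only for illustration

        def time_passage(self, t, t_prime=None):
            """Execute nu(t): real time advances by t, the local clock by
            t_prime. The step is regular iff t_prime == t (Definition 2.1)."""
            if t_prime is None:
                t_prime = t             # default: clock runs at real-time speed
            assert t > 0 and t_prime >= 0
            self.real_time += t
            self.clock += t_prime
            return t_prime == t         # True exactly for a regular step

    a = ClockGTA()
    print(a.time_passage(1.0))          # True: regular step
    print(a.time_passage(1.0, 0.5))     # False: irregular (slow local clock)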
2.3. Composition of automata

The composition operation allows an automaton representing a complex system to be constructed by composing automata representing simpler system components. The most important characteristic of the composition of automata is that properties of isolated system components still hold when those isolated components are composed with other components. The composition identifies actions with the same name in different component automata. When any component automaton performs a step involving action π, so do all component automata that have π in their signatures. Since internal actions of an automaton A are intended to be unobservable by any other automaton B, automaton A cannot be composed with automaton B unless the internal actions of A are disjoint from the actions of B. (Otherwise, A's performance of an internal action could force B to take a step.) Moreover, A and B cannot be composed unless the sets of output actions of A and B are disjoint. (Otherwise two automata would have control of an output action.) When A and B can be composed we say that they are compatible. The transitions of the composition are obtained by allowing all the components that have a particular action π in their signature to participate, simultaneously, in steps involving π, while all the other components do nothing. Note that this implies that all the components participate in time-passage steps, with the same amount of time passing for all of them. For a formal definition of the composition operation we refer the reader to [24, Section 23.2.3]. Here we recall the following theorems.

Theorem 2. The composition of a compatible collection of GTAs is a GTA.

Given the execution α = s0, π1, s1, ... of a GTA A obtained by composing a compatible collection {Ai}, i ∈ I, of GTAs, the notation α|Ai denotes the sequence obtained from α by deleting each pair πk, sk for which πk is not an action of Ai and by replacing each remaining sk by (sk)i, that is, automaton Ai's piece of sk.

Theorem 3. Let {Ai}, i ∈ I, be a compatible collection of GTAs and let A be the composition of Ai, for all i ∈ I. If α is an execution of A, then α|Ai is an execution of Ai, for every i ∈ I.

The above theorem is important because it enables us to claim that properties proven to be true for a particular automaton A are still true for a bigger automaton obtained by composing automaton A with other automata. We will make extensive use of this theorem in the rest of the paper. Clock GTAs are GTAs; hence, they can be composed as GTAs are composed. However, we point out that the composition of Clock GTAs does not yield a Clock GTA but a GTA.
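The projection α|Ai used in Theorem 3 is easy to state operationally. The sketch below is our own illustration, under the assumption that an execution is represented as an alternating list of states and actions and that each composite state is a mapping from component names to component states.

    # Sketch of projecting an execution of a composition onto one component.
    # An execution is represented as [s0, pi1, s1, pi2, s2, ...], where each
    # state s is a dict mapping component names to component states.
    def project(execution, component, actions_of_component):
        """Return execution | component: drop pairs (pi_k, s_k) whose action
        does not belong to the component; keep the component's piece of each
        remaining state."""
        result = [execution[0][component]]          # the start-state piece
        k = 1
        while k < len(execution):
            action, state = execution[k], execution[k + 1]
            if action in actions_of_component:
                result.extend([action, state[component]])
            k += 2
        return result

    execution = [{"A": 0, "B": 0}, "send", {"A": 1, "B": 0},
                 "recv", {"A": 1, "B": 1}]
    print(project(execution, "A", {"send"}))        # [0, 'send', 1]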
3. The distributed setting

In this section we discuss the distributed setting. We consider a partially synchronous distributed system consisting of n processes. The distributed system provides a bidirectional channel for every two processes. Each process is uniquely identified by its identifier i ∈ I, where I is a totally ordered finite set of n identifiers. The set I is known by all the processes. Moreover, each process of the system has a local clock. Local clocks can run at different speeds, though in general we expect them to run at the same speed as real time. We assume that a local clock is available also for channels; though this may seem somewhat strange, it is just a formal way to express the fact that a channel is able to deliver a given message within a fixed amount of time, by relying on some timing mechanism (which we model with the local clock). We use Clock GTAs to model both processes and channels. Throughout the rest of the paper we use two constants, ℓ and d, to represent upper bounds on the time needed to execute an enabled action and to deliver a message, respectively. These time bounds do not necessarily hold for every action and message in every execution; a violation of these bounds is a timing failure.

3.1. Processes

We allow process stopping failures and recoveries, and timing failures. To formally model process stops and recoveries we model process i with a Clock GTA which has a special state component called Statusi and two input actions Stopi and Recoveri. The state variable Statusi reflects the current condition of process i. The effect of action Stopi is to set Statusi to stopped, while the effect of Recoveri is to set Statusi to alive. Moreover, when Statusi = stopped, all the locally controlled actions are not enabled and the input actions have no effect, except for action Recoveri. We say that a process i is alive (resp. stopped) in a given state s if we have s.Statusi = alive (resp. s.Statusi = stopped). We say that a process i is alive (resp. stopped) in a given execution fragment if it is alive (resp. stopped) in all the states of the execution fragment. An automaton modelling a process is called a process automaton. Between a failure and a recovery a process does not lose its state. We remark that PAXOS needs only a small amount of stable storage (see Section 6.6); however, for simplicity, we assume that the entire state of a process is stable. We also assume that there is an upper bound of ℓ on the elapsed clock time while some locally controlled action is enabled. This time bound can be violated if timing failures happen. Finally, we provide the following definition of "stable" execution fragment of a given process automaton. This definition is used later to define a stable execution of a distributed system.

Definition 3.1. Given a process automaton PROCESSi, we say that an execution fragment α of PROCESSi is stable if process i is either stopped or alive in α and α is regular.

3.2. Channels

We consider unreliable channels that can lose and duplicate messages. Reordering of messages is allowed, i.e., it is not considered a failure. Timing failures are also possible. Fig. 1 shows the code of a Clock GTA CHANNELi,j, which models the communication channel from process i to process j; there is one automaton for each possible choice of i and j. Notice that we allow the possibility that the sender and the receiver are the same process. We denote by M the set of messages that can be sent over the channels. The time-passage actions of CHANNELi,j do not let time pass beyond t + d if a message (m, t), that is, a message m sent at local time t, is in the channel. Clearly this restriction is on the local time, and messages can also be lost. However, if the execution is regular and no messages are lost, then a particular message is delivered in a timely manner. The following definition of "stable" execution fragment for a channel captures the conditions under which messages are delivered on time.
CHANNELi,j

Input: Send(m)i,j, Losei,j, Duplicatei,j
Output: Receive(m)i,j
Time-passage: ν(t)

Clock ∈ ℝ, init. arbitrary
Msgs, a set of elements of M × ℝ, init. empty

input Send(m)i,j
Eff: add (m, Clock) to Msgs

input Duplicatei,j
Eff: let (m, t) be an element of Msgs
     let t' be such that t ≤ t' ≤ Clock
     place (m, t') into Msgs

output Receive(m)i,j
Pre: (m, t) is in Msgs, for some t
Eff: remove (m, t) from Msgs

input Losei,j
Eff: remove one element of Msgs

time-passage ν(t)
Pre: let t' > 0 be such that
     for all (m, t'') ∈ Msgs, Clock + t' ≤ t'' + d
Eff: Clock := Clock + t'

Fig. 1. Automaton CHANNELi,j.

Definition 3.2. Given a channel CHANNELi,j, we say that an execution fragment α of CHANNELi,j is stable if no Losei,j and Duplicatei,j actions occur in α and α is regular.

We remark that the above definition requires also that no Duplicatei,j actions happen. This is needed for the performance analysis (duplicated messages may introduce delays in the PAXOS algorithm). The next lemma follows from the above discussion.

Lemma 4. In a stable execution fragment α of CHANNELi,j beginning in a reachable state s and lasting for more than d time, we have that (i) all messages (m, t) that in state s are in Msgsi,j are delivered by time d, and (ii) any message sent in α is delivered within time d of the sending, provided that α lasts for more than d time from the sending of the message.

3.3. Distributed systems

A distributed system is the composition of automata modelling channels and processes. We are interested in modelling bad and good behaviors of a distributed system; in order to do so we provide some definitions that characterize the behavior of a distributed system. The definition of "nice" execution fragment given later in this section captures the good behavior of a distributed system. Informally, a distributed system behaves nicely if there are no process failures and recoveries, no channel failures and no irregular steps – remember that an irregular step models a timing failure – and a majority of the processes are alive.

Definition 3.3. A communication system for a set of processes is the composition of channel automata CHANNELi,j for all possible choices of i and j in the set.

Definition 3.4. A distributed system is the composition of process automata modelling a set of processes and a communication system for that set.

We define the communication system SCHA to be the communication system for the set I of all processes. Next we provide the definition of "stable" execution fragment for a distributed system, exploiting the definitions of stable execution fragment given previously for channels and process automata.

Definition 3.5. Given a distributed system S, we say that an execution fragment α of S is stable if: (i) for all automata PROCESSi modelling process i, i ∈ I, it holds that α|PROCESSi is a stable execution fragment for process i; (ii) for all channels CHANNELi,j with i, j ∈ I, it holds that α|CHANNELi,j is a stable execution fragment for CHANNELi,j.

Finally, we provide the definition of "nice" execution fragment, which captures the conditions under which PAXOS satisfies termination.

Definition 3.6. Given a distributed system S, we say that an execution fragment α of S is nice if α is a stable execution fragment and a majority of the processes are alive in α.

The above definition requires a majority of processes to be alive. As is explained in Section 6.6, any quorum scheme could be used instead of majorities.
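As an illustration of the timing discipline of Fig. 1, the following Python sketch (the class and method names, and the absence of losses and duplications, are our simplifying assumptions) stores each message with its local send time and refuses to let the local clock pass t + d while a message sent at time t is pending, so that in a loss-free regular run every message is delivered within d, as in Lemma 4.

    # Sketch of the timed channel of Fig. 1 (illustrative names, no losses).
    class Channel:
        def __init__(self, d):
            self.d = d                  # delivery bound
            self.clock = 0.0
            self.msgs = []              # list of (message, send_time) pairs

        def send(self, m):
            self.msgs.append((m, self.clock))

        def receive(self):
            # Delivery order is immaterial in the model; FIFO is arbitrary.
            return self.msgs.pop(0)[0] if self.msgs else None

        def time_passage(self, t):
            # Precondition: the clock may not pass t'' + d for any pending
            # message sent at local time t''.
            deadline = min((t2 + self.d for (_, t2) in self.msgs),
                           default=float("inf"))
            assert self.clock + t <= deadline, "would violate delivery bound"
            self.clock += t

    ch = Channel(d=2.0)
    ch.send("hello")
    ch.time_passage(2.0)     # allowed: clock reaches exactly send_time + d
    print(ch.receive())      # 'hello'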
In the rest of the paper, we will often use the word "system" to mean "distributed system".

4. The consensus problem

Several different but related agreement problems have been considered in the literature. All have in common that processes start the computation with initial values and at the end of the computation each process must reach a decision. The variations mostly concern stronger or weaker requirements that the solution to the problem has to satisfy. The requirements that a solution to the problem has to satisfy are captured by three properties, usually called agreement, validity and termination. It is clear that the definition of the consensus problem must take into account the distributed setting in which the problem is considered. We assume that for each process i there is an external agent that provides an initial value v by means of an action Init(v)i. We denote by V the set of possible initial values and, given a particular execution α, we denote by Vα the subset of V consisting of those values actually used as initial values in α, that is, those values provided by Init(v)i actions executed in α. A process i outputs a decision v by executing an action Decide(v)i. If a process i executes action Decide(v)i more than once, then the output value v must be the same. To solve the consensus problem means to give a distributed algorithm that, for any execution α of the system, satisfies

· Agreement: All the Decide(v)i actions in α have the same v.
· Validity: For any Decide(v)i action in α, v belongs to Vα.

and, for any admissible execution α, satisfies

· Termination: If α = βγ and γ is a nice execution fragment and for each process i alive in γ an Init(v)i action occurs in α while process i is alive, then any process i alive in γ executes a Decide(v)i action in α.

The agreement and termination conditions require, as one can expect, that processes "agree" on a particular value. The validity condition is needed to relate the output value to the input values (otherwise a trivial solution, i.e., always output a default value, exists).

5. Failure detector and leader elector

In this section we provide a failure detector algorithm and then we use it to implement a leader election algorithm, which, in turn, is used in Section 6 to implement PAXOS. The failure detector and the leader elector we implement here are both sloppy, meaning that they are guaranteed to give accurate information on the system only in a stable execution. However, this is enough for implementing PAXOS.

5.1. A failure detector

In this section we provide an automaton that detects process failures and recoveries. This automaton satisfies certain properties that we will need in the rest of the paper. We do not provide a formal definition of the failure detection problem; however, roughly speaking, the failure detection problem is the problem of checking which processes are alive and which ones are stopped. Fig. 2 shows a Clock GTA, called DETECTOR(z, c)i, which detects failures. In our setting failures and recoveries are modeled by means of actions Stopi and Recoveri. These two actions are input actions of DETECTOR(z, c)i. Moreover, DETECTOR(z, c)i has InformStopped(j)i and InformAlive(j)i as output actions, which are executed when, respectively, the stopping and the recovering of process j are detected.

DETECTOR(z, c)i

Input: Receive(m)j,i, Stopi, Recoveri
Output: InformStopped(j)i, InformAlive(j)i, Send(m)i,j
Internal: Check(j)i
Time-passage: ν(t)
Clock ∈ ℝ, init. arbitrary
Status ∈ {alive, stopped}, init. alive
Alive ∈ 2^I, init. I
for all j ∈ I:
  Prevrec(j) ∈ ℝ≥0, init. arbitrary
  Lastinform(j) ∈ ℝ≥0, init. Clock
  Lastsend(j) ∈ ℝ≥0, init. Clock
  Lastcheck(j) ∈ ℝ≥0, init. Clock

input Stopi
Eff: Status := stopped

input Recoveri
Eff: Status := alive

output Send("Alive")i,j
Pre: Status = alive
Eff: Lastsend(j) := Clock + z

input Receive("Alive")j,i
Eff: if Status = alive then
       Prevrec(j) := Clock
       if j ∉ Alive then Alive := Alive ∪ {j}
       Lastcheck(j) := Clock + c

internal Check(j)i
Pre: Status = alive
Eff: Lastcheck(j) := Clock + c
     if Clock > Prevrec(j) + z + d then Alive := Alive \ {j}

output InformStopped(j)i
Pre: Status = alive
     j ∉ Alive
Eff: Lastinform(j) := Clock + ℓ

output InformAlive(j)i
Pre: Status = alive
     j ∈ Alive
Eff: Lastinform(j) := Clock + ℓ

time-passage ν(t)
Pre: none
Eff: if Status = alive then
       let t' be such that
         ∀j, Clock + t' ≤ Lastinform(j)
         ∀j, Clock + t' ≤ Lastsend(j)
         ∀j, Clock + t' ≤ Lastcheck(j)
       Clock := Clock + t'

Fig. 2. Automaton DETECTOR for process i.

Automaton DETECTOR(z, c)i works by having each process constantly send "Alive" messages to every other process and check that such messages are received from the other processes. It sends at least one "Alive" message in any interval of time of a fixed length z (i.e., if an "Alive" message is sent at time t then the next one is sent before time t + z) and checks for incoming messages at least once in any interval of time of a fixed length c. Let us denote by SDET the system consisting of system SCHA and an automaton DETECTOR(z, c)i for each process i ∈ I. For simplicity of notation, henceforth we assume that z = ℓ and c = ℓ, that is, we use DETECTOR(ℓ, ℓ)i. In practice the choice of z and c may be different. Using the strategy of DETECTOR(ℓ, ℓ)i it is not hard to prove the following lemmas (for a detailed formal proof we refer the interested reader to [5]).

Lemma 5. If an execution fragment α of SDET, starting in a reachable state and lasting for more than 3ℓ + 2d time, is stable and process i is stopped in α, then by time 3ℓ + 2d from the beginning of α, for each process j alive in α, an action InformStopped(i)j is executed and no subsequent InformAlive(i)j action is executed in α.

Lemma 6. If an execution fragment α of SDET, starting in a reachable state and lasting for more than d + 2ℓ time, is stable and process i is alive in α, then by time d + 2ℓ from the beginning of α, for each process j alive in α, an action InformAlive(i)j is executed and no subsequent InformStopped(i)j action is executed in α.

The strategy used by DETECTORi is a straightforward one. For this reason it is very easy to implement. However, the failure detector so obtained is not reliable, i.e., it does not give accurate information in the presence of failures (Stopi, Losei,j, irregular executions). For example, it may consider a process stopped just because the "Alive" message of that process was lost in the channel. Automaton DETECTORi is guaranteed to provide accurate information on faulty and alive processes only when the system is stable.
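The rule applied by Check(j)i, namely declaring j stopped when no "Alive" message from j has arrived for more than z + d local-time units, can be sketched directly. The fragment below is our own simplified rendering; it also anticipates the leader-election rule of the next subsection (the largest identifier believed alive).

    # Sketch of the heartbeat rule used by DETECTOR (parameter names ours).
    # Process j is presumed alive iff an "Alive" message from j was received
    # within the last z + d local-time units.
    def alive_set(now, last_received, z, d):
        """last_received maps each process id to the local time of the most
        recent "Alive" message from it (or None if none was received)."""
        return {j for j, t in last_received.items()
                if t is not None and now <= t + z + d}

    def leader(now, last_received, z, d, me):
        """Sloppy leader election: the largest id believed alive; the caller
        always counts itself as alive."""
        return max(alive_set(now, last_received, z, d) | {me})

    last = {1: 9.0, 2: 4.0, 3: None}
    print(alive_set(10.0, last, z=1.0, d=1.0))      # {1}: 2 and 3 timed out
    print(leader(10.0, last, z=1.0, d=1.0, me=4))   # 4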
5.2. A leader elector

Electing a leader in an asynchronous distributed system is a difficult task. An informal argument that explains this difficulty is that the leader election problem is somewhat similar to the consensus problem (which, in an asynchronous system subject to failures, is unsolvable [14]) in the sense that to elect a leader all processes must reach consensus on which one is the leader. It is fairly clear how a failure detector can be used to elect a leader. Indeed, the failure detector gives information on which processes are alive and which ones are not alive. This information can be used to elect the current leader. We use the DETECTOR(ℓ, ℓ)i automaton to check for the set of alive processes. Fig. 3 shows automaton LEADERELECTORi, which is an MMTA. Remember that we use MMTAs to describe Clock GTAs in a simpler way. Automaton LEADERELECTORi interacts with DETECTOR(ℓ, ℓ)i by means of actions InformStopped(j)i, which inform process i that process j has stopped, and InformAlive(j)i, which inform process i that process j has recovered. Each process updates its view of the set of alive processes when these two actions are executed. The process with the biggest identifier in the set of alive processes is declared leader.

LEADERELECTORi

Input: InformStopped(j)i, InformAlive(j)i, Stopi, Recoveri
Output: Leaderi, NotLeaderi

Status ∈ {alive, stopped}, initially alive
Pool ∈ 2^I, initially {i}
Derived variable: Leader, defined as max of Pool

Actions:
input Stopi
Eff: Status := stopped

input Recoveri
Eff: Status := alive

input InformAlive(j)i
Eff: if Status = alive then Pool := Pool ∪ {j}

input InformStopped(j)i
Eff: if Status = alive then Pool := Pool \ {j}

output Leaderi
Pre: Status = alive
     i = Leader
Eff: none

output NotLeaderi
Pre: Status = alive
     i ≠ Leader
Eff: none

Tasks and bounds:
{Leaderi, NotLeaderi}, bounds [0, ℓ]

Fig. 3. Automaton LEADERELECTOR for process i.

We denote by SLEA the system consisting of SDET composed with a LEADERELECTORi automaton for each process i ∈ I. Fig. 4 shows SLEA; it also shows SDET, which is a subsystem of SLEA. Since DETECTOR(ℓ, ℓ)i is not a reliable failure detector, LEADERELECTORi is not reliable either. Thus, it is possible that processes have different views of the system, so that more than one process considers itself leader, or the process supposed to be the leader is actually stopped. However, as the failure detector becomes reliable when the system SDET executes a stable execution fragment (see Lemmas 5 and 6), the leader elector likewise becomes reliable when system SLEA is stable. Notice that when SLEA executes a stable execution fragment, so does SDET. Formally, we say that a state s of system SLEA is a unique-leader state if there exists an alive process i such that for all alive processes j it holds that s.Leaderj = i.

Fig. 4. The system SLEA.

In such a case, process i is the leader of state s. Moreover, we say that an execution α of system SLEA is a unique-leader execution if all the states of α are unique-leader states with the same leader in all the states. The next lemma states that in a stable execution fragment, eventually there is a unique-leader state.

Lemma 7. If an execution fragment α of SLEA, starting in a reachable state and lasting for more than 4ℓ + 2d, is stable, then by time 4ℓ + 2d there is a state occurrence s such that in state s and in all the states after s there is a unique leader. Moreover, this unique leader is always the process with the biggest identifier among the processes alive in α.

Proof. First notice that the system SLEA consists of system SDET composed with other automata. Hence by Theorem 3 we can use any property of SDET. In particular we can use Lemmas 5 and 6, and thus we have that by time 3ℓ + 2d each process has a consistent view of the set of alive and stopped processes. Let i be the leader.
Since α is stable and thus also regular, by Lemma 1, within additional ℓ time, actions Leaderj and NotLeaderj are consistently executed for each process j, including process j = i. The fact that i is the process with the biggest identifier among the processes alive in α follows directly from the code of LEADERELECTORi.

We remark that, for many algorithms that rely on the concept of leader, it is important to provide exactly one leader. For example, when leader election is used to generate a new token in a token ring network, it is important that there is exactly one process (the leader) that generates the new token, because the network gives the right to send messages to the owner of the token, and two tokens may result in interference between two communications. For these algorithms, having two or more leaders jeopardizes correctness. Hence the sloppy leader elector provided above is not suitable. However, for the purpose of this paper, LEADERELECTOR is all we need.

6. The PAXOS algorithm

PAXOS was devised a very long time ago^4 but its discovery, due to Lamport, is very recent [19]. In this section we describe the PAXOS algorithm, provide an implementation using Clock GT automata, prove its correctness and analyze its performance. The performance analysis is given assuming that there are no failures or recoveries, and that a majority of the processes are alive for a sufficiently long time. We remark that when no restrictions are imposed on the possible failures, the algorithm might not terminate.

^4 The most accurate information dates it back to the beginning of this millennium [19].

6.1. Overview

Our description of PAXOS is modular: we have separated the various parts of the overall algorithm; each piece copes with a particular aspect of the problem. This approach should make the understanding of the algorithm much easier. The core part of the algorithm is a module that we call BASICPAXOS; this piece incorporates the basic ideas on which the algorithm itself is built. The description of this piece is further subdivided into three components, namely BPLEADER, BPAGENT and BPSUCCESS. In BASICPAXOS processes try to reach a decision by running what we call a "round". A process starting a round is the leader of that round. BASICPAXOS guarantees that, no matter how many leaders start rounds, agreement and validity are not violated. This means that in any run of the algorithm no two different decisions are ever made and any decision is equal to some input value. However, to have a complete algorithm that satisfies termination when there are no failures for a sufficiently long time, we need to augment BASICPAXOS with another module; we call this module STARTER. The functionality of STARTER is to make the current leader start a new round if the previous one is not completed within some time bound. Leaders are elected by using the LEADERELECTOR algorithm provided in Section 5. We remark that this is possible because the presence of two or more leaders does not jeopardize agreement or validity; however, to get termination there must be a unique leader.

Fig. 5. PAXOS: process i. Some of the actions shown in the figure will be defined later in this section.

Thus, our implementation of PAXOS is obtained by composing the following automata: CHANNELi,j for the communication between processes, DETECTORi and LEADERELECTORi for the leader election, and BASICPAXOSi and STARTERi, for every i, j ∈ I. The resulting system is called SPAX. Fig. 5 shows the automaton at process i.
Notice that not all of the actions are drawn in the picture: we have drawn only some of them and we refer to the formal code for all of the actions. Actions Stopi and Recoveri are input actions of all the automata. The SPAX automaton at process i interacts with the automata at other processes by sending messages over the channels. Channels are not drawn in the picture. Fig. 6 shows the messages exchanged by processes i and j. The automata that send and receive these messages are shown in the picture. We remark that channels and actions interacting with channels are not drawn, as well as other actions for the interaction with other automata.

Fig. 6. BASICPAXOS: Messages.

It is worth remarking that some pieces of the algorithm do need to be able to measure the passage of time (DETECTORi, STARTERi and BPSUCCESSi) while others do not. We will prove (Theorems 9 and 10) that the system SPAX solves the consensus problem ensuring partial correctness – any output is guaranteed to be correct, that is, agreement and validity are satisfied – and (Theorem 17) that SPAX guarantees also termination when the system executes a nice execution fragment, that is, without failures and recoveries and with at least a majority of the processes remaining alive.

6.1.1. Roadmap for the rest of the section

In Section 6.2 we provide automaton BASICPAXOS. This automaton is responsible for carrying out a round in response to an external request. We prove that any round satisfies agreement and validity and we provide a performance analysis for a successful round. Then in Section 6.3 we provide automaton STARTER, which takes care of the problem of starting new rounds. In Section 6.4 we prove that the entire system SPAX is correct and provide a performance analysis. In Section 6.5 we provide some comments about the number of messages used by the algorithm. Finally, Section 6.6 contains some concluding remarks.

6.2. Automaton BASICPAXOS

In this section we present the automaton BASICPAXOS, which is the core part of the PAXOS algorithm. We begin by providing an overview of how automaton BASICPAXOS works, then we provide the automaton code along with a detailed description, and finally we prove that it satisfies agreement and validity.

6.2.1. Overview

The basic idea is to have processes propose values until one of them is accepted by a majority of the processes; that value is the final output value. Any process may propose a value by initiating a round for that value. The process initiating a round is said to be the leader of that round, while all processes, including the leader itself, are said to be agents for that round. Informally, the steps for a round are the following.

(1) To initiate a round, the leader sends a "Collect" message to all agents^5 announcing that it wants to start a new round and at the same time asking for information about previous rounds in which agents may have been involved.

(2) An agent that receives a message sent in Step 1 from the leader of the round responds with a "Last" message giving its own information about rounds previously conducted. With this, the agent makes a kind of commitment for this particular round that may prevent it from accepting (in Step 4) the value proposed in some other round. If the agent is already committed for a round with a bigger round number, then it informs the leader of its commitment with an "OldRound" message.
(3) Once the leader has gathered information about previous rounds from a majority of agents, it decides, according to some rules, the value to propose for its round and sends to all agents a "Begin" message announcing the value and asking them to accept it. In order for the leader to be able to choose a value for the round, it is necessary that initial values be provided. If no initial value is provided, the leader must wait for an initial value before proceeding with Step 3. The set of processes from which the leader gathers information is called the info-quorum of the round.

(4) An agent that receives a message from the leader of the round sent in Step 3 responds with an "Accept" message by accepting the value proposed in the current round, unless it is committed for a later round and thus must reject the value proposed in the current round. In the latter case the agent sends an "OldRound" message to the leader indicating the round for which it is committed.

(5) If the leader gets "Accept" messages from a majority of agents, then the leader sets its own output value to the value proposed in the round. At this point the round is successful. The set of agents that accept the value proposed by the leader is called the accepting-quorum.

Since a successful round implies that the leader of the round reached a decision, after a successful round the leader still needs to do something, namely to broadcast the reached decision. Thus, once the leader has made a decision it broadcasts a "Success" message announcing the value for which it has decided. An agent that receives a "Success" message from the leader makes its decision choosing the value of the successful round. We use also an "Ack" message sent from the agent to the leader, so that the leader can make sure that everyone knows the outcome. Fig. 7 shows: (a) the steps of a successful round r; (b) the responses from an agent that informs the leader that a higher numbered round r' has already been initiated; (c) the broadcast of a decision. The parameters used in the messages will be explained later. Section 6.2.2 contains a description of the messages.

^5 Thus it sends a message also to itself. This helps in that we do not have to specify different behaviors for a process according to whether it is both leader and agent or just an agent. We just need to specify the leader behavior and the agent behavior.

Fig. 7. Exchange of messages.

Since different rounds may be carried out concurrently (several processes may concurrently initiate rounds), we need to distinguish them. Every round has a unique identifier. Next we formally define these round identifiers. A round number is a pair (x, i) where x is a nonnegative integer and i is a process identifier. The set of round numbers is denoted by R. A total order on elements of R is defined by (x, i) < (y, j) iff x < y, or x = y and i < j. We say that round r precedes round r' if r < r'. If round r precedes round r', then we also say that r is a previous round with respect to round r'. We remark that the ordering of rounds is not related to the actual time the rounds are conducted. It is possible that a round r' is started at some point in time and a previous round r, that is, one with r < r', is started later on. For each process i, we define a "+i" operation that, given a round number (x, j) and an integer y, returns the round number (x, j) +i y = (x + y, i). Every round in the algorithm is tagged with a unique round number.
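Round numbers, their total order, and the +i operation can be stated directly in code; the following Python sketch is our own rendering of these definitions.

    # Sketch of round numbers (x, i) with their total order and +i operation.
    from functools import total_ordering

    @total_ordering
    class Round:
        def __init__(self, x, i):
            self.x, self.i = x, i       # counter and process identifier

        def __eq__(self, other):
            return (self.x, self.i) == (other.x, other.i)

        def __lt__(self, other):        # (x,i) < (y,j) iff x<y, or x=y and i<j
            return (self.x, self.i) < (other.x, other.i)

        def plus(self, y, i):           # (x, j) +i y = (x + y, i)
            return Round(self.x + y, i)

    r = Round(2, 1)
    print(r < Round(2, 3), r < Round(1, 5))   # True False
    print(vars(r.plus(1, 7)))                 # {'x': 3, 'i': 7}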
Every message sent by the leader or by an agent for a round (with round number) r ∈ R carries the round number r, so that no confusion among messages belonging to different rounds is possible. However, the most important issue concerns the values that leaders propose for their rounds. Indeed, since the value of a successful round is the output value of some processes, we must guarantee that the values of successful rounds are all equal in order to satisfy the agreement condition of the consensus problem. This is the tricky part of the algorithm, and basically all the difficulties derive from solving this problem. Consistency is guaranteed by choosing the values of new rounds exploiting the information about previous rounds from at least a majority of the agents, so that, for any two rounds, there is at least one process that participated in both rounds. In more detail, the leader of a round chooses the value for the round in the following way. In Step 1 the leader asks for information, and in Step 2 an agent responds with the number of the latest round in which it accepted the value and with the accepted value, or with round number (0, j) and nil if the agent has not yet accepted a value. Once the leader gets such information from a majority of the agents (which is the info-quorum of the round), it chooses the value for its round to be equal to the value of the latest round among all those it has heard from the agents in the info-quorum, or equal to its initial value if all agents in the info-quorum were not involved in any previous round. Moreover, in order to keep consistency, if an agent tells the leader of a round r that the last round in which it accepted a value is round r', r' < r, then implicitly the agent commits itself not to accept any value proposed in any other round r'', r' < r'' < r. Given the above setting, if r' is the round from which the leader of round r gets the value for its round, then, when a value for round r has been chosen, any round r'', r' < r'' < r, cannot be successful; indeed, at least a majority of the processes are committed for round r, which implies that at least a majority of the processes are rejecting round r''. This, along with the fact that info-quorums and accepting-quorums are majorities, implies that if a round r is successful, then any round with a bigger round number r̃ > r is for the same value. Indeed, the information sent by processes in the info-quorum of round r̃ is used to choose the value for round r̃, but since info-quorums and accepting-quorums share at least one process, at least one of the processes in the info-quorum of round r̃ is also in the accepting-quorum of round r (since round r is successful, its accepting-quorum is a majority). This implies that the value of any round r̃ > r must be equal to the value of round r, which, in turn, implies agreement. We remark that instead of majorities for info-quorums and accepting-quorums, any quorum system can be used. Indeed, the only property that is required is that there be a process in the intersection of any info-quorum with any accepting-quorum.
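The value-selection rule just described can be sketched as follows; the representation of "Last" messages as (round, value) pairs is our own simplification.

    # Sketch of the leader's value-selection rule. Each "Last" message is a
    # pair (last_round, last_value); last_value is None if the agent never
    # accepted a value.
    def choose_value(last_messages, init_value):
        """Pick the value of the highest-numbered round reported by the
        info-quorum; fall back to the leader's initial value if nobody in
        the quorum ever accepted a value."""
        value, val_from = None, (0, 0)
        for last_round, last_value in last_messages:
            if last_value is not None and last_round > val_from:
                value, val_from = last_value, last_round
        return value if value is not None else init_value

    quorum = [((0, 2), None), ((1, 2), "vB"), ((2, 4), "vB")]
    print(choose_value(quorum, "vA"))             # 'vB', from round (2, 4)
    print(choose_value([((0, 1), None)], "vA"))   # 'vA': nobody accepted yet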
Example. Fig. 8 shows how the value of a round is chosen. In this example we have a network of 5 processes, A, B, C, D, E (where the ordering is the alphabetical one) and vA, vB denote the initial values of A and B. At some point process B is the leader and starts round (1, B). It receives information from A, B, E (the set {A, B, E} is the info-quorum of this round). Since none of them has been involved in a previous round, process B is free to choose its initial value vB as the value of the round. However, it receives acceptance only from B, C (the set {B, C} is the accepting-quorum for this round). Later, process A becomes the leader and starts round (2, A). The info-quorum for this round is {A, D, E}. Since none of these processes has accepted a value in a previous round, A is free to choose its initial value for its round. For round (2, D) the info-quorum is {C, D, E}. This time the quorum contains process C, which has accepted a value in round (1, B), so the value of this round must be the same as that of round (1, B). For round (3, A) the info-quorum is {A, B, E} and, since A has accepted the value of round (2, A), the value of round (2, A) is chosen for round (3, A). For round (3, B) the info-quorum is {A, C, D}. In this case there are three processes that accepted values in previous rounds: process A, which has accepted the value of round (2, A), and processes C, D, which have accepted the value of round (2, D). Since round (2, D) has the highest round number, the value for round (3, B) is taken from round (2, D). Round (3, B) is successful.

Fig. 8. Choosing the values of rounds. Empty boxes denote that the process is in the info-quorum, and black boxes denote acceptance. Dotted lines indicate commitments.

To end up with a decision value, rounds must be started until at least one is successful. The basic consensus module BASICPAXOS guarantees that a new round does not violate agreement or validity, that is, the value of a new round is chosen in such a way that if the round is successful, it does not violate agreement and validity. However, it is necessary to make BASICPAXOS start rounds until one is successful. We deal with this problem in Section 6.3.

6.2.2. The code

In order to describe automaton BASICPAXOSi for process i, we provide three automata. One is called BPLEADERi and models the "leader" behavior of the process; another one is called BPAGENTi and models the "agent" behavior of the process; the third one is called BPSUCCESSi and simply takes care of broadcasting a reached decision. Automaton BASICPAXOSi is the composition of BPLEADERi, BPAGENTi and BPSUCCESSi. Figs. 9 and 10 show the code for BPLEADERi, while Fig. 11 shows the code for BPAGENTi. We remark that these code fragments are written using the MMTA model. Remember that we use MMTAs to describe Clock GT automata in a simpler way. Figs. 12 and 13 show automaton BPSUCCESSi. The purpose of this automaton is simply to broadcast the decision once it has been reached by the leader of a round. Figs. 6 and 7 describe the exchange of messages used in a round. It is worth noticing that the code fragments are "tuned" to work efficiently when there are no failures. Indeed, messages for a given round are sent only once, that is, no attempt is made to cope with losses of messages, and responses are expected to be received within given time bounds. Other strategies that try to conduct a successful round even in the presence of some failures could be used. For example, messages could be sent more than once to cope with the loss of some messages, or a leader could wait more than the minimum required time before abandoning the current round and starting a new one – this is actually dealt with in Section 6.3. We have chosen to send only one message for each step of the round: if the execution is nice, one message is enough to conduct a successful round.
Once a decision has been made, there is nothing to do but try to send it to the others. Thus, once the decision has been made by the leader, the leader repeatedly sends the decision to the agents until it gets an acknowledgment. We remark that also in this case, in practice, it is important to choose appropriate time-outs for the re-sending of a message; in our implementation we have chosen to wait the minimum amount of time required by an agent to respond to a message from the leader; if the execution is stable this is enough to ensure that only one message announcing the decision is sent to each agent. We remark that there is some redundancy that derives from having separate automata for the leader behavior and for the broadcasting of the decision. For example, both automata BPLEADERi and BPSUCCESSi need to be aware of the decision, thus both have a Decision variable (the Decision variable of BPSUCCESSi is updated when action RndSuccessi is executed by BPLEADERi after the Decision variable of BPLEADERi is set). Having only one automaton would have eliminated the need for such a duplication. However, we preferred to separate BPLEADERi and BPSUCCESSi because they accomplish different tasks.

BPLEADERi

Input: Receive(m)j,i, m ∈ {"Last", "Accept", "OldRound"}
       Init(v)i, NewRoundi, Stopi, Recoveri, Leaderi, NotLeaderi
Internal: Collecti, Continuei, BeginCasti
          GatherLast(m)i, m is a "Last" message
          GatherAccept(m)i, m is an "Accept" message
          GatherOldRound(m)i, m is an "OldRound" message
Output: Send(m)i,j, m ∈ {"Collect", "Begin"}
        Gathered(v)i, RndSuccess(v)i

Status ∈ {alive, stopped}, init. alive
IamLeader, a boolean, init. false
Mode ∈ {collect, gatherlast, gathered, wait, begincast, gatheraccept, decided, done}, init. done
InitValue ∈ V ∪ {nil}, init. nil
Decision ∈ V ∪ {nil}, init. nil
CurRnd ∈ R, init. (0, i)
HighestRnd ∈ R, init. (0, i)
Value ∈ V ∪ {nil}, init. nil
ValFrom ∈ R, init. (0, i)
InfoQuo ∈ 2^I, init. {}
AcceptQuo ∈ 2^I, init. {}
InMsgs, multiset of msgs, init. {}
OutMsgs, multiset of msgs, init. {}
Derived variable: LeaderAlive, a boolean, true iff Status = alive and IamLeader = true

Actions:
input Stopi
Eff: Status := stopped

input Recoveri
Eff: Status := alive

input Leaderi
Eff: if Status = alive then IamLeader := true

input NotLeaderi
Eff: if Status = alive then IamLeader := false

output Send(m)i,j
Pre: Status = alive
     mi,j ∈ OutMsgs
Eff: remove mi,j from OutMsgs

input Receive(m)j,i
Eff: if Status = alive then add mj,i to InMsgs

input Init(v)i
Eff: if Status = alive then InitValue := v

input NewRoundi
Eff: if LeaderAlive = true then
       CurRnd := HighestRnd +i 1
       HighestRnd := CurRnd
       Mode := collect

Fig. 9. Automaton BPLEADER for process i (part 1).
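The Mode variable of BPLEADER acts as a program counter for the round; the table below is our own summary of its progression through the actions of Figs. 9 and 10 (guards such as "a majority has answered" are elided).

    # Sketch of the leader's Mode progression through one round (simplified).
    TRANSITIONS = {
        ("done",         "NewRound"):     "collect",
        ("collect",      "Collect"):      "gatherlast",
        ("gatherlast",   "GatherLast"):   "gathered",   # once a majority answered
        ("gathered",     "Gathered"):     "begincast",  # or "wait" if no value yet
        ("wait",         "Continue"):     "begincast",  # an initial value arrived
        ("begincast",    "BeginCast"):    "gatheraccept",
        ("gatheraccept", "GatherAccept"): "decided",    # once a majority accepted
        ("decided",      "RndSuccess"):   "done",
    }

    def step(mode, action):
        return TRANSITIONS.get((mode, action), mode)  # other actions: no change

    mode = "done"
    for a in ["NewRound", "Collect", "GatherLast", "Gathered",
              "BeginCast", "GatherAccept", "RndSuccess"]:
        mode = step(mode, a)
    print(mode)   # 'done': the round completed successfully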
output Collecti
Pre: LeaderAlive = true
     Mode = collect
Eff: ValFrom := (0, i)
     InfoQuo := {}
     AcceptQuo := {}
     ∀j put (CurRnd, "Collect")i,j in OutMsgs
     Mode := gatherlast

internal GatherLast(m)i
Pre: LeaderAlive = true
     Mode = gatherlast
     m = (r, "Last", r', v)j,i, m ∈ InMsgs
     CurRnd = r
Eff: remove m from InMsgs
     InfoQuo := InfoQuo ∪ {j}
     if ValFrom < r' and v ≠ nil then
       Value := v
       ValFrom := r'
     if |InfoQuo| > n/2 then Mode := gathered

output Gathered(Value)i
Pre: LeaderAlive = true
     Mode = gathered
Eff: if Value = nil and InitValue ≠ nil then Value := InitValue
     if Value ≠ nil then Mode := begincast
     else Mode := wait

internal Continuei
Pre: LeaderAlive = true
     Mode = wait
     Value = nil
     InitValue ≠ nil
Eff: Value := InitValue
     Mode := begincast

internal BeginCasti
Pre: LeaderAlive = true
     Mode = begincast
Eff: ∀j, let m be (CurRnd, "Begin", Value)i,j; put m in OutMsgs
     Mode := gatheraccept

internal GatherAccept(m)i
Pre: LeaderAlive = true
     Mode = gatheraccept
     m = (r, "Accept")j,i, m ∈ InMsgs
     CurRnd = r
Eff: remove m from InMsgs
     AcceptQuo := AcceptQuo ∪ {j}
     if |AcceptQuo| > n/2 then
       Decision := Value
       Mode := decided

output RndSuccess(Decision)i
Pre: LeaderAlive = true
     Mode = decided
Eff: Mode := done

internal GatherOldRound(m)i
Pre: Status = alive
     m = (r, "OldRound", r')j,i, m ∈ InMsgs
     HighestRnd < r'
Eff: remove m from InMsgs
     HighestRnd := r'

Tasks and bounds:
{Collecti, Gathered(v)i, Continuei, BeginCasti, RndSuccess(v)i}, bounds [0, ℓ]
{GatherLast(m)i, m ∈ InMsgs, m is a "Last" message}, bounds [0, ℓ]
{GatherAccept(m)i, m ∈ InMsgs, m is an "Accept" message}, bounds [0, ℓ]
{GatherOldRound(m)i, m ∈ InMsgs, m is an "OldRound" message}, bounds [0, ℓ]
{Send(m)i,j, mi,j ∈ OutMsgs}, bounds [0, ℓ]

Fig. 10. Automaton BPLEADER for process i (part 2).

BPAGENTi

Input: Receive(m)j,i, m ∈ {"Collect", "Begin"}
       Init(v)i, Stopi, Recoveri
Internal: LastAccept(m)i, m is a "Collect" message
          Accept(m)i, m is a "Begin" message
Output: Send(m)i,j, m ∈ {"Last", "Accept", "OldRound"}

Status ∈ {alive, stopped}, init. alive
Commit ∈ R, init. (0, i)
LastR ∈ R, init. (0, i)
LastV ∈ V ∪ {nil}, init. nil
InMsgs, multiset of msgs, init. {}
OutMsgs, multiset of msgs, init. {}

input Stopi
Eff: Status := stopped

input Recoveri
Eff: Status := alive

output Send(m)i,j
Pre: Status = alive
     mi,j ∈ OutMsgs
Eff: remove mi,j from OutMsgs

input Receive(m)j,i
Eff: if Status = alive then add mj,i to InMsgs

input Init(v)i
Eff: if Status = alive then
       if LastV = nil then LastV := v

internal LastAccept(m)i
Pre: Status = alive
     m = (r, "Collect")j,i ∈ InMsgs
Eff: remove m from InMsgs
     if r ≥ Commit then
       Commit := r
       put (r, "Last", LastR, LastV)i,j in OutMsgs
     else put (r, "OldRound", Commit)i,j in OutMsgs

internal Accept(m)i
Pre: Status = alive
     m = (r, "Begin", v)j,i ∈ InMsgs
Eff: remove m from InMsgs
     if r ≥ Commit then
       put (r, "Accept")i,j in OutMsgs
       LastR := r, LastV := v
     else put (r, "OldRound", Commit)i,j in OutMsgs

Tasks and bounds:
{LastAccept(m)i, m ∈ InMsgs, m is a "Collect" message}, bounds [0, ℓ]
{Accept(m)i, m ∈ InMsgs, m is a "Begin" message}, bounds [0, ℓ]
{Send(m)i,j, mi,j ∈ OutMsgs}, bounds [0, ℓ]

Fig. 11. Automaton BPAGENT for process i.

BPSUCCESSi

Input: Receive(m)j,i, m ∈ {"Ack", "Success"}
       Stopi, Recoveri, Leaderi, NotLeaderi, RndSuccess(v)i
Internal: SendSuccessi, Checki
Output: Send(m)i,j, m ∈ {"Ack", "Success"}
Time-passage: ν(t)

Clock ∈ ℝ, init. arbitrary
Status ∈ {alive, stopped}, init. alive
IamLeader, a boolean, init. false
Decision ∈ V ∪ {nil}, init. nil
PrevSend ∈ ℝ ∪ {nil}, init. nil
LastCheck ∈ ℝ ∪ {∞}, init. ∞
LastSS ∈ ℝ ∪ {∞}, init. ∞
For each j ∈ I:
  Acked(j), a boolean, init. false
  LastSendAck(j) ∈ ℝ ∪ {∞}, init. ∞
  LastSendSuc(j) ∈ ℝ ∪ {∞}, init. ∞
  OutAckMsgs(j), set of msgs, init. {}
  OutSucMsgs(j), set of msgs, init. {}

Fig. 12. Automaton BPSUCCESS for process i (part 1).
input Stopi
Eff: Status := stopped

input Recoveri
Eff: Status := alive

input Leaderi
Eff: if Status = alive and IamLeader = false then
       IamLeader := true
       if Decision ≠ nil then
         LastSS := Clock + ℓ
         PrevSend := nil

input NotLeaderi
Eff: if Status = alive then
       IamLeader := false
       LastSS := ∞
       LastCheck := ∞
       for each j ∈ I: LastSendSuc(j) := ∞

output Send(m)i,j
Pre: Status = alive
     mi,j ∈ OutAckMsgs(j)
Eff: OutAckMsgs(j) := {}
     LastSendAck(j) := ∞

output Send(m)i,j
Pre: Status = alive
     mi,j ∈ OutSucMsgs(j)
Eff: OutSucMsgs(j) := {}
     LastSendSuc(j) := ∞

input Receive(("Ack"))j,i
Eff: if Status = alive then Acked(j) := true

input Receive(("Success", v))j,i
Eff: if Status = alive then
       Decision := v
       put ("Ack")i,j into OutAckMsgs(j)
       LastSendAck(j) := Clock + ℓ

internal SendSuccessi
Pre: Status = alive
     IamLeader = true
     Decision ≠ nil
     PrevSend = nil
     ∃j ≠ i, Acked(j) = false
Eff: ∀j ≠ i such that Acked(j) = false:
       put ("Success", Decision)i,j in OutSucMsgs(j)
       LastSendSuc(j) := Clock + ℓ
     PrevSend := Clock
     LastCheck := Clock + (2ℓ + 2d) + ℓ
     LastSS := ∞

time-passage ν(t)
Pre: none
Eff: if Status = alive then
       let t' be such that
         Clock + t' ≤ LastCheck
         Clock + t' ≤ LastSS
         and for each j ∈ I
           Clock + t' ≤ LastSendAck(j)
           Clock + t' ≤ LastSendSuc(j)
       Clock := Clock + t'

Fig. 13. Automaton BPSUCCESS for process i (part 2).

In addition to the code fragments of BPLEADERi, BPAGENTi and BPSUCCESSi, we provide here some comments about the messages, the state variables and the actions.

6.2.2.1. Messages. In this paragraph we describe the messages used for communication between the leader i and the agents of a round. Every message m is a tuple of elements. The messages are:

(1) "Collect" messages, m = (r, "Collect")i,j. This message is sent by the leader of a round to announce that a new round, with number r, has been started and at the same time to ask for information about previous rounds.

(2) "Last" messages, m = (r, "Last", r', v)j,i. This message is sent by an agent to respond to a "Collect" message from the leader. It provides the last round r' in which the agent has accepted a value, and the value v proposed in that round. If the agent did not accept any value in previous rounds, then v is either nil or the initial value of the agent and r' is (0, j).

(3) "Begin" messages, m = (r, "Begin", v)i,j. This message is sent by the leader of round r to announce the value v of the round and at the same time to ask agents to accept it.

(4) "Accept" messages, m = (r, "Accept")j,i. This message is sent by an agent to respond to a "Begin" message from the leader. With this message an agent accepts the value proposed in the current round.

(5) "OldRound" messages, m = (r, "OldRound", r')j,i. This message is sent by an agent to respond either to a "Collect" or a "Begin" message. It is sent when the agent is committed to reject round r, and it informs the leader about round r', the higher-numbered round for which the agent is committed and which causes it to reject round r.

(6) "Success" messages, m = ("Success", v)i,j. This message is sent by the leader to broadcast the decision.

(7) "Ack" messages, m = ("Ack")j,i. This message is an acknowledgment, so that the leader can be sure that an agent has received the "Success" message.
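For quick reference, the seven message kinds can be rendered as tagged tuples; the constructors below are our own shorthand and carry exactly the fields listed above.

    # Sketch: the seven message kinds as tagged tuples (our own shorthand).
    def collect(r):        return (r, "Collect")        # leader -> agents
    def last(r, r1, v):    return (r, "Last", r1, v)    # agent  -> leader
    def begin(r, v):       return (r, "Begin", v)       # leader -> agents
    def accept(r):         return (r, "Accept")         # agent  -> leader
    def old_round(r, r1):  return (r, "OldRound", r1)   # agent  -> leader
    def success(v):        return ("Success", v)        # leader -> agents
    def ack():             return ("Ack",)              # agent  -> leader

    def kind(m):
        """The kind of a message is its tag field."""
        return m[0] if isinstance(m[0], str) else m[1]

    print(kind(begin((1, 2), "v")), kind(ack()))        # Begin Ack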
We use the kind of a message to indicate any message of that kind. For example, the notation m ∈ {"Collect", "Begin"} means that m is either a "Collect" message, that is m = (r, "Collect") for some r, or a "Begin" message, that is m = (r, "Begin", v) for some r and v.

Automaton BPLEADERi. Variable Statusi is used to model process failures and recoveries. Variable IamLeaderi keeps track of whether the process is leader. Variable Modei is used like a program counter, to go through the steps of a round. Variable InitValuei contains the initial value of the process. Variable Decisioni contains the value, if any, decided by process i. Variable CurRndi contains the number of the round for which process i is currently the leader. Variable HighestRndi stores the highest round number seen by process i. Variable Valuei contains the value being proposed in the current round. Variable ValFromi is the round number of the round from which Valuei has been chosen (recall that a leader sets the value for its round to be equal to the value of a particular previous round, which is round ValFromi). Variable InfoQuoi contains the set of processes from which a "Last" message has been received by process i (that is, the info-quorum). Variable AcceptQuoi contains the set of processes from which an "Accept" message has been received by process i (that is, the accepting-quorum). We remark that in the original paper by Lamport there is only one quorum, which is fixed in the first exchange of messages between the leader and the agents, so that only processes in that quorum can accept the value being proposed. However, there is no need to restrict the set of processes that can accept the proposed value to the info-quorum of the round. Messages from processes in the info-quorum are used only to choose a consistent value for the round, and once this has been done anyone can accept that value. This improvement is also suggested in Lamport's paper [19]. Finally, variables InMsgsi and OutMsgsi are buffers for incoming and outgoing messages.
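As remarked here and in Section 6.2.1, the only property required of info-quorums and accepting-quorums is that any two of them intersect; for majorities this is immediate, as the following small check (our own illustration) confirms.

    # Sketch: every info-quorum must intersect every accepting-quorum.
    from itertools import combinations

    def quorums_intersect(info_quorums, accepting_quorums):
        """True iff every info-quorum shares a process with every
        accepting-quorum."""
        return all(q & a for q in info_quorums for a in accepting_quorums)

    # Majorities of a 5-process system: all 3-element subsets.
    majorities = [set(c) for c in combinations(range(5), 3)]
    print(quorums_intersect(majorities, majorities))   # True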
If Value, is not defined (and this is possible if the leader does not have an initial value and does not receive any value in “Last” messages) the leader waits for an initial value before enabling action BeginCast[i]. When an initial value is provided, action Continues can be executed and it sets Value, and enables action BeginCast[i]. Ac­tion BeginCast [i ]broadcasts a “Begin” message including the value chosen for the round. Action GatherAccept(m) gathers the “Accept” messages. If a majority of the processes accept the value of the current round then the round is successful and GatherAccept[i] sets the Decision, variable to the value of the current round. When variable Decisions has been set, action RndSuccess(v) is enabled. Action RndSuccess is used to pass the decision to BPSUCCESSi. Action GatherOldRound(m) collects messages that inform process i that the round previously started by i is “old”, in the sense that a round with a higher number has been started. Process i can update, if necessary, variable HighestRnd[i]. Automaton BPAGENTi. Variable Status is used to model process failures and recov­eries. Variable LastR, is the round number of the latest round for which process i has sent an “Accept” message. Variable LastViis the value for round LastRi. Variable Commits specifies the round for which process i is committed and thus specifies the set of rounds that process i must reject, which are all the rounds with round number less than Commits. We remark that when an agent commits for a round r and sends to the leader of round r a “Last” message specifying the latest round r^1 <r in which it has accepted the proposed value, it is enough that the agent commits to not accept the value of any round T^11 in between T^1 and T. To make the code simpler, when an agent commits for a round T, it commits to reject any round T^11 <T. Finally, variables InMsgs [i ]and OutMsgs [i ]are buffers used for incoming and outcoming messages. Actions Stops and Recover, model process failures and recoveries. Actions Send(m)i, [j ]and Receive(m)i[,] send messages to the channels and receive messages from the channels. Action LastAccept responds to the “Collect” message sent by the leader by sending a “Last” message that gives information about the last round in which the agent has been involved. Action Accept responds to the “Begin” message sent by the leader. The agent accepts the value of the current round if it is not rejecting the round. In both LastAccept and Accept actions, if the agent is committed to reject the current round because of a higher numbered round, then an “OldRound” message is sent to the leader so that the leader can update the highest round number ever seen. Action Init(v) sets to v the value of LastV only if this variable is undefined. With this, the agent sends its initial value in a “Last” message whenever the agent has not yet accepted the value of any Automaton BPSUCCESS . Variable Status is used to model process failures and re­coveries. Variable IamLeader keeps track of whether the process is leader. Variable Decision stores the decision. Variable Acked(j) contains a boolean that specifies whether or not process j has sent an acknowledgment for a “Success” message. Vari­able Prevsend records the time of the previous broadcast of the decision. Variables Last Check , LastSS , and variables LastSendAck(j) , LastSendSuc(j) , for j = , are used to impose the time bounds on enabled actions. Their use should be clear from the code. 
Automaton BPSUCCESS_i. Variable Status_i is used to model process failures and recoveries. Variable IamLeader_i keeps track of whether the process is leader. Variable Decision_i stores the decision. Variable Acked(j)_i contains a boolean that specifies whether or not process j has sent an acknowledgment for a "Success" message. Variable Prevsend_i records the time of the previous broadcast of the decision. Variables LastCheck_i and LastSS_i, and variables LastSendAck(j)_i and LastSendSuc(j)_i for j ≠ i, are used to impose the time bounds on enabled actions; their use should be clear from the code. Variables OutAckMsgs(j)_i and OutSucMsgs(j)_i, for j ≠ i, are buffers for outgoing "Ack" and "Success" messages, respectively. There are no buffers for incoming messages because incoming messages are processed immediately, that is, by action Receive(m)_{j,i}.

Actions Stop_i and Recover_i model process failures and recoveries. Actions Leader_i and NotLeader_i are used to update IamLeader_i. Actions Send(m)_{i,j} and Receive(m)_{j,i} send messages to the channels and receive messages from the channels; action Receive(m)_{j,i} handles the receipt of "Ack" and "Success" messages. Action RndSuccess(v)_i simply takes care of updating the Decision_i variable and sets a time bound for the execution of action SendSuccess_i. Action SendSuccess_i sends the "Success" message, along with the value of Decision_i, to all processes for which there is no acknowledgment; it sets the time bounds for the re-sending of the "Success" message and also the time bounds LastSendSuc(j)_i for the actual sending of the messages. Action Check_i re-enables action SendSuccess_i after an appropriate time bound. We remark that 2ℓ + 2d is the time needed to send the "Success" message and get back an "Ack" message (see the analysis in the proof of Lemma 11). We also remark that automaton BPSUCCESS_i needs to be able to measure the passage of time.

6.2.3. Partial correctness

Let us define the system S_BPX to be the composition of system S_CHA and automaton BASICPAXOS_i for each process i (remember that BASICPAXOS_i is the composition of automata BPLEADER_i, BPAGENT_i and BPSUCCESS_i). In this section we prove the partial correctness of S_BPX: we show that in any execution of the system S_BPX, agreement and validity are guaranteed.

For these proofs, we augment the algorithm with a collection H of history variables. Each variable in H is an array indexed by the round number; for every round number r a history variable contains some information about round r. In particular the set H consists of:

Hleader(r), a process or nil, initially nil (the leader of round r).
Hvalue(r) ∈ V ∪ {nil}, initially nil (the value for round r).
Hfrom(r), a round number or nil, initially nil (the round from which Hvalue(r) is taken).
Hinfquo(r), a set of processes, initially ∅ (the info-quorum of round r).
Haccquo(r), a set of processes, initially ∅ (the accepting-quorum of round r).
Hreject(r), a set of processes, initially ∅ (the processes committed to reject round r).

The code fragments of automata BPLEADER_i and BPAGENT_i augmented with the history variables are shown in Figs. 14 and 15. The figures show only the actions that change history variables; actions of BPSUCCESS_i do not change history variables.

Initially, when no round has been started yet, all the information contained in the history variables is set to the initial values. All history variables of round r except Hreject(r) are set by the leader of round r; thus if the round has not been started these variables remain at their initial values. More formally we have the following lemma.

Lemma 8. In any state of an execution of S_BPX, if Hleader(r) = nil then Hvalue(r) = nil, Hfrom(r) = nil, Hinfquo(r) = ∅, and Haccquo(r) = ∅.

Proof. By an easy induction.

Given a round r, Hreject(r) is modified by all the processes that commit themselves to reject round r, and we know nothing about its value at the time round r is started.

Next we define some key concepts that will be instrumental in the proofs.

Definition 6.1. In any state of the system S_BPX, a round r is said to be dead if |Hreject(r)| ≥ n/2.

That is, a round r is dead if at least n/2 of the processes are rejecting it. Hence, if a round r is dead, there cannot be a majority of the processes accepting its value, i.e., round r cannot be successful.
We denote by R_S the set of started rounds, that is, the rounds r with Hleader(r) ≠ nil, and by R_V the set of rounds r for which the value has been chosen, that is, those with Hvalue(r) ≠ nil. Clearly in any state s of an execution of S_BPX we have that R_V ⊆ R_S.

Next we formally define the concept of an anchored round, which is crucial to the proofs. The idea of an anchored round is borrowed from [21]. Informally, a round r is anchored if its value is consistent with the value chosen in any previous round r'; consistent means that either the value of round r is equal to the value of round r' or round r' is dead. Intuitively, it is clear that if all rounds are either anchored or dead, then agreement is satisfied.

Definition 6.2. A round r ∈ R_V is said to be anchored if for every round r' ∈ R_V such that r' < r, either round r' is dead or Hvalue(r') = Hvalue(r).
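Definitions 6.1 and 6.2 are simple enough to restate as executable predicates. The sketch below is ours: plain Python dictionaries stand in for the history variables, and it adds nothing beyond the two definitions.

# Executable restatement of Definitions 6.1 and 6.2 (names are ours).
# h_reject[r] = set of processes committed to reject round r,
# h_value[r]  = value chosen for round r (absent if none), n = #processes.

def is_dead(r, h_reject, n):
    # Definition 6.1: at least n/2 processes reject round r.
    return len(h_reject.get(r, set())) >= n / 2

def is_anchored(r, h_value, h_reject, n):
    # Definition 6.2: every earlier valued round is dead or agrees with r.
    return all(is_dead(q, h_reject, n) or h_value[q] == h_value[r]
               for q in h_value if q < r)

h_value = {1: "x", 3: "x"}
h_reject = {2: {0, 1, 2}}
print(is_anchored(3, h_value, h_reject, n=5))  # True: round 1 agrees, 2 is dead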
Next we prove that S_BPX guarantees agreement, by using a sequence of invariants. The key invariant is Invariant 6.8, which states that all rounds are either dead or anchored. The first invariant, Invariant 6.3, captures the fact that when a process sends a "Last" message in response to a "Collect" message for a round r, it commits to not vote for rounds previous to round r.

BPLEADER_i (history variables)

input NewRound_i
  Eff: if LeaderAlive = true then
         CurRnd := HighestRnd + 1
         Hleader(CurRnd) := i
         HighestRnd := CurRnd
         Mode := collect

output BeginCast_i
  Pre: LeaderAlive = true
       Mode = begincast
  Eff: ∀j put (CurRnd, "Begin", Value)_{i,j} in OutMsgs
       Hinfquo(CurRnd) := InfoQuo
       Hfrom(CurRnd) := ValFrom
       Hvalue(CurRnd) := Value
       Mode := gatheraccept

internal GatherAccept(m)_i
  Pre: LeaderAlive = true
       Mode = gatheraccept
       m = (r, "Accept")_{j,i} ∈ InMsgs
       CurRnd = r
  Eff: remove m from InMsgs
       AcceptQuo := AcceptQuo ∪ {j}
       if |AcceptQuo| > n/2 then
         Decision := Value
         Haccquo(CurRnd) := AcceptQuo
         Mode := decide

Fig. 14. Actions of BPLEADER_i augmented with history variables. Only the actions that change history variables are shown; the other actions are the same as in BPLEADER_i, i.e. they do not change history variables. Actions of BPSUCCESS_i do not change history variables.

BPAGENT_i (history variables)

internal LastAccept(m)_i
  Pre: Status = alive
       m = (r, "Collect")_{j,i} ∈ InMsgs
  Eff: remove m from InMsgs
       if r ≥ Commit then
         Commit := r
         for all r' with LastR < r' < r: Hreject(r') := Hreject(r') ∪ {i}
         put (r, "Last", LastR, LastV)_{i,j} in OutMsgs
       else
         put (r, "OldRound", Commit)_{i,j} in OutMsgs

Fig. 15. Actions of BPAGENT_i augmented with history variables. Only the actions that change history variables are shown; the other actions are the same as in BPAGENT_i, i.e. they do not change history variables. Actions of BPSUCCESS_i do not change history variables.

Invariant 6.3. In any state s of an execution of S_BPX, if message (r, "Last", r'', v)_{j,i} is in OutMsgs_j, then j ∈ Hreject(r') for all r' such that r'' < r' < r.

Proof. We prove the invariant by induction on the length k of the execution α. The base is trivial: if k = 0 then α = s_0, and in the initial state no message is in OutMsgs_j; hence the invariant is vacuously true. For the inductive step assume that the invariant is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that the invariant is still true in s. We distinguish two cases.

Case 1: (r, "Last", r'', v)_{j,i} ∈ s_k.OutMsgs_j. By the inductive hypothesis we have j ∈ s_k.Hreject(r') for all r' such that r'' < r' < r. Since no process is ever removed from any Hreject set, we have j ∈ s.Hreject(r') for all r' such that r'' < r' < r.

Case 2: (r, "Last", r'', v)_{j,i} ∉ s_k.OutMsgs_j. Since by hypothesis we have (r, "Last", r'', v)_{j,i} ∈ s.OutMsgs_j, it must be that π = LastAccept(m)_j with m = (r, "Collect")_{i,j}, and it must be that s_k.LastR_j = r''. Then the invariant follows from the code of LastAccept(m)_j, which puts process j into Hreject(r') for all r' such that r'' < r' < r.

The next invariant states that the commitment made by an agent when sending a "Last" message is still in effect when the message is in the communication channel. This should be obvious, but to be precise in the rest of the proof we prove it formally.

Invariant 6.4. In any state s of an execution of S_BPX, if message (r, "Last", r'', v)_{j,i} is in CHANNEL_{j,i}, then j ∈ Hreject(r') for all r' such that r'' < r' < r.

Proof. We prove the invariant by induction on the length k of the execution α. The base is trivial: if k = 0 then α = s_0, and in the initial state no messages are in CHANNEL_{j,i}; hence the invariant is vacuously true. For the inductive step assume that the invariant is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that the invariant is still true in s. We distinguish two cases.

Case 1: (r, "Last", r'', v)_{j,i} ∈ s_k.CHANNEL_{j,i}. By the inductive hypothesis we have j ∈ s_k.Hreject(r') for all r' such that r'' < r' < r. Since no process is ever removed from any Hreject set, we have j ∈ s.Hreject(r') for all r' such that r'' < r' < r.

Case 2: (r, "Last", r'', v)_{j,i} ∉ s_k.CHANNEL_{j,i}. Since by hypothesis (r, "Last", r'', v)_{j,i} ∈ s.CHANNEL_{j,i}, it must be that π = Send(m)_{j,i} with m = (r, "Last", r'', v)_{j,i}. By the precondition of action Send(m)_{j,i} we have that message (r, "Last", r'', v)_{j,i} ∈ s_k.OutMsgs_j. By Invariant 6.3 we have that process j ∈ s_k.Hreject(r') for all r' such that r'' < r' < r. Since no process is ever removed from any Hreject set, we have j ∈ s.Hreject(r') for all r' such that r'' < r' < r.

The next invariant states that the commitment made by an agent when sending a "Last" message is still in effect when the message is received by the leader. Again, this should be obvious.

Invariant 6.5. In any state s of an execution of S_BPX, if message (r, "Last", r'', v)_{j,i} is in InMsgs_i, then j ∈ Hreject(r') for all r' such that r'' < r' < r.

Proof. We prove the invariant by induction on the length k of the execution α. The base is trivial: if k = 0 then α = s_0, and in the initial state no messages are in InMsgs_i; hence the invariant is vacuously true. For the inductive step assume that the invariant is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that the invariant is still true in s. We distinguish two cases.

Case 1: (r, "Last", r'', v)_{j,i} ∈ s_k.InMsgs_i. By the inductive hypothesis we have j ∈ s_k.Hreject(r') for all r' such that r'' < r' < r. Since no process is ever removed from any Hreject set, we have j ∈ s.Hreject(r') for all r' such that r'' < r' < r.

Case 2: (r, "Last", r'', v)_{j,i} ∉ s_k.InMsgs_i. Since by hypothesis (r, "Last", r'', v)_{j,i} ∈ s.InMsgs_i, it must be that π = Receive(m)_{j,i} with m = (r, "Last", r'', v)_{j,i}. In order to execute action Receive(m)_{j,i} we must have (r, "Last", r'', v)_{j,i} ∈ s_k.CHANNEL_{j,i}. By Invariant 6.4 we have j ∈ s_k.Hreject(r') for all r' such that r'' < r' < r. Since no process is ever removed from any Hreject set, we have j ∈ s.Hreject(r') for all r' such that r'' < r' < r.
The following invariant states that the commitment to reject smaller rounds made by an agent is still in effect when the leader updates its information about previous rounds using the agents' "Last" messages.

Invariant 6.6. In any state s of an execution of S_BPX, if process j ∈ InfoQuo_i for some process i and CurRnd_i = r, then for every r' such that s.ValFrom_i < r' < r we have that j ∈ Hreject(r').

Proof. We prove the invariant by induction on the length k of the execution α. The base is trivial: if k = 0 then α = s_0, and in the initial state no process j is in InfoQuo_i for any i; hence the invariant is vacuously true. For the inductive step assume that the invariant is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that the invariant is still true in s. We distinguish two cases.

Case 1: In state s_k, j ∈ InfoQuo_i for some process i and CurRnd_i = r. Then by the inductive hypothesis, in state s_k we have that j ∈ Hreject(r') for all r' such that s_k.ValFrom_i < r' < r. Since no process is ever removed from any Hreject set and, as long as CurRnd_i is not changed, variable ValFrom_i is never decreased, also in state s we have that j ∈ Hreject(r') for all r' such that s.ValFrom_i < r' < r.

Case 2: In state s_k it is not true that j ∈ InfoQuo_i for some process i and CurRnd_i = r. Since in state s it holds that j ∈ InfoQuo_i for some process i and CurRnd_i = r, it must be the case that π = GatherLast(m)_i with m = (r, "Last", r'', v)_{j,i}. Notice that, by the precondition of GatherLast(m)_i, m ∈ InMsgs_i. Hence, by Invariant 6.5 we have that j ∈ Hreject(r') for all r' such that r'' < r' < r. By the code of the GatherLast(m)_i action we have that ValFrom_i ≥ r''. Whence the invariant is proved.

The following invariant is basically the previous one, stated once the leader has fixed the info-quorum.

Invariant 6.7. In any state of an execution of S_BPX, if j ∈ Hinfquo(r) then for every r' such that Hfrom(r) < r' < r we have that j ∈ Hreject(r').

Proof. We prove the invariant by induction on the length k of the execution α. The base is trivial: if k = 0 then α = s_0, and in the initial state we have that for every round r, Hleader(r) = nil, and thus by Lemma 8 there is no process j in Hinfquo(r); hence the invariant is vacuously true. For the inductive step assume that the invariant is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that the invariant is still true in s. We distinguish two cases.

Case 1: In state s_k, j ∈ Hinfquo(r). By the inductive hypothesis, in state s_k we have that j ∈ Hreject(r') for all r' such that Hfrom(r) < r' < r. Since no process is ever removed from any Hreject set, also in state s we have that j ∈ Hreject(r') for all r' such that Hfrom(r) < r' < r.

Case 2: In state s_k, j ∉ Hinfquo(r). Since in state s, j ∈ Hinfquo(r), it must be the case that action π puts j into Hinfquo(r). Thus it must be that π = BeginCast_i for some process i, and it must be that s_k.CurRnd_i = r and j ∈ s_k.InfoQuo_i. Since action BeginCast_i changes neither CurRnd_i nor InfoQuo_i, we have that s.CurRnd_i = r and j ∈ s.InfoQuo_i. By Invariant 6.6 we have that j ∈ Hreject(r') for all r' such that s.ValFrom_i < r' < r. By the code of BeginCast_i we have that Hfrom(r) = s.ValFrom_i.

We are now ready to prove the main invariant.

Invariant 6.8. In any state of an execution of S_BPX, any non-dead round r ∈ R_V is anchored.
Proof. We proceed by induction on the length k of the execution α. The base is trivial: when k = 0 we have that α = s_0, and in the initial state no round has been started yet; thus Hleader(r) = nil and by Lemma 8 we have that R_V = ∅, so the assertion is vacuously true. For the inductive step assume that the assertion is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that, for every possible action π, the assertion is still true in state s.

First we observe that the definition of "dead" round depends only upon the history variables, and that the definition of "anchored" round depends upon the history variables and the definition of "dead" round. Thus the definition of "anchored" depends only on the history variables. Hence actions that do not modify the history variables cannot affect the truth of the assertion. The actions that change history variables are:
(1) π = NewRound_i
(2) π = BeginCast_i
(3) π = GatherAccept(m)_i
(4) π = LastAccept(m)_i

Case 1: Assume π = NewRound_i. This action sets the history variable Hleader(r), where r is the round number of the round being started by process i. The new round r does not belong to R_V since Hvalue(r) is still undefined. Thus the assertion of the lemma cannot be contradicted by this action.

Case 2: Assume π = BeginCast_i. Action π sets Hvalue(r), Hfrom(r) and Hinfquo(r), where r = s_k.CurRnd_i. Round r belongs to R_V in the new state s. In order to prove that the assertion is still true, it suffices to prove that round r is anchored in state s and that any round r' > r that was anchored is still anchored in state s. Indeed, rounds with round number less than r are still anchored in state s, since the definition of anchored for a given round involves only rounds with smaller round numbers.

First we prove that round r is anchored. From the precondition of BeginCast_i we have that Hinfquo(r) contains more than n/2 processes; indeed, variable Mode_i is equal to begincast only if the cardinality of InfoQuo_i is greater than n/2. Using Invariant 6.7 for each process j in s.Hinfquo(r), we have that for every round r' such that s.Hfrom(r) < r' < r there are more than n/2 processes in the set Hreject(r'), which means that every round r' with s.Hfrom(r) < r' < r is dead. Moreover, by the code of π we have that s.Hfrom(r) = s_k.ValFrom_i and s.Hvalue(r) = s_k.Value_i. From the code (see action GatherLast(m)_i) it is immediate that in any state Value_i is the value of round ValFrom_i; in particular we have that s_k.Value_i = s_k.Hvalue(s_k.ValFrom_i). Hence we have s.Hvalue(r) = s.Hvalue(s.Hfrom(r)). Finally, we notice that round Hfrom(r) is anchored (any round previous to r is still anchored in state s), and thus we have that any round r' < r is either dead or such that s.Hvalue(s.Hfrom(r)) = s.Hvalue(r'). Hence for any round r' < r we have that either round r' is dead or s.Hvalue(r) = s.Hvalue(r'). Thus round r is anchored in state s.

Finally, we need to prove that any non-dead round r' > r that was anchored in s_k is still anchored in s. Since action BeginCast_i modifies only history variables for round r, we only need to prove that in state s, Hvalue(r') = Hvalue(r). Let r'' be equal to Hfrom(r). Since r' is anchored in state s_k, we have that s_k.Hvalue(r') = s_k.Hvalue(r''). Again, because BeginCast_i modifies only history variables for round r, we have that s.Hvalue(r') = s.Hvalue(r''). But we have proved that round r is anchored in state s, and thus s.Hvalue(r) = s.Hvalue(r''). Hence s.Hvalue(r') = s.Hvalue(r).
Case 3: Assume π = GatherAccept(m)_i. This action modifies only variable Haccquo, which is not involved in the definition of anchored. Thus this action cannot make the assertion false.

Case 4: Assume π = LastAccept(m)_i. This action modifies Hinfquo and Hreject. Variable Hinfquo is not involved in the definition of anchored. Action LastAccept(m)_i may put process i into Hreject of some rounds and this, in turn, may make those rounds dead. However, this cannot make the assertion false: indeed, if a round r was anchored in s_k it is still anchored when another round becomes dead.

The next invariant follows from the previous one and gives a more direct statement about the agreement property.

Invariant 6.9. In any state of an execution of S_BPX, all the Decision variables that are not nil are set to the same value.

Proof. We prove the invariant by induction on the length k of the execution α. The base of the induction is trivially true: for k = 0 we have that α = s_0, and in the initial state all the Decision_i variables are undefined. Assume that the assertion is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that, for every possible action π, the assertion is still true in state s. Clearly the only actions which can make the assertion false are those that set Decision_i for some process i. Thus we only need to consider actions GatherAccept((r, "Accept"))_i and actions RndSuccess(v)_i and Receive(("Success", v))_{j,i} of automaton BPSUCCESS_i.

Case 1: Assume π = GatherAccept((r, "Accept"))_i. This action sets Decision_i to Hvalue(r). If all Decision_j, j ≠ i, are undefined then Decision_i is the first decision and the assertion is still true. Assume there is exactly one Decision_j already defined, and let Decision_j = Hvalue(r') for some round r'. By Invariant 6.8, rounds r and r' are anchored (note that a round whose value has been decided cannot be dead, since a majority of the processes accepted it), and thus we have that Hvalue(r') = Hvalue(r); whence Decision_i = Decision_j. If several Decision_j, j ≠ i, are already defined, then by the inductive hypothesis they are all equal and the same argument applies. Thus the lemma follows.

Case 2: Assume π = RndSuccess(v)_i. This action sets Decision_i to v. By the code, value v is equal to the Decision_j of some other process. The lemma follows by the inductive hypothesis.

Case 3: Assume π = Receive(("Success", v))_{j,i}. This action sets Decision_i to v. It is easy to see (by the code) that the value sent in a "Success" message is always the Decision of some process. Thus we have that Decision_i is equal to Decision_j for some other process j, and the lemma follows by the inductive hypothesis.

Finally we can prove that agreement is satisfied.

Theorem 9. In any execution of the system S_BPX, agreement is satisfied.

Proof. Immediate from Invariant 6.9.

Validity is easier to prove, since the value proposed in any round comes either from a value supplied by an Init(v)_i action or from a previous round.

Invariant 6.10. In any state of an execution of S_BPX, for any r ∈ R_V we have that Hvalue(r) ∈ V.

Proof. We proceed by induction on the length k of the execution α. The base of the induction is trivially true: for k = 0 we have that α = s_0, and in the initial state all the Hvalue variables are undefined. Assume that the assertion is true for α = s_0 π_1 s_1 ... π_k s_k and consider the execution s_0 π_1 s_1 ... π_k s_k π s. We need to prove that, for every possible action π, the assertion is still true in state s. Clearly the only actions that can make the assertion false are those that modify Hvalue, and the only such action is BeginCast_i. Thus assume π = BeginCast_i. This action sets Hvalue(r) to Value_i. We need to prove that all the values assigned to Value_i are in the set V.
Variable Value_i is modified by actions NewRound_i and GatherLast(m)_i. Action NewRound_i is easily taken care of, because it simply sets Value_i to InitValue_i, which is obviously in V. Thus we only need to worry about GatherLast(m)_i actions. A GatherLast(m)_i action sets variable Value_i to the value specified in the "Last" message if that value is not nil. The value specified in any "Last" message is either nil or the value Hvalue(r') of a previous round r'; by the inductive hypothesis we have that Hvalue(r') belongs to V.

Invariant 6.11. In any state of an execution of S_BPX, all the Decision variables that are not undefined are set to some value in V.

Proof. A variable Decision is always set equal to Hvalue(r) for some r. Thus the invariant follows from Invariant 6.10.

Theorem 10. In any execution of the system S_BPX, validity is satisfied.

Proof. Immediate from Invariant 6.11.

6.2.4. Analysis of S_BPX

In this section we analyze the performance of S_BPX. Since termination is not guaranteed by S_BPX, in this section we provide a performance analysis (Lemma 14) assuming that a successful round is conducted. Then, in Section 6.4, Theorem 17 provides the performance analysis of S_PAX, which, in a nice execution fragment, guarantees termination. Let us begin by making precise the meaning of the expressions "the start (end) of a round".

Definition 6.12. In an execution fragment whose states are all unique-leader states with process i the unique leader, the start of a round is the execution of action NewRound_i and the end of a round is the execution of action RndSuccess(v)_i. A round is successful if it ends, that is, if the RndSuccess(v)_i action is executed by the leader i.

Moreover, we say that a process reaches its decision when automaton BPSUCCESS_i sets its Decision variable. We remark that, in the case of the leader, the decision is actually reached when the leader knows that a majority of the processes have accepted the value being proposed; this happens in action GatherAccept(m)_i of BPLEADER_i. However, to be precise in our proofs, we consider the decision reached when the variable Decision of BPSUCCESS_i is set; for the leader this happens exactly at the end of a successful round. Notice that the Decide(v)_i action, which communicates the decision v of process i to the external environment, is executed within ℓ time of the point in time when process i reaches the decision, provided that the execution is regular (in a regular execution actions are executed within the expected time bounds).

The following lemma states that once a round has ended, if the execution is stable, the decision is reached by all the alive processes within linear (in the number of processes) time.

Lemma 11. If an execution fragment α of the system S_BPX, starting in a reachable state s and lasting for more than 3ℓ + 2d time, is stable and unique-leader, with process i leader, and process i reaches a decision in state s, then by time 3ℓ + 2d every alive process j ≠ i has reached a decision, and the leader has Acked(j) = true for every alive process j ≠ i.

Proof. First notice that S_BPX is the composition of CHANNEL_{i,j} and other automata; hence, by Theorem 3, we can apply Lemma 4. Let J be the set of alive processes j ≠ i such that Acked(j) = false. If J is empty then the lemma is trivially true, hence assume J ≠ ∅. By assumption, the action that brings the system into state s is action RndSuccess(v)_i (the leader reaches a decision in state s); hence action SendSuccess_i is enabled. By the code of BPSUCCESS_i, action SendSuccess_i is executed within ℓ time. This action puts a "Success" message for each process j ∈ J into OutSucMsgs(j)_i. By the code of BPSUCCESS_i, each of these messages is put on CHANNEL_{i,j}, i.e., action Send(("Success", v))_{i,j} is executed, within ℓ time. By Lemma 4 each alive process j ∈ J receives the "Success" message, i.e., executes a Receive(("Success", v))_{i,j} action, within d time. This action sets Decision_j to v and puts an "Ack" message into OutAckMsgs(i)_j. By the code of BPSUCCESS_j, this "Ack" message is put on CHANNEL_{j,i}, i.e., action Send("Ack")_{j,i} is executed, within ℓ time, for every process j. By Lemma 4 the leader receives the "Ack" message, i.e., executes a Receive(("Ack"))_{j,i} action, within d time, for each process j. This action sets Acked(j) = true. Summing up the time bounds we get the lemma.
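A rough Python rendering of the behaviour this proof walks through, repeatedly broadcasting "Success" until every process has acknowledged, may be helpful. The synchronous loop and all names below are ours; the real automaton instead re-enables SendSuccess_i through the Check_i action after 2ℓ + 2d. Note that, as Section 6.5 remarks, a process that is dead and never recovers would be sent "Success" messages forever, which the loop makes literal.

# Sketch of the "Success"/"Ack" exchange of BPSUCCESS (names are ours).
# deliver(j, msg) stands in for the channel and returns j's reply, or
# None if j is down; a real run is asynchronous and re-sends only after
# the 2l + 2d timeout enforced by the Check action.

def spread_decision(decision, processes, deliver):
    acked = {j: False for j in processes}
    while not all(acked.values()):
        for j in (p for p in processes if not acked[p]):
            if deliver(j, ("Success", decision)) == "Ack":
                acked[j] = True
    return acked

if __name__ == "__main__":
    up = {1, 2, 3}
    deliver = lambda j, m: "Ack" if j in up else None
    # With only alive processes this terminates after one sweep:
    print(spread_decision("v", [1, 2, 3], deliver))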
In the following we are interested in the time analysis from the start to the end of a successful round. We consider a unique-leader execution fragment α, with process i leader, and such that the leader i has started a round by the first state of α (that is, in the first state of α, CurRnd_i = r for some round number r). We remark that in order for the leader to execute step 3 of a round, i.e., action BeginCast_i, it is necessary that Value_i be defined. If the leader does not have an initial value and no agent sends a value in a "Last" message, variable Value_i is not defined. In this case the leader needs to wait for the execution of the Init(v)_i action to set a value to propose in the round (see action Continue_i). Clearly the time analysis depends on the time of occurrence of Init(v)_i. To deal with this we use the following definition.

Definition 6.13. Given an execution fragment α, we define t̂_{i,α} to be: 0, if variable InitValue_i is defined in the first state of α; the time of occurrence of action Init(v)_i, if variable InitValue_i is undefined in the first state of α and action Init(v)_i is executed in α; ∞, if variable InitValue_i is undefined in the first state of α and no Init(v)_i action is executed in α. Moreover, we define T̂_{i,α} to be max{7ℓ + 2d, t̂_{i,α} + 2ℓ}. When the fragment α is clear from the context we simply write t̂_i and T̂_i.

Informally, the above definition of T̂_i gives the time, counted from the beginning of a round, by which a BeginCast_i action is expected to be executed, assuming that the execution is stable and the round being conducted is successful.
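As a quick numeric reading of Definition 6.13, with made-up constants (ℓ = 10 ms and d = 50 ms are ours, chosen only for illustration):

# Evaluating Definition 6.13 for illustrative constants (ours, not the paper's).
l, d = 10, 50                      # ms: local step bound and message delay

def T_hat(t_init):                 # t_init = time of Init(v)_i, 0 if already set
    return max(7 * l + 2 * d, t_init + 2 * l)

print(T_hat(0))     # 170: the 7l + 2d message pipeline dominates
print(T_hat(300))   # 320: the leader idles until Init(v)_i arrives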
More formally we have the following lemma.

Lemma 12. Suppose that α is an execution fragment of the system S_BPX starting in a reachable state s in which s.Decision_i = nil and that: (i) α is stable; (ii) α is a unique-leader execution, with process i leader; (iii) α lasts for more than T̂_i; (iv) the action that brings the system into state s is action NewRound_i for some round r; (v) round r is successful. Then action BeginCast_i for round r is executed within time T̂_i of the beginning of α.

Proof. First notice that S_BPX is the composition of CHANNEL_{i,j} and other automata; hence, by Theorem 3, we can apply Lemmas 1 and 4. Since the execution is stable, it is also regular, and thus by Lemma 1 actions of BPLEADER_i and BPAGENT_i are executed within ℓ time and by Lemma 4 messages are delivered within d time. Action NewRound_i enables action Collect_i, which is executed in at most ℓ time. This action puts "Collect" messages, one for each agent j, into OutMsgs_i. By the code of BPLEADER_i (see tasks and bounds) each one of these messages is sent on CHANNEL_{i,j}, i.e., action Send_{i,j} is executed for each of these messages, within ℓ time. By Lemma 4 a "Collect" message is delivered to each agent j, i.e., action Receive_{i,j} is executed, within d time. Then it takes ℓ time for an agent to execute action LastAccept_j, which puts a "Last" message in OutMsgs_j. By the code of BPAGENT_j (see tasks and bounds) it takes an additional ℓ time to execute action Send_{j,i} to send the "Last" message on CHANNEL_{j,i}. By Lemma 4, this "Last" message is delivered to the leader, i.e., action Receive_{j,i} is executed, within an additional d time. By the code of BPLEADER_i (see tasks and bounds) each one of these messages is processed by GatherLast_i within ℓ time. Action Gathered_i is executed within an additional ℓ time. At this point there are two possible cases: (i) Value_i is defined and (ii) Value_i is not defined. In case (i), action BeginCast_i is enabled and is executed within ℓ time; summing up the times considered so far, action BeginCast_i is executed within 7ℓ + 2d time of the start of the round. In case (ii), action Continue_i is executed within ℓ time of the occurrence of action Init(v)_i, and thus by time t̂_i + ℓ; this action enables action BeginCast_i, which is executed within an additional ℓ time, hence action BeginCast_i is executed by time t̂_i + 2ℓ. Putting together the two cases, we have that action BeginCast_i is executed by time max{7ℓ + 2d, t̂_i + 2ℓ} = T̂_i. Hence we have proved that action BeginCast_i is executed in α by time T̂_i.

The next lemma gives a bound for the time that elapses between the execution of the BeginCast_i action and the RndSuccess(v)_i action for a successful round in a stable execution fragment.

Lemma 13. Suppose that α is an execution fragment of the system S_BPX starting in a reachable state s in which s.Decision_i = nil and that: (i) α is stable; (ii) α is a unique-leader execution, with process i leader; (iii) α lasts for more than 5ℓ + 2d time; (iv) the action that brings the system into state s is action BeginCast_i for some round r; (v) round r is successful. Then action RndSuccess(v)_i is performed by time 5ℓ + 2d from the beginning of α.

Proof. First notice that S_BPX is the composition of CHANNEL_{i,j} and other automata; hence, by Theorem 3, we can apply Lemmas 1 and 4. Since the execution is stable, it is also regular, and thus by Lemma 1 actions of BPLEADER_i and BPAGENT_i are executed within ℓ time and by Lemma 4 messages are delivered within d time. Action BeginCast_i puts "Begin" messages for round r in OutMsgs_i. By the code of BPLEADER_i (see tasks and bounds) each one of these messages is put on CHANNEL_{i,j} by means of action Send_{i,j} in at most ℓ time. By Lemma 4 a "Begin" message is delivered to each agent j, i.e., action Receive_{i,j} is executed, within d time. By the code of BPAGENT_j (see tasks and bounds) action Accept_j is executed within ℓ time. This action puts an "Accept" message in OutMsgs_j. By the code of BPAGENT_j the "Accept" message is put on CHANNEL_{j,i}, i.e., action Send_{j,i} for this message is executed, within ℓ time. By Lemma 4 the message is delivered, i.e., action Receive_{j,i} for that message is executed, within d time. By the code of BPLEADER_i, action GatherAccept_i is executed for a majority of the "Accept" messages within an additional ℓ time. At this point variable Decision_i is defined and action RndSuccess(v)_i is executed within ℓ time. Summing up all the times, we have that the round ends within 5ℓ + 2d.
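The two bounds can be re-derived mechanically by tallying the steps the proofs enumerate. The little script below is ours and simply repeats the proofs' accounting for the case in which Value_i is already defined:

# Step-by-step accounting behind Lemmas 12 and 13 (each entry is (l, d) counts).
collect_phase = [  # NewRound ... BeginCast (Lemma 12, Value_i defined)
    ("Collect", 1, 0), ("Send Collect", 1, 0), ("deliver", 0, 1),
    ("LastAccept", 1, 0), ("Send Last", 1, 0), ("deliver", 0, 1),
    ("GatherLast", 1, 0), ("Gathered", 1, 0), ("BeginCast", 1, 0),
]
accept_phase = [   # BeginCast ... RndSuccess (Lemma 13)
    ("Send Begin", 1, 0), ("deliver", 0, 1), ("Accept", 1, 0),
    ("Send Accept", 1, 0), ("deliver", 0, 1), ("GatherAccept", 1, 0),
    ("RndSuccess", 1, 0),
]

def bound(steps):
    return f"{sum(s[1] for s in steps)}l + {sum(s[2] for s in steps)}d"

print(bound(collect_phase))   # 7l + 2d
print(bound(accept_phase))    # 5l + 2d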
We can now easily prove a time bound on the time needed to complete a round.

Lemma 14. Suppose that α is an execution fragment of the system S_BPX starting in a reachable state s in which s.Decision_i = nil and that: (i) α is stable; (ii) α is a unique-leader execution, with process i leader; (iii) α lasts for more than T̂_i + 5ℓ + 2d; (iv) the action that brings the system into state s is action NewRound_i for some round r; (v) round r is successful. Then action BeginCast_i for round r is executed within time T̂_i of the beginning of α and action RndSuccess(v)_i is executed by time T̂_i + 5ℓ + 2d of the beginning of α.

Proof. Follows from Lemmas 12 and 13.

The previous lemma states that in a stable execution a successful round is conducted within some time bound. However, it is possible that even if the system executes nicely from some point in time on, no successful round is conducted, and to have a successful round a new round must be started. We take care of this problem in the next section. There we will use a more refined version of Lemma 14; this refined version replaces condition (v) with a weaker requirement, which is enough to prove that the round is successful.

Lemma 15. Suppose that α is an execution fragment of S_BPX starting in a reachable state s in which s.Decision_i = nil and that: (i) α is nice; (ii) α is a unique-leader execution, with process i leader; (iii) α lasts for more than T̂_i + 5ℓ + 2d time; (iv) the action that brings the system into state s is action NewRound_i for some round r; (v) there exists a set J of processes such that every process in J is alive, J is a majority, for every j ∈ J we have s.Commit_j ≤ r, and in state s, for every j ∈ J and every process k, CHANNEL_{k,j} and InMsgs_j do not contain any "Collect" message belonging to any round r' > r. Then action BeginCast_i is performed by time T̂_i and action RndSuccess(v)_i is performed by time T̂_i + 5ℓ + 2d from the beginning of α.

Proof. Process i sends a "Collect" message which is delivered to all the alive agents. All the alive agents, and thus all the processes in J, respond with "Last" messages, which are delivered to the leader. No process j ∈ J can be committed to reject round r: indeed, by assumption, process j is not committed to reject round r in state s, and process j cannot later commit to reject round r, because in state s no message that could cause process j to commit to reject round r is in InMsgs_j or in any channel to process j, and in α the only leader is i, which only sends messages belonging to round r. Since J is a majority, the leader receives at least a majority of "Last" messages and thus it is able to proceed with the next step of the round. The leader sends a "Begin" message, which is delivered to all the alive agents. All the alive agents, and thus all the processes in J, respond with "Accept" messages, since they are not committed to reject round r. Since J is a majority, the leader receives at least a majority of "Accept" messages. Therefore, given that α lasts for enough time, round r is successful. Since round r is successful, the lemma follows easily from Lemma 14.

6.3. Automaton SPAX

To reach consensus using S_BPX, rounds must be started by an external agent by means of the NewRound_i action, which makes process i start a new round. In this section we provide an automaton STARTER_i that starts new rounds; composing STARTER_i with S_BPX we obtain S_PAX. The system S_BPX guarantees that running rounds does not violate agreement and validity, even if rounds are started by many processes.
However, since running a new round may prevent a previous one from succeeding, initiating too many rounds is not a good idea. The strategy used to initiate rounds is to have a leader election algorithm and let the leader initiate new rounds until a round is successful. We exploit the robustness of BASICPAXOS in order to use the sloppy leader elector provided in Section 5. As long as the leader elector does not provide exactly one leader, it is possible that no round is successful; however, agreement and validity are always guaranteed. This means that, regardless of termination, in any run of the algorithm no two different decisions are ever made and any decision is equal to some input value. Moreover, when the leader elector provides exactly one leader, if the system S_BPX is executing a nice execution fragment then a round is successful.

Automaton STARTER_i takes care of the problem of starting new rounds. This automaton interacts with LEADERELECTOR_i by means of the Leader_i and NotLeader_i actions, and with BASICPAXOS_i by means of the NewRound_i, Gathered(v)_i, Continue_i and RndSuccess(v)_i actions. Fig. 5, given at the beginning of the section, shows the interaction of the STARTER_i automaton with the other automata. The code of automaton STARTER_i is shown in Figs. 16 and 17.

Automaton STARTER_i does the following. Whenever process i becomes leader, the STARTER_i automaton starts a new round by means of action NewRound_i. Moreover, the automaton checks that action BeginCast_i is executed within the expected time bound (given by Lemma 14); if BeginCast_i is not executed within the expected time bound, then STARTER_i starts a new round. Similarly, once BeginCast_i has been executed, the automaton checks that action RndSuccess(v)_i is executed within the expected time bound (given by Lemma 14); again, if such an action is not executed within the expected time bound, STARTER_i starts a new round. We remark that to check for the execution of BeginCast_i, the automaton actually checks for the execution of action Gathered(v)_i. This is because the expected time of execution of BeginCast_i depends on whether an initial value is already available when action Gathered(v)_i is executed: if such a value is available when Gathered(v)_i is executed, then BeginCast_i is enabled and is expected to be executed within ℓ time of the execution of Gathered(v)_i; otherwise the leader has to wait for the execution of action Init(v)_i, which enables action Continue_i, and action BeginCast_i is expected to be executed within ℓ time of the execution of Continue_i.

In addition to the code we provide some comments about the state variables and the actions. Variables IamLeader and Status are self-explanatory. Variable Start is true when a new round needs to be started. Variable RndSuccess is true when a decision has been reached. Variables DlineGat and DlineSuc are used to check for the execution of actions Gathered(v)_i and RndSuccess(v)_i; they are also used, together with variable LastNR, to impose time bounds on enabled actions. Automaton STARTER_i updates variable IamLeader according to the input actions Leader_i and NotLeader_i and executes internal and output actions whenever it is the leader. Variable Start is used to start a new round; it is set either when a Leader_i action changes the leader status IamLeader from false to true, that is, when the process becomes leader, or when the expected time bounds for the execution of actions Gathered(v)_i and RndSuccess(v)_i elapse without the execution of these actions. Variable RndSuccess is updated by the input action RndSuccess(v)_i. Action NewRound_i starts a new round. Actions CheckGathered_i and CheckRndSuccess_i check, respectively, whether actions Gathered(v)_i and RndSuccess(v)_i are executed within the expected time bounds. Using an analysis similar to the one done in the proof of Lemma 12, we have that action Gathered(v)_i is expected to be executed within 6ℓ + 2d time of the start of the round. The time bound for the execution of action RndSuccess(v)_i depends on whether the leader has to wait for an Init(v)_i event. However, by Lemma 13, action RndSuccess(v)_i is expected to be executed within 5ℓ + 2d time from the time of occurrence of action BeginCast_i, and action BeginCast_i is executed either within ℓ time of the execution of action Gathered(v)_i, if an initial value is available when this action is executed, or else within ℓ time of the execution of action Continue_i. Hence actions Gathered(v)_i and Continue_i both set a deadline of 6ℓ + 2d for the execution of action RndSuccess(v)_i. Actions CheckGathered_i and CheckRndSuccess_i start a new round if the above deadlines expire.
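The restart policy is easy to caricature in a few lines of Python. Every name and the imperative framing below are ours; the actual automaton expresses the same thing declaratively, with the DlineGat and DlineSuc deadlines shown in Fig. 17 below.

# Caricature of STARTER_i's restart logic (all names are ours).

class Starter:
    L, D = 1, 5                  # illustrative step and delivery bounds

    def __init__(self):
        self.start = False       # true when a new round should be started
        self.dline_gat = None    # deadline for Gathered(v)_i
        self.dline_suc = None    # deadline for RndSuccess(v)_i

    def on_leader(self, decided):
        # Becoming leader with no decision yet triggers a round.
        if not decided:
            self.start = True

    def new_round(self, clock):
        self.start = False
        self.dline_gat = clock + 6 * self.L + 2 * self.D

    def on_gathered(self, clock, value_known):
        self.dline_gat = None
        if value_known:
            self.dline_suc = clock + 6 * self.L + 2 * self.D

    def tick(self, clock):
        # CheckGathered / CheckRndSuccess: restart when a deadline expires.
        for name in ("dline_gat", "dline_suc"):
            if getattr(self, name) is not None and clock > getattr(self, name):
                setattr(self, name, None)
                self.start = True

s = Starter()
s.on_leader(decided=False)
s.new_round(clock=0)        # expect Gathered by time 16
s.tick(clock=20)            # deadline missed: schedule a new round
print(s.start)              # True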
Input:        Leader_i, NotLeader_i, Stop_i, Recover_i, Gathered(v)_i, Continue_i, RndSuccess(v)_i
Internal:     CheckGathered_i, CheckRndSuccess_i
Output:       NewRound_i
Time-passage: ν(t)

State:
Clock ∈ ℝ, init. arbitrary
Status ∈ {alive, stopped}, init. alive
IamLeader, a boolean, init. false
Start, a boolean, init. false
RndSuccess, a boolean, init. false
DlineGat ∈ ℝ ∪ {nil}, init. nil
DlineSuc ∈ ℝ ∪ {nil}, init. nil
LastNR ∈ ℝ ∪ {∞}, init. ∞

input Stop_i
  Eff: Status := stopped

input Recover_i
  Eff: Status := alive

input Leader_i
  Eff: if Status = alive then
         if IamLeader = false then
           IamLeader := true
           if RndSuccess = false then
             Start := true
             DlineGat := nil
             DlineSuc := nil
             LastNR := Clock + ℓ

input NotLeader_i
  Eff: if Status = alive then
         LastNR := ∞
         DlineSuc := nil
         DlineGat := nil
         IamLeader := false

output NewRound_i
  Pre: Status = alive
       IamLeader = true
       Start = true
  Eff: Start := false
       DlineGat := Clock + 6ℓ + 2d
       LastNR := ∞

input Gathered(v)_i
  Eff: if Status = alive then
         DlineGat := nil
         if v ≠ nil then DlineSuc := Clock + 6ℓ + 2d

input Continue_i
  Eff: if Status = alive then
         DlineSuc := Clock + 6ℓ + 2d

internal CheckRndSuccess_i
  Pre: Status = alive
       IamLeader = true
       DlineSuc ≠ nil
       Clock > DlineSuc
  Eff: DlineSuc := nil
       Start := true
       LastNR := Clock + ℓ

time-passage ν(t)
  Pre: none
  Eff: if Status = alive then
         let t' be such that Clock + t' ≤ LastNR,
             Clock + t' ≤ DlineGat + ℓ (when DlineGat ≠ nil), and
             Clock + t' ≤ DlineSuc + ℓ (when DlineSuc ≠ nil)
         Clock := Clock + t'

Fig. 17. Automaton STARTER for process i (part 2).

6.4. Correctness and analysis of S_PAX

Even in a nice execution fragment a round may not reach success. This is possible when agents are committed to reject the first round started in the nice execution fragment, because they are committed for higher numbered rounds started before the beginning of the nice execution fragment. However, in such a case a new round is started, and nothing can prevent the success of the new round: in the newly started round, alive processes are not committed for higher numbered rounds, since during the first round they inform the leader of the round number for which they are committed, and the leader, when starting a new round, always uses a round number greater than any round number ever seen. In this section we will prove that in a long enough nice execution fragment termination is guaranteed.
Remember that S_PAX is the system obtained by composing system S_LEA with one automaton BASICPAXOS_i and one automaton STARTER_i for each process i. Since this system contains as a subsystem the system S_BPX, it guarantees agreement and validity. However, in a long enough nice execution fragment of S_PAX, termination is achieved too. The following lemma states that in a long enough nice, unique-leader execution, the leader reaches a decision. We recall that T̂_{i,α} = max{7ℓ + 2d, t̂_{i,α} + 2ℓ} and that t̂_{i,α} is the time of occurrence of action Init(v)_i in α (see Definition 6.13).

Lemma 16. Suppose that α is an execution fragment of S_PAX starting in a reachable state s in which s.Decision_i = nil and that: (i) α is nice; (ii) α is a unique-leader execution, with process i leader; (iii) α lasts for more than T̂_{i,α} + 20ℓ + 7d time. Then by time T̂_{i,α} + 20ℓ + 7d the leader i has reached a decision.

Proof. First we notice that system S_PAX contains as a subsystem S_BPX; hence, by using Theorem 3, the projection of α on the subsystem S_BPX is actually an execution of S_BPX, and thus Lemmas 14 and 15 can be applied in α.

For simplicity, in the following we assume that T̂_{i,α} = 0, i.e., that process i has executed an Init(v)_i action before α; at the end of each case we consider, we will add T̂_{i,α} to the time bound to take into account the possibility that process i has to wait for an Init(v)_i action. Notice that T̂_{i,α} = 0 implies that T̂_{i,β} = 0 for any fragment β of α starting at some state of α and ending in the last state of α.

Let s' be the first state of α such that no "Collect" message sent by a process k ≠ i is present in CHANNEL_{k,j} or in InMsgs_j for any j. State s' exists in α and its time of occurrence is at most ℓ + d: indeed, since the execution is nice, all the messages that are in the channels in state s are delivered within d time and messages present in any InMsgs set are processed within ℓ time, and since i is the unique leader, from then on no message sent by a process k ≠ i is present in any channel or in any InMsgs set. Let α' be the fragment of α beginning at s'. Since α' is a fragment of α, we have that α' is nice, process i is the unique leader in α', and T̂_{i,α'} = 0.

If process i has started a round r' by state s' and round r' is successful, then round r' ends by time T̂_{i,α'} + 5ℓ + 2d = 5ℓ + 2d in α'. Indeed, if the action that brings the system into state s' is a NewRound_i action for round r', then by Lemma 14 the round ends by time T̂_{i,α'} + 5ℓ + 2d = 5ℓ + 2d; if action NewRound_i for round r' has been executed before, round r' ends even more quickly and the time bound holds anyway. Since the time of occurrence of s' is at most ℓ + d, round r' ends by time 6ℓ + 3d. Considering the possibility that process i has to wait for an Init(v)_i action, we have that round r' ends by time T̂_{i,α} + 6ℓ + 3d in α. Hence the lemma is true in this case.

Assume instead that either (a) process i has started a round r' by state s' but round r' is not successful, or (b) process i has not started any round by state s'. In both cases process i executes a NewRound_i action by time T̂_{i,α'} + 7ℓ + 2d = 7ℓ + 2d in α'. Indeed, in case (a), by the code of STARTER_i, action CheckRndSuccess_i is executed within T̂_{i,α'} + 6ℓ + 2d = 6ℓ + 2d time and it takes an additional ℓ time to execute action NewRound_i; in case (b), by the code of BPLEADER_i, action NewRound_i is executed within ℓ time. Let r'' be the round started by such an action.
Let s'' be the state after the execution of the NewRound_i action and let α'' be the fragment of α starting in s''. Since α'' is a fragment of α, we have that α'' is nice, process i is the unique leader in α'', and T̂_{i,α''} = 0. We notice that, since the time of occurrence of state s' is at most ℓ + d, the time of occurrence of s'' is at most 8ℓ + 3d in α. We now distinguish two possible cases.

Case 1: Round r'' is successful. In this case, by Lemma 14, round r'' is successful within T̂_{i,α''} + 5ℓ + 2d = 5ℓ + 2d time in α''. Since the time of occurrence of s'' is at most 8ℓ + 3d, round r'' ends by time 13ℓ + 5d in α. Considering the possibility that process i has to wait for an Init(v)_i action, we have that round r'' ends by time T̂_{i,α} + 13ℓ + 5d in α. Hence the lemma is true in this case.

Case 2: Round r'' is not successful. By the code of STARTER_i, action NewRound_i is executed within T̂_{i,α''} + 7ℓ + 2d = 7ℓ + 2d time in α''; indeed, it takes T̂_{i,α''} + 6ℓ + 2d to execute action CheckRndSuccess_i and an additional ℓ time to execute action NewRound_i. Let r''' be the new round started by i with such an action, let s''' be the state of the system after the execution of this NewRound_i action, and let α''' be the fragment of α'' beginning at s'''. The time of occurrence of s''' is at most 15ℓ + 5d in α. Clearly α''' is nice and process i is the unique leader in α'''.

Any alive process j that rejected round r'' because of a round r̃_j with r̃_j > r'' has responded to the "Collect" message of round r'' with a message (r'', "OldRound", r̃_j)_{j,i}, informing the leader i about round r̃_j. Since α'' is nice, all the "OldRound" messages are received before state s'''. Since action NewRound_i uses a round number greater than all the ones received in "OldRound" messages, we have that for any alive process j, s'''.Commit_j < r'''. Let J be the set of alive processes. In state s''', for every j ∈ J and every process k, CHANNEL_{k,j} does not contain any "Collect" message belonging to any round r̃ > r''', nor is such a message present in any InMsgs_j set (indeed, this is true already in state s'). Finally, since α'' is nice, by the definition of a nice execution fragment we have that J contains a majority of the processes. Hence we can apply Lemma 15 to the execution fragment α'''. By Lemma 15, round r''' is successful within T̂_{i,α'''} + 5ℓ + 2d = 5ℓ + 2d time from the beginning of α'''. Since the time of occurrence of s''' is at most 15ℓ + 5d in α, round r''' ends by time 20ℓ + 7d in α. Considering the possibility that process i has to wait for an Init(v)_i action, we have that round r''' ends by time T̂_{i,α} + 20ℓ + 7d in α. Hence the lemma is true also in this case.

If the execution is stable for enough time, then the leader election eventually elects a unique leader (Lemma 7). In the following theorem we consider a nice execution fragment α, and we let i be the process eventually elected unique leader. We recall that t̂_i is the time of occurrence of action Init(v)_i in α and that ℓ and d are constants.

Theorem 17. Let α be a nice execution fragment of S_PAX starting in a reachable state and lasting for more than t̂_i + 35ℓ + 13d. Then the leader i executes Decide(v')_i by time t̂_i + 32ℓ + 11d from the beginning of α. Moreover, by time t̂_i + 35ℓ + 13d from the beginning of α, any alive process j executes Decide(v')_j.

Proof. Since S_PAX contains S_LEA and S_BPX as subsystems, by Theorem 3 we can use any property of S_LEA and S_BPX.
Since the execution fragment α is nice (and thus stable), by Lemma 7 there is a unique leader by time 4ℓ + 2d. Let s' be the first unique-leader state of α and let i be the leader; by Lemma 7 the time of occurrence of s' is at most 4ℓ + 2d. Let α' be the fragment of α starting in state s'. Since α is nice, α' is nice. By Lemma 16 we have that the leader reaches a decision by time T̂_{i,α'} + 20ℓ + 7d from the beginning of α'. Summing up the times, and noticing that T̂_{i,α'} ≤ t̂_{i,α'} + 7ℓ + 2d and that t̂_{i,α'} ≤ t̂_i, we have that the leader reaches a decision by time t̂_i + 31ℓ + 11d; within an additional ℓ time, action Decide(v')_i is executed. Since the leader reaches a decision by time t̂_i + 31ℓ + 11d, by Lemma 11 a decision is reached by every alive process j within an additional 3ℓ + 2d time, that is, by time t̂_i + 34ℓ + 13d; within an additional ℓ time, action Decide(v')_j is executed.

6.5. Messages

It is not difficult to see that in a nice execution, which is an execution with no failures, the number of messages spent in a round is linear in the number of processes. Indeed, in a successful round the leader broadcasts two messages and the agents respond to the leader's messages; once the leader has reached a decision, another broadcast is enough to spread this decision to the agents. It is easy to see that, if everything goes well, at most 6n messages are sent to have all the alive processes reach the decision. However, failures may cause the sending of extra messages, and it is not difficult to construct situations where the number of messages sent is quadratic in the number of processes. For example, if before i becomes the unique leader all the processes act as leaders and send messages, then even if i becomes the unique leader and conducts a successful round there are Θ(n²) messages in the channels, which are delivered to the agents, which respond to these messages. Automaton BPSUCCESS_i keeps sending messages to processes that do not acknowledge the "Success" messages; if a process is dead and never recovers, an infinite number of messages is sent. In a real implementation the leader clearly should not send messages to dead processes. Finally, the automaton DETECTOR_i sends an infinite number of messages; however, the information provided by this automaton can be used also by other applications.

6.6. Concluding remarks

The PAXOS algorithm was devised in [19]. In this section we have provided a new presentation of the PAXOS algorithm. We conclude this section with a few remarks.

The first remark concerns the use of majorities for info-quorums and accepting-quorums. The only property that is used is that there exists at least one process common to any info-quorum and any accepting-quorum. Thus any quorum scheme for info-quorums and accepting-quorums that guarantees the above property can be used.
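This remark can be checked mechanically: all the proofs of Section 6.2.3 use is that every info-quorum intersects every accepting-quorum. Below is a small sketch (ours) of that check, with majorities as the special case:

from itertools import combinations

def valid_quorum_scheme(info_quorums, accepting_quorums):
    # The only property Section 6 relies on: every pair intersects.
    return all(set(q1) & set(q2)
               for q1 in info_quorums for q2 in accepting_quorums)

# Majorities over 5 processes are one such scheme:
procs = range(5)
majorities = [set(c) for k in (3, 4, 5) for c in combinations(procs, k)]
print(valid_quorum_scheme(majorities, majorities))        # True

# A non-example: two disjoint "quorums" break the property.
print(valid_quorum_scheme([{0, 1}], [{2, 3}]))            # False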
As pointed out also in [21], the amount of stable storage needed can be reduced to a very few state variables. These are the last round started by a leader (which is stored in the CurRnd variable), the last round in which an agent accepted the value and the value of that round (variables LastR and LastV), and the round for which an agent is committed (variable Commit). These variables are used to keep consistency, that is, to always propose values that are consistent with previously proposed values; if they are lost, then consistency might not be preserved. In our setting we assumed that the entire state of the processes is in stable storage, but in a practical implementation only the variables described above need to be stable.
{"url":"http://research.microsoft.com/en-us/um/people/blampson/63a-RevisitingPaxos/63a-RevisitingPaxosOCR.htm","timestamp":"2014-04-18T05:57:39Z","content_type":null,"content_length":"1047896","record_id":"<urn:uuid:6ea51729-4ff5-4a2a-ab7e-e33c9f6f0265>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Simplify and Split the Schrödinger Equation for Hydrogen

In quantum physics, you may need to simplify and split the Schrödinger equation for hydrogen. Here's the usual quantum mechanical Schrödinger equation for the hydrogen atom:

$$\left(-\frac{\hbar^2}{2m_e}\nabla_e^2 - \frac{\hbar^2}{2m_p}\nabla_p^2 - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}_e - \mathbf{r}_p|}\right)\psi(\mathbf{r}_e, \mathbf{r}_p) = E\,\psi(\mathbf{r}_e, \mathbf{r}_p)$$

The problem is that you're taking into account the distance the proton is from the center of mass of the atom, so the math is messy. If you were to assume that the proton is stationary and that $\mathbf{r}_p = 0$, this equation would break down to the following, which is much easier to solve:

$$\left(-\frac{\hbar^2}{2m_e}\nabla^2 - \frac{e^2}{4\pi\varepsilon_0\,r}\right)\psi(\mathbf{r}) = E\,\psi(\mathbf{r})$$

Unfortunately, that equation isn't exact because it ignores the movement of the proton, so you see the more complete version of the equation in quantum mechanics texts.

To simplify the usual Schrödinger equation, you switch to center-of-mass coordinates. The center of mass of the proton/electron system is at this location:

$$\mathbf{R} = \frac{m_e\mathbf{r}_e + m_p\mathbf{r}_p}{m_e + m_p}$$

And the vector between the electron and proton is $\mathbf{r} = \mathbf{r}_e - \mathbf{r}_p$.

Using vectors R and r instead of $\mathbf{r}_e$ and $\mathbf{r}_p$ makes the Schrödinger equation easier to solve. The Laplacian for R is $\nabla_R^2$ and the Laplacian for r is $\nabla_r^2$. How can you relate $\nabla_R^2$ and $\nabla_r^2$ to the usual equation's $\nabla_e^2$ and $\nabla_p^2$? After the algebra settles, you get

$$\frac{1}{m_e}\nabla_e^2 + \frac{1}{m_p}\nabla_p^2 = \frac{1}{M}\nabla_R^2 + \frac{1}{m}\nabla_r^2$$

where $M = m_e + m_p$ is the total mass and $m = m_e m_p/(m_e + m_p)$ is called the reduced mass. When you put together the equations for the center of mass, the vector between the proton and the electron, the total mass, and m, the time-independent Schrödinger equation becomes the following:

$$\left(-\frac{\hbar^2}{2M}\nabla_R^2 - \frac{\hbar^2}{2m}\nabla_r^2\right)\psi(\mathbf{R}, \mathbf{r}) + V(\mathbf{R}, \mathbf{r})\,\psi(\mathbf{R}, \mathbf{r}) = E\,\psi(\mathbf{R}, \mathbf{r})$$

Then, given the vectors R and r, the potential is given by

$$V(\mathbf{R}, \mathbf{r}) = -\frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}$$

The Schrödinger equation then becomes

$$\left(-\frac{\hbar^2}{2M}\nabla_R^2 - \frac{\hbar^2}{2m}\nabla_r^2\right)\psi(\mathbf{R}, \mathbf{r}) - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}\,\psi(\mathbf{R}, \mathbf{r}) = E\,\psi(\mathbf{R}, \mathbf{r})$$

This looks easier: the main improvement is that you now have $|\mathbf{r}|$ in the denominator of the potential energy term rather than $|\mathbf{r}_e - \mathbf{r}_p|$. Because the equation contains terms involving either R or r but not both, the form of this equation indicates that it's a separable differential equation. And that means you can look for a solution of the following form:

$$\psi(\mathbf{R}, \mathbf{r}) = \psi(\mathbf{R})\,\psi(\mathbf{r})$$

Substituting the preceding equation into the one before it gives you the following:

$$-\frac{\hbar^2}{2M}\nabla_R^2\,\psi(\mathbf{R})\,\psi(\mathbf{r}) + \left(-\frac{\hbar^2}{2m}\nabla_r^2 - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}\right)\psi(\mathbf{R})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{R})\,\psi(\mathbf{r})$$

And dividing this equation by $\psi(\mathbf{R})\,\psi(\mathbf{r})$ gives you

$$\frac{1}{\psi(\mathbf{R})}\left(-\frac{\hbar^2}{2M}\nabla_R^2\right)\psi(\mathbf{R}) + \frac{1}{\psi(\mathbf{r})}\left(-\frac{\hbar^2}{2m}\nabla_r^2 - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}\right)\psi(\mathbf{r}) = E$$

Well, well, well. This equation has terms that depend on either $\psi(\mathbf{R})$ or $\psi(\mathbf{r})$ but not both. That means you can separate this equation into two equations, like this (where the total energy, E, equals $E_R + E_r$):

$$\frac{1}{\psi(\mathbf{R})}\left(-\frac{\hbar^2}{2M}\nabla_R^2\right)\psi(\mathbf{R}) = E_R$$

$$\frac{1}{\psi(\mathbf{r})}\left(-\frac{\hbar^2}{2m}\nabla_r^2 - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}\right)\psi(\mathbf{r}) = E_r$$

Multiplying the first by $\psi(\mathbf{R})$ gives you

$$-\frac{\hbar^2}{2M}\nabla_R^2\,\psi(\mathbf{R}) = E_R\,\psi(\mathbf{R})$$

And multiplying the second by $\psi(\mathbf{r})$ gives you

$$-\frac{\hbar^2}{2m}\nabla_r^2\,\psi(\mathbf{r}) - \frac{e^2}{4\pi\varepsilon_0\,|\mathbf{r}|}\,\psi(\mathbf{r}) = E_r\,\psi(\mathbf{r})$$

Now you have two Schrödinger equations, which you can solve independently.
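As a quick numerical aside (ours, not part of the original article), you can see why treating the proton as stationary works so well: the reduced mass differs from the electron mass by only about 0.05 percent.

# Reduced mass of the hydrogen atom (approximate values in kg).
m_e = 9.1093837e-31   # electron mass
m_p = 1.6726219e-27   # proton mass

m = m_e * m_p / (m_e + m_p)   # the reduced mass from the text
print(m / m_e)                # ~0.999456: within 0.06% of m_e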
{"url":"http://www.dummies.com/how-to/content/how-to-simplify-and-split-the-schrodinger-equation.html","timestamp":"2014-04-18T15:10:32Z","content_type":null,"content_length":"58263","record_id":"<urn:uuid:03aad404-4034-4131-b19b-15842c534fc8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Question about writing roots in fractions

dorkymichelle: I'm doing a calculus problem and I found that $x = \frac{\sqrt{3}}{3}$, but the answer in the back of the book says $\frac{1}{\sqrt{3}}$. I put them both in the calculator and they both come out to about 0.57... Why did the book put the root in the denominator? I thought it was standard form to have it in the numerator? Not sure where this topic belongs, sorry if it's in the wrong category!

Reply: Note that $\frac{1}{\sqrt{3}} = \frac{1}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}} = \frac{\sqrt{3}}{3}$. It is considered to be more elegant to rationalise denominators, because it better represents what a fraction is, namely dividing a length into a countable number of pieces. However, rationalising the denominator is not usually done until the very end, as it could result in more work with whatever you happen to be doing. Perhaps your book intends you to use this answer for further calculations.

dorkymichelle: I found it odd because I didn't encounter $\frac{1}{\sqrt{3}}$ in my calculations at all. Is there a way to solve for x in the equation $-12x^2 + 4 = 0$ without using the quadratic formula?

Reply: $-12x^2 + 4 = 0 \implies 12x^2 = 4 \implies x^2 = \frac{1}{3} \implies x = \pm\frac{1}{\sqrt{3}}$.

dorkymichelle: Wow... I can't believe I didn't see that!
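A quick check (not part of the original thread) confirms both the numerical equality and the algebra that avoids the quadratic formula:

import math

# The two forms of the answer agree numerically:
print(1 / math.sqrt(3), math.sqrt(3) / 3)   # both ~0.5773502691896

# Solving -12x^2 + 4 = 0 without the quadratic formula:
# 12x^2 = 4  =>  x^2 = 1/3  =>  x = +-1/sqrt(3)
print(math.sqrt(1 / 3))                     # ~0.5773502691896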
{"url":"http://mathhelpforum.com/algebra/141203-solved-question-about-writing-roots-fractions.html","timestamp":"2014-04-16T06:25:21Z","content_type":null,"content_length":"50357","record_id":"<urn:uuid:8507fcfa-f0dc-430b-b549-87da12a6097d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Interactive Math

This notebook file is full of fun and offers plenty of skill practice for telling time. Common Core Math: Measurement & Data 1.MD.3
Notebook file full of fun while practicing addition and subtraction of equivalent fractions.
Interactive file for Smart Notebook. This fun Ghost Hunt activity can be used year round to reinforce math skills: multiples.
Interactive file for Smart Notebook. This fun Ghost Hunt activity can be used year round to reinforce math skills for equivalent fractions.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Students trace numbers, set up number sentences with objects, and solve the problem.
Interactive Notebook activity with a fun math theme. Uses a cash register.
Interactive notebook activities with a fun shapes and pattern theme.
Practice x1 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x7 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x8 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x9 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x10 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x11 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x12 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x2 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x3 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x4 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x5 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Practice x6 multiplication tables with this fun, colorful game. Interactive .notebook file for Smart Board.
Interactive .notebook file. Use with Smart Notebook software or viewer. Graph Coordinates.
Interactive .notebook file. Use with Smart Notebook software or viewer. Graph Coordinates.
Interactive .notebook file. Use with Smart Notebook software or viewer. Graph Coordinates.
Comprehensive introduction to basic concepts of the metric system of measurement with numerous related materials. Includes useful posters, interactive exercises, and printable worksheets. Emphasizes becoming familiar with the metric system, not converting to and from the traditional U.S. system.
Interactive .notebook file for Smart Board. "Ask a Genius": find out or review basic facts about the metric system.
Interactive .notebook file for Smart Board. Manipulate metric-to-metric measurements by correctly moving the decimal point.
Interactive .notebook file for Smart Board. Match metric prefixes with their symbol and their multiplier.
Interactive .notebook file for Smart Board. Associate common items with the metric unit used to measure them.
{"url":"http://www.abcteach.com/directory/interactive-smart-notebook-files-interactive-math-9638-4-3","timestamp":"2014-04-16T16:10:15Z","content_type":null,"content_length":"154220","record_id":"<urn:uuid:4780c617-2168-4041-b08b-69ae4033c58d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Monday, in our MAT8181 class, we've discussed seasonal unit roots from a practical perspective (the theory will be briefly mentioned in a few weeks, once we've seen multivariate models). Consider some time series, for instance traffic on French roads:

> autoroute=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/autoroute.csv",
+ header=TRUE,sep=";")
> X=autoroute$a100
> T=1:length(X)
> plot(T,X,type="l",xlim=c(0,120))
> reg=lm(X~T)
> abline(reg,col="red")

As discussed in a...

Fast computation of cross-validation in linear models

The leave-one-out cross-validation statistic is given by

$$\text{CV} = \frac{1}{n}\sum_{i=1}^{n}\left[y_i - \hat{y}_{(i)}\right]^2,$$

where $y_1,\dots,y_n$ are the observations, and $\hat{y}_{(i)}$ is the predicted value obtained when the model is estimated with the $i$th case deleted. This is also sometimes known as the PRESS (Prediction Residual Sum of Squares) statistic. It turns out that for linear models, we do not actually have to estimate the...

Moving the North Pole to the Equator

I am still working with @3wen on visualizations of the North Pole. So far, it was not that difficult to generate maps, but we started to have problems with the ice region in the Arctic. More precisely, it was complicated to compute the area of this region (even if we can easily get a shapefile). Consider the globe,

worldmap <- ggplot()...

Testing for trend in ARIMA models

Today's email brought this one: I was wondering if I could get your opinion on a particular problem that I have run into during the reviewing process of an article. Basically, I have an analysis where I am looking at a couple of time series and I wanted to know if, over time, there was an upward trend in the...

where did the normalising constants go?! [part 2]

Coming (swiftly and smoothly) back home after this wonderful and intense week in Banff, I hugged my loved ones, quickly unpacked, ran a washing machine, and then sat down to check where and how my reasoning was wrong. To start with, I experimented with a toy example in R, and (of course!) it produced the...

where did the normalising constants go?! [part 1]

When listening this week to several talks in Banff handling large datasets or complex likelihoods by parallelisation, splitting the posterior as a product and handling each term of this product on a separate processor or thread as proportional to a probability density, then producing simulations from the m_i's and attempting at deriving simulations from the original product...

How effective is my research programming workflow? The Philip Test – Part 1

Philip Guo, who writes a wonderful blog on his views and experiences of academia – including a lot of interesting programming stuff – came up with a research programming version of The Joel Test last summer, and since then I've been thinking of writing a series commenting on how well I fulfil each of the items on...

Displaying time series, spatial, and space-time data with R is available for pre-order

Two years ago, motivated by a proposal from John Kimmel, Executive Editor at Chapman & Hall/CRC Press, I started working...

Writing a book with a little help from Emacs and friends

This post provides technical details about the making of my book "Displaying time series, spatial, and space-time data with R"...

Near-zero variance predictors. Should we remove them?

Datasets sometimes come with predictors that take a unique value across samples. Such an uninformative predictor is more common than you might think. This kind of predictor is not only non-informative, it can break some models you may want to fit to your data (see example below). Even more common is the presence of predictors that...
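Circling back to the cross-validation post above: for a linear model, the PRESS shortcut means the leave-one-out statistic comes straight from a single fit, via the hat values. A minimal R sketch (my addition, using the built-in cars data for illustration):

fit <- lm(dist ~ speed, data = cars)   # any linear model will do
h <- hatvalues(fit)                    # leverages, the diagonal of the hat matrix
e <- residuals(fit)
mean((e / (1 - h))^2)                  # the LOOCV statistic, with no refitting at all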
{"url":"http://www.r-bloggers.com/page/11/?s=latex","timestamp":"2014-04-16T07:39:29Z","content_type":null,"content_length":"39716","record_id":"<urn:uuid:591091b1-0793-4f3b-b874-d94565fe5ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodbridge, VA Statistics Tutor
Find a Woodbridge, VA Statistics Tutor

I have a masters in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way ...
14 Subjects: including statistics, calculus, geometry, algebra 1

I have 11 years' experience teaching and mentoring college undergraduate and graduate students on quantitative research projects, including teaching applied statistics and research methods, and using SPSS, STATA, and Excel. I also have over 20 years of research experience in the social sciences, mo...
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word

...I look forward to tutoring you! Katherine
I am very patient and understanding of students' needs. I work with students in order to help them learn how to budget their time and prioritize tasks.
56 Subjects: including statistics, chemistry, reading, calculus

...As an undergraduate student in Electrical Engineering and Physics and as a graduate student, I took courses in mathematical methods for physics and engineering. These courses included fundamental theory and techniques (including numerical) of linear algebra and its applications to engineering an...
16 Subjects: including statistics, physics, calculus, geometry

...I am currently a math major at George Mason University in Fairfax, Va. I currently tutor at a Kumon center and realize this is what I love and plan to do with rest of my life. I plan on graduating in 2015 with a BS in Mathematics with hopefully a Computer Science minor.
10 Subjects: including statistics, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Woodbridge_VA_statistics_tutors.php","timestamp":"2014-04-17T19:35:58Z","content_type":null,"content_length":"24215","record_id":"<urn:uuid:62399450-76e1-4f92-9377-1ef5e16a80f2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: infix problem?

From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: infix problem?
Date: Wed, 14 May 2008 10:10:03 -0400

But even with a double declaration you cannot read 16 digits and retain them all. This is beyond the capability of a double data type. You should use a string variable type to deal with a 16-character ID code properly, as is often discussed on this list. Even if an ID is numeric, there is no downside to treating it as a string, and doing so will ensure that this kind of problem does not bite.

Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata

On May 14, 2008, at 02:33, Nick wrote:

This is arguable. The help of -infix- does indicate that if you want a -double- you need to specify that, so Stata is putting the onus on you to think about variable types. Otherwise put, your punishment is that you got what you asked for. Despite that, the idea that Stata should be smart on your behalf is naturally attractive. Quite what that would mean with -infix- is not clear except to Stata developers who know the exact algorithm. In particular, a decision on optimal variable types presumably implies two passes through the data, i.e. the field width is not enough to decide.

Hau Chyi wrote:

I've downloaded several variables from the SIPP (Survey of Income and Program Participation), and realized there seems to be a problem with the -infix- command, which I hope can be illustrated by the following. Here is only one observation with one variable, which looks like below in the asc file. This is the SSUID, the survey unit id of each individual. If you save this into an asc file as, say, "d:\documents\test\test.asc", and run the following lines:

infix SSUID 1-16 using "d:\documents\test\test.asc";
format SSUID %16.0f;

and then -list SSUID-, the variable Stata reads is:

| SSUID |
1. | 1234567948140544 |

It's completely wrong! I realized this after discovering some families I generated from SSUID (and other family identifiers) have more than 100 kids!! The problem disappears when I do

-infix double SSUID 1-16 using ...-

In other words, the precision -infix- chooses automatically is wrong. Is this a bug of -infix- or some memory allocation error of my computer? No matter what, I recommend that if you are infixing variables with more than 10 digits, you'd better check the ascii file to see if it's truly...
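A quick way to see the ceiling Kit describes: a double holds integers exactly only up to 2^53, about 9.007e15, which is a shade under 16 digits (roughly 15.95 significant digits). Illustrated here in R (my addition; the same IEEE double arithmetic applies to Stata):

sprintf("%.0f", 2^53)               # "9007199254740992": the exact-integer range ends here
sprintf("%.0f", 9999999999999999)   # "10000000000000000": a 16-digit ID silently rounds
# hence the advice to read 16-character IDs as strings, never as numbers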
{"url":"http://www.stata.com/statalist/archive/2008-05/msg00522.html","timestamp":"2014-04-16T16:45:34Z","content_type":null,"content_length":"8029","record_id":"<urn:uuid:a14b6fca-773b-4546-a803-45885515f717>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
An Almost Linear Time and O(n log n + e) Messages Distributed Algorithm for Minimum-Weight Spanning Trees

- IN PROCEEDINGS OF THE 20TH INTERNATIONAL SYMPOSIUM ON DISTRIBUTED COMPUTING (DISC), 2006. Cited by 24 (7 self).
We present a distributed algorithm that constructs an O(log n)-approximate minimum spanning tree (MST) in any arbitrary network. This algorithm runs in time Õ(D(G) + L(G, w)) where L(G, w) is a parameter called the local shortest path diameter and D(G) is the (unweighted) diameter of the graph. Our algorithm is existentially optimal (up to polylogarithmic factors), i.e., there exist graphs which need Ω(D(G) + L(G, w)) time to compute an H-approximation to the MST for any H ∈ [1, Θ(log n)]. Our result also shows that there can be a significant time gap between exact and approximate MST computation: there exist graphs in which the running time of our approximation algorithm is exponentially faster than the time-optimal distributed algorithm that computes the MST. Finally, we show that our algorithm can be used to find an approximate MST in wireless networks and in random weighted networks in almost optimal Õ(D(G)) time.

- SPAA, 2007. Cited by 21 (10 self).
We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m, t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an "advice" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (⌈log n⌉, 0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0, t)-advising scheme satisfies t ≥ ˜Ω(√n). Our main result is the construction of an (O(1), O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graphs, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m, 0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand, we design an (m, 1)-advising scheme for MST with advices of constant average size, that is, one round is enough to decrease the average size of the advices from log n to constant.

- Laboratory for Computer Science, Massachusetts Institute of Technology, 1988. Cited by 12 (3 self).
Jennifer Lundelius Welch, Leslie Lamport, Digital Equipment Corporation, Systems Research Center. Abstract: [...]rithms are often hard to prove correct because they have no natural decomposition into separately provable parts. This paper presents a proof technique for the modular verification of such non-modular algorithms. It generalizes existing verification techniques based on a totally-ordered hierarchy of refinements to allow a partially-ordered hierarchy, that is, a lattice of different views of the algorithm. The technique is applied to the well-known distributed minimum spanning tree algorithm of Gallager, Humblet and Spira, which has until recently lacked a rigorous proof.

- IN THE INTERNET. PROC. OF JOINT CONFERENCE ON INFORMATION SCIENCES '95, 1995. Cited by 2 (0 self).
...this paper we propose a new fully distributed algorithm to build a minimum spanning tree (MST) in a generic communication network. During the execution, the algorithm maintains a collection of disjoint trees spanning all the group members. Every tree, which initially consists of only one node, independently expands by joining the closest tree, until all the nodes are connected in a single tree. The resulting communication topology is both robust (there are no singularities subject to failures) and scalable (every node stores a limited amount of local information that is independent of the size of the network).

- Applied Soft Computing, 2011. Cited by 2 (2 self).
Due to the hardness of solving the minimum spanning tree (MST) problem in stochastic environments, the stochastic MST (SMST) problem has not received the attention it merits, specifically when the probability distribution function (PDF) of the edge weight is not a priori known. In this paper, we first propose a learning automata-based sampling algorithm (Algorithm 1) to solve the MST problem in stochastic graphs where the PDF of the edge weight is assumed to be unknown. At each stage of the proposed algorithm, a set of learning automata is randomly activated and determines the graph edges that must be sampled in that stage. As the proposed algorithm proceeds, the sampling process focuses on the spanning tree with the minimum expected weight. Therefore, the proposed sampling method is capable of decreasing the rate of unnecessary samplings and shortening the time required for finding the SMST. The convergence of this algorithm is theoretically proved and it is shown that by a proper choice of the learning rate the spanning tree with the minimum expected weight can be found with a probability close enough to unity. Numerical results show that Algorithm 1 outperforms the standard sampling method. Selecting a proper learning rate is the most challenging issue in learning automata theory, by which a good trade-off can be achieved between the cost and efficiency of the algorithm. To improve the efficiency (i.e., the convergence speed and convergence rate) of Algorithm 1, we...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1036936","timestamp":"2014-04-18T22:21:06Z","content_type":null,"content_length":"27787","record_id":"<urn:uuid:c9512ef1-4048-427c-87d1-e8a425293a02>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
CASE STUDY: Acid Rain

Environmental Science Main Page
"Consider a Spherical Cow", John Harte. Narrative and case study design by Robert R. Gotwals, Jr.

Purpose: to learn about the chemical reactions that lead to acid rain.

Background Reading:

Acid rain is defined as the mixing of atmospheric pollutants with natural precipitation (rain, snow, sleet, etc.) to form an acidic aqueous (water) compound. When this acidic water is deposited on the Earth in the form of rain or snow, the acid causes substantial damage to farm crops, plants and trees, buildings and statues, and living creatures who live in ponds, lakes, and rivers. Acid rain is produced by the emission of various pollutants into the atmosphere by various industries. Coal-burning plants which produce electricity, for example, emit large quantities of sulfur-containing chemicals into the air. These chemicals undergo a series of chemical reactions in the lower atmosphere. The products of these reactions combine with natural rainwater and snow, and are then deposited as acid rain.

A quick overview of acidity might be helpful. There are a number of definitions for an acid, as you know if you have taken first-year chemistry! We'll use a fairly simple definition. An acid is created when a hydrogen ion (a hydrogen atom with a positive charge, written H^+) mixes with ordinary water (H[2]O) to form a hydronium ion (H[3]O^+). The reaction looks like this:

H^+ + H[2]O -----> H[3]O^+

If we know how much of the hydrogen ion we have (that is, if we know its concentration, written as [H^+]), we can calculate its pH. pH is a measure of the acidity of a liquid solution, and is shown on a scale from 1 to 14:

ACIDIC (1 to below 7) | NEUTRAL (7) | BASIC (above 7 to 14)

According to the scale above, a chemical with a pH less than 7 is considered to be an acid, a neutral substance (like pure water) has a pH of 7, and any liquid above a pH of 7 is called basic. We are concerned, of course, with rainwater that has a pH of less than 7!

To build this model, we need to investigate the chemistry of acid rain in a little more detail. The primary pollutant generated by coal-burning plants is sulfuric acid, or H[2]SO[4]. When sulfuric acid is deposited into the atmosphere, it dissociates, or breaks apart, into two new products:

H[2]SO[4] -----> H^+ + HSO[4^-]

Notice the charges on the right: a plus charge on the hydrogen and a minus charge on the sulfur compound add up to zero, which is the charge on the sulfuric acid. More importantly, notice that we now have a hydrogen ion (H^+) on the loose! This ion can now react with water to form acid rain, but we're not quite done yet! We need to investigate two "k" numbers: K and k!

The letter "K" is called the equilibrium constant. This is a unitless number that measures the tendency of a chemical to break apart. Chemists prefer to say: K is a measure of the tendency of a reaction to go to the right, that is, to follow the reaction arrow to form products. The table below shows this tendency:

If K > 1, there is a HIGH probability that the reaction will go to the right
If K = 1, the reaction is at equilibrium, and it will stay just where it is
If K < 1, there is a LOW probability that the reaction will go to the right

An example: what is the value of K for these non-chemical situations?

1. The probability that you can take a week off of school whenever you feel like it? My guess is that K is very low, a number like 0.0000000000001!
2. The probability that it will rain sometime in the next month? K is probably very high, something like 1,000.

In the reaction above, K is 1,000. Since this is a number substantially bigger than 1, we can say that it is absolutely true that H[2]SO[4] will dissociate into the two products. If sulfuric acid gets into the atmosphere, we can guarantee that it will break apart. It should be noted that in most books you will generally see the term "K[a]", where the little "a" stands for acid. K[a] is called the "dissociation constant of an acid", and you look these up in tables of dissociation constants found in many chemistry books.

The second "k" is the rate constant for a reaction. This "k" tells you not whether or not the reaction will occur (that's the job of K[a]!), but how fast it will go IF it is allowed to do so! In each of the acid rain reactions, there will be TWO values of k: one in the forward direction and one in the reverse direction. All of the reactions of interest here are reversible: the products can recombine to form the original chemical. Basically, a chemical can break apart (dissociate) but then, if conditions are right, connect back together again! We show this with an arrow going in both directions:

H[2]SO[4] <-----> H^+ + HSO[4^-]   (forward rate constant k[1], reverse rate constant k[-1])

Remember that if K[a] is large, there is not much of a chance that things will recombine, but it is still a possibility! Each of the reactions of importance will produce a hydrogen ion, which contributes to the overall acidity of the water. If we know the concentration of the hydrogen ion, written as [H^+], we can calculate the pH of the water using this equation:

pH = -log[10][H^+]

Building the Model:

RECOMMENDATION: make sure that you can WRITE DOWN the reactions that you will need to model. If you can't write them down, you don't understand this well enough to build the model. If this is the case, ask for help!

In this model, we need to study the concentrations of three species: H[2]SO[4], HSO[4^-], and SO[4^2-]. For each dissociation reaction, there is a dissociation constant, K[a], and a rate constant for the forward and reverse reactions. We will also have a starting concentration for each of the species above. The dissociation constants are given from a table of dissociation constants:

K[a] for H[2]SO[4]: 1 x 10^3
K[a] for HSO[4^-]: 1.3 x 10^-2

For the four rate constants (k[1], k[2], k[-1], k[-2]), you will need to use the algorithms below:

k[1] = [H[2]SO[4]]
k[2] = [HSO[4]]
k[-1] = [HSO[4]][H] / K[a1]
k[-2] = [SO[4]][H] / K[a2]

We can calculate [H] using the following algorithm (a charge balance):

[H] = [HSO[4]] + 2[SO[4]]

With the information above, you should be able to construct your basic model of the dissociation reactions of acid rain formation.

Initial Conditions:

The following initial conditions might be useful in testing your model.

• [H[2]SO[4]] = 5 x 10^-5 moles/liter (a mole is a unit for an amount of a chemical; moles/liter means that amount of the chemical mixed in 1 liter of water)
• [HSO[4]] = 0 moles/liter
• [SO[4]] = 0 moles/liter

Run your model for 8 seconds. You will need to use a fairly small dt to get this model to behave correctly. Experiment with changing the value of dt. Make sure that you add a converter to convert [H] into pH. You will need to use the built-in function "log10". Once you are ready to run your model, create a graph of [H[2]SO[4]], [HSO[4]], [SO[4]], and [H]. Use either a numeric display window or a simple data table to look at how pH changes while the reactions proceed.

Copyright © 2000. Questions or comments about this page should be directed to gotwals@shodor.org
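For readers without the systems-modeling software this handout assumes, here is one possible translation of the same flows into R with the deSolve package. The choices below, unit forward rate constants as implied by the k[1] and k[2] algorithms, and a tiny offset to avoid log10(0) at t = 0, are mine, not the handout's:

library(deSolve)

acid <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    H  <- HSO4 + 2 * SO4            # charge balance gives [H+]
    f1 <- H2SO4 - HSO4 * H / Ka1    # net rate of the first dissociation
    f2 <- HSO4  - SO4  * H / Ka2    # net rate of the second dissociation
    list(c(-f1, f1 - f2, f2),       # d[H2SO4], d[HSO4], d[SO4]
         pH = -log10(H + 1e-12))
  })
}

y0  <- c(H2SO4 = 5e-5, HSO4 = 0, SO4 = 0)   # moles/liter
out <- ode(y = y0, times = seq(0, 8, by = 0.001),
           func = acid, parms = c(Ka1 = 1e3, Ka2 = 1.3e-2))
tail(out, 1)   # pH settles near 4 as essentially all of the acid dissociates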
{"url":"http://shodor.org/succeed-1.0/enviro/labs/acidrain.html","timestamp":"2014-04-16T16:57:54Z","content_type":null,"content_length":"10316","record_id":"<urn:uuid:120fff8e-da85-4399-8e8a-2f9585b44803>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Rising Sun Member Forums - trig question - help me find X

nakman 03-19-2010 10:11 PM

trig question - help me find X

I am trying to draw out the bolt pattern of a 6 on 5.5 wheel spacer, and my limited CAD ability has revealed a very rusty trig ability. When I was 16 I could have done this on a napkin in about 2 minutes... but tonight I find myself scratching my head. :o Can someone solve this? 30-60-90 triangle; the long leg that's not the hypotenuse is 2.25, aka the radius of the 5.5" circle. I need to know how far "up" or down, or sideways to draw, so that I can draw in the hypotenuse then create a lug hole where that intersects the circle. Here's a chance to prove how smart you is.. what's X? :confused:

wesintl 03-19-2010 10:20 PM

1 * √2.25? I need to find the trig and geometry symbols but I think you will know what I'm talking about.

AxleIke 03-19-2010 10:21 PM

Sure thing:
tan 30 = x/2.250
0.5773 = x/2.25
x = 1.299"

fubuki 03-19-2010 10:23 PM

AxleIke 03-19-2010 10:26 PM

An easy way to remember these trig functions is the following mnemonic: Chief SOH CAH TOA (pronounced "Chief soak-a toe-a"). The SOH CAH TOA part is as follows:

SINE = OPPOSITE over HYPOTENUSE
COSINE = ADJACENT over HYPOTENUSE
TANGENT = OPPOSITE over ADJACENT

where the hypotenuse is obvious, the adjacent side is not the hypotenuse but the other connecting side to the angle you have, and the opposite is the one not touching the angle, if that makes any sense.

nakman 03-19-2010 10:32 PM

Dr. Axle, thanks man! :bowdown: edit: yeah I remember the SOH CAH TOA from high school, just couldn't seem to get it, and using the computer calculator probably didn't help me either. I remembered the sine of a 30 degree angle was .5, yet for some reason my answer wasn't double or half of my known value. I can't wait until Gavin learns this stuff, so I can learn it all over again. :)

wesintl 03-19-2010 10:37 PM

Doh.. I think I was in the ballpark.. lol. Can the airplane take off on that triangle Tim?

nakman 03-19-2010 10:40 PM

Only if the hypotenuse is a conveyor belt. :D edit: and good one Fubuki, was hoping someone would post that one. :)

rover67 03-19-2010 10:40 PM

Isaac is right but I think you are looking for a different number... since the lugs on that pattern are 60* apart it's tan(60) = x / 2.25, x = 3.897. Edit: Never mind, you two confused me. Don't look at these numbers. :D

nakman 03-19-2010 10:51 PM

dude you're cracking me up.. but hey I drew a wheel spacer!
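If anyone wants to skip the triangle construction entirely, the six lug centers can be generated straight from the 5.5" bolt circle. A quick sketch in R (my addition, not from the thread):

r     <- 5.5 / 2                        # bolt circle radius, inches
theta <- seq(0, 300, by = 60) * pi/180  # six lugs, 60 degrees apart
round(cbind(x = r * cos(theta), y = r * sin(theta)), 3)   # the six hole centers
2.25 * tan(pi/6)                        # 1.299, matching AxleIke's answer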
{"url":"http://www.risingsun4x4club.org/forum2/printthread.php?t=12557","timestamp":"2014-04-17T10:38:04Z","content_type":null,"content_length":"13090","record_id":"<urn:uuid:8d3ead7a-59a0-4777-b2fc-0dcf66b19c3b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Manhattan Beach Algebra Tutor
Find a Manhattan Beach Algebra Tutor

...Respectfully, Gatsha. I hold a 3rd Degree Black Belt in Shaolin Chuan Fa (a derivation of Shaolin Kenpo) and a 1st degree Black Belt in Kenpo Jiu-Jitsu. I've been training for 9 years and teaching for 5. I have the patience, interpersonal skills and enthusiasm to successfully train and teach the martial arts to children of all ages as well as adults.
2 Subjects: including algebra 1, martial arts

...I love to see my kids succeed, and with some hard work and determination by all parties I know we will succeed. With good study skills and work habits the future is limitless. For the past eighteen years, I have been teaching 1st-12th grades (and Kindergarten on occasion) at a private Christian scho...
25 Subjects: including algebra 2, algebra 1, reading, geometry

...The goal was to introduce beginning-level finance and real-life applicability layered on top of the pre-existing subject requirements. This was a huge success: my students exhibited much higher scores on their exit exams in comparison to their peers who did not take my course. The curriculum lives on within NAI's mathematics department to this day.
12 Subjects: including algebra 2, algebra 1, reading, writing

...I am very patient with children and try to bring out the best in them. I encourage children to try their hardest and praise them when they do. I am a very energetic, fun, and funny person who has a great passion for teaching.
5 Subjects: including algebra 2, precalculus, algebra 1, prealgebra

...I also have associates degrees in physical science, behavioral arts, and language arts. I tutor students in math (pre-algebra, algebra, geometry, calculus, & differential equations), science (biology, environmental science, human geography, & all areas of physics up to a college level), and test...
41 Subjects: including algebra 1, algebra 2, chemistry, physics
{"url":"http://www.purplemath.com/Manhattan_Beach_Algebra_tutors.php","timestamp":"2014-04-17T04:41:50Z","content_type":null,"content_length":"24216","record_id":"<urn:uuid:0e8a4aef-9644-4c75-87b2-cf26ed9641d9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
I need help converting decimals into fractions

You can see lots of lessons on this. If you have a problem you're stuck on, please post what you've tried so we can try to help. Thnx.

Re: I need help converting decimals into fractions

All you have to do is this. Pretend it's 0.8:
0.8 = 8/10, which can be simplified into 4/5.
So, like, if it's 0.18:
0.18 = 18/100, which can be simplified... somehow... I'm too lazy.
And 0.188:
0.188 = 188/1000, then simplify that.
So all you have to do is see if it's in the tenths, hundredths, or thousandths place and just make it a fraction like that. Hope I Helped!!!!
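To finish the simplification the second poster skipped: 18/100 shares a common factor of 2, so it reduces to 9/50, and 188/1000 shares a factor of 4, reducing to 47/250. A quick check in R (my addition) with the fractions() helper from the MASS package:

library(MASS)
fractions(0.8)     # 4/5
fractions(0.18)    # 9/50
fractions(0.188)   # 47/250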
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=4020","timestamp":"2014-04-18T11:11:54Z","content_type":null,"content_length":"19804","record_id":"<urn:uuid:9fb05a2a-2850-4cf1-8b8e-c65288b72683>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello - Matlab question

hey fellas.. greetings from Kuwait. I need a little help. I'm really new to Matlab; just started today, actually. Here's what I need to do:

1- Define a variable and assign to it the values from 0 to positive infinity, say x.
2- Do some calculations including that variable, and I want the answers to be in terms of that variable, x. Calculations include integration, inverse Laplace transform, and other things.

Now I've been searching around and reading/watching tutorials today and I've figured out how to do the operations I need to do. All that's missing is how to do them and find the answers in terms of that variable, and how to define the variable in the first place.

p.s: I don't know about your homework help policy, but I assure you I've done my best. I'm not a math major, and I've never been obliged to use Matlab until today. I've never had any Matlab training, but I've done a great job so far today learning the basics. I just couldn't figure out this one thing. In fact, I don't even know if it's possible. So I appreciate any help.

1. Fire up Matlab (wait 10 minutes for this to happen)
2. Open up any of the tutorials you have read, and type what it shows at the Matlab command prompt

See attachment (which is a FreeMat session rather than Matlab, but it is more or less the same)
{"url":"http://mathhelpforum.com/math-software/87170-hello-matlab-question.html","timestamp":"2014-04-20T11:47:53Z","content_type":null,"content_length":"35193","record_id":"<urn:uuid:901c7fea-7a9e-4cd0-bce6-e56baf1a6b52>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Secrets from a bathroom floor

Issue 52, September 2009

An exhaustive algorithm for finding mixed tilings

Step 1: Let $k$ be either 3, 4, 5, or 6.

Step 2: Set $c := \frac{k-2}{2} = \frac{1}{n_1}+\frac{1}{n_2}+\dots+\frac{1}{n_k}$.

Step 3: Set $n$ to be the biggest whole number less than or equal to $\frac{k}{c}$. Note that not all $n_i$ can be bigger than $n$, for otherwise
$$\frac{1}{n_1}+\frac{1}{n_2}+\dots+\frac{1}{n_k} < c.$$

Step 4: Suppose $n_1$ ...

Back to main article
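As a sanity check on Steps 1-3 in the simplest case $k = 3$ (my addition, not part of the article): the condition becomes $1/n_1 + 1/n_2 + 1/n_3 = 1/2$ with the smallest polygon having at most $k/c = 6$ sides, and a brute-force search in R recovers the ten classical triples:

for (n1 in 3:6)
  for (n2 in n1:42)
    for (n3 in n2:42)
      if (2 * (n2*n3 + n1*n3 + n1*n2) == n1*n2*n3)  # exact test of 1/n1 + 1/n2 + 1/n3 = 1/2
        cat(n1, n2, n3, "\n")
# prints (3,7,42), (3,8,24), (3,9,18), (3,10,15), (3,12,12),
#        (4,5,20), (4,6,12), (4,8,8), (5,5,10), (6,6,6)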
{"url":"http://plus.maths.org/content/secrets-bathroom-floor-0","timestamp":"2014-04-19T15:36:31Z","content_type":null,"content_length":"21915","record_id":"<urn:uuid:d54d61b2-f849-4329-b349-458fabe9e005>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Alhambra, CA Precalculus Tutor Find an Alhambra, CA Precalculus Tutor ...I am currently focusing on research, and planning to attend a graduate program for a PhD in art history. For the past three summers, I have worked as the lead teaching assistant for a math course for incoming freshmen at Caltech. I'm also a tutor there, and I've been working with students from elementary school to undergrad and community college (including PCC, ELAC, Mt. 51 Subjects: including precalculus, reading, chemistry, physics ...My coursework includes the following: Calculus, Multivariable Calculus, Linear Algebra, Differential Equations, Partial Differential Equations, Analysis on the Real Line, Analysis in n-Space, Complex Analysis, Abstract Algebra (Groups, Rings, Fields, Galois Theory), General/Algebraic Topology, Me... 20 Subjects: including precalculus, chemistry, calculus, reading ...I have tutored at the middle school, high school and college levels and I would be delighted to tutor Algebra 2! I have tutored algebra at Los Angeles City College, and I have also tutored children in 8th grade math at Thomas Starr King Middle School in Los Angeles, CA. I have done private tutoring for the comprehensive high school exit exam as well. 65 Subjects: including precalculus, Spanish, reading, English ...Trig is very important in Calculus and students going on in Math need to be fluent in it. I've been president of the LA Sidewalk Astronomers for years. I've given classes in astronomy for kids and in telescope making. 24 Subjects: including precalculus, chemistry, English, calculus ...After 1 year of Intensive English Program at Evans CAS, I became a math teacher at the same school, working with students of different age groups and cultural backgrounds. In addition to regular teaching, I've been tutoring math students, developing tools to make my teaching more productive, and training other math teachers. I base my teaching on students' learning styles. 14 Subjects: including precalculus, calculus, geometry, ESL/ESOL
{"url":"http://www.purplemath.com/Alhambra_CA_Precalculus_tutors.php","timestamp":"2014-04-18T11:21:48Z","content_type":null,"content_length":"24394","record_id":"<urn:uuid:38969949-1b20-4fc5-a6b2-b5fb74b512cb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Question about a function

Assuming the conditions for the MVT hold for $f:\left[a, a+h\right] \to \mathbb{R}$, so that for some $\theta \in (0,1)$ we have $f(a+h) - f(a) = hf'(a+\theta h)$. If we fix $f$ and $a$, for each non-zero $h$, how would you write $\theta(h)$ for a corresponding value of theta?

You can't. All the MVT tells you is that $f(a+h) - f(a) = h f'(\xi)$ for some $\xi$, but it says nothing about where $\xi$ is.

I'm not sure what else I can say beyond what I have already given. The 'R' in the function definition is the set of real numbers, and I have been asked "Fix f and a, and for each non-zero h write theta(h) for a corresponding value of theta."

In that case, I must agree with Halls. You said that you know it is possible. How do you know that? It may be that whoever wrote the question has a complete answer in mind. If I were you, I would discuss this with the source of the question.
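A concrete case may help show what is and is not being asked (my addition, not from the thread): for particular $f$ you can solve for $\theta$ explicitly. Take $f(x) = x^2$; then

$$f(a+h) - f(a) = 2ah + h^2 = h\,f'(a+\theta h) = h(2a + 2\theta h) \quad\Longrightarrow\quad \theta(h) = \tfrac{1}{2} \text{ for every } h \neq 0,$$

whereas the MVT alone, for a general $f$, only guarantees that some such $\theta(h) \in (0,1)$ exists.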
{"url":"http://mathhelpforum.com/differential-geometry/77511-question-about-function.html","timestamp":"2014-04-18T14:09:15Z","content_type":null,"content_length":"50337","record_id":"<urn:uuid:07fdcd14-c333-486d-b590-55fbe2722f92>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: When to use Poisson or Negative Binomial

From: "Querze, Alana Renee" <arq@ku.edu>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: RE: st: When to use Poisson or Negative Binomial
Date: Fri, 27 May 2011 11:34:37 +0000

Thank you for your thoughts on which to choose. When I run my nbreg model I get significant results for more variables than the xtpoisson (if the results were similar it wouldn't matter which I used), so I want to have a justification for the one I use.

@Argyn: it does make sense to me that if the model is wrong, making it robust is not exactly important, but from the books I've read authors seem to believe overdispersion can be overcome using xtpoisson with bootstrapped standard errors.

@Maarten: what else do I need to know besides the marginal mean and variance to choose the right command? I chose between random and fixed effects by using the Hausman test... is there a similar test in Stata that would help me know which model better fits the data?

On Fri, May 27, 2011, Maarten Buis wrote:

>> On Thu, May 26, 2011 at 12:14 PM, Querze, Alana Renee <arq@ku.edu> wrote:
>> But I don't know whether it is better to sacrifice robustness or efficiency. Anyone know how to justify the use of one over the other? (BTW I have just under 400 districts and 52 months in my panel data.)

> On Fri, May 27, 2011 at 1:45 AM, Argyn Kuketayev wrote:
> i wouldn't use Poisson if the variance is much greater than the mean, e.g. mean = 17, variance = 40. who cares how robust is the estimate if it simply doesn't fit

That is not quite all there is to say on this topic. You first need to know with respect to what aspects of the model a model is "robust". For example, if we talk about robust standard errors, then that is typically robust with respect to the model of the variance and higher moments, but assumes that the model is correct with respect to the mean. The comparison of the marginal mean and variance does not tell us enough to distinguish between the two.

Hope this helps,
Maarten

Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
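The thread is Stata-specific, but Maarten's caution translates anywhere; as a rough illustration in R (a hypothetical simulation of mine, not from the thread), overdispersed counts fitted by plain Poisson give deceptively small standard errors, while a negative binomial fit does not:

library(MASS)
set.seed(1)
x <- rnorm(500)
y <- rnbinom(500, mu = exp(1 + 0.5 * x), size = 1)   # strongly overdispersed counts
summary(glm(y ~ x, family = poisson))$coef["x", ]    # the SE here is too small
summary(glm.nb(y ~ x))$coef["x", ]                   # the NB fit widens the SE appropriately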
{"url":"http://www.stata.com/statalist/archive/2011-05/msg01479.html","timestamp":"2014-04-21T13:04:58Z","content_type":null,"content_length":"10965","record_id":"<urn:uuid:396b68ab-f76c-40bc-9780-aa5547e91fb1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Gravitation on Ramani's blog Speed Of Light =Gravity=Infinity = 0 = Science? In Science on March 26, 2014 at 09:00 The Speed( I am using this term to enable every one to understand this) is 299,792,458 metres per second, This is equal to the Speed of Gravity. Speed of Light is equal to Speed of Gravity(+ or – 1%) The Speed of Light, according to Newton is Infinite. That is you can not comprehend because there are innumerable Digits,’ it is limitless. Zero can not be comprehended because there is nothing to comprehend.( some say “A positive or negative number when divided by zero is a fraction with the zero as denominator” So in terms of Comprehending Infinity is as comprehensible as Zero is. Therefore Infinity is equal to Zero. Speed of Light is Infinite. Therefore, Speed of Light is equal to Zero. I am aware of the Logical Fallacy of undistributed Middle is present in my argument. So do the Scientist theories! We take Science because it is convenient. We do not question it. Then why not God? * Follow Theory of Relativity, it is more fun . Newtonian Gravity. Therefore the theory assumes the speed of gravity to be infinite. This assumption was adequate to account for all phenomena with the observational accuracy of that time. It was not until the 19th century that an anomaly in astronomical observations which could not be reconciled with the Newtonian gravitational model of instantaneous action was noted. Relativity Gravity. General relativity predicts that gravitational radiation should exist and propagate as a wave at lightspeed: a slowly evolving and weak gravitational field will produce, according to general relativity, effects like those of Newtonian gravitation. Infinity (symbol: ∞) is an abstract concept describing something without any limit and is relevant in a number of fields, predominantly mathematics and physics. The English word infinity derives from Latin infinitas, meaning “the state of being without finish”, and which can be translated as “unboundedness”, itself calqued from the Greek word apeiros, meaning “endless” The Indian mathematical text Surya Prajnapti (c. 3rd–4th century BCE) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three • Enumerable: lowest, intermediate, and highest • Innumerable: nearly innumerable, truly innumerable, and innumerably innumerable • Infinite: nearly infinite, truly infinite, infinitely infinite In the Indian work on the theory of sets, two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asaṃkhyāta (“countless, innumerable”) and ananta (“endless, unlimited”), between rigidly bounded and loosely bounded infinities. Minecraft can be a great tool for visualizing complicated subjects, such as the speed of light (aka “c”). Using a straight track and simple math, we can see how the universe might be limiting speeds for very fast things, such as light. The “doors” metaphor is admittedly, imperfect. While they do limit speed outside of “acceleration”, they falsely imply that there is something “in space” that slows things down, which does not appear to be the case at the moment. The actual mechanisms that limit objects to light speed will be the topic for a future video. (Hint: It has to do with time!) The rules governing the use of zero appeared for the first time in Brahmagupta‘s book Brahmasputha Siddhanta (The Opening of the Universe),^[25] written in 628 AD. 
Here Brahmagupta considers not only zero, but negative numbers, and the algebraic rules for the elementary operations of arithmetic with such numbers. In some instances, his rules differ from the modern standard. Here are the rules of • The sum of zero and a negative number is negative. • The sum of zero and a positive number is positive. • The sum of zero and zero is zero. • The sum of a positive and a negative is their difference; or, if their absolute values are equal, zero. • A positive or negative number when divided by zero is a fraction with the zero as denominator. • Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. • Zero divided by zero is zero. In saying zero divided by zero is zero, Brahmagupta differs from the modern position. Mathematicians normally do not assign a value to this, whereas computers and calculators sometimes assign NaN, which means “not a number.” Moreover, non-zero positive or negative numbers when divided by zero are either assigned no value, or a value of unsigned infinity, positive infinity, or negative Read my Posts on Time, Astrophysics.Science. Related articles Gravity Assumptions Galore. In Astrophysics, Society on December 31, 2012 at 11:48 ‘Gravity is the Force that appears to attract Physical Bodies towards each other’. Let’s see how much we know about it. Note the Definition, ‘appears to attract’ – We are not sure. Einstein’s Views -’Theory of General Relativity‘ as applied to Gravity. Space and Time , Spacetime condition Gravity. “General relativity, or the general theory of relativity, is the geometric theory of gravitation published by Albert Einstein in 1916^[1] and the current description of gravitation in modern physics. General relativity generalises special relativity and Newton’s law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations.”… Some predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Space is curved(please read my blogs under Astrophysics). Have we defined Space, Tie? Consequently Gravitation as influenced by Spacetime,at best, remains a conjecture. 
Newton on Gravity (theory discarded, yet still being used in calculations):

"I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly"…

'Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than general relativity, and gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.'

Irrespective of whether our Theory is Right or Wrong, the Principles of Nature work; so much for Science. By the way, can someone forward me the exact definition of Nature?

Earth's Gravity.

a) Earth is 'assumed' to be surrounded by its own Gravitational Field, which exerts an attractive force on all objects. 'Assuming a spherically symmetrical planet, the strength of this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body'….

The fundamental is an Assumption.

Again, 'This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.81 m/s (32.2 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.81 m/s (32.2 ft/s) after one second, 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.81 m/s (32.2 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.'

'The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface, denoted g, is approximately expressed below as the standard average. g = 9.81 m/s^2 = 32.2 ft/s^2'

We want to know Earth's Gravitation; that is what we want to prove. The Acceleration of Bodies is determined by the force of Gravitation. So when we rely on the Acceleration of Objects under the influence of the Earth's gravitation, have we not already assumed Earth's Gravity? We also ignore the effect of Electromagnetism and other forces at work, EM being dependent on Gravitation, or is it the other way around?

The Theory is riddled with Assumptions and leaps to conclusions.

Chinese Scientists have found a way to measure Gravity, and it throws a spanner in the works.

'By conducting six observations of total and annular solar eclipses, as well as Earth tides, a team headed by Tang Keyun, a researcher with the Institute of Geology and Geophysics under the Chinese Academy of Sciences (CAS), found that the Newtonian Earth tide formula includes a factor related to the propagation of gravity. "Earth tide" refers to a small change in the Earth's surface caused by the gravity of the moon and sun.
Based on the data, the team, with the participation of the China Earthquake Administration and the University of the CAS, found that gravitational force released from the sun and gravitational force recorded at ground stations on Earth did not travel at the same speed, with the time difference exactly the same as the time it takes for light to travel from the sun to observation stations on […]

By applying the new data to the propagation equation of gravity, the team found that the speed of gravity is about 0.93 to 1.05 times the speed of light with a relative error of about 5 percent, providing the first set of strong evidence showing that gravity travels at the speed of light.'

Greater Gravitation, Live Indefinitely

In Astrophysics, Time on July 26, 2012 at 11:22

Einstein's Theory that Time is not a Constant and that it Dilates relative to Gravitation has been proved. If the Gravitational pull is more you age slower, and if it is less you age faster.

Let's extend this Logic further. If you keep on increasing the G, your longevity increases. So at a particular point of Gravitation, you would live indefinitely. But the question is: will the object that exerts Gravitation change with an increase in Gravitational pull? Does it pull itself? What are its states vis-a-vis a change in Pull? Does the object it pulls change its state based on the Pull exerted on it?

To understand the seemingly paradoxical Nature of Time please read my Blog http://www.independent.co.uk/news/science/einsteins-theory-is-proved–and-it-is-bad-news-if-you-own-a-penthouse-2088195.html and check blogs under Time/Astrophysics.

'The world's most accurate clock has neatly shown how right Albert Einstein was 100 years ago, when he proposed that time is a relative concept and the higher you live above sea level the faster you should age.

Einstein's theory of relativity states that time and space are not as constant as everyday life would suggest. He suggested that the only true constant, the speed of light, meant that time can run faster or slower depending on how high you are, and how fast you are travelling.

Now scientists have demonstrated the true nature of Einstein's theory for the first time with an incredibly accurate atomic clock that is able to keep time to within one second in about 3.7 billion years – roughly the same length of time that life has existed on Earth.

James Chin-Wen Chou and his colleagues from the US National Institute of Standards and Technology in Boulder, Colorado, found that when they monitored two such clocks positioned just a foot apart in height above sea level, they found that time really does run more quickly the higher you are – just as Einstein predicted.

"These precise clocks reveal the effects of gravitational pull, so if we position one clock closer to a planet, you also increase the gravitational pull and time actually runs slower than for another, similar clock positioned higher up," Dr Chou said.

"No one has seen such effects before with clocks, which is why we wanted to see if these effects are there. We would say our results agree with Einstein's theory – we weren't expecting any discrepancies and we didn't find any," he explained.'

Quantum, Tantric Practices, Pentagram and Satan Worship Videos

In Astrophysics, Physics, Science on May 29, 2012 at 10:43

The Advanced Theory of Quantum Gravity Model bears striking resemblances to Occultism, Tantric practices and even Satan Worship practices. The Pentagram forms an essential part in all these practices; so does it in Quantum theory.
Please watch the videos, and for reference on Tantric practices look up 'Shakti and Shakta' by Sir John Woodroffe.

As to logical sequencing, the Saptha Bhangi Naya of Jainism is the best. This methodology bears a striking resemblance to Quantum Logic. Seven modes of postulation are possible. They are:

1. It is.
2. It is not.
3. It is indescribable.
4. It is and it is not.
5. It is not and indescribable.
6. It is and it is indescribable.
7. It is and it is not and it is indescribable.

No more postulations are possible.

Compare this with Quantum theory: all quantum equations are based on 5 outcomes.

1. All A
2. All B
3. Equally A & B
4. Combination of A & B, either mostly A or mostly B
5. Neither A nor B

So, pentagons/pentagrams come as no surprise. Causal Dynamical Triangulations (or CDT) causes physical systems to emerge from a non-physical spin-foam dimension, via the 2-dimensional projections of 4-dimensional pentachorons. Do you know what a 2-d projection of a pentachoron is? An inverted pentagram! Watch 4:28-5:00 for the "program-conjuring" part:
{"url":"http://ramanan50.wordpress.com/tag/gravitation/","timestamp":"2014-04-20T11:19:19Z","content_type":null,"content_length":"123522","record_id":"<urn:uuid:e553fa60-88ee-46b0-a927-6dca43678cb3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Set Operations in the Unix Shell

A while ago I wrote about how I solved the Google Treasure Hunt Puzzle Nr. 4 about prime numbers. I took an unusual approach and solved this problem entirely from the Unix shell. The solution involved finding the intersection between a bunch of files containing numbers. This led me to an idea to write a post about how to do various set operations from the shell by using common utilities such as sort, uniq, diff, grep, head, tail, comm, and others.

I'll cover the following set operations in this article:

• Set Membership. Test if an element belongs to a set.
• Set Equality. Test if two sets contain the same elements.
• Set Cardinality. Return the number of elements in the set.
• Subset Test. Test if a given set is a subset of another set.
• Set Union. Find the union of two sets.
• Set Intersection. Find the intersection of two sets.
• Set Complement. Given two sets A and B, find all elements in A that are not in B.
• Set Symmetric Difference. Find the symmetric difference of two sets.
• Power Set. Generate all subsets of a set.
• Set Cartesian Product. Find A x B.
• Disjoint Set Test. Test if two sets are disjoint.
• Empty Set Test. Test if a given set is empty.
• Minimum. Find the smallest element of a set.
• Maximum. Find the largest element of a set.

Update: I wrote another post about these operations and created a cheat sheet. Download cheat sheet: set operations in unix shell (.txt)

To illustrate these operations, I created a few random sets to work with. Each set is represented as a file with one element per line. The elements are positive numbers.

First I created two sets A and B with 5 elements each so that I could easily check that the operations really work. Sets A and B are hand-crafted. It's easy to see that only elements 1, 2 and 3 are in common:

$ cat A
$ cat B

I also created a set Asub which is a subset of set A and Anotsub which is not a subset of A (to test the Subset Test operation):

$ cat Asub
$ cat Anotsub

Next I created two equal sets Aequal and Bequal, again with 5 elements each:

$ cat Aequal
$ cat Bequal

Then I created two huge sets Abig and Bbig with 100,000 elements (some of them are repeated, but that's ok). The easiest way to generate sets Abig and Bbig is to take natural numbers from /dev/urandom. There are two shell commands that can easily do that. The first is "od" and the second is "hexdump". Here is how to create two files with 100,000 natural numbers with both commands.

With hexdump:

$ hexdump -e '1/4 "%u\n"' -n400000 /dev/urandom > Abig
$ hexdump -e '1/4 "%u\n"' -n400000 /dev/urandom > Bbig

The "-e" switch specifies a hand-crafted output format. It says: take 1 element of size 4 bytes and output it as an unsigned integer. The "-n" switch specifies how many bytes to read, in this case 400000 (400000 bytes / 4 bytes per element = 100000 elements).

With od:

$ od -An -w4 -tu4 -N400000 /dev/urandom | sed 's/ *//' > Abig
$ od -An -w4 -tu4 -N400000 /dev/urandom | sed 's/ *//' > Bbig

The "-An" switch specifies that no line address is necessary. The "-w4" switch specifies the number of bytes to output per line. The "-tu4" says to output unsigned 4-byte numbers, and "-N400000" limits the output to 400000 bytes (400000/4 = 100000 elements). The output from od has to be filtered through sed to drop the leading whitespace characters.

Okay, now let's look at various set operations.

Set Membership

The set membership operation tests if an element belongs to a set. We write a ∈ A if element a belongs to set A, and we write a ∉ A if it does not.
The easiest way to test if an element is in a set is to use the "grep" command. Grep searches the file for lines matching a pattern:

$ grep -xc 'element' set

The "-c" flag outputs the number of elements in the set. If it is not a multi-set, the number of elements should be 0 or 1. The "-x" option specifies to match the whole line only (no partial matches).

Here is an example of this operation run on set A:

$ grep -xc '4' A
$ grep -xc '999' A

That's correct. Set A contains element 4 but does not contain element 999.

If the membership operation has to be used from a shell script, the return code from grep can be used instead. Unix commands succeed if the return code is 0, and fail otherwise:

$ grep -xq 'element' set
# returns 0 if element ∈ set
# returns 1 if element ∉ set

The "-q" flag makes sure that grep does not output the element if it is in the set.

Set Equality

The set equality operation tests if two sets are the same, i.e., contain the same elements. We write A = B if sets A and B are equal and A ≠ B if they are not.

The easiest way to test if two sets are equal is to use the "diff" command. Diff compares two files for differences. It will find that the order of lines differs, so the files have to be sorted first. If they are multi-sets, the output of sort has to be run through the "uniq" command to eliminate duplicate elements:

$ diff -q <(sort set1 | uniq) <(sort set2 | uniq)
# returns 0 if set1 = set2
# returns 1 if set1 ≠ set2

The "-q" flag quiets the output of the diff command.

Let's test this operation on sets A, B, Aequal and Bequal:

$ diff -q <(sort A | uniq) <(sort B | uniq)
# return code 1 -- sets A and B are not equal

$ diff -q <(sort Aequal | uniq) <(sort Bequal | uniq)
# return code 0 -- sets A and B are equal

If you have already sorted sets, then just run:

$ diff -q set1 set2

Set Cardinality

The set cardinality operation returns the number of elements in the set. We write |A| to denote the cardinality of the set A.

The simplest way to count the number of elements in a set is to use the "wc" command. Wc counts the number of characters, words or lines in a file. Since each element in the set appears on a new line, counting the number of lines in the file will return the cardinality of the set:

$ wc -l set | cut -d' ' -f1

The cut command is necessary because "wc -l" also outputs the name of the file it was run on. The cut command outputs the first field, which is the number of lines in the file.

We can actually get rid of cut:

$ wc -l < set

Let's test it on sets A and Abig:

$ wc -l A | cut -d' ' -f1
$ wc -l Abig | cut -d' ' -f1
$ wc -l < A
$ wc -l < Abig

Subset Test

The subset test tests if the given set is a subset of another set. We write S ⊆ A if S is a subset of A, and S ⊊ A if it's not.

I found a very easy way to do it using the "comm" utility. Comm compares two sorted files line by line. It may be run in such a way that it outputs lines that appear only in the first specified file. If the first file is a subset of the second, then all the lines in the 1st file also appear in the 2nd, so no output is produced:

$ comm -23 <(sort subset | uniq) <(sort set | uniq) | head -1
# comm returns no output if subset ⊆ set
# comm outputs something if subset ⊊ set

Please remember that if you have a numeric set, then sort must take the "-n" option.
Let's test if Asub is a subset of A:

$ comm -23 <(sort -n Asub|uniq) <(sort -n A|uniq) | head -1
# no output - yes, Asub ⊆ A

Now let's test if Anotsub is a subset of A:

$ comm -23 <(sort -n Anotsub|uniq) <(sort -n A|uniq) | head -1
6
# has output - no, Anotsub ⊊ A

If you want to use it from a shell script, you'd have to test if the output from this command was empty or not.

Set Union

The set union operation unions two sets, i.e., joins them into one set. We write C = A ∪ B to denote the union of sets A and B, which produces set C.

Set union is extremely easy to create. Just use the "cat" utility to concatenate two files:

$ cat set1 set2

If the duplicates (elements which are both in set1 and set2) are not welcome, then the output of cat can be filtered via awk:

$ cat set1 set2 | awk '!found[$1]++'

# we can also get rid of cat by just using awk:

$ awk '!found[$1]++' set1 set2

If we don't want to use awk, which is a whole-blown programming language, then we can sort the output of cat and filter it via uniq:

$ cat set1 set2 | sort | uniq

# we can get rid of cat by specifying arguments to sort:

$ sort set1 set2 | uniq

# finally we can get rid of uniq by specifying the -u flag to sort

$ sort -u set1 set2

If the sets set1 and set2 are already sorted, then the union operation can be made much faster by specifying the "-m" command line option, which merges the files (like the final step of merge-sort):

$ sort -m set1 set2 | uniq

# or

$ sort -um set1 set2

Let's test this operation on sets A and B:

$ cat A B
# with duplicates

$ awk '!found[$1]++' A B
# without dupes

$ sort -n A B | uniq
# with sort && uniq

Set Intersection

The set intersection operation finds elements that are in both sets at the same time. We write C = A ∩ B to denote the intersection of sets A and B, which produces the set C.

There are many ways to do set intersection. The first way that I am going to show you uses "comm":

$ comm -12 <(sort set1) <(sort set2)

The "-12" option to comm directs it to suppress output of lines appearing just in the 1st and just in the 2nd file and makes it output lines appearing in both, which is the intersection of the two sets.

Please remember that if you have a numeric set, then sort must take the "-n" option.

Another way to do it is to use the "grep" utility. I actually found out about this method as I was writing this article:

$ grep -xF -f set1 set2

The "-x" option forces grep to match the whole lines (no partial matches). The "-f set1" specifies the patterns to use for searching. The "-F" option makes grep interpret the given patterns literally (no regexes). It works by matching all lines of set1 in set2. The lines that appear just in set1 or just in set2 are never output.

The next way to find the intersection is by using "sort" and "uniq":

$ sort set1 set2 | uniq -d

The "-d" option to uniq forces it to print only the duplicate lines. Obviously, if a line appears in set1 and set2, after sorting there will be two consecutive equal lines in the output. The "uniq -d" command prints such repeated lines (but only 1 copy of each), thus it's the intersection operation.

Just a few minutes before publishing this article I found another way to do intersection with the "join" command. Join joins files on a common field:

$ join <(sort -n A) <(sort -n B)

Here is a test run:

$ sort -n A B | uniq -d
$ grep -xF -f A B
$ comm -12 <(sort -n A) <(sort -n B)

Set Complement

The set complement operation finds elements that are in one set but not the other. We write A - B or A \ B to denote set B's complement in set A.
Comm has become a pretty useful command for operating on sets. It can be applied to implement the set complement operation as well:

$ comm -23 <(sort set1) <(sort set2)

The option "-23" specifies that comm should not print elements that appear just in set2 and elements that are common to both. It leaves comm to print elements which are just in set1 (and not in set2).

The "grep" command can also be used to implement this operation:

$ grep -vxF -f set2 set1

Notice that the order of sets has been reversed from that of comm. That's because we are searching for those elements in set1 which are not in set2.

Another way to do it is, of course, with "sort" and "uniq":

$ sort set2 set2 set1 | uniq -u

This is a pretty tricky command. Suppose that a line appears in set1 but does not appear in set2. Then it will be output just once and will not get removed by uniq. All other lines get removed.

Let's put these commands to the test:

$ comm -23 <(sort -n A) <(sort -n B)
$ grep -vxF -f B A
$ sort -n B B A | uniq -u

Set Symmetric Difference

The set symmetric difference operation finds elements that are in one set, or in the other, but not both. We write A Δ B to denote the symmetric difference of sets A and B.

The operation can be implemented very easily with the "comm" utility:

$ comm -3 <(sort set1) <(sort set2) | sed 's/\t//g'

# sed can be replaced with tr

$ comm -3 <(sort set1) <(sort set2) | tr -d '\t'

Here comm is instructed via "-3" not to output fields that are common to both files, but to output fields that are just in set1 and just in set2. Sed is necessary because comm outputs two columns of data, and the second column is prefixed with a \t tab character.

It can also be done with "sort" and "uniq":

$ sort set1 set2 | uniq -u

We can use mathematics and derive a few formulas involving previously used operations for symmetric difference: A Δ B = (A - B) ∪ (B - A). Now we can use grep:

$ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1)
# does (B - A) ∪ (A - B)

# this can be simplified

$ grep -vxF -f set1 set2; grep -vxF -f set2 set1

Let's test it:

$ comm -3 <(sort -n A) <(sort -n B) | sed 's/\t//g'
$ sort -n A B | uniq -u
$ cat <(grep -vxF -f B A) <(grep -vxF -f A B)

Power Set

The power set operation generates a power set of a set. What's a power set? It's a set that contains all subsets of the set. We write P(A) or 2^A to denote all subsets of A. For a set with n elements, the power set contains 2^n elements. For example, the power set of the set { a, b, c } contains 2^3 = 8 elements. The power set is { {}, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c} }.

It's not easy to do that with simple Unix tools. I could not think of anything better than a silly Perl solution:

$ perl -le '
sub powset {
    return [[]] unless @_;
    my $head = shift;
    my $list = &powset;
    [@$list, map { [$head, @$_] } @$list]
}
chomp(my @e = <>);
for $p (@{powset(@e)}) { print @$p; }' set

Can you think of a way to do it with Unix tools? (One possibility is sketched after the next section.)

Set Cartesian Product

The set Cartesian product operation produces a new set that contains all possible pairs of elements from one set and the other. The notation for the Cartesian product of sets A and B is A x B. For example, if set A = { a, b, c } and set B = { 1, 2 } then the Cartesian product A x B = { (a, 1), (a, 2), (b, 1), (b, 2), (c, 1), (c, 2) }.

I can't think of a great solution. I have a very silly solution in bash:

$ while read a; do while read b; do echo "$a, $b"; done < set1; done < set2

Can you think of other solutions?
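Here are two sketches in reply to the questions above. They are just one possibility each, not the only way: the first assumes bash (for brace expansion) and elements free of spaces, commas and braces; the second works with any POSIX awk. The three-element file S used below is hypothetical.

A power set via brace expansion: wrap each element x in "{x,}", glue the pieces together, and let eval expand the result (the literal outer braces make the empty set visible):

$ cat S
a
b
c
$ eval echo {$(sed 's/.*/{&,}/' S | tr -d '\n')}
{abc} {ab} {ac} {a} {bc} {b} {c} {}

The eval is needed because bash performs brace expansion before command substitution, so the substituted braces have to be parsed a second time. Note that the output grows as 2^n, so this is only sensible for small sets.

A Cartesian product via awk: slurp the first file into an array, then pair every saved element with every line of the second file:

$ awk 'NR == FNR { a[++n] = $0; next }
       { for (i = 1; i <= n; i++) print a[i], $0 }' A B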
Disjoint Set Test

The disjoint set test operation finds out if two sets are disjoint, i.e., they do not contain common elements.

Two sets are disjoint if their intersection is the empty set. Any of the set intersection commands (mentioned earlier) can be applied on the sets and the output can be tested for emptiness. If it is empty, then the sets are disjoint; if it is not, then the sets are not disjoint.

Another way to test if two sets are disjoint is to use awk:

$ awk '{ if (++seen[$0]==2) exit 1 }' set1 set2
# returns 0 if sets are disjoint
# returns 1 if sets are not disjoint

It works by counting seen elements in set1 and then set2. If any of the elements appear both in set1 and set2, the seen count for that element would be 2 and awk would quit with exit code 1.

Empty Set Test

The empty set test tests if the set is empty, i.e., contains no elements. The empty set is usually written as Ø.

It's very easy to test if the set is empty. The cardinality of an empty set is 0:

$ wc -l set | cut -d' ' -f1
# outputs 0 if the set is empty
# outputs > 0 if the set is not empty

Getting rid of cut:

$ wc -l < set
# outputs 0 if the set is empty
# outputs > 0 if the set is not empty

Minimum

The minimum operation returns the smallest number in the set. We write min(A) to denote the minimum operation on the set A.

The minimum element of a set can be found by first sorting it in ascending order and then taking the first element. The first element can be taken with the "head" Unix command, which outputs the first part of a file:

$ head -1 <(sort set)

The "-1" option specifies to output the first line only.

If the set is already sorted, then it's even simpler:

$ head -1 set

Remember to use the "sort -n" command if the set contains numeric data.

Example of running the minimum operation on sets A and Abig:

$ head -1 <(sort -n A)
$ head -1 <(sort -n Abig)

Maximum

The maximum operation returns the biggest number in the set. We write max(A) to denote the maximum operation on the set A.

The maximum element of a set can be found by first sorting it in ascending order and then taking the last element. The last element can be taken with the "tail" Unix command, which outputs the last part of a file:

$ tail -1 <(sort set)

The "-1" option specifies to output the last line only.

If the set is already sorted, then it's even simpler:

$ tail -1 set

Remember to use the "sort -n" command if the set contains numeric data.

Example of running the maximum operation on sets A and Abig:

$ tail -1 <(sort -n A)
$ tail -1 <(sort -n Abig)

Have Fun!

Have fun working with these set operations! Thanks to lhunath and waldner from #bash for helping. :)

Aww, I was disappointed by the power set example. You see, back in the mid 1990's, when we were implementing gcc multilib support (this is the idea that if the user gives gcc flags that lead to incompatible ABIs, you need to supply corresponding libgcc and possibly libc/libm that match...) at Cygnus (mostly for embedded work) I had to implement power set in sh (after all, the user could combine the incompatible options) - and not even full POSIX sh, but a subset that pre-autoconf configure scripts were allowed to use (so that they actually worked on things like LynxOS.)

I mostly remember it being unpleasant, and hoped I'd find something more modern and elegant here :-) (Eventually Ian Taylor rewrote all that, and I don't know what the current gcc-multilib packaging does - avoiding the problem and specifying the desired combinations explicitly would not have been
I mostly remember it being unpleasant, and hoped I'd find something more modern and elegant here :-)

The easiest way to do it is to observe that for a set of size N, each element in the power set corresponds to a binary vector of length N. For example, for a set of size 3 with elements {A,B,C}:

000 --> {}
001 --> {A}
010 --> {B}
111 --> {A,B,C}

So to enumerate the elements of a power set, just iterate over the 2^N possible values of a binary vector of length N.

Minor typo: The od command example and its description are not consistent in the mention of arguments.

Nice article! In the empty test set example, did you mean 'cut' instead of 'set' after the pipe symbol?

A bash-only powerset could look like:

powerset_overkill() {
    local Set=$*
    echo ${Set}    # emit the current subset (function/loop closings reconstructed; lost in the page scrape)
    if [ "${Set}" = "" ]; then
        return
    fi
    for i in ${Set}; do
        local Subset=${Set/${i}/}
        powerset_overkill ${Subset}
    done
}

powerset() {
    powerset_overkill $* | sort | uniq
}

time powerset 1 2 3 4 5 6

For the power set, you could use shell functions, e.g.

p() { [ $# -eq 0 ] && echo || (shift; p "$@") | while read r ; do echo -e "$1 $r\n$r"; done }

p `cat set` outputs the power set, e.g.

$ p 1 2 3

Of course, this is limited to max_args of bash, but who needs the power set of >500 elements anyway? ;-)

Hi Peteris. Nice article, which intersects with something I wrote last year about shell scripting, using the set operations example. It's always good to compare notes on how someone else tackles the same problem. By the way, there's no need to start a pipeline off with cat. Instead of

$ cat set1 set2 | sort -m | uniq

edit: Thomas wrote this example and then noticed that it was not correct. He commented: Now that I think about it

$ cat set1 set2 | sort -m

is wrong! Sort -m only works when merging multiple sorted files together: catting the inputs into a single file is a bad idea in this case.

you could just have

$ sort -m set1 set2 | uniq

or even (with gnu sort)

$ sort -mu set1 set2

Thanks for that. I'll analyze it in detail later. For now here are my methods for set operations:

Nice stuff. :-) If you're working with really big files, then you'll have problems with diff because it tries to read the whole file into memory. You can use 'cmp' for equality checks instead.

The utility blm is the Boolean Line Manipulator and supports all of the above-mentioned operations. It is also more than 1000 times faster than the shell, and it is much easier to read and write complex operations. The download page for blm is linked in this post and it is auto-installable in Debian and Ubuntu as the blm package.

It's definitely simpler and possibly more efficient to use `combine` for set operations: http://manpages.ubuntu.com/manpages/intrepid/man1/combine.html

$ set -um set1 set2

Think you mean sort instead of set there. BTW, I think the normal definition of a set is that an element can't appear in it more than once, and that order doesn't matter. So a lot of your "sets" aren't, in fact.

Great! Now I just wish I had a photographic memory to remember all this :)

Nice stuff there - thanks for posting! I kept moving through the page by first trying to solve each problem myself before proceeding to see how you've solved it - some exercise for my mind.

you should give ipython a try.
see you've made yourself comfortable with bash and unix commands, but ipython will make things much easier for such tasks. nice post btw!

As a physicist who was quite good with awk once said, "Those who do not understand UNIX are doomed to reimplement it [, poorly]." It is the temptation to learn higher level languages that preempts coders from learning the UNIX tools. And they just end up reinventing the wheel in perl, ruby, python, javascript, etc. Perl was written out of laziness, because someone was too lazy to learn awk well enough to do a certain manipulation of log files, and out of frustration he wrote Perl. The physicist quoted above later made him realise it was possible to do the task with awk. I liked Thomas Guest's article. If you want to learn something "new", look no further than your base system. It seems (though I cannot confirm) fewer and fewer people are learning the UNIX tools, *thoroughly*. It seems (though I cannot confirm) more and more people see them as a stepping stone. I refer the reader again to the quote I started off with. With the UNIX tools, one can do more with less (allocated memory). No task is too big. Divide and conquer. Brute force is always an option, as Ritchie once said.

It's articles like this one that make me want to give the www intertubes one big squishy hug!

Thank you, this article was exactly what I needed...

Here's a refinement to the cardinality operation:

sort -u set | wc -l

Also, just a minor correction: head -1 should probably read: tail -1 (in the last example of the maximum operation)

Amazing stuff. I needed to do some of these set operations at work in a jiffy and this page helped a lot. It has been a while since I did much stuff in Unix (10 years) and I had forgotten most of it and how powerful it can be. I think I should get a book and get back into Unix and unleash the power of the shell. Perhaps this time I would get around to learning awk. Thank you!

AWESOME article; outstanding solutions for just about everything except the power set problem. Here's an idea for a future article: associative array operations in the unix shell. MANY developers jump to perl the moment a problem seems to require associative arrays; I've never understood this; it seems to me that similar strategies to the ones you've employed for sets could be employed for associative arrays. Thanks for your efforts to keep the UNIX way alive and well.

Set Cartesian Product. Find A x B

|cat N |cat A |cat A |while read a ; do cat N |sed -e "s/./& $a/g"

Oh well, at least one while is removed. I could have done better with xargs and removed the second while too, but it isn't like find, unfortunately, in that it doesn't allow me to embed pipelines in the arg. Oh well, I am cheating :)

|cat A |xargs -I{} -n 1 sh -c 'cat N |xargs -n1 echo {} '

Advised to run only with smaller sets :) Not a good solution, but just trying to do it with pipelines alone.

cat A |xargs -n 1 ksh -c 'cat A |xargs |sed -e "s; ;,;g" -e "s;.*;{&};g" '|xargs|sed -e 's; ;\\\\\\\ ;g'|xargs -I{} ksh -c 'echo {}' > tmp
cat tmp |xargs -n 3 |cat |xargs -I{} ksh -c 'echo {} |xargs -n 1 |sort -u |xargs ' |sort -u

$ cat N
$ cat A
$ cat A |xargs -I{} -n 1 sh -c 'cat N |xargs -n1 echo {} '
a 1
a 2
a 3
b 1
b 2
b 3
c 1
c 2
c 3

Power set:

$ cat A | xargs -n 1 ksh -c 'cat A |xargs |sed -e "s; ;,;g" -e "s;.*;{&};g" ' | xargs |sed -e 's; ;\\\\\\\ ;g'|xargs -I{} ksh -c 'echo {}'| xargs -n `wc -l <A` | xargs -I{} sh -c 'echo {}|xargs -n 1|sort -u|xargs ' | sort -u
a
a b
a b c
a c
b
b c
c

Rahul, thanks for these.
I never figured out how to do cartesian in an interesting manner in the shell. I actually sent you an email because I was afraid they got broken by this old blog software.

cartesian product:

$ cat A
$ cat B
$ join -j2 A B
a 1
a 2
b 1
b 2
c 1
c 2

or, if you like

$ join -j2 A B | awk '{print "("$1", "$2")"}'
(a, 1)
(a, 2)
(b, 1)
(b, 2)
(c, 1)
(c, 2)

I love it. Anyone mentioned using that set expansion for a calculator?

$ echo {1,2,3}{1,2,3,4,5} | wc -w

Also, you can get it to print every number from 0 to 99:

echo {,1,2,3,4,5,6,7,8,9}{0,1,2,3,4,5,6,7,8,9}

... but why should one ever remember cryptic bash/zsh/ksh/... syntax and its subtle differences _iff_ script languages like perl, python, _guarantee_ an abstraction level _plus_ simplifying all those tasks _plus_ adding syntactic sugar (in a _per_language_ and not _per_platform_) consistent manner? i do honor the effort and i do honor being able to accomplish a lot of solutions w/ the most basic tools, _but_ i prefer to be able to move to solaris and not have to reinvent everything again

great article! just a couple of small things:

$ awk '!found[$1]++' # without dupes (missing input files)

$ cat > Atest
$ cat > Btest
$ sort Atest Btest | uniq -d

will show duplicates within a single multigroup as well as items common to them both.

also there are typos @ $ set -um set1 set2, $ comm -3 <(sort -n A) >(sort -n B) | sed 's/\t//g'

btw, i would really appreciate a post comparing the efficiency of some of the solutions when there're several provided.

another thing, being the n00b that i am, i can't figure out how to force the following to be evaluated

echo {$(paste -sd, A)}{$(paste -sd, B)}

this returns {3,5,1,2,4}{11,1,12,3,2} but if i wrote directly echo {3,5,1,2,4}{11,1,12,3,2} i'd get the desired cross product?

Christopher Suter's solution is really nice, didn't know there's a join command. still using a paste command it can be done like this:

eval echo -e {$(paste -sd, A)},{$(paste -sd, B)} | tr ' ' '\n'

could you please fix my previous comment?

Here's my take on power set without resorting to a scripting language:

(echo -e "echo \c"; for set in set1 set2 set3; do echo -e "{`sort -u $set|xargs|tr ' ' ','`,}\c"; done)|bash -|xargs -n1

For example, here are three files representing different sets:

$ head set*
==> set1 <==
==> set2 <==
==> set3 <==

And here's the power set of them:

$ (echo -e "echo \c"; for set in set1 set2 set3; do echo -e "{`sort -u $set|xargs|tr ' ' ','`,}\c"; done) | bash -s | xargs -n1
First I found (after some tries) this which would give the desired result: $ echo {{,a}{,b}{,c}} {} {c} {b} {bc} {a} {ac} {ab} {abc} (The outmost braces are nessesary for the empty set.) But here we manually type in the members of the set. To build the expression from the content of the file "set" I went the following steps: $ cat set |xargs -I{} echo \{{},\} We need it in one line, so we append -n to echo: cat set |xargs -I{} echo -n \{{},\} (We dont get a newline after that!) To surround this with curly brackets we use command substition and another echo: $ echo {$(cat set |xargs -I{} echo -n \{{},\})} With eval we ask the bash do brace expansion again: $ eval echo {$(cat set |xargs -I{} echo -n \{{},\})} {abc} {ab} {ac} {a} {bc} {b} {c} {} I think it should be stressed, if not already, that "uniq" is sensitive to whitespace. It caused me some issues until I s/ *//g all white space in each line using sed before the uniq. cappuccino machines Nespresso D120-US-BK-NE CitiZ Automatic Single-Serve Espresso Maker and Milk Frother, Limousine Black Bash Solution to union: # Usage: ./union.sh set1 set2 declare -Ai union while read -re element; do done < <(cat "${@}") declare -p union Leave a new comment It would be nice if you left your e-mail address. Sometimes I want to send a private message, or just thank for the great comment. Having your e-mail really helps. I will never ever spam you. (Your twitter name, if you have one. (I'm @pkrumins, btw.)) * use <pre>...</pre> to insert a plain code snippet. * use <pre lang="lang">...</pre> to insert a syntax highlighted code snippet. For example, <pre lang="python">...</pre> will insert Python highlighted code. * use <code>...</code> to highlight a variable or a single shell command. * use <a href="url">title&lt/a> to insert links. * use other HTML tags, such as, <b>, <i>, <blockquote>, <sup>, <sub> for text formatting. Type the first letter of your name: (just to make sure you're a human) Please preview the comment before submitting to make sure it's OK.
{"url":"http://www.catonmat.net/blog/set-operations-in-unix-shell/","timestamp":"2014-04-17T12:38:18Z","content_type":null,"content_length":"126396","record_id":"<urn:uuid:a14c67d4-31f6-4379-969f-1c83f3a72689>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] Fisher exact test, anyone?

Bruce Southey bsouthey@gmail....
Tue Nov 16 09:45:53 CST 2010

On 11/16/2010 07:04 AM, Ralf Gommers wrote:
> On Mon, Nov 15, 2010 at 12:40 AM, Bruce Southey <bsouthey@gmail.com> wrote:
>> On Sat, Nov 13, 2010 at 8:50 PM, <josef.pktd@gmail.com> wrote:
>>> http://projects.scipy.org/scipy/ticket/956 and
>>> http://pypi.python.org/pypi/fisher/ have Fisher's exact
>>> test implementations.
>>>
>>> It would be nice to get a version in for 0.9. I spent a few
>>> unsuccessful days on it earlier this year. But since there are two new
>>> or corrected versions available, it looks like it just needs testing
>>> and a performance comparison.
>>>
>>> I won't have time for this, so if anyone volunteers for this, scipy
>>> 0.9 should be able to get Fisher's exact.
>
> https://github.com/rgommers/scipy/tree/fisher-exact
> All tests pass. There's only one usable version (see below) so I
> didn't do a performance comparison. I'll leave a note on #956 as well,
> saying we're discussing on-list.
>
>> I briefly looked at the code at the pypi link but I do not think it is
>> good enough for scipy. Also, I do not like when people license code as
>> 'BSD' and there is a comment in cfisher.pyx '# some of this code is
>> originally from the internet. (thanks)'. Consequently we cannot use
>> that code.
>
> I agree, that's not usable. The plain Python algorithm is also fast
> enough that there's no need to bother with Cython.
>
>> The code with ticket 956 still needs work, especially in terms of the
>> input types and probably the API (like having a function that allows
>> the user to select either 1- or 2-tailed tests).
>
> Can you explain what you mean by work on input types? I used
> np.asarray and forced dtype to be int64. For the 1-tailed test, is it
> necessary? I note that pearsonr and spearmanr also only do 2-tailed.
>
> Cheers,
> Ralf

I have no problem including this if we can agree on the API because everything else is internal and can be fixed by the release date. So I would accept a placeholder API that enables a user in the future to select which tail(s) is performed.

1) It just can not use np.asarray() without checking the input first. This is particularly bad for masked arrays.

2) There is no dimension checking because, as I understand it, this can only handle a '2 by 2' table. I do not know enough for general 'r by c' tables or the 1-d case either.

3) The odds-ratio should be removed because it is not part of the test. It is actually more general than this test.

4) Variable names such as min and max should not shadow Python functions.

5) Is there a reference to the algorithm implemented? For example, SPSS provides a simple 2 by 2 algorithm:

6) Why exactly does the dtype need to be int64? That is, is there something wrong with the hypergeom function? I just want to understand why the precision change is required, because the input should enter with sufficient precision.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20101116/2b8f09b5/attachment-0001.html

More information about the SciPy-User mailing list
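Since the thread is about what a correct implementation should do, here is a minimal, self-contained reference sketch of the two-sided 2x2 test, written in awk so it needs nothing beyond a POSIX shell. It is independent of the ticket-956 code and of scipy; the example table (3,1 / 1,3) and the 1e-7 tolerance are my own choices. It sums, over all tables with the observed margins, the hypergeometric probabilities that do not exceed the observed table's probability, which is the same two-sided definition R's fisher.test uses:

$ awk -v a=3 -v b=1 -v c=1 -v d=3 '
function lfact(n,   i, s) { s = 0; for (i = 2; i <= n; i++) s += log(i); return s }
function lprob(x) {
    # log-probability of the 2x2 table with upper-left cell x and fixed margins:
    # P = R1! R2! C1! C2! / (N! x! (R1-x)! (C1-x)! (N-R1-C1+x)!)
    return lfact(R1) + lfact(R2) + lfact(C1) + lfact(C2) - lfact(N) \
         - lfact(x) - lfact(R1 - x) - lfact(C1 - x) - lfact(N - R1 - C1 + x)
}
BEGIN {
    R1 = a + b; R2 = c + d; C1 = a + c; C2 = b + d; N = R1 + R2
    obs = lprob(a)
    lo = (C1 - R2 > 0) ? C1 - R2 : 0   # smallest feasible upper-left cell
    hi = (R1 < C1) ? R1 : C1           # largest feasible upper-left cell
    p = 0
    for (x = lo; x <= hi; x++)         # sum tables no more probable than observed
        if (lprob(x) <= obs + 1e-7) p += exp(lprob(x))
    printf "two-sided p = %.6f\n", p   # prints 0.485714 for this table
}'

Working in log space keeps the factorials from overflowing for large cell counts, which is one way to read the precision concern raised in point 6.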
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-November/027470.html","timestamp":"2014-04-17T01:23:22Z","content_type":null,"content_length":"7029","record_id":"<urn:uuid:fad75d96-016e-4acb-a169-e234f4c2d19f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
ALEX Podcasts

Multiplication - Place Value Strategy

Fourth-grade students will explain the process and steps of an alternative way to solve multiplication problems. Students will explore this alternative algorithm, which focuses on place value. Students will use their place-value knowledge to break apart each number and solve for a product. Students will also use their knowledge of arrays to organize the steps of the problem.
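A worked example of the strategy the students use (the numbers here are illustrative, not taken from the podcast): break each factor into its place-value parts, multiply every part of one by every part of the other (the four cells of the array), and add the partial products:

$$23 \times 14 = (20+3)(10+4) = 20\cdot 10 + 20\cdot 4 + 3\cdot 10 + 3\cdot 4 = 200 + 80 + 30 + 12 = 322$$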
{"url":"http://alex.state.al.us/pdcsr2.php?std_id=53717","timestamp":"2014-04-19T09:25:19Z","content_type":null,"content_length":"12249","record_id":"<urn:uuid:e72c8d64-b8a1-45ac-b74e-84dbe595c287>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
angular acceleration

Number of results: 14,894

physics
I already figured out the answers to some but I need help with two more. 1) A disk with an initial angular velocity ω0 = 4.5 rad/s experiences a constant angular acceleration of α = 7.5 rad/s2 for time t = 15 s. Please answer the following questions. ω0 = 4.5 ...
Wednesday, April 2, 2014 at 11:06am by Michelle

college physics
the angular acceleration is 1.60/160 in radian/s^2. I don't see how anything can be solved without knowing how long it accelerated at this acceleration. Displacement is a function of time, and
Sunday, October 11, 2009 at 11:57am by bobpursley

Time to stop = (angular displacement)/(average angular velocity) = 11*2*pi rad/(9.0 rad/s) = 7.68 s
Angular acceleration = -(18 rad/s)/(7.68 s) = -2.34 rad/s^2
Friday, February 25, 2011 at 2:16am by drwls

(a) What is the tangential acceleration of a bug on the rim of a 6.0 in. diameter disk if the disk moves from rest to an angular speed of 79 rev/min in 5.0 s? (b) When the disk is at its final speed, what is the tangential velocity of the bug? (c) One second after the bug ...
Sunday, March 28, 2010 at 1:21am by Kelsey

Starting from rest, a wheel undergoes constant angular acceleration for a period of time T. At what time after the start of rotation does the wheel reach an angular speed equal to its average angular speed for this interval?
Tuesday, June 8, 2010 at 10:34pm by Ashley

A planet orbits its star in a circular orbit (uniform circular motion) of radius 1.62x10^11 m. The orbital period of the planet around its star is 37.0 years. Determine the following quantities for this orbital motion: Angular acceleration, Tangential acceleration, Radial ...
Sunday, November 13, 2011 at 10:45am by sand

A planet orbits its star in a circular orbit (uniform circular motion) of radius 1.62x10^11 m. The orbital period of the planet around its star is 37.0 years. Determine the following quantities for this orbital motion: Angular acceleration, Tangential acceleration, Radial ...
Sunday, November 13, 2011 at 9:06pm by sand

A planet orbits its star in a circular orbit (uniform circular motion) of radius 1.62x10^11 m. The orbital period of the planet around its star is 37.0 years. Determine the following quantities for this orbital motion: Angular acceleration, Tangential acceleration, Radial ...
Monday, November 14, 2011 at 9:53am by sand

A planet orbits its star in a circular orbit (uniform circular motion) of radius 1.62x10^11 m. The orbital period of the planet around its star is 37.0 years. Determine the following quantities for this orbital motion: Angular acceleration, Tangential acceleration, Radial ...
Tuesday, November 15, 2011 at 11:06am by sand

A planet orbits its star in a circular orbit (uniform circular motion) of radius 1.62x10^11 m. The orbital period of the planet around its star is 37.0 years. Determine the following quantities for this orbital motion: Angular acceleration, Tangential acceleration, Radial ...
Tuesday, November 15, 2011 at 4:19pm by sand

Physics (Circular Motion!!)
A) Top angular speed is wmax = 2 pi radians/1.1 s = 5.71 rad/s. Angular acceleration during the first 30 s: alpha = 5.71/30 = 0.1904 rad/s^2. Tangential acceleration = R*alpha = 0.952 m/s^2.
B) Centripetal acceleration at top speed = R*wmax^2 = 163 m/s^2 = 16.6 g's
Thursday, September 2, 2010 at 12:52pm by drwls

Suppose you exert a force of 180 N tangential to a 0.280-m-radius 75.0-kg grindstone (a solid disk). (a) What torque is exerted?
(b) What is the angular acceleration assuming negligible opposing friction? (c) What is the angular acceleration if there is an opposing frictional ...
Sunday, July 14, 2013 at 5:05pm by Justin

I already took this course. It is your turn to try the problems. When we see exactly what you are having trouble with perhaps we can help. Remember:
Torque = rate of change of angular momentum = I alpha
omega = w = angular speed
angular momentum = I w
alpha = dw/dt = rate of ...
Wednesday, January 5, 2011 at 9:35am by Damon

I need help with this problem. A ceiling fan is turned on and a net torque of 2.0 Nm is applied to the blades. The blades have a total moment of inertia of 0.24 kgm2. What is the angular acceleration of the blades in rad/s2? The formula is well known: Torque = moment of inertia...
Wednesday, March 21, 2007 at 7:06pm by Jim

wr = v
a = r dw/dt
1.4 = r dw/dt
dw/dt = angular acceleration = 1.4/r = 2.8/.57
assume constant angular deceleration
radial acceleration = w^2 r
tangential acceleration = r dw/dt (we know dw/dt from above)
w r = v = 3 m/s at t = 0
wo = 3/r
w = wo - (dw/dt)t
that gives you w^...
Friday, November 26, 2010 at 5:02pm by Damon

A 5.0-kg, 52-cm-diameter cylinder rotates on an axle passing through one edge. The axle is parallel to the floor. The cylinder is held with the center of mass at the same height as the axle, then released. a) What is the magnitude of the cylinder's initial angular...
Friday, December 6, 2013 at 2:03pm by Chloe

never mind - I figured it out. ok so then, for the angular speed, would I simply use wfinal^2 = winitial^2 + 2(angular acceleration)(distance)?
Monday, March 24, 2008 at 4:18pm by Ty

The flywheel of a steam engine runs with a constant angular speed of 164 rev/min. When steam is shut off, the friction of the bearings and the air brings the wheel to rest in 1.6 h. I figured out the constant angular acceleration = 1.71 rev/min2 but I'm not sure how to find: What ...
Tuesday, September 28, 2010 at 10:43pm by question

A pulley rotating in the counterclockwise direction is attached to a mass suspended from a string. The mass causes the pulley's angular velocity to decrease with a constant angular acceleration α = -2.2 rad/s2. If the pulley's initial angular velocity is ω = 5.37 rad...
Thursday, November 4, 2010 at 8:28pm by aok

1. A centrifuge in a medical laboratory rotates at an angular speed of 3650 rev/min. When switched off, it rotates through 50.0 revolutions before coming to rest. Find the constant angular acceleration of the centrifuge. 2. A ball of mass 0.120 kg is dropped from rest from a ...
Sunday, July 6, 2008 at 11:03pm by Elisa

(a) Divide the final angular velocity (in rad/s, NOT rpm) by the time interval. (b) angle = (1/2)*(angular acceleration)*t^2 OR (1/2)*(final angular velocity)*time
Friday, October 7, 2011 at 5:51pm by drwls

Start with the equation Torque = (Moment of inertia)(angular acceleration), which is sometimes written in textbooks as T = I*alpha. (a) The angular acceleration will not depend upon how much thread has been pulled, but the angular velocity achieved (Wmax) will depend upon that ...
Friday, March 26, 2010 at 11:01am by drwls

An ice skater rotating around a vertical axis increases in angular velocity from 450 degrees/s to 610 degrees/s in 2.3 seconds. Find the angular acceleration.
Wednesday, November 4, 2009 at 8:42pm by stew

Angular acceleration is usually expressed in radians/s^2 and angular speed in radians/s. In your case, 21/1.8 = 11.67 is the average speed in rev/min. The final speed is twice that, or 23.33 rev/min. The angular acceleration is 23.33 rev/min divided by 1.8 min, which is 12.96 ...
Monday, March 5, 2012 at 9:05pm by drwls
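Most of the answers in this listing lean on the same constant-angular-acceleration identities; they are collected here for reference (standard results, not part of any single post):

$$\omega = \omega_0 + \alpha t, \qquad \theta = \omega_0 t + \tfrac{1}{2}\alpha t^2, \qquad \omega^2 = \omega_0^2 + 2\alpha\theta$$

together with the tangential, centripetal and torque relations

$$v = r\omega, \qquad a_t = r\alpha, \qquad a_c = \omega^2 r = \frac{v^2}{r}, \qquad \tau = I\alpha.$$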
The final speed is twice that, or 23.33 rev/min The angular acceleration is 23.33 rev/min divided by 1.8 min, which is 12.96 ... Monday, March 5, 2012 at 9:05pm by drwls ω2=ω1)+angular acceleration*time or time = (ω2-ω1)/angular acceleration =(0.28-0)/0.021 =40/3 sec. Tuesday, July 16, 2013 at 1:52pm by MathMate A wind turbine is initially spinning at a constant angular speed. As the wind's strength gradually increases, the turbine experiences a constant angular acceleration of 0.180 rad/s2. After making 2874 revolutions, its angular speed is 139 rad/s. A. What is the initial angular ... Friday, March 30, 2012 at 5:11pm by Natasha A wind turbine is initially spinning at a constant angular speed. As the wind's strength gradually increases, the turbine experiences a constant angular acceleration 0.170 rad/s2. After making 2870 revolutions, its angular speed is 131 rad/s. (a) What is the initial angular ... Tuesday, July 16, 2013 at 2:08pm by kyle A 46.5-cm diameter disk rotates with a constant angular acceleration of 2.4 rad/s2. It starts from rest at t = 0, and a line drawn from the center of the disk to a point P on the rim of the disk makes an angle of 57.3° with the positive x-axis at this time. (Hint: Remember the... Monday, November 30, 2009 at 2:49pm by Ed A child pushes a merry go-round from rest to a final angular speed of 0.6 rev/s with constant angular acceleration. In doing so, the child pushes the merry-go-round 1.8 revolutions. What is the angular acceleration of the merry go-round? rad/s^2 This is how i did this, but its... Friday, March 23, 2012 at 5:39pm by Adam A flywheel of a radius .5 m starts from rest and under goes a uniform angular acceleration of 3 rad/sec2 for a period of 5 seconds. Find the angular velocity of the flywheel at the end of this time? Find the angular displacement (in radians) that the wheel turns through during... Wednesday, May 9, 2012 at 1:19am by Mary An electronic motor is revolving at 2000rpm. After 3sec, it is moving at 150rpm. Determine the angular acceleration. What is angular momentum of the electric motor if the rotational inertia is 18.kg.square meter in initial velocity? In final velocity? What is the average ... Sunday, December 16, 2012 at 11:05am by Bob A 46.5-cm diameter disk rotates with a constant angular acceleration of 2.4 rad/s2. It starts from rest at t = 0, and a line drawn from the center of the disk to a point P on the rim of the disk makes an angle of 57.3° with the positive x-axis at this time. (Hint: Remember the... Saturday, November 28, 2009 at 6:03pm by Ed A solid disk with a mass of 100kg and a radius of 0.2m, turns clockwise through an angular displacement 10 rad when starting from rest to attain its maximum angular speed, of 1 revolution every 0.5s. What is the angular acceleration that the wheel experienced in order to ... Monday, February 20, 2012 at 1:58pm by Brittany A solid disk with a mass of 100kg and a radius of 0.2m, turns clockwise through an angular displacement 10 rad when starting from rest to attain its maximum angular speed, of 1 revolution every 0.5s. What is the angular acceleration that the wheel experienced in order to ... Monday, February 20, 2012 at 10:47pm by Brittany A person is riding a bicycle, the wheels of a bicycle have an angular velocity of +20.5 rad/s. Then, the brakes are applied. In coming to rest, each wheel makes an angular displacement of +12.0 revolutions. (a) How much time does it take for the bike to come to rest? (b) What ... 
Wednesday, November 10, 2010 at 8:14pm by sean A person is riding a bicycle, the wheels of a bicycle have an angular velocity of +24.0 rad/s. Then, the brakes are applied. In coming to rest, each wheel makes an angular displacement of +16.0 revolutions. (a) How much time does it take for the bike to come to rest? s (b) ... Thursday, November 17, 2011 at 8:53pm by Taylor A coin with a diameter of 1.9 cm is dropped on edge onto a horizontal surface. The coin starts out with an initial angular speed of 19 rad/s and rolls in a straight line without slipping. The rotation slows with an angular acceleration of magnitude 2.1 rad/s2. What is the ... Monday, September 27, 2010 at 7:54pm by help Consider a 59-cm-long lawn mower blade rotating about its center at 2990 rpm. (a) Calculate the linear speed of the tip of the blade. (b) If safety regulations require that the blade be stoppable within 3.0 s, what minimum angular acceleration will accomplish this? Assume that... Saturday, June 16, 2012 at 10:33am by Camilla Search: A person is riding a bicycle, the wheels of a bicycle have an angular velocity of +18.0 rad/s. Then, the brakes are applied. In coming to rest, each wheel makes an angular displacement of +11.0 revolutions. How much time does it take for the bike to come to rest? What ... Friday, February 25, 2011 at 2:16am by Taylor The balance wheel of a watch oscillates with angular amplitude 1.7π rad and period 0.64 s. Find (a) the maximum angular speed of the wheel, (b) the angular speed of the wheel at displacement 1.7π/2 rad, and (c) the magnitude of the angular acceleration at ... Sunday, January 9, 2011 at 4:50pm by kylie an airplane propeller slows from an initial angular speed of 12.5rev/s to a final angular speed of 5rev/s during the process the propeller rotates through 21 rev. find the angular acceleration of the propeller if in radians per second squared, assuming its constant? Thursday, April 26, 2012 at 4:13pm by adan Please check solution A uniform solid disk with a mass of 40.3 kg and a radius of 0.454 m is free to rotate about a frictionless axle. Forces of 90.0 N and 125 N are applied to the disk (a) What is the net torque produced by the two forces? (Assume counterclockwise is the positive direction (b) ... Monday, May 7, 2007 at 1:18pm by Papito The blades of a ceiling fan have a radius of 0.340m and are rotating about a fixed axis with an intial angular velocity of +1.25rad/s2. When the switch on the fan is turned to a higher speed, the blades acire an angular acceleration of +2.55rad/s2. After 0.480s have elapsed ... Tuesday, January 21, 2014 at 4:35pm by bre A diver is trying a new more and is attempting to increase his angular acceleration in order to increase the angular displacement and thus produce more rotations from a standing start on the board the diver accelerates at 950 degree/s/s.what is the angular displacement over 0.... Saturday, March 9, 2013 at 4:02am by Vanitha A 175 kg merry-go-round in the shape of a uniform, solid, horizontal disk of radius 1.50 m is set in motion by wrapping a rope about the rim of the disk and pulling on the rope. What constant force would have to be exerted on the rope to bring the merry-go-round from rest to ... Sunday, July 20, 2008 at 7:32pm by Elisa A fan blade is rotating with a constant angular acceleration of +8.4 rad/s2. At what point on the blade, as measured from the axis of rotation, does the magnitude of the tangential acceleration equal that of the acceleration due to gravity? 
Tuesday, July 16, 2013 at 2:50pm by frank Moment of force? What is that? Torque? Torque= momentofInertia*angular acceleration angular acceleation is not specified. Monday, November 15, 2010 at 5:04am by bobpursley Physics - Angular Momentum When the angular momentum changes, the 'change' in the angular momentum vector (ie. dL) is ____. [a.] perpendicular to the torque vector. [b.] parallel to the angular momentum vector. [c.] parallel to the torque vector. .. im confused on this one.. i think it's [A]. am i right... Sunday, April 8, 2007 at 12:34am by COFFEE A fan blade is rotating with a constant angular acceleration of +13.7 rad/s2. At what point on the blade, as measured from the axis of rotation, does the magnitude of the tangential acceleration equal that of the acceleration due to gravity? Thursday, October 28, 2010 at 4:31pm by JOHN Torque = (moment of inertia) x (angular acceleration) = 50 N*m The moment of inertia is I = (1/2) M*R^2 = 281.25 kg*m^2 The maximum angular velocity you want to achieve is w = 1200 rev/min * (2 pi/ 60) = 125.66 rad/s The angular acceleration rate is alpha = 125.66 rad/s / t ... Saturday, July 23, 2011 at 5:45pm by drwls displacement= 1/2 angulare acceleration*t^2 solve fodr angular acceleration Monday, March 14, 2011 at 5:48pm by bobpursley Two objects of equal mass are on a turning wheel. Mass 1 is located at the rim of the wheel while mass 2 is located halfway between the rim and the axis of rotation. The wheel is rotating with a non-zero angular acceleration. For each of the following statements select the ... Tuesday, March 20, 2012 at 9:26pm by Miriam Two objects of equal mass are on a turning wheel. Mass 1 is located at the rim of the wheel while mass 2 is located halfway between the rim and the axis of rotation. The wheel is rotating with a non-zero angular acceleration. For each of the following statements select the ... Wednesday, March 21, 2012 at 7:35pm by Miriam A person holds a ladder horizontally at its center. Treating the ladder as a uniform rod of length 3.55 m and mass 10.31 Kg, find the torque the person must exert on the ladder to give it an angular acceleration of 0.406 rad/s2. Should I set the given angular acceleration ... Monday, June 4, 2007 at 4:27pm by anne constant acceleration a d = (1/2) a t^2 by the way the angular acceleration alpha = a/R Saturday, August 7, 2010 at 2:46pm by Damon A merry-go-round starts from rest and accelerates uniformly over 29.0 s to a final angular velocity of 5.35 rev/min. (a) Find the maximum linear speed of a person sitting on the merry-go-round 4.50 m from the center. (b) Find the person's maximum radial acceleration. (c) Find ... Tuesday, March 1, 2011 at 12:08pm by Anonymous A merry-go-round starts from rest and accelerates uniformly over 26.0 s to a final angular velocity of 5.50 rev/min. (a) Find the maximum linear speed of a person sitting on the merry-go-round 4.25 m from the center. (b) Find the person's maximum radial acceleration. (c) Find ... 
Wednesday, March 14, 2012 at 1:23pm by John
If the angular quantities theta, omega, and angular acceleration were specified in terms of degrees rather than radians, how would the kinematics equations for uniformly accelerated rotational motion have to be altered?
Monday, February 28, 2011 at 9:03pm by luna
A person is riding a bicycle; the wheels of the bicycle have an angular velocity of +18.5 rad/s. Then, the brakes are applied. In coming to rest, each wheel makes an angular displacement of +14.5 revolutions. (a) How much time does it take for the bike to come to rest? 1 s (b) ...
Tuesday, February 22, 2011 at 11:01pm by Lauren
What is a tire's angular acceleration if the tangential acceleration at a radius of 0.15 m is 9.4 x 10^-2 m/s2?
Monday, March 19, 2012 at 10:32am by Anonymous
A solid sphere of uniform density starts from rest and rolls without slipping a distance of d = 3.4 m down a θ = 29° incline. The sphere has a mass M = 3.7 kg and a radius R = 0.28 m. What is the magnitude of the frictional force on the sphere? --The other person did this for ...
Friday, April 9, 2010 at 3:43pm by rufy
A turntable is designed to acquire an angular velocity of 38.6 rev/s in 0.5 s, starting from rest. Find the average angular acceleration of the turntable during the 0.5 s period.
Tuesday, November 23, 2010 at 6:51pm by Anonymous
Physics (Please help)
A fan blade is rotating with a constant angular acceleration of +12.4 rad/s2. At what point on the blade, as measured from the axis of rotation, does the magnitude of the tangential acceleration equal that of the acceleration due to gravity? I do not know how to start this.
Tuesday, June 5, 2012 at 3:34pm by Hannah
Please help... A wheel 1.0 m in diameter is rotating about a fixed axis with an initial angular velocity of 2 rev/s. The angular acceleration is 3 rev/s^2. (1) Compute the angular velocity after 6 s. (2) Through what angle has the wheel turned in this time interval? (3) What is the ...
Monday, January 2, 2012 at 3:14am by anonymous
A tire placed on a balancing machine in a service station starts from rest and turns through 10.8 revolutions in 8.48 s before reaching its final angular speed. Calculate its angular acceleration.
Monday, March 14, 2011 at 5:48pm by Emily
A tire placed on a balancing machine in a service station starts from rest and turns through 4.71 revolutions in 1.29 s before reaching its final angular speed. Calculate its angular acceleration.
Sunday, September 30, 2012 at 4:52pm by Arthur
A tire placed on a balancing machine in a service station starts from rest and turns through 4.82 revolutions in 1.48 s before reaching its final angular speed. Calculate its angular acceleration.
Thursday, October 4, 2012 at 8:36pm by Emily
A tire placed on a balancing machine in a service station starts from rest and turns through 4.82 revolutions in 1.48 s before reaching its final angular speed. Calculate its angular acceleration.
Thursday, October 4, 2012 at 8:38pm by Emily
A tire placed on a balancing machine in a service station starts from rest and turns through 4.82 revolutions in 1.48 s before reaching its final angular speed. Calculate its angular acceleration.
Sunday, October 7, 2012 at 7:51pm by Emily
Physics HELP!
A centrifuge in a medical laboratory rotates at an angular speed of 3500 rev/min. When switched off, it rotates through 41.0 revolutions before coming to rest. (a) Find the angular speed in rad/s. (b) Find the displacement in radians. (c) Find the constant angular ...
Friday, October 26, 2012 at 4:35pm by Mel
Physics (Please help)
A dentist causes the bit of a high-speed drill to accelerate from an angular speed of 1.44 x 10^4 rad/s to an angular speed of 5.05 x 10^4 rad/s. In the process, the bit turns through 1.77 x 10^4 rad. Assuming a constant angular acceleration, how long would it take the bit to ...
Thursday, June 7, 2012 at 4:57pm by Hannah
The GPE at the starting point is mgL/2. The GPE at the end is zero. The gain of KE then equals mgL/2. The KE of this rotating rod is (1/2) I w^2, so mgL/2 = (1/2)*(1/3) mL^2*w^2; solve for w. b) angular acceleration = (1/I)*torque, where torque = mg*L/2, so angular acceleration = (3/(mL^2))*(mgL/2) = (3g)/(2L) ...
Thursday, August 11, 2011 at 6:18pm by bobpursley
Angular acceleration = Torque/(moment of inertia). The moment of inertia is (1/2) M R^2 for a disk. After 10 seconds, the number of radians turned is theta = (1/2)(angular acceleration)*(10 s)^2. Divide that by 2 pi for the number of revolutions. Rotational work done is (torque...
Saturday, December 4, 2010 at 8:46am by drwls
Physics - Elena please help!!!!
A centrifuge in a medical laboratory rotates at an angular speed of 3500 rev/min. When switched off, it rotates through 41.0 revolutions before coming to rest. (a) Find the angular speed in rad/s. (b) Find the displacement in radians. (c) Find the constant angular ...
Monday, October 29, 2012 at 5:14pm by Tom
A person is riding a bicycle, and its wheels have an angular velocity of +15.5 rad/s. Then, the brakes are applied and the bike is brought to a uniform stop. During braking, the angular displacement of each wheel is +15.5 revolutions. (a) How much time does it take for the ...
Wednesday, November 3, 2010 at 9:44pm by james
After fixing a flat tire on a bicycle you give the wheel a spin. (a) If its initial angular speed was 5.41 rad/s and it rotated 13.7 revolutions before coming to rest, what was its average angular ...
Friday, November 5, 2010 at 7:40pm by aok
physics, bike
After fixing a flat tire on a bicycle you give the wheel a spin. Its initial angular speed was 6.50 rad/s and it rotated 15.2 revolutions before coming to rest. What was its average angular acceleration?
Thursday, January 13, 2011 at 6:59pm by john
Your calculations look OK except: (1) w has to be in radians per second, not rpm. (2) You need to include dimensions with the numbers. For example v is in m/s and a is in m/s^2. For the time required, divide the final angular velocity of C by its angular acceleration rate, 5.7...
Friday, February 10, 2012 at 10:01pm by drwls
physics - drwls?
A disk rotates about its central axis starting from rest and accelerates with constant angular acceleration. At one time it is rotating at 7 rev/s. 55 revolutions later, its angular speed is 21 rev/s. (a) Calculate the angular acceleration. (b) Calculate the time required to ...
Monday, March 12, 2007 at 1:25am by winterWX
7. A turntable reaches an angular speed of 33.3 rpm, in 2.1 s, starting from rest. (a) Assuming the angular acceleration is constant, what is its magnitude? (b) How many revolutions does the turntable make during this time interval?
Tuesday, February 22, 2011 at 8:55am by jimmyjonas
science, physics
A person is riding a bicycle, and its wheels have an angular velocity of +15.5 rad/s.
Then, the brakes are applied and the bike is brought to a uniform stop. During braking, the angular displacement of each wheel is +15.5 revolutions. (a) How much time does it take for the ...
Wednesday, November 3, 2010 at 10:11pm by Anonymous
alpha is my code for d^2T/dt^2, the angular acceleration, and dT/dt is omega, the angular velocity.
Monday, December 9, 2013 at 3:08am by Damon
During a tennis serve, a racket is given an angular acceleration of magnitude 155 rad/s2. At the top of the serve, the racket has an angular speed of 13 rad/s. If the distance between the top of the racket and the shoulder is 1.5 m, find the magnitude of the total acceleration...
Wednesday, March 26, 2008 at 6:06pm by Zach
College Physics
A turntable reaches an angular speed of 33.3 rpm, in 2.0 s, starting from rest. (a) Assuming the angular acceleration is constant, what is its magnitude? (b) How many revolutions does the turntable make during this time interval?
Friday, September 25, 2009 at 5:28pm by Megan
A pulsar is a rapidly rotating neutron star that emits radio pulses with precise synchronization, there being one such pulse for each rotation of the star. The period of rotation is found by measuring the time between pulses. At present, the pulsar in the central region of the ...
Monday, March 17, 2008 at 3:13pm by shelby
torque = I (moment of inertia) times angular acceleration (change in angular velocity over time). Change in angular velocity over time, simply put, is velocity final minus velocity initial, divided by total time. That's all I'm doing for you; you should get it from that.
Wednesday, April 25, 2012 at 2:06am by visoth
Physics help please!!!
The angular speed of digital video discs (DVDs) varies with whether the inner or outer part of the disc is being read. (CDs function in the same way.) Over a 133 min playing time, the angular speed varies from 570 rpm to 1600 rpm. Assuming it to be constant, what is the ...
Saturday, February 18, 2012 at 11:47am by john
The centripetal acceleration r w^2 is proportional to radius. w is the angular velocity (rad/sec), which is the same for both points. There is no tangential acceleration at constant w.
Tuesday, April 13, 2010 at 3:40am by drwls
The tangential linear acceleration of the child = g = 9.81 m/s^2; the instantaneous angular acceleration of the see-saw = a/r = 9.81/1.91 = 5.14 rad/s^2.
Monday, October 31, 2011 at 5:31pm by Matthias Schwartzkopf
Physics (Please check)
This is how the solution is reached: Centripetal acceleration = radius x (angular speed)^2. The acceleration of A = 2.8 (the acceleration of B). So to set it up we get: Sqrt(L2^2+L1^2) w^2 = 2.8 (L1) w^2. The angular velocities cancel because they are equal. Now square both sides...
Tuesday, June 5, 2012 at 5:07pm by RJ
Since the velocities of the rims are equal at the point of contact, the angular velocities are inversely proportional to the radii. The same applies to the angular acceleration rates. Use that fact to answer the questions yourself.
Tuesday, October 19, 2010 at 10:25am by drwls
A flywheel with a diameter of 2.45 m is rotating at an angular speed of 74.1 rev/min. (a) What is the angular speed of the flywheel in radians per second? (b) What is the linear speed of a point on the rim of the flywheel? (c) What constant angular acceleration (in revolutions...
Wednesday, November 9, 2011 at 8:51am by kyle
A diver performing a double somersault spins at an angular speed of 4.5 π rad/s precisely 0.70 s after leaving the platform.
Assuming the diver begins with zero initial angular speed and accelerates at a constant rate, what is the diver's angular acceleration during the ...
Wednesday, March 28, 2012 at 2:12pm by Ethan
Start with the tension: T = mg - ma. Then the moment of inertia... Torque = I * angular acceleration; Force * radius = I * (linear acceleration/radius); solve for I.
Monday, March 24, 2008 at 4:18pm by bobpursley
a = angular acceleration; w = a t = angular velocity; theta = (1/2) a t^2 = radians turned total from rest. In this case theta = 2 pi * 4 = 8 pi radians, so (1/2) a t^2 = 8 pi. You know a; get t and then ...
Wednesday, January 27, 2010 at 6:09pm by Damon
A dentist causes the bit of a high-speed drill to accelerate from an angular speed of 1.20x10^4 rad/s to an angular speed of 3.14x10^4 rad/s. In the process, the bit turns through 1.92x10^4 rad. Assuming a constant angular acceleration, how long would it take the bit to reach ...
Tuesday, October 23, 2012 at 8:38am by Sam
A centrifuge takes 100 s to spin up from rest to its final angular speed with constant angular acceleration. If a point located 8.00 cm from the axis of rotation of the centrifuge moves with a speed of 150 m/s when the centrifuge is at full speed, how many revolutions does the...
Sunday, June 24, 2012 at 7:12pm by Bloom
a = angular acceleration; I = rotational inertia; w = angular velocity; v = linear velocity. Torque = R x FT = I a, so a = R x FT / I = (R/I)(4t - 0.10t^2); w = integral of a with respect to t; v = w R.
Wednesday, March 10, 2010 at 8:34pm by FredR
Two identical dragsters, starting from rest, accelerate side by side along a straight track. The wheels have identical angular acceleration. The wheels on one of the cars roll without slipping, while the wheels on the other slip during part of the time...
Saturday, April 5, 2008 at 3:35am by James
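Several of the problems on this page reduce to the constant-angular-acceleration relations w = w0 + a*t and theta = w0*t + (1/2)*a*t^2. Here is a short Python sketch applied to the bicycle-braking problem from the top of the page (w0 = +24.0 rad/s, 16.0 revolutions to rest); the approach, not the particular numbers, is the point:

import math

w0 = 24.0                      # rad/s, initial wheel angular velocity
theta = 16.0 * 2 * math.pi     # rad, angular displacement while braking
alpha = w0**2 / (2 * theta)    # from w^2 = w0^2 - 2*alpha*theta with w = 0
t = w0 / alpha                 # time to come to rest (equivalently 2*theta/w0)
print(f"alpha = {alpha:.2f} rad/s^2, t = {t:.2f} s")   # about 2.86 rad/s^2 and 8.38 s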
{"url":"http://www.jiskha.com/search/index.cgi?query=angular+acceleration&page=2","timestamp":"2014-04-19T02:30:02Z","content_type":null,"content_length":"44133","record_id":"<urn:uuid:6b07aec2-ff9e-412c-bfac-d42704b95872>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: StringCases and Shortest
Replies: 4    Last Post: Mar 12, 2009 3:21 AM

StringCases and Shortest
Posted: Mar 11, 2009 5:26 AM

I want to select the shortest substrings between brackets from a string. For example:
Func["f(a+b) some text (comments)"] should give {"(a+b)","(comments)"}.
Func["(f(a+b) some text (comments)"] should give {"(a+b)","(comments)"} too.
In the help I found this line:
In[]: StringCases["-(a)--(bb)--(c)-", Shortest["(" ~~ __ ~~ ")"]]
Out: {"(a)","(bb)","(c)"}
which, at first sight, works as I desire. But when I add a bracket at the start of the line, the answer is incorrect:
In[]: StringCases["(-(a)--(bb)--(c)-", Shortest["(" ~~ __ ~~ ")"]]
Out: {"(-(a)","(bb)","(c)"}
What is wrong? And how can I solve this problem?

Date / Subject / Author
3/11/09  StringCases and Shortest  Grischika@mail.ru
3/11/09  Re: StringCases and Shortest  Jens-Peer Kuska
3/12/09  Re: StringCases and Shortest  Grischika@mail.ru
3/12/09  Re: StringCases and Shortest  sjoerd.c.devries@gmail.com
3/12/09  Re: StringCases and Shortest  raffy@mac.com
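For readers without Mathematica, the same behavior can be reproduced with regular expressions; the short Python sketch below is purely illustrative. It shows that a lazy quantifier still anchors at the leftmost "(", which is exactly the problem in the question, while excluding parentheses from the inner match yields the intended shortest bracketed substrings:

import re

s = "(-(a)--(bb)--(c)-"
print(re.findall(r"\(.*?\)", s))     # lazy, but anchored at the first "(": ['(-(a)', '(bb)', '(c)']
print(re.findall(r"\([^()]*\)", s))  # forbid parens inside the match: ['(a)', '(bb)', '(c)']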
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1909922&messageID=6639061","timestamp":"2014-04-21T02:44:36Z","content_type":null,"content_length":"21183","record_id":"<urn:uuid:532d1f5a-d0ec-4038-b81d-6cbfac377413>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Reshape problem
From: "Radu Ban" <raduban@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Reshape problem
Date: Thu, 4 May 2006 14:58:36 -0400

i don't know if this is the best way, but you could use the -xpose- command. but before that you need to make all your variables numerical. for example:

/* coding males as 1, females as 0; Y as 1, no (missing) as 0 */
/* coding age as numeric age */
forval i = 2/256 {
    gen real_v`i' = 1 if v`i' == "M" | v`i' == "Y"
    replace real_v`i' = 0 if v`i' == "F" | v`i' == ""
    replace real_v`i' = real(v`i') in 2
}
/* now you no longer really need id and v1, as long as you make note of what each row means */
drop id v1
/* now you can transpose */
xpose, clear
/* now you rename to get the variable names back */
rename v1 sex
rename v2 age
/* now you label values */
label define yesno 1 yes 0 no
label define gender 1 male 0 female

hope this helps. there are probably more elegant solutions though.

2006/5/4, Thomas Speidel <thomassp@cancerboard.ab.ca>:
Dear Statalisters:
I have a dataset that I need to re-organize in a long format. Here is a sample:

| id v1 v2 v3 v4 |
| 2 Sex M M M |
| 3 Age 47 66 56 |
| 4 Left eye |
| 5 Right eye Y Y Y |
| 6 Lower eyelid Y Y Y |
| 7 Upper eyelid |
| 8 Lateral canthus |
| 9 Medial canthus Y Y |
| 10 Recurrent lesion |
| 11 Primary lesion Y Y Y |

There are 255 observations (v2-v256) occupying the columns. I need each observation to be a distinct row. Almost all of the entries are string (missing means "NO"). I am having difficulties using the reshape command to achieve my goal, possibly because of the strings. Any suggestion on how to approach this? Should I create a loop to encode all variables?
Thanks for any suggestion,
Thomas Speidel
Statistical Associate
Clinical Trials Unit
Tom Baker Cancer Centre
1331 - 29th Street N.W.
Calgary, AB, T2N 4N4
Tel. (403) 521-3370
Email: thomassp@cancerboard.ab.ca
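For comparison outside Stata, and only as a hedged sketch with toy column names, the same recode-and-transpose step could be done in Python with pandas:

import pandas as pd

# Toy frame shaped like the posted data: rows are attributes, columns are cases.
df = pd.DataFrame(
    {"v2": ["M", "47", "", "Y"], "v3": ["M", "66", "", "Y"], "v4": ["M", "56", "", "Y"]},
    index=["Sex", "Age", "Left eye", "Right eye"],
)
out = df.T                                     # transpose: one row per case
out["Sex"] = (out["Sex"] == "M").astype(int)   # M -> 1, F -> 0
out["Age"] = pd.to_numeric(out["Age"])         # age strings -> numbers
for col in ["Left eye", "Right eye"]:
    out[col] = (out[col] == "Y").astype(int)   # Y -> 1, missing -> 0
print(out)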
{"url":"http://www.stata.com/statalist/archive/2006-05/msg00175.html","timestamp":"2014-04-21T07:14:00Z","content_type":null,"content_length":"8576","record_id":"<urn:uuid:59a82173-b1f7-4217-b15a-ea8dd4caff7f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Contemporary Mathematics 2003; 202 pp; softcover Volume: 337 ISBN-10: 0-8218-3379-0 ISBN-13: 978-0-8218-3379-7 List Price: US$65 Member Price: US$52 Order Code: CONM/337 This volume covers material presented by invited speakers at the AMS special session on Riemannian and Lorentzian geometries held at the annual Joint Mathematics Meetings in Baltimore. Topics covered include classification of curvature-related operators, curvature-homogeneous Einstein 4-manifolds, linear stability/instability singularity and hyperbolic operators of spacetimes, spectral geometry of holomorphic manifolds, cut loci of nilpotent Lie groups, conformal geometry of almost Hermitian manifolds, and also submanifolds of complex and contact spaces. This volume can serve as a good reference source and provide indications for further research. It is suitable for graduate students and research mathematicians interested in differential geometry. Graduate students and research mathematicians interested in differential geometry. • K. Abe, D. Grantcharov, and G. Grantcharov -- On some complex manifolds with torus symmetry • M. J. S. L. Ashley and S. M. Scott -- Curvature singularities and abstract boundary singularity theorems for space-time • A. Derdzinski -- Curvature-homogeneous indefinite Einstein metrics in dimension four: The diagonalizable case • P. E. Ehrlich, Y.-T. Jung, J.-S. Kim, and S.-B. Kim -- Jacobians and volume comparison for Lorentzian warped products • B. Fiedler and P. Gilkey -- Nilpotent Szabó, Osserman and Ivanov-Petrova pseudo-Riemannian manifolds • P. B. Gilkey, R. Ivanova, and I. Stavrov -- Jordan Szabó algebraic covariant derivative curvature tensors • A. D. Helfer -- Differential topology, differential geometry, and hyperbolic operators • C. Jang and P. E. Parker -- Examples of conjugate loci of pseudoriemannian 2-step nilpotent Lie groups with nondegenerate center • R. G. McLenaghan, R. G. Smirnov, and D. The -- Group invariant classification of orthogonal coordinate webs • J. H. Park -- Spectral geometry and the Kaehler condition for Hermitian manifolds with boundary • P. Rukimbira -- Energy, volume and deformation of contact metrics • R. Sharma -- Holomorphically planar conformal vector fields on almost Hermitian manifolds • B. D. Suceavă -- Fundamental inequalities and strongly minimal submanifolds • M. Tanimoto -- Linear perturbations of spatially locally homogeneous spacetimes • M. M. Tripathi -- Certain basic inequalities for submanifolds in \((\kappa,\mu)\)-spaces
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-337","timestamp":"2014-04-17T22:49:25Z","content_type":null,"content_length":"16423","record_id":"<urn:uuid:f6e7b1e7-24b7-4fbf-8702-f7a55aa6380d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
SailNet Community - View Single Post - Correction for Leeway - Coastal Navigation Question
Though you say the leeway is 4 degrees, it is much easier to identify the leeway as a vector (direction and speed). This vector can be expressed on the chart as a line showing how far the leeway would take you over an arbitrary length of time (say two hours). Therefore, if the 22-knot wind and whatever associated current were to cause your vessel to move in a direction of 90 degrees at 1 knot, you can plot a 2-nautical-mile line from point A at 90 degrees to represent where the leeway alone would take you. You can also plot a line of twelve nautical miles from A to B at 145 degrees to represent your intended path for the same arbitrary 2 hours. If you then determine the course direction from the end of the 2-mile line to the end of the 12-mile line, this is the course to use that corrects for the leeway. I'm sure someone could explain this more clearly.
Take care and joy,
Aythya crew
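A quick numerical sketch of the construction described above (Python). The 145-degree/12-nm intended track and the 90-degree/1-knot set over two hours are the poster's example figures; everything else is illustrative:

import math

def to_xy(bearing_deg, dist_nm):
    th = math.radians(bearing_deg)
    return dist_nm * math.sin(th), dist_nm * math.cos(th)   # east, north components

ix, iy = to_xy(145, 12)        # intended displacement A -> B over two hours
lx, ly = to_xy(90, 2)          # displacement due to leeway over the same time
cx, cy = ix - lx, iy - ly      # the steered course must make up the difference
course = math.degrees(math.atan2(cx, cy)) % 360
print(f"course to steer: {course:.0f} degrees")   # about 154, i.e. pointed into the set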
{"url":"http://www.sailnet.com/forums/756568-post2.html","timestamp":"2014-04-17T01:36:02Z","content_type":null,"content_length":"33638","record_id":"<urn:uuid:aa9ecc8e-6ec5-41e7-bc26-168c1784f8f3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
A Robust Trust-Region Algorithm with a Nonmonotonic Penalty Parameter Scheme for Constrained Optimization
Results 1 - 10 of 16

, 1995
"In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Cited by 114 (2 self)

- SIAM Journal on Optimization, 1992
"This work presents a global convergence theory for a broad class of trust-region algorithms for the smooth nonlinear programming problem with equality constraints. The main result generalizes Powell's 1975 result for unconstrained trust-region algorithms."
Cited by 42 (10 self)

- SIAM Journal on Optimization, 1998
"Abstract. This paper describes a software implementation of Byrd and Omojokun's trust region algorithm for solving nonlinear equality constrained optimization problems. The code is designed for the efficient solution of large problems and provides the user with a variety of linear algebra techniques for solving the subproblems occurring in the algorithm. Second derivative information can be used, but when it is not available, limited memory quasi-Newton approximations are made. The performance of the code is studied using a set of difficult test problems from the CUTE collection."
Cited by 38 (11 self)

- SIAM Journal on Optimization, 1994
"We provide an effective and efficient implementation of a sequential quadratic programming (SQP) algorithm for the general large scale nonlinear programming problem. In this algorithm the quadratic programming subproblems are solved by an interior point method that can be prematurely halted by a trust region constraint. Numerous computational enhancements to improve the numerical performance are presented. These include a dynamic procedure for adjusting the merit function parameter and procedures for adjusting the trust region radius. Numerical results and comparisons are presented. Key words: nonlinear programming, interior point, SQP, merit function, trust region, large scale. 1. Introduction. In a series of recent papers, [3], [6], and [8], the authors have developed a new algorithmic approach for solving large, nonlinear, constrained optimization problems. This proposed procedure is, in essence, a sequential quadratic programming (SQP) method that uses an interior point algorithm..."
Cited by 22 (10 self)

, 1997
"A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented. Key words: Nonlinear programming, successive quadratic programming, trust regions, augmented Lagrangians, Lipschitz conditions. Department of Applied Mathematics, IMECC-UNICAMP, University of Campinas, CP 6065, 13081-970 Campinas SP, Brazil (chico@ime.unicamp.br). This author was supported by FAPESP (Grant 903724-6), FINEP and FAEP-UNICAMP."
Cited by 21 (8 self)

- Journal of Optimization Theory and Applications, 1999
"We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved and in the second phase the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between different iterations. A suitable updating procedure for this penalty parameter is included by means of which it can be increased or decreased along different iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementations of the resulting method are possible. We prove that under suitable conditions, that ..."
Cited by 19 (6 self)

- in Optimal Design and Control, 1994
"Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multil..."
Cited by 11 (5 self)

- THE STATE OF THE ART IN NUMERICAL ANALYSIS, 1996
"..."

- Journal of Optimization Theory and Applications, 1998
"The family of feasible methods for minimization with nonlinear constraints includes Rosen's Nonlinear Projected Gradient Method, the Generalized Reduced Gradient Method (GRG) and many variants of the Sequential Gradient Restoration Algorithm (SGRA). Generally speaking, a particular iteration of any of these methods proceeds in two phases. In the Restoration Phase, feasibility is restored by means of the resolution of an auxiliary nonlinear problem, generally a nonlinear system of equations. In the Minimization Phase, optimality is improved by means of the consideration of the objective function, or its Lagrangian, on the tangent subspace to the constraints. In this paper, minimal assumptions are stated on the Restoration Phase and the Minimization Phase that ensure that the resulting algorithm is globally convergent. The key point is the possibility of comparing two successive nonfeasible iterates by means of a suitable merit function that combines feasibility and optimality. The mer..."
Cited by 7 (4 self)

, 1999
"The sequential quadratic programming (SQP) algorithm has been one of the most successful general methods for solving nonlinear constrained optimization problems. We provide an introduction to the general method and show its relationship to recent developments in interior-point approaches. We emphasize large-scale aspects. Key words: sequential quadratic programming, nonlinear optimization, Newton methods, interior-point methods, local convergence, global convergence. (Contribution of Sandia National Laboratories and not subject to copyright in the United States. Preprint submitted to Elsevier, 1 July 1999.) 1 Introduction. In this article we consider the general method of Sequential Quadratic Programming (hereafter denoted SQP) for solving the nonlinear programming problem: minimize_x f(x) subject to h(x) = 0 and g(x) <= 0 (NLP), where f: R^n -> R, h: R^n -> R^m, and g: R^n -> R^p. Broadly defined, the SQP method is a procedure that generates iterates converging ..."
Cited by 5 (0 self)
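As a minimal, self-contained illustration of the problem class (NLP) that these abstracts address, and not an implementation of any of the cited algorithms, SciPy's SLSQP routine is a readily available SQP method; the toy objective, constraints, and starting point below are invented for the example:

from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = (
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # equality constraint h(x) = 0
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # inequality, g(x) >= 0 in SciPy's convention
)
res = minimize(f, x0=[2.0, 0.0], method="SLSQP", constraints=cons)
print(res.x)   # approximately [0.75, 2.25], the minimizer on the line x0 + x1 = 3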
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=794668","timestamp":"2014-04-21T04:47:00Z","content_type":null,"content_length":"38572","record_id":"<urn:uuid:3f7130b8-c61c-4fa5-b41b-d2dacff6eb89>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
statistics percentage without a mean
November 15th 2006, 05:02 AM #1
statistics percentage without a mean
This is not a homework or exam exercise; I would actually like to statistically compare some real data. I would like to compare data collected from the same source but by different people, and come up with some percentage of variation in this method. Two people are measuring exactly the same thing but coming up with different answers (human error). I would like to give this error a percentage of variance. (Is that what it is called?) For example, the two dip readings are 56 and 45, so the difference between them is 11. There is no mean or average. The angle of dip at its maximum is 90 degrees, so how do I work out, for a set of data, what the average percentage error is between these two sets of measurements taken by different people? In the same way, I would also like to compare the errors in the direction (strike) measurements, for example comparing a direction reading of 153 degrees (which is roughly SE) to 31 degrees (NE). Again, there is no mean value, only the knowledge that there are 360 degrees altogether. I would be very happy to even be able to compare their measurements independently, but can you also compare the dip and direction variables in the same equation? Phew... clearly one can say that I am no math person!
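One simple way to put a number on the disagreement is sketched below in Python, using the two example readings from the post. The normalization choices (360 degrees for strike, which wraps around, and 90 degrees for dip, which does not) are assumptions for illustration, not standard doctrine:

def angle_diff(a, b, period=360):
    d = abs(a - b) % period
    return min(d, period - d)        # shortest separation around the circle

strike_error = angle_diff(153, 31)   # 122 degrees apart, respecting wrap-around
dip_error = abs(56 - 45)             # dip has no wrap-around: 11 degrees
print(strike_error / 360 * 100)      # strike error as % of the full circle (~33.9%)
print(dip_error / 90 * 100)          # dip error as % of the 0-90 range (~12.2%)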
{"url":"http://mathhelpforum.com/advanced-statistics/7608-statistics-percentage-without-mean.html","timestamp":"2014-04-20T11:39:08Z","content_type":null,"content_length":"29317","record_id":"<urn:uuid:99ebe99f-e748-42aa-9c59-7681db69c733>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling the Release Kinetics of Poorly Water-Soluble Drug Molecules from Liposomal Nanocarriers Journal of Drug Delivery Volume 2011 (2011), Article ID 376548, 10 pages Research Article Modeling the Release Kinetics of Poorly Water-Soluble Drug Molecules from Liposomal Nanocarriers ^1Department of Physics, North Dakota State University, Fargo, ND 58108-6050, USA ^2Department of Pharmaceutical Technology, Friedrich Schiller University of Jena, Lessingstraße 8, 07743 Jena, Germany Received 22 December 2010; Accepted 22 March 2011 Academic Editor: Volkmar Weissig Copyright © 2011 Stephan Loew et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Liposomes are frequently used as pharmaceutical nanocarriers to deliver poorly water-soluble drugs such as temoporfin, cyclosporine A, amphotericin B, and paclitaxel to their target site. Optimal drug delivery depends on understanding the release kinetics of the drug molecules from the host liposomes during the journey to the target site and at the target site. Transfer of drugs in model systems consisting of donor liposomes and acceptor liposomes is known from experimental work to typically exhibit a first-order kinetics with a simple exponential behavior. In some cases, a fast component in the initial transfer is present, in other cases the transfer is sigmoidal. We present and analyze a theoretical model for the transfer that accounts for two physical mechanisms, collisions between liposomes and diffusion of the drug molecules through the aqueous phase. Starting with the detailed distribution of drug molecules among the individual liposomes, we specify the conditions that lead to an apparent first-order kinetic behavior. We also discuss possible implications on the transfer kinetics of (1) high drug loading of donor liposomes, (2) attractive interactions between drug molecules within the liposomes, and (3) slow transfer of drugs between the inner and outer leaflets of the liposomes. 1. Introduction Poor solubility in water is a well-recognized obstacle for efficient oral or parenteral drug administration [1, 2]. Liposomes are among the most widely used type of pharmaceutical nanocarriers for small and poorly water-soluble drug molecules [3]. These drugs preferentially partition into the hydrophobic compartment that is formed by the hydrocarbon tails of the liposomal lipids. Liposomes have been used in their first generation (conventional liposomes) predominantly as long-circulating transport vehicles [4, 5], followed by a second generation that improved the circulation time further by decorating the surface with PEG-chains (stealth liposomes [6]). Third-generation liposomes are now being engineered to contain targeting ligands [7] and to carry out stimuli-sensitive triggering of the drug release [8]. An important property of liposome-based drug delivery is the release kinetics of the drug from the host, which has been investigated for a number of model systems [9–12]. Experimental investigations of the transfer of temoporfin between two different types of liposomes (i.e., from donor liposomes to acceptor liposomes) have recently been carried out using a mini ion exchange column technique [13 ]. The column separates donor from acceptor liposomes and thus allows to monitor the time dependence of the drug transfer. 
It is observed that, typically, the transfer follows an apparent first-order behavior, characterized by a single exponential function. This is remarkable given the complexity of the system, with the drug molecules being able to migrate from the donor to the acceptor liposomes via different physical mechanisms. In fact, there are two mechanisms that, in general, act simultaneously. The first mechanism is the transfer of drugs upon collisions between two liposomes. In this case, the drug molecules directly migrate from one liposome to another with minimal exposure to the aqueous phase. The second mechanism refers to the transfer of drugs via diffusion through the aqueous phase. We note that the collision mechanism has been invoked, for example, to explain the transfer of lipids [14] and cholesterol [15] between vesicles, and the transfer of fatty acids between vesicles and fatty acid binding proteins [16]. Also the diffusion mechanism was found to be consistent with the transport of lipids [17]. In some cases, both mechanisms were suggested to contribute to the transport of lipids between vesicles [18] and to the transport of lipophilic drugs from oil-in-water emulsions to cells [19] and from plasma proteins to lipid vesicles [20]. In our preceding experimental work, where we have investigated the kinetics of temoporfin transport from donor to acceptor liposomes [13], we found that above a certain concentration (corresponding to a liposome-to-liposome distance of about 200nm for our specific system) the transfer was dominated by collisions; for smaller concentrations transport through diffusion was prevalent. The objective of the present work is to introduce and discuss a detailed kinetic model for the release properties of poorly water-soluble drug molecules from liposomal nanocarriers. Despite a large number of experimental studies about the kinetics of lipid and drug transfer between liposomes and other nanocarriers, there is little theoretical work available that addresses the nature of the transfer kinetics. Our theoretical formalism is based on a detailed distribution function of drug molecules among the individual liposomes. Kinetic rate equations for that distribution function account for two transport mechanisms: collisions between liposomes and drug diffusion through the aqueous phase. We specify a set of conditions at which our microscopic model produces an apparent first-order kinetics with simple exponential behavior, as used in previous work [14, 19]. We point out that our kinetic model can be applied to any kind of small mobile pharmaceutical nanocarrier, including liposomes, micelles [21], colloids [22], and nanoparticles [23]. In the second part of our work, we discuss conditions that lead to deviations from simple exponential behavior. First, for the diffusion mechanism, high drug loading tends to increase the transfer rate. The kinetics remains exponential only if donor and acceptor liposomes are chemically similar. Second, the presence of attractive interactions between drug molecules within the liposomes (which can result in the formation of aggregates [24]) is expected to slow down the transfer kinetics. We note that not much molecular detail is presently known about how poorly water-soluble drug molecules inside a lipid bilayer interact. 
However, modeling studies of rigid membrane-embedded inclusions such as transmembrane proteins or peptides suggest a general tendency of the host membrane to mediate attractive interactions between inclusions that may lead to the formation of aggregates [25]. These attractive interactions may be driven by elastic deformations of the host membrane [26], by depletion of the flexible lipid chains from the region in between rigid inclusions [27], and by fluctuations via the Casimir effect [28]. Our analysis for the collision mechanism suggests that aggregate formation can give rise to sigmoidal behavior of the transfer kinetics. Third, drug molecules (even if they are poorly water soluble) do not necessarily reside predominantly in the innermost region of the membrane's hydrocarbon region. For example, some aromatic compounds such as indole are well known for their preference of the membrane's interfacial region between the headgroups and the hydrocarbon chains [29, 30]. Other aromatic compounds such as benzene are distributed throughout the hydrocarbon chain region without a bias toward the polar/apolar interface [30]. Among the reasons for the preferential partitioning of indole are electrostatic interactions, hydrogen bond formation, and the steric shape of the molecule. For lipid monolayers, there is evidence that drug partitioning also depends on the lateral pressure [31]. Generally, whenever a drug molecule interacts more favorably with the interfacial or headgroup region than with the hydrocarbon tail region, the corresponding partitioning preference can be lumped into at least two energetically preferred states that correspond to the inner and outer leaflet of the membrane. Transfer between the two states (i.e., flip-flop) then introduces an additional characteristic time [32]. We note that two- or multiple-state modeling has been invoked previously to model the partitioning of amino-acid analogues in membranes [33] and the permeation of drug molecules through membranes [34, 35].

2. Model

Consider an aqueous solution (of fixed volume V) that contains a number N_D of donor and N_A of acceptor liposomes. Donor and acceptor liposomes may be either two chemically different types of liposomes (i.e., composed of different lipids) or equivalent liposomes (i.e., containing the same lipid composition). In the latter case, the distinction between donor and acceptor liposomes refers only to their initial loading; at the end of the transport process (i.e., at thermal equilibrium), both types would be indistinguishable. The donor liposomes initially carry a total number M of poorly water-soluble drug molecules. Transfer of drug molecules from donor to acceptor liposomes will take place with time t until, eventually, an equilibrium partitioning is reached. We describe the time dependence of this transfer by the number of drug molecules carried by the donor liposomes, M_D(t), and by the acceptor liposomes, M_A(t). It is then initially M_D = M and M_A = 0, as well as in equilibrium M_D = M_D^eq and M_A = M_A^eq, where M_D^eq and M_A^eq denote the equilibrium numbers of drug molecules carried by donor and acceptor liposomes, respectively. We point out that, although we refer to the drug carriers as liposomes, our model is more general. That is, it can be applied to different types of mobile carriers such as micelles, colloids, nanoparticles, or polymeric aggregates, given the carrier possesses some capacity to host poorly water-soluble drug molecules.
In the following, we suggest a model for the time dependence of the transfer process (i.e., for M_D(t) and M_A(t)) that leads to a first-order kinetics, characterized by a simple exponential function. We consider a “single-state model” where there is only a single energetic state available for each drug molecule in a given liposome. The single-state model excludes the presence of intraliposomal kinetics (the extension to a two-state model will be discussed below). We account for two different transport mechanisms: (i) transport through collisions between liposomes and (ii) transport via diffusion of drug molecules through the aqueous phase. Both mechanisms are schematically illustrated in Figure 1. Our transport model of drugs from donor to acceptor liposomes employs the framework of chemical reaction kinetics. We note that due to the generally slow release kinetics of poorly water-soluble drugs, we can treat the aqueous solution as spatially uniform at all times. Hence, no combined diffusion-reaction kinetics [36] needs to be included in our model.

2.1. Transfer through Collisions Only

Our model for the collision-mediated drug transfer between liposomes starts with the detailed distribution of drug molecules among all liposomes. We introduce the number D_m(t) of donor liposomes that carry m drug molecules. An analogous definition is used for the number A_m(t) of acceptor liposomes that carry m drug molecules. The index m is confined to the region 0 <= m <= m_max, where m_max is the maximal number of drug molecules that a liposome can carry. The time-dependent distribution functions D_m(t) and A_m(t) represent a full microscopic knowledge of the kinetics of drug transfer. The total numbers of donor liposomes N_D, acceptor liposomes N_A, drug molecules residing in donor liposomes M_D, and drug molecules residing in acceptor liposomes M_A, can be calculated on the basis of the distribution functions D_m and A_m according to

N_D = Σ_m D_m,  N_A = Σ_m A_m,  M_D = Σ_m m D_m,  M_A = Σ_m m A_m,  (1)

where all sums run from m = 0 to m = m_max. Mathematically, N_D and N_A are the zeroth moments of the distribution functions D_m and A_m, whereas M_D and M_A appear as the corresponding first moments. We assume that N_D and N_A are constant (i.e., independent of time), and so then is the total number of liposomes N = N_D + N_A. This is appropriate if fusion and fission between liposomes can be ignored. Due to our focus on poorly water-soluble drug molecules, it is also justified to assume that the total number of drug molecules carried by all liposomes, M = M_D + M_A, is constant. That is, we neglect the small fraction of drug molecules that reside in the aqueous phase without being bound to a liposome. Figure 2 schematically illustrates a specific exemplification of the system. Collisions require two liposomes to come to close proximity. The magnitude of drug transport between, say, donor liposome populations D_i and D_j is thus proportional to the product D_i D_j / V, where V is the volume of the aqueous solution. The underlying transfer process is thus second order. If a single drug molecule is transferred from a donor that carries initially i drug molecules to a donor that carries initially j drug molecules, the distribution function changes according to D_i → D_i − 1, D_j → D_j − 1, D_{i−1} → D_{i−1} + 1, and D_{j+1} → D_{j+1} + 1. Hence, the numbers D_i and D_j decrease whereas D_{i−1} and D_{j+1} increase. Figure 3 shows an illustration of this scheme for a specific choice of i and j. The transfer rate between the populations D_i and D_j will also depend on the corresponding numbers of drug molecules, i and j. We assume the drug molecules within each liposome form an ideal mixture so that the transfer rate is directly proportional to the number of drug molecules in the donating liposome. In writing a rate equation for donor population D_m, we have to account for all possible ways of collisions between donor liposomes of index m with other liposomes (donors and acceptors) of index i.
These considerations lead us to the rate equation (2) for Ḋ_m, in which the exchange between two colliding liposomes is encoded by the transfer function defined in (3). In (2), K denotes the unit rate of drug transfer through collisions between two chemically equivalent liposomes, and the overdot denotes the time derivative of a physical quantity. The first two lines in (2) account for collisions of donor liposomes with other donor liposomes. The last two lines in (2) account for collisions of donor liposomes with acceptor liposomes. Note that (2) allows for a difference in the chemical nature of donor and acceptor liposomes. This chemical mismatch is accounted for by the integer σ in the last two lines of (2), which expresses the difference in the number of drug molecules between a donor and an acceptor liposome in thermal equilibrium (i.e., for σ = 0 each donor and acceptor liposome will contain the same number of drug molecules in thermal equilibrium). We do not attempt to calculate σ from a microscopic model; yet below we show how σ is related to the change in standard Gibbs free energy for the process of transferring drug molecules from donor to acceptor liposomes. Due to symmetry, we obtain the corresponding rate equation (4) for Ȧ_m from (2) by replacing D_m → A_m, A_m → D_m, and σ → −σ. Equations (2) and (4) constitute a microscopic model for the kinetic behavior of drug transport from donor to acceptor liposomes through the collision mechanism; it can be verified that

Ṅ_D = Ṅ_A = 0,  Ṁ_D + Ṁ_A = 0,  (5)

thus ensuring conservation of the number of donor and acceptor liposomes (N_D and N_A) as well as of the total number of drug molecules (M). To characterize the total numbers M_D and M_A of drug molecules that reside in donor and acceptor liposomes, respectively, we carry out the summations M_D = Σ_m m D_m and M_A = Σ_m m A_m using (2) and (4). The result is the pair of first-order differential equations

Ṁ_D = −(K/V)(N_A M_D − N_D M_A − N_A N_D σ),  Ṁ_A = −Ṁ_D,  (6)

where we have introduced the definition of the apparent rate constant

k_col = K (N_D + N_A)/V = K N/V.  (7)

Initially, all drug molecules are incorporated in the donor liposomes, implying M_D(0) = M and M_A(0) = 0. The solution of (6) is then

M_D(t) = M_D^eq + (M − M_D^eq) exp(−k_col t),  M_A(t) = M − M_D(t).  (8)

Hence, k_col indeed appears as the inverse characteristic time for the transfer process. In contrast to previous models [14], k_col depends only on the total concentration of liposomes, N/V, but not on the concentrations of donor or acceptor liposomes individually. We also mention that (6) and the solution in (8) are valid for any number of donor and acceptor liposomes (i.e., any choice of N_D and N_A). This includes but is not restricted to sink conditions (where N_A >> N_D). Thermodynamic equilibrium corresponds to the long-time limit, t → ∞, at which we have M_D = M_D^eq and M_A = M_A^eq with

M_D^eq = (N_D/N)(M + N_A σ),  M_A^eq = (N_A/N)(M − N_D σ).  (9)

From (9), we obtain the difference between the numbers of drug molecules per donor and acceptor liposome, M_D^eq/N_D − M_A^eq/N_A = σ. This agrees with our interpretation of σ in (2) and (4). We note that for chemically identical donor and acceptor liposomes, it is σ = 0 and all liposomes carry the same number of drug molecules in equilibrium, implying M_D^eq/N_D = M_A^eq/N_A = M/N. The largest possible value of σ is M/N_D, for which we obtain M_D^eq = M and M_A^eq = 0. The smallest possible value of σ is −M/N_A, implying M_D^eq = 0 and M_A^eq = M. Hence, −M/N_A <= σ <= M/N_D. The solution in (8) corresponds to a simple exponential decay of the number of drug molecules in the donor liposomes. This suggests that we can express the transfer kinetics of drug molecules from donor (D) to acceptor (A) liposomes as the chemical reaction scheme

D ⇌ A  (10)

with rate constants k_+ (for the forward direction D → A) and k_− (for the backward direction). The corresponding kinetic behavior is then governed by the equations Ṁ_D = −k_+ M_D + k_− M_A and Ṁ_A = −Ṁ_D, where M_D and M_A are the numbers of drug molecules carried by donor and acceptor liposomes, respectively. With M_D(0) = M and M_A(0) = 0 we obtain

M_D(t) = M_D^eq + (M − M_D^eq) exp[−(k_+ + k_−) t],  with  M_D^eq = k_− M/(k_+ + k_−),  (11)

which has indeed the same structure as (8). Comparison of (8) with (11) reveals k_+ = k_col M_A^eq/M and k_− = k_col M_D^eq/M.
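The following short numerical check (Python; the parameter values are arbitrary and the notation follows the reconstruction above) integrates the rate equation (6) and confirms that it reproduces the exponential solution (8):

import numpy as np
from scipy.integrate import odeint

K_over_V, N_D, N_A, M, sigma = 0.01, 50, 100, 200, 0.0
N = N_D + N_A

def rate(MD, t):
    MA = M - MD                                  # drug conservation, M = M_D + M_A
    return -K_over_V * (N_A * MD - N_D * MA - N_A * N_D * sigma)

t = np.linspace(0.0, 10.0, 50)
numeric = odeint(rate, M, t).ravel()             # all drugs start in the donors
MD_eq = N_D * (M + N_A * sigma) / N              # equilibrium value from (9)
analytic = MD_eq + (M - MD_eq) * np.exp(-K_over_V * N * t)   # solution (8)
print(np.max(np.abs(numeric - analytic)))        # agreement to integrator tolerance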
The equilibrium constant of the reaction in (10) is thus

K_eq = k_+/k_− = M_A^eq/M_D^eq = N_A (M − N_D σ) / [N_D (M + N_A σ)].  (12)

Comparing this with K_eq = exp[−ΔG⁰/(k_B T)] (where k_B is Boltzmann's constant and T is the absolute temperature) allows us to compute the change in standard Gibbs free energy ΔG⁰ for the transfer of a single drug molecule from a donor to an acceptor liposome. The enthalpic and entropic contributions to ΔG⁰ will be influenced by σ, which is, generally, temperature dependent. Let us briefly discuss two cases. First, if donor and acceptor liposomes are chemically identical, then σ = 0 and ΔG⁰ has only an entropic contribution. Specifically, for N_D > N_A, we find ΔG⁰ = k_B T ln(N_D/N_A) > 0 because a given drug molecule has more donor liposomes to reside in than acceptor liposomes. Second, the limiting cases for σ, namely, σ = −M/N_A and σ = M/N_D, yield K_eq → ∞ (thus, ΔG⁰ → −∞, with all drugs migrating to the acceptor liposomes) and K_eq = 0 (thus, ΔG⁰ → +∞, with all drugs remaining in the donor liposomes), respectively. We point out that our model predicts a simple exponential time behavior despite the presence of drug transfer through a second-order two-body collision process (i.e., collisions between two liposomes). Chemical reactions that deplete the reactants through binary collisions generally display a long time-tail in their concentration dependence. For example, the kinetic behavior of the dimerization reaction 2 monomer → dimer follows the equation

ċ = −k c²,  (13)

where c is the concentration of the reactant (i.e., the monomers) and k the rate constant. With an initial concentration c₀ the time behavior becomes c(t) = c₀/(1 + k c₀ t), implying c ≈ 1/(k t) for long times. For our system, however, the numbers of donor and acceptor liposomes remain unchanged. Thus, collisions do not deplete the reactants, and the concentration dependencies of M_D and M_A become exponential in time.

2.2. Transfer through Diffusion Only

Diffusion allows for transfer of drug molecules directly through the aqueous phase, without the need of collisions between liposomes. Denoting the additional state in the aqueous phase by W (in addition to donor (D) and acceptor (A)), the corresponding transport scheme (again, as in (10), formally expressed as a chemical reaction) can be written as [14, 37]

D ⇌ W ⇌ A  (14)

with rate constants k_rel^d, k_upt^d, k_rel^a, and k_upt^a for the drug release (“rel”) and uptake (“upt”) in donor (“d”) and acceptor (“a”) liposomes. To formulate the rate equations, it is useful to first consider the drug distribution function D_m(t). We assume the probability of a drug molecule to leave donor liposomes of index m to be proportional to the total number of drug molecules, m D_m, in that liposome population. Similarly, the probability of a drug molecule to enter donor liposomes of index m is assumed to be proportional to the total number of empty binding sites, (m_max − m) D_m, in that liposome population. Because the uptake is based on collisions of liposomes with drug molecules in the aqueous solution, the rate should also be proportional to the drug concentration M_W/V in the aqueous phase (here, M_W is the total number of drug molecules residing in the aqueous phase). This leads to the following rate equations for D_m:

Ḋ_m = k_rel^d [(m+1) D_{m+1} − m D_m] + k_upt^d (M_W/V) [(m_max − m + 1) D_{m−1} − (m_max − m) D_m]  (15)

(with D_m = 0 for m < 0 or m > m_max). A similar equation can be written for the acceptor liposomes. Based on (15), it can be verified that Ṅ_D = 0, thus ensuring conservation of N_D (and similarly for N_A). Carrying out the summation M_D = Σ_m m D_m using (15) leads to

Ṁ_D = −k_rel^d M_D + k_upt^d (M_W/V)(m_max N_D − M_D).  (16)

This equation simply expresses the proportionality of the release to the total number of bound drug molecules and the proportionality of the uptake to the total number of free binding sites. Consistent with (16) we complete the set of rate equations corresponding to the scheme in (14):

Ṁ_A = −k_rel^a M_A + k_upt^a (M_W/V)(m_max N_A − M_A),  Ṁ_W = −Ṁ_D − Ṁ_A.  (17)

To obtain first-order behavior, we make three assumptions.
The first is a steady-state approximation for the number of drug molecules in the aqueous phase, Ṁ_W ≈ 0. The solubility limit of poorly water-soluble drugs is small so that, effectively, any release of drugs from one liposome is accompanied by an immediate uptake by another (or the same [38]) liposome. The second assumption is weak drug loading of all liposomes; this amounts to M_D << m_max N_D, M_A << m_max N_A, and M_W << M. We finally assume the same rate for the uptake of drug molecules from the aqueous phase into donor and acceptor liposomes, implying k_upt^d = k_upt^a. This is strictly valid only for chemically equivalent donor and acceptor liposomes but should generally be a reasonable approximation. That is, we expect the energy barrier for entering a liposome from the aqueous phase to be small (as compared to the energy barrier for the release from a liposome), irrespective of the liposome's chemical structure. Subject to our three assumptions, (16) and (17) become equivalent to

Ṁ_D = −(1/N)(k_rel^d N_A M_D − k_rel^a N_D M_A),  Ṁ_A = −Ṁ_D.  (18)

Equations (18) are now identical to (6) if we identify k_dif = (k_rel^d N_A + k_rel^a N_D)/N and σ = (k_rel^a − k_rel^d) M/(k_rel^d N_A + k_rel^a N_D), where k_dif appears as the rate constant.

3. Discussion

Both transfer mechanisms, through liposome collisions and via diffusion through the aqueous phase, lead to the same first-order kinetic behavior; see (6) and (18). The rate constant of the combined process is

k = k_col + k_dif.  (19)

Its dependence on the total liposome concentration allows the experimental determination of the transfer mechanism [13]. We note that the first-order behavior predicted by (6) and (18) requires several assumptions to be fulfilled: low liposome loading with drug molecules, rate constants that are strictly proportional to concentrations of drug molecules, and no intraliposomal kinetics with a rate similar to k. In the following, we discuss how the kinetic behavior is predicted to change if any of these assumptions is not fulfilled.

3.1. Extension to High Drug Loading

While high drug loading obviously increases the number of available drug molecules (and thus increases the efficiency of liposomal carriers [39]), it also affects the kinetics of the drug release. Our present model predicts such a dependence for the diffusion mechanism whereas the kinetics for the collision mechanism is not affected. Recall that the transition from (16) and (17) to (18) was based on the approximation of weak drug loading, M_D << m_max N_D, M_A << m_max N_A, and M_W << M. Without that approximation, we obtain instead of (18) a nonlinear set of differential equations

Ṁ_D = [k_rel^a M_A (m_max N_D − M_D) − k_rel^d M_D (m_max N_A − M_A)]/(m_max N − M),  Ṁ_A = −Ṁ_D.  (20)

For the special case that donor and acceptor liposomes are chemically similar, k_rel^d = k_rel^a, we obtain a simple exponential behavior

M_D(t) = M_D^eq + (M − M_D^eq) exp[−k_dif t/(1 − M/(m_max N))].  (21)

Here, high drug loading simply increases the rate constant for the diffusion mechanism by the factor 1/[1 − M/(m_max N)]. In the general case k_rel^d ≠ k_rel^a, and no simple exponential decay is predicted for high loading of the liposomes with drug molecules. Figure 4 shows a numerical example, based on (20), for chemically different donor and acceptor liposomes. In the weak-loading regime (broken lines in Figure 4) we observe the simple exponential behavior according to (18), with the corresponding equilibrium values of M_D and M_A. When instead the initial loading of the donor liposomes is maximal, this leads to both a faster decay and a shift in the equilibrium distribution.
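To illustrate the loading dependence, the sketch below integrates the nonlinear diffusion equations in the reconstructed form of (20) for weak versus maximal initial loading of the donor liposomes; all parameter values are arbitrary:

import numpy as np
from scipy.integrate import odeint

k_rel = 1.0                      # chemically similar donors and acceptors
N_D = N_A = 100
m_max = 50                       # maximal drug load per liposome

def rate(MD, t, M):
    MA = M - MD
    free = m_max * (N_D + N_A) - M                 # total number of free binding sites
    return (k_rel * MA * (m_max * N_D - MD)
            - k_rel * MD * (m_max * N_A - MA)) / free

t = np.linspace(0.0, 5.0, 6)
for M in (0.01 * m_max * N_D, 1.0 * m_max * N_D):  # weak vs. maximal donor loading
    MD = odeint(rate, M, t, args=(M,)).ravel()
    deviation = (MD - M / 2) / (M / 2)             # distance from equal partitioning
    print(M, deviation[1])                         # decays roughly twice as fast at maximal loading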
The equilibrium is shifted toward a more uniform distribution of drug molecules between donor and acceptor liposomes (in agreement with Figure 4).

3.2. Sigmoidal Behavior

Our model presented so far is unable to predict sigmoidal behavior. That is, no inflection point can be observed in and . Behind this prediction is our assumption that the transfer rates are strictly proportional to the concentration difference of the drug molecules. For the collision mechanism, this is expressed by our definition of the function in (3). However, if drug molecules within a given liposome interact with each other, the simple relation will no longer be valid. More specifically, attractive interactions between drug molecules within liposomes will increase the energy barrier to remove a drug molecule. This becomes relevant at high drug loading. Hence, in the presence of attractive interactions, it will be less likely that a drug molecule is transferred from a highly loaded donor liposome to an empty acceptor liposome. We discuss the consequences of attractive interactions for the collision mechanism, which is described by (2) and (4). To account for the decrease in the rate constant at high loading, we replace (3) by (22). Clearly, for weak loading ( and ), the original first-order model leading to the exponential behavior in (8) is recovered. For large loading of either donor or acceptor liposomes, the transfer rate becomes small. We note that using (22) does not lead to a set of differential equations in terms of only and . Here, we do not attempt to provide an analytical solution to the problem. Instead, we illustrate its predictions by numerically solving (2) and (4) with given in (22). Figure 5 shows the behavior of and as a function of (with ), derived for . For simplicity, we have set which results in an equipartitioning of drug molecules between donor and acceptor liposomes (). We start with liposomes. The acceptor liposomes are initially empty, whereas each donor liposome initially contains drug molecules (out of a maximal number ). Different curves in Figure 5 correspond to (a), (b), (c), (d), and (e). As long as the drug loading is weak (curves (a) and (b)), the solution is simply exponential, characterized by (see (8) with ). Here, the kinetics is independent of the total number of drug molecules (which is why curves (a) and (b) virtually overlap). If the initial loading of the donor liposomes becomes larger (curve (c)), the kinetics slows down. Eventually, once the initial loading approaches its maximal value , the behavior slows down even more and, in addition, becomes sigmoidal. Attractive drug-drug interactions slow down the release from initially highly loaded donor liposomes; at later times (when the donor liposomes are no longer highly loaded), the release becomes faster. This leads to sigmoidal behavior.

3.3. Extension to a Two-State Model

In the final part of this work, we briefly discuss an extension of our model to account for two distinct states of the drug molecule inside each liposome. A simple rationale for the presence of two distinct states is provided by the bilayer structure of the liposomes. That is, a drug molecule may preferentially be bound to either the inner or outer monolayer, having to flip-flop in order to change the host monolayer. The typical flip-flop time can be large if the drug has some amphiphilicity or surface activity instead of being strongly lipophilic [40].
Drug molecules residing in the inner monolayer cannot be transported directly to another liposome; they first have to migrate to the outer monolayer. We denote by and the number of drug molecules residing in the inner () and outer () leaflets of donor liposomes, respectively. Similarly, and refer to the number of drug molecules residing in the inner () and outer () leaflets of acceptor liposomes. The reaction scheme in (10) can then be generalized to account for the interleaflet transport in donor and acceptor liposomes. Here, and are the two rate constants corresponding to the transfer of drugs between the two leaflets of the donor liposomes (and similarly for and referring to the acceptor liposomes). The rate constants and are identical to those for the single-state model, where is given in (19). Based on (23), the rate equations can be written down. In the limit of a symmetric lipid bilayer, the two rate constants for flip-flop of a drug molecule from the inner to the outer leaflet and from the outer to the inner leaflet are identical (we note that the two leaflets of a liposomal bilayer are not strictly equivalent, which, in a more refined model, would entail two different rate constants for flip-flop; this dependence on liposome curvature is neglected here). If we assume furthermore that donor and acceptor liposomes are chemically similar, we may write as well as . In this case, the rate equations depend on only two parameters, the rate constants and . If we assume all drug molecules initially reside in the donor liposomes, the initial conditions are , and , where is the total number of drug molecules in the system. The solution of (25) can be expressed as a combination of exponential decays with corresponding effective rate constants and . Such biexponential behavior has been observed for the spontaneous transfer of certain lipids between phosphatidylcholine vesicles [41] and also for the release behavior of an imidazole derivative from liposomes [42]. The effective rate constants and can be calculated from and . Hence, a measurement of and could be used to obtain the two model parameters ( and ). Figure 6 displays a plot of , , , , , , calculated for and . All drug molecules are initially distributed equally among the two leaflets of the donor liposomes. Release of drug molecules from the outer leaflet of the donor liposomes is fast (); the slow process is the flip-flop of drug molecules between the two leaflets of the liposomes. Hence, at intermediate times, say at , the outer leaflets have almost reached their equilibrium values, whereas the inner leaflets still remain fairly close to their initial values. After reaching thermal equilibrium (), half of all drug molecules have migrated to the acceptor liposomes. Clearly, the presence of the two different rate constants ( and ) leads to the biexponential behavior of and in Figure 6. We briefly discuss two limiting cases for (26). First, for the flip-flop of drug molecules between the inner and outer leaflets is infinitely slow, implying , , . In this case, we recover the kinetics of (8), yet with only (instead of ) drug molecules participating in the transfer and identical donor and acceptor liposomes (). Second, for flip-flop becomes infinitely fast and (26) read . Because 50% of the drug molecules reside in the inner leaflets, they do not contribute to the outer-leaflet concentration differences that drive the transfer kinetics. Hence, the apparent rate constant is reduced from to .
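The biexponential structure is easy to reproduce with a minimal linear four-state system. The sketch below is not the paper's equations (25)-(27) but a toy rate matrix consistent with the description above (symmetric bilayer, chemically similar donor and acceptor liposomes); the rates and the state labels D_in, D_out, A_out, A_in are illustrative placeholders.

import numpy as np

# k couples the outer leaflets of donor and acceptor liposomes;
# k_flip couples the two leaflets of the same liposome.
k, k_flip = 1.0, 0.1

# States: [D_in, D_out, A_out, A_in]; dy/dt = M @ y
M = np.array([
    [-k_flip,       k_flip,        0.0,           0.0],
    [ k_flip, -(k_flip + k),       k,             0.0],
    [ 0.0,          k,       -(k_flip + k),       k_flip],
    [ 0.0,          0.0,           k_flip,       -k_flip],
])

# All drug initially in the donor, split equally between its two leaflets.
y0 = np.array([0.5, 0.5, 0.0, 0.0])

w, V = np.linalg.eig(M)
amps = np.linalg.solve(V, y0)
for ti in (0.0, 1.0, 5.0, 20.0, 60.0):
    y = ((V * np.exp(w * ti)) @ amps).real
    print(f"t = {ti:5.1f}   M_d = {y[0] + y[1]:.3f}   M_a = {y[2] + y[3]:.3f}")

# Decay rates of the modes that are actually excited by this initial
# condition give the two effective rate constants of the biexponential.
active = [-lam.real for lam, a in zip(w, amps)
          if abs(a) > 1e-9 and abs(lam) > 1e-12]
print("effective rate constants:", sorted(active))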
4. Conclusions

In this work, we have presented a detailed model for the transfer kinetics of poorly water-soluble drug molecules between liposomal carrier systems. Apart from liposomes, the scope of the model includes other types of small and mobile pharmaceutical nanocarriers, such as micelles, colloids, and nanoparticles. Starting from a microscopic distribution function of drug molecules among donor and acceptor liposomes, we have specified the conditions that lead to an apparent first-order kinetic behavior. These include low drug loading of the liposomes, strict proportionality of all rate constants to drug concentrations, no aggregation phenomena of drugs within liposomes, and no overlap of the intraliposomal flip-flop kinetics. Systems that do not fulfill these conditions do not, generally, exhibit an apparent first-order kinetics. Instead, the behavior may become biexponential or sigmoidal. High drug loading may preserve the first-order kinetics but with an increased apparent rate constant. An optimal drug delivery system should keep the drug load on the way to the target and release it only after arrival at the target. Understanding the kinetics and mechanisms of drug release from liposomal (and other) nanocarriers is thus a prerequisite to systematically improving drug delivery systems.

The authors thank Drs. Alexander Wagner, Martin Holzer, and Rolf Schubert for illuminating discussions. S. May acknowledges support from NIH through Grant GM077184.

References
1. C. A. Lipinski, "Drug-like properties and the causes of poor solubility and poor permeability," Journal of Pharmacological and Toxicological Methods, vol. 44, no. 1, pp. 235–249, 2000.
2. A. Fahr, P. Van Hoogevest, J. Kuntsche, and M. L. S. Leigh, "Lipophilic drug transfer between liposomal and biological membranes: what does it mean for parenteral and oral drug delivery?" Journal of Liposome Research, vol. 16, no. 3, pp. 281–301, 2006.
3. A. Fahr and X. Liu, "Drug delivery strategies for poorly water-soluble drugs," Expert Opinion on Drug Delivery, vol. 4, no. 4, pp. 403–416, 2007.
4. G. Poste and D. Papahadjopoulos, "Lipid vesicles as carriers for introducing materials into cultured cells—influence of vesicle lipid composition on mechanism(s) of vesicle incorporation into cells," Proceedings of the National Academy of Sciences of the United States of America, vol. 73, no. 5, pp. 1603–1607, 1976.
5. A. Sharma and U. S. Sharma, "Liposomes in drug delivery: progress and limitations," International Journal of Pharmaceutics, vol. 154, no. 2, pp. 123–140, 1997.
6. M. L. Immordino, F. Dosio, and L. Cattel, "Stealth liposomes: review of the basic science, rationale, and clinical applications, existing and potential," International Journal of Nanomedicine, vol. 1, no. 3, pp. 297–315, 2006.
7. P. Sapra and T. M. Allen, "Ligand-targeted liposomal anticancer drugs," Progress in Lipid Research, vol. 42, no. 5, pp. 439–462, 2003.
8. R. R. Sawant and V. P. Torchilin, "Liposomes as 'smart' pharmaceutical nanocarriers," Soft Matter, vol. 6, no. 17, pp. 4026–4044, 2010.
9. A. Rogerson, J. Cummings, and A. T. Florence, "Adriamycin-loaded niosomes: drug entrapment, stability and release," Journal of Microencapsulation, vol. 4, no. 4, pp. 321–328, 1987.
10. R. Margalit, R. Alon, M. Linenberg, I. Rubin, T. J. Roseman, and R. W. Wood, "Liposomal drug delivery—thermodynamic and chemical kinetic considerations," Journal of Controlled Release, vol. 17, no. 3, pp. 285–296, 1991.
11. P. Saarinen-Savolainen, T. Järvinen, H. Taipale, and A. Urtti, "Method for evaluating drug release from liposomes in sink conditions," International Journal of Pharmaceutics, vol. 159, no. 1, pp. 27–33, 1997.
12. A. R. Mohammed, N. Weston, A. G. A. Coombes, M. Fitzgerald, and Y. Perrie, "Liposome formulation of poorly water soluble drugs: optimisation of drug loading and ESEM analysis of stability," International Journal of Pharmaceutics, vol. 285, no. 1-2, pp. 23–34, 2004.
13. H. Hefesha, S. Loew, X. Liu, S. May, and A. Fahr, "Transfer mechanism of temoporfin between liposomal membranes," Journal of Controlled Release, vol. 150, no. 3, pp. 279–286, 2011.
14. J. D. Jones and T. E. Thompson, "Spontaneous phosphatidylcholine transfer by collision between vesicles at high lipid concentration," Biochemistry, vol. 28, no. 1, pp. 129–134, 1989.
15. T. L. Steck, F. J. Kezdy, and Y. Lange, "An activation-collision mechanism for cholesterol transfer between membranes," Journal of Biological Chemistry, vol. 263, no. 26, pp. 13023–13031, 1988.
16. M. G. Wootan, "Mechanism of fluorescent fatty acid transfer from adipocyte fatty acid binding protein to membranes," Biochemistry, vol. 32, no. 33, pp. 8622–8627, 1993.
17. L. R. McLean and M. C. Phillips, "Mechanism of cholesterol and phosphatidylcholine exchange or transfer between unilamellar vesicles," Biochemistry, vol. 20, no. 10, pp. 2893–2900, 1981.
18. E. Yang and W. H. Huestis, "Mechanism of intermembrane phosphatidylcholine transfer—effects of pH and membrane configuration," Biochemistry, vol. 32, no. 45, pp. 12218–12228, 1993.
19. D. E. Decker, S. M. Vroegop, T. G. Goodman, T. Peterson, and S. E. Buxser, "Kinetics and thermodynamics of emulsion delivery of lipophilic antioxidants to cells in culture," Chemistry and Physics of Lipids, vol. 76, no. 1, pp. 7–25, 1995.
20. S. Sasnouski, D. Kachatkou, V. Zorin, F. Guillemin, and L. Bezdetnaya, "Redistribution of Foscan (R) from plasma proteins to model membranes," Photochemical and Photobiological Sciences, vol. 5, no. 8, pp. 770–777, 2006.
21. V. P. Torchilin, "Micellar nanocarriers: pharmaceutical perspectives," Pharmaceutical Research, vol. 24, no. 1, pp. 1–16, 2007.
22. B. Mishra, B. B. Patel, and S. Tiwari, "Colloidal nanocarriers: a review on formulation technology, types and applications toward targeted drug delivery," Nanomedicine: Nanotechnology, Biology, and Medicine, vol. 6, no. 1, pp. e9–e24, 2010.
23. Z. Cai, Y. Wang, L. J. Zhu, and Z. Q. Liu, "Nanocarriers: a general strategy for enhancement of oral bioavailability of poorly absorbed or pre-systemically metabolized drugs," Current Drug Metabolism, vol. 11, no. 2, pp. 197–207, 2010.
24. F. Ricchelli, S. Gobbo, G. Moreno, C. Salet, L. Brancaleon, and A. Mazzini, "Photophysical properties of porphyrin planar aggregates in liposomes," European Journal of Biochemistry, vol. 253, no. 3, pp. 760–765, 1998.
25. B. West, F. L. H. Brown, and F. Schmid, "Membrane-protein interactions in a generic coarse-grained model for lipid bilayers," Biophysical Journal, vol. 96, no. 1, pp. 101–115, 2009.
26. K. S. Kim, J. C. Neu, and G. F. Oster, "Many-body forces between membrane inclusions: a new pattern-formation mechanism," Europhysics Letters, vol. 48, no. 1, pp. 99–105, 1999.
27. K. Bohinc, V. Kralj-Iglič, and S. May, "Interaction between two cylindrical inclusions in a symmetric lipid bilayer," Journal of Chemical Physics, vol. 119, no. 14, pp. 7435–7444, 2003.
28. M. Goulian, R. Bruinsma, and P. Pincus, "Long-range forces in heterogeneous fluid membranes," Europhysics Letters, vol. 22, pp. 145–150, 1993.
29. F. N. R. Petersen, M. Ø. Jensen, and C. H. Nielsen, "Interfacial tryptophan residues: a role for the cation-π effect?" Biophysical Journal, vol. 89, no. 6, pp. 3985–3996, 2005.
30. K. E. Norman and H. Nymeyer, "Indole localization in lipid membranes revealed by molecular simulation," Biophysical Journal, vol. 91, no. 6, pp. 2046–2054, 2006.
31. A. Doisy, J. E. Proust, Tz. Ivanova, I. Panaiotov, and J. L. Dubois, "Phospholipid/drug interactions in liposomes studied by rheological properties of monolayers," Langmuir, vol. 12, no. 25, pp. 6098–6103, 1996.
32. C. Bombelli, G. Caracciolo, P. Di Profio, et al., "Inclusion of a photosensitizer in liposomes formed by DMPC/gemini surfactant: correlation between physicochemical and biological features of the complexes," Journal of Medicinal Chemistry, vol. 48, no. 15, pp. 4882–4891, 2005.
33. D. Sengupta, J. C. Smith, and G. M. Ullmann, "Partitioning of amino-acid analogues in a five-slab membrane model," Biochimica et Biophysica Acta, vol. 1778, no. 10, pp. 2234–2243, 2008.
34. G. Camenisch, G. Folkers, and H. Van De Waterbeemd, "Shapes of membrane permeability-lipophilicity curves: extension of theoretical models with an aqueous pore pathway," European Journal of Pharmaceutical Sciences, vol. 6, no. 4, pp. 321–329, 1998.
35. S. Balaz, "Lipophilicity in trans-bilayer transport and subcellular pharmacokinetics," Perspectives in Drug Discovery and Design, vol. 19, no. 1, pp. 157–177, 2000.
36. M. Grassi, G. Grassi, R. Lapasin, and I. Colombo, Understanding Drug Release and Absorption Mechanisms: A Physical and Mathematical Approach, CRC Press, 2006.
37. L. Thilo, "Kinetics of phospholipid exchange between bilayer membranes," Biochimica et Biophysica Acta, vol. 469, no. 3, pp. 326–334, 1977.
38. P. F. F. Almeida, "Lipid transfer between vesicles: effect of high vesicle concentration," Biophysical Journal, vol. 76, no. 4, pp. 1922–1928, 1999.
39. N. Liu and H. J. Park, "Factors effect on the loading efficiency of vitamin C loaded chitosan-coated nanoliposomes," Colloids and Surfaces B: Biointerfaces, vol. 76, no. 1, pp. 16–19, 2010.
40. S. Schreier, S. V. P. Malheiros, and E. De Paula, "Surface active drugs: self-association and interaction with membranes and surfactants. Physicochemical and biological aspects," Biochimica et Biophysica Acta, vol. 1508, no. 1-2, pp. 210–234, 2000.
41. J. D. Jones, P. F. Almeida, and T. E. Thompson, "Spontaneous interbilayer transfer of hexosylceramides between phospholipid bilayers," Biochemistry, vol. 29, no. 16, pp. 3892–3897, 1990.
42. J. Liu, H. Lee, M. Huesca, A. Young, and C. Allen, "Liposome formulation of a novel hydrophobic aryl-imidazole compound for anti-cancer therapy," Cancer Chemotherapy and Pharmacology, vol. 58, no. 3, pp. 306–318, 2006.
{"url":"http://www.hindawi.com/journals/jdd/2011/376548/","timestamp":"2014-04-17T07:12:20Z","content_type":null,"content_length":"512839","record_id":"<urn:uuid:e8a8144e-3ffa-4e90-9438-dbadb94a75a5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
Error Bounds for Computed Eigenvalues.

For the Hermitian eigenproblem, the residual norm bounds the (forward) error in the computed eigenvalue: if $\hat{x}$ is a computed unit-norm eigenvector with computed eigenvalue $\hat{\lambda}$ and residual $r = A\hat{x} - \hat{\lambda}\hat{x}$, then $|\hat{\lambda} - \lambda| \le \|r\|_2$ (4.54) for some eigenvalue $\lambda$ of $A$. With more information, a better error bound can be obtained. Let us assume that $\mathrm{gap}$ denotes the gap between $\hat{\lambda}$ and the eigenvalues other than $\lambda$; then $|\hat{\lambda} - \lambda| \le \|r\|_2^2 / \mathrm{gap}$ (4.55). This improves (4.54) if the gap exceeds $\|r\|_2$. Note that (4.55) needs information on the gap, which can be estimated as in (4.56) below.
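The first bound is straightforward to check numerically. In the sketch below (plain NumPy with a random symmetric test matrix), the Rayleigh quotient of an arbitrary unit vector always lies within the residual norm of some true eigenvalue:

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                      # symmetric (Hermitian) test matrix
true_eigs = np.linalg.eigvalsh(A)

x = rng.standard_normal(6)
x /= np.linalg.norm(x)                 # crude unit-norm approximate eigenvector
lam = x @ A @ x                        # Rayleigh quotient as eigenvalue estimate
r = A @ x - lam * x                    # residual

err = np.min(np.abs(true_eigs - lam))  # distance to the nearest true eigenvalue
print(f"residual norm = {np.linalg.norm(r):.4f}")
print(f"actual error  = {err:.4f}")
assert err <= np.linalg.norm(r) + 1e-12   # the bound (4.54) always holds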
{"url":"http://web.eecs.utk.edu/~dongarra/etemplates/node151.html","timestamp":"2014-04-16T07:30:57Z","content_type":null,"content_length":"8787","record_id":"<urn:uuid:a48af041-d4af-4d5e-a4bc-538da9be047c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Tetrahedra passing through a hole

Assume a plane $P\subset\mathbb R^3$ has a hole $H$, and that the hole is topologically a compact disc. Being so, $P\setminus H$ does not separate the space. A regular tetrahedron $\sigma^3$ (of edge-length 1, say) wants to pass through the hole. As far as I know, there are some papers about this.
1. H. Maehara, N. Tokushige, A regular tetrahedron passes through a hole smaller than its face, preprint.
2. J. Itoh, Y. Tanoue, T. Zamfirescu, Tetrahedra passing through a circular or square hole, Rendiconti del Circolo Matematico di Palermo, Suppl. 77 (2006), 349–354.
3. J. Itoh, T. Zamfirescu, Simplices passing through a hole, J. of Geometry, 83 (2005), 65–70.
Paper 1 shows that the minimum side-length of holes in the shape of a regular triangle is $\frac{1+\sqrt2}{\sqrt6}\approx0.985599$. Paper 2 shows that the minimum diameter of circular holes is $\frac{t^2-t+1}{\sqrt{\frac{3}{4}t^2-t+1}}\approx0.8957$ $\left(3t=2+\sqrt[3]{\sqrt{43}-4}-\sqrt[3]{\sqrt{43}+4}\right)$ and that the minimum diagonal-length of holes in the shape of a square is 1. Paper 3 shows that there exists a convex hole $H\subset P$ of diameter $\frac{\sqrt3}{2}$ and width $\frac{\sqrt2}{2}$ such that the regular tetrahedron $\sigma^3$ moving in $\mathbb R^3$ can pass through $H$. In the first paragraph of the proof, they say "Take a square $Q\subset P$ of edge-length $\frac12$, with vertices $q_{\pm,+}=\left(\pm\frac14, \frac12\right)$ and $q_{\pm,-}=\left(\pm\frac14, 0\right)$. Denote the point $\left(0, \frac{\sqrt{11}}{4}\right)\in P$ by $v$. Take a disc $D$ of center $\left(0, \frac{\sqrt2}{4}\right)$ and radius $\frac{\sqrt2}{4}$. Define the hole $H$ as the convex hull of $D\cup T$, where $T$ is the triangle $vq_
Then, here is my question.
Question: What is the shape of the holes which have the minimum area?
As far as I know, this question still remains unsolved. I have tried to solve this question, but I don't have any good idea. I suspect paper 3 would be a key. I need your help.
convex-polytopes euclidean-geometry polyhedra
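The closed-form minimum diameter quoted from paper 2 can be checked numerically in a few lines:

from math import sqrt

# Evaluate the minimum circular-hole diameter from paper 2.
t = (2 + (sqrt(43) - 4) ** (1 / 3) - (sqrt(43) + 4) ** (1 / 3)) / 3
diameter = (t**2 - t + 1) / sqrt(0.75 * t**2 - t + 1)
print(diameter)  # prints 0.8957..., matching the stated value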
{"url":"http://mathoverflow.net/questions/138752/tetrahedra-passing-through-a-hole","timestamp":"2014-04-16T22:23:29Z","content_type":null,"content_length":"46534","record_id":"<urn:uuid:4bdcd707-04e3-45f5-b2c5-895d529a2d3e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Pursuing MSCS

I'm changing careers, will be obtaining an MSCS degree, probably taking a 3 1/2 to 4 year track. I have limited programming experience but I enjoyed FORTRAN at Auburn back in . . . 1983? Anyway, I've been dabbling off & on, here and there, with Java & Python & whatever language is used with Arduinos (?) for the past ~8 years - all self-taught. As I'm in my mid-50's, this will probably be the last time I'll have an opportunity for an epic career change so I'm taking this seriously. I began programming again last Thursday and wrote two simple scripts (?) yesterday:
• Hourly vs. Annual Salary
• Guess the Number
• "Colorado Foliage" (posted below)
• . . . misc other stuff . . .
I realize my methods aren't organized well, efficient, and so forth but I'm in "judo solution" mode - just making an effort to get it done, thinking I'll "fix it" later (ha). Having said that, any comments are most welcome - thank you!

Code: Select all
import random
import turtle as t

startX = 0
startY = 0
leaf_angle = 30
leaf_size = 90
size_incr = 45

t.pen(fillcolor="#008844", pencolor="green", pensize=2)

def oh_yes():
    leaf_angle = 340
    leaf_size = 100
    angle1 = 340
    angle2 = angle1 - 180
    for i in range(5):
        t.goto(startX, startY)
    for i in range(5):
        t.goto(startX, startY)

for i in range(33):
    startX = random.randrange(-250,250)
    startY = random.randrange(-250,250)

Re: Pursuing MSCS

That's pretty good actually, but here are some things I would change. You are defining some things you overwrite before you ever use them. Why define things you don't need:

Code: Select all
startX = 0
startY = 0
leaf_angle = 30
leaf_size = 90
size_incr = 45

leaf_angle in the function is also never used. These two loops are almost the same:

Code: Select all
for i in range(5):
    t.goto(startX, startY)
for i in range(5):
    t.goto(startX, startY)

I would just make the loop logic a function, and pass the different parts as parameters (just the 38 and 218 are different, as far as I can tell). Maybe even just pass 'left' or 'right', and select an angle based on that. And maybe get the stem (is that the name for it?) drawing to a separate function as well, but that's a matter of personal preference. This career change seems like a really big step. Hope you won't give up.

Re: Pursuing MSCS

Wow, thanks very much stranac!

This career change seems like a really big step. Hope you won't give up.

Me too! Here I am, 55 years old (as of next week) and changing careers. The MS Computer Science degree I'm going to pursue is at a university where I'm already teaching part time (I have an MS in Post-Secondary Education). The CS dept. is probably going to require me to take five "transition" classes before taking the twelve courses for the MSCS degree - I am very much looking forward to these opportunities to build a reasonable foundation, preparing for the post-graduate level courses. I'm planning on starting in January 2014 and taking four years to complete (51 hours total) while still working full time. That would toss me into the real world with an MSCS degree in December of 2017, if all goes to plan of course.
At 59 years old, my career plan would involve professional practice for at least four or five years, then returning to the teaching profession, teaching CS courses and continuing with that until the day I die. I'm excited while simultaneously a bit apprehensive about this. My wife tells me I can do it . . . we shall see. So, here goes - thanks again for the comments & suggestions, very much appreciated.

Re: Pursuing MSCS

So I really liked your script, and wanted to see if I could make it a bit better structured. The first thing I noticed after taking a better look were globals: you use them all over the place. Using globals is really not a good practice - it makes code harder to read and maintain. If you need a lot of shared state across multiple functions, it's usually a good idea to use classes. It also looked to me like you had plans to implement some rotation, different leaf sizes and similar, so I did that as well. As for the math involved, I didn't change much. Here's how my code ended up looking (if you want to leave this as an exercise for yourself, I won't mind if you don't read it). It could probably be made better, but I'm pretty satisfied with it.

Code: Select all
import random
import turtle

class Leaf(object):
    def __init__(self, startX, startY, leaf_size, rotation):
        self.startX = startX
        self.startY = startY
        self.leaf_size = leaf_size
        self.rotation = rotation

    def draw(self):
        # draw the leaf halves, then the stem
        self.draw_leaves()
        self.draw_stem()

    def draw_leaves(self):
        angle = 302 + self.rotation
        size_delta = 25
        for i in range(7):
            if i > 3:  # start making the leafs smaller
                size_delta = -25
            angle += 32
            self.leaf_size += size_delta
            turtle.goto(self.startX, self.startY)
            turtle.circle(self.leaf_size, 40)
            turtle.setheading(angle - 180)
            turtle.circle(self.leaf_size, 40)

    def draw_stem(self):
        turtle.setheading(275 + self.rotation)
        turtle.forward(self.leaf_size / 2)
        turtle.setheading(180 + self.rotation)
        turtle.forward(self.leaf_size / 10)
        turtle.goto(self.startX, self.startY)
        turtle.setheading(90 + self.rotation)

turtle.pen(fillcolor="#008844", pencolor="green", pensize=2)

for i in range(33):
    # I prefer to use randint when I don't need randrange's step parameter
    startX = random.randint(-250, 250)
    startY = random.randint(-250, 250)
    leaf_size = random.randint(10, 200)
    rotation = random.randint(0, 360)
    Leaf(startX, startY, leaf_size, rotation).draw()

Re: Pursuing MSCS

stranac wrote: So I really liked your script, and wanted to see if I could make it a bit better structured.

I guess I'm making the obvious "newbie mistakes." Thank you for the suggestions.

stranac wrote: It could probably be made better, but I'm pretty satisfied with it.

The random rotations and sizing are great but I think it would be better if the missing two small bottom parts, next to the stem, were included. Where did they go?

Re: Pursuing MSCS

Guess they went for a walk... But now they're back.
Code: Select all
import random
import turtle

class Leaf(object):
    def __init__(self, startX, startY, leaf_size, rotation):
        self.startX = startX
        self.startY = startY
        self.leaf_size = leaf_size
        self.rotation = rotation

    def draw(self):
        # draw the leaf halves, then the stem
        self.draw_leaves()
        self.draw_stem()

    def draw_leaves(self):
        angle = 302 + self.rotation
        size_delta = 25
        for i in range(9):
            turtle.goto(self.startX, self.startY)
            turtle.circle(self.leaf_size, 40)
            turtle.setheading(angle - 180)
            turtle.circle(self.leaf_size, 40)
            if i > 3:  # start making the leafs smaller
                size_delta = -25
            angle += 32
            self.leaf_size += size_delta

    def draw_stem(self):
        turtle.setheading(275 + self.rotation)
        turtle.forward(self.leaf_size / 2)
        turtle.setheading(180 + self.rotation)
        turtle.forward(self.leaf_size / 10)
        turtle.goto(self.startX, self.startY)
        turtle.setheading(90 + self.rotation)

turtle.pen(fillcolor="#008844", pencolor="green", pensize=2)

for i in range(33):
    # I prefer to use randint when I don't need randrange's step parameter
    startX = random.randint(-250, 250)
    startY = random.randint(-250, 250)
    leaf_size = random.randint(10, 200)
    rotation = random.randint(0, 360)
    Leaf(startX, startY, leaf_size, rotation).draw()

Re: Pursuing MSCS

Thanks for fixing it. Here's another script (?) I created yesterday - a very simple guessing game. Based on your comments earlier, I can see now that I should make some changes - but hey, it worked, and that was my goal.

Code: Select all
import random

auto_pick = random.randrange(1,101,1)
user_pick = auto_pick + 1
while user_pick != auto_pick:
    user_pick = int(input("Enter a value from 1 to 100 "))
    if user_pick == auto_pick:
        print ("You win!")
    if user_pick < auto_pick:
        print ("too low")
    if user_pick > auto_pick:
        print ("too high")

Re: Pursuing MSCS

This looks ok. The only thing I would really do different is change the
Code: Select all
Code: Select all
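A refactor in the spirit of stranac's remark (one possibility only; his actual snippets are not shown above) replaces the dummy user_pick = auto_pick + 1 and the separate if tests with a while True loop:

import random

auto_pick = random.randrange(1, 101)
while True:
    user_pick = int(input("Enter a value from 1 to 100 "))
    if user_pick == auto_pick:
        print("You win!")
        break          # no sentinel value needed; leave the loop on a win
    elif user_pick < auto_pick:
        print("too low")
    else:
        print("too high")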
{"url":"http://www.python-forum.org/viewtopic.php?p=10111","timestamp":"2014-04-16T16:05:43Z","content_type":null,"content_length":"46897","record_id":"<urn:uuid:3ee335a5-a738-435c-ac17-da5e282379b7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
What is this equation called? Can it work horizontally as well?

In my most recent math class one of the equations we had to know was -16t^2+Tv[0]+Th[0]

1. This is NOT an equation. An equation always has the = symbol between two expressions.
2. Your formula is incorrect. It should be h(t) = -16t^2 + v_0*t + h_0. There is no time factor with the h_0 term, and you shouldn't write both t and T for the same thing. The letter t is usually used to indicate time. This function gives the height at time t of an object thrown with an initial velocity of v_0 from an initial height of h_0.

I may have put the little letters out of order, but that is the basic formula. It has to do with figuring how long it will take something to land, the h_0 being the point at which the projectile was fired/thrown from. I was playing Angry Birds, and thought it might work sideways as well as vertically? Basically I guess I'm asking: if this isn't a horizontal equation, what would it be?

The function above is strictly the vertical position at a given time. The -16t^2 term is the clue there. To find the position of a ball thrown horizontally or at some angle from the horizontal, you have to break up the velocity vector into its vertical and horizontal components.
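For the horizontal question, a small numerical sketch (feet and seconds, matching the -16t^2 convention) shows how splitting the launch velocity into components gives both coordinates; the launch values below are made-up examples:

from math import cos, sin, radians, sqrt

# Made-up launch parameters: speed (ft/s), angle above horizontal, height (ft)
v0, angle, h0 = 60.0, 40.0, 5.0
vx = v0 * cos(radians(angle))   # horizontal component: constant in flight
vy = v0 * sin(radians(angle))   # vertical component: fights gravity

# Vertical position: y(t) = -16 t^2 + vy t + h0; horizontal: x(t) = vx t.
# Landing time is the positive root of y(t) = 0 (quadratic formula).
t_land = (vy + sqrt(vy**2 + 64.0 * h0)) / 32.0
print(f"lands after {t_land:.2f} s, {vx * t_land:.1f} ft downrange")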
{"url":"http://www.physicsforums.com/showthread.php?s=0027c84b9f55a2811d116184df002ab4&p=4613523","timestamp":"2014-04-20T11:23:56Z","content_type":null,"content_length":"30209","record_id":"<urn:uuid:9cc0131b-774f-4c1a-bde1-d1d6cd7d6482>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Program for Simulation of Three-Dimensional Variable-Density Ground-Water TWRI Book 6, Chapter A7 You can DOWNLOAD THIS REPORT (1,608 KB) in Portable Document Format (PDF) The Adobe PDF Reader program is available for free from Adobe. SEAWAT home page - download source code and examples Guo, Weixing, and Langevin, C.D., 2002, User's Guide to SEAWAT: A Computer Program for Simulation of Three-Dimensional Variable-Density Ground-Water Flow: Techniques of Water-Resources Investigations Book 6, Chapter A7, 77 p. (Supersedes OFR 01-434.) ABSTRACT: The SEAWAT program was developed to simulate three-dimensional, variable-density, transient ground-water flow in porous media. The source code for SEAWAT was developed by combining MODFLOW and MT3DMS into a single program that solves the coupled flow and solute-transport equations. The SEAWAT code follows a modular structure, and thus, new capabilities can be added with only minor modifications to the main program. SEAWAT reads and writes standard MODFLOW and MT3DMS data sets, although some extra input may be required for some SEAWAT simulations. This means that many of the existing pre- and post-processors can be used to create input data sets and analyze simulation results. Users familiar with MODFLOW and MT3DMS should have little difficulty applying SEAWAT to problems of variable-density ground-water flow. MODFLOW was modified to solve the variable-density flow equation by reformulating the matrix equations in terms of fluid mass rather than fluid volume and by including the appropriate density terms. Fluid density is assumed to be solely a function of the concentration of dissolved constituents; the effects of temperature on fluid density are not considered. Temporally and spatially varying salt concentrations are simulated in SEAWAT using routines from the MT3DMS program. SEAWAT uses either an explicit or implicit procedure to couple the ground-water flow equation with the solute-transport equation. With the explicit procedure, the flow equation is solved first for each timestep, and the resulting advective velocity field is then used in the solution to the solute-transport equation. This procedure for alternately solving the flow and transport equations is repeated until the stress periods and simulation are complete. With the implicit procedure for coupling, the flow and transport equations are solved multiple times for the same timestep until the maximum difference in fluid density between consecutive iterations is less than a user-specified tolerance. The SEAWAT code was tested by simulating five benchmark problems involving variable-density ground-water flow. These problems include two box problems, the Henry problem, Elder problem, and HYDROCOIN problem. The purpose of the box problems is to verify that fluid velocities are properly calculated by SEAWAT. For each of the box problems, SEAWAT calculates the appropriate velocity distribution. SEAWAT also accurately simulates the Henry problem, and SEAWAT results compare well with those of SUTRA. The Elder problem is a complex flow system in which fluid flow is driven solely by density variations. Results from SEAWAT, for six different times, compare well with results from Elder's original solution and results from SUTRA. The HYDROCOIN problem consists of fresh ground water flowing over a salt dome. Simulated contours of salinity compare well for SEAWAT and MOCDENSE. 
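The explicit and implicit coupling procedures described in the abstract can be summarized in a short Python sketch. Here solve_flow and solve_transport stand in for the MODFLOW- and MT3DMS-based solvers, and the density update is a simple linear equation of state; all names and the tolerance value are illustrative, not SEAWAT's actual routines or defaults.

import numpy as np

def density_from_concentration(conc, rho_f=1000.0, drho_dc=0.7):
    # Linear equation of state: fluid density from salt concentration.
    # The slope drho_dc = 0.7 is a typical illustrative value.
    return rho_f + drho_dc * np.asarray(conc)

def coupled_timestep(head, conc, solve_flow, solve_transport,
                     tol=0.1, max_iters=20):
    # One timestep of iteratively (implicitly) coupled flow and transport:
    # repeat flow -> transport -> density until the density change between
    # consecutive coupling iterations falls below tol (kg/m^3).
    # With max_iters=1 this reduces to the explicit (one-pass) scheme.
    rho = density_from_concentration(conc)
    for _ in range(max_iters):
        head = solve_flow(head, rho)         # variable-density flow solve
        conc = solve_transport(conc, head)   # advective-dispersive transport
        rho_new = density_from_concentration(conc)
        if np.max(np.abs(rho_new - rho)) < tol:
            break
        rho = rho_new
    return head, conc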
Contents
Chapter 1: Introduction
  Purpose and Scope
  Development of SEAWAT
Chapter 2: Mathematical Description of Variable-Density Ground-Water Flow
  Basic Assumptions
  Concept of Equivalent Freshwater Head
  Governing Equation for Ground-Water Flow
  Darcy's Law for Variable-Density Ground-Water Flow
  General Form of Darcy's Law
  Assumption of Axes Alignment with Principal Permeability Directions
  Darcy's Law in Terms of Equivalent Freshwater Head
  Governing Equation for Flow in Terms of Freshwater Head
  Governing Equation for Solute Transport
  Boundary and Initial Conditions
  Dirichlet Boundary
  Neumann Boundary
  Cauchy Boundary
  Initial Conditions
  Sink and Source Terms
  Concentration and Density
Chapter 3: Finite-Difference Approximation for the Variable-Density Ground-Water Flow Equation
  Finite-Difference Approximation for the Flow Equation
  Construction of System Equations
Chapter 4: Design and Structure of the SEAWAT Program
  Temporal Discretization
  Explicit Coupling of Flow and Transport
  Implicit Coupling of Flow and Transport
  Structure of the SEAWAT Program
  Array Structure and Memory Allocation
Chapter 5: Modifications of MODFLOW and MT3DMS
  Matrix and Vector Accumulators
  Modifications of the Basic Flow Equation
  Addition of Relative Density-Difference Term
  Addition of Solute Mass Accumulation Term
  Conversion from Volume Conservation to Mass Conservation
  Conversion from Fluid Volume Storage to Fluid Mass Storage
  Conversion Between Confined and Unconfined Conditions
  Vertical Flow Calculation for Dewatered Conditions
  Variable-Density Flow for Water-Table Case
  Modifications of MODFLOW Stress Packages
  Well (WEL) Package
  River (RIV) Package
  Drain (DRN) Package
  Recharge (RCH) Package
  Evapotranspiration (EVT) Package
  General-Head Boundary (GHB) Package
  Time-Varying Constant Head (CHD) Package
  Modification of MODFLOW Solver Packages
  MODFLOW-MT3DMS Link Package and Modifications to MT3DMS
Chapter 6: Instructions for Using SEAWAT
  Preparation of MODFLOW Input Packages for SEAWAT
  Basic (BAS) Package
  Output Control (OC) Option
  Block-Centered Flow (BCF) Package
  Well (WEL) Package
  Drain (DRN) Package
  River (RIV) Package
  Evapotranspiration (EVT) Package
  General-Head Boundary (GHB) Package
  Recharge (RCH) Package
  Time-Varying Constant Head (CHD) Package
  Solver (SIP, SOR, PCG) Packages
  Preparation of MT3DMS Input Packages for SEAWAT
  Basic Transport (BTN) Package
  Advection (ADV) Package
  Source/Sink Mixing (SSM) Package
  Running SEAWAT
  Output Files and Post Processing
  Calculation of Equivalent Freshwater Head
  Tips for Designing SEAWAT Models
Chapter 7: Benchmark Problems
  Box Problems
  Case 1
  Case 2
  Henry Problem
  Elder Problem
  HYDROCOIN Problem
References Cited

Figures
1. Schematic showing two piezometers, one filled with freshwater and the other with saline aquifer water, open to the same point in the aquifer
2. Diagram showing representative elementary volume in a porous medium
3. Schematic showing relation between a coordinate system aligned with the principal axes of permeability and the upward z-axis
4. Generalized flow chart of the SEAWAT program
5. Schematic showing example of the explicit scheme used to couple the flow and transport equations
6. Schematic showing example of the implicit scheme used to couple the flow and transport equations
7. Flow chart showing step-by-step procedures of the SEAWAT program
8. Schematic showing cell indices and variable definitions for the case of a partially dewatered cell underlying an active model cell
9. Schematic showing conceptual representation of flow between two cells for the water-table case
10. Diagram showing conceptual model and variable description for river leakage in MODFLOW and SEAWAT
11. Diagram showing conceptual model and variable description for drain leakage in MODFLOW and SEAWAT
12. Grid showing boundary conditions and model parameters for the Henry problem
13. Graphs showing comparison between SEAWAT and SUTRA for the Henry problem
14. Grid showing boundary conditions and model parameters for the Elder problem
15. Finite-difference grid used to simulate the Elder problem
16. Schematics showing comparison between SEAWAT, SUTRA, and Elder's solution for the Elder problem over time
17. Grid showing boundary conditions and model parameters for the HYDROCOIN problem
18. Graph showing comparison between SEAWAT and MOCDENSE for the HYDROCOIN problem

Tables
1. MODFLOW and MT3DMS packages used in SEAWAT
{"url":"http://fl.water.usgs.gov/Abstracts/twri_6_A7_guo_langevin.html","timestamp":"2014-04-19T14:33:11Z","content_type":null,"content_length":"9523","record_id":"<urn:uuid:5555d94c-0892-45d4-bf5a-f187eab5560f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Modeling the Chemoelectromechanical Behavior of Skeletal Muscle Using the Parallel Open-Source Software Library OpenCMISS Computational and Mathematical Methods in Medicine Volume 2013 (2013), Article ID 517287, 14 pages Research Article Modeling the Chemoelectromechanical Behavior of Skeletal Muscle Using the Parallel Open-Source Software Library OpenCMISS ^1Universität Stuttgart, Institut für Mechanik (Bauwesen), Lehrstuhl II, Pfaffenwaldring 7, 70569 Stuttgart, Germany ^2Stuttgart Research Centre for Simulation Technology, Pfaffenwaldring 5a, 70569 Stuttgart, Germany Received 25 July 2013; Revised 28 August 2013; Accepted 13 September 2013 Academic Editor: Eduardo Soudah Copyright © 2013 Thomas Heidlauf and Oliver Röhrle. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. An extensible, flexible, multiscale, and multiphysics model for nonisometric skeletal muscle behavior is presented. The skeletal muscle chemoelectromechanical model is based on a bottom-up approach modeling the entire excitation-contraction pathway by strongly coupling a detailed biophysical model of a half-sarcomere to the propagation of action potentials along skeletal muscle fibers and linking cellular parameters to a transversely isotropic continuum-mechanical constitutive equation describing the overall mechanical behavior of skeletal muscle tissue. Since the multiscale model exhibits separable time scales, a special emphasis is placed on employing computationally efficient staggered solution schemes. Further, the implementation builds on the open-source software library OpenCMISS and uses state-of-the-art parallelization techniques taking advantage of the unique anatomical fiber architecture of skeletal muscles. OpenCMISS utilizes standardized data structures for geometrical aspects (FieldML) and cellular models (CellML). Both standards are designed to allow for a maximum flexibility, reproducibility, and extensibility. The results demonstrate the model’s capability of simulating different aspects of nonisometric muscle contraction and efficiently simulating the chemoelectromechanical behavior in complex skeletal muscles such as the tibialis anterior 1. Introduction Skeletal muscles’ ability to actively generate force in a controlled fashion allows us to consciously move our body. The force generation is achieved through complex processes on multiple scales and multiple parts of the musculoskeletal system, for example, neural stimuli generation, depolarization at neuromuscular junctions, force generation within skeletal muscle sarcomeres, force transmission to the tendons, and sensory feedback to the nervous system. These processes are extremely complex, strongly coupled with each other, and by far not fully understood. Like in many other research areas, detailed simulation frameworks appealing to realistic models can provide an effective tool to investigate functional and structural interrelations of skeletal muscle force generation. An improved understanding of the physiological mechanisms may also lead to a better understanding of mechanisms behind musculoskeletal disorders. State-of-the-art simulations taking into account the force generating capabilities of skeletal muscles are subject to either phenomenological descriptions using discrete [1–4] or continuum-mechanical models [5–7]. 
The most commonly used skeletal muscle modeling frameworks investigating the musculoskeletal system as a whole are based on discrete mechanics, that is, rigid-body dynamics simulations, in which the skeletal muscles are described by Hill-type models (cf. the review by Zajac [8]). Although such models are widely used to analyze movement, they exhibit significant drawbacks. All functional and structural properties are lumped together into a few parameters. For example, Hill-type skeletal muscle models are described at a point in space through spring constants, damper properties, and one overall activation level, and the calculated muscle force acts along a predefined line of action. Since such models lack a volumetric representation of the skeletal muscles, they are not capable of properly taking into account structural properties, for example, complex fiber architectures, motor unit fiber distributions, or the interaction of a skeletal muscle with surrounding tissue, for example, bones, muscles, or fat tissue. While continuum-mechanical skeletal muscle models can take into account complex muscle fiber distributions [9], regional activation properties, and a dynamically generated line of action [7], they are computationally more challenging and restrict their findings purely to mechanical aspects of muscle force generation; for example, see [6]. Further, researchers appealing to continuum-mechanical models mainly focus on skeletal muscles in isolation. However, considering natural motor unit (MU) recruitment principles to activate specific skeletal muscle fibers by action potentials (APs, electrical signals of short duration), one has to replace such single-scale continuum-mechanical models with multiscale, multiphysics models that take into account the entire pathway from neural stimulation to muscle force generation and feedback to the neural system. Models describing the excitation-contraction coupling (ECC) do exist [10, 11] but are typically limited to describing the force generation within a sarcomere and, hence, on the cellular level and not on the level of an entire skeletal muscle. Models that are guided by either the natural principles of MU recruitment, MU fiber distributions, or muscle force generation on the cellular level and its effect on the force generation of an entire skeletal muscle are rare and often have significant limitations. For example, Hernández-Gascón et al. [12] include a phenomenological description of the cellular processes and ignore biophysical principles of AP propagation and crossbridge dynamics. Fernandez et al. [13] use a neuron model to simultaneously generate an AP at all neuromuscular junctions that is propagated through the muscle tissue using the three-dimensional (3D) bidomain equations, neglecting functional structures such as MU fiber distributions or the fact that APs propagate along a single muscle fiber and do not affect neighboring ones. Furthermore, the model describing the cellular behavior of a sarcomere was adopted from cardiac mechanics. Böl et al. [14] couple 3D electrical field equations with phenomenological fiber models. The model of Röhrle and coworkers is currently the only one that can take into account a biophysical cell model, which includes multiple subcellular models including fatigue, and allows for spatial descriptions, MU fiber distributions, MU recruitment principles, and skeletal muscle force generation [15–18].
However, the chemoelectromechanical model of Röhrle and coworkers has framework-inherent limitations that do not allow its extension to a fully coupled framework embracing neural inputs, force generation, and feedback mechanisms. The major limitation is the fact that the cellular equations are only unidirectionally coupled to the mechanical model. The behavior of a single skeletal muscle fiber is precomputed and stored in a look-up table. Within the mechanical model, the cellular variables associated with force generation, that is, the crossbridge concentrations in the attached pre- and postpower stroke state ( and , resp.), are copied into a detailed 3D structural model and homogenized to compute the resulting stress tensor. Any geometrical variations of a skeletal muscle fiber due to a contraction, for example, a length change, are not considered. The same applies to feedback, that is, an alteration of the recruitment sequence due to the mechanical state. Precomputing the cellular behavior was chosen to reduce the overall computational cost. This was necessary as the original framework is based on serial legacy code (CMISS) appealing to data structures not necessarily suitable for parallelization. Further, only isometric contractions were considered. The isometric case provided justification for neglecting the force-velocity relationship. In reality, however, series elastic elements against which the muscle shortens during tension development prevent true isometric conditions. The aim of this contribution is to introduce a completely new, computationally efficient, fully coupled, multiphysics simulation framework for skeletal muscle modeling providing the basis to include biophysical motor unit recruitment and feedback mechanisms at a later stage. The framework is based on the open-source software library OpenCMISS [19], which, together with the entire model described in this contribution, can be downloaded from https://github.com/OpenCMISS. OpenCMISS was designed to achieve maximal flexibility and efficiency through the use of new data structures such as FieldML [20], access to well-established model repositories via CellML [21, 22], and a distributed-memory foundation for executing large problems. The new libraries and the data structure provide the basis to combine different mesh regions with different dimensionality, for example, 0D models for the cellular behavior, 1D models for the AP propagation, and 3D models for the mechanical model, within one framework. This allows for a strong and bidirectional coupling of the chemoelectrical cellular behavior and the mechanical model—a major advantage over commercially available software packages. Furthermore, the modular organization of the framework allows for straightforward extensions of the model and substitution of model components, for example, the cellular model.

2. Materials and Methods

Figure 1 provides an overview of the proposed computational framework. The individual parts of the framework (model of the half-sarcomere, propagation of the AP, and continuum-mechanical model) are presented in the subsequent sections. Here, the interactions and couplings between the individual model parts are explained. The muscle fibers of one MU are stimulated through their corresponding motoneuron at the neuromuscular junction. In the proposed model, the neural discharges are modeled as an ionic current that is applied at the center of a fiber, which represents the neuromuscular junction. In this contribution, the MU discharge times are predefined, for example, by a regular frequency. However, to compute the discharge rates of a motoneuron pool, one could, for example, also appeal to the model of Fuglevand et al. [23], as shown in Röhrle [16], or to a biophysical model like the one by Negro and Farina [24]. The coupling of a motoneuron-pool model to the muscle model is unidirectional; that is, the flow of information between the models only occurs from the motoneuron-pool model to the muscle model. Hence, the MU recruitment and firing times can be precomputed independently of the muscle model. In contrast to [15], the governing equations for describing the bioelectrical field and those of 3D finite elasticity theory are solved in a strongly coupled way, where the solution of the mechanics influences the bioelectrical fields and vice versa. The bioelectrical field itself is determined by solving the so-called monodomain equation, which is a reaction-diffusion equation. The monodomain equation is solved using an operator splitting technique providing the mathematical justification to separately treat the reaction part, which is given by the half-sarcomere model [11], and the diffusion part, which describes the AP propagation. The half-sarcomere model is biophysically based and is described by a set of ordinary differential equations (ODEs) in time, that is, exhibiting no spatially varying quantities (0D). The diffusive part is described by a transient 1D partial differential equation (PDE). Since the two parts of the monodomain equation are solved separately, the operator splitting technique requires a mapping of the membrane voltage, , between the two parts at each time step. Among many other cellular quantities, the half-sarcomere model (reaction term) computes the active stress contribution at a particular location along a skeletal muscle fiber. The active stress, , enters the constitutive equation through a mapping (homogenization) to the continuum-mechanical model. In return, the shortening velocity, , is passed from the continuum-mechanical model to the half-sarcomere model. To take into account the length changes due to skeletal muscle tissue deformations, the bioelectrical field equations are solved on a deforming/moving domain. Thus, the equations describing the AP propagation along the muscle fibers have to be adjusted to the deformation. This can be achieved either by modifying the conductivity tensor or by solving the monodomain equation on the deformed geometry. In this contribution, the latter is employed. In the following sections, the different parts of the computational framework, that is, the mechanical model (Section 2.1), the half-sarcomere model (Section 2.2), and the AP propagation model (Section 2.3), are introduced. Furthermore, implementation and high-performance computing aspects of the resulting multiphysics discretization schemes are presented in Section 2.4.
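A minimal numerical sketch of the splitting idea, alternating a pointwise membrane ODE step with a 1D diffusion step along the fiber, is given below. The reaction term is a FitzHugh-Nagumo-like toy model standing in for the Shorten et al. half-sarcomere model, and all constants are illustrative.

import numpy as np

n, dx, dt, D = 200, 0.01, 0.001, 0.04   # illustrative grid and constants
V = np.full(n, -1.2)                    # dimensionless membrane variable
w = np.zeros(n)                         # recovery variable
V[n // 2] = 2.0                         # "stimulus" at the fiber midpoint

def reaction_step(V, w, dt):
    # 0D membrane model evaluated pointwise at every grid node
    dV = V - V**3 / 3.0 - w
    dw = 0.08 * (V + 0.7 - 0.8 * w)
    return V + dt * dV, w + dt * dw

def diffusion_step(V, dt):
    # Explicit 1D diffusion along the fiber with no-flux (reflecting) ends
    lap = np.empty_like(V)
    lap[1:-1] = V[2:] - 2.0 * V[1:-1] + V[:-2]
    lap[0] = 2.0 * (V[1] - V[0])
    lap[-1] = 2.0 * (V[-2] - V[-1])
    return V + dt * D * lap / dx**2

for step in range(20000):               # Godunov splitting: react, then diffuse
    V, w = reaction_step(V, w, dt)
    V = diffusion_step(V, dt)

print("membrane variable range:", V.min(), V.max())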
In this contribution, the MU discharge times are predefined, for example, by a regular frequency. However, computing the discharge rates of a motoneuron pool, one could, for example, also appeal to the model of Fuglevand et al. [23], as shown in Röhrle [16], or to a biophysical model like the one by Negro and Farina [ 24]. The coupling of a motoneuron-pool model to the muscle model is unidirectional; that is, the flow of information between the models only occurs from the motoneuron-pool model to the muscle model. Hence, the MU recruitment and firing times can be precomputed independently of the muscle model. In contrary to [15], the governing equations for describing the bioelectrical field and those of 3D finite elasticity theory are solved in a strongly coupled way, where the solution of the mechanics influences the bioelectrical fields and vice versa. The bioelectrical field itself is determined by solving the so-called monodomain equation, which is a reaction-diffusion equation. The monodomain equation is solved using an operator splitting technique providing the mathematical justification to separately treat the reaction part, which is given by the half-sarcomere model [11], and the diffusion part, which describes the AP propagation. The half-sarcomere model is biophysically based and is described by a set of ordinary differential equations (ODEs) in time, that is, exhibiting no spatially varying quantities (0D). The diffusive part is described by a transient 1D partial differential equation (PDE). Since the two parts of the monodomain equation are solved separately, the operator splitting technique requires a mapping of the membrane voltage, , between the two parts at each time step. Among many other cellular quantities, the half-sarcomere model (reaction term) computes the active stress contribution at a particular location along a skeletal muscle fiber. The active stress, , enters the constitutive equation through a mapping (homogenization) to the continuum-mechanical model. In return, the shortening velocity, , is passed from the continuum-mechanical model to the half-sarcomere model. To take into account the length changes due to skeletal muscle tissue deformations, the bioelectrical field equations are solved on a deforming/moving domain. Thus, the equations describing the AP propagation along the muscle fibers have to be adjusted to the deformation. This can be achieved either by modifying the conductivity tensor or by solving the monodomain equation on the deformed geometry. In this contribution, the latter is employed. In the following sections, the different parts of the computational framework, that is, the mechanical model (Section 2.1), the half-sarcomere model (Section 2.2), and the AP propagation model (Section 2.3), are introduced. Furthermore, implementation and high-performance computing aspects of the resulting multiphysics discretization schemes are presented in Section 2.4. 2.1. The Mechanical Problem In continuum mechanics, the motion of a body is described by the placement function that assigns each point in the reference configuration at time a corresponding point in the actual (deformed) configuration at time ; that is, . The deformation of a body is commonly measured by the deformation gradient tensor and the strain by the Green-Lagrangian strain tensor , where is the right Cauchy-Green deformation tensor and denotes the second-order identity tensor. Inertia forces and body forces are assumed to be small compared to the forces acting in the muscle. 
Thus, the balance of linear momentum reduces to $\operatorname{div} \boldsymbol{\sigma} = \mathbf{0}$ (2), where $\boldsymbol{\sigma}$ denotes the Cauchy stress tensor. The Cauchy stress can be derived from the second Piola-Kirchhoff stress tensor, $S$, via a scaled covariant push-forward operation, $\boldsymbol{\sigma} = J^{-1} F S F^{T}$, with $J = \det F$ being the Jacobian. The stress tensor (e.g., $\boldsymbol{\sigma}$ or $S$) is derived from a constitutive equation. A constitutive equation characterizes the material behavior under load; that is, it relates the stress in a body to the strain. Skeletal muscle tissue is generally considered to be transversely isotropic and hyperelastic. Furthermore, muscle tissue is considered to be incompressible under physiological conditions. The second Piola-Kirchhoff stress tensor of a hyperelastic material can be derived from a strain energy function $\Psi$ defined per unit reference volume by $S = 2\, \partial \Psi / \partial C - p\, C^{-1}$ (3), with the hydrostatic pressure $p$ entering (3) as a Lagrange multiplier associated with the incompressibility constraint $J - 1 = 0$; see, for example, [25]. For transversely isotropic materials, the strain energy function can be expressed in terms of the right Cauchy-Green deformation tensor $C$ and a second-order structural tensor $M = a_0 \otimes a_0$, where $a_0$ denotes a unit vector in the reference configuration pointing in the fiber direction. Applying the theory of invariants (see Spencer [26]), the strain energy function of a transversely isotropic material can be expressed as $\Psi = \Psi(I_1, I_2, I_3, I_4, I_5)$ (4), with principal invariants $I_1 = \operatorname{tr} C$, $I_2 = \tfrac{1}{2}[(\operatorname{tr} C)^2 - \operatorname{tr}(C^2)]$, and $I_3 = \det C$ and mixed invariants $I_4 = \operatorname{tr}(C M)$ and $I_5 = \operatorname{tr}(C^2 M)$. Following the idea of a fiber-reinforced material (cf. Spencer [27]), the strain energy function is split into an isotropic part that represents the ground matrix and an anisotropic part that represents the embedded fibers. Furthermore, a term is introduced to represent the muscle's ability to actively generate force via crossbridge cycling: $\Psi = \Psi_{\mathrm{iso}}(I_1, I_2) + \Psi_{\mathrm{aniso}}(I_4) + \Psi_{\mathrm{active}}$ (5). On the right-hand side of (5), a dependence on the third principal invariant has directly been omitted due to the incompressibility constraint $I_3 = 1$. Being based on the principle of superposition, the ansatz in (5) neglects any couplings between the individual parts of the strain energy, leading to the assumptions that (i) the active behavior is independent of the other terms and (ii) there is no interaction between the fibers and the matrix. In the following, first the terms representing the passive behavior of skeletal muscle are introduced before describing the active part of the strain energy.

2.1.1. Passive Material Behavior

For the isotropic contribution, the Mooney-Rivlin material description is employed; see, for example, Holzapfel [28]. This material description is known to be well suited for representing the J-shaped stress-strain curves of soft biological tissues: $\Psi_{\mathrm{iso}} = c_1 (I_1 - 3) + c_2 (I_2 - 3)$ (6). The material parameters of the Mooney-Rivlin model, $c_1$ and $c_2$, are determined in a uniaxial compression test using the experimental data of Zheng et al. [29]. The set of parameters used within this work is summarized in Table 1. For the anisotropic contribution, a polynomial strain-energy function of the fiber stretch, $\lambda_f = \sqrt{I_4}$, has been adopted from Markert et al. [30]: $\Psi_{\mathrm{aniso}} = \sum_{i=1}^{n} \big[ \tfrac{c_3^i}{c_4^i} (\lambda_f^{\,c_4^i} - 1) - c_3^i \ln \lambda_f \big]$ (7), where $n$ is the number of polynomial terms and $c_3^i$ and $c_4^i$ denote material parameters. Note that the anisotropic contribution applies only to the tensile range, that is, for $\lambda_f > 1$. A uniaxial extension test in fiber direction is used to fit the material parameters to the experimental data of Hawkins and Bey [31]. A single polynomial term ($n = 1$) was found to be sufficient to reproduce the experimental data.

2.1.2. Active Contractile Behavior

In many physiological conditions the mechanical behavior of skeletal muscle is dominated by its active, force-generating behavior.
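Before detailing the active contribution, the passive law of Section 2.1.1 can be made concrete with a short sketch. It evaluates the nominal passive stress under incompressible uniaxial stretch along the fiber, assuming a single polynomial term in (7); the parameter names (c1, c2, c3, c4) and the values used below are illustrative placeholders, not the fitted values of Table 1.

```python
def passive_nominal_stress(lam, c1, c2, c3, c4):
    """Nominal (1st Piola-Kirchhoff) passive stress for incompressible
    uniaxial stretch lam along the fiber direction.

    Isotropic part: Mooney-Rivlin, cf. (6). For incompressible uniaxial
    loading the standard Cauchy stress is
        sigma = 2 * (lam**2 - 1/lam) * (c1 + c2/lam),
    converted to a nominal stress by dividing by lam.
    Anisotropic part: derivative of the single-term polynomial energy
        W_aniso = (c3/c4) * (lam**c4 - 1) - c3 * ln(lam),  cf. (7),
    active only in tension (lam > 1); it vanishes smoothly at lam = 1.
    """
    p_iso = 2.0 * (lam**2 - 1.0 / lam) * (c1 + c2 / lam) / lam
    p_fib = c3 * (lam**(c4 - 1.0) - 1.0 / lam) if lam > 1.0 else 0.0
    return p_iso + p_fib

# Illustrative parameters only (NOT the values fitted in Table 1):
for lam in (0.9, 1.0, 1.1, 1.2):
    print(f"lambda_f = {lam:4.2f}  P = {passive_nominal_stress(lam, 3.0, 1.0, 10.0, 5.0):8.3f}")
```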
In accordance with previously published skeletal muscle models [6, 15, 32], it is assumed that the active stress only acts in fiber direction. Furthermore, the generated force depends on the length of the muscle [33] and the shortening velocity [34]. Following this, the active part of the strain energy is assumed to be a function of the deformation, represented through the fiber stretch, the rate of deformation, and the fiber direction. Proceeding from (3), the active part of the stress tensor is obtained; see (8). In (8), a scalar-valued active stress function, which takes the form of a nominal (or engineering) stress, is introduced. Further, the active stress function is assumed to depend on a constant maximum active stress of 7.3 N/cm^2 (cf. [31]), a function relating the generated stress to the muscle length, and a function that links the macroscopic continuum-mechanical system to the quantities at the microscale, which depends on the level of activation and the shortening velocity; see (9). The latter function is determined in a biophysical model at the microscale (see Section 2.2). The force-length relation is adopted from Röhrle et al. [15] (see also [6]); see (10). In (10), the optimal fiber stretch is introduced, which, based on experimental data [31], is assumed to take a value of 1.2. In summary, the second Piola-Kirchhoff stress tensor follows as the sum of the passive contributions and the active contribution (8); see (11).

2.2. The Micromodel of a Half-Sarcomere

The basis for modeling subcellular processes in the present contribution is the model of Shorten et al. [11]. The Shorten model describes the complex, nonlinear, biophysical processes leading from electrical excitation to contraction and force generation of a half-sarcomere by means of ODEs. Two versions of the model using slightly different parametrizations allow the distinction between slow-twitch (type I) and fast-twitch (type II) muscle fibers. The model has been validated on mouse muscles. To model the entire ECC, the half-sarcomere model [11] combines several submodels describing (a) membrane electrophysiology, (b) calcium release from the sarcoplasmic reticulum (SR), (c) calcium dynamics, (d) crossbridge dynamics, and (e) fatigue. In more detail, the individual parts are as follows. (a) For a description of the Hodgkin-Huxley electrophysiology of action potentials via ionic currents that pass through various channels and pumps (sodium channels, delayed rectifier and inverse rectifier potassium channels, chloride channels, and pumps) in the sarcolemma and T-tubules, see Adrian and Peachey [35] and Wallinga et al. [36]. (b) Intracellular calcium release from the sarcoplasmic reticulum to the cytosol in response to membrane depolarization through RyR calcium release channels is described by a ten-state model originally proposed by Ríos et al. [37]. This submodel couples the T-tubule membrane voltage to the opening of the dihydropyridine receptor/RyR complex. (c) The released calcium (Ca^2+) ions bind in the cytosol to parvalbumin and ATP along with troponin on the myofilaments. Moreover, intracellular magnesium ions (Mg^2+) compete with Ca^2+ for parvalbumin and ATP binding sites. After being transported back to the SR via the Ca^2+-ATPase, Ca^2+ binds to calsequestrin. The description of the calcium dynamics goes back to the model of Baylor and Hollingworth [38]. (d) The binding of two Ca^2+ ions to troponin C leads to a conformational change in the troponin molecule that removes the blocking tropomyosin from the actin filament and thereby allows the myosin head to attach to the actin binding sites.
This submodel is based on an eight-state model of crossbridge dynamics in skeletal muscle using the generic models of Razumova et al. [39, 40] and Campbell et al. [41, 42]. (e) Muscle fatigue is modeled through subcellular mechanisms on the basis of phosphate dynamics. The accumulation of inorganic phosphate (P_i) is believed to be the primary mechanism behind metabolic fatigue. Here, P_i is formed from the energy-providing reaction of ATP to adenosine diphosphate (ADP) during crossbridge cycling when weakly bound crossbridges isomerize into strongly bound crossbridges. The produced phosphate is transported passively to the SR, where it precipitates with Ca^2+ [11].

Although the degree of detail of the model of Shorten et al. [11], for example, in modeling the signaling pathway of the ECC or fatigue, is not essential for the presented overall modeling framework, the authors refrain from simplifying the model, as it will be the basis for further developments that will build on different biophysical components. Moreover, the complexity of the model introduces new challenges for efficiency and parallelization.

According to the sliding filament theory [43], the active force production in skeletal muscle is due to crossbridge cycling. The crossbridge dynamics model, which depends on all above-described submodels, defines the force-producing step, called the power stroke, as the transition between the two attached states, that is, the prepower stroke state and the postpower stroke state. Therefore, one can assume that the actively generated stress in a half-sarcomere under isometric conditions is proportional to the concentration of crossbridges in the postpower stroke state [40]. This concentration is normalized using its value at maximum tetanic stimulation. The half-sarcomere model [11] was developed for isometric contractions. Truly isometric conditions, however, do not exist in skeletal muscle, since (i) contractile tissue is in series with elastic components of the musculoskeletal system stretching under contraction-induced stress increase and (ii) various nonuniformities exist along the muscle fiber; that is, while one part of the fiber shortens, another part is stretched. The scaling quantity entering (9) is therefore found by multiplying the normalized concentration of crossbridges in the postpower stroke state by Hill's hyperbolic force-velocity relation [34]; see (12). In (12), the maximum isometric active force and the Hill parameters appear; the Hill parameters are chosen according to [44, 45] and [46], with the maximum shortening velocity corresponding to zero force production. To extend the single half-sarcomere model to a model of a muscle fiber, the electrophysiological characteristic of propagating APs along the length of the fibers is considered. The equations representing the AP propagation are presented in Section 2.3.

2.3. Action Potential Propagation

The propagation of an AP along a skeletal muscle fiber is initiated at the neuromuscular junction located at the middle of the length of each fiber. Starting at the neuromuscular junction, the short-term depolarization of the muscle-fiber membrane voltage travels along the length of the fiber towards its ends. The macroscopic electrical conductivity of muscle tissue perpendicular to the fiber direction is up to one order of magnitude lower than the conductivity along the fiber direction [47, 48], and electrical stimulation from one fiber to adjacent ones is not observed. Therefore, the propagation of an AP along a skeletal muscle fiber is modeled as a 1D system.
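For concreteness, a small sketch of the force-velocity scaling just described follows. The normalization of Hill's relation and the dimensionless curvature parameter a/F0 = 0.25 are common choices used here purely as placeholders (the paper's actual parameter values follow [44-46] and are not reproduced here); the names `gamma` and `A2_normalized` are our labels for the scaling quantity of (9) and the normalized postpower-stroke crossbridge concentration.

```python
def force_velocity_factor(v, v_max, a_rel=0.25):
    """Hill's hyperbolic force-velocity relation [34], normalized so the
    factor equals 1 at v = 0 (isometric) and 0 at v = v_max.
    a_rel is the dimensionless Hill constant a/F0; 0.25 is a commonly
    cited value, used here purely as a placeholder."""
    if v >= v_max:
        return 0.0
    b = a_rel * v_max                 # enforces zero force at v = v_max
    return (b - a_rel * v) / (b + v)  # from (F + a)(v + b) = (F0 + a) b

def gamma(A2_normalized, v, v_max=200.0):
    """Scaling of the active stress: normalized postpower-stroke crossbridge
    concentration times the force-velocity factor (velocities in mm/s)."""
    return A2_normalized * force_velocity_factor(v, v_max)

print(gamma(0.8, v=0.0))    # isometric: 0.8
print(gamma(0.8, v=20.0))   # 10% of v_max: noticeably reduced
print(gamma(0.8, v=50.0))   # 25% of v_max: further reduced
```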
The propagation of APs in biological tissue is typically modeled using the bidomain equations; see, for example, Pullan et al. [49]. In the 1D case, the bidomain equations reduce to the simpler monodomain equation, a reaction-diffusion equation [50, 51], which is given by $\partial/\partial x \big( \sigma\, \partial V_m / \partial x \big) = A_m \big( C_m\, \partial V_m / \partial t + I_{\mathrm{ion}} \big)$ (13). In (13), $x$ denotes the spatial variable describing the position along the path of the fiber, $\sigma$ is the conductivity, $V_m$ represents the membrane voltage, $A_m$ reflects the ratio of the membrane surface area to the volume, and $C_m$ is the capacitance of the cell membrane per unit area. Depending on the twitch type of the fiber, two different values are used for the membrane capacitance, that is, $C_m$ = 0.58 μF/cm^2 for slow-twitch fibers and $C_m$ = 1.0 μF/cm^2 for fast-twitch fibers [11]. The value of the surface-to-volume ratio $A_m$ (in cm^-1) is identical for both fiber types [52]. Furthermore, the reaction term $I_{\mathrm{ion}}$ depends nonlinearly on $V_m$ and denotes the sum of ionic currents crossing the cell membrane of the sarcolemma and the T-tubule.

2.4. High-Performance Computing

After introducing the individual submodels and their interactions, this section focuses on efficient solution strategies for this complex and computationally very demanding multiphysics model describing phenomena on different length and time scales. To achieve this, various concepts of software engineering, for example, advanced discretization schemes for multiphysics problems, parallelization, or staggered solution schemes, are adopted. These concepts have been implemented within the open-source software library OpenCMISS [19].

2.4.1. Operator Splitting

For the numerical treatment of the monodomain equation (cf. (13)), it is convenient to apply an operator splitting technique (or fractional-step method) to separate the nonlinear reaction term from the diffusion term; see, for example, Sundnes et al. [50, 53]. Applying the first-order accurate Godunov-type splitting, (13) yields the reaction step $\partial V_m^{*} / \partial t = -(1/C_m)\, I_{\mathrm{ion}}(V_m^{*})$ (14a) and the diffusion step $A_m C_m\, \partial V_m / \partial t = \partial/\partial x \big( \sigma\, \partial V_m / \partial x \big)$ (14b), where $\Delta t = t_{n+1} - t_n$ refers to the time step, $V_m^{n}$ and $V_m^{n+1}$ denote the values of the membrane voltage at the discrete times $t_n$ and $t_{n+1}$, respectively, and $V_m^{*}$ is the value at the intermediate time. The advantage of the operator-splitting approach is that different numerical methods can be applied to the different subsystems; that is, the nonlinear reaction (14a) is solved using an implicit multistep ODE integration method, as commonly done for highly nonlinear, stiff, biophysical cell models (see Pullan et al. [49]), while the backward-Euler method is used for the diffusion equation (14b). Furthermore, different time steps can be used for the different subsystems (subcycling). For the discretization of the spatial derivative term in (14b), the finite element method (FEM) [54] is applied.

2.4.2. Discretization in Space and Time

The solution of the bioelectrical field equations, (14a) and (14b), requires an extremely small time step and a very fine mesh due to the rapid changes and steep gradients occurring in physiological cell models; see [49, 52]. On the other hand, using a similarly fine spatial and temporal discretization for the solution of the 3D mechanical model is prohibitively expensive and unnecessary, as changes on the scale of an entire muscle occur at considerably larger time scales. Following the idea of different characteristic length scales, a multiphysics discretization scheme is proposed: a much finer mesh is used for the bioelectrical model than for the continuum-mechanical system. First, a relatively coarse 3D finite element (FE) mesh of the muscle's geometry is generated. Then, relatively fine 1D FE muscle fiber meshes are embedded in the 3D elements (cf. [18]).
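Before turning to the transfer between the two meshes, the operator splitting of Section 2.4.1 can be illustrated with a minimal 1D sketch. The stiff Shorten cell model is replaced by a simple cubic (bistable) reaction term, and explicit Euler replaces the implicit multistep ODE integrator, both purely for illustration; all parameter values below are placeholders.

```python
import numpy as np

N = 101
dx = 1.0 / (N - 1)                 # fiber resolved by N points
sigma, Am, Cm = 1.0, 1.0, 1.0      # conductivity, surface/volume ratio, capacitance
dt_pde = 1e-3                      # diffusion step, cf. (14b)
dt_ode = dt_pde / 50               # 50 reaction substeps, cf. (14a), per diffusion step

V = np.zeros(N)                    # membrane state at rest
V[N // 2] = 1.0                    # stimulus at the fiber center (endplate)

def I_ion(v):
    return v * (v - 0.2) * (v - 1.0)   # placeholder cubic ionic current

# Backward-Euler diffusion: (I - k*D2) V_new = V_old, zero-flux ends
# (one-sided approximation at the boundaries).
k = dt_pde * sigma / (Am * Cm * dx**2)
A = (1 + 2 * k) * np.eye(N) - k * (np.eye(N, k=1) + np.eye(N, k=-1))
A[0, 0] = A[-1, -1] = 1 + k

for _ in range(200):               # Godunov splitting, one pass per dt_pde
    for _ in range(50):            # (14a): subcycled reaction step
        V -= dt_ode / Cm * I_ion(V)
    V = np.linalg.solve(A, V)      # (14b): implicit diffusion step

print("V in [%.3f, %.3f]" % (V.min(), V.max()))
```

With the bistable reaction term, the stimulated region recruits its neighbors and a front propagates outward from the center, which is the qualitative behavior (13) is meant to capture.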
The governing equations of the continuum-mechanical model, (2), and the incompressibility constraint are discretized using the coarse 3D mesh, while the diffusion part of the bioelectrical field equation, (14b), is solved on the 1D fiber meshes. Some variables exist on both meshes, and thus transfer operations between the two meshes are required. The transfer from the coarse 3D FE mesh to the fine 1D fiber meshes is called interpolation, while the transfer in the opposite direction is termed homogenization. The homogenization and interpolation processes are discussed for each affected variable in Section 2.4.3.

Due to the different characteristic time scales of the different physical phenomena, a staggered solution scheme with three different time steps is applied in this work. A schematic representation of the time-stepping scheme is shown in Figure 2, where each solution step is indicated by its own symbol. First, the half-sarcomere models, (14a), are solved for 50 time steps with a small time step size. Note that, for simplicity and readability, Figure 2 depicts only a fraction of these time steps: when computing the cellular states that will be used within the next time step of the diffusion equation, only 5 instead of the actual 50 time steps are shown. Each discretization point of the monodomain equation is associated with its own half-sarcomere model. The half-sarcomere model is mathematically described by ODEs in time, which do not rely on any spatial quantities. Therefore, each half-sarcomere model can be solved independently of all other half-sarcomere models. The final values of the membrane voltage computed in these steps are used as starting values for the diffusion equation (14b). Following the solution of the diffusion equation (14b) with its own, larger time step, the updated values of the membrane voltage are used as initial conditions for the next solution step of the half-sarcomere model. This procedure is repeated a number of times (3 times in Figure 2, 1000 times in the actual computations) before the values of the active stress are homogenized. The homogenized values enter the continuum-mechanical model, (2), through the stress tensor, which is given by (11). The continuum-mechanical model is only solved in increments of the largest time step size. Further, the values of the sarcomere velocity are interpolated and applied to the half-sarcomere models. At the same time, the positions of the nodes of the 1D fiber meshes are updated based on the calculated deformation. The described steps are repeated until the final time is reached.

2.4.3. Homogenization and Interpolation

As described above, some variables are shared between the different discretizations. For example, the values of the active stress field are determined in the model of the half-sarcomere, that is, at the nodes of the 1D fiber meshes. In order to include the active stress field in the continuum-mechanical constitutive equation, which is evaluated at the integration points, for example, the Gauß points, associated with the weak formulation of the 3D finite elements, the values need to be homogenized. Like in Röhrle et al.
[15], the homogenization is achieved by computing the arithmetic mean of all 1D nodal values that are closest to a certain Gauß point of the continuum-mechanical 3D FE mesh. Other, more elaborate homogenization techniques like those proposed in [55, 56] could be adopted but are not further considered here. The positions of the nodes of the 1D fiber meshes are defined in terms of the local element coordinate system of the 3D geometric FEs. Using this definition, their actual positions can be determined from the deformation of the muscle's geometry, that is, from the actual configuration. Using the basis functions of the 3D FEs for the interpolation, the nodal positions of the 1D fiber meshes are updated after each solution of the mechanical submodel. Further, information about the sarcomere velocity is required in the half-sarcomere models located at the nodes of the 1D fiber meshes; see (12). The sarcomere velocity cannot be determined in the biophysical model of the half-sarcomere, as the velocity also relies on the boundary conditions of the continuum-mechanical model of the entire muscle. Therefore, the local sarcomere velocity is approximated by a backward finite difference scheme, $v \approx (d^{\,t_n} - d^{\,t_{n-1}}) / (t_n - t_{n-1})$, where $d$ represents the distance between two adjacent nodes and $t_{n-1}$ and $t_n$ denote two consecutive time steps of the continuum-mechanical model. To avoid unrealistically high variations in the sarcomere velocity and to mimic the structural links between adjacent skeletal muscle fibers, the velocity is averaged over a patch of seven sequential nodes of one fiber.

2.4.4. Data Structure

The open-source software library OpenCMISS [19] provides a highly flexible framework for the simulation of coupled multiphysics problems. Being arranged in a hierarchical fashion, the concepts of regions, meshes, fields, and so forth (see [19] for details) allow for couplings between different physical problems at different length and time scales. The presented skeletal muscle model is built on a single region, since the different physical models occupy the same space (volume-coupled problem). When the interaction of a skeletal muscle with neighboring structures such as other muscles, bone, fat, or skin is of interest, these structures can be added to the model as additional regions; see Figure 3. To couple different regions, their interaction can be defined via interface conditions, for example, contact. The region used for the chemoelectromechanical muscle model contains two meshes: a 3D representation of the geometry that is used for the continuum-mechanical model (mesh 1 in Figure 3) and a second mesh (mesh 2 in Figure 3) consisting of a number of 1D fibers that are used for the solution of the bioelectrical model. The 1D fiber meshes are embedded in the 3D FEs.

Fields are a key data structure in OpenCMISS. Any quantity that can be associated with a mesh is represented in OpenCMISS as a field. A field variable can be constant across the mesh, or it can vary from element to element, from node to node, from interpolation point (e.g., Gauß point) to interpolation point, or from (arbitrarily located) data point to data point. The representation of fields in OpenCMISS is based on FieldML [20], which provides field transfer operators (homogenization or interpolation) to handle different spatial scales; see also Section 2.4.3. Further, OpenCMISS employs nested control loops to handle different temporal scales.
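Schematically, the nested control loops and the staggered scheme of Figure 2 can be summarized as follows. The solver bodies are stubs standing in for the OpenCMISS CellML/ODE solver, the parabolic diffusion solver, and the nonlinear finite-elasticity solver; the step sizes are illustrative and only respect the ratios stated in the text (50 ODE substeps per diffusion step, 1000 diffusion steps per mechanics step).

```python
DT_PDE = 1e-3                  # diffusion step, (14b)
DT_ODE = DT_PDE / 50           # half-sarcomere substep, (14a)
N_PDE_PER_MECH = 1000
DT_MECH = N_PDE_PER_MECH * DT_PDE
T_END = 3 * DT_MECH

def solve_cell_models(dt, n_sub):      # (14a); independent per grid point
    pass

def solve_diffusion(dt):               # (14b); 1D AP propagation
    pass

def homogenize_active_stress():        # mean of nearest 1D nodal values
    pass                               # per Gauss point of the 3D mesh

def solve_mechanics(dt):               # (2) with stress tensor (11)
    pass

def update_fibers_and_velocity():      # interpolate fiber node positions from
    pass                               # the 3D deformation; backward-difference
                                       # velocity, averaged over 7 nodes

t = 0.0
while t < T_END:                       # main control loop
    for _ in range(N_PDE_PER_MECH):    # bioelectrical control loop
        solve_cell_models(DT_ODE, 50)
        solve_diffusion(DT_PDE)
    homogenize_active_stress()
    solve_mechanics(DT_MECH)           # mechanics control loop
    update_fibers_and_velocity()
    t += DT_MECH
```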
In the presented model, two separate control loops for the continuum-mechanical model and the bioelectrical problem, each with its own time step size, are linked to a superior main control loop. The control loop for the mechanical model is only associated with a single solver, while the bioelectrical control loop is connected to a solver for the diffusion equation and a second solver for the half-sarcomere model. The half-sarcomere model is provided in CellML format [21]. CellML is a markup language for the description of subcellular models based on XML (Extensible Markup Language). In a multiscale model, CellML can be used to conveniently describe the physical processes occurring at a single point within a model at a larger spatial scale. A CellML model repository containing more than 500 models is available for download at http://www.cellml.org/, among them the biophysical model of a half-sarcomere of Shorten et al. [11]. In OpenCMISS, the time step sizes for the CellML models can be chosen independently of the time step sizes used to solve the equations representing the different physics. For example, the half-sarcomere model, (14a), requires a much smaller time step than the diffusion equation, (14b), and hence, subcycling of the CellML model is employed.

2.4.5. Parallelization

OpenCMISS is developed for parallel computations in a heterogeneous multiprocessing environment [19], where the MPI standard (http://mpi-forum.org/) is used for distributed-memory parallelization and the OpenMP standard (http://openmp.org/) is used for shared-memory parallelization. The implementation of the distributed-memory parallelization in OpenCMISS builds on the concept of domain decomposition. For the presented chemoelectromechanical skeletal muscle model, the domain is decomposed in such a way that each embedded 1D fiber mesh is uniquely assigned to a processor; see Figure 4. This approach reduces the amount of communication between the individual processors to a minimum for the bioelectrical model. Parallel efficiency is hereby guaranteed by the fact that the diffusion part of the bioelectrical model is evaluated 1000 times more often than the continuum-mechanical model. Hence, a user-defined domain decomposition, rather than a computed decomposition based on the graph partitioning packages ParMETIS (http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview) or Scotch (http://www.labri.fr/perso/pelegrin/scotch/), which is typically used within OpenCMISS, is optimal with respect to the entire chemoelectromechanical model. Although currently not implemented, the individual muscle fiber meshes within a single computational domain could be further parallelized using an OpenMP shared-memory parallelization. Further, the integration of the ODEs describing the half-sarcomere model is highly suitable for parallel execution on GPGPUs.

3. Results

3.1. Computational Model

To analyze the performance of the computational framework, a simple geometric model is considered. A cubic geometry with 2 cm edge length is generated and discretized using eight triquadratic/trilinear Lagrange finite elements (Taylor-Hood elements). A fiber direction is defined that is uniformly aligned and parallel to an edge of the cube. A total of 400 muscle fiber meshes are evenly distributed in the cubic geometry, and each fiber is discretized using 60 linear Lagrange finite elements. First, the muscle is passively stretched in fiber direction by 20% to reach the optimal fiber stretch of 1.2.
Under isometric conditions (the muscle specimen is fixed at the optimal length), a 100 Hz tetanic stimulation frequency is applied to the central half-sarcomere model of all fibers in the model. To analyze the speedup in a parallel environment, the described model is executed on 1, 2, and 4 processors. A speedup of 2.18 is achieved when going from 1 to 2 processors, while a speedup of 1.95 is achieved when comparing 2 to 4 processors. Further, the simulations were repeated using only 36 1D fiber meshes instead of 400. In this case, a speedup of 1.44 is achieved when going from 1 to 2 processors, while a speedup of 1.50 is achieved when comparing 2 to 4 processors. Table 2 lists the timing results and speedup factors for an Intel Xeon Processor E5520 and 8 GB of RAM. In the example with 400 fibers, the solution of the bioelectrical model dominates the total computing time. Here, a speedup factor of 2.18 occurs, which exceeds the theoretically achievable value of 2; this can be explained by a significantly higher number of cache misses on 1 processor than on multiple processors, as the size of the bioelectrical model for each processor scales down proportionally to the number of processors. (No ghost elements exist, and no communication between the processors is required in the bioelectrical model.) The other end of the spectrum is marked by the example using only 36 fibers, that is, a number of fibers per 3D element equal to the number of Gauß points in the plane perpendicular to the fibers, leading to a one-to-one correspondence between these Gauß points and the embedded fibers. Note that the discretization for the mechanics is independent of the number of embedded fibers and is identical in both cases. In case of 36 fibers, the speedup factors are very poor, since the solution of the continuum-mechanical problem claims a larger fraction of the total computing time. The poor scaling of the continuum-mechanical model is due to the few 3D elements. Together with the required ghost elements, each processor has to compute (i) 8 FEs when 1 processor is used, (ii) 8 FEs when 2 processors are used, and (iii) 6 FEs when 4 processors are used. (All elements that share a surface with an actual element of the domain are ghost elements.) For practical applications, however, a finer discretization of the continuum-mechanical model is desirable to achieve a higher accuracy and a better approximation of the muscle's geometry. Furthermore, the use of more fibers is preferable for a realistic muscle simulation.

Within this work, different time step sizes are used for the solution of the different submodels. Critical time step sizes for the bioelectrical model have already been investigated in Davidson [52]. Here, the model behavior for different time step sizes of the continuum-mechanical model is investigated. Figure 5 shows the stress evolution of a shortening contraction of a muscle that is uniformly stimulated at 50 Hz. The results for three different time step sizes (0.1 ms, 0.5 ms, and 2.0 ms) are shown. The solutions for the two smaller time steps almost coincide (red dashed line and blue crosses), while the solution for the largest time step size (2.0 ms) shows significant deviations and oscillatory behavior.

3.2. Force-Velocity Relation

Under nonisometric conditions, the force-velocity relation plays an important role in skeletal muscle simulations. To illustrate the influence of the velocity on the force, a geometrically simple model is examined.
Again, a rectangular tissue block with uniform fiber direction and 2 cm length is first stretched in fiber direction by 20% to reach the optimal muscle length. In a second step, all fibers are jointly stimulated at 50 Hz, and the muscle specimen is allowed to shorten at a prescribed velocity. The numerical experiment is repeated under isometric conditions and at 1, 10, and 25% of the maximum shortening velocity of 200 mm/s. The results are depicted in Figure 6. The model predicts lower forces at higher velocities. The decline in force at shortening velocities of 10 and 25% of the maximum shortening velocity is a direct result of the force-length relationship: at higher velocities, the muscle reaches, within a shorter amount of time, lengths at which it can produce much less force. To segregate the influence of the force-length relation, Figure 7 shows the same results as Figure 6, however plotting the force versus the actual length of the specimen.

3.3. Feasibility of the Framework and Code

To demonstrate the ability of the chemoelectromechanical model to represent a realistic muscle, a model of a tibialis anterior (TA) muscle is generated. The geometrical representation of the TA is based on the Visible Human data set [57], and the fiber direction is based on diffusion tensor MRI data. The geometric model has previously been used in Röhrle et al. [18]. Within the present contribution, 10 MUs and stimulation frequencies between 6 and 30 Hz are assumed for the TA model. Detailed information on the methodology of assigning MU fiber distributions is given in [18]. The motor endplates are assumed to be located at the center of the fibers, where a depolarizing current is injected at the times of stimulation. The numerical experiment is carried out under isometric conditions. Figure 8 shows the geometry of the TA muscle and the fiber distribution (a). The fibers show the local membrane potential distribution (blue indicates the resting potential, red indicates the depolarized state). Further, the normalized muscle fiber membrane voltage (blue), the normalized free calcium concentration in the myoplasm (green), and the normalized active stress (red) are plotted versus time for MUs 2, 4, 6, 8, and 10 (see Figure 8(b)).

4. Discussion

From a modeling point of view, this work appeals to a very complex biophysical half-sarcomere model describing the entire ECC. The model contains a large number of parameters. Many of these parameters are difficult to determine, and only a few are available for any given muscle and species. The most trustworthy parameter sets are probably given by Shorten et al. [11], who validated their model against experimental data on the effect of different electrical stimulation patterns on force production in soleus and extensor digitorum longus (EDL) muscles of mice for slow-twitch and fast-twitch fibers, respectively. Using the described detailed biophysical model within the proposed multiscale framework provides the basis for testing different physiological hypotheses and investigating different skeletal muscle phenomena, such as fatigue, signaling pathways, residual force enhancement/depression, myopathies, or the influence of drugs, in future studies. Although the ECC model of Shorten et al. [11] describes many aspects of the entire pathway from electrical stimulation to force production, it does not consider the titin filament, which has recently gained attention in the literature [58, 59].
Nevertheless, a model representing the effect of the titin filament, for example, the one by Rode et al. [60], could be included in the model of Shorten et al. [11] if conditions are of interest in which the titin filament is expected to have a significant influence. Further, the modeling assumption that a fiber can be represented as a 1D geometrical object implies that all parallel aligned sarcomeres within the cross section of a fiber behave identically, not allowing for sarcomere inhomogeneities within the cross section of the fiber. Moreover, the embedding of the anatomically based 1D fiber meshes within the 3D mesh for the continuum mechanics, and the homogenization process required due to the different meshes, impose a few restrictions on the micromechanical skeletal muscle model. While assuming the electrical isolation of individual fibers is physiologically valid, the proposed framework does not distinguish individual fibers or fascicles in the mechanical model. While there exist first works investigating the mechanical interaction of adjacent muscle fibers and fascicles through the extracellular connective tissue, for example, by Sharafi and Blemker [61, 62], the mechanical behavior of the fibers and the connective tissue within this framework is based on a macroscopic continuum-mechanical approach. Including micromechanical considerations within this framework, however, would lead to a computationally extremely demanding muscle model. This is particularly due to the fact that the mechanical considerations of Sharafi and Blemker [61, 62], which have only been carried out on a small block of tissue, are restricted to purely passive muscle tissue and would need to be further extended to active contractile behavior. Furthermore, material parameters of the extracellular connective tissue and stripped muscle fibers are not readily available [61], and hence a further source of uncertainty would be introduced into the model.

Within this framework, the active stresses determined in the half-sarcomere model are homogenized and included in the continuum-mechanical constitutive equation. The homogenization is required for computational efficiency. A skeletal muscle model that used the same number of elements for the bioelectrical and the mechanical problem would no longer require any homogenization; however, this approach results in a computational model that is no longer feasible for any practical application. It should be noted that the homogenization process has little effect on the convergence behavior of the mechanical problem. This has been demonstrated in Röhrle et al. [15] by maintaining a fixed number of embedded fiber models while successively refining the number of 3D mechanical elements until homogenization is no longer required. The investigation showed very good convergence properties [15] when compared to the mechanical-only problem.

Improving the constitutive equation for describing the macroscopic behavior of skeletal muscle mechanics does not apply only to its active contribution. In general, future research needs to further focus on experimental studies and continuum-mechanical material descriptions in order to develop valid constitutive equations for skeletal muscle mechanics. Within this framework, the isotropic Mooney-Rivlin material model has been extended by a contribution acting in the along-fiber stretch regime for describing the transversely isotropic material behavior of passive muscle tissue.
The anisotropic contribution to the passive behavior is negligibly small in the small-strain regime, and hence the Mooney-Rivlin parameters can be used to characterize the passive material behavior around the reference configuration. Based on a comparison with the infinitesimal strain theory, the consistency condition for the Mooney-Rivlin parameters yields a value for the shear modulus [28] that is close to experimentally determined values [63]. However, there is some experimental evidence that under compression passive muscle tissue exhibits a stiffer behavior in the cross-fiber direction than in the fiber direction [64]. Although this material behavior can be included in a continuum-mechanical formulation [64], the material behavior of the present contribution is isotropic in the compressive range and exhibits a transversely isotropic material behavior in the along-fiber stretch region, as in most other works in this field of research; see, for example, [6, 15]. More accurate or micromechanically based subject- or muscle-specific material parameters would be desirable but are currently not available.

Despite using two different discretizations, that is, one grid for the mechanical model and a different grid for the electrophysiological model, the computational cost of the model is still considerable. Hence, a staggered solution is proposed to further reduce the computational effort. Staggered solution schemes are often favorable when, within one model, different subsystems describe processes with very different characteristic time step sizes. The microscopic half-sarcomere model shows rapid changes and steep gradients, while the changes in the continuum-mechanical system occur at a much larger time scale. The application of the staggered solution scheme implies the following assumptions. The changes in the variables of the bioelectrical field equations, (14a) and (14b), are small within one time step of the continuum-mechanical model; that is, these changes do not have a strong effect on the continuum-mechanical system. On the other hand, the changes introduced through one solution step of the mechanical system, (2), are small; that is, not updating the mechanical fields at every time step at which the bioelectrical field equations are solved introduces a rather small error (cf. Figure 5). Based on the results depicted in Figure 5, the time step for the considered problem could be chosen even larger; however, the authors have refrained from this possibility as they have an extended framework in mind that also provides feedback from the mechanical model to the recruitment model. In that case, it is presumed that a smaller mechanical time step might be more suitable. This, however, has to be shown in future research. Equivalent assumptions have to be made for the operator split within the bioelectrical field problem, where the diffusion equation is separated from the reaction term. In contrast to staggered schemes, monolithic solution schemes do not rely on these assumptions. A monolithic scheme has been investigated for the bioelectrical field equations [65]; however, only a simple, phenomenological model for the reaction term has been evaluated. Further, Göktepe and Kuhl [66] propose a fully implicit approach for cardiac electromechanics.
As the proposed chemoelectromechanical model uses a much more detailed, biophysical half-sarcomere model for the reaction term, staggered schemes have been employed to reduce the overall cost while maintaining accuracy and stability for long stimulation periods. Further, the fact that the bioelectrical model is solved on a deforming domain (as a result of the continuum-mechanical model) makes monolithic solution schemes less straightforward to implement.

5. Conclusions

An extensible, flexible, multiscale, and multiphysics modeling framework for nonisometric skeletal muscle mechanics has been presented. The skeletal muscle model spans the entire excitation-contraction pathway using an electrophysiological membrane model, a biophysical half-sarcomere model (including the hyperbolic force-velocity relationship) for active force generation, action potential propagation along individual muscle fibers, and a continuum-mechanical description of the macroscopic muscle tissue allowing for complex interactions with surrounding tissues. The framework is based on state-of-the-art parallelization techniques providing the basis to investigate many different aspects of skeletal muscle physiology and mechanics in the future. In particular, the extensible and flexible open-source software library OpenCMISS will provide the basis for future extensions such as including the effects of titin, neurocontrol, feedback mechanisms, and many more aspects. The key to all of this is its implementation within a single framework using novel data structures, for example, FieldML and CellML, not requiring any external data exchange, staggered solution schemes addressing computational efficiency in the presence of different and separable time scales, and parallelization strategies.

Conflict of Interests

The authors do not have any conflict of interests with the content of the paper.

Acknowledgments

The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the funding programme Open Access Publishing and the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Moreover, this work was supported by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 306757 (LEAD).

References

1. G. J. van Ingen Schenau, M. F. Bobbert, G. J. Ettema, J. B. de Graaf, and P. A. Huijing, "A simulation of rat EDL force output based on intrinsic muscle properties," Journal of Biomechanics, vol. 21, no. 10, pp. 815–824, 1988.
2. M. G. Pandy, "Computer modeling and simulation of human movement," Annual Review of Biomedical Engineering, vol. 3, pp. 245–273, 2001.
3. M. Günther, S. Schmitt, and V. Wank, "High-frequency oscillations as a consequence of neglected serial damping in Hill-type muscle models," Biological Cybernetics, vol. 97, no. 1, pp. 63–79, 2007.
4. M. Günther and S. Schmitt, "A macroscopic ansatz to deduce the Hill relation," Journal of Theoretical Biology, vol. 263, no. 4, pp. 407–418, 2010.
5. P. Meier and R. Blickhan, "FEM-simulation of skeletal muscle: the influence of inertia during activation and deactivation," in Skeletal Muscle Mechanics: From Mechanisms to Function, W. Herzog, Ed., chapter 12, pp. 207–233, John Wiley & Sons, 2000.
6. S. S. Blemker, P. M. Pinsky, and S. L.
Delp, "A 3D model of muscle reveals the causes of nonuniform strains in the biceps brachii," Journal of Biomechanics, vol. 38, no. 4, pp. 657–665, 2005.
7. O. Röhrle and A. J. Pullan, "Three-dimensional finite element modelling of muscle forces during mastication," Journal of Biomechanics, vol. 40, no. 15, pp. 3363–3372, 2007.
8. F. E. Zajac, "Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control," Critical Reviews in Biomedical Engineering, vol. 17, no. 4, pp. 359–411, 1989.
9. S. S. Blemker and S. L. Delp, "Three-dimensional representation of complex muscle architectures and geometries," Annals of Biomedical Engineering, vol. 33, no. 5, pp. 661–673, 2005.
10. L. R. Smith, G. Meyer, and R. L. Lieber, "Systems analysis of biological networks in skeletal muscle function," Wiley Interdisciplinary Reviews, vol. 5, no. 1, pp. 55–71, 2013.
11. P. R. Shorten, P. O'Callaghan, J. B. Davidson, and T. K. Soboleva, "A mathematical model of fatigue in skeletal muscle force contraction," Journal of Muscle Research and Cell Motility, vol. 28, no. 6, pp. 293–313, 2007.
12. B. Hernández-Gascón, J. Grasa, B. Calvo, and J. Rodríguez, "A 3D electro-mechanical continuum model for simulating skeletal muscle contraction," Journal of Theoretical Biology, pp. 108–118, 2013.
13. J. W. Fernandez, M. L. Buist, D. P. Nickerson, and P. J. Hunter, "Modelling the passive and nerve activated response of the rectus femoris muscle to a flexion loading: a finite element framework," Medical Engineering and Physics, vol. 27, no. 10, pp. 862–870, 2005.
14. M. Böl, R. Weikert, and C. Weichert, "A coupled electromechanical model for the excitation-dependent contraction of skeletal muscle," Journal of the Mechanical Behavior of Biomedical Materials, vol. 4, no. 7, pp. 1299–1310, 2011.
15. O. Röhrle, J. B. Davidson, and A. J. Pullan, "Bridging scales: a three-dimensional electromechanical finite element model of skeletal muscle," SIAM Journal on Scientific Computing, vol. 30, no. 6, pp. 2882–2904, 2008.
16. O. Röhrle, "Simulating the electro-mechanical behavior of skeletal muscles," IEEE Computing in Science and Engineering, vol. 12, no. 6, pp. 48–58, 2010.
17. O. Röhrle, M. Sprenger, E. Ramasamy, and T. Heidlauf, "Multiscale skeletal muscle modeling: from cellular level to a multi-segment skeletal muscle model of the upper limb," in Computer Models in Biomechanics, G. A. Holzapfel and E. Kuhl, Eds., pp. 103–116, Springer, Amsterdam, The Netherlands, 2013.
18. O. Röhrle, J. B. Davidson, and A. J. Pullan, "A physiologically based, multi-scale model of skeletal muscle structure and function," Frontiers in Physiology, vol. 3, 2012.
19. C. Bradley, A. Bowery, R. Britten et al., "OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project," Progress in Biophysics and Molecular Biology, vol. 107, no. 1, pp. 32–47, 2011.
20. G. R. Christie, P. M. F. Nielsen, S. Blackett, C. P. Bradley, and P. J.
Hunter, "FieldML: concepts and implementation," Philosophical Transactions of the Royal Society A, vol. 367, no. 1895, pp. 1869–1884, 2009.
21. A. Garny, D. P. Nickerson, J. Cooper et al., "CellML and associated tools and techniques," Philosophical Transactions of the Royal Society A, vol. 366, no. 1878, pp. 3017–3043, 2008.
22. C. M. Lloyd, M. D. B. Halstead, and P. F. Nielsen, "CellML: its future, present and past," Progress in Biophysics and Molecular Biology, vol. 85, no. 2-3, pp. 433–450, 2004.
23. A. J. Fuglevand, D. A. Winter, and A. E. Patla, "Models of recruitment and rate coding organization in motor-unit pools," Journal of Neurophysiology, vol. 70, no. 6, pp. 2470–2488, 1993.
24. F. Negro and D. Farina, "Decorrelation of cortical inputs and motoneuron output," Journal of Neurophysiology, vol. 106, no. 5, pp. 2688–2697, 2011.
25. J. Bonet and R. D. Wood, Nonlinear Continuum Mechanics for Finite Element Analysis, Cambridge University Press, 2nd edition, 2008.
26. A. J. M. Spencer, "Theory of invariants," in Continuum Physics, A. Eringen, Ed., vol. 1, pp. 239–353, Academic Press, New York, NY, USA, 1971.
27. A. J. M. Spencer, Deformations of Fibre-Reinforced Materials, Oxford University Press, 1972.
28. G. A. Holzapfel, Nonlinear Solid Mechanics, John Wiley & Sons, West Sussex, England, 2000.
29. Y. Zheng, A. F. T. Mak, and B. Lue, "Objective assessment of limb tissue elasticity: development of a manual indentation procedure," Journal of Rehabilitation Research and Development, vol. 36, no. 2, pp. 71–85, 1999.
30. B. Markert, W. Ehlers, and N. Karajan, "A general polyconvex strain energy function for fiber-reinforced materials," Proceedings in Applied Mathematics and Mechanics, vol. 5, no. 1, pp. 245–246, 2005.
31. D. Hawkins and M. Bey, "A comprehensive approach for studying muscle-tendon mechanics," Journal of Biomechanical Engineering, vol. 116, no. 1, pp. 51–55, 1994.
32. A. E. Ehret, M. Böl, and M. Itskov, "A continuum constitutive model for the active behaviour of skeletal muscle," Journal of the Mechanics and Physics of Solids, vol. 59, no. 3, pp. 625–636, 2011.
33. B. R. MacIntosh, P. F. Gardiner, and A. J. McComas, Skeletal Muscle: Form and Function, Human Kinetics, 2nd edition, 2006.
34. A. V. Hill, "The heat of shortening and the dynamic constants of muscle," Proceedings of the Royal Society B, vol. 126, no. 843, pp. 136–195, 1938.
35. R. H. Adrian and L. D. Peachey, "Reconstruction of the action potential of frog sartorius muscle," The Journal of Physiology, vol. 235, no. 1, pp. 103–131, 1973.
36. W. Wallinga, S. L. Meijer, M. J. Alberink, M. Vliek, E. D. Wienk, and D. L. Ypey, "Modelling action potentials and membrane currents of mammalian skeletal muscle fibres in coherence with potassium concentration changes in the T-tubular system," European Biophysics Journal, vol. 28, no. 4, pp. 317–329, 1999.
37. E. Ríos, M. Karhanek, J. Ma, and A.
González, "An allosteric model of the molecular interactions of excitation-contraction coupling in skeletal muscle," The Journal of General Physiology, vol. 102, no. 3, pp. 449–481, 1993.
38. S. M. Baylor and S. Hollingworth, "Model of sarcomeric Ca^2+ movements, including ATP Ca^2+ binding and diffusion, during activation of frog skeletal muscle," The Journal of General Physiology, vol. 112, no. 3, pp. 297–316, 1998.
39. M. V. Razumova, A. E. Bukatina, and K. B. Campbell, "Stiffness-distortion sarcomere model for muscle simulation," Journal of Applied Physiology, vol. 87, no. 5, pp. 1861–1876, 1999.
40. M. V. Razumova, A. E. Bukatina, and K. B. Campbell, "Different myofilament nearest-neighbor interactions have distinctive effects on contractile behavior," Biophysical Journal, vol. 78, no. 6, pp. 3120–3137, 2000.
41. K. B. Campbell, M. V. Razumova, R. D. Kirkpatrick, and B. K. Slinker, "Myofilament kinetics in isometric twitch dynamics," Annals of Biomedical Engineering, vol. 29, no. 5, pp. 384–405, 2001.
42. K. B. Campbell, M. V. Razumova, R. D. Kirkpatrick, and B. K. Slinker, "Nonlinear myofilament regulatory processes affect frequency-dependent muscle fiber stiffness," Biophysical Journal, vol. 81, no. 4, pp. 2278–2296, 2001.
43. A. F. Huxley and R. Niedergerke, "Structural changes in muscle during contraction: interference microscopy of living muscle fibres," Nature, vol. 173, no. 4412, pp. 971–973, 1954.
44. C. Y. Scovil and J. L. Ronsky, "Sensitivity of a Hill-based muscle model to perturbations in model parameters," Journal of Biomechanics, vol. 39, no. 11, pp. 2055–2063, 2006.
45. O. Till, T. Siebert, C. Rode, and R. Blickhan, "Characterization of isovelocity extension of activated muscle: a Hill-type model for eccentric contractions and a method for parameter determination," Journal of Theoretical Biology, vol. 255, no. 2, pp. 176–187, 2008.
46. A. V. Hill, First and Last Experiments in Muscle Mechanics, Cambridge University Press, 1970.
47. B. R. Epstein and K. R. Foster, "Anisotropy in the dielectric properties of skeletal muscle," Medical and Biological Engineering and Computing, vol. 21, no. 1, pp. 51–55, 1983.
48. F. L. H. Gielen, W. Wallinga-de Jonge, and K. L. Boon, "Electrical conductivity of skeletal muscle tissue: experimental results from different muscles in vivo," Medical and Biological Engineering and Computing, vol. 22, no. 6, pp. 569–577, 1984.
49. A. J. Pullan, M. L. Buist, and L. K. Cheng, Mathematically Modelling the Electrical Activity of the Heart: From Cell to Body Surface and Back Again, World Scientific, 2005.
50. J. Sundnes, B. F. Nielsen, K. A. Mardal, X. Cai, G. T. Lines, and A. Tveito, "On the computational complexity of the bidomain and the monodomain models of electrophysiology," Annals of Biomedical Engineering, vol. 34, no. 7, pp. 1088–1097, 2006.
51. B. F. Nielsen, T. S. Ruud, G. T. Lines, and A. Tveito, "Optimal monodomain approximations of the bidomain equations," Applied Mathematics and Computation, vol. 184, no. 2, pp. 276–290, 2007.
52. J. B. Davidson, Biophysical modelling of skeletal muscle [Ph.D. thesis], University of Auckland, Auckland, New Zealand, 2009.
53. J. Sundnes, G. T. Lines, and A. Tveito, "An operator splitting method for solving the bidomain equations coupled to a volume conductor model for the torso," Mathematical Biosciences, vol. 194, no. 2, pp. 233–248, 2005.
54. O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu, The Finite Element Method: Its Basis and Fundamentals, vol. 1, Butterworth-Heinemann, 2005.
55. C. Linder, M. Tkachuk, and C. Miehe, "A micromechanically motivated diffusion-based transient network model and its incorporation into finite rubber viscoelasticity," Journal of the Mechanics and Physics of Solids, vol. 59, no. 10, pp. 2134–2156, 2011.
56. M. Tkachuk and C. Linder, "The maximal advance path constraint for the homogenization of materials with random network microstructure," Philosophical Magazine, vol. 92, no. 22, pp. 2779–2808, 2012.
57. V. M. Spitzer and D. G. Whitlock, "The visible human dataset: the anatomical platform for human simulation," The Anatomical Record, vol. 253, no. 2, pp. 49–57, 1998.
58. W. Herzog and T. R. Leonard, "Force enhancement following stretching of skeletal muscle: a new mechanism," Journal of Experimental Biology, vol. 205, no. 9, pp. 1275–1283, 2002.
59. D. Labeit, K. Watanabe, C. Witt et al., "Calcium-dependent molecular spring elements in the giant protein titin," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, no. 23, pp. 13716–13721, 2003.
60. C. Rode, T. Siebert, and R. Blickhan, "Titin-induced force enhancement and force depression: a 'sticky-spring' mechanism in muscle contractions?" Journal of Theoretical Biology, vol. 259, no. 2, pp. 350–360, 2009.
61. B. Sharafi and S. S. Blemker, "A micromechanical model of skeletal muscle to explore the effects of fiber and fascicle geometry," Journal of Biomechanics, vol. 43, no. 16, pp. 3207–3213, 2010.
62. B. Sharafi and S. S. Blemker, "A mathematical model of force transmission from intrafascicularly terminating muscle fibers," Journal of Biomechanics, vol. 44, no. 11, pp. 2031–2039, 2011.
63. K. Hoyt, T. Kneezel, B. Castaneda, and K. J. Parker, "Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity," Physics in Medicine and Biology, vol. 53, no. 15, pp. 4063–4080, 2008.
64. M. van Loocke, C. G. Lyons, and C. K. Simms, "A validated model of passive muscle in compression," Journal of Biomechanics, vol. 39, no. 16, pp. 2999–3009, 2006.
65. M. Munteanu and L. F. Pavarino, "Decoupled Schwarz algorithms for implicit discretizations of nonlinear monodomain and bidomain systems," Mathematical Models and Methods in Applied Sciences, vol. 19, no. 7, pp. 1065–1097, 2009.
66. S. Göktepe and E.
Kuhl, "Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem," Computational Mechanics, vol. 45, no. 2-3, pp. 227–243, 2010.
{"url":"http://www.hindawi.com/journals/cmmm/2013/517287/","timestamp":"2014-04-16T16:45:13Z","content_type":null,"content_length":"301170","record_id":"<urn:uuid:ee251d46-9d3c-4f9b-9bf4-5f3061995b9b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
The n-Category Café

The Title of This Post is False

Posted by David Corfield

Continuing our enquiry into what homotopy type theory can do for the formalisation of natural language, if we could make good sense of the phenomena of self-reference, that should pique the interest of those discontent with predicate logic. We won't be explicitly involving the homotopy aspect of the type theory here, and yet who knows whether it might assert itself, as it did last time on the subject of 'the'.

So, let's take a look at the classic liar paradox:

This proposition is false.

Now, what rules of type formation would allow such a proposition-as-type, $Id_{Prop}(This \: proposition, \bot)$?

Well, section 4.14 of Aarne Ranta's book Type-Theoretical Grammar discusses indexicals: $\frac{A: Type \qquad a : A}{this(A, a): A}.$

So to say 'this proposition' we had better already have established a type of propositions, $Prop$, as well as a term of that type which we intend to refer to as 'this proposition'. The idea of the paradox is that we have such a term. 'This proposition is false' is supposed to be the term of type $Prop$: $Id_{Prop}(This \: proposition, \bot) : Prop$. But we can't form this term without forming 'this proposition'. We seem to need this($Prop$, this proposition): $Prop$, but we can't form this since we haven't formed 'this proposition'. Is it as simple as that – incorrect term formation? Was there a whiff of fixed-pointness here, ringing inductive or coinductive bells? What of the Gödel sentence which states of itself that it cannot be proved?

Posted at February 6, 2013 10:01 AM UTC

Re: The Title of This Post is False

This has nothing to do with type theory or homotopy-theoretic foundations. You can perform exactly the same argument in ordinary first-order logic equipped with Russell's descriptive operator ɩ, or in a number of other formal systems. They typically resolve the paradox in exactly the same way as you did, by observing that there is no way of forming "this proposition". The usual attempts at self-referential propositions involve fixed-point operators. A naive attempt to define your proposition would be as the fixed point of negation "fix (λ p : Prop . ¬p)". But now we have to argue that such a fixed point exists, which it does not, so people typically change the rules of the game (by changing the logic, or negation, or in some other way). Gödel's argument goes through thanks to the arithmetic fixed-point theorem, where the fixed point crucially passes through a level of encoding that makes everything magically tick. Think of the encoding as a layer of encapsulation that "freezes" the infinite descent into an abyss of self-reference. A programmer would say that we "suspend the recursive call by thunking it". So, no homotopy type theory here folks.

Posted by: Andrej Bauer on February 6, 2013 11:12 AM | Permalink | Reply to this

Re: The Title of This Post is False

Thanks for your thoughts. Am I detecting something more than just a 'you don't need type theory for this' point in what you wrote? Even were it the case that other resolutions are possible, still we might be interested in what type theory, which we may like for a host of other reasons, has to say about self-reference.

Posted by: David Corfield on February 6, 2013 12:12 PM | Permalink | Reply to this

Re: The Title of This Post is False

You are detecting a slight dissatisfaction with the idea that (homotopy) type theory is the new panacea.
At the very least you should put in proper historical context a type-theoretic account of standard material which has been studied very satisfactorily to death in other settings. A youngling who is just learning type theory and is (rightfully) fascinated by it might think, after reading your post, that type theory is bringing something new to the subject of self-reference and paradoxical statements. I see absolutely nothing new here.

Posted by: Andrej Bauer on February 6, 2013 1:22 PM | Permalink | Reply to this

Re: The Title of This Post is False

If that's the impression I gave, thanks for redressing the balance. I approach these themes as someone historically suspicious of analytic philosophy's formalizations. Just yesterday, someone in my department gave a talk on why 'Logic is not mathematical', arguing to this conclusion because Fregean-style logic does not handle mass nouns, self-reference and anaphoric expressions well. In the past, I would have cheered this along, but seeing the glimmer of hope that dependent type theory may not fare too badly here, I wanted to see what was possible.

I'm not sure who your 'younglings' are, but I do know if they come from philosophy they are very unlikely to have heard of dependent type theory. You have only to search the main online encyclopedia in English-language philosophy to see that little is generally known.

Posted by: David Corfield on February 6, 2013 3:23 PM | Permalink | Reply to this

Re: The Title of This Post is False

I think Andrej might be a little concerned because recursive (i.e., self-referential) types have been a central object of study in programming language semantics from its origin, and while the type-theoretic approach certainly clarifies traditional accounts of self-reference, it does not really contradict them.

A recursive type $\mu \alpha.\;A(\alpha)$ is a type where $\alpha$ is the recursive self-reference. If you place no restrictions on how the self-referential variable $\alpha$ can occur in $A(\alpha)$, it is easy to show that all types are inhabited, by showing that a fixed-point combinator $(X \to X) \to X$ is typeable. Here's a sketch (rendered runnably just below):

Let $X$ be some type, and let $A \triangleq \mu \alpha.\; (\alpha \to X)$. Next, define $h : (X \to X) \to A \to X$ as the term $\lambda f:X \to X, a:A.\; f(a\, a)$. To see that this is well-typed, note that since $a$ has the type $A \triangleq \mu \alpha.\; \alpha \to X$, it also has the type $A \to X$, and so it can be applied to itself. Now that we have $h$, we can define the fixed point combinator $C : (X \to X) \to X$ as $\lambda f.\; (h\;f)\;(h\;f)$. Again, to see that this is well-typed, note that $h\;f$ has the type $A \to X$, and so by folding the recursive type up, it also has the type $A$. This fixed-point combinator is, essentially, a proof term for Curry's paradox. And indeed, you'll find that most of the paradoxes of self-reference have a corresponding fixed point combinator in the typed lambda calculus.
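Neel's sketch runs as-is in any language with unrestricted recursive types. The following Haskell rendering is an editorial illustration of the same construction (not code from the thread); `Rec x` plays the role of $\mu\alpha.\,(\alpha \to X)$:

```haskell
-- mu a. (a -> x), as an unrestricted recursive type.
newtype Rec x = Rec { unRec :: Rec x -> x }

-- h f a = f (a a): well-typed because unfolding (Rec x) lets a be applied to itself.
h :: (x -> x) -> Rec x -> x
h f a = f (unRec a a)

-- The fixed-point combinator: fixC f reduces to f (fixC f), for an arbitrary x.
-- Read logically, "(x -> x) -> x for every x" is exactly Curry's paradox.
fixC :: (x -> x) -> x
fixC f = h f (Rec (h f))
```

GHC accepts this without complaint, because Haskell never promises termination; in a total proof assistant the `Rec` declaration itself would be rejected, since the recursive variable occurs in a negative position.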
Here are some salient observations about this. None of them are particularly novel, but I think the type-theoretic view makes them especially clear.

1. The fact that the Liar involves falsehood is a red herring. Curry's paradox doesn't care what the type $X$ is, and so the paradox goes through even in minimal logic.

2. The derivation of the fixed-point combinator from recursive types was completely syntactic. As a result, every model of logic which validates (a) contraction (i.e., $A \implies A \wedge A$), (b) modus ponens, and (c) unrestricted self-reference is inconsistent.

3. Inconsistency in a logic corresponds to Turing-completeness. This is what originally motivated the study of recursive types in language semantics: models of the untyped lambda calculus (which is Turing complete) would need to satisfy the isomorphism $V \simeq V \to V$.

4. However, eliminating unrestricted contraction does block Curry's paradox. In linear logic (without exponentials), it is possible to allow unrestricted self-reference without inconsistency. See, for instance, Kazushige Terui's Light Affine Set Theory, in which he gives a set theory using linear logic as its ambient logic, for which naive, unrestricted comprehension is sound.

Posted by: Neel Krishnaswami on February 7, 2013 10:04 AM | Permalink | Reply to this

Re: The Title of This Post is False

Thanks for bringing this up, Neel! Personally, I think your points 3 and 4 suggest one general way in which type theory does have something new to contribute to a discussion about self-reference and paradox. (I mean "new" to someone familiar only with set theory, not "new" in any absolute sense, since type theory has been around for quite a while now.)

Namely, another big difference between type theory and set theory that hasn't come up much in recent posts is that type theory is also a programming language: every expression, and in particular every proof, can be interpreted as a program and "executed". And if we phrase your point 3 a little differently, it says that logical inconsistency is about nontermination of proofs.

I think this idea is intuitive even from a set-theoretic perspective (though not precise). Consider Russell's paradox $R = \{ x \mid x \notin x \}$. In order to decide whether $R \in R$, we ask whether $R \notin R$, which requires us to know whether $R \in R$, which prompts us to ask whether $R \notin R$, in an "infinite spiral" which will never terminate.

Moreover, as you know (but others may not), the standard way that one proves consistency of a type theory is by showing that the execution of all proofs does terminate. More specifically, one shows:

1. Each step in the execution of a program preserves its type ("preservation");
2. If a program has not yet yielded a "value" (a canonical form of its type), then it can be executed further ("progress"); and
3. The execution of every program terminates.

Therefore, if there were a proof (i.e. a program) of type False, it could be executed until it reached a value. But the type False is the positive type with no generators, hence has no canonical forms and no values, so no such program can exist.

Of course, by Gödel's incompleteness theorem, this proof can't be carried out in the same type theory: you need a metatheory strong enough to carry out the proof of termination. Such proofs are generally by induction, because roughly speaking, the way to prove termination is to assign some kind of "invariant" to each program in such a way that each execution step decreases the invariant with respect to some well-founded ordering. (I think this gives a nice "explanation" of why "logical strength is measured by the length of the well-founded inductions we can carry out", which is roughly the same sort of thing that ZF-theorists have discovered to be true in using large (well-ordered) cardinals as a yardstick for consistency strength.)
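The "infinite spiral" above can be watched happening in a language with general recursive types. This is a toy rendering of my own, not anything from the comment: Russell's paradox as a well-typed but nonterminating program.

```haskell
-- A "set" is determined by its membership test on sets.
newtype Set = Set { memberOf :: Set -> Bool }

-- R = { x | x is not a member of x }
r :: Set
r = Set (\x -> not (memberOf x x))

-- "Is R a member of R?" unfolds to its own negation, forever:
--   rInR = not rInR = not (not rInR) = ...
rInR :: Bool
rInR = memberOf r r   -- type-checks, but evaluation never terminates
```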
One way to break such a consistency proof is to introduce "axioms" that don't compute. This marks an important stylistic difference between type theory and set theory: in set theory, where everything is already phrased in terms of axioms, one is free (from a technical perspective) to introduce arbitrary new axioms, with only the problem of convincing the reader that they are reasonable. In type theory, by contrast, one generally avoids asserting "axioms" since they cause the execution of proofs as programs to get stuck. Instead, we extend type theories by introducing new type-forming operations, which are always expected to adhere to certain patterns ensuring that programs can still be executed. (This is one of the biggest things we don't understand yet about homotopy type theory: how to phrase univalence and higher inductive types in this sort of way.)

Another way to break such a consistency proof is, as you said, to introduce constructions that cause nontermination. Recursive types are one of these. Another is a self-containing universe "$Type:Type$", which leads to Girard's paradox (a type-theoretic version of Burali-Forti's paradox) and Coquand's paradoxes (type-theoretic versions of the Russell–Cantor paradox).
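An editorial aside, not Mike's: GHC's Haskell is a system that simply accepts the self-containing universe, because it never claimed to be a consistent logic in the first place (general recursion already supplies nontermination).

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import Data.Kind (Type)

-- In GHC the kind of Type is Type itself, i.e. the universe contains itself.
-- Harmless for programming; in a total proof assistant the same choice would
-- admit Girard-style paradoxes, i.e. nonterminating "proofs".
type SelfContainingUniverse = (Type :: Type)
```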
I think all this suggests a different philosophical perspective on self-reference and inconsistency, namely: the problem is not self-reference; the problem is non-termination.

In other words, the devices we use to ensure consistency, such as replacing general recursive types with inductive and coinductive ones, or replacing a self-containing universe with a hierarchy of universes ($Type_0:Type_1$, $Type_1:Type_2$, …), should be viewed merely as technical devices to ensure termination, not fundamental or conceptual restrictions on the theory. In particular, there are plenty of "self-referential" programs/proofs which are perfectly okay (in that their execution terminates), but whose termination is not "detected" by these devices. It's well-known that there are general recursive programs which terminate on any input, but are not describable using the sort of higher-order primitive recursion allowed by an ordinary natural numbers type. (Indeed, the Halting Problem implies that this must be the case.) And the examples of NF and linear set theory show that a self-containing universe is not intrinsically inconsistent either; by imposing different restrictions on the allowable proofs about a self-containing universe, we can still ensure termination/consistency.

In particular, we can resolve the "problem of universes" in mathematical foundations, in a way which I find particularly attractive as a category theorist. Namely, we can believe (at least at a philosophical level) that there "really is" a set of all sets, or a type of all types, or a category of all categories. We just have to ensure somehow that when talking about these things, we only write down terminating proofs. One convenient way to do this, of course, is by assigning a number to each reference to the universe, in such a way that whenever we use the fact that the universe contains itself, the "containing" copy has a larger number. From a practical perspective, this is just universe polymorphism with typical ambiguity. But philosophically it is different: rather than the universe levels "really being there" despite our desire not to have to talk about them, there is "really only one universe" while we are using a technical numbering device to ensure that our proofs terminate. The numbers assigned to each universe occurrence are not an intrinsic part of the proof, any more than the well-founded invariants assigned to different parts of a computer program by an automated termination-checker are an intrinsic part of the code.

Posted by: Mike Shulman on February 8, 2013 12:11 AM | Permalink | Reply to this

Re: The Title of This Post is False

"The fact that the Liar involves falsehood is a red herring. Curry's paradox doesn't care what the type X is, and so the paradox goes through even in minimal logic."

Is the statement "This statement is true." a paradox? Can't you just say that the statement is true? If it is not a paradox, then your system should allow "this statement is true" while not allowing "this statement is false".

Posted by: Jeffery Winkler on February 12, 2013 10:12 PM | Permalink | Reply to this

Re: The Title of This Post is False

It's not a paradox, but it also doesn't have a definite truth value. You're free to make it either true or false, without producing a contradiction.

Posted by: Walt on February 12, 2013 10:58 PM | Permalink | Reply to this

Re: The Title of This Post is False

If you have contraction, modus ponens, and unrestricted self-reference, you will be able to give a proof of "this statement is true", because you can give a proof of the derivability of any proposition. The system is inconsistent! To say something more useful than that, you need to restrict one of those three features.

The usual thing people do is to keep contraction and modus ponens, and limit the kinds of self-referential expressions that are allowed. The general recipe for doing this is to observe that in a proposition $\mu \alpha.\; A(\alpha)$, we are essentially taking the fixed point of a function on truth values $\lambda \alpha.\, A(\alpha)$. This means that you can take your favorite fixed point theorem from mathematics, and turn it into a restriction on the kinds of recursive definition that you want to allow.

Logicians tend to turn by reflex to the Knaster-Tarski theorem, which says that any monotone functional $f : L \to L$ on a complete lattice $L$ has a set of fixed points which form a complete lattice. (In particular, $f$ has a least and a greatest fixed point.) Now:

1. Note that the booleans are a complete lattice (choosing true above false, say),
2. "this statement is true" corresponds to the proposition $\mu \alpha.\;\alpha$, and the identity function $\lambda \alpha.\;\alpha$ is trivially monotone,
3. and therefore $\mu \alpha.\;\alpha$ has a least and a greatest fixed point.

If you choose to interpret $\mu$ as a least fixed point, then the truth value of $\mu \alpha.\;\alpha$ is false. If you choose to interpret it as a greatest fixed point then its truth value is true. Both of these are consistent choices, just as Walt notes. If your model of truth values is richer than the booleans, you might find yourself using other fixed point theorems (such as Banach's theorem, which comes up a lot in temporal logic).

Posted by: Neel Krishnaswami on February 15, 2013 11:22 AM | Permalink | Reply to this
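Neel's Knaster-Tarski recipe can be spelled out on the two-element lattice in a few lines. This is my toy computation, assuming a monotone input function:

```haskell
-- Least/greatest fixed points on the lattice Bool (False below True),
-- computed by iterating from bottom resp. top until stable.
lfp, gfp :: (Bool -> Bool) -> Bool
lfp f = go False where go x = if f x == x then x else go (f x)
gfp f = go True  where go x = if f x == x then x else go (f x)

-- "This statement is true" is mu a. a, i.e. the identity function:
--   lfp id == False   and   gfp id == True
-- Both are consistent readings, exactly as Walt said. For a non-monotone f
-- such as not ("this statement is false") the iteration never stabilizes.
```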
Re: The Title of This Post is False

If getting explicit self-reference is a problem, would Yablo's Paradox be any easier to start with? How would we encode the infinite family of sentences:

$(S_i)$: for all $k \gt i$, $S_k$ is false

Posted by: Stuart Presnell on February 6, 2013 1:01 PM | Permalink | Reply to this

Re: The Title of This Post is False

No, it does not help at all. Yablo's paradox just obscures the self-reference. While each statement in the sequence is not self-referential, the sequence itself is.

Posted by: Andrej on February 6, 2013 1:23 PM | Permalink | Reply to this

Re: The Title of This Post is False

I like Yablo's Paradox, because it makes clear what self-reference really means. (Or maybe, with Yablo, we think that it doesn't involve self-reference, in which case we see what is really important.)

Everybody agrees that

$S_1$: $S_1$ is false.

is self-referential. Not much harder is that

$S_1$: $S_2$ is true.
$S_2$: $S_1$ is false.

is self-referential. And of course

$S_1$: $S_2$ is true.
$S_2$: $S_3$ is true.
$S_3$: $S_1$ is false.

is self-referential. And so on. So a loop of any length is self-referential. Now, Yablo's paradox takes a limit of these paradoxes that has no loops. But it is still ill-founded. An ill-founded finite relation must have loops, but an ill-founded infinite relation need not have them. So to avoid self-reference (or whatever you want to call this feature of potential paradox), you need the reference relation between sentences (where $S$ is related to $T$ iff $S$ is referred to by $T$) to be well-founded. (This is potentially very interesting, because much of proof theory is about which proof systems are able to prove which relations are well-founded.)

Posted by: Toby Bartels on June 25, 2013 3:43 AM | Permalink | Reply to this

Re: The Title of This Post is False

I wrote in small part: "Yablo's paradox takes a limit of these paradoxes". Sorry, it's not actually a limit of the ones that I wrote down, but that's not essential to the point.

Posted by: Toby Bartels on June 25, 2013 8:45 PM | Permalink | Reply to this

Re: The Title of This Post is False

On a similar topic, I came up with an interesting type-theoretical variant of Loeb's theorem: Consider some sort of type theory. There is a set $\mathrm{Type}$ of types in this type theory and a dependent family of sets $\mathrm{Term} : \mathrm{Type} \to \mathrm{Set}$ for the terms of each type. Moreover, this type theory should be sophisticated enough to describe itself. Therefore, there should also be an $\mathrm{IntType} \in \mathrm{Type}$ and an $\mathrm{IntTerm}$ with $T : \mathrm{IntType} \vdash \mathrm{IntTerm}(T) : \mathrm{Set}$. Now, for any $T \in \mathrm{Type}$, there is a corresponding $\mathrm{Int}(T) \in \mathrm{Term}(\mathrm{IntType})$, and for every $t \in \mathrm{Term}(T)$, there is a corresponding $\mathrm{Int}(t) \in \mathrm{Term}(\mathrm{IntTerm}(\mathrm{Int}(T)))$.

Now the theorem is this: Given $A \in \mathrm{Type}$, and given a function inside the type theory $f \in \mathrm{Term}(\mathrm{Int}(A) \to A)$, there is a term $\mathrm{fix}(f) \in \mathrm{Term}(A)$ such that $\mathrm{fix}(f)$ can be reduced to $f(\mathrm{Int}(\mathrm{fix}(f)))$.

Posted by: Itai Bar-Natan on February 6, 2013 1:42 PM | Permalink | Reply to this
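Stripped of the quoting layer $\mathrm{Int}(-)$, Itai's statement collapses to the ordinary fixed point available in any lazy, partial language. This gloss is mine, not his construction; the point of his theorem is precisely that the quoting makes a typed, total version possible:

```haskell
fix :: (a -> a) -> a
fix f = f (fix f)   -- fix f reduces to f (fix f); compare fix(f) ~> f(Int(fix(f)))
```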
{"url":"http://golem.ph.utexas.edu/category/2013/02/the_title_of_this_post_is_fals.html","timestamp":"2014-04-16T10:11:41Z","content_type":null,"content_length":"64435","record_id":"<urn:uuid:9804cef5-fa91-40b4-aeec-493e6e922bea>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
More On Geometric Langlands (a Grand Unified Theory of Math?)

After mentioning in the last posting that Witten is giving talks in Berkeley and Cambridge this week, I found out about various recent developments in Geometric Langlands, some of which Witten presumably will be talking about.

Edward Frenkel has put a draft version of his new book Langlands Correspondence for Loop Groups on his web-site. In the introduction he describes the Langlands Program as "a kind of Grand Unified Theory of Mathematics", initially linking number theory and representation theory, now expanding into relations with geometry and quantum field theory. The book is nearly 400 pages long, and to be published by Cambridge University Press.

Frenkel also notes that recent developments in geometric Langlands have focused on extending the story from the case of flat connections on a Riemann surface to connections with ramification (i.e. certain point singularities are allowed). He has a new paper out on the arXiv about this, entitled Ramifications of the geometric Langlands program, and he writes that:

in a forthcoming paper [by Gukov and Witten] the geometric Langlands correspondence with tame ramification is studied from the point of view of dimensional reduction of four-dimensional supersymmetric Yang-Mills theory.

The title of the forthcoming Gukov-Witten paper is supposedly "Gauge theory, ramification, and the geometric Langlands program." Presumably people attending Witten's talks in Berkeley and Cambridge will get to hear about this new story for the ramified case. For the rest of us, on his web-site David Ben-Zvi has notes from talks this summer by Witten at Luminy where he describes some of this.

Ben-Zvi also has an announcement of a series of lectures on geometric Langlands that he'll be giving at Oxford next April. The summary of the lectures says that he'll "describe upcoming work of Gukov and Witten which brings together geometric Langlands and link homology theory." Link homology theory is also known as Khovanov homology, and I wrote about this two years ago here, advertising Atiyah's speculation that there may be a 4d TQFT story going on, something I always have found very intriguing. Ben-Zvi has recently lectured on Khovanov homology at Austin, and began his lecture by saying that this material relates "themes in 21st century representation theory" to 4d TQFT. He goes on to cover some of the ideas about 4d TQFT and "categorification" that I was very impressed by when I heard about them from a talk by Igor Frenkel a few months ago (described here).

At first I thought Ed Frenkel's claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I'm more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years.

Update: Slides from Witten's Berkeley lectures are here. And many thanks to David Ben-Zvi for the informative comments!

50 Responses to More On Geometric Langlands (a Grand Unified Theory of Math?)

1. Hi Peter, Witten has only delivered one lecture so far, and it was devoted to reviewing background material: mostly S-duality and a few words about topological twisting, all of which can be found in the Kapustin-Witten paper.

2. Thanks A.J.! It would be great if you could keep us informed about the rest of the lectures…
3. It sounds like they are doing interesting math, but leaving physics to the LQG crowd.

4. I agree with SFB on the "interesting math", but not on the "LQG crowd".

5. "At first I thought Ed Frenkel's claim that geometric Langlands was going to give a Grand Unified Theory of mathematics was completely over the top, but seeing how some of these very different and fascinating relations between new kinds of mathematics and quantum field theory seem to be coming together, I'm more and more willing to believe that investigating them will come to dominate mathematical physics in the coming years."

Perhaps a domination of mathematical physics, but the claim of a grand unification of mathematics is in fact way over the top unless you believe that mathematics is nothing but mathematical physics. It probably all depends on your own personal values, biases, points of view, and even whom you believe owns mathematics. Recall Lubos' wild claim that someday mathematics will be completely subsumed by string theory?

6. I expect many of the people who have been working on geometric Langlands for years would be kind of shocked to be called mathematical physicists, Richard. Do they all instantly become mathematical physicists just because Witten got interested in what they're doing?

7. Onymous – I don't believe I said that.

8. Apologies, I misread Peter's original statement — didn't notice that he specifically singled out mathematical physics — and so misinterpreted your "…unless you believe that mathematics is nothing but mathematical physics" as an implication that geometric Langlands is mathematical physics. Never mind.

9. Hi and thanks for the references! (all notes on my page should be taken with many grains of salt..) I should point out that the preprint by Gukov and Witten doesn't actually talk at all about link homology, so my talk description was perhaps premature, but a connection between geometric Langlands and some kind of link homology is to be expected following their ideas (cf Gukov's Strings talk). Cautis and Kamnitzer also have very interesting work in progress on such a relation. After all, geometric Langlands is a very general categorification program in representation theory, so one would expect it to relate to the kinds of categorifications that give rise to Khovanov homology. There just aren't too many fundamental structures associated with a semisimple Lie group, and they all connect..

Of course it's a joke to speak of geometric Langlands as a grand unified theory… but the Langlands duality is certainly among the broadest themes in math, a kind of nonabelian generalization of the Fourier transform, and it's extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!

10. David Ben-Zvi says "it's extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories!" Can you expand on that? Sounds very interesting.

"it's extremely exciting that we can view it in the geometric setting as electric-magnetic duality in four dimensional gauge theories! Can you expand on that? Sounds very interesting."

This is the insight of the Kapustin-Witten paper. You can find a summary here.

"a kind of nonabelian generalization of the Fourier transform"

Is it a nonabelian generalization, or isn't it rather a categorification of the Fourier transform? It seemed to me that much of Langlands can be nicely understood as taking place in categorified linear algebra.
I have made remarks on how the Hecke operator looks like a 2-linear map, for instance here.

"It would be great if you could keep us informed about the rest of the lectures…"

If anyone feels like reporting on interesting lectures online, we have a guest account for that over on the n-Café. For instance we had David Roberts guest-reporting from a lecture by Brian Wang here, similar to the many guest reports we had at the string coffee table.

14. Urs: "Is it a nonabelian generalization, or isn't it rather a categorification of the Fourier transform?"

well it's both.. the main difficulty is the nonabelian nature rather than the categorification, and that is where Langlands tells us what to do (in the geometric or classical, noncategorified setting). Categorifications of the Fourier transform have been used for almost 30 years I think (starting with the Fourier-Deligne transform, see eg Laumon's first ICM), and the geometric Langlands program suggests that one can extend this to nonabelian settings (G-bundles on curves).

By the way maybe this is an excuse to air one of my pet peeves, the use of the term "Fourier-Mukai" to refer to any functor between derived categories given by an integral kernel.. I would be surprised if an analyst referred to any map on function spaces given by integration against a kernel (or any matrix) as a Fourier transform, and the same should hold in the categorified setting — in some precise sense (due to Toen, and which I'm badly paraphrasing) all functors between derived categories are given by integral kernels! "Honest" Fourier-Mukai transforms should have additional structure and properties (for example taking convolution to tensor product). Similarly not any duality is a T-duality!

"well it's both.. [...] Categorifications of the Fourier transform have been used [...] and the geometric Langlands program suggests that one can extend this to nonabelian settings [...]"

Great, thanks! That's what I was hoping some expert would say. Probably I just talked to the wrong experts so far! Because each time I'd ask a question along the lines "isn't an eigenbrane just a categorified eigenvector in some 2-vector space" the answer I'd get would be something like "no, 2-vector spaces only appear after we categorify Langlands itself, like Kapranov discussed."

16. I don't know much category theory, but I thought that the non-Abelian generalization of the Fourier transform is the character expansion (or Plancherel transform in the non-compact case) for functions on non-Abelian groups. Aside from a character formula, that is the simplest generalization. Obviously, I am missing the point and something deeper is meant. Can anyone explain this to a dumb theoretical physicist?

17. I just wanted to add that the sort of examples I mentioned don't help much with non-Abelian duality in classical or quantum field theory. To perform a duality transformation, a zero-curvature condition is Fourier transformed and the parameter integrated over is the dual field. This only really works in the Abelian case. There are non-Abelian generalizations of duality done this way, but they are rather messy, and not obviously useful.
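As a baby illustration of why the Abelian case is tractable, and of Ben-Zvi's criterion that an "honest" Fourier transform should take convolution to (pointwise or tensor) product, here is an editorial sketch on the cyclic group Z/n, not code from the thread:

```haskell
import Data.Complex

-- Discrete Fourier transform on Z/n.
dft :: [Complex Double] -> [Complex Double]
dft xs = [ sum [ x * cis (-2 * pi * fromIntegral (j * k) / n) | (x, j) <- zip xs [0 ..] ]
         | k <- [0 .. length xs - 1] ]
  where n = fromIntegral (length xs)

-- Cyclic convolution on Z/n.
conv :: [Complex Double] -> [Complex Double] -> [Complex Double]
conv f g = [ sum [ f !! j * g !! ((k - j) `mod` n) | j <- [0 .. n - 1] ] | k <- [0 .. n - 1] ]
  where n = length f

-- Up to floating-point rounding:  dft (conv f g) == zipWith (*) (dft f) (dft g).
```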
18. Well, Witten finished his lectures, but ran out of time to say much of anything about ramification. There's just too much information to be covered in (somewhat less than) 3 hours. Most of what he said is pretty well covered in David Ben-Zvi's notes, and in Urs's posts on the subject, or in the Kapustin-Witten paper for that matter. We did get scans of his notes, so perhaps those will be available online one of these days.

"Can anyone explain this to a dumb theoretical physicist?"

One way to get an intuition for what is going on with these Hecke operators and similar transformations is to consider the drastically oversimplified baby toy example where the underlying spaces are in fact just finite sets. A vector bundle over a finite set is then just an array of finitely many vector spaces. Think of that as a vector whose entries are vector spaces. Such a beast is known as a (Kapranov-Voevodsky) 2-vector. The categorification involved here is that which takes the monoid of complex numbers and replaces it by the monoidal category of complex vector spaces. So we can imagine doing linear algebra with these vectors whose entries are vector spaces by replacing sums of complex numbers by direct sums of vector spaces and products of complex numbers by tensor products of vector spaces.

In particular, let X and Y be two finite sets and consider a vector bundle L over X x Y. By the above, this is now like a |X| x |Y| matrix with entries being vector spaces. Using the above dictionary, we can define the categorified matrix product of L with a 2-vector over Y, simply by using the ordinary prescription for matrix multiplication but replacing sums of numbers by direct sums of vector spaces and products of numbers by tensor products of vector spaces.

One can convince oneself that this categorified action of a 2-matrix on a 2-vector can equivalently be reformulated in a more arrow-theoretic way as follows: We have projections p1 and p2 from X x Y to X and to Y, respectively. This makes X x Y into a span. Given a 2-vector V -> X over X, we may pull it back along p1 to X x Y, tensor the result componentwise with L and push the result of that back along p2. This operation produces precisely the naive categorified matrix product that I mentioned above.

But the nice thing is that this pullback-tensor-pushforward along a "correspondence" like X x Y generalizes to vastly more interesting situations. There is an entire zoo of well-known operations of this kind. The Fourier-Mukai transformation is one example. The Hecke transformation that appears in geometric Langlands is another. In the above sense, all of these operations can be understood as linear maps on 2-vector spaces. A description of what I just said, including some helpful diagrams and links to further material, can be found here:

20. Concerning the abelian vs. nonabelian categorified Fourier transform: there is something called the "classical limit" of geometric Langlands, as described for instance here. The Hecke operation in geometric Langlands is a generalization of the categorified Fourier transformation: it is a "2-linear map" in the sense of my comment above such that it coincides with the Fourier-Mukai transformation in this "classical limit". In other words, the Hecke operation is a deformation of the Fourier-Mukai transformation.
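The finite-set toy case above fits in a few lines once one decategorifies a step further and records each vector space only by its dimension, so that direct sum becomes addition and tensor product becomes multiplication. A sketch of mine, not from the comment:

```haskell
type Dim       = Integer
type TwoVector = [Dim]     -- a "vector of vector spaces" over a finite set
type TwoMatrix = [[Dim]]   -- a vector bundle L over X x Y, as a matrix of spaces
                           -- (rows indexed by the output set, columns by the input set)

-- The categorified matrix action (pull back, tensor with L, push forward)
-- becomes ordinary matrix-vector multiplication on dimensions.
act :: TwoMatrix -> TwoVector -> TwoVector
act l v = [ sum (zipWith (*) row v) | row <- l ]
```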
21. I never understood what is the relation of elliptic cohomology (not that I don't know what it presents mathematically, since I have followed the area with an ever increasing distance since the days of the Atiyah-Singer index theorem) with particle physics, except that Witten has generated a certain enthusiasm with some particle physicists. Since I have learned to make a distinction between physics and what (some) physicists are doing, and since this blog (as Peter's book) is primarily about the present state of particle physics, I think it is a legitimate question to ask about its relation to particle physics. If this is not permitted then this will be my last contribution to this blog.

22. To Peter Orland: You are correct about the Plancherel theorem. But that tells you that if you know the irreducible representations, and their dimensions/characters, you know how to decompose functions. It doesn't tell you what the characters are. In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group. The categorification nonsense is an elaboration of this, to say *all* information you can extract comes from this dual group.

23. Urs and Anon, Thanks for the responses. I understand that a character formula of some sort is needed to make Plancherel meaningful. What I worry about is that even with such a character formula, there isn't enough for non-Abelian electromagnetic duality. In fact, I am skeptical a USEFUL duality for pure Yang-Mills theorists exists. To carry out a duality transformation, the Bianchi identity needs to be imposed by integrating over a new field (in 3+1 dimensions, this field is a one-form). Then we would like to integrate out the original gauge field to obtain an action in this new field. Doing this in practice is tough. There are tricks for doing it with certain character formulas, but the dual theory is a mess, since the dual fields are discretely valued (o.k. on the lattice, but without a good continuum interpretation). Are these new techniques somehow better? If so, it would be very interesting.

24. "In the first instance Langlands is a parameterization of irreducible reps, and a determination of their character; roughly they are in bijection with conjugacy classes in another group."

That's the original "algebraic" Langlands thing.

"The categorification nonsense is an elaboration of this, to say *all*"
Interesting, so after all elliptic cohomology isn’t about particle physics it is rather about ST. Yes, check out the table at the beginning of the introduction of those notes. Generalized cohomology theories are labelled by something called their “chromatic filtration”. The idea is that a cohomology theory of chromatic level p comes from the physics of “p-particles” – otherwise known as (p-1)-branes. K-cohomology has filtration 1. It corresponds to 1-particles (0-branes). Ordinary points, that is. Elliptic cohomology has filtration 2. It corresponds to 2-particles, otherwise known as 1-branes or strings. Ordinary (singular) cohomology has filtration level 0. There is a precise sense in which it corresponds to 0-particles (or (-1) branes). I expect this table is open ended. But I have never seen anything about cohomology theories of chromatic filtration larger than 2. I am skeptical a USEFUL duality for pure Yang-Mills theorists exists. It is a famous conjecture that 4-dimensional Yang-Mills theory has a duality called S-duality. Yang-Mills theories (in a given dimension, for a fixed number of supercharges) are parameterized by a complex number tau , the coupling constant, and a Lie group the gauge group. For N=4 supersymmetric Yang-Mills, there is conjectured to be an isomorphism between Yang-Mills theory for and that for (-1/tau , G^L) . -1/tau is, roughly, the inverted coupling constant (therefore: “weak-strong coupling duality”) and G^L is the Lie group that is Langlands dual to G. See the first few paragraphs of this, for instance. That this is indeed an isomorphism of field theories is not a theorem, but it is supported by enough evidence that makes everybody assume it is indeed true. This is the S-duality conjecture. Since the Langlands dual group appears in this conjecture, it has long been speculated that there is indeed a relation between S-duality and the Langlands program. But until recently nobody could really substantiate this. The achievement of the Kapustin-Witten work is to show that for the special case that the 4-dimensional Yang-Mills theory is suitably compactified down to two dimensions, the S-dualiy conjecture for Yang-Mills theory is essentially equivalent to the geometric Langlands conjecture. All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory. Notice that this amounts to further support for the S-duality conjecture, because it increases the number of people that truest the S-duality conjecture by those mathematicians that trust the geometric Langland conjecture. But it might also be noteworthy that this suggests that the geometric Langlands duality is only a tiny aspect of a much bigger story – since it is (apparently) just the special case of S-duality applied to a very specific compactification of Yang-Mills theory only. 30. Urs, Yes, I know about the S-duality conjecture (I would much more interested in a similar conjecture about pure Yang-Mills than N=2 or N=4 Yang-Mills. Theories with adjoint matter are very different from those we know about in nature). Though a conjecture is nice, to really prove it operator equivalences are needed. The procedure I discussed before, character expansions of the Bianchi identity, etc., is the first step to find such equivalences. In Abelian theories, this is how Kramers-Wannier duality works. 
There are some non-Abelian constructions due to Sharachandra and Anishetty, they haven’t proved useful yet. 31. Urs, Their’s a mild caveat to be added to your statement that All the ingredients of geometric Langlands, like those moduli spaces of bundles and the derived coherent sheaves on them, can be understood in terms of field configurations and boundary conditions of compactified N=4 super Yang-Mills theory. The geometric Langlands correspondence is stated in terms of D-modules on the moduli stack of not-necessarily stable G-bundles. Kapustin & Witten’s work doesn’t quite give full information about the moduli stack, but only its semi-stable locus. As I far as I can tell, the relation between N=4 SYM and the Langlands correspondence for D-modules on the full stack hasn’t been completely spelled out. 32. Question to Urs. First of all thanks a lot for all your explanations. You work on cool stuff anyway ( though it is a little over my head at this time ). Do I understand your research program correctly when I assume that You try to link the standard model and ST in purely algebraic terms by means of higher category theory? Hence when changing the algebraic setting they do not look much different but are connected through certain higher morphisms? 33. Peter Orland Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions. Whereas o-d is a phenomenon which has a solid operator algebraic intrinsic understanding (if you want I can provide you with recent literature) there is nothing like this for the S conjecture. By now Wikipedia has more material on wild conjectures than about genuine results. There is the danger that we may be fooled to our own simulacrums and metaphors in particular that conjectures solidify because they comes from somebody with a high status in the community or because they have been hanging around for a long time so that several generations have stepped on them. wild conjectures S-duality is certainly a conjecture, but hardly a wild conjecture. I mean, that’s the point: S-duality is apparently as wild as geometric Langlands. Kapustin & Witten’s work doesn’t quite give full information about the moduli stack, but only its semi-stable locus. Right, thanks. There are probably a couple of such technicalities. I am not working on this stuff, so it’s hard to keep them all in mind. So what about that “classical limit” in which, apparently, geometric Langlands is only proven so far. Does compactified SYM exactly coincide with the geometric Langlands data in that limit? 36. A couple of comments: Kapustin-Witten’s theory does (as far as I understand) cover the full stack of bundles, not just the semistable locus. The sigma-model/mirror symmetry description fails outside the semistable locus, but they emphasize in the paper that the gauge theory sees the entire stack of bundles — I think the problem is us geometers have only been able in the past really to process the classical aspects of the theory (solns of the equations of motion etc) but quantum gauge theory is a lot smarter than we are (speaking for myself at least). As far as I know they can’t completely say what S-duality predicts off the semistable locus, but the important point is it does actually apply there. 
The classical limit of Langlands is only proven generically, missing the hardest locus — it’s a beautiful result and one of the best in the subject, but saying classical geometric Langlands is understood is on the same level as saying you understand (noncompact) Lie groups when you understand their diagonalizable elements – the hardest part involved unipotents.. Also I’m not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli of bundles (and sheaves on them), while the Fourier-Mukai type transforms relate G and G^ the dual group. One sense (of many) in which geometric Langlands is a nonabelian categorified generalization of the Fourier transform is that while Plancherel helps you decompose spaces of functions on a group, geometric Langlands type results help you decompose the CATEGORY of all representations of a group — since these categories are not semisimple there’s a big difference between listing irreducibles and their characters and actually describing the structure of general representations. (Geometric Langlands ideas can be used to study for example the category of Harish-Chandra modules for a real semisimple Lie group). 37. Bert, I cannot understand your explanation especially well. In my attempt to translate your statement into simple language, I conclude you mean more conjectures than solid statements dominate our field. I don’t need to be reminded of this, since I have seen it all over the literature for the last decade or so. I was asking if the experts on Langlands believe a useful concrete electric-magnetic duality transformation can be constructed from non-Abelian Fourier transforms (character expansions). I suspect the answer is no, since no one gave me a simple “yes”. 38. I’m probably mixing algebraic number theory with analytic number theory but is there a relationship between elliptic cohomology and elliptic Mobius transformations? Also I’m not sure I would think of Hecke operators as Fourier transforms – the Hecke operators are the symmetries of moduli of bundles Oh, sorry, I misspoke if I said that. The Langlands correspondence is analogous to the Fourier transform, exchanging skyscraper sheaves (analogous to delta-functions) with Hecke-eigensheaves (analogous to plane waves). So, in this analogy, the Hecke operator is like a categorified derivative. 40. I am afraid the sad truth is the answer is “no”. It is better to live in quantum reality than to become complacent with a Disney version of it. I was not trying to explain anything in technical terms but only pointing to the obvious observation that Kramers-Wannier on a microscopic level (achieved by Leo Kadanoff) was quantum from the beginning whereas the Seiberg Witten duality is from a physical Disney dreamland which precisely of this is so useful to a large part of mathematics. The kind of mathematics for which it had no use is the operator-algebraic mathematical setting of QT which dates back to von Neumann and has been enriched by the locality principle in AQFT. 
By the way the manner Kadanoff has extracted (noncommutative) operator commutation relations for the (what we nowadays call) the Ising primary fields from the Euclidean lattice setting (via a partially guessed properties of the transfer matrix formalism) had my deep admiration; the Leitmotiv of all my work with Swieca in the early 70s was related to adapt ate Kadanoff’s order/disorder ideas to the continuous setting of QFT; in many cases we even succeeded to read this back into a continuous functional integrals setting by using an Aharonov-Bohm analog language. Later, when I was working with Rehren on an algebraic approach to chiral conformal QFT I remembered those Kadanoff ideas and we found a completely explicit operator version of an “exchange algebra” for the conformal Ising field theory from which it was possible to compute its n-point Wightman functions. A historical review can be found in but thinking about this now, I should have written much more about Leo Kadanoff’s contributions; he really deserved a Nobel prize together with Wilson. In those days we also convinced ourselves that this order-disorder idea has no electric-magnetic counterpart in the full QFT setting. 41. Bert, I also worked extensively on duality. Like you, I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. But there are intriguing exceptions of systems with non-Abelian systems which do have duality transformations and disorder operators. In my Ph.D. thesis I found lattice systems with permutation-group $S_{N}$ symmetry which have nontrivial duals. But I will spare people here from a list of more publications on the subject. Peter (O.) 42. Conceptual realism demands to separate Kramers-Wannier duality (and its structural extension the order-disorder issue) from speculative ideas. The o-d duality is a local quantum physical phenomenon which has no known analog in higher dimensions. The 3D Ising model on a cubic lattice is Kramers-Wannier dual to Ising gauge theory on the same lattice. Why is this not o-d duality in higher dimensions? I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. Is this saying that you consider the S-duality conjecture to be in fact false? If so, I’d be interested in the details of the assumptions that go into this. I recall that Bert Schroer was (similarly ?) claiming that the AdS/CFT duality conjecture (in the sense of Maldacena) is false, and that the correct duality statement was along the lines of Rehren’s work. In that case I got the impression that two rather different concepts were being compared, and that in fact Rehren’s work had little relation to the setup considered by Maldacena et al. Compare for instance Jacques Distler’s account. The crucial difference in this case is that Rehren’s work was based on a fixed and precise axiom set, while Maldacena’s work uses notions of quantum field theory that have not been axiomatized For people like Bert Schroer this is reason enough to completely reject all QFT that does not fit into the AQFT axioms. For other people, in contrast, the restrictive applicability of the AQFT axioms is reason enough to reject those. To some extent it is a matter of taste concerning which role of rigour you find useful in physics research. I can easily tolerate both these standpoints. But I would like to know in each case which one is assumed by which participant. 44. 
Urs, You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM. There’s a beautiful story about duality in non-supersymmetric abelian gauge theories, and many people (including Peter) have tried hard to generalize this to the non-abelian case. I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no. 45. Urs, Sorry that I am giving long-winded answers to your questions. I am mainly interested in advancing methods in asymptotically-free field theories and in constructions which could eventually facilitate calculations. I try to learn other stuff, because I can’t predict what I may need to know in the future. But I am more interested in theoretical, rather than mathematical physics (as people abuse use the term nowadays, to study mathematical techniques, rather than to prove theorems). I believe (after some years of trying to show the contrary) there is no USEFUL version of Kramers-Wannier duality which is true for PURE non-Abelian gauge theories. There are non-Abelian dualities for some special $S_N$-invariant systems, which I mentioned above (there is also non-Abelian Bosonization in two dimensions). The general problem for duality in non-Abelian theories is constructing dual fields with local commutation or anti-commutation relations. Supersymmetric or other theories with adoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian. These theories are interesting in their own right, but to my way of thinking, they are not as important as Yang-Mills theories coupled only to fundamental (not adjoint) Fermion color charges, or pure Yang-Mills theories. There are other notions of duality in QCD. The ‘t Hooft loop is the disorder operator. Unfortunately, there is probably no useful local dual-field-theory formulation for which it is the order You keep ignoring the fact that Peter Orland is asking about pure YM theory, not N=4 SYM. In as far as I am ignoring anything, it is not on purpose. I’d be glad to be enlightened. Maybe I found Peter Orland’s statement I concluded that there is no simple operator equivalence between a non-Abelian gauge theory and its dual. seemed to refer to arbitrary gauge theories. I gather that he’s trying to understand whether geometric Langlands gives any insight into that problem, and as far as I can tell, the answer is just no. Hm, maybe here is the source of the misunderstanding. Kapustin-Witten show that geometric Langlands does give insight into the type of duality present in N=4 SYM. So in far as this is different to other types of duality, geometric Langlands apparently does not apply to these. Supersymmetric or other theories with adoint matter have some sort of charge-monopole duality – but such theories are effectively Abelian. Could you expand on what you mean by “effectively abelian” here? Thanks! 47. Urs, By “effectively Abelian”, I mean that that the magnetic-monopole charge is well-defined and quantized. In QCD or pure Yang-Mills, there is no precise definition of magnetic-monopole charge. In the Georgi-Glashow model (an the related deformation of N=2 supersymmetric gauge theory) a Higgs field breaks the gauge group down to the Cartan subgroup. Thus there are Abelian monopoles, with quantized charge, etc. These theories have a confined phase for sufficiently small monopole mass, which goes back to Polyakov’s observations in the 70′s. 
Duality for such theories is not so different from those of Abelian Wilson lattice gauge theories. They are, however, quite different from QCD. Now there is an old result made by many people (Fradkin, Shenker, Rabinovici and others) that there is little difference between a Higgs field in a gauge theory and a scalar field in that gauge theory without a Higgs potential. The basic point is that the operator creating a massive vector Boson in the Higgs theory looks just like the operator creating a “meson” built from scalars in the confined phase. From this point of view, any theory with scalar matter is not so different from a Higgs theory. In particular, it is possible to define magnetic charge, no matter what the scalar potential happens to be. So in such theories charge-monopole duality is a sensible concept. The reason why the possibility of duality for Yang-Mills theories is interesting is because it could yield insight into the confinement phase. Some sort of magnetic condensation occurs, producing confinement and a mass gap, as simulations show, but we want to know why. 48. Off-topic mathematical physics fun: Andre LeClair is claiming there’s a physical system, which, on physical grounds, suggests the Riemann hypothesis is true. Are there any experts around to comment on whether it’s plausible? 49. For those like me who don’t know much about the Langlands programme but would like to, a useful account is an older one by Frenkel: `Lectures on the Langlands Program and conformal field theory’, This entry was posted in Uncategorized. Bookmark the permalink.
{"url":"http://www.math.columbia.edu/~woit/wordpress/?p=492","timestamp":"2014-04-21T15:05:08Z","content_type":null,"content_length":"105805","record_id":"<urn:uuid:887157b5-0cc9-4113-84e4-e433cf011e81>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
The Science of Conjecture: Evidence and Probability Before Pascal by James Franklin Johns Hopkins University Press; 485 pp. $55 What do we know, and how surely do we know it? The general answer was given by Aristotle in the Nicomachean Ethics: certainty can be found only in mathematics, all other knowledge being to some degree doubtful. Much evil has been let loose upon the world by defiance of, or exaggeration of, this simple truth: at the one extreme, by the belief that absolute certainty can be found in non-mathematical dogmas, and at the other, by the vulgar conclusion that since certainty is not possible outside mathematics (nor even inside it, according to a few bold theorists), everything we think we know is really just a set of epiphenomenal delusions arising from our personal and social circumstances. The natural fruit of the first of these errors is obscurantist tyranny; of the second, that mendacious solipsism Americans have come to know so intimately well, according to which, since nothing can be known, language has no content and no purpose but the manipulation of the world for the gratification of our private appetites. Whether that second folly has any worse mischief to unload on us than the indignities we suffered during the 42nd Presidency, we shall eventually find out, since it has colonized a large part of our academic life, and seems still to be increasing its hold on the minds of the intelligent young. Aristotle's observation implies that most of what we can hope to know must emerge from the weighing of probabilities. To what branch of human knowledge does this weighing of probabilities, this "science of conjecture," as James Franklin calls it, itself belong? Without thinking very much, most of us moderns would make a paradox of the whole thing by replying: "to mathematics." The fact that we can give this answer at all, and be partly right in giving it, is a wonderful thing in itself, almost a miracle. That mathematics, our only stock of certain knowledge, can be used with great precision to tell us useful things about the uncertain majority of human experience, is astounding. It poses, in fact, deep philosophical questions to which convincing answers are in short supply. In mathematical statistics, for example, there is an entity named the Poisson distribution, used for estimating the occurrence of rare events. It was first derived in 1837 by the French mathematician Siméon-Denis Poisson from a study of deaths by horse kicks in the Prussian army. It turns out that if you list the number of cavalry corps in which there were no deaths, one death, two deaths, three deaths, … the numbers you have listed follow an elegant mathematical formula. When I first encountered this in my studies I easily mastered the math but got stuck on the metaphysical question: How did the horses know when to stop kicking? I have still not seen any answer that leaves me entirely satisfied. The first significant results in the mathematical theory of probability were given to us by Pierre Fermat and Blaise Pascal, who developed the fundamental principles in a correspondence undertaken during the year 1654, in which they discussed two problems posed by the Chevalier de Méré, a professional gambler. (The problems were, first, how to divide the stakes of an unfinished game of chance between two players when one of them is ahead, and second, to quantify the odds in dice-throwing.) 
In The Science of Conjecture, James Franklin has set out to provide a full account of all non-mathematical approaches to probabilistic reasoning prior to that annus mirabilis and also, in a brief but very useful epilogue, to summarize subsequent non-mathematical developments. This means that he has embraced a very wide field of inquiry indeed, taking in practically all the major intellectual disciplines and pseudo-disciplines, from medicine to moral theology, from rhetoric to astrology. He begins with Bishop Butler's phrase: "Probability is the very guide of life." He ends with a stirring, and very timely, defense of rational judgment against "the forces of unreason" that are on the loose in our academies. Franklin teaches mathematics at the University of New South Wales in Sydney, Australia. The author reminds us that pre-modern thinkers had a keen grasp of the "science of conjecture" long before that science was quantified. Lawyers were specially skilful at weighing, and displaying, probabilities in a convincing way. Charged with having spoken against the supremacy of his King in matters religious, Sir Thomas More was confronted with just one witness, an inveterate liar by reputation. He dealt with that witness's testimony thus: Can it therefore seem likely vnto your honorable Lordshipps that I wold, in so weyghty a cause, so unadvisedly overshoote myself as to trust master Rich, a man of me alwaies reputed for one of so litle truth … that I wold vnto him vtter the secreates of my consciens towchinge the Kings supremacye? … A thinge which I neuer wold, after the statute thereof made, reveale either to the kings highnes himself, or to any of his honorable councellours … Can this in your iudgments, my lordes, seeme likely to be true? Under the circumstances, it was not in the least likely — though, alas, Sir Thomas went to the block anyway. The weighing of evidence in courtrooms is, of course, still conducted today — a good illustration of the fact that non-mathematical reasoning about probabilities did not stop abruptly in 1654, and is in fact never likely to stop. Even in our own extremely mathematical age, such methods still form much the larger part of probabilistic thinking and arguing. Even, in fact, in areas where the matters under discussion are of a strictly scientific nature, and in theory quantifiable, the mathematical calculus of probability is often of very little help: think of meteorology. Franklin notes in passing, as many others have done, that mathematics herself, though her truths must always be demonstrated deductively, most often advances by induction and intuition. This function behaves like this here … and here … and here. Perhaps it behaves like this everywhere! Let's see if I can construct a deductive proof … In an analogy I like very much, the sociologist Erving Goffman speaks of the "front" and the "back" of intellectual work, comparing such work to what goes on in a theater or a restaurant, where the smooth, disciplined, orderly "front" for presentation to the public is supported by a noisy, chaotic "back" where professionals prepare the dishes, or don the costumes, amid much yelling and banging and breakage. The converse is also true. Just as the post-Pascalian world is rich in unquantified and unquantifiable reasoning about probability, so, it turns out, the ancient and medieval world was by no means innocent of numerical methods for dealing with chance. Gamblers — at any rate, successful gamblers — must always have had some notion of "odds." 
We know that they did, for related terms escaped into ordinary language: "vernacular quantification," Franklin calls it, and quotes passages like Sir Andrew Aguecheek's "it's four to one she'll none of me" in Twelfth Night. More surprising, at any rate to me, is Franklin's account of the medieval trade in annuities, in which "[m]onasteries were among the principal sellers … and churchmen common among the buyers." It was all squared with the Church's prohibitions of usury by dint of some ingenious reasoning, notably in Alexander Lombard's Treatise on Usury of 1307. Lombard's main point: the contract is illicit only when one party has notably the better side. If the right price can be found, given the probabilities, then no wrong has been done.

Franklin presents these topics in a chapter headed "Aleatory Contracts: Insurance, Annuities and Bets," the best part of the book, for my money. I was also surprised to learn that the first English state lottery was organized as early as 1566. "The public showed a certain skepticism about the government's honesty …" Franklin notes drily, and only 34,000 of the 400,000 tickets were sold. Apparently it was not only in their appreciation of drama that the Elizabethan public was more sophisticated than ourselves.

This is not an easy book to read, though it is easier towards the end than at the beginning. I am not sure that Franklin found the best method of organizing his material; however, this is not a very constructive criticism, as I don't see how a net cast so wide can bring in anything other than an unwieldy mass. The author's style is at any rate clear and fluent, with an occasional sly Gibbonian aside to make the reader chuckle. Of the Jacobean jurist Sir Edward Coke's argument that "the Judge ought to be … for the party indifferent," Franklin observes:

The Jesuits no doubt remained skeptical of the "indifference" of English judges, especially those Jesuits personally tortured by Coke.

Franklin lets all the important sources speak for themselves, in many long quotations — a sensible way to present material of this sort, I think. I learned a lot from The Science of Conjecture. I am glad to have read it, and shall keep it for its reference value. I cannot say I ever picked it up eagerly, though, and I set it aside at last with some relief. This is a dense, quite difficult and often very dry account of a large and important subject.
{"url":"http://www.johnderbyshire.com/Reviews/Math/conjecture.html","timestamp":"2014-04-16T07:43:31Z","content_type":null,"content_length":"13732","record_id":"<urn:uuid:7721258c-475d-4fe9-888a-14c3890650e8>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Write the point-slope form of the equation passing through (5, -1) with a slope of 6.

Could someone please take the time to help me understand this? These are the answer choices:
y-1=6(x+5)
y+5=6(x-1)
y+1=6(x-5)
y-5=6(x+1)

Point-slope form: y-y1=m(x-x1), where m = slope.

y-(-1)=6(x-5)
y+1=6(x-5)

There is only one set of numbers; how can I find m?

The slope is given in the question; see it.

Oh, so m=6.

Any more questions about this?

Am I using y=mx+b?

You can't use that, because you don't know b (the y-intercept).

OK, then what? y=mx?

y=mx+b is slope-intercept form. This is point-slope form. You use point-slope form. All you do is plug in the numbers. That's all you do. Look at what I drew; it explains it all. m is the slope and (x, y) is the point given.

Oh, and if I suddenly go offline, it's because we don't have electricity and my battery is dying.

First, we find the y-intercept. -1 = 6(5) + b

1-1(1)=6 5-51)

Those two are a part of the same equation.

Once you solve for b, you have your slope (m=6) and y-intercept. I calculated it and found b = -1 - 30 = -31.
y = 6x + (-31), therefore y = 6x - 31.

So we are using y=mx+b?

Um, how did you find b?

Do what Kymber did. That's another way to write the equation of a line. You end up with the same result.

It says "point-slope form"; that is not y=mx+b. The answer choices you gave are in point-slope form. I don't know why Brinethery is talking about slope-intercept form.

I have told you already.
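Summary: the correct choice is the third one, y + 1 = 6(x - 5). Substituting the given point confirms it: -1 + 1 = 6(5 - 5) gives 0 = 0. Expanding it also recovers the slope-intercept form found above: y + 1 = 6x - 30, so y = 6x - 31.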
{"url":"http://openstudy.com/updates/50994006e4b085b3a90d9b2f","timestamp":"2014-04-18T03:42:10Z","content_type":null,"content_length":"268037","record_id":"<urn:uuid:6b1bf022-c08f-4472-824d-f0067aa1692f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Rotation Statistics

I was bored, so I decided to look up some stats. Below are the 2011 FIP, and Bill James' projected 2012 FIP, of all the possible SP candidates for the Orioles in 2012. Enjoy, for whatever it's worth.

Zach Britton
2011 FIP: 4.0
2011 xFIP: 4.12
Bill James 2012 FIP Projection: 3.84

Jeremy Guthrie
2011 FIP: 4.48
2011 xFIP: 4.47
Bill James 2012 FIP Projection: 4.56

Brian Matusz
2011 FIP: 7.66
2011 xFIP: 5.22
Bill James 2012 FIP Projection: 4.55

Jake Arrieta
2011 FIP: 5.34
2011 xFIP: 4.52
Bill James 2012 FIP Projection: 4.79

Tommy Hunter
2011 FIP: 4.48
2011 xFIP: 4.28
Bill James 2012 FIP Projection: 4.48

Chris Tillman
2011 FIP: 3.99
2011 xFIP: 4.83
Bill James 2012 FIP Projection: 4.82

Brad Bergesen
2011 FIP: 4.92
2011 xFIP: 4.52
Bill James 2012 FIP Projection: 4.63

Dana Eveland (only 30 innings in 2011)
2011 FIP: 3.19
2011 xFIP: 3.60
Bill James 2012 FIP Projection: 3.81

No projections for Tsuyoshi Wada available yet.

Re: Rotation Statistics
Wow, never would have thought Tillman had the lowest on the team last year (amongst starters).

Re: Rotation Statistics
Matt P wrote: Wow, never would have thought Tillman had the lowest on the team last year (amongst starters).
His numbers are REALLY weird. His xFIP and ERA are like a full run higher than his FIP.

Re: Rotation Statistics
I'm also pretty shocked at how low he projects Eveland's to be next season. These projections are always ridiculously favorable to the player though and rarely end up being right.

Re: Rotation Statistics
Matt P wrote: I'm also pretty shocked at how low he projects Eveland's to be next season. These projections are always ridiculously favorable to the player though and rarely end up being right.
Yea, the Eveland projection is pretty lolworthy. He has him only pitching 59 innings though, so it's not even projecting him to be a SP anyways. Fun to look at though.
Last edited by Tucker Blair on December 20th, 2011, 10:47 pm, edited 1 time in total.

Re: Rotation Statistics
Oh, I thought that was projected as a full season of starting.

Re: Rotation Statistics
Matt P wrote: Oh, I thought that was projected as a full season of starting.
haha, I did too, until I went back and looked at it. My bad.

Re: Rotation Statistics
Matt P wrote: Wow, never would have thought Tillman had the lowest on the team last year (amongst starters).
He was very lucky with HRs. If you look at Tillman's xFIP you will see it is a lot higher.
JordanTuwiner.com | Bitcoin tips: 1J9qpCdrcZCwntz1nYEE8AeH9vdBJcqqPW

Re: Rotation Statistics
I don't see why everyone thinks Bill James is God. When it comes to projections, I don't think he knows his *** from his elbow.

Re: Rotation Statistics
birdwatcher55 wrote: I don't see why everyone thinks Bill James is God. When it comes to projections, I don't think he knows his *** from his elbow
Have you ever thought that he might have the same opinion about you?

Re: Rotation Statistics
ofahn wrote:
birdwatcher55 wrote: I don't see why everyone thinks Bill James is God. When it comes to projections, I don't think he knows his *** from his elbow
Have you ever thought that he might have the same opinion about you?
You're a funny guy. You should be on stage. Seriously, I lost all confidence in James for a number of reasons. A classic James projection: 12/62/.285 for Felix Pie.

Re: Rotation Statistics
According to Baseball-Reference, Felix Pie would have the following line per 162-game schedule, and it is based on his actual numbers.
I do not know what James' projection was based on (playing time, place in batting order, park factors, etc...) but your 12/62/.285 line is not that far off of reality.

428 - PA
392 - AB
52 - Runs
98 - Hits
18 - 2B
5 - 3B
7 - HR
39 - RBI
.249 - BA
.298 - OBP
.374 - SLG
.673 - OPS

You picked one player, out of a possible 1,200 that get projected in any given year, and the projection was not that far from reality (12 hits).
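For anyone following along who has never computed the stat under discussion: FIP ("Fielding Independent Pitching") is built only from strikeouts, walks, hit batsmen and home runs, scaled to look like an ERA. A minimal sketch of the usual FanGraphs-style formula; the league constant genuinely varies by season, so the 3.10 default here is only a typical value:

    def fip(hr, bb, hbp, k, ip, league_constant=3.10):
        # FIP = (13*HR + 3*(BB + HBP) - 2*K) / IP + constant
        # The constant rescales FIP onto the league-average ERA scale;
        # 3.10 is a typical value, but the true figure varies by year.
        return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + league_constant

xFIP is the same calculation with the pitcher's actual home runs replaced by what a league-average HR-per-fly-ball rate would predict, which is why a homer-lucky season (like Tillman's 2011) shows a FIP well below its xFIP.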
{"url":"http://orioles-nation.com/forums/viewtopic.php?p=5451","timestamp":"2014-04-16T14:44:54Z","content_type":null,"content_length":"57685","record_id":"<urn:uuid:34ff95e8-a4d0-4a76-ab6e-72e9ef0aeeb2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with function

For a function of one variable, I use the map command quite a bit to evaluate the function at several values of the independent variable. How may I do something similar for a function of two variables? Can the map command be used to evaluate f at each ordered pair? Maybe I'm not even using the correct data structure. Thanks!
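Here is one way to think about the question, sketched in Python rather than Maple (the function f and the data below are invented stand-ins, not the poster's code). The idea carries over directly: map a one-argument wrapper over the list of pairs, or zip two coordinate lists together.

    from itertools import starmap

    def f(x, y):                 # stand-in for a two-variable function
        return x**2 + y

    pairs = [(0, 1), (2, 3), (4, 5)]

    # A binary function over ordered pairs: unpack each pair.
    values = list(starmap(f, pairs))        # [1, 7, 21]

    # Equivalently, with separate coordinate lists:
    xs, ys = [0, 2, 4], [1, 3, 5]
    values2 = list(map(f, xs, ys))          # same result

So the ordered-pair data structure is fine; the trick is unpacking each pair before applying f.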
{"url":"http://www.mapleprimes.com/tags/function?page=5","timestamp":"2014-04-19T07:09:37Z","content_type":null,"content_length":"95104","record_id":"<urn:uuid:40af91d2-e52c-4c9c-8ce5-37be4eb9e4b5>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring the Quantity of Heat

On the previous page, we learned what heat does to an object when it is gained or released. Heat gains or losses result in changes in temperature, changes in state or the performance of work. Heat is a transfer of energy. When gained or lost by an object, there will be corresponding energy changes within that object. A change in temperature is associated with changes in the average kinetic energy of the particles within the object. A change in state is associated with changes in the internal potential energy possessed by the object. And when work is done, there is an overall transfer of energy to the object upon which the work is done. In this part of Lesson 2, we will investigate the question: how does one measure the quantity of heat gained or released by an object?

Specific Heat Capacity

Suppose that several objects composed of different materials are heated in the same manner. Will the objects warm up at equal rates? The answer: most likely not. Different materials would warm up at different rates because each material has its own specific heat capacity. The specific heat capacity refers to the amount of heat required to cause a unit of mass (say a gram or a kilogram) to change its temperature by 1°C. Specific heat capacities of various materials are often listed in textbooks. Standard metric units are Joules/kilogram/Kelvin (J/kg/K). More commonly used units are J/g/°C.

The specific heat capacity of solid aluminum (0.904 J/g/°C) is different from the specific heat capacity of solid iron (0.449 J/g/°C). This means that it would require more heat to increase the temperature of a given mass of aluminum by 1°C compared to the amount of heat required to increase the temperature of the same mass of iron by 1°C. In fact, it would take about twice as much heat to increase the temperature of a sample of aluminum a given amount compared to the same temperature change of the same amount of iron. This is because the specific heat capacity of aluminum is nearly twice the value of iron.

Heat capacities are listed on a per gram or per kilogram basis. Occasionally, the value is listed on a per mole basis, in which case it is called the molar heat capacity. The fact that they are listed on a per amount basis is an indication that the quantity of heat required to raise the temperature of a substance depends on how much substance there is. Any person who has boiled a pot of water on a stove undoubtedly knows this truth. Water boils at 100°C at sea level and at slightly lowered temperatures at higher elevations. To bring a pot of water to a boil, its temperature must first be raised to 100°C. This temperature change is achieved by the absorption of heat from the stove burner. One quickly notices that it takes considerably more time to bring a full pot of water to a boil than to bring a half-full pot of water to a boil. This is because the full pot of water must absorb more heat to result in the same temperature change. In fact, it requires twice as much heat to cause the same temperature change in twice the mass of water.

Specific heat capacities are also listed on a per K or a per °C basis.
The fact that the specific heat capacity is listed on a per degree basis is an indication that the quantity of heat required to raise a given mass of substance to a specific temperature depends upon the change in temperature required to reach that final temperature. In other words, it is not the final temperature that is of importance, it is the overall temperature change. It takes more heat to change the temperature of water from 20°C to 100°C (a change of 80°C) than to increase the temperature of the same amount of water from 60°C to 100°C (a change of 40°C). In fact, it requires twice as much heat to change the temperature of a given mass of water by 80°C compared to the change of 40°C. A person who wishes to bring water to a boil on a stovetop more quickly should begin with warm tap water instead of cold tap water.

This discussion of specific heat capacity deserves one final comment. The term specific heat capacity is somewhat of a misnomer. The term implies that substances have a capacity to contain heat. As has been previously discussed, heat is not something that is contained in an object. Heat is something that is transferred to or from an object. Objects contain energy in a variety of forms. When that energy is transferred to other objects of different temperatures, we refer to the transferred energy as heat or thermal energy. While it's not likely to catch on, a more appropriate term would be specific energy capacity.

Relating the Quantity of Heat to the Temperature Change

Specific heat capacities provide a means of mathematically relating the amount of thermal energy gained (or lost) by a sample of any substance to the sample's mass and its resulting temperature change. The relationship between these four quantities is often expressed by the following equation.

Q = m•C•ΔT

where Q is the quantity of heat transferred to or from the object, m is the mass of the object, C is the specific heat capacity of the material the object is composed of, and ΔT is the resulting temperature change of the object. As in all situations in science, a delta (Δ) value for any quantity is calculated by subtracting the initial value of the quantity from the final value of the quantity. In this case, ΔT is equal to T[final] - T[initial].

When using the above equation, the Q value can turn out to be either positive or negative. As always, a positive and a negative result from a calculation has physical significance. A positive Q value indicates that the object gained thermal energy from its surroundings; this would correspond to an increase in temperature and a positive ΔT value. A negative Q value indicates that the object released thermal energy to its surroundings; this would correspond to a decrease in temperature and a negative ΔT value.

Knowing any three of these four quantities allows an individual to calculate the fourth quantity. A common task in many physics classes involves solving problems associated with the relationships between these four quantities. As examples, consider the two problems below. The solution to each problem is worked out for you. Additional practice can be found in the Check Your Understanding section at the bottom of the page.

Example Problem 1
What quantity of heat is required to raise the temperature of 450 grams of water from 15°C to 85°C? The specific heat capacity of water is 4.18 J/g/°C.

Like any problem in physics, the solution begins by identifying known quantities and relating them to the symbols used in the relevant equation.
In this problem, we know the following:
m = 450 g
C = 4.18 J/g/°C
T[initial] = 15°C
T[final] = 85°C

We wish to determine the value of Q - the quantity of heat. To do so, we would use the equation Q = m•C•ΔT. The m and the C are known; the ΔT can be determined from the initial and final temperature.

ΔT = T[final] - T[initial] = 85°C - 15°C = 70.°C

With three of the four quantities of the relevant equation known, we can substitute and solve for Q.

Q = m•C•ΔT = (450 g)•(4.18 J/g/°C)•(70.°C)
Q = 131670 J
Q = 1.3x10^5 J = 130 kJ (rounded to two significant digits)

Example Problem 2
A 12.9 gram sample of an unknown metal at 26.5°C is placed in a Styrofoam cup containing 50.0 grams of water at 88.6°C. The water cools down and the metal warms up until thermal equilibrium is achieved at 87.1°C. Assuming all the heat lost by the water is gained by the metal and that the cup is perfectly insulated, determine the specific heat capacity of the unknown metal. The specific heat capacity of water is 4.18 J/g/°C.

Compared to the previous problem, this is a much more difficult problem. In fact, this problem is like two problems in one. At the center of the problem-solving strategy is the recognition that the quantity of heat lost by the water (Q[water]) equals the quantity of heat gained by the metal (Q[metal]). Since the m, C and ΔT values of the water are known, the Q[water] can be calculated. This Q[water] value equals the Q[metal] value. Once the Q[metal] value is known, it can be used with the m and ΔT values of the metal to calculate the C[metal]. Use of this strategy leads to the following solution:

Part 1: Determine the Heat Lost by the Water
m = 50.0 g
C = 4.18 J/g/°C
T[initial] = 88.6°C
T[final] = 87.1°C
ΔT = -1.5°C (T[final] - T[initial])

Solve for Q[water]:
Q[water] = m•C•ΔT = (50.0 g)•(4.18 J/g/°C)•(-1.5°C)
Q[water] = -313.5 J (unrounded)
(The - sign indicates that heat is lost by the water)

Part 2: Determine the value of C[metal]
Q[metal] = 313.5 J (use a + sign since the metal is gaining heat)
m = 12.9 g
T[initial] = 26.5°C
T[final] = 87.1°C
ΔT = 60.6°C (T[final] - T[initial])

Solve for C[metal]:
Rearrange Q[metal] = m[metal]•C[metal]•ΔT[metal] to obtain C[metal] = Q[metal] / (m[metal]•ΔT[metal])
C[metal] = Q[metal] / (m[metal]•ΔT[metal]) = (313.5 J)/[(12.9 g)•(60.6°C)]
C[metal] = 0.40103 J/g/°C
C[metal] = 0.40 J/g/°C (rounded to two significant digits)

Heat and Changes of State

The discussion above and the accompanying equation relate the heat gained or lost by an object to the resulting temperature changes of that object. As we have learned, sometimes heat is gained or lost but there is no temperature change. This is the case when the substance is undergoing a state change. So now we must investigate the mathematics related to changes in state and the quantity of heat.

To begin the discussion, let's consider the various state changes that could be observed for a sample of matter. The table below lists several state changes and identifies the name commonly associated with each process.

│ Process │Change of State │
│ Melting │Solid to Liquid │
│ Freezing │Liquid to Solid │
│Vaporization │ Liquid to Gas │
│Condensation │ Gas to Liquid │
│ Sublimation │ Solid to Gas │
│ Deposition │ Gas to Solid │

In the case of melting, boiling and sublimation, energy would have to be added to the sample of matter in order to cause the change of state. Such state changes are referred to as being endothermic.
Freezing, condensation and deposition are exothermic; energy is released by the sample of matter when these state changes occur. So one might notice that a sample of ice (solid water) undergoes melting when it is placed on or near a burner. Heat is transferred from the burner to the sample of ice; energy is gained by the ice, causing the change of state. But how much energy would be required to cause such a change of state? Is there a mathematical formula that might help in determining the answer to this question? There most certainly is.

The amount of energy required to change the state of a sample of matter depends on three things. It depends upon what the substance is, on how much substance is undergoing the state change, and upon what state change is occurring. For instance, it requires a different amount of energy to melt ice (solid water) compared to melting iron. And it requires a different amount of energy to melt ice (solid water) as it does to vaporize the same amount of liquid water. And finally, it requires a different amount of energy to melt 10.0 grams of ice compared to melting 100.0 grams of ice. The substance, the process and the amount of substance are the three variables that affect the amount of energy required to cause a specific change in state. The heat of fusion is the energy change associated with the solid-liquid state change.

The values for the specific heat of fusion and the specific heat of vaporization are reported on a per amount basis. For instance, the specific heat of fusion of water is 333 J/gram. It takes 333 J of energy to melt 1.0 gram of ice. It takes 10 times as much energy - 3330 J - to melt 10.0 grams of ice. Reasoning in this manner leads to the following formulae relating the quantity of heat to the mass of the substance and the heat of fusion and vaporization.

For melting and freezing: Q = m•ΔH[fusion]
For vaporization and condensation: Q = m•ΔH[vaporization]

where Q represents the quantity of energy gained or released during the process, m represents the mass of the sample, ΔH[fusion] represents the specific heat of fusion (on a per gram basis) and ΔH[vaporization] represents the specific heat of vaporization (on a per gram basis).

Similar to the discussion regarding Q = m•C•ΔT, the values of Q can be either positive or negative. Values of Q are positive for the melting and vaporization process; this is consistent with the fact that the sample of matter must gain energy in order to melt or vaporize. Values of Q are negative for the freezing and condensation process; this is consistent with the fact that the sample of matter must lose energy in order to freeze or condense.

As an illustration of how these equations can be used, consider the following two example problems.

Example Problem 3
Elise places 48.2 grams of ice in her beverage. What quantity of energy would be absorbed by the ice (and released by the beverage) during the melting process? The heat of fusion of water is 333 J/g.

The equation relating the mass (48.2 grams), the heat of fusion (333 J/g), and the quantity of energy (Q) is Q = m•ΔH[fusion]. Substitution of known values into the equation leads to the answer.

Q = m•ΔH[fusion] = (48.2 g)•(333 J/g)
Q = 16050.6 J
Q = 1.61 x 10^4 J = 16.1 kJ (rounded to three significant digits)

Example Problem 3 involves a rather straightforward, plug-and-chug type calculation.
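Both formulas are one-liners in code, which makes them easy to check. A minimal sketch (the function and variable names are mine, not the lesson's) that reproduces Example Problems 1 and 3:

    def heat_for_temperature_change(m, c, t_initial, t_final):
        # Q = m*C*dT; m in grams, c in J/g/degC, result in joules
        return m * c * (t_final - t_initial)

    def heat_for_state_change(m, delta_h):
        # Q = m*dH; m in grams, delta_h in J/g, result in joules
        return m * delta_h

    # Example Problem 1: warming 450 g of water from 15 degC to 85 degC
    print(heat_for_temperature_change(450, 4.18, 15, 85))  # 131670.0 J
    # Example Problem 3: melting 48.2 g of ice
    print(heat_for_state_change(48.2, 333))                # 16050.6 J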
Now we will try Example Problem 4, which will require a significantly deeper level of analysis.

Example Problem 4
What is the minimum amount of liquid water at 26.5°C that would be required to completely melt 50.0 grams of ice? The specific heat capacity of liquid water is 4.18 J/g/°C and the specific heat of fusion of ice is 333 J/g.

In this problem, the ice is melting and the liquid water is cooling down. Energy is being transferred from the liquid to the solid. To melt the solid ice, 333 J of energy must be transferred for every gram of ice. This transfer of energy from the liquid water to the ice will cool the liquid down. But the liquid can only cool as low as 0°C - the freezing point of the water. At this temperature the liquid will begin to solidify (freeze) and the ice will not completely melt.

We know the following about the ice and the liquid water:

Given Info about Ice:
m = 50.0 g
ΔH[fusion] = 333 J/g

Given Info about Liquid Water:
C = 4.18 J/g/°C
T[initial] = 26.5°C
T[final] = 0.0°C
ΔT = -26.5°C (T[final] - T[initial])

The energy gained by the ice is equal to the energy lost from the water.

Q[ice] = -Q[liquid water]

The - sign indicates that one object gains energy and the other object loses energy. We can calculate the left side of the above equation as follows:

Q[ice] = m•ΔH[fusion] = (50.0 g)•(333 J/g)
Q[ice] = 16650 J

Now we can set the right side of the equation equal to m•C•ΔT and begin to substitute in known values of C and ΔT in order to solve for the mass of the liquid water. The solution is:

16650 J = -Q[liquid water]
16650 J = -m[liquid water]•C[liquid water]•ΔT[liquid water]
16650 J = -m[liquid water]•(4.18 J/g/°C)•(-26.5°C)
16650 J = -m[liquid water]•(-110.77 J/°C)
m[liquid water] = -(16650 J)/(-110.77 J/°C)
m[liquid water] = 150.311 g
m[liquid water] = 1.50x10^2 g (rounded to three significant digits)

Heating and Cooling Curves Revisited

On the previous page of Lesson 2, the heating curve of water was discussed. The heating curve showed how the temperature of water increased over the course of time as a sample of water in its solid state (i.e., ice) was heated. We learned that the addition of heat to the sample of water could cause either changes in temperature or changes in state. At the melting point of water, the addition of heat causes a transformation of the water from the solid state to the liquid state. And at the boiling point of water, the addition of heat causes a transformation of the water from the liquid state to the gaseous state. These changes in state occurred without any changes in temperature. However, the addition of heat to a sample of water that is not at any phase change temperatures will result in a change in temperature.

Now we can approach the topic of heating curves on a more quantitative basis. The diagram below represents the heating curve of water. There are five labeled sections on the plotted lines. The three diagonal sections represent the changes in temperature of the sample of water in the solid state (section 1), the liquid state (section 3), and the gaseous state (section 5). The two horizontal sections represent the changes in state of the water. In section 2, the sample of water is undergoing melting; the solid is changing to a liquid. In section 4, the sample of water is undergoing boiling; the liquid is changing to a gas.

The quantity of heat transferred to the water in sections 1, 3, and 5 is related to the mass of the sample and the temperature change by the formula Q = m•C•ΔT.
And the quantity of heat transferred to the water in sections 2 and 4 is related to the mass of the sample and the heat of fusion and vaporization by the formulae Q = m•ΔH[fusion] (section 2) and Q = m•ΔH[vaporization] (section 4). So now we will make an effort to calculate the quantity of heat required to change 50.0 grams of water from the solid state at -20.0°C to the gaseous state at 120.0°C. The calculation will require five steps - one step for each section of the above graph. While the specific heat capacity of a substance varies with temperature, we will use the following values of specific heat in our calculations:

Solid Water: C = 2.00 J/g/°C
Liquid Water: C = 4.18 J/g/°C
Gaseous Water: C = 2.01 J/g/°C

Finally, we will use the previously reported values of ΔH[fusion] (333 J/g) and ΔH[vaporization] (2.23 kJ/g).

Section 1: Changing the temperature of solid water (ice) from -20.0°C to 0.0°C.
Use Q[1] = m•C•ΔT where m = 50.0 g, C = 2.00 J/g/°C, T[initial] = -20.0°C, and T[final] = 0.0°C
Q[1] = m•C•ΔT = (50.0 g)•(2.00 J/g/°C)•(0.0°C - -20.0°C)
Q[1] = 2.00 x10^3 J = 2.00 kJ

Section 2: Melting the Ice at 0.0°C.
Use Q[2] = m•ΔH[fusion] where m = 50.0 g and ΔH[fusion] = 333 J/g
Q[2] = m•ΔH[fusion] = (50.0 g)•(333 J/g)
Q[2] = 1.665 x10^4 J = 16.65 kJ
Q[2] = 16.7 kJ (rounded to 3 significant digits)

Section 3: Changing the temperature of liquid water from 0.0°C to 100.0°C.
Use Q[3] = m•C•ΔT where m = 50.0 g, C = 4.18 J/g/°C, T[initial] = 0.0°C, and T[final] = 100.0°C
Q[3] = m•C•ΔT = (50.0 g)•(4.18 J/g/°C)•(100.0°C - 0.0°C)
Q[3] = 2.09 x10^4 J = 20.9 kJ

Section 4: Boiling the Water at 100.0°C.
Use Q[4] = m•ΔH[vaporization] where m = 50.0 g and ΔH[vaporization] = 2.23 kJ/g
Q[4] = m•ΔH[vaporization] = (50.0 g)•(2.23 kJ/g)
Q[4] = 111.5 kJ
Q[4] = 112 kJ (rounded to 3 significant digits)

Section 5: Changing the temperature of gaseous water (steam) from 100.0°C to 120.0°C.
Use Q[5] = m•C•ΔT where m = 50.0 g, C = 2.01 J/g/°C, T[initial] = 100.0°C, and T[final] = 120.0°C
Q[5] = m•C•ΔT = (50.0 g)•(2.01 J/g/°C)•(120.0°C - 100.0°C)
Q[5] = 2.01 x10^3 J = 2.01 kJ

The total amount of heat required to change solid water (ice) at -20°C to gaseous water at 120°C is the sum of the Q values for each section of the graph. That is,

Q[total] = Q[1] + Q[2] + Q[3] + Q[4] + Q[5]

Summing the five unrounded Q values gives 153.1 kJ; rounding to the proper number of significant digits leads to a value of 153 kJ as the answer to the original question.

In the above example, there are several features of the solution that are worth reflecting on:

First: The lengthy problem was divided into parts, with each part representing one of the five sections of the graph. Since there were five Q values being calculated, they were labeled as Q[1], Q[2], etc. This level of organization is required in a multi-step problem such as this one.

Second: Attention was given to the +/- sign on ΔT. The change in temperature (or of any quantity) is always calculated as the final value of the quantity minus the initial value of that quantity.

Third: Attention was given to units throughout the course of the problem. Units of Q will be either joules or kilojoules depending on which quantities are being multiplied. Failure to pay attention to units is a common cause of error in problems like these.

Fourth: Attention was given to significant digits throughout the course of the problem. While this should never become the major emphasis of any problem in physics, it is certainly a detail worth attending to.
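The five-part bookkeeping above is exactly the sort of calculation worth scripting. A minimal sketch (the names and structure are mine) that reproduces the five sections and the 153 kJ total:

    SEGMENTS = [
        ("1: warm ice",    lambda m: m * 2.00 * 20.0),   # -20.0 degC to 0.0 degC
        ("2: melt ice",    lambda m: m * 333.0),         # heat of fusion, J/g
        ("3: warm liquid", lambda m: m * 4.18 * 100.0),  # 0.0 degC to 100.0 degC
        ("4: boil liquid", lambda m: m * 2230.0),        # heat of vaporization, J/g
        ("5: warm steam",  lambda m: m * 2.01 * 20.0),   # 100.0 degC to 120.0 degC
    ]

    m = 50.0  # grams
    total = 0.0
    for label, q in SEGMENTS:
        joules = q(m)
        total += joules
        print(f"{label:15s} {joules/1000:7.2f} kJ")
    print(f"{'total':15s} {total/1000:7.2f} kJ")  # 153.06 kJ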
We've learned here on this page how to calculate the quantity of heat involved in any heating/cooling process and in any change of state process. This understanding will be critical as we proceed to the next page of Lesson 2 on the topic of calorimetry. Calorimetry is the science associated with determining the changes in energy of a system by measuring the heat exchanged with the surroundings.

Check Your Understanding

1. Water has an unusually high specific heat capacity. Which one of the following statements logically follows from this fact?
a. Compared to other substances, hot water causes severe burns because it is a good conductor of heat.
b. Compared to other substances, water will quickly warm up to high temperatures when heated.
c. Compared to other substances, it takes a considerable amount of heat for a sample of water to change its temperature by a small amount.

2. Explain why large bodies of water such as Lake Michigan can be quite chilly in early July despite the outdoor air temperatures being near or above 90°F (32°C).

3. The table below describes a thermal process for a variety of objects. For each description, indicate if heat is gained or lost by the object, whether the process is endothermic or exothermic, and whether Q for the indicated object is a positive or negative value.

Process | Heat Gained or Lost? | Endo- or Exothermic? | Q: + or -?
a. An ice cube is placed into a glass of room temperature lemonade in order to cool the beverage down.
b. A cold glass of lemonade sits on the picnic table in the hot afternoon sun and warms up to 32°C.
c. The burners on an electric stove are turned off and gradually cool down to room temperature.
d. The teacher removes a large chunk of dry ice from a thermos and places it into water. The dry ice sublimes, producing gaseous carbon dioxide.
e. Water vapor in the humidified air strikes the window and turns to a dew drop (drop of liquid water).

4. An 11.98-gram sample of zinc metal is placed in a hot water bath and warmed to 78.4°C. It is then removed and placed into a Styrofoam cup containing 50.0 mL of room temperature water (T = 27.0°C; density = 1.00 g/mL). The water warms to a temperature of 28.1°C. Determine the specific heat capacity of the zinc.

5. Jake grabs a can of soda from the closet and pours it over ice in a cup. Determine the amount of heat lost by the room temperature soda as it melts 61.9 g of ice (ΔH[fusion] = 333 J/g).

6. The heat of sublimation (ΔH[sublimation]) of dry ice (solid carbon dioxide) is 570 J/g. Determine the amount of heat required to turn a 5.0-pound bag of dry ice into gaseous carbon dioxide. (Given: 1.00 kg = 2.20 lb)

7. Determine the amount of heat required to increase the temperature of a 3.82-gram sample of solid para-dichlorobenzene from 24°C to its liquid state at 75°C. Para-dichlorobenzene has a melting point of 54°C, a heat of fusion of 124 J/g and specific heat capacities of 1.01 J/g/°C (solid state) and 1.19 J/g/°C (liquid state).
{"url":"http://www.physicsclassroom.com/Class/thermalP/U18l2b.cfm","timestamp":"2014-04-16T20:41:25Z","content_type":null,"content_length":"92263","record_id":"<urn:uuid:53d20071-ab62-47fa-8d4a-9a0c1eed43fe>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Chord: A straight line segment between two points on a curve.
Circle: A curved planar figure, all points of which are the same distance from a fixed point called the center.
Diameter: 1) A line segment connecting two points on a circle, and passing through its center. 2) Twice the radius (of a circle).
Radius: 1) A line segment from a circle to its center. 2) The distance from any point on a circle to its center.

Geometric Definition
• Geometrically, a circle can be defined as: Given a fixed point C, a circle is the figure formed from all points P for which PC is constant.
• The fixed point is called the center.

Analytic Definition
• Analytically, a circle can be defined as: Given a constant r, a circle is the locus of all points (x, y) for which
x^2/r^2 + y^2/r^2 = 1
or, equivalently,
x^2 + y^2 = r^2
• The geometric definition and the analytic definition are equivalent, and will produce the same figure for center (0, 0).

Line Segments
• A line segment from (x, y) to (0, 0) is called a radius of the circle.
• A line segment from (x, y) to (-x, -y) is called a diameter of the circle.
• These terms are also used to refer to the lengths of these line segments.
• A line segment from (x[1], y[1]) to (x[2], y[2]) is called a chord of the circle.

• A circle (in the standard form) intercepts the x axis at (r, 0) and (-r, 0).
• A circle (in the standard form) intercepts the y axis at (0, r) and (0, -r).
• The eccentricity e of a conic is e = c/a, where c is the distance from the center to a focus and a is the semi-major axis.
• For all circles, e = 0, since the foci coincide with the center.

General Case
• The equation Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 will describe a circle if B = 0 and A = C ≠ 0; in that case the discriminant, B^2 - 4AC = -4A^2, is negative.

Parametric Definition
• A circle can also be defined parametrically: Given a constant r and a parameter θ, a circle is the locus of all points (x, y) for which
x = r cos θ, and y = r sin θ

Polar Definition
• A circle can also be defined in polar coordinates: Given a constant r greater than 0, a circle is the locus of all points (ρ, θ) for which
ρ = r
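A quick numerical check that the parametric and analytic definitions agree; a minimal sketch, with an arbitrary radius chosen for illustration:

    import math

    r = 2.5
    for k in range(8):
        theta = 2 * math.pi * k / 8
        x, y = r * math.cos(theta), r * math.sin(theta)  # parametric point
        assert math.isclose(x**2 + y**2, r**2)           # satisfies x^2 + y^2 = r^2

Every parametric point lands on the analytic locus, as the equivalence claimed above requires.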
{"url":"http://math.comsci.us/analytic/circle.html","timestamp":"2014-04-17T21:57:52Z","content_type":null,"content_length":"9633","record_id":"<urn:uuid:f60552c7-50d3-4752-a05b-4d6b5af6fa39>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
January 13th 2011, 08:10 PM
Angle measurements are taken from two points on directly opposite sides of a tree. How high is the tree? The tree is in the middle; the angle above the tree is 115° and the two side angles are 35° and 30°. The tree is 20.6 m tall, but how do you find that answer?

January 13th 2011, 11:12 PM
Is this the complete text of the question? With the given values the tree can have any height from bonsai to sequoia. In my opinion there is a length missing.
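The reply is right: three angles alone fix only the triangle's shape, not its size. If the problem also gave the distance $d$ between the two observation points, and the $35^\circ$ and $30^\circ$ are angles of elevation from those points (consistent with $35^\circ + 30^\circ + 115^\circ = 180^\circ$ at the treetop), then the foot of the tree splits the baseline into $h\cot 35^\circ$ and $h\cot 30^\circ$, so

$h = \dfrac{d}{\cot 35^\circ + \cot 30^\circ}.$

Working backwards from the stated answer, $h = 20.6$ m would require $d = 20.6(\cot 35^\circ + \cot 30^\circ) \approx 65$ m, presumably the length that was dropped from the posted question.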
{"url":"http://mathhelpforum.com/trigonometry/168288-trigonometry-print.html","timestamp":"2014-04-18T09:48:40Z","content_type":null,"content_length":"4685","record_id":"<urn:uuid:28d85628-da30-43c6-ac9a-80b4a7914aec>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Object-Oriented Implementation of Numerical Methods
by Didier Besset
Category: General
ISBN: 1558606793

Amazon.co.uk
Didier Besset's Object-Oriented Implementation of Numerical Methods offers a wide-ranging set of objects for common numerical algorithms. Written for the math-literate Java and Smalltalk programmer, this volume demonstrates that both languages can be used to tackle common numerical calculations with ease.

This title bridges the gap between pure algorithms and object design. By tackling issues like class design, interfaces, and overcoming floating-point rounding errors in both Java and Smalltalk, the code can be used as is or as a model for your own custom numerical classes. The range of recipes, or sample numerical classes, all coded in both OOPLs, is rich. For anyone who's taken a few undergraduate math courses (like calculus, linear algebra, or statistics), plenty of the material will be familiar. After presenting some basic algorithm and mathematical principles, the book shows you the code that gets the job done (first in Smalltalk and then in Java). There's no room for demo code that shows how to use all this. The emphasis is on a good cross-section of common numerical calculations.

The tour begins with calculus and moves through linear algebra, with plenty of material on matrices. Later sections on statistics cover familiar terms and calculations like linear regression and calculations useful for establishing correlations between one or more independent variables. Sections on data mining examine the mathematical rules for finding patterns in large amounts of data. (There's also a nifty set of classes for implementing genetic algorithms.) Throughout, you get advice on choosing the right algorithm for the job. (There are class diagrams that map out how this class library is organised.)

Of course, it will help to know some of the underlying maths to get the most out of this intelligent and wide-ranging book, but the writing is remarkably clear, and the source code is a model of intelligibility, so even readers who are averse to equations will find Object-Oriented Implementation of Numerical Methods readable. In general, any competent Java or Smalltalk programmer will be able to tap into solid mathematical code by reading it, without having to reinvent the proverbial wheel. --Richard Dragan

Short description (Kurzbeschreibung)
Written by a programmer for programmers, this is the first book that demonstrates how to apply the state-of-the-art in software development, object-oriented programming, to numerical computing. The CD contains ready-to-use code in both Java and Smalltalk for all the algorithms discussed. This comprehensive book is not a catalogue of recipes, but a guide to understanding how object-oriented languages really work with numerical methods. The book provides a clear path to new object-oriented...
{"url":"http://www.uni-protokolle.de/buecher/isbn/1558606793/","timestamp":"2014-04-18T03:09:30Z","content_type":null,"content_length":"8703","record_id":"<urn:uuid:4967741f-211b-4d61-88a8-e640d782f484>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve Textbook Exercises

Compute the expected value of the product of two integrals and , where is the standard WienerProcess[]. Compare with an alternative computation.

Using the Ito formula, verify that the process is a martingale with respect to the filtration generated by the Wiener process . Apply the Ito formula by converting the process to its standard form. The drift coefficient of the standard Ito process must be zero for the process to be a martingale. Prove the martingale property directly.
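For reference, the Itô-formula argument the second exercise calls for runs as follows; the process $f(t, W_t)$ below is a generic placeholder, not necessarily the one in the exercise. For smooth $f$,

$$df(t, W_t) = \left(\frac{\partial f}{\partial t} + \frac{1}{2}\frac{\partial^2 f}{\partial x^2}\right)dt + \frac{\partial f}{\partial x}\,dW_t,$$

so in the standard form $dX_t = \mu\,dt + \sigma\,dW_t$ the drift is $\mu = f_t + \tfrac{1}{2}f_{xx}$, and the process is a martingale (given suitable integrability) precisely when this drift vanishes identically. A classic instance: $f(t,x) = e^{x - t/2}$ gives $f_t + \tfrac{1}{2}f_{xx} = -\tfrac{1}{2}f + \tfrac{1}{2}f = 0$, so $e^{W_t - t/2}$ is a martingale.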
{"url":"http://wolfram.com/mathematica/new-in-9/time-series-and-stochastic-differential-equations/solve-textbook-exercises.html","timestamp":"2014-04-21T10:15:14Z","content_type":null,"content_length":"7457","record_id":"<urn:uuid:5da781fc-6747-4e4e-a7d3-75be58fa045a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Gaussian integers.

How can I prove that there is no isomorphism from Z[i]/<a+bi> to Zn, for any n, if (a,b) ≠ 1? Is this even true when (a,b) ≠ 1?

If (a,b) = 1, yes, I know that it's isomorphic to Z/(a^2+b^2)Z, although I would like to see a cleaner version of the isomorphism than what I came up with. But what if a+bi = d(k+mi)? For example, what is Z[i]/<4+6i>? Really needing some help here...

Last edited by Deveno; April 5th 2011 at 09:53 PM.

Ok, I missed that coprimality condition, but the answer's still the same, though it must be fixed. You really want to read http://home.wlu.edu/~dresdeng/papers/factorrings.pdf , in particular corollary 3.

Tonio
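One standard route to the claim, sketched here independently of the linked paper: as an abelian group, $\mathbb{Z}[i]/\langle a+bi\rangle \cong \mathbb{Z}^2/M$, where $M$ is generated by the images of $a+bi$ and $i(a+bi) = -b+ai$, that is, by the columns of $\begin{pmatrix} a & -b \\ b & a\end{pmatrix}$. The Smith normal form of this matrix has diagonal entries $d_1 = \gcd(a,b)$ and $d_2 = (a^2+b^2)/\gcd(a,b)$, so additively

$\mathbb{Z}[i]/\langle a+bi\rangle \cong \mathbb{Z}_d \oplus \mathbb{Z}_{(a^2+b^2)/d}, \qquad d = \gcd(a,b).$

When $d > 1$ this group is not cyclic (since $d \mid (a^2+b^2)/d$), hence not isomorphic to any $\mathbb{Z}_n$; when $d = 1$ it is cyclic of order $a^2+b^2$, recovering the familiar case. For the example in the post, $\mathbb{Z}[i]/\langle 4+6i\rangle \cong \mathbb{Z}_2 \oplus \mathbb{Z}_{26}$ as a group. The smallest instance is $\mathbb{Z}[i]/\langle 2\rangle$: the classes of $1$ and $i$ each have additive order $2$, so the quotient is $\mathbb{Z}_2 \oplus \mathbb{Z}_2$, not $\mathbb{Z}_4$.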
{"url":"http://mathhelpforum.com/advanced-algebra/176947-gaussian-integers.html","timestamp":"2014-04-18T16:48:30Z","content_type":null,"content_length":"44640","record_id":"<urn:uuid:da4d36d2-282d-4177-b969-5839d233e4f4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: semi-random sampling (how to impose properties of one population onto a subsample of a different population)

From: Austin Nichols <austinnichols@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: semi-random sampling (how to impose properties of one population onto a subsample of a different population)
Date: Sun, 7 Aug 2011 21:35:54 -0400

Ekaterina Hertog--
Do you need a sample? You could just reweight using a propensity score. I.e. is this for analysis or for surveying the sampled obs?

On Sun, Aug 7, 2011 at 10:32 AM, Steven Samuels <sjsamuels@gmail.com> wrote:
> Sorry, I misunderstood. Here's code that you can adapt. Note that you set the sample size you want in the first line
> *************CODE BEGINS*************
> **************CODE ENDS**************
> On Aug 7, 2011, at 5:05 AM, Ekaterina Hertog wrote:
> Dear Steven,
> thank you for your help, however it does not fully solve my problem. Your proposed solution will allow me to roughly preserve the population percentages from the whole sample into a subsample. What I need, however, is to impose population percentages found in a different dataset on a subsample I am creating. Essentially I have two datasets: one of high income women and one of middle income women. High income women tend to be older and are more likely to live in the capital. I need to create a subsample of a dataset of middle income women which would match the high income women dataset on age and location characteristics.
> Does anyone know how to do this in Stata 11?
> Ekaterina
> On 07/08/2011 09:08, Steven Samuels wrote:
>> The following code shows how to take a 10% sample within categories formed by two variables. The sample and whole population percentages will be approximately the same, with the agreement better for larger within-cell sample sizes.
>> Steve
>> *************CODE BEGINS*************
>> sysuse auto, clear
>> expand 6
>> set seed 842655
>> recode rep78 1/2=5 .=5
>> tab rep78 foreign, cell
>> sample 10, by(foreign rep78)
>> tab rep78 foreign, cell
>> **************CODE ENDS**************
>> On Aug 6, 2011, at 4:23 PM, Ekaterina Hertog wrote:
>> Dear all,
>> I need to take a subsample of observations from a big dataset making sure that the people in the subsample have a given geographic and age profile. I need to make sure that, say, 50% of people in the subsample come from the capital and 50% from other towns. Within each of these 2 locations I want to preserve a certain age structure: say in a city: 3 people aged 23, 4 people aged 24 …
>> Within those geographic and age profiles I want to select the observations randomly. Is it possible to do that in Stata 11? Any thoughts on how I would go about it?

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
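The stratified idea in Steven's code carries over directly to other tools. A minimal Python sketch of what Ekaterina describes, drawing from the middle-income table so that the age-by-location cell counts follow the high-income profile (the column names and the 50% scale factor are invented for illustration):

    import pandas as pd

    def match_profile(source, target, strata, frac):
        """Sample `source` so its strata composition mirrors `target`.
        For each stratum cell, draw n = frac * (target cell count) rows,
        capped at what `source` actually contains in that cell."""
        parts = []
        for key, cell in target.groupby(strata):
            if not isinstance(key, tuple):
                key = (key,)
            pool = source
            for col, val in zip(strata, key):
                pool = pool[pool[col] == val]
            n = min(len(pool), round(frac * len(cell)))
            if n > 0:
                parts.append(pool.sample(n, random_state=0))
        return pd.concat(parts)

    # usage, with invented column names:
    # sub = match_profile(middle_income, high_income,
    #                     strata=["location", "age"], frac=0.5)

Because each cell is drawn at random within itself, the result is random conditional on matching the imposed age/location profile, which is exactly the "semi-random" sample in the subject line.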
{"url":"http://www.stata.com/statalist/archive/2011-08/msg00283.html","timestamp":"2014-04-19T12:24:38Z","content_type":null,"content_length":"11156","record_id":"<urn:uuid:694255e6-96a6-4214-9065-b273335fb862>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
help with algebra word problem

September 25th 2011, 01:55 PM #1
Sep 2011

help with algebra word problem
Hi everyone, I need help in determining the number of children that went to the restaurant. How would I do that?

The Hungry Heifer diner offers an all-you-can-eat buffet at $25.90 per adult and $17.90 per child. On a particular day, the diner had total buffet revenue of $6,609.40 from 266 customers. How many customers were children?

Adults $25.90
Child $17.90
Total customers 266
Revenue $6,609.40

Thanks, any help would be appreciated

Re: help with algebra word problem
Let x be adults and y be children.
$x+y = 266$
$25.9x+17.9y =6609.4$
September 25th 2011, 02:08 PM #2
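Finishing the system: substitute $x = 266 - y$ into the second equation to get $25.9(266 - y) + 17.9y = 6609.4$, i.e. $6889.4 - 8y = 6609.4$, so $y = 35$ and $x = 231$. Thus 35 of the 266 customers were children. Check: $231(25.90) + 35(17.90) = 5982.90 + 626.50 = 6609.40$.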
{"url":"http://mathhelpforum.com/algebra/188814-help-algebra-word-problem.html","timestamp":"2014-04-18T03:36:59Z","content_type":null,"content_length":"33107","record_id":"<urn:uuid:c93f89f4-c111-41c7-b671-0feb45550971>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
fractional indices

can anyone help me with this question: simplify 9^(-1/2) x 8^(2/3)?

$9^{-1/2} \cdot 8^{2/3}$
First, if you have a negative exponent, you can make it positive by putting it in the denominator of a fraction:
$= \frac{1}{9^{1/2}} \cdot 8^{2/3}$
An exponent of 1/2 is the same as square root, so the square root of 9 is 3:
$= \frac{1}{3} \cdot 8^{2/3}$
The 8 to the 2/3 power can be rewritten using the power-of-a-power property:
$= \frac{1}{3} \cdot (8^{1/3})^2$
An exponent of 1/3 is the same as cube root, so the cube root of 8 is 2:
$= \frac{1}{3} \cdot 2^2$
Simplify:
$= \frac{4}{3}$
EDIT: Sorry pickslides!

ok, thank you very much. I was going to leave it as the square root of 9^-1 x the cube root of 8^2
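That route works just as well: $\sqrt{9^{-1}} \cdot \sqrt[3]{8^2} = \sqrt{\tfrac{1}{9}} \cdot \sqrt[3]{64} = \tfrac{1}{3} \cdot 4 = \tfrac{4}{3}$, the same answer.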
{"url":"http://mathhelpforum.com/algebra/151296-fractional-indices.html","timestamp":"2014-04-21T02:35:20Z","content_type":null,"content_length":"39170","record_id":"<urn:uuid:625cf816-0920-464e-b2fe-f84b019f9446>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Major / Minor axis of an ellipse

Major axis: The longest diameter of an ellipse.
Minor axis: The shortest diameter of an ellipse.

The major and minor axes of an ellipse are diameters (lines through the center) of the ellipse. The major axis is the longest diameter and the minor axis the shortest. If they are equal in length then the ellipse is a circle.

Each axis is the perpendicular bisector of the other. That is, each axis cuts the other into two equal parts, and each axis crosses the other at right angles. The focus points always lie on the major (longest) axis, spaced equally each side of the center. See Foci (focus points) of an ellipse.

Calculating the axis lengths

Recall (see Ellipse definition and properties) that if you were drawing an ellipse using the string and pin method, the string length would be a+b, and the distance between the pins would be f.

The length of the minor axis is given by the formula:
minor axis = √((a+b)² − f²)
where f is the distance between foci and a, b are the distances from each focus to any point on the ellipse.

The length of the major axis is given by the formula:
major axis = a + b
where a, b are the distances from each focus to any point on the ellipse. (The sum of the focal distances is the same for every point on the ellipse, and that sum equals the major axis length itself.)
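These two formulas are easy to sanity-check numerically; a minimal sketch, with invented values:

    import math

    string_length = 10.0  # a + b, the length of the string
    f = 6.0               # distance between the pins (the foci)

    major = string_length                        # a + b
    minor = math.sqrt(string_length**2 - f**2)   # sqrt((a+b)^2 - f^2)

    # semi-axes and focal distance satisfy (major/2)^2 = (minor/2)^2 + (f/2)^2
    assert math.isclose((major/2)**2, (minor/2)**2 + (f/2)**2)
    print(major, minor)  # 10.0 8.0

The assert is the Pythagorean relation between the semi-major axis, the semi-minor axis and the center-to-focus distance, which is where the minor-axis formula comes from.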
{"url":"http://www.mathopenref.com/ellipseaxes.html","timestamp":"2014-04-16T10:09:50Z","content_type":null,"content_length":"10074","record_id":"<urn:uuid:d9ee8018-602e-4c8e-9d6c-cfa69ae19842>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00505-ip-10-147-4-33.ec2.internal.warc.gz"}