Columns: content (string, lengths 86 to 994k), meta (string, lengths 288 to 619)
Jump to page? Re: Jump to page? Well, e.g.: if I wanted to skip to a page where I know there is a nice problem (say page 57 out of 93), then I would need to click about 18 times to get to it. See the problem? Try going bigger: page 132 out of 297. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=209111","timestamp":"2014-04-18T13:37:06Z","content_type":null,"content_length":"14182","record_id":"<urn:uuid:d5ab51fb-c090-413b-ba31-9ba27525e5f0>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/darrius/medals","timestamp":"2014-04-16T04:37:18Z","content_type":null,"content_length":"126959","record_id":"<urn:uuid:e21bff88-2993-4888-8923-f6eb4f5eabfb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The trace formula. James ARTHUR. This paper is a report on the present state of the trace formula for a general reductive group. The trace formula is not so much an end in itself as it is a key to deep results on automorphic representations. However, such applications have only been carried out for groups of low dimension (see [5], [1], [7], [8(c)], [3]). We will not try to discuss them here. For reports on progress towards applying the trace formula for general groups, see the papers of Langlands [8(d)] and Shelstad [11] in these proceedings. Our discussion will be brief and largely confined to a description of the main results. On occasion we will try to give some idea of the proofs, but more often we shall simply refer the reader to papers in the bibliography. Section 1 will be especially sparse, for it contains a review of results which were summarized in more detail in [1(e)]. This report contains no mention of the twisted trace formula.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/419/3013869.html","timestamp":"2014-04-19T20:18:18Z","content_type":null,"content_length":"8062","record_id":"<urn:uuid:d36c2900-2996-4b55-a8ce-8252454b72c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
m03 #36

gsagula wrote:
Why is the answer A? If something travels at the same speed over the same distance twice, isn't it the same number of hours? Thanks!

A swimmer makes a round trip up and down the river, which takes her X hours. If the next day she swims the same distance with the same speed in still water, which takes her Y hours, which of the following statements is true?
• X>Y
• X<Y
• X=Y
• X=1/2*Y
• none of the above

Reply: Pick numbers and then check them against the options. Take 12 km as the one-way distance up/down the river, and assume the swimmer's speed to be 4 km/h and the current 2 km/h, which means 6 km/h down the river and 2 km/h up the river. Going upriver takes 6 hours and the return journey takes 2, thus a total of 8 hours. In still water, 24 km at 4 km/h requires 6 hours. Thus x=8 and y=6, so X>Y holds and the remaining options are false. The correct answer is A.

gmat1220 (Director, Chicago Booth Class of 2015) wrote:
Hello mate! Read the stem as you read a CR, and read it critically - you may not have to do any of that math. "A swimmer makes a round trip up and down the river, which takes her X hours. If the next day she swims the same distance with the same speed in still water, which takes her Y hours." The first time she had to face the resistance of the river one way; the second time she swam in still water and faced no resistance. So the text is alluding that Y is less than X. The concept of average speed tells me that even though she traveled the same distance in both cases, her average speed was lower the first time and higher the second time, which reduces the time. Hope that helps!

gsagula wrote:
Thanks a lot! But the question just states that the speed is the same the first and second time; it doesn't mention any variation. It also states that the distance is the same, so I can't see any reasonable explanation for a variation in time from the premises stated in this question. It didn't mention whether the speed is relative to the ground or to the water; I assumed the ground. distance/speed = time. Sorry, I didn't get your explanation.

fluke (Math Forum Moderator) wrote:
Let the speed of the swimmer in still water be v_still, the speed of the stream be v_stream, and the one-way distance be D.

T_upstream = D / (v_still - v_stream)
T_downstream = D / (v_still + v_stream)

Total time:
X = T_upstream + T_downstream = D/(v_still - v_stream) + D/(v_still + v_stream) = 2*D*v_still / ((v_still)^2 - (v_stream)^2)

Y is given by the same expression with v_stream = 0, i.e. Y = 2*D / v_still. When v_stream = 0 the denominator (v_still)^2 - (v_stream)^2 is at its maximum, making the time a minimum. Thus Y is the minimum possible time for the swimmer, and X > Y. Ans: "A"

gsagula wrote:
Thanks! But what the question states is that the speed is the same; we don't have two speeds v_still and v_stream. In other words, the question states that both events had the same distance and the same speed, and in that case it shouldn't matter where she is swimming.

fluke wrote:
The speed of the swimmer is the same, but the speed of the stream varies. v_still is the speed of the swimmer, which, I agree, is the same in both instances. v_stream is the speed of the stream (the flowing water), and it is different in the two instances: in the first it is greater than 0, and in the second the water is still (not flowing), so it is 0.

gsagula wrote:
Ow... thanks again! So I believe this question is flawed, because it's not speed, it's relative speed: the question should specify whether the speed is relative to the ground or to the water (search for Relative Velocity). One more thing: if she has resistance going up, she has that same push in her favor going down, so in this case the average should remain exactly the same. Thank you very much, fluke and gmat1220!

gmat1220 wrote:
Sorry, this inference is wrong. Let's say you travel from SF to NY with the wind and make 150 mph, and travel back to SF from NY at 100 mph against the wind. The average speed is not (150 + 100)/2 = 125 mph; it will be less than 125 mph. Similarly, if the swimmer travels 50 mph with the river and back at 30 mph against the river, her average speed is less than (50+30)/2, i.e. 40 mph. Try this when you're cruising in your car next time.

gsagula wrote:
Right, it is Total Distance/Total Time. However, the question states that the "speed" is the same, and that kills the argument.
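A quick numeric check of the two formulas in fluke's post (a small Python sketch; the 12 km, 4 km/h and 2 km/h figures are just the illustrative numbers already used in the thread):

D = 12.0         # one-way distance, km
v_still = 4.0    # swimmer's speed relative to the water, km/h
v_stream = 2.0   # current, km/h

X = D / (v_still - v_stream) + D / (v_still + v_stream)   # 6 h upstream + 2 h downstream = 8 h
Y = 2 * D / v_still                                        # the same 24 km in still water = 6 h

print(X, Y, X > Y)   # 8.0 6.0 True  -> answer (A)

In general X = 2*D*v_still / (v_still**2 - v_stream**2) >= 2*D/v_still = Y, with equality only when v_stream is zero.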
{"url":"http://gmatclub.com/forum/m03-111523.html","timestamp":"2014-04-17T07:14:46Z","content_type":null,"content_length":"170138","record_id":"<urn:uuid:55852961-ccda-4971-8028-b9c80b7b33f1>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] I don't even know where to begin... HELP PLEASE!
July 3rd 2007, 12:47 AM #1
So here's my problem: I want to do some work with collective intelligence and use it to make predictions. For example, say I have 50 people each rate 20 foods on a 1-10 scale... then I have a 51st person who rates only 10 of those foods on a 1-10 scale. I'd like to be able to predict what he's likely to rate the remaining 10 foods based on how the 10 ratings he's already supplied compare with the ratings of the first 50 people who rated all 20 foods. I have no idea where to begin (I could come up with some algorithms, but I'm sure a significant amount of work has already been done on this subject and I'd rather not waste time reinventing the wheel). Does anyone know what FIELD of mathematics my question pertains to? Is anyone able to write some sort of algorithm to solve such problems? Where can I go for help?
July 3rd 2007, 12:06 PM #2 (Grand Panjandrum, joined Nov 2005)
Multilinear regression might be what you are looking for.
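One minimal way to act on the multilinear-regression suggestion is to fit, for each unrated food, a linear model on the ten foods the new person did rate, using the fifty complete responses as training data. The sketch below (Python/NumPy, with randomly generated placeholder ratings; the 50/20/10 split simply mirrors the numbers in the post) is only meant to show the shape of the computation:

import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 11, size=(50, 20)).astype(float)   # 50 people x 20 foods (placeholder data)
known, unknown = list(range(10)), list(range(10, 20))         # rated vs. unrated foods for person 51
new_person = rng.integers(1, 11, size=10).astype(float)       # the 51st person's 10 ratings (placeholder)

X = np.hstack([ratings[:, known], np.ones((50, 1))])          # known ratings plus an intercept column
x_new = np.append(new_person, 1.0)

predictions = {}
for j in unknown:
    coef, *_ = np.linalg.lstsq(X, ratings[:, j], rcond=None)  # least-squares fit for food j
    predictions[j] = float(np.clip(x_new @ coef, 1, 10))      # keep the prediction on the 1-10 scale

print(predictions)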
{"url":"http://mathhelpforum.com/advanced-math-topics/16469-solved-i-don-t-even-know-where-begin-help-please.html","timestamp":"2014-04-16T08:20:50Z","content_type":null,"content_length":"33154","record_id":"<urn:uuid:f2ed35e8-8daf-491d-bc5c-69e942103793>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Acoustics and Psychoacoustics: Introduction to sound - Part 2
Design How-To
[Part 1 discusses pressure waves and sound transmission.]
1.2 Sound intensity, power and pressure level
The energy of a sound wave is a measure of the amount of sound present. However, in general we are more interested in the rate of energy transfer than in the total energy transferred. Therefore we are interested in the amount of energy transferred per unit of time, that is, the number of joules per second (watts) that propagate. Sound is also a three-dimensional quantity and so a sound wave will occupy space. Because of this it is helpful to characterise the rate of energy transfer with respect to area, that is, in terms of watts per unit area. This gives a quantity known as the sound intensity, which is a measure of the power density of a sound wave propagating in a particular direction, as shown in Figure 1.7.
Figure 1.7 Sound intensity.
1.2.1 Sound intensity level
The sound intensity represents the flow of energy through a unit area. In other words it represents the watts per unit area from a sound source, and this means that it can be related to the sound power level by dividing it by the radiating area of the sound source. As discussed earlier, sound intensity has a direction which is perpendicular to the area that the energy is flowing through; see Figure 1.7. The sound intensity of real sound sources can vary over a range which is greater than one million-million (10^12) to one. Because of this, and because of the way we perceive the loudness of a sound, the sound intensity level is usually expressed on a logarithmic scale. This scale is based on the ratio of the actual power density to a reference intensity of 1 picowatt per square metre (10^-12 W m^-2). Thus the sound intensity level (SIL) is defined as:
SIL = 10 log[10](I[actual]/I[ref])    (1.10)
where I[actual] = the actual sound power density (in W m^-2) and I[ref] = the reference sound power density (10^-12 W m^-2).
The factor of 10 arises because this makes the result a number in which an integer change is approximately equal to the smallest change that can be perceived by the human ear. A factor of 10 change in the power density ratio is called the bel; in Equation 1.10 this would result in a change of 10 in the outcome. The integer unit that results from Equation 1.10 is therefore called the decibel (dB). It represents a tenth-root-of-ten (10^(1/10)) change in the power density ratio, that is, a ratio of about 1.26.
Example 1.6 A loudspeaker with an effective diameter of 25 cm radiates 20 mW. What is the sound intensity level at the loudspeaker?
Sound intensity is the power per unit area. Firstly, we must work out the radiating area of the loudspeaker, which is:
A[speaker] = πr^2 = π(0.25 m / 2)^2 = 0.049 m^2
Then we can work out the sound intensity as:
I = W/A[speaker] = (20 x 10^-3 W)/(0.049 m^2) = 0.41 W m^-2
This result can be substituted into Equation 1.10 to give the sound intensity level, which is:
SIL = 10 log[10](I[actual]/I[ref]) = 10 log[10](0.41 W m^-2 / 10^-12 W m^-2) = 116 dB
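The arithmetic of Example 1.6 is easy to verify; a few lines of Python reproduce it (the 25 cm and 20 mW figures come straight from the example):

import math

diameter = 0.25                 # m
power = 20e-3                   # W
I_ref = 1e-12                   # reference intensity, W/m^2

area = math.pi * (diameter / 2) ** 2        # radiating area, about 0.049 m^2
intensity = power / area                    # about 0.41 W/m^2
SIL = 10 * math.log10(intensity / I_ref)    # sound intensity level, dB

print(round(area, 3), round(intensity, 2), round(SIL))   # 0.049 0.41 116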
{"url":"http://www.eetimes.com/document.asp?doc_id=1274891","timestamp":"2014-04-16T08:23:56Z","content_type":null,"content_length":"129606","record_id":"<urn:uuid:0964beb8-f1ab-484a-aaaa-9f09b4e60cd3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
note mugwumpjism
In Mathematics terms, for two numbers A, B to be congruent mod N, then A - B must be an exact multiple of N. So in other words, it is a three argument comparison operator that returns a true/false value, rather than a two argument function like Perl's mod that returns a number between 0 and N-1 (or N+1 and 0 if N is negative).
But Perl's (and other computer languages') mod operator is based on mapping numbers to what mathematicians call fields; in this case a "wrap-around" set of numbers from 0 to N-1 (or from N+1 to 0 if N is negative). In this context, a field with length 0 doesn't make much sense, so is by default an exception.
There are good reasons for this; for instance, much code assumes that if A = B mod N, then abs(A) < abs(N) (abs() being the absolute operator). Knuth's decision that "x mod 0 is defined to be x" may make sense to one school of thought, but to another it's bogus.
If you don't like the behaviour, use a ( $N ? $A % $N : $A ) construct, or see the section in Perl 6 on use limits, which allows you to define the behaviour of this special case.
Update: OK, IANAM and apparently it's only a field when N is prime. But hopefully you can understand what I mean :)
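To make the distinction concrete, here is a small sketch (written in Python rather than Perl; the function names are mine) contrasting the three-argument congruence relation with a two-argument mod that is guarded against N = 0, as the note suggests:

def congruent(a, b, n):
    """a is congruent to b modulo n iff a - b is an exact multiple of n."""
    if n == 0:
        return a == b          # the only multiple of 0 is 0 itself
    return (a - b) % n == 0

def mod_or_identity(a, n):
    """Mimics the note's ( $N ? $A % $N : $A ) guard."""
    return a % n if n != 0 else a

print(congruent(17, 2, 5))      # True: 17 - 2 = 15 is a multiple of 5
print(mod_or_identity(17, 5))   # 2
print(mod_or_identity(17, 0))   # 17, instead of a division-by-zero error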
{"url":"http://www.perlmonks.org/?displaytype=xml;node_id=88738","timestamp":"2014-04-17T18:26:00Z","content_type":null,"content_length":"1875","record_id":"<urn:uuid:9a5df7b1-2e39-4a28-8747-68311828d133>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
7.9 miles, Cambridge, MA 02142 Engineering math and physics help with MIT-educated EIT I specialize in mechanical engineering subjects such as structural analysis, dynamics, vibrations, fluid mechanics, and the mathematics needed to understand them (linear algebra, calculus, differential equations). My tutoring style varies depending on what is effective... Offering 8 subjects including calculus, SAT math and discrete math
{"url":"http://www.wyzant.com/tutorsearch?z=02180&d=40&kw=Math","timestamp":"2014-04-23T08:07:56Z","content_type":null,"content_length":"72550","record_id":"<urn:uuid:eb92ba1a-be94-4bfa-a4c8-1588db853082>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric series
From Encyclopedia of Mathematics
A series in cosines and sines of multiple angles, that is, a series of the form
$$\frac{a_0}{2} + \sum_{n=1}^{\infty} (a_n \cos nx + b_n \sin nx) \qquad (1)$$
or, in complex form, $\sum_{n=-\infty}^{+\infty} c_n e^{inx}$. Trigonometric series are first encountered in the work of L. Euler (1744), who obtained the first expansions of functions into such series.
In the middle of the 18th century, in connection with the study of problems on the free oscillations of strings, there arose the question of the possibility of "representing" functions characterizing the initial position of a string in the form of a sum of a trigonometric series. This question raised fierce debates, continuing over several decades, among the best analysts of that time, such as D. Bernoulli, J. d'Alembert, J.L. Lagrange, and Euler. These arguments related to the essence of the notion of a function. At that time, functions were usually related to their analytic specifications, which led to the consideration of analytic or piecewise-analytic functions only. But here the need arose to construct a trigonometric series "representing" a function whose graph could be a rather arbitrary curve. However, the significance of the arguments was even greater. In fact, out of these discussions various questions arose connected with many fundamentally important concepts and ideas of mathematical analysis in general, such as the "representation" of functions by Taylor series and the analytic continuation of functions, the use of divergent series, interchange of limits, infinite systems of equations, interpolation of functions by polynomials, etc.
Subsequently, as well as in this initial period, the theory of trigonometric series served as a source of new ideas of mathematical analysis and influenced the development of other branches of it. Investigations in trigonometric series played an important role in the construction of the Riemann and Lebesgue integrals. The theory of functions of a real variable originated and was then developed in close connection with the theory of trigonometric series. As generalizations of the theory of trigonometric series there emerged the Fourier integral, almost-periodic functions, general orthogonal series, and abstract harmonic analysis. Research into trigonometric series served as a starting point in the creation of set theory. Trigonometric series are a powerful means for representing and studying functions.
The question introduced into the debates of the mathematicians in the 18th century was solved in 1807 by J. Fourier, who gave formulas for calculating the coefficients of the trigonometric series (1) that had to "represent" a function $f$, and who applied them in the solution of problems of heat conduction:
$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \, dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \, dx. \qquad (2)$$
Formulas (2) have acquired the name Fourier formulas, although they were encountered earlier by A. Clairaut (1754) and Euler (1777) via term-by-term integration. The trigonometric series (1) whose coefficients are defined by (2) is called the Fourier series of $f$.
The character of the results obtained depends on how the "representation" of the function by a series is to be understood, and how the integral in (2) is to be understood. The modern form of the theory of trigonometric series was obtained after the appearance of the Lebesgue integral.
The theory of trigonometric series can conditionally be divided into two main branches: the theory of Fourier series, in which it is supposed that the series (1) is the Fourier series of some function, and the theory of general trigonometric series, where this hypothesis is not made.
The main results in the theory of general trigonometric series are given below (here the measure of sets and measurability of functions are to be understood in the sense of Lebesgue).
The first systematic study of trigonometric series in which it was not supposed that these series are Fourier series was the dissertation of B. Riemann (1853). For this reason, the theory of general trigonometric series is sometimes called the Riemann theory of trigonometric series. For the study of the properties of an arbitrary series (1) with coefficients converging to zero, Riemann considered the continuous function $F(x)$ obtained after twice term-by-term integration of the series (1). If the series (1) converges at some point $x_0$ to a number $s$, then the second symmetric derivative of $F$ exists at $x_0$ and equals $s$; this leads to the summation of (1) produced by the factors $\left(\frac{\sin nh}{nh}\right)^2$ (the Riemann summation method).
If a trigonometric series converges on a set of positive measure, then its coefficients converge to zero (the Cantor–Lebesgue theorem). Convergence to zero of the coefficients of a trigonometric series also follows from convergence of the series on a set of the second category (W. Young, 1909).
One of the central problems in the theory of general trigonometric series is that of "representing" an arbitrary function by a trigonometric series. By strengthening results of N.N. Luzin (1915) on the representation of functions by trigonometric series that are summable almost-everywhere by the methods of Abel–Poisson and Riemann, D.E. Men'shov proved (1940) the following theorem, relating to the most important case when the representation is understood as almost-everywhere convergence of the series: for every measurable function that is finite almost-everywhere there exists a trigonometric series converging to it almost-everywhere. It is not known (1984) whether the condition that the function be finite almost-everywhere can be dropped here. Therefore the problem on the representation of functions that can take infinite values on a set of positive measure has been considered for the case when convergence almost-everywhere is replaced by a weaker condition, namely convergence in measure, defined in a suitably modified sense for functions that can take infinite values.
Much research has been devoted to the problem of uniqueness of trigonometric series: Can two distinct trigonometric series converge to the same function? In another formulation: If a trigonometric series converges to zero, then does it follow that all coefficients of the series are zero? Here one may have in mind convergence at all points or at all points outside some set. The answers to these questions depend essentially on the properties of the set outside which convergence is presupposed.
The following terminology has been established. A set is called a uniqueness set (a $U$-set, cf. Uniqueness set) if every trigonometric series converging to zero outside it necessarily has all its coefficients equal to zero; otherwise it is called a set of multiplicity (an $M$-set). $M$-sets of measure zero exist: there are perfect sets of measure zero possessing this property. This result is of prime importance in the uniqueness problem. Perfect sets of measure zero can also be $U$-sets.
The following problem touches on the uniqueness problem: if a trigonometric series converges everywhere to a function, is it necessarily the Fourier series of this function? According to the Denjoy–Luzin theorem, absolute convergence on a set of positive measure of a trigonometric series (1) implies that the series $\sum_{n=1}^{\infty}(|a_n|+|b_n|)$ converges, and hence that (1) converges absolutely for all $x$.
This survey only covers one-dimensional trigonometric series (1). There are separate results relating to general trigonometric series of several variables. Here in many cases it is also necessary to find natural formulations of the problems.
[1] N.K. Bary [N.K. Bari], "A treatise on trigonometric series", Pergamon (1964) (Translated from Russian)
[2] A. Zygmund, "Trigonometric series", 1–2, Cambridge Univ. Press (1988)
[3] N.N. Luzin, "The integral and trigonometric series", Moscow-Leningrad (1951) (In Russian) (Thesis; also: Collected Works, Vol. 1, Moscow, 1953, pp. 48–212)
[4] B. Riemann, "Ueber die Darstellbarkeit einer Function durch eine trigonometrische Reihe", in H. Weber (ed.), B. Riemann's Gesammelte Mathematische Werke, Dover, reprint (1953) pp. 227–265 (Original: Göttinger Akad. Abh.)
For the convergence of Fourier series see also Carleson theorem. The problems related to Men'shov's theorem cited above that were unsolved in 1984 have now been solved. In [a1], S.V. Konyagin proved the necessity part of a brilliant theorem (the Men'shov–Konyagin theorem) on the limits of indeterminacy of the partial sums of a trigonometric series at almost all points; the sufficiency part had already been proved by D.E. Men'shov (see [a2] or [1], vol. II, p. 437). In particular, no trigonometric series has sum $+\infty$ on a set of positive measure.
[a1] S.V. Konyagin, "Limits of indeterminacy of trigonometric series", Math. Notes, 44 (1988) pp. 910–920; Mat. Zametki, 44:6 (1988) pp. 770–784
[a2] D.E. Men'shov, "On limits of indeterminacy of Fourier series", Mat. Sb., 30:3 (1952) pp. 601–650 (In Russian)
How to Cite This Entry: Trigonometric series. S.A. Telyakovskii (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Trigonometric_series&oldid=15507
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Trigonometric_series","timestamp":"2014-04-20T05:48:59Z","content_type":null,"content_length":"37224","record_id":"<urn:uuid:78d163ff-35bc-4a67-b4ad-2654f2525e1a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Static longitudinal and lateral stability characteristics at low speed of 45 degree sweptback-midwing models having wings with an aspect ratio of 2, 4, or 6
David F. Thomas, Jr., Walter D. Wolhart
Sep 1957
Results are presented of tests conducted in the Langley stability tunnel to determine the effects of various components and combinations of components on the static longitudinal and lateral stability characteristics at low speed of a series of 45 degree sweptback-midwing-airplane configurations having wings with an aspect ratio of 2, 4, or 6. The tests were made at a dynamic pressure of 24.9 pounds per square foot, which corresponds to a Mach number of 0.13 and Reynolds numbers of 1.00 x 10^6, 0.71 x 10^6, and 0.58 x 10^6 based on the respective wing mean aerodynamic chords. The angle-of-attack range covered was from -4 degrees to 32 degrees, and the sideslip angles used for the lateral-derivative tests were 5 degrees and -5 degrees. An increasing loss in tail contribution to directional stability was a principal cause of all the complete models becoming directionally unstable in the high angle-of-attack range.
An Adobe Acrobat (PDF) file of the entire report:
{"url":"http://naca.central.cranfield.ac.uk/report.php?NID=6812","timestamp":"2014-04-20T23:35:29Z","content_type":null,"content_length":"2333","record_id":"<urn:uuid:2699d4ba-bbb0-4510-8634-de512b68e7fa>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [SI-LIST] : the old high-frequency return current model
C Deibele (deibele@fnal.gov)
Mon, 04 Oct 1999 11:24:26 -0500

> Other field solvers, such as the Maxwell 2D Extractor, from Ansoft, will
> solve Laplace's equation to get the capacitance matrix elements, and
> separately, Ampere's equation, to get the high frequency current
> distribution, and then the magnetic fields, and then the inductance matrix
> elements. The boundary condition it uses is that at high frequency, skin
> depth is so small, there is no field inside the conductor and it behaves
> like a perfect conductor, so the B field is only tangential at the surface,
> no normal component. I have compared the two results- Hyperlynx's and
> Ansoft's tool- and the agreement is within 2% across a wide range of aspect
> ratios.
> In addition, the Ansoft tool, for example, will separately, if requested,
> solve the Helmholtz equation (frequency domain solution for J and B) and
> extract the current distribution inside the conductor (using FEM), at a
> specified frequency. This will allow you to see the actual current
> distribution inside the conductor, as frequency is varied. The plots I sent
> around a few weeks ago were created using this tool. That's why the Ansoft
> tool is often referred to as the "Rolls Royce of field solvers".

I actually had lots of problems with Ansoft's program. If phase is of any importance, they have a lousy program for calculating it. I have been working with Sonnet software -- they have a program like Ansoft's and it works great. It isn't perfect, mind you, but it is much, much better. I don't know about the cost differences though. It all boils down to what the particular program uses for the MoM as Green's functions. Ansoft is sort of new to the business of MoM for enclosed structures. I think in a while they will have a more mature product. Not yet though. Definitely try out Sonnet's program. www.sonnet.com I believe.
{"url":"http://www.qsl.net/wb6tpu/si-list2/1835.html","timestamp":"2014-04-21T02:42:53Z","content_type":null,"content_length":"4821","record_id":"<urn:uuid:194a6f92-eef2-45e8-a06b-5e85f61c60f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Finance::Amortization - Simple Amortization Schedules

use Finance::Amortization;

# make a new schedule
$amortization = new Finance::Amortization(principal => 100000,
    rate => 0.06/12, periods => 360);

# get the balance after the twelfth period
$balance = $amortization->balance(12);

# get the interest paid during the twelfth period
$interest = $amortization->interest(12);

Finance::Amortization is a simple object oriented interface to an amortization table. Pass in the principal to be amortized, the number of payments to be made, and the interest rate per payment. It will calculate the rest on demand, and provides a few methods to ask for the state of the table after a given number of periods.

Finance::Amortization is written in pure perl and does not depend on any other modules. It exports no functions; all access is via methods called on an amortization object. (Except for new(), of course.)

$am = Finance::Amortization->new(principal => 0, rate => 0, periods => 0, compounding => 12, precision => 2);

Creates a new amortization object. Calling interface is hash style. The fields principal, rate, and periods are available, all defaulting to zero. Compounding is a parameter which sets how many periods the rate is compounded over. Thus, if each amortization period is one month, setting compounding to 12 (the default) will make the rate an annual rate. That is, the interest rate per period is the rate specified, divided by the compounding. So, to get an amortization for 30 years on 200000, with a 6% annual rate, you would call new(principal => 200000, periods => 12*30, rate => 0.06); the compounding will default to 12, and so the rate will work out right for monthly payments.

precision is used to specify the number of decimal places to round to when returning answers. It defaults to 2, which is appropriate for US currency and many others.

$rate_per_period = $am->rate()
returns the interest rate per period. Ignores any arguments.

$initial_value = $am->principal()
returns the initial principal being amortized. Ignores any arguments.

$number_of_periods = $am->periods()
returns the number of periods in which the principal is being amortized. Ignores any arguments.

$pmt = $am->payment()
returns the payment per period. This method will cache the value the first time it is called.

$balance = $am->balance(12);
Returns the balance of the amortization after the period given in the argument.

$interest = $am->interest(12);
Returns the interest paid in the period given in the argument.

This module uses perl's floating point for financial calculations. This may introduce inaccuracies and/or make this module unsuitable for serious financial applications.

Please report any bugs or feature requests to bug-finance-amortization at rt.cpan.org, or through the web interface at http://rt.cpan.org/NoAuth/ReportBug.html?Queue=Finance-Amortization.

Use Math::BigRat for the calculations. Provide amortizers for present value, future value, annuities, etc. Allow for caching calculated values. Provide output methods and converters to various table modules. HTML::Table, Text::Table, and Data::Table come to mind. Write better test scripts. Better checking for errors and out of range input. Return undef in these cases. Use a locale dependent value to set an appropriate default for precision in the new() method.

None.

This entire module is in the public domain.

Nathan Wagner <nw@hydaspes.if.org>

This entire module is written by me and placed into the public domain.
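For readers who want to sanity-check the numbers such a schedule produces, here is a small standalone sketch of the standard level-payment amortization arithmetic (in Python, independent of the Perl module; the 200000, 6%, 30-year figures are simply the example from the text above, and the helper names are my own):

def payment(principal, rate, periods):
    """Level payment per period for a fully amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -periods)

def balance(principal, rate, periods, k):
    """Remaining principal after k payments have been made."""
    pmt = payment(principal, rate, periods)
    b = principal
    for _ in range(k):
        interest = b * rate      # interest accrued this period
        b -= pmt - interest      # the remainder of the payment reduces principal
    return b

rate = 0.06 / 12                 # annual rate divided by the compounding, as new() does
periods = 12 * 30
print(round(payment(200_000, rate, periods), 2))      # about 1199.10
print(round(balance(200_000, rate, periods, 12), 2))  # balance after the twelfth period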
{"url":"http://search.cpan.org/~wagner/Finance-Amortization-0.5/lib/Finance/Amortization.pm","timestamp":"2014-04-19T23:18:12Z","content_type":null,"content_length":"17040","record_id":"<urn:uuid:d0e5e1e7-d48e-4a47-b7b2-661804902c29>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] array allocation using tuples gives views of same array Warren Focke focke@slac.stanford.... Thu Nov 15 09:29:12 CST 2007 On Thu, 15 Nov 2007, George Nurser wrote: > It looks to me like > a,b = (zeros((2,)),)*2 > is equivalent to > x= zeros((2,)) > a,b=(x,)*2 > If this is indeed a feature rather than a bug, is there an alternative > compact way to allocate many arrays? a, b = [zeros((2,)) for x in range(2)] More information about the Numpy-discussion mailing list
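A short check of the aliasing described above (plain NumPy, nothing beyond what the thread already uses):

import numpy as np
from numpy import zeros

a, b = (zeros((2,)),) * 2                # the tuple holds one array object twice
a[0] = 1.0
print(b)                                 # [ 1.  0.]  -- b changed as well
print(a is b)                            # True: both names refer to the same array

a, b = [zeros((2,)) for _ in range(2)]   # one fresh array per name
a[0] = 1.0
print(b)                                 # [ 0.  0.]
print(a is b)                            # False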
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-November/029960.html","timestamp":"2014-04-20T01:33:35Z","content_type":null,"content_length":"3253","record_id":"<urn:uuid:b6c2bea2-08d0-4a90-9d86-2b54e77e1386>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Color_charge
In particle physics, color charge is a property of quarks and gluons which is related to their strong interactions in the context of quantum chromodynamics (QCD). This has analogies with the notion of electric charge of particles, but because of the mathematical complications of QCD, there are many technical differences. The "color" of quarks and gluons has nothing to do with the visual perception of color; rather, it is a whimsical name for a property which has almost no manifestation at distances above the size of an atomic nucleus. The term "color" itself is simply derived from the fact that the property it describes has three aspects (analogous to the three primary colors), as opposed to the single "aspect" of electromagnetic charge.
Shortly after the existence of quarks was first proposed in 1964, Oscar W. Greenberg introduced the notion of color charge to explain how quarks could coexist inside some hadrons in otherwise identical states and still satisfy the Pauli exclusion principle. The concept turned out to be useful. Quantum chromodynamics has been under development since the 1970s and constitutes an important ingredient in the standard model of particle physics.
Red, Green, and Blue
One can say that a quark's color can take one of three values: "red", "green", or "blue"; and that an antiquark can take one of three "anticolors", sometimes called "antired", "antigreen" and "antiblue" (occasionally represented as cyan, magenta and yellow, respectively). In the same vein it can be said that gluons are mixtures of two colors: for example red-antigreen, and this constitutes their color charge. QCD considers eight gluons of the possible nine color/anti-color combinations to be unique; see Gluon for the reason for this. Also see Coupling constants for clarification of the following material.
Coupling constant and charge
In a quantum field theory the notions of a coupling constant and a charge are different but related. The coupling constant sets the magnitude of the force of interaction; for example, in quantum electrodynamics, the fine structure constant is a coupling constant. The charge in a gauge theory has to do with the way a particle transforms under the gauge symmetry; i.e., its representation under the gauge group. For example, the electron has charge -1 and the positron has charge +1, implying that the gauge transformation has opposite effects on them in some sense. Specifically, if a local gauge transformation $\phi(x)$ is applied in electrodynamics, then one finds
$$A_\mu \to A_\mu + \partial_\mu \phi(x), \qquad \psi \to e^{iQ\phi(x)}\psi, \qquad \overline\psi \to e^{-iQ\phi(x)}\overline\psi,$$
where $A_\mu$ is the gauge (photon) field and $\psi$ is the electron field with charge $Q$ (a bar over $\psi$ denotes its antiparticle, the positron). Since QCD is a non-Abelian gauge theory, the representations, and hence the color charges, are more complicated. They are dealt with in the next section.
Quark and gluon fields and color charges
In QCD the gauge group is the non-Abelian group SU(3). The running coupling is usually denoted by α[s]. Each flavor of quark belongs to the fundamental representation (3) and contains a triplet of fields together denoted by ψ. The antiquark field belongs to the complex conjugate representation (3^*) and also contains a triplet of fields.
We can write
$$\psi = \begin{pmatrix}\psi_1 \\ \psi_2 \\ \psi_3\end{pmatrix} \qquad\text{and}\qquad \overline\psi = \begin{pmatrix}\overline\psi^*_1 \\ \overline\psi^*_2 \\ \overline\psi^*_3\end{pmatrix}.$$
The gluon contains an octet of fields, belongs to the adjoint representation (8), and can be written using the Gell-Mann matrices as
$$\mathbf{A}_\mu = A_\mu^a \lambda_a.$$
All other particles belong to the trivial representation (1) of color SU(3). The color charge of each of these fields is fully specified by the representations. Quarks and antiquarks have color charge 4/3, whereas gluons have color charge 8. All other particles have zero color charge. Mathematically speaking, the color charge of a particle is the value of a certain quadratic Casimir operator in the representation of the particle.
In the simple language introduced previously, the three indices "1", "2" and "3" in the quark triplet above are usually identified with the three colors. The colorful language misses the following point. A gauge transformation in color SU(3) can be written as ψ → Uψ, where U is a 3×3 matrix which belongs to the group SU(3). Thus, after a gauge transformation, the new colors are linear combinations of the old colors. In short, the simplified language introduced before is not gauge invariant.
Color charge is conserved, but the book-keeping involved in this is more complicated than just adding up the charges, as is done in quantum electrodynamics. One simple way of doing this is to look at the interaction vertex in QCD and replace it by a color-line representation. The meaning is the following. Let ψ[i] represent the i-th component of a quark field (loosely called the i-th color). The color of a gluon is similarly given by $a$, which corresponds to the particular Gell-Mann matrix it is associated with. This matrix has indices i and j. These are the color labels on the gluon. At the interaction vertex one has q[i] → g[ij] + q[j]. The color-line representation tracks these indices. Color charge conservation means that the ends of these color-lines must be either in the initial or final state; equivalently, that no lines break in the middle of a diagram.
Since gluons carry color charge, two gluons can also interact. A typical interaction vertex (called the three-gluon vertex) for gluons involves g + g → g. This is shown here, along with its color-line representation. The color-line diagrams can be restated in terms of conservation laws of color; however, as noted before, this is not a gauge-invariant language. Note that in a typical non-Abelian gauge theory the gauge boson carries the charge of the theory, and hence has interactions of this kind; for example, the W boson in the electroweak theory. In the electroweak theory, the W also carries electric charge, and hence interacts with a photon.
See also
• Howard Georgi, Lie algebras in particle physics, (1999) Perseus Books Group, ISBN 0-7382-0233-9.
• David J. Griffiths, Introduction to Elementary Particles, (1987) John Wiley & Sons, New York, ISBN 0-471-60386-4
• J. Richard Christman, Color and Charm, (2001) Project PHYSNET document MISN-0-283.
{"url":"http://www.reference.com/browse/wiki/Color_charge","timestamp":"2014-04-17T09:41:26Z","content_type":null,"content_length":"83671","record_id":"<urn:uuid:77866113-79f8-415c-a46e-968f43929b8a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Contradictory Derivative
October 18th 2012, 12:52 AM #1
Let's say f(x) = x^2. This would give f'(x) = 2x. Now we change the definition of f(x) = x^2 to f(x) = x + x + x + ... up to x times. If we now differentiate it with respect to x, we will get f'(x) = d(x)/dx + d(x)/dx + d(x)/dx + ... up to x times, and this sums up to x. Thus, according to the new definition of f(x), the derivative f'(x) would be x and not 2x. Though we all know that there is some gap in calculating the derivative the second way, what is the missing thing here? How can it be captured to cover every aspect?

Re: Contradictory Derivative
Hey vamosromil. The subtle difference is that 1) this only applies if x is an integer, and 2) such a summation assumes that we sum up a known, fixed number of times (i.e. n*x where n is fixed). If you add up d(x)/dx n times you get n, which is what we expect: the derivative of nx is n and does not depend on x at all. You can't do what you did, because what you are doing (and the way you are doing it) is ill-defined. If you are summing up a fixed number of times it is just (x + x + ... + x) n times, and not "x" times (what if x is 1.123123798123978123 or pi?). You have to be careful with how you specify things.

Re: Contradictory Derivative
I wonder how you can write down 2.435 + 2.435 + ... exactly 2.435 times?

Re: Contradictory Derivative
Hello everyone, I see there are some typical questions being asked. I posted this problem on another forum too, and the first question, about writing down a real number other than an integer that many times, has also been addressed there. But the core explanation is yet to be found. Here is the link to that forum: Contradictory Derivative

Re: Contradictory Derivative
But the core explanation is yet to be found? What "core explanation?" Asked and answered.
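A compact way to state the missing piece (standard product-rule bookkeeping, written out here for reference rather than quoted from the thread):

% f(x) = x + x + ... + x ("x times") is just f(x) = x * x.
% Term-by-term differentiation varies only the summands and forgets
% that the number of summands also depends on x.
\[
  f(x) \;=\; \underbrace{x + x + \cdots + x}_{x\ \text{terms}} \;=\; x\cdot x,
  \qquad
  \frac{d}{dx}\,(x\cdot x)
  \;=\; \underbrace{1\cdot x}_{\text{summands vary}}
  \;+\; \underbrace{x\cdot 1}_{\text{number of summands varies}}
  \;=\; 2x .
\]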
{"url":"http://mathhelpforum.com/pre-calculus/205592-contradictory-derivative.html","timestamp":"2014-04-18T17:07:20Z","content_type":null,"content_length":"48770","record_id":"<urn:uuid:8206fef3-064c-481b-8666-f9d7faf70e26>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
REAL_POLY_MUL_CONV : term -> thm Multiplies two real polynomials while retaining canonical form. For many purposes it is useful to retain polynomials in a canonical form. For more information on the usual normal form in HOL Light, see the function REAL_POLY_CONV, which converts a polynomial to normal form while proving the equivalence of the original and normalized forms. The function REAL_POLY_MUL_CONV is a more delicate conversion that, given a term p1 * p2 where p1 and p2 are real polynomials in normal form, returns a theorem |- p1 * p2 = p where p is in normal form. Fails if applied to a term that is not the product of two real terms. If these subterms are not polynomials in normal form, the overall normalization is not guaranteed. # REAL_POLY_MUL_CONV `(x pow 2 + x) * (x pow 2 + -- &1 * x + &1)`;; val it : thm = |- (x pow 2 + x) * (x pow 2 + -- &1 * x + &1) = x pow 4 + x More delicate polynomial operations than simply the direct normalization with REAL_POLY_CONV. REAL_ARITH, REAL_POLY_ADD_CONV, REAL_POLY_CONV, REAL_POLY_NEG_CONV, REAL_POLY_POW_CONV, REAL_POLY_SUB_CONV, REAL_RING.
{"url":"http://www.cl.cam.ac.uk/~jrh13/hol-light/HTML/REAL_POLY_MUL_CONV.html","timestamp":"2014-04-16T13:25:21Z","content_type":null,"content_length":"2304","record_id":"<urn:uuid:b298e293-a2e7-40ac-8fd0-360db8ac40f1>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
9th Grade Math Problems What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: I can't tell you how happy I am to finally find a program that actually teaches me something!!! Derek Brennan, IL Thank you so much Algebrator , you saved me this year, I was afraid to fail at Algebra class, but u saved me with your step by step solving equations, am thankful. A.R., Arkansas I like the ability to show all, some, or none of the steps. Sometimes I need to cross reference my work, and other times I just need to check the solution. I also like how an explanation can be shown for each step. That helps learn the functions of each different method for solving. Lori Barker Search phrases used on 2009-09-15 : Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among • solve quadratic equations cubed • trigonometric problems with answers • word problems ising formuals • course 3 chapter 4 pre algebra worksheets • free 6th grade math book probability tree diagram • free book downloads for 10 year olds • cube root of fraction • solve the quadratic by factoring calculator • free printable daily third grade math • word problem of application of system of linear equation integers exponent • multiply expressions with exponents • simplifying fraction square root • convert to a square root • Subtraction and Addition method, algebra 1, problems online • greatest common factor of 900 • projects*quadratics • free ratio math worksheets • www.emathtutors.com • square formula • solving+inequalities+worksheet • online simultaneous equation solver • ks3 maths lesson plans • Solving Differential Equations on the TI 89 • sample paper maths of class-7th • algebra asymptote lessons • simple slope formula for excel • solving equations by adding or subtracting fractions • general aptitude questions with solutions • free math worksheets for estimation • algebraic expansion exponent • plotting point & solving equations free worksheets • Free check my algebra problems • practise question paper for geometry std ninth • system equations+math games • worksheets for level 2 calculation on area and volume • importance of quadratic equation • learning basic algebra • y intercept quadratic formula given two points • free math worksheets polygons answer key • find formula for numbers solver • Type in geometry Problem Get Answer • free online calculator adding rational expression • practical test in quadratic equation using factoring • adding and subtracting systems of equations solver • polynomial problem solver • fun lessons using square roots • free online rational expressions calculator • trig values • squaring radical fractions • Algebra and Trigonometry Structure and Method Book 2 McDougal Littell • quadratic trinomial comics • multiplication worksheets to do on the computer for 3rd graders • pizzazz book d answers • graph hyperbola • gcse 8th grade math question paper(ratio and proportion) • system of equation by substitution worksheet /puzzle • high school math sat sample worksheet • 11+ worksheets printables • Downloadable Aptitude Tests Free • simultaneous motion problems on the gmat • developing skills in algebra book c • rudin solution • how to calculate log TI • glencoe math games • gcse maths algebra worksheets • example poems about mathematics • Finding Equation for a Line solver • inequality worksheets 
• geometry tutorial for 9th and10th grade • equations on finding the percent of a whole • the square root of 1800 • double root calculator • convert 0.416666667 to a fraction • simplifying complex rational expressions answer generator • radical expressions dividing • green globs cheats • If both are negative then make them both positive in the fraction. • algebra 1 book answers nc edition • free algebra 2 online tutoring • solve inequality parameter mathematica • take a online pratice test AHSGE science • free graphing and lines solvers • who invented hyperbolas and ellipses • "equation worksheet" 6th grade • solver ti • binomial cubed • prentice hall 8th grade science worksheets and answers • linear systems worksheets free download
{"url":"http://www.softmath.com/algebra-help/9th-grade-math-problems.html","timestamp":"2014-04-20T03:10:27Z","content_type":null,"content_length":"25957","record_id":"<urn:uuid:60fe9c03-7f4f-41ff-9946-43f6a4d5588f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Deriving filtering algorithms from constraint checkers Beldiceanu, Nicolas and Carlsson, Mats and Petit, Thierry (2004) Deriving filtering algorithms from constraint checkers. In: Principles and Practice of Constraint Programming – CP 2004, 10th International Conference, 27 Sept - 1 Oct 2004, Toronto, Canada. Full text not available from this repository. This article deals with global constraints for which the set of solutions can be recognized by an extended finite automaton whose size is bounded by a polynomial in n, where n is the number of variables of the corresponding global constraint. By reformulating the automaton as a conjunction of signature and transition constraints we show how to systematically obtain a filtering algorithm. Under some restrictions on the signature and transition constraints this filtering algorithm achieves arc-consistency. An implementation based on some constraints as well as on the metaprogramming facilities of SICStus Prolog is available. For a restricted class of automata we provide a filtering algorithm for the relaxed case, where the violation cost is the minimum number of variables to unassign in order to get back to a solution. Item Type: Conference or Workshop Item (Paper) Additional Information: Lecture Notes in Computer Science; 3258, ISBN 978-3-540-23241-4, Extended version available as SICS Tech Report T2004:08 ID Code: 2720 Deposited On: 21 May 2008 Last Modified: 18 Nov 2009 16:13
{"url":"http://soda.swedish-ict.se/2720/","timestamp":"2014-04-19T17:13:07Z","content_type":null,"content_length":"14706","record_id":"<urn:uuid:31472b50-9581-4a6e-af04-fc06c4cf841a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Overlap Add Method using Circular Convolution Technique 08 Apr 2013 (Updated 11 Apr 2013) Performs convolution using the Overlap Add Method with the Circular convolution. % Theory: % Overlap Add Method: % The overlapadd method is an efficient way to evaluate the discrete convolution of a very long signal with a finite impulse response % (FIR) filter where h[m] = 0 for m outside the region [1, M].The concept here is to divide the problem into multiple convolutions of h[n] % with short segments of x[n], where L is an arbitrary segment length. Because of this y[n] can be written as a sum of short convolutions. % Algorithm: % The signal is first partitioned into non-overlapping sequences, then the discrete Fourier transforms of the sequences are evaluated by % multiplying the FFT xk[n] of with the FFT of h[n]. After recovering of yk[n] by inverse FFT, the resulting output signal is reconstructed by % overlapping and adding the yk[n]. The overlap arises from the fact that a linear convolution is always longer than the original sequences. In % the early days of development of the fast Fourier transform, L was often chosen to be a power of 2 for efficiency, but further development has % revealed efficient transforms for larger prime factorizations of L, reducing computational sensitivity to this parameter. % A pseudo-code of the algorithm is the following: % Algorithm 1 (OA for linear convolution) % Evaluate the best value of N and L % H = FFT(h,N) (zero-padded FFT) % i = 1 % while i <= Nx % il = min(i+L-1,Nx) % yt = IFFT( FFT(x(i:il),N) * H, N) % k = min(i+N-1,Nx) % y(i:k) = y(i:k) + yt (add the overlapped output blocks) % i = i+L % end % Circular convolution with the overlapadd method % When sequence x[n] is periodic, and Nx is the period, then y[n] is also periodic, with the same period. To compute one period of y[n], % Algorithm 1 can first be used to convolve h[n] with just one period of x[n]. In the region M ? n ? Nx, the resultant y[n] sequence is correct. % And if the next M ? 1 values are added to the first M ? 1 values, then the region 1 ? n ? Nx will represent the desired convolution. % The modified pseudo-code is: % Algorithm 2 (OA for circular convolution) % Evaluate Algorithm 1 % y(1:M-1) = y(1:M-1) + y(Nx+1:Nx+M-1) % y = y(1:Nx) % end clear all; Xn=input('Enter 1st Sequence X(n)= '); Hn=input('Enter 2nd Sequence H(n)= '); L=input('Enter length of each block L = '); % Code to plot X(n) subplot (2,2,1); xlabel ('n---->'); ylabel ('Amplitude ---->'); title(' X(n)'); %Code to plot H(n) subplot (2,2,2); xlabel ('n---->'); ylabel ('Amplitude ---->'); title(' H(n)'); % Code to perform Convolution using Overlap Add Method Xn=[Xn zeros(1,L-R)]; Hn=[Hn zeros(1,N-M)]; for k=0:K Xnk=[Xnp z]; y(k+1,:)=mycirconv(Xnk,Hn); %Call the mycirconv function. for i=1:K %Code to plot the Convolved Signal subplot (2,2,3:4); xlabel ('n---->'); ylabel ('Amplitude ---->'); title('Convolved Signal'); % Add title to the Overall Plot ha = axes ('Position',[0 0 1 1],'Xlim',[0 1],'Ylim',[0 1],'Box','off','Visible','off','Units','normalized', 'clipping' , 'off'); text (0.5, 1,'\bf Convolution using Overlap Add Method ','HorizontalAlignment','center','VerticalAlignment', 'top') Performs convolution using the Overlap Add Method with the Circular convolution. 
% Theory:
%
% Overlap Add Method:
% The overlapadd method is an efficient way to evaluate the discrete convolution of a very long signal with a finite impulse response
% (FIR) filter where h[m] = 0 for m outside the region [1, M]. The concept here is to divide the problem into multiple convolutions of h[n]
% with short segments of x[n], where L is an arbitrary segment length. Because of this y[n] can be written as a sum of short convolutions.
%
% Algorithm:
%
% The signal is first partitioned into non-overlapping sequences, then the discrete Fourier transforms of the sequences are evaluated by
% multiplying the FFT of xk[n] with the FFT of h[n]. After recovering of yk[n] by inverse FFT, the resulting output signal is reconstructed by
% overlapping and adding the yk[n]. The overlap arises from the fact that a linear convolution is always longer than the original sequences. In
% the early days of development of the fast Fourier transform, L was often chosen to be a power of 2 for efficiency, but further development has
% revealed efficient transforms for larger prime factorizations of L, reducing computational sensitivity to this parameter.
% A pseudo-code of the algorithm is the following:
%
% Algorithm 1 (OA for linear convolution)
%   Evaluate the best value of N and L
%   H = FFT(h,N)                        (zero-padded FFT)
%   i = 1
%   while i <= Nx
%     il = min(i+L-1,Nx)
%     yt = IFFT( FFT(x(i:il),N) * H, N)
%     k = min(i+N-1,Nx)
%     y(i:k) = y(i:k) + yt              (add the overlapped output blocks)
%     i = i+L
%   end
%
% Circular convolution with the overlapadd method
%
% When sequence x[n] is periodic, and Nx is the period, then y[n] is also periodic, with the same period. To compute one period of y[n],
% Algorithm 1 can first be used to convolve h[n] with just one period of x[n]. In the region M <= n <= Nx, the resultant y[n] sequence is correct.
% And if the next M-1 values are added to the first M-1 values, then the region 1 <= n <= Nx will represent the desired convolution.
% The modified pseudo-code is:
%
% Algorithm 2 (OA for circular convolution)
%   Evaluate Algorithm 1
%   y(1:M-1) = y(1:M-1) + y(Nx+1:Nx+M-1)
%   y = y(1:Nx)
% end

clc;
clear all;
Xn=input('Enter 1st Sequence X(n)= ');
Hn=input('Enter 2nd Sequence H(n)= ');
L=input('Enter length of each block L = ');

% Code to plot X(n)
subplot (2,2,1);
stem(Xn);
xlabel ('n---->');
ylabel ('Amplitude ---->');
title(' X(n) ');

%Code to plot H(n)
subplot (2,2,2);
stem(Hn,'red');
xlabel ('n---->');
ylabel ('Amplitude ---->');
title(' H(n)');

% Code to perform Convolution using Overlap Add Method
NXn=length(Xn);
M=length (Hn);
M1=M-1;
R=rem(NXn,L);
N=L+M1;
Xn=[Xn zeros(1,L-R)];
Hn=[Hn zeros(1,N-M)];
K=floor(NXn/L);
y=zeros(K+1,N);
z=zeros(1,M1);
for k=0:K
    Xnp=Xn(L*k+1:L*k+L);
    Xnk=[Xnp z];
    y(k+1,:)=mycirconv(Xnk,Hn);   %Call the mycirconv function.
end
p=L+M1;
for i=1:K
    y(i+1,1:M-1)=y(i,p-M1+1:p)+y(i+1,1:M-1);
end
z1=y(:,1:L)';
y=(z1(:))'

%Code to plot the Convolved Signal
subplot (2,2,3:4);
stem(y,'black');
xlabel ('n---->');
ylabel ('Amplitude ---->');
title('Convolved Signal');

% Add title to the Overall Plot
ha = axes ('Position',[0 0 1 1],'Xlim',[0 1],'Ylim',[0 1],'Box','off','Visible','off','Units','normalized', 'clipping' , 'off');
text (0.5, 1,'\bf Convolution using Overlap Add Method ','HorizontalAlignment','center','VerticalAlignment', 'top')
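As a quick cross-check of the overlap-add idea in a different environment, here is a short NumPy sketch written for illustration; it is not part of the File Exchange submission above, and the block length and test signal are arbitrary choices. It splits x into blocks of length L, convolves each block with h, and adds the overlapping pieces, which matches direct linear convolution.

import numpy as np

def overlap_add(x, h, L):
    # Accumulate block convolutions at their proper offsets (overlap-add).
    M = len(h)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        y[start:start + len(block) + M - 1] += np.convolve(block, h)
    return y

x = np.random.default_rng(1).standard_normal(100)
h = np.array([1.0, 0.5, 0.25])
print(np.allclose(overlap_add(x, h, 16), np.convolve(x, h)))   # True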
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/41173-overlap-add-method-using-circular-convolution-technique/content/Overlap%20Add%20Method/Overlap_Add_Method.m","timestamp":"2014-04-17T09:54:15Z","content_type":null,"content_length":"22530","record_id":"<urn:uuid:be2c9610-74df-4b3b-a2e2-45c3e6683881>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational Complexity

When I won't be in email contact for a while I set up a vacation program so that if someone emails me they get a message. I used to use the following: I am not in email contact. If you absolutely, positively, have to contact me then get a life. My wife told me this was offensive, so I changed it to I am not in email contact. If you absolutely, positively, have to contact me then you have the wrong priorities. She didn't like that one much either, but it was better. And I think it's cleverer. But this raises the question, what is the proper etiquette for vacation programs?
1. 15 years ago someone who is not computer savvy was offended by the `get a life' vacation program, thinking that I had sent it personally.
2. 2 years ago a shy grad student from a different school was terrified by my `wrong priorities' vacation program.
3. Aside from that, most people tell me they like both of them.
4. I often email someone, get a vacation program reply, and then within 5 minutes get a real reply. I find that somewhat rude.
5. Whatever the vacation program etiquette, it will likely be irrelevant as we are logged on more and more, even on vacation.

(Guest post from Shiva Kintali. All capital letters, italics, and boldface are from Shiva.)

A request to FOCS/STOC PC members

Hi All, I would like to point out a concern I have about the FOCS/STOC conference proceedings. There is a huge gap of around four months between the announcement of accepted papers and the conference date. For example: STOC'07 acceptance date was Feb-18th and conference date is June 11. FOCS'07 acceptance date was July 1st and conference date is Oct 21. Most of the authors don't upload their drafts/camera-ready papers on their homepages, for some unknown reasons. Some of them are kind enough to send their drafts if you send them an e-mail. Some don't bother to reply. If there is an exciting result (most of the STOC/FOCS papers have exciting results), most of us would like to know the techniques used, as soon as possible. For example, one of the FOCS'07 results helped me a lot in my research. I knew that the result can be used in my research, but I had to wait for four months. Waiting for four months to know the details of a result is really frustrating. Also, there is a gap of around 40 days between the acceptance date and the deadline for camera-ready submissions. I guess the difference between the submitted paper and camera ready version is latexification and adding the suggestions of the reviewers. This should not take more than a couple of weeks. Once the committee is happy with the camera-ready version, the digital proceedings can be uploaded on the ACM/IEEE portals. I think a gap of one month between the acceptance date and uploading the digital proceedings is reasonable. Of course, this would require some hard work from the authors and the committee. This hard work would not go to waste !! Can somebody PLEASE propose this in the next FOCS/STOC business meeting !!

The game of checkers seems to have been solved. It's a draw. See here or here if you don't mind seeing an ad for low cholesterol cooking before getting to the article or here if you subscribe to the nytimes or here if you trust Wikipedia. The checkers program CHINOOK cannot lose (it can draw). The program has been around for quite some time, being improved over time. The researchers are Jon Schaeffer (the originator), Rob Lake, Paul Lu, Martin Bryant, and Norman Treloar. They say they have a `computational proof not a math proof'.
Not sure what that means, but I do believe that Checkers is now done. There is a very good book called One Jump Ahead that is about the program Chinook that plays Checkers very well (now perfectly apparently) but it was written a long time ago, before the recent news. My impression of Chess and Checkers playing programs is that they are very clever engineering but not really much for a theorist to get excited about. However, very clever engineering should not be underrated. I also think that these programs have taught us that (some) humans are very good at these games in a way that is different than machines. When Deep Blue beat Kasparov, rather than thinking (as the popular press did) Oh no, computers are smarter than humans!! I thought Wow, it took that much computing power and that much look-ahead to beat Kasparov. Kasparov must be very good (duh) and the way he plays is different than what a computer would do. Similarly, the Chinook researchers ended up being very impressed with Marion Tinsley (the best checkers player of all time, since deceased). Analysing his games it seems as though he almost never made a mistake. Chinook and Tinsley had two matches- Tinsley won the first one with 4 wins to Chinook's 2. During the second one Tinsley took ill and had to forfeit- he died a few months later. Will checkers decline in popularity? I don't think so--- its already so unpopular that it can't decline much. This story may give it a temporary revival.

A PhD Student, Michal Kouril, found a new van der Waerden number, W(6,2)=1132. See here for details. I had a list of known VDW numbers in an earlier post, but I redo it here with the new result. VDW(k,c) is the least number W such that no matter how you c-color the elements {1,2,...,W} there will be k numbers equally spaced (e.g., 3,7,11,15) that are the same color. W(k,c) exists by VDW's Theorem. See Wikipedia or my post in Luca's blog. The only VDW numbers that are known are as follows (see this paper by Landman, Robertson, Culver from 2005 and the website above about W(6,2)):
1. VDW(3,2)=9, (easy)
2. VDW(3,3)=27, (Chvátal, 1970, math review entry)
3. VDW(3,4)=76, (Brown, Some new VDW numbers (prelim report), Notices of the AMS, Vol 21, (1974), A-432)
4. VDW(4,2)=35, Chvátal ref above
5. VDW(5,2)=178, Stevens and Shantarum, 1978 full article!
6. VDW(6,2)=1132, Michal Kouril, 2007. (Not available yet.)

Over email I had the following Q & A with Michal Kouril.
BILL: Why is it worth finding out?
MICHAL: As my advisor Jerry Paul put it, "Why do we climb Mount Everest?" Because it is there! The advances we've made during the pursuit of W(6,2) can have implications on other worthy problems.
BILL: Predict when we will get W(7,2).
MICHAL: September 30, 2034. Or any time before or after. Interest in Van der Waerden numbers has been growing lately and I would not be surprised if we saw W(7,2) a lot sooner than this. Some unknown VDW numbers are already just a matter of the amount of computing power you throw at them in order to prove the exact value. But W(7,2) still needs more analysis to make it provable in a reasonable amount of time.
(Back to bill's blog:) In a perfect world Michal would be interviewed by Stephen Colbert instead of me. Oh well...

The following is a quote from Comedian Jerry Seinfeld. The source is Seinfeld Universe: The Entire Domain by Greg Gattuso (Publisher of Nothing: The Newsletter for Seinfeld Fans), page 96. I was great at Geometry. If I wanted to train someone as a comedian, I would make them do lots of proofs.
That's what comedy is: a kind of bogus proof. You set up a fallacious premise and then prove it with rigorous logic. It just makes people laugh. You'll find that most of my stuff is based on that system ... You must think rationally on a completely absurd plane.

I doubt that many comedians have seen lots of proofs though they may have an intuitive sense of logic for their routines. And not all comedians use this style. I know of one theoretical computer scientist who is a comedy writer. Jeff Westbrook got his PhD in 1989 with Robert Tarjan on Algorithms and Data Structures for Dynamic Graph Algorithms. He was faculty at Yale, and then a researcher at AT&T before working on the TV shows Futurama and The Simpsons. I actually met him in 1989- he didn't seem that funny at the time. Are there other theorists or mathematicians that are also professional comedians or comedy writers? I doubt there are many. If you define theorist or mathematician as having a PhD then I assume it's very very few. If you define it as majored in math or CS there would probably be some.

I will be sending the following letter by snail mail and you should send a similar letter- you may have an effect on spam.

Dear Governor Huckabee,
There is someone trying to destroy America's computer infrastructure and blame it on you! I received an email (excerpts below) that looks like it was from your campaign but clearly it is not. I know it is not from your campaign since spam is so vile, so disgusting, that a man of high moral character such as yourself would not use it. (Note that even your ethically challenged competitors have not used it.) The spam in question asks the receiver to send a certain email to friends, relatives, and co-workers. This sounds like a chain letter, which is illegal, but of more importance it could crash America's computers. I urge you to take some action to make sure the public knows it is not you behind this vile spam, and put some effort into tracking down the people. Here are excerpts and my comments on it.

Mike Huckabee - The Exploratory Committee When we launched the barber pole campaign a few weeks ago to raise 400 contributions in 96 hours, we had a tremendous response: 600+ total contributions, 400+ first-time contributors to the campaign and quite a few laughs.
While this is not quite asking for money, that might be the next step in this disgusting scam.
Republicans, Democrats and Independents. I am interested in sharing my vision for America with all comers. I have a clear record that I'm proud of and I am willing to promote it to anyone regardless of their politics.
Another dead giveaway--- during the primaries you target your own party only.
The goal of this new, online campaign is to have 400 online volunteers send emails on the campaign's behalf over the next 72 hours. Please focus only on people you know: friends, family members and co-workers. We have designed a special email that we would like you to send.
This is the real dirt- they want to flood our computers with this email!!!
Now that you are alerted to the danger, please do something about it.
William Gasarch, Concerned Citizen

A blog entry of Lance's on open problems noted that it would be good to have a repository of open problems. Perhaps a wiki or something. I recently got the following email that may be an answer: I am writing you in (very belated) response to a post on your blog in mid March. You posted a message called "A Place for Open Problems" where you suggest: "We need some Web 2.0 system.
A blog or wiki to post the problems. A tagging method to mark the area and status. A voting system to rank the importance of the problem. A commenting system for discussion. A sophisticated RSS system for tracking. A visual appealing and simple interface. And most importantly, someone willing to put it all together for no compensation beyond the thanks of the community." Together with Robert Samal, we have just finished the construction of a system which matches your request quite closely. There are still some small modifications we are making, but it is alive and fully functional, and we would greatly appreciate any input/publicity from you and your readers. Our website is called "The Open Problem Garden" and lives at the following url: here it is Hope you enjoy it. Best, Matt DeVos

I corrected them about Lance making that posting, not me. Of much more importance - they have a wiki!! Is it good to use? Will we use it? This is one of those chicken-and-egg problems where if enough people use it then it will be a good resource. Of course, Matt and Robert are not innocent bystanders- if it has a good interface and other features then we are more likely to use it. It seems to be open problems in all of mathematics, though computer science theory is a category. If there was a wiki tailored to Theory would that be better or worse? I would guess worse because the distinction can be artificial anyway. And of course there is the issue of- are you better off working on your open problems or posting them? It may come down to this: Which is greater, your curiosity or your ego?

(Guest Post by Ken Regan) pdf file available here

Computational complexity theory is the study of information flow and the effort required for it to reach desired conclusions. Computational models like cellular automata, Boolean or algebraic circuits, and other kinds of fixed networks exemplify this well, since they do not have "moving parts" like Turing machine tape heads, so the flow's locations are fixed. Measures of effort include the time for the flow, the amount of space or hardware needed, and subtler considerations such as time/space to prepare the network, or energy to overcome possible dissipation during its operation. These models and measures have fairly tight relations to Turing machines and their familiar complexity measures.

For an example and open problem, consider the general task of moving all "garbage bits" to the end of a string, leaving the "good bits" in their original sequence. We can model this as computing the function f: {0,1,2}^* → {0,1,2}^* exemplified by f(1020212) = 1001222, f(2200) = 0022, f(101) = 101, etc., with 0,1 as "good bits" and 2 as "garbage." A rigorous inductive definition, using e for the empty string, is f(e) = e, f(0x) = 0f(x), f(1x) = 1f(x), and f(2x) = f(x)2. This is the "topological sort" of the partial order B = {0 < 2, 1 < 2} that is stable, meaning that subsequences of incomparable elements are preserved. The problem is, can we design circuits C[n], each computing f(x) on strings x of length n, that have size O(n)?

The circuits C[n] have input gates labeled x[1],...,x[n] which receive the corresponding "trits" (0, 1, or 2) of the input string x, and output gates y[1],...,y[n] giving y = f(x). The first question is, what interior computational gates can C[n] have? A comparator gate g for a partial order (P, < ) has two input and two output wires, maps (a,b) either to (a,b) or (b,a), and never maps to (d,c) when c < d.
The unique stable comparator g[P] maps (a,b) to (a,b) unless b < a. The following slightly extends the famous 0-1 law for comparator networks:

Theorem 1. If a circuit C[n] of comparator gates computes f(x) correctly for all x ∈ {0,2}^n (not even including any 1s), then for every partial order (P, < ), the circuit C[P] with each comparator replaced by g[P] computes the stable topological sort of P.

Proof. First suppose C[P] errs for a total order (P, < ). Then there are x,y ∈ P^n such that C[P](x) = y, but for some j, y[j+1] < y[j]. Take the permutation p such that x[i] = y[p(i)] for all indices i. Define a binary string y′ ∈ {0,2}^* by y′[i] = 0 if y[i] < y[j], y′[i] = 2 otherwise, and x′ by x′[i] = y′[p(i)] for all i. Then C[n](x′) = y′ (exercise: prove this by induction taking gates one at a time), contradicting that the original C[n] was correct on {0,2}^*. For (P, < ) not a total order, an error C[P](x) = y (which might violate only stability) is also an error in the total order (P[x], <′) with P[x] = {(a,i): x[i] = a} and (a,i) <′ (b,j) if a < b or a is not comparable to b and i < j. □

Corollary 2. Circuits C[n] of comparator gates computing f require size n log_2(n) - O(n). □

This follows by applying the standard sorting lower bound to C[P]. It's interesting that we did not need 1s in x to argue stability, and the lower bound allows gates g in C[n] to be arbitrary when either input is 1. For general circuits, however, the argument doesn't hold, and all bets are off! To see why, consider sorting the total order {0 < 1 < 2}. Clever O(n)-size circuits can count the numbers a,b,c of 0s, 1s, and 2s in the input string x, respectively, and then assemble the correct output y = 0^a 1^b 2^c. For the basic idea see Muller-Preparata, 1975, and various sources on the "Dutch National Flag Problem." Applying this counting idea to our poset B reduces our task to "nice" strings z of length N = 2^k with exactly N/2 2s.

Theorem 3. If s(N)-size circuits D[N] can compute f(z) for "nice" z, then f has circuits of size at most s(4n) + O(n).

Proof. We can build O(n)-size circuits E[n] that on inputs x of length n count b,c as above and find k such that m = 2^(k-1) is the least power of 2 above n. Make E[n](x) output z = x 1^(m+c-n) 2^(m-c), which gives |z| = N < 4n. Then compute y′ = D[N](z) and re-use the computed b,c,m to pluck off the n bits of f(x). □

This reduction to nice z enhances the "flow" metaphor. The m-many 2s in z can be advance-routed to the last m places of y′, so the whole issue is how the m-many 0s and 1s in z flow together into the first m places of y′. Must this flow progress (without loss of circuit-size generality) by "squeezing out 2s" in an intuitively plane-filling fashion, allowing "mileposts" whose forced spacing might mandate having n log_2(n) - O(n) gates? Or can linear-size networks rise above the planar view? No one I've asked has known, and lack of them frustrates a desired general linear-size circuit simulation of my "Block Move" model. Issues here may be involved. Nor do I know nicer descriptions of O(n log n)-sized circuits than "use ancillas to tag bits of x and work in P[x] as in the proof of Theorem 1, employing ideas of Theorem 3 and/or mapping into the O(n log n)-sized Ajtai-Komlos-Szemeredi networks." Those seeking an o(n log n) upper bound may be my guest, but those believing a super-linear circuit lower bound must reflect that no such bounds are known for string functions whose graphs belong to NP or to E.
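To keep the object of study concrete, here is a tiny Python sketch, added purely for illustration and not from the original post, of f computed sequentially; the easy part is the sequential computation, while the open problem above concerns linear-size circuits, not sequential time.

def f(s):
    # Stable topological sort of B = {0 < 2, 1 < 2}: keep the 0/1
    # "good bits" in their original order and send every 2 to the rear.
    good = [c for c in s if c != "2"]
    garbage = [c for c in s if c == "2"]
    return "".join(good + garbage)

assert f("1020212") == "1001222"
assert f("2200") == "0022"
assert f("101") == "101"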
The above inductive definition of f yields a linear-time algorithm on any model that simulates each operation of a double-ended queue in O(1) time. But is booting a 2 to the rear in f(2x) = f(x)2 really in constant time, even amortized? True, our technical issues shrink away on passing from linear to polynomial time, so all this may seem to have nothing to do with P versus NP. But au-contraire the Baker-Gill-Solovay "oracle" obstacle may mean nothing more than that standard "diag-sim" and timing techniques are insensitive to internal information flow. The "Natural Proofs" obstacle may ultimately say only that network-preparation/"nonuniformity" is a subtly powerful consideration. Honing tools for information-flow analysis on incrementally more-general cases that yield super-linear lower bounds may be the walk to walk before trying to run.

On the post Math Terms used in Real Life- Good or Bad I mentioned the following: On 24, season two, there was a line `we can't break in, its been Huffman coded!' This makes no sense mathematically but it raises awareness of security issues. I had thought that Huffman Codes are just used to compress data and had nothing to do with hiding information. I was wrong! Yakov Nekrich pointed out the following to me: Actually Huffman codes can be difficult to break, see for instance this article: On breaking a Huffman code by Gillman, D.W., Mohtashemi, M., Rivest, R.L. I'm curious- did the writers of 24 know this or not? I would guess no, and they just lucked out. Unless Gillman or Mohtashemi is moonlighting as a writer for 24 (I doubt Rivest needs the money.)

As a collector of Novelty songs and a math-person I was morally obligated to purchase Musical Fruitcake by The Klein Four, a band consisting of math grad students singing songs about math. They sing a cappella (without instruments). While you are not morally obligated to purchase their CD, you can. Or find out more about them (or go here for samples). SO, how is their CD? I give each song a rating between 1 and 10, 10 being Excellent and 1 being unlistenable.
1. Power of One: A love song that uses Math. Rather pleasant and clever. But the math is fairly easy. lyrics Rating: 8.
2. Finite Simple Group of Order two: Their signature song, and their best known since it's on You-Tube. Another love song that uses math, but much more sophisticated math. Better sung on the CD than on the video. lyrics Rating: 9
3. Three Body Problem: Sung by a guy about losing his girl to another guy. Lots of Physics-Math involved. Touching. lyrics Rating: 7
4. Just the four of us: Seems to be autobiographical and partially a Rap Satire. More fun for them than for me. lyrics Rating: 5
5. Lemma: Lyrics are not online. That's just as well. It sounds like it's a song about liking a lemma- not funny enough for satire, not serious enough for--- how could a math song ever be serious? Rating: 4
6. Calculating: The best song ever written about algebraic topology. lyrics Rating: 6
7. XX Potential: Lyrics not online. About Women doing math (XX vs XY). Nice rhymes but not much math in the song. Rating: 6
8. Confuse Me: About how confusing math can be. Mentions some math- mostly group theory. (A commenter corrected me on this- there is no group theory in this song. I was... confused.) lyrics Rating:
9. Universal: Yet another love song that uses Math. The math used is intermediary between Power of One and Finite Simple Group. Tune is not catchy.
Lyrics are as tedious as Category Theory. Lyrics not on line. Rating: 4
10. Contradiction: Seems to be a guy singing about having lost his girlfriend. But it's hard to tell- which is a problem. Also, no math except `contradiction'. Lyrics not on line. Rating: 4
11. Mathematics Paradise: To the tune of Gangster Paradise by Coolio. Weird Al had the song Amish Paradise to that tune, and for a brief time Coolio was mad at him for that (they seem to have made up). I doubt Coolio has heard this album, but you never know. Anyway, this is the BEST song on the CD. Clever words, sung well (at least well enough). About the pain of being a 5th year grad student in math. Hopes, dreams, despair- it's all there! lyrics Rating: 10
12. Stefanie (The Ballad of Galois): Historically inaccurate, but kind of fun. Has a Country-Western Twang to it. Rating: 8
13. Musical Fruitcake (Pass it Around) Mostly random words, but kind of interesting. Rating: 6
14. Abandon Soap Mostly random words, but not so interesting. Title is like `abandons hope' Very short. Rating: 5

So, what is the final evaluation? I rate CD's by how many songs I really like. I like six of them which is very good. Based just on their Video I had written they shouldn't quit their day jobs- though since they are grad students in math they probably don't have day jobs. My current opinion is higher. Still, the math novelty song business is brutal- I wish them luck. The number of times I've bought a CD because the artists had one really good song, and then found out that the one good song was their only good song is at least VDW(4,2). (Yes Arrogant Worms, singers of the brilliant CARROT JUICE IS MURDER but nothing else even half as good- I'm talking to YOU!). As for other Math-novelty songs- I'll have a post on that once I get a complete list of all that I know on this topic. Could take a while.

Collapsing Degrees

Guest post by Stuart Kurtz and Jim Royer. Bill Gasarch asked us to write an article about Collapsing Degrees, in the memory and honor of our coauthor, Steve Mahaney. In 1986, Alan Selman and Steve Mahaney created the Structure in Complexity Conference, now the Conference on Computational Complexity. But in 1986, it was about structure, a term that Paul Young borrowed from computability theory, and which has passed into disuse, but in those days defined us. The word structure embodied optimism about a particular approach to the P vs. NP problem—that its solution might be found through exploring structural properties of sets and degrees. For example, Berman and Hartmanis had shown that if all NP-complete sets are paddable, then all NP-complete sets were isomorphic under polynomial time computable and invertible reductions, and hence P ≠ NP. Their result leveraged a structural property about specific sets (paddability) to a structural result about degrees (the complete polynomial time m-degree of NP consists of a single polynomial-time isomorphism result), to obtain a complexity-theoretic result.

That summer, after the conference, Steve visited us in Chicago, beginning a long and productive collaboration. We beat around the isomorphism conjecture for several days, until Steve mentioned that it wasn't even known that a collapse happened at any nontrivial degree. We smelled blood. Relativization provided some guidance. Berman had proven that the EXP-complete degree consisted of a single 1-li degree. If P = NP, then 1-li degrees collapse.
Of course, if P = NP, our rationale for interest in the Isomorphism Conjecture was mooted, and what we really cared about was the “true” P ≠ NP case. Our main result from that summer was that collapsing degrees existed, without requiring an additional complexity-theoretic hypothesis. Our proof involved a finite-injury priority argument, and seemed to require it. It was a joy and a privilege to have had Steve Mahaney as a colleague and friend. Until we meet again, peace.
{"url":"http://blog.computationalcomplexity.org/2007_07_01_archive.html","timestamp":"2014-04-19T22:31:40Z","content_type":null,"content_length":"219162","record_id":"<urn:uuid:a7a8b734-00f8-48c1-9af7-c0a6a9fd36e0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
RL Components: Learners

The top of the learner hierarchy is more conceptual than functional. The different classes distinguish algorithms in such a way that we can automatically determine when an algorithm is not applicable for a problem.

PolicyGradientLearner is a super class for all continuous direct search algorithms that use the log likelihood of the executed action to update the weights. Subclasses are ENAC, GPOMDP, or REINFORCE.

Reinforce is a gradient estimator technique by Williams (see "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning"). It uses optimal baselines and calculates the gradient with the log likelihoods of the taken actions.

ENAC: Episodic Natural Actor-Critic. See J. Peters, "Natural Actor-Critic", 2005. Estimates natural gradient with regression of log likelihoods to rewards.

Black-box optimization algorithms can also be seen as direct-search RL algorithms, but are not included here.
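As a rough, self-contained illustration of the idea these learners share (updating weights along the log likelihood of the executed action), here is a toy REINFORCE-style update in plain NumPy. It does not use PyBrain's API; the Gaussian policy, reward function, learning rate and running-average baseline are all assumptions made up for the example.

import numpy as np

# Toy task: reward is highest when the sampled action is near 3.
# The learned weight is the mean of a 1-D Gaussian policy; the update
# follows the score-function (log-likelihood) gradient with a baseline.
rng = np.random.default_rng(0)
mean, sigma, lr = 0.0, 1.0, 0.02
baseline = 0.0

for episode in range(5000):
    action = rng.normal(mean, sigma)            # sample from the policy
    reward = -(action - 3.0) ** 2               # toy reward signal
    # d/d(mean) log N(action | mean, sigma^2) = (action - mean) / sigma^2
    score = (action - mean) / sigma ** 2
    mean += lr * (reward - baseline) * score    # policy-gradient step
    baseline += 0.05 * (reward - baseline)      # simple running baseline

print(round(mean, 2))   # should end up roughly near 3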
{"url":"http://pybrain.org/docs/api/rl/learners.html","timestamp":"2014-04-18T08:03:09Z","content_type":null,"content_length":"21128","record_id":"<urn:uuid:9dd40a66-158f-4312-97a7-500a37e70cff>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
A couple of proofs using Mean Value Theorem

I've gotten through the entirety of this assignment and about 6 other proofs without any problem, but I'm just struggling with these two. Both of these are in a section that heavily covers the Mean Value Theorem. I'm not looking for answers directly, but maybe pointers could be nice. All of my friends are artsy and avoid math like the plague.
(Edit here.. I had the problem written down slightly wrong.)
If f is a differentiable, odd function show that ∀b>0, ∃c∈(-b,b) st f'(c) = f(b)/b
I've gotten as far as..
f(-b) - f(b) = f'(c)(-b - b)
f'(c) == (f(-b) - f(b))/(-b - b)

I've gotten the second. So others may take a look at this later for any reason, I'll leave my work here:
Prove |sin a - sin b| ≤ |a - b|
sin a - sin b = d/dc(sin c)(a - b) by Mean Value Theorem
sin a - sin b = cos c (a - b)
|sin a - sin b| = |cos c||a - b|
|sin a - sin b| ≤ |a - b| since |cos c| >= 0

If I need to post these both on different threads, I'll do so. I was a bit unclear about the section in the rules saying new questions need new threads, and if that means that two questions had to have their own. Thank you
Last edited by Nikker; November 19th 2009 at 05:58 PM. Reason: Had a problem written down incorrectly/updating status

I've gotten through the entirety of this assignment and about 6 other proofs without any problem, but I'm just struggling with these two. Both of these are in a section that heavily covers the Mean Value Theorem. I'm not looking for answers directly, but maybe pointers could be nice. All of my friends are artsy and avoid math like the plague.
I've gotten as far as..
f(-b) - f(b) = f'(c)(-b - b)
f'(c) == (f(-b) - f(b))/(-b - b)
I'm assuming I have to somehow use the fact that f is an odd function to show that (f(-b) - f(b))/(-b - b) is equivalent to f'(b)/b... But I'm not sure what steps are in between.

Are you sure that the problem states $f'(c)=\frac{f'(b)}{b}$? What if $f(x)=x$ and $b=\frac{1}{2}$?

The second is to prove that |sin a - sin b| ≤ |a - b| And all I have is this:
sin a - sin b = d/dc(sin c)(a - b)
sin a - sin b = cos c (a - b)

For the next step, note that $|\sin a - \sin b|=|\cos c||a-b|.$

Ahhh, you're right. I'm not sure where that came from, but it's f(b)/b, not f'(b). Thanks. I'll update the question and take a closer look.
Edit: Your tip also definitely helped with the second one. I'm still not much further with the first, since I know I have to use the fact that it's an odd function, but am not quite sure where.

Awesome. Thank you very much. It's always the small things I miss, apparently.
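For completeness, here is a sketch of how the oddness finishes the first argument (this is the standard step, filled in here since the thread leaves it open). Apply the Mean Value Theorem to $f$ on $[-b,b]$: there is some $c\in(-b,b)$ with
$f(b)-f(-b)=f'(c)\big(b-(-b)\big)=2b\,f'(c).$
Since $f$ is odd, $f(-b)=-f(b)$, so the left-hand side equals $2f(b)$, and dividing by $2b$ gives $f'(c)=\dfrac{f(b)}{b}$.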
{"url":"http://mathhelpforum.com/calculus/115668-couple-proofs-using-mean-value-theorem.html","timestamp":"2014-04-17T09:42:08Z","content_type":null,"content_length":"50399","record_id":"<urn:uuid:2da83283-8893-46ab-8f8c-6d53d91f3067>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Holmdel Village, NJ Math Tutor

Find a Holmdel Village, NJ Math Tutor

...I have worked with many students that others were unable to help, whether due to learning disability or other conditions. I believe by the end of the tutoring process, contrary to what may seem like an arduous and dull task, you may be surprised to find that you approach test day with fervor, vi...
23 Subjects: including algebra 1, ACT Math, MCAT, geometry

...During raising my two girls it became clear that I really loved teaching/tutoring them in math and physics. My oldest, currently a high school senior, reached the highest levels in honors and AP classes including AP BC Calculus. I am certified to teach math K-12 in New Jersey in preparation to transition from my current career.
11 Subjects: including SAT math, differential equations, algebra 1, algebra 2

I am currently enrolled at Rutgers University in the School of Arts and Sciences. My major is Cell Biology and Neuroscience and my minor is Psychology. I have been a private tutor for over 9 years and have also worked with other tutoring companies.
40 Subjects: including algebra 1, algebra 2, biology, calculus

Hello everyone, My name is Zach and I want to help others succeed with their educational goals! I have a Masters Degree from Monmouth University in education, with certification in special needs and social studies. I have a BA from Rutgers University where I majored in history, and I am also highly qualified in English, passing the Praxis in 2012.
30 Subjects: including algebra 1, SAT math, English, reading

...I have a BS in biochemistry with a minor in math from The College of New Jersey. I am available for tutoring chemistry, biology, and math in southeastern Monmouth County and Long Branch area. My rate is $50 per hour.
21 Subjects: including algebra 2, geometry, elementary math, trigonometry
{"url":"http://www.purplemath.com/Holmdel_Village_NJ_Math_tutors.php","timestamp":"2014-04-17T19:24:12Z","content_type":null,"content_length":"24140","record_id":"<urn:uuid:0f1b5742-afcc-486f-bf91-32e8f9617e9c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Trouble understanding LL(k) vs. LALR(k) etc.

From: Carl Cerecke <cdc@maxnet.co.nz>
Newsgroups: comp.compilers
Date: 19 Mar 2004 23:49:54 -0500
Organization: TelstraClear
References: 04-03-042 04-03-057
Keywords: parse
Posted-Date: 19 Mar 2004 23:49:54 EST

Christian Maeder wrote:
> Johnathan wrote:
>> Statements = <Statement> | <Statements> <Statement>
> That's left recursion (for "Statements"), so the grammar is not LL and
> not suited for recursive descent. Simple reverse to right recursion:
> .... "<Statement> <Statements>"
> (Right recursion is less efficient than left recursion for LR parsers,
> though.)

Technically yes, but, practically, there's almost always no difference. For example, compare R -> r R | r with L -> L l | l. For a list of length n, both will do n shifts, and n reductions to parse the list. The only difference is that right recursion will require a stack that is of length n, while left recursion requires a stack of length 2. Unless your input has very long lists, you won't notice any difference.

Of course, the *best* way to parse either of the above is with a regular expression (i.e. a finite automaton), and not a stack-based machine (LL or LR) at all. Any decent recursive-descent parser generator allows regular expressions in its grammar specification, eliminating the need for much recursion - and leading to a nicer shape abstract syntax tree. There has been some work integrating regular expressions with LR parsers, (Kannapinn most recently, if I remember correctly), but RE's don't tend to fit into LR as naturally as recursive descent.
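To illustrate that last point, here is a small Python sketch (invented for this example; the toy "grammar" in which a statement is just NAME ';' is mine, not the poster's) of how an EBNF-style repetition lets a hand-written recursive-descent parser handle a statement list with a plain loop, with no recursive rule and no growing parse stack.

def parse_statements(tokens):
    # Statements -> Statement+   handled by iteration instead of recursion
    statements = []
    pos = 0
    while pos < len(tokens):
        name = tokens[pos]                  # NAME
        assert tokens[pos + 1] == ";"       # ';'
        statements.append(name)
        pos += 2
    return statements

print(parse_statements(["a", ";", "b", ";", "c", ";"]))   # ['a', 'b', 'c']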
{"url":"http://compilers.iecc.com/comparch/article/04-03-072","timestamp":"2014-04-20T21:02:20Z","content_type":null,"content_length":"7049","record_id":"<urn:uuid:a2f800c9-afff-404c-b474-1f0adb5a87c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
World Series Strikeouts

Our book just came out last week — you can download the Kindle version today at Amazon. Essentially, the book is about sabermetrics and how one can explore the wealth of publicly available data using the statistical system R. Max and I live about 4536 miles apart, but we both are into baseball, statistics, and R, so it was a nice collaboration.

Here's a simple study that one can do using the Lahman package in R. One point that was obvious in the recent World Series was that players were striking out a lot. That motivates the questions: (1) what is the trend in striking out over World Series series, and (2) was the 2013 series unusual with respect to strikeouts?

I open up RStudio (a nice interface for R) and load the Lahman package. The Lahman package contains all of the datasets available on Sean Lahman's database. We'll focus on the data frame BattingPost that contains the batting statistics of the players who have played in the World Series. We are only interested in World Series data for the seasons since 1903 and we use the subset function to create a new data frame with the data we want.

wsdata = subset(BattingPost, round=="WS" & yearID >= 1903)

What I want to do is compute the sum of at-bats and sum of strikeouts for each series. This is conveniently done using the ddply function in the plyr package which we load. This function says we want to break up the wsdata by the yearID variable, and for each yearID compute AB, the sum of at-bats, and SO, the sum of strikeouts.

so.data <- ddply(wsdata, .(yearID), summarize,
                 AB = sum(AB, na.rm=TRUE),
                 SO = sum(SO, na.rm=TRUE))

From baseball-reference.com, we collect the strikeout data for the recent 2013 series and add this to the current data frame.

so.data <- rbind(so.data, data.frame(yearID = 2013, AB = 194 + 201, SO = 59 + 43))

We compute a new variable SO.Rate equal to the percentage of at-bats that are strikeouts.

so.data$SO.Rate <- with(so.data, 100 * SO / AB)

We use the plot function to construct a scatterplot of strikeout rates over season. We add a smoothing curve to see the basic pattern. Last, we use the identify function to label some interesting seasons corresponding to strikeout rates that don't follow the general pattern.

with(so.data, plot(yearID, SO.Rate))
with(so.data, lines(lowess(yearID, SO.Rate)))
with(so.data, identify(yearID, SO.Rate, n=11, labels=yearID))

As we would suspect, we see a steady increase in the strikeout rates over years. Although 2013 had a high strikeout rate, there were higher rates of strikeouts in the recent seasons 2001, 2009 and 2012 (remember the Tigers were in last year's series). Also there were some historical seasons with high strikeout rates. One notable season was 1963, when the Dodgers with Sandy Koufax and Don Drysdale overwhelmed the Yankees in 4 games.

2 responses

1. Nice post. I created an interactive version of the same plot using rCharts (a package that I authored) and Polycharts. Here is a link http://ramnathv.github.io/rChartsNYT/
2. […] the previous post Jim mentioned we live some 4,000 miles […]
{"url":"http://baseballwithr.wordpress.com/2013/11/02/world-series-strikeouts/","timestamp":"2014-04-16T21:51:42Z","content_type":null,"content_length":"58774","record_id":"<urn:uuid:a6259dc3-c217-4a5c-aa3c-83ab763c36a5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Element in the absolute Galois group of the rationals

Usually when people talk on the absolute Galois group G_ℚ of ℚ they have in mind two elements they can describe explicitly, namely the identity and complex conjugation (clearly, everything is up to conjugation), although the cardinality of the group is uncountable. Can you describe other elements of G_ℚ?

absolute-galois-group nt.number-theory

6 Answers

See the last of this extended answer. I'm going to part company with everyone else and say that you can describe other elements of $\text{Gal}(\mathbb{Q})$. In other words, I claim that you can identify a specific element of $\text{Gal}(\mathbb{Q})$ in a wide range of ways, together with an algorithm to compute the values of that element as a function on $\mathbb{Q}$. You can use either a synthetic model of $\overline{\mathbb{Q}}$, or its model as a subfield of $\mathbb{C}$. Although this is all doable, what's not so clear is whether these explicit elements are interesting. The other two parts of the answer raise interesting issues, but they are moot for the original question.

This is not exactly the question, but it is related. To begin with, it is difficult to "explicitly" describe $\overline{\mathbb{Q}}$ except as a subfield of $\mathbb{C}$. I found a paper, Algebraic consequences of the axiom of determinacy (in English translation of the title), that establishes that $\mathbb{C}$ does not have any automorphisms other than complex conjugation in ZF plus the axiom of determinacy (AD). So you need some part of the axiom of choice (AC) for this related question. As for the smaller field $\overline{\mathbb{Q}}$, the Wikipedia page for the fundamental theorem of algebra suggests that you might not even be able to construct it in the first place without the axiom of countable choice. (I say "suggests" because I'm not entirely sure that that is a theorem. Note that AC and AD both imply countable choice even though they are enemy axioms.) Any construction with countable choice isn't truly "explicit". On the other hand, if you allow countable choice, then I suspect that you can build $\overline{\mathbb{Q}}$ synthetically by induction rather than as a subfield of $\mathbb{C}$, and that you can build many automorphisms of it as you go along. So the questions for logicians is whether there is a universe over ZF in which $\overline{\mathbb{Q}}$ does not exist, or a universe in which it does exist but has no automorphisms.
A better and hopefully final technical answer: As mentioned, $\overline{\mathbb{Q}}$ exists explicitly (in just ZF) as a subfield of $\mathbb{C}$. You can also construct it synthetically as follows: Consider the monic Galois polynomials over $\mathbb{Z}$. These are the polynomials such that the Galois group acts freely transitively on the roots; equivalently the splitting field is obtained by adjoining just one root. The Galois polynomials can be written in a finite notation and enumerated. Beginning with $\mathbb{Q}$, formally adjoin a root of $p_n(x)$, the $n$th monic Galois polynomial, for each $n$ in turn. If $p_n(x)$ factors over the field constructed so far, the factors can also be expressed in a finite notation; take the first irreducible factor. The result is an explicit, synthetically constructed $\overline{\mathbb{Q}}$. For comparison, let $\widetilde{\mathbb{Q}}$ be the algebraic closure of $\mathbb{Q}$ in $\mathbb{C}$. Each element of it is computable: Its digits can be generated by an algorithm, even with an explicit bound on its running time. As we build $\overline{\mathbb{Q}}$, we can also build an isomorphism between $\widetilde{\mathbb{Q}}$. We can do this by sending the formal root of $p_n(x)$ to its first root in $\mathbb{C}$, using some convenient ordering on $\mathbb{C}$. Or we could just as well have used its last root, its second root if it has one, etc. Composing these many different isomorphisms between $\overline{\mathbb{Q}}$ and $\widetilde{\mathbb{Q}}$ gives you many field automorphisms. add comment It is a theorem of Artin that the only non-trivial element in $\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ (up to conjugation) of finite order is complex conjugation. This explains Leila Schneps' remark. There is a related theorem of Artin and Schreier that says that if a field $F$ has a finite non-trivial absolute Galois group, then $F$ has characteristic zero and $ up vote 11 \overline{F} = F(\sqrt{-1})$. down vote 1 This answers the question about finite order elements. However over certain fields you can write explicitly elements of infinite order. For example if $F$ is a finite field with $q$ elements, then there is the Frobenius element $x\mapsto x^q$. Or if $F$ is the field of Laurent series over the complex numbers, then you have a description of the algebraic closure via the Puisseaux series, and thus you can describe explicitly elements of the absolute Galois group. – Lior Bary-Soroker Nov 25 '09 at 16:36 Define "describe explicitly". – user631 Nov 25 '09 at 22:18 an element in the algebraic closure of $\mathbb{C}((t))$ is a Laurent series in $t^{1/n}$, for some $n$. Now an automorphism can be defined by multiply $t^{1/n}$ by $e^{2\pi i/n}$. – Lior Bary-Soroker Nov 26 '09 at 19:20 1 That's not what I was asking. You are asking whether one can describe elements of G_Q "explicitly", but you haven't defined what this means. – user631 Nov 26 '09 at 23:04 This is part of my question. To find reasonable ways to give elements in G_Q. I tried to give an example to a way I thought of, and I hoped to have some other creative ideas. – Lior Bary-Soroker Nov 28 '09 at 22:41 add comment Leila Schneps claims that one cannot write down any other element. up vote 3 See her paper in The Grothendieck theory of dessins d'enfants. Papers from the Conference on Dessins d'Enfant held in Luminy, April 19--24, 1993. Edited by Leila Schneps. London down vote Mathematical Society Lecture Note Series, 200. Cambridge University Press, Cambridge, 1994. 
I think that http://www.aimath.org/WWN/motivesdessins/schneps1.pdf is more or less the same. what do you mean by `write-down'? – Lior Bary-Soroker Nov 25 '09 at 13:23 I have no idea what exactly she meant. – Mariano Suárez-Alvarez♦ Nov 25 '09 at 14:06 For Galois automorphisms of C, you actually need the axiom of choice for any to exist because (as an exercise) any others are Q-linear but not R-linear and hence are nonmeasurable 1 functions. I am not sure if you actually need the axiom of choice to show that there are nontrivial Galois automorphisms of the closure of Q, but it may be similar - you need to make a choice of lift of the automorphism for every finite Galois extension F of Q in some compatible way. – Tyler Lawson Nov 25 '09 at 17:08 add comment This is an analogous to the complex conjugation: up vote 1 Let K be a local field, i.e. a finite extension of ℚ[p], for some p. Then its Galois group G[K] is finitely presented (See "Cohomology of number fields" of Jürgen Neukirch, Alexander down vote Schmidt, and Kay Wingberg). Then its generators are in a sense explicit elements of G[ℚ]. add comment I don't know of any other elements, but I just want to point out that it's hard to really explicitly describe "elements of $\text{Gal}~\bar{\mathbb{Q}}/\mathbb{Q}$"; see e.g. this up vote 0 down answer for a much clearer explanation than I could provide. I read this answer, and it's fine with me to find elements up to conjugation. Both the complex conjugation and the `local elements' coming from local fields are defined up to conjugation. – Lior Bary-Soroker Nov 25 '09 at 13:25 Well, if you only care about them up to conjugacy, I suppose you could look at Galois representations of this group, about which some nontrivial things are in fact known, although I don't claim to understand it. – Harrison Brown Nov 25 '09 at 13:30 add comment Dear "FC" Could you please give me a precise reference which I can find a proof of "Artin theorem" which is, up to conjugation non-trivial element of finite order in absolute Galois group up vote 0 of Q, is complex conjugation. I could not find it in the paper you have shared. you wrote "This explains Leila Schneps' remark" which is not clear where you mean. down vote the theorem appears, e.g., in Lang's "Algebra" – Lior Bary-Soroker Oct 16 '10 at 6:00 add comment Not the answer you're looking for? Browse other questions tagged absolute-galois-group nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/6802/element-in-the-absolute-galois-group-of-the-rationals","timestamp":"2014-04-21T09:43:38Z","content_type":null,"content_length":"85291","record_id":"<urn:uuid:d3a1ef19-1e00-4908-9ab7-9dfedbe443be>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Second Generation Wavelets: Theory and Applications Friday, October 31 Second Generation Wavelets: Theory and Applications 1:30 PM-2:30 PM Chair: John G. Lewis, Boeing Information and Support Services, The Boeing Company Room: Ballroom 3 In the last decade wavelets have been applied successfully to sound (1D), image (2D), and video (3D) processing. Typical applications include compression, noise reduction, progressive transmission, etc. Each time the data is defined on an Euclidean space R^n and sampled on a regular grid. Many applications, however, need wavelets defined on general geometries (curves, surfaces, manifolds), wavelets adjusted to irregular sampling, or adaptive wavelet transforms. Therefore we introduce Second Generation Wavelets: wavelets which are not necessarily translates and dilates of one function, but still enjoy all powerful properties such as time-frequency localization, multiresolution, and fast algorithms. While the Fourier transform has been the principal tool in constructing classical wavelets, e.g. Daubechies, it can no longer be used to build Second Generation Wavelets. We present the lifting scheme, an entirely spatial construction technique for Second Generation Wavelets. The speaker will give examples how lifting can be used to build wavelets for irregular samples, spherical wavelets, and multiresolution geometry. He will also show that all classical wavelets can be obtained through lifting, that lifting speeds up the fast wavelet transform by a factor of two, and that lifting allows for integer-to-integer wavelet transforms which are important in lossy compression. (ps: No preliminary knowledge of wavelets will be assumed). Wim Sweldens Bell Laboratories, Lucent Technologies LA97 Homepage | Program Updates| Registration | Hotel Information | Transportation | Program-at-a-Glance | MMD, 6/26/97
{"url":"http://www.siam.org/meetings/archives/la97/ip6.htm","timestamp":"2014-04-21T12:10:00Z","content_type":null,"content_length":"2527","record_id":"<urn:uuid:23c09c36-2a2c-4758-80c5-e3db56890229>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Math... - Page 3 - Chief Delphi
The purpose of computing is insight, not numbers -- Richard Hamming
• whip up a simple simulation, crunch a few billion numerical integration floating-point calculations in a matter of seconds, and instantly plot a time-graph of the results (in color!),
• create a spreadsheet and play with the inputs to see how they affect the outputs,
• instantly plot a complex non-linear equation to visualize its behavior and help find its roots,
• do a monte-carlo simulation of a probability or statistics problem,
• fit a model to a set of experimental data, etc etc etc
Using calculators, and especially personal computers, opens up worlds of understanding for those who will use these tools to gain insight, and not as a substitute for thinking.
{"url":"http://www.chiefdelphi.com/forums/showthread.php?postid=973480","timestamp":"2014-04-18T16:00:56Z","content_type":null,"content_length":"165112","record_id":"<urn:uuid:8975095f-88bd-4454-b1bc-03614827ad10>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial and Rational Functions Course 3 Unit 5 - Polynomial and Rational Functions In units of the algebra and functions strand in Courses 1 and 2, students built a repertoire of function families that model linear, exponential, quadratic, and power (direct and inverse) relationships. They also developed skill in using algebraic expressions to represent those patterns of change, skill in manipulating the expressions to solve equations and inequalities and to gain insight into relationships, and skill in solving a wide variety of authentic quantitative problems. In this unit, polynomial and rational functions are added to students' toolkit of families of Unit Overview Topics studied in this unit include the definition and properties of polynomials, operations on polynomials, completing the square, proof of the quadratic formula, solving quadratic equations (including complex number solutions), the vertex form of quadratic functions, definition and properties of rational functions, and operations on rational expressions. This unit is an introduction to polynomial and rational functions and expressions and should be accessible to most students. Further work toward developing proficiency with symbol manipulation occurs in subsequent Review tasks and in Course 4. Objectives of the Unit • Recognize patterns in problem conditions and in data plots that can be described by polynomial and rational functions • Write polynomial and rational function rules to describe patterns in graphs, numerical data, and problem conditions • Use table, graph, or symbolic representations of polynomial and rational functions to answer questions about the situations they represent: (1) calculate y for a given x (i.e., evaluate functions); (2) find x for a given y (i.e., solve equations and inequalities); and (3) identify local max/min points and asymptotes • Rewrite polynomial and rational expressions in equivalent forms by expanding or factoring, by combining like terms, and by removing common factors in numerator and denominator of rational • Add, subtract, and multiply polynomial and rational expressions and functions • Extend understanding and skill in work with quadratic functions to include completing the square, interpreting vertex form, and proving the quadratic formula • Recognize and calculate complex number solutions of quadratic equations Sample Overview The sample investigation is Investigation 2 of Lesson 1. In the first investigation, students learned how to model complex graphical patterns with polynomial functions of degree 3 and 4. They explored a variety of polynomial functions to discover the type of graph possibilities. They began to see the relationship between the degree of a polynomial function and other important features of its graph, especially the number of local maximum and local minimum points. In the sample investigation provided below, students work with polynomial expressions and functions, learning how to combine them by addition and subtraction. They look for a pattern relating the degrees of components to the degree of the sum or difference of two polynomials and a pattern relating the degree of a polynomial expression to the number of zeroes of the corresponding function. This investigation makes use of the public-domain CPMP-Tools computer software. Select "Course 3" from the menu bar, then choose Algebra and CAS. Enter a polynomial equation with parameters a, b, and c (such as the one on Course 3 page 325) in the Y= tab. Then select the Graph tab to view. 
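For readers without access to CPMP-Tools, roughly the same exploration can be done with a short Python sketch; the coefficients below are arbitrary illustrative values, not the equation from page 325 of the student text.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical parameter values a, b, c for a cubic y = a*x^3 + b*x^2 + c*x + 3.
a, b, c = 1.0, -3.0, -1.0
x = np.linspace(-3, 4, 400)
y = a * x**3 + b * x**2 + c * x + 3

plt.plot(x, y)
plt.axhline(0, color="gray", linewidth=0.5)
plt.title("A degree-3 polynomial has at most 2 local max/min points")
plt.xlabel("x")
plt.ylabel("y")
plt.show()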
Instructional Design Throughout the curriculum, interesting problem contexts serve as the foundation for instruction. As lessons unfold around these problem situations, classroom instruction tends to follow a four-phase cycle of classroom activities—Launch, Explore, Share and Summarize, and Apply. This instructional model is elaborated under Instructional Design. View the Unit Table of Contents and Sample Lesson Material You will need the free Adobe Acrobat Reader software to view and print the sample material. How the Algebra and Functions Strand Continues In Course 3, there is one more algebra unit, Unit 8, Inverse Functions. This unit develops student understanding of inverses of functions with a focus on logarithmic functions and their use in modeling and analyzing problem situations and data patterns. Topics studied in this unit include inverses of functions, logarithmic functions and their relation to exponential functions, properties of logarithms, equation solving with logarithms, and inverse trigonometric functions and their applications to solving trigonometric equations. Course 3 Unit 7, Recursion and Iteration, is technically a discrete mathematics unit, but working with sequences and series helps students strengthen their symbolic skills. Course 4: Preparation for Calculus extends student algebraic skills and understandings in equations and functions in algebra units but also in geometry units such as Unit 2, Vectors and Motion, and Unit 6, Surfaces and Cross Sections. (See the CPMP Courses 1-4 descriptions.)
{"url":"http://www.wmich.edu/cpmp/2nd/unitsamples/c3u5intro.html","timestamp":"2014-04-19T08:19:26Z","content_type":null,"content_length":"19355","record_id":"<urn:uuid:534184fd-20b2-49a1-a668-575983e4a581>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Longitudinal Data Analysis, Including Categorical Outcomes • Donald Hedeker, University of Illinois at Chicago This workshop will focus on the analysis of longitudinal data, also known as "panel data." In either case, the data consist of repeated observations over time on the same units. The approach will use mixed models. Models for continuous outcomes will first be presented, including description of the multilevel or hierarchical representation of the model. Use of polynomials for expressing change across time, treatment of time-invariant and time-varying covariates, and modeling of the variance-covariance structure of the longitudinal outcomes will be described. For dichotomous, ordinal and nominal outcomes, this workshop will focus next on the mixed logistic regression model, and generalizations of it. Specifically, the following models will be described: mixed logistic regression for dichotomous outcomes, mixed logistic regression for nominal outcomes, and mixed proportional odds and non-proportional odds models for ordinal outcomes. The latter models are useful because the proportional odds assumption of equal covariate effects across the cumulative logits of the model is often unreasonable. Finally, missing data issues will be covered. Mixed models allow incomplete data across time and assume that these missing observations are "missing at random" (MAR) under maximum likelihood estimation. Approaches that can go further, and don't necessarily assume MAR, are through the use of pattern mixture and selection models. Applications will be described of mixed pattern mixture and selection models. In all cases, methods will be illustrated using software, with SAS used for most examples, and augmented with use of SPSS for continuous outcomes and SuperMix for categorical outcomes. Prerequisites: Participants should be thoroughly familiar with linear regression, and have some knowledge of logistic regression. Fee: Members = $1500; Non-members = $3000 Tags: longitudinal data, panel data, mixed models, mixed logistic regression, mixed proportional odds, Course Sections Section 1 Location: ICPSR -- Ann Arbor, MI Date(s): June 23 - June 27 Time: 9:00 AM - 5:00 PM • Donald Hedeker, University of Illinois at Chicago
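The workshop itself uses SAS, SPSS, and SuperMix. As a rough open-source counterpart, a random-intercept/random-slope growth model for a continuous outcome can be sketched with Python's statsmodels; the file and column names here (panel_long.csv, subject, time, treat, y) are invented for illustration.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel data: one row per subject per measurement occasion.
df = pd.read_csv("panel_long.csv")

# Mixed model: fixed effects for time, treatment and their interaction,
# with a random intercept and a random slope on time for each subject.
model = smf.mixedlm("y ~ time + treat + time:treat", data=df,
                    groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())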
{"url":"http://www.icpsr.umich.edu/icpsrweb/sumprog/courses/0134?tag=panel+data&location=Ann+Arbor%2C+MI","timestamp":"2014-04-24T22:17:36Z","content_type":null,"content_length":"12368","record_id":"<urn:uuid:539ada04-5e66-4812-b45b-819b5c49c871>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Do Burnside Group Factors have Gamma?
The Free Burnside group $G=B(2,665)=\langle a,b|g^{665} \rangle$ is infinite, by the work of Adyan and Novikov. Furthermore, the centralizer of any nonidentity element in $G$ is finite cyclic, and so the group is an i.c.c. group and the associated left group von Neumann algebra $LG$ is a type $II_{1}$ factor. It is a fact, due to Adyan, that this group is not amenable, so the group von Neumann algebra is not injective.
A type $II_{1}$ factor $M$ with trace $\tau$ has Property $\Gamma$ if for every finite subset $\{ x_{1}, x_{2},..., x_{n} \} \subseteq M$ and each $\epsilon >0$, there is a unitary element $u$ in $M$ with $\tau (u)=0$ and $||ux_{j}-x_{j}u||_{2}<\epsilon$ for all $1 \leq j \leq n$. (Here $||T||_2=(\tau(T^{*}T))^{1/2}$ for $T\in M$.)
I should mention that if a group is not inner amenable in the sense described in "Is there an i.c.c. nonamenable simple group that is inner amenable?" then its left group von Neumann algebra does not have property $\Gamma$. (There exist i.c.c. inner amenable groups whose group von Neumann algebras don't have $\Gamma$, as recently shown by Stefaan Vaes: http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.1485v1.pdf.)
My question is: Does the group von Neumann algebra $LG$ have Property $\Gamma$?

If $G$ has a homomorphism onto a finite cyclic group, then it is inner amenable? (For every subset count the number of elements in the factor.) If so then the free Burnside group is certainly inner amenable. On the other hand, by Zelmanov's theorem, there exists a finite index subgroup of $B(2,665)$ which does not have finite factors. That group also is i.c.c., etc. So you may want to ask your question about that group. What is the reason for this question? There is a "similar" and quite popular question of whether $B(2,n)$, $n\gg 1$, has property (T). Y. Shalom conjectured in his ICM talk that it has. Gromov (unpublished) conjectured that it has not.
Edit: I was wrong in the first statement. The counting does not produce a measure because two disjoint subsets can map to the same set in the factor-group. So the question about inner amenability of $B(2,665)$ is open.
Also, thanks for the question regarding Property (T)! I wasn't aware of this. – Jon Bannon Sep 22 '10 at 18:30
It is an interesting question. The non-amenability of $B(2,n)$ and Tarski monsters was proved using the Kesten-Grigorchuk combinatorial criterion of amenability. Is there a similar "co-growth" criterion for inner amenability? If one can formulate the property in terms of words (= walks on the Cayley graph), one would be able to answer your question. – Mark Sapir Sep 22 '10 at 18:33
That is precisely the content of my question: mathoverflow.net/questions/27233/… I would absolutely love to see if such a "co-growth" condition for inner amenability were feasible. – Jon Bannon Sep 22 '10 at 19:17
By the way, unlike the free Burnside group $B(2,n)$, which is unique, there are many Tarski monsters. Some better than others. There is a class of groups called "lacunary hyperbolic groups" which contain many Tarski monsters. See front.math.ucdavis.edu/0701.5365 . – Mark Sapir Sep 22 '10 at 19:42
Thanks for this. I'll have a look. – Jon Bannon Sep 22 '10 at 20:23
{"url":"https://mathoverflow.net/questions/39640/do-burnside-group-factors-have-gamma","timestamp":"2014-04-19T10:25:15Z","content_type":null,"content_length":"58996","record_id":"<urn:uuid:6132ea2a-d9f4-4fa3-8079-6c831e8a07b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Risk-Adjusted Performance Ulcer Index
An Alternative Approach to the Measurement of Investment Risk & Risk-Adjusted Performance
by Peter G. Martin
This page also available in PDF format
What is the Ulcer Index?
Ulcer Index (UI) is a method for measuring investment risk that addresses the real concerns of investors, unlike the widely used standard deviation of return (SD). UI is a measure of the depth and duration of drawdowns in prices from earlier highs. Using UI instead of SD can lead to very different conclusions about investment risk and risk-adjusted return, especially when evaluating strategies that seek to avoid major declines in portfolio value (market timing, dynamic asset allocation, hedge funds, etc.).
The Ulcer Index was originally developed by the author of this page in 1987. Since then, it has been widely recognized and adopted by the investment community. According to Nelson Freeburg, editor of Formula Research, Ulcer Index is “perhaps the most fully realized statistical portrait of risk there is.” The Index was first described in The Investor's Guide to Fidelity Funds: Winning Strategies for Mutual Fund Investors, by myself and Byron McCann. Originally published by John Wiley & Sons in 1989, this out-of-print book is available in PDF format from http://www.tangotools.com/ui/fkbook.pdf. Ulcer Index is also explained on Wikipedia at http://en.wikipedia.org/wiki/Ulcer_Index
Note: There have been instances of the term "Ulcer Index" being used for risk measures that do not strictly follow the details described here. This document explains the correct use of the concept.
What's Wrong with Standard Deviation of Return?
Standard deviation is a statistical measure of the variability or unpredictability of an investment's return. As a measure of risk, it suffers from a number of serious drawbacks:
• Both upward and downward changes in value add to the calculated SD. Real investors associate risk only with the downside. Rising prices create profits, not risk.
• The calculated value of SD is not affected by the sequences in which gains and losses occur. Thus, SD does not recognize the strings of losses that result in significant drawdowns in value. The three hypothetical investments in the chart below have the same annualized return (-0.52%/year) and the same SD (4.66%/month), but no rational investor would consider them as having the same risk:
This chart shows the NAV of an S&P 500 Index mutual fund with dividends reinvested, using monthly data over the 10 years 2000-2009. The Actual (dark blue) line is the fund's actual performance. The Sorted (magenta) line was obtained by re-arranging the monthly percentage returns, so that the worst months occur first and the best ones occur last. The Flattened (yellow) line was obtained by shuffling the monthly returns in such a way as to minimize the depth and duration of drawdowns. The set of monthly returns is the same in each case; only their order has been changed.
• When SD is used to measure the risk of a market timing strategy, it will tell you roughly how often you were out of the market, but nothing about whether you were out at the right times. SD doesn't tell you if your strategy reduced risk by avoiding market downturns.
Because of these weaknesses, SD does not reward an investment strategy for avoiding market downturns. Using Ulcer Index as a risk measure avoids all of these problems.
What About Other Risk Measures?
Other established risk measures have weaknesses too.
For example:
• Some are based on the single worst event over a time period, which by definition has no statistical significance (e.g. Worst Trade and Maximum Drawdown).
• Some are based on absolute rather than percentage price changes, which distorts results over longer periods with strong price trends (e.g. Average Maximum Retracement).
• Some tell you how often you lost money, but not how much (e.g. Percentage Losing Quarters).
• Some cannot be used to compare investment alternatives. For example, Percentage Losing Trades cannot be used to compare a market timing strategy with a buy-and-hold approach, because the latter has no trades.
How is Ulcer Index Calculated?
Ulcer Index measures the depth and duration of percentage drawdowns in price from earlier highs. The greater a drawdown in value, and the longer it takes to recover to earlier highs, the higher the UI. Technically, it is the square root of the mean of the squared percentage drawdowns in value. The squaring effect penalizes large drawdowns proportionately more than small drawdowns (the SD calculation also uses squaring). In effect, UI measures the "severity" of drawdowns, as represented by the dark regions in the charts below:
[Chart: Drawdowns in value - S&P 500 index with dividends reinvested]
[Chart: Drawdowns in value - Fidelity Select Precious Metals & Minerals fund]
The algorithm for computing UI is simple, and can be seen in the pseudo-code fragment below:
SumSq = 0
MaxValue = 0
for T = 1 to NumOfPeriods do
    if Value[T] > MaxValue then
        MaxValue = Value[T]
    else
        SumSq = SumSq + sqr(100 * ((Value[T] / MaxValue) - 1))
UI = sqrt(SumSq / NumOfPeriods)
An Excel spreadsheet showing how to calculate the Ulcer Index is available at http://www.tangotools.com/ui/UlcerIndex.xls
UI has a further advantage over SD. Its calculated value is essentially the same over a wide range of time intervals between data points. Weekly price data is a robust choice, but daily data can be used as well. As the interval is extended beyond a week, there is an increasing danger of missing significant intra-period drawdown-and-recovery events. The use of quarterly or longer intervals is strongly discouraged for this reason.
Unlike UI, the calculated value of SD depends directly on the time period used. For example, the SD of annual return is roughly 7.2 times the SD of weekly return (7.2 is the square root of 52 weeks per year). Since the time period is often unstated, this creates opportunities for serious misunderstandings about an investment's risk.
Measuring Investment Performance
A popular method for measuring investment "performance" is to divide the excess return of an investment by its risk. (Excess return is total return minus the return offered by risk-free investments such as money market funds.) This calculation provides a single number that accounts for both return and risk. It reports the additional return achieved per unit of risk assumed. Traditionally the Sharpe Ratio is used, where risk is again represented by the standard deviation of return:
Sharpe Ratio = (Total return - Risk-free return) / SD
Just as SD is a poor risk measure, so is this formula a poor performance measure. This problem is solved by simply replacing SD with UI. This new performance measure is known as the "Ulcer Performance Index" (UPI) or "Martin Ratio".
UPI = (Total return - Risk-free return) / UI
When plotting investments on a risk vs return chart, UI can be used instead of SD for the horizontal (risk) axis.
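The pseudo-code and the UPI formula translate directly into Python; the sketch below is an unofficial illustration, and the Excel spreadsheet linked above remains the reference implementation.

import math

def ulcer_index(values):
    # Ulcer Index of a series of periodic (e.g. weekly) portfolio values,
    # following the pseudo-code fragment above.
    sum_sq = 0.0
    max_value = float("-inf")
    for v in values:
        if v > max_value:
            max_value = v
        else:
            drawdown_pct = 100.0 * (v / max_value - 1.0)   # percentage below the earlier high
            sum_sq += drawdown_pct ** 2
    return math.sqrt(sum_sq / len(values))

def ulcer_performance_index(total_return, risk_free_return, ui):
    # UPI ("Martin Ratio"): excess return per unit of Ulcer Index.
    return (total_return - risk_free_return) / ui

# A small illustrative series containing a 10% drawdown that is later recovered:
print(round(ulcer_index([100, 100, 95, 90, 95, 100, 105]), 2))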
If a line is drawn between the points representing the risk-free return and a risky investment, the slope of the line is equal to the Ulcer Performance Index. As with the Sharpe Ratio, if an investment lies above the line joining the risk-free return with the S&P 500, the investment is "beating the market" on a risk-adjusted basis.
Market Timing Example
The table below shows the results achieved with both UI and SD. We compared two strategies over the period 1940-1997: buy-and-hold the S&P 500 index, and timing the index with a simple momentum indicator. Results include reinvestment of dividends in both cases.
│                                │ Buy-and-Hold │ Timing System │ % Change │
│ Annualized Total Return (%/yr) │    12.59     │     14.79     │    17.5  │
│ Ulcer Index (%)                │     8.85     │      5.14     │   -41.9  │
│ Ulcer Performance Index (UPI)  │     0.92     │      2.01     │   118.9  │
│ Standard Deviation (%/yr)      │    16.10     │     13.18     │   -18.1  │
│ SD Performance (Sharpe Ratio)  │     0.51     │      0.78     │    52.9  │
For the timing system, annualized total return is increased by a modest 2.2 percentage points over buy-and-hold. SD reports 18% lower risk and 53% higher performance (Sharpe Ratio). UI reports 42% lower risk and 119% higher performance (UPI). Thus, UI places a much higher value on the market timing system than does SD.
Other experimental work has shown that many popular market timing systems have little value when SD is used to measure risk, but significant value when UI is used instead. This arises because SD fails to recognize the success of timing systems in avoiding major market downturns.
Caveat Emptor
With any method for computing investment risk and performance, it is important to use data covering as long a time period as possible. In particular, the measurement period must include both bull and bear markets for the investments of interest. Needless to say, when comparing multiple investments, the same time period must be used in each case. All risk and return calculations should include reinvestment of interest, dividends and other distributions; and should be net of all recurring fees, transaction costs and trading slippage. Any use of Ulcer Index or UPI that falls outside of these constraints would be inappropriate and of little value.
This page also available in PDF format
(C) Copyright 1987-2011 by Peter G. Martin. All rights reserved.
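As a quick worked example, the risk-free rate implied by the buy-and-hold row of the table can be backed out and then used to re-check the timing-system UPI. The implied risk-free rate is a derived quantity, not a figure stated in the article.

# Figures taken from the table above.
bh_return, bh_ui, bh_upi = 12.59, 8.85, 0.92
risk_free = bh_return - bh_upi * bh_ui            # implied risk-free return, about 4.4 %/yr

timing_return, timing_ui = 14.79, 5.14
timing_upi = (timing_return - risk_free) / timing_ui
print(round(risk_free, 2), round(timing_upi, 2))  # roughly 4.45 and 2.01, matching the table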
{"url":"http://www.tangotools.com/ui/ui.htm","timestamp":"2014-04-21T10:01:55Z","content_type":null,"content_length":"16198","record_id":"<urn:uuid:97d909dc-bb41-4e51-b138-b6470abea222>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Find Math Words Fun 1. the number of observations in a given statistical category 2. A box is drawn around the quartile values and whiskers extend from each quartile to the extreme data points 3. When all outcomes of an experiment are equally likely 6. a bar chart representing a frequency distribution 7. An arrangement of a group of objects in which the order DOES matter 9. An outcome or set of outcomes in an experiment 10. Any activity based on chance 11. the most frequent value given in a set of data 13. A single repetition or observation of an experiment 15. The median of the upper and lower half of a data set 17. The middle number in a set of numbers that are listed in order
{"url":"http://www.armoredpenguin.com/crossword/Data/2012.05/1611/16112859.341.html","timestamp":"2014-04-17T06:51:22Z","content_type":null,"content_length":"57395","record_id":"<urn:uuid:0989446e-8a86-4f7c-8947-42851e6fe468>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem Solving Students are challenged to solve these problems by drawing pictures rather than using traditional fraction algorithms. • Monkey Business challenges students to work backwards to solve this fraction division problem. • Bake Sale challenges students to work backwards to solve another fraction division problem involving disappearing cookies.
{"url":"http://www.mathwire.com/fractions/fracps.html","timestamp":"2014-04-21T09:58:41Z","content_type":null,"content_length":"4377","record_id":"<urn:uuid:bcd85978-c920-4f5a-a91b-77188c9a8d2c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
velocity Question February 22nd 2010, 04:29 PM #1 Junior Member Nov 2009 velocity Question if s = 2t^3 - 5t^2 - 2 and t=2, is s increasing or decreasing? i don't really get what im supposed to do here. how do i find which one it is? i know how to find avg. V (or V) and avg. a (or a), but what do i do here? thanks for any help, this question is stumping me if s(t) is increasing, then s'(t) > 0 if s(t) is decreasing, then s'(t) < 0 so ... find s'(2) and answer the question. Hi mneox, You have 2 choices, depending whether you have covered differentiation or not. calculate the derivative Now evaluate f'(2) f(t) is increasing if f'(t) >0 f(t) is decreasing if f'(t)<0 Work with the graph and examine f(2). See attached sketch. Last edited by Archie Meade; February 23rd 2010 at 05:27 PM. February 22nd 2010, 04:42 PM #2 February 22nd 2010, 04:52 PM #3 MHF Contributor Dec 2009 February 22nd 2010, 05:04 PM #4 Junior Member Nov 2009 February 23rd 2010, 07:17 AM #5
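A quick symbolic check of the advice in the thread, using Python's sympy (the posts above do the same computation by hand; this sketch is only a verification aid):

import sympy as sp

t = sp.symbols("t")
s = 2*t**3 - 5*t**2 - 2
s_prime = sp.diff(s, t)              # 6*t**2 - 10*t
print(s_prime, s_prime.subs(t, 2))   # at t = 2: 24 - 20 = 4 > 0, so s is increasing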
{"url":"http://mathhelpforum.com/calculus/130203-velocity-question.html","timestamp":"2014-04-18T01:34:55Z","content_type":null,"content_length":"45693","record_id":"<urn:uuid:2dd53fea-025c-4716-8401-2dafaa193fe1>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Programming March 8th 2013, 07:05 PM #1 Mar 2013 Linear Programming I keep having my task rejected because it says I am not showing work. I don't understand what kind of work they want me to show and because we use TaskStream to submit my paper, There isn't anyone to ask either. This is what they say: Comments on this criterion: No revision is provided to show the work. The optimum combination (4.5, 3) is determined but the work shown is not sufficient. Please show the work done to determine the candidate points. This is the task and what I submitted: C. Determine how many cases each of Brand X and of Brand Y you recommend should be produced during each production period for optimum production if Company A wants to generate the greatest amount of profit, showing all of your work. The vertices are: P(0, 0) = $0 + $0 = $0 profit P(0, 6) = $0 + $180 = $180 profit P(2.5, 5) = $100 + $150 = $250 profit P(4.5, 3) = $180 + $90 = $270 profit P(6, 0) = $240 + $0 = $240 profit P = ($40*4.5) + ($30*3) = $270 I would reccomend that during optimum production, the company produce 4.5 cases of brand X and 3 cases of brand Y. Any help is greatly appreciated. Re: Linear Programming Have you learned about the simplex method? Are you supposed to use some algorithm other than brute force (checking each intersection)? Re: Linear Programming Just brute force. Here is the whole question it my help to see the whole problem. Graphical models enable a manager to visualize the objective function (profit line), constraints, and possible solutions to a given problem, and to make more informed decisions based on that Company A produces and sells a popular pet food product packaged under two brand names, with formulas that contain different proportions of the same ingredients. Company A made this decision so that their national branded product would be differentiated from the private label product. Some product is sold under the company’s nationally advertised brand (Brand X), while the re-proportioned formula is packaged under a private label (Brand Y) and is sold to chain stores. Because of volume discounts and other stipulations in the sales agreements, the contribution to profit from the Brand Y private label product is only $30 per case compared to $40 per case for product sold to distributors under the company’s Brand X national brand. An ample supply is available of most of the pet food ingredients; however, three additives are in limited supply. The tight supply of nutrient C (one of several nutrient additives), a flavor additive, and a color additive all limit production of both Brand X and Brand Y. The formula for a case of Brand X calls for 4 units of nutrient C, 12 units of flavor additive, and 6 units of color additive. The Brand Y formula per case requires 4 units of nutrient C, 6 units of flavor additive, and 15 units of color additive. The supply of the three ingredients for each production period is limited to 30 units of nutrient C, 72 units of flavor additive, and 90 units of color additive. A. Determine the equations for each of the three constraints that are plotted on the attached “Graph 1,” showing all work necessary to arrive at the equations. 1. Identify each constraint as a minimum or a maximum constraint. B. Determine the total contribution to profit if the company produces a combination of cases of Brand X and Brand Y that lies on the purple objective function (profit line) as it is plotted on the attached “Graph 1.” C. 
Determine how many cases each of Brand X and of Brand Y you recommend should be produced during each production period for optimum production if Company A wants to generate the greatest amount of profit, showing all of your work. D. Determine the total contribution to profit that would be generated by the production level you recommend in part C, showing all of your work. Re: Linear Programming In your submission above, you followed each P line by a sum of two prices, but did you include a calculation of multiplying the points by the prices? Perhaps they really want all of the details. In your conclusion include the point, the price at this point, and perhaps the calculation again. Beyond this, I cannot surmise what other details they require given the method and the work you already did. March 9th 2013, 08:41 AM #2 Mar 2013 BC, Canada March 9th 2013, 02:49 PM #3 Mar 2013 March 9th 2013, 05:25 PM #4 Mar 2013 BC, Canada
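One way to show the work the evaluator is asking for is to enumerate the candidate corner points explicitly: intersect each pair of constraint boundaries, discard the infeasible intersections, and evaluate the profit at the rest. The sketch below is not part of the original thread; it uses the constraint coefficients implied by the problem statement (4x + 4y <= 30 for nutrient C, 12x + 6y <= 72 for flavor, 6x + 15y <= 90 for color, plus x >= 0 and y >= 0) and the objective 40x + 30y.

from itertools import combinations
import numpy as np

# Constraints in the form a*x + b*y <= c.
constraints = [
    (4, 4, 30),    # nutrient C
    (12, 6, 72),   # flavor additive
    (6, 15, 90),   # color additive
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

candidates = set()
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:
        continue                       # parallel boundary lines, no corner point
    x, y = np.linalg.solve(A, np.array([c1, c2], dtype=float))
    if feasible(x, y):
        candidates.add((round(x, 4), round(y, 4)))

for x, y in sorted(candidates):
    print(f"({x}, {y}) -> profit = {40 * x + 30 * y:.2f}")

Running it lists exactly the five vertices from the submission -- (0, 0), (0, 6), (2.5, 5), (4.5, 3) and (6, 0) -- with (4.5, 3) giving the maximum profit of 270.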
{"url":"http://mathhelpforum.com/calculus/214468-linear-programming.html","timestamp":"2014-04-18T21:46:08Z","content_type":null,"content_length":"41201","record_id":"<urn:uuid:001d58ff-5d3c-4a91-8f73-62f49ed2b404>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
BIOGRAPHY 7.1 Pafnuty L. Chebyshev (1821 -1894) Pafnuty Lvovich Chebyshev was born in Okatovo, Russia. His parents, who belonged to the gentry, had him privately tutored. He quickly became fascinated by mathematics and eventually studied mathematics and physics at Moscow University. Even as a student, he won a silver medal for a now-famous paper on calculating the roots of equations. It was only the first of many brilliant papers that he wrote while teaching mathematics at St. Petersburg University and pursuing a keen interest in mechanical engineering. (Among other things, he contributed significantly to ballistics, which gave rise to various innovations in artillery, and he invented a calculating machine.) Always, he stressed the unity of theory and practice, saying: Mathematical sciences have attracted especial attention since the greatest antiquity; they are attracting still more interest at present because of their influence on industry and arts. The agreement of theory and practice brings most beneficial results; and it is not exclusively the practical side that gains; the sciences are advancing under its influence as it discovers new objects of study for them, new aspects to exploit in subjects long familiar. Chebyshev typically worked toward the effective solution of problems by establishing algorithms (methods of computation) that gave either an exact numerical answer or an approximation that was correct within precisely defined limits. A most important example of this approach in the field of statistics is his formulation of what is now called Chebyshev's theorem, discussed in Chapter 7. For practical purposes, many a frequency distribution that is only slightly skewed (with a coefficient of skewness between -0.5 and +0.5, for example) can be treated as a perfectly symmetrical one, and the higher percentages applicable to a normal curve (discussed in Application 7.1, Standard Scores) can be applied to estimate the proportions of observations falling within specified distances from the mean. Chebyshev's theorem, however, demonstrates a radical change: He was the first mathematician to insist on absolute accuracy in limit theorems. In the words of A. N. Kolmogorov, another eminent Russian mathematician, "he always aspired to estimate exactly in the form of inequalities absolutely valid under any number of tests the possible deviations from limit regularities." Source: Dictionary of Scientific Biography, vol.3 (New York: Charles Scribner's, 1971), pp. 226 and 231. Copyright © 2003 South-Western. All Rights Reserved. Disclaimer
{"url":"http://www.swlearning.com/quant/kohler/stat/biographical_sketches/bio7.1.html","timestamp":"2014-04-18T20:50:40Z","content_type":null,"content_length":"8782","record_id":"<urn:uuid:b141603b-6a7c-4316-8960-983b0eb00542>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
A note on the complexity of cryptography Results 1 - 10 of 20 - Journal of Algorithms , 1985 "... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..." Cited by 188 (0 self) Add to MetaCart This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should - Advances in cryptology—CRYPTO 2000 (Santa Barbara, CA), 166–183, Lecture Notes in Comput. Sci. 1880 , 2000 "... Abstract. The braid groups are infinite non-commutative groups naturally arising from geometric braids. The aim of this article is twofold. One is to show that the braid groups can serve as a good source to enrich cryptography. The feature that makes the braid groups useful to cryptography includes ..." Cited by 98 (4 self) Add to MetaCart Abstract. The braid groups are infinite non-commutative groups naturally arising from geometric braids. The aim of this article is twofold. One is to show that the braid groups can serve as a good source to enrich cryptography. The feature that makes the braid groups useful to cryptography includes the followings: (i) The word problem is solved via a fast algorithm which computes the canonical form which can be efficiently manipulated by computers. (ii) The group operations can be performed efficiently. (iii) The braid groups have many mathematically hard problems that can be utilized to design cryptographic primitives. The other is to propose and implement a new key agreement scheme and public key cryptosystem based on these primitives in the braid groups. The efficiency of our systems is demonstrated by their speed and information rate. The security of our systems is based on topological, combinatorial and group-theoretical problems that are intractible according to our current mathematical knowledge. The foundation of our systems is quite different from widely used cryptosystems based on number theory, but there are some similarities in design. Key words: public key cryptosystem, braid group, conjugacy problem, key exchange, hard problem, non-commutative group, one-way function, public key infrastructure 1 - In Cryptology and Computational Number Theory , 1990 "... ..." , 1998 "... In our opinion, the Foundations of Cryptography are the paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural cryptographic problems. In this essay, we survey some of these paradigms, approaches and techniques as well as some of the fundamental result ..." 
Cited by 24 (0 self) Add to MetaCart In our opinion, the Foundations of Cryptography are the paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural cryptographic problems. In this essay, we survey some of these paradigms, approaches and techniques as well as some of the fundamental results obtained using them. Special effort is made in attempt to dissolve common misconceptions regarding these paradigms and results. © Copyright 1998 by Oded Goldreich. Permission to make copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that new copies bear this notice and the full citation on the first page. Abstracting with credit is permitted. A preliminary version of this essay has appeared in the proceedings of Crypto97 (Springer's Lecture Notes in Computer Science, Vol. 1294). 0 Contents 1 Introduction 2 I Basic Tools 6 2 Central Paradigms 6 2.1 Computati...
- Information Processing Letters , 1997 "... Abstract We introduce the notion of associative one-way functions and prove that they exist if and only if P ≠ NP. As evidence of their utility, we present two novel protocols that apply strong forms of these functions to achieve secret key agreement and digital signatures. ..." Cited by 12 (0 self) Add to MetaCart Abstract We introduce the notion of associative one-way functions and prove that they exist if and only if P ≠ NP. As evidence of their utility, we present two novel protocols that apply strong forms of these functions to achieve secret key agreement and digital signatures.
, 2004 "... The paper introduces the concept of a negative database, in which a set of records DB is represented by its complement set. That is, all the records not in DB are represented, and DB itself is not explicitly stored. After introducing the concept, several results are given regarding the feasibility o ..." Cited by 12 (9 self) Add to MetaCart The paper introduces the concept of a negative database, in which a set of records DB is represented by its complement set. That is, all the records not in DB are represented, and DB itself is not explicitly stored. After introducing the concept, several results are given regarding the feasibility of such a scheme and its potential for enhancing privacy. It is shown that a database consisting of n, l-bit records can be represented negatively using only O(ln) records. It is also shown that membership queries for DB can be processed against the negative representation in time no worse than linear in its size and that reconstructing the database DB represented by a negative database NDB given as input is an NP-hard problem when time complexity is measured as a function of the size of
, 1993 "... Abstract We propose associative one-way functions as a new cryptographic paradigm for exchanging secret keys and for signing digital documents. First, we precisely define these functions and establish some of their basic properties. Next, generalizing a theorem of Selman, we constructively prove tha ..." Cited by 9 (1 self) Add to MetaCart Abstract We propose associative one-way functions as a new cryptographic paradigm for exchanging secret keys and for signing digital documents. First, we precisely define these functions and establish some of their basic properties. Next, generalizing a theorem of Selman, we constructively prove that they exist if and only if P ≠ NP.
In addition, we exhibit an implementation based on integer multiplication. We present a novel protocol that enables two parties to agree on a secret key, and we discuss the security of this protocol. Finally, we generalize our protocol to enable two or more parties to agree on a secret key, and we present a similar protocol for signing documents.
, 1996 "... In [11] T. Matsumoto and H. Imai described a new asymmetric algorithm based on multivariate polynomials of degree two over a finite field. Then in [14] this algorithm was broken. The aim of this paper is to show that despite this result it is probably possible to use multivariate polynomials of degree ..." Cited by 7 (0 self) Add to MetaCart In [11] T. Matsumoto and H. Imai described a new asymmetric algorithm based on multivariate polynomials of degree two over a finite field. Then in [14] this algorithm was broken. The aim of this paper is to show that despite this result it is probably possible to use multivariate polynomials of degree two in carefully designed algorithms for asymmetric cryptography. In this paper we will give some examples of such schemes. All the examples that we will give belong to two large families of schemes: HFE and IP. With HFE we will be able to do encryption, signatures or authentication in an asymmetric way. Moreover HFE (with properly chosen parameters) resists all known attacks and can be used in order to give very short asymmetric signatures or very short encrypted messages (of length 128 bits or 64 bits for example). IP can be used for asymmetric authentications or signatures. IP authentications are zero knowledge. Note 1: Another title for this paper could be "How to repair the Matsumoto-Imai algorithm with the same kind of public polynomials". Note 2: This paper is the extended version of the paper with the same title published at Eurocrypt '96.
, 1996 "... The Gabidulin Public Key Cryptosystem (PKC), like the well known McEliece PKC, is based on error correcting codes, and was introduced as an alternative to the McEliece system with the claim that much smaller codes could be used, resulting in a more practical system. ..." Cited by 5 (0 self) Add to MetaCart The Gabidulin Public Key Cryptosystem (PKC), like the well known McEliece PKC, is based on error correcting codes, and was introduced as an alternative to the McEliece system with the claim that much smaller codes could be used, resulting in a more practical system.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=536257","timestamp":"2014-04-20T12:31:15Z","content_type":null,"content_length":"35930","record_id":"<urn:uuid:e3489bc4-7d20-465a-b771-546faf1eb6d6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Basic Algebra This page: basic algebra Dr. Math See also the Dr. Math FAQ: cubic and quartic equations order of operations Internet Library: basic algebra T2T FAQ: algebra help About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Basic Algebra Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Solving simple linear equations. Positive/negative integer rules. Mixture problems. Quadratic equations. Absolute value. Completing the square. Direct and indirect variation. Inequalities and negative numbers. Is x to the second power plus 6x plus 5 factorable? The problems are: x^12-y^12 and x^12+y^12. (x-2)(x+3) = 6 : solve by factoring for x. Label your variables, write an equation, solve, label answers... This problem is giving me a hard time: 6x^2 + 31x + 5. I can't figure out how to factor this: 5x^2 + 2x -1. The best I can do is: (5x+1)^2 + 2(x-1). Factor the given expression completely: 8 - t^3. Is it possible to factor the polynomial k1*x^2 + k2*y^2 + k3*z^2 + k4*xy + k5*xz + k6*yz into (g1*x + g2*y + g3*z)(h1*x + h2*y + h3*z) over the complex field for every choice of k1 through k6? Factor (xy+1)(x+1)(y+1)+xy and ... I need help factoring x^3 - x^2 + x - 2 = 4. I do not understand why you cannot factor a sum of two squares, but you can factor a perfect square trinomial. How do you reduce 5k^2-13k-6 / 5k+2 to lowest terms? What is factoring by grouping? When factoring a trinomial, why is it necessary to write the trinomial in four terms? How do you factor 2r^2 + rt - 6t^2 ? How do I solve an equation like x^3 + 3x^2 - x - 3 = 0 by factoring? I'm having trouble factoring because of the four terms and the x^3. Factor 3x-21, 5x^2y-15xy^2, 18x^2-27x, and 2a-8b-10. How can I factor 4x^2 - 36, or x^2 + yz + xy + xz? What's the fastest way to solve any type of quadratic equation? Are there common points or rules I can apply in every situation? How do you factor 2x^2 - 11x + 15 or 6a^2 + 7a - 5? A trick to solving quadratics was presented to me. I was wondering if there is a proof for it. Factoring the numerator of a limit problem. How do you factor trinomials with a number in front of the first variable? For example: 6y^2 - 19y + 10. A discussion of using the grouping method to factor a trinomial and how to determine the correct middle terms, as well as using the discriminant to test if the polynomial is factorable. A discussion and proof of why factoring trinomials by grouping works. Factor the expression 1+y(1+x)^2(1+xy). This one has been frustrating. Factor: 9a^2+4bc-4c^2-b^2 I don't remember how to factor. Are there any solutions of Fermat's Last Theorem, x^n + y^n = z^n, for n less than 2? Could you explain Ferrari's method for quartics? Find all integer values of x for which [12(x^2-4x+3) / (x^3-3x^2-x+3)] has a positive integer value. We are learning about sequences and how to find the patterns in numbers. Our teacher gave us the sequence 0, 3, 8, 15, 24, 35 and told us that we had to use factoring to find the answer. I know the answer is (n + 1)(n - 1), but I can't see how to get that. Alfredo needs to make 250 ml of a 27% alcohol solution, using a 15% solution and a 40% solution. How much of each should he use? What is the inverse function of f(x) = 1 - 2x? 
What is the general method for finding an inverse function? Given that 1J1 = 2, 3J5 = 34, 6J9 = 117, and 10J14 = 296, conjecture a value for 3J8. Find the solution set: 3-1/4x <_ 2+ 3/8 x; solve the system... Factor ax^2 - bx - c. In the equation y = (m/x) + c, how do the values of m and c affect the graph? How do you find the asymptotes? The equations ax^4 + bx^3 + c = 0 and cx^4 + bx^3 + a = 0 have a common root. Find all possible values of b, if a + c = 100. Can you give me simple steps for how to do a chapter in the book called "Finding Equations from Relations"? Determine the equation of the parabola whose vertex is at (1,3) and whose directrix is y=x. Page: [<prev] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_algebra.html?start_at=241&num_to_see=40&s_keyid=40387015&f_keyid=40387016","timestamp":"2014-04-19T01:58:40Z","content_type":null,"content_length":"24639","record_id":"<urn:uuid:e514936d-973c-498f-8c81-f88b1f6fba0a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Markov Processes and Non-Equilibrium Statistical Mechanics , 1999 "... . We study the statistical mechanics of a finite-dimensional non-linear Hamiltonian system (a chain of anharmonic oscillators) coupled to two heat baths (described by wave equations). Assuming that the initial conditions of the heat baths are distributed according to the Gibbs measures at two differ ..." Cited by 53 (14 self) Add to MetaCart . We study the statistical mechanics of a finite-dimensional non-linear Hamiltonian system (a chain of anharmonic oscillators) coupled to two heat baths (described by wave equations). Assuming that the initial conditions of the heat baths are distributed according to the Gibbs measures at two different temperatures we study the dynamics of the oscillators. Under suitable assumptions on the potential and on the coupling between the chain and the heat baths, we prove the existence of an invariant measure for any temperature difference, i.e., we prove the existence of steady states. Furthermore, if the temperature difference is sufficiently small, we prove that the invariant measure is unique and mixing. In particular, we develop new techniques for proving the existence of invariant measures for random processes on a non-compact phase space. These techniques are based on an extension of the commutator method of H ormander used in the study of hypoelliptic differential operators. 1. Intr...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8011511","timestamp":"2014-04-19T23:36:50Z","content_type":null,"content_length":"12611","record_id":"<urn:uuid:d15c6eb5-04a8-4ced-97cc-1a8a013a72eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Absence of evidence in the Bayesian
Here's a little something that maybe you didn't know about induction. Let's say I have evidence B. I can use this evidence B to argue inductively for claim A. Evidence B doesn't prove A, but it does make A more likely. So what happens if I instead have evidence not-B? That is, I've looked, and found that evidence B is absent. Does that make not-A more likely? In other words, does absence of evidence amount to evidence of absence?
Yes. And I can prove it mathematically. [I mention math and thus lose half my readers... Skip the proof section if you must.]
The Proof
Good to read first: Induction and the Bayesian
Bayes' theorem states the following:
$P(A|B) = \frac{P(B | A)\, P(A)}{P(B)}.$
P(A|B) is equal to the probability that claim A is true if we find evidence B. P(A) is the "prior" probability that A is true. P(B|A) is the probability of finding evidence B if we know A is true. P(B) is the "prior" probability of finding evidence B.
If B is evidence for A, then P(A|B) > P(A). If not-B (written as ~B) is evidence for not-A, then P(~A|~B) > P(~A). Thus we seek to prove the following:
If P(A|B) > P(A), then P(~A|~B) > P(~A)
We will use, in addition to Bayes' theorem, the following identities:
~(~X) = X
P(~X) = 1 - P(X)
P(~X|Y) = 1 - P(X|Y)
This theorem assumes two things. First, none of the prior probabilities can be zero. For example, if P(B) = 0, Bayes' theorem doesn't even make sense anyway, since it divides by zero. Second, it assumes that probability is a good way to model knowledge.*
1. Start with Bayes' theorem: P(A|B) = P(B|A)*P(A)/P(B)
2. We're given: P(A|B) > P(A)
3. Combining 1 and 2: P(B|A)*P(A)/P(B) > P(A)
4. Multiply by P(B): P(B|A)*P(A) > P(B)*P(A)
5. Use the identities: [1-P(~B|A)]*P(A) > [1-P(~B)]*P(A)
6. Some algebra: P(~B)*P(A) > P(~B|A)*P(A)
7. Use Bayes' theorem: P(~B)*P(A) > P(A|~B)*P(~B)
8. Use the identities: P(~B)*[1-P(~A)] > [1-P(~A|~B)]*P(~B)
9. Some algebra: P(~B)*P(~A|~B) > P(~A)*P(~B)
10. Divide out by P(~B): P(~A|~B) > P(~A)
*Note: Some people might consider this second assumption questionable. Under certain interpretations, probability is only reliable in analyzing repeatable phenomena, and the universe is not repeatable.
Discussion and Conclusion
What does this mean? It means that if the existence of some evidence supports a claim, then the non-existence of that evidence detracts from the claim. This is a logical necessity in induction. The only assumptions are that we can model our knowledge with probabilities, and that none of the prior probabilities are certain. This directly contradicts the conventional wisdom that "Absence of evidence is not evidence of absence".
So where did this conventional wisdom come from? There are two justifications I can think of. First, "absence of evidence" might mean that we neither know whether there is or there isn't evidence, because we haven't looked. In that case, the conventional wisdom is true. Second, though I proved that absence of evidence is evidence of absence, I did not prove that it's very good evidence of absence. For example, if I found bigfoot behind a tree, that would provide extremely good evidence for bigfoot, but if I didn't find him behind a tree, that would provide very weak evidence against bigfoot. But it's still evidence, mathematically speaking. I've previously explained this asymmetry as the basis for the concept of "burden of proof".
So, I wasn't lying when I said the Bayesian gives us insight into the inner workings of reason! This is just one of the reasons that math is cool.
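A small numerical sanity check of the theorem (an illustration only, not a substitute for the algebra above): draw random joint distributions over A and B and confirm that whenever B is evidence for A, not-B is evidence for not-A.

import random

def check_once():
    # Random non-degenerate joint distribution over the four outcomes
    # (A,B), (A,~B), (~A,B), (~A,~B).
    w = [random.random() + 1e-6 for _ in range(4)]
    total = sum(w)
    p_ab, p_anb, p_nab, p_nanb = [x / total for x in w]

    p_a = p_ab + p_anb
    p_b = p_ab + p_nab
    p_a_given_b = p_ab / p_b
    p_na_given_nb = p_nanb / (1 - p_b)

    if p_a_given_b > p_a:                 # B is evidence for A ...
        assert p_na_given_nb > 1 - p_a    # ... so ~B must be evidence for ~A

for _ in range(100000):
    check_once()
print("No counterexample found in 100,000 random joint distributions.")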
8 comments: Yay for math! I'm familiar with this notion, but I've sometimes heard it described by example. Say we want to confirm the statement that all swans are white (not actually true, but we'll pretend it is for now). Then seeing a white swan would be evidence of this claim. But seeing a black crow would also be evidence for the claim, since you've seen a non-white thing that turned out to be a non-swan. I think it's a little problematic to simply say that black crows are evidence that swans are white. We need to specify exactly what observation we are talking about. Let's imagine that you see a black thing off in the distance, perhaps a swan, perhaps not. The mere possibility that it is a swan can be evidence against our claim. If we go on to observe that it is a crow rather than a swan, we've acquired evidence for our claim. Of course, all the evidence here is extremely weak. I do not believe in a sharp dichotomy between "weak" and "strong" evidence, but if there were one, this would definitely fall in the weak For example, if I found bigfoot behind a tree, that would provide extremely good evidence for bigfoot, but if I didn't find him behind a tree, that would provide very weak evidence against bigfoot. But it's still evidence, mathematically speaking. ummm... what happens if some of us saw bigfoot behind the tree and some of us didn't? When I spoke of bigfoot, I was thinking of a very mathematically idealized scenario. In reality, we need to consider the possibility of hoaxes and false positives. If multiple independent witnesses see undeniable and self-consistent evidence of bigfoot, that would suffice to overcome most contrary evidence. Or did you mean that different people look in the same place, and only some of them see bigfoot? I'm not sure how that could happen, unless we're talking about pareidolia. Pareidolia is when pattern recognition misfires (ie, seeing Jesus in a crepe or bigfoot on mars). This result and the credo that "absence of evidence is not evidence of absence" are not contradictory. I see the latter as a statement of logic. Essentially, if A -> B, it does not follow that ~A -> ~B. Bayes theorem deals in probabilistic reasoning which isn't quite the same, as it involves degrees of certainty, not absolutes. To illustrate the difference, imagine you are shown 100 upside-down cups and told that there may be a ball under one or more of them. If you turn over 99 cups and still haven't found a ball, you will probably assign a low probability to the existence of a ball, but you cannot say absolutely that one doesn't exist, because there is still one cup remaining. I already agree with you. I usually take "evidence" to mean the probabilistic kind rather than the absolute kind. But if by "evidence" we mean absolute proof, then of course, "absence of evidence is not evidence of absence" is correct. Talking about Bigfoot behind a tree isn't too interesting an example because people have already looked behind virtually all "trees" on the planet. Each "tree" provided a very, very small evidence of absence, but by the time almost all trees have been looked behind ... To me, more interesting real life examples would be the questions of whether their is life on other planets, or intelligent life. Not only is there no convincing evidence that no alien intelligent life has visited earth, but we haven't found radio transmissions yet. Lol, so it's not just me who thinks bigfoot is boring! I intended it to be a boring example, so as not to detract from the more general point. 
SETI is another topic all to itself! As far as SETI is concerned, I don't think it has provided strong evidence that there is no intelligent life out there. Its search space is far too small. However, I think it does suggest that more of the same kind of searching is unlikely to turn up anything.
{"url":"http://skepticsplay.blogspot.com/2008/06/absence-of-evidence-in-bayesian.html?showComment=1212804060000","timestamp":"2014-04-18T00:16:29Z","content_type":null,"content_length":"99244","record_id":"<urn:uuid:4241e11f-2798-4106-8380-46ec6166c0d0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Rate of Change in Calculus | eHow
Rate of Change in Calculus
By Ryan Malloy, eHow Presenter
Rate of change is a calculus term that typically applies to one of two things. Find out about rate of change in calculus with help from an experienced math tutor in this free video clip.
Video Transcript
Hi there. This is Ryan Malloy here at the Worldwide Center of Mathematics. In this video, we're going to discuss the concept of the rate of change as it applies to calculus. So when you hear the term rate of change in calculus, there's typically one of two meanings being talked about. Let's say we've got some function f of x; we won't define it for now. There are two things we might be asked about it: the average rate of change over an interval, sometimes called the AROC, and the instantaneous rate of change at a value, sometimes called the IROC. So how do we express the average rate of change over an interval? If we are looking at the average rate of change on the interval from a to b, where a and b are simply two numbers within the domain of our function, the average rate of change can be expressed quite simply as the value of the function at a minus the value of the function at b, divided by a minus b. And it's just that simple. The instantaneous rate of change is a little more complicated and uses some more advanced techniques. Let's say that we want the instantaneous rate of change at some value a. This is given by a limit: the limit as h approaches zero of f of a plus h minus f of a, divided by h, where h is just some arbitrary variable. Here we can't simply plug in h equals zero directly, since there would be a zero in the denominator. But typically this limit is not very difficult to compute, and as a result there are a number of rules and properties that have been well established for doing it quickly. For example, if we have f of x equals, let's say, x cubed, then instead of computing this limit for x cubed we can simply use what's known as the power rule. So if we want to find the instantaneous rate of change at a, sometimes indicated by f prime of a, we'd simply take three x squared at a, which just gives us three a squared. So, for example, if a were two, we get two squared is four, times three is 12. My name is Ryan Malloy, and we've just discussed the rate of change in calculus.
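As a concrete check of the two notions in the transcript, here is a short Python sketch (the function f(x) = x**3 and the point a = 2 come from the example above; the small step h is an illustrative choice for approximating the limit):

from math import isclose

def f(x):
    return x ** 3

# Average rate of change (AROC) of f on the interval [a, b]: (f(a) - f(b)) / (a - b)
def aroc(f, a, b):
    return (f(a) - f(b)) / (a - b)

# Instantaneous rate of change (IROC) at a, approximated by the limit definition
# with a small but nonzero step h.
def iroc(f, a, h=1e-6):
    return (f(a + h) - f(a)) / h

print(aroc(f, 1, 3))                           # average rate of change of x**3 on [1, 3] -> 13.0
print(iroc(f, 2))                              # numerically close to the power-rule value 3*2**2 = 12
print(isclose(iroc(f, 2), 12, rel_tol=1e-4))   # True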
{"url":"http://www.ehow.com/video_12222323_rate-chance-calculus.html","timestamp":"2014-04-21T12:30:41Z","content_type":null,"content_length":"99316","record_id":"<urn:uuid:05d20da4-a54e-4fe3-8119-308be2a9c521>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Central Limit Theorem Date: 03/08/2002 at 11:04:34 From: Stephanie Sparks Subject: Probability of "at least" Here's the problem: The probability that a drug is effective on any one patient is 62%. Find the probability that, of the next 200 patients, at least half (or more) will survive. I've figured that the probability of the patient dying is 38%, and out of the next 200 patients, we want to know if at least 100 or more people will survive, exactly 100 or more successes in 200 trials, but I don't know what to do from there. Date: 03/08/2002 at 17:47:27 From: Doctor Jubal Subject: Re: Probability of "at least" Hi Stephanie, Thanks for writing Dr. Math. The most straightforward way to do this, and one that would take a lot of time, is to recognize that the probability of achieving k successes in N trials has a binomial distribution, which is P(k) = p^k * (1-p)^(N-k) * ---------- where p is the probability of success on a single trial. You could use this to calculate P(100), P(101), P(102), and so on to P(200), and then add all these probabilities up, and you'd have the exact answer. It would also take you far more time than you probably want to devote to this problem. One of the most useful theorems of statistics is the Central Limit Theorem, which says that if you take a large number of independent random variables and add them together, no matter what the form of the probability distributions of the individual variables is, the sum will have a distribution that is approximately Gaussian. This is nice because it lets us approximate a binomial distribution (with a "large enough" value of N) as a Gaussian one, because the binomial distriubtion itself is really the sum of several Bernoulli distributions (each trial has a Bernoulli distribution), so as the number of trials gets large, the binomial distribution becomes more or less Gaussian. Because the Gaussian distribution is continuous (whereas the binomial distribution is discrete), it lets us do an integral instead of the sum P(100) + P(101) + P(102) + ... + P(200). Best of all, the values of the integral we need to do have already been calculated and put in the back of almost any mathematical handbook or statistics text. The Gaussian distribution that best approximates a binomial distribution has mean Np and variance Np(1-p), which makes a lot of sense because Np and Np(1-p) are the mean and variance of the binomial distribution itself. So, since 62% of the patients survive, the mean number of survivors in 200 patients is (0.62)(200) = 124. Also, we can expect a variance in this value of (0.62)(0.38)(200) = 47.12, for a standard deviation of about 6.86. As you said, we want to know what the probability of at least 100 patients surviving is, or rather what the probability of doing no worse than 24 more deaths than the mean. 24 is (24)/(6.86) = 3.5 times more than the standard deviation. Now, for a Gaussian distribution, what is the probability of falling no more than 3.5 standard deviations below the mean? Does this help? Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Jubal, The Math Forum
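Doctor Jubal's normal approximation, and the exact binomial sum it replaces, can both be evaluated in a few lines (Python, with SciPy assumed to be available; the 0.62 survival probability and 200 patients come from the problem):

from scipy.stats import norm, binom

n, p = 200, 0.62
mu = n * p                           # mean number of survivors, 124
sigma = (n * p * (1 - p)) ** 0.5     # standard deviation, about 6.86

# Gaussian approximation suggested by the Central Limit Theorem:
# P(at least 100 survivors) = P(X >= 100) ~ P(Z >= (100 - mu) / sigma)
z = (100 - mu) / sigma               # about -3.5
print(1 - norm.cdf(z))               # about 0.9998

# Exact answer, summing the binomial terms P(100) + P(101) + ... + P(200):
print(1 - binom.cdf(99, n, p))

Both numbers agree to three decimal places, which is the point of using the Gaussian shortcut instead of adding up 101 binomial terms by hand.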
{"url":"http://mathforum.org/library/drmath/view/52817.html","timestamp":"2014-04-17T21:43:04Z","content_type":null,"content_length":"8334","record_id":"<urn:uuid:980b5338-5d57-4fce-9da3-81ec28fbe05e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: undefined function or variable Replies: 2 Last Post: Sep 17, 2013 10:31 AM Messages: [ Previous | Next ] Re: undefined function or variable Posted: Sep 17, 2013 10:31 AM "JIAWEI " <jiaweihe.zju@gmail.com> wrote in message > Dear all, > I am new to matlab and got stuck with a problem---- undefined function or > variable b. > could any of you help me with it? thank you very much. > my function is as follows; > function [ p ] = RT( a ) > %UNTITLED2 1-dimension rapid transformation > % Detailed explanation goes here > [m,n]=size(a); > if n~=1 > for i=1:n/2 > b(i) = a(i) | a(i+n/2); Assuming n is even, you could do this by converting a into a column vector, reshaping that to have 2 columns and half as many rows, and ORing the two columns. [You'd then want to make sure that b's orientation, row vector or column vector, is what you want it to be.] > c(i) = xor(a(i),~a(i+n/2)); Ditto but with XOR and NOT. > end > b=RT(b); c=RT(c); > end > p = [b,c]; If n is equal to 1 (i.e. you pass in a scalar or a column vector as a) then your FOR loop never executes and neither b nor c are defined before this line executes. > end > I have a feeling that the problem may be that b=RT(b) is not correct,but I > am not sure and have no idea how to fix it. thank you again! That's okay, if recursive. Steve Lord To contact Technical Support use the Contact Us link on Date Subject Author 9/16/13 undefined function or variable JIAWEI 9/16/13 Re: undefined function or variable JIAWEI 9/17/13 Re: undefined function or variable Steven Lord
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2598891&messageID=9258482","timestamp":"2014-04-20T01:12:35Z","content_type":null,"content_length":"19803","record_id":"<urn:uuid:c2f7d491-0ad6-4053-9f76-724d518e161c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
The entropy of nations: Global energy inequality lessens, but for how long?
This is a plot of the fraction of world population (displayed on a logarithmic scale) versus per-capita energy consumption expressed in kilowatts. If the data are exponential in nature, they should lie along a straight line, which they largely do. The labels show the consumption rates for several countries. Credit: Yakovenko
The 18th century writer Adam Smith provided a workable metaphor for the way society utilizes resources. In his book "The Wealth of Nations," he argued that even as individuals strive, through personal industry, to maximize their advantage in life, they inadvertently contribute, as if under the influence of a "hidden hand," to an aggregate disposition of wealth. Well, if Smith were a physicist and alive in the 21st century, he might be tempted to compare people or nations to molecules and to replace the phrase "hidden hand" with "thermodynamic process."
Exponential behavior
Victor Yakovenko, a scientist at the Joint Quantum Institute (1), studies the parallels between nations and molecules. The distribution of energies among molecules in a gas and the distribution of per-capita energy consumption among nations both obey an exponential law. That is, the likelihood of having a certain energy value is proportional to e^(-E/kT), where T is the temperature and k is a proportionality factor called Boltzmann's constant. ("Temperature" here is taken to be the average national per-capita energy consumption in the world.)
Studies of world energy consumption often feature plots of energy consumption or population over time. Yakovenko and his colleagues prefer to draw out the underlying exponential distribution of national energy use by plotting the fraction of world population versus per-capita consumption. The JQI researchers draw on data from the U.S. Energy Information Administration (EIA). It covers the period from 1980 to 2010 and includes numbers from more than 200 countries; see figure 1. Their results are published in the journal "Entropy" (2). A few years ago Yakovenko made a similar study of national per-capita income distributions (3).
Actually, the consumption data can be graphed in another way, one that illustrates the distributive nature of energy use. In a "Lorenz plot," both the vertical and horizontal axes are dimensionless. Figure 2 shows data curves for four years: 1980, 1990, 2000, and 2010. The progression of curves is toward a fifth curve, which stands for the idealized exponential behavior.
This is a plot of the cumulative energy consumption versus cumulative population for four different years: 1980 (blue curve), 1990 (brown), 2000 (green), 2010 (red), and the idealized exponential (black). Credit: Yakovenko
Maximum entropy
This fifth curve corresponds to a state of maximum entropy in the distribution of energy. Entropy is not merely a synonym for disorder. Rather, entropy is a measure of the number of different ways a system can exist. If, for example, $100 was to be divided among ten people, total equality would dictate that each person received $10. In Figure 2, this is represented by the solid diagonal line. Maximum inequality would be equivalent to giving all $100 to one person. This would be represented by a curve that hugged the horizontal axis and then proceeded straight up the rightmost vertical axis.
Statistically, both of these scenarios are rather unlikely since they correspond to unique situations.
The bulk of possible divisions of $100 would look more like this example: person 1 gets $27, person 2 gets $15, and so forth down to person 10, who receives only $3. The black curve in Figure 2 represents this middle case, where, in the competition for scarce energy resources, neither total equality nor total inequality reigns. Of course, the labels along the curves are a stark reminder that some nations get much more than the average and some nations much less. In Figure 2 the slope of the curve at any one point corresponds to the per-capita energy consumption. So the upper right of each curve is inhabited by the high-consuming nations: USA, Russia, France, UK. And the lower-left, lower-slope positions on the curve include Brazil and India. The movement of China upwards on the curve is the most dramatic change over the past 40 years.
The inequality between the haves and have-nots is often characterized by a factor called the Gini coefficient, or G (named for Italian sociologist Corrado Gini), defined as the area between the Lorenz curve and the solid diagonal line divided by half the area beneath the diagonal line. G is then somewhere between 0 and 1, where 0 corresponds to perfect equality and 1 to perfect inequality. The curve corresponding to the maximum-entropy condition has a G value of 0.5. The JQI scientists calculated and graphed G over time, showing how G has dropped over the years. In other words, inequality in energy consumption among the nations has been falling. Many economists attribute this development to increased globalization in trade. And as if to underscore the underlying thermodynamic nature of the flow of commodities, a recent study by Branko Milanovic of the World Bank features a Gini curve very similar to the JQI curve. However, he was charting the decline of global income inequality by tracking a parameter called purchasing power parity (PPP) among nations (4).
Can it continue?
The JQI curve suggests that the trend toward lesser inequality in energy consumption will start stalling out, as the energy consumption distribution begins to approach full exponential behavior. Is this because of the inexorable applicability of the laws of thermodynamics to national energy consumption? Just as with gas molecules, where some molecules are "rich" (possess high energy) and others "poor," are some nations destined to be rich and others poor?
Maybe not. Professor Yakovenko believes that one obvious way to alter the circumstances of energy distribution expressed in the figures above is the further development of renewable sources of energy. "These graphs apply to a well-mixed, globalized world, where a finite pool of fossil fuels is redistributable on a global scale. If the world switches to locally-produced and locally-consumed renewable energy and stops reshuffling the deck of cards (fossil fuels), then the laws of probability would not apply, and inequality can be lowered further. After all, the Sun shines roughly equally on everybody."
Yakovenko adds that for an exponential distribution what he calls "the rule of thirds" will be in effect. This means the top 1/3 of the world population will consume 2/3 of the total produced energy while the bottom 2/3 of the population will consume only 1/3 of the total energy.
More information: "Global Inequality in Energy Consumption from 1980 to 2010," Scott Lawrence, Qin Liu, and Victor M.
Yakovenko, published online at Entropy, 16 December 2013, www.mdpi.com/1099-4300/
Jan 03, 2014: "The distribution of energies among molecules in a gas and the distribution of per-capita energy consumption among nations both obey an exponential law." This is rather intuitive due to the concept of "Cultural Exchange". Additionally, the U.S. and European governments and corporations actually promote this distribution, for reasons that aren't exactly clear to me, though an explanation of what I mean can't fit in 1000 characters nor perhaps 10,000.
"This means the top 1/3 of the world population will consume 2/3 of the total produced energy while the bottom 2/3 of the population will consume only 1/3 of the total energy." This is close to known natural mathematical law. However, inequality within nations is far greater than this, and is roughly a 90/10 relationship, which is far more biased than natural law predicts should happen. Thus indicating statistically significant corruption and bias in the economic system, almost entirely favoring the top.
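The exponential benchmark discussed in the article, the G = 0.5 value and the "rule of thirds", is easy to check numerically; a short sketch (Python with NumPy assumed; the sample size is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)   # per-capita consumption following an exponential law
x.sort()

# Lorenz curve: cumulative share of consumption vs. cumulative share of population
cum_pop = np.arange(1, x.size + 1) / x.size
cum_use = np.cumsum(x) / x.sum()

# Gini coefficient: area between the diagonal and the Lorenz curve, divided by 1/2
gini = 1 - 2 * np.trapz(cum_use, cum_pop)
print(round(gini, 3))        # about 0.5 for an exponential distribution

# "Rule of thirds": share of total consumption used by the top 1/3 of the population
top_third = x[int(2 * x.size / 3):].sum() / x.sum()
print(round(top_third, 3))   # roughly 0.70, close to the 2/3 quoted in the article

The Gini value lands at 0.5 as stated; the top-third share comes out nearer 0.70 than exactly 2/3, so the "rule of thirds" is best read as a round-number approximation.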
{"url":"http://phys.org/news/2014-01-entropy-nations-global-energy-inequality.html","timestamp":"2014-04-19T06:02:15Z","content_type":null,"content_length":"77675","record_id":"<urn:uuid:60b8d3c7-605e-4906-b9d4-c7cc09fb3747>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
efficient way of splitting a list in to lists where the sum of the values in each does not exceed n efficient way of splitting a list in to lists where the sum of the values in each does not exceed n Ian Kelly ian.g.kelly at gmail.com Wed Nov 24 17:16:42 CET 2010 On Wed, Nov 24, 2010 at 8:34 AM, Tom Boland <tom at t0mb.net> wrote: > I'm trying to find a _nice_ way of taking a list of integers, and splitting > them in to lists where the sum of the values does not exceed a threshold. > for instance: > l = [1, 2, 3, 4, 5, 6] > n = 6 > nl = [1,2,3], [4], [5], [6] > I don't mind if it's done like playing blackjack/pontoon (the card game > where you try not to exceed a total of 21), ie. going through the list > sequentially, and splitting off the list as soon as the running total would > exceed n. A way of efficiently making all lists as close to n as possible > would be nice however. You've described the bin-packing problem [1]. It's known to be NP-hard, so you won't find a solution that is both efficient and optimal, although the Wikipedia page mentions some polynomial-time approximate algorithms that you might want to take a look at. [1] http://en.wikipedia.org/wiki/Bin_packing_problem More information about the Python-list mailing list
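For the sequential "blackjack" behaviour the original poster said would be acceptable, a greedy one-pass split suffices; it is only the tightest possible packing that runs into the NP-hard bin-packing problem Ian mentions. A rough sketch (assuming no single value exceeds n):

def greedy_split(items, n):
    """Split items into consecutive sublists whose sums never exceed n."""
    groups, current, total = [], [], 0
    for item in items:
        if total + item > n and current:
            groups.append(current)      # close the current group before it overflows
            current, total = [], 0
        current.append(item)
        total += item
    if current:
        groups.append(current)
    return groups

print(greedy_split([1, 2, 3, 4, 5, 6], 6))   # [[1, 2, 3], [4], [5], [6]]

If tighter packing matters more than preserving the original order, a first-fit decreasing heuristic over the same data is the usual next step.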
{"url":"https://mail.python.org/pipermail/python-list/2010-November/593030.html","timestamp":"2014-04-21T15:09:15Z","content_type":null,"content_length":"4466","record_id":"<urn:uuid:8158c189-e96c-4cb8-90b5-fd27483fc640>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Concatenated Acoustic Tube Model of the Vocal Tract
The first step towards a digital model is in representing the tube as a series of concatenated cylindrical sections, each of constant cross-sectional area, as in Figure 1.1(b) (an eight-section approximation of the continuously varying tube of Figure 1.1(a)).
Figure 1.1: (a) The vocal tract, modeled as a single one-dimensional acoustic tube of varying cross-sectional area and (b) an eight tube model suitable for discretization.
Stefan Bilbao 2002-01-22
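A Kelly-Lochbaum style implementation typically works with the reflection coefficients at the junctions between adjacent sections rather than with the areas themselves; a minimal sketch, under one common sign convention (the area values below are made-up placeholders, not taken from the text):

def reflection_coefficients(areas):
    """k_i = (A_i - A_{i+1}) / (A_i + A_{i+1}) for each junction between adjacent sections."""
    return [(a0 - a1) / (a0 + a1) for a0, a1 in zip(areas, areas[1:])]

# Eight hypothetical cross-sectional areas (cm^2), ordered glottis to lips.
areas = [2.6, 1.8, 2.3, 4.0, 5.0, 6.5, 4.9, 3.2]
print([round(k, 3) for k in reflection_coefficients(areas)])

Each coefficient lies between -1 and 1 and governs how much of a travelling pressure wave is reflected back at that junction and how much is transmitted into the next section.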
{"url":"https://ccrma.stanford.edu/~bilbao/master/node5.html","timestamp":"2014-04-16T07:27:40Z","content_type":null,"content_length":"3944","record_id":"<urn:uuid:fef443dd-e17e-446d-b6c6-cb5cb9171cfc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition and Domain of a Rational Expression - Problem 1 In finding the domain for a rational expression all we are concerned about is when the denominator can’t be 0. Our x will be able to be everything else. So we are looking at this I know that I’m going to be able to put in everything except for what the denominator makes the denominator 0, denominator is 0 when x is -2 so that I know that I’m left with all real’s except -2. Different ways of writing that this is one way of saying everything but -2 you may want to do a different notation which is –infinity to -2 soft bracket union of -2 infinity. Different ways of writing the exact same thing, but the main part of this problem is the domain of the statement is everything except for -2. rational expression domain
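The transcript refers to an expression shown on screen, which is not reproduced here; as an illustration (the denominator x + 2 is an assumption consistent with the excluded value -2): for f(x) = (x - 3)/(x + 2), the denominator is zero when x + 2 = 0, that is at x = -2, so the domain is every real number except -2, which in interval notation is (-infinity, -2) U (-2, infinity).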
{"url":"https://www.brightstorm.com/math/algebra-2/rational-expressions-and-functions/definition-and-domain-of-a-rational-expression-problem-1/","timestamp":"2014-04-17T07:03:35Z","content_type":null,"content_length":"58684","record_id":"<urn:uuid:9e4a931f-572e-4ac2-845d-a9432d257d47>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
You Observe A Plane Approaching Overhead And Assume ... | Chegg.com You observe a plane approaching overhead and assume that its speed is 550 miles per hour. The angle of elevation of the plane is 19° at one time and 55° one minute later. Approximate the altitude of the plane. (Round your answer to two decimal places.)
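One standard setup (assuming level flight straight toward the point directly above the observer): in one minute the plane covers d = 550/60, about 9.17 miles, and the altitude h satisfies h/tan 19° - h/tan 55° = d. A quick check of the arithmetic in Python:

import math

speed_mph = 550
d = speed_mph / 60     # miles flown in one minute
h = d / (1 / math.tan(math.radians(19)) - 1 / math.tan(math.radians(55)))
print(round(h, 2))     # altitude in miles, about 4.16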
{"url":"http://www.chegg.com/homework-help/questions-and-answers/observe-plane-approaching-overhead-assume-speed-550-miles-per-hour-angle-elevation-plane-1-q3087990","timestamp":"2014-04-20T09:24:55Z","content_type":null,"content_length":"20563","record_id":"<urn:uuid:e2612412-aae6-4540-86b3-ae64f6e7156c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Engineering Mechanics Solution Manual By Ferdinand Singer Engineering Mechanics Solution Manual By Ferdinand Singer PDF Sponsored High Speed Downloads A.Forouzan Solution Manual −Control Systems Engineering, 4th Edition by Nise Solution Manual −Principles of Electronic Materials and Devices By Safa O. Kasap −Strength of Materials 4th Ed. by Ferdinand L. Singer Andrew PyteL Solution Manual by SolutionManualGroup ... −An Introduction to Numerical Analysis by Endre Suli Solution Manual −Engineering Mechanics Dynamics 11th Edition by ... Manual By Ferdinand Beer, Jr.,E. Russell Johnston, Elliot ... 27−Strength of Materials 4th Ed. by Ferdinand L. Singer Andrew PyteL Solution Manual ... 510−An Introduction to Economic Dynamics Solution Manual 511−Engineering Mechanics Dynamics 3rd ed. by Hibbeler R.C. updated ... Manual By Ferdinand Beer, Jr.,E. Russell Johnston, Elliot Eisenberg, William Instructor’s and Solutions Manual to Accompany Vector Mechanics for Engineers - Dynamics ... of the solutions contained in this manual. Ferdinand P. Beer E. Russell Johnston, Jr. William E. Clausen. ... and 13 the one best suited for the solution of a given problem. 94 Engineering Mechanics Statics International Edition NEW MECHANICS FOR ENGINEERS, STATICS Fifth Edition by Ferdinand P. Beer (deceased), and E. Russell 27−Strength of Materials 4th Ed. by Ferdinand L. Singer Andrew PyteL Solution Manual ... 214−Unit operations of chemical engineering by McCabe Solution Manual 215−Engineering Mechanics−Dynamics SI Version Volume 2 5th ... Manual By Ferdinand Beer, Jr.,E. Russell Johnston, Elliot ... 27−Strength of Materials 4th Ed. by Ferdinand L. Singer Andrew PyteL Solution Manual ... 214−Unit operations of chemical engineering by McCabe Solution Manual 215−Engineering Mechanics−Dynamics SI Version Volume 2 5th ... Manual By Ferdinand Beer, Jr.,E. Russell Johnston, Elliot ... › Engineering › Civil & Environmental ... 2013 · Strength of materials, 4th edition [solutions manual] singer, pytel 2 Document Transcript. ... books.google.com › Science › Mechanics › Statics Rating: 4/5 · 2 reviews... Solution (4th) Edition or (3th) ... Solution Manual, Instructor Manual, Te… Hébergé par OverBlog anatom ... manual of engineering mechanics statics and dynamics by dynamics ferdinand singer third edition answers for manual. The menu bar contains ‘Theory’ , ‘ Problem ... Section V: Applied Mechanics 168 Problem solution This command is linked with the general program of the ... Ferdinand L. singer “ strength of materials” fourth 2 Mechanics of Materials by Ferdinand P. Beer, E. Russell John StonJrMc. Graw Hill Refrence : ... 1 CPHEOO manual, New Delh, ... Environmental engineering 12. Matrix algebra, solution techniques 13. Numerical integration 14. Mechanics of Materials by Ferdinand P. Beer, ... Strength of materials by Singer Haper and Row. ENVIRONMENTAL ENGINEERING – I Duration: 3 Hr. College Assessment: 20 Marks University Assessment: 80 Marks ... 1 CPHEOO manual, New Delhi, ... 2 Mechanics of Materials by Ferdinand P. Beer, E. Russell John StonJrMc. Graw Hill Refrence : ... 1 CPHEOO manual, New Delh, ... Environmental engineering 12. Matrix algebra, solution techniques 13. Numerical integration 14. Table generation from IS: ...
{"url":"http://ebookily.org/pdf/engineering-mechanics-solution-manual-by-ferdinand-singer","timestamp":"2014-04-23T10:23:19Z","content_type":null,"content_length":"23545","record_id":"<urn:uuid:7dcae4f5-9598-43d2-9c20-2a42e2e87cdb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Are there existing resources on modular-esque recurrence relations?
Does anyone know where I would be able to get information on analyzing a class of polynomial recurrence relations of a form like this?
$\begin{align*} f_{n,k}(x) & =a(x)f_{n-1,k}(x)+b(x)f_{n-2,k}(x) & n\equiv1\,\mbox{(mod k)}\\ & =f_{n-1,k}(x)+a(x)b(x)f_{n-2,k}(x) & n\equiv2\,\mbox{(mod k)}\\ & =f_{n-1,k}(x)+b(x)f_{n-2,k}(x) & \mbox{otherwise}\end{align*}$
My preliminary work suggests that for all $k$, it should collapse down into a single second-order recurrence relation:
$\begin{align*} f_{n,k}(x) & =a_k(x)f_{n-k,k}(x)+b_k(x)f_{n-2k,k}(x)\end{align*}$
with the original recurrence divided into $k$ separate solutions, each defined by initial conditions $f_{j,k}(x)$ and $f_{j+k,k}(x)$, $0\leq j < k$, where the functions $a_k(x)$ and $b_k(x)$ are themselves defined by a specific pair of recurrence relations in $k$. But the modular structure's giving me trouble in proving either the collapse or, given the collapse, the validity of the recurrences potentially defining $a_k(x)$ and $b_k(x)$ through any sort of inductive approach, and I haven't had any luck digging up any references to a recurrence structure of even a vaguely similar form. Even just something similar in nature could give me a lead into pinning this down.
co.combinatorics discrete-mathematics difference-equations polynomials
I have no idea, but at least I can verify a tiny bit. If $u_n$ is given by $au_{n-1}+bu_{n-2}$, $u_{n-1}+abu_{n-2}$, or $u_{n-1}+bu_{n-2}$ according as $n$ is 1, 2, or 0 modulo 3, then (at least for $n\equiv1\pmod3$) we get $u_n=(2ab+a+b)u_{n-3}+ab^3u_{n-6}$, which agrees with your suggested form. But perhaps this was already found in your "preliminary work". – Gerry Myerson Dec 6 '11 at
It was, yes, though I appreciate it anyway; so far I've verified it by hand for $k$ from 2 to 5. – Justin Hilyard Dec 7 '11 at 4:58
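Gerry Myerson's $k=3$ identity from the comments can also be checked symbolically in a few lines; a SymPy sketch (the symbols $u_0, u_1$ stand for arbitrary initial data, and only the residue class $n\equiv1\pmod 3$ is tested):

import sympy as sp

a, b, u0, u1 = sp.symbols('a b u0 u1')
k = 3

# Build the first terms of the three-case recurrence symbolically.
u = [u0, u1]
for n in range(2, 17):
    if n % k == 1:
        u.append(sp.expand(a*u[n-1] + b*u[n-2]))
    elif n % k == 2:
        u.append(sp.expand(u[n-1] + a*b*u[n-2]))
    else:
        u.append(sp.expand(u[n-1] + b*u[n-2]))

A = 2*a*b + a + b   # conjectured a_3
B = a*b**3          # conjectured b_3
for n in (7, 10, 13):
    print(n, sp.simplify(u[n] - (A*u[n-k] + B*u[n-2*k])))   # each difference should print 0

The same check, with the appropriate coefficients, is a cheap way to test candidate $a_k(x)$ and $b_k(x)$ for other values of $k$ before attempting a general proof.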
{"url":"http://mathoverflow.net/questions/82793/are-there-existing-resources-on-modular-esque-recurrence-relations","timestamp":"2014-04-16T22:40:00Z","content_type":null,"content_length":"49055","record_id":"<urn:uuid:dc68e54c-7e7a-4425-a7ef-335021854293>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth
Replies: 2 Last Post: Mar 31, 2013 3:36 PM
Re: An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth
Posted: Mar 31, 2013 3:36 PM
On Saturday, March 30, 2013 12:56:42 AM UTC+3, Frederick Williams wrote:
> Sam Wormley wrote:
> > An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth
> > http://www.scientificamerican.com/article.cfm?id=an-arbitrary-number-of-years-since-mathematicians-birth
> "In fact, a more fitting celebration for Erdős might be December 26, 2039, or 1,521 months after his birth: 1,521 is the total number of papers he collaborated on, which is more than any other mathematician in history."
Not only that: good ol' Paul is probably the only mathematician in history who published quite a number of papers well after he died, mostly because many mathematicians just want to give a push to their Erdös number, and it was, apparently, enough to have had a short interchange of words with him in order to associate him with some of one's own papers in the future. I, thus, have material for several joint papers with Erdös for the next few years... ;)
Date | Subject | Author
3/29/13 | An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth | Sam Wormley
3/29/13 | Re: An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth | Frederick Williams
3/31/13 | Re: An Arbitrary Number of Years Since Mathematician Paul Erdős's Birth | J. Antonio Perez M.
{"url":"http://mathforum.org/kb/message.jspa?messageID=8795327","timestamp":"2014-04-17T05:36:23Z","content_type":null,"content_length":"19830","record_id":"<urn:uuid:db64f5b5-b42d-4c67-b54d-cd539cb58453>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving that programs eventually do something good Results 1 - 10 of 29 - ACM SYMPOSIUM ON OPERATING SYSTEMS PRINCIPLES , 2009 "... Complete formal verification is the only known way to guarantee that a system is free of programming errors. We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of ..." Cited by 148 (33 self) Add to MetaCart Complete formal verification is the only known way to guarantee that a system is free of programming errors. We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation. seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels. - In POPL’2007: Principles of Programming Languages , 2007 "... An invariance assertion for a program location ℓ is a statement that always holds at ℓ during execution of the program. Program invariance analyses infer invariance assertions that can be useful when trying to prove safety properties. We use the term variance assertion to mean a statement that holds ..." Cited by 38 (11 self) Add to MetaCart An invariance assertion for a program location ℓ is a statement that always holds at ℓ during execution of the program. Program invariance analyses infer invariance assertions that can be useful when trying to prove safety properties. We use the term variance assertion to mean a statement that holds between any state at ℓ and any previous state that was also at ℓ. This paper is concerned with the development of analyses for variance assertions and their application to proving termination and liveness properties. We describe a method of constructing program variance analyses from invariance analyses. If we change the underlying invariance analysis, we get a different variance analysis. We describe several applications of the method, including variance analyses using linear arithmetic and shape analysis. Using experimental results we demonstrate that these variance analyses give rise to a new breed of termination provers which are competitive with and sometimes better than today’s state-of-the-art termination provers. , 2008 "... This note describes a separation-logic-based approach for the specification and verification of safety properties of pointer-manipulating imperative programs. The programmer may declare inductive datatypes and primitive recursive functions for specification. Verification proceeds by symbolic executi ..." 
Cited by 34 (5 self) Add to MetaCart This note describes a separation-logic-based approach for the specification and verification of safety properties of pointer-manipulating imperative programs. The programmer may declare inductive datatypes and primitive recursive functions for specification. Verification proceeds by symbolic execution using an abstract representation of memory as a separation logic assertion. Folding or unfolding abstract predicate assertions is performed through explicit ghost statements. Lemma functions enable inductive proofs of memory representation equivalences and facts about the primitive recursive functions. An SMT solver is used to solve queries over data values; an algorithm is described that prevents non-termination of the SMT solver while enabling reduction of any ground term. Since no significant search is performed by either the verifier or the SMT solver, verification time is predictable and low. , 2009 "... We propose a new verification method for temporal properties of higher-order functional programs, which takes advantage of Ong’s recent result on the decidability of the model-checking problem for higher-order recursion schemes (HORS’s). A program is transformed to an HORS that generates a tree repr ..." Cited by 32 (7 self) Add to MetaCart We propose a new verification method for temporal properties of higher-order functional programs, which takes advantage of Ong’s recent result on the decidability of the model-checking problem for higher-order recursion schemes (HORS’s). A program is transformed to an HORS that generates a tree representing all the possible event sequences of the program, and then the HORS is modelchecked. Unlike most of the previous methods for verification of higher-order programs, our verification method is sound and complete. Moreover, this new verification framework allows a smooth integration of abstract model checking techniques into verification of higher-order programs. We also present a type-based verification algorithm for HORS’s. The algorithm can deal with only a fragment of the properties expressed by modal μ-calculus, but the algorithm and its correctness proof are (arguably) much simpler than those of Ong’s game-semantics-based algorithm. Moreover, while the HORS model checking problem is n-EXPTIME in general, our algorithm is linear in the size of HORS, under the assumption that the sizes of types and specifications are bounded by a constant. "... A concurrent data-structure implementation is considered nonblocking if it meets one of three following liveness criteria: waitfreedom, lock-freedom,orobstruction-freedom. Developers of nonblocking algorithms aim to meet these criteria. However, to date their proofs for non-trivial algorithms have b ..." Cited by 16 (6 self) Add to MetaCart A concurrent data-structure implementation is considered nonblocking if it meets one of three following liveness criteria: waitfreedom, lock-freedom,orobstruction-freedom. Developers of nonblocking algorithms aim to meet these criteria. However, to date their proofs for non-trivial algorithms have been only manual pencil-and-paper semi-formal proofs. This paper proposes the first fully automatic tool that allows developers to ensure that their algorithms are indeed non-blocking. Our tool uses rely-guarantee reasoning while overcoming the technical challenge of sound reasoning in the presence of interdependent liveness properties. "... Abstract. 
We describe a method for synthesizing reasonable underapproximations to weakest preconditions for termination—a long-standing open problem. The paper provides experimental evidence to demonstrate the usefulness of the new procedure. 1 ..." Cited by 11 (3 self) Add to MetaCart Abstract. We describe a method for synthesizing reasonable underapproximations to weakest preconditions for termination—a long-standing open problem. The paper provides experimental evidence to demonstrate the usefulness of the new procedure. 1 - In CAV , 2007 "... Our goal in this book is to build sofware tools that automatically search for proofs of program termination in mathematical logic. However, before delving directly into strategies for automation, we must first introduce some notation and establish a basic foundation in the areas of program semantics ..." Cited by 11 (2 self) Add to MetaCart Our goal in this book is to build sofware tools that automatically search for proofs of program termination in mathematical logic. However, before delving directly into strategies for automation, we must first introduce some notation and establish a basic foundation in the areas of program semantics, logic and set theory. We must also discuss how programs can be proved terminating using manual techniques. The concepts and notation introduced in this chapter will be used throughout the remainder of the book. 1.1 Program termination and well-founded relations For the purpose of this book it is convenient to think of the text of a computer program as representing a relation that specifies the possible transitions that the program can make between configurations during execution. We call this the program’s transition relation. Program executions can be thought of as traversals starting from a starting configuration and then moving from configuration to configuration as allowed by the transition relation. A program is called terminating if all the executions allowed by the transition relation are finite. We call a program non-terminating if the transition relation allows for at least one infinite execution. Treating programs as relations is conveinant for our purpose, as in this setting proving program termination is equivliant to proving the program’s transition relation well-founded — thus giving us access to the numerous well established techniques from mathematical logic used to establish well-foundedness. In the next few sections we define some notation, discuss our representation for program configurations, and give some basic results related 3 4 "... We describe a new algorithm for proving temporal properties expressed in LTL of infinite-state programs. Our approach takes advantage of the fact that LTL properties can often be proved more efficiently using techniques usually associated with the branchingtime logic CTL than they can with native LT ..." Cited by 7 (7 self) Add to MetaCart We describe a new algorithm for proving temporal properties expressed in LTL of infinite-state programs. Our approach takes advantage of the fact that LTL properties can often be proved more efficiently using techniques usually associated with the branchingtime logic CTL than they can with native LTL tools. The caveat is that, in certain instances, nondeterminism in the system’s transition relation can cause CTL methods to report counterexamples that are spurious with respect to the original LTL formula. 
To address this problem we describe an algorithm that, as it attempts to apply CTL proof methods, finds and then removes problematic nondeterminism via an analysis on the potentially spurious counterexamples. Problematic nondeterminism is characterized using decision predicates, and removed using a partial and symbolic determinization procedure that introduces new prophecy variables to predict the future outcome of these decisions. We demonstrate—using examples taken from the PostgreSQL database server, Apache web server, and Windows OS kernel—that our method can yield enormous performance improvements in comparison to known tools, allowing us to automatically prove properties of programs where we could not prove them before. 1. "... To build new tools and programming languages that make it easier for professional software developers to create, debug, and understand code, it is helpful to better understand the questions that developers ask during coding activities. We surveyed professional software developers and asked them to l ..." Cited by 6 (3 self) Add to MetaCart To build new tools and programming languages that make it easier for professional software developers to create, debug, and understand code, it is helpful to better understand the questions that developers ask during coding activities. We surveyed professional software developers and asked them to list hard-to-answer questions that they had recently asked about code. 179 respondents reported 371 questions. We then clustered these questions into 21 categories and 94 distinct questions. The most frequently reported categories dealt with intent and rationale – what does this code do, what is it intended to do, and why was it done this way? Many questions described very specific situations – e.g., what does the code do when an error occurs, how to refactor without breaking callers, or the implications of a specific change on security. These questions revealed opportunities for both existing research tools to help developers and for developing new languages and tools that make answering these questions easier. "... Predicate abstraction has become one of the most successful methodologies for proving safety properties of programs. Recently, several abstraction methodologies have been proposed for proving liveness properties. This paper studies “ranking abstraction” where a program is augmented by a non-constrai ..." Cited by 6 (3 self) Add to MetaCart Predicate abstraction has become one of the most successful methodologies for proving safety properties of programs. Recently, several abstraction methodologies have been proposed for proving liveness properties. This paper studies “ranking abstraction” where a program is augmented by a non-constraining progress monitor based on a set of ranking functions, and further abstracted by predicate-abstraction, to allow for automatic verification of progress properties. Unlike many liveness methodologies, the augmentation does not require a complete ranking function that is expected to decrease with each helpful step. Rather, adequate user-provided inputs are component rankings from which a complete ranking function may be automatically formed. The premise of the paper is an analogy between the methods of ranking abstraction and predicate abstraction, one ingredient of which is refinement: When predicate abstraction fails, one can refine it. When ranking abstraction fails, one must determine whether the predicate abstraction, or the ranking abstraction, needs be refined. The paper presents
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3776791","timestamp":"2014-04-17T14:07:47Z","content_type":null,"content_length":"40005","record_id":"<urn:uuid:547fbb24-0411-4c19-bfe9-8f68f20cb44e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
April 21st 2009, 10:15 AM #1
A cross-country competitor runs on a bearing of N60°W for 2 km, then due north for 3 km. a) How far is he from the starting point?
No one will ever know the answer if there is NOT ENOUGH INFORMATION GIVEN!! a) Two sides and an included angle. b) All three sides. Let me know when you get serious Joker.
The included angle is 120 degrees! And then the law of cosines with the sides (2 and 3) are all ya need!
Im so stupid! But ya gotta admit, if he went any other way but due north it was over!
Last edited by VonNemo19; April 21st 2009 at 12:17 PM. Reason: add
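Putting the pieces together: the leg of 2 km and the leg of 3 km meet at an included angle of 120°, so the law of cosines gives d^2 = 2^2 + 3^2 - 2(2)(3)cos 120° = 4 + 9 + 6 = 19, and d = sqrt(19), approximately 4.36 km.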
{"url":"http://mathhelpforum.com/trigonometry/84830-trig-prob.html","timestamp":"2014-04-20T04:51:25Z","content_type":null,"content_length":"38461","record_id":"<urn:uuid:02d12413-c99a-4163-85bf-dd8013f4019c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Identifying Special Angle Pairs
In this lesson you'll learn how to identify special angle pairs. The presentation is done in the instructor's own handwriting, using video, with the help of several examples and their solutions, followed by an explanation of how to use these angle pairs to solve problems. Before proceeding to the explanations, you may want to recall the basics of interior, exterior, and alternate angles from earlier learning.
In the diagram below (shown in the video), such paired angles are given special names and identified accordingly. In this case, for example:
• two pairs of interior angles,
• two pairs of alternate angles, and
• four pairs of corresponding angles,
with the specific angle labels marked on the diagram in the video.
Two lines cut by a transversal are parallel if and only if:
• alternate interior angles are equal in measure, or
• alternate exterior angles are equal in measure, or
• corresponding angles are equal in measure.
Winpossible's online math courses and tutorials have rapidly gained popularity since their launch in 2008. Over 100,000 students have benefited from Winpossible's courses... these courses in conjunction with free unlimited homework help serve as a very effective math tutor for our students.
- All of the Winpossible math tutorials have been designed by top-notch instructors and offer a comprehensive and rigorous math review of that topic.
- We guarantee that any student who studies with Winpossible will get a firm grasp of the associated problem-solving techniques. Each course has our instructors providing step-by-step solutions to a wide variety of problems, completely demystifying the problem-solving process!
- Winpossible courses have been used by students for help with homework and by homeschoolers.
- Several teachers use Winpossible courses at schools as a supplement for in-class instruction. They also use our course structure to develop course worksheets.
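As a concrete example (the 65° value is purely illustrative): if a transversal makes a 65° angle with line l1 and the alternate interior angle it makes with line l2 also measures 65°, then l1 and l2 are parallel; in that case every corresponding pair at the two intersections is equal as well (65° and 65°). If instead the alternate interior pair measured 65° and 70°, the lines could not be parallel.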
{"url":"http://www.winpossible.com/lessons/Geometry_Identifying_Special_Angle_Pairs.html","timestamp":"2014-04-19T09:23:52Z","content_type":null,"content_length":"56627","record_id":"<urn:uuid:712e91b6-f8c8-4186-8bcf-d2fb00a08599>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
53102419 submission sfcrazy (1542989) "openSUSE teams just announced the release of openSUSE 13.1 and it has already been reviewed. There are some core points which sets openSUSE apart from populist OS Ubuntu. While Ubuntu has become more or less Canonical owned project, openSUSE is becoming more and more community driven project and looking at the recent controversies around Ubuntu and their move towards mobile platforms, openSUSE seems to be a great option for desktop users."Link to Original Source 48824649 comment Comment: Re:Another advantage (Score 1) 153 by MasterPatricko (#44337851) Attached to: Rethinking the Wetsuit I've heard humans taste bad (c.f. animals that take a bite of humans spit us out / don't take another one) and we're probably not very efficient meals compared to fatty seals or muscley fish, so I doubt there is any evolutionary advantage to sharks becoming better human predators. 48336355 comment Comment: Re:It's the promises you have to look out for (Score 1) 86 Qt is publicly developed on Gitorious (https://qt.gitorious.org/), accepting merge requests (with code review). Even if you can't get them to accept patches into the 'official' codebase, you could always just branch it and fix the bugs for yourself ... 47863737 comment Comment: Re:Debian is not just binary (Score 1) 311 by MasterPatricko (#44107041) Attached to: Are You Sure This Is the Source Code? Really, that isn't specific to debian. 47863525 comment Comment: Re:Bogus argument (Score 1) 311 by MasterPatricko (#44106917) Attached to: Are You Sure This Is the Source Code? On Linux packages (rpm, deb) are almost always signed by a distribution key, which needs root access to accept. On Windows binary signing just gives you a company name associated with the exe, which I think is regularly ignored by users ... 47245367 comment Comment: Re:Kinda cool that they found it (Score 2) 91 by MasterPatricko (#43945593) Attached to: New In-Memory Rootkit Discovered By German Hoster If you're serious about computer security you bring the analysis tools with you, from an independent known-good source, not using anything from the possibly-compromised machine. 47031695 comment Comment: Re:Same as last time (Score 4, Insightful) 559 by MasterPatricko (#43881961) Attached to: No, the Tesla Model S Doesn't Pollute More Than an SUV Small fuel efficient cars have a huge problematic bug , that has never been worked out. They're dangerous, hard to spot, slow to get out of the way As evidenced by your own statement it's the huge speeding behemoths that are actually the ones causing the accidents, even if it's those around them that suffer the consequences ... and yet you claim it's the small cars that should be removed from the road? 40175349 comment Comment: Re:What this means (Score 2) 259 A symmetry under X means the system under test is unchanged (ie the same physical laws work, your predictions are still correct) when you do X. A simple example is the symmetry under spatial translation -- if your experiment still behaves the same way if it's moved a meter to the left, it has "spatial translational symmetry". This symmetry isn't exactly true on the surface of the earth because of variations in the gravitational field etc., but on a small scale for lab experiments it's true, and in deep space it's certainly true. Another example is symmetry under spatial rotation -- your experiment doesn't care whether you face it north or east. 
By a very cool bit of maths called Noether's Theorem, you can show that for every symmetry that a system has, there is an associated conserved quantity. So systems with spatial translation symmetry will show conservation of momentum. Systems with time translation symmetry exhibit conservation of energy -- within that system, you can't create or destroy energy. Rotational symmetry results in conservation of angular momentum. Much of modern physics is built around identifying the symmetries that the universe (or parts of the universe) obeys, the associated conserved quantities, and what happens when those symmetries are broken -- for example the maths leading to the Higgs boson. Currently we believe the universe overall obeys C(harge) P(arity) T(ime) symmetry, that is if you change matter for antimatter, flip everything spatially (as in a mirror), and reverse the direction of time, everything would be the same. This recent experiment shows that time symmetry by itself is not obeyed -- if you only reverse the direction of time, this particular particle collision is not the same. 40174725 comment Comment: Re:Quick question then (Score 1) 259 Photons moving through a medium are "slowed down" by interactions of the electromagnetic field with the atoms of the medium. Remember that a photon is just localised electromagnetic energy. In a medium, the electromagnetic fields behave differently than in a vacuum, because of the all the atoms with their various charged bits (protons, electrons) -- there is a different "resistance" to changing the field strength because the field has to move the atoms as well. This resistance to changing the field strength is what determines the speed of the electromagnetic wave. In mathematical terms we say photons (electromagnetic waves) travel at speed c/n, where n is the refractive index of the material, and n is sqrt(epsilon * mu), where epsilon and mu are the relative permittivity and permeability (to electromagnetic fields) of the medium. A simpler, but wrong, model you might hear is that the photons are being absorbed and reemitted many times as it passes through the medium, all while travelling at c between the atoms, but that can't be really true because otherwise light would be highly directionally spread out after exiting any high refractive index material, but we can see straight through glass and water. 33580811 comment Comment: Caring what Farhad Manjoo thinks a Dumb Idea (Score 5, Insightful) 128 by MasterPatricko (#40200553) Attached to: Facebook Smartphone a Dumb Idea, Says Farhad Manjoo Says random Slashdot poster 33527869 comment Comment: Re:Cross platform via wine (Score 5, Interesting) 145 by MasterPatricko (#40180831) Attached to: Humble Indie Bundle V Released disappointing, but they have an excuse, don't know how valid it really is: from the FAQ: Q: Why is Limbo for Linux a wrapper? A: Unfortunately the audio for Limbo is middle-ware which could not be properly ported. 32137961 comment Comment: Re:That's not reporting (Score 1) 17 Nah, just post the story simultaneously to different sites and have them reference each other 31911465 comment Comment: The real reason the judge was annoyed (Score 4, Interesting) 227 Instead, Plaintiff merely submitted 252 raw pages of documents obtained through discovery without so much as a summary of the information contained in those documents or an explanation to the Court how any of the line items contained therein directly relate to Kumar’s UMaple activities. 
Seems to me that's the real reason the judge wasn't feeling like awarding any more damages, not some kind of protest against the DMCA or statutory damages. 29312583 comment Comment: Re:Strange chemicals lyin' in ponds (Score 1) 97 by MasterPatricko (#39046561) Attached to: Did Life Emerge In Ponds Rather Than Ocean Vents? Listen, strange chemicals lyin' in ponds distributin' ions is no basis for a system of life. Supreme biological diversity derives from a mandate from the creator, not from some farcical aquatic Oh! Come and see the violence inherent in the system! Help! Help! I'm being repressed!
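Returning to the refractive-index comment earlier on this page (light travels at v = c/n in a medium): the arithmetic is easy to check. A minimal sketch; the index values below are rough textbook numbers, not taken from the original post, and the class name is ours.

// Speed of light in a medium, v = c / n, as described in the comment above.
public class LightSpeed {
    public static void main(String[] args) {
        double c = 2.998e8;                      // m/s in vacuum
        double[] n = {1.0003, 1.33, 1.5};        // air, water, window glass (approximate)
        String[] name = {"air", "water", "glass"};
        for (int i = 0; i < n.length; i++) {
            System.out.println(name[i] + ": " + (c / n[i]) + " m/s");
        }
    }
}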
{"url":"http://slashdot.org/~MasterPatricko","timestamp":"2014-04-18T04:02:33Z","content_type":null,"content_length":"93909","record_id":"<urn:uuid:8649c7e9-9acc-4aa3-a52e-d87067383a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Report in Wirtschaftsmathematik (WIMA Report) 15 search hits An online approach to detecting changes in nonlinear autoregressive models (2011) Claudia Kirch Joseph Tadjuidje Kamgaing In this paper we develop monitoring schemes for detecting structural changes in nonlinear autoregressive models. We approximate the regression function by a single layer feedforward neural network. We show that CUSUM-type tests based on cumulative sums of estimated residuals, that have been intensively studied for linear regression in both an offline as well as online setting, can be extended to this model. The proposed monitoring schemes reject (asymptotically) the null hypothesis only with a given probability but will detect a large class of alternatives with probability one. In order to construct these sequential size tests the limit distribution under the null hypothesis is obtained. Dynamic Multi-Period Routing With Two Classes (2010) Sven O. Krumke Christiane Zeck In the Dynamic Multi-Period Routing Problem, one is given a new set of requests at the beginning of each time period. The aim is to assign requests to dates such that all requests are fulfilled by their deadline and such that the total cost for fulling the requests is minimized. We consider a generalization of the problem which allows two classes of requests: The 1st class requests can only be fulfilled by the 1st class server, whereas the 2nd class requests can be fulfilled by either the 1st or 2nd class server. For each tour, the 1st class server incurs a cost that is alpha times the cost of the 2nd class server, and in each period, only one server can be used. At the beginning of each period, the new requests need to be assigned to service dates. The aim is to make these assignments such that the sum of the costs for all tours over the planning horizon is minimized. We study the problem with requests located on the nonnegative real line and prove that there cannot be a deterministic online algorithm with a competitive ratio better than alpha. However, if we require the difference between release and deadline date to be equal for all requests, we can show that there is a min{2*alpha, 2 + 2/alpha}-competitive algorithm. Earliest Arrival Flows in Series-Parallel Graphs (2009) Stefan Ruzika Heike Sperber Mechthild Steiner We present an exact algorithm for computing an earliest arrival flow in a discrete time setting on series-parallel graphs. In contrast to previous results for the earliest arrival flow problem this algorithm runs in polynomial time. Efficient Computation of Equilibria in Bottleneck Games via Game Transformation (2011) Thomas L. Werth Heike Sperber Sven O. Krumke Generalized Max Flow in Series-Parallel Graphs (2010) Katharina Beygang Sven O. Krumke Christiane Zeck In the generalized max flow problem, the aim is to find a maximum flow in a generalized network, i.e., a network with multipliers on the arcs that specify which portion of the flow entering an arc at its tail node reaches its head node. We consider this problem for the class of series-parallel graphs. First, we study the continuous case of the problem and prove that it can be solved using a greedy approach. Based on this result, we present a combinatorial algorithm that runs in O(m*m) time and a dynamic programming algorithm with running time O(m*log(m)) that only computes the maximum flow value but not the flow itself. For the integral version of the problem, which is known to be NP-complete, we present a pseudo-polynomial algorithm. 
How to find Nash equilibria with extreme total latency in network congestion games? (2008) Heike Sperber We study the complexity of finding extreme pure Nash equilibria in symmetric network congestion games and analyse how it depends on the graph topology and the number of users. In our context best and worst equilibria are those with minimum respectively maximum total latency. We establish that both problems can be solved by a Greedy algorithm with a suitable tie breaking rule on parallel links. On series-parallel graphs finding a worst Nash equilibrium is NP-hard for two or more users while finding a best one is solvable in polynomial time for two users and NP-hard for three or more. Additionally we establish NP-hardness in the strong sense for the problem of finding a worst Nash equilibrium on a general acyclic graph. Locating stops along bus or railway lines - a bicriterial problem (2003) Anita Schöbel In this paper we consider the location of stops along the edges of an already existing public transportation network, as introduced in [SHLW02]. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The goal is to achieve a maximal covering of given demand points with a minimal number of stops. This bicriterial problem is in general NP-hard. We present a finite dominating set yielding an IP-formulation as a bicriterial set covering problem. We use this formulation to observe that along one single straight line the bicriterial stop location problem can be solved in polynomial time and present an efficient solution approach for this case. It can be used as the basis of an algorithm tackling real-world instances. Min-Max Quickest Path Problems (2010) Stefan Ruzika Markus Thiemann In a dynamic network, the quickest path problem asks for a path such that a given amount of flow can be sent from source to sink via this path in minimal time. In practical settings, for example in evacuation or transportation planning, the problem parameters might not be known exactly a-priori. It is therefore of interest to consider robust versions of these problems in which travel times and/or capacities of arcs depend on a certain scenario. In this article, min-max versions of robust quickest path problems are investigated and, depending on their complexity status, exact algorithms or fully polynomial-time approximation schemes are proposed. Minimum Cut Tree Games (2008) Anne M. Schwahn In this paper we introduce a cooperative game based on the minimum cut tree problem which is also known as multi-terminal maximum flow problem. Minimum cut tree games are shown to be totally balanced and a solution in their core can be obtained in polynomial time. This special core allocation is closely related to the solution of the original graph theoretical problem. We give an example showing that the game is not supermodular in general, however, it is for special cases and for some of those we give an explicit formula for the calculation of the Shapley value. Online Delay Management (2010) Sven O. Krumke Clemens Thielen Christiane Zeck We present extensions to the Online Delay Management Problem on a Single Train Line. While a train travels along the line, it learns at each station how many of the passengers wanting to board the train have a delay of delta. If the train does not wait for them, they get delayed even more since they have to wait for the next train. Otherwise, the train waits and those passengers who were on time are delayed by delta.
The problem consists in deciding when to wait in order to minimize the total delay of all passengers on the train line. We provide an improved lower bound on the competitive ratio of any deterministic online algorithm solving the problem using game tree evaluation. For the extension of the original model to two possible passenger delays delta_1 and delta_2, we present a 3-competitive deterministic online algorithm. Moreover, we study an objective function modeling the refund system of the German national railway company, which pays passengers with a delay of at least Delta a part of their ticket price back. In this setting, the aim is to maximize the profit. We show that there cannot be a deterministic competitive online algorithm for this problem and present a 2-competitive randomized algorithm.
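The CUSUM-type monitoring idea in the first abstract of this listing can be illustrated in a few lines. This is only the generic cumulative-sum-of-residuals flavor with a made-up threshold, not the actual statistic, boundary function, or neural-network residuals analyzed in that paper.

// Generic CUSUM-style change monitor: signal when the cumulative sum of
// one-step residuals drifts past a threshold. Illustrative values only.
public class CusumSketch {
    public static void main(String[] args) {
        double[] residuals = {0.1, -0.2, 0.0, 0.3, 0.8, 0.9, 1.1, 1.0};
        double threshold = 2.5;   // made-up critical value
        double sum = 0.0;
        for (int t = 0; t < residuals.length; t++) {
            sum += residuals[t];
            if (Math.abs(sum) > threshold) {
                System.out.println("change signalled at observation " + t);
                break;
            }
        }
    }
}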
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16168/start/0/rows/10/doctypefq/report/sortfield/title/sortorder/asc","timestamp":"2014-04-20T14:23:10Z","content_type":null,"content_length":"46563","record_id":"<urn:uuid:680e38b9-3a1a-4fc1-8373-03a60d4fe399>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Capitola ACT Tutor ...To learn, students must feel comfortable, interested, and challenged. When students are too challenged they feel overwhelmed and discouraged. When students are not interested or challenged they get bored. 22 Subjects: including ACT Math, reading, English, physics ...My organic chemistry course was taught by an instructor that is well known for his skill in the subject, students attending other universities often choose to come to Cabrillo to take the course. I love chemistry almost as much as I do biology and as my students could tell you I'm very passionat... 31 Subjects: including ACT Math, chemistry, reading, calculus ...I have journalistic expertise in Cultural Anthropology and have studied under the world renown Dr. Napoleon Chagnon (the most cited living anthropologist in the world) while at UCSB. I have also taught Introduction to Anthropology at the high school level. 75 Subjects: including ACT Math, English, Spanish, reading ...My experience in education includes >1000 hours of private tutoring over the course of 3 years in addition to classroom level student teaching at Mountain View High School and Del Mar High School. As a private tutor, I've had the privilege of guiding a wide variety of students through every l... 14 Subjects: including ACT Math, chemistry, calculus, physics ...I'm proficient with math up to Calculus 11A and can tutor in PSAT/SAT and ACT prep as well as AP Chemistry, Biology, U.S History and English. I've had roughly 4 years of tutoring experience, mostly in high school and college classes. I'm an easy going tutor that likes to take time explaining concepts as well as workarounds and shortcuts. 18 Subjects: including ACT Math, chemistry, English, biology
{"url":"http://www.purplemath.com/Capitola_ACT_tutors.php","timestamp":"2014-04-16T19:14:22Z","content_type":null,"content_length":"23494","record_id":"<urn:uuid:d7b6de8e-4cff-4d6b-8373-828456faa5ad>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling and simulation of transient gas flow
Yaacob, Z 1996, Modelling and simulation of transient gas flow, PhD thesis, Salford: University of Salford.
One of the objectives of this research was to develop mathematical models for transient flow in a gas transmission system. Models were developed from the continuity and the momentum equations. Different approximations in the equations result in the formulation of two different sets of partial differential equations: the hyperbolic and the parabolic models. The partial differential equations were numerically solved by an implicit finite difference technique. A four-point scheme was used to approximate the pressures, flows and their partial derivatives. With this scheme, second-order accuracy in both the spatial and time variables was achieved for both the hyperbolic and parabolic models. The nonlinear equation sets arising from the finite difference discretization are solved using a Newton-Raphson iterative procedure, which leads to a sparse Jacobian matrix. This large matrix was then compacted algebraically to reduce the time of the numerical solution. Although an implicit finite difference approximation was used in simulating the models, the importance of the correct choice of the magnitude of the temporal and spatial steps should not be overlooked, particularly for high-frequency disturbances: 'aliasing' problems will occur if temporal or spatial steps that are too large relative to the frequency of the disturbance are used. Comparisons between the responses of the hyperbolic and the parabolic models with different input boundary conditions were performed. As a result, recommendations are made on how the different models should be used in simulating real pipeline systems.
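The thesis's own discretisation is not reproduced in the abstract, but the Newton-Raphson step it mentions has the usual form. A minimal one-variable sketch; the function, tolerance and class name below are placeholders, not the gas-flow equations.

// Generic Newton-Raphson iteration of the kind named in the abstract above.
// f and fPrime are placeholder functions, not the pipeline model.
public class NewtonSketch {
    static double f(double x)      { return x * x - 2.0; }   // example: root of x^2 - 2
    static double fPrime(double x) { return 2.0 * x; }

    public static void main(String[] args) {
        double x = 1.0;                        // initial guess
        for (int i = 0; i < 20; i++) {
            double step = f(x) / fPrime(x);    // Newton update
            x -= step;
            if (Math.abs(step) < 1e-12) break; // converged
        }
        System.out.println("root ~ " + x);     // ~1.4142
    }
}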
{"url":"http://usir.salford.ac.uk/26975/","timestamp":"2014-04-19T09:26:20Z","content_type":null,"content_length":"24110","record_id":"<urn:uuid:72ee1916-8927-4b04-9aa4-1b45e27ef9fe>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
The springs of a 1200 kg car sag 12 cm when a 90 kg person gets in the car. Determine the spring constant and the total energy stored in the spring.
Using Hooke's Law F = -kx, the spring constant can be determined. When a person with a mass of 90 kg gets into the car, the springs sag by 12 cm. The force exerted by the 90 kg mass is 90*9.8 = 882 N, so
882 = k*x = k*(0.12)  =>  k = 882/0.12 = 7350 N/m
The energy stored in the springs when the 90 kg person enters the car is
(1/2)*k*x^2 = (1/2)*7350*(0.12)^2 = 52.92 J
It should be noted that the potential energy in the spring is not the same as the loss in gravitational potential energy when a mass of 90 kg drops by 12 cm, because the spring exerts a force in the opposite direction as the weight is applied.
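The same numbers can be checked in a few lines of code. This only re-does the arithmetic in the answer above, with g = 9.8 m/s^2 as used there; the class name is ours.

// Re-checks the spring-constant and stored-energy arithmetic from the answer above.
public class SpringCheck {
    public static void main(String[] args) {
        double m = 90.0, g = 9.8, x = 0.12;    // mass (kg), gravity (m/s^2), sag (m)
        double force = m * g;                  // 882 N
        double k = force / x;                  // 7350 N/m
        double energy = 0.5 * k * x * x;       // 52.92 J
        System.out.println("k = " + k + " N/m, E = " + energy + " J");
    }
}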
{"url":"http://www.enotes.com/homework-help/springs-1200-kg-car-sag-12-cm-when-90-kg-person-345619","timestamp":"2014-04-20T00:07:15Z","content_type":null,"content_length":"27192","record_id":"<urn:uuid:ff285377-5b5f-457d-9215-b3006218f3d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
“Single speed gear ratios, what do you find works best” 12-23-2009 #1 Virtus pre nummis Join Date Apr 2009 “Single speed gear ratios, what do you find works best” I have asked this question several times and everyone says by extra cogs and test them out to see which is best. So I started looking around and found this website Thread title “Single speed gear ratios, what do you find works best” I copied the following posts and was wondering what all you guys think? Keep in mind I am a novice and just barely ride my bike. Started riding again at age 62 after not riding since I was 16, so take it easy on me fellas; use this equation when calculating ratios it really helps... divide your front chain ring (teeth) by the rear cog (teeth) then multiply by your wheel size. for example mine is 36/16x26 = 58.5 a 55 is perfect for me... just some equation some brits showed me to calculate it all into a single number... aside from personal preference take the size of your sprocket multiply it by your wheel size then divide that by the size of your rear freewheel and you get your gear ratio. Ideal ratio is 55, not sure if this is totally true but I saw it on a bmx website a while back. Ex. 30t sprocket X 26" ÷ 15t freewheel = 52 YES. USE THE FORMULA, YALL! IT'S EASY. The single number is called "gear inches." It means how far your bike rolls with one exact 360 degree rotation of your cranks. So it is essential to factor in wheel size when picking your gears. The reason 55 gear inches is considered classic is that 44/16 was the classic gear ratio on BMX bikes for like 20 years (which comes out to 55 gear inches).... On race tracks you need it to be spinny enough to get in front of everyone when the gate drops, but still powerful enough to stay fast throughout the track to win the race. People would sometimes do 43/16 or 45/16 but that was pretty much it. With 26" wheels, 34-16, 32-15, 30-14, 28-13, 26-12 are all optimal all-around single speed gear ratios. Note that it just happens to be double the rear cog plus 2. If your looking for suggestions, you may want to include the type of riding you are doing. I know from reading one of your other posts that you only ride on the road. This obviously makes a huge If your looking for suggestions, you may want to include the type of riding you are doing. I know from reading one of your other posts that you only ride on the road. This obviously makes a huge You are correct David. So I did the math and the results were 32tX29"/16t= 58 so is the higher the number the harder the ride or vise versa? I like the ride that I have now just wanted to get more speed and less huffandpuff against the wind. The wind I find is the biggest problem with riding on flat roads. Small inclines are to be expected. Wow this gear ratio stuff can get real complicated real fast! Length of cranks makes a difference. I don't even know the length of my crank arm. Its not on the spec sheet of the bicycle, anybody know? 09 Monocog 29er stock? Your Monocog probably has 175mm cranks. Look on the backside of one of the the cranks...it shoud say it on there somewhere. Last edited by A1an; 12-23-2009 at 05:23 PM. You are correct David. So I did the math and the results were 32tX29"/16t= 58 so is the higher the number the harder the ride or vise versa? I like the ride that I have now just wanted to get more speed and less huffandpuff against the wind. The wind I find is the biggest problem with riding on flat roads. Small inclines are to be expected. 
Well, you're not going to get both speed and easy pedaling. Not with a singlespeed. Don't worry overmuch about gear inches and crank arms and that stuff. Just look at your current gearing: it's 32 x 16. To make pedaling easier (but slower), increase the rear cog in size (for example, to 18t). To make pedaling harder (but faster), decrease the rear cog (to 15t say) I'm suggesting changing the rear cog because it's easier and cheaper than changing the front, but you could change the front instead. It's the opposite direction, though: a bigger chainring = harder/faster, a smaller chainring = easier/slower. Hey thanks a million for all the good info especially that explaination SeatBoy that was what I really wanted to know! Everyone here has really helped me a great deal. BTW for those that are interested I went to another forum called Two Spokes and tried to ask questions and a person posting by Industry Hack jumped my case and said because my 29er had Big Apples that I wasn't on the same page as mountian bikers. I agree with that but that was not my question, at any rate they started deleting my posts and questions. So a word to the wise stay away from that forum if you don't want to be treated like crap. I sure am glad I came back here where everyone here has went out of their way to be helpful!! Your Monocog probably has 175mm cranks. Look on the backside of one of the the cranks...it shoud say it on there somewhere. You were right! They are 175mm! Thanks for the help. road? 44x16 If you have a geared bike, you have 27 gear ratios you can try. Pick the one you like... Sit and spin my ass... He is riding an '09 Monocog 29 if I read correctly, I have a Raleigh SS 29 and when I am smooth pavement riding my gearing is 33 front 16 rear. I find that very good for maintaing a good cruising speed with out spinning yourself out peddling. If tires are factored in with that I was running 47c Kenda hybrid tires (so I dont wear out my knobbies) Only you can really determine what is the right gear. The variables of fitness, and terrain as given are too abstract with the information provided. Gear inches isn't the rollout either; it's the theoretical diameter of the wheel with a given gear ratio. Charlie Rides a Bike Cycle Matrix Coaching Unfortunately the internet brings out the worst in some people. As was already pointed out...best thing to do is experiment with different cogs to find out what works best for you. Some guys will even go so far as to bring their cogs, chain whip, and lockring tool out on the trail with them so they can test out cogs back to back. As for myself...even tough it is easy I am still too lazy to swap my gearing around. I found a "magic ratio" that pretty much works for me on all of the trails in my area. It allows me a mid 11mph avg on the long flat/rooty crap at a reasonable cadence and is still somewhat tolerable on long steep climbs. You can usually find a multi pack of the stamped steel cogs for pretty cheap. It does not take a ton of cash to tinker around with your gearing. Good luck and have fun! 32-20 on the mountain bike and 50-16 on the fixie. Its really trial and error plus the longer you do this you refine it and as you gain single speed fitness and experience this changes things too. Here's a little personal history. I started on a 32, 19 and felt it was too tough and went to a 20 and then a 21. Even on the 21 I wasn't making climbs I wanted to so I went to a 33 upfront. 
Still wasn't making the climbs and went back to the 20 and then I started making some of the climbs some of the time. After giving it some thought I decided the problem was I needed to climb faster so my legs would not fatigue out and stall so I went to 19 still with the 33 up front. Bingo! Not only was I climbing like a mountain goat but I was nailing the climbs almost every time. Now I just went to an 18 in the back and it feels even better. If I could find a Boone 34 104mm chainring I would be all over that too. All of this took place over a year and half of riding a single speed 29er. I do think the 34/18 will be the "magic bullet" for a while. OB1 Kielbasa One is good! What works best really is dependent on your fitness and your terrain nothing else.. It depends on the type of trail, how technical, how fit you are, etc etc. I find that in Dallas I can clear every trail on a 34x20 29er, sometimes I wish I had a 34x17 or a 34x21. In single speed you are always on the wrong gear. With my set up, climbs, technical section and twisty single track are my friend, long straight aways are the enemy as I can only go up to 130rpm effectively. I just moved and find that my single speed set up is to "easy" for Mcallen, so I am going up on the ratio to a 34x18 or 17. Some people here like the 22x12 or 22x11 set up which is a mechanical aberration... Sit and spin my ass... In a word ... none. :) At least in my ~1 month of trying SS'ing out on a 29er. Rode almost exclusively on-road with the standard knobbie tires & started out with the stock 32/18 & eventually ended at 33/12 & still wasn't satisfied. But riding only road certainly make things easier as far as gearing selection goes. Still not for me. I'll stick to gears. IGH to be specific. But +1 ... What works best really is dependent on your fitness and your terrain nothing else.. Dude, you moved to Mcallen? That's where a UT is, right? No offense but you're going downhill bad. Start moving northwards. I just started SSing in Colorado and went with a 32x21 based on local input. Two of my friends in flattish Kansas City run 32x16s and 34x16s. It all depends on how strong you are but for my first SS, 32x21 is a good start- I may stick with it. I know guys who run 32x18 here but they also big-ring some pretty steep climbs on their gearies so take it easy till you really feel it is too Disclaimer: I'm about as much of a SS newbie as one gets, but I stayed at a Holiday Inn Express earlier this year. get yourself 3x consecutive chainrings and 3 - 4 x consecutive cogs and you will be all set. ie: 32, 33, 34 and 18, 19, 20, 21. that'll give you the perfect spread to play with depending on your terrain. you probably don't even need that much really. see the chart: OK here is what I came up with by using an Excel spreadsheet using the formula above. It also seems to be a relative inexpensive change to make. I currently have a 32/16 gear ratio using the above formula would give me 58". So I plugged in all sprocket sizes from 24 to 42 and I came up with a 34/18 being the closest ratio of 54.78 to the supposedly magic number of 55. Now the only problem is I'm not sure what size that chainring in on my monocog. I looked it over and could not find the size imprinted in the ring. is it 110mm, 90mm ?? Any ideas. BTW it looks like both those parts are less than $30 bucks. get yourself 3x consecutive chainrings and 3 - 4 x consecutive cogs and you will be all set. ie: 32, 33, 34 and 18, 19, 20, 21. 
that'll give you the perfect spread to play with depending on your terrain. you probably don't even need that much really. see the chart: Wow goes to show how great minds work alike! Thanks Jacques! 55" a magic number on the road? That seems awfully short. I ride 78-82" on the road and there are lots of hills here (usually I climb 2500 feet in every 60km of riding). Off road I ride and race with 51". My rides: Lynskey Ti Pro29 SL singlespeed GF Superfly 29er HT S-Works Roubaix SL3 Dura Ace Pake French 75 track 55" a magic number on the road? That seems awfully short. I ride 78-82" on the road and there are lots of hills here (usually I climb 2500 feet in every 60km of riding). Off road I ride and race with 51". Hey Serious, Those are not my numbers! If you read the thread that was someone else's numbers based on a Penny Farthing I think he said. Whatever that is? Anyway where in hell are you riding that the rise at 2500 feet in 60km? Is that a continuous uphill ride? You must have leg muscles like and elephant! Anyway I admire you for your skills. Like I said at first I am just a novice starting off after 40 years of not riding! Someday I might ride 60 km before I die! I started off with 32/18 and after a while went to 32/20. You don't need to use gear inches unless you are comparing different sized wheels. I ride in Toronto (in a suburb called Richmond Hill) and although there are no sustained climbs here (longest climbs are less than 2 km), there is not much flat riding either. I race on a singlespeed mtb a lot, so I am in better shape than the average rider. My rides: Lynskey Ti Pro29 SL singlespeed GF Superfly 29er HT S-Works Roubaix SL3 Dura Ace Pake French 75 track 12-23-2009 #2 mtbr member Join Date Jul 2008 12-23-2009 #3 Virtus pre nummis Join Date Apr 2009 12-23-2009 #4 Virtus pre nummis Join Date Apr 2009 12-23-2009 #5 mtbr member Join Date Jun 2007 12-23-2009 #6 mtbr member Join Date May 2006 12-23-2009 #7 Virtus pre nummis Join Date Apr 2009 12-23-2009 #8 Virtus pre nummis Join Date Apr 2009 12-23-2009 #9 mtbr member Join Date Jan 2004 12-23-2009 #10 mtbr member Join Date Jun 2009 12-24-2009 #11 Jam Econo Join Date May 2006 12-24-2009 #12 mtbr member Join Date Jun 2007 12-24-2009 #13 mtbr member Join Date Sep 2008 12-24-2009 #14 mtbr member Join Date Jul 2004 12-24-2009 #15 mtbr member Join Date Feb 2004 12-24-2009 #16 mtbr member Join Date Jan 2004 12-24-2009 #17 mtbr member Join Date Jun 2009 12-24-2009 #18 mtbr member Join Date Jan 2004 12-24-2009 #19 mtbr member Join Date Oct 2008 12-24-2009 #20 Virtus pre nummis Join Date Apr 2009 12-24-2009 #21 Virtus pre nummis Join Date Apr 2009 12-24-2009 #22 mtbr member Join Date Jan 2005 12-24-2009 #23 Virtus pre nummis Join Date Apr 2009 12-24-2009 #24 mtbr member Join Date Jun 2008 12-25-2009 #25 mtbr member Join Date Jan 2005
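The arithmetic the thread keeps repeating is easy to wrap in a few lines. This sketch just applies the chainring divided by cog, times wheel diameter formula quoted above; the example numbers come from the posts, and the class name is ours.

// Gear-inches helper using the formula quoted in the thread:
// gear inches = (chainring teeth / cog teeth) * wheel diameter in inches.
public class GearInches {
    static double gearInches(int chainring, int cog, double wheelDiameterInches) {
        return (double) chainring / cog * wheelDiameterInches;
    }

    public static void main(String[] args) {
        System.out.println(gearInches(32, 16, 29));   // ~58.0, the stock Monocog setup
        System.out.println(gearInches(34, 18, 29));   // ~54.8, close to the "magic" 55
        System.out.println(gearInches(44, 16, 20));   // 55.0, the classic BMX ratio
    }
}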
{"url":"http://forums.mtbr.com/29er-bikes/%93single-speed-gear-ratios-what-do-you-find-works-best%94-580814.html","timestamp":"2014-04-19T05:18:01Z","content_type":null,"content_length":"171272","record_id":"<urn:uuid:25471a20-2579-4cc1-a70c-5abab7f56b62>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the definite integral of y = x(1-x) from -2 to 2.
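One way to evaluate the integral asked about above; this is a standard antiderivative computation, not taken from the original page.

\int_{-2}^{2} x(1-x)\,dx
  = \int_{-2}^{2}\left(x - x^2\right)dx
  = \left[\frac{x^2}{2} - \frac{x^3}{3}\right]_{-2}^{2}
  = \left(2 - \frac{8}{3}\right) - \left(2 + \frac{8}{3}\right)
  = -\frac{16}{3}.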
{"url":"http://openstudy.com/updates/4ef3b002e4b0dc507db74866","timestamp":"2014-04-18T16:43:17Z","content_type":null,"content_length":"137602","record_id":"<urn:uuid:79f4bcb2-158a-4d18-afde-1034c93bfa20>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 106 Syllabus
MATH 106 CALCULUS and ANALYTIC GEOMETRY
Mon, Wed, and Fri 8:00 to 9:40 AM, Room E106, 5 UNITS
Instructor: Larry Green
Phone Number: Office: 541-4660 Extension 341
e-mail: DrLarryGreen@gmail.com
Class Grades Web Page: http://www.ltcconline.net/greenl/courses/106/106.htm
Videos of Worked Out Problems
Required Text: Calculus, Ninth Edition, by Larson, Hostetler and Edwards
Course Description: The topics covered in this course include applications of the integral, techniques of integration, exponential and logarithmic functions, hyperbolic functions, and inverse trigonometric functions.
Student Learning Outcomes
1. Employ integrals to applications from physics.
2. Apply the Fundamental Theorem of Calculus in determining indefinite integrals.
3. Compute geometric quantities using integrals.
4. Solve differential equations.
5. Determine integrals and derivatives of transcendental functions.
Prerequisite: A grade of C or better in Math 105 or equivalent.
Grading Policy: Your letter grade will be based on your percentage of possible points.
A 90 -- 100%
B 80 -- 89%
C 70 -- 79%
D 60 -- 69%
Homework: 150 points
Exam 1: Jan 30, 150 points
Exam 2: Feb 24, 150 points
Exam 3: Mar 19, 150 points
Final Exam: Mar 23, 400 points
Exam Policy: Students are to bring calculators, pencils or pens, and paper to each exam. Grading will be based on the progress towards the final answer and the demonstration of understanding of the concept that is being tested; therefore, work must be shown in detail. Any student who cannot make it to an exam may elect to take the exam up to two days before the exam is scheduled. If all homework is completed and no more than three homework assignments are scored less than or equal to 5 points, then the midterm with the lowest score will be dropped.
Homework Policy: Homework will be turned in at the end of class on the date due. If a student has additional questions, that student may see me after class in my office and then turn in the homework by 4:00 PM on the date due. Homework that is turned in within one week of the due date will be counted as half credit. Homework may be turned in later than one week after the due date, but points will not be awarded. At the beginning of each class, a 2 to 5 minute quiz will be given. Each quiz will count as 20% of the homework assignment and cannot be made up unless there is a proven medical excuse.
Daily Quizzes: For the first five minutes of each class, there will be a quiz that covers the main point from the previous lecture. Each quiz will count as 20% of the homework grade. Quizzes cannot be made up and will not be counted unless the corresponding homework is turned in on time.
Office Hours:
Monday 9:40 to 10:40, MSC
Tuesday 1:00 to 2:00, A210
Wednesday 12:00 to 1:00, A210
Thursday 12:00 to 1:00, MSC
Friday 9:40 to 10:40, A210
CALCULATORS: A TI 89 graphing calculator is required for this class. Instructions on the TI 89 Calculator
LEARNING DISABILITIES: If you have a learning disability, be sure to discuss your special needs with Larry. Learning disabilities will be accommodated.
TUTORING: Tutors are available at no cost in A 201 (The Math Success Center).
A WORD ON HONESTY: Cheating or copying will not be tolerated.
People who cheat dilute the honest effort of the rest of us. If you cheat on a quiz or exam you will receive an F for the course, not merely for the test. Other college disciplinary action including expulsion might occur. Please don t cheat in this class. If you are having difficulty with the course, please see me. Lecture will always be geared towards an explanation of the topics that will be covered on the upcoming homework assignment. Date Section Topic Exercises 1-4 Introductions 1-6 4.4 Fundamental Thms 5,10,15,20,27,34,52,73,90,102 4.5 Substitution 5,10,14,29,38,47,52,56,63,74,79 1-9 5.1 Logs and Derivatives 1,39,44,47,52,57,66,76,77,83,88,92,106,113 1-11 5.2 Logs and Integrals 1,6,11,16,21,26,31,38,47,54,58,61,69,85,90,106,107 1-13 5.3 Inverse Functions 71,74,76,81,82,97,99,100,101,102,103,104,112 1-16 Happy Birthday Martin Luther King 1-18 5.4 Exponential Functions 34,37,41,50,59,64,70,73,80,86,88,95,103,116,130 1-20 5.5 Other Bases 42,51,60,65,70,73,79,92,107,110,122,124,125,SP 1-23 5.6 Derivatives of Inverse Trig 43,46,49,52,55,59,62,64,73,76,83,93,95,97,100 1-25 5.6 Derivatives of Inverse Trig 43,46,49,52,55,59,62,64,73,76,83,93,95,97,100 5.7 Integrals of Inverse Trig 1,6,11,16,21,26,31,36,41,46,46,53,58,61,66,71,76,83 (Snow Day) 1-27 5.8 Hyperbolic Functions 1,8,15,22,29,43,57,67,71,79,82,84,91,111,SP 5.7 Integrals of Inverse Trig 1,6,11,16,21,26,31,36,41,46,53,58,61,66,71,76,83 (Note New Due Date) 1-30 Exam I 2-1 6.1 Slope Fields, Euler 1,6,11,16,21,26,45,54,65,72,76,89,92,96 2-3 6.2 Growth and Decay 1,6,11,16,21,28,37,58,63,67,69,71,74 2-6 6.3 Separation of Variables 1,6,11,16,20,25,29,34,39,45,53,60,65,72,77,79,83,92,93 2-8 7.1 Bounded Area 1,6,9,14,21,28,35,40,43,50,55,60,65,69,78,87,96,99 2-10 7.2 Volume by Discs 1,6,10,13,17,22,27,32,35,44,46,54,65,69,75 2-13 7.3 Volume by Shells 1,6,9,15,18,21,28,41,48,51,58,SP 2-15 7.4 Arc Length & Surface Area 1,6,11,16,25,27,33,36,40,45,52,55,58,66 2-17 Happy Birthday Lincoln 2-20 Happy Birthday Washington 2-22 7.5 Work 1,4,7,10,13,16,19,22,25,28,31,36,39,41,SP 2-24 Exam II 2-27 Return Exam II 2-29 Snow Day 3-2 7.6 Moments and Centroids 1,8,18,23,26,32,33,36,43,48,52,59 3-5 7.7 Fluid Pressure 1,4,7,10,13,16,19,22,23,25,26,29,36 3-7 8.1 Integration Rules 1,15,29,36,43,63,74,83 8.2 Integration By Parts 5,15,20,30,35,49,58,67,89,114 3-9 8.3 Trig Integrals 5,10,15,20,25,30,35,40,45,50,54,59,68,87,92,97,104,SP 3-12 8.4 Trig Substitution 1,4,7,10,13,18,23,28,35,41,46,49,54,63,67,70,75,82 3-14 8.5 Partial Fractions 1,4,7,10,13,16,19,22,25,28,31,34,39,42,45,48,51,60,64,67 An Algorithm For Integration 3-16 8.7 L'Hopital's Rule 1,9,18,27,36,47,54,61,84,95,101,103,104 8.8 Improper Integrals 11,30,47,55,78,85,88,93,94,106 3-19 Exam III 3-23 Comprehensive Final Exam 8:00 AM - 9:50 AM 1. Come to every class meeting. 2. Arrive early, get yourself settled, spend a few minutes looking at your notes from the previous class meeting, and have you materials ready when class starts. 3. Read each section before it is discussed in class 4. Do some math every day. 5. Start preparing for the tests at least a week in advance. 6. Spend about half of your study time working with your classmates. 7. Take advantage of tutors and office hours, extra help can make a big difference.
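The letter-grade cutoffs in the grading policy above amount to a simple percentage lookup. A minimal sketch; the 1000-point total comes from summing the point values listed in the syllabus, and the class and method names are ours.

// Maps a point total to a letter grade using the cutoffs in the grading policy above.
public class GradeCutoffs {
    static char letterGrade(double earned, double possible) {
        double pct = 100.0 * earned / possible;
        if (pct >= 90) return 'A';
        if (pct >= 80) return 'B';
        if (pct >= 70) return 'C';
        if (pct >= 60) return 'D';
        return 'F';
    }

    public static void main(String[] args) {
        // 150 homework + 3 exams at 150 each + 400 final = 1000 possible points
        System.out.println(letterGrade(845, 1000));   // B
    }
}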
{"url":"http://www.ltcconline.net/greenl/syllabi/w12/106syl.htm","timestamp":"2014-04-18T13:18:22Z","content_type":null,"content_length":"17098","record_id":"<urn:uuid:bdb03367-9cfb-40ee-8eee-b952bd3323ff>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
'BufferedReader' problem...
August 14th, 2011, 12:31 PM #1 Junior Member Join Date Aug 2011 Thanked 0 Times in 0 Posts
hi every1... i am new on this site... i need help... here is the program...

/* Program with class name 'Volume' using function overloading to compute the volume of a cube, a sphere and a cuboid. Let the user input the values and his choice */
import java.io.*;
class Volume
{
    public void main() throws IOException
    {
        BufferedReader ob=new BufferedReader(new InputStreamReader(System.in));
        System.out.println("A.Volume of cube\nB.Volume of sphere\nC.Volume of cuboid\nEnter choice according to the menu.");
        char ch=(char)ob.read();
        final double P=3.14;
        Volume ob1=new Volume();
        System.out.println("Enter the side of the cube...");
        double s=Double.parseDouble(ob.readLine());
        System.out.println("Enter the radius of the sphere...");
        double r=Double.parseDouble(ob.readLine());
        System.out.println("Enter the length, breadth and height of the cuboid...");
        double l=Double.parseDouble(ob.readLine());
        double b=Double.parseDouble(ob.readLine());
        double h=Double.parseDouble(ob.readLine());
    }
    public void volu(double side)
    {
        System.out.println("Volume of cube:");
        double vol=side*side*side;
    }
    public void volu(double radius,double pi)
    {
        System.out.println("Volume of sphere:");
        double vol=(4/3)*pi*radius*radius*radius;
    }
    public void volu(double length,double breadth,double height)
    {
        System.out.println("Volume of cuboid:");
        double vol=length*breadth*height;
    }
}

The output:
A.Volume of cube
B.Volume of sphere
C.Volume of cuboid
Enter choice according to the menu.
Enter the side of the cube...
[after this the following error is observed]
java.lang.NumberFormatException: empty String
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:994)
at java.lang.Double.parseDouble(Double.java:510)
at Volume.main(Volume.java:15)

so, this is the program... i have also found a solution to this... the solution is that if I make a new object inside every if block, then the number can be inputted and the error is not observed (for example)

BufferedReader ob2=new BufferedReader(new InputStreamReader(System.in));
System.out.println("Enter the side of the cube...");
double s=Double.parseDouble(ob2.readLine());

but, my question is that why can't we input a number using BufferedReader inside the if block? 'if' is not a function that we have to declare a variable again and again... so this is it... plz HELP...

Looks like BufferedReader does not have any method to read doubles, i have eclipse and that one shows me all methods of this class. I am not that experienced and i think that the keyboard is the input for your program, and if you want to read from the keyboard using this constructor Scanner tastatura = new Scanner(System.in); , the Scanner class has a method to read a double from the keyboard. And Scanner can also read from files with this one Scanner in = new Scanner(new File(url)); and there url is the path to the file.

I'd say your problem is caused by console input - you can't capture keystrokes from the command line because the command line is typically editable: you don't *input* characters until you press Enter. Consider the output of this program and what might be happening in your program if the 'A' is being read from an input buffer that contains more than just 'A'...
package com.javaprogrammingforums.domyhomework;

import java.io.*;

public class BufferedReaderReadChar
{
    public static void main(String[] args) throws Exception
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        while (true)
        {
            System.out.print("Enter some characters: ");
            char c = (char)br.read();
            if (c >= 0x20) /* printable */
                System.out.println("You typed '" + c + "'");
            else /* special character */
                System.out.println("You typed (0x" + Integer.toString(c, 0x10) + ")");
        }
    }
}

Last edited by Sean4u; August 15th, 2011 at 05:02 PM. Reason: Bad markup
{"url":"http://www.javaprogrammingforums.com/member-introductions/10379-bufferedreader-problem.html","timestamp":"2014-04-16T17:37:20Z","content_type":null,"content_length":"67359","record_id":"<urn:uuid:f657a257-6c50-4910-8bf8-4c94862f3a69>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Most analysis of microwave background data and predictions about its ability to constrain cosmology have been based on the cosmological parameter space described in Sec. 6.1 above. This space is motivated by inflationary cosmological scenarios, which generically predict power-law adiabatic perturbations evolving only via gravitational instability. Considering that this space of models is broad and appears to fit all current data far better than any other proposed models, such an assumed model space is not very restrictive. In particular, proposed extensions tend to be rather ad hoc, adding extra elements to the model without providing any compelling underlying motivation for them. Examples which have been discussed in the literature include multiple types of dark matter with various properties, nonstandard recombination, small admixtures of topological defects, production of excess entropy, or arbitrary initial power spectra. None of these possibilities are attractive from an aesthetic point of view: all add significant complexity and freedom to the models without any corresponding restrictions on the original parameter space. The principle of Occam's Razor should cause us to be skeptical about any such additions to the space of models. On the other hand, it is possible that some element is missing from the model space, or that the actual cosmological model is radically different in some respect. The microwave background is the probe of cosmology most tightly connected to the fundamental properties of the universe and least influenced by astrophysical complications, and thus the most capable data source for deciding whether the universe actually is well described by some model in the usual model space. An interesting question is the extent to which the microwave background can determine various properties of the universe independent from particular models. While any cosmological interpretation of temperature fluctuations in the microwave sky requires some kind of minimal assumptions, all of the conclusions outlined below can be drawn without invoking a detailed model of initial conditions or structure formation. These conclusions are in contrast to precision determination of cosmological parameters, which does require the assumption of a particular space of models and which can vary significantly depending on the space. The Friedmann-Robertson-Walker spacetime describing homogeneous and isotropic cosmology comes in three flavors of spatial curvature: positive, negative, and flat, corresponding to The microwave background provides the cleanest and most powerful probe of the geometry of the universe (Kamionkowski et al. 1994). The surface of last scattering is at a high enough redshift that photon geodesics between the last scattering surface and the Earth are significantly curved if the geometry of the universe is appreciably different than flat. In a positively-curved space, two geodesics will bend towards each other, subtending a larger angle at the observer than in the flat case; likewise, in a negatively-curved space two geodesics bend away from each other, resulting in a smaller observed angle between the two. The operative quantity is the angular diameter distance; Weinberg (2000) gives a pedagogical discussion of its dependence on A change in angular scale of this magnitude will change the apparent scale of all physical scales in the microwave background. A model-independent determination of r[s] (cf. Eq. (29). 
If coherent acoustic oscillations are visible, this scale sets their characteristic wavelengths. Even if coherent acoustic oscillations are not present, the sound horizon represents the largest scale on which any causal physical process can influence the primordial plasma. Roughly, if primordial perturbations appear on all scales, the resulting microwave background fluctuations appear as a featureless power law at large scales, while the scale at which they begin to depart from this assumed primordial behavior corresponds to the sound horizon. This is precisely the behavior observed by current measurements, which show a prominent power spectrum peak at an angular scale of a degree (l = 200), arguing strongly for a flat universe. Of course, it is logically possible that the primordial power spectrum has power on scales only significantly smaller than the horizon at last scattering. In this case, the largest scale perturbations would appear at smaller angular scales for a given geometry. But then the observed power-law perturbations at large angular scales must be reproduced by the Integrated Sachs-Wolfe effect, and resulting models are contrived. If the microwave background power spectrum exhibites acoustic oscillations, then the spacing of the acoustic peaks depends only on the sound horizon independent of the phase of the oscillations; this provides a more general and precise probe of flatness than the first peak position. The second physical scale provides another test: the Silk damping scale is determined solely by the thickness of the surface of last scattering, which in turn depends only on the baryon density [b] h ^2, the expansion rate of the universe and standard thermodynamics. Observation of an exponential suppression of power at small scales gives an estimate of the angular scale corresponding to the damping scale. Note that the effects of reionization and gravitational lensing must both be accounted for in the small-scale dependence of the fluctuations. If the reionization redshift can be accurately estimated from microwave background polarization (see below) and the baryon density is known from primordial nucleosynthesis or from the alternating peak heights signature (Sec. 5.4), only a radical modification of the standard cosmology altering the time dependence of the scale factor or modifying thermodynamic recombination can change the physical damping scale. If the estimates of 7.2. Coherent acoustic oscillations If a series of peaks equally spaced in l is observed in the microwave background temperature power spectrum, it strongly suggests we are seeing the effects of coherent acoustic oscillations at the time of last scattering. Microwave background polarization provides a method for confirming this hypothesis. As explained in Sec. 4.2, polarization anisotropies couple primarily to velocity perturbations, while temperature anisotropies couple primarily to density perturbations. Now coherent acoustic oscillations produce temperature power spectrum peaks at scales where a mode of that wavelength has either maximum or minimum compression in potential wells at the time of last scattering. The fluid velocity for the mode at these times will be zero, as the oscillation is turing around from expansion to contraction (envision a mass on a spring.) At scales intermediate between the peaks, the oscillating mode has zero density contrast but a maximum velocity perturbation. 
Since the polarization power spectrum is dominated by the velocity perturbations, its peaks will be at scales interleaved with the temperature power spectrum peaks. This alternation of temperature and polarization peaks as the angular scale changes is characteristic of acoustic oscillations (see Kosowsky (1999) for a more detailed discussion). Indeed, it is almost like seeing the oscillations directly: it is difficult to imagine any other explanation for density and velocity extrema on alternating scales. The temperature-polarization cross-correlation must also have peaks with corresponding phases. This test will be very useful if a series of peaks is detected in a temperature power spectrum which is not a good fit to the standard space of cosmological models. If the peaks turn out to reflect coherent oscillations, we must then modify some aspect of the underlying cosmology, while if the peaks are not coherent oscillations, we must modify the process by which perturbations evolve. If coherent oscillations are detected, any cosmological model must include a mechanism for enforcing coherence. Perturbations on all scales, in particular on scales outside the horizon, provide the only natural mechanism: the phase of the oscillations is determined by the time when the wavelength of the perturbation becomes smaller than the horizon, and this will clearly be the same for all perturbations of a given wavelength. For any source of perturbations inside the horizon, the source itself must be coherent over a given scale to produce phase-coherent perturbations on that scale. This cannot occur without artificial fine-tuning. 7.3. Adiabatic primordial perturbations If the microwave background temperature and polarization power spectra reveal coherent acoustic oscillations and the geometry of the universe can also be determined with some precision, then the phases of the acoustic oscillations can be used to determine whether the primordial perturbations are adiabatic or isocurvature. Quite generally, Eq. (28) shows that adiabatic and isocurvature power spectra must have peaks which are out of phase. While current measurements of the microwave background and large-scale structure rule out models based entirely on isocurvature perturbations, some relatively small admixture of isocurvature modes with dominant adiabatic modes is possible. Such mixtures arise naturally in inflationary models with more than one dynamical field during inflation (see, e.g., Mukhanov and Steinhardt 1998). 7.4. Gaussian primordial perturbations If the temperature perturbations are well approximated as a gaussian random field, as microwave background maps so far suggest, then the power spectrum C[l] contains all statistical information about the temperature distribution. Departures from gaussianity take myriad different forms; the business of providing general but useful statistical descriptions is a complicated one (see, e.g., Ferreira et al. 1997). Tiny amounts of nongaussianity will arise inevitably from non-linear evolution of fluctuations, and larger nongaussian contributions can be a feature of the primordial perturbations or can be induced by ``stiff'' stress-energy perturbations such as topological defects. As explained below, defect theories of structure formation seem to be ruled out by current microwave background and large-scale structure measurements, so interest in nongaussianity has waned. 
But the extent to which the temperature fluctuations are actually gaussian is experimentally answerable, and as observations improve this will become an important test of inflationary cosmological models. 7.5. Tensor or vector perturbations As described in Sec. 4.3, the tensor field describing microwave background polarization can be decomposed into two components corresponding to the gradient-curl decomposition of a vector field. This decomposition has the same physical meaning as that for a vector field. In particular, any gradient-type tensor field, composed of the G-harmonics, has no curl, and thus may not have any handedness associated with it (meaning the field is even under parity reversal), while the curl-type tensor field, composed of the C-harmonics, does have a handedness (odd under parity reversal). This geometric interpretation leads to an important physical conclusion. Consider a universe containing only scalar perturbations, and imagine a single Fourier mode of the perturbations. The mode has only one direction associated with it, defined by the Fourier vector k; since the perturbation is scalar, it must be rotationally symmetric around this axis. (If it were not, the gradient of the perturbation would define an independent physical direction, which would violate the assumption of a scalar perturbation.) Such a mode can have no physical handedness associated with it, and as a result, the polarization pattern it induces in the microwave background couples only to the G harmonics. Another way of stating this conclusion is that primordial density perturbations produce no C-type polarization as long as the perturbations evolve linearly. On the other hand, primordial tensor or vector perturbations produce both G-type and C-type polarization of the microwave background (provided that the tensor or vector perturbations themselves have no intrinsic net polarization associated with them). Measurements of cosmological C-polarization in the microwave background are free of contributions from the dominant scalar density perturbations and thus can reveal the contribution of tensor modes in detail. For roughly scale-invariant tensor perturbations, most of the contribution comes at angular scales larger than 2° (2 < l < 100). Figure 4 displays the C and G power spectra for scale-invariant tensor perturbations contributing 10% of the COBE signal on large scales. A microwave background map with forseeable sensitivity could measure gravitational wave perturbations with amplitudes smaller than 10^-3 times the amplitude of density perturbations (Kamionkowski and Kosowsky 1998). The C-polarization signal also appears to be the best hope for measuring the spectral index n[T] of the tensor perturbations. Figure 4. Polarization power spectra from tensor perturbations: the solid line is C[l]^G and the dashed line is C[l]^C. The amplitude gives a 10% contribution to the COBE temperature power spectrum measurement at low l. Note that scalar perturbations give no contribution to C[l]^C. Reionization produces a distinctive microwave background signature. It suppresses temperature fluctuations by increasing the effective damping scale, while it also increases large-angle polarization due to additional Thomson scattering at low redshifts when the radiation quadrupole fluctuations are much larger. This enhanced polarization peak at large angles will be significant for reionization prior to z = 10 (Zaldarriaga 1997). 
Reionization will also greatly enhance the Ostriker-Vishniac effect, a second-order coupling between density and velocity perturbations (Jaffe and Kamionkowski 1998). The nonuniform reionization inevitable if the ionizing photons come from point sources, as seems likely, may also create an additional feature at small angular scales (Hu and Gruzinov 1998, Knox et al. 1998). Taken together, these features are clear indicators of the reionization redshift z[r] independent of any cosmological model.

7.7. Primordial magnetic fields

Primordial magnetic fields would be clearly indicated if cosmological Faraday rotation were detected in the microwave background polarization. A field with comoving field strength of 10^-9 gauss would produce a signal with a few degrees of rotation at 30 GHz, which is likely just detectable with future polarization experiments (Kosowsky and Loeb 1996). Faraday rotation has the effect of mixing G-type and C-type polarization, and would be another contributor to the C-polarization signal, along with tensor perturbations. Depolarization will also result from Faraday rotation in the case of significant rotation through the last scattering surface (Harari et al. 1996). Additionally, the tensor and vector metric perturbations produced by magnetic fields result in further microwave background fluctuations. A distinctive signature of such fields is that for a range of power spectra, the polarization fluctuations from the metric perturbations are comparable to, or larger than, the corresponding temperature fluctuations (Kahniashvili et al. 2000). Since the microwave background power spectra vary as the fourth power of the magnetic field amplitude, it is unlikely that we can detect magnetic fields with comoving amplitudes significantly below 10^-9 gauss. However, if such fields do exist, the microwave background provides several correlated signatures which will clearly reveal them.

7.8. The topology of the universe

Finally, one other microwave background signature of a very different character deserves mention. Most cosmological analyses make the implicit assumption that the spatial extent of the universe is infinite, or in practical terms at least much larger than our current Hubble volume so that we have no way of detecting the bounds of the universe. However, this need not be the case. The requirement that the unperturbed universe be homogeneous and isotropic determines the spacetime metric to be of the standard Friedmann-Robertson-Walker form, but this is only a local condition on the spacetime. Its global structure is still unspecified. It is possible to construct spacetimes which at every point have the usual homogeneous and isotropic metric, but which are spatially compact (have finite volumes). The most familiar example is the construction of a three-torus from a cubical piece of the flat spacetime by identifying opposite sides. Classifying the possible topological spaces which locally have the metric structure of the usual cosmological spacetimes (i.e. have the Friedmann-Robertson-Walker spacetimes as a topological covering space) has been studied extensively. The zero-curvature and positive-curvature cases have only a handful of possible topological spaces associated with them, while the negative curvature case has an infinite number with a very rich classification. See Weeks (1998) for a review.
If the topology of the universe is non-trivial and the volume of the universe is smaller than the volume contained by a sphere with radius equal to the distance to the surface of last scattering, then it is possible to detect the topology. Cornish et al. (1998) pointed out that because the last scattering surface is always a sphere in the covering space, any small topology will result in matched circles of temperature on the microwave sky. The two circles represent photons originating from the same physical location in the universe but propagating to us in two different directions. Of course, the temperatures around the circles will not match exactly, but only the contributions coming from the Sachs-Wolfe effect and the intrinsic temperature fluctuations will be the same; the velocity and Integrated Sachs-Wolfe contributions will differ and constitute a noise source. Estimates show the circles can be found efficiently via a direct search of full-sky microwave background maps. Once all matching pairs of circles have been discovered, their number and relative locations on the sky strongly overdetermine the topology of the universe in most cases. Remarkably, the microwave background essentially allows us to determine the size of the universe if it is smaller than the current horizon volume in any dimension.
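The circle-matching idea lends itself to a compact sketch. The snippet below is only a schematic of the comparison step: it scores two rings of temperature values by their peak normalized correlation over relative phase. It ignores the noise weighting, pixelization and circle-orientation details that a real search must handle, and it is not the specific statistic used in the cited work.

import numpy as np

def best_circle_correlation(T1, T2):
    # peak normalized correlation between two temperature rings, over relative phase
    a = (T1 - T1.mean()) / T1.std()
    b = (T2 - T2.mean()) / T2.std()
    m = len(a)
    return max(float(np.dot(a, np.roll(b, s))) / m for s in range(m))

Genuinely matched circles would show up as pairs whose score stays high despite the velocity and integrated Sachs-Wolfe contributions acting as noise, as described above.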
{"url":"http://ned.ipac.caltech.edu/level5/Kosowsky2/Kosowsky7.html","timestamp":"2014-04-18T03:04:50Z","content_type":null,"content_length":"23684","record_id":"<urn:uuid:16f84781-9fed-4d46-a3ee-440bebc68e89>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
st: programming a weighted two stage least squares model

From: Andrew Barnes <abarnes2@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: programming a weighted two stage least squares model
Date: Fri, 1 Apr 2011 08:38:43 -0700

Hi folks, I programmed up this weighted two stage least squares (W2SLS) but I'm not sure if it's correct. Also, I don't know if the canned post estimation commands for ivregress (e.g., estat firststage, estat endog, estat overid) will be valid for a W2SLS model. Any thoughts or comments are appreciated. Cheers, Andrew

I have a model y2=y1 + x + e where y1 is endogenous (as is y2 obviously), x is some exogenous determinant of y2, and the error terms (e) are heteroscedastic. In the single equation model, I'm using weighted least squares a la Wooldridge

reg y2 y1 x
predict double y2hat, xb
replace y2hat=0.999 if y2hat>=1
replace y2hat=0.001 if y2hat<=0
gen WTy2hat=1/(y2hat*(1-y2hat))
reg y2 y1 x [w=WTy2hat]

Now, to resolve the endogeneity, I want to run a weighted 2SLS model using instruments z (so the reduced form first stage is y1=x+z+n, where n is an error term). I have done the following:

***W2SLS BY HAND
***weighted first stage
reg y1 x z
predict double y1hat, xb
replace y1hat=0.999 if y1hat>=1
replace y1hat=0.001 if y1hat<=0
gen WTy1hat=1/(y1hat*(1-y1hat))
drop y1hat
reg y1 x z [w=WTy1hat]
**get predicted y1hat from weighted first stage
predict double y1hat, xb
***weighted second stage
reg y2 y1hat x
predict double y2hat, xb
replace y2hat=0.999 if y2hat>=1
replace y2hat=0.001 if y2hat<=0
gen WTy2hat=1/(y2hat*(1-y2hat))
reg y2 y1hat x [w=WTy2hat]
***calculate correct VCE
*(from http://www.stata.com/statalist/archive/2010-02/msg00307.html)
scalar rmsebyhand=e(rmse)
*wrong VCE
mat vbyhand=e(V)
scalar dfk=e(df_r)
**correct resids
gen double eps2=(y2- _b[y1hat]*y1 - _b[x]*x - _b[_cons])^2
su eps2
***correct rmse
scalar rmsecorr=sqrt(r(sum)/dfk)
***correct VCE
mat vcorr=(rmsecorr/rmsebyhand)^2*vbyhand
mat li vcorr
****display b vector and correct standard errors
qui reg y2 y1hat x [w=WTy2hat]
matrix b=e(b)
matrix list b
matrix colnames vcorr= y1hat x _cons
matrix rownames vcorr = y1hat x _cons
ereturn post b vcorr
ereturn display

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
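For checking the algebra outside Stata, the same steps can be written as a small numpy sketch. It mirrors the post (clipped fitted values, weights 1/(p*(1-p)), and residuals built from the observed y1 rather than y1hat, so the last line is equivalent to the (rmsecorr/rmsebyhand)^2 rescaling). It is only a transcription of the procedure as described, not a claim that the procedure itself is the right estimator.

import numpy as np

def wls(X, y, w):
    # weighted least squares: returns beta and (X'WX)^-1
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    XtWX_inv = np.linalg.inv(Xw.T @ Xw)
    return XtWX_inv @ (Xw.T @ yw), XtWX_inv

def w2sls(y2, y1, x, z):
    n = len(y2)
    ones = np.ones(n)
    # weighted first stage: OLS fit -> weights -> WLS -> fitted y1hat
    X1 = np.column_stack([x, z, ones])
    b, _ = wls(X1, y1, ones)
    p1 = np.clip(X1 @ b, 0.001, 0.999)
    b, _ = wls(X1, y1, 1.0 / (p1 * (1.0 - p1)))
    y1hat = X1 @ b
    # weighted second stage with y1hat in place of y1
    X2 = np.column_stack([y1hat, x, ones])
    b, _ = wls(X2, y2, ones)
    p2 = np.clip(X2 @ b, 0.001, 0.999)
    beta, XtWX_inv = wls(X2, y2, 1.0 / (p2 * (1.0 - p2)))
    # corrected VCE: residuals use the *observed* y1, as in the post
    resid = y2 - np.column_stack([y1, x, ones]) @ beta
    sigma2 = resid @ resid / (n - X2.shape[1])
    V_corr = sigma2 * XtWX_inv
    return beta, V_corr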
{"url":"http://www.stata.com/statalist/archive/2011-04/msg00031.html","timestamp":"2014-04-20T21:14:25Z","content_type":null,"content_length":"9413","record_id":"<urn:uuid:cf0634a5-40ff-4329-9f82-c2a7dfc0377f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Earth and space science

This is an activity about size and scale. Learners will construct a 3-D scale model of one of the MMS satellites. After, they will calculate the octagonal area of the top and bottom of the satellites, given the measurements of the satellite. Then, learners will compare the octagonal cross-section area of the satellites with the circular cross-section area of the launch vehicle to determine if the eight-sided spacecraft will fit the circular rocket hull. This is lesson one as part of the MMS Mission Educator's Instructional Guide.

This is an activity about satellite flight. Learners will first watch a video about the orbit and formation of the MMS satellites to learn about their flight configuration. After, they will research similar facts about other types of satellites. Next, learners will compute the volume of MMS' tetrahedral flight configuration and investigate how the tetrahedral volume changes as the satellites change positions. Finally, they will create a report that outlines their findings. This activity requires student access to internet accessible computers. This is lesson three as part of the MMS Mission Educator's Instructional Guide.

Students will learn about the Transit of Venus through reading a NASA press release and viewing a NASA eClips video that describes several ways to observe transits. Then students will study angular measurement by learning about parallax and how astronomers use this geometric effect to determine the distance to Venus during a Transit of Venus. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.

This is an online set of information about astronomical alignments of ancient structures and buildings. Learners will read background information about the alignments to the Sun in such structures as the Great Pyramid, Chichen Itza, and others. Next, the site contains 10 short problem sets that involve a variety of math skills, including determining the scale of a photo, measuring and drawing angles, plotting data on a graph, and creating an equation to match a set of data. Each set of problems is contained on one page and all of the sets utilize real-world problems relating to astronomical alignments of ancient structures. Each problem set is flexible and can be used on its own, together with other sets, or together with related lessons and materials selected by the educator. This was originally included as a folder insert for the 2010 Sun-Earth Day.

This is a math-science integrated unit about spectrographs. Learners will find and calculate the angle that light is transmitted through a holographic diffraction grating using trigonometry. After finding this angle, the students will build their own spectrographs in groups and research and design a ground or space-based mission using their creation. After the project is complete, student groups will present to the class on their trials, tribulations, and findings during this process.
The activity is part of Project Spectra, a science and engineering program for middle-high school students, focusing on how light is used to explore the Solar System.

This is a lesson about using the light from the star during an occultation event to identify the atmosphere of a planet. Learners will add and subtract light curves (presented as a series of geometrical shapes) to understand how this could occur. The activity is part of Project Spectra, a science and engineering program for middle-high school students, focusing on how light is used to explore the Solar System.

This math problem determines the areas of simple and complex planar figures using measurement of mass and proportional constructs. Materials are inexpensive or easily found (poster board, scissors, ruler, sharp pencil, right angle), but also requires use of an analytical balance (suggestions are provided for working with less precise weighing tools). This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.

In this activity, learners draw a circle with a single focus, an ellipse with two foci close together, and an ellipse with two foci far apart, and compare the shapes. Learners then measure the Sun in four images each taken in a different season, comparing the apparent size of the Sun in each image to determine when Earth is closest to the Sun. This is the second activity in the SDO Secondary Learning Unit. The activity is reprinted with permission from the Great Explorations in Math and Science (GEMS).

Math skills are applied throughout this investigation of windows. Starting with basic window shapes, students determine area and complete a cost analysis, then do the same for windows of unconventional shapes. Students will examine photographs taken by astronauts through windows on the Space Shuttle and International Space Station to explore the inverse relationship between lens size and area covered. This lesson is part of the Expedition Earth and Beyond Education Program.

This is a book containing over 200 problems spanning over 70 specific topic areas covered in a typical Algebra II course. Learners can encounter a selection of application problems featuring astronomy, earth science and space exploration, often with more than one example in a specific category. Learners will use mathematics to explore science topics related to a wide variety of NASA science and space exploration endeavors. Each problem or problem set is introduced with a brief paragraph about the underlying science, written in a simplified, non-technical jargon where possible. Problems are often presented as multi-step or multi-part activities. This book can be found on the Space Math@NASA website.
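The tetrahedral-volume computation in the MMS activity described above reduces to a single determinant. A minimal sketch, with made-up spacecraft positions chosen only for illustration:

import numpy as np

def tetra_volume(p1, p2, p3, p4):
    # volume spanned by four points: |det of three edge vectors| / 6
    return abs(np.linalg.det(np.array([p2 - p1, p3 - p1, p4 - p1]))) / 6.0

# hypothetical spacecraft positions, in km
sats = [np.array(p, float) for p in [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]]
print(tetra_volume(*sats))   # about 166.7 km^3; shrinks toward 0 as the formation flattens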
{"url":"http://nasawavelength.org/resource-search?facetSort=1&topicsSubjects%5B%5D=Mathematics%3AAlgebra&topicsSubjects%5B%5D=Mathematics%3AGeometry","timestamp":"2014-04-20T07:25:57Z","content_type":null,"content_length":"77663","record_id":"<urn:uuid:0fcdc99b-8e95-421e-9a3c-7670885c23e9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding uncertainty: ESP and Bayes In the first part of this article we looked at a psychological study which claims to provide evidence that certain types of extra-sensory perception exist, using a statistical method called significance testing. But do the results of the study really justify this conclusion? What's wrong with p values? Despite the very widespread use of statistical significance testing (particularly in psychology) the method has been heavily criticised, by psychologists themselves as well as by some statisticians and other scientists. R.A. Fisher invented the notion of significance testing. One criticism that is relevant here concerns alternatives to the null hypothesis. Remember the conclusion that was reached when the p value in Bem's experiment was 0.009? "Either the null hypothesis really is true, but nevertheless an unlikely event has occurred. Or the null hypothesis isn't true." This says nothing about how likely the data were if the null hypothesis isn't true. Maybe they are still unlikely even if the null hypothesis is false. Surely we can't just throw out the null hypothesis without further investigation of what the probabilities are if the null hypothesis is false? This is an issue that must be dealt with if one is trying to use the test to decide whether the null hypothesis is true or false. It should be said that the great statistician and geneticist R.A. Fisher, who invented the notion of significance testing, would simply not have used the result of a significance test on its own to decide whether a null hypothesis is true or false — he would have taken other relevant circumstances into account. But unfortunately not every user of significance tests follows Fisher's approach. The usual way to deal with the situation where the null hypothesis might be false is to define a so-called alternative hypothesis. In the case of the Bem experiment this would be the hypothesis that the average rate of correct answers is greater than 50%. You might think that we could just calculate the probability of getting the Bem's data on the assumption that the alternative hypothesis is true. But there's a snag. The alternative hypothesis simply says that the average rate is more than 50%. It doesn't say how much more than 50%. If the real average rate were, let's say, 99%, then getting an observed rate of 51.7% isn't very likely, but if the real average rate were 51.5%, then getting an observed rate of 51.7% is quite likely. But real averages of 99% and of 51.5% are both covered by the alternative hypothesis. So this isn't going to get us off the hook. Let's do Bayes One possibility is to meet the issue of misinterpreting the p value head on. I said that many people think that a p value of 0.009 actually means that the probability that the null hypothesis is true is 0.009, given the data that were observed. Well, I explained why that's not correct — but why do people interpret it that way? In my view it's because what people actually want to know is how likely it is that the null hypothesis is true, given the data that were observed. The p value does not tell them this, so they just act as if it did. The p value does not tell people what they want to know because in order to find the probability that the null hypothesis is true (given the data), one needs to take a Bayesian approach to statistics. There's more than one way to do that, but the one I'll describe is as follows. It uses the odds form of Bayes' theorem, the theorem behind Bayesian approaches to statistics. 
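To make that last point concrete, here is a small numerical illustration with an invented trial count (the 1,000 below is chosen purely for illustration and is not Bem's design). It compares how likely a 51.7% hit rate is under true rates of 50%, 51.5% and 99%.

from scipy.stats import binom

n, k = 1000, 517            # made-up numbers: 1,000 trials, 51.7% observed hit rate

# how probable is this observation under different "true" hit rates?
for rate in (0.50, 0.515, 0.99):
    print(rate, binom.pmf(k, n, rate))

# the p value analogue: chance of doing at least this well if the true rate is 50%
print(binom.sf(k - 1, n, 0.5))

Under a true rate of 99% the observation is essentially impossible, while it is comfortably probable under 51.5%; it is exactly this kind of likelihood comparison that the Bayesian machinery below formalizes.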
There's much more on this in our articles How psychic was Paul?, The logic of drug testing and It's a ...

Let's write P(A | data) for the probability that a hypothesis A is true given the observed data and P(data | A) for the probability that the data is observed given that hypothesis A is true. Writing H0 for the null hypothesis and H1 for the alternative hypothesis, the expression P(H1 | data)/P(H0 | data) is known as the posterior odds for the alternative hypothesis, and the expression P(H1)/P(H0) is known as the prior odds for the alternative hypothesis. The odds form of Bayes' theorem says that

P(H1 | data)/P(H0 | data) = [P(data | H1)/P(data | H0)] x [P(H1)/P(H0)].

So we get to the posterior odds by multiplying the prior odds for the alternative hypothesis by a quantity, P(data | H1)/P(data | H0), which is known as the Bayes factor.

Let's look at this a bit more closely. We're trying to find the probability that the null hypothesis is true, given the data, or P(H0 | data). This means that the posterior odds for the alternative hypothesis is actually (1 - P(H0 | data))/P(H0 | data), and if you know these posterior odds, it's straightforward to work out P(H0 | data). So far, so good. But to find the posterior odds for the alternative hypothesis, we need to know the prior odds for the alternative hypothesis as well as the Bayes factor. It is the existence of the prior odds in the formula that puts some people off the Bayesian approach entirely. The prior odds is a ratio of probabilities of hypotheses before the data have been taken into account. That is, it is supposed to reflect the beliefs of the person making the calculation before they saw any of the data, and people's beliefs differ in a subjective way.

One person may simply not believe it possible at all that ESP exists. In that case they would say, before any data were collected, that P(H1) = 0, so the prior odds are 0, and then the posterior odds must also be 0, whatever the value of the Bayes factor. Hence, for this person P(H0 | data) = 1 whatever the data might be. This person started believing that ESP could not exist, and his or her mind cannot be changed by the data.

Hey look! A flying pig! People have different prior odds for the existence of ESP.

Another person might think it is very unlikely indeed that ESP exists but not want to rule it out as being absolutely impossible. This may lead them to set the prior odds for the alternative hypothesis, not as zero, but as some very small number, say 1/10,000. If the Bayes factor turned out to be big enough, the posterior odds for the alternative hypothesis might nevertheless be a reasonably sized number so that Bayes' theorem is telling this person that, after the experiment, they should consider the alternative hypothesis to be reasonably likely. Thus different people can look at the same data and come to different conclusions about how likely it is that the null hypothesis (or the alternative hypothesis) is true. Also, the probability that the null hypothesis is true might or might not be similar to the p value — it all depends on the prior odds as well as on the Bayes factor. (In many cases, actually it turns out that the probability of the null hypothesis is very different from the p value for a wide range of plausible values of the prior odds, drawing attention yet again to the importance of being pedantic when saying what probability is described by the p value. This phenomenon is sometimes called Lindley's paradox, after the renowned Bayesian statistician Dennis Lindley who drew attention to it.)

The Bayes factor

You might think that the issue of people having different prior odds could be avoided by concentrating on the Bayes factor. If I could tell you the Bayes factor for one of Bem's experiments, you could decide what your prior odds were and multiply them by the Bayes factor to give your own posterior odds.
In fact, one of Bem's critics, Eric-Jan Wagenmakers (see here for a paper criticising Bem's work), takes exactly that line, concentrating on the Bayes factors. For Bem's Experiment 2, for instance, Wagenmakers and colleagues calculate the Bayes factor as about 1.05. Therefore the posterior odds for the alternative hypothesis are not much larger than the prior odds, or putting it another way, they would say that the data provide very little information to change one's prior views. They therefore conclude that this experiment provides rather little evidence that ESP exists, and certainly not enough to overturn the established laws of physics. They come to similar conclusions about several more of Bem's experiments and for others they calculate the Bayes factor as being less than 1. In these cases the posterior odds for the alternative hypothesis will be smaller than the prior odds; that is, one should believe less in ESP after seeing the data than one did beforehand.

Well, that's an end of it, isn't it? Despite all Bem's significance tests, the evidence provided by his experiments for the existence of ESP is either weak or non-existent. But no, it's not quite as simple as that. The trouble is that there's more than one way of calculating a Bayes factor. Remember that the Bayes factor is defined as P(data | H1)/P(data | H0). It's reasonably straightforward to calculate P(data | H0), but P(data | H1) also depends on subjective prior opinions. Avoiding the issue of the prior odds, by concentrating on the Bayes factor, has not made the subjectivity go away.

Wagenmakers and his colleagues used one standard method of calculating the Bayes factors for Bem's experiment, but this makes assumptions that Bem (with Utts and Johnson) disagrees with. For Experiment 2 (where Wagenmakers and colleagues found a Bayes factor of 1.05) Bem, Utts and Johnson calculate Bayes factors by four alternative methods, all of which they claim to be more appropriate than that of Wagenmakers, and they result in Bayes factors ranging from 2.04 to 6.09.

Can experimental data ever provide strong evidence for ESP?

They calculate Bayes factors around 2 when they make what they describe as "sceptical" prior assumptions, that is, assumptions which they feel would be made by a person who is sceptical about the possibility of ESP, though evidently not quite as sceptical as Wagenmakers. The Bayes factor of about 6 comes from assumptions that Bem and his colleagues regard as being based on appropriate prior knowledge about the size of effect typically observed in psychological experiments of this general nature. These Bayes factors indicate considerably greater evidence in favour of ESP, from the same data, than Wagenmakers considered to be the case. Bem, Utts and Johnson report similar results for the other experiments.

Since then Wagenmakers has come back with further arguments as to why his original approach is better (and as to why Bem, Utts and Johnson's "knowledge-based" assumptions aren't based on the most appropriate knowledge). Who is right? Well, in my view that's the wrong question. It would be very nice if experiments like these could clearly and objectively establish, one way or the other, whether ESP can exist. But the hope that the data can speak for themselves in an unambiguous way is in vain. If there is really some kind of ESP effect, it is not large (otherwise it would have been discovered years ago). So any such effect must be relatively small and thus not straightforward to observe against the inevitable variability between individual experimental participants.
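The arithmetic of combining these Bayes factors with a prior is easy to make concrete. Using the very sceptical prior odds of 1/10,000 mentioned earlier, together with the Bayes factors quoted above, the posterior probability of the alternative hypothesis stays tiny in every case:

def posterior_prob_alternative(prior_odds, bayes_factor):
    # P(H1 | data) from the prior odds P(H1)/P(H0) and the Bayes factor
    post_odds = prior_odds * bayes_factor
    return post_odds / (1.0 + post_odds)

for bf in (1.05, 2.04, 6.09):
    print(bf, posterior_prob_alternative(1 / 10_000, bf))

All three results are well below one in a thousand, which is one way of seeing why, for data of this strength, the conclusion is driven almost entirely by the prior.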
Since the data are therefore not going to provide really overwhelming evidence one way or the other, it seems to me that people will inevitably end up believing different things in the light of the evidence, depending on what else they know and to some extent on their views of the world. Just looking at the numbers from Bem's experiments is not going to make this subjective element disappear. About the author Kevin McConway is Professor of Applied Statistics at the Open University. As well as teaching statistics and health science, and researching the statistics of ecology and evolution and in decision making, he works in promoting public engagement in statistics and probability. Kevin is an occasional contributor to the Understanding Uncertainty website. Submitted by Anonymous on November 13, 2012. It may be obfuscating the issue to suggest that p-values alone are not strongly suggestive of some genuine effect in an experiment like this. The reason is that unless the experiment is flawed, the only probability distribution of results that can occur without an interesting effect is that of the null hypothesis. Of course a single p-value like 0.009 is not by itself indicative of anything but a fluke: run 100 experiments and this sort of result is rather likely. This is why the particle physics community demands p-values of 0.0000003 before using the term "discovery". I don't believe they qualify this with a Bayesian criterion: the null hypothesis is simply the hypothesis that the hypothesis is not true. It is perhaps inaccurate to say that no effect has been discovered years ago: for example, the book "Entangled Minds" by Dean Radin (2006) presents meta-analysis of a wide range of possible effects from experiments over several decades and finds qualitatively similar weak effects for most of them. The p-values are much lower in some cases, with much larger effective sample sizes. If I understand the data correctly, the effects cannot be adequately explained by the hypothesis that some of the experiments are flawed or fraudulent, because the distribution of the results of different experiments is much more like a weak effect with the variation in the results being the result of sampling. I would welcome the views of an expert like McConway on Radin's claims. So it may be more accurate to say that (1) these effects have not been accepted as definitely genuine by mainstream psychology (or other sciences) and (2) there is little or no understanding of mechanisms that might explain the effects. I feel it is this lack of any real scientific understanding of the nature of the purported phenomenon that blocks acceptance rather than the statistics themselves. Weaker statistics have been used (rightly in my opinion) to make major strategic decisions in other fields. Given a strong p-value (say 0.000001) in an experiment, two intelligent people can come to two different conclusions. The first might say "unless this experiment is faulty or fraudulent, there is very likely a real effect here". The second might say "this experiment is almost certainly faulty or fraudulent, since the conclusion is ridiculous". If ESP effects are weak in the way that Radin's analysis and this experiment (and many others) suggest, it would take a rather large experiment (or a genuine effect plus a bit of luck) to even reach these sorts of p-values, though more extreme ones can be found by combining data from many different experiments. Liam Roche
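One small check on the comment above: the quoted 0.0000003 "discovery" threshold is just the one-sided tail probability of a five-standard-deviation normal fluctuation.

import math

# one-sided tail probability of a 5-sigma normal fluctuation
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(p)    # about 2.9e-07, i.e. the 0.0000003 threshold quoted in the comment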
{"url":"http://plus.maths.org/content/understanding-uncertainty-esp-and-bayes","timestamp":"2014-04-18T13:34:57Z","content_type":null,"content_length":"48837","record_id":"<urn:uuid:8c52f155-ef12-46e0-bf20-74ae62a63095>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
How to save data

When you want to save the data, select the command Save as in the File menu. The program will display the following dialog box:

• File name: select or type the name of the file for the data. This box lists files with the filename extension selected in the Save as type box. You can select a file by clicking a file name.
• Save as type: select the type of file you want to see in the file name list box.

When the correct file name is entered, select the Save button. You can also select Cancel or press the Esc key to interrupt the procedure and close the file selector box.
{"url":"http://www.medcalc.org/manual/how_to_save_data.php","timestamp":"2014-04-19T00:03:47Z","content_type":null,"content_length":"31627","record_id":"<urn:uuid:1b73e56e-570f-4890-8181-80fb9ae115c4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
WebDiarios de Motocicleta Sunday, April 19, 2009 Happy Easter! Christos a înviat! Tuesday, April 14, 2009 Via David Eppstein, I find out about the Workshop on Theory and Multicores. A memorable citation from the call: [...] Indeed, the programs of many mainstream computer science conferences, such as ASPLOS, DAC, ISCA, PLDI and POPL are heavily populated with papers on parallel computing and in particular on many-core computing. In contrast, the recent programs of flagship theory conferences, such as FOCS, SODA and STOC, hardly have any such paper. This low level of activity should be a concern to the theory community, for it is not clear, for example, what validity the theory of algorithms will have if the main model of computation supported by the vendors is allowed to evolve away from any studied by the theory. The low level of current activity in the theory community is not compatible with past involvement of theorists in parallel computing, and to their representation in the technical discourse. Which theory community? The one that thinks our machine models are not robust enough to tell the difference between O( ) and O( )? The one that thinks n is polynomial and even efficient? The one that thinks Amazon should be happy with an O(lg ) approximation for its profit? The one that thinks the coolest conclusions we ever came to are the philosophical implications of interactive proofs and zero knowledge? The theory community will do just fine, thank you very much. As for "the low level of current activity" being incompatible "with past involvement of theorists in parallel computing" --- it is exactly the past involvement that led to the current attitude towards parallel computing! Parallel computing is the fabled field where we got burned badly, and the first example people use to argue that theory is still very young. (It usually goes like this: "Imagine an equivalent of Perelman disappearing for 20 years and coming back with an answer to the most burning question about PRAMs from 1985. Who would care now?"). It's going to be hard to regain enthusiasm about parallel computing, as timely as the moment may be. But at least we learned our lessons. Never again will a hyped-up mob of theorists rush head-on to study a machine model too early. Never will we study a machine model before such computers are even built, and while the practicality of such computers is still debated. We understand that producing a theoretical model too early will have us chasing the wrong rabbits, and that our results might be as relevant as the 5-state 17-symbol universal Turing Machine was in the design of the Pentium. Alright, enough of this. I should go back to reading about 3-round quantum interactive proofs with entanglement. Wednesday, April 8, 2009 Count the number of permutations of size n, such that |π(i) - π(i+1)| ≤ k, for all i. That is, the hop between any two consecutive values must be short. I am asking for an algorithm, not a closed formula. (Combinatorialists: is one known?) Your algorithm should work fast for values like n=10^6 and k=8. I have not thought hard about supporting larger k (say, k = 20), but I would certainly be interested to hear such a solution if you have one. Credits. This problem was used in a selection contest for the Romanian olympic team. The original author was Marius Andrei. My subjective judgment of its difficulty is "average+" (it was tricky, but I solved it within 10 minutes). YMMV. 
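A brute-force checker is handy for validating a candidate algorithm on tiny inputs before scaling up; it is emphatically not the intended solution, which has to cope with n around 10^6.

from itertools import permutations

def count_small(n, k):
    # brute-force count of permutations with |pi(i) - pi(i+1)| <= k (only for tiny n)
    return sum(all(abs(p[i] - p[i + 1]) <= k for i in range(n - 1))
               for p in permutations(range(1, n + 1)))

print([count_small(n, 2) for n in range(1, 9)])   # values to test a fast solution against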
As we will discuss in the next post, for some applications it suffices to get lower bounds on the one-way communication complexity -- that is, the case in which a single message is sent, either from Alice to Bob, or from Bob to Alice. The party receiving the message must immediately announce the answer, based on the message and their own input.

Referring to our INDEXING lower bound from the last episode, we can immediately obtain a one-way lower bound by setting a=0 (one message Bob→Alice) or b=0 (one message Alice→Bob). We conclude that Alice must send Ω(n) bits in the A→B model, and that Bob must send Ω(lg n) in the B→A model. The former case is much more interesting in applications, so when people say "the one way communication complexity of INDEXING," they always mean the Alice→Bob model.

One-way communication is often simple to analyze, and we can obtain lower bounds by direct appeal to information theory. Here is a short and simple proof of the INDEXING lower bound:

Lemma. The one-way communication complexity of INDEXING is Ω(n).

Proof: Assume there exists a protocol in which Alice sends n/10 bits, which works with probability (say) 0.99 over the uniform distribution. We use this to construct an encoder for a random vector of n bits. We show that the average length of the encoding is strictly less than n, which is a contradiction because the entropy of the random bit string is n.

Say we want to encode A[1..n]. We give Alice the input A, and include her message M in the encoding. Then, we specify the set of indices i, such that, if Bob has input i and receives message M from Alice, he will give an incorrect answer to INDEXING. This is the entire encoding.

Claim 1: we can decode A from the encoding. To accomplish this, simulate the action of Bob for every possible i. For every index in the bad set, A[i] is the opposite of what Bob says.

Claim 2: the average size of the encoding is at most 0.85 n. The first component (Alice's message) is n/10 in size by assumption. The second component has size lg(n choose k), where k is the number of bad indices for a particular A. How do we analyze the expected size of this component?

• E_A[k] = 0.01 n. Indeed, for uniformly random A and i, the algorithm works with probability 0.99. So in expectation over A, the bad set of indices has size 0.01 n.
• By the Markov bound, Pr_A[k ≤ 0.02 n] ≥ 1/2.
• When k ≤ 0.02 n, lg(n choose k) ≤ n/2.
• When k ≥ 0.02 n, lg(n choose k) ≤ n.
• Therefore E[lg(n choose k)] ≤ 0.75 n.

Therefore, the total size of the encoding is 0.85 n in expectation, contradiction.

A puzzle. Since this blog has featured algorithmic puzzles recently, it's time for a lower bound puzzle:

Say Alice and Bob receive two sets of n elements from the universe [n^2]. Show that the one-way randomized communication complexity of set disjointness is Ω(n lg n) bits.

This is the best result possible, since Alice can specify her entire set with O(n lg n) bits, after which Bob knows the answer with no error. In fact, the O(n lg n) complexity holds even for large universes: use a universal hash function from the universe to a range of 100 n^2. The hash function introduces zero collisions with probability 99%, so it doesn't introduce false positives in the intersection.

Thanks to Mert Sağlam for telling me about this question. As far as I know, the proof is folklore. David Woodruff wrote an email describing this result many years ago.

Monday, April 6, 2009

The following simple puzzle was circulating among Romanian olympiad participants around 1998.
It was supposed to be a quick way to tell apart algorithmists from mere programmers.

Given an undirected graph G, and two vertices u and v, find a cycle containing both u and v, or report that none exists. Running time O(m).

Update: Simple ideas based on a DFS (like: find one path, find another) do not work. Think of the following graph: If you first find the path s → a → b → t, you will not find a second. The one-line answer is: try to route two units of flow from s to t in the unit-capacity graph (with unit node capacities if you want simple cycles). This is not the same as two DFS searches, because the second DFS is in the residual graph (it can go back on the edges of the first DFS).

About the previous puzzle: As many people noticed, Alice can guarantee a win or a draw. She computes the sum of the elements on odd positions and the sum on the even positions. Depending on which is higher, she only plays odd positions or only even positions. (Bob has no choice, since the subarrays he's left with always have the ends of the same parity.) But how do you compute the optimal value for Alice? If the sum of even and odd is equal, how can Alice determine whether she can win, or only draw? A simple dynamic program running in O(n^2) time works. Can you solve it faster?

Friday, April 3, 2009

This is the 3rd post in the thread on communication complexity, following Episodes 1 and 2. This post is also available in PDF.

In today's episode, we will prove our first randomized/distributional lower bound. We will consider a problem that appears quite trivial at first sight -- INDEXING, defined as follows:

• Alice receives a bit vector, x ∈ {0,1}^n
• Bob receives an index, y ∈ [n] (note standard notation: [n]={1,...,n})
• their goal is to output x[y], i.e. the y-th bit of the vector x.

One way to view INDEXING is as the simplest case of lopsided set disjointness, where Bob's set is of size one: Alice receives the set S = { i : x[i]=1 }, and Bob receives T = { y }.

Intuition. What trade-off should we expect between Alice's communication a, and Bob's communication b? Clearly, the problem can be solved by [a=1, b=lg n] and by [a=n, b=1]. In between these two extremes, the best use of b seems to be for Bob to send b bits of his index. Alice now knows the index to be in a set of n/2^b possibilities. She can simply send all her bits at these positions, using communication a = n/2^b. Finally, Bob announces the answer with one more bit. We thus showed the upper bound: a ≤ n/2^{b-1}.

It should be fairly intuitive that this is also a tight lower bound (up to constants). Indeed, no matter what Bob communicates, his index will be uncertain in a set of n/2^b possibilities (on average). If Alice sends less than n/2^{b+1} bits of information, at least half of the possible index positions will not have a specified answer based on Alice's message. In other words, the protocol fails to determine the answer with constant probability (i.e. makes constant error).

Distributions. Before proving a distributional lower bound, we must find the distribution that makes the problem hard. From the intuition above, it should be clear that the right distributions are uniform, both for Alice's vector and for Bob's index. We are in the situation of product distributions: the inputs of Alice and Bob are independent. This is a very good situation to be in, if you remember the main take-home lesson from Episode 1: rectangles.
Remember that some fixed communication transcript is realized in a combinatorial rectangle A×B, where A is a subset of Alice's possible inputs, and B a subset of Bob's possible inputs. The main ideas of a deterministic proof were:

• little communication from Alice means A is large;
• little communication from Bob means that B is large;
• but the rectangle must be monochromatic (the answers must be fixed, since the communication is over);
• if A and B are large, you must have both yes-instances and no-instances.

For a product distribution, we will perform essentially the same analysis, given that the densities μ(A) and μ(B) can be measured independently. The only difference will be that the rectangle is not monochromatic, but almost monochromatic. Indeed, if the protocol may make some errors, the answer need not be fixed in the rectangle. But it must happen that one answer (yes or no) is much more frequent -- otherwise the error is large.

The first three steps of the analysis are formalized in the following randomized richness lemma, from the classic paper of [Miltersen, Nisan, Safra, Wigderson STOC'95]:

Lemma. (randomized richness) Say Alice and Bob compute a function f : X×Y -> {0,1} on a product distribution over X×Y. Assuming that:
• f is dense, in the sense that E[ f(x,y) ] ≥ α ≥ 1/5;
• the distributional error is at most ε;
• Alice communicates a bits, and Bob b bits.
Then, there exists a rectangle A×B (A⊂X, B⊂Y) satisfying:
• Alice's side is large: μ(A) ≥ 2^{-O(a)};
• Bob's side is large: μ(B) ≥ 2^{-O(a+b)};
• the rectangle is almost monochromatic: E[ f(x,y) | x∈A, y∈B ] ≥ 1 - O(ε).

Proof: Though the statement is very intuitive, the proof is actually non-obvious. The obvious approach, which fails, would be some induction on the bits of communication: fix more bits of the communication, making sure the rectangle doesn't decrease too much, and the error doesn't increase too much in the remaining rectangle.

The elegant idea is to use the deterministic richness lemma. Let F be the output of the protocol (what Alice and Bob answer). We know that f and F coincide on 1-ε of the inputs. By definition, the protocol computes F deterministically with no error (duh!). It is also clear that F is rich, because it is dense: E[F] ≥ α-ε. So apply the deterministic richness lemma from Episode 1. We get a large rectangle of F in which the answer is one. But how do we know that f is mostly one in the rectangle? It is true that F and f differ on only ε of the inputs, but that ε might include this entire rectangle that we chose! (Note: the rectangle size is ~ 2^{-a} x 2^{-b}, so much much smaller than some constant ε. It could all be filled with errors.) We were too quick to apply richness on F.

Now define G, a cleaned-up version of F. Consider some transcript of the protocol, leading to a rectangle A×B. If the answer is zero, let G=0 on A×B. If the answer is one, but the error inside this rectangle is more than 10 ε, also let G=0. Otherwise, let G=1 on the rectangle.

How much of the truth table gets zeroed out because of excessive error (above 10 ε)? Well, the overall average error is ε, so we can apply a Markov bound to it: the mass of all rectangles in which the error exceeds 10 ε is at most 1/10. G is also fairly dense: E[G] ≥ E[F] - 1/10 ≥ α - (1/10) - ε ≥ 1/10 - ε. Thus, G is rich, and we can find a big rectangle in which it is identically one. But in that rectangle, E[f] ≥ 1 - 10 ε by construction of G.

The good, the bad, the ugly.
It remains to prove that in a large rectangle, the fraction of zeros must be non-negligible (it may not be almost all ones). This part is, of course, problem specific, and we shall do it here for INDEXING. Unfortunately, these proofs are somewhat technical. They typically apply a number of "pruning" steps on the rectangle. An example of pruning on the space of rectangles was seen above: we zeroed out all rectangles that had more than 10 ε error. In these proofs, we throw out rows and columns we don't like for various reasons. One usually makes many funny definitions, talking about "good rows", "bad rows", "ugly columns", etc. While looking at such proofs, it is important to remember the information-theoretical intuition behind them. After you understand why the statement is true (handwaving about the information of this and the entropy of that), you can deal with these technicalities on the night before the STOC/FOCS deadline.

Here is how the proof proceeds for INDEXING:

Lemma: Consider any A ⊂ {0,1}^n and B ⊂ [n] such that |A| ≥ 2^{n+1} / 2^{|B|/2}. Then E_{A,B}[f(x,y)] < 0.95.

Proof: Assume for contradiction that Pr_{A,B}[f(x,y) = 0] ≤ 1/20. Let an ugly row be a row x∈A such that Pr_B[f(x,y) = 0] > 1/10. At most half of the rows are ugly by the Markov bound. Discard all ugly rows, obtaining A'⊂A, with |A'| ≥ |A|/2. For every remaining x∈A', we have x[y]=0 for at least 0.9 |B| indices from B (this is the definition of f). Call these good indices. There are (|B| choose 0.9|B|) = (|B| choose 0.1|B|) choices for the good indices. For every set of good indices, there are at most 2^{n - 0.9|B|} vectors x which are zero on the good indices. Thus:

|A'| ≤ (|B| choose 0.1|B|) · 2^{n - 0.9|B|} ≤ 2^n · 2^{O(0.1 |B| lg 10)} / 2^{0.9|B|} = 2^n / 2^{0.9|B| - O(0.1 |B| lg 10)}

This is at most 2^n / 2^{|B|/2}, at least for a sufficiently small value of the constant 1/10 (I never care to remember the constants in the binomial inequalities). We have reached a contradiction with the lemma's guarantee that |A| is large.

Combine this with the richness lemma, applied with a small constant error ε. We have |A| ≥ 2^n / 2^{O(a)} and |B| ≥ n / 2^{O(a+b)}, so it must be the case that a = Ω(|B|) ≥ n / 2^{O(a+b)}. We have obtained a tight lower bound.

Wednesday, April 1, 2009

It is well known that d-regular bipartite graphs have a perfect matching. Applying this iteratively, it even follows that such graphs can be decomposed into the union of d perfect matchings. (The existence of a matching follows from Hall's theorem, which is probably beaten into students during any combinatorics course.) But how easy is it to find a perfect matching in a d-regular bipartite graph?

The following classic gem finds a matching in O(m) time --- but, oddly enough, it assumes d is a power of two!

Find an Eulerian circuit of the graph in O(m) time, which exists for any even d. Now throw out the 2nd, 4th, 6th, etc. edges of the circuit. You are left with a graph that is (d/2)-regular. Recurse to find a matching.

Why do you get a regular graph when you throw out all edges on even positions? Think about it: the edges of the circuit go back and forth between the two parties of the graph. For any vertex v, its incident edges are grouped into d/2 pairs of consecutive edges. One edge of such a pair appears on an odd position in the circuit, and one on an even position.

This algorithm is due to Gabow and Kariv [STOC'78]. By contrast, obtaining a linear-time algorithm for d not a power of two took until 2001 [Cole-Ost-Schirra]. In SODA'09, [Goel-Kapralov-Khanna] achieved sublinear algorithms for d greater than sqrt(n).
The idea is that matchings are so frequent that you can still find one (with care) in a random sample of the graph. My interest is not so much with the sublinear complexity, but more with matching in general bipartite graphs (not regular). After you see this beautiful algorithm for matching in regular graphs, it is hard not to feel a bit optimistic that general bipartite matching may also be solvable in O(m) time.
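The halving trick from the April 1 post is short enough to write out in full. The sketch below follows the description there: find an Eulerian circuit of each component, keep every other edge, and recurse until the graph is 1-regular. The edge-list representation and the tiny test case are chosen here only for illustration.

from collections import defaultdict

def regular_bipartite_matching(n, d, edges):
    # perfect matching of a d-regular bipartite (multi)graph, for d a power of two;
    # left and right vertices are both numbered 0..n-1, edges is a list of (left, right)
    if d == 1:
        return edges                          # a 1-regular graph is already a matching
    adj = defaultdict(list)                   # vertex -> ids of incident edges
    for i, (u, v) in enumerate(edges):
        adj[u].append(i)
        adj[n + v].append(i)                  # right vertices offset by n
    used = [False] * len(edges)
    kept = []
    for start in range(2 * n):                # one Eulerian circuit per connected component
        stack, circuit = [(start, None)], []  # (vertex, edge used to enter it)
        while stack:
            v, in_edge = stack[-1]
            while adj[v] and used[adj[v][-1]]:
                adj[v].pop()
            if adj[v]:                        # walk along an unused edge
                e = adj[v].pop()
                used[e] = True
                a, b = edges[e][0], n + edges[e][1]
                stack.append((b if v == a else a, e))
            else:                             # dead end: emit the edge we came in on
                stack.pop()
                if in_edge is not None:
                    circuit.append(in_edge)
        circuit.reverse()                     # edge ids now in circuit order
        kept.extend(edges[e] for e in circuit[::2])   # drop the 2nd, 4th, 6th, ... edges
    return regular_bipartite_matching(n, d // 2, kept)

# tiny sanity check on a 2-regular graph with 3 vertices on each side
print(regular_bipartite_matching(3, 2, [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 0)]))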
{"url":"http://infoweekly.blogspot.com/2009_04_01_archive.html","timestamp":"2014-04-17T15:26:06Z","content_type":null,"content_length":"101398","record_id":"<urn:uuid:46af7e6b-9658-4b86-b691-b2447a48f071>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
chopchop (Experimental WEP attacks) : Unix/Linux - Page 4 Hi sylvain, thanks for your answer. In fact, the patch delivered with chopchop is for linux-wlan-ng, and I don't use it (only orinoco_cs driver). Maybe I'll test with linux-wlan-ng patched whith chopchop patch. Did anyone suceed in using orinoco drivers with chopchop ? mfenetre wrote:Hi sylvain, thanks for your answer. In fact, the patch delivered with chopchop is for linux-wlan-ng, and I don't use it (only orinoco_cs driver). Maybe I'll test with linux-wlan-ng patched whith chopchop patch. Did anyone suceed in using orinoco drivers with chopchop ? ok so you have to use wlan-ng patched drivers. Otherwise it won't work (that's the case for orinoco). It can work with hostap also I think sylvain wrote:ok so you have to use wlan-ng patched drivers. Otherwise it won't work (that's the case for orinoco). It can work with hostap also I think He has to use the wlan-ng patch. I didn't manage to make hostap work. mfenetre, just a reminder: You need an AP, an associated card, and an injection card using the wlan-ng patched module (Or just associate the wlan-ng card, yank it out, back in, inject, and hope the it hasn't been disassociated). If you don't know where to begin, have a look at the auditor CD, chopchop is included: Mathematical origin of 5% and 13% in WEP attacks Not sure if this question fits in this forum but I'm sure to be corrected if I'm wrong, so here goes... What is the origin of the 5% and the 13% probabilities in the WEP attacks? I have read the FMS and H1kari papers and understood them (I think). Now, I know that: Prob of success = e^(-3) = 5% (when all X, Y and Z are not swapped) Prob of success = e^(-2) = 13% (when two of X, Y and Z are not swapped) I already know that they come from modeling the remaining KSA swaps as random, but how are these stats derived? On Pg. 9 of the FMS paper there is a reference to the following formula: where B is the # of the byte of the SK being attacked and N is the length of the keystream. But this formula doesn't seem to apply to my question because there aren't any logical values of B and N that make (2B/N) equal to 2 or 3. Is there a general form of some crypto-analytical formula that applies here? Thanks for the help! Answer to my own question: origin of 5% When I now see the answer, I want to kick myself for not figuring it out sooner... For the FMS attack to work, the first two bytes of the IV and the target byte of the secret key must survive the KSA swapping algorithm unchanged after the expected swaps occur. If we model the remaining swaps as random, then the chance that the three bytes in question are unchanged is 5%. This number comes from aggregating the probability that a byte is unchanged over each step over the three bytes. P(1 byte is unchanged after one random swap) = (1 – 1/N) N is the length of the resulting keystream. P(1 byte is unchanged after N random swaps) = (1 – 1/N)^N P(3 bytes are unchanged after N random swaps) = ((1 – 1/N)^N)^3 The expression, ((1 – 1/N)^N)^3, can be modeled as e^-3 because as N grows to be of any applicable length, the value of the expression asymptotically heads for 0.05. In the end, the value of N is irrelevant as the value is always just below 5%. If we were to try to keep two bytes the same, P=((1 – 1/N)^N)^2 or or e^-2 or 13%. Thanks anyway. It's a bit incorrect. Basic formula is (1-k/n)^n ~ exp(-k) when n is sufficiently large (mathly speaking lim of the left term when n grows to infinity is exp(-k)). 
In the papers, you get quantities like (253/256)^(256-p-1) (probability the (256-p-1) bytes of the KSA are different from 3 given values), with p=3,... First you approximate the exponent with 256, and you rewrite (1-3/256)^256, which then you approximate with the limit exp(-3). cf http://mathworld.wolfram.com/ExponentialFunction.html Origin of 5% This makes sense, thanks. Perhaps it is a case of 6 and one-half-dozen. I got my explanation from "Attacks On RC4 and WEP" by FMS: "The probability that three locations will not be pointed to by a pseudo random index during the remaining N - 1 - x rounds is better than ((1-1/N)^N)^3 ~ e^-3 ~ 5%." can be reduced to and finally reduced directly to Anyway, thanks for the general formula - crystal clear now. half dozen is six It's the same. Let M = 3N, then ((1-1/N)^N)^3 = (1-3/M)^M Madory wrote:This makes sense, thanks. Perhaps it is a case of 6 and one-half-dozen. I got my explanation from "Attacks On RC4 and WEP" by FMS: "The probability that three locations will not be pointed to by a pseudo random index during the remaining N - 1 - x rounds is better than ((1-1/N)^N)^3 ~ e^-3 ~ 5%." can be reduced to and finally reduced directly to Anyway, thanks for the general formula - crystal clear now. KoreK wrote:He has to use the wlan-ng patch. I didn't manage to make hostap work. mfenetre, just a reminder: You need an AP, an associated card, and an injection card using the wlan-ng patched module (Or just associate the wlan-ng card, yank it out, back in, inject, and hope the it hasn't been disassociated). If you don't know where to begin, have a look at the auditor CD, chopchop is included: Hi KoreK I use the new Auditor (120305-01) on my HP OmniBook XE2 Laptop. I also use the Orinoco Silver WiFi card. Is the necessary chopchop patch already installed on the Auditor CD? Must i apply any patches? I've got the same problem like mfenetre few posts over me. PS: Please dont flame me for my (maybe stupid) question... I searched a answer in google, readme's and this forum several hours/days. PPS: R.E.S.P.E.C.T. to Korek and Devine for her great tools! drivers are already patched in new auditor version..and there is an auditor forum...maybe it's a better place to ask...not sure you really search...probably too lazy Beep wrote:Hi KoreK I use the new Auditor (120305-01) on my HP OmniBook XE2 Laptop. I also use the Orinoco Silver WiFi card. Is the necessary chopchop patch already installed on the Auditor CD? Must i apply any patches? I've got the same problem like mfenetre few posts over me. PS: Please dont flame me for my (maybe stupid) question... I searched a answer in google, readme's and this forum several hours/days. PPS: R.E.S.P.E.C.T. to Korek and Devine for her great tools! OK, which one of you is the chick? "Make yourselves sheep and the wolves will eat you." ~ Benjamin Franklin Sons of Confederate Veterans G8tK33per wrote:OK, which one of you is the chick? You need a new pair of stockings , cabin boy? As for Beep, if you bothered reading my previous posts... And while I am at it, noise_gaining why don't you take a math class... Probably the header file isn't in the path. Most often this type of thing occurs because the code's author assumes one particular path, and your system is slightly different. Try using an explicit path, for example, change: #include stdio.h #include /usr/src/stdio.h (of course the path preceding the header file name would be the required one for your system.) Stop the TSA now! Boycott the airlines.
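The approximations traded back and forth earlier in this thread are easy to check numerically for N = 256:

import math

N = 256
p3 = ((1 - 1/N) ** N) ** 3   # three bytes untouched by the remaining swaps
p2 = ((1 - 1/N) ** N) ** 2   # two bytes untouched
print(p3, math.exp(-3))      # both close to 0.05
print(p2, math.exp(-2))      # both close to 0.135
print((1 - 3/N) ** (N - 3 - 1))   # the (253/256)^(256-p-1) form with p=3, also about 0.05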
{"url":"http://www.netstumbler.org/unix-linux/chopchop-experimental-wep-attacks-t12489-45.html","timestamp":"2014-04-20T13:19:07Z","content_type":null,"content_length":"45625","record_id":"<urn:uuid:d4b055db-5ec3-40af-b73a-08cd2f61bd23>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Speed has vast appeal to our students. It embodies risk, excitement, prowess and fun. Speedometers, a basic speed meter that displays instantaneous speed, also offers many applications for educators to teach math. Here are just a few… Basic Addition & Subtraction while reading meters: The speed limit for highway 95 is Your vehicle is traveling at What is the difference between the two speeds? Integer Addition: Using the same question above, state the difference between the two speeds as a positive integer if speeding and a negative integer if below the speed limit (or vice versa if you would like) Percentage Above or Below: --Using the difference in the posted speed and the actual speed, calculate the percentage of difference. --Using the percentage difference and the posted speed, calculate the actual speed Elementary Hypothesis Testing: Based on the error rates of speedometers, what is the likelihood that the car traveling 1 mph over the speed limit is actually speeding? The game is simple and fun. It could be used for an activity day and a good way to engage the students’ minds during recess. I will explain the basic way and then add some optional twists Parallel Zombie Tag. Zombies have a signature walk in which their arms are extended in a parallel fashion, like so The game involves dedicating one individual the parallel zombie while the rest are pedestrians. When the pedestrian gets tagged, they two become zombies. Now, the only defense a pedestrian has from the ‘Parallel Zombie’ is their ‘Perpendicular Cross’, in which they hold their fingers like so, The catch is that while holding up their ‘Perpendicular Cross’, pedestrians cannot move. *************Added Twists***************** Given that a pedestrian could hold up their 'perpendicular cross' forever and never get tagged you could set a time limit of 10 seconds. This would ensure that everyone will eventually become a parallel zombie. If you are dealing with more advanced students you could make them count to 30 by 3's or 20 by 2's, etc. ************Additional Notes About The Game************* I think this game would serve as a great transition into exponential growth or exponents. For example, you could time how long it takes everyone to become a parallel zombie when starting with 1 zombie, 2 zombies, 5 zombies, etc and then review the times with your students. The time involved should exponentially decrease. Proofs/Formulas With A Memo Pad And Some Detective Work (all grades) Post 1 of 3 How much do we value information that is already packaged and ready for us? Obviously it depends but when it comes to formulas and proofs I believe your students will value their work much more if they value the formula/proof. One way to implement this is to take the formula away from them and make them derive it themselves. Lets start with something basic and then latter we can move on to something more challenging. Since we have readers from all over the spectrum of mathematics, I would like to provide three examples of deductive reasoning for elementary, middle and high schools stretched out over three posts The Right Triangle and the Rectangle: The goal of this exercise is for your students to informally prove [to you] the area formula of a right triangle is true What Each Student Will Need 1. Two pieces (two colors) of construction paper 2. Scissors 3. Ruler 4. Tape or glue 5. Marker 6. Memo Pad (can be made from stapling scrap paper or using a post-it pad) The Right Triangle and the Rectangle: 1. Write At Top Of The Memo Pad “What Mr. 
[fill in the blank] Accepts To be True” 2. Write the formula for area of a triangle on the board 2. Ask the students to read the formula out loud 3. Tell them you don't believe them. Ask them to prove it! 4. For the first 5 minutes allow them to play, allow them to experiment ways to prove the formula is true. Ask them to share their ideas with each other and you. Applaud each idea but relay to them that it's still not enough proof. Tell the class that just like a detective doesn't always believe the witness, we will need to build up a case by starting with what we do believe. 5. Using the materials above, ask each student to draw a 4 by 6 inch rectangle on both pieces of construction paper 6. Ask them to calculate the area of the rectangle by multiplying its length by its width (4x6 = 24inches squared). (Tell them that you accept that the area of a rectangle is the length multiplied by its width but you don't accept the area of a triangle is ½ the base times its height. We are going to add what it is you accept to be true in their memo pad) 2. Add The Following To Your Memo Pad I. The area of a rectangle is its length multiplied by its width(L x W) 2. Next, ask them to label the width and the length of each rectangle 3. Now propose the question of what the area would be if you cut the rectangle in half (The answer should be 12 inches squared) 4. Ask them to prove it by cutting one of the rectangles in half either horizontally or vertically through the middle of the rectangle as shown below and calculating the new area (will be either 6x2 or 4x3 = 12) 2. Tell them you now believe them, that a rectangle cut in half will yield half the area and add this to your set of Axioms II. A rectangle cut exactly in half will yield two pieces, both with half the original area 2. Now pose the question, is it possible to get a triangle from half a rectangle? Is it possible to get two triangles of equal size from exactly half a rectangle? 3. Ask them to prove it. 4. Ask them to cut the other rectangle (on the other piece of construction paper) in half as well but using the following criteria 1. The rectangle must be cut in half 2. Each piece must be the same shape and size 3. The resulting shapes must be two triangles 2. The students should quickly figure out that if they cut a rectangle in half diagonally they meet each of the criteria above. 3. Add this to your set of axioms III. It is possible to get two triangles of equal size from half a rectangle Now comes the hard part! Putting it all together and making the students see the solution. Let me show you how I might play-out this scenario if it were my classroom. Me: “OK!, I get it!...you guys were right. You've proved the area for a right triangle is 1/2(b)(h). Let's move on to something else” Class: “Wait, what?...What are you talking about? Show us” Me: “Well, we agree that I. Area of a rectangle is Length x Width right? (Draw a 4 x 6 rectangle on the board and label the area 24 inches squared) Class: “yes” Me: “We've also agreed that II.A rectangle cut in half will yield two pieces, both with half the original area. (Cut the rectangle in half diagonally and label each side 12 inches squared) Class: “yes” Me: “And finally, we agree that III. It is possible to get two triangles of equal size from half a rectangle. Me: “Well, a Right triangle is nothing more than rectangle cut in half diagonally--There's the ½ part of the equation. And the base and height are simply new names for the length and width. 
Thus, it makes sense that ½(b)(h) is the area of a right triangle! There is a lot of adversity toward calculating instruments these days. Although there may be some merit to the accusation, I tend to side with the position that calculators have a role in the classroom and when used properly can allow educators to concentrate on the bigger parts of a math problem. Although this is typically not the place for such a discussion, for those interested in my opinion see my writings here This posting is about limiting the usage of a calculator to a few buttons. The purpose of the exercise is to show the relationships of addition/subtraction or multiplication/division or even exponents/radicals. We do this by asking our students to solve a problem in one format but show its proof via another format. Q: What is the product of 5 and 6? A: 30 Our answer is of course 30. However, instead of accepting this as out answer let's as for an informal proof. Something like this. Q: What is the product of 5 and 6? A: 30; because the quotient of 30 and 6 is 5. How to use this in the classroom: Create 5 problems you want your students to be able to solve. 1) 8 - 3 2) 6 x 2 3) 8 / 2 4) etc 5) etc Create a column for their answer and a column for their informal proof. Base their grade on the informal proof. Problem Answer Informal Proof 1) 8 – 3 ____5___ ___Because 5 + 3 is 8___ 2) 6 x 2 _______ ____________________ 3) 8 / 2 _______ ____________________ 4) etc _______ ____________________ 5) etc _______ ____________________ Fantasy Football is an extremely popular and fun past-time for football enthusiasts. Especially for those, like me, who were never that great at the sport. This post gives you some ideas of ways to connect it with math, however, this is not something you can throw together in the last moment. It will take proper planning and time, but, in the end, your student may be having the most fun they've ever had in a math class. The game could be built on any theme you would like, combining like terms, simplifying fractions, solving equations, however, the theme of this particular post is factoring. The goal of the project is to hold a live draft with your classroom in which the students who complete the hardest math problems will receive the tops pics in the draft. I will provide some advice for how to implement the program toward the end of the post, although, I would love any feedback. Note 1: It helps to understand the metrics of fantasy football. If you have never experienced fantasy football but are interested in using this exercise, it may be helpful to review the game. Here is a link How to Play Fantasy Football. The classroom activity may also run much smother if you first practice holding a draft with friends or relatives. Note 2: It will be much easier if you split the class up in 2 teams or, at most, 4 teams. Any more teams than this would pose difficulty in terms of management and time. As it stands, the typical fantasy football team requires 15 players (including 9 starters and 6 backups). Thus, a classroom of 24 students divided into groups of 4 would yield 6 teams. 6 Teams of 15 players each would require 90 Problems. This would be a hard feat to accomplish within a one hour class. I will give strategies for ways around this below. However, 24 students divided into two teams would only yield 30 Items You Need: Multiple Large Posters Cut-outs or printouts of the top fantasy football players A draft sheet for each student Access to the top 100 fantasy football pics for each team. 
Postcards with problems on the front and answers on the back How to set it up: (Honestly I can think of multiple ways to do this of which I will choose the easiest one and, in time add others) 1. Print off the top 50 fantasy football players (You can find them here) 2. Proceed to divide the top 50 players into 4 parts (note that two of the parts will need to have 13 players) 3. Since there will be no kickers or defenses in the top 50 players (as these picks come later in the draft, proceed to subtract four of the players and add two defenses and two kickers). 4. You should now have 50 players divided into four uneven groups (2 groups of 12, 2 groups of 13) 5. Next, create (or borrow) 50 problems that you would like your students to be able to solve. 6. Place the hardest 12 problems in the group with the top 12 picks, the next 12 hardest problems in the second group and so forth. 7. Using multiple poster boards, create positions for each of the 50 players the students will be drafting from (this could simply be blank squares on a poster board with the players name and position beside it) 8. Optional, cut out or print out pictures of each player to paste on the poster board by their name and position. 9. Using index cards write each of the 50 math problems on the front with their answers on the back. 10. Either keep the postcards in four stacks or tape each postcard in the correct position under the players name How To Play: 1. If using two teams, have students from each team huddle up in a circle to figure our who they wish to draft first. 2. Flip a coin for who drafts first 3. Have students pick the player they wish to draft. Once chosen, have them select two teammates from their group to go to the back of the room and solve the problem. 4. Note: I'm not sure if I would allow the students to see the problems prior to picking the player. What you might run into is your smarter students will be solving problems while the other team is drafting. Even still, your team might solve it correctly, but the other team may pick that particular player and thus they did a lot of work for nothing. What's your thoughts? 5. Using some type of stop watch or countdown, set the time to 2-4 mins based on the toughness of the problem. While your students are on the clock, have the opposite team who is waiting try to figure out the problem as well. 6. If the student answer it correctly within the allotted time, they get the pick. If they do not, than proceed to ask the opposite team what the correct answer is. 7. If they answer correctly: they may keep the player (if they so choose to) as well as pick any other player on the board for the opposite team. 8. If they answer incorrectly, nothing happens and they go back to the drafting. Note one part 7: Notice that if the drafting team answers incorrectly, the other team has a choice: They may either answer the question correctly to get the player as well as choose a player for the opposing team OR they may pass on the question and proceed to ask for a new question todraft the player they wish. 9. Students will continue the draft until all positions on their roster is full. 10. Here are the 15 players and their positions they must choose from. 
Draft Sheet Each For Each Student QB ________________________ RB ________________________ RB ________________________ WR ________________________ WR ________________________ WR ________________________ TE ________________________ K ________________________ Defense ________________________ Any position ________________________ Any position ________________________ Any position ________________________ Any position ________________________ Any position ________________________ Any position ________________________ **I'm anxious to hear any feedback you guys have on this**
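If you want to prepare the draft board ahead of time, a small script can pair a ranked player list with problems sorted from hardest to easiest, following setup steps 1-6 above. This is only a sketch: the player names and problem texts below are placeholders, not part of the original activity.

```python
# Rough helper for building the draft board described in the setup steps.
# The player and problem entries are placeholders; substitute your own
# top-50 ranking and 50 classroom problems ordered hardest to easiest.
players = [f"Player {i + 1}" for i in range(50)]    # rank 1 = most valuable pick
problems = [f"Problem {i + 1}" for i in range(50)]  # index 0 = hardest problem

# Split the ranked players into four groups (13, 13, 12, 12) and attach the
# hardest problems to the most valuable picks, as the post suggests.
groups, start = [], 0
for size in (13, 13, 12, 12):
    groups.append(list(zip(players[start:start + size], problems[start:start + size])))
    start += size

for number, group in enumerate(groups, 1):
    print(f"Group {number}")
    for player, problem in group:
        print(f"  {player:<10} -> {problem}")
```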
{"url":"http://handsonmath.blogspot.com/2012_09_01_archive.html","timestamp":"2014-04-21T12:08:55Z","content_type":null,"content_length":"125052","record_id":"<urn:uuid:3e719e07-54a1-487e-8cd0-54984d064a6d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Question to make you think 01-05-2005 #1 Registered User Join Date Mar 2003 Question to make you think This isn't a homework question or anything. Actually, it's a math related question that I've had to think about in the implementation of a physics program I'm perpentually trying to get to work in a realistic manner. Now, I already know the 'correct' answer, but it took me a couple of days before I was like "oh, DUH" and then I kicked my cat*. I'll post the correct answer if nobody can get it. So here's the scenario: A sphere (mathematically perfect sphere) is moving across a plane (a mathematically perfect horizontal plane). Lets just say we know the force of friction at any instant (the actual numbers don't matter), we know its linear and angular velocity at any instant, etc. How long does it take for the sphere to lose all of its kinetic energy (comes to a complete stop)? I'm hoping that there aren't going to be any smart asses that are going to have an answer in like five minutes after I hit 'submit', in fact I'm kind of hoping you guys won't know otherwise I'm just kind of going to feel stupid. * i didn't REALLY kick my cat, but i wanted to Is the sphere accelerating? If it's a constant velocity it's a simple question of how much the friction is making it decelerate. If it's accelerating then it's just a question of interacting The only forces on the sphere is the force of the ground pushing up and the force of friction. And, it's not friction that brings it to a rest arg, I guess we don't have any other takers (i.e anyone that cares). I guess I will just give the answer: it never comes to rest!!! I can prove this too for anyone that cares, but, well you know how that goes The reason you see 'spheres' come to rest is that the surface and the object itself deforms in front of the path of the ball, such that the force pushing it upward actually induces a spin in the opposite direction that friction makes it spin, and this basically 'bleeds' off the kinetic energy and eventually makes it stop. But, when you have a mathematical sphere, there is no such thing as surface deformation until you program it that way. > arg, I guess we don't have any other takers (i.e anyone that cares) I was cooking I don't see how a sphere has no coefficient of friction, even if it is perfect, unless you meant only friction with the ground, which is silly, since there'll be air resistance. If it's in a vacuum, it's an easy question, yeah. You are definitely right about the air resistance thing, and sorry if i rushed it a bit. However, the key here is that the sphere *does* have friction acting on it, but friction doesn't bring the sphere to a stop! The kinetic energy oscillates back and forth between rotational and linear kinetic energy. For a while, friction pushes in the opposite direction of the linear motion of the sphere. But, the problem is that it also increases the rotational velocity of the sphere. Eventually, the velocity of the point on the sphere making contact with the ground reverses. 
When this happens, friction pushes in the opposite direction which takes away from the rotational kinetic energy of the sphere, but *increases the linear kinetic energy back in the same direction the sphere originally started moving in* and as I said before, the reason you don't see this in real life is because, like you said, air resistance stops objects, but also because the surface deforms in front of the sphere, and there's not actually any such thing as a mathematically perfect sphere anyway...all of these factors are what actually bring the sphere to rest. I dunno, most people are having dreams about girls and sports cars, here I am thinking about spheres rolling on planes I'm gonna need some sort of a proof or something. I suck at physics without math. who needs a woman when you have a sphere. □ "Problem Solving C++, The Object of Programming" -Walter Savitch □ "Data Structures and Other Objects using C++" -Walter Savitch □ "Assembly Language for Intel-Based Computers" -Kip Irvine □ "Programming Windows, 5th edition" -Charles Petzold □ "Visual C++ MFC Programming by Example" -John E. Swanke □ "Network Programming Windows" -Jones/Ohlund □ "Sams Teach Yourself Game Programming in 24 Hours" -Michael Morrison □ "Mathmatics for 3D Game Programming & Computer Graphics" -Eric Lengyel arg, I guess we don't have any other takers (i.e anyone that cares). I guess I will just give the answer: Interestingly a very similar answer is correct for the length of time required for a perfect cube (lying face down) to come to rest when sliding across a perfect plane. (Though from the physics point of view its a bit fishy talking about resistance and homogenious perfect surfaces in the first place) Entia non sunt multiplicanda praeter necessitatem You are definitely right about the air resistance thing, and sorry if i rushed it a bit. However, the key here is that the sphere *does* have friction acting on it, but friction doesn't bring the sphere to a stop! If its a perfect sphere then surely the area of contact is infinitely small, so where is the friction coming from? Entia non sunt multiplicanda praeter necessitatem The Brain who needs a woman when you have a sphere. You mean 2 spheres? If its a perfect sphere then surely the area of contact is infinitely small, so where is the friction coming from? It must be infinitely small friction. -Stephen Cope Okay I am done checking. I am pretty sure I am going to go insane. If you guys don't actually read it, it's cool, but don't argue with me unless you do Sweet, I like proofs The sphere is rolling along an incline such that static friction is working on it. Static friction means that the relative velocity at the point of contact is zero. This might seem like a contradiction, but it's correct. The reason it is correct is because the point where the sphere touches the ground has two sources that contribute to its total velocity at any instant: the contribution from the linear velocity of the sphere, and the contribution of the angular velocity of the sphere. The magnitude of the angular contribution is the magnitude of the angular velocity (in radians) times the radius of the sphere. The direction it points is exactly opposite that of linear velocity. How do I know this? I know this because it is static friction, which, as defined, means that the contact point velocity is zero (otherwise it would be slipping, and it is kinetic friction). 
So, what I have so far: Friction = static friction LinearVelocity + (AngularVelocity * Radius) = 0 (because there is no slipping) (AngularVelocity * Radius) = -LinearVelocity So, now I need to show that it never loses kinetic energy. I am going to show that for every decrease in linear kinetic energy the angular kinetic energy gains exactly that amount. Linear kinetic energy is 1/2mv^2, which arises from force times displacement (or, to be more technically, the dotproduct between the force and the displacement, because there is such a thing as negative work, which is what is technically being done whenever the force and the displacement don't act in the same direction). We have an average force acting on the sphere for a given displacement of the center of gravity of the object. Note that to be super technically the friction force may oscillate, but it always results in an average equivalent force which represents the same thing. So, the ball starts rolling forward with the initial kinetic energy you give it when you push it. Then over some time duration the center of gravity of the ball displaces some distance with an average friction force The magnitude of the change of the linear kinetic energy of the ball is: FrictionForce * Distance (at first this is negative because the linear kinetic energy decreases) Now, to angular kinetic energy. The angular kinetic energy is 1/2 I * W ^2 where I is the moment of inertia (we don't care about it) and W is the angular velocity, which is equivalent to a Torque times the Angle it passes through while the torque is applied. And, basically, here is what actually proves that I am right: Torque * Angle = Force * Distance Torque = Force * Perpendicular distance (from center) Force = FrictionForce Perpendicular distance from center = Radius Torque = FrictionForce * Radius Angle = Distance / Radius (FrictionForce * Radius) * (Distance / Radius) is the angular work being done on the object, radius cancels, so you get: FrictionForce * Distance = FrictionForce * Distance Which is the same as saying: |AngularWorkDone| = |LinearWorkDone| the magnitude of the angular work being done is equal to the magnitude of the linear work being done. But, because of what I showed above: (AngularVelocity * Radius) = -LinearVelocity So, the friction force is always doing positive work on one, and negative work on the other. This makes sense, because when you push a ball 'forward', it moves forward, but spins 'backward'. Subsequently, friction never makes it lose all of its kinetic energy...what brings it to rest is outside forces (i.e the fact that real spheres don't only touch at one point, the fact that real horizontal planes aren't perfectly flat or perfectly horizontal, air resistance, the expansion of the universe, my god damn cat, etc). Okay, i just need to stop drinking coffee. Last edited by Darkness; 01-05-2005 at 06:05 PM. Interestingly a very similar answer is correct for the length of time required for a perfect cube (lying face down) to come to rest when sliding across a perfect plane. It's because real cubes touch at multiple points, and the torque induces from the normal force at each contact point opposing each other. in the strictest sense, real spheres are more like the (Though from the physics point of view its a bit fishy talking about resistance and homogenious perfect surfaces in the first place) it's because, it's a computer program If its a perfect sphere then surely the area of contact is infinitely small, so where is the friction coming from? 
The fact that it's a computer program. One of the constraints I had on this computer program was that objects don't penetrate. But, one of the solutions I thought of for handling this was to let the sphere penetrate the surface just a little bit, such that the point of contact is not just a point, but rather the cross section area of where the plane intersects the sphere. And, I had a similar question from the get go from one of my physics professors. The friction force is the integral of pressure over an area. In physics simulation programs, you have to take the limit of everything, i.e. when spheres collide, you do an 'impulse' which is an instantaneous change in momentum (the time integral of force). In real life, when impulses occur, they don't really happen instantaneously...large, but bounded forces act over an insanely short period of time (3-20 ms for a ball hitting a baseball bat). In a computer program, there's no real way to know these types of things so you take the limit. Similarly, don't think that the contact area is *really* zero, it's the limit as it *approaches* zero! Physics simulations are crap because you have to make about 80 bazillion assumptions. But it's sweet when it works.

Well, from a more physical point of view, if you had the cube sliding over the plane, and the cube was too 'perfect', then you'd still have friction, it just wouldn't behave in any way near that approximated by the F = mu*N equation (it might cold weld, for example). And yeah, I realize that this is tangential to the statement that this is a computer program, not a real physical system. Of course, on the computer, it might slow down... Or even speed up. (Truncation/rounding error.)
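The thread's headline claim is easy to check numerically. The sketch below is my own illustration, not code from the thread, and the mass, radius, friction coefficient, initial speed and time step are arbitrary: it integrates a solid sphere that starts out sliding, so kinetic friction converts translational into rotational kinetic energy until the contact point stops slipping, after which the total kinetic energy stays constant and the idealized sphere never comes to rest.

```python
# Solid sphere sliding on a plane with kinetic friction. While the contact
# point slips, friction slows the translation and spins the sphere up; once
# v = w*R the contact point is at rest, friction does no further net work,
# and the kinetic energy stops changing. All numbers are arbitrary.
m, R, g, mu = 1.0, 0.1, 9.81, 0.3        # mass, radius, gravity, friction coeff.
I = 0.4 * m * R**2                       # moment of inertia of a uniform sphere
v, w = 2.0, 0.0                          # initial linear and angular velocity
dt = 1e-4

def kinetic_energy(v, w):
    return 0.5 * m * v**2 + 0.5 * I * w**2

for step in range(200001):
    if step % 20000 == 0:
        print(f"t={step * dt:5.1f} s   v={v:.3f}   w*R={w * R:.3f}   "
              f"KE={kinetic_energy(v, w):.4f} J")
    slip = v - w * R                     # velocity of the contact point
    if abs(slip) > 1e-3:                 # slipping: kinetic friction acts
        F = -mu * m * g * (1.0 if slip > 0 else -1.0)
        v += (F / m) * dt                # decelerates the translation...
        w += (-F * R / I) * dt           # ...while torquing up the spin
    else:
        w = v / R                        # rolling without slipping: no net work
```

The printout shows the kinetic energy dropping only during the initial slipping phase and then holding steady, which is the point the posters were arguing: in the ideal model, something other than contact friction has to bring the sphere to rest.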
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/60335-question-make-you-think.html","timestamp":"2014-04-18T09:46:43Z","content_type":null,"content_length":"109544","record_id":"<urn:uuid:7c9bd5be-2c29-4ce9-b786-d31b2fbfb19c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Bootstrap standard errors Daniel posted on Monday, February 14, 2005 - 12:10 pm Is use of bootstrap standard errors for calculating indirect effects better than WLSMV-derived stanadard errors when working with categorical outcomes? When I calculate indirect effects without bootstrap, I get significant indirect effects. However, when I bootstrap, I get non-significant effects. Linda K. Muthen posted on Monday, February 14, 2005 - 3:53 pm It is sometimes the case that bootstrap standard errors are larger than model estimated standard errors. This would result in some effects becoming non-signficant. Daniel posted on Tuesday, February 15, 2005 - 4:04 am Which method is generally more trustworthy with ordered-categorical data? Is there a rule-of-thumb? Linda K. Muthen posted on Tuesday, February 15, 2005 - 6:26 am I don't know. I think it would depend on many factors. Daniel posted on Friday, February 18, 2005 - 9:53 am Linda, one of my indirect effects (a*b) in this mediation analysis is not significant. However, each individual path (path a and path b) from the exogenous variable to the slope factor is significant. Do you have any idea how I would report this finding? Essentially, the pieces are significant but not the whole indirect effect. BMuthen posted on Friday, February 18, 2005 - 11:04 am This can happen. For example, if the a and b parameter estimates are positively correlated, this increases the standard error yielding non-significance. J. Williams posted on Tuesday, February 22, 2005 - 5:30 am What is the sample size in the analysis where the z was sig. and the percentile was not? We found this to occur rarely in simulation data with continuous variables. In most cases the significant z is a Type I error though. When sample size is 200+, the z is never signifcant if the percentile is not. bmuthen posted on Saturday, February 26, 2005 - 5:55 pm Let's see if Daniel replies. Marion posted on Monday, July 04, 2011 - 7:17 am Hello, I'm analysing a mediation model and want to test the indirect path with bootstrapping. Mplus is doing it, but never completely. Number of bootstrap draws Requested 5000 Completed 2950 Can you help me? Why is Mplus doing that? Does it tell something about my model? Bengt O. Muthen posted on Monday, July 04, 2011 - 6:30 pm That typically says that your model is difficult to estimate for your data, so that for different subsets of the data the estimation does not converge properly. For instance, the sample size may be small or the model ill-fitting. You can send your files to support if you want guidance on that. Marion posted on Monday, July 04, 2011 - 11:55 pm Dear Mr. Muthen, I would love to get some guidance on that, because I've already tried lots of things. What do I have to send to you? Whats the e-mail address? Thank you so much. Bengt O. Muthen posted on Tuesday, July 05, 2011 - 9:19 am If you have a valid Mplus support contract, you should send it to support@statmodel.com. Back to top
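To make the contrast in this thread concrete, here is a small, self-contained illustration written in ordinary Python rather than Mplus: it computes a normal-theory (delta-method/Sobel) z for an indirect effect a*b and a bootstrap percentile interval for the same effect. The simulated paths (a = 0.4, b = 0.3), the sample size, the seed and the number of draws are arbitrary choices, and the outcome model omits the direct x-to-y path for brevity, so this is only a sketch of the two approaches being compared, not a reproduction of any analysis above.

```python
# Compare a Sobel (delta-method) z-test with a bootstrap percentile interval
# for an indirect effect a*b in a simple simulated mediation model.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # mediator, path a = 0.4
y = 0.3 * m + rng.normal(size=n)          # outcome, path b = 0.3

def slope_and_se(u, v):
    b1, b0 = np.polyfit(u, v, 1)          # simple regression slope and intercept
    resid = v - (b1 * u + b0)
    se = np.sqrt((resid @ resid / (n - 2)) / np.sum((u - u.mean()) ** 2))
    return b1, se

a, se_a = slope_and_se(x, m)
b, se_b = slope_and_se(m, y)
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

boot = []
for _ in range(2000):                     # resample whole rows with replacement
    i = rng.integers(0, n, n)
    boot.append(np.polyfit(x[i], m[i], 1)[0] * np.polyfit(m[i], y[i], 1)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"a*b = {a * b:.3f}   Sobel z = {sobel_z:.2f}")
print(f"bootstrap 95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```

With clean simulated data the two usually agree; the interesting cases are small samples or skewed sampling distributions of a*b, where the percentile interval and the z-test can disagree in exactly the way described in the original question.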
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=23&page=573","timestamp":"2014-04-17T12:58:30Z","content_type":null,"content_length":"29427","record_id":"<urn:uuid:bd3fb6b8-ceb0-4048-a3c5-de951d0d648b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Navy Electricity and Electronics Training Series (NEETS) Module 10—Introduction to Wave Propagation, Transmission Lines, and Antennas i - ix 1-1 to 1-10 1-11 to 1-20 1-21 to 1-30 1-31 to 1-40 1-41 to 1-47 2-1 to 2-10 2-11 to 2-20 2-21 to 2-30 2-31 to 2-40 2-40 to 2-47 3-1 to 3-10 3-11 to 3-20 3-21 to 3-30 3-31 to 3-40 3-41 to 3-50 3-51 to 3-58 4-1 to 4-10 4-11 to 4-20 4-21 to 4-30 4-31 to 4-40 4-41 to 4-50 4-51 to 4-60 , Index Figure 3-20.—Dc applied to an equivalent transmission line. As the switch is closed, the battery voltage is applied to the input terminals of the line. Now, C1 has no charge and appears, effectively, as a short circuit across points A and B. The full battery voltage appears across inductor L1. Inductor L1 opposes the change of current (0 now) and limits the rate of charge of C1. Capacitor C2 cannot begin to charge until after C1 has charged. No current can flow beyond points A and B until C1 has acquired some charge. As the voltage across C1 increases, current through L2 and C2 charges C2. This action continues down the line and charges each capacitor, in turn, to the battery voltage. Thus a voltage wave is traveling along the line. Beyond the wavefront, the line is uncharged. Since the line is infinitely long, there will always be more capacitors to be charged, and current will not stop flowing. Thus current will flow indefinitely in the line. Notice that current flows to charge the capacitors along the line. The flow of current is not advanced along the line until a voltage is developed across each preceding capacitor. In this manner voltage and current move down the line together in phase. Ac Applied to an Infinite Line An rf line displays similar characteristics when an ac voltage is applied to its sending end or input terminals. In figure 3-21, view A, an ac voltage is applied to the line represented by the circuit shown. Figure 3-21.—Ac applied to an equivalent transmission line. In view B the generator voltage starts from zero (T1) and produces the voltage shown. As soon as a small voltage change is produced, it starts its journey down the line while the generator continues to produce new voltages along a sine curve. At T2 the generator voltage is 70 volts. The voltages still move along the line until, at T3, the first small change arrives at point W, and the voltage at that point starts increasing. At T5, the same voltage arrives at point X on the line. Finally, at T7, the first small change arrives at the receiving end of the line. Meanwhile, all the changes in the sine wave produced by the generator pass each point in turn. The amount of time required for the changes to travel the length of the line is the same as that required for a dc voltage to travel the same distance. At T7, the voltage at the various points on the line is as follows: At the generator: -100 V At point W: 0 V At point X: +100 V At point Y: 0 V If these voltages are plotted along the length of the line, the resulting curve is like the one shown in figure 3-22, view A. Note that such a curve of instantaneous voltages resembles a sine wave. The changes in voltage that occur between T7 and T8 are as follows: Figure 3-22.—Instantaneous voltages along a transmission line. A plot of these new voltages produces the solid curve shown in figure 3-22, view B. For reference, the curve from T7 is drawn as a dotted line. The solid curve has exactly the same shape as the dotted curve, but has moved to the right by the distance X. 
Another plot at T9 would show a new curve similar to the one at T8, but moved to the right by the distance Y. By analyzing the points along the graph just discussed, you should be able to see that the actions associated with voltage changes along an rf line are as follows: 1. All instantaneous voltages of the sine wave produced by the generator travel down the line in the order they are produced. 2. At any point, a sine wave can be obtained if all the instantaneous voltages passing the point are plotted. An oscilloscope can be used to plot these values of instantaneous voltages against time. 3. The instantaneous voltages (oscilloscope displays) are the same in all cases except that a phase difference exists in the displays seen at different points along the line. The phase changes continually with respect to the generator until the change is 360 degrees over a certain length of line. 4. All parts of a sine wave pass every point along the line. A plot of the readings of an ac meter (which reads the effective value of the voltage over a given time) taken at different points along the line shows that the voltage is constant at all points. This is shown in view C of figure 3-22. 5. Since the line is terminated with a resistance equal to Z 0, the energy arriving at the end of the line is absorbed by the resistance. If a voltage is initially applied to the sending end of a line, that same voltage will appear later some distance from the sending end. This is true regardless of any change in voltage, whether the change is a jump from zero to some value or a drop from some value to zero. The voltage change will be conducted down the line at a constant rate. Recall that the inductance of a line delays the charging of the line capacitance. The velocity of propagation is therefore related to the values of L and C. If the inductance and capacitance of the rf line are known, the time required for any waveform to travel the length of the line can be determined. To see how this works, observe the following relationship: Q = IT This formula shows that the total charge or quantity is equal to the current multiplied by the time the current flows. Also: Q = CE This formula shows that the total charge on a capacitor is equal to the capacitance multiplied by the voltage across the capacitor. If the switch in figure 3-23 is closed for a given time, the quantity (Q) of electricity leaving the battery can be computed by using the equation Q = IT. The electricity leaves the battery and goes into the line, where a charge is built up on the capacitors. The amount of this charge is computed by using the equation Q = CE. Figure 3-23.—Dc applied to an equivalent transmission line. Since none of the charge is lost, the total charge leaving the battery during T is equal to the total charge on the line. Therefore: Q = IT = CE As each capacitor accumulates a charge equal to CE, the voltage across each inductor must change. As C1 in figure 3-23 charges to a voltage of E, point A rises to a potential of E volts while point B is still at zero volts. This makes E appear across L2. As C2 charges, point B rises to a potential of E volts as did point A. At this time, point B is at E volts and point C rises. Thus, we have a continuing action of voltage moving down the infinite line. In an inductor, these circuit components are related, as shown in the formula This shows that the voltage across the inductor is directly proportional to inductance and the change in current, but inversely proportional to a change in time. 
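(The formula itself was lost in extraction; the standard inductor relation that the sentence below describes is:)

E = L (ΔI / ΔT)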
Since current and time start from zero, the change in time (DT) and the change in current (DI) are equal to the final time (T) and final current (I). For this case the equation becomes: ET = LI If voltage E is applied for time (T) across the inductor (L), the final current (I) will flow. The following equations show how the three terms (T, L, and C) are related: IT = CE ET = LI For convenience, you can find T in terms of L and C in the following manner. Multiply the left and right member of each equation as follows: (IT)(ET) = (CE)(LI) Then: EIT2 = LCEI Dividing by (EI): T2 = LC This final equation is used for finding the time required for a voltage change to travel a unit length, since L and C are given in terms of unit length. The velocity of the waves may be found by: Where: D is the physical length of a unit This is the rate at which the wave travels over a unit length. The units of L and C are henrys and farads, respectively. T is in seconds per unit length and V is in unit lengths per second. As previously discussed, an infinite transmission line exhibits a definite input impedance. This impedance is the CHARACTERISTIC IMPEDANCE and is independent of line length. The exact value of this impedance is the ratio of the input voltage to the input current. If the line is infinite or is terminated in a resistance equal to the characteristic impedance, voltage and current waves traveling the line are in phase. To determine the characteristic impedance or voltage-to-current ratio, use the following procedure: Take the square root: A problem using this equation will illustrate how to determine the characteristics of a transmission line. Assume that the line shown in figure 3-23 is 1000 feet long. A 100-foot (approximately 30.5 meter) section is measured to determine L and C. The section is found to have an inductance of 0.25 millihenries and a capacitance of 1000 picofarads. Find the characteristic impedance of the line and the velocity of the wave on the line. If any other unit length had been considered, the values of L and C would be different, but their ratio would remain the same as would the characteristic impedance. Transmission line characteristics are based on an infinite line. A line cannot always be terminated in its characteristic impedance since it is sometimes operated as an OPEN-ENDED line and other times as a SHORT-CIRCUIT at the receiving end. If the line is open-ended, it has a terminating impedance that is infinitely large. If a line is not terminated in characteristic impedance, it is said to be finite. When a line is not terminated in Z0, the incident energy is not absorbed but is returned along the only path available—the transmission line. Thus, the behavior of a finite line may be quite different from that of the infinite line. The equivalent circuit of an open-ended transmission line is shown in figure 3-24, view A. Again, losses are to be considered as negligible, and L is lumped in one branch. Assume that (1) the battery in this circuit has an internal impedance equal to the characteristic impedance of the transmission line (Zi = Z0); (2) the capacitors in the line are not charged before the battery is connected; and (3) since the line is open-ended, the terminating impedance is infinitely large. Figure 3-24.—Reflection from an open-ended line. When the battery is connected to the sending end as shown, a negative voltage moves down the line. This voltage charges each capacitor, in turn, through the preceding inductor. 
Since Z equals Z , one-half the applied voltage will appear across the internal battery impedance, Z , and one-half across the impedance of the line, Z . Each capacitor is then charged to E/2 (view B). When the last capacitor in the line is charged, there is no voltage across the last inductor and current flow through the last inductor stops. With no current flow to maintain it, the magnetic field in the last inductor collapses and forces current to continue to flow in the same direction into the last capacitor. Because the direction of current has not changed, the capacitor charges in the same direction, thereby increasing the charge in the capacitor. Since the energy in the magnetic field equals the energy in the capacitor, the energy transfer to the capacitor doubles the voltage across the capacitor. The last capacitor is now charged to E volts and the current in the last inductor drops to zero. At this point, the same process takes place with the next to the last inductor and capacitor. When the magnetic field about the inductor collapses, current continues to flow into the next to the last capacitor, charging it to E volts. This action continues backward down the line until the first capacitor has been fully charged to the applied voltage. This change of voltage, moving backward down the line, can be thought of in the following manner. The voltage, arriving at the end of the line, finds no place to go and returns to the sending end with the same polarity (view C). Such action is called REFLECTION. When a reflection of voltage occurs on an open-ended line, the polarity is unchanged. The voltage change moves back to the source, charging each capacitor in turn until the first capacitor is charged to the source voltage and the action stops (view D). As each capacitor is charged, current in each inductor drops to zero, effectively reflecting the current with the opposite polarity (view C). Reflected current of opposite polarity cancels the original current at each point, and the current drops to zero at that point. When the last capacitor is charged, the current from the source stops flowing (view D). Important facts to remember in the reflection of dc voltages in open-ended lines are: · Voltage is reflected from an open end without change in polarity, amplitude, or shape. · Current is reflected from an open end with opposite polarity and without change in amplitude or shape. A SHORT-CIRCUITED line affects voltage change differently from the way an open-circuited line affects it. The voltage across a perfect short circuit must be zero; therefore, no power can be absorbed in the short, and the energy is reflected toward the generator. The initial circuit is shown in figure 3-25, view A. The initial voltage and current waves (view B) are the same as those given for an infinite line. In a short-circuited line the voltage change arrives at the last inductor in the same manner as the waves on an open-ended line. In this case, however, there is no capacitor to charge. The current through the final inductor produces a voltage with the polarity shown in view C. When the field collapses, the inductor acts as a battery and forces current through the capacitor in the opposite direction, causing it to discharge (view D). Since the amount of energy stored in the magnetic field is the same as that in the capacitor, the capacitor discharges to zero. 
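As a quick check of the 100-foot-section example given earlier (0.25 millihenries and 1000 picofarads per section), the characteristic impedance and wave velocity follow directly from Z0 = sqrt(L/C) and T² = LC. The short sketch below simply evaluates those two formulas for the values in the example.

```python
# Evaluates the transmission-line formulas from the text for the example
# values: L = 0.25 mH and C = 1000 pF per 100-foot section.
import math

L = 0.25e-3       # henrys per section
C = 1000e-12      # farads per section
D = 100           # feet per section

Z0 = math.sqrt(L / C)      # characteristic impedance, ohms
T = math.sqrt(L * C)       # time for the wave to cross one section, seconds
V = D / T                  # velocity of propagation, feet per second

print(f"Z0 = {Z0:.0f} ohms")            # 500 ohms
print(f"T  = {T:.1e} seconds/section")  # 5.0e-07 s
print(f"V  = {V:.1e} feet/second")      # 2.0e+08 ft/s
```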
{"url":"http://www.rfcafe.com/references/electrical/NEETS-Modules/NEETS-Module-10-3-21-3-30.htm","timestamp":"2014-04-20T18:29:06Z","content_type":null,"content_length":"37047","record_id":"<urn:uuid:db33a699-d7b9-4604-97b5-19179f54f09e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Matheology § 203

fom, Re: Matheology § 203, posted Feb 2, 2013 4:28 PM

On 2/2/2013 1:24 PM, Virgil wrote:
> In article
> <37a55e15-deab-4cc3-9497-1915e997705c@k8g2000yqb.googlegroups.com>,
> WM <mueckenh@rz.fh-augsburg.de> wrote:
>> On 2 Feb., 02:56, Alan Smaill <sma...@SPAMinf.ed.ac.uk> wrote:
>>> "The logicist reduction of the concept of natural number met a
>>> difficulty on this point, since the definition of 'natural number'
>>> already given in the work of Frege and Dedekind is impredicative. More
>>> recently, it has been argued by Michael Dummett, the author, and Edward
>>> Nelson that more informal explanations of the concept of natural number
>>> are impredicative as well. That has the consequence that impredicativity
>>> is more pervasive in mathematics, and appears at lower levels, than the
>>> earlier debates about the issue generally presupposed."
>> I do not agree with these authors on this point.
>>> So, how on earth do you know that induction is a correct
>>> principle over the natural numbers?
>> If a theorem is valid for the number k, and if from its validity for n
>> + k the validity for n + k + 1 can be concluded with no doubt, then n
>> can be replaced by n + 1, and the validity for n + k + 2 is proven
>> too. This is the foundation of mathematics. To prove anything about
>> this principle is as useless as the proof that 1 + 1 = 2.
> That inductive argument appears to be based on the very same flaws that
> WM objects to in allowing actual infiniteness.

It is. That is the whole point of the quoted article. In the last statement, WM is reasserting one of Poincaré's observations. But the real problem is that everyone is looking at arithmetical foundations. Frege retracted his life's work and tried to direct everyone's attention back to geometry. What foundational thinkers avoid because of "circularity" is polarity and duality in projective geometry. Embrace it. Understand it. And most of the crap falls away.
{"url":"http://mathforum.org/kb/message.jspa?messageID=8240604","timestamp":"2014-04-20T16:22:12Z","content_type":null,"content_length":"21662","record_id":"<urn:uuid:67326c74-c1a2-4c9d-9dc3-0daa65d23d74>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Olivia MFSK is an amateur radio teletype protocol designed to work in difficult (low signal-to-noise ratio, multipath propagation) conditions on shortwave bands. The signal can still be properly copied when it is buried 10 dB below the noise floor (i.e. when the amplitude of the noise is slightly over 3 times that of the signal). The protocol was developed by Pawel Jalocha. The first on-the-air tests were performed by two radio amateurs, Fred OH/DK4ZC and Les VK2DSG, on the Europe-Australia path in the 20-meter amateur band. The tests proved that the protocol works well and can allow regular intercontinental radio contacts with as little as one watt of RF power. Since then Olivia has become a standard for digital data transfer under white noise, fading, flutter (polar path) and auroral conditions.

The technical details

Being a teletype protocol, Olivia transmits a stream of (7-bit) characters. The characters are sent in blocks of 5. Each block takes 2 seconds to transmit, thus the effective data rate is 2.5 characters/second or 150 characters/minute. The transmission bandwidth is 1000 Hz and the rate is 31.25 tones/second. To accommodate different conditions and for the purpose of experimentation the bandwidth and the baud rate can be changed.

The Olivia transmission system is constructed of two layers: the lower, modulation and forward error correcting (FEC) code layer is a classical multiple frequency-shift keying (MFSK), while the higher layer is a forward error correcting code based on Walsh functions. Both layers are of similar nature: they constitute a "1-out-of-N" FEC code. For the first layer the orthogonal functions are (co)sine functions, with 32 different frequencies (tones). At a given time only one of those 32 tones is being sent. The demodulator measures the amplitudes of all the 32 possible tones (using a Fourier transform) and, knowing that only one of those 32 could have been sent, picks up the tone with the highest amplitude. For the second FEC layer, every ASCII character is encoded as one of 64 possible Walsh functions (or vectors of a Hadamard matrix). The receiver again measures the amplitudes for all 64 vectors (here comes the Hadamard transform) and chooses the greatest.

For optimal performance the actual demodulators work with soft decisions and the final (hard) decision to decode a character is taken only at the second layer. Thus the first layer demodulator actually produces soft decisions for each of the 5 bits associated to an MFSK tone instead of simply picking up the highest tone to produce hard decisions for those 5 bits. In order to avoid simple transmitted patterns (like a constant tone) and to minimize the chance for a false lock at the synchronizer, the characters encoded into the Walsh function pass through a scrambler and interleaver. This stage simply shifts and XORs bits with predefined scrambling vectors and so it does not improve the performance where the white (uncorrelated) noise is concerned, but the resulting pattern gains certain distinct characteristics which are of great help to the synchronizer. The receiver synchronizes automatically by searching through possible time and frequency offsets for a matching pattern. The frequency search range is normally +/- 100 Hz but can be as high as +/- 500 Hz if the user wishes so.
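A toy model of the second, "1-out-of-N" Walsh-function layer just described may help: a symbol index selects one row of a 64 x 64 Hadamard matrix, and the decoder correlates the noisy received vector against every row and keeps the strongest match. This is only an illustration of the principle; Olivia's actual scrambling, interleaving, soft-decision handling and the mapping of full 7-bit characters are left out.

```python
# Toy sketch of the "1-out-of-N" Walsh-function idea: encode a symbol as a
# Hadamard-matrix row, decode by picking the largest Hadamard-transform
# component. Not the real Olivia implementation.
import numpy as np

def hadamard(n):
    H = np.array([[1]])
    while H.shape[0] < n:                 # Sylvester construction
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(64)

def encode(index):                        # index in 0..63
    return H[index].astype(float)

def decode(received):
    scores = H @ received                 # correlate against all 64 rows
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
sent = 42
noisy = encode(sent) + rng.normal(scale=2.0, size=64)   # heavy added noise
print(sent, decode(noisy))                # usually recovers 42 despite the noise
```

Because the decision is spread over all 64 chips, the decoder usually still picks the right row even when the per-sample noise is larger than the signal, which is the property the article attributes to Olivia.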
The MFSK layer The default mode sends 32 tones within the 1000 Hz audio bandwidth and the tones are spaced by 1000 Hz/32 = 31.25 Hz. The tones are shaped to minimize the amount of energy sent outside the nominal bandwidth. The shape applied is plotted as the red trace on this [broken link]. The blue trace represents the more classical Hann window , which was used in the first version of the system. The exact shape formula is: $+1.0000000000 +1.1913785723 cos\left(x\right) -0.0793018558 cos\left(2x\right) -0.2171442026 cos\left(3x\right) -0.0014526076 cos\left(4x\right)$ where x ranges from – π to π. The coefficients represent the symbol shape in the frequency domain and were calculated by a minimization procedure which sought to make the smallest crosstalk and the smallest frequency spillover. This graph presents the 500 Hz MFSK tone (red trace) shaped according to the above formula. The blue trace is the envelope. The tones are sent at 31.25 baud or every 32 milliseconds. The phase is not preserved from one tone to the next: instead a random shift of ±90 degrees is introduced in order not to transmit a pure tone when the same symbol is repeatedly sent. Because the symbols are smoothly shaped there is no need to keep the phase constant, which normally is the case when no (e.g. square) shaping is used. The modulator uses the Gray code to encode 5-bit symbols into the tone numbers. The waveform generator is based on the 8000 Hz sampling rate. The tones are spaced by 256 samples in time and the window that shapes them is 512 samples long. The demodulator is based on the FFT with the size of 512 points. The tone spacing in frequency is 8000 Hz/256 = 31.25 Hz and the demodulator FFT has the resolution of 8000 Hz/512 = 15.625 Hz thus half of the tone separation. To adapt the system to different propagation conditions, the number of tones and the bandwidth can be changed and the time and frequency parameters are proportionally scaled. The number of tones can be 2, 4, 8, 16, 32, 64, 128 or 256. The bandwidth can be 125, 250, 500, 1000 or 2000 Hz. The Walsh functions FEC layer The modulation layer of the Olivia transmission system sends at a time one out of 32 tones (the default mode). Each tone constitutes thus a symbol that carries 5 bits of information. For the FEC code, 64 symbols are taken to form a block. Within each block one bit out of every symbol is taken and it forms a 64-bit vector coded as a Walsh function. Every 64-bit vector represents a 7-bit ASCII character, thus each block represents 5 ASCII characters. This way, if one symbol (tone) becomes corrupted by the noise, only one bit of every 64-bit vector becomes corrupt, thus the transmission errors are spread uniformly across the characters within a The two layers (MFSK+Walsh function) of the FEC code can be treated as a two dimensional code: the first dimension is formed along the frequency axis by the MFSK itself while the second dimension is formed along the time axis by the Walsh functions. The two dimensional arrangement was made with the idea in mind to solve such arranged FEC code with an iterative algorithm, however, no such algorithm was established to date. The scrambling and simple bit interleaving is applied to make the generated symbol patterns appear more random and with minimal self-correlation. This avoids false locks at the receiver. Bit interleaving: The Walsh function for the first character in a block is constructed from the 1st bit of the 1st symbol, the 2nd bit of the 2nd symbol, and so on. 
The 2nd Walsh function is constructed from the 2nd bit of the 1st symbol, the 3rd bit of the 2nd symbol, and so on. Scrambling: The Walsh functions are scrambled with a pseudo-random sequence 0xE257E6D0291574EC. The Walsh function for the 1st character in a block is scrambled with the scrambling sequence, the 2nd Walsh function is scrambled with the sequence rotated right by 13 bits, the 3rd with the sequence rotated by 26 bits, and so on. External links
{"url":"http://www.reference.com/browse/direction+co+sine","timestamp":"2014-04-21T14:04:44Z","content_type":null,"content_length":"82343","record_id":"<urn:uuid:77d900f5-6938-435a-8b02-aa9a486a58d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Tutors
Stratford, CT 06614
Math, Science and History tutor, 6-12 grade and beyond. ...In addition I have a Mathematics minor which includes Calculus, Ordinary Differential Equations, Partial Differential Equations, Statistics, Number Theory, Linear Algebra, Set Theory, Numerical Analysis, Topology, etc. I have six years of teaching experience and my...
Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/New_Haven_CT_Linear_Algebra_tutors.aspx","timestamp":"2014-04-18T08:10:27Z","content_type":null,"content_length":"61125","record_id":"<urn:uuid:b3356522-b973-4984-bde9-89d638f7aef2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounding the Elliptope of Quantum Correlations & Proving Separability in Mixed States

We present a method for determining the maximum possible violation of any linear Bell inequality per quantum mechanics. Essentially this amounts to a constrained optimization problem for an observable's eigenvalues, but the problem can be reformulated so as to be analytically tractable. This opens the door for an arbitrarily precise characterization of quantum correlations, including allowing for non-random marginal expectation values. Such a characterization is critical when contrasting QM to superficially similar general probabilistic theories. We use such marginal-involving quantum bounds to estimate the volume of all possible quantum statistics in the complete 8-dimensional probability space of the Bell-CHSH scenario, measured relative to both local hidden variable models as well as general no-signaling theories. See arXiv:1106.2169.

Time permitting, we'll also discuss how one might go about trying to prove that a given mixed state is, in fact, not entangled. (The converse problem of certifying non-zero entanglement has received extensive treatment already.) Instead of directly asking if any separable representation exists for the state, we suggest simply checking to see if it "fits" some particular known-separable form. We demonstrate how a surprisingly valuable sufficient separability criterion follows merely from considering a highly-generic separable form. The criterion we generate for diagonally-symmetric mixed states is apparently completely tight, necessary and sufficient. We use integration to quantify the "volume" of states captured by our criterion, and show that it is as large as the volume of states associated with the PPT criterion; this simultaneously proves our criterion to be necessary as well as the PPT criterion to be sufficient, on this family of states. The utility of a sufficient separability criterion is evidenced by categorically rejecting Dicke-model superradiance for entanglement generation schema. See arXiv:1307.5779.
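For readers who want a feel for the "maximum possible violation" being bounded here, a small numerical illustration (mine, not the speaker's method): using the textbook singlet correlation E(a, b) = -cos(a - b), a coarse scan over measurement angles recovers the Tsirelson bound 2*sqrt(2) ≈ 2.83 for the CHSH combination, compared with the local-hidden-variable bound of 2.

```python
# Numerical illustration of the quantum (Tsirelson) bound on the CHSH value,
# using the standard singlet-state correlation E(a, b) = -cos(a - b).
# This is a demonstration only, not the optimization described in the talk.
import itertools
import numpy as np

def E(a, b):
    return -np.cos(a - b)

def chsh(a1, a2, b1, b2):
    return E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

angles = np.linspace(0, np.pi, 13)          # 15-degree grid, includes 45/90/135
best = max(abs(chsh(*combo)) for combo in itertools.product(angles, repeat=4))
print(best, 2 * np.sqrt(2))                 # both ~2.8284
```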
{"url":"http://www.perimeterinstitute.ca/fr/videos/bounding-elliptope-quantum-correlations-proving-separability-mixed-states","timestamp":"2014-04-21T11:06:39Z","content_type":null,"content_length":"30436","record_id":"<urn:uuid:241a38b1-b371-450d-8a36-c1d3e529353d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
RPN Calculators — the Short Version

RPN stands for "Reverse Polish Notation," a parenthesis-free method for performing calculations. In the United States this method has been popularized by the Hewlett-Packard Corporation, which sells a line of RPN calculators. The method depends on a stack of values as its organizing principle. When a number is entered, the previous entry is "raised" in the stack, so that several values can be stored. When a mathematical operation is performed, it is performed on the most recent value (or values, if more than one is required). For example, to divide 80 by 81, you make these entries: 80, ENTER, 81, ÷.

At first, this method may seem cumbersome, but in time one begins to see its great value — one need never worry about algebraic order of precedence, because the order in which operations are performed is entirely controlled by the order in which the values are entered and the operations performed. On some problems, the algebraic order of precedence can be depended on to provide the correct result. But many problems are difficult to state using the default precedence. In this next example, you want to multiply the result of two additions ((a + b) * (c + d)). If you simply enter this problem into an algebraic calculator, you will get a + (b * c) + d, not (a + b) * (c + d), as you expected, because multiplications are performed before additions. In order to solve this problem on an algebraic calculator, you must carefully place parentheses between the operations (assuming your calculator has parentheses). But in RPN notation, the problem is simple to solve: a, ENTER, b, +, c, ENTER, d, +, ×.

Once you begin to get a feel for RPN, you will realize it is the easiest way to solve all but the simplest problems. You never have to press more keys than on an algebraic calculator, and you often press far fewer.
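The stack discipline described above is also easy to put into code. The following is a minimal, generic sketch of an RPN evaluator (not the virtual calculator this page belongs to); tokens are separated by spaces and only the four basic operators are handled.

```python
# Minimal stack-based RPN evaluator illustrating the idea above.
def rpn_eval(expression: str) -> float:
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()              # most recent entry
            a = stack.pop()              # next most recent entry
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))   # a number: push it, raising the stack
    return stack.pop()

print(rpn_eval("80 81 /"))               # 80 divided by 81
print(rpn_eval("2 3 + 4 5 + *"))         # (2 + 3) * (4 + 5) = 45.0
```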
{"url":"http://arachnoid.com/lutusp/calculator.html","timestamp":"2014-04-20T23:27:05Z","content_type":null,"content_length":"24426","record_id":"<urn:uuid:00171e2d-e51a-498a-8ea1-030b75888372>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
MPECLib Bibliography • Anandalingam, G, and White, D J, A Solution Method for the Linear Static Stackelberg Problem Using Penalty Functions. IEEE Trans. Auto. Contr. 35 (1990), 1170-1173. • Bard, J F, Convex Two-Level Optimization. Mathematical Programming 40 (1988), 15-27. • Bard, J F, Some Properties of the Bilevel Programming Problem. Journal of Optimization Theory and Applications 68 (1991), 371-378. • Bard, J F, and Falk, J E, An Explicit Solution to the Multi-Level Programming Problem. Comp. Op. Res. 9 (1982), 77-100. • Bixby, R, Ceria, S, McZeal C M, and Savelsbergh, M W P, An Updated Mixed Integer Programming Library: MIPLIB 3.0. Optima 58 (1998), 12-15. • Candler, W, and Townsley, R, A Linear Two-Level Programming Problem. Comp. Op. Res. 9 (1982), 59-76. • Carolan, W J, Hill, J E, Kennington J L, Niemi, S, and Wichmann, S J, An Empirical Evaluation of the KORBX Algorithms for Military Airlift Applications. Operations Research 38, 2 (1990), 240-248. • Clark, P A, and Westerberg, A W, Bilevel Programming for Steady-State Chemical Process Design-i. Fundamentals and Algorithms. Comput. Chem. Eng. 14 (1990), 87. • Clark, P A, and Westerberg, A W, A Note on the Optimality Conditions for the Bilevel Programming Problem. Naval Research Logistics 35 (1988), 413-418. • DeSilva, A H, Sensitivity Formulas for Nonlinear Factorable Programming and their Application to the Solution of an Implicitly Defined Optimization Model of US Crude Oil Production. PhD thesis, George Washington University, 1978. • Dirkse, S P, and Ferris, M C, MCPLIB: A Collection of Nonlinear Mixed Complementarity Problems. Optimization Methods and Software 5 (1995), 319-345. • Facchinei, F, Jiang, H, and Qi, L, A Smoothing Method for Mathematical Programs with Equilibrium Constraints. Tech. rep., Universita di Roma La Sapienza, 1996. • Falk, J E, and Liu, J, On Bilevel Programming, Part i: General Nonlinear Cases. Mathematical Programming 70 (1995), 47. • Ferris, M C, and Tin-Loi, F, On the Solution of a Minimum Weight Elastoplastic Problem Involving Displacement and Complementarity Constraints. Comp. Meth. in Appl. Mech. and Engng 174 (1999), • Ferris, M C, and Tin-Loi, F, Nonlinear Programming Approach for a Class of Inverse Problems in Elastoplasticity. Structural Engineering and Mechanics 6 (1998), 857-870. • Floudas, C A, Pardalos, P M, Adjiman, C S, Esposito, W R, Gumus, Z H, Harding, S T, Klepeis, J L, Meyer, C A, and Schweiger, C A, Handbook of Test Problems in Local and Global Optimization. Kluwer Academic Publishers, 1999. • Floudas, C A, and Pardalos, P M, Eds, State of the Art in Global Optimization. Kluwer Academic Publishers, 1996. • Fukushima, M, and Qi, L, Eds, Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods. Kluwer Academic Publishers, 1999. • Grzebieta, R H, Al-Mahaidi, R, and Wilson, J L, Eds, Proceedings of 15th ACMSM, 1997. • Hansen, Jaumard, and Savard, G, New Branch-and-Bound Rules for Linear Bilevel Programming. SIAM J. Sci. Stat. Comp 13 (1992), 1194-1217. • Hearn, and Ramana, Solving Congestion Toll Pricing Models. Tech. rep., University of Florida, 0000. • Henderson, J M, and Quandt, R E, Microeconomic Theory. McGraw Hill, New York, 1980. 3rd edition • Jiang, H, and Ralph, D, Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints. Tech. rep., University of Melbourne, 1997. • Jiang, H, and Ralph, D, QPECgen: A MATLAB Generator for Mathematical Programs with Quadratic Objectives and Affine Variational Inequality Cnostraints. 
Computational Optimization and Applications • Kehoe, T, A Numerical Investigation of the Multiplicity of Equilibria. Mathematical Programming Study 23 (1985), 240-258. • Light, M, Optimal Taxation: An Application of Mathematical Programming with Equilibrium Constraints in Economics. Tech. rep., Department of Economics, University of Colorado, Boulder, 1999. • Liu, Y H, and Hart, S M, Characterizing an Optimal Solution to the Linear Bilevel Programming Problem. European Journal of Operational Research 79 (1994), 164-166. • Luo, Z, Pang, J S, and Ralph, D, Mathematical Programs with Equilibrium Constraints. CUP, 1997. • Maier, G, Giannessi, F, and Nappi, A, Indirect Identification of Yield Limits by Mathematical Programming. Engineering Structures 4 (1982), 86-98. • Murphy, F H, Sherali, H D, and Soyster, A L, A Mathematical Programming Approach for Determining Oligopolistic Market Equilibrium. Mathematical Programming 24 (1982), 92-106. • Nemhauser, G L, and Trick, M A, Scheduling a Major College Basketball Conference. Operations Research 46 (1998), 1-8. • Outrata, J V, and Zowe, J, A Numerical Approach to Optimization Problems with Variational Inequality Constraints. Mathematical Programming 68 (1995), 105-130. • Outrata, J V, On Optimization Problems with Variational Inequality Constraints. SIAM J. Optim. 4 (1994), 340-357. • Outrata, J V, Kocvara, and Zowe, J, Nonsmooth Approach to Optimization Problems with Equilibrium Constraints. Kluwer, 1998. • Pang, J S, and Tin-Loi, F, A penalty interior point algorithm for a inverse parameter identification problem in elastoplasticity. Mechanics of Structures and Machines 29 (2001), 85-99. • Savard, G, and Gauvin, J, The Steepest Descent for the Nonlinear Bilevel Programming Problem. Operation Research Letters 15 (1994), 265-272. • Scholtes, S, and Stohr, M, Exact Penalization of Mathematical Programs with Equilibrium Constraints. SIAM J. Control Optim. 37, 2 (1999), 617-652. • Scholtes, S, Research Papers in Managements Studies. Tech. rep., The Judge Institute, University of Cambridge, 1997. • Shimizu, K, and Aiyoshi, E, A New Computational Method for Stackelberg and Mim-Max Problems by Use of a Penalty Method. IEEE Trans. on Aut. Control 26 (1981). • Shimizu, K, Ishizuka, Y, and Bard, J F, Nondifferentiable and Two-Level Mathematical Programming. Kluwer Academic Publishers, 1997. • Toint, P L, Ed, Operations Research and Decision Aid Methodologies in Traffic and Transportation Management. Springer Verlag, 1997. NATO ASI Series F • Yezza, A, First-Order Necessary Optimality Conditions for General Bilevel Programming Problems. Journal of Optimization Theory and Applications 89 (1996), 189-219.
{"url":"http://www.gamsworld.org/mpec/biblio.htm","timestamp":"2014-04-18T13:07:35Z","content_type":null,"content_length":"7297","record_id":"<urn:uuid:a04f7dda-4a27-4f48-9b39-f4364164f300>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Pictures of Julia and Mandelbrot Sets/History
From Wikibooks, open books for an open world
Already in antiquity it was known that if the fraction $x_0$ is near √a, then the fraction $x_1 = (x_0 + a/x_0)/2$ is a better approximation to √a. For if $x_0$ is an approximation to √a that is smaller than √a, then $a/x_0$ is an approximation to √a that is larger than √a, and conversely; therefore $x_1 = (x_0 + a/x_0)/2$ is a better approximation to √a than both $x_0$ and $a/x_0$. We can, of course, repeat the procedure, and it is very effective: you get an approximation to √a with ten correct digits if (for $a$ = 2, say) you start with $x_0$ = 1.5 and apply the procedure three times. To find the square root of $a$ is to solve the equation $x^2 - a = 0$, and we have solved this equation by the iteration $x$ → $(x + a/x)/2$. And it is this method we must apply for solving an equation, if we do not have a formula, or if this - after the invention of the computer - seems too laborious to use. It was Newton who described the general procedure: if $f(x)$ is a (continuous) differentiable function which intersects the x-axis near $x = x_0$, and we apply the iteration $x$ → $x - f(x)/f'(x)$ a number of times starting in $x_0$, then we get a good approximation to a solution x* of the equation $f(x)$ = 0 (because we have assumed that $f(x)$ intersects the x-axis, $f'(x) \neq 0$ near the point x* where $f(x*)$ = 0). The method can be generalized so that it works for $f'(x*)$ = 0, but if $f'(x)$ converges to ∞ for x converging to x*, as is the case for $f(x) = x^{1/3}$ and x* = 0, then the procedure leads away from the solution. There are situations where a solution cannot be found by this method, and there can be points whose sequence of iteration either converges towards a cycle of points (containing more than one point) or does not converge. This question ought to be studied more closely, urged the English mathematician Arthur Cayley in 1879 in a paper of only one page. He was aware that the Newton procedure could just as well be applied for solving a complex equation $f(z)$ = 0, and he proposed that we "look away from the realities" and examine what can happen when the iteration does not lead to a solution. But this proposal was his only contribution to the theory.
Julia and Fatou
Thirty years passed before anyone took up this question. But then the two French mathematicians Gaston Julia (1893-1978) and Pierre Fatou (1878-1929) published papers where they examined iteration of general complex rational functions, and they proved all the facts about "Julia" sets and "Fatou" domains we have used here. Julia and Fatou were of course not able to produce detailed pictures showing the Julia set for Newton iteration for solving the equation $z^3$ = 1, for instance, but they knew that the Julia set in this case is not a curve. Cayley certainly imagined that the plane would be divided up by geometrical curves, and if he did so, it was forgivable, for this is precisely the case for the so-called weak Newton procedure: this leads more slowly, but more safely, to a solution.
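To make the opening example concrete, here is a small sketch (mine, not part of the original text; the fixed step count and the choice $a$ = 2 are just for illustration) of the square-root iteration and of Newton's general step:

#include <cstdio>
#include <functional>

// Babylonian/Newton iteration for sqrt(a): x -> (x + a/x) / 2.
double newtonSqrt( double a, double x0, int steps ) {
    double x = x0;
    for ( int i = 0; i < steps; ++i ) {
        x = ( x + a / x ) / 2.0;
    }
    return x;
}

// General Newton step for f(x) = 0: x -> x - f(x)/f'(x).
double newtonRoot( std::function<double(double)> f,
                   std::function<double(double)> df,
                   double x0, int steps ) {
    double x = x0;
    for ( int i = 0; i < steps; ++i ) {
        x = x - f( x ) / df( x );   // assumes df(x) stays away from 0 near the root
    }
    return x;
}

int main() {
    std::printf( "%.12f\n", newtonSqrt( 2.0, 1.5, 3 ) );
    std::printf( "%.12f\n", newtonRoot( []( double x ){ return x * x - 2.0; },
                                        []( double x ){ return 2.0 * x; },
                                        1.5, 3 ) );   // same computation via the general step
    return 0;
}

Both calls should print approximately 1.414213562375, i.e. more than ten correct digits of √2 after only three steps from 1.5, which is the claim made above.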
We have said that these revolving movements can be either polygon or annular shaped, that they are lying concentrically and that there is a finite cycle which is centre for the movements. And possibly the reader has wondered: this centre cycle, does it belongs to the Fatou domain or to the Julia set? We will now answer this question. In the section about field lines we have defined a complex number $\alpha$ which in the attracting case has norm smaller than 1, but which in the neutral case has norm 1 and therefore corresponds to an angle (its argument). When this angle is rational (with respect to $2\pi$), the terminal movements are finite (polygonal, the parabolic case), and the centre cycle belongs to the Julia set, when the angle is irrational, the terminal movements are infinite (annular, the Siegel-disc case), and the centre cycle belongs to the Fatou domain. But apart for such few results, not very much happened in this theory before Benoit B. Mandelbrot (1924-2010) in the late 1970s began his serious study of Julia sets by using computer. It is Mandelbrot who has coined the word fractal geometry. He had had Julia as teacher (at École Polytechnique), and around 1964 he began his "varied forays into unfashionable and lonely corners of the Unknown". But the fractal patterns he first studied were self-similar in the strict sense of this word, namely invariant under linear transformation. It was first in 1978-79 he began his study of Julia sets for rational complex functions. He made some print-outs and he studied families of rational complex functions, formed by multiplying the function by a complex parameter $\lambda$. His intention was to let the computer draw a set M of parameters $\lambda$ for which the Julia set is not (totally) disconnected (a "fractal dust"). Such a program could be made very simple: he found two critical points for the function, and plotted the points $\lambda$ for which the two critical points did not iterate towards the same cycle. For then there would be at least two Fatou domains, and therefore the Julia set would not be a dust cloud. He chose functions for which he knew a real parameter value $\lambda$ such that the iteration behaved chaotically on some real interval, for then this $\lambda$ might belong to his set M. He began (in 1979) with the family $\lambda (1 + z^2)^2/(z(z^2 - 1))$ (having four real and two imaginary critical points). For $\lambda$ = 1/4 it behaves chaotically on an interval, and he "felt that in order to achieve a set having a rich structure, it was best to pick a complicated map (every beginner I have since then watched operate has taken the same tack)". The picture where $\lambda$ varied over the complex plane, showed a highly structured but very fuzzy "shadow" of the set. A very blotchy version, but "it sufficed to show that the topic was worth pursuing, but had better be persued in an easier context". Then (in 1980) he studied the family $\lambda z(1 - z)$ (having critical points 1/2 and ∞), which for $\lambda$ = -2 and 4 behaves chaotically on the interval [-2, 2]. He saw two discs of radius 1 and centres in 0 and 2: "Two lines of algebra confirmed that these discs were to be expected here, and that the method was working. We also saw, on the real line to the right and left of the above discs, the crude outlines of round blobs which I call "atoms" today. They appeared to be bisected by intervals known in the Myrberg theory, which encoraged us to go on to increasing bold computations. 
For a while, every investment in computation yielded increasingly sharply focussed pictures. Helped by imagination, I saw the atoms fall into a hierarchy, each carrying smaller atoms attached to it". "After that, however, our luck seemed to break; our pictures, instead of becoming increasingly sharp, seemed to become increasingly messy. Was this the fault of the faltering Textronix [cathode ray tube ("worn out and very faint")]?". Mandelbrot ran the program on another computer: "The mess had failed to vanish! In fact, as you can check, it showed signs of being systematic. We promptly took a much closer look. Many specks of dirt duly vanished after we zoomed in. But some specks failed to vanish; in fact, they proved to resolve into complex structures endowed with "sprouts" very similar to those of the whole set M. Peter Moldave and I could not contain our excitement. Some reasons made us redo the whole computation using the equivalent map z → $z^2 - c$, and here the main continent of the set M proved to be shaped like each of the islands! Next, we focussed on the sprouts corresponding to different orders of bifurcation, and we compared the corresponding off-shore islands. They proved to lie on the intersection of stellate patterns of logarithmic spirals! (...) We continued to flip in this fashion between the set M and selected Julia sets J, and made an exciting discovery. I saw that the set M goes beyond being a numerical record of numbers of points in limit cycles. It also has uncanny "hieroglyphical" character: including within itself a whole deformed collection of reduced-size versions of all the Julia sets".
Mandelbrot, B.B.: Fractals and the Rebirth of Iteration Theory. In: Peitgen & Richter: The Beauty of Fractals (1986), pp 151-160 (in this article you can see Mandelbrot's first two print-outs of the last picture).
Mandelbrot, B.B.: Fractal aspects of the iteration of z → $\lambda z(1 - z)$ for complex $\lambda$ and z. In: Nonlinear Dynamics, Annals New York Acad. Sciences 357 (1980), pp 249-259.
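The experiment described above is easy to reproduce in a crude form today. The following sketch is mine, not Mandelbrot's program; it uses the map z → $z^2 - c$ mentioned in the text (the modern convention z → $z^2 + c$ gives the same set with c negated) and keeps a parameter c when the orbit of the critical point 0 appears to stay bounded:

#include <complex>
#include <cstdio>

// Escape-time test: c is kept (treated as belonging to M) when the orbit of the
// critical point 0 under z -> z*z - c stays bounded; once |z| exceeds 2 the orbit
// is guaranteed to escape to infinity.
int escapeTime( std::complex<double> c, int maxIter ) {
    std::complex<double> z = 0.0;
    for ( int i = 0; i < maxIter; ++i ) {
        if ( std::norm( z ) > 4.0 ) { return i; }    // std::norm() is |z| squared
        z = z * z - c;
    }
    return maxIter;                                   // did not escape: keep c
}

int main() {
    // A rough ASCII picture of the set for the z*z - c convention
    // (it is the mirror image, c -> -c, of the usual picture).
    for ( double im = -1.2; im <= 1.2; im += 0.1 ) {
        for ( double re = -0.6; re <= 2.1; re += 0.05 ) {
            std::putchar( escapeTime( std::complex<double>( re, im ), 256 ) == 256 ? '#' : '.' );
        }
        std::putchar( '\n' );
    }
    return 0;
}

With the inside/outside decision replaced by a colour ramp over the escape time, this is essentially how the familiar detailed pictures of M are produced.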
{"url":"http://en.wikibooks.org/wiki/Pictures_of_Julia_and_Mandelbrot_Sets/History","timestamp":"2014-04-19T14:31:23Z","content_type":null,"content_length":"40747","record_id":"<urn:uuid:a5ef115f-2f26-4378-a9f6-7523bbb3a91e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Fraud Number 3: U-Gene
From: Cimode <cimode_at_hotmail.com>
Date: 19 Jun 2006 03:58:05 -0700
Message-ID: <1150714685.475107.175620@g10g2000cwb.googlegroups.com>

> The consequence of admitting the premise that a relation is a specific
> subset of functions is that you apply all characteristics of a function:
> Funny. For some reason I thought that in math, the functions were a
> subset of the relations. And therefore, the relations are a superset
> of the functions. It hurts to discover one's own ignorance.

I have to admit I have struggled with that concept a few years, but I came to the conclusion that relations were indeed a subset and not a superset of functions, as you are stating. Think about it... For one mathematical set to be declared a subset of another mathematical set, it must first inherit ALL characteristics of the superset... In short, IF set1 INCLUDES set2 then set2 (characteristics on elements of set2) = set1 (characteristics on elements of set2) + set2 (characteristics on elements of set2).

An application of the above, for instance, is between rationals and integers. Integers are a subset of Rationals because they inherit all characteristics common to ALL Rationals, but have characteristics of their own... You can have for instance the following hierarchy: INTEGER, DECIMAL.

If I follow your reasoning, based on the above: if you state that a function is a subset of relation, then you must admit that a function, ANY function, shares the characteristics of relation... Let's see one characteristic of relations: multidimensionality. ARE ALL functions multidimensional? Linear functions are not multidimensional. OTOH, if you consider the opposite, meaning relations as a subset of functions, then you realize the definition applies correctly... For instance, functions use variables to represent dynamic value place holders. You will realize that this characteristic applies to all functions, whether relations, linear, etc... One could represent it as follows: RELATIONS, LINEAR, TRIGO., etc... But you seem convinced of the opposite... Thanks for demonstrating your point logically before stating anything. Listening.

> > --> relations, relvars, relvalues are on different levels of definition.
> Do I understand you correctly that by "relvalue" you mean "any possible
> value from any possible domain that can occur within a relation"? You
> have already pointed out that, according to Codd, each defined relation
> establishes a new domain of its own. I take it that the values of such
> a domain are indeed "relation values". It would then follow, if I
> understand you correctly, that all relation values are "relvalues" but
> not all "relvalues" are relation values.

Not quite... Relations are a higher-level concept than relvars. One domain defined at relation level is not the same as a domain from which values are drawn at attribute data type level. The relationship between the two is a complex one to define (for instance, how operators that apply to an attribute data type relate to operators that apply to the relation as a data type). Does it make better sense?

> Then I think that what you mean by "relvalue" is just "value".

You are correct. I personally prefer *value*, but some people brought up the issue of *relvalue* in a pedagogical context. Some of the people stated that relation = relvar = relvalue, which, as you seem to understand, is not quite the same.

> > --> R-tables are ONE possible representation of relvars.
> Sorry. R-tables are one possible representation of RELATION VALUES.

You are correct.
This definition may lead to confusion. Instead, I should have stated "R-Tables are a projection or representation of relvars at a specific point in time", which is basically equivalent to... You are also correct in asserting that they represent values rather than variables. I tend to look at it from a time perspective... I consider that values are *variable fillers* in time. Thank you for bringing that out.

> Variables simply do not *need* "being represented". They have a name,
> a declared type, and they contain a value. Date models variables as a
> triple of exactly these three components.

"Represented" does not seem to be a consensus term; "projected" seems to be a better term. Nevertheless, I am curious about your statement... Do you consider that relvars do not need to be projected? In that case, how do you operate on them?

> > Hope this clarified.
> I wouldn't bet on it.

Received on Mon Jun 19 2006 - 05:58:05 CDT
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2006/06/19/1259.htm","timestamp":"2014-04-18T06:40:27Z","content_type":null,"content_length":"11640","record_id":"<urn:uuid:8783acc0-2986-40f6-98dc-b2ba9cdb44c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
Clowes, Samuel W. A Derbyshire landowner related to the Hornbys whom Lear met at Knowsley. As early as February 1848 Lear was considering him as a companion for a travel to Egypt (SL, 68). He will be in Rome with Lear in 1858-59. 4.i.58, 30.i.58, 1.ii.58, 9.ii.58, 22.ii.58, 27.ii.58, 9.iii.58, 9.ix.58, 19.x.58, 25.x.58, 26.x.58, 27.x.58, 28.x.58, 3.xi.58, 10.xi.58, 17.xi.58, 19.xi.58, 20.xi.58, 21.xi.58, 25.xi.58, 26.xi.58, 28.xi.58, 30.xi.58, 2.xii.58, 3.xii.58, 22.xii.58, 23.xii.58, 24.xii.58, 25.xii.58, 26.xii.58, 27.xii.58, 28.xii.58, 29.xii.58, 30.xii.58, 1.i.59, 2.i.59, 3.i.59, 4.i.59, 5.i.59, 7.i.59, 8.i.59, 10.i.59, 12.i.59, 13.i.59, 14.i.59, 15.i.59, 16.i.59, 18.i.59, 20.i.59, 21.i.59, 22.i.59, 23.i.59, 24.i.59, 26.i.59, 28.i.59, 30.i.59, 1.ii.59, 2.ii.59, 3.ii.59, 4.ii.59, 5.ii.59, 6.ii.59, 7.ii.59, 8.ii.59, 9.ii.59, 11.ii.59, 12.ii.59, 13.ii.59, 14.ii.59, 16.ii.59, 17.ii.59, 19.ii.59, 20.ii.59, 21.ii.59, 22.ii.59, 23.ii.59, 24.ii.59, 27.ii.59, 1.iii.59, 2.iii.59, 3.iii.59, 4.iii.59, 9.iii.59, 24.iii.59, 3.iv.59, 4.iv.59, 5.iv.59, 3.v.59, 20.vi.59, 22.vi.59, 27.vi.59, 29.vi.59, 30.viii.59, 5.ix.59, 12.ix.59, 13.x.59, 14.x.59, 2.xi.59, 3.xi.59, 10.xi.59, 12.xi.59, 15.xi.59, 16.xi.59, 17.xi.59, 21.xi.59, 22.xii.59, 26.xii.59, 12.i.60, 13.i.60, 14.i.60, 30.i.60, 7.vi.60, 21.vi.60, 28.vi.60, 30.vii.60, 11.ix.60, 12.x.60, 29.xi.60, 3.xii.60, 4.xii.60, 7.xii.60, 11.xii.60, 19.xii.60, 2.1.61, 11.i.61, 21.i.61, 23.i.61, 17.iii.61, 8.iv.61, 18.iv.61, 20.iv.61, 22.iv.61, 23.iv.61, 27.iv.61, 2.v.61, 10.v.61, 15.v.61, 16.v.61, 18.v.61, 25.viii.61, 31.viii.61, 6.ix.61, 15.ix.61, 31.x.61 , 6.xi.61, 24.xii.61.
{"url":"http://www.nonsenselit.org/diaries/people/clowes-sw/","timestamp":"2014-04-18T08:39:43Z","content_type":null,"content_length":"25062","record_id":"<urn:uuid:7f2fd889-0438-4a0a-9300-a3edf3d9b9a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial: Tightly Culling Shadow Casters for Directional Lights (Part 2) In the previous tutorial we came to the conclusion that the below image highlights the planes we want to pull towards ourselves, assuming we are the light source. Representing planes in 2D is not easy, but imagine each black line segment represents a plane that is coming from our eyes and is rotated along its respective line segment. This is the tightest possible collection of planes that fully encloses the player’s view frustum, from the light’s point-of-view. Also remember that the left, bottom, and far planes are on the opposite side of the frustum. They form the back faces of the k-DOP we are trying to create. On input, we have 8 points, representing the 8 corners of the frustum, and with this input our goal is to create all of the planes necessary to make the outline shown above, and at the same time make sure all of the planes are facing towards the inside of the volume. Gathering the Corner Points Our first goal is to get the 8 corner points. A good idea is to name both the planes and the points, so we will start with the following enumerations. /** Frustum planes. */ enum LSM_FRUSTUM_PLANES { /** Frustum points. */ enum LSM_FRUSTUM_POINTS { When working with these points and planes here forth we will strictly adhere to this order. Creating a specific order for the points is important for ensuring the planes we add to our final k-DOP will be pointing consistently inwards. We can determine the locations of each point by intersecting the 3 planes that make that corner. While this sounds difficult to do, it is in fact a very small and efficient function. * Gets the intersection point of 3 planes if there is one. If not, false is returned. * \param _pP0 Plane 1. * \param _pP1 Plane 2. * \param _pP2 Plane 3. * \param _vRet The returned intersection point, valid only if true is returned. * \return Returns true if the 3 planes intersect at a single point. LSE_INLINE LSBOOL LSE_CALL CIntersect::ThreePlanes( const CPlane3 &_pP0, const CPlane3 &_pP1, const CPlane3 &_pP2, CVector3 &_vRet ) { CVector3 vU = _pP1.n % _pP2.n; LSREAL fDenom = _pP0.n * vU; if ( CMathLib::Abs( fDenom ) <= LSM_FLT_EPSILON ) { return false; } _vRet = (vU * _pP0.dist + _pP0.n.Cross( _pP1.n * _pP2.dist - _pP2.n * _pP1.dist )) / fDenom; return true; Note that the vector % operator is a cross product, and the vector * operator is a dot product, and LSM_FLT_EPSILON is defined as 1.192092896e-07f. The next step is to use this to get the 8 corner points, which is straight-forward. * Fills an array of 8 points with the corner points of a frustum. * \param _fFrustum The frustum whose corners are to be extracted. * \param _vRet The array of exactly 8 points into which the frustum corners are extracted. LSVOID LSE_CALL CPhysUtils::FrustumCorners( const CFrustum &_fFrustum, CVector3 _vRet[8] ) { CIntersect::ThreePlanes( _fFrustum[LSM_FP_FAR], _fFrustum[LSM_FP_BOTTOM], _fFrustum[LSM_FP_LEFT], _vRet[LSM_FP_FAR_BOTTOM_LEFT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_FAR], _fFrustum[LSM_FP_TOP], _fFrustum[LSM_FP_LEFT], _vRet[LSM_FP_FAR_TOP_LEFT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_FAR], _fFrustum[LSM_FP_TOP], _fFrustum[LSM_FP_RIGHT], _vRet[LSM_FP_FAR_TOP_RIGHT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_FAR], _fFrustum[LSM_FP_BOTTOM], _fFrustum[LSM_FP_RIGHT], _vRet[LSM_FP_FAR_BOTTOM_RIGHT] ); // Again with the near plane. 
CIntersect::ThreePlanes( _fFrustum[LSM_FP_NEAR], _fFrustum[LSM_FP_BOTTOM], _fFrustum[LSM_FP_LEFT], _vRet[LSM_FP_NEAR_BOTTOM_LEFT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_NEAR], _fFrustum[LSM_FP_TOP], _fFrustum[LSM_FP_LEFT], _vRet[LSM_FP_NEAR_TOP_LEFT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_NEAR], _fFrustum[LSM_FP_TOP], _fFrustum[LSM_FP_RIGHT], _vRet[LSM_FP_NEAR_TOP_RIGHT] ); CIntersect::ThreePlanes( _fFrustum[LSM_FP_NEAR], _fFrustum[LSM_FP_BOTTOM], _fFrustum[LSM_FP_RIGHT], _vRet[LSM_FP_NEAR_BOTTOM_RIGHT] ); Here, a special class acts as a wrapper for a frustum, but it is still just 6 planes. And we always access those planes via our enumerations above. Our array of 8 vectors used to store the points is also always accessed via our enumerations. We don’t check for a false return from the intersection function because if the frustum is not well formed, we have bigger problems on our hands than this. If other parts of the engine are not breaking and we come to this point, we can assume the frustum is well formed and all 8 points are valid. Finding the Outline and Putting it All Together With the 8 corner points at our disposal, we need a way to determine which points outline the view frustum from the light’s point-of-view. That is, which points were used to make the black outline in the first image above. The best way to do that is to find neighboring planes such that one plane is facing towards the light and the other facing away from it. In order to do this we need a table of neighboring planes. On input, we provide the plane we want to examine, and on output the function gives us its 4 neighbors. * Given a plane, the 4 neighbors of that plane are returned. * \param _fpPlane The plane for which to locate neighbors. * \param _fpRet Holds the returned neighbors. Must be at least 4 elements in the array. LSVOID LSE_CALL CFrustum::GetNeighbors( LSM_FRUSTUM_PLANES _fpPlane, LSM_FRUSTUM_PLANES _fpRet[4] ) { static const LSM_FRUSTUM_PLANES fpTable[LSM_FP_TOTAL][4] = { { // LSM_FP_LEFT { // LSM_FP_RIGHT { // LSM_FP_TOP { // LSM_FP_BOTTOM { // LSM_FP_NEAR { // LSM_FP_FAR for ( LSUINT32 I = 4UL; I--; ) { _fpRet[I] = fpTable[_fpPlane][I]; Nothing magic about this. It is just a table. If my for loop looks strange, it is because it is meant to be efficient. It is always more efficient to count down towards 0 than to count up to a given number, as the compiler has countless ways to check against 0 in very few clock cycles. But to be more efficient the loop could be unrolled. Now we can loop over the 6 planes of a frustum and get the neighbors of that plane. // For each plane. for ( LSUINT32 I = LSM_FP_TOTAL; I--; ) { // If this plane is facing away from us, move on. LSREAL fDir = _fFrustum[I].n * vDir; if ( fDir > LSM_ZERO ) { continue; } // For each neighbor of this plane. LSM_FRUSTUM_PLANES fpNeighbors[4]; CFrustum::GetNeighbors( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors ); for ( LSUINT32 J = 4UL; J--; ) { And as mentioned above, if one plane is facing towards us and its neighbor is facing away from us, we have found an edge. // For each plane. for ( LSUINT32 I = LSM_FP_TOTAL; I--; ) { // If this plane is facing away from us, move on. LSREAL fDir = _fFrustum[I].n * vDir; if ( fDir > LSM_ZERO ) { continue; } // For each neighbor of this plane. 
LSM_FRUSTUM_PLANES fpNeighbors[4]; CFrustum::GetNeighbors( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors ); for ( LSUINT32 J = 4UL; J--; ) { LSREAL fNeighborDir = _fFrustum[fpNeighbors[J]].n * vDir; // If this plane is facing away from us, the edge between plane I and plane J // marks the edge of a plane we need to add. if ( fNeighborDir > LSM_ZERO ) { // These planes form an edge on the outline. We need to find the two points that they share. Each plane in the frustum shares exactly 2 corner points with each of its neighbors. We have found an edge between two planes that lies in the outline shown in the first image above, but what are the points that create that edge? The easiest thing to do is to make another table. On input, we are going to provide 2 planes, and on output we want the indices of the 2 corners those planes share. As simple as a table such as this seems, there is an important note to be made. We need the planes we are about to add to consistently face inwards. In graphics programming, triangles on the outside of a mesh using a specific winding order in order to achieve this type of consistency. Borrowing from the world of graphics programming, we will also specify our points in a very precise order which will allow us to create planes that face in the direction we desire. We start with the declaration of our table. static const LSM_FRUSTUM_POINTS fpTable[LSM_FP_TOTAL][LSM_FP_TOTAL][2] = { This declares a 3-dimensional array in which the first 2 dimensions are the two input planes, and the third are the 2 points shared by that plane. In order to create the table, we must imagine we are looking directly at one plane and visualize from that perspective which 4 sides neighbor it. Maintaining a counter-clockwise order, we add the points for each plane to the table. { // LSM_FP_LEFT { // LSM_FP_LEFT LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_RIGHT LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_TOP LSM_FP_NEAR_TOP_LEFT, LSM_FP_FAR_TOP_LEFT, { // LSM_FP_BOTTOM LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_NEAR_BOTTOM_LEFT, { // LSM_FP_NEAR LSM_FP_NEAR_BOTTOM_LEFT, LSM_FP_NEAR_TOP_LEFT, { // LSM_FP_FAR LSM_FP_FAR_TOP_LEFT, LSM_FP_FAR_BOTTOM_LEFT, The left/left and left/right combinations are invalid. The rest can be seen easily in the image above. When we get to the left/top pair, we can look at the image and see that, going counter-clockwise, those planes share the near-top-left and far-top-left points. Moving on to the right side, things are fairly simple. The right side shares all of the same planes as the left side, but the winding is reversed. The next entries in our table are easy to add. Copy, replace LEFT with RIGHT, and reverse the order of each entry. { // LSM_FP_RIGHT { // LSM_FP_LEFT LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, // Invalid combination. { // LSM_FP_RIGHT LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, // Invalid combination. { // LSM_FP_TOP LSM_FP_FAR_TOP_RIGHT, LSM_FP_NEAR_TOP_RIGHT, { // LSM_FP_BOTTOM LSM_FP_NEAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, { // LSM_FP_NEAR LSM_FP_NEAR_TOP_RIGHT, LSM_FP_NEAR_BOTTOM_RIGHT, { // LSM_FP_FAR LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_TOP_RIGHT, Repeat this for every plane and the final table and function becomes this: * Given two planes, returns the two points shared by those planes. The points are always * returned in counter-clockwise order, assuming the first input plane is facing towards * the viewer. 
* \param _fpPlane0 The plane facing towards the viewer. * \param _fpPlane1 A plane neighboring _fpPlane0. * \param _fpRet An array of exactly two elements to be filled with the indices of the points * on return. LSVOID LSE_CALL CFrustum::GetCornersOfPlanes( LSM_FRUSTUM_PLANES _fpPlane0, LSM_FRUSTUM_PLANES _fpPlane1, LSM_FRUSTUM_POINTS _fpRet[2] ) { static const LSM_FRUSTUM_POINTS fpTable[LSM_FP_TOTAL][LSM_FP_TOTAL][2] = { { // LSM_FP_LEFT { // LSM_FP_LEFT LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_RIGHT LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_TOP LSM_FP_NEAR_TOP_LEFT, LSM_FP_FAR_TOP_LEFT, { // LSM_FP_BOTTOM LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_NEAR_BOTTOM_LEFT, { // LSM_FP_NEAR LSM_FP_NEAR_BOTTOM_LEFT, LSM_FP_NEAR_TOP_LEFT, { // LSM_FP_FAR LSM_FP_FAR_TOP_LEFT, LSM_FP_FAR_BOTTOM_LEFT, { // LSM_FP_RIGHT { // LSM_FP_LEFT LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, // Invalid combination. { // LSM_FP_RIGHT LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, // Invalid combination. { // LSM_FP_TOP LSM_FP_FAR_TOP_RIGHT, LSM_FP_NEAR_TOP_RIGHT, { // LSM_FP_BOTTOM LSM_FP_NEAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, { // LSM_FP_NEAR LSM_FP_NEAR_TOP_RIGHT, LSM_FP_NEAR_BOTTOM_RIGHT, { // LSM_FP_FAR LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_TOP_RIGHT, // == { // LSM_FP_TOP { // LSM_FP_LEFT LSM_FP_FAR_TOP_LEFT, LSM_FP_NEAR_TOP_LEFT, { // LSM_FP_RIGHT LSM_FP_NEAR_TOP_RIGHT, LSM_FP_FAR_TOP_RIGHT, { // LSM_FP_TOP LSM_FP_NEAR_TOP_LEFT, LSM_FP_FAR_TOP_LEFT, // Invalid combination. { // LSM_FP_BOTTOM LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_NEAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_NEAR LSM_FP_NEAR_TOP_LEFT, LSM_FP_NEAR_TOP_RIGHT, { // LSM_FP_FAR LSM_FP_FAR_TOP_RIGHT, LSM_FP_FAR_TOP_LEFT, { // LSM_FP_BOTTOM { // LSM_FP_LEFT LSM_FP_NEAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, { // LSM_FP_RIGHT LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_NEAR_BOTTOM_RIGHT, { // LSM_FP_TOP LSM_FP_NEAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_BOTTOM LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_NEAR_BOTTOM_LEFT, // Invalid combination. { // LSM_FP_NEAR LSM_FP_NEAR_BOTTOM_RIGHT, LSM_FP_NEAR_BOTTOM_LEFT, { // LSM_FP_FAR LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_BOTTOM_RIGHT, // == { // LSM_FP_NEAR { // LSM_FP_LEFT LSM_FP_NEAR_TOP_LEFT, LSM_FP_NEAR_BOTTOM_LEFT, { // LSM_FP_RIGHT LSM_FP_NEAR_BOTTOM_RIGHT, LSM_FP_NEAR_TOP_RIGHT, { // LSM_FP_TOP LSM_FP_NEAR_TOP_RIGHT, LSM_FP_NEAR_TOP_LEFT, { // LSM_FP_BOTTOM LSM_FP_NEAR_BOTTOM_LEFT, LSM_FP_NEAR_BOTTOM_RIGHT, { // LSM_FP_NEAR LSM_FP_NEAR_TOP_LEFT, LSM_FP_NEAR_TOP_RIGHT, // Invalid combination. { // LSM_FP_FAR LSM_FP_FAR_TOP_RIGHT, LSM_FP_FAR_TOP_LEFT, // Invalid combination. { // LSM_FP_FAR { // LSM_FP_LEFT LSM_FP_FAR_BOTTOM_LEFT, LSM_FP_FAR_TOP_LEFT, { // LSM_FP_RIGHT LSM_FP_FAR_TOP_RIGHT, LSM_FP_FAR_BOTTOM_RIGHT, { // LSM_FP_TOP LSM_FP_FAR_TOP_LEFT, LSM_FP_FAR_TOP_RIGHT, { // LSM_FP_BOTTOM LSM_FP_FAR_BOTTOM_RIGHT, LSM_FP_FAR_BOTTOM_LEFT, { // LSM_FP_NEAR LSM_FP_FAR_TOP_LEFT, LSM_FP_FAR_TOP_RIGHT, // Invalid combination. { // LSM_FP_FAR LSM_FP_FAR_TOP_RIGHT, LSM_FP_FAR_TOP_LEFT, // Invalid combination. _fpRet[0] = fpTable[_fpPlane0][_fpPlane1][0]; _fpRet[1] = fpTable[_fpPlane0][_fpPlane1][1]; Now we have detected edges and we have a way to get the points that form that edge. // For each plane. for ( LSUINT32 I = LSM_FP_TOTAL; I--; ) { // If this plane is facing away from us, move on. LSREAL fDir = _fFrustum[I].n * vDir; if ( fDir > LSM_ZERO ) { continue; } // For each neighbor of this plane. 
LSM_FRUSTUM_PLANES fpNeighbors[4]; CFrustum::GetNeighbors( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors ); for ( LSUINT32 J = 4UL; J--; ) { LSREAL fNeighborDir = _fFrustum[fpNeighbors[J]].n * vDir; // If this plane is facing away from us, the edge between plane I and plane J // marks the edge of a plane we need to add. if ( fNeighborDir > LSM_ZERO ) { LSM_FRUSTUM_POINTS fpPoints[2]; CFrustum::GetCornersOfPlanes( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors[J], fpPoints ); The main plane we are examining is the I plane. The J planes are neighbors. We pass the I plane first and the J plane second to maintain a winding consistent with what we designed in our above If we have 3 points, we have all the information we need to form a plane. Before we get our 3rd point, I will present the math for creating a plane from 3 points. LSE_INLINE LSE_CALLCTOR CPlane3::CPlane3( const CVector3 &_vPoint0, const CVector3 &_vPoint1, const CVector3 &_vPoint2 ) { n = CVector3::CrossProduct( _vPoint1 - _vPoint0, _vPoint2 - _vPoint0 ); dist = n.Dot( _vPoint0 ); So the last step is to find a 3rd point, and luckily this is the easiest step of all. Since we are a directional light, we have a vector that points directly away from us. We can find our 3rd point simply by adding it to either of the first 2 points. if ( fNeighborDir > LSM_ZERO ) { LSM_FRUSTUM_POINTS fpPoints[2]; CFrustum::GetCornersOfPlanes( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors[J], fpPoints ); CPlane3 pAddMe( _vCorners[fpPoints[0]], _vCorners[fpPoints[1]], _vCorners[fpPoints[0]] + vDir ); m_ckdKDop.AddPlane( pAddMe ); The whole function, including the code to add the back planes of the frustum, follows. * Createe the k-DOP for this directional light based on the corner points of a frustum. * \param _fFrustum The frustum. * \param _vCorners The 8 corner points of the frustum. LSVOID LSE_CALL CDirLight::MakeKDop( const CFrustum &_fFrustum, const CVector3 _vCorners[8] ) { CVector3 vDir = GetDir(); // Add planes that are facing towards us. for ( LSUINT32 I = LSM_FP_TOTAL; I--; ) { LSREAL fDir = _fFrustum[I].n * vDir; if ( fDir < LSM_ZERO ) { m_ckdKDop.AddPlane( _fFrustum[I] ); // We have added the back sides of the planes. Now find the edges. // For each plane. for ( LSUINT32 I = LSM_FP_TOTAL; I--; ) { // If this plane is facing away from us, move on. LSREAL fDir = _fFrustum[I].n * vDir; if ( fDir > LSM_ZERO ) { continue; } // For each neighbor of this plane. LSM_FRUSTUM_PLANES fpNeighbors[4]; CFrustum::GetNeighbors( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors ); for ( LSUINT32 J = 4UL; J--; ) { LSREAL fNeighborDir = _fFrustum[fpNeighbors[J]].n * vDir; // If this plane is facing away from us, the edge between plane I and plane J // marks the edge of a plane we need to add. if ( fNeighborDir > LSM_ZERO ) { LSM_FRUSTUM_POINTS fpPoints[2]; CFrustum::GetCornersOfPlanes( static_cast<LSM_FRUSTUM_PLANES>(I), fpNeighbors[J], fpPoints ); CPlane3 pAddMe( _vCorners[fpPoints[0]], _vCorners[fpPoints[1]], _vCorners[fpPoints[0]] + vDir ); m_ckdKDop.AddPlane( pAddMe ); Here, the directional light maintains its own k-DOP. The routine begins by going over the 6 planes of the frustum and adding the back planes. Then it goes through all of the planes facing towards us, and for each neighbor of those it seeks planes facing away from us. When found, the 2 points shared by those faces are retrieved and plugged into the array of points that was passed to the function. The only important member of the directional light here is its k-DOP. 
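The tutorial says the resulting plane set "can be fed into the same function I presented for culling objects from the frustum"; that function is from Part 1 and is not reproduced here. As a rough, self-contained sketch of such a test (using plain stand-in types rather than the engine's CVector3/CPlane3/CClassify, so every name below is my own), an AABB is kept as a potential shadow caster unless it lies entirely behind one of the inward-facing planes:

#include <cstddef>
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float dist; };            // same convention as the article: dot(n, p) - dist

// Signed distance from the plane to a point; positive means "on the inside".
static float SignedDistance( const Plane &p, const Vec3 &v ) {
    return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z - p.dist;
}

// Conservative AABB-vs-plane-set test. For each plane we check the corner of the
// box farthest along the plane normal (the "positive vertex"); if even that corner
// is behind the plane, the whole box is behind it and can be culled.
bool AabbIntersectsKDop( const Vec3 &vMin, const Vec3 &vMax, const std::vector<Plane> &planes ) {
    for ( std::size_t i = 0; i < planes.size(); ++i ) {
        const Plane &p = planes[i];
        Vec3 vPositive = {
            p.n.x >= 0.0f ? vMax.x : vMin.x,
            p.n.y >= 0.0f ? vMax.y : vMin.y,
            p.n.z >= 0.0f ? vMax.z : vMin.z
        };
        if ( SignedDistance( p, vPositive ) < 0.0f ) {
            return false;                        // fully outside one plane: cannot cast a visible shadow
        }
    }
    return true;                                 // inside or straddling every plane: keep it
}

Objects that pass such a test are exactly the candidates worth rendering into the shadow map. Returning to the tutorial's own class, the k-DOP member mentioned above is declared as follows.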
* The k-DOP bounding volume for this light. CCappedKDop<13UL> m_ckdKDop; The maximum number of planes that can be generated by this algorithm is 11, but we use 13 because, in the future, we will want to add 2 fake planes (a discussion for another time). Assume an orthogonal projection aligned perfectly along each of the world axis, and a light pointing directly towards it exactly along the X axis. The only real back plane of the frustum is the one to the -X side, but due to floating-point errors it is conceivable that the +Y, -Y, +Z, and -Z planes may be redundantly added. In general, the planes should pass either the back-side test or the edge test and not both, but a stable implementation must assume the worst can happen. Note that if it does happen, it will not affect the stability of the culling test, only its efficiency. If you find your edge planes pointing in the wrong direction, use – vDir instead of + vDir for the 3rd point. The k-DOP we have created can be fed into the same function I presented for culling objects from the frustum and can be used to determine the bare minimum of objects that may cast visible shadows. This routine can also be extended to create occlusion boxes for AABB’s and even OBB’s. If you define an occluder with either of these, the same edge detection and k-DOP construction can be used to form a type of k-DOP that is used to exclude objects from a cull (here, we keep objects that are fully or partially inside the k-DOP, whereas with occlusion culling we reject objects that are fully inside the k-DOP). An astute reader may have noticed that we never closed the k-DOP. From our point of view as the light, we never added a near plane. This basically allows the k-DOP to extend infinitely towards us from the view frustum. If you don’t want objects to cast shadows from 10 miles away, you can add a near plane whose normal is the same as the directional light’s direction, and whose distance is ((PlayerPos DOT LightDir) – SomeDistance). If your plane distances are not negated, you will need to negate this value to get the actual plane distance. Creating an air-tight k-DOP around a view frustum is not such a difficult task conceptually, and its implementation is efficient. I hope this guide can help those who have either not yet implemented shadows or have implemented directional-light shadows but without an air-tight k-DOP. L. Spiro 17 Awesome Comments So Far Don't be a stranger, join the discussion by leaving your own comment 1. belfegor December 16, 2012 at 6:28 AM # Just to say thank you very much. I got this working “somewhat”, but i think i made a mistake somewhere, because when i display count of objects rendered seen by “sun” light and by “player” camera there is huge difference then i expected, but it is still better then to render everything from “sun” point of view. Shadow maps are rendered correctly (i think) because there is no sudden and unexpected disappearance when i “walk” thru scene. I needed to negate -vDir as you mentioned: CPlane3 pAddMe( _vCorners[fpPoints[0]], _vCorners[fpPoints[1]], _vCorners[fpPoints[0]] – vDir ); I failed to use your Clasify::AABox & CTest::AabbKDop functions properly as those didn’t work for me somehow, but instead i used functions from other tutorial on the net i have found earlier: and everything else is the same. If you find time could you possibly edit “CascadedShadowMaps11″ sample from DX SDK and add this culling technique? Thanks again. □ L. 
Spiro December 19, 2012 at 8:18 AM # If you had to invert vDir then either your winding is in reverse order or your planes are stored in reverse. So to get the culling test to work, inside CClassify::Aabb(), try adding _pPlane.dist instead of subtracting it. I might look into modifying the DirectX samples in the future. L. Spiro 2. chenQuan January 13, 2013 at 3:41 PM # I think your method is very efficient way to find out the point inside three plane , but I don’t really understand this line of code “_vRet = (vU * _pP0.dist + _pP0.n.Cross( _pP1.n * _pP2.dist – _pP2.n * _pP1.dist )) / fDenom;” Would you please explain the math behind this code ? □ L. Spiro January 14, 2013 at 1:50 AM # That is quite a long line of math and a lot happening there. Is there a specific part of it that you don’t understand? You will also be able to find more through Google than I could tell you. The search terms would be “Intersection point of Bundle of Planes”. L. Spiro ☆ chenQuan January 15, 2013 at 11:42 AM # that is my first time leaving a comment to an English blog,it looks not bad at least you understood what I said.I found the answer in today.Thank you ○ chenQuan January 15, 2013 at 11:48 AM # in “real time collision detection” the book name between Book bracket disapeard ~ 3. belfegor January 24, 2013 at 7:16 AM # Actually this works great for me now. This time i have created k-dop for each split in “cascades”, previously i was creating just one from “whole” cam frustum witch was wrong. Again, thank you very much. 4. belfegor May 13, 2013 at 6:05 AM # I got question about creating plane from 3 points, last line you say: dist = n.Dot( _vPoint0 ); In some implementations i have seen that it is: dist = -n.Dot( _vPoint0 ); Did you made a typo or it should be like that? □ L. Spiro May 13, 2013 at 7:52 AM # As I mentioned in the post some implementations use the reversed distance because it makes certain equations slightly faster, since many planar equations require the reversed distance over the actual distance. You have to pick one convention or the other and be sure to be consistent. The code I posted is consistent within its convention, but if you get your planar equations from multiple sources you have to be careful about which convention they are using. L. Spiro 5. belfegor May 13, 2013 at 8:51 PM # I am trying to figure out why do i had to subtract instead add light direction like in this tutorial: //CPlane3 pAddMe( _vCorners[fpPoints[0]], _vCorners[fpPoints[1]], _vCorners[fpPoints[0]] + vDir ); CPlane3 pAddMe( _vCorners[fpPoints[0]], _vCorners[fpPoints[1]], _vCorners[fpPoints[0]] – vDir ); And few more stupid question if you don’t mind. CVector3 vDir = GetDir(); // direction of light Is it to light, or to light target? lightPos – lightTarget lightTarget – lightPos I see in some functions you use Dot product function, like this: dist = n.Dot( _vPoint0 ); but somewhere: LSREAL fDir = _fFrustum[I].n * vDir; Since it returns Real i assume it is same as dot? Is it? Thank you for your help so far. □ L. Spiro May 14, 2013 at 1:32 AM # For #1, lights don’t have target positions. At a high level you could lock them to point at certain things, but at a normal level they just have directions, and these are the directions they actually face, not inversed. Again using the inverse direction is a popular choice since it simplifies some equations but here it is not. #2: I have some old code in which I used CVector3::Dot(), but it is all being ported to use the * operator. They are the same. L. Spiro 6. 
belfegor May 13, 2013 at 9:08 PM # I am really sorry to bother you again with noobish questions, i forgat about one more. If i want to know relation with point against this plane created this way, how do i calculate it? I’ve seen implementation like this when distance is -dot(point, normal): relation = plane.normal.dot(point) + plane.distance; Thank you. □ L. Spiro May 14, 2013 at 1:27 AM # The way my planes are stored, the equation is: * Gets the distance from this plane to a point. This plane must be normalized. * \param _vPoint Point from which to get the distance to this plane. * \return Returns the distance from the point to this plane, signed. LSE_INLINE _tType LSE_FCALL SignedDistance( const _tVec3Type &_vPoint ) const { return n.Dot( _vPoint ) – dist; L. Spiro 7. belfegor May 14, 2013 at 1:53 AM # Thank you very much for patience and answers. 8. Louise November 27, 2013 at 7:10 PM # Fine tutorial, i’v added this to my engine, but it’s (not a suprise) not working. My coords system is RightHanded, Y-Up. The problem is probably with frustum, my frustum planes are defived from CameraProjectionMatrix (comboMatrix) as follows: // matrix is realy XMFLOAT4X4 (row-major order) // Left clipping plane m_pPlanes[LSM_FP_LEFT] = CSPlane(comboMatrix._14 + comboMatrix._11, comboMatrix._24 + comboMatrix._21, comboMatrix._34 + comboMatrix._31, -(comboMatrix._44 + comboMatrix._41)); // Right clipping plane m_pPlanes[LSM_FP_RIGHT] = CSPlane(comboMatrix._14 – comboMatrix._11, comboMatrix._24 – comboMatrix._21, comboMatrix._34 – comboMatrix._31, -(comboMatrix._44 – comboMatrix._41)); // Top clipping plane m_pPlanes[LSM_FP_TOP] = CSPlane(comboMatrix._14 – comboMatrix._12, comboMatrix._24 – comboMatrix._22, comboMatrix._34 – comboMatrix._32, -(comboMatrix._44 – comboMatrix._42)); // Bottom clipping plane m_pPlanes[LSM_FP_BOTTOM] = CSPlane(comboMatrix._14 + comboMatrix._12, comboMatrix._24 + comboMatrix._22, comboMatrix._34 + comboMatrix._32, -(comboMatrix._44 + comboMatrix._42)); // Near clipping plane m_pPlanes[LSM_FP_NEAR] = CSPlane(comboMatrix._13, comboMatrix._23, comboMatrix._33, -comboMatrix._43); // Far clipping plane m_pPlanes[LSM_FP_FAR] = CSlane(comboMatrix._14 – comboMatrix._13, comboMatrix._24 – comboMatrix._23, comboMatrix._34 – comboMatrix._33, -(comboMatrix._44 – comboMatrix._43)); the – at the ‘d’ component of plane is here to store positive distance (that you also use) not the -d if i draw the frustum by taking for each plane an enormously big quad and clipping it to 5 remaining planes looks just fine – so the frustum itself should be fine i think. but then when i dra the points calculated by CIntersect::ThreePlanes fro your sample, they do not go into corners of frustum, instead they are scattered all over in space. http://imageshack.com/a/img812/1641/zpej.jpg (the image is taken by ‘external debuging camera’, the real camera frustum is where it is imaged) green stars are the calculated frustum corners. 
If i replace those calculated by CIntersect::ThreePlanes with my own ones that lies where I thing they should be (at the corners of the white frustum outline) the algorithm works but only when my camera direction is mainly that same as the light direction (+/- about 30 degrees) if that magic barrier is crossed the second part of CDirLight::MakeKDop get screved (the one with edges) and I end up with KDop that have so weird planes orientation that it encloses nothing ;( my light direction is fine (assuming the light is from top of the sky it would be (0, -1, 0)) any ideas what may be wrong ? do the corners calculated by CIntersect::ThreePlanes should lie exacly where the frustum corners are ? (if so, i have idea that the order of frustum planes is wrong for me) or they are free to go as in the image ? Thanks for any ideas where to check. □ L. Spiro November 29, 2013 at 11:13 PM # Sorry, this was marked as spam so I did not see it until now. But glad you could solve the problem. L. Spiro 9. Louise November 27, 2013 at 11:48 PM # all I need was to swap places of LEFT and RIGHT planes in my frustum planes calculations AND provide new version of IntersectThreePlanes function Quick replacement using DXMath: static bool IntersectThreePlanes(const CSPlane &_pP0, const CSPlane &_pP1, const CSPlane &_pP2, XMFLOAT4 &_vRet) XMVECTOR PNS = XMVectorSet(1.0f, 1.0f, 1.0f, -1.0f); XMVECTOR LNP0 = XMVectorZero(); XMVECTOR LNP1 = XMVectorZero(); XMVECTOR P0 = XMVectorMultiply(XMLoadFloat4(&_pP0.n), PNS); XMVECTOR P1 = XMVectorMultiply(XMLoadFloat4(&_pP1.n), PNS); XMVECTOR P2 = XMVectorMultiply(XMLoadFloat4(&_pP2.n), PNS); XMPlaneIntersectPlane(&LNP0, &LNP1, P0, P1); XMVECTOR Res = XMPlaneIntersectLine(P2, LNP0, LNP1); XMStoreFloat4(&_vRet, Res); return true;
{"url":"http://lspiroengine.com/?p=187","timestamp":"2014-04-19T11:57:00Z","content_type":null,"content_length":"76759","record_id":"<urn:uuid:61563779-ce59-4b75-8771-b6e6180ba5ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Tables; Mathematical Tables;: Containing the Common, Hyperbolic, and Logistic Logarithms, Also Sines, Tangents, Secants, & Versed-sines Both Natural and Logarithmic. Together with Several Other Tables Useful in Mathematical Calculations. To which is Prefixed a Large and Original History of the Discoveries and Writings Relating to Those Subjects; with the Complete Description and Use of the Tables (Google Charles Hutton F. C. and J. Rivington; Wilkie and Robinson; J. Walker; Lackington, Allen, and Company; Vernor, Hood, and Sharpe; C. Law; Longman, Hurst, Rees, Orme and Brown; Black, Parry, and Kingsbury; J. Richardson; L.B. Seeley; J. Murray; R. Baldwin; Sherwood, Neely and Jones; Gale and Curtis; J. Johnson and Company; and G. Robinson. , 1811 - 133 pages We haven't found any reviews in the usual places. Popular passages The rectangle contained by the diagonals of a quadrilateral inscribed in a circle is equal to the sum of the two rectangles contained by its opposite sides. Seeing there is nothing (right well-beloved Students of the Mathematics) that is so troublesome to mathematical practice, nor that doth more molest and hinder calculators, than the multiplications, divisions, square and cubical extractions of great numbers, which besides the tedious expense of time are for the most part subject to many slippery errors, I began therefore to consider in my mind by what certain and ready art I might remove those hindrances. Kepler's work, however, it may not be improper in this place to take notice of an... Tables of Logarithms, for all numbers from 1 to 102100, and for the sines and tangents to every ten seconds of each degree in the quadrant; as also, for the sines of the first 72 minutes to every single second : with other useful and necessary tables... DIVISION BY LOGARITHMS. RULE. From the logarithm of the dividend subtract the logarithm of the divisor ; the remainder will be the logarithm of the quotient EXAMPLE I. Ixi following. But when the perpendicular falls out of the triangle, the difference of the two arcs will be the side required. PROP. XXVI. — Having two sides, and the angle opposite to one of them ; to find the angle between them. ... as in a continued scale of proportionals, infinite in number, between the two terms of the ratio ; which infinite number of mean proportionals, is to that infinite number of the like and equal... Then, because the sum of the logarithms of numbers, gives the logarithm of their product ; and the difference of the logarithms, gives the logarithm of the quotient of the numbers : from the tw... ... numbers. Which hint Neper taking, he desired him at his return to call upon him again. Craig, after some weeks had passed, did so, and Neper then shewed him a rude draught of that he called ' Canon Mirabilis Logarithmorum. Multiply the logarithm of the number given by the proposed index of the power, and the product will be the logarithm of the power sought Note. Bibliographic information
{"url":"http://books.google.com/books?id=zDMAAAAAQAAJ","timestamp":"2014-04-16T16:00:03Z","content_type":null,"content_length":"121488","record_id":"<urn:uuid:54b34f57-74d6-4e91-b358-70e0232f7790>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Really need help understanding this limit/derivative question... The limit \[\lim_{x \rightarrow 5\pi} \frac{\cos x + 1}{x - 5\pi}\] represents the derivative of some function f(x) at some number a. Find f and a. I don't even understand the wording of this question...

Best Response:
There is an alternative form for the definition of a derivative...\[f'(a)=\lim_{x \rightarrow a} \frac{f(x)-f(a)}{x-a}\]It is often used when you are analyzing a piecewise defined function where the continuity is questionable. If we re-write your problem to fit this form...\[f'(5\pi)=\lim_{x \rightarrow 5\pi} \frac{f(x)-f(5\pi)}{x-5\pi}\]If f(x) is the cosine...\[f'(5\pi)=\lim_{x \rightarrow 5\pi} \frac{\cos (x)-\cos (5\pi)}{x-5\pi}\]From this, it looks like f(x)=cos(x) and a=5pi.
{"url":"http://openstudy.com/updates/50d23d2ae4b069abbb71572f","timestamp":"2014-04-21T10:29:29Z","content_type":null,"content_length":"28310","record_id":"<urn:uuid:6ca20a95-c637-4988-a030-194e32cb1049>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Prealgebra & Introductory Algebra, Third Edition
{"url":"http://www.coursesmart.com/9780321649492","timestamp":"2014-04-17T07:58:40Z","content_type":null,"content_length":"79493","record_id":"<urn:uuid:164b7022-9d28-473d-b67c-c3ee0ce30828>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial questions

I tried doing these, but I always got the wrong answer or just got stuck on one part.

1. Resolve algebraically.
I have absolutely no clue on how to start with this one.

2. $4+\sqrt{x-2}=x$
I squared both sides, but it kept getting more and more complicated and really confused me.

3. Determine the coordinates of the intersection points between the line $y=x+7$ and the circle $(x+2)^2+(y-3)^2=52$.
I tried substituting the line into the circle equation, but then I didn't know where to go from there.

Thanks guys.

Reply:
1. Hint: factor $x^3+4x^2-7x-10 = (x+1)(...)$
2. Hint: rewrite $4+\sqrt{x-2}=x$ as $\sqrt{x-2}=x - 4$. Now square both sides.
3. Yes, your approach is correct. What did you get after substituting? Simplify, and you should have a quadratic equation.

I got $x=\frac{-13\pm\sqrt{-900}}{20}$

Careful! I think you've made the mistake of square rooting both sides. You can't do that here because $\sqrt{x^2+a^2} \neq x + a$. Let $x = 3$ and $a = 4$ to prove this quite simply yourself.

Anyway, going back to your question: substituting $y = x + 7$ into $(x+2)^2+(y-3)^2=52$ gives (do not square root here -- you have to expand the brackets):
$x^2 + 4x + 4 + x^2 + 8x + 16 = 52$
Tidy from here and factorize - you get a nice answer.

Thanks guys, that really helped!
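For anyone checking their own working, here is roughly where each of the three problems ends up (a sketch built from the hints above):
1. $x^3+4x^2-7x-10 = (x+1)(x^2+3x-10) = (x+1)(x+5)(x-2)$, so the roots are $x=-1$, $x=-5$ and $x=2$.
2. Squaring $\sqrt{x-2}=x-4$ gives $x-2=x^2-8x+16$, i.e. $x^2-9x+18=(x-3)(x-6)=0$. Checking both candidates in the original equation shows that $x=3$ is extraneous, so $x=6$ is the only solution.
3. Expanding and tidying gives $2x^2+12x-32=0$, i.e. $x^2+6x-16=(x+8)(x-2)=0$, so the intersection points are $(-8,-1)$ and $(2,9)$; both satisfy the circle equation.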
{"url":"http://mathhelpforum.com/algebra/168390-polynomial-questions.html","timestamp":"2014-04-18T22:50:35Z","content_type":null,"content_length":"43941","record_id":"<urn:uuid:588bbcc8-d238-4ec4-b0f4-bafdcaa1a634>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Drama: 30
Math: 25

The table above shows the number of students in 3 clubs at McAuliffe School. Although no student is in all 3 clubs, 0 students are in both Chess and Drama, 5 are in both chess and math, and 6 are in both drama and math. How many different students are in the 3 clubs?
(A) 68  (B) 69  (C) 74  (D) 79  (E) 84

Solution: Answer C. The solution shows a Venn Diagram and adds up ALL the parts - i.e. 25+14+14+5+10+6. But shouldn't the solution be the 25+14+14, since these are the ones that show all the DIFFERENT students? Thank you!

According to the rest of the info given, there must be a typo - the poster must have meant to say there were 10 students in both chess and drama. If you add only 25, 14, and 14, you will not be adding all of the different students.

Take a look at the Venn diagram again before you read the rest of this. The 25 in the big portion of the Chess circle represents the students who are taking ONLY chess. It does not represent any of the students who are taking chess plus something else. Ditto with each of the two 14's. We don't want the total number of students who are in ONLY ONE club - we want to count all of them, including the ones in 2 clubs.

So 25+14+14 is the starting point, but now we have to add the students who are in 2 clubs. There are 10 students in both chess and drama, so add 10. 6 students are in both drama and math, so add 6. 5 students are in both chess and math, so add 5.

Notice that the purpose of the Venn diagram is to split out all of the categories separately - those who are in exactly one club, those who are in exactly two clubs, and those who are in all three clubs (zero, in this case).

Stacey Koprince
Director of Online Community
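A quick arithmetic check of the region counts described in the reply (plain Python, just adding up the Venn-diagram regions):

    # Students in exactly one club: the "big" portions of the three circles
    exactly_one = 25 + 14 + 14
    # Students in exactly two clubs (taking the chess/drama overlap as 10, as the reply argues)
    exactly_two = 10 + 6 + 5
    # No student is in all three clubs
    print(exactly_one + exactly_two)   # 74 -> answer C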
{"url":"http://www.manhattangmat.com/forums/post4831.html","timestamp":"2014-04-21T04:32:57Z","content_type":null,"content_length":"35559","record_id":"<urn:uuid:d344d7d4-8f11-409d-a4e7-1a97a5a67835>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Combinatorics

Hi Agnishom;

Here is the first gf problem I ever saw. Let's start right here.

How many ways can you pick r balls from 3 green, 3 white, 3 blue and 3 gold?
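One way to see the generating-function machinery at work on this problem: each colour contributes a factor (1 + x + x^2 + x^3), since you can take 0, 1, 2 or 3 balls of that colour, and the coefficient of x^r in the product of the four factors counts the selections of r balls. A quick check with sympy (a third-party Python library):

    from sympy import expand, symbols

    x = symbols("x")
    gf = expand((1 + x + x**2 + x**3) ** 4)       # one factor per colour
    counts = [gf.coeff(x, r) for r in range(13)]  # coefficient of x**r = ways to pick r balls
    print(counts)
    # [1, 4, 10, 20, 31, 40, 44, 40, 31, 20, 10, 4, 1]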
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=285085","timestamp":"2014-04-18T19:05:45Z","content_type":null,"content_length":"13808","record_id":"<urn:uuid:160c676d-53da-4e5f-b8ed-c45c14dbc122>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
John D. My name is John D. and I'm 28 years old. I graduated from UMass Dartmouth in 2008. I have been tutoring for ten years now. I worked as an assistant teacher at Wareham Cooperative High School for two years. Currently, I work at Fairhaven High School as a Paraprofessional. I enjoy helping students learn math. It's rewarding. Besides tutoring, I enjoy playing basketball with my friends. I'm looking for tutoring hours during the evening except Mondays. My approach is to see where the students need the most help and strengthen those weaknesses. I don't want the student to fall behind. I want the student to feel comfortable, and not discouraged. I want the students to feel that they have accomplished something. I first started tutoring when I was attending Cape Cod Community College. I tutored in algebra and precalculus. When I transferred to UMass Dartmouth, I continued tutoring in math from algebra I to calculus. Right now I tutor in UMD Primes, a program for UMass Dartmouth on Mondays for Algebra I. I connect with the students. I'm an experienced tutor. I love working with students who are willing to put in the time to learn. I have a strong understanding in math. I majored in math in college. I have been tutoring ever since I was in high school and it has been great. John's subjects
{"url":"http://www.wyzant.com/Tutors/MA/New-Bedford/7579764/?g=3FI","timestamp":"2014-04-16T04:56:19Z","content_type":null,"content_length":"84155","record_id":"<urn:uuid:3551cbca-c9ff-4c83-a123-07296a392c9f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: RE: Controlling alignment of left AND right axes in combined gra [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: RE: Controlling alignment of left AND right axes in combined graphs From "Marcelo Villafani" <villafani.1@osu.edu> To <statalist@hsphsun2.harvard.edu> Subject st: RE: RE: Controlling alignment of left AND right axes in combined graphs Date Mon, 19 Mar 2007 17:12:58 -0400 Thank you Scott for your kind help. The problem is that the labels in yaxis(2) are left aligned (first digit next to yaxis) and I need them right aligned. I tried to solve this introducing labstyle() in your code, that is: ylabel(,angle(h) axis(2) format(%10.0fc) labstyle(right)) But that makes the problem worst since labstyle() seems only to work with text labels. Is there a justification option for numeric labels that I am missing? -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Scott Merryman Sent: Monday, March 19, 2007 12:26 PM To: statalist@hsphsun2.harvard.edu Subject: st: RE: Controlling alignment of left AND right axes in combined > -----Original Message----- > From: owner-statalist@hsphsun2.harvard.edu [mailto:owner- > statalist@hsphsun2.harvard.edu] On Behalf Of Marcelo Villafani > Sent: Sunday, March 18, 2007 9:08 AM > To: statalist@hsphsun2.harvard.edu > Subject: st: Controlling alignment of left AND right axes in combined > graphs > I am plotting two separate graphs that have horizontal ylabels on the > right > and left axes, that later are combined in a col(1) orientation with the > second plot providing xlabels for both of them. The problem is that the > y-axes have different numbers of digits and I cannot align the ylabels. > I found that a similar question was posted last January regarding the left > axis, but I couldn't adjust the suggested solution to the case where left > and right y-axes are used simultaneously. How about this? set obs 100 gen x = uniform() gen y1 = uniform()*10 gen y2 = uniform()*4000 gen y3 = uniform()*1000 gen y4 = uniform()*900000 //Upper left qui sum y1 _natscale `=r(min)' `=r(max)' 5 local m1 = r(max) local l1 = length("`m1'") //Upper right qui sum y2 _natscale `=r(min)' `=r(max)' 5 local m2 = r(max) local l2 = length("`m2'") //Lower left qui sum y3 _natscale `=r(min)' `=r(max)' 5 local m3 = r(max) local l3 = length("`m3'") //Lower right qui sum y4 _natscale `=r(min)' `=r(max)' 5 local m4 = r(max) local l4 = length("`m4'") local diff_left = abs((`l1'- `l3')*2 ) local diff_right = abs((`l2' - `l4')*2) display "lengths: `l1' `l2' `l3' `l4' `diff_left' `diff_right' " scatter y1 x, ylabel(,angle(h)) xlabel(none) xtitle("") /// ytitle(, margin(0 `diff_left' 0 0)) /// || scatter y2 x, yaxis(2) ylabel(,angle(h) axis(2)) legend(off) /// ytitle(, margin(`diff_right' 0 0 0) axis(2)) name(gr1, replace) scatter y3 x, ylabel(,angle(h)) xlabel(none) ytitle(, margin(0 0 0 0)) /// || scatter y4 x, yaxis(2) ylabel(,angle(h) axis(2)) /// legend(order(1 "Y1 or Y3" 2 "Y2 or Y4")) /// ytitle(, margin(0 0 0 0) axis(2) ) name(gr2, replace) graph combine gr1 gr2, col(1) name(gr_combine, replace) graph drop gr1 gr2 * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2007-03/msg00747.html","timestamp":"2014-04-18T14:04:49Z","content_type":null,"content_length":"9413","record_id":"<urn:uuid:2a00899b-9adc-421a-bb1a-dc5aa1f8fa8f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Sum of a range of numbers?

I have been searching for a solution to my problem, but I have not been able to find one yet, which is why I am making my first post here. I am working on a project that lets the registered members submit a weekly worksheet with a number of multiple choice questions. Each week the number of questions changes, but there are always 2 choices for each question. For each question the member selects which answer they think is correct, and a 'confidence rating'. The confidence rating is used to score how much confidence they have that their answer choice is correct. For a sheet with 16 questions, the member has to rate each question with 1-16, and each value can only be used one time. Each sheet has a total possible points score, which is the sum of each possible rating (i.e. 1+2+3...14+15+16 = 136). For each week, and year, there will be a records page that will show the number of points, and the member's 'average' for that time frame. For the 'average' I have to take the number of points the member received, and divide it by the number of points possible. I know this is really more of a percentage, but this is the way the client wants this process done.

So, is there a way to figure out the number of total points possible for each sheet in mysql without having to create a new column in one of my tables with the value, or use php? The tables are currently set up as follows:

table: member_sheets
sheet_id | member_id | week_id

table: member_answers
answer_id | sheet_id | question_id | answer_choice | answer_rating | answer_result (correct||incorrect)

table: questions
question_id | question_text | question_choiceA | question_choiceB | question_answer (a||b)

Thank you in advance to anyone who can help or lend some advice.

Let me rephrase your question so I can make sure I understand what you're asking. You want to know the total number of points possible for a sheet, which has any number of questions, and each question is worth from 1 to n, n being the number of questions on the sheet, with no number being used twice.

Not using PHP, you can do a group by and sum the column answer_rating, but that depends on the accuracy of the member entries. If you have validation on the form so you are guaranteed a unique rating value for every question, then you can do it this way:

    SELECT sum(answer_rating) FROM member_answers GROUP BY sheet_id

This could be wrong depending on how your data looks. You could also count up the number of questions for a sheet and then accumulate the rating total in a loop, but that's using PHP.

Thank you very much! Worked like a charm
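Worth noting: when the ratings are guaranteed to be exactly 1..n with no repeats, the maximum score for a sheet does not need to be summed row by row at all; it is simply n(n+1)/2 for n questions. A quick illustration in Python (any language would do the same arithmetic):

    def max_points(num_questions: int) -> int:
        """Maximum score for a sheet whose ratings are 1..n, each used exactly once."""
        return num_questions * (num_questions + 1) // 2

    print(max_points(16))   # 136, matching the 1+2+...+16 example in the question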
{"url":"http://www.codingforums.com/mysql/96532-sum-range-numbers.html","timestamp":"2014-04-19T07:11:41Z","content_type":null,"content_length":"65980","record_id":"<urn:uuid:56b905c2-e21b-43e3-8011-824cf5b63ac7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Additive combinatorics (by Ryan O'Donnell) For a while now I've taken a dilettantish interest in the subject of Additive Combinatorics . If A is a subset of the integers {1, ..., N}, let A + A denote the set {a + b : a, b in A}. Additive Combinatorics studies such questions as, "If A + A is small compared to |A|^2, what can be said about A?" and "What is the size of the largest A that contains no nontrivial length-3 arithmetic progression?" My interest in this topic was piqued a little over a year ago by the confluence of three events. First, the Green-Tao theorem was announced: The primes contain arbitrarily long arithmetic progressions. Second, the Barak-Impagliazzo-Wigderson paper on extracting randomness from independent sources came out, which crucially used a 2003 result from additive combinatorics by Bourgain, Katz, and Tao. Third, I was reading some of the course notes and expositional papers on Ben Green's webpage (mainly because he's a masterfully clear writer) and I realized that many of the favourite techniques used in additive combinatorics are also favourite techniques in theoretical computer science -- the probabilistic method, graph theory, discrete Fourier analysis, finite fields and low-degree polynomials therein... So what are the applications to theoretical computer science? There are already at least a few: • More recent work on multisource extractors; e.g., the paper of Dvir and Shpilka or the recent work of Jean Bourgain (mentioned here). • Szemer�di's Regularity Lemma, used nonstop in the area of property testing, was developed originally for "Szemer�di's Theorem" from additive combinatorics. Ben Green also has a version of the regularity lemma for boolean functions, which I used in a paper on hardness of approximation for "Grothendieck problems". • Tao and Vu's recent papers on the probability that a random boolean matrix has determinant zero. • Chandra, Furst, and Lipton, inventors of the "number on the forehead" model in communication complexity, gave surprisingly efficient communication upper bounds based on arithmetic progressions in • The most-cited work in TCS history (maybe): Coppersmith and Winograd's n^{2.376} matrix multiplication algorithm. The title: Matrix multiplication via arithmetic progressions. I like to think that many more applications and connections to TCS are on the way. Here are a few scattershot ideas... anyone want to chime in with others? For an introduction to additive combinatorics, one might look 3 comments: 1. thanks very much for many useful links. i enjoy your posts. 2. fantastic post. Informative and thought-provoking. 3. Hi Ryan, Neat posts! One comment though: although widely cited, CW'90 is not likely to be most cited paper in TCS. A few counterexamples: - RSA paper: 2938 citations - Karp's NP-completeness: 1367 citations - Shor's quantum factoring paper: 707 citations - Bentley's kd-tree paper: 786 citations (name misspelled as "Bently") - CW'90: 535 citations All data according to Google Scholar.
{"url":"http://blog.computationalcomplexity.org/2005/08/additive-combinatorics-by-ryan.html","timestamp":"2014-04-20T13:23:43Z","content_type":null,"content_length":"160553","record_id":"<urn:uuid:34672901-4b29-415e-8dda-c8ea738b0f6f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Crypto Functions This module provides a set of cryptographic functions. • md4: The MD4 Message Digest Algorithm (RFC 1320) • md5: The MD5 Message Digest Algorithm (RFC 1321) • sha: Secure Hash Standard (FIPS 180-2) • hmac: Keyed-Hashing for Message Authentication (RFC 2104) • des: Data Encryption Standard (FIPS 46-3) • aes: Advanced Encryption Standard (AES) (FIPS 197) • ecb, cbc, cfb, ofb: Recommendation for Block Cipher Modes of Operation (NIST SP 800-38A). • rsa: Recommendation for Block Cipher Modes of Operation (NIST 800-38A) • dss: Digital Signature Standard (FIPS 186-2) The above publications can be found at NIST publications, at IETF. byte() = 0 ... 255 ioelem() = byte() | binary() | iolist() iolist() = [ioelem()] Mpint() = <<ByteLen:32/integer-big, Bytes:ByteLen/binary>> start() -> ok Starts the crypto server. info() -> [atom()] Provides the available crypto functions in terms of a list of atoms. info_lib() -> [{Name,VerNum,VerStr}] • Name = binary() • VerNum = integer() • VerStr = binary() Provides the name and version of the libraries used by crypto. Name is the name of the library. VerNum is the numeric version according to the library's own versioning scheme. VerStr contains a text variant of the version. > info_lib(). [{<<"OpenSSL">>,9469983,<<"OpenSSL 0.9.8a 11 Oct 2005">>}] md4(Data) -> Digest • Data = iolist() | binary() • Digest = binary() Computes an MD4 message digest from Data, where the length of the digest is 128 bits (16 bytes). md4_init() -> Context Creates an MD4 context, to be used in subsequent calls to md4_update/2. md4_update(Context, Data) -> NewContext • Data = iolist() | binary() • Context = NewContext = binary() Updates an MD4 Context with Data, and returns a NewContext. md4_final(Context) -> Digest • Context = Digest = binary() Finishes the update of an MD4 Context and returns the computed MD4 message digest. md5(Data) -> Digest • Data = iolist() | binary() • Digest = binary() Computes an MD5 message digest from Data, where the length of the digest is 128 bits (16 bytes). md5_init() -> Context Creates an MD5 context, to be used in subsequent calls to md5_update/2. md5_update(Context, Data) -> NewContext • Data = iolist() | binary() • Context = NewContext = binary() Updates an MD5 Context with Data, and returns a NewContext. md5_final(Context) -> Digest • Context = Digest = binary() Finishes the update of an MD5 Context and returns the computed MD5 message digest. sha(Data) -> Digest • Data = iolist() | binary() • Digest = binary() Computes an SHA message digest from Data, where the length of the digest is 160 bits (20 bytes). sha_init() -> Context Creates an SHA context, to be used in subsequent calls to sha_update/2. sha_update(Context, Data) -> NewContext • Data = iolist() | binary() • Context = NewContext = binary() Updates an SHA Context with Data, and returns a NewContext. sha_final(Context) -> Digest • Context = Digest = binary() Finishes the update of an SHA Context and returns the computed SHA message digest. md5_mac(Key, Data) -> Mac • Key = Data = iolist() | binary() • Mac = binary() Computes an MD5 MAC message authentification code from Key and Data, where the the length of the Mac is 128 bits (16 bytes). md5_mac_96(Key, Data) -> Mac • Key = Data = iolist() | binary() • Mac = binary() Computes an MD5 MAC message authentification code from Key and Data, where the length of the Mac is 96 bits (12 bytes). 
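The md5_init/1, md5_update/2 and md5_final/1 functions above follow the usual incremental-digest pattern: create a context, feed it data in chunks, then finalize. For comparison only (this is not part of the module), the same pattern in Python's hashlib looks like this:

    import hashlib

    ctx = hashlib.md5()          # roughly md5_init()
    ctx.update(b"Now is the ")   # roughly md5_update(Context, Data)
    ctx.update(b"time for all ")
    digest = ctx.digest()        # roughly md5_final(Context)

    assert digest == hashlib.md5(b"Now is the time for all ").digest()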
sha_mac(Key, Data) -> Mac • Key = Data = iolist() | binary() • Mac = binary() Computes an SHA MAC message authentification code from Key and Data, where the length of the Mac is 160 bits (20 bytes). sha_mac_96(Key, Data) -> Mac • Key = Data = iolist() | binary() • Mac = binary() Computes an SHA MAC message authentification code from Key and Data, where the length of the Mac is 96 bits (12 bytes). des_cbc_encrypt(Key, IVec, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text according to DES in CBC mode. Text must be a multiple of 64 bits (8 bytes). Key is the DES key, and IVec is an arbitrary initializing vector. The lengths of Key and IVec must be 64 bits (8 bytes). des_cbc_decrypt(Key, IVec, Cipher) -> Text • Key = Cipher = iolist() | binary() • IVec = Text = binary() Decrypts Cipher according to DES in CBC mode. Key is the DES key, and IVec is an arbitrary initializing vector. Key and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 64 bits (8 bytes). The lengths of Key and IVec must be 64 bits (8 bytes). des_cbc_ivec(Data) -> IVec • Data = iolist() | binary() • IVec = binary() Returns the IVec to be used in a next iteration of des_cbc_[encrypt|decrypt]. Data is the encrypted data from the previous iteration step. des3_cbc_encrypt(Key1, Key2, Key3, IVec, Text) -> Cipher • Key1 =Key2 = Key3 Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text according to DES3 in CBC mode. Text must be a multiple of 64 bits (8 bytes). Key1, Key2, Key3, are the DES keys, and IVec is an arbitrary initializing vector. The lengths of each of Key1, Key2, Key3 and IVec must be 64 bits (8 bytes). des3_cbc_decrypt(Key1, Key2, Key3, IVec, Cipher) -> Text • Key1 = Key2 = Key3 = Cipher = iolist() | binary() • IVec = Text = binary() Decrypts Cipher according to DES3 in CBC mode. Key1, Key2, Key3 are the DES key, and IVec is an arbitrary initializing vector. Key1, Key2, Key3 and IVec must and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 64 bits (8 bytes). The lengths of Key1, Key2, Key3, and IVec must be 64 bits (8 bytes). des_ecb_encrypt(Key, Text) -> Cipher • Key = Text = iolist() | binary() • Cipher = binary() Encrypts Text according to DES in ECB mode. Key is the DES key. The lengths of Key and Text must be 64 bits (8 bytes). des_ecb_decrypt(Key, Cipher) -> Text • Key = Cipher = iolist() | binary() • Text = binary() Decrypts Cipher according to DES in ECB mode. Key is the DES key. The lengths of Key and Cipher must be 64 bits (8 bytes). blowfish_ecb_encrypt(Key, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts the first 64 bits of Text using Blowfish in ECB mode. Key is the Blowfish key. The length of Text must be at least 64 bits (8 bytes). blowfish_ecb_decrypt(Key, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Decrypts the first 64 bits of Text using Blowfish in ECB mode. Key is the Blowfish key. The length of Text must be at least 64 bits (8 bytes). blowfish_cbc_encrypt(Key, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text using Blowfish in CBC mode. Key is the Blowfish key, and IVec is an arbitrary initializing vector. The length of IVec must be 64 bits (8 bytes). The length of Text must be a multiple of 64 bits (8 bytes). 
blowfish_cbc_decrypt(Key, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Decrypts Text using Blowfish in CBC mode. Key is the Blowfish key, and IVec is an arbitrary initializing vector. The length of IVec must be 64 bits (8 bytes). The length of Text must be a multiple 64 bits (8 bytes). blowfish_cfb64_encrypt(Key, IVec, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text using Blowfish in CFB mode with 64 bit feedback. Key is the Blowfish key, and IVec is an arbitrary initializing vector. The length of IVec must be 64 bits (8 bytes). blowfish_cfb64_decrypt(Key, IVec, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Decrypts Text using Blowfish in CFB mode with 64 bit feedback. Key is the Blowfish key, and IVec is an arbitrary initializing vector. The length of IVec must be 64 bits (8 bytes). blowfish_ofb64_encrypt(Key, IVec, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text using Blowfish in OFB mode with 64 bit feedback. Key is the Blowfish key, and IVec is an arbitrary initializing vector. The length of IVec must be 64 bits (8 bytes). aes_cfb_128_encrypt(Key, IVec, Text) -> Cipher aes_cbc_128_encrypt(Key, IVec, Text) -> Cipher • Key = Text = iolist() | binary() • IVec = Cipher = binary() Encrypts Text according to AES in Cipher Feedback mode (CFB) or Cipher Block Chaining mode (CBC). Text must be a multiple of 128 bits (16 bytes). Key is the AES key, and IVec is an arbitrary initializing vector. The lengths of Key and IVec must be 128 bits (16 bytes). aes_cfb_128_decrypt(Key, IVec, Cipher) -> Text aes_cbc_128_decrypt(Key, IVec, Cipher) -> Text • Key = Cipher = iolist() | binary() • IVec = Text = binary() Decrypts Cipher according to Cipher Feedback Mode (CFB) or Cipher Block Chaining mode (CBC). Key is the AES key, and IVec is an arbitrary initializing vector. Key and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 128 bits (16 bytes). The lengths of Key and IVec must be 128 bits (16 bytes). aes_cbc_ivec(Data) -> IVec • Data = iolist() | binary() • IVec = binary() Returns the IVec to be used in a next iteration of aes_cbc_*_[encrypt|decrypt]. Data is the encrypted data from the previous iteration step. erlint(Mpint) -> N mpint(N) -> Mpint • Mpint = binary() • N = integer() Convert a binary multi-precision integer Mpint to and from an erlang big integer. A multi-precision integer is a binary with the following form: <<ByteLen:32/integer, Bytes:ByteLen/binary>> where both ByteLen and Bytes are big-endian. Mpints are used in some of the functions in crypto and are not translated in the API for performance reasons. rand_bytes(N) -> binary() Generates N bytes randomly uniform 0..255, and returns the result in a binary. Uses the crypto library pseudo-random number generator. rand_uniform(Lo, Hi) -> N • Lo, Hi, N = Mpint | integer() • Mpint = binary() Generate a random number N, Lo =< N < Hi. Uses the crypto library pseudo-random number generator. The arguments (and result) can be either erlang integers or binary multi-precision integers. mod_exp(N, P, M) -> Result • N, P, M, Result = Mpint • Mpint = binary() This function performs the exponentiation N ^ P mod M, using the crypto library. rsa_sign(Data, Key) -> Signature rsa_sign(DigestType, Data, Key) -> Signature • Data = Mpint • Key = [E, N, D] • E, N, D = Mpint Where E is the public exponent, N is public modulus and D is the private exponent. 
• DigestType = md5 | sha The default DigestType is sha. • Mpint = binary() • Signature = binary() Calculates a DigestType digest of the Data and creates a RSA signature with the private key Key of the digest. rsa_verify(Data, Signature, Key) -> Verified rsa_verify(DigestType, Data, Signature, Key) -> Verified • Verified = boolean() • Data, Signature = Mpint • Key = [E, N] • E, N = Mpint Where E is the public exponent and N is public modulus. • DigestType = md5 | sha The default DigestType is sha. • Mpint = binary() Calculates a DigestType digest of the Data and verifies that the digest matches the RSA signature using the signer's public key Key. rsa_public_encrypt(PlainText, PublicKey, Padding) -> ChipherText • PlainText = binary() • PublicKey = [E, N] • E, N = Mpint Where E is the public exponent and N is public modulus. • Padding = rsa_pkcs1_padding | rsa_pkcs1_oaep_padding | rsa_no_padding • ChipherText = binary() Encrypts the PlainText (usually a session key) using the PublicKey and returns the cipher. The Padding decides what padding mode is used, rsa_pkcs1_padding is PKCS #1 v1.5 currently the most used mode and rsa_pkcs1_oaep_padding is EME-OAEP as defined in PKCS #1 v2.0 with SHA-1, MGF1 and an empty encoding parameter. This mode is recommended for all new applications. The size of the Msg must be less than byte_size(N)-11 if rsa_pkcs1_padding is used, byte_size(N)-41 if rsa_pkcs1_oaep_padding is used and byte_size(N) if rsa_no_padding is used. Where byte_size(N) is the size part of an rsa_private_decrypt(ChipherText, PrivateKey, Padding) -> PlainText • ChipherText = binary() • PrivateKey = [E, N, D] • E, N, D = Mpint Where E is the public exponent, N is public modulus and D is the private exponent. • Padding = rsa_pkcs1_padding | rsa_pkcs1_oaep_padding | rsa_no_padding • PlainText = binary() Decrypts the ChipherText (usually a session key encrypted with rsa_public_encrypt/3) using the PrivateKey and returns the message. The Padding is the padding mode that was used to encrypt the data, see rsa_public_encrypt/3. rsa_private_encrypt(PlainText, PrivateKey, Padding) -> ChipherText • PlainText = binary() • PrivateKey = [E, N, D] • E, N, D = Mpint Where E is the public exponent, N is public modulus and D is the private exponent. • Padding = rsa_pkcs1_padding | rsa_no_padding • ChipherText = binary() Encrypts the PlainText using the PrivateKey and returns the cipher. The Padding decides what padding mode is used, rsa_pkcs1_padding is PKCS #1 v1.5 currently the most used mode. The size of the Msg must be less than byte_size(N)-11 if rsa_pkcs1_padding is used, and byte_size(N) if rsa_no_padding is used. Where byte_size(N) is the size part of an Mpint-1. rsa_public_decrypt(ChipherText, PublicKey, Padding) -> PlainText • ChipherText = binary() • PublicKey = [E, N] • E, N = Mpint Where E is the public exponent and N is public modulus • Padding = rsa_pkcs1_padding | rsa_no_padding • PlainText = binary() Decrypts the ChipherText (encrypted with rsa_private_encrypt/3) using the PrivateKey and returns the message. The Padding is the padding mode that was used to encrypt the data, see dss_sign(Data, Key) -> Signature dss_sign(DigestType, Data, Key) -> Signature • DigestType = sha | none (default is sha) • Data = Mpint | ShaDigest • Key = [P, Q, G, X] • P, Q, G, X = Mpint Where P, Q and G are the dss parameters and X is the private key. • ShaDigest = binary() with length 20 bytes • Signature = binary() Creates a DSS signature with the private key Key of a digest. 
If DigestType is 'sha', the digest is calculated as SHA1 of Data. If DigestType is 'none', Data is the precalculated SHA1 digest.

dss_verify(Data, Signature, Key) -> Verified
dss_verify(DigestType, Data, Signature, Key) -> Verified

• Verified = boolean()
• DigestType = sha | none
• Data = Mpint | ShaDigest
• Signature = Mpint
• Key = [P, Q, G, Y]
  P, Q, G, Y = Mpint
  Where P, Q and G are the dss parameters and Y is the public key.
• ShaDigest = binary() with length 20 bytes

Verifies that a digest matches the DSS signature using the public key Key. If DigestType is 'sha', the digest is calculated as SHA1 of Data. If DigestType is 'none', Data is the precalculated SHA1 digest.

rc4_encrypt(Key, Data) -> Result

• Key, Data = iolist() | binary()
• Result = binary()

Encrypts the data with RC4 symmetric stream encryption. Since it is symmetric, the same function is used for decryption.

dh_generate_key(DHParams) -> {PublicKey,PrivateKey}
dh_generate_key(PrivateKey, DHParams) -> {PublicKey,PrivateKey}

• DHParameters = [P, G]
  P, G = Mpint
  Where P is the shared prime number and G is the shared generator.
• PublicKey, PrivateKey = Mpint()

Generates a Diffie-Hellman PublicKey and PrivateKey (if not given).

dh_compute_key(OthersPublicKey, MyPrivateKey, DHParams) -> SharedSecret

• DHParameters = [P, G]
  P, G = Mpint
  Where P is the shared prime number and G is the shared generator.
• OthersPublicKey, MyPrivateKey = Mpint()
• SharedSecret = binary()

Computes the shared secret from the private key and the other party's public key.

exor(Data1, Data2) -> Result

• Data1, Data2 = iolist() | binary()
• Result = binary()

Performs bit-wise XOR (exclusive or) on the data supplied.

DES in CBC mode

The Data Encryption Standard (DES) defines an algorithm for encrypting and decrypting an 8 byte quantity using an 8 byte key (actually only 56 bits of the key is used). When it comes to encrypting and decrypting blocks that are multiples of 8 bytes various modes are defined (NIST SP 800-38A). One of those modes is the Cipher Block Chaining (CBC) mode, where the encryption of an 8 byte segment depends not only on the contents of the segment itself, but also on the result of encrypting the previous segment: the encryption of the previous segment becomes the initializing vector of the encryption of the current segment.

Thus the encryption of every segment depends on the encryption key (which is secret) and the encryption of the previous segment, except the first segment which has to be provided with an initial initializing vector. That vector could be chosen at random, or be a counter of some kind. It does not have to be secret.

The following example is drawn from the old FIPS 81 standard (replaced by NIST SP 800-38A), where both the plain text and the resulting cipher text are settled. The following code fragment returns true:

    Key = <<16#01,16#23,16#45,16#67,16#89,16#ab,16#cd,16#ef>>,
    IVec = <<16#12,16#34,16#56,16#78,16#90,16#ab,16#cd,16#ef>>,
    P = "Now is the time for all ",
    C = crypto:des_cbc_encrypt(Key, IVec, P),
    % Which is the same as
    P1 = "Now is t", P2 = "he time ", P3 = "for all ",
    C1 = crypto:des_cbc_encrypt(Key, IVec, P1),
    C2 = crypto:des_cbc_encrypt(Key, C1, P2),
    C3 = crypto:des_cbc_encrypt(Key, C2, P3),
    C = <<C1/binary, C2/binary, C3/binary>>,
    C = <<16#e5,16#c7,16#cd,16#de,16#87,16#2b,16#f2,16#7c, ...>>,
    <<"Now is the time for all ">> == crypto:des_cbc_decrypt(Key, IVec, C).

The following is true for the DES CBC mode.
For all decompositions P1 ++ P2 = P of a plain text message P (where the lengths of all quantities are multiples of 8 bytes), the encryption C of P is equal to C1 ++ C2, where C1 is obtained by encrypting P1 with Key and the initializing vector IVec, and where C2 is obtained by encrypting P2 with Key and the initializing vector last8(C1), where last8(Binary) denotes the last 8 bytes of the binary Binary.

Similarly, for all decompositions C1 ++ C2 = C of a cipher text message C (where the lengths of all quantities are multiples of 8 bytes), the decryption P of C is equal to P1 ++ P2, where P1 is obtained by decrypting C1 with Key and the initializing vector IVec, and where P2 is obtained by decrypting C2 with Key and the initializing vector last8(C1), where last8(Binary) is as above.

For DES3 (which uses three 64 bit keys) the situation is the same.
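This decomposition property is easy to check outside Erlang as well. A small sketch in Python using the third-party pycryptodome package, with the FIPS 81 key, initializing vector and message from the example above:

    from Crypto.Cipher import DES  # pycryptodome

    key  = bytes.fromhex("0123456789abcdef")
    ivec = bytes.fromhex("1234567890abcdef")
    p1, p2, p3 = b"Now is t", b"he time ", b"for all "

    # Encrypt the whole message in one call ...
    c = DES.new(key, DES.MODE_CBC, iv=ivec).encrypt(p1 + p2 + p3)

    # ... and in three chained calls, feeding the last 8 cipher bytes forward
    # as the next initializing vector, exactly as described above.
    c1 = DES.new(key, DES.MODE_CBC, iv=ivec).encrypt(p1)
    c2 = DES.new(key, DES.MODE_CBC, iv=c1[-8:]).encrypt(p2)
    c3 = DES.new(key, DES.MODE_CBC, iv=c2[-8:]).encrypt(p3)

    assert c == c1 + c2 + c3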
{"url":"http://erldocs.com/R14A/crypto/crypto.html","timestamp":"2014-04-20T05:44:28Z","content_type":null,"content_length":"42468","record_id":"<urn:uuid:1449bdec-5aaa-4ecc-bd75-0d0c50d06b07>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
NA Digest Saturday, April 27, 1991 Volume 91 : Issue 17 NA Digest Saturday, April 27, 1991 Volume 91 : Issue 17 Today's Editor: Cleve Moler Today's Topics: From: Jack Dongarra <dongarra@cs.utk.edu> Date: Sat, 27 Apr 91 12:20:40 -0400 Subject: Netlib/NA-Net Machine Will Be Down a Few Days The Mathematical Sciences Section at Oak Ridge National Laboratory is moving to a new building next week. As a result, the computer running netlib and na-net at ORNL will be down starting Wednesday, May 1, 1991, for a few days. We are automatically rerouting mail for netlib@ornl.gov to netlib@research.att.com, so netlib service will not be interrupted. However, the na-net will be off the air until the move has been completed. Mail sent to na.<lastname>@na-net.ornl.gov will be queued until the machine comes back up. We hope to have the machine back by Friday, May 3. From: Glenn McKee <mckee_g@vaxc.stevens-tech.edu> Date: Tue, 23 Apr 1991 10:30 EST Subject: Stone's Strongly Implicit Procedure Dear colleagues, I am interested in speeding up a PDE algorithm that involves solving a large, sparse matrix system. I understand that a useful algorithm is a modified version of Stone's Strongly Implicit Proceedure. Our library lacks references to the this algorithm, and I was wondering if someone had the algorithm already coded in either C or FORTRAN and would consider sending me a copy. Glenn McKee E-mail: MCKEE_G@VAXC.STEVENS-TECH.EDU From: R. Kaasschieter <wsanrk@win.tue.nl> Date: Tue, 23 Apr 91 16:44:39 +0200 Subject: Solving a Nonlinear Initial BVP Dear colleagues, It happens that we have to solve for our department of chemistry the following initial boundary value problem on a three-dimensional du/dt = div ( D(u) grad u ) in the domain, - n . ( D(u) grad u ) = f(u) on the boundary, u = 1 for t = 0. D is uniformly bounded from below by a positive constant and f(u) >= 0 for all u. A typical example for D is D(u) = exp(-u). Since the solution is known to be positive, D(u) > 1. We have planned to use finite elements for the space discretisation. Our main problem is a sensible choice for the time discretisation. Do you have any experience in solving this problem or related problems, or do you know relevant literature? Rik Kaasschieter and Bob Mattheij Please send your suggestions and comments to Dr. E.F. Kaasschieter Department of Mathematics and Computing Science Eindhoven University of Technology P.O. Box 513 5600 MB Eindhoven The Netherlands E-mail: wsanrk@win.tue.nl From: Renato De Leone <deleone@cs.wisc.edu> Date: Thu, 25 Apr 1991 07:33:03 CDT Subject: References Sought for Parallel SOR Hello na-netters, I am looking for references on parallel Successive Overrelaxation algorithms for SIMD machine. In particular for solving system of linear equations with specially structured (5-diagonals) matrices. Any ideas or pointers to references would be greatly appreciated. Thanks a lot Renato De Leone Computer Sciences Dept. University of Wisconsin-Madison 1210 W. Dayton Madison, WI 53705 (608) 263-2677 From: Steven Lee <slee@csrd.uiuc.edu> Date: Sat, 27 Apr 91 12:33:56 CDT Subject: 2nd Annual Midwest NA Day for the 2nd Annual Midwest NA Day Location: Champaign-Urbana, IL Digital Computer Laboratory 1304 W. Springfield Ave, Urbana Date: Saturday, May 11 1991 8:30AM - 4:30PM The 2nd Annual Midwest NA Day is scheduled for Saturday, May 11 1991 on the University of Illinois, Urbana-Champaign campus. 
The first NA Day was held in April of last year to take special note of the retirement of Bill Gear from the University of Illinois. This year, the conference coincides with the May graduation ceremonies in which Gene Golub is to receive an honorary doctorate from the University. This is an informal conference with an emphasis on scheduling a variety of talks from speakers with interests in numerical analysis and scientific computing. Those who wish to be included on the electronic mailing list for news on this event (schedule information, travel directions, etc.) should send email to: Again, for those who plan to attend and would also like to contribute a 20-minute or 40-minute presentation, please send the title and a brief abstract of your proposed talk to the e-mail address listed above. We have already had numerous responses to the previous announcement in NA Digest; we hope to finalize the program and list of speakers within the next few days. Organizers: Paul Saylor, Robert Skeel, Faisal Saied, Ahmed Sameh, Michael J. Holst and Steven Lee From: Bernard Danloy <danloy@anma.ucl.ac.be> Date: Tue, 23 Apr 91 and Fri, 26 Apr 91 Subject: New Measures of Precision in Floating-Point Arithmetic In the last NA-Digest (21 April ; Vol.91/16), two communications were devoted to the precision of the floating point arithmetic of a computer. I would like to thank Nick Higham and Cleve Moler for their attempt to define unambiguous concepts ; the lack of well defined concepts is too often the source of confusion and errors. In addition, I am very happy to have learned that the number mu defined by Higham and Moler is not always a negative power of the base b : I never payed enough attention to that quantity by itself and was ready to claim the opposite ; I just hope I was not the only one. However it seems to me that Higham and Moler did not go as far as needed : they refer to the basic concept of "floating point number" : this is meaningful in a classical environment where the arithmetical unit and the memory use exactly the same representation for real numbers. If this is the case, it is irrelevant to store the intermediate results of a process in a variable or to keep them within the arithmetic unit ; the quantities eps,mu and u are unambiguous if we define the floating point numbers as the possibles values a variable of real type can take. But it should be reminded that not all computers work that way : it may happen that the arithmetical unit can handle quantities it is impossible to save in a variable without loss of information : such a situation was mentioned by Higham and Moler and arises when the memory uses the basic IEEE standard ( 53 bits ) while the arithmetical unit conforms to the extended IEEE standard ( 64 bits ). How should we define eps,mu and u in such a context ? We are then facing two possible values for eps and u ; the values derived from the characteristics of the memory appear as a reasonable choice but the values related to the arithmetical unit are not a useless information. The case of mu is much more difficult because the number of possible values is almost unpredictable : mu is the result of some computation and therefore its value depends on the whole sequence of operations : it is then crucial to know exactly which computation has to be performed and how it is implemented. On one single machine, the number of computations resulting in different values for mu can be far greater than 2 ! 
I show now what you obtain on a Sun 3/60 computer equipped with an optional arithmetical coprocessor Motorola 68881 when you are looking for mu through Pascal programs. Here follows a summary of my results ( x and y denote real variables ) Without using the arithmetical coprocessor : 1. The boolean expression 1+x>1 is true iff x >= 2^(-53) + 2^(-105) 2. When y is precomputed by y := 1+x the boolean expression y>1 is true iff x >= 2^(-53) + 2^(-105) Using the optionnal arithmetical coprocessor : 3. The boolean expression 1+x>1 is true iff x >= 2^(-64) + 2^(-116) 4. When y is precomputed by y := 1+x the boolean expression y>1 is true iff x >= 2^(-53) + 2^(-64) + 2^(-105) 5. The boolean expression 1+(x+y)>1 is true in many situations, for example, a. if x > 2^(-64) and y >= 0 b. if x = 2^(-64) and y >= 2^(-127) c. if x = 2^(-64) - 2^(-117) and y >= 2^(-117) + 2^(-128) + 2^(-169) The above results clearly show that defining mu as the smallest quantity such that 1+mu>1 is ambiguous if we don't say precisely : - how mu may be obtained and used - where the result of the addition has to be stored before it is compared to 1 I am far from being an expert but I suggest to adopt the following definition : if x is a variable of real type, mu is the smallest value that x can take if we want the computer to give the value true to the boolean expression 1+x>1 That corresponds to the situations 1 and 3 above ; the fact that mu does not reflect the representation of the floating point numbers within the memory is not disturbing at all : comparing 1 and 1+x is part of a tricky way to get information about that representation, assuming a unique standard is valid everywhere. Maybe, it is time to realize that, when this is not the case, the errors generated by a single computation do have two independent sources : - the arithmetical operation itself - the transfer of numbers to ( and from ? ) the memory It is then perfectly normal to use two independent parameters mu and eps to characterize the precision of a floating point arithmetic : the value of mu says how precisely a result can be obtained within the arithmetical processor and eps says how precisely that result can be stored in the memory. Date: Fri, 26 Apr 91 20:15:11 +0200 I would like to add new comments and proposals to my first note on the measure of precision of a floating-point arithmetic. I just realized that if we want to characterize a t-digit base b arithmetic, we need four and not only three parameters. Two of them are concerned with "static" properties ( Cleve Moler did use very properly the term " geometric " ) and the two others are concerned with " dynamic " properties. The problem we are facing now is just that, as a result of historical evolution, a confusion has been created between " static " and " dynamic " ; the best example is perhaps the quantity u ( unit roundoff ) : it is clearly a " dynamic " concept but it has been defined until now by a " static " value : I suggest we try to get rid of our habits and focus on what we really need. Here are my proposals : " STATIC " ( or " GEOMETRIC " ) ASPECT b, t and eps = b^(1-t) are clearly the basic parameters ; two of them define the third. I suggest to characterize the " static " properties of the floating- point arithmetic by (1) eps = b^(1-t) : this defines ( within the memory ) the spacing of the floating-point numbers between 1 and b Important remark : eps has nothing to do with the equivalent spacing within the arithmetical unit. 
(2) b : the base ( the knowledge of b is only needed if we want to get error bounds as precise as possible : for a same value of eps, the precision attached to some computations is a bit higher if the base b is smaller ) " DYNAMIC " ASPECT Any arithmetic computation usually requires three things : - a transfer of the data from the memory to the processor - the execution of the operation itself ( within the processor ) - a transfer of the result from the processor to the memory With a " dynamic " point of view, the precision may be characterized by two independant parameters : (3) u is the unit roundoff of the memory and gives information about the rounding which occurs when transfering a number from the processor to the memory ( let us remember that reading data on a peripheral usually implies a conversion to the base b and involves thus the processor ). I suggest to define u as the smallest floating-point number x such that true is the computed value of the boolean expression fl(y>1) if y has been previously obtained by fl(1+x), stored in the memory and picked back up ( this double transfer is crucial ) N.B. The old definition of u as 0.5*b^(1-t) is a very special case of the new one and is equivalent in the case of rounding away from zero. ( What was considered as perfect twenty years ago, before rounding to even appeared ). Actually the new value is equal to u = rho * eps where rho is 1 in the case of pure chopping 1/2 in the case of rounding away from zero something slightly bigger than 1/2 in the case of rounding to even The new definition of u makes it equal to the quantity mu introduced by Nick Higham provided that the computations are performed in two steps as required. ( By the way, Nick did the two steps since he asked the computer to perform the addition and made himself the comparison ). (4) nu is the unit roundoff of the arithmetical processor when it performs a computation on data extracted from the memory. It is defined as the smallest floating-point number x such that true is the computed value of the boolean expression fl(1+x>1) where the unusual notation ( I wrote fl(1+x>1) and not fl(1+x)>1 ) means that the user does not force the intermediate result fl(1+x) to be sent back to the memory ( the computer is given complete freedom and gets the due credit for its choice ). Actually nu is equal to u if the computer ( I should probably have to say the software ) always transfers intermediate results back to the memory and never tries to avoid it. The value of nu is equivalent to that of mu if the computation is performed in one single step : we just ask the computer to evaluate the complete expression as a whole. As I did already mention it in my first note, I ran some computations on a Sun3 with a Motorola chip 68881. I found Using Pascal with the otion -f68881 activated : eps = 2^(-52) b = 2 u = 2^(-53) + 2^(-64) + 2^(-105) nu = 2^(-64) + 2^(-116) Using Pascal without the otion -f68881 : eps = 2^(-52) b = 2 u = 2^(-53) + 2^(-105) nu = 2^(-53) + 2^(-105) Using MATLAB ( and the Motorola chip ) : eps = 2^(-52) b = 2 u = 2^(-53) + 2^(-64) + 2^(-105) nu = 2^(-53) + 2^(-64) + 2^(-105) Partial tests have been run on a PC and confirm the last values for MATLAB. Further trials will performed later. 
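The "1 + x > 1" probe above is easy to reproduce on current hardware. A minimal Python sketch, assuming IEEE-754 double precision with round-to-nearest-even and no extended-precision intermediates (assumptions which, as the measurements above show, really do change the answer):

    import sys

    x = 1.0
    while 1.0 + x / 2.0 > 1.0:
        x /= 2.0

    # With round-to-nearest-even, 1 + 2**-53 rounds back to 1, so this
    # power-of-two probe stops at x = 2**-52 rather than at eps/2.
    print("smallest power of two with 1 + x > 1:", x)
    print("spacing of floats just above 1 (eps):", sys.float_info.epsilon)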
It turns out that the " static " parameters are independent of the software and characterize a standard like IEEE, as they should ; on the opposite, the " dynamic " parameters do vary very strongly : even on a single machine, they may depend on the software but also on the options activated by the user. At this point, one thing is clear for me : we really need to agree on precise concepts and the topics are worth a much deeper discussion. I hope NA-NET will contribute to it.

Bernard Danloy
Institut de Mathematique
University of Louvain-la-Neuve
Email : danloy@anma.ucl.ac.be

End of NA Digest
{"url":"http://netlib.org/na-digest-html/91/v91n17.html","timestamp":"2014-04-18T08:15:05Z","content_type":null,"content_length":"17731","record_id":"<urn:uuid:0b7e2f3b-091f-46e3-9ca6-4390aea3d541>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
SAT rectangle problem

If the length of a rectangle is increased by 30% and the width of the same rectangle is decreased by 30%, what is the effect on the area of the rectangle?

Re: SAT rectangle problem
Obviously the area of a rectangle with length L and width W is L*W, so you want to think about how you might modify this product to reflect the fact that L is 30% larger and W is 30% smaller; if you then write this new product down, hopefully you'll be able to see how the area is modified.

Re: SAT rectangle problem
I got no change somehow... but that is wrong... This is a level 5 question on the SAT practice test. No wonder it's pretty difficult.
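For the record, the computation the replies are pointing toward, keeping the length and width symbolic:

    (1.3 L)(0.7 W) = 0.91 LW

so the area decreases by 9%. Increasing by 30% and then decreasing by 30% do not cancel, because the two changes are multiplicative rather than additive.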
{"url":"http://mathhelpforum.com/geometry/199468-sat-rectangle-problem.html","timestamp":"2014-04-17T20:02:55Z","content_type":null,"content_length":"38175","record_id":"<urn:uuid:540230f9-e269-46ff-bac9-b20fe7875dd7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
On-line CRC calculation and free library

Introduction on CRC calculations

Whenever digital data is stored or interfaced, data corruption might occur. Since the beginning of computer science, people have been thinking of ways to deal with this type of problem. For serial data they came up with the solution to attach a parity bit to each sent byte. This simple detection mechanism works if an odd number of bits in a byte changes, but an even number of false bits in one byte will not be detected by the parity check. To overcome this problem people have searched for mathematically sound mechanisms to detect multiple false bits. The CRC calculation, or cyclic redundancy check, was the result of this. Nowadays CRC calculations are used in all types of communications. All packets sent over a network connection are checked with a CRC. Also each data block on your harddisk has a CRC value attached to it. The modern computer world cannot do without these CRC calculations. So let's see why they are so widely used. The answer is simple: they are powerful, detect many types of errors and are extremely fast to calculate, especially when dedicated hardware chips are used.

One might think that using a checksum can replace proper CRC calculations. It is certainly easier to calculate a checksum, but checksums do not find all errors. Let's take an example string and calculate a one byte checksum. The example string is "Lammert" which converts to the ASCII values [ 76, 97, 109, 109, 101, 114, 116 ]. The one byte checksum of this array can be calculated by adding all values, then dividing it by 256 and keeping the remainder. The resulting checksum is 210. You can use the calculator above to check this result.

In this example we have used a one byte long checksum which gives us 256 different values. Using a two byte checksum will result in 65,536 possible different checksum values and when a four byte value is used there are more than four billion possible values. We might conclude that with a four byte checksum the chance that we accidentally do not detect an error is less than 1 to 4 billion. Seems rather good, but this is only theory. In practice, bits do not change purely at random during communications. They often fail in bursts, or due to electrical spikes. Let us assume that in our example array the lowest significant bit of the character 'L' is set, and the lowest significant bit of character 'a' is lost during communication. The receiver will then see the array [ 77, 96, 109, 109, 101, 114, 116 ] representing the string "M`mmert". The checksum for this new string is still 210, but the result is obviously wrong, only after two bits changed. Even if we had used a four byte long checksum we would not have detected this transmission error. So calculating a checksum may be a simple method for detecting errors, but doesn't give much more protection than the parity bit, independent of the length of the checksum.

The idea behind a check value calculation is simple. Use a function F(bval,cval) that inputs one data byte and a check value and outputs a recalculated check value. In fact checksum calculations as described above can be defined in this way. Our one byte checksum example could have been calculated with the following function (in C language) that we call repeatedly for each byte in the input string. The initial value for cval is 0.

    int F_chk_8( int bval, int cval )
    {
        return ( bval + cval ) % 256;
    }

The idea behind CRC calculation is to look at the data as one large binary number.
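Before looking at how that large number is processed, the checksum weakness described above is easy to reproduce. A small Python sketch using the same "Lammert" example:

    def checksum8(data: bytes) -> int:
        """One-byte checksum as described above: add all bytes, keep the remainder mod 256."""
        return sum(data) % 256

    print(checksum8(b"Lammert"))   # 210
    print(checksum8(b"M`mmert"))   # also 210 -- the two flipped bits cancel out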
This number is divided by a certain value and the remainder of the calculation is called the CRC. Dividing in the CRC calculation at first looks to cost a lot of computing power, but it can be performed very quickly if we use a method similar to the one learned at school. We will as an example calculate the remainder for the character 'm' (which is 1101101 in binary notation) by dividing it by 19 or 10011. Please note that 19 is an odd number. This is necessary as we will see further on. Please refer to your schoolbooks as the binary calculation method here is not very different from the decimal method you learned when you were young. It might only look a little bit strange. Also notations differ between countries, but the method is similar.

                101  = 5
            _________
    10011 / 1101101
            10011
            -----
             10000
             00000
             -----
             100001
              10011
              -----
               1110  = 14 = remainder

With decimal calculations you can quickly check that 109 divided by 19 gives a quotient of 5 with 14 as the remainder. But what we also see in the scheme is that every bit extra to check only costs one binary comparison and in 50% of the cases one binary subtraction. You can easily increase the number of bits of the test data string (for example to 56 bits if we use our example value "Lammert ") and the result can be calculated with 56 binary comparisons and an average of 28 binary subtractions. This can be implemented in hardware directly with only very few transistors involved. Also software algorithms can be very efficient.

For CRC calculations, no normal subtraction is used, but all calculations are done modulo 2. In that situation you ignore carry bits and in effect the subtraction will be equal to an exclusive or operation. This looks strange, the resulting remainder has a different value, but from an algebraic point of view the functionality is equal. A discussion of this would need university level knowledge of algebraic field theory and I guess most of the readers are not interested in this. Please look at the end of this document for books that discuss this in detail.

Now we have a CRC calculation method which is implementable in both hardware and software and also has a more random feeling than calculating an ordinary checksum. But how will it perform in practice when one or more bits are wrong? If we choose the divisor (19 in our example) to be an odd number, you don't need high level mathematics to see that every single bit error will be detected. This is because every single bit error will let the dividend change with a power of 2. If for example bit n changes from 0 to 1, the value of the dividend will increase with 2^n. If on the other hand bit n changes from 1 to 0, the value of the dividend will decrease with 2^n. Because an odd number never divides a power of two exactly, the remainder of the CRC calculation will change and the error will not go unnoticed.

The second situation we want to detect is when two single bits change in the data. This requires some mathematics which can be read in Tanenbaum's book mentioned below. You need to select your divisor very carefully to be sure that independent of the distance between the two wrong bits you will always detect them. It is known that the commonly used values 0x8005 and 0x1021 of the CRC16 and CRC-CCITT calculations perform very well at this issue. Please note that other values might or might not, and you cannot easily calculate which divisor value is appropriate for detecting two bit errors and which isn't.
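The schoolbook scheme above, with XOR standing in for subtraction, takes only a few lines of code. A rough Python sketch of the modulo 2 remainder (real CRC implementations add initial values, bit reflection and table lookups on top of this):

    def mod2_remainder(dividend: int, divisor: int) -> int:
        """Remainder of a carry-less (modulo 2) division, the core step of a CRC."""
        width = divisor.bit_length()
        while dividend.bit_length() >= width:
            shift = dividend.bit_length() - width
            dividend ^= divisor << shift      # "subtraction" without borrows = XOR
        return dividend

    print(109 % 19)                  # 14 -- the ordinary long division shown above
    print(mod2_remainder(109, 19))   # 7  -- the same division done modulo 2

As for which divisor to pick in the first place: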
Now we have a CRC calculation method which is implementable in both hardware and software and also has a more random feeling than calculating an ordinary checksum. But how will it perform in practice when one or more bits are wrong? If we choose the divisor (19 in our example) to be an odd number, you don't need high level mathematics to see that every single bit error will be detected. This is because every single bit error changes the dividend by a power of two. If for example bit n changes from 0 to 1, the value of the dividend will increase by 2^n. If on the other hand bit n changes from 1 to 0, the value of the dividend will decrease by 2^n. Because no power of two is evenly divisible by an odd number greater than one, the remainder of the CRC calculation will change and the error will not go unnoticed.

The second situation we want to detect is when two single bits change in the data. This requires some mathematics which can be read in Tanenbaum's book mentioned below. You need to select your divisor very carefully to be sure that, independent of the distance between the two wrong bits, you will always detect them. It is known that the commonly used values 0x8005 and 0x1021 of the CRC16 and CRC-CCITT calculations perform very well on this issue. Please note that other values might or might not, and you cannot easily calculate which divisor value is appropriate for detecting two bit errors and which isn't. Rely on the extensive mathematical research on this issue done some decades ago by highly skilled mathematicians and use the values these people obtained.

Furthermore, with our CRC calculation we want to detect all errors where an odd number of bits changes. This can be achieved by using a divisor with an even number of bits set. Using modulo 2 mathematics you can show that all errors with an odd number of bits are detected. As I have said before, in modulo 2 mathematics the subtraction function is replaced by the exclusive or. There are four possible XOR operations:

0 XOR 0 => 0        even => even
0 XOR 1 => 1        odd  => odd
1 XOR 0 => 1        odd  => odd
1 XOR 1 => 0        even => even

We see that for all combinations of bit values, the oddness of the expression remains the same. When choosing a divisor with an even number of bits set, the oddness of the remainder is equal to the oddness of the dividend. Therefore, if the oddness of the dividend changes because an odd number of bits changes, the remainder will also change. So all errors which change an odd number of bits will be detected by a CRC calculation which is performed with such a divisor. You might have seen that the commonly used divisor values 0x8005 and 0x1021 actually have an odd number of bits set, not an even number as stated here. This is because inside the algorithm there is a "hidden" extra bit 2^16, which makes the actually used divisor values 0x18005 and 0x11021 inside the algorithm.

Last but not least, we want our CRC calculation to detect all burst errors up to a certain maximum length, and longer burst errors with a high probability. A burst error is quite common in communications. It is the type of error that occurs because of lightning, relay switching, etc., where during a small period all bits are set to one. To really understand this you also need to have some knowledge of modulo 2 algebra, so please accept that with a 16 bit divisor you will be able to detect all bursts with a maximum length of 16 bits, and all longer bursts with at least 99.997% certainty.

In a pure mathematical approach, CRC calculation is written down as polynomial calculations. The divisor value is most often not described as a binary number, but as a polynomial of a certain order. In normal life some polynomials are used more often than others. The three used in the on-line CRC calculation on this page are the 16 bit wide CRC16 and CRC-CCITT and the 32 bit wide CRC32. The latter is probably the most used now, because amongst other things it is the CRC generator for all network traffic verification and validation.

For all three types of CRC calculations I have a free software library available. The test program can be used directly to test files or strings. You can also look at the source codes and integrate these CRC routines in your own program. Please be aware of the initialisation values of the CRC calculation and the possibly necessary postprocessing like flipping bits. If you don't do this you might get different results than other CRC implementations. All this pre- and postprocessing is done in the example program, so it should not be too difficult to get your own implementation working. A commonly used test is to calculate the CRC value for the ASCII string "123456789". If the outcome of your routine matches the outcome of the test program or the outcome on this website, your implementation is working and compatible with most other implementations.
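As an illustration of such a test, the sketch below shows a minimal bit-by-bit CRC-CCITT routine in C. It is not the library code offered on this page; the function name and the parameter choice (polynomial 0x1021, initial value 0xFFFF, no bit reflection, no final XOR) are my own selection of one common variant. With these parameters the widely published check value for "123456789" is 0x29B1; with a different initial value, or with bit flipping enabled, you will, as noted above, get a different result.

#include <stdio.h>
#include <string.h>

/* Bit-by-bit CRC-CCITT: shift each message byte into the top of a 16 bit
 * register and XOR with the polynomial 0x1021 whenever a 1 falls out of
 * the register. Initial value 0xFFFF, no reflection, no final XOR.       */
unsigned short crc_ccitt_ffff( const unsigned char *data, size_t len )
{
    unsigned short crc = 0xFFFF;
    size_t i;
    int bit;

    for ( i = 0; i < len; i++ ) {
        crc ^= (unsigned short) ( data[i] << 8 );
        for ( bit = 0; bit < 8; bit++ ) {
            if ( crc & 0x8000 ) crc = (unsigned short) ( ( crc << 1 ) ^ 0x1021 );
            else                crc = (unsigned short) ( crc << 1 );
        }
    }
    return crc;
}

int main( void )
{
    const char *test = "123456789";
    printf( "0x%04X\n", (unsigned int) crc_ccitt_ffff( (const unsigned char *) test, strlen( test ) ) );
    return 0;
}

The same loop structure can be adapted for the other polynomials listed below, as long as the register width, the initial value and the reflection and final XOR steps are adjusted to match the variant you want to be compatible with.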
Just as a reference, the polynomial functions for the most common CRC calculations are listed below. Please remember that the highest order term of the polynomial (x^16 or x^32) is not present in the binary number representation, but implied by the algorithm itself.

Polynomial functions for common CRCs

CRC-16       0x8005        x^16 + x^15 + x^2 + 1
CRC-CCITT    0x1021        x^16 + x^12 + x^5 + 1
CRC-DNP      0x3D65        x^16 + x^13 + x^12 + x^11 + x^10 + x^8 + x^6 + x^5 + x^2 + 1
CRC-32       0x04C11DB7    x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Literature

Andrew S. Tanenbaum, Computer Networks (2002). Describes common network systems and the theory and algorithms behind their implementation.

Donald E. Knuth, The Art of Computer Programming (various volumes). The main reference for seminumerical algorithms; polynomial calculations are described in depth. Some level of mathematics is necessary to fully understand it though.

DNP User Group. DNP 3.0, or Distributed Network Protocol, is a communication protocol designed for use between substation computers, RTUs (remote terminal units), IEDs (intelligent electronic devices) and master stations for the electric utility industry. It is now also used in other industries like waste water treatment, transportation and the oil and gas industry.