Similar and Congruent Figures 8.5: Similar and Congruent Figures Created by: CK-12 The Mathematical Floor Mrs. Gilman brought a small group of students over to look at this tile floor in the hallway of the art museum. “You see, there is even math in the floor,” she said, smiling. Mrs. Gilman is one of those teachers who loves to point out every place where math can be found. “Okay, I get it,” Jesse started. “I see the squares.” “There is a lot more math than just squares,” Mrs. Gilman said, walking away with a huge smile on her face. “She frustrates me sometimes,” Kara whispered, staring at the floor. “Where is the math besides the squares?” “I think she is talking about the size of the squares,” Hannah chimed in. “See? There are two different sizes.” “Actually there are three different sizes, and there could be more that I haven’t found yet,” Jesse said. “Remember when we learned about comparing shapes that are alike and aren’t alike? It has to do with proportions or something like that,” Hannah chimed in again. All three students stopped talking and began looking at the floor again. “Oh yeah, congruent and similar figures, but which are which?”Kara asked. What is the difference between congruent and similar figures? This lesson will teach you all about congruent and similar figures. When you are all finished with this lesson, you will have a chance to study the floor again and see if you can find the congruent and the similar figures. What You Will Learn By the end of this lesson you will be able to demonstrate the following skills: • Recognize congruence. • Find unknown measures of congruent figures. • Recognize similarity. • Check for similarity between given figures Teaching Time I. Recognize Congruence In the last lesson we began using the word “congruent.” We talked about congruent lines and congruent angles. The word congruent means exactly the same. Sometimes, you will see this symbol $\cong$ In this lesson, we are going to use the word congruent to compare figures. Congruent figures have exactly the same size and shape. They have congruent sides and congruent angles. Here are some pairs of congruent figures. Compare the figures in each pair. They are exactly the same! If you’re not sure, imagine that you could cut out one figure and place it on top of the other. If they match exactly, they are congruent. How can we recognize congruence? We test for congruency by comparing each side and angle of two figures to see if all aspects of both are the same. If the sides are the same length and the angles are equal, the figures are congruent. Each side and angle of one figure corresponds to a side or angle in the other. We call these corresponding parts. For instance, the top point of one triangle corresponds to the top point of the other triangle in a congruent pair. It is not always easy to see the corresponding parts of two figures. One figure may be rotated differently so that the corresponding parts appear to be in different places. If you’re not sure, trace one figure and place it on top of the other to see if you can make them match. Let’s see if we can recognize some congruent figures. Which pair of figures below is congruent? Let’s analyze one pair at a time to see if we can find any corresponding angles and sides that are congruent. The figures in the first pair appear to be the same shape, if we rotate one $180^\circ$ We only know the measure of one angle in the first two figures. We can compare these angles if they are corresponding parts. 
They are, because if we rotate one figure these angles are in the same place at the top of each figure. Now compare their measures. The angle in the first figure is $45^\circ$, while the corresponding angle in the second figure is $55^\circ$. Because the angles are different, these two figures are not congruent.

Let's look at the next pair. The two triangles in the second pair seem to have corresponding parts: a long base and a wide angle at the top. We need to know whether any of these corresponding parts are congruent, however. We know the measure of the top angle in each figure: it is $110^\circ$ in both, so those angles are congruent. Next we compare the corresponding sides. These sides are not congruent, so the triangles are not congruent. Remember, every side and every angle must be the same in order for figures to be congruent.

That leaves the last pair. Can you find the corresponding parts? If we rotate the second figure $180^\circ$, the corresponding parts line up: the sides marked $L$ have the same length, and the marked angles each measure $90^\circ$. Because every side and angle in one figure corresponds to a congruent side and angle in the second, these two figures are congruent.

8J. Lesson Exercises

Answer true or false for each question.
1. Congruent figures have the same number of sides and angles.
2. Congruent figures can have one pair of angles with the same measure, but not all angles have the same measure.
3. Congruent figures can be different sizes as long as the angle measures are the same.

Discuss your answers with a friend. Be sure you understand why each answer is true or false.

II. Find Unknown Measures of Congruent Figures

We know that congruent figures have exactly the same angles and sides. That means we can use the information we have about one figure in a pair of congruent figures to find the measure of a corresponding angle or side in the other figure. Let's see how this works.

Take a look at the congruent figures below. We have been told these two parallelograms are congruent. Can you find the corresponding parts? If not, trace one parallelogram and place it on top of the other. Rotate it until the parts correspond. Which sides and angles correspond? We can see that side $AB$ corresponds to side $PQ$, so $AB \cong PQ$. What other sides are congruent? Let's write them out.

$AB \cong PQ$, $BC \cong QR$, $AD \cong PS$, $DC \cong SR$

We can also write down the corresponding angles, which we know must be congruent because the figures are congruent.

$\angle A \cong \angle P$, $\angle B \cong \angle Q$, $\angle C \cong \angle R$, $\angle D \cong \angle S$

Now that we understand all of the corresponding relationships in the two figures, we can use what we know about one figure to find the measure of a side or angle in the second figure.

Can we find the length of side $AB$? We are not given the length of $AB$ directly, but we are given the length of its corresponding side $PQ$. Because $PQ \cong AB$, whatever $PQ$ measures, $AB$ measures the same.

Now let's look at the angles. Can we find the measure of $\angle C$? It corresponds to $\angle R$, but we are not given the measure of $\angle R$ either. We are given two angle measures, $70^\circ$ and $110^\circ$, and we know that the four angles of a quadrilateral always add up to $360^\circ$, so the four angles of this parallelogram must total $360^\circ$. The remaining angle, $\angle B$, corresponds to $\angle Q$, which measures $70^\circ$, so $\angle B$ is also $70^\circ$. Now we can solve for $\angle C$:

$360 - (70 + 110 + 70) = \angle C$
$360 - 250 = \angle C$
$\angle C = 110^\circ$

We were able to combine the given information from both figures because we knew that they were congruent. The more you work on puzzles like this one, the easier they will become.

8K. Lesson Exercises

Answer this question.
1. What is the measure of $\angle M$?

Take a few minutes to check your answer with a friend. Correct any errors and then continue with the next section.

III. Recognize Similarity

Some figures look identical except they are different sizes. The angles even look the same. When we have figures that are proportional to each other, we call these figures similar figures.
Similar figures have the same angle measures but different side lengths.

What is an example of similar figures? Squares are similar shapes because they always have four $90^\circ$ angles. Let's look at some pairs of similar shapes. Notice that in each pair the figures look the same, but one is smaller than the other. Since they are not the same size, they are not congruent. However, they have the same angles, so they are similar.

IV. Check for Similarity between Given Figures

Unlike congruent figures, similar figures are not exactly the same. They do have corresponding features, but only their corresponding angles are congruent; the corresponding sides are not. Thus when we are dealing with pairs of similar figures, we should look at the angles rather than the sides. In similar figures, the angles are congruent, even if the sides are not. Notice that one angle in each pair of figures corresponds to an angle in the other figure. They have the same shape but not the same size. Therefore they are similar.

Let's find the corresponding angles in similar figures. List the corresponding angles in the figures below. Angles $G$ and $W$ are corresponding angles. How do the angles line up? Angles $H$ and $X$ correspond, angles $I$ and $Y$ correspond, and angles $J$ and $Z$ correspond, so figure $GHIJ$ matches up with figure $WXYZ$.

As we've said, the sides in similar figures are not congruent. They are proportional, however. Proportions have the same ratio. Look at the ratios we can write for $GHIJ$ and $WXYZ$: the sides from one figure are on the top, and the proportional sides of the other figure are on the bottom.

List all of the pairs of corresponding sides in the figures below as proportions. Try lining up the figures by their angles. It may help to trace one figure and rotate it until it matches the other. Which sides are proportional? Now that we've got one pair, let's do the same for the rest:

$\frac{NO}{QR}, \frac{MP}{TS}, \frac{MN}{TQ}$

Now let's use what we have learned to check for similarity between figures. Which pair of figures below is similar? For figures to be similar, we know that the angles must be congruent and the sides must exist in proportional relationships to each other. Let's check each pair one at a time.

We only know some of the angles in each triangle in the first pair. They both have a $50^{\circ}$ angle. We can find each missing third angle because the three angles of any triangle add up to $180^{\circ}$:

Triangle 1: $50 + 60 + \text{angle } 3 = 180$, so $110 + \text{angle } 3 = 180$, so $\text{angle } 3 = 180 - 110 = 70^{\circ}$
Triangle 2: $50 + 80 + \text{angle } 3 = 180$, so $130 + \text{angle } 3 = 180$, so $\text{angle } 3 = 180 - 130 = 50^{\circ}$

The angles in the first triangle are $50^{\circ}$, $60^{\circ}$, and $70^{\circ}$; the angles in the second are $50^{\circ}$, $50^{\circ}$, and $80^{\circ}$. The corresponding angles are not all congruent, so this pair is not similar.

Let's move on to the next pair. This time we know side lengths, not angles. We need to check whether each set of corresponding sides is proportional. First, let's write out the pairs of corresponding sides as ratios:

$\frac{6}{3}, \frac{6}{3}, \frac{4}{1}$

The proportions show side lengths from the large triangle on the top and the corresponding side in the small triangle on the bottom. The pairs of sides must have the same proportion in order for the triangles to be similar. We can test whether the three proportions above are the same by dividing each. If the quotient is the same, the pairs of sides must exist in the same proportion to each other:

$\frac{6}{3} = 2, \quad \frac{6}{3} = 2, \quad \frac{4}{1} = 4$

When we divide, only two pairs of sides have the same proportion (2). The third pair of sides does not exist in the same proportion as the other two, so these triangles cannot be similar. That leaves the last pair.
We have been given the measures of some of the angles. If all of the corresponding angles are congruent, then these two figures are similar. We know the measure of three angles in each figure. In fact, they are all corresponding angles. Therefore the one unknown angle in the first figure corresponds to the unknown angle in the second figure. As we know, the four angles in a quadrilateral must have a sum of $360^{\circ}$These two figures are similar because their angle measures are all congruent. Now let’s use what we have learned to solve the problem in the introduction. Real Life Example Completed The Mathematical Floor Here is the original problem once again. Reread it and then answer the questions at the end of this passage. Mrs. Gilman brought a small group of students over to look at this tile floor in the hallway of the art museum. “You see, there is even math in the floor,” she said, smiling. Mrs. Gilman is one of those teachers who loves to point out every place where math can be found. “Okay, I get it,” Jesse started. “I see the squares.” “There is a lot more math than just squares,” Mrs. Gilman said, walking away with a huge smile on her face. “She frustrates me sometimes,” Kara whispered, staring at the floor. “Where is the math besides the squares?” “I think she is talking about the size of the squares,” Hannah chimed in. “See? There are two different sizes.” “Actually there are three different sizes, and there could be more that I haven’t found yet,” Jesse said. “Remember when we learned about comparing shapes that are alike and aren’t alike? It has to do with proportions or something like that,” Hannah chimed in again. All three students stopped talking and began looking at the floor again. “Oh yeah, congruent and similar figures, but which are which?” Kara asked. The students are working on which figures in the floor pattern are congruent and which ones are similar. The congruent figures are exactly the same. We can say that the small dark brown squares are congruent because they are just like each other. They have the same side lengths. What is one other pair of congruent squares? The similar figures compare squares of different sizes. You can see that the figures are squares, so they all have 90 degree angles. The side lengths are different, but because the angles are congruent, we can say that they have the same shape, but not the same size. This makes them similar figures. The small dark brown square is similar to the large dark brown square. The small dark brown square is also similar to the square created by the ivory colored tile. There is a relationship between the different squares. Are there any more comparisons? Make a few notes in your notebook. having exactly the same shape and size. All side lengths and angle measures are the same. having the same shape but not the same size. All angle measures are the same, but side lengths are not. Technology Integration Khan Academy Congruent and Similar Triangles James Sousa, Congruent and Similar Triangles Time to Practice Directions: Tell whether the pairs of figures below are congruent, similar, or neither. Directions: Name the corresponding parts to those given below. 7. $\angle R$ 8. $MN$ 9. $\angle O$ Directions: Use the relationships between congruent figures to find the measure of $g$ Directions: Use the relationships between congruent figures to find the measure of $\angle T$ Directions: Answer each of the following questions. 12. Triangles $ABC$$DEF$$A$$58^{\circ}$$D$$A$ 13. True or false. 
If triangles $DEF$ and $GHI$ … 14. True or false. Similar figures have exactly the same size and shape. 15. True or false. Congruent figures are exactly the same in every way. 16. Triangles $LMN$ and $HIJ$ … 17. What is a proportion? 18. True or false. To figure out if two figures are similar, see if their side lengths form a proportion. 19. Define similar figures. 20. Define congruent figures.
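One extra worked example may help tie the similarity idea to proportions; the side lengths 4, 6, and 10 below are invented for illustration and do not refer to any figure in this lesson.

Suppose two rectangles are similar. The smaller one has sides 4 and 6, and the larger one has a short side of 10 and an unknown long side $x$. Corresponding sides of similar figures form equal ratios, so

$\frac{4}{10} = \frac{6}{x} \quad\Rightarrow\quad 4x = 60 \quad\Rightarrow\quad x = 15$

The scale factor from the small rectangle to the large one is $\frac{10}{4} = 2.5$, and indeed $6 \times 2.5 = 15$.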
{"url":"http://www.ck12.org/book/CK-12-Middle-School-Math---Grade-7/r3/section/8.5/","timestamp":"2014-04-19T23:08:55Z","content_type":null,"content_length":"145716","record_id":"<urn:uuid:953b3536-4cb0-432b-9c43-facc3ebfbfd5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
composition with dyadic operator?

I want to do something fairly simple; I am using the operator (++) with Data.Map insertWith, and it works fine, but I want to eliminate duplicates in the value created, so I want to compose it with nub. I tried (nub (++)), (nub $ (++)), (nub . (++)), all to no avail, in that the type of (++) does not match the expected type of nub ([a]). I could of course define an auxiliary function or a lambda, but I think that probably there is a composition which would be clearer. Hints please!

Tags: haskell, function-composition

Answer 1 (accepted): You can write this as

    ((nub .) .) (++)

    Prelude Data.List> ((nub .) .) (++) [1,2,3] [3,4,5]
    [1,2,3,4,5]

In general, you have

    (f . ) g x = f (g x)
    ((f . ) . ) g x y = f (g x y)
    (((f . ) . ) . ) g x y z = f (g x y z)
    ((((f . ) . ) . ) . ) g x y z v = f (g x y z v)

Here's the derivation of this identity for ((nub .) .), using (f . g) x = f (g x):

    (nub .)     :: Eq a1 => (a -> [a1]) -> a -> [a1]
    (nub .)     = \g x -> nub (g x)

    ((nub .) .) :: Eq a2 => (a -> a1 -> [a2]) -> a -> a1 -> [a2]
    ((nub .) .) = ((\g x -> nub (g x)) .)
                = \g' x' -> (\g x -> nub (g x)) (g' x')
                = \g' x' x -> nub ((g' x') x)

There is a nice article about this (and related) idioms, but it's in Russian :-(

Comments:
– Very nice, thanks - for the answer and the derivation. (Your last line of the derivation looks like it may be in Russian!?) – guthrie Jul 6 '11 at 20:37
– same as (nub .) . (++) – user102008 Jul 7 '11 at 7:35

Answer 2: What you want seems to be a composition of binary and unary functions, like this:

    compose :: (c -> d) -> (a -> b -> c) -> (a -> b -> d)
    compose unary binary a b = unary (binary a b)

And you ask for a point-free version (without mentioning the a and b variables). Let's try to eliminate them one by one. We'll start with b, using the fact that f (g x) = (f . g) x:

    compose unary binary a = unary . binary a

a is next. Let's desugar the expression first:

    compose unary binary a = ((.) unary) (binary a)

And apply the same composition rule again:

    compose unary binary = ((.) unary) . binary

This can be further written as

    compose unary = (.) ((.) unary)

or even as

    compose = (.) . (.)

Here, each (.) 'strips' an argument off the binary function, and you need two of them because the function is binary. This idiom is very useful when generalised for any functor: fmap . fmap (note that fmap is equivalent to . when a function is seen as a functor). This allows you to 'strip' any functor off; for example you can write:

    incrementResultsOfEveryFunctionInTwoDimentionalList :: [[String -> Integer]] -> [[String -> Integer]]
    incrementResultsOfEveryFunctionInTwoDimentionalList = fmap . fmap . fmap $ (+1)

So, your result becomes:

    (fmap . fmap) nub (++)

Comment:
– I think I have found the answer my brain was trying to reproduce: "Haskell function composition operator of type (c→d) → (a→b→c) → (a→b→d)". Thanks, I like your derivation & logic. I now remember that I had figured all this out before for another application, and forgotten it. I think it may be convenient to think of each application of "." as an argument insertion point. – guthrie Jul 8 '11 at 11:51

Answer 3: This problem is solved in a particularly simple and beautiful way by semantic editor combinators. Confer:
• Conal Elliott's original article
• An SO answer to a similar question introducing them

Your final composition would look like:

    (result.result) nub (++)

Answer 4: You can use the somewhat funny-looking (.).(.) combinator:

    Prelude> :set -XNoMonomorphismRestriction
    Prelude> :m + Data.List
    Prelude Data.List> let f = ((.).(.)) nub (++)
    Prelude Data.List> :t f
    f :: Eq a => [a] -> [a] -> [a]
    Prelude Data.List> f "ab" "ac"
    "abc"

It's probably gonna be more readable to just use a lambda or an auxiliary function in a where-clause, though.

Answer 5: I don't think the composition operator you want exists as a single function in any standard library. The shortest way to write it is probably ((.).(.)). Using the Functor definition for ((->) t), you can also write it as fmap . fmap or, if you prefer, fmap fmap fmap. All of the above are pretty cryptic, but the idiom is common enough that many people will recognize what you're doing.

By the way, you may want to avoid calling functions of two arguments "dyadic" in Haskell, because if you extend that terminology to functions of one argument you're going to really confuse people.

See also this question for some related discussion. You can also find lots of combinators with very intuitive names in this library.

Comments:
– Loved the dyadic comment :-) – luqui Jul 6 '11 at 17:12
– @luqui: Sadly the other common term for 2-ary functions is "binary", which is itself overloaded jargon, but more likely to be clear from context. Using "monadic" instead of "unary" is just begging for mass confusion, despair, civil unrest, and general inconvenience. – C. A. McCann Jul 6 '11 at 17:35
– @all - thanks for the clarifications (?!) - I'll try to avoid talking about arguments at all, since they seem .. argumentative. :-) – guthrie Jul 6 '11 at 20:34
– @guthrie: On the other hand, avoiding arguments entirely is kind of pointless. Ha, ha ha. Yeah, my pun license is gonna get revoked at this rate. – C. A. McCann Jul 6 '11 at 20:39
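Coming back to the insertWith use case in the original question, here is a minimal, self-contained sketch of how the composed function slots in; the map contents and the key "xs" are made up for this example.

    import qualified Data.Map as Map
    import Data.List (nub)

    -- Merge new values into the list stored at a key, dropping duplicates.
    -- ((nub .) . (++)) :: Eq a => [a] -> [a] -> [a], which matches the
    -- combining-function argument that insertWith expects.
    addValues :: (Ord k, Eq a) => k -> [a] -> Map.Map k [a] -> Map.Map k [a]
    addValues = Map.insertWith ((nub .) . (++))

    main :: IO ()
    main = print (addValues "xs" [1,2,3] (Map.fromList [("xs", [3,4,5])]))
    -- prints: fromList [("xs",[1,2,3,4,5])]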
{"url":"http://stackoverflow.com/questions/6599119/composition-with-dyadic-operator","timestamp":"2014-04-19T02:21:27Z","content_type":null,"content_length":"88889","record_id":"<urn:uuid:dab9d556-46c5-4746-a28e-ae52d50d2f30>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Behaviour of Add (+) Operator Author Behaviour of Add (+) Operator Hi All Joined: Jan I was just wandering by a very abnormal output from the simple summation of double values in JAVA 11, 2002 Like i tried these examples Posts: 26 double d1 = .11; double d2 = 1.53; //Result : 1.6400000000000001 double d1 = .11f; //same value with f appended double d2 = 1.53f; //same value with f appended //Result : 1.639999970793724 double d1 = .11; double d2 = 1.43; //Result : 1.54 Though 1st one is almost correct but still not accurate and i couldnt able to trace the reason for this behaviour.And there is no fix pattern it is following. Could anyone suggest what cld be the problem and where i shld look for finding the possible reason for it. If i want to know how/which algorithm java do for (+) operator, where cld i find the same. Thanks in advance Ranch Hand I think the problem is more in conversion of decimal fractions to binary fractions. There are errors introduced when you try to change 0 + 1/10 + 1/100 to something like Joined: Jul 0 + 0/2 + 0/4 + 0/8 + n/16 + m/32 ... 01, 2002 The error is inescapable in binary arithmetic. The addition operator seems to be doing its job well. By the way, you could also see similar apparent errors with -, /, or * as well. Posts: 65 Dave Patterson Joined: Jan That could be the possible reason but then is it be Platform dependent or this behaviour will be fix for any PC or platform. 11, 2002 Tegards Posts: 26 Subhash "The Hood" Sheriff You can get imprecision when doing math with doubles in any language including C++, Cobol, Visual Basic etc. By it's very nature a double or a float would need to have an infinite number of decimal places to be exactly precise. Since that can never happen, all results in doubles are approximations at some point. Joined: Sep You need to decide what you level of precision is going to be needed. If you want to INSURE accuracy out to 4 decimal places, then Multiply both of the operands by 10,000 into integers, 29, 2000 do the addition on the integers and then divide the result by 10,000. That gives you SOME control over the way that the precision is handled. Posts: 8521 [ January 06, 2003: Message edited by: Cindy Glass ] "JavaRanch, where the deer and the Certified play" - David O'Meara Sheriff The exact output may vary slightly from platform to platform or JDK version to JDK version, but the basic problem is inherent to all Java implementations because of the way floating types are defined. If you want to guarantee that you see the exact same results on any platform/JDK, use the strictfp keyword. (Usually we don't really care about this, but it's Joined: Jan available if you want it.) 30, 2000 There are four basic solutions to this problem: Posts: (1) Ignore it. 1.6400000000000001 is so close to 1.64 that it really shouldn't matter in the real world. (However if you later do something like subtract 1.64 from the result, there may 18671 be a big different between 0 and 0.0000000000000001, so be careful.) (2) Ignore it as above, except that whenever you need to display the result to an end user (who will be upset by the extra .0000000000000001) use a DecimalFormat object to round off the displayed result to some maximum number of digits. (3) Use java.math.BigDecimal instead for float or double. (4) Use integer types instead. For example if .11 and 1.53 represent prices in dollars and cents, then you may know that all prices will be in an integer number of cents - so do all your calcs in cents, using 11 and 153 rather than .11 and 1.53. 
To display final answers you may need to divide by 100.0 - this should be OK if it's the last thing you do. [ January 06, 2003: Message edited by: Jim Yingst ] "I'm not back." - Bill Harding, Twister
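For readers who want to see options (3) and (4) concretely, here is a small, self-contained sketch; it reuses the .11 and 1.53 values from earlier in the thread, and the class name MoneyAdd is just a placeholder for this example.

    import java.math.BigDecimal;

    public class MoneyAdd {
        public static void main(String[] args) {
            // Option 3: BigDecimal stores decimal digits exactly, so no binary rounding creeps in.
            BigDecimal a = new BigDecimal("0.11");
            BigDecimal b = new BigDecimal("1.53");
            System.out.println(a.add(b));       // prints 1.64

            // Option 4: keep the values as integer cents and convert only when displaying.
            long cents = 11 + 153;              // 0.11 + 1.53 expressed in cents
            System.out.println(cents / 100.0);  // prints 1.64
        }
    }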
{"url":"http://www.coderanch.com/t/324319/java/java/Behaviour-Add-Operator","timestamp":"2014-04-20T01:24:36Z","content_type":null,"content_length":"28659","record_id":"<urn:uuid:17ac0aa5-72ec-40fc-a0a6-df5ef9db290b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Mansfield, TX SAT Math Tutor Find a Mansfield, TX SAT Math Tutor ...As both a university instructor and a test prep tutor, I am devoted to helping students master the daunting process of standardized testing. I teach the very best strategies, tips and shortcuts for the SAT, GRE and GMAT. I want each of my students to have the clearest advantage on test day. 5 Subjects: including SAT math, GRE, GMAT, SAT reading I am an experienced, certified secondary math teacher and a mom of five. I have taught and tutored basic all levels of math, from elementary math skills through collage-level calculus for over ten years. I have experience in helping students identify educational "gaps" and working to fill those ga... 10 Subjects: including SAT math, calculus, statistics, geometry I graduated from Brigham Young University in 2010 with a degree in Statistical Science and I am looking into beginning a master's program soon. I have always loved math and took a variety of math classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side. 7 Subjects: including SAT math, statistics, algebra 1, geometry I am a recently retired (2013) high school math teacher with 30 years of classroom experience. I have taught all maths from 7th grade through AP Calculus. I like to focus on a constructivist style of teaching/learning which gets the student to a conceptual understanding of mathematical topics. 12 Subjects: including SAT math, calculus, geometry, statistics ...I was an applied math major at the academy, and finished my math degree at Trinity University. I was a Division I club soccer player growing up in Dallas. I went on to play for the Naval Academy, which is Division I, and finished my soccer career at Trinity. 14 Subjects: including SAT math, chemistry, geometry, ASVAB Related Mansfield, TX Tutors Mansfield, TX Accounting Tutors Mansfield, TX ACT Tutors Mansfield, TX Algebra Tutors Mansfield, TX Algebra 2 Tutors Mansfield, TX Calculus Tutors Mansfield, TX Geometry Tutors Mansfield, TX Math Tutors Mansfield, TX Prealgebra Tutors Mansfield, TX Precalculus Tutors Mansfield, TX SAT Tutors Mansfield, TX SAT Math Tutors Mansfield, TX Science Tutors Mansfield, TX Statistics Tutors Mansfield, TX Trigonometry Tutors Nearby Cities With SAT math Tutor Arlington, TX SAT math Tutors Bedford, TX SAT math Tutors Benbrook, TX SAT math Tutors Burleson SAT math Tutors Cedar Hill, TX SAT math Tutors Dalworthington Gardens, TX SAT math Tutors Desoto SAT math Tutors Duncanville, TX SAT math Tutors Euless SAT math Tutors Forest Hill, TX SAT math Tutors Glenn Heights, TX SAT math Tutors Grand Prairie SAT math Tutors Highland Park, TX SAT math Tutors Midlothian, TX SAT math Tutors Pantego, TX SAT math Tutors
{"url":"http://www.purplemath.com/Mansfield_TX_SAT_math_tutors.php","timestamp":"2014-04-18T19:13:58Z","content_type":null,"content_length":"24178","record_id":"<urn:uuid:8e4f8e52-b17c-4304-9192-13cc604cbc69>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
- Journal of Symbolic Computation , 1996 "... this paper we overcome these drawbacks by working with clauses with symbolic constraints (Kirchner et al., 1990; Nieuwenhuis and Rubio, 1992; Rubio, 1994; Nieuwenhuis and Rubio, 1995) . A constrained clause C [[ T ]] is a shorthand for the set of ground instances of the clause part C satisfying the ..." Cited by 11 (6 self) Add to MetaCart this paper we overcome these drawbacks by working with clauses with symbolic constraints (Kirchner et al., 1990; Nieuwenhuis and Rubio, 1992; Rubio, 1994; Nieuwenhuis and Rubio, 1995) . A constrained clause C [[ T ]] is a shorthand for the set of ground instances of the clause part C satisfying the constraint T . In a constrained equation , 1993 "... We propose a new inference system for automated deduction with equality and associative commutative operators. This system is an extension of the ordered paramodulation strategy. However, rather than using associativity and commutativity as the other axioms, they are handled by the AC-unification a ..." Cited by 10 (1 self) Add to MetaCart We propose a new inference system for automated deduction with equality and associative commutative operators. This system is an extension of the ordered paramodulation strategy. However, rather than using associativity and commutativity as the other axioms, they are handled by the AC-unification algorithm and the inference rules. Moreover, we prove the refutational completeness of this system without needing the functional reflexive axioms or ACaxioms. Such a result is obtained by semantic tree techniques. We also show that the inference system is compatible with simplification rules. - JOURNAL OF SYMBOLIC COMPUTATION , 1996 "... ..." , 1998 "... . In this paper we solve a long-standing open problem by showing that strict superposition---that is, superposition without equality factoring---is refutationally complete. The difficulty of the problem arises from the fact that the strict calculus, in contrast to the standard calculus with equality ..." Cited by 6 (0 self) Add to MetaCart . In this paper we solve a long-standing open problem by showing that strict superposition---that is, superposition without equality factoring---is refutationally complete. The difficulty of the problem arises from the fact that the strict calculus, in contrast to the standard calculus with equality factoring, is not compatible with arbitrary removal of tautologies, so that the usual techniques for proving the (refutational) completeness of paramodulation calculi are not directly applicable. We deal with the problem by introducing a suitable notion of direct rewrite proof and modifying proof techniques based on candidate models and counterexamples in that we define these concepts in terms of, not semantic truth, but direct provability. We introduce a corresponding concept of redundancy with which strict superposition is compatible and that covers most simplification techniques. We also show that certain superposition inferences from variables are redundant---a result that is relevant, ... - 5th International Conference on Rewriting Techniques and Applications (RTA)', LNCS 690 , 1995 "... This paper studies completion in the case of equations with constraints consisting of first-order formulae over equations, disequations, and an irreducibility predicate. We present several inference systems which show in a very precise way how to take advantage of redundancy notions in this framewor ..." 
Cited by 5 (1 self) Add to MetaCart This paper studies completion in the case of equations with constraints consisting of first-order formulae over equations, disequations, and an irreducibility predicate. We present several inference systems which show in a very precise way how to take advantage of redundancy notions in this framework. A notable feature of these systems is the variety of tradeooes they present for removing redundant instances of the equations involved in an inference. The irreducibility predicates simulate redundancy criteria based on reducibility (such as prime superposition and Blocking in Basic Completion) and the disequality predicates simulate the notion of subsumed critical pairs; in addition, since constraints are passed along with equations, we can perform hereditary versions of all these redundancy checks. This combines in one consistent framework stronger versions of all practical critical pair criteria. We also provide a rigorous analysis of the problem with completing sets of equation... , 1994 "... . The paper investigates reasoning with set-relations: intersection, inclusion and identity of 1-element sets. A language is introduced which, interpreted in a multialgebraic semantics, allows one to specify such relations. An inference system is given and shown sound and refutationally ground-compl ..." Cited by 4 (2 self) Add to MetaCart . The paper investigates reasoning with set-relations: intersection, inclusion and identity of 1-element sets. A language is introduced which, interpreted in a multialgebraic semantics, allows one to specify such relations. An inference system is given and shown sound and refutationally ground-complete for a particular proof strategy which selects only maximal literals from the premise clauses. Each of the introduced set-relations satisfies only two among the three properties of the equivalence relations - we study rewriting with such non-equivalence relations and point out differences from the equational case. As a corollary of the main ground-completeness theorem we obtain ground-completeness of the introduced rewriting technique. 1 Introduction Reasoning with sets becomes an important issue in different areas of computer science. Its relevance can be noticed in constraint and logic programming e.g. [SD86, DO92, Jay92, Sto93], in algebraic approach to nondeterminism e.g. [Hus93, He... , 1994 "... We present a modification to the paramodulation inference system, where semantic equality and non-equality literals are stored as local simplifiers with each clause. The local simplifiers are created when new clauses are generated and inherited by the descendants of that clause. Then the local simpl ..." Cited by 2 (0 self) Add to MetaCart We present a modification to the paramodulation inference system, where semantic equality and non-equality literals are stored as local simplifiers with each clause. The local simplifiers are created when new clauses are generated and inherited by the descendants of that clause. Then the local simplifiers can be used to perform demodulation and unit simplification, if certain conditions are satisfied. This reduces the search space of the theorem proving procedure and the length of the proofs obtained. In fact, we show that for ground SLD resolution with any selection rule, any set of clauses has a polynomial length proof. Without this technique, proofs may be exponential. 
We show that this process is sound, complete, and compatible with deletion rules (e.g., demodulation, subsumption, unit simplification, and tautology deletion), which do not have to be modified to preserve completeness. We also show the relationship between this technique and model elimination. "... . We consider reasoning and rewriting with set-relations: inclusion, nonempty intersection and singleton identity, each of which satisfies only two among the three properties of the equivalence relations. The paper presents a complete inference system which is a generalization of ordered paramodulat ..." Add to MetaCart . We consider reasoning and rewriting with set-relations: inclusion, nonempty intersection and singleton identity, each of which satisfies only two among the three properties of the equivalence relations. The paper presents a complete inference system which is a generalization of ordered paramodulation and superposition calculi. Notions of rewriting proof and confluent rule system are defined for such nonequivalence relations. Together with the notions of forcing and redundancy they are applied in the completeness proof. Ground completeness cannot be lifted to the nonground case because substitution for variables is restricted to deterministic terms. To overcome the problems of restricted substitutivity and hidden (in relations) existential quantification, unification is defined as a three step process: substitution of determistic terms, introduction of bindings and "on-line" skolemisation. The inference rules based on this unification derive non-ground clauses even from the ground one...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2023115","timestamp":"2014-04-21T03:17:37Z","content_type":null,"content_length":"34527","record_id":"<urn:uuid:3783d3ee-abb7-4863-9dc2-b47c7c91ffd3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Consistency of General Constraint-Satisfaction Problems Philippe Jégou The problem of checking for consistency of Constraint-Satisfaction Problems (CSPs) is a fundamental problem in the field of constraint-based reasoning. Moreover, it is a hard problem since satisfiability of CSPs belongs to the class of NP-complete problems. So, in (Freuder 1982), Freuder gave theoretical results concerning consistency of binary CSPs (two variables per constraints). In this paper, we proposed an extension to these results to general CSP (n-ary constraints). On one hand, we define a partial consistency well adjusted to general CSPs called hyper-k-consistency. On the other hand, we proposed a measure of the connectivity of hypergraphs called width of hypergraphs. Using width of hypergraphs and hyper-k-consistency, we derive a theorem defining a sufficient condition for consistency of general CSPs. This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
{"url":"http://aaai.org/Library/AAAI/1993/aaai93-018.php","timestamp":"2014-04-19T07:52:37Z","content_type":null,"content_length":"2732","record_id":"<urn:uuid:43206d9b-7d59-4fad-a332-2bbf14af1841>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] par("usr") trouble in multiplot axis scaling Marc Schwartz MSchwartz at MedAnalytics.com Mon Oct 25 21:25:55 CEST 2004 On Mon, 2004-10-25 at 13:19, Johannes Graumann wrote: > Hello, > I'm blotting a series of growth curves into a multiplot environment > created with layout(). > since I want the four plots to be easily visually comparable, I do the > following: > #first plot > plot(x,y,<stuff>) > standarduser<-par()$usr > ... > <some fitting> > ... > lines(spline(x, <fitted_equation>)) > #everything all right till here > # second plot > plot(x,y,<stuff>) > par(usr=standarduser) > ... > <some fitting> > ... > lines(spline(x, <fitted_equation>)) > The problem here is, that the axis of the second plot seem to be scaled > according to the parameters of the first, BUT the fitted curve in the > second plot isn't! > Any idea about what I'm doing wrong? > Please help this newbie out of his misery! > Joh If I am correctly understanding what you are doing and what you want, you would like each of the four plots to have the same axis ranges? Part of the problem, I think, is that in your second plot(), the axis ranges are automatically set based upon the ranges of your x and y data in that call. These presumably are different than the x and y values in your first plot? Thus, the initial plot region scales are going to be different for each plot. By default, this will be range(x) +/- 4% and range(y) +/- 4%. When you force the second plot region's values to be 'standarduser', your underlying x,y plot, having already been drawn, and the new lines to be added are then on different scales in the same plot. If my assumptions are correct, you would be better off calling plot() each time using the 'xlim' and 'ylim' arguments to explicitly define the axis ranges with known common values. For example, if you know that the range of all x values is r.x and the range of all y values is r.y: #first plot plot(x, y, <stuff>, xlim = r.x, ylim = r.y) <some fitting> lines(spline(x, <fitted_equation>)) # second plot plot(x, y, <stuff>, xlim = r.x, ylim = r.y) <some fitting> lines(spline(x, <fitted_equation>)) This gets around the need to manipulate the pars directly and hopefully less confusion in reading the code. The key is knowing the common ranges of your x and y values in advance. Does that help? Marc Schwartz More information about the R-help mailing list
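To make the suggestion above concrete, here is a minimal runnable sketch along those lines; the data, the two-panel layout() call, and the spline fits are invented for illustration and are not from the original poster's code.

    ## two data sets with different ranges
    x1 <- 1:10; y1 <- c(2, 4, 3, 6, 7, 6, 8, 9, 8, 10)
    x2 <- 1:8;  y2 <- c(20, 22, 21, 25, 24, 27, 26, 28)

    ## common axis limits, computed once up front
    r.x <- range(x1, x2)
    r.y <- range(y1, y2)

    layout(matrix(1:2, nrow = 1))
    plot(x1, y1, xlim = r.x, ylim = r.y)
    lines(spline(x1, y1))
    plot(x2, y2, xlim = r.x, ylim = r.y)
    lines(spline(x2, y2))

Because both plot() calls share xlim and ylim, anything added afterwards with lines() is drawn on the same coordinate system in each panel, with no need to touch par("usr").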
{"url":"https://stat.ethz.ch/pipermail/r-help/2004-October/059708.html","timestamp":"2014-04-19T07:14:42Z","content_type":null,"content_length":"5316","record_id":"<urn:uuid:4d369a80-7264-48da-9e37-65c42af433fe>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
In a k-linear category, what is the tensor product between a hom space and an object?

Question: I am reading something where this is used extensively, but it is not defined anywhere and no references are given, and I can't find any.

Tags: ct.category-theory

Accepted answer: If $\mathcal{C}$ is a $k$-linear category, $X \in \mathcal{C}$, and $V$ is any $k$-vector space (in particular, it could be a hom of two objects in $\mathcal{C}$), then $V \otimes X$ (sometimes written $V \odot X$ to avoid confusion with a monoidal structure) is the object representing the functor $\mathcal{C} \to \operatorname{Vect}$ sending $Y \in \mathcal{C}$ to $\operatorname{Hom}_{\operatorname{Vect}}(V, \mathcal{C}(X, Y))$, if such an object exists. This is a special case of the notion of copower. In the case where $\mathcal{C}$ is a tensor category with internal homs, the construction agrees with the tensoring by the internal hom.

Second answer: Let $\mathcal{A}$ be a $k$-linear category, $A \in \mathcal{A}$ an object and $V$ a $k$-vector space. We say that the tensor product of $A$ and $V$ exists if the functor from $\mathcal{A}$ to $\mathbf{Vect}$ given by $A^{\prime} \mapsto \mathrm{Hom}_{\mathrm{Vect}}(V,\mathcal{A}(A,A^{\prime}))$ is representable. The representing object is sometimes denoted by $V\odot A$ or, more commonly in the $k$-linear context, by $V\otimes A$. Thus we have a natural isomorphism

$\mathcal{A}(V\odot A,A^{\prime}) \cong \mathrm{Hom}_{\mathrm{Vect}}(V,\mathcal{A}(A,A^{\prime})).$

You can now apply this to the special case where $V$ is the space of homomorphisms between two objects. If all tensor products exist, this is simply saying that $\mathcal{A}(A,-) \colon \mathcal{A} \rightarrow \mathrm{Vect}$ has a left adjoint given by $-\odot A \colon \mathrm{Vect} \rightarrow \mathcal{A}$. These notions can be generalized to categories enriched in any cosmos $\mathcal{V}$ (a cosmos is a complete and cocomplete symmetric monoidal closed category). These tensor products can then be seen as a special type of weighted colimits.

Third answer: To add to both the answers of Daniel and Evan, note that, if your category is additive and $V$ is finite dimensional, then $V \odot A$ will always exist. Let $e_1, e_2, \dots, e_n$ be a basis for $V$; then $V \odot A$ is isomorphic to $A^{\oplus n}$.
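As a small expansion of that last answer (not part of the original page), the finite-dimensional claim can be checked by chaining the natural isomorphisms

$\mathcal{A}(A^{\oplus n}, A') \;\cong\; \mathcal{A}(A, A')^{\oplus n} \;\cong\; \mathrm{Hom}_{\mathrm{Vect}}(k^{n}, \mathcal{A}(A, A')) \;\cong\; \mathrm{Hom}_{\mathrm{Vect}}(V, \mathcal{A}(A, A')),$

naturally in $A'$: the first isomorphism uses additivity (finite biproducts), and the last uses the chosen basis $e_1, \dots, e_n$ to identify $V$ with $k^n$. Hence $A^{\oplus n}$ represents the functor defining $V \odot A$.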
{"url":"http://mathoverflow.net/questions/25782/in-a-k-linear-category-what-is-the-tensor-product-between-a-hom-space-and-an-ob","timestamp":"2014-04-16T20:15:03Z","content_type":null,"content_length":"57265","record_id":"<urn:uuid:f01eaa33-f1c7-480c-be36-4964c11a4f05>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Mountain Beltway Friday fold: wavelength contrast Posted on October 15, 2010 by Callan Bentley I scored this photo off the Internet more than five years ago, the first time I taught Structural Geology at George Mason University. I failed to note the website I got it from, and now that website has apparently disappeared, at least as far as the view from Google is concerned. If anyone knows the provenance of this image, please let me know so that I can properly attribute it. I hesitate to post something like this without knowing who took it, but I did note to myself that it came from the Point Lake Greenstone Belt in the Northwestern Territories of Canada. This image and its implications follow so nicely on to our discussion last week about fold wavelength and the Ramberg-Biot equation that I can’t resist it. Ready? Brace yourself… I think that this is one of the coolest structural geology photos ever taken. Here it is graced with some annotations: Maximum compressive stress was in this case from the back to the front. The same vein, oriented ~parallel to σ[1], is folded in two very different ways, depending on which rock type it is cutting across. As with a week ago, we can explain this behavior using the Ramberg-Biot equation: L = 2 π t (η / 6η[o])^⅓ where L is the wavelength of the fold (in other words, the distance from one fold hinge to the next fold hinge); t is the thickness of the folded layer; η is the viscosity (resistance to flow) of the quartz vein (or, in general, the more competent of the two layers); and η[o] is the viscosity of the rock unit (sandstone or shale) that the quartz vein cuts across. If you keep t and η constant (for say, the rightmost of the two quartz veins), then the only thing left to vary would be η[o]. So sandstone will have one η[o], while shale will have another η[o]. The sandstone is more resistant to flowing than the shale is. The viscosity contrast between the quartz vein and the sandstone is less (they’re both made of quartz) than the viscosity contrast between the quartz vein and the shale (which have very different material properties). The high viscosity contrast with the shale makes for a very big number, which raised to the ⅓ power (i.e., you take the cube root) makes for a very small number. This small number, multiplied by the constants of 2, π, and t, gives you L, which will also be a small number: hence the wavelength is small, and as a result, the folds are crunkled up next to one another like sardines in a can. On the other hand, the low contrast between the viscosities of the quartz vein and the quartz sandstone means that you get a rather small number. Say η = 3. If η[o] is also about 3, then you have: (3 /(6*3)), or the fraction 1/6. Expressed as a decimal instead of a fraction, this is 0.167. Take the cube root of that, and you end up with a bigger number, in this case 0.55. Multiply that by 2, π, and t, and you get your new wavelength, L. Because you have a larger number in the (η / 6η[o])^⅓ part of the equation, and everything else is the same, you end up with a larger wavelength. The result is only one fold antiform in the sandstone. In the neighboring shale, ~23 antiforms are packed into the same distance along strike of the vein. Wild stuff, right? Happy Friday. 
Let’s hope your weekend is of sufficiently high contrast to the sludge of the week that you get all loose and wiggly, like the top part of the photo… : ) Filed under: canada, folds, Friday Fold, greenstone belts, math, structure | 8 Comments » Rumeli Hisarı Posted on October 14, 2010 by Callan Bentley Right after I got to Istanbul on this most recent trip, I took a taxi from my hotel down to the Bosphorus, to check out the Rumeli Hisarı, a fort complex built in 1452 by Sultan Mehmet the II in anticipation of the following year’s siege of Constantinople. It’s constructed at the narrowest point on the Bosphorus (660 m wide), with the aim of controlling boat traffic coming from the Black Sea. This narrow spot is today where they have the second of two bridges spanning the Bosphorus. It looks like this: It’s in Europe; that’s Asia on the far right of the photo. A few more shots of the fortress’s pattern of towers and interconnecting walls: Inside, I was pleased to note the variety of building stones. Here’s a nice porphyritic andesite which was a common constituent of the walls: And a folded limestone: Here are some yellowish blocks that are weathering away faster than the mortar which holds them in place. There is a Turkish 1-lira coin in front of the dark block near the center, to provide a sense of scale: Here’s a similar phenomenon playing out with some bricks used to make an archway, except here the mortar is the more rapidly weathering component: Check out this slab of brick… it’s got a curious adornment: Zoomed in to show this detail: Dog prints! Sometime a long time ago, maybe more than 500 years ago, a brick maker put out slabs of clay to dry, and some long-dead dog walked across it. The dog’s footprints are a kind of “historical trace fossil” that was then incorporated into this ancient structure. Visiting the Rumeli Hisarı was a pleasant experience. I walked down along the Bosphorus next, peering into its surprisingly clear waters and counting jellyfish, then got a pide at a cafe. I caught another cab back to the hotel, and eventually fell asleep, a victim of jet lag… Filed under: asia, building stone, europe, folds, history, igneous, limestone, mammals, structure, trace fossils, travel, turkey, weathering | Comments Off Lola, the cartoonist’s companion Posted on October 13, 2010 by Callan Bentley It’s been a while since I’ve posted any photos of my supremely helpful cat Lola on the blog, so here you go: Lola loves to sit on paper, so when I break out the sketchbook to start working on my monthly cartoon for EARTH magazine, she sidles right up and stakes a claim. Fortunately, I was able to continue working in this case, as she wasn’t perched on the “active area” of the paper. As you may be able to discern, the cartoon is about the newly-fraught relationship between geologists and the law… watch for it in December’s issue of EARTH. Filed under: art, lola | 1 Comment » The word is out… Posted on October 13, 2010 by Callan Bentley Others have started announcing our move to a new blog consortium hosted by the American Geophysical Union, so I suppose I will go ahead and reveal that I, too, am part of this scientific cabal… Sometime before the end of the month, Mountain Beltway and six other top-notch earth and space science blogs will relocate to AGU servers and a new URL. 
I’ll leave directions here for folks to Filed under: blogs | Comments Off Güvem geoheritage site, Turkey Posted on October 12, 2010 by Callan Bentley Looks like I’m late to the party… While I was away, apparently the geoblogosphere went on a rampage of cooling columns. Everyone was posting images of their favorite columnar joints, and I was left out in the cold. Let me remedy that now. As it turns out, I was visiting some columns while everyone else was writing about them. Here are some images from the Güvem area of Turkey, north of Ankara, where there are a mix of late Miocene lake sediments and intercalated volcanic rocks, including these basalt flows. We stopped to visit them last Wednesday on our way to the North Anatolian Fault: The dark entablature looms above: A nice central panel with a good cross-section of the flow: I ran across the street (and a stream) to check out a similar exposure there: Close-up of a few columns (with my hand for scale): And a few more shots of the scene: A full list of Turkish geoheritage sites may be found at the end of this document. Lockwood maintained a list of the other blog posts in this meme here, which I’ll quote below since it’s so nicely laid out already: Geotripper, here, here and here, Sam at Geology Blues Phillip, also at Geology Blues Silver Fox, and another columnar post here. Glacial Till and another! Life in Plane Light: Squashed columns! Aaron at Got The Time Geology Rocks Dana at En Tequila Es Verdad Cujo 359 (see comment on Dana’s post for description) Wayne at Earthly Musings has a gorgeous photo of columns below the rapids at Lava Falls in Grand Canyon. MB Griggs at The Rocks Know has photos of what may well be the most perfect columns in the world. Jessica, AKA Tuff Cookie, showcases a variety in different rock types. Hypocentre finds columns in a very unlikely place, as well as a spectacular photo of radiating columns. Dave Tucker at Northwest Geology Field Trips displays precisely one slew of columnar displays in Washington State. Dave Bressan at History of Geology shares the first printed image of columnar basalts, from 1565. A couple more variations from Dana’s and my driving about W. Oregon. Dr. Jerque has some spectacular examples from the bottom of the Grand Canyon. Silver Fox Has another (better than mine) photo of horizontal columns in a set of dikes, and points out a couple more links to columny goodness (not to be confused with calumny, which is not Dan McShane offers some more Washington State columns. Garry Hayes, who deserves credit for starting this meme (see first links in the list, above), adds yet another set of photos from the opening of the Atlantic Ocean, and a lovely guest photo by Ivan Ivanyvienen, of columnar jointing in rhyolite at the San Juan Precordillera. Update, October 4: Eric Klemetti- who did his doctoral work just down the street from where I’m sitting- has joined the fray. (Also, check out the links readers have left in the comments) Helena Heliotrope at Liberty, Equality and Geology shows off some more Washington columns. Chris and Anne at Highly Allochthonous each toss in a photo- Tokatee Falls looks awesome! Some more Cape Perpetua jointed dike photos from Cujo359, and Devil’s Churn- again, numerous dikes with horizontal columns. 
Filed under: basalt, conferences, igneous, joints, structure, travel, turkey | 1 Comment » Remains of a mud puddle Posted on October 12, 2010 by Callan Bentley Last Wednesday, I took a field trip to the North Anatolian Fault in Turkey, but I got distracted by this fine looking display of sedimentary structures in a dried-up mud puddle in an old quarry. The coin, a Turkish lira, is about the same size as a U.S. quarter. What you’re seeing here are dessication cracks (“mud cracks”), and accompanying them are exquisite little raindrop impressions, the minute craters excavated by a light sprinkle of rain after the mud has already started to dry out and “gel.” (If the water which deposited the mud were still there when the rain fell, the standing water would have dissipated the energy of the drops’ impacts, and no craters would have been excavated.) Here’s a slightly more oblique perspective, to give a sense of how the individual mud flakes are internally laminated, and curl along the edges, producing a concave-up shape. Note too that the cracks bisect some of the rain drop impressions, and therefore the raindrops fell first, and then the dessication cracks propagated on through them, a nice example of cross-cutting relationships. In some cases, the propagating crack used the “crater rim” of the drops as a mechanical zone of weakness, fracturing there preferentially. Here, let’s zoom in on a couple of nice examples (one from photo #1, a second from photo #2): If anyone wants a full-sized copy of any of these images for teaching purposes, let me know via e-mail, and I’ll send you one. Filed under: primary structures, sediment, turkey | 3 Comments »
{"url":"http://mountainbeltway.wordpress.com/page/2/","timestamp":"2014-04-17T12:37:49Z","content_type":null,"content_length":"69782","record_id":"<urn:uuid:6ddaafa4-10f1-4e4f-a97a-405dc466da14>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/llib_xoc/medals","timestamp":"2014-04-17T04:08:11Z","content_type":null,"content_length":"58587","record_id":"<urn:uuid:ebfdfce0-e2ce-461e-b0a6-0fba3de69e0a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Octal Number Subtraction Octal number subtraction follows the same rules as the subtraction of numbers in any other number system. The only variation is in the quantity of the borrow. In the decimal system, you had to borrow a group of 10 base 10. In the binary system, you borrowed a group of 2 base 10. In the octal system you will borrow a group of 8 base 10. Consider the subtraction of 1 from 10 in decimal, binary, and octal number systems: In each example, you cannot subtract 1 from 0 and have a positive difference. You must use a borrow from the next column of numbers. Let’s examine the above problems and show the borrow as a decimal quantity for clarity: When you use the borrow, the column you borrow from is reduced by 1, and the amount of the borrow is added to the column of the minuend being subtracted. The following examples show this procedure: In the octal example 7 base 8 cannot be subtracted from 6 base 8, so you must borrow from the 4. Reduce the 4 by 1 and add 10 base 8 (the borrow) to the 6 base 8 in the minuend. By subtracting 7 base 8 from 16 base 8, you get a difference of 7 base 8. Write this number in the difference line and bring down the 3. You may need to refer to table for octal addition in the previous tutorial on octal addition, until you are familiar with octal numbers. To use the table for subtraction, follow these directions. Locate the subtrahend in column Y. Now find where this line intersects with the minuend in area Z. The remainder, or difference, will be in row X directly above this point. (back) (top) (next) (return to number systems page)
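The worked columns referred to above (the subtraction of 1 from 10 in each base, and the later octal example) appear as figures on the original page; the following plain-text reconstruction is only an illustration consistent with the surrounding description, and the digits of the two-column octal example are partly inferred:

  decimal        binary         octal
     10             10             10
   -  1           -  1           -  1
   ----           ----           ----
      9              1              7
  (borrow a      (borrow a      (borrow a
  group of 10)   group of 2)    group of 8)

A two-digit octal example with a borrow, of the kind the text walks through (7 cannot be taken from 6, so borrow 10 base 8 from the next column, reducing the 4 to 3):

     46 (base 8)         3 16
   -  7 (base 8)   ->   -   7
   -------------        -----
     37 (base 8)         3  7

Check in decimal: 46 base 8 = 38, and 38 - 7 = 31 = 37 base 8.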
{"url":"http://www.learn-about-electronics.com/octal-number-subtraction.html","timestamp":"2014-04-18T18:11:45Z","content_type":null,"content_length":"29945","record_id":"<urn:uuid:c8b2cc09-70d4-4b59-b241-7beddf644437>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: How do you prove that 2+2=4? Is it enough to consider 2 objects, then 2 more then put them together and count them and get 4, or do you have to resort to fancy-schmancy methods?
Replies: 8 Last Post: Nov 18, 2012 7:45 PM
Re: How do you prove that 2+2=4?
Posted: Nov 16, 2012 12:06 AM
On Nov 15, 11:22 pm, donstockba...@hotmail.com wrote:
> Just askin.
Principia Mathematica BS says it takes 100 pages. I don't know anybody who has tried to explain what is going on there (everyone just sits in awe at the number of pages), but Peano Arithmetic proves it in a few steps where 2 is 0'' and 4 is 0'''', based on the axioms x+0=x and x+y' = (x+y)' where x' is x+1 ("successor of x").
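For readers who want the "few steps" spelled out, here is a sketch of the Peano-style derivation the reply describes (the layout is mine; only the two axioms come from the post), writing 2 = 0'' and 4 = 0'''':

2 + 2 = 0'' + 0''
      = (0'' + 0')'      by x + y' = (x + y)'
      = ((0'' + 0)')'    by x + y' = (x + y)' again
      = ((0'')')'        by x + 0 = x
      = 0''''
      = 4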
{"url":"http://mathforum.org/kb/message.jspa?messageID=7923963","timestamp":"2014-04-19T17:27:41Z","content_type":null,"content_length":"27279","record_id":"<urn:uuid:52062285-eb1b-4f76-9d6a-bf9f52398cf6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Interview Questions

The input is a sequence x1, x2, ..., xn of integers in an arbitrary order, and another sequence a1, a2, ..., an of distinct integers from 1 to n (namely a1, a2, ..., an is a permutation of 1, 2, ..., n). Both sequences are given as arrays. Design an O(n log n) algorithm to order the first sequence according to the order imposed by the permutation. In other words, for each i, xi should appear in the position given by ai. For example, if x = 17, 5, 1, 9 and a = 3, 2, 4, 1, then the outcome should be x = 9, 5, 17, 1. The algorithm should be in-place, so you cannot use an additional array.

Given a random generator rand(5) which generates numbers between 0 and 4, how do you generate numbers between 0 and 6, i.e., implement rand(7)? (One possible approach is sketched after this list.)

Car parking problem. A given array represents the desired order in which cars need to be parked, for example 4, 6, 5, 1, 7, 3, 2, empty. The cars are currently parked in some other order, such as empty, 1, 2, 3, 7, 6, 4, 5. Some person needs to get them into the correct order; list out all instructions to the person to reach the correct order with the least number of swaps.

Given a list of tuples representing intervals, return the total length of the range these intervals cover: [(1,3), (2,5), (8,9)] should return 5.

Consider a game of chess where there is a special queen which has the powers of a Queen as well as a Knight. For example, in the following arrangement, squares marked with 'x' are in the attack zone of the special queen and the ones marked 'O' are in the safe zone:

x O O x O O x
O x x x x x O
O x x x x x O
x x x Q x x x
O x x x x x O
O x x x x x O
x O O x O O x

Your task is to determine the number of ways in which you can place M such queens on an MxM chess board so that they are in equilibrium, i.e., they are placed such that no queen is in the attack zone of another. Assume M < 15. If you need coordinates to identify each square, you can assume that the top-left square is marked (1,1) and the bottom-right square is marked (M,M).
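As flagged above, here is one common rejection-sampling answer to the rand(7)-from-rand(5) question, shown as a hedged Python sketch (rand5 below is a stand-in for the given generator; the site supplies no reference code, so the names and structure are mine):

import random

def rand5():
    # Stand-in for the given generator: uniform integer in 0..4.
    return random.randint(0, 4)

def rand7():
    # Two rand5() calls give a uniform value in 0..24 (base-5 digits).
    # Rejecting 21..24 leaves 21 equally likely outcomes, which split
    # evenly into 7 groups of 3, so v % 7 is uniform over 0..6.
    while True:
        v = 5 * rand5() + rand5()
        if v < 21:
            return v % 7

The loop accepts with probability 21/25 on each pass, so it terminates after about 1.2 iterations on average.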
{"url":"http://www.careercup.com/","timestamp":"2014-04-18T00:40:32Z","content_type":null,"content_length":"39389","record_id":"<urn:uuid:7eec4364-2dbd-4dbe-8228-377856747e7a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
East Bernard Math Tutor ...I have seen the look regarding subjects from an elementary level to a collegiate level. Regardless of what subject I am working with someone on, I will strive to make sure the student understands. Here is a list of the subjects I've have taught or am capable of teaching: Math- Pre-Algebra ... 38 Subjects: including calculus, prealgebra, ACT Math, algebra 1 ...I also point out possible mistakes during explanations to help avoid them while doing homework. I have been teaching/tutoring Algebra 2 for over 25 years. I use special techniques and "cute memorable sayings" to help students remember certain algebraic skills. 6 Subjects: including algebra 1, algebra 2, geometry, precalculus ...Algebra and Geometry are my strong points. All my kids are gone for UT Austin so I can devote all my time to help you. Teaching is my passion. 12 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...In addition I taught at the following universities: San Antonio College, University of Maryland, University of Colorado, Auburn University at Montgomery, AL. In addition, I tutored the Air Force Academy football team in calculus. I have also tutored high school students in various locations. 11 Subjects: including trigonometry, statistics, algebra 1, algebra 2 ...I've used it all throughout college as well to solve much more difficult problems. I understand this subject completely. I went to Texas A&M University in College Station for my Bachelor's of Science in Mechanical Engineering, and graduated in May 2012. 9 Subjects: including algebra 1, algebra 2, calculus, geometry
{"url":"http://www.purplemath.com/East_Bernard_Math_tutors.php","timestamp":"2014-04-20T13:40:43Z","content_type":null,"content_length":"23525","record_id":"<urn:uuid:323ab8d2-aed0-466f-ad3e-b505357735c8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by nika Total # Posts: 26 A thin stick of mass M = 3.1 kg and length L = 1.6 m is hinged at the top. A piece of clay, mass m = 0.7 kg and velocity V = 4.0 m/s hits the stick a distance x = 1.20 m from the hinge and sticks to it. 1. What is the angular velocity of the stick immediately after the collisi... solid mensuration The base of a right prism is a rhombus whose sides are 12 inch and whose longer diagonal is 15 inch. Find its volume. yeah, thank's I got it.. sorry for bothering you, you gave me a great help. something is still incorect at the first part.. A power plant burns coal and generates an average of 600.0 Megawatts (MW) of electrical power while discharging 900.00 MW as waste heat. Find the total electrical energy generated by the plant in a 30-day period. Coals vary a lot in their energy content. The energy released by... Thank you very much :) A power plant burns coal and generates an average of 630.0 MW of electrical power while discharging 1058.40 MW as waste heat. Find the total electrical energy generated by the plant in a 30-day The graph on the left shows the short-run marginal cost curve for a typical firm selling in a perfectly competitive industry. The graph on the right shows current industry demand and supply. a. What is the marginal revenue that this perfectly competitive firm will earn on its ... ABC Drilling has debt with a market value of $200,000 and a yield of 9%. The firm's equity has a market value of $300,000, its earnings are growing at a 5% rate, and its tax rate is 40%. A similar firm with no debt has a cost of equity of 12%. Under the MM extension with g... ABC Drilling has debt with a market value of $200,000 and a yield of 9%. The firm's equity has a market value of $300,000, its earnings are growing at a 5% rate, and its tax rate is 40%. A similar firm with no debt has a cost of equity of 12%. Under the MM extension with g... If the area of the grass, before removing the innermost track lanes, is approximately 9615 square meters, how many square kilometers of grass are there? Hi, I'm doing the same assignment. I got to the part where you added the reactions and got the K3 = 31. Then, I got stuck I don't understand why you're doing the 4.00/7.75 and then stating that 1 + (4.00/7.75) = 4.5? isn't that 1.52 instead of 4.5? Really need help on figuring out the formula for Iron Complex. The question was: 4.0 g of ferrous ammonium sulphate, FeS04(NH4)2*SO4*6H20, is used. The oxalate is in excess, calculate the theoretical yield of the iron complex. I got the moles of ferrous ammonium sulfate but I ... what is the volume of a sphere that has the same diameter and height of a cylinder that is 24 cubic feet in volume? How many cubic units are in the volume of a cylinder whose height is four units and whose radius is five units? What is the surface area of a cube whose volume is 27 cubic inches? what is the volume of a sphere that has the same diameter and height of a cylinder that is 24 cubic feet in volume? you determine that 1 ounce of the peanut butter container contains 115 calories and 9 grams of fat. What percent of the calories come from fat? Algebra 1 algebra 1A 1734 is greater than 175 cuz 1734 has 4 place values-- ones tens hundreds nd thousands and 175 has 3 place values-- ones tens nd hundreds. how is heat measured in a calorimeter? wold crisis what direction of rotation generates and angle with a measure of -90 degrees? of 120 degrees? what portion of a complete rotation is -90 degrees? 120 degrees? 
How do health care facilities use electronic medical records Introduction to Health Care What did you see as being the greatest challenge for the case worker in this simulation? What other challenges do caseworkers face when working with placement of the elderly in long-term care facilities? Do you see any alternatives to Mrs. W s long-term care situation? If...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=nika","timestamp":"2014-04-21T14:05:12Z","content_type":null,"content_length":"11307","record_id":"<urn:uuid:c6852ad5-93aa-48aa-91c1-7cfc56cda0b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
financial mathematics-recurrence relations
November 8th 2012, 07:14 AM #1
Oct 2012, Sri Lanka
financial mathematics-recurrence relations
A person has inherited a surplus grain mountain of 30000 tonnes held in a warehouse. Each year 5% of the grain is eaten by mice. The person is obliged to add N tonnes each year. Find the maximum N such that the mountain will decrease in size.
This is how I have understood the problem:
Initial amount = 30000 tonnes.
Each year a 5% loss implies the remaining amount is 95%.
N tonnes is added each year.
So I can form the recurrence relation as x_n = 0.95 x_{n-1} + N.
But how can I find the maximum of N?
November 8th 2012, 11:13 PM #2
MHF Contributor, Sep 2012
Re: financial mathematics-recurrence relations
Hey chath. If something is decreasing then eventually x_n will be zero for some value of positive n. Can you re-write this equation in explicit form and find the condition for N such that it equals zero for any value of fixed n? I took this course a very long time ago so if you have formulas for the course please share them.
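A sketch of the algebra (my own working, not from the thread): the mountain shrinks in a given year exactly when

x_n < x_{n-1}   <=>   0.95 x_{n-1} + N < x_{n-1}   <=>   N < 0.05 x_{n-1}.

For the first year this requires N < 0.05 * 30000 = 1500. If that holds, then writing the recurrence around its fixed point gives x_n - 20N = 0.95^n (30000 - 20N) > 0, so the mountain keeps decreasing towards the equilibrium 20N in every later year as well. At N = 1500 the pile sits unchanged at 30000 tonnes, and for larger N it grows, so 1500 tonnes is the critical value usually quoted as the maximum.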
{"url":"http://mathhelpforum.com/advanced-math-topics/207042-financial-mathematics-recurrence-relations.html","timestamp":"2014-04-16T17:50:47Z","content_type":null,"content_length":"32549","record_id":"<urn:uuid:77f40441-248b-47c3-8895-4b6a00c8c1fd>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] How to do symmetry detection?
iCy-fLaME icy.flame.gm@gmail.... Wed Jan 20 09:20:10 CST 2010
I have some signals in mirror pairs in a 1D/2D array, and I am trying to identify the symmetry axis. A simplified example of the signal pair can look like this:
[0, 0, 0, 0, 2, 3, 4, 0, 0, 0, 4, 3, 2, 0]
The ideal output in this case will probably be:
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
As long as the symmetry point has the largest value, it will be fine. There can be multiple pairs of signals in the array, and the length of separation and duration of the signal can vary from pair to pair. The overall length of the array is about 1k points. The output array should reflect the level of likeness between the two sides of each candidate symmetry point.
I tried doing a loop as follows:
############ Begin ############
from numpy import array
from numpy import zeros
from numpy import arange

data = array([0, 0, 0, 0, 2, 3, 4, 0, 0, 0, 4, 3, 2, 0])
length = len(data)
result = zeros(length)

left = arange(length)       # index marking the end of the left portion
left[0] = 0
right = arange(length) + 1  # index marking the beginning of the right-hand portion
right[-1] = length - 1

for i in range(length):
    # Default values are zero, so the non-overlapping region contributes
    # zero after the multiplication.
    l_part = zeros(length)
    r_part = zeros(length)
    l_part[:left[i]] = data[:left[i]][::-1]       # take the left-hand side and mirror it
    r_part[:length - right[i]] = data[right[i]:]  # take the right-hand side
    result[i] = sum(l_part * r_part) / length     # product and sum (integral) give the similarity metric
    print l_part
    print r_part
    print "===============================", result[i]

print result
############ END ############
But it is rather slow for a 1000x1000 2D array, anyone got any suggestion for a more elegant solution? Thanks in advance!
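One vectorized alternative (offered here only as a sketch, not a reply from the original thread): convolving the signal with itself is the same as correlating it with its own mirror image, so a mirror-symmetric pattern centred at index c produces a peak at index 2c of the full convolution. A minimal NumPy sketch:

import numpy as np

data = np.array([0, 0, 0, 0, 2, 3, 4, 0, 0, 0, 4, 3, 2, 0], dtype=float)

# conv(x, x)[k] = sum_i x[i] * x[k - i], which is largest when the samples
# pair up symmetrically about position k / 2.
c = np.convolve(data, data, mode='full')   # length 2*len(data) - 1
k = int(np.argmax(c))                      # index of the best mirror match
centre = k / 2.0                           # symmetry axis, may fall between samples
print(centre)                              # 8.0 for the example above

With several mirror pairs, each pair contributes its own local maximum in c, so peak-picking on c (rather than a single argmax) recovers multiple symmetry axes; for large 1D or 2D inputs the convolution can be done with scipy.signal.fftconvolve.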
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-January/023898.html","timestamp":"2014-04-17T14:54:36Z","content_type":null,"content_length":"4254","record_id":"<urn:uuid:922e9c64-151d-4878-bb05-fb852c515d60>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
6.2.4 Example Problem Next: 6.2.5 Performance and Results Up: Convectively-Dominated Flows and Previous: 6.2.3 Parallel Issues As a sample problem, the onset and growth of the Kelvin-Helmholtz instability was studied. This instability arises when the interface between two fluids in shear motion is perturbed, and for this problem the body forces, grid was used. Vortices form along the interface and interact before being lost to numerical diffusion. By processing the output from the nCUBE-1, a videotape of the evolution of the instability was produced. This sample problem demonstrates that the FCT technique is able to track the physical instability without introducing numerical instability. Figure 6.2: Development of the Kelvin-Helmholtz instability at the interface of two fluids in shear motion. Guy Robinson Wed Mar 1 10:19:35 EST 1995
{"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node93.html","timestamp":"2014-04-19T12:04:30Z","content_type":null,"content_length":"3147","record_id":"<urn:uuid:7e5200b0-cd73-4139-84c6-98e1f00197e2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2012/548
Efficient Modular NIZK Arguments from Shift and Product
Prastudy Fauzi and Helger Lipmaa and Bingsheng Zhang
Abstract: Very few general techniques are known for constructing succinct and computationally efficient NIZK arguments for non-trivial languages. Groth proposed product and permutation arguments, and then used them in a modular way to construct a succinct Circuit-SAT argument. Lipmaa improved the complexity of Groth's basic arguments, while Chaabouni, Lipmaa, and Zhang (FC 2012) used them to construct the first constant-length NIZK range argument. Since Groth's and Lipmaa's basic arguments have quadratic prover's computation, so do all the resulting modular arguments. We continue the study of modular NIZK arguments, by proposing a significantly more efficient version of the product argument and a novel shift argument. Based on these two arguments, we significantly speed up the range argument from FC 2012, obtaining the first range proof with constant length and subquadratic (in the logarithm of the range length) prover's computation. We also propose efficient arguments for three $\mathbf{NP}$-complete languages, set partition, subset sum and decision knapsack, with constant communication, $n^{1 + o (1)}$ prover's computation and linear verifier's computation.
Category / Keywords: cryptographic protocols / Decision knapsack, FFT, non-interactive zero knowledge, product argument, range argument, set partition, shift argument, subset sum
Date: received 19 Sep 2012, last revised 2 Jul 2013
Contact author: helger lipmaa at gmail com
Available format(s): PDF | BibTeX Citation
Note: Last version has improved readability, comparison with some recent work, and includes one more concrete direct argument for an NP-complete language
Version: 20130702:141755 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2012/548/20130702:141755","timestamp":"2014-04-21T15:24:45Z","content_type":null,"content_length":"3219","record_id":"<urn:uuid:a31eaed4-e66f-400d-842b-a24d29aea89c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Septic spline solutions of sixth-order boundary value problems. (English) Zbl 1138.65062 Summary: A septic spline is used for the numerical solution of the sixth-order linear, special case boundary value problem. End conditions for the definition of the septic spline are derived, consistent with the sixth-order boundary value problem. The algorithm developed approximates the solution and their higher-order derivatives. The method is also proved to be second-order convergent. Three examples are considered for the numerical illustrations of the method developed. The method developed in this paper is also compared with that developed by M. El-Gamel, J. R. Cannon, J. Latour, and A. I. Zayed, [Math. Comput. 73, No. 247, 1325–1343 (2003; Zbl 1054.65085)], as well and is observed to be better. 65L10 Boundary value problems for ODE (numerical methods) 34B05 Linear boundary value problems for ODE 65L20 Stability and convergence of numerical methods for ODE
{"url":"http://zbmath.org/?q=an:1138.65062","timestamp":"2014-04-20T10:54:02Z","content_type":null,"content_length":"21402","record_id":"<urn:uuid:4a5b7f89-9cd9-4233-a66d-8292c3adbfdd>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Faculty and Staff Gabriel B. Ayine, Ph.D. B.S., University of Cape Coast; M.Phil., University of Ghana; Ph.D., Howard University Professor, Mathematics Allison Bell B.S., Mathematics & Secondary Education, University of Maryland, College Park; M.A., Mathematics Education, University of Maryland, College Park Instructor, Mathematics Darrin Berkley, Ed.D. B.A., Colgate University; M.Ed., Millersville University; Ed.D. Morgan State University Associate Professor, Mathematics Andrew Brown B.S.,Mathematics Education (Secondary), University of New Hampshire; M.S. Applied and Computational Mathematics, Johns Hopkins University Instructor, Mathematics Andrew Bulleri Professor Emeritus (Retired), Mathematics Guy G. Bunyard B.S., Stanford University; M.A., California State University, Long Beach, California Acting Associate Dean/Associate Division Chair of Mathematics/Professor, Mathematics John C. Esenwa B.S., University of Nigeria; M.B.A., University of Lagos, M. Engr., University of Maryland College Park Professor, Mathematics Emily Francis B.A., Mathematics, Smith College, Northampton, MA; M.S., Pastoral Counseling, Loyola University; M.A., Mathematics, The George Washington University Instructor, Mathematics Greta Holtackers A.A., Liberal Arts, Rochester Community & Technical College;B.A., Mathematics & Secondary Education, Luther College, Iowa; M.S., Mathematics, University of Iowa Associate Professor, Mathematics Sunhee Kim, Ph.D. B.S., Sogang University, Seoul, Korea, Cum Laude; M.S., Sogang, Seoul, Korea; Ph.D., University of Maryland, College Park Associate Professor, Mathematics Frederic Lang, Ph.D. B.A., Drake University; Ph.D, MIT Associate Professor, Mathematics Jason Lee, Ph.D. B.S., Mathematics, California Polytechnic State University, San Luis Obispo; Ph.D. Mathematics, University of California, San Diego Associate Professor, Mathematics Matt Lochman, Ph.D. B.S., Mathematics and Physics, (minor Literature), Lebanon Valley College, Anneville, PA Ph.D., Mathematics, Texas Tech University, Lubbock, TX Acting Department Chair of Mathematics/Instructor, Mathematics Mike Long, Ed.D. B.S., Mathematics & Secondary Education, Salisbury State University; M.S., Mathematics, West Virginia University; Ed.D., Mathematics Education, West Virginia University Associate Professor, Mathematics Paula J. Mikowicz B.A., State University of New York at Albany; M.S., Johns Hopkins University Acting Department Chair of Mathematics/Associate Professor, Mathematics Jennifer L. Penniman B.S., M.Ed., University of Maryland Acting Department Chair of Mathematics/Professor, Mathematics Bernadette B. Sandruck, Ed.D. B.S., Towson University; M.S., Johns Hopkins University; Ed.D., Univeristy of Maryland College Park Acting Dean/Division Chair of Mathematics/Academic Coordinator for the Laurel College Center/Professor, Mathematics Consuelo F. Stewart B.S., Towson University; M.S., Johns Hopkins University Professor, Mathematics Loretta FitzGerald Tokoly, Ph.D. M.S., M.A., Villanova University; Ph.D., Temple University Associate Professor, Mathematics Caroline Torcaso, Ph.D. B.S., Georgetown University; M.A. & Ph.D., University of Maryland College Park Associate Professor, Mathematics Lamont Vaughan B.S., Mathematics, Morgan State University, Baltimore, MD; M.S., Mathematics, East Tennessee State University, Johnson City, TN Instructor, Mathematics Kristy Vernille, Ph.D. 
B.S., SUNY Fredonia, Fredonia, NY; M.Ed., University of Maryland, College Park, MD; Ph.D., University of Maryland, College Park, MD Assistant Professor, Mathematics Rehana Yusaf B.S., Applied Mathematics; Napier University, Edinburgh, Scotland, UK M.S., Mathematics Education; Towson University, Towson, MD Instructor, Mathematics Professional Technical Catherine LaFerriere B.S., Carnegie-Mellon University Mathematics Specialist
{"url":"http://www.howardcc.edu/academics/academic_divisions/mathematics/faculty/index.html","timestamp":"2014-04-20T21:07:36Z","content_type":null,"content_length":"25488","record_id":"<urn:uuid:57729491-d412-44c6-a612-71bb1985df47>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Torque net
A 4 kg mass is connected by a light cord to a 3 kg mass on a smooth surface. The cord passes over a frictionless axle that has a moment of inertia of 0.5 kg*m^2 and a radius of 0.3 m. Assuming that the cord does not slip around the axle, a) what is the acceleration of the two masses? b) What are the tensions in the cord connecting to the two masses?
I began this problem using the principle that if, as the first question shows, there is acceleration, then the net torque is not equal to zero. In such a situation, the following equation is recommended: moment of inertia * angular acceleration = sum of the torques. The equation for torque is force * distance from the axis. Apparently in such a problem, the net-torque equation above must be supplemented with net-force equations for each of the two masses. The net-force equation is: mass * linear acceleration = sum of forces. By substituting the unknowns from the net-force equations into the net-torque equation, one should be able to find the acceleration and then the two tensions.
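A worked sketch under the usual reading of this setup (the 4 kg mass hangs vertically, the 3 kg mass sits on the horizontal frictionless surface, and g = 9.8 m/s^2; the post does not state the geometry explicitly, so treat that as an assumption):

hanging mass:      m1*g - T1 = m1*a
mass on surface:   T2 = m2*a
axle (pulley):     (T1 - T2)*R = I*alpha = I*a/R

Eliminating the tensions gives
a = m1*g / (m1 + m2 + I/R^2) = (4)(9.8) / (4 + 3 + 0.5/0.3^2) ≈ 3.1 m/s^2,
and then
T1 = m1*(g - a) ≈ 4*(9.8 - 3.1) ≈ 26.7 N,   T2 = m2*a ≈ 9.4 N.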
{"url":"http://www.physicsforums.com/showthread.php?t=17443","timestamp":"2014-04-19T09:48:45Z","content_type":null,"content_length":"19937","record_id":"<urn:uuid:5d1902e3-1aaa-44f5-bb7f-bb2f5b9e9757>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
A Hybrid Human Dynamics Model on Analyzing Hotspots in Social Networks Discrete Dynamics in Nature and Society Volume 2012 (2012), Article ID 678286, 13 pages Research Article A Hybrid Human Dynamics Model on Analyzing Hotspots in Social Networks ^1Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications (BUPT), Beijing, China ^2Chongqing Engineering Laboratory of Internet and Information Security, Chongqing University of Posts and Telecommunications (CQUPT), Room 4029, No. 2 Chongwen Road, Nanan District, Chongqing 400065, China ^3School of Computer and Communication Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland Received 5 July 2012; Accepted 12 September 2012 Academic Editor: Garyfalos Papaschinopoulos Copyright © 2012 Yunpeng Xiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The increasing development of social networks provides a unique source for analyzing human dynamics in the modern age. In this paper, we analyze the top-one Internet forum in China (“Tianya Club”) and identify the statistical properties of hotspots, which can promptly reflect the crowd events in people's real-life. Empirical observations indicate that the interhotspot distribution follows a power law. To further understand the mechanism of such dynamic phenomena, we propose a hybrid human dynamic model that combines “memory” of individual and “interaction” among people. To build a rich simulation and evaluate this hybrid model, we apply three different network datasets (i.e., WS network, BA network, and Karate-Club). Our simulation results are consistent with the empirical studies, which indicate that the model can provide a good understanding of the dynamic mechanism of crowd events using such social networking data. We additionally analyze the sensitivity of model parameters and find the optimal model settings. 1. Introduction In China, Internet forum is still one type of the most popular social networking sites for various information propagation and discussion among people. For example, Tianya Club (http://www.tianya.cn/ , simply noted as “TianYa” in this paper), the 12th most visited website in China, is China’s biggest Internet forum that provides almost of social networking services like BBS, blog, microblog, and photo sharing, and so forth, (http://en.wikipedia.org/wiki/Tianya_Club). Up to April 2012, Tianya has more than 68 millions registered users and more than one million online users at most of the times. Such forums have tons of information, not only from the perspective of individual behaviors but also in terms of human interactions. Therefore, such social networking sites provide great potential to analyze human behaviors for understanding human dynamics. In traditional studies, human behavior is usually assumed as random activity and thus be modeled as Poisson processes [1]. This assumption leads to an exponential interevent time distribution of human activities. However, a lot of recent empirical studies have already proved that this assumption is wrong. For example, Barabási first discovered that the time-interval between sending an email and receiving a reply follows a power-law distribution, with heavy tails [2]. 
Afterwards, a couple of similar statistical properties in human dynamics are empirically discovered by using various datasets, including web browsing [3], short message sending [4], microblogging [5], netizens’ behaviors on the forum [6], movie watching [7], and so on. To understand the intrinsic factor of such heavy-tailed property, Barabási and Vázquez first propose a priority queuing model and successfully explain the phenomenon of human behavior based on task queue [2, 8, 9]. Subsequently, researchers design various human dynamic models for different scenarios, such as aging model [10], optimization mechanism [11], influence of deadline [12], interest-driven model [13, 14], interest and social identity codriven model [5], and relative clock model [15]. These models are largely based on the individual level but not on the crowd level. Recently, there are some emerging crowd-level empirical studies and models, which are largely focusing on network emergencies or terrorist incidents. For example, Johnson et al. propose a self-organizing system that dynamically evolves through the continual coalescence or fragmentation of its constituent groups [16–18]. Galam and Moscovici design group decision-making models by using the percolation theory [19–21]. These researches study the social behavior in the network, considering both “individual behavior” and “interaction between individuals.” However, these works are mainly focused on the social psychology methods and are based on a complete graph (“everyone interacts with everyone”), ignoring the limited structural features of social network. In this paper, we focus on analyzing the real-life social networking datasets. Clauset et al. draw a substitution and competition model for terrorism [22]. Vazquez et al. first propose a memory model to analyze human dynamics [23]. The memory models consider that humans have perceptions of their past activities, and therefore humans accelerate or reduce their activity rates according to their memories. Such memory models provide a good understanding of the possible dynamic mechanism in various scenarios, for example, interevent time statistics of email and letter communications [23], terrorism attacks [24]. In addition to memory, interaction and influence from the neighborhood are used to complement the memory model [24, 25]. For example, Zhu et al. propose a model that combines the role of individual social conformity and self-affirmation psychology for analyzing the possible dynamic mechanism of terrorism attacks [24]. Nevertheless, these human dynamic studies on memory and interaction have several limitations, for example: (1) the interactions are only based on a small group (e.g., 2 agents or 4 neighbors in the 2D lattice network), which are not real-life social network with arbitrary relationships, (2) different nodes have different social identity and social influence in real-life-which is not reflected in these models, (3) in these models, the impact of neighbor nodes is ignored while the status of such neighbor nodes is opposite to the node itself. In this paper, we study the combined impact of memory and node influence (i.e., interactions) of human dynamics in arbitrary social networks. We analyze the human behaviors in China’s largest Internet forum (“Tianya Club”), including activities like posting a new topic and adding comments to existing topics. A hotspot in Tianya is the topic with burst comments. We can consider a hotspot as a crowd event in social network. 
Based on the Tianya datasets, experimental evidence shows that different types of intertime distributions of hotspot topics follow power-law. In addition, we propose a human dynamic model that combines individual habit (i.e., “memory”) and node influence (i.e., “interaction”). While testing with several well-known network datasets, the simulation results of our model are consistent with the empirical observations, which implies that our model offers a suitable explanation of the power-law properties in human dynamics. This paper is organized as follows: after the introduction in Section 1, Section 2 describes the Tianya data; Section 3 shows the empirical results; Section 4 presents our hybrid model on the combination of memory and interaction; Section 5 compares the results of simulation and the empirical ones. Section 6 provides more discussions and Section 7 concludes this paper. 2. Data Description Empirical data are collected from TianYa, which is one of the largest online social networking sites in China. Up to the time of writing, there are 68,360,259 users (with unique IDs) registered in TianYa. The news and topics in Tianya cover all aspects, and therefore it provides a rich dataset to reflect Chinese people’s activities and dynamics. The Tianya data has been studied in [26], analyzing the intercomment time distribution using a simple growing-network based model. In this paper, we study a rich and hybrid model considering both memory and interaction. We analyze interhotspot time distribution between outbreak topics, and evaluate the model with 9-month data from three representative topic sessions, namely, “Social-Life” (Session-A), “Tittle-Tattle” (Session-B), and “Entertainment-Gossip” (Session-C). Tables 1 and 2 present the data summary and the data format, respectively. It is worth noting that startTime in Table 2 means the release time of an initial topic, and endTime means the release time of the last reply/comment of this topic. 3. Empirical Results This section provides the empirical studies on the Tianya topics. Each topic has an initial post and many following replies (see the topic format in Table 2). We sort topics in a descending order according to the discussion properties (e.g., the number of replies, the number views, or the sum of both) and identify the top topics as the hotspots that have maximum discussions, somehow reflecting the crowd events in real-life. Afterwards, we resort these hotspot topics according to their startTime or endTime, and analyze such interhotspot time distribution of the outbreak topics. In detail, we have three sessions of 9-month data (see Table 1) and extract hotspots using three ordering choices (i.e., reply number, view number, or the sum of both). We consider five cases of top topics, that is, . In addition, the interhotspot time can be calculated by either startTime or endTime of each outbreak topic. Therefore, in total, we have 90 experiments (3 sessions×3 orders×5 top- 2 times). Due to the lack of space, we could not provide all 90 experimental plots but a subset in Figure 1: 3 sessions, 3 orders, using (top 1000 hotspots), using endTime, that is, . We observe that the intertime distributions of outbreak topic follow power-law and span more than two orders of magnitude, with exponent change from −1.2644 to −1.5797. Similar span of the interval is found in [7]. This range is smaller than the one in [2–5]. 
A possible reason could be that we use hour as the time unit in calculating intertime distribution for our 9 months dataset, while the time unit in these related works is smaller either minute or second. Instead of showing the detailed power-law distributions for other 81 (i.e., 90–9 in Figure 1) experiments, for example, different , varying ordering strategies. Figure 2 shows the relationship between hotspot number and power exponent in various experimental settings. In Figure 2(a), we observe that the power exponent increases for all sessions when the number of hotspot grows. The heavy tail phenomenon tends to disappear when becomes larger. As an extreme case, there will be a hotspot in every hour if is huge. Of course, such extreme case is meaningless as the topic is not real hotspot if the topic’s ranking order significantly lags behind (e.g., ). Actually, the interhotspot time distributions of outbreak topics in all 3 sessions can lose power-law characteristics gradually when . In addition, we analyze the difference using hotspot topics’ endTime or startTime in Figure 2(b), and the different ordering strategies (i.e., via reply number, view number, or the sum of both) in Figure 2(c). We observe that using endTime can bring larger exponential compared to using startTime, this is because we clean those topics from the top topics whose release time are before 2011.1.1. And there is no significant difference between different ordering strategies which tell us that when a hotspot topic attracts more replies, more views are attracted also. 4. A Hybrid Model To understand the intrinsic mechanism of online forum’s outbreak topics (hotspots) that are corresponding to human dynamics in terms of crowd events in social networking, we propose a rich model in this section. This model considers both the inner habit of an individual (called “memory”) and the interaction with social environment (“interaction”); therefore, the model is hybrid. From the memory aspect, a person who was active/inactive in contributing to topics in the past almost keeps the same style in future topics. From the interaction aspect, the behavior of each individual can be affected by the surroundings around us (i.e., neighboring nodes of an individual). Furthermore, people have different social roles in the community, and hereinafter their impacts are distinct from each other. Therefore, we study a hybrid model that combines the impact of memory and interaction in this paper. The ket points of the model are as follows.(1)Time-discretization: the time step is discretized in terms of (e.g., one hour in analyzing our Tianya datasets). Therefore, the status of crowd events (e.g., hotspots in our experiment) evolves/changes with timestamp (using “hour” as the unit).(2)Social-network: people (e.g., registered users in Tianya) can be formalized as an undirected graph in terms of a social network. , stands for a node set. Each individual user in the network is expressed as a node in , the number of nodes is . An undirected edge set represents social relationships in the crowd, that is, stands for the adjacent node set of node is the degree of node is status of node at timestamp . Each node has two possible states, that is, , which represents whether node concerns the current event in the crowd or not. It is worth noting the order of ignore and focus is not important in our model, as it does not affect the model’s behavioral characteristics in the simulation. 
We only require ignore ≠ focus, and in our simulation we apply ignore < focus as a regular scenario.(3)Crowd-events (hotspots): the emergence of a crowd event is as follows: firstly, a user posts a new topic; afterward, more and more people start to participant in this topic (e.g., users reply and view a topic in Tianya); after the participant number satisfies certain conditions, this topic becomes an outbreak topic (i.e., hotspot). As the time grows, there likely appear new events/topics that incrementally become more important and more interesting. In this way, new crowd events (i.e., hotspots in Tianya) show up and the old ones disappear gradually. To model the intrinsic human dynamics in the crowd events, we consider both the external factor of interaction mechanism from surroundings in the network and the internal factor of individual’s memory mechanism. Interaction is to model the external factor that stands for the influence from neighboring nodes in the network. Considering different nodes with distinctive impacts, the impact of node is denoted as . For a node , the influence of node to at timestamp is as follows: where is the status of at timestamp . In such case, the total influence to node from its neighbors is as follows: As , we have both and . Therefore, the distance between node and the status of its neighboring nodes at time step is defined as As and , we have , where is the maximum length of status distance. Then, the possibility of status change of node resulted from external cause is As , we have . There are two extreme cases: (1) if all statuses of neighbors of are consistent with node at timestamp , then at timestamp , the probability of node status changes due to external cause is 0; and on the contrary, (2) if all neighbors of node have opposite status to node at timestamp , at timestamp , the probability of status changes . Another factor can result in change of node status is of course the internal cause. Considering the node habit, a person was actively involved in a topic is very likely to participate in future discussion; and a person who was not stick to his opinion in the past also has high chance to change his position in future events/topics. Assume at timestamp , our model records the status sequence of node in the past time steps. We calculate the total number of status change in two consecutive steps as , and therefore the number of status does not change is . In such case, the possibility of status change caused by internal cause at timestamp can be defined as Now, in terms of combing external cause and internal cause together, we can have the probability of status change of node at timestamp : The coefficient and stand for the crowd acceptance of internal cause and external cause, respectively, and the coefficient and are the individual acceptance of internal cause and external cause, respectively. In addition, we have two more experimental parameters: and . represents the ratio of the number of people staying in the status of focus in the whole crowd, and . is for recording the current number of crowd events, which should be consistent with the number of hotspots in our previous empirical studies in Tianya’s datasets. 5. Simulation To validate our hybrid model, we build simulations using three well-known social network datasets, that is, WS network [27] (100 nodes, 4 initial neighbors, 0.1 rewiring probability), BA network [28] (100 nodes, 4 links by new node), and Zachary's Karate Club (KC) network [29]. 
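Since the update rule is described above only in words here, the following Python sketch is just one plausible reading of it: neighbour influence weighted by node degree, an external change probability given by the weighted share of disagreeing neighbours, an internal change probability given by the fraction of status flips in the recorded history, and a linear combination weighted by the crowd and individual acceptance coefficients. The functional forms, parameter names, and numerical values are assumptions for illustration, not the authors' code.

import random
import networkx as nx

T = 5                      # memory length (assumed value; the paper fixes a small constant, citing [30, 31])
A_INT, A_EXT = 0.6, 0.7    # crowd acceptance of internal / external cause (assumed values)
C_INT, C_EXT = 1.0, 1.0    # individual acceptance of internal / external cause (assumed values)
IGNORE, FOCUS = -1, 1

G = nx.watts_strogatz_graph(100, 4, 0.1)            # WS network as in Section 5
status = {v: random.choice([IGNORE, FOCUS]) for v in G}
history = {v: [status[v]] for v in G}               # recorded statuses per node

def step():
    new_status = {}
    for v in G:
        # External cause: degree-weighted share of neighbours holding the opposite status.
        weights = {u: G.degree(u) for u in G[v]}
        disagree = sum(w for u, w in weights.items() if status[u] != status[v])
        p_ext = disagree / sum(weights.values()) if weights else 0.0
        # Internal cause: fraction of flips in the last T recorded statuses.
        h = history[v][-T:]
        flips = sum(1 for a, b in zip(h, h[1:]) if a != b)
        p_int = flips / (len(h) - 1) if len(h) > 1 else 0.0
        # Combined change probability (assumed linear combination, capped at 1).
        p = min(1.0, A_INT * C_INT * p_int + A_EXT * C_EXT * p_ext)
        new_status[v] = -status[v] if random.random() < p else status[v]
    for v in G:
        status[v] = new_status[v]
        history[v].append(status[v])

Running step() repeatedly and logging when the number of focus nodes crosses a chosen threshold yields a synthetic hotspot sequence whose inter-event times can then be compared with the empirical distributions.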
For the value of , we set ignore as −1 and focus as 1. The initial status of each node is randomly assigned according to the uniform distribution. The human internal memory is not unlimited, and we set like the literature [30, 31]. As mentioned in Section 4, our model has six main parameters, that is, , , , , , and . For each simulation, we fix the setting of the 6 parameters, run the model 50 times with different initial assignment, and calculate the average results from 50 independent runs. In Section 3, we discussed that there are 90 empirical experiments in total. By using the three different social networks (i.e., WS, BA, KC), we have simulations in total to verify our model. Due to the lack of paper space, we first only pick the empirical results of Session-A (“People-Life”) in the Tianya dataset, namely, Figures 1(a) and 1(c) as the target of the simulations. The simulation results using these three social networks are shown in Figure 3. Here crowd event counter is set to 1000, which is consistent with the hotspot number () of the empirical results in Figure 1. Figure 3 verifies that our model simulations are consistent with the empirical results. More detailed discussions will be provided in Section 6, and now we focus on analyse sensitivity of important parameters like and . From our comprehensive simulation results (including Figure 3), we observe that the value of , remains stable when the simulation reaches a good performance with regards to the empirical results. Table 3 shows the most suitable parametric settings of , for the three session data in TianYa, corresponding to the three social networks. From our simulation, we also observe that and are the main factors that influence the value of power exponent (), and the effect of parameter to is not significant. Figures 4 and 5 show the sensitivity of and , respectively. We fix the values of other parameters, for example, , , , and . From Figure 4, we observe that while changed from 0.3 to 1.1, varies from 1.2295 to 1.5065; and from Figure 5, we observe that while changed from 0.56 to 0.70, changes from 0.956 to 1.7382. This scope covers the range of in the empirical experiments. When , the interhotspot time distribution starts to lose the power-law characteristics. With the increase of , the frequency of hotspots decreases. This is consistent with the ground-truth cases: in a large social network (many registered IDs in TianYa), there might be many people that are interested in a hotspot in a given session (e.g., Session-A on people life), but it is still impossible to attract all users to be interested in this hotspot topic. Furthermore, the hotspots counter increases from 1000 to 2000, 3000, 4000, and 5000, corresponding to the empirical studies in Section 3. As shown in Figure 6, we observe that the effect of on is consistent with the empirical experiment in Figure 2(c). When , the interhotspot time distribution that was generated by our model will lose power-law characteristics gradually. By fixing and adjusting other parameters, we can achieve all reasonable that covers the range of power exponent in the empirical experiment. 6. Discussion From the simulation in Section 5, we identify the stability of the model coefficients , values for a specific network (i.e., WS, BA, KC) in a specific TianYa topic session (i.e., A-Social-Life,B-Tittle-Tattle C-Entertainment-Gossip). For example, in the simulation of A-social-life using the network, we observe the stable value of parameter , from Figure 3(g)–3(i). 
Such stability means that a social network has stable internal acceptances and external acceptances to individuals in general. Furthermore, the values of , can vary with regards to different topic sessions and different social networks, which indicates that different crowd/networks present different internal and external acceptance. Regarding parameter of the internal effects, it has little influence on the hotspots. This means that the influence of some individual variation to the hotspots/crowd events is quite low and even can be ignored. On the contrary, the external impact has significant influence on the power exponent . It indicates that individual people are highly affected by external context (e.g., the behaviors of soundings in the network). Therefore, in a social network, the stronger influence of surroundings and social propaganda, the higher chance of crowd event can outbreak (e.g., appear of a new hotspot in an Internet forum like TianYa). Nevertheless, Figure 4 shows that when parameter grows to a certain level, tends to be stable, which indicates that the influence of environment and interaction between nodes is not infinite. The individual internal factor also takes effects, for example, some users never join the discussion of a hotspot topic. Based on the comparison between our model simulations and the empirical results, the outbreak of hotspots in social networks and the interhotspot time distribution highly depend on the two aspects, that is, the internal memory mechanisms and the external interaction in the networks. In particular, the mutual influence between nodes is the main factor to the final power exponent (). Suitable parameters are required in the model simulation, and too high or too low parameters can result in impractical simulations that are far away from real-life crowd events in social networks. For a large topic session in online social forum like TianYa, the value of indicates how many users are interested in a hotspot. If is too large, almost every user focuses on the same hotspot, and this situation is quite irregular; and on the other hand, if is too small, such hotspots are not real interesting crowd events. Therefore, for simulations on a selected topic session in TianYa, we find suitable settings of four model parameters (i.e., , , , and ) and then fix them and study the sensitivity of . This is because for a specific session with a chosen network, the network internal/external acceptance and the individual internal/external acceptance are stable. 7. Conclusions Social networking sites like Internet forum (e.g., TianYa in China) provide a unique way for rapid information prorogation and discussion. Research on the laws underlying user behaviors on such social networking sites means a lot in understanding human dynamics and in turn can provide better services. Traditional studies on such human dynamics are largely limited to a simple model, either trivial memory mechanism or simple interactions with only a small set of neighbors (say 2–4). In this paper, we first provide a hybrid and rich model, that is, able to combine the impact of individual memory and interactions among users in a large social network. We try not to simply plug the two parts together, but build a stronger model with a sound mathematical integration of various useful parameters during our modeling and simulation. We designed a hybrid model that can fully integrate both sides. 
Moreover, when we discuss “interactions”, a set of structural-level network features and node influences upon the social network are deeply considered. The reason is that nodes of social network can have different habit and social influence. We simulated our hybrid model with three well-known networking datasets and evaluated it with large-scale top-one Internet forum in China. We focused on analyzing hotspots (i.e., outbreak topics) in different topic sessions. Based on the comparison between our simulation and empirical studies, we observe similar power-law interhotspot time distribution using different networks. Therefore, our model can offer an understanding of the dynamic mechanism of crowd events in social networks. In this paper, the node influence is measured by node degree. To further improve our hybrid model, we will apply advanced metrics in quantifying node influence. For example, we will consider link analysis algorithms like PageRank to model node diffusion. In addition, we will model the evolution of social networks and study its effects on hotspots, to better understand human dynamics in an evolving social networking context. This work is supported by NSFC(60905025, 90924029, 61074128). Joint Construction Science and Technology Research Program of the Chongqing Municipal Education Committee under Grants of KJ110529. 1. F. A. Haight, Handbook of the Poisson distribution, John Wiley & Sons, New York, 1967. 2. A. L. Barabási, “The origin of bursts and heavy tails in human dynamics,” Nature, vol. 435, no. 7039, pp. 207–211, 2005. View at Publisher · View at Google Scholar · View at Scopus 3. Z. Dezso, E. Almaas, A. Lukacs, B. Racz, I. Szakadat, and A.-L. Barábasi, “Dynamics of information access on the web,” Phys Rev E, vol. E73, Article ID 066132, 2006. 4. W. Hong, X. P. Han, T. Zhou, and B. H. Wang, “Heavy-tailed statistics in short-Message communication,” Chinese Physics Letters, vol. 26, no. 2, Article ID 028902, 2009. View at Publisher · View at Google Scholar · View at Scopus 5. Q. Yan, L. Yi, and L. Wu, “Human dynamic model co-driven by interest and social identity in the microblog community,” Physica A, vol. 391, pp. 1540–1545, 2012. 6. J. Yu, Y. Hu, M. Yu, and Z. Di, “Analyzing netizens' view and reply behaviors on the forum,” Physica A, vol. 389, no. 16, pp. 3267–3273, 2010. View at Publisher · View at Google Scholar · View at 7. T. Zhou, H. Kiet, B. Kim, et al., “Role of activity in human dynamics,” Europhysics Letters, vol. 82, no. 2, pp. 28002–28006, 2008. View at Publisher · View at Google Scholar 8. A. Vázquez, J. G. Oliveira, Z. Dezsö, K. I. Goh, I. Kondor, and A. L. Barabási, “Modeling bursts and heavy tails in human dynamics,” Physical Review E, vol. 73, no. 3, Article ID 036127, pp. 1–19, 2006. View at Publisher · View at Google Scholar · View at Scopus 9. A. Vázquez, “Exact results for the Barabási model of human dynamics,” Physical Review Letters, vol. 95, no. 24, Article ID 248701, pp. 1–4, 2005. View at Publisher · View at Google Scholar · View at Scopus 10. P. Blanchard and M. O. Hongler, “Modeling human activity in the spirit of Barábasi's queueing systems,” Physical Review E, vol. 75, no. 2, Article ID 026102, 2007. View at Publisher · View at Google Scholar · View at Scopus 11. L. Dall'Asta, M. Marsili, and P. Pin, “Optimization in task-completion networks,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2008, no. 2, Article ID P02003, 2008. View at Publisher · View at Google Scholar · View at Scopus 12. Z. Deng, N. Zhang, and J. 
Li, “Inuence of deadline on human dynamic model,” in Dynamic Model of Human Behavior, J. L. Guo, T. Zhou, N. Zhang, and J. M. Li, Eds., p. 2934, Shanghai System Science Publishing House, Hong Kong, 2008. 13. X. P. Han, T. Zhou, and B. H. Wang, “Modeling human dynamics with adaptive interest,” New Journal of Physics, vol. 10, Article ID 073010, 2008. View at Publisher · View at Google Scholar · View at Scopus 14. M. Shang, G. Chen, S. Dai, et al., “Interest-driven model for human dynamics,” Chinese Physics Letters, vol. 27, no. 4, Article ID 048701, 2010. View at Publisher · View at Google Scholar 15. T. Zhou, Z. Zhao, Z. Yang, et al., “Relative clock verifies endogenous bursts of human dynamics,” Europhysics Letters, vol. 97, p. 18006, 2012. 16. N. Johnson, M. Spagat, J. Restrepo, et al., “From old wars to new wars and global terrorism,” arXiv:physics/0506213, 2005. 17. N. Johnson, M. Spagat, J. A. Restrepo, et al., “Universal patterns underlying ongoing wars and terrorism,” arXiv physics: 0605035v1, 2006. 18. N. Johnson, Complexity in Humuan Conflict, Springer, New York, NY, USA, 2008. 19. S. Galam and S. Moscovici, “Towards a theory of collective phenomena: consensus and attitude changes in groups,” European Journal of Social Psychology, vol. 21, no. 1, pp. 49–74, 1991. View at Publisher · View at Google Scholar 20. S. Galam, “Rational group decision making: a random field Ising model at $\text{t}=0$,” Physica A, vol. 238, no. 1–4, pp. 66–80, 1997. View at Scopus 21. S. Galam, “Sociophysics: a review of galam models,” International Journal of Modern Physics C, vol. 19, no. 3, pp. 409–440, 2008. View at Publisher · View at Google Scholar · View at Scopus 22. A. Clauset, L. Heger, and M. Young, “Substitution and competition in the israelpalestine conflict,” Chinese Physics Letters, vol. 27, p. 068902, 2010. 23. A. Vazquez, “Impact of memory on human dynamics,” Physica A, vol. 373, pp. 747–752, 2007. View at Publisher · View at Google Scholar · View at Scopus 24. J. F. Zhu, X. P. Han, and B. H. Wang, “Statistical property and model for the inter-event time of terrorism attacks,” Chinese Physics Letters, vol. 27, no. 6, Article ID 068902, 2010. View at Publisher · View at Google Scholar · View at Scopus 25. J. G. Oliveira and A. Vazquez, “Impact of interactions on human dynamics,” Physica A, vol. 388, no. 2-3, pp. 187–192, 2009. View at Publisher · View at Google Scholar · View at Scopus 26. Y. Wu, C. Zhou, M. Chen, J. Xiao, and J. Kurths, “Human comment dynamics in on-line social systems,” Physica A, vol. 389, no. 24, pp. 5832–5837, 2010. View at Publisher · View at Google Scholar · View at Scopus 27. D. J. Watts and S. H. Strogatz, “Collective dynamics of 'small-world9 networks,” Nature, vol. 393, no. 6684, pp. 440–442, 1998. View at Scopus 28. A. Barábasi and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509–512, 1999. View at Publisher · View at Google Scholar 29. W. Zachary, “An information ow model for conict and fission in small groups,” Journal of Anthropological Research, vol. R 33, no. 4, pp. 452–473, 1977. 30. G. A. Miller, “The magical number seven, plus or minus two: some limits on our capacity for processing information,” Psychological Review, vol. 63, p. 8197, 1956. 31. A. Baddeleyi, “Developments in the concept of working memory,” Psychological Bulletin, vol. 101, p. 353, 1994.
Guadalupe, AZ Prealgebra Tutor
Find a Guadalupe, AZ Prealgebra Tutor
...I have learned in teaching that there is something out there that works for every student. Those students only need to be given the opportunity to find what lights the fire within them. I am highly skilled in writing, reading and phonics.
18 Subjects: including prealgebra, reading, English, writing
...The SAT is a test that scares a lot of students, and for good reason! I break it down for my students on a section by section basis, and show them which areas to focus on for studying. I make sure they are familiar with all types of problems and I also show them tricks for handling some of the more difficult problems.
28 Subjects: including prealgebra, English, algebra 1, algebra 2
...I love my job and my students. My day is never complete until I've turned on a light bulb in someone's brain. It brings a sense of confidence to the student and a feeling of accomplishment to my own life.
16 Subjects: including prealgebra, reading, writing, geometry
...Since I am enrolled in school, my schedule changes every 14 weeks, as this is the length of the quarter at my school. Because of this, my availability will change periodically. As for my non-school-related background, I served 4.5 years in the United States Air Force where I was honorably discharged for medical reasons.
2 Subjects: including prealgebra, algebra 1
...I am a teacher by profession, I teach Adobe software classes. I have tutored Russian in the past, and would like to offer that as a service here on WyzAnt. Thank you.
24 Subjects: including prealgebra, reading, ESL/ESOL, English
Multiple-Choice Reborn

The first thing I noticed when inspecting the top of the test scoring math model (Table 25) was that the variation within the central cell field has a different reference point (external to the data) than the variation between scores in the marginal cell column (internal to the data). Also, the variation within the central cell field (the variance) is harvested in two ways: within rows (scores) and within columns (items).

The mean sum of squared deviations (MSS), or variance, of a column or a row has a fixed range (Chart 64 and Chart 65). The maximum occurs when the marks are 1/2 right and 1/2 wrong (1/2 x 1/2 = 1/4 or 25%). [The variance also equals p * q, or (Right * Wrong)/(Right + Wrong)^2.] The contribution each mark makes to the variance is distributed along this gentle curve. The variable data are fit to a rigid model.

I obtained the overall shape of these two variances by folding Chart 64 and Chart 65 into Photo 64-65. The result is a dome or a depression above or below the upper floor of the model. The peak of the dome (maximum variance) is reached when a student functioning at 50% marks an item with 50% difficulty. Standardized test makers try to maximize this feature of the model. The larger the mismatch between item difficulty and student ability, the lower down the dome the variance sits. CAT attempts to adjust item difficulty to match student preparedness.

Chart 66 is a direct overhead view of the dome. Elevation lines have been added at 5% intervals from zero to 25%. I then fitted the data from Nursing124 to the roof of the model. The data only spread over one quadrant of the model. The data could completely cover the dome in an ideal situation in which every combination of score and difficulty occurred. The total test variance within items is then the sum of the variances within all items (item variances ranging from 0.04 to 0.25, summing to 2.96). The total test variance within scores is the sum of the variances of all scores (score variances ranging from 0.05 to 0.24, summing to 3.33). See Table 8.

The math model adjusts to fit the data in the marginal cell student score column (variance between scores). The reference point is not a static feature of the model but the average test score (16.77 or 80%). The plot of the variance between scores can be attached to the right side of the math model (Chart 67). The variance within columns and rows spreads across the static frame of the model. The model then adjusts so that the variance between scores (rows) matches the spread of the variance within rows.

I can see another interpretation of the model variance if the dome is inverted as a depression. Read as a flight instrument on a blimp, with pitch, roll, and yaw corresponding to within item (2.96), within score (3.31), and between scores (4.10), the blimp would have its nose up, be rolled to the side, and have the rudder hard over.

- - - - - - - - - - - - - - - - - - - - - Free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle):

The mathematical model (Table 25) in the previous post relates all the parts of a traditional item analysis, including the observed score distribution, test reproducibility, and the precision of a score. Factors that influence test scores can be detected and measured by the variation between and within selected columns and rows. The model is only aware of variation within and between mark patterns (deviations from the mean).
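The fixed 0-to-0.25 range of a mark pattern's variance noted above is easy to verify numerically: for a column (or row) of 0/1 marks the variance is p x q, which peaks at 0.25 when exactly half the marks are right. A minimal sketch with made-up marks (not the Nursing124 data):

# Illustrative only: the variance of a 0/1 mark pattern equals p*q and peaks at 0.25.
def mark_variance(marks):
    """Variance of a list of 0/1 marks: p*q, where p is the fraction marked right."""
    p = sum(marks) / len(marks)
    return p * (1.0 - p)

print(mark_variance([1, 1, 1, 1, 0, 0, 0, 0]))  # 0.25   (half right: the peak of the dome)
print(mark_variance([1, 1, 1, 1, 1, 1, 1, 0]))  # ~0.109 (easy item: low on the dome)
print(mark_variance([1, 0, 0, 0, 0, 0, 0, 0]))  # ~0.109 (hard item: low on the dome)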
The variance (the sum of squared deviations from the mean divided by the number summed or the mean sum of squares or MSS) is the property of the data that relates the mark patterns to the normal distribution. This permits generating useful descriptive and predictive insights. The deviation of each mark from the mean is obtained by subtracting the mean from the value of the mark (Table 25a). The squared deviation value is then elevated to the upper floor of the model (Step 1, Table 25b). [Un-squared deviations from the mean would add up to zero.] The model’s operation gains meaning by relating the score and item mark distributions to a normal distribution. It compares observed data to what is expected from chance alone or as I like to call it, the know-nothing mean. The expected know-nothing mean based on 0-wrong and 1-right with 4-option items (popular on standardized tests) is centered on 25%, 6 right out of 24 questions (Chart 62). This is from luck on test day alone (students only need to mark each item; they do not need to read the test) on a traditional multiple-choice test (TMC). The mean moves to 50% if student ability and item difficulty have equal value. It moves to 80% if students are functioning near the mastery level as seen in the Nursing124 data. The math model will adjust to fit these data. The know-nothing mean, with Knowledge and Judgment Scoring (KJS) and the partial credit Rasch model (PCRM), is at 50% for a high quality student or 25% for a low quality student (same as TMC). Scoring is 0-wrong, 1-have yet to learn, and 2-right. A high quality student accurately, honestly, and fairly reports what is trusted to be useful in further instruction and learning. There are few, if any, wrong marks. A low quality student performs the same on both methods of scoring by marking an answer on all items. Students adjust the test to fit their preparation. The know-nothing mean for Knowledge Factor (KF) is above 75% (near the mastery level in the Nursing124 data, violet). KF weights knowledge and judgment as 1:3, rather than 1:1 (KJS) or 1:0 (TMC). High-risk examinees do not guess. Test takers are given the same opportunity as teachers and test makers to produce accurate, honest, and fair test scores. The distribution of scores about the know-nothing mean are the same for TMC (green, Chart 63) and KJS (red, Chart 63). An unprepared student can expect, on average, a score of 25% on a TMC test with 4-option items. Some 2/3 of the time the score will fall within +/- 1 standard deviation of 25%. As a rule of thumb, the standard deviation (SD) on a classroom test tends to be about 10%. The best an unprepared student can hope for is a score over 35% (25 + 10) about 1/6 of the time ((1 - 2/3)/2). The know-nothing mean (50%) for KJS and the PCRM is very different from TMC (25%) for low quality students. The observed operational mean at the mastery level (above 80%, violet) is nearly the same for high quality students electing either method of scoring. High quality students have the option of selecting items they can trust they can answer correctly. There are few to no wrong marks. [Totally unprepared high quality students could elect to not mark any item for a score of 50%.] The mark patterns on the lower floor of the mathematical model have different meanings based on the scoring method. TMC delivers a score that only ranks the student’s performance on the test. 
KJS and the PCR deliver an assessment of what a student knows or can do that can be trusted as the basis for further learning and instruction. Quantity (number right) and quality (portion marked that are right) are not linked. Any score below 50% indicates the student has not developed a sense of judgment needed to learn and report at higher levels of thinking. The score and item mark patterns are fed into the upper floor of the mathematical model as the squared deviation from the mean (d^2). [A positive deviation of 3 and a negative deviation of 3 both yield a squared deviation of 9.] The next step is to make sense of (to visualize, to relate) the distributions of the variance (MSS) from columns and rows. - - - - - - - - - - - - - - - - - - - - - Free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle): The seven statistics reviewed in previous posts need to be related to the underlying mathematics. Traditional multiple-choice (TMC) data analysis has been expressed entirely with charts and the Excel spreadsheet VESEngine. I will need a TMC math model to compare TMC with the Rasch model IRT that is the dominant method of data analysis for standardized tests. A mathematical model contains the relationships and variables listed in the charts and tables. This post applies the advice in learning discussed in the previous post. It starts with the observed variables. The mathematical model then summarizes the relationships in the seven statistics. The model contains two levels (Table 25). The first floor level contains the observed mark patterns. The second floor level contains the squared deviations from the score and item means; the variation in the mark patterns. The squared values are then averaged to produce the variance. [Variance = Mean sum of squares = MSS] 1. Count The right marks are counted for each student and each item (question). TMC: 0-wrong, 1-right captures quantity only. Knowledge and Judgment Scoring (KJS) and the partial credit Rash model (PCRM) capture quantity and quality: 0-wrong, 1-have yet to learn this, 2-right. Hall JR Count = SUM(right marks) = 20 Item 12 Count = SUM(right marks) = 21 2. Mean (Average) The sum is divided by the number of counts. (N students, 22 and n items, 21) The SUM of scores / N = 16.77; 16.77/n = 0.80 = 80% The SUM of items / n = 17.57; 17.57/N = 0.80 = 80% 3. Variance The variation within any column or row is harvested as the deviation between the marks in a student (row) or item (column) mark pattern, or between student scores, with respect to the mean value. The squared deviations are summed and averaged as the variance on the top level of the mathematical model (Table 25). Variance = SUM(Deviations^2)/(N or n) = SUM of Squares/(N or n) = Mean SS = MSS 4. Standard Deviation The variation within a score, item, or probability distribution expressed as a normal value that +/- the mean includes 2/3 of a normal, bell-shaped, distribution: 1 Standard Deviation = 1SD. SD = Square Root of Variance or MSS = SQRT(MSS) = SQRT(4.08) = 2.02 For small classroom tests the (N-1) SD = SQRT(4.28) = 2.07 marks The variation in student scores and the distribution of student scores are now expressed on the same normal scale. 5. Test Reliability The ratio of the true variance to the score variance estimates the test reliability: the Kuder-Richardson 20 (KR20). 
The score (marginal column) variance – the error (summed from within Item columns) variance = the true variance.
KR 20 = ((score variance – error variance)/score variance) x n/(n - 1)
KR 20 = ((4.08 – 2.96)/4.08) x 21/20 = 0.29
This ratio is returned to the first floor of the model. An acceptable classroom test has a KR20 > 0.7. An acceptable standardized test has a KR20 > 0.9.

6. Traditional Standard Error of Measurement
The range of error in which 2/3 of the time your retest score may fall is the standard error of measurement (SEM). The traditional SEM is based on the average performance of your class: 16.77 +/- 1 SD (+/- 2.07 marks).
SEM = SQRT(1 - KR20) * SD = SQRT(1 - 0.29) * 2.07 = +/- 1.75 marks
On a test that is totally reliable (KR20 = 1), the SEM is zero. You can expect to get the same score on a retest.

7. Conditional Standard Error of Measurement
The range of error in which 2/3 of the time your retest score may fall based on the rank of your test score alone (conditional on one score rank) is the conditional standard error of measurement (CSEM). The estimate is based (conditional) on your test score rather than on the average class test score.
CSEM = SQRT((Variance within your Score) * n), where n is the number of questions; that is, CSEM = SQRT(MSS * n) = SQRT(SS)
CSEM = SQRT(0.15 * 21) = SQRT(3.15) = 1.80 marks
The average of the CSEM values (1.75) for all of your class (light green) also yields the test SEM. This confirms the calculation above under 6. Traditional Standard Error of Measurement for the test.

This mathematical model (Table 25) separates the flat display in the VESEngine into two distinct levels. The lower floor is on a normal scale. The upper floor isolates the variation within the marking patterns on the lower floor. The resulting variance provides insight into the extent to which the marking patterns could have occurred by luck on test day, and into the performance of teachers, students, questions, and the test makers. Limited predictions can also be made. Predictions are limited using traditional multiple-choice (TMC) as students have only two options: 0-wrong and 1-right. Quantity and quality are linked into a single ranking. Knowledge and Judgment Scoring (KJS) and the partial credit Rasch model (PCRM) separate quantity and quality: 0-wrong, 1-have yet to learn, and 2-right. Students are free to report what they know and can do accurately, honestly, and fairly.

- - - - - - - - - - - - - - - - - - - - - Free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle):

The best test is a test that permits you to accurately, honestly, and fairly report what you know and can do. You know how to question, to get answers, and to verify. You know what you know and what you have yet to learn. This operates at two levels of thinking. It is a myth that a forced choice multiple-choice test measures what you trust you know and can do.

At the beginning of any learning operation, you learn to repeat and to recall. Next you learn to relate the bits you can repeat and recall. By the end of a learning operation you have assembled a web of skills and relationships. You start at lower levels of thinking and progress to higher levels of thinking. Practice takes you from slow conscious operations to fast automatic responses (multiplication or roller skating). It is a myth that learning primarily occurs only by responding to a teacher in a classroom.

Your attitude during learning and testing is important.
Your maturity is indicated by your ability to get interested in new topics or activities your teacher recommends (during the course). As a rule of thumb, a positive attitude is worth about one letter grade on a test. It is a myth that you can easily learn when you have a negative attitude. Your expectations are important. You tend to get what you expect. A nine year study with over 3000 students indicated that students tend to get the grade they expected at the time they enrolled in the class, based on their lack of information, misinformation, and attitude. It is a myth that you cannot do better than your preconceived grade. Learning and testing are one coordinated event when you can see the result of your practicing directly (target practice or skateboarding). This situation also occurs when you are directly tutored by a person or by a person’s software. It is a myth that you must always take a test separately from learning. Complex learning operations go though the same sequence of learning steps. The rule of three applies here. Read or practice from one source to get the basic terms or actions. Read or practice from a second set to add any additional terms or actions. Read or practice from a third set to test your understanding, your web of knowledge and skill relationships. It is a myth that you must always have another person test your learning (but another person can be very helpful). That other person is usually a teacher who cannot teach and test each pupil or student individually. The teacher also selects what is to be learned rather than letting you make the choice. The teacher also selects the test you will take. It is a myth that your teachers have the qualities needed to introduce you to the range of skills and knowledge required for an honest, self-supporting Teaching usually takes place during scheduled time periods. In extreme situations, only what is learned in those scheduled time periods will be scored. This is one basis for assessing teacher effectiveness. It is a myth that the primary goal of traditional schools is student learning and development. Traditional multiple-choice is defective. It was crippled when the option of no response, “do not know”, was eliminated when adapted from its use with animal experiments to make classroom scoring easier. It is a myth that you should not have this option to permit accurate, honest, and fair assessment. Traditional multiple-choice promotes selecting the best right answer: using the lowest levels of thinking. The minimum requirement is making a mark for each question. It is a myth that such a score measures what you know or can do. The score ranks you on the test. The average test score describes the test, not you. (Table 15 or Download) Your score may rank you above or below average. It is a myth that you will always be safe with an above average score (passing). The normal distribution of multiple-choice test scores is based on your luck on test day. The normal distribution is desired for classes in schools designed for failure. It is a myth that a class should not have an average score of 90%. Luck on test day will distribute 2/3 of your classmates’ multiple-choice scores within the bubble in the center of a normal distribution; that is one standard deviation (SD) from the average. 
(Table 15 or Download) [SD = SQRT(Variance) and the Variance = SUM(Deviation from the Average^2)/N = Mean Sum of Squares = MSS]

Your grade (cut score) is set by marking off the distribution of classmate scores in standard deviations: F (<-2); D (-2 to -1); C (-1 to +1); B (+1 to +2); A (>+2). Your raw score grade is the sum of what you know and can do, your luck on test day, and your set of classmates.

Raw scores can be adjusted by shifting their distribution, higher or lower, and by stretching (or shrinking) the distribution to get a distribution that “looks right”. It is a myth that your teacher can only select the right mix of questions to get a raw score distribution that “looks right”.

Some questions perform poorly. They can be deleted and a new, more accurate, scored distribution created. It is a myth that every question must be retained.

Discriminating questions are marked right only by high scoring classmates and marked wrong by low scoring classmates. (Table 15 or Download) It is a myth that all questions should be discriminating.

Discriminating questions produce your class raw score distribution. About 5 to 10 are needed to create the amount of error that yields a range of five letter grades. It is a myth that discriminating questions assess mastery.

The reliability (reproducibility, precision) of your raw score can be predicted, but not your final (adjusted) score. Test reliability (KR20) is based on the ratio of variation (the variance) between student scores (external column) and within question difficulty mark patterns (internal columns). (Table 15 or Download) This makes sense: the smaller the amount of error variance within the question difficulty internal columns, with respect to the variance between student scores in the external column, the greater the test reliability. Discriminating, difficult, questions spread out student scores more (yield higher variance) than they increase the error variance within the questions. If there were no error variance, a test would be totally reliable (KR20 = 1). It is a myth that a good informative test must maximize reliability.

The test reliability can help predict the average test score your class would get if it were to take another test over the same set of skills and knowledge. The Standard Error of Measurement (SEM) of your test is the range of error (from all of the above effects) for the average test score. (Table 15 or Download) The SD of the test and the test reliability are combined to obtain the SEM. The test reliability extracts a portion of the SD. If the test reliability were 1 (totally reliable), the SEM would be 0 (no error), and the class would be expected to get the same class test score on a retest.

And finally, what can you expect about the precision of your score and your retest score (provided you have not learned any more)? A retest is of critical importance to students needing to reach a high stakes cut score. If the SEM or CSEM ranges widely enough, you do not need to study. Just retake the test a couple of times and your luck on test day may get you a passing score. It is a myth that the probability of your getting a passing grade 2/3 of the time will ensure you get the passing grade if you need a second trial.

The Conditional [on your raw score] Standard Error of Measurement (CSEM) extracts the variance from only your mark pattern (Table 22). [CSEM = SQRT(Variance within your marks X the number of questions)] Your CSEM will be very small if you have a very high or low score. (A short computational sketch of KR20, SEM, and CSEM follows below.)
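To make the reliability and error statistics above concrete, here is a minimal sketch that computes KR20, SEM, and a per-student CSEM from a small matrix of 0/1 marks, following the formulas given in the earlier post (score variance, summed within-item variances, SEM = SQRT(1 - KR20) x SD, CSEM = SQRT(within-score variance x n)). The marks matrix is invented for illustration; it is not the Nursing124 data.

# Illustrative sketch of KR20, SEM, and CSEM from a 0/1 marks matrix (5 students x 6 items).
from math import sqrt

marks = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]
N = len(marks)        # number of students
n = len(marks[0])     # number of items

def variance(values):
    """Population variance: mean sum of squared deviations (MSS)."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

scores = [sum(row) for row in marks]                 # right counts per student
score_variance = variance(scores)                    # variance between scores
error_variance = sum(variance([row[j] for row in marks]) for j in range(n))  # summed within-item variances

kr20 = ((score_variance - error_variance) / score_variance) * n / (n - 1)
sd = sqrt(score_variance * N / (N - 1))              # small-sample (N-1) standard deviation
sem = sqrt(1 - kr20) * sd                            # traditional standard error of measurement

# Conditional SEM for each student: SQRT(variance within the student's own marks x n).
csem = [sqrt(variance(row) * n) for row in marks]

print(f"scores = {scores}, KR20 = {kr20:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")  # KR20 comes out near 0.84 here
print("CSEM per student:", [round(c, 2) for c in csem])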
This limits the prospects of a passing score by retaking a test without studying. Now to study, to change testing habits, or to trust to luck on test day, before a retest. Get a copy of the blueprint used in designing the test. A blueprint lists in detail what will be covered and the type of questions. Question each topic or skill. It is easier to answer questions other people have written if you have already created and answered your own questions. Use the advice in the first five paragraphs above and work up into higher level of thinking, meaning making (a web of relationships that makes sense to you and visualize, sketch, draw, every term). A change in testing habits may also be in order. Many students who do not “test well” are bright, fast memorizers, but lacking in meaningful relationships that make sense to themselves. They are still learning for someone else: the test and scanning each question for the “one right answer”. With meaningful relationships in mind you have the information in hand to answer a number of related questions. You are not limited to just matching what you recall to the question answers. [Mark out wrong answers and guess from the remaining answers.] And now for the “Hail Mary” approach. First, as a rule of thumb, your score on a test written by someone other than your teacher (a standardized test for example) will be one to two letter grades below your classroom test scores. If your failing test score is within 1 SEM of the cut score, you can expect a retest score within this range 2/3 of the time. The same prediction is made with your CSEM value that can range above and below the SEM value. If your failing test score is below 1 SEM or 1 CSEM from the cut score, you have no option other than to study. It is a myth that students passing a few points above the cut score will also pass on a retest. [Near passes are safe. Near failures are not.] Also please keep in mind that all of the math dealing with the variation between and within columns and rows (the variance) can be done on the student and question mark patterns with no knowledge of the test questions or the students. It is a myth that good statistical procedures can improve poor question or student performance. Teacher and psychometrician judgment on the other hand can do The standardized test paradox: A good blueprint to guide calibrated question selection for the test is the basis for low scores and a statistically reliable test. Good student preparation is the basis for high scores (mastery) and a statistically unreliable test (it cannot spread student scores out enough for the distribution to “look right”). The sciences, engineering, and manufacturing use statistics to reduce error to a minimum (low maintenance cars, aircraft, computers, and telephones). Only in traditional institutionalized education (schools designed for failure) is error intentionally introduced to create a score range that “looks right” for setting grades and ranking schools. This is all non-sense for schools designed for mastery (who advance students after they are prepared for the next steps). It is a myth (and an entrenched excuse for failure by the school) that student score distributions must fit a normal, bell-shaped, curve of error. Mastery schools are now being promoted as the burden of record keeping is easily computerized. The Internet makes mastery schools available everywhere and at anytime. This will have a marked change in traditional schooling in the next few years. 
This change can be seen in the “flipped” classroom (a modern version of assigned [deep] reading before class discussion). It is a myth that the “flipped” classroom is something new. Current educational software removes the time lag, in the question-answer-and-verify learning cycle, introduced by grouping students in classes, and then extended with standardized tests. Learning and assessment are again joined to promote mastery of assigned skills and knowledge. Students advance when they are ready to succeed at the next levels. It is a myth that “formative assessments” are actually functional when test results are not available in an operational time frame (seconds to a few days). Standardized tests will continue to rank students and schools, as the tests mature to certifying mastery for students who learn and excel anywhere and at anytime. It is a myth that current substantive standardized tests (that do not let students report what they trust they know or can do) can “pin point exactly what a student knows and needs to learn”. - - - - - - - - - - - - - - - - - - - - - Free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle): The bet in the title of Catherine Gewertz’s article caught my attention: “One District’s Common-Core Bet: Results Are In”. As I read, I realized that the betting that takes place in traditional multiple-choice (TMC) was being given arbitrary valuations to justify the difference between a test score and a classroom observation. If the two agreed, that was good. If they did not agree, the standardized test score was dismissed. TMC gives us the choice of a right mark and several wrong marks. Each is traditionally given a value of 1 or 0. This simplification, carried forward from paper and pencil days, hides the true value and the meanings that can be assigned to each mark. The value and meaning of each mark changes with the degree of completion of the test and the ability of the student. Consider a test with one right answer and three wrong answers. This is now a popular number for standardized tests. Consider a TMC test of 100 questions. The starting score is 25, on average. Every student knows this. Just mark an answer to each question. Look at the test and change a few marks, that you can trust you know, to right. With good luck on test day, get a score high enough to pass the test. If a student marked 60 correctly, the final score is 60. But the quality of this passing score is also 60%. Part of that 60% represents what a student knows and can do, and part is luck on test day. A passing score can be obtained by a student who knows or can do less than half of what the test is assessing; a quality below 50%. This is traditionally acceptable in the classroom. [TMC ignores quality. A right mark on a test with a score of 100 has the same value, but not the same meaning as a right mark on a test with a score of 50.] A wrong mark can also be assigned different meanings. As a rule of thumb (based on the analysis of variance, ANOVA; a time honored method of data reduction), if fewer than five students mark a wrong answer to a question, the marks on the question can be ignored. If fewer that five students make the same wrong mark, the marks on that option can be ignored. This is why Power Up Plus (PUP) does not report statistics on wrong marks, but only on right marks. 
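As an aside, the arithmetic behind the earlier point in this post (that a passing TMC score can be earned while knowing less than half of the material) is easy to check. A minimal sketch, assuming a 100-question test with 4-option items, a student who truly knows K answers, and blind 1-in-4 guessing on the rest:

# Expected TMC score when a student knows K of 100 four-option items
# and guesses blindly on the remainder (luck on test day, on average).
def expected_tmc_score(known, total=100, options=4):
    return known + (total - known) / options

for known in (0, 40, 47, 60):
    print(f"knows {known:3d} -> expected score {expected_tmc_score(known):.2f}")

# A student who knows only 47 of 100 items already expects a score of about 60,
# so a passing 60 can be reached with less than half of the material actually known.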
There is no need to clutter up the reports with potentially interesting, but useless and meaningless information. PUP does include a fitness statistics not found in any other item analysis report that I have examined. This statistic shows how well the test fits student preparation. Students prepare for tests; but test makers also prepare for the abilities of test takers. The fitness statistic estimates the score a student is expected to get if, on average, as many wrong options are eliminated as are non-functional on the test, before guessing; with NO KNOWLEDGE of the right answer. This is the best guess score. It is always higher than the design score of 25. The estimate ranged from 36% to 53%, with a mean of 44%, on the Nursing124 data. Half of these students were self-correcting scholars. The test was then a checklist of how they were expected to perform. With the above in mind, we can understand how a single wrong mark can be devastating to a test score. But a single wrong mark, not shared by the rest of the class can be taken seriously or ignored (just as a right mark, on a difficult question, by a low scoring student). To make sense of TMC test results requires both a matrix of student marks and a distribution of marks for each question (Break Out Overview). Evaluating only an individual student report gives you no idea whither a student missed a survey question that every student was expected to answer correctly or a question that the class failed to understand. Are we dealing with a misconception? Or a lack of performance related to different levels of thinking in class and on the test; or related to the limits of rote memory to match an answer option to a question? [“It’s the test-taking.”] When does a right mark also mean a right answer or just luck on test day? [“This guy scored advanced only because he had a lucky day.”] Mikel Robinson, as an individual, failed the test by 1 point. Mikel Robinson, as one student in a group of students, may not have failed. [We don’t really know.] His score just fell on the low side of a statistical range (the conditional standard error of measurement; see a previous post on CSEM). Within this range, it is not possible to differentiate one student’s performance from another’s using current statistical methods and a TMC test design (students are not asked if they can use the question to report what they can trust they actually know or can do). We can say, that if he retook the test, the probability of passing may be as high as 50%, or more, depending upon the reliability and other characteristics of the test. [And the probability of those who passed by 1 point, of then failing by one point on a repeat of the test, would be the same.] These problems are minimized with accurate, honest, and fair Knowledge and Judgment Scoring (KJS). You can know when a right mark is a right answer using KJS or the partial credit Rasch model IRT scoring. You can know the extent of a student’s development: the quality score. And, perhaps more important, is that your students can trust what they know and can do too; during the test, as well as after the test. This is the foundation on which to build further long lasting learning. This is student empowerment. Welcome to the KJS Group: Please register at mailto:KJSgroup@nine-patch.com. Include something about yourself and your interest in student empowerment (your name, school, classroom environment, LinkedIn, Facebook, email, phone, and etc.). 
Free anonymous download, Power Up Plus (PUP), version 5.22 containing both TMC and KJS: PUP522xlsm.zip, 606 KB or PUP522xls.zip, 1,099 KB. - - - - - - - - - - - - - - - - - - - - - Other free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle): FOR SALE: raschmodelaudit.blogspot.com/2013/10/knowledge-and-judgment-scoring-kjs-for.html The article by Sarah D. Sparks, http://www.edweek.org/ew/articles/2013/09/11/03mindset_ep.h33.html?r=545317799, starts with a powerful concept: “It’s one thing to say all students can learn, but making them believe it – and do it – can require a 180-degree shift in student’s and teacher’s sense of themselves and of one another.” The General Studies Remedial Biology course I taught faced this challenge. The course was scheduled at night for three consecutive hours in a 120-seat lecture room. I refused to teach the course until the following arrangements were made: • The entire text was presented by cable online reading assignments in each dormitory room and by off-campus phone service. • One hour was scheduled for my lecture, after any student presentations related to the scheduled topic. • One hour was scheduled for written assessment every other week. • One hour was scheduled for 10-minute student oral reports based on library research, actual research, or projects. Students requested the assessment period be placed in the first hour instead of the second hour, after the first few semesters. This turned the course into a seminar for which students needed to prepare on their own before class. Only Knowledge and Judgment Scoring (KJS) was used the first few semesters, with ready acceptance by the class. The policy of bussing in students from out of the Northwest Missouri region brought in protestors, “Why do we have to know what we know, when everywhere else on campus, we just mark, and the teacher tells us how many right marks we made?” Offering both methods of scoring, traditional multiple-choice (TMC) and KJS, on the same test solved that problem. Students could select the method they felt most comfortable with; that matched their preparation the best. The student presentations and reports were excellent models for the rest of the class. They showed the interest in the subject and the quality of work these students were doing to the entire class. KJS provided the information needed to guide passive pupils alone the path to becoming self-correcting scholars. As a generality, that path took the shape of a backward J. First they made fewer wrong marks, next they studied more, and finally they switched from memorizing non-sense to making sense of each assignment. Over time they learned they were now spending less time studying (reviewing everything) and getting better grades by making sense as they learned; they could actually build new learning on what they could trust they had learned. They could monitor their progress by checking their quality score and their quantity score. Get quality up, interest and motivation increase, and quantity follows. The tradition of students comparing their score with that of the rest of the class to see if they were safe, or needed to study more, or had a higher grade than expected when enrolling in the course (and could take a vacation), was strong in the fall semester with the distraction of social groups, football and homecoming. The results of fall and spring semesters were always different. 
There was one dismal failure. With the excellent monitoring of their progress in the course, the idea was advanced to recognize class scholars. These students had, in one combination or another of test scores and presentations, earned a class score that could not be changed by any further assessment. They had demonstrated their ability to make sense of biological literature (the main goal of the course, which, hopefully, would serve them well the rest of their lives, as well as the habit of making sense of assignments in their other courses). The next semester all went as planned. Most continued in the class and some conducted study sessions for other students. The following semester witnessed an outbreak of cheating.

Today, Power Up Plus (PUP) gets its name from the original cheat checker added to Power Up. Cheating became manageable by the simple rule that any answer sheet that failed to pass the cheat checker would receive a score of zero. I offered to help any student who wished to protest the rule to the student disciplinary committee. No student ever protested. [Cheating was handled in-class as any use of the university rules was not honored by the administration. You must catch individual students in the act. Computer cheat checkers had the same status as red light cameras do now. If more than one student is caught, the problem is with the instructor, not with the student. We cancelled the class scholar idea.]

We need effective tools to manage student “growth mindset”. The tools must be easy to use by students and faculty. Students need to see how other students succeed, to be comfortable in taking part, and to be able to easily follow their progress when starting at the low end of academic preparation of knowledge, skills, and judgment (quality, the use of all levels of thinking). A common thread runs through successful student empowerment programs: effective instruction is based on what students actually know, can do, and want to do or take part in. This requires frequent, appropriate assessment at each academic level, as in these recent examples:
• Elementary School http://smartblogs.com/education/2013/09/25/closing-the-achievement-gap-in-a-high-poverty-school/
• Middle School http://www.edweek.org/ew/articles/2013/09/11/03common_ep.h33.html
• High School http://www.edweek.org/ew/articles/2013/09/11/03mindset_ep.h33.html?r=545317799
• College and wherever multiple-choice is used for accurate, honest, and fair assessments http://www.nine-patch.com

Welcome to the KJS Group: Please register at mailto:KJSgroup@nine-patch.com. Include something about yourself and your interest in student empowerment (your name, school, classroom environment, LinkedIn, Facebook, email, phone, etc.). Free anonymous download, Power Up Plus (PUP), version 5.22 containing both TMC and KJS: PUP522xlsm.zip, 606 KB or PUP522xls.zip, 1,099 KB.

- - - - - - - - - - - - - - - - - - - - - Other free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle): FOR SALE: raschmodelaudit.blogspot.com/2013/10/knowledge-and-judgment-scoring-kjs-for.html

Two alternative forms of multiple-choice (AMC) to traditional multiple-choice (TMC) developed from independent sources. Geoff Masters from Melbourne, Australia, is credited as the developer of the partial credit Rasch model (PCRM), a form of Item Response Theory (IRT) analysis, in 1982 (Bond and Fox).
It allows students to report what they know (2 points), what they do not know (1 point), and wrong answer (0 points). It never became popular on classroom or standardized tests. The second form of AMC was developed at NWMSU. It started as net yield scoring (NYS) on both essay and multiple-choice. I needed a way to reduce the amount of reading required in scoring “blue book” essays. A 20-point essay started with 10 points. A point was added for acceptable, related, information bits. A point was subtracted for unacceptable, incorrect, unrelated information bits. An information bit was basically a short sentence with correct grammar and spelling. It could also be a relationship expressed as a diagram, sketch, or drawing. This reduced the amount of reading by more than a 1/3 and improved student performance. Snow, filler, and fluff had no value but distracted a student from doing good work. Students needed to exercise good judgment in selecting what they wrote. This was no longer the case of their writing, and the teacher searching, for something that could earn them sufficient credit to pass the course; a lower level of thinking operation that is very common in high schools and colleges. NYS required students to use good judgment as well as be knowledgeable and be skilled. This same idea was applied to computer scored multiple-choice tests with interesting results. When both TMC and NYS were offered on the same test, most students selected TMC on their first test. This is what they were familiar with. Over 90% of students elected NYS on their third test. Students also agreed that knowledge and judgment should have equal value. By 1981 NYS was renamed knowledge and judgment scoring (KJS) to reflect what was being assessed: good judgment and a right answer (2 points), good judgment to report what has yet to be learned with no mark (1 point), and poor judgment, a wrong mark (0 points). KJS requires and rewards students for using higher levels of thinking. The quality score is independent from the right count score. A struggling student with a test score of 60% may have also earned a quality score of 90%. With TMC there is no way of knowing what a student with a score of 60% actually knows (when a right mark is a right answer or just luck on test day). With KJS we can know what this student knows with the same degree of accuracy as a student earning a 90% score on a TMC test. More importantly, this reinforces the student’s sense of self-judgment and encourages effort to do better. It is the equivalent to the note a teacher marks on a special paragraph in an essay, “Good KJS provides the information needed to tell student and teacher what has been learned and what has yet to be learned in an easy to use report. Often a trail of bi-weekly test scores would follow a backward J. Reducing guessing by itself did not increase the test score but moved the score to a higher quality. Low quality students needed to change study habits. Low scoring high quality students needed to study more. Learning by questioning and establishing relationships provided students the basis for answering question correctly that they had never seen before. They then stumbled onto what I meant by, “Make things meaningful (full of relationships) if your learning is to be really useful, empowering and easy to remember”. They did not have to review everything for each cumulative test. The most interesting finding was that when students mastered meaning-making, they found themselves doing better in all of their courses. 
This is what inspired me to continue to promote Knowledge and Judgment Scoring. Students learn best when they are in charge. The quality score was the “feel good” score for struggling students until their improving development produced the high scores earned by successful self-correcting students. Welcome to the KJS Group: Please register at mailto:KJSgroup@nine-patch.com. Include something about yourself and your interest in student empowerment (your name, school, classroom environment, LinkedIn, Facebook, email, phone, and etc.). Free anonymous download, Power Up Plus (PUP), version 5.22 containing both TMC and KJS: PUP522xlsm.zip, 606 KB or PUP522xls.zip, 1,099 KB. - - - - - - - - - - - - - - - - - - - - - Other free software to help you and your students experience and understand how to break out of traditional-multiple choice (TMC) and into Knowledge and Judgment Scoring (KJS) (tricycle to bicycle):
Cost & Management Accounting (MGT-402) LESSON# 9

Economic order quantity refers to the number (quantity) of units ordered in a single purchase so that the combined costs of ordering and carrying are at the minimum level. In other words, the quantity ordered at one time should be the one that minimizes the total of (i) the cost of placing orders and receiving the goods, and (ii) the cost of storing the goods as well as the interest on the capital invested.

The economic order quantity can be determined by the following simple formula:

EOQ = SQRT((2 x RU x OC) / (UC x CC%))

EOQ = Economic Order Quantity.
RU = Annually Required Units.
OC = Ordering Cost for one order.
UC = Inventory Unit Cost.
CC% = Carrying Cost as a percentage of Unit Cost.

This formula is based on three assumptions:
1. Price will remain constant throughout the year and no quantity discount is involved.
2. The pattern of consumption, the variable ordering cost per order, and the variable inventory carrying charge per unit per annum will remain the same throughout, and
3. EOQ will be delivered each time the stock balance is just reduced to nil.

The Economic Order Quantity can be determined by applying the formula as under. Suppose the annual consumption is 80,000 units, the cost to place one order is Rs. 1,200, the cost per unit is Rs. 50, and the carrying cost is 6% of unit cost:

EOQ = SQRT((2 x 80,000 x 1,200) / (50 x 6%)) = SQRT(192,000,000 / 3) = SQRT(64,000,000) = 8,000 units

As stated above, this formula holds good if changes in price are not likely in the near future and consumption is regular. Otherwise, placing orders according to this formula may become uneconomical.

The carrying cost of inventory consists of (i) the costs of physical storage such as the cost of space, handling and upkeep expenses, insurance, the cost of obsolescence, etc., and (ii) interest on capital invested (the opportunity cost of the capital blocked up). All these costs are expressed as a percentage of the cost per unit.

Table of EOQ
Economic order quantity can also be proved through a table, by calculating total cost at different order quantities. Following is a table showing total cost at five different order quantities, assuming that the annual requirement of the units to be consumed remains the same. Here the total cost comprises ordering cost and carrying cost.

Total Ordering Cost
Ordering cost is arrived at by multiplying the number of orders in a year by the cost per order. The number of orders is calculated by dividing the annually required units by the order quantity.
Step I: Number of orders = Required Units / Order Quantity
Step II: Number of orders x Cost per order

Total Carrying Cost
Carrying cost is arrived at by multiplying the average order quantity by the carrying cost per unit. The average order quantity is calculated by dividing the order quantity by 2. (It is assumed that, on average, half of the order quantity is always kept in the store; this is the reason the order quantity is divided by 2.)
Step I: Average order quantity = Order Quantity / 2
Step II: Carrying cost per unit = Unit Cost x CC%
Step III: Average order quantity x Carrying cost per unit

Applying these steps at different presumed order quantities (inclusive of the Economic Order Quantity) we can develop a table.

[Table: total cost at five different order quantities; columns: order quantity, number of orders, total ordering cost, average inventory, total carrying cost, total cost (Rs.).]

The above table shows that 8,000 is the economic order quantity because at this point total cost is the minimum. At this point total ordering cost is equal to the total carrying cost.
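Since the table itself is not reproduced here, the short sketch below rebuilds it from the worked example above (annual requirement 80,000 units, Rs. 1,200 per order, unit cost Rs. 50, carrying cost 6% of unit cost). The five order quantities shown are illustrative choices around the EOQ.

# Rebuild the EOQ cost table for the worked example above.
from math import sqrt

RU = 80000        # annually required units
OC = 1200.0       # ordering cost per order (Rs.)
UC = 50.0         # unit cost (Rs.)
CC = 0.06         # carrying cost as a fraction of unit cost

carrying_per_unit = UC * CC                    # Rs. 3 per unit per year
eoq = sqrt(2 * RU * OC / carrying_per_unit)    # 8,000 units
print(f"EOQ = {eoq:,.0f} units")

print(f"{'Order qty':>10} {'Orders':>7} {'Ordering':>10} {'Carrying':>10} {'Total':>10}")
for qty in (4000, 6000, 8000, 10000, 12000):   # illustrative order quantities
    orders = RU / qty
    ordering = orders * OC                     # number of orders x cost per order
    carrying = (qty / 2) * carrying_per_unit   # average inventory x carrying cost per unit
    print(f"{qty:>10,} {orders:>7.0f} {ordering:>10,.0f} {carrying:>10,.0f} {ordering + carrying:>10,.0f}")

# Total cost is minimized at 8,000 units, where ordering cost = carrying cost = Rs. 12,000.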
If the order quantity is increased, the total ordering cost will be reduced, but at the same time more carrying cost will be incurred to store the inventory. Whereas if the order quantity is decreased, the total carrying cost will be reduced, but at the same time more ordering cost will be incurred as the number of orders will increase.

EOQ Graph
Economic order quantity can also be determined through a graph. Here the above information is plotted in a graph for total ordering cost, total carrying cost and total cost at different ordering quantities. The point at which the line of total ordering cost intersects with the total carrying cost is the EOQ. At this point the line of total cost will give a bend that shows the minimum cost.

[Graph: total ordering cost, total carrying cost, and total cost plotted against order size (Q, in thousands).]

In the above graph the line of total carrying cost intersects the line of total ordering cost at an order quantity of 8,000, where both of the costs are Rs. 12,000. At this order quantity the total cost is Rs. 24,000, which is the minimum. If the order quantity is increased or decreased, the total cost will be more than the cost at EOQ. This is also evident from the above graph.

Q. 1 From the following data, you are required to determine the Economic Order Quantity.
Annual usage 8,000 units
Cost per unit Rs. 30
Ordering cost Rs. 7 per order
Storage and carrying cost as percentage of average inventory holding 15%

Q. 2 What is Economic Order Quantity (EOQ)? Should the quantity ordered always be equal to EOQ? Calculate EOQ from the following:
(a) RU 600 units
(b) Ordering cost Rs. 12 per order
(c) Carrying cost
(d) Price per unit Rs. 20.

Q. 3 Annual requirement of Glass Limited is 100,000 units of product 10mm glass. Per unit cost of the product is Rs. 10 and the cost for each new order is Rs. 100. Carrying cost is 50%. Calculate EOQ by table and by graph.

Q. 1 The demand for a product is 12,500 units for a three month period. Each unit of product has a purchase price of Rs.15 and ordering costs are Rs.20 per order placed. The annual holding cost of one unit of product is 10% of its purchase price. What is the Economic Order Quantity (to the nearest unit)?
A 1,577
B 1,816
C 1,866
D 1,155

Q. 2 A company determines its order quantity for a raw material by using the Economic Order Quantity (EOQ) model. What would be the effects of a decrease in the cost of ordering a batch of raw material on the ordering quantity and the total carrying cost?
Ordering quantity / Total carrying cost

Q. 3 A company uses the Economic Order Quantity (EOQ) model to establish reorder quantities. The following information relates to the forthcoming period:
Order costs Rs.25 per order
Carrying costs 10% of purchase price
Annual demand 20,000 units
Purchase price Rs.40 per unit
EOQ 500 units
What are the total annual costs of stock (i.e. the total purchase cost plus total order cost plus total holding cost)?
A Rs. 22,000
B Rs. 33,500
C Rs. 802,000
D Rs. 803,000
Flying and Driving after the September 11 Attacks Flying and Driving after the September 11 Attacks Many people who are perfectly relaxed cruising our nation's highways become jittery when they get on an airliner—although most know full well that flying is safer than driving. The statistics are indeed clear on this point. For example, we and a colleague, Dan Weintraub, published a paper in 1991 that documented the substantially lower risk of flying compared with driving in the United States. Some of the many millions of Americans who flew over the next few years probably derived comfort from such hard facts. But now, a decade later, things have changed: The hijacking of four large jets on September 11, 2001, and the disastrous events that ensued led many to forgo flying in the United States during the following months. For example, in the fourth quarter of 2001, there was a drop of 18 percent in the number of passengers compared with the same time period in 2000. Many still avoid air travel. We thus thought it appropriate to again calculate the risks involved in flying and driving, taking into account the latest statistics, including the tragic deaths of the passengers on those four hijacked planes. Safety in Numbers The risks of flying and driving are influenced by different parameters. Whereas the risk of driving depends most strongly on the distance traveled, the risk of flying is primarily affected by the number of takeoffs and landings. A study carried out by Boeing indicates that out of 7,071 worldwide airline fatalities during the interval between 1991 and 2000, 95 percent happened either during takeoff and climb after takeoff, or during descent and landing. Conversely, only 5 percent of the fatalities resulted from accidents that occurred at cruising altitudes. Consequently, as we and others have pointed out before, the risk of flying depends mostly on the number of flight segments involved in the trip, not on the distance traveled. In gathering the statistics for flying, we considered the scheduled domestic passenger operations of 10 major U.S. airlines: Alaska, America West, American, Continental, Delta, Northwest, Southwest, TWA, United and USAirways. (The commuter affiliates of these airlines were not included.) Because the number of airline fatalities varies greatly from year to year, we used the data compiled by the National Transportation Safety Board for a 10-year period from 1992 through 2001. To calculate the probability that a particular passenger would be killed on a nonstop (one-segment) flight, we divided the number of passengers killed during 1992–2001 (433, including the 232 aboard the four hijacked flights) by the product of the total number of nonstop segments (54,061,237) and the average number of passengers per nonstop segment (101.9). The resulting value is 78.6 X 10^–9, or roughly eight in a hundred million. The probability of a fatality on a one-stop (two-segment) flight can be calculated by combining the probabilities of a fatality on either segment. Roughly speaking, the probability of becoming a fatality on a two-segment flight is just two times the probability of becoming a fatality on a one-segment flight. (In actuality, because one must survive the first segment to become a fatality on the second, the full probability calculation is more complicated. But given the very low probabilities involved here, the simple approximation is quite accurate.) 
Similarly, the probability of a fatality on a three-segment flight is approximately equal to three times the probability for a single-segment flight, and so on. When one decides between flying and driving, the latter option usually involves being the driver (as opposed to a passenger). Because the susceptibility to injury varies with the position of the occupant in the vehicle, we included only drivers in this analysis. Also, we tallied just cars, light trucks, vans and sport utility vehicles, ignoring heavy trucks, buses and motorcycles. Furthermore, we considered travel just on rural interstate highways—the safest driving environment—because those constitute the most likely setting when one chooses to drive as an alternative to flying. To gauge the risks of such motoring, we used statistics from the year 2000, the most recent data available in detail. To calculate the probability of fatality per kilometer of driving, we divided the number of driver fatalities on rural interstate highways in 2000 (1,511) by the estimated distance traveled on those roads by cars, light trucks, vans and SUVs (345 X 10^9 kilometers). The resulting value is 4.4 X 10^–9, or about 4 in a billion per kilometer. Armed with these two risk estimates, one for driving and the other for flying, we can specify something we call the indifference distance—the distance at which the two modes of travel are equally risky. For distances shorter than the indifference distance, driving is safer; for distances longer than the indifference distance, flying is safer. The indifference distance for driving versus a nonstop flight can be calculated by dividing the risk of flying a nonstop segment (78.6 X 10^–9) by the risk of driving a kilometer (4.4 X 10^–9). The result is 18 kilometers. For one-stop and two-stop flights, the indifference distances are 36 kilometers and 54 kilometers, respectively. Thus for any distance that is long enough for flying to be an option, driving even on the safest roads is more risky than flying with the major airlines. Astute readers will note that our calculations do not include the trip to an airport (for flying) or the travel on local roads on the way to a rural interstate (for driving). True, we've overlooked this complication. But in many circumstances, the risks for these portions of the journey for the two modes of long-distance travel may be about the same. So we don't believe that our estimates of indifference distance would change all that much, even if such factors were fully accounted for. Just how much safer is flying than driving? For an average-length nonstop flight (which works out to 1,157 kilometers), the risk of flying is just the 78.6 X 10^–9 value derived above. The risk of driving those same 1,157 kilometers is 1,157 X 4.4 X 10^–9, or 5,091 X 10^–9. Dividing 5,091 by 78.6, we estimate that driving the length of a typical nonstop segment is approximately 65 times as risky as flying. Driving farther than 1,157 kilometers would be more than 65 times as risky; driving shorter than 1,157 kilometers, but longer than the 18-kilometer indifference distance, would be between 1 and 65 times as risky as nonstop flying (neglecting the drive to the airport and the travel on local roads on the way to the interstate). Future Shock? As all those stock prospectuses say, these figures are descriptive for the time period studied and are not predictions of future performance. 
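The arithmetic above is simple enough to reproduce directly; here is a small sketch using the article's published figures (433 airline fatalities over 54,061,237 nonstop segments averaging 101.9 passengers, and 1,511 driver fatalities over 345 billion rural-interstate kilometers):

# Reproduce the per-segment flying risk, the per-kilometer driving risk,
# and the "indifference distance" at which the two modes are equally risky.
flight_fatalities = 433
nonstop_segments = 54061237
passengers_per_segment = 101.9

driver_fatalities = 1511
rural_interstate_km = 345e9

risk_per_segment = flight_fatalities / (nonstop_segments * passengers_per_segment)
risk_per_km = driver_fatalities / rural_interstate_km

indifference_km = risk_per_segment / risk_per_km
average_segment_km = 1157

print(f"Flying risk per nonstop segment: {risk_per_segment:.1e}")   # about 7.9e-08
print(f"Driving risk per kilometer:      {risk_per_km:.1e}")        # about 4.4e-09
print(f"Indifference distance:           {indifference_km:.0f} km") # about 18 km
print(f"Driving an average segment is about "
      f"{average_segment_km * risk_per_km / risk_per_segment:.0f} times as risky")  # about 65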
Making predictive statements about the relative risk of flying and driving after the attacks of September 2001 is indeed tough. In particular, it requires some assumptions about whether such aberrant events will be repeated and, if so, how often. Because the frequency of such episodes cannot be reliably estimated, we instead decided to calculate the frequency needed for the two travel modes to become equally risky. As we explained above, the risk of a fatality while driving the length of an average nonstop flight is 5,091 X 10^–9. For nonstop flights to have had the same estimated risk, there would have to have been 28,046 flight fatalities over the 10-year period studied (based on 54,061,237 nonstop segments and 101.9 passengers per nonstop segment). That translates to 27,845 flight fatalities in addition to the 201 people who actually died over those years (not counting those on the four hijacked flights). In turn, dividing 27,845 by 232 (the number of passengers who died on the four hijacked planes) we obtain the following: For flying to become as risky as driving, disastrous airline incidents on the scale of those of September 11th would have had to occur 120 times over the 10-year period, or about once a month. Two conclusions follow. First, without diminishing the tragedy of September 11th (which also involved many deaths of people on the ground) or its political ramifications, from the perspective of personal safety it is important to consider that the annual number of lives lost in road traffic accidents in the United States is enormous in comparison (42,119 fatalities in 2001). Second, the relative safety of domestic flying on the major airlines over driving is so strong that the direction of the advantage would be unchanged unless the toll of terrorism in the air became, almost unthinkably, many times worse than it has been.
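The remaining figures follow from the same inputs. The sketch below, again only illustrative, reproduces the driving risk, the indifference distances, the roughly 65-fold relative risk, and the break-even frequency of September-11-scale events.

    # Risk of driving, per kilometer, on rural interstates (year 2000)
    driver_fatalities = 1_511
    km_driven = 345e9
    p_per_km = driver_fatalities / km_driven        # ~4.4e-09 per kilometer

    p_one_segment = 78.6e-9                         # flying risk per nonstop segment

    # Indifference distances: shorter than this, driving is safer
    for n_segments in (1, 2, 3):
        print(n_segments, n_segments * p_one_segment / p_per_km)   # ~18, 36, 54 km

    # Relative risk of driving the length of an average nonstop flight (1,157 km)
    p_drive_trip = 1_157 * p_per_km
    print(p_drive_trip / p_one_segment)             # ~65 times as risky

    # Flight fatalities needed over 1992-2001 to equalize the two risks
    exposures = 54_061_237 * 101.9                  # passenger-segments over the decade
    needed = p_drive_trip * exposures               # ~28,000 (article: 28,046 with unrounded inputs)
    extra = needed - 201                            # beyond the 201 non-hijacking deaths
    print(needed, extra, extra / 232)               # last value ~120 events, about once a month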
{"url":"http://www.americanscientist.org/issues/id.3312,y.2003,no.1,content.true,page.1,css.print/issue.aspx","timestamp":"2014-04-19T03:03:50Z","content_type":null,"content_length":"85336","record_id":"<urn:uuid:f0e5b07d-011b-4979-9f11-0ba81f44003b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the concatenation of a VQS with its inverse commutative? - Math and Physics

Quaternion concatenation is noncommutative. That is,

    q[a] * q[b] ≠ q[b] * q[a]
    q^-1 * q = q * q^-1 = I[q]

where q^-1 is the inverse of q and I[q] is the identity quaternion.

VQS concatenation is also noncommutative:

    T[A_B] * T[B_C] ≠ T[B_C] * T[A_B]

where T[A_B] represents a VQS transformation. Now, we find the inverse of T[A_B] like so:

    T[A_B]^-1 = T[B_A]

My question is, is the concatenation of a VQS with its inverse commutative? That is, is the following statement correct?

    T[A_B] * T[B_A] = T[B_A] * T[A_B] = I[VQS]

where I[VQS] is the identity VQS. With the implementation I'm using I'm finding T^-1 * T = I[VQS], whereas T * T^-1 ≠ I[VQS]. This seems incorrect; both should return I[VQS]. Here are the VQS inverse and concatenation functions I'm using:

    // Concatenation
    VQS VQS::operator*(const VQS& rhs) const
    {
        VQS result;

        // Combine translation vectors
        result.v = q.Rotate(rhs.v) * s + v;

        // Combine quaternions
        result.q = q * rhs.q;

        // Combine scales
        result.s = s * rhs.s;

        // Return result
        return result;
    }   // End: VQS::operator*()

    // Returns inverse
    VQS Inverse(const VQS& other)
    {
        VQS temp;

        // Inverse scale
        temp.s = 1.0f / other.s;

        // Inverse quaternion
        temp.q = Inverse(other.q);

        // Inverse vector
        temp.v = temp.q.Rotate(-other.v) * temp.s;

        return temp;
    }   // End: Inverse()
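This is not part of the original post, but a quick numeric way to probe the question. The sketch below re-implements the same two operations in Python with a minimal, made-up quaternion class (the class, its method names and the test values are all invented; only the VQS formulas mirror the C++ above). Assuming a unit quaternion and a linear Rotate, both T * Inverse(T) and Inverse(T) * T come out as the identity here, which suggests a real-world discrepancy is worth tracing to an un-normalized quaternion or to code outside these two functions.

    import math

    class Quat:
        # Minimal illustrative quaternion (w, x, y, z); not the poster's class.
        def __init__(self, w, x, y, z):
            self.w, self.x, self.y, self.z = w, x, y, z
        def __mul__(self, o):
            return Quat(
                self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
                self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
                self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
                self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w)
        def inverse(self):
            return Quat(self.w, -self.x, -self.y, -self.z)   # conjugate; assumes unit length
        def rotate(self, v):
            p = self * Quat(0.0, *v) * self.inverse()
            return (p.x, p.y, p.z)

    def vqs_mul(a, b):
        # (v, q, s) * (v', q', s'): same formulas as the C++ operator* above
        va, qa, sa = a
        vb, qb, sb = b
        rv = tuple(sa * c + t for c, t in zip(qa.rotate(vb), va))
        return (rv, qa * qb, sa * sb)

    def vqs_inverse(t):
        # Same formulas as the C++ Inverse() above
        v, q, s = t
        qi, si = q.inverse(), 1.0 / s
        vi = tuple(si * c for c in qi.rotate(tuple(-x for x in v)))
        return (vi, qi, si)

    # Example VQS: 90-degree rotation about z, uniform scale 2, translation (1, 2, 3)
    h = math.radians(45.0)
    T = ((1.0, 2.0, 3.0), Quat(math.cos(h), 0.0, 0.0, math.sin(h)), 2.0)
    Ti = vqs_inverse(T)

    for a, b in ((T, Ti), (Ti, T)):
        rv, rq, rs = vqs_mul(a, b)
        print(rv, (rq.w, rq.x, rq.y, rq.z), rs)   # ~(0,0,0), ~(1,0,0,0), 1.0 both ways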
{"url":"http://www.gamedev.net/topic/644644-is-the-concatenation-of-a-vqs-with-its-inverse-commutative/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024","timestamp":"2014-04-17T00:54:12Z","content_type":null,"content_length":"97663","record_id":"<urn:uuid:2af002c6-8007-4358-908f-420515ff847b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Trying to Figure Retirement / VA Pay

I am trying to estimate my retirement pay / VA pay and am not sure how to calculate. Any assistance or direction would be greatly welcomed.

18+ yrs E-7 / USAF
36-month avg pay = $3780
Took REDUX at 16 yrs
Expecting 30% from IPEB and 90% from VA (have spouse and 2 children under 18)
No concurrent receipt authorized

I am confused on what numbers to use for retirement percent and how my pay would be divided between the AF and the VA, if at all. Thanks in advance for any assistance you may be able to provide!

pittpan2005 (PEB Forum Regular Member): With a medical retirement your REDUX is negated. If you are placed on TDRL, you receive 50% of base pay (high-36 average) while on TDRL. If 30% PDRL, you receive 30% of base pay (high-36 average). Since your VA pay is higher, you will receive all your money tax free from the VA instead of any DoD pay. Your VA pay is $1971.00.

Thanks pittpan! Much appreciated. I have been curious about this for some time, but only just now thought to present it to the forum.

It is the higher of the DoD% or your 18 years of service times 2.5%. So let's say the DoD rated you at 30% and your regular retirement was 18 years times 2.5, which equals 45%. I am sorry this is not phrased properly, but you get the gist.

Hello.!! (New Member): My friend, I have a question for you, if you can help me. I just got retired from the Army. I'm still waiting for my letter of confirmation for my rating, and more than two weeks waiting for my VA ID card. Do you know someone, or have a phone number that I can call? Thank you.

Thanks again! Unfortunately, I am not able to assist, Hello.!!. I am USAF and have not yet reached the point of retirement. Hopefully that will come soon though! Best of luck!

You may want to start a new thread, or do a search; there will be someone on this site that can assist you.

Revisiting this thread as I am curious if anyone can assist me in computing what final pay would be if I were to make it to 20 years. Using the numbers above in the original post, looking to see approximate retirement/VA pay if retired at 20 years versus at 18.5 years. Many thanks once again for all the assistance!!

JTACEER (New Member): I am in the same boat, currently 18 years in USAF, and they are just now starting my MEB process. I am worried about all of this as well. Will post what I find out.

nwlivewire (PEB Forum Regular Member): It is my understanding that getting to 20 years AD service time and retiring at 20 years is a pretty GOLDEN and ideal military retirement situation. If medically retired with 20 years AD service, you would be able to become immediately eligible for CRDP - meaning you would receive BOTH your military retirement AND your VA compensation at the same time - without the offset. I also understand that the percentage of your military medical disability rating would also exempt that percentage of your total military retirement from federal taxation. Someone please chime in if this is incorrect! In other words, if you were given a 30% disability rating and had 20 years AD in, 30% of your retirement pay would not be taxed. Even if I am not correct on that one piece of this, it is STILL very much to your military retirement advantage to cross that 20-year retirement line. You get FULL CRDP immediately - especially if rated at 50% or higher.
And of course, all the other regular bennies given to a 20-year AD retiree - TRICARE for Life, etc. And your retirement would probably be permanent - no TDRL re-examinations where they can change your percentages and mess with you. PERMANENT! Have you thought about the COAD program? Just something to research and know about - JUST IN CASE you get close to 20 years, but need a few months to get to the finish line. Others will chime in here shortly and offer suggestions, input, maybe even some "How To's". Sure hope you cross that finish line!!!

Do you happen to know if the REDUX is waived for members over 20? If so, can you cite the regulation where this is written? I am at 23 years and was found "unfit." Yesterday I went to discuss my VA ratings and was told by my PEBLO that I'll retire under the REDUX plan... I know I've read on here that if "medically retired" I wouldn't be bound by this, I just can't find the regulation that says it so I can show it to my PEBLO. It only makes sense, as I am not allowed to work my way back up to 50 or 75%. This will mean the difference between 47.5% and 57.5%. Thanks in advance.

ranger2992 (Super Moderator): CrackedBack, go ahead and make a new thread that addresses your question. You will get a much better response and specific answers to your questions. We are trying to clean up threads where other members are posting questions not related to answering the OP's question. Thanks for your help.

Will do... but thought the flavor was the same. Retirement and REDUX is in this discussion... I'll figure this site out yet!!!
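For anyone following the arithmetic in this thread, here is a rough worked example using only the numbers already posted (the $3,780 high-36 average, the 30% rating, the 45% length-of-service figure, and the $1,971 VA amount). It is an illustration of the calculation described in the replies above, not a statement of the actual rules, so verify everything with your PEBLO.

    # All figures taken from this thread; treat every number as illustrative
    high_36 = 3780.0
    dod_rating = 0.30
    years_pct = 18 * 0.025            # 45% for 18 years of service

    dod_retired_pay = high_36 * max(dod_rating, years_pct)   # $1,701/month
    va_pay = 1971.00                  # 90% VA with spouse and two children, per the thread

    # With no concurrent receipt, retired pay is offset by VA compensation,
    # so (per the first reply) the larger, tax-free VA amount is what arrives.
    print(dod_retired_pay, va_pay, max(dod_retired_pay, va_pay))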
{"url":"http://www.pebforum.com/site/threads/trying-to-figure-retirement-va-pay.15643/","timestamp":"2014-04-21T09:44:12Z","content_type":null,"content_length":"72138","record_id":"<urn:uuid:6e2f1ad2-9e2c-4684-81b4-1051ade4ff6b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
NONRADIAL PULSATION

Fig. 1: Pulsation pattern for an oscillation with l = 3. The yellow coloured surfaces move outward, while the blue coloured ones move inward. The movement of the node lines is also illustrated (from Zima (1999, Master Thesis)).

A pulsation is radial when the star oscillates around the equilibrium state by changing its radius while maintaining its spherical shape. Radial pulsation is just a special case of nonradial pulsation. Nonradial pulsation means that some parts of the stellar surface move inwards while others move outwards at the same time. Such an oscillation can be described with three parameters (quantum numbers): the radial order n, the degree l, and the azimuthal number m. The degree l is equivalent to the number of node lines on the stellar surface (on a node line no radial motion takes place). Of those l node lines, a total of m lines lie in the meridional direction (there are 2l+1 possible m-values for one l-value). Modes with m ≠ 0 represent waves travelling around the star. They can be prograde (m > 0) or retrograde (m < 0), depending on the direction of their movement around the star.

In a nonradially pulsating, nonrotating star without a magnetic field, the frequencies of all m-modes for a given l have the same value. This is a (2l+1)-fold degeneracy. The degeneracy can be lifted by stellar rotation or a magnetic field. The so-called rotational frequency splitting enables us to calculate the stellar rotation period. The frequency s_m of a rotationally split mode is written as

    s_m = s_0 + (C_L - 1) m W + (D_L m² W²) / s_0,

where W is the rotational frequency and C_L and D_L depend on the Coriolis force and the centrifugal force. Both factors depend on the internal structure of the star and hence contain asteroseismological information.

Pulsation modes are further distinguished by n, the number of nodes in the radial component of displacement from the center to the surface of the star. For n = 0 the star oscillates in the fundamental mode; n = 1 is the first overtone, n = 2 the second overtone, etc. A radial pulsation with n = 2 is shown in Figure 2.

Fig. 2: Schematic illustration of the node lines in the stellar interior for a radial pulsation with n = 2 (from Zima (1999, Master Thesis)).
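As a quick illustration of how the splitting is used, the Python sketch below evaluates the formula above for the 2l+1 components of an l = 1 mode; the numerical values of s_0, W, C_L and D_L are invented for the example (in real work they come from the observed spectrum and from stellar models).

    # Rotational splitting: s_m = s0 + (C_L - 1)*m*W + (D_L * m**2 * W**2) / s0
    s0 = 10.0               # unperturbed mode frequency (e.g. cycles per day) - illustrative
    W = 0.5                 # rotational frequency - illustrative
    C_L, D_L = 0.03, 0.01   # structure-dependent constants - illustrative
    l = 1

    for m in range(-l, l + 1):
        s_m = s0 + (C_L - 1.0) * m * W + (D_L * m**2 * W**2) / s0
        print(m, round(s_m, 4))

    # The spacing between adjacent m-components is roughly (1 - C_L)*W,
    # which is how an observed frequency multiplet constrains the rotation period.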
{"url":"http://www.univie.ac.at/tops/dsn/texts/nonradialpuls.html","timestamp":"2014-04-21T14:53:15Z","content_type":null,"content_length":"5599","record_id":"<urn:uuid:89bf078c-2192-4ce3-bfc7-df6a77d43855>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
San Geronimo Algebra Tutor ...I often find, when working with my students, that an important component of the tutoring is attention to these skills in addition to the specific subject areas for which tutoring had been requested. I have a BA in mathematics from UCLA, where linear algebra (matrices, vector analysis, etc.) were... 20 Subjects: including algebra 1, algebra 2, calculus, trigonometry ...I am confident in my abilities to help students with a variety of learning styles to be successful using a wide range of curriculum and resources. I have been an independent study teacher for over 10 years and have extensive experience working with elementary students in math. I use a variety of resources and teaching styles depending on the individual needs of the child. 17 Subjects: including algebra 2, algebra 1, reading, English ...And I help students to learn to see concepts and problems in a way that makes sense to them. For many, Algebra I and II is where the struggles begin. But it need not be scary and difficult. 18 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have also tutored many high school students. I have worked for both Kaplan and McGraw-Hill designing testing materials for tests such as the SAT/ACT and GRE. I know the ins and outs of how tests are constructed and how to maximize your score, and am happy to share my knowledge with you. 14 Subjects: including algebra 2, algebra 1, statistics, geometry ...To reinforce these concepts, we do practice problems together until the student is comfortable. I believe patience and clear communication is key when working with elementary students. I teach all of my students study skills necessary to do well and succeed on tests and in school. 59 Subjects: including algebra 1, algebra 2, English, reading
{"url":"http://www.purplemath.com/San_Geronimo_Algebra_tutors.php","timestamp":"2014-04-17T16:13:51Z","content_type":null,"content_length":"23921","record_id":"<urn:uuid:e19bdd5c-8d56-4277-884e-5d2ce29a7d2d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
cartographic projection

If a projection is a mapping from one surface to another, cartographic projection is the particular case of mapping from a sphere (the globe) onto a plane.

Collectively, the mesh of lines of latitude and longitude on a sphere is called a graticule. If a single graticule line can be transferred to a plane, the other lines can be arranged about it in a mathematically orderly way; not the same as on the sphere, but related. These arrangements are map projections, on which lines of longitude are called meridians, and lines of latitude, parallels.

Such projections can preserve directions or distances but not both, except from a single point. Those that do preserve bearings and distances from a single point are called equidistant; those that preserve bearings from any point are called conformal. Maps that preserve area are called equivalent or equal-area, but they give up both equidistance and conformality. No map can possess more than one of equidistance, equivalence or conformality, and many maps have none. This means that if your map shows the proper sizes of countries, it will be unreliable for compass navigation.

That's one reason why, historically, the Mercator projection has been popular: it's conformal, meaning that bearings are correct, though distances and areas are not. Greenland is *not* that big. Another reason is that on the Mercator projection the meridians are parallel, making lines of constant bearing (rhumb lines) turn out straight. It's also one of the few that maps to a rectangle, which has been more important than you might think in making it the standard in books.

For statistical or political purposes it's useful to have maps that show equal area, such as the Mollweide projection, which has a straight central meridian perpendicular to the equator and straight parallels. Its defining feature is that its 90° East and West meridians together form a circle. The sinusoidal, or Sanson-Flamsteed, projection is another equivalent projection.

The stereographic projection may be familiar from the National Geographic Society's logo, where the parallels diverge as they get further from the central meridian. It is also conformal for navigation, and has the added virtue over the Mercator that both oblique and great circles on the globe are circles on the map as well. I'll also mention the orthographic projection, not only because it's what you'd see from space, but because it's what astronomers use for star charts and maps of the Moon's surface.

All the above are continuous projections. If you're willing to interrupt the map into gores like a peeled orange and lose almost all distance information, you can get closer to having both conformality and equivalence simultaneously.
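Since the write-up leans on the Mercator projection as its main example, here is a small sketch of the standard spherical Mercator forward mapping; the formula is the textbook one, and the radius and sample points are arbitrary.

    import math

    def mercator(lat_deg, lon_deg, R=1.0):
        # Spherical Mercator: x is proportional to longitude,
        # y stretches toward the poles, which is why Greenland looks so big.
        lat = math.radians(lat_deg)
        lon = math.radians(lon_deg)
        x = R * lon
        y = R * math.log(math.tan(math.pi / 4 + lat / 2))
        return x, y

    # Equal longitude steps stay equally spaced (meridians are parallel lines),
    # while equal latitude steps spread out toward the poles.
    for lat in (0, 30, 60, 80):
        print(lat, mercator(lat, 0))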
{"url":"http://everything2.com/title/cartographic+projection","timestamp":"2014-04-20T22:06:35Z","content_type":null,"content_length":"23188","record_id":"<urn:uuid:f4b5e7cd-62c8-4fd6-87db-d8ba2c1e3689>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Interest Rates and Bond Yields

A nominal discount factor is the present value of one unit of currency to be paid with certainty at a stated future time. This definition suffices, whatever the time period. In a multi-period setting there is one discount factor for every time period. Thus df(1) could be the present value (at time 0) of $1 certain at the end of time period 1, df(2) the present value at time 0 of $1 certain at the end of time period 2, etc. The vector of such values df {1*periods} is known as the discount function. It can be used to value any vector of cash flows known to be certain. If cf {periods*1} is such a vector, its present value is simply:

    pv = df*cf

In this equation, pv is termed the discounted present value of the cash flows.

The one-period example generalizes to a multi-period setting in another respect. The discount factor for a given period will equal the sum of the atomic prices for that period. This follows because the purchase of one unit of every time-state claim for a specified time will guarantee one unit of the currency at that period. The cost of such a bundle is the cost of one unit of currency certain at that date, and hence equals the associated discount factor.

In many countries, nominal discount factors are easily discovered. For example, in the United States, financial publications report recent prices of U.S. Treasury Bills and "Strips", each of which promises a fixed dollar payment at one specified date. Since the Treasury has the power to print dollars, payments on such securities can be considered certain, absent revolution, etc. The reported prices on any given day thus constitute the discount function at the time.

Real discount factors are another matter. In some countries the government issues bonds with payments linked to a price index. Such bonds typically provide both coupon payments at periodic intervals and a final principal payment at maturity. If there are enough issues with sufficiently different maturities, at least some elements of the discount function can be determined.

Consider a case in which there are three bonds. The one-year bond promises a payment of 103 real or "constant dollars" (e.g. Apples) in a year. The two-year bond promises a payment of 4 constant dollars in one year and 104 in two. The three-year bond promises a payment of 3 constant dollars in years 1 and 2 and 103 in year 3. The current prices are $100, $101 and $98, respectively. What are the real discount factors (i.e. the present value of $1 of purchasing power in each of the next three years)?

To answer the question we construct a {periods*bonds} cash flow matrix Q:

            Bond1   Bond2   Bond3
      Yr1     103       4       3
      Yr2       0     104       3
      Yr3       0       0     103

and a price vector p {1*bonds}:

      Bond1   Bond2   Bond3
        100     101      98

The price of each bond should equal its discounted present value. Thus:

    df*Q = p

where df {1*periods} is the discount function. We wish to find df, given Q and p. Multiplying both sides of the equation by inv(Q) gives:

    df = p*inv(Q)

In this case, df {1*periods} is:

         Yr1      Yr2      Yr3
      0.9709   0.9338   0.8960

Thus a claim for 1 real dollar in year 1 is worth $0.9709 now, a claim for 1 real dollar in year 2 is worth $0.9338 now, and so on. Any desired set of real payments over the next three years can be valued using this discount function.

To find the combination of such bonds that will replicate a desired set of cash flows we utilize the formula:

    Q*n = c

where n {bonds*1} is a portfolio of bonds and c {periods*1} is the desired set of payments. From this it follows that n = inv(Q)*c.
Thus if the desired set of payments is c:

      Yr1   300
      Yr2   200
      Yr3   100

the replicating portfolio is n:

      Bond1   2.8107
      Bond2   1.8951
      Bond3   0.9709

Whether for real or nominal units of a currency, if a discount function can be determined from the values and characteristics of default-free instruments, any corresponding vector of cash flows can be valued and replicated. Moreover, any such vector can be "traded for" any other with the same present value. The set of such combinations forms the default-free opportunity set available to the Investor. The Analyst can help determine the set, but ultimately the Investor must select either one of its members or a vector of cash flows that is not fully default-free.

While a discount factor provides a natural and direct measure of the present value of a certain future cash flow, it is sometimes convenient to focus on a related and more familiar figure. If an investment grows from a value of x to a value of x*(1+i) in one period, it can be said to have "earned interest" at the rate i. The concept can be extended to multiple periods by assuming that interest compounds once per period. Thus if an investment grows from V0 to V2 in two periods, the equivalent interest rate is found by solving the equation:

    (1+i)*(1+i) = V2/V0
    (1+i)^2 = V2/V0

The ratio of the ending value to the beginning value is termed the (t-period) value relative. For an investment held t periods, the associated interest rate is computed from:

    (1+i) = (Vt/V0)^(1/t)

Interest rates are generally used to describe securities for which payments are certain. In a one-period setting, such securities can be termed riskless. In a multi-period setting it is preferable to describe them as default-free since their values may fluctuate, making them risky if sold before the final payment has been made.

There is a one-to-one relationship between a discount factor and the corresponding interest rate. If df(t) is the discount factor for time t, one unit of the numeraire will grow to 1/df(t) units with certainty by time t. Thus i(t), the default-free interest rate for time t, is given by:

    i(t) = ((1/df(t))^(1/t)) - 1

With the value of the "t-period interest rate", one can discount any certain payment to be obtained at that date. Let P(t) be an amount to be paid at t and i(t) the corresponding interest rate. Then the present value pv is given by:

    pv = P(t) / ((1+i(t))^t)

Since there is a one-to-one relationship between a discount factor and the associated interest rate, either may be used to calculate a present value. Moreover, given one of them, the other can be determined with little effort.

Consider the following discount function df:

         Yr1     Yr2     Yr3
      0.9400  0.8800  0.8200

The corresponding value relatives are given by vr = 1./df:

         Yr1     Yr2     Yr3
      1.0638  1.1364  1.2195

Using the MATLAB notation of [1:3] to generate the vector [1 2 3], the interest rates can be computed as i = (vr.^(1./[1:3]))-1:

         Yr1     Yr2     Yr3
      0.0638  0.0660  0.0684

or, in percent:

        Yr1    Yr2    Yr3
      6.38%  6.60%  6.84%

These values, when plotted, give one version of the current yield curve or term structure of interest rates. In this case it is upward-sloping, with long-term rates greater than short-term rates.

In these calculations, we have computed interest rates assuming compounding once per period. One could as easily use a definition based on compounding more than once per period; or not at all; or continuously. When processing an interest rate, it is important to know which definition was used so that errors do not creep into subsequent calculations.
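The bond example can be checked numerically; below is a brief sketch that mirrors the MATLAB expressions in the text using Python with NumPy (a convenience on my part, not something the original uses).

    import numpy as np

    # {periods x bonds} cash flow matrix and bond prices from the example
    Q = np.array([[103.0,   4.0,   3.0],
                  [  0.0, 104.0,   3.0],
                  [  0.0,   0.0, 103.0]])
    p = np.array([100.0, 101.0, 98.0])

    # df * Q = p  =>  df = p * inv(Q)
    df_real = p @ np.linalg.inv(Q)
    print(df_real)                 # approximately [0.9709, 0.9338, 0.8960]

    # Replicating portfolio for the desired payments (300, 200, 100)
    c = np.array([300.0, 200.0, 100.0])
    n = np.linalg.inv(Q) @ c
    print(n)                       # approximately [2.8107, 1.8951, 0.9709]

    # Spot rates implied by the second discount function in the text
    df = np.array([0.94, 0.88, 0.82])
    t = np.arange(1, 4)
    i = (1.0 / df) ** (1.0 / t) - 1.0
    print(i)                       # approximately [0.0638, 0.0660, 0.0684]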
The possibility of alternative compounding definitions makes the use of discount factors a safer approach. Moreover, a case can be made for the thesis that a discount factor, being a price, is a fundamental characteristic of an economy, while an interest rate is a derived construct. This being said, interest rates are ubiquitous, helpful for comparisons of prices of payments at different times, and necessary for communication with those used to more traditional characterizations of financial markets.

Many bonds, both traditional and index-linked, provide coupon payments periodically and a final principal payment at maturity. Consider, for example, a bond that provides payments cf of:

      Yr1     6
      Yr2     6
      Yr3   106

Given the previous discount function, such a bond has a present value of $97.84. Based on its initial par value of $100, the yield is 6% per year. However, given the fact that it is selling for $97.84, the effective yield is greater. To reflect this, analysts often use a derived figure, the yield-to-maturity. This is a constant interest rate that makes the present value of all the bond's payments equal its price. In this case, we seek a value for y that will satisfy the equation:

    6/(1+y) + 6/((1+y)^2) + 106/((1+y)^3) = 97.84

This can be done by trial and error, preferably using an intelligent algorithm to find the result (to a desired degree of accuracy). In this case, y is approximately 6.82%.

A set of yields-to-maturity for bonds with varying coupons and maturities will typically not plot on a single curve. Nonetheless, some analysts crossplot yield-to-maturity and maturity date for a set of bonds, then fit a "yield curve" through the resulting scatter of plots. The result may be helpful, but should not be used for valuation purposes.

The maturity of a bond provides important information for its valuation. The values of longer-term bonds are generally affected more by changes in interest rates, especially longer-term rates. However, for coupon bonds, maturity is a somewhat crude indicator of interest rate sensitivity. A high-coupon bond will be exposed more to short and intermediate-term rates than will a low-coupon bond with the same maturity, while a zero-coupon bond will be exposed only to the interest rate associated with its maturity. To provide a somewhat better measure than maturity, Analysts often compute the duration of a set of cash flows. Let df be a {1*periods} vector of discount factors and cf a {periods*1} vector of cash flows. The duration of cf is a weighted average of the times at which payments are made, with each payment weighted by its present value relative to that of the vector as a whole.

In the previous example, the bond has cash flows cf:

      Yr1     6
      Yr2     6
      Yr3   106

The market discount function df is:

         Yr1     Yr2     Yr3
      0.9400  0.8800  0.8200

The present values of the cash flows are v = df.*cf':

         Yr1     Yr2      Yr3
      5.6400  5.2800  86.9200

To compute weights we divide by total value, w = v/(df*cf), giving:

         Yr1     Yr2     Yr3
      0.0576  0.0540  0.8884

In MATLAB, the expression [1:3]' produces the {periods*1} vector of time periods:

      Yr1   1
      Yr2   2
      Yr3   3

The duration, given by d = w*([1:3]'), is 2.8307 years -- somewhat less than the maturity of 3 years.

Well and good, but what use can be made of duration? In some circumstances, quite a bit. In others, somewhat less. We make the calculation to better understand the reaction of the value of a vector of cash flows to a change in one or more interest rates. In practice, of course, many such rates along the term structure may change at the same time.
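The present value, the yield-to-maturity and the duration quoted above can be reproduced as follows; the bisection loop is simply one stand-in for the "intelligent algorithm" the text alludes to.

    import numpy as np

    cf = np.array([6.0, 6.0, 106.0])
    df = np.array([0.94, 0.88, 0.82])
    t = np.arange(1, 4)

    price = df @ cf
    print(price)                       # 97.84

    # Yield-to-maturity by bisection: find y with sum(cf / (1+y)^t) = price
    def pv_at(y):
        return np.sum(cf / (1.0 + y) ** t)

    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pv_at(mid) > price:
            lo = mid                   # present value too high -> need a larger yield
        else:
            hi = mid
    print(lo)                          # approximately 0.0682

    # Duration: value-weighted average time of the payments
    v = df * cf
    w = v / v.sum()
    print(w @ t)                       # approximately 2.8307 years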
In general, if the discount function changes from df1 to df2, the present value of cash flow vector cf will experience a change in value equal to:

    dV = (df2 - df1)*cf

How can one number summarize the effect on value of a change in potentially many different interest rates along the discount function?

Of necessity, a change in the yield-to-maturity of a bond will cause a predictable change in the value of that bond or set of cash flows, since there is a one-to-one relationship between the two. The relationship holds as well for most cash flow vectors. In such cases the term internal rate of return is utilized, instead of yield-to-maturity. If there are sufficiently many positive and negative cash flows in a vector, the internal rate of return may not be unique, causing potential mischief if one relies upon it. However, this cannot happen if the vector consists of a series of negative (positive) flows, followed by a series of positive (negative) flows -- that is, if there is only one reversal of sign.

In practice, a bond's duration is usually calculated with a discount function based on its own yield-to-maturity, that is:

    [ 1/(1+y)  1/((1+y)^2)  1/((1+y)^3) ]

Now, consider c(t), the cash flow for the t'th period. Using the bond's yield-to-maturity, its present value is:

    v(t) = c(t)/((1+y)^t)

If there is a very small change dy in y, the change in v(t) will be:

    dv(t) = (c(t)*(-t*(1+y)^(-t-1))) * dy
    dv(t) = (v(t)*-t) * (dy/(1+y))

Summing all such terms we have the total change in value dv:

    dv = sum(dv(t)) = - sum(v(t)*t) * (dy/(1+y))

Finally, the proportional change in value, dv/v, is:

    dv/v = sum(dv(t)/v) = - sum((v(t)/v)*t) * (dy/(1+y))

But the term inside the parentheses preceded with "sum" is the duration, calculated using the bond's own yield-to-maturity. Thus we have:

    dv/v = - d * (dy/(1+y))

Sometimes the duration is divided by (1+y) to give the modified duration. Letting md represent this, we have:

    dv/v = - md * dy

Thus the modified duration indicates the negative percentage change in the value of the bond per percentage change in its own yield-to-maturity. The minus sign indicates that an increase (decrease) in a bond's yield-to-maturity is accompanied by a decrease (increase) in its value.

Duration (modified or not) is of no interest unless one can establish a relationship between a bond's own yield-to-maturity and some market rate of interest. For example, assume y = y20 + .01, where y20 is the interest rate on 20-year zero-coupon government bonds. In this case:

    dy = dy20
    dv/v = - md * dy20

which relates the percentage change in the bond's value to the change in a market rate of interest.

The concept of duration is especially relevant for Analysts who counsel the managers of defined-benefit pension funds. Many such funds have obligations to pay future pensions that are fixed in nominal (e.g. dollar) terms, at least formally. Moreover, the bulk of the cash flows must be paid at dates far into the future. The present value of the liabilities of such a plan can be computed in the usual way and its yield-to-maturity (internal rate of return), or discount rate, determined using market rates of interest. In many cases, the discount rate will be very close to a long-term rate of interest (e.g. that for 20-year bonds). Since term structures of interest rates tend to be quite flat at the long end, any change in the long-term rate of interest will be accompanied by a roughly equal change in the discount rate for a typical pension plan of this type.
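The accuracy of the modified-duration approximation is easy to check against exact repricing of the example bond; a short sketch, with an arbitrary 10-basis-point yield shock.

    import numpy as np

    cf = np.array([6.0, 6.0, 106.0])
    t = np.arange(1, 4)
    y = 0.0682                          # yield-to-maturity from the earlier calculation

    def price(yy):
        return np.sum(cf / (1.0 + yy) ** t)

    v = cf / (1.0 + y) ** t
    d = np.sum((v / v.sum()) * t)       # duration at the bond's own yield
    md = d / (1.0 + y)                  # modified duration

    dy = 0.001                          # a 10-basis-point rise in yield
    approx = -md * dy                   # predicted proportional price change
    exact = price(y + dy) / price(y) - 1.0
    print(md, approx, exact)            # the two changes agree closely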
Thus the duration of the plan's cash flows provides a good estimate of the sensitivity of the present value of its liabilities to a change in long-term interest rates. Any imbalance between the duration of the assets in a pension fund held to meet those liabilities and the duration of the liabilities may well provide an indication of the extent to which the fund is taking on interest rate risk.

In our most recent example, the discount function df was:

         Yr1     Yr2     Yr3
      0.9400  0.8800  0.8200

with associated interest rates:

         Yr1     Yr2     Yr3
      0.0638  0.0660  0.0684

For example, $1 invested at a rate of 6.60% per year, compounded yearly, would grow to $1/0.88 dollars at the end of two years. This interest rate could be termed the 2-year spot rate to emphasize the fact that it assumes an investment that begins immediately and lasts for two years.

A different type of interest rate involves an agreement made immediately for investment at a later date and repayment at an even later date. For example, one might agree today to borrow $1 in a year and repay $1 plus a stated amount of interest one year later (i.e. two years hence). The interest rate in question is termed a forward interest rate to emphasize the fact that it covers an interval that begins at a date forward (i.e. in the future).

Of particular interest are forward rates covering periods that last only one period. Such rates can be denoted by their starting date. Hence the 1-year forward rate covers the period from the end of year 1 to the end of year 2, but on terms negotiated today.

Given the discount function, it is possible to arrange today to borrow 1/df(1) dollars at the end of year one and pay 1/df(2) dollars at the end of year 2 for a zero net investment, since each "side" will have a present value of $1. Hence, arbitrage decrees that any forward contract covering the same period will have the same results. This ensures that:

    (1/df(1)) * (1+f(1)) = (1/df(2))

where f(1) is the forward rate for the period beginning at the end of year 1 and ending at the end of year 2. Re-arranging the equation above gives the simpler form:

    f(1) = (df(1)/df(2)) - 1

More generally:

    f(t) = (df(t)/df(t+1)) - 1

In the special case in which t = 0, the "forward rate" will, in fact, be the spot rate for a one-year loan, since df(0), the present value of $1 today, is $1.

To obtain the full vector of forward rates, we create a lagged vector dfl of all but the last discount factor, preceded by the present value of $1 today:

    dfl = [ 1 df(1:2) ]

         Yr1     Yr2     Yr3
        1.00    0.94    0.88

Dividing each element of this lagged vector by the corresponding element of the original discount function, then subtracting 1, gives the forward rate vector f:

    f = (dfl ./ df) - 1

        f(0)    f(1)    f(2)
      0.0638  0.0682  0.0732

Thus one dollar grows to $1.0638*1.0682 in two years and $1.0638*1.0682*1.0732 in three years. Of necessity, these calculations reach the same conclusion as do those based on the respective spot interest rates. However, the latter use different rates for the same year (e.g. year 2), depending on the investment being analyzed, while the former do not. Thus forward rates are closer to economic reality and can be used with far less risk of error.

Forward rates are especially useful when an Analyst is trying to predict future levels of inflation for estimating liabilities of a pension plan with benefits tied to salary levels, which are, in turn, affected by changes in the cost of living.
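The forward-rate vector, and its equivalence with the spot-rate calculation, can be confirmed in a few lines; NumPy again stands in for the MATLAB notation.

    import numpy as np

    df = np.array([0.94, 0.88, 0.82])
    dfl = np.concatenate(([1.0], df[:-1]))   # [1.00, 0.94, 0.88]

    f = dfl / df - 1.0
    print(f)                                 # approximately [0.0638, 0.0682, 0.0732]

    # One dollar invested at successive one-year forward rates
    # grows exactly as implied by the spot rates (i.e. to 1/df):
    print(np.cumprod(1.0 + f), 1.0 / df)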
A standard assumption holds that a forward interest rate is the sum of two components: (1) a liquidity premium (sometimes called a term premium) and (2) an expectation concerning the spot rate that will hold at the time. Thus the two-year forward rate in our example (7.32%) might be considered to be the sum of a normal liquidity premium for such obligations of 1.0% and a consensus expectation of market participants that the one-year spot rate will equal 6.32% for year 3. The spot rate, in turn, may be assumed to equal an expected one-year real return of, say, 1.5% plus an expected level of inflation equal to 6.32%-1.5%, or 4.82%. Combining the two calculations gives:

    Forward Rate - Liquidity Premium - Expected Short-term Real Return = Expected Inflation

A common set of assumptions holds that liquidity premia increase at a decreasing rate as maturity increases and that expected short-term real returns are constant. This implies that the term structure of forward rates will have the same shape as the liquidity premium function in periods in which inflation is expected to remain constant. If the forward curve is steeper, inflation is presumably expected to increase. If it is flatter or downward-sloping, inflation can be expected to decrease.

Procedures such as this applied to the set of forward interest rates allow an Analyst to estimate levels of future inflation that are consistent with current market yields. As usual, the estimates are only as good as the assumptions, but are likely to be better than the use of some average historic inflation level, especially in periods in which term structures of interest rates are unusually steep, unusually flat, or actually downward-sloping.
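Plugging the numbers from the preceding paragraph into that identity gives the implied inflation estimate directly; the premium and real-return values are, as in the text, assumptions.

    forward_rate = 0.0732         # two-year forward rate, f(2), from the example
    liquidity_premium = 0.010     # assumed normal premium for this horizon
    expected_real_return = 0.015  # assumed expected one-year real return

    implied_inflation = forward_rate - liquidity_premium - expected_real_return
    print(implied_inflation)      # 0.0482, i.e. 4.82%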
{"url":"http://www.stanford.edu/~wfsharpe/mia/prc/mia_prc4.htm","timestamp":"2014-04-19T02:02:54Z","content_type":null,"content_length":"24405","record_id":"<urn:uuid:593acdcc-06e5-4779-b12a-31c4267346d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Correlation of 1 . . . too good to be true? - Statistical Modeling, Causal Inference, and Social Science

Alex Hoffman points me to this interview by Dylan Matthews of education researcher Thomas Kane, who at one point says,

Once you corrected for measurement error, a teacher's score on their chosen videos and on their unchosen videos were correlated at 1. They were perfectly correlated.

Hoffman asks, "What do you think? Do you think that just maybe, perhaps, it's possible we ought to consider, I'm just throwing out the possibility that it might be that the procedure for correcting measurement error might, you know, be a little too strong?"

I don't know exactly what's happening here, but it might be something that I've seen on occasion when fitting multilevel models using a point estimate for the group-level variance. It goes like this: measurement-error models are multilevel models, they involve the estimation of a distribution of a latent variable. When fitting multilevel models, it is possible to estimate the group-level variance to be zero, even though the group-level variance is not zero in real life. We have a penalized-likelihood approach to keep the estimate away from zero (see this paper, to appear in Psychometrika) but this is not yet standard in computer packages. The result is that in a multilevel model you can get estimates of zero variance or perfect correlations because the variation in the data is less than its expected value under the noise model. With a full Bayesian approach, you'd find the correlation could take on a range of possible values, it's not really equal to 1.

10 Comments

1. I don't know what particular procedure Kane used, but in psychology applications I've often seen people correct for error by dividing a correlation by the square root of the product of the measures' reliability coefficients (usually Cronbach's alpha). See here: http://en.wikipedia.org/wiki/Correction_for_attenuation The problem is that if Cronbach's alpha (or whatever reliability procedure you are using) gives an underestimate of a measure's reliability and/or an inappropriate way to estimate it (such as if the measure does not perfectly fit the model of only one source of common variance plus random errors, as defined in classical test theory), you'll get unbelievably large "disattenuated" correlations. Again, I haven't read Kane's paper so I don't know what he actually did. But in a social-judgment task like what's being described in the article, if you have the same judges rating both the "chosen" video set and the "unchosen" video set, you could easily have judge-specific factors contributing to their ratings. E.g., Judge A and Judge B each has a distinct way of judging videos which they apply to both the chosen and unchosen videos that they rate. That would lower the between-judge agreement on any given video set but increase the correlation between video sets. Which would over-inflate the "disattenuated" correlations.

□ I came here to say this. It's also worth adding that it makes assumptions about the measurement, and if you test those assumptions, they're never satisfied.

2. It must have taken great self-control to type this without mentioning the "8 schools" SAT coaching analysis in Bayesian Data Analysis.

3. [...] See full story on andrewgelman.com [...]

4. Re regularized variance estimates: This is very very important, and I wish people would stop publishing model fits with zero variance estimates and/or -1/+1 correction estimates.
On that topic: What are the future plans for blme? blme is great when I want to help people who probably aren't quite ready to just go all Bayes and forget about point estimates. lme4 is getting a big update soon, which changes the internals quite a lot, so I'm guessing blme will either fade away or eventually get updated.

□ *correlation estimates

□ When lme4 is updated, we will update blme.

☆ Andrew: I think my postdoc contacted Vince Dorie in the last day or two to say that we have done some internal reorganization of (the development version of) lme4 to try to make it more modular and easier to build extensions (like blme) on top of, and would appreciate feedback from downstream package developers …

○ Ben: Another option is we could talk about including blme as part of lme4. That is, instead of having the new function blmer/bglmer, you could add some additional arguments to lmer/glmer to allow for prior distributions, with some setting so that the user could get the blmer/bglmer defaults as an easy option. If this were all in the standard lmer/glmer function, that would make it accessible to more people. And I don't think it should be too hard on your end, given that it's only a small modification to your existing functions (just adding in a penalty right before the computation of the marginal likelihood). We could talk about adding sim() into your package too!

5. What gets me is mostly substantive. We have a high-prestige policy-maker making specific recommendations to street-level bureaucrats based upon literally impossible-to-believe statistical findings that demonstrate no understanding of how the relevant policies work.

* Despite Kane's implications, the goal of teacher evaluation is NOT to rank teachers. So, it's not clear where he is getting that.

* Despite Kane's implication, teacher evaluation has long been multi-dimensional. A binary composite/summary score has long been required, but we are moving away from that. Nationally, districts (and states) are moving towards either Charlotte Danielson's or Robert Marzano's frameworks for teacher evaluation. Evaluators (e.g. school principals) have to rate teachers on a variety of aspects of their teaching. Even if (ha!) the rankings stayed consistent, how well teachers demonstrate particular aspects of their skill and craft could vary.

* Clearly, there has been some overcorrection and therefore LOSS of information through the statistical techniques used. This calls all of the findings into question. So long as you are willing to lose information, we can come up with any result you want.

For such a high-profile study into such an important topic to be content with such obviously lousy results is — quite frankly — shocking. To brag about it? It costs the entire effort enormous amounts of credibility. Or, it should. But it won't. I know. Journalists lack statistical knowledge or substantive knowledge of the field in question. Folks who understand statistics lack substantive knowledge of the field, and folks who are expert in this field lack substantive knowledge of statistics. When high-level policy-makers and politicians lead research, how much are we going to sacrifice to support their hopes, dreams and delusions? How much are we going to let their ignorance and agendas run the show?
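Returning to the correction-for-attenuation formula mentioned in the first comment, here is a tiny illustration of how an understated reliability can push the "disattenuated" correlation to 1 or beyond; all numbers are invented for the example.

    import math

    r_observed = 0.75        # observed correlation between the two video scores (made up)
    alpha_chosen = 0.70      # reliability estimates for each measure (made up);
    alpha_unchosen = 0.80    # if these understate the true reliabilities...

    r_corrected = r_observed / math.sqrt(alpha_chosen * alpha_unchosen)
    print(r_corrected)       # ~1.00 (in fact slightly above 1), despite a modest raw correlation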
{"url":"http://andrewgelman.com/2013/02/25/correlation-of-1-too-good-to-be-true/","timestamp":"2014-04-17T15:26:07Z","content_type":null,"content_length":"34763","record_id":"<urn:uuid:2e8199bb-1a57-45cb-aa41-eab49f5a9b01>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
CompactCalc 4.2.9

CompactCalc is an enhanced scientific calculator for Windows with an editor. It embodies generic floating-point routines, hyperbolic and transcendental routines. Its underlying implementation encompasses high precision, sturdiness and multi-functionality. With the brilliant designs and powerful features of CompactCalc, you can bring spectacular results to your calculating routines. CompactCalc features include the following:

* You can build linear, polynomial and nonlinear equation sets. You are not limited by the size or the complexity of your mathematical expressions.
* Scientific calculations with unlimited, parenthesis-compatible nesting for expressions.
* Accurate result display - features up to 24 digits after the decimal point for scientific calculations.
* Calculation range (1.79E-308, 2.22E308).
* Comprehensive documentation.
* CompactCalc has almost a hundred physical and mathematical constants built in, which can be easily accessed and used in calculations. No longer do you have to search the physics textbook for that common physical constant data.
* Possibility to enter mathematical formulas with the keyboard as well as with calculator buttons.
* The interface is straightforward and very easy to navigate through.
{"url":"http://www.padtube.com/CompactCalc/10-78114.html","timestamp":"2014-04-19T11:57:47Z","content_type":null,"content_length":"42085","record_id":"<urn:uuid:e19ee103-99a6-42d7-be55-7053d22f0b78>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Laura R. Novick Thinking and Reasoning with Diagrams Research on Spatial Diagram Representations According to Duncker (1935/1945, p. 1), "a problem arises when a living creature has a goal but does not know how this goal is to be reached. Whenever one cannot go from the given situation to the desired situation simply by action [i.e., by the performance of obvious operations], then there has to be recourse to thinking." Novick and Bassok (2005) argue that to understand this thinking, researchers must distinguish between representations and solution procedures. Solvers typically begin their problem-solving efforts by trying to understand, at least at a rudimentary level, the underlying structure of the problem. In doing so, they construct some type of problem representation, which may be either internal or external (or both). Then they perform operations on the information in the representation in an attempt to get from the given situation to the goal (i.e., to determine the solution). That is, they apply a solution procedure. Although these processes are clearly interrelated, it is possible to study their separate contributions to the success of problem solving and reasoning. I am interested in students' knowledge and use of three, external, spatial diagram representations -- matrices, networks (i.e., path diagrams), and hierarchies (i.e., trees) -- that are important tools for thinking both in everyday situations and in formal domains (Novick, 2001). Like other abstract diagrams, much of their usefulness can be attributed to three sources: (a) They simplify the complex, (b) they make the abstract more concrete, and (c) they substitute easier perceptual inferences for more computationally-intensive search processes and sentential deductive inferences. It is not surprising, therefore, that numerous researchers have shown that using abstract diagrams, including the three spatial diagrams, often facilitates learning and problem solving. An important advantage of these types of representations is that they are applicable across a wide variety of contexts. That is, they highlight structural commonalities across situations that are superficially quite different. For example, a hierarchy can be used to represent evolutionary relationships among a set of taxa, a basketball tournament, or a corporate power structure. Similarly, a network can be used to represent the flight paths for an airline, the friendships between people at a conference, or the (hypothesized) structure of semantic memory. More generally, these types of representations are useful for solving a wide variety of problems involving analytical (including mathematical) reasoning. By successfully constructing (appropriate) spatial diagram representations, problem solvers would be led to see deep similarities among diverse problems that otherwise might not be salient. There is a large literature documenting the importance of structural understanding as a key factor underlying expertise. The Nature of Students' Knowledge It is critical to study solvers' knowledge of spatial diagram representations to fully understand their use of such representations to support reasoning and problem solving. One source of information about appropriate spatial diagram representations for different types of problem structures is examples of specific situations in which these types of representations have proven useful in the past (e.g., matrices are used for multiplication tables, seating charts, time schedules, police maps of a city, etc.). 
That is, besides transferring a sequence of solution operators from an example problem to a related test problem, solvers might transfer the type of representation used to describe the problem's underlying structure. My early research on spatial diagram representations examined the processes involved in and the factors affecting representational transfer in comparison to procedural transfer (Novick, 1990; Novick & Hmelo, 1994). The most important difference between these two types of transfer is that in procedural transfer the test problem's solution procedure is constructed by adapting the procedure from the example problem, whereas in representational transfer the representation for the test problem appears to be constructed from scratch (rather than by adapting the specific representation provided with the example problem). This result brings into focus the importance of studying the process of representation construction, a topic that is considered below. A second source of information also is available to solvers to help guide their attempts to select and construct (appropriate) spatial diagram representations. The results of several experiments (Novick, Hurley, & Francis, 1999) suggest that college students have at least rudimentary abstract, rule-based knowledge concerning the applicability conditions for the three spatial diagrams. The results of a more recent study in which subjects had to choose the most appropriate type of spatial diagram for scenarios written in a specific content domain versus completely abstractly provide more direct evidence that students' representation selections are based, at least in part, on abstract, rule-based knowledge (Hurley & Novick, 2006). We found that subjects' representation selections were as accurate for scenarios that were written using completely abstract language as for scenarios that described a specific concrete situation involving familiar objects and concepts. In a follow-up study in which subjects provided think-aloud protocols while selecting diagrams for the abstractly-worded scenarios, we found that students almost never referred to concrete, real world situations. In contrast, they often referred to abstract features of the diagrams. It is unclear at this point, however, which aspects of using spatial diagram representations to support problem solving and reasoning rely more on abstract, rule-based knowledge about these representations and which rely more on the details of specific examples encountered previously. Diagrams are among the oldest preserved examples of written mathematics. The past 15 years or so has seen increased attention in the mathematics education community to the goal of developing numeracy -- the mathematical counterpart to literacy -- among school children. Most educational theorists view representational, spatial, graphical, visual, or diagrammatic competence as included among the requisite skills that comprise numeracy. The National Council of Teachers of Mathematics (2000) standards emphasize that K-12 students need to obtain more sophisticated, and explicit, knowledge of culturally-significant types of abstract, mathematical diagrams -- including the three types of diagrams I have investigated in my research. In a recent manuscript (Novick, 2004), I proposed a model of diagram literacy that specifies six types of knowledge that students should possess to demonstrate diagrammatic competence -- implicit, construction, similarity, structural, metacognitive, and translational. 
I also reported the results of a study that examined the diagram literacy of students from three distinct populations -- pre-service, secondary level, math teachers; computer science majors; and typical undergraduates -- with respect to matrices, networks, and hierarchies. The results of the study are reassuring in some ways with respect to the level of diagram literacy exhibited by students at the culmination of current K-12 instruction, as well as that possessed by teachers of the upcoming generation of secondary students. However, the results also point to areas in which pre-service math teachers should be better prepared if the goals for diagram literacy proposed by the National Council of Teachers of Mathematics are to be met. Structural Analysis of Matrices, Networks, and Hierarchies Determining the appropriate representation to use for the situation at hand depends on assessing the degree of fit between the structure of the information to be represented and the structures of various representations. Just as hammers, screwdrivers, and wrenches work best in specific types of situations, so do matrices, networks, and hierarchies (see Novick, 2001, for further discussion of this analogy). It is critical, therefore, to specify the problem structures for which each type of spatial diagram is best suited. Novick and Hurley (2001) did just that. We hypothesized that 10 properties (e.g., global structure, link type, linking relations, traversal) distinguish when to use each of the three types of representations. Each representation has a value for each property, and these property values constitute the hypothesized applicability conditions for the representations. For example, matrices are particularly useful when (a) all possible combinations of the items across two sets must be considered, (b) the links between items in the different sets are non-directional (i.e., purely associative), and (c) it is important to be able to explicitly mark pairs of items that cannot be linked. In contrast, networks are particularly useful when (a) there are no constraints on which items may be linked, (b) the links between items are many-to-many, and (c) multiple different routes may sometimes be followed to travel between two particular items. Finally, hierarchies are particularly useful when (a) the items are distinguished according to different levels, (b) the links between items are either one-to-many or many-to-one (but not both), and (c) only one route exists between any two items. The results of a study in which subjects had to verbally justify their selection of the most appropriate type of representation to use for each of 18 scenarios (all of which were set in a vaguely medical context) provided good support for the structural analysis (Novick & Hurley, 2001), both at the level of the properties and at the level of the individual property values (i.e., the applicability conditions). This support came from an analysis of subjects' representation choices, as well as detailed analyses of their verbal justifications. The results of this study also provided important preliminary information concerning how students' knowledge of the structural properties is organized in memory and how the organization varies as a function of expertise (Novick, 2001; Novick & Hurley, 2001). One limitation of this study, however, is that verbal protocols are useful with respect to what subjects say, but no firm conclusions can be drawn from what they fail to say. 
With respect to the structural analysis of the three spatial diagrams, this limitation leaves two important questions unanswered. The first concerns the status of the (9 of 30) proposed property values that were mentioned either not at all or only by a tiny handful of students: Do these property values constitute applicability conditions for the representations or not? The second question concerns the relative importance or diagnosticity of the applicability conditions; presumably, not all applicability conditions are equally diagnostic cues for the use of a particular type of diagram. For example, the fact that the items in a particular situation are organized into levels is probably a more diagnostic cue for a hierarchy representation than is the fact that there are directional links between the items. Two recent studies addressed these issues (Novick, 2006b). In one study, subjects at different levels of expertise (typical undergraduates and advanced computer science majors) were asked to rate the diagnosticity of each of the proposed applicability conditions with respect to each of the three types of representations. This task required students to discriminate among the three types of diagrams for each applicability condition. In a second study, students from these same two populations rated the diagnosticity of the applicability conditions for each representation separately. This task required students to discriminate among the applicability conditions for a particular type of diagram. For example, subjects were given a list of all the proposed applicability conditions for the hierarchy representation, and they had to rate how diagnostic each of those property values is for that type of diagram. The results of these two studies validated 24-26 of the 30 hypothesized applicability conditions and provided evidence regarding the relative importance, or diagnosticity, of the validated properties for each type of diagram. A different set of properties was identified as most highly diagnostic for each type of diagram, indicating that the three spatial diagrams are optimized to serve different representational functions: The matrix stores static information about the kind of relation that exists between pairs of items in different sets, the network conveys dynamic information by showing the local connections and global routes connecting the items being represented, and the hierarchy depicts a rigid structure of power or precedence relations among items. Constructing and Reasoning from Matrices, Networks, and Hierarchies The applicability conditions specify which type of diagram best fits the structure of a particular problem. But selecting the right type of diagram is only the first step. Next, one must draw that diagram so that it accurately and efficiently represents the given situation (Novick, 2001). For his dissertation, my graduate student, Sean Hurley, conducted three studies concerning diagram construction and use (Hurley & Novick, in press). We generated several hypotheses concerning the conventions for mapping problem information onto matrix, network, and hierarchy diagrams (e.g., objects are mapped onto nodes in networks and hierarchies, and onto rows/columns in matrices). In two experiments, we tested the validity of the hypothesized conventions by analyzing (a) college students' verbal descriptions of how they would construct these diagrams and (b) their drawings of these diagrams. 
Strong support was found for the conventions hypothesized to be associated with constructing all three diagrams. In Experiment 3, we examined the importance of the conventions for successful and efficient diagram use by evaluating students' reasoning time and accuracy when they answered questions using diagrams that either followed or violated the hypothesized conventions. Strong effects of convention adherence on reasoning time and accuracy were found for matrices and networks, but not for hierarchies. Research on Other Types of Diagrams Investigating Students' Understanding of Evolutionary Diagrams Abstract diagrams are important not only in mathematics but in science as well (Novick, 2006a). In biology, hierarchical diagrams are especially common. For the past several years, I have been working with Kefyn Catley, a biologist and science educator at Western Carolina University, to investigate college students' understanding of cladograms, the most important tool that contemporary scientists use to reason about evolutionary relationships. A cladogram is a type of hierarchical diagram that depicts the distribution of characters (i.e., physical, molecular, and behavioral characteristics) among a set of taxa. More information about this research may be found here. Assembly Diagrams In a project conducted in collaboration with Doug Morse, I investigated the role of iconic diagrams in facilitating the execution of a set of procedural instructions. Although diagrams are ubiquitous in instructions for assembling a wide variety of objects (e.g., bicycles, bookcases, Lego vehicles), there is surprisingly little research on the role of diagrams in supporting object assembly. We examined this issue in the domain of origami, a Japanese paper-folding task enjoyed by both children and adults. This research had two principle aims. First, it extended research on diagrams as instructional aids to the case of object assembly. Second, it provided new data on the effectiveness of adding step-by-step versus completed-object diagrams to text instructions as a function of the difficulty of the assembly task. We hypothesized that completed-object diagrams are primarily helpful in situations in which the steps needed to construct the objects can easily be extracted mentally from the diagrams. In such situations, a completed-object diagram can substitute for a large number of step-by-step diagrams. When the individual steps cannot easily be extracted from the completed-object diagram, however, the diagram will have little or no benefit for object assembly. The results of three experiments (Novick & Morse, 2000) supported these predictions. Learning a Visual Computer Language In another project, Kirsten Whitley, Doug Fisher, and I examined the effectiveness of the visual representation in LabVIEW, a visual programming language (Whitley, Novick, & Fisher, 2006) . We compared the performance of students taught a subset of the LabVIEW language, with its circuit-diagram type of representation, to that of students taught a textual equivalent of that language. Performance was assessed on three types of problems. For the tracing problems, students were given program code and had to evaluate what output would be produced given certain input. For the parallelism problems, students had to determine which functions could execute immediately following a designated function. 
For the debugging problems, students were given a description of the function the code was supposed to compute, as well as some code ostensibly written to perform that function. Students had to locate and identify the (single) bug in the code. For the tracing problems, accuracy was comparable in the two representation conditions. The high level of accuracy in both conditions indicates that our instruction was effective. For the parallelism and debugging problems, however, the students in the visual condition were reliably more accurate than those in the textual condition. The better performance of the visual subjects on the parallelism problems is consistent with the power of diagrammatic representations to make readily accessible information that must be laboriously inferred from equivalent textual representations. The results for the debugging problems indicate that, as in other domains, (a) the visual representation facilitated global understanding, and (b) the advantage of the visual representation was largest for the most difficult problems. Publications Hurley, S. M., & Novick, L. R. (2010). Solving problems using matrix, network, and hierarchy diagrams: The consequences of violating construction conventions. The Quarterly Journal of Experimental Psychology, 63, 275-290. Hurley, S. M., & Novick, L. R. (2006). Context and structure: The nature of students' knowledge about three spatial diagram representations. Thinking & Reasoning, 12, 281-308. Novick, L. R. (2006b). Understanding spatial diagram structure. The Quarterly Journal of Experimental Psychology, 59, 1826-1856. Novick, L. R. (2006a). The importance of both diagrammatic conventions and domain-specific knowledge for diagram literacy in science: The hierarchy as an illustrative case. In D. Barker-Plummer, R. Cox, & N. Swoboda (Eds.), Diagrams 2006, LNAI 4045 (pp. 1-11). Berlin: Springer-Verlag. Whitley, K. N., Novick, L. R., & Fisher, D. (2006). Evidence in favor of visual representation for the dataflow paradigm: An experiment testing LabVIEW's comprehensibility. International Journal of Human-Computer Studies, 64, 281-303. Novick, L. R., & Bassok, M. (2005). Problem solving. In K. J. Holyoak & R. G. Morrison (Eds.), Cambridge handbook of thinking and reasoning (Ch. 14, pp. 321-349). New York, NY: Cambridge University Novick, L. R. (2004). Diagram literacy in pre-service math teachers, computer science majors, and typical undergraduates: The case of matrices, networks, and hierarchies. Mathematical Thinking and Learning, 6, 307-342. Novick, L. R., & Hurley, S. M. (2001). To matrix, network, or hierarchy, that is the question. Cognitive Psychology, 42, 158-216. Novick, L. R. (2001). Spatial diagrams: Key instruments in the toolbox for thought. In D. L. Medin (Ed.), The psychology of learning and motivation (Vol. 40, pp. 279-325). San Diego, CA: Academic Novick, L. R., & Morse, D. L. (2000). Folding a fish, making a mushroom: The role of diagrams in executing assembly procedures. Memory & Cognition, 28, 1242-1256. Novick, L. R., Hurley, S. M., & Francis, M. D. (1999). Evidence for abstract, schematic knowledge of three spatial diagram representations. Memory & Cognition, 27, 288-308. Novick, L. R., & Hmelo, C. E. (1994). Transferring symbolic representations across nonisomorphic problems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1296-1321. Novick, L. R. (1990). Representational transfer in problem solving. Psychological Science, 1, 128-132.
{"url":"http://www.vanderbilt.edu/peabody/novick/diag_research.html","timestamp":"2014-04-17T21:56:40Z","content_type":null,"content_length":"23455","record_id":"<urn:uuid:d9b07e04-d8ec-4555-9007-ef032d9b5106>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Numeric.Dimensional -- Statically checked physical dimensions Bjorn Buckwalter, bjorn.buckwalter@gmail.com License: BSD3 = Summary = In this module we provide data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division of units, of which an incomplete set is provided. We limit ourselves to "Newtonian" physics. We do not attempt to accommodate relativistic physics in which e.g. addition of length and time would be valid. As far as possible and/or practical the conventions and guidelines of NIST's "Guide for the Use of the International System of Units (SI)" [1] are followed. Occasionally we will reference specific sections from the guide and deviations will be explained. = Disclaimer = Merely an engineer, the author doubtlessly uses a language and notation that makes mathematicians and physicist cringe. He does not mind constructive criticism (or darcs patches). The sets of functions and units defined herein are incomplete and reflect only the author's needs to date. Again, patches are welcome. The author has elected to keep the module detached from the standard(?) Haskell library hierarchy. In part because the module name space layout seems to be an open issue and in part because he is unsure where to fit it in. = Preliminaries = This module requires GHC 6.6 or later. We utilize multi-parameter type classes, phantom types, functional dependencies and undecidable instances (and possibly additional unidentified GHC extensions). Clients of the module are generally not required to use these extensions. > {-# LANGUAGE UndecidableInstances > , ScopedTypeVariables > , EmptyDataDecls > , MultiParamTypeClasses > , FunctionalDependencies > , FlexibleInstances > , TypeSynonymInstances > , FlexibleContexts > , GeneralizedNewtypeDeriving > #-} > {- | > Copyright : Copyright (C) 2006-2013 Bjorn Buckwalter > License : BSD3 > Maintainer : bjorn.buckwalter@gmail.com > Stability : Stable > Portability: GHC only? > Please refer to the literate Haskell code for documentation of both API > and implementation. > -} > module Numeric.Units.Dimensional > -- TODO discriminate exports, in particular Variants and Dims. > where > import Prelude > ( Show, Eq, Ord, Enum, Num, Fractional, Floating, RealFloat, Functor, fmap > , (.), flip, show, (++), undefined, otherwise, (==), String, unwords > , map, foldr, null, Integer > ) > import qualified Prelude > import Data.List (genericLength) > import Data.Maybe (Maybe (Just, Nothing), catMaybes) > import Numeric.NumType > ( NumType, NonZero, PosType, Zero, toNum, Sum > , Pos1, Pos2, pos2, Pos3, pos3 > ) > import qualified Numeric.NumType as N (Mul, Div) We will reuse the operators and function names from the Prelude. To prevent unpleasant surprises we give operators the same fixity as the Prelude. > infixr 8 ^, ^+, ^/, ** > infixl 7 *, / > infixl 6 +, - = Dimensional = Our primary objective is to define a data type that can be used to represent (while still differentiating between) units and quantities. There are two reasons for consolidating units and quantities in one data type. The first being to allow code reuse as they are largely subject to the same operations. 
The second being that it allows reuse of operators (and functions) between the two without resorting to occasionally cumbersome type classes. We call this data type 'Dimensional' to capture the notion that the units and quantities it represents have physical dimensions. > newtype Dimensional v d a = Dimensional a deriving (Eq, Ord, Enum) The type variable 'a' is the only non-phantom type variable and represents the numerical value of a quantity or the scale (w.r.t. SI units) of a unit. For SI units the scale will always be 1. For non-SI units the scale is the ratio of the unit to the SI unit with the same physical dimension. Since 'a' is the only non-phantom type we were able to define 'Dimensional' as a newtype, avoiding boxing at runtime. = The variety 'v' of 'Dimensional' = The phantom type variable v is used to distinguish between units and quantities. It should be one of the following: > data DUnit > data DQuantity For convenience we define type synonyms for units and quantities. > type Unit = Dimensional DUnit > type Quantity = Dimensional DQuantity The relationship between (the value of) a 'Quantity', its numerical value and its 'Unit' is described in 7.1 "Value and numerical value of a quantity" of [1]. In short a 'Quantity' is the product of a number and a 'Unit'. We define the '(*~)' operator as a convenient way to declare quantities as such a product. > (*~) :: Num a => a -> Unit d a -> Quantity d a > x *~ Dimensional y = Dimensional (x Prelude.* y) Conversely, the numerical value of a 'Quantity' is obtained by dividing the 'Quantity' by its 'Unit' (any unit with the same physical dimension). The '(/~)' operator provides a convenient way of obtaining the numerical value of a quantity. > (/~) :: Fractional a => Quantity d a -> Unit d a -> a > Dimensional x /~ Dimensional y = x Prelude./ y We give '*~' and '/~' the same fixity as '*' and '/' defined below. Note that this necessitates the use of parenthesis when composing units using '*' and '/', e.g. "1 *~ (meter / second)". > infixl 7 *~, /~ = The dimension 'd' of 'Dimensional' = The phantom type variable d encompasses the physical dimension of the 'Dimensional'. As detailed in [5] there are seven base dimensions, which can be combined in integer powers to a given physical dimension. We represent physical dimensions as the powers of the seven base dimensions that make up the given dimension. The powers are represented using NumTypes. For convenience we collect all seven base dimensions in a data type 'Dim'. > data Dim l m t i th n j where the respective dimensions are represented by type variables using the following convention. l -- Length m -- Mass t -- Time i -- Electric current th -- Thermodynamic temperature n -- Amount of substance j -- Luminous intensity We could have chosen to provide type variables for the seven base dimensions in 'Dimensional' instead of creating a new data type 'Dim'. However, that would have made any type signatures involving 'Dimensional' very cumbersome. By encompassing the physical dimension in a single type variable we can "hide" the cumbersome type arithmetic behind convenient type classes as will be seen later. Using our 'Dim' data type we define some type synonyms for convenience and illustrative purposes. We start with the base dimensions. 
> type DOne = Dim Zero Zero Zero Zero Zero Zero Zero > type DLength = Dim Pos1 Zero Zero Zero Zero Zero Zero > type DMass = Dim Zero Pos1 Zero Zero Zero Zero Zero > type DTime = Dim Zero Zero Pos1 Zero Zero Zero Zero > type DElectricCurrent = Dim Zero Zero Zero Pos1 Zero Zero Zero > type DThermodynamicTemperature = Dim Zero Zero Zero Zero Pos1 Zero Zero > type DAmountOfSubstance = Dim Zero Zero Zero Zero Zero Pos1 Zero > type DLuminousIntensity = Dim Zero Zero Zero Zero Zero Zero Pos1 Using the above type synonyms we can define type synonyms for quantities of particular physical dimensions. Quantities with the base dimensions. > type Dimensionless = Quantity DOne > type Length = Quantity DLength > type Mass = Quantity DMass > type Time = Quantity DTime > type ElectricCurrent = Quantity DElectricCurrent > type ThermodynamicTemperature = Quantity DThermodynamicTemperature > type AmountOfSubstance = Quantity DAmountOfSubstance > type LuminousIntensity = Quantity DLuminousIntensity = Arithmetic on physical dimensions = When performing arithmetic on units and quantities the arithmetics must be applied to both the numerical values of the Dimensionals but also to their physical dimensions. The type level arithmetic on physical dimensions is governed by multi-parameter type classes and functional dependences. Multiplication of dimensions corresponds to adding of the base dimensions' exponents. > class Mul d d' d'' | d d' -> d'' > instance (Sum l l' l'', > Sum m m' m'', > Sum t t' t'', > Sum i i' i'', > Sum th th' th'', > Sum n n' n'', > Sum j j' j'') => Mul (Dim l m t i th n j) > (Dim l' m' t' i' th' n' j') > (Dim l'' m'' t'' i'' th'' n'' j'') Division of dimensions corresponds to subtraction of the base dimensions' exponents. > class Div d d' d'' | d d' -> d'' > instance (Sum l l' l'', > Sum m m' m'', > Sum t t' t'', > Sum i i' i'', > Sum th th' th'', > Sum n n' n'', > Sum j j' j'') => Div (Dim l'' m'' t'' i'' th'' n'' j'') > (Dim l' m' t' i' th' n' j') > (Dim l m t i th n j) We could provide the 'Mul' and 'Div' classes with full functional dependencies but that would be of limited utility as there is no obvious use for "backwards" type inference and would also limit what we can achieve overlapping instances. (In particular, it breaks the 'Extensible' module.) We limit ourselves to integer powers of Dimensionals as fractional powers make little physical sense. Since the value of the exponent affects the type of the result the value of the exponent must be visible to the type system, therefore we will generally represent the exponent with a 'NumType'. Powers of dimensions corresponds to multiplication of the base dimensions' exponents by the exponent. > class (NumType x) => Pow d x d' | d x -> d' > instance (N.Mul l x l', > N.Mul m x m', > N.Mul t x t', > N.Mul i x i', > N.Mul th x th', > N.Mul n x n', > N.Mul j x j') => Pow (Dim l m t i th n j) x > (Dim l' m' t' i' th' n' j') Roots of dimensions corresponds to division of the base dimensions' exponents by order(?) of the root. > class (NonZero x) => Root d x d' | d x -> d' > instance (N.Div l x l', > N.Div m x m', > N.Div t x t', > N.Div i x i', > N.Div th x th', > N.Div n x n', > N.Div j x j') => Root (Dim l m t i th n j) x > (Dim l' m' t' i' th' n' j') = Arithmetic on units and quantities = Thanks to the arithmetic on physical dimensions having been sorted out separately a lot of the arithmetic on Dimensionals is straight forward. In particular the type signatures are much simplified. 
Multiplication, division and powers apply to both units and quantities. > (*) :: (Num a, Mul d d' d'') > => Dimensional v d a -> Dimensional v d' a -> Dimensional v d'' a > Dimensional x * Dimensional y = Dimensional (x Prelude.* y) > (/) :: (Fractional a, Div d d' d'') > => Dimensional v d a -> Dimensional v d' a -> Dimensional v d'' a > Dimensional x / Dimensional y = Dimensional (x Prelude./ y) > (^) :: (Fractional a, Pow d n d') > => Dimensional v d a -> n -> Dimensional v d' a > Dimensional x ^ n = Dimensional (x Prelude.^^ (toNum n :: Integer)) In the unlikely case someone needs to use this library with non-fractional numbers we provide the alternative power operator '^+' that is restricted to positive exponents. > (^+) :: (Num a, PosType n, Pow d n d') > => Dimensional v d a -> n -> Dimensional v d' a > Dimensional x ^+ n = Dimensional (x Prelude.^ (toNum n :: Integer)) A special case is that dimensionless quantities are not restricted to integer exponents. This is accommodated by the '**' operator defined later. = Quantity operations = Some additional operations obviously only make sense for quantities. Of these, negation, addition and subtraction are particularly simple as they are done in a single physical dimension. > negate :: (Num a) => Quantity d a -> Quantity d a > negate (Dimensional x) = Dimensional (Prelude.negate x) > (+) :: (Num a) => Quantity d a -> Quantity d a -> Quantity d a > Dimensional x + Dimensional y = Dimensional (x Prelude.+ y) > (-) :: (Num a) => Quantity d a -> Quantity d a -> Quantity d a > x - y = x + negate y Absolute value. > abs :: (Num a) => Quantity d a -> Quantity d a > abs (Dimensional x) = Dimensional (Prelude.abs x) Roots of arbitrary (integral) degree. Appears to occasionally be useful for units as well as quantities. > nroot :: (Floating a, Root d n d') => n -> Dimensional v d a -> Dimensional v d' a > nroot n (Dimensional x) = Dimensional (x Prelude.** (1 Prelude./ toNum n)) We provide short-hands for the square and cubic roots. > sqrt :: (Floating a, Root d Pos2 d') => Dimensional v d a -> Dimensional v d' a > sqrt = nroot pos2 > cbrt :: (Floating a, Root d Pos3 d') => Dimensional v d a -> Dimensional v d' a > cbrt = nroot pos3 We also provide an operator alternative to nroot for those that prefer such. > (^/) :: (Floating a, Root d n d') => Dimensional v d a -> n -> Dimensional v d' a > (^/) = flip nroot = List functions = Here we define operators and functions to make working with homogenuous lists of dimensionals more convenient. We define two convenience operators for applying units to all elements of a functor (e.g. a list). > (*~~) :: (Functor f, Num a) => f a -> Unit d a -> f (Quantity d a) > xs *~~ u = fmap (*~ u) xs > (/~~) :: (Functor f, Fractional a) => f (Quantity d a) -> Unit d a -> f a > xs /~~ u = fmap (/~ u) xs > infixl 7 *~~, /~~ The sum of all elements in a list. > sum :: forall d a . Num a => [Quantity d a] -> Quantity d a > sum = foldr (+) _0 The length of the list as a 'Dimensionless'. This can be useful for purposes of e.g. calculating averages. > dimensionlessLength :: Num a => [Dimensional v d a] -> Dimensionless a > dimensionlessLength = Dimensional . genericLength = Dimensionless = For dimensionless quantities pretty much any operation is applicable. We provide this freedom by making 'Dimensionless' an instance of 'Functor'. > instance Functor Dimensionless where > fmap f (Dimensional x) = Dimensional (f x) We continue by defining elementary functions on 'Dimensionless' that may be obviously useful. 
> exp, log, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh > :: (Floating a) => Dimensionless a -> Dimensionless a > exp = fmap Prelude.exp > log = fmap Prelude.log > sin = fmap Prelude.sin > cos = fmap Prelude.cos > tan = fmap Prelude.tan > asin = fmap Prelude.asin > acos = fmap Prelude.acos > atan = fmap Prelude.atan > sinh = fmap Prelude.sinh > cosh = fmap Prelude.cosh > tanh = fmap Prelude.tanh > asinh = fmap Prelude.asinh > acosh = fmap Prelude.acosh > atanh = fmap Prelude.atanh > (**) :: (Floating a) > => Dimensionless a -> Dimensionless a -> Dimensionless a > Dimensional x ** Dimensional y = Dimensional (x Prelude.** y) For 'atan2' the operands need not be dimensionless but they must be of the same type. The result will of course always be dimensionless. > atan2 :: (RealFloat a) > => Quantity d a -> Quantity d a -> Dimensionless a > atan2 (Dimensional y) (Dimensional x) = Dimensional (Prelude.atan2 y x) The only unit we will define in this module is 'one'. The unit one has dimension one and is the base unit of dimensionless values. As detailed in 7.10 "Values of quantities expressed simply as numbers: the unit one, symbol 1" of [1] the unit one generally does not appear in expressions. However, for us it is necessary to use 'one' as we would any other unit to perform the "boxing" of dimensionless values. > one :: Num a => Unit DOne a > one = Dimensional 1 For convenience we define some small integer values and constants. The constant for zero is polymorphic as proposed by Douglas McClean (http://code.google.com/p/dimensional/issues/detail?id=39) allowing it to express zero Length or Capacitance or Velocity etc, in addition to the dimensionless value zero. > _0 :: (Num a) => Quantity d a > _0 = Dimensional 0 > _1, _2, _3, _4, _5, _6, _7, _8, _9 :: (Num a) => Dimensionless a > _1 = 1 *~ one > _2 = 2 *~ one > _3 = 3 *~ one > _4 = 4 *~ one > _5 = 5 *~ one > _6 = 6 *~ one > _7 = 7 *~ one > _8 = 8 *~ one > _9 = 9 *~ one For background on 'tau' see http://tauday.com/tau-manifesto (but also feel free to review http://www.thepimanifesto.com). > pi, tau :: (Floating a) => Dimensionless a > pi = Prelude.pi *~ one > tau = _2 * pi = Instances of 'Show' = We will conclude by providing a reasonable 'Show' instance for quantities. We neglect units since it is unclear how to represent them in a way that distinguishes them from quantities, or whether that is even a requirement. > instance forall d a. (Show d, Show a) => Show (Quantity d a) where > show (Dimensional x) = show x ++ if (null unit) then "" else " " ++ unit > where unit = show (undefined :: d) The above implementation of 'show' relies on the dimension 'd' being an instance of 'Show'. The "normalized" unit of the quantity can be inferred from its dimension. > instance forall l m t i th n j. > ( NumType l > , NumType m > , NumType t > , NumType i > , NumType th > , NumType n > , NumType j > ) => Show (Dim l m t i th n j) where > show _ = (unwords . catMaybes) > [ dimUnit "m" (undefined :: l) > , dimUnit "kg" (undefined :: m) > , dimUnit "s" (undefined :: t) > , dimUnit "A" (undefined :: i) > , dimUnit "K" (undefined :: th) > , dimUnit "mol" (undefined :: n) > , dimUnit "cd" (undefined :: j) > ] The helper function 'dimUnit' defined next conditions a 'String' (unit) with an exponent, if appropriate. The reason we define 'dimUnit' at the top-level rather than in the where-clause is that it may be useful for users of the 'Extensible' module. 
> dimUnit :: (NumType n) => String -> n -> Maybe String > dimUnit u n > | x == 0 = Nothing > | x == 1 = Just u > | otherwise = Just (u ++ "^" ++ show x) > where x = toNum n :: Integer = The 'prefix' function = We will define a 'prefix' function which applies a scale factor to a unit. The 'prefix' function will be used by other modules to define the SI prefixes and non-SI units. > prefix :: (Num a) => a -> Unit d a -> Unit d a > prefix x (Dimensional y) = Dimensional (x Prelude.* y) = Conclusion and usage = We have defined operators and units that allow us to define and work with physical quantities. A physical quantity is defined by multiplying a number with a unit (the type signature is optional). ] v :: Velocity Prelude.Double ] v = 90 *~ (kilo meter / hour) It follows naturally that the numerical value of a quantity is obtained by division by a unit. ] numval :: Prelude.Double ] numval = v /~ (meter / second) The notion of a quantity as the product of a numerical value and a unit is supported by 7.1 "Value and numerical value of a quantity" of [1]. While the above syntax is fairly natural it is unfortunate that it must violate a number of the guidelines in [1], in particular 9.3 "Spelling unit names with prefixes", 9.4 "Spelling unit names obtained by multiplication", 9.5 "Spelling unit names obtained by division". As a more elaborate example of how to use the module we define a function for calculating the escape velocity of a celestial body [2]. ] escapeVelocity :: (Floating a) => Mass a -> Length a -> Velocity a ] escapeVelocity m r = sqrt (two * g * m / r) ] where ] two = 2 *~ one ] g = 6.6720e-11 *~ (newton * meter ^ pos2 / kilo gram ^ pos2) The following is an example GHC session where the above function is used to calculate the escape velocity of Earth in kilometer per second. *Numeric.Dimensional> :set +t *Numeric.Dimensional> let me = 5.9742e24 *~ kilo gram -- Mass of Earth. me :: Quantity DMass GHC.Float.Double *Numeric.Dimensional> let re = 6372.792 *~ kilo meter -- Mean radius of Earth. re :: Quantity DLength GHC.Float.Double *Numeric.Dimensional> let ve = escapeVelocity me re -- Escape velocity of Earth. ve :: Velocity GHC.Float.Double *Numeric.Dimensional> ve /~ (kilo meter / second) 11.184537332296259 it :: GHC.Float.Double For completeness we should also show an example of the error messages we will get from GHC when performing invalid arithmetic. In the best case GHC will be able to use the type synonyms we have defined in its error messages. ] x = 1 *~ meter + 1 *~ second Couldn't match expected type `Pos1' against inferred type `Zero' Expected type: Unit DLength t Inferred type: Unit DTime a In the second argument of `(*~)', namely `second' In the second argument of `(+)', namely `1 *~ second' In other cases the error messages aren't very friendly. ] x = 1 *~ meter / (1 *~ second) + 1 *~ kilo gram Couldn't match expected type `Zero' against inferred type `Neg Zero' When using functional dependencies to combine Sub Zero (Pos Zero) (Neg Zero), arising from use of `/' at Numeric/ Dimensional.lhs:425:9-20 Sub Zero (Pos Zero) Zero, arising from use of `/' at Numeric/Dimensional.lhs:532:5-30 It is the author's experience that the usefullness of the compiler error messages is more often than not limited to pinpointing the location of errors. = Future work = While there is an insane amount of units in use around the world it is reasonable to provide at least all SI units. Units outside of SI will most likely be added on an as-needed basis. 
There are also plenty of elementary functions to add. The 'Floating' class can be used as reference. Another useful addition would be decent 'Show' and 'Read' instances. The 'show' implementation could output the numerical value and the unit expressed in (base?) SI units, along the lines of: ] instance (Fractional a, Show a) => Show (Length a) ] where show x = show (x /~ meter) ++ " m" Additional functions could be provided for "showing" with any unit and prefix. The 'read' implementation should be able to read values with any unit and prefix. It is not clear to the author how to best implement these. Additional physics models could be implemented. See [3] for ideas. = Related work = Henning Thielemann numeric prelude has a physical units library, however, checking of dimensions is dynamic rather than static. Aaron Denney has created a toy example of statically checked physical dimensions covering only length and time. HaskellWiki has pointers [4] to these. Also see Samuel Hoffstaetter's blog post [5] which uses techniques similar to this library. Libraries with similar functionality exist for other programming languages and may serve as inspiration. The author has found the Java library JScience [6] and the Fortress programming language [7] particularly noteworthy. = References = [1] http:// physics.nist.gov/Pubs/SP811/ [2] http://en.wikipedia.org/wiki/Escape_velocity [3] http://jscience.org/api/org/jscience/physics/models/package-summary.html [4] http://www.haskell.org/haskellwiki/ Physical_units [5] http://liftm.wordpress.com/2007/06/03/scientificdimension-type-arithmetic-and-physical-units-in-haskell/ [6] http://jscience.org/ [7] http://research.sun.com/projects/plrg/
{"url":"http://hackage.haskell.org/package/dimensional-0.12/docs/src/Numeric-Units-Dimensional.html","timestamp":"2014-04-24T17:40:42Z","content_type":null,"content_length":"90997","record_id":"<urn:uuid:026d39d0-87d6-4754-a23a-f85644600710>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
CHW3M World History
• Archimedes was born 2,293 years ago and lived from 287 BCE to 212 BCE. He was a native of Syracuse, Sicily.
• Archimedes was an influential mathematician of his time.
• His methods anticipated calculus 2,000 years before Newton and Leibniz developed it, and his ideas and concepts revolutionized geometry.
• He reportedly ran through the streets naked yelling "Eureka!" ("I've solved it").
Historical Significance
• Archimedes invented many important things, such as the pulley system, one of the most important inventions of all time, and the Archimedean screw, a pumping device that is an ancestor of the modern-day pump. This device is still used in many parts of the world.
• He perfected a method to help him find areas, volumes, and surface areas, and also gave an accurate approximation of π.
Above: Archimedes' experiment. Top left: Archimedes working on a plane equilibrium. Left: Statue of Archimedes. Bottom: Archimedes yelling "Eureka!"
{"url":"http://www.markville.ss.yrdsb.edu.on.ca/projects/classof2007/16chong/khan/peopleandeventstemplate.htm","timestamp":"2014-04-21T12:09:34Z","content_type":null,"content_length":"11568","record_id":"<urn:uuid:e872e42f-ee76-46d6-80df-93e997d2021b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric limit...
June 15th 2008, 02:55 PM
Trigonometric limit... How can I solve this? lim t -> 0 (5t sec4t / tan10t) I don't know what to do when the term is secant?
June 15th 2008, 03:20 PM
$\lim_{\varphi\to 0}\frac{5\varphi\sec(4\varphi)}{\tan(10\varphi)}$ Giving us $\lim_{\varphi\to 0}\frac{5\varphi\sec(4\varphi)}{10\varphi}=\lim_{\varphi\to 0}\frac{5\sec(4\varphi)}{10}=\frac{1}{2}$
June 15th 2008, 03:24 PM
I don't understand the second line?
June 15th 2008, 03:30 PM
This means that they are asymptotically equal as $\varphi\to 0$. Or in other words, $\lim_{\varphi\to 0}\frac{\tan(10\varphi)}{10\varphi}=1$. This can be easily proven. Let $\psi=10\varphi$. Now as $\varphi\to 0\Rightarrow\psi\to 0$, giving us $\lim_{\psi\to 0}\frac{\tan(\psi)}{\psi}=\lim_{\psi\to 0}\frac{\psi+\frac{\psi^3}{3}+\cdots}{\psi}=1$. $\therefore\lim_{\varphi\to 0}\frac{\tan(10\varphi)}{10\varphi}=1$, so $\tan(10\varphi)\sim 10\varphi$ as $\varphi\to 0$. Thus they are interchangeable, so I just replaced them. It's easier because you just know it; you could also use L'Hopital's or a Maclaurin series expansion.
June 15th 2008, 04:49 PM
Mathstud28, you may consider what level theowne is currently studying at; he may not be able to digest your post, so I suggest you first give a standard solution and after that give another one with the somewhat more advanced stuff. $\underset{t\to 0}{\mathop{\lim }}\,\frac{5t\sec 4t}{\tan 10t}=\frac{1}{2}\underset{t\to 0}{\mathop{\lim }}\,\frac{10t}{\tan 10t}\cdot \sec 4t.$ Now, consider $\underset{t\to 0}{\mathop{\lim }}\,\frac{t}{\tan t}=1,$ which is a little fact that you should know; besides, $\lim_{t\to 0}\sec 4t$ does exist, and its value is 1, hence the original limit is $\frac12.$
June 15th 2008, 05:58 PM
$\lim_{x\to 0}\frac{5x\sec(4x)}{\tan(10x)}=\lim_{x\to 0}\sec(4x)\cdot\lim_{x\to 0}\frac{5x}{\tan(10x)}=1\cdot\lim_{x\to 0}\frac{5x}{\tan(10x)}$ Now there are a couple of ways of going from here. $L=\lim_{x\to 0}\frac{5x}{\tan(10x)}\Rightarrow\frac{1}{L}=\lim_{x\to 0}\frac{\tan(10x)}{5x}$ Now extracting a five, seeing that $\tan(10\cdot 0)=0$, and knowing that subtracting zero is legal, we have $\frac{1}{L}=\frac{1}{5}\lim_{x\to 0}\frac{\tan(10x)-\tan(10\cdot 0)}{x-0}$ Now if you notice, this looks like the definition of the derivative at a point, so $\frac{1}{5}\lim_{x\to 0}\frac{\tan(10x)-\tan(10\cdot 0)}{x-0}=\frac{1}{5}\bigg(\tan(10x)\bigg)'\bigg|_{x=0}=\frac{1}{5}\bigg(10\sec^2(10x)\bigg)\bigg|_{x=0}=\frac{1}{5}\cdot 10\sec^2(0)=2$ So, knowing that $L$ is our original limit, $\frac{1}{L}=2\Rightarrow L=\frac{1}{2}$. Another possibility is L'Hopital's, so we have $\lim_{x\to 0}\frac{5x}{\tan(10x)}=\lim_{x\to 0}\frac{5}{10\sec^2(10x)}=\frac{5}{10\sec^2(0)}=\frac{1}{2}$ Another possibility is this: $\lim_{x\to 0}\frac{5x}{\tan(10x)}=\lim_{x\to 0}\frac{\cos(10x)\,5x}{\sin(10x)}=\lim_{x\to 0}\cos(10x)\cdot\lim_{x\to 0}\frac{5x}{\sin(10x)}=\lim_{x\to 0}\frac{5x}{\sin(10x)}$ Now we do this: $L=\lim_{x\to 0}\frac{5x}{\sin(10x)}\Rightarrow\frac{1}{2L}=\lim_{x\to 0}\frac{\sin(10x)}{10x}$ Now letting $\varphi=10x$, we see that as $x\to 0\Rightarrow\varphi\to 0$. So we would have $\frac{1}{2L}=\lim_{\varphi\to 0}\frac{\sin(\varphi)}{\varphi}$ I am sure you know the right-hand limit is one, so we have $\frac{1}{2L}=1\Rightarrow L=\frac{1}{2}$. Whew...there are a couple more but my fingers hurt (Giggle) I did these because Krizalid is right and I felt bad, so I decided to show you a couple of ways that will be at your level.
Hope this helps, sorry about the earlier confusion June 15th 2008, 06:04 PM I don't really understand what you did in the final post (apart from the l'hopitals rule, which we're not allowed to use). But thanks for the help. June 15th 2008, 06:09 PM
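As a side note added here (not part of the original thread), the answer is easy to sanity-check numerically; the short Haskell program below evaluates 5t sec(4t)/tan(10t) for shrinking t, and the printed values approach 1/2:

-- Numerical check of lim_{t->0} 5t*sec(4t)/tan(10t); the values should approach 0.5.
f :: Double -> Double
f t = 5 * t * (1 / cos (4 * t)) / tan (10 * t)

main :: IO ()
main = mapM_ (print . f) [0.1, 0.01, 0.001, 0.0001]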
{"url":"http://mathhelpforum.com/calculus/41632-trigonometric-limit-print.html","timestamp":"2014-04-24T10:08:04Z","content_type":null,"content_length":"18324","record_id":"<urn:uuid:2bb858d0-dc22-4474-bb58-01d18af4b306>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this right? Finding x-bar September 6th 2007, 02:39 PM Is this right? Finding x-bar I don't have an answer to check against but could someone let me know if I did this right? Formula for xbar: $x bar = \frac{My}{m}$ Finding m: $m = p \int_{\sqrt{3}}^{0} \frac{1}{x^2+1}$ $m = p ( arctan(\sqrt{3}))$ Finding My: $My = p \int_{0}^{\sqrt{3}} x * \frac{1}{x^2+1}$ $My = p \int_{0}^{\sqrt{3}} \frac{x}{x^2+1}$ $= p * \frac{1}{2} \int_{1}^{4} 1/u$ $= p * \frac{1}{2} ( ln|4| - ln|1|)$ $= p * \frac{1}{2} ln|4/1|$ $= p * \frac{1}{2} ln|4|$ x bar $x bar = \frac{ln|4|}{2arctan\sqrt{3}}$ September 6th 2007, 03:24 PM I don't have an answer to check against but could someone let me know if I did this right? Formula for xbar: $x bar = \frac{My}{m}$ Finding m: $m = p \int_{\sqrt{3}}^{0} \frac{1}{x^2+1}$ $m = p ( arctan(\sqrt{3}))$ Finding My: $My = p \int_{0}^{\sqrt{3}} x * \frac{1}{x^2+1}$ $My = p \int_{0}^{\sqrt{3}} \frac{x}{x^2+1}$ $= p * \frac{1}{2} \int_{1}^{4} 1/u$ $= p * \frac{1}{2} ( ln|4| - ln|1|)$ $= p * \frac{1}{2} ln|4/1|$ $= p * \frac{1}{2} ln|4|$ x bar $x bar = \frac{ln|4|}{2arctan\sqrt{3}}$ You mostly have it, but there are a few details. First m is negative (unless you reversed the integration limits.) Second, you can do better than $atn(\sqrt{3})$. What is this value? Third, you know that |4| = 4 so you can drop the absolute value. And how does $\frac{1}{2}ln(4)$ compare to the value of $ln(2)$?
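Following up on the hints in the reply (and taking the mass integral from $0$ to $\sqrt{3}$ so that $m$ comes out positive), the answer simplifies: since $\arctan\sqrt{3}=\frac{\pi}{3}$ and $\frac{1}{2}\ln 4=\ln 2$, we get $m = p\,\frac{\pi}{3}$, $My = p\ln 2$, and therefore $\bar{x} = \frac{My}{m} = \frac{3\ln 2}{\pi} \approx 0.66$, which is consistent with the region lying between $x=0$ and $x=\sqrt{3}\approx 1.73$.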
{"url":"http://mathhelpforum.com/calculus/18592-right-finding-x-bar-print.html","timestamp":"2014-04-16T12:04:14Z","content_type":null,"content_length":"10431","record_id":"<urn:uuid:98f8d575-4e7d-4aea-8672-8fe0c6e46578>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Egg Doubling Problem; UK & US: Naming Large Numbers
Date: 5/6/96 at 19:39:11
From: Roy Cole
Subject: Egg doubling problem
Dear Dr. Math, If you have rectangles that are twenty-three by eighty-nine, you have one egg. The first time you double you have two eggs. The second time you have four eggs. The third time you have eight. The sixtieth time you have a lot of eggs! How many rectangles would they fill if each egg is one unit?
Date: 10/24/96 at 10:13:17
From: Doctor Lynn
Subject: Re: Egg doubling problem
Hi Roy - After n doublings, you have 2^n eggs. On the 60th time you have 2^60 eggs. As each box holds 23 * 89 = 2047 eggs, the number of filled boxes is 2^60 divided by 2047. This can probably be solved analytically, especially since 2047 = 2^11 - 1, but it is much easier to use a clever computer program which can handle big numbers. This shows that in fact 563,224,965,611,552 or about 563 million million boxes are filled, with 32 eggs left in another unfilled box. Which is a lot of egg.
-Doctor Lynn, The Math Forum Check out our web site! http://mathforum.org/dr.math/
Date: 10/24/96 at 10:22:2
From: Doctor Ken
Subject: Re: Egg doubling problem
Hi Roy- I just wanted to point out something that might be a little confusing in the answer you just received from us. Doctor Lynn said that there were 563 million million boxes filled, and you may be wondering why he didn't just say 563 trillion boxes. Well, he's from the UK, and they actually have different names for the numbers over there. So to him the number 563,224,965,611,552 is about 563 billion, while to Americans it's about 563 trillion. You can see a more complete discussion on the naming of numbers at and at
-Doctor Ken, The Math Forum Check out our web site! http://mathforum.org/dr.math/
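As a quick illustration of the "clever computer program" approach (this sketch is an addition; it uses Haskell, whose built-in Integer type handles arbitrarily large whole numbers):

-- Number of full 23-by-89 boxes after 60 doublings, and the eggs left over.
main :: IO ()
main = print ((2 ^ 60) `divMod` (23 * 89))
-- prints (563224965611552,32): about 563 million million full boxes, with 32 eggs spare.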
{"url":"http://mathforum.org/library/drmath/view/57078.html","timestamp":"2014-04-21T15:42:55Z","content_type":null,"content_length":"7447","record_id":"<urn:uuid:d07d6c86-4be6-48dc-aac1-958112a9d383>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Wallington Statistics Tutor Find a Wallington Statistics Tutor ...I encourage questions since learning is always an interactive process. I’ve taught both high school and college including remedial courses at both levels. I have 30 years' experience in teaching, working with students from a wide variety of ethnic backgrounds as well as adult learners. 9 Subjects: including statistics, geometry, SAT math, algebra 1 ...I can tutor middle school, high school, and college level math, and do SAT prep as well.I have taken multiple courses in linear algebra and linear programming at the undergraduate and graduate levels and have received an A (4.0) in all of them. I can submit transcripts if necessary. I have a Ma... 15 Subjects: including statistics, calculus, geometry, algebra 1 ...As for my background, I have a master's degree in psychology, and when I am not dedicating myself toward my students' test day success, I am working toward a second master's in statistics. *For cost-conscious parents, if your son or daughter is available during off-peak hours or can meet me at ... 14 Subjects: including statistics, writing, GRE, algebra 1 I graduated from Texas Tech University with my Bachelor of Arts in Psychology and my Bachelor of Science in Exercise Sciences (physiology, biomechanics, sports psychology, etc.). I am a current MA student in the psychology program at Hunter College and have been involved in psychological research si... 3 Subjects: including statistics, SPSS, psychology ...I am a highly qualified biology tutor who helped students very successfully achieve a high grade. I am a PhD graduate from Weill Cornell Medical School. I have experience tutoring science (Math, Biology, Chemistry) at all levels, all ages, all skills. 18 Subjects: including statistics, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Wallington_statistics_tutors.php","timestamp":"2014-04-16T04:11:53Z","content_type":null,"content_length":"23997","record_id":"<urn:uuid:6f93515d-af63-462b-b33f-36df5a4e6fc2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Avondale, AZ Math Tutor Find an Avondale, AZ Math Tutor ...After one tutoring session, I know you will be delighted with the results. I have also written two books and numerous supplemental materials for Houghton Mifflin, Prentice Hall and others. I do all my tutoring at the library in Surprise on Bullard, across the street from the baseball stadium. 10 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...Lastly, I want everyone to know I do not give up and will give students my devotion to make sure they understand the material they news to understand to make sure they stay on their correct path of education. Education is the most important thing in succeeding in today's harsh economy.Algebra 1 ... 21 Subjects: including calculus, physics, statistics, Java ...My name is Kay. I offer individual or group tutoring for nursing, writing, math and various statistics courses - health care, psychology, research, bio, and business. I am a Registered Nurse licensed in the State of Arizona and pursuing my Master of Science degree in Nursing Leadership. 10 Subjects: including probability, statistics, public speaking, nursing ...Tutoring is not just about success in the classroom but rather about working with the individual to grow and acquire new skills. I view my goal as a tutor is to put myself out of a job by teaching students the skills for success in their given subject. Calculus is the first math class I fell in love with. 10 Subjects: including algebra 1, algebra 2, calculus, chemistry ...I started off with Computer Science and then added Mathematics graduating Summa Cum Laude and the top of my class. I teach in the local community colleges where students wished I had taught them math since elementary school and I found that I truly enjoyed teaching and tutoring as well. I enjoy... 16 Subjects: including calculus, chemistry, Microsoft Excel, Microsoft Word Related Avondale, AZ Tutors Avondale, AZ Accounting Tutors Avondale, AZ ACT Tutors Avondale, AZ Algebra Tutors Avondale, AZ Algebra 2 Tutors Avondale, AZ Calculus Tutors Avondale, AZ Geometry Tutors Avondale, AZ Math Tutors Avondale, AZ Prealgebra Tutors Avondale, AZ Precalculus Tutors Avondale, AZ SAT Tutors Avondale, AZ SAT Math Tutors Avondale, AZ Science Tutors Avondale, AZ Statistics Tutors Avondale, AZ Trigonometry Tutors
{"url":"http://www.purplemath.com/avondale_az_math_tutors.php","timestamp":"2014-04-21T02:15:45Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:a7fbd480-8ca1-481a-be65-5fc41a91b0cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Integral
June 29th 2007, 11:03 AM
In probability theory the important Gaussian distribution often leads to the integral $X=\int_{-\infty}^{\infty} e^{-x^2}\, dx$. In case you ever wondered where it comes from, here (informal but nice*) is a non-complex-analysis demonstration. We begin by considering the integral $Y=\iint_{\mathbb{R}^2} e^{-(x^2+y^2)} \, dA$. This is the integral over the entire plane $\mathbb{R}^2$. Now there are two ways to get the entire plane.
Method 1: For $a,b>0$ we can create a rectangle $-a\leq x\leq a \mbox{ and } -b\leq y\leq b$. Now we make $a\to \infty \mbox{ and } b\to \infty$. That infinite rectangle will take the entire plane. In other words, $Y=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)} \, dy \, dx$. Now, the important realization is that this double integral has "separated variables," meaning $f(x,y)=g(x)h(y)$, and we can write $Y=\int_{-\infty}^{\infty} e^{-x^2}\, dx \cdot \int_{-\infty}^{\infty} e^{-y^2}\, dy$. Hence, $Y = X^2 \implies X = \sqrt{Y}$ (because it cannot be negative).
Method 2: We notice the expression $x^2+y^2$ in the double integral and try a polar coordinates substitution. If we let $r^2 = x^2+y^2$, we can let $0\leq \theta \leq 2\pi$ and $r\to \infty$. In other words, we draw a circle at the origin with infinite radius, and this will take on values on the entire $\mathbb{R}^2$ plane. Thus, after a change to polar coordinates we end up with $Y = \int_0^{\infty} \int_0^{2\pi} e^{-r^2} \, r \, d\theta \, dr = \lim_{R\to \infty} 2\pi \int_0^R re^{-r^2}\,dr = 2\pi \left( \frac{1}{2} \right) = \pi$, since $\int_0^R re^{-r^2}\,dr = \left[ -\tfrac{1}{2} e^{-r^2} \right]_0^R \to \tfrac{1}{2}$ as $R \to \infty$. Thus, $Y=\pi$, which implies $X=\sqrt{\pi}$. And hence, $\int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}$.
*) I am sure it can be made formal, but I do not know how to do it because I am not familiar with the theory of real multiple integration. In my calculus textbook, a squeeze technique is used to prove the result via double integrals in polar coordinates.
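A quick numerical check (an addition to the post above): a simple midpoint-rule approximation of $\int_{-8}^{8} e^{-x^2}\,dx$ already agrees with $\sqrt{\pi}\approx 1.7724539$ to many decimal places, since the tails beyond $|x|=8$ are negligible.

-- Midpoint-rule approximation of the Gaussian integral on [-8, 8]; compare with sqrt pi.
approx :: Double
approx = sum [exp (-(x * x)) * h | k <- [0 .. n - 1], let x = -8 + (fromIntegral k + 0.5) * h]
  where
    n = 100000 :: Int
    h = 16 / fromIntegral n

main :: IO ()
main = print (approx, sqrt pi)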
{"url":"http://mathhelpforum.com/calculus/16390-probability-integral.html","timestamp":"2014-04-19T11:50:46Z","content_type":null,"content_length":"37709","record_id":"<urn:uuid:2f7b8765-9a14-41c1-842f-f9908f5359fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of fluid mechanics
In fluid mechanics, an ensemble is an imaginary collection of notionally identical experiments. Each member of the ensemble will have nominally identical boundary conditions and fluid properties. If the flow is turbulent, the details of the fluid motion will differ from member to member because the experimental setup will be microscopically different; these slight differences become magnified as time progresses. Members of an ensemble are, by definition, statistically independent of one another. The concept of ensemble is useful in thought experiments and to improve theoretical understanding of turbulence.
A good image to have in mind is a typical fluid mechanics experiment such as a mixing box. Imagine a million mixing boxes, distributed over the earth; at a predetermined time, a million fluid mechanics engineers each start one experiment, and monitor the flow. Each engineer then sends his or her results to a central database. Such a process would give results that are close to the theoretical ideal of an ensemble.
It is common to speak of ensemble average or ensemble averaging when considering a fluid mechanical ensemble. For a completely unrelated type of averaging, see Reynolds-averaged Navier-Stokes equations (the two types of averaging are often confused). The idea of the ensemble is discussed further in the article Statistical ensemble (mathematical physics).
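To make the averaging concrete, here is an illustrative sketch (not from the article itself): if a flow quantity is sampled at the same location and time in each of N statistically independent realizations, one per "mixing box", the ensemble average is simply the arithmetic mean over the realizations.

-- Ensemble average of a quantity sampled once per independent realization.
ensembleAverage :: [Double] -> Double
ensembleAverage samples = sum samples / fromIntegral (length samples)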
{"url":"http://www.reference.com/browse/fluid+mechanics","timestamp":"2014-04-16T05:55:28Z","content_type":null,"content_length":"79923","record_id":"<urn:uuid:53621f77-3a98-4bde-9561-67be60b6e789>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Orthogonal polynomials. Computation and approximation. (English) Zbl 1130.42300 Numerical Mathematics and Scientific Computation. Oxford: Oxford University Press (ISBN 0-19-850672-4/hbk). viii, 301 p. £ 55.00 (2004). The book by Walter Gautschi, the world renowned expert in computational methods and orthogonal polynomials, came out in the series “Numerical Mathematics and Scientific Computation”. The choice of topics reflects the author’s own research interests and involvement in the area, but it is hoped that the exposition will be acceptable and useful to a wide audience of readers. The computational methods in the theory of orthogonal polynomials on the real line, which constitute a backbone of the whole book, are treated in Chapter 2. The fundamental problem is to compute the first $n$ recursion coefficients ${\alpha }_{k}\left(d\lambda \right)$, ${\beta }_{k}\left(d\lambda \right)$, $k=0,1,\cdots ,n-1$, where $n$ is a typically large integer and $d\lambda$ a positive measure given either implicitly via moment information or explicitly. There is a simple algorithm due to Chebyshev that produces the desired coefficients in the former case but its effectiveness depends critically on the conditioning of the underlying problem. In the latter case discretization of the measure and subsequent approximation of the desired recursion coefficients by those relative to a discrete measure are applicable. Other problems calling for numerical methods are the evaluation of Cauchy integrals and the problem of passing from the recursion coefficients of a measure to those of a modified measure—the original measure multiplied by a rational function. In Chapter 1 a brief, but essentially self-contained account of the theory of orthogonal polynomials is presented with the focus on the parts of the theory most relevant to computation. The exposition combines nicely the standard topics such as three-term recurrence relations, Christoffel–Darboux formulae, quadrature rules and classical orthogonal polynomials as well as not quite traditional issues (kernel polynomials, Sobolev orthogonal polynomials and orthogonal polynomials on the semicircle). A number of applications, specifically numerical quadrature, discrete least squares approximation, moment-preserving spline approximation, and the summation of slowly convergent series is given in Chapter 3. Many tables throughout the book report on numerical results of various algorithms. 42-02 Research monographs (Fourier analysis) 33-02 Research monographs (special functions) 42C05 General theory of orthogonal functions and polynomials 33C45 Orthogonal polynomials and functions of hypergeometric type 33F05 Numerical approximation and evaluation of special functions
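For context, a standard fact not spelled out in the review: the recursion coefficients $\alpha_k(d\lambda)$, $\beta_k(d\lambda)$ mentioned above are the coefficients of the three-term recurrence satisfied by the monic orthogonal polynomials $\pi_k(\cdot;d\lambda)$,
$$\pi_{k+1}(x) = (x-\alpha_k)\,\pi_k(x) - \beta_k\,\pi_{k-1}(x), \qquad \pi_{-1}(x)=0,\quad \pi_0(x)=1,$$
with the usual convention $\beta_0 = \int_{\mathbb{R}} d\lambda(x)$.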
{"url":"http://zbmath.org/?q=an:02107939","timestamp":"2014-04-16T13:32:07Z","content_type":null,"content_length":"23476","record_id":"<urn:uuid:03b7c5f9-9eb7-4a84-9013-b0e775f0a5e1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Qualitative Analysis Very often it is almost impossible to find explicitly of implicitly the solutions of a system (specially nonlinear ones). The qualitative approach as well as numerical one are important since they allow us to make conclusions regardless whether we know or not the solutions. Recall what we did for autonomous equations. First we looked for the equilibrium points and then, in conjunction with the existence and uniqueness theorem, we concluded that non-equilibrium solutions are either increasing or decreasing. This is the result of looking at the sign of the derivative. So what happened for autonomous systems? First recall that the components of the velocity vectors are Example. Consider the model describing two species competing for the same prey Let us only focus on the first quadrant Algebraic manipulations imply The equilibrium points are (0,0), (0,2), (1,0), and Consider the region R delimited by the x-axis, the y-axis, the line 1-x-y=0, and the line 2-3x-y=0. Clearly inside this region neither In fact, looking at the first-quadrant, we have three more regions to add to the above one. The direction of the motion depends on what region we are in (see the picture below) The boundaries of these regions are very important in determining the direction of the motion along the trajectories. In fact, it helps to visualize the trajectories as slope-field did for autonomous equations. These boundaries are called nullclines. Consider the autonomous system The x-nullcline is the set of points where y-nullcline is the set of points where Example. Draw the nullclines for the autonomous system and the velocity vectors along them. The x-nullcline are given by which is equivalent to while the y-nullcline are given by which is equivalent to In order to find the direction of the velocity vectors along the nullclines, we pick a point on the nullcline and find the direction of the velocity vector at that point. The velocity vector along the segment of the nullcline delimited by equilibrium points which contains the given point will have the same direction. For example, consider the point (2,0). The velocity vector at this point is (-1,0). Therefore the velocity vector at any point (x,0), with x > 1, is horizontal (we are on the y-nullcline) and points to the left. The picture below gives the nullclines and the velocity vectors along them. In this example, the nullclines are lines. In general we may have any kind of curves. Example. Draw the nullclines for the autonomous system The x-nullcline are given by which is equivalent to while the y-nullcline are given by which is equivalent to Hence the y-nullcline is the union of a line with the ellipse Information from the nullclines For most of the nonlinear autonomous systems, it is impossible to find explicitly the solutions. We may use numerical techniques to have an idea about the solutions, but qualitative analysis may be able to answer some questions with a low cost and faster than the numerical technique will do. For example, questions related to the long term behavior of solutions. The nullclines plays a central role in the qualitative approach. Let us illustrate this on the following example. Example. Discuss the behavior of the solutions of the autonomous system We have already found the nullclines and the direction of the velocity vectors along these nullclines. These nullclines give the birth to four regions in which the direction of the motion is constant. 
Let us discuss the region bordered by the x-axis, the y-axis, the line 1-x-y=0, and the line 2-3x-y=0. Then the direction of the motion is left-down. So a moving object starting at a position in this region will follow a path going left-down. We have three choices. First choice: the trajectory dies at the equilibrium point (1/2, 1/2). Second choice: the starting point is above the trajectory which dies at the equilibrium point (1/2, 1/2). Third choice: the starting point is below the trajectory which dies at the equilibrium point (1/2, 1/2). For the other regions, look at the picture below. We included some solutions for every region. Remarks. We see from this example that the trajectories which die at the equilibrium point (1/2, 1/2) are called separatrices, because they separate the regions into different subregions with a specific behavior. To find them is a very difficult problem. Notice also that the equilibrium points (0,2) and (1,0) behave like sinks. The classification of equilibrium points will be discussed using the approximation by linear systems.
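For reference, assuming the competing-species model has the standard form consistent with the nullclines and equilibrium points quoted above,
$$\frac{dx}{dt} = x\,(1-x-y), \qquad \frac{dy}{dt} = y\,(2-3x-y),$$
the equilibria are found by requiring $x=0$ or $1-x-y=0$, together with $y=0$ or $2-3x-y=0$. The four combinations give $(0,0)$, $(1,0)$, $(0,2)$, and $(\tfrac{1}{2},\tfrac{1}{2})$, the last being the intersection of the lines $1-x-y=0$ and $2-3x-y=0$.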
{"url":"http://www.sosmath.com/diffeq/system/qualitative/qualitative.html","timestamp":"2014-04-16T15:59:07Z","content_type":null,"content_length":"14645","record_id":"<urn:uuid:9432051f-b675-462b-ac5c-c8083a492ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors San Diego, CA 92117 Math and Science is Accessible and Fun! ...My name is Jennifer, and I'm currently a college student attending National University, majoring in math. I can tutor a variety of subjects from basic elementary math to , basic natural sciences to upper division chemistry, as well as up to Semester 4 of... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Chula_Vista_calculus_tutors.aspx","timestamp":"2014-04-18T19:43:59Z","content_type":null,"content_length":"61175","record_id":"<urn:uuid:888d4629-7e22-4f5f-9dab-8963d3c734db>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on "normal form" of policy expressions From: David Hull <dmh@tibco.com> Date: Tue, 01 May 2007 01:13:29 -0400 Message-ID: <4636CC79.7050501@tibco.com> To: public-ws-policy@w3.org Hello WSP-ers, I'm sending this at the urging of the WSA group, with the understanding that with WSP in CR, there is a limit to what, if anything can be done. A large portion of section 4 of WS-P concerns the "normal form" of a policy expression. There are several points to be made here. In what follows, in order to compare more easily to more conventional notation (and, frankly, to save typing) I will write /a*b*c .../ for <All><A/><B/><C/></All> and /a+b+c/ for the case of ExactlyOne, with parentheses for grouping. I'll use (*a) for <All><A/></All>, (*) for <All/> (if it comes up) and likewise for (+a) and (+) if they come up. * When describing a normal form, one typically speaks of /reductions/ (or /expansions/), not of equivalences. The spec takes a somewhat less direct approach, saying "this is what a normal form looks like" and "the following kinds of expressions are equivalent". It is /implied/ that any given expression is equivalent to at least one expression in normal form, and examples are given that can be taken to indicate reductions, but no proof is given that every expression has a normal form. Such a proof would be provide reassurance that no corner cases had been missed, and would be much easier if the rules followed the grammar of policy expressions (i.e., separate cases for assertions, All and ExactlyOne). But see below. * It is not immediately clear to what extent terms such as "associative" are used in their usual senses. The usual mathematical definition of associativity (of a /binary/ operator) is /a*(b*c) = (a*b)*c./ When an operator is associative, parentheses may be dropped with the understanding that /a*b*c/ is shorthand for the equivalence class {/a*(b*c), (a*b)*c/}. Here we are told that /a*(*b) = a*b/ ("for example"). We can reasonably infer that /a*(b*c) = a*b*c./ Is /(a*b)*c/ also equivalent to /a*b*c/? This is not a spurious question, as the distinction between /left associativity/ and /right/ /associativity/ matters in parsing in general. As it happens, it doesn't matter here, since /(a*b)*c = c*(a*b) = c*a*b = a*b*c/, applying the examples we already have or have inferred but I seriously doubt that such manipulation was the intent of the committee in invoking "associative" and "commutative". * It is even less clear /why/ such prominence is given to these terms. The implication is that there is some important commonality being captured, but do I really expect to spend time going through, say, a WSDL file and rewriting /a+b/ as /b+a/? As it is, organizing the exposition around these terms serves more to distract than to inform. Assuming we have defined a normal form, what is its purpose? The "expanded" form of a "compact" expression can quickly grow unwieldy, thanks to distribution. It seems clear that such a form is not meant to be the sole representation used by real processors. The spec says it SHOULD be used where feasible, for simplicity and interoperability, though it's not clear to what extent those are promoted when there is no guarantee of normality. Rather, normal form seems most useful in specifying semantics, in particular, which policy expressions are But the meaning of an expression is (or at least ought to be) exactly the policy that it denotes. Expressions are equivalent iff they denote the same meaning, in this case the same policy. 
Given that policies are "the truth" and policy expressions are just their realization in angle brackets, the spec will need to define the mapping from policy expressions to policies. Yet this does not seem to be done. There is instead the implication that policy expressions in normal form map to actual policies in the intuitive way (only the reverse mapping is given, and that informally), and that every policy expression is equivalent to some normal form, and thus to some policy. If we /did/ have such a semantic map from policy expressions to policies, there would be no obvious need for a normal form for expressions. If there were such a need, an expression could always be canonicalized by computing the policy it denotes and then producing the angle brackets as described at the top of section 4. How does one compute the policy an expression denotes? Leaving nested assertions as an exercise and using square brackets to denote collections, I believe the mapping is essentially * [[/a/]] if the expression is an assertion /a/. That is, a policy consisting of a single alternative containing /a/. This form won't appear on its own, but will appear in recursively processing the other two forms. * /distrib/(/p_1 , p_2 ... p_n )/ if the expression is <All>/pexpr/_/1/ /// pexpr/_/2/ / ... pexpr/_/n/ </All>, where expression /pexpr/_/i/ denotes policy /p/_/i/ and /distrib/ takes the one-from-column-a-one-from-column-b Cartesian product described under "distributive". * [[/a/_/1/ ][/a/_/2/ ]...[/a/_/k/ ]] if the expression is <ExactlyOne>/pexpr/_/1/ / pexpr/_/2/ / ... pexpr/_/n/ </ExactlyOne> and the /a/_/j/ are the alternatives of the policies the /pexpr/_/i / denote. That is, make a policy of all the alternatives found in all the child expressions. I may well have missed some subtleties, but this approach appears to produce the same results as the spec. One could also take a more strongly-typed approach of mapping assertion elements to assertions, <All/> elements to alternatives and <ExactlyOne/> elements to policies. The overall case analysis remains the same either way. In such an analysis, it's easy to check whether all possible expressions are covered. There is no need to worry about commuting and associating or idempotence. Just go by induction as usual. Received on Tuesday, 1 May 2007 05:13:45 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Tuesday, 8 January 2008 14:20:50 GMT
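The case analysis near the end of the message translates almost directly into code. The sketch below is a hypothetical Python rendering of that mapping, using tuples instead of real XML and frozensets of assertion names as alternatives; it only illustrates the recursion described above, and is not an implementation of the WS-Policy specification itself:

```python
# Sketch of the expression -> policy mapping described above (not a WS-Policy implementation).
# An expression is ("assert", name), ("all", [children]) or ("one", [children]).
# A policy is a list of alternatives; an alternative is a frozenset of assertion names.
from itertools import product

def to_policy(expr):
    kind = expr[0]
    if kind == "assert":                       # single assertion -> one alternative containing it
        return [frozenset([expr[1]])]
    children = [to_policy(e) for e in expr[1]]
    if kind == "all":                          # Cartesian product: one alternative from each child
        return [frozenset().union(*combo) for combo in product(*children)]
    if kind == "one":                          # ExactlyOne: collect all alternatives of all children
        return [alt for child in children for alt in child]
    raise ValueError(f"unknown expression kind: {kind}")

# Example: <All> a <ExactlyOne> b c </ExactlyOne> </All>  ->  [{a, b}, {a, c}]
print(to_policy(("all", [("assert", "a"), ("one", [("assert", "b"), ("assert", "c")])])))
```

As in the message, normalizing an expression then amounts to computing this policy and emitting ExactlyOne over All-wrapped alternatives; associativity, commutativity and idempotence never need to be invoked separately.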
{"url":"http://lists.w3.org/Archives/Public/public-ws-policy/2007May/0000.html","timestamp":"2014-04-20T16:20:34Z","content_type":null,"content_length":"13168","record_id":"<urn:uuid:7484cdf3-9035-4dea-8e9b-396a61f4c6b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Question Bank : Age Group 14-15 : V 1. The first term of an Arithmetic Progression is 1 and the last term is 58.5. If their sum is 714, find the number of terms. 2. Find the geometric progression whose fifth term is and the seventh term is . Also find which term is . 3. The seventh term of an Arithmetic Progression is -15 and the sixteenth term is 30. Find the sum of the first 19 terms. 4. Sohail invests $25 in a bank at the beginning of each month for 36 months. If he gets $1066.50 at the end of 36 months, what is the rate of interest? 5. Mohammad invested an amount of $12,000 in a Fixed Deposit with a bank for 3 years paying interest @ 12% per annum. If the bank compounded interest half yearly, what is the maturity value of the deposit? 6. A solid metal sphere of radius 3.4 centimeters is dropped into a cylindrical can containing water. The base diameter of the cylindrical can is 8.2 centimeters. Find the rise in level of water to the nearest millimeter. 7. A cone of height 8 centimeters and base radius 6 centimeters is opened out to form a sector. Find the perimeter of the sector. 8. A cone and a cylinder have bases of equal area. The height of the cone is 9 times that of the cylinder. If the cylinder can hold 150 cubic centimeters of water, what is the capacity of the cone? 9. If a = x²-2x+1, b = x²-3x+2, c = 6x-6, d = 3x-6, then find and . 10. Find the quotient and the remainder when is divided by (2x+1). 11. If a, b, c are real, show that the following equation has real roots: 12. Find the value of k if the sum of the roots of the following equation is three times their product: Character is who you are when no one is looking.
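Several of the expressions in this list were lost in copying, but problem 1 is complete, and a quick worked check may help anyone comparing answers: with a first term of 1, a last term of 58.5 and a sum of 714, the formula S = n(a + l)/2 gives 714 = n(1 + 58.5)/2 = 59.5n/2, so n = 1428/59.5 = 24 terms.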
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=104668","timestamp":"2014-04-18T08:23:56Z","content_type":null,"content_length":"12254","record_id":"<urn:uuid:92cf4745-d790-4db7-bc24-fa33e7c42a4c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Tension on rope April 11th 2008, 01:37 AM Tension on rope Ropes 3m and 5m in length are fastened to a holiday decoration that is suspended over a town square. The decoration has a mass of 5kg. The ropes, fastened at different heights make angles of 52 degrees and 40 degrees with the horizontal. Find the tension in each wire and the magnitude of each tension. I started with an example in the book and then realized that the lengths of the rope where the same. This is what I did: T1 = -T1cos(52) + T1sin(52) T2 = T2cos(40) + T2sin(40) T1 + T2 = 5 (-T1cos(52) + T2cos(40))i +( T1sin(52)+ T2sin(40))j = 5 -T1cos(52) + T2cos(40) = 0 T1sin(52)+ T2sin(40) = 5 T2 = (T1cos52)/cos40 T1sin(52)+ ((T1cos52)/cos40)sin(40) = 5 T1 = 3.83kg T2 = ((3.83)cos52)/cos40 T2 = 3.1kg and this is where I realized I forgot about the different lengths of the ropes. Can anyone help? I don't have any idea how to go about this problem. April 11th 2008, 03:59 AM mr fantastic Ropes 3m and 5m in length are fastened to a holiday decoration that is suspended over a town square. The decoration has a mass of 5kg. The ropes, fastened at different heights make angles of 52 degrees and 40 degrees with the horizontal. Find the tension in each wire and the magnitude of each tension. I started with an example in the book and then realized that the lengths of the rope where the same. This is what I did: T1 = -T1cos(52) + T1sin(52) T2 = T2cos(40) + T2sin(40) T1 + T2 = 5 (-T1cos(52) + T2cos(40))i +( T1sin(52)+ T2sin(40))j = 5 -T1cos(52) + T2cos(40) = 0 T1sin(52)+ T2sin(40) = 5 T2 = (T1cos52)/cos40 T1sin(52)+ ((T1cos52)/cos40)sin(40) = 5 T1 = 3.83kg T2 = ((3.83)cos52)/cos40 T2 = 3.1kg and this is where I realized I forgot about the different lengths of the ropes. Can anyone help? I don't have any idea how to go about this problem. The length of the rope is irrelevant. Lami's theorem gives the answer quickly: Lami's theorem - Wikipedia, the free encyclopedia. By the way, if you take the weight force as 5, the unit of force is kg wt, not kg. April 11th 2008, 04:20 AM Ropes 3m and 5m in length are fastened to a holiday decoration that is suspended over a town square. The decoration has a mass of 5kg. The ropes, fastened at different heights make angles of 52 degrees and 40 degrees with the horizontal. Find the tension in each wire and the magnitude of each tension. I started with an example in the book and then realized that the lengths of the rope where the same. This is what I did: T1 = -T1cos(52) + T1sin(52) T2 = T2cos(40) + T2sin(40) T1 + T2 = 5 (-T1cos(52) + T2cos(40))i +( T1sin(52)+ T2sin(40))j = 5 -T1cos(52) + T2cos(40) = 0 T1sin(52)+ T2sin(40) = 5 T2 = (T1cos52)/cos40 T1sin(52)+ ((T1cos52)/cos40)sin(40) = 5 T1 = 3.83kg T2 = ((3.83)cos52)/cos40 T2 = 3.1kg and this is where I realized I forgot about the different lengths of the ropes. Can anyone help? I don't have any idea how to go about this problem. I didn't check the numbers (or the algebra) but the setup looks good. By the way, tension is a force. What's the unit for that? April 11th 2008, 05:03 AM Ropes 3m and 5m in length are fastened to a holiday decoration that is suspended over a town square. The decoration has a mass of 5kg. The ropes, fastened at different heights make angles of 52 degrees and 40 degrees with the horizontal. Find the tension in each wire and the magnitude of each tension. I started with an example in the book and then realized that the lengths of the rope where the same. 
This is what I did: T1 = -T1cos(52) + T1sin(52) T2 = T2cos(40) + T2sin(40) T1 + T2 = 5 (-T1cos(52) + T2cos(40))i +( T1sin(52)+ T2sin(40))j = 5 -T1cos(52) + T2cos(40) = 0 T1sin(52)+ T2sin(40) = 5 T2 = (T1cos52)/cos40 T1sin(52)+ ((T1cos52)/cos40)sin(40) = 5 T1 = 3.83kg T2 = ((3.83)cos52)/cos40 T2 = 3.1kg and this is where I realized I forgot about the different lengths of the ropes. Can anyone help? I don't have any idea how to go about this problem. Assuming the system is in equilibrium :- $F_y = T_1 sin52 + T_2 sin40 = 5g$ $F_x = T_1 cos52 = T_2 cos40$ Solve these simultaneously and you you should get the tension. The length of the rope is not relevant to the calculation of tension. April 11th 2008, 02:17 PM The length of the ropes doesn't matter for the tension but what about the magnitude of each tension? Would I multiply the vectors for each of them by their length? T1 = -2.36i + 3.02j T2 = 2.37i + 2j would be T1 = 3(-2.36i + 3.02j) T2 = 5(2.37i + 2j) April 11th 2008, 03:18 PM The tensions do not depend on the length of the ropes. Period. End. Never. Finito. Nunca, nada, jamas. Nein. Nyet. Think about it. You have a tension in Newtons. That means a tension is a force, because it has the units of force. You want to multiply it by a length, which will give it a unit of Nm, a unit of The only thing that the tensions have to do with the ropes is that they are directed along them. We occasionally will use the length of the rope to find the angle that the tension is at, but never for the magnitude. April 11th 2008, 03:42 PM mr fantastic
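For readers who want to check the arithmetic in this thread, here is a small script (a sketch, not part of the original posts) that solves the two equilibrium equations; taking the weight as 5 kg wt reproduces the 3.83 and 3.1 figures above, and using 5g newtons gives the tensions in newtons:

```python
# Solve T1*cos(52) = T2*cos(40) and T1*sin(52) + T2*sin(40) = W for the two tensions.
import math

def tensions(weight, a1_deg=52.0, a2_deg=40.0):
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    # From the horizontal equation: T2 = T1*cos(a1)/cos(a2); substitute into the vertical one.
    t1 = weight / (math.sin(a1) + math.cos(a1) * math.sin(a2) / math.cos(a2))
    t2 = t1 * math.cos(a1) / math.cos(a2)
    return t1, t2

print(tensions(5.0))        # weight in kg wt   -> about (3.83, 3.08)
print(tensions(5.0 * 9.8))  # weight in newtons -> about (37.6, 30.2)
```

Nothing in the calculation uses the 3 m and 5 m lengths, which is exactly the point made in the replies.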
{"url":"http://mathhelpforum.com/calculus/34054-tension-rope-print.html","timestamp":"2014-04-18T11:41:24Z","content_type":null,"content_length":"17631","record_id":"<urn:uuid:89a00a27-e467-470f-8192-144e11bf025a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on World Amazing Information, Facts & News: Needs someone to figure out how this works!
Tanmay (2010-02-12): answer is always in multiples of 9
Joice (2009-07-19): It is very simple. The answer always lies in the multiples of 9
omkar (2009-07-19): hey this puzzle is very stupid one. i want to say or see take any no. like 72, according to puzzle subtract 7 and 2, i.e. 72-7-2=63. Any two digit no. you can take, subtract first and second digit, the answer you are getting will be multiple of 9, and check boxes of multiples of 9, same gift will be there. thus i am saying it is stupid puzzle.
chuanz (2009-07-19): Hint: The possible answers, regardless of which two digit numbers you choose, is a fixed set. If you look at the board, the numbers that belong to this solution set have the same "item" =)
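For anyone still puzzled, the commenters' observation has a one-line explanation: a two-digit number with tens digit a and units digit b is 10a + b, so subtracting both digits leaves (10a + b) - a - b = 9a. The result is therefore always one of 9, 18, ..., 81, and the trick only has to put the same symbol on those nine squares of the board.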
{"url":"http://www.worldamazinginformation.com/feeds/3833645806290587760/comments/default","timestamp":"2014-04-20T13:55:15Z","content_type":null,"content_length":"8756","record_id":"<urn:uuid:0d329692-e445-4c7d-8814-d812214a0a28>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 114 - Journal of the ACM , 1992 "... Abstract. In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an ..." Cited by 186 (9 self) Add to MetaCart Abstract. In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T1, T2, ..., Tk of tasks is a sequence s1, s2, ..., sk of states where si is the state in which Ti is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An on-line scheduling algorithm is one that chooses si only knowing T1, T2, ..., Ti. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum w for which there is a w-competitive on-line scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| - 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized on-line scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i, j), the expected competitive ratio w(S, d) = , 1991 "... The paging problem is that of deciding which pages to keep in a memory of k ..." - Journal of the ACM , 1995 "... We prove that the work function algorithm for the k-server problem has competitive ratio at most 2k - 1. Manasse, McGeoch, and Sleator [24] conjectured that the competitive ratio for the k-server problem is exactly k (it is trivially at least k); previously the best known upper bound was ex ..." Cited by 95 (6 self) Add to MetaCart We prove that the work function algorithm for the k-server problem has competitive ratio at most 2k - 1. Manasse, McGeoch, and Sleator [24] conjectured that the competitive ratio for the k-server problem is exactly k (it is trivially at least k); previously the best known upper bound was exponential in k. Our proof involves three crucial ingredients: A quasiconvexity property of work functions, a duality lemma that uses quasiconvexity to characterize the configurations that achieve maximum increase of the work function, and a potential function that exploits the duality lemma. 1 Introduction The k-server problem [24, 25] is defined on a metric space M, which is a (possibly infinite) set of points with a symmetric distance function d (nonnegative real function) that satisfies the triangle inequality: For all points x, y, and z, d(x, x) = 0, d(x, y) = d(y, x), and d(x, y) <= d(x, z) + d(z, y). On the metric space M, k servers reside that can move from point to point. A possib... - Journal of the ACM , 1990 "... 
We study the design and analysis of randomized on-line algorithms. ..." - SIAM Journal on Discrete Mathematics , 1990 "... In the k-server problem, we must choose how k mobile servers will serve each of a sequence of requests, making our decisions in an online manner. We exhibit an optimal deterministic online strategy when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving t ..." Cited by 73 (7 self) Add to MetaCart In the k-server problem, we must choose how k mobile servers will serve each of a sequence of requests, making our decisions in an online manner. We exhibit an optimal deterministic online strategy when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving to x from any other point is w(x), the weight of x, we also provide an optimal deterministic algorithm. We prove the nonexistence of competitive algorithms for the asymmetric two-server problem, and of memoryless algorithms for the weighted-cache problem. We give a fast algorithm for offline computing of an optimal schedule, and show that finding an optimal offline schedule is at least as hard as the assignment problem. 1 Introduction The k-server problem can be stated as follows. We are given a metric space M , and k servers which move among the points of M , each occupying one point of M . Repeatedly, a request (a point x 2 M) appears. To serve x, each server moves some distance, - In Proc. of the 9th Annual ACM-SIAM Symp. on Discrete algorithms , 1998 "... Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is n ..." Cited by 68 (2 self) Add to MetaCart Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache, pay the retrieval cost, and choose files to remove from the cache so that the total size of files in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, - Computer Science Review "... The k-server problem is perhaps the most influential online problem: natural, crisp, with a surprising technical depth that manifests the richness of competitive analysis. The k-server conjecture, which was posed more that two decades ago when the problem was first studied within the competitive ana ..." Cited by 66 (5 self) Add to MetaCart The k-server problem is perhaps the most influential online problem: natural, crisp, with a surprising technical depth that manifests the richness of competitive analysis. The k-server conjecture, which was posed more that two decades ago when the problem was first studied within the competitive analysis framework, is still open and has been a major driving force for the development of the area online algorithms. 
This article surveys some major results for the k-server problem. 1 , 2000 "... The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called cache, capable of holding k items, and the second level is an unbounded but slow memory. At each given time step, a request to an item is issued. Given a request to an item p ..." Cited by 62 (9 self) Add to MetaCart The paging problem is defined as follows: we are given a two-level memory system, in which one level is a fast memory, called cache, capable of holding k items, and the second level is an unbounded but slow memory. At each given time step, a request to an item is issued. Given a request to an item p, a miss occurs if p is not present in the fast memory. In response to a miss, we need to choose an item q in the cache and replace it by p. The choice of q needs to be made on-line, without the knowledge of future requests. The objective is to design a replacement strategy with a small number of misses. In this paper we use competitive analysis to study the performance of randomized on-line paging algorithms. Our goal is to show how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can also be applied, in a systematic fashion, to the randomized case. We present two results: we first show that the competitive ratio of the marking algorithm is ex... - Algorithmica , 1994 "... Weighted caching is a generalization of paging in which the cost to ..." - On-line Algorithms, volume 7 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science , 1992 "... ..."
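The abstracts above all revolve around the online paging and caching problem. As a concrete, if very simple, illustration of an online replacement strategy of the kind these papers analyze, here is a hypothetical sketch of unit-cost paging with least-recently-used eviction; it is not taken from any of the cited papers, just a baseline one could compare against the offline optimum:

```python
# Sketch: count page faults for LRU eviction with a cache of k unit-size pages.
from collections import OrderedDict

def lru_faults(requests, k):
    cache = OrderedDict()                  # keys = cached pages, in recency order
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                    # miss: fetch the page
            if len(cache) >= k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 1, 2, 5, 1, 2], k=3))
```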
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=202243","timestamp":"2014-04-20T01:43:01Z","content_type":null,"content_length":"35569","record_id":"<urn:uuid:3957fc3d-40b3-408a-9c6e-685c96493c56>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Items where Division is "Schools (1998 to 2008) > School of Mathematics & Statistics" Number of items at this level: 13. Alghamdi, Ahmad M. (2004) The Ordinary Weight conjecture and Dade's Projective Conjecture for p-blocks with an extra-special defect group. Ph.D. thesis, University of Birmingham. Allsup, John David (2007) On groups and initial segments in nonstandard models of Peano Arithmetic. Ph.D. thesis, University of Birmingham. Clelland, Murray Robinson (2007) Saturated fusion systems and finite groups. Ph.D. thesis, University of Birmingham. Fowler, Rachel Ann Abbott (2007) A 3-local characterization of the group of the Thompson sporadic simple group. Ph.D. thesis, University of Birmingham. Gans, Marijke van (2007) Topics in trivalent graphs. Ph.D. thesis, University of Birmingham. Lewis, Seth Charles (2007) On the best principal submatrix problem. Ph.D. thesis, University of Birmingham. Lin, Li (2008) Centrifugal instability of the wake dominated curved compressible mixing layers. Ph.D. thesis, University of Birmingham. Partridge, Lucy (2006) An experimental and theoretical investigation into the break-up of curved liquid jets in the prilling process. Ph.D. thesis, University of Birmingham. Mathematics and Statistics Bagkavos, Dimitrios Ioannis (2003) Bias reduction in nonparametric hazard rate estimation. Ph.D. thesis, University of Birmingham. Goodwin, Simon Mark (2005) Relative Springer isomorphisms and the conjugacy classes in Sylow p-subgroups of Chevalley groups. Ph.D. thesis, University of Birmingham. Morey, Paul Stephen (2003) The (S_3, A_n)- and (S_3, S_n)-amalgams of characteristic 2 and critical distance 3. Ph.D. thesis, University of Birmingham. Pure Mathematics Tong Viet, Hung Phi (2009) Rank 3 permuation characters and maximal subgroups. Ph.D. thesis, University of Birmingham. School of Mathematics Wakeley, Paul William (2009) Optimisation and properties of gamete transport. Ph.D. thesis, University of Birmingham.
{"url":"http://etheses.bham.ac.uk/view/divisions/30sch=5Fmats.department.html","timestamp":"2014-04-17T18:25:20Z","content_type":null,"content_length":"11658","record_id":"<urn:uuid:1e281b5b-b4ed-45d8-b75b-64f9520f3daa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Induction December 22nd 2012, 01:40 PM #1 Sep 2012 United Kingdom Mathematical Induction $5^{2n}-3^{n}$ is a multiple of 11 for all integers $n\geq 1$ My attempt at a solution so far (I haven't shown the case for P(1) etc.) $P(k):5^{2k}-3^{k}=11p \\ P(k+1):5^{2k+2}-3^{k+1} \\ 5^{2}\cdot 5^{2k}-3\cdot 3^{k} \\5^{2}( 3^{k}+11p)-3\cdot3^{k}$ I am not sure if it's correct up to here, and what the next step is. I have done the steps for P(1) to verify when n=1 (just not shown). Thank you for any help. P.S. I'm not used to using LaTeX Re: Mathematical Induction At this point use the common step of adding and subtracting the same thing to "separate" the parts: $5^2\cdot 5^{2k}- 5^2\cdot 3^k+ 5^2\cdot 3^k- 3\cdot 3^k$ $= 5^2(5^{2k}- 3^k)+ 3^k(25- 3)= 5^2(5^{2k}- 3^k)+ 22(3^k)$ $\\5^{2}( 3^{k}+11p)-3\cdot3^{k}$ I am not sure if it's correct up to here, and what the next step is. I have done the steps for P(1) to verify when n=1 (just not shown). Thank you for any help. P.S. I'm not used to using LaTeX Re: Mathematical Induction Re: Mathematical Induction Last edited by Plato; December 22nd 2012 at 04:20 PM. Re: Mathematical Induction Thank you very much for your help, much appreciated. December 22nd 2012, 01:58 PM #2 MHF Contributor Apr 2005 December 22nd 2012, 02:04 PM #3 December 22nd 2012, 03:12 PM #4 Sep 2012 United Kingdom
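For completeness, the step described above finishes the proof in one line once the hypothesis $5^{2k}-3^{k}=11p$ is substituted: $5^{2(k+1)}-3^{k+1}=25(5^{2k}-3^{k})+22\cdot 3^{k}=25(11p)+11(2\cdot 3^{k})=11(25p+2\cdot 3^{k})$, which is a multiple of 11, so P(k+1) holds.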
{"url":"http://mathhelpforum.com/algebra/210257-mathematical-induction.html","timestamp":"2014-04-21T06:25:26Z","content_type":null,"content_length":"43708","record_id":"<urn:uuid:5b1fd1e2-e86d-4c22-80f2-81d27f84999d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Rahmstorf (2009): Off the mark again (part 3) April 3, 2010 Recall that Vermeer and Rahmstorf (referred to as VR2009 for the rest of this post) propose an equation for the relationship between sea level rise rate and temperature. That the two are related seems doubtless. But Vermeer and Rahmstorf's model describing the relationship is untenable. VR2009′s equation yields realistic-looking sea level rise rates when historical 20th century temperatures are used. But that is not because the equation explains the relationship between temperature and sea level. It simply describes the relationship during the 20th century. If their equation explains the relationship between temperature and sea level, then it must yield a reasonable sea level rise rate when a reasonable temperature is used. It fails this test. You can read VR2009′s PNAS paper here. In Part 1, I laid out the basic problem. In Part 2, I went into a little more detail on the math. As promised in part 2, I will keep the math to a minimum here. I just want to reiterate their essential equation… • H is the sea level, and dH/dt is the change in sea level per unit time (the sea level rise rate) • T is the temperature, and dT/dt is the change in temperature per unit time • T[0] is an "equilibrium temperature" • t is the time • a and b are constants VR2009 find a, b and T[0] by fitting equation 1 to smoothed versions of the GISS global temperature and Church and White's sea level data with Chao's correction for artificial reservoir storage. For the purposes of this post I will accept and work with VR2009′s determination of a, b and T[0]. Namely… • a = 5.6 +- 0.5 mm*a^-1K^-1 • b = -49 +- 10 mm*K^-1 • T[0] = -0.41 +- 0.03 K Equation 1 is written in the language of calculus and assumes continuous functions and infinitely small changes in sea level (dH), temperature (dT) and time (dt). The GISS temperature data that VR2009 used, and that I will use, is in discrete yearly increments (ΔH, ΔT and Δt). So, testing VR2009′s model for sea level rise rate as a function of temperature is as simple as using a spreadsheet to insert the discrete temperatures into… One last equation for now (I promise). Equation 2 gives the sea level rise rate at any moment as a function of the temperature at that moment. If you want to know the sea level, then each momentary sea level rise rate can be multiplied by the time interval (Δt, one year in this case) and added together. So at any time, t, the sea level is given by… Test Format I will test the VR2009 model several different ways. Each case will have the following format: 1. description of temperature data 2. Plot of resulting sea level rise rate, (ΔH/Δt) from equation 2 3. Plot of resulting sea level, H, from equation 3. 4. Comments 5. Excel file link Test 1 1. description: Unfiltered GISS Global Annual Mean Surface Air Temperature. This serves as a baseline case. 2. Resulting sea level rise rate… 3. Resulting Sea level… 4. Comments… The heavy blue line in the image at the left shows the result of VR2009′s calculation of sea level rise rate (click to enlarge). Not much resemblance to my calculation. However, they reasonably used smoothed versions of the GISS temperature, so this isn't really a fair comparison. The reason I included it is that although they have references for their smoothing technique, they do not say how much smoothing has been applied. What about the resulting sea level? The red line in this graph shows the artificial reservoir corrected sea level data from Chao (click to enlarge).
I will have a post concerning Chao’s “correction” to the sea level at a later time, but for now I will accept it for the sake of argument because it is the data that VR2009 used to calculate their values for the constants a, b, and T[0] . The big difference between the original Chao data and the sea level derived from VR2009′s model is the obvious bend in Chao’s data around 1930 which is missing in the sea level resulting from VR2009′s model.. Also, Chao implies that the sea level rise rate is essentially constant from 1930 to the present by fitting to a line with a slope of 2.46 mm/year. The VR2009 model has a long time period smoothing effect that results in an increasing slope that does not exist in the Chao data. 5. Excel link: VR2009 test 1 Test 2. 1. description. GISS Global Annual Mean Surface Air Temperature filtered with 5, 15 and 25 year FWHM gaussian filters. 2. Resulting sea level rise rate… 4. Comments… The sea level rise rate overlay shows that as the GISS temperature smoothing increases, the sea level rise rates for my implementation of the VR2009 model approaches the sea level rise rate graphically reported by VR2009 (see graph in comments of Test 1). The very large sea level rise rates from 1880 to about 1887 are a result of my gaussian smoothing algorithm’s behaviour near the beginning of the GISS temperature data series. The graph on the left (click to enlarge) shows that my implementation of the VR2009 model calculated sea levels, H, for 5, 15, and 25 year temperature smooths look more or less like the corresponding smoothed versions of the Chao sea level data. However, the modeled sea levels tend to deviate upward from the Chao data at the end the 20th century. 5. Excel link: VR2009 test 2 So far, so good. It is not too surprising that the PR2009 (equation 1) between sea level and temperature work fairly well to reproduce Chao’s sea level data from the GISS temperature data. After all, the variables a, b and T[0] were determined by fitting Chao’s sea level data from the GISS temperature data to the proposed relationship. The following tests will involve other, hypothetical temperature scenarios that match equation 4 of my March 27th post. Test 3. 1. Description. Two temperature scenarios will be compared. The first will be the same as the 15 year FWHM guassian smoothed GISS data used in test 2, above. The second will be identical to the first up to 1967, but from 1968 to 2000 it will follow equation 4 of my March 27th post, with C = 0.0016, t’ = 1941, γ = 1, and T[offset] = -0.05 K. See the graph below, click to enlarge. 2. Resulting sea level rise rate… 4. Comments… The sea level and sea level rise rate are probably not what you would expect from the temperature. The sea level rise rate is perfectly constant when my hypothetical temperature scenario kicks in. Have I made some kind of mistake, perhaps used the wrong graph? No. I simply selected a temperature scenario that I knew would result in a constant sea level rise rate. It is as simple as making dH/dt the desired sea level rise rate constant in equation in equation 1, and using VR2009′s values of a, b and T[0] , and then solving the resulting simple differential equation. The important point is that a realistic temperature scenario results in an unrealistic sea level rise rate. 5. Excel link: VR2009 test 3 Test 4 1. Description. The same two temperature scenarios as test 3, with three more hypothetical temperature scenarios added. The new scenarios are defined as follows.. 
Hypothetical temperature 1: C = 0.008, t’ = 1939, γ = 1, and T[offset] = -0.039 K Hypothetical temperature 3: C = -0.0008, t’ = 1941, γ = 1, and T[offset] = -0.011 K Hypothetical temperature 4: C = 0.00155, t’ = 1941, γ = 1, and T[offset] = -0.0029 K See the graph below, click to enlarge. 2. Resulting sea level rise rate… 3. Resulting Sea level… 4. Comments… No mistake here either. All four of the hypothetical temperature scenarios yield constant sea level rise rates. And, surprisingly, it doesn’t make any difference if the temperatures go up or down. Of course, I have designed these temperature scenarios to yield these incredible results not because I think these results are valid. On the contrary – the point is that VR2009′s proposed relationship between temperature and sea level rise rate yields bogus sea level rise rates. 5. Excel link: VR2009 test 4 Conclusion (for now) VR2009′s equation yields realistic looking sea level rise rates when historical 20th century temperatures are used. But that is not because the equation explains the relationship between temperature and sea level. It simply describes the relationship during the 20th century. If their equation explains the relationship between temperature and sea level, then it must yield a reasonable sea level rise rate when a reasonable temperature is used. Clearly, it fails this test. Coming soon Applying VR2009′s equation to 21st century temperature scenarios. 6 comments
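For readers who want to reproduce these tests, two details are worth spelling out, since the equation images did not survive in this copy. VR2009's equation 1 has the form dH/dt = a·(T − T0) + b·dT/dt (the form given in their PNAS paper); equations 2 and 3 above are just its discrete version and its running sum. A minimal sketch of the spreadsheet calculation, assuming that form, the constants quoted above, and annual temperatures in a list called temps, would be:

```python
# Sketch of equations 2 and 3: discrete VR2009 sea-level model, assuming
# dH/dt = a*(T - T0) + b*dT/dt with the constants quoted in this post.
a, b, T0 = 5.6, -49.0, -0.41        # mm/yr/K, mm/K, K
dt = 1.0                            # years (annual GISS data)

def sea_level_rise_rates(temps):
    """Equation 2: rise rate in mm/yr for each year after the first."""
    return [a * (temps[i] - T0) + b * (temps[i] - temps[i - 1]) / dt
            for i in range(1, len(temps))]

def sea_level(temps, H0=0.0):
    """Equation 3: cumulative sea level (mm), starting from H0."""
    H, levels = H0, []
    for rate in sea_level_rise_rates(temps):
        H += rate * dt
        levels.append(H)
    return levels
```

This also makes Tests 3 and 4 less mysterious: asking for a constant rise rate r means solving a(T − T0) + b·dT/dt = r, whose solution is T(t) = T0 + r/a + (T(t′) − T0 − r/a)·exp(−(a/b)(t − t′)). With a > 0 and b < 0 the exponent grows with time, so every constant-rate scenario is a gentle exponential in temperature (rising or falling depending on the sign of its coefficient), which is exactly the family of hypothetical curves used above.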
{"url":"http://climatesanity.wordpress.com/2010/04/03/rahmstorf-2009-off-the-mark-again-part-3/","timestamp":"2014-04-20T13:42:44Z","content_type":null,"content_length":"94083","record_id":"<urn:uuid:2ff6a301-2943-46d7-88c6-338217ad612b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Is it a coincidence that we can have total solar eclipses? Are there other planets which also have them? When there is an eclipse, the moon blocks out almost the entire sun relative to what we see here on earth because they take up the same earth-observed degree size. Why does the moon have the same exact observable degree size as the sun? Is it just a coincidence? What are the chances that this should be? Follow up question - are there any other planetary moons which also project a disc on their planet's surface which exactly matches the sun as observed from that planet? Do any come close either way at all? As you say - it's just a coincidence that the Moon and the Sun have such similar angular sizes as seen from Earth. In fact this won't last very long (on astronomical timescales). The Moon is moving away from the Earth (albeit very slowly), so eventually it will have a smaller angular size than the Sun and we will no longer have total solar eclipses. Whether any other planets can also have total solar eclipses is actually very easy to work out for yourself. I'll tell you how by giving you a couple of examples and then leave you to it! If you go to (e.g.) the Nine Planets site you can find the size and orbit of all the moons of all the planets. For example: The Moon orbit: 384,400 km from Earth diameter: 3476 km We also need to know the size of the Sun and how far it is from the planet in question, so The Earth orbit: 149,600,000 km (1.00 AU) from Sun The Sun diameter: 1,390,000 km The angular size of an object is (roughly) its diameter divided by how far away it is (this gives the angular size in radians if both are in the same units) so... The Sun angular size = 1,390,000km/149,600,000km = 0.009 radians = 0.5 degrees (A useful number is that 1 radian = 57 degrees, so I multiply the size in radians by 57 to get the size in degrees which I understand better). The Moon angular size = 3476km/384,400km = 0.009 radians = 0.5 degrees They are the same as you already knew. Let's try one more example. At random I'm going to choose Titan, the largest moon of Saturn: orbit: 1,221,830 km from Saturn diameter: 5150 km I also need to know that Saturn is orbiting at a distance of 1,429,400,000 km (9.54 AU) from the Sun. The Sun angular size from Saturn = 1,390,000km/1,429,400,000km = 0.0010 radians = 0.06 degrees. (an easier way to do this is to say that Saturn is 9.5 times further from the Sun than the Earth is so the Sun has an angular size 9.5 times smaller). Titan angular size from Saturn = 5150km/1,221,830km = 0.004 radians = 0.2 degrees. So Titan and the Sun do not have the same angular size as seen from Saturn. So no total eclipses from Saturn! Have fun finding if any of the moons do match!
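The "leave you to it" exercise is also easy to script. A small sketch using only the figures quoted above; for other moons, plug in their diameter, orbit radius and their planet's distance from the Sun:

```python
# Sketch: compare a moon's angular size with the Sun's, as seen from the moon's planet.
import math

SUN_DIAMETER = 1_390_000  # km

def angular_size_deg(diameter_km, distance_km):
    return math.degrees(diameter_km / distance_km)   # small-angle approximation

def compare(moon_diameter, moon_orbit, planet_sun_distance):
    return (angular_size_deg(moon_diameter, moon_orbit),
            angular_size_deg(SUN_DIAMETER, planet_sun_distance))

print(compare(3476, 384_400, 149_600_000))      # Moon vs Sun from Earth   -> ~0.52, ~0.53 deg
print(compare(5150, 1_221_830, 1_429_400_000))  # Titan vs Sun from Saturn -> ~0.24, ~0.056 deg
```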
{"url":"http://curious.astro.cornell.edu/question.php?number=712","timestamp":"2014-04-20T15:52:55Z","content_type":null,"content_length":"14742","record_id":"<urn:uuid:e43eba05-c419-46c8-87a2-2706b6ed9353>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
i need some help please Hello everyone, I'm a newbie to this forum; actually, I'm pretty new to all this programming stuff. I have a little homework problem and I need your help, so here it is: The adjacency matrix for the graph is stored in an array named adjMatrix. The vertices are numbered from 0 to n-1. You can check whether an edge exists between vertex i and vertex j by calling edge(i,j). If it returns zero, no edge exists. If it returns something greater than zero then an undirected edge exists between i and j with the weight of the edge equal to the value returned. I need pseudocode to figure out a way to calculate the degree of each vertex (how many edges touch it). Then figure out if each degree is even or odd, and count how many even degree vertices and how many odd degree vertices there are. By the way, this is a weighted, undirected graph. Can anyone give me some advice on how to get this started? Thanks a lot You can get started by reading this. >I'm pretty new to all this programming stuff. >I have a little homework problem I doubt that you're new to programming and have a homework problem that involves graph theory. You can be either one or the other, but not both. Graphs don't enter into the picture in any curriculum that I'm familiar with until well after you're capable of understanding and implementing them. Originally Posted by Prelude I doubt that you're new to programming and have a homework problem that involves graph theory. You can be either one or the other, but not both. Graphs don't enter into the picture in any curriculum that I'm familiar with until well after you're capable of understanding and implementing them. Mmmm... while you're likely right, I'm thinking it's possible he could be in the late stages of a non-programming major like mathematics or engineering, where they still require programming courses, they're just more compact. Still, this seems a little advanced for a "new programmer". I am really new at this ... This problem is from my discrete mathematics class ... and so far in that class, I am pretty clueless about all the programming problems. .. I don't need you to do the whole problem for me, I just need a little help on how to get the pseudocode started .... thanks
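Since the thread never got as far as the pseudocode itself, here is one way it could be structured. It is written as Python-flavored pseudocode rather than C or C++; edge(i, j) is the helper described in the first post, and whether a self-loop at a vertex should count toward its degree is left to the assignment:

```python
# Sketch: degree of each vertex, plus how many degrees are even and how many are odd.
def degree_summary(n, edge):
    degrees = []
    even_count = odd_count = 0
    for i in range(n):
        deg = 0
        for j in range(n):
            if j != i and edge(i, j) > 0:   # weight > 0 means an edge exists
                deg += 1
        degrees.append(deg)
        if deg % 2 == 0:
            even_count += 1
        else:
            odd_count += 1
    return degrees, even_count, odd_count
```

Because the graph is undirected (edge(i, j) equals edge(j, i)), each edge adds one to the degree of each of its two endpoints, and the weights themselves are never needed for the count.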
{"url":"http://cboard.cprogramming.com/cplusplus-programming/81957-i-need-some-help-please-printable-thread.html","timestamp":"2014-04-25T09:56:16Z","content_type":null,"content_length":"9297","record_id":"<urn:uuid:9c1e2884-5745-410c-9965-88f808278ed1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Heapsort (2) Chapter 6 Heapsort • Heapsort – Running time: O(n lg n) • Like merge sort – Sort in place: only a constant number of array elements are stored outside the input array at any time • Like insertion sort • Heap – A data structure used by Heapsort to manage information during the execution of the algorithm – Can be used as an efficient priority queue Binary Heap • An array object that can be viewed as a nearly complete binary tree (see Section B.5.3) – Each tree node corresponds to an array element that stores the value in the tree node – The tree is completely filled on all levels except possibly the lowest, which is filled from the left up to a point – A has two attributes • length[A]: # of elements in the array • heap-size[A]: # of elements in the heap stored within A – heap-size[A] ≤ length[A] – No element past A[heap-size[A]] is an element of the heap – max-heap and min-heap A Max-Heap Length and Heap-Size Length = 10 Heap-Size = 7 Heap Computation • Given the index i of a node, the indices of its parent, left child, and right child can be computed simply: PARENT(i): return ⌊i/2⌋ LEFT(i): return 2i RIGHT(i): return 2i + 1 Heap Property • Heap property – the property that the values in the node must satisfy • Max-heap property: for every node i other than the root – A[PARENT(i)] ≥ A[i] – The value of a node is at most the value of its parent – The largest element in a max-heap is stored at the root – The subtree rooted at a node contains values no larger than that contained at the node itself • Min-heap property: for every node i other than the root – A[PARENT(i)] ≤ A[i] Heap Height • The height of a node in a heap is the number of edges on the longest simple downward path from the node to a leaf • The height of a heap is the height of its root – The height of a heap of n elements is Θ(lg n) Heap Procedures • MAX-HEAPIFY: maintain the max-heap property – O(lg n) • BUILD-MAX-HEAP: produces a max-heap from an unordered input array – O(n) • HEAPSORT: sorts an array in place – O(n lg n) • MAX-HEAP-INSERT, HEAP-EXTRACT, HEAP-INCREASE-KEY, HEAP-MAXIMUM: allow the heap data structure to be used as a priority queue – O(lg n) Maintaining the Heap • MAX-HEAPIFY – Inputs: an array A and an index i into the array – Assume the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but A[i] may be smaller than its children (violating the max-heap property) – MAX-HEAPIFY lets the value at A[i] float down in the max-heap Example of MAX-HEAPIFY Extract the indices of LEFT and RIGHT children of i Choose the largest of A[i], A[l], A[r] Float down A[i] recursively Running time of MAX-HEAPIFY • Θ(1) to find the largest among A[i], A[LEFT(i)], and A[RIGHT(i)] • Plus the time to run MAX-HEAPIFY on a subtree rooted at one of the children of node i – The children's subtrees each have size at most 2n/3 – the worst case occurs when the last row of the tree is exactly half full • T(n) ≤ T(2n/3) + Θ(1) – By case 2 of the master theorem: T(n) = O(lg n) Building A Heap Build Max Heap • Observation: A[(⌊n/2⌋+1)..n] are all leaves of the tree – Each is a 1-element heap to begin with • Upper bound on the running time – O(lg n) for each call to MAX-HEAPIFY, and it is called O(n) times ⇒ O(n lg n) • Not tight Loop Invariant • At the start of each iteration of the for loop of lines 2-3, each node i+1, i+2, .., n is the root of a max-heap – Initialization: Prior to the first iteration of the loop, i = ⌊n/2⌋. Each node ⌊n/2⌋+1, ⌊n/2⌋+2, .., n is a leaf and the root of a trivial max-heap.
– Maintenance: Observe that the children of node i are numbered higher than i. By the loop invariant, therefore, they are both roots of max-heaps. This is precisely the condition required for the call MAX-HEAPIFY(A, i) to make node i a max-heap root. Moreover, the MAX-HEAPIFY call preserves the property that nodes i+1, i+2, …, n are all roots of max-heaps. Decrementing i in the for loop update reestablishes the loop invariant for the next iteration – Termination: At termination, i=0. By the loop invariant, each node 1, 2, …, n is the root of a max-heap. In particular, node 1 is. Cost for BUILD-MAX-HEAP • Heap properties of an n-element heap – Height = ⌊lg n⌋ – At most ⌈n/2^(h+1)⌉ nodes of any height h • Total cost: ∑_{h=0}^{⌊lg n⌋} ⌈n/2^(h+1)⌉ · O(h) = O(n · ∑_{h=0}^{⌊lg n⌋} h/2^h) = O(n) (ignore the constant ½ coming from 2^(h+1), and use ∑_{h≥0} h·x^h = x/(1−x)² for |x| < 1, which equals 2 at x = ½) The HeapSort Algorithm • Use BUILD-MAX-HEAP to build a max-heap on the input array A[1..n], where n = length[A] • Put the maximum element, A[1], into A[n] by exchanging the two – Then discard node n from the heap by decrementing heap-size(A) • A[2..n-1] remain roots of max-heaps, but A[1] may violate the max-heap property – call MAX-HEAPIFY(A, 1) to restore the max-heap property for A[1..n-1] • Repeat the above process from n down to 2 • Cost: O(n lg n) – BUILD-MAX-HEAP: O(n) – Each of the n-1 calls to MAX-HEAPIFY takes time O(lg n) Next Slide Example of HeapSort Example of HeapSort Example of HeapSort (Cont.) Priority Queues • A priority queue is a data structure for maintaining a set S of elements, each with an associated value called a key. A max-priority queue supports the following operations: – INSERT(S, x) inserts the element x into the set S – MAXIMUM(S) returns the element of S with the largest key – EXTRACT-MAX(S) removes and returns the element of S with the largest key – INCREASE-KEY(S, x, k) increases the value of element x's key to the new value k, which is assumed to be at least as large as x's current key value • Application of max-priority queue: job scheduling HEAP-MAXIMUM and HEAP-INCREASE-KEY • Steps for HEAP-INCREASE-KEY, O(lg n): – Update the key of A[i] to its new value • May violate the max-heap property – Traverse a path from A[i] toward the root to find a proper place for the newly increased key: O(lg n) Example of HEAP-INCREASE-KEY O(lg n)
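The procedures on these slides translate almost line for line into code. A sketch in Python, 0-indexed (so LEFT, RIGHT and PARENT become 2i+1, 2i+2 and (i-1)//2 rather than the 1-indexed formulas above), covering MAX-HEAPIFY, BUILD-MAX-HEAP, HEAPSORT and the priority-queue operations:

```python
# Sketch of the heap procedures from these notes (0-indexed arrays).
def max_heapify(A, i, heap_size):
    """Float A[i] down until the subtree rooted at i satisfies the max-heap property."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and A[left] > A[largest]:
        largest = left
    if right < heap_size and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    """O(n): heapify every non-leaf node, bottom up."""
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

def heapsort(A):
    """O(n lg n), in place: repeatedly move the maximum to the end of the heap."""
    build_max_heap(A)
    for end in range(len(A) - 1, 0, -1):
        A[0], A[end] = A[end], A[0]
        max_heapify(A, 0, end)            # the heap now occupies A[0..end-1]

def heap_maximum(A):
    return A[0]                           # O(1)

def heap_extract_max(A):
    maximum = A[0]
    A[0] = A[-1]
    A.pop()
    max_heapify(A, 0, len(A))             # O(lg n)
    return maximum

def heap_increase_key(A, i, key):
    if key < A[i]:
        raise ValueError("new key is smaller than current key")
    A[i] = key
    while i > 0 and A[(i - 1) // 2] < A[i]:   # walk toward the root: O(lg n)
        A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
        i = (i - 1) // 2

data = [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]
heapsort(data)
print(data)   # [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]
```

The costs match the ones quoted above: build_max_heap is O(n), heapsort is O(n lg n) and in place, and each priority-queue update is O(lg n).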
{"url":"http://www.docstoc.com/docs/70124678/Heapsort-(2)","timestamp":"2014-04-25T09:43:24Z","content_type":null,"content_length":"59305","record_id":"<urn:uuid:0950f300-a9a6-4fc2-92f2-35abbfe839c0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Molecular Formula and Simplest Formula Example Problem - Determining the Molecular Formula From Simplest Formula The simplest formula for butane is C2H5 and its molecular mass is about 60. What is the molecular formula of butane? First, calculate the sum of the atomic masses for C2H5. Look up the atomic masses for the elements from the Periodic Table. The atomic masses are found to be: H is 1.01, C is 12.01. Plugging in these numbers, the sum of the atomic masses for C2H5 is 2(12.0) + 5(1.0) = 29.0 This means the formula mass of butane is 29.0. Compare the formula mass (29.0) to the approximate molecular mass (60). The molecular mass is essentially twice the formula mass (60/29 = 2.1), so the simplest formula must be multiplied by 2 to get the molecular formula: molecular formula of butane = 2 x C2H5 = C4H10 Answer The molecular formula for butane is C4H10
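The same final scaling step works for any empirical formula and is easy to automate; a tiny sketch (the 29.0 and 60 are the numbers from the example above):

```python
# Sketch: scale an empirical (simplest) formula up to the molecular formula.
def molecular_formula(empirical, empirical_mass, molecular_mass):
    n = round(molecular_mass / empirical_mass)           # should be a small whole number
    return {element: count * n for element, count in empirical.items()}

print(molecular_formula({"C": 2, "H": 5}, 29.0, 60.0))   # {'C': 4, 'H': 10} -> C4H10
```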
{"url":"http://chemistry.about.com/od/workedchemistryproblems/a/Molecular-Formula-Simplest-Formula-Example-Problem-2.htm","timestamp":"2014-04-17T12:46:06Z","content_type":null,"content_length":"42016","record_id":"<urn:uuid:c12db711-a711-4a7a-91c1-7173d291dc0b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 2006 [00508] [Date Index] [Thread Index] [Author Index] RE: 3D plots • To: mathgroup at smc.vnet.net • Subject: [mg67340] RE: [mg67218] 3D plots • From: "David Park" <djmp at earthlink.net> • Date: Sun, 18 Jun 2006 05:13:56 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com I made a plot using my DrawGraphics package. I had to use several of the commands to trim the surfaces to the domain on which they are defined. I am attaching a .gif image of one view of the two surfaces. I am also attaching the notebook I used to make the plot. (These are omitted from the MathGroup response. Anyone interested can contact me.) If you don't have DrawGraphics you won't be able to make the plots but you can see some of the other steps. (The notebook uses the style sheet from my web site, but you can let it go to the Default style.) Basically what I did was first pad the data to fit the entire rectangle. I did this by making the values constant outside the triangular region taking the closest values in the y direction. I then interpolated a fit to the upper and lower data points. I then used a feature of DrawGraphics where I create a grid of rectangular polygons and then trimmed them to fit the triangular region. Then I used another feature that will raise the grid to a 3D surface. I did this for each of the two surfaces. That gives one surface above the other. You could rotate it with SpinShow to get a better view of Because there are two surfaces involved and because they are only defined over a restricted domain, I think it would be difficult to make a plot that clearly showed the two surfaces with regular Mathematica. David Park djmp at earthlink.net From: qfwfq [mailto:qfwfq_0 at yahoo.com] To: mathgroup at smc.vnet.net Hi David, Thank you for your response. Zmax is always greater than zmin. x_coord is always greater than Next, I send you a piece of the list.
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Jun/msg00508.html","timestamp":"2014-04-20T08:54:38Z","content_type":null,"content_length":"37295","record_id":"<urn:uuid:79221285-b559-4c20-bf0d-6d23bd96c5b4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
El Sobrante Calculus Tutor Find an El Sobrante Calculus Tutor ...First, I try to find out what sparks the interest in the student and then try to relate the subject matter that he or she has difficult with to that interesting topic. For instance, if the student enjoys listening to and writing songs but has a really hard time with a mathematical concept, I wou... 24 Subjects: including calculus, chemistry, physics, geometry ...It is my goal to increase each student's ability to understand the material and to learn effectively. Mathematics is not "difficult" if it is approached the right way. Anyone with the desire can learn algebra, trigonometry, and calculus if each concept is simplified. 13 Subjects: including calculus, physics, statistics, geometry ...I have over 5 years of experience using MATLAB for different applications. In the past I have tutored undergraduate engineering students on the use of MATLAB to help solve math problems. To tutor MATLAB programming I will create a series of exercises designed to show the student the specific tools that they will need for their desired application. 15 Subjects: including calculus, Spanish, geometry, ESL/ESOL ...I am currently a computer science PhD student at Cal. Previously, I graduated from Caltech with a BS in Electrical Engineering, a BS in Business Economics and Management, and an MS in Electrical Engineering with a 4.1/4.3 GPA. I love, love, love math, and am very enthusiastic about teaching others! 27 Subjects: including calculus, chemistry, physics, geometry ...In all the courses I took in the area of Differential Equations at Berkeley and OSU, I earned A's and B's. I can help students with difficulties in D.E., once I get the student's textbook and do some review. I am now a Substitute Teacher at West Contra Costa Unified School District and have experience working with young students. 17 Subjects: including calculus, reading, physics, statistics
{"url":"http://www.purplemath.com/El_Sobrante_Calculus_tutors.php","timestamp":"2014-04-21T10:27:42Z","content_type":null,"content_length":"24038","record_id":"<urn:uuid:45f15fcd-ecb8-47ed-8b2c-59e99bce8710>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
you seem to have some quite difficult geometry problems. I'm curious, what level math are you currently studying? Are you in college? What career are you studying for? Why don't you go to the introductions forum and tell us a little about yourself? Last edited by mikau (2006-01-21 06:35:59) A logarithm is just a misspelled algorithm.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=25070","timestamp":"2014-04-18T19:05:47Z","content_type":null,"content_length":"11136","record_id":"<urn:uuid:fe9622a6-08a2-42c7-b2af-f8dc8a9c939c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Martin Heroux Overall, it seems like this will add a great deal of versatility to Matlab. Unfortunately, the documentation is very succinct and I am having difficulty getting even basic statistical tests to give me an output in Matlab. % Running a simple K-S test to determine whether two samples have the same distribution a = rand (100,1); % creating a random signal i = pi/100 :pi/100: pi; b = (sin (i))' + a; % creating second signal % Putting both signals into R % Running ks.test in R and retrieving the output evalR ('output <- ks.test(x, y)') getRdata ('output') I can run this code in R and get the output from the statistical test. However, when I try to get the R data (getRdata), Matlab give me the following error message: Error using ==> getRdata at 47 Could not get output. Invoke Error, Dispatch Exception: There is no connection for this connection ID Any suggestions on how to get the output into Matlab.
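For what it's worth, the two-sample K-S statistic itself is easy to sanity-check outside the MATLAB-R bridge (the snippet above presumably transferred a and b into R before calling ks.test, though that transfer step isn't shown). A minimal check in Python/SciPy, mirroring the same two signals:

import numpy as np
from scipy.stats import ks_2samp

a = np.random.rand(100)              # analogue of  a = rand(100,1)
i = np.arange(1, 101) * np.pi / 100  # analogue of  i = pi/100:pi/100:pi
b = np.sin(i) + a                    # analogue of  b = (sin(i))' + a

stat, pvalue = ks_2samp(a, b)        # two-sample Kolmogorov-Smirnov test
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.4f}")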
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/authors/60522","timestamp":"2014-04-21T12:48:36Z","content_type":null,"content_length":"20735","record_id":"<urn:uuid:9017d4a4-79ac-4cfc-b667-e9ede56817af>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector notation December 6th 2013, 04:23 PM Vector notation "x(i) is a vector indexed by time i, x_{j}(i) is the j-th component of x(i)" Could anyone please explain to me what this notation really means? What does it mean for a vector to be "indexed by time"? In the same context there is also this: "x^n = (x(1), x(2), ..., x(n))" December 6th 2013, 04:41 PM Re: Vector notation "x(i) is a vector indexed by time i, x_{j}(i) is the j-th component of x(i)" Could anyone please explain to me what this notation really means? What does it mean for a vector to be "indexed by time"? In the same context there is also this: "x^n = (x(1), x(2), ..., x(n))" Think of the position of a satellite or a baseball or anything moving through space. At any given point in time t0 it has 3 spatial coordinates {x, y, z} These change as the thing moves. It will be at {x0, y0, z0} at t0, and {x1, y1, z1} at t1, etc. You can treat this as a vector of 3 spatial components, that is further indexed by a time component. Now just make your space n dimensional rather than 3. Sure a baseball doesn't move through n dimensional space but you know what I mean. You can index these n dimensional spatial vectors by time just as you did with the 3 dimensional ones. These spatial vectors don't have to be "space" space either. The can be the configuration space of some huge dynamical system. They are just vectors that change with time and you are capturing snapshots of them at times t0, t1, t2.. tk, etc. December 7th 2013, 05:30 PM Re: Vector notation Think of the position of a satellite or a baseball or anything moving through space. At any given point in time t0 it has 3 spatial coordinates {x, y, z} These change as the thing moves. It will be at {x0, y0, z0} at t0, and {x1, y1, z1} at t1, etc. You can treat this as a vector of 3 spatial components, that is further indexed by a time component. Now just make your space n dimensional rather than 3. Sure a baseball doesn't move through n dimensional space but you know what I mean. You can index these n dimensional spatial vectors by time just as you did with the 3 dimensional ones. These spatial vectors don't have to be "space" space either. The can be the configuration space of some huge dynamical system. They are just vectors that change with time and you are capturing snapshots of them at times t0, t1, t2.. tk, etc. So x^n = (x(1), x(2), ... , x(n)) means the vector (or set?) of all the vectors, iterated over the allowed time domain (all possible n)? December 7th 2013, 05:45 PM Re: Vector notation no. There's no reason your time index and space index have any dependence on each other. You can take M samples of an N dimensional vector. A better notation would be let x = {x[1], x[2], .... , x[n]} your time indexed spatial vector x is now x[j] or x[j] with j as your time index. If you need to get at individual pieces of x you can use (x[j])[i], or x[i][j], i=1,N, j=0, #samples There's nothing written in stone about notation. Just try to be clear. December 8th 2013, 08:35 AM Re: Vector notation no. There's no reason your time index and space index have any dependence on each other. You can take M samples of an N dimensional vector. A better notation would be let x = {x[1], x[2], .... , x[n]} your time indexed spatial vector x is now x[j] or x[j] with j as your time index. If you need to get at individual pieces of x you can use (x[j])[i], or x[i][j], i=1,N, j=0, #samples There's nothing written in stone about notation. Just try to be clear. So the notation is unclear? 
I can agree with that, this notation is from a set of lecture notes. I would be happy to use a different notation or read it in a different way, but I still don't understand what this notation really means. It is stated that x refers to values of random column vectors with specified dimensions. Is n in x^n the dimension? Or is it something else? In x^n = (x(1), x(2), ..., x(n)) what is e.g. x(1)? From the earlier definition I mentioned "x(i) is a vector indexed by time i, x_{j}(i) is the j-th component of x(i)" it seems that x(1) is a vector indexed by time 1. Hence this vector (x(1), x (2), ..., x(n)) is a vector of all the vectors, indexed from 1 to n. If it was a set, wouldn't they have used curly brackets { } to denote the set? December 8th 2013, 09:13 AM Re: Vector notation You're losing me. Where does randomness come into this? I don't understand what you mean by x^n. You're making this much harder than it is. If X is an n dimensional vector that changes in time so you call call it X(t), and I take snapshots of it at t[0], t[1], t[k], etc. I can then refer to those vector snapshots as X[t[k]] or X[k] where it's understood that X is still an n dimensional vector. If you need to get at the components of X at a given time t[k], again there is no notation carved in stone, just be clear {X[t[k]]}[i] works where i the vector component you want. X[tk][i] also works. Just be clear and consistent. December 8th 2013, 09:52 AM Re: Vector notation You're losing me. Where does randomness come into this? I don't understand what you mean by x^n. You're making this much harder than it is. If X is an n dimensional vector that changes in time so you call call it X(t), and I take snapshots of it at t[0], t[1], t[k], etc. I can then refer to those vector snapshots as X[t[k]] or X[k] where it's understood that X is still an n dimensional vector. If you need to get at the components of X at a given time t[k], again there is no notation carved in stone, just be clear {X[t[k]]}[i] works where i the vector component you want. X[tk][i] also works. Just be clear and consistent. These lecture notes use lower case letters, x, y, z to denote random variables. However, the same lecture notes (and this is specified in the notes) also denote random column vectors with lower case variables, like x, y, z, with a specified dimension.
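One way to make the "indexed by time" idea concrete is to hold the snapshots in a two-dimensional array, with one index for time and one for the vector component. A small illustrative sketch (Python/NumPy, invented numbers):

import numpy as np

n_steps, dim = 5, 3                  # 5 time samples of a 3-dimensional vector
x = np.arange(n_steps * dim, dtype=float).reshape(n_steps, dim)

x_at_2 = x[2]        # the whole vector x(2), shape (3,)
x_1_of_2 = x[2, 1]   # the component x_1(2)
x_n = x              # the collection (x(0), ..., x(n-1)): an n-by-dim array, not a set

print(x_at_2, x_1_of_2, x_n.shape)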
{"url":"http://mathhelpforum.com/advanced-math-topics/224877-vector-notation-print.html","timestamp":"2014-04-16T05:14:23Z","content_type":null,"content_length":"13638","record_id":"<urn:uuid:1b2df3c6-e697-40d8-b024-0ed8c977cc76>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Is It True That Manhattan Island Was Bought From the Indians for $24?
What Peter Minuit gave the Manhattoe tribe was a package of trinkets and cloth valued at 60 guilders, roughly equivalent to $24.
Category: American History, Answers
One Comment
1. Today’s value of the real estate on Manhattan is roughly 50 billion US$ (according to my Public Library for the years 1991 and 1992). Assume the land accounts for a quarter of this, i.e., 12.5 billion US$. Now, if Minuit had put this money into a bank (and until 2008, they were safe places) for, say, an interest rate of 5.4%, then – after 382 years – his payout in 2008 would have been $24*1.054^382 = $12,744,552,672.33, hence a bit more than today’s value of what he traded in. Silly guy, wasn’t he?
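The commenter's arithmetic is easy to reproduce; note that the 5.4% rate and the 382-year span are the commenter's assumptions, not established facts:

# Compound growth of $24 at 5.4% per year for 382 years.
principal, rate, years = 24.0, 0.054, 382
future_value = principal * (1 + rate) ** years
print(f"${future_value:,.2f}")   # on the order of $12.7 billion, consistent with the quoted figure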
{"url":"http://answersuniverse.com/is-it-true-that-manhattan-island-was-bought-from-the-indians-for-24/","timestamp":"2014-04-21T12:08:41Z","content_type":null,"content_length":"21788","record_id":"<urn:uuid:a98d7d73-5143-41d9-9d09-e87f2f5d5290>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Learn smart with cbsenext
From Yahoo Answers
Question: Given the points A(-1,2,1), B(0,1,1) and C(7,-3,0)... (a) find an equation of the plane through A, B and C. (b) find the area of the triangle ABC. For (a) I thought that I had to set up a system of equations to solve for the normal vector, but that doesn't seem to be working out. Any help at all would be lovely.
Answers: (a) The vector from A to B, which is (0-(-1), 1-2, 1-1) = (1, -1, 0), and the vector from A to C, which is (7-(-1), -3-2, 0-1) = (8, -5, -1), both lie in the plane. You can therefore find a normal vector to the plane by calculating the cross product of those two vectors. (I'll let you do that part.) Since I didn't figure out that cross product, I'll just call it (a, b, c). You now know that the equation of the plane is ax+by+cz=d, where a, b, and c are the components of the cross product you just found. To find d: 1. Substitute the coordinates of one of the three given points (point B would probably be the easiest) for x, y, and z in ax+by+cz=d, and... 2. Solve for d. (b) The magnitude of the cross product of (1, -1, 0) and (8, -5, -1) is the area of the parallelogram formed by those two vectors. The parallelogram consists of triangle ABC together with an identical copy of it, so the area of the triangle is half that of the parallelogram; that is to say, it's 1/2 times the magnitude of the cross product.
Question: To me questions like these force me to think outside the box. Our history and our universe is on one timeline. We wake up, live our day, then go to sleep. This is what we know and we discover all of what we know in this timeline about the world surrounding us and inside of us. I consider this timeline to be viewed in one dimension. Has anyone considered the possibility of more than one timeline on separate axes existing all around our timeline? i.e. Parallel dimensions. What are the best theories on parallel dimensions? Are gaps between timelines considered? Inter-timeline travel? Properties concerning each timeline: A timeline (us): one time interval, days are 24 hours. B timeline: objects could disappear and reappear in one time interval, while living organisms exist in another time interval. C timeline: the universe is growing and collapsing at the same time.
Answers: Time is one dimension, one of the famous four: height, width, depth, time, using commonplace terms. I could also write (i, j, k, t); where i, j, and k are unit vectors (e.g., i dot i = 1) designating the three spatial dimensions. And t would be the fourth dimension. Time is real... it's not just the passage of events, like rising, showering, breakfasting, etc. Time passes even if no events take place. Time can be stretched out so that, for example, it would take 2 Earth seconds for 1 second to tick off on a very fast spaceship. Such a stretch is called dilation and this phenomenon demonstrates that time can be manipulated. That is a prime clue that time is real. If it were not real, we wouldn't be able to dilate it. This dilation can be used to travel into the future. For instance, if a star trekker in the above example traveled one year according to his clock on the spaceship and then returned to Earth, he would find Earth time had advanced two years. In other words, the spaceman would find himself one year into his future when he stepped out of the ship. As to "parallel dimensions" you are mixing concepts. In fact it's higher dimensions and parallel universes. [See source.]
String theory posits up to 11 dimensions instead of the conventional four we know and love. One aspect of the theory suggests the other seven (all spatial) are simply curled up so tiny (1 Planck length = 10^-33 cm) that we can't see them. But strings, because they are also tiny, can see the extra dimensions and are constrained by them just as we are constrained by the four dimensions of our universe. One WAG of string theory is the parallel universe. Each universe is like a slice of bread in a mega universe loaf. Each slice is separated by 1 Planck length and 1 Planck time (which is also very tiny but I've forgotten the number). One SWAG resulting from the WAG is that two or more of the parallel universes collided and rebounded. That change in momentum over time gave rise to the tremendous energy we call the Big Bang. Thus, there would be a BB in our universe as well as another BB in the universe that collided with us. And so, if we count the BB as t = 0, the timelines of the two BBs start at the same time in the two parallel universes. That is not to say the chains of events are identical... it's unlikely they are. But time, a real dimension, will be identical. There would be a gap between the timelines of the two parallel universes. The time length of that discontinuity would be 1 Planck time. And there would be a spatial gap of 1 Planck length between the two after they rebound from the collision. Both these gaps result because, theoretically, the 1 Planck time and the 1 Planck length are the smallest possible intervals in time and space. Unless the makeup of the two colliding universes was significantly different, the makeup of the two BBs ought to be about the same. So the uniform initial energies of both would go through the same evolutions and end up with the same kind of galaxies, planets, and energies. This suggests there might be living, intelligent beings living out their lives in the parallel universe... wondering if there is life out there.
Question: Find 3 planes which intersect at the line r = (4,-5,6) + t(2,0,-1). Help? Why are you considering lines parallel to the x axis?
Answers: The dot product of the directional vector v of the given line with the normal vector (n1, n2, or n3) of any plane that contains the given line is zero. v = <2, 0, -1>, n1 = <1, 0, 2>, n2 = <1, 1, 2>, n3 = <1, 2, 2>. Point P(4, -5, 6) is on the line, and therefore lies in all three desired planes. With the normal vector of the plane and a point in the plane we can write the equation of the plane. Remember, the normal vector is orthogonal to any vector that lies in the plane. And the dot product of orthogonal vectors is zero. Define R(x,y,z) to be an arbitrary point in the plane. Then vector PR lies in the plane, so n1 · PR = 0, n2 · PR = 0 and n3 · PR = 0; that is, <1, 0, 2> · <x-4, y+5, z-6> = 0, <1, 1, 2> · <x-4, y+5, z-6> = 0, and <1, 2, 2> · <x-4, y+5, z-6> = 0. Multiply out the dot products and you will have the equations of three planes whose intersection is the given line.
From Youtube
Finding the Scalar Equation of a Plane - In this video, I discuss the formula to find the scalar equation of a plane, how to derive it, and a simple example using it! For more free math videos, visit
Vector Plane Equation - Here's a quick explanation as to the origin of the vector plane equation.
Two Planes Laying Parallel Trails - I filmed Two Planes Laying Parallel Trails today. The busiest trail day for several weeks.
Two Parallel Planes
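Both geometry questions above can be checked numerically. A short sketch (Python/NumPy) that computes the plane through A, B, C, the area of the triangle, and verifies the three planes containing the given line:

import numpy as np

# (a) plane through A, B, C and (b) area of triangle ABC
A = np.array([-1.0, 2.0, 1.0])
B = np.array([0.0, 1.0, 1.0])
C = np.array([7.0, -3.0, 0.0])
normal = np.cross(B - A, C - A)        # -> [1., 1., 3.]
d = normal @ A                         # -> 4.0, so the plane is x + y + 3z = 4
area = 0.5 * np.linalg.norm(normal)    # -> sqrt(11)/2, about 1.658

# three planes containing the line r = (4,-5,6) + t(2,0,-1): each normal must be
# orthogonal to the direction vector, and each plane must pass through P = (4,-5,6)
v = np.array([2.0, 0.0, -1.0])
P = np.array([4.0, -5.0, 6.0])
for n in ([1, 0, 2], [1, 1, 2], [1, 2, 2]):
    n = np.array(n, dtype=float)
    assert np.isclose(n @ v, 0.0)      # normal is orthogonal to the line's direction
    print(n, "dot (x,y,z) =", n @ P)   # x + 2z = 16, x + y + 2z = 11, x + 2y + 2z = 6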
{"url":"http://www.cbsenext.com/cfw/equation-of-a-plane-parallel-to-another-plane","timestamp":"2014-04-18T05:29:51Z","content_type":null,"content_length":"24684","record_id":"<urn:uuid:068c685c-8755-47ba-8268-e3153249b0fc>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
%0 - DATA
%A - Tim Evans
%8 - 2012/08/29
%T - Netplexity - The Complexity of Interactions in the Real World
%U - http://figshare.com/articles/Netplexity_-The_Complexity_of_Interactions_in_the_Real_World/95469
%1 - http://dx.doi.org/10.6084/m9.figshare.95469
%2 - http://files.figshare.com/98011/Istanbul2011GeneralTalkTimEvansNoExtra.pdf
%K - Complex Networks
%X - "Netplexity - The Complexity of Interactions in the Real World". General overview talk on Networks and Complexity given by Dr Tim Evans, Theoretical Physics, Imperial College London, on Monday 5th September 2011. http://imperial.ac.uk/people/t.evans or search for "Tim Evans Networks". Feza Gürsey Institute - Imperial College International Summer School and Research Workshop on Complexity, Istanbul, 5th-10th September 2011. Slides from the mathematical talk at http://dx.doi.org/10.6084/m9.figshare.96209
{"url":"http://figshare.com/articles/exportend?id=95469","timestamp":"2014-04-19T13:24:31Z","content_type":null,"content_length":"6070","record_id":"<urn:uuid:6a2084fc-b3d5-4fec-acce-be7ef4f7b275>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from March 9, 2012 on Programming Praxis

The data structure described here stores a sparse set of integers drawn from a universe 0 to u−1, with initialization, lookup, and insertion in O(1) time and iteration in O(n) time, where n is the number of elements in the set. The data structure was studied in a 1993 article “An Efficient Representation for Sparse Sets” by Preston Briggs and Linda Torczon, in Exercise 1.9 (1.8 in the first edition) of Jon Bentley’s book Programming Pearls, and in exercise 2.12 of the 1974 book The Design and Analysis of Computer Algorithms by Al Aho, John Hopcroft and Jeffrey Ullman; the data structure itself dates to the folklore of computing.

The data structure considers a universe of integers from 0 to u−1; depending on the circumstances, the integers probably map to something else, but we don’t care about that. Any given set consists of n items chosen from the universe; there are no duplicates. Note that n ≤ u, certainly, and likely n is much less than u — otherwise, you would probably use a bit vector to represent the set. Note also that we are optimizing for speed at the expense of space, as a bit vector takes u bits but our data structure takes 2u integers.

Think about a bit vector. Setting a bit is a constant-time operation, as is checking if a bit is set or unset. But initializing the bit vector and iterating over the set elements of the bit vector each take time proportional to the size of the bit vector. Our sparse sets reduce the iteration to time proportional to the size of the set (rather than the size of the universe) and reduce the initialization time to a constant.

The sparse set is represented by two vectors that we will call dense (abbreviated D) and sparse (abbreviated S). Initially n, the number of elements in the set, is zero; the two vectors are uninitialized and may contain anything. To add an element 0 ≤ k < u to a set that does not already contain k, we set D[n] to k, S[k] to n, and increase n by 1, an operation that takes constant time. After this, the two vectors point to each other, which gives a test of set membership that also works in constant time: an element k is in the set if and only if S[k] < n and D[S[k]] == k. Note that if k is not a member of the set, the value of S[k] doesn’t matter; either S[k] will be greater than n or it will point to an element of D that doesn’t point back to it. The diagram above right shows a set with the elements 5, 1 and 4; the blue boxes may contain any value.

To iterate over the elements of the set, read D[0 .. n−1], which takes time O(n), and to clear the set make n = 0, which takes time O(1); note in particular that clearing the set doesn’t require reinitialization. Other operations, including size-of, delete, union, intersection, difference, and set-equality are possible, and equally time-efficient compared to bit vectors, but we won’t discuss them here, since they are seldom used with this representation of sets. A common use of these sparse sets is with register allocation algorithms in compilers, which have a fixed universe (the number of registers in the machine) and are updated and cleared frequently during a single processing run.

Your task is to implement the insert, lookup, iterate and clear operations for sparse sets as described above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
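One possible sketch of the four requested operations in Python (the suggested solutions on Programming Praxis are usually in Scheme). Python lists are zero-filled rather than truly uninitialized, but the logic is the same:

# Sparse set over the universe 0 .. universe-1, as described above.
class SparseSet:
    def __init__(self, universe):
        self.dense = [0] * universe    # D: the first n entries are the members
        self.sparse = [0] * universe   # S: S[k] points at k's slot in D (if k is present)
        self.n = 0

    def member(self, k):               # O(1) lookup: S[k] < n and D[S[k]] == k
        s = self.sparse[k]
        return s < self.n and self.dense[s] == k

    def insert(self, k):               # O(1) insertion, ignoring duplicates
        if not self.member(k):
            self.dense[self.n] = k
            self.sparse[k] = self.n
            self.n += 1

    def clear(self):                   # O(1): no reinitialization of the arrays
        self.n = 0

    def __iter__(self):                # O(n) iteration over the members
        return iter(self.dense[:self.n])

s = SparseSet(10)
for k in (5, 1, 4, 5):
    s.insert(k)
print(list(s), s.member(4), s.member(7))   # [5, 1, 4] True False

The usage line mirrors the example in the text: the set {5, 1, 4} is held in the first three slots of D, and membership tests never look at the untouched slots.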
{"url":"http://programmingpraxis.com/2012/03/09/","timestamp":"2014-04-19T14:32:26Z","content_type":null,"content_length":"33334","record_id":"<urn:uuid:c0b96896-059e-4226-a199-807c5a530c4a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Complexity Zoo: E

E: Exponential Time With Linear Exponent
Equals DTIME(2^O(n)). Does not equal NP [Boo72] or PSPACE [Boo74] relative to any oracle. However, there is an oracle relative to which E is contained in NP (see ZPP), and an oracle relative to which PSPACE is contained in E (by equating the former with P). There exists a problem that is complete for E under polynomial-time Turing reductions but not polynomial-time truth-table reductions [Wat87]. Problems hard for BPP under Turing reductions have measure 1 in E [AS94]. It follows that, if the problems complete for E under Turing reductions do not have measure 1 in E, then BPP does not equal EXP. [IT89] gave an oracle relative to which E = NE but still there is an exponential-time binary predicate whose corresponding search problem is not in E. [BF03] gave a proof that if E = NE, then no sparse set is collapsing, where they defined a set $A$ to be collapsing if $A \notin \mathsf{P}$ and if for all $B$ such that $A$ and $B$ are Turing reducible to each other, $A$ and $B$ are many-to-one reducible to each other. Contrast with EXP.

EE: Double-Exponential Time With Linear Exponent
Equals DTIME(2^2^O(n)) (though some authors alternatively define it as being equal to DTIME(2^O(2^n))). EE = BPE if and only if EXP = BPP [IKW01]. Contained in EEXP and NEE.

EEE: Triple-Exponential Time With Linear Exponent
Equals DTIME(2^2^2^O(n)). In contrast to the case of EE, it is not known whether EEE = BPEE implies EE = BPE [IKW01].

EESPACE: Double-Exponential Space With Linear Exponent
Equals DSPACE(2^2^O(n)). Is not contained in BQP/qpoly [NY03].

EEXP: Double-Exponential Time
Equals DTIME(2^2^p(n)) for p a polynomial. Also known as 2-EXP. Contains EE, and is contained in NEEXP.

EH: Exponential-Time Hierarchy With Linear Exponent
Has roughly the same relationship to E as PH does to P. More formally, EH is defined as the union of E, NE, NE^NP, NE with Σ[2]P oracle, and so on. See [Har87] for more information. If coNP is contained in AM[polylog], then EH collapses to S[2]-EXP P^NP [SS04] and indeed AM[EXP] [PV04]. On the other hand, coNE is contained in NE/poly, so perhaps it wouldn't be so surprising if NE collapses. There exists an oracle relative to which EH does not contain SEH [Hem89]. EH and SEH are incomparable for all anyone knows.

ELEMENTARY: Iterated Exponential Time
Equals the union of DTIME(2^n), DTIME(2^2^n), DTIME(2^2^2^n), and so on. Contained in PR.

EL[k]P: Extended Low Hierarchy
An extension of L[k]P. The class of problems A such that Σ[k]P^A is contained in Σ[k-1]P^A,NP. Defined in [BBS86].

EP: NP with 2^k Accepting Paths
The class of decision problems solvable by an NP machine such that
1. If the answer is 'no,' then all computation paths reject.
2. If the answer is 'yes,' then the number of accepting paths is a power of two.
Contained in C[=]P, and in Mod[k]P for any odd k. Contains UP. Defined in [BHR00].

EPTAS: Efficient Polynomial-Time Approximation Scheme
The class of optimization problems such that, given an instance of length n, we can find a solution within a factor 1+ε of the optimum in time f(ε)p(n), where p is a polynomial and f is arbitrary. Contains FPTAS and is contained in PTAS. Defined in [CT97], where the following was also shown:
• If FPT = XP[uniform] then EPTAS = PTAS.
• If EPTAS = PTAS then FPT = W[P].
• If FPT is strictly contained in W[1], then there is a natural problem that is in PTAS but not in EPTAS. (See [CT97] for the statement of the problem, since it's not that natural.)
k-EQBP: Width-k Polynomial-Time Exact Quantum Branching Programs See k-PBP for the definition of a classical branching program. A quantum branching program is the natural quantum generalization: we have a quantum state in a Hilbert space of dimension k. Each step t consists of applying a unitary matrix U^(t)(x[i]): that is, U ^(t) depends on a single bit x[i] of the input. (So these are the quantum analogues of so-called oblivious branching programs.) In the end we measure to decide whether to accept; there must be zero probability of error. Defined in [AMP02], where it was also shown that NC^1 is contained in 2-EQBP. k-BQBP can be defined similarly. EQP: Exact Quantum Polynomial-Time The same as BQP, except that the quantum algorithm must return the correct answer with probability 1, and run in polynomial time with probability 1. Unlike bounded-error quantum computing, there is no theory of universal QTMs for exact quantum computing models. In the original definition in [BV97], each language in EQP is computed by a single QTM, equivalently to a uniform family of quantum circuits with a finite gate set K whose amplitudes can be computed in polynomial time. See EQP[K]. However, some results require an infinite gate set. The official definition here is that the gate set should be finite. Without loss of generality, the amplitudes in the gate set K are algebraic numbers [ADH97]. There is an oracle that separates EQP from NP [BV97], indeed from Δ[2]P [GP01]. There is also an oracle relative to which EQP is not in Mod[p]P where p is prime [GV02]. On the other hand, EQP is in LWPP [FR98]. P^||NP[2k] is contained in EQP^||NP[k], where "||NP[k]" denotes k nonadaptive oracle queries to NP (queries that cannot depend on the results of previous queries) [BD99]. See also ZBQP. EQP[K]: Exact Quantum Polynomial-Time with Gate Set K The set of problems that can be answered by a uniform family of polynomial-sized quantum circuits whose gates are drawn from a set K, and that return the correct answer with probability 1, and run in polynomial time with probability 1, and the allowed gates are drawn from a set K. K may be either finite or countable and enumerated. If S is a ring, the union of EQP[K] over all finite gate sets K whose amplitudes are in the ring R can be written EQP[S]. Defined in [ADH97] in the special case of a finite set of 1-qubit gates controlled by a second qubit. It was shown there that transcendental gates may be replaced by algebraic gates without decreasing the size of EQP[K]. [FR98] show that EQP[Q] is in LWPP. The proof can be generalized to any finite, algebraic gate set K. The hidden shift problem for a vector space over Z/2 is in EQP[Q] by Simon's algorithm. The discrete logarithm problem over Z/p is in EQP[Q-bar] using infinitely many gates [MZ03]. EQTIME(f(n)): Exact Quantum f(n)-Time Same as EQP, but with f(n)-time (for some constructible function f) rather than polynomial-time machines. Defined in [BV97]. ESPACE: Exponential Space With Linear Exponent Equals DSPACE(2^O(n)). If E = ESPACE then P = BPP [HY84]. Indeed if E has nonzero measure in ESPACE then P = BPP [Lut91]. ESPACE is not contained in P/poly [Kan82]. Is not contained in BQP/mpoly [NY03]. See also: EXPSPACE. ∃BPP: BPP With Existential Operator The class of problems for which there exists a BPP machine M such that, for all inputs x, • If the answer is "yes" then there exists a y such that M(x,y) accepts. • If the answer is "no" then for all y, M(x,y) rejects. Alternatively defined as NP^BPP. 
Contains NP and BPP, and is contained in MA and SBP. ∃BPP seems obviously equal to MA, yet [FFK+93] constructed an oracle relative to which they're unequal! Here is the difference: if the answer is "yes," MA requires only that there exist a y such that for at least 2/3 of random strings r, M(x,y,r) accepts (where M is a P predicate). For all other y's, the proportion of r's such that M(x,y,r) accepts can be arbitrary (say, 1/2). For ∃BPP, by contrast, the probability that M(x,y) accepts must always be either at most 1/3 or at least 2/3, for all y's.

∃NISZK: NISZK With Existential Operator
Contains NP and NISZK, and is contained in the third level of PH.

EXP: Exponential Time
Equals the union of DTIME(2^p(n)) over all polynomials p. Also equals P with E oracle. If L = P then PSPACE = EXP. If EXP is in P/poly then EXP = MA [BFL91]. Problems complete for EXP under many-one reductions have measure 0 in EXP [May94], [JL95]. There exist oracles relative to which [BT04] show the following rather striking result: let A be many-one complete for EXP, and let S be any set in P of subexponential density. Then A-S is Turing-complete for EXP. [SM03] show that if EXP has circuits of polynomial size, then P can be simulated in MA[POLYLOG] such that no deterministic polynomial-time adversary can generate a list of inputs for a P problem that includes one which fails to be simulated. As a result, EXP ⊆ MA if EXP has circuits of polynomial size. [SU05] show that EXP $\not\subseteq$ NP/poly implies EXP $\not\subseteq$ P^||NP/poly. In descriptive complexity, EXPTIME can be defined as SO($2^{n^{O(1)}}$), which is also SO(LFP).

EXP/poly: Exponential Time With Polynomial-Size Advice
The class of decision problems solvable in EXP with the help of a polynomial-length advice string that depends only on the input length. Contains BQP/qpoly [Aar04b].

EXPSPACE: Exponential Space
Equals the union of DSPACE(2^p(n)) over all polynomials p. See also: ESPACE. Given a first-order statement about real numbers, involving only addition and comparison (no multiplication), we can decide in EXPSPACE whether it's true or not [Ber80].
{"url":"https://complexityzoo.uwaterloo.ca/Complexity_Zoo:E","timestamp":"2014-04-18T02:58:33Z","content_type":null,"content_length":"41871","record_id":"<urn:uuid:b70b9833-e1e7-49c4-bb1d-9b4db6dcc0e0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
A Gibbs sampler for Bayesian analysis of site-occupancy data You have free access to this content A Gibbs sampler for Bayesian analysis of site-occupancy data 1. Robert M. Dorazio^* and 2. Daniel Taylor Rodríguez Article first published online: 20 AUG 2012 DOI: 10.1111/j.2041-210X.2012.00237.x Methods in Ecology and Evolution Additional Information How to Cite Dorazio, R. M. and Rodríguez, D. T. (2012), A Gibbs sampler for Bayesian analysis of site-occupancy data. Methods in Ecology and Evolution, 3: 1093–1098. doi: 10.1111/j.2041-210X.2012.00237.x Publication History 1. Issue published online: 11 DEC 2012 2. Article first published online: 20 AUG 2012 3. Received 27 April 2012; accepted 5 July 2012 Handling Editor: Nigel Yoccoz • Markov chain Monte Carlo; • probit regression; • proportion of area occupied; • species distribution model; • species occurrence 1.A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2.As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3.To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data. The class of site-occupancy models developed independently by MacKenzie et al. (2002) and Tyre et al. (2003) is widely used in the analysis of presence-absence data collected in surveys of natural populations. These models extend conventional types of binary-regression models to account for errors in detection of individuals, which are common in surveys of animal or plant populations. Site-occupancy models use repeated surveys within sample locations or other measures of survey effort to resolve the ambiguity of an observed zero, which can occur if a species is absent at a sample location or if a species is present but undetected. Therefore, the probabilities of species presence (occurrence) and species detection given presence are estimated together when site-occupancy models are fitted to presence-absence data (more correctly, detection/non-detection data). The collection and analysis of site-occupancy data may be used to address a variety of ecological inference problems that require accurate predictions of species occurrence. For example, metapopulation models (Hanski & Gilpin 1997) are often specified in terms of patch occupancy (site occupancy). In this context, the proportion of area occupied (PAO) by a species in a collection of sites may be relevant. 
Similarly, species distribution models (Scott et al. 2002; Elith & Leathwick 2009) are used to predict the spatial pattern of species occurrences over a species’ geographic range or over a subset of that range that has scientific or operational relevance. In both examples, a quantitative (functional) relationship between species occurrence probability and one or more aspects of its environment must be estimated accurately (i.e. free of bias from detection errors). Given sufficient data, site-occupancy models can be used to estimate this relationship accurately ( MacKenzie et al. 2006) and to predict species occurrence probability at sampled or unsampled locations (Kéry et al. 2010). Other species distribution models that do not account for the effects of detection errors (e.g. binary-regression models) generally produce biased predictions of species occurrence probability. Classical methods, such as maximum likelihood, can be used to estimate the parameters of site-occupancy models, and software exists for calculating these estimates [see programs presence (http:// www.mbr-pwrc.usgs.gov/software/presence.html)] and unmarked (Fiske & Chandler 2011)]. Once computed, the maximum likelihood estimates (MLEs) of the parameters can be used to predict species occurrence probability at sampled or unsampled locations, although it may be challenging to obtain accurate estimates of the uncertainty of these predictions. For example, parametric bootstrapping can be used to estimate the uncertainty of the predictions (Laird & Louis 1987), but this approach generally requires substantial computational effort. In a Bayesian analysis, a model's parameters and its predictions are treated identically in the sense that all inferences are based on the posterior distribution of the model's parameters (Gelman et al. 2004). Inferences about predictions account for uncertainty in the model's parameters because the distribution of these predictions is obtained by averaging (marginalizing) over the posterior distribution of the parameters. Furthermore, these inferences are valid regardless of sample size because they do not rely on asymptotic approximations, unlike classical (non-Bayesian) methods. For these reasons, Bayesian methods of estimation and inference provide an attractive and useful alternative for ecological problems that require predictions of species occurrence. The probability density function of the posterior distribution of a site-occupancy model's parameters cannot be expressed in closed form owing to analytically intractable integrals. Therefore, stochastic simulation methods, such as Markov chain Monte Carlo (MCMC), are typically used to estimate summaries of the posterior distribution (Geyer 2011). Software to implement these methods is available and includes the programs winbugs (http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml), openbugs (http://www.openbugs.info) and jags (http://mcmc-jags.sourceforge.net), all of which have been used to conduct Bayesian analyses of site-occupancy data (MacKenzie et al. 2006; Royle & Dorazio 2008; Kéry 2010; Link & Barker 2010; Dorazio et al. 2011; Kéry & Schaub 2012). These programs are popular largely because they only require users to specify the underlying assumptions of a model. The technical details of constructing and implementing a MCMC algorithm are accomplished by the software with either limited or no control by the user. 
While this division of labour may seem desirable, considerable experience is often required to specify a model and to initialize the Markov chain so that the software constructs an appropriate algorithm. Model specification includes several choices – parameterization (hierarchically centred or not), priors and hyperparameter values, and functions to link the probabilities of species occurrence and detection to the effects of covariates on these probabilities. Initializing the Markov chain is not difficult and the software can even assign some parameters without user input; however, users must be careful not to assign parameter values that have low (or zero) posterior probability. For example, a site-specific parameter for species occurrence must not be initialized at zero (absence) if the species is detected during one or more surveys of the site and doing so generates an error message that can be difficult to interpret without user experience. Model specification and initialization is particularly challenging when attempting to analyse site-occupancy data for multiple species with existing software (Dorazio et al. 2010, 2011). Appendix S1 of Kéry & Schaub (2012) contains some commonly encountered problems and workarounds when using winbugs. Given the potential for difficulties with existing software, it would seem useful to have a MCMC algorithm developed specifically for the analysis of site-occupancy data. Gibbs sampling algorithms are available for relatively simple site-occupancy models wherein species occurrence probability is constant and species detection probability is constant within surveys [(page 107 of Royle & Dorazio 2008) and (pages 177–178 of Link & Barker 2010)], but MCMC algorithms have not been developed for more complex models that contain the effects of site-specific covariates of occurrence and site- or survey-specific covariates of detection. For these models, a common choice of prior distribution and parameterization (multivariate normal priors of logit-scale parameters) leads to conditional posterior distributions that do not have familiar forms and must be sampled using specialized algorithms that require tuning (e.g. Metropolis–Hastings). These algorithms are inherently less efficient than Gibbs sampling because only a fraction of the proposed samples is accepted and tuning is usually needed to obtain desirable acceptance rates. In this paper, we show that a Bayesian analysis of site-occupancy data can be carried out accurately and efficiently using Gibbs sampling when the model is specified using probit-scale parameters and uniform or multivariate normal priors. To illustrate this Gibbs sampling algorithm, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. These data were analysed by Kéry et al. (2010) using the method of maximum likelihood and a logit-scale parameterization of the site-occupancy model. Here, we compare the results of using Bayesian and classical (non-Bayesian) methods of inference. We also provide the code used in this analysis, which was written using the R software program (R Development Core Team 2012). Materials and methods Site-Occupancy Models as Probit Regressions of Occurrence and Detection Probabilities In the standard sampling protocol for collecting site-occupancy data, J > 1 independent surveys are conducted at each of n representative sample locations (sites) noting whether a species is detected or not detected during each survey. 
Let y[ij] denote a binary random variable that indicates detection (y = 1) or non-detection (y = 0) during the jth survey of site i. Without loss of generality, we assume J is constant among all n sites to simplify the description of the model. In practice, site-specific differences in J pose no real difficulties and are relatively easy to implement. The standard sampling protocol yields a n × J matrix Y of detection/non-detection data. Site-occupancy models of detection/non-detection data may be represented as hierarchical models of the following form: where ψ[i] = Pr(z[i] = 1) denotes the probability of species presence (occurrence) at site i and where p[ij] = Pr(y[ij] = 1|z[i] = 1) denotes the conditional probability of detecting the species during the jth survey of site i given that the species is present at site i (Royle & Kéry 2007; Royle & Dorazio 2008). Suppose covariates thought to be informative of species occurrence have been measured at each of the n sample sites and the measurements are included in a n × (r + 1) matrix X of r regressors. (The first column of X is a vector of ones to accommodate an intercept parameter in the model of occurrence probability.) A probit-regression formulation β of the regressors x[i] on species occurrence probability at site i. (A superscript T is used to indicate the transpose of a matrix or vector.) Similarly, suppose covariates thought to be informative of species detection probability have been measured during each of J surveys conducted at each site. These measurements may be included in a n × J × (q + 1) array W of q regressors, and a probit-regression formulation α of the regressors w[ij ] on the probability of detecting one or more individuals present at site i during the jth survey. The joint posterior density for this model is where C denotes the normalizing constant for the posterior distribution and π(β,α) specifies the joint prior density of the parameters β and α. (Henceforth, C will be used generically to denote the normalizing constant of a distribution.) It is entirely feasible to develop an MCMC algorithm based on the above joint posterior; however, the conditional posterior distributions (full conditionals) of β and α are not familiar forms and must be sampled using specialized algorithms that require tuning (e.g. Metropolis–Hastings). For example, assuming mutually independent priors for these parameters (wherein π(β,α) = π(β)π(α)) leads to the following full conditional densities: Fortunately, the difficulties of sampling from these distributions can be avoided by recognizing that eqn 1 is simply the kernel of a probit-regression model of n binary outcomes z[i] and eqn 2 is simply the kernel of a probit-regression model of mJ binary outcomes y[ij], where Albert & Chib (1993) and use parameter-expanded data augmentation (Liu & Wu 1999) to modify the model for the purposes of simplifying the analysis and the MCMC algorithm. To be specific, we establish a connection between probit-regression models of the binary random variables (z[i] and y[ij]) and linear regression models of latent normal (Gaussian) random variables (v [i] and u[ij]) as follows. Let z[i] = 1 if v[i] > 0 and z[i] = 0 if v[i] ≤ 0. These assumptions imply Albert & Chib 1993), our probit-regression model of z[i]. Similarly, let y[ij] = 1 if u[ij] > 0 and z[i] = 1, and assume y[ij] = 0 if u[ij] ≤ 0 and z[i] = 1 or if z[i] = 0. These assumptions imply y[ij]. 
A succinct description of these modelling assumptions is:

v[i] ∼ Normal(x[i]^T β, 1), z[i] = I(v[i] > 0)
u[ij] ∼ Normal(w[ij]^T α, 1), y[ij] = I(u[ij] > 0) I(z[i] = 1)

where I(a) denotes the indicator function, which equals 1 if argument a is true and 0 otherwise. The joint posterior density of this parameter-expanded site-occupancy model is

π(β, α, v, u | Y) = C π(β, α) ∏[i=1..n] { φ(v[i] | x[i]^T β, 1) ∏[j=1..J] φ(u[ij] | w[ij]^T α, 1) I(y[ij] = I(v[i] > 0) I(u[ij] > 0)) }   (eqn 3)

where φ(·|μ,σ^2) denotes the probability density function of a Normal(μ,σ^2) distribution. Although the joint posterior of this model cannot be sampled directly, posterior summary statistics (means, quantiles, etc.) can be estimated accurately and efficiently using Gibbs sampling, as described in the following section.

Gibbs sampler

The full conditional distributions needed to apply Gibbs sampling to the joint posterior density (eqn 3) all have familiar forms and are easily sampled. For example, if uniform priors are used for β and α to specify prior indifference about the magnitude of these parameters, the full conditional distributions needed for Gibbs sampling are as follows:

1. v[i] | · ∼ Normal(x[i]^T β, 1) truncated to (0, ∞) if z[i] = 1 and to (−∞, 0] if z[i] = 0
2. β | · ∼ Normal((X^T X)^−1 X^T v, (X^T X)^−1)
3. for sites with z[i] = 1, u[ij] | · ∼ Normal(w[ij]^T α, 1) truncated to (0, ∞) if y[ij] = 1 and to (−∞, 0] if y[ij] = 0
4. α | · ∼ Normal((W̃^T W̃)^−1 W̃^T ũ, (W̃^T W̃)^−1), where W̃ denotes the mJ × (q+1) matrix formed from the mJ observations of w[ij] at the m occupied sites and ũ denotes the mJ-vector formed from the mJ values of u[ij] at those sites. In other words, only the values of w[ij] and u[ij] at occupied sites (wherein z[i] = 1) are needed to update α. For this reason, it is not necessary to update u[ij] if z[i] = 0 (as shown in the previous step).
5. for sites where the species was never detected (y[i] = 0), z[i] | · ∼ Bernoulli(ψ̃[i]) with ψ̃[i] = ψ[i] ∏[j] (1 − p[ij]) / (ψ[i] ∏[j] (1 − p[ij]) + 1 − ψ[i]), where ψ[i] = Φ(x[i]^T β), p[ij] = Φ(w[ij]^T α), Φ denotes the standard normal cumulative distribution function and y[i] = (y[i1],…,y[iJ])^T; sites with one or more detections have z[i] = 1.

If prior distributions for β and α are assumed to be normal (thereby allowing either vague or informative priors to be specified), the full conditional distributions of these parameters are still normal but the means and covariances of these distributions are modified to accommodate the prior information. Specifically, steps 2 and 4 above are replaced by

2'. β | · ∼ Normal((X^T X + Σ[β]^−1)^−1 (X^T v + Σ[β]^−1 μ[β]), (X^T X + Σ[β]^−1)^−1)
4'. α | · ∼ Normal((W̃^T W̃ + Σ[α]^−1)^−1 (W̃^T ũ + Σ[α]^−1 μ[α]), (W̃^T W̃ + Σ[α]^−1)^−1)

where μ[β] and Σ[β] denote the prior mean and covariance matrix for β, and where μ[α] and Σ[α] denote the prior mean and covariance matrix for α.

Example: Blue Hawker Data

Sampling methods and design

Kéry et al. (2010) provide a detailed description of the study area and methods of data collection. Briefly, the blue hawker was surveyed throughout Switzerland for the revision of the Red List of Swiss dragonflies. These surveys included sites that were known to have target (i.e. rare) species and also sites that were less well known in terms of dragonfly species occurrence. Each site corresponds to a 1-ha quadrat of the Swiss topographical system. Surveys were conducted during each of 2 years (1999 and 2000) during the known flight periods of the dragonflies in Switzerland. Individual sites were surveyed between 1 and 22 times per year. In 1999, 1522 sites were surveyed; in the following year, 1403 sites were surveyed. Of the total number of distinct sites surveyed, 12·8% (328 of 2572) were sampled in both years.

After fitting several site-occupancy models to the blue hawker data, Kéry et al. (2010) compared values of Akaike’s Information Criterion (AIC) to select a parsimonious model for predicting site-specific occurrences of this species. In this model, occurrence probability was formulated as a logit-linear function of the effects of elevation and its square and cube; detection probability was formulated as a logit-linear function of the effects of elevation, Julian survey date and the squares of these two covariate measurements. For purposes of comparison, we analysed the blue hawker data using the same set of regressors included in the parsimonious model of Kéry et al. (2010). Prior to the analysis, we centred and scaled the elevation and date measurements to have zero mean and unit variance.
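A compact sketch of the sampler described in the "Gibbs sampler" section, written in Python/NumPy/SciPy rather than the authors' R code (Appendices S1 and S2). It assumes a complete n × J detection matrix, flat priors, and the probit links ψ[i] = Φ(x[i]^T β) and p[ij] = Φ(w[ij]^T α); it is illustrative, not the published implementation.

import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_occupancy(Y, X, W, n_iter=2000, seed=1):
    # Y: (n, J) 0/1 detections; X: (n, r+1) occurrence design; W: (n, J, q+1) detection design.
    np.random.seed(seed)
    n, J = Y.shape
    detected = Y.max(axis=1) == 1              # sites with at least one detection
    z = detected.astype(float)                 # initialize occupancy at the naive values
    beta = np.zeros(X.shape[1])
    alpha = np.zeros(W.shape[2])
    XtX_inv = np.linalg.inv(X.T @ X)
    keep = {"beta": [], "alpha": [], "pao": []}

    def rtruncnorm(mean, positive):
        # N(mean, 1) truncated to (0, inf) where positive is True, to (-inf, 0] elsewhere
        lo = np.where(positive, -mean, -np.inf)
        hi = np.where(positive, np.inf, -mean)
        return truncnorm.rvs(lo, hi, loc=mean, scale=1.0)

    for _ in range(n_iter):
        v = rtruncnorm(X @ beta, z == 1)                                        # step 1
        beta = np.random.multivariate_normal(XtX_inv @ (X.T @ v), XtX_inv)     # step 2
        occ = z == 1                                  # steps 3-4 use occupied sites only
        Wocc = W[occ].reshape(-1, W.shape[2])
        u = rtruncnorm(Wocc @ alpha, Y[occ].ravel() == 1)                       # step 3
        WtW_inv = np.linalg.inv(Wocc.T @ Wocc)
        alpha = np.random.multivariate_normal(WtW_inv @ (Wocc.T @ u), WtW_inv)  # step 4
        psi = norm.cdf(X @ beta)                      # step 5: occupancy where never detected
        p = norm.cdf(np.einsum("ijk,k->ij", W, alpha))
        q0 = np.prod(1.0 - p, axis=1)                 # Pr(no detections | present)
        cond = psi * q0 / (psi * q0 + 1.0 - psi)
        z = np.where(detected, 1.0, np.random.binomial(1, cond))
        keep["beta"].append(beta); keep["alpha"].append(alpha); keep["pao"].append(z.mean())

    return {k: np.array(val) for k, val in keep.items()}

With the saved draws in hand, posterior means and 2·5%/97·5% quantiles of β, α and PAO can be read off directly, in the spirit of Table 1.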
We also excluded from the analysis observations from 14 sites that lacked elevation measurements. The remaining data included observations from 1516 sites in 1999 and 1395 sites in 2000. In the Bayesian analysis of the blue hawker data, we used maximum likelihood estimates of the site-occupancy model's parameters to initialize the Markov chain. We used M = 10 000 successive draws of the Gibbs sampler to estimate posterior means and quantiles of the model parameters, to predict species occurrence probability as a function of elevation and to predict species detection probability as a function of elevation and survey date. We also used draws from the Gibbs sampler to predict species occurrence status (presence or absence) at sample sites where blue hawkers were not detected for the purposes of estimating PAO in 1999 and in 2000. The Monte Carlo standard errors of posterior means and quantiles were computed using the subsampling bootstrap method (Flegal & Jones 2010, 2011) with overlapping batch means.

Results

The blue hawker was detected at 22·0% (334) of the 1516 sites surveyed in 1999 and at 21·9% (305) of the 1395 sites surveyed in 2000. These naive estimates of site occupancy appear to be substantially biased given the estimates of PAO adjusted for detection errors. For example, Bayesian posterior means for the proportion of occupied sites were 0·646 in 1999 and 0·624 in 2000, and in each year the 95% credible interval for PAO failed to include the naive estimate (Table 1). Imperfect detection of blue hawkers is, of course, the reason for the higher estimates of site occupancy. Estimated detection probabilities of blue hawkers varied with elevation and survey date (Fig. 1) and ranged from 0 to 0·86.
In this paper, we develop a class of Bayesian site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. By using probit-regression functions, we were able to add latent parameters to the model for the purposes of developing a Gibbs sampler for Bayesian analysis of site-occupancy data. This Gibbs sampler allows summaries of the posterior and model-based predictions to be estimated more efficiently than software based on MCMC algorithms with lower acceptance rates. In addition, the Gibbs sampler can be implemented in any computing language. We developed an implementation (see Appendices S1 and S2) using the R software program (R Development Core Team 2012), which is freely available and widely used. Our implementation includes code to calculate MLEs of the model's parameters, which are used in classical (non-Bayesian) analyses. The code also accommodates missing values in the matrix of detection/non-detection data that occur commonly in site-occupancy surveys owing to unequal effort among sample sites. The code therefore can be used to complement non-Bayesian analyses obtained with other site-occupancy software [programs presence or unmarked (Fiske & Chandler 2011)]. It might be possible to apply our approach to Bayesian site-occupancy models that use other functions to link the probabilities of species occurrence and detection to linear combinations of regression parameters. For example, defining the latent variables v[i] and u[ij] to have logistic distributions would lead to logit link functions; defining v[i] and u[ij] to have extreme value distributions would lead to complementary log-log link functions; and so on. The choice of link function is largely subjective, and similar results should be obtained with any link function when probabilities of occurrence and detection are not close to zero or one (where the tails of normal, logistic and extreme value distributions differ). One of the advantages of conducting a Bayesian analysis of site-occupancy data is the ability to account for uncertainty in predictions and in estimates of derived parameters, such as PAO. The results of our Bayesian analysis of the blue hawker data are qualitatively similar to the results of the classical analysis reported by Kéry et al. (2010). For example, the PAO (for both years combined) estimated by Kéry et al. (2010) was 0·629 (1839/2925), which is approximately equal to the midpoint of our year-specific estimates (Table 1). However, a comparison of credible intervals from the Bayesian analysis allows us to conclude that the PAO of sites sampled in 1999 is not significantly different from the PAO of sites sampled in 2000. Classical and Bayesian predictions of blue hawker occurrence probability as a function of elevation are also quite similar (cf. fig. 1 of Kéry et al. (2010) and Fig. 2); however, the former lacks estimates of uncertainty whereas the Bayesian predictions include a confidence envelope based on 95% credible intervals. Kéry et al. (2010) also present a map of the potential distribution of blue hawker throughout Switzerland in 1999–2000 by using site-occupancy-based predictions of blue hawker occurrence at unsampled locations. Ideally, the uncertainty of these predictions also should be mapped. A Bayesian analysis could easily provide these estimates of uncertainty. 
The blue hawker dataset was kindly provided by Marc Kéry and authorized for use by the Swiss Biodiversity Monitoring program of the Swiss Federal Office for the Environment (FOEN). The Swiss dragonfly Red List project, for which data were collected in 1999 and 2000, was funded by the FOEN. Data were extracted from the database of the Centre suisse de cartographie de la faune (CSCF) by the project coordinator, Christian Monnerat. The review comments of Bill Link and two anonymous referees improved the manuscript. Any use of trade, product or firm names is for descriptive purposes only and does not imply endorsement by the US Government. • & (1993) Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88, 669–679. • , , & (2010) Models for inference in dynamic metacommunity systems. Ecology, 91, 2466–2475. • , & (2011) Modern methods of estimating biodiversity from presence-absence surveys. Biodiversity Loss in a Changing Planet (eds O. Grillo & G. Venora), pp. 277–302. InTech, Rijeka, Croatia. • & (2009) Species distribution models: ecological explanation and prediction across space and time. Annual Review of Ecology, Evolution, and Systematics, 40, 677–697. • & (2011) unmarked: an R package for fitting hierarchical models of wildlife occurrence and abundance. Journal of Statistical Software, 43, 1–23. • & (2010) Batch means and spectral variance estimators in Markov chain Monte Carlo. Annals of Statistics, 38, 1034–1070. • & (2011) Implementing MCMC: estimating with confidence. Handbook of Markov chain Monte Carlo (eds S. Brooks, A. Gelman, G.L. Jones & X.L. Meng), pp. 175–197. Chapman & Hall/CRC, Boca Raton, Florida, USA. • , , & (2004) Bayesian data analysis, 2nd edn. Chapman and Hall, Boca Raton, Florida, USA. • (2011) Introduction to Markov chain Monte Carlo. Handbook of Markov chain Monte Carlo (eds S. Brooks, A. Gelman, G.L. Jones & X.L. Meng), pp. 3–48. Chapman & Hall/CRC, Boca Raton, Florida, USA. • Hanski, I. & Gilpin, M.E. (eds), (1997) Metapopulation Biology: Ecology, Genetics, and Evolution. Academic Press, New York. • (2010) Introduction to WinBUGS for Ecologists. Academic Press, Burlington, Massachusetts, USA. • & (2012) Bayesian Population Analysis Using WinBUGS. Academic Press, Waltham, Massachusetts, USA. • , & (2010) Predicting species distributions from checklist data using site-occupancy models. Journal of Biogeography, 37, 1851–1862. • & (1987) Empirical Bayes confidence intervals based on bootstrap samples (with discussion). Journal of the American Statistical Association, 82, 739–757. • & (2010) Bayesian Inference. Academic Press, Amsterdam. • & (1999) Parameter expansion for data augmentation. Journal of the American Statistical Association, 94, 1264–1274. • , , , , & (2002) Estimating site occupancy rates when detection probabilities are less than one. Ecology, 83, 2248–2255. • , , , , & (2006) Occupancy Estimation and Modeling. Elsevier, Amsterdam. • R Development Core Team. (2012) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0. • & (2008) Hierarchical Modeling and Inference in Ecology. Academic Press, Amsterdam. • & (2007) A Bayesian state-space formulation of dynamic occupancy models. Ecology, 88, 1813–1823. • , , , , , & (2002) Predicting Species Occurrences: Issues of Accuracy and Scale. Island Press, Washington, DC. 
• Tyre, A.J., Tenhumberg, B., Field, S.A., Niejalke, D., Parris, K. & Possingham, H.P. (2003) Improving precision and reducing bias in biological surveys: estimating false-negative error rates. Ecological Applications, 13, 1790–1801.
Supporting Information
Appendix S1. R code for conducting both classical and Bayesian analyses of site-occupancy data.
Appendix S2. R code for conducting both classical and Bayesian analyses of site-occupancy data.
Mckinney Trigonometry Tutor ...This will tell me where to focus so you can more easily pass the ASVAB exam. I am ADD/ADHD. My four (4) children have inherited many of my ADD/ADHD tendencies. 48 Subjects: including trigonometry, chemistry, physics, calculus ...As a physicist, I am well exposed to mathematical concepts which includes statistics as one of the subjects that I mastered. I also use statistical models to analyze my research data. I am a physicist with both a bachelor and master’s degree in physics. 25 Subjects: including trigonometry, chemistry, physics, calculus ...I put together a summer program to help students to prepare for the subject. I have recently attended several professional development sessions for precalculus, which gave me many activities to add to my tool bag. I started teaching 13 years ago and the TAKS test has been around almost as long. 10 Subjects: including trigonometry, geometry, algebra 2, GED ...The most important thing is learning how to come from the basic principles (from lectures and notes) to the solution of each problem. With the ability to do that, you can solve similar problems on your quizzes, lab reports, tests, exams (SAT, SAT 2). If you have problems in Gen. Chemistry, Gen. 24 Subjects: including trigonometry, English, chemistry, reading ...My goal for every session is to ensure that both student and tutor are satisfied with our progress. I'll be learning from them as much (or more) as they learn from me. Please give me a shot!In pursuing my certification to teach High School Physics, I have also tested (and passed) the criteria for Chemistry proficiency. 28 Subjects: including trigonometry, chemistry, reading, biology
Homework Help Algebra 2 - jai, Wednesday, August 18, 2010 at 4:42am (1.) first, graph the equations,, there are various ways how to graph them,, for linear equations, i usually get the x- and y- intercept (*x- and y- intercepts are points*): *note: to get x-intercept, set y=0 and solve for x,, to get y-intercept, set x=0 and solve for y,, >>in the 1st equation, 2x + 5=0, since there is no variable y, just solve for x and you'll get x=-5/2. the graph of x=-5/2 is a vertical line passing thru -5/2 (or -2.5) *another note: this equation has no y-intercept since there is no value of y in which x will be zero (that is, the graph of x=-5/2 will never pass the y-axis) >>in the 2nd equation, 2x + y=8, to get x-intercept, set y=0, so: 2x + (0) = 8 *solve for x therefore, x-int: (4,0) for the y-intercept, set x=0, so: 2(0) + y = 8 *solve for y therefore, y-int: (0,8) plot (4,0) and (0,8) on the same cartesian plane and connect, and extend the line,, this is now the graph of the 2nd equation,, >>now, locate the point of intersection~ *to check if it's really the point of intersection, do substitution,, to do this, choose one of the equations, and express one variable in terms of the other variable,, in this case we choose the 1st equation since it readily gives the value of x which is -5/2,, not substitute this to the 2nd equation: 2(-5/2) + y = 8 >>thus, the point of intersection is at (-5/2, 13) (2.) graph y=|x| *to do this, first, graph y=x (the version which does not contain the absolute value),, now, since y=|x| means y is restricted only to positive values, look for the area in the graph in which y is negative (it's in 3rd quadrant, isn't it),, then make a "mirror image" of this in the 2nd quadrant (2nd quadrant, because y values are positive in there),, thus its graph should be V-shaped,, so there,, =) sorry for long explanation..
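As a quick check of the answer above, the same substitution can be done in a couple of lines of Python (just an illustration; the variable names are arbitrary):

    x = -5 / 2        # from 2x + 5 = 0
    y = 8 - 2 * x     # substitute into 2x + y = 8
    print(x, y)       # -2.5 13.0, i.e. the intersection point (-5/2, 13)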
Quadratic Formula Tutors Fort Worth, TX 76107 Dr. Bob, Science Physics Chemistry Biology Math Pharmacology ...Advanced Nutrition, Biochemistry, Chemistry, Physics, Anatomy, Physiology, Biophysics, Food Science, Biochemical Thermodynamics, and Medical Diagnostics are some of the background courses that have been developed into a Cycle of Learning for the NCLEX examination... Offering 10+ subjects including algebra 2
[FOM] Re: Questions on Complexity Theory
Dmytro Taranovsky dmytro at MIT.EDU
Wed Dec 14 17:29:50 EST 2005
High on the list of almost absurd statements that we do not know how to disprove is the claim ZPP = ExpTime. It asserts that random data (formally, all but an exponentially small percentage of strings) are in fact a vast store of wisdom allowing us to solve exponential-time-complete problems in polynomial time. Unsuitable strings passed as random may be rejected but will not lead to mistakes. Present knowledge does not rule out linear running time (reasonably defined) when the exponent is linear (and output has at most linear length in terms of input). There are oracles relative to which ZPP = ExpTime. The oracles answer exponential-time complexity queries that are prefixed with a random string, specifically rejecting prefixes that are too easily computable. ZPP < ExpTime can only be proved by current techniques by showing how to solve ZPP problems in sub-exponential time. Candidate polynomial time derandomization algorithms for BPP (and hence ZPP) are known. They use pseudo-random number generators to produce pseudo-random data from logarithmically sized seeds, and iterate over all seeds. However, existence of secure pseudo-random number generators implies P<NP, and hence is unlikely to be proved soon. Also, the question of whether solving exponential-time-complete problems is physically feasible is wide open.
Dmytro Taranovsky
More information about the FOM mailing list
Inference in Incomplete Models
Alfred Galichon & Marc Henry (2006). Department of Economics Discussion Papers, no. 0506-28, Department of Economics, Columbia University, New York. http://hdl.handle.net/10022/AC:P:378
Abstract: We provide a test for the specification of a structural model without identifying assumptions. We show the equivalence of several natural formulations of correct specification, which we take as our null hypothesis. From a natural empirical version of the latter, we derive a Kolmogorov-Smirnov statistic for Choquet capacity functionals, which we use to construct our test. We derive the limiting distribution of our test statistic under the null, and show that our test is consistent against certain classes of alternatives. When the model is given in parametric form, the test can be inverted to yield confidence regions for the identified parameter set. The approach can be applied to the estimation of models with sample selection, censored observables and to games with multiple equilibria.
Subject: Economic theory.
Conformal Gravity
The paper "Attractive and Repulsive Gravity", by Philip Mannheim (2000), sets out at some length his theory of conformal gravity. He starts by breaking the evidence in support of GR into that which flows from its geometric nature, which he seeks to preserve, and that which flows from the form of the Einstein-Hilbert action. He notes that his amendment of the EH action can make possible a MOND-like gravity at galactic scales, eliminates the big bang and black hole singularities by making gravity repulsive in the presence of very dense energy sources and in the cosmological limit, and resolves a number of issues associated with the cosmological constant problem. It also states that at galactic and cosmological scales there is a Machian component to gravity (i.e. the force of gravity is in part a function of the aggregate mass of the universe). Since it flows closely from the GR equations, it is theoretically "well behaved". It proposes, rather than a big bang, a big time-zero bounce from a minimum finite radius at which the universe has a maximal temperature (possibly eliminating the issue of inflation), because beneath that radius the repulsive aspect of gravity overcomes other forces. It suggests that under this theory the low ratio of matter to a running cosmological constant in the Omega sum is inevitable regardless of initial conditions, rather than a coincidence. Of particular interest to those looking to derive a quantum gravity, he accepts a figure for vacuum energy from cosmology, and uses that conclusion to determine that the gravitational constant must be deeply wrong. While this theory has little independent support (i.e. it does not differ from GR where it reduces to GR, does not differ greatly from MOND where it reduces to MOND, does not differ from Newtonian gravity where it reduces to Newtonian gravity, and fits cosmological data by design rather than prediction), it should stand as a significant alternative to either cosmological constant cold dark matter models (the prevailing paradigm), or Relativistic MOND models (the main alternative at the galactic scale), or alternative cosmologies such as those of Arp (Quasi-Steady State). The conclusions reached are not dramatically different from those of LQG, and indeed resemble LQG in that LQG theorists sometimes see gravity as a counterpoint to QCD, which, like Mannheim's conformal gravity, changes from repulsive to attractive at different distance scales.
Exploiting Maximum Parallelism in Hierarchical Numerical Applications Alexander Pfaffinger Using hierarchical basis functions for the d-dimensional multilinear function representation, the number of the corresponding grid points can be reduced drastically from n^d to n log(n)^(d-1) without significant increase of the approximation error. This leads to so-called sparse grids. Instead of flat arrays, binary trees and d-dimensional product graphs of binary trees are the natural implementation. This product graph also reflects the dependency structure of the algorithm. Because of its complexity, exploiting the maximum inherent parallelism is tedious. An intuitive domain decomposition formulation of a sparse grid algorithm leads to a parallel complexity of O(log(n)^(d)) whereas an optimal implementation would achieve O(log(n)) complexity. The intuitive algorithm also results in an inefficient communication and synchronization pattern. On the other side, coding an optimal program within conventional imperative languages (e.g. C with PVM) is a hard issue for general dimensions d. In the new data flow language FASAN the programmer has only to specify the mathematical data dependencies between the parts of the algorithm. The semantics of "wrapper streams" automatically generates direct communication channels between the dependent nodes, whereas the data flow semantics sends the data immediately after they are produced. Thus, the optimal parallel complexity can be expressed even with an intuitive divide-and-conquer
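As a rough illustration of the point-count reduction quoted above, the Python snippet below compares the two asymptotic orders; constants and the exact sparse-grid combinatorics are omitted, so the figures are only indicative:

    import math

    def full_grid(n, d):
        return n ** d                         # regular tensor-product grid

    def sparse_grid_order(n, d):
        return n * math.log2(n) ** (d - 1)    # asymptotic sparse-grid count

    n = 1024
    for d in (2, 3, 4):
        print(d, full_grid(n, d), round(sparse_grid_order(n, d)))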
Hello, I'm Having A Hard Time Remembering How To ... | Chegg.com
Hello, I'm having a hard time remembering how to solve simultaneous equations. For example, I have:
-x = -630 - 2.8y + 183.75
0 = 53.6 + 9.6y
and need to know the values for both x and y. Don't I solve for y first and then solve for x? I know this is pretty easy but I'm stuck with this problem. Can you help me?
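Yes: solve the second equation for y, then substitute into the first. A short Python sketch of that route (the rounding is only for display):

    y = -53.6 / 9.6                     # 0 = 53.6 + 9.6y  =>  y = -53.6/9.6
    x = 630 + 2.8 * y - 183.75          # -x = -630 - 2.8y + 183.75  =>  x = 630 + 2.8y - 183.75
    print(round(y, 4), round(x, 4))     # y ≈ -5.5833, x ≈ 430.6167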
Five Steps to Adding Physics-Based Realism to Your Games
Step 3: Collision Detection and Response
It is important to make the distinction between collision detection and collision response. Collision detection is a computational geometry problem involving the determination of whether and where two or more objects have collided. Collision response is a physics problem involving the motion of two or more objects after they have collided. Collision detection is a crucial aspect of any real-time simulation where objects are not supposed to be able to pass through each other. Your collision-response algorithms rely on the results of your collision-detection algorithms in order to accurately determine the appropriate response to any collision. Therefore, you should take care in making sure your collision-detection schemes are accurate and reliable. That said, collision detection is no easy task. I personally find it much more difficult to implement robustly than it is to implement the physics aspects of rigid body simulations. For game applications, as I'm sure you are aware, speed is also a major issue, and accurate collision detection can be slow. Generally, you'll want to use a two-phase approach to collision detection. First, you can use a bounding sphere or box check to test whether there are possible collisions between objects. If this check indicates possible collisions, then you'll want to do a more detailed check on the candidate objects to see if they are indeed intersecting. There are many approaches you can take for this detailed collision check. The thing you have to keep in mind is the trade-off between accuracy and performance. Kevin Kaiser addresses triangle-to-triangle level collision detection in detail in Chapter 4.5 of Game Programming Gems. Nick Bobic also treats polygonal collision detection in his Gamasutra article Advanced Collision Detection Techniques. Brian Mirtich has also written several useful papers on collision detection for real-time simulations. Whatever collision-detection scheme you use, there are certain pieces of information that it must provide to you in order for you to simulate physically accurate collision responses. Your collision-detection routine must tell you:
• The point of contact on each body. For example, the location of the contact point relative to the center of gravity of each body involved in the collision.
• The collision normal vector (and the tangential vector if you're modeling friction in your collisions).
• The relative velocities of each body at the point or points of contact.
If you're writing a 3-D game complete with detailed polygon models for high-quality rendering, you might be tempted to use those same models in your collision-detection checks for your physics simulation. However, you may not want to do this since it will take more CPU time to do the collision check and may be unnecessary in terms of physical realism. Don't confuse the requirements for visual realism with physical realism. You may find that you need highly detailed polygon models for quality renderings, but your collision model for that object may only need a fraction of the number of polygons. Granted, this approach means you'll have to keep track of two models for each object in your simulation, but the payoff in terms of performance may be well worth it.
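A minimal sketch of the broad-phase bounding-sphere test described above, in Python; the function name and the tuple representation of the centers are illustrative choices, and a real engine would follow a positive result with the narrow-phase polygon test:

    def spheres_overlap(ca, ra, cb, rb):
        # Compare squared center distance with the squared sum of radii,
        # avoiding a square root in the common "no collision" case.
        dx, dy, dz = cb[0] - ca[0], cb[1] - ca[1], cb[2] - ca[2]
        return dx * dx + dy * dy + dz * dz <= (ra + rb) * (ra + rb)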
My treatment of rigid body collision response in my book is based on classical Newtonian impact principles. Bodies that are colliding are treated as rigid regardless of their construction and material. Rigid bodies discussed here do not change shape even upon impact. This, of course, is an idealization. You know from your everyday experience that when objects collide they dent, bend, compress, or crumple. For example, when a baseball strikes a bat, it may compress as much as three quarters of an inch during the millisecond of impact. Notwithstanding this reality, you can rely on the well-established impulse method to approximate rigid body collisions. In this method, an impulsive force is applied to each colliding object at the point or points of impact. This force is calculated such that it changes the velocity of each object instantly, not allowing them to penetrate one another. The method is semi-empirical in that it uses so-called coefficients of restitution to simulate the hardness or softness of various objects. For example, different coefficients of restitution can be used to simulate a rubber ball bouncing off of a hard surface, or a lump of clay that sticks to a surface after a collision. This classical approach is widely used in engineering machine design, analysis, and simulations. However, for rigid body simulations there is another class of methods known as penalty methods at your disposal. In penalty methods, the force at impact is represented by a temporary spring that gets compressed between the objects at the point of impact. This spring compresses over a very short time and applies equal and opposite forces to the colliding bodies to simulate collision response. Proponents of this method say it has the advantage of ease of implementation. However, one of the difficulties encountered in its implementation is numerical instability. David Baraff reviews several aspects of penalty methods (as well as other methods) in his paper on nonpenetrating rigid body simulation.
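To make the impulse method concrete, here is a hedged one-dimensional sketch in Python for a frictionless collision between two point masses (no rotation); the full rigid-body version also folds contact-point offsets and inertia tensors into the denominator:

    def impulse_magnitude(v_rel_n, mass_a, mass_b, restitution):
        # v_rel_n: relative velocity along the contact normal (negative while approaching)
        # restitution: 0 for a perfectly plastic impact, 1 for a perfectly elastic one
        return -(1.0 + restitution) * v_rel_n / (1.0 / mass_a + 1.0 / mass_b)

    # The impulse j changes each velocity instantaneously along the normal n:
    #   v_a' = v_a + (j / mass_a) * n
    #   v_b' = v_b - (j / mass_b) * n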
Step 4: Numerical Integration
The next aspect of implementing your physics simulator deals with actually solving the equations of motion. The equations of motion can be classified as ordinary differential equations. In some simple cases you can solve these differential equations analytically. However, this won't be the case for general rigid body simulations. Force and moment calculations for your system can get pretty complicated and may even rely on tabulated empirical data, which will prevent you from writing simple mathematical functions that can be easily integrated. This means you have to use numerical-integration techniques to approximately integrate the equations of motion. I say approximately because solutions based on numerical integration won't be exact and will have a certain amount of error depending on the chosen method. I discuss several integration methods, along with sample code, in Chapter 11 of my book. By far, the easiest integration method to implement is Euler's method. In this method you first calculate the forces and moments on each object in order to determine acceleration. Recalling the equation for Newton's second law, you can obtain acceleration by dividing the resultant force acting on an object by the object's mass (for angular acceleration you need to divide torques by inertias). Once you have acceleration you can obtain the change in velocity by simply multiplying the acceleration by a small change in time called a time step. The object's new velocity is simply its old velocity plus the change in velocity for the current time step. To obtain the object's new position, you first multiply its velocity by the time step to obtain a change in displacement (position) and then add this change to the object's last position. To minimize numerical errors you must step through your simulation at very small time steps with this method. A major drawback of Euler's method is instability. If your time steps are too large, then your solution may diverge from the exact solution, and your simulation may blow up. For example, I was recently implementing a simulation of water flowing over a dam using a method called Smoothed Particle Hydrodynamics. Initially, I was using Euler's method just so I could get the simulation up and running quickly, with the idea of implementing a better method later. While running the simulation for the first time, the water started flowing over the dam in a fairly realistic manner. However, after a few steps all the particles representing the water took to the air and flew out of my computational domain like meteors. Obviously this is wrong, and is a good example of a solution becoming numerically unstable. To solve the problem I was forced to implement an alternative method sooner than I had anticipated. One such method is known as the Improved Euler method. For ill-conditioned problems, this method is far more stable than Euler's method. However, it requires two force calculations for each object in the simulation. Thus, improved accuracy and stability come at a price in terms of CPU time. Other methods, including the Runge-Kutta and Leap-frog methods, also improve accuracy and stability but again at the cost of CPU time, and in some cases increased memory requirements. Sometimes you can offset the increased CPU burden with these improved methods since they allow you to take larger time steps while still maintaining accuracy. The bottom line is that, here again, you're faced with trading off ease of implementation and speed with accuracy and more importantly, stability.
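The difference between the two integrators can be written out in a few lines; the following Python sketch uses a generic force callback and illustrative names, and is not the book's sample code:

    def euler_step(x, v, force, mass, dt):
        # Explicit Euler: one force evaluation, but prone to instability for large dt.
        a = force(x, v) / mass
        return x + v * dt, v + a * dt

    def improved_euler_step(x, v, force, mass, dt):
        # Improved (Heun) Euler: predict with Euler, then average the two slopes.
        a1 = force(x, v) / mass
        x_p, v_p = x + v * dt, v + a1 * dt
        a2 = force(x_p, v_p) / mass
        return x + 0.5 * (v + v_p) * dt, v + 0.5 * (a1 + a2) * dt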
Step 5: Tuning
The final and arguably the most time-consuming aspect of implementing your physics simulator is tuning (also known as parameter tuning). Many times I've gone through the steps outlined above only to run my simulation to watch it fail. Either the simulation blew up or the behavior of the objects I was trying to simulate was less than accurate. When writing any simulation you'll find that there are many empirical coefficients and parameters that you have to estimate, for example, collision tolerances, coefficients of restitution, mass properties, and drag coefficients, among others. In many cases your initial estimates for such parameters may be off, resulting in instability or unrealistic behavior. This means that you'll have to tune such parameters until you get the results you want. By tuning I mean the trial-and-error process of changing certain parameters to see what effect they have on your simulation until you get it to work right. In some cases it may mean implementing new algorithms, as I had to do with the integration scheme in my water simulation, for example. There are a couple of things you can do to make this job easier on yourself. First, research the physics behind what it is you are simulating. Try to obtain an understanding of how the system you are simulating actually behaves in the real world. Find out what factors are important: for example, what forces govern the system's behavior, and try to find some real-world empirical data for things such as drag coefficients, coefficients of restitution, friction coefficients, and so on. This will help take some of the guesswork out of setting initial values for such parameters, and help to identify unrealistic behavior. Second, don't bury (or spread out) all this data in your source code. Set up a bunch of defines for such data in one file so you can quickly find and change the data as you go through the tuning process. Something you might consider is implementing a mechanism that allows you to change specific data on the fly so you can tune certain aspects of the simulation in real time. I've found this helpful in specific simulations where, for example, I wanted to see the effects of changing viscous drag as the simulation ran. I found it difficult to observe subtle changes in behavior if I had to stop one simulation, change some data, and then rerun the simulation again; especially if the behavior I wanted to observe didn't occur at the start of the simulation but only after many hundreds of time steps.
David M. Bourg performs computer simulations and develops analysis tools that measure such things as hovercraft performance and the effect of waves on the motion of ships and boats. O'Reilly & Associates will soon release (November 2001) Physics for Game Developers.
Find a Haciendas De Borinquen Ii, PR Statistics Tutor
...I have taught standardized test preparation courses for over 10 years. Students in my test prep classes are pleased with their results and often refer friends and family members to my sessions. I have taught probability and statistics for over 12 years. 15 Subjects: including statistics, calculus, geometry, algebra 1
...As a physicist, I am well exposed to mathematical concepts, which include statistics as one of the subjects that I mastered. I also use statistical models to analyze my research data. I am a physicist with both a bachelor's and a master's degree in physics. 62 Subjects: including statistics, English, geometry, physics
...I am also the manager (and tutor) of a tutoring center at a community college that only tutors students enrolled in statistics and accounting courses. Since starting tutoring in 2008, I have spent thousands of hours doing one-on-one tutoring with students enrolled in statistics courses. Students have told me that I have a way of explaining difficult concepts in an easy-to-understand way. 2 Subjects: including statistics, accounting
...I have a career counseling business part-time using career and other personality assessments. I have been a soccer player, coach and manager for many years. I am from England, where I learned from many experienced coaches with all of their professional coaching badges. 30 Subjects: including statistics, Spanish, reading, English
...I had the most fun when I finally arrived in Calc II. I then could use all the mathematics that I had learned throughout the years. Calc III is even better since it uses the z axis. 19 Subjects: including statistics, calculus, algebra 2, algebra 1
XBimmers.com | BMW X6 Forum X5 Forum - View Single Post - BMW X6M Pricing - lease help Originally Posted by OK so if the sales tax is upfront, then your inceptions make sense but my lease calculation included tax on the payment... so I re-ran the numbers. Invoice price for a $103k X6M is roughly $95k (on bmwconfig.com) To get to $1150 without tax, using leaseguide.com lease calculator, and assuming all other costs paid up front -- 54% residual (again not sure if this is accurate for February but it was for January) 0.00076 money factor (assuming 0.00125 base rate although I think it might be higher, maybe 0.00130 as of January... You would need the selling price to be $93.5k Now, we know your MSD's are $8050, leaving $5450 for inceptions and up-front tax. At 8.875% NY tax, the total up front tax should be $3725 (roughly), leaving $1725 for first payment, title, and doc fees (very reasonable). That all makes perfect sense to me now. So it looks like this is a deal at about $1.5k below invoice with no cap cost reduction on your part. That is a GREAT deal. Take it. Thanks so much for really looking into this and help me in the decision process. What I really want is to get a "zero down" payment as well as a payment that includes wheel & tire warranty ....I will keep you guys posted....thanks again!
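For anyone who wants to reproduce the arithmetic, the standard closed-end lease formula lands in the same ballpark (a Python sketch; the 36-month term is an assumption since the thread does not state it, and tax, fees and the exact residual basis vary by state and dealer):

    msrp = 103_000
    cap_cost = 93_500        # selling price implied in the thread
    residual = 0.54 * msrp   # 54% residual quoted above
    money_factor = 0.00076   # MSD-reduced money factor quoted above
    term = 36                # assumed lease term in months

    depreciation = (cap_cost - residual) / term
    finance_charge = (cap_cost + residual) * money_factor
    print(round(depreciation + finance_charge, 2))   # roughly 1166 before tax, near the ~$1150 figure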
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: On PDEs [First Order Non-linear] Replies: 0 On PDEs [First Order Non-linear] Posted: Sep 10, 2011 10:47 AM Resent-From: <bergv@illinois.edu> From: Anamitra Palit <palit.anamitra@gmail.com> Date: September 10, 2011 7:56:57 AM MDT To: "sci-math-research@moderators.isc.org" Subject: On PDEs [First Order Non-linear] Let us consider the following partial differential equation: [del_z/del_x]^2+[del_z/del_y]^2=1 ---------- (1) The general solution[you will find in the texts: http://eqworld.ipmnet.ru/en/solutions/fpde/fpde3201.pdf] is given by: z=ax+sqrt[1-a^2]y+B; a and B are constants.--------(2) Now let me search for a solution in the form: z=axf(z)+sqrt[1-a^2]y+ B Substituting the above values into (1) we get [af(z)/[1-axf'(z)] ]^2+(1-a^2)/[1-axf'(z)]^2 = 1 a^2 [f(z)]^2+1-a^2=[1-ax f'(z)]^2 ------------- (3) If the above differential equation is solvable we should get a broader range of general values.Are we missing some solutions if the conventional general solution is considered? Now , the above equation[ie,(3) contains x. We write, For any particular value of z say, z=k, we have, Keeping x fixed at some arbitrary value we can obtain k be changing y only[I believe that this may be possible in most situations or in may situations]. [We could have achieved the same effect,i.e, getting z=k, by changing both x and y in some manner] So we may try to solve our differential equation[the last one--(3)] by keeping x fixed at some arbitrary value. This will go in favor of my Suppose we have only one local maximum for the entire range of our function z=F(x,y) at the point (x0,y0) . For this point, The value of k in this situation, will not be accessible for any arbitrary x. One has to use x=x0. To surmount this difficulty one may think of dividing the the domain of definition of the function F,into sub-domains so that in any particular sub-domain, the value of z may be accessible for an arbitrary x in that sub-domain. [Rather we would look for a function,z=F(x,y), of this type] On Boundary Conditions] The general solution[relation (2) seems to be too restrictive with simple boundary conditions. We may consider a square domain: x=0,x=k,y=0,y=k [k: some constant] Let us take the line:x=k It is perpendicular to the x-axis If we use the value x=k in (2) ,then z changes linearly wrt y The general solution[conventional one] talks of plane surfaces given by (2). I can always take small pieces of such surfaces and sew them into a large curved surface ,z=F(x,y).Along the boundary z may be a non-linear function of x or y. Along the line x=constant[=k],we have from (1), Where m(x=k,y) represents the value of [del_z/del_x]^2 along the line =>[del_z/del_y]=+Sqrt[1-m(k,y)] or -Sqrt[1-m(k,y)] ---------- (3) Now we divide the line x=k into small strips[one may consider infinitesimal strips] each of length h [from y=0 to y=k] For one [or more ] strip we take the positive value[+Sqrt[1-m]] in (3) and for some other[or others] we take the negative value [-Sqrt[1-m] in (3) As a result at the boundary x=k we may have a nonlinear function z=g(y) instead of a linear one as predicted by the conventional general solution.The linearity is produced by the change of sign and by changes in the value of m. This can change the whole picture of the
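The conventional general solution quoted in the post is easy to verify symbolically; a small Python/SymPy check (a sketch of that verification only, not of the poster's modified ansatz):

    import sympy as sp

    x, y, a, B = sp.symbols('x y a B', real=True)
    z = a * x + sp.sqrt(1 - a**2) * y + B
    print(sp.simplify(sp.diff(z, x)**2 + sp.diff(z, y)**2))   # prints 1, so equation (1) is satisfied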
Find a derivative using squeeze theorem
February 17th 2010, 10:30 AM
Find a derivative using squeeze theorem
(a) State the limit definition of the derivative of a function f at the point x = a.
(b) Use (a) to find f'(0).
f(x) = x^2 cos(ln|x|) if x ≠ 0, and f(x) = 0 if x = 0.
I'm pretty sure you use the squeeze theorem
February 17th 2010, 11:36 AM
The cosine of any expression is always between -1 and 1
February 17th 2010, 12:02 PM
{"url":"http://mathhelpforum.com/calculus/129298-find-derivative-using-squeeze-theorem-print.html","timestamp":"2014-04-21T08:50:30Z","content_type":null,"content_length":"6128","record_id":"<urn:uuid:06e1affd-10e6-44ca-9bf4-465327e7a5b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximate Analytical Solution for Nonlinear System of Fractional Differential Equations by BPs Operational Matrices Advances in Mathematical Physics Volume 2013 (2013), Article ID 954015, 9 pages Research Article Approximate Analytical Solution for Nonlinear System of Fractional Differential Equations by BPs Operational Matrices ^1Department of Mathematics, Imam Khomeini International University, P.O. Box 34149-16818, Qazvin, Iran ^2Department of Mathematics and Computer Sciences, Cankaya University, 06530 Ankara, Turkey ^3Institute of Space Sciences, P.O. Box MG-23, 077125 Magurele-Bucharest, Romania ^4Department of Chemical and Materials Engineering, Faculty of Engineering, King Abdulaziz University, P.O. Box 80204, Jeddah 21589, Saudi Arabia Received 20 February 2013; Revised 6 March 2013; Accepted 7 March 2013 Academic Editor: José Tenreiro Machado Copyright © 2013 Mohsen Alipour and Dumitru Baleanu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We present two methods for solving a nonlinear system of fractional differential equations within Caputo derivative. Firstly, we derive operational matrices for Caputo fractional derivative and for Riemann-Liouville fractional integral by using the Bernstein polynomials (BPs). In the first method, we use the operational matrix of Caputo fractional derivative (OMCFD), and in the second one, we apply the operational matrix of Riemann-Liouville fractional integral (OMRLFI). The obtained results are in good agreement with each other as well as with the analytical solutions. We show that the solutions approach to classical solutions as the order of the fractional derivatives approaches 1. 1. Introduction Differential equations of fractional order have been subjected to many studies due to their frequent appearance in various applications in fluid mechanics, viscoelasticity, biology, physics, engineering, and so on. Recently, a large amount of literature was developed regarding the application of fractional differential equations in nonlinear dynamics (see, e.g., [1–11] and the references therein). Thus, a huge attention has been given to the solution of fractional ordinary differential equations, integral equations, and fractional partial differential equations of physical interest. As it is known, there exists no method that yields an exact solution for fractional differential equations. Various methods have been proposed in order to solve the fractional differential equations. These methods include the homotopy perturbation method [12–15], Adomian’s decomposition method [16–20], variation iteration method [12–14, 21–23], homotopy analysis method [24], differential transform method [25], operational matrices [26–28], and nonstandard finite difference scheme [29]. In this paper, we investigate the nonlinear system of fractional differential equations as and the initial condition where and . Also, are multivariable polynomial functions. The structure of the paper is given later. In Section 2, we present some preliminaries and properties in fractional calculus and Bernstein polynomials. In Section 3, we make operational matrices for product, power, Caputo fractional derivative, and Riemann-Liouville fractional integral by BPs. In Section 4, we apply two methods for solving nonlinear system of fractional differential equations by BPs. 
In Section 5, numerical examples are simulated to demonstrate the high performance of the proposed method. Conclusions are presented in Section 6. 2. Basic Tools In this section, we recall some basic definitions and properties of the fractional calculus and Bernstein polynomials. Definition 1 (see [2, 7, 10]). The Riemann-Liouville fractional integral operator of order , of a function , is defined as and for , , , , the fractional derivative of in the Caputo sense is defined as where for we have Also, if , and , then Definition 2 (see [30]). The Bernstein polynomials (BPs) of th degree are defined on the interval as follows: Lemma 3. One can write , where is a matrix upper triangular, , and . Proof. (see [26]). Definition 4. We can define the dual matrix on the basis of Bernstein polynomials of th degree as follows: where Lemma 5. Let be a Hilbert space with the inner product and . Then, we can find the unique vector such that is the best approximation of from space . Moreover, one can get such that . Proof . (see [31]). Lemma 6. Suppose that the function is times continuously differentiable . If is the best approximation out of , then where . Also, if , then the error bound vanishes. Proof . (see [32]). 3. Operational Matrices of Bernstein Polynomials In Section 3, we recall the operational matrices for product, power, Caputo fractional derivative and Riemann-Liouville fractional integral by BPs. Lemma 7. Suppose that is an arbitrary vector. The operational matrix of product using BPs can be given as follows: Proof . (see [27]). Corollary 8. Suppose that , , and is the operational matrix of product using BPs for vector . One can get the approximate function for using BPs as follows: Proof. By using Lemma 7, it is clear. Corollary 9. Suppose that and is the operational matrix of product using BPs for vector . One can get the approximate function for , using BPs as follows: where . Proof . (see [26]). Theorem 10. One can get BPs operational matrix from order for the Caputo fractional derivative as follows: Proof. See [26] for details. Theorem 11. One can obtain the operational matrix from order for the Riemann-Liouville fractional integral on the basis of BPs from order as Proof. See [28] for details. 4. Solving System of Fractional Differential Equations In this section, we use two methods for solving system of fractional differential equations. In the first method, we use the operational matrix for Caputo fractional derivative (OMCFD), and in the second method, we apply the operational matrix for Riemann-Liouville fractional integral (OMRLFI). 4.1. Solving the Problem by OMCFD Using Lemma 5, we can approximate the functions as follows: where . From (17) and (15) we can write Therefore, problem (1) and (2) reduces to the following problem: and the initial condition Now, using Lemma 5 we can approximate all of the known functions in the system (19). Then, by using Lemma 7 and Corollaries 8 and 9, since functions are polynomial, we obtain the following approximations: where . Also, for each , by using tau method [33] we can generate algebraic equations from (19) and (21) as follows and from (23) we set . Finally, problem (1) and (2) has been reduced to the system of algebraic equations The aforementioned system can be solved for by Newton’s iterative method. Then, we get the approximate value of the functions from (17). 4.2. Solving the Problem by OMRLFI This method consists of two steps. Step 1. 
Initial conditions are used to reduce a given initial-value problem to a problem with zero initial conditions. Therefore we have a modified system, incorporating the initial values. Step 2. The BPs operational matrix of Riemann-Liouville fractional integral is used to transform the problem into a system of algebraic equations. Now, from (2) we define where , are the new unknown functions. Substituting (24) in (1) and (2), we have the following system: and the initial condition where and are multivariable polynomial functions. We use the following approximation: where are unknown vectors. From (7), (27), and Theorem 11, we can write So, by (27) and (28), problem (25) and (26) reduces to the following problem: As we saw in the previous section, we can obtain the following approximations: where . So, from (29) and (30) we have Therefore, we have reduced problem (1) and (2) to the system of algebraic equations as follows: where this system can be solved for by Newton’s iterative method. Finally we obtain the approximate of the functions by 5. Examples To demonstrate the applicability and to validate the numerical scheme, we apply the present method for the following examples. Example 12. Consider the following linear system of fractional differential equations [24, 25]: with initial condition For this problem we have the exact solution in the case of as We solved this problem by OMCFD and OMRLFI. Figures 1 and 2 show the approximate solutions of and , respectively, as a function of time for , for different values of , . The results show that numerical solutions are in good agreement with each other, in both methods. Also, these figures show that as , approach close to 1, the numerical solutions approach to the solutions for as expected. In Figures 3 and 4, we see the absolute error of both methods, for , . In these figures, we can see that obtained results using the presented methods agree well with the analytical solutions for . Example 13. Let us consider the following nonlinear fractional system [24] as follows: such that The exact solution of this system, when , is Figures 5 and 6 show the approximate solutions of and , respectively, for different values of , by OMCFD and OMRLFI. We conclude that as , approach close to 1, the numerical solutions approach solutions for as expected. Furthermore, in both methods, the results agree well with each other. Figures 7 and 8 show that, the absolute error of obtained results for and using OMCFD and OMRLFI is in good agreement with the exact solution. Example 14. Consider the nonlinear system of fractional differential equations [24]: with the initial conditions given by The exact solution of this system, when , becomes We can see the approximate solutions of and , by OMCFD and OMRLFI for and different values of , and , in Figures 9, 10, and 11. These figures show that, when , , and approach close to 1, the numerical solutions approach the solutions for as expected. In Figures 9–11, we observe that results of OMCFD and OMRLFI overlap. In Figures 12, 13, and 14, we see the absolute error of the obtained results for and in both methods. 6. Conclusion In this paper, we get operational matrices of the product, Caputo fractional derivative, and Riemann-Liouville fractional integral by Bernstein polynomials. Then by using these matrices, we proposed two methods that reduced the nonlinear systems of fractional differential equations to the two system of algebraic equations that can be solved easily. 
Finally, numerical examples are simulated to demonstrate the high performance of the proposed method. We saw that the results of both methods were in good agreement with each other, and the classical solutions were recovered when the order of the fractional derivative goes to 1. 1. J.-H. He, “Approximate analytical solution for seepage flow with fractional derivatives in porous media,” Computer Methods in Applied Mechanics and Engineering, vol. 167, no. 1-2, pp. 57–68, 1998. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 2. I. Podlubny, Fractional Differential Equations, Academic Press, New York, NY, USA, 1999. View at MathSciNet 3. R. Hilfer, Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000. 4. X. Gao and J. Yu, “Synchronization of two coupled fractional-order chaotic oscillators,” Chaos, Solitons and Fractals, vol. 26, no. 1, pp. 141–145, 2005. View at Publisher · View at Google Scholar · View at Scopus 5. J. G. Lu, “Chaotic dynamics and synchronization of fractional-order Arneodo's systems,” Chaos, Solitons & Fractals, vol. 26, no. 4, pp. 1125–1133, 2005. View at Publisher · View at Google Scholar 6. J. G. Lu and G. Chen, “A note on the fractional-order Chen system,” Chaos, Solitons & Fractals, vol. 27, no. 3, pp. 685–688, 2006. View at Publisher · View at Google Scholar 7. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, San Diego, Calif, USA, 2006. View at MathSciNet 8. D. Baleanu, O. G. Mustafa, and R. P. Agarwal, “An existence result for a superlinear fractional differential equation,” Applied Mathematics Letters, vol. 23, no. 9, pp. 1129–1132, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 9. D. Baleanu, O. G. Mustafa, and R. P. Agarwal, “On the solution set for a class of sequential fractional differential equations,” Journal of Physics A, vol. 43, no. 38, Article ID 385209, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 10. D. Baleanu, K. Diethelm, E. Scalas, and J. J. Trujillo, Fractional Calculus Models and Numerical Methods, Series on Complexity, Nonlinearity and Chaos, World Scientific, Hackensack, NJ, USA, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 11. S. Bhalekar, V. Daftardar-Gejji, D. Baleanu, and R. L. Magin, “Transient chaos in fractional Bloch equations,” Computers & Mathematics with Applications, vol. 64, no. 10, pp. 3367–3376, 2012. View at Publisher · View at Google Scholar 12. S. Momani and Z. Odibat, “Numerical approach to differential equations of fractional order,” Journal of Computational and Applied Mathematics, vol. 207, no. 1, pp. 96–110, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 13. S. Momani and Z. Odibat, “Homotopy perturbation method for nonlinear partial differential equations of fractional order,” Physics Letters A, vol. 365, no. 5-6, pp. 345–350, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 14. S. Momani and Z. Odibat, “Numerical comparison of methods for solving linear differential equations of fractional order,” Chaos, Solitons & Fractals, vol. 31, no. 5, pp. 1248–1255, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 15. Z. Odibat and S. 
Momani, “Modified homotopy perturbation method: application to quadratic Riccati differential equation of fractional order,” Chaos, Solitons & Fractals, vol. 36, no. 1, pp. 167–174, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 16. S. Momani and K. Al-Khaled, “Numerical solutions for systems of fractional differential equations by the decomposition method,” Applied Mathematics and Computation, vol. 162, no. 3, pp. 1351–1365, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 17. H. Jafari and V. Daftardar-Gejji, “Solving a system of nonlinear fractional differential equations using Adomian decomposition,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 644–651, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 18. D. Lesnic, “The decomposition method for Cauchy advection-diffusion problems,” Computers & Mathematics with Applications, vol. 49, no. 4, pp. 525–537, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 19. D. Lesnic, “The decomposition method for initial value problems,” Applied Mathematics and Computation, vol. 181, no. 1, pp. 206–213, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 20. V. Daftardar-Gejji and H. Jafari, “Adomian decomposition: a tool for solving a system of fractional differential equations,” Journal of Mathematical Analysis and Applications, vol. 301, no. 2, pp. 508–518, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 21. Z. M. Odibat and S. Momani, “Application of variational iteration method to nonlinear differential equations of fractional order,” International Journal of Nonlinear Sciences and Numerical Simulation, vol. 7, no. 1, pp. 27–34, 2006. View at Scopus 22. S. Momani and Z. Odibat, “Analytical approach to linear fractional partial differential equations arising in fluid mechanics,” Physics Letters A, vol. 355, no. 4-5, pp. 271–279, 2006. View at Publisher · View at Google Scholar · View at Scopus 23. V. Daftardar-Gejji and H. Jafari, “An iterative method for solving nonlinear functional equations,” Journal of Mathematical Analysis and Applications, vol. 316, no. 2, pp. 753–763, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 24. M. Zurigat, S. Momani, Z. Odibat, and A. Alawneh, “The homotopy analysis method for handling systems of fractional differential equations,” Applied Mathematical Modelling, vol. 34, no. 1, pp. 24–35, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 25. V. S. Ertürk and S. Momani, “Solving systems of fractional differential equations using differential transform method,” Journal of Computational and Applied Mathematics, vol. 215, no. 1, pp. 142–151, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 26. M. Alipour, D. Rostamy, and D. Baleanu, “Solving multi-dimensional FOCPs with inequality constraint by BPs operational matrices,” Journal of Vibration and Control, 2012. View at Publisher · View at Google Scholar 27. D. Rostamy and K. Karimi, “Bernstein polynomials for solving fractional heat- and wave-like equations,” Fractional Calculus and Applied Analysis, vol. 15, no. 4, pp. 556–571, 2012. View at Publisher · View at Google Scholar · View at MathSciNet 28. D. 
Rostamy, M. Alipour, H. Jafari, and D. Baleanu, “Solving multi-term orders fractional differential equations by operational matrices of BPs with convergence analysis,” Romanian Reports in Physics, vol. 65, no. 2, 2013. 29. S. Momani, A. Abu Rqayiq, and D. Baleanu, “A nonstandard finite difference scheme for two-sided space-fractional partial differential equations,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 22, no. 4, Article ID 1250079, 2012. View at Publisher · View at Google Scholar · View at MathSciNet 30. E. W. Cheney, Introduction to Approximation Theory, AMS Chelsea Publishing, Providence, RI, USA, 2nd edition, 1982. View at MathSciNet 31. E. Kreyszig, Introduction Functional Analysis with Applications, John Wiley & Sons, New York, NY, USA, 1978. View at MathSciNet 32. M. Alipour and D. Rostamy, “Bernstein polynomials for solving Abel's integral equation,” The Journal of Mathematics and Computer Science, vol. 3, no. 4, pp. 403–412, 2011. 33. C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Dynamic, Prentice-Hall, Englewood Cliffs, NJ, USA, 1988. View at MathSciNet
FOM: ``arbitrary objects" Mitchell Spector spector at seattleu.edu Thu Jan 31 12:48:54 EST 2002 On Thursday, January 31, 2002, at 04:07 AM, Arnon Avron wrote: > I should say that I find the discussion about "arbitrary objects" > (or "arbitrary numbers") rather embarrassing, especially that > it is made by logicans. When I read it I got the feeling that > Gentzen (and his analysis of Natural Deduction) and Tarski > (with his semantical analysis of formulas, using structures > and assignments) had never existed, and that > variables and their correct use are still a mystery... > Arnon Avron Well, that was, more or less, my immediate reaction as well. But then I thought of "definite descriptions," which were given a specific logical analysis by Russell but which tend to be explained away in most modern mathematical treatments of logic. Just because something _can_ be explained away doesn't mean that it _should_ be explained away; if mathematicians find a concept useful in practice, maybe it's worth analyzing. In the particular case of definite descriptions, the idea turned out to be useful, in modified form, in the so-called abstraction terms of set theory (and particularly in the development of the notion of forcing). Mitchell Spector Seattle University More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-January/005164.html","timestamp":"2014-04-20T05:49:35Z","content_type":null,"content_length":"3614","record_id":"<urn:uuid:95ea4797-b912-4cd0-9a7e-2e8226279074>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
When Can Risk-Factor Epidemiology Provide Reliable Tests? A commentator brings up risk factor epidemiology, and while I’m not sure the following very short commentary* by Aris Spanos and I directly deals with his query, Greenland happens to mention Popper, and it might be of interest: “When Can Risk-Factor Epidemiology Provide Reliable Tests?” Here’s the abstract: Can we obtain interesting and valuable knowledge from observed associations of the sort described by Greenland and colleagues in their paper on risk factor epidemiology? Greenland argues “yes,” and we agree. However, the really important and difficult questions are when and why. Answering these questions demands a clear understanding of the problems involved when going from observed associations of risk factors to causal hypotheses that account for them. Two main problems are that 1) the observed associations could fail to be genuine; and 2) even if they are genuine, there are many competing causal inferences that can account for them. Although Greenland’s focus is on the latter, both are equally important, and progress here hinges on disentangling the two to a much greater extent than is typically recognized. * We were commenting on “The Value of Risk-Factor (“Black-Box”) Epidemiology” by Greenland, Sander; Gago-Dominguez, Manuela; Castelao, Jose Esteban full citation & abstract can be found at the link 2 thoughts on “When Can Risk-Factor Epidemiology Provide Reliable Tests?” Thanks for this, Deborah, Greenland is another of my statistical heroes! I agree that valuable information can be obtained from observed associations, but I suppose that depends on what use an inferential approach would make of them. Greenland et al. seem to be arguing more for a descriptive “qualitative” inference based on assessing whether or not and which observed associations are consistent with any proposed hypothesis. This seems to be the same approach that Gary Taubes took in Good Calories, Bad Calories and it’s an approach that I find particularly convincing (although I understand that Taubes is sometimes accused of cherry-picking evidence, I don’t know enough about the nutrition literature to evaluate whether this is true). And it is certainly consistent with your proposed piecemeal approach. However, in his response, and true to his fabulous 1990 paper in Epidemiology, Greenland seems to shun the error-statistical approach to evaluating observed associations: “But this key [i.e., 'to establish the validity of statistical assumptions without appeals to substantive theories'] is missing in observational epidemiology. Its absence *pulls the plug on frequentist methods*, for those assume the data arise from experiments with adequate control or knowledge of errors (both random and systematic)… hence, there are enduring controversies about whether certain associations even exist, let alone are causal.” I think I agree with Greenland here. Mark, but of course it’s incorrect to allege that error statistical methods only apply in “genuine” statistical experiments. It suffices that statistical models can be used to capture some question or aspect of the process generating the data. The methods are very often used in historical and observational studies; else those areas would be robbed of statistical inquiries. There are “model based” as well as “design based” uses of these methods, for lack of better terms. 
{"url":"http://errorstatistics.com/2012/02/07/when-can-risk-factor-epidemiology-provide-reliable-tests/","timestamp":"2014-04-19T08:17:59Z","content_type":null,"content_length":"63581","record_id":"<urn:uuid:5f86a9dd-8e66-4e4b-b40e-4ffcefca72af>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Value of the constant

September 29th 2009, 04:05 PM, #1 (Junior Member, joined Sep 2008):
For what value of the constant c is the function f continuous on (-inf, inf), where
f(y) = cy + 9 if y ∈ (-inf, 4)
f(y) = cy^2 - 9 if y ∈ (4, inf)?

September 29th 2009, 04:14 PM, #2 (joined May 2009):
You just need to solve the equation that arises when you set the two pieces equal to each other at y = 4.
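For completeness, the worked step (taking the two pieces to meet at y = 4, as the reply suggests):

c(4) + 9 = c(4)^2 - 9
4c + 9 = 16c - 9
18 = 12c
c = 3/2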
{"url":"http://mathhelpforum.com/pre-calculus/105084-value-constant.html","timestamp":"2014-04-18T11:44:10Z","content_type":null,"content_length":"30736","record_id":"<urn:uuid:a54affb5-7bfb-46cb-897a-41ad3fa99d3b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Adaptive On-Line Page Importance Computation An automated web agent visits the web, retrieving pages to perform some processing such as indexing, archiving, site checking, etc., [3,11,24]. The robot uses page links in the retrieved pages to discover new pages. All the pages on the web do not have the same importance. For example, Le Louvre homepage is more important that an unknown person's homepage. Page importance information is very valuable. It is used by search engines to display results in the order of page importance [11]. It is also useful for guiding the refreshing and discovery of pages: important pages should be refreshed more often^ and when crawling for new pages, important pages have to be fetched first [6]. Following some ideas of [16], Page and Brin proposed a notion of page importance based on the link structure of the web [21]. This was then used by Google with a remarkable success. Intuitively, a page is important if there are many important pages pointing to it. This leads to a fixpoint computation by repeatedly multiplying the matrix of links between pages with the vector of the current estimate of page importance until the estimate is stable, i.e., until a fixpoint is reached. The main issue in this context is the size of the web, billions of pages [15,23]. Techniques have been developed to compute page importance efficiently, e.g., [12]. The web is crawled and the link matrix computed and stored. A version of the matrix is then frozen and one separate process computes off-line page importance, which may take hours or days for a very large graph. So, the core of the technology for the off-line algorithms is fast sparse matrix multiplication (in particular by extensive use of parallelism). This is a classical area, e.g., [25]. The algorithm we propose computes the importance of pages on-line, with limited resources, while crawling the web. It can be used to focus crawling to the most interesting pages. Moreover, it is fully integrated in the crawling process, which is important since acquiring web pages is the most costly part of the system. Intuitively speaking, some ``cash'' is initially distributed to each page and each page when it is crawled distributes its current cash equally to all pages it points to. This fact is recorded in the history of the page. The importance of a page is then obtained from the ``credit history'' of the page. The intuition is that the flow of cash through a page is proportional to its importance. It is essential to note that the importance we compute does not assume anything about the selection of pages to visit. If a page ``waits'' for a while before being visited, it accumulates cash and has more to distribute at the next visit. In Section 1 and 2, we present a formal model and we prove the correctness of the algorithm. In practice, the situation is more complex. First, the ranking of result pages by a search engine should be based on other factors than page importance. One may use criteria such as the occurrences of the words from the query and their positions. These are typically criteria from information retrieval [26] that have been used extensively since the first generation of search engines, e.g. [3]. One may also want to bias the ranking of answers based on the interest of users [19,22]. Such interesting aspects are ignored here. On the other hand, we focus on another critical aspect of page importance, the variations of importance when the web changes. The web changes all the time. 
With the off-line algorithm, we need to restart a computation. Although techniques can be used to take into account previous computations, several costly iterations over the entire graph have to be performed by the off-line algorithm. We show how to modify the on-line algorithm to adapt to changes. Intuitively, this is achieved by taking into account only a recent window of the history. Several variants of the adaptive on-line algorithm are presented. A distributed implementation of one of them is actually used by the Xyleme crawlers [27,28]. The algorithms are described using web terminology. However, the technique is applicable in a larger setting to any graph. Furthermore, we believe that distributed versions of the on-line algorithm could be useful in network applications when a link matrix is distributed between various sites. We also mention studies that we conducted with librarians from the French national Library to decide if page importance can be used to detect web sites that should be archived. More precisely, we discuss some experiments and we detail how to use our system to support new criteria of importance, such as site-based importance. An extended abstract of this work was published in [2]. A short and informal presentation of the algorithm is given there. The formal presentation, the details of the results as well as the discussion of the experiments are new. The paper is organized as follows. We first present the model and in particular, recall the definition of importance. In Section 2, we introduce the algorithm focusing on static graphs. In Section 3, we consider different crawling strategies and we move to dynamic graphs, i.e., graphs that are continuously updated like the web. The following section deals with implementation and discusses some experiments. The last section is a conclusion. In this section, we present the formal model. Reading this section is not mandatory for the comprehension of the rest of the paper. We view the World Wide Web as a directed graph connected if, when directed edges are transformed into non directed edges, the resulting graph is connected in the usual sense. A directed graph strongly connected if for all pair of vertices aperiodic if there exists a Let There are several natural ways to encode a graph as a matrix, depending on what property is needed afterwards. For instance, Google [21,19] defines the out-degree 16], Kleinberg proposes to set The basic idea is to define the importance of a page in an inductive way and then compute it using a fixpoint. If the graph contains • If one decides that a page is important if it is pointed by important pages. Then set • A 'random walk' means that we browse the web by following one link at a time, and all outgoing links of a page have equal probability to be chosen. If one decides that a page importance is the probability to read it during a 'random walk' on the web, then set • If one decides that a page is important if it is pointed by important pages or points to important pages. Then set In all cases, this leads to solving by induction an equation of the type Computing the importance of the pages thus corresponds to finding a fixpoint dominant eigenvalue (i.e. which is maximal). Thus, unless 1.1) but several problems may occur: • There might be several solutions. This happens when the vector space corresponding to the maximal eigenvalue has a dimension greater than 1. 
• Even if there is a unique solution, the iteration may fail to converge.

All these cases are completely characterized in the Theorem of Perron-Frobenius that we give next.

Theorem 1.1 (Perron-Frobenius) [ ]

In order to solve the convergence problem, Google [11] uses the following patch (recall Theorem 1.1). Another way to cope with the problem of convergence is to consider the following convergence suite.

The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web, e.g., [18,21,19,22,9]. It is a major issue in practice that the web is not strongly connected. For instance, in the bow-tie [4] vision of the web, the "in" nodes do not branch back to the core of the web. Although the same computation makes sense, it would yield a notion of importance without the desired semantics. Intuitively, the random walk will take us out of the core and would be "trapped" in pages that do not lead back to the core (the "rank sink" according to [21]). So, pages in the core (e.g., the White House homepage) would have a null importance. Hence, enforcing strong connectivity of the graph (by "patches") is more important from a semantic point of view than for mathematical reasons.

In a similar way to Google, we enforce the strong connectivity of the graph by introducing "small" edges. More precisely, in our graph, each node points to a unique virtual page. Conversely, this virtual page points to all other nodes. Our algorithm computes the characteristic vector of this modified matrix. Infinite transition matrices are considered, e.g., in [7] or [18]; in most cases, infinite transition matrices are managed by increasing the size of a known matrix block. Some works also consider a changing Web graph, e.g., an incremental computation of approximations of page importance is proposed in [5]. As far as we know, our algorithm is new. In particular:

• it may start even when a (large) part of the matrix is still unknown,
• it helps deciding which (new) part of the matrix should be acquired (or updated),
• it is integrated in the crawling process,
• it works on-line even while the graph is being updated.

Static graphs: OPIC

We consider in this section the case of a static graph (no update). We describe the algorithm for Google's link matrix. For each page (each node in the graph), we keep two values. We call the first the cash of the page: initially, we distribute some cash to each node, e.g., 1/n each if there are n pages. The second is the (credit) history of the page: the sum of the cash obtained by the page since the start of the algorithm until the last time it was crawled. The cash is typically stored in main memory whereas the history may be stored on disk. When a page is crawled, its cash is added to its history and then distributed equally to the pages it points to. We use two vectors, C (cash) and H (history):

On-line Page Importance Computation

    for each i let C[i] := 1/n ;
    for each i let H[i] := 0 ;
    let G := 0 ;
    do forever
        choose some node i ;
            %% each node is selected infinitely often
        H[i] += C[i];
            %% single disk access per page
        for each child j of i, do C[j] += C[i]/out[i];
            %% Distribution of cash depends on L
        G += C[i];
        C[i] := 0 ;

At each step, an estimate of the importance of any page is available. The only requirement on the choice of the next node is that each node is selected infinitely often (fairness). This is essential since crawling policies are often governed by considerations such as robots exclusion, politeness (avoid rapid-firing), page change rate, focused crawling. As long as the cash of children is stored in main memory, no disk access is necessary to update it. At the time we visit a node (we crawl it), the list of its children is available on the document itself and does not require disk access.
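For concreteness, here is a small self-contained sketch of this loop in Python. It is only meant to illustrate the pseudo-code above: the dictionary-based graph, the default greedy choice of the next node (read the page with the highest cash), and the final normalization of the estimate are choices made for the example, not the distributed implementation described later.

    # Illustrative sketch of OPIC; assumes every node has at least one out-edge
    # (e.g., via the "small" edges to the virtual page) and steps >= 1.
    def opic(children, steps, pick=None):
        """children: dict mapping each node to the list of nodes it points to."""
        nodes = list(children)
        n = len(nodes)
        C = {i: 1.0 / n for i in nodes}    # cash, kept in main memory
        H = {i: 0.0 for i in nodes}        # history: cash received up to the last crawl
        G = 0.0                            # total cash recorded in histories so far
        if pick is None:
            pick = lambda cash: max(cash, key=cash.get)   # greedy selection
        for _ in range(steps):
            i = pick(C)                    # choose some node (fairness is required)
            H[i] += C[i]
            out = children[i]
            share = C[i] / len(out)
            for j in out:
                C[j] += share              # distribute the cash equally to the children
            G += C[i]
            C[i] = 0.0
        # The flow of cash through a page is proportional to its importance,
        # so (H[i] + C[i]) / G is one natural normalized estimate.
        return {i: (H[i] + C[i]) / G for i in nodes}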
Each page has at least one child, thanks to the ``small'' edges that we presented in previous Section (and that points to the virtual page). However, for practical reasons, the cash of the virtual page is not distributed all at once. This issue is in particular related to the discovery of new pages and management of variable sized graphs that we consider later. Definition 2.1 We note One can prove that: Theorem 2.1 Assuming the graph is connected, when To prove this theorem, we use the three following lemmas: Lemma 2.2 The total amount of all cash is constant and equal to the initial value, i.e., for each This is obvious by induction since we only distribute each node cash among the children. The proof by induction is given in appendix. It works by considering two cases: either For this, we must prove that there is By Lemma 2.5, The main advantage of our algorithm is that it allows focused crawling. Because our algorithm is run online and its results are immediately available to the crawler, we use it to focus crawling to the most interesting pages for the users. This is in particular interesting in the context of building a web archive [1], when there are strong requirements (and constraints) on the crawling process. Moreover, since we don't have to store the matrix but only a vector, our algorithm presents the following advantages: 1. It requires less storage resources than standard algorithms. 2. It requires less CPU, memory and disk access than standard algorithms. 3. It is easy to implement. Our algorithm is also well adapted to ``continuous'' crawl strategies. The reason is that storing and maintaining the link matrix during a ``continuous'' crawl of the Web (when pages are refreshed often) is significantly more expensive than for single ``snapshot'' crawl of the Web (when each page is read only once). Indeed, when information about specific pages has to be read and updated frequently, the number of random disk access may become a limiting factor. In our experiment for instance, the crawler was retrieving hundreds of pages per seconds on each PC (see Section 4). However, note that the storage of a link matrix may be useful beyond the computation of page importance. For instance, given a page 14]. Crawling Strategies In this section, we first consider different crawling strategies that impact the convergence of our algorithm. Then, we study how they can be used in the case of a changing graph. Implementations aspects and experiments are considered in the next section. As previously mentioned, the error in our estimate is bounded by the error factor, although this is, strictly speaking, not the error (but an upper bound for it). Now, in principle, one could choose a very bad strategy that would very often select pages with very low cash. (The correctness of the algorithm requires that each page is read infinitely many times but does not require the page selection strategy to be smart.) On the other hand, if we choose nodes with very large cash, the error factor decreases faster. To illustrate, consider three page selection strategies: 1. Random : We choose the next page to crawl randomly with equal probability. (Fairness: for each 2. Greedy : We read next the page with highest cash. This is a greedy way to decrease the value of the error factor. (Fairness: For a strongly connected graph, each page is read infinitely often because it accumulates cash until it is eventually read. See Lemma 6.2 in appendix). 3. 
Cycle : We choose some fixed order and use it to cycle around the set of pages. (Fairness is obvious.) We considered this page selection strategy simply to have a comparison with a systematic strategy. Recall that systematic page selection strategies impose undesired constraints on the crawling of pages. Remark 3.1 (Xyleme ) The strategy for selecting the next page to read used in Xyleme is close to . It is tailored to optimize our knowledge of the web [ ], the interest of clients for some portions of the web, and the refreshing of the most important pages that change often. To get a feeling of how Random and Greedy progress, let us consider some estimates of the values of the error factor for these two page selection strategies. Suppose that at initialization, the total value of the cash of all pages is • Random : The next page to crawl is chosen randomly so its cash is on average • Greedy : A page accumulates cash until it reaches the point where it is read. Let Thus the error factor decreases on average twice faster with Greedy than with Random . We will see with experiments (in Section 4) that, indeed, Greedy converges faster. Moreover, Greedy focuses our resources on the important pages which corresponds to users interest. On these pages, the error factor of greedy Greedy decreases even faster. Consider now a dynamic graph (the case of the web). Pages come and disappear and edges too. Because of the time it takes to crawl the Web (weeks or months), our knowledge of the graph is not perfect. Page importance is now a moving target and we only hope to stay close to it. It is convenient to think of the variable Because the statement of Theorem 2.3 does not impose any condition on the initial state of window. There is a trade-off between precision and adaptability to changes and a critical parameter of the technique is the choice of the size of the window. We next describe (variants of) an algorithm, namely Adaptive OPIC, that compute(s) page importance based on a time window. In Adaptive OPIC, we have to keep some information about the history in a particular time window. We considered the following window policies: • Fixed Window (of size now - • Variable Window (of size • Interpolation (of time In the following, we call measure a pair (Greedy or Random ) and (ii) the window policy that is considered (e.g., Fixed Window or Interpolation). Variable Window is the easiest to implement since we have to maintain, for each page, a fixed number of values. One must be aware that some pages will be read rarely (e.g., once in several months), whereas others will be read perhaps daily. So there are huge variations in the size of histories. For very large histories, it is interesting to use compression techniques, e.g., to group several consecutive measures into one. On the opposite, we have too few measures for very unimportant pages. This has a negative impact on the speed of convergence of the algorithm. By setting a minimum number of measures per page (say 3), experiments show that we obtain better results. See Section 4. It is tailored to use little resources. Indeed, for each page, the history simply consists of two values. This is what we tested on real web data (See Section 4). It is the policy actually used in Xyleme [27,20,28]. It is based on a fixed time window of size When we visit a page and update its history, we estimate the cash that was added to that page in the interval 1, for an intuition of the interpolation. 
We know what was added to its cash between time When the number of nodes increases, the relative difficulty to assign a cash and a history to new nodes highlights some almost philosophical issues about the importance of pages. Consider the definition of importance based on In our system, the scheduling of pages to be read depends mostly on the amount of ``cash'' for each page. The crawling speed gives the total number of pages that we can read for both discovery and refresh. Our page importance architecture allows us to allocate resources between discovery and refresh. For instance, when we want to do more discovery, we proceed as follows: (i) we take some cash from the virtual page and distribute it to pages that were not read yet (ii) we increase the importance of ``small'' edges pointing to the virtual page so that it accumulates more cash. To refresh more pages, we do the opposite. We can also use a similar method to focus the crawl on a subset of interesting pages on the web. For instance, we may use this strategy to focus our crawling on XML pages [27,20]. In some other applications, we may prefer to quickly detect new pages. For instance, we provide to a press agency a 'copy tracker' that helps detecting copies of their News wires over the web. The problem with News pages is that they often last only a few days. In the OPIC algorithm, we process as follows for each link: pages that are suspected to contain news wires (e.g. because the URL contains ``news'') receive some ``extra'' cash. This cash is taken from the (unique) virtual page so that the total value of cash in the system does not change. Other criteria may be used, for instance we are working on the use of the links semantic, e.g. by analyzing words found close to the HTML link anchor. Implementation and experiments We implemented and tested first the standard off-line algorithm for computing page importance, then variants of Adaptive OPIC. We briefly describe some aspects of the implementation. We then report on experiments first on synthetic data, then on a large collection of web pages. Our implementation of the off-line algorithm is standard and will not be discussed here. We implemented a distributed version of Adaptive OPIC that can be parameterized to choose a page selection strategy, a window policy, a window size, etc. Adaptive OPIC runs on a cluster of Linux PCs. The code is in C++. Corba is used for communications between the PCs. Each crawler is in charge of a portion of the pages of the web. The choice of the next page to read by a crawler is performed by a separate module (the Page Scheduler). The split of pages between the various crawlers is made using a hash function Fetch: It obtains the URL of the page, fetches the page from the web and parses it; Money transfers: It distributes the current cash of the page to the pages it points to. For each such page, it uses Records: It updates the history of the page and resets its cash to null. Updating the history requires one disk access. Each crawler also processes the money transfer orders coming from other servers. Communications are asynchronous. It should be observed that for each page crawled, there are only two disk accesses, one to obtain the metadata of the page and one to update the metadata, including the history. Besides that, there are Corba communications (on the local network), and main memory accesses. 
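Before turning to the experiments, the three page selection strategies that they compare can be sketched as follows. This is an illustration only; the actual scheduler also accounts for robots exclusion, politeness and refreshing, as discussed above.

    import itertools
    import random

    def pick_random(cash):
        # "Random": every page has the same probability of being chosen next.
        return random.choice(list(cash))

    def pick_greedy(cash):
        # "Greedy": read next the page with the highest cash.
        return max(cash, key=cash.get)

    def make_pick_cycle(nodes):
        # "Cycle": a fixed order, repeated.
        order = itertools.cycle(nodes)
        return lambda cash: next(order)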
Although we started our experiments with large collection of URLs on the web, synthetic data gave us more flexibility to study various input and output parameters, such as: graph size, graph connectivity, change rates, types of changes, distribution of in-degrees, out-degrees and page importance, importance error, ranking errors. We performed experiments with various synthetic graphs containing dozens of millions of web pages. These experiments showed that the use of very large graphs did not substantially alter the results. For instance, we started with graphs obtained using a Poisson distribution on the average of incoming links, a somewhat simplistic assumption. We then performed experiments with more complex distributions following recent studies of the web graph [4], e.g., with a power distribution 8], but even with significant changes of the graph parameters, the patterns of the results did not change substantially from the simple graph model. So, we then restricted our attention to rather simple graphs of reasonably small size to be able to test extensively, e.g., various page selection strategies, various window sizes, various patterns of changes of the web. In the remaining of this section, we will consider a simple graph model based on the power distribution on incoming edges. Details omitted. The number of nodes is fixed to N = 100 000 nodes. First, we studied the convergence of OPIC for various page selection strategies. We considered Random , Cycle and Greedy . We compared the values of the estimates at different points in the crawl, after where Consider Figure 2. The error is about the same for Greedy and Cycle . This result was expected since previous studies [13] show that given a standard cost model, uniform refresh strategies perform as good as focused refresh. As we also expected, Random performs significantly worse. We also compared these, somewhat artificially, to the off-line algorithm. In the off-line, each iteration of the matrix is a computation on Cycle and Greedy . This is not surprising since the crawl of Cycle corresponds roughly to a biased iteration on the matrix. Now consider Figure 3. The error is measured now only for the top ten percent pages, the interesting ones in practice. For this set of pages, Greedy (that is tailored to important pages) converges faster than the others including the off-line algorithm. We also studied the variance. It is roughly the same for all page selection strategies, e.g., almost no page had a relative error more than twice the mean error. We also considered alternative error measures. For instance, we considered an error weighted with page importance or the error on the relative importance that has been briefly mentioned. We also considered the error in ordering pages when their importance is used to rank query results. All these various error measures lead to no significant difference in the results. As already mentioned, a small window means more reactivity to changes but at the cost of some lack of precision. A series of experiments was conducted to determine how much. To analyze the impact of the size of the window, we use Adaptive OPIC with the Greedy strategy and a Fixed Window of 4 ignoring the Interpolation policy for the moment. The change rate is the number of pages that have their in-degree significantly modified (i.e. divided par two or multiplied by two) during the time of crawling 4. First, note that it performs almost as well as large Variable Window (e.g. We compared different policies for keeping the history. 
In this report, we use again the Greedy strategy. Various window policies may require different resources. To be fair, we chose policies that roughly requested similar amount of resources. Typically, we count for storage the number of measures we store. (Recall that a measure consists of a value for 5 shows the average number of measures used per page in each case. These measures depend for Fixed Window on the crawling speed which was set here to be Figure 5: Storage resources per time window │Window Type and Size │Measures per page│ │Variable Window 8 measures │ 8 │ │Fixed Window 8 months │ 8.4 │ │Improved Fixed Window 4 months│ 6.1 │ │Interpolation 4 months │ 1 │ Now consider Figure 6. It shows that for a similar number of measures, Variable Window performs better than Fixed Window. The problem with Fixed Window is that very few measures are stored for unimportant pages and the convergence is very slow because of errors on such pages. On the other hand, the Improved Fixed Window policy yields significantly better results. The improvement comes indeed from more reliability for unimportant pages. The most noticeable result about the use of windows is that the algorithm with the Interpolation policy outperforms the other variants while consuming less resources. Indeed, the error introduced by the interpolation is negligible. Furthermore, the interpolation seems to avoid some ``noise'' introduced when an old measure is added (or removed) in Adaptive OPIC. In some sense, the interpolation acts as a filter on the sequence of measures. Of course the convergence of all variants of the adaptive algorithms depends on the time window that is used. The excellent behavior of Interpolation convinced us to adopt it for our experiments with crawls of the web. This is considered next. Web data We performed the web experiments using the crawlers of Xyleme [28]. The crawl used the page selection strategy of Xyleme that has been previously mentioned and is related to Greedy . The history was managed using the Interpolation policy. During the test, the number of PCs varied from 2 to 8. Each PC had little disk space and less than 1.5Gb of main memory. Some reasonable estimate of page importance for the most important pages was obtained in a few days, as important pages are read more frequently and discovered sooner than others. The experiments lasted for several months. We discovered one billion URLs; only 400 millions of them were actually read. Note that because of the way we discover pages, these are 400 million relatively important pages. Moreover, we could give reasonable importance estimates even on pages that were never read. This experiment was sufficient (with limited human checking of the results) to conclude that the algorithm could be used in a production environment. Typically, for all practical uses of importance we considered (such as ranking query results or scheduling page refresh), the precision brought by the algorithm is rapidly sufficient. An advantage of the algorithm is also that it rapidly detects the new important pages, so they can be read sooner. A main issue was the selection of the size of the time window. We first fixed it too small which resulted in undesired variations in the importance of some pages. We then used a too large window and the reactivity to changes was too limited. Finally, the window was set to 3 months. This value depends on the crawling speed, which in our case was limited by the network bandwidth. 
Our performance analysis also showed that using our system (Xyleme crawler and Adaptive OPIC), it is possible to, for instance, crawl and compute page importance (as well as maintain this knowledge) for a graph of up to 2 billions pages with only 4 PCs equipped each with 4Gb of main memory and a small disk. In the context of Web Archiving [1], we also conducted experiments to decide if our measures of page importance could be used to select pages of interest for the French national Library. We selected thousand web sites, and 1] to improve the ``automatic'' librarian. During our experiments, we found out that the semantics of links in dynamic pages is (often) not as good as in pages fully written by authors. Links written by authors usually points to more relevant pages. On the other hand, most links in dynamic pages often consist in other (similar) queries to the same database. For instance, forum archives or catalog pages often contain many links that are used to browse through classification. Similarly, we found out that ``internal'' links (links that point to a page on the same web site) are less useful to discover other relevant pages than ``external'' links (links to a page on some other web site). To solve both problems, we are currently working on a notion of site-based importance [1] that consider links between web-sites instead of links between web-pages. We are currently experimenting our algorithm with this new notion of importance per site. We proposed a simple algorithm to implement with limited resources a realistic computation of page importance over a graph as large as the web. We demonstrated both the correctness and usability of the technique. Our algorithm can be used to improve the efficiency of crawling systems since it allows to focus on-line the resources to important pages. It can also be biased to take into account specific fields of interest for the users [1]. More experiments on real data are clearly needed. It would be in particular interesting to test the variants of Adaptive OPIC with web data. However, such tests are quite expensive. To understand more deeply the algorithms, more experiments are being conducted with synthetic data. We are experimenting with various variants of Adaptive OPIC. We believe that better importance estimates can be obtained and are working on that. One issue is the tuning of the algorithms and in particular, the choice of (adaptable) time windows. We are also continuing our experiments on changing graphs and in particular on the estimate of the derivative of the importance. We finally want to analyze more in-depth the impact of various specific graph patterns as done in [17] for the off-line algorithm. We are also working on a precise mathematical analysis of the convergence speed of the various algorithms. The hope is that this analysis will provide us with bounds of the error of the importance, and will also guide us in fixing the size of windows and evaluating the changes in importance. We are also improving the management of newly discovered pages. The algorithm presented here computes page importance that depends on the entire graph by looking at one page at a time independently of the order of visiting the pages. It would be interesting to find other properties of graph nodes that can be computed similarly. We want to thank Luc Segoufin, Laurent Mignet and Tova Milo for discussions on the present work. S. Abiteboul, G. Cobena, J. Masanes, and G. Sedrati. A first experience in archiving the french web. ECDL, 2002. S. 
Abiteboul, M. Preda, and G. Cobena. Computing web page importance without storing the graph of the web (extended abstract). IEEE-CS Data Engineering Bulletin, Volume 25, 2002. Alta vista. Andrei Z. Broder and al. Graph structure in the web. WWW9/Computer Networks, 2000. Steve Chien, Cynthia Dwork, Ravi Kumar, Dan Simon, and D. Sivakumar. Link evolution: Analysis and algorithms. In Workshop on Algorithms and Models for the Web Graph (WAW), 2002. Junghoo Cho, Hector García-Molina, and Lawrence Page. Efficient crawling through URL ordering. Computer Networks and ISDN Systems, 30(1-7):161-172, 1998. Kai Lai Chung. Markov chains with stationary transition probabilities. Springer, 1967. Colin Cooper and Alan M. Frieze. A general model of undirected web graphs. In European Symposium on Algorithms, pages 500-511, 2001. Y. Dong D. Zhang. An efficient algorithm to rank web resources. 9 th International World Wide Web Conference, 2000. F. R. Gantmacher. Applications of the theory of matrices. In Interscience Publishers, pages 64-79, 1959. T. Haveliwala. Efficient computation of pagerank. Technical report, Stanford University, 1999. H. Garcia-Molina J. Cho. Synchronizing a database to improve freshness. SIGMOD, 2000. M.R. Henzinger J. Dean. Finding related pages in the world wide web. 8th International World Wide Web Conference, 1999. A. Broder K. Bharat. Estimating the relative size and overlap of public web search engines. 7th International World Wide Web Conference (WWW7), 1998. Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999. G.V. Meghabghab. Google's web page ranking applied to different topological web graph structures. JASIS 52(9), 2001. Rajeev Motwani and Prabhakar Raghavan. Randomized algorithms. ACM Computing Surveys, 28(1):33-37, 1996. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web, 1998. M. Preda. Data acquisition for an xml warehouse. DEA thesis Paris 7 University, 2000. L. Page S. Brin. The anatomy of a large-scale hypertextual web search engine. WWW7 Conference, Computer Networks 30(1-7), 1998. B. Dom S. Chakrabarti, M. van den Berg. Focused crawling: a new approach to topic-specific web resource discovery. 8th World Wide Web Conference, 1999. L. Giles S. Lawrence. Accessibility and distribution of information on the web. Nature, 1999. Search-engine watch. S. Toledo. Improving the memory-system performance of sparse-matrix vector multiplication. IBM Journal of Research and Development, 41(6):711-??, 1997. C.J. van Rijsbergen. Information retrieval. London, Butterworths, 1979. Lucie Xyleme. A dynamic warehouse for xml data of the web. IEEE Data Engineering Bulletin, 2001. Lemma 5.1 (see lemma ) After each step The proof is by induction. Clearly, the lemma is true at time Each node splits the value by at most Lemma 5.3 By definition of Then, by Lemma 2.3, Let us look at the Its limit is 0 because, when 2.4) and By the previous result, where 1 is the identity matrix (1 in the diagonal and 0 elsewhere). Consider now the decomposition of We can now restrict to the orthogonal space of Now if we use the fact that there is a single fixpoint solution for Google [11] seems to use such a strategy for refreshing pages; Xyleme [28] does. Note that the converse is true in the sense that if the graph is not aperiodic it is always possible to find an Greater values of Gregory Cobena, 2003-02-25, INRIA (Domaine de Voluceau, Rocquencourt BP105, 78153 Le Chesnay), France
{"url":"http://www2003.org/cdrom/papers/refereed/p007/p7-abiteboul.html","timestamp":"2014-04-18T08:12:13Z","content_type":null,"content_length":"120941","record_id":"<urn:uuid:1a2b3d30-6435-469f-b6f4-2bb34d7343f9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Robert's Stochastic thoughts

Just can't let it go. SSRI Meta-analysis meta-addiction

One of the odd things about my reading of Kirsch et al 2008 is that I don't recall a comparison of results with all trials reported to the FDA to results with only published trials. This is odd, since much of the point of the paper is to examine publication bias in the extreme situation of publication decisions being made by for-profit corporations which spend tons of money on advertising. I don't know if it is odd that my reading comprehension is so minimal. Anyway, it seems to me that it is possible to tell which studies were published from table 1, as there are references to published articles next to the protocol numbers. Guessing that studies with such references were published and the others weren't, I create a variable "cite". Why, lo and behold, the average difference between the average improvement according to the Hamilton scale between patients who got the SSRI and patients who got the placebo is greater in the published than the unpublished studies.

. sum dchange if cite==1

    Variable |  Obs       Mean   Std. Dev.         Min   Max
     dchange |   23   3.696522    2.385129   -.1999998   9.4

. sum dchange if cite==0

    Variable |  Obs   Mean   Std. Dev.   Min   Max
     dchange |   12   1.45   1.739033   -1.6   4.3

Is that difference as significant as it looks? Sure is.

. reg dchange cite

     dchange |     Coef.   Std. Err.      t    P>|t|   [95% Conf. Interval]
        cite |  2.246522   .7802415    2.88    0.007    .6591083   3.833935
       _cons |      1.45   .6324977    2.29    0.028    .1631738   2.736826

Now that is what I call publication bias. So how big a difference does it make? I argue at gruesome length below that the right way to conduct the meta-analysis is to first calculate for each study dchange, that is, the average improvement of those who took the SSRI (change) minus the average improvement of those who took the placebo (pchange). This is necessary because different studies may have had different mean improvements for many reasons and different studies had different proportions of patients receiving the SSRI. Thus receiving an SSRI might be correlated with improvements due to characteristics of the study (such as baseline depression). Then I argue it is best to weight with definitely exogenous weights which have nothing to do with the disturbance terms. This is necessary if the sample mean and the variance are not independent, as they aren't for many distributions (the normal is an exception). I think it reasonable to use weights that would give efficient estimates if the true variance of the disturbance terms were the same in all studies. Of course I don't think that, so I assume such estimates are inefficient. However they are unbiased and plenty precise enough. So I think that the estimate of the additional benefit of an SSRI over placebo should be the weighted average of dchange with weights equal to 1/((1/n)+(1/pn)), where n is the number of patients who received the SSRI and pn is the number of patients who received the placebo. If only studies with references are used (published studies, I guess) this gives an estimate of the additional benefit of 3.23. If all 35 studies are used this gives an estimate of 2.64. I conclude that publication bias biases up the estimated benefit of SSRIs by about 0.6 points on the Hamilton scale.
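For anyone who wants to redo the arithmetic, the weighted average I have in mind is just this (a sketch in Python; "studies" stands for the per-protocol records with dchange, n and pn, which I am not reproducing here):

    def weighted_effect(rows):
        # weights 1/((1/n)+(1/pn)): efficient if the disturbance variances were equal across studies
        weights = [1.0 / (1.0 / r["n"] + 1.0 / r["pn"]) for r in rows]
        return sum(w * r["dchange"] for w, r in zip(weights, rows)) / sum(weights)

    # weighted_effect([s for s in studies if s["cite"] == 1])   # published studies only
    # weighted_effect(studies)                                   # all 35 studies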
A simpler approach, which I argue above is invalid, is to calculate the average benefit of all patients who took the SSRI (just multiply the average for a study by the number who took an SSRI, add up, and divide by the total number of patients in all the studies who took an SSRI). This gives means of 10.04 for the SSRI patients and 7.85 for the placebo group, and thus an estimated extra benefit of 2.19. The less precise procedure makes almost as large a difference as the inclusion of unpublished studies.

Finally, Kirsch et al choose to weight observations by the inverse of the estimated variance of the effect in each subsample (2 subsamples per trial: SSRI and placebo). This increases efficiency compared to the simple procedure above, but may introduce bias. In fact it does introduce bias, as demonstrated in posts below. This gives a weighted average improvement of 9.59 with SSRI and 7.81 for placebo, so an added improvement with SSRI of 1.78. In my view, in passing from the publication-biased 3.23 to the final 1.78, only 0.6 of the change is due to removing the publication bias and 0.85 is due to inefficient and biased meta-analysis.

If the subsample of studies with references (I guess published studies) is analyzed with the method of Kirsch et al, the weighted average improvement with SSRI is 9.63 and the weighted average improvement with placebo is 7.37, so the added improvement with SSRI is 2.26. If I have correctly inferred which studies were publicly available before Kirsch et al's FOIA request, I conclude that they would have argued that the effect of SSRIs is not clinically significant based on meta-analysis of only published studies.

Update: I know I am fascinated by medical data, but I don't think the kind non-spammer who sent me this e-mail understands exactly what I like to do with them:

from: medical billing
subject: Certified coders learn up to 35000 per year
RE: FWD:

1 comment:

pj said...
That's very interesting/disturbing. NICE have said that they will be looking with great interest at this study when they perform their next review of the evidence base for treating depression, let's hope they notice/have their attention drawn to the flaws in this study. Might I suggest that a response to the paper might be in order.
{"url":"http://rjwaldmann.blogspot.com/2008/03/just-cant-let-it-go.html","timestamp":"2014-04-18T23:27:52Z","content_type":null,"content_length":"125721","record_id":"<urn:uuid:5dedc4ae-c1fd-43f0-844c-629a545ad50d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]

Returns all the elements in the sorted set at key with a score between min and max (including elements with score equal to min or max). The elements are considered to be ordered from low to high scores.

The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not involve further computation).

The optional LIMIT argument can be used to only get a range of the matching elements (similar to SELECT LIMIT offset, count in SQL). Keep in mind that if offset is large, the sorted set needs to be traversed for offset elements before getting to the elements to return, which can add up to O(N) time complexity.

The optional WITHSCORES argument makes the command return both the element and its score, instead of the element alone. This option is available since Redis 2.0.

Exclusive intervals and infinity

min and max can be -inf and +inf, so that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score.

By default, the interval specified by min and max is closed (inclusive). It is possible to specify an open interval (exclusive) by prefixing the score with the character (. For example:

ZRANGEBYSCORE zset (1 5

Will return all elements with 1 < score <= 5 while:

ZRANGEBYSCORE zset (5 (10

Will return all the elements with 5 < score < 10 (5 and 10 excluded).

Return value

Array reply: list of elements in the specified score range (optionally with their scores).

Examples

redis> ZADD myzset 1 "one"
(integer) 1
redis> ZADD myzset 2 "two"
(integer) 1
redis> ZADD myzset 3 "three"
(integer) 1
redis> ZRANGEBYSCORE myzset -inf +inf
1) "one"
2) "two"
3) "three"
redis> ZRANGEBYSCORE myzset 1 2
1) "one"
2) "two"
redis> ZRANGEBYSCORE myzset (1 2
1) "two"
redis> ZRANGEBYSCORE myzset (1 (2
(empty list or set)
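From a Python client the same queries look roughly as follows (this assumes the redis-py library in its 3.x style and a Redis server on localhost; replies come back as bytes by default):

    import redis

    r = redis.Redis()                                     # assumes a local Redis instance
    r.zadd("myzset", {"one": 1, "two": 2, "three": 3})

    r.zrangebyscore("myzset", "-inf", "+inf")             # [b'one', b'two', b'three']
    r.zrangebyscore("myzset", 1, 2)                       # [b'one', b'two']
    r.zrangebyscore("myzset", "(1", 2)                    # [b'two']  (exclusive lower bound)
    r.zrangebyscore("myzset", 1, 2, start=0, num=1)       # LIMIT 0 1
    r.zrangebyscore("myzset", 1, 3, withscores=True)      # [(b'one', 1.0), (b'two', 2.0), (b'three', 3.0)]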
{"url":"http://redis.io/commands/zrangebyscore","timestamp":"2014-04-20T16:24:04Z","content_type":null,"content_length":"11044","record_id":"<urn:uuid:888af5a6-42e4-4edc-b519-2089cf639374>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Circular motion? You are correct, The T^2 was my mistake. I have the right answer now (8.22 x 10^-2) I didn't check your calculation. Was it just a typo? I do not understand what you mean by just using the right side? there's no v there. I solved for v to use the formula fc = (m2v^2)/r You started with the equation: ac=v^2/r=(4pi^2r)/T^2 What you need (to move to the next step) is v^2/r, which equals (4pi^2r)/T^2. You don't need to know V explicitly: Fc = mac = mv^2/r= m(4pi^2r)/T^2 You could go right to the answer using only r and T, which were given.
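To spell out the algebra behind that last remark: for uniform circular motion the speed is one circumference per period, so

v = 2*pi*r / T, hence v^2 / r = 4*pi^2*r / T^2, and Fc = m*v^2/r = m*(4*pi^2*r)/T^2,

which is why r and T alone are enough to get the answer.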
{"url":"http://www.physicsforums.com/showthread.php?t=309718","timestamp":"2014-04-16T13:55:38Z","content_type":null,"content_length":"42196","record_id":"<urn:uuid:38972731-7515-419c-9872-08f97ff87bf7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
8.8: Problem-Solving Strategies (reprise: Make a Table; Look for a Pattern)

Created by: CK-12

Learning Objectives

At the end of this lesson, students will be able to:

• Read and understand given problem situations.
• Make tables and identify patterns.
• Solve real-world problems using selected strategies as part of a plan.

Terms introduced in this lesson:

• compound interest
• population decrease
• intensity (loudness) of sound

Teaching Strategies and Tips

Remind students of the four-step problem-solving plan.

In Example 1, students must find the compound interest formula on their own by looking for a pattern. Guide them through the first two years until the pattern becomes evident. Have students check their answers to the examples presented in this lesson by finding the growth factor $b$; since $(x \%) A + A = \left ( 1 + \frac{x} {100} \right )A$, this ties in with the lesson Exponential Growth Functions.

Encourage students to construct tables and look for patterns in Review Questions 1-4. Suggest that they only check their work using explicit formulas.

Error Troubleshooting

In Review Questions 1-4, remind students to convert the percents to decimals.
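Additional tip: one way to make the pattern in Example 1 explicit on the board (with $A$ the initial amount and $x \%$ the annual rate):

After 1 year: $\left ( 1 + \frac{x}{100} \right )A$
After 2 years: $\left ( 1 + \frac{x}{100} \right )^2 A$
After $n$ years: $\left ( 1 + \frac{x}{100} \right )^n A$

For instance, 1000 invested at 5% grows to 1000(1.05)^2 = 1102.50 after two years.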
{"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/8.8/","timestamp":"2014-04-18T10:50:15Z","content_type":null,"content_length":"111720","record_id":"<urn:uuid:04246abf-e27c-46c5-9309-aed0caca54ee>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating DC Load Line How exactly does one go about calculating the DC load line for an NPN transistor? Is it the same method for a PNP transistor? I know how to determine the Q-point, but I'm not sure I understand the methodology behind determining the load line. Thank you in advance.
{"url":"http://www.physicsforums.com/showthread.php?t=101693","timestamp":"2014-04-17T18:40:04Z","content_type":null,"content_length":"19336","record_id":"<urn:uuid:71172189-fed4-4d0a-afba-d25ee8c019e1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic Geometry

This is a graduate text on algebraic geometry that provides a quick and fully self-contained development of the fundamentals, including all commutative algebra which is used. A taste of the deeper theory is given: some topics, such as local algebra and ramification theory, are treated in depth. The book culminates with the theory of curves, including the Riemann-Roch theorem, elliptic curves and the zeta function of a curve over a finite field, and the Riemann hypothesis for elliptic curves.

Merchant | Format | Price
Amazon US | Paperback | $29.99 - $42.00
BookByte | Paperback | $98.83

Related Documents
{"url":"http://pdfcast.org/paid/9789810235611","timestamp":"2014-04-19T15:02:57Z","content_type":null,"content_length":"25153","record_id":"<urn:uuid:7b93919e-1677-4ac5-805d-b386ca5277d0>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: passing through (-8,-2) and parallel to the line whose equation is y=-2x+3. what is the point slope? show me step by step please • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/512fbbf1e4b02acc415f77a0","timestamp":"2014-04-16T10:26:39Z","content_type":null,"content_length":"37123","record_id":"<urn:uuid:6c698b5b-6221-49b3-8ecf-87891b8f4338>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Examples of calculations of Turaev-Reshetikhin TQFT of cobordisms with boundaries have genera greater than 1 up vote 6 down vote favorite I am studying Turaev-Reshetikhin TQFT. I describe the definition of the invariant $\tau(M)$ of a cobordism $(M, \partial_{-}M, \partial_{+}M)$ in the previous question breifly. Framings in the definition of Reshetikhin-Turaev TQFT In many papaers or book, the only example of this invariant is just the case of a cylinder over a torus $M=T\times [0,1]$, where $T$ is a torus. In this case a cobordism is$(T \times [0, 1], T, T)$ and the top boundary is parametrized by the identity $\mathbb{id}:T \to T$ and the bottom boundary is parametrized by any homeomorphism $f: T \to T$. I would like to know more examples, especially when $M$ is a cylinder over a closed orientable surface of genus greater than $1$. In this case, what I found difficult is that to find a special ribbon graph and show that it gives a cobordism $(M, \partial_{-}M, \partial_{+}M)$ as a result of surgery and check if the parametrizations of the boundaries are correct. Could you give me examples of such calculations? (ex. $M$ is a cylinder over a genus 2 surface with identity parametrization on top boundary and "non trivial" parametrization on the bottom boundary.) Thank you in advance. add comment 1 Answer active oldest votes I guess this is it for my MO lurking. So anyway, you're interested in seeing example calculations, similar to Turaev IV.5.4, of the action introduced in Turaev IV.5.1, right? This action is also referred to known as the quantum representation of the mapping class group and has been considered from numerous viewpoints including of course TQFTs via quantum groups, the skein theory of the Kauffman bracket, conformal field theory, and geometric quantization. Calculations in the higher genus case grow messy quickly, but the skein theory approach provides an algorithm (cf. Masbaum-Vogel, 3-valent graphs and the Kauffman bracket) for calculating explicitly the representation in a particular natural basis of the vector spaces associated to the boundary surface. This algorithm has been implemented by A'Campo and Masbaum and should be available here -- the site seems to be down at the moment, but leave me an e-mail if you want a copy of the (usually freely available) program. How it works and some example calculations have been explained by A'Campo. Note that all calculations of this program are performed using the skein theory approach, as described in Turaev Ch. XII. Off the top of my head, the only explicit non-torus and up vote 8 non-computer assisted calculations of the quantum representations I remember seeing are for the four-holed sphere (cf. Masbaum, An element of infinite order in TQFT-representations of down vote mapping class groups, Andersen-Masbaum-Ueno, Topological quantum field theory and the Nielsen-Thurston classification of $M(0,4)$, Laszlo, Pauly, Sorger - On the monodromy of the Hitchin accepted connection). By the factorization properties of the TQFT/the quantum representations, this then gives example calculations for all surfaces of genus $g \geq 2$, as the four-holed sphere embeds in such. Now, a different family of examples is provided by complements of links in $S^3$, i.e. 3-manifolds having boundary a disjoint union of tori. Here, in the modular category corresponding to, say $U_q(\mathfrak{sl}_N)$ or the one arising from the HOMFLY polynomial (cf. 
Blanchet - Hecke algebras, modular categories and 3-manifold quantum invariants), knowing the vector in the vector space associated to (a disjoint union of) tori corresponding to the knot complement more or less boils down to understanding the coloured HOMFLY invariants of the link in question, as the vector space of the torus has a basis of handlebodies containing longitudinal (coloured) links, and these link invariants have been considered by several people (in particular in the case $N = 2$). add comment Not the answer you're looking for? Browse other questions tagged gt.geometric-topology knot-theory tqft extended-tqft surgery-theory or ask your own question.
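For orientation, and not as an answer to the higher-genus question itself, it may help to record the torus case explicitly, since higher-genus computations are usually checked against it. The following is a sketch assuming the $U_q(\mathfrak{sl}_2)$ modular category at level $k$, with colours $j = 0, 1, \dots, k$; normalizations and the framing-anomaly correction vary between references, so the formulas should be read modulo such conventions. The vector space of the torus has a basis indexed by the colours, and the mapping cylinders of the standard generators $S, T$ of the mapping class group of the torus act (projectively) by

$$S_{jl} = \sqrt{\frac{2}{k+2}}\,\sin\!\left(\frac{\pi (j+1)(l+1)}{k+2}\right), \qquad T_{jl} = \delta_{jl}\, e^{2\pi i \left(\frac{j(j+2)}{4(k+2)} - \frac{c}{24}\right)}, \qquad c = \frac{3k}{k+2}.$$

The mapping cylinder of any torus homeomorphism is then given by the corresponding word in $S$ and $T$, up to an overall anomaly factor. The genus-$\geq 2$ computations asked about do not reduce to this, which is why the skein-theoretic algorithm mentioned above is the practical route there.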
{"url":"http://mathoverflow.net/questions/99972/examples-of-calculations-of-turaev-reshetikhin-tqft-of-cobordisms-with-boundarie","timestamp":"2014-04-18T23:59:39Z","content_type":null,"content_length":"55576","record_id":"<urn:uuid:2b9ef3f1-e311-44b2-bf6d-e5b978d2ed4a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Relation between partially computable function and complex function

Given a partially computable function, is there an analytic complex function which is equal to it at every point of its domain? Or, under what conditions does a partially computable function correspond to the restriction of an analytic complex function?

computability-theory nt.number-theory complex-analysis ag.algebraic-geometry

c.e. = Cannot Explain? – Tom De Medts May 3 '11 at 12:57

Maybe I'm being dense, but could you define a c.e. function? Or at least write out the name, so I can search for a definition? Given the tags and the question, I guess it's the only reasonable mathematical definition I found when searching for c.e. function: mathproservices.com/presentations/NewOrleans95/cefcndef.html – Jan Jitse Venselaar May 3 '11 at 12:57

I meant of course: "I guess it's not the only reasonable mathematical definition." – Jan Jitse Venselaar May 3 '11 at 13:03

I don't know for sure what a computable function is, but if it is, among other things, a complex-valued function on the natural numbers, and you want to extend it to an entire function on $\mathbb{C}$, then I think you can. There is a theorem in Chapter 15 of Rudin's Real and Complex Analysis that guarantees, for any subset $A$ of $\mathbb{C}$ with no limit points, the existence of an entire function taking prescribed values at each element of $A$ (you can even specify as many derivatives as you want at each point). – Keenan Kidwell May 3 '11 at 13:47

This question should have said: given a computable partial function $\mathbb{N} \to \mathbb{N}$, is there a computable holomorphic extension of it to $\mathbb{C} \to \mathbb{C}$? That would then actually be an interesting question. – Andrej Bauer May 3 '11 at 16:13

1 Answer

Any complex-valued function on $\mathbb{N}$ can be extended to an entire function, so the answer is "yes." This follows from Theorem 15.13 of Rudin's Real and Complex Analysis, which states that for any open set $\Omega$ in $\mathbb{C}$ and any subset $A$ of $\Omega$ without limit points, there exists a holomorphic function on $\Omega$ taking prescribed values at all the points of $A$.
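To make the extension concrete, here is a sketch of one classical interpolation device (not Rudin's proof verbatim, just a standard construction in the same spirit). For $f: \mathbb{N} \to \mathbb{C}$, use the fact that $\frac{\sin \pi z}{\pi(z-n)}$ is entire, equals $(-1)^n$ at $z = n$, and vanishes at every other integer. Set

$$F(z) = \sum_{n=0}^{\infty} (-1)^n f(n)\, \frac{\sin \pi z}{\pi (z - n)}\, e^{c_n (z - n)}.$$

Each summand is entire, and at $z = m \in \mathbb{N}$ only the $n = m$ term survives, giving $F(m) = f(m)$. If the constants $c_n > 0$ grow fast enough relative to $|f(n)|$ (for instance so that $|f(n)|\, e^{-c_n n/2} \le 2^{-n}$), then on any disc $|z| \le R$ the tail terms are dominated by $\frac{2 e^{\pi R}}{\pi n}\, 2^{-n}$, so the series converges uniformly on compact sets and $F$ is entire. In the direction of Andrej Bauer's comment, if $f$ and the $c_n$ are computable, these explicit tail bounds suggest the partial sums yield a computable extension as well, though making that precise would require fixing a notion of computable holomorphic function.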
{"url":"http://mathoverflow.net/questions/63803/relation-between-partially-computable-function-and-complex-function","timestamp":"2014-04-16T16:00:12Z","content_type":null,"content_length":"58144","record_id":"<urn:uuid:96874bee-5c85-4150-a90f-f8d274910b1b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Dynamic Geometry Exploration: Properties of the Midsegment of a Trapezoid tool
Topic: Midsegment of a rectangle?
Related Item: http://mathforum.org/mathtools/tool/15621/

Subject: RE: More on What is a Trapezoid
Author: Annie
Date: Dec 13 2004

On Dec 13 2004, Mathman wrote:
> Again, so far as I know, a figure is one so defined if it has ALL of the properties of the defined figure, "ALL" meaning those necessary and sufficient to fit its formal description. Why not consider the parallelogram as a general trapezoid having one non-parallel side move toward parallelism with the other? Then, as with the definition of a limit, there is no distinction in the limit. This is a finite limit of slope, not such a thing as a circle becoming a line "at infinity."

I tend to agree with you, as it's the argument that makes the most sense to me and allows the most connections between the different quadrilaterals. But the fact remains that many American high school geometry texts use the less inclusive definition: Is it a parallelogram? Yes. Well, then it can't be a trapezoid any more.

So, getting back to Cynthia's most recent questions: does anyone know the history of when or why the less inclusive definition came to be used in schools, at least in the US? Is the more inclusive definition used in other countries in current K-12 textbooks? My geometry textbook collection only dates back to an early 1960s Mary P. Dolciani Geometry text, in which a parallelogram is decidedly NOT a trapezoid. A past thread on this exists in the geometry.pre-college newsgroup, though it doesn't specifically answer Cynthia's question of when this change happened.
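A quick worked check in coordinates (a sketch added for illustration, not part of the original message) shows why the inclusive definition keeps the midsegment theorem uniform. Place a trapezoid with parallel sides of lengths $a$ (bottom) and $b$ (top) at the vertices $(0,0)$, $(a,0)$, $(c+b,h)$, and $(c,h)$. The midpoints of the two legs are $\left(\tfrac{c}{2}, \tfrac{h}{2}\right)$ and $\left(\tfrac{a+c+b}{2}, \tfrac{h}{2}\right)$, so the midsegment is horizontal (parallel to both bases) and has length

$$\frac{a+c+b}{2} - \frac{c}{2} = \frac{a+b}{2}.$$

Setting $b = a$ gives a parallelogram (and $c = 0$ as well gives a rectangle), and the formula returns a midsegment of length $a$, exactly as expected. This is the "limit" argument from the quoted message in formula form: under the inclusive definition, the trapezoid midsegment theorem degenerates gracefully to the parallelogram case instead of needing a separate statement.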
{"url":"http://mathforum.org/mathtools/discuss.html?context=tool&do=r&msg=16598","timestamp":"2014-04-21T10:15:50Z","content_type":null,"content_length":"17898","record_id":"<urn:uuid:92d1a06e-a3a1-4d0a-be3d-420aa6c5a715>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00340-ip-10-147-4-33.ec2.internal.warc.gz"}